Source code base: Android R
0. Preface
The figure below shows the binder software framework before Android 8.0. It depends on the driver device /dev/binder, and the four elements of the binder mechanism are the client, the server, the servicemanager, and the binder driver.
After Android 8.0, binder and vndbinder still follow this framework; the only addition is the driver device /dev/vndbinder.
This article mainly analyzes the servicemanager flow; hwservicemanager will be covered separately later.
2. How servicemanager is built
Let's first look at how this binary is produced:
frameworks/native/cmds/servicemanager/Android.bp
cc_binary {
    name: "servicemanager",
    defaults: ["servicemanager_defaults"],
    init_rc: ["servicemanager.rc"],
    srcs: ["main.cpp"],
}

cc_binary {
    name: "vndservicemanager",
    defaults: ["servicemanager_defaults"],
    init_rc: ["vndservicemanager.rc"],
    vendor: true,
    cflags: [
        "-DVENDORSERVICEMANAGER=1",
    ],
    srcs: ["main.cpp"],
}
From this directory, the same main.cpp is compiled into two binaries, servicemanager and vndservicemanager, each with its own *.rc file.
frameworks/native/cmds/servicemanager/servicemanager.rc
service servicemanager /system/bin/servicemanager
    class core animation
    user system
    group system readproc
    critical
    onrestart restart healthd
    onrestart restart zygote
    onrestart restart audioserver
    onrestart restart media
    onrestart restart surfaceflinger
    onrestart restart inputflinger
    onrestart restart drm
    onrestart restart cameraserver
    onrestart restart keystore
    onrestart restart gatekeeperd
    onrestart restart thermalservice
    writepid /dev/cpuset/system-background/tasks
    shutdown critical
frameworks/native/cmds/servicemanager/vndservicemanager.rc
service vndservicemanager /vendor/bin/vndservicemanager /dev/vndbinder
    class core
    user system
    group system readproc
    writepid /dev/cpuset/system-background/tasks
    shutdown critical
3. servicemanager's main()
frameworks/native/cmds/servicemanager/main.cpp
int main(int argc, char** argv) {
    if (argc > 2) {
        LOG(FATAL) << "usage: " << argv[0] << " [binder driver]";
    }
    // Pick the device from the arguments: binder or vndbinder
    const char* driver = argc == 2 ? argv[1] : "/dev/binder";

    // Initialize the driver device; detailed in section 3.1 below
    sp<ProcessState> ps = ProcessState::initWithDriver(driver);
    // Tell the driver the max thread count, capping servicemanager's threads at 0
    ps->setThreadPoolMaxThreadCount(0);
    // How call violations are handled: as an error or as a fatal
    ps->setCallRestriction(ProcessState::CallRestriction::FATAL_IF_NOT_ONEWAY);

    // The core interfaces all live here
    sp<ServiceManager> manager = new ServiceManager(std::make_unique<Access>());
    // Register servicemanager itself as a special service
    if (!manager->addService("manager", manager, false /*allowIsolated*/, IServiceManager::DUMP_FLAG_PRIORITY_DEFAULT).isOk()) {
        LOG(ERROR) << "Could not self register servicemanager";
    }

    // Save the process's context object
    IPCThreadState::self()->setTheContextObject(manager);
    // Tell the driver that this context is the context manager
    ps->becomeContextManager(nullptr, nullptr);

    // Create a Looper in servicemanager to handle binder messages
    sp<Looper> looper = Looper::prepare(false /*allowNonCallbacks*/);

    // Tell the driver to enter the looper, start listening for driver
    // messages, and install callbacks to handle them
    BinderCallback::setupTo(looper);
    ClientCallbackCallback::setupTo(looper, manager);

    // Start the looper and poll forever; the process only exits if an
    // abnormal condition causes an abort
    while(true) {
        looper->pollAll(-1);
    }

    // should not be reached
    return EXIT_FAILURE;
}
3.1 initWithDriver()
frameworks/native/libs/binder/ProcessState.cpp
sp<ProcessState> ProcessState::initWithDriver(const char* driver)
{
    Mutex::Autolock _l(gProcessMutex);
    if (gProcess != nullptr) {
        // Allow for initWithDriver to be called repeatedly with the same
        // driver.
        if (!strcmp(gProcess->getDriverName().c_str(), driver)) {
            return gProcess;
        }
        LOG_ALWAYS_FATAL("ProcessState was already initialized.");
    }

    if (access(driver, R_OK) == -1) {
        ALOGE("Binder driver %s is unavailable. Using /dev/binder instead.", driver);
        driver = "/dev/binder";
    }

    gProcess = new ProcessState(driver);
    return gProcess;
}
The function's parameter is the driver device name: /dev/binder or /dev/vndbinder.
The rest of the logic is simple. ProcessState manages "process state", and each binder-using process has exactly one instance (the global gProcess). If the instance already exists, the function checks that the driver it opened matches the one being requested; if it does not exist yet, one is created with new.
ProcessState is covered in detail in section 4.
3.2 new ServiceManager()
The core interfaces live in ServiceManager. Its constructor takes a newly created Access object:
frameworks/native/cmds/servicemanager/ServiceManager.cpp
ServiceManager::ServiceManager(std::unique_ptr<Access>&& access) : mAccess(std::move(access)) {
    ...
}
std::move transfers ownership of the unique_ptr<Access> into the mAccess member (the Access object itself is neither copied nor recreated). mAccess is later used to check servicemanager's permissions through SELinux.
3.3 addService()
if (!manager->addService("manager", manager, false /*allowIsolated*/, IServiceManager::DUMP_FLAG_PRIORITY_DEFAULT).isOk()) {
    LOG(ERROR) << "Could not self register servicemanager";
}
When any other service registers with servicemanager, the call goes through IServiceManager and over binder before finally reaching ServiceManager::addService(); here, however, servicemanager registers itself directly on the ServiceManager object.
frameworks/native/cmds/servicemanager/ServiceManager.cpp
Status ServiceManager::addService(const std::string& name, const sp<IBinder>& binder, bool allowIsolated, int32_t dumpPriority) {
    auto ctx = mAccess->getCallingContext();

    // App processes are not allowed to register services
    if (multiuser_get_app_id(ctx.uid) >= AID_APP) {
        return Status::fromExceptionCode(Status::EX_SECURITY);
    }

    // Check whether SELinux allows registration as SELABEL_CTX_ANDROID_SERVICE
    if (!mAccess->canAdd(ctx, name)) {
        return Status::fromExceptionCode(Status::EX_SECURITY);
    }

    // The IBinder passed in must not be nullptr
    if (binder == nullptr) {
        return Status::fromExceptionCode(Status::EX_ILLEGAL_ARGUMENT);
    }

    // The service name must be valid: composed of 0-9, a-z, A-Z, underscores,
    // hyphens, dots, and slashes, and no longer than 127 characters
    if (!isValidServiceName(name)) {
        LOG(ERROR) << "Invalid service name: " << name;
        return Status::fromExceptionCode(Status::EX_ILLEGAL_ARGUMENT);
    }

// This check is compiled out of vndservicemanager (VENDORSERVICEMANAGER=1);
// framework services must meet the VINTF declaration requirements
#ifndef VENDORSERVICEMANAGER
    if (!meetsDeclarationRequirements(binder, name)) {
        // already logged
        return Status::fromExceptionCode(Status::EX_ILLEGAL_ARGUMENT);
    }
#endif // !VENDORSERVICEMANAGER

    // Register a death notification (linkToDeath) to monitor the service's state
    if (binder->remoteBinder() != nullptr && binder->linkToDeath(this) != OK) {
        LOG(ERROR) << "Could not linkToDeath when adding " << name;
        return Status::fromExceptionCode(Status::EX_ILLEGAL_STATE);
    }

    // Add the service to the name-to-service map
    auto entry = mNameToService.emplace(name, Service {
        .binder = binder,
        .allowIsolated = allowIsolated,
        .dumpPriority = dumpPriority,
        .debugPid = ctx.debugPid,
    });

    // If service callbacks were registered for this name, invoke them
    auto it = mNameToRegistrationCallback.find(name);
    if (it != mNameToRegistrationCallback.end()) {
        for (const sp<IServiceCallback>& cb : it->second) {
            entry.first->second.guaranteeClient = true;
            // permission checked in registerForNotifications
            cb->onRegistration(name, binder);
        }
    }

    return Status::ok();
}
3.4 setTheContextObject()
IPCThreadState::self()->setTheContextObject(manager);
ps->becomeContextManager(nullptr, nullptr);
The first line creates the IPCThreadState (if needed) and stores servicemanager in it for later use in transact.
The second line uses the BINDER_SET_CONTEXT_MGR_EXT command to tell the driver who the context manager is:
frameworks/native/libs/binder/ProcessState.cpp
bool ProcessState::becomeContextManager(context_check_func checkFunc, void* userData)
{
    AutoMutex _l(mLock);
    mBinderContextCheckFunc = checkFunc;
    mBinderContextUserData = userData;

    flat_binder_object obj {
        .flags = FLAT_BINDER_FLAG_TXN_SECURITY_CTX,
    };

    int result = ioctl(mDriverFD, BINDER_SET_CONTEXT_MGR_EXT, &obj);

    // fallback to original method
    if (result != 0) {
        android_errorWriteLog(0x534e4554, "121035042");

        int dummy = 0;
        result = ioctl(mDriverFD, BINDER_SET_CONTEXT_MGR, &dummy);
    }

    if (result == -1) {
        mBinderContextCheckFunc = nullptr;
        mBinderContextUserData = nullptr;
        ALOGE("Binder ioctl to become context manager failed: %s\n", strerror(errno));
    }

    return result == 0;
}
This tells the driver to create the context_mgr_node. The driver-side code:
drivers/android/binder.c
static long binder_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
{
    ...
    case BINDER_SET_CONTEXT_MGR_EXT: {
        struct flat_binder_object fbo;

        if (copy_from_user(&fbo, ubuf, sizeof(fbo))) {
            ret = -EINVAL;
            goto err;
        }
        ret = binder_ioctl_set_ctx_mgr(filp, &fbo);
        if (ret)
            goto err;
        break;
    }
    case BINDER_SET_CONTEXT_MGR:
        ret = binder_ioctl_set_ctx_mgr(filp, NULL);
        if (ret)
            goto err;
        break;
    ...
}
The main difference between the two commands is whether a flat_binder_object is supplied; both end up calling binder_ioctl_set_ctx_mgr():
drivers/android/binder.c
static int binder_ioctl_set_ctx_mgr(struct file *filp,
                    struct flat_binder_object *fbo)
{
    int ret = 0;
    /* The process's binder_proc; here it is ServiceManager's binder_proc,
     * created earlier by open("/dev/binder") */
    struct binder_proc *proc = filp->private_data;
    struct binder_context *context = proc->context;
    struct binder_node *new_node;
    kuid_t curr_euid = current_euid(); /* the thread's effective uid */

    mutex_lock(&context->context_mgr_node_lock);
    /* Normally NULL on the first call; if it is non-NULL, a context manager
     * has already been set for this context, so bail out */
    if (context->binder_context_mgr_node) {
        pr_err("BINDER_SET_CONTEXT_MGR already set\n");
        ret = -EBUSY;
        goto out;
    }
    /* Check whether the current process has the SEAndroid permission
     * to register as the Context Manager */
    ret = security_binder_set_context_mgr(proc->tsk);
    if (ret < 0)
        goto out;
    if (uid_valid(context->binder_context_mgr_uid)) {
        /* Compare binder_context_mgr_uid with the current euid;
         * if they differ, report an error */
        if (!uid_eq(context->binder_context_mgr_uid, curr_euid)) {
            pr_err("BINDER_SET_CONTEXT_MGR bad uid %d != %d\n",
                   from_kuid(&init_user_ns, curr_euid),
                   from_kuid(&init_user_ns,
                         context->binder_context_mgr_uid));
            ret = -EPERM;
            goto out;
        }
    } else {
        context->binder_context_mgr_uid = curr_euid;
    }
    /* Create the binder_node object */
    new_node = binder_new_node(proc, fbo);
    if (!new_node) {
        ret = -ENOMEM;
        goto out;
    }
    binder_node_lock(new_node);
    new_node->local_weak_refs++;
    new_node->local_strong_refs++;
    new_node->has_strong_ref = 1;
    new_node->has_weak_ref = 1;
    /* Store the new node in context->binder_context_mgr_node, making it
     * the binder entity that represents serviceManager */
    context->binder_context_mgr_node = new_node;
    binder_node_unlock(new_node);
    binder_put_node(new_node);
out:
    mutex_unlock(&context->context_mgr_node_lock);
    return ret;
}
The flow of binder_ioctl_set_ctx_mgr() is also fairly simple:
- First check whether the current process has the SEAndroid permission to register as the Context Manager.
- If the SELinux check passes, create a dedicated binder_node for the system-wide context manager and increment its strong and weak references.
- Record the new binder_node in context->binder_context_mgr_node, i.e. the context binder node of the ServiceManager process, making it the binder entity that represents serviceManager.
3.5 Looper::prepare()
sp<Looper> looper = Looper::prepare(0 /* opts */);
The detailed code is not listed here; it essentially uses epoll to watch a set of fds.
3.6 BinderCallback::setupTo(looper)
frameworks/native/cmds/servicemanager/main.cpp
class BinderCallback : public LooperCallback {
public:
    static sp<BinderCallback> setupTo(const sp<Looper>& looper) {
        sp<BinderCallback> cb = new BinderCallback;

        int binder_fd = -1;
        // Get the main thread's binder fd and tell the driver to enter the looper
        IPCThreadState::self()->setupPolling(&binder_fd);
        LOG_ALWAYS_FATAL_IF(binder_fd < 0, "Failed to setupPolling: %d", binder_fd);

        // Flush the thread's pending commands to the driver; here that
        // should be BC_ENTER_LOOPER
        IPCThreadState::self()->flushCommands();

        // Add binder_fd to the Looper's epoll set and register the callback,
        // so handleEvent() is invoked on events
        int ret = looper->addFd(binder_fd,
                                Looper::POLL_CALLBACK,
                                Looper::EVENT_INPUT,
                                cb,
                                nullptr /*data*/);
        LOG_ALWAYS_FATAL_IF(ret != 1, "Failed to add binder FD to Looper");

        return cb;
    }

    // Called back when epoll reports an event on this fd
    int handleEvent(int /* fd */, int /* events */, void* /* data */) override {
        IPCThreadState::self()->handlePolledCommands();
        return 1;  // Continue receiving callbacks.
    }
};
In fact, every ordinary service, once created, calls ProcessState::startThreadPool() to produce a main IPC thread, which in turn spawns further IPCThreadState threads via IPCThreadState::joinThreadPool(). servicemanager, however, needs no extra threads, so it simply keeps listening on the main thread with a Looper.
The core job of each IPCThreadState is to listen for and process messages exchanged with the binder driver; all of that happens in getAndExecuteCommand(), detailed in section 5.4.
3.7 ClientCallbackCallback::setupTo(looper, manager)
The exact purpose here is not entirely clear; from the code, a client can call registerClientCallback before addService, so that a successful addService() triggers a callback notification.
As for ClientCallbackCallback itself, it sets up a timer that fires every 5 seconds, which looks like a heartbeat.
3.8 looper->pollAll(-1)
Enter the infinite loop.
4. The ProcessState class
ProcessState manages "process state"; each binder-using process has exactly one instance (the global gProcess). The object is used to:
- initialize the driver device;
- record the driver's name and fd;
- record the upper limit on the process's binder thread count;
- record the binder context object;
- start binder threads.
4.1 ProcessState 構(gòu)造
frameworks/native/libs/binder/ProcessState.cpp
ProcessState::ProcessState(const char *driver)
    : mDriverName(String8(driver))
    , mDriverFD(open_driver(driver))
    , mVMStart(MAP_FAILED)
    , mThreadCountLock(PTHREAD_MUTEX_INITIALIZER)
    , mThreadCountDecrement(PTHREAD_COND_INITIALIZER)
    , mExecutingThreadsCount(0)
    , mMaxThreads(DEFAULT_MAX_BINDER_THREADS)
    , mStarvationStartTimeMs(0)
    , mBinderContextCheckFunc(nullptr)
    , mBinderContextUserData(nullptr)
    , mThreadPoolStarted(false)
    , mThreadPoolSeq(1)
    , mCallRestriction(CallRestriction::NONE)
{
// TODO(b/139016109): enforce in build system
#if defined(__ANDROID_APEX__)
    LOG_ALWAYS_FATAL("Cannot use libbinder in APEX (only system.img libbinder) since it is not stable.");
#endif

    if (mDriverFD >= 0) {
        // mmap the binder, providing a chunk of virtual address space to receive transactions.
        mVMStart = mmap(nullptr, BINDER_VM_SIZE, PROT_READ, MAP_PRIVATE | MAP_NORESERVE, mDriverFD, 0);
        if (mVMStart == MAP_FAILED) {
            // *sigh*
            ALOGE("Using %s failed: unable to mmap transaction memory.\n", mDriverName.c_str());
            close(mDriverFD);
            mDriverFD = -1;
            mDriverName.clear();
        }
    }

#ifdef __ANDROID__
    LOG_ALWAYS_FATAL_IF(mDriverFD < 0, "Binder driver '%s' could not be opened. Terminating.", driver);
#endif
}
This function mainly does the following:
In the initializer list, it opens the device driver via open_driver(); see section 4.3 below.
If the open succeeds and mDriverFD is valid, mmap() maps a buffer of BINDER_VM_SIZE bytes for receiving transaction data.
The size can be confirmed from the command line. Assuming servicemanager's PID is 510, then:
cat /proc/510/maps shows:
748c323000-748c421000 r--p 00000000 00:1f 4 /dev/binderfs/binder
Don't be surprised that the path isn't /dev/binder; it is just a symlink:
lrwxrwxrwx 1 root root 20 1970-01-01 05:43 binder -> /dev/binderfs/binder
lrwxrwxrwx 1 root root 22 1970-01-01 05:43 hwbinder -> /dev/binderfs/hwbinder
lrwxrwxrwx 1 root root 22 1970-01-01 05:43 vndbinder -> /dev/binderfs/vndbinder
4.2 The ProcessState singleton
The ProcessState object is obtained via self(). Since vndbinder and binder share the same code, a process that wants vndbinder must call initWithDriver() with the device name before the first call to self(). If self() is called directly, the singleton is bound to the driver device kDefaultDriver.
frameworks/native/libs/binder/ProcessState.cpp
#ifdef __ANDROID_VNDK__
const char* kDefaultDriver = "/dev/vndbinder";
#else
const char* kDefaultDriver = "/dev/binder";
#endif

sp<ProcessState> ProcessState::self()
{
    Mutex::Autolock _l(gProcessMutex);
    if (gProcess != nullptr) {
        return gProcess;
    }
    gProcess = new ProcessState(kDefaultDriver);
    return gProcess;
}
4.3 open_driver()
For both the binder and vndbinder devices, the ProcessState constructor calls open_driver() in its initializer list to open and initialize the device.
frameworks/native/libs/binder/ProcessState.cpp
static int open_driver(const char *driver)
{
    // Open the device via the open() syscall
    int fd = open(driver, O_RDWR | O_CLOEXEC);
    if (fd >= 0) {
        int vers = 0;
        // If the open succeeded, query whether the binder version matches
        status_t result = ioctl(fd, BINDER_VERSION, &vers);
        if (result == -1) {
            ALOGE("Binder ioctl to obtain version failed: %s", strerror(errno));
            close(fd);
            fd = -1;
        }
        // If the version doesn't match, why is there still no early return here?
        if (result != 0 || vers != BINDER_CURRENT_PROTOCOL_VERSION) {
            ALOGE("Binder driver protocol(%d) does not match user space protocol(%d)! ioctl() return value: %d",
                  vers, BINDER_CURRENT_PROTOCOL_VERSION, result);
            close(fd);
            fd = -1;
        }
        size_t maxThreads = DEFAULT_MAX_BINDER_THREADS;
        // With a matching version, tell the driver the maximum thread count
        result = ioctl(fd, BINDER_SET_MAX_THREADS, &maxThreads);
        if (result == -1) {
            ALOGE("Binder ioctl to set max threads failed: %s", strerror(errno));
        }
    } else {
        ALOGW("Opening '%s' failed: %s\n", driver, strerror(errno));
    }
    return fd;
}
Note that the default maximum number of binder threads each process creates is DEFAULT_MAX_BINDER_THREADS:
frameworks/native/libs/binder/ProcessState.cpp
#define DEFAULT_MAX_BINDER_THREADS 15
For servicemanager, main() sets this value to 0, i.e. servicemanager works directly on its main thread; ordinary services are limited to at most 15 binder threads. The driver side is analyzed in detail later.
As a supplement: the system_server process raises its maximum binder thread count to 31:
frameworks/base/services/java/com/android/server/SystemServer.java
private static final int sMaxBinderThreads = 31;

private void run() {
    ...
    BinderInternal.setMaxThreads(sMaxBinderThreads);
    ...
}
4.4 makeBinderThreadName()
frameworks/native/libs/binder/ProcessState.cpp
String8 ProcessState::makeBinderThreadName() {
    int32_t s = android_atomic_add(1, &mThreadPoolSeq);
    pid_t pid = getpid();
    String8 name;
    name.appendFormat("Binder:%d_%X", pid, s);
    return name;
}
This function generates binder thread names. The variable mThreadPoolSeq controls the ordering and count, and the resulting names look like Binder:1234_F.
The maximum number of threads is bounded, as mentioned above, by DEFAULT_MAX_BINDER_THREADS (15 by default).
The limit can also be set with setThreadPoolMaxThreadCount(); servicemanager uses that function to set max threads to 0, see section 3.
4.4.1 setThreadPoolMaxThreadCount()
frameworks/native/libs/binder/ProcessState.cpp
status_t ProcessState::setThreadPoolMaxThreadCount(size_t maxThreads) {
    status_t result = NO_ERROR;
    if (ioctl(mDriverFD, BINDER_SET_MAX_THREADS, &maxThreads) != -1) {
        mMaxThreads = maxThreads;
    } else {
        result = -errno;
        ALOGE("Binder ioctl to set max threads failed: %s", strerror(-result));
    }
    return result;
}
As noted in section 4.3, when ProcessState is constructed, open_driver() already reports the default maximum binder thread count (15) to the binder driver.
A process can override the maximum through ProcessState: for example, system_server sets it to 31, and servicemanager sets it to 0.
4.5 startThreadPool()
frameworks/native/libs/binder/ProcessState.cpp
void ProcessState::startThreadPool()
{
    AutoMutex _l(mLock);
    if (!mThreadPoolStarted) {
        mThreadPoolStarted = true;
        spawnPooledThread(true);
    }
}
Every process that communicates over binder must call this function.
It can be considered the starting point and a mandatory step of binder communication, for roughly two reasons:
- Each process's binder communication keeps a single ProcessState instance guarded by the mThreadPoolStarted flag; any later binder thread spawning goes through spawnPooledThread(), which requires mThreadPoolStarted to be true.
- From the binder driver's perspective, every process needs exactly one main binder thread; all other binder threads are non-main threads.
4.6 spawnPooledThread()
frameworks/native/libs/binder/ProcessState.cpp
void ProcessState::spawnPooledThread(bool isMain)
{
    if (mThreadPoolStarted) {
        String8 name = makeBinderThreadName();
        ALOGV("Spawning new pooled thread, name=%s\n", name.string());
        sp<Thread> t = new PoolThread(isMain);
        t->run(name.string());
    }
}
This spawns a new binder thread. PoolThread inherits from Thread, and the binder name is passed to run(), so a thread stack dump shows which binder thread is which. Every binder thread is managed through an IPCThreadState:
frameworks/native/libs/binder/ProcessState.cpp
class PoolThread : public Thread
{
public:
    explicit PoolThread(bool isMain)
        : mIsMain(isMain)
    {
    }

protected:
    virtual bool threadLoop()
    {
        IPCThreadState::self()->joinThreadPool(mIsMain);
        return false;
    }

    const bool mIsMain;
};
5. The IPCThreadState class
Complementing ProcessState, each process has many threads whose "thread state" must be tracked. After every BINDER_WRITE_READ call, the driver decides whether the process should spawn another thread; each PoolThread created (see ProcessState) comes paired with an IPCThreadState that manages it, and everything a binder thread does goes through its IPCThreadState.
5.1 The IPCThreadState constructor
frameworks/native/libs/binder/IPCThreadState.cpp
IPCThreadState::IPCThreadState()
    : mProcess(ProcessState::self()),
      mServingStackPointer(nullptr),
      mWorkSource(kUnsetWorkSource),
      mPropagateWorkSource(false),
      mStrictModePolicy(0),
      mLastTransactionBinderFlags(0),
      mCallRestriction(mProcess->mCallRestriction)
{
    pthread_setspecific(gTLS, this);
    clearCaller();
    mIn.setDataCapacity(256);
    mOut.setDataCapacity(256);
}
5.2 self()
frameworks/native/libs/binder/IPCThreadState.cpp
IPCThreadState* IPCThreadState::self()
{
    if (gHaveTLS.load(std::memory_order_acquire)) {
restart:
        const pthread_key_t k = gTLS;
        IPCThreadState* st = (IPCThreadState*)pthread_getspecific(k);
        if (st) return st;
        return new IPCThreadState;
    }

    // Racey, heuristic test for simultaneous shutdown.
    if (gShutdown.load(std::memory_order_relaxed)) {
        ALOGW("Calling IPCThreadState::self() during shutdown is dangerous, expect a crash.\n");
        return nullptr;
    }

    pthread_mutex_lock(&gTLSMutex);
    if (!gHaveTLS.load(std::memory_order_relaxed)) {
        int key_create_value = pthread_key_create(&gTLS, threadDestructor);
        if (key_create_value != 0) {
            pthread_mutex_unlock(&gTLSMutex);
            ALOGW("IPCThreadState::self() unable to create TLS key, expect a crash: %s\n",
                  strerror(key_create_value));
            return nullptr;
        }
        gHaveTLS.store(true, std::memory_order_release);
    }
    pthread_mutex_unlock(&gTLSMutex);
    goto restart;
}
Whenever a thread needs its IPCThreadState, pthread_getspecific() checks whether one has already been created in the thread's TLS; if so it is returned, otherwise a new one is constructed.
5.3 setupPolling()
frameworks/native/libs/binder/IPCThreadState.cpp
int IPCThreadState::setupPolling(int* fd)
{
    if (mProcess->mDriverFD < 0) {
        return -EBADF;
    }

    mOut.writeInt32(BC_ENTER_LOOPER);
    *fd = mProcess->mDriverFD;
    return 0;
}
This does two things: it queues BC_ENTER_LOOPER (actually delivered to the driver by the subsequent flushCommands()) to announce that the thread is entering its loop, and it returns the driver fd.
5.4 getAndExecuteCommand()
frameworks/native/libs/binder/IPCThreadState.cpp
status_t IPCThreadState::getAndExecuteCommand()
{
    status_t result;
    int32_t cmd;

    // Step 1: talk to the binder driver and wait for it to return
    result = talkWithDriver();
    if (result >= NO_ERROR) {
        size_t IN = mIn.dataAvail();
        if (IN < sizeof(int32_t)) return result;
        // Step 2: parse the reply command returned by the binder driver
        cmd = mIn.readInt32();
        // Step 3: track the number of executing binder threads.
        // system_server feeds a watchdog; when the executing count reaches
        // the maximum, a monitor can block until enough threads free up
        pthread_mutex_lock(&mProcess->mThreadCountLock);
        mProcess->mExecutingThreadsCount++;
        if (mProcess->mExecutingThreadsCount >= mProcess->mMaxThreads &&
            mProcess->mStarvationStartTimeMs == 0) {
            mProcess->mStarvationStartTimeMs = uptimeMillis();
        }
        pthread_mutex_unlock(&mProcess->mThreadCountLock);

        // Step 4: the core userspace handler; dispatch on the reply command
        result = executeCommand(cmd);

        // Step 5: after executeCommand(), each thread decrements the count
        // and broadcasts on the condition variable
        pthread_mutex_lock(&mProcess->mThreadCountLock);
        mProcess->mExecutingThreadsCount--;
        if (mProcess->mExecutingThreadsCount < mProcess->mMaxThreads &&
            mProcess->mStarvationStartTimeMs != 0) {
            int64_t starvationTimeMs = uptimeMillis() - mProcess->mStarvationStartTimeMs;
            if (starvationTimeMs > 100) {
                ALOGE("binder thread pool (%zu threads) starved for %" PRId64 " ms",
                      mProcess->mMaxThreads, starvationTimeMs);
            }
            mProcess->mStarvationStartTimeMs = 0;
        }
        pthread_cond_broadcast(&mProcess->mThreadCountDecrement);
        pthread_mutex_unlock(&mProcess->mThreadCountLock);
    }
    return result;
}
The logic is fairly simple and falls into three parts:
- talkWithDriver() exchanges data with the binder driver and checks whether the return value indicates an error;
- the executing thread count is tracked (system_server uses it to feed its watchdog);
- executeCommand() performs the core processing.
The two key functions are more involved; a brief look at each follows.
5.4.1 talkWithDriver()
frameworks/native/libs/binder/IPCThreadState.cpp
status_t IPCThreadState::talkWithDriver(bool doReceive)
{
    if (mProcess->mDriverFD < 0) {
        return -EBADF;
    }

    binder_write_read bwr;

    // Is the read buffer empty?
    const bool needRead = mIn.dataPosition() >= mIn.dataSize();

    // We don't want to write anything if we are still reading
    // from data left in the input buffer and the caller
    // has requested to read the next data.
    const size_t outAvail = (!doReceive || needRead) ? mOut.dataSize() : 0;

    bwr.write_size = outAvail;
    bwr.write_buffer = (uintptr_t)mOut.data();

    // This is what we'll read.
    if (doReceive && needRead) {
        bwr.read_size = mIn.dataCapacity();
        bwr.read_buffer = (uintptr_t)mIn.data();
    } else {
        bwr.read_size = 0;
        bwr.read_buffer = 0;
    }
    ...
    // Return immediately if there is nothing to do.
    if ((bwr.write_size == 0) && (bwr.read_size == 0)) return NO_ERROR;

    bwr.write_consumed = 0;
    bwr.read_consumed = 0;
    status_t err;
    do {
        if (ioctl(mProcess->mDriverFD, BINDER_WRITE_READ, &bwr) >= 0)
            err = NO_ERROR;
        else
            err = -errno;
        if (mProcess->mDriverFD < 0) {
            err = -EBADF;
        }
    } while (err == -EINTR);

    if (err >= NO_ERROR) {
        if (bwr.write_consumed > 0) {
            if (bwr.write_consumed < mOut.dataSize())
                LOG_ALWAYS_FATAL(...);
            else {
                mOut.setDataSize(0);
                processPostWriteDerefs();
            }
        }
        if (bwr.read_consumed > 0) {
            mIn.setDataSize(bwr.read_consumed);
            mIn.setDataPosition(0);
        }
        return NO_ERROR;
    }
    return err;
}
IPCThreadState holds two Parcels: mIn, for data read from the driver, and mOut, for data written to the driver.
The core is the do...while loop, which exchanges data with the driver via the BINDER_WRITE_READ command; unless the ioctl is interrupted by a signal, the loop completes once processing is done and the function returns.
The driver-side handling of BINDER_WRITE_READ is analyzed in detail later.
5.4.2 executeCommand()
This is the core of binder thread processing: it handles the results that come back from talkWithDriver():
frameworks/native/libs/binder/IPCThreadState.cpp
status_t IPCThreadState::executeCommand(int32_t cmd)
{
    BBinder* obj;
    RefBase::weakref_type* refs;
    status_t result = NO_ERROR;

    switch ((uint32_t)cmd) {
    case BR_ERROR:
        result = mIn.readInt32();
        break;
    case BR_OK:
        break;
    case BR_ACQUIRE:
        ...
        break;
    case BR_RELEASE:
        ...
        break;
    case BR_INCREFS:
        ...
        break;
    case BR_DECREFS:
        ...
        break;
    case BR_ATTEMPT_ACQUIRE:
        ...
        break;
    case BR_TRANSACTION_SEC_CTX:
    case BR_TRANSACTION:
        ...
        break;
    case BR_DEAD_BINDER:
        ...
    case BR_CLEAR_DEATH_NOTIFICATION_DONE:
        ...
    case BR_FINISHED:
        result = TIMED_OUT;
        break;
    case BR_NOOP:
        break;
    case BR_SPAWN_LOOPER:
        mProcess->spawnPooledThread(false);
        break;
    default:
        ALOGE("*** BAD COMMAND %d received from Binder driver\n", cmd);
        result = UNKNOWN_ERROR;
        break;
    }

    if (result != NO_ERROR) {
        mLastError = result;
    }
    return result;
}
This function is not analyzed further here; it will be dissected in detail in a later article on the native client-server flow.
At this point the servicemanager startup flow is complete. The basic steps are:
- Based on the command-line argument, choose the binder or the vndbinder device;
- Open and initialize the device driver via ProcessState::initWithDriver(), and set the process's maximum binder thread count to 0;
- Instantiate ServiceManager and register it, as a special service, into ServiceManager's mNameToService map;
- Store this special context object in IPCThreadState, and notify the driver via ProcessState that it is the context manager;
- Through BinderCallback, notify the driver that servicemanager is ready by entering BC_ENTER_LOOPER;
- Add the driver device fd to the Looper's epoll monitoring, with handleEvent() as the callback;
- In handleEvent(), process the polled commands and handle all messages.