
Android Binder Study (3): An Analysis of defaultServiceManager()



  As in the earlier posts, this article follows the order of the function calls. Here we look at how the ServiceManager proxy object is obtained, using the mediaserver process as the example. The starting point is mediaserver's main() function; as you can see below, this process registers quite a few multimedia-related services.

int main(int argc __unused, char** argv)
{
    ......
    sp<ProcessState> proc(ProcessState::self());
    sp<IServiceManager> sm = defaultServiceManager(); /* obtain the ServiceManager proxy object */
    ALOGI("ServiceManager: %p", sm.get());
    AudioFlinger::instantiate();
    MediaPlayerService::instantiate();
    CameraService::instantiate();     /* literally "create an instance", but it also performs an addService() internally */
    AudioPolicyService::instantiate();
    SoundTriggerHwService::instantiate();
    registerExtensions(); /* still empty in Android 5.1 */
    ProcessState::self()->startThreadPool(); /* start the thread pool */
    IPCThreadState::self()->joinThreadPool(); /* the main thread joins the thread pool */
}


  We will use the Camera service as the running example. In Android 5.1 the camera service still lives inside mediaserver (in the newer Android 7.0 it has been split out of media into its own server). This post focuses on how the ServiceManager proxy object is created; the addService() call that later registers the camera service through that proxy will be covered in a follow-up post.
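
For orientation, XxxService::instantiate() comes from the BinderService<> template. Roughly speaking (this is a sketch, not the exact framework code, and details differ between Android versions), it boils down to an addService() call on the ServiceManager proxy obtained above:

// Sketch only: what CameraService::instantiate() roughly expands to via
// BinderService<CameraService>::publish(). Names follow AOSP conventions,
// but treat the details as an approximation.
static void instantiate_sketch()
{
    sp<IServiceManager> sm = defaultServiceManager();
    sm->addService(String16("media.camera"),   // CameraService::getServiceName()
                   new CameraService());
}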

1. ProcessState::self()

sp<ProcessState> ProcessState::self()
{
    Mutex::Autolock _l(gProcessMutex);
    if (gProcess != NULL) {
        return gProcess;
    }
    gProcess = new ProcessState;
    return gProcess;
}


  Every process that uses Binder IPC owns one ProcessState object describing its process-wide Binder state. ProcessState is a singleton, which is exactly what the self() method above implements. gProcess is a global smart pointer defined in Static.cpp; a new ProcessState is created only if gProcess is still NULL. (At first I wondered whether a global variable like this would cause trouble when several processes access it; it does not, because every process has its own copy of gProcess in its own address space — on a 32-bit system each process has its own 4 GB of virtual memory.)

In short: calling ProcessState::self() guarantees that the global smart pointer gProcess points to a valid ProcessState object, so later IPC can proceed normally. Nothing here seems to touch the binder kernel driver yet; that becomes clear once we look at the constructor below.

2. The ProcessState constructor

ProcessState::ProcessState()
    : mDriverFD(open_driver()) /* note: the binder kernel device file is quietly opened right here */
    , mVMStart(MAP_FAILED)
    , mManagesContexts(false)
    , mBinderContextCheckFunc(NULL)
    , mBinderContextUserData(NULL)
    , mThreadPoolStarted(false)
    , mThreadPoolSeq(1)
{
    .......
#if !defined(HAVE_WIN32_IPC)
    // mmap the binder, providing a chunk of virtual address space to receive transactions.
    // The header defines #define BINDER_VM_SIZE ((1*1024*1024) - (4096 *2)), so the call below
    // maps a (1MB - 8KB) buffer that mediaserver's IPC will use.
    mVMStart = mmap(0, BINDER_VM_SIZE, PROT_READ, MAP_PRIVATE | MAP_NORESERVE, mDriverFD, 0);
    // Some error-checking code is omitted here; see the source for details......
#else
    mDriverFD = -1;
#endif
}


  So when the ProcessState object is constructed, the constructor quietly opens the binder device and maps a (1MB - 8KB) buffer for mediaserver's IPC. Concretely:

1. open_driver() opens the /dev/binder device file; inside the kernel, the driver creates a binder_proc object for mediaserver and adds it to the global binder_procs list.

2. mmap(): we will not dwell on this here; it was already covered in the earlier post on how servicemanager becomes the context manager.

3. What ProcessState::open_driver() does

static int open_driver()
{
    int fd = open("/dev/binder", O_RDWR);
    if (fd >= 0) {
        fcntl(fd, F_SETFD, FD_CLOEXEC);
        int vers = 0;
        status_t result = ioctl(fd, BINDER_VERSION, &vers);
        //......
        if (result != 0 || vers != BINDER_CURRENT_PROTOCOL_VERSION) {
            ALOGE("Binder driver protocol does not match user space protocol!");
            close(fd);
            fd = -1;
        }
        size_t maxThreads = 15;
        result = ioctl(fd, BINDER_SET_MAX_THREADS, &maxThreads); /* set the thread-pool limit to 15 threads */
        if (result == -1) {
            ALOGE("Binder ioctl to set max threads failed: %s", strerror(errno));
        }
    } else {
        ALOGW("Opening '/dev/binder' failed: %s\n", strerror(errno));
    }
    return fd;
}

/*----------------- kernel driver ------------------*/
binder_ioctl()
{
    case BINDER_SET_MAX_THREADS:
        if (copy_from_user(&proc->max_threads, ubuf, sizeof(proc->max_threads))) {
            ret = -EINVAL;
            goto err;
        }
}


Looking straight at the code, it mainly does three things (a minimal user-space sketch of these steps follows the list):

1. Quietly opens the /dev/binder device node and obtains a file descriptor.

2. Queries the driver with the BINDER_VERSION ioctl and checks the result against BINDER_CURRENT_PROTOCOL_VERSION.

3. Sets the maximum number of binder threads for the mediaserver process to 15.
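
Putting sections 2 and 3 together, a minimal standalone sketch of what ProcessState does against the driver might look like the following. This is illustrative only: the UAPI header path and the error handling are assumptions, and real code should go through ProcessState rather than talk to /dev/binder directly.

// Sketch: open /dev/binder, check the protocol version, set the thread-pool
// limit, and map the receive buffer -- the same steps ProcessState performs.
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>
#include <linux/android/binder.h>   // assumed UAPI header path; may differ between trees

#define BINDER_VM_SIZE ((1 * 1024 * 1024) - (4096 * 2))   // 1MB - 8KB, as in ProcessState.cpp

int main()
{
    int fd = open("/dev/binder", O_RDWR | O_CLOEXEC);
    if (fd < 0) {
        fprintf(stderr, "open /dev/binder: %s\n", strerror(errno));
        return 1;
    }

    int vers = 0;
    if (ioctl(fd, BINDER_VERSION, &vers) < 0 ||
        vers != BINDER_CURRENT_PROTOCOL_VERSION) {
        fprintf(stderr, "binder protocol mismatch\n");
        close(fd);
        return 1;
    }

    size_t maxThreads = 15;
    ioctl(fd, BINDER_SET_MAX_THREADS, &maxThreads);   // same call ProcessState makes

    void* vmStart = mmap(0, BINDER_VM_SIZE, PROT_READ,
                         MAP_PRIVATE | MAP_NORESERVE, fd, 0);
    if (vmStart == MAP_FAILED) {
        fprintf(stderr, "mmap binder: %s\n", strerror(errno));
        close(fd);
        return 1;
    }

    printf("binder fd=%d, protocol version=%d\n", fd, vers);

    munmap(vmStart, BINDER_VM_SIZE);
    close(fd);
    return 0;
}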

4. The defaultServiceManager() function

sp<IServiceManager> defaultServiceManager()
{
    if (gDefaultServiceManager != NULL) return gDefaultServiceManager;

    {
        AutoMutex _l(gDefaultServiceManagerLock);
        while (gDefaultServiceManager == NULL) {
            gDefaultServiceManager = interface_cast<IServiceManager>(
                ProcessState::self()->getContextObject(NULL));
            if (gDefaultServiceManager == NULL)
                sleep(1);
        }
    }

    return gDefaultServiceManager;
}


  This is the code that creates the ServiceManager proxy object. gDefaultServiceManager is another global smart pointer (again a singleton) that points to an IServiceManager; since BpServiceManager derives from IServiceManager, it is perfectly fine for the base-class pointer to end up pointing at the derived object. Also note the argument passed to getContextObject(): it is NULL — keep that in mind! A sketch of what interface_cast<> expands to is shown below; after that we come back to getContextObject().
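
interface_cast<IServiceManager>() is a thin template from IInterface.h. Roughly (this is a sketch; the macro-generated code in AOSP is a little more involved), it forwards to IServiceManager::asInterface(), which wraps the raw IBinder in a BpServiceManager when the binder is a remote proxy:

// Sketch of what interface_cast<> boils down to (simplified from IInterface.h;
// treat the details as an approximation of the IMPLEMENT_META_INTERFACE output).
template<typename INTERFACE>
inline sp<INTERFACE> interface_cast(const sp<IBinder>& obj)
{
    return INTERFACE::asInterface(obj);
}

// IMPLEMENT_META_INTERFACE(ServiceManager, "android.os.IServiceManager")
// generates asInterface() roughly like this:
sp<IServiceManager> IServiceManager::asInterface(const sp<IBinder>& obj)
{
    sp<IServiceManager> intr;
    if (obj != NULL) {
        // Local binder living in this process? Then use it directly.
        intr = static_cast<IServiceManager*>(
            obj->queryLocalInterface(IServiceManager::descriptor).get());
        if (intr == NULL) {
            // Remote binder: wrap the BpBinder(0) in a BpServiceManager proxy.
            intr = new BpServiceManager(obj);
        }
    }
    return intr;
}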

ProcessState::self()->getContextObject(NULL)

sp<IBinder> ProcessState::getContextObject(const sp<IBinder>& /*caller*/)
{
    return getStrongProxyForHandle(0);
}


  Although we passed NULL in, the function does not use the argument at all; it simply calls getStrongProxyForHandle() with a hard-coded 0 (handle 0 always denotes the ServiceManager). Let us go into getStrongProxyForHandle() next.

5. ProcessState::getStrongProxyForHandle()

sp<IBinder> ProcessState::getStrongProxyForHandle(int32_t handle)
{
    sp<IBinder> result;
    AutoMutex _l(mLock);
    handle_entry* e = lookupHandleLocked(handle); /* note that handle is 0 here; let us jump into lookupHandleLocked() first */

    if (e != NULL) { /* e is never NULL here: if no entry existed, lookupHandleLocked() created a fresh one, as the next code block shows */
        // We need to create a new BpBinder if there isn't currently one, OR we
        // are unable to acquire a weak reference on this current one.  See comment
        // in getWeakProxyForHandle() for more info about this.
        IBinder* b = e->binder;
        if (b == NULL || !e->refs->attemptIncWeak(this)) { /* we know from above that b == NULL, so we go in */
            if (handle == 0) { /* only the ServiceManager handle takes this path */
                // Special case for context manager...
                // The context manager is the only object for which we create
                // a BpBinder proxy without already holding a reference.
                // Perform a dummy transaction to ensure the context manager
                // is registered before we create the first local reference
                // to it (which will occur when creating the BpBinder).
                // If a local reference is created for the BpBinder when the
                // context manager is not present, the driver will fail to
                // provide a reference to the context manager, but the
                // driver API does not return status.
                //
                // Note that this is not race-free if the context manager
                // dies while this code runs.
                //
                // TODO: add a driver API to wait for context manager, or
                // stop special casing handle 0 for context manager and add
                // a driver API to get a handle to the context manager with
                // proper reference counting.

                Parcel data; /* the Parcel class is introduced later; binder IPC is built around it */
                status_t status = IPCThreadState::self()->transact( /* see below; pay attention to the arguments passed in */
                    0, IBinder::PING_TRANSACTION, data, NULL, 0);
                if (status == DEAD_OBJECT)
                    return NULL;
            }

            b = new BpBinder(handle);
            e->binder = b;
            if (b) e->refs = b->getWeakRefs();
            result = b;
        } else {
            // This little bit of nastyness is to allow us to add a primary
            // reference to the remote proxy when this team doesn't have one
            // but another team is sending the handle to us.
            result.force_set(b);
            e->refs->decWeak(this);
        }
    }

    return result;
}


The function mainly does the following:

1. Looks up the handle_entry for the given handle (0 here) and checks whether its binder field is still NULL.

2. If the binder field is NULL, it creates a new BpBinder for that handle. Remember that a handle is the descriptor of a binder reference object: through it the kernel can find the corresponding binder_ref, and through that the binder_node it points to. Handle 0 refers to servicemanager's binder_node.

6. ProcessState::lookupHandleLocked()

ProcessState::handle_entry* ProcessState::lookupHandleLocked(int32_t handle)
{   /* the handle passed in above is 0 */
    /* ProcessState internally maintains a Vector<handle_entry> mHandleToObject */
    const size_t N = mHandleToObject.size(); /* current size of mHandleToObject; nothing has been added yet, so this is 0 */
    if (N <= (size_t)handle) { /* true here */
        handle_entry e;
        e.binder = NULL; /* note that both members start out as NULL */
        e.refs = NULL;
        status_t err = mHandleToObject.insertAt(e, N, handle+1-N);
        if (err < NO_ERROR) return NULL;
    }
    return &mHandleToObject.editItemAt(handle); /* return the handle_entry we just created */
}


  As the code shows, ProcessState maintains an mHandleToObject vector that stores the proxy entries this process uses; since a process may talk to many services, it may hold many proxies. Remember that the ServiceManager "proxy" entry always sits at index 0 of mHandleToObject; indices 1, 2, ... hold the proxies of other services (a small sketch of this layout follows).
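
The original post illustrated mHandleToObject with a figure; as a substitute, here is a hypothetical snapshot (the handle_entry definition matches ProcessState.h; the entries at index 1 and 2 are made up purely for illustration):

// From ProcessState.h:
struct handle_entry {
    IBinder* binder;              // the BpBinder proxy for this handle, once created
    RefBase::weakref_type* refs;  // its weak-reference bookkeeping
};

// Hypothetical contents of mHandleToObject in mediaserver after a few lookups:
//
//   index 0 : binder = BpBinder(0)  -> ServiceManager (always handle 0)
//   index 1 : binder = BpBinder(1)  -> some other service proxy (illustrative)
//   index 2 : binder = NULL         -> entry reserved, proxy not created yet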



7. IPCThreadState::self()

static pthread_mutex_t gTLSMutex = PTHREAD_MUTEX_INITIALIZER;
static bool gHaveTLS = false;
static pthread_key_t gTLS = 0;
static bool gShutdown = false;
static bool gDisableBackgroundScheduling = false;

IPCThreadState* IPCThreadState::self()
{
    if (gHaveTLS) { /* false on the first call */
restart:
        const pthread_key_t k = gTLS;
        IPCThreadState* st = (IPCThreadState*)pthread_getspecific(k); // fetch this thread's instance
        if (st) return st;
        return new IPCThreadState;
    }

    if (gShutdown) return NULL;

    pthread_mutex_lock(&gTLSMutex);
    if (!gHaveTLS) {
        if (pthread_key_create(&gTLS, threadDestructor) != 0) { // create the pthread key
            pthread_mutex_unlock(&gTLSMutex);
            return NULL;
        }
        gHaveTLS = true; // the thread-local storage slot now exists
    }
    pthread_mutex_unlock(&gTLSMutex);
    goto restart;
}

IPCThreadState::IPCThreadState()
    : mProcess(ProcessState::self()), // grab this process's ProcessState object
      mMyThreadId(androidGetTid()),   // current thread id
      mStrictModePolicy(0),
      mLastTransactionBinderFlags(0)
{
    pthread_setspecific(gTLS, this); // store this IPCThreadState in thread-local storage for later calls
    clearCaller();
    mIn.setDataCapacity(256);  // give the input and output buffers 256 bytes each
    mOut.setDataCapacity(256);
}


  This introduces thread-local storage (TLS): a variable that every function called on one thread can access, but that other threads cannot. If you want more background, see the blog post I reposted earlier on the topic. A minimal sketch of the pattern follows.
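
As a quick refresher, the pattern IPCThreadState::self() uses looks roughly like this minimal sketch (hypothetical class name, not framework code):

// Minimal sketch of the pthread TLS pattern used by IPCThreadState::self():
// one key per process, one object per thread, created lazily.
#include <pthread.h>

class PerThreadState {                     // hypothetical stand-in for IPCThreadState
public:
    static PerThreadState* self() {
        pthread_once(&sKeyOnce, makeKey);  // create the key exactly once per process
        PerThreadState* st =
            static_cast<PerThreadState*>(pthread_getspecific(sKey));
        if (st == nullptr) {
            st = new PerThreadState();     // first call on this thread
            pthread_setspecific(sKey, st); // remember it for later calls on the same thread
        }
        return st;
    }

private:
    static void makeKey() { pthread_key_create(&sKey, destroy); }
    static void destroy(void* p) { delete static_cast<PerThreadState*>(p); }

    static pthread_key_t sKey;
    static pthread_once_t sKeyOnce;
};

pthread_key_t  PerThreadState::sKey;
pthread_once_t PerThreadState::sKeyOnce = PTHREAD_ONCE_INIT;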

8. IPCThreadState::transact()

status_t IPCThreadState::transact(int32_t handle,
                                  uint32_t code, const Parcel& data,
                                  Parcel* reply, uint32_t flags)
{   // the caller above passed: (0, IBinder::PING_TRANSACTION, data, NULL, 0)
    // flags  = 0
    // handle = 0
    // code   = IBinder::PING_TRANSACTION (think of the network "ping" command)
    status_t err = data.errorCheck();

    flags |= TF_ACCEPT_FDS;
    if (err == NO_ERROR) {
        err = writeTransactionData(BC_TRANSACTION, flags, handle, code, data, NULL); // package the transaction; see the function below
    }
    if (err != NO_ERROR) {
        if (reply) reply->setError(err);
        return (mLastError = err);
    }

    if ((flags & TF_ONE_WAY) == 0) {
        .......
        if (reply) {
            err = waitForResponse(reply);
        } else {
            Parcel fakeReply;
            err = waitForResponse(&fakeReply);
        }
        // some logging omitted; see the source if interested............
    } else {
        err = waitForResponse(NULL, NULL);
    }

    return err;
}
//----------------------- a quick refresher --------------------------------
enum transaction_flags {
    TF_ONE_WAY     = 0x01, /* this is a one-way call: async, no return */
    TF_ROOT_OBJECT = 0x04, /* contents are the component's root object */
    TF_STATUS_CODE = 0x08, /* contents are a 32-bit status code */
    TF_ACCEPT_FDS  = 0x10, /* allow replies with file descriptors */
};


  This function is the heart of the communication. Note that every process that uses binder has IPCThreadState objects (one per thread, as we just saw). At a high level it does two things:

1. Sends the transaction data to the servicemanager kernel node (handle = 0, so the destination is servicemanager).

2. Having sent the request, the caller has to wait for the answer; that is what waitForResponse() is for.

  The transaction flags are listed at the end of the code above for reference. Here flags ends up as TF_ACCEPT_FDS, which tells the driver that the reply coming back is allowed to carry file descriptors (and binder reference descriptors). On to the code.

9. IPCThreadState::writeTransactionData()

status_t IPCThreadState::writeTransactionData(int32_t cmd, uint32_t binderFlags,
    int32_t handle, uint32_t code, const Parcel& data, status_t* statusBuffer)
{
    binder_transaction_data tr;

    tr.target.ptr = 0; /* Don't pass uninitialized stack data to a remote process */
    tr.target.handle = handle; // remember: handle = 0 here
    tr.code = code;            // IBinder::PING_TRANSACTION
    tr.flags = binderFlags;    // flags = TF_ACCEPT_FDS
    tr.cookie = 0;
    tr.sender_pid = 0;
    tr.sender_euid = 0;

    const status_t err = data.errorCheck();
    if (err == NO_ERROR) {
        tr.data_size = data.ipcDataSize();   // data_size = 0
        tr.data.ptr.buffer = data.ipcData(); // buffer = NULL
        tr.offsets_size = data.ipcObjectsCount()*sizeof(binder_size_t); // 0
        tr.data.ptr.offsets = data.ipcObjects(); // 0
    } else if (statusBuffer) {
        tr.flags |= TF_STATUS_CODE;
        *statusBuffer = err;
        tr.data_size = sizeof(status_t);
        tr.data.ptr.buffer = reinterpret_cast<uintptr_t>(statusBuffer);
        tr.offsets_size = 0;
        tr.data.ptr.offsets = 0;
    } else {
        return (mLastError = err);
    }

    mOut.writeInt32(cmd);
    mOut.write(&tr, sizeof(tr));

    return NO_ERROR;
}


  As its name says, this function writes out transaction data. Since at this point we are only requesting the ServiceManager proxy, the Parcel data argument is an empty local variable. The Parcel is described by a binder_transaction_data object; then the BC_TRANSACTION command is written into mOut, immediately followed by that binder_transaction_data object tr. So by now the data has been wrapped in layers:

1. The Parcel data passed in as a parameter (its size is 0 only in this special case of obtaining the ServiceManager proxy).

2. The binder_transaction_data object tr, which records the address and size of the Parcel's buffer, plus the number of binder objects it contains.

3. The packaged result is written into the Parcel object mOut.

At this point mOut contains, back to back: a 4-byte BC_TRANSACTION command word, followed by the binder_transaction_data tr with target.handle = 0, code = PING_TRANSACTION, flags = TF_ACCEPT_FDS, and data_size = offsets_size = 0.

10. IPCThreadState::waitForResponse()

status_t IPCThreadState::waitForResponse(Parcel *reply, status_t *acquireResult)
{
int32_t cmd;
int32_t err;

while (1) {
if ((err=talkWithDriver()) < NO_ERROR) break; // let us look at talkWithDriver() first
err = mIn.errorCheck();
if (err < NO_ERROR) break;
if (mIn.dataAvail() == 0) continue;

cmd = mIn.readInt32();

IF_LOG_COMMANDS() {
alog << "Processing waitForResponse Command: "
<< getReturnString(cmd) << endl;
}

switch (cmd) {
case BR_TRANSACTION_COMPLETE:
if (!reply && !acquireResult) goto finish;
break;

case BR_DEAD_REPLY:
err = DEAD_OBJECT;
goto finish;

case BR_FAILED_REPLY:
err = FAILED_TRANSACTION;
goto finish;

case BR_ACQUIRE_RESULT:
{
ALOG_ASSERT(acquireResult != NULL, "Unexpected brACQUIRE_RESULT");
const int32_t result = mIn.readInt32();
if (!acquireResult) continue;
*acquireResult = result ? NO_ERROR : INVALID_OPERATION;
}
goto finish;

case BR_REPLY:
{
binder_transaction_data tr;
err = mIn.read(&tr, sizeof(tr));
ALOG_ASSERT(err == NO_ERROR, "Not enough command data for brREPLY");
if (err != NO_ERROR) goto finish;

if (reply) {
if ((tr.flags & TF_STATUS_CODE) == 0) {
reply->ipcSetDataReference(
reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
tr.data_size,
reinterpret_cast<const binder_size_t*>(tr.data.ptr.offsets),
tr.offsets_size/sizeof(binder_size_t),
freeBuffer, this);
} else {
err = *reinterpret_cast<const status_t*>(tr.data.ptr.buffer);
freeBuffer(NULL,
reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
tr.data_size,
reinterpret_cast<const binder_size_t*>(tr.data.ptr.offsets),
tr.offsets_size/sizeof(binder_size_t), this);
}
} else {
freeBuffer(NULL,
reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
tr.data_size,
reinterpret_cast<const binder_size_t*>(tr.data.ptr.offsets),
tr.offsets_size/sizeof(binder_size_t), this);
continue;
}
}
goto finish;

default:
err = executeCommand(cmd);
if (err != NO_ERROR) goto finish;
break;
}
}

finish:
if (err != NO_ERROR) {
if (acquireResult) *acquireResult = err;
if (reply) reply->setError(err);
mLastError = err;
}

return err;
}


The wait can be a long one, and as the code shows a lot happens while waiting. Let us walk through it step by step.

11. IPCThreadState::talkWithDriver()

status_t IPCThreadState::talkWithDriver(bool doReceive)
{
    if (mProcess->mDriverFD <= 0) {
        return -EBADF;
    }

    binder_write_read bwr; // a local variable: this structure is the messenger between user space and kernel space

    // Is the read buffer empty?
    const bool needRead = mIn.dataPosition() >= mIn.dataSize(); // recall that mIn and mOut were given a 256-byte capacity in the IPCThreadState constructor; this is false here

    // We don't want to write anything if we are still reading
    // from data left in the input buffer and the caller
    // has requested to read the next data.
    const size_t outAvail = (!doReceive || needRead) ? mOut.dataSize() : 0; // doReceive defaults to true, so here
                                                                            // outAvail = mOut.dataSize()
    bwr.write_size = outAvail;
    bwr.write_buffer = (uintptr_t)mOut.data(); // remember: so far the transaction has only been placed into mOut; nothing has actually reached the kernel yet

    // This is what we'll read.
    if (doReceive && needRead) { // false here
        bwr.read_size = mIn.dataCapacity();
        bwr.read_buffer = (uintptr_t)mIn.data();
    } else {
        bwr.read_size = 0;
        bwr.read_buffer = 0;
    }
    // some logging omitted here
    // Return immediately if there is nothing to do.
    if ((bwr.write_size == 0) && (bwr.read_size == 0)) return NO_ERROR;

    bwr.write_consumed = 0;
    bwr.read_consumed = 0;
    status_t err;
    do {
#if defined(HAVE_ANDROID_OS)
        if (ioctl(mProcess->mDriverFD, BINDER_WRITE_READ, &bwr) >= 0) // this system call hands the data to the kernel; see the analysis below
            err = NO_ERROR;
        else
            err = -errno;
#else
        err = INVALID_OPERATION;
#endif
        if (mProcess->mDriverFD <= 0) {
            err = -EBADF;
        }

    } while (err == -EINTR);
    // some logging removed

    if (err >= NO_ERROR) {
        if (bwr.write_consumed > 0) {
            if (bwr.write_consumed < mOut.dataSize())
                mOut.remove(0, bwr.write_consumed);
            else
                mOut.setDataSize(0);
        }
        if (bwr.read_consumed > 0) {
            mIn.setDataSize(bwr.read_consumed);
            mIn.setDataPosition(0);
        }
        // some logging removed
        return NO_ERROR;
    }

    return err;
}


This is where the data is actually delivered to the driver. The function does the following:

1. Based on the outcome of the previous talkWithDriver() call, decides whether this call should perform a write, a read, or both.

2. Packs the request into a binder_write_read structure bwr and hands it to the kernel binder driver via the BINDER_WRITE_READ ioctl (a small sketch of this structure follows).
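
For reference, binder_write_read is a small kernel UAPI structure. A sketch of how talkWithDriver() fills it for the pure-write case might look like this (illustrative only; the header path is an assumption):

#include <stdint.h>
#include <sys/ioctl.h>
#include <linux/android/binder.h>   // assumed UAPI header path: struct binder_write_read, BINDER_WRITE_READ

// Sketch: one write-only BINDER_WRITE_READ round trip -- the shape of what
// talkWithDriver() does with mOut when there is nothing to read yet.
static int write_only_sketch(int binderFd, const void* outData, size_t outSize)
{
    struct binder_write_read bwr;
    bwr.write_buffer   = (uintptr_t)outData; // commands for the driver (e.g. BC_TRANSACTION + binder_transaction_data)
    bwr.write_size     = outSize;
    bwr.write_consumed = 0;
    bwr.read_buffer    = 0;                  // read side disabled in this sketch
    bwr.read_size      = 0;
    bwr.read_consumed  = 0;
    return ioctl(binderFd, BINDER_WRITE_READ, &bwr); // the driver consumes write_buffer and updates write_consumed
}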

What the kernel does once it receives this data is shown in the code analysis below.

12. binder_ioctl()

static long binder_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
{
    switch (cmd) {
    case BINDER_WRITE_READ: { // this is our command
        if (bwr.write_size > 0) {
            ret = binder_thread_write(proc, thread, (void __user *)bwr.write_buffer, bwr.write_size, &bwr.write_consumed);
        }
        if (copy_to_user(ubuf, &bwr, sizeof(bwr))) {
            ret = -EFAULT;
            goto err;
        }
        break;
    }

int binder_thread_write(struct binder_proc *proc, struct binder_thread *thread,
                        void __user *buffer, int size, signed long *consumed)
{
    while (ptr < end && thread->return_error == BR_OK) {
        if (get_user(cmd, (uint32_t __user *)ptr))
            return -EFAULT;
        ptr += sizeof(uint32_t);
        switch (cmd) {
        case BC_TRANSACTION: // the command we just sent down is BC_TRANSACTION
        case BC_REPLY: {
            struct binder_transaction_data tr;

            if (copy_from_user(&tr, ptr, sizeof(tr))) // peel the payload back out; recall the order in which it was packed
                return -EFAULT;
            ptr += sizeof(tr);
            binder_transaction(proc, thread, &tr, cmd == BC_REPLY); // our cmd is BC_TRANSACTION, so reply == false
            break;
        }
    }
}


  The code above peels the transaction data out of the write buffer and hands the rest of the work to binder_transaction(). Do not forget that our goal here is still to obtain a proxy for the ServiceManager.

13. binder_transaction()

static void binder_transaction(struct binder_proc *proc,
struct binder_thread *thread,
struct binder_transaction_data *tr, int reply)
{
struct binder_transaction *t;
struct binder_work *tcomplete;
size_t *offp, *off_end;
struct binder_proc *target_proc;
struct binder_thread *target_thread = NULL;
struct binder_node *target_node = NULL;
struct list_head *target_list;
wait_queue_head_t *target_wait;
struct binder_transaction *in_reply_to = NULL;
struct binder_transaction_log_entry *e;
uint32_t return_error;

//some logging and code paths that never execute here have been removed
if (tr->target.handle) { //remember that our handle is 0
struct binder_ref *ref;
ref = binder_get_ref(proc, tr->target.handle);//look up the binder reference object for the given handle
if (ref == NULL) {
binder_user_error("binder: %d:%d got "
"transaction to invalid handle\n",
proc->pid, thread->pid);
return_error = BR_FAILED_REPLY;
goto err_invalid_target_handle;
}
target_node = ref->node;
} else { //with handle == 0, execution takes this branch
target_node = binder_context_mgr_node; //the global context manager: servicemanager's binder node
if (target_node == NULL) {
return_error = BR_DEAD_REPLY;
goto err_no_context_mgr_node;
}
}
e->to_node = target_node->debug_id;
target_proc = target_node->proc; //the binder_proc object of the servicemanager process
//a large amount of code omitted here.........

}
if (target_thread) { //target_thread is still NULL at this point
e->to_thread = target_thread->pid;
target_list = &target_thread->todo;
target_wait = &target_thread->wait;
} else {
target_list = &target_proc->todo;//the target process's todo list; the transaction work item is queued here below
target_wait = &target_proc->wait;//the target's wait queue, needed later to wake it up
}

/* TODO: reuse incoming transaction for reply */
t = kzalloc(sizeof(*t), GFP_KERNEL);
if (t == NULL) {
return_error = BR_FAILED_REPLY;
goto err_alloc_t_failed;
}
binder_stats_created(BINDER_STAT_TRANSACTION);

tcomplete = kzalloc(sizeof(*tcomplete), GFP_KERNEL);
if (tcomplete == NULL) {
return_error = BR_FAILED_REPLY;
goto err_alloc_tcomplete_failed;
}
binder_stats_created(BINDER_STAT_TRANSACTION_COMPLETE);

t->debug_id = ++binder_last_id;

if (!reply && !(tr->flags & TF_ONE_WAY))
t->from = thread; //record which thread issued the transaction
else
t->from = NULL;
t->sender_euid = proc->tsk->cred->euid;
t->to_proc = target_proc; //the target is the ServiceManager process; later posts come back to this
t->to_thread = target_thread; //target thread
t->code = tr->code; //the command code, IBinder::PING_TRANSACTION here
t->flags = tr->flags;
t->priority = task_nice(current);

trace_binder_transaction(reply, t, target_node);

t->buffer = binder_alloc_buf(target_proc, tr->data_size, //allocate the transaction buffer from the target's mmap'ed area
tr->offsets_size, !reply && (t->flags & TF_ONE_WAY));
if (t->buffer == NULL) {
return_error = BR_FAILED_REPLY;
goto err_binder_alloc_buf_failed;
}
t->buffer->allow_user_free = 0; //user space may not free it yet
t->buffer->debug_id = t->debug_id;
t->buffer->transaction = t;  //which transaction this binder_buffer belongs to
t->buffer->target_node = target_node; //which service this request targets
if (target_node)
binder_inc_node(target_node, 1, 0, NULL); //bump the reference count of the target binder node, servicemanager's node here

offp = (size_t *)(t->buffer->data + ALIGN(tr->data_size, sizeof(void *)));

if (copy_from_user(t->buffer->data, tr->data.ptr.buffer, tr->data_size)) {
......
}
if (copy_from_user(offp, tr->data.ptr.offsets, tr->offsets_size)) {
.......
}
......
off_end = (void *)offp + tr->offsets_size;
for (; offp < off_end; offp++) { //walk every flat_binder_object in the buffer; our buffer is empty here, so the cases below never trigger and are abridged
struct flat_binder_object *fp;
if (*offp > t->buffer->data_size - sizeof(*fp) ||
t->buffer->data_size < sizeof(*fp) ||
!IS_ALIGNED(*offp, sizeof(void *))) {
binder_user_error("binder: %d:%d got transaction with "
"invalid offset, %zd\n",
proc->pid, thread->pid, *offp);
return_error = BR_FAILED_REPLY;
goto err_bad_offset;
}
fp = (struct flat_binder_object *)(t->buffer->data + *offp);
switch (fp->type) {
case BINDER_TYPE_BINDER:
case BINDER_TYPE_WEAK_BINDER: {} break;
case BINDER_TYPE_HANDLE:
case BINDER_TYPE_WEAK_HANDLE: {} break;

case BINDER_TYPE_FD: {} break;

default:
binder_user_error("binder: %d:%d got transactio"
"n with invalid object type, %lx\n",
proc->pid, thread->pid, fp->type);
return_error = BR_FAILED_REPLY;
goto err_bad_object_type;
}
}
if (reply) { //reply == 0 in our case
BUG_ON(t->buffer->async_transaction != 0);
binder_pop_transaction(target_thread, in_reply_to);
} else if (!(t->flags & TF_ONE_WAY)) {
BUG_ON(t->buffer->async_transaction != 0);
t->need_reply = 1; //1: a synchronous transaction that expects a reply; 0: an asynchronous one that does not
t->from_parent = thread->transaction_stack;//this transaction depends on whatever is already on the current thread's transaction stack
thread->transaction_stack = t;//push it onto the current thread's transaction stack
} else {
BUG_ON(target_node == NULL);
BUG_ON(t->buffer->async_transaction != 1);
if (target_node->has_async_transaction) {
target_list = &target_node->async_todo;
target_wait = NULL;
} else
target_node->has_async_transaction = 1;
}
t->work.type = BINDER_WORK_TRANSACTION;
list_add_tail(&t->work.entry, target_list); //queue transaction t on the target's todo list
tcomplete->type = BINDER_WORK_TRANSACTION_COMPLETE;
list_add_tail(&tcomplete->entry, &thread->todo);//queue the TRANSACTION_COMPLETE work item on the sending thread's own todo list
if (target_wait)
wake_up_interruptible(target_wait); //wake up the servicemanager process
return;
//error-handling and cleanup code omitted; see the source for details
}


  The code above unpacks the incoming transaction and builds a new transaction to be delivered to the ServiceManager process. The main steps are:

1. Use the handle to find the target binder_node and, from it, the process the data is destined for, i.e. target_proc.

2. Using that target binder node and binder_proc, fill in the binder_transaction that will be delivered to ServiceManager.

3. Set the work type to BINDER_WORK_TRANSACTION and add the transaction to the target's todo list (a BINDER_WORK_TRANSACTION_COMPLETE item also goes onto the sender's own todo list).

4. Wake up the ServiceManager process so it can handle the transaction.

When ServiceManager is woken up, it resumes inside the kernel, in the loop where it last went to sleep waiting for work; read the code below carefully.

14. binder_thread_read()

static int binder_thread_read()
{
......
ret = wait_event_freezable_exclusive(proc->wait, binder_has_proc_work(proc, thread)); //servicemanager was asleep here; the wake-up above resumes it from this wait
......
if (ret)
return ret;
......
while (1) {
if (!list_empty(&thread->todo))
w = list_first_entry(&thread->todo, struct binder_work, entry);
else if (!list_empty(&proc->todo) && wait_for_proc_work)
w = list_first_entry(&proc->todo, struct binder_work, entry);
......
switch (w->type) {
case BINDER_WORK_TRANSACTION: {
t = container_of(w, struct binder_transaction, work);
} break;
......
}
if (t->buffer->target_node) {
struct binder_node *target_node = t->buffer->target_node;
tr.target.ptr = target_node->ptr; //target_node is servicemanager's binder_node, so this is 0
tr.cookie =  target_node->cookie; //again 0, since the target is servicemanager
t->saved_priority = task_nice(current);
if (t->priority < target_node->min_priority &&
!(t->flags & TF_ONE_WAY))
binder_set_nice(t->priority);
else if (!(t->flags & TF_ONE_WAY) ||
t->saved_priority > target_node->min_priority)
binder_set_nice(target_node->min_priority);
cmd = BR_TRANSACTION; //the command word user space will switch on when it parses its read buffer
} else {
tr.target.ptr = NULL;
tr.cookie = NULL;
cmd = BR_REPLY;
}
tr.code = t->code; //still the PING_TRANSACTION code
tr.flags = t->flags;
tr.sender_euid = t->sender_euid;

if (t->from) {
struct task_struct *sender = t->from->proc->tsk;
tr.sender_pid = task_tgid_nr_ns(sender,
current->nsproxy->pid_ns);
} else {
tr.sender_pid = 0;
}

tr.data_size = t->buffer->data_size;
tr.offsets_size = t->buffer->offsets_size;
tr.data.ptr.buffer = (void *)t->buffer->data +
proc->user_buffer_offset;
tr.data.ptr.offsets = tr.data.ptr.buffer +
ALIGN(t->buffer->data_size,
sizeof(void *));

if (put_user(cmd, (uint32_t __user *)ptr)) //write the BR_TRANSACTION command word
return -EFAULT;
ptr += sizeof(uint32_t);
if (copy_to_user(ptr, &tr, sizeof(tr))) //copy the transaction data that mediaserver sent over into servicemanager's user-space read buffer
return -EFAULT;
ptr += sizeof(tr);

trace_binder_transaction_received(t);
binder_stat_br(proc, thread, cmd);
......
list_del(&t->work.entry);
t->buffer->allow_user_free = 1;//user space is now allowed to free this buffer
if (cmd == BR_TRANSACTION && !(t->flags & TF_ONE_WAY)) {
t->to_parent = thread->transaction_stack;
t->to_thread = thread;
thread->transaction_stack = t;
} else {
t->buffer->transaction = NULL;
kfree(t);
binder_stats_deleted(BINDER_STAT_TRANSACTION);
}
break;
}

done:

*consumed = ptr - buffer;
}


  This code also unpacks the pending transaction and, from its code, cookie, and related fields, builds a binder_transaction_data for user space. Roughly:

1. Store target_node->ptr and target_node->cookie in the new transaction data, and prepend the cmd = BR_TRANSACTION command word.

2. Copy the transaction data into servicemanager's read buffer.

15. binder_parse()

int binder_parse(struct binder_state *bs, struct binder_io *bio,
uintptr_t ptr, size_t size, binder_handler func)
{
int r = 1;
uintptr_t end = ptr + (uintptr_t) size;
while (ptr < end) {
uint32_t cmd = *(uint32_t *) ptr;
ptr += sizeof(uint32_t);
/*.........*/
case BR_TRANSACTION: { //recall that the command the kernel just queued for us is BR_TRANSACTION
struct binder_transaction_data *txn = (struct binder_transaction_data *) ptr;
if ((end - ptr) < sizeof(*txn)) {
ALOGE("parse: txn too small!\n");
return -1;
}
binder_dump_txn(txn);
if (func) {
unsigned rdata[256/4];
struct binder_io msg;
struct binder_io reply;
int res;

bio_init(&reply, rdata, sizeof(rdata), 4);
bio_init_from_txn(&msg, txn);
res = func(bs, txn, &msg, &reply);
binder_send_reply(bs, &reply, txn->data.ptr.buffer, res);
}
ptr += sizeof(*txn);
break;
}
......
return r;
}


Here func is svcmgr_handler. Since our code is PING_TRANSACTION, the handler returns almost immediately, as the code below shows.

16. svcmgr_handler()

int svcmgr_handler(struct binder_state *bs,
struct binder_transaction_data *txn,
struct binder_io *msg,
struct binder_io *reply)
{
struct svcinfo *si;
uint16_t *s;
size_t len;
uint32_t handle;
uint32_t strict_policy;
int allow_isolated;
//ALOGI("target=%x code=%d pid=%d uid=%d\n",
//  txn->target.handle, txn->code, txn->sender_pid, txn->sender_euid);

if (txn->target.handle != svcmgr_handle)
return -1;

if (txn->code == PING_TRANSACTION)//for a ping we simply return success here
return 0;
......
}


So svcmgr_handler detects PING_TRANSACTION and returns right away. The code for ServiceManager's reply path was already shown in earlier posts, so for brevity I will just describe the process here:

1. ServiceManager writes down two commands at once, BC_FREE_BUFFER and BC_REPLY. The first, as its name says, releases the transaction buffer used by the request it just handled; BC_REPLY carries the reply payload (when a service is being looked up, that payload holds the service's handle).

2. That data is sent to the kernel and again packaged into a binder_transaction. Because the object we are after here is the ServiceManager proxy itself (handle 0), the reply carries no service handle.

3. Recall that mediaserver issued the request from the talkWithDriver() loop inside waitForResponse(); the mediaserver thread went to sleep while waiting for servicemanager's answer.

4. When servicemanager replies, the kernel wakes that mediaserver thread; its next talkWithDriver() reads the reply into user space, where it finds cmd = BR_TRANSACTION_COMPLETE.

5. IPCThreadState::transact() then returns to getStrongProxyForHandle(), and we end up with a ServiceManager proxy for handle 0 — the relevant branch of waitForResponse() is excerpted below.

status_t IPCThreadState::waitForResponse(Parcel *reply, status_t *acquireResult)
{
    int32_t cmd;
    int32_t err;
    //.......
    switch (cmd) {
    case BR_TRANSACTION_COMPLETE:
        if (!reply && !acquireResult) goto finish;
        break;
    //......
    }
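
For completeness, step 1 in the list above is performed by servicemanager's binder_send_reply(), which we already saw being called from binder_parse(). A rough sketch of its shape, paraphrased from frameworks/native/cmds/servicemanager/binder.c (field details may differ between Android versions), is:

// Sketch of binder_send_reply(): one BC_FREE_BUFFER and one BC_REPLY written
// back-to-back in a single write to the driver. Paraphrased, not a verbatim copy.
void binder_send_reply(struct binder_state *bs, struct binder_io *reply,
                       binder_uintptr_t buffer_to_free, int status)
{
    struct {
        uint32_t cmd_free;                  // BC_FREE_BUFFER
        binder_uintptr_t buffer;            // the request buffer to release
        uint32_t cmd_reply;                 // BC_REPLY
        struct binder_transaction_data txn; // the reply payload
    } __attribute__((packed)) data;

    data.cmd_free = BC_FREE_BUFFER;
    data.buffer = buffer_to_free;
    data.cmd_reply = BC_REPLY;
    data.txn.target.ptr = 0;
    data.txn.cookie = 0;
    data.txn.code = 0;
    if (status) {
        data.txn.flags = TF_STATUS_CODE;    // reply is just a 32-bit status (our ping case)
        data.txn.data_size = sizeof(int);
        data.txn.offsets_size = 0;
        data.txn.data.ptr.buffer = (uintptr_t)&status;
        data.txn.data.ptr.offsets = 0;
    } else {
        data.txn.flags = 0;                 // reply carries the binder_io payload (e.g. a service handle)
        data.txn.data_size = reply->data - reply->data0;
        data.txn.offsets_size = ((char*)reply->offs) - ((char*)reply->offs0);
        data.txn.data.ptr.buffer = (uintptr_t)reply->data0;
        data.txn.data.ptr.offsets = (uintptr_t)reply->offs0;
    }
    binder_write(bs, &data, sizeof(data));  // one write-only BINDER_WRITE_READ ioctl
}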


  Execution then returns to defaultServiceManager(): the mediaserver process now holds a ServiceManager proxy and can use it to register its other services with servicemanager. The key piece of state is the handle, which uniquely identifies a binder_ref object in the kernel.
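
As a closing illustration, once defaultServiceManager() has returned, looking a service up is just a call through the proxy. A minimal sketch (the service name and interface below are only examples from this post, not a prescription):

// Sketch: using the ServiceManager proxy obtained by defaultServiceManager().
sp<IServiceManager> sm = defaultServiceManager();
sp<IBinder> binder = sm->getService(String16("media.camera"));          // returns a proxy for the camera service's handle
sp<ICameraService> cs = interface_cast<ICameraService>(binder);         // wrap that handle in a typed proxy, just as handle 0 was wrapped above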