Binder IPC Study Notes

2021-01-11  feifei_fly

1. IPC Mechanisms in Android

1.1 Inter-Process Communication on Linux

When a process traps into kernel code through a system call, it is said to be in kernel mode; when it executes its own code, it is in user mode.

Kernel routines for copying memory across the user/kernel boundary:

copy_from_user() // copies data from user space into kernel space
copy_to_user()   // copies data from kernel space into user space

Traditional IPC

The basis of traditional IPC:

Each process's user space is isolated from every other process, but kernel space is shared across processes. Process A can copy data from its user space into kernel space; the kernel relays it by copying the data into process B's user space, achieving cross-process communication.
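
A minimal sketch of this two-copy pattern over a pipe, one of the traditional channels (the message text and buffer size are illustrative):

#include <unistd.h>
#include <cstdio>
#include <cstring>

// Traditional IPC: the sender's user buffer is copied into a kernel buffer
// (copy_from_user), then copied out into the receiver's user buffer
// (copy_to_user) -- two copies per message.
int main() {
    int fd[2];
    if (pipe(fd) != 0) return 1;

    if (fork() == 0) {                  // child = receiving process B
        char buf[64] = {0};             // B must guess how much space it needs
        read(fd[0], buf, sizeof(buf));  // kernel buffer -> user buffer (copy 2)
        printf("B received: %s\n", buf);
        return 0;
    }
    const char* msg = "hello from A";
    write(fd[1], msg, strlen(msg) + 1); // user buffer -> kernel buffer (copy 1)
    return 0;
}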

Traditional IPC has two problems:

  • One data transfer requires two copies: user-space buffer --> kernel buffer --> user-space buffer.
  • It wastes space and time: the receiving buffer is provided by the receiving process, which does not know in advance how much space the incoming data will need.

1.2 Binder IPC

Binder IPC also uses kernel space to relay data; the difference is that Binder sets up, in kernel space, a memory mapping for the server-side user process.

The Binder driver's memory mapping: binder_mmap

This makes the user-space buffer and the kernel-space buffer operate on the same pages, so a Binder transaction needs only one copy, from the sender's user space into kernel space.


1.3 How Binder Differs from Other Linux IPC

Linux IPC mechanisms roughly fall into: pipes, message queues, sockets, and shared memory.

If both processes mmap the same memory, that is shared memory; if one side copies with copy_from_user and the other side reads through an mmap'ed region, that is the Binder approach.
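
A minimal sketch of the "both sides mmap" case using POSIX shared memory (the segment name /demo_shm is made up; link with -lrt on older glibc):

#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>
#include <cstdio>
#include <cstring>

// Shared memory: both processes map the same physical pages, so data
// written by one side is visible to the other without any extra copy.
int main() {
    int fd = shm_open("/demo_shm", O_CREAT | O_RDWR, 0600);
    ftruncate(fd, 4096);
    char* shared = (char*)mmap(nullptr, 4096, PROT_READ | PROT_WRITE,
                               MAP_SHARED, fd, 0);
    if (fork() == 0) {       // the child inherits the same mapping
        sleep(1);            // crude synchronization, good enough for a sketch
        printf("child sees: %s\n", shared);
        return 0;
    }
    strcpy(shared, "written once, visible to both");
    return 0;
}

Binder, by contrast, keeps the mmap on the receiver's side only and lets the driver copy the sender's data directly into that mapped buffer, hence exactly one copy.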

2. Binder's Implementation and Structure


Android Binder communication works in two ways: ServiceManager directly open()s and ioctl()s the binder driver, while everyone else communicates through the Binder IPC framework.

Binder follows a client/server structure. From top to bottom there are the Java (framework) layer, the native layer (including JNI), and the driver layer.

The framework-layer Binder classes mainly include: Binder, BinderProxy, and BinderInternal.

The native-layer Binder classes mainly include: JavaBBinder, BBinder, BpBinder, ProcessState, IPCThreadState, and ServiceManager.

The driver layer, also called the kernel layer, operates directly on the "/dev/binder" device node, mainly through open(), ioctl(), and mmap().

The Java Binder is a mirror of the native Binder: Java-level Binder IPC relies on the native Binder for the actual work, and everything ultimately goes through the driver device node.

Below is a brief introduction to each of these classes.

2.1 Framework-Layer Binder

public class Binder implements IBinder {

     /**
     * Raw native pointer to JavaBBinderHolder object. Owned by this Java object. Not null.
     */
    private final long mObject; // points to the native JavaBBinderHolder object
    private IInterface mOwner;
    private String mDescriptor;

    public Binder() {
        mObject = getNativeBBinderHolder();
    }

    public @Nullable IInterface queryLocalInterface(@NonNull String descriptor) {
        if (mDescriptor != null && mDescriptor.equals(descriptor)) {
            return mOwner;
        }
        return null;
    }

    // Subclasses override this to handle incoming transactions.
    protected boolean onTransact(int code, @NonNull Parcel data, @Nullable Parcel reply,
            int flags) {
        return false;
    }
}
final class BinderProxy implements IBinder {
     /**
     * C++ pointer to BinderProxyNativeData. That consists of strong pointers to the
     * native IBinder object, and a DeathRecipientList.
     */
    private final long mNativeData;

    private BinderProxy(long nativeData) {
        mNativeData = nativeData;
    }

  
    public boolean transact(int code, Parcel data, Parcel reply, int flags) throws RemoteException {
        Binder.checkParcel(this, code, data, "Unreasonably large binder buffer");

        try {
            return transactNative(code, data, reply, flags);
        } finally {
          
        }
    }

    public native boolean transactNative(int code, Parcel data, Parcel reply,
            int flags) throws RemoteException;
}

public class BinderInternal {

    // GcWatcher manages the destruction and recycling of Binder objects.
    static WeakReference<BinderInternal.GcWatcher> sGcWatcher = new WeakReference(new BinderInternal.GcWatcher());
    // List of registered GC watchers (used by addGcWatcher below).
    static final ArrayList<Runnable> sGcWatchers = new ArrayList();

    public BinderInternal() {
    }

    public static void addGcWatcher(Runnable watcher) {
        synchronized(sGcWatchers) {
            sGcWatchers.add(watcher);
        }
    }

    // Adds the current thread to the Binder thread pool.
    public static final native void joinThreadPool();

    // getContextObject() returns the BpBinder with handle=0,
    // i.e. ServiceManager's BpBinder.
    public static final native IBinder getContextObject();
}

BinderInternal.getContextObject() yields the BpBinder that represents the (C++) ServiceManager, from which a ServiceManagerProxy is then created.

frameworks/base/core/java/android/os/ServiceManager.java

public final class ServiceManager {
    
    @UnsupportedAppUsage
    public static IBinder getService(String name) {
        try {
            IBinder service = sCache.get(name);
            if (service != null) {
                return service;
            } else {
                return Binder.allowBlocking(rawGetService(name));
            }
        } catch (RemoteException e) {
            Log.e(TAG, "error in getService", e);
        }
        return null;
    }


     public static void addService(String name, IBinder service, boolean allowIsolated,
            int dumpPriority) {
        try {
            getIServiceManager().addService(name, service, allowIsolated, dumpPriority);
        } catch (RemoteException e) {
            Log.e(TAG, "error in addService", e);
        }
    }

    
  private static IServiceManager getIServiceManager() {
        if (sServiceManager != null) {
            return sServiceManager;
        }

        // Find the service manager
        sServiceManager = ServiceManagerNative
                .asInterface(Binder.allowBlocking(BinderInternal.getContextObject()));
        return sServiceManager;
    }

}


public final class ServiceManagerNative {
    private ServiceManagerNative() {}

    @UnsupportedAppUsage
    public static IServiceManager asInterface(IBinder obj) {
        if (obj == null) {
            return null;
        }

        // ServiceManager is never local
        return new ServiceManagerProxy(obj);
    }
}

2.2 Native-Layer Binder

2.2.1 BpBinder: the native Binder proxy class. It holds a handle identifying the Binder node; it is the client-side proxy.

2.2.2 JavaBBinder: a C++ class deriving from BBinder; it stands in for the server-side Binder.

Inheritance:

JavaBBinder -> BBinder -> IBinder -> RefBase

BpBinder -> IBinder -> RefBase

Ownership:

(Java) Binder.mObject -> (C++) JavaBBinderHolder -> JavaBBinder

(Java) BinderProxy.mNativeData -> (C++) BinderProxyNativeData -> BinderProxyNativeData.mObject -> BpBinder

The Java Binder object's mObject field holds a (C++) JavaBBinderHolder, which in turn holds a JavaBBinder.

The Java BinderProxy's mNativeData field holds a BinderProxyNativeData, which holds the BpBinder.

2.2.3 ProcessState

ProcessState is a singleton; each process holds exactly one ProcessState instance.
Instantiating it does two things: it opens the binder driver, and it mmaps the transaction buffer.

static int open_driver(const char *driver)
{
    //"/dev/binder"
    int fd = open(driver, O_RDWR | O_CLOEXEC);
    if (fd >= 0) {
        int vers = 0;
        status_t result = ioctl(fd, BINDER_VERSION, &vers);
       
        size_t maxThreads = DEFAULT_MAX_BINDER_THREADS;
        result = ioctl(fd, BINDER_SET_MAX_THREADS, &maxThreads);
    } else {
        ALOGW("Opening '%s' failed: %s\n", driver, strerror(errno));
    }
    return fd;
}

//Call mmap to set up the shared buffer; its size is 1MB - 8KB.


if (mDriverFD >= 0) {
    // mmap the binder, providing a chunk of virtual address space to receive transactions.
    mVMStart = mmap(nullptr, BINDER_VM_SIZE, PROT_READ, MAP_PRIVATE | MAP_NORESERVE, mDriverFD, 0);
}

2.2.4 IPCThreadState: Binder's thread management class

Its main job:

IPCThreadState has two Parcel-typed members, mIn and mOut.

mIn - a Parcel used to receive data coming from the /dev/binder driver

mOut - a Parcel used to send commands to the /dev/binder driver

IPCThreadState::talkWithDriver() reads commands from mOut and sends them to the Binder driver; whatever the driver returns is written into mIn.
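
talkWithDriver packs both directions into a single binder_write_read and issues one BINDER_WRITE_READ ioctl. A simplified sketch of that exchange, with mIn/mOut reduced to raw buffers (error handling trimmed; the real code lives in IPCThreadState::talkWithDriver):

#include <cerrno>
#include <cstdint>
#include <sys/ioctl.h>
#include <linux/android/binder.h>   // binder_write_read, BINDER_WRITE_READ

// One ioctl both sends the commands queued in mOut and receives the
// driver's replies into mIn.
int talkWithDriverSketch(int driverFd,
                         const uint8_t* outBuf, size_t outLen, // mOut contents
                         uint8_t* inBuf, size_t inCap,         // mIn backing store
                         size_t* inLen) {
    binder_write_read bwr = {};
    bwr.write_size   = outLen;
    bwr.write_buffer = (uintptr_t)outBuf;
    bwr.read_size    = inCap;
    bwr.read_buffer  = (uintptr_t)inBuf;

    if (ioctl(driverFd, BINDER_WRITE_READ, &bwr) < 0)
        return -errno;

    *inLen = bwr.read_consumed;   // number of reply bytes now in mIn
    return 0;
}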

3. How the Java Binder and the Native Binder Are Linked

In everyday development we use the Java-level Binder, but the actual work is ultimately done by the native Binder. So how do the Java Binder and the native Binder get wired together?

During system boot, when Zygote starts, the virtual machine registration step calls AndroidRuntime::startReg to register the JNI methods.

startReg

int AndroidRuntime::startReg(JNIEnv* env)
{
    androidSetCreateThreadFunc((android_create_thread_fn) javaCreateThreadEtc);

    env->PushLocalFrame(200);

    //register JNI methods
    if (register_jni_procs(gRegJNI, NELEM(gRegJNI), env) < 0) {
        env->PopLocalFrame(NULL);
        return -1;
    }
    env->PopLocalFrame(NULL);

    return 0;
}

gRegJNI is an array listing all the JNI registration routines; one of its entries is REG_JNI(register_android_os_Binder).

register_android_os_Binder

int register_android_os_Binder(JNIEnv* env) {
    // register the Binder class's JNI methods
    if (int_register_android_os_Binder(env) < 0)
        return -1;

    // register the BinderInternal class's JNI methods
    if (int_register_android_os_BinderInternal(env) < 0)
        return -1;

    // register the BinderProxy class's JNI methods
    if (int_register_android_os_BinderProxy(env) < 0)
        return -1;
    ...
    return 0;
}

Registering Binder

static int int_register_android_os_Binder(JNIEnv* env) {
    // kBinderPathName = "android/os/Binder"; look up the class at that path
    jclass clazz = FindClassOrDie(env, kBinderPathName);

    // cache the Java Binder class in mClass
    gBinderOffsets.mClass = MakeGlobalRefOrDie(env, clazz);
    // cache the Java execTransact() method in mExecTransact
    gBinderOffsets.mExecTransact = GetMethodIDOrDie(env, clazz, "execTransact", "(IJJI)Z");
    // cache the Java mObject field in mObject
    gBinderOffsets.mObject = GetFieldIDOrDie(env, clazz, "mObject", "J");

    // register the JNI methods
    return RegisterMethodsOrDie(env, kBinderPathName, gBinderMethods,
        NELEM(gBinderMethods));
}

All of Binder's important handles are cached in gBinderOffsets. There are three main steps: look up the android/os/Binder class, cache its execTransact() method ID and mObject field ID, and register the native methods listed in gBinderMethods.

gBinderOffsets

gBinderOffsets is a global static struct, defined as follows:

static struct bindernative_offsets_t
{
    jclass mClass; // the Binder class
    jmethodID mExecTransact; // the execTransact() method
    jfieldID mObject; // the mObject field

} gBinderOffsets;

gBinderOffsets caches the Binder.java class itself plus its execTransact() method and mObject field, giving the JNI layer a channel into the Java layer.

Caching the Binder class information in gBinderOffsets is a space-for-time trade-off: the class does not have to be looked up on every call, which speeds up access.

gBinderMethods

static const JNINativeMethod gBinderMethods[] = {
     /* name, signature, function pointer */
    { "getCallingPid", "()I", (void*)android_os_Binder_getCallingPid },
    { "getCallingUid", "()I", (void*)android_os_Binder_getCallingUid },
    { "clearCallingIdentity", "()J", (void*)android_os_Binder_clearCallingIdentity },
    { "restoreCallingIdentity", "(J)V", (void*)android_os_Binder_restoreCallingIdentity },
    { "setThreadStrictModePolicy", "(I)V", (void*)android_os_Binder_setThreadStrictModePolicy },
    { "getThreadStrictModePolicy", "()I", (void*)android_os_Binder_getThreadStrictModePolicy },
    { "flushPendingCommands", "()V", (void*)android_os_Binder_flushPendingCommands },
    { "init", "()V", (void*)android_os_Binder_init },
    { "destroy", "()V", (void*)android_os_Binder_destroy },
    { "blockUntilThreadAvailable", "()V", (void*)android_os_Binder_blockUntilThreadAvailable }
};

RegisterMethodsOrDie() establishes a one-to-one mapping for the methods in the gBinderMethods array, giving the Java layer a channel into the JNI layer.

In short, int_register_android_os_Binder() builds the bridge that lets the Binder class call across the native/framework boundary in both directions.
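
To see what the cached IDs buy us, here is a simplified sketch of the upward direction: when a transaction arrives, the native side uses gBinderOffsets to invoke Binder.execTransact() on the Java object (roughly what JavaBBinder::onTransact does; exception handling and Parcel plumbing omitted):

// The cached jmethodID lets the native layer call up into Java without
// any per-call reflection lookups.
status_t callUpIntoJava(JNIEnv* env, jobject javaBinder,
                        uint32_t code, jlong dataPtr, jlong replyPtr,
                        uint32_t flags) {
    jboolean handled = env->CallBooleanMethod(
            javaBinder, gBinderOffsets.mExecTransact,
            (jint)code, dataPtr, replyPtr, (jint)flags);
    return handled ? NO_ERROR : UNKNOWN_TRANSACTION;
}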

Registering BinderInternal

static int int_register_android_os_BinderInternal(JNIEnv* env) {
    //kBinderInternalPathName = "com/android/internal/os/BinderInternal"
    jclass clazz = FindClassOrDie(env, kBinderInternalPathName);

    gBinderInternalOffsets.mClass = MakeGlobalRefOrDie(env, clazz);
    gBinderInternalOffsets.mForceGc = GetStaticMethodIDOrDie(env, clazz, "forceBinderGc", "()V");

    return RegisterMethodsOrDie(
        env, kBinderInternalPathName,
        gBinderInternalMethods, NELEM(gBinderInternalMethods));
}

This registers BinderInternal's JNI methods; gBinderInternalOffsets caches BinderInternal's forceBinderGc() method.

Below is the JNI method table for BinderInternal:

static const JNINativeMethod gBinderInternalMethods[] = {
    { "getContextObject", "()Landroid/os/IBinder;", (void*)android_os_BinderInternal_getContextObject },
    { "joinThreadPool", "()V", (void*)android_os_BinderInternal_joinThreadPool },
    { "disableBackgroundScheduling", "(Z)V", (void*)android_os_BinderInternal_disableBackgroundScheduling },
    { "handleGc", "()V", (void*)android_os_BinderInternal_handleGc }
};

Much like the Binder registration, this builds the bridge between the native layer and the framework layer for the BinderInternal class.

Registering BinderProxy

static int int_register_android_os_BinderProxy(JNIEnv* env) {
    //gErrorOffsets caches the Error class information
    jclass clazz = FindClassOrDie(env, "java/lang/Error");
    gErrorOffsets.mClass = MakeGlobalRefOrDie(env, clazz);

    //gBinderProxyOffsets caches the BinderProxy class information
    //kBinderProxyPathName = "android/os/BinderProxy"
    clazz = FindClassOrDie(env, kBinderProxyPathName);
    gBinderProxyOffsets.mClass = MakeGlobalRefOrDie(env, clazz);
    gBinderProxyOffsets.mConstructor = GetMethodIDOrDie(env, clazz, "<init>", "()V");
    gBinderProxyOffsets.mSendDeathNotice = GetStaticMethodIDOrDie(env, clazz, "sendDeathNotice", "(Landroid/os/IBinder$DeathRecipient;)V");
    gBinderProxyOffsets.mObject = GetFieldIDOrDie(env, clazz, "mObject", "J");
    gBinderProxyOffsets.mSelf = GetFieldIDOrDie(env, clazz, "mSelf", "Ljava/lang/ref/WeakReference;");
    gBinderProxyOffsets.mOrgue = GetFieldIDOrDie(env, clazz, "mOrgue", "J");

    //gClassOffsets caches the Class.getName() method
    clazz = FindClassOrDie(env, "java/lang/Class");
    gClassOffsets.mGetName = GetMethodIDOrDie(env, clazz, "getName", "()Ljava/lang/String;");

    return RegisterMethodsOrDie(
        env, kBinderProxyPathName,
        gBinderProxyMethods, NELEM(gBinderProxyMethods));
}

This registers BinderProxy's JNI methods; gBinderProxyOffsets caches BinderProxy's constructor, sendDeathNotice(), and its mObject, mSelf, and mOrgue fields.

BinderProxy lives in Binder.java; it is the Java-side proxy for a remote IBinder object. BinderProxy instances are created in the C++ layer and handed up to Java.

ibinderForJavaObject: converting a Java object into a C++ IBinder

A Java Binder object is converted into its JavaBBinder;
a Java BinderProxy is converted into its BpBinder.

sp<IBinder> ibinderForJavaObject(JNIEnv* env, jobject obj)
{
    if (obj == NULL) return NULL;

    // Instance of Binder?
    if (env->IsInstanceOf(obj, gBinderOffsets.mClass)) {
        JavaBBinderHolder* jbh = (JavaBBinderHolder*)
            env->GetLongField(obj, gBinderOffsets.mObject);
        return jbh->get(env, obj);
    }

    // Instance of BinderProxy?
    if (env->IsInstanceOf(obj, gBinderProxyOffsets.mClass)) {
        return getBPNativeData(env, obj)->mObject;
    }

    ALOGW("ibinderForJavaObject: %p is not a Binder object", obj);
    return NULL;

}

javaObjectForIBinder: converting a C++ IBinder into the corresponding Java object

C++ JavaBBinder -> Java Binder

C++ BpBinder -> Java BinderProxy

jobject javaObjectForIBinder(JNIEnv* env, const sp<IBinder>& val) {
    if (val == NULL) return NULL;

    if (val->checkSubclass(&gBinderOffsets)) { // true only for a JavaBBinder
        jobject object = static_cast<JavaBBinder*>(val.get())->object();
        return object;
    }

    AutoMutex _l(mProxyLock);

    jobject object = (jobject)val->findObject(&gBinderProxyOffsets);
    if (object != NULL) { // null on the first lookup
        jobject res = jniGetReferent(env, object);
        if (res != NULL) {
            return res;
        }
        android_atomic_dec(&gNumProxyRefs);
        val->detachObject(&gBinderProxyOffsets);
        env->DeleteGlobalRef(object);
    }

    // create the BinderProxy object
    object = env->NewObject(gBinderProxyOffsets.mClass, gBinderProxyOffsets.mConstructor);
    if (object != NULL) {
        // store the BpBinder pointer in BinderProxy.mObject
        env->SetLongField(object, gBinderProxyOffsets.mObject, (jlong)val.get());
        val->incStrong((void*)javaObjectForIBinder);

        jobject refObject = env->NewGlobalRef(
                env->GetObjectField(object, gBinderProxyOffsets.mSelf));
        // attach the BinderProxy information to the BpBinder's mObjects
        val->attachObject(&gBinderProxyOffsets, refObject,
                jnienv_to_javavm(env), proxy_cleanup);

        sp<DeathRecipientList> drl = new DeathRecipientList;
        drl->incStrong((void*)javaObjectForIBinder);
        // store the death-notification list in BinderProxy.mOrgue
        env->SetLongField(object, gBinderProxyOffsets.mOrgue, reinterpret_cast<jlong>(drl.get()));

        android_atomic_inc(&gNumProxyRefs);
        incRefsCreated(env);
    }
    return object;
}

4. How Binder Passes Data

The many services (Binders) in Android register themselves with ServiceManager; when a service is needed, it is looked up through ServiceManager.

Let's use ServiceManager.addService() and getService() to see how ServiceManager registers and maintains services (IBinder objects).
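
For orientation, here is what the same register/lookup pair looks like from native code, using libbinder's defaultServiceManager() (a sketch: the name "demo.service" and the DemoService class are made up, and a real server would also start its thread pool as described in section 5):

#include <binder/Binder.h>
#include <binder/IServiceManager.h>

using namespace android;

// A made-up service; a real one would override onTransact().
class DemoService : public BBinder {};

int main() {
    // Server side: register the service under a name.
    defaultServiceManager()->addService(String16("demo.service"),
                                        new DemoService());

    // Client side (normally another process): look it up again.
    sp<IBinder> binder =
            defaultServiceManager()->getService(String16("demo.service"));
    return binder != nullptr ? 0 : 1;
}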

4.1 SM.addService()

public static void addService(String name, IBinder service, boolean allowIsolated) {
    try {
        //first obtain the ServiceManagerProxy (SMP), then register through it
        getIServiceManager().addService(name, service, allowIsolated);
    } catch (RemoteException e) {
        Log.e(TAG, "error in addService", e);
    }
}

getIServiceManager

private static IServiceManager getIServiceManager() {
    if (sServiceManager != null) {
        return sServiceManager;
    }
    
    //obtain sServiceManager via ServiceManagerNative
    sServiceManager = ServiceManagerNative.asInterface(BinderInternal.getContextObject());
    return sServiceManager;
}

ServiceManager is obtained as a singleton; getIServiceManager() returns a ServiceManagerProxy object.

getContextObject

static jobject android_os_BinderInternal_getContextObject(JNIEnv* env, jobject clazz)
{
    //obtain the C++ BpBinder(0) object
    sp<IBinder> b = ProcessState::self()->getContextObject(NULL);
    //wrap BpBinder(0) in a Java BinderProxy object
    return javaObjectForIBinder(env, b);
}

BinderInternal.java declares the native method getContextObject(); through JNI it lands in the function above.

ProcessState::self()->getContextObject() is the familiar call from obtaining ServiceManager; it is equivalent to new BpBinder(0).

javaObjectForIBinder

(This is the same javaObjectForIBinder() listed in full in section 3.)

It creates a Java BinderProxy from the C++ BpBinder and stores the BpBinder's address in BinderProxy.mObject.

ServiceManagerNative.asInterface(BinderInternal.getContextObject())

is equivalent to

ServiceManagerNative.asInterface(new BinderProxy());

SMN.asInterface

public static IServiceManager asInterface(IBinder obj)
{
    if (obj == null) { // obj is the BinderProxy wrapping BpBinder(0)
        return null;
    }
    // since obj is a BinderProxy, queryLocalInterface() returns null by default
    IServiceManager in = (IServiceManager)obj.queryLocalInterface(descriptor);
    if (in != null) {
        return in;
    }
    return new ServiceManagerProxy(obj);
}

So ServiceManagerNative.asInterface(new BinderProxy()) is equivalent to new ServiceManagerProxy(new BinderProxy()).

ServiceManagerProxy initialization

class ServiceManagerProxy implements IServiceManager {
    public ServiceManagerProxy(IBinder remote) {
        mRemote = remote;
    }
}

mRemote is the BinderProxy object corresponding to BpBinder(0); as the Binder proxy it points at the native ServiceManager.

Calls on the framework-layer ServiceManager are actually delegated to ServiceManagerProxy's member BinderProxy;
the BinderProxy in turn reaches the BpBinder object via JNI.

4.2 SMP.addService()

public void addService(String name, IBinder service, boolean allowIsolated) throws RemoteException {
    Parcel data = Parcel.obtain();
    Parcel reply = Parcel.obtain();
    data.writeInterfaceToken(IServiceManager.descriptor);
    data.writeString(name);
    
    data.writeStrongBinder(service);
    data.writeInt(allowIsolated ? 1 : 0);
    //mRemote is the BinderProxy
    mRemote.transact(ADD_SERVICE_TRANSACTION, data, reply, 0);
    reply.recycle();
    data.recycle();
}

It does roughly two things:

1. data.writeStrongBinder(service): flatten the Binder into the Parcel.
2. mRemote.transact(ADD_SERVICE_TRANSACTION, data, reply, 0): call BinderProxy.transact() to send the message to the Binder driver.

writeStrongBinder

public final void writeStrongBinder(IBinder val) {
    // this is a native call
    nativeWriteStrongBinder(mNativePtr, val);
}

android_os_Parcel_writeStrongBinder

static void android_os_Parcel_writeStrongBinder(JNIEnv* env, jclass clazz, jlong nativePtr, jobject object) {
    //convert the Java Parcel to the native Parcel
    Parcel* parcel = reinterpret_cast<Parcel*>(nativePtr);
    if (parcel != NULL) {
        const status_t err = parcel->writeStrongBinder(ibinderForJavaObject(env, object));
        if (err != NO_ERROR) {
            signalExceptionForError(env, clazz, err);
        }
    }
}

ibinderForJavaObject

sp<IBinder> ibinderForJavaObject(JNIEnv* env, jobject obj)
{
    if (obj == NULL) return NULL;

    //a Java Binder object
    if (env->IsInstanceOf(obj, gBinderOffsets.mClass)) {
        JavaBBinderHolder* jbh = (JavaBBinderHolder*)
            env->GetLongField(obj, gBinderOffsets.mObject);
        return jbh != NULL ? jbh->get(env, obj) : NULL;
    }
    //a Java BinderProxy object
    if (env->IsInstanceOf(obj, gBinderProxyOffsets.mClass)) {
        return (IBinder*)env->GetLongField(obj, gBinderProxyOffsets.mObject);
    }
    return NULL;
}

frameworks/base/core/jni/android_util_Binder.cpp

JavaBBinderHolder

class JavaBBinderHolder

 sp<JavaBBinder> get(JNIEnv* env, jobject obj)
    {
        AutoMutex _l(mLock);
        sp<JavaBBinder> b = mBinder.promote();
        if (b == NULL) {
            b = new JavaBBinder(env, obj);
            if (mVintf) {
                ::android::internal::Stability::markVintf(b.get());
            }
            if (mExtension != nullptr) {
                b.get()->setExtension(mExtension);
            }
            mBinder = b;
            ALOGV("Creating JavaBinder %p (refs %p) for Object %p, weakCount=%" PRId32 "\n",
                 b.get(), b->getWeakRefs(), obj, b->getWeakRefs()->getWeakCount());
        }

        return b;
    }

When a Java Binder is created, a JavaBBinderHolder is created along with it automatically.

JavaBBinderHolder has a member mBinder that holds the current JavaBBinder as a weak reference; it may be garbage-collected, so every use must first check whether it still exists.
When mBinder is gone, a new JavaBBinder instance is created, and that JavaBBinder holds the Java-level Binder object.

The reference relationships at this point:

(Java) Binder -> JavaBBinderHolder -> JavaBBinder
JavaBBinder -> (Java) Binder

data.writeStrongBinder(service) is thus equivalent to parcel->writeStrongBinder(new JavaBBinder(env, obj)).

writeStrongBinder(C++)

status_t Parcel::writeStrongBinder(const sp<IBinder>& val)
{
    return flatten_binder(ProcessState::self(), val, this);
}

flatten_binder

flatten_binder() flattens the Binder information into a flat_binder_object struct.

status_t flatten_binder(const sp<ProcessState>& /*proc*/,
    const sp<IBinder>& binder, Parcel* out)
{
    flat_binder_object obj;

    obj.flags = 0x7f | FLAT_BINDER_FLAG_ACCEPTS_FDS;
    if (binder != NULL) {
        IBinder *local = binder->localBinder();
        if (!local) {
            BpBinder *proxy = binder->remoteBinder();
            const int32_t handle = proxy ? proxy->handle() : 0;
            obj.type = BINDER_TYPE_HANDLE; // remote binder
            obj.binder = 0;
            obj.handle = handle;
            obj.cookie = 0;
        } else {
            obj.type = BINDER_TYPE_BINDER; // local binder; this branch is taken here
            obj.binder = reinterpret_cast<uintptr_t>(local->getWeakRefs());
            obj.cookie = reinterpret_cast<uintptr_t>(local);
        }
    } else {
        obj.type = BINDER_TYPE_BINDER;  // null binder
        obj.binder = 0;
        obj.cookie = 0;
    }
    
    return finish_flatten_binder(binder, obj, out);
}

From this we can see how a Binder object is processed before the IPC happens:

- Parcel.writeStrongBinder(): takes the IBinder
- ibinderForJavaObject(): returns the JavaBBinder (for a local Binder) or the BpBinder (for a BinderProxy)
- flatten_binder(): produces a flat_binder_object
- (C++) parcel.writeObject(flat): stores the flat_binder_object in the Parcel
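
For reference, the flat_binder_object these fields land in looks roughly like this in older UAPI kernel headers (newer kernels move type into a binder_object_header):

struct flat_binder_object {
    __u32 type;     /* BINDER_TYPE_BINDER (local) or BINDER_TYPE_HANDLE (remote) */
    __u32 flags;
    union {
        binder_uintptr_t binder;   /* local: address of the weak refs */
        __u32 handle;              /* remote: driver-assigned handle */
    };
    binder_uintptr_t cookie;       /* local: address of the BBinder itself */
};

When the Parcel crosses processes, the driver rewrites BINDER_TYPE_BINDER entries into BINDER_TYPE_HANDLE entries, which is why the receiver ends up with a handle rather than a raw pointer.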

BinderProxy.transact

Once the Binder object is stored in the Parcel, BinderProxy.transact() is called to enter the IPC flow.

public boolean transact(int code, Parcel data, Parcel reply, int flags) throws RemoteException {
    //warns if the Parcel is larger than 800KB
    Binder.checkParcel(this, code, data, "Unreasonably large binder buffer");
    return transactNative(code, data, reply, flags);
}

mRemote is the BinderProxy; transactNative() goes through JNI into the function below.

android_os_BinderProxy_transact

static jboolean android_os_BinderProxy_transact(JNIEnv* env, jobject obj,
    jint code, jobject dataObj, jobject replyObj, jint flags)
{
    ...
    //convert the Java Parcels to native Parcels
    Parcel* data = parcelForJavaObject(env, dataObj);
    Parcel* reply = parcelForJavaObject(env, replyObj);
    ...

    //gBinderProxyOffsets.mObject holds the BpBinder(0) object created earlier
    IBinder* target = (IBinder*)
        env->GetLongField(obj, gBinderProxyOffsets.mObject);
    ...

    //this is BpBinder::transact(), which goes through the native layer into the Binder driver
    status_t err = target->transact(code, *data, reply, flags);
    ...
    return JNI_FALSE;
}

The Java-level BinderProxy.transact() is ultimately carried out by the native BpBinder::transact().

The essence of addService:

public void addService(String name, IBinder service, boolean allowIsolated) throws RemoteException {
    ...
    Parcel data = Parcel.obtain(); // the Java Parcel is also converted to a native Parcel
    data->writeStrongBinder(new JavaBBinder(env, obj));
    BpBinder::transact(ADD_SERVICE_TRANSACTION, *data, reply, 0); // talks to the Binder driver
    ...
}

4.3 SM.getService

ServiceManager.java

public static IBinder getService(String name) {
    try {
        IBinder service = sCache.get(name); // check the cache first
        if (service != null) {
            return service;
        } else {
            return getIServiceManager().getService(name); // [see SMP.getService below]
        }
    } catch (RemoteException e) {
        Log.e(TAG, "error in getService", e);
    }
    return null;
}

It first consults the sCache for an already-fetched IBinder; on a cache miss it queries through ServiceManagerProxy.

SMP.getService

class ServiceManagerProxy implements IServiceManager {
    public IBinder getService(String name) throws RemoteException {
        Parcel data = Parcel.obtain();
        Parcel reply = Parcel.obtain();
        data.writeInterfaceToken(IServiceManager.descriptor);
        data.writeString(name);
        //mRemote is the BinderProxy [see BinderProxy.transact below]
        mRemote.transact(GET_SERVICE_TRANSACTION, data, reply, 0);
        //parse the returned IBinder out of reply [see readStrongBinder below]
        IBinder binder = reply.readStrongBinder();
        reply.recycle();
        data.recycle();
        return binder;
    }
}

BinderProxy.transact

Binder.java

final class BinderProxy implements IBinder {
    public boolean transact(int code, Parcel data, Parcel reply, int flags) throws RemoteException {
        Binder.checkParcel(this, code, data, "Unreasonably large binder buffer");
        return transactNative(code, data, reply, flags);
    }
}

android_os_BinderProxy_transact

(Same android_os_BinderProxy_transact() as listed in section 4.2; the call again lands in BpBinder::transact().)

BpBinder.transact

BpBinder.cpp

status_t BpBinder::transact(
    uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags)
{
    if (mAlive) {
        // [see IPC.transact below]
        status_t status = IPCThreadState::self()->transact(
            mHandle, code, data, reply, flags);
        if (status == DEAD_OBJECT) mAlive = 0;
        return status;
    }

    return DEAD_OBJECT;
}

IPC.transact

IPCThreadState.cpp

status_t IPCThreadState::transact(int32_t handle,
                                  uint32_t code, const Parcel& data,
                                  Parcel* reply, uint32_t flags)
{
    status_t err = data.errorCheck(); // validate the data
    flags |= TF_ACCEPT_FDS;
    ....
    if (err == NO_ERROR) {
         //(1) pack handle, code, and data into a binder_transaction_data and write it into mOut (a Parcel)
        err = writeTransactionData(BC_TRANSACTION, flags, handle, code, data, NULL);
    }
    ...

    // by default the call is not oneway, i.e. we wait for the server's reply
    if ((flags & TF_ONE_WAY) == 0) {
        if (reply) {
            //(2) call waitForResponse() and wait for the reply
            err = waitForResponse(reply);
        }else {
            Parcel fakeReply;
            err = waitForResponse(&fakeReply);
        }
    } else {
        err = waitForResponse(NULL, NULL);
    }
    return err;
}

writeTransactionData

status_t IPCThreadState::writeTransactionData(int32_t cmd, uint32_t binderFlags,
    int32_t handle, uint32_t code, const Parcel& data, status_t* statusBuffer)
{
    binder_transaction_data tr;

    tr.target.ptr = 0; /* Don't pass uninitialized stack data to a remote process */
    tr.target.handle = handle;
    tr.code = code;
    tr.flags = binderFlags;
    tr.cookie = 0;
    tr.sender_pid = 0;
    tr.sender_euid = 0;

    const status_t err = data.errorCheck();
    if (err == NO_ERROR) {
        tr.data_size = data.ipcDataSize();
        tr.data.ptr.buffer = data.ipcData();
        tr.offsets_size = data.ipcObjectsCount()*sizeof(binder_size_t);
        tr.data.ptr.offsets = data.ipcObjects();
    } else if (statusBuffer) {
        tr.flags |= TF_STATUS_CODE;
        *statusBuffer = err;
        tr.data_size = sizeof(status_t);
        tr.data.ptr.buffer = reinterpret_cast<uintptr_t>(statusBuffer);
        tr.offsets_size = 0;
        tr.data.ptr.offsets = 0;
    } else {
        return (mLastError = err);
    }

    mOut.writeInt32(cmd);
    mOut.write(&tr, sizeof(tr));

    return NO_ERROR;
}
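
The binder_transaction_data filled in above is the driver's transaction descriptor; abridged from the UAPI header, it looks like this:

struct binder_transaction_data {
    union {
        __u32 handle;             /* target of a command transaction */
        binder_uintptr_t ptr;     /* target of a return transaction */
    } target;
    binder_uintptr_t cookie;      /* target object cookie */
    __u32 code;                   /* transaction command, e.g. ADD_SERVICE_TRANSACTION */
    __u32 flags;                  /* e.g. TF_ONE_WAY, TF_ACCEPT_FDS */
    __kernel_pid_t sender_pid;    /* filled in by the driver */
    __kernel_uid32_t sender_euid; /* filled in by the driver */
    binder_size_t data_size;      /* bytes of serialized Parcel data */
    binder_size_t offsets_size;   /* bytes of flat_binder_object offsets */
    union {
        struct {
            binder_uintptr_t buffer;   /* the Parcel data */
            binder_uintptr_t offsets;  /* where the binder objects sit in it */
        } ptr;
        __u8 buf[8];
    } data;
};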

IPC.waitForResponse

status_t IPCThreadState::waitForResponse(Parcel *reply, status_t *acquireResult)
{
    int32_t cmd;
    int32_t err;

    while (1) {
        if ((err=talkWithDriver()) < NO_ERROR) break; // [see talkWithDriver]
        err = mIn.errorCheck();
        if (err < NO_ERROR) break; // exit the loop on error

         //after each round-trip with the driver, handle one BR_ command if mIn received data
        if (mIn.dataAvail() == 0) continue;

        cmd = mIn.readInt32();

        switch (cmd) {
        case BR_TRANSACTION_COMPLETE:
            //only a oneway call (no reply expected) exits here; otherwise keep waiting
            if (!reply && !acquireResult) goto finish; break;

        case BR_DEAD_REPLY:
            err = DEAD_OBJECT;         goto finish;
        case BR_FAILED_REPLY:
            err = FAILED_TRANSACTION;  goto finish;
        case BR_REPLY: ...             goto finish;

        default:
            err = executeCommand(cmd);  // [see executeCommand]
            if (err != NO_ERROR) goto finish;
            break;
        }
    }

finish:
    if (err != NO_ERROR) {
        if (reply) reply->setError(err); // propagate the error code back to the original caller
    }
    return err;
}

During this loop, receiving and handling any of the following BR_ commands ends waitForResponse(): BR_TRANSACTION_COMPLETE (oneway calls only), BR_DEAD_REPLY, BR_FAILED_REPLY, or BR_REPLY.

All other commands are handled by executeCommand().

Since SM.addService()/getService() are not ONE_WAY calls, waitForResponse() returns to the calling process when BR_REPLY arrives.

class ServiceManagerProxy implements IServiceManager {
    public IBinder getService(String name) throws RemoteException {
        Parcel data = Parcel.obtain();
        Parcel reply = Parcel.obtain();
        //(1) set up the Parcel parameters
        data.writeInterfaceToken(IServiceManager.descriptor);
        data.writeString(name);
        //(2) mRemote is the BinderProxy
        mRemote.transact(GET_SERVICE_TRANSACTION, data, reply, 0);
        //(3) parse the returned IBinder out of reply
        IBinder binder = reply.readStrongBinder();
        reply.recycle();
        data.recycle();
        return binder;
    }
}

Execution thus comes back to step (3):

        IBinder binder = reply.readStrongBinder();

android_os_Parcel.cpp

static jobject android_os_Parcel_readStrongBinder(JNIEnv* env, jclass clazz, jlong nativePtr) {
    Parcel* parcel = reinterpret_cast<Parcel*>(nativePtr);
    if (parcel != NULL) {
        // [see javaObjectForIBinder]
        return javaObjectForIBinder(env, parcel->readStrongBinder());
    }
    return NULL;
}

javaObjectForIBinder() converts the native BpBinder into a Java BinderProxy.

readStrongBinder(C++)

sp<IBinder> Parcel::readStrongBinder() const
{
    sp<IBinder> val;
    // [see unflatten_binder]
    unflatten_binder(ProcessState::self(), *this, &val);
    return val;
}

unflatten_binder

status_t unflatten_binder(const sp<ProcessState>& proc,
    const Parcel& in, sp<IBinder>* out)
{
    const flat_binder_object* flat = in.readObject(false);
    if (flat) {
        switch (flat->type) {
            case BINDER_TYPE_BINDER:
                *out = reinterpret_cast<IBinder*>(flat->cookie);
                return finish_unflatten_binder(NULL, *flat, in);
            case BINDER_TYPE_HANDLE:
                //this branch is taken [see getStrongProxyForHandle]
                *out = proc->getStrongProxyForHandle(flat->handle);
                //finish bookkeeping for the BpBinder just created
                return finish_unflatten_binder(
                    static_cast<BpBinder*>(out->get()), *flat, in);
        }
    }
    return BAD_TYPE;
}

getStrongProxyForHandle

ProcessState.cpp

sp<IBinder> ProcessState::getStrongProxyForHandle(int32_t handle)
{
    sp<IBinder> result;

    AutoMutex _l(mLock);
    //look up the entry for this handle
    handle_entry* e = lookupHandleLocked(handle);

    if (e != NULL) {
        IBinder* b = e->binder;
        if (b == NULL || !e->refs->attemptIncWeak(this)) {
            ...
            //if no IBinder exists for this handle, or its weak reference is stale, create a new BpBinder
            b = new BpBinder(handle);
            e->binder = b;
            if (b) e->refs = b->getWeakRefs();
            result = b;
        } else {
            result.force_set(b);
            e->refs->decWeak(this);
        }
    }
    return result;
}

This method ends up creating the BpBinder proxy that points at the Binder server.
javaObjectForIBinder() then converts that native BpBinder into a Java BinderProxy.
So getService() ultimately returns a BinderProxy pointing at the target Binder service.

The readStrongBinder() pipeline:

  • Parcel::unflatten_binder(): reads the flat_binder_object back out of the Parcel
  • ProcessState::getStrongProxyForHandle(handle): returns the BpBinder
  • javaObjectForIBinder(): returns the BinderProxy object
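
A tiny native sketch of the whole write/read round trip; run within a single process, the local branch is taken and the very same IBinder comes back:

#include <binder/Binder.h>
#include <binder/Parcel.h>

using namespace android;

int main() {
    sp<IBinder> service = new BBinder();       // a local binder

    Parcel p;
    p.writeStrongBinder(service);              // flatten_binder: BINDER_TYPE_BINDER
    p.setDataPosition(0);
    sp<IBinder> back = p.readStrongBinder();   // unflatten_binder

    // Same process: we get the original object back. Across processes the
    // driver rewrites the entry to BINDER_TYPE_HANDLE and the reader would
    // get a BpBinder proxy instead.
    return back == service ? 0 : 1;
}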

The essence of getService:

public static IBinder getService(String name) {
    ...
    Parcel reply = Parcel.obtain(); // the Java Parcel is also converted to a native Parcel
    BpBinder::transact(GET_SERVICE_TRANSACTION, *data, reply, 0);  // talks to the Binder driver
    IBinder binder = javaObjectForIBinder(env, new BpBinder(handle));
    ...
}

5. Binder Thread Creation

Binder threads are created along with the Java process they belong to.

Java processes are all created through Process.start(), which sends a process-creation request to the Zygote process over a socket. Zygote then calls Zygote.forkAndSpecialize() to fork the new process; in the new process, RuntimeInit.nativeZygoteInit is invoked, which maps through JNI to onZygoteInit in app_main.cpp. The chain:

  • Process.start()
  • Zygote.forkAndSpecialize()
  • RuntimeInit.nativeZygoteInit
  • onZygoteInit

5.1 onZygoteInit

[-> app_main.cpp]

virtual void onZygoteInit() {
    //obtain the ProcessState object
    sp<ProcessState> proc = ProcessState::self();
    //start a new binder thread [see PS.startThreadPool]
    proc->startThreadPool();
}

ProcessState::self() is a singleton. Its main work is to open() the /dev/binder driver, mmap() the driver's memory (mapping a region of kernel address space), and store the driver fd in ProcessState's mDriverFD for later interaction. startThreadPool() then creates a new binder thread that loops in talkWithDriver().

5.2 PS.startThreadPool

[-> ProcessState.cpp]

void ProcessState::startThreadPool()
{
    AutoMutex _l(mLock);    // lock for thread safety
    if (!mThreadPoolStarted) {
        mThreadPoolStarted = true;
        spawnPooledThread(true);  // [see PS.spawnPooledThread]
    }
}

Once the thread pool is started, mThreadPoolStarted is set to true; this flag ensures each process starts its binder thread pool only once. The thread created here is the binder main thread (isMain=true); all other threads in the pool are created at the Binder driver's request.

5.3 PS.spawnPooledThread

void ProcessState::spawnPooledThread(bool isMain)
{
    if (mThreadPoolStarted) {
        //build the binder thread name
        String8 name = makeBinderThreadName();
        //isMain=true here [see PoolThread below]
        sp<Thread> t = new PoolThread(isMain);
        t->run(name.string());
    }
}

5.4 PoolThread.run

[-> ProcessState.cpp]

class PoolThread : public Thread
{
public:
    PoolThread(bool isMain)
        : mIsMain(isMain)
    {
    }

protected:
    virtual bool threadLoop() {
        IPCThreadState::self()->joinThreadPool(mIsMain); // [see IPC.joinThreadPool]
        return false;
    }
    const bool mIsMain;
};

Despite the name, this does not create a pool; it creates a single thread. PoolThread derives from Thread, and t->run() eventually invokes PoolThread::threadLoop().

5.5 IPC.joinThreadPool

[-> IPCThreadState.cpp]

void IPCThreadState::joinThreadPool(bool isMain)
{
    //announce this thread to the driver: the main thread sends BC_ENTER_LOOPER, pooled threads send BC_REGISTER_LOOPER
    mOut.writeInt32(isMain ? BC_ENTER_LOOPER : BC_REGISTER_LOOPER);
    set_sched_policy(mMyThreadId, SP_FOREGROUND); // set foreground scheduling policy

    status_t result;
    do {
        processPendingDerefs(); // process pending reference decrements
        result = getAndExecuteCommand(); // fetch and handle the next command

        if (result < NO_ERROR && result != TIMED_OUT
                && result != -ECONNREFUSED && result != -EBADF) {
            abort();
        }

        if(result == TIMED_OUT && !isMain) {
            break; // a non-main thread exits on timeout
        }
    } while (result != -ECONNREFUSED && result != -EBADF);

    mOut.writeInt32(BC_EXIT_LOOPER);  // the thread leaves the loop
    talkWithDriver(false); // false: the bwr read_buffer stays empty (write-only)
}

5.6 IPC.getAndExecuteCommand

status_t IPCThreadState::getAndExecuteCommand()
{
    status_t result;
    int32_t cmd;

    result = talkWithDriver(); // talk to the binder driver
    if (result >= NO_ERROR) {
        size_t IN = mIn.dataAvail();
        if (IN < sizeof(int32_t)) return result;
        cmd = mIn.readInt32();

        pthread_mutex_lock(&mProcess->mThreadCountLock);
        mProcess->mExecutingThreadsCount++;
        pthread_mutex_unlock(&mProcess->mThreadCountLock);

        result = executeCommand(cmd); // execute the BR_ response code

        pthread_mutex_lock(&mProcess->mThreadCountLock);
        mProcess->mExecutingThreadsCount--;
        pthread_cond_broadcast(&mProcess->mThreadCountDecrement);
        pthread_mutex_unlock(&mProcess->mThreadCountLock);

        set_sched_policy(mMyThreadId, SP_FOREGROUND);
    }
    return result;
}

5.7 IPC.talkWithDriver

//mOut has data, mIn is still empty; doReceive defaults to true
status_t IPCThreadState::talkWithDriver(bool doReceive)
{
    binder_write_read bwr;
    ...
    // when there is neither input nor output data, return immediately
    if ((bwr.write_size == 0) && (bwr.read_size == 0)) return NO_ERROR;
    ...

    do {
        //the ioctl syscall performs the binder read/write and enters the driver's binder_ioctl()
        if (ioctl(mProcess->mDriverFD, BINDER_WRITE_READ, &bwr) >= 0)
            err = NO_ERROR;
        ...
    } while (err == -EINTR);
    ...
    return err;
}

5.8 binder_thread_write

[-> binder.c]

static int binder_thread_write(struct binder_proc *proc,
            struct binder_thread *thread,
            binder_uintptr_t binder_buffer, size_t size,
            binder_size_t *consumed)
{
    uint32_t cmd;
    void __user *buffer = (void __user *)(uintptr_t)binder_buffer;
    void __user *ptr = buffer + *consumed;
    void __user *end = buffer + size;
    while (ptr < end && thread->return_error == BR_OK) {
        //copy the command from user space; here it is BC_ENTER_LOOPER
        if (get_user(cmd, (uint32_t __user *)ptr)) return -EFAULT;
        ptr += sizeof(uint32_t);
        switch (cmd) {
          case BC_REGISTER_LOOPER:
              if (thread->looper & BINDER_LOOPER_STATE_ENTERED) {
                //error: a thread that has already sent BC_ENTER_LOOPER must not take this branch
                thread->looper |= BINDER_LOOPER_STATE_INVALID;

              } else if (proc->requested_threads == 0) {
                //error: no thread creation was requested
                thread->looper |= BINDER_LOOPER_STATE_INVALID;

              } else {
                proc->requested_threads--;
                proc->requested_threads_started++;
              }
              thread->looper |= BINDER_LOOPER_STATE_REGISTERED;
              break;

          case BC_ENTER_LOOPER:
              if (thread->looper & BINDER_LOOPER_STATE_REGISTERED) {
                //error: a thread that has already sent BC_REGISTER_LOOPER must not take this branch right away
                thread->looper |= BINDER_LOOPER_STATE_INVALID;
              }
              //the binder main thread enters the loop
              thread->looper |= BINDER_LOOPER_STATE_ENTERED;
              break;

          case BC_EXIT_LOOPER:
              thread->looper |= BINDER_LOOPER_STATE_EXITED;
              break;
        }
        ...
    }
    *consumed = ptr - buffer;
  }
  return 0;
}

After the BC_ENTER_LOOPER command is handled, in the normal case the thread's looper gains the BINDER_LOOPER_STATE_ENTERED flag:

thread->looper |= BINDER_LOOPER_STATE_ENTERED;

At this point the binder main thread is fully created and can start receiving Binder messages.
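
Putting sections 5.1-5.8 together, this is the classic entry point of a native binder server, and exactly the pattern that drives the flow above (a sketch; a real server would also register a service first):

#include <binder/ProcessState.h>
#include <binder/IPCThreadState.h>

using namespace android;

int main() {
    // Opens /dev/binder and mmaps the transaction buffer (section 2.2.3).
    sp<ProcessState> proc = ProcessState::self();

    // Spawns the binder main thread, whose loop announces itself with
    // BC_ENTER_LOOPER (sections 5.2-5.5).
    proc->startThreadPool();

    // Turns the current thread into a binder thread as well; further pooled
    // threads are created on demand via BR_SPAWN_LOOPER (section 5.9).
    IPCThreadState::self()->joinThreadPool();
    return 0;
}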

5.9 When the Binder Driver Itself Requests a New Binder Thread

binder_thread_read

binder_thread_read(){
  ...
retry:
    //the thread is idle when both its todo queue and its transaction stack are empty
    wait_for_proc_work = thread->transaction_stack == NULL &&
        list_empty(&thread->todo);

    if (thread->return_error != BR_OK && ptr < end) {
        ...
        put_user(thread->return_error, (uint32_t __user *)ptr);
        ptr += sizeof(uint32_t);
        goto done; // on error, jump straight to done
    }

    thread->looper |= BINDER_LOOPER_STATE_WAITING;
    if (wait_for_proc_work)
        proc->ready_threads++; // one more available thread
    binder_unlock(__func__);

    if (wait_for_proc_work) {
        if (non_block) {
            ...
        } else
            //the process todo queue has no data: sleep and wait
            ret = wait_event_freezable_exclusive(proc->wait, binder_has_proc_work(proc, thread));
    } else {
        if (non_block) {
            ...
        } else
            //the thread todo queue has no data: sleep and wait
            ret = wait_event_freezable(thread->wait, binder_has_thread_work(thread));
    }

    binder_lock(__func__);
    if (wait_for_proc_work)
        proc->ready_threads--; // one less available thread
    thread->looper &= ~BINDER_LOOPER_STATE_WAITING;

    if (ret)
        return ret; // non-blocking call: return immediately

    while (1) {
        uint32_t cmd;
        struct binder_transaction_data tr;
        struct binder_work *w;
        struct binder_transaction *t = NULL;

        //prefer transaction work from the thread's own todo queue
        if (!list_empty(&thread->todo)) {
            w = list_first_entry(&thread->todo, struct binder_work, entry);
        //the thread todo queue is empty: take work from the process todo queue
        } else if (!list_empty(&proc->todo) && wait_for_proc_work) {
            w = list_first_entry(&proc->todo, struct binder_work, entry);
        } else {
            ... // no work: go back to retry
        }

        switch (w->type) {
            case BINDER_WORK_TRANSACTION: ...  break;
            case BINDER_WORK_TRANSACTION_COMPLETE:...  break;
            case BINDER_WORK_NODE: ...    break;
            case BINDER_WORK_DEAD_BINDER:
            case BINDER_WORK_DEAD_BINDER_AND_CLEAR:
            case BINDER_WORK_CLEAR_DEATH_NOTIFICATION:
                struct binder_ref_death *death;
                uint32_t cmd;

                death = container_of(w, struct binder_ref_death, work);
                if (w->type == BINDER_WORK_CLEAR_DEATH_NOTIFICATION)
                  cmd = BR_CLEAR_DEATH_NOTIFICATION_DONE;
                else
                  cmd = BR_DEAD_BINDER;
                put_user(cmd, (uint32_t __user *)ptr);
                ptr += sizeof(uint32_t);
                put_user(death->cookie, (void * __user *)ptr);
                ptr += sizeof(void *);
                ...
                if (cmd == BR_DEAD_BINDER)
                  goto done; // the driver is sending the client a death notification: enter done
                break;
        }

        if (!t)
            continue; // only BINDER_WORK_TRANSACTION falls through to the code below
        ...
        break;
    }

done:
    *consumed = ptr - buffer;
    //conditions for requesting a new binder thread
    if (proc->requested_threads + proc->ready_threads == 0 &&
        proc->requested_threads_started < proc->max_threads &&
        (thread->looper & (BINDER_LOOPER_STATE_REGISTERED |
         BINDER_LOOPER_STATE_ENTERED))) {
        proc->requested_threads++;
        // emit BR_SPAWN_LOOPER, asking user space to spawn a new thread
        put_user(BR_SPAWN_LOOPER, (uint32_t __user *)buffer);
    }
    return 0;
}

Execution reaches done in any of these three situations: (1) thread->return_error is set; (2) the driver delivers a death notification to the client (BR_DEAD_BINDER); (3) a BINDER_WORK_TRANSACTION has been handled (the while loop breaks and falls through).

Any binder thread generates the thread-spawning BR_SPAWN_LOOPER command only when all of the following hold at once: requested_threads + ready_threads == 0 (no idle thread and no spawn already requested), requested_threads_started < max_threads (the limit has not been reached), and the thread has already registered or entered the looper (BINDER_LOOPER_STATE_REGISTERED | BINDER_LOOPER_STATE_ENTERED).

The usual execution flow of a system_server binder thread is IPC.joinThreadPool() -> IPC.getAndExecuteCommand() -> IPC.talkWithDriver(); once talkWithDriver() receives work, the thread enters IPC.executeCommand(). Picking up from executeCommand():

status_t IPCThreadState::executeCommand(int32_t cmd)
{
    status_t result = NO_ERROR;
    switch ((uint32_t)cmd) {
      ...
      case BR_SPAWN_LOOPER:
          //spawn a new binder thread [see PS.spawnPooledThread]
          mProcess->spawnPooledThread(false);
          break;
      ...
    }
    return result;
}

The binder main thread is created together with its process; the ordinary binder threads created afterwards come from spawnPooledThread(false).

A thought: each process's binder thread pool is capped at 15 threads. The cap does not count the binder main thread created with BC_ENTER_LOOPER; only threads created with BC_REGISTER_LOOPER count toward it.

If a process's main thread runs the following, what is the maximum number of binder threads that process can end up with?

ProcessState::self()->setThreadPoolMaxThreadCount(6);  // up to 6 driver-requested threads
ProcessState::self()->startThreadPool();   // 1 thread (the binder main thread)
IPCThreadState::self()->joinThreadPool();   // 1 thread (the current thread)

The driver may request at most 6 binder threads (the setThreadPoolMaxThreadCount limit); the main binder thread created by startThreadPool() does not count against that limit, and the final line turns the current thread into a binder thread as well, so the process can have at most 6 + 1 + 1 = 8 binder threads.

Binder thread creation: summary


Whenever Zygote forks a new process, the binder thread pool is set up along with it: spawnPooledThread() creates the binder main thread. Later, while a thread is inside binder_thread_read, if the driver finds no idle thread, no pending spawn request, and the started-thread count still below the limit, it asks the process to create another binder thread.

Binder threads in the system fall into three kinds: the binder main thread (started via startThreadPool() and announced with BC_ENTER_LOOPER), ordinary pooled binder threads (spawned in response to BR_SPAWN_LOOPER and announced with BC_REGISTER_LOOPER), and other threads that talk to the driver without joining the pool, such as a caller thread blocked inside IPCThreadState::transact().

6. The General Binder IPC Call Flow

[figure: Binder communication architecture, using IActivityManager.startService as the example]

The figure illustrates the Binder IPC flow across processes, with IActivityManager.startService as the example.

Client side BinderProxy.transact() -> server side Binder.onTransact() is the generic call path.
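
In native terms the same generic path can be sketched with a BBinder subclass on the server side and a bare transact() on the client side (the transaction code and names are made up):

#include <binder/Binder.h>
#include <binder/Parcel.h>

using namespace android;

// Hypothetical transaction code shared by both sides.
enum { DEMO_ECHO = IBinder::FIRST_CALL_TRANSACTION };

// Server side: onTransact() is the generic dispatch point.
class DemoService : public BBinder {
    status_t onTransact(uint32_t code, const Parcel& data,
                        Parcel* reply, uint32_t flags) override {
        switch (code) {
        case DEMO_ECHO:
            reply->writeString16(data.readString16());  // echo the argument back
            return NO_ERROR;
        default:
            return BBinder::onTransact(code, data, reply, flags);
        }
    }
};

// Client side: remote would be a BinderProxy/BpBinder obtained via
// getService(); transact() drives the whole chain from section 4.
status_t callEcho(const sp<IBinder>& remote) {
    Parcel data, reply;
    data.writeString16(String16("ping"));
    return remote->transact(DEMO_ECHO, data, &reply, 0);
}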

6.1 After the initiating thread issues its binder ioctl to the driver, it loops in talkWithDriver() and blocks until one of the concluding BR_ commands arrives: BR_TRANSACTION_COMPLETE (for oneway calls), or BR_REPLY / BR_DEAD_REPLY / BR_FAILED_REPLY.

6.2 Once the target binder thread is created, it enters joinThreadPool() and keeps looping in getAndExecuteCommand(); when neither of the bwr buffers has data, it blocks in binder_thread_read's wait_event. Under normal circumstances a binder thread, once created, never exits.

7. References

https://www.jianshu.com/p/1bef7e865498

http://gityuan.com/2015/11/21/binder-framework/#getiservicemanager

http://gityuan.com/2016/10/29/binder-thread-pool/
