
binder framework

2020-04-19  龙遁流

Based on the Android 9.0 source. Building it yields the libbinder library, which provides the native IPC framework; the Java-layer binder framework and AIDL sit on top of it and are covered as well.

Files involved

libbinder:
./frameworks/native/include/binder
./frameworks/native/libs/binder

jni:
frameworks/base/core/jni/

java/aidl:
frameworks/base/core/java/android/os/
frameworks/base/core/java/com/android/internal/os/

Structure

Top-level interfaces

There are two topmost interfaces, IInterface and IBinder, which abstract the Service and the binder driver respectively.

From top to bottom the layers are the interface layer, the implementation layer, and the proxy layer:

IInterface           IBinder
   <=>                 <=>
INTERFACE         BBinder/BpBinder(BpRefBase)
   <=>                 <=>
BnInterface         BpInterface

IInterface

Every Service is a subclass of this interface. It defines the conversion from IInterface to the IBinder interface, but the concrete conversion, onAsBinder(), is left to each subclass.

static sp<IBinder> IInterface::asBinder(const sp<IInterface>&); // simply forwards to onAsBinder()
virtual IBinder* onAsBinder() = 0;
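
The implementation of asBinder() in IInterface.cpp is essentially a null check plus the virtual call (abridged; the real file also provides a raw-pointer overload):

sp<IBinder> IInterface::asBinder(const sp<IInterface>& iface)
{
    if (iface == NULL) return NULL;
    return iface->onAsBinder();
}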

Going the other way, interface_cast() recovers the Service an IBinder object represents, and is the usual entry point for accessing a Service: to use a Service's specific functionality you must get hold of its concrete Service class, while the parent (interface) classes only build up the framework, stay abstract, and know nothing about subclass-specific behavior.

template<typename INTERFACE>
inline sp<INTERFACE> interface_cast(const sp<IBinder>& obj) {
  return INTERFACE::asInterface(obj);
}

Every Service comes with the macro pair DECLARE_META_INTERFACE and IMPLEMENT_META_INTERFACE, which define the Service-specific metadata as well as the conversion rule from IBinder to IInterface.

// In the Service class declaration (.h)
#define DECLARE_META_INTERFACE(INTERFACE) \
  static const String16 descriptor;\
  static sp<I##INTERFACE> asInterface(const sp<IBinder>& obj);\
  virtual const String16& getInterfaceDescriptor() const;\

// In the Service implementation (.cpp); NAME is the unique interface-descriptor string
// asInterface() returns the local interface if the binder lives in this process, otherwise a Bp##INTERFACE proxy
#define IMPLEMENT_META_INTERFACE(INTERFACE, NAME) \
  const String16 I##INTERFACE::descriptor(NAME); \
  const String16& I##INTERFACE::getInterfaceDescriptor() const { \
      return I##INTERFACE::descriptor; \
  } \
  sp<I##INTERFACE> I##INTERFACE::asInterface(const sp<IBinder>& obj) { \
      sp<I##INTERFACE> intr; \
      if (obj != NULL) { \
          intr = static_cast<I##INTERFACE*>(                          \
              obj->queryLocalInterface(                               \
                  I##INTERFACE::descriptor).get());                   \
          if (intr == NULL) {                                         \
              intr = new Bp##INTERFACE(obj);                          \
          }                                                           \
      }                                                               \
      return intr;                                                    \
  }                                                                   \
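
As a concrete example, a hand-written native service would use the pair as below (IHello is a hypothetical interface, not part of AOSP):

// IHello.h -- hypothetical interface
class IHello : public IInterface {
public:
    DECLARE_META_INTERFACE(Hello); // declares descriptor, asInterface(), ...
    enum { HELLO_TRANSACTION = IBinder::FIRST_CALL_TRANSACTION };
    virtual status_t hello(const String16& msg) = 0;
};

// IHello.cpp
// (The expansion references BpHello, so in a real .cpp BpHello must be
// declared before this macro.)
IMPLEMENT_META_INTERFACE(Hello, "com.example.IHello"); // NAME must be unique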

How do IInterface and IBinder come together? Two key subclasses, BnInterface and BpInterface, tie them up. Code never uses the Service class directly for IPC; it goes through these two.

Bn means binder node, the real binder entity; Bp means binder proxy.

INTERFACE stands for the user-defined Service interface (itself derived from IInterface).

// Declarations
template<typename INTERFACE>
class BnInterface : public INTERFACE, public BBinder {
  public:
    virtual sp<IInterface> queryLocalInterface(const String16& _descriptor);
    virtual const String16& getInterfaceDescriptor() const;
  protected:
    virtual IBinder* onAsBinder();
};
template<typename INTERFACE>
class BpInterface : public INTERFACE, public BpRefBase {
  public:
    explicit BpInterface(const sp<IBinder>& remote);
  protected:
    virtual IBinder* onAsBinder();
};
// Implementations
template<typename INTERFACE>
inline sp<IInterface> BnInterface<INTERFACE>::queryLocalInterface(
      const String16& _descriptor) {
  if (_descriptor == INTERFACE::descriptor) return this;
  return NULL;
}

template<typename INTERFACE>
inline const String16& BnInterface<INTERFACE>::getInterfaceDescriptor() const {
  return INTERFACE::getInterfaceDescriptor(); }

// Converting a Service to its binder object: on the Bn side this just returns the object itself (a BBinder subclass)
template<typename INTERFACE>
IBinder* BnInterface<INTERFACE>::onAsBinder() { return this; }

// BpInterface holds, via BpRefBase, the IBinder (in practice a BpBinder) standing for the remote Service
template<typename INTERFACE>
inline BpInterface<INTERFACE>::BpInterface(const sp<IBinder>& remote)
  : BpRefBase(remote) {}

// remote() returns BpRefBase::mRemote, i.e. that BpBinder
template<typename INTERFACE>
inline IBinder* BpInterface<INTERFACE>::onAsBinder() { return remote(); }
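
Carrying the hypothetical IHello further: the Bp side packs its arguments and calls remote()->transact(), while the Bn side unpacks them in onTransact(). This is the same boilerplate AIDL generates on the Java side. A sketch:

// Proxy: runs in the client process.
class BpHello : public BpInterface<IHello> {
public:
    explicit BpHello(const sp<IBinder>& impl) : BpInterface<IHello>(impl) {}
    virtual status_t hello(const String16& msg) {
        Parcel data, reply;
        data.writeInterfaceToken(IHello::getInterfaceDescriptor());
        data.writeString16(msg);
        // remote() is the BpBinder; this lands in IPCThreadState::transact().
        return remote()->transact(HELLO_TRANSACTION, data, &reply);
    }
};

// Native side: runs in the service process; a real service subclasses
// BnHello and implements hello().
class BnHello : public BnInterface<IHello> {
protected:
    virtual status_t onTransact(uint32_t code, const Parcel& data,
                                Parcel* reply, uint32_t flags) {
        switch (code) {
        case HELLO_TRANSACTION:
            CHECK_INTERFACE(IHello, data, reply); // verifies the interface token
            return hello(data.readString16());
        default:
            return BBinder::onTransact(code, data, reply, flags);
        }
    }
};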

IBinder

Defines the basic operations of binder IPC.

BBinder and BpBinder

// BBinder, plus BpRefBase, the class that holds a proxy's reference to a remote binder
class BBinder : public IBinder
{
public:
    virtual const String16& getInterfaceDescriptor() const;
    virtual status_t transact(uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags = 0);
    virtual BBinder* localBinder();
protected:
    virtual status_t onTransact(uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags = 0);
};

BBinder* BBinder::localBinder() { return this; }
bool BBinder::isBinderAlive() const { return true; }

// Base of BpInterface: holds the proxy's remote IBinder (mRemote, in practice a BpBinder) and manages its strong/weak reference counts
class BpRefBase : public virtual RefBase
{
protected:
    explicit BpRefBase(const sp<IBinder>& o);
    inline IBinder* remote() { return mRemote; }
    inline IBinder* remote() const { return mRemote; }
private:
    IBinder* const mRemote;
};

What, then, is BBinder's role? Judging from its code, transact() simply forwards to onTransact(), and BBinder::onTransact() only covers a handful of generic transactions: the interface-descriptor query, dump, shell commands, sysprops notifications. Some of these call into other services, others are handled locally with no further IPC; the name onTransact already marks it as the receiving end. BpRefBase, shown above, holds the remote reference for the proxy side.

BpBinder, by contrast, implements the generic binder operations such as pingBinder() and dump() in terms of its transact(), and that transact() is a real IPC call:

class BpBinder : public IBinder
{
public:
    static BpBinder* create(int32_t handle);
    virtual status_t transact(uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags = 0);
    virtual BpBinder* remoteBinder();
protected:
    BpBinder(int32_t handle,int32_t trackedUid);
private:
    const int32_t mHandle;
    int32_t mTrackedUid;
};

// mAlive records whether the binder node this proxy refers to still exists
bool BpBinder::isBinderAlive() const { return mAlive != 0; }
BpBinder* BpBinder::remoteBinder() { return this; }

// The key transact operation: it forwards to IPCThreadState::transact(), which is what really communicates with the binder driver
status_t BpBinder::transact(uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags)
{
    if (mAlive) {
        status_t status = IPCThreadState::self()->transact(mHandle, code, data, reply, flags);
        if (status == DEAD_OBJECT) mAlive = 0;
        return status;
    }
    return DEAD_OBJECT;
}

IPCThreadState

This is the class that actually talks to the binder driver. Two paths are worth following: a client using it to reach a service, and the loop that keeps pulling in and executing commands.

class IPCThreadState
{
public:
    static IPCThreadState* self();
    static IPCThreadState* selfOrNull();  // self(), but won't instantiate
    static void shutdown(); // tear down the calling thread's IPCThreadState

    sp<ProcessState> process() {return mProcess;}

    status_t handlePolledCommands();
    void flushCommands();

    void joinThreadPool(bool isMain = true);

    // Stop the local process.
    void stopProcess(bool immediate = true);

    status_t transact(int32_t handle, uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags);
private:
    status_t sendReply(const Parcel& reply, uint32_t flags);
    status_t waitForResponse(Parcel *reply, status_t *acquireResult=NULL);
    status_t talkWithDriver(bool doReceive=true);
    status_t writeTransactionData(int32_t cmd, uint32_t binderFlags,
                                  int32_t handle, uint32_t code, const Parcel& data,
                                  status_t* statusBuffer);
    status_t getAndExecuteCommand();
    status_t executeCommand(int32_t command);
    static  void threadDestructor(void *st);
    static  void freeBuffer(Parcel* parcel, const uint8_t* data, size_t dataSize,
                            const binder_size_t* objects, size_t objectsSize, void* cookie);
    const   sp<ProcessState>    mProcess; // the process this thread belongs to
            Vector<BBinder*>    mPendingStrongDerefs;
            Parcel              mIn; // data read back from the driver
            Parcel              mOut; // commands queued to send to the driver
};

// Global: the fallback BBinder for transactions arriving with target.ptr == 0, i.e. addressed to the context manager; a process acting as context manager registers it via setTheContextObject()
sp<BBinder> the_context_object;
void setTheContextObject(sp<BBinder> obj) { the_context_object = obj; }

// Per-thread singleton: the IPCThreadState pointer lives in thread-local storage; created on first use. selfOrNull() only looks it up and never creates one.
IPCThreadState* IPCThreadState::self() {... return new IPCThreadState(); ... }
IPCThreadState* IPCThreadState::selfOrNull() {...}

// IPC initiated on demand by a caller, normally a client thread
status_t IPCThreadState::transact(int32_t handle,
                                  uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags)
{
    status_t err;
    flags |= TF_ACCEPT_FDS;
    // package the transaction data into mOut
    err = writeTransactionData(BC_TRANSACTION, flags, handle, code, data, NULL);

    if ((flags & TF_ONE_WAY) == 0) { // not one-way: a reply may be expected
        if (reply) {
            // drives talkWithDriver() and executeCommand()
            err = waitForResponse(reply); // talk to the driver and handle its reply
        } else {
            Parcel fakeReply;
            err = waitForResponse(&fakeReply);
        }
    } else {
        err = waitForResponse(NULL, NULL);
    }
    return err;
}
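
// --- Sketch: the command flow of one synchronous (non-oneway) call ---
// (BC_*/BR_* are the real protocol codes from the kernel binder header.)
//
//   writeTransactionData(): mOut <- BC_TRANSACTION + binder_transaction_data
//   waitForResponse() loop:
//     talkWithDriver(): ioctl(BINDER_WRITE_READ)  -- driver wakes a server thread
//     mIn <- BR_TRANSACTION_COMPLETE              -- our write half was accepted
//     ... server runs executeCommand(BR_TRANSACTION), BBinder::transact(),
//         then sendReply() ...
//     mIn <- BR_REPLY + binder_transaction_data   -- unpacked into *reply
//
// (For a one-way call, waitForResponse(NULL, NULL) already returns at
// BR_TRANSACTION_COMPLETE.)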
// Packs the transaction data into the outgoing Parcel (mOut)
status_t IPCThreadState::writeTransactionData(int32_t cmd, uint32_t binderFlags,
    int32_t handle, uint32_t code, const Parcel& data, status_t* statusBuffer)
{
    binder_transaction_data tr; // the binder driver parses the IPC payload out of this struct

    tr.target.ptr = 0; /* Don't pass uninitialized stack data to a remote process */
    tr.target.handle = handle;
    tr.code = code;
    tr.flags = binderFlags;
    tr.cookie = 0;
    tr.sender_pid = 0;
    tr.sender_euid = 0;

    const status_t err = data.errorCheck();
    if (err == NO_ERROR) { // data is sound: marshal it as-is
        tr.data_size = data.ipcDataSize();
        tr.data.ptr.buffer = data.ipcData();
        tr.offsets_size = data.ipcObjectsCount()*sizeof(binder_size_t);
        tr.data.ptr.offsets = data.ipcObjects();
    } else if (statusBuffer) { // data is bad: marshal the error status instead
        tr.flags |= TF_STATUS_CODE;
        *statusBuffer = err;
        tr.data_size = sizeof(status_t);
        tr.data.ptr.buffer = reinterpret_cast<uintptr_t>(statusBuffer);
        tr.offsets_size = 0;
        tr.data.ptr.offsets = 0;
    } else {
        return (mLastError = err);
    }
    // queue cmd + tr onto the outgoing Parcel
    mOut.writeInt32(cmd);
    mOut.write(&tr, sizeof(tr));

    return NO_ERROR;
}
status_t IPCThreadState::sendReply(const Parcel& reply, uint32_t flags)
{
    status_t err;
    status_t statusBuffer;
    err = writeTransactionData(BC_REPLY, flags, -1, 0, reply, &statusBuffer);
    if (err < NO_ERROR) return err;
    return waitForResponse(NULL, NULL);
}
status_t IPCThreadState::waitForResponse(Parcel *reply, status_t *acquireResult)
{
    uint32_t cmd;
    int32_t err;

    while (1) {
        // hand the IPC data to the driver
        if ((err=talkWithDriver()) < NO_ERROR) break;
        err = mIn.errorCheck();
        if (err < NO_ERROR) break;
        if (mIn.dataAvail() == 0) continue;

        // process the driver's reply
        cmd = (uint32_t)mIn.readInt32();
        switch (cmd) {
        case BR_TRANSACTION_COMPLETE:
        case BR_DEAD_REPLY:
        case BR_FAILED_REPLY:
        case BR_ACQUIRE_RESULT: // handling of the cases above elided
        case BR_REPLY:
            {
                binder_transaction_data tr;
                err = mIn.read(&tr, sizeof(tr));
                if (reply) {
                    if ((tr.flags & TF_STATUS_CODE) == 0) {
                        reply->ipcSetDataReference(
                            reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer), tr.data_size,
                            reinterpret_cast<const binder_size_t*>(tr.data.ptr.offsets),
                            tr.offsets_size/sizeof(binder_size_t),
                            freeBuffer, this);
                    } else {
                        err = *reinterpret_cast<const status_t*>(tr.data.ptr.buffer);
                        freeBuffer(NULL,
                            reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
                            tr.data_size,
                            reinterpret_cast<const binder_size_t*>(tr.data.ptr.offsets),
                            tr.offsets_size/sizeof(binder_size_t), this);
                    }
                } else {
                    freeBuffer(NULL,
                        reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
                        tr.data_size,
                        reinterpret_cast<const binder_size_t*>(tr.data.ptr.offsets),
                        tr.offsets_size/sizeof(binder_size_t), this);
                    continue;
                }
            }
            goto finish;

        default:
            err = executeCommand(cmd);
            if (err != NO_ERROR) goto finish;
            break;
        }
    }

finish:
    if (err != NO_ERROR) {
        if (acquireResult) *acquireResult = err;
        if (reply) reply->setError(err);
        mLastError = err;
    }

    return err;
}
// doReceive defaults to true
status_t IPCThreadState::talkWithDriver(bool doReceive)
{
    binder_write_read bwr;

    // Is the read buffer empty?
    const bool needRead = mIn.dataPosition() >= mIn.dataSize();
    const size_t outAvail = (!doReceive || needRead) ? mOut.dataSize() : 0;
    bwr.write_size = outAvail;
    bwr.write_buffer = (uintptr_t)mOut.data();

    // This is what we'll read.
    if (doReceive && needRead) {
        bwr.read_size = mIn.dataCapacity();
        bwr.read_buffer = (uintptr_t)mIn.data();
    } else {
        bwr.read_size = 0;
        bwr.read_buffer = 0;
    }

    // nothing to write and nothing to read: return immediately
    if ((bwr.write_size == 0) && (bwr.read_size == 0)) return NO_ERROR;

    bwr.write_consumed = 0;
    bwr.read_consumed = 0;
    status_t err;
    // Loop on the ioctl (retrying on EINTR), exchanging IPC data with the
    // driver in both directions in one call.
    do {
        if (ioctl(mProcess->mDriverFD, BINDER_WRITE_READ, &bwr) >= 0)
            err = NO_ERROR;
        else
            err = -errno;
    } while (err == -EINTR);
    // (The full source then trims mOut by bwr.write_consumed and resets mIn
    // to the bwr.read_consumed bytes just read; elided here.)
    return err;
}
// A client's request is ultimately handled by the service; this method is the dispatcher in between
status_t IPCThreadState::executeCommand(int32_t cmd)
{
    BBinder* obj;
    RefBase::weakref_type* refs;
    status_t result = NO_ERROR;

    switch ((uint32_t)cmd) {
    case BR_ERROR:
    case BR_OK:
    case BR_ACQUIRE:
    case BR_RELEASE:
    case BR_INCREFS:
    case BR_DECREFS:
    case BR_ATTEMPT_ACQUIRE: // handling elided
    case BR_TRANSACTION:
        {
            binder_transaction_data tr;
            result = mIn.read(&tr, sizeof(tr));
            if (result != NO_ERROR) break;

            Parcel buffer;
            buffer.ipcSetDataReference(
                reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
                tr.data_size,
                reinterpret_cast<const binder_size_t*>(tr.data.ptr.offsets),
                tr.offsets_size/sizeof(binder_size_t), freeBuffer, this);

            const pid_t origPid = mCallingPid;
            const uid_t origUid = mCallingUid;
            const int32_t origStrictModePolicy = mStrictModePolicy;
            const int32_t origTransactionBinderFlags = mLastTransactionBinderFlags;
            mCallingPid = tr.sender_pid;
            mCallingUid = tr.sender_euid;
            mLastTransactionBinderFlags = tr.flags;

            Parcel reply;
            status_t error;
            if (tr.target.ptr) {
                // We only have a weak reference on the target object, so we must first try to
                // safely acquire a strong reference before doing anything else with it.
                if (reinterpret_cast<RefBase::weakref_type*>(tr.target.ptr)->attemptIncStrong(this)) {
                    // call transact() on the real Service object (the BnInterface/BBinder)
                    error = reinterpret_cast<BBinder*>(tr.cookie)->
                                transact(tr.code, buffer, &reply, tr.flags);
                    reinterpret_cast<BBinder*>(tr.cookie)->decStrong(this);
                } else {
                    error = UNKNOWN_TRANSACTION;
                }

            } else {
                error = the_context_object->transact(tr.code, buffer, &reply, tr.flags);
            }
            // send the reply back (skipped for one-way calls)
            if ((tr.flags & TF_ONE_WAY) == 0) {
                sendReply(reply, 0);
            }

            mCallingPid = origPid;
            mCallingUid = origUid;
            mStrictModePolicy = origStrictModePolicy;
            mLastTransactionBinderFlags = origTransactionBinderFlags;
        }
        break;

    case BR_DEAD_BINDER:
    case BR_CLEAR_DEATH_NOTIFICATION_DONE:
        {
            BpBinder *proxy = (BpBinder*)mIn.readPointer();
            proxy->getWeakRefs()->decWeak(proxy);
        } break;
    case BR_FINISHED:
    case BR_NOOP:
    case BR_SPAWN_LOOPER:
        mProcess->spawnPooledThread(false);
        break;
    default:
        break;
    }
    return result;
}

// Other important operations
// joinThreadPool() adds the current thread to the thread pool and keeps fetching commands to process. Unlike transact(), which waits for a client to trigger it, this is the proactive, serving side.
void IPCThreadState::joinThreadPool(bool isMain)
{
    mOut.writeInt32(isMain ? BC_ENTER_LOOPER : BC_REGISTER_LOOPER);

    status_t result;
    do {
        processPendingDerefs();
        // now get the next command to be processed, waiting if necessary
        result = getAndExecuteCommand();
        // Let this thread exit the thread pool if it is no longer
        // needed and it is not the main process thread.
        if(result == TIMED_OUT && !isMain) {
            break;
        }
    } while (result != -ECONNREFUSED && result != -EBADF);

    mOut.writeInt32(BC_EXIT_LOOPER);
    talkWithDriver(false);
}
status_t IPCThreadState::getAndExecuteCommand()
{
    status_t result;
    int32_t cmd;

    result = talkWithDriver();
    if (result >= NO_ERROR) {
        size_t IN = mIn.dataAvail(); // only used for logging in the full source
        cmd = mIn.readInt32();
        // execute the command just read
        result = executeCommand(cmd);
    }
    return result;
}
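
To put joinThreadPool() in context, the main() of a typical native service looks roughly like this (a sketch: the "hello" name and HelloService, a BnHello subclass, continue the hypothetical example from above):

int main()
{
    // Opens /dev/binder and mmaps the receive buffer for this process.
    sp<ProcessState> proc(ProcessState::self());
    // Register with servicemanager (the BBinder behind handle 0).
    defaultServiceManager()->addService(String16("hello"), new HelloService());
    // Spawn pooled binder threads, then turn the main thread into one too.
    ProcessState::self()->startThreadPool();
    IPCThreadState::self()->joinThreadPool();
    return 0;
}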

Obtaining and using the ServiceManager

The Service Manager is obtained through defaultServiceManager(). Before a process registers a service (addService) or looks one up (getService), it must first call defaultServiceManager() to obtain the gDefaultServiceManager object. gDefaultServiceManager is in fact a BpServiceManager, the proxy object for the service_manager service. If gDefaultServiceManager already exists it is returned directly; otherwise it is created.

Binder maps a default buffer of 1MB minus 8KB per process; the default binder thread-pool limit is 15 driver-spawned threads (16 concurrent if the main thread joins the pool as well).
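
Both numbers come from defines in ProcessState.cpp; in Android 9 they read roughly as follows (paraphrased):

// ProcessState.cpp (abridged)
#define BINDER_VM_SIZE ((1 * 1024 * 1024) - sysconf(_SC_PAGE_SIZE) * 2) // 1MB - 8KB with 4K pages
#define DEFAULT_MAX_BINDER_THREADS 15 // driver-spawned threads; +1 for a thread that joins itself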

// IServiceManager.cpp/h  
// defaultServiceManager()
gDefaultServiceManager =
    interface_cast<IServiceManager>(ProcessState::self()->getContextObject(NULL));

//ProcessState::self() returns the process-wide ProcessState singleton: each process has exactly one, created on first use and returned directly afterwards
//getContextObject(NULL) returns the BpBinder for handle=0, likewise created only if it doesn't exist yet
//interface_cast<IServiceManager>() wraps that BpBinder in a BpServiceManager

ProcessState::self() ==> new ProcessState("/dev/binder") ==> open_driver/mmap

ProcessState::getContextObject() ==> getStrongProxyForHandle(0)
==> lookupHandleLocked()
==> BpBinder::create(0) ==> obj (the BpBinder with handle=0)

interface_cast<IServiceManager>(obj) ==> IServiceManager::asInterface(obj)
==> new BpServiceManager(obj)

The BpServiceManager object wraps the getService, checkService, addService and listServices methods, and each of them talks to the service_manager service through obj's (that is, BpBinder's) transact(). In effect, BpServiceManager is the client of service_manager.
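
For instance, checkService() is nothing more than a transact() on handle 0 (abridged from BpServiceManager in IServiceManager.cpp):

virtual sp<IBinder> checkService(const String16& name) const
{
    Parcel data, reply;
    data.writeInterfaceToken(IServiceManager::getInterfaceDescriptor());
    data.writeString16(name);
    remote()->transact(CHECK_SERVICE_TRANSACTION, data, &reply);
    return reply.readStrongBinder();
}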

BpServiceManager neatly fuses the transport layer and the business layer:

by inheriting the IServiceManager interface it implements the interface's business-logic methods;

through its member mRemote = new BpBinder(0) (held via BpRefBase) it carries out the binder transport.

A BpBinder designates its corresponding BBinder by handle; across the whole binder system, handle=0 stands for the BBinder of ServiceManager.
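
Seen from a client, the whole stack composes as below (a sketch: "hello" and IHello continue the hypothetical example):

// Look the service up by name through the BpServiceManager proxy...
sp<IBinder> binder = defaultServiceManager()->getService(String16("hello"));
// ...then wrap the returned BpBinder in the typed proxy via asInterface().
sp<IHello> svc = interface_cast<IHello>(binder);
svc->hello(String16("world"));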
