Android Audio/Video Part 1: AudioPolicyService Startup

2022-06-08  小城哇哇

A note before the article begins:

  1. These are the parts of the source code that I considered to be the main flow while reading it; only the key code is quoted.

A simple flow diagram

Initialization in main_audioserver

As you can see, main_audioserver.cpp directly calls the instantiate() initialization method, so let's look at what instantiate() is. Stepping into AudioPolicyService, however, we find that it does not define this method, so we have to look at the classes it inherits from instead.

# frameworks/av/media/audioserver/main_audioserver.cpp

int main(int argc __unused, char **argv)
{
    // doLog, childPid and mmapPolicy are set up earlier in main() (elided here)
    if (doLog && (childPid = fork()) != 0) {
        ... elided
    } else {
        // Initialization of the two important services
        AudioFlinger::instantiate();
        AudioPolicyService::instantiate();

        // AAudioService -- I am not yet sure what it does
        if (mmapPolicy == AAUDIO_POLICY_AUTO || mmapPolicy == AAUDIO_POLICY_ALWAYS) {
            AAudioService::instantiate();
        }

        // ProcessState::self() obtains the process-wide binder state;
        // startThreadPool() starts this process's binder thread pool
        ProcessState::self()->startThreadPool();
        // joinThreadPool() handles the actual command exchange with the binder driver
        IPCThreadState::self()->joinThreadPool();
    }
}


BinderService creates the object

As you can see, AudioPolicyService inherits from three classes in total, and the instantiate() method lives in BinderService, so let's continue into that.

# frameworks/av/services/audiopolicy/service/AudioPolicyService.h

class AudioPolicyService :
    // Step into each base class (or compare with AudioFlinger) and you will
    // see that BinderService is the class that matters here
    public BinderService<AudioPolicyService>,
    public BnAudioPolicyService,
    public IBinder::DeathRecipient
{
    // ... members elided
};

In short, the instantiate() method does three things:

  1. Call the subclass constructor to create the Service instance
  2. Add the subclass Service to service_manager for unified management; the service name is obtained from the subclass (SERVICE::getServiceName())
  3. Other process-level operations

namespace android {

// Template class (roughly the equivalent of generics)
template<typename SERVICE>
class BinderService
{
public:
    // instantiate() ends up calling this static method
    static status_t publish(bool allowIsolated = false,
                            int dumpFlags = IServiceManager::DUMP_FLAG_PRIORITY_DEFAULT) {
        sp<IServiceManager> sm(defaultServiceManager());
        // Add the Service to service_manager
        return sm->addService(String16(SERVICE::getServiceName()), new SERVICE(), allowIsolated,
                              dumpFlags);
    }

    static void publishAndJoinThreadPool(
            bool allowIsolated = false,
            int dumpFlags = IServiceManager::DUMP_FLAG_PRIORITY_DEFAULT) {
        publish(allowIsolated, dumpFlags);
        joinThreadPool();
    }

    // The initialization entry point everyone calls; it simply forwards to publish()
    static void instantiate() { publish(); }

    static status_t shutdown() { return NO_ERROR; }

private:
    static void joinThreadPool() {
        sp<ProcessState> ps(ProcessState::self());
        ps->startThreadPool();
        ps->giveThreadPoolName();
        IPCThreadState::self()->joinThreadPool();
    }
};
} // namespace android
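
To see what publish() buys us, here is a minimal client-side sketch of looking the service up again after it has been registered. This is illustrative code, not part of audioserver: the name "media.audio_policy" is what AudioPolicyService::getServiceName() returns, and the lookupAudioPolicy() helper is made up for this example.

#include <binder/IServiceManager.h>
#include <utils/String16.h>

using android::IBinder;
using android::IServiceManager;
using android::String16;
using android::defaultServiceManager;
using android::sp;

// Hypothetical helper: look up the service that publish() registered above
sp<IBinder> lookupAudioPolicy() {
    sp<IServiceManager> sm = defaultServiceManager();
    // "media.audio_policy" is the name returned by AudioPolicyService::getServiceName()
    sp<IBinder> binder = sm->getService(String16("media.audio_policy"));
    // A real client would then interface_cast<> this IBinder into an
    // IAudioPolicyService proxy; in practice AudioSystem does that for us.
    return binder;
}

AudioSystem performs essentially this lookup internally (with caching on top) whenever client code needs the audio policy service.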

AudioPolicyService's onFirstRef

So let's continue into AudioPolicyService's constructor, where we find that it does nothing except initialize members, and the flow seems to stop there. After some searching, however, we can learn the following:

  1. AudioPolicyService inherits from BnAudioPolicyService, which through a chain of base classes ultimately inherits from RefBase. For a class derived from RefBase, onFirstRef() is called the first time the object is wrapped in a strong pointer (sp<>), not during the constructor itself (this is Binder / sp, wp, RefBase smart-pointer territory); a minimal sketch of the mechanism follows.
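
A minimal, self-contained sketch of that RefBase behaviour, assuming a made-up class Demo (only RefBase, sp<> and onFirstRef() are real Android types here):

#define LOG_TAG "RefBaseDemo"
#include <utils/RefBase.h>
#include <utils/Log.h>

using android::RefBase;
using android::sp;

class Demo : public RefBase {
public:
    Demo() { ALOGD("constructor runs, onFirstRef() has NOT been called yet"); }
protected:
    // RefBase calls this when the first strong reference to the object is taken,
    // i.e. the first time it is wrapped in an sp<>
    void onFirstRef() override { ALOGD("onFirstRef()"); }
};

void example() {
    Demo* raw = new Demo();      // constructor only
    sp<Demo> strong = raw;       // first sp<> -> RefBase invokes onFirstRef()
    sp<Demo> another = strong;   // further references do not trigger it again
}

In BinderService::publish() that first sp<> wrap happens when new SERVICE() is handed to addService(), which is why constructing AudioPolicyService immediately leads to the onFirstRef() shown below.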

AudioPolicyService::AudioPolicyService()
    : BnAudioPolicyService(),
      mAudioPolicyManager(NULL),
      mAudioPolicyClient(NULL),
      mPhoneState(AUDIO_MODE_INVALID),
      mCaptureStateNotifier(false) {
}

// Called once the first strong reference is taken, i.e. right after construction here
void AudioPolicyService::onFirstRef()
{
    {
        // (scoped block -- in the full source it holds mLock)

        // Create two command threads -- for audio control and for output? Not entirely clear yet
        mAudioCommandThread = new AudioCommandThread(String8("ApmAudio"), this);
        mOutputCommandThread = new AudioCommandThread(String8("ApmOutput"), this);

        // Create the AudioPolicyClient, passing this service into it
        mAudioPolicyClient = new AudioPolicyClient(this);

        // createAudioPolicyManager ultimately ends up in AudioPolicyFactory.cpp,
        // with the AudioPolicyClient passed in
        mAudioPolicyManager = createAudioPolicyManager(mAudioPolicyClient);
    }
    ... code elided
}


AudioPolicyFactory helps construct the AudioPolicyManager

So let's look at the function in AudioPolicyFactory.cpp. To summarize, it does three things:

  1. Construct the AudioPolicyManager instance
  2. Call AudioPolicyManager's initialize() method
  3. Return the object if initialization succeeds (otherwise delete it and return nullptr)

# frameworks/av/services/audiopolicy/manager/AudioPolicyFactory.cpp

extern "C" AudioPolicyInterface* createAudioPolicyManager(
        AudioPolicyClientInterface *clientInterface)
{
    AudioPolicyManager *apm = new AudioPolicyManager(clientInterface);
    status_t status = apm->initialize();
    if (status != NO_ERROR) {
        delete apm;
        apm = nullptr;
    }
    return apm;
}


Creating and initializing the AudioPolicyManager

Next, let's look at AudioPolicyManager's constructor and its initialize() method.

  1. The constructor mainly loads the configuration file
  2. initialize() mainly creates the engine, binds it to the manager, and opens the input/output streams via onNewAudioModulesAvailableInt

# frameworks/av/services/audiopolicy/managerdefault/AudioPolicyManager.cpp

AudioPolicyManager::AudioPolicyManager(AudioPolicyClientInterface *clientInterface)
        : AudioPolicyManager(clientInterface, false /*forTesting*/)
{
    // Load the audio policy configuration (the legacy audio_policy.conf, or
    // audio_policy_configuration.xml on newer releases).
    // The system first tries the config under vendor/etc, then the one under system/etc.
    // If both fail to load, a default configuration is used and named the "primary" module,
    // which shows that the one module the audio system must always have is "primary".
    loadConfig();
}



status_t AudioPolicyManager::initialize() {
    {
        // Load the engine library
        auto engLib = EngineLibrary::load("libaudiopolicyengine" + getConfig().getEngineLibraryNameSuffix() + ".so");

        // Create the engine
        mEngine = engLib->createEngine();
    }
    // Bind the engine to the manager
    mEngine->setObserver(this);
    status_t status = mEngine->initCheck();

    // after parsing the config, mOutputDevicesAll and mInputDevicesAll contain all known devices;
    // open all output streams needed to access attached devices
    // Key method: opens the input/output streams. nullptr is passed because newDevices is
    // only an optional out-parameter that collects newly attached devices, and no caller
    // needs that list during initialize()
    onNewAudioModulesAvailableInt(nullptr /*newDevices*/);

    // Refresh the cached device and output information
    updateDevicesAndOutputs();
    return status;
}


AudioPolicyManager's onNewAudioModulesAvailableInt mainly does the following:

  1. Load the hardware abstraction libraries via loadHwModule
  2. Iterate over the input/output profiles and devices loaded from the configuration
  3. Find the matching input/output devices and open them

void AudioPolicyManager::onNewAudioModulesAvailableInt(DeviceVector *newDevices)
{
    // mHwModulesAll: all modules declared in the configuration
    // (presumably filled in when the config file was loaded)
    for (const auto& hwModule : mHwModulesAll) {
        if (std::find(mHwModules.begin(), mHwModules.end(), hwModule) != mHwModules.end()) {
            continue;
        }
        // Load the hardware abstraction library.
        // mpClientInterface is the AudioPolicyClient passed in via the constructor;
        // its implementation lives in AudioPolicyClientImpl.cpp
        hwModule->setHandle(mpClientInterface->loadHwModule(hwModule->getName()));
        mHwModules.push_back(hwModule);

        // Open all output streams needed to access the attached devices
        for (const auto& outProfile : hwModule->getOutputProfiles()) {
            // Devices supported by this profile (author's guess)
            const DeviceVector &supportedDevices = outProfile->getSupportedDevices();
            // Filter the supported devices against all of our output devices;
            // what remains are the devices that are actually available (author's guess)
            DeviceVector availProfileDevices = supportedDevices.filter(mOutputDevicesAll);
            sp<DeviceDescriptor> supportedDevice = 0;
            if (supportedDevices.contains(mDefaultOutputDevice)) {
                supportedDevice = mDefaultOutputDevice;
            } else {
                if (availProfileDevices.isEmpty()) {
                    continue;
                }
                supportedDevice = availProfileDevices.itemAt(0);
            }
            if (!mOutputDevicesAll.contains(supportedDevice)) {
                continue;
            }
            // Device descriptor object for the output; mpClientInterface is passed in
            sp<SwAudioOutputDescriptor> outputDesc = new SwAudioOutputDescriptor(outProfile, mpClientInterface);
            // output is the handle of this device's output stream
            audio_io_handle_t output = AUDIO_IO_HANDLE_NONE;
            // Open the output stream; under the hood this also calls mpClientInterface->openOutput
            status_t status = outputDesc->open(nullptr, DeviceVector(supportedDevice),
                                               AUDIO_STREAM_DEFAULT,
                                               AUDIO_OUTPUT_FLAG_NONE, &output);

            for (const auto &device : availProfileDevices) {
                // give a valid ID to an attached device once confirmed it is reachable
                if (!device->isAttached()) {
                    device->attach(hwModule);
                    // Add to the list of available output devices
                    mAvailableOutputDevices.add(device);
                    // Not examined yet; presumably gives the device an object for talking to the HAL layer
                    device->setEncapsulationInfoFromHal(mpClientInterface);
                    // newDevices is the optional out-parameter passed by the caller;
                    // when non-null, newly attached devices are collected into it
                    if (newDevices) newDevices->add(device);
                    setEngineDeviceConnectionState(device, AUDIO_POLICY_DEVICE_STATE_AVAILABLE);
                }
            }

            if (mPrimaryOutput == 0 && outProfile->getFlags() & AUDIO_OUTPUT_FLAG_PRIMARY) {
                mPrimaryOutput = outputDesc;
            }
            // For direct outputs (played straight to the device), close the stream again
            if ((outProfile->getFlags() & AUDIO_OUTPUT_FLAG_DIRECT) != 0) {
                outputDesc->close();
            } else {
                addOutput(output, outputDesc);
                // Set the output device
                setOutputDevices(outputDesc, DeviceVector(supportedDevice), true, 0, NULL);
            }
        }

        // Open all input streams; the flow mirrors the output case
        for (const auto& inProfile : hwModule->getInputProfiles()) {
            // (the filtering that produces availProfileDevices, analogous to the output loop, is elided here)

            sp<AudioInputDescriptor> inputDesc = new AudioInputDescriptor(inProfile, mpClientInterface);

            audio_io_handle_t input = AUDIO_IO_HANDLE_NONE;
            // Same idea as for output
            status_t status = inputDesc->open(nullptr,
                                              availProfileDevices.itemAt(0),
                                              AUDIO_SOURCE_MIC,
                                              AUDIO_INPUT_FLAG_NONE,
                                              &input);

            for (const auto &device : availProfileDevices) {
                // give a valid ID to an attached device once confirmed it is reachable
                if (!device->isAttached()) {
                    device->attach(hwModule);
                    device->importAudioPortAndPickAudioProfile(inProfile, true);
                    mAvailableInputDevices.add(device);
                    if (newDevices) newDevices->add(device);
                    setEngineDeviceConnectionState(device, AUDIO_POLICY_DEVICE_STATE_AVAILABLE);
                }
            }
            inputDesc->close();
        }
    }
}


Entering AudioFlinger

Next, let's step into the loadHwModule method.

  1. AudioSystem is used to get hold of AudioFlinger, and then AudioFlinger's loadHwModule is called -- this is where the link to AudioFlinger starts
  2. Tracing outputDesc->open shows that it, too, ultimately ends in an AudioFlinger method

The loadHwModule flow:

audio_module_handle_t AudioPolicyService::AudioPolicyClient::loadHwModule(const char *name)
{
    // Obtain AudioFlinger's proxy object via AudioSystem, then call the AudioFlinger method
    sp<IAudioFlinger> af = AudioSystem::get_audio_flinger();
    return af->loadHwModule(name);
}

The outputDesc->open flow:
outputDesc is a SwAudioOutputDescriptor, and its mClientInterface is the AudioPolicyClient implemented in AudioPolicyClientImpl.cpp.

status_t SwAudioOutputDescriptor::open(const audio_config_t *config,
                                       const DeviceVector &devices,
                                       audio_stream_type_t stream,
                                       audio_output_flags_t flags,
                                       audio_io_handle_t *output)
{
    // ... (construction of lConfig and device from the arguments is elided)

    // Eventually goes back into AudioFlinger to open the output
    status_t status = mClientInterface->openOutput(mProfile->getModuleHandle(),
                                                   output,
                                                   &lConfig,
                                                   device,
                                                   &mLatency,
                                                   mFlags);
    // ... (rest of the function elided)
}

This article is reposted from https://juejin.cn/post/7103781381670453262; if there is any infringement, please contact the author for removal.
