Android audio summary
The Android audio system in detail
Recommended references:
Android Audio System: From AudioTrack to AudioFlinger
https://blog.csdn.net/zyuanyun/article/details/60890534
Introduction to the Android Audio Framework
http://blog.csdn.net/yangwen123/article/details/39502689
4.1 Analysis approach
a. How are the playback threads created?
AudioPolicyService makes the policy,
AudioFlinger carries it out,
so: AudioPolicyService, driven by the configuration file, directs AudioFlinger to create the threads.
b. A thread corresponds to an output; which device nodes does an output correspond to?
c. How AudioTrack and Track are created: which thread, and which output, does an AudioTrack correspond to?
d. How does an AudioTrack transfer data to its thread?
How does an AudioTrack play, pause and stop?
The hardware is driven through a hardware module:
What is the hw module's name? Which .so files implement it?
Which outputs does a module support?
Which devices does an output support, and with what parameters?
All of this is declared in /system/etc/audio_policy.conf.
4.2 Key concepts illustrated by example
stream type, strategy, device, output, profile, module : policy
out flag: for example, a professional app that plays sound only through HDMI can set the out flag AUDIO_OUTPUT_FLAG_DIRECT, which routes the audio straight to the corresponding device without mixing.
Android uses hardware modules to access hardware such as a sound card.
A sound card carries speakers, headsets and so on, which are called devices.
For easier management, a group of devices on one piece of hardware that share the same parameters is called an output;
which outputs a module supports, and which devices an output supports, is described by the configuration file /system/etc/audio_policy.conf.
When an app wants to play sound it declares a sound type: the stream type.
With so many stream types, the first step is to determine which class (strategy) the stream belongs to.
The strategy determines which device plays the sound: speaker, headset or Bluetooth?
The device determines the output, and hence the corresponding playbackthread,
and the audio data is handed to that thread.
How a stream ultimately ends up on a device,
how streams affect each other (a high-priority sound mutes the others),
and so on, is collectively called the policy.
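To make the chain concrete, here is a minimal sketch of how a stream type is resolved to an output. The types and functions are hypothetical stand-ins that only mirror the shape of the real AudioPolicyManager logic analyzed in 4.7 and 4.8:

#include <cstdio>

// Hypothetical miniature of the policy chain: stream type -> strategy
// -> device -> output. The real logic lives in AudioPolicyManager.
enum StreamType { STREAM_MUSIC, STREAM_RING };
enum Strategy   { STRATEGY_MEDIA, STRATEGY_SONIFICATION };
enum Device     { DEVICE_SPEAKER, DEVICE_WIRED_HEADSET };

Strategy getStrategy(StreamType t) {
    return (t == STREAM_RING) ? STRATEGY_SONIFICATION : STRATEGY_MEDIA;
}

Device getDeviceForStrategy(Strategy s, bool headsetPlugged) {
    // media prefers the headset when present; ringtones stay on the speaker
    if (s == STRATEGY_MEDIA && headsetPlugged) return DEVICE_WIRED_HEADSET;
    return DEVICE_SPEAKER;
}

int getOutputForDevice(Device d) {
    return (d == DEVICE_WIRED_HEADSET) ? 1 : 0; // handle of an opened output
}

int main() {
    Device d = getDeviceForStrategy(getStrategy(STREAM_MUSIC), true);
    printf("music routed to output %d\n", getOutputForDevice(d));
    return 0;
}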
Output and input devices:
https://blog.csdn.net/zzqhost/article/details/7711935
Concepts:
module: the hardware access library, used to drive devices
output: a group of devices from the same hardware sharing the same parameters
device: speaker, headset, ...
all declared via /system/etc/audio_policy.conf
profile: a configuration that describes an output
a. which devices it could support
b. parameters: sample rate, channels
output:
a. the devices it can actually support right now
b. parameters
profile: can support speaker and headset
output: supports the headset only once it is plugged in
APP: playing music
Q: with so many possible paths, what does the app do?
A: the app does not care; it only declares the sound type (stream type)
The app specifies a stream type; there are too many of them, so they are grouped into strategies (streams with the same behavior:
same playback device, same playback priority).
4.3 Files involved, at a glance
System server entry point:
frameworks/av/media/mediaserver/main_mediaserver.cpp
AudioFlinger :
AudioFlinger.cpp (frameworks/av/services/audioflinger/AudioFlinger.cpp)
Threads.cpp (frameworks/av/services/audioflinger/Threads.cpp)
Tracks.cpp (frameworks/av/services/audioflinger/Tracks.cpp)
audio_hw_hal.cpp (hardware/libhardware_legacy/audio/audio_hw_hal.cpp)
AudioHardware.cpp (device/friendly-arm/common/libaudio/AudioHardware.cpp)
AudioPolicyService:
AudioPolicyService.cpp (frameworks/av/services/audiopolicy/AudioPolicyService.cpp)
AudioPolicyClientImpl.cpp (frameworks/av/services/audiopolicy/AudioPolicyClientImpl.cpp)
AudioPolicyInterfaceImpl.cpp(frameworks/av/services/audiopolicy/AudioPolicyInterfaceImpl.cpp)
AudioPolicyManager.cpp (device/friendly-arm/common/libaudio/AudioPolicyManager.cpp)
AudioPolicyManager.h (device/friendly-arm/common/libaudio/AudioPolicyManager.h)
AudioPolicyManagerBase.cpp (hardware/libhardware_legacy/audio/AudioPolicyManagerBase.cpp)
Erratum: the three files above are replaced by the following file
AudioPolicyManager.cpp (frameworks/av/services/audiopolicy/AudioPolicyManager.cpp)
Files used on the application side:
AudioTrack.java (frameworks/base/media/java/android/media/AudioTrack.java)
android_media_AudioTrack.cpp (frameworks/base/core/jni/android_media_AudioTrack.cpp)
AudioTrack.cpp (frameworks/av/media/libmedia/AudioTrack.cpp)
AudioSystem.cpp (frameworks/av/media/libmedia/AudioSystem.cpp)
Audio framework diagram:
4.4 AudioPolicyService startup analysis
a. Load and parse /vendor/etc/audio_policy.conf or /system/etc/audio_policy.conf
For each module entry in the config file: new HwModule(name), stored in the mHwModules array
For each output in the module: new IOProfile, stored in the module's mOutputProfiles
For each input in the module: new IOProfile, stored in the module's mInputProfiles
b. Load the vendor-provided .so file named after the module (done through AudioFlinger)
c. Open the corresponding outputs (AudioFlinger performs the open)
Q: Which sound card is the default? How does the Android system learn about the sound card, its headphone jack and its speaker?
A: The vendor decides, declaring it in a configuration file.
AudioPolicyService:
a. reads and parses the configuration file
b. based on the configuration, calls AudioFlinger to open the outputs and create the threads
Summary: for each module in audio_policy.conf,
loadHwModule is used to process it:
a. new HwModule (name, e.g. "primary")
b. mOutputProfiles: one entry per output profile
c. mInputProfiles: one entry per input profile (a sketch of these structures follows)
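As a rough picture of what the parser builds, here is a simplified sketch of those structures. Member names follow AudioPolicyManager.h; the types are stand-ins (the real code uses android::Vector, String8, audio_devices_t and so on):

#include <cstdint>
#include <string>
#include <vector>

using audio_devices_t = uint32_t;

struct IOProfile {
    audio_devices_t       mSupportedDevices = 0; // devices this profile can drive
    uint32_t              mFlags = 0;            // e.g. AUDIO_OUTPUT_FLAG_PRIMARY
    std::vector<uint32_t> mSamplingRates;        // parsed from "sampling_rates"
};

struct HwModule {
    std::string             mName;               // "primary" -> audio.primary.XXX.so
    uint32_t                mHandle = 0;         // filled in later by loadHwModule()
    std::vector<IOProfile*> mOutputProfiles;     // one entry per output in the conf
    std::vector<IOProfile*> mInputProfiles;      // one entry per input in the conf
};

std::vector<HwModule*> mHwModules;               // one entry per module in the conf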
AudioPolicyManagerBase.cpp (z:\android-5.0.2\hardware\libhardware_legacy\audio)
// ----------------------------------------------------------------------------
// AudioPolicyManagerBase
// ----------------------------------------------------------------------------
AudioPolicyManagerBase::AudioPolicyManagerBase(AudioPolicyClientInterface *clientInterface)
:
#ifdef AUDIO_POLICY_TEST
Thread(false),
#endif //AUDIO_POLICY_TEST
mPrimaryOutput((audio_io_handle_t)0),
mAvailableOutputDevices(AUDIO_DEVICE_NONE),
mPhoneState(AudioSystem::MODE_NORMAL),
mLimitRingtoneVolume(false), mLastVoiceVolume(-1.0f),
mTotalEffectsCpuLoad(0), mTotalEffectsMemory(0),
mA2dpSuspended(false), mHasA2dp(false), mHasUsb(false), mHasRemoteSubmix(false),
mSpeakerDrcEnabled(false)
{
mpClientInterface = clientInterface;
for (int i = 0; i < AudioSystem::NUM_FORCE_USE; i++) {
mForceUse[i] = AudioSystem::FORCE_NONE;
}
mA2dpDeviceAddress = String8("");
mScoDeviceAddress = String8("");
mUsbOutCardAndDevice = String8("");
if (loadAudioPolicyConfig(AUDIO_POLICY_VENDOR_CONFIG_FILE) != NO_ERROR) {
if (loadAudioPolicyConfig(AUDIO_POLICY_CONFIG_FILE) != NO_ERROR) {
ALOGE("could not load audio policy configuration file, setting defaults");
defaultAudioPolicyConfig();
}
}
// must be done after reading the policy
initializeVolumeCurves();
// open all output streams needed to access attached devices
for (size_t i = 0; i < mHwModules.size(); i++) {
// ends up in frameworks/av/services/audioflinger/AudioFlinger.cpp: loadHwModule
mHwModules[i]->mHandle = mpClientInterface->loadHwModule(mHwModules[i]->mName);
if (mHwModules[i]->mHandle == 0) {
ALOGW("could not open HW module %s", mHwModules[i]->mName);
continue;
}
// open all output streams needed to access attached devices
// except for direct output streams that are only opened when they are actually
// required by an app.
for (size_t j = 0; j < mHwModules[i]->mOutputProfiles.size(); j++)
{
const IOProfile *outProfile = mHwModules[i]->mOutputProfiles[j];
if ((outProfile->mSupportedDevices & mAttachedOutputDevices) &&
((outProfile->mFlags & AUDIO_OUTPUT_FLAG_DIRECT) == 0)) {
AudioOutputDescriptor *outputDesc = new AudioOutputDescriptor(outProfile);
outputDesc->mDevice = (audio_devices_t)(mDefaultOutputDevice &
outProfile->mSupportedDevices);
audio_io_handle_t output = mpClientInterface->openOutput(
outProfile->mModule->mHandle,
&outputDesc->mDevice,
&outputDesc->mSamplingRate,
&outputDesc->mFormat,
&outputDesc->mChannelMask,
&outputDesc->mLatency,
outputDesc->mFlags);
if (output == 0) {
delete outputDesc;
} else {
mAvailableOutputDevices = (audio_devices_t)(mAvailableOutputDevices |
(outProfile->mSupportedDevices & mAttachedOutputDevices));
if (mPrimaryOutput == 0 &&
outProfile->mFlags & AUDIO_OUTPUT_FLAG_PRIMARY) {
mPrimaryOutput = output;
}
addOutput(output, outputDesc);
setOutputDevice(output,
(audio_devices_t)(mDefaultOutputDevice &
outProfile->mSupportedDevices),
true);
}
}
}
}
ALOGE_IF((mAttachedOutputDevices & ~mAvailableOutputDevices),
"Not output found for attached devices %08x",
(mAttachedOutputDevices & ~mAvailableOutputDevices));
ALOGE_IF((mPrimaryOutput == 0), "Failed to open primary output");
updateDevicesAndOutputs();
#ifdef AUDIO_POLICY_TEST
if (mPrimaryOutput != 0) {
AudioParameter outputCmd = AudioParameter();
outputCmd.addInt(String8("set_id"), 0);
mpClientInterface->setParameters(mPrimaryOutput, outputCmd.toString());
mTestDevice = AUDIO_DEVICE_OUT_SPEAKER;
mTestSamplingRate = 44100;
mTestFormat = AudioSystem::PCM_16_BIT;
mTestChannels = AudioSystem::CHANNEL_OUT_STEREO;
mTestLatencyMs = 0;
mCurOutput = 0;
mDirectOutput = false;
for (int i = 0; i < NUM_TEST_OUTPUTS; i++) {
mTestOutputs[i] = 0;
}
const size_t SIZE = 256;
char buffer[SIZE];
snprintf(buffer, SIZE, "AudioPolicyManagerTest");
run(buffer, ANDROID_PRIORITY_AUDIO);
}
#endif //AUDIO_POLICY_TEST
}
audio_policy_conf.h (z:\android-5.0.2\hardware\libhardware_legacy\include\hardware_legacy)
#define AUDIO_POLICY_CONFIG_FILE "/system/etc/audio_policy.conf"
#define AUDIO_POLICY_VENDOR_CONFIG_FILE "/vendor/etc/audio_policy.conf"
Looking at that file:
/vendor/etc/audio_policy.conf
#
# Audio policy configuration for generic device builds (goldfish audio HAL - emulator)
#
# Global configuration section: lists input and output devices always present on the device
# as well as the output device selected by default.
# Devices are designated by a string that corresponds to the enum in audio.h
global_configuration {
attached_output_devices AUDIO_DEVICE_OUT_SPEAKER
default_output_device AUDIO_DEVICE_OUT_SPEAKER
attached_input_devices AUDIO_DEVICE_IN_BUILTIN_MIC
}
# audio hardware module section: contains descriptors for all audio hw modules present on the
# device. Each hw module node is named after the corresponding hw module library base name.
# For instance, "primary" corresponds to audio.primary..so.
# The "primary" module is mandatory and must include at least one output with
# AUDIO_OUTPUT_FLAG_PRIMARY flag.
# Each module descriptor contains one or more output profile descriptors and zero or more
# input profile descriptors. Each profile lists all the parameters supported by a given output
# or input stream category.
# The "channel_masks", "formats", "devices" and "flags" are specified using strings corresponding
# to enums in audio.h and audio_policy.h. They are concatenated by use of "|" without space or "\n".
audio_hw_modules {
primary { // one module corresponds to one vendor-provided .so file
outputs { // a module can have several outputs
primary { // each output declares its parameters
sampling_rates 44100
channel_masks AUDIO_CHANNEL_OUT_STEREO
formats AUDIO_FORMAT_PCM_16_BIT
devices AUDIO_DEVICE_OUT_SPEAKER|AUDIO_DEVICE_OUT_EARPIECE|AUDIO_DEVICE_OUT_WIRED_HEADSET|AUDIO_DEVICE_OUT_WIRED_HEADPHONE|AUDIO_DEVICE_OUT_ALL_SCO|AUDIO_DEVICE_OUT_AUX_DIGITAL
flags AUDIO_OUTPUT_FLAG_PRIMARY // the default (primary) output
}
}
inputs { // a module can have several inputs
primary {
sampling_rates 8000|11025|12000|16000|22050|24000|32000|44100|48000
channel_masks AUDIO_CHANNEL_IN_MONO|AUDIO_CHANNEL_IN_STEREO
formats AUDIO_FORMAT_PCM_16_BIT
devices AUDIO_DEVICE_IN_BUILTIN_MIC|AUDIO_DEVICE_IN_WIRED_HEADSET|AUDIO_DEVICE_IN_BLUETOOTH_SCO_HEADSET|AUDIO_DEVICE_IN_AUX_DIGITAL|AUDIO_DEVICE_IN_VOICE_CALL
}
}
}
}
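As a hypothetical illustration of the out flag discussed in 4.2: an HDMI-only direct output could be declared as one more profile inside the outputs section. This block is an assumed example, not part of the stock emulator file:

hdmi {
sampling_rates 44100|48000
channel_masks AUDIO_CHANNEL_OUT_STEREO
formats AUDIO_FORMAT_PCM_16_BIT
devices AUDIO_DEVICE_OUT_AUX_DIGITAL
flags AUDIO_OUTPUT_FLAG_DIRECT
}

An AudioTrack created with AUDIO_OUTPUT_FLAG_DIRECT would then be routed to this output without passing through the mixer.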
AudioPolicyManagerBase.cpp (z:\android-5.0.2\hardware\libhardware_legacy\audio)
status_t AudioPolicyManagerBase::loadAudioPolicyConfig(const char *path)
{
cnode *root;
char *data;
data = (char *)load_file(path, NULL);
if (data == NULL) {
return -ENODEV;
}
root = config_node("", "");
config_load(root, data);
loadGlobalConfig(root);
loadHwModules(root);
config_free(root);
free(root);
free(data);
ALOGI("loadAudioPolicyConfig() loaded %s\n", path);
return NO_ERROR;
}
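loadGlobalConfig() and loadHwModules() walk the cnode tree that config_load() builds. A minimal sketch of that pattern, using the real cutils/config_utils.h API but with a loop body that only prints instead of constructing HwModule objects:

#include <cstdio>
#include <cutils/config_utils.h>

// Walk the parsed tree and list the hw module names, the way
// loadHwModules() does before building HwModule/IOProfile objects.
void listHwModules(cnode *root) {
    cnode *node = config_find(root, "audio_hw_modules");
    if (node == NULL)
        return;
    for (cnode *module = node->first_child; module != NULL; module = module->next) {
        // module->name is e.g. "primary"; its children are the
        // outputs { } and inputs { } sections shown above
        printf("hw module: %s\n", module->name);
    }
}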
4.5 AudioFlinger startup analysis
Notes:
a. Register the AudioFlinger service
b. Called by AudioPolicyService to load the vendor-provided .so file
b.1 Which .so file is loaded? What is its name, and where does the name come from?
The name comes from /system/etc/audio_policy.conf : primary
So the .so file is audio.primary.XXX.so, e.g. audio.primary.tiny4412.so
b.2 Which source files make up that .so? Check the Android.mk:
audio.primary.$(TARGET_DEVICE) : device/friendly-arm/common/libaudio/AudioHardware.cpp
libhardware_legacy
libhardware_legacy : hardware/libhardware_legacy/audio/audio_hw_hal.cpp
/work/android-5.0.2/device/friendly-arm/common/libaudio
LOCAL_PATH:= $(call my-dir)
include $(CLEAR_VARS)
LOCAL_SRC_FILES:= \
AudioHardware.cpp
LOCAL_MODULE := audio.primary.$(TARGET_DEVICE)
LOCAL_MODULE_PATH := $(TARGET_OUT_SHARED_LIBRARIES)/hw
LOCAL_STATIC_LIBRARIES:= libmedia_helper
LOCAL_SHARED_LIBRARIES:= \
libutils \
liblog \
libhardware_legacy \
libtinyalsa \
libaudioutils
LOCAL_WHOLE_STATIC_LIBRARIES := libaudiohw_legacy
LOCAL_MODULE_TAGS := optional
LOCAL_SHARED_LIBRARIES += libdl
LOCAL_C_INCLUDES += \
external/tinyalsa/include \
system/media/audio_effects/include \
system/media/audio_utils/include \
device/friendly-arm/$(TARGET_DEVICE)/conf
ifeq ($(strip $(BOARD_USES_I2S_AUDIO)),true)
LOCAL_CFLAGS += -DUSES_I2S_AUDIO
endif
ifeq ($(strip $(BOARD_USES_PCM_AUDIO)),true)
LOCAL_CFLAGS += -DUSES_PCM_AUDIO
endif
ifeq ($(strip $(BOARD_USES_SPDIF_AUDIO)),true)
LOCAL_CFLAGS += -DUSES_SPDIF_AUDIO
endif
ifeq ($(strip $(USE_ULP_AUDIO)),true)
LOCAL_CFLAGS += -DUSE_ULP_AUDIO
endif
include $(BUILD_SHARED_LIBRARY)
include $(CLEAR_VARS)
LOCAL_SRC_FILES := AudioPolicyManager.cpp
LOCAL_SHARED_LIBRARIES := libcutils libutils
LOCAL_STATIC_LIBRARIES := libmedia_helper
LOCAL_WHOLE_STATIC_LIBRARIES := libaudiopolicy_legacy
LOCAL_MODULE := audio_policy.$(TARGET_DEVICE)
LOCAL_MODULE_PATH := $(TARGET_OUT_SHARED_LIBRARIES)/hw
LOCAL_MODULE_TAGS := optional
ifeq ($(BOARD_HAVE_BLUETOOTH),true)
LOCAL_CFLAGS += -DWITH_A2DP
endif
include $(BUILD_SHARED_LIBRARY)
/work/android-5.0.2/hardware/libhardware_legacy/audio
# Copyright 2011 The Android Open Source Project
#AUDIO_POLICY_TEST := true
#ENABLE_AUDIO_DUMP := true
LOCAL_PATH := $(call my-dir)
include $(CLEAR_VARS)
LOCAL_SRC_FILES := \
AudioHardwareInterface.cpp \
audio_hw_hal.cpp
LOCAL_MODULE := libaudiohw_legacy
LOCAL_MODULE_TAGS := optional
LOCAL_STATIC_LIBRARIES := libmedia_helper
LOCAL_CFLAGS := -Wno-unused-parameter
include $(BUILD_STATIC_LIBRARY)
include $(CLEAR_VARS)
LOCAL_SRC_FILES := \
AudioPolicyManagerBase.cpp \
AudioPolicyCompatClient.cpp \
audio_policy_hal.cpp
ifeq ($(AUDIO_POLICY_TEST),true)
LOCAL_CFLAGS += -DAUDIO_POLICY_TEST
endif
LOCAL_STATIC_LIBRARIES := libmedia_helper
LOCAL_MODULE := libaudiopolicy_legacy
LOCAL_MODULE_TAGS := optional
LOCAL_CFLAGS += -Wno-unused-parameter
include $(BUILD_STATIC_LIBRARY)
# The default audio policy, for now still implemented on top of legacy
# policy code
include $(CLEAR_VARS)
LOCAL_SRC_FILES := \
AudioPolicyManagerDefault.cpp
LOCAL_SHARED_LIBRARIES := \
libcutils \
libutils \
liblog
LOCAL_STATIC_LIBRARIES := \
libmedia_helper
LOCAL_WHOLE_STATIC_LIBRARIES := \
libaudiopolicy_legacy
LOCAL_MODULE := audio_policy.default
LOCAL_MODULE_RELATIVE_PATH := hw
LOCAL_MODULE_TAGS := optional
LOCAL_CFLAGS := -Wno-unused-parameter
include $(BUILD_SHARED_LIBRARY)
#ifeq ($(ENABLE_AUDIO_DUMP),true)
# LOCAL_SRC_FILES += AudioDumpInterface.cpp
# LOCAL_CFLAGS += -DENABLE_AUDIO_DUMP
#endif
#
#ifeq ($(strip $(BOARD_USES_GENERIC_AUDIO)),true)
# LOCAL_CFLAGS += -D GENERIC_AUDIO
#endif
#ifeq ($(BOARD_HAVE_BLUETOOTH),true)
# LOCAL_SRC_FILES += A2dpAudioInterface.cpp
# LOCAL_SHARED_LIBRARIES += liba2dp
# LOCAL_C_INCLUDES += $(call include-path-for, bluez)
#
# LOCAL_CFLAGS += \
# -DWITH_BLUETOOTH \
#endif
#
#include $(BUILD_SHARED_LIBRARY)
# AudioHardwareGeneric.cpp \
# AudioHardwareStub.cpp \
b.3 The hardware wrappers:
AudioFlinger : AudioHwDevice (stored in the mAudioHwDevs array)
audio_hw_hal.cpp : audio_hw_device
vendor : AudioHardware (derived from AudioHardwareInterface)
AudioHwDevice wraps audio_hw_device;
the audio_hw_device functions are implemented through the AudioHardware object.
audio_module_handle_t AudioFlinger::loadHwModule(const char *name)
{
if (name == NULL) {
return 0;
}
if (!settingsAllowed()) {
return 0;
}
Mutex::Autolock _l(mLock);
return loadHwModule_l(name);
}
// loadHwModule_l() must be called with AudioFlinger::mLock held
audio_module_handle_t AudioFlinger::loadHwModule_l(const char *name)
{
for (size_t i = 0; i < mAudioHwDevs.size(); i++) {
if (strncmp(mAudioHwDevs.valueAt(i)->moduleName(), name, strlen(name)) == 0) {
ALOGW("loadHwModule() module %s already loaded", name);
return mAudioHwDevs.keyAt(i);
}
}
audio_hw_device_t *dev;
int rc = load_audio_interface(name, &dev);
if (rc) {
ALOGI("loadHwModule() error %d loading module %s ", rc, name);
return 0;
}
......
return handle;
}
static int load_audio_interface(const char *if_name, audio_hw_device_t **dev)
{
const hw_module_t *mod;
int rc;
//if_name : primary
//AUDIO_HARDWARE_MODULE_ID : "audio"
// audio.primary.XXX.so
rc = hw_get_module_by_class(AUDIO_HARDWARE_MODULE_ID, if_name, &mod);
ALOGE_IF(rc, "%s couldn't load audio hw module %s.%s (%s)", __func__,
AUDIO_HARDWARE_MODULE_ID, if_name, strerror(-rc));
if (rc) {
goto out;
}
rc = audio_hw_device_open(mod, dev);
ALOGE_IF(rc, "%s couldn't open audio hw device in %s.%s (%s)", __func__,
AUDIO_HARDWARE_MODULE_ID, if_name, strerror(-rc));
if (rc) {
goto out;
}
if ((*dev)->common.version < AUDIO_DEVICE_API_VERSION_MIN) {
ALOGE("%s wrong audio hw device version %04x", __func__, (*dev)->common.version);
rc = BAD_VALUE;
goto out;
}
return 0;
out:
*dev = NULL;
return rc;
}
hardware.c F:\android_project\android_system_code\hardware\libhardware
127|shell@tiny4412:/ $ getprop "ro.hardware" // query the property value
tiny4412
static const char *variant_keys[] = {
"ro.hardware", /* This goes first so that it can pick up a different
file on the emulator. */
"ro.product.board",
"ro.board.platform",
"ro.arch"
};
int hw_get_module_by_class(const char *class_id, const char *inst,
const struct hw_module_t **module)
{
int i;
char prop[PATH_MAX];
char path[PATH_MAX];
char name[PATH_MAX];
char prop_name[PATH_MAX];
if (inst)
snprintf(name, PATH_MAX, "%s.%s", class_id, inst);
else
strlcpy(name, class_id, PATH_MAX);
/*
* Here we rely on the fact that calling dlopen multiple times on
* the same .so will simply increment a refcount (and not load
* a new copy of the library).
* We also assume that dlopen() is thread-safe.
*/
/* First try a property specific to the class and possibly instance */
snprintf(prop_name, sizeof(prop_name), "ro.hardware.%s", name);
if (property_get(prop_name, prop, NULL) > 0) {
if (hw_module_exists(path, sizeof(path), name, prop) == 0) {
goto found;
}
}
/* Loop through the configuration variants looking for a module */
for (i=0 ; i<HAL_VARIANT_KEYS_COUNT; i++) {
if (property_get(variant_keys[i], prop, NULL) == 0) {
continue;
}
if (hw_module_exists(path, sizeof(path), name, prop) == 0) {
goto found;
}
}
/* Nothing found, try the default */
if (hw_module_exists(path, sizeof(path), name, "default") == 0) {
goto found;
}
return -ENOENT;
found:
/* load the module, if this fails, we're doomed, and we should not try
* to load a different variant. */
return load(class_id, path, module);
}
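For class_id "audio", inst "primary" and ro.hardware=tiny4412, hw_module_exists() (not shown above) probes the vendor path and then the system path. A small sketch of the name construction, with the search directories matching HAL_LIBRARY_PATH2/PATH1 in the 32-bit Android 5.0 hardware.c:

#include <cstdio>

int main() {
    const char *name = "audio.primary"; // class_id "." inst
    const char *prop = "tiny4412";      // value of ro.hardware
    char path[256];
    snprintf(path, sizeof(path), "/vendor/lib/hw/%s.%s.so", name, prop);
    printf("try  %s\n", path);          // checked first
    snprintf(path, sizeof(path), "/system/lib/hw/%s.%s.so", name, prop);
    printf("then %s\n", path);          // then the system copy
    // finally audio.primary.default.so if no variant-specific file exists
    return 0;
}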
hardware\libhardware_legacy\audio\audio_hw_hal.cpp
static int legacy_adev_open(const hw_module_t* module, const char* name,
hw_device_t** device)
{
struct legacy_audio_device *ladev;
int ret;
if (strcmp(name, AUDIO_HARDWARE_INTERFACE) != 0)
return -EINVAL;
ladev = (struct legacy_audio_device *)calloc(1, sizeof(*ladev));
if (!ladev)
return -ENOMEM;
ladev->device.common.tag = HARDWARE_DEVICE_TAG;
ladev->device.common.version = AUDIO_DEVICE_API_VERSION_2_0;
ladev->device.common.module = const_cast<hw_module_t*>(module);
ladev->device.common.close = legacy_adev_close;
ladev->device.init_check = adev_init_check;
ladev->device.set_voice_volume = adev_set_voice_volume;
ladev->device.set_master_volume = adev_set_master_volume;
ladev->device.get_master_volume = adev_get_master_volume;
ladev->device.set_mode = adev_set_mode;
ladev->device.set_mic_mute = adev_set_mic_mute;
ladev->device.get_mic_mute = adev_get_mic_mute;
ladev->device.set_parameters = adev_set_parameters;
ladev->device.get_parameters = adev_get_parameters;
ladev->device.get_input_buffer_size = adev_get_input_buffer_size;
ladev->device.open_output_stream = adev_open_output_stream;
ladev->device.close_output_stream = adev_close_output_stream;
ladev->device.open_input_stream = adev_open_input_stream;
ladev->device.close_input_stream = adev_close_input_stream;
ladev->device.dump = adev_dump;
ladev->hwif = createAudioHardware();
if (!ladev->hwif) {
ret = -EIO;
goto err_create_audio_hw;
}
*device = &ladev->device.common;
return 0;
err_create_audio_hw:
free(ladev);
return ret;
}
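The legacy_audio_device filled in above is the glue between the C HAL interface and the vendor's C++ class; its definition in audio_hw_hal.cpp is essentially:

struct legacy_audio_device {
    struct audio_hw_device device; // the C HAL interface populated above
    AudioHardwareInterface *hwif;  // the vendor C++ object from createAudioHardware()
};

Each adev_* callback casts the audio_hw_device back to a legacy_audio_device and forwards to ladev->hwif.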
c. Called by AudioPolicyService to open outputs and create the playback threads
hardware\libhardware_legacy\audio\AudioPolicyManagerBase.cpp
AudioPolicyManagerBase::AudioPolicyManagerBase(AudioPolicyClientInterface *clientInterface)
{
......
// open all output streams needed to access attached devices
// except for direct output streams that are only opened when they are actually
// required by an app.
for (size_t j = 0; j < mHwModules[i]->mOutputProfiles.size(); j++)
{
const IOProfile *outProfile = mHwModules[i]->mOutputProfiles[j];
if ((outProfile->mSupportedDevices & mAttachedOutputDevices) &&
((outProfile->mFlags & AUDIO_OUTPUT_FLAG_DIRECT) == 0)) {
AudioOutputDescriptor *outputDesc = new AudioOutputDescriptor(outProfile);
outputDesc->mDevice = (audio_devices_t)(mDefaultOutputDevice &
outProfile->mSupportedDevices);
audio_io_handle_t output = mpClientInterface->openOutput(
outProfile->mModule->mHandle,
&outputDesc->mDevice,
&outputDesc->mSamplingRate,
&outputDesc->mFormat,
&outputDesc->mChannelMask,
&outputDesc->mLatency,
outputDesc->mFlags);
if (output == 0) {
delete outputDesc;
} else {
mAvailableOutputDevices = (audio_devices_t)(mAvailableOutputDevices |
(outProfile->mSupportedDevices & mAttachedOutputDevices));
if (mPrimaryOutput == 0 &&
outProfile->mFlags & AUDIO_OUTPUT_FLAG_PRIMARY) {
mPrimaryOutput = output;
}
addOutput(output, outputDesc);
setOutputDevice(output,
(audio_devices_t)(mDefaultOutputDevice &
outProfile->mSupportedDevices),
true);
}
}
}
}
......
}
frameworks\av\services\audioflinger\AudioFlinger.cpp
status_t AudioFlinger::openOutput(audio_module_handle_t module,
audio_io_handle_t *output,
audio_config_t *config,
audio_devices_t *devices,
const String8& address,
uint32_t *latencyMs,
audio_output_flags_t flags)
{
ALOGV("openOutput(), module %d Device %x, SamplingRate %d, Format %#08x, Channels %x, flags %x",
module,
(devices != NULL) ? *devices : 0,
config->sample_rate,
config->format,
config->channel_mask,
flags);
if (*devices == AUDIO_DEVICE_NONE) {
return BAD_VALUE;
}
Mutex::Autolock _l(mLock);
// create a playback thread
sp<PlaybackThread> thread = openOutput_l(module, output, config, *devices, address, flags);
if (thread != 0) {
*latencyMs = thread->latency();
// notify client processes of the new output creation
thread->audioConfigChanged(AudioSystem::OUTPUT_OPENED);
// the first primary output opened designates the primary hw device
if ((mPrimaryHardwareDev == NULL) && (flags & AUDIO_OUTPUT_FLAG_PRIMARY)) {
ALOGI("Using module %d has the primary audio interface", module);
mPrimaryHardwareDev = thread->getOutput()->audioHwDev;
AutoMutex lock(mHardwareLock);
mHardwareStatus = AUDIO_HW_SET_MODE;
mPrimaryHardwareDev->hwDevice()->set_mode(mPrimaryHardwareDev->hwDevice(), mMode);
mHardwareStatus = AUDIO_HW_IDLE;
mPrimaryOutputSampleRate = config->sample_rate;
}
return NO_ERROR;
}
return NO_INIT;
}
// ----------------------------------------------------------------------------
sp<AudioFlinger::PlaybackThread> AudioFlinger::openOutput_l(audio_module_handle_t module,
audio_io_handle_t *output,
audio_config_t *config,
audio_devices_t devices,
const String8& address,
audio_output_flags_t flags)
{
AudioHwDevice *outHwDev = findSuitableHwDev_l(module, devices);
if (outHwDev == NULL) {
return 0;
}
audio_hw_device_t *hwDevHal = outHwDev->hwDevice();
if (*output == AUDIO_IO_HANDLE_NONE) {
*output = nextUniqueId();
}
mHardwareStatus = AUDIO_HW_OUTPUT_OPEN;
audio_stream_out_t *outStream = NULL;
// FOR TESTING ONLY:
// This if statement allows overriding the audio policy settings
// and forcing a specific format or channel mask to the HAL/Sink device for testing.
if (!(flags & (AUDIO_OUTPUT_FLAG_COMPRESS_OFFLOAD | AUDIO_OUTPUT_FLAG_DIRECT))) {
// Check only for Normal Mixing mode
if (kEnableExtendedPrecision) {
// Specify format (uncomment one below to choose)
//config->format = AUDIO_FORMAT_PCM_FLOAT;
//config->format = AUDIO_FORMAT_PCM_24_BIT_PACKED;
//config->format = AUDIO_FORMAT_PCM_32_BIT;
//config->format = AUDIO_FORMAT_PCM_8_24_BIT;
// ALOGV("openOutput_l() upgrading format to %#08x", config->format);
}
if (kEnableExtendedChannels) {
// Specify channel mask (uncomment one below to choose)
//config->channel_mask = audio_channel_out_mask_from_count(4); // for USB 4ch
//config->channel_mask = audio_channel_mask_from_representation_and_bits(
// AUDIO_CHANNEL_REPRESENTATION_INDEX, (1 << 4) - 1); // another 4ch example
}
}
status_t status = hwDevHal->open_output_stream(hwDevHal,
*output,
devices,
flags,
config,
&outStream,
address.string());
mHardwareStatus = AUDIO_HW_IDLE;
ALOGV("openOutput_l() openOutputStream returned output %p, sampleRate %d, Format %#x, "
"channelMask %#x, status %d",
outStream,
config->sample_rate,
config->format,
config->channel_mask,
status);
if (status == NO_ERROR && outStream != NULL) {
AudioStreamOut *outputStream = new AudioStreamOut(outHwDev, outStream, flags);
PlaybackThread *thread;
if (flags & AUDIO_OUTPUT_FLAG_COMPRESS_OFFLOAD) {
thread = new OffloadThread(this, outputStream, *output, devices);
ALOGV("openOutput_l() created offload output: ID %d thread %p", *output, thread);
} else if ((flags & AUDIO_OUTPUT_FLAG_DIRECT)
|| !isValidPcmSinkFormat(config->format)
|| !isValidPcmSinkChannelMask(config->channel_mask)) {
thread = new DirectOutputThread(this, outputStream, *output, devices);
ALOGV("openOutput_l() created direct output: ID %d thread %p", *output, thread);
} else {
// create the thread
thread = new MixerThread(this, outputStream, *output, devices);
ALOGV("openOutput_l() created mixer output: ID %d thread %p", *output, thread);
}
// add the thread to the mPlaybackThreads array
mPlaybackThreads.add(*output, thread);
return thread;
}
return 0;
}
Summary:
Loading the vendor-provided .so file
is done by AudioFlinger:
1> take each HwModule out of mHwModules and open the .so file by its name
2> construct the hardware wrapper objects
AudioFlinger:
3. open the outputs of each module and create the playbackThreads:
done by AudioFlinger, once for every output profile of every module
4. store each outputDesc in AudioPolicyManager:
mOutputs holds the outputs that are already open,
so an integer handle (output) can later be mapped back to its outputDesc (see the snippet below)
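The map itself, as declared in AudioPolicyManager.h, and the lookup it enables:

DefaultKeyedVector<audio_io_handle_t, sp<AudioOutputDescriptor> > mOutputs;

// later, any integer output handle resolves back to its descriptor:
sp<AudioOutputDescriptor> desc = mOutputs.valueFor(output);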
The hardware wrapper stack:
AudioFlinger.cpp : AudioHwDevice ==> represents the devices supported by one module (.so file)
|
audio_hw_hal.cpp : audio_hw_device
|
AudioHardware.cpp : AudioHardware
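A condensed code view of the top of that stack, simplified from frameworks/av/services/audioflinger/AudioHwDevice.h (member names as in the source, everything else trimmed):

class AudioHwDevice {
public:
    audio_hw_device_t *hwDevice() const { return mHwDevice; }
    const char *moduleName() const { return mModuleName; }
private:
    const char        *mModuleName; // e.g. "primary"
    audio_hw_device_t *mHwDevice;   // from audio_hw_device_open(); forwards to the
                                    // vendor AudioHardware via legacy_audio_device
};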
4.6 Overview of AudioTrack creation
a. Try the test programs: frameworks/base/media/tests/audiotests/shared_mem_test.cpp
frameworks/base/media/tests/mediaframeworktest/src/com/android/mediaframeworktest/functional/audio/MediaAudioTrackTest.java
public void testSetStereoVolumeMax() throws Exception {
// constants for test
final String TEST_NAME = "testSetStereoVolumeMax";
final int TEST_SR = 22050;
final int TEST_CONF = AudioFormat.CHANNEL_OUT_STEREO;
final int TEST_FORMAT = AudioFormat.ENCODING_PCM_16BIT;
final int TEST_MODE = AudioTrack.MODE_STREAM;
final int TEST_STREAM_TYPE = AudioManager.STREAM_MUSIC;
//-------- initialization --------------
int minBuffSize = AudioTrack.getMinBufferSize(TEST_SR, TEST_CONF, TEST_FORMAT);
AudioTrack track = new AudioTrack(TEST_STREAM_TYPE, TEST_SR, TEST_CONF, TEST_FORMAT,
minBuffSize, TEST_MODE);
byte data[] = new byte[minBuffSize/2];
//-------- test --------------
track.write(data, 0, data.length);
track.write(data, 0, data.length);
track.play();
float maxVol = AudioTrack.getMaxVolume();
assertTrue(TEST_NAME, track.setStereoVolume(maxVol, maxVol) == AudioTrack.SUCCESS);
//-------- tear down --------------
track.release();
}
frameworks\base\media\java\android\media\AudioTrack.java
/**
* Class constructor with {@link AudioAttributes} and {@link AudioFormat}.
* @param attributes a non-null {@link AudioAttributes} instance.
* @param format a non-null {@link AudioFormat} instance describing the format of the data
* that will be played through this AudioTrack. See {@link AudioFormat.Builder} for
* configuring the audio format parameters such as encoding, channel mask and sample rate.
* @param bufferSizeInBytes the total size (in bytes) of the buffer where audio data is read
* from for playback. If using the AudioTrack in streaming mode, you can write data into
* this buffer in smaller chunks than this size. If using the AudioTrack in static mode,
* this is the maximum size of the sound that will be played for this instance.
* See {@link #getMinBufferSize(int, int, int)} to determine the minimum required buffer size
* for the successful creation of an AudioTrack instance in streaming mode. Using values
* smaller than getMinBufferSize() will result in an initialization failure.
* @param mode streaming or static buffer. See {@link #MODE_STATIC} and {@link #MODE_STREAM}.
* @param sessionId ID of audio session the AudioTrack must be attached to, or
* {@link AudioManager#AUDIO_SESSION_ID_GENERATE} if the session isn't known at construction
* time. See also {@link AudioManager#generateAudioSessionId()} to obtain a session ID before
* construction.
* @throws IllegalArgumentException
*/
public AudioTrack(AudioAttributes attributes, AudioFormat format, int bufferSizeInBytes,
int mode, int sessionId)
throws IllegalArgumentException {
// mState already == STATE_UNINITIALIZED
if (attributes == null) {
throw new IllegalArgumentException("Illegal null AudioAttributes");
}
if (format == null) {
throw new IllegalArgumentException("Illegal null AudioFormat");
}
// remember which looper is associated with the AudioTrack instantiation
Looper looper;
if ((looper = Looper.myLooper()) == null) {
looper = Looper.getMainLooper();
}
int rate = 0;
if ((format.getPropertySetMask() & AudioFormat.AUDIO_FORMAT_HAS_PROPERTY_SAMPLE_RATE) != 0)
{
rate = format.getSampleRate();
} else {
rate = AudioSystem.getPrimaryOutputSamplingRate();
if (rate <= 0) {
rate = 44100;
}
}
int channelMask = AudioFormat.CHANNEL_OUT_FRONT_LEFT | AudioFormat.CHANNEL_OUT_FRONT_RIGHT;
if ((format.getPropertySetMask() & AudioFormat.AUDIO_FORMAT_HAS_PROPERTY_CHANNEL_MASK) != 0)
{
channelMask = format.getChannelMask();
}
int encoding = AudioFormat.ENCODING_DEFAULT;
if ((format.getPropertySetMask() & AudioFormat.AUDIO_FORMAT_HAS_PROPERTY_ENCODING) != 0) {
encoding = format.getEncoding();
}
audioParamCheck(rate, channelMask, encoding, mode);
mStreamType = AudioSystem.STREAM_DEFAULT;
audioBuffSizeCheck(bufferSizeInBytes);
mInitializationLooper = looper;
IBinder b = ServiceManager.getService(Context.APP_OPS_SERVICE);
mAppOps = IAppOpsService.Stub.asInterface(b);
mAttributes = (new AudioAttributes.Builder(attributes).build());
if (sessionId < 0) {
throw new IllegalArgumentException("Invalid audio session ID: "+sessionId);
}
int[] session = new int[1];
session[0] = sessionId;
// native initialization
int initResult = native_setup(new WeakReference<AudioTrack>(this), mAttributes,
mSampleRate, mChannels, mAudioFormat,
mNativeBufferSizeInBytes, mDataLoadMode, session);
if (initResult != SUCCESS) {
loge("Error code "+initResult+" when initializing AudioTrack.");
return; // with mState == STATE_UNINITIALIZED
}
mSessionId = session[0];
if (mDataLoadMode == MODE_STATIC) {
mState = STATE_NO_STATIC_DATA;
} else {
mState = STATE_INITIALIZED;
}
}
frameworks\av\media\libmedia\AudioTrack.cpp
AudioTrack::AudioTrack(
audio_stream_type_t streamType,
uint32_t sampleRate,
audio_format_t format,
audio_channel_mask_t channelMask,
size_t frameCount,
audio_output_flags_t flags,
callback_t cbf,
void* user,
uint32_t notificationFrames,
int sessionId,
transfer_type transferType,
const audio_offload_info_t *offloadInfo,
int uid,
pid_t pid,
const audio_attributes_t* pAttributes)
: mStatus(NO_INIT),
mIsTimed(false),
mPreviousPriority(ANDROID_PRIORITY_NORMAL),
mPreviousSchedulingGroup(SP_DEFAULT),
mPausedPosition(0)
{
mStatus = set(streamType, sampleRate, format, channelMask,
frameCount, flags, cbf, user, notificationFrames,
0 /*sharedBuffer*/, false /*threadCanCallJava*/, sessionId, transferType,
offloadInfo, uid, pid, pAttributes);
}
status_t AudioTrack::set(
audio_stream_type_t streamType,
uint32_t sampleRate,
audio_format_t format,
audio_channel_mask_t channelMask,
size_t frameCount,
audio_output_flags_t flags,
callback_t cbf,
void* user,
uint32_t notificationFrames,
const sp<IMemory>& sharedBuffer,
bool threadCanCallJava,
int sessionId,
transfer_type transferType,
const audio_offload_info_t *offloadInfo,
int uid,
pid_t pid,
const audio_attributes_t* pAttributes)
{
ALOGV("set(): streamType %d, sampleRate %u, format %#x, channelMask %#x, frameCount %zu, "
"flags #%x, notificationFrames %u, sessionId %d, transferType %d",
streamType, sampleRate, format, channelMask, frameCount, flags, notificationFrames,
sessionId, transferType);
switch (transferType) {
case TRANSFER_DEFAULT:
if (sharedBuffer != 0) {
transferType = TRANSFER_SHARED;
} else if (cbf == NULL || threadCanCallJava) {
transferType = TRANSFER_SYNC;
} else {
transferType = TRANSFER_CALLBACK;
}
break;
case TRANSFER_CALLBACK:
if (cbf == NULL || sharedBuffer != 0) {
ALOGE("Transfer type TRANSFER_CALLBACK but cbf == NULL || sharedBuffer != 0");
return BAD_VALUE;
}
break;
case TRANSFER_OBTAIN:
case TRANSFER_SYNC:
if (sharedBuffer != 0) {
ALOGE("Transfer type TRANSFER_OBTAIN but sharedBuffer != 0");
return BAD_VALUE;
}
break;
case TRANSFER_SHARED:
if (sharedBuffer == 0) {
ALOGE("Transfer type TRANSFER_SHARED but sharedBuffer == 0");
return BAD_VALUE;
}
break;
default:
ALOGE("Invalid transfer type %d", transferType);
return BAD_VALUE;
}
mSharedBuffer = sharedBuffer;
mTransfer = transferType;
ALOGV_IF(sharedBuffer != 0, "sharedBuffer: %p, size: %d", sharedBuffer->pointer(),
sharedBuffer->size());
ALOGV("set() streamType %d frameCount %zu flags %04x", streamType, frameCount, flags);
AutoMutex lock(mLock);
// invariant that mAudioTrack != 0 is true only after set() returns successfully
if (mAudioTrack != 0) {
ALOGE("Track already in use");
return INVALID_OPERATION;
}
// handle default values first.
if (streamType == AUDIO_STREAM_DEFAULT) {
streamType = AUDIO_STREAM_MUSIC;
}
if (pAttributes == NULL) {
if (uint32_t(streamType) >= AUDIO_STREAM_CNT) {
ALOGE("Invalid stream type %d", streamType);
return BAD_VALUE;
}
setAttributesFromStreamType(streamType);
mStreamType = streamType;
} else {
if (!isValidAttributes(pAttributes)) {
ALOGE("Invalid attributes: usage=%d content=%d flags=0x%x tags=[%s]",
pAttributes->usage, pAttributes->content_type, pAttributes->flags,
pAttributes->tags);
}
// stream type shouldn't be looked at, this track has audio attributes
memcpy(&mAttributes, pAttributes, sizeof(audio_attributes_t));
setStreamTypeFromAttributes(mAttributes);
ALOGV("Building AudioTrack with attributes: usage=%d content=%d flags=0x%x tags=[%s]",
mAttributes.usage, mAttributes.content_type, mAttributes.flags, mAttributes.tags);
}
status_t status;
if (sampleRate == 0) {
status = AudioSystem::getOutputSamplingRateForAttr(&sampleRate, &mAttributes);
if (status != NO_ERROR) {
ALOGE("Could not get output sample rate for stream type %d; status %d",
mStreamType, status);
return status;
}
}
mSampleRate = sampleRate;
// these below should probably come from the audioFlinger too...
if (format == AUDIO_FORMAT_DEFAULT) {
format = AUDIO_FORMAT_PCM_16_BIT;
}
// validate parameters
if (!audio_is_valid_format(format)) {
ALOGE("Invalid format %#x", format);
return BAD_VALUE;
}
mFormat = format;
if (!audio_is_output_channel(channelMask)) {
ALOGE("Invalid channel mask %#x", channelMask);
return BAD_VALUE;
}
mChannelMask = channelMask;
uint32_t channelCount = audio_channel_count_from_out_mask(channelMask);
mChannelCount = channelCount;
// AudioFlinger does not currently support 8-bit data in shared memory
if (format == AUDIO_FORMAT_PCM_8_BIT && sharedBuffer != 0) {
ALOGE("8-bit data in shared memory is not supported");
return BAD_VALUE;
}
// force direct flag if format is not linear PCM
// or offload was requested
if ((flags & AUDIO_OUTPUT_FLAG_COMPRESS_OFFLOAD)
|| !audio_is_linear_pcm(format)) {
ALOGV( (flags & AUDIO_OUTPUT_FLAG_COMPRESS_OFFLOAD)
? "Offload request, forcing to Direct Output"
: "Not linear PCM, forcing to Direct Output");
flags = (audio_output_flags_t)
// FIXME why can't we allow direct AND fast?
((flags | AUDIO_OUTPUT_FLAG_DIRECT) & ~AUDIO_OUTPUT_FLAG_FAST);
}
// only allow deep buffering for music stream type
if (mStreamType != AUDIO_STREAM_MUSIC) {
flags = (audio_output_flags_t)(flags &~AUDIO_OUTPUT_FLAG_DEEP_BUFFER);
}
if (flags & AUDIO_OUTPUT_FLAG_DIRECT) {
if (audio_is_linear_pcm(format)) {
mFrameSize = channelCount * audio_bytes_per_sample(format);
} else {
mFrameSize = sizeof(uint8_t);
}
mFrameSizeAF = mFrameSize;
} else {
ALOG_ASSERT(audio_is_linear_pcm(format));
mFrameSize = channelCount * audio_bytes_per_sample(format);
mFrameSizeAF = channelCount * audio_bytes_per_sample(
format == AUDIO_FORMAT_PCM_8_BIT ? AUDIO_FORMAT_PCM_16_BIT : format);
// createTrack will return an error if PCM format is not supported by server,
// so no need to check for specific PCM formats here
}
// Make copy of input parameter offloadInfo so that in the future:
// (a) createTrack_l doesn't need it as an input parameter
// (b) we can support re-creation of offloaded tracks
if (offloadInfo != NULL) {
mOffloadInfoCopy = *offloadInfo;
mOffloadInfo = &mOffloadInfoCopy;
} else {
mOffloadInfo = NULL;
}
mVolume[AUDIO_INTERLEAVE_LEFT] = 1.0f;
mVolume[AUDIO_INTERLEAVE_RIGHT] = 1.0f;
mSendLevel = 0.0f;
// mFrameCount is initialized in createTrack_l
mReqFrameCount = frameCount;
mNotificationFramesReq = notificationFrames;
mNotificationFramesAct = 0;
mSessionId = sessionId;
int callingpid = IPCThreadState::self()->getCallingPid();
int mypid = getpid();
if (uid == -1 || (callingpid != mypid)) {
mClientUid = IPCThreadState::self()->getCallingUid();
} else {
mClientUid = uid;
}
if (pid == -1 || (callingpid != mypid)) {
mClientPid = callingpid;
} else {
mClientPid = pid;
}
mAuxEffectId = 0;
mFlags = flags;
mCbf = cbf;
if (cbf != NULL) {
mAudioTrackThread = new AudioTrackThread(*this, threadCanCallJava);
mAudioTrackThread->run("AudioTrack", ANDROID_PRIORITY_AUDIO, 0 /*stack*/);
}
// create the IAudioTrack
status = createTrack_l();
if (status != NO_ERROR) {
if (mAudioTrackThread != 0) {
mAudioTrackThread->requestExit(); // see comment in AudioTrack.h
mAudioTrackThread->requestExitAndWait();
mAudioTrackThread.clear();
}
return status;
}
mStatus = NO_ERROR;
mState = STATE_STOPPED;
mUserData = user;
mLoopPeriod = 0;
mMarkerPosition = 0;
mMarkerReached = false;
mNewPosition = 0;
mUpdatePeriod = 0;
mServer = 0;
mPosition = 0;
mReleased = 0;
mStartUs = 0;
AudioSystem::acquireAudioSessionId(mSessionId, mClientPid);
mSequence = 1;
mObservedSequence = mSequence;
mInUnderrun = false;
return NO_ERROR;
}
Every time sound is played an AudioTrack object is created;
creating the Java AudioTrack causes the C++ AudioTrack to be created underneath,
so the analysis centers on the C++ AudioTrack class,
whose key function during creation is set().
b. Educated guess at the main work of the creation process
b.1 use the AudioTrack's attributes, via the audio policy, to find the corresponding output and playbackThread
b.2 create the corresponding Track inside that playbackThread
b.3 set up shared memory between the app's AudioTrack and the track in the playbackThread's mTracks (a usage sketch follows below)
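A minimal native playback sketch in the spirit of shared_mem_test.cpp, with error handling trimmed and the PCM buffer assumed to be filled elsewhere (parameter values are examples only):

#include <media/AudioTrack.h>

using namespace android;

void playPcm(const int16_t *pcm, size_t bytes) {
    sp<AudioTrack> track = new AudioTrack(
            AUDIO_STREAM_MUSIC,          // stream type -> strategy -> device -> output
            44100,                       // sample rate
            AUDIO_FORMAT_PCM_16_BIT,
            AUDIO_CHANNEL_OUT_STEREO,
            0 /* frameCount: 0 lets the framework pick a minimum */);
    if (track->initCheck() != NO_ERROR)
        return;                          // no suitable output could be opened
    track->start();                      // the Track becomes active in the playbackThread
    track->write(pcm, bytes);            // blocking write into the shared-memory buffer
    track->stop();
}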
c. Source-code sequence diagram
4.7 AudioPolicyManager errata and review
frameworks\av\services\audiopolicy\AudioPolicyService.cpp
void AudioPolicyService::onFirstRef()
{
char value[PROPERTY_VALUE_MAX];
const struct hw_module_t *module;
int forced_val;
int rc;
{
Mutex::Autolock _l(mLock);
// start tone playback thread
mTonePlaybackThread = new AudioCommandThread(String8("ApmTone"), this);
// start audio commands thread
mAudioCommandThread = new AudioCommandThread(String8("ApmAudio"), this);
// start output activity command thread
mOutputCommandThread = new AudioCommandThread(String8("ApmOutput"), this);
#ifdef USE_LEGACY_AUDIO_POLICY // use the legacy policy
ALOGI("AudioPolicyService CSTOR in legacy mode");
/* instantiate the audio policy manager */
rc = hw_get_module(AUDIO_POLICY_HARDWARE_MODULE_ID, &module);
if (rc) {
return;
}
rc = audio_policy_dev_open(module, &mpAudioPolicyDev);
ALOGE_IF(rc, "couldn't open audio policy device (%s)", strerror(-rc));
if (rc) {
return;
}
rc = mpAudioPolicyDev->create_audio_policy(mpAudioPolicyDev, &aps_ops, this,
&mpAudioPolicy);
ALOGE_IF(rc, "couldn't create audio policy (%s)", strerror(-rc));
if (rc) {
return;
}
rc = mpAudioPolicy->init_check(mpAudioPolicy);
ALOGE_IF(rc, "couldn't init_check the audio policy (%s)", strerror(-rc));
if (rc) {
return;
}
ALOGI("Loaded audio policy from %s (%s)", module->name, module->id);
#else
ALOGI("AudioPolicyService CSTOR in new mode");
mAudioPolicyClient = new AudioPolicyClient(this);
mAudioPolicyManager = createAudioPolicyManager(mAudioPolicyClient);
#endif
}
// load audio processing modules
sp<AudioPolicyEffects> audioPolicyEffects = new AudioPolicyEffects();
{
Mutex::Autolock _l(mLock);
mAudioPolicyEffects = audioPolicyEffects;
}
}
AudioPolicyFactory.cpp (z:\android-5.0.2\frameworks\av\services\audiopolicy)
extern "C" AudioPolicyInterface* createAudioPolicyManager(
AudioPolicyClientInterface *clientInterface)
{
return new AudioPolicyManager(clientInterface);
}
AudioPolicyManager.cpp (z:\android-5.0.2\frameworks\av\services\audiopolicy)
AudioPolicyManager::AudioPolicyManager(AudioPolicyClientInterface *clientInterface)
:
#ifdef AUDIO_POLICY_TEST
Thread(false),
#endif //AUDIO_POLICY_TEST
mPrimaryOutput((audio_io_handle_t)0),
mPhoneState(AUDIO_MODE_NORMAL),
mLimitRingtoneVolume(false), mLastVoiceVolume(-1.0f),
mTotalEffectsCpuLoad(0), mTotalEffectsMemory(0),
mA2dpSuspended(false),
mSpeakerDrcEnabled(false), mNextUniqueId(1),
mAudioPortGeneration(1)
{
mUidCached = getuid();
mpClientInterface = clientInterface;
for (int i = 0; i < AUDIO_POLICY_FORCE_USE_CNT; i++) {
mForceUse[i] = AUDIO_POLICY_FORCE_NONE;
}
mDefaultOutputDevice = new DeviceDescriptor(String8(""), AUDIO_DEVICE_OUT_SPEAKER);
if (loadAudioPolicyConfig(AUDIO_POLICY_VENDOR_CONFIG_FILE) != NO_ERROR) {
if (loadAudioPolicyConfig(AUDIO_POLICY_CONFIG_FILE) != NO_ERROR) {
ALOGE("could not load audio policy configuration file, setting defaults");
defaultAudioPolicyConfig();
}
}
// mAvailableOutputDevices and mAvailableInputDevices now contain all attached devices
// must be done after reading the policy
initializeVolumeCurves();
// open all output streams needed to access attached devices
audio_devices_t outputDeviceTypes = mAvailableOutputDevices.types();
audio_devices_t inputDeviceTypes = mAvailableInputDevices.types() & ~AUDIO_DEVICE_BIT_IN;
for (size_t i = 0; i < mHwModules.size(); i++) {
mHwModules[i]->mHandle = mpClientInterface->loadHwModule(mHwModules[i]->mName);
if (mHwModules[i]->mHandle == 0) {
ALOGW("could not open HW module %s", mHwModules[i]->mName);
continue;
}
// open all output streams needed to access attached devices
// except for direct output streams that are only opened when they are actually
// required by an app.
// This also validates mAvailableOutputDevices list
for (size_t j = 0; j < mHwModules[i]->mOutputProfiles.size(); j++)
{
const sp<IOProfile> outProfile = mHwModules[i]->mOutputProfiles[j];
if (outProfile->mSupportedDevices.isEmpty()) {
ALOGW("Output profile contains no device on module %s", mHwModules[i]->mName);
continue;
}
if ((outProfile->mFlags & AUDIO_OUTPUT_FLAG_DIRECT) != 0) {
continue;
}
audio_devices_t profileType = outProfile->mSupportedDevices.types();
if ((profileType & mDefaultOutputDevice->mDeviceType) != AUDIO_DEVICE_NONE) {
profileType = mDefaultOutputDevice->mDeviceType;
} else {
// chose first device present in mSupportedDevices also part of
// outputDeviceTypes
for (size_t k = 0; k < outProfile->mSupportedDevices.size(); k++) {
profileType = outProfile->mSupportedDevices[k]->mDeviceType;
if ((profileType & outputDeviceTypes) != 0) {
break;
}
}
}
if ((profileType & outputDeviceTypes) == 0) {
continue;
}
sp<AudioOutputDescriptor> outputDesc = new AudioOutputDescriptor(outProfile);
outputDesc->mDevice = profileType;
audio_config_t config = AUDIO_CONFIG_INITIALIZER;
config.sample_rate = outputDesc->mSamplingRate;
config.channel_mask = outputDesc->mChannelMask;
config.format = outputDesc->mFormat;
audio_io_handle_t output = AUDIO_IO_HANDLE_NONE;
status_t status = mpClientInterface->openOutput(outProfile->mModule->mHandle,
&output,
&config,
&outputDesc->mDevice,
String8(""),
&outputDesc->mLatency,
outputDesc->mFlags);
if (status != NO_ERROR) {
ALOGW("Cannot open output stream for device %08x on hw module %s",
outputDesc->mDevice,
mHwModules[i]->mName);
} else {
outputDesc->mSamplingRate = config.sample_rate;
outputDesc->mChannelMask = config.channel_mask;
outputDesc->mFormat = config.format;
for (size_t k = 0; k < outProfile->mSupportedDevices.size(); k++) {
audio_devices_t type = outProfile->mSupportedDevices[k]->mDeviceType;
ssize_t index =
mAvailableOutputDevices.indexOf(outProfile->mSupportedDevices[k]);
// give a valid ID to an attached device once confirmed it is reachable
if ((index >= 0) && (mAvailableOutputDevices[index]->mId == 0)) {
mAvailableOutputDevices[index]->mId = nextUniqueId();
mAvailableOutputDevices[index]->mModule = mHwModules[i];
}
}
if (mPrimaryOutput == 0 &&
outProfile->mFlags & AUDIO_OUTPUT_FLAG_PRIMARY) {
mPrimaryOutput = output;
}
addOutput(output, outputDesc);
setOutputDevice(output,
outputDesc->mDevice,
true);
}
}
// open input streams needed to access attached devices to validate
// mAvailableInputDevices list
for (size_t j = 0; j < mHwModules[i]->mInputProfiles.size(); j++)
{
const sp<IOProfile> inProfile = mHwModules[i]->mInputProfiles[j];
if (inProfile->mSupportedDevices.isEmpty()) {
ALOGW("Input profile contains no device on module %s", mHwModules[i]->mName);
continue;
}
// chose first device present in mSupportedDevices also part of
// inputDeviceTypes
audio_devices_t profileType = AUDIO_DEVICE_NONE;
for (size_t k = 0; k < inProfile->mSupportedDevices.size(); k++) {
profileType = inProfile->mSupportedDevices[k]->mDeviceType;
if (profileType & inputDeviceTypes) {
break;
}
}
if ((profileType & inputDeviceTypes) == 0) {
continue;
}
sp<AudioInputDescriptor> inputDesc = new AudioInputDescriptor(inProfile);
inputDesc->mInputSource = AUDIO_SOURCE_MIC;
inputDesc->mDevice = profileType;
audio_config_t config = AUDIO_CONFIG_INITIALIZER;
config.sample_rate = inputDesc->mSamplingRate;
config.channel_mask = inputDesc->mChannelMask;
config.format = inputDesc->mFormat;
audio_io_handle_t input = AUDIO_IO_HANDLE_NONE;
status_t status = mpClientInterface->openInput(inProfile->mModule->mHandle,
&input,
&config,
&inputDesc->mDevice,
String8(""),
AUDIO_SOURCE_MIC,
AUDIO_INPUT_FLAG_NONE);
if (status == NO_ERROR) {
for (size_t k = 0; k < inProfile->mSupportedDevices.size(); k++) {
audio_devices_t type = inProfile->mSupportedDevices[k]->mDeviceType;
ssize_t index =
mAvailableInputDevices.indexOf(inProfile->mSupportedDevices[k]);
// give a valid ID to an attached device once confirmed it is reachable
if ((index >= 0) && (mAvailableInputDevices[index]->mId == 0)) {
mAvailableInputDevices[index]->mId = nextUniqueId();
mAvailableInputDevices[index]->mModule = mHwModules[i];
}
}
mpClientInterface->closeInput(input);
} else {
ALOGW("Cannot open input stream for device %08x on hw module %s",
inputDesc->mDevice,
mHwModules[i]->mName);
}
}
}
// make sure all attached devices have been allocated a unique ID
for (size_t i = 0; i < mAvailableOutputDevices.size();) {
if (mAvailableOutputDevices[i]->mId == 0) {
ALOGW("Input device %08x unreachable", mAvailableOutputDevices[i]->mDeviceType);
mAvailableOutputDevices.remove(mAvailableOutputDevices[i]);
continue;
}
i++;
}
for (size_t i = 0; i < mAvailableInputDevices.size();) {
if (mAvailableInputDevices[i]->mId == 0) {
ALOGW("Input device %08x unreachable", mAvailableInputDevices[i]->mDeviceType);
mAvailableInputDevices.remove(mAvailableInputDevices[i]);
continue;
}
i++;
}
// make sure default device is reachable
if (mAvailableOutputDevices.indexOf(mDefaultOutputDevice) < 0) {
ALOGE("Default device %08x is unreachable", mDefaultOutputDevice->mDeviceType);
}
ALOGE_IF((mPrimaryOutput == 0), "Failed to open primary output");
updateDevicesAndOutputs();
#ifdef AUDIO_POLICY_TEST
if (mPrimaryOutput != 0) {
AudioParameter outputCmd = AudioParameter();
outputCmd.addInt(String8("set_id"), 0);
mpClientInterface->setParameters(mPrimaryOutput, outputCmd.toString());
mTestDevice = AUDIO_DEVICE_OUT_SPEAKER;
mTestSamplingRate = 44100;
mTestFormat = AUDIO_FORMAT_PCM_16_BIT;
mTestChannels = AUDIO_CHANNEL_OUT_STEREO;
mTestLatencyMs = 0;
mCurOutput = 0;
mDirectOutput = false;
for (int i = 0; i < NUM_TEST_OUTPUTS; i++) {
mTestOutputs[i] = 0;
}
const size_t SIZE = 256;
char buffer[SIZE];
snprintf(buffer, SIZE, "AudioPolicyManagerTest");
run(buffer, ANDROID_PRIORITY_AUDIO);
}
#endif //AUDIO_POLICY_TEST
}
void AudioPolicyManager::addOutput(audio_io_handle_t output, sp<AudioOutputDescriptor> outputDesc)
{
outputDesc->mIoHandle = output;
outputDesc->mId = nextUniqueId();
mOutputs.add(output, outputDesc);
nextAudioPortGeneration();
}
4.8 AudioTrack creation: selecting the output
a. The app specifies a stream type when constructing the AudioTrack (the whole chain, steps a to e, is condensed in the sketch after this list)
b. AudioTrack::setAttributesFromStreamType
c. AudioPolicyManager::getStrategyForAttr
d. AudioPolicyManager::getDeviceForStrategy
e. AudioPolicyManager::getOutputForDevice
e.1 AudioPolicyManager::getOutputsForDevice
e.2 output = selectOutput(outputs, flags, format);
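Condensed, with the function names used in this section, the selection chain looks like this (a trace sketch with simplified signatures, not real code):

// AudioTrack::set(streamType, ...)
//   -> setAttributesFromStreamType(streamType)          // stream type -> attributes
//   -> AudioSystem::getOutputForAttr(...)               // into AudioPolicyManager
//        -> getStrategyForAttr(attr)                    // attributes -> strategy
//        -> getDeviceForStrategy(strategy)              // strategy -> device
//        -> getOutputForDevice(device, ...)
//             -> getOutputsForDevice(device, mOutputs)  // candidate outputs
//             -> selectOutput(outputs, flags, format)   // the final output handle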
AudioTrack.cpp (z:\android-5.0.2\frameworks\av\media\libmedia)
void AudioTrack::setAttributesFromStreamType(audio_stream_type_t streamType) {
mAttributes.flags = 0x0;
switch(streamType) {
case AUDIO_STREAM_DEFAULT:
case AUDIO_STREAM_MUSIC:
mAttributes.content_type = AUDIO_CONTENT_TYPE_MUSIC;
mAttributes.usage = AUDIO_USAGE_MEDIA;
break;
case AUDIO_STREAM_VOICE_CALL:
mAttributes.content_type = AUDIO_CONTENT_TYPE_SPEECH;
mAttributes.usage = AUDIO_USAGE_VOICE_COMMUNICATION;
break;
case AUDIO_STREAM_ENFORCED_AUDIBLE:
mAttributes.flags |= AUDIO_FLAG_AUDIBILITY_ENFORCED;
// intended fall through, attributes in common with STREAM_SYSTEM
case AUDIO_STREAM_SYSTEM:
mAttributes.content_type = AUDIO_CONTENT_TYPE_SONIFICATION;
mAttributes.usage = AUDIO_USAGE_ASSISTANCE_SONIFICATION;
break;
case AUDIO_STREAM_RING:
mAttributes.content_type = AUDIO_CONTENT_TYPE_SONIFICATION;
mAttributes.usage = AUDIO_USAGE_NOTIFICATION_TELEPHONY_RINGTONE;
break;
case AUDIO_STREAM_ALARM:
mAttributes.content_type = AUDIO_CONTENT_TYPE_SONIFICATION;
mAttributes.usage = AUDIO_USAGE_ALARM;
break;
case AUDIO_STREAM_NOTIFICATION:
mAttributes.content_type = AUDIO_CONTENT_TYPE_SONIFICATION;
mAttributes.usage = AUDIO_USAGE_NOTIFICATION;
break;
case AUDIO_STREAM_BLUETOOTH_SCO:
mAttributes.content_type = AUDIO_CONTENT_TYPE_SPEECH;
mAttributes.usage = AUDIO_USAGE_VOICE_COMMUNICATION;
mAttributes.flags |= AUDIO_FLAG_SCO;
break;
case AUDIO_STREAM_DTMF:
mAttributes.content_type = AUDIO_CONTENT_TYPE_SONIFICATION;
mAttributes.usage = AUDIO_USAGE_VOICE_COMMUNICATION_SIGNALLING;
break;
case AUDIO_STREAM_TTS:
mAttributes.content_type = AUDIO_CONTENT_TYPE_SPEECH;
mAttributes.usage = AUDIO_USAGE_ASSISTANCE_ACCESSIBILITY;
break;
default:
ALOGE("invalid stream type %d when converting to attributes", streamType);
}
}
AudioPolicyManager.cpp (z:\android-5.0.2\frameworks\av\services\audiopolicy)
uint32_t AudioPolicyManager::getStrategyForAttr(const audio_attributes_t *attr) {
// flags to strategy mapping
if ((attr->flags & AUDIO_FLAG_AUDIBILITY_ENFORCED) == AUDIO_FLAG_AUDIBILITY_ENFORCED) {
return (uint32_t) STRATEGY_ENFORCED_AUDIBLE;
}
// usage to strategy mapping
switch (attr->usage) {
case AUDIO_USAGE_MEDIA:
case AUDIO_USAGE_GAME:
case AUDIO_USAGE_ASSISTANCE_ACCESSIBILITY:
case AUDIO_USAGE_ASSISTANCE_NAVIGATION_GUIDANCE:
case AUDIO_USAGE_ASSISTANCE_SONIFICATION:
return (uint32_t) STRATEGY_MEDIA;
case AUDIO_USAGE_VOICE_COMMUNICATION:
return (uint32_t) STRATEGY_PHONE;
case AUDIO_USAGE_VOICE_COMMUNICATION_SIGNALLING:
return (uint32_t) STRATEGY_DTMF;
case AUDIO_USAGE_ALARM:
case AUDIO_USAGE_NOTIFICATION_TELEPHONY_RINGTONE:
return (uint32_t) STRATEGY_SONIFICATION;
case AUDIO_USAGE_NOTIFICATION:
case AUDIO_USAGE_NOTIFICATION_COMMUNICATION_REQUEST:
case AUDIO_USAGE_NOTIFICATION_COMMUNICATION_INSTANT:
case AUDIO_USAGE_NOTIFICATION_COMMUNICATION_DELAYED:
case AUDIO_USAGE_NOTIFICATION_EVENT:
return (uint32_t) STRATEGY_SONIFICATION_RESPECTFUL;
case AUDIO_USAGE_UNKNOWN:
default:
return (uint32_t) STRATEGY_MEDIA;
}
}
audio_devices_t AudioPolicyManager::getDeviceForStrategy(routing_strategy strategy,
bool fromCache)
{
uint32_t device = AUDIO_DEVICE_NONE;
if (fromCache) {
ALOGVV("getDeviceForStrategy() from cache strategy %d, device %x",
strategy, mDeviceForStrategy[strategy]);
return mDeviceForStrategy[strategy];
}
audio_devices_t availableOutputDeviceTypes = mAvailableOutputDevices.types();
switch (strategy) {
case STRATEGY_SONIFICATION_RESPECTFUL:
if (isInCall()) {
device = getDeviceForStrategy(STRATEGY_SONIFICATION, false /*fromCache*/);
} else if (isStreamActiveRemotely(AUDIO_STREAM_MUSIC,
SONIFICATION_RESPECTFUL_AFTER_MUSIC_DELAY)) {
// while media is playing on a remote device, use the the sonification behavior.
// Note that we test this usecase before testing if media is playing because
// the isStreamActive() method only informs about the activity of a stream, not
// if it's for local playback. Note also that we use the same delay between both tests
device = getDeviceForStrategy(STRATEGY_SONIFICATION, false /*fromCache*/);
//user "safe" speaker if available instead of normal speaker to avoid triggering
//other acoustic safety mechanisms for notification
if (device == AUDIO_DEVICE_OUT_SPEAKER && (availableOutputDeviceTypes & AUDIO_DEVICE_OUT_SPEAKER_SAFE))
device = AUDIO_DEVICE_OUT_SPEAKER_SAFE;
} else if (isStreamActive(AUDIO_STREAM_MUSIC, SONIFICATION_RESPECTFUL_AFTER_MUSIC_DELAY)) {
// while media is playing (or has recently played), use the same device
device = getDeviceForStrategy(STRATEGY_MEDIA, false /*fromCache*/);
} else {
// when media is not playing anymore, fall back on the sonification behavior
device = getDeviceForStrategy(STRATEGY_SONIFICATION, false /*fromCache*/);
//user "safe" speaker if available instead of normal speaker to avoid triggering
//other acoustic safety mechanisms for notification
if (device == AUDIO_DEVICE_OUT_SPEAKER && (availableOutputDeviceTypes & AUDIO_DEVICE_OUT_SPEAKER_SAFE))
device = AUDIO_DEVICE_OUT_SPEAKER_SAFE;
}
break;
case STRATEGY_DTMF:
if (!isInCall()) {
// when off call, DTMF strategy follows the same rules as MEDIA strategy
device = getDeviceForStrategy(STRATEGY_MEDIA, false /*fromCache*/);
break;
}
// when in call, DTMF and PHONE strategies follow the same rules
// FALL THROUGH
case STRATEGY_PHONE:
// Force use of only devices on primary output if:
// - in call AND
// - cannot route from voice call RX OR
// - audio HAL version is < 3.0 and TX device is on the primary HW module
if (mPhoneState == AUDIO_MODE_IN_CALL) {
audio_devices_t txDevice = getDeviceForInputSource(AUDIO_SOURCE_VOICE_COMMUNICATION);
sp<AudioOutputDescriptor> hwOutputDesc = mOutputs.valueFor(mPrimaryOutput);
if (((mAvailableInputDevices.types() &
AUDIO_DEVICE_IN_TELEPHONY_RX & ~AUDIO_DEVICE_BIT_IN) == 0) ||
(((txDevice & availablePrimaryInputDevices() & ~AUDIO_DEVICE_BIT_IN) != 0) &&
(hwOutputDesc->getAudioPort()->mModule->mHalVersion <
AUDIO_DEVICE_API_VERSION_3_0))) {
availableOutputDeviceTypes = availablePrimaryOutputDevices();
}
}
// for phone strategy, we first consider the forced use and then the available devices by order
// of priority
switch (mForceUse[AUDIO_POLICY_FORCE_FOR_COMMUNICATION]) {
case AUDIO_POLICY_FORCE_BT_SCO:
if (!isInCall() || strategy != STRATEGY_DTMF) {
device = availableOutputDeviceTypes & AUDIO_DEVICE_OUT_BLUETOOTH_SCO_CARKIT;
if (device) break;
}
device = availableOutputDeviceTypes & AUDIO_DEVICE_OUT_BLUETOOTH_SCO_HEADSET;
if (device) break;
device = availableOutputDeviceTypes & AUDIO_DEVICE_OUT_BLUETOOTH_SCO;
if (device) break;
// if SCO device is requested but no SCO device is available, fall back to default case
// FALL THROUGH
default: // FORCE_NONE
// when not in a phone call, phone strategy should route STREAM_VOICE_CALL to A2DP
if (!isInCall() &&
(mForceUse[AUDIO_POLICY_FORCE_FOR_MEDIA] != AUDIO_POLICY_FORCE_NO_BT_A2DP) &&
(getA2dpOutput() != 0) && !mA2dpSuspended) {
device = availableOutputDeviceTypes & AUDIO_DEVICE_OUT_BLUETOOTH_A2DP;
if (device) break;
device = availableOutputDeviceTypes & AUDIO_DEVICE_OUT_BLUETOOTH_A2DP_HEADPHONES;
if (device) break;
}
device = availableOutputDeviceTypes & AUDIO_DEVICE_OUT_WIRED_HEADPHONE;
if (device) break;
device = availableOutputDeviceTypes & AUDIO_DEVICE_OUT_WIRED_HEADSET;
if (device) break;
device = availableOutputDeviceTypes & AUDIO_DEVICE_OUT_USB_DEVICE;
if (device) break;
if (mPhoneState != AUDIO_MODE_IN_CALL) {
device = availableOutputDeviceTypes & AUDIO_DEVICE_OUT_USB_ACCESSORY;
if (device) break;
device = availableOutputDeviceTypes & AUDIO_DEVICE_OUT_DGTL_DOCK_HEADSET;
if (device) break;
device = availableOutputDeviceTypes & AUDIO_DEVICE_OUT_AUX_DIGITAL;
if (device) break;
device = availableOutputDeviceTypes & AUDIO_DEVICE_OUT_ANLG_DOCK_HEADSET;
if (device) break;
}
device = availableOutputDeviceTypes & AUDIO_DEVICE_OUT_EARPIECE;
if (device) break;
device = mDefaultOutputDevice->mDeviceType;
if (device == AUDIO_DEVICE_NONE) {
ALOGE("getDeviceForStrategy() no device found for STRATEGY_PHONE");
}
break;
case AUDIO_POLICY_FORCE_SPEAKER:
// when not in a phone call, phone strategy should route STREAM_VOICE_CALL to
// A2DP speaker when forcing to speaker output
if (!isInCall() &&
(mForceUse[AUDIO_POLICY_FORCE_FOR_MEDIA] != AUDIO_POLICY_FORCE_NO_BT_A2DP) &&
(getA2dpOutput() != 0) && !mA2dpSuspended) {
device = availableOutputDeviceTypes & AUDIO_DEVICE_OUT_BLUETOOTH_A2DP_SPEAKER;
if (device) break;
}
if (mPhoneState != AUDIO_MODE_IN_CALL) {
device = availableOutputDeviceTypes & AUDIO_DEVICE_OUT_USB_ACCESSORY;
if (device) break;
device = availableOutputDeviceTypes & AUDIO_DEVICE_OUT_USB_DEVICE;
if (device) break;
device = availableOutputDeviceTypes & AUDIO_DEVICE_OUT_DGTL_DOCK_HEADSET;
if (device) break;
device = availableOutputDeviceTypes & AUDIO_DEVICE_OUT_AUX_DIGITAL;
if (device) break;
device = availableOutputDeviceTypes & AUDIO_DEVICE_OUT_ANLG_DOCK_HEADSET;
if (device) break;
}
device = availableOutputDeviceTypes & AUDIO_DEVICE_OUT_LINE;
if (device) break;
device = availableOutputDeviceTypes & AUDIO_DEVICE_OUT_SPEAKER;
if (device) break;
device = mDefaultOutputDevice->mDeviceType;
if (device == AUDIO_DEVICE_NONE) {
ALOGE("getDeviceForStrategy() no device found for STRATEGY_PHONE, FORCE_SPEAKER");
}
break;
}
break;
case STRATEGY_SONIFICATION:
// If incall, just select the STRATEGY_PHONE device: The rest of the behavior is handled by
// handleIncallSonification().
if (isInCall()) {
device = getDeviceForStrategy(STRATEGY_PHONE, false /*fromCache*/);
break;
}
// FALL THROUGH
case STRATEGY_ENFORCED_AUDIBLE:
// strategy STRATEGY_ENFORCED_AUDIBLE uses same routing policy as STRATEGY_SONIFICATION
// except:
// - when in call where it doesn't default to STRATEGY_PHONE behavior
// - in countries where not enforced in which case it follows STRATEGY_MEDIA
if ((strategy == STRATEGY_SONIFICATION) ||
(mForceUse[AUDIO_POLICY_FORCE_FOR_SYSTEM] == AUDIO_POLICY_FORCE_SYSTEM_ENFORCED)) {
device = availableOutputDeviceTypes & AUDIO_DEVICE_OUT_SPEAKER;
if (device == AUDIO_DEVICE_NONE) {
ALOGE("getDeviceForStrategy() speaker device not found for STRATEGY_SONIFICATION");
}
}
// The second device used for sonification is the same as the device used by media strategy
// FALL THROUGH
case STRATEGY_MEDIA: {
uint32_t device2 = AUDIO_DEVICE_NONE;
if (strategy != STRATEGY_SONIFICATION) {
// no sonification on remote submix (e.g. WFD)
device2 = availableOutputDeviceTypes & AUDIO_DEVICE_OUT_REMOTE_SUBMIX;
}
if ((device2 == AUDIO_DEVICE_NONE) &&
(mForceUse[AUDIO_POLICY_FORCE_FOR_MEDIA] != AUDIO_POLICY_FORCE_NO_BT_A2DP) &&
(getA2dpOutput() != 0) && !mA2dpSuspended) {
device2 = availableOutputDeviceTypes & AUDIO_DEVICE_OUT_BLUETOOTH_A2DP;
if (device2 == AUDIO_DEVICE_NONE) {
device2 = availableOutputDeviceTypes & AUDIO_DEVICE_OUT_BLUETOOTH_A2DP_HEADPHONES;
}
if (device2 == AUDIO_DEVICE_NONE) {
device2 = availableOutputDeviceTypes & AUDIO_DEVICE_OUT_BLUETOOTH_A2DP_SPEAKER;
}
}
if (device2 == AUDIO_DEVICE_NONE) {
device2 = availableOutputDeviceTypes & AUDIO_DEVICE_OUT_WIRED_HEADPHONE;
}
if ((device2 == AUDIO_DEVICE_NONE)) {
device2 = availableOutputDeviceTypes & AUDIO_DEVICE_OUT_LINE;
}
if (device2 == AUDIO_DEVICE_NONE) {
device2 = availableOutputDeviceTypes & AUDIO_DEVICE_OUT_WIRED_HEADSET;
}
if (device2 == AUDIO_DEVICE_NONE) {
device2 = availableOutputDeviceTypes & AUDIO_DEVICE_OUT_USB_ACCESSORY;
}
if (device2 == AUDIO_DEVICE_NONE) {
device2 = availableOutputDeviceTypes & AUDIO_DEVICE_OUT_USB_DEVICE;
}
if (device2 == AUDIO_DEVICE_NONE) {
device2 = availableOutputDeviceTypes & AUDIO_DEVICE_OUT_DGTL_DOCK_HEADSET;
}
if ((device2 == AUDIO_DEVICE_NONE) && (strategy != STRATEGY_SONIFICATION)) {
// no sonification on aux digital (e.g. HDMI)
device2 = availableOutputDeviceTypes & AUDIO_DEVICE_OUT_AUX_DIGITAL;
}
if ((device2 == AUDIO_DEVICE_NONE) &&
(mForceUse[AUDIO_POLICY_FORCE_FOR_DOCK] == AUDIO_POLICY_FORCE_ANALOG_DOCK)) {
device2 = availableOutputDeviceTypes & AUDIO_DEVICE_OUT_ANLG_DOCK_HEADSET;
}
if (device2 == AUDIO_DEVICE_NONE) {
device2 = availableOutputDeviceTypes & AUDIO_DEVICE_OUT_SPEAKER;
}
int device3 = AUDIO_DEVICE_NONE;
if (strategy == STRATEGY_MEDIA) {
// ARC, SPDIF and AUX_LINE can co-exist with others.
device3 = availableOutputDeviceTypes & AUDIO_DEVICE_OUT_HDMI_ARC;
device3 |= (availableOutputDeviceTypes & AUDIO_DEVICE_OUT_SPDIF);
device3 |= (availableOutputDeviceTypes & AUDIO_DEVICE_OUT_AUX_LINE);
}
device2 |= device3;
// device is DEVICE_OUT_SPEAKER if we come from case STRATEGY_SONIFICATION or
// STRATEGY_ENFORCED_AUDIBLE, AUDIO_DEVICE_NONE otherwise
device |= device2;
// If hdmi system audio mode is on, remove speaker out of output list.
if ((strategy == STRATEGY_MEDIA) &&
(mForceUse[AUDIO_POLICY_FORCE_FOR_HDMI_SYSTEM_AUDIO] ==
AUDIO_POLICY_FORCE_HDMI_SYSTEM_AUDIO_ENFORCED)) {
device &= ~AUDIO_DEVICE_OUT_SPEAKER;
}
if (device) break;
device = mDefaultOutputDevice->mDeviceType;
if (device == AUDIO_DEVICE_NONE) {
ALOGE("getDeviceForStrategy() no device found for STRATEGY_MEDIA");
}
} break;
default:
ALOGW("getDeviceForStrategy() unknown strategy: %d", strategy);
break;
}
ALOGVV("getDeviceForStrategy() strategy %d, device %x", strategy, device);
return device;
}
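Every branch of getDeviceForStrategy() above follows the same pattern: walk a priority-ordered list of candidate device bits and take the first one present in availableOutputDeviceTypes. A minimal standalone sketch of that pattern (the device bit values here are invented for illustration; the real constants live in system/audio.h):
#include <cstdint>
#include <cstdio>

// Illustrative device bits only -- not the real audio.h values.
enum : uint32_t {
    DEV_NONE      = 0,
    DEV_A2DP      = 1u << 0,
    DEV_HEADPHONE = 1u << 1,
    DEV_HEADSET   = 1u << 2,
    DEV_USB       = 1u << 3,
    DEV_EARPIECE  = 1u << 4,
    DEV_SPEAKER   = 1u << 5,
};

// Same shape as "device = available & CANDIDATE; if (device) break;":
// the first candidate that intersects the available set wins.
uint32_t pickDevice(uint32_t available) {
    const uint32_t priority[] = {
        DEV_A2DP, DEV_HEADPHONE, DEV_HEADSET, DEV_USB, DEV_EARPIECE, DEV_SPEAKER,
    };
    for (uint32_t candidate : priority) {
        uint32_t device = available & candidate;
        if (device) return device;
    }
    return DEV_NONE; // the real code then falls back to mDefaultOutputDevice
}

int main() {
    // speaker and wired headset both present: the headset outranks the speaker
    printf("%#x\n", pickDevice(DEV_SPEAKER | DEV_HEADSET)); // prints 0x4
}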
AudioPolicyManager.cpp (z:\android-5.0.2\frameworks\av\services\audiopolicy)
audio_io_handle_t AudioPolicyManager::getOutputForDevice(
audio_devices_t device,
audio_stream_type_t stream,
uint32_t samplingRate,
audio_format_t format,
audio_channel_mask_t channelMask,
audio_output_flags_t flags,
const audio_offload_info_t *offloadInfo)
{
audio_io_handle_t output = AUDIO_IO_HANDLE_NONE;
uint32_t latency = 0;
status_t status;
#ifdef AUDIO_POLICY_TEST
if (mCurOutput != 0) {
ALOGV("getOutput() test output mCurOutput %d, samplingRate %d, format %d, channelMask %x, mDirectOutput %d",
mCurOutput, mTestSamplingRate, mTestFormat, mTestChannels, mDirectOutput);
if (mTestOutputs[mCurOutput] == 0) {
ALOGV("getOutput() opening test output");
sp<AudioOutputDescriptor> outputDesc = new AudioOutputDescriptor(NULL);
outputDesc->mDevice = mTestDevice;
outputDesc->mLatency = mTestLatencyMs;
outputDesc->mFlags =
(audio_output_flags_t)(mDirectOutput ? AUDIO_OUTPUT_FLAG_DIRECT : 0);
outputDesc->mRefCount[stream] = 0;
audio_config_t config = AUDIO_CONFIG_INITIALIZER;
config.sample_rate = mTestSamplingRate;
config.channel_mask = mTestChannels;
config.format = mTestFormat;
if (offloadInfo != NULL) {
config.offload_info = *offloadInfo;
}
status = mpClientInterface->openOutput(0,
&mTestOutputs[mCurOutput],
&config,
&outputDesc->mDevice,
String8(""),
&outputDesc->mLatency,
outputDesc->mFlags);
if (status == NO_ERROR) {
outputDesc->mSamplingRate = config.sample_rate;
outputDesc->mFormat = config.format;
outputDesc->mChannelMask = config.channel_mask;
AudioParameter outputCmd = AudioParameter();
outputCmd.addInt(String8("set_id"),mCurOutput);
mpClientInterface->setParameters(mTestOutputs[mCurOutput],outputCmd.toString());
addOutput(mTestOutputs[mCurOutput], outputDesc);
}
}
return mTestOutputs[mCurOutput];
}
#endif //AUDIO_POLICY_TEST
// open a direct output if required by specified parameters
//force direct flag if offload flag is set: offloading implies a direct output stream
// and all common behaviors are driven by checking only the direct flag
// this should normally be set appropriately in the policy configuration file
if ((flags & AUDIO_OUTPUT_FLAG_COMPRESS_OFFLOAD) != 0) {
flags = (audio_output_flags_t)(flags | AUDIO_OUTPUT_FLAG_DIRECT);
}
if ((flags & AUDIO_OUTPUT_FLAG_HW_AV_SYNC) != 0) {
flags = (audio_output_flags_t)(flags | AUDIO_OUTPUT_FLAG_DIRECT);
}
sp<IOProfile> profile;
// skip direct output selection if the request can obviously be attached to a mixed output
// and not explicitly requested
if (((flags & AUDIO_OUTPUT_FLAG_DIRECT) == 0) &&
audio_is_linear_pcm(format) && samplingRate <= MAX_MIXER_SAMPLING_RATE &&
audio_channel_count_from_out_mask(channelMask) <= 2) {
goto non_direct_output;
}
// Do not allow offloading if one non offloadable effect is enabled. This prevents from
// creating an offloaded track and tearing it down immediately after start when audioflinger
// detects there is an active non offloadable effect.
// FIXME: We should check the audio session here but we do not have it in this context.
// This may prevent offloading in rare situations where effects are left active by apps
// in the background.
if (((flags & AUDIO_OUTPUT_FLAG_COMPRESS_OFFLOAD) == 0) ||
!isNonOffloadableEffectEnabled()) {
profile = getProfileForDirectOutput(device,
samplingRate,
format,
channelMask,
(audio_output_flags_t)flags);
}
if (profile != 0) {
sp<AudioOutputDescriptor> outputDesc = NULL;
for (size_t i = 0; i < mOutputs.size(); i++) {
sp<AudioOutputDescriptor> desc = mOutputs.valueAt(i);
if (!desc->isDuplicated() && (profile == desc->mProfile)) {
outputDesc = desc;
// reuse direct output if currently open and configured with same parameters
if ((samplingRate == outputDesc->mSamplingRate) &&
(format == outputDesc->mFormat) &&
(channelMask == outputDesc->mChannelMask)) {
outputDesc->mDirectOpenCount++;
ALOGV("getOutput() reusing direct output %d", mOutputs.keyAt(i));
return mOutputs.keyAt(i);
}
}
}
// close direct output if currently open and configured with different parameters
if (outputDesc != NULL) {
closeOutput(outputDesc->mIoHandle);
}
outputDesc = new AudioOutputDescriptor(profile);
outputDesc->mDevice = device;
outputDesc->mLatency = 0;
outputDesc->mFlags =(audio_output_flags_t) (outputDesc->mFlags | flags);
audio_config_t config = AUDIO_CONFIG_INITIALIZER;
config.sample_rate = samplingRate;
config.channel_mask = channelMask;
config.format = format;
if (offloadInfo != NULL) {
config.offload_info = *offloadInfo;
}
status = mpClientInterface->openOutput(profile->mModule->mHandle,
&output,
&config,
&outputDesc->mDevice,
String8(""),
&outputDesc->mLatency,
outputDesc->mFlags);
// only accept an output with the requested parameters
if (status != NO_ERROR ||
(samplingRate != 0 && samplingRate != config.sample_rate) ||
(format != AUDIO_FORMAT_DEFAULT && format != config.format) ||
(channelMask != 0 && channelMask != config.channel_mask)) {
ALOGV("getOutput() failed opening direct output: output %d samplingRate %d %d,"
"format %d %d, channelMask %04x %04x", output, samplingRate,
outputDesc->mSamplingRate, format, outputDesc->mFormat, channelMask,
outputDesc->mChannelMask);
if (output != AUDIO_IO_HANDLE_NONE) {
mpClientInterface->closeOutput(output);
}
return AUDIO_IO_HANDLE_NONE;
}
outputDesc->mSamplingRate = config.sample_rate;
outputDesc->mChannelMask = config.channel_mask;
outputDesc->mFormat = config.format;
outputDesc->mRefCount[stream] = 0;
outputDesc->mStopTime[stream] = 0;
outputDesc->mDirectOpenCount = 1;
audio_io_handle_t srcOutput = getOutputForEffect();
addOutput(output, outputDesc);
audio_io_handle_t dstOutput = getOutputForEffect();
if (dstOutput == output) {
mpClientInterface->moveEffects(AUDIO_SESSION_OUTPUT_MIX, srcOutput, dstOutput);
}
mPreviousOutputs = mOutputs;
ALOGV("getOutput() returns new direct output %d", output);
mpClientInterface->onAudioPortListUpdate();
return output;
}
non_direct_output:
// ignoring channel mask due to downmix capability in mixer
// open a non direct output
// for non direct outputs, only PCM is supported
if (audio_is_linear_pcm(format)) {
// get which output is suitable for the specified stream. The actual
// routing change will happen when startOutput() will be called
SortedVector<audio_io_handle_t> outputs = getOutputsForDevice(device, mOutputs);
// at this stage we should ignore the DIRECT flag as no direct output could be found earlier
flags = (audio_output_flags_t)(flags & ~AUDIO_OUTPUT_FLAG_DIRECT);
output = selectOutput(outputs, flags, format);
}
ALOGW_IF((output == 0), "getOutput() could not find output for stream %d, samplingRate %d,"
"format %d, channels %x, flags %x", stream, samplingRate, format, channelMask, flags);
ALOGV("getOutput() returns output %d", output);
return output;
}
SortedVector<audio_io_handle_t> AudioPolicyManager::getOutputsForDevice(audio_devices_t device,
DefaultKeyedVector<audio_io_handle_t, sp<AudioOutputDescriptor> > openOutputs)
{
SortedVector<audio_io_handle_t> outputs;
ALOGVV("getOutputsForDevice() device %04x", device);
for (size_t i = 0; i < openOutputs.size(); i++) {
ALOGVV("output %d isDuplicated=%d device=%04x",
i, openOutputs.valueAt(i)->isDuplicated(), openOutputs.valueAt(i)->supportedDevices());
if ((device & openOutputs.valueAt(i)->supportedDevices()) == device) {
ALOGVV("getOutputsForDevice() found output %d", openOutputs.keyAt(i));
outputs.add(openOutputs.keyAt(i));
}
}
return outputs;
}
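Note the containment test "(device & supportedDevices()) == device" in getOutputsForDevice(): an output qualifies only if it supports every bit of the requested device mask, not merely some of them. A tiny sketch of the check:
#include <cassert>
#include <cstdint>

// An output qualifies only if it supports *all* requested device bits.
static bool outputSupports(uint32_t requested, uint32_t supported) {
    return (requested & supported) == requested;
}

int main() {
    // illustrative bits: request two devices at once (e.g. for duplication)
    assert(!outputSupports(0x06 /*two devices*/, 0x02 /*only one of them*/));
    assert( outputSupports(0x06, 0x0e /*a superset*/));
}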
4.9 AudioTrack Creation: Track and Shared Memory
Recap:
a. The APP creates an AudioTrack <-----------------> the PlaybackThread inside AudioFlinger creates a corresponding Track
b. The APP can supply audio data to the AudioTrack in 2 ways: all at once (MODE_STATIC), or while playing (MODE_STREAM)
Questions:
a. The audio data lives in a buffer; who provides this buffer, the APP or the PlaybackThread?
b. The APP produces data and the PlaybackThread consumes it; how do they synchronize?
Who creates the shared memory?
a. MODE_STATIC (all data supplied up front): the APP creates the shared memory (the APP knows the buffer size)
b. MODE_STREAM (supplied while playing): the playbackThread creates the shared memory (to keep the APP simple)
How do the APP and the playbackThread synchronize the data?
a. MODE_STATIC: no synchronization needed; the APP fills the buffer first, the playbackThread consumes it afterwards
b. MODE_STREAM: synchronization is needed, using a ring buffer
Test program:
Shared_mem_test.cpp (z:\android-5.0.2\frameworks\base\media\tests\audiotests)
int AudioTrackTest::Test01() {
sp<MemoryDealer> heap;
sp<IMemory> iMem;
uint8_t* p;
short smpBuf[BUF_SZ];
long rate = 44100;
unsigned long phi;
unsigned long dPhi;
long amplitude;
long freq = 1237;
float f0;
f0 = pow(2., 32.) * freq / (float)rate;
dPhi = (unsigned long)f0;
amplitude = 1000;
phi = 0;
Generate(smpBuf, BUF_SZ, amplitude, phi, dPhi); // fill buffer
for (int i = 0; i < 1024; i++) {
// allocate the shared memory up front
heap = new MemoryDealer(1024*1024, "AudioTrack Heap Base");
iMem = heap->allocate(BUF_SZ*sizeof(short));
p = static_cast<uint8_t*>(iMem->pointer());
memcpy(p, smpBuf, BUF_SZ*sizeof(short));
sp<AudioTrack> track = new AudioTrack(AUDIO_STREAM_MUSIC,// stream type
rate,
AUDIO_FORMAT_PCM_16_BIT,// word length, PCM
AUDIO_CHANNEL_OUT_MONO,
iMem);
status_t status = track->initCheck();
if(status != NO_ERROR) {
track.clear();
ALOGD("Failed for initCheck()");
return -1;
}
// start play
ALOGD("start");
track->start();
usleep(20000);
ALOGD("stop");
track->stop();
iMem.clear();
heap.clear();
usleep(20000);
}
return 0;
}
MediaAudioTrackTest.java (z:\android-5.0.2\frameworks\base\media\tests\mediaframeworktest\src\com\android\mediaframeworktest\functional\audio)
//Test case 4: setPlaybackHeadPosition() beyond what has been written
@LargeTest
public void testSetPlaybackHeadPositionTooFar() throws Exception {
// constants for test
final String TEST_NAME = "testSetPlaybackHeadPositionTooFar";
final int TEST_SR = 22050;
final int TEST_CONF = AudioFormat.CHANNEL_OUT_MONO;
final int TEST_FORMAT = AudioFormat.ENCODING_PCM_16BIT;
final int TEST_MODE = AudioTrack.MODE_STREAM;
final int TEST_STREAM_TYPE = AudioManager.STREAM_MUSIC;
//-------- initialization --------------
int minBuffSize = AudioTrack.getMinBufferSize(TEST_SR, TEST_CONF, TEST_FORMAT);
AudioTrack track = new AudioTrack(TEST_STREAM_TYPE, TEST_SR, TEST_CONF, TEST_FORMAT,
2*minBuffSize, TEST_MODE);
byte data[] = new byte[minBuffSize];
// make up a frame index that's beyond what has been written: go from buffer size to frame
// count (given the audio track properties), and add 77.
int frameIndexTooFar = (2*minBuffSize/2) + 77;
//-------- test --------------
assumeTrue(TEST_NAME, track.getState() == AudioTrack.STATE_INITIALIZED);
track.write(data, 0, data.length);
track.write(data, 0, data.length);
track.play();
track.stop();
assumeTrue(TEST_NAME, track.getPlayState() == AudioTrack.PLAYSTATE_STOPPED);
assertTrue(TEST_NAME, track.setPlaybackHeadPosition(frameIndexTooFar) == AudioTrack.ERROR_BAD_VALUE);
//-------- tear down --------------
track.release();
}
/**
* Class constructor with {@link AudioAttributes} and {@link AudioFormat}.
* @param attributes a non-null {@link AudioAttributes} instance.
* @param format a non-null {@link AudioFormat} instance describing the format of the data
* that will be played through this AudioTrack. See {@link AudioFormat.Builder} for
* configuring the audio format parameters such as encoding, channel mask and sample rate.
* @param bufferSizeInBytes the total size (in bytes) of the buffer where audio data is read
* from for playback. If using the AudioTrack in streaming mode, you can write data into
* this buffer in smaller chunks than this size. If using the AudioTrack in static mode,
* this is the maximum size of the sound that will be played for this instance.
* See {@link #getMinBufferSize(int, int, int)} to determine the minimum required buffer size
* for the successful creation of an AudioTrack instance in streaming mode. Using values
* smaller than getMinBufferSize() will result in an initialization failure.
* @param mode streaming or static buffer. See {@link #MODE_STATIC} and {@link #MODE_STREAM}.
* @param sessionId ID of audio session the AudioTrack must be attached to, or
* {@link AudioManager#AUDIO_SESSION_ID_GENERATE} if the session isn't known at construction
* time. See also {@link AudioManager#generateAudioSessionId()} to obtain a session ID before
* construction.
* @throws IllegalArgumentException
*/
public AudioTrack(AudioAttributes attributes, AudioFormat format, int bufferSizeInBytes,
int mode, int sessionId)
throws IllegalArgumentException {
// mState already == STATE_UNINITIALIZED
if (attributes == null) {
throw new IllegalArgumentException("Illegal null AudioAttributes");
}
if (format == null) {
throw new IllegalArgumentException("Illegal null AudioFormat");
}
// remember which looper is associated with the AudioTrack instantiation
Looper looper;
if ((looper = Looper.myLooper()) == null) {
looper = Looper.getMainLooper();
}
int rate = 0;
if ((format.getPropertySetMask() & AudioFormat.AUDIO_FORMAT_HAS_PROPERTY_SAMPLE_RATE) != 0)
{
rate = format.getSampleRate();
} else {
rate = AudioSystem.getPrimaryOutputSamplingRate();
if (rate <= 0) {
rate = 44100;
}
}
int channelMask = AudioFormat.CHANNEL_OUT_FRONT_LEFT | AudioFormat.CHANNEL_OUT_FRONT_RIGHT;
if ((format.getPropertySetMask() & AudioFormat.AUDIO_FORMAT_HAS_PROPERTY_CHANNEL_MASK) != 0)
{
channelMask = format.getChannelMask();
}
int encoding = AudioFormat.ENCODING_DEFAULT;
if ((format.getPropertySetMask() & AudioFormat.AUDIO_FORMAT_HAS_PROPERTY_ENCODING) != 0) {
encoding = format.getEncoding();
}
audioParamCheck(rate, channelMask, encoding, mode);
mStreamType = AudioSystem.STREAM_DEFAULT;
audioBuffSizeCheck(bufferSizeInBytes);
mInitializationLooper = looper;
IBinder b = ServiceManager.getService(Context.APP_OPS_SERVICE);
mAppOps = IAppOpsService.Stub.asInterface(b);
mAttributes = (new AudioAttributes.Builder(attributes).build());
if (sessionId < 0) {
throw new IllegalArgumentException("Invalid audio session ID: "+sessionId);
}
int[] session = new int[1];
session[0] = sessionId;
// native initialization
int initResult = native_setup(new WeakReference<AudioTrack>(this), mAttributes,
mSampleRate, mChannels, mAudioFormat,
mNativeBufferSizeInBytes, mDataLoadMode, session);
if (initResult != SUCCESS) {
loge("Error code "+initResult+" when initializing AudioTrack.");
return; // with mState == STATE_UNINITIALIZED
}
mSessionId = session[0];
if (mDataLoadMode == MODE_STATIC) {
mState = STATE_NO_STATIC_DATA;
} else {
mState = STATE_INITIALIZED;
}
}
android_media_AudioTrack.cpp (z:\android-5.0.2\frameworks\base\core\jni)
// ----------------------------------------------------------------------------
// ----------------------------------------------------------------------------
static JNINativeMethod gMethods[] = {
// name, signature, funcPtr
{"native_start", "()V", (void *)android_media_AudioTrack_start},
{"native_stop", "()V", (void *)android_media_AudioTrack_stop},
{"native_pause", "()V", (void *)android_media_AudioTrack_pause},
{"native_flush", "()V", (void *)android_media_AudioTrack_flush},
{"native_setup", "(Ljava/lang/Object;Ljava/lang/Object;IIIII[I)I",
(void *)android_media_AudioTrack_setup},
{"native_finalize", "()V", (void *)android_media_AudioTrack_finalize},
{"native_release", "()V", (void *)android_media_AudioTrack_release},
{"native_write_byte", "([BIIIZ)I",(void *)android_media_AudioTrack_write_byte},
{"native_write_native_bytes",
"(Ljava/lang/Object;IIIZ)I",
(void *)android_media_AudioTrack_write_native_bytes},
{"native_write_short", "([SIII)I", (void *)android_media_AudioTrack_write_short},
{"native_write_float", "([FIIIZ)I",(void *)android_media_AudioTrack_write_float},
{"native_setVolume", "(FF)V", (void *)android_media_AudioTrack_set_volume},
{"native_get_native_frame_count",
"()I", (void *)android_media_AudioTrack_get_native_frame_count},
{"native_set_playback_rate",
"(I)I", (void *)android_media_AudioTrack_set_playback_rate},
{"native_get_playback_rate",
"()I", (void *)android_media_AudioTrack_get_playback_rate},
{"native_set_marker_pos","(I)I", (void *)android_media_AudioTrack_set_marker_pos},
{"native_get_marker_pos","()I", (void *)android_media_AudioTrack_get_marker_pos},
{"native_set_pos_update_period",
"(I)I", (void *)android_media_AudioTrack_set_pos_update_period},
{"native_get_pos_update_period",
"()I", (void *)android_media_AudioTrack_get_pos_update_period},
{"native_set_position", "(I)I", (void *)android_media_AudioTrack_set_position},
{"native_get_position", "()I", (void *)android_media_AudioTrack_get_position},
{"native_get_latency", "()I", (void *)android_media_AudioTrack_get_latency},
{"native_get_timestamp", "([J)I", (void *)android_media_AudioTrack_get_timestamp},
{"native_set_loop", "(III)I", (void *)android_media_AudioTrack_set_loop},
{"native_reload_static", "()I", (void *)android_media_AudioTrack_reload},
{"native_get_output_sample_rate",
"(I)I", (void *)android_media_AudioTrack_get_output_sample_rate},
{"native_get_min_buff_size",
"(III)I", (void *)android_media_AudioTrack_get_min_buff_size},
{"native_setAuxEffectSendLevel",
"(F)I", (void *)android_media_AudioTrack_setAuxEffectSendLevel},
{"native_attachAuxEffect",
"(I)I", (void *)android_media_AudioTrack_attachAuxEffect},
};
android_media_AudioTrack.cpp (z:\android-5.0.2\frameworks\base\core\jni)
// ----------------------------------------------------------------------------
static jint
android_media_AudioTrack_setup(JNIEnv *env, jobject thiz, jobject weak_this,
jobject jaa,
jint sampleRateInHertz, jint javaChannelMask,
jint audioFormat, jint buffSizeInBytes, jint memoryMode, jintArray jSession) {
ALOGV("sampleRate=%d, audioFormat(from Java)=%d, channel mask=%x, buffSize=%d",
sampleRateInHertz, audioFormat, javaChannelMask, buffSizeInBytes);
if (jaa == 0) {
ALOGE("Error creating AudioTrack: invalid audio attributes");
return (jint) AUDIO_JAVA_ERROR;
}
// Java channel masks don't map directly to the native definition, but it's a simple shift
// to skip the two deprecated channel configurations "default" and "mono".
audio_channel_mask_t nativeChannelMask = ((uint32_t)javaChannelMask) >> 2;
if (!audio_is_output_channel(nativeChannelMask)) {
ALOGE("Error creating AudioTrack: invalid channel mask %#x.", javaChannelMask);
return (jint) AUDIOTRACK_ERROR_SETUP_INVALIDCHANNELMASK;
}
uint32_t channelCount = audio_channel_count_from_out_mask(nativeChannelMask);
// check the format.
// This function was called from Java, so we compare the format against the Java constants
audio_format_t format = audioFormatToNative(audioFormat);
if (format == AUDIO_FORMAT_INVALID) {
ALOGE("Error creating AudioTrack: unsupported audio format %d.", audioFormat);
return (jint) AUDIOTRACK_ERROR_SETUP_INVALIDFORMAT;
}
// for the moment 8bitPCM in MODE_STATIC is not supported natively in the AudioTrack C++ class
// so we declare everything as 16bitPCM, the 8->16bit conversion for MODE_STATIC will be handled
// in android_media_AudioTrack_native_write_byte()
if ((format == AUDIO_FORMAT_PCM_8_BIT)
&& (memoryMode == MODE_STATIC)) {
ALOGV("android_media_AudioTrack_setup(): requesting MODE_STATIC for 8bit \
buff size of %dbytes, switching to 16bit, buff size of %dbytes",
buffSizeInBytes, 2*buffSizeInBytes);
format = AUDIO_FORMAT_PCM_16_BIT;
// we will need twice the memory to store the data
buffSizeInBytes *= 2;
}
// compute the frame count
size_t frameCount;
if (audio_is_linear_pcm(format)) {
const size_t bytesPerSample = audio_bytes_per_sample(format);
frameCount = buffSizeInBytes / (channelCount * bytesPerSample);
} else {
frameCount = buffSizeInBytes;
}
jclass clazz = env->GetObjectClass(thiz);
if (clazz == NULL) {
ALOGE("Can't find %s when setting up callback.", kClassPathName);
return (jint) AUDIOTRACK_ERROR_SETUP_NATIVEINITFAILED;
}
if (jSession == NULL) {
ALOGE("Error creating AudioTrack: invalid session ID pointer");
return (jint) AUDIO_JAVA_ERROR;
}
jint* nSession = (jint *) env->GetPrimitiveArrayCritical(jSession, NULL);
if (nSession == NULL) {
ALOGE("Error creating AudioTrack: Error retrieving session id pointer");
return (jint) AUDIO_JAVA_ERROR;
}
int sessionId = nSession[0];
env->ReleasePrimitiveArrayCritical(jSession, nSession, 0);
nSession = NULL;
// create the native AudioTrack object
sp<AudioTrack> lpTrack = new AudioTrack();
audio_attributes_t *paa = NULL;
// read the AudioAttributes values
paa = (audio_attributes_t *) calloc(1, sizeof(audio_attributes_t));
const jstring jtags =
(jstring) env->GetObjectField(jaa, javaAudioAttrFields.fieldFormattedTags);
const char* tags = env->GetStringUTFChars(jtags, NULL);
// copying array size -1, char array for tags was calloc'd, no need to NULL-terminate it
strncpy(paa->tags, tags, AUDIO_ATTRIBUTES_TAGS_MAX_SIZE - 1);
env->ReleaseStringUTFChars(jtags, tags);
paa->usage = (audio_usage_t) env->GetIntField(jaa, javaAudioAttrFields.fieldUsage);
paa->content_type =
(audio_content_type_t) env->GetIntField(jaa, javaAudioAttrFields.fieldContentType);
paa->flags = env->GetIntField(jaa, javaAudioAttrFields.fieldFlags);
ALOGV("AudioTrack_setup for usage=%d content=%d flags=0x%#x tags=%s",
paa->usage, paa->content_type, paa->flags, paa->tags);
// initialize the callback information:
// this data will be passed with every AudioTrack callback
AudioTrackJniStorage* lpJniStorage = new AudioTrackJniStorage();
lpJniStorage->mCallbackData.audioTrack_class = (jclass)env->NewGlobalRef(clazz);
// we use a weak reference so the AudioTrack object can be garbage collected.
lpJniStorage->mCallbackData.audioTrack_ref = env->NewGlobalRef(weak_this);
lpJniStorage->mCallbackData.busy = false;
// initialize the native AudioTrack object
status_t status = NO_ERROR;
switch (memoryMode) {
case MODE_STREAM:
status = lpTrack->set(
AUDIO_STREAM_DEFAULT,// stream type, but more info conveyed in paa (last argument)
sampleRateInHertz,
format,// word length, PCM
nativeChannelMask,
frameCount,
AUDIO_OUTPUT_FLAG_NONE,
audioCallback, &(lpJniStorage->mCallbackData),//callback, callback data (user)
0,// notificationFrames == 0 since not using EVENT_MORE_DATA to feed the AudioTrack
0,// shared mem
true,// thread can call Java
sessionId,// audio session ID
AudioTrack::TRANSFER_SYNC,
NULL, // default offloadInfo
-1, -1, // default uid, pid values
paa);
break;
case MODE_STATIC:
// AudioTrack is using shared memory
if (!lpJniStorage->allocSharedMem(buffSizeInBytes)) {
ALOGE("Error creating AudioTrack in static mode: error creating mem heap base");
goto native_init_failure;
}
status = lpTrack->set(
AUDIO_STREAM_DEFAULT,// stream type, but more info conveyed in paa (last argument)
sampleRateInHertz,
format,// word length, PCM
nativeChannelMask,
frameCount,
AUDIO_OUTPUT_FLAG_NONE,
audioCallback, &(lpJniStorage->mCallbackData),//callback, callback data (user)
0,// notificationFrames == 0 since not using EVENT_MORE_DATA to feed the AudioTrack
lpJniStorage->mMemBase,// shared mem
true,// thread can call Java
sessionId,// audio session ID
AudioTrack::TRANSFER_SHARED,
NULL, // default offloadInfo
-1, -1, // default uid, pid values
paa);
break;
default:
ALOGE("Unknown mode %d", memoryMode);
goto native_init_failure;
}
if (status != NO_ERROR) {
ALOGE("Error %d initializing AudioTrack", status);
goto native_init_failure;
}
nSession = (jint *) env->GetPrimitiveArrayCritical(jSession, NULL);
if (nSession == NULL) {
ALOGE("Error creating AudioTrack: Error retrieving session id pointer");
goto native_init_failure;
}
// read the audio session ID back from AudioTrack in case we create a new session
nSession[0] = lpTrack->getSessionId();
env->ReleasePrimitiveArrayCritical(jSession, nSession, 0);
nSession = NULL;
{ // scope for the lock
Mutex::Autolock l(sLock);
sAudioTrackCallBackCookies.add(&lpJniStorage->mCallbackData);
}
// save our newly created C++ AudioTrack in the "nativeTrackInJavaObj" field
// of the Java object (in mNativeTrackInJavaObj)
setAudioTrack(env, thiz, lpTrack);
// save the JNI resources so we can free them later
//ALOGV("storing lpJniStorage: %x\n", (long)lpJniStorage);
env->SetLongField(thiz, javaAudioTrackFields.jniData, (jlong)lpJniStorage);
// since we had audio attributes, the stream type was derived from them during the
// creation of the native AudioTrack: push the same value to the Java object
env->SetIntField(thiz, javaAudioTrackFields.fieldStreamType, (jint) lpTrack->streamType());
// audio attributes were copied in AudioTrack creation
free(paa);
paa = NULL;
return (jint) AUDIO_JAVA_SUCCESS;
// failures:
native_init_failure:
if (paa != NULL) {
free(paa);
}
if (nSession != NULL) {
env->ReleasePrimitiveArrayCritical(jSession, nSession, 0);
}
env->DeleteGlobalRef(lpJniStorage->mCallbackData.audioTrack_class);
env->DeleteGlobalRef(lpJniStorage->mCallbackData.audioTrack_ref);
delete lpJniStorage;
env->SetLongField(thiz, javaAudioTrackFields.jniData, 0);
return (jint) AUDIOTRACK_ERROR_SETUP_NATIVEINITFAILED;
}
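A quick check of the frame-count arithmetic in android_media_AudioTrack_setup() above: for linear PCM, one frame is one sample per channel, so frameCount = buffSizeInBytes / (channelCount * bytesPerSample). The numbers below are just an example:
#include <cassert>
#include <cstddef>

int main() {
    const size_t bytesPerSample  = 2;    // AUDIO_FORMAT_PCM_16_BIT
    const size_t channelCount    = 2;    // stereo -> 4 bytes per frame
    const size_t buffSizeInBytes = 7056; // e.g. a 40 ms buffer at 44.1 kHz
    const size_t frameCount = buffSizeInBytes / (channelCount * bytesPerSample);
    assert(frameCount == 1764);          // 0.040 s * 44100 frames/s
}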
4.10 Passing the Audio Data
a. The APP creates an AudioTrack, the playbackThread creates the corresponding Track;
they pass audio data to each other through shared memory
b. The APP can use the shared memory in 2 ways:
b.1 MODE_STATIC:
the APP creates the shared memory and fills in all the data at once
b.2 MODE_STREAM:
the APP calls obtainBuffer to get empty space, fills it with data, then calls releaseBuffer to release it
c. The playbackThread calls obtainBuffer to get memory that contains data, consumes the data, then calls releaseBuffer to release it
d. The AudioTrack contains mProxy, which manages the shared memory and provides the obtainBuffer and releaseBuffer functions
The Track contains mServerProxy, which manages the shared memory and provides the obtainBuffer and releaseBuffer functions
These proxies point to different objects depending on the MODE
e. For MODE_STREAM, the APP and the playbackThread pass data through a ring buffer, as sketched below
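A minimal sketch of the MODE_STREAM hand-off in b.2 and c, using a hypothetical simplified proxy interface (the real obtainBuffer/releaseBuffer live in ClientProxy on the app side and ServerProxy in AudioFlinger, with different signatures):
#include <cstddef>
#include <cstring>

struct Buffer { void* raw; size_t frameCount; };

// Hypothetical, simplified stand-in for ClientProxy/ServerProxy.
struct Proxy {
    virtual int  obtainBuffer(Buffer* buf) = 0;  // app: empty space; server: filled data
    virtual void releaseBuffer(Buffer* buf) = 0; // app: publish;     server: consume
    virtual ~Proxy() {}
};

// The app side: AudioTrack::write() is essentially this loop.
void appWrite(Proxy& client, const char* data, size_t frames, size_t frameSize) {
    while (frames > 0) {
        Buffer buf;
        buf.frameCount = frames;
        if (client.obtainBuffer(&buf) < 0)            // may block until space frees up
            break;
        memcpy(buf.raw, data, buf.frameCount * frameSize);
        data   += buf.frameCount * frameSize;
        frames -= buf.frameCount;
        client.releaseBuffer(&buf);                   // advances the write pointer
    }
}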
frameworks\base\media\tests\MediaFrameworkTest\src\com\android\mediaframeworktest\functional\audio\MediaAudioTrackTest.java
//Test case 5: setLoopPoints() fails for MODE_STREAM
@LargeTest
public void testSetLoopPointsStream() throws Exception {
// constants for test
final String TEST_NAME = "testSetLoopPointsStream";
final int TEST_SR = 22050;
final int TEST_CONF = AudioFormat.CHANNEL_OUT_MONO;
final int TEST_FORMAT = AudioFormat.ENCODING_PCM_16BIT;
final int TEST_MODE = AudioTrack.MODE_STREAM;
final int TEST_STREAM_TYPE = AudioManager.STREAM_MUSIC;
//-------- initialization --------------
int minBuffSize = AudioTrack.getMinBufferSize(TEST_SR, TEST_CONF, TEST_FORMAT);
// creating the track on the app side causes the native layer to create the shared memory
AudioTrack track = new AudioTrack(TEST_STREAM_TYPE, TEST_SR, TEST_CONF, TEST_FORMAT,
2*minBuffSize, TEST_MODE);
byte data[] = new byte[minBuffSize];
//-------- test --------------
track.write(data, 0, data.length);
assumeTrue(TEST_NAME, track.getState() == AudioTrack.STATE_INITIALIZED);
assertTrue(TEST_NAME, track.setLoopPoints(2, 50, 2) == AudioTrack.ERROR_INVALID_OPERATION);
//-------- tear down --------------
track.release();
}
frameworks\base\media\java\android\media\AudioTrack.java
/**
* Writes the audio data to the audio sink for playback (streaming mode),
* or copies audio data for later playback (static buffer mode).
* In static buffer mode, copies the data to the buffer starting at its 0 offset, and the write
* mode is ignored.
* In streaming mode, the blocking behavior will depend on the write mode.
* @param audioData the buffer that holds the data to play, starting at the position reported
* by audioData.position().
* Note that upon return, the buffer position (audioData.position()) will
* have been advanced to reflect the amount of data that was successfully written to
* the AudioTrack.
* @param sizeInBytes number of bytes to write.
* Note this may differ from audioData.remaining(), but cannot exceed it.
* @param writeMode one of {@link #WRITE_BLOCKING}, {@link #WRITE_NON_BLOCKING}. It has no
* effect in static mode.
* With {@link #WRITE_BLOCKING}, the write will block until all data has been written
* to the audio sink.
* With {@link #WRITE_NON_BLOCKING}, the write will return immediately after
* queuing as much audio data for playback as possible without blocking.
* @return 0 or a positive number of bytes that were written, or
* {@link #ERROR_BAD_VALUE}, {@link #ERROR_INVALID_OPERATION}
*/
public int write(ByteBuffer audioData, int sizeInBytes,
@WriteMode int writeMode) {
if (mState == STATE_UNINITIALIZED) {
Log.e(TAG, "AudioTrack.write() called in invalid state STATE_UNINITIALIZED");
return ERROR_INVALID_OPERATION;
}
if ((writeMode != WRITE_BLOCKING) && (writeMode != WRITE_NON_BLOCKING)) {
Log.e(TAG, "AudioTrack.write() called with invalid blocking mode");
return ERROR_BAD_VALUE;
}
if ( (audioData == null) || (sizeInBytes < 0) || (sizeInBytes > audioData.remaining())) {
Log.e(TAG, "AudioTrack.write() called with invalid size (" + sizeInBytes + ") value");
return ERROR_BAD_VALUE;
}
int ret = 0;
if (audioData.isDirect()) {
ret = native_write_native_bytes(audioData,
audioData.position(), sizeInBytes, mAudioFormat,
writeMode == WRITE_BLOCKING);
} else {
ret = native_write_byte(NioUtils.unsafeArray(audioData),
NioUtils.unsafeArrayOffset(audioData) + audioData.position(),
sizeInBytes, mAudioFormat,
writeMode == WRITE_BLOCKING);
}
if ((mDataLoadMode == MODE_STATIC)
&& (mState == STATE_NO_STATIC_DATA)
&& (ret > 0)) {
// benign race with respect to other APIs that read mState
mState = STATE_INITIALIZED;
}
if (ret > 0) {
audioData.position(audioData.position() + ret);
}
return ret;
}
frameworks\base\core\jni\android_media_AudioTrack.cpp
(gMethods is the same JNI table already listed in section 4.9 above; the entries used here are
native_write_byte and native_write_native_bytes, which AudioTrack.write() dispatches to.)
frameworks\base\core\jni\android_media_AudioTrack.cpp
// ----------------------------------------------------------------------------
static jint android_media_AudioTrack_write_byte(JNIEnv *env, jobject thiz,
jbyteArray javaAudioData,
jint offsetInBytes, jint sizeInBytes,
jint javaAudioFormat,
jboolean isWriteBlocking) {
//ALOGV("android_media_AudioTrack_write_byte(offset=%d, sizeInBytes=%d) called",
// offsetInBytes, sizeInBytes);
sp<AudioTrack> lpTrack = getAudioTrack(env, thiz); // convert the Java object to the C++ object
if (lpTrack == NULL) {
jniThrowException(env, "java/lang/IllegalStateException",
"Unable to retrieve AudioTrack pointer for write()");
return 0;
}
// get the pointer for the audio data from the java array
// NOTE: We may use GetPrimitiveArrayCritical() when the JNI implementation changes in such
// a way that it becomes much more efficient. When doing so, we will have to prevent the
// AudioSystem callback to be called while in critical section (in case of media server
// process crash for instance)
jbyte* cAudioData = NULL;
if (javaAudioData) {
cAudioData = (jbyte *)env->GetByteArrayElements(javaAudioData, NULL);
if (cAudioData == NULL) {
ALOGE("Error retrieving source of audio data to play, can't play");
return 0; // out of memory or no data to load
}
} else {
ALOGE("NULL java array of audio data to play, can't play");
return 0;
}
jint written = writeToTrack(lpTrack, javaAudioFormat, cAudioData, offsetInBytes, sizeInBytes,
isWriteBlocking == JNI_TRUE /* blocking */);
env->ReleaseByteArrayElements(javaAudioData, cAudioData, 0);
//ALOGV("write wrote %d (tried %d) bytes in the native AudioTrack with offset %d",
// (int)written, (int)(sizeInBytes), (int)offsetInBytes);
return written;
}
frameworks\base\core\jni\android_media_AudioTrack.cpp
// ----------------------------------------------------------------------------
jint writeToTrack(const sp<AudioTrack>& track, jint audioFormat, const jbyte* data,
jint offsetInBytes, jint sizeInBytes, bool blocking = true) {
// give the data to the native AudioTrack object (the data starts at the offset)
ssize_t written = 0;
// regular write() or copy the data to the AudioTrack's shared memory?
// if the app did not provide shared memory, call write() to transfer the data
if (track->sharedBuffer() == 0) {
written = track->write(data + offsetInBytes, sizeInBytes, blocking);
// for compatibility with earlier behavior of write(), return 0 in this case
if (written == (ssize_t) WOULD_BLOCK) {
written = 0;
}
} else {
const audio_format_t format = audioFormatToNative(audioFormat);
switch (format) {
default:
case AUDIO_FORMAT_PCM_FLOAT:
case AUDIO_FORMAT_PCM_16_BIT: {
// writing to shared memory, check for capacity
if ((size_t)sizeInBytes > track->sharedBuffer()->size()) {
sizeInBytes = track->sharedBuffer()->size();
}
// if the app provided shared memory, just copy directly into it
memcpy(track->sharedBuffer()->pointer(), data + offsetInBytes, sizeInBytes);
written = sizeInBytes;
} break;
case AUDIO_FORMAT_PCM_8_BIT: {
// data contains 8bit data we need to expand to 16bit before copying
// to the shared memory
// writing to shared memory, check for capacity,
// note that input data will occupy 2X the input space due to 8 to 16bit conversion
if (((size_t)sizeInBytes)*2 > track->sharedBuffer()->size()) {
sizeInBytes = track->sharedBuffer()->size() / 2;
}
int count = sizeInBytes;
int16_t *dst = (int16_t *)track->sharedBuffer()->pointer();
const uint8_t *src = (const uint8_t *)(data + offsetInBytes);
memcpy_to_i16_from_u8(dst, src, count);
// even though we wrote 2*sizeInBytes, we only report sizeInBytes as written to hide
// the 8bit mixer restriction from the user of this function
written = sizeInBytes;
} break;
}
}
return written;
}
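memcpy_to_i16_from_u8() used above comes from the audio_utils library; it expands unsigned 8-bit PCM (where 0x80 is the zero level) to signed 16-bit PCM, which is why the callers halve the capacity check and double the stored size. A minimal equivalent sketch:
#include <cstddef>
#include <cstdint>

// Expand unsigned 8-bit PCM to signed 16-bit PCM: recenter around zero,
// then shift into the high byte. Each input byte becomes two output bytes.
static void u8_to_i16(int16_t* dst, const uint8_t* src, size_t count) {
    while (count-- > 0) {
        *dst++ = (int16_t)((*src++ - 0x80) << 8);
    }
}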
frameworks\av\media\libmedia\AudioTrack.cpp
// -------------------------------------------------------------------------
ssize_t AudioTrack::write(const void* buffer, size_t userSize, bool blocking)
{
if (mTransfer != TRANSFER_SYNC || mIsTimed) {
return INVALID_OPERATION;
}
if (isDirect()) {
AutoMutex lock(mLock);
int32_t flags = android_atomic_and(
~(CBLK_UNDERRUN | CBLK_LOOP_CYCLE | CBLK_LOOP_FINAL | CBLK_BUFFER_END),
&mCblk->mFlags);
if (flags & CBLK_INVALID) {
return DEAD_OBJECT;
}
}
if (ssize_t(userSize) < 0 || (buffer == NULL && userSize != 0)) {
// Sanity-check: user is most-likely passing an error code, and it would
// make the return value ambiguous (actualSize vs error).
ALOGE("AudioTrack::write(buffer=%p, size=%zu (%zd)", buffer, userSize, userSize);
return BAD_VALUE;
}
size_t written = 0;
Buffer audioBuffer;
while (userSize >= mFrameSize) {
audioBuffer.frameCount = userSize / mFrameSize;
// obtain an empty buffer
status_t err = obtainBuffer(&audioBuffer,
blocking ? &ClientProxy::kForever : &ClientProxy::kNonBlocking);
if (err < 0) {
if (written > 0) {
break;
}
return ssize_t(err);
}
size_t toWrite;
if (mFormat == AUDIO_FORMAT_PCM_8_BIT && !(mFlags & AUDIO_OUTPUT_FLAG_DIRECT)) {
// Divide capacity by 2 to take expansion into account
toWrite = audioBuffer.size >> 1;
memcpy_to_i16_from_u8(audioBuffer.i16, (const uint8_t *) buffer, toWrite);
} else {
toWrite = audioBuffer.size;
memcpy(audioBuffer.i8, buffer, toWrite);
}
buffer = ((const char *) buffer) + toWrite;
userSize -= toWrite;
written += toWrite;
releaseBuffer(&audioBuffer);
}
return written;
}
frameworks\av\services\audioflinger\Tracks.cpp
// AudioBufferProvider interface
status_t AudioFlinger::PlaybackThread::Track::getNextBuffer(
AudioBufferProvider::Buffer* buffer, int64_t pts __unused)
{
ServerProxy::Buffer buf;
size_t desiredFrames = buffer->frameCount;
buf.mFrameCount = desiredFrames;
// obtain and process the data; note there is no releaseBuffer here
status_t status = mServerProxy->obtainBuffer(&buf);
buffer->frameCount = buf.mFrameCount;
buffer->raw = buf.mRaw;
if (buf.mFrameCount == 0) {
mAudioTrackServerProxy->tallyUnderrunFrames(desiredFrames);
}
return status;
}
// releaseBuffer() is not overridden
// ExtendedAudioBufferProvider interface
frameworks\av\services\audioflinger\Tracks.cpp
// AudioBufferProvider interface
// getNextBuffer() = 0;
// This implementation of releaseBuffer() is used by Track and RecordTrack, but not TimedTrack
void AudioFlinger::ThreadBase::TrackBase::releaseBuffer(AudioBufferProvider::Buffer* buffer)
{
#ifdef TEE_SINK
if (mTeeSink != 0) {
(void) mTeeSink->write(buffer->raw, buffer->frameCount);
}
#endif
ServerProxy::Buffer buf;
buf.mFrameCount = buffer->frameCount;
buf.mRaw = buffer->raw;
buffer->frameCount = 0;
buffer->raw = NULL;
// the base class TrackBase calls releaseBuffer to release the buffer
mServerProxy->releaseBuffer(&buf);
}
frameworks\av\services\audioflinger\Tracks.cpp
// ----------------------------------------------------------------------------
// Track constructor must be called with AudioFlinger::mLock and ThreadBase::mLock held
AudioFlinger::PlaybackThread::Track::Track(
PlaybackThread *thread,
const sp<Client>& client,
audio_stream_type_t streamType,
uint32_t sampleRate,
audio_format_t format,
audio_channel_mask_t channelMask,
size_t frameCount,
void *buffer,
const sp<IMemory>& sharedBuffer,
int sessionId,
int uid,
IAudioFlinger::track_flags_t flags,
track_type type)
: TrackBase(thread, client, sampleRate, format, channelMask, frameCount,
(sharedBuffer != 0) ? sharedBuffer->pointer() : buffer,
sessionId, uid, flags, true /*isOut*/,
(type == TYPE_PATCH) ? ( buffer == NULL ? ALLOC_LOCAL : ALLOC_NONE) : ALLOC_CBLK,
type),
mFillingUpStatus(FS_INVALID),
// mRetryCount initialized later when needed
mSharedBuffer(sharedBuffer),
mStreamType(streamType),
mName(-1), // see note below
mMainBuffer(thread->mixBuffer()),
mAuxBuffer(NULL),
mAuxEffectId(0), mHasVolumeController(false),
mPresentationCompleteFrames(0),
mFastIndex(-1),
mCachedVolume(1.0),
mIsInvalid(false),
mAudioTrackServerProxy(NULL),
mResumeToStopping(false),
mFlushHwPending(false),
mPreviousValid(false),
mPreviousFramesWritten(0)
// mPreviousTimestamp
{
// client == 0 implies sharedBuffer == 0
ALOG_ASSERT(!(client == 0 && sharedBuffer != 0));
ALOGV_IF(sharedBuffer != 0, "sharedBuffer: %p, size: %d", sharedBuffer->pointer(),
sharedBuffer->size());
if (mCblk == NULL) {
return;
}
if (sharedBuffer == 0) {
// if the app did not create the buffer (MODE_STREAM), an AudioTrackServerProxy manages the buffer
mAudioTrackServerProxy = new AudioTrackServerProxy(mCblk, mBuffer, frameCount,
mFrameSize, !isExternalTrack(), sampleRate);
} else {
// if the app created the buffer (MODE_STATIC), a StaticAudioTrackServerProxy manages it
mAudioTrackServerProxy = new StaticAudioTrackServerProxy(mCblk, mBuffer, frameCount,
mFrameSize);
}
mServerProxy = mAudioTrackServerProxy;
mName = thread->getTrackName_l(channelMask, format, sessionId);
if (mName < 0) {
ALOGE("no more track names available");
return;
}
// only allocate a fast track index if we were able to allocate a normal track name
if (flags & IAudioFlinger::TRACK_FAST) {
mAudioTrackServerProxy->framesReadyIsCalledByMultipleThreads();
ALOG_ASSERT(thread->mFastTrackAvailMask != 0);
int i = __builtin_ctz(thread->mFastTrackAvailMask);
ALOG_ASSERT(0 < i && i < (int)FastMixerState::kMaxFastTracks);
// FIXME This is too eager. We allocate a fast track index before the
// fast track becomes active. Since fast tracks are a scarce resource,
// this means we are potentially denying other more important fast tracks from
// being created. It would be better to allocate the index dynamically.
mFastIndex = i;
// Read the initial underruns because this field is never cleared by the fast mixer
mObservedUnderruns = thread->getFastTrackUnderruns(i);
thread->mFastTrackAvailMask &= ~(1 << i);
}
}
frameworks\av\media\libmedia\AudioTrackShared.cpp
Initial values: mPosition = 0, mEnd = frameCount
// ---------------------------------------------------------------------------
StaticAudioTrackServerProxy::StaticAudioTrackServerProxy(audio_track_cblk_t* cblk, void *buffers,
size_t frameCount, size_t frameSize)
: AudioTrackServerProxy(cblk, buffers, frameCount, frameSize),
mObserver(&cblk->u.mStatic.mSingleStateQueue), mPosition(0),
mEnd(frameCount), mFramesReadyIsCalledByMultipleThreads(false)
{
mState.mLoopStart = 0;
mState.mLoopEnd = 0;
mState.mLoopCount = 0;
}
frameworks\av\media\libmedia\AudioTrackShared.cpp
obtainBuffer returns the buffer at the position indicated by mPosition
status_t StaticAudioTrackServerProxy::obtainBuffer(Buffer* buffer, bool ackFlush __unused)
{
if (mIsShutdown) {
buffer->mFrameCount = 0;
buffer->mRaw = NULL;
buffer->mNonContig = 0;
mUnreleased = 0;
return NO_INIT;
}
// get the current read position of the data
ssize_t positionOrStatus = pollPosition();
if (positionOrStatus < 0) {
buffer->mFrameCount = 0;
buffer->mRaw = NULL;
buffer->mNonContig = 0;
mUnreleased = 0;
return (status_t) positionOrStatus;
}
size_t position = (size_t) positionOrStatus;
size_t avail;
if (position < mEnd) {
avail = mEnd - position;
size_t wanted = buffer->mFrameCount;
if (avail < wanted) {
buffer->mFrameCount = avail;
} else {
avail = wanted;
}
buffer->mRaw = &((char *) mBuffers)[position * mFrameSize];
} else {
avail = 0;
buffer->mFrameCount = 0;
buffer->mRaw = NULL;
}
buffer->mNonContig = 0; // FIXME should be > 0 for looping
mUnreleased = avail;
return NO_ERROR;
}
frameworks\av\include\private\media\AudioTrackShared.h
struct Buffer {
size_t mFrameCount; // number of frames available in this buffer (on input: how many frames are requested)
void* mRaw; // pointer to the first frame
size_t mNonContig; // number of additional non-contiguous frames available
};
frameworks\av\media\libmedia\AudioTrackShared.cpp
void StaticAudioTrackServerProxy::releaseBuffer(Buffer* buffer)
{
size_t stepCount = buffer->mFrameCount;
LOG_ALWAYS_FATAL_IF(!(stepCount <= mUnreleased));
if (stepCount == 0) {
// prevent accidental re-use of buffer
buffer->mRaw = NULL;
buffer->mNonContig = 0;
return;
}
mUnreleased -= stepCount;
audio_track_cblk_t* cblk = mCblk;
size_t position = mPosition;
size_t newPosition = position + stepCount;
int32_t setFlags = 0;
if (!(position <= newPosition && newPosition <= mFrameCount)) {
ALOGW("%s newPosition %zu outside [%zu, %zu]", __func__, newPosition, position, mFrameCount);
newPosition = mFrameCount;
} else if (mState.mLoopCount != 0 && newPosition == mState.mLoopEnd) {
if (mState.mLoopCount == -1 || --mState.mLoopCount != 0) {
newPosition = mState.mLoopStart;
setFlags = CBLK_LOOP_CYCLE;
} else {
mEnd = mFrameCount; // this is what allows playback to continue after the loop
setFlags = CBLK_LOOP_FINAL;
}
}
if (newPosition == mFrameCount) {
setFlags |= CBLK_BUFFER_END;
}
// advance to the new position
mPosition = newPosition;
cblk->mServer += stepCount;
// This may overflow, but client is not supposed to rely on it
cblk->u.mStatic.mBufferPosition = (uint32_t) newPosition;
if (setFlags != 0) {
(void) android_atomic_or(setFlags, &cblk->mFlags);
// this would be a good place to wake a futex
}
buffer->mFrameCount = 0;
buffer->mRaw = NULL;
buffer->mNonContig = 0;
}
frameworks\av\media\libmedia\AudioTrackShared.cpp
mFront: the read pointer (R)
mRear: the write pointer (W)
mFrameCount: the buffer length (LEN)
mFrameCountP2: LEN rounded up to the next power of 2
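These four fields implement an "unwrapped index" ring buffer: mRear and mFront only ever increase, and are reduced modulo the power-of-two capacity only when indexing into memory, so rear - front is always the filled frame count even after the indices wrap. A self-contained sketch of the arithmetic (single-threaded; the real code below adds the atomics and the futex wait):
#include <cassert>
#include <cstdint>

struct Ring {
    int32_t  front;        // R: next frame to read  (monotonically increasing)
    int32_t  rear;         // W: next frame to write (monotonically increasing)
    uint32_t frameCountP2; // capacity, a power of two (here equal to LEN)

    int32_t  filled()     const { return rear - front; }              // survives wrap-around
    uint32_t readIndex()  const { return front & (frameCountP2 - 1); }
    uint32_t writeIndex() const { return rear  & (frameCountP2 - 1); }
};

int main() {
    Ring r;
    r.frameCountP2 = 64;
    r.front = 40;
    r.rear  = 70;                    // the write index has wrapped past 64
    assert(r.filled() == 30);        // still simply 70 - 40
    assert(r.writeIndex() == 6);     // 70 & 63
    assert(r.readIndex()  == 40);    // 40 & 63
}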
#define MEASURE_NS 10000000 // attempt to provide accurate timeouts if requested >= MEASURE_NS
// To facilitate quicker recovery from server failure, this value limits the timeout per each futex
// wait. However it does not protect infinite timeouts. If defined to be zero, there is no limit.
// FIXME May not be compatible with audio tunneling requirements where timeout should be in the
// order of minutes.
#define MAX_SEC 5
status_t ClientProxy::obtainBuffer(Buffer* buffer, const struct timespec *requested,
struct timespec *elapsed)
{
LOG_ALWAYS_FATAL_IF(buffer == NULL || buffer->mFrameCount == 0);
struct timespec total; // total elapsed time spent waiting
total.tv_sec = 0;
total.tv_nsec = 0;
bool measure = elapsed != NULL; // whether to measure total elapsed time spent waiting
status_t status;
enum {
TIMEOUT_ZERO, // requested == NULL || *requested == 0
TIMEOUT_INFINITE, // *requested == infinity
TIMEOUT_FINITE, // 0 < *requested < infinity
TIMEOUT_CONTINUE, // additional chances after TIMEOUT_FINITE
} timeout;
if (requested == NULL) {
timeout = TIMEOUT_ZERO;
} else if (requested->tv_sec == 0 && requested->tv_nsec == 0) {
timeout = TIMEOUT_ZERO;
} else if (requested->tv_sec == INT_MAX) {
timeout = TIMEOUT_INFINITE;
} else {
timeout = TIMEOUT_FINITE;
if (requested->tv_sec > 0 || requested->tv_nsec >= MEASURE_NS) {
measure = true;
}
}
struct timespec before;
bool beforeIsValid = false;
audio_track_cblk_t* cblk = mCblk;
bool ignoreInitialPendingInterrupt = true;
// check for shared memory corruption
if (mIsShutdown) {
status = NO_INIT;
goto end;
}
for (;;) {
int32_t flags = android_atomic_and(~CBLK_INTERRUPT, &cblk->mFlags);
// check for track invalidation by server, or server death detection
if (flags & CBLK_INVALID) {
ALOGV("Track invalidated");
status = DEAD_OBJECT;
goto end;
}
// check for obtainBuffer interrupted by client
if (!ignoreInitialPendingInterrupt && (flags & CBLK_INTERRUPT)) {
ALOGV("obtainBuffer() interrupted by client");
status = -EINTR;
goto end;
}
ignoreInitialPendingInterrupt = false;
// compute number of frames available to write (AudioTrack) or read (AudioRecord)
int32_t front;
int32_t rear;
if (mIsOut) {
// The barrier following the read of mFront is probably redundant.
// We're about to perform a conditional branch based on 'filled',
// which will force the processor to observe the read of mFront
// prior to allowing data writes starting at mRaw.
// However, the processor may support speculative execution,
// and be unable to undo speculative writes into shared memory.
// The barrier will prevent such speculative execution.
front = android_atomic_acquire_load(&cblk->u.mStreaming.mFront);
rear = cblk->u.mStreaming.mRear;
} else {
// On the other hand, this barrier is required.
rear = android_atomic_acquire_load(&cblk->u.mStreaming.mRear);
front = cblk->u.mStreaming.mFront;
}
ssize_t filled = rear - front;
// pipe should not be overfull
if (!(0 <= filled && (size_t) filled <= mFrameCount)) {
if (mIsOut) {
ALOGE("Shared memory control block is corrupt (filled=%zd, mFrameCount=%zu); "
"shutting down", filled, mFrameCount);
mIsShutdown = true;
status = NO_INIT;
goto end;
}
// for input, sync up on overrun
filled = 0;
cblk->u.mStreaming.mFront = rear;
(void) android_atomic_or(CBLK_OVERRUN, &cblk->mFlags);
}
// don't allow filling pipe beyond the nominal size
size_t avail = mIsOut ? mFrameCount - filled : filled;
if (avail > 0) {
// 'avail' may be non-contiguous, so return only the first contiguous chunk
size_t part1;
if (mIsOut) {
rear &= mFrameCountP2 - 1;
part1 = mFrameCountP2 - rear;
} else {
front &= mFrameCountP2 - 1;
part1 = mFrameCountP2 - front;
}
if (part1 > avail) {
part1 = avail;
}
if (part1 > buffer->mFrameCount) {
part1 = buffer->mFrameCount;
}
buffer->mFrameCount = part1;
buffer->mRaw = part1 > 0 ?
&((char *) mBuffers)[(mIsOut ? rear : front) * mFrameSize] : NULL;
buffer->mNonContig = avail - part1;
mUnreleased = part1;
status = NO_ERROR;
break;
}
struct timespec remaining;
const struct timespec *ts;
switch (timeout) {
case TIMEOUT_ZERO:
status = WOULD_BLOCK;
goto end;
case TIMEOUT_INFINITE:
ts = NULL;
break;
case TIMEOUT_FINITE:
timeout = TIMEOUT_CONTINUE;
if (MAX_SEC == 0) {
ts = requested;
break;
}
// fall through
case TIMEOUT_CONTINUE:
// FIXME we do not retry if requested < 10ms? needs documentation on this state machine
if (!measure || requested->tv_sec < total.tv_sec ||
(requested->tv_sec == total.tv_sec && requested->tv_nsec <= total.tv_nsec)) {
status = TIMED_OUT;
goto end;
}
remaining.tv_sec = requested->tv_sec - total.tv_sec;
if ((remaining.tv_nsec = requested->tv_nsec - total.tv_nsec) < 0) {
remaining.tv_nsec += 1000000000;
remaining.tv_sec++;
}
if (0 < MAX_SEC && MAX_SEC < remaining.tv_sec) {
remaining.tv_sec = MAX_SEC;
remaining.tv_nsec = 0;
}
ts = &remaining;
break;
default:
LOG_ALWAYS_FATAL("obtainBuffer() timeout=%d", timeout);
ts = NULL;
break;
}
int32_t old = android_atomic_and(~CBLK_FUTEX_WAKE, &cblk->mFutex);
if (!(old & CBLK_FUTEX_WAKE)) {
if (measure && !beforeIsValid) {
clock_gettime(CLOCK_MONOTONIC, &before);
beforeIsValid = true;
}
errno = 0;
(void) syscall(__NR_futex, &cblk->mFutex,
mClientInServer ? FUTEX_WAIT_PRIVATE : FUTEX_WAIT, old & ~CBLK_FUTEX_WAKE, ts);
// update total elapsed time spent waiting
if (measure) {
struct timespec after;
clock_gettime(CLOCK_MONOTONIC, &after);
total.tv_sec += after.tv_sec - before.tv_sec;
long deltaNs = after.tv_nsec - before.tv_nsec;
if (deltaNs < 0) {
deltaNs += 1000000000;
total.tv_sec--;
}
if ((total.tv_nsec += deltaNs) >= 1000000000) {
total.tv_nsec -= 1000000000;
total.tv_sec++;
}
before = after;
beforeIsValid = true;
}
switch (errno) {
case 0: // normal wakeup by server, or by binderDied()
case EWOULDBLOCK: // benign race condition with server
case EINTR: // wait was interrupted by signal or other spurious wakeup
case ETIMEDOUT: // time-out expired
// FIXME these error/non-0 status are being dropped
break;
default:
status = errno;
ALOGE("%s unexpected error %s", __func__, strerror(status));
goto end;
}
}
}
end:
if (status != NO_ERROR) {
buffer->mFrameCount = 0;
buffer->mRaw = NULL;
buffer->mNonContig = 0;
mUnreleased = 0;
}
if (elapsed != NULL) {
*elapsed = total;
}
if (requested == NULL) {
requested = &kNonBlocking;
}
if (measure) {
ALOGV("requested %ld.%03ld elapsed %ld.%03ld",
requested->tv_sec, requested->tv_nsec / 1000000,
total.tv_sec, total.tv_nsec / 1000000);
}
return status;
}
frameworks\av\media\libmedia\AudioTrackShared.cpp
void ClientProxy::releaseBuffer(Buffer* buffer)
{
LOG_ALWAYS_FATAL_IF(buffer == NULL);
size_t stepCount = buffer->mFrameCount;
if (stepCount == 0 || mIsShutdown) {
// prevent accidental re-use of buffer
buffer->mFrameCount = 0;
buffer->mRaw = NULL;
buffer->mNonContig = 0;
return;
}
LOG_ALWAYS_FATAL_IF(!(stepCount <= mUnreleased && mUnreleased <= mFrameCount));
mUnreleased -= stepCount;
audio_track_cblk_t* cblk = mCblk;
// Both of these barriers are required
if (mIsOut) {
int32_t rear = cblk->u.mStreaming.mRear;
android_atomic_release_store(stepCount + rear, &cblk->u.mStreaming.mRear);
} else {
int32_t front = cblk->u.mStreaming.mFront;
android_atomic_release_store(stepCount + front, &cblk->u.mStreaming.mFront);
}
}
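The acquire load in obtainBuffer() and the release store here form the standard message-passing pair: the consumer must observe the producer's data writes no later than it observes the advanced index. A C++11 illustration of the same discipline with std::atomic (the AOSP code predates <atomic> and uses android_atomic_acquire_load / android_atomic_release_store plus a futex for blocking):
#include <atomic>
#include <cstdint>

static int32_t              gData[64];
static std::atomic<int32_t> gRear(0);

// Producer: write the payload first, then publish the index with release.
void produce(int32_t value) {
    int32_t rear = gRear.load(std::memory_order_relaxed);
    gData[rear & 63] = value;                          // plain write to shared memory
    gRear.store(rear + 1, std::memory_order_release);  // like ClientProxy::releaseBuffer()
}

// Consumer: acquire the index; the matching data write is then guaranteed visible.
bool consume(int32_t front, int32_t* out) {
    int32_t rear = gRear.load(std::memory_order_acquire); // like obtainBuffer()
    if (rear - front <= 0)
        return false;                                  // nothing filled yet
    *out = gData[front & 63];
    return true;
}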
4.11 PlaybackThread Processing Flow
a. prepareTracks_l:
determine the enabled tracks and disabled tracks;
for each enabled track, set the parameters in mState.tracks[x]
b. threadLoop_mix: process the data (e.g. resampling) and mix (see the toy sketch after this list)
Determine the hooks:
examine the data of each mState.tracks[x] and pick tracks[x].hook according to its format,
then determine the overall mState.hook
Call the hook:
just call the overall mState.hook; it in turn calls every mState.tracks[x].hook
The mixed data is placed in the temporary buffer mState.outputTemp,
then converted to the right format and stored into thread.mMixerBuffer
c. memcpy_by_audio_format:
copy the data from thread.mMixerBuffer or thread.mEffectBuffer into thread.mSinkBuffer
d. threadLoop_write:
write thread.mSinkBuffer out to the sound card
e. threadLoop_exit
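As a toy illustration of steps b and c: mixing happens in a wider intermediate buffer (mState.outputTemp) so that summing several tracks cannot overflow, and the result is then converted and clamped to the sink format. A minimal sketch for two mono 16-bit tracks (the real AudioMixer handles arbitrary channel counts, formats, volumes and resampling):
#include <algorithm>
#include <cstddef>
#include <cstdint>

// Step b: sum the tracks into a 32-bit temp buffer (like mState.outputTemp).
// Step c: clamp/convert the temp buffer into the 16-bit sink buffer.
void mixAndConvert(const int16_t* t1, const int16_t* t2,
                   int32_t* outputTemp, int16_t* sinkBuffer, size_t frames) {
    for (size_t i = 0; i < frames; i++)
        outputTemp[i] = (int32_t)t1[i] + (int32_t)t2[i];
    for (size_t i = 0; i < frames; i++)
        sinkBuffer[i] = (int16_t)std::min<int32_t>(32767,
                        std::max<int32_t>(-32768, outputTemp[i]));
}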
The ring buffer mechanism in Android 4.4 KitKat
http://blog.sina.com.cn/s/blog_4d2f77990102ux8m.html
"In-depth Analysis of the Android 5.0 System", Chapter 6, Section 6.1: Atomic Operations
https://yq.aliyun.com/articles/95441
Memory barriers (fences)
http://ifeve.com/memory-barriers-or-fences/
frameworks\av\services\audioflinger\AudioMixer.cpp
For example, when the phone is muted and nothing needs real processing:
// no-op case
void AudioMixer::process__nop(state_t* state, int64_t pts)
{
ALOGVV("process__nop\n");
uint32_t e0 = state->enabledTracks;
while (e0) {
// process by group of tracks with same output buffer to
// avoid multiple memset() on same buffer
uint32_t e1 = e0, e2 = e0;
int i = 31 - __builtin_clz(e1);
{
track_t& t1 = state->tracks[i];
e2 &= ~(1<<i);
while (e2) {
i = 31 - __builtin_clz(e2);
e2 &= ~(1<<i);
track_t& t2 = state->tracks[i];
if (CC_UNLIKELY(t2.mainBuffer != t1.mainBuffer)) {
e1 &= ~(1<<i);
}
}
e0 &= ~(e1);
memset(t1.mainBuffer, 0, state->frameCount * t1.mMixerChannelCount
* audio_bytes_per_sample(t1.mMixerFormat));
}
while (e1) {
i = 31 - __builtin_clz(e1);
e1 &= ~(1<<i);
{
track_t& t3 = state->tracks[i];
size_t outFrames = state->frameCount;
while (outFrames) {
t3.buffer.frameCount = outFrames;
int64_t outputPTS = calculateOutputPTS(
t3, pts, state->frameCount - outFrames);
t3.bufferProvider->getNextBuffer(&t3.buffer, outputPTS);
if (t3.buffer.raw == NULL) break;
outFrames -= t3.buffer.frameCount;
t3.bufferProvider->releaseBuffer(&t3.buffer);
}
}
}
}
}
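The bit-mask walk used throughout these hooks deserves a note: enabledTracks keeps one bit per active track, and 31 - __builtin_clz(mask) extracts the highest set bit, i.e. the next track index to visit; clearing that bit is what makes the loop terminate. A standalone demo (the track indices are made up):
#include <cstdint>
#include <cstdio>

int main() {
    uint32_t e0 = (1u << 7) | (1u << 3) | (1u << 0);  // tracks 7, 3, 0 enabled
    while (e0) {
        int i = 31 - __builtin_clz(e0);  // index of highest remaining set bit
        e0 &= ~(1u << i);                // clear it so the loop terminates
        printf("processing track %d\n", i);  // prints 7, then 3, then 0
    }
    return 0;
}
__builtin_clz (count leading zeros) is the same GCC/Clang builtin the AOSP code relies on; it is undefined for a zero argument, which is why every loop tests the mask first.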
frameworks\av\services\audioflinger\AudioMixer.cpp
If the track data is already in a format the sound card supports, no resampling is needed:
// generic code without resampling
void AudioMixer::process__genericNoResampling(state_t* state, int64_t pts)
{
ALOGVV("process__genericNoResampling\n");
int32_t outTemp[BLOCKSIZE * MAX_NUM_CHANNELS] __attribute__((aligned(32)));
// acquire each track's buffer
uint32_t enabledTracks = state->enabledTracks;
uint32_t e0 = enabledTracks;
while (e0) {
const int i = 31 - __builtin_clz(e0);
e0 &= ~(1<<i);
track_t& t = state->tracks[i];
t.buffer.frameCount = state->frameCount;
t.bufferProvider->getNextBuffer(&t.buffer, pts);
t.frameCount = t.buffer.frameCount;
t.in = t.buffer.raw;
}
e0 = enabledTracks;
while (e0) {
// process by group of tracks with same output buffer to
// optimize cache use
uint32_t e1 = e0, e2 = e0;
int j = 31 - __builtin_clz(e1);
track_t& t1 = state->tracks[j];
e2 &= ~(1<<j);
while (e2) {
j = 31 - __builtin_clz(e2);
e2 &= ~(1<<j);
track_t& t2 = state->tracks[j];
if (CC_UNLIKELY(t2.mainBuffer != t1.mainBuffer)) {
e1 &= ~(1<<j);
}
}
e0 &= ~(e1);
// this assumes output 16 bits stereo, no resampling
int32_t *out = t1.mainBuffer;
size_t numFrames = 0;
do {
memset(outTemp, 0, sizeof(outTemp));
e2 = e1;
while (e2) {
const int i = 31 - __builtin_clz(e2);
e2 &= ~(1<<i);
track_t& t = state->tracks[i];
size_t outFrames = BLOCKSIZE;
int32_t *aux = NULL;
if (CC_UNLIKELY(t.needs & NEEDS_AUX)) {
aux = t.auxBuffer + numFrames;
}
while (outFrames) {
// t.in == NULL can happen if the track was flushed just after having
// been enabled for mixing.
if (t.in == NULL) {
enabledTracks &= ~(1<<i);
e1 &= ~(1<<i);
break;
}
size_t inFrames = (t.frameCount > outFrames)?outFrames:t.frameCount;
if (inFrames > 0) {
t.hook(&t, outTemp + (BLOCKSIZE - outFrames) * t.mMixerChannelCount,
inFrames, state->resampleTemp, aux);
t.frameCount -= inFrames;
outFrames -= inFrames;
if (CC_UNLIKELY(aux != NULL)) {
aux += inFrames;
}
}
if (t.frameCount == 0 && outFrames) {
t.bufferProvider->releaseBuffer(&t.buffer);
t.buffer.frameCount = (state->frameCount - numFrames) -
(BLOCKSIZE - outFrames);
int64_t outputPTS = calculateOutputPTS(
t, pts, numFrames + (BLOCKSIZE - outFrames));
t.bufferProvider->getNextBuffer(&t.buffer, outputPTS);
t.in = t.buffer.raw;
if (t.in == NULL) {
enabledTracks &= ~(1<<i);
e1 &= ~(1<<i);
break;
}
t.frameCount = t.buffer.frameCount;
}
}
}
convertMixerFormat(out, t1.mMixerFormat, outTemp, t1.mMixerInFormat,
BLOCKSIZE * t1.mMixerChannelCount);
// TODO: fix ugly casting due to choice of out pointer type
out = reinterpret_cast<int32_t*>((uint8_t*)out
+ BLOCKSIZE * t1.mMixerChannelCount
* audio_bytes_per_sample(t1.mMixerFormat));
numFrames += BLOCKSIZE;
} while (numFrames < state->frameCount);
}
// release each track's buffer
e0 = enabledTracks;
while (e0) {
const int i = 31 - __builtin_clz(e0);
e0 &= ~(1<<i);
track_t& t = state->tracks[i];
t.bufferProvider->releaseBuffer(&t.buffer);
}
}
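Note why outTemp is int32_t even though the comment says "16 bits stereo": the per-track hooks accumulate into a wider temporary so that the sum of several loud tracks cannot wrap, and only convertMixerFormat() narrows the result at the end. A minimal illustration of that accumulate-then-clamp pattern (clamp16 here is a local helper written for this demo, not the AOSP one):
#include <cstdint>
#include <cstdio>

static int16_t clamp16(int32_t v) {
    if (v > 32767) return 32767;
    if (v < -32768) return -32768;
    return (int16_t)v;
}

int main() {
    int16_t a = 30000, b = 20000;          // two loud tracks
    int32_t acc = 0;
    acc += a;                              // accumulate in 32 bits,
    acc += b;                              // 50000 would wrap in int16_t
    printf("mixed = %d\n", clamp16(acc));  // clamps to 32767 instead
    return 0;
}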
frameworks\av\services\audioflinger\AudioMixer.cpp
// generic code with resampling
void AudioMixer::process__genericResampling(state_t* state, int64_t pts)
{
ALOGVV("process__genericResampling\n");
// this const just means that local variable outTemp doesn't change
int32_t* const outTemp = state->outputTemp;
size_t numFrames = state->frameCount;
uint32_t e0 = state->enabledTracks;
while (e0) {
// process by group of tracks with same output buffer
// to optimize cache use
uint32_t e1 = e0, e2 = e0;
int j = 31 - __builtin_clz(e1);
track_t& t1 = state->tracks[j];
e2 &= ~(1<<j);
while (e2) {
j = 31 - __builtin_clz(e2);
e2 &= ~(1<<j);
track_t& t2 = state->tracks[j];
if (CC_UNLIKELY(t2.mainBuffer != t1.mainBuffer)) {
e1 &= ~(1<<j);
}
}
e0 &= ~(e1);
int32_t *out = t1.mainBuffer;
memset(outTemp, 0, sizeof(*outTemp) * t1.mMixerChannelCount * state->frameCount);
while (e1) {
const int i = 31 - __builtin_clz(e1);
e1 &= ~(1<<i);
track_t& t = state->tracks[i];
int32_t *aux = NULL;
if (CC_UNLIKELY(t.needs & NEEDS_AUX)) {
aux = t.auxBuffer;
}
// this is a little goofy, on the resampling case we don't
// acquire/release the buffers because it's done by
// the resampler.
if (t.needs & NEEDS_RESAMPLE) {
t.resampler->setPTS(pts);
t.hook(&t, outTemp, numFrames, state->resampleTemp, aux);
} else {
size_t outFrames = 0;
while (outFrames < numFrames) {
t.buffer.frameCount = numFrames - outFrames;
int64_t outputPTS = calculateOutputPTS(t, pts, outFrames);
t.bufferProvider->getNextBuffer(&t.buffer, outputPTS);
t.in = t.buffer.raw;
// t.in == NULL can happen if the track was flushed just after having
// been enabled for mixing.
if (t.in == NULL) break;
if (CC_UNLIKELY(aux != NULL)) {
aux += outFrames;
}
t.hook(&t, outTemp + outFrames * t.mMixerChannelCount, t.buffer.frameCount,
state->resampleTemp, aux);
outFrames += t.buffer.frameCount;
t.bufferProvider->releaseBuffer(&t.buffer);
}
}
}
convertMixerFormat(out, t1.mMixerFormat,
outTemp, t1.mMixerInFormat, numFrames * t1.mMixerChannelCount);
}
}
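For tracks with NEEDS_RESAMPLE the actual rate conversion happens inside t.resampler (the AudioResampler class offers several quality levels). As rough intuition only, a toy linear-interpolation resampler conveys the idea; resampleLinear is an invented name for this sketch, not an AOSP API:
#include <cstddef>
#include <cstdint>
#include <vector>

// Resample mono int16 audio from inRate to outRate by linear interpolation.
std::vector<int16_t> resampleLinear(const std::vector<int16_t>& in,
                                    int inRate, int outRate) {
    std::vector<int16_t> out;
    if (in.size() < 2) return out;
    double step = (double)inRate / outRate;  // input frames per output frame
    for (double pos = 0.0; pos + 1 < in.size(); pos += step) {
        size_t i = (size_t)pos;
        double frac = pos - i;               // position between samples i and i+1
        out.push_back((int16_t)(in[i] * (1.0 - frac) + in[i + 1] * frac));
    }
    return out;
}
The real resamplers keep fixed-point phase accumulators and windowed-sinc filters for quality, but the structure is the same: walk the input at a fractional step and interpolate each output frame.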