
Capturing Audio with Audio Unit in Practice

2019-05-11  小东邪啊

Requirement

Use Audio Unit on iOS to capture audio, obtaining lossless PCM data directly. Audio Unit cannot capture compressed data directly; audio compression will be covered in a later article.


Implementation Principle

Audio Unit captures from the hardware input side, such as the built-in microphone or other external devices with microphone capability (headsets with a mic, handheld microphones, and so on), provided the device itself is Apple-compatible.


Prerequisites

This is a hands-on article; for the theoretical background, refer to the links above. The focus here is on the pitfalls you meet in practice.

The project is written with low coupling and high cohesion in mind, so you can drag the relevant module into your own project, set the parameters, and use it directly.


GitHub (with code): Audio Unit Capture

Jianshu: Audio Unit Capture

Juejin: Audio Unit Capture

Blog: Audio Unit Capture


Implementation

1. Code structure

(Figure: code structure of the capture and recording classes)

As shown above, the code falls into two classes: one responsible for capture, the other for recording audio files. You can start and stop the Audio Unit whenever your app needs, and record to a file while the Audio Unit is running. The requirement described above needs only the following four APIs.

// Start / Stop Audio Unit
[[XDXAudioCaptureManager getInstance] startAudioCapture];
[[XDXAudioCaptureManager getInstance] stopAudioCapture];

// Start / Stop Audio Record
[[XDXAudioCaptureManager getInstance] startRecordFile];
[[XDXAudioCaptureManager getInstance] stopRecordFile];

2. Initialize the audio unit

This example uses a singleton, so the audio unit is set up in the initializer, which runs only once. If the audio unit is destroyed, the initialization API has to be called again from outside. Repeatedly destroying and re-creating audio units is generally discouraged; the best approach is to configure the audio unit once in the singleton's initializer and afterwards only start and stop it.

By default an iPhone supports only mono capture; initialization fails if you configure two channels. If you need to simulate stereo, you can duplicate the mono data in code (see the sketch below); a later article will cover this in detail.
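As a taste of what that later article covers, here is a minimal sketch of the idea (the helper name and buffers are ours, not part of this project): duplicate each 16-bit mono sample into both channels of an interleaved stereo buffer.

// Hypothetical helper: simulate interleaved stereo from 16-bit mono PCM
// by copying each sample into the left and right channel slots.
static void XDXDuplicateMonoToStereo(const SInt16 *monoData, UInt32 sampleCount, SInt16 *stereoData) {
    for (UInt32 i = 0; i < sampleCount; i++) {
        stereoData[2 * i]     = monoData[i]; // left
        stereoData[2 * i + 1] = monoData[i]; // right
    }
}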

Note: the capture buffer size and the I/O buffer duration cannot be chosen independently. For a given buffer duration, the amount of captured data cannot exceed the maximum implied by that duration; the relationship follows from the formula below.

Sampling formula

Data rate (bytes/second) = (sample rate (Hz) × bits per sample × channel count) / 8
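For example, 44,100 Hz × 16 bit × 1 channel / 8 = 88,200 bytes per second, so the 0.02-second buffer duration used in the code below delivers at most 88,200 × 0.02 = 1,764 bytes per callback (the 2048-byte buffer allocated below is simply large enough to hold that).
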
- (instancetype)init {
    static dispatch_once_t onceToken;
    dispatch_once(&onceToken, ^{
        _instace = [super init];
        
        // Note: audioBufferSize cannot exceed the maximum implied by durationSec.
        [_instace configureAudioInfoWithDataFormat:&m_audioDataFormat
                                          formatID:kAudioFormatLinearPCM
                                        sampleRate:44100
                                      channelCount:1
                                   audioBufferSize:2048
                                       durationSec:0.02
                                          callBack:AudioCaptureCallback];
    });
    return _instace;
}

- (void)configureAudioInfoWithDataFormat:(AudioStreamBasicDescription *)dataFormat formatID:(UInt32)formatID sampleRate:(Float64)sampleRate channelCount:(UInt32)channelCount audioBufferSize:(int)audioBufferSize durationSec:(float)durationSec callBack:(AURenderCallback)callBack {
    // Configure ASBD
    [self configureAudioToAudioFormat:dataFormat
                      byParamFormatID:formatID
                           sampleRate:sampleRate
                         channelCount:channelCount];
    
    // Set sample time
    [[AVAudioSession sharedInstance] setPreferredIOBufferDuration:durationSec error:NULL];
    
    // Configure Audio Unit
    m_audioUnit = [self configreAudioUnitWithDataFormat:*dataFormat
                                        audioBufferSize:audioBufferSize
                                               callBack:callBack];
}

3. Set the audio stream data format (ASBD)

Note that the audio data format is tied directly to the hardware. For the best performance, use the hardware's own sample rate, channel count, and other audio properties. When we change the sample rate manually, the Audio Unit performs a conversion internally; the code doesn't notice, but it still costs some performance.

iOS does not support setting two channels directly. If you want to simulate stereo, you can fill in the audio data yourself, which will be covered in a later article.

Understand the AudioSessionGetProperty function: it queries the current value of a given hardware property. For example, kAudioSessionProperty_CurrentHardwareSampleRate queries the current hardware sample rate, and kAudioSessionProperty_CurrentHardwareInputNumberChannels queries the current input channel count. Because assigning values manually is more flexible, this example does not use the queried values.
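Note that AudioSessionGetProperty belongs to the C-based Audio Session Services API, which has been deprecated since iOS 7; the same values can be read from AVAudioSession. A sketch of the equivalent queries:

// Equivalent hardware queries via AVAudioSession (iOS 6+).
AVAudioSession *session = [AVAudioSession sharedInstance];
double    hardwareSampleRate    = session.sampleRate;             // current hardware sample rate
NSInteger hardwareInputChannels = session.inputNumberOfChannels;  // current input channel count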

First, you must understand the difference between uncompressed formats (PCM...) and compressed formats (AAC...). Capturing uncompressed data on iOS hands you the data exactly as the hardware captured it. Since an audio unit cannot capture AAC directly, this example captures raw PCM only.

For PCM you must set the format flags mFormatFlags; the bit width of each sample per channel mBitsPerChannel (iOS uses 16 bits per channel); and the frames per packet mFramesPerPacket, which for PCM is exactly 1 because the data is uncompressed. The bytes per packet (equal here to the bytes per frame) can then be computed as follows.
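With the values used here: mBytesPerFrame = (mBitsPerChannel / 8) × mChannelsPerFrame = (16 / 8) × 1 = 2 bytes, and since each PCM packet holds 1 frame, mBytesPerPacket is also 2 bytes.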

Note that for most other, compressed formats these fields are left at their default of 0. For compressed data, the number of frames per packet and the number of bytes each packet compresses to can vary, so they cannot be known in advance; mFramesPerPacket, for instance, is only known once compression has finished.

#define kXDXAudioPCMFramesPerPacket 1
#define KXDXAudioBitsPerChannel 16

-(void)configureAudioToAudioFormat:(AudioStreamBasicDescription *)audioFormat byParamFormatID:(UInt32)formatID  sampleRate:(Float64)sampleRate channelCount:(UInt32)channelCount {
    AudioStreamBasicDescription dataFormat = {0};
    UInt32 size = sizeof(dataFormat.mSampleRate);
    // Query the hardware's native sample rate (recommended as a reference).
    Float64 hardwareSampleRate = 0;
    AudioSessionGetProperty(kAudioSessionProperty_CurrentHardwareSampleRate,
                            &size,
                            &hardwareSampleRate);
    // Set the sample rate manually.
    dataFormat.mSampleRate = sampleRate;
    
    size = sizeof(dataFormat.mChannelsPerFrame);
    // Query the hardware's native input channel count (must be consulted).
    UInt32 hardwareNumberChannels = 0;
    AudioSessionGetProperty(kAudioSessionProperty_CurrentHardwareInputNumberChannels,
                            &size,
                            &hardwareNumberChannels);
    dataFormat.mChannelsPerFrame = channelCount;
    
    dataFormat.mFormatID = formatID;
    
    if (formatID == kAudioFormatLinearPCM) {
        dataFormat.mFormatFlags     = kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked;
        dataFormat.mBitsPerChannel  = KXDXAudioBitsPerChannel;
        dataFormat.mBytesPerPacket  = dataFormat.mBytesPerFrame = (dataFormat.mBitsPerChannel / 8) * dataFormat.mChannelsPerFrame;
        dataFormat.mFramesPerPacket = kXDXAudioPCMFramesPerPacket;
    }

    memcpy(audioFormat, &dataFormat, sizeof(dataFormat));
    NSLog(@"%@:  %s - sample rate:%f, channel count:%d",kModuleName, __func__,sampleRate,channelCount);
}

4. Set the sampling duration (I/O buffer duration)

The I/O buffer duration is set through AVAudioSession. Note again that for a given duration, the amount of captured data cannot exceed the corresponding maximum.

Data rate (bytes/second) = (sample rate (Hz) × bits per sample × channel count) / 8

For example: with a 44.1 kHz sample rate, 16 bits per sample, 1 channel, and a buffer duration of 0.01 seconds, the maximum data per callback is 88,200 × 0.01 = 882 bytes. Even if we configure a larger value, the system will capture at most 882 bytes of audio data per callback.

[[AVAudioSession sharedInstance] setPreferredIOBufferDuration:durationSec error:NULL];
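As a sanity check, this maximum can be computed straight from the formula; a minimal helper sketch (the function name is ours, not part of the project):

// Hypothetical helper: maximum bytes a capture callback can deliver
// for a given PCM format and I/O buffer duration.
static UInt32 XDXMaxCaptureBytes(const AudioStreamBasicDescription *asbd, Float64 durationSec) {
    Float64 bytesPerSecond = asbd->mSampleRate * asbd->mBitsPerChannel * asbd->mChannelsPerFrame / 8.0;
    return (UInt32)(bytesPerSecond * durationSec); // e.g. 88200 * 0.01 = 882
}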

5. Configure the Audio Unit

m_audioUnit = [self configreAudioUnitWithDataFormat:*dataFormat
                                    audioBufferSize:audioBufferSize
                                           callBack:callBack];
                                               
- (AudioUnit)configreAudioUnitWithDataFormat:(AudioStreamBasicDescription)dataFormat audioBufferSize:(int)audioBufferSize callBack:(AURenderCallback)callBack {
    AudioUnit audioUnit = [self createAudioUnitObject];
    
    if (!audioUnit) {
        return NULL;
    }
    
    [self initCaptureAudioBufferWithAudioUnit:audioUnit
                                 channelCount:dataFormat.mChannelsPerFrame
                                 dataByteSize:audioBufferSize];
    
    
    [self setAudioUnitPropertyWithAudioUnit:audioUnit
                                 dataFormat:dataFormat];
    
    [self initCaptureCallbackWithAudioUnit:audioUnit callBack:callBack];
    
    // Calls to AudioUnitInitialize() can fail if made back-to-back on different instances; a fallback is to retry a few times with a small delay between attempts.
    OSStatus status = AudioUnitInitialize(audioUnit);
    if (status != noErr) {
        NSLog(@"%@:  %s - couldn't init audio unit instance, status : %d \n",kModuleName,__func__,status);
    }
    
    return audioUnit;
}

Here you specify which audio unit subtype to create. The kAudioUnitSubType_VoiceProcessingIO subtype performs echo cancellation and voice enhancement; if you only need raw, unprocessed audio, use kAudioUnitSubType_RemoteIO instead. For more about audio unit types, see the links at the top of this article.

AudioComponentFindNext: passing NULL as the first parameter means the system searches, in its defined order, for the first audio component matching the description. If you instead pass a previously found audio component, the function continues from there and finds the next matching one.
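For illustration, passing the previous result back in walks every matching component; a small sketch using the audioDesc from the method below (not used by the project):

// Enumerate all components matching the description: NULL starts the search,
// passing the previous result continues it.
AudioComponent component = NULL;
while ((component = AudioComponentFindNext(component, &audioDesc)) != NULL) {
    CFStringRef name = NULL;
    AudioComponentCopyName(component, &name);
    NSLog(@"Matching audio component: %@", (__bridge NSString *)name);
    if (name) CFRelease(name);
}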

- (AudioUnit)createAudioUnitObject {
    AudioUnit audioUnit;
    AudioComponentDescription audioDesc;
    audioDesc.componentType         = kAudioUnitType_Output;
    audioDesc.componentSubType      = kAudioUnitSubType_VoiceProcessingIO;//kAudioUnitSubType_RemoteIO;
    audioDesc.componentManufacturer = kAudioUnitManufacturer_Apple;
    audioDesc.componentFlags        = 0;
    audioDesc.componentFlagsMask    = 0;
    
    AudioComponent inputComponent = AudioComponentFindNext(NULL, &audioDesc);
    OSStatus status = AudioComponentInstanceNew(inputComponent, &audioUnit);
    if (status != noErr)  {
        NSLog(@"%@:  %s - create audio unit failed, status : %d \n",kModuleName, __func__, status);
        return NULL;
    }else {
        return audioUnit;
    }
}

kAudioUnitProperty_ShouldAllocateBuffer: defaults to true, in which case the audio unit allocates the buffer that receives data in the callback. Here we set it to false and define our own bufferList to receive the captured audio data.

- (void)initCaptureAudioBufferWithAudioUnit:(AudioUnit)audioUnit channelCount:(int)channelCount dataByteSize:(int)dataByteSize {
    // Disable AU buffer allocation for the recorder, we allocate our own.
    UInt32 flag     = 0;
    OSStatus status = AudioUnitSetProperty(audioUnit,
                                           kAudioUnitProperty_ShouldAllocateBuffer,
                                           kAudioUnitScope_Output,
                                           INPUT_BUS,
                                           &flag,
                                           sizeof(flag));
    if (status != noErr) {
        NSLog(@"%@:  %s - could not allocate buffer of callback, status : %d \n", kModuleName, __func__, status);
    }
    
    AudioBufferList * buffList = (AudioBufferList*)malloc(sizeof(AudioBufferList));
    buffList->mNumberBuffers               = 1;
    buffList->mBuffers[0].mNumberChannels  = channelCount;
    buffList->mBuffers[0].mDataByteSize    = dataByteSize;
    buffList->mBuffers[0].mData            = (UInt32 *)malloc(dataByteSize);
    m_buffList = buffList;
}
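Since we allocate the buffer list ourselves, we are also responsible for freeing it during teardown (e.g. alongside freeAudioUnit: in section 9); a sketch:

// Free the self-allocated buffer list when the capture class is torn down.
if (m_buffList != NULL) {
    if (m_buffList->mBuffers[0].mData) {
        free(m_buffList->mBuffers[0].mData);
        m_buffList->mBuffers[0].mData = NULL;
    }
    free(m_buffList);
    m_buffList = NULL;
}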

input bus / input element: connects to the device's hardware input (e.g. the microphone)

output bus / output element: connects to the device's hardware output (e.g. the speaker)

input scope / output scope: each element has an input scope and an output scope. Taking capture as the example, audio flows from the hardware into the audio unit on the input scope of the input element, but our code can only read it from the output scope, because the input scope is where the audio unit talks to the hardware. That is why the code combines INPUT_BUS with kAudioUnitScope_Output.

A remote I/O audio unit has output enabled and input disabled by default. Since this article uses the audio unit for capture, we enable the input and disable the output.
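The INPUT_BUS and OUTPUT_BUS constants used throughout the code are simply the I/O unit's element numbers (element 1 connects to the input hardware, element 0 to the output hardware). If your project does not define them, they amount to:

#define INPUT_BUS  1   // I/O unit element 1: hardware input (microphone)
#define OUTPUT_BUS 0   // I/O unit element 0: hardware output (speaker)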

- (void)setAudioUnitPropertyWithAudioUnit:(AudioUnit)audioUnit dataFormat:(AudioStreamBasicDescription)dataFormat {
    OSStatus status;
    status = AudioUnitSetProperty(audioUnit,
                                  kAudioUnitProperty_StreamFormat,
                                  kAudioUnitScope_Output,
                                  INPUT_BUS,
                                  &dataFormat,
                                  sizeof(dataFormat));
    if (status != noErr) {
        NSLog(@"%@:  %s - set audio unit stream format failed, status : %d \n",kModuleName, __func__,status);
    }
    
    /*
     // Bypass voice processing (echo cancellation); had no measurable effect in testing.
     UInt32 echoCancellation = 0;
     AudioUnitSetProperty(m_audioUnit,
     kAUVoiceIOProperty_BypassVoiceProcessing,
     kAudioUnitScope_Global,
     0,
     &echoCancellation,
     sizeof(echoCancellation));
     */
    
    UInt32 enableFlag = 1;
    status = AudioUnitSetProperty(audioUnit,
                                  kAudioOutputUnitProperty_EnableIO,
                                  kAudioUnitScope_Input,
                                  INPUT_BUS,
                                  &enableFlag,
                                  sizeof(enableFlag));
    if (status != noErr) {
        NSLog(@"%@:  %s - could not enable input on AURemoteIO, status : %d \n",kModuleName, __func__, status);
    }
    
    UInt32 disableFlag = 0;
    status = AudioUnitSetProperty(audioUnit,
                                  kAudioOutputUnitProperty_EnableIO,
                                  kAudioUnitScope_Output,
                                  OUTPUT_BUS,
                                  &disableFlag,
                                  sizeof(disableFlag));
    if (status != noErr) {
        NSLog(@"%@:  %s - could not enable output on AURemoteIO, status : %d \n",kModuleName, __func__,status);
    }
}

- (void)initCaptureCallbackWithAudioUnit:(AudioUnit)audioUnit callBack:(AURenderCallback)callBack {
    AURenderCallbackStruct captureCallback;
    captureCallback.inputProc        = callBack;
    captureCallback.inputProcRefCon  = (__bridge void *)self;
    OSStatus status                  = AudioUnitSetProperty(audioUnit,
                                                            kAudioOutputUnitProperty_SetInputCallback,
                                                            kAudioUnitScope_Global,
                                                            INPUT_BUS,
                                                            &captureCallback,
                                                            sizeof(captureCallback));
    
    if (status != noErr) {
        NSLog(@"%@:  %s - Audio Unit set capture callback failed, status : %d \n",kModuleName, __func__,status);
    }
}

6. Start the audio unit

Simply call AudioOutputUnitStart to start the audio unit. If everything above is configured correctly, the audio unit starts working right away.
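One assumption worth making explicit (the snippets above don't show it): before starting capture, the audio session must use a category that allows input, and the user must have granted microphone permission. A minimal sketch:

// Configure the audio session for recording before calling AudioOutputUnitStart.
NSError *error = nil;
AVAudioSession *session = [AVAudioSession sharedInstance];
[session setCategory:AVAudioSessionCategoryPlayAndRecord error:&error];
[session setActive:YES error:&error];
[session requestRecordPermission:^(BOOL granted) {
    if (!granted) NSLog(@"Microphone permission denied");
}];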

- (void)startAudioCaptureWithAudioUnit:(AudioUnit)audioUnit isRunning:(BOOL *)isRunning {
    OSStatus status;
    
    if (*isRunning) {
        NSLog(@"%@:  %s - start recorder repeat \n",kModuleName,__func__);
        return;
    }
    
    status = AudioOutputUnitStart(audioUnit);
    if (status == noErr) {
        *isRunning        = YES;
        NSLog(@"%@:  %s - start audio unit success \n",kModuleName,__func__);
    }else {
        *isRunning  = NO;
        NSLog(@"%@:  %s - start audio unit failed \n",kModuleName,__func__);
    }
}

7. Process audio data in the callback
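In the callback, AudioUnitRender pulls the captured frames from the input element into the AudioBufferList we allocated earlier; from there the data can be consumed as needed, here optionally handed to the file-writing class.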

static OSStatus AudioCaptureCallback(void                       *inRefCon,
                                     AudioUnitRenderActionFlags *ioActionFlags,
                                     const AudioTimeStamp       *inTimeStamp,
                                     UInt32                     inBusNumber,
                                     UInt32                     inNumberFrames,
                                     AudioBufferList            *ioData) {
    // Pull the captured frames from the input element into our own buffer list.
    AudioUnitRender(m_audioUnit, ioActionFlags, inTimeStamp, inBusNumber, inNumberFrames, m_buffList);
    
    XDXAudioCaptureManager *manager = (__bridge XDXAudioCaptureManager *)inRefCon;
    
    /*  Test audio fps
     static Float64 lastTime = 0;
     Float64 currentTime = CMTimeGetSeconds(CMClockMakeHostTimeFromSystemUnits(inTimeStamp->mHostTime))*1000;
     NSLog(@"Test duration - %f",currentTime - lastTime);
     lastTime = currentTime;
     */
    
    void    *bufferData = m_buffList->mBuffers[0].mData;
    UInt32   bufferSize = m_buffList->mBuffers[0].mDataByteSize;
    
    //    NSLog(@"demon = %d",bufferSize);
    
    // While recording is active, append the captured PCM to the audio file.
    if (manager.isRecordVoice) {
        [[XDXAudioFileHandler getInstance] writeFileWithInNumBytes:bufferSize
                                                      ioNumPackets:inNumberFrames
                                                          inBuffer:bufferData
                                                      inPacketDesc:NULL];
    }
    
    return noErr;
}

8. Stop the audio unit

AudioOutputUnitStop: stops the audio unit.

-(void)stopAudioCaptureWithAudioUnit:(AudioUnit)audioUnit isRunning:(BOOL *)isRunning {
    if (*isRunning == NO) {
        NSLog(@"%@:  %s - stop capture repeat \n",kModuleName,__func__);
        return;
    }
    
    *isRunning = NO;
    if (audioUnit != NULL) {
        OSStatus status = AudioOutputUnitStop(audioUnit);
        if (status != noErr){
            NSLog(@"%@:  %s - stop audio unit failed. \n",kModuleName,__func__);
        }else {
            NSLog(@"%@:  %s - stop audio unit successful",kModuleName,__func__);
        }
    }
}

9. Release the audio unit

When the audio unit is no longer needed at all, release the resources this class holds for it. Note that order matters: first stop the audio unit, then uninitialize it (reverting the initialized state), and finally dispose of it to free all related memory.

- (void)freeAudioUnit:(AudioUnit)audioUnit {
    if (!audioUnit) {
        NSLog(@"%@:  %s - repeat call!",kModuleName,__func__);
        return;
    }
    
    OSStatus result = AudioOutputUnitStop(audioUnit);
    if (result != noErr){
        NSLog(@"%@:  %s - stop audio unit failed.",kModuleName,__func__);
    }
    
    result = AudioUnitUninitialize(audioUnit);
    if (result != noErr) {
        NSLog(@"%@:  %s - uninitialize audio unit failed, status : %d",kModuleName,__func__,result);
    }
    
    // Disposing the instance may trigger repeated audio route change notifications.
    result = AudioComponentInstanceDispose(audioUnit);
    if (result != noErr) {
        NSLog(@"%@:  %s - dispose audio unit failed. status : %d",kModuleName,__func__,result);
    }else {
        audioUnit = NULL; // Clears only the local copy; the caller should also nil its own reference (e.g. m_audioUnit).
    }
}

10. Audio file recording

This part is covered in a separate article: Audio File Recording.
