
Audio Decoding with Audio Converter

2019-05-19  小东邪啊

Requirement

On iOS, decode compressed audio data (such as AAC) to obtain raw audio data: linear PCM.

In this example we capture AAC compressed data with an Audio Queue, decode it to PCM, and save the decoded PCM to the sandbox as a recording. Parameters such as the decoded sample rate and the decoder type can be adjusted.

The example can be extended: it is not limited to decoding an AAC audio stream, but can also decode audio files, the audio track of video files, and so on.


Implementation Principle

Audio Converter in the Audio Toolbox framework can decode audio data, i.e. convert AAC data into raw PCM audio data.


Prerequisites:


GitHub (with code): Audio Decoding

简书: Audio Decoding

掘金: Audio Decoding

Blog: Audio Decoding


1. Initialization

1.1. Initialize the Decoder

Initialize the decoder instance by specifying the source data format, the decoded (destination) format, the sample rate, and whether to use hardware or software decoding. The concrete steps follow.

- (instancetype)initWithSourceFormat:(AudioStreamBasicDescription)sourceFormat destFormatID:(AudioFormatID)destFormatID sampleRate:(float)sampleRate isUseHardwareDecode:(BOOL)isUseHardwareDecode {
    if (self = [super init]) {
        mSourceFormat   = sourceFormat;
        mAudioConverter = [self configureDecoderBySourceFormat:sourceFormat
                                                    destFormat:&mDestinationFormat
                                                  destFormatID:destFormatID
                                                    sampleRate:sampleRate
                                           isUseHardwareDecode:isUseHardwareDecode];
    }
    return self;
}

1.2. Configure the Decoded ASBD (Audio Stream Format)

    AudioStreamBasicDescription destinationFormat = {0};
    destinationFormat.mSampleRate = sampleRate;
    if (destFormatID != kAudioFormatLinearPCM) {
        NSLog(@"The decoded format must be linear PCM!");
        return NULL;
    } else {
        destinationFormat.mFormatID          = destFormatID;   // kAudioFormatLinearPCM
        destinationFormat.mChannelsPerFrame  = sourceFormat.mChannelsPerFrame;
        destinationFormat.mFormatFlags       = (kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked);
        destinationFormat.mFramesPerPacket   = kXDXAudioPCMFramesPerPacket;
        destinationFormat.mBitsPerChannel    = KXDXAudioBitsPerChannel;
        destinationFormat.mBytesPerFrame     = destinationFormat.mBitsPerChannel / 8 * destinationFormat.mChannelsPerFrame;
        destinationFormat.mBytesPerPacket    = destinationFormat.mBytesPerFrame * destinationFormat.mFramesPerPacket;
        destinationFormat.mReserved          = 0;
    }
    memcpy(destFormat, &destinationFormat, sizeof(AudioStreamBasicDescription));

Decoding audio simply means converting a compressed format such as AAC into linear PCM raw audio data. The kAudioFormatProperty_FormatInfo property can be queried to have the system fill in the parameters of a given audio format automatically.
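A minimal sketch (not taken from the repo; the format values here are illustrative assumptions) of letting kAudioFormatProperty_FormatInfo complete a partially filled ASBD:

    // Illustrative: fill in only what we know, then ask Core Audio for the rest.
    AudioStreamBasicDescription asbd = {0};
    asbd.mFormatID         = kAudioFormatMPEG4AAC;   // assumed source format
    asbd.mSampleRate       = 44100;                  // assumed sample rate
    asbd.mChannelsPerFrame = 1;                      // assumed channel count
    UInt32 size = sizeof(asbd);
    // Core Audio completes the remaining fields, e.g. mFramesPerPacket = 1024 for AAC.
    OSStatus ret = AudioFormatGetProperty(kAudioFormatProperty_FormatInfo,
                                          0, NULL, &size, &asbd);
    if (ret != noErr) {
        NSLog(@"kAudioFormatProperty_FormatInfo failed, status = %d", (int)ret);
    }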

1.3. Choose the Decoder

The AudioClassDescription struct describes an audio codec known to the system; its most important field tells us whether the codec is implemented in hardware or in software. The number of matching decoders, i.e. the length of the array below, is obtained by first querying the size of the kAudioFormatProperty_Decoders property.

// Get the class description of the decoder to use.
    AudioClassDescription *audioClassDesc = [self getAudioCalssDescriptionWithType:destFormatID fromManufacture:kAppleHardwareAudioCodecManufacturer];
...

- (AudioClassDescription *)getAudioCalssDescriptionWithType:(AudioFormatID)type fromManufacture:(uint32_t)manufacture {
    static AudioClassDescription desc;
    UInt32 decoderSpecific = type;
    UInt32 size;
    // Ask how much data kAudioFormatProperty_Decoders will return for this format.
    OSStatus status = AudioFormatGetPropertyInfo(kAudioFormatProperty_Decoders,
                                                 sizeof(decoderSpecific),
                                                 &decoderSpecific,
                                                 &size);
    
    if (status != noErr) {
        NSLog(@"Error: get decoder info failed, status = %d", (int)status);
        return NULL;
    }
    
    // Number of decoders available for this format.
    unsigned int count = size / sizeof(AudioClassDescription);
    // Array holding one AudioClassDescription per decoder.
    AudioClassDescription description[count];
    // Fill the array with the descriptions of all matching decoders.
    status = AudioFormatGetProperty(kAudioFormatProperty_Decoders,
                                    sizeof(decoderSpecific),
                                    &decoderSpecific,
                                    &size,
                                    description);
    
    if (status != noErr) {
        NSLog(@"Error: get decoder property failed, status = %d", (int)status);
        return NULL;
    }
    
    for (unsigned int i = 0; i < count; i++) {
        if (type == description[i].mSubType && manufacture == description[i].mManufacturer) {
            desc = description[i];
            return &desc;
        }
    }
    return NULL;
}

Note: hardware decoding uses the device's dedicated codec hardware for efficient decoding and lower CPU load; software decoding does the work on the CPU in the traditional way.
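A minimal sketch (an assumption mirroring the isUseHardwareDecode flag, not code quoted from the repo): the hardware/software switch simply selects which manufacturer constant is passed when looking up the class description.

    // Illustrative wiring of the hardware/software switch.
    UInt32 manufacturer = isUseHardwareDecode
                        ? kAppleHardwareAudioCodecManufacturer
                        : kAppleSoftwareAudioCodecManufacturer;
    AudioClassDescription *audioClassDesc = [self getAudioCalssDescriptionWithType:destFormatID
                                                                   fromManufacture:manufacturer];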

1.4. Create the Decoder

AudioConverterNewSpecific creates the audio converter instance backed by the specified codec. Its third and fourth parameters are the number of class descriptions and the class description array; in the code below the channel count is passed as that number.

    // Create the AudioConverterRef.
    AudioConverterRef converter = NULL;
    if (![self checkError:AudioConverterNewSpecific(&sourceFormat, &destinationFormat, destinationFormat.mChannelsPerFrame, audioClassDesc, &converter) withErrorString:@"Audio Converter New failed"]) {
        return NULL;
    }else {
        printf("Audio converter create successful \n");
    }

2. Decoding

2.1. Calculate the Decoded Data Size

Note: whether you use an Audio Converter to encode or decode AAC, each conversion works on 1024 sample frames; this value is fixed (it is the AAC packet size).

From that frame count we can compute the size of the decoded audio, because the size of linear PCM follows a simple formula: number of output packets × channel count × bytes per frame (for PCM there is one frame per packet).

// Note: one AAC packet always decodes to 1024 PCM frames.
    UInt32 ioOutputDataPackets = kIOOutputDataPackets;
    UInt32 outputBufferSize = (UInt32)(ioOutputDataPackets * destFormat.mChannelsPerFrame * destFormat.mBytesPerFrame);
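For a concrete sense of scale: assuming 16-bit stereo output (mBytesPerFrame = 4), this allocates 1024 × 2 × 4 = 8192 bytes, comfortably more than the 1024 × 4 = 4096 bytes that one decoded packet of 16-bit stereo PCM actually occupies, since mBytesPerFrame already includes the channel count.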

2.2. Pre-allocate Memory for the Decoded Audio

Allocate the output buffer list using the size computed in 2.1.

    // Set up output buffer list.
    AudioBufferList fillBufferList = {0};
    fillBufferList.mNumberBuffers = 1;
    fillBufferList.mBuffers[0].mNumberChannels  = destFormat.mChannelsPerFrame;
    fillBufferList.mBuffers[0].mDataByteSize    = outputBufferSize;
    fillBufferList.mBuffers[0].mData            = malloc(outputBufferSize * sizeof(char));

2.3. Decode the Audio Data

AudioConverterFillComplexBuffer performs the actual decode. Its second parameter is a C callback function: inside that callback we hand the source (compressed) data to the converter by filling in the ioData parameter, which is the last point at which we control the source audio before conversion. Once the callback returns, the converter decodes the data and writes the result into the fifth parameter, the fillBufferList we set up above.

Finally, the decoded PCM data is handed back to the caller through a completion block.


OSStatus DecodeConverterComplexInputDataProc(AudioConverterRef              inAudioConverter,
                                             UInt32                         *ioNumberDataPackets,
                                             AudioBufferList                *ioData,
                                             AudioStreamPacketDescription   **outDataPacketDescription,
                                             void                           *inUserData) {
    XDXConverterInfoType *info = (XDXConverterInfoType *)inUserData;
    
    if (info->sourceDataSize <= 0) {
        // No source data left: tell the converter we are providing zero packets.
        *ioNumberDataPackets = 0;
        return -1;
    }
    
    *outDataPacketDescription = &info->packetDesc;
    (*outDataPacketDescription)[0].mStartOffset             = 0;
    (*outDataPacketDescription)[0].mDataByteSize            = info->sourceDataSize;
    (*outDataPacketDescription)[0].mVariableFramesInPacket  = 0;
    
    ioData->mNumberBuffers              = 1;
    ioData->mBuffers[0].mData           = info->sourceBuffer;
    ioData->mBuffers[0].mNumberChannels = info->sourceChannelsPerFrame;
    ioData->mBuffers[0].mDataByteSize   = info->sourceDataSize;
    
    // Exactly one AAC packet is supplied per callback.
    *ioNumberDataPackets = 1;
    
    return noErr;
}


- (void)decodeFormatByConverter:(AudioConverterRef)audioConverter sourceBuffer:(void *)sourceBuffer sourceBufferSize:(UInt32)sourceBufferSize sourceFormat:(AudioStreamBasicDescription)sourceFormat dest:(AudioStreamBasicDescription)destFormat completeHandler:(void(^)(AudioBufferList *destBufferList, UInt32 outputPackets, AudioStreamPacketDescription *outputPacketDescriptions))completeHandler {
    ...
    
    XDXConverterInfoType userInfo        = {0};
    userInfo.sourceBuffer                = sourceBuffer;
    userInfo.sourceDataSize              = sourceBufferSize;
    userInfo.sourceChannelsPerFrame      = sourceFormat.mChannelsPerFrame;
    userInfo.packetDesc.mDataByteSize    = (UInt32)sourceBufferSize;
    userInfo.packetDesc.mStartOffset     = 0;
    userInfo.packetDesc.mVariableFramesInPacket = 0;
    
    AudioStreamPacketDescription outputPacketDesc;
    OSStatus status = AudioConverterFillComplexBuffer(audioConverter,
                                                      DecodeConverterComplexInputDataProc,
                                                      &userInfo,
                                                      &ioOutputDataPackets,
                                                      &fillBufferList,
                                                      &outputPacketDesc);
    
    // if interrupted in the process of the conversion call, we must handle the error appropriately
    if (status != noErr) {
        if (status == kAudioConverterErr_HardwareInUse) {
            printf("Audio Converter returned kAudioConverterErr_HardwareInUse!\n");
        } else {
            if (![self checkError:status withErrorString:@"AudioConverterFillComplexBuffer error!"]) {
                return;
            }
        }
    } else {
        if (ioOutputDataPackets == 0) {
            // This is the EOF condition.
            status = noErr;
        }
        
        if (completeHandler) {
            completeHandler(&fillBufferList, ioOutputDataPackets, &outputPacketDesc);
        }
    }
}

3. Hooking Up the Module

Because audio decoding depends on audio capture, this example uses Audio Queue capture for the demonstration: the Audio Queue captures AAC data, and this module decodes it into PCM. For details on the capture side, refer to the related article.
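For context, here is a minimal sketch of setting up such a capture queue, assuming an AAC input format and the CaptureAudioDataCallback shown in 3.2 below; the sample rate and channel count are illustrative, and the real setup lives in the repo's capture class.

    // Illustrative only: configure an Audio Queue whose buffers contain AAC packets.
    AudioStreamBasicDescription aacFormat = {0};
    aacFormat.mFormatID         = kAudioFormatMPEG4AAC;
    aacFormat.mSampleRate       = 44100;   // assumed
    aacFormat.mChannelsPerFrame = 1;       // assumed
    aacFormat.mFramesPerPacket  = 1024;    // fixed for AAC
    
    AudioQueueRef captureQueue = NULL;
    OSStatus ret = AudioQueueNewInput(&aacFormat,
                                      CaptureAudioDataCallback,     // callback from section 3.2
                                      (__bridge void *)self,
                                      NULL, NULL, 0,
                                      &captureQueue);
    if (ret != noErr) {
        NSLog(@"AudioQueueNewInput failed, status = %d", (int)ret);
    }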

3.1. Initialize the Decoder

As shown below, declare a decoder property in the audio capture class and initialize it. You only need to specify the source format, the decoded format, the sample rate, and whether to use the hardware or software decoder.

@property (nonatomic, strong) XDXAduioDecoder *audioDecoder;

...

        // audio decode: aac->pcm
        self.audioDecoder = [[XDXAduioDecoder alloc] initWithSourceFormat:m_audioInfo->mDataFormat
                                                             destFormatID:kAudioFormatLinearPCM
                                                               sampleRate:48000
                                                      isUseHardwareDecode:YES];

3.2. Decode the Audio Data

In the callback where the Audio Queue delivers captured AAC data, feed the AAC into the decoder, then write the resulting PCM data to a file inside the completion block.

Note: when an Audio Queue is asked to capture AAC directly, the system actually performs a conversion internally. Raw capture can only produce PCM, so configuring the Audio Queue for AAC means the system converts the PCM to AAC for us under the hood.

static void CaptureAudioDataCallback(void *                                 inUserData,
                                     AudioQueueRef                          inAQ,
                                     AudioQueueBufferRef                    inBuffer,
                                     const AudioTimeStamp *                 inStartTime,
                                     UInt32                                 inNumPackets,
                                     const AudioStreamPacketDescription*    inPacketDesc) {
    
    XDXAudioQueueCaptureManager *instance = (__bridge XDXAudioQueueCaptureManager *)inUserData;
    
    [instance.audioDecoder decodeAudioWithSourceBuffer:inBuffer->mAudioData
                                      sourceBufferSize:inBuffer->mAudioDataByteSize
                                       completeHandler:^(AudioBufferList * _Nonnull destBufferList, UInt32 outputPackets, AudioStreamPacketDescription * _Nonnull outputPacketDescriptions) {
                                           if (instance.isRecordVoice) {
                                               [[XDXAudioFileHandler getInstance] writeFileWithInNumBytes:destBufferList->mBuffers->mDataByteSize
                                                                                             ioNumPackets:outputPackets
                                                                                                 inBuffer:destBufferList->mBuffers->mData
                                                                                             inPacketDesc:outputPacketDescriptions];
                                           }
                                           
                                           free(destBufferList->mBuffers->mData);
                                       }];
    
    if (instance.isRunning) {
        AudioQueueEnqueueBuffer(inAQ, inBuffer, 0, NULL);
    }
}

4. Recording to a File

This part is covered in another article: Audio File Recording.
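For reference, a hedged sketch of what such a file handler typically does under the hood: writing the decoded PCM packets with AudioFileWritePackets. The file ID and packet offset are assumed to be managed elsewhere (e.g. the file opened with AudioFileCreateWithURL); this is not the repo's XDXAudioFileHandler implementation.

    // Illustrative sketch, not the repo's implementation.
    static AudioFileID sPCMFile;        // assumed: opened elsewhere with AudioFileCreateWithURL
    static SInt64      sPacketOffset;   // running packet index into the file
    
    static void WriteDecodedPCM(const void *data, UInt32 numBytes, UInt32 numPackets) {
        // For constant-bit-rate linear PCM the packet descriptions may be NULL.
        OSStatus ret = AudioFileWritePackets(sPCMFile, false, numBytes, NULL,
                                             sPacketOffset, &numPackets, data);
        if (ret == noErr) {
            sPacketOffset += numPackets;
        }
    }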

5. Release the Decoder

If you need to free the decoder, make sure it has completely finished its work before disposing of it.

- (void)freeEncoder {
    if (mAudioConverter) {
        AudioConverterDispose(mAudioConverter);
        mAudioConverter = NULL;
    }
}