Audio File Merging in iOS

2019-01-24  丶过客匆匆

Audio merging:

Use case:

An aside: I had been using UNNotificationServiceExtension + AVSpeechSynthesizer to speak incoming push notifications aloud. Since iOS 12, a notification service extension can no longer drive AVSpeechSynthesizer, so my first thought was to work around it with local notifications plus concatenated audio files. While I was at it, I also looked into whether merging audio could cure the frequent-vibration problem. I ran head first into a wall on that one too, but I still picked up a few things along the way, hence this post.
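A minimal sketch of the local-notification side of that workaround, assuming notification permission has already been granted and that the merged file `speak.m4a` already sits in Library/Sounds (UNNotificationSound searches the main bundle and the app container's Library/Sounds directory, which is exactly where `getFilePath` below writes the file). The identifier, title, and one-second trigger are placeholders:

#import <UserNotifications/UserNotifications.h>

// Sketch: play the merged `speak.m4a` through a local notification.
// Assumes notification permission was already granted.
- (void)scheduleSpokenNotification
{
    UNMutableNotificationContent *content = [[UNMutableNotificationContent alloc] init];
    content.title = @"Payment received"; // placeholder title
    // UNNotificationSound looks for the file in the main bundle or Library/Sounds
    content.sound = [UNNotificationSound soundNamed:@"speak.m4a"];

    // Fire once, one second from now (placeholder trigger)
    UNTimeIntervalNotificationTrigger *trigger =
        [UNTimeIntervalNotificationTrigger triggerWithTimeInterval:1 repeats:NO];
    UNNotificationRequest *request =
        [UNNotificationRequest requestWithIdentifier:@"speak.notification"
                                             content:content
                                             trigger:trigger];
    [[UNUserNotificationCenter currentNotificationCenter]
        addNotificationRequest:request
         withCompletionHandler:^(NSError * _Nullable error) {
             if (error) NSLog(@"Failed to schedule notification: %@", error);
         }];
}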

Terminology: the code below leans on a handful of AVFoundation types: AVURLAsset (a media file referenced by URL), AVMutableComposition (a mutable container that stitches tracks from multiple assets together on a shared timeline), AVMutableCompositionTrack (a single track within that composition), AVAssetTrack (a source track read from an asset), and AVAssetExportSession (writes a composition out to a file).

Implementation:
#import <AVFoundation/AVFoundation.h>
#import <AudioToolbox/AudioToolbox.h>

/**
 *  Merge the given audio files end to end
 *
 *  @param soundArr Names of the bundled mp3 resources, in playback order
 */
- (void)mergeAudioWithSoundArray:(NSArray<NSString *> *)soundArr
{
    NSMutableArray <AVURLAsset *> *audioAssetArr = [NSMutableArray arrayWithCapacity:0];
    // Composition tracks, one per source asset
    NSMutableArray <AVMutableCompositionTrack *> *audioCompositionTrackArr = [NSMutableArray arrayWithCapacity:0];
    // The source audio track pulled from each asset
    NSMutableArray <AVAssetTrack *> *audioAssetTrackArr = [NSMutableArray arrayWithCapacity:0];

    AVMutableComposition *audioComposition = [AVMutableComposition composition];

    for (NSString *soundStr in soundArr)
    {
        NSString *audioPath = [[NSBundle mainBundle] pathForResource:soundStr ofType:@"mp3"];
        AVURLAsset *audioAsset = [AVURLAsset assetWithURL:[NSURL fileURLWithPath:audioPath]];
        [audioAssetArr addObject:audioAsset];
        // Add a new audio track to the composition; kCMPersistentTrackID_Invalid
        // lets AVFoundation assign a track ID for us
        AVMutableCompositionTrack *audioCompositionTrack = [audioComposition addMutableTrackWithMediaType:AVMediaTypeAudio preferredTrackID:kCMPersistentTrackID_Invalid];
        [audioCompositionTrackArr addObject:audioCompositionTrack];
        // The source asset's own audio track
        AVAssetTrack *audioAssetTrack = [[audioAsset tracksWithMediaType:AVMediaTypeAudio] firstObject];
        [audioAssetTrackArr addObject:audioAssetTrack];
    }

    // Insert each source track so it starts where the previous one ends.
    // The start time must be the *cumulative* duration of all preceding
    // assets, not just the previous asset's duration.
    CMTime cursor = kCMTimeZero;
    for (NSUInteger i = 0; i < audioAssetArr.count; i++)
    {
        NSError *insertError = nil;
        [audioCompositionTrackArr[i] insertTimeRange:CMTimeRangeMake(kCMTimeZero, audioAssetArr[i].duration)
                                             ofTrack:audioAssetTrackArr[i]
                                              atTime:cursor
                                               error:&insertError];
        if (insertError) NSLog(@"Insert failed: %@", insertError);
        cursor = CMTimeAdd(cursor, audioAssetArr[i].duration);
    }
    
    // Export the merged composition. The `presetName` must correspond to the
    // `session.outputFileType` set below.
    AVAssetExportSession *session = [[AVAssetExportSession alloc] initWithAsset:audioComposition presetName:AVAssetExportPresetAppleM4A];
    NSString *outputFilePath = [self getFilePath];
    // Log the file types this session supports
    NSLog(@"---%@", [session supportedFileTypes]);
    // The export fails if a file already exists at the output URL, so clear any leftover
    [[NSFileManager defaultManager] removeItemAtPath:outputFilePath error:nil];
    session.outputURL = [NSURL fileURLWithPath:outputFilePath];
    session.outputFileType = AVFileTypeAppleM4A; // must match the preset above
    session.shouldOptimizeForNetworkUse = YES;   // optimize for streaming
    
    [session exportAsynchronouslyWithCompletionHandler:^{
        if (session.status == AVAssetExportSessionStatusCompleted) {
            NSLog(@"Merge succeeded ---- %@", outputFilePath);

            NSURL *soundURL = [NSURL fileURLWithPath:outputFilePath];
            SystemSoundID soundID;
            AudioServicesCreateSystemSoundID((__bridge CFURLRef)soundURL, &soundID);
            // Play the merged file once as an alert sound, then release the sound ID
            AudioServicesPlayAlertSoundWithCompletion(soundID, ^{
                AudioServicesDisposeSystemSoundID(soundID);
            });
        } else {
            NSLog(@"session.status: %ld", (long)session.status);
            // For the other cases, see `AVAssetExportSessionStatus`.
        }
    }];
}

- (NSString *)getFilePath
{
    NSString *libraryPath = [NSSearchPathForDirectoriesInDomains(NSLibraryDirectory, NSUserDomainMask, YES) firstObject];
    // Library/Sounds is where UNNotificationSound looks for custom notification sounds
    NSString *folderPath = [libraryPath stringByAppendingPathComponent:@"Sounds"];
    // Create the Sounds folder itself if it does not exist yet
    BOOL isCreateSuccess = [[NSFileManager defaultManager] createDirectoryAtPath:folderPath withIntermediateDirectories:YES attributes:nil error:nil];
    if (!isCreateSuccess) NSLog(@"Failed to create %@", folderPath);
    return [folderPath stringByAppendingPathComponent:@"speak.m4a"];
}
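A hypothetical call site, assuming `money.mp3` and `100.mp3` are resources bundled with the app. The export runs asynchronously, so any follow-up playback belongs after the completion handler fires; the playback lines below also assume a strong `self.audioPlayer` property, since a local AVAudioPlayer would be deallocated before it finishes playing:

// Hypothetical call: `money.mp3` and `100.mp3` stand in for real bundled resources
[self mergeAudioWithSoundArray:@[@"money", @"100"]];

// Once the export has completed, AVAudioPlayer is an alternative to SystemSoundID.
// `self.audioPlayer` is an assumed strong property.
NSError *playError = nil;
self.audioPlayer = [[AVAudioPlayer alloc] initWithContentsOfURL:[NSURL fileURLWithPath:[self getFilePath]]
                                                          error:&playError];
if (playError) {
    NSLog(@"Player init failed: %@", playError);
} else {
    [self.audioPlayer play];
}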