iOS Audio/Video Editing

AVFoundation: AVAssetReader and AVAssetWriter

2017-06-21  Y_Swordsman

With AVFoundation you can implement audio mixing, concatenation, muting, fast-forward, reverse playback, and more. If you are not yet familiar with how AVFoundation processes video data, please read the previous article first.
==> GitHub link (I finally found time to clean up the demo and push it to GitHub; if anything in the demo is broken, feel free to contact me!)
Once the audio and video tracks have been processed (for example, mixed as in the previous article), the next step is to export and save the composed tracks. For that we normally pair AVAssetWriter with AVAssetReader. From the previous article we know that processing the tracks leaves us with three instances: AVMutableComposition, AVMutableVideoComposition, and AVMutableAudioMix. With these three objects in hand, we can create the AVAssetReader.

- (AVAssetReader *)createAssetReader:(AVComposition *)composition
                    videoComposition:(AVVideoComposition *)videoComposition
                            audioMix:(AVAudioMix *)audioMix{
    
    NSError *error = nil;
    AVAssetReader *assetReader = [AVAssetReader assetReaderWithAsset:composition error:&error];
    
    // Read the full duration of the composition.
    assetReader.timeRange = CMTimeRangeMake(kCMTimeZero, composition.duration);
    
    NSDictionary *outputSettings = @{(id)kCVPixelBufferPixelFormatTypeKey: @(kCVPixelFormatType_420YpCbCr8BiPlanarFullRange)};
    AVAssetReaderVideoCompositionOutput *readerVideoOutput = [AVAssetReaderVideoCompositionOutput assetReaderVideoCompositionOutputWithVideoTracks:[composition tracksWithMediaType:AVMediaTypeVideo]
                                                                                                                                     videoSettings:outputSettings];
#if ! TARGET_IPHONE_SIMULATOR
    if ([videoComposition isKindOfClass:[AVMutableVideoComposition class]])
        [(AVMutableVideoComposition *)videoComposition setRenderScale:1.0];
#endif
    readerVideoOutput.videoComposition = videoComposition;
    readerVideoOutput.alwaysCopiesSampleData = NO;
    [assetReader addOutput:readerVideoOutput];
    
    NSArray *audioTracks = [composition tracksWithMediaType:AVMediaTypeAudio];
    
    // Only add an audio output if the composition actually has audio tracks.
    BOOL shouldRecordAudioTrack = ([audioTracks count] > 0); // modified by Chen Siyang, 2017-04-08
    AVAssetReaderAudioMixOutput *readerAudioOutput = nil;
    
    if (shouldRecordAudioTrack)
    {
        readerAudioOutput = [AVAssetReaderAudioMixOutput assetReaderAudioMixOutputWithAudioTracks:audioTracks audioSettings:nil];
        readerAudioOutput.audioMix = audioMix;
        readerAudioOutput.alwaysCopiesSampleData = NO;
        [assetReader addOutput:readerAudioOutput];
    }
    
    return assetReader;
}
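
As a quick orientation, this is roughly how the method above would be called with the three objects from the composition step (mutableComposition, mutableVideoComposition, and mutableAudioMix are placeholder names for whatever your composition code produced):

    // Sketch of a call site; the three objects come from the composition step.
    AVAssetReader *assetReader = [self createAssetReader:mutableComposition
                                        videoComposition:mutableVideoComposition
                                                audioMix:mutableAudioMix];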

AVAssetReaderOutput

Inside the method that creates the AVAssetReader we created two outputs, an AVAssetReaderVideoCompositionOutput and an AVAssetReaderAudioMixOutput. Both are subclasses of AVAssetReaderOutput, carrying the video output and the audio output respectively.
We can also retrieve these outputs back from the AVAssetReader:

    // Retrieve the audio and video outputs from the reader.
    AVAssetReaderVideoCompositionOutput *readerVideoTrackOutput = nil;
    AVAssetReaderAudioMixOutput *readerAudioOutput = nil;
    for (AVAssetReaderOutput *output in assetReader.outputs) {
        if ([output.mediaType isEqualToString:AVMediaTypeVideo]) {
            readerVideoTrackOutput = (AVAssetReaderVideoCompositionOutput *)output;
        }
        if ([output.mediaType isEqualToString:AVMediaTypeAudio]) {
            readerAudioOutput = (AVAssetReaderAudioMixOutput *)output;
        }
    }

AVAssetWriter

The job of AVAssetWriter is to take the AVAssetReader's outputs (the AVAssetReaderOutput objects) and write their samples to a file through a pull-style callback.

Creating the AVAssetWriter
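
The writer needs an output URL; the snippets below assume an outPutUrl variable pointing at a writable location. A minimal sketch (the file name output.mov is just an example) that writes into the temporary directory:

    // Hypothetical output location; any writable file URL will do.
    NSString *outputPath = [NSTemporaryDirectory() stringByAppendingPathComponent:@"output.mov"];
    NSURL *outPutUrl = [NSURL fileURLWithPath:outputPath];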

    // Initialize the AVAssetWriter.
    // Remove any file left over at the output URL, otherwise writing will fail.
    NSFileManager *fileManager = [NSFileManager defaultManager];
    [fileManager removeItemAtURL:outPutUrl error:nil];
    AVAssetWriter *assetWriter = [AVAssetWriter assetWriterWithURL:outPutUrl fileType:AVFileTypeQuickTimeMovie error:nil];
    // Audio input. audioInputSetting may be nil, meaning the samples pass through unprocessed.
    NSDictionary *audioInputSetting = [self configAudioInput];
    AVAssetWriterInput *audioTrackInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeAudio outputSettings:audioInputSetting];
    
    // Video input. videoInputSetting may be nil, meaning the samples pass through unprocessed.
    NSDictionary *videoInputSetting = [self configVideoInput];
    AVAssetWriterInput *videoTrackInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo outputSettings:videoInputSetting];
    
    if ([assetWriter canAddInput:audioTrackInput]) {
        [assetWriter addInput:audioTrackInput];
    }
    if ([assetWriter canAddInput:videoTrackInput]) {
        [assetWriter addInput:videoTrackInput];
    }

The audio and video input settings here describe the format of the media you write out. If you pass nil, every sample is written through as-is, lossless and uncompressed, and the resulting file can be huge, on the order of a thousand times larger than a typical MP3.

Encoding settings

Audio encoding


/**
 Audio encoding settings

 @return the audio settings dictionary
 */
- (NSDictionary *)configAudioInput{
    // Stereo channel layout; with a layout tag set, the bitmap and descriptions are unused.
    AudioChannelLayout channelLayout = {
        .mChannelLayoutTag = kAudioChannelLayoutTag_Stereo,
        .mChannelBitmap = 0,
        .mNumberChannelDescriptions = 0
    };
    NSData *channelLayoutData = [NSData dataWithBytes:&channelLayout length:offsetof(AudioChannelLayout, mChannelDescriptions)];
    NSDictionary *audioInputSetting = @{
                                        AVFormatIDKey: @(kAudioFormatMPEG4AAC),
                                        AVSampleRateKey: @(44100),
                                        AVNumberOfChannelsKey: @(2),
                                        AVChannelLayoutKey: channelLayoutData
                                        };
    return audioInputSetting;
}
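
If you also want to bound the compressed size, AAC takes an explicit bit rate through AVEncoderBitRateKey; a hedged variant of the dictionary above (128 kbps is only an illustrative figure):

    // Optional sketch: add a bit-rate constraint to the settings above.
    NSMutableDictionary *settings = [audioInputSetting mutableCopy];
    settings[AVEncoderBitRateKey] = @(128000);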

Video encoding

/**
 Video encoding settings

 @return the video settings dictionary
 */
- (NSDictionary *)configVideoInput{
    NSDictionary *videoInputSetting = @{
                                        AVVideoCodecKey: AVVideoCodecH264,
                                        AVVideoWidthKey: @(374),
                                        AVVideoHeightKey: @(666)
                                        };
    return videoInputSetting;
}
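
The H.264 encoder can be tuned the same way through AVVideoCompressionPropertiesKey; a sketch that adds an average bit rate (the 2 Mbps figure is only an example):

    // Optional sketch: constrain the H.264 encoder via compression properties.
    NSDictionary *tunedVideoSetting = @{
                                        AVVideoCodecKey: AVVideoCodecH264,
                                        AVVideoWidthKey: @(374),
                                        AVVideoHeightKey: @(666),
                                        AVVideoCompressionPropertiesKey: @{AVVideoAverageBitRateKey: @(2000000)}
                                        };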

CMSampleBufferRef

At this point both the AVAssetReader and the AVAssetWriter are fully configured, so reading and writing can begin.

    [assetReader startReading];
    [assetWriter startWriting];
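
Both calls return a BOOL and report failures through their error properties, so a guard like the following could replace the two bare calls above:

    // Sketch: surface a start failure instead of silently looping.
    if (![assetReader startReading] || ![assetWriter startWriting]) {
        NSLog(@"Failed to start: %@ / %@", assetReader.error, assetWriter.error);
        return;
    }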

Next we create two serial queues and, in parallel, consume the CMSampleBufferRefs produced by the audio and video outputs:

    dispatch_queue_t rwAudioSerializationQueue = dispatch_queue_create("Audio Queue", DISPATCH_QUEUE_SERIAL);
    dispatch_queue_t rwVideoSerializationQueue = dispatch_queue_create("Video Queue", DISPATCH_QUEUE_SERIAL);
    dispatch_group_t dispatchGroup = dispatch_group_create();
    
    // The session's source start time is up to you; here we start at zero.
    [assetWriter startSessionAtSourceTime:kCMTimeZero];
    
    
    dispatch_group_enter(dispatchGroup);
    __block BOOL isAudioFirst = YES;
    [assetWriterAudioInput requestMediaDataWhenReadyOnQueue:rwAudioSerializationQueue usingBlock:^{
        
        while ([assetWriterAudioInput isReadyForMoreMediaData] && assetReader.status == AVAssetReaderStatusReading) {
            CMSampleBufferRef nextSampleBuffer = [assetReaderAudioOutput copyNextSampleBuffer];
            // Skip the first audio sample buffer; release it to avoid leaking the copy.
            if (isAudioFirst) {
                isAudioFirst = !isAudioFirst;
                if (nextSampleBuffer) {
                    CFRelease(nextSampleBuffer);
                }
                continue;
            }
            if (nextSampleBuffer) {
                [assetWriterAudioInput appendSampleBuffer:nextSampleBuffer];
                CFRelease(nextSampleBuffer);
            } else {
                // No more audio samples: finish this input and leave the group.
                [assetWriterAudioInput markAsFinished];
                dispatch_group_leave(dispatchGroup);
                break;
            }
        }
    }];
    
    dispatch_group_enter(dispatchGroup);
    __block BOOL isVideoFirst = YES;
    [assetWriterVideoInput requestMediaDataWhenReadyOnQueue:rwVideoSerializationQueue usingBlock:^{
        
        while ([assetWriterVideoInput isReadyForMoreMediaData] && assetReader.status == AVAssetReaderStatusReading) {
            
            CMSampleBufferRef nextSampleBuffer = [assetReaderVideoOutput copyNextSampleBuffer];
            // Skip the first video sample buffer; release it to avoid leaking the copy.
            if (isVideoFirst) {
                isVideoFirst = !isVideoFirst;
                if (nextSampleBuffer) {
                    CFRelease(nextSampleBuffer);
                }
                continue;
            }
            if (nextSampleBuffer) {
                [assetWriterVideoInput appendSampleBuffer:nextSampleBuffer];
                CFRelease(nextSampleBuffer);
                NSLog(@"Writing video sample");
            } else {
                // No more video samples: finish this input and leave the group.
                [assetWriterVideoInput markAsFinished];
                dispatch_group_leave(dispatchGroup);
                break;
            }
        }
    }];
    
    dispatch_group_notify(dispatchGroup, dispatch_get_main_queue(), ^{
        [assetWriter finishWritingWithCompletionHandler:^{
            if (assetWriter.status == AVAssetWriterStatusCompleted) {
                NSLog(@"Finished writing");
            } else {
                NSLog(@"Writing failed");
            }
            if ([self.delegate respondsToSelector:@selector(synthesisResult)]) {
                [self.delegate synthesisResult];
            }
        }];
    });
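
One caveat about the completion path: if the reader ended in AVAssetReaderStatusFailed, the written file is incomplete. A hedged check you could run inside the notify block before finishing:

    // Sketch: abort the writer if reading failed rather than finishing normally.
    if (assetReader.status == AVAssetReaderStatusFailed) {
        NSLog(@"Reading failed: %@", assetReader.error);
        [assetWriter cancelWriting];
        return;
    }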
    

That completes the audio/video mixing and export. Concatenation, muting, fast-forward, and reverse playback will get their own write-ups later; the underlying principles are all here. I'll add the demo afterwards!

This is an original article; reprinting requires permission and attribution.
Please leave a message in the backend to arrange reprints.
