
AVFoundation Framework (6): Editing Media Data - Reading and Writing

2017-04-20  ValienZh

Most video-app scenarios can be handled with the AVFoundation features covered in the previous posts, but occasionally a requirement comes up that can only be met by working with the media samples directly.

1. Reading and Writing

AVAssetReader reads media samples from an AVAsset instance; its counterpart, AVAssetWriter, encodes media and writes it into a container file.

AVAssetReader works with a single asset at a time. If you need to read samples from several file-based assets at once, combine them into an AVComposition first. Note also that although AVAssetReader decodes upcoming samples on background threads, it is not designed for real-time use such as playback.

AVAssetWriter automatically interleaves media samples (audio and video are stored interleaved in the file, which makes reading efficient). To preserve this interleaving, append new samples to an AVAssetWriterInput only while its readyForMoreMediaData property is YES.

AVAssetWriter can operate in two modes: real-time and offline.
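
The distinction comes down to one flag on each AVAssetWriterInput. A minimal sketch (the settings dictionary here is assumed, not from the original):

// Real-time mode, e.g. when writing frames arriving from a capture session.
AVAssetWriterInput *input = [[AVAssetWriterInput alloc] initWithMediaType:AVMediaTypeVideo
                                                           outputSettings:settings];
input.expectsMediaDataInRealTime = YES;  // leave NO (the default) for offline transcoding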

1.2 An offline (non-real-time) read/write example

The following code uses AVAssetReader to read samples straight from an asset file and AVAssetWriter to write them into a new QuickTime file.

// 0. Obtain the asset.
AVAsset *asset = [AVAsset assetWithURL:url];

// 1. Configure the AVAssetReader.
AVAssetTrack *track = [[asset tracksWithMediaType:AVMediaTypeVideo] firstObject];
self.assetReader = [[AVAssetReader alloc] initWithAsset:asset error:nil];
NSDictionary *readerOutputSettings = @{
  (id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA)
};
AVAssetReaderTrackOutput *trackOutput = [[AVAssetReaderTrackOutput alloc] initWithTrack:track outputSettings:readerOutputSettings];
[self.assetReader addOutput:trackOutput];
// 2. Start reading.
[self.assetReader startReading];



// Configure the AVAssetWriter, passing the URL and file type for the new file.
self.assetWriter = [[AVAssetWriter alloc] initWithURL:outputURL fileType:AVFileTypeQuickTime];
NSDictionary *writerOutputSettings = @{
  AVVideoCodecKey : AVVideoCodecH264,
  AVVideoWidthKey : @1280,
  AVVideoHeightKey : @720,
  AVVideoCompressionPropertiesKey : @{
    AVVideoMaxKeyFrameIntervalKey : @1,
    AVVideoAverageBitRateKey : @10500000,
    AVVideoProfileLevelKey : AVVideoProfileLevelH264Main31,
  }
};
AVAssetWriterInput *writerInput = [[AVAssetWriterInput alloc] initWithMediaType:AVMediaTypeVideo
                                                                 outputSettings:writerOutputSettings];
[self.assetWriter addInput:writerInput];

// Start writing.
[self.assetWriter startWriting];

AVAssetExportSession, covered earlier, can also export new asset files. The advantage of AVAssetWriter is the far finer control it offers over the output encoding: you can specify the keyframe interval, video bit rate, pixel aspect ratio, H.264 profile, and so on.
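
For contrast, a minimal sketch of the preset-based AVAssetExportSession route (outputURL is assumed); beyond choosing a preset, none of the encoder knobs above are available:

AVAssetExportSession *session =
    [[AVAssetExportSession alloc] initWithAsset:asset
                                     presetName:AVAssetExportPresetHighestQuality];
session.outputURL = outputURL;
session.outputFileType = AVFileTypeQuickTimeMovie;
[session exportAsynchronouslyWithCompletionHandler:^{
    // Inspect session.status here; no per-encoder settings such as keyframe interval exist.
}];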

Once the AVAssetReader and AVAssetWriter objects are set up, create a writing session to carry out the read/write. In offline mode a pull model is generally used: samples are pulled from the asset only when the AVAssetWriterInput is ready to accept more.

dispatch_queue_t dispatchQueue = dispatch_queue_create("writerQueue", NULL);
// Create a new writing session, passing the start time.
[self.assetWriter startSessionAtSourceTime:kCMTimeZero];
[writerInput requestMediaDataWhenReadyOnQueue:dispatchQueue usingBlock:^{
  // Pull model: this block is invoked repeatedly whenever the writer input is
  // ready for more samples. Each pass copies the next available sampleBuffer
  // from trackOutput and appends it to the input.
  BOOL complete = NO;
  while ([writerInput isReadyForMoreMediaData] && !complete) {
    CMSampleBufferRef sampleBuffer = [trackOutput copyNextSampleBuffer];

    if (sampleBuffer) {
      BOOL result = [writerInput appendSampleBuffer:sampleBuffer];
      CFRelease(sampleBuffer);
      complete = !result;
    } else {
      // No samples remain, so mark the input as finished.
      [writerInput markAsFinished];
      complete = YES;
    }
  }
  // Once every sampleBuffer has been appended, close the session.
  if (complete) {
    [self.assetWriter finishWritingWithCompletionHandler:^{
      AVAssetWriterStatus status = self.assetWriter.status;
      if (status == AVAssetWriterStatusCompleted) {
        // Writing succeeded.
      } else {
        // Writing failed.
      }
    }];
  }
}];

2. Drawing an Audio Waveform

Some applications need to display the audio signal as a waveform. Drawing one generally takes three steps: reading (decompressing the audio samples), reducing (downsampling them to a drawable number of values), and rendering (drawing the reduced samples to the screen).
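
The original stops here, so the following is only a sketch of the first step, assuming a loaded asset: AVAssetReader decompresses the audio track to 16-bit little-endian PCM, and the raw bytes are collected for the later reduce/render passes.

AVAssetTrack *audioTrack = [[asset tracksWithMediaType:AVMediaTypeAudio] firstObject];
NSDictionary *pcmSettings = @{
    AVFormatIDKey : @(kAudioFormatLinearPCM),
    AVLinearPCMIsBigEndianKey : @NO,
    AVLinearPCMIsFloatKey : @NO,
    AVLinearPCMBitDepthKey : @16
};
AVAssetReader *reader = [[AVAssetReader alloc] initWithAsset:asset error:nil];
AVAssetReaderTrackOutput *output =
    [[AVAssetReaderTrackOutput alloc] initWithTrack:audioTrack
                                     outputSettings:pcmSettings];
[reader addOutput:output];
[reader startReading];

NSMutableData *sampleData = [NSMutableData data];
CMSampleBufferRef sampleBuffer = NULL;
while ((sampleBuffer = [output copyNextSampleBuffer])) {
    CMBlockBufferRef blockBuffer = CMSampleBufferGetDataBuffer(sampleBuffer);
    size_t length = CMBlockBufferGetDataLength(blockBuffer);
    void *bytes = malloc(length);
    CMBlockBufferCopyDataBytes(blockBuffer, 0, length, bytes);  // copy the raw PCM bytes out
    [sampleData appendBytes:bytes length:length];
    free(bytes);
    CFRelease(sampleBuffer);
}
// sampleData now holds SInt16 samples, ready for the reduce step
// (e.g. keeping the min/max of each bin) and a Core Graphics render.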

3. An Advanced Approach to Capture Recording

Earlier we used AVCaptureVideoDataOutput to capture CVPixelBuffer objects and render them with OpenGL to build video effects. There is one catch: once you use AVCaptureVideoDataOutput you lose the convenience of AVCaptureMovieFileOutput for recording the output, and without a recording there is nothing to share. Below we use AVAssetWriter to record the capture output ourselves.

-(BOOL)setupSession:(NSError **)error {
    self.captureSession = [[AVCaptureSession alloc] init];
    self.captureSession.sessionPreset = AVCaptureSessionPresetHigh;
    if (![self setupSessionInputs:error]) {
        return NO;
    }
    if (![self setupSessionOutputs:error]) {
        return NO;
    }
    return YES;
}
// 
-(BOOL)setupSessionInputs:(NSError **)error {
    // Set up default camera device
    AVCaptureDevice *videoDevice =
        [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];

    AVCaptureDeviceInput *videoInput =
        [AVCaptureDeviceInput deviceInputWithDevice:videoDevice error:error];
    if (videoInput) {
        if ([self.captureSession canAddInput:videoInput]) {
            [self.captureSession addInput:videoInput];
            self.activeVideoInput = videoInput;
        } else {
            NSDictionary *userInfo = @{NSLocalizedDescriptionKey : @"Failed to add video input."};
            *error = [NSError errorWithDomain:THCameraErrorDomain
                                         code:THCameraErrorFailedToAddInput
                                     userInfo:userInfo];
            return NO;
        }
    } else {
        return NO;
    }

    // Setup default microphone
    AVCaptureDevice *audioDevice =
        [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeAudio];

    AVCaptureDeviceInput *audioInput =
        [AVCaptureDeviceInput deviceInputWithDevice:audioDevice error:error];
    if (audioInput) {
        if ([self.captureSession canAddInput:audioInput]) {
            [self.captureSession addInput:audioInput];
        } else {
            NSDictionary *userInfo = @{NSLocalizedDescriptionKey : @"Failed to add audio input."};
            *error = [NSError errorWithDomain:THCameraErrorDomain
                                         code:THCameraErrorFailedToAddInput
                                     userInfo:userInfo];
            return NO;
        }
    } else {
        return NO;
    }
    return YES;
}
// 
-(BOOL)setupSessionOutputs:(NSError **)error {
    self.videoDataOutput = [[AVCaptureVideoDataOutput alloc] init];
    // The kCVPixelFormatType_32BGRA format works well with both OpenGL and Core Image.
    NSDictionary *outputSettings =
        @{(id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA)};
    
    self.videoDataOutput.videoSettings = outputSettings;
    self.videoDataOutput.alwaysDiscardsLateVideoFrames = NO;                // Since the output is being recorded, set NO to give the capture callback extra time to process each sample buffer.
    
    [self.videoDataOutput setSampleBufferDelegate:self
                                            queue:self.dispatchQueue];
    
    if ([self.captureSession canAddOutput:self.videoDataOutput]) {
        [self.captureSession addOutput:self.videoDataOutput];
    } else {
        return NO;
    }
    
    self.audioDataOutput = [[AVCaptureAudioDataOutput alloc] init];         // Capture audio as well.
    [self.audioDataOutput setSampleBufferDelegate:self
                                            queue:self.dispatchQueue];
    
    if ([self.captureSession canAddOutput:self.audioDataOutput]) {
        [self.captureSession addOutput:self.audioDataOutput];
    } else {
        return NO;
    }
    // The code below is where the THMovieWriter class from step two comes in.
    NSString *fileType = AVFileTypeQuickTimeMovie;
    
    NSDictionary *videoSettings =
        [self.videoDataOutput
            recommendedVideoSettingsForAssetWriterWithOutputFileType:fileType];
    
    NSDictionary *audioSettings =
        [self.audioDataOutput
            recommendedAudioSettingsForAssetWriterWithOutputFileType:fileType];
    
    self.movieWriter =
        [[THMovieWriter alloc] initWithVideoSettings:videoSettings
                                       audioSettings:audioSettings
                                       dispatchQueue:self.dispatchQueue];
    self.movieWriter.delegate = self;
    return YES;
}
// Delegate callback that handles each captured sampleBuffer.
-(void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection {
 
    // This method is explained in step two below.
    [self.movieWriter processSampleBuffer:sampleBuffer];

    if (captureOutput == self.videoDataOutput) {
        
        CVPixelBufferRef imageBuffer =
            CMSampleBufferGetImageBuffer(sampleBuffer);
        
        CIImage *sourceImage =
            [CIImage imageWithCVPixelBuffer:imageBuffer options:nil];
        // Hand the image off for on-screen display.
        [self.imageTarget setImage:sourceImage];
    }
}

The code above uses a single dispatch queue for both AVCaptureVideoDataOutput and AVCaptureAudioDataOutput, which is fine for this example, but if you want to do heavier processing of the data you may need a separate queue for each output. See Apple's RosyWriter sample code for details.
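
A sketch of that variant (the queue labels are hypothetical); each output then delivers its buffers on its own serial queue:

dispatch_queue_t videoQueue =
    dispatch_queue_create("com.example.videoQueue", DISPATCH_QUEUE_SERIAL);
dispatch_queue_t audioQueue =
    dispatch_queue_create("com.example.audioQueue", DISPATCH_QUEUE_SERIAL);
[self.videoDataOutput setSampleBufferDelegate:self queue:videoQueue];
[self.audioDataOutput setSampleBufferDelegate:self queue:audioQueue];
// The delegate callback must now expect to be invoked on either queue.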

// .h
#import <AVFoundation/AVFoundation.h>

@protocol THMovieWriterDelegate <NSObject>
- (void)didWriteMovieAtURL:(NSURL *)outputURL;    // Delegate method announcing when the movie file has been written to disk.
@end

@interface THMovieWriter : NSObject

- (id)initWithVideoSettings:(NSDictionary *)videoSettings                   // Two dictionaries describing the AVAssetWriter configuration, plus the dispatch queue to use.
              audioSettings:(NSDictionary *)audioSettings
              dispatchQueue:(dispatch_queue_t)dispatchQueue;
// Interface methods to start and stop the writing process.
- (void)startWriting;
- (void)stopWriting;
@property (nonatomic) BOOL isWriting;

@property (weak, nonatomic) id<THMovieWriterDelegate> delegate;            

- (void)processSampleBuffer:(CMSampleBufferRef)sampleBuffer;                // Called whenever a new sample is captured; processes the sampleBuffer.
@end
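
Before turning to the implementation, a hypothetical sketch of how a recording controller might drive this interface:

// In the camera controller (hypothetical usage):
[self.movieWriter startWriting];      // user taps Record
// ...captured sample buffers arrive via processSampleBuffer:...
[self.movieWriter stopWriting];       // user taps Stop; didWriteMovieAtURL: fires once the file is closed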
// .m

static NSString *const THVideoFilename = @"movie.mov";

@interface THMovieWriter ()

@property (strong, nonatomic) AVAssetWriter *assetWriter;
@property (strong, nonatomic) AVAssetWriterInput *assetWriterVideoInput;
@property (strong, nonatomic) AVAssetWriterInput *assetWriterAudioInput;
@property (strong, nonatomic)
    AVAssetWriterInputPixelBufferAdaptor *assetWriterInputPixelBufferAdaptor;

@property (strong, nonatomic) dispatch_queue_t dispatchQueue;

@property (weak, nonatomic) CIContext *ciContext;
@property (nonatomic) CGColorSpaceRef colorSpace;
@property (strong, nonatomic) CIFilter *activeFilter;

@property (strong, nonatomic) NSDictionary *videoSettings;
@property (strong, nonatomic) NSDictionary *audioSettings;

@property (nonatomic) BOOL firstSample;

@end

@implementation THMovieWriter

- (id)initWithVideoSettings:(NSDictionary *)videoSettings
              audioSettings:(NSDictionary *)audioSettings
              dispatchQueue:(dispatch_queue_t)dispatchQueue {

    self = [super init];
    if (self) {
        _videoSettings = videoSettings;
        _audioSettings = audioSettings;
        _dispatchQueue = dispatchQueue;

        _ciContext = [THContextManager sharedInstance].ciContext;           // Core Image context used to render filtered sample buffers into CVPixelBuffers.
        _colorSpace = CGColorSpaceCreateDeviceRGB();
        // Set the default photo filter (THPhotoFilters is a helper class from the sample project).
        _activeFilter = [THPhotoFilters defaultFilter];
        _firstSample = YES;

        NSNotificationCenter *nc = [NSNotificationCenter defaultCenter];    // Register for the notification posted when the user switches filters, to keep activeFilter up to date.
        [nc addObserver:self
               selector:@selector(filterChanged:)
                   name:THFilterSelectionChangedNotification
                 object:nil];
    }
    return self;
}

- (void)startWriting {
    dispatch_async(self.dispatchQueue, ^{                                   // Dispatch asynchronously so the record button tap doesn't stall.

        NSError *error = nil;

        NSString *fileType = AVFileTypeQuickTimeMovie;
        self.assetWriter =                                                  // Create a new AVAssetWriter.
            [AVAssetWriter assetWriterWithURL:[self outputURL]
                                     fileType:fileType
                                        error:&error];
        if (!self.assetWriter || error) {
            NSString *formatString = @"Could not create AVAssetWriter: %@";
            NSLog(@"%@", [NSString stringWithFormat:formatString, error]);
            return;
        }

        self.assetWriterVideoInput =                                        // Create an AVAssetWriterInput to which sampleBuffers from the capture output will be appended.
            [[AVAssetWriterInput alloc] initWithMediaType:AVMediaTypeVideo
                                           outputSettings:self.videoSettings];

        self.assetWriterVideoInput.expectsMediaDataInRealTime = YES;    // Tell the writer this input should be optimized for real-time data.

        UIDeviceOrientation orientation = [UIDevice currentDevice].orientation;
        self.assetWriterVideoInput.transform =                              // Compensate for the device orientation at capture time.
            THTransformForDeviceOrientation(orientation);

        NSDictionary *attributes = @{                                       // Attributes for the pixel buffer adaptor. For maximum efficiency, these values should match the AVCaptureVideoDataOutput configuration.
            (id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA),
            (id)kCVPixelBufferWidthKey : self.videoSettings[AVVideoWidthKey],
            (id)kCVPixelBufferHeightKey : self.videoSettings[AVVideoHeightKey],
            (id)kCVPixelBufferOpenGLESCompatibilityKey : (id)kCFBooleanTrue
        };
        };

        self.assetWriterInputPixelBufferAdaptor =                           // The adaptor provides an optimized CVPixelBufferPool; CVPixelBuffers drawn from it are used to render the filtered video frames.
            [[AVAssetWriterInputPixelBufferAdaptor alloc]
                initWithAssetWriterInput:self.assetWriterVideoInput
             sourcePixelBufferAttributes:attributes];


        if ([self.assetWriter canAddInput:self.assetWriterVideoInput]) {    // Standard step: add the video input to the writer.
            [self.assetWriter addInput:self.assetWriterVideoInput];
        } else {
            NSLog(@"Unable to add video input.");
            return;
        }

        self.assetWriterAudioInput =                                        // Same procedure as for the video input above.
            [[AVAssetWriterInput alloc] initWithMediaType:AVMediaTypeAudio
                                           outputSettings:self.audioSettings];

        self.assetWriterAudioInput.expectsMediaDataInRealTime = YES;

        if ([self.assetWriter canAddInput:self.assetWriterAudioInput]) {   
            [self.assetWriter addInput:self.assetWriterAudioInput];
        } else {
            NSLog(@"Unable to add audio input.");
        }

        self.isWriting = YES;                                              // Sample buffers may now be appended.
        self.firstSample = YES;
    });
}

- (void)processSampleBuffer:(CMSampleBufferRef)sampleBuffer {
    
    if (!self.isWriting) {
        return;
    }
    
    CMFormatDescriptionRef formatDesc =                                     // This method handles both audio and video samples, so branch on the media type.
        CMSampleBufferGetFormatDescription(sampleBuffer);
    
    CMMediaType mediaType = CMFormatDescriptionGetMediaType(formatDesc);

    if (mediaType == kCMMediaType_Video) {

        CMTime timestamp =
            CMSampleBufferGetPresentationTimeStamp(sampleBuffer);
        
        if (self.firstSample) {                                             // For the first sampleBuffer, start the writer and open a new writing session at that sample's timestamp.
            if ([self.assetWriter startWriting]) {
                [self.assetWriter startSessionAtSourceTime:timestamp];
            } else {
                NSLog(@"Failed to start writing.");
            }
            self.firstSample = NO;
        }
        
        CVPixelBufferRef outputRenderBuffer = NULL;
        
        CVPixelBufferPoolRef pixelBufferPool =
            self.assetWriterInputPixelBufferAdaptor.pixelBufferPool;
        
        CVReturn err = CVPixelBufferPoolCreatePixelBuffer(NULL,             // Create an empty CVPixelBuffer from the adaptor's pool; the filtered video frame is rendered into it.
                                                          pixelBufferPool,
                                                          &outputRenderBuffer);
        if (err) {
            NSLog(@"Unable to obtain a pixel buffer from the pool.");
            return;
        }

        CVPixelBufferRef imageBuffer =                                      // Get the sampleBuffer's CVPixelBuffer, wrap it in a CIImage, and set that as the active filter's kCIInputImageKey to obtain the filtered output image.
            CMSampleBufferGetImageBuffer(sampleBuffer);

        CIImage *sourceImage = [CIImage imageWithCVPixelBuffer:imageBuffer
                                                       options:nil];

        [self.activeFilter setValue:sourceImage forKey:kCIInputImageKey];

        CIImage *filteredImage = self.activeFilter.outputImage;

        if (!filteredImage) {
            filteredImage = sourceImage;
        }

        [self.ciContext render:filteredImage                                // Render the filtered CIImage into the CVPixelBuffer created above.
               toCVPixelBuffer:outputRenderBuffer
                        bounds:filteredImage.extent
                    colorSpace:self.colorSpace];


        if (self.assetWriterVideoInput.readyForMoreMediaData) {             // If the video input is ready, append the pixel buffer together with the sample's presentation time to the adaptor; that completes this video sample.
            if (![self.assetWriterInputPixelBufferAdaptor
                            appendPixelBuffer:outputRenderBuffer
                         withPresentationTime:timestamp]) {
                NSLog(@"Error appending pixel buffer.");
            }
        }
        
        CVPixelBufferRelease(outputRenderBuffer);
        
    }

    else if (!self.firstSample && mediaType == kCMMediaType_Audio) {        // Once the first video sample has been handled, audio samples are appended to the audio input.
        if (self.assetWriterAudioInput.isReadyForMoreMediaData) {
            if (![self.assetWriterAudioInput appendSampleBuffer:sampleBuffer]) {
                NSLog(@"Error appending audio sample buffer.");
            }
        }
    }

}

- (void)stopWriting {

    self.isWriting = NO;                                                    // Stop processSampleBuffer from handling further samples.

    dispatch_async(self.dispatchQueue, ^{

        [self.assetWriter finishWritingWithCompletionHandler:^{             // End the writing session and close the file on disk.

            if (self.assetWriter.status == AVAssetWriterStatusCompleted) {
                dispatch_async(dispatch_get_main_queue(), ^{                // Check the writer status; on success, hop to the main queue so the movie can be saved to the user's photo library.
                    NSURL *fileURL = [self.assetWriter outputURL];
                    [self.delegate didWriteMovieAtURL:fileURL];
                });
            } else {
                NSLog(@"Failed to write movie: %@", self.assetWriter.error);
            }
        }];
    });
}
// Builds the URL used to configure the AVAssetWriter: a file in the temporary directory, removing any previous file with the same name.
- (NSURL *)outputURL {
    NSString *filePath =
        [NSTemporaryDirectory() stringByAppendingPathComponent:THVideoFilename];
    NSURL *url = [NSURL fileURLWithPath:filePath];
    if ([[NSFileManager defaultManager] fileExistsAtPath:url.path]) {
        [[NSFileManager defaultManager] removeItemAtURL:url error:nil];
    }
    return url;
}
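
The delegate callback is where the finished movie would be saved to the photo library. A sketch, assuming the Photos framework is used (the delegate name comes from the header above; everything else is illustrative):

#import <Photos/Photos.h>

- (void)didWriteMovieAtURL:(NSURL *)outputURL {
    [[PHPhotoLibrary sharedPhotoLibrary] performChanges:^{
        // Create a new video asset in the user's photo library from the file on disk.
        [PHAssetChangeRequest creationRequestForAssetFromVideoAtFileURL:outputURL];
    } completionHandler:^(BOOL success, NSError *error) {
        if (!success) {
            NSLog(@"Failed to save movie to library: %@", error);
        }
    }];
}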