iOS Screen Recording, Step 1: Capturing Screenshots and Composing Them into a Video
2017-03-01
Preface
My company recently asked me to build a screen-recording app. I looked into the ReplayKit framework introduced in iOS 9: it is genuinely easy to use and performs well, but it gives you no access to the recorded video file, and that alone ruled it out for me. So I had to fall back on the most primitive approach: grab screenshots of a view, compose those screenshots into a video, and finally merge the simultaneously recorded audio into that video. This article covers the first part: grabbing the screenshots and composing them into a video.
Capturing a Screenshot
First, the code: grabbing a screenshot of the screen through a layer.
/// view => screenshot image
- (UIImage *)fetchScreenshot {
    UIImage *image = nil;
    if (self.captureLayer) {
        CGSize imageSize = self.captureLayer.bounds.size;
        // A scale of 0 means the device's main screen scale, so the image stays sharp on Retina displays.
        UIGraphicsBeginImageContextWithOptions(imageSize, NO, 0);
        CGContextRef context = UIGraphicsGetCurrentContext();
        [self.captureLayer renderInContext:context];
        image = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
    }
    return image;
}
The method above uses a graphics context (CGContextRef) to render the layer's contents into an image. The code is simple, so I won't explain it in detail.
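Something has to call this method once per frame to produce a steady stream of screenshots. Below is a minimal sketch that drives the capture with a CADisplayLink; the displayLink property and the startCapturing / stopCapturing / handleDisplayLink: names are my own assumptions, not part of the original project.
- (void)startCapturing {
    self.displayLink = [CADisplayLink displayLinkWithTarget:self
                                                   selector:@selector(handleDisplayLink:)];
    [self.displayLink addToRunLoop:[NSRunLoop mainRunLoop] forMode:NSRunLoopCommonModes];
}

- (void)handleDisplayLink:(CADisplayLink *)link {
    // -renderInContext: touches UIKit/Core Animation state, so capture on the main thread.
    UIImage *screenshot = [self fetchScreenshot];
    // ... hand the screenshot to the writing pipeline described below ...
}

- (void)stopCapturing {
    [self.displayLink invalidate];
    self.displayLink = nil;
}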
Converting a CGImage into CVPixelBufferRef Buffer Data
/// image => pixel buffer; the caller must release the returned buffer with CVPixelBufferRelease
- (CVPixelBufferRef)pixelBufferFromCGImage:(CGImageRef)image {
    NSDictionary *options = @{(__bridge id)kCVPixelBufferCGImageCompatibilityKey: @YES,
                              (__bridge id)kCVPixelBufferCGBitmapContextCompatibilityKey: @YES};
    CVPixelBufferRef pxbuffer = NULL;
    size_t frameWidth = CGImageGetWidth(image);
    size_t frameHeight = CGImageGetHeight(image);
    CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault,
                                          frameWidth,
                                          frameHeight,
                                          kCVPixelFormatType_32ARGB,
                                          (__bridge CFDictionaryRef)options,
                                          &pxbuffer);
    NSParameterAssert(status == kCVReturnSuccess && pxbuffer != NULL);

    CVPixelBufferLockBaseAddress(pxbuffer, 0);
    void *pxdata = CVPixelBufferGetBaseAddress(pxbuffer);
    NSParameterAssert(pxdata != NULL);

    // Create a bitmap context backed by the pixel buffer's memory and draw the image into it.
    CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(pxdata,
                                                 frameWidth,
                                                 frameHeight,
                                                 8,
                                                 CVPixelBufferGetBytesPerRow(pxbuffer),
                                                 rgbColorSpace,
                                                 (CGBitmapInfo)kCGImageAlphaNoneSkipFirst);
    NSParameterAssert(context);
    CGContextDrawImage(context, CGRectMake(0, 0, frameWidth, frameHeight), image);
    CGColorSpaceRelease(rgbColorSpace);
    CGContextRelease(context);

    CVPixelBufferUnlockBaseAddress(pxbuffer, 0);
    return pxbuffer;
}
Only after converting a CGImage into CVPixelBufferRef buffer data can the frame be written into the video.
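As a usage sketch, a single frame travels through the pipeline like this (writeFrame is a hypothetical name of mine; note that CVPixelBufferCreate returns a +1 reference, so the caller must release the buffer):
- (void)writeFrame {
    UIImage *screenshot = [self fetchScreenshot];
    CVPixelBufferRef buffer = [self pixelBufferFromCGImage:screenshot.CGImage];
    if (buffer) {
        // ... append the buffer to the video via the adaptor described below ...
        CVPixelBufferRelease(buffer); // balance the +1 reference from CVPixelBufferCreate
    }
}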
Composing Multiple Images into a Video
Composing images into a video relies on the following classes:
AVAssetWriter
AVAssetWriter is responsible for writing media data to a file. Creating an AVAssetWriter object requires the output file URL and the file type; choosing AVFileTypeMPEG4 produces an MP4 file.
NSError *error = nil;
NSURL *fileUrl = [NSURL fileURLWithPath:self.videoPath];
self.videoWriter = [[AVAssetWriter alloc] initWithURL:fileUrl
                                             fileType:AVFileTypeMPEG4
                                                error:&error];
AVAssetWriterInput
AVAssetWriterInput is responsible for receiving the video or audio buffer data; once created, the AVAssetWriterInput object must be added to the AVAssetWriter.
// Configure the video codec, dimensions, and average bit rate
NSDictionary *videoCompressionProps = @{AVVideoAverageBitRateKey: @(size.width * size.height)};
NSDictionary *videoSettings = @{AVVideoCodecKey: AVVideoCodecH264,
                                AVVideoWidthKey: @(size.width),
                                AVVideoHeightKey: @(size.height),
                                AVVideoCompressionPropertiesKey: videoCompressionProps};
self.videoWriterInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo
                                                           outputSettings:videoSettings];
NSParameterAssert(self.videoWriterInput);
// expectsMediaDataInRealTime = YES tells the writer that media data arrives in real
// time, as it does from a camera or microphone (or, here, from live screenshots).
self.videoWriterInput.expectsMediaDataInRealTime = YES;
// The input must be attached to the writer before writing starts.
[self.videoWriter addInput:self.videoWriterInput];
AVAssetWriterInputPixelBufferAdaptor
AVAssetWriterInputPixelBufferAdaptor is responsible for appending the CVPixelBufferRef buffers converted from images to the AVAssetWriterInput.
NSDictionary *bufferAttributes = @{(__bridge id)kCVPixelBufferPixelFormatTypeKey: @(kCVPixelFormatType_32ARGB)};
self.adaptor = [AVAssetWriterInputPixelBufferAdaptor assetWriterInputPixelBufferAdaptorWithAssetWriterInput:self.videoWriterInput
                                                                                 sourcePixelBufferAttributes:bufferAttributes];
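These pieces still need to be wired together: start the writer, append one pixel buffer per screenshot with an increasing presentation time, and close the file when recording stops. Below is a minimal sketch of that flow; the fixed frame rate fps and the frameIndex counter are my own assumptions, and the original project may sequence this differently.
// Start the writer once, before appending the first frame.
[self.videoWriter startWriting];
[self.videoWriter startSessionAtSourceTime:kCMTimeZero];

// For each captured frame (e.g. inside the display-link callback sketched earlier):
if (self.videoWriterInput.isReadyForMoreMediaData) {
    CVPixelBufferRef buffer = [self pixelBufferFromCGImage:screenshot.CGImage];
    if (buffer) {
        // Assumed timing: frame N is presented at N / fps seconds.
        CMTime presentationTime = CMTimeMake(self.frameIndex, (int32_t)fps);
        [self.adaptor appendPixelBuffer:buffer withPresentationTime:presentationTime];
        self.frameIndex += 1;
        CVPixelBufferRelease(buffer);
    }
}

// When recording stops, finish the file.
[self.videoWriterInput markAsFinished];
[self.videoWriter finishWritingWithCompletionHandler:^{
    NSLog(@"video written to %@", self.videoPath);
}];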
The above covers only the main points of composing images into a video. If anything is unclear, or you would like the reference code, follow this link: GitHub