iOS Development: Recording Video with AVCapture
1. AVCaptureDevice
First, we need to determine whether the device has front and rear cameras, which is where AVCaptureDevice comes in. Let's see what the developer documentation says.
- An AVCaptureDevice object represents a physical capture device and the properties associated with that device. You use a capture device to configure the properties of the underlying hardware. A capture device also provides input data (such as audio or video) to an AVCaptureSession object.
- You use the methods of the AVCaptureDevice class to enumerate the available devices, query their capabilities, and be informed about when devices come and go. Before you attempt to set properties of a capture device (its focus mode, exposure mode, and so on), you must first acquire a lock on the device using the lockForConfiguration: method. You should also query the device's capabilities to ensure that the new modes you intend to set are valid for that device. You can then set the properties and release the lock using the unlockForConfiguration method. You may hold the lock if you want all settable device properties to remain unchanged. However, holding the device lock unnecessarily may degrade capture quality in other applications sharing the device and is not recommended.
- AVCaptureDevice represents a hardware device and provides input to an AVCaptureSession.
- To use an AVCaptureDevice, first enumerate the devices the system supports, then pick the camera you need by its position (front or back).
- To set properties on an AVCaptureDevice object, call lockForConfiguration: first, and call unlockForConfiguration when you are done.
First, iterate over the devices of the required media type to find the camera you need:
// Get the camera device at the given position
- (AVCaptureDevice *)getCaptureDeviceWithCameraPosition:(AVCaptureDevicePosition)position {
    NSArray *devices = [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo];
    for (AVCaptureDevice *device in devices) {
        if (device.position == position) {
            return device;
        }
    }
    return nil;
}
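Note that devicesWithMediaType: is deprecated as of iOS 10; a minimal sketch of the replacement, AVCaptureDeviceDiscoverySession (assuming the built-in wide-angle camera is enough for this demo):
// iOS 10+ alternative: discover matching devices instead of enumerating them all
- (AVCaptureDevice *)cameraAtPosition:(AVCaptureDevicePosition)position {
    AVCaptureDeviceDiscoverySession *discovery =
        [AVCaptureDeviceDiscoverySession discoverySessionWithDeviceTypes:@[AVCaptureDeviceTypeBuiltInWideAngleCamera]
                                                               mediaType:AVMediaTypeVideo
                                                                position:position];
    return discovery.devices.firstObject;
}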
If you need to change an AVCaptureDevice's properties, do it like this:
NSError *error = nil;
if ([self.captureDevice lockForConfiguration:&error]) {
    // Configure the device here
    [self.captureDevice unlockForConfiguration];
} else {
    NSLog(@"lockForConfiguration failed: %@", error);
}
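For example, a sketch that enables continuous autofocus (my own example, not part of the original demo; note the capability check the documentation asks for):
NSError *error = nil;
if ([self.captureDevice lockForConfiguration:&error]) {
    // Only set the mode after confirming the device supports it
    if ([self.captureDevice isFocusModeSupported:AVCaptureFocusModeContinuousAutoFocus]) {
        self.captureDevice.focusMode = AVCaptureFocusModeContinuousAutoFocus;
    }
    [self.captureDevice unlockForConfiguration];
}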
2. Input (AVCaptureDeviceInput)
Here's the introduction from Apple's developer documentation:
- A capture input that provides media from a capture device to a capture session. AVCaptureDeviceInput is a concrete sub-class of AVCaptureInput you use to capture data from an AVCaptureDevice object.
- AVCaptureDeviceInput is a subclass of AVCaptureInput.
- AVCaptureDeviceInput captures data from an AVCaptureDevice and feeds it to the AVCaptureSession.
Here's the initializer:
- (AVCaptureDeviceInput *)deviceInput {
    if (!_deviceInput) {
        // Initialize from the capture device; surface any error instead of passing nil
        NSError *error = nil;
        _deviceInput = [[AVCaptureDeviceInput alloc] initWithDevice:self.captureDevice error:&error];
        if (error) {
            NSLog(@"Failed to create device input: %@", error);
        }
    }
    return _deviceInput;
}
3. AVCaptureVideoDataOutput
- You can use a video data output to process uncompressed frames from the video being captured or to access compressed frames.
- An instance of AVCaptureVideoDataOutput produces video frames you can process using other media APIs. You can access the frames with the captureOutput:didOutputSampleBuffer:fromConnection: delegate method.
AVCaptureVideoDataOutput processes the captured video frames; you handle them in the captureOutput:didOutputSampleBuffer:fromConnection: delegate method.
- (AVCaptureVideoDataOutput *)videoOutput {
    if (!_videoOutput) {
        _videoOutput = [[AVCaptureVideoDataOutput alloc] init];
        // Deliver sample buffers on a dedicated serial queue, not the main queue
        dispatch_queue_t queue = dispatch_queue_create("videoQueue", DISPATCH_QUEUE_SERIAL);
        [_videoOutput setSampleBufferDelegate:self queue:queue];
    }
    return _videoOutput;
}
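Inside this getter you could also request a specific uncompressed pixel format via the output's videoSettings property (an optional addition of mine, not something the original sets; BGRA is just one common choice):
// Optional: ask the output for BGRA pixel buffers
_videoOutput.videoSettings = @{(id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA)};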
4. AVCaptureSession
With the input and output in place, we need an AVCaptureSession to coordinate them.
- An object that manages capture activity and coordinates the flow of data from input devices to capture outputs.
- To perform a real-time or offline capture, you instantiate an AVCaptureSession object and add appropriate inputs (such as AVCaptureDeviceInput), and outputs (such as AVCaptureMovieFileOutput).
- You invoke startRunning to start the flow of data from the inputs to the outputs, and invoke stopRunning to stop the flow.
- The startRunning method is a blocking call which can take some time, therefore you should perform session setup on a serial queue so that the main queue isn't blocked (which keeps the UI responsive).
- AVCaptureSession coordinates the inputs and outputs.
- For a real-time or offline capture, the AVCaptureSession object needs an input (e.g. AVCaptureDeviceInput) and an output (e.g. AVCaptureMovieFileOutput).
- Call startRunning on the AVCaptureSession object to start the flow of data from inputs to outputs, and stopRunning to stop it.
- In short, run the session work on a serial queue, so the blocking startRunning call doesn't hold up the main queue and cause problems.
Here's the AVCaptureSession initialization:
- (AVCaptureSession *)captureSession {
    if (!_captureSession) {
        _captureSession = [[AVCaptureSession alloc] init];
        // Add the input
        if ([_captureSession canAddInput:self.deviceInput]) {
            [_captureSession addInput:self.deviceInput];
        }
        // Add the output
        if ([_captureSession canAddOutput:self.videoOutput]) {
            [_captureSession addOutput:self.videoOutput];
        }
    }
    return _captureSession;
}
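If you also want sound, you'd add a microphone input and an AVCaptureAudioDataOutput to the same session in the same way. A sketch of how it might slot into the setup above (my assumption; the original demo doesn't show this part):
// Hypothetical audio additions, inside the same session setup
NSError *error = nil;
AVCaptureDevice *mic = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeAudio];
AVCaptureDeviceInput *audioInput = [AVCaptureDeviceInput deviceInputWithDevice:mic error:&error];
if (audioInput && [_captureSession canAddInput:audioInput]) {
    [_captureSession addInput:audioInput];
}
AVCaptureAudioDataOutput *audioOutput = [[AVCaptureAudioDataOutput alloc] init];
[audioOutput setSampleBufferDelegate:self queue:dispatch_queue_create("audioQueue", DISPATCH_QUEUE_SERIAL)];
if ([_captureSession canAddOutput:audioOutput]) {
    [_captureSession addOutput:audioOutput];
}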
Calling startRunning (I haven't tried this particular setup myself; feedback from readers is welcome):
// dispatch_async, so the blocking startRunning call doesn't stall the calling thread;
// in practice you'd keep this serial queue in a property and reuse it
dispatch_async(dispatch_queue_create("serialQueue", DISPATCH_QUEUE_SERIAL), ^{
    [self.captureSession startRunning];
    // other session work
});
5. The Preview View: AVCaptureVideoPreviewLayer
- AVCaptureVideoPreviewLayer is a subclass of CALayer that you use to display video as it is being captured by an input device.
- You use this preview layer in conjunction with an AV capture session.
AVCaptureVideoPreviewLayer is a subclass of CALayer that displays the video being captured by the device; it needs the AVCaptureSession.
- (AVCaptureVideoPreviewLayer *)previewLayer {
    if (!_previewLayer) {
        _previewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:self.captureSession];
        // Fill the space between the navigation bar (64pt) and the capture button
        [_previewLayer setFrame:CGRectMake(0, 64, CGRectGetWidth([UIScreen mainScreen].bounds), CGRectGetHeight([UIScreen mainScreen].bounds) - 64 - CGRectGetHeight(self.takePhotoBtn.frame) - 50)];
        [self.view.layer addSublayer:_previewLayer];
        _previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
    }
    return _previewLayer;
}
Relationship diagram
(Image: 关系图.pic.jpg)
With all of the above in place, you can already see the live camera feed in the previewLayer.
Next up: recording the video and saving it to the photo album.
6. Video Storage
1. Save the recorded video to the local sandbox
2. Add the video from the sandbox to the photo album
3. Create a cover image for the video (its first frame)
Here we use AVAssetWriter and AVAssetWriterInput to write the video into the local sandbox.
6.1 First, let's look at how AVAssetWriter is used
- You use an AVAssetWriter object to write media data to a new file of a specified audiovisual container type, such as a QuickTime movie file or an MPEG-4 file, with support for automatic interleaving of media data for multiple concurrent tracks.
That is, an AVAssetWriter object writes media data into a file of a specific container type, such as a QuickTime movie file or an MPEG-4 file.
- You can get the media data for one or more assets from instances of AVAssetReader or even from outside the AV Foundation API set. Media data is presented to AVAssetWriter for writing in the form of CMSampleBuffers (see CMSampleBuffer). Sequences of sample data appended to the asset writer inputs are considered to fall within "sample-writing sessions." You must call startSessionAtSourceTime: to begin one of these sessions.
- Using AVAssetWriter, you can optionally re-encode media samples as they are written. You can also optionally write metadata collections to the output file.
- You can only use a given instance of AVAssetWriter once to write to a single file. If you want to write to files multiple times, you must use a new instance of AVAssetWriter each time.
- AVAssetWriter can take media data from an AVAssetReader (offline data) or from elsewhere in the AVFoundation APIs (real-time data). That data is presented to AVAssetWriter as CMSampleBuffers; before appending those buffers to the AVAssetWriterInput, you must call startSessionAtSourceTime:.
Note the sentence "You must call startSessionAtSourceTime: to begin one of these sessions." In other words, every writing pass of an AVAssetWriter (presumably AVAssetWriter also has an input-to-output flow internally, which is what's called a session) has to begin with this call:
if (self.writer.status == AVAssetWriterStatusUnknown) {
    // The writer hasn't started yet: start writing and open a session
    // at this sample buffer's presentation timestamp
    CMTime startTime = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);
    [self.writer startWriting];
    [self.writer startSessionAtSourceTime:startTime];
}
- You can optionally re-encode the media samples as they are written, and optionally write metadata to the output file.
- A given AVAssetWriter instance writes only a single file; to write multiple files, create a new AVAssetWriter each time (here, for example, one for the audio file and one for the video file).
Audio and video both arrive through a delegate callback with this signature (video via AVCaptureVideoDataOutputSampleBufferDelegate, audio via the identically named AVCaptureAudioDataOutputSampleBufferDelegate method):
- (void)captureOutput:(AVCaptureOutput *)output didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {
    // Check the AVCaptureOutput to tell audio from video
}
so if audio and video go into separate files, each needs its own AVAssetWriter object (alternatively, one writer with both an audio input and a video input produces a single combined file).
Here are the commonly used AVAssetWriter methods:
// Initializer: each new video needs a new AVAssetWriter and a new local file URL (outputURL)
+ (nullable instancetype)assetWriterWithURL:(NSURL *)outputURL fileType:(AVFileType)outputFileType error:(NSError * _Nullable * _Nullable)outError;
// Add an AVAssetWriterInput
- (void)addInput:(AVAssetWriterInput *)input;
// Start writing
- (BOOL)startWriting;
// Set the session's start time. CMTime is a rational time value: value/timescale gives
// seconds, so for video the timescale is often the frame rate and value the frame index.
- (void)startSessionAtSourceTime:(CMTime)startTime;
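Putting those together, a minimal sketch of creating the writer (the temporary-directory path and the self.writer / self.videoURL properties are my assumptions, not shown in the original):
// Create an AVAssetWriter targeting a new file in the sandbox
NSString *path = [NSTemporaryDirectory() stringByAppendingPathComponent:@"output.mp4"];
[[NSFileManager defaultManager] removeItemAtPath:path error:nil]; // the writer needs a fresh file
self.videoURL = [NSURL fileURLWithPath:path];
NSError *error = nil;
self.writer = [AVAssetWriter assetWriterWithURL:self.videoURL fileType:AVFileTypeMPEG4 error:&error];
if ([self.writer canAddInput:self.writerInput]) {
    [self.writer addInput:self.writerInput];
}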
6.2 AVAssetWriterInput
- You use AVAssetWriterInput to append media samples, packaged as CMSampleBufferRef objects, to the AVAssetWriter's output file (e.g. [self.writerInput appendSampleBuffer:sampleBuffer]).
- When the media data comes from a real-time source, you must set expectsMediaDataInRealTime to YES first; when it doesn't (e.g. it comes from an AVAssetReader), set it to NO and drive the input with requestMediaDataWhenReadyOnQueue:usingBlock: instead, as sketched below.
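For the non-real-time case, a sketch of that pull-style API (self.writerQueue and self.readerOutput are hypothetical properties for a serial queue and an AVAssetReaderOutput; none of this is in the original demo):
// Pull-style writing: the input asks for data whenever it's ready for more
[self.writerInput requestMediaDataWhenReadyOnQueue:self.writerQueue usingBlock:^{
    while (self.writerInput.readyForMoreMediaData) {
        CMSampleBufferRef buffer = [self.readerOutput copyNextSampleBuffer];
        if (!buffer) {
            // Source exhausted: close out this input
            [self.writerInput markAsFinished];
            break;
        }
        [self.writerInput appendSampleBuffer:buffer];
        CFRelease(buffer);
    }
}];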
Initializing the AVAssetWriterInput object:
- (AVAssetWriterInput *)writerInput {
    if (!_writerInput) {
        // Recording settings: resolution, codec, and so on
        // (SCREEN_WIDTH / SCREEN_HEIGHT are screen-size macros in points; x2 for Retina pixels)
        NSDictionary *settings = @{AVVideoCodecKey : AVVideoCodecH264,
                                   AVVideoWidthKey : @((NSInteger)(SCREEN_WIDTH * 2)),
                                   AVVideoHeightKey : @((NSInteger)(SCREEN_HEIGHT * 2))};
        _writerInput = [[AVAssetWriterInput alloc] initWithMediaType:AVMediaTypeVideo outputSettings:settings];
        // Recording pulls from a real-time source
        _writerInput.expectsMediaDataInRealTime = YES;
    }
    return _writerInput;
}
When using it, check readyForMoreMediaData first, then append:
if (self.writerInput.readyForMoreMediaData) {
    BOOL success = [self.writerInput appendSampleBuffer:sampleBuffer];
    if (!success) {
        // Appending failed; close out the file
        [self.writer finishWritingWithCompletionHandler:^{
            NSLog(@"finished");
        }];
    } else {
        NSLog(@"succeed!");
    }
}
But data may only be appended once the sample buffer's data is ready and the writer's session has started, so the whole flow usually looks like this:
- (void)writeInSampleBuffer:(CMSampleBufferRef)sampleBuffer {
    if (CMSampleBufferDataIsReady(sampleBuffer)) {
        // AVAssetWriterStatusUnknown means this write hasn't started yet
        if (self.writer.status == AVAssetWriterStatusUnknown) {
            CMTime startTime = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);
            [self.writer startWriting];
            [self.writer startSessionAtSourceTime:startTime];
        }
        if (self.writerInput.readyForMoreMediaData) {
            BOOL success = [self.writerInput appendSampleBuffer:sampleBuffer];
            if (!success) {
                [self.writer finishWritingWithCompletionHandler:^{
                    NSLog(@"finished");
                }];
            } else {
                NSLog(@"succeed!");
            }
        }
    }
}
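The original doesn't show how a recording ends; presumably a stop action finishes the write and then saves the file, something like this sketch (stopRecording is a hypothetical method name; saveToAlbum is defined in 6.4 below):
- (void)stopRecording {
    // No more samples are coming; close out the input and the file
    [self.writerInput markAsFinished];
    [self.writer finishWritingWithCompletionHandler:^{
        dispatch_async(dispatch_get_main_queue(), ^{
            [self saveToAlbum];
        });
    }];
}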
6.3 Writing to the Sandbox
Every video frame passes through the didOutputSampleBuffer method, so that is where we do the work.
#pragma mark AVCaptureVideoDataOutputSampleBufferDelegate
- (void)captureOutput:(AVCaptureOutput *)output didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection;
- Whenever a newly captured video frame is output, the delegate receives this message, with the frame decoded or re-encoded according to the output's video settings; you can then process it further with other APIs.
- This method is called on the queue you set via AVCaptureVideoDataOutput's sampleBufferCallbackQueue property. It is called frequently, so it must be efficient, otherwise frames get dropped.
- If you need to reference the CMSampleBufferRef object outside this method, you must CFRetain(sampleBuffer) and later CFRelease(sampleBuffer) to keep it from being released.
- Retaining CMSampleBufferRef objects for a long time leads to memory problems, so avoid holding on to them longer than necessary.
Example:
#pragma mark AVCaptureVideoDataOutputSampleBufferDelegate
- (void)captureOutput:(AVCaptureOutput *)output didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {
    // Once capture starts, the sample buffers arrive here
    if (output == self.videoOutput) {
        CFRetain(sampleBuffer);
        [self writeInSampleBuffer:sampleBuffer];
        CFRelease(sampleBuffer);
    }
}
6.4 Saving to the Photo Album
You need to #import <Photos/Photos.h>
// Save to the photo album
- (void)saveToAlbum {
    [[PHPhotoLibrary sharedPhotoLibrary] performChanges:^{
        [PHAssetChangeRequest creationRequestForAssetFromVideoAtFileURL:self.videoURL];
    } completionHandler:^(BOOL success, NSError * _Nullable error) {
        // Only report success when it actually succeeded
        if (success) {
            NSLog(@"Saved to album");
        } else {
            NSLog(@"Save failed: %@", error);
        }
    }];
}
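Saving requires photo-library permission; a sketch of requesting it first (remember to add NSPhotoLibraryUsageDescription to Info.plist):
// Ask for photo-library permission before saving
[PHPhotoLibrary requestAuthorization:^(PHAuthorizationStatus status) {
    if (status == PHAuthorizationStatusAuthorized) {
        [self saveToAlbum];
    }
}];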
If you need a preview image, grab the first frame as the cover, as sketched below.
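A sketch of step 3 (the first-frame cover) using AVAssetImageGenerator; the method name is my own, not from the original demo:
// Generate a cover image from the first frame of the saved video
- (UIImage *)coverImageForVideoAtURL:(NSURL *)url {
    AVAsset *asset = [AVAsset assetWithURL:url];
    AVAssetImageGenerator *generator = [AVAssetImageGenerator assetImageGeneratorWithAsset:asset];
    generator.appliesPreferredTrackTransform = YES; // respect the video's orientation
    NSError *error = nil;
    CGImageRef cgImage = [generator copyCGImageAtTime:kCMTimeZero actualTime:NULL error:&error];
    if (!cgImage) {
        NSLog(@"Cover generation failed: %@", error);
        return nil;
    }
    UIImage *image = [UIImage imageWithCGImage:cgImage];
    CGImageRelease(cgImage);
    return image;
}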
Demo: https://github.com/YuePei/Camera