Recording Video with AVFoundation (Mimicking WeChat's Video Recording)

2020-06-11  TigerManBoy

Preface: Our project needed to send videos and photos, and the product team wanted WeChat-style short-video recording: tap to take a photo, press and hold to record a short video. After studying many demos and reading the AVFoundation documentation, I used the AVFoundation APIs to build a small video-recording component; flash, focus, filters, and other features will be added later.

The official AVFoundation documentation introduces the framework as follows:

The AVFoundation framework combines four major technology areas that together encompass a wide range of tasks for capturing, processing, synthesizing, controlling, importing and exporting audiovisual media on Apple platforms.

AVFoundation's position in the related framework stack is shown below:

(Figure: AVFoundation framework diagram)

This article covers only AVFoundation's audio/video recording functionality. AVFoundation contains many headers; the ones relevant to audio/video capture are mainly the following:

//AVCaptureDevice represents a physical device that provides realtime input media data (such as video and audio)
#import <AVFoundation/AVCaptureDevice.h>
//AVCaptureInput is an abstract class providing an interface for connecting capture input sources to an AVCaptureSession
#import <AVFoundation/AVCaptureInput.h>
//AVCaptureOutput processes the uncompressed or compressed audio/video samples being captured; the AVCaptureAudioDataOutput and AVCaptureVideoDataOutput subclasses are typically used
#import <AVFoundation/AVCaptureOutput.h>
//AVCaptureSession is the central hub of AVFoundation's capture classes
#import <AVFoundation/AVCaptureSession.h>
//A CoreAnimation layer subclass for previewing the visual output of an AVCaptureSession
#import <AVFoundation/AVCaptureVideoPreviewLayer.h>
//AVAssetWriter provides services for writing media data to a new file
#import <AVFoundation/AVAssetWriter.h>
//Appends new media samples, or references to existing samples packaged as CMSampleBuffer objects, to a single track of an AVAssetWriter's output file
#import <AVFoundation/AVAssetWriterInput.h>
//The system-provided class for processing (compressing/exporting) video
#import <AVFoundation/AVAssetExportSession.h>

The overall capture pipeline can be summarized with the following diagram:

(Figure: AVFoundation capture pipeline overview)

The diagram makes each module's role clear; the following sections describe how to use each one.

1. AVCaptureSession

AVCaptureSession is the central hub of AVFoundation's capture classes. Usage:

- (AVCaptureSession *)session{
    if (_session == nil){
        _session = [[AVCaptureSession alloc] init];
        //High-quality capture preset
        [_session setSessionPreset:AVCaptureSessionPresetHigh];
        if([_session canAddInput:self.videoInput]) [_session addInput:self.videoInput]; //add the video input
        if([_session canAddInput:self.audioInput])  [_session addInput:self.audioInput];  //add the audio input
        if([_session canAddOutput:self.videoDataOutput]) [_session addOutput:self.videoDataOutput];  //video data output (frames only)
        if([_session canAddOutput:self.audioDataOutput]) [_session addOutput:self.audioDataOutput];  //audio data output
        
        AVCaptureConnection * captureVideoConnection = [self.videoDataOutput connectionWithMediaType:AVMediaTypeVideo];
        // The front camera's frames come in mirrored; enable mirroring here to flip the picture back
        if (self.devicePosition == AVCaptureDevicePositionFront && captureVideoConnection.supportsVideoMirroring) {
            captureVideoConnection.videoMirrored = YES;
        }
        captureVideoConnection.videoOrientation = AVCaptureVideoOrientationPortrait;
    }
    return _session;
}

The sessionPreset property holds an AVCaptureSessionPreset value indicating the quality level of the session's output. It can be set while the session is running; common values include AVCaptureSessionPresetHigh, AVCaptureSessionPresetMedium, AVCaptureSessionPresetLow, AVCaptureSessionPresetPhoto, and fixed-resolution presets such as AVCaptureSessionPreset1280x720.
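
A device may not support every preset, so it is worth checking before assigning; a minimal sketch:

//Prefer 720p when available, falling back to the high-quality preset.
//canSetSessionPreset: reports whether the receiver can use a given preset.
if ([_session canSetSessionPreset:AVCaptureSessionPreset1280x720]) {
    _session.sessionPreset = AVCaptureSessionPreset1280x720;
} else {
    _session.sessionPreset = AVCaptureSessionPresetHigh;
}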

AVCaptureSession needs the appropriate video/audio input and output objects added before it can capture audio/video samples.

Call startRunning on the AVCaptureSession to start capturing and stopRunning to stop.
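
Both calls block until they complete, so it is best to invoke them off the main thread; a minimal sketch, assuming a dedicated serial queue (sessionQueue is an illustrative name):

//Start and stop the session on a background queue so the UI stays responsive
dispatch_queue_t sessionQueue = dispatch_queue_create("com.demo.sessionQueue", DISPATCH_QUEUE_SERIAL);
dispatch_async(sessionQueue, ^{
    [self.session startRunning];   //blocking call
});
//...later, when recording ends:
dispatch_async(sessionQueue, ^{
    [self.session stopRunning];    //blocking call
});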

2. AVCaptureDeviceInput

AVCaptureDeviceInput, a subclass of AVCaptureInput, provides an interface for connecting a capture input source to an AVCaptureSession. Usage:

① Video input:

- (AVCaptureDeviceInput *)videoInput {
    if (_videoInput == nil) {
        //Add a video input device; the back camera by default
        AVCaptureDevice *videoCaptureDevice =  [self getCameraDeviceWithPosition:AVCaptureDevicePositionBack];
        //Create the video input
        _videoInput = [AVCaptureDeviceInput deviceInputWithDevice:videoCaptureDevice error:nil];
        if (!_videoInput){
            NSLog(@"Failed to get the camera");
            return nil;
        }
    }
    return _videoInput;
}

② Audio input:

- (AVCaptureDeviceInput *)audioInput {
    if (_audioInput == nil) {
        NSError * error = nil;
        //Add an audio input/capture device
        AVCaptureDevice * audioCaptureDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeAudio];
        _audioInput = [[AVCaptureDeviceInput alloc] initWithDevice:audioCaptureDevice error:&error];
        if (error) {
            NSLog(@"Failed to get the audio input device: %@",error.localizedDescription);
        }
    }
    return _audioInput;
}

Both input sources use AVCaptureDevice, which represents a physical device that provides realtime input media data (such as video and audio). A camera device is looked up by an AVCaptureDevicePosition, which has the following values:

- AVCaptureDevicePositionUnspecified: unspecified (treated as the default, back camera)
- AVCaptureDevicePositionBack: back camera
- AVCaptureDevicePositionFront: front camera

The code for getting a video AVCaptureDevice:

//Get the camera at the specified position
- (AVCaptureDevice *)getCameraDeviceWithPosition:(AVCaptureDevicePosition)position {
    if (@available(iOS 10.2, *)) {
        AVCaptureDeviceDiscoverySession *dissession = [AVCaptureDeviceDiscoverySession discoverySessionWithDeviceTypes:@[AVCaptureDeviceTypeBuiltInDualCamera,AVCaptureDeviceTypeBuiltInTelephotoCamera,AVCaptureDeviceTypeBuiltInWideAngleCamera] mediaType:AVMediaTypeVideo position:position];
        for (AVCaptureDevice *device in dissession.devices) {
            if ([device position] == position) {
                return device;
            }
        }
    } else {
        NSArray *devices = [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo];
        for (AVCaptureDevice *device in devices) {
            if ([device position] == position) {
                return device;
            }
        }
    }
    return nil;
}

Note: switching between the front and back cameras must be wrapped in beginConfiguration / commitConfiguration: remove the previous device input, then add the new one. Code:

//Switch between the front and back cameras
- (void)switchsCamera:(AVCaptureDevicePosition)devicePosition {
    //Already facing this way; nothing to do
    if (self.devicePosition == devicePosition) {
        return;
    }
    AVCaptureDeviceInput *videoInput = [AVCaptureDeviceInput deviceInputWithDevice:[self getCameraDeviceWithPosition:devicePosition] error:nil];
    //Begin the configuration change; it takes effect when committed
    [self.session beginConfiguration];
    //Remove the existing input
    [self.session removeInput:self.videoInput];
    //Add the new input
    if ([self.session canAddInput:videoInput]) {
        [self.session addInput:videoInput];
        self.videoInput = videoInput;
        //Record the new position before re-checking mirroring below
        self.devicePosition = devicePosition;
    }
    
    //The video input changed, so the video output's connection must be reconfigured
    AVCaptureConnection * captureConnection = [self.videoDataOutput connectionWithMediaType:AVMediaTypeVideo];
    if (self.devicePosition == AVCaptureDevicePositionFront && captureConnection.supportsVideoMirroring) {
        captureConnection.videoMirrored = YES;
    }
    captureConnection.videoOrientation = AVCaptureVideoOrientationPortrait;
    
    //Commit the configuration change
    [self.session commitConfiguration];
}

An audio AVCaptureDevice is obtained with defaultDeviceWithMediaType:, which takes an AVMediaType (media type) parameter; the common ones are AVMediaTypeVideo for video and AVMediaTypeAudio for audio.

3. AVCaptureVideoDataOutput and AVCaptureAudioDataOutput

AVCaptureVideoDataOutput and AVCaptureAudioDataOutput are subclasses of AVCaptureOutput used to capture uncompressed or compressed audio/video samples. Usage:

//Video data output
- (AVCaptureVideoDataOutput *)videoDataOutput {
    if (_videoDataOutput == nil) {
        _videoDataOutput = [[AVCaptureVideoDataOutput alloc] init];
        //The delegate queue must be a serial queue so frames arrive in order
        [_videoDataOutput setSampleBufferDelegate:self queue:dispatch_queue_create("com.capture.video", DISPATCH_QUEUE_SERIAL)];
    }
    return _videoDataOutput;
}
//Audio data output
- (AVCaptureAudioDataOutput *)audioDataOutput {
    if (_audioDataOutput == nil) {
        _audioDataOutput = [[AVCaptureAudioDataOutput alloc] init];
        [_audioDataOutput setSampleBufferDelegate:self queue:dispatch_queue_create("com.capture.audio", DISPATCH_QUEUE_SERIAL)];
    }
    return _audioDataOutput;
}

Set the outputs' AVCaptureVideoDataOutputSampleBufferDelegate / AVCaptureAudioDataOutputSampleBufferDelegate; there are two delegate methods for receiving captured audio/video data:

#pragma mark -  AVCaptureVideoDataOutputSampleBufferDelegate / AVCaptureAudioDataOutputSampleBufferDelegate - realtime audio/video output
/// Delivers the captured audio/video frames in real time
- (void)captureOutput:(AVCaptureOutput *)output didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {
    if (!sampleBuffer) {
        return;
    }
    //Forward to a public delegate so callers can process the buffers themselves
    if (output == self.videoDataOutput) {
        if([self.delegate respondsToSelector:@selector(captureSession:didOutputVideoSampleBuffer:fromConnection:)]) {
            [self.delegate captureSession:self didOutputVideoSampleBuffer:sampleBuffer fromConnection:connection];
        }
    }
    if (output == self.audioDataOutput) {
        if([self.delegate respondsToSelector:@selector(captureSession:didOutputAudioSampleBuffer:fromConnection:)]) {
            [self.delegate captureSession:self didOutputAudioSampleBuffer:sampleBuffer fromConnection:connection];
        }
    }
}
/// Called for frames that were dropped (e.g. when processing falls behind); unused here
- (void)captureOutput:(AVCaptureOutput *)output didDropSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection API_AVAILABLE(ios(6.0)) {
    
}

Once the audio/video samples are captured they can be compressed and saved locally; compression and saving are covered below.

4. AVCaptureVideoPreviewLayer

AVCaptureVideoPreviewLayer is a CoreAnimation layer subclass for previewing the visual output of an AVCaptureSession; put simply, it shows a live preview of what the camera is capturing.

- (AVCaptureVideoPreviewLayer *)previewLayer {
    if (_previewLayer == nil) {
        _previewLayer = [AVCaptureVideoPreviewLayer layerWithSession:self.session];
        _previewLayer.videoGravity = AVLayerVideoGravityResizeAspect;
    }
    return _previewLayer;
}

It is initialized with the AVCaptureSession and shows the camera feed once the session is running (startRunning).
The videoGravity property controls how the video is displayed within the AVCaptureVideoPreviewLayer's bounds; there are three modes:

- AVLayerVideoGravityResizeAspect: preserves the aspect ratio; the preview may not fill the view.
- AVLayerVideoGravityResizeAspectFill: fills the view while preserving the aspect ratio (cropping as needed).
- AVLayerVideoGravityResize: stretches the video to fill the layer's bounds.

The default is AVLayerVideoGravityResizeAspect.
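
Attaching the layer to a view is straightforward; a minimal sketch, assuming a container view named self.previewView (an illustrative name):

//Size the preview layer to the view and insert it below any overlay controls
self.previewLayer.frame = self.previewView.bounds;
[self.previewView.layer insertSublayer:self.previewLayer atIndex:0];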

5. CMMotionManager

CMMotionManager (from the CoreMotion framework) manages the motion sensors; here it is used to monitor the device's orientation. It is initialized as follows:

- (CMMotionManager *)motionManager {
    if (!_motionManager) {
        _motionManager = [[CMMotionManager alloc] init];
    }
    return _motionManager;
}

Device-orientation monitoring should begin when startRunning is called and end when stopRunning is called. In the project this is wrapped in startUpdateDeviceDirection and stopUpdateDeviceDirection, whose code is not shown here.
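
Since that code is omitted, here is a minimal sketch of one way such monitoring might work, using accelerometer updates to infer orientation from gravity (not the author's implementation; the update interval and the deviceOrientation property are illustrative):

- (void)startUpdateDeviceDirection {
    if (!self.motionManager.isAccelerometerAvailable) return;
    self.motionManager.accelerometerUpdateInterval = 0.5;
    [self.motionManager startAccelerometerUpdatesToQueue:[NSOperationQueue mainQueue]
                                             withHandler:^(CMAccelerometerData *data, NSError *error) {
        if (error) return;
        double x = data.acceleration.x, y = data.acceleration.y;
        //Whichever axis gravity dominates tells us how the device is held
        if (fabs(y) >= fabs(x)) {
            self.deviceOrientation = (y < 0) ? UIDeviceOrientationPortrait : UIDeviceOrientationPortraitUpsideDown;
        } else {
            self.deviceOrientation = (x < 0) ? UIDeviceOrientationLandscapeLeft : UIDeviceOrientationLandscapeRight;
        }
    }];
}

- (void)stopUpdateDeviceDirection {
    if (self.motionManager.isAccelerometerActive) {
        [self.motionManager stopAccelerometerUpdates];
    }
}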

6. AVAssetWriter

AVAssetWriter provides services for writing media data to a new file. It is initialized with assetWriterWithURL:fileType:error: and uses AVAssetWriterInput objects to append new media samples, or references to existing samples packaged as CMSampleBuffer objects, to the output file. Its main methods (a usage sketch follows the list):

- startWriting: prepares the receiver to accept input and to write to its output file.
- startSessionAtSourceTime: starts a sample-writing session at the given source time.
- finishWritingWithCompletionHandler: marks all unfinished inputs as finished and completes writing the output file.
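
As a usage sketch (not the author's exact code), the sample buffers from the delegate in section 3 might be fed to the writer like this; self.assetWriter and the two writer inputs from section 7 are assumed to exist:

//Append one captured buffer, starting the writer on the first frame
- (void)appendSampleBuffer:(CMSampleBufferRef)sampleBuffer toInput:(AVAssetWriterInput *)input {
    if (self.assetWriter.status == AVAssetWriterStatusUnknown) {
        //Open the writing session at the first buffer's presentation time
        CMTime startTime = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);
        if ([self.assetWriter startWriting]) {
            [self.assetWriter startSessionAtSourceTime:startTime];
        }
    }
    if (self.assetWriter.status == AVAssetWriterStatusWriting && input.isReadyForMoreMediaData) {
        //Appending can fail (e.g. disk full), so check the result
        if (![input appendSampleBuffer:sampleBuffer]) {
            NSLog(@"Append failed: %@", self.assetWriter.error);
        }
    }
}

//Finish the file once recording ends
- (void)finishRecordingWithCompletion:(void (^)(void))completion {
    [self.assetWriterVideoInput markAsFinished];
    [self.assetWriterAudioInput markAsFinished];
    [self.assetWriter finishWritingWithCompletionHandler:completion];
}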

7. AVAssetWriterInput

AVAssetWriterInput appends new media samples, or references to existing samples packaged as CMSampleBuffer objects, to a single track of an AVAssetWriter's output file.

Initializing the video writer input:

- (AVAssetWriterInput *)assetWriterVideoInput {
    if (!_assetWriterVideoInput) {
        //Pixel count of the output video
        NSInteger numPixels = self.videoSize.width * [UIScreen mainScreen].scale  * self.videoSize.height * [UIScreen mainScreen].scale;
        //Bits per pixel
        CGFloat bitsPerPixel = 24.0;
        NSInteger bitsPerSecond = numPixels * bitsPerPixel;
        // Bitrate and frame-rate settings
        NSDictionary *compressionProperties = @{ AVVideoAverageBitRateKey : @(bitsPerSecond),
                                                 AVVideoExpectedSourceFrameRateKey : @(30),
                                                 AVVideoMaxKeyFrameIntervalKey : @(30),
                                                 AVVideoProfileLevelKey : AVVideoProfileLevelH264BaselineAutoLevel };
        CGFloat width = self.videoSize.width * [UIScreen mainScreen].scale;
        CGFloat height = self.videoSize.height * [UIScreen mainScreen].scale;
        //Video settings
        self.videoCompressionSettings = @{ AVVideoCodecKey : AVVideoCodecH264,
                                           AVVideoWidthKey : @(width),
                                           AVVideoHeightKey : @(height),
                                           AVVideoScalingModeKey : AVVideoScalingModeResizeAspectFill,
                                           AVVideoCompressionPropertiesKey : compressionProperties };
        
        _assetWriterVideoInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo outputSettings:self.videoCompressionSettings];
        //expectsMediaDataInRealTime must be YES because the data comes from the capture session in real time
        _assetWriterVideoInput.expectsMediaDataInRealTime = YES;
    }
    return _assetWriterVideoInput;
}

Initializing the audio writer input:

- (AVAssetWriterInput *)assetWriterAudioInput {
    if (_assetWriterAudioInput == nil) {
        /* Notes:
         <1> AVNumberOfChannelsKey: channel count; 1 = mono, 2 = stereo
         <2> AVSampleRateKey: sample rate, e.g. 8000/44100/96000; affects capture quality
         <3> bit depth: 8, 16, 24, or 32
         <4> AVEncoderAudioQualityKey: quality (reportedly needs iPhone 8 or later)
         <5> AVEncoderBitRateKey: bit rate, typically 128000
         */
        
        /* Also note: AAC does not support a 96000 sample rate; setting 8000 also made assetWriter error out */
        // Audio settings
        _audioCompressionSettings = @{ AVEncoderBitRatePerChannelKey : @(28000),
                                       AVFormatIDKey : @(kAudioFormatMPEG4AAC),
                                       AVNumberOfChannelsKey : @(1),
                                       AVSampleRateKey : @(22050) };
        
        _assetWriterAudioInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeAudio outputSettings:self.audioCompressionSettings];
        _assetWriterAudioInput.expectsMediaDataInRealTime = YES;
    }
    return _assetWriterAudioInput;
}

8. AVAssetExportSession

AVAssetExportSession is the system-provided class for compressing/exporting an asset; it is created with exportSessionWithAsset:presetName:.
Its main properties are:

- outputURL: the URL of the output file
- shouldOptimizeForNetworkUse: whether to optimize the file for network playback
- outputFileType: the container format of the exported file

After setting these properties, call the method below to compress the video asynchronously; when it finishes, the compressed video can be saved from its output path.
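
The article does not show the session being created; a minimal sketch, where recordedVideoURL and outputVideoFilePath are illustrative names:

//Create an export session at medium quality for the recorded file
AVAsset *asset = [AVAsset assetWithURL:recordedVideoURL];
AVAssetExportSession *videoExportSession = [AVAssetExportSession exportSessionWithAsset:asset presetName:AVAssetExportPresetMediumQuality];
videoExportSession.outputURL = [NSURL fileURLWithPath:outputVideoFilePath];
videoExportSession.shouldOptimizeForNetworkUse = YES;
videoExportSession.outputFileType = AVFileTypeMPEG4;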

//Export asynchronously. Note: this completion/progress variant is a wrapper from the
//project; the system method exportAsynchronouslyWithCompletionHandler: takes a block
//with no parameters, and progress is read from the session's progress property.
[videoExportSession exportAsynchronouslyWithCompletionHandler:^(NSError * _Nonnull error) {
    if (error) {
        NSLog(@"%@",error.localizedDescription);
    } else {
        //Grab the first frame for a cover image
        UIImage *cover = [UIImage dx_videoFirstFrameWithURL:url];
        //Save to the photo library; missing permissions go through the error path
        [TMCaptureTool saveVideoToPhotoLibrary:url completion:^(PHAsset * _Nonnull asset, NSString * _Nonnull errorMessage) {
            if (errorMessage) {  //save failed
                NSLog(@"%@",errorMessage);
                [weakSelf finishWithImage:cover asset:nil videoPath:outputVideoFilePath];
            } else {
                [weakSelf finishWithImage:cover asset:asset videoPath:outputVideoFilePath];
            }
        }];
    }
} progress:^(float progress) {
    //NSLog(@"video export progress %f",progress);
}];

The method used here to save the video:

//localIdentifier is assumed to be a __block NSString declared outside this snippet
[[PHPhotoLibrary sharedPhotoLibrary] performChanges:^{
    PHAssetChangeRequest *request = [PHAssetChangeRequest creationRequestForAssetFromVideoAtFileURL:url];
    localIdentifier = request.placeholderForCreatedAsset.localIdentifier;
    request.creationDate = [NSDate date];
} completionHandler:^(BOOL success, NSError * _Nullable error) {
    TM_DISPATCH_ON_MAIN_THREAD(^{
        if (success) {
            PHAsset *asset = [[PHAsset fetchAssetsWithLocalIdentifiers:@[localIdentifier] options:nil] firstObject];
            if (completion) completion(asset,nil);
        } else if (error) {
            NSLog(@"Failed to save the video %@",error.localizedDescription);
            if (completion) completion(nil,[NSString stringWithFormat:@"Failed to save the video %@",error.localizedDescription]);
        }
    });
}];

That wraps up recording and saving the video. Only part of the code is shown in this article; a lot more is encapsulated in the project. The complete project, TMCaptureVideo, is on GitHub for reference; feedback and suggestions are welcome!
