
Video Overlay and Stitching

2019-10-12  白马青衫少年郎0

I recently ran into a requirement: place N videos and N photos onto a canvas (a single image), then save the result as one complete video. Example: one background image with 2 embedded videos and 1 embedded image; see the figure below (the GIF got blurry when uploaded here 😂).

Implementation:

Step 1: generate the main (background) video

1. Render the background board (the image) into a video;

2. That video's duration must be greater than or equal to every sub-video's duration;

3. Its dimensions must be greater than or equal to every sub-video's dimensions.

The image-to-CVPixelBufferRef conversion method is easy to find online:

// Converts a CGImage into a newly created CVPixelBufferRef.
// Note: the returned buffer has a +1 retain count; the caller must CVPixelBufferRelease it.
- (CVPixelBufferRef)pixelBufferFromCGImage:(CGImageRef)image withSize:(CGSize)size {
    int width = size.width;
    int height = size.height;

    NSDictionary *options = @{(id)kCVPixelBufferCGImageCompatibilityKey: @YES,
                              (id)kCVPixelBufferCGBitmapContextCompatibilityKey: @YES};

    CVPixelBufferRef pxbuffer = NULL;
    CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault, width, height,
                                          kCVPixelFormatType_32ARGB,
                                          (__bridge CFDictionaryRef)options,
                                          &pxbuffer);
    NSParameterAssert(status == kCVReturnSuccess && pxbuffer != NULL);

    CVPixelBufferLockBaseAddress(pxbuffer, 0);
    void *pxdata = CVPixelBufferGetBaseAddress(pxbuffer);
    NSParameterAssert(pxdata != NULL);

    // Draw the image directly into the buffer's backing memory (ARGB, alpha skipped).
    CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(pxdata, width, height, 8, 4 * width,
                                                 rgbColorSpace, kCGImageAlphaNoneSkipFirst);
    NSParameterAssert(context);

    CGContextDrawImage(context, CGRectMake(0, 0, width, height), image);

    CGColorSpaceRelease(rgbColorSpace);
    CGContextRelease(context);
    CVPixelBufferUnlockBaseAddress(pxbuffer, 0);
    return pxbuffer;
}

Building the background video from the images:

- (void)writeImageAsMovie:(NSArray *)array toPath:(NSString *)path size:(CGSize)size completion:(HandleVideoCompletion)completion {
    NSError *error = nil;
    AVAssetWriter *videoWriter = [[AVAssetWriter alloc] initWithURL:[NSURL fileURLWithPath:path]
                                                           fileType:AVFileTypeQuickTimeMovie
                                                              error:&error];
    NSParameterAssert(videoWriter);
    NSDictionary *videoSettings = @{AVVideoCodecKey: AVVideoCodecH264,
                                    AVVideoWidthKey: @((int)size.width),
                                    AVVideoHeightKey: @((int)size.height)};

    AVAssetWriterInput *writerInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo
                                                                         outputSettings:videoSettings];
    AVAssetWriterInputPixelBufferAdaptor *adaptor = [AVAssetWriterInputPixelBufferAdaptor assetWriterInputPixelBufferAdaptorWithAssetWriterInput:writerInput
                                                                                                                    sourcePixelBufferAttributes:nil];
    NSParameterAssert(writerInput);
    NSParameterAssert([videoWriter canAddInput:writerInput]);
    [videoWriter addInput:writerInput];
    [videoWriter startWriting];
    [videoWriter startSessionAtSourceTime:kCMTimeZero];

    for (NSInteger i = 0; i < array.count; ++i) {
        // Appending is asynchronous: wait until the input can accept more data
        // instead of silently dropping frames when it is busy.
        while (!writerInput.readyForMoreMediaData) {
            [NSThread sleepForTimeInterval:0.05];
        }
        CMTime presentTime = CMTimeMake(30 * i, 30); // one image per second, 30 timescale

        CVPixelBufferRef buffer = [self pixelBufferFromCGImage:[[array objectAtIndex:i] CGImage] withSize:size];
        [adaptor appendPixelBuffer:buffer withPresentationTime:presentTime];
        CVPixelBufferRelease(buffer); // pixelBufferFromCGImage: returns a +1 buffer
        CMTimeShow(presentTime);
    }

    [writerInput markAsFinished];
    [videoWriter finishWritingWithCompletionHandler:^{
        dispatch_async(dispatch_get_main_queue(), ^{
            if (videoWriter.status == AVAssetWriterStatusCompleted) {
                NSURL *videoTempURL = [NSURL fileURLWithPath:path];
                [[PHPhotoLibrary sharedPhotoLibrary] performChanges:^{
                    [PHAssetChangeRequest creationRequestForAssetFromVideoAtFileURL:videoTempURL];
                } completionHandler:^(BOOL success, NSError * _Nullable error) {
                    if (success) {
                        AVAsset *asset = [AVAsset assetWithURL:videoTempURL];
                        completion(asset, nil);
                    } else {
                        completion(nil, error);
                    }
                }];
            } else {
                completion(nil, videoWriter.error);
            }
        });
    }];
}

Step 2: composite the videos
The background board yields one video; add the two videos to embed and there are three in total.
My first idea was the API below, with each videoLayer holding one video and its frame adjusted for position — and that turned out to be a pitfall: the videoLayers all receive the same composited frames rather than one source track each.

+ (instancetype)videoCompositionCoreAnimationToolWithPostProcessingAsVideoLayers:(NSArray<CALayer *> *)videoLayers inLayer:(CALayer *)animationLayer
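Before the layer-instruction loop in the working approach can run, the mutable composition, its tracks, and the instruction array have to exist. A sketch of that setup, under the assumption that `assets` holds the embedded videos followed by the background video (the variable names mirror the fragment that follows):

```objc
// Sketch: build the mutable composition with one video track per asset.
AVMutableComposition *composition = [AVMutableComposition composition];
NSMutableArray<AVMutableCompositionTrack *> *compositionVideoTracks = [NSMutableArray array];
NSMutableArray<AVMutableVideoCompositionLayerInstruction *> *layerInstructions = [NSMutableArray array];

for (AVAsset *asset in assets) {
    AVMutableCompositionTrack *track = [composition addMutableTrackWithMediaType:AVMediaTypeVideo
                                                                preferredTrackID:kCMPersistentTrackID_Invalid];
    AVAssetTrack *sourceTrack = [asset tracksWithMediaType:AVMediaTypeVideo].firstObject;
    NSError *error = nil;
    [track insertTimeRange:CMTimeRangeMake(kCMTimeZero, asset.duration)
                   ofTrack:sourceTrack
                    atTime:kCMTimeZero
                     error:&error];
    [compositionVideoTracks addObject:track];
}
```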

The approach that finally worked:

    NSInteger viewIndex = 0; // pairs embedded asset i with its TMVideoElementView, in order
    for (NSInteger i = 0; i < assets.count; ++i) {
        AVAsset *asset = assets[i];
        AVAssetTrack *currentVideoTrack = [asset tracksWithMediaType:AVMediaTypeVideo].firstObject;
        if (i == assets.count - 1) { // background video: keep its own transform, full size
            AVMutableVideoCompositionLayerInstruction *layerInstruction = [AVMutableVideoCompositionLayerInstruction
                                                                           videoCompositionLayerInstructionWithAssetTrack:compositionVideoTracks[i]];
            [layerInstruction setTransform:currentVideoTrack.preferredTransform atTime:kCMTimeZero];
            [layerInstructions addObject:layerInstruction];
            continue;
        }

        // Embedded video: find the next video element view on the canvas,
        // so each track gets exactly one layer instruction.
        while (viewIndex < pandentViews.count && ![pandentViews[viewIndex] isKindOfClass:[TMVideoElementView class]]) {
            ++viewIndex;
        }
        if (viewIndex >= pandentViews.count) break;

        TMVideoElementView *videoEleView = (TMVideoElementView *)pandentViews[viewIndex++];
        AVMutableVideoCompositionLayerInstruction *layerInstruction = [AVMutableVideoCompositionLayerInstruction
                                                                       videoCompositionLayerInstructionWithAssetTrack:compositionVideoTracks[i]];
        CGRect rect = videoEleView.frame;                 // the view's size/position on the canvas
        CGSize videoSize = currentVideoTrack.naturalSize; // the sub-video's pixel size

        // Uniform scale (same factor on both axes preserves the aspect ratio),
        // then translate to the view's position; `ratio` maps canvas points to render pixels.
        CGFloat scale = rect.size.width / videoSize.width * ratio;
        CGFloat x = rect.origin.x * ratio;
        CGFloat y = rect.origin.y * ratio;

        CGAffineTransform t = CGAffineTransformMake(scale, 0, 0, scale, x, y);
        [layerInstruction setTransform:t atTime:kCMTimeZero];
        [layerInstructions addObject:layerInstruction];
    }

    AVMutableVideoCompositionInstruction *videoCompositionInstruction = [AVMutableVideoCompositionInstruction videoCompositionInstruction];
    videoCompositionInstruction.timeRange = CMTimeRangeMake(kCMTimeZero, backgroundAsset.duration);
    videoCompositionInstruction.layerInstructions = layerInstructions;

    AVMutableVideoComposition *mutableVideoComposition = [AVMutableVideoComposition videoComposition];
    mutableVideoComposition.renderSize = renderSize;
    mutableVideoComposition.frameDuration = CMTimeMake(1, 30);
    mutableVideoComposition.instructions = @[videoCompositionInstruction];
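With the video composition built, the final file is produced by an export session. A minimal sketch, assuming `composition` is the AVMutableComposition from the setup above and `outputPath` is a writable file path:

```objc
// Sketch: export the composition with the video composition applied.
AVAssetExportSession *exportSession = [[AVAssetExportSession alloc] initWithAsset:composition
                                                                       presetName:AVAssetExportPresetHighestQuality];
exportSession.outputURL = [NSURL fileURLWithPath:outputPath]; // outputPath is assumed
exportSession.outputFileType = AVFileTypeQuickTimeMovie;
exportSession.videoComposition = mutableVideoComposition;
[exportSession exportAsynchronouslyWithCompletionHandler:^{
    if (exportSession.status == AVAssetExportSessionStatusCompleted) {
        // Save to the photo library, as in writeImageAsMovie: above.
    }
}];
```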

Adding still images on top of the video works the same way as a watermark overlay, so I won't cover it here; search for "video watermark" for details.
If you know a better approach, please let me know.
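For completeness, the usual watermark trick is a Core Animation overlay. A sketch, assuming `overlayImage` (a UIImage) and `renderSize` from the composition above:

```objc
// Sketch: overlay a still image via Core Animation layers.
CALayer *parentLayer = [CALayer layer];
CALayer *videoLayer = [CALayer layer];
parentLayer.frame = CGRectMake(0, 0, renderSize.width, renderSize.height);
videoLayer.frame = parentLayer.frame;

CALayer *imageLayer = [CALayer layer];
imageLayer.contents = (__bridge id)overlayImage.CGImage; // overlayImage is assumed
imageLayer.frame = CGRectMake(20, 20, 100, 100);         // position on the canvas

[parentLayer addSublayer:videoLayer]; // the composited video renders into videoLayer
[parentLayer addSublayer:imageLayer]; // the image sits on top

mutableVideoComposition.animationTool = [AVVideoCompositionCoreAnimationTool
    videoCompositionCoreAnimationToolWithPostProcessingAsVideoLayer:videoLayer
                                                            inLayer:parentLayer];
```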
