
GPUImage in Depth (10): Merging Videos with GPUImage and Composition Instructions

2016-10-10  落影loyinglin

Preface

The GPUImage in Depth series is collected in my GPUImage collection. The earlier post GPUImage in Depth (8): Merging Videos and Mixing Audio implemented the merge with an open-source approach from GitHub; this post instead uses GPUImage's native GPUImageMovieComposition to merge videos.

Core idea

First load the video assets; then configure the track information, the video composition instructions, and the audio mix parameters; create a GPUImageMovieComposition from them; set its output target to a GPUImageMovieWriter and start processing; finally write the finished video to the device.

Results

Details

1. Loading the video assets

Asynchronous waiting is implemented with GCD's dispatch_group_notify: once every asset has finished loading, -synchronizeWithEditor is invoked.

- (void)loadAsset:(AVAsset *)asset withKeys:(NSArray *)assetKeysToLoad usingDispatchGroup:(dispatch_group_t)dispatchGroup
{
    dispatch_group_enter(dispatchGroup);
    [asset loadValuesAsynchronouslyForKeys:assetKeysToLoad completionHandler:^(){
        // Check whether every requested key loaded successfully
        BOOL bSuccess = YES;
        for (NSString *key in assetKeysToLoad) {
            NSError *error;
            
            if ([asset statusOfValueForKey:key error:&error] == AVKeyValueStatusFailed) {
                NSLog(@"Key value loading failed for key:%@ with error: %@", key, error);
                bSuccess = NO;
                break;
            }
        }
        if (![asset isComposable]) {
            NSLog(@"Asset is not composable");
            bSuccess = NO;
        }
        if (bSuccess && CMTimeGetSeconds(asset.duration) > 5) {
            [self.clips addObject:asset];
            // Use only the first 5 seconds of each clip
            [self.clipTimeRanges addObject:[NSValue valueWithCMTimeRange:CMTimeRangeMake(CMTimeMakeWithSeconds(0, 1), CMTimeMakeWithSeconds(5, 1))]];
        }
        else {
            NSLog(@"Asset failed to load or is shorter than 5 seconds");
        }
        dispatch_group_leave(dispatchGroup);
    }];
}
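The caller that drives -loadAsset:withKeys:usingDispatchGroup: is not shown above. A minimal sketch of what it might look like, assuming a clipURLs property holding the source URLs (the property name and the key list follow Apple's AVFoundation editing samples and are assumptions):

```objc
// Hypothetical caller: load each clip, then continue once all of them finish.
- (void)setupEditor
{
    dispatch_group_t dispatchGroup = dispatch_group_create();
    NSArray *assetKeys = @[@"tracks", @"duration", @"composable"];

    for (NSURL *url in self.clipURLs) {   // clipURLs is an assumed property
        AVURLAsset *asset = [AVURLAsset URLAssetWithURL:url options:nil];
        [self loadAsset:asset withKeys:assetKeys usingDispatchGroup:dispatchGroup];
    }

    // Runs only after every dispatch_group_leave() has balanced its enter().
    dispatch_group_notify(dispatchGroup, dispatch_get_main_queue(), ^{
        [self synchronizeWithEditor];
    });
}
```

Because loadValuesAsynchronouslyForKeys: returns immediately, the enter/leave pair around each asset is what makes dispatch_group_notify fire only after all completion handlers have run.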

2. Configuring the tracks and the audio/video instructions

- (void)buildCompositionObjectsForPlayback
{
    if ( (self.clips == nil) || [self.clips count] == 0 ) {
        self.composition = nil;
        self.videoComposition = nil;
        return;
    }
    
    CGSize videoSize = [[self.clips objectAtIndex:0] naturalSize];
    AVMutableComposition *composition = [AVMutableComposition composition];
    AVMutableVideoComposition *videoComposition = nil;
    AVMutableAudioMix *audioMix = nil;
    
    composition.naturalSize = videoSize;
    
    videoComposition = [AVMutableVideoComposition videoComposition];
    audioMix = [AVMutableAudioMix audioMix];
    
    [self buildTransitionComposition:composition andVideoComposition:videoComposition andAudioMix:audioMix];
    
    if (videoComposition) {
        // Properties common to every video composition
        videoComposition.frameDuration = CMTimeMake(1, 30); // 30 fps
        videoComposition.renderSize = videoSize;
    }
    
    self.composition = composition;
    self.videoComposition = videoComposition;
    self.audioMix = audioMix;
}
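The -buildTransitionComposition:andVideoComposition:andAudioMix: method called above is not listed in the post. A simplified sketch of what such a method typically does, modeled on Apple's AVFoundation editing samples (sequential clips with pass-through instructions, no cross-fade transitions; the details are assumptions, not the author's exact code):

```objc
- (void)buildTransitionComposition:(AVMutableComposition *)composition
               andVideoComposition:(AVMutableVideoComposition *)videoComposition
                       andAudioMix:(AVMutableAudioMix *)audioMix
{
    // One video and one audio track receive every clip's media in sequence.
    AVMutableCompositionTrack *videoTrack =
        [composition addMutableTrackWithMediaType:AVMediaTypeVideo
                                 preferredTrackID:kCMPersistentTrackID_Invalid];
    AVMutableCompositionTrack *audioTrack =
        [composition addMutableTrackWithMediaType:AVMediaTypeAudio
                                 preferredTrackID:kCMPersistentTrackID_Invalid];

    CMTime cursor = kCMTimeZero;
    NSMutableArray *instructions = [NSMutableArray array];

    for (NSUInteger i = 0; i < self.clips.count; i++) {
        AVAsset *asset = self.clips[i];
        CMTimeRange range = [self.clipTimeRanges[i] CMTimeRangeValue];

        // Append this clip's media onto the shared tracks.
        [videoTrack insertTimeRange:range
                            ofTrack:[asset tracksWithMediaType:AVMediaTypeVideo].firstObject
                             atTime:cursor error:nil];
        [audioTrack insertTimeRange:range
                            ofTrack:[asset tracksWithMediaType:AVMediaTypeAudio].firstObject
                             atTime:cursor error:nil];

        // A pass-through instruction covering this clip's slice of the timeline.
        AVMutableVideoCompositionInstruction *instruction =
            [AVMutableVideoCompositionInstruction videoCompositionInstruction];
        instruction.timeRange = CMTimeRangeMake(cursor, range.duration);
        instruction.layerInstructions =
            @[[AVMutableVideoCompositionLayerInstruction
                  videoCompositionLayerInstructionWithAssetTrack:videoTrack]];
        [instructions addObject:instruction];

        cursor = CMTimeAdd(cursor, range.duration);
    }

    videoComposition.instructions = instructions;
    audioMix.inputParameters =
        @[[AVMutableAudioMixInputParameters audioMixInputParametersWithTrack:audioTrack]];
}
```

The instructions must tile the composition's timeline without gaps or overlaps, which is why the cursor advances by exactly each clip's inserted duration.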

3. Creating the GPUImageMovieComposition and wiring up GPUImage

The GPUImageMovieComposition is created from the composition, video composition, and audio mix built in step 2.
Both the video and audio output targets are set to a GPUImageMovieWriter, and in the writer's completionBlock the finished video is saved to the device's photo library.

- (void)synchronizePlayerWithEditor
{
    // Write the output to Documents/Movie.m4v, removing any previous file first
    NSString *pathToMovie = [NSHomeDirectory() stringByAppendingPathComponent:@"Documents/Movie.m4v"];
    unlink([pathToMovie UTF8String]);
    NSURL *movieURL = [NSURL fileURLWithPath:pathToMovie];
    
    self.movieWriter = [[GPUImageMovieWriter alloc] initWithMovieURL:movieURL size:CGSizeMake(640, 480)];
    
    // Build the GPUImage source from the composition, video composition and audio mix created in step 2
    self.imageMovieComposition = [[GPUImageMovieComposition alloc] initWithComposition:self.editor.composition andVideoComposition:self.editor.videoComposition andAudioMix:self.editor.audioMix];
    self.imageMovieComposition.playAtActualSpeed = YES;
    self.imageMovieComposition.runBenchmark = YES;
    
    // Feed both the movie writer and the on-screen filter chain from the composition
    [self.imageMovieComposition addTarget:self.movieWriter];
    [self.imageMovieComposition addTarget:filter];
    
    [self.imageMovieComposition enableSynchronizedEncodingUsingMovieWriter:self.movieWriter];
    self.imageMovieComposition.audioEncodingTarget = self.movieWriter;
    
    [_movieWriter startRecording];
    [self.imageMovieComposition startProcessing];
    
    CADisplayLink* dlink = [CADisplayLink displayLinkWithTarget:self selector:@selector(updateProgress)];
    [dlink addToRunLoop:[NSRunLoop currentRunLoop] forMode:NSRunLoopCommonModes];
    [dlink setPaused:NO];
    
    __weak typeof(self) weakSelf = self;
    [_movieWriter setCompletionBlock:^{
        __strong typeof(weakSelf) strongSelf = weakSelf;
        if (!strongSelf) return;  // guard ivar access below against a deallocated self
        
        [strongSelf->_movieWriter finishRecording];
        [strongSelf->_imageMovieComposition endProcessing];
        
        ALAssetsLibrary *library = [[ALAssetsLibrary alloc] init];
        if (UIVideoAtPathIsCompatibleWithSavedPhotosAlbum(pathToMovie))
        {
            [library writeVideoAtPathToSavedPhotosAlbum:movieURL completionBlock:^(NSURL *assetURL, NSError *error)
             {
                 dispatch_async(dispatch_get_main_queue(), ^{
                     
                     if (error) {
                         UIAlertView *alert = [[UIAlertView alloc] initWithTitle:@"Failed to save video" message:nil
                                                                        delegate:nil cancelButtonTitle:@"OK" otherButtonTitles:nil];
                         [alert show];
                     } else {
                         UIAlertView *alert = [[UIAlertView alloc] initWithTitle:@"Video saved successfully" message:nil
                                                                        delegate:nil cancelButtonTitle:@"OK" otherButtonTitles:nil];
                         [alert show];
                     }
                 });
             }];
        }
        else {
            NSLog(@"Video is not compatible with the saved photos album");
        }
    }];
    
}

Summary

It has been two months since the previous post in this series, GPUImage in Depth (9): Image Input/Output and Filter Chains; this one mostly settles an idea left unfinished back then.
The wide range of advanced filters is the core appeal of GPUImage; so far I have only used its beautification filters in live streaming.
Reading the GPUImage source deepened my understanding of OpenGL ES, and also surfaced some problems in GPUImage itself, for example flicker in the streamed video when a filter is added while streaming (reported by a Jianshu reader; the diagram from that discussion is below).

Rough structure diagram

The demo is available at the code repository.

Extension

Stepping through the code in the debugger produces the following trace.
Frame 1:
1. The camera acquires buffer_d90;
2. filter1 acquires buffer_420;
3. filter1 finishes rendering and returns buffer_d90;
4. The streaming output reads from buffer_420;
5. The ImageView reads from buffer_420, renders it to screen (the preview), and returns buffer_420.
Frame 2:
1. The camera acquires buffer_420;
2. filter1 acquires buffer_d90;
3. filter1 finishes rendering and returns buffer_420;
4. The streaming output reads from buffer_d90;
5. The ImageView reads from buffer_d90, renders it to screen (the preview), and returns buffer_d90.
The camera acquires a different buffer on each frame, and the GPU processes them asynchronously, so when the streaming output reads pixel data it can end up reading from two different buffers at once.

Solution 1:

After adding an empty passthrough filter, the debug trace becomes:
Frame 1:
1. The camera acquires buffer_5e0;
2. filter1 acquires buffer_ba0;
3. filter1 finishes rendering and returns buffer_5e0;
4. filter2 acquires buffer_5e0;
5. filter2 finishes rendering and returns buffer_ba0;
6. The streaming output reads from buffer_5e0;
7. The ImageView reads from buffer_5e0, renders it to screen (the preview), and returns buffer_5e0.
Frame 2:
1. The camera acquires buffer_5e0;
2. filter1 acquires buffer_ba0;
3. filter1 finishes rendering and returns buffer_5e0;
4. filter2 acquires buffer_5e0;
5. filter2 finishes rendering and returns buffer_ba0;
6. The streaming output reads from buffer_5e0;
7. The ImageView reads from buffer_5e0, renders it to screen (the preview), and returns buffer_5e0.
The camera now acquires the same buffer on every frame, so the pixel data the streaming output reads always comes from a single buffer.
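In code, Solution 1 amounts to inserting an identity filter into the chain. A sketch, assuming camera, filter1, and the streaming/preview targets are existing objects in the pipeline (their names here are hypothetical):

```objc
// The base GPUImageFilter renders its input unchanged, so it acts as a passthrough.
GPUImageFilter *passthroughFilter = [[GPUImageFilter alloc] init];

[filter1 removeAllTargets];
[filter1 addTarget:passthroughFilter];          // this is "filter2" in the trace above
[passthroughFilter addTarget:streamingOutput];  // e.g. a GPUImageRawDataOutput feeding the encoder
[passthroughFilter addTarget:previewImageView]; // the on-screen GPUImageView
```

The extra hop changes the order in which framebuffers return to GPUImage's cache, which is why the camera ends up reusing the same buffer every frame.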

Solution 2:

Add a single line of code to the GPUImageBuffer class:

 glFinish();

glFinish() blocks until every previously issued OpenGL command has completed, so downstream readers no longer race the GPU's asynchronous rendering.
