Filter Processing of CVPixelBufferRef on iOS
I. Introduction
In iOS audio/video development you frequently run into the CVPixelBufferRef data structure. Much like AVFrame in FFmpeg, it holds the raw image data.
In some scenarios, feeding a CVPixelBufferRef into a filter SDK changes the filter result shown on screen even though the SDK never hands back a processed CVPixelBufferRef, as in the scene shown below.
![](https://img.haomeiwen.com/i4349969/d3c80d5ce1fd0e67.png)
This works because of two properties:
1. The filter SDK processes the CVPixelBufferRef synchronously.
2. The CVPixelBufferRef outside the filter SDK and the one inside it share the same block of memory.
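Taken together, these two properties mean the caller's buffer is filtered in place: when the call returns, the same memory already holds the processed image. A minimal usage sketch, where `filterSDK` and its `renderPixelBuffer:` method are hypothetical stand-ins for a real filter SDK:

```objectivec
// Hypothetical in-place filtering: same backing memory before and after the call.
CVPixelBufferLockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);
void *before = CVPixelBufferGetBaseAddress(pixelBuffer);
CVPixelBufferUnlockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);

[filterSDK renderPixelBuffer:pixelBuffer]; // synchronous; returns after the GPU work is flushed

CVPixelBufferLockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);
void *after = CVPixelBufferGetBaseAddress(pixelBuffer);
CVPixelBufferUnlockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);

NSAssert(before == after, @"the SDK writes into the caller's buffer");
```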
II. Implementation Flow
![](https://img.haomeiwen.com/i4349969/dce45aca727a8ea6.png)
1. Feed the original CVPixelBufferRef into the GPUImage filter chain and get the processed texture A as output.
2. Create texture B from the original CVPixelBufferRef and mount it as the texture attachment of a frame buffer object.
3. Draw texture A onto the frame buffer object. This updates the contents of texture B, and with it the image data of the CVPixelBufferRef.
4. Output the filter-processed CVPixelBufferRef; its memory address is the same as that of the original CVPixelBufferRef.
III. Key Code
1. Two ways to create a texture object from a CVPixelBufferRef:
The CoreVideo approach: this creates a CVOpenGLESTextureRef texture, whose texture id can then be obtained with CVOpenGLESTextureGetName(texture).
```objectivec
- (GLuint)convertRGBPixelBufferToTexture:(CVPixelBufferRef)pixelBuffer {
    if (!pixelBuffer) {
        return 0;
    }
    CGSize textureSize = CGSizeMake(CVPixelBufferGetWidth(pixelBuffer),
                                    CVPixelBufferGetHeight(pixelBuffer));
    CVOpenGLESTextureRef texture = NULL;
    CVReturn status = CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault,
                                                                   [[GPUImageContext sharedImageProcessingContext] coreVideoTextureCache],
                                                                   pixelBuffer,
                                                                   NULL,
                                                                   GL_TEXTURE_2D,
                                                                   GL_RGBA,
                                                                   textureSize.width,
                                                                   textureSize.height,
                                                                   GL_BGRA,
                                                                   GL_UNSIGNED_BYTE,
                                                                   0,
                                                                   &texture);
    if (status != kCVReturnSuccess) {
        NSLog(@"Can't create texture");
        return 0;
    }
    self.renderTexture = texture;
    return CVOpenGLESTextureGetName(texture);
}
```
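The CVOpenGLESTextureRef retained in `self.renderTexture` must eventually be released, and GPUImage's texture cache flushed, otherwise the cache keeps the texture (and its backing memory) alive. A minimal cleanup sketch, assuming a `renderTexture` property of type CVOpenGLESTextureRef:

```objectivec
// Release the cached CoreVideo texture created from the pixel buffer
// and flush the texture cache so stale entries are not reused.
- (void)cleanUpTextures {
    if (self.renderTexture) {
        CFRelease(self.renderTexture);
        self.renderTexture = NULL;
    }
    CVOpenGLESTextureCacheFlush([[GPUImageContext sharedImageProcessingContext] coreVideoTextureCache], 0);
}
```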
The OpenGL approach: create a texture object, then upload the image data from the CVPixelBufferRef into it with glTexImage2D.

```objectivec
glBindTexture(GL_TEXTURE_2D, [outputFramebuffer texture]);
glTexImage2D(GL_TEXTURE_2D, 0, _pixelFormat == GPUPixelFormatRGB ? GL_RGB : GL_RGBA,
             (int)uploadedImageSize.width, (int)uploadedImageSize.height, 0,
             (GLint)_pixelFormat, (GLenum)_pixelType, bytesToUpload);
```
2. The demo uses GPUImageRawDataInput as the start of the filter chain, feeding it the image data of the CVPixelBufferRef, and GPUImageTextureOutput as the end of the chain, outputting the texture id of the filtered result.
```objectivec
- (CVPixelBufferRef)renderPixelBuffer:(CVPixelBufferRef)pixelBuffer {
    if (!pixelBuffer) {
        return NULL;
    }
    CVPixelBufferRetain(pixelBuffer);
    runSynchronouslyOnVideoProcessingQueue(^{
        [GPUImageContext useImageProcessingContext];
        CGSize size = CGSizeMake(CVPixelBufferGetWidth(pixelBuffer),
                                 CVPixelBufferGetHeight(pixelBuffer));
        CVPixelBufferLockBaseAddress(pixelBuffer, 0);
        void *bytes = CVPixelBufferGetBaseAddress(pixelBuffer);
        [self.dataInput updateDataFromBytes:bytes size:size];
        CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
        [self.dataInput processData];
        GLuint textureId = self.textureOutput.texture;
        [self convertTextureId:textureId textureSize:size pixelBuffer:pixelBuffer];
    });
    CVPixelBufferRelease(pixelBuffer);
    return pixelBuffer;
}
```
```objectivec
- (void)newFrameReadyFromTextureOutput:(GPUImageTextureOutput *)callbackTextureOutput {
    [self.textureOutput doneWithTexture];
}
```
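The `dataInput` and `textureOutput` used above have to be wired together once before the first frame. A minimal setup sketch, assuming BGRA input and an arbitrary GPUImage filter (GPUImageSepiaFilter here) in the middle of the chain; `firstFrameBytes` and `firstFrameSize` are assumed to come from the first pixel buffer:

```objectivec
// One-time chain setup: rawDataInput -> filter -> textureOutput.
self.dataInput = [[GPUImageRawDataInput alloc] initWithBytes:firstFrameBytes
                                                        size:firstFrameSize
                                                 pixelFormat:GPUPixelFormatBGRA];
GPUImageSepiaFilter *filter = [[GPUImageSepiaFilter alloc] init]; // any GPUImage filter works here
self.textureOutput = [[GPUImageTextureOutput alloc] init];
self.textureOutput.delegate = self; // delivers -newFrameReadyFromTextureOutput:
[self.dataInput addTarget:filter];
[filter addTarget:self.textureOutput];
```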
3. Create a texture from the original CVPixelBufferRef and mount it as the texture attachment of the frame buffer object, then draw the filter-processed texture into that frame buffer object.
```objectivec
- (CVPixelBufferRef)convertTextureId:(GLuint)textureId
                         textureSize:(CGSize)textureSize
                         pixelBuffer:(CVPixelBufferRef)pixelBuffer {
    [GPUImageContext useImageProcessingContext];
    [self cleanUpTextures];

    GLuint frameBuffer;
    glGenFramebuffers(1, &frameBuffer);
    glBindFramebuffer(GL_FRAMEBUFFER, frameBuffer);

    // Create texture B from the pixel buffer and mount it as the FBO's color attachment.
    GLuint targetTextureID = [self convertRGBPixelBufferToTexture:pixelBuffer];
    glBindTexture(GL_TEXTURE_2D, targetTextureID);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, textureSize.width, textureSize.height, 0,
                 GL_BGRA, GL_UNSIGNED_BYTE, NULL);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, targetTextureID, 0);

    // Draw texture A into the FBO, which updates the pixel buffer's memory.
    glViewport(0, 0, textureSize.width, textureSize.height);
    [self renderTextureWithId:textureId];

    glDeleteFramebuffers(1, &frameBuffer);
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
    glFlush();
    return pixelBuffer;
}
```
Activate and bind the filter's output texture, upload the vertex and texture coordinates to the vertex shader, and draw:
```objectivec
- (void)renderTextureWithId:(GLuint)textureId {
    [GPUImageContext setActiveShaderProgram:self->normalProgram];
    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, textureId);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    glUniform1i(self->inputTextureUniform, 0);

    // Full-screen quad, drawn as a triangle strip.
    static const GLfloat squareVertices[] = {
        -1.0f, -1.0f,
         1.0f, -1.0f,
        -1.0f,  1.0f,
         1.0f,  1.0f,
    };

    glVertexAttribPointer(self->positionAttribute, 2, GL_FLOAT, GL_FALSE, 0, squareVertices);
    glVertexAttribPointer(self->textureCoordinateAttribute, 2, GL_FLOAT, GL_FALSE, 0,
                          [GPUImageFilter textureCoordinatesForRotation:kGPUImageNoRotation]);
    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
}
```
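`normalProgram` and the attribute/uniform handles used above are assumed to be prepared in advance. A sketch following GPUImage's usual GLProgram setup, using its built-in passthrough shaders:

```objectivec
// Build a passthrough shader program once on the video-processing queue.
runSynchronouslyOnVideoProcessingQueue(^{
    [GPUImageContext useImageProcessingContext];
    self->normalProgram = [[GPUImageContext sharedImageProcessingContext]
            programForVertexShaderString:kGPUImageVertexShaderString
                    fragmentShaderString:kGPUImagePassthroughFragmentShaderString];
    if (!self->normalProgram.initialized) {
        [self->normalProgram addAttribute:@"position"];
        [self->normalProgram addAttribute:@"inputTextureCoordinate"];
        if (![self->normalProgram link]) {
            NSLog(@"Program link failed: %@", [self->normalProgram programLog]);
        }
    }
    self->positionAttribute = [self->normalProgram attributeIndex:@"position"];
    self->textureCoordinateAttribute = [self->normalProgram attributeIndex:@"inputTextureCoordinate"];
    self->inputTextureUniform = [self->normalProgram uniformIndex:@"inputImageTexture"];
    glEnableVertexAttribArray(self->positionAttribute);
    glEnableVertexAttribArray(self->textureCoordinateAttribute);
});
```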
IV. Summary
Once you understand these characteristics of CVPixelBufferRef, you can design a modular, pluggable filter component for a short-video SDK architecture, one that can be integrated quickly into video capture, editing, transcoding, and similar scenarios.
The demo also provides two simple examples:
1. Adding a filter during video capture: take the CVPixelBufferRef from the GPUImageVideoCamera delegate method and run it through the filter.
```objectivec
#pragma mark - GPUImageVideoCameraDelegate
- (void)willOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
{
    CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    [[HYRenderManager shareManager] renderItemsToPixelBuffer:pixelBuffer];
}
```
![](https://img.haomeiwen.com/i4349969/9d991aa575f01d2b.png)
2. Adding a filter during video playback: while AVPlayer is playing, take the CVPixelBufferRef from a method of the class implementing the AVVideoCompositing protocol and run it through the filter.
```objectivec
#pragma mark - EditorCompositionInstructionDelegete
- (CVPixelBufferRef)renderPixelBuffer:(CVPixelBufferRef)pixelBuffer
{
    return [[HYRenderManager shareManager] renderItemsToPixelBuffer:pixelBuffer];
}
```
![](https://img.haomeiwen.com/i4349969/37b46c905beab939.png)
Source Code
GitHub: Demo link
Feel free to leave a comment or message me to discuss, and a Star is appreciated. Thanks!