
iOS Live-Streaming App Development: the GPUImageBeautifyFilter Beauty Filter

2016-12-05  伍骁辛

With the explosion of live-streaming apps, demand for real-time beauty filters keeps growing. This article covers the principle and approach behind implementing one; for the underlying workings of GPUImage itself, see the GPUImage write-ups referenced at the end. The focus here is the implementation of GPUImageBeautifyFilter. A "beauty" effect is simply several filters combined: it is itself a filter, one that chains together the individual effects you need, such as skin smoothing, whitening, higher saturation, and brightening.

GPUImageBeautifyFilter

GPUImageBeautifyFilter is a real-time beauty filter built on top of GPUImage. It is composed of a GPUImageBilateralFilter, a GPUImageCannyEdgeDetectionFilter, a GPUImageCombinationFilter, and a GPUImageHSBFilter.

GPUImageBeautifyFilter.h declares these filter objects:

@class GPUImageCombinationFilter;

@interface GPUImageBeautifyFilter : GPUImageFilterGroup {
    GPUImageBilateralFilter *bilateralFilter;           // skin smoothing (bilateral blur)
    GPUImageCannyEdgeDetectionFilter *cannyEdgeFilter;  // edge detection, keeps edges from being blurred
    GPUImageCombinationFilter *combinationFilter;       // blends the blurred, edge and original images
    GPUImageHSBFilter *hsbFilter;                       // brightness / saturation adjustment
}

The rendering process breaks down into three stages:
1. Prepare the textures
2. Render the textures
3. Display the processed texture

Stage 1: Preparing the textures (the classes involved)

GPUImageVideoCamera
GPUImageBeautifyFilter
GPUImageBilateralFilter
GPUImageCombinationFilter
GPUImageCannyEdgeDetectionFilter

Preparation happens in three parts, one per input texture.

First texture:
1. GPUImageVideoCamera captures a camera frame and calls newFrameReadyAtTime:atIndex: to notify GPUImageBeautifyFilter;
2. GPUImageBeautifyFilter calls newFrameReadyAtTime:atIndex: to notify GPUImageBilateralFilter that its input texture is ready;

Second texture:
3. After GPUImageBilateralFilter renders its image, informTargetsAboutNewFrameAtTime() calls setInputFramebufferForTarget:atIndex: to set the rendered image as an input texture of GPUImageCombinationFilter and to notify GPUImageCombinationFilter that the texture has been rendered;
4. GPUImageBeautifyFilter calls newFrameReadyAtTime:atIndex: to notify GPUImageCannyEdgeDetectionFilter that its input texture is ready;

Third texture:
5. After GPUImageCannyEdgeDetectionFilter renders its image, it sets the result as an input texture of GPUImageCombinationFilter;
6. GPUImageBeautifyFilter calls newFrameReadyAtTime:atIndex: to notify GPUImageCombinationFilter that its input texture (the original camera frame) is ready;
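
At this point GPUImageCombinationFilter, a GPUImageThreeInputFilter, has its three inputs assigned as follows. The index mapping below is inferred from the order of the addTarget: calls in -init and from the textureIndex = 2 override shown later, and it matches the sampler names in the fragment shader:

    index 0 (inputImageTexture)  <- GPUImageBilateralFilter output          (first target added)
    index 1 (inputImageTexture2) <- GPUImageCannyEdgeDetectionFilter output (second target added)
    index 2 (inputImageTexture3) <- the original camera frame               (forced to index 2 by GPUImageBeautifyFilter)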

(Figure: texture preparation flow)

Stage 2: Rendering the textures

7. Check the input-texture count.
GPUImageCombinationFilter checks whether all three of its input textures are ready. Once they are, it calls GPUImageThreeInputFilter's render method renderToTextureWithVertices:textureCoordinates:. After the image is rendered, it sets the result as the input texture of GPUImageHSBFilter and notifies GPUImageHSBFilter that the texture has been rendered;

8. Render the texture.
GPUImageHSBFilter calls renderToTextureWithVertices:textureCoordinates: to render the image, then sets the result as the input texture of GPUImageView and notifies GPUImageView that its input texture has been rendered;

Stage 3: Displaying the texture

9. GPUImageView draws its input texture into its own framebuffer and then calls [self.context presentRenderbuffer:GL_RENDERBUFFER]; to present the result in the UIView.

GPUImageBeautifyFilter.m:

@interface GPUImageCombinationFilter : GPUImageThreeInputFilter
{
    GLint smoothDegreeUniform;
}

@property (nonatomic, assign) CGFloat intensity;

@end

NSString *const kGPUImageBeautifyFragmentShaderString = SHADER_STRING
(
varying highp vec2 textureCoordinate;
varying highp vec2 textureCoordinate2;
varying highp vec2 textureCoordinate3;

uniform sampler2D inputImageTexture;
uniform sampler2D inputImageTexture2;
uniform sampler2D inputImageTexture3;
uniform mediump float smoothDegree;

void main()
{
    // inputImageTexture  : output of the bilateral (smoothing) filter
    // inputImageTexture2 : output of the Canny edge-detection filter
    // inputImageTexture3 : the original camera frame
    highp vec4 bilateral = texture2D(inputImageTexture, textureCoordinate);
    highp vec4 canny = texture2D(inputImageTexture2, textureCoordinate2);
    highp vec4 origin = texture2D(inputImageTexture3, textureCoordinate3);
    highp vec4 smooth;
    lowp float r = origin.r;
    lowp float g = origin.g;
    lowp float b = origin.b;
    // Smooth only pixels that are not on an edge (canny.r < 0.2) and that pass a skin-tone test.
    if (canny.r < 0.2 && r > 0.3725 && g > 0.1568 && b > 0.0784 && r > b && (max(max(r, g), b) - min(min(r, g), b)) > 0.0588 && abs(r - g) > 0.0588) {
        smooth = (1.0 - smoothDegree) * (origin - bilateral) + bilateral;
    }
    else {
        smooth = origin;
    }
    // Lift brightness with a logarithmic tone curve (1.0 still maps to 1.0).
    smooth.r = log(1.0 + 0.2 * smooth.r) / log(1.2);
    smooth.g = log(1.0 + 0.2 * smooth.g) / log(1.2);
    smooth.b = log(1.0 + 0.2 * smooth.b) / log(1.2);
    gl_FragColor = smooth;
}
);
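
The magic numbers in the skin-tone test are easier to read on a 0-255 scale. The shader itself does not say so, but multiplying each threshold by 255 recovers a classic RGB skin-detection rule:

    0.3725 * 255 ≈ 95  ->  R > 95
    0.1568 * 255 ≈ 40  ->  G > 40
    0.0784 * 255 ≈ 20  ->  B > 20
    0.0588 * 255 ≈ 15  ->  max(R,G,B) - min(R,G,B) > 15 and |R - G| > 15 (with R > B)

The final log(1.0 + 0.2 * x) / log(1.2) curve is a gentle brightening: it maps 1.0 to 1.0 but lifts mid-tones (0.5 becomes roughly 0.52), which produces the slight whitening effect.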

@implementation GPUImageCombinationFilter

- (id)init {
    if (self = [super initWithFragmentShaderFromString:kGPUImageBeautifyFragmentShaderString]) {
        smoothDegreeUniform = [filterProgram uniformIndex:@"smoothDegree"];
    }
    self.intensity = 0.5;
    return self;
}

- (void)setIntensity:(CGFloat)intensity {
    _intensity = intensity;
    [self setFloat:intensity forUniform:smoothDegreeUniform program:filterProgram];
}

@end

@implementation GPUImageBeautifyFilter

- (id)init;
{
    if (!(self = [super init]))
    {
        return nil;
    }

    // First pass: face smoothing filter
    bilateralFilter = [[GPUImageBilateralFilter alloc] init];
    bilateralFilter.distanceNormalizationFactor = 4.0;
    [self addFilter:bilateralFilter];

    // Second pass: edge detection
    cannyEdgeFilter = [[GPUImageCannyEdgeDetectionFilter alloc] init];
    [self addFilter:cannyEdgeFilter];

    // Third pass: combine the bilateral result, the edge map and the original frame
    combinationFilter = [[GPUImageCombinationFilter alloc] init];
    [self addFilter:combinationFilter];

    // Adjust HSB (brightness and saturation)
    hsbFilter = [[GPUImageHSBFilter alloc] init];
    [hsbFilter adjustBrightness:1.1];
    [hsbFilter adjustSaturation:1.1];

    [bilateralFilter addTarget:combinationFilter];
    [cannyEdgeFilter addTarget:combinationFilter];

    [combinationFilter addTarget:hsbFilter];

    self.initialFilters = [NSArray arrayWithObjects:bilateralFilter, cannyEdgeFilter, combinationFilter, nil];
    self.terminalFilter = hsbFilter;

    return self;
}
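
The smoothing strength is controlled by combinationFilter.intensity (0.0 leaves the frame untouched, 1.0 outputs the fully blurred bilateral result for skin pixels), which -init sets to 0.5, but GPUImageBeautifyFilter does not expose it. If you want to tune it at runtime, a minimal sketch is a forwarding setter; the method below is a hypothetical addition, not part of the original class:

- (void)setSmoothIntensity:(CGFloat)smoothIntensity {
    // Hypothetical convenience setter, not in the original GPUImageBeautifyFilter.
    // 0.0 = original image, 1.0 = fully smoothed skin (see the blend formula in the shader).
    combinationFilter.intensity = smoothIntensity;
}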

#pragma mark - GPUImageInput protocol

Rendering the textures: notify the group's initial filters that a new frame is ready.

- (void)newFrameReadyAtTime:(CMTime)frameTime atIndex:(NSInteger)textureIndex;
{
    for (GPUImageOutput<GPUImageInput> *currentFilter in self.initialFilters)
    {
        if (currentFilter != self.inputFilterToIgnoreForUpdates)
        {
            if (currentFilter == combinationFilter) {
                // The camera frame is the combination filter's third input (inputImageTexture3).
                textureIndex = 2;
            }
            [currentFilter newFrameReadyAtTime:frameTime atIndex:textureIndex];
        }
    }
}

Setting the input texture (framebuffer) used for rendering:

- (void)setInputFramebuffer:(GPUImageFramebuffer *)newInputFramebuffer atIndex:(NSInteger)textureIndex;
{
    for (GPUImageOutput<GPUImageInput> *currentFilter in self.initialFilters)
    {
        if (currentFilter == combinationFilter) {
            // Route the original camera frame to texture slot 2 of the combination filter.
            textureIndex = 2;
        }
        [currentFilter setInputFramebuffer:newInputFramebuffer atIndex:textureIndex];
    }
}

GPUImage integration steps
Approach 1: build the beauty effect from a custom filter group

  1. Import GPUImage with CocoaPods;
  2. Create the video source, GPUImageVideoCamera;
  3. Create the final destination, GPUImageView;
  4. Create a GPUImageFilterGroup that combines a brightness filter (GPUImageBrightnessFilter) and a bilateral filter (GPUImageBilateralFilter) to get the beauty effect;
  5. Set up the filter-group chain;
  6. Set up the GPUImage processing chain: data source -> filters -> on-screen view;
  7. Start capturing video.

- (void)viewDidLoad {
    [super viewDidLoad];
    // Do any additional setup after loading the view.
    self.view.backgroundColor = [UIColor whiteColor];
    self.title = @"GPUImage美颜";

    [self initBottomView];

    // 1. Create the video camera.
    // sessionPreset: capture resolution; AVCaptureSessionPresetHigh adapts to the device.
    // cameraPosition: which camera to use.
    // Prefer AVCaptureSessionPresetHigh; forcing a resolution the device does not support raises an error.
    GPUImageVideoCamera *videoCamera = [[GPUImageVideoCamera alloc] initWithSessionPreset:AVCaptureSessionPresetHigh cameraPosition:AVCaptureDevicePositionFront];

    // 2. Set the orientation of the output video.
    videoCamera.outputImageOrientation = UIInterfaceOrientationPortrait;
    _videoCamera = videoCamera;

    // 3. Create the GPUImageView that displays the video.
    GPUImageView *captureVideoPreview = [[GPUImageView alloc] initWithFrame:self.view.bounds];
    [self.view insertSubview:captureVideoPreview atIndex:0];

    // 4. Create the smoothing + whitening filter group.
    GPUImageFilterGroup *groupFilter = [[GPUImageFilterGroup alloc] init];

    // 5. Smoothing (bilateral) filter.
    GPUImageBilateralFilter *bilateralFilter = [[GPUImageBilateralFilter alloc] init];
    [groupFilter addFilter:bilateralFilter];
    _bilateralFilter = bilateralFilter;

    // 6. Whitening (brightness) filter.
    GPUImageBrightnessFilter *brightnessFilter = [[GPUImageBrightnessFilter alloc] init];
    [groupFilter addFilter:brightnessFilter];
    _brightnessFilter = brightnessFilter;

    // 7. Wire up the filter-group chain.
    [bilateralFilter addTarget:brightnessFilter];
    [groupFilter setInitialFilters:@[bilateralFilter]];
    groupFilter.terminalFilter = brightnessFilter;

    // 8. Set up the GPUImage processing chain: data source -> filters -> on-screen view.
    [videoCamera addTarget:groupFilter];
    [groupFilter addTarget:captureVideoPreview];

    // 9. Call startCameraCapture; GPUImage renders the captured frames into the GPUImageView for display.
    [videoCamera startCameraCapture];
}

Notes:

  1. GPUImageVideoCamera must be kept strongly referenced, otherwise it is deallocated while capture is in progress;
  2. startCameraCapture must be called, otherwise no frames are rendered into the GPUImageView;
  3. The smaller GPUImageBilateralFilter's distanceNormalizationFactor, the stronger the smoothing; keep its value above 1 (see the slider sketch below).
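
A minimal sketch of the slider callbacks that the bar created by initBottomView could use; the selector names, slider ranges and value mappings are assumptions for illustration, not part of the original demo:

// Hypothetical slider callbacks; wire them up in initBottomView.
- (void)smoothSliderChanged:(UISlider *)slider {   // slider value assumed to be 0.0...1.0
    // Smaller distanceNormalizationFactor => stronger smoothing; keep it above 1.
    _bilateralFilter.distanceNormalizationFactor = 10.0 - slider.value * 8.0;   // maps 0...1 to 10.0...2.0
}

- (void)brightnessSliderChanged:(UISlider *)slider {
    // GPUImageBrightnessFilter.brightness ranges from -1.0 to 1.0; 0.0 leaves the image unchanged.
    _brightnessFilter.brightness = slider.value * 0.5;   // gentle whitening
}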

Approach 2: use the GPUImageBeautifyFilter beauty filter

  1. Import GPUImage with CocoaPods;
  2. Add the GPUImageBeautifyFilter files to the project;
  3. Create the video source, GPUImageVideoCamera;
  4. Create the final destination, GPUImageView;
  5. Create the beauty filter, GPUImageBeautifyFilter;
  6. Set up the GPUImage processing chain: data source -> filter -> on-screen view.

- (void)viewDidLoad {
    [super viewDidLoad];
    // Do any additional setup after loading the view.
    self.view.backgroundColor = [UIColor whiteColor];
    self.title = @"Beautify美颜";

    UISwitch *switcher = [[UISwitch alloc] initWithFrame:CGRectMake(140, 80, 70, 30)];
    [switcher addTarget:self action:@selector(changeBeautyFilter:) forControlEvents:UIControlEventValueChanged];
    [self.view addSubview:switcher];

    // 1. Create the video camera.
    // sessionPreset: capture resolution; AVCaptureSessionPresetHigh adapts to the device.
    // cameraPosition: which camera to use.
    // Prefer AVCaptureSessionPresetHigh; forcing a resolution the device does not support raises an error.
    GPUImageVideoCamera *videoCamera = [[GPUImageVideoCamera alloc] initWithSessionPreset:AVCaptureSessionPresetHigh cameraPosition:AVCaptureDevicePositionFront];

    // 2. Set the orientation of the output video.
    videoCamera.outputImageOrientation = UIInterfaceOrientationPortrait;
    _videoCamera = videoCamera;

    // 3. Create the GPUImageView that displays the video.
    GPUImageView *captureVideoPreview = [[GPUImageView alloc] initWithFrame:self.view.bounds];
    [self.view insertSubview:captureVideoPreview atIndex:0];
    _captureVideoPreview = captureVideoPreview;

    // 4. Set up the processing chain (no filter yet: camera -> preview).
    [_videoCamera addTarget:_captureVideoPreview];

    // 5. Call startCameraCapture; GPUImage renders the captured frames into the GPUImageView for display.
    [videoCamera startCameraCapture];
}

When the beauty switch is toggled, the processing chain has to be rewired:

// Remove the existing processing chain.
[_videoCamera removeAllTargets];

// Create the beauty filter.
GPUImageBeautifyFilter *beautifyFilter = [[GPUImageBeautifyFilter alloc] init];

// Set up the GPUImage processing chain: data source => filter => on-screen view.
[_videoCamera addTarget:beautifyFilter];
[beautifyFilter addTarget:_captureVideoPreview];
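
Putting it together, the switch handler might look like the sketch below. The original post only shows the branch that turns the filter on, so the else branch here is an assumption:

- (void)changeBeautyFilter:(UISwitch *)sender {
    // Rebuild the processing chain each time the switch is toggled.
    [_videoCamera removeAllTargets];

    if (sender.isOn) {
        // Camera -> beauty filter -> preview.
        GPUImageBeautifyFilter *beautifyFilter = [[GPUImageBeautifyFilter alloc] init];
        [_videoCamera addTarget:beautifyFilter];
        [beautifyFilter addTarget:_captureVideoPreview];
    } else {
        // Camera -> preview, no filtering.
        [_videoCamera addTarget:_captureVideoPreview];
    }
}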

Demo download

落影lying-in's GPUImage详细解析 series gives a detailed account of the principles behind implementing beauty filters with GPUImage and covers a lot of image-processing background; it was very helpful in writing this article.

References
http://www.jianshu.com/p/2ce9b63ecfef
http://www.jianshu.com/p/4646894245ba
