
Gaussian Blur Effects for Images

2016-09-18  GentlePrince

In iOS development you sometimes need to blur an image, and then remove the blur again in response to a tap or a pull-down gesture. There are three common ways to implement a Gaussian blur: Core Image, GPUImage (a third-party open-source library), and vImage. I have not used GPUImage much, so this article covers the other two: Core Image and vImage.

Core Image

The Core Image API has been available since iOS 5.0 and lives in CoreImage.framework. Core Image ships a large number of filters on both iOS and OS X: more than 120 on OS X and more than 90 on iOS. First we extend UIImage with class methods; a sketch of the category interface is shown below, followed by the Core Image implementation:

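A minimal sketch of the category interface the two class methods in this post could live in (the file name and category name UIImage+Blur are my assumption, not something specified in the original post):

// UIImage+Blur.h (hypothetical category name)
#import <UIKit/UIKit.h>

@interface UIImage (Blur)
// Core Image based Gaussian blur; blur is used as the CIGaussianBlur inputRadius
+ (UIImage *)coreBlurImage:(UIImage *)image withBlurNumber:(CGFloat)blur;
// vImage based box-convolution blur; blur is expected in the 0–1 range
+ (UIImage *)boxblurImage:(UIImage *)image withBlurNumber:(CGFloat)blur;
@end

The Core Image implementation: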
+ (UIImage *)coreBlurImage:(UIImage *)image withBlurNumber:(CGFloat)blur
{
    CIContext *context = [CIContext contextWithOptions:nil];
    CIImage *inputImage = [CIImage imageWithCGImage:image.CGImage];
    // Create the Gaussian blur filter and set its input image and radius
    CIFilter *filter = [CIFilter filterWithName:@"CIGaussianBlur"];
    [filter setValue:inputImage forKey:kCIInputImageKey];
    [filter setValue:@(blur) forKey:@"inputRadius"];
    // Render the blurred output image
    CIImage *result = [filter valueForKey:kCIOutputImageKey];
    CGImageRef outImage = [context createCGImage:result fromRect:[result extent]];
    UIImage *blurImage = [UIImage imageWithCGImage:outImage];
    CGImageRelease(outImage);
    return blurImage;
}

Here the filter is specified by its name, CIGaussianBlur, and the blur strength is controlled through the inputRadius parameter.
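
For symmetry with the vImage call shown later, here is a minimal usage sketch of the Core Image method (the imageView and image properties and the radius value of 10 are illustrative assumptions, not from the original post):

// Hypothetical usage: blur self.image with a radius of 10 and display it
self.imageView.image = [UIImage coreBlurImage:self.image withBlurNumber:10.0];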


vImage

vImage is part of Accelerate.framework, so you need to import the Accelerate header (#import <Accelerate/Accelerate.h>). Accelerate is mainly a library for digital signal processing and for the vector and matrix math used in image processing. An image can be thought of as data made up of vectors or matrices, so the efficient math APIs that Accelerate provides make it convenient to apply all kinds of processing to images. The blur algorithm here uses the vImageBoxConvolve_ARGB8888 function.

// Requires #import <Accelerate/Accelerate.h>
+ (UIImage *)boxblurImage:(UIImage *)image withBlurNumber:(CGFloat)blur {
    if (blur < 0.f || blur > 1.f) {
        blur = 0.5f;
    }
    // Map the 0–1 blur value to an odd kernel size (vImageBoxConvolve requires odd dimensions)
    int boxSize = (int)(blur * 40);
    boxSize = boxSize - (boxSize % 2) + 1;

    CGImageRef img = image.CGImage;

    vImage_Buffer inBuffer, outBuffer;
    vImage_Error error;

    void *pixelBuffer;
    // Get the raw pixel data from the CGImage
    CGDataProviderRef inProvider = CGImageGetDataProvider(img);
    CFDataRef inBitmapData = CGDataProviderCopyData(inProvider);
    // Describe the input buffer using the CGImage's geometry
    inBuffer.width = CGImageGetWidth(img);
    inBuffer.height = CGImageGetHeight(img);
    inBuffer.rowBytes = CGImageGetBytesPerRow(img);
    inBuffer.data = (void *)CFDataGetBytePtr(inBitmapData);

    // Allocate an output buffer with the same geometry
    pixelBuffer = malloc(CGImageGetBytesPerRow(img) * CGImageGetHeight(img));
    if (pixelBuffer == NULL) {
        NSLog(@"No pixelbuffer");
    }

    outBuffer.data = pixelBuffer;
    outBuffer.width = CGImageGetWidth(img);
    outBuffer.height = CGImageGetHeight(img);
    outBuffer.rowBytes = CGImageGetBytesPerRow(img);

    // Apply the box convolution (this performs the blur)
    error = vImageBoxConvolve_ARGB8888(&inBuffer, &outBuffer, NULL, 0, 0, boxSize, boxSize, NULL, kvImageEdgeExtend);
    if (error) {
        NSLog(@"error from convolution %ld", error);
    }

    // Wrap the blurred pixels back into a CGImage / UIImage
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef ctx = CGBitmapContextCreate(outBuffer.data,
                                             outBuffer.width,
                                             outBuffer.height,
                                             8,
                                             outBuffer.rowBytes,
                                             colorSpace,
                                             kCGImageAlphaNoneSkipLast);
    CGImageRef imageRef = CGBitmapContextCreateImage(ctx);
    UIImage *returnImage = [UIImage imageWithCGImage:imageRef];

    // Clean up
    CGContextRelease(ctx);
    CGColorSpaceRelease(colorSpace);
    free(pixelBuffer);
    CFRelease(inBitmapData);
    CGImageRelease(imageRef);

    return returnImage;
}

Calling the blur on an image:

self.imageView = [[UIImageView alloc] initWithFrame:CGRectMake(0, 300, SCREENWIDTH, 100)];
self.imageView.contentMode = UIViewContentModeScaleAspectFill;
self.imageView.image = [UIImage boxblurImage:self.image withBlurNumber:0.5];
self.imageView.clipsToBounds = YES;
[self.view addSubview:self.imageView];
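
Blurring a full-size image is fairly expensive, so if the call above ever stalls the UI, one option is to run it off the main thread. Here is a minimal GCD sketch, reusing the same (assumed) imageView and image properties from the snippet above:

// Run the blur on a background queue, then update the UI back on the main queue
dispatch_async(dispatch_get_global_queue(QOS_CLASS_USER_INITIATED, 0), ^{
    UIImage *blurred = [UIImage boxblurImage:self.image withBlurNumber:0.5];
    dispatch_async(dispatch_get_main_queue(), ^{
        self.imageView.image = blurred;
    });
});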

Reference: http://t.cn/RcCyU01
