GPUImage Source Code Reading (4)
Overview
GPUImage is a well-known open-source image processing library that lets you apply GPU-accelerated filters and other effects to images, video, and the camera. Compared with the Core Image framework, GPUImage lets you build custom filters on top of the interfaces it provides. Project page: https://github.com/BradLarson/GPUImage
This article reads through the source of three classes in the GPUImage framework: GPUImagePicture, GPUImageView, and GPUImageUIElement. They cover image loading, image display, and UI rendering on iOS, and almost any use of GPUImage that filters and displays an image involves them. The classes covered:
- GPUImagePicture
- GPUImageView
- GPUImageUIElement
Results
(The rendered results are shown in GPUImagePicture.png and GPUImageUIElement.png.)
GPUImagePicture
As the name suggests, GPUImagePicture is the GPUImage class for working with still images; its main job is to turn a UIImage or CGImage into a texture object. GPUImagePicture inherits from GPUImageOutput, so it can produce output, but since it does not implement the GPUImageInput protocol it cannot accept input. It therefore usually sits at the head of a filter chain as the source.
- Straight alpha versus premultiplied alpha
With straight alpha, an RGBA color stores its opacity only in the alpha channel. For example, red at 60% opacity is written as (255, 0, 0, 255 * 0.6) = (255, 0, 0, 153), where 153 (= 255 * 0.6) says the color should be 60% opaque.
With premultiplied alpha, each color channel is also multiplied by the alpha value: (255 * 0.6, 0 * 0.6, 0 * 0.6, 255 * 0.6) = (153, 0, 0, 153).
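To make the conversion concrete, here is a minimal sketch of the round trip for one 8-bit channel (plain C helpers written for this article, not part of GPUImage); note that the un-premultiply direction must guard against zero alpha and is lossy at low alpha values:
#include <stdint.h>
#include <math.h>
// Straight -> premultiplied: scale the channel by the opacity.
static inline uint8_t premultiply(uint8_t channel, uint8_t alpha) {
    return (uint8_t)round(channel * (alpha / 255.0));
}
// Premultiplied -> straight: divide the opacity back out.
static inline uint8_t unpremultiply(uint8_t channel, uint8_t alpha) {
    if (alpha == 0) return 0;                 // avoid dividing by zero
    double v = round(channel / (alpha / 255.0));
    return (uint8_t)fmin(v, 255.0);           // clamp rounding overflow
}
// premultiply(255, 153) == 153: straight (255,0,0,153) -> premultiplied (153,0,0,153)
// unpremultiply(153, 153) == 255: and back again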
- Initializers. There are quite a few of them because the class exposes several options; the upside is flexibility when you need custom behavior.
// Initialize from an image URL
- (id)initWithURL:(NSURL *)url;
// Initialize from a UIImage or CGImage
- (id)initWithImage:(UIImage *)newImageSource;
- (id)initWithCGImage:(CGImageRef)newImageSource;
// Initialize from a UIImage or CGImage, plus whether to smoothly scale the output
- (id)initWithImage:(UIImage *)newImageSource smoothlyScaleOutput:(BOOL)smoothlyScaleOutput;
- (id)initWithCGImage:(CGImageRef)newImageSource smoothlyScaleOutput:(BOOL)smoothlyScaleOutput;
// Initialize from a UIImage or CGImage, plus whether to remove premultiplied alpha
- (id)initWithImage:(UIImage *)newImageSource removePremultiplication:(BOOL)removePremultiplication;
- (id)initWithCGImage:(CGImageRef)newImageSource removePremultiplication:(BOOL)removePremultiplication;
// Initialize from a UIImage or CGImage, plus both of the options above
- (id)initWithImage:(UIImage *)newImageSource smoothlyScaleOutput:(BOOL)smoothlyScaleOutput removePremultiplication:(BOOL)removePremultiplication;
- (id)initWithCGImage:(CGImageRef)newImageSource smoothlyScaleOutput:(BOOL)smoothlyScaleOutput removePremultiplication:(BOOL)removePremultiplication;
Although there are many initializers, all of them funnel into the designated one below, so that is the only one worth reading in detail. The implementation is fairly involved, but the basic flow is:
1. Compute a usable width and height for the image (it must not exceed the maximum texture size OpenGL ES allows).
2. If smoothlyScaleOutput is set, round the dimensions up to the next power of two; after adjusting them the image must be redrawn.
3. If no redraw is needed, read the byte order, alpha layout, and related info straight from the CGImage.
4. If a redraw is needed, redraw the image with Core Graphics.
5. Depending on the removePremultiplication option, optionally undo the premultiplied alpha.
6. Create a texture cache object from the resulting bytes.
7. Depending on shouldSmoothlyScaleOutput, optionally generate mipmaps.
8. Finally, release the temporary resources.
- (id)initWithCGImage:(CGImageRef)newImageSource smoothlyScaleOutput:(BOOL)smoothlyScaleOutput removePremultiplication:(BOOL)removePremultiplication;
{
if (!(self = [super init]))
{
return nil;
}
hasProcessedImage = NO;
self.shouldSmoothlyScaleOutput = smoothlyScaleOutput;
imageUpdateSemaphore = dispatch_semaphore_create(0);
dispatch_semaphore_signal(imageUpdateSemaphore);
// TODO: Dispatch this whole thing asynchronously to move image loading off main thread
CGFloat widthOfImage = CGImageGetWidth(newImageSource);
CGFloat heightOfImage = CGImageGetHeight(newImageSource);
// If passed an empty image reference, CGContextDrawImage will fail in future versions of the SDK.
NSAssert( widthOfImage > 0 && heightOfImage > 0, @"Passed image must not be empty - it should be at least 1px tall and wide");
pixelSizeOfImage = CGSizeMake(widthOfImage, heightOfImage);
CGSize pixelSizeToUseForTexture = pixelSizeOfImage;
BOOL shouldRedrawUsingCoreGraphics = NO;
// For now, deal with images larger than the maximum texture size by resizing to be within that limit
CGSize scaledImageSizeToFitOnGPU = [GPUImageContext sizeThatFitsWithinATextureForSize:pixelSizeOfImage];
if (!CGSizeEqualToSize(scaledImageSizeToFitOnGPU, pixelSizeOfImage))
{
pixelSizeOfImage = scaledImageSizeToFitOnGPU;
pixelSizeToUseForTexture = pixelSizeOfImage;
shouldRedrawUsingCoreGraphics = YES;
}
if (self.shouldSmoothlyScaleOutput)
{
// In order to use mipmaps, you need to provide power-of-two textures, so convert to the next largest power of two and stretch to fill
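// e.g. a 1000 x 600 image would be stretched up to 1024 x 1024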
CGFloat powerClosestToWidth = ceil(log2(pixelSizeOfImage.width));
CGFloat powerClosestToHeight = ceil(log2(pixelSizeOfImage.height));
pixelSizeToUseForTexture = CGSizeMake(pow(2.0, powerClosestToWidth), pow(2.0, powerClosestToHeight));
shouldRedrawUsingCoreGraphics = YES;
}
GLubyte *imageData = NULL;
CFDataRef dataFromImageDataProvider = NULL;
GLenum format = GL_BGRA;
BOOL isLitteEndian = YES;
BOOL alphaFirst = NO;
BOOL premultiplied = NO;
if (!shouldRedrawUsingCoreGraphics) {
/* Check that the memory layout is compatible with GL, as we cannot use glPixelStore to
* tell GL about the memory layout with GLES.
*/
if (CGImageGetBytesPerRow(newImageSource) != CGImageGetWidth(newImageSource) * 4 ||
CGImageGetBitsPerPixel(newImageSource) != 32 ||
CGImageGetBitsPerComponent(newImageSource) != 8)
{
shouldRedrawUsingCoreGraphics = YES;
} else {
/* Check that the bitmap pixel format is compatible with GL */
CGBitmapInfo bitmapInfo = CGImageGetBitmapInfo(newImageSource);
if ((bitmapInfo & kCGBitmapFloatComponents) != 0) {
/* We don't support float components for use directly in GL */
shouldRedrawUsingCoreGraphics = YES;
} else {
CGBitmapInfo byteOrderInfo = bitmapInfo & kCGBitmapByteOrderMask;
if (byteOrderInfo == kCGBitmapByteOrder32Little) {
/* Little endian, for alpha-first we can use this bitmap directly in GL */
CGImageAlphaInfo alphaInfo = bitmapInfo & kCGBitmapAlphaInfoMask;
if (alphaInfo != kCGImageAlphaPremultipliedFirst && alphaInfo != kCGImageAlphaFirst &&
alphaInfo != kCGImageAlphaNoneSkipFirst) {
shouldRedrawUsingCoreGraphics = YES;
}
} else if (byteOrderInfo == kCGBitmapByteOrderDefault || byteOrderInfo == kCGBitmapByteOrder32Big) {
isLitteEndian = NO;
/* Big endian, for alpha-last we can use this bitmap directly in GL */
CGImageAlphaInfo alphaInfo = bitmapInfo & kCGBitmapAlphaInfoMask;
if (alphaInfo != kCGImageAlphaPremultipliedLast && alphaInfo != kCGImageAlphaLast &&
alphaInfo != kCGImageAlphaNoneSkipLast) {
shouldRedrawUsingCoreGraphics = YES;
} else {
/* Can access directly using GL_RGBA pixel format */
premultiplied = alphaInfo == kCGImageAlphaPremultipliedLast;
alphaFirst = alphaInfo == kCGImageAlphaFirst || alphaInfo == kCGImageAlphaPremultipliedFirst;
format = GL_RGBA;
}
}
}
}
}
// CFAbsoluteTime elapsedTime, startTime = CFAbsoluteTimeGetCurrent();
if (shouldRedrawUsingCoreGraphics)
{
// For resized or incompatible image: redraw
imageData = (GLubyte *) calloc(1, (int)pixelSizeToUseForTexture.width * (int)pixelSizeToUseForTexture.height * 4);
CGColorSpaceRef genericRGBColorspace = CGColorSpaceCreateDeviceRGB();
CGContextRef imageContext = CGBitmapContextCreate(imageData, (size_t)pixelSizeToUseForTexture.width, (size_t)pixelSizeToUseForTexture.height, 8, (size_t)pixelSizeToUseForTexture.width * 4, genericRGBColorspace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
// CGContextSetBlendMode(imageContext, kCGBlendModeCopy); // From Technical Q&A QA1708: http://developer.apple.com/library/ios/#qa/qa1708/_index.html
CGContextDrawImage(imageContext, CGRectMake(0.0, 0.0, pixelSizeToUseForTexture.width, pixelSizeToUseForTexture.height), newImageSource);
CGContextRelease(imageContext);
CGColorSpaceRelease(genericRGBColorspace);
isLitteEndian = YES;
alphaFirst = YES;
premultiplied = YES;
}
else
{
// Access the raw image bytes directly
dataFromImageDataProvider = CGDataProviderCopyData(CGImageGetDataProvider(newImageSource));
imageData = (GLubyte *)CFDataGetBytePtr(dataFromImageDataProvider);
}
if (removePremultiplication && premultiplied) {
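// Undo the premultiplication in place: decode each pixel to host byte order,
// divide RGB by alpha, then re-encode in the original byte order.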
NSUInteger totalNumberOfPixels = round(pixelSizeToUseForTexture.width * pixelSizeToUseForTexture.height);
uint32_t *pixelP = (uint32_t *)imageData;
uint32_t pixel;
CGFloat srcR, srcG, srcB, srcA;
for (NSUInteger idx=0; idx<totalNumberOfPixels; idx++, pixelP++) {
pixel = isLitteEndian ? CFSwapInt32LittleToHost(*pixelP) : CFSwapInt32BigToHost(*pixelP);
if (alphaFirst) {
srcA = (CGFloat)((pixel & 0xff000000) >> 24) / 255.0f;
}
else {
srcA = (CGFloat)(pixel & 0x000000ff) / 255.0f;
pixel >>= 8;
}
srcR = (CGFloat)((pixel & 0x00ff0000) >> 16) / 255.0f;
srcG = (CGFloat)((pixel & 0x0000ff00) >> 8) / 255.0f;
srcB = (CGFloat)(pixel & 0x000000ff) / 255.0f;
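// caution: srcA == 0 would divide by zero below (for premultiplied input the RGB bytes are also 0 in that case)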
srcR /= srcA; srcG /= srcA; srcB /= srcA;
pixel = (uint32_t)(srcR * 255.0) << 16;
pixel |= (uint32_t)(srcG * 255.0) << 8;
pixel |= (uint32_t)(srcB * 255.0);
if (alphaFirst) {
pixel |= (uint32_t)(srcA * 255.0) << 24;
}
else {
pixel <<= 8;
pixel |= (uint32_t)(srcA * 255.0);
}
*pixelP = isLitteEndian ? CFSwapInt32HostToLittle(pixel) : CFSwapInt32HostToBig(pixel);
}
}
// elapsedTime = (CFAbsoluteTimeGetCurrent() - startTime) * 1000.0;
// NSLog(@"Core Graphics drawing time: %f", elapsedTime);
// CGFloat currentRedTotal = 0.0f, currentGreenTotal = 0.0f, currentBlueTotal = 0.0f, currentAlphaTotal = 0.0f;
// NSUInteger totalNumberOfPixels = round(pixelSizeToUseForTexture.width * pixelSizeToUseForTexture.height);
//
// for (NSUInteger currentPixel = 0; currentPixel < totalNumberOfPixels; currentPixel++)
// {
// currentBlueTotal += (CGFloat)imageData[(currentPixel * 4)] / 255.0f;
// currentGreenTotal += (CGFloat)imageData[(currentPixel * 4) + 1] / 255.0f;
// currentRedTotal += (CGFloat)imageData[(currentPixel * 4 + 2)] / 255.0f;
// currentAlphaTotal += (CGFloat)imageData[(currentPixel * 4) + 3] / 255.0f;
// }
//
// NSLog(@"Debug, average input image red: %f, green: %f, blue: %f, alpha: %f", currentRedTotal / (CGFloat)totalNumberOfPixels, currentGreenTotal / (CGFloat)totalNumberOfPixels, currentBlueTotal / (CGFloat)totalNumberOfPixels, currentAlphaTotal / (CGFloat)totalNumberOfPixels);
runSynchronouslyOnVideoProcessingQueue(^{
[GPUImageContext useImageProcessingContext];
outputFramebuffer = [[GPUImageContext sharedFramebufferCache] fetchFramebufferForSize:pixelSizeToUseForTexture onlyTexture:YES];
[outputFramebuffer disableReferenceCounting];
glBindTexture(GL_TEXTURE_2D, [outputFramebuffer texture]);
if (self.shouldSmoothlyScaleOutput)
{
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
}
// no need to use self.outputTextureOptions here since pictures need this texture format and type
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, (int)pixelSizeToUseForTexture.width, (int)pixelSizeToUseForTexture.height, 0, format, GL_UNSIGNED_BYTE, imageData);
if (self.shouldSmoothlyScaleOutput)
{
glGenerateMipmap(GL_TEXTURE_2D);
}
glBindTexture(GL_TEXTURE_2D, 0);
});
if (shouldRedrawUsingCoreGraphics)
{
free(imageData);
}
else
{
if (dataFromImageDataProvider)
{
CFRelease(dataFromImageDataProvider);
}
}
return self;
}
- Other methods. These are mainly concerned with actually processing the image.
// Image rendering
- (void)processImage;
- (CGSize)outputImageSize;
- (BOOL)processImageWithCompletionHandler:(void (^)(void))completion;
- (void)processImageUpToFilter:(GPUImageOutput<GPUImageInput> *)finalFilterInChain withCompletionHandler:(void (^)(UIImage *processedImage))block;
// Process the image
- (void)processImage;
{
[self processImageWithCompletionHandler:nil];
}
// The output image size. Because the image may have been resized (see the initializer), a dedicated getter is provided.
- (CGSize)outputImageSize;
{
return pixelSizeOfImage;
}
// Process the image, with an optional completion block
- (BOOL)processImageWithCompletionHandler:(void (^)(void))completion;
{
hasProcessedImage = YES;
// dispatch_semaphore_wait(imageUpdateSemaphore, DISPATCH_TIME_FOREVER);
// If the semaphore's counter is less than 1, return immediately. If it is >= 1, decrement it and continue.
if (dispatch_semaphore_wait(imageUpdateSemaphore, DISPATCH_TIME_NOW) != 0)
{
return NO;
}
// Hand the framebuffer to every target for processing
runAsynchronouslyOnVideoProcessingQueue(^{
for (id<GPUImageInput> currentTarget in targets)
{
NSInteger indexOfObject = [targets indexOfObject:currentTarget];
NSInteger textureIndexOfTarget = [[targetTextureIndices objectAtIndex:indexOfObject] integerValue];
[currentTarget setCurrentlyReceivingMonochromeInput:NO];
[currentTarget setInputSize:pixelSizeOfImage atIndex:textureIndexOfTarget];
[currentTarget setInputFramebuffer:outputFramebuffer atIndex:textureIndexOfTarget];
[currentTarget newFrameReadyAtTime:kCMTimeIndefinite atIndex:textureIndexOfTarget];
}
// Done; increment the semaphore
dispatch_semaphore_signal(imageUpdateSemaphore);
// Run the completion block if one was supplied
if (completion != nil) {
completion();
}
});
return YES;
}
// Have the final filter in the chain produce a UIImage
- (void)processImageUpToFilter:(GPUImageOutput<GPUImageInput> *)finalFilterInChain withCompletionHandler:(void (^)(UIImage *processedImage))block;
{
[finalFilterInChain useNextFrameForImageCapture];
[self processImageWithCompletionHandler:^{
UIImage *imageFromFilter = [finalFilterInChain imageFromCurrentFramebuffer];
block(imageFromFilter);
}];
}
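As a quick usage sketch (the filter choice and asset name here are illustrative, not from the original post), this is how a caller typically gets a processed UIImage out of a chain:
GPUImagePicture *source = [[GPUImagePicture alloc] initWithImage:[UIImage imageNamed:@"photo.jpg"]];
GPUImageSepiaFilter *sepia = [[GPUImageSepiaFilter alloc] init];
[source addTarget:sepia];
// useNextFrameForImageCapture is called internally by processImageUpToFilter:
[source processImageUpToFilter:sepia withCompletionHandler:^(UIImage *processedImage) {
    dispatch_async(dispatch_get_main_queue(), ^{
        // hand processedImage to UIKit here
    });
}];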
GPUImageView
As the name suggests, GPUImageView is the GPUImage class for displaying images on screen. It implements the GPUImageInput protocol, so it can accept a GPUImageFramebuffer as input; it therefore usually sits at the end of a filter chain, displaying the processed framebuffer. GPUImageView involves a fair amount of OpenGL ES, which will not be covered in depth here; if you are unfamiliar with it, see my OpenGL ES tutorial series.
- Initialization
- (id)initWithFrame:(CGRect)frame;
- (id)initWithCoder:(NSCoder *)coder;
Initialization does the following:
1. Set the relevant OpenGL ES properties.
2. Create the shader program.
3. Look up the locations of the attributes and uniforms.
4. Set the clear color.
5. Create the default framebuffer and renderbuffer used to display the image.
6. Adjust the vertex coordinates according to the fill mode.
- (id)initWithFrame:(CGRect)frame
{
if (!(self = [super initWithFrame:frame]))
{
return nil;
}
[self commonInit];
return self;
}
- (id)initWithCoder:(NSCoder *)coder
{
if (!(self = [super initWithCoder:coder]))
{
return nil;
}
[self commonInit];
return self;
}
- (void)commonInit;
{
// Set scaling to account for Retina display
if ([self respondsToSelector:@selector(setContentScaleFactor:)])
{
self.contentScaleFactor = [[UIScreen mainScreen] scale];
}
inputRotation = kGPUImageNoRotation;
self.opaque = YES;
self.hidden = NO;
CAEAGLLayer *eaglLayer = (CAEAGLLayer *)self.layer;
eaglLayer.opaque = YES;
eaglLayer.drawableProperties = [NSDictionary dictionaryWithObjectsAndKeys:[NSNumber numberWithBool:NO], kEAGLDrawablePropertyRetainedBacking, kEAGLColorFormatRGBA8, kEAGLDrawablePropertyColorFormat, nil];
self.enabled = YES;
runSynchronouslyOnVideoProcessingQueue(^{
[GPUImageContext useImageProcessingContext];
displayProgram = [[GPUImageContext sharedImageProcessingContext] programForVertexShaderString:kGPUImageVertexShaderString fragmentShaderString:kGPUImagePassthroughFragmentShaderString];
if (!displayProgram.initialized)
{
[displayProgram addAttribute:@"position"];
[displayProgram addAttribute:@"inputTextureCoordinate"];
if (![displayProgram link])
{
NSString *progLog = [displayProgram programLog];
NSLog(@"Program link log: %@", progLog);
NSString *fragLog = [displayProgram fragmentShaderLog];
NSLog(@"Fragment shader compile log: %@", fragLog);
NSString *vertLog = [displayProgram vertexShaderLog];
NSLog(@"Vertex shader compile log: %@", vertLog);
displayProgram = nil;
NSAssert(NO, @"Filter shader link failed");
}
}
displayPositionAttribute = [displayProgram attributeIndex:@"position"];
displayTextureCoordinateAttribute = [displayProgram attributeIndex:@"inputTextureCoordinate"];
displayInputTextureUniform = [displayProgram uniformIndex:@"inputImageTexture"]; // This does assume a name of "inputImageTexture" for the fragment shader
[GPUImageContext setActiveShaderProgram:displayProgram];
glEnableVertexAttribArray(displayPositionAttribute);
glEnableVertexAttribArray(displayTextureCoordinateAttribute);
[self setBackgroundColorRed:0.0 green:0.0 blue:0.0 alpha:1.0];
_fillMode = kGPUImageFillModePreserveAspectRatio;
[self createDisplayFramebuffer];
});
}
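One small usage note: _fillMode is backed by the public fillMode property, so callers can change it after initialization. For instance (an illustrative line, not from the original post):
// Crop to fill the view instead of letterboxing.
imageView.fillMode = kGPUImageFillModePreserveAspectRatioAndFill;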
- Method list
// Set the background color
- (void)setBackgroundColorRed:(GLfloat)redComponent green:(GLfloat)greenComponent blue:(GLfloat)blueComponent alpha:(GLfloat)alphaComponent;
// This method is deliberately left empty
- (void)setCurrentlyReceivingMonochromeInput:(BOOL)newValue;
Implementations. They are fairly simple; the following are the main ones to look at.
- (void)setBackgroundColorRed:(GLfloat)redComponent green:(GLfloat)greenComponent blue:(GLfloat)blueComponent alpha:(GLfloat)alphaComponent;
{
backgroundColorRed = redComponent;
backgroundColorGreen = greenComponent;
backgroundColorBlue = blueComponent;
backgroundColorAlpha = alphaComponent;
}
- (void)setCurrentlyReceivingMonochromeInput:(BOOL)newValue;
{
}
// Return the texture coordinates for a given rotation mode
+ (const GLfloat *)textureCoordinatesForRotation:(GPUImageRotationMode)rotationMode;
{
// static const GLfloat noRotationTextureCoordinates[] = {
// 0.0f, 0.0f,
// 1.0f, 0.0f,
// 0.0f, 1.0f,
// 1.0f, 1.0f,
// };
static const GLfloat noRotationTextureCoordinates[] = {
0.0f, 1.0f,
1.0f, 1.0f,
0.0f, 0.0f,
1.0f, 0.0f,
};
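/* Note: the t axis is flipped relative to the commented-out coordinates above,
   compensating for the vertical flip between OpenGL's bottom-left texture
   origin and UIKit's top-left screen origin. */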
static const GLfloat rotateRightTextureCoordinates[] = {
1.0f, 1.0f,
1.0f, 0.0f,
0.0f, 1.0f,
0.0f, 0.0f,
};
static const GLfloat rotateLeftTextureCoordinates[] = {
0.0f, 0.0f,
0.0f, 1.0f,
1.0f, 0.0f,
1.0f, 1.0f,
};
static const GLfloat verticalFlipTextureCoordinates[] = {
0.0f, 0.0f,
1.0f, 0.0f,
0.0f, 1.0f,
1.0f, 1.0f,
};
static const GLfloat horizontalFlipTextureCoordinates[] = {
1.0f, 1.0f,
0.0f, 1.0f,
1.0f, 0.0f,
0.0f, 0.0f,
};
static const GLfloat rotateRightVerticalFlipTextureCoordinates[] = {
1.0f, 0.0f,
1.0f, 1.0f,
0.0f, 0.0f,
0.0f, 1.0f,
};
static const GLfloat rotateRightHorizontalFlipTextureCoordinates[] = {
0.0f, 1.0f,
0.0f, 0.0f,
1.0f, 1.0f,
1.0f, 0.0f,
};
static const GLfloat rotate180TextureCoordinates[] = {
1.0f, 0.0f,
0.0f, 0.0f,
1.0f, 1.0f,
0.0f, 1.0f,
};
switch(rotationMode)
{
case kGPUImageNoRotation: return noRotationTextureCoordinates;
case kGPUImageRotateLeft: return rotateLeftTextureCoordinates;
case kGPUImageRotateRight: return rotateRightTextureCoordinates;
case kGPUImageFlipVertical: return verticalFlipTextureCoordinates;
case kGPUImageFlipHorizonal: return horizontalFlipTextureCoordinates;
case kGPUImageRotateRightFlipVertical: return rotateRightVerticalFlipTextureCoordinates;
case kGPUImageRotateRightFlipHorizontal: return rotateRightHorizontalFlipTextureCoordinates;
case kGPUImageRotate180: return rotate180TextureCoordinates;
}
}
// GPUImageInput protocol method; performs the OpenGL drawing and puts the result on screen
- (void)newFrameReadyAtTime:(CMTime)frameTime atIndex:(NSInteger)textureIndex;
{
runSynchronouslyOnVideoProcessingQueue(^{
[GPUImageContext setActiveShaderProgram:displayProgram];
[self setDisplayFramebuffer];
// Clear the screen
glClearColor(backgroundColorRed, backgroundColorGreen, backgroundColorBlue, backgroundColorAlpha);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glActiveTexture(GL_TEXTURE4);
glBindTexture(GL_TEXTURE_2D, [inputFramebufferForDisplay texture]);
glUniform1i(displayInputTextureUniform, 4);
glVertexAttribPointer(displayPositionAttribute, 2, GL_FLOAT, 0, 0, imageVertices);
glVertexAttribPointer(displayTextureCoordinateAttribute, 2, GL_FLOAT, 0, 0, [GPUImageView textureCoordinatesForRotation:inputRotation]);
// Draw
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
// Present
[self presentFramebuffer];
[inputFramebufferForDisplay unlock];
inputFramebufferForDisplay = nil;
});
}
// Present the framebuffer
- (void)presentFramebuffer;
{
glBindRenderbuffer(GL_RENDERBUFFER, displayRenderbuffer);
[[GPUImageContext sharedImageProcessingContext] presentBufferForDisplay];
}
GPUImageUIElement
Like GPUImagePicture, GPUImageUIElement can act as the source of a filter chain. The difference is that its data comes not from an image but from rendering a UIView or CALayer, similar to taking a snapshot of the view or layer. GPUImageUIElement inherits from GPUImageOutput, so it can produce output, and since it does not implement the GPUImageInput protocol it cannot accept input.
- Initialization
- (id)initWithView:(UIView *)inputView;
- (id)initWithLayer:(CALayer *)inputLayer;
Initialization takes a UIView or CALayer; during it, [layer renderInContext:imageContext] renders the layer, and the rendered bytes are turned into a texture object.
- (id)initWithView:(UIView *)inputView;
{
if (!(self = [super init]))
{
return nil;
}
view = inputView;
layer = inputView.layer;
previousLayerSizeInPixels = CGSizeZero;
[self update];
return self;
}
- (id)initWithLayer:(CALayer *)inputLayer;
{
if (!(self = [super init]))
{
return nil;
}
view = nil;
layer = inputLayer;
previousLayerSizeInPixels = CGSizeZero;
[self update];
return self;
}
- Other methods
- (CGSize)layerSizeInPixels;
- (void)update;
- (void)updateUsingCurrentTime;
- (void)updateWithTimestamp:(CMTime)frameTime;
These methods revolve around snapshotting the layer into a texture object and handing it to all targets.
// Size in pixels
- (CGSize)layerSizeInPixels;
{
CGSize pointSize = layer.bounds.size;
return CGSizeMake(layer.contentsScale * pointSize.width, layer.contentsScale * pointSize.height);
}
// Update
- (void)update;
{
[self updateWithTimestamp:kCMTimeIndefinite];
}
// Update using the current time
- (void)updateUsingCurrentTime;
{
if(CMTIME_IS_INVALID(time)) {
time = CMTimeMakeWithSeconds(0, 600);
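// 600 is a common video timescale: it is evenly divisible by 24, 25, and 30 fps frame durations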
actualTimeOfLastUpdate = [NSDate timeIntervalSinceReferenceDate];
} else {
NSTimeInterval now = [NSDate timeIntervalSinceReferenceDate];
NSTimeInterval diff = now - actualTimeOfLastUpdate;
time = CMTimeAdd(time, CMTimeMakeWithSeconds(diff, 600));
actualTimeOfLastUpdate = now;
}
[self updateWithTimestamp:time];
}
// Update with a given timestamp
- (void)updateWithTimestamp:(CMTime)frameTime;
{
[GPUImageContext useImageProcessingContext];
CGSize layerPixelSize = [self layerSizeInPixels];
GLubyte *imageData = (GLubyte *) calloc(1, (int)layerPixelSize.width * (int)layerPixelSize.height * 4);
CGColorSpaceRef genericRGBColorspace = CGColorSpaceCreateDeviceRGB();
CGContextRef imageContext = CGBitmapContextCreate(imageData, (int)layerPixelSize.width, (int)layerPixelSize.height, 8, (int)layerPixelSize.width * 4, genericRGBColorspace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
// CGContextRotateCTM(imageContext, M_PI_2);
CGContextTranslateCTM(imageContext, 0.0f, layerPixelSize.height);
CGContextScaleCTM(imageContext, layer.contentsScale, -layer.contentsScale);
// CGContextSetBlendMode(imageContext, kCGBlendModeCopy); // From Technical Q&A QA1708: http://developer.apple.com/library/ios/#qa/qa1708/_index.html
[layer renderInContext:imageContext];
CGContextRelease(imageContext);
CGColorSpaceRelease(genericRGBColorspace);
// TODO: This may not work
outputFramebuffer = [[GPUImageContext sharedFramebufferCache] fetchFramebufferForSize:layerPixelSize textureOptions:self.outputTextureOptions onlyTexture:YES];
glBindTexture(GL_TEXTURE_2D, [outputFramebuffer texture]);
// no need to use self.outputTextureOptions here, we always need these texture options
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, (int)layerPixelSize.width, (int)layerPixelSize.height, 0, GL_BGRA, GL_UNSIGNED_BYTE, imageData);
free(imageData);
for (id<GPUImageInput> currentTarget in targets)
{
if (currentTarget != self.targetToIgnoreForUpdates)
{
NSInteger indexOfObject = [targets indexOfObject:currentTarget];
NSInteger textureIndexOfTarget = [[targetTextureIndices objectAtIndex:indexOfObject] integerValue];
[currentTarget setInputSize:layerPixelSize atIndex:textureIndexOfTarget];
[currentTarget newFrameReadyAtTime:frameTime atIndex:textureIndexOfTarget];
}
}
}
Sample code
- GPUImagePicture with GPUImageView. The result is shown in GPUImagePicture.png.
@interface ViewController ()
@property (weak, nonatomic) IBOutlet GPUImageView *imageView;
@end
@implementation ViewController
- (void)viewDidLoad {
[super viewDidLoad];
[_imageView setBackgroundColorRed:1.0 green:1.0 blue:1.0 alpha:1.0];
GPUImagePicture *picture = [[GPUImagePicture alloc] initWithImage:[UIImage imageNamed:@"1.jpg"]];
GPUImageGrayscaleFilter *filter = [[GPUImageGrayscaleFilter alloc] init];
[picture addTarget:filter];
[filter addTarget:_imageView];
[filter useNextFrameForImageCapture];
[picture processImage];
}
- GPUImageUIElement with GPUImageView. The result is shown in GPUImageUIElement.png.
@interface SecondViewController ()
@property (weak, nonatomic) IBOutlet GPUImageView *imageView;
@property (weak, nonatomic) IBOutlet UIView *bgView;
@end
@implementation SecondViewController
- (void)viewDidLoad {
[super viewDidLoad];
[_imageView setBackgroundColorRed:1.0 green:1.0 blue:1.0 alpha:1.0];
GPUImageUIElement *element = [[GPUImageUIElement alloc] initWithView:_bgView];
GPUImageHueFilter *filter = [[GPUImageHueFilter alloc] init];
[element addTarget:filter];
[filter addTarget:_imageView];
[filter useNextFrameForImageCapture];
[element update];
}
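The sample above snapshots _bgView only once. To pick up live changes, call updateUsingCurrentTime periodically; a hedged sketch (the element property and the CADisplayLink wiring are assumptions, not part of the original sample):
// Assumes the element is kept alive in a property, e.g.
// @property (strong, nonatomic) GPUImageUIElement *element;
- (void)startUpdating {
    CADisplayLink *link = [CADisplayLink displayLinkWithTarget:self selector:@selector(refresh:)];
    [link addToRunLoop:[NSRunLoop mainRunLoop] forMode:NSRunLoopCommonModes];
}
- (void)refresh:(CADisplayLink *)link {
    [self.element updateUsingCurrentTime]; // re-snapshot the view and push it downstream
}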
Summary
GPUImagePicture, GPUImageView, and GPUImageUIElement come up constantly when processing images, snapshotting UI, and displaying results, so being familiar with them goes a long way toward understanding the GPUImage framework.
Source code: GPUImage source code reading series https://github.com/QinminiOS/GPUImage
Series index: GPUImage source code reading http://www.jianshu.com/nb/11749791