GPUImage Source Code Analysis (2): GPUImageFilter

2018-12-17  奔向火星005

The GPUImage framework is essentially a chain of filters. It uses OpenGL's FBO (framebuffer object) mechanism to cache each intermediate result in a texture attached to an FBO, so an image is processed stage by stage like an assembly line, roughly:

    Source -> FilterA (FBO texture) -> FilterB (FBO texture) -> ... -> Final output

All of GPUImage's filters are subclasses of GPUImageFilter. GPUImageFilter inherits from GPUImageOutput and also conforms to the GPUImageInput protocol: GPUImageOutput defines the output-side interface (delivering frames to downstream targets), while GPUImageInput defines the input-side interface (receiving frames from upstream).
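In GPUImageFilter.h this relationship is declared as:

@interface GPUImageFilter : GPUImageOutput <GPUImageInput>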

Two GPUImageFilter instances are connected through GPUImageOutput's addTarget: method. The source is as follows:

- (void)addTarget:(id<GPUImageInput>)newTarget atTextureLocation:(NSInteger)textureLocation;
{
    if([targets containsObject:newTarget])  // Return if targets already contains this filter
    {
        return;
    }
    
    cachedMaximumOutputSize = CGSizeZero;
    runSynchronouslyOnVideoProcessingQueue(^{
        // Normally hands self's outputFramebuffer to newTarget as its input framebuffer; subclasses may customize this
        [self setInputFramebufferForTarget:newTarget atIndex:textureLocation];
        [targets addObject:newTarget];  // Store the new filter in the targets array
        [targetTextureIndices addObject:[NSNumber numberWithInteger:textureLocation]];
        
        allTargetsWantMonochromeData = allTargetsWantMonochromeData && [newTarget wantsMonochromeInput];
    });
}

To connect FilterA to FilterB so that data flows from FilterA to FilterB, call [FilterA addTarget:FilterB]; FilterA's targets array will then hold FilterB.
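
To make this concrete, here is a minimal sketch of assembling a two-stage chain. The specific filters and the inputImage variable are illustrative; addTarget:, processImage, useNextFrameForImageCapture, and imageFromCurrentFramebuffer are all existing GPUImage API:

// Build a chain: picture source -> sepia -> grayscale.
// inputImage is assumed to be a UIImage you already have.
GPUImagePicture *source = [[GPUImagePicture alloc] initWithImage:inputImage];
GPUImageSepiaFilter *sepia = [[GPUImageSepiaFilter alloc] init];
GPUImageGrayscaleFilter *gray = [[GPUImageGrayscaleFilter alloc] init];

[source addTarget:sepia];   // source's targets array now holds sepia
[sepia addTarget:gray];     // sepia's targets array now holds gray

[gray useNextFrameForImageCapture];  // keep the final framebuffer alive for readback
[source processImage];               // pushes the frame down the chain
UIImage *result = [gray imageFromCurrentFramebuffer];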

Next, look at the implementation of the newFrameReadyAtTime: method:

- (void)newFrameReadyAtTime:(CMTime)frameTime atIndex:(NSInteger)textureIndex;
{
    static const GLfloat imageVertices[] = {
        -1.0f, -1.0f,
        1.0f, -1.0f,
        -1.0f,  1.0f,
        1.0f,  1.0f,
    };
    
    // Perform this filter's image processing; usually customized by subclasses
    [self renderToTextureWithVertices:imageVertices textureCoordinates:[[self class] textureCoordinatesForRotation:inputRotation]];

    // Iterate over targets, hand this filter's outputFramebuffer to each one, then call newFrameReadyAtTime on each
    [self informTargetsAboutNewFrameAtTime:frameTime];
}
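
renderToTextureWithVertices: draws a full-screen quad (the imageVertices above) with the filter's shader program, so the common way to customize a filter is to supply your own fragment shader rather than override the rendering itself. A minimal sketch (the inversion shader is illustrative; SHADER_STRING and initWithFragmentShaderFromString: are GPUImage's own):

// A color-inverting filter built from a custom fragment shader.
NSString *const kInvertFragmentShader = SHADER_STRING
(
 varying highp vec2 textureCoordinate;
 uniform sampler2D inputImageTexture;

 void main()
 {
     lowp vec4 color = texture2D(inputImageTexture, textureCoordinate);
     gl_FragColor = vec4(1.0 - color.rgb, color.a);  // invert RGB, keep alpha
 }
);

GPUImageFilter *invertFilter = [[GPUImageFilter alloc] initWithFragmentShaderFromString:kInvertFragmentShader];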

The whole image-processing pass, then, is just a top-down traversal that calls newFrameReadyAtTime: on each filter in turn, which is simple and clear. The code for informTargetsAboutNewFrameAtTime: is listed below as well:

- (void)informTargetsAboutNewFrameAtTime:(CMTime)frameTime;
{
    if (self.frameProcessingCompletionBlock != NULL)
    {
        self.frameProcessingCompletionBlock(self, frameTime);
    }
    
    // Give all targets the framebuffer so they can grab a lock on it
    for (id<GPUImageInput> currentTarget in targets)
    {
        if (currentTarget != self.targetToIgnoreForUpdates)
        {
            NSInteger indexOfObject = [targets indexOfObject:currentTarget];
            NSInteger textureIndex = [[targetTextureIndices objectAtIndex:indexOfObject] integerValue];

            [self setInputFramebufferForTarget:currentTarget atIndex:textureIndex];
            [currentTarget setInputSize:[self outputFrameSize] atIndex:textureIndex];
        }
    }
    
    // Release our hold so it can return to the cache immediately upon processing
    [[self framebufferForOutput] unlock];
    
    if (usingNextFrameForImageCapture)
    {
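        // A still-image capture is pending: keep the output framebuffer (and its lock) alive so it can be read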
//        usingNextFrameForImageCapture = NO;
    }
    else
    {
        [self removeOutputFramebuffer];
    }    
    
    // Trigger processing last, so that our unlock comes first in serial execution, avoiding the need for a callback
    for (id<GPUImageInput> currentTarget in targets)
    {
        if (currentTarget != self.targetToIgnoreForUpdates)
        {
            NSInteger indexOfObject = [targets indexOfObject:currentTarget];
            NSInteger textureIndex = [[targetTextureIndices objectAtIndex:indexOfObject] integerValue];
            [currentTarget newFrameReadyAtTime:frameTime atIndex:textureIndex];
        }
    }
}
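Two details are worth noting. First, the filter unlocks its own output framebuffer before notifying targets, so the framebuffer can return to the cache as soon as every target releases its lock. Second, frameProcessingCompletionBlock, invoked at the top of this method, gives you a per-frame hook. A small sketch of the latter, where filter is assumed to be any GPUImageFilter instance:

// Runs once per processed frame, at the start of informTargetsAboutNewFrameAtTime:.
filter.frameProcessingCompletionBlock = ^(GPUImageOutput *output, CMTime time) {
    NSLog(@"Filter finished a frame at %.3f s", CMTimeGetSeconds(time));
};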