
Rendering a yuv420p video playback view with OpenGL ES

2017-07-27  FindCrt

Background

The example is the TFLive project, a live-streaming player I wrote following ijkPlayer. To run it you need to build the ffmpeg libraries; I keep a copy on a network drive, extraction code: vjce. Everything related to OpenGL ES playback is in the OpenGLES folder.

Working through learnOpenGL up to the point where you can use textures is enough.

Playing a video means showing pictures one after another, like a frame animation. Decoding a video frame yields a chunk of memory in some pixel format; that data carries all the color information for one picture, for example yuv420p. The article 图文详解YUV420数据格式 (an illustrated guide to the YUV420 data format) explains this very well.

YUV and RGB are both color spaces. My understanding is that a color space is an agreed-upon layout of color values. In RGB, for example, the red, green, and blue components are laid out in turn, each component usually occupying one byte with a value of 0-255.

In YUV420p, the Y, U, and V components are stored in three separate layers, like YYYYUUVV: all the Y values sit together, whereas RGB is interleaved as RGBRGBRGB. A format whose components are each stored contiguously is planar (it has planes). The 420 sampling means every 4 Y samples share one pair of UV samples, which saves space.

To display a YUV420p image, the YUV data has to be converted to RGBA, because OpenGL only outputs RGBA.

Preparation on iOS

The OpenGL logic is the same on every platform, so readers not on iOS can skip this section.

Display through a framebuffer:

+(Class)layerClass{
    return [CAEAGLLayer class];
}
-(BOOL)setupOpenGLContext{
    _renderLayer = (CAEAGLLayer *)self.layer;
    _renderLayer.opaque = YES;
    _renderLayer.contentsScale = [UIScreen mainScreen].scale;
    _renderLayer.drawableProperties = [NSDictionary dictionaryWithObjectsAndKeys:
                                       [NSNumber numberWithBool:NO], kEAGLDrawablePropertyRetainedBacking,
                                       kEAGLColorFormatRGBA8, kEAGLDrawablePropertyColorFormat,
                                       nil];
    
    _context = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES3];
    //_context = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
    if (!_context) {
        NSLog(@"alloc EAGLContext failed!");
        return false;
    }
    EAGLContext *preContex = [EAGLContext currentContext];
    if (![EAGLContext setCurrentContext:_context]) {
        NSLog(@"set current EAGLContext failed!");
        return false;
    }
    [self setupFrameBuffer];
    
    [EAGLContext setCurrentContext:preContex];
    return true;
}

With this context created and set as the current context, the OpenGL calls in the render code take effect inside it, and it can deliver the output to where it is needed.

-(void)setupFrameBuffer{
    glGenFramebuffers(1, &_frameBuffer);
    glBindFramebuffer(GL_FRAMEBUFFER, _frameBuffer);
    
    glGenRenderbuffers(1, &_colorBuffer);
    glBindRenderbuffer(GL_RENDERBUFFER, _colorBuffer);
    [_context renderbufferStorage:GL_RENDERBUFFER fromDrawable:_renderLayer];
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, _colorBuffer);
    
    GLint width,height;
    glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_WIDTH, &width);
    glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_HEIGHT, &height);
    
    _bufferSize.width = width;
    _bufferSize.height = height;
    
    glViewport(0, 0, _bufferSize.width, _bufferSize.height);

    GLenum status = glCheckFramebufferStatus(GL_FRAMEBUFFER) ;
    if(status != GL_FRAMEBUFFER_COMPLETE) {
        NSLog(@"failed to make complete framebuffer object %x", status);
    }
}

The OpenGL part

It splits into two parts: preparing the data before the first draw, and the per-frame render loop.

Preparation

The display logic with OpenGL is: draw a quad, turn each decoded video frame into a texture, apply the texture to the quad, and the picture shows up.

Since the geometry never changes, the shaders and the data (VAO and so on) are fixed too; set them up once before the first draw and they never need to change afterwards.

    if (!_renderConfiged) {
        [self configRenderData];
    }
-(BOOL)configRenderData{
    if (_renderConfiged) {
        return true;
    }
    
    GLfloat vertices[] = {
        -1.0f, 1.0f, 0.0f, 0.0f, 0.0f,  //left top
        -1.0f, -1.0f, 0.0f, 0.0f, 1.0f, //left bottom
        1.0f, 1.0f, 0.0f, 1.0f, 0.0f,   //right top
        1.0f, -1.0f, 0.0f, 1.0f, 1.0f,  //right bottom
    };
    
//    NSString *vertexPath = [[NSBundle mainBundle] pathForResource:@"frameDisplay" ofType:@"vs"];
//    NSString *fragmentPath = [[NSBundle mainBundle] pathForResource:@"frameDisplay" ofType:@"fs"];
    //_frameProgram = new TFOPGLProgram(std::string([vertexPath UTF8String]), std::string([fragmentPath UTF8String]));
    _frameProgram = new TFOPGLProgram(TFVideoDisplay_common_vs, TFVideoDisplay_yuv420_fs);
    
    glGenVertexArrays(1, &VAO);
    glBindVertexArray(VAO);
    
    glGenBuffers(1, &VBO);
    glBindBuffer(GL_ARRAY_BUFFER, VBO);
    glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);
    
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 5*sizeof(GLfloat), (void*)0);
    glEnableVertexAttribArray(0);
    
    glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, 5*sizeof(GLfloat), (void*)(3*sizeof(GLfloat)));
    glEnableVertexAttribArray(1);
    
    glBindBuffer(GL_ARRAY_BUFFER, 0);
    glBindVertexArray(0);
    
    
    //gen textures
    glGenTextures(TFMAX_TEXTURE_COUNT, textures);
    for (int i = 0; i<TFMAX_TEXTURE_COUNT; i++) {
        glBindTexture(GL_TEXTURE_2D, textures[i]);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    }
    _renderConfiged = YES;
    
    return YES;
}

Rendering

The shaders first:

const GLchar *TFVideoDisplay_common_vs ="               \n\
#version 300 es                                         \n\
                                                        \n\
layout (location = 0) in highp vec3 position;           \n\
layout (location = 1) in highp vec2 inTexcoord;         \n\
                                                        \n\
out highp vec2 texcoord;                                \n\
                                                        \n\
void main()                                             \n\
{                                                       \n\
gl_Position = vec4(position, 1.0);                      \n\
texcoord = inTexcoord;                                  \n\
}                                                       \n\
";
const GLchar *TFVideoDisplay_yuv420_fs ="               \n\
#version 300 es                                         \n\
precision highp float;                                  \n\
                                                        \n\
in vec2 texcoord;                                       \n\
out vec4 FragColor;                                     \n\
uniform lowp sampler2D yPlaneTex;                       \n\
uniform lowp sampler2D uPlaneTex;                       \n\
uniform lowp sampler2D vPlaneTex;                       \n\
                                                        \n\
void main()                                             \n\
{                                                       \n\
    // (1) y - 16 (2) rgb * 1.164                       \n\
    vec3 yuv;                                           \n\
    yuv.x = texture(yPlaneTex, texcoord).r;             \n\
    yuv.y = texture(uPlaneTex, texcoord).r - 0.5f;      \n\
    yuv.z = texture(vPlaneTex, texcoord).r - 0.5f;      \n\
                                                        \n\
    mat3 trans = mat3(1, 1 ,1,                          \n\
                      0, -0.34414, 1.772,               \n\
                      1.402, -0.71414, 0                \n\
                      );                                \n\
                                                        \n\
    FragColor = vec4(trans*yuv, 1.0);                   \n\
}                                                       \n\
";

So each component gets its own texture. The nice part is that the three textures can share the same texture coordinates:

    glBindTexture(GL_TEXTURE_2D, textures[0]);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, width, height, 0, GL_LUMINANCE, GL_UNSIGNED_BYTE, overlay->pixels[0]);
    glGenerateMipmap(GL_TEXTURE_2D);
    
    glBindTexture(GL_TEXTURE_2D, textures[1]);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, width/2, height/2, 0, GL_LUMINANCE, GL_UNSIGNED_BYTE, overlay->pixels[1]);
    glGenerateMipmap(GL_TEXTURE_2D);
    
    glBindTexture(GL_TEXTURE_2D, textures[2]);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, width/2, height/2, 0, GL_LUMINANCE, GL_UNSIGNED_BYTE, overlay->pixels[2]);
    glGenerateMipmap(GL_TEXTURE_2D);

Finally, YUV is converted to RGB with this formula:

R = Y + 1.402 (Cr-128)
G = Y - 0.34414 (Cb-128) - 0.71414 (Cr-128)
B = Y + 1.772 (Cb-128)

One more thing worth noting here: the difference between YUV and YCbCr.
YCbCr is an offset version of YUV, which is why 0.5 is subtracted (everything is mapped into the 0-1 range, so 128 becomes 0.5). That said, the exact formula depends on what was configured at encode time, i.e. how RGB was converted to YUV when the video was captured; as long as the two ends match, it works.

Drawing the quad

    glBindFramebuffer(GL_FRAMEBUFFER, self.frameBuffer);
    glClear(GL_DEPTH_BUFFER_BIT | GL_COLOR_BUFFER_BIT);
    
    _frameProgram->use();
    
    _frameProgram->setTexture("yPlaneTex", GL_TEXTURE_2D, textures[0], 0);
    _frameProgram->setTexture("uPlaneTex", GL_TEXTURE_2D, textures[1], 1);
    _frameProgram->setTexture("vPlaneTex", GL_TEXTURE_2D, textures[2], 2);
    
    glBindVertexArray(VAO);
    
    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
    
    
    glBindRenderbuffer(GL_RENDERBUFFER, self.colorBuffer);
    [self.context presentRenderbuffer:GL_RENDERBUFFER];

Handling the details

Two details need care: OpenGL ES calls must not run while the app is inactive (iOS terminates apps that touch the GPU in the background), and the renderbuffer has to be reallocated when the layer's size changes.

[[NSNotificationCenter defaultCenter] addObserver:self selector:@selector(catchAppResignActive) name:UIApplicationWillResignActiveNotification object:nil];
[[NSNotificationCenter defaultCenter] addObserver:self selector:@selector(catchAppBecomeActive) name:UIApplicationDidBecomeActiveNotification object:nil];
......
-(void)catchAppResignActive{
    _appIsUnactive = YES;
}

-(void)catchAppBecomeActive{
    _appIsUnactive = NO;
}
.......
if (self.appIsUnactive) {
    return;    //check before drawing and simply bail out
}
-(void)layoutSubviews{
    [super layoutSubviews];
    
    //If context has setuped and layer's size has changed, realloc renderBuffer.
    if (self.context && !CGSizeEqualToSize(self.layer.frame.size, self.bufferSize)) {
        _needReallocRenderBuffer = YES;
    }
}
...........
if (_needReallocRenderBuffer) {
   [self reallocRenderBuffer];
   _needReallocRenderBuffer = NO;
}
.........
-(void)reallocRenderBuffer{
    glBindRenderbuffer(GL_RENDERBUFFER, _colorBuffer);
    
    [_context renderbufferStorage:GL_RENDERBUFFER fromDrawable:_renderLayer];
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, _colorBuffer);
    ......
}

Finally

The key point is how the fragment shader reads the YUV components:

  1. Use three separate textures.
  2. Share one set of texture coordinates across them.
  3. Build the textures with GL_LUMINANCE; the u and v textures are half the width and height of the y texture.