
Watermarks, Filters and Other Effects with AVFilter on the Android Platform

2017-05-11  小码哥_WS

In the previous article we decoded an AVI on the Android platform and played it back with a SurfaceView:
http://blog.csdn.net/column/details/15511.html

In this article we build on that to add filters, watermarks, and similar effects.

We will use the newest tooling throughout:

ffmpeg 3.3
Android Studio 2.3
CMake as the JNI build system

If your C/C++/JNI knowledge is shaky, start with these:

C language summary: http://blog.csdn.net/king1425/article/details/70256764
C++ summary (part 1): http://blog.csdn.net/king1425/article/details/70260091
Advanced JNI notes: http://blog.csdn.net/king1425/article/details/71405131

If ffmpeg itself is unfamiliar, see: http://blog.csdn.net/king1425/article/details/70597642

First, the two effects we will demonstrate, with their filter descriptions:

Grayscale: const char *filters_descr = "lutyuv='u=128:v=128'";
(Pinning both chroma planes to the neutral value 128 strips out all color, leaving only luma.)

Adding a watermark: const char *filters_descr = "movie=/storage/emulated/0/ws.jpg[wm];[in][wm]overlay=5:5[out]";

Earlier articles in this series covered encoding and decoding audio and video with ffmpeg, so here we concentrate on libavfilter, the ffmpeg library that applies effects to audio and video.

The key libavfilter functions are:

avfilter_register_all(): registers all available AVFilters.
avfilter_graph_alloc(): allocates an AVFilterGraph.
avfilter_graph_create_filter(): creates a filter instance and adds it to a graph.
avfilter_graph_parse_ptr(): parses a graph described by a string and adds it to an existing AVFilterGraph.
avfilter_graph_config(): checks the graph's validity and configures all of its links and formats.
av_buffersrc_add_frame(): pushes an AVFrame into the graph.
av_buffersink_get_frame(): pulls a filtered AVFrame out of the graph.
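Putting those functions in order, a typical filter-graph session looks like this (a sketch only; the real arguments and error handling appear in the code later in this article):

```
avfilter_register_all()                          // once, at startup
graph = avfilter_graph_alloc()
src   = create_filter("buffer", args)            // decoded frames enter here
sink  = create_filter("buffersink")              // filtered frames exit here
avfilter_graph_parse_ptr(graph, filters_descr)   // build the middle from a string
avfilter_graph_config(graph)                     // validate, negotiate formats

for each decoded frame:
    av_buffersrc_add_frame(src, frame)
    av_buffersink_get_frame(sink, frame)         // frame now carries the effect

avfilter_graph_free(&graph)
```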

Today's demo provides several effects; uncomment one filter description at a time:

const char *filters_descr = "lutyuv='u=128:v=128'";
//const char *filters_descr = "hflip";
//const char *filters_descr = "hue='h=60:s=-3'";
//const char *filters_descr = "crop=2/3*in_w:2/3*in_h";
//const char *filters_descr = "drawbox=x=200:y=200:w=300:h=300:color=pink@0.5";
//const char *filters_descr = "movie=/storage/emulated/0/ws.jpg[wm];[in][wm]overlay=5:5[out]";
//const char *filters_descr="drawgrid=width=100:height=100:thickness=4:color=pink@0.9";

Many more filters are documented on the official site: http://www.ffmpeg.org/ffmpeg-filters.html

Now for the code.

MainActivity sets up a SurfaceView and declares a native play() function that passes the Surface down to native code; the native side draws the processed frames into that Surface for display.

SurfaceView surfaceView = (SurfaceView) findViewById(R.id.surface_view);
        surfaceHolder = surfaceView.getHolder();
        surfaceHolder.addCallback(this);

...
 public native int play(Object surface);

surfaceCreated() then invokes play() on a background thread so decoding never blocks the UI:

  @Override
    public void surfaceCreated(SurfaceHolder holder) {
        new Thread(new Runnable() {
            @Override
            public void run() {
                play(surfaceHolder.getSurface());
            }
        }).start();
    }

So the real question is: what does the JNI-level play() function do?

First, on top of the previous article's play(), add the headers libavfilter needs:

//added by ws for AVfilter start
#include <libavfilter/avfiltergraph.h>
#include <libavfilter/buffersrc.h>
#include <libavfilter/buffersink.h>
//added by ws for AVfilter end
};  // closes the existing extern "C" block that wraps all the FFmpeg headers

Then declare the filter description we want, plus the contexts the graph will need:

//added by ws for AVfilter start

const char *filters_descr = "lutyuv='u=128:v=128'";
//const char *filters_descr = "hflip";
//const char *filters_descr = "hue='h=60:s=-3'";
//const char *filters_descr = "crop=2/3*in_w:2/3*in_h";
//const char *filters_descr = "drawbox=x=200:y=200:w=300:h=300:color=pink@0.5";
//const char *filters_descr = "movie=/storage/emulated/0/ws.jpg[wm];[in][wm]overlay=5:5[out]";
//const char *filters_descr="drawgrid=width=100:height=100:thickness=4:color=pink@0.9";

AVFilterContext *buffersink_ctx;
AVFilterContext *buffersrc_ctx;
AVFilterGraph *filter_graph;


//added by ws for AVfilter end

Now we can initialize the filter graph itself. There is a fair amount of code, so keep the key-function list above at hand while reading. (avfilter_register_all() must already have been called at startup, alongside av_register_all().)

 //added by ws for AVfilter start----------init AVfilter--------------------------ws


    char args[512];
    int ret;
    AVFilter *buffersrc  = avfilter_get_by_name("buffer");
    AVFilter *buffersink = avfilter_get_by_name("buffersink"); // recent ffmpeg versions require the name "buffersink"
    AVFilterInOut *outputs = avfilter_inout_alloc();
    AVFilterInOut *inputs  = avfilter_inout_alloc();
    enum AVPixelFormat pix_fmts[] = { AV_PIX_FMT_YUV420P, AV_PIX_FMT_NONE };
    AVBufferSinkParams *buffersink_params;

    filter_graph = avfilter_graph_alloc();

    /* buffer video source: the decoded frames from the decoder will be inserted here. */
    snprintf(args, sizeof(args),
             "video_size=%dx%d:pix_fmt=%d:time_base=%d/%d:pixel_aspect=%d/%d",
             pCodecCtx->width, pCodecCtx->height, pCodecCtx->pix_fmt,
             pCodecCtx->time_base.num, pCodecCtx->time_base.den,
             pCodecCtx->sample_aspect_ratio.num, pCodecCtx->sample_aspect_ratio.den);

    ret = avfilter_graph_create_filter(&buffersrc_ctx, buffersrc, "in",
                                       args, NULL, filter_graph);
    if (ret < 0) {
        LOGD("Cannot create buffer source\n");
        return ret;
    }

    /* buffer video sink: to terminate the filter chain. */
    buffersink_params = av_buffersink_params_alloc();
    buffersink_params->pixel_fmts = pix_fmts;
    ret = avfilter_graph_create_filter(&buffersink_ctx, buffersink, "out",
                                       NULL, buffersink_params, filter_graph);
    av_free(buffersink_params);
    if (ret < 0) {
        LOGD("Cannot create buffer sink\n");
        return ret;
    }

    /* Endpoints for the filter graph. */
    outputs->name       = av_strdup("in");
    outputs->filter_ctx = buffersrc_ctx;
    outputs->pad_idx    = 0;
    outputs->next       = NULL;

    inputs->name       = av_strdup("out");
    inputs->filter_ctx = buffersink_ctx;
    inputs->pad_idx    = 0;
    inputs->next       = NULL;

   // avfilter_link(buffersrc_ctx, 0, buffersink_ctx, 0);

    if ((ret = avfilter_graph_parse_ptr(filter_graph, filters_descr,
                                        &inputs, &outputs, NULL)) < 0) {
        LOGD("Cannot avfilter_graph_parse_ptr\n");
        return ret;
    }

    if ((ret = avfilter_graph_config(filter_graph, NULL)) < 0) {
        LOGD("Cannot avfilter_graph_config\n");
        return ret;
    }

    /* inputs/outputs were consumed by avfilter_graph_parse_ptr();
       free the leftover lists now that the graph is configured */
    avfilter_inout_free(&inputs);
    avfilter_inout_free(&outputs);

    //added by ws for AVfilter end------------init AVfilter------------------------------ws

With initialization done, every decoded frame is pushed through the graph and the filtered result is read back:

   //added by ws for AVfilter start
                pFrame->pts = av_frame_get_best_effort_timestamp(pFrame);

                /* push the decoded frame into the filtergraph */
                if (av_buffersrc_add_frame(buffersrc_ctx, pFrame) < 0) {
                    LOGD("Could not av_buffersrc_add_frame");
                    break;
                }

                ret = av_buffersink_get_frame(buffersink_ctx, pFrame);
                if (ret < 0) {
                    LOGD("Could not av_buffersink_get_frame");
                    break;
                }
                //added by ws for AVfilter end

The frame that comes back from the sink already has the effect applied.
Remember to release the graph when playback ends:

avfilter_graph_free(&filter_graph); //added by ws for avfilter

That completes today's feature.
I recommend reading this article alongside the full source; otherwise it is like the blind men and the elephant. This hands-on libavfilter guide may also help: http://blog.csdn.net/king1425/article/details/71215686

The demo source is here:

ffmpeg tutorial, part 8: watermarks, filters and other AVFilter effects on the Android platform:
http://blog.csdn.net/King1425/article/details/71609520
