ffmpeg: latency of audio/video stream info analysis
ffmpeg's two interfaces avformat_open_input and avformat_find_stream_info open a stream and analyze its stream information, respectively. When the initial information is insufficient, avformat_find_stream_info has to call read_frame_internal internally to read stream data, analyze it, and only then fill in the core AVFormatContext structure. Because it has to read packets, avformat_find_stream_info can introduce considerable latency.
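To get a feel for the cost, the standard open sequence can be timed. The following is a minimal sketch (not from the original post) that measures the time spent in avformat_find_stream_info with av_gettime(); the URL handling and error reporting are kept deliberately simple.

#include <inttypes.h>
#include <stdio.h>
#include <libavformat/avformat.h>
#include <libavutil/time.h>

/* Sketch: measure how long avformat_find_stream_info() takes for one URL.
 * Older ffmpeg versions also require av_register_all() at startup. */
static int probe_latency(const char *url)
{
    AVFormatContext *fmt_ctx = NULL;
    int ret = avformat_open_input(&fmt_ctx, url, NULL, NULL);
    if (ret < 0)
        return ret;

    int64_t start = av_gettime();   /* wall clock, microseconds */
    ret = avformat_find_stream_info(fmt_ctx, NULL);
    printf("avformat_find_stream_info took %" PRId64 " ms\n",
           (av_gettime() - start) / 1000);

    avformat_close_input(&fmt_ctx);
    return ret;
}

That latency can be reduced in several ways: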
The first approach is to limit the maximum amount of data that avformat_find_stream_info reads internally by setting the probesize member of AVFormatContext:
AVFormatContext *fmt_ctx = NULL;
ret = avformat_open_input(&fmt_ctx, url, input_fmt, NULL);
fmt_ctx->probesize = 4096;
ret = avformat_find_stream_info(fmt_ctx, NULL);
This approach has a drawback: when the probe size is set too small, avformat_find_stream_info may read at most one frame of data internally, and in some cases that is not enough to analyze the stream information.
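As a side note (not in the original text), the same limit can also be supplied as demuxer options before avformat_open_input, using the standard libavformat options "probesize" and "analyzeduration"; the values below are placeholders, a sketch rather than recommended numbers.

#include <libavformat/avformat.h>
#include <libavutil/dict.h>

/* Sketch: pass the probing limits as options so that they are already in
 * effect when the input is opened. The values are examples only. */
static int open_with_probe_limits(AVFormatContext **fmt_ctx, const char *url)
{
    AVDictionary *opts = NULL;
    int ret;

    av_dict_set(&opts, "probesize", "4096", 0);          /* bytes */
    av_dict_set(&opts, "analyzeduration", "100000", 0);  /* microseconds */

    ret = avformat_open_input(fmt_ctx, url, NULL, &opts);
    if (ret >= 0)
        ret = avformat_find_stream_info(*fmt_ctx, NULL);

    av_dict_free(&opts);
    return ret;
}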
The second approach is to set the flags member of AVFormatContext so that the packets read inside avformat_find_stream_info are not placed into the AVFormatContext packet_buffer:
AVFormatContext *fmt_ctx = NULL;
ret = avformat_open_input(&fmt_ctx, url, input_fmt, NULL);
fmt_ctx->flags |= AVFMT_FLAG_NOBUFFER;
ret = avformat_find_stream_info(fmt_ctx, NULL);
Digging into avformat_find_stream_info shows that when AVFMT_FLAG_NOBUFFER is set, the packets are not buffered: every frame read inside avformat_find_stream_info is used only for analysis and is not kept for display. A short excerpt from avformat_find_stream_info makes this clear:
if (ic->flags & AVFMT_FLAG_NOBUFFER) {
    pkt = &pkt1;
} else {
    pkt = add_to_pktbuf(&ic->packet_buffer, &pkt1, &ic->packet_buffer_end);
    if ((ret = av_dup_packet(pkt)) < 0)
        goto find_stream_info_err;
}
When many packets are read, the decoding and analysis attempts inside avformat_find_stream_info are themselves time-consuming (I have not measured this in detail), which led me to a more extreme solution: skip avformat_find_stream_info entirely and initialize the decoding environment myself.
0x02 Solution
Precondition: the stream information on the sending side is known.
For example, the stream information in my environment is:
audio: AAC 44100Hz 2 channel 16bit
video: H264 640*480 30fps
live stream
After calling avformat_open_input, do not go on to call avformat_find_stream_info; the code is as follows:
AVFormatContext *fmt_ctx = NULL;
ret = avformat_open_input(&fmt_ctx, url, input_fmt, NULL);
fmt_ctx->probesize = 4096;
ret = init_decode(fmt_ctx);
init_decode is an interface I implemented myself; the interface and full code are as follows:
#include <stdbool.h>
#include <string.h>
#include <libavformat/avformat.h>

enum {
    FLV_TAG_TYPE_AUDIO = 0x08,
    FLV_TAG_TYPE_VIDEO = 0x09,
    FLV_TAG_TYPE_META  = 0x12,
};

static AVStream *create_stream(AVFormatContext *s, int codec_type)
{
    AVStream *st = avformat_new_stream(s, NULL);
    if (!st)
        return NULL;
    st->codec->codec_type = codec_type;
    return st;
}

static int get_video_extradata(AVFormatContext *s, int video_index)
{
    int type, size, flags, pos, stream_type;
    int ret = -1;
    int64_t dts;
    bool got_extradata = false;

    if (!s || video_index < 0 || video_index > 2)
        return ret;

    /* Walk the FLV tags; the trailing avio_skip(4) skips the PreviousTagSize
     * field that follows every tag. */
    for (;; avio_skip(s->pb, 4)) {
        /* FLV tag header: type(1) + data size(3) + timestamp(3) +
         * timestamp extended(1) + stream id(3). */
        pos  = avio_tell(s->pb);
        type = avio_r8(s->pb);
        size = avio_rb24(s->pb);
        dts  = avio_rb24(s->pb);
        dts |= avio_r8(s->pb) << 24;
        avio_skip(s->pb, 3);
        if (0 == size)
            break;

        if (FLV_TAG_TYPE_AUDIO == type || FLV_TAG_TYPE_META == type) {
            /* Audio or meta tag: skip it. */
            avio_seek(s->pb, size, SEEK_CUR);
        } else if (type == FLV_TAG_TYPE_VIDEO) {
            /* First video tag: read the sps/pps info from it, then break.
             * The first 5 bytes are the video tag header, the rest is the
             * extradata. xmalloc() is presumably a custom allocation wrapper
             * (it is not an ffmpeg function). */
            size -= 5;
            s->streams[video_index]->codec->extradata =
                xmalloc(size + FF_INPUT_BUFFER_PADDING_SIZE);
            if (NULL == s->streams[video_index]->codec->extradata)
                break;
            memset(s->streams[video_index]->codec->extradata, 0,
                   size + FF_INPUT_BUFFER_PADDING_SIZE);
            memcpy(s->streams[video_index]->codec->extradata,
                   s->pb->buf_ptr + 5, size);
            s->streams[video_index]->codec->extradata_size = size;
            ret = 0;
            got_extradata = true;
        } else {
            /* Unknown tag type: something is wrong. */
            break;
        }

        if (got_extradata)
            break;
    }
    return ret;
}

static int init_decode(AVFormatContext *s)
{
    int video_index = -1;
    int audio_index = -1;
    int ret = -1;

    if (!s)
        return ret;

    /*
     * Get the video stream index; if there is no video stream, create it.
     * Likewise for audio.
     */
    if (0 == s->nb_streams) {
        create_stream(s, AVMEDIA_TYPE_VIDEO);
        create_stream(s, AVMEDIA_TYPE_AUDIO);
        video_index = 0;
        audio_index = 1;
    } else if (1 == s->nb_streams) {
        if (AVMEDIA_TYPE_VIDEO == s->streams[0]->codec->codec_type) {
            create_stream(s, AVMEDIA_TYPE_AUDIO);
            video_index = 0;
            audio_index = 1;
        } else if (AVMEDIA_TYPE_AUDIO == s->streams[0]->codec->codec_type) {
            create_stream(s, AVMEDIA_TYPE_VIDEO);
            video_index = 1;
            audio_index = 0;
        }
    } else if (2 == s->nb_streams) {
        if (AVMEDIA_TYPE_VIDEO == s->streams[0]->codec->codec_type) {
            video_index = 0;
            audio_index = 1;
        } else if (AVMEDIA_TYPE_VIDEO == s->streams[1]->codec->codec_type) {
            video_index = 1;
            audio_index = 0;
        }
    }

    /* Error: no video stream found. */
    if (video_index != 0 && video_index != 1)
        return ret;

    /* Init the audio codec (AAC). */
    s->streams[audio_index]->codec->codec_id = AV_CODEC_ID_AAC;
    s->streams[audio_index]->codec->sample_rate = 44100;
    s->streams[audio_index]->codec->time_base.den = 44100;
    s->streams[audio_index]->codec->time_base.num = 1;
    s->streams[audio_index]->codec->bits_per_coded_sample = 16;
    s->streams[audio_index]->codec->channels = 2;
    s->streams[audio_index]->codec->channel_layout = 3;
    s->streams[audio_index]->pts_wrap_bits = 32;
    s->streams[audio_index]->time_base.den = 1000;
    s->streams[audio_index]->time_base.num = 1;

    /* Init the video codec (H264). */
    s->streams[video_index]->codec->codec_id = AV_CODEC_ID_H264;
    s->streams[video_index]->codec->width = 640;
    s->streams[video_index]->codec->height = 480;
    s->streams[video_index]->codec->ticks_per_frame = 2;
    s->streams[video_index]->codec->pix_fmt = 0;
    s->streams[video_index]->pts_wrap_bits = 32;
    s->streams[video_index]->time_base.den = 1000;
    s->streams[video_index]->time_base.num = 1;
    s->streams[video_index]->avg_frame_rate.den = 90;
    s->streams[video_index]->avg_frame_rate.num = 3;
    /* Needs to change: different conditions have different frame rates.
     * 'r_frame_rate' is new in ffmpeg 2.3.3. */
    s->streams[video_index]->r_frame_rate.den = 60;
    s->streams[video_index]->r_frame_rate.num = 2;

    /* H264 needs sps/pps for decoding, so read it from the first video tag. */
    ret = get_video_extradata(s, video_index);

    /* Update the AVFormatContext info. */
    s->nb_streams = 2;
    /* Empty the buffer. */
    s->pb->buf_ptr = s->pb->buf_end;
    /* Something is wrong here. TODO: find out what 'pos' means, then set it. */
    s->pb->pos = s->pb->buf_end;

    return ret;
}
Analysis:
init_decode performs the following steps:
1. After avformat_open_input, there is no way to know in advance how many streams the AVFormatContext already contains, so this has to be checked; any missing stream is created with create_stream, and video_index and audio_index are set accordingly.
2. Initialize the audio and video stream parameters from the known information.
3. H264 needs sps/pps for decoding; this information is carried in the first video tag received and is read via get_video_extradata:
3.1 The data read through the avio_* interfaces is already in FLV format, so when the tag read is an audio tag or a metadata tag, its payload is skipped and reading continues.
3.2 When it is a video tag, its data is copied into s->streams[video_index]->codec->extradata and the loop exits.
4. Update the AVFormatContext information.
5. Empty the buffer.
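After init_decode returns, the decoders themselves still have to be opened before any frame can be decoded. The following is a minimal sketch (not part of the original code), using the same deprecated AVStream->codec API as above:

#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>

/* Sketch: open a decoder for each stream that init_decode() has set up.
 * The extradata (sps/pps) filled in by get_video_extradata() is consumed here. */
static int open_decoders(AVFormatContext *s)
{
    unsigned i;
    for (i = 0; i < s->nb_streams; i++) {
        AVCodecContext *dec_ctx = s->streams[i]->codec;
        AVCodec *dec = avcodec_find_decoder(dec_ctx->codec_id);
        int ret;
        if (!dec)
            return AVERROR_DECODER_NOT_FOUND;
        ret = avcodec_open2(dec_ctx, dec, NULL);
        if (ret < 0)
            return ret;
    }
    return 0;
}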
If the video tag is H263-encoded, the decoding environment can be initialized successfully inside init_decode without calling get_video_extradata at all (the codec_id just needs to be set to AV_CODEC_ID_FLV1).
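For that H263 case, the video-stream setup inside init_decode could look roughly like the sketch below; the resolution and time base are assumptions carried over from the H264 example above, not values from the original post.

/* Sketch: FLV1 (Sorenson H.263) variant of the video stream setup.
 * Width/height/time base are assumptions taken from the example above;
 * FLV1 needs no sps/pps extradata, so get_video_extradata() is skipped. */
static void init_video_stream_flv1(AVFormatContext *s, int video_index)
{
    AVCodecContext *c = s->streams[video_index]->codec;

    c->codec_id = AV_CODEC_ID_FLV1;
    c->width    = 640;
    c->height   = 480;

    s->streams[video_index]->pts_wrap_bits = 32;
    s->streams[video_index]->time_base.den = 1000;
    s->streams[video_index]->time_base.num = 1;
}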
In most situations, the custom init_decode interface can replace avformat_find_stream_info and reduce the latency. There are of course many restrictions, so whether it fits depends on the requirements of the specific project.