Understanding YUV sampling formats
The YUV format was introduced to make color television compatible with black-and-white sets, which is why the RGB color space is converted into the YUV color space: Y carries luminance, while U and V carry chrominance. There are many YUV variants, and they can be understood along two dimensions, between spaces and within a space, a framing borrowed from the inter-frame and intra-frame ideas of H.264.
Why put it that way?
Let me argue the point from each of the two angles:
Between spaces: different formats spend a different number of bits on a pixel, e.g. yuv444, yuv422, yuv411, yuv420.
Within a space: formats spend the same number of bits per pixel but store them differently; yuv420 alone subdivides into yuv420p, yuv420sp, nv21, nv12, yv12, yu12, and I420.
So whenever we reason about a YUV format, we must keep two things in mind at all times: the bit count and the storage layout.
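Since everything below builds on the RGB-to-YUV conversion mentioned above, here is a minimal sketch of one common integer approximation (the BT.601 studio-swing variant; the coefficients and ranges are textbook values, not something stated in this article):

```c
#include <stdint.h>

/* BT.601 studio-range integer approximation of RGB -> YUV.
 * One common variant; coefficients and ranges differ between
 * standards (BT.601 vs BT.709, full swing vs studio swing). */
static void rgb_to_yuv(uint8_t r, uint8_t g, uint8_t b,
                       uint8_t *y, uint8_t *u, uint8_t *v)
{
    *y = (uint8_t)((( 66 * r + 129 * g +  25 * b + 128) >> 8) +  16);
    *u = (uint8_t)(((-38 * r -  74 * g + 112 * b + 128) >> 8) + 128);
    *v = (uint8_t)(((112 * r -  94 * g -  18 * b + 128) >> 8) + 128);
}
```

With these coefficients, pure white maps to (Y, U, V) = (235, 128, 128) and pure black to (16, 128, 128), the studio-range extremes.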
☞ Understanding the 4
Why do yuv444, yuv420, yuv422, and yuv411 all use a 4?
After studying this carefully, I found the following.
First, YUV names work differently from RGB names. In a name like rgb4444, the digits are the number of bits each color component occupies:
rgb4444 means alpha 4 bits, red 4 bits, green 4 bits, blue 4 bits. Seen from the outside that is 16 bits, i.e. 2 bytes, but internally the subdivision goes down to the bit.
YUV names are not like that: YUV never subdivides below the byte; its very smallest unit is 1 byte, i.e. 8 bits. That immediately raises a question:
does every pixel then need a whole multiple of 1 byte to describe it?
If the smallest unit is 1 byte, and every pixel is described by Y, U, and V together, does each pixel cost at least
1 byte Y + 1 byte U + 1 byte V = 3 bytes?
Clearly that would be unreasonable. The eye is very sensitive to Y, but U and V can be compressed to a degree; in other words, the U and V information attributable to a single pixel can be less than 1 byte. Doesn't that contradict the 1-byte minimum unit? It does not, because YUV uses the idea of sharing, and this is the essential difference between YUV and RGB:
in RGB, one pixel is one family whose members are R, G, and B; in YUV, several pixels form one family, and those pixels share a single U and V. With sharing, even though the smallest stored unit is a byte, the average number of bytes spent on one pixel need not be a whole number.
Now to the main point: why 4? Because the 4 expresses the maximum sharing unit: at most 4 pixels share chroma, so the 4 is the implicit full sampling set.
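The sharing arithmetic can be captured in one line. In the sketch below, `bytes_per_pixel` is a made-up illustrative helper, not a standard API:

```c
/* Average bytes per pixel when `share` pixels share one U byte and
 * one V byte; each pixel always keeps its own 1-byte Y.
 * (Toy illustration of the sharing idea, not a standard API.) */
static double bytes_per_pixel(int share)
{
    return 1.0 + 2.0 / share;
}
```

At the maximum sharing of 4, a pixel costs 1.5 bytes on average, which is exactly where yuv420 and yuv411 land.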
☞ YUV formats between spaces
Here we consider YUV formats from the between-spaces angle.
① yuv444
So what does yuv444 stand for? The most complete, most ideal, most extravagant state:
[ y u v ] [ y u v ] [ y u v ] [ y u v ]
[ y u v ] [ y u v ] [ y u v ] [ y u v ]
[ y u v ] [ y u v ] [ y u v ] [ y u v ]
[ y u v ] [ y u v ] [ y u v ] [ y u v ]
This is clearly the ideal state: Y, U, and V are all 4, meaning every slot is full.
② yuv422
What about yuv422? Here U drops from 4 to 2 and V drops from 4 to 2; starting from the full layout, each row gives up two U samples and two V samples.
How do we drop them? The simplest scheme: the first pixel keeps its U, the second keeps its V, the third keeps its U, the fourth keeps its V, and so on.
Shown below:
[ y u ] [ y v ] [ y u ] [ y v ]
[ y v ] [ y u ] [ y v ] [ y u ]
[ y u ] [ y v ] [ y u ] [ y v ]
[ y v ] [ y u ] [ y v ] [ y u ]
In this picture, one family clearly consists of
[ y u ] [ y v ]
that is, two pixels sharing one U and one V.
③ yuv411
And what is yuv411? Starting from yuv422, each row gives up one more U and one more V.
Shown below:
[ y u ] [ y ] [ y v ] [ y ]
[ y u ] [ y ] [ y v ] [ y ]
[ y u ] [ y ] [ y v ] [ y ]
[ y u ] [ y ] [ y v ] [ y ]
Here the family consists of
[ y u ] [ y ] [ y v ] [ y ]
that is, four pixels sharing one U and one V.
④ yuv420
yuv420 sounds as if, starting from yuv422, we also drop the remaining two V samples, but then there would be no V left at all!
In truth the name yuv420 is not very well chosen; a more accurate name would be something like yuv420 + yuv402:
the first row carries only two U samples, while the second row carries only two V samples.
Shown below:
[ y u ] [ y ] [ y u ] [ y ]
[ y v ] [ y ] [ y v ] [ y ]
[ y u ] [ y ] [ y u ] [ y ]
[ y v ] [ y ] [ y v ] [ y ]
Compare carefully how yuv420 differs from yuv411.
For yuv420, the family consists of
[ y u ] [ y ]
[ y v ] [ y ]
again four pixels sharing one U and one V, but this family is clearly more tightly knit (a 2x2 block) than the yuv411 family (a 1x4 row).
Food for thought: following this discussion, could YUV compression be pushed further? Could more Y samples share one UV pair? Could the sharing adapt to content?
Summary: a hypothetical yuv211 expresses the same ratio as yuv422, yet they would clearly be different formats, so the digits here are not just ratios; they are the actual sample counts.
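Under the conventional 4:a:b reading of these names (a chroma samples in the first row of each 4x2 pixel block, b in the second row), the frame sizes can be checked with a small helper; `frame_bytes` is illustrative only and assumes a tightly packed 8-bit frame:

```c
/* Total bytes for one tightly packed 8-bit WxH frame, reading the
 * name yuv4ab as: a = chroma samples in the first row of each 4x2
 * pixel block, b = chroma samples in the second row (for Cb and Cr
 * each). `frame_bytes` is an illustrative helper, not a library call. */
static long frame_bytes(long w, long h, int a, int b)
{
    long luma   = w * h;                    /* one Y byte per pixel         */
    long chroma = w * h * (a + b) * 2 / 8;  /* Cb + Cr samples, 1 byte each */
    return luma + chroma;
}
```

Note how yuv420 (a=2, b=0) and yuv411 (a=1, b=1) produce identical sizes even though their layouts differ, matching the summary above.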
☞ Three storage layouts: packed, planar, semi-planar
In this section we examine different storage layouts for the same bit count, focusing on yuv422 and yuv420.
packed: store one pixel's yuv, then the next pixel's yuv, and so on.
planar: store the whole Y plane first, then the U plane, and finally the V plane.
semi-planar: two planes instead of the usual three (Y, U, V); U and V are merged into one interleaved plane, giving one Y plane and one UV plane.
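For a tightly packed frame, these layouts translate directly into plane offsets. The helpers below are illustrative sketches for the yuv420 case (real buffers usually carry a row stride, which is ignored here):

```c
#include <stddef.h>

/* Byte offsets of each plane inside one tightly packed WxH yuv420
 * buffer. Illustrative helpers, not a real API; production buffers
 * usually pad each row to a stride. */
static void i420_offsets(int w, int h, size_t *y, size_t *u, size_t *v)
{
    *y = 0;                               /* Y plane: w*h bytes         */
    *u = (size_t)w * h;                   /* U plane: (w/2)*(h/2) bytes */
    *v = *u + (size_t)(w / 2) * (h / 2);  /* V plane follows U          */
}

static void nv12_offsets(int w, int h, size_t *y, size_t *uv)
{
    *y  = 0;                              /* Y plane: w*h bytes              */
    *uv = (size_t)w * h;                  /* interleaved UVUV: w*h/2 bytes   */
}
```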
① yuv422
yuyv(yuy2)
[ y u ] [ y v ] [ y u ] [ y v ]
[ y u ] [ y v ] [ y u ] [ y v ]
[ y u ] [ y v ] [ y u ] [ y v ]
[ y u ] [ y v ] [ y u ] [ y v ]
uyvy
[ u y ] [ v y ] [ u y ] [ v y ]
[ u y ] [ v y ] [ u y ] [ v y ]
[ u y ] [ v y ] [ u y ] [ v y ]
[ u y ] [ v y ] [ u y ] [ v y ]
yuv422p(yu16)
[ y y y y ]
[ y y y y ]
[ y y y y ]
[ y y y y ]
[ u u u u ]
[ u u u u ]
[ v v v v ]
[ v v v v ]
or (yv16)
[ y y y y ]
[ y y y y ]
[ y y y y ]
[ y y y y ]
[ v v v v ]
[ v v v v ]
[ u u u u ]
[ u u u u ]
yuv422sp(nv16)
[ y y y y ]
[ y y y y ]
[ y y y y ]
[ y y y y ]
[ u v u v ]
[ u v u v ]
[ u v u v ]
[ u v u v ]
or (nv61)
[ y y y y ]
[ y y y y ]
[ y y y y ]
[ y y y y ]
[ v u v u ]
[ v u v u ]
[ v u v u ]
[ v u v u ]
② yuv420
yuv420p(yu12 / I420)
[ y y y y ]
[ y y y y ]
[ y y y y ]
[ y y y y ]
[ u u ]
[ u u ]
[ v v ]
[ v v ]
or (yv12)
[ y y y y ]
[ y y y y ]
[ y y y y ]
[ y y y y ]
[ v v ]
[ v v ]
[ u u ]
[ u u ]
yuv420sp(nv12)
[ y y y y ]
[ y y y y ]
[ y y y y ]
[ y y y y ]
[ u v u v ]
[ u v u v ]
or (nv21)
[ y y y y ]
[ y y y y ]
[ y y y y ]
[ y y y y ]
[ v u v u ]
[ v u v u ]
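The difference between the planar and semi-planar yuv420 layouts above can be made concrete with a small conversion routine. This is a minimal sketch assuming a tightly packed frame with no row stride:

```c
#include <stdint.h>
#include <string.h>

/* Convert one tightly packed NV12 frame (Y plane + interleaved UVUV)
 * into I420 / yu12 (Y plane, U plane, V plane). Minimal sketch:
 * assumes no row padding; production code must honor the stride. */
static void nv12_to_i420(const uint8_t *nv12, uint8_t *i420, int w, int h)
{
    int ysize = w * h;
    int csize = (w / 2) * (h / 2);
    const uint8_t *uv = nv12 + ysize;
    uint8_t *u = i420 + ysize;
    uint8_t *v = u + csize;

    memcpy(i420, nv12, (size_t)ysize);   /* Y plane is identical    */
    for (int i = 0; i < csize; i++) {
        u[i] = uv[2 * i];                /* NV12 stores U first ... */
        v[i] = uv[2 * i + 1];            /* ... then V, interleaved */
    }
}
```

Going the other way (I420 to NV12) is the same loop with the copies reversed; nv21 merely swaps which byte of each interleaved pair is U and which is V.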
☞ Understanding the nv series
Common members are nv12, nv21, nv16, and nv61. What do the names mean?
The nv formats all belong to the semi-planar family.
nv12 denotes the normal order: in the UV plane, U comes first, then V.
nv21 denotes the reversed order: in the UV plane, V comes first, then U.
Likewise, nv16 and nv61 differ only in the order of U and V.
And what do the 12 and 16 stand for? The number of bits one pixel occupies!
Take nv12: each pixel occupies 12 bits. Y is fixed at 8 bits, leaving 2 bits of U and 2 bits of V per pixel; this is exactly yuv420, and more specifically yuv420sp.
nv16 means each pixel occupies 16 bits; Y is fixed at 8 bits, leaving 4 bits of U and 4 bits of V per pixel; this is exactly yuv422, and more specifically yuv422sp.
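That bits-per-pixel reading can be checked from the plane sizes directly; `bits_per_pixel` below is an illustrative helper, not part of any API:

```c
/* Bits per pixel = 8 * total plane bytes / pixel count.
 * (Illustrative helper for checking the nv12/nv16 naming claim.) */
static int bits_per_pixel(long y_bytes, long chroma_bytes, long pixels)
{
    return (int)(8 * (y_bytes + chroma_bytes) / pixels);
}
```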
/*
* Copyright (C) 2014 The Android Open Source Project
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
/* This is a JNI example where we use native methods to play video
* using the native AMedia* APIs.
* See the corresponding Java source file located at:
*
* src/com/example/nativecodec/NativeMedia.java
*
* In this example we use assert() for "impossible" error conditions,
* and explicit handling and recovery for more likely error conditions.
*/
#include <assert.h>
#include <jni.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <errno.h>
#include <limits.h>
#include <pthread.h>
//#include <gui/GraphicBufferMapper.h>
//#include <ui/Region.h>
#include "looper.h"
#include "media/NdkMediaCodec.h"
#include "media/NdkMediaExtractor.h"
// for __android_log_print(ANDROID_LOG_INFO, "YourApp", "formatted message");
#include <android/log.h>
#define TAG "NativeCodec"
#define LOGV(...) __android_log_print(ANDROID_LOG_VERBOSE, TAG, __VA_ARGS__)
// for native window JNI
#include <android/native_window_jni.h>
// for DSP by shijie
#include <stdlib.h>
#include <stdbool.h>
#include <malloc.h>
#include <sys/mman.h>
#include <sys/time.h>
#include "rpcmem.h"
#include "gaussian7x7.h"
#include "dspCV.h"
#include "AEEStdErr.h"
#include "verify.h"
#ifndef ION_HEAP_ID_SYSTEM
#define ION_HEAP_ID_SYSTEM 25
#endif
#define WIDTH 1024
#define HEIGHT 540
#define NUMCODE 4096
#define SCALER 2
#define BLKSIZE 3
#define OVERLAP 1
#define DICIND_WIDTH (WIDTH/(BLKSIZE - OVERLAP))
#define DICIND_HEIGHT (HEIGHT/(BLKSIZE - OVERLAP))
#define WIDTH_DEBUG WIDTH
#define HEIGHT_DEBUG HEIGHT
#define NUMCODE_DEBUG NUMCODE
#define DICIND_WIDTH_DEBUG DICIND_WIDTH
#define DICIND_HEIGHT_DEBUG DICIND_HEIGHT
#define LOOPS 10
int PLE_main_Init(void)
{
uint8_t *src=NULL, *dst=NULL, *ref=NULL;
int16_t *dic=NULL;
uint16_t *dicInd = NULL;
int srcWidth = WIDTH;
int srcHeight = HEIGHT;
int dicIndWidth = srcWidth/(BLKSIZE - OVERLAP);
int dicIndHeight= srcHeight/(BLKSIZE - OVERLAP);
int overlap = OVERLAP;
int blkSize = BLKSIZE;
int scaler = SCALER;
int numcode = NUMCODE;
int dstWidth = srcWidth*scaler; // keep aligned to 128 bytes!
int dstHeight = srcHeight*scaler;
int rowsInd = srcHeight/(blkSize-overlap);
int colsInd = srcWidth/(blkSize-overlap);
int retVal;
int i;
int nErr = 0;
int srcSize = srcWidth * (srcHeight+1);
int dstSize = (dstWidth) * (dstHeight+2);
int dicSize = numcode * 9 * (36 + 28); // total = 4096 codewords, codeword = 9 rows, row = 36 elements, element = 2 byte
int dicIndSize = rowsInd*colsInd;
rpcmem_init();
printf("---------test by shijie------------\n");
printf("srcWidth = %d\n", srcWidth);
printf("srcHeight = %d\n", srcHeight);
printf("overlap = %d\n", overlap);
printf("blkSize = %d\n", blkSize);
printf("scaler = %d\n", scaler);
printf("numcode = %d\n", numcode);
printf("dstWidth = %d\n", dstWidth);
printf("dstHeight = %d\n", dstHeight);
printf("rowsInd = %d\n", rowsInd);
printf("colsInd = %d\n", colsInd);
// call dspCV_initQ6_with_attributes() to bump up Q6 clock frequency
// Since this app is not real-time, and can fully load the DSP clock & bus resources
// throughout its lifetime, vote for the maximum available MIPS & BW.
dspCV_Attribute attrib[] =
{
{DSP_TOTAL_MCPS, 1000}, // Slightly more MCPS than are available on current targets
{DSP_MCPS_PER_THREAD, 500}, // drive the clock to MAX on known targets
{PEAK_BUS_BANDWIDTH_MBPS, 12000}, // 12 GB/sec is slightly higher than the max realistic max BW on existing targets.
{BUS_USAGE_PERCENT, 100}, // This app is non-real time, and constantly reading/writing memory
};
retVal = dspCV_initQ6_with_attributes(attrib, sizeof(attrib)/sizeof(attrib[0]));
printf("return value from dspCV_initQ6() : %d \n", retVal);
VERIFY(0 == retVal);
// allocate ion buffers
VERIFY(0 != (src = (uint8_t*)rpcmem_alloc(ION_HEAP_ID_SYSTEM, RPCMEM_DEFAULT_FLAGS, (srcSize+srcWidth)*sizeof(uint8_t))));
printf("src - allocated!! address = %p, bytes = %zu\n", (void*)src, (srcSize+srcWidth)*sizeof(uint8_t));
VERIFY(0 != (dst = (uint8_t*)rpcmem_alloc(ION_HEAP_ID_SYSTEM, RPCMEM_DEFAULT_FLAGS, dstSize)));
printf("dst - allocated!! address = %p, bytes = %zu\n", (void*)dst, dstSize*sizeof(uint8_t));
VERIFY(0 != (dic = (int16_t*)rpcmem_alloc(ION_HEAP_ID_SYSTEM, RPCMEM_DEFAULT_FLAGS, dicSize*sizeof(int16_t))));
printf("dic - allocated!! address = %p, bytes = %zu\n", (void*)dic, dicSize*sizeof(int16_t));
// codeword indices can be allocated either by malloc or by rpcmem_alloc
VERIFY(0 != (dicInd = (uint16_t*)rpcmem_alloc(ION_HEAP_ID_SYSTEM, RPCMEM_DEFAULT_FLAGS, dicIndSize*sizeof(uint16_t))));
printf("dicInd - allocated!! address = %p, bytes = %zu\n", (void*)dicInd, dicIndSize*sizeof(uint16_t));
VERIFY(0 != (ref = (uint8_t*)malloc(dstSize)));
printf("ref - allocated!! address = %p, bytes = %zu (via malloc)\n", (void*)ref, dstSize*sizeof(uint8_t));
#if 1
FILE *fp;
int temp_int32;
// populate src buffer (with a simple pattern)
if ((fp = fopen("./data/srcImg.txt","r")) == NULL)
{
printf("failed to open ./srcImg.txt\n");
VERIFY(-1);
}
else
{
printf("reading srcImg.txt !!\n");
for (i = 0; i < srcSize; i++)
{
fscanf(fp, "%d,", &temp_int32);
src[i] = (uint8_t)temp_int32;
}
fclose(fp);
printf("Load srcImg.txt completed!\n");
if ((fp = fopen("./data/srcImg_wr.txt","w")) == NULL)
{
printf("failed to open ./srcImg_wr.txt\n");
VERIFY(-1);
}
else
{
for (i = 0; i < srcSize; i++)
fprintf(fp,"%d,", (int)src[i]);
fclose(fp);
printf("Write back srcImg.txt completed!\n");
}
}
if ( (fp = fopen("./data/dict_new.txt", "r")) == NULL)
{
printf("failed to open ./dict.txt\n");
VERIFY(-1);
}
else
{
printf("reading dict.txt !!\n");
for (i = 0; i < dicSize; i++)
{
fscanf(fp, "%d,", &temp_int32);
dic[i] = (int16_t)temp_int32;
}
printf("Load dict.txt completed!\n");
fclose(fp);
if ((fp = fopen("./data/dict_new_wr.txt","w")) == NULL)
{
printf("failed to open ./dict_wr.txt\n");
VERIFY(-1);
}
else
{
for (i = 0; i < dicSize; i++)
fprintf(fp,"%d,", (int)dic[i]);
fclose(fp);
printf("Write back dict.txt completed!\n");
}
}
#endif
// call API
printf( "calling gaussian7x7_Gaussian7x7u8 on a %dx%d image...\n", (int)srcWidth, (int)srcHeight);
printf("Starting find index!\n");
for (i = 0; i < LOOPS; i++)
{
// For HVX case, note that src, srcStride, dst, dstStride all must be multiples of 128 bytes. The HVX code for this example function does not handle unaligned inputs.
retVal = gaussian7x7_Find_indexu8(src, srcSize, srcWidth, srcHeight, dicInd, dicIndSize, dicIndWidth, dicIndHeight);
}
printf("\n retVal of gaussian7x7_Find_indexu8 = %x\n",(int)retVal);
uint8_t test[6*512*36]; // scratch buffer for the DSP call (~108 KB on the stack)
printf("Starting super-resolution!\n");
for (i = 0; i < LOOPS; i++)
{
// For HVX case, note that src, srcStride, dst, dstStride all must be multiples of 128 bytes. The HVX code for this example function does not handle unaligned inputs.
retVal = gaussian7x7_Gaussian7x7u8(src, srcSize, srcWidth, srcHeight, dic, dicSize, dicInd, dicIndSize, dst, dstSize, dstWidth, dstHeight, test, 6*512*36);
}
printf("\n return value of gaussian7x7_Gaussian7x7u8 is %d \n", retVal);
fp = fopen("./data/dst.txt","w");
for (i = 0; i < dstWidth*dstHeight; i++)
{
fprintf(fp,"%d\n",(int)(*(dst+i)));
//printf("%d\n",(int)(*(dst+i)));
}
fclose(fp);
bail:
if(src) {
rpcmem_free(src);
printf("rpcmem_free src!\n");
}
if(dst) {
rpcmem_free(dst);
printf("rpcmem_free dst!\n");
}
if(dic) {
rpcmem_free(dic);
printf("rpcmem_free dic!\n");
}
if(dicInd) {
rpcmem_free(dicInd);
printf("rpcmem_free dicInd!\n");
}
// free ion buffers
rpcmem_deinit();
printf("calling dspCV_deinitQ6()... \n");
retVal = dspCV_deinitQ6();
printf("return value from dspCV_deinitQ6(): %d \n", retVal);
if (0 == retVal)
{
printf("- success\n");
return 0;
}
else
{
printf("- failure\n");
return -1; // report failure to the caller instead of success
}
}
typedef struct {
int fd;
ANativeWindow* window;
AMediaExtractor* ex;
AMediaCodec *codec;
int64_t renderstart;
bool sawInputEOS;
bool sawOutputEOS;
bool isPlaying;
bool renderonce;
} workerdata;
workerdata data = {-1, NULL, NULL, NULL, 0, false, false, false, false};
enum {
kMsgCodecBuffer,
kMsgPause,
kMsgResume,
kMsgPauseAck,
kMsgDecodeDone,
kMsgSeek,
kMsgCodecTrack,
};
class mylooper: public looper {
virtual void handle(int what, void* obj);
};
static mylooper *mlooper = NULL;
static int mVideoWidth = 0;
static int mVideoHeight = 0;
static int mSampleRate = 0;
static int mChannel = 0;
int64_t systemnanotime() {
timespec now;
clock_gettime(CLOCK_MONOTONIC, &now);
return now.tv_sec * 1000000000LL + now.tv_nsec;
}
void* doTrackCodecWork(workerdata *d){
LOGV("zhou Edward lannister begin to decoder audio track");
ssize_t bufidx = -1;
if (!d->sawInputEOS) {
bufidx = AMediaCodec_dequeueInputBuffer(d->codec, 2000);
LOGV("input buffer %zd", bufidx);
if (bufidx >= 0) {
size_t bufsize;
uint8_t *buf = AMediaCodec_getInputBuffer(d->codec, bufidx, &bufsize);
ssize_t sampleSize = AMediaExtractor_readSampleData(d->ex, buf, bufsize);
if (sampleSize < 0) {
sampleSize = 0;
d->sawInputEOS = true;
LOGV("EOS");
}
int64_t presentationTimeUs = AMediaExtractor_getSampleTime(d->ex);
AMediaCodec_queueInputBuffer(d->codec, bufidx, 0, sampleSize, presentationTimeUs,
d->sawInputEOS ? AMEDIACODEC_BUFFER_FLAG_END_OF_STREAM : 0);
AMediaExtractor_advance(d->ex);
}
}
if (!d->sawOutputEOS) {
AMediaCodecBufferInfo info;
ssize_t status = AMediaCodec_dequeueOutputBuffer(d->codec, &info, 0);
/*
if (status >= 0) {
if (info.flags & AMEDIACODEC_BUFFER_FLAG_END_OF_STREAM) {
LOGV("output EOS");
d->sawOutputEOS = true;
}
int64_t presentationNano = info.presentationTimeUs * 1000;
if (d->renderstart < 0) {
d->renderstart = systemnanotime() - presentationNano;
}
int64_t delay = (d->renderstart + presentationNano) - systemnanotime();
if (delay > 0) {
usleep(delay / 1000);
}
int buffsize = AudioTrack.getMinBufferSize(mSampleRate,
AudioFormat.CHANNEL_OUT_STEREO,
AudioFormat.ENCODING_PCM_16BIT);
AudioTrack audioTrack = new AudioTrack(AudioManager.STREAM_MUSIC,
mSampleRate, AudioFormat.CHANNEL_OUT_STEREO,
AudioFormat.ENCODING_PCM_16BIT, buffsize,
AudioTrack.MODE_STREAM);
audioTrack.play();
AMediaCodec_releaseOutputBuffer(d->codec, status, info.size != 0);
if (d->renderonce) {
d->renderonce = false;
//return;
}
} else if (status == AMEDIACODEC_INFO_OUTPUT_BUFFERS_CHANGED) {
LOGV("output buffers changed");
} else if (status == AMEDIACODEC_INFO_OUTPUT_FORMAT_CHANGED) {
AMediaFormat *format = NULL;
format = AMediaCodec_getOutputFormat(d->codec);
LOGV("format changed to: %s", AMediaFormat_toString(format));
AMediaFormat_delete(format);
} else if (status == AMEDIACODEC_INFO_TRY_AGAIN_LATER) {
LOGV("no output buffer right now");
} else {
LOGV("unexpected info code: %zd", status);
}
*/
}
if (!d->sawInputEOS || !d->sawOutputEOS) {
mlooper->post(kMsgCodecTrack, d);
}
return NULL; // the void* signature (pthread entry style) requires a return value
}
void doCodecWork(workerdata *d) {
ssize_t bufidx = -1;
if (!d->sawInputEOS) {
bufidx = AMediaCodec_dequeueInputBuffer(d->codec, 2000);
LOGV("input buffer %zd", bufidx);
if (bufidx >= 0) {
size_t bufsize;
uint8_t *buf = AMediaCodec_getInputBuffer(d->codec, bufidx, &bufsize);
//dump buf
LOGV("zhou Edward lannister begin to dump inputbuffer");
FILE *fp=NULL;
if((fp = fopen("/data/data/com.example.nativecodec/hirestmp1.txt", "wb"))==NULL){
LOGV("zhou Edward lannister can not open the txt data/data/temp1 file,open fail errno = %d reason = %s \n",errno, strerror(errno));
}else {
LOGV("zhou Edward lannister begin to write buffer");
//fwrite(buf, sizeof(uint8_t),1,fp);
LOGV("zhou Edward lannister begin to close fp");
fclose(fp); // only close when fopen succeeded; fclose(NULL) is undefined behavior
}
//dump buf
ssize_t sampleSize = AMediaExtractor_readSampleData(d->ex, buf, bufsize);
if (sampleSize < 0) {
sampleSize = 0;
d->sawInputEOS = true;
LOGV("EOS");
}
int64_t presentationTimeUs = AMediaExtractor_getSampleTime(d->ex);
AMediaCodec_queueInputBuffer(d->codec, bufidx, 0, sampleSize, presentationTimeUs,
d->sawInputEOS ? AMEDIACODEC_BUFFER_FLAG_END_OF_STREAM : 0);
AMediaExtractor_advance(d->ex);
}
}
if (!d->sawOutputEOS) {
AMediaCodecBufferInfo info;
ssize_t status = AMediaCodec_dequeueOutputBuffer(d->codec, &info, 0);
if (status >= 0) {
if (info.flags & AMEDIACODEC_BUFFER_FLAG_END_OF_STREAM) {
LOGV("output EOS");
d->sawOutputEOS = true;
}
/*
int64_t presentationNano = info.presentationTimeUs * 1000;
if (d->renderstart < 0) {
d->renderstart = systemnanotime() - presentationNano;
}
*/
LOGV("zhou AMediaCodec_getOutputBuffer!!!");
LOGV("new video frame info.size= %d",info.size);
//zhouyanxian
if (info.size > 0) {
size_t bufsize;
uint8_t *buf = AMediaCodec_getOutputBuffer(d->codec, status, &bufsize);
//buf needs LPE processing here; the result is then handed back to nwBuffer
PLE_main_Init();
//dump buf
LOGV("zhou Edward lannister begin to dumpoutput 480.368 buffer ");
FILE *fp1=NULL;
if((fp1 = fopen("/data/data/com.example.nativecodec/hirestmp2.txt", "wb"))==NULL){
LOGV("zhou Edward lannister can not open the txt data/data/temp2 file,open fail errno = %d reason = %s \n",errno, strerror(errno));
}else {
LOGV("zhou Edward lannister begin to write out buffer");
fwrite(buf, 1, 480*368*3/2, fp1); // yuv420: 1.5 bytes per pixel
LOGV("zhou Edward lannister begin to close fp");
fclose(fp1); // only close when fopen succeeded
}
//dump buf
//ANativeWindowBuffer *buf;
//int fenceFd = -1;
//int err = data.window->dequeueBuffer(mNativeWindow.get(), &buf, &fenceFd);
//mVideoWidth = 512;
mVideoHeight=368;
ANativeWindow_setBuffersGeometry(d->window,mVideoWidth,mVideoHeight,0x109);
ANativeWindow_Buffer nwBuffer;
ANativeWindow_acquire(d->window);
if (0 != ANativeWindow_lock(d->window, &nwBuffer, NULL)) {
LOGV("ANativeWindow_lock() error");
return;
}
LOGV("renderSurface, %d, %d, %d", nwBuffer.width ,nwBuffer.height, nwBuffer.stride);
if (nwBuffer.width >= nwBuffer.stride) {
//srand(time(NULL));
//memset(piexels, rand() % 100, nwBuffer.width * nwBuffer.height * 2);
//memcpy(nwBuffer.bits, piexels, nwBuffer.width * nwBuffer.height * 2);
//the buffer is copied into nwBuffer and posted to the screen
memcpy(nwBuffer.bits, buf, nwBuffer.width * nwBuffer.height * 3/2);
} else {
LOGV("new buffer width is %d,height is %d ,stride is %d, info.size is %d",
nwBuffer.width, nwBuffer.height, nwBuffer.stride,info.size);
int i;
for (i = 0; i < nwBuffer.height; ++i) {
memcpy((void*) ((intptr_t)nwBuffer.bits + nwBuffer.stride * i),
(void*) ((intptr_t)buf + nwBuffer.width * i),
nwBuffer.width);
}
for(i = 0; i < nwBuffer.height/2; ++i){
memcpy((void*) ((intptr_t)nwBuffer.bits + nwBuffer.stride * (i+nwBuffer.height)),
(void*) ((intptr_t)buf + nwBuffer.width * (i+nwBuffer.height)),
nwBuffer.width);
}
}
AMediaFormat *format = NULL;
format = AMediaCodec_getOutputFormat(d->codec);
LOGV("zhou format changed to: %s", AMediaFormat_toString(format));
AMediaFormat_delete(format);
{
FILE *fp1=NULL;
if((fp1 = fopen("/data/data/com.example.nativecodec/hirestmp3.yuv", "wb"))==NULL){
LOGV("zhou Edward lannister can not open the txt data/data/3 file,open fail errno = %d reason = %s \n",errno, strerror(errno));
}else {
LOGV("zhou Edward lannister begin to write out buffer");
fwrite(nwBuffer.bits, 1, nwBuffer.height*nwBuffer.stride*3/2, fp1); // yuv420: 1.5 bytes per pixel
LOGV("zhou Edward lannister begin to close fp");
fclose(fp1); // only close when fopen succeeded
}
}
//usleep(100000);
ANativeWindow_unlockAndPost(d->window);
AMediaCodec_releaseOutputBuffer(d->codec, status,false);
ANativeWindow_release(d->window);
}
int64_t presentationNano = info.presentationTimeUs * 1000;
if (d->renderstart < 0) {
d->renderstart = systemnanotime() - presentationNano;
}
int64_t delay = (d->renderstart + presentationNano) - systemnanotime();
if (delay > 0) {
usleep(delay / 1000);
}
AMediaCodec_releaseOutputBuffer(d->codec, status, info.size != 0);
if (d->renderonce) {
d->renderonce = false;
return;
}
} else if (status == AMEDIACODEC_INFO_OUTPUT_BUFFERS_CHANGED) {
LOGV("output buffers changed");
} else if (status == AMEDIACODEC_INFO_OUTPUT_FORMAT_CHANGED) {
AMediaFormat *format = NULL;
format = AMediaCodec_getOutputFormat(d->codec);
LOGV("format changed to: %s", AMediaFormat_toString(format));
AMediaFormat_delete(format);
} else if (status == AMEDIACODEC_INFO_TRY_AGAIN_LATER) {
LOGV("no output buffer right now");
} else {
LOGV("unexpected info code: %zd", status);
}
}
if (!d->sawInputEOS || !d->sawOutputEOS) {
mlooper->post(kMsgCodecBuffer, d);
}
}
void mylooper::handle(int what, void* obj) {
switch (what) {
case kMsgCodecBuffer:
doCodecWork((workerdata*)obj);
break;
case kMsgCodecTrack:
doTrackCodecWork((workerdata*)obj);
break;
case kMsgDecodeDone:
{
workerdata *d = (workerdata*)obj;
AMediaCodec_stop(d->codec);
AMediaCodec_delete(d->codec);
AMediaExtractor_delete(d->ex);
d->sawInputEOS = true;
d->sawOutputEOS = true;
}
break;
case kMsgSeek:
{
workerdata *d = (workerdata*)obj;
AMediaExtractor_seekTo(d->ex, 0, AMEDIAEXTRACTOR_SEEK_NEXT_SYNC);
AMediaCodec_flush(d->codec);
d->renderstart = -1;
d->sawInputEOS = false;
d->sawOutputEOS = false;
if (!d->isPlaying) {
d->renderonce = true;
post(kMsgCodecBuffer, d);
}
LOGV("seeked");
}
break;
case kMsgPause:
{
workerdata *d = (workerdata*)obj;
if (d->isPlaying) {
// flush all outstanding codecbuffer messages with a no-op message
d->isPlaying = false;
post(kMsgPauseAck, NULL, true);
}
}
break;
case kMsgResume:
{
workerdata *d = (workerdata*)obj;
if (!d->isPlaying) {
d->renderstart = -1;
d->isPlaying = true;
post(kMsgCodecBuffer, d);
}
}
break;
}
}
extern "C" {
jboolean Java_com_example_nativecodec_NativeCodec_createStreamingMediaPlayer(JNIEnv* env,
jclass clazz, jstring filename)
{
LOGV("@@@ create");
// convert Java string to UTF-8
const char *utf8 = env->GetStringUTFChars(filename, NULL);
LOGV("opening %s", utf8);
int fd = open(utf8, O_RDONLY);
env->ReleaseStringUTFChars(filename, utf8);
if (fd < 0) {
LOGV("failed: %d (%s)", fd, strerror(errno));
return JNI_FALSE;
}
data.fd = fd;
workerdata *d = &data;
AMediaExtractor *ex = AMediaExtractor_new();
media_status_t err = AMediaExtractor_setDataSourceFd(ex, d->fd, 0 , LONG_MAX);
close(d->fd);
if (err != AMEDIA_OK) {
LOGV("setDataSource error: %d", err);
return JNI_FALSE;
}
int numtracks = AMediaExtractor_getTrackCount(ex);
AMediaCodec *codec = NULL;
AMediaCodec *audioCodec = NULL;
pthread_t videotid;
pthread_t audiotid;
LOGV("input has %d tracks", numtracks);
for (int i = 0; i < numtracks; i++) {
AMediaFormat *format = AMediaExtractor_getTrackFormat(ex, i);
const char *s = AMediaFormat_toString(format);
LOGV("track %d format: %s", i, s);
const char *mime;
if (!AMediaFormat_getString(format, AMEDIAFORMAT_KEY_MIME, &mime)) {
LOGV("no mime type");
return JNI_FALSE;
} else if (!strncmp(mime, "video/", 6)) {
LOGV("zhou Edward lannister input has %s mime type", mime);
// Omitting most error handling for clarity.
// Production code should check for errors.
AMediaExtractor_selectTrack(ex, i);
codec = AMediaCodec_createDecoderByType(mime);
if (!AMediaFormat_getInt32(format, "width", &mVideoWidth)) {
LOGV("zhou get videotrack width fail");
}
LOGV("zhou get videotrack width = %d ",mVideoWidth);
if (!AMediaFormat_getInt32(format, "height", &mVideoHeight)) {
LOGV("zhou get videotrack height fail");
}
LOGV("zhou get videotrack height = %d ",mVideoHeight);
//zhouyanxian
//AMediaFormat_setInt32(format,"color-format", 0x7FA30C04);
//zhouyanxian
//AMediaCodec_configure(codec, format, d->window, NULL, 0);//normal
AMediaCodec_configure(codec, format, NULL, NULL, 0);//zhou
d->ex = ex;
d->codec = codec;
d->renderstart = -1;
d->sawInputEOS = false;
d->sawOutputEOS = false;
d->isPlaying = false;
d->renderonce = true;
AMediaCodec_start(codec);
/*
int status = pthread_create(&videotid, NULL, (void *(*)(void *))&doCodecWork, &d);
if(status!=0){
LOGV("zhou Edward lannister pthread_create error");
}
pthread_detach(videotid);
*/
}
/*else if (!strncmp(mime, "audio/", 6)){
LOGV("zhou Edward lannister input begin to do the audio codec");
AMediaExtractor_selectTrack(ex, i);
//get sample rate
if (!AMediaFormat_getInt32(format, "sample-rate", &mSampleRate)) {
LOGV("zhou Edward lannister get sample-rate fail");
}
LOGV("zhou Edward lannister get sample-rate %d", mSampleRate);
//get channels
if (!AMediaFormat_getInt32(format, "channel-count", &mChannel)) {
LOGV("zhou Edward lannister get Channel fail");
}
LOGV("zhou Edward lannister get sample-rate %d", mChannel);
//create decoder
audioCodec = AMediaCodec_createDecoderByType(mime);
AMediaCodec_configure(audioCodec, format, NULL, NULL, 0);
//copy data to workdata
d->ex = ex;
d->codec = audioCodec;
d->renderstart = -1;
d->sawInputEOS = false;
d->sawOutputEOS = false;
d->isPlaying = false;
d->renderonce = true;
//check codec
if(audioCodec == NULL){
LOGV("zhou Edward lannister can't find audioCodec info!");
}
AMediaCodec_start(audioCodec);
//loop audio track
//mlooper->post(kMsgCodecTrack, d);
//AudioTrackDecoder maudiotrackdecoder;
//std::thread decoder(&AudioTrackDecoder::doAudioCodecWork, maudiotrackdecoder, d);
//maudiotrackdecoder.detach();
//add by hu
int status = pthread_create(&audiotid, NULL, (void *(*)(void *))&doTrackCodecWork, &d);
if(status!=0){
LOGV("zhou Edward lannister audio pthread_create error");
}
pthread_detach(audiotid);
}
*/
AMediaFormat_delete(format);
}
mlooper = new mylooper();
mlooper->post(kMsgCodecBuffer, d);
return JNI_TRUE;
}
// set the playing state for the streaming media player
void Java_com_example_nativecodec_NativeCodec_setPlayingStreamingMediaPlayer(JNIEnv* env,
jclass clazz, jboolean isPlaying)
{
LOGV("@@@ playpause: %d", isPlaying);
if (mlooper) {
if (isPlaying) {
mlooper->post(kMsgResume, &data);
} else {
mlooper->post(kMsgPause, &data);
}
}
}
// shut down the native media system
void Java_com_example_nativecodec_NativeCodec_shutdown(JNIEnv* env, jclass clazz)
{
LOGV("@@@ shutdown");
if (mlooper) {
mlooper->post(kMsgDecodeDone, &data, true /* flush */);
mlooper->quit();
delete mlooper;
mlooper = NULL;
}
if (data.window) {
ANativeWindow_release(data.window);
data.window = NULL;
}
}
// set the surface
void Java_com_example_nativecodec_NativeCodec_setSurface(JNIEnv *env, jclass clazz, jobject surface)
{
// obtain a native window from a Java surface
if (data.window) {
ANativeWindow_release(data.window);
data.window = NULL;
}
data.window = ANativeWindow_fromSurface(env, surface);
LOGV("@@@ setsurface %p", data.window);
}
// rewind the streaming media player
void Java_com_example_nativecodec_NativeCodec_rewindStreamingMediaPlayer(JNIEnv *env, jclass clazz)
{
LOGV("@@@ rewind");
mlooper->post(kMsgSeek, &data);
}
}