Mobile Audio/Video Development on Android, Study Notes - NDK Audio/Video Development

Audio/video push streaming from an Android client

2019-06-19  CoderYuZ

This project is created on top of the project from the earlier Android push-streaming environment setup post (the audio-related class AudioChannel is set aside for now):

[Image: project directory structure]

Video streaming:

Video push streaming breaks down into these parts:

  1. Capture the raw camera data
    Note that the raw data from the back camera must be rotated after capture; the reason is shown below:


    [Image: how the camera sensor sits inside the phone]
  2. Convert the data (NV21 to I420)
    The Android camera delivers NV21 data; to support more playback terminals, it is converted to I420
  3. H264-encode the data
    Why encode?
    A video is a sequence of frames, much like an animated gif: open a gif and you'll see it is made of many images. To avoid visible stutter, a video needs at least 16 frames per second (usually 30). For a 1280x720 video, one second of unencoded video would be roughly: 1280 × 720 pixels × 32 bit per pixel × 30 frames = 884,736,000 bit ≈ 843.75 Mbit (see the sketch after this list), so unencoded video simply can't be stored or transmitted.
    H264 encoding
    H264 data consists of I-frames, B-frames and P-frames. The I-frame is the key frame; a frame that is extremely similar to it (95%+ similarity) is encoded as a B-frame, and one around 70% similar as a P-frame. So not every frame has to transmit complete data: only the difference against the key frame (I-frame) is needed, and the frame to display can be computed from the key frame plus that difference. We don't implement the encoding ourselves; the x264 library does it for us. Besides I/P/B frames there is also the GOP (group of pictures), which you can think of as a single scene whose objects are all similar. Illustrated:


    [Image: H264 data stream]

    The NALU:
    For transmission (file transfer or network streaming) we can't send a whole frame in one piece: a frame is too big and has to be subdivided to transmit conveniently. If a complete frame were sent in one go, the receiver would be left waiting forever. So we need a smaller transmission unit that gives better compression, error resilience, and real-time viewing. That smaller unit is called a NALU, and the H264 raw bitstream (also called the elementary stream) is simply one NALU after another. The composition of a NALU (you can skip the internals; just know that data is transmitted in units of NALUs):

NALU = NALU header + RBSP (slice)
RBSP = slice header + slice data
slice data = n * macroblocks
//a frame is divided into many small regions; each small region is called a macroblock
//H264 uses 16x16 regions as macroblocks by default; they can also be split down to 8x8
So: NALU = NALU header + (slice header + n macroblocks)
  4. Assemble RTMPPackets and send them
    This wraps the NALU data according to the RTMP protocol and then transmits it
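To make the arithmetic in step 3 concrete, here is a minimal sketch (the 1280x720 / 32-bit / 30 fps values are illustrative assumptions, not project settings):

#include <cstdint>
#include <cstdio>

int main() {
    // 1280x720 pixels, 32 bits per pixel (e.g. RGBA), 30 frames per second
    int64_t bitsPerSecond = 1280LL * 720 * 32 * 30;
    printf("%lld bit/s = %.2f Mbit/s\n",
           (long long) bitsPerSecond, bitsPerSecond / (1024.0 * 1024.0));
    // prints: 884736000 bit/s = 843.75 Mbit/s
    return 0;
}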

One more thing before we start coding: x264 exposes a huge number of encoder parameters, and there are plenty of posts about them; it's worth reading up on them first.

Let's code

I'll skip the layout: one SurfaceView and three buttons (start live, stop live, switch camera). Let's first get push streaming working with the back camera.
For what each class does, see the directory structure above.

MainActivity: initializes LivePusher (the init parameters include image width and height, bitrate, frame rate, and camera id, all of which VideoChannel needs for streaming), sets the camera preview surface through LivePusher, and switches streaming on and off.

public class MainActivity extends AppCompatActivity {

    private LivePusher livePusher;
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);

        SurfaceView surfaceView = findViewById(R.id.surfaceView);
        livePusher = new LivePusher(this, 1920, 1080, 800_000, 10, Camera.CameraInfo.CAMERA_FACING_BACK);
        //  set the camera preview surface
        livePusher.setPreviewDisplay(surfaceView.getHolder());

    }

    public void switchCamera(View view) {
        livePusher.switchCamera();
    }

    public void startLive(View view) {
        livePusher.startLive("rtmp://47.96.117.157/myapp");
    }

    public void stopLive(View view) {
        livePusher.stopLive();
    }
}

LivePusher: the scheduler for audio and video streaming. Only video is implemented so far, so its current job is to initialize VideoChannel and tell it to set up the camera preview and switch video streaming on and off. Since these are all video concerns, the concrete implementation lives in VideoChannel. native_init() likewise initializes the native-layer VideoChannel.

public class LivePusher {
    private AudioChannel audioChannel;
    private VideoChannel videoChannel;
    static {
        System.loadLibrary("native-lib");
    }

    public LivePusher(Activity activity, int width, int height, int bitrate,
                      int fps, int cameraId) {
        //initialize the native-layer VideoChannel
        native_init();
        videoChannel = new VideoChannel(this, activity, width, height, bitrate, fps, cameraId);
        audioChannel = new AudioChannel(this);
    }

    /**
     * Set the camera preview
     * @param surfaceHolder
     */
    public void setPreviewDisplay(SurfaceHolder surfaceHolder) {
        videoChannel.setPreviewDisplay(surfaceHolder);
    }

    /**
     * Switch camera
     */
    public void switchCamera() {
        videoChannel.switchCamera();
    }

    /**
     * Start live streaming
     * @param path
     */
    public void startLive(String path) {
        native_start(path);
        videoChannel.startLive();
    }

    /**
     * Stop live streaming
     */
    public void stopLive() {
        videoChannel.stopLive();
    }

    public native void native_init();

    public native void native_setVideoEncInfo(int w, int h, int mFps, int mBitrate);

    public native void native_start(String path);

    public native void native_pushVideo(byte[] data);

}

Don't worry yet about how LivePusher's native methods are implemented; assume the native side already does what we want. First look at the Java-layer VideoChannel. Its main job is to open the camera preview through CameraHelper and listen for size changes and camera data callbacks. Notice that the Java-layer VideoChannel never calls the native-layer VideoChannel: it hands everything to the Java-layer LivePusher, because to keep the data flow clear, LivePusher is the only class that talks to the native layer:

public class VideoChannel implements Camera.PreviewCallback, CameraHelper.OnChangedSizeListener {
    private static final String TAG = "VideoChannel";
    private CameraHelper cameraHelper;
    private int mBitrate;
    private int mFps;
    private boolean isLiving;
    LivePusher livePusher;
    public VideoChannel(LivePusher livePusher, Activity activity, int width, int height, int bitrate, int fps, int cameraId) {
        mBitrate = bitrate;
        mFps = fps;
        this.livePusher = livePusher;
        cameraHelper = new CameraHelper(activity, cameraId, width, height);
        cameraHelper.setPreviewCallback(this);
        cameraHelper.setOnChangedSizeListener(this);
    }

    @Override
    public void onPreviewFrame(byte[] data, Camera camera) {
        Log.i(TAG, "onPreviewFrame: ");
        if (isLiving) {
            Log.i(TAG, "push");
            livePusher.native_pushVideo(data);
        }
    }

    @Override
    public void onChanged(int w, int h) {
        livePusher.native_setVideoEncInfo(w, h, mFps, mBitrate);
    }
    public void switchCamera() {
        cameraHelper.switchCamera();
    }

    public void setPreviewDisplay(SurfaceHolder surfaceHolder) {
        cameraHelper.setPreviewDisplay(surfaceHolder);
    }

    public void startLive() {
        isLiving = true;
    }

    public void stopLive() {
        isLiving = false;
    }
}

CameraHelper is a camera-preview helper class. Besides the usual opening of the camera and the preview, it does two more things:

  1. In the camera's onPreviewFrame and setPreviewOrientation callbacks, it passes the data out through listeners, i.e. to VideoChannel.
  2. It rotates the camera data. Note that this rotates the data itself, not just the preview image.

The code:

public class CameraHelper implements SurfaceHolder.Callback, Camera.PreviewCallback {

    private static final String TAG = "CameraHelper";
    private Activity mActivity;
    private int mHeight;
    private int mWidth;
    private int mCameraId;
    private Camera mCamera;
    private byte[] buffer;
    private SurfaceHolder mSurfaceHolder;
    private Camera.PreviewCallback mPreviewCallback;
    private int mRotation;
    private OnChangedSizeListener mOnChangedSizeListener;
    byte[] bytes;

    public CameraHelper(Activity activity, int cameraId, int width, int height) {
        mActivity = activity;
        mCameraId = cameraId;
        mWidth = width;
        mHeight = height;
    }


    /**
     * Set the surfaceHolder
     * @param surfaceHolder
     */
    public void  setPreviewDisplay(SurfaceHolder surfaceHolder) {
        mSurfaceHolder = surfaceHolder;
        mSurfaceHolder.addCallback(this);
    }

    /**
     * SurfaceHolder.Callback
     */
    @Override
    public void surfaceCreated(SurfaceHolder holder) {

    }

    /**
     * SurfaceHolder.Callback
     */
    @Override
    public void surfaceChanged(SurfaceHolder holder, int format, int width, int height) {
        stopPreview();
        startPreview();
    }

    /**
     * SurfaceHolder.Callback
     */
    @Override
    public void surfaceDestroyed(SurfaceHolder holder) {
        stopPreview();
    }



    /**
     * Stop the preview
     */
    private void stopPreview() {
        if (mCamera != null) {
            //clear the preview-data callback
            mCamera.setPreviewCallback(null);
            //stop the preview
            mCamera.stopPreview();
            //release the camera
            mCamera.release();
            mCamera = null;
        }
    }

    /**
     * Start the preview
     */
    private void startPreview() {
        try {
            //obtain the camera object
            mCamera = Camera.open(mCameraId);
            //configure the camera parameters
            Camera.Parameters parameters = mCamera.getParameters();
            //set the preview data format to NV21
            parameters.setPreviewFormat(ImageFormat.NV21);
            //set camera width and height
            setPreviewSize(parameters);
            // set the image sensor's angle/orientation
            setPreviewOrientation(parameters);
            // enable autofocus
            setFocusMode(parameters);
            mCamera.setParameters(parameters);

            // size determined by the YUV format: Y plane (w*h) plus interleaved VU (w*h/2)
            buffer = new byte[mWidth * mHeight * 3 / 2];
            bytes = new byte[buffer.length];
            //callback buffer for the preview data
            mCamera.addCallbackBuffer(buffer);
            mCamera.setPreviewCallbackWithBuffer(this);

            //set the preview surface
            mCamera.setPreviewDisplay(mSurfaceHolder);
            mCamera.startPreview();
        } catch (Exception ex) {
            ex.printStackTrace();
        }
    }

    /**
     * Set the preview size
     * 1. Query the preview sizes the camera supports
     * 2. Pick the supported resolution closest to the requested one
     * @param parameters
     */
    private void setPreviewSize(Camera.Parameters parameters) {
        List<Camera.Size> supportedPreviewSizes = parameters.getSupportedPreviewSizes();
        Camera.Size size = supportedPreviewSizes.get(0);
        Log.d(TAG, "支持 " + size.width + "x" + size.height);
        int m = Math.abs(size.height * size.width - mWidth * mHeight);
        supportedPreviewSizes.remove(0);
        Iterator<Camera.Size> iterator = supportedPreviewSizes.iterator();
        while (iterator.hasNext()) {
            Camera.Size next = iterator.next();
            Log.d(TAG, "支持 " + next.width + "x" + next.height);
            int n = Math.abs(next.height * next.width - mWidth * mHeight);
            if (n < m) {
                m = n;
                size = next;
            }
        }
        mWidth = size.width;
        mHeight = size.height;
        parameters.setPreviewSize(mWidth, mHeight);
        Log.d(TAG, "设置预览分辨率 width:" + size.width + " height:" + size.height);
    }

    /**
     * Set the image sensor's angle/orientation
     * Normally the image has to be rotated 90 degrees to appear upright: phone camera sensors are mounted lying on their side, top pointing right
     * @param parameters
     */
    private void setPreviewOrientation(Camera.Parameters parameters) {
        Camera.CameraInfo info = new Camera.CameraInfo();
        Camera.getCameraInfo(mCameraId, info);
        mRotation = mActivity.getWindowManager().getDefaultDisplay().getRotation();
        int degrees = 0;
        switch (mRotation) {
            case Surface.ROTATION_0:
                degrees = 0;
                mOnChangedSizeListener.onChanged(mHeight, mWidth);
                break;
            case Surface.ROTATION_90: // landscape, top of the device on the left (home key on the right)
                degrees = 90;
                mOnChangedSizeListener.onChanged(mWidth, mHeight);
                break;
            case Surface.ROTATION_270:// landscape, top of the device on the right
                degrees = 270;
                mOnChangedSizeListener.onChanged(mWidth, mHeight);
                break;
        }
        int result;
        if (info.facing == Camera.CameraInfo.CAMERA_FACING_FRONT) {
            result = (info.orientation + degrees) % 360;
            result = (360 - result) % 360; // compensate the mirror
        } else { // back-facing
            result = (info.orientation - degrees + 360) % 360;
        }
        //apply the display rotation
        mCamera.setDisplayOrientation(result);
    }

    /**
     * Autofocus
     * @param parameters
     */
    private void setFocusMode(Camera.Parameters parameters) {
        List<String> focusModes = parameters.getSupportedFocusModes();
        if (focusModes != null
                && focusModes.contains(Camera.Parameters.FOCUS_MODE_CONTINUOUS_PICTURE)) {
            parameters.setFocusMode(Camera.Parameters.FOCUS_MODE_CONTINUOUS_PICTURE);
        }
    }

    /**
     * Camera.PreviewCallback
     * @param data
     * @param camera
     */
    @Override
    public void onPreviewFrame(byte[] data, Camera camera) {
        switch (mRotation) {
            case Surface.ROTATION_0:
                rotation90(data);
                break;
            case Surface.ROTATION_90: // landscape, top of the device on the left (home key on the right)
                //TODO
                break;
            case Surface.ROTATION_270:// landscape, top of the device on the right
                //TODO
                break;
        }
        // in the TODO branches above the data has not been rotated yet
        mPreviewCallback.onPreviewFrame(bytes, camera);
        camera.addCallbackBuffer(buffer);
    }

    /**
     * Rotate the data 90 degrees; see the NV21 layout for the details
     * @param data
     */
    private void rotation90(byte[] data) {
        int index = 0;
        int ySize = mWidth * mHeight;
        //height of the interleaved VU plane
        int uvHeight = mHeight / 2;
        //back camera: rotate 90 degrees clockwise
        if (mCameraId == Camera.CameraInfo.CAMERA_FACING_BACK) {
            //write the rotated Y data into the new byte array: read each column bottom-up
            for (int i = 0; i < mWidth; i++) {
                for (int j = mHeight - 1; j >= 0; j--) {
                    bytes[index++] = data[mWidth * j + i];
                }
            }

            //process two bytes (one VU pair) at a time
            for (int i = 0; i < mWidth; i += 2) {
                for (int j = uvHeight - 1; j >= 0; j--) {
                    // v
                    bytes[index++] = data[ySize + mWidth * j + i];
                    // u
                    bytes[index++] = data[ySize + mWidth * j + i + 1];
                }
            }
        } else {
            //front camera: rotate 90 degrees counterclockwise
            for (int i = 0; i < mWidth; i++) {
                int nPos = mWidth - 1;
                for (int j = 0; j < mHeight; j++) {
                    bytes[index++] = data[nPos - i];
                    nPos += mWidth;
                }
            }
            //u v
            for (int i = 0; i < mWidth; i += 2) {
                int nPos = ySize + mWidth - 1;
                for (int j = 0; j < uvHeight; j++) {
                    bytes[index++] = data[nPos - i - 1];
                    bytes[index++] = data[nPos - i];
                    nPos += mWidth;
                }
            }
        }
    }

    /**
     * Switch between the front and back camera
     */
    public void switchCamera() {
        if (mCameraId == Camera.CameraInfo.CAMERA_FACING_BACK) {
            mCameraId = Camera.CameraInfo.CAMERA_FACING_FRONT;
        } else {
            mCameraId = Camera.CameraInfo.CAMERA_FACING_BACK;
        }
        stopPreview();
        startPreview();
    }

    /**
     * Set the preview callback
     * @param previewCallback
     */
    public void setPreviewCallback(Camera.PreviewCallback previewCallback) {
        mPreviewCallback = previewCallback;
    }


    /**
     * Set the size-change listener
     * @param listener
     */
    public void setOnChangedSizeListener(OnChangedSizeListener listener) {
        mOnChangedSizeListener = listener;
    }

    /**
     * Size-change listener
     */
    public interface OnChangedSizeListener {
        void onChanged(int w, int h);
    }


    /**
     * Release
     */
    public void release() {
        mSurfaceHolder.removeCallback(this);
        stopPreview();
    }

}

At this point the Java layer is complete. The code flow:

  1. MainActivity's onCreate() initializes LivePusher, which initializes VideoChannel and the native layer
  2. When VideoChannel is initialized, it creates CameraHelper to run the camera preview
  3. CameraHelper calls back to VideoChannel when the camera size changes and when camera data arrives
  4. On a size-change callback, VideoChannel passes the size, bitrate, and frame rate to the native layer through LivePusher; on a camera-data callback, it sends the camera data to the native layer through LivePusher

A diagram to sort out the flow:


[Diagram: Java-layer flow]

The Java layer is done, but nothing has been streamed yet; the only data manipulation so far was rotating the camera data 90 degrees before handing it to the native layer. What follows is the native side: encoding and transmitting the data.
The native layer consists mainly of two files:
native-lib.cpp: implements the native methods of the Java-layer LivePusher (the streaming operations)
VideoChannel.cpp: only encodes the video data with x264 and calls the encoded output back to native-lib.cpp. It has no relationship to the Java-layer VideoChannel; the Java and native layers communicate only through calls between LivePusher.java and native-lib.cpp. The Java-layer VideoChannel has no native methods at all; don't mix them up.
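VideoChannel.h itself isn't pasted in this post; as a reference, here is a minimal sketch of what it must declare, inferred purely from how the two .cpp files use it:

// Sketch of VideoChannel.h, inferred from the calls in native-lib.cpp and VideoChannel.cpp.
#ifndef MYPUSH_VIDEOCHANNEL_H
#define MYPUSH_VIDEOCHANNEL_H

#include <jni.h>
#include <cstdint>
#include <x264.h>
#include "librtmp/rtmp.h"

class VideoChannel {
public:
    //native-lib.cpp registers this callback to receive finished RTMPPackets
    typedef void (*VideoCallback)(RTMPPacket *packet);

    void setVideoEncInfo(jint width, jint height, jint fps, jint bitrate);
    void setVideoCallback(VideoCallback videoCallback);
    void encodeData(int8_t *data);

private:
    void sendSpsPps(uint8_t *sps, uint8_t *pps, int sps_len, int pps_len);
    void sendFrame(int type, uint8_t *payload, int i_payload);

    int mWidth, mHeight, mFps, mBitrate;
    int ySize, uvSize;
    x264_t *videoCodec = 0;
    x264_picture_t *pic_in = 0;
    VideoCallback videoCallback = 0;
};

#endif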

The native-layer code:
First native-lib.cpp, the only file on the native side that communicates with the Java layer, i.e. where the native methods of LivePusher are implemented. LivePusher has four native methods:

    //initialize the native layer (initializes the native-layer VideoChannel)
    public native void native_init(); 

    //set width/height; called when the camera first initializes successfully and whenever the size changes (e.g. switching between portrait and landscape)
    public native void native_setVideoEncInfo(int w, int h, int mFps, int mBitrate);

    //connect to the server and push the encoded RTMPPackets to it one by one
    public native void native_start(String path);

    //start encoding: turn each frame captured by the camera into RTMPPackets
    public native void native_pushVideo(byte[] data);

Here is native-lib.cpp:

#include <jni.h>
#include <string>
#include "x264.h"
#include "librtmp/rtmp.h"
#include "VideoChannel.h"
#include "pthread.h"
#include "macro.h"
#include "safe_queue.h"

VideoChannel *videoChannel;//encoding only; calls back with each encoded RTMPPacket
int isStart = 0;//guards against re-initialization when the user taps "start live" repeatedly
pthread_t pid; //thread that connects to the server
uint32_t start_time;//timestamp at which streaming started
int readyPushing = 0;// connected to the server and ready to push
SafeQueue<RTMPPacket *> packets;//queue of the RTMPPackets assembled in VideoChannel, waiting to be pushed


/**
 * Callback from VideoChannel: receives each encoded RTMPPacket and enqueues it to await pushing (upload to the server)
 * @param packet
 */
void callback(RTMPPacket *packet) {

    if (packet) {

        //set the timestamp
        packet->m_nTimeStamp = RTMP_GetTime() - start_time;
        //enqueue
        packets.put(packet);
    }
}


/**
 * Release a packet
 * @param packet
 */
void releasePackets(RTMPPacket *&packet) {
    if (packet) {
        RTMPPacket_Free(packet);
        delete packet;
        packet = 0;
    }

}

/**
 * Start pushing
 * Called from the start-live method (Java_com_yu_mypush_LivePusher_native_1start); think of it as the run() of a Java new Thread
 * @param args
 * @return
 */
void *start(void *args) {

    char *url = static_cast<char *>(args);
    RTMP *rtmp = 0;
    rtmp = RTMP_Alloc();
    if (!rtmp) {
        LOGE("failed to alloc rtmp");
        delete[] url;
        return NULL;
    }

    RTMP_Init(rtmp);
    int ret = RTMP_SetupURL(rtmp, url);
    if (!ret) {
        LOGE("failed to set url: %s", url);
        RTMP_Free(rtmp);
        delete[] url;
        return NULL;
    }

    rtmp->Link.timeout = 5;
    RTMP_EnableWrite(rtmp);
    ret = RTMP_Connect(rtmp, 0);
    if (!ret) {
        LOGE("failed to connect to server: %s", url);
        RTMP_Free(rtmp);
        delete[] url;
        return NULL;
    }

    ret = RTMP_ConnectStream(rtmp, 0);
    if (!ret) {
        LOGE("failed to connect stream: %s", url);
        RTMP_Close(rtmp);
        RTMP_Free(rtmp);
        delete[] url;
        return NULL;
    }
    start_time = RTMP_GetTime();
    //ready to start pushing
    readyPushing = 1;
    packets.setWork(1);
    RTMPPacket *packet = 0;
    while (readyPushing) {
        //take a packet off the queue
        packets.get(packet);
        LOGE("took one packet from the queue");
        if (!readyPushing) {
            break;
        }
        if (!packet) {
            continue;
        }
        packet->m_nInfoField2 = rtmp->m_stream_id;
        ret = RTMP_SendPacket(rtmp, packet, 1);

        //release the packet
        releasePackets(packet);
    }

    isStart = 0;
    readyPushing = 0;
    packets.setWork(0);
    packets.clear();
    if (rtmp) {
        RTMP_Close(rtmp);
        RTMP_Free(rtmp);
    }
    delete[] url; //allocated with new[]
    return 0;

}

extern "C"
JNIEXPORT void JNICALL
/**
 * Initialization
 * @param env
 * @param instance
 */
Java_com_yu_mypush_LivePusher_native_1init(JNIEnv *env, jobject instance) {

    videoChannel = new VideoChannel;
    //set the callback: VideoChannel only encodes, and the encoded data is received here for transmission
    videoChannel->setVideoCallback(callback);
}


extern "C"
JNIEXPORT void JNICALL
/**
 * Set the video encoding info; called from the Java layer when the camera data size changes (first camera open, camera switch, and portrait/landscape switch all change the captured size and all end up here)
 * @param env
 * @param instance
 * @param width
 * @param height
 * @param fps
 * @param bitrate
 */
Java_com_yu_mypush_LivePusher_native_1setVideoEncInfo(JNIEnv *env, jobject instance, jint width, jint height,
                                                      jint fps, jint bitrate) {

    if (!videoChannel) {
        return;
    }
    videoChannel->setVideoEncInfo(width, height, fps, bitrate);

}


extern "C"
JNIEXPORT void JNICALL
/**
 * Start the live stream (connect to the server, take packets off the queue, and push them to the server)
 * Called after the user taps start-live
 * @param env
 * @param instance
 * @param path_
 */
Java_com_yu_mypush_LivePusher_native_1start(JNIEnv *env, jobject instance, jstring path_) {
    const char *path = env->GetStringUTFChars(path_, 0);

    if (isStart) {
        return;
    }
    isStart = 1;

    // path will be reclaimed, so copy it
    char *url = new char[strlen(path) + 1];
    strcpy(url, path);

    //start acts like run() in a Java Thread; url is its argument
    pthread_create(&pid, 0, start, url);

    env->ReleaseStringUTFChars(path_, path);
}


extern "C"
JNIEXPORT void JNICALL
/**
 * Start encoding data (once started, each encoded packet arrives via the callback and is enqueued)
 * Called after streaming has started and camera data is arriving
 * @param env
 * @param instance
 * @param data_
 */
Java_com_yu_mypush_LivePusher_native_1pushVideo(JNIEnv *env, jobject instance, jbyteArray data_) {
    //nothing to do if the encoder isn't ready; check before taking the byte array so it is always released
    if (!videoChannel || !readyPushing) {
        return;
    }
    jbyte *data = env->GetByteArrayElements(data_, NULL);
    videoChannel->encodeData(data);

    env->ReleaseByteArrayElements(data_, data, 0);
}

That covers the responsibilities of native-lib.cpp.

Now the key part: video encoding in the native-layer VideoChannel. Before writing it, some encoding background is needed. As mentioned above (repeating it once more), there are two jobs here:

  1. Convert each NV21 frame to I420
  2. Break each I420 frame down into RTMPPackets

How is NV21 converted to I420? Compare the two layouts (the original post showed them as images; for a 4x4 frame they look like this):

I420:  YYYYYYYYYYYYYYYY  UUUU  VVVV   (all U, then all V)
NV21:  YYYYYYYYYYYYYYYY  VUVU  VUVU   (V and U interleaved, V first)

Both are YUV formats with similar structure. The Y data is identical; only the arrangement of U and V differs, so the conversion is short:
pic_in->img.plane is the container that receives the converted data;
data is the source data coming in from the camera:
void VideoChannel::encodeData(int8_t *data) {

    // Y data
    memcpy(pic_in->img.plane[0], data, ySize);

    //UV data
    for (int i = 0; i < uvSize; ++i) {
        *(pic_in->img.plane[1] + i) = *(data + ySize + i * 2 + 1);//u: odd offsets  1 3 5 7 9
        *(pic_in->img.plane[2] + i) = *(data + ySize + i * 2);//   v: even offsets 0 2 4 6 8 10
    }
    ......
}

Done, that's the conversion to I420. If that felt simple, keep going... now splitting each frame into RTMPPackets.
As mentioned above, H264 has I-, B- and P-frames. The I-frame, as discussed, is the important one, and it carries the SPS and PPS.

Honestly, I can't tell you exactly what these two contain, but they matter: the SPS and PPS must first be wrapped into an RTMPPacket of their own, and only after that come the rest of the I-frame data and the B-/P-frame data. Why? Simple: that's what the protocol specifies. (The original post quoted an explanation of SPS/PPS from the web here.)

Assuming SPS and PPS are understood (it's fine if not; just know they exist and must be handled this way), how is a frame split into RTMPPackets? Two steps:

  1. Split the frame into N NALUs, the term mentioned earlier
  2. Wrap each NALU into an RTMPPacket

First, step one: how are NALUs split off? No need to understand this deeply; dig in later if interested. Remember the tool mentioned earlier that we haven't used yet: x264? Splitting each frame into NALUs is exactly what it does.
So in encodeData(int8_t *data), after converting NV21 to I420, we split the frame into NALUs. The type of a NALU tells us whether it carries frame data, SPS, or PPS. The SPS and PPS are usually obtained first, since the first frame is necessarily a key frame (I-frame), and the key frame's first data is the SPS and PPS.
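As an illustration of reading the type (a hypothetical helper, not part of the project; x264 actually reports the type in pp_nal[i].i_type): every NALU in an Annex-B stream begins with a 00 00 00 01 or 00 00 01 start code, and the low 5 bits of the byte after it are the nal_unit_type:

#include <cstdint>

// 7 = SPS, 8 = PPS, 5 = IDR slice (key frame), 1 = non-IDR slice
int naluType(const uint8_t *nalu) {
    int offset = (nalu[2] == 0x01) ? 3 : 4; // 3- or 4-byte start code
    return nalu[offset] & 0x1F;             // low 5 bits of the NALU header byte
}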
The complete encodeData(int8_t *data) looks like this:

void VideoChannel::encodeData(int8_t *data) {

    //---------------------------NV21 -> I420---------------------------------------------

    //pic_in is the frame handed to x264, stored temporarily; x264 turns it into NALUs below
    memcpy(pic_in->img.plane[0], data, ySize);// Y data

    //UV data
    for (int i = 0; i < uvSize; ++i) {
        *(pic_in->img.plane[1] + i) = *(data + ySize + i * 2 + 1);//u: odd offsets  1 3 5 7 9
        *(pic_in->img.plane[2] + i) = *(data + ySize + i * 2);//   v: even offsets 0 2 4 6 8 10
    }

    //---------------------------I420 frame -> NALUs---------------------------------------------

    //the NALUs
    x264_nal_t *pp_nal;
    //number of NALUs
    int pi_nal;
    x264_picture_t pic_out;
    x264_encoder_encode(videoCodec, &pp_nal, &pi_nal, pic_in, &pic_out);
    int sps_len;
    int pps_len;
    uint8_t sps[100];
    uint8_t pps[100];
    for (int i = 0; i < pi_nal; ++i) {
        //send the SPS and PPS
        if (pp_nal[i].i_type == NAL_SPS) {
            //strip the 4-byte start code (00 00 00 01)
            sps_len = pp_nal[i].i_payload - 4;
            memcpy(sps, pp_nal[i].p_payload + 4, sps_len);
        } else if (pp_nal[i].i_type == NAL_PPS) {
            pps_len = pp_nal[i].i_payload - 4;
            memcpy(pps, pp_nal[i].p_payload + 4, pps_len);
            sendSpsPps(sps, pps, sps_len, pps_len);
        } else {
            //key frames and non-key frames
            sendFrame(pp_nal[i].i_type, pp_nal[i].p_payload, pp_nal[i].i_payload);
        }
    }
}

sendSpsPps() and sendFrame() above are not yet implemented; so far x264_encoder_encode() has only split the frame into NALUs. sendSpsPps() and sendFrame() are where each NALU is wrapped into an RTMPPacket, and the wrapping follows fixed rules. The original post showed the rules as images; reconstructed from the code below, they are:

[Images: RTMP packet layouts for the AVC sequence header and for video frames]

  1. body byte 0 packs frame type and codec: 0x17 = key frame + AVC, 0x27 = inter frame + AVC
  2. body byte 1 is the AVC packet type: 0x00 = sequence header (carries SPS/PPS), 0x01 = NALU
  3. bytes 2-4 are the composition time, 0
  4. a sequence header then carries: version 0x01; profile, compatibility and level copied from sps[1..3]; 0xFF; 0xE1; a 2-byte SPS length plus the SPS; 0x01; a 2-byte PPS length plus the PPS
  5. a frame packet instead carries a 4-byte NALU length followed by the NALU data

Yes, these are the rules; read them against the code a few times, they are fixed by the spec. sendSpsPps() and sendFrame():
/**
 * Send the packet assembled from the SPS and PPS
 * @param sps
 * @param pps
 * @param sps_len
 * @param pps_len
 */
void VideoChannel::sendSpsPps(uint8_t *sps, uint8_t *pps, int sps_len, int pps_len) {
//    sps, pps  --->packet
    int bodySize = 13 + sps_len + 3 + pps_len;
    RTMPPacket *packet = new RTMPPacket;
    RTMPPacket_Alloc(packet, bodySize);

    int i = 0;
    //fixed header: key frame + AVC
    packet->m_body[i++] = 0x17;
    //AVC packet type: sequence header
    packet->m_body[i++] = 0x00;
    //composition time 0x000000
    packet->m_body[i++] = 0x00;
    packet->m_body[i++] = 0x00;
    packet->m_body[i++] = 0x00;

    //version
    packet->m_body[i++] = 0x01;
    //profile, compatibility, level (copied from the SPS)
    packet->m_body[i++] = sps[1];
    packet->m_body[i++] = sps[2];
    packet->m_body[i++] = sps[3];
    packet->m_body[i++] = 0xFF;

    //number of SPS: 1
    packet->m_body[i++] = 0xE1;
    //SPS length
    packet->m_body[i++] = (sps_len >> 8) & 0xff;
    packet->m_body[i++] = sps_len & 0xff;
    memcpy(&packet->m_body[i], sps, sps_len);
    i += sps_len;

    //number of PPS: 1, then the PPS length and data
    packet->m_body[i++] = 0x01;
    packet->m_body[i++] = (pps_len >> 8) & 0xff;
    packet->m_body[i++] = (pps_len) & 0xff;
    memcpy(&packet->m_body[i], pps, pps_len);

    //video packet
    packet->m_packetType = RTMP_PACKET_TYPE_VIDEO;
    packet->m_nBodySize = bodySize;
    //pick an arbitrary channel (avoid the ones rtmp.c itself uses)
    packet->m_nChannel = 10;
    //SPS/PPS carry no timestamp
    packet->m_nTimeStamp = 0;
    //no absolute timestamp
    packet->m_hasAbsTimestamp = 0;
    packet->m_headerType = RTMP_PACKET_SIZE_MEDIUM;

    videoCallback(packet);
}

void VideoChannel::sendFrame(int type, uint8_t *payload, int i_payload) {
    //strip the Annex-B start code (00 00 00 01 or 00 00 01)
    if (payload[2] == 0x00) {
        i_payload -= 4;
        payload += 4;
    } else {
        i_payload -= 3;
        payload += 3;
    }
    //see the layout above
    int bodySize = 9 + i_payload;
    RTMPPacket *packet = new RTMPPacket;
    RTMPPacket_Alloc(packet, bodySize);

    //inter frame + AVC by default; key frame + AVC for IDR slices
    packet->m_body[0] = 0x27;
    if (type == NAL_SLICE_IDR) {
        packet->m_body[0] = 0x17;
        LOGE("key frame");
    }
    //AVC packet type: NALU
    packet->m_body[1] = 0x01;
    //composition time
    packet->m_body[2] = 0x00;
    packet->m_body[3] = 0x00;
    packet->m_body[4] = 0x00;
    //data length, a 4-byte int
    packet->m_body[5] = (i_payload >> 24) & 0xff;
    packet->m_body[6] = (i_payload >> 16) & 0xff;
    packet->m_body[7] = (i_payload >> 8) & 0xff;
    packet->m_body[8] = (i_payload) & 0xff;

    //the NALU data
    memcpy(&packet->m_body[9], payload, i_payload);

    packet->m_hasAbsTimestamp = 0;
    packet->m_nBodySize = bodySize;
    packet->m_packetType = RTMP_PACKET_TYPE_VIDEO;
    packet->m_nChannel = 0x10;
    packet->m_headerType = RTMP_PACKET_SIZE_LARGE;
    videoCallback(packet);
}

That's how NALUs are assembled into RTMPPackets. Check it against the layout above a few times; it is written exactly this way every time. The related background can be studied gradually; this part is genuinely a bit complex.
Here is the whole of VideoChannel.cpp:

#include <x264.h>
#include "VideoChannel.h"
#include "librtmp/rtmp.h"
#include "jni.h"
#include <cstring>
#include "macro.h"

void VideoChannel::setVideoEncInfo(jint width, jint height, jint fps, jint bitrate) {

    mWidth = width;
    mHeight = height;
    mFps = fps;
    mBitrate = bitrate;
    ySize = width * height;
    uvSize = ySize / 4;

    x264_param_t param;
    // default parameters; ultrafast = fastest preset, zerolatency = zero latency
    x264_param_default_preset(&param, "ultrafast", "zerolatency");

    //encoder level
    param.i_level_idc = 32;

    //most other devices use I420, but Android camera data is NV21; to support the most devices, I420 is chosen here
    param.i_csp = X264_CSP_I420;

    // width and height
    param.i_width = width;
    param.i_height = height;

    //no B-frames, for a fast first frame
    param.i_bframe = 0;

    //i_rc_method selects rate control: CQP (constant quantizer), CRF (constant rate factor), ABR (average bitrate); for live streaming use average bitrate
    param.rc.i_rc_method = X264_RC_ABR;

    //bitrate (in Kbps)
    param.rc.i_bitrate = bitrate / 1000;

    //instantaneous maximum bitrate
    param.rc.i_vbv_max_bitrate = bitrate / 1000 * 1.2;

    //required once i_vbv_max_bitrate is set: rate-control buffer size, in kbps
    param.rc.i_vbv_buffer_size = bitrate / 1000;

    //frame rate numerator and denominator
    param.i_fps_num = fps;
    param.i_fps_den = 1;

    //timebase numerator and denominator, for A/V sync
    param.i_timebase_num = param.i_fps_num;
    param.i_timebase_den = param.i_fps_den;

    //use fps rather than timestamps to compute frame distance
    param.b_vfr_input = 0;

    //key frame interval: one key frame every 2 seconds
    param.i_keyint_max = fps * 2;

    // copy the SPS/PPS in front of every key frame, so each I-frame carries its own SPS/PPS
    param.b_repeat_headers = 1;

    //threads
    param.i_threads = 1;
    x264_param_apply_profile(&param, "baseline");

    videoCodec = x264_encoder_open(&param);

    //the input frame handed to x264, stored temporarily before being turned into NALUs
    pic_in = new x264_picture_t;
    //allocate its buffers
    x264_picture_alloc(pic_in, X264_CSP_I420, width, height);
}

void VideoChannel::setVideoCallback(VideoCallback  videoCallback) {
    this->videoCallback = videoCallback;
}


/**
 * Encode
 * @param data
 */
void VideoChannel::encodeData(int8_t *data) {

    //---------------------------NV21 -> I420---------------------------------------------

    //pic_in is the frame handed to x264, stored temporarily; x264 turns it into NALUs below
    memcpy(pic_in->img.plane[0], data, ySize);// Y data

    //UV data
    for (int i = 0; i < uvSize; ++i) {
        *(pic_in->img.plane[1] + i) = *(data + ySize + i * 2 + 1);//u: odd offsets  1 3 5 7 9
        *(pic_in->img.plane[2] + i) = *(data + ySize + i * 2);//   v: even offsets 0 2 4 6 8 10
    }

    //---------------------------I420 frame -> NALUs---------------------------------------------

    //the NALUs
    x264_nal_t *pp_nal;
    //number of NALUs
    int pi_nal;
    x264_picture_t pic_out;
    x264_encoder_encode(videoCodec, &pp_nal, &pi_nal, pic_in, &pic_out);
    int sps_len;
    int pps_len;
    uint8_t sps[100];
    uint8_t pps[100];
    for (int i = 0; i < pi_nal; ++i) {
        //send the SPS and PPS
        if (pp_nal[i].i_type == NAL_SPS) {
            //strip the 4-byte start code (00 00 00 01)
            sps_len = pp_nal[i].i_payload - 4;
            memcpy(sps, pp_nal[i].p_payload + 4, sps_len);
        } else if (pp_nal[i].i_type == NAL_PPS) {
            pps_len = pp_nal[i].i_payload - 4;
            memcpy(pps, pp_nal[i].p_payload + 4, pps_len);
            sendSpsPps(sps, pps, sps_len, pps_len);
        } else {
            //key frames and non-key frames
            sendFrame(pp_nal[i].i_type, pp_nal[i].p_payload, pp_nal[i].i_payload);
        }
    }
}

/**
 * Send the packet assembled from the SPS and PPS
 * @param sps
 * @param pps
 * @param sps_len
 * @param pps_len
 */
void VideoChannel::sendSpsPps(uint8_t *sps, uint8_t *pps, int sps_len, int pps_len) {
//    sps, pps  --->packet
    int bodySize = 13 + sps_len + 3 + pps_len;
    RTMPPacket *packet = new RTMPPacket;
    RTMPPacket_Alloc(packet, bodySize);

    int i = 0;
    //fixed header: key frame + AVC
    packet->m_body[i++] = 0x17;
    //AVC packet type: sequence header
    packet->m_body[i++] = 0x00;
    //composition time 0x000000
    packet->m_body[i++] = 0x00;
    packet->m_body[i++] = 0x00;
    packet->m_body[i++] = 0x00;

    //version
    packet->m_body[i++] = 0x01;
    //profile, compatibility, level (copied from the SPS)
    packet->m_body[i++] = sps[1];
    packet->m_body[i++] = sps[2];
    packet->m_body[i++] = sps[3];
    packet->m_body[i++] = 0xFF;

    //number of SPS: 1
    packet->m_body[i++] = 0xE1;
    //SPS length
    packet->m_body[i++] = (sps_len >> 8) & 0xff;
    packet->m_body[i++] = sps_len & 0xff;
    memcpy(&packet->m_body[i], sps, sps_len);
    i += sps_len;

    //number of PPS: 1, then the PPS length and data
    packet->m_body[i++] = 0x01;
    packet->m_body[i++] = (pps_len >> 8) & 0xff;
    packet->m_body[i++] = (pps_len) & 0xff;
    memcpy(&packet->m_body[i], pps, pps_len);

    //video packet
    packet->m_packetType = RTMP_PACKET_TYPE_VIDEO;
    packet->m_nBodySize = bodySize;
    //pick an arbitrary channel (avoid the ones rtmp.c itself uses)
    packet->m_nChannel = 10;
    //SPS/PPS carry no timestamp
    packet->m_nTimeStamp = 0;
    //no absolute timestamp
    packet->m_hasAbsTimestamp = 0;
    packet->m_headerType = RTMP_PACKET_SIZE_MEDIUM;

    videoCallback(packet);
}

void VideoChannel::sendFrame(int type, uint8_t *payload, int i_payload) {
    //strip the Annex-B start code (00 00 00 01 or 00 00 01)
    if (payload[2] == 0x00) {
        i_payload -= 4;
        payload += 4;
    } else {
        i_payload -= 3;
        payload += 3;
    }
    //see the layout above
    int bodySize = 9 + i_payload;
    RTMPPacket *packet = new RTMPPacket;
    RTMPPacket_Alloc(packet, bodySize);

    //inter frame + AVC by default; key frame + AVC for IDR slices
    packet->m_body[0] = 0x27;
    if (type == NAL_SLICE_IDR) {
        packet->m_body[0] = 0x17;
        LOGE("key frame");
    }
    //AVC packet type: NALU
    packet->m_body[1] = 0x01;
    //composition time
    packet->m_body[2] = 0x00;
    packet->m_body[3] = 0x00;
    packet->m_body[4] = 0x00;
    //data length, a 4-byte int
    packet->m_body[5] = (i_payload >> 24) & 0xff;
    packet->m_body[6] = (i_payload >> 16) & 0xff;
    packet->m_body[7] = (i_payload >> 8) & 0xff;
    packet->m_body[8] = (i_payload) & 0xff;

    //the NALU data
    memcpy(&packet->m_body[9], payload, i_payload);

    packet->m_hasAbsTimestamp = 0;
    packet->m_nBodySize = bodySize;
    packet->m_packetType = RTMP_PACKET_TYPE_VIDEO;
    packet->m_nChannel = 0x10;
    packet->m_headerType = RTMP_PACKET_SIZE_LARGE;
    videoCallback(packet);
}

Beyond the encoding code itself, the x264 configuration parameters go far beyond the ones used here. The ones used are commented; there are many, many more, and just as many posts explaining the x264 parameter structs, worth digesting over time.

macro.h provides logging for the C++ files; safe_queue.h is a thread-safe queue. Both can be found in the project.
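safe_queue.h itself isn't shown in this post; a minimal sketch of the interface native-lib.cpp relies on (put/get/setWork/clear), assuming a plain mutex + condition-variable implementation, might look like this:

#include <queue>
#include <pthread.h>

template<typename T>
class SafeQueue {
public:
    SafeQueue() {
        pthread_mutex_init(&mutex, 0);
        pthread_cond_init(&cond, 0);
    }
    //producer: enqueue and wake a waiting consumer (only while working)
    void put(T value) {
        pthread_mutex_lock(&mutex);
        if (work) {
            q.push(value);
            pthread_cond_signal(&cond);
        }
        pthread_mutex_unlock(&mutex);
    }
    //consumer: block until an element arrives or work is stopped; returns 1 if a value was taken
    int get(T &value) {
        int ret = 0;
        pthread_mutex_lock(&mutex);
        while (work && q.empty()) {
            pthread_cond_wait(&cond, &mutex);
        }
        if (!q.empty()) {
            value = q.front();
            q.pop();
            ret = 1;
        }
        pthread_mutex_unlock(&mutex);
        return ret;
    }
    //1 = queue accepts/delivers data, 0 = stop (wakes any blocked get())
    void setWork(int w) {
        pthread_mutex_lock(&mutex);
        work = w;
        pthread_cond_broadcast(&cond);
        pthread_mutex_unlock(&mutex);
    }
    void clear() {
        pthread_mutex_lock(&mutex);
        while (!q.empty()) q.pop(); //the real project would also free each packet
        pthread_mutex_unlock(&mutex);
    }
private:
    std::queue<T> q;
    pthread_mutex_t mutex;
    pthread_cond_t cond;
    int work = 0;
};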
That completes a minimal implementation of video streaming (optimization and resource cleanup not included).

Audio streaming:

Audio streaming follows the same overall process and approach as video.
Video data pipeline: camera data (NV21) -> H264 -> RTMPPacket
Audio data pipeline: microphone data (PCM) -> AAC -> RTMPPacket
The project flow matches video: the Java-layer AudioChannel captures microphone data and passes it through LivePusher to the native layer; native-lib hands the data to the native AudioChannel for encoding, and the encoded result is called back to native-lib for network transmission.
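For orientation, the raw PCM numbers involved (44100 Hz / 16-bit / stereo are the settings the code below uses):

#include <cstdio>

int main() {
    // 44100 samples/s, 2 channels, 2 bytes per 16-bit sample
    const int sampleRate = 44100, channels = 2, bytesPerSample = 2;
    printf("raw PCM: %d bytes/s\n", sampleRate * channels * bytesPerSample);
    // prints: raw PCM: 176400 bytes/s -- which is why the PCM is compressed to AAC
    return 0;
}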

  1. Capture microphone data; straight to the code:
public class AudioChannel {
    private LivePusher mLivePusher;
    private AudioRecord audioRecord;
    private int inputSamples;
    private int channels = 2;//stereo
    int channelConfig;
    int minBufferSize;
    private ExecutorService executor;
    private boolean isLiving;
    public AudioChannel(LivePusher livePusher) {
        executor = Executors.newSingleThreadExecutor();
        mLivePusher = livePusher;
        if (channels == 2) {
            channelConfig = AudioFormat.CHANNEL_IN_STEREO;
        } else {
            channelConfig = AudioFormat.CHANNEL_IN_MONO;
        }
        mLivePusher.native_setAudioEncInfo(44100, channels);

        minBufferSize=  AudioRecord.getMinBufferSize(44100,
                channelConfig, AudioFormat.ENCODING_PCM_16BIT);

        //the number of samples faac consumes per encode call at the current input sample rate; ×2 because 16-bit samples are 2 bytes each
        inputSamples = mLivePusher.getInputSamples() * 2;

        audioRecord = new AudioRecord(MediaRecorder.AudioSource.MIC, 44100, channelConfig,
                AudioFormat.ENCODING_PCM_16BIT, Math.max(minBufferSize, inputSamples));//the buffer must be at least getMinBufferSize(), hence max rather than min
    }

    public void startLive() {
        isLiving = true;
        executor.submit(new AudioTask());
    }



    public void setChannels(int channels) {
        this.channels = channels;
    }

    class AudioTask implements Runnable {


        @Override
        public void run() {
            audioRecord.startRecording();

            byte[] bytes = new byte[inputSamples];
            while (isLiving) {
                int len = audioRecord.read(bytes, 0, bytes.length);
                mLivePusher.native_pushAudio(bytes);
            }

        }
    }

    public void stopLive() {
        isLiving = false;
    }
}
  2. Encode the audio to AAC (using the FAAC library, integrated in the previous environment-setup post) and wrap it into RTMPPackets. The FAAC initialization parameters and the packaging parallel x264. First, the RTMP audio packet layout:
[Image: RTMP audio packet layout]
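Since that layout image is only a placeholder, here is how the first body byte in the code below decomposes, per the FLV audio tag header (my reading of the spec, for reference):

#include <cstdint>

// soundFormat=10 (AAC) in the high 4 bits, soundRate=3 (44 kHz) in 2 bits,
// soundSize=1 (16-bit) in 1 bit, soundType=1 (stereo) / 0 (mono) in the last bit
uint8_t audioTagHeader(bool stereo) {
    return (uint8_t) ((10 << 4) | (3 << 2) | (1 << 1) | (stereo ? 1 : 0));
    // stereo -> 0xAF, mono -> 0xAE, matching m_body[0] below
}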
    The native AudioChannel code:

//called repeatedly as PCM data arrives
void AudioChannel::encodeData(int8_t *data) {
    int bytelen= faacEncEncode(audioCodec, reinterpret_cast<int32_t *>(data), inputSamples, buffer, maxOutputBytes);
    if (bytelen > 0) {
        RTMPPacket *packet = new RTMPPacket;
        int bodySize = 2 + bytelen;
        RTMPPacket_Alloc(packet, bodySize);
        packet->m_body[0] = 0xAF;
        if (mChannels == 1) {
            packet->m_body[0] = 0xAE;
        }
        packet->m_body[1] = 0x01;

        memcpy(&packet->m_body[2], buffer, bytelen);

        packet->m_hasAbsTimestamp = 0;
        packet->m_nBodySize = bodySize;
        packet->m_packetType = RTMP_PACKET_TYPE_AUDIO;
        packet->m_nChannel = 0x11;
        packet->m_headerType = RTMP_PACKET_SIZE_LARGE;
        audioCallback(packet);
    }

}
void AudioChannel::setAudioCallback(AudioChannel::AudioCallback audioCallback) {
    this->audioCallback = audioCallback;
}
// initialization
void AudioChannel::setAudioEncInfo(int samplesInHZ, int channels){
    audioCodec=faacEncOpen(samplesInHZ, channels, &inputSamples, &maxOutputBytes);
    faacEncConfigurationPtr config=faacEncGetCurrentConfiguration(audioCodec);
    config->mpegVersion = MPEG4;
    config->aacObjectType = LOW;
    config->inputFormat = FAAC_INPUT_16BIT;
    config->outputFormat = 0;
    faacEncSetConfiguration(audioCodec, config);
    buffer = new u_char[maxOutputBytes];
}

int AudioChannel::getInputSamples() {
    return inputSamples;
}

AudioChannel::~AudioChannel() {
    DELETE(buffer);
    //释放编码器
    if (audioCodec) {
        faacEncClose(audioCodec);
        audioCodec = 0;
    }
}

These notes organize my thinking while taking first steps into audio and video development. To really understand this domain, the related topics need careful study; I won't ramble about things I don't yet understand clearly, and this project is only a minimal implementation.

The project is still being updated and optimized: project repo
