
A Summary of Audio and Video Editing Features on the Android Platform

2020-01-27  码上就说
  • Extract basic information from a video;
  • Separate the audio and video tracks;
  • Trim a video;
  • Merge an audio file into a video;
  • Concatenate videos;
  • Extract the key frames of a video;
  • Generate fast/slow-motion videos;
  • Reverse a video.

1. Extracting Basic Video Information

Use MediaExtractor to read the track information of a video. This code should run on a worker thread:

        MediaExtractor mediaExtractor = new MediaExtractor();
        mediaExtractor.setDataSource(inputPath);
        for (int index = 0; index < mediaExtractor.getTrackCount(); index++) {
            MediaFormat format = mediaExtractor.getTrackFormat(index);
            LogUtils.w("format = " + format);
        }

The printed output looks like this:

{track-id=1, level=512, mime=video/avc, profile=8, language=und, display-width=720, csd-1=java.nio.HeapByteBuffer[pos=0 lim=10 cap=10], durationUs=47100000, display-height=1280, width=720, max-input-size=103124, frame-rate=30, height=1280, csd-0=java.nio.HeapByteBuffer[pos=0 lim=30 cap=30]}

{track-id=2, mime=audio/mp4a-latm, profile=2, max-bitrate=128004, sample-rate=44100, durationUs=47065442, channel-count=2, language=und, aac-profile=2, bitrate=128003, max-input-size=605, csd-0=java.nio.HeapByteBuffer[pos=0 lim=2 cap=2]}

mime is the MIME type of the MediaFormat: for a video track it starts with video/, for an audio track it starts with audio/.
A video track carries both display-width/display-height and width/height. What is the difference between the two pairs?

See this article for details: 视频的宽高应该怎么看? https://www.jianshu.com/p/9eda5e7f3fed

csd-0 holds the H.264 SPS; the SPS is mandatory, and without it the video cannot be decoded or played.
csd-1 holds the H.264 PPS; the decoder also needs the PPS to be configured.

SPS stands for Sequence Parameter Set. It stores a set of global parameters for a coded video sequence, i.e. the sequence formed by encoding the raw per-frame pixel data. The parameters that each individual encoded frame depends on are stored in the Picture Parameter Set. Normally the SPS and PPS NAL units sit at the very beginning of the bitstream, but in some special cases they may also appear in the middle of the stream, mainly because:

  • the decoder needs to start decoding from the middle of the bitstream;
  • the encoder changed the stream parameters (such as the picture resolution) during encoding.

Besides the SPS, the other important parameter set in H.264 is the Picture Parameter Set (PPS). Like the SPS, the PPS is stored in its own NAL unit in a raw H.264 stream, the only difference being that its nal_unit_type is 8. In container formats, the PPS is usually stored together with the SPS in the file header.
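
To see these fields in practice, the short fragment below (a sketch in the style of the code above; LogUtils is the article's own logger) reads the mime type, the coded width/height and the csd-0/csd-1 buffers from the video track's MediaFormat:

        MediaExtractor extractor = new MediaExtractor();
        extractor.setDataSource(inputPath);
        for (int index = 0; index < extractor.getTrackCount(); index++) {
            MediaFormat format = extractor.getTrackFormat(index);
            String mime = format.getString(MediaFormat.KEY_MIME);
            if (mime == null || !mime.startsWith("video/")) {
                continue;
            }
            // Coded size; display-width/display-height may differ (see the article linked above).
            int width = format.getInteger(MediaFormat.KEY_WIDTH);
            int height = format.getInteger(MediaFormat.KEY_HEIGHT);
            LogUtils.w("mime = " + mime + ", size = " + width + "x" + height);
            // For H.264, csd-0 carries the SPS and csd-1 carries the PPS.
            if (format.containsKey("csd-0")) {
                LogUtils.w("SPS bytes = " + format.getByteBuffer("csd-0").remaining());
            }
            if (format.containsKey("csd-1")) {
                LogUtils.w("PPS bytes = " + format.getByteBuffer("csd-1").remaining());
            }
        }
        extractor.release();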

2. Separating Audio and Video

MediaExtractor can enumerate the tracks inside a media file. A video file typically contains one video track, one or more audio tracks, and possibly subtitle tracks and so on.

Extracting the audio track from a media file:

    public static boolean splitAudioFile(String inputPath, String audioPath) throws IOException {
        MediaMuxer mediaMuxer = null;

        MediaExtractor mediaExtractor = new MediaExtractor();
        mediaExtractor.setDataSource(inputPath);

        int audioTrackIndex = -1;
        for (int index = 0; index < mediaExtractor.getTrackCount(); index++) {
            MediaFormat format = mediaExtractor.getTrackFormat(index);
            LogUtils.w("format = " + format);
            String mime = format.getString(MediaFormat.KEY_MIME);
            if (!mime.startsWith("audio/")) {
                continue;
            }
            mediaExtractor.selectTrack(index);

            // Mux only the first audio track that is found.
            mediaMuxer = new MediaMuxer(audioPath, MediaMuxer.OutputFormat.MUXER_OUTPUT_MPEG_4);
            audioTrackIndex = mediaMuxer.addTrack(format);
            mediaMuxer.start();
            break;
        }

        if (mediaMuxer == null) {
            return false;
        }
        MediaCodec.BufferInfo info = new MediaCodec.BufferInfo();
        info.presentationTimeUs = 0;
        ByteBuffer buffer = ByteBuffer.allocate(500 * 1024);
        int sampleSize = 0;
        while ((sampleSize = mediaExtractor.readSampleData(buffer, 0)) > 0) {
            info.offset = 0;
            info.size = sampleSize;
            info.flags = mediaExtractor.getSampleFlags();
            info.presentationTimeUs = mediaExtractor.getSampleTime();
            mediaMuxer.writeSampleData(audioTrackIndex, buffer, info);
            mediaExtractor.advance();   // move on to the next sample
        }

        mediaExtractor.release();
        mediaMuxer.stop();
        mediaMuxer.release();

        return true;
    }

This operation must also run on a worker thread. Separating the video track works the same way; just check that the mime starts with video/ instead, as sketched below.
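
For reference, a minimal sketch of the branch that keeps the video track instead (everything else is identical to splitAudioFile above; videoPath and videoTrackIndex are assumed names for the output path and the muxer track):

            String mime = format.getString(MediaFormat.KEY_MIME);
            if (!mime.startsWith("video/")) {
                continue;
            }
            mediaExtractor.selectTrack(index);

            // Write the selected video track into its own MP4 file.
            mediaMuxer = new MediaMuxer(videoPath, MediaMuxer.OutputFormat.MUXER_OUTPUT_MPEG_4);
            videoTrackIndex = mediaMuxer.addTrack(format);
            mediaMuxer.start();
            break;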

MediaMuxer is used to multiplex elementary streams into a container.

  • First call MediaExtractor-->selectTrack(index) to select track index, because we are about to read data from that track;
  • Create a MediaMuxer object. The first parameter is the output path, and the second parameter is the container format; the following formats are currently supported:
    /** MPEG4 media file format */
    public static final int MUXER_OUTPUT_MPEG_4 = MUXER_OUTPUT_FIRST;
    /** WEBM media file format */
    public static final int MUXER_OUTPUT_WEBM = MUXER_OUTPUT_FIRST + 1;
    /** 3GPP media file format */
    public static final int MUXER_OUTPUT_3GPP = MUXER_OUTPUT_FIRST + 2;
    /** HEIF media file format */
    public static final int MUXER_OUTPUT_HEIF = MUXER_OUTPUT_FIRST + 3;
    /** Ogg media file format */
    public static final int MUXER_OUTPUT_OGG = MUXER_OUTPUT_FIRST + 4;
  • MediaMuxer-->addTrack(format) declares the MediaFormat of the output file; since we are separating one track out of the source file, we simply pass in the MediaFormat of the track we want to extract;
  • MediaMuxer-->start() creates the output file; it must be called after MediaMuxer-->addTrack(format);
  • MediaExtractor-->readSampleData(buffer, 0) reads the sample data of the selected track into buffer;
  • MediaMuxer-->writeSampleData(index, buffer, info) writes the sample that was just read;
  • MediaExtractor-->advance() moves on to the next sample so that new data can be read.
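
Both extraction and muxing are blocking I/O, so the call has to leave the main thread. A minimal sketch (inputPath and audioPath are assumed to be set; any thread or executor works):

        new Thread(() -> {
            try {
                boolean result = splitAudioFile(inputPath, audioPath);
                LogUtils.w("splitAudioFile result = " + result);
            } catch (IOException e) {
                e.printStackTrace();
            }
        }).start();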

3. Trimming a Video

Trimming a video involves two tasks:

  • trimming the video track of the file;
  • trimming the audio track of the file;
    public static boolean cutAudio(String inputPath, String outputPath, long start, long duration) throws IOException {
        MediaMuxer mediaMuxer = null;

        MediaExtractor mediaExtractor = new MediaExtractor();
        mediaExtractor.setDataSource(inputPath);

        mediaMuxer = new MediaMuxer(outputPath, MediaMuxer.OutputFormat.MUXER_OUTPUT_MPEG_4);

        int sourceVideoTrack = -1;
        int sourceAudioTrack = -1;
        int videoTrackIndex = -1;
        int audioTrackIndex = -1;

        for (int index = 0; index < mediaExtractor.getTrackCount(); index++) {
            MediaFormat format = mediaExtractor.getTrackFormat(index);
            LogUtils.w("format = " + format);
            String mime = format.getString(MediaFormat.KEY_MIME);
            if (mime.startsWith("audio/")) {
                sourceAudioTrack = index;
                audioTrackIndex = mediaMuxer.addTrack(format);
            } else if (mime.startsWith("video/")) {
                sourceVideoTrack = index;
                videoTrackIndex = mediaMuxer.addTrack(format);
            }
        }

        if (mediaMuxer == null) {
            return false;
        }

        mediaMuxer.start();


        //1.cut video track info.
        mediaExtractor.selectTrack(sourceVideoTrack);
        MediaCodec.BufferInfo info = new MediaCodec.BufferInfo();
        info.presentationTimeUs = 0;
        ByteBuffer buffer = ByteBuffer.allocate(500 * 1024);
        int sampleSize = 0;
        while ((sampleSize = mediaExtractor.readSampleData(buffer, 0)) > 0) {
            info.offset = 0;
            info.size = sampleSize;
            info.flags = mediaExtractor.getSampleFlags();
            info.presentationTimeUs = mediaExtractor.getSampleTime();
            if (info.presentationTimeUs <= start) {
                mediaExtractor.advance();
                continue;
            }
            mediaMuxer.writeSampleData(videoTrackIndex, buffer, info);
            if (info.presentationTimeUs > start + duration) {
                break;
            }
            mediaExtractor.advance();
        }

        //2.cut audio track info.
        mediaExtractor.unselectTrack(sourceVideoTrack);
        mediaExtractor.selectTrack(sourceAudioTrack);
        info = new MediaCodec.BufferInfo();
        info.presentationTimeUs = 0;
        buffer = ByteBuffer.allocate(500 * 1024);
        sampleSize = 0;
        while ((sampleSize = mediaExtractor.readSampleData(buffer, 0)) > 0) {
            info.offset = 0;
            info.size = sampleSize;
            info.flags = mediaExtractor.getSampleFlags();
            info.presentationTimeUs = mediaExtractor.getSampleTime();
            if (info.presentationTimeUs <= start) {
                mediaExtractor.advance();
                continue;
            }
            mediaMuxer.writeSampleData(audioTrackIndex, buffer, info);
            if (info.presentationTimeUs > start + duration) {
                break;
            }
            mediaExtractor.advance();
        }

        mediaExtractor.release();
        mediaMuxer.stop();
        mediaMuxer.release();

        return true;

    }

Of course this method also has to be called on a worker thread.
Trimming a video goes through the following steps:

    1. First use MediaExtractor to find the audio track and the video track of the source file, recording each track index together with its MediaFormat;
    2. Call MediaMuxer.addTrack(MediaFormat) to obtain the audio track and video track of the new output file; note that the output track indices are not necessarily the same as the source track indices. This is important and easy to mix up, which is why the code distinguishes sourceVideoTrack from videoTrackIndex;
    3. Call MediaMuxer.start() to begin muxing;
    4. Copy the required range of samples from the video track;
    5. Copy the required range of samples from the audio track.

Say start is 10 * 1000 * 1000 and duration is 20 * 1000 * 1000: that means we want the 20 seconds of the video beginning at the 10-second mark.
The way I skip the first 10 seconds here is simply to ignore any video or audio sample that falls within the first 10 seconds and never write it to the MediaMuxer.

Is there another way?
mediaExtractor.seekTo(start, MediaExtractor.SEEK_TO_PREVIOUS_SYNC);
This call jumps straight to the start position. It is a system API and perfectly usable, but in general I still recommend the approach above because it gives more precise control.
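
For completeness, here is a sketch of the seekTo variant for the video track, reusing the variables from cutAudio above. The assumption is that we keep everything from the preceding key frame (SEEK_TO_PREVIOUS_SYNC lands on a sync frame, and the GOP needs it to decode) and rebase the timestamps so the trimmed clip starts at 0; the audio track would be handled the same way:

        mediaExtractor.selectTrack(sourceVideoTrack);
        // Jump to the key frame at or before the cut point instead of reading from the beginning.
        mediaExtractor.seekTo(start, MediaExtractor.SEEK_TO_PREVIOUS_SYNC);
        long firstSampleTimeUs = mediaExtractor.getSampleTime();
        MediaCodec.BufferInfo info = new MediaCodec.BufferInfo();
        ByteBuffer buffer = ByteBuffer.allocate(500 * 1024);
        int sampleSize = 0;
        while ((sampleSize = mediaExtractor.readSampleData(buffer, 0)) > 0) {
            long sampleTimeUs = mediaExtractor.getSampleTime();
            if (sampleTimeUs > start + duration) {
                break;
            }
            info.offset = 0;
            info.size = sampleSize;
            info.flags = mediaExtractor.getSampleFlags();
            // Rebase the timestamps so the output clip starts at 0.
            info.presentationTimeUs = sampleTimeUs - firstSampleTimeUs;
            mediaMuxer.writeSampleData(videoTrackIndex, buffer, info);
            mediaExtractor.advance();
        }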

4. Merging Audio into Video

Merging audio and video takes two files: an audio file and a video file; the video file itself may contain both a video track and an audio track.
We extract the audio track from the audio file and the video track from the video file, then mux the two tracks into a single output file:

    public static boolean mergeMedia(String audioPath, String videoPath, String outputPath) throws IOException {
        MediaMuxer mediaMuxer = null;
        mediaMuxer = new MediaMuxer(outputPath, MediaMuxer.OutputFormat.MUXER_OUTPUT_MPEG_4);

        MediaExtractor videoExtractor = new MediaExtractor();
        videoExtractor.setDataSource(videoPath);

        MediaExtractor audioExtractor = new MediaExtractor();
        audioExtractor.setDataSource(audioPath);

        int sourceVideoTrack = -1;
        int videoTrackIndex = -1;
        for (int index = 0; index < videoExtractor.getTrackCount(); index++) {
            MediaFormat format = videoExtractor.getTrackFormat(index);
            LogUtils.w("format = " + format);
            String mime = format.getString(MediaFormat.KEY_MIME);
            if (mime.startsWith("video/")) {
                sourceVideoTrack = index;
                videoTrackIndex = mediaMuxer.addTrack(format);
                break;
            }
        }

        int sourceAudioTrack = -1;
        int audioTrackIndex = -1;
        for (int index = 0; index < audioExtractor.getTrackCount(); index++) {
            MediaFormat format = audioExtractor.getTrackFormat(index);
            LogUtils.w("format = " + format);
            String mime = format.getString(MediaFormat.KEY_MIME);
            if (mime.startsWith("audio/")) {
                sourceAudioTrack = index;
                audioTrackIndex = mediaMuxer.addTrack(format);
                break;
            }
        }

        if (mediaMuxer == null)
            return false;

        mediaMuxer.start();

        //1.write video track info into muxer.
        videoExtractor.selectTrack(sourceVideoTrack);
        MediaCodec.BufferInfo info = new MediaCodec.BufferInfo();
        info.presentationTimeUs = 0;
        ByteBuffer buffer = ByteBuffer.allocate(500 * 1024);
        int sampleSize = 0;
        while ((sampleSize = videoExtractor.readSampleData(buffer, 0)) > 0) {
            info.offset = 0;
            info.size = sampleSize;
            info.flags = videoExtractor.getSampleFlags();
            info.presentationTimeUs = videoExtractor.getSampleTime();
            mediaMuxer.writeSampleData(videoTrackIndex, buffer, info);
            videoExtractor.advance();
        }

        //2.write audio track info into muxer;
        audioExtractor.selectTrack(sourceAudioTrack);
        info = new MediaCodec.BufferInfo();
        info.presentationTimeUs = 0;
        buffer = ByteBuffer.allocate(500 * 1024);
        sampleSize = 0;
        while ((sampleSize = audioExtractor.readSampleData(buffer, 0)) > 0) {
            info.offset = 0;
            info.size = sampleSize;
            info.flags = audioExtractor.getSampleFlags();
            info.presentationTimeUs = audioExtractor.getSampleTime();
            mediaMuxer.writeSampleData(audioTrackIndex, buffer, info);
            audioExtractor.advance();
        }

        videoExtractor.release();
        audioExtractor.release();
        mediaMuxer.stop();
        mediaMuxer.release();

        return true;
    }

The method takes three parameters: the path of the audio file, the path of the video file, and the path of the output file.

    1. Create two MediaExtractor instances, audioExtractor and videoExtractor: one extracts the audio, the other extracts the video;
    2. audioExtractor only takes the audio track (normally there is exactly one); videoExtractor only takes the video track and stops as soon as a video track is found;
    3. MediaMuxer muxes the extracted audio track and video track together into a new file;
    4. Finally, remember to release the extractors and the MediaMuxer.

5. Concatenating Videos

Concatenating videos means taking two (or more) videos and appending one after another. The main points are:

  • extract the video track and the audio track from each of the two inputs;
  • when appending, line up the timestamps carefully, i.e. offset the second clip's timestamps correctly;
    public static boolean appendVideo(String inputPath1, String inputPath2, String outputPath) throws IOException {
        MediaMuxer mediaMuxer = null;
        mediaMuxer = new MediaMuxer(outputPath, MediaMuxer.OutputFormat.MUXER_OUTPUT_MPEG_4);

        MediaExtractor videoExtractor1 = new MediaExtractor();
        videoExtractor1.setDataSource(inputPath1);

        MediaExtractor videoExtractor2 = new MediaExtractor();
        videoExtractor2.setDataSource(inputPath2);

        int videoTrackIndex = -1;
        int audioTrackIndex = -1;
        long file1_duration = 0L;

        int sourceVideoTrack1 = -1;
        int sourceAudioTrack1 = -1;
        for (int index = 0; index < videoExtractor1.getTrackCount(); index++) {
            MediaFormat format = videoExtractor1.getTrackFormat(index);
            String mime = format.getString(MediaFormat.KEY_MIME);
            file1_duration = format.getLong(MediaFormat.KEY_DURATION);
            if (mime.startsWith("video/")) {
                sourceVideoTrack1 = index;
                videoTrackIndex = mediaMuxer.addTrack(format);
            } else if (mime.startsWith("audio/")) {
                sourceAudioTrack1 = index;
                audioTrackIndex = mediaMuxer.addTrack(format);
            }
        }

        int sourceVideoTrack2 = -1;
        int sourceAudioTrack2 = -1;
        for (int index = 0; index < videoExtractor2.getTrackCount(); index++) {
            MediaFormat format = videoExtractor2.getTrackFormat(index);
            String mime = format.getString(MediaFormat.KEY_MIME);
            if (mime.startsWith("video/")) {
                sourceVideoTrack2 = index;
            } else if (mime.startsWith("audio/")) {
                sourceAudioTrack2 = index;
            }
        }

        if (mediaMuxer == null)
            return false;

        mediaMuxer.start();
        //1.write first video track into muxer.
        videoExtractor1.selectTrack(sourceVideoTrack1);
        MediaCodec.BufferInfo info = new MediaCodec.BufferInfo();
        info.presentationTimeUs = 0;
        ByteBuffer buffer = ByteBuffer.allocate(500 * 1024);
        int sampleSize = 0;
        while ((sampleSize = videoExtractor1.readSampleData(buffer, 0)) > 0) {
            info.offset = 0;
            info.size = sampleSize;
            info.flags = videoExtractor1.getSampleFlags();
            info.presentationTimeUs = videoExtractor1.getSampleTime();
            mediaMuxer.writeSampleData(videoTrackIndex, buffer, info);
            videoExtractor1.advance();
        }

        //2.write first audio track into muxer.
        videoExtractor1.unselectTrack(sourceVideoTrack1);
        videoExtractor1.selectTrack(sourceAudioTrack1);
        info = new MediaCodec.BufferInfo();
        info.presentationTimeUs = 0;
        buffer = ByteBuffer.allocate(500 * 1024);
        sampleSize = 0;
        while ((sampleSize = videoExtractor1.readSampleData(buffer, 0)) > 0) {
            info.offset = 0;
            info.size = sampleSize;
            info.flags = videoExtractor1.getSampleFlags();
            info.presentationTimeUs = videoExtractor1.getSampleTime();
            mediaMuxer.writeSampleData(audioTrackIndex, buffer, info);
            videoExtractor1.advance();
        }

        //3.write second video track into muxer.
        videoExtractor2.selectTrack(sourceVideoTrack2);
        info = new MediaCodec.BufferInfo();
        info.presentationTimeUs = 0;
        buffer = ByteBuffer.allocate(500 * 1024);
        sampleSize = 0;
        while ((sampleSize = videoExtractor2.readSampleData(buffer, 0)) > 0) {
            info.offset = 0;
            info.size = sampleSize;
            info.flags = videoExtractor2.getSampleFlags();
            info.presentationTimeUs = videoExtractor2.getSampleTime() + file1_duration;
            mediaMuxer.writeSampleData(videoTrackIndex, buffer, info);
            videoExtractor2.advance();
        }

        //4.write second audio track into muxer.
        videoExtractor2.unselectTrack(sourceVideoTrack2);
        videoExtractor2.selectTrack(sourceAudioTrack2);
        info = new MediaCodec.BufferInfo();
        info.presentationTimeUs = 0;
        buffer = ByteBuffer.allocate(500 * 1024);
        sampleSize = 0;
        while ((sampleSize = videoExtractor2.readSampleData(buffer, 0)) > 0) {
            info.offset = 0;
            info.size = sampleSize;
            info.flags = videoExtractor2.getSampleFlags();
            info.presentationTimeUs = videoExtractor2.getSampleTime() + file1_duration;
            mediaMuxer.writeSampleData(audioTrackIndex, buffer, info);
            videoExtractor2.advance();
        }

        videoExtractor1.release();
        videoExtractor2.release();
        mediaMuxer.stop();
        mediaMuxer.release();

        return true;
    }
    1. Create two MediaExtractor instances, videoExtractor1 and videoExtractor2;
    2. Find the audio track and the video track in each of them;
    3. Add one audio track and one video track to the MediaMuxer;
    4. First write the video track and audio track from videoExtractor1;
    5. Then write the video track and audio track from videoExtractor2. Pay close attention to how info.presentationTimeUs is set here:
       info.presentationTimeUs = videoExtractor2.getSampleTime() + file1_duration; offsetting by file1_duration places these samples after video1 on the timeline.

6. Extracting Key Frames from a Video

When analysing a video, the key frames matter a great deal. Roughly speaking, the key frames alone already describe the whole video, while the remaining non-key frames are predicted from the frames around them.

    public static boolean getKeyFrames(String inputPath) throws IOException {
        MediaMetadataRetriever mRetriever = new MediaMetadataRetriever();
        mRetriever.setDataSource(inputPath);

        MediaExtractor mediaExtractor = new MediaExtractor();
        mediaExtractor.setDataSource(inputPath);

        int sourceVideoTrack = -1;
        for (int index=0; index < mediaExtractor.getTrackCount(); index++) {
            MediaFormat format = mediaExtractor.getTrackFormat(index);
            String mime = format.getString(MediaFormat.KEY_MIME);
            if (mime.startsWith("video/")) {
                sourceVideoTrack = index;
                break;
            }
        }

        if (sourceVideoTrack == -1)
            return false;

        mediaExtractor.selectTrack(sourceVideoTrack);
        ByteBuffer buffer = ByteBuffer.allocate(500 * 1024);
        List<Long> frameTimeList = new ArrayList<>();
        int sampleSize = 0;
        while((sampleSize = mediaExtractor.readSampleData(buffer, 0)) > 0) {
            int flags = mediaExtractor.getSampleFlags();
            if (flags > 0 && (flags & MediaExtractor.SAMPLE_FLAG_SYNC) != 0) {
                frameTimeList.add(mediaExtractor.getSampleTime());
            }
            mediaExtractor.advance();
        }
        LogUtils.d("getKeyFrames keyFrameCount = " + frameTimeList.size());

        String parentPath = (new File(inputPath)).getParent() + File.separator;
        LogUtils.d("getKeyFrames parent Path="+parentPath);
        for(int index = 0; index < frameTimeList.size(); index++) {
            Bitmap bitmap = mRetriever.getFrameAtTime(frameTimeList.get(index), MediaMetadataRetriever.OPTION_CLOSEST_SYNC);
            savePicFile(bitmap, parentPath + "test_pic_" + index + ".jpg");

        }

        // Release the extractor and the retriever once all key frames have been saved.
        mediaExtractor.release();
        mRetriever.release();
        return true;
    }

    private static void savePicFile(Bitmap bitmap, String savePath) throws IOException {
        if (bitmap == null) {
            LogUtils.d("savePicFile failed, bitmap is null.");
            return;
        }
        LogUtils.d("savePicFile step 1, bitmap is not null.");
        File file = new File(savePath);
        if (!file.exists()) {
            file.createNewFile();
        }
        FileOutputStream outputStream = new FileOutputStream(file);
        bitmap.compress(Bitmap.CompressFormat.JPEG, 100, outputStream);
        outputStream.flush();
        outputStream.close();
    }

Extracting key frames is very useful: they give us a rough summary of the video. A key frame is an I-frame; let's walk through the steps:

    1. Create a MediaExtractor instance to find the video track of the file;
    2. Record the sampleTime of every key frame. What counts as a key frame? If the sample's flags contain MediaExtractor.SAMPLE_FLAG_SYNC it is a key frame; otherwise it is not;
    3. Create a MediaMetadataRetriever instance and call MediaMetadataRetriever-->getFrameAtTime(time, flags). time is the timestamp of the key frame, and passing MediaMetadataRetriever.OPTION_CLOSEST_SYNC as flags returns the sync frame closest to that timestamp, which may lie before, after, or exactly at it;
    4. Save the resulting Bitmap to a file.

7. Generating Fast/Slow-Motion Videos
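
The idea follows directly from the remuxing loops above: keep the samples unchanged and scale every sample's presentationTimeUs. Below is a minimal sketch, not the article's implementation; changeVideoSpeed is an assumed helper, speed > 1.0 makes the video faster and speed < 1.0 slower. Note that simply scaling audio timestamps shifts the pitch, so the audio track is usually dropped or re-encoded with a time-stretching algorithm instead.

    public static void changeVideoSpeed(MediaExtractor extractor, MediaMuxer muxer,
                                        int sourceTrack, int muxerTrack, float speed) {
        extractor.selectTrack(sourceTrack);
        MediaCodec.BufferInfo info = new MediaCodec.BufferInfo();
        ByteBuffer buffer = ByteBuffer.allocate(500 * 1024);
        int sampleSize = 0;
        while ((sampleSize = extractor.readSampleData(buffer, 0)) > 0) {
            info.offset = 0;
            info.size = sampleSize;
            info.flags = extractor.getSampleFlags();
            // Scaling the timestamps changes the playback speed without touching the frame data.
            info.presentationTimeUs = (long) (extractor.getSampleTime() / (double) speed);
            muxer.writeSampleData(muxerTrack, buffer, info);
            extractor.advance();
        }
    }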

8. Reversing a Video

When scrolling through Douyin you often come across videos that play backwards. These videos are processed offline: all the frames are regenerated in reverse order and muxed into a new video.
Normally only the video is reversed, not the audio, so only the video side is covered here; reversing audio follows the same idea.
Here is one approach:

  • first make every frame of the video a key frame (which requires re-encoding);
  • then process the key frames in reverse order, because MediaExtractor can only seek reliably to key frames and handling non-key frames directly is not practical; a sketch of this step follows the list.
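
Assuming the first step is already done (every frame of the intermediate file is a sync frame), the second step can be sketched as a remux that walks the sample timestamps backwards. This is only a sketch under that assumption, not the article's implementation; durationUs is the track duration read from its MediaFormat:

    public static void reverseVideoTrack(MediaExtractor extractor, MediaMuxer muxer,
                                         int sourceTrack, int muxerTrack, long durationUs) {
        extractor.selectTrack(sourceTrack);

        // Pass 1: collect every sample timestamp (each one is a key frame by assumption).
        List<Long> sampleTimes = new ArrayList<>();
        while (extractor.getSampleTime() >= 0) {
            sampleTimes.add(extractor.getSampleTime());
            extractor.advance();
        }

        // Pass 2: walk the timestamps backwards, seek to each frame and write it with a mirrored timestamp.
        MediaCodec.BufferInfo info = new MediaCodec.BufferInfo();
        ByteBuffer buffer = ByteBuffer.allocate(500 * 1024);
        for (int i = sampleTimes.size() - 1; i >= 0; i--) {
            long originalTimeUs = sampleTimes.get(i);
            extractor.seekTo(originalTimeUs, MediaExtractor.SEEK_TO_CLOSEST_SYNC);
            int sampleSize = extractor.readSampleData(buffer, 0);
            if (sampleSize <= 0) {
                continue;
            }
            info.offset = 0;
            info.size = sampleSize;
            info.flags = extractor.getSampleFlags();
            // The last original frame becomes the first output frame.
            info.presentationTimeUs = durationUs - originalTimeUs;
            muxer.writeSampleData(muxerTrack, buffer, info);
        }
    }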

The demo code is available here:
VideoApplication ------ https://github.com/JeffMony/VideoApplication

(Figure: overview of the video editing features)

Thanks for following the WeChat official account JeffMony, which will keep bringing you audio and video knowledge.
