A Way to Play WebP and GIF on Android (Continued from the Previous Post)
In the previous post we looked at the implementation of FrameSequenceDrawable, one way to play WebP animations on Android, and at the end I previewed this follow-up. This post continues from there: it abstracts the idea behind FrameSequence into an AnimationSequenceDrawable that can play both WebP and GIF, and again the focus is on how the implementation works. If you have not read the previous post, I strongly recommend reading it first so you understand the overall playback mechanism, because the playback principle will not be covered again here. The previous post: Android中播放webp动画的一种方式:FrameSequenceDrawable
Playback demo
Before getting into the details, let's look at the playback in action:
Playing a GIF
Playing a WebP
I just want to use it
If you just want to use it, this post will not paste the code directly; you can find it on GitHub.
A class we haven't covered: FrameSequence
When introducing FrameSequenceDrawable in the previous post, we did not look closely at the FrameSequence source, so let's first see where FrameSequence is actually used inside FrameSequenceDrawable.
public FrameSequenceDrawable(FrameSequence frameSequence, BitmapProvider bitmapProvider) {
if (frameSequence == null || bitmapProvider == null) throw new IllegalArgumentException();
mFrameSequence = frameSequence;
mFrameSequenceState = frameSequence.createState();
//...omitted
mFrameSequenceState.getFrame(0, mFrontBitmap, -1);
initializeDecodingThread();
}
@Override
protected void finalize() throws Throwable {
try {
mFrameSequenceState.destroy();
} finally {
super.finalize();
}
}
/**
* Runs on decoding thread, only modifies mBackBitmap's pixels
*/
private Runnable mDecodeRunnable = new Runnable() {
@Override
public void run() {
//...omitted
boolean exceptionDuringDecode = false;
long invalidateTimeMs = 0;
try {
invalidateTimeMs = mFrameSequenceState.getFrame(nextFrame, bitmap, lastFrame);
} catch (Exception e) {
// Exception during decode: continue, but delay next frame indefinitely.
Log.e(TAG, "exception during decode: " + e);
exceptionDuringDecode = true;
}
//...omitted
}
};
private void scheduleDecodeLocked() {
mState = STATE_SCHEDULED;
mNextFrameToDecode = (mNextFrameToDecode + 1) % mFrameSequence.getFrameCount();
sDecodingThreadHandler.post(mDecodeRunnable);
}
@Override
public void draw(Canvas canvas) {
//...omitted
if (mNextFrameToDecode == mFrameSequence.getFrameCount() - 1) {
mCurrentLoop++;
if ((mLoopBehavior == LOOP_FINITE && mCurrentLoop == mLoopCount) ||
(mLoopBehavior == LOOP_DEFAULT && mCurrentLoop == mFrameSequence.getDefaultLoopCount())) {
continueLooping = false;
}
}
//...omitted
}
@Override
public int getIntrinsicWidth() {
return mFrameSequence.getWidth();
}
@Override
public int getIntrinsicHeight() {
return mFrameSequence.getHeight();
}
@Override
public int getOpacity() {
return mFrameSequence.isOpaque() ? PixelFormat.OPAQUE : PixelFormat.TRANSPARENT;
}
Judging from this code and what was covered in the previous post, FrameSequence mainly encapsulates the metadata of the frame sequence: width, height, frame count, loop count, opacity and so on, and that is mostly what FrameSequenceDrawable uses it for. mFrameSequenceState is of type FrameSequence.State; its job is to decode a given frame and to release the native resources, and the actual decoding work is implemented in the native layer. Let's look at the methods defined in the FrameSequence source.
public class FrameSequence {
static {
System.loadLibrary("framesequence");
}
private final long mNativeFrameSequence;
private final int mWidth;
private final int mHeight;
private final boolean mOpaque;
private final int mFrameCount;
private final int mDefaultLoopCount;
public int getWidth() {
return mWidth;
}
public int getHeight() {
return mHeight;
}
public boolean isOpaque() {
return mOpaque;
}
public int getFrameCount() {
return mFrameCount;
}
public int getDefaultLoopCount() {
return mDefaultLoopCount;
}
private static native FrameSequence nativeDecodeByteArray(byte[] data, int offset, int length);
private static native FrameSequence nativeDecodeStream(InputStream is, byte[] tempStorage);
private static native FrameSequence nativeDecodeByteBuffer(ByteBuffer buffer, int offset, int capacity);
private static native void nativeDestroyFrameSequence(long nativeFrameSequence);
private static native long nativeCreateState(long nativeFrameSequence);
private static native void nativeDestroyState(long nativeState);
private static native long nativeGetFrame(long nativeState, int frameNr,
Bitmap output, int previousFrameNr);
@SuppressWarnings("unused") // called by native
private FrameSequence(long nativeFrameSequence, int width, int height,
boolean opaque, int frameCount, int defaultLoopCount) {
...
}
public static FrameSequence decodeByteArray(byte[] data) {
return decodeByteArray(data, 0, data.length);
}
public static FrameSequence decodeByteArray(byte[] data, int offset, int length) {
...
return nativeDecodeByteArray(data, offset, length);
}
public static FrameSequence decodeByteBuffer(ByteBuffer buffer) {
...
return nativeDecodeByteBuffer(buffer, buffer.position(), buffer.remaining());
}
public static FrameSequence decodeStream(InputStream stream) {
...
return nativeDecodeStream(stream, tempStorage);
}
State createState() {
...
return new State(nativeState);
}
@Override
protected void finalize() throws Throwable {
...
}
static class State {
private long mNativeState;
public State(long nativeState) {
    ...
}
public void destroy() {
    ...
}
// TODO: consider adding alternate API for drawing into a SurfaceTexture
public long getFrame(int frameNr, Bitmap output, int previousFrameNr) {
    ...
    return nativeGetFrame(mNativeState, frameNr, output, previousFrameNr);
}
}
}
As you can see, all of the methods related to decoding the WebP image are encapsulated in FrameSequence; whenever FrameSequenceDrawable needs a particular frame, it just asks its internal FrameSequence.State object to decode it.
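Putting the pieces together, a minimal usage sketch looks roughly like this (it assumes the AOSP framesequence library with its single-argument FrameSequenceDrawable constructor that uses a default BitmapProvider; the asset name and imageView are placeholders):
InputStream in = null;
try {
    in = getResources().getAssets().open("anim.webp"); // placeholder asset
    FrameSequence sequence = FrameSequence.decodeStream(in); // decoded in native code
    FrameSequenceDrawable drawable = new FrameSequenceDrawable(sequence);
    imageView.setImageDrawable(drawable); // hand the drawable to an ImageView
} catch (IOException e) {
    e.printStackTrace();
}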
Can a different decoding library replace the .so?
If you have understood the design above, swapping in another library to decode WebP becomes easy. In the AnimationSequenceDrawable mentioned at the beginning, we use Fresco's animated-webp and animated-gif modules to decode each frame, so AnimationSequenceDrawable can play either WebP or GIF depending on which Sequence it is given. The overall class relationships are shown in the diagram below.
(UML class diagram: uml.png)
From the UML diagram above you can see that:
- AnimationSequenceDrawable is simply this project's name for FrameSequenceDrawable; its code is almost identical (copied over and then modified).
- BaseAnimationSequence is the abstract frame sequence, and it is the only sequence type AnimationSequenceDrawable depends on.
- FrescoSequence is a frame sequence implementation that extends BaseAnimationSequence; its decoding is backed by Fresco.
- BaseSequenceFactory is an abstract factory whose only job is to produce a BaseAnimationSequence from a given InputStream. FrescoSequence's inner classes FrescoWebpSequenceFactory and FrescoGifSequenceFactory create WebP and GIF FrescoSequence instances, respectively, from the InputStream they are given.
Before reading the source of these classes, let's see how this WebP-and-GIF playback is actually used. Here AnimationSequenceDrawable is used directly; if that feels cumbersome, you can look at the AnimationImageView wrapper in the GitHub project.
public void playGif(){
InputStream in = null;
try {
in = getResources().getAssets().open("lion.gif");
BaseSequenceFactory factory = FrescoSequence.getSequenceFactory(FrescoSequence.GIF);
final AnimationSequenceDrawable drawable = new AnimationSequenceDrawable(factory.createSequence(in));
drawable.setLoopCount(1);
drawable.setLoopBehavior(AnimationSequenceDrawable.LOOP_FINITE);
drawable.setOnFinishedListener(new AnimationSequenceDrawable.OnFinishedListener() {
@Override
public void onFinished(AnimationSequenceDrawable frameSequenceDrawable) {
}
});
mGifImage.setImageDrawable(drawable);
} catch (IOException e) {
e.printStackTrace();
}
}
public void playWebp(){
InputStream in = null;
try {
in = getResources().getAssets().open("rmb.webp");
BaseSequenceFactory factory = FrescoSequence.getSequenceFactory(FrescoSequence.WEBP);
final AnimationSequenceDrawable drawable = new AnimationSequenceDrawable(factory.createSequence(in));
drawable.setLoopCount(1);
drawable.setLoopBehavior(AnimationSequenceDrawable.LOOP_FINITE);
drawable.setOnFinishedListener(new AnimationSequenceDrawable.OnFinishedListener() {
@Override
public void onFinished(AnimationSequenceDrawable frameSequenceDrawable) {
}
});
mFrescoImage.setImageDrawable(drawable);
} catch (IOException e) {
e.printStackTrace();
}
}
As you can see, AnimationSequenceDrawable does not depend on any concrete Sequence; BaseSequenceFactory.createSequence returns a BaseAnimationSequence. The benefit of this design is that if one day we need to play some other kind of animated frame sequence, or we find a more efficient decoding library, we only have to implement the corresponding BaseSequenceFactory and BaseAnimationSequence subclasses; AnimationSequenceDrawable itself does not need to change at all. A hypothetical sketch of such an extension follows, and after that we will go through the main classes.
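Purely as an illustration of that extension point, here is a skeleton of what a new format would need; MyApngSequence and its Factory are invented names, and the decoder behind them is left as a placeholder:
import android.graphics.Bitmap;
import java.io.InputStream;

public class MyApngSequence extends BaseAnimationSequence {

    public MyApngSequence(int width, int height, int frameCount, int defaultLoopCount) {
        super(width, height, frameCount, defaultLoopCount);
    }

    @Override
    public long getFrame(int frameNr, Bitmap output, int previousFrameNr) {
        // render frame frameNr into output with whatever decoder backs this class,
        // then return the duration of the frame before frameNr
        return 0;
    }

    @Override
    public boolean isOpaque() {
        return false;
    }

    public static class Factory extends BaseSequenceFactory {
        @Override
        public BaseAnimationSequence createSequence(InputStream inputStream) {
            // parse the stream, read width/height/frameCount/loopCount and build the sequence
            throw new UnsupportedOperationException("plug a real decoder in here");
        }
    }
}
With that in place, new AnimationSequenceDrawable(new MyApngSequence.Factory().createSequence(in)) would play the new format without touching AnimationSequenceDrawable.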
AnimationSequenceDrawable is almost identical to FrameSequenceDrawable; the only difference is that every place that depended on a FrameSequence or FrameSequence.State object now uses a BaseAnimationSequence instead. If you are interested you can read the full code yourself, so it is not pasted here; the sketch below shows roughly what that substitution looks like, and after it we will look at the BaseAnimationSequence code.
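The following is only a rough sketch based on the FrameSequenceDrawable constructor excerpt shown earlier, not a verbatim copy of the project code; it shows the shape of the change once FrameSequence is replaced by BaseAnimationSequence:
public AnimationSequenceDrawable(BaseAnimationSequence sequence, BitmapProvider bitmapProvider) {
    if (sequence == null || bitmapProvider == null) throw new IllegalArgumentException();
    mSequence = sequence;
    //...omitted
    mSequence.getFrame(0, mFrontBitmap, -1); // no State object anymore, the sequence decodes directly
    initializeDecodingThread();
}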
abstract public class BaseAnimationSequence {
private final int mWidth;
private final int mHeight;
private final int mFrameCount;
private final int mDefaultLoopCount;
public int getWidth() {
return mWidth;
}
public int getHeight() {
return mHeight;
}
public int getFrameCount() {
return mFrameCount;
}
public int getDefaultLoopCount() {
return mDefaultLoopCount;
}
public BaseAnimationSequence(int width, int height, int frameCount, int defaultLoopCount){
mWidth = width;
mHeight = height;
mFrameCount = frameCount;
mDefaultLoopCount = defaultLoopCount;
}
/**
 * Decodes frame {@code frameNr} into {@code output}.
 * @param frameNr index of the frame to decode
 * @param output the Bitmap the frame is rendered into
 * @param previousFrameNr index of the previously rendered frame
 * @return the duration in ms of the frame before {@code frameNr}
 */
abstract public long getFrame(int frameNr, Bitmap output, int previousFrameNr);
/**
 * @return whether the frames of this sequence are fully opaque
 */
abstract public boolean isOpaque();
}
As you can see, BaseAnimationSequence has essentially the same methods as FrameSequence, except that I removed the State inner class and made getFrame an abstract method. Next, let's look at BaseSequenceFactory.
abstract public class BaseSequenceFactory {
/**
 * Creates a BaseAnimationSequence from the given stream.
 * @param inputStream stream containing the encoded animated image
 * @return the decoded frame sequence
 */
abstract public BaseAnimationSequence createSequence(InputStream inputStream);
}
BaseSequenceFactory currently has only one method, createSequence, which builds a BaseAnimationSequence from the given InputStream. Why did I make BaseAnimationSequence and BaseSequenceFactory abstract classes rather than interfaces? Mainly because if some common method needs to be added one day (for example, whether the image is opaque) and the decoding library in use does not support it, or if some methods share one implementation, I can simply write a default implementation in the abstract class; with an interface that would not be as convenient. The small sketch right below illustrates the idea. After that, let's look at the last class, FrescoSequence, which also contains the two BaseSequenceFactory subclasses.
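Just to illustrate that reasoning (this is not the current code), the abstract class would let isOpaque be demoted from an abstract method to a shared default, which subclasses override only when their decoder exposes real opacity information:
abstract public class BaseAnimationSequence {
    // ... fields, constructor and the other methods stay exactly as above ...

    // hypothetical shared default instead of an abstract method
    public boolean isOpaque() {
        return false;
    }
}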
public class FrescoSequence extends BaseAnimationSequence {
public static final int WEBP = 1;
public static final int GIF = 2;
@IntDef({WEBP, GIF})
@Retention(RetentionPolicy.SOURCE)
public @interface ImageType {}
//a class from the Fresco library that abstracts an animated image
private AnimatedImage mWebpImage;
public FrescoSequence(AnimatedImage image){
this(image.getWidth(),image.getHeight(),image.getFrameCount(),image.getLoopCount());
mWebpImage = image;
}
private FrescoSequence(int width, int height, int frameCount, int defaultLoopCount) {
super(width, height, frameCount, defaultLoopCount);
}
@Override
public long getFrame(int frameNr, Bitmap output, int previousFrameNr) {
//render frame frameNr into output and return the duration of the frame before frameNr
AnimatedImageFrame frame = mWebpImage.getFrame(frameNr);
frame.renderFrame(mWebpImage.getWidth(),mWebpImage.getHeight(),output);
int lastFrame = (frameNr + mWebpImage.getFrameCount() - 1) % mWebpImage.getFrameCount();
return mWebpImage.getFrame(lastFrame).getDurationMs();
}
@Override
public boolean isOpaque() {
return false;
}
public static FrescoSequence decodeStream(InputStream in,@ImageType int type){
ByteArrayOutputStream out = new ByteArrayOutputStream();
byte[] buff = new byte[1024];
int rc;
try {
while ((rc = in.read(buff,0,buff.length)) > 0){
out.write(buff,0,rc);
}
} catch (IOException e) {
e.printStackTrace();
return null;
}
byte[] bytes = out.toByteArray();
switch (type){
case GIF :
return decodeGifPByteArray(bytes);
default:
return decodeWebPByteArray(bytes);
}
}
public static FrescoSequence decodeWebPByteArray(byte[] data){
return new FrescoSequence(WebPImage.create(data));
}
public static FrescoSequence decodeGifPByteArray(byte[] data){
return new FrescoSequence(GifImage.create(data));
}
public static BaseSequenceFactory getSequenceFactory(int srcType) {
if(srcType == GIF) {
return new FrescoGifSequenceFactory();
}else {
return new FrescoWebpSequenceFactory();
}
}
public static class FrescoWebpSequenceFactory extends BaseSequenceFactory {
@Override
public BaseAnimationSequence createSequence(InputStream inputStream) {
return decodeStream(inputStream,WEBP);
}
}
public static class FrescoGifSequenceFactory extends BaseSequenceFactory {
@Override
public BaseAnimationSequence createSequence(InputStream inputStream) {
return decodeStream(inputStream,GIF);
}
}
}
FrescoSequence mainly implements the methods of BaseAnimationSequence and BaseSequenceFactory, using Fresco to decode each frame; if you are familiar with Fresco, AnimatedImage, WebPImage and GifImage should not look strange. One more thing worth noting is the return value of getFrame: as explained in the previous post, it is the duration of the frame before frameNr.
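To make the wrap-around arithmetic in getFrame concrete, here is a tiny worked example (the frame count of 10 is made up):
int frameCount = 10; // say the animation has frames 0..9
int frameNr = 0;     // we are asked to decode frame 0
int lastFrame = (frameNr + frameCount - 1) % frameCount; // == 9
// getFrame(0, ...) renders frame 0 into the output Bitmap, but the value it
// returns is the duration of frame 9, i.e. of the frame shown just before it;
// the drawable uses that value as the delay before displaying the new frame.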
That wraps up this introduction to playing WebP and GIF on Android. I have only written down, based on my own understanding, the parts I felt needed explaining; if you are interested, please read the source and feel free to discuss it or suggest better approaches 😁
Related code repository