
SDWebImage from Every Angle: Decoding

2019-05-16 · 王技术

I plan to work through the SDWebImage source code over a few articles.
There is quite a lot of source, so I've decided to cover each module separately.
The modules are: scheduling, downloading, caching, decoding, plus some general code cleanup.
Scheduling module: see here
Cache module: see here
Download module: see here
Decoding module: see here
Code cleanup: see here


This article covers decoding.
Whether an image is found on disk or downloaded from the network,
SD decodes it for us before handing it to the UI for display.
Here we go through that decoding step in detail.


Why images need to be decoded

To answer that question we first need to know what a bitmap is.
A bitmap is simply an array of pixels,
where each pixel in the array represents one point of the image.
The JPEG and PNG images we use every day in apps are compressed forms of bitmaps.
Take the following PNG image, 30 × 30 pixels and 843 B on disk:


[Image: asd.png, 30 × 30, 843 B]

The code below lets us inspect the image's raw pixel data, which comes to 3,600 B:

UIImage *image = [UIImage imageNamed:@"asd"];
// Copy the raw (decoded) pixel data out of the image's data provider.
CFDataRef rawData = CGDataProviderCopyData(CGImageGetDataProvider(image.CGImage));
NSLog(@" \n %@", rawData);
CFRelease(rawData); // "Copy" rule: we own this CFData and must release it

The log shows the raw pixel buffer, 3,600 bytes in total.

So where does this 3,600 B come from?
Does it have any necessary relationship to the file size or the pixel dimensions?
In fact, the decoded size has nothing to do with the original file size; it depends only on the pixel dimensions:

decoded image size = pixel width (30) × pixel height (30) × bytes per pixel (4) = 3,600 B
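
As a quick cross-check, the same number can be read off the CGImage itself. A minimal sketch (reusing the asd image from above; note that bytesPerRow may include row-alignment padding, so the buffer can be slightly larger than width × height × 4):

UIImage *image = [UIImage imageNamed:@"asd"];
CGImageRef cgImage = image.CGImage;
size_t bytesPerRow = CGImageGetBytesPerRow(cgImage); // 30 * 4 = 120 for this image
size_t height = CGImageGetHeight(cgImage);           // 30
NSLog(@"decoded size: %zu bytes", bytesPerRow * height); // 3600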

Then is the bitmap related to the image's binary file data that we often talk about?
In fact, the two are completely independent things; there is no necessary connection between them.
To drive the point home, I changed the image's extension to .txt and dropped it into Sublime Text 2
to see its binary data; its size matches the original file size, 843 B:

8950 4e47 0d0a 1a0a 0000 000d 4948 4452 0000 001e 0000 001e 0806 0000 003b 30ae a200
0000 0173 5247 4200 aece 1ce9 0000 0305 4944 4154 480d c557 4d68 1341 149e 3709 da4d
09c6 8a56 2385 9e14 f458 4fa2 d092 f4a6 28d8 2222 de04 3d09 a1d0 7a50 0954 8bad 2d05
4fde 3c89 482b 2ad6 8334 d183 e049 ef9e 4a41 48b0 42eb a549 6893 1ddf 9bcd b4d9 d9d9
4dd8 a43a b0d9 9d79 3fdf bc79 3ff3 02ac 8591 1559 3e97 9b3e 5b05 fb32 6330 c098 48a2
183d 340a b886 8ff8 1e15 fced 587a e26b 16b2 b643 f2ff 057f 1263 fd9f fbbb 7ed7 7edd
1142 8c09 268e 04f1 2a1a 3058 0380 b9c3 91de a7ab 43ab 15b5 aebf 7d81 ad65 eb0a 5a31
8f4f 9f2e d4da 1c7e e249 64ca c3e5 d726 7eae 2fa2 7510 cb75 3d62 cc5e 0c0f 4a5a 69c3
...
36ac b11e 7006 f71b 5386 a2b7 1e48 ad82 a26a 2880 95db 3f8b f525 b880 e0ed 7221 75f1
fa02 2cd4 1af7 1d0e 546a 98e5 d4ae 342a 337e 6b96 134f 1ba0 0c0b c83b a0f2 3593 7b5c
6ca9 b541 cb4f 254e df58 d958 8955 a0fc 2638 658c 2660 f986 b5f1 f4dd 63f2 5aec ce59
e3b6 b0a7 cdac ee55 145c c7dc 8f60 f53f e0a6 b436 e3c0 27b0 8ecf 5054 336a ccd0 e1d8
2335 1f78 323d 6141 09c3 c1aa 5f8b 4e37 0899 e6b0 ed72 4046 759e d262 5247 9d01 1689
a976 55fb c993 6ed5 7d10 8ff4 b162 fe6f cd1e ee4a d4bb c18e 594e 96ea 1da6 c762 6539
bdff 7943 afc0 c91f bdd1 a327 28fc 29f7 d47a b337 f192 0cc9 36fa 5497 73f9 5827 aa39
1599 4eff 69fb 0b0d 1f7a 96cd 3eb0 7800 0000 0049 454e 44ae 4260 82

In fact, both JPEG and PNG
are compressed bitmap formats.
PNG is lossless compression and supports an alpha channel,
while JPEG is lossy compression whose compression ratio can be set anywhere from 0 to 100%.
It is worth mentioning that Apple's SDK provides two dedicated functions for producing PNG and JPEG data:

// return image as PNG. May return nil if image has no CGImageRef or invalid bitmap format
UIKIT_EXTERN NSData * __nullable UIImagePNGRepresentation(UIImage * __nonnull image);

// return image as JPEG. May return nil if image has no CGImageRef or invalid bitmap format. compression is 0(most)..1(least)                           
UIKIT_EXTERN NSData * __nullable UIImageJPEGRepresentation(UIImage * __nonnull image, CGFloat compressionQuality);
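
A quick usage sketch (the 0.8 compression quality here is an arbitrary example):

UIImage *image = [UIImage imageNamed:@"asd"];
NSData *pngData  = UIImagePNGRepresentation(image);       // lossless, preserves alpha
NSData *jpegData = UIImageJPEGRepresentation(image, 0.8); // lossy; 0.0 = smallest file, 1.0 = best quality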

So before an image on disk can be rendered to the screen,
its raw pixel data has to be obtained first;
only then can the subsequent drawing take place.
That is why images need to be decoded.


When an image is created with the UIImage or CGImageSource APIs,
its data is not decoded right away.
Only once the image is assigned to a UIImageView or to CALayer.contents,
and the CALayer is about to be committed to the GPU,
does the data inside the CGImage actually get decoded.
That step happens on the main thread, which can cause performance problems.
Hence the common solution in the industry: force-decode the image ahead of time on a background thread.
Forced decoding works by redrawing the image to obtain a new, already-decoded bitmap.
The core function involved is CGBitmapContextCreate,
which creates a bitmap context for drawing a bitmap that is width pixels wide and height pixels high:

/* Create a bitmap context. The context draws into a bitmap which is `width'
   pixels wide and `height' pixels high. The number of components for each
   pixel is specified by `space', which may also specify a destination color
   profile. The number of bits for each component of a pixel is specified by
   `bitsPerComponent'. The number of bytes per pixel is equal to
   `(bitsPerComponent * number of components + 7)/8'. Each row of the bitmap
   consists of `bytesPerRow' bytes, which must be at least `width * bytes
   per pixel' bytes; in addition, `bytesPerRow' must be an integer multiple
   of the number of bytes per pixel. `data', if non-NULL, points to a block
   of memory at least `bytesPerRow * height' bytes. If `data' is NULL, the
   data for context is allocated automatically and freed when the context is
   deallocated. `bitmapInfo' specifies whether the bitmap should contain an
   alpha channel and how it's to be generated, along with whether the
   components are floating-point or integer. */
CG_EXTERN CGContextRef __nullable CGBitmapContextCreate(void * __nullable data,
    size_t width, size_t height, size_t bitsPerComponent, size_t bytesPerRow,
    CGColorSpaceRef cg_nullable space, uint32_t bitmapInfo)
    CG_AVAILABLE_STARTING(__MAC_10_0, __IPHONE_2_0);

The meaning of each CGBitmapContextCreate parameter, per the header comment above:
data : a block of memory of at least bytesPerRow * height bytes to draw into; pass NULL to let the context allocate (and later free) the buffer itself
width / height : the bitmap's dimensions in pixels
bitsPerComponent : bits per component of a pixel (8 for the RGBA8888 formats used here)
bytesPerRow : bytes per row, at least width * bytes per pixel; passing 0 lets the system compute a suitable (aligned) value
space : the color space, which may also specify a destination color profile
bitmapInfo : whether the bitmap contains an alpha channel, how it is generated, and whether the components are floating-point or integer
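
Putting the parameters together, here is a minimal sketch of what a forced decode looks like (just the bare idea; SD's actual implementation follows below):

CGImageRef imageRef = image.CGImage;
size_t width = CGImageGetWidth(imageRef);
size_t height = CGImageGetHeight(imageRef);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGBitmapInfo bitmapInfo = kCGBitmapByteOrder32Host | kCGImageAlphaPremultipliedFirst;
// Redraw the image into a new bitmap context; the resulting CGImage is already decoded.
// (A real implementation should also check that context != NULL.)
CGContextRef context = CGBitmapContextCreate(NULL, width, height, 8, 0, colorSpace, bitmapInfo);
CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
CGImageRef decodedRef = CGBitmapContextCreateImage(context);
UIImage *decodedImage = [UIImage imageWithCGImage:decodedRef scale:image.scale orientation:image.imageOrientation];
CGImageRelease(decodedRef);
CGContextRelease(context);
CGColorSpaceRelease(colorSpace);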

The decoding implementation in SD

Decoding in SD is split into a large-image path and a plain (small-image) path:

- (UIImage *)decompressedImageWithImage:(UIImage *)image
                                   data:(NSData *__autoreleasing  _Nullable *)data
                                options:(nullable NSDictionary<NSString*, NSObject*>*)optionsDict {
#if SD_MAC
    return image;
#endif
#if SD_UIKIT || SD_WATCH
    /** First check the user's options: was large-image (scale-down) decoding requested? */
    BOOL shouldScaleDown = NO;
    if (optionsDict != nil) {
        NSNumber *scaleDownLargeImagesOption = nil;
        if ([optionsDict[SDWebImageCoderScaleDownLargeImagesKey] isKindOfClass:[NSNumber class]]) {
            scaleDownLargeImagesOption = (NSNumber *)optionsDict[SDWebImageCoderScaleDownLargeImagesKey];
        }
        if (scaleDownLargeImagesOption != nil) {
            shouldScaleDown = [scaleDownLargeImagesOption boolValue];
        }
    }
    /** Scale-down not requested: use the plain (small-image) decode path */
    if (!shouldScaleDown) {
        return [self sd_decompressedImageWithImage:image];
    } else {
        /** Large-image decode: decompress and scale down */
        UIImage *scaledDownImage = [self sd_decompressedAndScaledDownImageWithImage:image];
        if (scaledDownImage && !CGSizeEqualToSize(scaledDownImage.size, image.size)) {
            // if the image is scaled down, need to modify the data pointer as well
            SDImageFormat format = [NSData sd_imageFormatForImageData:*data];
            NSData *imageData = [self encodedDataWithImage:scaledDownImage format:format];
            if (imageData) {
                *data = imageData;
            }
        }
        return scaledDownImage;
    }
#endif
}
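
For context, the scale-down branch is normally reached by passing the corresponding option when loading an image; roughly like this (a sketch assuming the standard UIImageView category in SDWebImage 4.x, with a placeholder URL):

// Ask SD to decode-and-downscale very large images.
[imageView sd_setImageWithURL:[NSURL URLWithString:@"https://example.com/huge.jpg"]
             placeholderImage:nil
                      options:SDWebImageScaleDownLargeImages];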

Let's look at the plain (small-image) decode first;
it is a straightforward use of CGBitmapContextCreate:

- (nullable UIImage *)sd_decompressedImageWithImage:(nullable UIImage *)image {
    if (![[self class] shouldDecodeImage:image]) {
        return image;
    }
    // Autorelease the bitmap context and all variables to help the system free memory under memory warnings
    @autoreleasepool{
        
        CGImageRef imageRef = image.CGImage;
        // device color space
        CGColorSpaceRef colorspaceRef = SDCGColorSpaceGetDeviceRGB();
        BOOL hasAlpha = SDCGImageRefContainsAlpha(imageRef);
        // iOS display alpha info (BGRA8888/BGRX8888)
        CGBitmapInfo bitmapInfo = kCGBitmapByteOrder32Host;
        bitmapInfo |= hasAlpha ? kCGImageAlphaPremultipliedFirst : kCGImageAlphaNoneSkipFirst;
        
        size_t width = CGImageGetWidth(imageRef);
        size_t height = CGImageGetHeight(imageRef);
        
        // kCGImageAlphaNone is not supported in CGBitmapContextCreate.
        // Since the original image here has no alpha info, use kCGImageAlphaNoneSkipLast
        // to create bitmap graphics contexts without alpha info.
        CGContextRef context = CGBitmapContextCreate(NULL,
                                                     width,
                                                     height,
                                                     kBitsPerComponent,
                                                     0,
                                                     colorspaceRef,
                                                     bitmapInfo);
        if (context == NULL) {
            return image;
        }
        
        // Draw the image into the context and retrieve the new bitmap image without alpha
        CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
        CGImageRef imageRefWithoutAlpha = CGBitmapContextCreateImage(context);
        UIImage *imageWithoutAlpha = [[UIImage alloc] initWithCGImage:imageRefWithoutAlpha scale:image.scale orientation:image.imageOrientation];
        CGContextRelease(context);
        CGImageRelease(imageRefWithoutAlpha);
        
        return imageWithoutAlpha;
    }
}
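
The helpers used above (shouldDecodeImage:, SDCGColorSpaceGetDeviceRGB, SDCGImageRefContainsAlpha) are small utilities shipped alongside the coder. Paraphrased roughly (not a verbatim copy of SD's source), the two checks boil down to the following; SDCGColorSpaceGetDeviceRGB simply returns a cached device-RGB color space.

// Skip nil images and animated images (image.images != nil); everything else gets decoded.
+ (BOOL)shouldDecodeImage:(nullable UIImage *)image {
    if (image == nil) {
        return NO;
    }
    if (image.images != nil) {
        return NO;
    }
    return YES;
}

// YES unless the alpha info says the image has no alpha channel at all.
static BOOL SDCGImageRefContainsAlpha(CGImageRef imageRef) {
    if (!imageRef) {
        return NO;
    }
    CGImageAlphaInfo alphaInfo = CGImageGetAlphaInfo(imageRef);
    return !(alphaInfo == kCGImageAlphaNone ||
             alphaInfo == kCGImageAlphaNoneSkipFirst ||
             alphaInfo == kCGImageAlphaNoneSkipLast);
}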

Next, the large-image decode:

- (nullable UIImage *)sd_decompressedAndScaledDownImageWithImage:(nullable UIImage *)image {
    if (![[self class] shouldDecodeImage:image]) {
        return image;
    }
    // Check whether the image really needs the large-image path; if not, fall back to the plain decode
    if (![[self class] shouldScaleDownImage:image]) {
        return [self sd_decompressedImageWithImage:image];
    }
    
    CGContextRef destContext;
    
    // autorelease the bitmap context and all vars to help system to free memory when there are memory warning.
    // on iOS7, do not forget to call [[SDImageCache sharedImageCache] clearMemory];
    @autoreleasepool {
        CGImageRef sourceImageRef = image.CGImage;
        
        CGSize sourceResolution = CGSizeZero;
        sourceResolution.width = CGImageGetWidth(sourceImageRef);
        sourceResolution.height = CGImageGetHeight(sourceImageRef);
        float sourceTotalPixels = sourceResolution.width * sourceResolution.height;
        // Determine the scale ratio to apply to the input image
        // that results in an output image of the defined size.
        // see kDestImageSizeMB, and how it relates to destTotalPixels.
        float imageScale = kDestTotalPixels / sourceTotalPixels;
        CGSize destResolution = CGSizeZero;
        destResolution.width = (int)(sourceResolution.width*imageScale);
        destResolution.height = (int)(sourceResolution.height*imageScale);
        
        // device color space
        CGColorSpaceRef colorspaceRef = SDCGColorSpaceGetDeviceRGB();
        BOOL hasAlpha = SDCGImageRefContainsAlpha(sourceImageRef);
        // iOS display alpha info (BGRA8888/BGRX8888)
        CGBitmapInfo bitmapInfo = kCGBitmapByteOrder32Host;
        bitmapInfo |= hasAlpha ? kCGImageAlphaPremultipliedFirst : kCGImageAlphaNoneSkipFirst;
        
        // kCGImageAlphaNone is not supported in CGBitmapContextCreate.
        // Since the original image here has no alpha info, use kCGImageAlphaNoneSkipLast
        // to create bitmap graphics contexts without alpha info.
        destContext = CGBitmapContextCreate(NULL,
                                            destResolution.width,
                                            destResolution.height,
                                            kBitsPerComponent,
                                            0,
                                            colorspaceRef,
                                            bitmapInfo);
        
        if (destContext == NULL) {
            return image;
        }
        CGContextSetInterpolationQuality(destContext, kCGInterpolationHigh);
        
        // Now define the size of the rectangle to be used for the
        // incremental blits from the input image to the output image.
        // we use a source tile width equal to the width of the source
        // image due to the way that iOS retrieves image data from disk.
        // iOS must decode an image from disk in full width 'bands', even
        // if current graphics context is clipped to a subrect within that
        // band. Therefore we fully utilize all of the pixel data that results
        // from a decoding opertion by achnoring our tile size to the full
        // width of the input image.
        CGRect sourceTile = CGRectZero;
        sourceTile.size.width = sourceResolution.width;
        // The source tile height is dynamic. Since we specified the size
        // of the source tile in MB, see how many rows of pixels high it
        // can be given the input image width.
        sourceTile.size.height = (int)(kTileTotalPixels / sourceTile.size.width );
        sourceTile.origin.x = 0.0f;
        // The output tile is the same proportions as the input tile, but
        // scaled to image scale.
        CGRect destTile;
        destTile.size.width = destResolution.width;
        destTile.size.height = sourceTile.size.height * imageScale;
        destTile.origin.x = 0.0f;
        // The source seem overlap is proportionate to the destination seem overlap.
        // this is the amount of pixels to overlap each tile as we assemble the ouput image.
        float sourceSeemOverlap = (int)((kDestSeemOverlap/destResolution.height)*sourceResolution.height);
        CGImageRef sourceTileImageRef;
        // calculate the number of read/write operations required to assemble the
        // output image.
        int iterations = (int)( sourceResolution.height / sourceTile.size.height );
        // If tile height doesn't divide the image height evenly, add another iteration
        // to account for the remaining pixels.
        int remainder = (int)sourceResolution.height % (int)sourceTile.size.height;
        if(remainder) {
            iterations++;
        }
        // Add seem overlaps to the tiles, but save the original tile height for y coordinate calculations.
        float sourceTileHeightMinusOverlap = sourceTile.size.height;
        sourceTile.size.height += sourceSeemOverlap;
        destTile.size.height += kDestSeemOverlap;
        for( int y = 0; y < iterations; ++y ) {
            @autoreleasepool {
                sourceTile.origin.y = y * sourceTileHeightMinusOverlap + sourceSeemOverlap;
                destTile.origin.y = destResolution.height - (( y + 1 ) * sourceTileHeightMinusOverlap * imageScale + kDestSeemOverlap);
                sourceTileImageRef = CGImageCreateWithImageInRect( sourceImageRef, sourceTile );
                if( y == iterations - 1 && remainder ) {
                    float dify = destTile.size.height;
                    destTile.size.height = CGImageGetHeight( sourceTileImageRef ) * imageScale;
                    dify -= destTile.size.height;
                    destTile.origin.y += dify;
                }
                CGContextDrawImage( destContext, destTile, sourceTileImageRef );
                CGImageRelease( sourceTileImageRef );
            }
        }
        
        CGImageRef destImageRef = CGBitmapContextCreateImage(destContext);
        CGContextRelease(destContext);
        if (destImageRef == NULL) {
            return image;
        }
        UIImage *destImage = [[UIImage alloc] initWithCGImage:destImageRef scale:image.scale orientation:image.imageOrientation];
        CGImageRelease(destImageRef);
        if (destImage == nil) {
            return image;
        }
        return destImage;
    }
}

Here is the check that decides whether the large-image path should be taken:

/** Does the image need to be scaled down? */
+ (BOOL)shouldScaleDownImage:(nonnull UIImage *)image {
    BOOL shouldScaleDown = YES;
    
    CGImageRef sourceImageRef = image.CGImage;
    CGSize sourceResolution = CGSizeZero;
    sourceResolution.width = CGImageGetWidth(sourceImageRef);
    sourceResolution.height = CGImageGetHeight(sourceImageRef);
    float sourceTotalPixels = sourceResolution.width * sourceResolution.height;
    float imageScale = kDestTotalPixels / sourceTotalPixels;
    if (imageScale < 1) {
        shouldScaleDown = YES;
    } else {
        shouldScaleDown = NO;
    }
    
    return shouldScaleDown;
}

The size thresholds are hard-coded constants:

static const size_t kBytesPerPixel = 4;
static const size_t kBitsPerComponent = 8;

static const CGFloat kDestImageSizeMB = 60.0f;
// Tile size and seam overlap used by the large-image (tiled) decode above.
static const CGFloat kSourceImageTileSizeMB = 20.0f;

static const CGFloat kBytesPerMB = 1024.0f * 1024.0f;
static const CGFloat kPixelsPerMB = kBytesPerMB / kBytesPerPixel;
static const CGFloat kDestTotalPixels = kDestImageSizeMB * kPixelsPerMB;
static const CGFloat kTileTotalPixels = kSourceImageTileSizeMB * kPixelsPerMB;

static const CGFloat kDestSeemOverlap = 2.0f; // pixels to overlap at the seams where tiles meet
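
Plugging the numbers in: kPixelsPerMB = 1,048,576 / 4 = 262,144 pixels per MB, so kDestTotalPixels = 60 × 262,144 = 15,728,640 pixels. Any image whose decoded bitmap would exceed about 60 MB (roughly 15.7 megapixels, e.g. anything bigger than 4096 × 3840) gets imageScale < 1 and takes the scale-down path. A hypothetical example:

// Hypothetical 8000 x 6000 source (48,000,000 pixels):
// imageScale     = kDestTotalPixels / sourceTotalPixels
//                = 15,728,640 / 48,000,000 ≈ 0.328   (< 1, so scale down)
// destResolution ≈ (8000 * 0.328) x (6000 * 0.328) ≈ 2621 x 1966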

These methods live in the SDWebImageImageIOCoder class.
The relationship between SDWebImageImageIOCoder and SDWebImageCodersManager
is much like
the relationship between SDWebImageDownloaderOperation and SDWebImageDownloader:
decoding of any image type is handed to SDWebImageCodersManager,
which then looks for a suitable id<SDWebImageCoder> to do the decoding,
and SDWebImageImageIOCoder is one of those coders.
SDWebImageCodersManager has a property, set up at initialization, that holds all of its id<SDWebImageCoder> instances:

@property (nonatomic, copy, readwrite, nullable) NSArray<id<SDWebImageCoder>> *coders;

At initialization, though, SDWebImageCodersManager only adds SDWebImageImageIOCoder.
SD also provides SDWebImageWebPCoder and SDWebImageGIFCoder,
which must be registered by calling SDWebImageCodersManager's - (void)addCoder:(nonnull id<SDWebImageCoder>)coder.
We can also implement our own decoder class by conforming to the <SDWebImageCoder> protocol.
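
For instance, registering the GIF coder might look like this (a sketch assuming the sharedInstance / sharedCoder singleton accessors that the 4.x coders expose):

// Register an additional coder so the manager can pick it up for GIF data.
[[SDWebImageCodersManager sharedInstance] addCoder:[SDWebImageGIFCoder sharedCoder]];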

One more thing worth mentioning:
for the case where an image needs to be stored,
an encoding method is also provided; it encodes the decoded image into NSData so it can be written to disk.

- (nullable NSData *)encodedDataWithImage:(nullable UIImage *)image format:(SDImageFormat)format;
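
A sketch of how it might be called through the manager (the PNG format here is an arbitrary choice):

// Encode a decoded UIImage back to PNG data, e.g. before writing it to the disk cache.
NSData *encoded = [[SDWebImageCodersManager sharedInstance] encodedDataWithImage:image
                                                                          format:SDImageFormatPNG];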