iOS - cache_t Analysis

2020-01-26  e521

The article on class structure analysis mentioned cache_t but did not examine it in detail. Today we will look at how method caching is implemented under the hood in iOS.

The cache_t struct
struct cache_t {
    struct bucket_t *_buckets;// pointer to the bucket array; the cached methods live here
    mask_t _mask;// uint32_t on 64-bit; equals capacity - 1 and is used as the hash mask
    mask_t _occupied;// number of methods currently cached

public:
    struct bucket_t *buckets();
    mask_t mask();
    mask_t occupied();
    void incrementOccupied();
    void setBucketsAndMask(struct bucket_t *newBuckets, mask_t newMask);
    void initializeToEmpty();

    mask_t capacity();
    bool isConstantEmptyCache();
    bool canBeFreed();

    static size_t bytesForCapacity(uint32_t cap);
    static struct bucket_t * endMarker(struct bucket_t *b, uint32_t cap);

    void expand();
    void reallocate(mask_t oldCapacity, mask_t newCapacity);
    struct bucket_t * find(cache_key_t key, id receiver);

    static void bad_cache(id receiver, SEL sel, Class isa) __attribute__((noreturn));
};

bucket_t
struct bucket_t {
private:
    // IMP-first is better for arm64e ptrauth and no worse for arm64.
    // SEL-first is better for armv7* and i386 and x86_64.
#if __arm64__
    MethodCacheIMP _imp;
    cache_key_t _key;
#else
    cache_key_t _key;
    MethodCacheIMP _imp;
#endif

public:
    inline cache_key_t key() const { return _key; }
    inline IMP imp() const { return (IMP)_imp; }
    inline void setKey(cache_key_t newKey) { _key = newKey; }
    inline void setImp(IMP newImp) { _imp = newImp; }

    void set(cache_key_t newKey, IMP newImp);
};

From the layout of bucket_t we can see that, under arm64, each bucket stores _imp (the method implementation) first, followed by the corresponding _key.
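
Where does _key come from? In the objc4 versions this article is based on (objc4-750 and earlier, which still use cache_key_t), the key is simply the selector cast to an integer. A minimal sketch of that conversion; the includes and the typedef are added here for completeness:

#include <objc/objc.h>   // SEL
#include <assert.h>
#include <stdint.h>

typedef uintptr_t cache_key_t;   // matches the runtime's typedef

// Sketch: the cache key is just the SEL pointer reinterpreted as an integer,
// so the same selector always produces the same key (and the same hash later on).
static inline cache_key_t getKey(SEL sel)
{
    assert(sel);
    return (cache_key_t)sel;
}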

Source code flow analysis

To find the cached methods, we first need to locate the struct bucket_t *_buckets pointer. How do we get to it? Let's trace it step by step.
Inside the cache_t struct we see _mask, and among the public methods there is a mask() accessor. Looking at mask(), it simply returns _mask without doing anything else to it:

mask_t cache_t::mask() 
{
    return _mask; 
}

A global search for mask() shows that it is called in capacity(), although at this point its exact role is still unclear:

mask_t cache_t::capacity() 
{
    return mask() ? mask()+1 : 0; 
}

Searching for capacity() in the same way, we find that it is called in expand():

void cache_t::expand()
{
    cacheUpdateLock.assertLocked();// the cache update lock must already be held
    
    uint32_t oldCapacity = capacity();// the old capacity
    uint32_t newCapacity = oldCapacity ? oldCapacity*2 : INIT_CACHE_SIZE;// if oldCapacity is 0, use INIT_CACHE_SIZE (4); otherwise double the old capacity

    if ((uint32_t)(mask_t)newCapacity != newCapacity) {
        // mask overflow - can't grow further
        // fixme this wastes one bit of mask
        newCapacity = oldCapacity;
    }

    reallocate(oldCapacity, newCapacity);
}
enum {
    INIT_CACHE_SIZE_LOG2 = 2,
    INIT_CACHE_SIZE      = (1 << INIT_CACHE_SIZE_LOG2)// 1 shifted left by 2, i.e. 4
};
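
To make the growth rule concrete, here is a small standalone sketch (not runtime code) that replays expand()'s logic a few times, assuming the cache starts empty, INIT_CACHE_SIZE is 4, and mask_t is 32 bits wide:

#include <stdint.h>
#include <stdio.h>

typedef uint32_t mask_t;

int main(void)
{
    uint32_t capacity = 0;                                   // empty cache
    for (int step = 0; step < 4; step++) {
        uint32_t newCapacity = capacity ? capacity * 2 : 4;  // expand()'s rule
        if ((uint32_t)(mask_t)newCapacity != newCapacity) {
            newCapacity = capacity;                          // mask overflow - can't grow further
        }
        capacity = newCapacity;
        printf("capacity = %u, _mask = %u\n", capacity, capacity - 1);
    }
    // prints: capacity/_mask = 4/3, 8/7, 16/15, 32/31
    return 0;
}

So the capacity sequence is 4, 8, 16, 32, ... and _mask is always one less than the capacity; because the capacity is a power of two, this makes key & mask a valid bucket index, as we will see in find().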

The names alone tell us that expand() grows the cache and capacity() reports its size. Expansion must be triggered under some condition, so let's see when it happens: searching the source, we find that expand() is called from cache_fill_nolock():

static void cache_fill_nolock(Class cls, SEL sel, IMP imp, id receiver)
{
    cacheUpdateLock.assertLocked();

    // Never cache before +initialize is done
    if (!cls->isInitialized()) return;

    // Make sure the entry wasn't added to the cache by some other thread 
    // before we grabbed the cacheUpdateLock.
    if (cache_getImp(cls, sel)) return;// if the imp is already cached (perhaps by another thread), just return; otherwise continue below

    cache_t *cache = getCache(cls);// get the class's cache
    cache_key_t key = getKey(sel);// derive the key from the sel (the selector pointer itself)

    // Use the cache as-is if it is less than 3/4 full
    mask_t newOccupied = cache->occupied() + 1;// occupancy count after this insertion
    mask_t capacity = cache->capacity();
    // if the cache is still the read-only empty cache, allocate real buckets
    if (cache->isConstantEmptyCache()) {
        // Cache is read-only. Replace it.
        cache->reallocate(capacity, capacity ?: INIT_CACHE_SIZE);
    }
    // otherwise check against the 3/4 threshold; if it would be exceeded, expand
    else if (newOccupied <= capacity / 4 * 3) {
        // Cache is less than 3/4 full. Use it as-is.
    }
    else {
        // grow to twice the current capacity
        // Cache is too full. Expand it.
        cache->expand();
    }
    }

    // Scan for the first unused slot and insert there.
    // There is guaranteed to be an empty slot because the 
    // minimum size is 4 and we resized at 3/4 full.
    bucket_t *bucket = cache->find(key, receiver);// find the bucket slot for this key (a match or an empty slot)
    if (bucket->key() == 0) cache->incrementOccupied();
    bucket->set(key, imp);
}

From the above analysis we can see that if the cache is empty, reallocate() is called to allocate the initial buckets, and if the new occupancy would exceed 3/4 of the capacity, expand() is called. For example, with a capacity of 4 the threshold is 4 / 4 * 3 = 3, so caching a fourth method triggers an expansion.

reallocate() analysis
void cache_t::reallocate(mask_t oldCapacity, mask_t newCapacity)
{
    bool freeOld = canBeFreed();// use isConstantEmptyCache() to decide whether the old buckets can be freed

    bucket_t *oldBuckets = buckets();// the old buckets
    bucket_t *newBuckets = allocateBuckets(newCapacity);// allocate new buckets

    // Cache's old contents are not propagated. 
    // This is thought to save cache memory at the cost of extra cache fills.
    // fixme re-measure this

    assert(newCapacity > 0);
    assert((uintptr_t)(mask_t)(newCapacity-1) == newCapacity-1);

    setBucketsAndMask(newBuckets, newCapacity - 1);// pass newCapacity - 1 as the new mask
    
    if (freeOld) {// free the old buckets
        cache_collect_free(oldBuckets, oldCapacity);
        cache_collect(false);
    }
}
bool cache_t::canBeFreed()
{
    return !isConstantEmptyCache();
}
setBucketsAndMask() analysis
void cache_t::setBucketsAndMask(struct bucket_t *newBuckets, mask_t newMask)
{
    // objc_msgSend uses mask and buckets with no locks.
    // It is safe for objc_msgSend to see new buckets but old mask.
    // (It will get a cache miss but not overrun the buckets' bounds).
    // It is unsafe for objc_msgSend to see old buckets and new mask.
    // Therefore we write new buckets, wait a lot, then write new mask.
    // objc_msgSend reads mask first, then buckets.

    // ensure other threads see buckets contents before buckets pointer
    mega_barrier();

    _buckets = newBuckets;
    
    // ensure other threads see new buckets before new mask
    mega_barrier();
    
    _mask = newMask;// from reallocate() we know newMask is the new (expanded) capacity minus 1
    _occupied = 0;
}

From the setBucketsAndMask source we can see that this method simply assigns _buckets and _mask and resets _occupied to 0, with memory barriers in between so that objc_msgSend never observes the new mask together with the old buckets.

find()
bucket_t * cache_t::find(cache_key_t k, id receiver)
{
    assert(k != 0);

    bucket_t *b = buckets();
    mask_t m = mask();
    // cache_hash computes begin = k & m, the starting index for the lookup
    mask_t begin = cache_hash(k, m);
    // i walks the table, starting from begin
    mask_t i = begin;
    do {
        if (b[i].key() == 0  ||  b[i].key() == k) {
            // if the bucket at index i has key == k, the lookup succeeded; return that bucket_t.
            // if key == 0, nothing has been cached at index i yet, so return this bucket_t as well, which also ends the lookup.
            return &b[i];
        }
    } while ((i = cache_next(i, m)) != begin);
    
    // cache_next decrements i (wrapping from 0 back to mask on arm64), so each iteration looks at
    // the previous slot of the hash table and compares its key against k. Treating the table as a
    // ring, the probe starts at begin, walks backwards, and after one full loop necessarily comes
    // back to begin. If no matching key and no empty bucket_t has been found by then, the loop
    // ends, the lookup has failed, and bad_cache is called.
 
    // hack
    Class cls = (Class)((uintptr_t)this - offsetof(objc_class, cache));
    cache_t::bad_cache(receiver, (SEL)k, cls);
}
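
For completeness, the two helpers that find() depends on are tiny. Below is a simplified sketch of the arm64 variants (where the scan decrements, matching the comments above); the typedefs are spelled out so the snippet stands alone:

#include <stdint.h>

typedef uint32_t  mask_t;
typedef uintptr_t cache_key_t;

// Start index: the key (the SEL pointer) masked into the range [0, mask].
static inline mask_t cache_hash(cache_key_t key, mask_t mask)
{
    return (mask_t)(key & mask);
}

// Linear probe: step backwards, wrapping from 0 around to mask.
static inline mask_t cache_next(mask_t i, mask_t mask)
{
    return i ? i - 1 : mask;
}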

At this point we have a rough picture of how cache_t works; the overall flow is as follows:


[Figure: cache_t flow diagram]

Verification with examples

Create a Student class:

@interface Student : NSObject

- (void)study;

- (void)eat;

- (void)play;

@end

When only one of Student's methods is called:

Student *student = [Student alloc];
Class sClass = [Student class];
[student study];

Debugging with LLDB:

(lldb) x/4gx sClass
0x1000013c8: 0x001d8001000013a1 0x0000000100b36140
0x1000013d8: 0x0000000101938eb0 0x0000000100000003
(lldb) p (cache_t *)0x1000013d8// cache_t obtained by offsetting from the class address (+0x10)
(cache_t *) $1 = 0x00000001000013d8
(lldb) p *$1
(cache_t) $2 = {
  _buckets = 0x0000000101938eb0
  _mask = 3// as analyzed above: oldCapacity started at 0, so newCapacity became 4 and _mask = newCapacity - 1 = 3
  _occupied = 1 
}
(lldb) p $2._buckets
(bucket_t *) $3 = 0x0000000101938eb0
(lldb) p *$3
(bucket_t) $4 = {
  _key = 4294971012
  _imp = 0x0000000100000dd0 (LGTest`-[Student study] at Student.m:12)
}
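
Why offset by 0x10? On 64-bit, a class object begins with the isa pointer (8 bytes) followed by the superclass pointer (8 bytes), so cache_t sits at the class address + 0x10, which is exactly the second line printed by x/4gx. A simplified layout sketch, using the cache_t struct shown at the beginning (illustrative only, not the real objc_class declaration):

#include <stdint.h>

struct objc_class_sketch {
    void     *isa;          // offset 0x00
    void     *superclass;   // offset 0x08
    cache_t   cache;        // offset 0x10  <- what we cast to (cache_t *): _buckets (8) + _mask (4) + _occupied (4)
    uintptr_t bits;         // offset 0x20  (class_data_bits_t in the real runtime)
};

This also explains the fourth 8-byte word in the x/4gx output: 0x0000000100000003 packs _mask in its low 32 bits (3) and _occupied in its high 32 bits (1), matching the values printed above.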

When four of Student's instance methods are called (init plus the three declared methods):

Student *student = [[Student alloc] init];
Class sClass = [Student class];
[student study];
[student eat];
[student play];

Debugging with LLDB:

(lldb) x/4gx sClass
0x1000013e0: 0x001d8001000013b9 0x0000000100b36140
0x1000013f0: 0x0000000100f5b810 0x0000000100000007
(lldb) p (cache_t *)0x1000013f0
(cache_t *) $1 = 0x00000001000013f0
(lldb) p *$1
(cache_t) $2 = {
  _buckets = 0x0000000100f5b810
  _mask = 7// a mask of 3 cannot hold four cached methods within the 3/4 limit, so the cache expanded: oldCapacity was 4, newCapacity is 8, and _mask = newCapacity - 1 = 7
  _occupied = 1// only one method is currently cached
}
(lldb) p $2._buckets
(bucket_t *) $3 = 0x0000000100f5b810
(lldb) p *$3
(bucket_t) $4 = {
  _key = 0
  _imp = 0x0000000000000000
}
(lldb) p $2._buckets[0]
(bucket_t) $5 = {
  _key = 0
  _imp = 0x0000000000000000
}
(lldb) p $2._buckets[1]
(bucket_t) $6 = {
  _key = 0
  _imp = 0x0000000000000000
}
(lldb) p $2._buckets[2]
(bucket_t) $7 = {
  _key = 140735178921514
  _imp = 0x0000000100000de0 (LGTest`-[Student play] at Student.m:20)
}
(lldb) p $2._buckets[3]
(bucket_t) $8 = {
  _key = 0
  _imp = 0x0000000000000000
}
(lldb) p $2._buckets[4]
(bucket_t) $9 = {
  _key = 0
  _imp = 0x0000000000000000
}
(lldb) p $2._buckets[5]
(bucket_t) $10 = {
  _key = 0
  _imp = 0x0000000000000000
}
(lldb) p $2._buckets[6]
(bucket_t) $11 = {
  _key = 0
  _imp = 0x0000000000000000
}
(lldb) p $2._buckets[7]
(bucket_t) $12 = {
  _key = 0
  _imp = 0x0000000000000000
}

From the output we can see that the only cached method is play, the last one called. Where did init, study, and eat go? In reallocate(), freeOld determines whether the old buckets are discarded. With four method calls, reallocate() actually runs twice: the first time, when the cache is empty, it allocates buckets and sets _mask to 3; since a capacity of 4 cannot accommodate a fourth method within the 3/4 threshold, expand() triggers a second reallocate(), which sets _mask to 7 and frees the old buckets (their contents are not copied over). That is why _occupied is 1 and only play remains in the cache.

Summary:

The cache in a Class exists to cache methods during message sending and speed up dispatch. It grows dynamically: once occupancy would exceed 3/4 of the capacity, the cache doubles in size. Expansion completely discards the old buckets and allocates new ones in their place, after which only the method whose insertion triggered the expansion (its imp and key) is cached in the new buckets.
