Exploring OC Internals: GCD (Part 1)

2021-08-07 · 十年开发初学者

Introduction to GCD

GCD stands for Grand Central Dispatch. It is a pure C API that provides a large number of powerful functions. Apple introduced it as its solution for multi-core parallel computing: GCD automatically manages thread lifecycles, including creating threads, scheduling tasks, and destroying threads. The programmer only has to tell GCD which task to execute and never needs to write any thread-management code.
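For example, a minimal usage sketch (the queue choices here are purely for illustration):

// Submit a task; GCD decides which pooled thread actually runs it.
dispatch_async(dispatch_get_global_queue(QOS_CLASS_DEFAULT, 0), ^{
    NSLog(@"working on %@", [NSThread currentThread]);
    dispatch_async(dispatch_get_main_queue(), ^{
        NSLog(@"back on the main thread for UI updates");
    });
});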

GCD Functions

The functions that execute tasks come in two flavors: asynchronous functions (dispatch_async) and synchronous functions (dispatch_sync).
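As a quick, hedged illustration of the difference:

dispatch_queue_t queue = dispatch_get_global_queue(QOS_CLASS_DEFAULT, 0);

// dispatch_sync does not return until the block has finished executing.
dispatch_sync(queue, ^{ NSLog(@"sync task"); });
NSLog(@"printed after the sync task");

// dispatch_async returns immediately; the block runs later on some thread.
dispatch_async(queue, ^{ NSLog(@"async task"); });
NSLog(@"may print before the async task");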

Queues

Queues come in two kinds: serial queues and concurrent queues. Tasks are arranged differently depending on the kind of queue; the queue schedules the tasks, and threads arranged by the thread pool execute them.
Regardless of the kind of queue, tasks follow the FIFO principle: they are dequeued for scheduling in the order they were enqueued. How quickly each task finishes depends on its own complexity.
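A hedged sketch of creating both kinds of queues (the labels are arbitrary):

// Serial queue: tasks are started one at a time, in FIFO order.
dispatch_queue_t serialQueue = dispatch_queue_create("com.demo.serial", DISPATCH_QUEUE_SERIAL);

// Concurrent queue: tasks are still dequeued in FIFO order,
// but several of them can be running at the same time on different threads.
dispatch_queue_t concurrentQueue = dispatch_queue_create("com.demo.concurrent", DISPATCH_QUEUE_CONCURRENT);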

Queues and Functions

GCD in Practice

Task Timing Analysis
(screenshots: timing the different queue/function combinations)

The tests above show that every way of dispatching a task carries some overhead, and that the asynchronous calls are the cheapest for the caller.
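The screenshots above likely come from a measurement along these lines (a hedged sketch; absolute numbers vary by device):

dispatch_queue_t queue = dispatch_queue_create("com.demo.timing", DISPATCH_QUEUE_CONCURRENT);

CFAbsoluteTime t0 = CFAbsoluteTimeGetCurrent();
dispatch_sync(queue, ^{ });
NSLog(@"dispatch_sync cost: %.6f s", CFAbsoluteTimeGetCurrent() - t0);

// Note: for dispatch_async this only measures the cost of submitting the task,
// because the call returns before the block has run.
CFAbsoluteTime t1 = CFAbsoluteTimeGetCurrent();
dispatch_async(queue, ^{ });
NSLog(@"dispatch_async cost: %.6f s", CFAbsoluteTimeGetCurrent() - t1);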

Asynchronous function + concurrent queue
(screenshot: test code and console output)
Synchronous function + concurrent queue
(screenshot: test code and console output)
Synchronous function + main queue (synchronous + serial)
(screenshot: test code and console output)
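The screenshots presumably show tests along these lines; here is a hedged reconstruction (queue names are illustrative):

dispatch_queue_t concurrentQueue = dispatch_queue_create("com.demo.concurrent", DISPATCH_QUEUE_CONCURRENT);

// 1. Asynchronous function + concurrent queue: returns immediately,
//    tasks may run in parallel, so the output order is not guaranteed.
for (int i = 0; i < 3; i++) {
    dispatch_async(concurrentQueue, ^{ NSLog(@"async-concurrent %d on %@", i, [NSThread currentThread]); });
}

// 2. Synchronous function + concurrent queue: each call blocks until its block finishes,
//    so everything runs in order on the calling thread.
for (int i = 0; i < 3; i++) {
    dispatch_sync(concurrentQueue, ^{ NSLog(@"sync-concurrent %d on %@", i, [NSThread currentThread]); });
}

// 3. Synchronous function + main queue: from a background thread this serializes work
//    onto the main thread. Calling it from the main thread itself would deadlock.
dispatch_async(concurrentQueue, ^{
    dispatch_sync(dispatch_get_main_queue(), ^{ NSLog(@"sync-main on %@", [NSThread currentThread]); });
});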

Source Code Analysis

The main queue
/*!
 * @function dispatch_get_main_queue
 *
 * @abstract
 * Returns the default queue that is bound to the main thread.
 *
 * @discussion
 * In order to invoke blocks submitted to the main queue, the application must
 * call dispatch_main(), NSApplicationMain(), or use a CFRunLoop on the main
 * thread.
 *
 * The main queue is meant to be used in application context to interact with
 * the main thread and the main runloop.
 *
 * Because the main queue doesn't behave entirely like a regular serial queue,
 * it may have unwanted side-effects when used in processes that are not UI apps
 * (daemons). For such processes, the main queue should be avoided.
 *
 * @see dispatch_queue_main_t
 *
 * @result
 * Returns the main queue. This queue is created automatically on behalf of
 * the main thread before main() is called.
 */
DISPATCH_INLINE DISPATCH_ALWAYS_INLINE DISPATCH_CONST DISPATCH_NOTHROW
dispatch_queue_main_t
dispatch_get_main_queue(void)
{
    return DISPATCH_GLOBAL_OBJECT(dispatch_queue_main_t, _dispatch_main_q);
}

Stepping into the source, we see that the main queue is returned through the DISPATCH_GLOBAL_OBJECT macro. Click into the macro to see its implementation:

#define DISPATCH_GLOBAL_OBJECT(type, object) ((OS_OBJECT_BRIDGE type)&(object))

The first parameter is the queue type and the second is the queue object. Next, let's search the whole project for the queue object _dispatch_main_q.

(screenshots: tracing _dispatch_main_q step by step through the source)

The route above followed the functions down step by step until we reached the main queue's structure. There is also a shortcut:

(screenshot: printing the main thread/queue in the debugger)
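One hedged way to do that is to print the main queue's label directly:

// The main queue's label is the string we can then search for in the libdispatch source.
NSLog(@"%s", dispatch_queue_get_label(dispatch_get_main_queue())); // prints: com.apple.main-thread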
Printing it tells us that the main thread's queue is labeled com.apple.main-thread, so we can search the libdispatch source globally for that string.
(screenshot: global search results for com.apple.main-thread)
The search lands at the same definition.
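For reference, the definition in the open-source libdispatch drop looks roughly like this (quoted from memory, so treat it as a sketch rather than a verbatim excerpt):

// init.c: the main queue is a statically allocated serial queue (width 1)
// that is bound to the main thread and labeled "com.apple.main-thread".
struct dispatch_queue_static_s _dispatch_main_q = {
    DISPATCH_GLOBAL_OBJECT_HEADER(queue_main),
#if !DISPATCH_USE_RESOLVERS
    .do_targetq = _dispatch_get_default_queue(true),
#endif
    .dq_state = DISPATCH_QUEUE_STATE_INIT_VALUE(1) |
            DISPATCH_QUEUE_ROLE_BASE_ANON,
    .dq_label = "com.apple.main-thread",
    .dq_atomic_flags = DQF_THREAD_BOUND | DQF_WIDTH(1),
    .dq_serialnum = 1,
};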
The global queue

Step into dispatch_get_global_queue:

(screenshot: dispatch_get_global_queue implementation)
When fetching a global concurrent queue you can pass parameters; libdispatch provides a different concurrent queue for each quality-of-service class (or legacy priority).
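A hedged usage sketch:

// The first argument picks the QoS class; the second (flags) is reserved, pass 0.
dispatch_queue_t defaultQueue    = dispatch_get_global_queue(QOS_CLASS_DEFAULT, 0);
dispatch_queue_t backgroundQueue = dispatch_get_global_queue(QOS_CLASS_BACKGROUND, 0);

// The legacy priority constants are still accepted and mapped to QoS internally.
dispatch_queue_t highQueue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0);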

Search globally for the label com.apple.root.default:

(screenshot: search results for the root queue label)
Creating a queue

dispatch_queue_create
Search globally for dispatch_queue_create(const:

(screenshot: dispatch_queue_create implementation)
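From memory, the wrapper is roughly this thin (a sketch, not a verbatim quote):

dispatch_queue_t
dispatch_queue_create(const char *label, dispatch_queue_attr_t attr)
{
    // Forward to the real worker with the default target queue; legacy = true.
    return _dispatch_lane_create_with_target(label, attr,
            DISPATCH_TARGET_QUEUE_DEFAULT, true);
}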

Continue into _dispatch_lane_create_with_target:

DISPATCH_NOINLINE
static dispatch_queue_t
_dispatch_lane_create_with_target(const char *label, dispatch_queue_attr_t dqa,
        dispatch_queue_t tq, bool legacy)
{
    dispatch_queue_attr_info_t dqai = _dispatch_queue_attr_to_info(dqa);

    //
    // Step 1: Normalize arguments (qos, overcommit, tq)
    //

    dispatch_qos_t qos = dqai.dqai_qos;
#if !HAVE_PTHREAD_WORKQUEUE_QOS
    if (qos == DISPATCH_QOS_USER_INTERACTIVE) {
        dqai.dqai_qos = qos = DISPATCH_QOS_USER_INITIATED;
    }
    if (qos == DISPATCH_QOS_MAINTENANCE) {
        dqai.dqai_qos = qos = DISPATCH_QOS_BACKGROUND;
    }
#endif // !HAVE_PTHREAD_WORKQUEUE_QOS

    _dispatch_queue_attr_overcommit_t overcommit = dqai.dqai_overcommit;
    if (overcommit != _dispatch_queue_attr_overcommit_unspecified && tq) {
        if (tq->do_targetq) {
            DISPATCH_CLIENT_CRASH(tq, "Cannot specify both overcommit and "
                    "a non-global target queue");
        }
    }

    if (tq && dx_type(tq) == DISPATCH_QUEUE_GLOBAL_ROOT_TYPE) {
        // Handle discrepancies between attr and target queue, attributes win
        if (overcommit == _dispatch_queue_attr_overcommit_unspecified) {
            if (tq->dq_priority & DISPATCH_PRIORITY_FLAG_OVERCOMMIT) {
                overcommit = _dispatch_queue_attr_overcommit_enabled;
            } else {
                overcommit = _dispatch_queue_attr_overcommit_disabled;
            }
        }
        if (qos == DISPATCH_QOS_UNSPECIFIED) {
            qos = _dispatch_priority_qos(tq->dq_priority);
        }
        tq = NULL;
    } else if (tq && !tq->do_targetq) {
        // target is a pthread or runloop root queue, setting QoS or overcommit
        // is disallowed
        if (overcommit != _dispatch_queue_attr_overcommit_unspecified) {
            DISPATCH_CLIENT_CRASH(tq, "Cannot specify an overcommit attribute "
                    "and use this kind of target queue");
        }
    } else {
        if (overcommit == _dispatch_queue_attr_overcommit_unspecified) {
            // Serial queues default to overcommit!
            overcommit = dqai.dqai_concurrent ?
                    _dispatch_queue_attr_overcommit_disabled :
                    _dispatch_queue_attr_overcommit_enabled;
        }
    }
    if (!tq) {
        tq = _dispatch_get_root_queue(
                qos == DISPATCH_QOS_UNSPECIFIED ? DISPATCH_QOS_DEFAULT : qos,
                overcommit == _dispatch_queue_attr_overcommit_enabled)->_as_dq;
        if (unlikely(!tq)) {
            DISPATCH_CLIENT_CRASH(qos, "Invalid queue attribute");
        }
    }

    //
    // Step 2: Initialize the queue
    //

    if (legacy) {
        // if any of these attributes is specified, use non legacy classes
        if (dqai.dqai_inactive || dqai.dqai_autorelease_frequency) {
            legacy = false;
        }
    }

    const void *vtable; // the class for this queue object
    dispatch_queue_flags_t dqf = legacy ? DQF_MUTABLE : 0;
    // The class name is stitched together based on the queue type:
    // OS_dispatch_##name##_class
    // e.g. OS_dispatch_queue_concurrent
    if (dqai.dqai_concurrent) {
        vtable = DISPATCH_VTABLE(queue_concurrent);
    } else {
        // OS_dispatch_##name##_class
        // OS_dispatch_queue_serial
        vtable = DISPATCH_VTABLE(queue_serial);
    }
    switch (dqai.dqai_autorelease_frequency) {
    case DISPATCH_AUTORELEASE_FREQUENCY_NEVER:
        dqf |= DQF_AUTORELEASE_NEVER;
        break;
    case DISPATCH_AUTORELEASE_FREQUENCY_WORK_ITEM:
        dqf |= DQF_AUTORELEASE_ALWAYS;
        break;
    }
    if (label) {
        const char *tmp = _dispatch_strdup_if_mutable(label);
        if (tmp != label) {
            dqf |= DQF_LABEL_NEEDS_FREE;
            label = tmp;
        }
    }
    // Allocate memory for the queue object.
    // dq is the queue object; this is roughly the equivalent of [alloc init].
    dispatch_lane_t dq = _dispatch_object_alloc(vtable,
            sizeof(struct dispatch_lane_s));
    
    _dispatch_queue_init(dq, dqf, dqai.dqai_concurrent ? // serial vs concurrent: a concurrent queue gets DISPATCH_QUEUE_WIDTH_MAX, a serial queue gets width 1
            DISPATCH_QUEUE_WIDTH_MAX : 1, DISPATCH_QUEUE_ROLE_INNER |
            (dqai.dqai_inactive ? DISPATCH_QUEUE_INACTIVE : 0));

    dq->dq_label = label;
    dq->dq_priority = _dispatch_priority_make((dispatch_qos_t)dqai.dqai_qos,
            dqai.dqai_relpri);
    if (overcommit == _dispatch_queue_attr_overcommit_enabled) {
        dq->dq_priority |= DISPATCH_PRIORITY_FLAG_OVERCOMMIT;
    }
    if (!dqai.dqai_inactive) {
        _dispatch_queue_priority_inherit_from_target(dq, tq);
        _dispatch_lane_inherit_wlh_from_target(dq, tq);
    }
    _dispatch_retain(tq);
    dq->do_targetq = tq;
    _dispatch_object_debug(dq, "%s", __func__);

    return _dispatch_trace_queue_create(dq)._dq;
}

Judging from the final return _dispatch_trace_queue_create(dq)._dq, that call is only Apple's tracing wrapper around the queue; its argument is dq, so dq itself is the most important part of this method.

Next, let's look at dq in detail:

  dispatch_lane_t dq = _dispatch_object_alloc(vtable,
            sizeof(struct dispatch_lane_s));
    
    _dispatch_queue_init(dq, dqf, dqai.dqai_concurrent ? // serial vs concurrent: a concurrent queue gets DISPATCH_QUEUE_WIDTH_MAX, a serial queue gets width 1
            DISPATCH_QUEUE_WIDTH_MAX : 1, DISPATCH_QUEUE_ROLE_INNER |
            (dqai.dqai_inactive ? DISPATCH_QUEUE_INACTIVE : 0));

    dq->dq_label = label;
    dq->dq_priority = _dispatch_priority_make((dispatch_qos_t)dqai.dqai_qos,
            dqai.dqai_relpri);
    if (overcommit == _dispatch_queue_attr_overcommit_enabled) {
        dq->dq_priority |= DISPATCH_PRIORITY_FLAG_OVERCOMMIT;
    }
    if (!dqai.dqai_inactive) {
        _dispatch_queue_priority_inherit_from_target(dq, tq);
        _dispatch_lane_inherit_wlh_from_target(dq, tq);
    }
    _dispatch_retain(tq);
    dq->do_targetq = tq;
    _dispatch_object_debug(dq, "%s", __func__);

The call to _dispatch_object_alloc is the equivalent of [NSObject alloc]: it initializes the object and allocates its memory.

(screenshot: _dispatch_object_alloc implementation)
Step into _os_object_alloc_realized:
(screenshot: _os_object_alloc_realized implementation)
This is where the object's isa pointer is set.
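From memory, it looks roughly like this (a sketch of the open-source version):

inline _os_object_t
_os_object_alloc_realized(const void *cls, size_t size)
{
    _os_object_t obj;
    dispatch_assert(size >= sizeof(struct _os_object_s));
    // Keep retrying the allocation under temporary memory pressure.
    while (unlikely(!(obj = calloc(1u, size)))) {
        _dispatch_temporary_resource_shortage();
    }
    // This is where the isa is wired to the vtable/class chosen earlier.
    obj->os_obj_isa = cls;
    return obj;
}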

Next, look at _dispatch_queue_init, which is effectively a constructor. Note its third argument: dqai.dqai_concurrent ? DISPATCH_QUEUE_WIDTH_MAX : 1.

Step into _dispatch_queue_init:

static inline dispatch_queue_class_t
_dispatch_queue_init(dispatch_queue_class_t dqu, dispatch_queue_flags_t dqf,
        uint16_t width, uint64_t initial_state_bits)
{
    uint64_t dq_state = DISPATCH_QUEUE_STATE_INIT_VALUE(width);
    dispatch_queue_t dq = dqu._dq;

    dispatch_assert((initial_state_bits & ~(DISPATCH_QUEUE_ROLE_MASK |
            DISPATCH_QUEUE_INACTIVE)) == 0);

    if (initial_state_bits & DISPATCH_QUEUE_INACTIVE) {
        dq->do_ref_cnt += 2; // rdar://8181908 see _dispatch_lane_resume
        if (dx_metatype(dq) == _DISPATCH_SOURCE_TYPE) {
            dq->do_ref_cnt++; // released when DSF_DELETED is set
        }
    }

    dq_state |= initial_state_bits;
    dq->do_next = DISPATCH_OBJECT_LISTLESS;
    dqf |= DQF_WIDTH(width);
    os_atomic_store2o(dq, dq_atomic_flags, dqf, relaxed);
    dq->dq_state = dq_state;
    dq->dq_serialnum =
            os_atomic_inc_orig(&_dispatch_queue_serial_numbers, relaxed);
    return dqu;
}

Notice the line dqf |= DQF_WIDTH(width);, which shows that DQF_WIDTH(width) is what records the queue type (width 1 for serial, DISPATCH_QUEUE_WIDTH_MAX for concurrent).
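For reference, the related macros in the open-source headers are roughly as follows (from memory, treat this as a sketch):

// queue_internal.h, approximately:
#define DQF_WIDTH(n)              ((dispatch_queue_flags_t)(uint16_t)(n))
#define DISPATCH_QUEUE_WIDTH_FULL 0x1000ull
#define DISPATCH_QUEUE_WIDTH_MAX  (DISPATCH_QUEUE_WIDTH_FULL - 2)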

[Summary]
We now know the full process of creating a queue: the lower layer wraps the label and queue type passed in from above, picks a class based on the queue type (the class name is pieced together and stored in vtable), allocates memory and sets the isa through alloc, and finally, in init, distinguishes the queue type via the width and sets the priority and label.

Function calls

Synchronous functions
        dispatch_queue_t queue = dispatch_queue_create("com.demo.serial", DISPATCH_QUEUE_SERIAL); // example queue; any queue works here
        for (int i = 0; i < 10; i++) {
            dispatch_sync(queue, ^{
                NSLog(@"%d-%@", i, [NSThread currentThread]);
            });
        }

The code above is a very simple synchronous call. Its second parameter is a block. When we write blocks ourselves we normally have to invoke them explicitly before they run, which raises the question: when does the synchronous function invoke its block? Let's explore.
Step into the dispatch_sync source:

(screenshot: dispatch_sync implementation)
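From memory, the entry point looks roughly like this (a sketch of the open-source version):

void
dispatch_sync(dispatch_queue_t dq, dispatch_block_t work)
{
    uintptr_t dc_flags = DC_FLAG_BLOCK;
    if (unlikely(_dispatch_block_has_private_data(work))) {
        return _dispatch_sync_block_with_privdata(dq, work, dc_flags);
    }
    // The block is turned into a plain function pointer via _dispatch_Block_invoke.
    _dispatch_sync_f(dq, work, _dispatch_Block_invoke(work), dc_flags);
}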
Parameter 1: dq, the queue
Parameter 2: the block
Let's go straight into _dispatch_sync_f:
static void
_dispatch_sync_f(dispatch_queue_t dq, void *ctxt, dispatch_function_t func,
        uintptr_t dc_flags)
{
    _dispatch_sync_f_inline(dq, ctxt, func, dc_flags);
}

Parameter 1: dq, the queue
Parameter 2: ctxt, the block
Parameter 3: func, a function produced by a macro:

#define _dispatch_Block_invoke(bb) \
        ((dispatch_function_t)((struct Block_layout *)bb)->invoke)
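This macro simply reads the invoke function pointer out of the block's memory layout, which looks roughly like this (a sketch based on the public libclosure headers):

// Block_private.h (libclosure), approximately:
struct Block_layout {
    void *isa;
    volatile int32_t flags;
    int32_t reserved;
    void (*invoke)(void *, ...);          // what _dispatch_Block_invoke extracts
    struct Block_descriptor_1 *descriptor;
    // captured variables follow here
};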

Step into _dispatch_sync_f_inline:

DISPATCH_ALWAYS_INLINE
static inline void
_dispatch_sync_f_inline(dispatch_queue_t dq, void *ctxt,
        dispatch_function_t func, uintptr_t dc_flags)
{
    if (likely(dq->dq_width == 1)) {
        return _dispatch_barrier_sync_f(dq, ctxt, func, dc_flags);
    }

    if (unlikely(dx_metatype(dq) != _DISPATCH_LANE_TYPE)) {
        DISPATCH_CLIENT_CRASH(0, "Queue type doesn't support dispatch_sync");
    }

    dispatch_lane_t dl = upcast(dq)._dl;
    // Global concurrent queues and queues bound to non-dispatch threads
    // always fall into the slow case, see DISPATCH_ROOT_QUEUE_STATE_INIT_VALUE
    if (unlikely(!_dispatch_queue_try_reserve_sync_width(dl))) {
        return _dispatch_sync_f_slow(dl, ctxt, func, 0, dl, dc_flags);
    }

    if (unlikely(dq->do_targetq->do_targetq)) {
        return _dispatch_sync_recurse(dl, ctxt, func, dc_flags);
    }
    _dispatch_introspection_sync_begin(dl);
    _dispatch_sync_invoke_and_complete(dl, ctxt, func DISPATCH_TRACE_ARG(
            _dispatch_trace_item_sync_push_pop(dq, ctxt, func, dc_flags)));
}

At this point it is not obvious which branch runs next, so let's analyze them one by one.

    if (likely(dq->dq_width == 1)) {
        return _dispatch_barrier_sync_f(dq, ctxt, func, dc_flags);
    }

dq_width == 1 means a serial queue, which we analyzed above. Since we are tracking how the function (the block) gets invoked, set this barrier branch aside for now.

  if (unlikely(!_dispatch_queue_try_reserve_sync_width(dl))) {
        return _dispatch_sync_f_slow(dl, ctxt, func, 0, dl, dc_flags);
    }

    if (unlikely(dq->do_targetq->do_targetq)) {
        return _dispatch_sync_recurse(dl, ctxt, func, dc_flags);
    }

Either of these two if branches could plausibly be taken. When it is hard to tell, symbolic breakpoints can settle it.

(screenshots: symbolic breakpoints being hit at runtime)
The symbolic breakpoints show that _dispatch_sync_f_slow is what gets called, so step into that method.
(screenshot: _dispatch_sync_f_slow implementation)
Step into _dispatch_sync_function_invoke:
DISPATCH_NOINLINE
static void
_dispatch_sync_function_invoke(dispatch_queue_class_t dq, void *ctxt,
        dispatch_function_t func)
{
    _dispatch_sync_function_invoke_inline(dq, ctxt, func);
}

Step into _dispatch_sync_function_invoke_inline:

DISPATCH_ALWAYS_INLINE
static inline void
_dispatch_sync_function_invoke_inline(dispatch_queue_class_t dq, void *ctxt,
        dispatch_function_t func)
{
    dispatch_thread_frame_s dtf;
    _dispatch_thread_frame_push(&dtf, dq);
    _dispatch_client_callout(ctxt, func);
    _dispatch_perfmon_workitem_inc();
    _dispatch_thread_frame_pop(&dtf);
}

What we care about here is how the task gets invoked, so focus on _dispatch_client_callout:

(screenshot: _dispatch_client_callout implementation)
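From memory, it is roughly this (a sketch of the open-source version):

void
_dispatch_client_callout(void *ctxt, dispatch_function_t f)
{
    _dispatch_get_tsd_base();
    void *u = _dispatch_get_unwind_tsd();
    if (likely(!u)) return f(ctxt);   // the common case: just call the function
    _dispatch_set_unwind_tsd(NULL);
    f(ctxt);                          // f is the block's invoke pointer, ctxt is the block itself
    _dispatch_free_unwind_tsd();
    _dispatch_set_unwind_tsd(u);
}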
It takes two parameters:
ctxt corresponds to the work block passed to dispatch_sync;
f corresponds to _dispatch_Block_invoke(work), the function extracted by the macro.
In the end, calling func executes work.
[Summary] In the synchronous path, the task work is executed through the function extracted by the _dispatch_Block_invoke macro, i.e. func.
Asynchronous functions

Searching the libdispatch.dylib source globally for dispatch_async(dis locates the entry point of the asynchronous function.

(screenshot: dispatch_async implementation)
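From memory, the entry point is roughly this (a sketch):

void
dispatch_async(dispatch_queue_t dq, dispatch_block_t work)
{
    dispatch_continuation_t dc = _dispatch_continuation_alloc();
    uintptr_t dc_flags = DC_FLAG_CONSUME;
    dispatch_qos_t qos;

    // Wrap the block (plus its qos) into a continuation before handing it to the queue.
    qos = _dispatch_continuation_init(dc, dq, work, 0, dc_flags);
    _dispatch_continuation_async(dq, dc, qos, dc->dc_flags);
}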
Here the task is wrapped by _dispatch_continuation_init; step into that method.
(screenshot: _dispatch_continuation_init implementation)

Step into _dispatch_continuation_init_f:

(screenshot: _dispatch_continuation_init_f implementation)
Here the work function func is packed into dc, and the task's priority is handled at the same time: because the thread/CPU scheduling behind an asynchronous function is itself asynchronous, the priority of the wrapped task has to be recorded for later.
Back in dispatch_async, once the task is wrapped, _dispatch_continuation_async is called to continue the asynchronous path.
(screenshot: _dispatch_continuation_async implementation)
Here we go straight to the final return dx_push(...); its arguments are the queue, dc (the wrapped task), and qos.
(screenshot: the dx_push macro definition)
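From memory, the macro is roughly:

// queue_internal.h, approximately:
#define dx_vtable(x)     (&(x)->do_vtable->_os_obj_vtable)
#define dx_push(x, y, z) dx_vtable(x)->dq_push(x, y, z)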
Inside this macro, dx_vtable is used to call the queue's dq_push entry. Search globally for dq_push:
(screenshot: the dq_push entries registered for different queue types)
Here we find that different queues install different dq_push entry points at the bottom layer. Next, search globally for _dispatch_root_queue_push:
(screenshot: _dispatch_root_queue_push implementation)
In that code, the early part only performs some checks and wrapping; it eventually reaches the final return, which takes us into _dispatch_root_queue_push_inline:

DISPATCH_ALWAYS_INLINE
static inline void
_dispatch_root_queue_push_inline(dispatch_queue_global_t dq,
        dispatch_object_t _head, dispatch_object_t _tail, int n)
{
    struct dispatch_object_s *hd = _head._do, *tl = _tail._do;
    if (unlikely(os_mpsc_push_list(os_mpsc(dq, dq_items), hd, tl, do_next))) {
        return _dispatch_root_queue_poke(dq, n, 0);
    }
}

Step into _dispatch_root_queue_poke:

(screenshot: _dispatch_root_queue_poke implementation)

Step into _dispatch_root_queue_poke_slow:

(screenshot: _dispatch_root_queue_poke_slow implementation)
The key step in this method is the call to _dispatch_root_queues_init; step into it.
(screenshot: _dispatch_root_queues_init implementation)
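From memory, it is roughly:

static inline void
_dispatch_root_queues_init(void)
{
    // dispatch_once_f guarantees the heavy setup below runs exactly once per process.
    dispatch_once_f(&_dispatch_root_queues_pred, NULL,
            _dispatch_root_queues_init_once);
}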
It is a one-time (dispatch_once style) call; step into _dispatch_root_queues_init_once.
(screenshot: _dispatch_root_queues_init_once implementation)
This is where the thread pool is initialized, the workqueues are configured, and function execution is set up, which also explains why _dispatch_root_queues_init_once only ever runs once.

That function also contains this line:

cfg.workq_cb = _dispatch_worker_thread2;

If we print the call stack of an asynchronously dispatched block, this very function appears among the frames, and the chain eventually reaches _dispatch_client_callout, which invokes the block.
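A hedged way to see this for yourself (the exact frame names vary by OS version):

dispatch_async(dispatch_get_global_queue(QOS_CLASS_DEFAULT, 0), ^{
    // The backtrace typically shows frames such as _dispatch_worker_thread2,
    // _dispatch_root_queue_drain and _dispatch_client_callout above our block.
    NSLog(@"%@", [NSThread callStackSymbols]);
});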

(screenshot: the call stack of an asynchronously dispatched block)