22. iOS Locks

2020-11-11  Mjs
[Figure: performance comparison of the different locks]

Categories of locks

Spin lock: the thread repeatedly checks whether the lock variable is available. Because the thread keeps executing while it checks, this is a form of busy waiting. Once the spin lock is acquired, the thread holds it until it explicitly releases it. Spin locks avoid the scheduling and context-switch overhead of putting a thread to sleep, so they work well when the lock is only held for a very short time.
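
On Apple platforms the classic spin lock, OSSpinLock, has been deprecated because of priority-inversion problems; os_unfair_lock is the usual replacement. A minimal sketch, assuming an illustrative global counter:

#import <os/lock.h>
#import <Foundation/Foundation.h>

static os_unfair_lock _counterLock = OS_UNFAIR_LOCK_INIT;
static NSInteger _counter = 0;

void incrementCounter(void) {
    os_unfair_lock_lock(&_counterLock);   // acquire; contended callers wait rather than spin forever
    _counter += 1;                        // critical section
    os_unfair_lock_unlock(&_counterLock); // explicit release
}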

Mutex (mutual-exclusion lock): a mechanism used in multithreaded programming to prevent two threads from reading and writing the same shared resource (for example a global variable) at the same time. It achieves this by slicing the code into critical sections that only one thread may execute at a time (a minimal pthread_mutex sketch follows the list).
Mutex-style locks here include:
• NSLock
• pthread_mutex
• @synchronized
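
A minimal pthread_mutex sketch; the shared array and function name are illustrative only:

#import <Foundation/Foundation.h>
#import <pthread.h>

static pthread_mutex_t _itemsMutex = PTHREAD_MUTEX_INITIALIZER;
static NSMutableArray *_sharedItems = nil;

void appendItem(id item) {
    pthread_mutex_lock(&_itemsMutex);    // block until the mutex is free
    if (_sharedItems == nil) _sharedItems = [NSMutableArray array];
    [_sharedItems addObject:item];       // critical section: one thread at a time
    pthread_mutex_unlock(&_itemsMutex);  // release so the next waiter can enter
}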

Condition lock: essentially a condition variable. When a resource the thread needs is not yet available, the thread goes to sleep, i.e. it is locked. Once the resource is available the condition is signalled, the lock opens and the thread resumes.
• NSCondition
• NSConditionLock

Recursive lock: a lock the same thread may acquire N times without deadlocking.
• NSRecursiveLock
• pthread_mutex(recursive)

Semaphore: a more general synchronization mechanism; a mutex can be viewed as the special case of a semaphore whose value is limited to 0/1. A semaphore can take a wider range of values, so it can express more complex synchronization than simple mutual exclusion between threads (see the sketch after the list).
• dispatch_semaphore
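
A dispatch_semaphore sketch; the initial value and worker loop are illustrative. A semaphore created with value 1 behaves like a mutex, while larger values admit several threads at once:

#import <Foundation/Foundation.h>

void runLimitedWorkers(void) {
    dispatch_semaphore_t sema = dispatch_semaphore_create(2); // at most two workers inside at once
    for (int i = 0; i < 10; i++) {
        dispatch_async(dispatch_get_global_queue(0, 0), ^{
            dispatch_semaphore_wait(sema, DISPATCH_TIME_FOREVER); // decrement; block if the value would drop below 0
            NSLog(@"worker %d running", i);
            dispatch_semaphore_signal(sema);                      // increment; wake one waiting thread
        });
    }
}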

Read-write lock: in essence a special kind of spin lock that splits the users of a shared resource into readers and writers. Readers only read the shared resource; writers write to it. Compared with a plain spin lock this improves concurrency, because on a multiprocessor system several readers may access the resource at once; the maximum number of concurrent readers is the number of logical CPUs. Writers are exclusive: at any moment a read-write lock is held either by a single writer or by multiple readers, never by readers and a writer together. Preemption is also disabled while a read-write lock is held.

If the lock currently has no readers and no writers, a writer acquires it immediately; otherwise the writer must spin until no writers or readers remain. If the lock has no writer, a reader acquires it immediately; otherwise the reader must spin until the writer releases the lock.

Only one thread at a time can hold the lock in write mode, but many threads may hold it in read mode. Consequently, while the lock is write-locked, every thread that tries to lock it blocks until it is unlocked; while the lock is read-locked, threads asking for read access are admitted, but a thread that wants write access must wait until all readers have released the lock. Typically, once a writer is waiting on a read-locked lock, subsequent read-lock requests are blocked as well, so that a continuous stream of readers cannot starve the waiting writer.

Read-write locks suit data structures that are read far more often than they are written. Because read-mode locking is shared while write-mode locking is exclusive, a read-write lock is also called a shared-exclusive lock.
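
A pthread_rwlock sketch along these lines; the config dictionary and function names are illustrative:

#import <Foundation/Foundation.h>
#import <pthread.h>

static pthread_rwlock_t _rwlock = PTHREAD_RWLOCK_INITIALIZER;
static NSDictionary *_config = nil;

NSDictionary *readConfig(void) {
    pthread_rwlock_rdlock(&_rwlock);   // many readers may hold the lock at the same time
    NSDictionary *snapshot = _config;
    pthread_rwlock_unlock(&_rwlock);
    return snapshot;
}

void writeConfig(NSDictionary *newConfig) {
    pthread_rwlock_wrlock(&_rwlock);   // exclusive: waits until all readers and writers are gone
    _config = [newConfig copy];
    pthread_rwlock_unlock(&_rwlock);
}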

Lock categories, summarized
Fundamentally there are only three basic kinds of lock: spin locks, mutexes and read-write locks.
The others, such as condition locks, recursive locks and semaphores, are higher-level wrappers and implementations built on top of them.

TLS (thread-local storage)

Thread Local Storage (TLS) is a private storage area the operating system provides for each thread, usually with limited capacity. On Linux and other POSIX systems it is typically accessed through the pthread library via the following functions (a small usage sketch follows):
pthread_key_create(),
pthread_getspecific(),
pthread_setspecific(),
pthread_key_delete()
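
A small usage sketch of those four calls; the key, value and function names are illustrative, and in real code the key would be created exactly once (for example inside dispatch_once):

#import <Foundation/Foundation.h>
#import <pthread.h>
#import <stdlib.h>

static pthread_key_t _tlsKey;

static void destroyTLSValue(void *value) {
    free(value);                                   // runs automatically when the owning thread exits
}

void demoThreadLocal(void) {
    pthread_key_create(&_tlsKey, destroyTLSValue); // one key, one private slot per thread

    int *counter = malloc(sizeof(int));
    *counter = 42;
    pthread_setspecific(_tlsKey, counter);         // store into this thread's slot

    int *stored = pthread_getspecific(_tlsKey);    // read back; other threads would see NULL
    NSLog(@"thread-local value: %d", *stored);

    free(counter);
    pthread_key_delete(_tlsKey);                   // delete does not run destructors, so free first
}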

@synchronized

int main(int argc, char * argv[]) {
    NSString * appDelegateClassName;
    @autoreleasepool {
        
        // Setup code that might create autoreleased objects goes here.
        appDelegateClassName = NSStringFromClass([AppDelegate class]);
        @synchronized (appDelegateClassName) {
            NSLog(@"111111111111");
        }
    }
    return UIApplicationMain(argc, argv, nil, appDelegateClassName);
}

Add the @synchronized block in main, then rewrite the file with clang -x objective-c -rewrite-objc -isysroot /Applications/Xcode.app/Contents/Developer/Platforms/iPhoneSimulator.platform/Developer/SDKs/iPhoneSimulator.sdk main.m, which produces:

int main(int argc, char * argv[]) {
    NSString * appDelegateClassName;
    /* @autoreleasepool */ { __AtAutoreleasePool __autoreleasepool; 


        appDelegateClassName = NSStringFromClass(((Class (*)(id, SEL))(void *)objc_msgSend)((id)objc_getClass("AppDelegate"), sel_registerName("class")));
    {
        id _rethrow = 0;
        id _sync_obj = (id)appDelegateClassName;
        objc_sync_enter(_sync_obj);
        try {
            struct _SYNC_EXIT { _SYNC_EXIT(id arg) : sync_exit(arg) {}
                ~_SYNC_EXIT() {objc_sync_exit(sync_exit);}
                id sync_exit;
                } _sync_exit(_sync_obj);

                    NSLog((NSString *)&__NSConstantStringImpl__var_folders__2_948tyv6520110qy_phw9x4fw0000gn_T_main_e85d49_mi_0);
            
                } catch (id e) {_rethrow = e;}
        
        
    { struct _FIN { _FIN(id reth) : rethrow(reth) {}
        ~_FIN() { if (rethrow) objc_exception_throw(rethrow); }
        id rethrow;
        } _fin_force_rethow(_rethrow);}
}

    }
    return UIApplicationMain(argc, argv, __null, appDelegateClassName);
}

The compiler stores the object as _sync_obj and brackets the block with objc_sync_enter and objc_sync_exit. Setting symbolic breakpoints on them shows that both come from libobjc.A.dylib.

int objc_sync_enter(id obj)
{
    int result = OBJC_SYNC_SUCCESS;

    if (obj) {
        SyncData* data = id2data(obj, ACQUIRE);
        ASSERT(data);
        data->mutex.lock();
    } else {
        // @synchronized(nil) does nothing
        if (DebugNilSync) {
            _objc_inform("NIL SYNC DEBUG: @synchronized(nil); set a breakpoint on objc_sync_nil to debug");
        }
        objc_sync_nil(); // does nothing
    }

    return result;
}

typedef struct alignas(CacheLineSize) SyncData {
    struct SyncData* nextData; // linked list of SyncData nodes
    DisguisedPtr<objc_object> object;
    int32_t threadCount;  // number of THREADS using this block
    recursive_mutex_t mutex; // recursive mutex
} SyncData;


BREAKPOINT_FUNCTION(
    void objc_sync_nil(void)
);


#   define BREAKPOINT_FUNCTION(prototype)                             \
    OBJC_EXTERN __attribute__((noinline, used, visibility("hidden"))) \
    prototype { asm(""); }

objc_sync_enter first checks obj: if it is nil, nothing happens; otherwise it looks up (or creates) the SyncData for the object via id2data and locks its recursive mutex.
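
For symmetry, the exit side releases that same recursive mutex. A simplified sketch of objc_sync_exit based on the open-source objc4 runtime (details vary between versions):

int objc_sync_exit(id obj)
{
    int result = OBJC_SYNC_SUCCESS;

    if (obj) {
        // Look up the SyncData that objc_sync_enter associated with this object...
        SyncData* data = id2data(obj, RELEASE);
        if (!data) {
            result = OBJC_SYNC_NOT_OWNING_THREAD_ERROR;
        } else {
            // ...and unlock its recursive mutex.
            bool okay = data->mutex.tryUnlock();
            if (!okay) {
                result = OBJC_SYNC_NOT_OWNING_THREAD_ERROR;
            }
        }
    } else {
        // @synchronized(nil) does nothing
    }

    return result;
}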

static SyncData* id2data(id object, enum usage why)
{
    spinlock_t *lockp = &LOCK_FOR_OBJ(object);
    SyncData **listp = &LIST_FOR_OBJ(object);
    SyncData* result = NULL;

#if SUPPORT_DIRECT_THREAD_KEYS
    // TLS (Thread Local Storage) fast path
    // Check per-thread single-entry fast cache for matching object
    bool fastCacheOccupied = NO;
    SyncData *data = (SyncData *)tls_get_direct(SYNC_DATA_DIRECT_KEY);
    if (data) {
        fastCacheOccupied = YES;

        if (data->object == object) {
            // Found a match in fast cache.
            uintptr_t lockCount;

            result = data;
            lockCount = (uintptr_t)tls_get_direct(SYNC_COUNT_DIRECT_KEY);
            if (result->threadCount <= 0  ||  lockCount <= 0) {
                _objc_fatal("id2data fastcache is buggy");
            }

            switch(why) {
            case ACQUIRE: {
                lockCount++;
                tls_set_direct(SYNC_COUNT_DIRECT_KEY, (void*)lockCount);
                break;
            }
            case RELEASE:
                lockCount--;
                tls_set_direct(SYNC_COUNT_DIRECT_KEY, (void*)lockCount);
                if (lockCount == 0) {
                    // remove from fast cache
                    tls_set_direct(SYNC_DATA_DIRECT_KEY, NULL);
                    // atomic because may collide with concurrent ACQUIRE
                    OSAtomicDecrement32Barrier(&result->threadCount);
                }
                break;
            case CHECK:
                // do nothing
                break;
            }

            return result;
        }
    }
#endif

    // Check per-thread cache of already-owned locks for matching object
    SyncCache *cache = fetch_cache(NO);
    if (cache) {
        unsigned int i;
        for (i = 0; i < cache->used; i++) {
            SyncCacheItem *item = &cache->list[i];
            if (item->data->object != object) continue;

            // Found a match.
            result = item->data;
            if (result->threadCount <= 0  ||  item->lockCount <= 0) {
                _objc_fatal("id2data cache is buggy");
            }
                
            switch(why) {
            case ACQUIRE:
                item->lockCount++;
                break;
            case RELEASE:
                item->lockCount--;
                if (item->lockCount == 0) {
                    // remove from per-thread cache
                    cache->list[i] = cache->list[--cache->used];
                    // atomic because may collide with concurrent ACQUIRE
                    OSAtomicDecrement32Barrier(&result->threadCount);
                }
                break;
            case CHECK:
                // do nothing
                break;
            }

            return result;
        }
    }

    // Thread cache didn't find anything.
    // Walk in-use list looking for matching object
    // Spinlock prevents multiple threads from creating multiple 
    // locks for the same new object.
    // We could keep the nodes in some hash table if we find that there are
    // more than 20 or so distinct locks active, but we don't do that now.
    
    lockp->lock();

    {
        SyncData* p;
        SyncData* firstUnused = NULL;
        for (p = *listp; p != NULL; p = p->nextData) {
            if ( p->object == object ) {
                result = p;
                // atomic because may collide with concurrent RELEASE
                OSAtomicIncrement32Barrier(&result->threadCount);
                goto done;
            }
            if ( (firstUnused == NULL) && (p->threadCount == 0) )
                firstUnused = p;
        }
    
        // no SyncData currently associated with object
        if ( (why == RELEASE) || (why == CHECK) )
            goto done;
    
        // an unused one was found, use it
        if ( firstUnused != NULL ) {
            result = firstUnused;
            result->object = (objc_object *)object;
            result->threadCount = 1;
            goto done;
        }
    }

    // Allocate a new SyncData and add to list.
    // XXX allocating memory with a global lock held is bad practice,
    // might be worth releasing the lock, allocating, and searching again.
    // But since we never free these guys we won't be stuck in allocation very often.
    posix_memalign((void **)&result, alignof(SyncData), sizeof(SyncData));
    result->object = (objc_object *)object;
    result->threadCount = 1;
    new (&result->mutex) recursive_mutex_t(fork_unsafe_lock);
    result->nextData = *listp;
    *listp = result;
    
 done:
    lockp->unlock();
    if (result) {
        // Only new ACQUIRE should get here.
        // All RELEASE and CHECK and recursive ACQUIRE are 
        // handled by the per-thread caches above.
        if (why == RELEASE) {
            // Probably some thread is incorrectly exiting 
            // while the object is held by another thread.
            return nil;
        }
        if (why != ACQUIRE) _objc_fatal("id2data is buggy");
        if (result->object != object) _objc_fatal("id2data is buggy");

#if SUPPORT_DIRECT_THREAD_KEYS
        if (!fastCacheOccupied) {
            // Save in fast thread cache
            tls_set_direct(SYNC_DATA_DIRECT_KEY, result);
            tls_set_direct(SYNC_COUNT_DIRECT_KEY, (void*)1);
        } else 
#endif
        {
            // Save in thread cache
            if (!cache) cache = fetch_cache(YES);
            cache->list[cache->used].data = result;
            cache->list[cache->used].lockCount = 1;
            cache->used++;
        }
    }

    return result;
}

If this is the first entry for the object:

        SyncData* p;
        SyncData* firstUnused = NULL;
        for (p = *listp; p != NULL; p = p->nextData) {
            if ( p->object == object ) {
                result = p;
                // atomic because may collide with concurrent RELEASE
                OSAtomicIncrement32Barrier(&result->threadCount);
                goto done;
            }
            if ( (firstUnused == NULL) && (p->threadCount == 0) )
                firstUnused = p;
        }
    
        // no SyncData currently associated with object
        // (the very first call arrives here via ACQUIRE, so it falls through)
        if ( (why == RELEASE) || (why == CHECK) )
            goto done;

        // an unused one was found, use it
        if ( firstUnused != NULL ) {
            result = firstUnused;
            result->object = (objc_object *)object;
            result->threadCount = 1;
            goto done;
        }

        // ...and after the done: label the new SyncData is saved into the fast thread cache
        if (!fastCacheOccupied) {
            // Save in fast thread cache
            tls_set_direct(SYNC_DATA_DIRECT_KEY, result);
            tls_set_direct(SYNC_COUNT_DIRECT_KEY, (void*)1);
        }

A series of initialization steps is performed on the new SyncData.

static SyncCache *fetch_cache(bool create)
{
    _objc_pthread_data *data;
    
    data = _objc_fetch_pthread_data(create);
    if (!data) return NULL;

    if (!data->syncCache) {
        if (!create) {
            return NULL;
        } else {
            int count = 4;
            data->syncCache = (SyncCache *)
                calloc(1, sizeof(SyncCache) + count*sizeof(SyncCacheItem));
            data->syncCache->allocated = count;
        }
    }

    // Make sure there's at least one open slot in the list.
    if (data->syncCache->allocated == data->syncCache->used) {
        data->syncCache->allocated *= 2;
        data->syncCache = (SyncCache *)
            realloc(data->syncCache, sizeof(SyncCache) 
                    + data->syncCache->allocated * sizeof(SyncCacheItem));
    }

    return data->syncCache;
}

_objc_pthread_data *_objc_fetch_pthread_data(bool create)
{
    _objc_pthread_data *data;

    data = (_objc_pthread_data *)tls_get(_objc_pthread_key);
    if (!data  &&  create) {
        data = (_objc_pthread_data *)
            calloc(1, sizeof(_objc_pthread_data));
        tls_set(_objc_pthread_key, data);
    }

    return data;
}

In other words, the per-thread data is fetched from thread-local storage, updated, and then stored back.

Notes:
1. @synchronized is relatively slow: it keeps a linked-list cache and the underlying code has to keep searching it.
2. When locking on a local variable, the object may become nil while the function runs and the program can crash, so lock on a long-lived object such as self to guarantee the lock object's lifetime (a typical pattern follows).
3. It is convenient and easy to use, and you never unlock manually.
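
A typical pattern that follows note 2, locking on a long-lived object such as self (ticketCount is an illustrative property):

- (void)addTicket {
    @synchronized (self) {          // self outlives the critical section, so the lock object stays valid
        self.ticketCount += 1;      // protected read-modify-write
    }                               // objc_sync_exit runs automatically; no manual unlock
}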

NSLock

NSLock's lock and unlock methods come from the NSLocking protocol:

@protocol NSLocking

- (void)lock;
- (void)unlock;

@end

NSLock lives in the Foundation framework. The Objective-C Foundation is not open source, but the Swift implementation (swift-corelibs-foundation) is, so we can read it there:


    internal var mutex = _MutexPointer.allocate(capacity: 1)
#if os(macOS) || os(iOS) || os(Windows)
    private var timeoutCond = _ConditionVariablePointer.allocate(capacity: 1)
    private var timeoutMutex = _MutexPointer.allocate(capacity: 1)
#endif

    public override init() {
#if os(Windows)
        InitializeSRWLock(mutex)
        InitializeConditionVariable(timeoutCond)
        InitializeSRWLock(timeoutMutex)
#else
        pthread_mutex_init(mutex, nil)
#if os(macOS) || os(iOS)
        pthread_cond_init(timeoutCond, nil)
        pthread_mutex_init(timeoutMutex, nil)
#endif
#endif
    }
    
    deinit {
#if os(Windows)
        // SRWLocks do not need to be explicitly destroyed
#else
        pthread_mutex_destroy(mutex)
#endif
        mutex.deinitialize(count: 1)
        mutex.deallocate()
#if os(macOS) || os(iOS) || os(Windows)
        deallocateTimedLockData(cond: timeoutCond, mutex: timeoutMutex)
#endif
    }
    

So NSLock's initializer simply sets up a pthread_mutex, and its operations are thin wrappers around that mutex, which is why NSLock performs only slightly worse than using pthread_mutex directly.

    open func lock() {
#if os(Windows)
        AcquireSRWLockExclusive(mutex)
#else
        pthread_mutex_lock(mutex)
#endif
    }

lock() simply acquires the underlying mutex.


    open func unlock() {
#if os(Windows)
        ReleaseSRWLockExclusive(mutex)
        AcquireSRWLockExclusive(timeoutMutex)
        WakeAllConditionVariable(timeoutCond)
        ReleaseSRWLockExclusive(timeoutMutex)
#else
        pthread_mutex_unlock(mutex)
#if os(macOS) || os(iOS)
        // Wakeup any threads waiting in lock(before:)
        pthread_mutex_lock(timeoutMutex)
        pthread_cond_broadcast(timeoutCond)
        pthread_mutex_unlock(timeoutMutex)
#endif
#endif
    }

unlock() releases the mutex and then broadcasts on the condition variable to wake any threads waiting in lock(before:).
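
Besides lock/unlock, NSLock also offers tryLock and lockBeforeDate:, and the timeoutCond/timeoutMutex pair above exists to support the timed variant. A brief sketch:

    NSLock *lock = [[NSLock alloc] init];

    if ([lock tryLock]) {           // returns NO immediately instead of blocking
        // ... critical section ...
        [lock unlock];
    }

    if ([lock lockBeforeDate:[NSDate dateWithTimeIntervalSinceNow:1.0]]) {
        // acquired within one second; this path waits on timeoutCond
        [lock unlock];
    } else {
        NSLog(@"timed out waiting for the lock");
    }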

    NSLock *lock = [[NSLock alloc] init];
    for (int i= 0; i<100; i++) {
        dispatch_async(dispatch_get_global_queue(0, 0), ^{
            static void (^testMethod)(int);
            [lock lock];
            testMethod = ^(int value){
                if (value > 0) {
                  NSLog(@"current value = %d",value);
                  testMethod(value - 1);
                }
            };
            testMethod(10);
            [lock unlock];
        });
    }

This is the basic usage of NSLock, but as soon as the same thread needs to take the lock recursively, NSLock blocks badly because it is not reentrant.
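
For comparison, a sketch of the failing variant: if the lock is instead acquired inside the recursive block, the second [lock lock] on the same thread never returns, because NSLock is not reentrant:

    NSLock *lock = [[NSLock alloc] init];
    static void (^testMethod)(int);
    testMethod = ^(int value){
        [lock lock];                 // the second acquisition on the same thread blocks forever
        if (value > 0) {
            NSLog(@"current value = %d", value);
            testMethod(value - 1);   // re-enters before unlocking -> deadlock
        }
        [lock unlock];
    };
    testMethod(10);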

NSRecursiveLock

    NSRecursiveLock *recursiveLock = [[NSRecursiveLock alloc] init];
    for (int i= 0; i<100; i++) {
        dispatch_async(dispatch_get_global_queue(0, 0), ^{
            static void (^testMethod)(int);
            testMethod = ^(int value){
                [recursiveLock lock];       // the same thread may lock again at every level
                if (value > 0) {
                  NSLog(@"current value = %d",value);
                  testMethod(value - 1);
                }
                [recursiveLock unlock];     // one unlock per lock keeps the count balanced
            };
            testMethod(10);
        });
    }

When recursion is involved we can use NSRecursiveLock instead.

    public override init() {
        super.init()
#if os(Windows)
        InitializeCriticalSection(mutex)
        InitializeConditionVariable(timeoutCond)
        InitializeSRWLock(timeoutMutex)
#else
#if CYGWIN
        var attrib : pthread_mutexattr_t? = nil
#else
        var attrib = pthread_mutexattr_t()
#endif
        withUnsafeMutablePointer(to: &attrib) { attrs in
            pthread_mutexattr_init(attrs)
            pthread_mutexattr_settype(attrs, Int32(PTHREAD_MUTEX_RECURSIVE))
            pthread_mutex_init(mutex, attrs)
        }
#if os(macOS) || os(iOS)
        pthread_cond_init(timeoutCond, nil)
        pthread_mutex_init(timeoutMutex, nil)
#endif
#endif
    }

The only difference between NSRecursiveLock and NSLock is that its init additionally calls pthread_mutexattr_settype(attrs, Int32(PTHREAD_MUTEX_RECURSIVE)), which makes the underlying mutex recursive.
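
The same attribute can be set with the pthread API directly; a minimal sketch:

#import <pthread.h>

static pthread_mutex_t _recursiveMutex;

void setupRecursiveMutex(void) {
    pthread_mutexattr_t attr;
    pthread_mutexattr_init(&attr);
    pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_RECURSIVE); // allow re-locking on the same thread
    pthread_mutex_init(&_recursiveMutex, &attr);
    pthread_mutexattr_destroy(&attr);
}

void recursiveWork(int depth) {
    pthread_mutex_lock(&_recursiveMutex);    // the same thread may lock again
    if (depth > 0) recursiveWork(depth - 1);
    pthread_mutex_unlock(&_recursiveMutex);  // must be unlocked as many times as it was locked
}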

NSCondition

An NSCondition object acts as both a lock and a thread checker: the lock protects the data source while the condition is being tested and while the work triggered by the condition runs; the thread checker decides, based on the condition, whether the thread continues or blocks.
1. [condition lock]: generally used when several threads access or modify the same data source at once; it guarantees that the data source is accessed or modified by only one thread at a time. Other threads wait outside lock until unlock is called.
2. [condition unlock]: always used in a pair with lock.
3. [condition wait]: puts the current thread into a waiting state.
4. [condition signal]: signals a waiting thread that it no longer needs to wait and may continue.


#pragma mark -- NSCondition

- (void)lg_testConditon{
    
    _testCondition = [[NSCondition alloc] init];
    // spin up producer and consumer tasks
    for (int i = 0; i < 50; i++) {
        dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0), ^{
            [self lg_producer];
        });
        dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0), ^{
            [self lg_consumer];
        });
        
        dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0), ^{
            [self lg_consumer];
        });
        dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0), ^{
            [self lg_producer];
        });
    }
}

- (void)lg_producer{
    [_testCondition lock]; // protect the shared count from concurrent access
    self.ticketCount = self.ticketCount + 1;
    NSLog(@"produced one, count is now %zd", self.ticketCount);
    [_testCondition signal]; // wake one waiting consumer
    [_testCondition unlock];
}

- (void)lg_consumer{
 
    [_testCondition lock];  // protect the shared count from concurrent access
    while (self.ticketCount == 0) {   // use while, not if: another consumer may win after the wakeup
        NSLog(@"waiting, count %zd", self.ticketCount);
        [_testCondition wait];
    }
    // note: consume only after the wait/condition check
    self.ticketCount -= 1;
    NSLog(@"consumed one, count left %zd", self.ticketCount);
    [_testCondition unlock];
}

NSCondition is a wrapper around pthread_mutex plus a condition variable, and using it directly is fairly cumbersome, so in practice we usually use NSConditionLock instead.

NSConditionLock

1.1 NSConditionLock is a lock: once one thread acquires it, the others must wait.
1.2 [xxx lock]: xxx wants to acquire the lock. If no other thread holds it (the internal condition is not checked), the code below this line runs; if another thread already holds it (with or without a condition), the caller waits until that thread unlocks.
1.3 [xxx lockWhenCondition:A]: even if no other thread holds the lock, the caller still cannot acquire it unless the lock's internal condition equals A; it keeps waiting. If the condition equals A and no other thread holds the lock, the caller enters the critical section and holds the lock, and every other thread waits until it unlocks.
1.4 [xxx unlockWithCondition:A]: releases the lock and sets the internal condition to A.
1.5 BOOL ok = [xxx lockWhenCondition:A beforeDate:date]: if the lock cannot be acquired before the given date, the call stops blocking and returns NO without changing the lock's state; this lets you handle both outcomes.
1.6 The so-called condition is just an integer; the lock compares conditions by integer equality.


open class NSConditionLock : NSObject, NSLocking {
    internal var _cond = NSCondition()
    internal var _value: Int
    internal var _thread: _swift_CFThreadRef?
    
    public convenience override init() {
        self.init(condition: 0)
    }
    
    public init(condition: Int) {
        _value = condition
    }

    open func lock() {
        let _ = lock(before: Date.distantFuture)
    }

    open func unlock() {
        _cond.lock()
#if os(Windows)
        _thread = INVALID_HANDLE_VALUE
#else
        _thread = nil
#endif
        _cond.broadcast()
        _cond.unlock()
    }
    
    open var condition: Int {
        return _value
    }

    open func lock(whenCondition condition: Int) {
        let _ = lock(whenCondition: condition, before: Date.distantFuture)
    }

    open func `try`() -> Bool {
        return lock(before: Date.distantPast)
    }
    
    open func tryLock(whenCondition condition: Int) -> Bool {
        return lock(whenCondition: condition, before: Date.distantPast)
    }

    open func unlock(withCondition condition: Int) {
        _cond.lock()
#if os(Windows)
        _thread = INVALID_HANDLE_VALUE
#else
        _thread = nil
#endif
        _value = condition
        _cond.broadcast()
        _cond.unlock()
    }

    open func lock(before limit: Date) -> Bool {
        _cond.lock()
        while _thread != nil {
            if !_cond.wait(until: limit) {
                _cond.unlock()
                return false
            }
        }
#if os(Windows)
        _thread = GetCurrentThread()
#else
        _thread = pthread_self()
#endif
        _cond.unlock()
        return true
    }
    
    open func lock(whenCondition condition: Int, before limit: Date) -> Bool {
        _cond.lock()
        while _thread != nil || _value != condition {
            if !_cond.wait(until: limit) {
                _cond.unlock()
                return false
            }
        }
#if os(Windows)
        _thread = GetCurrentThread()
#else
        _thread = pthread_self()
#endif
        _cond.unlock()
        return true
    }
    
    open var name: String?
}

NSConditionLock summary

    NSConditionLock *conditionLock = [[NSConditionLock alloc] initWithCondition:2];
    
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0), ^{
        [conditionLock lockWhenCondition:1]; // waits until the internal condition == 1
        // -[NSConditionLock lockWhenCondition: beforeDate:]
        NSLog(@"Thread 1");
        [conditionLock unlockWithCondition:0];
    });
    
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_LOW, 0), ^{
       
        [conditionLock lockWhenCondition:2]; // condition == 2 matches immediately
//        sleep(0.1);
        NSLog(@"Thread 2");
        // self.myLock.value = 1;
        [conditionLock unlockWithCondition:1]; // condition changes 2 -> 1, unblocking thread 1
    });
    });
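
Even though the second block is dispatched to the low-priority queue, it runs first: the lock was created with condition 2, so its lockWhenCondition:2 succeeds immediately and logs "Thread 2", and unlockWithCondition:1 changes the condition to 1, which finally lets the first block's lockWhenCondition:1 proceed and log "Thread 1" before setting the condition back to 0.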
    