iOS Multithreading
This article covers the multithreading options available on iOS, techniques for making code thread-safe, and the multi-reader/single-writer pattern.
It is fairly long, so please bear with it.
Process
In principle, every iOS app runs as a single process with its own virtual address space holding its runtime data.
Thread
A process can contain multiple threads, and those threads share the process's global variables and heap. Several threads can execute concurrently inside one process, giving the effect of getting multiple tasks done at the same time. On a single processor, this "concurrency" is really the operating system switching rapidly back and forth between threads, i.e. pseudo-concurrency.
What multithreading is for
- Avoiding blocking: a long-running task executed synchronously on the main thread causes the UI to stall
- Splitting up complex work: for example, when a UITableViewCell loads an image, the image data can be processed on a background thread and only the final result handed back to the main thread
- Running several tasks in parallel
Multithreading options on iOS
- pthread: a C API that is hard to use and requires the caller to manage the thread lifecycle; almost never used directly
- NSThread: Objective-C and object-oriented, but it needs a RunLoop to keep a thread alive and the caller still manages the lifecycle; rarely used
- GCD: a C API whose lifecycle is managed by the system; currently the mainstream choice
- NSOperation: an Objective-C, more object-oriented wrapper built on top of GCD, with the lifecycle managed by the system; also widely used (see the sketch below)
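The rest of the article focuses on GCD, so for completeness here is a minimal NSOperationQueue sketch; the concurrency limit and the URL are illustrative assumptions, not from the original.
NSOperationQueue *queue = [[NSOperationQueue alloc] init];
queue.maxConcurrentOperationCount = 4;   // illustrative limit
[queue addOperationWithBlock:^{
    // background work, e.g. downloading/decoding an image (URL is hypothetical)
    NSData *data = [NSData dataWithContentsOfURL:[NSURL URLWithString:@"https://example.com/image.png"]];
    [[NSOperationQueue mainQueue] addOperationWithBlock:^{
        NSLog(@"back on the main thread with %lu bytes", (unsigned long)data.length);
    }];
}];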
Queues
A queue holds tasks and, depending on its type, determines how those tasks are scheduled onto threads. Queues come in two kinds:
- Serial queue: the tasks in the queue execute one after another
- Concurrent queue: tasks in the queue may execute at the same time
Synchronous
A synchronous task runs on the current thread, and the call does not return until the task has finished.
Task B must wait for task A to complete before it can start; if A takes a long time, B keeps waiting.
A typical iOS scenario: while loading network data, a loading indicator covers the current view; once loading finishes, the indicator disappears and the page shows the data. If the loading task never completes, the UI stays stuck on the loading screen.
Synchronous dispatch can also cause deadlocks.
Asynchronous
The call returns immediately, and the task is executed later, usually on another thread (the main queue is the exception, as we will see).
Let's look at how synchronous and asynchronous dispatch behave on different queues.
- (void)viewDidLoad {
[super viewDidLoad];
NSLog(@"main thread %@",[NSThread currentThread]);
dispatch_async(dispatch_get_main_queue(), ^{
NSLog(@"main dispatch_async_thread:%@",[NSThread currentThread]);
});
dispatch_sync(dispatch_get_global_queue(0, 0), ^{
NSLog(@"global dispatch_sync_thread:%@",[NSThread currentThread]);
});
dispatch_async(dispatch_get_global_queue(0, 0), ^{
NSLog(@"global dispatch_async_thread:%@",[NSThread currentThread]);
});
dispatch_queue_t squeue = dispatch_queue_create("QUEUE1", DISPATCH_QUEUE_SERIAL);
dispatch_sync(squeue, ^{
NSLog(@"squeue dispatch_sync_thread:%@",[NSThread currentThread]);
});
dispatch_async(squeue, ^{
NSLog(@"squeue dispatch_async_thread:%@",[NSThread currentThread]);
});
dispatch_queue_t cqueue = dispatch_queue_create("QUEUE2", DISPATCH_QUEUE_CONCURRENT);
dispatch_sync(cqueue, ^{
NSLog(@"cqueue dispatch_sync_thread:%@",[NSThread currentThread]);
});
dispatch_async(cqueue, ^{
NSLog(@"cqueue dispatch_async_thread:%@",[NSThread currentThread]);
});
}
main thread <NSThread: 0x600003a0c200>{number = 1, name = main}
global dispatch_sync_thread:<NSThread: 0x600003a0c200>{number = 1, name = main}
global dispatch_async_thread:<NSThread: 0x600003a4d980>{number = 2, name = (null)}
squeue dispatch_sync_thread:<NSThread: 0x600003a0c200>{number = 1, name = main}
cqueue dispatch_sync_thread:<NSThread: 0x600003a0c200>{number = 1, name = main}
squeue dispatch_async_thread:<NSThread: 0x600003a4d980>{number = 2, name = (null)}
cqueue dispatch_async_thread:<NSThread: 0x600003a4d980>{number = 2, name = (null)}
main dispatch_async_thread:<NSThread: 0x600003a0c200>{number = 1, name = main}
From this output we can see:
- Synchronous dispatch never starts a new thread
- Asynchronous dispatch starts new threads on the global queue and on custom serial and concurrent queues, but not on the main queue
Asynchronous tasks on the main queue
NSLog(@"1%@",[NSThread currentThread]);
dispatch_async(dispatch_get_main_queue(), ^{
NSLog(@"main async 2:%@",[NSThread currentThread]);
});
NSLog(@"3%@",[NSThread currentThread]);
dispatch_async(dispatch_get_main_queue(), ^{
NSLog(@"main async 4:%@",[NSThread currentThread]);
});
NSLog(@"5%@",[NSThread currentThread]);
dispatch_async(dispatch_get_main_queue(), ^{
NSLog(@"main async 6:%@",[NSThread currentThread]);
});
dispatch_async(dispatch_get_main_queue(), ^{
NSLog(@"main async 7:%@",[NSThread currentThread]);
});
dispatch_async(dispatch_get_main_queue(), ^{
NSLog(@"main async 8:%@",[NSThread currentThread]);
});
NSLog(@"9%@",[NSThread currentThread]);
dispatch_async(dispatch_get_main_queue(), ^{
NSLog(@"main async 10:%@",[NSThread currentThread]);
});
1<NSThread: 0x600001564140>{number = 1, name = main}
3<NSThread: 0x600001564140>{number = 1, name = main}
5<NSThread: 0x600001564140>{number = 1, name = main}
9<NSThread: 0x600001564140>{number = 1, name = main}
main async 2:<NSThread: 0x600001564140>{number = 1, name = main}
main async 4:<NSThread: 0x600001564140>{number = 1, name = main}
main async 6:<NSThread: 0x600001564140>{number = 1, name = main}
main async 7:<NSThread: 0x600001564140>{number = 1, name = main}
main async 8:<NSThread: 0x600001564140>{number = 1, name = main}
main async 10:<NSThread: 0x600001564140>{number = 1, name = main}
From the output we can conclude that asynchronous tasks dispatched to the main queue wait until the work in viewDidLoad has finished and then execute one after another. Based on that, here is a question: what happens when the following code dispatches a synchronous task to the main queue?
NSLog(@"1%@",[NSThread currentThread]);
dispatch_sync(dispatch_get_main_queue(), ^{
NSLog(@"main sync 2:%@",[NSThread currentThread]);
});
NSLog(@"3%@",[NSThread currentThread]);
The answer: deadlock. The synchronous task cannot start until viewDidLoad, which is already running on the main queue, finishes, while viewDidLoad is blocked waiting for the synchronous task to return. Each waits for the other, nothing ever dequeues, and the main thread hangs.
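A common way to avoid this kind of deadlock is to only hop onto the main queue asynchronously, or to check whether you are already on the main thread first. A minimal sketch; the runOnMain helper name is mine, not from the article:
// Hypothetical helper: run a block on the main thread without dispatch_sync-ing
// onto the queue the code is already running on.
static void runOnMain(dispatch_block_t block) {
    if ([NSThread isMainThread]) {
        block();                                              // already on the main thread, just run it
    } else {
        dispatch_async(dispatch_get_main_queue(), block);     // otherwise hop over asynchronously
    }
}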
Asynchronous tasks on the global queue
dispatch_queue_t gQueue = dispatch_get_global_queue(0, 0);
NSLog(@"1%@",[NSThread currentThread]);
dispatch_async(gQueue, ^{
NSLog(@"global async 2:%@",[NSThread currentThread]);
});
NSLog(@"3%@",[NSThread currentThread]);
dispatch_async(gQueue, ^{
NSLog(@"global async 4:%@",[NSThread currentThread]);
});
NSLog(@"5%@",[NSThread currentThread]);
dispatch_async(gQueue, ^{
NSLog(@"global async 6:%@",[NSThread currentThread]);
});
dispatch_async(gQueue, ^{
NSLog(@"global async 7:%@",[NSThread currentThread]);
});
dispatch_async(gQueue, ^{
NSLog(@"global async 8:%@",[NSThread currentThread]);
});
NSLog(@"9%@",[NSThread currentThread]);
dispatch_async(gQueue, ^{
NSLog(@"global async 10:%@",[NSThread currentThread]);
});
1<NSThread: 0x6000003f0080>{number = 1, name = main}
3<NSThread: 0x6000003f0080>{number = 1, name = main}
global async 2:<NSThread: 0x6000003b1840>{number = 6, name = (null)}
5<NSThread: 0x6000003f0080>{number = 1, name = main}
9<NSThread: 0x6000003f0080>{number = 1, name = main}
global async 4:<NSThread: 0x6000003b2180>{number = 7, name = (null)}
global async 6:<NSThread: 0x6000003b1840>{number = 6, name = (null)}
global async 7:<NSThread: 0x6000003f3380>{number = 5, name = (null)}
global async 8:<NSThread: 0x6000003b2180>{number = 7, name = (null)}
global async 10:<NSThread: 0x6000003ec9c0>{number = 4, name = (null)}
We already concluded above that asynchronous tasks on the global queue run on newly created threads, so they clearly do not need to wait for viewDidLoad to finish before they start executing.
Asynchronous tasks on a custom serial queue
dispatch_queue_t squeue = dispatch_queue_create("QUEUE1", DISPATCH_QUEUE_SERIAL);
NSLog(@"1%@",[NSThread currentThread]);
dispatch_async(squeue, ^{
NSLog(@"squeue async 2:%@",[NSThread currentThread]);
});
NSLog(@"3%@",[NSThread currentThread]);
dispatch_async(squeue, ^{
NSLog(@"squeue async 4:%@",[NSThread currentThread]);
});
NSLog(@"5%@",[NSThread currentThread]);
dispatch_async(squeue, ^{
NSLog(@"squeue async 6:%@",[NSThread currentThread]);
});
dispatch_async(squeue, ^{
NSLog(@"squeue async 7:%@",[NSThread currentThread]);
});
dispatch_async(squeue, ^{
NSLog(@"squeue async 8:%@",[NSThread currentThread]);
});
NSLog(@"9%@",[NSThread currentThread]);
dispatch_async(squeue, ^{
NSLog(@"squeue async 10:%@",[NSThread currentThread]);
});
1<NSThread: 0x60000215c600>{number = 1, name = main}
3<NSThread: 0x60000215c600>{number = 1, name = main}
squeue async 2:<NSThread: 0x600002108140>{number = 6, name = (null)}
5<NSThread: 0x60000215c600>{number = 1, name = main}
9<NSThread: 0x60000215c600>{number = 1, name = main}
squeue async 4:<NSThread: 0x600002108140>{number = 6, name = (null)}
squeue async 6:<NSThread: 0x600002108140>{number = 6, name = (null)}
squeue async 7:<NSThread: 0x600002108140>{number = 6, name = (null)}
squeue async 8:<NSThread: 0x600002108140>{number = 6, name = (null)}
squeue async 10:<NSThread: 0x600002108140>{number = 6, name = (null)}
The pattern matches the global queue (the tasks do not wait for viewDidLoad), but only a single new thread is created and the tasks run in order.
Asynchronous tasks on a custom concurrent queue
dispatch_queue_t cqueue = dispatch_queue_create("QUEUE1", DISPATCH_QUEUE_CONCURRENT);
NSLog(@"1%@",[NSThread currentThread]);
dispatch_async(cqueue, ^{
NSLog(@"cqueue async 2:%@",[NSThread currentThread]);
});
NSLog(@"3%@",[NSThread currentThread]);
dispatch_async(cqueue, ^{
NSLog(@"cqueue async 4:%@",[NSThread currentThread]);
});
NSLog(@"5%@",[NSThread currentThread]);
dispatch_async(cqueue, ^{
NSLog(@"cqueue async 6:%@",[NSThread currentThread]);
});
dispatch_async(cqueue, ^{
NSLog(@"cqueue async 7:%@",[NSThread currentThread]);
});
dispatch_async(cqueue, ^{
NSLog(@"cqueue async 8:%@",[NSThread currentThread]);
});
NSLog(@"9%@",[NSThread currentThread]);
dispatch_async(cqueue, ^{
NSLog(@"cqueue async 10:%@",[NSThread currentThread]);
});
1<NSThread: 0x6000028a03c0>{number = 1, name = main}
3<NSThread: 0x6000028a03c0>{number = 1, name = main}
cqueue async 2:<NSThread: 0x60000289d1c0>{number = 3, name = (null)}
5<NSThread: 0x6000028a03c0>{number = 1, name = main}
cqueue async 4:<NSThread: 0x6000028f8800>{number = 7, name = (null)}
cqueue async 6:<NSThread: 0x60000289d1c0>{number = 3, name = (null)}
cqueue async 7:<NSThread: 0x6000028f8800>{number = 7, name = (null)}
cqueue async 8:<NSThread: 0x6000028f8800>{number = 7, name = (null)}
9<NSThread: 0x6000028a03c0>{number = 1, name = main}
cqueue async 10:<NSThread: 0x6000028f8800>{number = 7, name = (null)}
The behaviour again matches the global queue, and several threads are created.
global_queue
Comparing the results above, the global queue is a concurrent queue.
main_queue
The main queue is a serial queue.
Deadlock
Dispatching a synchronous task onto the main queue causes a deadlock. It is not specific to the main queue: dispatching a synchronous task onto a serial queue from inside a task that is already running synchronously on that same queue deadlocks too, so the following code also crashes. On a concurrent queue this situation does not arise.
dispatch_queue_t squeue = dispatch_queue_create("QUEUE1", DISPATCH_QUEUE_SERIAL);
dispatch_sync(squeue, ^{
NSLog(@"squeue sync outer:%@",[NSThread currentThread]);
dispatch_sync(squeue, ^{
NSLog(@"squeue sync inner:%@",[NSThread currentThread]);
});
});
Thread safety
Along with its convenience, multithreading brings risks: when several threads write to the same variable, the result can be wrong. The ticket-selling problem is a classic example (assume an int instance variable tickets holding the remaining stock, e.g. 100, whose declaration is not shown here).
- (void)viewDidLoad {
[super viewDidLoad];
dispatch_queue_t queue = dispatch_get_global_queue(0, 0);
dispatch_async(queue, ^{
for (int i = 0; i < 30; i++) {
[self saleTickets];
}
});
dispatch_async(queue, ^{
for (int i = 0; i < 30; i++) {
[self saleTickets];
}
});
dispatch_async(queue, ^{
for (int i = 0; i < 40; i++) {
[self saleTickets];
}
});
}
- (void)saleTickets {
tickets = tickets - 1;
NSLog(@"卖出1张票,还剩%d张票",tickets);
}
The final count is not 0, which shows that several threads were operating on the same piece of memory at the same time.
For example, several threads may all read tickets as 98 and each subtract 1, so some decrements are lost and the result varies from run to run.
Solution 1: locks
Locks available on iOS
- OSSpinLock (spin lock)
- os_unfair_lock
- pthread_mutex
- dispatch_semaphore
- NSLock
- NSRecursiveLock
- NSCondition
- NSConditionLock
- @synchronized
Spin lock
When a thread finds a spin lock already held, it busy-waits, repeatedly checking the lock until it is released, and only then continues; conceptually similar to while (locked) {}.
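To make the busy-wait concrete, here is a toy spin lock built on C11 atomics; it is purely illustrative and not how OSSpinLock is actually implemented:
#include <stdatomic.h>

typedef struct { atomic_flag flag; } toy_spinlock_t;   // toy type, initialise the flag with ATOMIC_FLAG_INIT

static void toy_spin_lock(toy_spinlock_t *lock) {
    // Keep trying to set the flag; while another thread holds it, we spin (busy-wait) and burn CPU.
    while (atomic_flag_test_and_set_explicit(&lock->flag, memory_order_acquire)) {
        // spinning...
    }
}

static void toy_spin_unlock(toy_spinlock_t *lock) {
    atomic_flag_clear_explicit(&lock->flag, memory_order_release);
}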
Mutex (mutual-exclusion lock)
When a thread finds a mutex already held, the thread goes to sleep; it is woken up only after another thread unlocks the mutex.
OSSpinLock
#import <libkern/OSAtomic.h>
- (void)viewDidLoad {
[super viewDidLoad];
self.lock = OS_SPINLOCK_INIT;
[self beginSaleTicket];
}
- (void)saleTickets {
OSSpinLockLock(&_lock);
tickets = tickets - 1;
NSLog(@"卖出1张票,还剩%d张票",tickets);
OSSpinLockUnlock(&_lock);
}
- (void)beginSaleTicket {
dispatch_queue_t queue = dispatch_get_global_queue(0, 0);
dispatch_async(queue, ^{
for (int i = 0; i < 30; i++) {
[self saleTickets];
}
});
dispatch_async(queue, ^{
for (int i = 0; i < 30; i++) {
[self saleTickets];
}
});
dispatch_async(queue, ^{
for (int i = 0; i < 40; i++) {
[self saleTickets];
}
});
}
This time the final count is 0; in other words, the ticket sales are now thread-safe.
Why OSSpinLock was deprecated
Reason 1: deadlock
#import <libkern/OSAtomic.h>
- (void)viewDidLoad {
[super viewDidLoad];
self.lock = OS_SPINLOCK_INIT;
[self deadLock];
NSLog(@"test");
}
- (void)deadLock {
OSSpinLockLock(&_lock);
[self lockAgain];
NSLog(@"等待lockAgain执行完成");
OSSpinLockUnlock(&_lock);
}
- (void)lockAgain {
OSSpinLockLock(&_lock);
NSLog(@"加锁");
OSSpinLockUnlock(&_lock);
}
- (void)touchesBegan:(NSSet<UITouch *> *)touches withEvent:(UIEvent *)event {
NSLog(@"touch");
}
Run this and nothing is printed; touchesBegan stops responding as well. Why? Let's analyse it.
deadLock acquires the spin lock and then calls lockAgain on the same thread. lockAgain sees that _lock is already held and busy-waits; deadLock can never reach its unlock because it is stuck inside lockAgain, so the thread spins forever. A non-recursive mutex has the same re-entrancy problem, except the thread sleeps instead of spinning.
Reason 2: priority inversion
Thread scheduling is preemptive: when a higher-priority thread becomes runnable, it takes the CPU away from lower-priority threads.
Suppose there are two threads A and B, with priority A > B.
B acquires the lock first. A is then scheduled and preempts B before B has a chance to unlock. A sees the lock held and spins; because A has higher priority it keeps getting CPU time, so B can hardly run and cannot release the lock. With only two threads this may still resolve eventually, but with many threads contending, the highest-priority task A can be starved for a long time, which ultimately hurts the app's performance.
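A rough way to provoke the scenario is sketched below; the QoS classes and timings are illustrative, and whether inversion actually shows up depends on how loaded the CPU is:
static OSSpinLock spin = OS_SPINLOCK_INIT;

// Low-QoS thread grabs the spin lock and holds it while doing "slow work".
dispatch_async(dispatch_get_global_queue(QOS_CLASS_BACKGROUND, 0), ^{
    OSSpinLockLock(&spin);
    [NSThread sleepForTimeInterval:1.0];   // stands in for slow work done while holding the lock
    OSSpinLockUnlock(&spin);
});

// High-QoS thread then spins on the same lock, burning CPU that the
// background thread needs in order to finish and unlock.
dispatch_async(dispatch_get_global_queue(QOS_CLASS_USER_INTERACTIVE, 0), ^{
    OSSpinLockLock(&spin);
    NSLog(@"high-QoS thread finally acquired the lock");
    OSSpinLockUnlock(&spin);
});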
os_unfair_lock
Introduced in iOS 10 as the replacement for OSSpinLock.
- (void)viewDidLoad {
[super viewDidLoad];
self.lock = OS_UNFAIR_LOCK_INIT;
[self beginSaleTicket];
}
- (void)saleTickets {
os_unfair_lock_lock(&_lock);
tickets = tickets - 1;
NSLog(@"卖出1张票,还剩%d张票",tickets);
os_unfair_lock_unlock(&_lock);
}
Usage is straightforward. Apple does not explicitly state whether os_unfair_lock is a spin lock or a mutex, but stepping through the assembly shows that a thread waiting on the lock eventually goes to sleep rather than spinning, so by the definitions above os_unfair_lock behaves as a mutex.
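If you would rather attempt the lock without blocking, os_unfair_lock also has a trylock variant. A small sketch reusing the _lock ivar from the example above (the trySaleTicket name is mine):
- (void)trySaleTicket {
    if (os_unfair_lock_trylock(&_lock)) {
        // got the lock without waiting
        tickets = tickets - 1;
        NSLog(@"Sold 1 ticket, %d tickets left", tickets);
        os_unfair_lock_unlock(&_lock);
    } else {
        // someone else holds it; skip instead of blocking
        NSLog(@"lock busy, will try again later");
    }
}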
pthread_mutex_t
pthread_mutex_t is a fairly powerful lock: through its attributes it supports several lock types, such as a normal mutex and a recursive mutex, and it can be combined with condition variables.
#import <pthread.h>
- (void)viewDidLoad {
[super viewDidLoad];
pthread_mutex_init(&(_lock), NULL);
[self beginSaleTicket];
}
- (void)saleTickets {
pthread_mutex_lock(&_lock);
tickets = tickets - 1;
NSLog(@"卖出1张票,还剩%d张票",tickets);
pthread_mutex_unlock(&_lock);
}
The sharp-eyed will have noticed that pthread_mutex_init takes two arguments, and the second one specifies the lock's attributes, including its type. Remember the deadlock above caused by the same thread locking the same lock twice? pthread_mutex offers a solution for that: the recursive lock.
Recursive lock
- (void)viewDidLoad {
[super viewDidLoad];
pthread_mutexattr_t attr;
pthread_mutexattr_init(&attr);
pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_RECURSIVE);
pthread_mutex_init(&(_lock), &attr);
pthread_mutexattr_destroy(&attr);
[self deadLock];
NSLog(@"test");
}
- (void)deadLock {
pthread_mutex_lock(&_lock);
[self lockAgain];
NSLog(@"等待lockAgain执行完成");
pthread_mutex_unlock(&_lock);
}
- (void)lockAgain {
pthread_mutex_lock(&_lock);
NSLog(@"加锁");
pthread_mutex_unlock(&_lock);
}
As you can see, when the mutex is created with the recursive type, the code runs through without deadlocking: a recursive lock allows the same thread to lock the same lock multiple times.
Condition variables
When a thread waits on a condition, it releases the lock it currently holds and goes to sleep; when another thread signals the condition, the waiting thread is woken up, re-acquires the lock and continues.
Imagine this scenario: hauling bricks and plastering a wall. Plastering can only happen when there are bricks. The plasterer and the hauler are two different people, so when the plasterer runs out of bricks, they have to wait until the hauler delivers more before they can continue.
- (void)viewDidLoad {
[super viewDidLoad];
pthread_mutexattr_t attr;
pthread_mutexattr_init(&attr);
pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_RECURSIVE);
pthread_mutex_init(&_lock, &attr);
pthread_mutexattr_destroy(&attr);
pthread_cond_init(&_cond, NULL);
dispatch_async(dispatch_get_global_queue(0, 0), ^{
[self build];
});
dispatch_async(dispatch_get_global_queue(0, 0), ^{
[self getBrick];
});
}
- (void)build {
pthread_mutex_lock(&_lock);
NSLog(@"start plastering");
while (brick == 0) {
// wait for bricks (a while loop, rather than if, also guards against spurious wakeups)
NSLog(@"out of bricks, waiting for more");
pthread_cond_wait(&_cond, &_lock);
}
NSLog(@"plastering");
pthread_mutex_unlock(&_lock);
}
- (void)getBrick {
pthread_mutex_lock(&_lock);
sleep(1);
brick += 1;
NSLog(@"砖来了");
pthread_cond_signal(&_cond);
pthread_mutex_unlock(&_lock);
}
One thing worth noting: after the condition is signalled, the waiting side does not wake up and re-lock immediately; it can only proceed once the signalling side releases the lock. So if the signalling side still has expensive work to do, it is better to do that work after unlock.
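For example, a variant of getBrick that signals and unlocks first, and only then does the slow part (the method name and the sleep standing in for slow work are illustrative):
- (void)getBrickFast {
    pthread_mutex_lock(&_lock);
    brick += 1;
    NSLog(@"bricks delivered");
    pthread_cond_signal(&_cond);
    pthread_mutex_unlock(&_lock);   // unlock first, so the waiting builder can wake up and proceed

    sleep(1);   // stands in for slow work that does not touch shared state, done outside the lock
}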
NSLock
- (void)viewDidLoad {
[super viewDidLoad];
self.lock = [NSLock new];
[self beginSaleTicket];
}
- (void)saleTickets {
[self.lock lock];
tickets = tickets - 1;
NSLog(@"卖出1张票,还剩%d张票",tickets);
[self.lock unlock];
}
NSLock is a wrapper around the normal (non-recursive) pthread mutex and works the same way, so there is no need to repeat the details here.
NSRecursiveLock
A wrapper around the recursive pthread mutex; its API is essentially the same as NSLock, so the full example is omitted here (a short sketch follows below).
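A minimal sketch mirroring the earlier deadLock/lockAgain example, just to show the re-entrancy; the rLock property name is mine:
// Assume a property initialised elsewhere: self.rLock = [NSRecursiveLock new];
- (void)deadLock {
    [self.rLock lock];
    [self lockAgain];          // locking again on the same thread does not deadlock
    NSLog(@"waiting for lockAgain to finish");
    [self.rLock unlock];
}
- (void)lockAgain {
    [self.rLock lock];
    NSLog(@"locked");
    [self.rLock unlock];
}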
NSCondition
A wrapper around a pthread mutex plus a condition variable.
- (void)viewDidLoad {
[super viewDidLoad];
self.condition = [NSCondition new];
[self buildWall];
}
- (void)buildWall {
dispatch_async(dispatch_get_global_queue(0, 0), ^{
[self build];
});
dispatch_async(dispatch_get_global_queue(0, 0), ^{
[self getBrick];
});
}
- (void)build {
[self.condition lock];
NSLog(@"start plastering");
while (brick == 0) {
// wait until the hauler signals that bricks have arrived
NSLog(@"out of bricks, waiting for more");
[self.condition wait];
}
NSLog(@"plastering");
[self.condition unlock];
}
- (void)getBrick {
[self.condition lock];
sleep(1);
brick += 1;
NSLog(@"开始搬砖");
[self.condition signal];
[self.condition unlock];
}
NSConditionLock
A wrapper around NSCondition that ties an integer condition value to the lock, so different steps can be gated on different values.
- (void)viewDidLoad {
[super viewDidLoad];
self.conditionLock = [[NSConditionLock alloc] initWithCondition:1];
dispatch_async(dispatch_get_global_queue(0, 0), ^{
[self stepThree];
});
dispatch_async(dispatch_get_global_queue(0, 0), ^{
[self stepOne];
});
dispatch_async(dispatch_get_global_queue(0, 0), ^{
[self stepTwo];
});
}
- (void)stepOne
{
[self.conditionLock lock];
NSLog(@"%s",__func__);
sleep(1);
[self.conditionLock unlockWithCondition:2];
}
- (void)stepTwo
{
[self.conditionLock lockWhenCondition:2];
NSLog(@"%s",__func__);
sleep(1);
[self.conditionLock unlockWithCondition:3];
}
- (void)stepThree
{
[self.conditionLock lockWhenCondition:3];
NSLog(@"%s",__func__);
[self.conditionLock unlock];
}
@synchronized
Objective-C syntactic sugar that is also, ultimately, a wrapper around a mutex.
- (void)viewDidLoad {
[super viewDidLoad];
[self beginSaleTicket];
}
- (void)saleTickets {
@synchronized (self) {
tickets = tickets - 1;
NSLog(@"卖出1张票,还剩%d张票",tickets);
}
}
@synchronized keeps the code very compact, and testing shows that the lock it uses internally behaves as a recursive lock.
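A quick way to see that recursive behaviour: nesting @synchronized on the same object from the same thread does not deadlock, e.g.:
- (void)nestedSync {           // illustrative method name
    @synchronized (self) {
        NSLog(@"outer");
        @synchronized (self) { // same lock object, same thread: still fine
            NSLog(@"inner");
        }
    }
}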
Having covered the locks iOS provides, here is how they roughly compare in performance (by lock type, faster on the left):
plain lock > condition lock > recursive lock
os_unfair_lock > OSSpinLock > pthread_mutex_t > NSLock > NSCondition > pthread_mutex (recursive) > NSRecursiveLock > NSConditionLock > @synchronized
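The ordering above comes from the author's own tests. If you want to measure it yourself, a rough timing snippet (drop it inside a test method; the iteration count is arbitrary and it only measures the uncontended case) might look like this:
#import <os/lock.h>
#import <QuartzCore/QuartzCore.h>   // CACurrentMediaTime

static const int kIterations = 1000000;   // arbitrary iteration count

os_unfair_lock unfair = OS_UNFAIR_LOCK_INIT;
CFTimeInterval start = CACurrentMediaTime();
for (int i = 0; i < kIterations; i++) {
    os_unfair_lock_lock(&unfair);
    os_unfair_lock_unlock(&unfair);
}
NSLog(@"os_unfair_lock: %.3f ms", (CACurrentMediaTime() - start) * 1000);

NSLock *nsLock = [NSLock new];
start = CACurrentMediaTime();
for (int i = 0; i < kIterations; i++) {
    [nsLock lock];
    [nsLock unlock];
}
NSLog(@"NSLock: %.3f ms", (CACurrentMediaTime() - start) * 1000);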
Solution 2: a GCD serial queue
As shown above, asynchronous tasks on a serial queue execute one at a time, so we can create a dedicated serial queue and funnel all ticket sales through it.
- (void)viewDidLoad {
[super viewDidLoad];
self.ticketQueue = dispatch_queue_create("ticketQueue", DISPATCH_QUEUE_SERIAL);
[self beginSaleTicket];
}
- (void)saleTickets {
tickets = tickets - 1;
NSLog(@"卖出1张票,还剩%d张票",tickets);
}
- (void)beginSaleTicket {
dispatch_async(self.ticketQueue, ^{
for (int i = 0; i < 30; i++) {
[self saleTickets];
}
});
dispatch_async(self.ticketQueue, ^{
for (int i = 0; i < 30; i++) {
[self saleTickets];
}
});
dispatch_async(self.ticketQueue, ^{
for (int i = 0; i < 40; i++) {
[self saleTickets];
}
});
}
Solution 3: the dispatch_semaphore_t semaphore
This relies on the classic P/V operations.
P operation (dispatch_semaphore_wait): if the semaphore's value is greater than 0, decrement it and proceed; otherwise the thread sleeps and waits.
V operation (dispatch_semaphore_signal): increment the semaphore's value, waking a waiting thread if there is one.
By creating the semaphore with an initial value of 1, we limit the number of threads inside the critical section to 1.
- (void)viewDidLoad {
[super viewDidLoad];
self.ticketQueue = dispatch_get_global_queue(0, 0);
self.semaphore = dispatch_semaphore_create(1);
[self beginSaleTicket];
}
- (void)saleTickets {
dispatch_semaphore_wait(self.semaphore, DISPATCH_TIME_FOREVER);
tickets = tickets - 1;
NSLog(@"卖出1张票,还剩%d张票",tickets);
dispatch_semaphore_signal(self.semaphore);
}
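The initial value is effectively the maximum concurrency, so the same pattern also works as a throttle. For example, a semaphore created with a value of 3 (the numbers and the loop are illustrative) lets at most three of the blocks run at once:
dispatch_semaphore_t slots = dispatch_semaphore_create(3);
dispatch_queue_t queue = dispatch_get_global_queue(0, 0);
for (int i = 0; i < 10; i++) {
    dispatch_async(queue, ^{
        dispatch_semaphore_wait(slots, DISPATCH_TIME_FOREVER);   // take a slot, or wait for one
        NSLog(@"task %d running", i);
        sleep(1);                                                // stands in for the real work
        dispatch_semaphore_signal(slots);                        // give the slot back
    });
}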
Read/write safety (many readers, one writer)
When a file can be both read and written, unsafe situations arise: several threads writing at once, or one thread reading while another writes, are both hazards.
So what we want from file access is:
- multiple threads may read the file at the same time
- while a write is in progress, no reads are allowed, and no two threads may write at the same time
Solution: dispatch_barrier_async
- (void)viewDidLoad {
[super viewDidLoad];
self.queue = dispatch_queue_create("rw_queue", DISPATCH_QUEUE_CONCURRENT);
for (int i = 0; i < 10; i++) {
dispatch_async(self.queue, ^{
[self read];
});
dispatch_async(self.queue, ^{
[self read];
});
dispatch_async(self.queue, ^{
[self read];
});
dispatch_barrier_async(self.queue, ^{
[self write];
});
}
}
- (void)read {
sleep(1);
NSLog(@"read");
}
- (void)write
{
sleep(1);
NSLog(@"write");
}
The output shows several read lines printed at the same time, but only ever one write at a time, and no reads happen while a write is in progress.
dispatch_barrier_async inserts a barrier into the concurrent queue: the barrier block waits until every block submitted before it has finished, then runs on its own, and only after it completes are the blocks submitted after it allowed to run.
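For reference, a reader-writer lock (pthread_rwlock_t) gives the same multi-read/single-write semantics without GCD; a minimal sketch, assuming an _rwlock ivar initialised with pthread_rwlock_init:
#import <pthread.h>

// Assume an ivar: pthread_rwlock_t _rwlock; initialised once with pthread_rwlock_init(&_rwlock, NULL);
- (void)read {
    pthread_rwlock_rdlock(&_rwlock);    // many readers may hold the read lock at the same time
    NSLog(@"read");
    pthread_rwlock_unlock(&_rwlock);
}
- (void)write {
    pthread_rwlock_wrlock(&_rwlock);    // the write lock is exclusive
    NSLog(@"write");
    pthread_rwlock_unlock(&_rwlock);
}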
Note
Think about it: what happens if the custom concurrent queue above is replaced with the global queue?
self.queue = dispatch_get_global_queue(0, 0);
2021-06-04 11:53:23.544718+0800 MultiThread[98352:14881477] read
2021-06-04 11:53:23.544718+0800 MultiThread[98352:14881479] write
2021-06-04 11:53:23.544733+0800 MultiThread[98352:14881480] read
2021-06-04 11:53:23.544743+0800 MultiThread[98352:14881485] read
2021-06-04 11:53:23.544718+0800 MultiThread[98352:14881476] read
2021-06-04 11:53:23.544718+0800 MultiThread[98352:14881482] read
2021-06-04 11:53:23.544766+0800 MultiThread[98352:14881483] read
2021-06-04 11:53:23.544780+0800 MultiThread[98352:14881490] read
2021-06-04 11:53:23.544780+0800 MultiThread[98352:14881489] write
2021-06-04 11:53:23.544830+0800 MultiThread[98352:14881491] read
2021-06-04 11:53:23.544841+0800 MultiThread[98352:14881492] read
2021-06-04 11:53:23.544882+0800 MultiThread[98352:14881493] write
2021-06-04 11:53:23.544905+0800 MultiThread[98352:14881494] read
2021-06-04 11:53:23.545049+0800 MultiThread[98352:14881496] read
2021-06-04 11:53:23.545061+0800 MultiThread[98352:14881495] read
2021-06-04 11:53:23.545064+0800 MultiThread[98352:14881498] read
2021-06-04 11:53:23.545077+0800 MultiThread[98352:14881499] read
2021-06-04 11:53:23.545146+0800 MultiThread[98352:14881501] write
2021-06-04 11:53:23.545083+0800 MultiThread[98352:14881497] write
This is only part of the output, but it is clear that even with the barrier in place everything still runs as plain asynchronous, concurrent work.
The point to remember is that a GCD barrier only has its barrier effect on concurrent queues you create yourself.
It does not work on the global queues, and that is a deliberate safety measure: if the shared global queue could be blocked by one extremely slow barrier task, it would cause all sorts of other problems.
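This barrier pattern is commonly packaged as a "multi-read single-write" accessor: reads go through dispatch_sync, writes through dispatch_barrier_async. A sketch under assumed names (rwQueue is a custom concurrent queue, _name is the protected ivar):
// Assume: self.rwQueue is a concurrent queue created with dispatch_queue_create,
// and @synthesize name = _name; since both accessors are overridden. Names are illustrative.
- (NSString *)name {
    __block NSString *value;
    dispatch_sync(self.rwQueue, ^{            // reads run concurrently with other reads
        value = self->_name;
    });
    return value;
}
- (void)setName:(NSString *)name {
    dispatch_barrier_async(self.rwQueue, ^{   // the write waits for in-flight reads, then runs alone
        self->_name = [name copy];
    });
}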
That's the end of a rather long article; thanks for reading. If you spot any mistakes, please point them out, and if it helped you, a like would be much appreciated.