
WebRTC Source Code Analysis - Threading Basics: MessageQueueManager

2019-11-15  ice_ly000

Preface

As its name suggests, the MessageQueueManager class (hereafter abbreviated as MQM) manages MessageQueue (abbreviated as MQ) objects. As analyzed in a previous article, an MQ calls MQ.DoInit() during construction, and that method adds the MQ to MQM's internal std::vector<MessageQueue*> member.
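As a quick recap, here is a simplified sketch of that registration and the matching unregistration, paraphrased from rtc_base/message_queue.cc (details omitted):

void MessageQueue::DoInit() {
  if (fInitialized_)
    return;
  fInitialized_ = true;
  // Register this queue with the manager's internal vector.
  MessageQueueManager::Add(this);
}

void MessageQueue::DoDestroy() {
  // ...
  // Unregister before the queue goes away.
  MessageQueueManager::Remove(this);
  // ...
}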
The MQM class is declared in rtc_base/message_queue.h and defined in rtc_base/message_queue.cc. Its declaration is shown below:

// MessageQueueManager does cleanup of message queues
class MessageQueueManager {
 public:
  static void Add(MessageQueue* message_queue);
  static void Remove(MessageQueue* message_queue);
  static void Clear(MessageHandler* handler);

  // TODO(nisse): Delete alias, as soon as downstream code is updated.
  static void ProcessAllMessageQueues() { ProcessAllMessageQueuesForTesting(); }

  // For testing purposes, for use with a simulated clock.
  // Ensures that all message queues have processed delayed messages
  // up until the current point in time.
  static void ProcessAllMessageQueuesForTesting();

 private:
  static MessageQueueManager* Instance();
  MessageQueueManager();
  ~MessageQueueManager();

  void AddInternal(MessageQueue* message_queue);
  void RemoveInternal(MessageQueue* message_queue);
  void ClearInternal(MessageHandler* handler);
  void ProcessAllMessageQueuesInternal();

  // This list contains all live MessageQueues.
  std::vector<MessageQueue*> message_queues_ RTC_GUARDED_BY(crit_);

  // Methods that don't modify the list of message queues may be called in a
  // re-entrant fashion. "processing_" keeps track of the depth of re-entrant
  // calls.
  CriticalSection crit_;
  size_t processing_ RTC_GUARDED_BY(crit_);
};

Construction of MessageQueueManager

MessageQueueManager is constructed the same way as ThreadManager: both are singletons, and neither construction is thread-safe by itself. A previous article analyzed why ThreadManager can nevertheless be constructed safely; the same reasoning applies to MessageQueueManager, and in addition the MessageQueueManager object is created before the first MessageQueue object.

MessageQueueManager* MessageQueueManager::Instance() {
  static MessageQueueManager* const instance = new MessageQueueManager;
  return instance;
}
MessageQueueManager::MessageQueueManager() : processing_(0) {}
MessageQueueManager::~MessageQueueManager() {}
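To illustrate the creation order mentioned above, here is a hedged, illustrative snippet (the scenario and function name are mine, not from the source): the first MessageQueue is normally created on the main thread before any worker threads exist, so the very first call chain MessageQueue constructor -> DoInit() -> MessageQueueManager::Add() -> Instance() builds the singleton before concurrent access is possible.

#include <memory>

#include "rtc_base/thread.h"

void SingletonCreationOrderExample() {
  // Creating the first rtc::Thread (which is a MessageQueue) triggers
  // DoInit() -> MessageQueueManager::Add() -> Instance(), so the MQM
  // singleton already exists before any other thread could race on it.
  std::unique_ptr<rtc::Thread> worker = rtc::Thread::Create();
  worker->Start();
}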

Adding and Removing a MessageQueue

MessageQueueManager provides the static functions Add and Remove for adding MQs to and removing MQs from the singleton manager, as shown in the source below:

void MessageQueueManager::Add(MessageQueue* message_queue) {
  return Instance()->AddInternal(message_queue);
}
void MessageQueueManager::AddInternal(MessageQueue* message_queue) {
  CritScope cs(&crit_);
  // Prevent changes while the list of message queues is processed.
  RTC_DCHECK_EQ(processing_, 0);
  message_queues_.push_back(message_queue);
}

void MessageQueueManager::Remove(MessageQueue* message_queue) {
  return Instance()->RemoveInternal(message_queue);
}
void MessageQueueManager::RemoveInternal(MessageQueue* message_queue) {
  {
    CritScope cs(&crit_);
    // Prevent changes while the list of message queues is processed.
    RTC_DCHECK_EQ(processing_, 0);
    std::vector<MessageQueue*>::iterator iter;
    iter = std::find(message_queues_.begin(), message_queues_.end(),
                     message_queue);
    if (iter != message_queues_.end()) {
      message_queues_.erase(iter);
    }
  }
}

Both Add and Remove are exposed as static methods; they call the corresponding private methods on the MQM singleton instance to add an MQ to or remove it from the vector. A few points are worth noting:
1) The member crit_ is an object of the critical-section class CriticalSection; it guarantees thread-safe access to MQM.message_queues_ and MQM.processing_ in a multithreaded environment. As the two functions above show, each one starts by creating CritScope cs(&crit_); the CritScope constructor calls Enter() on crit_ to enter the critical section, which amounts to locking, and when the function returns, the destructor of cs calls Leave() on crit_ to leave the critical section, which amounts to unlocking (a simplified sketch of CritScope follows this list).
2) The vector that stores the MQs is declared as std::vector<MessageQueue*> message_queues_ RTC_GUARDED_BY(crit_);.
Under clang the RTC_GUARDED_BY macro expands to __attribute__((guarded_by(crit_))), which tells the compiler to check at compile time that every code path accessing message_queues_ first acquires the lock crit_; if any path does not, a compile error or warning is produced. For other compilers the macro expands to nothing, meaning no compile-time check is performed (the macro is also sketched after this list). See Thread Safety Analysis for details.
3) The member processing_ is declared as size_t processing_ RTC_GUARDED_BY(crit_);. Add and Remove assert RTC_DCHECK_EQ(processing_, 0), so processing_ must be 0. When processing_ is non-zero, either MQM's Clear() or ProcessAllMessageQueues() is running, and during those operations it is not allowed to add or remove an MQ, i.e. anything that changes the vector, because both functions iterate over it. One might wonder: the lock already guarantees thread safety, so why must processing_ also be 0? Keep reading~~
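To make points 1) and 2) concrete, here is a simplified sketch of CritScope and the RTC_GUARDED_BY macro, condensed from rtc_base/critical_section.h and rtc_base/thread_annotations.h (the real code splits declaration and definition and carries a few more annotations):

// Simplified: the real CritScope is declared in critical_section.h and
// defined in critical_section.cc.
class RTC_SCOPED_LOCKABLE CritScope {
 public:
  explicit CritScope(const CriticalSection* cs) RTC_EXCLUSIVE_LOCK_FUNCTION(cs)
      : cs_(cs) {
    cs_->Enter();  // lock when the scope object is constructed
  }
  ~CritScope() RTC_UNLOCK_FUNCTION() {
    cs_->Leave();  // unlock when the scope object is destroyed (RAII)
  }

 private:
  const CriticalSection* const cs_;
};

// Simplified from rtc_base/thread_annotations.h: only clang's thread-safety
// analysis sees the attribute; other compilers get an empty expansion.
#if defined(__clang__) && (!defined(SWIG))
#define RTC_THREAD_ANNOTATION_ATTRIBUTE__(x) __attribute__((x))
#else
#define RTC_THREAD_ANNOTATION_ATTRIBUTE__(x)
#endif
#define RTC_GUARDED_BY(x) RTC_THREAD_ANNOTATION_ATTRIBUTE__(guarded_by(x))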

Clearing

Exactly like Add and Remove, Clear is exposed as a static method. Its job is to remove, from every MQ managed by MQM, the messages that match the MessageHandler* handler argument. Concretely, it iterates over the MQs in MQM and calls each MQ's own Clear() method; that method is fairly long and will not be expanded here, it will be covered in detail in the article on MessageQueue.

void MessageQueueManager::Clear(MessageHandler* handler) {
  return Instance()->ClearInternal(handler);
}
void MessageQueueManager::ClearInternal(MessageHandler* handler) {
  // Deleted objects may cause re-entrant calls to ClearInternal. This is
  // allowed as the list of message queues does not change while queues are
  // cleared.
  MarkProcessingCritScope cs(&crit_, &processing_);
  for (MessageQueue* queue : message_queues_) {
    queue->Clear(handler);
  }
}
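For context on when this gets called: as far as I can tell, the most common caller is MessageHandler's destructor, which purges every message that still targets the dying handler (paraphrased from rtc_base/message_handler.cc of this era):

// Paraphrased from rtc_base/message_handler.cc: destroying a handler clears
// all of its pending messages from every live MessageQueue.
MessageHandler::~MessageHandler() {
  MessageQueueManager::Clear(this);
}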

Another important point: this method does not use the CritScope cs(&crit_) described earlier for thread safety. Instead it uses a new class, MarkProcessingCritScope cs(&crit_, &processing_). What is special about it? Let's look at the source:

class RTC_SCOPED_LOCKABLE MarkProcessingCritScope {
 public:
  MarkProcessingCritScope(const CriticalSection* cs, size_t* processing)
      RTC_EXCLUSIVE_LOCK_FUNCTION(cs)
      : cs_(cs), processing_(processing) {
    cs_->Enter();
    *processing_ += 1;
  }

  ~MarkProcessingCritScope() RTC_UNLOCK_FUNCTION() {
    *processing_ -= 1;
    cs_->Leave();
  }

 private:
  const CriticalSection* const cs_;
  size_t* processing_;

  RTC_DISALLOW_COPY_AND_ASSIGN(MarkProcessingCritScope);
};

First, know that CriticalSection is reentrant: once a thread has locked it by calling cs_->Enter(), the same thread can call cs_->Enter() again before releasing the lock without blocking, which is why it is called a "reentrant lock". Each time the same thread takes the lock, processing_ is incremented by 1, recording the lock depth. As long as processing_ is non-zero, a Clear operation or one of the Process* methods described below is in progress. Neither of those changes the vector inside MQM, so they may re-enter repeatedly before the lock is released; Add and Remove, however, are not allowed, because they would change the vector. That is why Add and Remove both take the lock and also assert that processing_ is 0; if the assertion fires, the code has a bug.
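Here is a hedged sketch of the re-entrancy scenario that the comment in ClearInternal alludes to; the class names Inner and OwningData are illustrative, not from WebRTC:

#include <memory>

#include "rtc_base/message_handler.h"
#include "rtc_base/message_queue.h"

// Illustrative handler; does nothing interesting.
class Inner : public rtc::MessageHandler {
 public:
  void OnMessage(rtc::Message* msg) override {}
};

// Illustrative MessageData that owns another handler. While ClearInternal is
// deleting messages for one handler, destroying this data destroys Inner,
// whose ~MessageHandler() calls MessageQueueManager::Clear(inner) again on
// the same thread, i.e. a re-entrant ClearInternal while crit_ is still held.
class OwningData : public rtc::MessageData {
 public:
  std::unique_ptr<Inner> inner = std::make_unique<Inner>();
};

// Because CriticalSection is reentrant and processing_ > 0 for both nested
// calls, the inner Clear is safe; an Add or Remove at this moment, however,
// would trip RTC_DCHECK_EQ(processing_, 0).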

Processing the Messages in All MQs

I have not yet fully understood this method; here is my own analysis first.

static void ProcessAllMessageQueues() { ProcessAllMessageQueuesForTesting(); }

void MessageQueueManager::ProcessAllMessageQueuesForTesting() {
  return Instance()->ProcessAllMessageQueuesInternal();
}

void MessageQueueManager::ProcessAllMessageQueuesInternal() {
  // This works by posting a delayed message at the current time and waiting
  // for it to be dispatched on all queues, which will ensure that all messages
  // that came before it were also dispatched.
  volatile int queues_not_done = 0;

  // This class is used so that whether the posted message is processed, or the
  // message queue is simply cleared, queues_not_done gets decremented.
  class ScopedIncrement : public MessageData {
   public:
    ScopedIncrement(volatile int* value) : value_(value) {
      AtomicOps::Increment(value_);
    }
    ~ScopedIncrement() override { AtomicOps::Decrement(value_); }

   private:
    volatile int* value_;
  };

  {
    MarkProcessingCritScope cs(&crit_, &processing_);
    for (MessageQueue* queue : message_queues_) {
      if (!queue->IsProcessingMessagesForTesting()) {
        // If the queue is not processing messages, it can
        // be ignored. If we tried to post a message to it, it would be dropped
        // or ignored.
        continue;
      }
      queue->PostDelayed(RTC_FROM_HERE, 0, nullptr, MQID_DISPOSE,
                         new ScopedIncrement(&queues_not_done));
    }
  }
  rtc::Thread* current = rtc::Thread::Current();
  // Note: One of the message queues may have been on this thread, which is
  // why we can't synchronously wait for queues_not_done to go to 0; we need
  // to process messages as well.
  while (AtomicOps::AcquireLoad(&queues_not_done) > 0) {
    if (current) {
      current->ProcessMessages(0);
    }
  }
}
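The mechanism, going by the comments in the source, is: post a delayed ScopedIncrement message to every live queue, then spin (while still pumping the current thread's own queue) until every ScopedIncrement has been destroyed, whether it was dispatched or simply cleared; at that point every message posted before it must also have been dispatched. Below is a test-style usage sketch, with illustrative names (CountingHandler, Example) that are not from WebRTC:

#include <atomic>
#include <memory>

#include "rtc_base/location.h"
#include "rtc_base/message_handler.h"
#include "rtc_base/message_queue.h"
#include "rtc_base/thread.h"

// Illustrative handler that counts how many messages it has received.
class CountingHandler : public rtc::MessageHandler {
 public:
  void OnMessage(rtc::Message* msg) override { ++count_; }
  int count() const { return count_.load(); }

 private:
  std::atomic<int> count_{0};
};

void Example() {
  std::unique_ptr<rtc::Thread> worker = rtc::Thread::Create();
  worker->Start();

  CountingHandler handler;
  for (int i = 0; i < 3; ++i)
    worker->Post(RTC_FROM_HERE, &handler, /*id=*/0);

  // Posts a delayed MQID_DISPOSE message to every live queue and keeps pumping
  // the current thread's queue until all of them have been dispatched, so the
  // three messages above are guaranteed to have been handled afterwards.
  rtc::MessageQueueManager::ProcessAllMessageQueuesForTesting();
  // handler.count() == 3 at this point.
}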

Summary
