Java Gleanings: 003 - ConcurrentHashMap Source Code
A Brief Look at the JDK 1.7 ConcurrentHashMap Implementation
Using HashMap from multiple threads can lead to infinite loops and 100% CPU usage, so HashMap must not be shared across threads. Another collection class, Hashtable, is thread-safe, but it achieves this with coarse-grained synchronized locking, so its performance under contention is poor. For multi-threaded (concurrent) scenarios, ConcurrentHashMap is the recommended choice.
Here is a class diagram of ConcurrentHashMap:

As the diagram shows, this class also implements the Map interface, so it can usually replace HashMap directly without changing business code.
Hashtable performs poorly because every thread competes for the same lock (Hashtable crudely locks the entire storage structure). ConcurrentHashMap improves on this point: it lowers contention by locking per segment. Its underlying storage is no longer a single hash table (array) as in HashMap; instead, a Segment array provides the sharding. Segment extends ReentrantLock, so each segment is itself a reentrant lock, and each Segment is essentially a small HashMap: it stores its data in a hash table whose buckets are linked lists, and its internal logic is largely the same as HashMap's, except that mutating methods such as put and remove take the segment's lock. The benefit of segmented locking is that two threads operating on different segments do not affect each other and never have to wait for each other, which improves performance.
The Segment array itself is not locked. When an element is added to a ConcurrentHashMap, the hash code computed from the key is used to locate the target Segment; since this step involves no modification, it needs no lock. Operating on the data inside a particular Segment, however, does require taking that segment's lock. The JDK 1.7 ConcurrentHashMap source is walked through below.
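Before digging into the source, here is a minimal usage sketch of the drop-in replacement mentioned above (the class name, thread count, and loop bound are arbitrary choices for illustration): several threads write to one shared ConcurrentHashMap with no external synchronization.

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ConcurrentHashMapDemo {
    public static void main(String[] args) throws InterruptedException {
        // Same Map interface as HashMap, but safe to share across threads.
        final Map<String, Integer> map = new ConcurrentHashMap<String, Integer>();
        ExecutorService pool = Executors.newFixedThreadPool(4);
        for (int i = 0; i < 100; i++) {
            final int n = i;
            pool.execute(new Runnable() {
                public void run() {
                    map.put("key-" + n, n); // no external locking needed
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
        System.out.println(map.size()); // 100
    }
}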
Reading the JDK 1.7 ConcurrentHashMap Source
The underlying implementation involves several inner classes, briefly outlined here:
- The HashEntry class
static final class HashEntry<K,V> {
    final int hash;
    final K key;
    volatile V value;
    volatile HashEntry<K,V> next;
    // ... ...
}
- The Segment class (method bodies and comments omitted here)
static final class Segment<K,V> extends ReentrantLock implements Serializable {
    static final int MAX_SCAN_RETRIES =
        Runtime.getRuntime().availableProcessors() > 1 ? 64 : 1;
    transient volatile HashEntry<K,V>[] table;
    transient int count;
    transient int modCount;
    transient int threshold;
    final float loadFactor;
    Segment(float lf, int threshold, HashEntry<K,V>[] tab) {}
    final V put(K key, int hash, V value, boolean onlyIfAbsent) {}
    @SuppressWarnings("unchecked")
    private void rehash(HashEntry<K,V> node) {}
    private HashEntry<K,V> scanAndLockForPut(K key, int hash, V value) {}
    private void scanAndLock(Object key, int hash) {}
    final V remove(Object key, int hash, Object value) {}
    final boolean replace(K key, int hash, V oldValue, V newValue) {}
    final V replace(K key, int hash, V value) {}
    final void clear() {}
}
Sharding in ConcurrentHashMap is implemented by the Segment array; each Segment's internal storage is a hash table (array), and each bucket is a linked list of HashEntry nodes (the same as in HashMap).
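To make the two-level addressing concrete, here is a small illustrative sketch (not the JDK code itself; segmentShift, segmentMask, and the table length stand for the power-of-two derived values computed by the constructor shown below): the high bits of the spread hash select the segment, and the low bits select the bucket within that segment.

class IndexingSketch {
    // which Segment: high bits of the hash
    static int segmentIndex(int hash, int segmentShift, int segmentMask) {
        return (hash >>> segmentShift) & segmentMask;
    }

    // which bucket inside that Segment's table: low bits of the hash
    // (tableLength is always a power of two, so this is hash mod tableLength)
    static int bucketIndex(int hash, int tableLength) {
        return (tableLength - 1) & hash;
    }
}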
The main methods of ConcurrentHashMap are walked through below.
Constructor
public ConcurrentHashMap(int initialCapacity,
                         float loadFactor, int concurrencyLevel) {
    if (!(loadFactor > 0) || initialCapacity < 0 || concurrencyLevel <= 0)
        throw new IllegalArgumentException();
    if (concurrencyLevel > MAX_SEGMENTS)
        concurrencyLevel = MAX_SEGMENTS;
    // Find power-of-two sizes best matching arguments
    int sshift = 0;
    int ssize = 1;
    // round ssize up to the smallest power of two >= concurrencyLevel
    while (ssize < concurrencyLevel) {
        ++sshift;
        ssize <<= 1;
    }
    this.segmentShift = 32 - sshift;
    this.segmentMask = ssize - 1;
    if (initialCapacity > MAXIMUM_CAPACITY)
        initialCapacity = MAXIMUM_CAPACITY;
    int c = initialCapacity / ssize;
    if (c * ssize < initialCapacity)
        ++c;
    // per-segment table capacity (smallest power of two >= c)
    int cap = MIN_SEGMENT_TABLE_CAPACITY;
    while (cap < c)
        cap <<= 1;
    // create segments and segments[0]
    Segment<K,V> s0 =
        new Segment<K,V>(loadFactor, (int)(cap * loadFactor),
                         (HashEntry<K,V>[])new HashEntry[cap]);
    Segment<K,V>[] ss = (Segment<K,V>[])new Segment[ssize];
    UNSAFE.putOrderedObject(ss, SBASE, s0); // ordered write of segments[0]
    this.segments = ss;
}
Unlike HashMap, this constructor takes an extra concurrencyLevel parameter, which controls the number of segments. The other constructors all delegate to this one, so they are not repeated here; the no-arg constructor uses the defaults initialCapacity=16, loadFactor=0.75f, and concurrencyLevel=16.
The constructor initializes the number of segments, the capacity of each segment's table, the Segment array, and the first Segment (segments[0]).
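As a worked example with the default arguments (16, 0.75f, 16): ssize rounds up to 16, so sshift=4, segmentShift=28, and segmentMask=15; c = 16 / 16 = 1, and cap starts at MIN_SEGMENT_TABLE_CAPACITY (2), which already covers c, so each segment begins with a table of length 2. The rounding loop used for both ssize and cap is simply "round up to the next power of two"; here is a standalone sketch of that loop (illustrative only, not part of the JDK source):

class PowerOfTwoSketch {
    // round n up to the smallest power of two >= n, the same loop the
    // constructor uses for both ssize and cap
    static int roundUpToPowerOfTwo(int n) {
        int p = 1;
        while (p < n)
            p <<= 1;
        return p;   // e.g. roundUpToPowerOfTwo(16) == 16, roundUpToPowerOfTwo(17) == 32
    }
}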
The isEmpty and size methods
public boolean isEmpty() {
    long sum = 0L;
    final Segment<K,V>[] segments = this.segments;
    for (int j = 0; j < segments.length; ++j) {
        Segment<K,V> seg = segmentAt(segments, j);
        if (seg != null) {
            if (seg.count != 0)
                return false;
            sum += seg.modCount;
        }
    }
    if (sum != 0L) { // recheck unless no modifications
        for (int j = 0; j < segments.length; ++j) {
            Segment<K,V> seg = segmentAt(segments, j);
            if (seg != null) {
                if (seg.count != 0)
                    return false;
                sum -= seg.modCount;
            }
        }
        if (sum != 0L)
            return false;
    }
    return true;
}

public int size() {
    // Try a few times to get accurate count. On failure due to
    // continuous async changes in table, resort to locking.
    final Segment<K,V>[] segments = this.segments;
    int size;
    boolean overflow; // true if size overflows 32 bits
    long sum;         // sum of modCounts
    long last = 0L;   // previous sum
    int retries = -1; // first iteration isn't retry
    try {
        for (;;) {
            if (retries++ == RETRIES_BEFORE_LOCK) {
                for (int j = 0; j < segments.length; ++j)
                    ensureSegment(j).lock(); // force creation
            }
            sum = 0L;
            size = 0;
            overflow = false;
            for (int j = 0; j < segments.length; ++j) {
                Segment<K,V> seg = segmentAt(segments, j);
                if (seg != null) {
                    sum += seg.modCount;
                    int c = seg.count;
                    if (c < 0 || (size += c) < 0)
                        overflow = true;
                }
            }
            if (sum == last)
                break;
            last = sum;
        }
    } finally {
        if (retries > RETRIES_BEFORE_LOCK) {
            for (int j = 0; j < segments.length; ++j)
                segmentAt(segments, j).unlock();
        }
    }
    return overflow ? Integer.MAX_VALUE : size;
}
Both methods follow the same idea: iterate over all the segments and accumulate per-segment state. Because segments may be modified concurrently while the method runs (elements added or removed), both rely on the per-segment modCount to detect changes: isEmpty makes lock-free passes and cross-checks the modCount sums, while size retries a few lock-free passes and, if the sums keep changing after RETRIES_BEFORE_LOCK attempts, locks every segment to get an exact count before unlocking them all. As a result these methods are more expensive than HashMap's (though the use cases differ, so the comparison is not really fair).
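One practical consequence worth noting: under concurrent writes, size() (like isEmpty()) reports a point-in-time snapshot, not a figure to do exact bookkeeping with. A minimal hedged sketch (the class name and loop bound are arbitrary):

import java.util.concurrent.ConcurrentHashMap;

public class SizeSnapshotDemo {
    public static void main(String[] args) throws InterruptedException {
        final ConcurrentHashMap<Integer, Integer> map = new ConcurrentHashMap<Integer, Integer>();
        Thread writer = new Thread(new Runnable() {
            public void run() {
                for (int i = 0; i < 100_000; i++)
                    map.put(i, i);
            }
        });
        writer.start();
        // size() observed while the writer is still running is only a snapshot
        System.out.println("in-flight size = " + map.size());
        writer.join();
        System.out.println("final size = " + map.size()); // 100000
    }
}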
The put and putIfAbsent methods
public V put(K key, V value) {
    Segment<K,V> s;
    if (value == null)
        throw new NullPointerException();
    int hash = hash(key);
    int j = (hash >>> segmentShift) & segmentMask;
    if ((s = (Segment<K,V>)UNSAFE.getObject          // nonvolatile; recheck
         (segments, (j << SSHIFT) + SBASE)) == null) //  in ensureSegment
        s = ensureSegment(j);
    return s.put(key, hash, value, false);
}

public V putIfAbsent(K key, V value) {
    Segment<K,V> s;
    if (value == null)
        throw new NullPointerException();
    int hash = hash(key);
    int j = (hash >>> segmentShift) & segmentMask;
    if ((s = (Segment<K,V>)UNSAFE.getObject
         (segments, (j << SSHIFT) + SBASE)) == null)
        s = ensureSegment(j);
    return s.put(key, hash, value, true);
}

static final class Segment<K,V> extends ReentrantLock implements Serializable {
    final V put(K key, int hash, V value, boolean onlyIfAbsent) {
        HashEntry<K,V> node = tryLock() ? null :
            scanAndLockForPut(key, hash, value);
        V oldValue;
        try {
            HashEntry<K,V>[] tab = table;
            int index = (tab.length - 1) & hash;
            HashEntry<K,V> first = entryAt(tab, index);
            for (HashEntry<K,V> e = first;;) {
                if (e != null) {
                    K k;
                    if ((k = e.key) == key ||
                        (e.hash == hash && key.equals(k))) {
                        oldValue = e.value;
                        if (!onlyIfAbsent) {
                            e.value = value;
                            ++modCount;
                        }
                        break;
                    }
                    e = e.next;
                }
                else {
                    if (node != null)
                        node.setNext(first);
                    else
                        node = new HashEntry<K,V>(hash, key, value, first);
                    int c = count + 1;
                    if (c > threshold && tab.length < MAXIMUM_CAPACITY)
                        rehash(node);
                    else
                        setEntryAt(tab, index, node);
                    ++modCount;
                    count = c;
                    oldValue = null;
                    break;
                }
            }
        } finally {
            unlock();
        }
        return oldValue;
    }
}
The put logic runs fairly deep, but with the HashMap source as background it is not hard to follow. ConcurrentHashMap.put itself only uses the key's hash to locate the target Segment, which requires no lock; the actual insertion is done by Segment.put.
Compared with HashMap's put, the main addition is the locking (this is, after all, designed for multi-threaded use): the segment lock is acquired via tryLock, falling back to scanAndLockForPut, and released in the finally block.
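A small usage contrast between put and putIfAbsent (return values follow the ConcurrentMap contract; the class name is just for illustration):

import java.util.concurrent.ConcurrentHashMap;

public class PutDemo {
    public static void main(String[] args) {
        ConcurrentHashMap<String, String> map = new ConcurrentHashMap<String, String>();
        System.out.println(map.put("k", "v1"));         // null (no previous mapping)
        System.out.println(map.put("k", "v2"));         // "v1" (value overwritten)
        System.out.println(map.putIfAbsent("k", "v3")); // "v2" (existing value kept)
        System.out.println(map.get("k"));               // "v2"
    }
}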
The containsKey, containsValue, and contains methods
public boolean containsKey(Object key) {
    Segment<K,V> s; // same as get() except no need for volatile value read
    HashEntry<K,V>[] tab;
    int h = hash(key);
    long u = (((h >>> segmentShift) & segmentMask) << SSHIFT) + SBASE;
    if ((s = (Segment<K,V>)UNSAFE.getObjectVolatile(segments, u)) != null &&
        (tab = s.table) != null) {
        for (HashEntry<K,V> e = (HashEntry<K,V>) UNSAFE.getObjectVolatile
                 (tab, ((long)(((tab.length - 1) & h)) << TSHIFT) + TBASE);
             e != null; e = e.next) {
            K k;
            if ((k = e.key) == key || (e.hash == h && key.equals(k)))
                return true;
        }
    }
    return false;
}
This is a straightforward lookup; unlike size, it takes no lock (and there is really no need for one). Note, however, that a containsKey check followed by a put is not atomic; "add only if absent" should be expressed with putIfAbsent instead.
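A hedged sketch of why containsKey-then-put is racy under concurrency, and the putIfAbsent alternative (class and method names here are illustrative):

import java.util.concurrent.ConcurrentHashMap;

public class CheckThenActDemo {
    static final ConcurrentHashMap<String, Integer> map = new ConcurrentHashMap<String, Integer>();

    // Not atomic: another thread may insert the key between the check and the put,
    // and its value would then be silently overwritten.
    static void addIfAbsentRacy(String key, int value) {
        if (!map.containsKey(key))
            map.put(key, value);
    }

    // Atomic: the segment lock taken inside Segment.put guarantees only the first writer wins.
    static void addIfAbsentSafe(String key, int value) {
        map.putIfAbsent(key, value);
    }
}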
The get method
public V get(Object key) {
    Segment<K,V> s; // manually integrate access methods to reduce overhead
    HashEntry<K,V>[] tab;
    int h = hash(key);
    long u = (((h >>> segmentShift) & segmentMask) << SSHIFT) + SBASE;
    if ((s = (Segment<K,V>)UNSAFE.getObjectVolatile(segments, u)) != null &&
        (tab = s.table) != null) {
        for (HashEntry<K,V> e = (HashEntry<K,V>) UNSAFE.getObjectVolatile
                 (tab, ((long)(((tab.length - 1) & h)) << TSHIFT) + TBASE);
             e != null; e = e.next) {
            K k;
            if ((k = e.key) == key || (e.hash == h && key.equals(k)))
                return e.value;
        }
    }
    return null;
}
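The read path takes no lock either: get locates the segment and the bucket head with UNSAFE.getObjectVolatile (volatile reads) and then walks the bucket's linked list; the volatile value and next fields of HashEntry make this safe to run concurrently with writers. Since put rejects null values, a null return unambiguously means the key is absent. A trivial usage sketch:

import java.util.concurrent.ConcurrentHashMap;

public class GetDemo {
    public static void main(String[] args) {
        ConcurrentHashMap<String, Integer> map = new ConcurrentHashMap<String, Integer>();
        map.put("a", 1);
        System.out.println(map.get("a")); // 1
        System.out.println(map.get("b")); // null -> key absent (null values are never stored)
    }
}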
The remove method
public V remove(Object key) {
    int hash = hash(key);
    Segment<K,V> s = segmentForHash(hash);
    return s == null ? null : s.remove(key, hash, null);
}

static final class Segment<K,V> extends ReentrantLock implements Serializable {
    final V remove(Object key, int hash, Object value) {
        if (!tryLock())
            scanAndLock(key, hash);
        V oldValue = null;
        try {
            HashEntry<K,V>[] tab = table;
            int index = (tab.length - 1) & hash;
            HashEntry<K,V> e = entryAt(tab, index);
            HashEntry<K,V> pred = null;
            while (e != null) {
                K k;
                HashEntry<K,V> next = e.next;
                if ((k = e.key) == key ||
                    (e.hash == hash && key.equals(k))) {
                    V v = e.value;
                    if (value == null || value == v || value.equals(v)) {
                        if (pred == null)
                            setEntryAt(tab, index, next);
                        else
                            pred.setNext(next);
                        ++modCount;
                        --count;
                        oldValue = v;
                    }
                    break;
                }
                pred = e;
                e = next;
            }
        } finally {
            unlock();
        }
        return oldValue;
    }
}
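As with put, the public remove only locates the segment; the real work happens in Segment.remove under the segment lock: it walks the bucket's list and, on a match, either rewrites the bucket head (when the matched node is first) or links the predecessor past the removed node. The value parameter backs the two-argument remove(key, value): when it is non-null, the entry is removed only if the stored value matches. A small usage sketch of both overloads (class name is illustrative):

import java.util.concurrent.ConcurrentHashMap;

public class RemoveDemo {
    public static void main(String[] args) {
        ConcurrentHashMap<String, String> map = new ConcurrentHashMap<String, String>();
        map.put("k", "v1");
        System.out.println(map.remove("k", "v2")); // false: value does not match, entry kept
        System.out.println(map.remove("k"));       // "v1": unconditional removal
        System.out.println(map.containsKey("k"));  // false
    }
}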
Closing remarks
I'm cutting this one a bit short; I've been buried in source code every day lately and my head is spinning, so this post ends here (somewhat hastily). The main goal was to understand the implementation principles; I'll fill in the details later.