03_HashMap Source Code Analysis
1. Basic Principles
- HashMap is built on an array of buckets plus linked lists; when a hash collision occurs, the colliding node is appended to the tail of that bucket's list (a minimal sketch follows right after this list)
- Since JDK 8, collisions are resolved with a linked list plus a red-black tree to improve performance: a plain linked list has O(n) lookup time, while a red-black tree brings it down to O(log n)
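To make the bucket-array-plus-chaining idea concrete, here is a minimal hand-rolled sketch. It is not the JDK implementation; the class TinyChainedMap, its fixed 16-slot table, and all names in it are invented purely for illustration:

```java
// A fixed-size "array of buckets + chained nodes" sketch. Null keys, resizing
// and treeification are deliberately left out; the real HashMap handles all three.
public class TinyChainedMap<K, V> {
    static class Node<K, V> {
        final K key;
        V value;
        Node<K, V> next;                  // next node in the same bucket's chain
        Node(K key, V value) { this.key = key; this.value = value; }
    }

    @SuppressWarnings("unchecked")
    private final Node<K, V>[] table = (Node<K, V>[]) new Node[16];

    public void put(K key, V value) {
        int index = (table.length - 1) & key.hashCode();   // power-of-two masking
        Node<K, V> last = null;
        for (Node<K, V> n = table[index]; n != null; n = n.next) {
            if (n.key.equals(key)) { n.value = value; return; } // key exists: overwrite
            last = n;
        }
        Node<K, V> fresh = new Node<>(key, value);
        if (last == null) table[index] = fresh;   // empty bucket: place directly
        else last.next = fresh;                   // collision: append to the chain's tail
    }

    public V get(K key) {
        int index = (table.length - 1) & key.hashCode();
        for (Node<K, V> n = table[index]; n != null; n = n.next) {
            if (n.key.equals(key)) return n.value;   // O(chain length) in the worst case
        }
        return null;
    }

    public static void main(String[] args) {
        TinyChainedMap<Integer, String> m = new TinyChainedMap<>();
        m.put(1, "a");
        m.put(17, "b");                                    // 1 and 17 share bucket 1, so they chain
        System.out.println(m.get(1) + " " + m.get(17));    // prints: a b
    }
}
```

The real HashMap additionally spreads the hash bits, resizes the table, and converts long chains into red-black trees, which is what the rest of these notes walk through.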
2. A Brief Look at Red-Black Trees
- A red-black tree is a binary search tree: smaller keys go to the left, larger keys to the right, and this rule is what allows a value to be located quickly
- A plain binary search tree can end up lopsided, with essentially only one long branch; once it is that unbalanced, lookups degrade to O(n), i.e., a linear scan
- A red-black tree has red and black nodes, with constraints that keep the tree balanced so it never degenerates like that
- If an insertion breaks the red-black rules or the balance, the tree rebalances itself automatically through recoloring (red <-> black) and rotations (left rotation, right rotation)
- Fully mastering red-black trees takes real time and effort; since the goal here is HashMap, the focus stays on the source code (a plain BST lookup sketch follows below for reference)
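For reference, here is a minimal plain binary-search-tree lookup and insert, assuming int keys; it only illustrates the left-small/right-big rule and is not a red-black tree (no coloring, no rotations), so inserting already-sorted keys would produce exactly the lopsided O(n) shape described above:

```java
// A plain BST sketch: lookup follows "smaller -> left, larger -> right".
public class BstSketch {
    static class TreeNode {
        int key;
        TreeNode left, right;
        TreeNode(int key) { this.key = key; }
    }

    static boolean contains(TreeNode root, int key) {
        TreeNode cur = root;
        while (cur != null) {
            if (key < cur.key) cur = cur.left;        // smaller: go left
            else if (key > cur.key) cur = cur.right;  // larger: go right
            else return true;                         // found
        }
        return false;
    }

    static TreeNode insert(TreeNode root, int key) {
        if (root == null) return new TreeNode(key);
        if (key < root.key) root.left = insert(root.left, key);
        else if (key > root.key) root.right = insert(root.right, key);
        return root;
    }

    public static void main(String[] args) {
        TreeNode root = null;
        for (int k : new int[]{8, 3, 10, 1, 6}) root = insert(root, k);
        System.out.println(contains(root, 6));  // true
        System.out.println(contains(root, 7));  // false
    }
}
```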
3. Core Fields
```java
/**
* Default size of HashMap's internal array: 16
* The default initial capacity - MUST be a power of two.
*/
static final int DEFAULT_INITIAL_CAPACITY = 1 << 4; // aka 16
/**
* The maximum capacity, used if a higher value is implicitly specified
* by either of the constructors with arguments.
* MUST be a power of two <= 1<<30.
*/
static final int MAXIMUM_CAPACITY = 1 << 30;
/**
* Default load factor, 0.75f: once the number of entries exceeds 16 * 0.75 = 12, the array is resized
* This parameter is normally left at its default rather than tuned (see the arithmetic sketch right after this code block)
* The load factor used when none specified in constructor.
*/
static final float DEFAULT_LOAD_FACTOR = 0.75f;
/**
* The bin count threshold for using a tree rather than list for a
* bin. Bins are converted to trees when adding an element to a
* bin with at least this many nodes. The value must be greater
* than 2 and should be at least 8 to mesh with assumptions in
* tree removal about conversion back to plain bins upon
* shrinkage.
*/
static final int TREEIFY_THRESHOLD = 8;
/**
* The bin count threshold for untreeifying a (split) bin during a
* resize operation. Should be less than TREEIFY_THRESHOLD, and at
* most 6 to mesh with shrinkage detection under removal.
*/
static final int UNTREEIFY_THRESHOLD = 6;
/**
* The smallest table capacity for which bins may be treeified.
* (Otherwise the table is resized if too many nodes in a bin.)
* Should be at least 4 * TREEIFY_THRESHOLD to avoid conflicts
* between resizing and treeification thresholds.
*/
static final int MIN_TREEIFY_CAPACITY = 64;
/**
* A Node represents one key-value entry in the table: the key's hash, the key, the value, and the next pointer of the bucket's linked list
* Basic hash bin node, used for most entries. (See below for
* TreeNode subclass, and in LinkedHashMap for its Entry subclass.)
*/
static class Node<K,V> implements Map.Entry<K,V> {
final int hash;
final K key;
V value;
Node<K,V> next;
Node(int hash, K key, V value, Node<K,V> next) {
this.hash = hash;
this.key = key;
this.value = value;
this.next = next;
}
// getKey/getValue/setValue, hashCode, equals and toString are omitted here
}
/**
* The underlying bucket array of the map
* The table, initialized on first use, and resized as
* necessary. When allocated, length is always a power of two.
* (We also tolerate length zero in some operations to allow
* bootstrapping mechanics that are currently not needed.)
*/
transient Node<K,V>[] table;
/**
* Holds cached entrySet(). Note that AbstractMap fields are used
* for keySet() and values().
*/
transient Set<Map.Entry<K,V>> entrySet;
/**
* The number of key-value mappings contained in this map.
*/
transient int size;
```
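As a quick sanity check of how DEFAULT_INITIAL_CAPACITY and DEFAULT_LOAD_FACTOR interact, the following standalone arithmetic sketch (not JDK code; the class name is made up) prints the resize threshold for the default capacity and the next few doublings:

```java
// threshold = capacity * loadFactor, and each resize doubles the capacity.
public class ThresholdDemo {
    public static void main(String[] args) {
        int capacity = 16;               // DEFAULT_INITIAL_CAPACITY
        float loadFactor = 0.75f;        // DEFAULT_LOAD_FACTOR
        for (int i = 0; i < 4; i++) {
            int threshold = (int) (capacity * loadFactor);
            System.out.printf("capacity=%d -> resize once size exceeds %d%n",
                    capacity, threshold);
            capacity <<= 1;              // a resize doubles the array size
        }
        // prints: 16 -> 12, 32 -> 24, 64 -> 48, 128 -> 96
    }
}
```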
4. The Hash Algorithm HashMap Uses to Reduce Collisions
```java
/**
* Puts the key-value pair into the map; if the key already exists, the old value is replaced
* Associates the specified value with the specified key in this map.
* If the map previously contained a mapping for the key, the old
* value is replaced.
*
* @param key key with which the specified value is to be associated
* @param value value to be associated with the specified key
* @return the previous value associated with <tt>key</tt>, or
* <tt>null</tt> if there was no mapping for <tt>key</tt>.
* (A <tt>null</tt> return can also indicate that the map
* previously associated <tt>null</tt> with <tt>key</tt>.)
*/
public V put(K key, V value) {
return putVal(hash(key), key, value, false, true);
}
/**
* Computes key.hashCode() and spreads (XORs) higher bits of hash
* to lower. Because the table uses power-of-two masking, sets of
* hashes that vary only in bits above the current mask will
* always collide. (Among known examples are sets of Float keys
* holding consecutive whole numbers in small tables.) So we
* apply a transform that spreads the impact of higher bits
* downward. There is a tradeoff between speed, utility, and
* quality of bit-spreading. Because many common sets of hashes
* are already reasonably distributed (so don't benefit from
* spreading), and because we use trees to handle large sets of
* collisions in bins, we just XOR some shifted bits in the
* cheapest possible way to reduce systematic lossage, as well as
* to incorporate impact of the highest bits that would otherwise
* never be used in index calculations because of table bounds.
*/
static final int hash(Object key) {
int h;
return (key == null) ? 0 : (h = key.hashCode()) ^ (h >>> 16);
}
```
- For the hash computation (h = key.hashCode()) ^ (h >>> 16), h is an int variable
- Suppose hashCode() returns 1111 1111 1111 1111 1111 1010 0111 1100; h >>> 16 shifts that value 16 bits to the right (unsigned)
After the shift: 0000 0000 0000 0000 1111 1111 1111 1111
The shifted value is then XORed with the original h:
 1111 1111 1111 1111 1111 1010 0111 1100
^0000 0000 0000 0000 1111 1111 1111 1111
=1111 1111 1111 1111 0000 0101 1000 0011
This computation XORs the high 16 bits of h with its low 16 bits, folding the characteristics of both halves into the result. A hash computed this way lowers the probability of collisions; a small runnable illustration follows below.
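The following standalone sketch (not part of the JDK source; the class name and the sample hashCode values 0x10000, 0x20000, 0x30000 are chosen purely for illustration) reproduces the spreading step and shows its effect: these values differ only in their high 16 bits, so without the XOR they would all collide in bucket 0 of a 16-slot table, while with the XOR they land in different buckets:

```java
public class HashSpreadDemo {
    // same spreading step as HashMap.hash() for a non-null key's hashCode
    static int spread(int h) {
        return h ^ (h >>> 16);
    }

    public static void main(String[] args) {
        int n = 16;                                  // default table size
        int[] rawHashes = {0x10000, 0x20000, 0x30000}; // differ only in high bits
        for (int h : rawHashes) {
            int withoutSpread = h & (n - 1);         // always 0: guaranteed collision
            int withSpread = spread(h) & (n - 1);    // 1, 2, 3: collisions avoided
            System.out.printf("h=0x%08X  plain index=%d  spread index=%d%n",
                    h, withoutSpread, withSpread);
        }
    }
}
```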
5. The put Operation and the Hash Addressing Algorithm
- A handful of parameters in this part of the source are central; getting them straight is the key to reading the code
- As noted above, HashMap is built on an array plus linked lists: collisions are resolved by chaining nodes in a list, but a linked-list get is O(n), whereas direct table[i] indexing is O(1)
- When many keys collide into one bucket, get() performance degrades sharply, which leads to real problems
- JDK 1.8 optimized this: once a bucket's list grows beyond 8 nodes (and the table has reached MIN_TREEIFY_CAPACITY), the list is converted into a red-black tree; even searching a large red-black tree is O(log n), a big improvement over the list's O(n)
- The new array allocated on resize is twice the size of the old one
- After a resize, each node of a bucket's list either stays at its original index in the new array or moves to index + oldCap (a standalone sketch of this split follows right after this list)
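Before diving into putVal and resize, here is a standalone arithmetic sketch (not JDK code; the sample hash values are made up) of that split: for a power-of-two capacity, the single bit tested by (hash & oldCap) decides whether a node keeps its old index or moves to oldIndex + oldCap:

```java
public class ResizeSplitDemo {
    public static void main(String[] args) {
        int oldCap = 16;
        int newCap = oldCap << 1;               // a resize doubles the capacity
        int[] hashes = {3, 19, 35, 51};         // all map to bucket 3 when cap = 16
        for (int h : hashes) {
            int oldIndex = h & (oldCap - 1);
            int newIndex = h & (newCap - 1);
            boolean staysPut = (h & oldCap) == 0;   // the bit HashMap.resize() tests
            System.out.printf("hash=%d old=%d new=%d (%s)%n",
                    h, oldIndex, newIndex,
                    staysPut ? "stays at old index" : "moves to old index + oldCap");
        }
        // hash=3  -> old index 3, new index 3  (stays)
        // hash=19 -> old index 3, new index 19 (3 + 16)
        // hash=35 -> old index 3, new index 3  (stays)
        // hash=51 -> old index 3, new index 19 (3 + 16)
    }
}
```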
```java
public V put(K key, V value) {
return putVal(hash(key), key, value, false, true);
}
/**
* Implements Map.put and related methods
*
* @param hash hash for key
* @param key the key
* @param value the value to put
* @param onlyIfAbsent if true, don't change existing value
* @param evict if false, the table is in creation mode.
* @return previous value, or null if none
*/
final V putVal(int hash, K key, V value, boolean onlyIfAbsent,
boolean evict) {
Node<K,V>[] tab; Node<K,V> p; int n, i;
// For a newly created map the table is still empty, so this if branch is taken
if ((tab = table) == null || (n = tab.length) == 0)
// resize() is called here; on this first call it creates an empty array of the default size 16
// so n becomes 16
n = (tab = resize()).length;
// This line is the key to computing the key's position in the array
// 16 - 1 = 15, and 15 is ANDed with the hash computed above
// hash from above: 1111 1111 1111 1111 0000 0101 1000 0011
// 15 in binary   : 0000 0000 0000 0000 0000 0000 0000 1111
// result: index = 3
// so this key-value pair is placed at index 3 of the array
// A bitwise AND is used here instead of the modulo operator because it is far faster,
// and because the array size is always kept at a power of two (including on every
// resize), the AND is guaranteed to produce the same result as the modulo
if ((p = tab[i = (n - 1) & hash]) == null)
// The slot located by hash addressing is empty, so the new node can be placed directly at that index
tab[i] = newNode(hash, key, value, null);
else {
// Reaching this else branch means a hash collision has occurred
Node<K,V> e; K k;
if (p.hash == hash &&
((k = p.key) == key || (key != null && key.equals(k))))
// This condition means the keys are equal, so the old value will simply be overwritten; e is pointed at the node already sitting at this index
e = p;
else if (p instanceof TreeNode)
e = ((TreeNode<K,V>)p).putTreeVal(this, tab, hash, key, value);
else {
for (int binCount = 0; ; ++binCount) {
if ((e = p.next) == null) {
// Append the new node at the tail of the bucket's list
p.next = newNode(hash, key, value, null);
// If the list is now longer than 8 (i.e., it reaches 9 nodes with this insert),
// convert it into a red-black tree (treeifyBin may resize instead while the
// table is still smaller than MIN_TREEIFY_CAPACITY)
if (binCount >= TREEIFY_THRESHOLD - 1) // -1 for 1st
treeifyBin(tab, hash);
break;
}
if (e.hash == hash &&
((k = e.key) == key || (key != null && key.equals(k))))
break;
p = e;
}
}
if (e != null) { // existing mapping for key
// oldValue is the value previously stored for this key
V oldValue = e.value;
if (!onlyIfAbsent || oldValue == null)
// value is the new value: the node's value field is replaced with it
e.value = value;
afterNodeAccess(e);
return oldValue;
}
}
++modCount;
if (++size > threshold)
resize();
afterNodeInsertion(evict);
return null;
}
/**
* Initializes or doubles table size. If null, allocates in
* accord with initial capacity target held in field threshold.
* Otherwise, because we are using power-of-two expansion, the
* elements from each bin must either stay at same index, or move
* with a power of two offset in the new table.
*
* @return the table
*/
final Node<K,V>[] resize() {
// On the first call the table is still null
// There is no need to go through this line by line: when the first put() arrives here,
// all it does is create an empty array of the default size 16
Node<K,V>[] oldTab = table;
int oldCap = (oldTab == null) ? 0 : oldTab.length;
int oldThr = threshold;
int newCap, newThr = 0;
if (oldCap > 0) {
if (oldCap >= MAXIMUM_CAPACITY) {
threshold = Integer.MAX_VALUE;
return oldTab;
}
else if ((newCap = oldCap << 1) < MAXIMUM_CAPACITY &&
oldCap >= DEFAULT_INITIAL_CAPACITY)
newThr = oldThr << 1; // double threshold
}
else if (oldThr > 0) // initial capacity was placed in threshold
newCap = oldThr;
else { // zero initial threshold signifies using defaults
newCap = DEFAULT_INITIAL_CAPACITY;
newThr = (int)(DEFAULT_LOAD_FACTOR * DEFAULT_INITIAL_CAPACITY);
}
if (newThr == 0) {
float ft = (float)newCap * loadFactor;
newThr = (newCap < MAXIMUM_CAPACITY && ft < (float)MAXIMUM_CAPACITY ?
(int)ft : Integer.MAX_VALUE);
}
threshold = newThr;
@SuppressWarnings({"rawtypes","unchecked"})
Node<K,V>[] newTab = (Node<K,V>[])new Node[newCap];
table = newTab;
if (oldTab != null) {
for (int j = 0; j < oldCap; ++j) {
Node<K,V> e;
if ((e = oldTab[j]) != null) {
oldTab[j] = null;
if (e.next == null)
newTab[e.hash & (newCap - 1)] = e;
else if (e instanceof TreeNode)
((TreeNode<K,V>)e).split(this, newTab, j, oldCap);
else { // preserve order
Node<K,V> loHead = null, loTail = null;
Node<K,V> hiHead = null, hiTail = null;
Node<K,V> next;
do {
next = e.next;
if ((e.hash & oldCap) == 0) {
if (loTail == null)
loHead = e;
else
loTail.next = e;
loTail = e;
}
else {
if (hiTail == null)
hiHead = e;
else
hiTail.next = e;
hiTail = e;
}
} while ((e = next) != null);
if (loTail != null) {
loTail.next = null;
newTab[j] = loHead;
}
if (hiTail != null) {
hiTail.next = null;
newTab[j + oldCap] = hiHead;
}
}
}
}
}
return newTab;
}
```
6. get and remove
The logic behind get and remove follows essentially the same pattern; a short usage check is included after the source below.
```java
public V get(Object key) {
Node<K,V> e;
// hash(key) first computes the hash used to locate the key's index; getNode then reads the data
return (e = getNode(hash(key), key)) == null ? null : e.value;
}
/**
* Implements Map.get and related methods
*
* @param hash hash for key
* @param key the key
* @return the node, or null if none
*/
final Node<K,V> getNode(int hash, Object key) {
Node<K,V>[] tab; Node<K,V> first, e; int n; K k;
// If the bucket located by the hash addressing algorithm is not empty
if ((tab = table) != null && (n = tab.length) > 0 &&
(first = tab[(n - 1) & hash]) != null) {
// Check whether the first node of the list matches; if so, return it directly
if (first.hash == hash && // always check first node
((k = first.key) == key || (key != null && key.equals(k))))
return first;
if ((e = first.next) != null) {
if (first instanceof TreeNode)
// If this bucket has been converted to a red-black tree, read the data via the tree's binary-search-style lookup
return ((TreeNode<K,V>)first).getTreeNode(hash, key);
do {
// Otherwise traverse the linked list
if (e.hash == hash &&
((k = e.key) == key || (key != null && key.equals(k))))
return e;
} while ((e = e.next) != null);
}
}
return null;
}
/**
* Removes the mapping for the specified key from this map if present.
*
* @param key key whose mapping is to be removed from the map
* @return the previous value associated with <tt>key</tt>, or
* <tt>null</tt> if there was no mapping for <tt>key</tt>.
* (A <tt>null</tt> return can also indicate that the map
* previously associated <tt>null</tt> with <tt>key</tt>.)
*/
public V remove(Object key) {
Node<K,V> e;
return (e = removeNode(hash(key), key, null, false, true)) == null ?
null : e.value;
}
/**
* Implements Map.remove and related methods
*
* @param hash hash for key
* @param key the key
* @param value the value to match if matchValue, else ignored
* @param matchValue if true only remove if value is equal
* @param movable if false do not move other nodes while removing
* @return the node, or null if none
*/
final Node<K,V> removeNode(int hash, Object key, Object value,
boolean matchValue, boolean movable) {
Node<K,V>[] tab; Node<K,V> p; int n, index;
if ((tab = table) != null && (n = tab.length) > 0 &&
(p = tab[index = (n - 1) & hash]) != null) {
Node<K,V> node = null, e; K k; V v;
if (p.hash == hash &&
((k = p.key) == key || (key != null && key.equals(k))))
node = p;
else if ((e = p.next) != null) {
if (p instanceof TreeNode)
node = ((TreeNode<K,V>)p).getTreeNode(hash, key);
else {
do {
if (e.hash == hash &&
((k = e.key) == key ||
(key != null && key.equals(k)))) {
node = e;
break;
}
p = e;
} while ((e = e.next) != null);
}
}
if (node != null && (!matchValue || (v = node.value) == value ||
(value != null && value.equals(v)))) {
if (node instanceof TreeNode)
((TreeNode<K,V>)node).removeTreeNode(this, tab, movable);
else if (node == p)
tab[index] = node.next;
else
p.next = node.next;
++modCount;
--size;
afterNodeRemoval(node);
return node;
}
}
return null;
}
```
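A quick usage check of the return-value behaviour documented above (an assumed example, not taken from the source): get returns the mapped value or null, and remove returns the previous value, or null if the key was absent:

```java
import java.util.HashMap;
import java.util.Map;

public class GetRemoveDemo {
    public static void main(String[] args) {
        Map<String, Integer> map = new HashMap<>();
        map.put("a", 1);
        System.out.println(map.get("a"));      // 1
        System.out.println(map.get("b"));      // null: no mapping
        System.out.println(map.remove("a"));   // 1: the previous value
        System.out.println(map.remove("a"));   // null: already removed
    }
}
```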
Summary
- Hash algorithm: the high and low halves of the hashCode are XORed so that both the high 16 bits and the low 16 bits participate in hash addressing
- Hash addressing uses a bitwise AND rather than the modulo operator, because the bitwise operation is far faster (see the equivalence sketch below)
- After a collision, entries hang off a linked list; once a bucket's list grows past 8 entries, it is upgraded to a red-black tree, which avoids traversing the whole list on reads.
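To back up the bitwise-AND-versus-modulo point, here is a small standalone check (not JDK code; the sample hash values are arbitrary) that for a power-of-two table size n, hash & (n - 1) matches a floor modulo by n even for negative hashes:

```java
public class MaskVsModDemo {
    public static void main(String[] args) {
        int n = 16;                                   // must be a power of two
        int[] hashes = {0, 1, 17, 12345, -1, -12345, Integer.MIN_VALUE};
        for (int h : hashes) {
            int byMask = h & (n - 1);
            int byMod = Math.floorMod(h, n);          // floorMod keeps the result non-negative
            System.out.printf("hash=%d mask=%d mod=%d equal=%b%n",
                    h, byMask, byMod, byMask == byMod);
        }
    }
}
```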