Java hashCode and equals: common questions explained

This article addresses the following three questions:
- By default, does an equal hashCode imply that equals returns true?
- By default, does equals returning true imply an equal hashCode?
- If equals is overridden, must hashCode be overridden as well? Why?
Does an equal hashCode imply that equals returns true, and does equals returning true imply an equal hashCode?

These two questions are grouped together because they share a single answer, which is already spelled out in the Javadoc of the hashCode and equals methods in the Object class.

The key sentence of that Javadoc says: if two objects are equal according to the equals(Object) method, then calling the hashCode method on each of the two objects must produce the same integer result. Two points follow:

- If two objects are equal by equals, their hashCode values must be equal.
- The converse does not hold: two objects with the same hashCode value are not necessarily equal by equals.
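The first point is the contract itself; the second is easy to verify directly. For example, "Aa" and "BB" are a well-known pair of distinct strings whose String.hashCode values collide (the class name below is just for illustration):

```java
public class HashCodeConverseDemo {
    public static void main(String[] args) {
        // String.hashCode for a two-character string is c0 * 31 + c1:
        // 'A' * 31 + 'a' = 65 * 31 + 97 = 2112
        // 'B' * 31 + 'B' = 66 * 31 + 66 = 2112
        String a = "Aa";
        String b = "BB";
        System.out.println(a.hashCode() == b.hashCode()); // true: same hash code
        System.out.println(a.equals(b));                  // false: not equal
    }
}
```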
The natural follow-up question is how hashCode actually computes an object's hash value. The Javadoc notes that a typical implementation converts the object's internal address into an integer, but the method itself is declared native, so the real implementation has to be found in the OpenJDK sources. The routine that actually generates the hash is ObjectSynchronizer::FastHashCode, reproduced below:
```cpp
intptr_t ObjectSynchronizer::FastHashCode (Thread * Self, oop obj) {
  if (UseBiasedLocking) {
    // NOTE: many places throughout the JVM do not expect a safepoint
    // to be taken here, in particular most operations on perm gen
    // objects. However, we only ever bias Java instances and all of
    // the call sites of identity_hash that might revoke biases have
    // been checked to make sure they can handle a safepoint. The
    // added check of the bias pattern is to avoid useless calls to
    // thread-local storage.
    if (obj->mark()->has_bias_pattern()) {
      // Box and unbox the raw reference just in case we cause a STW safepoint.
      Handle hobj (Self, obj) ;
      // Relaxing assertion for bug 6320749.
      assert (Universe::verify_in_progress() ||
              !SafepointSynchronize::is_at_safepoint(),
              "biases should not be seen by VM thread here");
      BiasedLocking::revoke_and_rebias(hobj, false, JavaThread::current());
      obj = hobj() ;
      assert(!obj->mark()->has_bias_pattern(), "biases should be revoked by now");
    }
  }

  // hashCode() is a heap mutator ...
  // Relaxing assertion for bug 6320749.
  assert (Universe::verify_in_progress() ||
          !SafepointSynchronize::is_at_safepoint(), "invariant") ;
  assert (Universe::verify_in_progress() ||
          Self->is_Java_thread() , "invariant") ;
  assert (Universe::verify_in_progress() ||
          ((JavaThread *)Self)->thread_state() != _thread_blocked, "invariant") ;

  ObjectMonitor* monitor = NULL;
  markOop temp, test;
  intptr_t hash;
  markOop mark = ReadStableMark (obj);

  // object should remain ineligible for biased locking
  assert (!mark->has_bias_pattern(), "invariant") ;

  if (mark->is_neutral()) {
    hash = mark->hash();              // this is a normal header
    if (hash) {                       // if it has hash, just return it
      return hash;
    }
    hash = get_next_hash(Self, obj);  // allocate a new hash code
    temp = mark->copy_set_hash(hash); // merge the hash code into header
    // use (machine word version) atomic operation to install the hash
    test = (markOop) Atomic::cmpxchg_ptr(temp, obj->mark_addr(), mark);
    if (test == mark) {
      return hash;
    }
    // If atomic operation failed, we must inflate the header
    // into heavy weight monitor. We could add more code here
    // for fast path, but it does not worth the complexity.
  } else if (mark->has_monitor()) {
    monitor = mark->monitor();
    temp = monitor->header();
    assert (temp->is_neutral(), "invariant") ;
    hash = temp->hash();
    if (hash) {
      return hash;
    }
    // Skip to the following code to reduce code size
  } else if (Self->is_lock_owned((address)mark->locker())) {
    temp = mark->displaced_mark_helper(); // this is a lightweight monitor owned
    assert (temp->is_neutral(), "invariant") ;
    hash = temp->hash();              // by current thread, check if the displaced
    if (hash) {                       // header contains hash code
      return hash;
    }
    // WARNING:
    // The displaced header is strictly immutable.
    // It can NOT be changed in ANY cases. So we have
    // to inflate the header into heavyweight monitor
    // even the current thread owns the lock. The reason
    // is the BasicLock (stack slot) will be asynchronously
    // read by other threads during the inflate() function.
    // Any change to stack may not propagate to other threads
    // correctly.
  }

  // Inflate the monitor to set hash code
  monitor = ObjectSynchronizer::inflate(Self, obj);

  // Load displaced header and check it has hash code
  mark = monitor->header();
  assert (mark->is_neutral(), "invariant") ;
  hash = mark->hash();
  if (hash == 0) {
    hash = get_next_hash(Self, obj);
    temp = mark->copy_set_hash(hash); // merge hash code into header
    assert (temp->is_neutral(), "invariant") ;
    test = (markOop) Atomic::cmpxchg_ptr(temp, monitor, mark);
    if (test != mark) {
      // The only update to the header in the monitor (outside GC)
      // is install the hash code. If someone add new usage of
      // displaced header, please update this code
      hash = test->hash();
      assert (test->is_neutral(), "invariant") ;
      assert (hash != 0, "Trivial unexpected object/monitor header usage.");
    }
  }
  // We finally get the hash
  return hash;
}
```
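Back on the Java side, the value produced by this routine is what System.identityHashCode reports, and Object.hashCode returns it whenever a class does not override hashCode. A small check (the class name is just for illustration):

```java
public class IdentityHashDemo {
    public static void main(String[] args) {
        Object o = new Object();
        // Object does not override hashCode, so hashCode() returns the
        // VM-generated identity hash, i.e. the same value that
        // System.identityHashCode reports.
        System.out.println(o.hashCode() == System.identityHashCode(o)); // true
        // The identity hash is computed once and stays stable for the object's lifetime.
        System.out.println(o.hashCode() == o.hashCode()); // true
    }
}
```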
Must hashCode be overridden when equals is overridden? Why?

The answer to this question is, again, written plainly in the Javadoc of the equals method in Object.

Per the official wording, whenever equals is overridden, hashCode must be overridden as well. The reason for the "must" is also given there: it is necessary to maintain the general contract of the hashCode method, which states that equal objects must have equal hash codes.

To make that "must" concrete, consider the problem from two angles:
- Create a custom class that overrides equals but not hashCode, and see what goes wrong.
- Look at the HashMap source to see what breaks when only equals is overridden.

First, a Person class with pname and age fields:
```java
import java.io.Serializable;

public class Person implements Serializable {
    private static final long serialVersionUID = 7592930394427200495L;

    private String pname;
    private int age;

    public Person() {
    }

    public Person(String pname, int age) {
        this.pname = pname;
        this.age = age;
    }

    public String getPname() {
        return pname;
    }

    public void setPname(String pname) {
        this.pname = pname;
    }

    public int getAge() {
        return age;
    }

    public void setAge(int age) {
        this.age = age;
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof Person)) return false;
        Person person = (Person) o;
        if (age != person.age) return false;
        return !(pname != null ? !pname.equals(person.pname) : person.pname != null);
    }

    // Deliberately left commented out for this experiment:
    // @Override
    // public int hashCode() {
    //     int result = pname != null ? pname.hashCode() : 0;
    //     result = 31 * result + age;
    //     return result;
    // }
}
```
Now a quick test:

```java
@Test
public void fun() {
    Person p1 = new Person("lisi", 15);
    Person p2 = new Person("lisi", 15);
    Assert.assertTrue(p1.equals(p2));                     // equals compares the fields, so they are "equal"
    Assert.assertNotEquals(p1.hashCode(), p2.hashCode()); // yet their hash codes differ
}
```

p1 and p2 are two distinct Person instances on the Java heap, so their addresses certainly differ. Because we overrode only equals, which compares pname and age, the two objects report themselves as equal while their hashCode values, still derived from the identity hash, are different. That violates the hashCode contract described above; overriding hashCode from the same fields (as in the commented-out method) restores the invariant.
The same point can be made by looking at the HashMap source; the key logic is in the put operation. When a new entry is inserted, HashMap first computes which bucket (Entry chain) the key belongs to (consult a HashMap internals reference if this is unfamiliar), then compares the new key against each element on that chain to decide whether it is a duplicate. The duplicate check is exactly where the two methods cooperate: HashMap first compares the two keys' hash values, and only if they are equal does it go on to call equals. If hashCode is not overridden, two keys that are equal according to the overridden equals (as in the Person example) will almost always have different hash values, land in different buckets, and never even reach the equals comparison, so the second put quietly adds a duplicate instead of replacing the existing entry.
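A minimal sketch of that failure mode, with a Person-like class redeclared here (under a hypothetical name) so the snippet is self-contained:

```java
import java.util.HashMap;
import java.util.Map;

// Mirrors the Person example above: equals is overridden, hashCode is not.
class PersonNoHash {
    final String pname;
    final int age;

    PersonNoHash(String pname, int age) {
        this.pname = pname;
        this.age = age;
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof PersonNoHash)) return false;
        PersonNoHash p = (PersonNoHash) o;
        return age == p.age && (pname == null ? p.pname == null : pname.equals(p.pname));
    }
}

public class HashMapDupDemo {
    public static void main(String[] args) {
        Map<PersonNoHash, String> map = new HashMap<>();
        map.put(new PersonNoHash("lisi", 15), "first");
        // Equal by equals, but the identity hash codes almost certainly differ,
        // so this key lands in a different bucket and does NOT replace the first.
        map.put(new PersonNoHash("lisi", 15), "second");
        System.out.println(map.size()); // prints 2 instead of the expected 1
    }
}
```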