
NameNode fails to start

2018-05-17  dpengwang

After running the start-all command in Hadoop and checking the running processes with jps, I found that the NameNode had not started.
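For reference, the commands involved look like this (a minimal sketch; start-all.sh is deprecated in Hadoop 2.x in favor of start-dfs.sh and start-yarn.sh, but it still works):

    start-all.sh    # or: start-dfs.sh && start-yarn.sh
    jps             # NameNode should appear in this list; here it was missing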


Following a method suggested online, I deleted Hadoop's temporary directory tmp and restarted, but it still did not work.
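Concretely, that attempt looked roughly like this (the tmp path is taken from the error message below; adjust it to your own installation):

    stop-all.sh
    rm -rf /usr/local/hadoop/hadoop-2.8.3/tmp
    start-all.sh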

The NameNode log shows the following:


org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory /usr/local/hadoop/hadoop-2.8.3/tmp/dfs/name is in an inconsistent state: storage directory does not exist or is not accessible.
    at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverStorageDirs(FSImage.java:369)
    at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:220)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1044)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:707)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:635)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:696)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:906)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:885)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1626)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1694)
2018-03-05 06:46:04,300 INFO org.mortbay.log: Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@0.0.0.0:50070
2018-03-05 06:46:04,402 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Stopping NameNode metrics system...
2018-03-05 06:46:04,403 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system stopped.
2018-03-05 06:46:04,404 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system shutdown complete.
2018-03-05 06:46:04,404 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: Failed to start namenode.
org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory /usr/local/hadoop/hadoop-2.8.3/tmp/dfs/name is in an inconsistent state: storage directory does not exist or is not accessible.
    at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverStorageDirs(FSImage.java:369)
    at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:220)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1044)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:707)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:635)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:696)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:906)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:885)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1626)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1694)
2018-03-05 06:46:04,408 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1
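For anyone who needs to find this log: by convention the NameNode writes under $HADOOP_HOME/logs, in a file named after the daemon, user, and host, for example (<user> and <hostname> are placeholders for your own values):

    tail -n 100 $HADOOP_HOME/logs/hadoop-<user>-namenode-<hostname>.log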

/usr/local/hadoop/hadoop-2.8.3/tmp/dfs/name is in an inconsistent state: storage directory does not exist or is not accessible.

Starting from this message: dfs/name is the NameNode's storage directory, and by default it lives under hadoop.tmp.dir, a temporary location that gets cleared on every restart, which is why it cannot be found.
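A quick sanity check is to confirm that the storage directory really is missing, using the path from the exception:

    ls -ld /usr/local/hadoop/hadoop-2.8.3/tmp/dfs/name   # should fail with "No such file or directory"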

Solution:

In core-site.xml, change this property:

<property>
    <name>hadoop.tmp.dir</name>
    <value>file:///usr/local/hadoop/tmp</value>
</property>

to the following, dropping the file:// prefix:

<property>
    <name>hadoop.tmp.dir</name>
    <value>/usr/local/hadoop/tmp</value>
</property>

In other words, the file:// prefix meant the hadoop.tmp.dir setting was not applied as intended, so the temporary files were effectively lost on every restart. hadoop.tmp.dir expects a plain local path; the file:// URI form belongs on properties like dfs.namenode.name.dir, not here.
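One caveat worth adding: after pointing hadoop.tmp.dir at a new location, the NameNode has no metadata there yet, so on a fresh or throwaway cluster you generally have to re-format before starting. A sketch, assuming a test cluster where wiping HDFS metadata is acceptable:

    stop-all.sh
    hdfs namenode -format   # WARNING: destroys existing HDFS metadata; test clusters only
    start-all.sh
    jps                     # NameNode should now appear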
