Hadoop: NameNode fails to start after a reboot
2014-04-06 09:53
After shutting down and restarting the machine, the NameNode fails to start with the following error:
2011-10-21 05:22:20,504 INFO org.apache.hadoop.hdfs.server.common.Storage: Storage directory /tmp/hadoop-fzuir/dfs/name does not exist.
2011-10-21 05:22:20,506 ERROR org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem initialization failed.
org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory /tmp/hadoop-fzuir/dfs/name is in an inconsistent state: storage directory does not exist or is not accessible.
at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:291)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:97)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:379)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:353)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:254)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:434)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1153)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1162)
My first workaround was simply to re-format the NameNode, but on reflection that is no fix at all: having to re-format after every reboot would wipe out all HDFS data each time.
After searching around, I found the real cause: the data lives under /tmp, and /tmp is cleaned out on reboot. The fix is to edit core-site.xml and add a hadoop.tmp.dir property pointing at a persistent location:
<property>
  <name>hadoop.tmp.dir</name>
  <value>/home/fzuir/Hadoop0.20.203.0/tmp/hadoop-${user.name}</value>
</property>
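A minimal sketch of the follow-up steps, assuming a generic install path under $HOME (the directory name here is illustrative, not from the article): the new hadoop.tmp.dir must exist before the NameNode starts, and since the old image under /tmp is already gone, the NameNode has to be formatted once against the new location.

```shell
# Illustrative persistent location for hadoop.tmp.dir (outside /tmp).
HADOOP_TMP="$HOME/hadoop-tmp/hadoop-$USER"

# Create the directory so the NameNode can write dfs/name under it.
mkdir -p "$HADOOP_TMP"
ls -d "$HADOOP_TMP"

# One-time step after relocating hadoop.tmp.dir (destroys any prior
# HDFS metadata, which in this case was lost already):
# bin/hadoop namenode -format
```

After this one-time format, subsequent reboots no longer touch the directory, so the NameNode can recover its image normally.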
Problem solved: after rebooting the machine, starting Hadoop no longer produces the "/dfs/name is in an inconsistent state" error.
Source: http://www.linuxidc.com/Linux/2012-02/55079.htm