
Hadoop: datanode fails to start after formatting the namenode multiple times

2017-12-16 14:28
First, let's look at the error message:

2017-12-14 05:07:57,636 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Starting BPOfferServices for nameservices: <default>
at org.apache.hadoop.hdfs.server.datanode.DataStorage.loadDataStorage(DataStorage.java:409)
at org.apache.hadoop.hdfs.server.datanode.DataStorage.addStorageLocations(DataStorage.java:388)
at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:556)
at org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:1566)
at org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:1527)
at org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:327)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:266)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:746)
at java.lang.Thread.run(Thread.java:745)
2017-12-14 05:07:58,922 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for Block pool <registering> (Datanode Uuid 63404450-ed85-4636-8eac-ea75dba1d424) service to hadoop/192.168.137.5:9000. Exiting.
java.io.IOException: All specified directories are failed to load.
at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:557)
at org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:1566)
at org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:1527)
at org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:327)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:266)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:746)
at java.lang.Thread.run(Thread.java:745)


One line in this log is particularly important:

Incompatible clusterIDs in /tmp/hadoop-hadoop/dfs/data: namenode clusterID = CID-c80f243c-4a07-43f3-9eb8-f40d164a4520; datanode clusterID = CID-3e6fcd99-a2fe-42f3-9ccf-bc257a065eb3


This line tells us that the namenode's clusterID and the datanode's clusterID do not match, which is why the datanode cannot start. The cause is formatting the namenode more than once: each format generates a new clusterID for the namenode, while the datanode's storage directory keeps the old one.
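You can see the mismatch directly on disk: the clusterID is recorded in the VERSION file under each storage directory's current/ subdirectory. A quick check (paths are the defaults from this log; the two IDs are the ones reported above) might look like:

# Compare the clusterID recorded by the namenode and the datanode
# (default paths under /tmp/hadoop-hadoop; adjust if you changed hadoop.tmp.dir)
grep clusterID /tmp/hadoop-hadoop/dfs/name/current/VERSION
# clusterID=CID-c80f243c-4a07-43f3-9eb8-f40d164a4520
grep clusterID /tmp/hadoop-hadoop/dfs/data/current/VERSION
# clusterID=CID-3e6fcd99-a2fe-42f3-9ccf-bc257a065eb3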

Solution:

Following the log, find the directory that stores the clusterID; in my case it is /tmp/hadoop-hadoop/dfs/data. The namenode's copy lives in the corresponding name directory:

hadoop:hadoop:/tmp/hadoop-hadoop/dfs/name:>ll
total 8
drwxrwxr-x 4 hadoop hadoop 4096 Dec 14 05:35 current
-rw-rw-r-- 1 hadoop hadoop   11 Dec 16 05:41 in_use.lock
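
If your cluster does not use the default /tmp locations, you can query the configured paths first. This is a sketch assuming the standard hdfs getconf command; dfs.namenode.name.dir and dfs.datanode.data.dir default to subdirectories of hadoop.tmp.dir:

# Print the configured namenode and datanode storage directories
hdfs getconf -confKey dfs.namenode.name.dir
hdfs getconf -confKey dfs.datanode.data.dir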


Copy the clusterID from the VERSION file in the namenode's current directory and use it to overwrite the clusterID in /tmp/hadoop-hadoop/dfs/data/current/VERSION.
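For example, from the shell (a sketch assuming the default paths shown above; the value written here is the namenode clusterID from my log, so substitute your own):

# Overwrite the datanode's clusterID with the namenode's value from the log above
sed -i 's/^clusterID=.*/clusterID=CID-c80f243c-4a07-43f3-9eb8-f40d164a4520/' \
    /tmp/hadoop-hadoop/dfs/data/current/VERSION

# Start the datanode again so it re-registers with the namenode
hadoop-daemon.sh start datanode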

Once the two clusterIDs are identical, the datanode can start again and the problem is solved.
Tags: hadoop, formatting