
Hadoop formatted multiple times: 17/12/19 00:19:51 WARN hdfs.DataStreamer: DataStreamer Exception

2017-12-19 13:29

While setting up Hadoop in pseudo-distributed mode I formatted the file system once, then modified core-site.xml and formatted it again. When I subsequently ran a MapReduce test job, it threw the exception below:

[hadoop@zydatahadoop001 hadoop]$ bin/hadoop jar ./share/hadoop/mapreduce/hadoop-mapreduce-examples-2.8.1.jar pi 5 10
Number of Maps  = 5
Samples per Map = 10
17/12/19 00:19:51 WARN hdfs.DataStreamer: DataStreamer Exception
org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /user/hadoop/QuasiMonteCarlo_1513613989668_1586732403/in/part0 could only be replicated to 0 nodes instead of minReplication (=1).  There are 0 datanode(s) running and no node(s) are excluded in this operation.
at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1738)
at org.apache.hadoop.hdfs.server.namenode.FSDirWriteFileOp.chooseTargetForNewBlock(FSDirWriteFileOp.java:265)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2496)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:828)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:506)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:447)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:989)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:845)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:788)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1807)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2455)

at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1481)
at org.apache.hadoop.ipc.Client.call(Client.java:1427)
at org.apache.hadoop.ipc.Client.call(Client.java:1337)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:227)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
at com.sun.proxy.$Proxy10.addBlock(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:440)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:398)
at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:163)
at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:155)
at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:335)
at com.sun.proxy.$Proxy11.addBlock(Unknown Source)
at org.apache.hadoop.hdfs.DataStreamer.locateFollowingBlock(DataStreamer.java:1733)
at org.apache.hadoop.hdfs.DataStreamer.nextBlockOutputStream(DataStreamer.java:1536)
at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:658)

Cause of the error: because the NameNode was formatted more than once, the DataNode can no longer start, which is why the log contains:

17/12/19 00:19:51 WARN hdfs.DataStreamer: DataStreamer Exception
org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /user/hadoop/QuasiMonteCarlo_1513613989668_1586732403/in/part0 could only be replicated to 0 nodes instead of minReplication (=1).  There are 0 datanode(s) running and no node(s) are excluded in this operation.


A Baidu search turns up a number of basic remedies, which boil down to the following:

Start the NameNode first, then the DataNode:

$ hadoop-daemon.sh start namenode
$ hadoop-daemon.sh start datanode


Make sure the firewalls on the master (NameNode) and the slaves (DataNodes) are turned off.
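
For example, on a CentOS-style system this usually means stopping firewalld (CentOS 7) or iptables (CentOS 6); the stock system commands, run as root, are shown here only as a reminder:

$ systemctl stop firewalld       # CentOS 7
$ systemctl disable firewalld
$ service iptables stop          # CentOS 6
$ chkconfig iptables off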

Check DFS disk-space usage and make sure the DataNode volumes are not full.

Hadoop's default hadoop.tmp.dir is /tmp/hadoop-${user.name}, and on some Linux systems the filesystem mounted at /tmp is of a type that Hadoop does not support.
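
A quick way to check whether /tmp is the issue, and to prepare an alternative location, is sketched below; /home/hadoop/tmp is only a placeholder, any persistent directory owned by the hadoop user will do, and after pointing hadoop.tmp.dir at it in core-site.xml the NameNode must be formatted again:

$ df -Th /tmp                  # show the filesystem type backing /tmp (tmpfs is a common culprit)
$ mkdir -p /home/hadoop/tmp    # candidate value for hadoop.tmp.dir in core-site.xml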

None of that helped, though. The real cause is this: when the file system is formatted, the NameNode writes a current/VERSION file into its data directory (/tmp/hadoop-hadoop/dfs/name/current by default), recording a namespaceID/clusterID that identifies that particular format. If the NameNode is formatted repeatedly, the current/VERSION file kept by the DataNode (under the local path configured by dfs.data.dir) still carries the ID written at the first format, so the DataNode's ID no longer matches the NameNode's and the DataNode refuses to start.
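
You can confirm the mismatch by comparing the two VERSION files directly (paths assume the default hadoop.tmp.dir of /tmp/hadoop-hadoop):

$ grep clusterID /tmp/hadoop-hadoop/dfs/name/current/VERSION    # ID written at the most recent format
$ grep clusterID /tmp/hadoop-hadoop/dfs/data/current/VERSION    # ID the DataNode kept from the first format

If the two clusterID values differ, the DataNode refuses to register, which is exactly the "0 datanode(s) running" situation above.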

Two ways to fix it are given below.

Method 1

Stop the running HDFS processes before deleting anything:

[hadoop@zydatahadoop001 sbin]$ jps
26884 Jps
26043 SecondaryNameNode
25756 NameNode
[hadoop@zydatahadoop001 sbin]$ kill -9 26043
[hadoop@zydatahadoop001 sbin]$ kill -9 25756
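
If the daemons were started with start-dfs.sh, a gentler alternative to kill -9 is to stop them with the bundled script and confirm with jps:

$ sbin/stop-dfs.sh      # run from the Hadoop installation directory
$ jps                   # NameNode / SecondaryNameNode should no longer be listed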


Delete the contents of the DataNode data directory /tmp/hadoop-hadoop/dfs/data/:

[hadoop@zydatahadoop001 ~]$ cd /tmp/hadoop-hadoop/dfs/data/
[hadoop@zydatahadoop001 data]$ ll
total 8
drwxrwxr-x. 3 hadoop hadoop 4096 Dec 19 00:33 current
-rw-rw-r--. 1 hadoop hadoop   21 Dec 19 00:33 in_use.lock
[hadoop@zydatahadoop001 data]$ rm -rf current


Now format the NameNode again:

[hadoop@zydatahadoop001 bin]$ hdfs namenode -format
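
After the format completes, bring the daemons back up and check that the DataNode registers this time (same scripts as before; hdfs dfsadmin -report is one quick way to count live DataNodes):

$ hadoop-daemon.sh start namenode
$ hadoop-daemon.sh start datanode
$ jps                        # DataNode should now be listed
$ hdfs dfsadmin -report      # should report 1 live datanode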


The problem is solved.

Method 2

Edit the clusterID in the VERSION file under the DataNode's current directory (/tmp/hadoop-hadoop/dfs/data/current/VERSION) so that it matches the clusterID recorded by the NameNode.

[hadoop@zydatahadoop001 ~]$ cd /tmp/hadoop-hadoop/dfs/data/
[hadoop@zydatahadoop001 data]$ ll
total 8
drwxrwxr-x. 3 hadoop hadoop 4096 Dec 19 00:33 current
-rw-rw-r--. 1 hadoop hadoop   21 Dec 19 00:33 in_use.lock

[hadoop@zydatahadoop001 data]$ cd current
[hadoop@zydatahadoop001 current]$ cat VERSION
#Tue Dec 19 00:33:48 CST 2017
storageID=DS-d9b740ba-fe83-44ec-b3d2-e21da706a597
clusterID=CID-2544b3e3-8400-47a6-a253-63dffc356e47
cTime=0
datanodeUuid=79d78c7d-cd1b-4854-89df-c063ce5fba86
storageType=DATA_NODE
layoutVersion=-57
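
One way to make the edit is to read the NameNode's clusterID and write it into the DataNode's VERSION file, for example with the sketch below (paths assume the default /tmp/hadoop-hadoop layout; sed -i.bak keeps a backup of the original file):

$ NN_CID=$(grep '^clusterID=' /tmp/hadoop-hadoop/dfs/name/current/VERSION | cut -d= -f2)
$ sed -i.bak "s/^clusterID=.*/clusterID=${NN_CID}/" /tmp/hadoop-hadoop/dfs/data/current/VERSION
$ grep clusterID /tmp/hadoop-hadoop/dfs/data/current/VERSION    # should now match the NameNode's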


With the two clusterIDs identical there is no need to format again (another format would generate a fresh clusterID and break the match); just start the DataNode:

$ hadoop-daemon.sh start datanode


Running jps afterwards shows the DataNode process, and the problem is solved.

From @若泽大数据