Hadoop: DataStreamer Exception after formatting HDFS multiple times (17/12/19 00:19:51 WARN hdfs.DataStreamer)
2017-12-19 13:29
While setting up a pseudo-distributed Hadoop install, I formatted HDFS once, then modified core-site.xml and formatted again. When I then ran a MapReduce test job, it threw the exception below:
[hadoop@zydatahadoop001 hadoop]$ bin/hadoop jar ./share/hadoop/mapreduce/hadoop-mapreduce-examples-2.8.1.jar pi 5 10
Number of Maps = 5
Samples per Map = 10
17/12/19 00:19:51 WARN hdfs.DataStreamer: DataStreamer Exception
org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /user/hadoop/QuasiMonteCarlo_1513613989668_1586732403/in/part0 could only be replicated to 0 nodes instead of minReplication (=1). There are 0 datanode(s) running and no node(s) are excluded in this operation.
        at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1738)
        at org.apache.hadoop.hdfs.server.namenode.FSDirWriteFileOp.chooseTargetForNewBlock(FSDirWriteFileOp.java:265)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2496)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:828)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:506)
        at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:447)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:989)
        at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:845)
        at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:788)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1807)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2455)
        at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1481)
        at org.apache.hadoop.ipc.Client.call(Client.java:1427)
        at org.apache.hadoop.ipc.Client.call(Client.java:1337)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:227)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
        at com.sun.proxy.$Proxy10.addBlock(Unknown Source)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:440)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:497)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:398)
        at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:163)
        at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:155)
        at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:335)
        at com.sun.proxy.$Proxy11.addBlock(Unknown Source)
        at org.apache.hadoop.hdfs.DataStreamer.locateFollowingBlock(DataStreamer.java:1733)
        at org.apache.hadoop.hdfs.DataStreamer.nextBlockOutputStream(DataStreamer.java:1536)
        at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:658)

(The same RemoteException and stack trace are then printed a second time as the job aborts.)
Cause of the error: formatting HDFS multiple times left the datanode unable to start, which is exactly what the key line of the log says:

17/12/19 00:19:51 WARN hdfs.DataStreamer: DataStreamer Exception
org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /user/hadoop/QuasiMonteCarlo_1513613989668_1586732403/in/part0 could only be replicated to 0 nodes instead of minReplication (=1). There are 0 datanode(s) running and no node(s) are excluded in this operation.
A search turns up several basic remedies, summarized below:

Start the namenode first, then the datanode:

hadoop-daemon.sh start namenode
hadoop-daemon.sh start datanode

Make sure the firewalls on the master (namenode) and the slaves (datanodes) are disabled.

Check how much DFS space is in use and confirm space is available.

Also note that Hadoop's default hadoop.tmp.dir is /tmp/hadoop-${user.name}, and on some Linux systems the filesystem type mounted at /tmp is one Hadoop does not support.
None of these fixed it for me. The real reason: when you format the filesystem, a current/VERSION file is written recording the namespaceID/clusterID that identifies that format of the namenode. If you format the namenode repeatedly, the current/VERSION kept by the datanode (under the local path configured by dfs.data.dir, here /tmp/hadoop-hadoop/dfs/data/current) still holds the IDs written at the first format, so the datanode's IDs no longer match the namenode's and the datanode refuses to start.
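The mismatch described above is easy to confirm by comparing the clusterID lines of the two VERSION files. A minimal sketch follows; the paths in the commented-out invocation are the defaults under hadoop.tmp.dir=/tmp/hadoop-${user.name} and are assumptions, not something verified on your cluster:

```shell
# Compare the clusterID the namenode recorded with the one the datanode kept.
compare_cluster_ids() {
  nn_version=$1   # namenode's current/VERSION
  dn_version=$2   # datanode's current/VERSION
  nn_id=$(grep '^clusterID=' "$nn_version" | cut -d= -f2)
  dn_id=$(grep '^clusterID=' "$dn_version" | cut -d= -f2)
  if [ "$nn_id" = "$dn_id" ]; then
    echo "clusterIDs match: $nn_id"
  else
    echo "clusterID mismatch: namenode=$nn_id datanode=$dn_id"
  fi
}

# Typical invocation (paths are assumptions for a default pseudo-distributed setup):
# compare_cluster_ids /tmp/hadoop-hadoop/dfs/name/current/VERSION \
#                     /tmp/hadoop-hadoop/dfs/data/current/VERSION
```

If this prints a mismatch, either of the two methods below will fix it.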
Here are two ways to fix it:

Method 1: stop the processes before deleting anything
[hadoop@zydatahadoop001 sbin]$ jps
26884 Jps
26043 SecondaryNameNode
25756 NameNode
[hadoop@zydatahadoop001 sbin]$ kill -9 26043
[hadoop@zydatahadoop001 sbin]$ kill -9 25756
Then delete the files under the /tmp/hadoop-hadoop/dfs/data directory:
[hadoop@zydatahadoop001 ~]$ cd /tmp/hadoop-hadoop/dfs/data/
[hadoop@zydatahadoop001 data]$ ll
total 8
drwxrwxr-x. 3 hadoop hadoop 4096 Dec 19 00:33 current
-rw-rw-r--. 1 hadoop hadoop   21 Dec 19 00:33 in_use.lock
[hadoop@zydatahadoop001 data]$ rm -rf current
Now format again:
[hadoop@zydatahadoop001 bin]$ hdfs namenode -format
That resolves the problem.
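The cleanup step of Method 1 can be wrapped in a small helper so you never fat-finger the rm. This is a sketch under the assumption that the datanode storage directory is the default /tmp/hadoop-hadoop/dfs/data; run it only after the HDFS daemons have been killed as shown above:

```shell
# Remove a datanode storage directory's stale metadata and lock file.
# Call with the dfs.data.dir path, e.g. /tmp/hadoop-hadoop/dfs/data (an assumption).
clean_datanode_storage() {
  data_dir=$1
  rm -rf "$data_dir/current"     # stale VERSION lives under current/
  rm -f  "$data_dir/in_use.lock" # left behind by the killed datanode
  echo "cleaned $data_dir"
}

# After cleaning, reformat the namenode:
# hdfs namenode -format
```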
Method 2

Edit the clusterID in the datanode's current/VERSION (under /tmp/hadoop-hadoop/dfs/data/) so that it matches the namenode's clusterID:
[hadoop@zydatahadoop001 ~]$ cd /tmp/hadoop-hadoop/dfs/data/
[hadoop@zydatahadoop001 data]$ ll
total 8
drwxrwxr-x. 3 hadoop hadoop 4096 Dec 19 00:33 current
-rw-rw-r--. 1 hadoop hadoop   21 Dec 19 00:33 in_use.lock
[hadoop@zydatahadoop001 current]$ cat VERSION
#Tue Dec 19 00:33:48 CST 2017
storageID=DS-d9b740ba-fe83-44ec-b3d2-e21da706a597
clusterID=CID-2544b3e3-8400-47a6-a253-63dffc356e47
cTime=0
datanodeUuid=79d78c7d-cd1b-4854-89df-c063ce5fba86
storageType=DATA_NODE
layoutVersion=-57
After the two clusterIDs match (no reformat is needed with this method, which is its whole point), restart the daemons and verify with jps that the datanode starts. Problem solved.
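The manual edit in Method 2 can be done with one sed call. A minimal sketch, assuming the same default VERSION paths as above (GNU sed's -i is assumed):

```shell
# Copy the namenode's clusterID into the datanode's VERSION file in place.
fix_datanode_cluster_id() {
  nn_version=$1   # namenode's current/VERSION
  dn_version=$2   # datanode's current/VERSION
  nn_id=$(grep '^clusterID=' "$nn_version" | cut -d= -f2)
  sed -i "s/^clusterID=.*/clusterID=$nn_id/" "$dn_version"
}

# Typical invocation (paths are assumptions for a default pseudo-distributed setup):
# fix_datanode_cluster_id /tmp/hadoop-hadoop/dfs/name/current/VERSION \
#                         /tmp/hadoop-hadoop/dfs/data/current/VERSION
```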
Source: @若泽大数据