
Hadoop append: error when appending to a file

2016-01-26 10:05
2016-01-25 22:13:11,601 ERROR [Thread-173] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Error writing History Event: org.apache.hadoop.mapreduce.jobhistory.TaskFinishedEvent@57aa9c83
java.io.IOException: Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try. (Nodes: current=[DatanodeInfoWithStorage[192.168.28.40:50010,DS-5f89be6b-ef87-48a2-8d9c-d8b88fc229e8,DISK], DatanodeInfoWithStorage[192.168.28.41:50010,DS-7149d22f-6d54-4d52-8168-0dbbfff5ced4,DISK]], original=[DatanodeInfoWithStorage[192.168.28.40:50010,DS-5f89be6b-ef87-48a2-8d9c-d8b88fc229e8,DISK], DatanodeInfoWithStorage[192.168.28.41:50010,DS-7149d22f-6d54-4d52-8168-0dbbfff5ced4,DISK]]). The current failed datanode replacement policy is DEFAULT, and a client may configure this via 'dfs.client.block.write.replace-datanode-on-failure.policy' in its configuration.
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.findNewDatanode(DFSOutputStream.java:951)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.addDatanode2ExistingPipeline(DFSOutputStream.java:1017)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1165)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.processDatanodeError(DFSOutputStream.java:909)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:412)
2016-01-25 22:13:11,601 INFO [Thread-173] org.apache.hadoop.service.AbstractService: Service JobHistoryEventHandler failed in state STOPPED; cause: org.apache.hadoop.yarn.exceptions.YarnRuntimeException: java.io.IOException: Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try. (Nodes: current=[DatanodeInfoWithStorage[192.168.28.40:50010,DS-5f89be6b-ef87-48a2-8d9c-d8b88fc229e8,DISK], DatanodeInfoWithStorage[192.168.28.41:50010,DS-7149d22f-6d54-4d52-8168-0dbbfff5ced4,DISK]], original=[DatanodeInfoWithStorage[192.168.28.40:50010,DS-5f89be6b-ef87-48a2-8d9c-d8b88fc229e8,DISK], DatanodeInfoWithStorage[192.168.28.41:50010,DS-7149d22f-6d54-4d52-8168-0dbbfff5ced4,DISK]]). The current failed datanode replacement policy is DEFAULT, and a client may configure this via 'dfs.client.block.write.replace-datanode-on-failure.policy' in its configuration.
org.apache.hadoop.yarn.exceptions.YarnRuntimeException: java.io.IOException: Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try. (Nodes: current=[DatanodeInfoWithStorage[192.168.28.40:50010,DS-5f89be6b-ef87-48a2-8d9c-d8b88fc229e8,DISK], DatanodeInfoWithStorage[192.168.28.41:50010,DS-7149d22f-6d54-4d52-8168-0dbbfff5ced4,DISK]], original=[DatanodeInfoWithStorage[192.168.28.40:50010,DS-5f89be6b-ef87-48a2-8d9c-d8b88fc229e8,DISK], DatanodeInfoWithStorage[192.168.28.41:50010,DS-7149d22f-6d54-4d52-8168-0dbbfff5ced4,DISK]]). The current failed datanode replacement policy is DEFAULT, and a client may configure this via 'dfs.client.block.write.replace-datanode-on-failure.policy' in its configuration.
    at org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler.handleEvent(JobHistoryEventHandler.java:580)
    at org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler.serviceStop(JobHistoryEventHandler.java:374)
    at org.apache.hadoop.service.AbstractService.stop(AbstractService.java:221)
    at org.apache.hadoop.service.ServiceOperations.stop(ServiceOperations.java:52)
    at org.apache.hadoop.service.ServiceOperations.stopQuietly(ServiceOperations.java:80)
    at org.apache.hadoop.service.CompositeService.stop(CompositeService.java:157)
    at org.apache.hadoop.service.CompositeService.serviceStop(CompositeService.java:131)
    at org.apache.hadoop.mapreduce.v2.app.MRAppMaster.serviceStop(MRAppMaster.java:1626)
    at org.apache.hadoop.service.AbstractService.stop(AbstractService.java:221)
    at org.apache.hadoop.mapreduce.v2.app.MRAppMaster.stop(MRAppMaster.java:1126)
    at org.apache.hadoop.mapreduce.v2.app.MRAppMaster.shutDownJob(MRAppMaster.java:561)
    at org.apache.hadoop.mapreduce.v2.app.MRAppMaster$JobFinishEventHandler$1.run(MRAppMaster.java:609)
Caused by: java.io.IOException: Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try. (Nodes: current=[DatanodeInfoWithStorage[192.168.28.40:50010,DS-5f89be6b-ef87-48a2-8d9c-d8b88fc229e8,DISK], DatanodeInfoWithStorage[192.168.28.41:50010,DS-7149d22f-6d54-4d52-8168-0dbbfff5ced4,DISK]], original=[DatanodeInfoWithStorage[192.168.28.40:50010,DS-5f89be6b-ef87-48a2-8d9c-d8b88fc229e8,DISK], DatanodeInfoWithStorage[192.168.28.41:50010,DS-7149d22f-6d54-4d52-8168-0dbbfff5ced4,DISK]]). The current failed datanode replacement policy is DEFAULT, and a client may configure this via 'dfs.client.block.write.replace-datanode-on-failure.policy' in its configuration.
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.findNewDatanode(DFSOutputStream.java:951)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.addDatanode2ExistingPipeline(DFSOutputStream.java:1017)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1165)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.processDatanodeError(DFSOutputStream.java:909)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:412)

Cause: the write cannot proceed. My environment has 3 datanodes and the replication factor is set to 3, so every write puts all 3 machines in the pipeline. The default replace-datanode-on-failure policy is DEFAULT: when the cluster has 3 or more datanodes, the client tries to find another datanode to replace a failed pipeline node and copy to it. Since there are only 3 machines in total, as soon as one datanode has a problem there is no spare node available, and the write can never succeed.
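The DEFAULT policy's decision can be sketched as follows. This is a simplified model written from the property description in hdfs-default.xml, not the actual Hadoop source; `shouldReplace` and its parameter names are illustrative:

```java
// Sketch of the DEFAULT replace-datanode-on-failure decision (per hdfs-default.xml):
// add a new datanode only if r >= 3 and either (1) floor(r/2) >= n, or
// (2) r > n and the block is hflushed/appended.
public class ReplacePolicySketch {
    // r = replication factor, n = datanodes still alive in the pipeline
    static boolean shouldReplace(int r, int n, boolean appendedOrHflushed) {
        if (r < 3) {
            return false;                       // small replication: never replace
        }
        if (n <= r / 2) {
            return true;                        // half or more of the pipeline is gone
        }
        return appendedOrHflushed && r > n;     // appended blocks are handled strictly
    }

    public static void main(String[] args) {
        // Replication 3, one failed node (n = 2), block being appended:
        // replacement is required -- and with only 3 datanodes in the
        // cluster there is no spare node, hence the IOException above.
        System.out.println(shouldReplace(3, 2, true));   // true
        System.out.println(shouldReplace(2, 1, true));   // false: r < 3
    }
}
```

This is why the error only appears on append/hflush paths (such as the job-history writer above) and not on ordinary 2-replica writes.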

Solution: edit hdfs-site.xml and add or modify the following two properties:

<property>
  <name>dfs.client.block.write.replace-datanode-on-failure.enable</name>
  <value>true</value>
</property>
<property>
  <name>dfs.client.block.write.replace-datanode-on-failure.policy</name>
  <value>NEVER</value>
</property>

dfs.client.block.write.replace-datanode-on-failure.enable controls whether the client applies any replacement policy at all when a pipeline write fails; its default of true is fine and can be left alone.

As for dfs.client.block.write.replace-datanode-on-failure.policy: with the DEFAULT value, when there are 3 or more replicas the client tries to swap in a replacement datanode and write to it, whereas with 2 replicas it does not replace the datanode and simply continues writing. On a 3-datanode cluster a single unresponsive node is therefore enough to break the write, so it is reasonable to set the policy to NEVER.
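If you would rather keep replacement enabled (for example, you expect the cluster to grow), newer Hadoop releases also expose a best-effort switch (added around Hadoop 2.6 via HDFS-6867; verify it exists in your version). With it set, the client still tries to replace a failed datanode but continues with the remaining nodes instead of aborting when no replacement can be found:

<property>
  <name>dfs.client.block.write.replace-datanode-on-failure.best-effort</name>
  <value>true</value>
</property>

Note this trades durability for availability: a long-lived append pipeline may end up writing to fewer datanodes than the replication factor.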

Alternatively, set the same options in the client code:

// org.apache.hadoop.conf.Configuration
Configuration conf = new Configuration();
// Skip datanode replacement on pipeline failure; keep the policy mechanism enabled.
conf.set("dfs.client.block.write.replace-datanode-on-failure.policy", "NEVER");
conf.set("dfs.client.block.write.replace-datanode-on-failure.enable", "true");