Hot removal of Hadoop nodes (dynamic decommissioning)
2011-07-26 19:23
Today I needed to remove two datanodes from a Hadoop cluster. To avoid disrupting the jobs that were running, the nodes had to be removed dynamically (decommissioned). The steps are recorded below:
1. Before a node is moved out of the cluster, the data on it has to be replicated to the remaining nodes; decommissioning takes care of this. Add the following to core-site.xml on the master node:
<property>
  <name>dfs.hosts.exclude</name>
  <value>/etc/hadoop/conf/excludes</value>
</property>
Notes:
dfs.hosts.exclude: the property that names the exclude file, i.e. the list of nodes to be removed
/etc/hadoop/conf/excludes: the path and file name of that exclude file; here it is named excludes
2. In the directory configured in step 1, create the file with touch excludes and list the nodes to be removed, one per line:
192.168.5.91
192.168.5.113
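Steps 1–2 boil down to writing that file; a minimal sketch (the relative file name here is for illustration only — in production it must be the exact path set in dfs.hosts.exclude, /etc/hadoop/conf/excludes above):

```shell
# Write the exclude file: one hostname or IP per line.
# Must match the dfs.hosts.exclude value (/etc/hadoop/conf/excludes above);
# a relative path is used here purely for illustration.
EXCLUDES=excludes
cat > "$EXCLUDES" <<'EOF'
192.168.5.91
192.168.5.113
EOF
```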
3. Go to /usr/lib/hadoop/bin and run: hadoop dfsadmin -refreshNodes (mine was installed via yum; the Hadoop directory varies with the installation method). This command reloads the dfs.hosts and dfs.hosts.exclude settings on the fly, so the NameNode does not need to be restarted.
Once it completes, the decommissioned datanodes disappear from the cluster, but their tasktrackers keep running and have to be stopped by hand.
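Step 3 plus the manual tasktracker cleanup could look like the sketch below; the paths assume the same yum layout as above, and stopping the tasktracker over ssh with hadoop-daemon.sh is my assumption about how you would script it (it is not shown in the original steps):

```shell
# Tell the NameNode to re-read dfs.hosts / dfs.hosts.exclude; no restart needed.
/usr/lib/hadoop/bin/hadoop dfsadmin -refreshNodes

# The tasktracker on each decommissioned node keeps running; stop it by hand.
# hadoop-daemon.sh is assumed to sit next to the hadoop script in this layout.
for node in 192.168.5.91 192.168.5.113; do
  ssh "$node" '/usr/lib/hadoop/bin/hadoop-daemon.sh stop tasktracker'
done
```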
4. Then check with bin/hadoop dfsadmin -report; the result looks like this:
Configured Capacity: 5217900085248 (4.75 TB)
Present Capacity: 4007070527488 (3.64 TB)
DFS Remaining: 3598168113152 (3.27 TB)
DFS Used: 408902414336 (380.82 GB)
DFS Used%: 10.2%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0
-------------------------------------------------
Datanodes available: 3 (5 total, 2 dead)

Name: 192.168.5.201:50010
Decommission Status : Normal
Configured Capacity: 1739300028416 (1.58 TB)
DFS Used: 157545607168 (146.73 GB)
Non DFS Used: 441254821888 (410.95 GB)
DFS Remaining: 1140499599360(1.04 TB)
DFS Used%: 9.06%
DFS Remaining%: 65.57%
Last contact: Tue Jul 26 19:18:02 CST 2011
Name: 192.168.5.202:50010
Decommission Status : Normal
Configured Capacity: 1739300028416 (1.58 TB)
DFS Used: 161735507968 (150.63 GB)
Non DFS Used: 174588116992 (162.6 GB)
DFS Remaining: 1402976403456(1.28 TB)
DFS Used%: 9.3%
DFS Remaining%: 80.66%
Last contact: Tue Jul 26 19:17:59 CST 2011
Name: 192.168.5.71:50010
Decommission Status : Normal
Configured Capacity: 1739300028416 (1.58 TB)
DFS Used: 89621299200 (83.47 GB)
Non DFS Used: 594986618880 (554.12 GB)
DFS Remaining: 1054692110336(982.26 GB)
DFS Used%: 5.15%
DFS Remaining%: 60.64%
Last contact: Tue Jul 26 19:17:59 CST 2011
Name: 192.168.5.91:50010
Decommission Status : Decommissioned
Configured Capacity: 0 (0 KB)
DFS Used: 0 (0 KB)
Non DFS Used: 0 (0 KB)
DFS Remaining: 0(0 KB)
DFS Used%: 100%
DFS Remaining%: 0%
Last contact: Thu Jan 01 08:00:00 CST 1970
Name: 192.168.5.113
Decommission Status : Normal
Configured Capacity: 0 (0 KB)
DFS Used: 0 (0 KB)
Non DFS Used: 0 (0 KB)
DFS Remaining: 0(0 KB)
DFS Used%: 100%
DFS Remaining%: 0%
Last contact: Thu Jan 01 08:00:00 CST 1970
5. The command in step 4 shows the state of each removed node. For 192.168.5.91, for example,
Decommission Status : Decommissioned
means replicating the data from .91 to the other nodes has finished. If the status is Decommission Status : Decommission in progress, replication is still running.
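The check in step 5 is easy to script; check_decom below is a hypothetical helper of my own, and it simply filters the plain-text report shown above:

```shell
# check_decom HOST: read `hadoop dfsadmin -report` output on stdin and
# print the "Decommission Status" line that follows the matching "Name:" line.
check_decom() {
  grep -A1 "Name: $1" | grep 'Decommission Status'
}

# Usage against a live cluster: prints
# "Decommission Status : Decommissioned" once replication has finished.
#   hadoop dfsadmin -report | check_decom 192.168.5.91:50010
```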
That completes the node removal.
Lessons learned
Before pulling a node, stop any programs that write data into Hadoop first; otherwise data keeps landing on the node being removed and the decommission never finishes.
Replication between datanodes is impressively fast; it was done in the blink of an eye, haha.