
Hadoop Cluster Disk Usage Report Script

2015-01-16 13:39
The cluster has been tight on space lately, and I kept worrying it would fill up and fall over; expanding capacity in the short term is not realistic. After talking with the cluster's users, I found that a lot of useless historical data is stored on the cluster and can simply be deleted. So a crontab script can generate a daily usage report, send an alert when usage passes 70% and again at 80%, and name the users taking up the most space, while keeping headroom reserved for node failures. That way there is no need to constantly worry about the cluster running out of space.

[hdfs@hanagios48 root]$ more /home/hdfs/dfsadmin_report.sh

#!/bin/bash
# Daily HDFS capacity report: mails a Warning at 70% used and a Critical at 80%.

# cron starts jobs with a minimal environment, so pull in PATH etc. first
source ~/.bash_profile

today=$(date +%Y%m%d)
report=/tmp/report

echo "Hi, cluster users:

When Hadoop cluster usage reaches 70%, you will receive a Warning email asking you to free up space;
When Hadoop cluster usage reaches 80%, you will receive a Critical email asking you to free up space;

To keep cluster data safe, headroom must be reserved for node failures, so please clean up your data. If your data set is genuinely too large for the cluster, notify the ops team in time so capacity can be expanded. Thanks!" >$report

echo >>$report
hadoop dfsadmin -report | head -n 11 >>$report
echo ---------------------- >>$report
echo "dfs used details:" >>$report
hadoop fs -du / >>$report
echo >>$report
hadoop fs -du /user >>$report

dfs_used_percent=$(grep "DFS Used%" $report | awk -F: '{print $2}')
dfs_used=$(echo ${dfs_used_percent} | awk -F% '{print $1}')
# expr compares non-integer operands as strings, which breaks for
# decimal percentages, so truncate to the integer part and use -ge.
dfs_used_int=${dfs_used%.*}

user=laijingli2006@126.com
title="${today}[${dfs_used_percent}] WBY Hadoop Cluster Hdfs Usage Report: dfs_used ${dfs_used_percent}"
echo $title

if [ "$dfs_used_int" -ge 80 ]; then
    cat $report | /usr/bin/mutt -s "Critical: $title" $user
elif [ "$dfs_used_int" -ge 70 ]; then
    cat $report | /usr/bin/mutt -s "Warning: $title" $user
else
    # usage is normal: log to stdout, no alert mail
    echo "usage below 70%, no alert sent"
fi
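Why the integer truncation matters: `expr` falls back to string comparison when an operand is not an integer, so lexicographic ordering silently replaces numeric ordering for decimal percentages like `67.29`. A small sketch of the failure mode and the fix:

```shell
# expr compares non-integer operands as strings, so a decimal
# percentage is ordered character by character, not numerically:
expr 9.5 \>= 80    # prints 1 -- wrong: "9" sorts after "8"
expr 67.29 \>= 80  # prints 0 -- right, but only by accident

# Truncating to the integer part restores a correct numeric test:
p=9.5
[ "${p%.*}" -ge 80 ] && echo over || echo under   # prints "under"
```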

The crontab job works nicely:

[hdfs@hanagios48 root]$ crontab -l

05 8 * * * /home/hdfs/dfsadmin_report.sh
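One detail worth noting about running this from cron: cron starts jobs with a minimal environment (typically just a short default PATH), which is why the script sources `~/.bash_profile` before calling `hadoop`. A defensive sketch of that idea, with a fail-fast guard (the `require_cmd` helper and its message are my own illustration, not part of the original script):

```shell
# cron's PATH usually lacks site-specific CLIs; source the profile,
# then fail fast with a clear message if a command is still missing.
[ -f ~/.bash_profile ] && source ~/.bash_profile

require_cmd() {
    command -v "$1" >/dev/null 2>&1 || {
        echo "$1 not found on PATH; check ~/.bash_profile" >&2
        return 1
    }
}

# In the report script this would be: require_cmd hadoop
require_cmd sh && echo "sh found"
```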

[hdfs@hanagios48 root]$ more /tmp/report

Hi, cluster users:

When Hadoop cluster usage reaches 70%, you will receive a Warning email asking you to free up space;

When Hadoop cluster usage reaches 80%, you will receive a Critical email asking you to free up space;

To keep cluster data safe, headroom must be reserved for node failures, so please clean up your data. If your data set is genuinely too large for the cluster, notify the ops team in time so capacity can be expanded. Thanks!

Configured Capacity: 124854950621184 (113.55 TB)

Present Capacity: 118317151626783 (107.61 TB)

DFS Remaining: 38704545865728 (35.2 TB)

DFS Used: 79612605761055 (72.41 TB)

DFS Used%: 67.29%

Under replicated blocks: 0

Blocks with corrupt replicas: 0

Missing blocks: 0

-------------------------------------------------

Datanodes available: 15 (15 total, 0 dead)

----------------------

dfs used details:

Found 6 items

0 hdfs://hamaster140:9000/benchmarks

125752 hdfs://hamaster140:9000/data0

0 hdfs://hamaster140:9000/system

13721821810608 hdfs://hamaster140:9000/tech

1803375805154 hdfs://hamaster140:9000/tmp

6411197575455 hdfs://hamaster140:9000/user

Found 14 items

33222938 hdfs://hamaster140:9000/user/azk

4072247213805 hdfs://hamaster140:9000/user/cla

40705761240 hdfs://hamaster140:9000/user/din

0 hdfs://hamaster140:9000/user/fea

0 hdfs://hamaster140:9000/user/gao

36454169547 hdfs://hamaster140:9000/user/gmz

1877816487439 hdfs://hamaster140:9000/user/hdf

148965233376 hdfs://hamaster140:9000/user/imp

2416017438 hdfs://hamaster140:9000/user/in

0 hdfs://hamaster140:9000/user/lin

0 hdfs://hamaster140:9000/user/luo

149973222708 hdfs://hamaster140:9000/user/shi

82586246964 hdfs://hamaster140:9000/user/wuy

0 hdfs://hamaster140:9000/user/zho

[hdfs@hanagios48 root]$