Hadoop DataNode fails to start (All directories in dfs.data.dir are invalid)
2014-04-17 15:57
One of our Hadoop nodes ran out of disk space and died, so today I expanded its storage. First I copied the data from the original node's directory to the new location to avoid losing it, but after running hadoop-daemon.sh start datanode, the DataNode did not come up. The log shows:
2014-04-17 11:44:06,200 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Invalid directory in dfs.data.dir: Incorrect permission
for /hadoop_data/hadoop/hdfs/data/hadoop-hadoop, expected: rwxr-xr-x, while actual: rwxrwxr-x
2014-04-17 11:44:06,200 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: All directories in dfs.data.dir are invalid.
2014-04-17 11:44:06,201 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Exiting Datanode
2014-04-17 11:44:06,202 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:
Note the key warning: Incorrect permission for /hadoop_data/hadoop/hdfs/data/hadoop-hadoop, expected: rwxr-xr-x, while actual: rwxrwxr-x. In other words, the directory is group-writable (775) while the DataNode requires 755.
A quick search confirmed that fixing the permission as the message suggests should be enough. The cause: an extra group write bit on a data directory is enough to keep the DataNode from starting.
So I ran the following command:
chmod g-w /hadoop_data/hadoop/hdfs/data/hadoop-hadoop
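Since a chmod on an unusual mount can fail silently (as it did here), it is worth verifying the result immediately with stat. A minimal sketch, run against a scratch directory standing in for the real dfs.data.dir path:

```shell
# Reproduce the fix on a scratch directory standing in for the real
# path (substitute /hadoop_data/hadoop/hdfs/data/hadoop-hadoop).
DATA_DIR=$(mktemp -d)
chmod 775 "$DATA_DIR"        # the bad mode the NAS produced (rwxrwxr-x)

chmod g-w "$DATA_DIR"        # drop the group write bit
stat -c '%a' "$DATA_DIR"     # prints 755 on a normal local filesystem;
                             # if it still prints 775, the mount is
                             # overriding your chmod
```

Re-checking the mode right after the chmod would have shown immediately that the command was being ignored.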
But the command had no effect at all. Strange!
Puzzled, I figured something about the directory itself must be preventing the mode from changing. After contacting our ops team, I learned that /hadoop_data is actually a NAS mount point. Ah, the joys of NAS.
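When a chmod misbehaves like this, checking which filesystem actually backs the path is a quick diagnostic: an nfs or cifs type points at a NAS export rather than a local disk. A sketch (findmnt may not exist on older systems, hence the mount fallback; substitute the suspect path for DIR):

```shell
# Show the filesystem type backing a directory: a NAS export shows up
# as nfs/cifs instead of a local ext4/xfs.
DIR=/    # substitute the suspect path, e.g. /hadoop_data
df -T "$DIR"
# More precisely, show the mount entry that contains the path:
findmnt -T "$DIR" 2>/dev/null || mount
```

Had I run this first, the NAS would have been obvious without a call to ops.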
Finally, a recap of how this happened. The data in the DataNode's new storage location /hadoop_data was copied over from the old location /data_hadoop, and a copy like that normally preserves permissions, so Hadoop itself was never the problem. The culprit was the NAS: it silently changed the original drwxr-xr-x (755) mode to drwxrwxr-x (775), which is why the DataNode refused to start.
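If the mount's modes cannot be changed at all, the other direction is to change what the DataNode expects. In many Hadoop releases the expected mode is controlled by the dfs.datanode.data.dir.perm property in hdfs-site.xml; the default varies by version (755 here, 700 in newer releases), so treat this as a sketch and check your release's hdfs-default.xml:

```xml
<!-- hdfs-site.xml: accept a group-writable (775) data directory instead
     of the default 755. Verify dfs.datanode.data.dir.perm exists in your
     Hadoop release before relying on it. -->
<property>
  <name>dfs.datanode.data.dir.perm</name>
  <value>775</value>
</property>
```

Relaxing the check is a trade-off: a group-writable data directory is exactly what HDFS is warning about, so prefer fixing the NAS export's permission mapping when possible.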