
Handling an exception when starting a Hadoop DataNode

2015-06-17 19:25
Starting the DataNode failed. Checking the DataNode log (see the sketch below for how to view it) revealed the following:
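The log can be tailed with something like this (a sketch assuming the default Hadoop 1.2.1 log location under the install directory's logs folder; the exact file name depends on the user that started the daemon and on the hostname):

# View the most recent DataNode log entries (path and file name are assumptions)
tail -n 100 /hadoop/hadoop-1.2.1/logs/hadoop-*-datanode-*.log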

2015-06-17 03:41:05,710 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties

2015-06-17 03:41:05,734 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source MetricsSystem,sub=Stats registered.

2015-06-17 03:41:05,735 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).

2015-06-17 03:41:05,735 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: DataNode metrics system started

2015-06-17 03:41:05,975 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source ugi registered.

2015-06-17 03:41:07,109 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: java.io.FileNotFoundException: /hadoop/hadoop-1.2.1/tmp/dfs/data/in_use.lock (Permission denied)

at java.io.RandomAccessFile.open(Native Method)

at java.io.RandomAccessFile.&lt;init&gt;(RandomAccessFile.java:241)

at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.tryLock(Storage.java:617)

at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.lock(Storage.java:594)

at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:452)

at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:111)

at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:414)

at org.apache.hadoop.hdfs.server.datanode.DataNode.&lt;init&gt;(DataNode.java:321)

at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1712)

at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1651)

at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1669)

at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:1795)

at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:1812)

2015-06-17 03:41:07,111 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:

/************************************************************

SHUTDOWN_MSG: Shutting down DataNode at hadoop-server-01/192.168.2.101

************************************************************/

2015-06-17 03:42:46,588 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: STARTUP_MSG:

/************************************************************

STARTUP_MSG: Starting DataNode

STARTUP_MSG: host = hadoop-server-01/192.168.2.101

STARTUP_MSG: args = []

STARTUP_MSG: version = 1.2.1

Analyzing the cause showed a permission problem with the files under the tmp directory:

drwxr-xr-x. 2 root root 4096 Jun 15 02:10 blocksBeingWritten

drwxr-xr-x. 2 root root 4096 Jun 15 02:10 current

drwxr-xr-x. 2 root root 4096 Jun 15 02:10 detach

-rw-r--r--. 1 root root 0 Jun 15 02:10 in_use.lock

-rw-r--r--. 1 root root 157 Jun 15 02:10 storage

drwxr-xr-x. 2 root root 4096 Jun 15 02:10 tmp
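For reference, a listing like the one above can be obtained with something like the following (the data directory path is taken from the in_use.lock path in the log):

# Show owner and permissions of the DataNode storage directory
ls -l /hadoop/hadoop-1.2.1/tmp/dfs/data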

Hadoop had previously been run under the root account, which created data files owned by root, so when Hadoop was run again under the hadoop account it lacked the necessary permissions.

At this point you can switch to the root account and change the ownership of the data files to the hadoop user, or, if there is no important data (e.g. in a learning environment), simply delete them.

Use the chown command to change the ownership, as sketched below.
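A minimal sketch of the fix, assuming the data directory path shown in the log and that the DataNode is run by a user and group both named hadoop (both are assumptions based on this setup):

# Run as root (or via sudo): hand the Hadoop tmp tree back to the hadoop user
chown -R hadoop:hadoop /hadoop/hadoop-1.2.1/tmp

# Or, in a throwaway learning environment with no important data,
# remove the root-owned data directory and let the DataNode re-create it:
# rm -rf /hadoop/hadoop-1.2.1/tmp/dfs/data

Afterwards, restart the DataNode under the hadoop account, for example with bin/hadoop-daemon.sh start datanode or start-dfs.sh.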