Error analysis: Hadoop starts but the NameNode process is missing
2017-07-11 17:12
I'm running hadoop-1.0.4 in pseudo-distributed mode. I did not create a dedicated hadoop user; everything runs directly as root. Running `# hadoop namenode -format` completed without any visible errors, so I continued.
After running `# start-all.sh`, I ran `# jps` and found no NameNode process, so I went straight to the log, which showed the following:
2017-07-11 14:18:47,407 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = localhost.localdomain/127.0.0.1
STARTUP_MSG: args = []
STARTUP_MSG: version = 1.0.4
STARTUP_MSG: build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.0 -r 1393290; compiled by 'hortonfo' on Wed Oct 3 05:13:58 UTC 2012
************************************************************/
2017-07-11 14:18:47,670 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2017-07-11 14:18:47,682 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source MetricsSystem,sub=Stats registered.
2017-07-11 14:18:47,683 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2017-07-11 14:18:47,683 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system started
2017-07-11 14:18:48,070 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source ugi registered.
2017-07-11 14:18:48,074 WARN org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Source name ugi already exists!
2017-07-11 14:18:48,109 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source jvm registered.
2017-07-11 14:18:48,110 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source NameNode registered.
2017-07-11 14:18:48,152 INFO org.apache.hadoop.hdfs.util.GSet: VM type = 64-bit
2017-07-11 14:18:48,152 INFO org.apache.hadoop.hdfs.util.GSet: 2% max memory = 19.33375 MB
2017-07-11 14:18:48,152 INFO org.apache.hadoop.hdfs.util.GSet: capacity = 2^21 = 2097152 entries
2017-07-11 14:18:48,152 INFO org.apache.hadoop.hdfs.util.GSet: recommended=2097152, actual=2097152
2017-07-11 14:18:48,274 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner=root
2017-07-11 14:18:48,274 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup=supergroup
2017-07-11 14:18:48,274 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isPermissionEnabled=true
2017-07-11 14:18:48,279 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.block.invalidate.limit=100
2017-07-11 14:18:48,279 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
2017-07-11 14:18:48,799 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered FSNamesystemStateMBean and NameNodeMXBean
2017-07-11 14:18:48,823 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names occuring more than 10 times
2017-07-11 14:18:48,839 ERROR org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem initialization failed.
java.io.IOException: NameNode is not formatted.
at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:330)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:100)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:388)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:362)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:276)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:496)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1279)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1288)
2017-07-11 14:18:48,840 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: java.io.IOException: NameNode is not formatted.
at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:330)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:100)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:388)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:362)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:276)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:496)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1279)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1288)
So the error is `java.io.IOException: NameNode is not formatted.` But I definitely ran the format command! I tried every fix I could find online (killing the Hadoop processes, deleting everything under the namenode and tmp directories, then re-formatting; none of it worked). Then I came across a blog post that pointed at permissions. I wondered whether, since I never created a dedicated hadoop user, the NameNode had no permission to read or write the path configured in hdfs-site.xml, so the format silently failed. But can root really lack permission to run Hadoop's format command? Is there some extra configuration needed when running as root?
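For reference, the relevant hdfs-site.xml entries look roughly like this (the paths shown are assumed examples, not my actual configuration):

```xml
<!-- hdfs-site.xml (hadoop-1.0.4) -- example paths, substitute your own -->
<configuration>
  <property>
    <name>dfs.name.dir</name>
    <!-- do NOT create this directory by hand; "hadoop namenode -format" creates it -->
    <value>/usr/local/hadoop/dfs/name</value>
  </property>
  <property>
    <name>dfs.data.dir</name>
    <value>/usr/local/hadoop/dfs/data</value>
  </property>
</configuration>
```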
That blog also made one more point: the dfs.name.dir and dfs.data.dir directories must not be created by hand, because Hadoop creates them itself during initialization. I had created them manually at first; after deleting them and letting Hadoop create them on its own, everything worked. No more errors, and the NameNode finally showed up in the process list.
The blog post is at http://buguoruci.blog.51cto.com/4104173/1278610
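Putting the fix together, the recovery sequence looks roughly like this (a sketch only; the storage paths are assumptions, so substitute whatever your hdfs-site.xml actually points to):

```shell
stop-all.sh                          # 1. stop all Hadoop daemons first

# 2. delete the manually created storage dirs and the default tmp dir,
#    so the format step can recreate them itself with the right layout
rm -rf /usr/local/hadoop/dfs/name /usr/local/hadoop/dfs/data
rm -rf /tmp/hadoop-root              # default hadoop.tmp.dir when running as root

hadoop namenode -format              # 3. re-format; answer Y if prompted

start-all.sh                         # 4. restart everything
jps                                  # 5. NameNode should now appear in the list
```

The key point is step 2: the directories are removed entirely rather than just emptied, so that `hadoop namenode -format` creates them itself.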