
Troubleshooting: no NameNode process after starting Hadoop

2017-07-11 17:12
I am running hadoop-1.0.4 in pseudo-distributed mode. I never created a dedicated hadoop user; I run everything directly as root. Running #hadoop namenode -format reported nothing unusual, so I carried on and ran #start-all.sh. Once startup finished, #jps showed no NameNode process, so I went straight to the logs.
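For context, this is the full sequence I ran (a minimal recap, assuming the Hadoop 1.0.4 bin directory is on the PATH and everything is executed as root):

    hadoop namenode -format   # completed without reporting any error
    start-all.sh              # starts the HDFS and MapReduce daemons
    jps                       # lists running Java processes; NameNode was not among them

The NameNode log reads as follows: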

2017-07-11 14:18:47,407 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = localhost.localdomain/127.0.0.1
STARTUP_MSG:   args = []
STARTUP_MSG:   version = 1.0.4
STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.0 -r 1393290; compiled by 'hortonfo' on Wed Oct  3 05:13:58 UTC 2012
************************************************************/
2017-07-11 14:18:47,670 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2017-07-11 14:18:47,682 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source MetricsSystem,sub=Stats registered.
2017-07-11 14:18:47,683 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2017-07-11 14:18:47,683 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system started
2017-07-11 14:18:48,070 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source ugi registered.
2017-07-11 14:18:48,074 WARN org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Source name ugi already exists!
2017-07-11 14:18:48,109 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source jvm registered.
2017-07-11 14:18:48,110 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source NameNode registered.
2017-07-11 14:18:48,152 INFO org.apache.hadoop.hdfs.util.GSet: VM type       = 64-bit
2017-07-11 14:18:48,152 INFO org.apache.hadoop.hdfs.util.GSet: 2% max memory = 19.33375 MB
2017-07-11 14:18:48,152 INFO org.apache.hadoop.hdfs.util.GSet: capacity      = 2^21 = 2097152 entries
2017-07-11 14:18:48,152 INFO org.apache.hadoop.hdfs.util.GSet: recommended=2097152, actual=2097152
2017-07-11 14:18:48,274 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner=root
2017-07-11 14:18:48,274 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup=supergroup
2017-07-11 14:18:48,274 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isPermissionEnabled=true
2017-07-11 14:18:48,279 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.block.invalidate.limit=100
2017-07-11 14:18:48,279 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
2017-07-11 14:18:48,799 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered FSNamesystemStateMBean and NameNodeMXBean
2017-07-11 14:18:48,823 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names occuring more than 10 times
2017-07-11 14:18:48,839 ERROR org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem initialization failed.
java.io.IOException: NameNode is not formatted.
    at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:330)
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:100)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:388)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:362)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:276)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:496)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1279)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1288)
2017-07-11 14:18:48,840 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: java.io.IOException: NameNode is not formatted.
    at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:330)
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:100)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:388)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:362)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:276)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:496)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1279)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1288)

So the error is java.io.IOException: NameNode is not formatted. But I had definitely run the format command. I searched online and tried all sorts of suggested fixes (for example killing the Hadoop processes, deleting everything under the namenode and tmp directories, and reformatting), and none of them worked. Then one blog post brought up a permissions problem. I wondered whether, because I never created a dedicated hadoop user, the process had no permission to read or write the NameNode path even though it is configured in hdfs-site.xml, so formatting failed. But can the root user really lack permission to run Hadoop's format command? Does running as root need some extra configuration? That post also made one more point:
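To rule the permissions theory in or out, it helps to check what the storage directories are set to and who owns them. A quick sketch (the conf path and directory names below are only examples, not the actual values from this machine):

    # show the dfs.name.dir / dfs.data.dir settings (conf path is an example)
    grep -E -A 2 'dfs\.(name|data)\.dir' /usr/local/hadoop-1.0.4/conf/hdfs-site.xml
    # check ownership and write permission on the configured directories
    ls -ld /usr/local/hadoop/dfs/name /usr/local/hadoop/dfs/data

If the directories exist and are owned by the user that starts Hadoop (root here), plain filesystem permissions are probably not the cause.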

The dfs.name.dir and dfs.data.dir paths do not need to be created by hand; Hadoop creates them itself during initialization. I had originally created them manually. After deleting them and letting Hadoop create them on its own, everything worked: no more errors, and the NameNode finally appeared in the process list.
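Put together, the recovery that worked for me looks roughly like this (a sketch only: the directory paths are examples and must match your own configuration, and reformatting wipes any existing HDFS data):

    stop-all.sh               # stop all daemons before touching the storage directories
    # example paths below: use whatever dfs.name.dir, dfs.data.dir and hadoop.tmp.dir
    # point to in your own configuration
    rm -rf /usr/local/hadoop/dfs/name /usr/local/hadoop/dfs/data /usr/local/hadoop/tmp
    hadoop namenode -format   # recreates dfs.name.dir on its own
    start-all.sh              # start the HDFS and MapReduce daemons again
    jps                       # NameNode should now show up in the process list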

The blog post in question is at http://buguoruci.blog.51cto.com/4104173/1278610
Tags: hadoop, namenode