Hadoop NameNode fails to start
2014-03-08 02:44
Reposted from http://blog.csdn.net/bychjzh/article/details/7830508
I had just installed Pig and wanted to try running it in Hadoop mode, but Pig failed with the following error:
[root@master pig-0.9.1]# pig
2011-12-03 07:27:30,158 [main] INFO org.apache.pig.Main - Logging error messages to: /home/bell/software/hadoop-0.20.2/pig-0.9.1/pig_1322868450154.log
2011-12-03 07:27:30,403 [main] INFO org.apache.pig.backend.hadoop.executionengine.HExecutionEngine - Connecting to hadoop
file system at: hdfs://localhost/
2011-12-03 07:27:31,544 [main] INFO org.apache.hadoop.ipc.Client - Retrying connect to server: localhost/127.0.0.1:8020.
Already tried 0 time(s).
2011-12-03 07:27:32,545 [main] INFO org.apache.hadoop.ipc.Client - Retrying connect to server: localhost/127.0.0.1:8020.
Already tried 1 time(s).
2011-12-03 07:27:33,546 [main] INFO org.apache.hadoop.ipc.Client - Retrying connect to server: localhost/127.0.0.1:8020.
Already tried 2 time(s).
2011-12-03 07:27:34,547 [main] INFO org.apache.hadoop.ipc.Client - Retrying connect to server: localhost/127.0.0.1:8020.
Already tried 3 time(s).
2011-12-03 07:27:35,548 [main] INFO org.apache.hadoop.ipc.Client - Retrying connect to server: localhost/127.0.0.1:8020.
Already tried 4 time(s).
2011-12-03 07:27:36,549 [main] INFO org.apache.hadoop.ipc.Client - Retrying connect to server: localhost/127.0.0.1:8020.
Already tried 5 time(s).
2011-12-03 07:27:37,550 [main] INFO org.apache.hadoop.ipc.Client - Retrying connect to server: localhost/127.0.0.1:8020.
Already tried 6 time(s).
2011-12-03 07:27:38,550 [main] INFO org.apache.hadoop.ipc.Client - Retrying connect to server: localhost/127.0.0.1:8020.
Already tried 7 time(s).
2011-12-03 07:27:39,551 [main] INFO org.apache.hadoop.ipc.Client - Retrying connect to server: localhost/127.0.0.1:8020.
Already tried 8 time(s).
2011-12-03 07:27:40,552 [main] INFO org.apache.hadoop.ipc.Client - Retrying connect to server: localhost/127.0.0.1:8020.
Already tried 9 time(s).
2011-12-03 07:27:40,582 [main] ERROR org.apache.pig.Main - ERROR 2999: Unexpected internal error. Failed to create DataStorage
Details at logfile: /home/bell/software/hadoop-0.20.2/pig-0.9.1/pig_1322868450154.log
After seeing the error I checked the log and found a "connection refused" message, which pointed to a NameNode problem. Running jps showed no NameNode process. Searching online turned up the following solution:
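To confirm that the NameNode is actually down, you can list the running Hadoop daemons with jps and then inspect the NameNode log. This is a sketch: the $HADOOP_HOME variable and the log file naming pattern below assume a standard Hadoop 0.20-era installation.

```shell
# List the JVM processes for the current user. On a healthy
# pseudo-distributed node you would expect to see NameNode,
# DataNode, SecondaryNameNode, JobTracker and TaskTracker.
jps

# If NameNode is missing, check its log for the root cause
# (log name pattern: hadoop-<user>-namenode-<hostname>.log)
tail -n 50 $HADOOP_HOME/logs/hadoop-*-namenode-*.log
```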
I recently ran into a problem: after running start-all.sh, jps showed that the NameNode had not started, and I had to reformat the NameNode after every reboot before it would come up.
The root cause is the tmp directory: by default Hadoop keeps its working data (including the NameNode's metadata) under /tmp, which is cleared on every reboot, so the NameNode's formatting information is lost along with it.
The fix is to configure a persistent tmp directory instead.
First, create a hadoop_tmp directory under your home directory:
sudo mkdir ~/hadoop_tmp
(Note: a directory created with sudo is owned by root; make sure the user that runs Hadoop owns it, e.g. chown chjzh:chjzh ~/hadoop_tmp, or the NameNode will be unable to write to it.)
Then edit the core-site.xml file in the hadoop/conf directory and add the following property:
<property>
<name>hadoop.tmp.dir</name>
<value>/home/chjzh/hadoop_tmp</value>
<description>A base for other temporary directories.</description>
</property>
Note: my user is chjzh, so the directory is /home/chjzh/hadoop_tmp; adjust the path for your own user.
With that in place, reformat the NameNode (warning: formatting erases any existing HDFS metadata, so only do this on a fresh or recoverable cluster):
hadoop namenode -format
Then start Hadoop:
start-all.sh
Run jps again and the NameNode process should now appear.
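Putting the steps above together, the whole recovery sequence looks roughly like this. The /home/chjzh path and the chjzh user come from the original post; substitute your own username and adjust paths to your installation.

```shell
# Create a persistent tmp directory and make sure the Hadoop user owns it
sudo mkdir -p /home/chjzh/hadoop_tmp
sudo chown chjzh:chjzh /home/chjzh/hadoop_tmp

# After pointing hadoop.tmp.dir at it in conf/core-site.xml,
# reformat the NameNode (this erases existing HDFS metadata!)
hadoop namenode -format

# Start all daemons and verify that NameNode is now running
start-all.sh
jps
```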