Setting Up a Fully Distributed HBase Cluster and Troubleshooting Common Errors (analyzing why HMaster shuts itself down after startup)
2015-03-09 23:01
(1) Hadoop version: 1.0.1; HBase version: 0.94.12.
(2) Master: 172.16.2.27 Masterpc.hadoop
Slaves: 172.16.2.39 Slave1pc.hadoop
172.16.2.49 Slave2pc.hadoop
172.16.2.51 Slave3pc.hadoop
(3) Upload Hbase-0.94.12.tar.gz to /usr/, then extract it and rename the resulting directory to hbase:
tar -xzf Hbase-0.94.12.tar.gz
mv hbase-0.94.12 hbase
(4) Next, configure HBase. Open vim /usr/hbase/conf/hbase-env.sh and add the following (HBASE_MANAGES_ZK=true tells HBase to start and stop its own ZooKeeper quorum; adjust JAVA_HOME to your JDK path):
export HBASE_MANAGES_ZK=true
export JAVA_HOME=/usr/java/jdk1.8.0_25
export HBASE_CLASSPATH=/usr/hadoop/conf/
(5) Then open vim /usr/hbase/conf/hbase-site.xml and add the following:
<configuration>
<property>
<name>hbase.rootdir</name>
<value>hdfs://172.16.2.27:9000/hbase</value>
</property>
<property>
<name>hbase.cluster.distributed</name>
<value>true</value>
</property>
<property>
<name>hbase.master</name>
<value>172.16.2.27:60000</value>
</property>
<property>
<name>hbase.zookeeper.property.clientPort</name>
<value>2181</value>
</property>
<property>
<name>hbase.zookeeper.property.dataDir</name>
<value>/home/hadoop/tmp/zookeeper</value>
</property>
<property>
<name>hbase.zookeeper.quorum</name>
<value>Slave1pc.Hadoop,Slave2pc.Hadoop,Slave3pc.Hadoop</value>
</property>
</configuration>
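A common source of the HMaster-exits problem is a mismatch between hbase.rootdir and fs.default.name in Hadoop's core-site.xml: scheme, host, and port must be identical. As a hedged sketch, the value can be pulled out with standard tools and compared by eye; a temporary copy of the config fragment above is used here so the snippet runs standalone (on the cluster, point it at the real hbase-site.xml and core-site.xml).

```shell
# Extract the hbase.rootdir value so it can be compared against
# fs.default.name in /usr/hadoop/conf/core-site.xml. A temporary copy of
# the relevant fragment is used so this check is reproducible anywhere.
cat > /tmp/hbase-site-fragment.xml <<'EOF'
<property>
<name>hbase.rootdir</name>
<value>hdfs://172.16.2.27:9000/hbase</value>
</property>
EOF
# Grab the line after the matching <name> element and strip the value tags.
rootdir=$(grep -A1 '<name>hbase.rootdir</name>' /tmp/hbase-site-fragment.xml \
          | sed -n 's:.*<value>\(.*\)</value>.*:\1:p')
echo "$rootdir"   # hdfs://172.16.2.27:9000/hbase
```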
(6) Next, open vim /usr/hbase/conf/regionservers and list the slaves, one hostname per line:
Slave1pc.Hadoop
Slave2pc.Hadoop
Slave3pc.Hadoop
(7) Then open vim /etc/profile (the slaves need this too) and add:
#set Hbase path
export HBASE_HOME=/usr/hbase
export PATH=$PATH:$HBASE_HOME/bin
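After editing /etc/profile, reload it so the change takes effect in the current session. A quick way to confirm the launcher directory made it onto PATH (install location as above; the directory need not exist yet for this check):

```shell
# Apply the same exports inline and confirm /usr/hbase/bin is now a
# PATH component.
export HBASE_HOME=/usr/hbase
export PATH=$PATH:$HBASE_HOME/bin
echo "$PATH" | tr ':' '\n' | grep -x "$HBASE_HOME/bin"   # prints /usr/hbase/bin
```

On the real cluster, `source /etc/profile` (or logging back in) achieves the same thing.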
(8) Copy the configured hbase directory to each slave:
scp -r /usr/hbase hadoop@172.16.2.39:/usr/
scp -r /usr/hbase hadoop@172.16.2.49:/usr/
scp -r /usr/hbase hadoop@172.16.2.51:/usr/
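With more slaves, the three scp invocations above generalize to a loop. The sketch below is a dry run that only prints the commands (remove the echo to execute them); the IPs and the hadoop SSH account are this cluster's.

```shell
# Dry run: print one scp command per slave. Remove "echo" to actually copy.
slaves="172.16.2.39 172.16.2.49 172.16.2.51"
for ip in $slaves; do
  echo scp -r /usr/hbase "hadoop@${ip}:/usr/"
done
```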
(9) Start HBase:
[hadoop@Masterpc ~]$ start-hbase.sh
Slave1pc.Hadoop: starting zookeeper, logging to /usr/hbase/bin/../logs/hbase-hadoop-zookeeper-Slave1pc.Hadoop.out
Slave2pc.Hadoop: starting zookeeper, logging to /usr/hbase/bin/../logs/hbase-hadoop-zookeeper-Slave2pc.Hadoop.out
Slave3pc.Hadoop: starting zookeeper, logging to /usr/hbase/bin/../logs/hbase-hadoop-zookeeper-Slave3pc.Hadoop.out
starting master, logging to /usr/hbase/logs/hbase-hadoop-master-Masterpc.Hadoop.out
Slave1pc.Hadoop: starting regionserver, logging to /usr/hbase/bin/../logs/hbase-hadoop-regionserver-Slave1pc.Hadoop.out
Slave2pc.Hadoop: starting regionserver, logging to /usr/hbase/bin/../logs/hbase-hadoop-regionserver-Slave2pc.Hadoop.out
Slave3pc.Hadoop: starting regionserver, logging to /usr/hbase/bin/../logs/hbase-hadoop-regionserver-Slave3pc.Hadoop.out
Then check the Java processes:
[hadoop@Masterpc ~]$ jps
18147 JobTracker
17914 NameNode
22060 Jps
21919 HMaster
18063 SecondaryNameNode
About 10 seconds later, however, HMaster had shut itself down:
[hadoop@Masterpc ~]$ jps
18147 JobTracker
17914 NameNode
22079 Jps
18063 SecondaryNameNode
The master log under /usr/hbase/logs/ pointed to the problem: the slave hostnames could not be resolved.
(10) To fix it, open vim /etc/hosts and add the IP address for each slave hostname:
172.16.2.39 Slave1pc.Hadoop
172.16.2.49 Slave2pc.Hadoop
172.16.2.51 Slave3pc.Hadoop
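Whether the mappings work can be simulated before restarting. The check below runs against a temporary copy of the entries so it is self-contained; on the cluster, point the lookup at /etc/hosts itself.

```shell
# Simulated lookup: resolve each slave hostname against the entries above,
# written to a temp file here so the check runs anywhere.
cat > /tmp/hosts.check <<'EOF'
172.16.2.39 Slave1pc.Hadoop
172.16.2.49 Slave2pc.Hadoop
172.16.2.51 Slave3pc.Hadoop
EOF
for h in Slave1pc.Hadoop Slave2pc.Hadoop Slave3pc.Hadoop; do
  ip=$(awk -v host="$h" '$2 == host {print $1}' /tmp/hosts.check)
  echo "$h -> ${ip:-UNRESOLVED}"
done
```

Once /etc/hosts is updated for real, `getent hosts Slave1pc.Hadoop` performs the actual system lookup.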
In addition, change hbase.zookeeper.quorum in hbase-site.xml to use the IP addresses rather than the hostnames:
<property>
<name>hbase.zookeeper.quorum</name>
<value>172.16.2.39,172.16.2.49,172.16.2.51</value>
</property>
After restarting with start-hbase.sh, everything came up and stayed up. On the master:
[hadoop@Masterpc logs]$ jps
18147 JobTracker
27722 HMaster
17914 NameNode
27660 HQuorumPeer
28221 Jps
18063 SecondaryNameNode
On a slave:
[hadoop@Slave3pc ~]$ jps
15557 HQuorumPeer
9702 DataNode
15351 HRegionServer
9785 TaskTracker
32175 Jps
(11) Start the HBase shell:
[hadoop@Masterpc ~]$ hbase shell
HBase Shell; enter 'help<RETURN>' for list of supported commands.
Type "exit<RETURN>" to leave the HBase Shell
Version 0.94.12, r1524863, Fri Sep 20 04:44:41 UTC 2013
hbase(main):001:0>
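From the prompt above, a short smoke test confirms the cluster accepts writes. The table and column-family names below are made up for illustration; the snippet just prints the shell commands so its output can be piped into `hbase shell` on the master.

```shell
# Print a sequence of HBase shell commands; pipe the output into
# `hbase shell` to run them. Table and CF names are illustrative only.
cmds=$(cat <<'EOF'
status 'simple'
create 'smoke_test', 'cf'
put 'smoke_test', 'row1', 'cf:greeting', 'hello'
get 'smoke_test', 'row1'
disable 'smoke_test'
drop 'smoke_test'
EOF
)
printf '%s\n' "$cmds"
```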