
Detailed Configuration Steps for ZooKeeper 3.4.8 + HBase 1.2.6

2017-10-28 22:44

1. ZooKeeper Installation and Configuration

Three servers:

192.168.15.5 master

192.168.15.6 slaver1

192.168.15.7 slaver2

Add the following entries to /etc/hosts on every server:

192.168.15.5 master

192.168.15.6 slaver1

192.168.15.7 slaver2
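
A quick way to confirm that each hostname resolves correctly on every machine (a simple sanity check, assuming ping is available) is:

ping -c 1 master
ping -c 1 slaver1
ping -c 1 slaver2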

 

Download the ZooKeeper 3.4.8 release tarball (zookeeper-3.4.8.tar.gz).

On any one of the servers, e.g. 192.168.15.5:

Extract the ZooKeeper archive:

tar -zxvf zookeeper-3.4.8.tar.gz
 

Configure the environment variables on both the master and slave nodes (append to /etc/profile):

#zookeeper
export ZOOKEEPER=/usr/tools/zookeeper-3.4.8
export PATH=$PATH:$ZOOKEEPER/bin


Apply the change:

source /etc/profile
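
To confirm the new variables are in effect, you can run an optional check:

echo $ZOOKEEPER
which zkServer.sh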
 

In ZooKeeper's conf directory, create a new zoo.cfg from the sample file:

cp zoo_sample.cfg zoo.cfg
 

Modify:

dataDir=/usr/tools/zookeeper-3.4.8/data

Add:

server.1=master:2888:3888
server.2=slaver1:2888:3888
server.3=slaver2:2888:3888
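
Putting the pieces together, the finished zoo.cfg should look roughly like the following (the tickTime, initLimit, syncLimit, and clientPort values are the defaults carried over from zoo_sample.cfg):

tickTime=2000
initLimit=10
syncLimit=5
dataDir=/usr/tools/zookeeper-3.4.8/data
clientPort=2181
server.1=master:2888:3888
server.2=slaver1:2888:3888
server.3=slaver2:2888:3888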


 

After configuration, copy the entire directory to the same location on the other two servers using scp:

scp -r /usr/tools/zookeeper-3.4.8 root@slaver1:/usr/tools/
scp -r /usr/tools/zookeeper-3.4.8 root@slaver2:/usr/tools/


 

On each of the three machines, create a file named myid inside the data directory (the dataDir configured above) and write the matching server number into it:

master is server.1, so its myid contains 1;

slaver1 is server.2, so its myid contains 2;

slaver2 is server.3, so its myid contains 3.
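
For example, on master (the path follows the dataDir set in zoo.cfg above):

mkdir -p /usr/tools/zookeeper-3.4.8/data
echo 1 > /usr/tools/zookeeper-3.4.8/data/myid

On slaver1 use echo 2, and on slaver2 use echo 3, with the same path.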

Start ZooKeeper on each of the three machines:

zkServer.sh start
Then check the status on each machine:

zkServer.sh status
If everything is working, the output looks like the following (exactly one node will report Mode: leader; the other two report Mode: follower):

ZooKeeper JMX enabled by default
Using config: /usr/tools/zookeeper-3.4.8/bin/../conf/zoo.cfg
Mode: follower


Check with jps:

jps
The output should include:

QuorumPeerMain
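
As an optional extra check, the bundled CLI client can connect to the ensemble (this is just a sanity test and is not required by the rest of the setup):

zkCli.sh -server master:2181

A successful connection shows CONNECTED in the prompt, and commands such as ls / should return [zookeeper] on a fresh ensemble.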

2. HBase Installation and Configuration

Download the HBase 1.2.6 binary tarball (hbase-1.2.6-bin.tar.gz), then extract it on one of the servers, e.g. on 192.168.15.5:

tar -zxvf hbase-1.2.6-bin.tar.gz
 

Add the environment variables on both the master and slave nodes (append to /etc/profile):

#hbase
export HBASE_HOME=/usr/tools/hbase-1.2.6
export PATH=$PATH:$HBASE_HOME/bin


 

Apply the environment variables:

source /etc/profile
 

Go into HBase's conf directory; three files need to be modified: hbase-env.sh, hbase-site.xml, and regionservers.

 

① In hbase-env.sh (the uncommented export lines below are the ones to add):

# The java implementation to use.  Java 1.7+ required.
# export JAVA_HOME=/usr/java/jdk1.6.0/
export JAVA_HOME=/usr/tools/jdk1.8.0_73
# Extra Java CLASSPATH elements.  Optional.
# export HBASE_CLASSPATH=
Then, further down in the file, add:
# Seconds to sleep between slave commands.  Unset by default.  This
# can be useful in large clusters, where, e.g., slave rsyncs can
# otherwise arrive faster than the master can service them.
# export HBASE_SLAVE_SLEEP=0.1
# Tell HBase whether it should manage it's own instance of Zookeeper or not.
export HBASE_MANAGES_ZK=false

② In hbase-site.xml:
<configuration>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>master,slaver1,slaver2</value>
    <description>Comma separated list of servers in the ZooKeeper quorum.</description>
  </property>
  <property>
    <name>hbase.zookeeper.property.dataDir</name>
    <value>/usr/tools/hbase-1.2.6/zookeeperdata</value>
    <description>Property from ZooKeeper config zoo.cfg.
    The directory where the snapshot is stored.
    </description>
  </property>
  <property>
    <name>hbase.tmp.dir</name>
    <value>/usr/tools/hbase-1.2.6/tmpdata</value>
  </property>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://master:9000/hbase</value>
    <description>The directory shared by RegionServers.</description>
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
    <description>The mode the cluster will be in. Possible values are
    false: standalone and pseudo-distributed setups with managed Zookeeper
    true: fully-distributed with unmanaged Zookeeper Quorum (see hbase-env.sh)
    </description>
  </property>
</configuration>
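
One thing to watch: the hbase.rootdir value must point at the same HDFS NameNode address that Hadoop itself uses, i.e. the fs.defaultFS setting in core-site.xml; if the two differ, HBase cannot reach its root directory. Assuming the NameNode listens on master:9000 as above, core-site.xml would contain an entry like:

  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://master:9000</value>
  </property>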


 

③ In the regionservers file, add the IP or hostname of every node that should run a RegionServer (here the master also hosts a RegionServer):

 

master
slaver1
slaver2


 

After saving, copy the whole HBase directory to the other servers:

 

scp -r /usr/tools/hbase-1.2.6 root@slaver1:/usr/tools/
scp -r /usr/tools/hbase-1.2.6 root@slaver2:/usr/tools/


Start the HBase service on the Hadoop NameNode (master):

start-hbase.sh
 

After startup, run jps. On the master node you should see:

HMaster
HRegionServer

On the slave nodes:

HRegionServer
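
As a final optional check from the master node, open the HBase shell and run a few basic commands (the table name 'test' here is only an example):

hbase shell
status
create 'test', 'cf'
put 'test', 'row1', 'cf:a', 'value1'
scan 'test'

status should list the active master and the three region servers, and the scan should return the row just written.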

 

Startup order

Hadoop HDFS → Hadoop YARN → ZooKeeper → HBase
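
Expressed as commands (assuming a standard Hadoop installation where start-dfs.sh and start-yarn.sh are on the PATH of the master node), the startup sequence is roughly:

# on master (NameNode / ResourceManager)
start-dfs.sh
start-yarn.sh

# on master, slaver1 and slaver2
zkServer.sh start

# on master
start-hbase.sh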