
HBase Installation and Configuration

HBase Overview: HBase (Hadoop Database) is a highly reliable, high-performance, column-oriented, scalable distributed storage system. With HBase you can build large-scale structured storage clusters on inexpensive PC servers. HBase uses Hadoop HDFS as its file storage system, Hadoop MapReduce to process the massive data it holds, and ZooKeeper as its coordination service.

HBase sits on top of HDFS; it is a distributed, column-oriented open-source database and an open-source implementation of Google BigTable. It is aimed at massive data sets and has rich tool support.
Characteristics of HBase tables
1. Large: a single table can have billions of rows and millions of columns
2. Column-oriented: data is stored and retrieved by column family
3. Uniform data type: every value is stored as an uninterpreted byte array
4. Schema-free: rows in the same table need not share the same columns
HBase terminology
1. Row key
2. Column family
3. Timestamp and cell
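To make these terms concrete, here is a minimal HBase shell session you can try once the installation below is finished (the table name test and column family cf are just examples):

create 'test', 'cf' (a table named test with one column family cf)
put 'test', 'row1', 'cf:a', 'value1' (writes one cell under row key row1, column cf:a; HBase stamps it with the current timestamp)
get 'test', 'row1' (reads the row back; each cell is shown with its column, timestamp and value)
scan 'test' (iterates over every row of the table)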

Linux Environment
1. Disable the firewall
[root@chen ~]# service iptables stop (takes effect immediately but does not survive a reboot)
[root@chen ~]# chkconfig iptables off (the permanent method, applied from the next boot)
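To confirm the firewall is off before continuing (an optional check):

[root@chen ~]# service iptables status (should report that iptables is not running)
[root@chen ~]# chkconfig --list iptables (every runlevel should show off)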
2. Disable SELinux
Edit the config file: [root@chen ~]# vim /etc/sysconfig/selinux

# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
# enforcing - SELinux security policy is enforced.
# permissive - SELinux prints warnings instead of enforcing.
# disabled - No SELinux policy is loaded.

SELINUX=disabled (change the value to disabled)
# SELINUXTYPE= can take one of these two values:
# targeted - Targeted processes are protected,
# mls - Multi Level Security protection.
SELINUXTYPE=targeted
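The change in this file only takes effect after a reboot; to relax SELinux in the running session right away (optional):

[root@chen ~]# setenforce 0 (switches to permissive mode immediately; disabled itself requires a reboot)
[root@chen ~]# getenforce (prints the current SELinux mode)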

3. Configure the hostname

[root@chen ~]# vim /etc/sysconfig/network

NETWORKING=yes
HOSTNAME=chen (your own hostname)
4. Configure the IP-to-hostname mapping

[root@chen ~]# vim /etc/hosts

127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6

192.168.236.128 chen (your own IP and hostname)
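A quick way to confirm the mapping works:

[root@chen ~]# ping -c 1 chen (the hostname should resolve to 192.168.236.128 and answer)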

5. Passwordless SSH login

[root@chen ~]# ssh-keygen -t rsa (generate a key pair; press Enter to accept each prompt)
Once the key pair is generated, copy the public key to this machine:

[root@chen ~]# ssh-copy-id 192.168.236.128 (your own IP)
Type yes when prompted, then enter the password.
Restart Linux: [root@chen ~]# reboot
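After the reboot, confirm that SSH no longer asks for a password:

[root@chen ~]# ssh 192.168.236.128 (should log straight in with no password prompt)
[root@chen ~]# exit (leave the nested session)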

6. Install the Java environment (Hadoop and HBase are both Java-based)
Create a directory on your Linux machine to hold the installation packages; after extraction it looks like this:

drwxrwxrwx. 8 uucp 143 4096 Oct 8 2013 jdk1.7.0_45
-rwxrwxrwx. 1 root root 138094686 Mar 30 06:42 jdk-7u45-linux-x64.gz
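If the JDK has not been extracted yet, a minimal sketch of that step (assuming the tarball sits in /usr/java, which matches the JAVA_HOME used below):

[root@chen java]# tar -zxf jdk-7u45-linux-x64.gz (extracts to ./jdk1.7.0_45, i.e. /usr/java/jdk1.7.0_45)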
Configure the environment variables

[root@chen java]# vim /etc/profile

export JAVA_HOME=/usr/java/jdk1.7.0_45
export PATH=$JAVA_HOME/bin:$PATH
Save and exit (in vim: ZZ, i.e. Shift+Z twice, or :wq)
Source the profile so the changes take effect in the current shell:
[root@chen java]# source /etc/profile

Check the Java version (the output below means the installation succeeded):

[root@chen java]# java -version
java version "1.7.0_45"
Java(TM) SE Runtime Environment (build 1.7.0_45-b18)
Java HotSpot(TM) 64-Bit Server VM (build 24.45-b08, mixed mode)

7. Install Hadoop
Download Hadoop from the official archive:

http://archive.apache.org (this site hosts every Apache release you could need)

Create a directory on your Linux machine for the installation packages, then extract the tarball into the target directory:

[root@chen tools]# tar -zxf hadoop-2.6.0.tar.gz -C ../softwares/
Modify the configuration files (following the official documentation)

Go to http://hadoop.apache.org/
Documentation ---> Release 2.6.0 (choose your own Hadoop version) ---> Single Node Setup (use that page as the configuration reference)

Check the JAVA_HOME path, then write it into etc/hadoop/hadoop-env.sh:

[root@chen tools]# echo $JAVA_HOME
/usr/java/jdk1.7.0_45

[root@chen hadoop]# vi hadoop-env.sh
export JAVA_HOME=/usr/java/jdk1.7.0_45

Configuration

Use the following:

etc/hadoop/core-site.xml (replace the IP with your own; hadoop.tmp.dir is the writable directory where HDFS keeps its data):

<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://192.168.236.128:9000</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/usr/bigdata/softwares/hadoop-2.6.0/date/tmp</value>
    </property>
</configuration>

etc/hadoop/hdfs-site.xml (for a single-node cluster the official Single Node Setup guide sets the replication factor to 1):

<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
</configuration>


Configure YARN

etc/hadoop/mapred-site.xml (if the file does not exist yet, copy etc/hadoop/mapred-site.xml.template to this name first):

<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
</configuration>


etc/hadoop/yarn-site.xml:

<configuration>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
</configuration>


Format HDFS
[root@chen hadoop-2.6.0]# bin/hdfs
(running bin/hdfs without arguments prints the command help:)
dfs                  run a filesystem command on the file systems supported in Hadoop.
namenode -format     format the DFS filesystem
secondarynamenode    run the DFS secondary namenode
namenode             run the DFS namenode
journalnode          run the DFS journalnode
zkfc                 run the ZK Failover Controller daemon
datanode             run a DFS datanode
dfsadmin             run a DFS admin client
haadmin              run a DFS HA admin client
fsck                 run a DFS filesystem checking utility
balancer             run a cluster balancing utility
jmxget               get JMX exported values from NameNode or DataNode.
mover                run a utility to move block replicas across storage types
oiv                  apply the offline fsimage viewer to an fsimage
oiv_legacy           apply the offline fsimage viewer to an legacy fsimage
oev                  apply the offline edits viewer to an edits file
fetchdt              fetch a delegation token from the NameNode
getconf              get config values from configuration
groups               get the groups which users belong to
snapshotDiff         diff two snapshots of a directory or diff the current directory contents with a snapshot
lsSnapshottableDir   list all snapshottable dirs owned by the current user (use -help to see options)
portmap              run a portmap service
nfs3                 run an NFS version 3 gateway
cacheadmin           configure the HDFS cache
crypto               configure HDFS encryption zones
storagepolicies      get all the existing block storage policies
version              print the version
[root@chen hadoop-2.6.0]# bin/hdfs namenode -format

/dfs/name has been successfully formatted. (seeing this line means the format succeeded)

Start Hadoop

[root@chen hadoop-2.6.0]# sbin/start-dfs.sh

Start YARN

[root@chen hadoop-2.6.0]# sbin/start-yarn.sh

Check whether the daemons are running

[root@chen hadoop-2.6.0]# jps
2822 DataNode
11925 ResourceManager
12009 NodeManager
2999 SecondaryNameNode
2738 NameNode
12050 Jps

(the processes above mean Hadoop started successfully)
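With HDFS and YARN running, a quick smoke test (the /test path is just an example):

[root@chen hadoop-2.6.0]# bin/hdfs dfs -mkdir /test
[root@chen hadoop-2.6.0]# bin/hdfs dfs -ls / (the new /test directory should appear in the listing)
The NameNode web UI is also reachable at http://192.168.236.128:50070 (the default port in Hadoop 2.x).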
Configure the Hadoop environment variables

Edit ~/.bash_profile (HADOOP_HOME must point at your extraction directory, /usr/bigdata/softwares/hadoop-2.6.0 in this guide):

export HADOOP_HOME=/usr/bigdata/softwares/hadoop-2.6.0

export PATH=$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH

source ~/.bash_profile
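Then confirm the variables are picked up:

[root@chen ~]# hadoop version (should print Hadoop 2.6.0 if HADOOP_HOME and PATH are set correctly)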

8. Install HBase
Download HBase from the same Apache archive and extract it (hbase-0.98.13-hadoop2 is used below), then edit the files in its conf directory:

[root@chen conf]# vi hbase-env.sh

export JAVA_HOME=/usr/java/jdk1.7.0_45

[root@chen conf]# vi hbase-site.xml

<configuration>
    <property>
        <name>hbase.rootdir</name>
        <value>hdfs://localhost:9000/hbase</value>
    </property>
    <property>
        <name>hbase.cluster.distributed</name>
        <value>true</value>
    </property>
</configuration>

(hbase.rootdir must match the fs.defaultFS value in core-site.xml, so replace localhost with your own IP here too)


In addition, a few environment variables need to be set in the hbase-env.sh file under HBase's conf directory (your JDK path may differ):
export JAVA_HOME=/usr/java/jdk1.7.0_45
export HBASE_MANAGES_ZK=true

export HBASE_MANAGES_ZK=true tells HBase to manage ZooKeeper by itself, so no separate ZooKeeper installation is needed; this guide uses the bundled ZooKeeper, hence true.
[root@chen conf]# vi regionservers
192.168.236.128 (your own IP)
Start HBase

[root@chen hbase-0.98.13-hadoop2]# bin/hbase-daemon.sh start zookeeper
starting zookeeper, logging to /usr/bigdata/softwares/hbase-0.98.13-hadoop2/bin/../logs/hbase-storm-zookeeper-chen.out
[root@chen hbase-0.98.13-hadoop2]# bin/hbase-daemon.sh start master
starting master, logging to /usr/bigdata/softwares/hbase-0.98.13-hadoop2/bin/../logs/hbase-storm-master-chen.out
[root@chen hbase-0.98.13-hadoop2]# bin/hbase-daemon.sh start regionserver
starting regionserver, logging to /usr/bigdata/softwares/hbase-0.98.13-hadoop2/bin/../logs/hbase-storm-regionserver-chen.out
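Alternatively, because HBASE_MANAGES_ZK=true, a single script starts ZooKeeper, the master and the region servers in one step:

[root@chen hbase-0.98.13-hadoop2]# bin/start-hbase.sh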
Check whether HBase started successfully

[root@chen hbase-0.98.13-hadoop2]# jps
13527 HMaster
2822 DataNode
13662 Jps
11925 ResourceManager
13640 GetJavaProperty
12009 NodeManager
13431 HQuorumPeer
2999 SecondaryNameNode
2738 NameNode
(started successfully: HQuorumPeer is the bundled ZooKeeper and HMaster is the HBase master; once the region server is fully up, an HRegionServer process should also appear, while GetJavaProperty is just a transient helper process from the HBase launcher)
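As a final check, open the HBase shell and query the cluster status:

[root@chen hbase-0.98.13-hadoop2]# bin/hbase shell
status (should report one live server once the region server is up)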