
Installing Hive 1.2.0 on a Hadoop 1.2.1 cluster, using MySQL as the metastore database

Install Hive; students who are able to should consider using MySQL as the metastore database (somewhat more difficult, but it will earn the teacher's high praise). After installation, run a few simple SQL operations as a test.

Installation environment:

Hadoop 1.2.1 cluster

Part 1: Install MySQL

Configure a local yum repository:

vi /etc/yum.repos.d/dvd.repo

On Red Hat 6.6, add the following:

[dvd]

name=install dvd

baseurl=file:///media/RHEL-6.6%20Server.x86_64/Server

enabled=1

gpgcheck=0


Run yum -y install mysql-server to complete the installation.
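If the installation DVD is not already mounted at the path the baseurl points to, mount it first. A minimal sketch, assuming the disc is in /dev/cdrom (use -o loop with the ISO file instead if installing from an image); the mount point must match the baseurl above:

mkdir -p "/media/RHEL-6.6 Server.x86_64"    # must match the baseurl path

mount -o ro /dev/cdrom "/media/RHEL-6.6 Server.x86_64"

yum clean all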

Verify the installation: rpm -qa mysql-server



Use 172.16.247.140 as the MySQL server's IP address.

Change the hostname:

Change the hostname for the current session: hostname mysqlserver

Change the hostname in the configuration file: vi /etc/sysconfig/network

Disable the firewall: service iptables stop
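For reference, the persistent form of the two changes above (HOSTNAME= is the standard RHEL 6 entry in /etc/sysconfig/network; chkconfig keeps the firewall off across reboots):

# /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=mysqlserver

# keep iptables disabled after a reboot
chkconfig iptables off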

Start mysqld, then create the corresponding MySQL account and grant it sufficient privileges.

[root@localhost hadoop]# service mysqld start

[root@localhost hadoop]# chkconfig mysqld on    # start mysqld on boot

mysql

mysql> create user 'hive' identified by '123456';

mysql> grant all privileges on *.* to 'hive'@'%' with grant option;

mysql> flush privileges;

mysql> grant all privileges on *.* to 'hive'@'localhost' with grant option;

mysql> flush privileges;

mysql> grant all privileges on *.* to 'hive'@'mysqlserver' with grant option;

mysql> flush privileges;

mysql> exit;

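Granting ALL ON *.* WITH GRANT OPTION is broader than the metastore needs; a tighter sketch would restrict the account to the hive database that is created below:

mysql> grant all privileges on hive.* to 'hive'@'%' identified by '123456';

mysql> flush privileges;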

Log in as the hive user to test the account and create the hive database:

[root@localhost hadoop]# mysql -h 172.16.247.140 -u hive -p

mysql> create database hive;

mysql> show databases;

mysql> use hive

mysql> show tables;


Download the JDBC driver package (mysql-connector-java-5.1.35.tar.gz) from http://dev.mysql.com/downloads/connector/j/

appledeMacBook-Pro:softwares apple$ tar -xvf mysql-connector-java-5.1.35.tar

appledeMacBook-Pro:mysql-connector-java-5.1.35 apple$ scp mysql-connector-java-5.1.35-bin.jar hadoop@hadoop0:/home/hadoop/apache-hive-1.2.0-bin/lib/
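An optional sanity check that the driver jar is now in Hive's lib directory, where the metastore will look for it:

[hadoop@hadoop0 ~]$ ls /home/hadoop/apache-hive-1.2.0-bin/lib/ | grep mysql-connector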

Start the metastore service: hive --service metastore &

Start HiveServer: hive --service hiveserver &

(Troubleshooting command: hive -hiveconf hive.root.logger=DEBUG,console)
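Note: HiveServer1 was removed from Hive starting with the 1.0 release, so on Hive 1.2.0 the hiveserver service above may no longer be available; HiveServer2 is its replacement:

hive --service hiveserver2 &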

hive

hive> create table a1(a string, b int);

hive> show tables;


[root@localhost hadoop]# mysql -h 172.16.247.140 -u hive -p

mysql> show databases;

mysql> use hive

mysql> show tables;

mysql> select * from TBLS;

Then drop the table from the Hive side:

hive> drop table a1;

and check again in MySQL:

mysql> flush privileges;

mysql> show tables;

mysql> select * from TBLS;


Part 2: Install Hive

Download apache-hive-1.2.0-bin.tar.gz from http://hive.apache.org

Install by copying the unpacked directory to the cluster: scp -r ./apache-hive-1.2.0-bin hadoop@hadoop0:/home/hadoop/

1. Configure environment variables: vi /etc/profile

#set hive environment

export HIVE_HOME=/home/hadoop/apache-hive-1.2.0-bin

export CLASSPATH=$CLASSPATH:$HIVE_HOME/lib

export PATH=$HIVE_HOME/bin:$PATH

export HIVE_CONF_DIR=$HIVE_HOME/conf


Note: finally, run source /etc/profile so that the environment variables take effect immediately.

2. Edit hive-config.sh in $HIVE_HOME/bin and add the following three lines:

export HIVE_HOME=/home/hadoop/apache-hive-1.2.0-bin

export JAVA_HOME=/home/hadoop/jdk1.7.0_79

export HADOOP_HOME=/home/hadoop/hadoop-1.2.1


3. Create the hive-env.sh and hive-site.xml files:

[root@hadoop0 conf]# pwd

/home/hadoop/apache-hive-1.2.0-bin/conf

[root@hadoop0 conf]# cp hive-env.sh.template hive-env.sh

# HADOOP_HOME=${bin}/../../hadoop

HADOOP_HOME=/home/hadoop/hadoop-1.2.1

# Hive Configuration Directory can be controlled by:

export HIVE_CONF_DIR=/home/hadoop/apache-hive-1.2.0-bin/conf


[root@hadoop0 conf]# cp hive-default.xml.template hive-site.xml

Then add and modify the following configuration parameters in hive-site.xml:

<property>

<name>hive.metastore.local</name>

<value>false</value>

</property>

<property>

<name>javax.jdo.option.ConnectionURL</name>

<value>jdbc:mysql://172.16.247.140:3306/hive?createDatabaseIfNotExist=true</value>

</property>

<property>

<name>javax.jdo.option.ConnectionDriverName</name>

<value>com.mysql.jdbc.Driver</value>

</property>

<property>

<name>javax.jdo.option.ConnectionUserName</name>

<value>hive</value>

</property>

<property>

<name>javax.jdo.option.ConnectionPassword</name>

<value>123456</value>

</property>

<property>

<name>hive.metastore.uris</name>

<value>thrift://172.16.247.140:9083</value>

</property>


Configuration reference: http://blog.csdn.net/reesun/article/details/8556078
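As an alternative to letting the metastore tables be created automatically on first use, Hive 1.2 ships a schematool that initializes the schema explicitly; a minimal sketch, run once the hive-site.xml above is in place (it reads the JDBC settings from there):

[root@hadoop0 bin]# cd /home/hadoop/apache-hive-1.2.0-bin/bin

[root@hadoop0 bin]# ./schematool -dbType mysql -initSchema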

[root@hadoop0 conf]# cp hive-default.xml.template hive-default.xml


Create the HDFS directories:

[root@hadoop0 bin]# hadoop fs -mkdir /tmp/hive

[root@hadoop0 bin]# hadoop fs -chmod 733 /tmp/hive

[root@hadoop0 bin]# hadoop fs -mkdir /user/hive

[root@hadoop0 bin]# hadoop fs -chmod 733 /user/hive

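The Hive getting-started guide additionally creates the default warehouse directory (hive.metastore.warehouse.dir defaults to /user/hive/warehouse); creating it now avoids permission errors when the first table is created:

[root@hadoop0 bin]# hadoop fs -mkdir /user/hive/warehouse

[root@hadoop0 bin]# hadoop fs -chmod g+w /user/hive/warehouse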

Start Hadoop and MySQL first, then start Hive.

Error on the first start of Hive:

[root@hadoop0 bin]# hive

Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/hadoop/hive/ql/CommandNeedRetryException
    at java.lang.Class.forName0(Native Method)
    at java.lang.Class.forName(Class.java:274)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:153)
Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.hive.ql.CommandNeedRetryException
    at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
    at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
    at java.security.AccessController.doPrivileged(Native Method)
    at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
    ... 3 more

The workaround is as follows (it is the one I used to get Hive started).

Edit hadoop-env.sh in the Hadoop conf directory on the master node. Before the change it reads:

export HADOOP_CLASSPATH=/home/hadoop/hadoop-1.2.1/myclass

After the change it appends to the existing HADOOP_CLASSPATH instead of overwriting it (the plain assignment above wipes out the Hive jars that the hive launcher has already put on the classpath):

export HADOOP_CLASSPATH=.:$CLASSPATH:$HADOOP_CLASSPATH:$HADOOP_HOME/bin:/home/hadoop/hadoop-1.2.1/myclass

The following also works:

export HADOOP_CLASSPATH=$CLASSPATH:/home/hadoop/hadoop-1.2.1/myclass

Fixing the error caused by ${system:java.io.tmpdir}/${system:user.name}:

[root@hadoop0 bin]# hive

15/05/30 08:55:39 WARN conf.HiveConf: HiveConf of name hive.metastore.local does not exist

Logging initialized using configuration in jar:file:/home/hadoop/apache-hive-1.2.0-bin/lib/hive-common-1.2.0.jar!/hive-log4j.properties

Exception in thread "main" java.lang.RuntimeException: java.lang.IllegalArgumentException: java.net.URISyntaxException: Relative path in absolute URI: ${system:java.io.tmpdir%7D/$%7Bsystem:user.name%7D
    at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:519)
    at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:677)
    at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:621)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:160)
Caused by: java.lang.IllegalArgumentException: java.net.URISyntaxException: Relative path in absolute URI: ${system:java.io.tmpdir%7D/$%7Bsystem:user.name%7D
    at org.apache.hadoop.fs.Path.initialize(Path.java:148)
    at org.apache.hadoop.fs.Path.<init>(Path.java:126)
    at org.apache.hadoop.hive.ql.session.SessionState.createSessionDirs(SessionState.java:560)
    at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:505)
    ... 7 more
Caused by: java.net.URISyntaxException: Relative path in absolute URI: ${system:java.io.tmpdir%7D/$%7Bsystem:user.name%7D
    at java.net.URI.checkPath(URI.java:1804)
    at java.net.URI.<init>(URI.java:752)
    at org.apache.hadoop.fs.Path.initialize(Path.java:145)
    ... 10 more

The fix is as follows:

1. Look through hive-site.xml; you will find configuration values that contain "system:java.io.tmpdir". Add the following properties:

<property>

    <name>system:java.io.tmpdir</name>

    <value>/home/hadoop/apache-hive-1.2.0-bin/iotmp</value>

  </property>

<property>

    <name>system:user.name</name>

    <value>hive</value>

  </property>


See also: http://www.netfoucs.com/article/fhg12225/105246.html

Create the directory /home/hadoop/apache-hive-1.2.0-bin/iotmp and set its permissions:

mkdir /home/hadoop/apache-hive-1.2.0-bin/iotmp

chmod 733 /home/hadoop/apache-hive-1.2.0-bin/iotmp
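Alternatively, instead of defining the two properties, the variables can be substituted directly in hive-site.xml. A sketch using sed; the iotmp path and the hive user name are the ones assumed throughout this post, so back the file up first:

[root@hadoop0 conf]# cp hive-site.xml hive-site.xml.bak

[root@hadoop0 conf]# sed -i 's#${system:java.io.tmpdir}#/home/hadoop/apache-hive-1.2.0-bin/iotmp#g' hive-site.xml

[root@hadoop0 conf]# sed -i 's#${system:user.name}#hive#g' hive-site.xml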

Start Hive again; this time it succeeds.

[root@hadoop0 bin]# hive

15/05/30 09:10:09 WARN conf.HiveConf: HiveConf of name hive.metastore.local does not exist

Logging initialized using configuration in jar:file:/home/hadoop/apache-hive-1.2.0-bin/lib/hive-common-1.2.0.jar!/hive-log4j.properties

hive>


Test by running hive:

hive

hive> create table a1(a string, b int);

hive> show tables;
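For a slightly fuller test than create/show, a sketch that loads one row and reads it back; the table a2, the tab delimiter, and the file /tmp/a2.txt are made up for this example:

[root@hadoop0 bin]# echo -e "hello\t1" > /tmp/a2.txt    # one tab-separated row

hive> create table a2(a string, b int) row format delimited fields terminated by '\t';

hive> load data local inpath '/tmp/a2.txt' into table a2;

hive> select * from a2;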

[root@localhost hadoop]# mysql -h 172.16.247.140 -u hive -p

mysql> show databases;

mysql> use hive

mysql> show tables;

mysql> select * from TBLS;


mysql> show databases;

+--------------------+

| Database           |

+--------------------+

| information_schema |

| hive               |

| mysql              |

| test               |

+--------------------+

4 rows in set (0.00 sec)

mysql> use hive;

Reading table information for completion of table and column names

You can turn off this feature to get a quicker startup with -A

Database changed

mysql> show tables;

+---------------------------+

| Tables_in_hive            |

+---------------------------+

| BUCKETING_COLS            |

| CDS                       |

| COLUMNS_V2                |

| DATABASE_PARAMS           |

| DBS                       |

| FUNCS                     |

| FUNC_RU                   |

| GLOBAL_PRIVS              |

| PARTITIONS                |

| PARTITION_KEYS            |

| PARTITION_KEY_VALS        |

| PARTITION_PARAMS          |

| PART_COL_STATS            |

| ROLES                     |

| SDS                       |

| SD_PARAMS                 |

| SEQUENCE_TABLE            |

| SERDES                    |

| SERDE_PARAMS              |

| SKEWED_COL_NAMES          |

| SKEWED_COL_VALUE_LOC_MAP  |

| SKEWED_STRING_LIST        |

| SKEWED_STRING_LIST_VALUES |

| SKEWED_VALUES             |

| SORT_COLS                 |

| TABLE_PARAMS              |

| TAB_COL_STATS             |

| TBLS                      |

| VERSION                   |

+---------------------------+

29 rows in set (0.00 sec)


Drop table a1 on the Hive side:

hive> drop table a1;


Refresh and check on the MySQL server side:

mysql> flush privileges;

mysql> show tables;

mysql> select * from TBLS;


Related threads on problems and their solutions: http://f.dataguru.cn/thread-501728-1-1.html

http://www.dataguru.cn/thread-525044-1-1.html

Reposted from: http://f.dataguru.cn/thread-525071-1-2.html