
HiveServer2 JDBC客户端连接Hive数据库


1 Introduction

Both HiveServer and HiveServer2 allow remote clients to submit requests to Hive in a variety of programming languages, such as Java and Python, and to retrieve the results, without starting the CLI. (HiveServer itself has been removed from Hive releases starting with Hive 1.0.0.) Even so, HiveServer is worth a brief mention here.

Both HiveServer and HiveServer2 are based on Thrift, although HiveServer is sometimes called the Thrift server while HiveServer2 is not. If HiveServer already existed, why was HiveServer2 needed? Because HiveServer cannot handle concurrent requests from more than one client. This limitation stems from the Thrift interface HiveServer uses and cannot be fixed by modifying the HiveServer code. HiveServer was therefore rewritten as HiveServer2 in Hive 0.11.0, which solves the problem: HiveServer2 supports multi-client concurrency and authentication, and provides better support for open-API clients such as JDBC and ODBC.

2 Starting HiveServer2

[hadoop@zhangyu ~]$ cd /opt/software/hive/bin/
[hadoop@zhangyu bin]$ hiveserver2
which: no hbase in (/opt/software/hive/bin:/opt/software/hadoop/sbin:/opt/software/hadoop/bin:/opt/software/apache-maven-3.3.9/bin:/usr/java/jdk1.8.0_45/bin:/usr/lib64/qt-3.3/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/home/hadoop/bin)
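
The "which: no hbase" message only means that no HBase installation was found on the PATH and can be ignored here. Note also that hiveserver2 runs in the foreground and keeps this terminal occupied, so in practice you may want to start it in the background (for example with nohup ... &) and open a new session for the client.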


3 Connecting via JDBC

[hadoop@zhangyu bin]$ ./beeline -u jdbc:hive2://localhost:10000/default -n hadoop

which: no hbase in (/opt/software/hive/bin:/opt/software/hadoop/sbin:/opt/software/hadoop/bin:/opt/software/apache-maven-3.3.9/bin:/usr/java/jdk1.8.0_45/bin:/usr/lib64/qt-3.3/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/home/hadoop/bin)
scan complete in 4ms
Connecting to jdbc:hive2://localhost:10000/default
Connected to: Apache Hive (version 1.1.0-cdh5.7.0)
Driver: Hive JDBC (version 1.1.0-cdh5.7.0)
Transaction isolation: TRANSACTION_REPEATABLE_READ
Beeline version 1.1.0-cdh5.7.0 by Apache Hive
0: jdbc:hive2://localhost:10000/default>
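
Here the -u option gives Beeline the JDBC connection URL (jdbc:hive2://<host>:<port>/<database>) and -n the user name to connect as.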


You can also type SQL statements directly at this prompt, for example:

0: jdbc:hive2://localhost:10000/default> show databases;
INFO  : Compiling command(queryId=hadoop_20180114082525_e8541a4a-e849-4017-9dab-ad5162fa74c1): show databases
INFO  : Semantic Analysis Completed
INFO  : Returning Hive schema: Schema(fieldSchemas:[FieldSchema(name:database_name, type:string, comment:from deserializer)], properties:null)
INFO  : Completed compiling command(queryId=hadoop_20180114082525_e8541a4a-e849-4017-9dab-ad5162fa74c1); Time taken: 0.478 seconds
INFO  : Concurrency mode is disabled, not creating a lock manager
INFO  : Executing command(queryId=hadoop_20180114082525_e8541a4a-e849-4017-9dab-ad5162fa74c1): show databases
INFO  : Starting task [Stage-0:DDL] in serial mode
INFO  : Completed executing command(queryId=hadoop_20180114082525_e8541a4a-e849-4017-9dab-ad5162fa74c1); Time taken: 0.135 seconds
INFO  : OK
+----------------+--+
| database_name  |
+----------------+--+
| default        |
+----------------+--+
1 row selected
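
Besides the interactive prompt, Beeline can also execute a single statement with the -e option or a script file with -f, which is handy for automation.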


Writing Java code

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class JdbcApp {
    private static String driverName = "org.apache.hive.jdbc.HiveDriver";

    public static void main(String[] args) throws Exception {
        try {
            // Load the Hive JDBC driver
            Class.forName(driverName);
        } catch (ClassNotFoundException e) {
            e.printStackTrace();
            System.exit(1);
        }
        // Connect to HiveServer2: jdbc:hive2://<host>:<port>/<database>
        Connection con = DriverManager.getConnection("jdbc:hive2://192.168.137.200:10000/default", "", "");
        Statement stmt = con.createStatement();

        // select the ename column from the table
        String tableName = "emp";
        String sql = "select ename from " + tableName;
        System.out.println("Running: " + sql);
        ResultSet res = stmt.executeQuery(sql);
        while (res.next()) {
            System.out.println(res.getString(1));
        }

        // describe the table
        sql = "describe " + tableName;
        System.out.println("Running: " + sql);
        res = stmt.executeQuery(sql);
        while (res.next()) {
            System.out.println(res.getString(1) + "\t" + res.getString(2));
        }

        // Release JDBC resources
        res.close();
        stmt.close();
        con.close();
    }
}
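
To compile and run this, the Hive JDBC driver (the org.apache.hive:hive-jdbc artifact, plus its Hadoop dependencies) must be on the classpath. Depending on how authentication is configured, you may also need to pass a real user name (for example "hadoop", as with Beeline above) instead of the empty string.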

Running the code prints the ename values from the emp table, followed by the column descriptions returned by describe emp.


4 Default parameter settings

HiveServer2 can be configured through the hive-site.xml configuration file. The relevant parameters are:

hive.server2.thrift.min.worker.threads – minimum number of worker threads, default 5.
hive.server2.thrift.max.worker.threads – maximum number of worker threads, default 500.
hive.server2.thrift.port – TCP port to listen on, default 10000.
hive.server2.thrift.bind.host – host to bind to, default localhost.


Configuring the listen port and bind host

sudo vi hive-site.xml
<property>
    <name>hive.server2.thrift.port</name>
    <value>10000</value>
</property>
<property>
    <name>hive.server2.thrift.bind.host</name>
    <value>192.168.48.130</value>
</property>
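
After changing these values, restart HiveServer2 so they take effect. As a quick sanity check, a minimal JDBC client such as the sketch below can verify that the new bind address is reachable (the host matches the example configuration above; the "hadoop" user name and default database are carried over from the earlier examples, so adjust them to your environment):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class ConnectCheck {
    public static void main(String[] args) throws Exception {
        // Host and port taken from the hive-site.xml example above; adjust for your cluster
        String url = "jdbc:hive2://192.168.48.130:10000/default";
        Class.forName("org.apache.hive.jdbc.HiveDriver");
        // "hadoop" is the user name used with Beeline earlier; change as needed
        try (Connection con = DriverManager.getConnection(url, "hadoop", "");
             Statement stmt = con.createStatement();
             ResultSet res = stmt.executeQuery("show databases")) {
            while (res.next()) {
                System.out.println(res.getString(1));
            }
        }
    }
}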