hiveserver2 && beeline && java client
2017-01-19 18:58
hiveserver2
-》Start
bin/hiveserver2           # run in the foreground
bin/hiveserver2 &         # run in the background
bin/hive --service hiveserver2
beeline (start hiveserver2 first)
-》Start
bin/beeline
bin/beeline -u jdbc:hive2://hadoop01.com:10000 -n lm -p 123456
java client connection
Since this works much like plain JDBC, you can refer to the official documentation. But because the official site is slow to access, I have copied its content below for later use and reference.
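After starting the server, a quick sanity check (a sketch, assuming the default Thrift port 10000 has not been changed in hive-site.xml) is to confirm the port is listening:

```shell
# Check that HiveServer2's Thrift port (default 10000) is listening;
# on older systems, netstat -tlnp works the same way as ss
ss -tln | grep 10000

# Hive services launched via bin/hive show up as RunJar in jps
jps | grep RunJar
```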
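Besides the interactive shell, beeline can also run statements non-interactively with `-e` and `-f`; a sketch assuming the same host and credentials as above (`/tmp/query.hql` is a hypothetical script file):

```shell
# Run a single statement and exit
bin/beeline -u jdbc:hive2://hadoop01.com:10000 -n lm -p 123456 -e "show databases;"

# Run a file of HiveQL statements
bin/beeline -u jdbc:hive2://hadoop01.com:10000 -n lm -p 123456 -f /tmp/query.hql
```

Inside the interactive shell, `!connect <url>` establishes a session and `!quit` exits.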
import java.sql.SQLException;
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;
import java.sql.DriverManager;

public class HiveJdbcClient {
  private static String driverName = "org.apache.hive.jdbc.HiveDriver";

  /**
   * @param args
   * @throws SQLException
   */
  public static void main(String[] args) throws SQLException {
    try {
      Class.forName(driverName);
    } catch (ClassNotFoundException e) {
      e.printStackTrace();
      System.exit(1);
    }
    // replace "hive" here with the name of the user the queries should run as
    Connection con = DriverManager.getConnection("jdbc:hive2://localhost:10000/default", "hive", "");
    Statement stmt = con.createStatement();
    String tableName = "testHiveDriverTable";
    stmt.execute("drop table if exists " + tableName);
    stmt.execute("create table " + tableName + " (key int, value string)");
    // show tables
    String sql = "show tables '" + tableName + "'";
    System.out.println("Running: " + sql);
    ResultSet res = stmt.executeQuery(sql);
    if (res.next()) {
      System.out.println(res.getString(1));
    }
    // describe table
    sql = "describe " + tableName;
    System.out.println("Running: " + sql);
    res = stmt.executeQuery(sql);
    while (res.next()) {
      System.out.println(res.getString(1) + "\t" + res.getString(2));
    }
    // load data into table
    // NOTE: filepath has to be local to the hive server
    // NOTE: /tmp/a.txt is a ctrl-A separated file with two fields per line
    String filepath = "/tmp/a.txt";
    sql = "load data local inpath '" + filepath + "' into table " + tableName;
    System.out.println("Running: " + sql);
    stmt.execute(sql);
    // select * query
    sql = "select * from " + tableName;
    System.out.println("Running: " + sql);
    res = stmt.executeQuery(sql);
    while (res.next()) {
      System.out.println(String.valueOf(res.getInt(1)) + "\t" + res.getString(2));
    }
    // regular hive query
    sql = "select count(1) from " + tableName;
    System.out.println("Running: " + sql);
    res = stmt.executeQuery(sql);
    while (res.next()) {
      System.out.println(res.getString(1));
    }
  }
}
Running the JDBC Sample Code

# Then on the command-line
$ javac HiveJdbcClient.java

# To run the program using remote hiveserver in non-kerberos mode, we need the following jars in the classpath
# from hive/build/dist/lib
#   hive-jdbc*.jar
#   hive-service*.jar
#   libfb303-0.9.0.jar
#   libthrift-0.9.0.jar
#   log4j-1.2.16.jar
#   slf4j-api-1.6.1.jar
#   slf4j-log4j12-1.6.1.jar
#   commons-logging-1.0.4.jar
#
# To run the program using kerberos secure mode, we need the following jars in the classpath
#   hive-exec*.jar
#   commons-configuration-1.6.jar (This is not needed with Hadoop 2.6.x and later.)
# and from hadoop
#   hadoop-core*.jar (use hadoop-common*.jar for Hadoop 2.x)
#
# To run the program in embedded mode, we need the following additional jars in the classpath
# from hive/build/dist/lib
#   hive-exec*.jar
#   hive-metastore*.jar
#   antlr-runtime-3.0.1.jar
#   derby.jar
#   jdo2-api-2.1.jar
#   jpox-core-1.2.2.jar
#   jpox-rdbms-1.2.2.jar
# and from hadoop/build
#   hadoop-core*.jar
# as well as hive/build/dist/conf, any HIVE_AUX_JARS_PATH set,
# and hadoop jars necessary to run MR jobs (eg lzo codec)

$ java -cp $CLASSPATH HiveJdbcClient

Alternatively, you can run the following bash script, which will seed the data file and build your classpath before invoking the client. The script adds all the additional jars needed for using HiveServer2 in embedded mode as well.

#!/bin/bash
HADOOP_HOME=/your/path/to/hadoop
HIVE_HOME=/your/path/to/hive

echo -e '1\x01foo' > /tmp/a.txt
echo -e '2\x01bar' >> /tmp/a.txt

HADOOP_CORE=$(ls $HADOOP_HOME/hadoop-core*.jar)
CLASSPATH=.:$HIVE_HOME/conf:$(hadoop classpath)

for i in ${HIVE_HOME}/lib/*.jar ; do
    CLASSPATH=$CLASSPATH:$i
done

java -cp $CLASSPATH HiveJdbcClient
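The sample's note that `/tmp/a.txt` must be Ctrl-A (`\u0001`) separated matters because that is Hive's default field delimiter. A minimal sketch of producing and verifying such a file in Java, as an alternative to the `echo -e` lines in the bash script (the `CtrlASeed` class name is my own, not from the sample):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

public class CtrlASeed {
    // Hive's default field delimiter is Ctrl-A (\u0001)
    static final char DELIM = '\u0001';

    public static void main(String[] args) throws IOException {
        Path out = Path.of("/tmp/a.txt");
        // Two (key, value) rows, matching the bash script's echo -e lines
        List<String> rows = List.of(1 + "" + DELIM + "foo",
                                    2 + "" + DELIM + "bar");
        Files.write(out, rows);
        // Read back and split on the delimiter to verify the two-field format
        for (String line : Files.readAllLines(out)) {
            String[] fields = line.split(String.valueOf(DELIM));
            System.out.println(fields[0] + "\t" + fields[1]);
        }
    }
}
```

Rows written this way load cleanly with the `load data local inpath` statement in the client above, with `key` and `value` populated as separate columns.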