
How to bulk-load data into HBase from the command line

A warm-up example:

Create the HBase table:

hbase(main):003:0> create 'people','0'

Upload the prepared data to HDFS:

[hadoop@h71 ~]$ vi people.txt

1,jimmy,25,jiujinshan

2,tina,25,hunan

[hadoop@h71 ~]$ hadoop fs -mkdir /bulkload

[hadoop@h71 ~]$ hadoop fs -put people.txt /bulkload

Import the data just uploaded to HDFS into HBase via bulk load:

importtsv:

HADOOP_CLASSPATH=`/home/hadoop/hbase-1.0.0-cdh5.5.2/bin/hbase classpath` /home/hadoop/hadoop-2.6.0-cdh5.5.2/bin/hadoop jar /home/hadoop/hbase-1.0.0-cdh5.5.2/lib/hbase-server-1.0.0-cdh5.5.2.jar importtsv -Dimporttsv.separator=, -Dimporttsv.columns=HBASE_ROW_KEY,0:name,0:age,0:province \
-Dimporttsv.bulk.output=hdfs:///bulkload/output people hdfs:///bulkload/people.txt

(The importtsv tool only reads its input from HDFS, which is why the data has to be copied from the local Linux filesystem into HDFS first.)
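The upload itself can also be done from code rather than with hadoop fs -put. Here is a minimal sketch with the Hadoop FileSystem API; the hard-coded local path /home/hadoop/people.txt is an assumption for illustration, not something the commands above spell out.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class UploadToHdfs {
    public static void main(String[] args) throws Exception {
        // Reads core-site.xml / hdfs-site.xml from the classpath, so the file
        // lands on the cluster's default filesystem (HDFS).
        Configuration conf = new Configuration();
        try (FileSystem fs = FileSystem.get(conf)) {
            fs.mkdirs(new Path("/bulkload"));
            // Equivalent of: hadoop fs -put people.txt /bulkload
            fs.copyFromLocalFile(new Path("/home/hadoop/people.txt"),
                                 new Path("/bulkload/people.txt"));
        }
    }
}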

[hadoop@h71 ~]$ hadoop fs -lsr /bulkload

drwxr-xr-x   - hadoop supergroup          0 2017-03-20 02:16 /bulkload/output

drwxr-xr-x   - hadoop supergroup          0 2017-03-20 02:15 /bulkload/output/0

-rw-r--r--   2 hadoop supergroup       1247 2017-03-20 02:16 /bulkload/output/0/e9124651e9e04ab29794572e67b87736

-rw-r--r--   2 hadoop supergroup          0 2017-03-20 02:16 /bulkload/output/_SUCCESS

-rw-r--r--   2 hadoop supergroup         38 2017-03-20 01:50 /bulkload/people.txt

completebulkload:

HADOOP_CLASSPATH=`/home/hadoop/hbase-1.0.0-cdh5.5.2/bin/hbase classpath` /home/hadoop/hadoop-2.6.0-cdh5.5.2/bin/hadoop jar /home/hadoop/hbase-1.0.0-cdh5.5.2/lib/hbase-server-1.0.0-cdh5.5.2.jar completebulkload hdfs:///bulkload/output people

hbase(main):004:0> scan 'people'
ROW                                         COLUMN+CELL
1                                          column=0:age, timestamp=1489947175529, value=25
1                                          column=0:name, timestamp=1489947175529, value=jimmy
1                                          column=0:province, timestamp=1489947175529, value=jiujinshan
2                                          column=0:age, timestamp=1489947175529, value=25
2                                          column=0:name, timestamp=1489947175529, value=tina
2                                          column=0:province, timestamp=1489947175529, value=hunan
HBase actually ships with command-line tools for bulk-loading data directly into HBase (the fourth method below is not built in; it is a third-party add-on), but in my experience these command-line tools only suit fairly simple scenarios; for complex requirements you still end up writing your own code.
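For reference, a minimal sketch of what such hand-written code might look like with the HBase 1.0 Java client is shown below. It writes one row into the people table from the warm-up; the hard-coded CSV line stands in for whatever file reader or MapReduce job a real import would use.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class PeopleImporter {
    public static void main(String[] args) throws Exception {
        // Reads hbase-site.xml from the classpath for the ZooKeeper quorum.
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Table table = conn.getTable(TableName.valueOf("people"))) {
            // One line of people.txt, e.g. "1,jimmy,25,jiujinshan"; in real code
            // this would come from a file reader or a MapReduce/Spark job.
            String[] fields = "1,jimmy,25,jiujinshan".split(",");
            Put put = new Put(Bytes.toBytes(fields[0]));  // row key
            put.addColumn(Bytes.toBytes("0"), Bytes.toBytes("name"), Bytes.toBytes(fields[1]));
            put.addColumn(Bytes.toBytes("0"), Bytes.toBytes("age"), Bytes.toBytes(fields[2]));
            put.addColumn(Bytes.toBytes("0"), Bytes.toBytes("province"), Bytes.toBytes(fields[3]));
            table.put(put);
        }
    }
}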

I have collected four methods so far:
(1) Import a file into HBase with ImportTsv

A CSV file can be imported directly into an HBase table, but the corresponding table has to be created in HBase first.

hbase(main):012:0> create 'hbase-tb1-001','cf'
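If the prerequisite table should be created from code instead of the HBase shell, a minimal sketch with the HBase 1.0 Admin API looks roughly like this (table and column-family names taken from the shell command above):

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class CreateTable {
    public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
            // Equivalent of: create 'hbase-tb1-001','cf'
            HTableDescriptor desc = new HTableDescriptor(TableName.valueOf("hbase-tb1-001"));
            desc.addFamily(new HColumnDescriptor("cf"));
            if (!admin.tableExists(desc.getTableName())) {
                admin.createTable(desc);
            }
        }
    }
}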

[hadoop@h71 ~]$ vi simple.csv

1,"tom"

2,"sam"

3,"jerry"

4,"marry"

5,"john"

[hadoop@h71 ~]$ hadoop fs -put simple.csv /

Then run:

HADOOP_CLASSPATH=`/home/hadoop/hbase-1.0.0-cdh5.5.2/bin/hbase classpath` /home/hadoop/hadoop-2.6.0-cdh5.5.2/bin/hadoop jar /home/hadoop/hbase-1.0.0-cdh5.5.2/lib/hbase-server-1.0.0-cdh5.5.2.jar importtsv -Dimporttsv.separator=, -Dimporttsv.columns=HBASE_ROW_KEY,cf \
hbase-tb1-001 /simple.csv

(2) Import data into HBase with completebulkload

This is the same as the people import at the very top, except that the earlier approach creates the table in HBase beforehand, whereas with this approach you do not need to create the table first; the command itself creates the table in HBase automatically.

HADOOP_CLASSPATH=`/home/hadoop/hbase-1.0.0-cdh5.5.2/bin/hbase classpath` /home/hadoop/hadoop-2.6.0-cdh5.5.2/bin/hadoop jar /home/hadoop/hbase-1.0.0-cdh5.5.2/lib/hbase-server-1.0.0-cdh5.5.2.jar importtsv -Dimporttsv.separator=, -Dimporttsv.bulk.output=/output \
-Dimporttsv.columns=HBASE_ROW_KEY,cf hbase-tb1-002 /simple.csv

(HFiles are generated under the specified output path and an empty table hbase-tb1-002 is created in HBase.)

HADOOP_CLASSPATH=`/home/hadoop/hbase-1.0.0-cdh5.5.2/bin/hbase classpath` /home/hadoop/hadoop-2.6.0-cdh5.5.2/bin/hadoop jar /home/hadoop/hbase-1.0.0-cdh5.5.2/lib/hbase-server-1.0.0-cdh5.5.2.jar completebulkload /output hbase-tb1-002

Or use this command:

hadoop jar /home/hadoop/hbase-1.0.0-cdh5.5.2/lib/hbase-server-1.0.0-cdh5.5.2.jar completebulkload /output hbase-tb1-002

hbase(main):014:0> scan 'hbase-tb1-002'
ROW                                         COLUMN+CELL
1                                          column=cf:, timestamp=1489846700133, value="tom"
2                                          column=cf:, timestamp=1489846700133, value="sam"
3                                          column=cf:, timestamp=1489846700133, value="jerry"
4                                          column=cf:, timestamp=1489846700133, value="marry"
5                                          column=cf:, timestamp=1489846700133, value="john"
Note: methods (1) and (2) are essentially the same thing as the two commands in the warm-up at the beginning of this article; only the form of the commands differs slightly.
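For the record, the completebulkload step also has a programmatic equivalent. The sketch below uses LoadIncrementalHFiles from the HBase 1.0 API; it picks up the /output directory and the hbase-tb1-002 table from the commands above, and uses the old HTable constructor because the doBulkLoad(Path, HTable) signature in this release expects an HTable.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles;

public class BulkLoadHFiles {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        // Same effect as: hadoop jar hbase-server-*.jar completebulkload /output hbase-tb1-002
        LoadIncrementalHFiles loader = new LoadIncrementalHFiles(conf);
        try (HTable table = new HTable(conf, TableName.valueOf("hbase-tb1-002"))) {
            loader.doBulkLoad(new Path("/output"), table);
        }
    }
}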

(3) Import data into HBase with export and import

First, the table hbase-tb1-002 already exists in HBase and contains data:

hbase(main):014:0> scan 'hbase-tb1-002'
ROW                                         COLUMN+CELL
1                                          column=cf:, timestamp=1489846700133, value="tom"
2                                          column=cf:, timestamp=1489846700133, value="sam"
3                                          column=cf:, timestamp=1489846700133, value="jerry"
4                                          column=cf:, timestamp=1489846700133, value="marry"
5                                          column=cf:, timestamp=1489846700133, value="john"
hadoop jar /home/hadoop/hbase-1.0.0-cdh5.5.2/lib/hbase-server-1.0.0-cdh5.5.2.jar export hbase-tb1-002 /test-output

(In HBase 0.96 the corresponding command was bin/hbase org.apache.hadoop.hbase.mapreduce.Export hbase-tb1-002 /test-output)

[hadoop@h71 hbase-1.0.0-cdh5.5.2]$ hadoop fs -lsr /test-output

-rw-r--r--   2 hadoop supergroup          0 2017-03-19 00:16 /test-output/_SUCCESS

-rw-r--r--   2 hadoop supergroup        344 2017-03-19 00:16 /test-output/part-m-00000

(The generated data file is in SequenceFile format, so viewing it with hadoop fs -cat only shows garbled bytes.)
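If you only want to inspect the exported records without importing them into another table, the file can be decoded from code. This is a rough sketch assuming the HBase 1.0 jars are on the classpath: Export writes an ImmutableBytesWritable/Result SequenceFile, so HBase's ResultSerialization has to be registered before Hadoop can deserialize the values.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.Cell;
import org.apache.hadoop.hbase.CellUtil;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.ResultSerialization;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.io.SequenceFile;

public class DumpExport {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        // Export's values are HBase Result objects; register their serialization
        // so SequenceFile.Reader knows how to decode them.
        conf.setStrings("io.serializations", conf.get("io.serializations"),
                ResultSerialization.class.getName());

        Path part = new Path("/test-output/part-m-00000");
        try (SequenceFile.Reader reader =
                     new SequenceFile.Reader(conf, SequenceFile.Reader.file(part))) {
            ImmutableBytesWritable rowKey = new ImmutableBytesWritable();
            while (reader.next(rowKey)) {
                Result result = (Result) reader.getCurrentValue((Object) null);
                for (Cell cell : result.rawCells()) {
                    System.out.println(Bytes.toString(rowKey.copyBytes()) + " "
                            + Bytes.toString(CellUtil.cloneFamily(cell)) + ":"
                            + Bytes.toString(CellUtil.cloneQualifier(cell)) + " = "
                            + Bytes.toString(CellUtil.cloneValue(cell)));
                }
            }
        }
    }
}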

hbase(main):025:0> create 'hbase-tb1-003','cf'

hadoop jar /home/hadoop/hbase-1.0.0-cdh5.5.2/lib/hbase-server-1.0.0-cdh5.5.2.jar import hbase-tb1-003 /test-output

(Also, /test-output/part-m-00000 does not disappear afterwards, unlike the HFiles consumed by completebulkload.)
hbase(main):026:0> scan 'hbase-tb1-003'
ROW                                         COLUMN+CELL
1                                          column=cf:, timestamp=1489853023886, value="tom"
2                                          column=cf:, timestamp=1489853023886, value="sam"
3                                          column=cf:, timestamp=1489853023886, value="jerry"
4                                          column=cf:, timestamp=1489853023886, value="marry"
5                                          column=cf:, timestamp=1489853023886, value="john"
(4) Bulk-load large batches of data with Phoenix and MapReduce (bulkload)

Reference: http://blog.csdn.net/maomaosi2009/article/details/45623821 (that post claims that pointing the import at a local path with file:/// reports an error but still loads the data; in my test it reported the error and loaded nothing, and querying the table in Phoenix returned no rows)
http://blog.csdn.net/d6619309/article/details/51334126
(I got this experiment to work with the Apache releases of HBase and Phoenix, but it failed on the CDH versions with this error:
Error: java.lang.ClassNotFoundException: org.apache.commons.csv.CSVFormat
at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at org.apache.phoenix.mapreduce.CsvToKeyValueMapper$CsvLineParser.<init>(CsvToKeyValueMapper.java:282)
at org.apache.phoenix.mapreduce.CsvToKeyValueMapper.setup(CsvToKeyValueMapper.java:142)
at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:142)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:787)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:163)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
Because the official Phoenix releases do not support CDH out of the box, I had recompiled Phoenix with Maven against cdh5.5.2, so at first I assumed something I changed during that rebuild was causing the error.)
Fix: I later tried copying phoenix-4.6.0-cdh5.5.2-client.jar into /home/hadoop/hbase-1.0.0-cdh5.5.2/lib on the master node, and after that the command above worked.

Create the user table in the Phoenix CLI:

0: jdbc:phoenix:h40,h41,h42:2181> create table user (id varchar primary key,account varchar ,passwd varchar);

Create data_import.txt under the PHOENIX_HOME directory with the following content:

[hadoop@h40 ~]$ vi data_import.txt

001,google,AM

002,baidu,BJ

003,alibaba,HZ

Run the MapReduce job:

[hadoop@h40 phoenix-4.6.0-HBase-1.0-bin]$ hadoop jar phoenix-4.6.0-HBase-1.0-client.jar org.apache.phoenix.mapreduce.CsvBulkLoadTool --table USER --input /data_import.txt

0: jdbc:phoenix:h40,h41,h42:2181> select * from user;
+------------------------------------------+------------------------------------------+------------------------------------------+
|                    ID                    |                 ACCOUNT                  |                  PASSWD                  |
+------------------------------------------+------------------------------------------+------------------------------------------+
| 001                                      | google                                   | AM                                       |
| 002                                      | baidu                                    | BJ                                       |
| 003                                      | alibaba                                  | HZ                                       |
+------------------------------------------+------------------------------------------+------------------------------------------+
hbase(main):004:0> scan 'USER'
ROW                                                          COLUMN+CELL
001                                                         column=0:ACCOUNT, timestamp=1492424759793, value=google
001                                                         column=0:PASSWD, timestamp=1492424759793, value=AM
001                                                         column=0:_0, timestamp=1492424759793, value=
002                                                         column=0:ACCOUNT, timestamp=1492424759793, value=baidu
002                                                         column=0:PASSWD, timestamp=1492424759793, value=BJ
002                                                         column=0:_0, timestamp=1492424759793, value=
003                                                         column=0:ACCOUNT, timestamp=1492424759793, value=alibaba
003                                                         column=0:PASSWD, timestamp=1492424759793, value=HZ
003                                                         column=0:_0, timestamp=1492424759793, value=
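Besides the sqlline CLI, the same USER table can be reached from Java through the Phoenix JDBC driver. The sketch below assumes phoenix-4.6.0-HBase-1.0-client.jar is on the classpath and reuses the ZooKeeper quorum from the connection string above; the 004 row it upserts is only an illustrative value, not part of the original data set.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class PhoenixAccess {
    public static void main(String[] args) throws Exception {
        // The driver class ships in the Phoenix client jar.
        Class.forName("org.apache.phoenix.jdbc.PhoenixDriver");
        String url = "jdbc:phoenix:h40,h41,h42:2181";
        try (Connection conn = DriverManager.getConnection(url)) {
            // Row-at-a-time alternative to CsvBulkLoadTool, fine for small data;
            // "004" is a made-up row used only to illustrate the API.
            try (PreparedStatement ps =
                         conn.prepareStatement("UPSERT INTO user VALUES (?, ?, ?)")) {
                ps.setString(1, "004");
                ps.setString(2, "tencent");
                ps.setString(3, "SZ");
                ps.executeUpdate();
            }
            conn.commit();  // Phoenix buffers mutations until commit()
            try (PreparedStatement ps = conn.prepareStatement("SELECT * FROM user");
                 ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    System.out.println(rs.getString("ID") + " "
                            + rs.getString("ACCOUNT") + " " + rs.getString("PASSWD"));
                }
            }
        }
    }
}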