Importing MySQL data into Hive with Sqoop
2018-02-07 15:31
// Sqoop data import test
## by coco
## 2014-11-21
1. Download Sqoop. This test runs against hadoop-2.2.0, so the Sqoop package to download is:
sqoop-1.4.5.bin__hadoop-2.0.4-alpha.tar.gz
Download URL:
http://mirrors.cnnic.cn/apache/sqoop/1.4.5/
2. After downloading, extract and configure it.
tar -zxvf sqoop-1.4.5.bin__hadoop-2.0.4-alpha.tar.gz -C /usr/local
cd /usr/local
ln -s /usr/local/sqoop-1.4.5.bin__hadoop-2.0.4-alpha/ sqoop
Configure the environment variables:
vim /etc/profile and add the following:
export SQOOP_HOME=/usr/local/sqoop
export PATH=.:$JAVA_HOME/bin:$ZK_HOME/bin:$TOMCAT_HOME/bin:$HADOOP_HOME/bin:$HBASE_HOME/bin:$HIVE_HOME/bin:$FLUME_HOME/bin:$SQOOP_HOME/bin:$PATH
Modify the Sqoop configuration:
cd /usr/local/sqoop/conf
cp sqoop-env-template.sh sqoop-env.sh
vim sqoop-env.sh and set the following:
export HADOOP_COMMON_HOME=/usr/local/hadoop
export HADOOP_MAPRED_HOME=/usr/local/hadoop/share/hadoop/mapreduce
export HBASE_HOME=/usr/local/hbase
export HIVE_HOME=/usr/local/hive
source /etc/profile
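After sourcing the profile, a quick sanity check (paths assumed from the setup above) is to print the Sqoop version:
# Verify that Sqoop runs and picks up the Hadoop libraries
/usr/local/sqoop/bin/sqoop version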
3. Testing
Sqoop does not run as a daemon, so there is nothing to start; verify that it works by testing the connection to MySQL.
1. Test the connection to MySQL
[root@wwn97 ~]# /usr/local/sqoop/bin/sqoop list-databases --connect jdbc:mysql://192.168.8.97:3306 --username root --password 123456
Warning: /usr/local/sqoop/../hcatalog does not exist! HCatalog jobs will fail.
Please set $HCAT_HOME to the root of your HCatalog installation.
Warning: /usr/local/sqoop/../accumulo does not exist! Accumulo imports will fail.
Please set $ACCUMULO_HOME to the root of your Accumulo installation.
14/11/21 11:12:32 INFO sqoop.Sqoop: Running Sqoop version: 1.4.5
14/11/21 11:12:32 WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead.
14/11/21 11:12:33 INFO manager.MySQLManager: Preparing to use a MySQL streaming resultset.
information_schema
cacti
cr_debug
db1
gcd_sup
hive
hive4
mysql
performance_schema
test
The databases on the MySQL server are listed, which shows that Sqoop is configured correctly.
***** Reference used for these tests:
http://blog.javachen.com/2014/08/04/import-data-to-hive-with-sqoop/
2. Test loading data from a MySQL table into Hive
sqoop import --connect jdbc:mysql://192.168.8.97:3306/db1?characterEncoding=utf8 --username root --password 123456 --table pd_info --columns "pid,cid" --hive-import --hive-table pid_cid
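If the import succeeds, the rows should be visible in Hive. A minimal check, assuming the Hive CLI is available and the table landed in the default database:
# Verify the table created by the import above
hive -e "SELECT pid, cid FROM pid_cid LIMIT 10;"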
3. Test appending incremental data from the MySQL table to Hive:
sqoop import --connect jdbc:mysql://192.168.8.97:3306/db1?characterEncoding=utf8 --username root --password 123456 --table pd_info --columns "pid,cid" --check-column pid --incremental append --last-value 165 --hive-import --hive-table pid_cid
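With --incremental append and --last-value 165, only rows whose pid is greater than 165 are imported. A hedged way to preview those rows directly in MySQL (credentials and columns taken from the command above):
# Rows the incremental import above is expected to pull (pid > last-value)
mysql -h 192.168.8.97 -uroot -p123456 -e "SELECT pid, cid FROM db1.pd_info WHERE pid > 165;"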
4. Test importing data from the MySQL table into HDFS
[root@wwn97 ~]# hadoop dfs -mkdir /test
sqoop import --connect jdbc:mysql://192.168.8.97:3306/db1?characterEncoding=utf8 --username root --password 123456 --table pd_info --columns "pid,cid" --target-dir "/test/aa.txt"
// A WHERE condition can also be added
sqoop import --connect jdbc:mysql://192.168.8.97:3306/db1?characterEncoding=utf8 --username root --password 123456 --table pd_info --columns "pid,cid" --where "pid=3" --target-dir "/test/b.txt"
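Sqoop writes MapReduce part files under each --target-dir. A quick way to inspect the results (the part-m-00000 file name is the usual MapReduce default and is an assumption here):
# List and peek at the files Sqoop wrote to HDFS
hadoop fs -ls /test/aa.txt /test/b.txt
hadoop fs -cat /test/aa.txt/part-m-00000 | head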
5. Test importing a MySQL table and creating the corresponding table in Hive
sqoop import --connect jdbc:mysql://192.168.8.97:3306/db1?characterEncoding=utf8 --username root --password 123456 --table pd_info --columns "pid,cid" --fields-terminated-by "\t" --lines-terminated-by "\n" --hive-import --create-hive-table --hive-table hive_tab
6. Test merging HDFS files
sqoop merge --new-data /test/aa.txt --onto /test/b.txt --merge-key pid --target-dir /test/merged --jar-file /test/merged.jar --class-name merged
Note: the class name given by --class-name corresponds to the merged class inside merged.jar, and merged.jar is generated with Sqoop's codegen tool.
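A hedged sketch of how merged.jar and the merged class could be generated with the codegen tool (the --outdir and --bindir paths are assumptions, chosen so the jar ends up at /test/merged.jar on the local filesystem):
# Generate the record class and jar used by the merge step above (paths are assumptions)
sqoop codegen --connect jdbc:mysql://192.168.8.97:3306/db1 --username root --password 123456 --table pd_info --class-name merged --outdir /test/codegen --bindir /test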
7. Test importing data from a MySQL table into HBase:
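A hedged sketch of such an import, using the HBase options listed in the help output below (the HBase table name, column family, and single-mapper setting are assumptions):
# Illustrative only: import pd_info into HBase, keyed by pid (names are assumptions)
sqoop import --connect jdbc:mysql://192.168.8.97:3306/db1 --username root --password 123456 --table pd_info --columns "pid,cid" --hbase-table pd_info --column-family cf --hbase-row-key pid --hbase-create-table -m 1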
// Detailed sqoop import options:
[root@master ~]# sqoop-import --help
Common arguments:
   --connect <jdbc-uri>                          Specify JDBC connect string
   --connection-manager <class-name>             Specify connection manager class name
   --connection-param-file <properties-file>     Specify connection parameters file
   --driver <class-name>                         Manually specify JDBC driver class to use
   --hadoop-home <hdir>                          Override $HADOOP_MAPRED_HOME_ARG
   --hadoop-mapred-home <dir>                    Override $HADOOP_MAPRED_HOME_ARG
   --help                                        Print usage instructions
   -P                                            Read password from console
   --password <password>                         Set authentication password
   --password-file <password-file>               Set authentication password file path
   --relaxed-isolation                           Use read-uncommitted isolation for imports
   --skip-dist-cache                             Skip copying jars to distributed cache
   --username <username>                         Set authentication username
   --verbose                                     Print more information while working
Import control arguments:
   --append                                      Imports data in append mode
   --as-avrodatafile                             Imports data to Avro data files
   --as-parquetfile                              Imports data to Parquet files
   --as-sequencefile                             Imports data to SequenceFiles
   --as-textfile                                 Imports data as plain text (default)
   --boundary-query <statement>                  Set boundary query for retrieving max and min value of the primary key
   --columns <col,col,col...>                    Columns to import from table
   --compression-codec <codec>                   Compression codec to use for import
   --delete-target-dir                           Imports data in delete mode
   --direct                                      Use direct import fast path
   --direct-split-size <n>                       Split the input stream every 'n' bytes when importing in direct mode
   -e,--query <statement>                        Import results of SQL 'statement'
   --fetch-size <n>                              Set number 'n' of rows to fetch from the database when more rows are needed
   --inline-lob-limit <n>                        Set the maximum size for an inline LOB
   -m,--num-mappers <n>                          Use 'n' map tasks to import in parallel
   --mapreduce-job-name <name>                   Set name for generated mapreduce job
   --merge-key <column>                          Key column to use to join results
   --split-by <column-name>                      Column of the table used to split work units
   --table <table-name>                          Table to read
   --target-dir <dir>                            HDFS plain table destination
   --validate                                    Validate the copy using the configured validator
   --validation-failurehandler <validation-failurehandler>  Fully qualified class name for ValidationFailureHandler
   --validation-threshold <validation-threshold>  Fully qualified class name for ValidationThreshold
   --validator <validator>                       Fully qualified class name for the Validator
   --warehouse-dir <dir>                         HDFS parent for table destination
   --where <where clause>                        WHERE clause to use during import
   -z,--compress                                 Enable compression
Incremental import arguments:
--check-column <column> Source column to check for incremental
change
--incremental <import-type> Define an incremental import of type
'append' or 'lastmodified'
--last-value <value> Last imported value in the incremental
check column
Output line formatting arguments:
--enclosed-by <char> Sets a required field enclosing
character
--escaped-by <char> Sets the escape character
--fields-terminated-by <char> Sets the field separator character
--lines-terminated-by <char> Sets the end-of-line character
--mysql-delimiters Uses MySQL's default delimiter set:
fields: , lines: \n escaped-by: \
optionally-enclosed-by: '
--optionally-enclosed-by <char> Sets a field enclosing character
Input parsing arguments:
--input-enclosed-by <char> Sets a required field encloser
--input-escaped-by <char> Sets the input escape
character
--input-fields-terminated-by <char> Sets the input field separator
--input-lines-terminated-by <char> Sets the input end-of-line
char
--input-optionally-enclosed-by <char> Sets a field enclosing
character
Hive arguments:
--create-hive-table Fail if the target hive
table exists
--hive-database <database-name> Sets the database name to
use when importing to hive
--hive-delims-replacement <arg> Replace Hive record \0x01
and row delimiters (\n\r)
from imported string fields
with user-defined string
--hive-drop-import-delims Drop Hive record \0x01 and
row delimiters (\n\r) from
imported string fields
--hive-home <dir> Override $HIVE_HOME
--hive-import Import tables into Hive
(Uses Hive's default
delimiters if none are
set.)
--hive-overwrite Overwrite existing data in
the Hive table
--hive-partition-key <partition-key> Sets the partition key to
use when importing to hive
--hive-partition-value <partition-value> Sets the partition value to
use when importing to hive
--hive-table <table-name> Sets the table name to use
when importing to hive
--map-column-hive <arg> Override mapping for
specific column to hive
types.
HBase arguments:
--column-family <family> Sets the target column family for the
import
--hbase-bulkload Enables HBase bulk loading
--hbase-create-table If specified, create missing HBase tables
--hbase-row-key <col> Specifies which input column to use as the
row key
--hbase-table <table> Import to <table> in HBase
HCatalog arguments:
--hcatalog-database <arg> HCatalog database name
--hcatalog-home <hdir> Override $HCAT_HOME
--hcatalog-partition-keys <partition-key> Sets the partition
keys to use when
importing to hive
--hcatalog-partition-values <partition-value> Sets the partition
values to use when
importing to hive
--hcatalog-table <arg> HCatalog table name
--hive-home <dir> Override $HIVE_HOME
--hive-partition-key <partition-key> Sets the partition key
to use when importing
to hive
--hive-partition-value <partition-value> Sets the partition
value to use when
importing to hive
--map-column-hive <arg> Override mapping for
specific column to
hive types.
HCatalog import specific options:
--create-hcatalog-table Create HCatalog before import
--hcatalog-storage-stanza <arg> HCatalog storage stanza for table
creation
Accumulo arguments:
--accumulo-batch-size <size> Batch size in bytes
--accumulo-column-family <family> Sets the target column family for
the import
--accumulo-create-table If specified, create missing
Accumulo tables
--accumulo-instance <instance> Accumulo instance name.
--accumulo-max-latency <latency> Max write latency in milliseconds
--accumulo-password <password> Accumulo password.
--accumulo-row-key <col> Specifies which input column to
use as the row key
--accumulo-table <table> Import to <table> in Accumulo
--accumulo-user <user> Accumulo user name.
--accumulo-visibility <vis> Visibility token to be applied to
all rows imported
--accumulo-zookeepers <zookeepers> Comma-separated list of
zookeepers (host:port)
Code generation arguments:
--bindir <dir> Output directory for compiled
objects
--class-name <name> Sets the generated class name.
This overrides --package-name.
When combined with --jar-file,
sets the input class.
--input-null-non-string <null-str> Input null non-string
representation
--input-null-string <null-str> Input null string representation
--jar-file <file> Disable code generation; use
specified jar
--map-column-java <arg> Override mapping for specific
columns to java types
--null-non-string <null-str> Null non-string representation
--null-string <null-str> Null string representation
--outdir <dir> Output directory for generated
code
--package-name <name> Put auto-generated classes in
this package
Generic Hadoop command-line arguments:
(must preceed any tool-specific arguments)
Generic options supported are
-conf <configuration file> specify an application configuration file
-D <property=value> use value for given property
-fs <local|namenode:port> specify a namenode
-jt <local|jobtracker:port> specify a job tracker
-files <comma separated list of files> specify comma separated files to be copied to the map reduce cluster
-libjars <comma separated list of jars> specify comma separated jar files to include in the classpath.
-archives <comma separated list of archives> specify comma separated archives to be unarchived on the compute machines.
The general command line syntax is
bin/hadoop command [genericOptions] [commandOptions]
At minimum, you must specify --connect and --table
Arguments to mysqldump and other subprograms may be supplied
after a '--' on the command line.
sqoop import --connect jdbc:mysql://192.168.2.63:3306/rg_web3_1 --username root --password 123456 --table dim_date --columns "year,quarter,month,week,day,day_of_year,yeard,monthd,dayd,date_dim_id,day_caption,month_caption,week_caption" --hive-import --hive-table bi_rg.dim_date
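A quick sanity check on the result, assuming the Hive CLI can reach the bi_rg database:
# Count the rows landed in the target Hive table
hive -e "SELECT COUNT(*) FROM bi_rg.dim_date;"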