HDFS, Hive, and Partitions
2018-03-12 14:23
[wangshumin@centoshostnameKL2 ~]$ hdfs dfs -cat /user/hive/warehouse/hive2_db1.db/stuin
cat: `/user/hive/warehouse/hive2_db1.db/stuin': Is a directory
[wangshumin@centoshostnameKL2 ~]$ hdfs dfs -rm r /user/hive/warehouse/hive2_db1.db/stuin
rm: `r': No such file or directory
rm: `/user/hive/warehouse/hive2_db1.db/stuin': Is a directory
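The two errors above come from one typo: the recursive flag was written as a bare `r`, so HDFS treated it as a path. A hedged sketch of the corrected commands (these require a running HDFS cluster, so they are not runnable locally):

```shell
# The failed command passed "r" as a path instead of the -r flag.
# Deleting a directory needs -r (or -R):
hdfs dfs -rm -r /user/hive/warehouse/hive2_db1.db/stuin

# Optionally bypass the trash; the log further down shows
# "Deletion interval = 0 minutes", i.e. trash is effectively disabled here:
hdfs dfs -rm -r -skipTrash /user/hive/warehouse/hive2_db1.db/stuin
```

Note that plain `-rm` on a directory fails ("Is a directory"), and `-rmdir` only removes empty directories.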
[wangshumin@centoshostnameKL2 ~]$ hdfs dfs -rm /user/hive/warehouse/hive2_db1.db/stuout
rm: `/user/hive/warehouse/hive2_db1.db/stuout': Is a directory
[wangshumin@centoshostnameKL2 ~]$ hdfs dfs
Usage: hadoop fs [generic options]
[-appendToFile <localsrc> ... <dst>]
[-cat [-ignoreCrc] <src> ...]
[-checksum <src> ...]
[-chgrp [-R] GROUP PATH...]
[-chmod [-R] <MODE[,MODE]... | OCTALMODE> PATH...]
[-chown [-R] [OWNER][:[GROUP]] PATH...]
[-copyFromLocal [-f] [-p] <localsrc> ... <dst>]
[-copyToLocal [-p] [-ignoreCrc] [-crc] <src> ... <localdst>]
[-count [-q] <path> ...]
[-cp [-f] [-p] <src> ... <dst>]
[-createSnapshot <snapshotDir> [<snapshotName>]]
[-deleteSnapshot <snapshotDir> <snapshotName>]
[-df [-h] [<path> ...]]
[-du [-s] [-h] <path> ...]
[-expunge]
[-get [-p] [-ignoreCrc] [-crc] <src> ... <localdst>]
[-getfacl [-R] <path>]
[-getmerge [-nl] <src> <localdst>]
[-help [cmd ...]]
[-ls [-d] [-h] [-R] [<path> ...]]
[-mkdir [-p] <path> ...]
[-moveFromLocal <localsrc> ... <dst>]
[-moveToLocal <src> <localdst>]
[-mv <src> ... <dst>]
[-put [-f] [-p] <localsrc> ... <dst>]
[-renameSnapshot <snapshotDir> <oldName> <newName>]
[-rm [-f] [-r|-R] [-skipTrash] <src> ...]
[-rmdir [--ignore-fail-on-non-empty] <dir> ...]
[-setfacl [-R] [{-b|-k} {-m|-x <acl_spec>} <path>]|[--set <acl_spec> <path>]]
[-setrep [-R] [-w] <rep> <path> ...]
[-stat [format] <path> ...]
[-tail [-f] <file>]
[-test -[defsz] <path>]
[-text [-ignoreCrc] <src> ...]
[-touchz <path> ...]
[-usage [cmd ...]]
Generic options supported are
-conf <configuration file> specify an application configuration file
-D <property=value> use value for given property
-fs <local|namenode:port> specify a namenode
-jt <local|jobtracker:port> specify a job tracker
-files <comma separated list of files> specify comma separated files to be copied to the map reduce cluster
-libjars <comma separated list of jars> specify comma separated jar files to include in the classpath.
-archives <comma separated list of archives> specify comma separated archives to be unarchived on the compute machines.
The general command line syntax is
bin/hadoop command [genericOptions] [commandOptions]
[wangshumin@centoshostnameKL2 ~]$ hdfs dfs -rmdir /user/hive/warehouse/hive2_db1.db/stuout
[wangshumin@centoshostnameKL2 ~]$ hdfs dfs -ls /
Found 7 items
drwxr-xr-x - wangshumin supergroup 0 2018-02-09 07:32 /data
drwxr-xr-x - wangshumin supergroup 0 2018-02-09 07:41 /dataload_balance
drwxr-xr-x - wangshumin supergroup 0 2018-02-09 07:18 /flumedata2
drwxr-xr-x - wangshumin supergroup 0 2018-02-26 15:35 /hbase
drwxr-xr-x - wangshumin supergroup 0 2018-03-12 13:16 /hivedata
drwx-wx-wx - wangshumin supergroup 0 2018-03-12 13:04 /tmp
drwxr-xr-x - wangshumin supergroup 0 2018-02-09 06:50 /user
[wangshumin@centoshostnameKL2 ~]$ hdfs dfs -rmdir /hivedata
[wangshumin@centoshostnameKL2 ~]$ hdfs dfs -ls /
Found 6 items
drwxr-xr-x - wangshumin supergroup 0 2018-02-09 07:32 /data
drwxr-xr-x - wangshumin supergroup 0 2018-02-09 07:41 /dataload_balance
drwxr-xr-x - wangshumin supergroup 0 2018-02-09 07:18 /flumedata2
drwxr-xr-x - wangshumin supergroup 0 2018-02-26 15:35 /hbase
drwx-wx-wx - wangshumin supergroup 0 2018-03-12 13:04 /tmp
drwxr-xr-x - wangshumin supergroup 0 2018-02-09 06:50 /user
[wangshumin@centoshostnameKL2 ~]$ hdfs dfs -ls /user/hive/warehouse/hive2_db1.db/
Found 4 items
drwxr-xr-x - wangshumin supergroup 0 2018-03-12 11:31 /user/hive/warehouse/hive2_db1.db/stu1
drwxr-xr-x - wangshumin supergroup 0 2018-03-12 11:48 /user/hive/warehouse/hive2_db1.db/stu2
drwxr-xr-x - wangshumin supergroup 0 2018-03-12 11:48 /user/hive/warehouse/hive2_db1.db/stu3
drwxr-xr-x - wangshumin supergroup 0 2018-03-12 13:13 /user/hive/warehouse/hive2_db1.db/stuin
[wangshumin@centoshostnameKL2 ~]$ hdfs dfs -ls /user/hive/warehouse/hive2_db1.db/
Found 4 items
drwxr-xr-x - wangshumin supergroup 0 2018-03-12 11:31 /user/hive/warehouse/hive2_db1.db/stu1
drwxr-xr-x - wangshumin supergroup 0 2018-03-12 11:48 /user/hive/warehouse/hive2_db1.db/stu2
drwxr-xr-x - wangshumin supergroup 0 2018-03-12 11:48 /user/hive/warehouse/hive2_db1.db/stu3
drwxr-xr-x - wangshumin supergroup 0 2018-03-12 13:13 /user/hive/warehouse/hive2_db1.db/stuin
[wangshumin@centoshostnameKL2 ~]$ hdfs dfs -put stu3 /user/hive/warehouse/hive2_db1.db/stuout
[wangshumin@centoshostnameKL2 ~]$ hdfs dfs -ls /user/hive/warehouse/hive2_db1.db/stuout
-rw-r--r-- 3 wangshumin supergroup 53 2018-03-12 13:22 /user/hive/warehouse/hive2_db1.db/stuout
[wangshumin@centoshostnameKL2 ~]$ hdfs dfs -cat /user/hive/warehouse/hive2_db1.db/stuout
1 , zhangshan , 20
2 , wangwu , 19
3 , xiaolu , 26
[wangshumin@centoshostnameKL2 ~]$ hdfs dfs -put stu3 /user/hive/hivedata
[wangshumin@centoshostnameKL2 ~]$ hdfs dfs -cat /user/hive/hivedata/stu3
1 , zhangshan , 20
2 , wangwu , 19
3 , xiaolu , 26
[wangshumin@centoshostnameKL2 ~]$ hdfs dfs -rm /user/hive/hivedata/stu3
18/03/12 13:31:51 INFO fs.TrashPolicyDefault: Namenode trash configuration: Deletion interval = 0 minutes, Emptier interval = 0 minutes.
Deleted /user/hive/hivedata/stu3
[wangshumin@centoshostnameKL2 ~]$ hdfs dfs -ls /user/hive/hivedata
[wangshumin@centoshostnameKL2 ~]$ hdfs dfs -put stu3 /user/hive/hivedata
[wangshumin@centoshostnameKL2 ~]$ hdfs dfs -cat /user/hive/hivedata/stu3
1 , zhangshan , 20
2 , wangwu , 19
3 , xiaolu , 26
[wangshumin@centoshostnameKL2 ~]$ hdfs dfs -cat /user/hive/hivedata/stu3
1 , zhangshan , 20
2 , wangwu , 19
3 , xiaolu , 26
[wangshumin@centoshostnameKL2 ~]$ hdfs dfs -ls /user/hive/warehouse/hive2_db1.db
Found 5 items
drwxr-xr-x - wangshumin supergroup 0 2018-03-12 11:31 /user/hive/warehouse/hive2_db1.db/stu1
drwxr-xr-x - wangshumin supergroup 0 2018-03-12 11:48 /user/hive/warehouse/hive2_db1.db/stu2
drwxr-xr-x - wangshumin supergroup 0 2018-03-12 11:48 /user/hive/warehouse/hive2_db1.db/stu3
drwxr-xr-x - wangshumin supergroup 0 2018-03-12 13:13 /user/hive/warehouse/hive2_db1.db/stuin
-rw-r--r-- 3 wangshumin supergroup 53 2018-03-12 13:22 /user/hive/warehouse/hive2_db1.db/stuout
[wangshumin@centoshostnameKL2 ~]$ vim hsql
[wangshumin@centoshostnameKL2 ~]$ cat hsql
create external table stuout( id int , name String , age int )
row format delimited
fields terminated by ','
location "/user/hive/hivedata"
;
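The `hsql` file above declares `stuout` as an external table whose LOCATION is the directory `/user/hive/hivedata`. A hedged sketch (assumes a running Hive and HDFS) of what that implies:

```shell
# Because stuout is EXTERNAL with LOCATION /user/hive/hivedata, any file
# put into that directory (like the stu3 upload above) becomes table data,
# and dropping the table removes only the metastore entry, not the files:
hive -e 'use hive2_db1; drop table stuout;'
hdfs dfs -ls /user/hive/hivedata    # the stu3 file would still be listed
```

This is the main difference from a managed table, where DROP TABLE also deletes the warehouse directory.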
[wangshumin@centoshostnameKL2 ~]$ vim stu2
[wangshumin@centoshostnameKL2 ~]$ pwd
/home/wangshumin
[wangshumin@centoshostnameKL2 ~]$ hdfs dfs -ls /user/hive/warehouse/hive2_db1.db
Found 6 items
drwxr-xr-x - wangshumin supergroup 0 2018-03-12 11:31 /user/hive/warehouse/hive2_db1.db/stu1
drwxr-xr-x - wangshumin supergroup 0 2018-03-12 13:59 /user/hive/warehouse/hive2_db1.db/stu12
drwxr-xr-x - wangshumin supergroup 0 2018-03-12 11:48 /user/hive/warehouse/hive2_db1.db/stu2
drwxr-xr-x - wangshumin supergroup 0 2018-03-12 11:48 /user/hive/warehouse/hive2_db1.db/stu3
drwxr-xr-x - wangshumin supergroup 0 2018-03-12 13:13 /user/hive/warehouse/hive2_db1.db/stuin
-rw-r--r-- 3 wangshumin supergroup 53 2018-03-12 13:22 /user/hive/warehouse/hive2_db1.db/stuout
[wangshumin@centoshostnameKL2 ~]$ hdfs dfs -cat /user/hive/warehouse/hive2_db1.db/stu12
cat: `/user/hive/warehouse/hive2_db1.db/stu12': Is a directory
[wangshumin@centoshostnameKL2 ~]$ hdfs dfs -ls /user/hive/warehouse/hive2_db1.db/stu12
Found 3 items
drwxr-xr-x - wangshumin supergroup 0 2018-03-12 13:56 /user/hive/warehouse/hive2_db1.db/stu12/coutr=10000
drwxr-xr-x - wangshumin supergroup 0 2018-03-12 13:58 /user/hive/warehouse/hive2_db1.db/stu12/coutr=10001
drwxr-xr-x - wangshumin supergroup 0 2018-03-12 13:59 /user/hive/warehouse/hive2_db1.db/stu12/coutr=10002
[wangshumin@centoshostnameKL2 ~]$ hdfs dfs -ls /user/hive/warehouse/hive2_db1.db/stu12/coutr=10000
Found 2 items
-rwxr-xr-x 3 wangshumin supergroup 69 2018-03-12 13:55 /user/hive/warehouse/hive2_db1.db/stu12/coutr=10000/stu2
-rwxr-xr-x 3 wangshumin supergroup 69 2018-03-12 13:56 /user/hive/warehouse/hive2_db1.db/stu12/coutr=10000/stu2_copy_1
[wangshumin@centoshostnameKL2 ~]$ hdfs dfs -cat /user/hive/warehouse/hive2_db1.db/stu12/coutr=10000/stu2
1 ,zhangshan ,20 ,1000
2 ,wangwu , 19 ,1100
3 ,xiaolu , 26 ,1200
[wangshumin@centoshostnameKL2 ~]$ hdfs dfs -cat /user/hive/warehouse/hive2_db1.db/stu12/coutr=10000/stu2_copy_1
1 ,zhangshan ,20 ,1000
2 ,wangwu , 19 ,1100
3 ,xiaolu , 26 ,1200
[wangshumin@centoshostnameKL2 ~]$
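The listing above shows how Hive encodes the partition column: the value lives in the directory name (`coutr=10000`), not inside the data files. A hedged sketch (assumes a running Hive) of why that layout matters:

```shell
# A filter on the partition column prunes whole directories
# (only coutr=10000 is read; the 10001 and 10002 directories are skipped):
hive -e 'use hive2_db1; select * from stu12 where coutr = "10000";'
```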
hive> load data local inpath "/home/wangshumin/stu2" into table stu12 partition(coutr="10000");
Loading data to table hive2_db1.stu12 partition (coutr=10000)
Partition hive2_db1.stu12{coutr=10000} stats: [numFiles=1, numRows=0, totalSize=69, rawDataSize=0]
OK
Time taken: 0.716 seconds
hive> load data local inpath "/home/wangshumin/stu2" into table stu12 partition(coutr="10000");
Loading data to table hive2_db1.stu12 partition (coutr=10000)
Partition hive2_db1.stu12{coutr=10000} stats: [numFiles=2, numRows=0, totalSize=138, rawDataSize=0]
OK
Time taken: 0.527 seconds
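Running the same LOAD twice appended a second copy of the file (hence `stu2_copy_1` and `numFiles=2` in the stats). A hedged sketch (assumes a running Hive) of the alternative:

```shell
# INTO TABLE appends; OVERWRITE replaces the partition's existing files
# instead of accumulating copies:
hive -e 'use hive2_db1;
load data local inpath "/home/wangshumin/stu2"
overwrite into table stu12 partition(coutr="10000");'
```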
hive> select * from stu12;
OK
NULL zhangshan NULL 10000
NULL wangwu NULL 10000
NULL xiaolu NULL 10000
NULL zhangshan NULL 10000
NULL wangwu NULL 10000
NULL xiaolu NULL 10000
Time taken: 0.089 seconds, Fetched: 6 row(s)
hive>
>
>
>
> load data local inpath "/home/wangshumin/stu2" into table stu12 partition(coutr="10001");
Loading data to table hive2_db1.stu12 partition (coutr=10001)
Partition hive2_db1.stu12{coutr=10001} stats: [numFiles=1, numRows=0, totalSize=69, rawDataSize=0]
OK
Time taken: 0.51 seconds
hive> select * from stu12;
OK
NULL zhangshan NULL 10000
NULL wangwu NULL 10000
NULL xiaolu NULL 10000
NULL zhangshan NULL 10000
NULL wangwu NULL 10000
NULL xiaolu NULL 10000
NULL zhangshan NULL 10001
NULL wangwu NULL 10001
NULL xiaolu NULL 10001
Time taken: 0.131 seconds, Fetched: 9 row(s)
hive> load data local inpath "/home/wangshumin/stu2" into table stu12 partition(coutr="10002");
Loading data to table hive2_db1.stu12 partition (coutr=10002)
Partition hive2_db1.stu12{coutr=10002} stats: [numFiles=1, numRows=0, totalSize=69, rawDataSize=0]
OK
Time taken: 0.397 seconds
hive> select * from stu12;
OK
NULL zhangshan NULL 10000
NULL wangwu NULL 10000
NULL xiaolu NULL 10000
NULL zhangshan NULL 10000
NULL wangwu NULL 10000
NULL xiaolu NULL 10000
NULL zhangshan NULL 10001
NULL wangwu NULL 10001
NULL xiaolu NULL 10001
NULL zhangshan NULL 10002
NULL wangwu NULL 10002
NULL xiaolu NULL 10002
Time taken: 0.084 seconds, Fetched: 12 row(s)
hive>
>
>
>
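Every `id` and `age` in the query results above is NULL while `name` and the partition column survive. The cause is visible in the data files: fields are delimited with " , " rather than a bare ",", so after splitting on the comma each field keeps its space padding, and padded numerals fail Hive's int cast. A minimal local demonstration of the split:

```shell
# Split a sample line the way the delimited SerDe does (strictly on ','):
line='1 , zhangshan , 20'
printf '%s\n' "$line" | {
  IFS=',' read -r id name age
  printf '[%s][%s][%s]\n' "$id" "$name" "$age"
}
# → [1 ][ zhangshan ][ 20]
# "1 " and " 20" are not valid int literals, so those columns become NULL;
# a padded string is still a valid string, so name is kept (with padding).
```

Removing the spaces in the source files (or declaring the columns as string and trimming/casting in queries) makes the int columns load correctly.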