Uploading a Local File to HDFS - Hello World
2014-02-05 19:23
[hadoop@Master ~]$ hadoop dfs -ls
ls: Cannot access .: No such file or directory.
The listing fails because the user's HDFS home directory (/user/hadoop) does not exist yet; it is created implicitly by the -mkdir below.
[hadoop@Master ~]$ hadoop dfs -put /input in
put: File /input does not exist.
The upload fails too: -put takes a local source path first, and /input is an absolute path that does not exist on the local filesystem (the data actually sits in ~/input, as the relative-path upload further down shows).
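A hedged aside: since `-put <localsrc> <dst>` resolves its source on the local filesystem, a simple existence check reproduces the failure mode seen above. The `src=/input` value is the path from the transcript; the check itself is ordinary shell, not a Hadoop feature.

```shell
# -put reads its first argument from the LOCAL filesystem, so verify it there
# before uploading; /input is the (nonexistent) absolute path tried above.
src=/input
if [ -e "$src" ]; then
    hadoop dfs -put "$src" in
else
    echo "put: File $src does not exist."
fi
```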
[hadoop@Master ~]$ hadoop dfs -ls
ls: Cannot access .: No such file or directory.
[hadoop@Master ~]$ hadoop dfs -mkdir in
[hadoop@Master ~]$ hadoop dfs -ls
Found 1 items
drwxr-xr-x - hadoop supergroup 0 2014-02-05 19:07 /user/hadoop/in
[hadoop@Master ~]$ hadoop dfs -put /input in
put: File /input does not exist.
[hadoop@Master ~]$ hadoop dfs -put input in
[hadoop@Master ~]$ hadoop dfs -ls
Found 1 items
drwxr-xr-x - hadoop supergroup 0 2014-02-05 19:07 /user/hadoop/in
[hadoop@Master ~]$ hadoop dfs -ls in
Found 1 items
drwxr-xr-x - hadoop supergroup 0 2014-02-05 19:07 /user/hadoop/in/input
[hadoop@Master ~]$ hadoop dfs -ls in/input
Found 2 items
-rw-r--r-- 1 hadoop supergroup 12 2014-02-05 19:07 /user/hadoop/in/input/test1.txt
-rw-r--r-- 1 hadoop supergroup 13 2014-02-05 19:07 /user/hadoop/in/input/test2.txt
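The transcript never shows the contents of the two test files, but the listed sizes (12 and 13 bytes) together with the final wordcount output suggest they were something like the following. This is a reconstruction, not the author's exact files:

```shell
# Hypothetical reconstruction of the local input files; contents are inferred
# from the listed byte sizes (12 and 13) and the wordcount result at the end.
mkdir -p input
printf 'hello world\n'  > input/test1.txt   # 12 bytes
printf 'hello hadoop\n' > input/test2.txt   # 13 bytes
wc -c input/test1.txt input/test2.txt
```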
Note the changed prompt: the wordcount job is launched from the Hadoop installation directory, where hadoop-0.20.2-examples.jar lives.
[hadoop@Master hadoop]$ hadoop jar hadoop-0.20.2-examples.jar wordcount in/input/ out
14/02/05 19:16:10 INFO input.FileInputFormat: Total input paths to process : 2
14/02/05 19:16:10 INFO mapred.JobClient: Running job: job_201402051900_0001
14/02/05 19:16:11 INFO mapred.JobClient: map 0% reduce 0%
14/02/05 19:16:24 INFO mapred.JobClient: map 50% reduce 0%
14/02/05 19:16:30 INFO mapred.JobClient: map 100% reduce 0%
14/02/05 19:16:33 INFO mapred.JobClient: map 100% reduce 16%
14/02/05 19:16:43 INFO mapred.JobClient: map 100% reduce 100%
14/02/05 19:16:45 INFO mapred.JobClient: Job complete: job_201402051900_0001
14/02/05 19:16:45 INFO mapred.JobClient: Counters: 18
14/02/05 19:16:45 INFO mapred.JobClient: Job Counters
14/02/05 19:16:45 INFO mapred.JobClient: Launched reduce tasks=1
14/02/05 19:16:45 INFO mapred.JobClient: Rack-local map tasks=1
14/02/05 19:16:45 INFO mapred.JobClient: Launched map tasks=2
14/02/05 19:16:45 INFO mapred.JobClient: Data-local map tasks=1
14/02/05 19:16:45 INFO mapred.JobClient: FileSystemCounters
14/02/05 19:16:45 INFO mapred.JobClient: FILE_BYTES_READ=55
14/02/05 19:16:45 INFO mapred.JobClient: HDFS_BYTES_READ=25
14/02/05 19:16:45 INFO mapred.JobClient: FILE_BYTES_WRITTEN=180
14/02/05 19:16:45 INFO mapred.JobClient: HDFS_BYTES_WRITTEN=25
14/02/05 19:16:45 INFO mapred.JobClient: Map-Reduce Framework
14/02/05 19:16:45 INFO mapred.JobClient: Reduce input groups=3
14/02/05 19:16:45 INFO mapred.JobClient: Combine output records=4
14/02/05 19:16:45 INFO mapred.JobClient: Map input records=2
14/02/05 19:16:45 INFO mapred.JobClient: Reduce shuffle bytes=61
14/02/05 19:16:45 INFO mapred.JobClient: Reduce output records=3
14/02/05 19:16:45 INFO mapred.JobClient: Spilled Records=8
14/02/05 19:16:45 INFO mapred.JobClient: Map output bytes=41
14/02/05 19:16:45 INFO mapred.JobClient: Combine input records=4
14/02/05 19:16:45 INFO mapred.JobClient: Map output records=4
14/02/05 19:16:45 INFO mapred.JobClient: Reduce input records=4
[hadoop@Master hadoop]$ hadoop dfs -ls
Found 2 items
drwxr-xr-x - hadoop supergroup 0 2014-02-05 19:07 /user/hadoop/in
drwxr-xr-x - hadoop supergroup 0 2014-02-05 19:16 /user/hadoop/out
[hadoop@Master hadoop]$ hadoop dfs -ls ./out
Found 2 items
drwxr-xr-x - hadoop supergroup 0 2014-02-05 19:16 /user/hadoop/out/_logs
-rw-r--r-- 1 hadoop supergroup 25 2014-02-05 19:16 /user/hadoop/out/part-r-00000
[hadoop@Master hadoop]$ hadoop dfs -cat ./out/part-r-00000
hadoop 1
hello 2
world 1
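As a sanity check, the same counts can be reproduced locally with standard Unix tools, again assuming the reconstructed file contents "hello world" and "hello hadoop":

```shell
# Local equivalent of the wordcount job, assuming the two input lines below:
# tr splits on spaces, sort groups equal words, uniq -c counts them,
# and awk reorders the columns to match the part-r-00000 layout.
printf 'hello world\nhello hadoop\n' \
  | tr ' ' '\n' | sort | uniq -c | awk '{print $2"\t"$1}'
# → hadoop 1, hello 2, world 1
```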