
Debugging a WordCount MapReduce Job

2016-12-31 12:55
Version: Hadoop 2.6.5

This was my first Hadoop MapReduce program, written by following someone else's example. It took two days of debugging, which felt slow, but it works now, and digging into the failures taught me quite a bit.

[hadoop@master ~]$ cd file

[hadoop@master file]$ ls

file1.txt  file2.txt

[hadoop@master ~]$ hadoop fs -mkdir /user

[hadoop@master ~]$ hadoop fs -mkdir /user/hadoop

[hadoop@master ~]$ hadoop fs -mkdir /user/hadoop/wc_input

[hadoop@master ~]$ hadoop fs -put /home/hadoop/file/ /user/hadoop/wc_input

[hadoop@master ~]$ hadoop fs -ls hdfs://master:9000/user/hadoop/wc_input

Found 1 items

drwxr-xr-x   - hadoop supergroup          0 2016-12-31 12:15 hdfs://master:9000/user/hadoop/wc_input/file

[hadoop@master ~]$ hadoop fs -ls hdfs://master:9000/user/hadoop/wc_input/file

Found 2 items

-rw-r--r--   2 hadoop supergroup         18 2016-12-31 12:15 hdfs://master:9000/user/hadoop/wc_input/file/file1.txt

-rw-r--r--   2 hadoop supergroup         17 2016-12-31 12:15 hdfs://master:9000/user/hadoop/wc_input/file/file2.txt

-- The class name must not be supplied here (it caused the input path to be treated as the output); this took a long time to figure out. Since the JAR's manifest already names a Main-Class, hadoop jar passes everything after the JAR straight through as program arguments, so com.yu.hadoop.WordCount lands in args[0] and the input path wc_input/file shifts into args[1], which the job then uses as its output directory.

[hadoop@master ~]$ hadoop jar hadoop.jar com.yu.hadoop.WordCount wc_input/file wc_output

16/12/31 12:17:47 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032

Exception in thread "main" org.apache.hadoop.mapred.FileAlreadyExistsException:
Output directory hdfs://master:9000/user/hadoop/wc_input/file already exists

        at org.apache.hadoop.mapreduce.lib.output.FileOutputFormat.checkOutputSpecs(FileOutputFormat.java:146)

        at org.apache.hadoop.mapreduce.JobSubmitter.checkSpecs(JobSubmitter.java:267)

        at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:140)

        at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1297)

        at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1294)

        at java.security.AccessController.doPrivileged(Native Method)

        at javax.security.auth.Subject.doAs(Subject.java:422)

        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1692)

        at org.apache.hadoop.mapreduce.Job.submit(Job.java:1294)

        at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1315)

        at com.yu.hadoop.WordCount.main(WordCount.java:65)

        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)

        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)

        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)

        at java.lang.reflect.Method.invoke(Method.java:498)

        at org.apache.hadoop.util.RunJar.run(RunJar.java:221)

        at org.apache.hadoop.util.RunJar.main(RunJar.java:136)

-- Removing the class name makes the job run normally,

because MANIFEST.MF already specifies the main class:

Manifest-Version: 1.0

Main-Class: com.yu.hadoop.WordCount
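
Since the manifest already fixes the entry point, a small argument guard in main() would have caught the shifted arguments immediately. A minimal sketch (this guard is my addition, not part of the original program):

// Fail fast unless the arguments are exactly <input path> <output path>.
if (args.length != 2) {
    System.err.println("Usage: hadoop jar hadoop.jar <input path> <output path>");
    System.err.println("Got " + args.length + " argument(s): " + java.util.Arrays.toString(args));
    System.exit(2);
}

In the failing run above this would have reported three arguments, with the class name sitting in args[0].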

[hadoop@master ~]$ hadoop jar hadoop.jar wc_input/file wc_output1

16/12/31 12:18:57 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032

16/12/31 12:18:58 WARN mapreduce.JobResourceUploader: Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.

16/12/31 12:19:04 INFO input.FileInputFormat: Total input paths to process : 2

16/12/31 12:19:04 INFO mapreduce.JobSubmitter: number of splits:2

16/12/31 12:19:05 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1483155278430_0013

16/12/31 12:19:06 INFO impl.YarnClientImpl: Submitted application application_1483155278430_0013

16/12/31 12:19:06 INFO mapreduce.Job: The url to track the job: http://master:8088/proxy/application_1483155278430_0013/
16/12/31 12:19:06 INFO mapreduce.Job: Running job: job_1483155278430_0013

16/12/31 12:19:36 INFO mapreduce.Job: Job job_1483155278430_0013 running in uber mode : false

16/12/31 12:19:36 INFO mapreduce.Job:  map 0% reduce 0%

16/12/31 12:20:22 INFO mapreduce.Job:  map 100% reduce 0%

16/12/31 12:20:42 INFO mapreduce.Job:  map 100% reduce 100%

16/12/31 12:20:43 INFO mapreduce.Job: Job job_1483155278430_0013 completed successfully

16/12/31 12:20:44 INFO mapreduce.Job: Counters: 49

        File System Counters

                FILE: Number of bytes read=77

                FILE: Number of bytes written=322253

                FILE: Number of read operations=0

                FILE: Number of large read operations=0

                FILE: Number of write operations=0

                HDFS: Number of bytes read=273

                HDFS: Number of bytes written=30

                HDFS: Number of read operations=9

                HDFS: Number of large read operations=0

                HDFS: Number of write operations=2

        Job Counters 

                Launched map tasks=2

                Launched reduce tasks=1

                Data-local map tasks=2

                Total time spent by all maps in occupied slots (ms)=89921

                Total time spent by all reduces in occupied slots (ms)=15506

                Total time spent by all map tasks (ms)=89921

                Total time spent by all reduce tasks (ms)=15506

                Total vcore-milliseconds taken by all map tasks=89921

                Total vcore-milliseconds taken by all reduce tasks=15506

                Total megabyte-milliseconds taken by all map tasks=92079104

                Total megabyte-milliseconds taken by all reduce tasks=15878144

        Map-Reduce Framework

                Map input records=2

                Map output records=6

                Map output bytes=59

                Map output materialized bytes=83

                Input split bytes=238

                Combine input records=0

                Combine output records=0

                Reduce input groups=4

                Reduce shuffle bytes=83

                Reduce input records=6

                Reduce output records=4

                Spilled Records=12

                Shuffled Maps =2

                Failed Shuffles=0

                Merged Map outputs=2

                GC time elapsed (ms)=730

                CPU time spent (ms)=3270

                Physical memory (bytes) snapshot=460414976

                Virtual memory (bytes) snapshot=6309380096

                Total committed heap usage (bytes)=283058176

        Shuffle Errors

                BAD_ID=0

                CONNECTION=0

                IO_ERROR=0

                WRONG_LENGTH=0

                WRONG_MAP=0

                WRONG_REDUCE=0

        File Input Format Counters 

                Bytes Read=35

        File Output Format Counters 

                Bytes Written=30

[hadoop@master ~]$ hadoop fs -ls /user/hadoop/wc_output1

Found 2 items

-rw-r--r--   2 hadoop supergroup          0 2016-12-31 12:20 /user/hadoop/wc_output1/_SUCCESS

-rw-r--r--   2 hadoop supergroup         30 2016-12-31 12:20 /user/hadoop/wc_output1/part-r-00000

[hadoop@master ~]$ hadoop fs -cat /user/hadoop/wc_output1/_SUCCESS

[hadoop@master ~]$ hadoop fs -cat  /user/hadoop/wc_output1/part-r-00000

and     1

hadoop  2

hello   2

java    1
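
These numbers are consistent with the counters above: Map output records=6 matches the six word occurrences (1+2+2+1), Reduce input groups=4 matches the four distinct words, and the File Input Format counter Bytes Read=35 equals file1.txt (18 bytes) plus file2.txt (17 bytes).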

-- Running the job again fails because wc_output1 already exists, so I added a check to the code: if the output directory exists, delete it and overwrite.

[hadoop@master ~]$ hadoop jar hadoop.jar wc_input/file wc_output1

16/12/31 12:44:35 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032

Exception in thread "main" org.apache.hadoop.mapred.FileAlreadyExistsException: Output directory hdfs://master:9000/user/hadoop/wc_output1 already exists

        at org.apache.hadoop.mapreduce.lib.output.FileOutputFormat.checkOutputSpecs(FileOutputFormat.java:146)

        at org.apache.hadoop.mapreduce.JobSubmitter.checkSpecs(JobSubmitter.java:267)

        at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:140)

        at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1297)

        at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1294)

        at java.security.AccessController.doPrivileged(Native Method)

        at javax.security.auth.Subject.doAs(Subject.java:422)

        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1692)

        at org.apache.hadoop.mapreduce.Job.submit(Job.java:1294)

        at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1315)

        at com.yu.hadoop.WordCount.main(WordCount.java:65)

        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)

        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)

        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)

        at java.lang.reflect.Method.invoke(Method.java:498)

        at org.apache.hadoop.util.RunJar.run(RunJar.java:221)

        at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
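
The fix, visible in main() in the full source below, is to probe the output path and recursively delete it before submitting the job:

Path path = new Path(args[1]);                    // args[1] is the output directory
FileSystem fileSystem = path.getFileSystem(conf); // resolve the FileSystem that owns the path
if (fileSystem.exists(path)) {
    fileSystem.delete(path, true);                // true = recursive delete
}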

-- The full source code follows

package com.yu.hadoop;

import java.io.IOException;
import java.util.StringTokenizer;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;

public class WordCount {
    public static class WordCountMap extends Mapper<LongWritable, Text, Text, IntWritable> {
        private final IntWritable one = new IntWritable(1);
        private Text word = new Text();

        @Override
        public void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
            // Split each input line on whitespace and emit (word, 1) for every token.
            String line = value.toString();
            StringTokenizer token = new StringTokenizer(line);
            while (token.hasMoreTokens()) {
                word.set(token.nextToken());
                context.write(word, one);
            }
        }
    }

    public static class WordCountReduce extends Reducer<Text, IntWritable, Text, IntWritable> {
        @Override
        public void reduce(Text key, Iterable<IntWritable> values,
                Context context) throws IOException, InterruptedException {
            // Sum the counts for each word and emit (word, total).
            int sum = 0;
            for (IntWritable val : values) {
                sum += val.get();
            }
            context.write(key, new IntWritable(sum));
        }
    }

    @SuppressWarnings("deprecation")
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = new Job(conf); // deprecated constructor; Job.getInstance(conf) is the modern form
        job.setJarByClass(WordCount.class);
        job.setJobName("wordcount");
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        job.setMapperClass(WordCountMap.class);
        job.setReducerClass(WordCountReduce.class);
        job.setInputFormatClass(TextInputFormat.class);
        job.setOutputFormatClass(TextOutputFormat.class);
        Path path = new Path(args[1]);                    // args[1] is the output directory (args[0] is the input)
        FileSystem fileSystem = path.getFileSystem(conf); // resolve the FileSystem that owns the path
        if (fileSystem.exists(path)) {
            fileSystem.delete(path, true);                // overwrite: recursively delete an existing output dir
        }
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        job.waitForCompletion(true);
    }
}
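
One loose end: the WARN at 12:18:58 in the job log above says to implement the Tool interface and launch through ToolRunner, so that Hadoop's generic command-line options get parsed. A minimal sketch of that variant, reusing the mapper and reducer as written (the class name WordCountDriver is made up for illustration):

package com.yu.hadoop;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

// Driver implementing Tool: ToolRunner strips generic options (-D, -files, ...) before run() sees args.
public class WordCountDriver extends Configured implements Tool {

    @Override
    public int run(String[] args) throws Exception {
        Job job = Job.getInstance(getConf(), "wordcount"); // non-deprecated replacement for new Job(conf)
        job.setJarByClass(WordCountDriver.class);
        job.setMapperClass(WordCount.WordCountMap.class);
        job.setReducerClass(WordCount.WordCountReduce.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);

        Path out = new Path(args[1]);
        FileSystem fs = out.getFileSystem(getConf());
        if (fs.exists(out)) {
            fs.delete(out, true); // same overwrite behavior as the original main()
        }
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, out);
        return job.waitForCompletion(true) ? 0 : 1;
    }

    public static void main(String[] args) throws Exception {
        System.exit(ToolRunner.run(new Configuration(), new WordCountDriver(), args));
    }
}

It is launched the same way, and generic options such as -D mapreduce.job.reduces=2 would then take effect before the job is configured.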


Reference: http://www.cnblogs.com/quchunhui/p/5421727.html