The simplest WordCount program in MapReduce
2016-11-20 16:39
1. First, create a WCMapper class:
package com.zhichao.wan.mr.wordcount1;

import java.io.IOException;

import org.apache.commons.lang.StringUtils;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class WCMapper extends Mapper<LongWritable, Text, Text, LongWritable> {

    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        // 1. Get the current line as a string
        String line = value.toString();
        // 2. Split the line into individual words
        String[] words = StringUtils.split(line, " ");
        // 3. Emit <word, 1> for each word
        for (String word : words) {
            context.write(new Text(word), new LongWritable(1));
        }
    }
}
2. Then create a WCReducer class:
package com.zhichao.wan.mr.wordcount1;

import java.io.IOException;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class WCReducer extends Reducer<Text, LongWritable, Text, LongWritable> {

    @Override
    protected void reduce(Text key, Iterable<LongWritable> values, Context context)
            throws IOException, InterruptedException {
        // Sum all the 1s emitted by the mappers for this word
        long count = 0;
        for (LongWritable value : values) {
            count += value.get();
        }
        context.write(key, new LongWritable(count));
    }
}
3. Finally, create a WCRunner class that configures and submits the job:
package com.zhichao.wan.mr.wordcount1;

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WCRunner {

    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance();
        // The jar containing this class is shipped to the cluster
        job.setJarByClass(WCRunner.class);

        job.setMapperClass(WCMapper.class);
        job.setReducerClass(WCReducer.class);

        // Key/value types for the map output and the final output
        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(LongWritable.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(LongWritable.class);

        // HDFS input directory; the output directory must not exist yet
        FileInputFormat.setInputPaths(job, new Path("/wc/srcdata/"));
        FileOutputFormat.setOutputPath(job, new Path("/wc/output"));

        job.waitForCompletion(true);
    }
}
5. Package the program as a jar and copy it to the Linux machine.
6. Create a test file.
7. Run the jar, specifying the fully qualified name of the main class.
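The original post showed these steps as screenshots. They can be sketched as shell commands; the jar name, remote host, and sample file contents below are assumptions, while the HDFS paths match those hard-coded in WCRunner:

```shell
# Step 5: package the compiled classes into a jar and copy it to the Linux machine
# (run from the directory containing the compiled .class files; host is hypothetical)
jar cf wc.jar com/zhichao/wan/mr/wordcount1/*.class
scp wc.jar user@linux-host:~/

# Step 6: create a test file and upload it to the input directory used by WCRunner
printf 'hello world\nhello hadoop\n' > words.txt
hadoop fs -mkdir -p /wc/srcdata
hadoop fs -put words.txt /wc/srcdata/

# Step 7: run the jar, naming the driver class
hadoop jar wc.jar com.zhichao.wan.mr.wordcount1.WCRunner
```

Note that the job will fail if /wc/output already exists in HDFS; remove it first with `hadoop fs -rm -r /wc/output` when re-running.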
8. Check the run result.
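The job's output (a part-r-00000 file under /wc/output, with one word and its count per line) was shown as a screenshot in the original post. As a sanity check of what WordCount should produce, the same counting can be done locally with standard tools; the sample input here is an assumption:

```shell
# Hypothetical two-line input
printf 'hello world\nhello hadoop\n' > words.txt

# One word per line, then count occurrences of each word, sorted by word,
# in the same "word<TAB>count" layout the reducer writes
tr ' ' '\n' < words.txt | sort | uniq -c | awk '{print $2 "\t" $1}'
# → hadoop  1
#   hello   2
#   world   1
```

On the cluster, the real output can be viewed with `hadoop fs -cat /wc/output/part-r-00000`.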