
A Classic MapReduce Example: Merging Documents

2014-01-09 17:25
Resource file file1:
hadoop
test
hello
word
Resource file file2:
happy
birthday
this
is
a
test
Final result (the merged file; duplicate lines are kept once and the output is sorted by key):
a
birthday
hadoop
happy
hello
is
test
this
word
Analysis: Merging two files into a single file is a simple case. The idea is to set the value to empty, so that only the key needs to be written to the output. In the map phase, every line of the two input files becomes a key with an empty value. In the reduce phase, we simply write out the keys that the framework has already grouped and sorted for us. Note that because reduce receives each distinct key exactly once, a line that occurs in both files (such as "test") appears only once in the merged file, and the lines come out in sorted order.
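To make the data flow concrete, here is a hand-traced sketch (not actual program output) of the key/value pairs as they move through the job:

map output (key, value), one pair per input line:
    (hadoop, ""), (test, ""), (hello, ""), (word, "")                          from file1
    (happy, ""), (birthday, ""), (this, ""), (is, ""), (a, ""), (test, "")    from file2

reduce input, after the shuffle groups and sorts the keys:
    a -> [""], birthday -> [""], hadoop -> [""], happy -> [""], hello -> [""],
    is -> [""], test -> ["", ""], this -> [""], word -> [""]

reduce output: one line per distinct key, i.e. the merged file shown above.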

Implementation:

package com.bwzy.hadoop;

import java.io.IOException;

import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

public class HeBing extends Configured implements Tool {

    // Map phase: emit each input line as the key with an empty value,
    // so that the framework groups identical lines together in the shuffle.
    public static class Map extends Mapper<LongWritable, Text, Text, Text> {
        @Override
        public void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            String line = value.toString();
            context.write(new Text(line), new Text(""));
        }
    }

    // Reduce phase: each distinct key arrives exactly once, so writing the
    // key with an empty value produces the merged (deduplicated, sorted) file.
    public static class Reduce extends Reducer<Text, Text, Text, Text> {
        @Override
        public void reduce(Text key, Iterable<Text> values, Context context)
                throws IOException, InterruptedException {
            context.write(key, new Text(""));
        }
    }

    @Override
    public int run(String[] args) throws Exception {
        Job job = new Job(getConf());
        // Tell Hadoop which jar to ship to the cluster for this job.
        job.setJarByClass(HeBing.class);
        job.setJobName("HeBing");

        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(Text.class);

        job.setMapperClass(Map.class);
        // The reducer only forwards keys, so it is safe to reuse as a combiner.
        job.setCombinerClass(Reduce.class);
        job.setReducerClass(Reduce.class);

        job.setInputFormatClass(TextInputFormat.class);
        job.setOutputFormatClass(TextOutputFormat.class);

        FileInputFormat.setInputPaths(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));

        boolean success = job.waitForCompletion(true);
        return success ? 0 : 1;
    }

    public static void main(String[] args) throws Exception {
        int ret = ToolRunner.run(new HeBing(), args);
        System.exit(ret);
    }
}
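Because HeBing extends Configured and is launched through ToolRunner, Hadoop's generic options can be supplied on the command line ahead of the program arguments. A minimal sketch (the jar name, the paths, and the single-reducer setting are placeholders, not part of the original article):

hadoop jar HeBing.jar com.bwzy.hadoop.HeBing -D mapred.reduce.tasks=1 /your/path/input /your/path/output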

Running:
1. Package the program: select the classes to package, right-click, Export -> Java -> JAR file, enter a save path, and click Finish. (A command-line alternative is sketched after this list.)
2. Copy the jar into the Hadoop installation directory, since the program depends on Hadoop's jars.
3. Upload the resource files to an HDFS directory of your choice. With Hadoop already running, create the directory:
hadoop fs -mkdir /your/path/input
Then upload the local resource files to HDFS:
hadoop fs -copyFromLocal /home/user/Document/file1 /your/path/input
hadoop fs -copyFromLocal /home/user/Document/file2 /your/path/input
4. Run the MapReduce program:
hadoop jar /home/user/hadoop-1.0.4/HeBing.jar com.bwzy.hadoop.HeBing /your/path/input /your/path/output
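As a command-line alternative to the Eclipse export in step 1, the jar can also be built by hand. This is a minimal sketch that assumes the source sits under src/, compiled classes go to bin/, and HADOOP_HOME points at the Hadoop 1.0.4 installation (all of these paths are assumptions; additional jars from $HADOOP_HOME/lib may also be needed on the classpath):

mkdir -p bin
javac -classpath $HADOOP_HOME/hadoop-core-1.0.4.jar -d bin src/com/bwzy/hadoop/HeBing.java
jar cf HeBing.jar -C bin .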
Note: Hadoop creates the /your/path/output directory automatically when the job runs. That directory contains two files, one of which holds the MapReduce result. To rerun the program you must first delete /your/path/output, otherwise Hadoop reports that the output already exists.
5. The result is the merged file shown at the top: each distinct line appears once, in sorted order (a, birthday, hadoop, happy, hello, is, test, this, word).
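To inspect the result directly from HDFS, the reducer's output file can be printed with hadoop fs -cat (the part file name below assumes the default single reducer of the new MapReduce API):

hadoop fs -cat /your/path/output/part-r-00000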