
Hadoop MapReduce Program Applications, Part 3

Abstract: a MapReduce program that performs data deduplication.

Keywords: MapReduce, data deduplication

Data source: two hand-crafted log files, log-file1.txt and log-file2.txt.

Contents of log-file1.txt:

2014-1-1 wangluqing
2014-1-2 root
2014-1-3 root
2014-1-4 wangluqing
2014-1-5 root
2014-1-6 wangluqing

Contents of log-file2.txt:

2014-1-1 root
2014-1-2 root
2014-1-3 wangluqing
2014-1-4 wangluqing
2014-1-5 wangluqing
2014-1-6 root

Problem statement: merge log-file1.txt and log-file2.txt and eliminate duplicate records, so that every distinct record appears exactly once in the output.

Solution:

1. Environment: VM10 + Ubuntu 12.04 + Hadoop 1.1.2

2. Design: deduplication means that any record occurring more than once in the raw data appears exactly once in the output file. MapReduce delivers each distinct key to reduce() exactly once, so emitting every input line as a key (with an empty value) lets the framework's key-uniqueness guarantee do the deduplication for us.

Program listing:

package com.wangluqing;

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.GenericOptionsParser;

public class DeleteDataDuplication {

    public static class DeleteDataDuplicationMapper extends Mapper<Object, Text, Text, Text> {
        private static final Text EMPTY = new Text("");

        // Emit the entire input line as the key; the value carries no
        // information. The shuffle phase groups identical lines under a
        // single key, which is what makes the deduplication work.
        public void map(Object key, Text value, Context context)
                throws IOException, InterruptedException {
            context.write(value, EMPTY);
        }
    }

    public static class DeleteDataDuplicationReducer extends Reducer<Text, Text, Text, Text> {
        private static final Text EMPTY = new Text("");

        // reduce() is called exactly once per distinct key, so writing the
        // key a single time discards all of its duplicates.
        public void reduce(Text key, Iterable<Text> values, Context context)
                throws IOException, InterruptedException {
            context.write(key, EMPTY);
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        String[] otherArgs = new GenericOptionsParser(conf, args).getRemainingArgs();
        if (otherArgs.length != 2) {
            System.err.println("Usage: DeleteDataDuplication <in> <out>");
            System.exit(2);
        }
        Job job = new Job(conf, "delete data duplication");
        job.setJarByClass(DeleteDataDuplication.class);
        job.setMapperClass(DeleteDataDuplicationMapper.class);
        // The reducer is idempotent, so it also serves as a combiner to
        // shrink map output before the shuffle.
        job.setCombinerClass(DeleteDataDuplicationReducer.class);
        job.setReducerClass(DeleteDataDuplicationReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(Text.class);
        FileInputFormat.addInputPath(job, new Path(otherArgs[0]));
        FileOutputFormat.setOutputPath(job, new Path(otherArgs[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}

3. Running the program

For the detailed steps to compile and run the program, see the "Running the program" section of the earlier post 《Hadoop之MapReduce程序应用二》 (Hadoop MapReduce Program Applications, Part 2). A typical invocation looks like the sketch below.
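A minimal sketch, assuming the class above has been compiled and packaged into a jar named dedup.jar and that the HDFS paths dedup_in and dedup_out are free to use (the jar name and paths are illustrative, not from the original post):

hadoop fs -mkdir dedup_in                        # HDFS input directory
hadoop fs -put log-file1.txt log-file2.txt dedup_in
hadoop jar dedup.jar com.wangluqing.DeleteDataDuplication dedup_in dedup_out
hadoop fs -cat dedup_out/part-r-00000            # inspect the result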

The deduplicated result looks like this:

2014-1-1 root
2014-1-1 wangluqing
2014-1-2 root
2014-1-3 root
2014-1-3 wangluqing
2014-1-4 wangluqing
2014-1-5 root
2014-1-5 wangluqing
2014-1-6 root
2014-1-6 wangluqing
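As a quick sanity check outside the cluster (not part of the original post), a plain-Java sketch that deduplicates the same two local files with a TreeSet reproduces this listing, because a TreeSet keeps exactly one copy of each line in sorted order, mirroring the shuffle-sort plus once-per-key reduce():

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.TreeSet;

public class LocalDedupCheck {
    public static void main(String[] args) throws IOException {
        // One sorted copy of each line, like shuffle-sort + per-key reduce.
        TreeSet<String> unique = new TreeSet<>();
        for (String file : new String[] {"log-file1.txt", "log-file2.txt"}) {
            unique.addAll(Files.readAllLines(Paths.get(file)));
        }
        unique.forEach(System.out::println);
    }
}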

Summary:

Data deduplication is useful for counting the number of distinct values in a large dataset, or for extracting the set of visiting locations from site log files; counting distinct records, for instance, reduces to counting the lines of the job's output, as the one-liner below shows.
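Assuming the output path from the run sketch above:

hadoop fs -cat dedup_out/part-r-00000 | wc -l    # prints 10 for the sample data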
