Remotely Submitting Map/Reduce Jobs
2013-11-04 09:09
1. Package the developed MR code into a jar and add it to the distributed cache.
bin/hadoop fs -copyFromLocal /root/stat-analysis-mapred-1.0-SNAPSHOT.jar /user/root/lib
2. On the server side, create a user identical to the one on your client. Create the directory /tmp/hadoop-root/stagging/<username>.
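A minimal sketch of this server-side preparation, assuming a Linux server and that the staging directory lives on HDFS (the username `clientuser` below is hypothetical — it must match the client-side user exactly):

```shell
# Create a server-side user matching the client-side user (clientuser is a placeholder)
useradd clientuser
# Create the job staging directory for that user and hand over ownership
bin/hadoop fs -mkdir /tmp/hadoop-root/stagging/clientuser
bin/hadoop fs -chown clientuser /tmp/hadoop-root/stagging/clientuser
```

If the users do not match, the JobTracker typically rejects the submission with a permission error on the staging directory.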
3. Client-side code that submits the job:
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.filecache.DistributedCache;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.filter.FirstKeyOnlyFilter;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.mapreduce.Job;

Configuration conf = HBaseConfiguration.create();
// Point the client at the remote cluster: ZooKeeper quorum, NameNode, and JobTracker
conf.set("hbase.zookeeper.quorum", "node.tracker1");
conf.set("fs.default.name", "hdfs://node.tracker1:9000/hbase");
conf.set("mapred.job.tracker", "node.tracker1:9001");
Job job = new Job(conf, "Hbase_FreqCounter1");
job.setJarByClass(FreqCounter1.class);
Scan scan = new Scan();
String columns = "details"; // comma separated
scan.addFamily(Bytes.toBytes(columns));
// Only the first KeyValue of each row is needed for counting rows
scan.setFilter(new FirstKeyOnlyFilter());
TableMapReduceUtil.initTableMapperJob("access_logs", scan, Mapper1.class,
        ImmutableBytesWritable.class, IntWritable.class, job);
TableMapReduceUtil.initTableReducerJob("summary_user", Reducer1.class, job);
// TableMapReduceUtil.addDependencyJars(job);
// Ship the jar uploaded in step 1 to the task classpath via the distributed cache
DistributedCache.addFileToClassPath(
        new Path("hdfs://node.tracker1:9000/user/root/lib/stat-analysis-mapred-1.0-SNAPSHOT.jar"),
        job.getConfiguration());
job.submit();
4. Run the Java application, then log into the node's MR admin page, where the submitted job can be seen.
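Besides the web UI, the submission can be checked from the command line. A quick sketch, assuming the classic MR1 `hadoop job` command available on Hadoop versions of this era:

```shell
# List jobs known to the JobTracker; the job "Hbase_FreqCounter1" should appear
bin/hadoop job -list
# After it completes, inspect the output table in the HBase shell
echo "scan 'summary_user'" | bin/hbase shell
```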