MapReduce Error: Type mismatch in key from map
2013-11-26 17:40
http://blog.csdn.net/doc_sgl/article/details/9413767
On the MapReduce error: Type mismatch in key from map: expected **, recieved org.apache.hadoop.io.LongWritable
For example:
13/07/22 02:53:32 INFO jvm.JvmMetrics: Initializing JVM Metrics with processName=JobTracker, sessionId=
13/07/22 02:53:32 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
13/07/22 02:53:32 WARN mapred.JobClient: No job jar file set. User classes may not be found. See JobConf(Class) or JobConf#setJar(String).
13/07/22 02:53:32 INFO input.FileInputFormat: Total input paths to process : 1
13/07/22 02:53:38 INFO mapred.JobClient: Running job: job_local_0001
13/07/22 02:53:38 INFO input.FileInputFormat: Total input paths to process : 1
13/07/22 02:53:38 INFO mapred.MapTask: io.sort.mb = 100
13/07/22 02:53:39 INFO mapred.JobClient: map 0% reduce 0%
13/07/22 02:53:39 INFO mapred.MapTask: data buffer = 79691776/99614720
13/07/22 02:53:39 INFO mapred.MapTask: record buffer = 262144/327680
13/07/22 02:53:39 WARN mapred.LocalJobRunner: job_local_0001
java.io.IOException: Type mismatch in key from map: expected CoOccurrence$TextPair, recieved org.apache.hadoop.io.LongWritable
at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.collect(MapTask.java:845)
at org.apache.hadoop.mapred.MapTask$NewOutputCollector.write(MapTask.java:541)
at org.apache.hadoop.mapreduce.TaskInputOutputContext.write(TaskInputOutputContext.java:80)
at org.apache.hadoop.mapreduce.Mapper.map(Mapper.java:124)
at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:144)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:621)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:305)
at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:177)
13/07/22 02:53:40 INFO mapred.JobClient: Job complete: job_local_0001
13/07/22 02:53:40 INFO mapred.JobClient: Counters: 0
Causes of this error:
1. The input/output key-value types declared for the job do not match what map and reduce actually emit.
2. Mixing the old and new APIs. If your map() method is not written against the new API signature, the compiler treats it as an overload rather than an override, so the framework never calls it; the default identity Mapper.map() runs instead and passes the LongWritable input key straight through, producing the mismatch above.
Solution: annotate map() and reduce() with @Override and write them against the new API (org.apache.hadoop.mapreduce) signatures.
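The failure mode in cause 2 is ordinary Java overload-vs-override behavior, and it can be demonstrated without Hadoop at all. The sketch below is a minimal stand-in (class names `Base`, `BrokenMapper`, `FixedMapper` are invented for illustration; `Base` plays the role of org.apache.hadoop.mapreduce.Mapper): a subclass method with the wrong parameter type is a new overload, so the framework-side call still hits the parent's identity method, while @Override turns the same mistake into a compile error.

```java
// Self-contained sketch of the overload-vs-override trap (no Hadoop needed;
// Base stands in for org.apache.hadoop.mapreduce.Mapper).
class Base<K> {
    // Default behavior: the framework's identity map(), which passes the
    // input key (a LongWritable in Hadoop) straight through.
    public String map(K key) { return "identity:" + key; }
}

class BrokenMapper extends Base<Long> {
    // Wrong parameter type: this declares a new OVERLOAD, not an override.
    // A caller holding a Base<Long> still invokes the inherited map(Long)
    // and gets the identity behavior -- this method is never called.
    public String map(String key) { return "custom:" + key; }
}

class FixedMapper extends Base<Long> {
    // @Override makes the compiler verify this signature matches Base.map;
    // a mismatch would be a compile error instead of a silent overload.
    @Override
    public String map(Long key) { return "custom:" + key; }
}

class OverloadDemo {
    public static void main(String[] args) {
        Base<Long> broken = new BrokenMapper();
        Base<Long> fixed = new FixedMapper();
        System.out.println(broken.map(42L)); // identity:42 -- the "override" never ran
        System.out.println(fixed.map(42L));  // custom:42
    }
}
```

In the Hadoop job itself the same fix means annotating your mapper's `protected void map(LongWritable key, Text value, Context context)` with @Override: if the parameter types drift from the Mapper superclass, the build fails immediately instead of the job emitting LongWritable keys at runtime.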