MapReduce: cause of the "LongWritable cannot be cast to org.apache.hadoop.io.Text" error
2017-06-03 20:34
Environment: virtual machines running Ubuntu 16, with Ubuntu Server nodes forming the Hadoop cluster (one master, two slaves); code written in Eclipse.
The following error appears at runtime:
java.lang.Exception: java.lang.ClassCastException: org.apache.hadoop.io.LongWritable cannot be cast to org.apache.hadoop.io.Text
(Hadoop's input LongWritable cannot be converted to Text.)
When you read a file with a M/R program, the input key of your mapper should be the index of the line in the file, while the input value will be the full line. So what's happening here is that you're trying to have the line index as a Text object, which is wrong; you need a LongWritable instead so that Hadoop doesn't complain about the type.
The answer came from Stack Overflow: when a MapReduce program reads a file, the mapper's input key is the position of the current line in the file, and the full content of that line is passed as the mapper's input value. So the first type parameter of the Mapper class (the input key type) must not be declared as Text; declare it as LongWritable (or Object) instead.
Reference (Stack Overflow):
https://stackoverflow.com/questions/11784729/hadoop-java-lang-classcastexception-org-apache-hadoop-io-longwritable-cannot
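Below is a minimal sketch of a corrected mapper (a hypothetical WordCount-style TokenizerMapper, assuming the default TextInputFormat): the first type parameter is LongWritable, matching the key the framework actually passes in.

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

// Input key is LongWritable (the byte offset of the current line), not Text.
public class TokenizerMapper extends Mapper<LongWritable, Text, Text, IntWritable> {

    private static final IntWritable one = new IntWritable(1);
    private final Text word = new Text();

    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        // value holds the full text of the current line
        StringTokenizer itr = new StringTokenizer(value.toString());
        while (itr.hasMoreTokens()) {
            word.set(itr.nextToken());
            context.write(word, one);
        }
    }
}

Declaring the first parameter as Object also works, since a LongWritable can be passed where Object is expected; declaring it as Text is what triggers the ClassCastException shown above.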