Notes on a Spark runtime problem
2016-05-24 00:34
Running a Spark 1.5 program on CDH 5.5.2, the application shut down right after startup and threw the following exception:
INFO YarnClientSchedulerBackend: SchedulerBackend is ready for scheduling beginning after waiting maxRegisteredResourcesWaitingTime: 30000(ms)
Exception in thread "main" java.lang.IllegalStateException: Cannot call methods on a stopped SparkContext
at org.apache.spark.SparkContext.org$apache$spark$SparkContext$$assertNotStopped(SparkContext.scala:104)
at org.apache.spark.SparkContext$$anonfun$newAPIHadoopRDD$1.apply(SparkContext.scala:1131)
at org.apache.spark.SparkContext$$anonfun$newAPIHadoopRDD$1.apply(SparkContext.scala:1130)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:147)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:108)
at org.apache.spark.SparkContext.withScope(SparkContext.scala:709)
at org.apache.spark.SparkContext.newAPIHadoopRDD(SparkContext.scala:1130)
at com.xxx.spark.etl$.parquetRun(AdEtl.scala:76)
at com.xxx.spark.etl$.main(AdEtl.scala:32)
at com.xxx.spark.etl.main(AdEtl.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:672)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:180)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:205)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:120)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
While this problem was occurring, other Spark programs ran without issue, which rules out a Spark compatibility problem. So what was causing it? The next step was to check the YARN logs: open http://cloudera_master:8088, which lists every application managed by YARN, locate the failed application, and view its run log:
16/05/23 15:48:53 ERROR ApplicationMaster: Uncaught exception: org.apache.hadoop.yarn.exceptions.InvalidResourceRequestException: Invalid resource request, requested virtual cores < 0, or requested virtual cores > max configured, requestedVirtualCores=6, maxVirtualCores=4
at org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerUtils.validateResourceRequest(SchedulerUtils.java:212)
at org.apache.hadoop.yarn.server.resourcemanager.RMServerUtils.validateResourceRequests(RMServerUtils.java:96)
This log makes the real cause clear: the job requested 6 virtual cores per container, but the cluster's maximum allocation is 4 vcores, so YARN rejected the resource request and the SparkContext was stopped. The exception Spark itself reported was not the root cause. The lesson: when a problem appears, don't panic; if one log doesn't reveal the cause, look in a few more places, and eventually you will find the one that does.
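The rejection logic behind that log can be sketched as follows. This is a minimal illustration of the bounds check YARN's SchedulerUtils.validateResourceRequest performs, not the actual Hadoop source; the function name and values here are illustrative.

```python
# Minimal sketch of YARN's vcore validation: a request is rejected when it
# asks for fewer than 0 or more than the configured maximum virtual cores.

def validate_vcores_request(requested_vcores: int, max_vcores: int) -> None:
    """Raise if the request falls outside [0, max_vcores]."""
    if requested_vcores < 0 or requested_vcores > max_vcores:
        raise ValueError(
            "Invalid resource request, requested virtual cores < 0, "
            "or requested virtual cores > max configured, "
            f"requestedVirtualCores={requested_vcores}, maxVirtualCores={max_vcores}"
        )

# The failing job's situation: 6 cores requested against a 4-core cap.
try:
    validate_vcores_request(requested_vcores=6, max_vcores=4)
except ValueError as e:
    print(e)

# A request within the cap passes silently.
validate_vcores_request(requested_vcores=4, max_vcores=4)
```

The fix is therefore on either side of this check: request no more cores than the cap allows (e.g. lower `--executor-cores` in spark-submit), or raise `yarn.scheduler.maximum-allocation-vcores` in yarn-site.xml if the nodes actually have the capacity.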