spark-submit error: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient memory
2015-03-23 22:59
Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient memory
15/03/23 22:15:07 ERROR SparkDeploySchedulerBackend: Application has been killed. Reason: All masters are unresponsive! Giving up.
Problem description:
This is my launch command: ./spark-submit --master spark://10.4.21.220:7077 --class RemoteDebug --executor-memory 256m --driver-memory 500m /home/hadoop/spark/SparkRemoteDebug.jar
The screenshot below shows the state of my master (its web UI reports the master URL as spark://master:7077).
Here is the problem: Akka cannot resolve 10.4.21.220 back to the hostname master, and that is what triggers the errors above; the wrong address was being used. In fact, this turned out to be an Akka bug: it cannot resolve the domain name, so the IP address is never matched to the hostname the master registered with.
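A quick way to check which address the standalone master actually registered under is to read the spark:// URL shown at the top of its web UI (port 8080 by default). A minimal sketch, assuming the master host is reachable at 10.4.21.220 and the UI is on the default port:

# Assumption: the standalone Master web UI listens on the default port 8080.
# The page header contains a line like "URL: spark://<name>:7077";
# that exact <name> is what spark-submit must use.
curl -s http://10.4.21.220:8080 | grep -o 'spark://[^<" ]*'

Whatever this prints (here spark://master:7077) is the form of the master URL the driver should use when submitting.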
Solution:
The master address passed to spark-submit (i.e. --master spark://10.4.21.220:7077) must match the address shown on the master UI (see the screenshot above), so I changed the submit command to ./spark-submit --master spark://master:7077 --class RemoteDebug --executor-memory 256m --driver-memory 500m /home/hadoop/spark/SparkRemoteDebug.jar and the problem was solved. A concrete sketch of the client-side fix follows.
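Put together, the client-side fix looks roughly like the following. The /etc/hosts entry is my assumption and is only needed if the machine running spark-submit cannot already resolve the hostname master; the submit command itself is the corrected one from above.

# Assumption: the client cannot yet resolve "master"; map it to the master's IP.
echo "10.4.21.220 master" | sudo tee -a /etc/hosts

# Submit using exactly the URL the Master UI advertises, not the raw IP.
./spark-submit \
  --master spark://master:7077 \
  --class RemoteDebug \
  --executor-memory 256m \
  --driver-memory 500m \
  /home/hadoop/spark/SparkRemoteDebug.jar

Alternatively, if you prefer submitting by IP, you could start the master bound to the IP (for example via SPARK_MASTER_IP in conf/spark-env.sh on Spark 1.x) so that the advertised URL and the --master URL agree; that is an assumption about your setup rather than something verified here.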