
spark-submit error: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient memory

2015-03-23 22:59

Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient memory

15/03/23 22:15:07 ERROR SparkDeploySchedulerBackend: Application has been killed. Reason: All masters are unresponsive! Giving up.

As it turned out, my problem was an Akka bug: it cannot resolve domain names, so the address you submit to must match the master's advertised address exactly.


Problem description:

This was my launch command: ./spark-submit --master spark://10.4.21.220:7077 --class RemoteDebug --executor-memory 256m --driver-memory 500m /home/hadoop/spark/SparkRemoteDebug.jar
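Before digging further, it helps to confirm which address the standalone master actually advertised, since that is the exact string spark-submit must use. A minimal check, assuming an install under /home/hadoop/spark; the log path below is a guess based on the usual standalone naming scheme, so adjust it to your layout:

# The standalone master logs the URL it binds to at startup (hypothetical log path):
grep "Starting Spark master at" /home/hadoop/spark/logs/spark-*-org.apache.spark.deploy.master.Master-*.out
# The matching line ends with the address to use, e.g.: Starting Spark master at spark://master:7077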

The figure below showed the state of my master:

[Figure: Spark master web UI; the master URL shown is spark://master:7077]
And here is the problem: Akka fails to resolve 10.4.21.220 to master, and that mismatch is what triggers the errors above; in effect, the driver is talking to the wrong address.
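A quick way to confirm this, and a prerequisite for the fix below, is to check whether the submitting machine can resolve the host name master at all. A minimal sketch, assuming the master's IP is 10.4.21.220 as above:

# Does the name the master advertises resolve on this machine?
getent hosts master
# If nothing is printed, add a static mapping (requires root):
echo "10.4.21.220   master" | sudo tee -a /etc/hosts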

Solution:

The master address passed to spark-submit (i.e. --master spark://10.4.21.220:7077) must exactly match the address shown on the master UI (see the figure above). I therefore changed the submit command to ./spark-submit --master spark://master:7077 --class RemoteDebug --executor-memory 256m --driver-memory 500m /home/hadoop/spark/SparkRemoteDebug.jar and that solved the problem!
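For completeness: an alternative is to make the master advertise the IP itself, so no name resolution is needed on either side. This is a sketch, assuming a Spark 1.x standalone cluster (in Spark 2.x the variable was renamed SPARK_MASTER_HOST):

# In conf/spark-env.sh on the master machine (Spark 1.x):
export SPARK_MASTER_IP=10.4.21.220   # master will advertise spark://10.4.21.220:7077
# Restart the standalone master so the new address takes effect:
sbin/stop-master.sh && sbin/start-master.sh

With this in place, the original command line (--master spark://10.4.21.220:7077) would match the advertised address as well.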