Submitting a Spark job: "Requesting 1 new executor because tasks are backlogged (new desired total will be 1)"
2017-11-03 14:37
For the past two days, every job I submitted to the cluster failed to start. The driver kept printing "check your cluster UI to ensure that workers are registered and have sufficient resources" in a loop until I killed it manually. At first I assumed the cluster was short on resources, and none of the fixes I found online helped. Although the log kept repeating the resource warning (see the log excerpt below) and the Spark UI showed the application stuck in the WAITING state, the YARN web UI showed that YARN had never received the submitted application at all. That was the real problem.
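Besides the YARN web UI, the same check can be done from the command line with the standard Hadoop `yarn` CLI (assuming a normal Hadoop installation on the submitting host):

```shell
# List the applications the ResourceManager currently knows about.
# If the Spark job never appears here after spark-submit, the
# submission never reached YARN at all.
yarn application -list -appStates SUBMITTED,ACCEPTED,RUNNING
```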
Root cause: the job was being submitted the wrong way. My original submit command was:

./spark-submit --class Wordcount --master spark://10.0.10.29:7077 --num-executors 100 --executor-cores 2 --conf spark.default.parallelism=1000 --conf spark.storage.memoryFraction=0.5 /home/zkp/sparktest.jar sparktest/b.txt

This targets a standalone Spark master, not YARN, so YARN never saw the application. Fix: since the cluster is managed by YARN, change the submit command to:

./spark-submit --class Wordcount --master yarn-cluster --num-executors 100 --executor-cores 2 --conf spark.default.parallelism=1000 --conf spark.storage.memoryFraction=0.5 /home/zkp/sparktest.jar sparktest/b.txt

and the job runs.
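As a side note, on Spark 2.x and later the `yarn-cluster` master URL is deprecated in favor of `--master yarn` plus `--deploy-mode cluster`. A sketch of the equivalent modern invocation, reusing the class name, paths, and tuning values from the command above:

```shell
# Equivalent submission on Spark 2.x+ ("yarn-cluster" as a master URL
# is deprecated there; the cluster/client choice moves to --deploy-mode).
# spark.storage.memoryFraction only applies to the legacy (pre-1.6)
# memory manager; on the unified memory manager see spark.memory.fraction.
./spark-submit \
  --class Wordcount \
  --master yarn \
  --deploy-mode cluster \
  --num-executors 100 \
  --executor-cores 2 \
  --conf spark.default.parallelism=1000 \
  --conf spark.storage.memoryFraction=0.5 \
  /home/zkp/sparktest.jar sparktest/b.txt
```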
Requesting 1 new executor because tasks are backlogged (new desired total will be 1)
17/11/03 14:35:35 WARN scheduler.TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
17/11/03 14:35:50 WARN scheduler.TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
17/11/03 14:36:05 WARN scheduler.TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
Writing this down in the hope that it helps other beginners like me.