Spark fix for: check your cluster UI to ensure that workers are registered and have sufficient memory
2016-07-10 15:15
Error:
WARN scheduler.TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient memory
Cause: insufficient memory. The job requests more executor memory (1G) than any registered worker can offer, so the scheduler never grants it resources.
Fix: lower the requested executor memory. Change
spark-submit --master spark://eb174:7077 --name WordCountByscala --class com.hq.WordCount --executor-memory 1G --total-executor-cores 2 ~/test/WordCount.jar hdfs://eb170:8020/user/ebupt/text
to
spark-submit --master spark://eb174:7077 --name WordCountByscala --class com.hq.WordCount --executor-memory 512M --total-executor-cores 2 ~/test/WordCount.jar hdfs://eb170:8020/user/ebupt/text
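Lowering --executor-memory shrinks the request; the shortfall can also be fixed on the worker side. If each worker registers with too little memory, raising the standalone worker's allowance in conf/spark-env.sh on every worker node and restarting the cluster clears the warning as well. A sketch, with values that are assumptions to size for your own machines:

```shell
# conf/spark-env.sh on each worker node -- a sketch; the 2g/2 values
# below are assumptions, not taken from the cluster in this post.
# Total memory this worker may hand out to executors:
export SPARK_WORKER_MEMORY=2g
# Total cores this worker may hand out to executors:
export SPARK_WORKER_CORES=2
```

After editing, restart the workers (e.g. via sbin/stop-all.sh and sbin/start-all.sh on the master) so they re-register with the new capacity.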
Running on a Spark cluster
package com.hq

/**
 * User: hadoop
 * Date: 2014/10/10
 * Time: 18:59
 */
import org.apache.spark.SparkConf
import org.apache.spark.SparkContext
import org.apache.spark.SparkContext._

/**
 * Counts word occurrences.
 */
object WordCount {
  def main(args: Array[String]) {
    if (args.length < 1) {
      System.err.println("Usage: <file>")
      System.exit(1)
    }

    val conf = new SparkConf()
    val sc = new SparkContext(conf)
    val line = sc.textFile(args(0))

    line.flatMap(_.split(" ")).map((_, 1)).reduceByKey(_ + _).collect().foreach(println)

    sc.stop()
  }
}
Execute:
spark-submit --master spark://eb174:7077 --name WordCountByscala --class com.hq.WordCount --executor-memory 1G --total-executor-cores 2 ~/test/WordCount.jar hdfs://eb170:8020/user/ebupt/text
Error:
WARN scheduler.TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient memory
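The warning points at the cluster UI (http://eb174:8080 for the master above). The standalone master also serves the same status as JSON at /json, which allows a quick check from the shell. The sketch below runs the parsing step against an inlined sample response so it is self-contained; against a live cluster you would replace the sample with the output of `curl -s http://eb174:8080/json` (the exact field names depend on your Spark version, so treat the ones here as assumptions):

```shell
# Assumed, trimmed-down shape of the master's /json response;
# on a live cluster: curl -s http://eb174:8080/json
sample='{"workers":[{"host":"eb175","memory":512,"memoryused":0}]}'

# Pull out each worker's total and used memory (values in MB):
echo "$sample" | grep -oE '"memory(used)?":[0-9]+'
```

If every worker's total memory is below the --executor-memory you asked for, the scheduler can never place an executor, which is exactly the situation this warning describes.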