[Apache Spark On Yarn Resource Analysis]
2017-02-08 00:00
==========================================================>
AppMaster vs Driver
client mode ==> AppMaster only; the driver runs in the client process
cluster mode ==> Driver runs inside the AppMaster
spark.yarn.am.memory=512m (default)
spark.yarn.am.memoryOverhead defaults to max(0.10 * spark.yarn.am.memory, 384 MB)
[hadoop@hftest0001 spark-1.5.1-bin-hadoop2.6]$ spark-shell --master yarn
17/02/08 05:59:24 INFO Client: Will allocate AM container, with 896 MB memory including 384 MB overhead
[hadoop@hftest0001 spark-1.5.1-bin-hadoop2.6]$ spark-shell --master yarn --conf spark.yarn.am.memory=500m
17/02/08 05:59:24 INFO Client: Will allocate AM container, with 884 MB memory including 384 MB overhead
[hadoop@hftest0001 spark-1.5.1-bin-hadoop2.6]$ spark-shell --master yarn --conf spark.yarn.am.memory=4g
17/02/08 05:59:24 INFO Client: Will allocate AM container, with 4096 MB memory including 409 MB overhead
Summary: AM container memory = ${spark.yarn.am.memory} + max(${spark.yarn.am.memory} * 0.10, 384 MB minimum overhead)
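The formula above can be sketched as follows, a minimal reproduction assuming Spark 1.5's defaults (384 MB minimum overhead, 0.10 overhead factor). It matches the first two log lines; for the 4g case it yields a 4505 MB total with the 409 MB overhead the log reports.

```python
# Sketch of how Spark 1.5 sizes the YARN AM container (values match the logs above).
MIN_OVERHEAD_MB = 384      # minimum memory overhead enforced by Spark on YARN
OVERHEAD_FACTOR = 0.10     # default overhead fraction of spark.yarn.am.memory

def am_container_mb(am_memory_mb: int) -> int:
    """Total AM container request: heap plus the larger of 10% or 384 MB."""
    overhead = max(int(am_memory_mb * OVERHEAD_FACTOR), MIN_OVERHEAD_MB)
    return am_memory_mb + overhead

print(am_container_mb(512))   # 896  -> "896 MB memory including 384 MB overhead"
print(am_container_mb(500))   # 884  -> "884 MB memory including 384 MB overhead"
print(am_container_mb(4096))  # 4505 -> 409 MB overhead
```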
==========================================================>
The executor memory shown in the Spark UI and in the logs is not the executor's full memory, but the maximum memory available for caching intermediate results. For example:
17/02/08 06:30:07 INFO MemoryStore: MemoryStore started with capacity 530.3 MB
1024 MB * 0.9 (safetyFraction) * 0.6 (spark.storage.memoryFraction) ==> 552.96 MB; the logged 530.3 MB is smaller because the JVM's actual max heap comes in a bit below the requested 1024 MB
==========================================================>
[hadoop@hftest0001 spark-1.5.1-bin-hadoop2.6]$ spark-shell --master yarn --executor-memory 2g
==========================================================>
[hadoop@hftest0001 spark-1.5.1-bin-hadoop2.6]$ spark-shell --master yarn --executor-memory 512m --num-executors 5 --executor-cores 2
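For the executor request above, the per-container size can be sketched the same way, assuming spark.yarn.executor.memoryOverhead follows the same max(10% of heap, 384 MB) rule as the AM in Spark 1.5. With 512 MB executors, the overhead floor dominates, so each container asks for 896 MB and five executors ask for 4480 MB in total (plus the AM container).

```python
# Sketch of the per-executor YARN container request for:
#   --executor-memory 512m --num-executors 5 --executor-cores 2
MIN_OVERHEAD_MB = 384
OVERHEAD_FACTOR = 0.10

def executor_container_mb(executor_memory_mb: int) -> int:
    """Executor heap plus the larger of 10% or 384 MB overhead."""
    overhead = max(int(executor_memory_mb * OVERHEAD_FACTOR), MIN_OVERHEAD_MB)
    return executor_memory_mb + overhead

per_executor = executor_container_mb(512)
print(per_executor, per_executor * 5)  # 896 4480
```

Note that YARN may round each request up to yarn.scheduler.minimum-allocation-mb, so the granted containers can be larger than this estimate.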