
[Apache Spark on YARN Resource Analysis]

2017-02-08 00:00
==========================================================>

AppMaster vs. Driver

client mode ==> the YARN container holds only the AppMaster; the driver stays in the client process, so spark.yarn.am.memory sizes the container

cluster mode ==> the driver runs inside the AppMaster container, which is sized by spark.driver.memory instead

spark.yarn.am.memory = 512m (default)

spark.yarn.am.memoryOverhead = 0.10 of spark.yarn.am.memory, with a 384 MB minimum (default)

[hadoop@hftest0001 spark-1.5.1-bin-hadoop2.6]$ spark-shell --master yarn

17/02/08 05:59:24 INFO Client: Will allocate AM container, with 896 MB memory including 384 MB overhead

[hadoop@hftest0001 spark-1.5.1-bin-hadoop2.6]$ spark-shell --master yarn --conf spark.yarn.am.memory=500m

17/02/08 05:59:24 INFO Client: Will allocate AM container, with 884 MB memory including 384 MB overhead

[hadoop@hftest0001 spark-1.5.1-bin-hadoop2.6]$ spark-shell --master yarn --conf spark.yarn.am.memory=4g

17/02/08 05:59:24 INFO Client: Will allocate AM container, with 4505 MB memory including 409 MB overhead

Summary: AM container memory = ${spark.yarn.am.memory} + max(${spark.yarn.am.memory} * 0.10, 384 MB), where 0.10 is the default spark.yarn.am.memoryOverhead factor and 384 MB is the minimum overhead.
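
As a quick sanity check against the three log lines above, here is a minimal Scala sketch of this formula (the 0.10 factor and the 384 MB floor are the Spark 1.5 defaults; AmMemoryCheck and amContainerMB are hypothetical names used only for illustration):

object AmMemoryCheck {
  // Spark 1.5 defaults: overhead = max(0.10 * amMemory, 384 MB)
  val OverheadFactor = 0.10
  val MinOverheadMB  = 384

  // Total container size requested from YARN: am.memory + overhead
  def amContainerMB(amMemoryMB: Int): Int =
    amMemoryMB + math.max((amMemoryMB * OverheadFactor).toInt, MinOverheadMB)

  def main(args: Array[String]): Unit = {
    println(amContainerMB(512))  // 896  -> matches "896 MB ... including 384 MB overhead"
    println(amContainerMB(500))  // 884  -> matches "884 MB ... including 384 MB overhead"
    println(amContainerMB(4096)) // 4505 -> 409 MB overhead, since 409 > 384
  }
}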

==========================================================>

The executor memory shown in the Spark UI and in the executor logs is not the executor's total memory; it is the maximum memory the MemoryStore can use to cache intermediate results. For example:

17/02/08 06:30:07 INFO MemoryStore: MemoryStore started with capacity 530.3 MB

1024 MB (default executor heap) * 0.9 (spark.storage.safetyFraction) * 0.6 (spark.storage.memoryFraction) ==> 552.96 MB

The logged capacity (530.3 MB) is slightly lower because Spark computes it from the JVM's Runtime.maxMemory, which is somewhat less than the configured -Xmx heap size.
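
A minimal sketch of that calculation, assuming Spark 1.5's StaticMemoryManager defaults (spark.storage.memoryFraction = 0.6, spark.storage.safetyFraction = 0.9); StorageMemoryCheck and maxStorageMB are hypothetical names:

object StorageMemoryCheck {
  val MemoryFraction = 0.6 // spark.storage.memoryFraction (Spark 1.5 default)
  val SafetyFraction = 0.9 // spark.storage.safetyFraction (Spark 1.5 default)

  // MemoryStore capacity derived from the usable heap. Spark feeds in
  // Runtime.getRuntime.maxMemory, which is below the configured -Xmx,
  // hence the logged 530.3 MB instead of the theoretical 552.96 MB.
  def maxStorageMB(usableHeapMB: Double): Double =
    usableHeapMB * MemoryFraction * SafetyFraction

  def main(args: Array[String]): Unit = {
    println(f"${maxStorageMB(1024.0)}%.2f MB") // 552.96 MB (theoretical, full 1024 MB heap)
    println(f"${maxStorageMB(982.0)}%.2f MB")  // ~530.28 MB, close to the logged 530.3 MB
  }
}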

==========================================================>

[hadoop@hftest0001 spark-1.5.1-bin-hadoop2.6]$ spark-shell --master yarn --executor-memory 2g
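
The same formula applies to executors: spark.yarn.executor.memoryOverhead also defaults to max(executorMemory * 0.10, 384 MB) in Spark 1.5, so --executor-memory 2g should request 2048 + max(204, 384) = 2432 MB per executor container. The amContainerMB sketch above gives the same number if you pass 2048.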

==========================================================>

[hadoop@hftest0001 spark-1.5.1-bin-hadoop2.6]$ spark-shell --master yarn --executor-memory 512m --num-executors 5 --executor-cores 2
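
With these flags the application should ask YARN for five executor containers of 512 + 384 = 896 MB each (4480 MB and 10 vcores in total for executors), on top of the AM container. Note that YARN normalizes each request up to a multiple of yarn.scheduler.minimum-allocation-mb, so the containers actually granted can be larger than requested.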