
Container [pid=6263,containerID=container_1494900155967_0001_02_000001] is running beyond virtual memory limits

2017-05-16 10:18
Running a job in Spark client mode, spark-submit failed with the following error:

User:  hadoop
Name:  Spark Pi
Application Type:  SPARK
Application Tags:
YarnApplicationState:  FAILED
FinalStatus Reported by AM:  FAILED
Started:  16-May-2017 10:03:02
Elapsed:  14sec
Tracking URL:  History
Diagnostics:  Application application_1494900155967_0001 failed 2 times due to AM Container for appattempt_1494900155967_0001_000002 exited with exitCode: -103
For more detailed output, check application tracking page:http://master:8088/proxy/application_1494900155967_0001/Then, click on links to logs of each attempt.
Diagnostics: Container [pid=6263,containerID=container_1494900155967_0001_02_000001] is running beyond virtual memory limits. Current usage: 107.3 MB of 1 GB physical memory used; 2.2 GB of 2.1 GB virtual memory used. Killing container.
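
For context, the failing job is the bundled SparkPi example submitted in yarn-client mode with 1 GB executors. A command along the following lines reproduces the setup; the examples jar path and the trailing partitions argument are assumptions, not taken from the original run:

# Sketch only: the exact examples jar name depends on the Spark version and layout
spark-submit \
  --class org.apache.spark.examples.SparkPi \
  --master yarn \
  --deploy-mode client \
  --executor-memory 1G \
  $SPARK_HOME/examples/jars/spark-examples_*.jar 100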


This means the container tried to use 2.2 GB of memory while its virtual memory limit was only 2.1 GB, so YARN killed the container.

My spark.executor.memory (--executor-memory) was set to 1G, so the container gets 1 GB of physical memory. YARN's default virtual-to-physical memory ratio is 2.1, which puts the virtual memory limit at 2.1 GB, less than the 2.2 GB the container actually needed. The fix is to raise the virtual-to-physical memory ratio by adding the following setting to yarn-site.xml:

<property>
  <name>yarn.nodemanager.vmem-pmem-ratio</name>
  <value>2.5</value>
</property>
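
Note that yarn.nodemanager.vmem-pmem-ratio is read by the NodeManager, so the change needs to be present in yarn-site.xml on every NodeManager host, not just on the ResourceManager.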


Then restart YARN. With the ratio at 2.5, a 1 GB container is allowed 2.5 GB of virtual memory, and the job runs without hitting the limit.
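
A typical restart, assuming a standard Hadoop installation managed from the ResourceManager host with the stock sbin scripts:

# Restart YARN so the NodeManagers pick up the new vmem-pmem ratio
$HADOOP_HOME/sbin/stop-yarn.sh
$HADOOP_HOME/sbin/start-yarn.sh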