Spark fails at runtime with java.lang.OutOfMemoryError: Java heap space
2017-01-06 15:14
The exact error is shown in the screenshot:
In short, it is a Java out-of-memory error.
I had tried many approaches before, such as setting spark-java-opts under /conf, and none of them solved the problem. The real cause is simply that the JVM does not get enough memory at runtime. Run the command:
./spark-submit --help
and you will see the default runtime memory value; on some machines it is 512M, on others 1024M.
******:bin duyang$ ./spark-submit --help
Usage: spark-submit [options] <app jar | python file> [app arguments]
Usage: spark-submit --kill [submission ID] --master [spark://...]
Usage: spark-submit --status [submission ID] --master [spark://...]
Usage: spark-submit run-example [options] example-class [example args]

Options:
  --master MASTER_URL         spark://host:port, mesos://host:port, yarn, or local.
  --deploy-mode DEPLOY_MODE   Whether to launch the driver program locally ("client") or
                              on one of the worker machines inside the cluster ("cluster")
                              (Default: client).
  --class CLASS_NAME          Your application's main class (for Java / Scala apps).
  --name NAME                 A name of your application.
  --jars JARS                 Comma-separated list of local jars to include on the driver
                              and executor classpaths.
  --packages                  Comma-separated list of maven coordinates of jars to include
                              on the driver and executor classpaths. Will search the local
                              maven repo, then maven central and any additional remote
                              repositories given by --repositories. The format for the
                              coordinates should be groupId:artifactId:version.
  --exclude-packages          Comma-separated list of groupId:artifactId, to exclude while
                              resolving the dependencies provided in --packages to avoid
                              dependency conflicts.
  --repositories              Comma-separated list of additional remote repositories to
                              search for the maven coordinates given with --packages.
  --py-files PY_FILES         Comma-separated list of .zip, .egg, or .py files to place
                              on the PYTHONPATH for Python apps.
  --files FILES               Comma-separated list of files to be placed in the working
                              directory of each executor.
  --conf PROP=VALUE           Arbitrary Spark configuration property.
  --properties-file FILE      Path to a file from which to load extra properties. If not
                              specified, this will look for conf/spark-defaults.conf.
  --driver-memory MEM         Memory for driver (e.g. 1000M, 2G) (Default: 1024M).
  --driver-java-options       Extra Java options to pass to the driver.
  --driver-library-path       Extra library path entries to pass to the driver.
  --driver-class-path         Extra class path entries to pass to the driver. Note that
                              jars added with --jars are automatically included in the
                              classpath.
  --executor-memory MEM       Memory per executor (e.g. 1000M, 2G) (Default: 1G).
  --proxy-user NAME           User to impersonate when submitting the application. This
                              argument does not work with --principal / --keytab.
  --help, -h                  Show this help message and exit.
  --verbose, -v               Print additional debug output.
  --version,                  Print the version of current Spark.

 Spark standalone with cluster deploy mode only:
  --driver-cores NUM          Cores for driver (Default: 1).

 Spark standalone or Mesos with cluster deploy mode only:
  --supervise                 If given, restarts the driver on failure.
  --kill SUBMISSION_ID        If given, kills the driver specified.
  --status SUBMISSION_ID      If given, requests the status of the driver specified.

 Spark standalone and Mesos only:
  --total-executor-cores NUM  Total cores for all executors.

 Spark standalone and YARN only:
  --executor-cores NUM        Number of cores per executor. (Default: 1 in YARN mode,
                              or all available cores on the worker in standalone mode)

 YARN-only:
  --driver-cores NUM          Number of cores used by the driver, only in cluster mode
                              (Default: 1).
  --queue QUEUE_NAME          The YARN queue to submit to (Default: "default").
  --num-executors NUM         Number of executors to launch (Default: 2). If dynamic
                              allocation is enabled, the initial number of executors
                              will be at least NUM.
  --archives ARCHIVES         Comma separated list of archives to be extracted into the
                              working directory of each executor.
  --principal PRINCIPAL       Principal to be used to login to KDC, while running on
                              secure HDFS.
  --keytab KEYTAB             The full path to the file that contains the keytab for the
                              principal specified above. This keytab will be copied to
                              the node running the Application Master via the Secure
                              Distributed Cache, for renewing the login tickets and the
                              delegation tokens periodically.
Those are all of spark-submit's command-line options; choose whichever you need. The problem above can be fixed by adding the option:
--driver-memory 2g
Choose the size according to your needs and your machine's capacity.
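For example, a full submit command might look like the minimal sketch below; the class name, master URL, and jar path are hypothetical placeholders, so substitute your own:

# A minimal sketch: com.example.MyApp, local[4], and /path/to/my-app.jar
# are placeholders, not from the original post.
./spark-submit \
  --class com.example.MyApp \
  --master local[4] \
  --driver-memory 2g \
  /path/to/my-app.jar

If the error is raised on an executor rather than in the driver, --executor-memory is the knob to raise instead. The same settings can also be made permanent in conf/spark-defaults.conf:

spark.driver.memory    2g
spark.executor.memory  2g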