Spark job submission parameters (spark-submit)
2017-04-17 10:02
Running bin/spark-submit without arguments prints the full usage and option reference:

~/spark$ bin/spark-submit
Usage: spark-submit [options] <app jar | python file> [app arguments]
Usage: spark-submit --kill [submission ID] --master [spark://...]
Usage: spark-submit --status [submission ID] --master [spark://...]

Options:
--master MASTER_URL spark://host:port, mesos://host:port, yarn, or local.
--deploy-mode DEPLOY_MODE Whether to launch the driver program locally ("client") or on one of the worker machines inside the cluster ("cluster") (Default: client).
--class CLASS_NAME Your application's main class (for Java / Scala apps).
--name NAME A name of your application.
--jars JARS Comma-separated list of local jars to include on the driver and executor classpaths.
--packages Comma-separated list of maven coordinates of jars to include on the driver and executor classpaths. Will search the local maven repo, then maven central and any additional remote repositories given by --repositories. The format for the coordinates should be groupId:artifactId:version.
--repositories Comma-separated list of additional remote repositories to search for the maven coordinates given with --packages.
--py-files PY_FILES Comma-separated list of .zip, .egg, or .py files to place on the PYTHONPATH for Python apps.
--files FILES Comma-separated list of files to be placed in the working directory of each executor.
--conf PROP=VALUE Arbitrary Spark configuration property.
--properties-file FILE Path to a file from which to load extra properties. If not specified, this will look for conf/spark-defaults.conf.
--driver-memory MEM Memory for driver (e.g. 1M, 2G) (Default: 512M).
--driver-java-options Extra Java options to pass to the driver.
--driver-library-path Extra library path entries to pass to the driver.
--driver-class-path Extra class path entries to pass to the driver. Note that jars added with --jars are automatically included in the classpath.
--executor-memory MEM Memory per executor (e.g. 1M, 2G) (Default: 1G).
--proxy-user NAME User to impersonate when submitting the application.
--help, -h Show this help message and exit
--verbose, -v Print additional debug output
--version Print the version of current Spark

Spark standalone with cluster deploy mode only:
--driver-cores NUM Cores for driver (Default: 1).

Spark standalone or Mesos with cluster deploy mode only:
--supervise If given, restarts the driver on failure.
--kill SUBMISSION_ID If given, kills the driver specified.
--status SUBMISSION_ID If given, requests the status of the driver specified.

Spark standalone and Mesos only:
--total-executor-cores NUM Total cores for all executors.

Spark standalone and YARN only:
--executor-cores NUM Number of cores per executor. (Default: 1 in YARN mode, or all available cores on the worker in standalone mode)

YARN-only:
--driver-cores NUM Number of cores used by the driver, only in cluster mode (Default: 1).
--queue QUEUE_NAME The YARN queue to submit to (Default: "default").
--num-executors NUM Number of executors to launch (Default: 2).
--archives ARCHIVES Comma-separated list of archives to be extracted into the working directory of each executor.
--principal PRINCIPAL Principal to be used to login to KDC, while running on secure HDFS.
--keytab KEYTAB The full path to the file that contains the keytab for the principal specified above. This keytab will be copied to the node running the Application Master via the Secure Distributed Cache, for renewing the login tickets and the delegation tokens periodically.
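To make the reference concrete, here is a minimal example submission that exercises several of the options above. This sketch is not from the help output itself: the class name com.example.MyApp, the jar path, the application arguments, and the resource sizes are placeholder values to adapt to your own job and cluster.

~/spark$ bin/spark-submit \
  --master yarn \
  --deploy-mode cluster \
  --class com.example.MyApp \
  --name my-app \
  --driver-memory 1G \
  --executor-memory 2G \
  --executor-cores 2 \
  --num-executors 4 \
  --queue default \
  /path/to/my-app.jar arg1 arg2

Everything before the jar path is consumed by spark-submit itself; arg1 and arg2 are passed through unchanged to the application's main method.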
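The --kill and --status usage forms at the top manage drivers launched with cluster deploy mode against a standalone master. A hypothetical session, where the submission ID driver-20170417100200-0000 and the master address spark://host:7077 are illustrative placeholders:

~/spark$ bin/spark-submit --status driver-20170417100200-0000 --master spark://host:7077
~/spark$ bin/spark-submit --kill driver-20170417100200-0000 --master spark://host:7077

The submission ID to use here is printed by spark-submit when the driver is first submitted in cluster mode.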