Ways to submit jobs with spark-submit
2016-12-08 10:47
The spark-submit command
# (Cluster deploy mode) Caps resources; if the cluster cannot satisfy the request, the job blocks waiting for allocation (--total-executor-cores caps total cores across all executors; --executor-cores caps cores per executor)
spark-submit --class test.Streamings --master spark://10.102.34.248:7077 --deploy-mode cluster --executor-memory 500M --total-executor-cores 5 sparkdemo-0.0.1-SNAPSHOT.jar
# (Client deploy mode, the default when --deploy-mode is omitted) The driver runs locally, so output appears on the console; no resource caps; useful for debugging
spark-submit --class test.Streamings --master spark://10.102.34.248:7077 sparkdemo-0.0.1-SNAPSHOT.jar
# (Cluster deploy mode) The driver runs on a worker, so nothing is printed to the submitting console; no resource caps, so the job takes all available resources
spark-submit --class test.Streamings --master spark://10.102.34.248:7077 --deploy-mode cluster sparkdemo-0.0.1-SNAPSHOT.jar
# (Local mode) No --master is given, so it defaults to local; output appears on the console
spark-submit --class test.Streamings sparkdemo-0.0.1-SNAPSHOT.jar
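
The class being submitted above, test.Streamings, is not shown in this post. For context, here is a minimal sketch of what such a Spark Streaming entry point could look like; the package and object names simply match the --class argument, and the socket source on localhost:9999 is purely an assumption:

package test

import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

// Hypothetical minimal job; the real test.Streamings is not shown in the post
object Streamings {
  def main(args: Array[String]): Unit = {
    // --master is supplied on the spark-submit command line, so it is not hard-coded here
    val conf = new SparkConf().setAppName("Streamings")
    val ssc = new StreamingContext(conf, Seconds(5))
    // Assumed source: a TCP socket (start one with `nc -lk 9999` for testing)
    val lines = ssc.socketTextStream("localhost", 9999)
    // Classic word count over each 5-second batch
    lines.flatMap(_.split(" ")).map((_, 1)).reduceByKey(_ + _).print()
    // print() output is only visible on the submitting console in client deploy mode
    ssc.start()
    ssc.awaitTermination()
  }
}

Packaged as sparkdemo-0.0.1-SNAPSHOT.jar (for example with Maven), this is the kind of application the commands above submit.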
For reference, the official help output:
spark-submit --help
Options:
--master MASTER_URL spark://host:port, mesos://host:port, yarn, or local.
--deploy-mode DEPLOY_MODE Whether to launch the driver program locally ("client") or
on one of the worker machines inside the cluster ("cluster")
(Default: client).
--class CLASS_NAME Your application's main class (for Java / Scala apps).
--name NAME A name of your application.
--jars JARS Comma-separated list of local jars to include on the driver
and executor classpaths.
--packages Comma-separated list of maven coordinates of jars to include
on the driver and executor classpaths. Will search the local
maven repo, then maven central and any additional remote
repositories given by --repositories. The format for the
coordinates should be groupId:artifactId:version.
--exclude-packages Comma-separated list of groupId:artifactId, to exclude while
resolving the dependencies provided in --packages to avoid
dependency conflicts.
--repositories Comma-separated list of additional remote repositories to
search for the maven coordinates given with --packages.
--py-files PY_FILES Comma-separated list of .zip, .egg, or .py files to place
on the PYTHONPATH for Python apps.
--files FILES Comma-separated list of files to be placed in the working
directory of each executor.
--conf PROP=VALUE Arbitrary Spark configuration property.
--properties-file FILE Path to a file from which to load extra properties. If not
specified, this will look for conf/spark-defaults.conf.
--driver-memory MEM Memory for driver (e.g. 1000M, 2G) (Default: 1024M).
--driver-java-options Extra Java options to pass to the driver.
--driver-library-path Extra library path entries to pass to the driver.
--driver-class-path Extra class path entries to pass to the driver. Note that
jars added with --jars are automatically included in the
classpath.
--executor-memory MEM Memory per executor (e.g. 1000M, 2G) (Default: 1G).
--proxy-user NAME User to impersonate when submitting the application.
This argument does not work with --principal / --keytab.
--help, -h Show this help message and exit.
--verbose, -v Print additional debug output.
--version, Print the version of current Spark.
Spark standalone with cluster deploy mode only:
--driver-cores NUM Cores for driver (Default: 1).
Spark standalone or Mesos with cluster deploy mode only:
--supervise If given, restarts the driver on failure.
--kill SUBMISSION_ID If given, kills the driver specified.
--status SUBMISSION_ID If given, requests the status of the driver specified.
Spark standalone and Mesos only:
--total-executor-cores NUM Total cores for all executors.
Spark standalone and YARN only:
--executor-cores NUM Number of cores per executor. (Default: 1 in YARN mode,
or all available cores on the worker in standalone mode)
YARN-only:
--driver-cores NUM Number of cores used by the driver, only in cluster mode
(Default: 1).
--queue QUEUE_NAME The YARN queue to submit to (Default: "default").
--num-executors NUM Number of executors to launch (Default: 2).
If dynamic allocation is enabled, the initial number of
executors will be at least NUM.
--archives ARCHIVES Comma separated list of archives to be extracted into the
working directory of each executor.
--principal PRINCIPAL Principal to be used to login to KDC, while running on
secure HDFS.
--keytab KEYTAB The full path to the file that contains the keytab for the
principal specified above. This keytab will be copied to
the node running the Application Master via the Secure
Distributed Cache, for renewing the login tickets and the
delegation tokens periodically.
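
The examples in this post all target a standalone master. As a hedged illustration of the YARN-only flags listed above (the executor counts, sizes, and queue name are placeholders, not from the original post):

# (YARN cluster mode) 4 executors with 2 cores and 2G each, submitted to the default queue
spark-submit --class test.Streamings --master yarn --deploy-mode cluster --num-executors 4 --executor-cores 2 --executor-memory 2G --queue default sparkdemo-0.0.1-SNAPSHOT.jar

For standalone cluster deploy mode, --status and --kill take the submission ID printed when the driver is launched. The ID below is a placeholder, and this assumes the master's REST submission server (default port 6066) is enabled:

# check on / stop a driver previously submitted with --deploy-mode cluster
spark-submit --master spark://10.102.34.248:6066 --status driver-20161208104700-0000
spark-submit --master spark://10.102.34.248:6066 --kill driver-20161208104700-0000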