Spark SQL Parameter Configuration
2017-06-26 14:59
Reposted from: http://www.cnblogs.com/wwxbi/p/6114410.html

To view the SQL parameter configuration of the current environment, run:
spark.sql("SET -v")
| key | value |
| --- | --- |
| spark.sql.hive.version | 1.2.1 |
| spark.sql.sources.parallelPartitionDiscovery.threshold | 32 |
| spark.sql.hive.metastore.barrierPrefixes | |
| spark.sql.shuffle.partitions | 200 |
| spark.sql.hive.metastorePartitionPruning | false |
| spark.sql.broadcastTimeout | 300 |
| spark.sql.sources.bucketing.enabled | true |
| spark.sql.parquet.filterPushdown | true |
| spark.sql.statistics.fallBackToHdfs | false |
| spark.sql.adaptive.enabled | false |
| spark.sql.parquet.cacheMetadata | true |
| spark.sql.hive.metastore.sharedPrefixes | com.mysql.jdbc |
| spark.sql.parquet.respectSummaryFiles | false |
| spark.sql.warehouse.dir | hdfs:///user/spark/warehouse |
| spark.sql.orderByOrdinal | true |
| spark.sql.hive.convertMetastoreParquet | true |
| spark.sql.groupByOrdinal | true |
| spark.sql.hive.thriftServer.async | true |
| spark.sql.thriftserver.scheduler.pool | <undefined> |
| spark.sql.orc.filterPushdown | false |
| spark.sql.adaptive.shuffle.targetPostShuffleInputSize | 67108864b |
| spark.sql.sources.default | parquet |
| spark.sql.parquet.compression.codec | snappy |
| spark.sql.hive.metastore.version | 1.2.1 |
| spark.sql.sources.partitionDiscovery.enabled | true |
| spark.sql.crossJoin.enabled | false |
| spark.sql.parquet.writeLegacyFormat | false |
| spark.sql.hive.verifyPartitionPath | false |
| spark.sql.variable.substitute | true |
| spark.sql.thriftserver.ui.retainedStatements | 200 |
| spark.sql.hive.convertMetastoreParquet.mergeSchema | false |
| spark.sql.parquet.enableVectorizedReader | true |
| spark.sql.parquet.mergeSchema | false |
| spark.sql.parquet.binaryAsString | false |
| spark.sql.columnNameOfCorruptRecord | _corrupt_record |
| spark.sql.files.maxPartitionBytes | 134217728 |
| spark.sql.streaming.checkpointLocation | <undefined> |
| spark.sql.variable.substitute.depth | 40 |
| spark.sql.parquet.int96AsTimestamp | true |
| spark.sql.autoBroadcastJoinThreshold | 10485760 |
| spark.sql.pivotMaxValues | 10000 |
| spark.sql.sources.partitionColumnTypeInference.enabled | true |
| spark.sql.hive.metastore.jars | builtin |
| spark.sql.thriftserver.ui.retainedSessions | 200 |
| spark.sql.sources.maxConcurrentWrites | 1 |
| spark.sql.parquet.output.committer.class | org.apache.parquet.hadoop.ParquetOutputCommitter |
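Several of the values above are raw byte counts: spark.sql.files.maxPartitionBytes (134217728), spark.sql.autoBroadcastJoinThreshold (10485760), and the adaptive target post-shuffle input size (67108864b). As a quick aid for reading them, here is a minimal stand-alone sketch (plain Python, no Spark needed; the helper name `bytes_to_mib` is my own, not a Spark API):

```python
# Stand-alone helper to interpret the byte-valued parameters
# listed in the SET -v output above.
def bytes_to_mib(n: int) -> float:
    """Convert a raw byte count to mebibytes (MiB)."""
    return n / (1024 * 1024)

print(bytes_to_mib(134217728))  # spark.sql.files.maxPartitionBytes    -> 128.0
print(bytes_to_mib(10485760))   # spark.sql.autoBroadcastJoinThreshold -> 10.0
print(bytes_to_mib(67108864))   # ...targetPostShuffleInputSize        -> 64.0
```

So the defaults shown correspond to 128 MB max per file partition, a 10 MB broadcast-join threshold, and a 64 MB adaptive shuffle target. Any of these parameters can be overridden per session, e.g. with `spark.sql("SET spark.sql.shuffle.partitions=400")` or `spark.conf.set("spark.sql.shuffle.partitions", "400")`.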