
Building Cubes with Spark in Kylin 2.0

2017-09-07 11:29
Kylin 2.0 introduced a Spark engine for building cubes, so Spark can be used in place of MapReduce during cube builds.
Version pairings:
Kylin 2.0 + Spark 1.6
Kylin 2.1 + Spark 2.1.1
Kylin 2.0.0 + HBase 1.x
Underlying Hadoop stack: HDP 2.4, with Hive, HBase and YARN

1. Modify the Hadoop configuration

In kylin.properties, set the path to the Hadoop configuration directory (note: create a new directory and symlink or copy the configuration files of the underlying Hadoop, Hive, HBase, etc. into it):
kylin.env.hadoop-conf-dir=/usr/local/apache-kylin-2.0.0-bin/hadoop-conf

The directory should contain core-site.xml, hdfs-site.xml, yarn-site.xml, hive-site.xml and hbase-site.xml:

mkdir $KYLIN_HOME/hadoop-conf
ln -s /etc/hadoop/conf/core-site.xml $KYLIN_HOME/hadoop-conf/core-site.xml
ln -s /etc/hadoop/conf/hdfs-site.xml $KYLIN_HOME/hadoop-conf/hdfs-site.xml
ln -s /etc/hadoop/conf/yarn-site.xml $KYLIN_HOME/hadoop-conf/yarn-site.xml
ln -s /etc/hbase/2.4.0.0-169/0/hbase-site.xml $KYLIN_HOME/hadoop-conf/hbase-site.xml
cp /etc/hive/2.4.0.0-169/0/hive-site.xml $KYLIN_HOME/hadoop-conf/hive-site.xml
vi $KYLIN_HOME/hadoop-conf/hive-site.xml   # change "hive.execution.engine" from "tez" to "mr"
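
For reference, the edit to hive-site.xml amounts to changing this one property (a minimal excerpt; the rest of the file stays exactly as copied from the cluster):

<property>
  <name>hive.execution.engine</name>
  <value>mr</value> <!-- was "tez" -->
</property>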

2. Check the Spark configuration

At runtime, Kylin loads its Spark settings from the kylin.engine.spark-conf.* properties in $KYLIN_HOME/conf/kylin.properties, including:
kylin.engine.spark-conf.spark.master=yarn
kylin.engine.spark-conf.spark.submit.deployMode=cluster
kylin.engine.spark-conf.spark.yarn.queue=default
kylin.engine.spark-conf.spark.executor.memory=1G
kylin.engine.spark-conf.spark.executor.cores=2
kylin.engine.spark-conf.spark.executor.instances=1
kylin.engine.spark-conf.spark.eventLog.enabled=true
kylin.engine.spark-conf.spark.eventLog.dir=hdfs\:///kylin/spark-history
kylin.engine.spark-conf.spark.history.fs.logDirectory=hdfs\:///kylin/spark-history
#kylin.engine.spark-conf.spark.yarn.jar=hdfs://namenode:8020/kylin/spark/spark-assembly-1.6.3-hadoop2.6.0.jar
#kylin.engine.spark-conf.spark.io.compression.codec=org.apache.spark.io.SnappyCompressionCodec

## uncomment for HDP
#kylin.engine.spark-conf.spark.driver.extraJavaOptions=-Dhdp.version=current
#kylin.engine.spark-conf.spark.yarn.am.extraJavaOptions=-Dhdp.version=current
#kylin.engine.spark-conf.spark.executor.extraJavaOptions=-Dhdp.version=current
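
For orientation: Kylin strips the kylin.engine.spark-conf. prefix and hands the remaining key/value pairs to Spark, so the settings above correspond roughly to a spark-submit invocation like the following (an illustrative sketch only; Kylin assembles the real command internally):

spark-submit \
  --master yarn \
  --deploy-mode cluster \
  --queue default \
  --conf spark.executor.memory=1G \
  --conf spark.executor.cores=2 \
  --conf spark.executor.instances=1 \
  --conf spark.eventLog.enabled=true \
  --conf spark.eventLog.dir=hdfs:///kylin/spark-history
# Kylin appends its own cubing job class, jar and arguments to this command.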

Configure the Spark assembly jar by uploading it to HDFS:
hadoop fs -mkdir -p /kylin/spark/
hadoop fs -put $KYLIN_HOME/spark/lib/spark-assembly-1.6.3-hadoop2.6.0.jar /kylin/spark/
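
Optionally, verify the upload before referencing it from kylin.properties:

hadoop fs -ls /kylin/spark/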
With the jar in place, enable the Spark engine options that were commented out above:
kylin.engine.spark-conf.spark.yarn.jar=hdfs://sandbox.hortonworks.com:8020/kylin/spark/spark-assembly-1.6.3-hadoop2.6.0.jar
kylin.engine.spark-conf.spark.driver.extraJavaOptions=-Dhdp.version=current
kylin.engine.spark-conf.spark.yarn.am.extraJavaOptions=-Dhdp.version=current
kylin.engine.spark-conf.spark.executor.extraJavaOptions=-Dhdp.version=current
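
Before the first build, it may be worth sanity-checking the Spark-on-YARN setup with the bundled SparkPi example (a sketch assuming the embedded Spark 1.6.3 shipped in the Kylin 2.0 binary package; the examples jar name may differ in other Spark builds):

export HADOOP_CONF_DIR=$KYLIN_HOME/hadoop-conf
$KYLIN_HOME/spark/bin/spark-submit \
  --class org.apache.spark.examples.SparkPi \
  --master yarn \
  --deploy-mode cluster \
  $KYLIN_HOME/spark/lib/spark-examples-1.6.3-hadoop2.6.0.jar \
  10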
When creating a cube, you can now select Spark as the Cube Engine. That completes the configuration; cube builds will run on the Spark engine. To troubleshoot problems, check $KYLIN_HOME/logs/kylin.log.

Note on choosing an engine: use MapReduce when the cube contains more than 12 dimensions, or has measures such as count distinct or Top N. Use Spark when the cube model is relatively simple, all measures are just SUM/MIN/MAX/COUNT, and the source data is of moderate size.