
Apache Kylin Installation

2016-08-24
Abstract: Installing Apache Kylin

Upload the installation package

Use an FTP tool to upload the installation package to the /usr/local directory on the server.
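
If no FTP tool is at hand, scp achieves the same thing; a minimal sketch, assuming the server is reachable as kylinserver and the login user can write to /usr/local (both names are assumptions, not from the original text):

# copy the package from the local machine to the server (hostname and user are placeholders)
$scp apache-kylin-1.3-HBase-1.1-SNAPSHOT-bin.tar root@kylinserver:/usr/local/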

Extract the package

$tar -xvf apache-kylin-1.3-HBase-1.1-SNAPSHOT-bin.tar

Set environment variables

Add the following lines to /etc/profile:

export KYLIN_HOME=/usr/local/apache-kylin-1.3-HBase-1.1-SNAPSHOT-bin

export HCAT_HOME=/usr/hdp/2.3.2.0-2950/hive-hcatalog

Then apply the changes:

$source /etc/profile
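
To confirm the variables were picked up, a quick sanity check (not part of the original steps):

# both should print the paths configured above
$echo $KYLIN_HOME
$echo $HCAT_HOME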

Configuration

Go to the /usr/local/apache-kylin-1.3-HBase-1.1-SNAPSHOT-bin/conf directory and edit kylin.properties.

The parameters that need to be modified or added are explained below, listed as parameter (value format): meaning.

kylin.rest.servers (Hostname:7070): Hostname is the host or IP of the Kylin server; 7070 is the Kylin HTTP port.

kylin.metadata.url (kylin_metadata@hbase): kylin_metadata is the name of the HBase table Kylin creates in HBase for its metadata.

kylin.storage.url (hbase): keep the default.

kylin.hdfs.working.dir (/kylin): Kylin's working directory on HDFS; during installation, create this directory on HDFS and give the user that runs Kylin read and write access to it (see the sketch after this list).

kylin.hbase.cluster.fs (hdfs://mycluster/apps/hbase/data_new): the HBase data directory on HDFS; use the same value as the hbase.rootdir parameter in the HBase configuration.

kylin.route.hive.enabled (true): keep the default.

kylin.route.hive.url (jdbc:hive2://HiveServer2ip:10000): HiveServer2ip is the host where the Hive server component (HiveServer2) is installed; 10000 is the default JDBC port.

The remaining parameters do not need to be changed; keep their defaults.
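
As noted for kylin.hdfs.working.dir, the /kylin directory must exist on HDFS before Kylin starts. A minimal sketch of creating it and handing it over to the Kylin user; the user and group names are assumptions, adjust them to your cluster:

# create the working directory and grant it to the account that runs Kylin
$hdfs dfs -mkdir -p /kylin
$hdfs dfs -chown kylin:hadoop /kylin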

Start Kylin

$cd /usr/local/apache-kylin-1.3-HBase-1.1-SNAPSHOT-bin/bin

$./kylin.sh start
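
Before checking the port, it can help to watch the startup log for errors; depending on the build it is written to $KYLIN_HOME/logs/kylin.log or $KYLIN_HOME/tomcat/logs/kylin.log (the exact path is an assumption for this version):

# follow the Kylin startup log and watch for exceptions
$tail -f $KYLIN_HOME/logs/kylin.log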

Check that the startup succeeded

Check whether port 7070 is listening:

$netstat -an | grep 7070

Then open http://kylinserverip:7070/kylin in a browser and log in.
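
If no browser is available on the machine, the same check can be done from the shell; kylinserverip is the placeholder used above, and an HTTP 200 (or a redirect to the login page) means the web app is serving:

# fetch only the response headers from the Kylin web UI
$curl -I http://kylinserverip:7070/kylin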

===============================================================

For reference, here is the kylin.properties file from a running system:

#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

## Config for Kylin Engine ##

# List of web servers in use, this enables one web server instance to sync up with other servers.
kylin.rest.servers=hadoop00:7070,hadoop01:7070,hadoop02:7070,hadoop04:7070,hadoop05:7070

#set display timezone on UI,format like[GMT+N or GMT-N]
kylin.rest.timezone=GMT+8
kylin.query.cache.enabled=true
# The metadata store in hbase
kylin.metadata.url=kylin_metadata@hbase

# The storage for final cube file in hbase
kylin.storage.url=hbase
kylin.job.yarn.app.rest.check.status.url=http://hadoop02:8088/ws/v1/cluster/apps/${job_id}?
kylin.job.yarn.app.rest.check.interval.seconds=20
kylin.query.security.enabled=false
# Temp folder in hdfs, make sure user has the right access to the hdfs directory
kylin.hdfs.working.dir=/kylin

# HBase Cluster FileSystem, which serving hbase, format as hdfs://hbase-cluster:8020
# leave empty if hbase running on same cluster with hive and mapreduce
kylin.hbase.cluster.fs=hdfs://mycluster/apps/hbase/data
kylin.route.hive.enabled=true
kylin.route.hive.url=jdbc:hive2://hadoop00:10000

kylin.job.mapreduce.default.reduce.input.mb=500

kylin.server.mode=all

# If true, job engine will not assume that hadoop CLI reside on the same server as it self
# you will have to specify kylin.job.remote.cli.hostname, kylin.job.remote.cli.username and kylin.job.remote.cli.password
# It should not be set to "true" unless you're NOT running Kylin.sh on a hadoop client machine
# (Thus kylin instance has to ssh to another real hadoop client machine to execute hbase,hive,hadoop commands)
kylin.job.run.as.remote.cmd=false

# Only necessary when kylin.job.run.as.remote.cmd=true
kylin.job.remote.cli.hostname=

# Only necessary when kylin.job.run.as.remote.cmd=true
kylin.job.remote.cli.username=

# Only necessary when kylin.job.run.as.remote.cmd=true
kylin.job.remote.cli.password=

# Used by test cases to prepare synthetic data for sample cube
kylin.job.remote.cli.working.dir=/tmp/kylin

# Max count of concurrent jobs running
kylin.job.concurrent.max.limit=10

# Time interval to check hadoop job status
kylin.job.yarn.app.rest.check.interval.seconds=10

# Hive database name for putting the intermediate flat tables
kylin.job.hive.database.for.intermediatetable=kylin

#default compression codec for htable,snappy,lzo,gzip,lz4
#kylin.hbase.default.compression.codec=lzo

# The cut size for hbase region, in GB.
# E.g, for cube whose capacity be marked as "SMALL", split region per 10GB by default
kylin.hbase.region.cut.small=10
kylin.hbase.region.cut.medium=20
kylin.hbase.region.cut.large=100

# HBase min and max region count
kylin.hbase.region.count.min=1
kylin.hbase.region.count.max=500

## Config for Restful APP ##
# database connection settings:
ldap.server=
ldap.username=
ldap.password=
ldap.user.searchBase=
ldap.user.searchPattern=
ldap.user.groupSearchBase=
ldap.service.searchBase=OU=
ldap.service.searchPattern=
ldap.service.groupSearchBase=
acl.adminRole=
acl.defaultRole=
ganglia.group=
ganglia.port=8664

## Config for mail service

# If true, will send email notification;
mail.enabled=false
mail.host=
mail.username=
mail.password=
mail.sender=

###########################config info for web#######################

#help info ,format{name|displayName|link} ,optional
kylin.web.help.length=4
kylin.web.help.0=start|Getting Started|
kylin.web.help.1=odbc|ODBC Driver|
kylin.web.help.2=tableau|Tableau Guide|
kylin.web.help.3=onboard|Cube Design Tutorial|
#hadoop url link ,optional
kylin.web.hadoop=
#job diagnostic url link ,optional
kylin.web.diagnostic=
#contact mail on web page ,optional
kylin.web.contact_mail=

###########################config info for front#######################

#env DEV|QA|PROD
deploy.env=PROD

###########################config info for sandbox#######################
kylin.sandbox=true

###########################config info for kylin monitor#######################
# hive jdbc url
kylin.monitor.hive.jdbc.connection.url=jdbc:hive2://hadoop00:10000

#config where to parse query log,split with comma ,will also read $KYLIN_HOME/tomcat/logs/ by default
kylin.monitor.ext.log.base.dir = /tmp/kylin_log1,/tmp/kylin_log2

#will create external hive table to query result csv file
#will set to kylin_query_log by default if not config here
kylin.monitor.query.log.parse.result.table = kylin_query_log
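
One detail worth calling out in the file above: kylin.job.hive.database.for.intermediatetable=kylin points at a Hive database named kylin, which generally has to exist before the first cube build. A minimal sketch of creating it from the shell; the JDBC URL reuses the HiveServer2 address from the config, and using beeline here is an assumption about your Hive setup:

# create the database Kylin will use for intermediate flat tables
$beeline -u jdbc:hive2://hadoop00:10000 -e "CREATE DATABASE IF NOT EXISTS kylin;"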