
[Repost] Building a log collection platform with flume + kafka + zookeeper

Source: https://my.oschina.net/jastme/blog/600573

First, a note on the goal: this is purely about collecting nginx logs and logs from various applications.

nginx logs

(reserved placeholder)

I will not go over what flume and kafka are for here; look them up yourself if needed.

I. Environment

AWS Red Hat Enterprise Linux Server release 7.1 (Maipo)

II. Required packages

apache-flume-1.6.0-bin.tar.gz

kafka_2.10-0.8.1.1.tgz

jdk-7u67-linux-x64.tar.gz

KafkaOffsetMonitor-assembly-0.2.0.jar

kafka-manager-1.2.3.zip

zookeeper-3.4.7.tar.gz

III. Setup

First, take a look at the hosts file configuration:

192.168.1.10 zoo1 zoo2 zoo3 kafka_1 kafka_2 kafka_3


ls /opt/tools/
apache-tomcat-7.0.65  flume  jdk1.7.0_67  kafka  nginx  redis-3.0.5  zookeeper

1. Install zookeeper

The zookeeper configuration is fairly simple.

Deploy three zookeeper instances.

Example configuration file:

ls
zoo1  zoo2  zoo3 zkui

The last one, zkui, is a web management UI for zookeeper.

ls master/conf/
configuration.xsl  log4j.properties   zoo.cfg            zoo_sample.cfg
[root@ip-172-31-9-125 zookeeper]# cat master/conf/zoo.cfg
# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
dataDir=/opt/tools/zookeeper/zoo1/data
dataLogDir=/opt/tools/zookeeper/zoo1/logs
# the port at which the clients will connect
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance #
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1
server.0=zoo1:8880:7770
server.1=zoo2:8881:7771
server.2=zoo3:8882:7772

Start the three zk instances separately; the details are not covered here.
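For reference, a rough sketch of bringing the three instances up (this assumes each of zoo1/zoo2/zoo3 under /opt/tools/zookeeper is a full zookeeper copy with its own conf/zoo.cfg, and that zoo2/zoo3 differ from the config above only in dataDir/dataLogDir and clientPort, 2182 and 2183, matching the zookeeper.connect string used by kafka below):

# each instance needs a myid file whose value matches its server.N line in zoo.cfg
echo 0 > /opt/tools/zookeeper/zoo1/data/myid
echo 1 > /opt/tools/zookeeper/zoo2/data/myid
echo 2 > /opt/tools/zookeeper/zoo3/data/myid

# start all three, then check that one reports "Mode: leader" and the others "Mode: follower"
for i in zoo1 zoo2 zoo3; do /opt/tools/zookeeper/$i/bin/zkServer.sh start; done
for i in zoo1 zoo2 zoo3; do /opt/tools/zookeeper/$i/bin/zkServer.sh status; done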

2. Install kafka

ls
kafka_1  kafka_2  kafka_3  kafka-manager-1.2.3  kafkaOffsetMonitor  kfkstart.sh

cat kfkstart.sh
#!/bin/bash
nohup /opt/tools/kafka/kafka_1/bin/kafka-server-start.sh /opt/tools/kafka/kafka_1/config/server.properties &
nohup /opt/tools/kafka/kafka_2/bin/kafka-server-start.sh /opt/tools/kafka/kafka_2/config/server.properties &
nohup /opt/tools/kafka/kafka_3/bin/kafka-server-start.sh /opt/tools/kafka/kafka_3/config/server.properties &
nohup /opt/tools/kafka/kafka-manager-1.2.3/bin/kafka-manager -Dkafka-manager.zkhosts="zoo1:2181,zoo2:2182,zoo3:2183" &

cat /opt/tools/kafka/kafka_1/config/server.properties

############################# Server Basics #############################

# The id of the broker. This must be set to a unique integer for each broker
# (important: kafka_1, kafka_2 and kafka_3 each need their own id)
broker.id=0

############################# Socket Server Settings #############################

# The port the socket server listens on (9092 for kafka_1; the other two brokers use 9093 and 9094)
port=9092

# Hostname the broker will bind to. If not set, the server will bind to all interfaces
# (note: kafka_1 is one of the host aliases configured in /etc/hosts earlier)
host.name=kafka_1

# Hostname the broker will advertise to producers and consumers. If not set, it uses the
# value for "host.name" if configured.  Otherwise, it will use the value returned from
# java.net.InetAddress.getCanonicalHostName().
#advertised.host.name=<hostname routable by clients>

# The port to publish to ZooKeeper for clients to use. If this is not set,
# it will publish the same port that the broker binds to.
#advertised.port=<port accessible by clients>

# The number of threads handling network requests
num.network.threads=2

# The number of threads doing disk I/O
num.io.threads=8

# The send buffer (SO_SNDBUF) used by the socket server
socket.send.buffer.bytes=102400

# The receive buffer (SO_RCVBUF) used by the socket server
socket.receive.buffer.bytes=102400

# The maximum size of a request that the socket server will accept (protection against OOM)
socket.request.max.bytes=104857600

############################# Log Basics #############################

# A comma separated list of directories under which to store log files
log.dirs=/opt/tools/kafka/kafka_1/logs

# The default number of log partitions per topic. More partitions allow greater
# parallelism for consumption, but this will also result in more files across
# the brokers.
num.partitions=2

############################# Log Flush Policy #############################

# The number of messages to accept before forcing a flush of data to disk
#log.flush.interval.messages=10000

# The maximum amount of time a message can sit in a log before we force a flush
#log.flush.interval.ms=1000

############################# Log Retention Policy #############################

# The minimum age of a log file to be eligible for deletion
log.retention.hours=168

# A size-based retention policy for logs. Segments are pruned from the log as long as the remaining
# segments don't drop below log.retention.bytes.
#log.retention.bytes=1073741824

# The maximum size of a log segment file. When this size is reached a new log segment will be created.
log.segment.bytes=1073741824

# The interval at which log segments are checked to see if they can be deleted according
# to the retention policies
log.retention.check.interval.ms=300000

# By default the log cleaner is disabled and the log retention policy will default to just delete segments after their retention expires.
# If log.cleaner.enable=true is set the cleaner will be enabled and individual logs can then be marked for log compaction.
log.cleaner.enable=false

############################# Zookeeper #############################

# Zookeeper connection string (see zookeeper docs for details).
# This is a comma separated host:port pairs, each corresponding to a zk
# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
# You can also append an optional chroot string to the urls to specify the
# root directory for all kafka znodes.

zookeeper.connect=zoo1:2181,zoo2:2182,zoo3:2183

# Timeout in ms for connecting to zookeeper
zookeeper.connection.timeout.ms=6000
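kafka_2 and kafka_3 are presumably configured the same way, with only the per-broker settings changed. Based on kfkstart.sh and the brokerList used by flume further down, the differing lines would look roughly like this (the broker.id values are an assumption; they just need to be unique):

# kafka_2/config/server.properties
broker.id=1
port=9093
host.name=kafka_2
log.dirs=/opt/tools/kafka/kafka_2/logs

# kafka_3/config/server.properties
broker.id=2
port=9094
host.name=kafka_3
log.dirs=/opt/tools/kafka/kafka_3/logs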

Start kafka with the kfkstart.sh script shown above.
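The flume sink below writes to a topic named test. Assuming it is created by hand rather than relying on auto-creation, with Kafka 0.8.1 that would look roughly like:

# create the topic used by the flume sink (replicated across all 3 brokers, 2 partitions to match num.partitions)
/opt/tools/kafka/kafka_1/bin/kafka-topics.sh --create \
    --zookeeper zoo1:2181,zoo2:2182,zoo3:2183 \
    --replication-factor 3 --partitions 2 --topic test

# confirm the topic exists and see its partition assignment
/opt/tools/kafka/kafka_1/bin/kafka-topics.sh --describe \
    --zookeeper zoo1:2181 --topic test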

Configure the offset monitoring tool (KafkaOffsetMonitor):

ls kafkaOffsetMonitor
KafkaOffsetMonitor-assembly-0.2.0.jar  logs  offsetapp.db  start.sh

cat kafkaOffsetMonitor/start.sh
#!/bin/bash
nohup java -cp KafkaOffsetMonitor-assembly-0.2.0.jar  com.quantifind.kafka.offsetapp.OffsetGetterWeb  --zk zoo1:2181,zoo2:2182,zoo3:2183  --port 8087  --refresh 10.seconds  --retain 1.days 1>logs/stdout.log 2>logs/stderr.log &
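Once started, the monitor's web UI is served on port 8087 (the --port argument above).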

The kafka management tool (kafka-manager):

cat kafka-manager-1.2.3/conf/application.conf

# Copyright 2015 Yahoo Inc. Licensed under the Apache License, Version 2.0
# See accompanying LICENSE file.

# This is the main configuration file for the application.
# ~~~~~

# Secret key
# ~~~~~
# The secret key is used to secure cryptographics functions.
# If you deploy your application to several instances be sure to use the same key!
application.secret="changeme"
application.secret=${?APPLICATION_SECRET}

# The application languages
# ~~~~~
application.langs="en"

# Global object class
# ~~~~~
# Define the Global object class for this application.
# Default to Global in the root package.
# global=Global

# Database configuration
# ~~~~~
# You can declare as many datasources as you want.
# By convention, the default datasource is named `default`
#
# db.default.driver=org.h2.Driver
# db.default.url="jdbc:h2:mem:play"
# db.default.user=sa
# db.default.password=
#
# You can expose this datasource via JNDI if needed (Useful for JPA)
# db.default.jndiName=DefaultDS

# Evolutions
# ~~~~~
# You can disable evolutions if needed
# evolutionplugin=disabled

# Ebean configuration
# ~~~~~
# You can declare as many Ebean servers as you want.
# By convention, the default server is named `default`
#
# ebean.default="models.*"

# Logger
# ~~~~~
# You can also configure logback (http://logback.qos.ch/), by providing a logger.xml file in the conf directory .

# Root logger:
logger.root=ERROR

# Logger used by the framework:
logger.play=INFO

# Logger provided to your application:
logger.application=DEBUG

kafka-manager.zkhosts="zoo1:2181,zoo2:2182,zoo3:2183"
kafka-manager.zkhosts=${?ZK_HOSTS}
pinned-dispatcher.type="PinnedDispatcher"
pinned-dispatcher.executor="thread-pool-executor"
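kafka-manager is a Play application, so unless a different port is passed it serves its web UI on Play's default port 9000; the cluster is then registered through that UI using the same zkhosts. If 9000 is already taken, the port can be overridden at launch, for example:

nohup /opt/tools/kafka/kafka-manager-1.2.3/bin/kafka-manager -Dkafka-manager.zkhosts="zoo1:2181,zoo2:2182,zoo3:2183" -Dhttp.port=9001 &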


3. Install flume

(1) Spooling directory mode and exec mode

cat conf/flume-conf.properties
# define an agent named stage_nginx
stage_nginx.sources = S1
stage_nginx.channels = M1
stage_nginx.sinks = sink

# source settings; two modes are shown here

# spooling directory mode: flume reads files dropped into the nginx log directory
stage_nginx.sources.S1.type = spooldir
stage_nginx.sources.S1.channels = M1
stage_nginx.sources.S1.spoolDir = /logs/nginx/log/shop

# exec (shell command) mode: tail a single file; with many log files,
# start several flume agents (I have not found a better way)
#stage_nginx.sources.S1.type = exec
#stage_nginx.sources.S1.channels = M1
#stage_nginx.sources.S1.command = tail -F /logs/nginx/log/shop/access.log

# define the sink
stage_nginx.sinks.sink.type = org.apache.flume.sink.kafka.KafkaSink
# the topic you created yourself (here: test)
stage_nginx.sinks.sink.topic = test
stage_nginx.sinks.sink.brokerList = kafka_1:9092,kafka_2:9093,kafka_3:9094
stage_nginx.sinks.sink.requiredAcks = 0
stage_nginx.sinks.sink.batchSize = 20
stage_nginx.sinks.sink.channel = M1

# define the channel
stage_nginx.channels.M1.type = memory
stage_nginx.channels.M1.capacity = 100

# Other config values specific to each type of channel(sink or source)
# can be defined as well
# In this case, it specifies the capacity of the memory channel

Start flume:

./bin/flume-ng agent -c /opt/tools/flume/conf/ -f /opt/tools/flume/conf/flume-conf.properties -n stage_nginx

To consume the data, search for "python kafka consumer" and write a small consumer program.
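Before writing a full consumer, the end of the pipeline can be sanity-checked with the console consumer that ships with Kafka (a quick verification sketch, not the final consumer):

# read everything flume has pushed into the test topic
/opt/tools/kafka/kafka_1/bin/kafka-console-consumer.sh \
    --zookeeper zoo1:2181,zoo2:2182,zoo3:2183 \
    --topic test --from-beginning

If log lines dropped into the spool directory show up here, the flume to kafka path is working, and a Python consumer (for example one built on the kafka-python package) can then read the same topic.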