102 - Integrating log4j with Flume
2015-10-08 22:50
1. Install Flume
1.1 Version: 1.5.2
1.2 Edit the configuration file flume-conf.properties
[hadoop@mycluster conf]$ vi flume-conf.properties
agent1.channels = ch1
agent1.sources = avro-source1
agent1.sinks = log-sink1

# Define the channel
agent1.channels.ch1.type = memory

# Define the source
agent1.sources.avro-source1.channels = ch1
agent1.sources.avro-source1.type = avro
agent1.sources.avro-source1.bind = 0.0.0.0
agent1.sources.avro-source1.port = 41414

# Define the sink (not configured in the original; the logger sink matches the LoggerSink console output in 2.3)
agent1.sinks.log-sink1.type = logger
agent1.sinks.log-sink1.channel = ch1

Note: the integration uses Flume's avro source.
1.3 Start Flume
bin/flume-ng agent --conf conf --conf-file conf/flume-conf.properties --name agent1 -Dflume.root.logger=INFO,console
2. Verify that Flume receives the logs
2.1 Modify the log4j configuration
log4j.rootLogger=INFO,flume
log4j.appender.flume = org.apache.flume.clients.log4jappender.Log4jAppender
log4j.appender.flume.Hostname = 192.168.2.20
log4j.appender.flume.Port = 41414
log4j.appender.flume.UnsafeMode = true
log4j.appender.flume.layout = org.apache.log4j.PatternLayout
log4j.appender.flume.layout.ConversionPattern = %m%n
Note: the project must declare the Maven dependency for the log4j appender:
<dependency>
    <groupId>org.apache.flume.flume-ng-clients</groupId>
    <artifactId>flume-ng-log4jappender</artifactId>
    <version>1.5.2</version>
</dependency>
Note: flume-ng collects the logs produced by log4j purely through the log4j configuration above. In production, the log-producing service and the Flume node run on the same machine.
2.2 Write a log4j test class to verify that log4j and Flume are integrated correctly
package jfyun.log;

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;

public class LogProducer {

    // Create the logger once instead of on every loop iteration
    private static final Log logger = LogFactory.getLog(LogProducer.class);

    public static void main(String[] args) {
        while (true) {
            try {
                // Log one message per second, recording the simulated response time
                long s1 = System.currentTimeMillis();
                Thread.sleep(1000);
                long s2 = System.currentTimeMillis() - s1;
                logger.info("省公司鉴权接口:" + "http://bj.auth.com"
                        + ",响应时间:" + s2
                        + ",当前时间:" + System.currentTimeMillis());
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }
    }
}
2.3 Check the Flume console: it keeps printing log events, which confirms the integration succeeded
2015-09-29 03:44:52,572 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - org.apache.flume.sink.LoggerSink.process(LoggerSink.java:70)] Event: { headers:{flume.client.log4j.log.level=20000,
flume.client.log4j.message.encoding=UTF8, flume.client.log4j.logger.name=jfyun.log.LogProducer, flume.client.log4j.timestamp=1443523492933} body: E7 9C 81 E5 85 AC E5 8F B8 E9 89 B4 E6 9D 83 E6 ................ }
2015-09-29 03:44:53,586 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - org.apache.flume.sink.LoggerSink.process(LoggerSink.java:70)] Event: { headers:{flume.client.log4j.log.level=20000, flume.client.log4j.message.encoding=UTF8, flume.client.log4j.logger.name=jfyun.log.LogProducer,
flume.client.log4j.timestamp=1443523493947} body: E7 9C 81 E5 85 AC E5 8F B8 E9 89 B4 E6 9D 83 E6 ................ }
2015-09-29 03:44:54,603 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - org.apache.flume.sink.LoggerSink.process(LoggerSink.java:70)] Event: { headers:{flume.client.log4j.log.level=20000, flume.client.log4j.message.encoding=UTF8, flume.client.log4j.logger.name=jfyun.log.LogProducer,
flume.client.log4j.timestamp=1443523494961} body: E7 9C 81 E5 85 AC E5 8F B8 E9 89 B4 E6 9D 83 E6 ................ }
2015-09-29 03:44:55,617 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - org.apache.flume.sink.LoggerSink.process(LoggerSink.java:70)] Event: { headers:{flume.client.log4j.log.level=20000, flume.client.log4j.message.encoding=UTF8, flume.client.log4j.logger.name=jfyun.log.LogProducer,
flume.client.log4j.timestamp=1443523495975} body: E7 9C 81 E5 85 AC E5 8F B8 E9 89 B4 E6 9D 83 E6 ................ }
Here, the event body (shown highlighted in the original post) is the collected log data: the hex bytes are the UTF-8 encoding of the log message.
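To see that the body bytes really are the log message, the first fifteen bytes from the console dump above can be decoded as UTF-8. This is a minimal sketch (the class name `BodyDecoder` is just for illustration); the bytes decode to the first five characters of the message emitted by LogProducer:

```java
import java.nio.charset.StandardCharsets;

public class BodyDecoder {
    public static void main(String[] args) {
        // First 15 bytes of the event body printed by the Flume logger sink
        byte[] body = {
            (byte) 0xE7, (byte) 0x9C, (byte) 0x81,  // 省
            (byte) 0xE5, (byte) 0x85, (byte) 0xAC,  // 公
            (byte) 0xE5, (byte) 0x8F, (byte) 0xB8,  // 司
            (byte) 0xE9, (byte) 0x89, (byte) 0xB4,  // 鉴
            (byte) 0xE6, (byte) 0x9D, (byte) 0x83   // 权
        };
        // Decode as UTF-8, the encoding reported in the event header
        // (flume.client.log4j.message.encoding=UTF8)
        System.out.println(new String(body, StandardCharsets.UTF_8));  // prints 省公司鉴权
    }
}
```

The trailing lone `E6` in the dump is the first byte of the next (truncated) multi-byte character, which is why it is not decoded here.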