Installing and configuring logstash to feed Kafka (with Hadoop audit-log configuration)
2017-06-13 14:44
Contents
- ES deployment
- logstash installation
- nginx JSON access-log format
- logstash configuration file (/etc/logstash/conf.d/lbgate.conf)
- Kafka output, consumed downstream via python-kafka
- Hadoop audit log

ES deployment
- 10.183.93.129
- 10.183.93.131
- 10.183.93.132
logstash installation

#!/bin/bash
cd /letv
rsync -avzP 10.180.92.199::wVioz35SWO9zywesmagfOrP9XjigoF8j/james/logstash.tar.gz .
tar -xzf logstash.tar.gz
ln -s /letv/logstash-2.4.0 /usr/local/logstash
export LOGSTASH_HOME=/usr/local/logstash
# Single quotes keep the variables unexpanded in .bashrc, so PATH is
# rebuilt correctly on every login instead of being frozen at install time.
echo 'export LOGSTASH_HOME=/usr/local/logstash
export PATH=${LOGSTASH_HOME}/bin:$PATH' >> /root/.bashrc
# Source the file we just appended to (not /etc/profile) to pick up the change.
source /root/.bashrc
nginx is configured with a JSON log_format

log_format json '{ "@timestamp": "$time_iso8601", '
                '"@fields": { '
                '"remote_addr": "$remote_addr", '
                '"remote_user": "$remote_user", '
                '"upstream_response_time": "$upstream_response_time", '
                '"request_time": "$request_time", '
                '"status": "$status", '
                '"upstream_addr": "$upstream_addr", '
                '"server_protocol": "$server_protocol", '
                '"host": "$host", '
                '"request_uri": "$request_uri", '
                '"request": "$request", '
                '"request_method": "$request_method", '
                '"http_referrer": "$http_referer", '
                '"body_bytes_sent": "$body_bytes_sent", '
                '"request_length": "$request_length", '
                '"bytes_sent": "$bytes_sent", '
                '"content_type": "$content_type", '
                '"request_body": "$request_body", '
                '"remote_port": "$remote_port", '
                '"request_body_file": "$request_body_file", '
                '"cookie_COKIE": "$cookie_COKIE", '
                '"http_x_forwarded_for": "$http_x_forwarded_for", '
                '"http_user_agent": "$http_user_agent" } }';
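With this format, each access entry is one JSON object per line. A minimal parsing sketch for a downstream consumer (the sample line and its field values are hypothetical). One caveat: a plain log_format does not JSON-escape variable values, so fields such as $request_body or $http_user_agent can break the JSON if they contain quotes.

```python
import json

# Hypothetical sample line in the nginx JSON log_format above.
sample = ('{ "@timestamp": "2017-06-13T14:44:00+08:00", '
          '"@fields": { "remote_addr": "10.0.0.1", '
          '"status": "200", "request_time": "0.003" } }')

def parse_access_line(line):
    """Parse one nginx JSON access-log line into a flat dict of fields."""
    doc = json.loads(line)
    fields = doc.get("@fields", {})
    fields["@timestamp"] = doc.get("@timestamp")
    return fields

rec = parse_access_line(sample)
print(rec["status"])  # -> 200
```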
logstash configuration file
/etc/logstash/conf.d/lbgate.conf
input {
  file {
    path => "/var/log/nginx/matrix*.json.log"
    codec => json
    start_position => "beginning"
    type => "nginx-log"
  }
}
output {
  if [type] == "nginx-log" {
    elasticsearch {
      hosts => ["10.183.93.129:9200"]
      index => "nginx-log-%{+YYYY.MM.dd}"
    }
  }
}
A Kafka output was added as well; the topic is consumed downstream with python-kafka.
input {
  file {
    path => "/var/log/nginx/matrix*json.log"
    codec => json
    start_position => "beginning"
    type => "nginx-log"
  }
}
output {
  if [type] == "nginx-log" {
    elasticsearch {
      hosts => ["10.183.93.129:9200"]
      index => "nginx-log-%{+YYYY.MM.dd}"
    }
  }
  if [type] == "nginx-log" {
    kafka {
      codec => json
      bootstrap_servers => "bops-10-183-93-131:9092,bops-10-183-93-132:9092,bops-10-183-93-129:9092"
      topic_id => "yanbo"
      timeout_ms => 10000
      retries => 3
      client_id => "yanbo_client"
    }
    # stdout { codec => rubydebug }
  }
}
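A minimal consumer sketch for the `yanbo` topic using kafka-python. The group id and the handling logic are assumptions for illustration, not part of the original setup; the topic and broker names come from the logstash output above.

```python
import json

def decode_value(raw):
    """Decode one Kafka message value produced by logstash's json codec."""
    return json.loads(raw.decode("utf-8"))

def main():
    # Lazy import so the module stays importable without a broker;
    # requires kafka-python (pip install kafka-python).
    from kafka import KafkaConsumer
    consumer = KafkaConsumer(
        "yanbo",  # topic_id from the logstash kafka output above
        bootstrap_servers=[
            "bops-10-183-93-131:9092",
            "bops-10-183-93-132:9092",
            "bops-10-183-93-129:9092",
        ],
        group_id="yanbo-python",       # assumed consumer group name
        auto_offset_reset="earliest",  # start from the oldest retained offset
        value_deserializer=decode_value,
    )
    for msg in consumer:
        event = msg.value
        print(event.get("@timestamp"),
              event.get("@fields", {}).get("status"))

if __name__ == "__main__":
    main()
```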
Hadoop audit log
input {
  file {
    type => "hdfs-audit"
    path => "/data/hadoop/data12/hadoop-logs/hdfs-audit.log"
    start_position => beginning
    sincedb_path => "/data/hadoop/data12/hadoop-logs/logstash"
  }
}
filter {
  if [type] == "hdfs-audit" {
    grok {
      match => ["message", "ugi=(?<user>([\w\d\-]+))@|ugi=(?<user>([\w\d\-]+))/[\w\d\-.]+@|ugi=(?<user>([\w\d.\-_]+))[\s(]+"]
    }
  }
}
output {
  if [type] == "hdfs-audit" {
    kafka {
      codec => plain { format => "%{message}" }
      bootstrap_servers => "rm1:9092,rm2:9092,test-nn1:9092,test-nn2:9092,10-140-60-50:9092"
      topic_id => "hdfslog"
      timeout_ms => 10000
      retries => 3
      client_id => "hdfs-audit"
    }
    # stdout { codec => rubydebug }
  }
}
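Because the plain codec ships the raw audit line to Kafka, the grok-extracted user field is not in the payload, so a consumer may need to repeat the extraction itself. A Python port of the grok alternation above, covering its three ugi= shapes (user@REALM, user/host@REALM, and "user (auth:...)"); the sample audit line is hypothetical:

```python
import re

# Same three alternatives as the grok pattern, tried in the same order.
UGI_PATTERNS = [
    re.compile(r"ugi=([\w\d\-]+)@"),               # ugi=alice@EXAMPLE.COM
    re.compile(r"ugi=([\w\d\-]+)/[\w\d\-.]+@"),    # ugi=alice/nn1.example.com@EXAMPLE.COM
    re.compile(r"ugi=([\w\d.\-_]+)[\s(]+"),        # ugi=alice (auth:SIMPLE)
]

def extract_user(line):
    """Return the user from an hdfs-audit log line, or None if absent."""
    for pat in UGI_PATTERNS:
        m = pat.search(line)
        if m:
            return m.group(1)
    return None

# Hypothetical hdfs-audit line for illustration.
sample = ("2017-06-13 14:44:00,123 INFO FSNamesystem.audit: allowed=true "
          "ugi=hdfs (auth:SIMPLE) ip=/10.183.93.129 cmd=listStatus "
          "src=/tmp dst=null perm=null")
print(extract_user(sample))  # -> hdfs
```

Python (before 3.12) rejects duplicate group names within one pattern, so the grok alternation is split into an ordered list of patterns instead of a single regex with three `(?<user>...)` groups.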