
A Complete ELK Setup【Part 1】

2016-05-02 20:05
【References】
Official site: https://www.elastic.co/downloads
Also: http://517sou.net/archives/centos%E4%B8%8B%E4%BD%BF%E7%94%A8elk%E5%A5%97%E4%BB%B6%E6%90%AD%E5%BB%BA%E6%97%A5%E5%BF%97%E5%88%86%E6%9E%90%E5%92%8C%E7%9B%91%E6%8E%A7%E5%B9%B3%E5%8F%B0/
Also: http://my.oschina.net/itblog/blog/547250

【ELK components】
Elasticsearch: Search and analyze data in real time.
Logstash: Collect, enrich, and transport data.
Kibana: Explore and visualize your data.
Note: ELK is simply the acronym of these three components.

【System environment and packages】
System information:
[root@log_server src]# du -sh elasticsearch-2.2.1.tar.gz logstash-2.2.2.tar.gz kibana-4.4.2-linux-x64.tar.gz
29M elasticsearch-2.2.1.tar.gz
72M logstash-2.2.2.tar.gz
32M kibana-4.4.2-linux-x64.tar.gz
Compare the checksum with the one published on the official site to confirm the download is complete:
[root@log_server src]# sha1sum kibana-4.4.2-linux-x64.tar.gz
6251dbab12722ea1a036d8113963183f077f9fa7 kibana-4.4.2-linux-x64.tar.gz
[root@log_server src]# cat /etc/redhat-release ; uname -m
CentOS release 6.4 (Final)
x86_64
Make sure the firewall and SELinux are off:
[root@log_server src]# /etc/init.d/iptables status
iptables: Firewall is not running.
[root@log_server src]# getenforce
Disabled
Maximum file descriptors (the default per-user limit of 1024 is far too small; 65536 or higher is required):
[root@master ~]# ulimit -n
102400
To raise it, add these two lines to /etc/security/limits.conf, then log in again and check with ulimit -n:
* soft nofile 102400
* hard nofile 102400
Java:
[root@log_server src]# java -version
java version "1.7.0_99"
OpenJDK Runtime Environment (rhel-2.6.5.0.el6_7-x86_64 u99-b00)
OpenJDK 64-Bit Server VM (build 24.95-b01, mixed mode)
Note: search the web for JDK installation instructions if you need them.
Note: Logstash depends on a Java runtime, and Logstash 1.5 and later require at least Java 1.7, so the latest Java release is recommended. Only the runtime is needed, so a JRE would suffice, but I use a full JDK here; install it however you prefer (the easy route is yum install -y java-1.7.0-openjdk).
If the version is too old, Elasticsearch warns:
!!! Please upgrade your java version, the current version '1.7.0_09-icedtea-mockbuild_2013_01_16_18_52-b00' may cause problems. We recommend a minimum version of 1.7.0_51
That is the version guidance (1.8 is recommended).
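The checks above can be bundled into a small bash helper. This is my own sketch, not part of the original walkthrough; the 65536 threshold simply mirrors the requirement noted above.

#!/bin/bash
# Quick pre-flight check before installing ELK (optional helper)
java -version 2>&1 | head -n 1          # expect at least 1.7.0_51; 1.8 is recommended
fd_limit=$(ulimit -n)                   # per-user open-file limit
if [ "$fd_limit" -lt 65536 ]; then
  echo "nofile limit is $fd_limit; raise it in /etc/security/limits.conf and re-login"
fi
/etc/init.d/iptables status             # firewall should be stopped on CentOS 6
getenforce                              # SELinux should report Disabled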
Download the packages from the official source. This article deploys the whole ELK stack on a single CentOS machine; the versions used are:
wget https://download.elasticsearch.org/elasticsearch/release/org/elasticsearch/distribution/tar/elasticsearch/2.2.1/elasticsearch-2.2.1.tar.gz
wget https://download.elastic.co/logstash/logstash/logstash-2.2.2.tar.gz
wget https://download.elastic.co/kibana/kibana/kibana-4.4.2-linux-x64.tar.gz

【Install Elasticsearch】
Extract the tarball, create a symlink, and cd into the directory:
tar xvf elasticsearch-2.2.1.tar.gz -C /usr/local/
ln -s /usr/local/elasticsearch-2.2.1/ /usr/local/elasticsearch
cd /usr/local/elasticsearch
Install this important plugin:
[root@master elasticsearch]# ./bin/plugin install mobz/elasticsearch-head
-> Installing mobz/elasticsearch-head...
Plugins directory [/usr/local/elasticsearch/plugins] does not exist. Creating...
Trying https://github.com/mobz/elasticsearch-head/archive/master.zip ...
Downloading ....................................................DONE
Verifying https://github.com/mobz/elasticsearch-head/archive/master.zip checksums if available ...
NOTE: Unable to verify checksum for downloaded plugin (unable to find .sha1 or .md5 file to verify)
Installed head into /usr/local/elasticsearch/plugins/head
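head is a plugin you browse to, but once Elasticsearch is running (it is started a little further below) you can also confirm which plugins are installed over the REST API. This optional check is my own addition and uses the standard _cat endpoint:

curl 192.168.100.10:9200/_cat/plugins?v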
Create a user and the data/log directories (Elasticsearch 2.0.0 and later refuses to run as root):
[root@master elasticsearch]# groupadd -g 1000 elasticsearch
[root@master elasticsearch]# useradd -g 1000 -u 1000 elasticsearch
[root@master elasticsearch]# sudo -u elasticsearch mkdir /tmp/elasticsearch
[root@master elasticsearch]# ls /tmp/elasticsearch
[root@master elasticsearch]# sudo -u elasticsearch mkdir /tmp/elasticsearch/{data,logs}
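As an optional sanity check (not in the original post), confirm that both directories exist and are owned by the elasticsearch user:

ls -ld /tmp/elasticsearch/data /tmp/elasticsearch/logs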
mkdir /usr/local/elasticsearch/config/scripts
Edit the configuration file (vim config/elasticsearch.yml) and add the following four lines (note the space after each colon):
path.data: /tmp/elasticsearch/data
path.logs: /tmp/elasticsearch/logs
network.host: 192.168.100.10
network.port: 9200
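If you would rather not open vim, the same four lines can be appended from the shell. This is just a convenience sketch (my addition) and assumes those keys are not already set elsewhere in the file:

cat >> /usr/local/elasticsearch/config/elasticsearch.yml <<'EOF'
path.data: /tmp/elasticsearch/data
path.logs: /tmp/elasticsearch/logs
network.host: 192.168.100.10
network.port: 9200
EOF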
Leave the rest of the configuration at its defaults and start Elasticsearch:
sudo -u elasticsearch /usr/local/elasticsearch/bin/elasticsearch
Note: for real deployments you will want it running in the background:
sudo -u elasticsearch /usr/local/elasticsearch/bin/elasticsearch -d
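To confirm a background start actually came up, you can watch the log under the path.logs directory configured above; the main log file is named after the cluster, which defaults to elasticsearch (my own optional check):

tail -f /tmp/elasticsearch/logs/elasticsearch.log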
Check the process and ports:
[root@master ~]# ps -ef | grep java
1000 9477 9338 2 21:02 pts/4 00:00:07 /usr/bin/java -Xms256m -Xmx1g -Djava.awt.headless=true -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+HeapDumpOnOutOfMemoryError -XX:+DisableExplicitGC -Dfile.encoding=UTF-8 -Djna.nosys=true -Des.path.home=/usr/local/elasticsearch -cp /usr/local/elasticsearch/lib/elasticsearch-2.2.1.jar:/usr/local/elasticsearch/lib/* org.elasticsearch.bootstrap.Elasticsearch start
root 9620 9576 0 21:07 pts/0 00:00:00 grep --color=auto java
[root@master ~]# netstat -tulnp | grep java
tcp 0 0 ::ffff:192.168.100.10:9200 :::* LISTEN 9477/java
tcp 0 0 ::ffff:192.168.100.10:9300 :::* LISTEN 9477/java
Note: as you can see, 9300 is the transport port used to talk to other nodes, while 9200 accepts HTTP requests.
# curl 192.168.100.10:9200
{
  "name" : "Wilson Fisk",
  "cluster_name" : "elasticsearch",
  "version" : {
    "number" : "2.2.1",
    "build_hash" : "d045fc29d1932bce18b2e65ab8b297fbf6cd41a1",
    "build_timestamp" : "2016-03-09T09:38:54Z",
    "build_snapshot" : false,
    "lucene_version" : "5.4.1"
  },
  "tagline" : "You Know, for Search"
}
The response shows the configured cluster_name and node name, the installed Elasticsearch version, and so on.
The head plugin installed earlier lets you interact with the cluster from a browser: you can inspect the cluster state and its documents, run searches, and issue plain REST requests. You can open http://192.168.100.10:9200/_plugin/head/ now to check the cluster status. It is genuinely handy.

【Install Logstash --- store and transport log data】
Logstash is essentially just a collector: it collects (input) and forwards (output), so we have to give it an input and an output (and there can be more than one of each). Here the input will be our logs and the output will be Elasticsearch.
Extract the tarball and create a symlink:
tar xvf logstash-2.2.2.tar.gz -C /usr/local/
ln -s /usr/local/logstash-2.2.2/ /usr/local/logstash
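Before writing any pipelines, a quick sanity check that the unpacked binary runs (an optional step I add here; it should print the Logstash release):

/usr/local/logstash/bin/logstash --version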
Test Logstash
(1) Console-only test, reading from stdin and writing to stdout:
/usr/local/logstash/bin/logstash -e 'input { stdin { } } output { stdout {} }'
Whatever we type, Logstash echoes back in a structured format. The -e flag lets Logstash take its configuration straight from the command line, which is especially handy for quickly testing whether a configuration is correct without writing a file. Press CTRL-C to exit the running Logstash. Specifying the configuration on the command line with -e is common, but once you need more settings it becomes unwieldy. In that case, create a simple configuration file and point Logstash at it. For example, create a configuration under the Logstash installation directory:
Create a directory for configuration files:
mkdir -p /usr/local/logstash/etc
vim /usr/local/logstash/etc/hello_search.conf
Enter the following:
# cat /usr/local/logstash/etc/hello_search.conf
input {
  stdin {
    type => "human"
  }
}
output {
  stdout {
    codec => rubydebug
  }
  elasticsearch {
    hosts => "192.168.100.10:9200"
  }
}
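Before starting, you can optionally have Logstash validate the file; in the 2.x series the flag for this is --configtest (short form -t). A sketch:

/usr/local/logstash/bin/logstash --configtest -f /usr/local/logstash/etc/hello_search.conf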
Start it:
/usr/local/logstash/bin/logstash -f /usr/local/logstash/etc/hello_search.conf
(Input is typed on the console, echoed to the screen in rubydebug format, and also shipped to Elasticsearch.)
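For a line typed at the console, say hello elk, the rubydebug output looks roughly like this (illustrative only; your timestamp and host will differ):

{
       "message" => "hello elk",
      "@version" => "1",
    "@timestamp" => "2016-05-02T13:02:11.123Z",
          "host" => "log_server",
          "type" => "human"
}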
To check whether the log events made it into Elasticsearch, query this endpoint:
curl 'http://192.168.100.10:9200/_search?pretty'
If the events appear in the response, Elasticsearch has received them. At this point you have successfully collected log data with Elasticsearch and Logstash.
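Logstash writes to daily indices named logstash-YYYY.MM.DD by default, so you can also scope the check to just those indices; a sketch of the narrower query (my addition):

curl 'http://192.168.100.10:9200/logstash-*/_search?pretty&size=1'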
【Install Kibana --- visualize the data】
Note: Kibana now ships with its own web server, so bin/kibana starts it directly; there is no longer any need to front it with Nginx. Use the built-in web server.
Install Kibana: downloading the package and extracting it into the target directory is the whole installation.
Extract the tarball and create a symlink:
tar -xzvf kibana-4.4.2-linux-x64.tar.gz -C /usr/local/
ln -s /usr/local/kibana-4.4.2-linux-x64/ /usr/local/kibana
Start Kibana:
/usr/local/kibana-4.4.2-linux-x64/bin/kibana
or
/usr/local/kibana/bin/kibana
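Both commands run Kibana in the foreground; to keep it alive after the shell closes, a minimal sketch with nohup (my addition; the log path is arbitrary):

nohup /usr/local/kibana/bin/kibana > /tmp/kibana.log 2>&1 &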
At this point Kibana is not yet connected to Elasticsearch.
Configure Kibana:
vim /usr/local/kibana/config/kibana.yml
Change
# elasticsearch.url: "http://localhost:9200"
to
elasticsearch.url: "http://192.168.100.10:9200"
Restart it: /usr/local/kibana/bin/kibana
Web access: Kibana listens on port 5601, so browse to http://kibanaServerIP:5601.
After opening Kibana, the first task is to configure an index. By default Kibana points at Elasticsearch and uses the default logstash-* index pattern, which is time-based; just click "Create".
In order to use Kibana you must configure at least one index pattern. Index patterns are used to identify the Elasticsearch index to run search and analytics against. They are also used to configure fields.
In other words, before going any further Kibana needs at least one index name or pattern, which it uses to determine which Elasticsearch index to analyze.
Click "Discover" to search and browse the data in Elasticsearch. By default it searches the last 15 minutes; the time range can be customized.
With that, your ELK platform is installed and deployed.

Supplement: [Configure Logstash as an indexer]
This configures Logstash as an indexer and stores its log data in Elasticsearch; the example indexes the local system logs.
cat /usr/local/logstash/etc/logstash-indexer.conf
input {
  file {
    type => "syslog"
    path => ["/var/log/messages", "/var/log/secure"]
  }
  syslog {
    type => "syslog"
    port => "5544"
  }
}
output {
  stdout { codec => rubydebug }
  elasticsearch { hosts => "192.168.100.10:9200" }
}
Run:
/usr/local/logstash/bin/logstash -f /usr/local/logstash/etc/logstash-indexer.conf
Then run:
echo "谷歌alphago和李世石围棋大战" >> /var/log/messages
and refresh Kibana.
Each log-collection pipeline you start runs as its own independent process.

This article comes from the "崔德华运维打工从业路" blog; please keep this attribution: http://cuidehua.blog.51cto.com/5449828/1769525