ELK Setup

2018-12-06  许俊贤

ELK (Elasticsearch, Logstash, Kibana)

Elasticsearch, Logstash, and Kibana combined make up a real-time log analysis platform.

Download page: https://www.elastic.co/cn/downloads

Note: the versions of the ELK components matter; ideally use the same version for all three.

Version used in this article: 5.1.x (the packages and directories below mix 5.1.1 and 5.1.2).

Before you install

Download the installation packages:

elasticsearch-5.1.1.zip, logstash-5.1.1.tar.gz, kibana-5.1.2-linux-x86_64.tar.gz

Create the user and group

groupadd elkgroup
useradd -d /home/elk -m elk -g elkgroup
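
A quick way to confirm the account was created as expected:

id elk    # should show the elk user with primary group elkgroup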

Extract the archives

cd /opt/ELK
unzip elasticsearch-5.1.1.zip 
tar -zxvf logstash-5.1.1.tar.gz
tar -zxvf kibana-5.1.2-linux-x86_64.tar.gz

Change the owner and group of the directories

chown -R elk:elkgroup ./

Adjust system limits

vi /etc/security/limits.conf

Add or modify the following lines:

* soft nofile 65536
* hard nofile 131072
* soft nproc 2048
* hard nproc 4096
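
These limits apply the next time the elk user logs in; you can confirm them afterwards with ulimit (the Elasticsearch startup log further down warns when the open-file limit is still too low):

ulimit -n    # max open files for the current session, should report 65536
ulimit -u    # max user processes, should report 2048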

Edit sysctl.conf

vi /etc/sysctl.conf

Add the following line:

vm.max_map_count=655360

Then apply it:

sysctl -p
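
Verify that the new value is active:

sysctl vm.max_map_count    # should print vm.max_map_count = 655360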

Start the Elasticsearch service

Switch to the elk user and run:

su elk
cd /opt/ELK/elasticsearch-5.1.2
ES_JAVA_OPTS="-Xms1g -Xmx1g" ./bin/elasticsearch

Startup is successful when logs like the following appear:

 [2018-07-31T11:49:20,727][INFO ][o.e.n.Node               ] [] initializing ... 
 [2018-07-31T11:49:20,850][INFO ][o.e.e.NodeEnvironment    ] [o5YDWTK] using [1] data paths, mounts [[/ (rootfs)]], net usable_space [26.6gb], net total_space [39.2gb], spins? [unknown], types [rootfs] 
 [2018-07-31T11:49:20,850][INFO ][o.e.e.NodeEnvironment    ] [o5YDWTK] heap size [1.9gb], compressed ordinary object pointers [true]
 [2018-07-31T11:49:20,851][INFO ][o.e.n.Node               ] node name [o5YDWTK] derived from node ID [o5YDWTKoS3-Mr-M1WNWjNg]; set [node.name] to override
 [2018-07-31T11:49:20,853][INFO ][o.e.n.Node               ] version[5.1.2], pid[14033], build[c8c4c16/2017-01-11T20:18:39.146Z], OS[Linux/3.10.0-693.2.2.el7.x86_64/amd64], JVM[Oracle Corporation/Java HotSpot(TM) 64-Bit Server VM/1.8.0_161/25.161-b12]
 [2018-07-31T11:49:21,751][INFO ][o.e.p.PluginsService     ] [o5YDWTK] loaded module [aggs-matrix-stats]
 [2018-07-31T11:49:21,752][INFO ][o.e.p.PluginsService     ] [o5YDWTK] loaded module [ingest-common] 
 [2018-07-31T11:49:21,752][INFO ][o.e.p.PluginsService     ] [o5YDWTK] loaded module [lang-expression]
 [2018-07-31T11:49:21,752][INFO ][o.e.p.PluginsService     ] [o5YDWTK] loaded module [lang-groovy]
 [2018-07-31T11:49:21,752][INFO ][o.e.p.PluginsService     ] [o5YDWTK] loaded module [lang-mustache]
 [2018-07-31T11:49:21,752][INFO ][o.e.p.PluginsService     ] [o5YDWTK] loaded module [lang-painless]
 [2018-07-31T11:49:21,752][INFO ][o.e.p.PluginsService     ] [o5YDWTK] loaded module [percolator]
 [2018-07-31T11:49:21,752][INFO ][o.e.p.PluginsService     ] [o5YDWTK] loaded module [reindex] 
 [2018-07-31T11:49:21,752][INFO ][o.e.p.PluginsService     ] [o5YDWTK] loaded module [transport-netty3]
 [2018-07-31T11:49:21,752][INFO ][o.e.p.PluginsService     ] [o5YDWTK] loaded module [transport-netty4]
 [2018-07-31T11:49:21,753][INFO ][o.e.p.PluginsService     ] [o5YDWTK] no plugins loaded 
 [2018-07-31T11:49:24,087][INFO ][o.e.n.Node               ] initialized
 [2018-07-31T11:49:24,091][INFO ][o.e.n.Node               ] [o5YDWTK] starting ...
 [2018-07-31T11:49:24,394][INFO ][o.e.t.TransportService   ] [o5YDWTK] publish_address {127.0.0.1:9300}, bound_addresses {127.0.0.1:9300}
 [2018-07-31T11:49:24,400][WARN ][o.e.b.BootstrapCheck     ] [o5YDWTK] max file descriptors [65535] for elasticsearch process is too low, increase to at least [65536]
 [2018-07-31T11:49:27,455][INFO ][o.e.c.s.ClusterService   ] [o5YDWTK] new_master {o5YDWTK}{o5YDWTKoS3-Mr-M1WNWjNg}{kdPu3SUFTb212sH1PQnBiw}{127.0.0.1}{127.0.0.1:9300}, reason: zen-disco-elected-as-master ([0] nodes joined)
 [2018-07-31T11:49:27,468][INFO ][o.e.h.HttpServer         ] [o5YDWTK] publish_address {127.0.0.1:9200}, bound_addresses {127.0.0.1:9200}
 [2018-07-31T11:49:27,469][INFO ][o.e.n.Node               ] [o5YDWTK] started
 [2018-07-31T11:49:27,486][INFO ][o.e.g.GatewayService     ] [o5YDWTK] recovered [0] indices into cluster_state

The log shows two ports being opened: 9300 is used for transport between nodes, and 9200 accepts HTTP requests. Press Ctrl+C to stop the process.
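
A quick sanity check that the HTTP port is serving requests is to query the cluster health API (the exact status reported depends on your data and shard allocation):

curl 'http://127.0.0.1:9200/_cluster/health?pretty'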

Because the server has limited memory, the default 2 GB heap was reduced by editing:

./config/jvm.options
 -Xms1g 
 -Xmx1g

Run in the background

./bin/elasticsearch -d
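
When daemonized, Elasticsearch can also record its process ID to a file, which makes stopping it later easier (the -p flag is part of the standard launcher; the file name here is arbitrary):

./bin/elasticsearch -d -p es.pid    # start in the background and write the PID to es.pid
kill $(cat es.pid)                  # stop the daemon when needed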

Querying the service (for example with curl http://127.0.0.1:9200) returns:

 {
   "name" : "o5YDWTK",
   "cluster_name" : "elasticsearch",
   "cluster_uuid" : "X_xWRtxpQhe3nW1w7RbpRg",
   "version" : {
     "number" : "5.1.2",
     "build_hash" : "c8c4c16",
     "build_date" : "2017-01-11T20:18:39.146Z",
     "build_snapshot" : false,
     "lucene_version" : "6.3.0"
   },
   "tagline" : "You Know, for Search"
 }

To allow access from other machines on the LAN, edit config/elasticsearch.yml and set:

 network.host: 192.168.xxx.xxx
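
Note that once Elasticsearch binds to a non-loopback address, the bootstrap checks (file descriptors, vm.max_map_count, and so on) become hard requirements, which is why the system limits were raised earlier. After restarting, verify access from another machine on the LAN (substitute your server's actual address for the placeholder):

curl 'http://192.168.xxx.xxx:9200'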

Logstash

Configuration

cd /opt/ELK/logstash-5.1.2
vi config/first-pipeline.conf
 #config
 input {
   #log4j {
   #  host => "127.0.0.1"
   #  port => 8888
   #}
   file{
      path => "/usr/local/servers/blog/logs/catalina.out"
   }
 }
 output {
     elasticsearch {
         hosts => [ "127.0.0.1:9200" ]
         index => "debug-%{+YYYY.MM.dd}"
     }  
     stdout{
         codec => rubydebug
     }
 }
 #config
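
Logstash can validate the pipeline file before running it for real; the -t (--config.test_and_exit) flag checks the syntax and exits:

./bin/logstash -f config/first-pipeline.conf -t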

Start the service

./bin/logstash -f config/first-pipeline.conf

(to run in the background)

nohup ./bin/logstash -f /opt/ELK/logstash-5.1.2/config/first-pipeline.conf &

We can also change the command so that it writes its output to a file of our choosing, for example:

nohup ./bin/logstash -f ./configs > myout.file 2>&1 &
nohup ./bin/logstash -f /opt/ELK/logstash-5.1.2/config/first-pipeline.conf > /opt/ELK/logstash-5.1.2/logs/logstashout.file 2>&1 &
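
To follow the redirected output or find the background process later (ordinary housekeeping, nothing Logstash-specific):

tail -f /opt/ELK/logstash-5.1.2/logs/logstashout.file    # watch the output file
ps -ef | grep logstash                                   # find the process ID, then kill it to stop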

The following log output appears:

 Sending Logstash's logs to /opt/ELK/logstash-5.1.2/logs which is now configured via log4j2.properties
 [2018-07-31T17:09:51,958][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>["http://127.0.0.1:9200"]}}
 [2018-07-31T17:09:51,962][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:url=>#, :healthcheck_path=>"/"}
 [2018-07-31T17:09:52,281][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>#}
 [2018-07-31T17:09:52,284][INFO ][logstash.outputs.elasticsearch] Using mapping template from {:path=>nil}
 [2018-07-31T17:09:52,369][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"template"=>"logstash-*", "version"=>50001, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"_all"=>{"enabled"=>true, "norms"=>false}, "dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword"}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date", "include_in_all"=>false}, "@version"=>{"type"=>"keyword", "include_in_all"=>false}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}} 
 [2018-07-31T17:09:52,381][INFO ][logstash.outputs.elasticsearch] Installing elasticsearch template to _template/logstash 
 [2018-07-31T17:09:52,562][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["127.0.0.1:9200"]}
 [2018-07-31T17:09:52,569][INFO ][logstash.pipeline        ] Starting pipeline {"id"=>"main", "pipeline.workers"=>2, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>250} 
 [2018-07-31T17:09:52,591][INFO ][logstash.pipeline        ] Pipeline main started 
 [2018-07-31T17:09:52,676][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600} 

This indicates that Logstash has successfully connected to the specified Elasticsearch instance.
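
To confirm that events are actually being indexed, append a line to the watched log file and list the matching indices (the index name follows the debug-YYYY.MM.dd pattern configured above):

echo "test log line" >> /usr/local/servers/blog/logs/catalina.out
curl 'http://127.0.0.1:9200/_cat/indices/debug-*?v'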

Kibana

Extract the archive and enter the directory

tar -zxvf kibana-5.1.2-linux-x86_64.tar.gz
cd kibana-5.1.2-linux-x86_64

Edit the configuration file

vi config/kibana.yml

Add the following settings:

server.port: 5601 
server.host: "192.168.111.130" 
elasticsearch.url: "http://192.168.111.131:9200"
kibana.index: ".kibana"

Start the service

./bin/kibana

Log output:

log   [11:19:06.135] [info][status][plugin:kibana@5.1.2] Status changed from uninitialized to green - Ready
log   [11:19:06.194] [info][status][plugin:elasticsearch@5.1.2] Status changed from uninitialized to yellow - Waiting for Elasticsearch
log   [11:19:06.218] [info][status][plugin:console@5.1.2] Status changed from uninitialized to green - Ready
log   [11:19:06.425] [info][status][plugin:timelion@5.1.2] Status changed from uninitialized to green - Ready
log   [11:19:06.429] [info][listening] Server running at http://127.0.0.1:5601
log   [11:19:06.438] [info][status][ui settings] Status changed from uninitialized to yellow - Elasticsearch plugin is yellow
log   [11:19:11.452] [info][status][plugin:elasticsearch@5.1.2] Status changed from yellow to yellow - No existing Kibana index found
log   [11:19:11.509] [info][status][plugin:elasticsearch@5.1.2] Status changed from yellow to green - Kibana index ready
log   [11:19:11.509] [info][status][ui settings] Status changed from yellow to green - Ready
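
Like Logstash, Kibana can be kept running after you log out by launching it with nohup (the output file name here is arbitrary):

nohup ./bin/kibana > kibana.out 2>&1 &

Kibana is then reachable in a browser at http://192.168.111.130:5601, matching the server.host and server.port values configured above.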

Chinese localization

Download the localization package from GitHub (https://github.com/anbai-inc/Kibana_Hanization), then run it against the Kibana installation directory:

python main.py /opt/ELK/kibana-5.1.2-linux-x86_64

Scheduled task

crontab -e    # edits only the current user's scheduled tasks

Add an entry that invokes the check script:

/opt/ELK/check_kibana.sh
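
The contents of check_kibana.sh are not shown here; a minimal sketch of what such a watchdog might look like, assuming Kibana lives in /opt/ELK/kibana-5.1.2-linux-x86_64 and should be restarted whenever the web UI on port 5601 stops responding:

#!/bin/bash
# check_kibana.sh - hypothetical watchdog: restart Kibana if it is unreachable
KIBANA_HOME=/opt/ELK/kibana-5.1.2-linux-x86_64

# probe the Kibana HTTP port; restart the service if the request fails
if ! curl -s -o /dev/null "http://192.168.111.130:5601"; then
    nohup "$KIBANA_HOME/bin/kibana" > "$KIBANA_HOME/kibana.out" 2>&1 &
fi

A matching crontab entry that runs the check every five minutes could look like:

*/5 * * * * /opt/ELK/check_kibana.sh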

Note: this post is synced from my personal blog to Jianshu.
