
ELK: Logstash

2017-03-02
ELK architecture diagram (image not reproduced here)

Official site: https://www.elastic.co/
Logstash deployment model: Agent---Server
Logstash event pipeline: input---(filter, codec)---output
There is no real difference between Agent and Server: both run the same Logstash program, only the configuration differs.
Common plugins:
input plugins: stdin, file, redis
filter plugins: grok
output plugins: stdout, redis, elasticsearch
Logstash is a heavyweight data collector (it runs on the JVM), so a JDK environment is required.

Deploying the JDK
# yum install -y java-1.8.0-openjdk-headless java-1.8.0-openjdk-devel java-1.8.0-openjdk
# echo "export JAVA_HOME=/usr" > /etc/profile.d/java.sh
# source /etc/profile.d/java.sh
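To confirm the JDK is usable before moving on:
# java -version //should report the OpenJDK 1.8.0 build just installed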
Installing Logstash

Logstash major releases: 1.x, 2.x, 5.x (this article uses 1.5.4).
(For a lab VM, something like 2 cores / 4 threads and 2 GB of RAM is advisable; Logstash is heavyweight.)
# yum install -y logstash-1.5.4-1.noarch.rpm
# echo 'export PATH=/opt/logstash/bin:$PATH' > /etc/profile.d/logstash.sh //put the logstash command on PATH (single quotes so $PATH stays unexpanded in the file)
# source /etc/profile.d/logstash.sh
/etc/sysconfig/logstash //startup parameters for the logstash service
/etc/logstash/conf.d/ //the service loads every configuration file in this directory
# logstash --help //startup is slow; the help text takes a good while to appear
Create a test configuration file:
# vim /etc/logstash/conf.d/simple.conf
input {                      //how events enter the pipeline
  stdin {}                   //standard input (the keyboard)
}
output {                     //where events are sent
  stdout {                   //standard output (the screen)
    codec => rubydebug       //pretty-printed output format
  }
}
Run it:
# logstash -f /etc/logstash/conf.d/simple.conf --configtest //check the configuration for errors
Configuration OK
# logstash -f /etc/logstash/conf.d/simple.conf //run it
Logstash startup completed //printed once startup has finished
hello,logstash //type this on standard input; the event is printed to standard output as follows
{
       "message" => "hello,logstash",
      "@version" => "1",
    "@timestamp" => "2017-03-02T09:35:12.773Z",
          "host" => "elk"
}
That completes the basic Logstash workflow; the rest is a matter of working through the individual plugins.

input plugins: file, udp

Using file as the data input; see the official documentation: https://www.elastic.co/guide/en/logstash/1.5/plugins-inputs-file.html#_file_rotation
# vim /etc/logstash/conf.d/file-simple.conf
input {
  file {
    path => ["/var/log/httpd/access_log"]    //an array; several log files can be listed
    type => "system"                         //a category label that filter plugins can match on
    start_position => "beginning"            //start reading from the beginning of the file (after log rotation, the new file is read from its first line)
  }
}
output {
  stdout {
    codec => rubydebug
  }
}
# logstash -f /etc/logstash/conf.d/file-simple.conf --configtest
# logstash -f /etc/logstash/conf.d/file-simple.conf
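For events to appear, the monitored file has to grow. Assuming the local Apache httpd is running (an assumption for this test; any process that appends to access_log will do), one request is enough:
# curl -s http://localhost/ > /dev/null //appends a line to access_log, which Logstash then prints as an event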
Using UDP to feed data into Logstash; official documentation: https://www.elastic.co/guide/en/logstash/1.5/plugins-inputs-udp.html
The data producer sends its data over UDP to the port on which Logstash listens.
Here the producer is collectd, a performance-monitoring daemon installable from the EPEL repository.
On a second host:
# yum install collectd -y
[root@elk-node1 ~]# grep -Ev "(#|$)" /etc/collectd.conf
Hostname "elk-node1"
LoadPlugin syslog
LoadPlugin cpu
LoadPlugin df
LoadPlugin disk
LoadPlugin interface
LoadPlugin load
LoadPlugin memory
LoadPlugin network
<Plugin network>
  <Server "192.168.9.77" "25826">    //send the collected metrics to this server (the Logstash host)
  </Server>
</Plugin>
Include "/etc/collectd.d"
# systemctl start collectd.service
Configure the Logstash side:
# vim /etc/logstash/conf.d/udp-simple.conf
input {
  udp {
    port => "25826"          //listen on the port collectd sends to
    codec => collectd {}     //decode collectd's binary network protocol
    type => "collectd"
  }
}
output {
  stdout {
    codec => rubydebug
  }
}
# logstash -f /etc/logstash/conf.d/udp-simple.conf --configtest
# logstash -f /etc/logstash/conf.d/udp-simple.conf
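As an optional sanity check (not in the original walkthrough), confirm that Logstash has bound the UDP port before starting collectd on the other host:
# ss -unlp | grep 25826 //ss is part of iproute2; -u UDP, -n numeric, -l listening, -p show the owning process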
As soon as startup completes, metric events start coming in:
{
               "host" => "elk-node1",
         "@timestamp" => "2017-02-28T23:46:14.354Z",
             "plugin" => "disk",
    "plugin_instance" => "dm-1",
      "collectd_type" => "disk_ops",
               "read" => 322,
              "write" => 358,
           "@version" => "1",
               "type" => "collectd"
}

filter plugins: grok (the core plugin for web logs)

grok parses unstructured text and turns it into structured data.
Pattern templates:
The default pattern files live in /opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-patterns-core-0.3.0/patterns:
aws bro firewalls grok-patterns haproxy java junos linux-syslog mcollective mongodb nagios postgresql rails redis ruby
COMMONAPACHELOG %{IPORHOST:clientip} %{USER:ident} %{USER:auth} \[%{HTTPDATE:timestamp}\] "(?:%{WORD:verb} %{NOTSPACE:request}(?: HTTP/%{NUMBER:httpversion})?|%{DATA:rawrequest})" %{NUMBER:response} (?:%{NUMBER:bytes}|-)
The line above is the pattern for Apache's common log format.
Taking IPORHOST as an example, the pattern file defines how an IP address or a hostname is matched:
IPORHOST (?:%{HOSTNAME}|%{IP})
HOSTNAME \b(?:[0-9A-Za-z][0-9A-Za-z-]{0,62})(?:\.(?:[0-9A-Za-z][0-9A-Za-z-]{0,62}))*(\.?|\b)
HOST %{HOSTNAME}
IPV4 (?<![0-9])(?:(?:[0-1]?[0-9]{1,2}|2[0-4][0-9]|25[0-5])[.](?:[0-1]?[0-9]{1,2}|2[0-4][0-9]|25[0-5])[.](?:[0-1]?[0-9]{1,2}|2[0-4][0-9]|25[0-5])[.](?:[0-1]?[0-9]{1,2}|2[0-4][0-9]|25[0-5]))(?![0-9])
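As a concrete illustration (a minimal sketch that is not part of the original walkthrough; the file name grok-simple.conf is made up for the example), COMMONAPACHELOG can be applied to Apache's access log with a grok filter, after which fields such as clientip, verb and response show up as separate keys in the event:
# vim /etc/logstash/conf.d/grok-simple.conf
input {
  file {
    path => ["/var/log/httpd/access_log"]
    type => "apache"
  }
}
filter {
  grok {
    match => { "message" => "%{COMMONAPACHELOG}" }    //split the raw line into named fields
  }
}
output {
  stdout {
    codec => rubydebug
  }
}
# logstash -f /etc/logstash/conf.d/grok-simple.conf --configtest
# logstash -f /etc/logstash/conf.d/grok-simple.conf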


Redis can serve as a Logstash input source in much the same way that it serves as an output target.
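A minimal sketch of that symmetry (the file names and the Redis address 192.168.9.77 are placeholders; the input and output plugins only need host, data_type and key to agree): a shipper instance pushes events onto a Redis list, and an indexer instance pops them off the same list:
# vim /etc/logstash/conf.d/redis-out.conf //on the shipper
input {
  stdin {}
}
output {
  redis {
    host => "192.168.9.77"    //the Redis server
    port => "6379"
    data_type => "list"       //events are pushed onto a list
    key => "logstash"         //the list key to write to
  }
}
# vim /etc/logstash/conf.d/redis-in.conf //on the indexer
input {
  redis {
    host => "192.168.9.77"
    port => "6379"
    data_type => "list"       //pop events off the same list
    key => "logstash"
  }
}
output {
  stdout {
    codec => rubydebug
  }
}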