Design plan

ELK: buffering nginx access and error logs with redis

2020-03-23  沙砾丶ye

Environment

Four CentOS 7 servers:
192.168.0.4 (elasticsearch, kibana, jdk)
192.168.0.5 (logstash, jdk)
192.168.0.6 (filebeat, nginx; the web server)
192.168.0.7 (redis)

elasticsearch(192.168.0.4)

You can also run an elasticsearch cluster here, depending on your needs.

1. Java environment (omitted)
2. Install elasticsearch

[root@elk ~]# tar zxvf /usr/local/elasticsearch-6.5.4.tar.gz -C /usr/local/ 
[root@elk ~]# vim /usr/local/elasticsearch-6.5.4/config/elasticsearch.yml
cluster.name: bjbpe01-elk
node.name: elk01
node.master: true                        # eligible to be elected master
node.data: true                          # holds data
path.data: /data/elasticsearch/data
path.logs: /data/elasticsearch/logs
bootstrap.memory_lock: false
bootstrap.system_call_filter: false
network.host: 0.0.0.0                    # listen on all interfaces
http.port: 9200
#discovery.zen.ping.unicast.hosts: ["172.16.244.26", "172.16.244.27"]
#discovery.zen.minimum_master_nodes: 2
#discovery.zen.ping_timeout: 150s
#discovery.zen.fd.ping_retries: 10
#client.transport.ping_timeout: 60s
http.cors.enabled: true                  # allow cross-origin requests (needed by the head plugin)
http.cors.allow-origin: "*"

Create the data/log storage directories and fix their ownership (this assumes the elsearch user already exists; create it first with useradd elsearch if it does not):

[root@elk ~]# mkdir -p /data/elasticsearch/{data,logs}
[root@elk ~]# chown -R elsearch:elsearch /data/elasticsearch
[root@elk ~]# chown -R elsearch:elsearch /usr/local/elasticsearch-6.5.4

System tuning

[root@elk ~]# vim /etc/security/limits.conf   # add the following
* soft nofile 65536
* hard nofile 131072
* soft nproc 2048
* hard nproc 4096
[root@elk ~]# vim /etc/sysctl.conf   # add the following
vm.max_map_count=262144
vm.swappiness=0
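
Apply the kernel parameters immediately (the limits.conf changes take effect on the next login):

[root@elk ~]# sysctl -p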

Start elasticsearch

[root@elk ~]# su - elsearch -c "cd /usr/local/elasticsearch-6.5.4 && nohup bin/elasticsearch &"
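
After a few seconds elasticsearch should answer on port 9200; a quick check:

[root@elk ~]# curl http://192.168.0.4:9200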

3. Install the head plugin

[root@elk ~]# wget https://npm.taobao.org/mirrors/node/latest-v4.x/node-v4.4.7-linux-x64.tar.gz
[root@elk ~]# tar -zxf node-v4.4.7-linux-x64.tar.gz -C /usr/local
[root@elk ~]# vim /etc/profile
NODE_HOME=/usr/local/node-v4.4.7-linux-x64
PATH=$NODE_HOME/bin:$PATH
export NODE_HOME PATH
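
Reload the profile and make sure node is on the PATH:

[root@elk ~]# source /etc/profile
[root@elk ~]# node -v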

Download the head plugin

[root@elk ~]# wget https://github.com/mobz/elasticsearch-head/archive/master.zip
[root@elk ~]# unzip -d /usr/local master.zip   # the archive extracts to elasticsearch-head-master

Install grunt

[root@elk ~]# cd /usr/local/elasticsearch-head-master
[root@elk ~]# npm install -g grunt-cli
[root@elk ~]# grunt --version  # check the grunt version

Download the files head needs (phantomjs)

[root@elk ~]# wget https://github.com/Medium/phantomjs/releases/download/v2.1.1/phantomjs-2.1.1-linux-x86_64.tar.bz2
[root@elk ~]# yum -y install bzip2
[root@elk ~]# tar -jxf phantomjs-2.1.1-linux-x86_64.tar.bz2 -C /tmp/

Run head

[root@elk ~]# cd /usr/local/elasticsearch-head-master/
[root@elk ~]# npm install
[root@elk ~]# nohup grunt server &

Test access (http://192.168.0.4:9100)



4. Install kibana

[root@elk ~]# tar zxf kibana-6.5.4-linux-x86_64.tar.gz -C /usr/local/   # install
[root@elk ~]# vim /usr/local/kibana-6.5.4-linux-x86_64/config/kibana.yml   # edit the config
server.port: 5601
server.host: "192.168.0.4"
elasticsearch.url: "http://192.168.0.4:9200"
kibana.index: ".kibana"
[root@elk ~]# nohup /usr/local/kibana-6.5.4-linux-x86_64/bin/kibana &
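
Kibana takes a moment to start; once it is up, port 5601 should respond (a quick check, assuming curl is installed):

[root@elk ~]# curl -I http://192.168.0.4:5601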

redis(192.168.0.7)

This must be a single redis node; a redis cluster is not supported here (the logstash redis input pops from a single list).

Redis can also be installed with yum (yum -y install redis), but here we build 4.0.9 from source, which the paths below assume:

[root@redis ~]# wget http://download.redis.io/releases/redis-4.0.9.tar.gz
[root@redis ~]# tar xzf redis-4.0.9.tar.gz -C /usr/local
[root@redis ~]# cd /usr/local/redis-4.0.9
[root@redis ~]# make && make install

Edit the configuration file

[root@redis ~]# vim redis.conf   # change the following
bind 0.0.0.0                     # listen on all IPs
daemonize yes                    # run as a daemon (change no to yes)
timeout 300                      # client idle timeout
port 6379                        # port
requirepass 123456               # set a password
dir /usr/local/redis-4.0.9       # directory for persisted data; must already exist
pidfile /var/run/redis_6379.pid  # pid file
logfile /var/log/redis.log       # log file

启动redis

[root@redis ~]# /usr/local/redis-4.0.9/src/redis-server /usr/local/redis-4.0.9/redis.conf
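
Verify that redis is up and the password works:

[root@redis ~]# redis-cli -h 192.168.0.7 -a 123456 ping
PONG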

filebeat(192.168.0.6)

Switch the nginx access log to JSON format (the log_format goes in the http block of nginx.conf)

   log_format access_json_log  '{"@timestamp":"$time_local",'
                                  '"http_host":"$http_host",'
                                  '"clinetip":"$remote_addr",'
                                  '"request":"$request",'
                                  '"status":"$status",'
                                  '"size":"$body_bytes_sent",'
                                  '"upstream_addr":"$upstream_addr",'
                                  '"upstream_status":"$upstream_status",'
                                  '"upstream_response_time":"$upstream_response_time",'
                                  '"request_time":"$request_time",'
                                  '"http_referer":"$http_referer",'
                                  '"http_user_agent":"$http_user_agent",'
                                  '"http_x_forwarded_for":"$http_x_forwarded_for"}';

    access_log  /var/log/nginx/access.log  access_json_log;
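
Check the syntax and reload nginx so the new log format takes effect:

[root@filebeat ~]$ nginx -t
[root@filebeat ~]$ nginx -s reload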

Install filebeat (use the linux-x86_64 build on CentOS, not the darwin one)

[root@filebeat ~]$ wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-6.5.4-linux-x86_64.tar.gz
[root@filebeat ~]$ tar -zxvf filebeat-6.5.4-linux-x86_64.tar.gz -C /usr/local
[root@filebeat ~]$ mv /usr/local/filebeat-6.5.4-linux-x86_64 /usr/local/filebeat

Edit the configuration file
Configure Filebeat to collect the local log file and ship it to redis.

[root@filebeat ~]$ vim /usr/local/filebeat/filebeat.yml
filebeat.inputs:
- type: log       # input type: log
  enabled: true   # enable this input
  paths:          # paths to collect
    - /var/log/nginx/access.log
  json.keys_under_root: true   # lift the parsed JSON fields to the top level
  json.add_error_key: true     # add an error key when JSON parsing fails
  tags: ["access"]             # tag these events

setup.kibana:
  host: "192.168.0.4:5601"   # kibana address and port

output.redis:
  hosts: ["192.168.0.7"]   # single redis node
  port: 6379
  password: "123456"    # redis password
  key: "filebeat"   # redis key to write to; must match the key logstash reads from
  db: 0             # redis database number

Start filebeat

[root@filebeat ~]$ /usr/local/filebeat/filebeat -e -c /usr/local/filebeat/filebeat.yml
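
The -e flag keeps filebeat in the foreground and logs to stderr; once the config is verified, it can run in the background instead:

[root@filebeat ~]$ nohup /usr/local/filebeat/filebeat -c /usr/local/filebeat/filebeat.yml >/dev/null 2>&1 &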

logstash(192.168.0.5)

[root@logstash ~]# tar zxf /usr/local/package/logstash-6.5.4.tar.gz -C /usr/local/
[root@logstash ~]# mkdir -p /usr/local/logstash-6.5.4/etc/conf.d   # directory for the pipeline configs

Create the config

[root@logstash ~]# vim /usr/local/logstash-6.5.4/etc/conf.d/nginx.conf
input {
        redis {
                host => "192.168.0.7"    # redis address
                port => "6379"
                password => "123456"
                data_type => "list"
                db => 0
                type => "log"
                key => "filebeat"        # key in redis; matches the filebeat output
        }
}
filter {
        mutate {
                # convert the nginx/php latency fields to floats so they can be sorted
                convert => ["upstream_response_time","float"]
                convert => ["request_time","float"]
        }
}
output {
        stdout {
                codec => rubydebug       # stdout output for debugging
        }
        elasticsearch {
                hosts => "192.168.0.4:9200"
                index => "nginx-access-%{+YYYY.MM.dd}"
        }
}
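
Start logstash with this pipeline (run it in the foreground first so the rubydebug output is visible):

[root@logstash ~]# /usr/local/logstash-6.5.4/bin/logstash -f /usr/local/logstash-6.5.4/etc/conf.d/nginx.conf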

Let's take a look at the key in redis.
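
A quick way to inspect it from redis-cli (the key name is the one set in the filebeat output; the list may be short or empty if logstash is already draining it):

[root@redis ~]# redis-cli -h 192.168.0.7 -a 123456 keys '*'
[root@redis ~]# redis-cli -h 192.168.0.7 -a 123456 llen filebeat   # number of queued events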


The above collects only nginx access.log.
A simple diagram of this setup:


[diagram: nginx/filebeat -> redis (key: filebeat) -> logstash -> elasticsearch]

The following section uses redis to collect both access.log and error.log at the same time.

A simple architecture diagram:


[diagram: nginx/filebeat -> redis (keys: nginx_access, nginx_error) -> logstash -> elasticsearch]

Modify the filebeat configuration file

[root@filebeat ~]$ vim /usr/local/filebeat/filebeat.yml
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/nginx/access.log
  json.keys_under_root: true
  json.add_error_key: true
  tags: ["access"]

- type: log
  enabled: true
  paths:
    - /var/log/nginx/error.log
  tags: ["error"]

setup.kibana:
  host: "192.168.0.4:5601"

output.redis:
  hosts: ["192.168.0.7"]
  port: 6379
  password: "123456"
  keys:
    - key: "nginx_access"     # access events go to this list
      when.contains:
        tags: "access"
    - key: "nginx_error"      # error events go to this list
      when.contains:
        tags: "error"

Start filebeat again, then check the keys in redis, as shown below


[screenshot: redis now contains the two keys nginx_access and nginx_error]
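
The same check from the command line:

[root@redis ~]# redis-cli -h 192.168.0.7 -a 123456 keys 'nginx_*'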

Modify the logstash configuration file

input { 
        redis { 
                host => "192.168.0.7"
                port => "6379"
                password => "123456"
                data_type => "list"
                db => 0
                key => "nginx_access"
        }
        redis { 
                host => "192.168.0.7"
                port => "6379"
                password => "123456"
                data_type => "list"
                db => 0
                key => "nginx_error"
        }
}
filter {
        mutate {
                convert => ["upstream_response_time","float"]
                convert => ["request_time","float"]
        }
}
output {
        stdout {
                codec => rubydebug
        }
        if "access" in [tags] {
                elasticsearch {
                        hosts => "192.168.0.4:9200"
                        index => "nginx-access-%{+YYYY.MM.dd}"
                }
        }
        if "error" in [tags] {
                elasticsearch {
                        hosts => "192.168.0.4:9200"
                        index => "nginx-error-%{+YYYY.MM.dd}"
                }
        }
}

Generate logs with ab load tests

ab -n 1000 -c 500 http://192.168.0.6/        # generates access.log entries
ab -n 1000 -c 500 http://192.168.0.6/haha    # 404s, generates error.log entries
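
Confirm that the daily indices have been created in elasticsearch:

[root@logstash ~]# curl 'http://192.168.0.4:9200/_cat/indices?v' | grep nginx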

View the logs in Kibana

[screenshot: Kibana Discover showing the nginx-access-* and nginx-error-* indices]

View them in elasticsearch head

[screenshot: the head plugin listing the new nginx indices]

In fact, both the filebeat and logstash configs can be simplified further: the output routing above is driven by the two different tags, so there is no need to store the two logs in two separate redis keys; everything can go into a single key.

The optimized filebeat configuration

filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/nginx/access.log
  json.keys_under_root: true
  json.add_error_key: true
  tags: ["access"]

- type: log
  enabled: true
  paths:
    - /var/log/nginx/error.log
  tags: ["error"]

setup.kibana:
  host: "192.168.0.4:5601"

output.redis:
  hosts: ["192.168.0.7"]
  port: 6379
  password: "123456"
  key: "nginx"

The optimized logstash configuration

input {
        redis {
                host => "192.168.0.7"
                port => "6379"
                password => "123456"
                data_type => "list"
                db => 0
                key => "nginx"
        }
}
filter {
        mutate {
                convert => ["upstream_response_time","float"]
                convert => ["request_time","float"]
        }
}
output {
        stdout {
                codec => rubydebug
        }
        if "access" in [tags] {
                elasticsearch {
                        hosts => "192.168.0.4:9200"
                        index => "nginx-access-%{+YYYY.MM.dd}"
                }
        }
        if "error" in [tags] {
                elasticsearch {
                        hosts => "192.168.0.4:9200"
                        index => "nginx-error-%{+YYYY.MM.dd}"
                }
        }
}
