
Collecting Docker Swarm logs with ELK

2018-11-21  会动的木头疙瘩儿

Source: http://dockone.io/article/2252
https://github.com/deviantony/docker-elk

This article deploys ELK on top of an existing Docker Swarm environment.

The log collection flow

A typical ELK log collection flow in a Dockerized environment looks like this:


(Figure: typical ELK log collection flow in a Dockerized environment; image from the web)

Logstash pulls logs from the various Docker containers and hosts. The main advantage of this flow is that logs can be parsed properly with filters. Logstash forwards the logs to Elasticsearch for indexing, and Kibana analyzes and visualizes the data.

Elasticsearch is a document database that manages its indices via mmap. mmap is a way of memory-mapping files: a file (or other object) is mapped into a process's address space, establishing a one-to-one correspondence between the file's on-disk addresses and a range of virtual addresses in the process. Once this mapping exists, the process can read and write the file through pointers, just as if it were operating on memory. mmap maps in page-sized units, and the operating system limits the number of mapped pages. The default limit is too small: Elasticsearch requires at least 262144, and it will refuse to start below that value. Run the following command on each of the three nodes to raise the limit temporarily:

sysctl -w vm.max_map_count=262144
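The setting above does not survive a reboot. To make it permanent, one common approach is to drop the value into a sysctl configuration file (the file name under /etc/sysctl.d is an arbitrary choice, not from the original article):

```shell
# Persist vm.max_map_count across reboots; run on every swarm node.
echo "vm.max_map_count=262144" | sudo tee /etc/sysctl.d/99-elasticsearch.conf
# Reload all sysctl configuration so the value takes effect immediately.
sudo sysctl --system
```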

The original docker-stack.yml:

version: '3.3'

services:

  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch-oss:6.4.0
    ports:
      - "9200:9200"
      - "9300:9300"
    configs:
      - source: elastic_config
        target: /usr/share/elasticsearch/config/elasticsearch.yml
    environment:
      ES_JAVA_OPTS: "-Xmx256m -Xms256m"
    networks:
      - elk
    deploy:
      mode: replicated
      replicas: 1

  logstash:
    image: docker.elastic.co/logstash/logstash-oss:6.4.0
    ports:
      - "5000:5000"
      - "9600:9600"
    configs:
      - source: logstash_config
        target: /usr/share/logstash/config/logstash.yml
      - source: logstash_pipeline
        target: /usr/share/logstash/pipeline/logstash.conf
    environment:
      LS_JAVA_OPTS: "-Xmx256m -Xms256m"
    networks:
      - elk
    deploy:
      mode: replicated
      replicas: 1

  kibana:
    image: docker.elastic.co/kibana/kibana-oss:6.4.0
    ports:
      - "5601:5601"
    configs:
      - source: kibana_config
        target: /usr/share/kibana/config/kibana.yml
    networks:
      - elk
    deploy:
      mode: replicated
      replicas: 1

configs:

  elastic_config:
    file: ./elasticsearch/config/elasticsearch.yml
  logstash_config:
    file: ./logstash/config/logstash.yml
  logstash_pipeline:
    file: ./logstash/pipeline/logstash.conf
  kibana_config:
    file: ./kibana/config/kibana.yml

networks:
  elk:
    driver: overlay

Pulling the images as described in the original article may time out. I downloaded them myself, retagged them, and pushed them to a private registry. The stack above does not include a log collector, so I will use logspout to collect logs. Below is my modified version of the original file with logspout added. The logspout container should have its resource usage limited, because during traffic peaks the logging container can consume a large amount of resources.

version: '3.3'

services:

  elasticsearch:
    image: 172.16.10.192:5000/elasticsearch:6.5.0
    ports:
      - "9200:9200"
      - "9300:9300"
    configs:
      - source: elastic_config
        target: /usr/share/elasticsearch/config/elasticsearch.yml
    environment:
      ES_JAVA_OPTS: "-Xmx256m -Xms256m"
    networks:
      - elk
    deploy:
      mode: replicated
      replicas: 1

  logstash:
    image: 172.16.10.192:5000/logstash:6.5.0
    ports:
      - "5000:5000"
      - "9600:9600"
    configs:
      - source: logstash_config
        target: /usr/share/logstash/config/logstash.yml
      - source: logstash_pipeline
        target: /usr/share/logstash/pipeline/logstash.conf
    environment:
      LS_JAVA_OPTS: "-Xmx256m -Xms256m"
    networks:
      - elk
    deploy:
      mode: replicated
      replicas: 1
  
  logspout:
    image: bekt/logspout-logstash
    environment:
      ROUTE_URIS: 'logstash+tcp://logstash:5000'
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock 
    depends_on:
      - logstash
    networks:
      - elk

    deploy:
      mode: global
      restart_policy:
        condition: on-failure
        delay: 30s


  kibana:
    image: 172.16.10.192:5000/kibana:6.5.0
    ports:
      - "5601:5601"
    configs:
      - source: kibana_config
        target: /usr/share/kibana/config/kibana.yml
    networks:
      - elk
    deploy:
      mode: replicated
      replicas: 1

configs:

  elastic_config:
    file: ./elasticsearch/config/elasticsearch.yml
  logstash_config:
    file: ./logstash/config/logstash.yml
  logstash_pipeline:
    file: ./logstash/pipeline/logstash.conf
  kibana_config:
    file: ./kibana/config/kibana.yml

networks:
  elk:
    driver: overlay
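As noted above, the logspout service should be resource-limited, but the compose file does not yet include any limits. A sketch of what that could look like with Compose v3 `deploy.resources` (the 0.50-CPU and 256M figures are placeholders to tune for your workload, not values from the original article):

```yaml
  logspout:
    # ...image, environment, volumes as in the file above...
    deploy:
      mode: global
      restart_policy:
        condition: on-failure
        delay: 30s
      resources:
        limits:
          cpus: '0.50'
          memory: 256M
```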

The images I pushed to the private registry:

REPOSITORY                                      TAG                 IMAGE ID            CREATED             SIZE
172.16.10.192:5000/logstash                     6.5.0               7d4604365acd        11 days ago         702MB
172.16.10.192:5000/kibana                       6.5.0               fcc1f039f61c        11 days ago         727MB
172.16.10.192:5000/elasticsearch                6.5.0               ff171d17e77c        11 days ago         774MB
docker.elastic.co/elasticsearch/elasticsearch   6.5.0               ff171d17e77c        11 days ago         774MB
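These tags were produced by pulling the official images, retagging them, and pushing them to the private registry. Sketched as a loop (the registry address is the one used throughout this article; official images live at docker.elastic.co/&lt;name&gt;/&lt;name&gt;):

```shell
REGISTRY=172.16.10.192:5000
VERSION=6.5.0

for img in elasticsearch logstash kibana; do
    docker pull "docker.elastic.co/$img/$img:$VERSION"
    docker tag  "docker.elastic.co/$img/$img:$VERSION" "$REGISTRY/$img:$VERSION"
    docker push "$REGISTRY/$img:$VERSION"
done
```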

I also modified logstash/pipeline/logstash.conf:

input {
    tcp {
        port => 5000
    }
}

## Add your filters / logstash plugins configuration here

filter {
    if [docker][image] =~ /logstash/ {
        drop {}
    }
}


output {
    elasticsearch {
        hosts => "elasticsearch:9200"
        index => "logstash-%{host}"
    }
}

input: Logstash listens on TCP port 5000 for incoming data (matching the logstash+tcp:// route that logspout uses).
filter: drops any event reported by a container instance whose image name matches logstash, so the stack does not collect its own output.
output: where the filtered data is shipped; here it is sent to Elasticsearch on port 9200 and written to an index named logstash-%{host}.
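The `## Add your filters` comment in the pipeline marks where additional parsing would go. A hypothetical extension in the same style (the nginx image match is purely for illustration, not part of the original setup):

```
filter {
    # Keep the original rule: drop the stack's own logstash events.
    if [docker][image] =~ /logstash/ {
        drop {}
    }
    # Hypothetical: tag events from nginx containers so they are
    # easy to filter in Kibana.
    if [docker][image] =~ /nginx/ {
        mutate { add_tag => ["nginx"] }
    }
}
```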

Deploying to Docker Swarm

Because the images come from a private registry, --with-registry-auth must be added:

docker stack deploy -c docker-stack.yml --with-registry-auth elk
[root@swarm-m docker-elk]# docker stack services elk
ID                  NAME                MODE                REPLICAS            IMAGE                                    PORTS
b2vqn2d78jw9        elk_kibana          replicated          1/1                 172.16.10.192:5000/kibana:6.5.0          *:5601->5601/tcp
bb0ko12i5267        elk_logstash        replicated          1/1                 172.16.10.192:5000/logstash:6.5.0        *:5000->5000/tcp, *:9600->9600/tcp
bw68dn5fe9zp        elk_logspout        global              3/3                 bekt/logspout-logstash:latest            
pv0lfryjmgxo        elk_elasticsearch   replicated          1/1                 172.16.10.192:5000/elasticsearch:6.5.0   *:9200->9200/tcp, *:9300->9300/tcp

Once the stack is up, open http://172.16.10.85:5601 in a browser.
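Before creating an index pattern, it can be worth confirming that Elasticsearch is actually receiving documents (the host address is the swarm node used above; any node works, since the ports are published on the swarm ingress network):

```shell
# List all indices; logspout-fed data should appear as logstash-<host>.
curl -s 'http://172.16.10.85:9200/_cat/indices?v'
# Spot-check a few documents from the matching indices.
curl -s 'http://172.16.10.85:9200/logstash-*/_search?size=3&pretty'
```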


Enter logstash-* as the index pattern and click Next step.


After clicking Create index pattern, open Discover from the menu to browse the collected logs.


I forked the repo and applied these changes: https://github.com/liangxiaobo/docker-elk

References
docker swarm集群日志管理ELK实战
Logstash 基础入门
https://github.com/looplab/logspout-logstash
