Detailed containerized ELK deployment + collecting logs with Flume
2022-07-26
断水流大师兄vs魔鬼筋肉人
Deploy Elasticsearch
docker pull elasticsearch:7.13.2
mkdir -p /data/elk/es/{data,config,plugins}
echo "http.host: 0.0.0.0" > /data/elk/es/config/elasticsearch.yml
chmod -R 777 /data/elk/es   (optional)
docker run --name es -p 9200:9200 -p 9300:9300 -e "discovery.type=single-node" \
-e ES_JAVA_OPTS="-Xms500m -Xmx500m" \
-v /data/elk/es/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml \
-v /data/elk/es/data:/usr/share/elasticsearch/data \
-v /data/elk/es/plugins:/usr/share/elasticsearch/plugins -d elasticsearch:7.13.2
Explanation:
-e "discovery.type=single-node": run in single-node mode
-e ES_JAVA_OPTS="-Xms500m -Xmx500m": set the JVM heap size
Once deployment succeeds, access it via a local hosts entry or directly by IP; on a cloud host, remember to open the port (9200).
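A quick sanity check from the host (the docker0 address 172.17.0.1 is used below; substitute your own server IP if needed):
curl http://172.17.0.1:9200
# should return a JSON document containing the cluster name and "number" : "7.13.2"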
Deploy Kibana
The docker0 bridge connects the host network and the container networks. Everything here runs on the same host, so later containers can reach other containers (such as Elasticsearch) via 172.17.0.1 (the docker0 gateway address) plus the service port.
docker run -p 5601:5601 -d -e ELASTICSEARCH_URL=http://172.17.0.1:9200 \
-e ELASTICSEARCH_HOSTS=http://172.17.0.1:9200 kibana:7.13.2
The Kibana page is now accessible via IP + port (on a cloud host, open port 5601).
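A quick check against Kibana's status API (same host IP assumption as above):
curl http://172.17.0.1:5601/api/status
# returns a JSON status document; the overall state should be green once Kibana is fully up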
Set up Kafka
Prerequisite: ZooKeeper
docker run -d --name zookeeper -p 2181:2181 -v /etc/localtime:/etc/localtime zookeeper
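To verify ZooKeeper is answering (srvr is whitelisted by default among the four-letter-word commands; requires nc on the host):
echo srvr | nc 172.17.0.1 2181
# prints the ZooKeeper version, mode (standalone) and connection counts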
Deploy Kafka:
docker run -d --name kafka -p 9092:9092 -e KAFKA_BROKER_ID=0 \
-e KAFKA_ZOOKEEPER_CONNECT=172.17.0.1:2181 \
-e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://172.17.0.1:9092 \
-e KAFKA_LISTENERS=PLAINTEXT://0.0.0.0:9092 -t wurstmeister/kafka
#KAFKA_BROKER_ID: the Kafka node ID; must be unique per node when running a cluster
#KAFKA_ZOOKEEPER_CONNECT: the ZooKeeper address that manages Kafka (internal IP)
#KAFKA_ADVERTISED_LISTENERS: the address and port Kafka registers with ZooKeeper for clients to use
#KAFKA_LISTENERS: the address and port Kafka listens on
Connect with Kafka Tool to check that Kafka is reachable. Download: https://www.kafkatool.com/download.html
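Alternatively, check from the CLI inside the container (the bin path matches the wurstmeister image used here, see the find output further below; adjust if your Kafka version differs). The logkafka topic is normally auto-created on first write, so creating it explicitly is optional:
docker exec -it kafka /opt/kafka_2.13-2.8.1/bin/kafka-topics.sh --bootstrap-server 172.17.0.1:9092 --list
docker exec -it kafka /opt/kafka_2.13-2.8.1/bin/kafka-topics.sh --bootstrap-server 172.17.0.1:9092 --create --topic logkafka --partitions 1 --replication-factor 1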
Deploy Logstash
mkdir /data/elk/logstash
This directory holds the two config files: logstash.yml and logstash.conf.
[root@mayi-2 elk]# cat /data/elk/logstash/logstash.yml
http.host: "0.0.0.0"
xpack.monitoring.elasticsearch.hosts: [ "http://172.17.0.1:9200" ]
[root@mayi-2 elk]# cat /data/elk/logstash/logstash.conf
input {
  kafka {
    topics => ["logkafka"]
    bootstrap_servers => "172.17.0.1:9092"
    codec => "json"
  }
}
output {
  elasticsearch {
    hosts => ["172.17.0.1:9200"]
    index => "logkafka"
    #user => "elastic"
    #password => "changeme"
  }
}
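Optionally, the pipeline syntax can be validated before starting the real container, with a throwaway run using Logstash's --config.test_and_exit flag (a minimal sketch; it only checks the config syntax, it does not connect to Kafka or Elasticsearch):
docker run --rm -v /data/elk/logstash/logstash.conf:/usr/share/logstash/pipeline/logstash.conf \
logstash:7.13.2 logstash -f /usr/share/logstash/pipeline/logstash.conf --config.test_and_exit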
Start the container, mounting the local config files into it:
docker run -d --privileged=true -p 9600:9600 \
-v /data/elk/logstash/logstash.conf:/usr/share/logstash/pipeline/logstash.conf \
-v /data/elk/logstash/log/:/home/public/ \
-v /data/elk/logstash/logstash.yml:/usr/share/logstash/config/logstash.yml logstash:7.13.2
Test that Logstash is reachable.
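For example, hit the Logstash monitoring API on the published port (same host IP assumption as before):
curl http://172.17.0.1:9600/?pretty
# returns node info such as the Logstash version (7.13.2) and the HTTP address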
Test from inside Kafka:
Enter the container:
docker exec -it kafka /bin/bash
Locate the Kafka CLI scripts:
root@30085794fa39:/# find ./* -name 'kafka-console-producer.sh'
./opt/kafka_2.13-2.8.1/bin/kafka-console-producer.sh
Write test data to the topic:
cd /opt/kafka_2.13-2.8.1/bin
./kafka-console-producer.sh --broker-list 172.17.0.1:9092 --topic logkafka
Type some test messages at the producer prompt.
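Since the Logstash Kafka input above uses codec => "json", it is best to send JSON lines; a plain-text line is still indexed, but tagged with _jsonparsefailure. A sample line to paste (the field names are arbitrary):
{"app":"demo","level":"INFO","message":"hello from kafka"}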
In Kibana, under Stack Management → Index Management, you can see the index named after the topic and the 4 documents that were produced.
Create an index pattern
The index pattern name must match the prefix of the index, e.g. log*, logka* or logkafka*; then choose the time field for the pattern.
In Discover you can see the data that was just written.
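You can also query Elasticsearch directly to confirm the documents landed in the index (standard search API; adjust the IP to your host):
curl 'http://172.17.0.1:9200/logkafka/_search?pretty'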
Deploy Flume:
Create the corresponding config file.
Directory layout:
mkdir -p /data/elk/flume/{logs,conf,flume_log}
Config file:
[root@mayi-2 elk]# cat flume/conf/los-flume-kakfa.conf
app.sources = r1
app.channels = c1
# Describe/configure the source
app.sources.r1.type = exec
app.sources.r1.command = tail -F /tmp/test_logs/app.log
app.sources.r1.channels = c1
# Use a channel which buffers events in KafkaChannel
# Set the type of channel c1 to a KafkaChannel
app.channels.c1.type = org.apache.flume.channel.kafka.KafkaChannel
# Kafka brokers in the cluster, comma-separated
app.channels.c1.kafka.bootstrap.servers = 172.17.0.1:9092
# Kafka topic used by channel c1
app.channels.c1.kafka.topic = logkafka
# Do not parse messages as Flume events, since the same Kafka topic may also receive non-Flume-event data
app.channels.c1.parseAsFlumeEvent = false
# Consumer group, so that consumption resumes from the last committed offset
app.channels.c1.kafka.consumer.group.id = logkafka-consumer-group
# Timeout (ms) for poll() while consuming
app.channels.c1.pollTimeout = 1000
Start the container:
docker run --name flume --net=host \
-v /data/elk/flume/conf:/opt/flume-config/flume.conf \
-v /data/elk/flume/flume_log:/var/tmp/flume_log \
-v /data/elk/flume/logs:/opt/flume/logs \
-v /tmp/test_logs/:/tmp/test_logs/ \
-e FLUME_AGENT_NAME="agent" \
-d docker.io/probablyfine/flume:latest
Enter the container and start Flume:
docker exec -it flume bash
cd /opt/flume/bin/
nohup ./flume-ng agent -c /opt/flume/conf -f /opt/flume-config/flume.conf/los-flume-kakfa.conf -n app &
Write some test content to the tailed log file.
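For example, append a JSON line from the host to the file that the exec source tails (/tmp/test_logs/ is bind-mounted into the container; the field names are arbitrary):
echo '{"app":"demo","level":"INFO","message":"hello from flume"}' >> /tmp/test_logs/app.log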
The log line then shows up in Kibana.