
Flume (3): Integrating with Kafka

2020-04-28 · 勇于自信
1. Kafka command-line operations

I. Standalone mode
1. Start the broker:
]# ./bin/kafka-server-start.sh config/server.properties
2. List topics:
]# ./bin/kafka-topics.sh --list --zookeeper localhost:2181
3. Create a topic:
]# ./bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic newyear_test
4. Describe a topic:
]# ./bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic newyear_test
5. Send data with the console producer:
]# ./bin/kafka-console-producer.sh --broker-list localhost:9092 --topic newyear_test
6. Receive data with the console consumer:
]# ./bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic newyear_test --from-beginning
7. Delete a topic (only takes effect if delete.topic.enable=true on the broker; otherwise the topic is merely marked for deletion):
]# ./bin/kafka-topics.sh --delete --zookeeper localhost:2181 --topic newyear_test
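For a quick end-to-end check, the producer and consumer can also be driven non-interactively. A minimal sketch, assuming the standalone broker is running and the newyear_test topic exists:

]# echo "hello kafka" | ./bin/kafka-console-producer.sh --broker-list localhost:9092 --topic newyear_test
]# ./bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic newyear_test --from-beginning --max-messages 1

The consumer prints the message and exits after consuming one record.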
II. Cluster mode
broker.id must be set to a different value on slave1 and slave2.
Start the broker process on slave1 and slave2 respectively (a config sketch follows this section):
]# ./bin/kafka-server-start.sh config/server.properties
Create a topic (a replication factor of 3 requires at least three brokers):
]# ./bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 3 --partitions 5 --topic newyear_many_test
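One way to give each broker a unique id is to edit config/server.properties on each host; a minimal sketch using sed (the values 1 and 2 are illustrative, any distinct integers work):

]# sed -i 's/^broker.id=.*/broker.id=1/' config/server.properties   # run on slave1
]# sed -i 's/^broker.id=.*/broker.id=2/' config/server.properties   # run on slave2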

2. Connecting Flume to Kafka

2.1 Create ./conf/kafka_test/flume_kafka.conf

# Name the components on this agent
a1.sources = r1
a1.sinks = k1
a1.channels = c1

# Describe/configure the source
a1.sources.r1.type = exec
a1.sources.r1.command = tail -f /home/dong/flume_test/1.log

#a1.sources.r1.type = http
#a1.sources.r1.host = master
#a1.sources.r1.port = 52020

# The UUID interceptor stamps each event with a unique id in the "key" header;
# the Kafka sink uses that header as the message key. The class ships with the
# flume-ng-morphline-solr-sink jar, which must be on the Flume classpath.
a1.sources.r1.interceptors = i1
a1.sources.r1.interceptors.i1.type = org.apache.flume.sink.solr.morphline.UUIDInterceptor$Builder
a1.sources.r1.interceptors.i1.headerName = key
a1.sources.r1.interceptors.i1.preserveExisting = false

# Describe the sink
a1.sinks.k1.type = org.apache.flume.sink.kafka.KafkaSink
a1.sinks.k1.brokerList  = master:9092
#a1.sinks.k1.topic = badou_flume_kafka_test
#a1.sinks.k1.topic = badou_storm_kafka_test
a1.sinks.k1.topic = topic_1013

# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
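Note that brokerList and topic are the property names understood by the Flume 1.6-era Kafka sink. On Flume 1.7 and later the sink instead reads kafka.bootstrap.servers and kafka.topic; a sketch of the equivalent sink section:

a1.sinks.k1.type = org.apache.flume.sink.kafka.KafkaSink
a1.sinks.k1.kafka.bootstrap.servers = master:9092
a1.sinks.k1.kafka.topic = topic_1013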

2.2 Start Flume:
]# flume-ng agent -c conf -f ./conf/kafka_test/flume_kafka.conf -n a1 -Dflume.root.logger=INFO,console
Startup succeeds:
[Figure: console output of the Flume agent starting successfully]
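A quick way to confirm the agent process is alive (any equivalent check works):

]# ps -ef | grep '[f]lume'

The bracketed pattern keeps grep from matching its own command line.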
3. Testing

3.1 Append data to the file Flume is tailing (the path must match the exec source's command):
]# for i in $(seq 1 100); do echo '====> '$i >> /home/dong/flume_test/1.log; done
3.2 Consume the data from Kafka:
]# ./bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic topic_1013 --from-beginning
The consumption result:
[Figure: consumer output showing lines ====> 1 through ====> 100]
At this point, the data has been consumed successfully!
As a supplement, test data can also be pushed to Flume over HTTP instead of via the tailed file (see the config sketch below); the test data is then generated with:
]# curl -X POST -d '[{"headers":{"flume":"flume is very easy!"}, "body":"111"}]' http://master:52020
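This requires enabling the commented-out HTTP source in ./conf/kafka_test/flume_kafka.conf and disabling the exec source, then restarting the agent; a minimal sketch of the change:

#a1.sources.r1.type = exec
#a1.sources.r1.command = tail -f /home/dong/flume_test/1.log
a1.sources.r1.type = http
a1.sources.r1.host = master
a1.sources.r1.port = 52020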