Kafka Single-Node Installation and Deployment
1. Environment Preparation
- kafka-2.2.1-kafka4.1.0.tar.gz
- A working ZooKeeper installation (see the quick check after this list)
- kafka-eagle-bin-1.3.9.tar.gz
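Before installing, it is worth confirming that the JDK and ZooKeeper are actually available. A minimal check, assuming ZooKeeper's bin directory is already on the PATH:
[hadoop@hadoop000 ~]$ java -version
[hadoop@hadoop000 ~]$ zkServer.sh status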
2. Installation and Deployment
- Extract the installation package
[hadoop@hadoop000 software]$ tar -zxvf kafka-2.2.1-kafka4.1.0.tar.gz -C ../app/
[hadoop@hadoop000 app]$ ln -s kafka_2.11-2.2.1-kafka-4.1.0 kafka
- Configure environment variables
[hadoop@hadoop000 ~]$ vi ~/.bash_profile
export KAFKA_HOME=/home/hadoop/app/kafka
export PATH=${KAFKA_HOME}/bin:${PATH}
[hadoop@hadoop000 ~]$ source ~/.bash_profile
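To confirm that the variables took effect, a quick check (the expected output follows the layout used above):
[hadoop@hadoop000 ~]$ echo $KAFKA_HOME
/home/hadoop/app/kafka
[hadoop@hadoop000 ~]$ which kafka-server-start.sh
/home/hadoop/app/kafka/bin/kafka-server-start.sh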
- Edit the Kafka configuration file
[hadoop@hadoop000 config]$ vi server.properties
# broker identity and the listener address advertised to clients
broker.id=0
host.name=hadoop000
port=9092
advertised.listeners=PLAINTEXT://hadoop000:9092
# data directory and ZooKeeper connection (note the /kafka chroot)
log.dirs=/home/hadoop/log/kafka-logs
zookeeper.connect=hadoop000:2181/kafka
# disable auto topic creation, unclean leader election and auto rebalance; allow topic deletion
auto.create.topics.enable=false
unclean.leader.election.enable=false
auto.leader.rebalance.enable=false
delete.topic.enable=true
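Before the first start it does no harm to create the data directory and confirm ZooKeeper is reachable. A minimal check, assuming the paths and addresses configured above (the ruok four-letter command works out of the box on ZooKeeper 3.4.x; newer versions may require it to be whitelisted):
[hadoop@hadoop000 ~]$ mkdir -p /home/hadoop/log/kafka-logs
[hadoop@hadoop000 ~]$ echo ruok | nc hadoop000 2181
imok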
- Start and stop
[hadoop@hadoop000 bin]$ vi kafka-server-start.sh
if [ "x$KAFKA_HEAP_OPTS" = "x" ]; then
# export KAFKA_HEAP_OPTS="-Xmx1G -Xms1G"
export KAFKA_HEAP_OPTS="-server -Xms2G -Xmx2G -XX:PermSize=128m -XX:+UseG1GC -XX:MaxGCPauseMillis=200 -XX:ParallelGCThreads=8 -XX:
ConcGCThreads=5 -XX:InitiatingHeapOccupancyPercent=70"
export JMX_PORT="9999"
fi
# start
[hadoop@hadoop000 kafka]$ bin/kafka-server-start.sh -daemon config/server.properties
# stop
[hadoop@hadoop000 kafka]$ bin/kafka-server-stop.sh
# if stopping reports "No kafka server to stop", adjust the PID lookup in the stop script
[hadoop@hadoop000 kafka]$ vi bin/kafka-server-stop.sh
#PIDS=$(ps ax | grep -i 'kafka\.Kafka' | grep java | grep -v grep | awk '{print $1}')
PIDS=$(ps ax | grep -i 'kafka\.wrap\.Kafka' | grep java | grep -v grep | awk '{print $1}')
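After starting, you can confirm that the broker is actually up. A minimal check, assuming the listener and JMX port configured above (netstat requires the net-tools package):
# the Kafka JVM should appear in jps
[hadoop@hadoop000 kafka]$ jps -l | grep -i kafka
# the broker should be listening on 9092, and JMX on 9999
[hadoop@hadoop000 kafka]$ netstat -tlnp 2>/dev/null | grep -E '9092|9999'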
3. Basic Kafka Operations
TODO......
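Until this section is filled in, a few common commands are worth noting. A minimal sketch, using a placeholder topic name test and the broker/ZooKeeper addresses configured above (Kafka 2.2 still accepts the --zookeeper form for the topic tools; --bootstrap-server also works):
# create a topic explicitly (auto-creation was disabled in server.properties)
[hadoop@hadoop000 kafka]$ bin/kafka-topics.sh --create --zookeeper hadoop000:2181/kafka --replication-factor 1 --partitions 1 --topic test
# list topics
[hadoop@hadoop000 kafka]$ bin/kafka-topics.sh --list --zookeeper hadoop000:2181/kafka
# console producer and consumer
[hadoop@hadoop000 kafka]$ bin/kafka-console-producer.sh --broker-list hadoop000:9092 --topic test
[hadoop@hadoop000 kafka]$ bin/kafka-console-consumer.sh --bootstrap-server hadoop000:9092 --topic test --from-beginning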
4. Installing Kafka Eagle
- Extract the installation package
[hadoop@hadoop000 software]$ tar -zxvf kafka-eagle-bin-1.3.9.tar.gz -C ../app/
[hadoop@hadoop000 kafka-eagle-bin-1.3.9]$ tar -zxvf kafka-eagle-web-1.3.9-bin.tar.gz
[hadoop@hadoop000 kafka-eagle-web-1.3.9]$ ln -s /home/hadoop/app/kafka-eagle-bin-1.3.9/kafka-eagle-web-1.3.9 /home/hadoop/app/kafka-eagle
- Configure system-config.properties
[hadoop@hadoop000 conf]$ vi system-config.properties
######################################
# multi zookeeper&kafka cluster list
######################################
kafka.eagle.zk.cluster.alias=cluster1
cluster1.zk.list=hadoop000:2181
######################################
# zk client thread limit
######################################
kafka.zk.limit.size=25
######################################
# kafka eagle webui port
######################################
kafka.eagle.webui.port=8048
######################################
# kafka offset storage
######################################
cluster1.kafka.eagle.offset.storage=kafka
######################################
# enable kafka metrics
######################################
kafka.eagle.metrics.charts=true
kafka.eagle.sql.fix.error=true
######################################
# kafka sql topic records max
######################################
kafka.eagle.sql.topic.records.max=5000
######################################
# alarm email configure
######################################
kafka.eagle.mail.enable=false
kafka.eagle.mail.sa=alert_sa@163.com
kafka.eagle.mail.username=alert_sa@163.com
kafka.eagle.mail.password=mqslimczkdqabbbh
kafka.eagle.mail.server.host=smtp.163.com
kafka.eagle.mail.server.port=25
######################################
# alarm im configure
######################################
#kafka.eagle.im.dingding.enable=true
#kafka.eagle.im.dingding.url=https://oapi.dingtalk.com/robot/send?access_token=
#kafka.eagle.im.wechat.enable=true
#kafka.eagle.im.wechat.token=https://qyapi.weixin.qq.com/cgi-bin/gettoken?corpid=xxx&corpsecret=xxx
#kafka.eagle.im.wechat.url=https://qyapi.weixin.qq.com/cgi-bin/message/send?access_token=
#kafka.eagle.im.wechat.touser=
#kafka.eagle.im.wechat.toparty=
#kafka.eagle.im.wechat.totag=
#kafka.eagle.im.wechat.agentid=
######################################
# delete kafka topic token
######################################
kafka.eagle.topic.token=keadmin
######################################
# kafka sasl authenticate
######################################
cluster1.kafka.eagle.sasl.enable=false
cluster1.kafka.eagle.sasl.protocol=SASL_PLAINTEXT
cluster1.kafka.eagle.sasl.mechanism=PLAIN
cluster1.kafka.eagle.sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="admin" password="kafka-eagle";
cluster2.kafka.eagle.sasl.enable=false
cluster2.kafka.eagle.sasl.protocol=SASL_PLAINTEXT
cluster2.kafka.eagle.sasl.mechanism=PLAIN
cluster2.kafka.eagle.sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="admin" password="kafka-eagle";
######################################
# kafka jdbc driver address
######################################
#kafka.eagle.driver=org.sqlite.JDBC
#kafka.eagle.url=jdbc:sqlite:/hadoop/kafka-eagle/db/ke.db
#kafka.eagle.username=root
#kafka.eagle.password=www.kafka-eagle.org
kafka.eagle.driver=com.mysql.jdbc.Driver
kafka.eagle.url=jdbc:mysql://hadoop000:3306/ke?createDatabaseIfNotExist=true&useUnicode=true&characterEncoding=UTF-8&zeroDateTimeBehavior=convertToNull
kafka.eagle.username=root
kafka.eagle.password=123456
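Because the JDBC URL above uses createDatabaseIfNotExist=true, the ke database is created automatically on first start, but the MySQL account must already exist and be reachable from this host. A quick sanity check, assuming the credentials configured above:
[hadoop@hadoop000 ~]$ mysql -h hadoop000 -uroot -p123456 -e "SELECT VERSION();"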
- Start Kafka Eagle
[hadoop@hadoop000 kafka-eagle]$ vi ~/.bash_profile
export KE_HOME=/home/hadoop/app/kafka-eagle
export PATH=${KE_HOME}/bin:${PATH}
[hadoop@hadoop000 kafka-eagle]$ source ~/.bash_profile
[hadoop@hadoop000 bin]$ chmod +x ke.sh
The ke.sh script supports the following commands:
Usage: ke.sh {start|stop|restart|status|stats|find|gc|jdk}
Command | Description |
---|---|
ke.sh start | Start Kafka Eagle |
ke.sh stop | Stop Kafka Eagle |
ke.sh restart | Restart Kafka Eagle |
ke.sh status | Show the running status of Kafka Eagle |
ke.sh stats | Show the Linux resources used by Kafka Eagle |
ke.sh find [ClassName] | Check whether a class exists in Kafka Eagle |
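To bring the console up, start the service and check its status; the web UI should then be reachable on the port configured above (kafka.eagle.webui.port=8048). The default login for Kafka Eagle 1.3.x is reported as admin / 123456; change it after the first login.
[hadoop@hadoop000 kafka-eagle]$ bin/ke.sh start
[hadoop@hadoop000 kafka-eagle]$ bin/ke.sh status
# then open http://hadoop000:8048/ke in a browser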
- Important note
If the page returns HTTP 500 after startup and the log shows:
Caused by: java.lang.NoClassDefFoundError: Could not initialize class com.fasterxml.jackson.annotation.JsonInclude$Value
you need to rebuild from source: in kafka-eagle-web/pom.xml, change the jackson.version property from 2.9.6 to 2.4.5 and recompile:
<!-- before -->
<properties>
    ...
    <jackson.version>2.9.6</jackson.version>
</properties>
<!-- after -->
<properties>
    ...
    <jackson.version>2.4.5</jackson.version>
</properties>
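A minimal rebuild sketch, assuming a checkout of the Kafka Eagle source (the directory name kafka-eagle-src is a placeholder) and a working Maven installation:
# rebuild after editing kafka-eagle-web/pom.xml, then redeploy the resulting package
[hadoop@hadoop000 kafka-eagle-src]$ mvn clean package -DskipTests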