《日子》· Distributed Series Opener: ZooKeeper Cluster Installation
ZooKeeper cluster installation
(IP addresses are used as examples while testing the cluster installation; switch to hostnames once it goes to production.)
1) Extract the ZooKeeper archive
tar -zxvf zookeeper-3.4.5.tar.gz
2) Enter the zookeeper-3.4.5 folder and create data and log directories
Create the two directories and grant write permission on them.
They will serve as ZooKeeper's data directory and transaction-log directory.
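The commands for this step might look like the following sketch, assuming the /solrcloud install root used throughout this guide (adjust the path for your own layout):

```shell
# Sketch of step 2: create ZooKeeper's data and log directories.
# /solrcloud/zookeeper-3.4.5 is the install root assumed by this guide.
ZK_HOME=/solrcloud/zookeeper-3.4.5
mkdir -p "$ZK_HOME/data" "$ZK_HOME/log"      # snapshot dir and transaction-log dir
chmod -R u+w "$ZK_HOME/data" "$ZK_HOME/log"  # make sure the running user can write here
```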
3) Copy the ZooKeeper sample configuration file
Copy zoo_sample.cfg and rename it to zoo.cfg:
cp /solrcloud/zookeeper-3.4.5/conf/zoo_sample.cfg /solrcloud/zookeeper-3.4.5/conf/zoo.cfg
4) Edit zoo.cfg
Add the following entries:
dataDir=/solrcloud/zookeeper-3.4.5/data
dataLogDir=/solrcloud/zookeeper-3.4.5/log
server.1=192.168.56.11:2888:3888
server.2=192.168.56.12:2888:3888
server.3=192.168.56.13:2888:3888
The finished zoo.cfg looks like this:
# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
dataDir=/solrcloud/zookeeper-3.4.5/data
dataLogDir=/solrcloud/zookeeper-3.4.5/log
# the port at which the clients will connect
clientPort=2181
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
autopurge.purgeInterval=1
server.1=192.168.56.11:2888:3888
server.2=192.168.56.12:2888:3888
server.3=192.168.56.13:2888:3888
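A quick sanity check on the timing settings above: initLimit and syncLimit are expressed in ticks of tickTime milliseconds, so the effective windows multiply out as follows (using the values in this zoo.cfg):

```shell
# Timeout windows implied by the zoo.cfg above (all counted in ticks).
tickTime=2000
initLimit=10
syncLimit=5
init_ms=$((tickTime * initLimit))  # how long followers get to connect and sync at startup
sync_ms=$((tickTime * syncLimit))  # max allowed lag between a request and its ack
echo "initLimit window: ${init_ms} ms"
echo "syncLimit window: ${sync_ms} ms"
```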
5) Create the corresponding myid file in each data folder
For example, for server.1=192.168.56.11 the myid file under that machine's data folder contains just 1.
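The myid convention can be illustrated locally: each server's dataDir holds a file containing only that server's number from zoo.cfg. Here the three files are staged under a hypothetical ./staging directory before being placed on the real machines:

```shell
# Illustration of the myid convention: one file per server, containing
# only that server's number from the server.N lines in zoo.cfg.
# ./staging is a hypothetical local path, not part of the real layout.
for i in 1 2 3; do
  mkdir -p "staging/server$i/data"
  echo "$i" > "staging/server$i/data/myid"
done
cat staging/server1/data/myid   # prints: 1
```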
6) Copy the zookeeper-3.4.5 folder to the other machines
On each machine, change myid under the data folder to that server's number from its server.N line.
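Steps 5 and 6 together might be scripted as below. This sketch only prints the scp/ssh commands so the plan can be reviewed first; the root user and the idea of pushing from server 1 to the other two hosts are assumptions, not part of the original guide:

```shell
# Hedged sketch of step 6: from server 1, copy the tree to each peer and
# rewrite its myid. Prints the commands instead of running them; pipe the
# output to sh (or drop the echos) to execute for real.
plan=$(
  i=1
  for host in 192.168.56.12 192.168.56.13; do
    i=$((i + 1))
    echo "scp -r /solrcloud/zookeeper-3.4.5 root@$host:/solrcloud/"
    echo "ssh root@$host \"echo $i > /solrcloud/zookeeper-3.4.5/data/myid\""
  done
)
echo "$plan"
```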
7) Open ZooKeeper's ports in the firewall
/sbin/iptables -I INPUT -p tcp --dport 2181 -j ACCEPT
/sbin/iptables -I INPUT -p tcp --dport 2888 -j ACCEPT
/sbin/iptables -I INPUT -p tcp --dport 3888 -j ACCEPT
/sbin/iptables -I INPUT -p tcp --dport 8080 -j ACCEPT   # also open Tomcat's 8080 port while we're at it
/etc/rc.d/init.d/iptables save   # save the changes
/etc/init.d/iptables restart     # restart the firewall so the changes take effect
8) Start ZooKeeper
From the bin directory:
./zkServer.sh start
Check the cluster state:
./zkServer.sh status   # may report an error right after startup; it clears once the other nodes in the cluster are up as well
./zkCli.sh
Inside the client, commands such as ls / list the znodes stored in ZooKeeper.
For example, ls /live_nodes shows which nodes are alive,
and rmr deletes a znode together with its children.