redis-cluster: Building a Master-Slave Cluster with the Script Bundled with Redis
2018-03-09
hisenyuan
This walkthrough was done on CentOS 7.3, where the Ruby available through yum is older than the 2.2.2 required by the redis gem and cannot be upgraded via yum.
The setup is three masters and three slaves on a single machine; the same approach applies across multiple machines.
1. Install Redis
# Download
cd /usr/local/
wget http://download.redis.io/releases/redis-3.2.1.tar.gz
tar -zxvf redis-3.2.1.tar.gz
# Compile and install
cd redis-3.2.1
make && make install
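make install places the binaries in /usr/local/bin. As an optional sanity check (assuming the build completed without errors), confirm which version is now on the PATH:
# verify the installed binary
redis-server --version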
2. Create the configuration files
mkdir cluster
cd cluster
vi 7000.conf
# Contents as follows (the // annotations are explanations only and must not be left in the actual file)
port 7000 // one port per file: 7000-7005
bind 192.168.1.174 // default is 127.0.0.1; use an IP the other nodes can reach, otherwise cluster creation cannot connect to the ports
daemonize yes // run Redis in the background
pidfile ./redis_7000.pid // pid file, matching each port 7000-7005
cluster-enabled yes // enable cluster mode (uncomment the line)
cluster-config-file nodes_7000.conf // cluster state file, generated automatically on first start; 7000-7005
cluster-node-timeout 15000 // node timeout in milliseconds, 15000 (15 s) by default; adjust as needed
appendonly yes // enable AOF persistence if needed; every write operation is appended to the log
# Copy the file 5 more times and change the port-specific values (port, pidfile, cluster-config-file) in each copy
cp 7000.conf 7001.conf
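The remaining configuration files can also be generated in one step with sed instead of editing each copy by hand. A minimal sketch, assuming 7000.conf above is the template and that the only occurrences of 7000 in it are the port-specific values:
# generate 7001.conf ... 7005.conf from the 7000.conf template
for port in 7001 7002 7003 7004 7005; do
  sed "s/7000/${port}/g" 7000.conf > ${port}.conf
done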
3. Start Redis and verify
# Start
cd ..
redis-server cluster/7000.conf
redis-server cluster/7001.conf
redis-server cluster/7002.conf
redis-server cluster/7003.conf
redis-server cluster/7004.conf
redis-server cluster/7005.conf
# Check the processes
ps -ef | grep redis | grep cluster
root 15792 1 0 14:15 ? 00:00:00 redis-server 192.168.1.174:7000 [cluster]
root 15805 1 0 14:17 ? 00:00:00 redis-server 192.168.1.174:7001 [cluster]
root 15809 1 0 14:17 ? 00:00:00 redis-server 192.168.1.174:7002 [cluster]
root 15814 1 0 14:17 ? 00:00:00 redis-server 192.168.1.174:7003 [cluster]
root 15818 1 0 14:17 ? 00:00:00 redis-server 192.168.1.174:7004 [cluster]
root 15822 1 0 14:17 ? 00:00:00 redis-server 192.168.1.174:7005 [cluster]
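Beyond the process list, each instance can be checked directly. A small sketch, assuming the nodes are bound to 192.168.1.174 as in the ps output above:
# every node should answer PONG
for port in 7000 7001 7002 7003 7004 7005; do
  redis-cli -h 192.168.1.174 -p ${port} ping
done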
4. Install a newer Ruby (skip this step if your Ruby is already newer than 2.2.2)
The redis-trib.rb script depends on the redis gem, which cannot be used with older Ruby versions, so build a newer Ruby:
wget http://cache.ruby-lang.org/pub/ruby/2.3/ruby-2.3.5.tar.gz
tar zxvf ruby-2.3.5.tar.gz
cd ruby-2.3.5
./configure --prefix=/opt/ruby
make && make install
# If the next two steps report an error, remove the existing links first
ln -s /opt/ruby/bin/ruby /usr/bin/ruby
ln -s /opt/ruby/bin/gem /usr/bin/gem
# Check the version
ruby -v
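It is also worth confirming that the links resolve to the freshly built Ruby rather than the old system one. A quick check, assuming the symlinks above were created under /usr/bin:
# the links should point into /opt/ruby/bin
ls -l /usr/bin/ruby /usr/bin/gem
gem -v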
5. Create the cluster
# Install the redis gem
gem install redis
# Run from the Redis source directory (the path is relative; adjust it to your own layout)
ruby src/redis-trib.rb
# After the gem is installed, create the cluster; --replicas 1 means one slave per master (master:slave = 1:1)
ruby src/redis-trib.rb create --replicas 1 192.168.1.174:7000 192.168.1.174:7001 192.168.1.174:7002 192.168.1.174:7003 192.168.1.174:7004 192.168.1.174:7005
# The command prints output like the following
>>> Creating cluster
>>> Performing hash slots allocation on 6 nodes...
Using 3 masters:
192.168.1.174:7000
192.168.1.174:7001
192.168.1.174:7002
Adding replica 192.168.1.174:7003 to 192.168.1.174:7000
Adding replica 192.168.1.174:7004 to 192.168.1.174:7001
Adding replica 192.168.1.174:7005 to 192.168.1.174:7002
M: 18133a296a94716e580a5c4a16ca4190676f9f67 192.168.1.174:7000
slots:0-5460 (5461 slots) master
M: d7ebddbcb9b191aa14adaec638606edf7277c749 192.168.1.174:7001
slots:5461-10922 (5462 slots) master
M: 9751f758405e44008b7829126cb7adc0eb4f1539 192.168.1.174:7002
slots:10923-16383 (5461 slots) master
S: 47f9deb2756f5007c92657120da339d0ecaabcee 192.168.1.174:7003
replicates 18133a296a94716e580a5c4a16ca4190676f9f67
S: bc1d885ce61c401a07dcedbf3505b6613c1138d3 192.168.1.174:7004
replicates d7ebddbcb9b191aa14adaec638606edf7277c749
S: a25d1e5a5561126f8026c2d11fc366d97cc81ca5 192.168.1.174:7005
replicates 9751f758405e44008b7829126cb7adc0eb4f1539
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join....
>>> Performing Cluster Check (using node 192.168.1.174:7000)
M: 18133a296a94716e580a5c4a16ca4190676f9f67 192.168.1.174:7000
slots:0-5460 (5461 slots) master
M: d7ebddbcb9b191aa14adaec638606edf7277c749 192.168.1.174:7001
slots:5461-10922 (5462 slots) master
M: 9751f758405e44008b7829126cb7adc0eb4f1539 192.168.1.174:7002
slots:10923-16383 (5461 slots) master
M: 47f9deb2756f5007c92657120da339d0ecaabcee 192.168.1.174:7003
slots: (0 slots) master
replicates 18133a296a94716e580a5c4a16ca4190676f9f67
M: bc1d885ce61c401a07dcedbf3505b6613c1138d3 192.168.1.174:7004
slots: (0 slots) master
replicates d7ebddbcb9b191aa14adaec638606edf7277c749
M: a25d1e5a5561126f8026c2d11fc366d97cc81ca5 192.168.1.174:7005
slots: (0 slots) master
replicates 9751f758405e44008b7829126cb7adc0eb4f1539
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
6. Verify the cluster state
Use the client to connect to any of the master nodes (the -c option enables cluster mode) and run the commands below.
redis-cli -h 192.168.1.174 -p 7000 -c
192.168.1.174:7000> cluster info
cluster_state:ok
cluster_slots_assigned:16384
cluster_slots_ok:16384
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:6
cluster_size:3
cluster_current_epoch:6
cluster_my_epoch:1
cluster_stats_messages_sent:334
cluster_stats_messages_received:334
192.168.1.174:7000> cluster nodes
9751f758405e44008b7829126cb7adc0eb4f1539 192.168.1.174:7002 master - 0 1519800963508 3 connected 10923-16383
a25d1e5a5561126f8026c2d11fc366d97cc81ca5 192.168.1.174:7005 slave 9751f758405e44008b7829126cb7adc0eb4f1539 0 1519800964511 6 connected
d7ebddbcb9b191aa14adaec638606edf7277c749 192.168.1.174:7001 master - 0 1519800962506 2 connected 5461-10922
47f9deb2756f5007c92657120da339d0ecaabcee 192.168.1.174:7003 slave 18133a296a94716e580a5c4a16ca4190676f9f67 0 1519800961504 4 connected
bc1d885ce61c401a07dcedbf3505b6613c1138d3 192.168.1.174:7004 slave d7ebddbcb9b191aa14adaec638606edf7277c749 0 1519800959499 5 connected
18133a296a94716e580a5c4a16ca4190676f9f67 192.168.1.174:7000 myself,master - 0 0 1 connected 0-5460
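As a final check, keys can be written and read through any node; with -c the client follows MOVED redirects to whichever node owns the key's hash slot. A minimal sketch (the key name is arbitrary, output omitted):
# write and read back a test key through the cluster
redis-cli -h 192.168.1.174 -p 7000 -c set hello world
redis-cli -h 192.168.1.174 -p 7000 -c get hello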
7. Restarting
Redis does not ship an official restart procedure for a cluster. Once the cluster has been created, simply start the nodes again with their existing configuration files; there is no need to recreate the cluster, because each node's cluster-config-file (nodes_7000.conf and so on) preserves the cluster state.
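A possible restart sequence, assuming the same host, ports and paths as above (SHUTDOWN closes the connection, so redis-cli may report the dropped connection; that is expected):
# stop all six instances
for port in 7000 7001 7002 7003 7004 7005; do
  redis-cli -h 192.168.1.174 -p ${port} shutdown
done
# start them again with the existing config files; the cluster re-forms on its own
cd /usr/local/redis-3.2.1
for port in 7000 7001 7002 7003 7004 7005; do
  redis-server cluster/${port}.conf
done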