Redis cluster: adding nodes, allocating slots, removing nodes, and [ERR] troubleshooting
- Redis reshard: reallocating slots
https://github.com/antirez/redis/issues/5029 (the bug has been confirmed by Redis upstream)
- Redis cluster reshard failure
[ERR] Calling MIGRATE ERR Syntax error, try CLIENT (LIST | KILL | GETNAME | SETNAME | PAUSE | REPLY)
-
Background
Redis version: 4.0.1
Ruby redis gem version: 4.0.0. The redis library installed via gem must not be the latest 4.x; otherwise redis-trib.rb reshard 172.16.160.60:6377 fails with the error above.
-
Solution
a. Uninstall the latest redis gem: gem uninstall redis
b. Install a 3.x version: gem install redis -v 3.3.5 (versions 3.2.1 through 3.3.5 all tested OK; resharding fails with 4.x and above)
- Redis slots stuck in migrating/importing state
- Error
[WARNING] Node 172.16.160.34:6368 has slots in migrating state (6390)
[WARNING] Node 172.16.160.61:6377 has slots in migrating state (6390)
[WARNING] The following slots are open: 6390
-
Fix:
Log in to each node listed in the warnings and run the clear command:
cluster setslot 6390 stable
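When several slots are open, checking each node's state by hand gets tedious. As a hedged sketch (not from the original article): in `CLUSTER NODES` output, a node's own line marks open slots as `[slot->-destination-id]` (migrating) or `[slot-<-source-id]` (importing), per the CLUSTER NODES documentation, so a small parser can list them. The sample line below is hypothetical, modeled on the cluster in this article.

```python
import re

# Open slots appear at the end of a node's own CLUSTER NODES line as
# [slot->-destination-id] (migrating) or [slot-<-source-id] (importing).
OPEN_SLOT_RE = re.compile(r"\[(\d+)-([<>])-([0-9a-f]+)\]")

def open_slots(cluster_nodes_output):
    """Return (slot, state, other_node_id) for every open slot found."""
    found = []
    for line in cluster_nodes_output.splitlines():
        for slot, direction, node_id in OPEN_SLOT_RE.findall(line):
            state = "migrating" if direction == ">" else "importing"
            found.append((int(slot), state, node_id))
    return found

# Hypothetical line modeled on the cluster above:
line = ("afa51bebb90da31a1da4912c762edfdb713411c5 172.16.160.34:6368@16368 "
        "myself,master - 0 0 12 connected 0-1364 "
        "[6390->-78fd5441a07f6762d821b51fa330d535239953fe]")
print(open_slots(line))
```

For each slot reported, run CLUSTER SETSLOT &lt;slot&gt; STABLE on the affected nodes, as above.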
- Adding nodes to a Redis cluster
A node can be added in one of two roles: as a master or as a slave.
- List the existing cluster nodes
172.16.160.60:6377> cluster nodes
afa51bebb90da31a1da4912c762edfdb713411c5 172.16.160.34:6368@16368 master - 0 1537168776000 12 connected 0-1364 5461-6826 10923-12287
c447385f64b9294ee9fdab634254505e06dd3770 172.16.160.34:6367@16367 slave e4c7c9cb80caf727cb5724af7b47ce0b462b9749 0 1537168776158 4 connected
bdc98c07bdcfc5141a3a41af25ac5b1826aa9f2a 172.16.160.61:6367@16367 slave 92ab349e2f5723cec93e8b3e26af1d4062cd1469 0 1537168776000 5 connected
e4c7c9cb80caf727cb5724af7b47ce0b462b9749 172.16.160.34:6377@16377 master - 0 1537168778163 3 connected 12288-16383
78fd5441a07f6762d821b51fa330d535239953fe 172.16.160.61:6377@16377 master - 0 1537168774000 11 connected 6827-10922
92ab349e2f5723cec93e8b3e26af1d4062cd1469 172.16.160.60:6377@16377 myself,master - 0 1537168775000 1 connected 1365-5460
0b6f0cabbb8488f43a6b5c8a44c781656d3075d2 172.16.160.60:6367@16367 slave 78fd5441a07f6762d821b51fa330d535239953fe 0 1537168777161 11 connected
0a170716fa820a056a8826c63a5a4c02a9aaa34a 172.16.160.34:6369@16369 slave afa51bebb90da31a1da4912c762edfdb713411c5 0 1537168775000 12 connected
-
Add a master node
add-node takes the ip:port of the node to add, followed by the ip:port of any node already in the target cluster
[root@yoyo60 bin]# ./redis-trib.rb add-node 172.16.160.61:6368 172.16.160.60:6377
>>> Adding node 172.16.160.61:6368 to cluster 172.16.160.60:6377
>>> Performing Cluster Check (using node 172.16.160.60:6377)
M: 92ab349e2f5723cec93e8b3e26af1d4062cd1469 172.16.160.60:6377
slots:1365-5460 (4096 slots) master
1 additional replica(s)
M: afa51bebb90da31a1da4912c762edfdb713411c5 172.16.160.34:6368
slots:0-1364,5461-6826,10923-12287 (4096 slots) master
1 additional replica(s)
S: c447385f64b9294ee9fdab634254505e06dd3770 172.16.160.34:6367
slots: (0 slots) slave
replicates e4c7c9cb80caf727cb5724af7b47ce0b462b9749
S: bdc98c07bdcfc5141a3a41af25ac5b1826aa9f2a 172.16.160.61:6367
slots: (0 slots) slave
replicates 92ab349e2f5723cec93e8b3e26af1d4062cd1469
M: e4c7c9cb80caf727cb5724af7b47ce0b462b9749 172.16.160.34:6377
slots:12288-16383 (4096 slots) master
1 additional replica(s)
M: 78fd5441a07f6762d821b51fa330d535239953fe 172.16.160.61:6377
slots:6827-10922 (4096 slots) master
1 additional replica(s)
S: 0b6f0cabbb8488f43a6b5c8a44c781656d3075d2 172.16.160.60:6367
slots: (0 slots) slave
replicates 78fd5441a07f6762d821b51fa330d535239953fe
S: 0a170716fa820a056a8826c63a5a4c02a9aaa34a 172.16.160.34:6369
slots: (0 slots) slave
replicates afa51bebb90da31a1da4912c762edfdb713411c5
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
>>> Send CLUSTER MEET to node 172.16.160.61:6368 to make it join the cluster.
[OK] New node added correctly.
-
Inspect the new master
The master was added successfully with node id 117dd5c58a92c602ee6fc2df2d76a6bb3216654f, but no slots have been assigned to it yet, so it is unusable: without slots, no data will be stored on the node.
[root@yoyo60 bin]# ./redis-trib.rb check 172.16.160.61:6368
>>> Performing Cluster Check (using node 172.16.160.61:6368)
M: 117dd5c58a92c602ee6fc2df2d76a6bb3216654f 172.16.160.61:6368
slots: (0 slots) master
0 additional replica(s)
S: c447385f64b9294ee9fdab634254505e06dd3770 172.16.160.34:6367
slots: (0 slots) slave
replicates e4c7c9cb80caf727cb5724af7b47ce0b462b9749
M: 92ab349e2f5723cec93e8b3e26af1d4062cd1469 172.16.160.60:6377
slots:1365-5460 (4096 slots) master
1 additional replica(s)
S: bdc98c07bdcfc5141a3a41af25ac5b1826aa9f2a 172.16.160.61:6367
slots: (0 slots) slave
replicates 92ab349e2f5723cec93e8b3e26af1d4062cd1469
S: 0b6f0cabbb8488f43a6b5c8a44c781656d3075d2 172.16.160.60:6367
slots: (0 slots) slave
replicates 78fd5441a07f6762d821b51fa330d535239953fe
M: 78fd5441a07f6762d821b51fa330d535239953fe 172.16.160.61:6377
slots:6827-10922 (4096 slots) master
1 additional replica(s)
S: 0a170716fa820a056a8826c63a5a4c02a9aaa34a 172.16.160.34:6369
slots: (0 slots) slave
replicates afa51bebb90da31a1da4912c762edfdb713411c5
M: e4c7c9cb80caf727cb5724af7b47ce0b462b9749 172.16.160.34:6377
slots:12288-16383 (4096 slots) master
1 additional replica(s)
M: afa51bebb90da31a1da4912c762edfdb713411c5 172.16.160.34:6368
slots:0-1364,5461-6826,10923-12287 (4096 slots) master
1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
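A master with zero slots can never receive keys, because every key maps to one of the 16384 slots as CRC16(key) mod 16384 (honoring {hash tags}), using the XMODEM CRC16 variant described in the Redis Cluster specification. A sketch of that mapping:

```python
def crc16(data: bytes) -> int:
    """CRC16-CCITT (XMODEM), the variant used by Redis Cluster."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) if crc & 0x8000 else (crc << 1)
            crc &= 0xFFFF
    return crc

def hash_slot(key: str) -> int:
    """Map a key to its cluster slot, honoring {hash tags}."""
    start = key.find("{")
    if start != -1:
        end = key.find("}", start + 1)
        if end > start + 1:          # non-empty tag: hash only the tag
            key = key[start + 1:end]
    return crc16(key.encode()) % 16384

# Reference value from the Redis Cluster specification:
assert crc16(b"123456789") == 0x31C3
# Keys sharing a hash tag land in the same slot:
assert hash_slot("{user1000}.following") == hash_slot("{user1000}.followers")
```

Until the reshard below moves some of the 16384 slots to the new node, no key hashes to it.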
-
Add a slave node and specify its master
--master-id gives the node id of the master the new node will replicate; the first ip:port is the slave to add, and the second ip:port identifies the target cluster
[root@yoyo60 bin]# ./redis-trib.rb add-node --slave --master-id 117dd5c58a92c602ee6fc2df2d76a6bb3216654f 172.16.160.61:6369 172.16.160.61:6368
>>> Adding node 172.16.160.61:6369 to cluster 172.16.160.61:6368
>>> Performing Cluster Check (using node 172.16.160.61:6368)
M: 117dd5c58a92c602ee6fc2df2d76a6bb3216654f 172.16.160.61:6368
slots: (0 slots) master
0 additional replica(s)
S: c447385f64b9294ee9fdab634254505e06dd3770 172.16.160.34:6367
slots: (0 slots) slave
replicates e4c7c9cb80caf727cb5724af7b47ce0b462b9749
M: 92ab349e2f5723cec93e8b3e26af1d4062cd1469 172.16.160.60:6377
slots:1365-5460 (4096 slots) master
1 additional replica(s)
S: bdc98c07bdcfc5141a3a41af25ac5b1826aa9f2a 172.16.160.61:6367
slots: (0 slots) slave
replicates 92ab349e2f5723cec93e8b3e26af1d4062cd1469
S: 0b6f0cabbb8488f43a6b5c8a44c781656d3075d2 172.16.160.60:6367
slots: (0 slots) slave
replicates 78fd5441a07f6762d821b51fa330d535239953fe
M: 78fd5441a07f6762d821b51fa330d535239953fe 172.16.160.61:6377
slots:6827-10922 (4096 slots) master
1 additional replica(s)
S: 0a170716fa820a056a8826c63a5a4c02a9aaa34a 172.16.160.34:6369
slots: (0 slots) slave
replicates afa51bebb90da31a1da4912c762edfdb713411c5
M: e4c7c9cb80caf727cb5724af7b47ce0b462b9749 172.16.160.34:6377
slots:12288-16383 (4096 slots) master
1 additional replica(s)
M: afa51bebb90da31a1da4912c762edfdb713411c5 172.16.160.34:6368
slots:0-1364,5461-6826,10923-12287 (4096 slots) master
1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
>>> Send CLUSTER MEET to node 172.16.160.61:6369 to make it join the cluster.
Waiting for the cluster to join...
>>> Configure node as replica of 172.16.160.61:6368.
[OK] New node added correctly.
-
Check the cluster state
The slave was added successfully: 172.16.160.61:6369 now shows up as a slave, with its master's id 117dd5c58a92c602ee6fc2df2d76a6bb3216654f recorded in its replicates field.
[root@yoyo60 bin]# ./redis-trib.rb check 172.16.160.61:6368
>>> Performing Cluster Check (using node 172.16.160.61:6368)
M: 117dd5c58a92c602ee6fc2df2d76a6bb3216654f 172.16.160.61:6368
slots: (0 slots) master
1 additional replica(s)
S: c447385f64b9294ee9fdab634254505e06dd3770 172.16.160.34:6367
slots: (0 slots) slave
replicates e4c7c9cb80caf727cb5724af7b47ce0b462b9749
M: 92ab349e2f5723cec93e8b3e26af1d4062cd1469 172.16.160.60:6377
slots:1365-5460 (4096 slots) master
1 additional replica(s)
S: bdc98c07bdcfc5141a3a41af25ac5b1826aa9f2a 172.16.160.61:6367
slots: (0 slots) slave
replicates 92ab349e2f5723cec93e8b3e26af1d4062cd1469
S: 0b6f0cabbb8488f43a6b5c8a44c781656d3075d2 172.16.160.60:6367
slots: (0 slots) slave
replicates 78fd5441a07f6762d821b51fa330d535239953fe
M: 78fd5441a07f6762d821b51fa330d535239953fe 172.16.160.61:6377
slots:6827-10922 (4096 slots) master
1 additional replica(s)
S: 0a170716fa820a056a8826c63a5a4c02a9aaa34a 172.16.160.34:6369
slots: (0 slots) slave
replicates afa51bebb90da31a1da4912c762edfdb713411c5
M: e4c7c9cb80caf727cb5724af7b47ce0b462b9749 172.16.160.34:6377
slots:12288-16383 (4096 slots) master
1 additional replica(s)
M: afa51bebb90da31a1da4912c762edfdb713411c5 172.16.160.34:6368
slots:0-1364,5461-6826,10923-12287 (4096 slots) master
1 additional replica(s)
S: 66c7917a991442068c2741207980fb5d8f60e218 172.16.160.61:6369
slots: (0 slots) slave
replicates 117dd5c58a92c602ee6fc2df2d76a6bb3216654f
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
However, the cluster check above also shows:
M: 117dd5c58a92c602ee6fc2df2d76a6bb3216654f 172.16.160.61:6368
slots: (0 slots) master
1 additional replica(s)
slots: (0 slots) master shows that this master still owns no slots, so it serves no data; we have to assign slots to it manually.
- Assigning slots
As seen above, the new master holds no slots, so we reallocate some to it by hand.
- Reallocate cluster slots with the reshard command
ip:port identifies any node in the cluster to operate on
[root@yoyo60 bin]# ./redis-trib.rb reshard 172.16.160.60:6367
>>> Performing Cluster Check (using node 172.16.160.60:6367)
S: 0b6f0cabbb8488f43a6b5c8a44c781656d3075d2 172.16.160.60:6367
slots: (0 slots) slave
replicates 78fd5441a07f6762d821b51fa330d535239953fe
S: c447385f64b9294ee9fdab634254505e06dd3770 172.16.160.34:6367
slots: (0 slots) slave
replicates e4c7c9cb80caf727cb5724af7b47ce0b462b9749
M: 78fd5441a07f6762d821b51fa330d535239953fe 172.16.160.61:6377
slots:6827-10922 (4096 slots) master
1 additional replica(s)
S: 66c7917a991442068c2741207980fb5d8f60e218 172.16.160.61:6369
slots: (0 slots) slave
replicates 117dd5c58a92c602ee6fc2df2d76a6bb3216654f
S: bdc98c07bdcfc5141a3a41af25ac5b1826aa9f2a 172.16.160.61:6367
slots: (0 slots) slave
replicates 92ab349e2f5723cec93e8b3e26af1d4062cd1469
S: 0a170716fa820a056a8826c63a5a4c02a9aaa34a 172.16.160.34:6369
slots: (0 slots) slave
replicates afa51bebb90da31a1da4912c762edfdb713411c5
M: 92ab349e2f5723cec93e8b3e26af1d4062cd1469 172.16.160.60:6377
slots:1365-5460 (4096 slots) master
1 additional replica(s)
M: e4c7c9cb80caf727cb5724af7b47ce0b462b9749 172.16.160.34:6377
slots:12288-16383 (4096 slots) master
1 additional replica(s)
M: afa51bebb90da31a1da4912c762edfdb713411c5 172.16.160.34:6368
slots:0-1364,5461-6826,10923-12287 (4096 slots) master
1 additional replica(s)
M: 117dd5c58a92c602ee6fc2df2d76a6bb3216654f 172.16.160.61:6368
slots: (0 slots) master
1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
How many slots do you want to move (from 1 to 16384)?
We are asked how many slots to migrate to 172.16.160.61:6368. There are 16384 slots in total and now five masters, so a balanced split is 16384/5 ≈ 3276: we move 3276 slots.
How many slots do you want to move (from 1 to 16384)? 3276
What is the receiving node ID?
Next we are asked for the node id that will receive these slots; for 172.16.160.61:6368 that is 117dd5c58a92c602ee6fc2df2d76a6bb3216654f.
How many slots do you want to move (from 1 to 16384)? 3276
What is the receiving node ID? 117dd5c58a92c602ee6fc2df2d76a6bb3216654f
Please enter all the source node IDs.
Type 'all' to use all the nodes as source nodes for the hash slots.
Type 'done' once you entered all the source nodes IDs.
Source node #1:
We are then asked which nodes the slots should be taken from. For a balanced layout they should come from all the other masters, so enter all.
With all, every master in the cluster becomes a source node; redis-trib takes a share of hash slots from each source until it has collected 3276, then moves them to 172.16.160.61:6368.
Recommended:
Source node #1: all
Alternatively, list the source node ids explicitly and finish with done (example only; use this form when you need to drain specific nodes):
Please enter all the source node IDs.
Type 'all' to use all the nodes as source nodes for the hash slots.
Type 'done' once you entered all the source nodes IDs.
Source node #1:117dd5c58a92c602ee6fc2df2d76a6bb3216654f
Source node #2:done
Finally we are asked to confirm the migration; enter yes or no:
Do you want to proceed with the proposed reshard plan (yes/no)?
After entering yes, redis-trib carries out the resharding. This is also where I ran into the problems covered in the two error sections at the top of this article.
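With all as the source, redis-trib draws slots from each master roughly in proportion to how many it holds. A simplified model of that allocation (a sketch, not redis-trib's exact code; node names m1..m4 are hypothetical):

```python
import math

def reshard_plan(sources, numslots):
    """Split numslots across source masters proportionally to their slot
    counts, largest sources first (a simplified model of redis-trib)."""
    total = sum(sources.values())
    plan, remaining = {}, numslots
    for node, count in sorted(sources.items(), key=lambda kv: -kv[1]):
        take = min(remaining, math.ceil(numslots * count / total))
        plan[node] = take
        remaining -= take
    return plan

# Four masters with 4096 slots each, moving 3276 to the new node:
print(reshard_plan({"m1": 4096, "m2": 4096, "m3": 4096, "m4": 4096}, 3276))
# → {'m1': 819, 'm2': 819, 'm3': 819, 'm4': 819}
```

This matches the result below: each of the four old masters gives up 819 slots.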
-
Check the cluster state
The new master 172.16.160.61:6368 now owns slots and can serve data.
[root@yoyo60 bin]# ./redis-trib.rb check 172.16.160.61:6368
>>> Performing Cluster Check (using node 172.16.160.61:6368)
M: 117dd5c58a92c602ee6fc2df2d76a6bb3216654f 172.16.160.61:6368
slots:0-818,1365-2183,6827-7645,12288-13106 (3276 slots) master
1 additional replica(s)
S: c447385f64b9294ee9fdab634254505e06dd3770 172.16.160.34:6367
slots: (0 slots) slave
replicates e4c7c9cb80caf727cb5724af7b47ce0b462b9749
M: 92ab349e2f5723cec93e8b3e26af1d4062cd1469 172.16.160.60:6377
slots:2184-5460 (3277 slots) master
1 additional replica(s)
S: bdc98c07bdcfc5141a3a41af25ac5b1826aa9f2a 172.16.160.61:6367
slots: (0 slots) slave
replicates 92ab349e2f5723cec93e8b3e26af1d4062cd1469
S: 0b6f0cabbb8488f43a6b5c8a44c781656d3075d2 172.16.160.60:6367
slots: (0 slots) slave
replicates 78fd5441a07f6762d821b51fa330d535239953fe
M: 78fd5441a07f6762d821b51fa330d535239953fe 172.16.160.61:6377
slots:7646-10922 (3277 slots) master
1 additional replica(s)
S: 0a170716fa820a056a8826c63a5a4c02a9aaa34a 172.16.160.34:6369
slots: (0 slots) slave
replicates afa51bebb90da31a1da4912c762edfdb713411c5
M: e4c7c9cb80caf727cb5724af7b47ce0b462b9749 172.16.160.34:6377
slots:13107-16383 (3277 slots) master
1 additional replica(s)
M: afa51bebb90da31a1da4912c762edfdb713411c5 172.16.160.34:6368
slots:819-1364,5461-6826,10923-12287 (3277 slots) master
1 additional replica(s)
S: 66c7917a991442068c2741207980fb5d8f60e218 172.16.160.61:6369
slots: (0 slots) slave
replicates 117dd5c58a92c602ee6fc2df2d76a6bb3216654f
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
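The slot counts in the check above (one master with 3276 slots, four with 3277) are exactly what a balanced split of 16384 slots over five masters gives:

```python
# Balanced split of 16384 slots over 5 masters: the remainder
# means four masters carry one extra slot.
base, rem = divmod(16384, 5)                  # base = 3276, rem = 4
counts = [base + 1] * rem + [base] * (5 - rem)
print(counts)                                 # → [3277, 3277, 3277, 3277, 3276]
assert sum(counts) == 16384
```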
-
Removing nodes
Removing nodes is where I hit the most pitfalls. If the cluster's 16384 slots are no longer fully covered, the cluster fails, so I strongly recommend handing a node's slots over to other nodes first, and only then removing the node.
- To remove a node, first hand over its slots with reshard.
As above, slots are reallocated with reshard and the steps are essentially the same, but take care to keep the slot allocation balanced across nodes.
Here we remove the master 172.16.160.61:6368.
[root@yoyo60 bin]# ./redis-trib.rb reshard 172.16.160.60:6367
>>> Performing Cluster Check (using node 172.16.160.60:6367)
S: 0b6f0cabbb8488f43a6b5c8a44c781656d3075d2 172.16.160.60:6367
slots: (0 slots) slave
replicates 78fd5441a07f6762d821b51fa330d535239953fe
S: c447385f64b9294ee9fdab634254505e06dd3770 172.16.160.34:6367
slots: (0 slots) slave
replicates e4c7c9cb80caf727cb5724af7b47ce0b462b9749
M: 78fd5441a07f6762d821b51fa330d535239953fe 172.16.160.61:6377
slots:7646-10922 (3277 slots) master
1 additional replica(s)
S: 66c7917a991442068c2741207980fb5d8f60e218 172.16.160.61:6369
slots: (0 slots) slave
replicates 117dd5c58a92c602ee6fc2df2d76a6bb3216654f
S: bdc98c07bdcfc5141a3a41af25ac5b1826aa9f2a 172.16.160.61:6367
slots: (0 slots) slave
replicates 92ab349e2f5723cec93e8b3e26af1d4062cd1469
S: 0a170716fa820a056a8826c63a5a4c02a9aaa34a 172.16.160.34:6369
slots: (0 slots) slave
replicates afa51bebb90da31a1da4912c762edfdb713411c5
M: 92ab349e2f5723cec93e8b3e26af1d4062cd1469 172.16.160.60:6377
slots:2184-5460 (3277 slots) master
1 additional replica(s)
M: e4c7c9cb80caf727cb5724af7b47ce0b462b9749 172.16.160.34:6377
slots:13107-16383 (3277 slots) master
1 additional replica(s)
M: afa51bebb90da31a1da4912c762edfdb713411c5 172.16.160.34:6368
slots:819-1364,5461-6826,10923-12287 (3277 slots) master
1 additional replica(s)
M: 117dd5c58a92c602ee6fc2df2d76a6bb3216654f 172.16.160.61:6368
slots:0-818,1365-2183,6827-7645,12288-13106 (3276 slots) master
1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
How many slots do you want to move (from 1 to 16384)?
- Enter the number of slots to move out. 172.16.160.61:6368 holds 3276 slots (0-818, 1365-2183, 6827-7645, 12288-13106), which were migrated in from four masters when the node was created, so we move them back out in four batches.
How many slots do you want to move (from 1 to 16384)? 819
- Enter the node id that will receive the slots
How many slots do you want to move (from 1 to 16384)? 819
What is the receiving node ID? afa51bebb90da31a1da4912c762edfdb713411c5
- Enter the source node id, finishing with done
Please enter all the source node IDs.
Type 'all' to use all the nodes as source nodes for the hash slots.
Type 'done' once you entered all the source nodes IDs.
Source node #1:117dd5c58a92c602ee6fc2df2d76a6bb3216654f
Source node #2:done
- Confirm the migration, yes/no
Do you want to proceed with the proposed reshard plan (yes/no)? yes
Repeat for the remaining three batches, then check the cluster state.
[root@yoyo60 bin]# ./redis-trib.rb check 172.16.160.61:6368
>>> Performing Cluster Check (using node 172.16.160.61:6368)
M: 117dd5c58a92c602ee6fc2df2d76a6bb3216654f 172.16.160.61:6368
slots: (0 slots) master
0 additional replica(s)
S: c447385f64b9294ee9fdab634254505e06dd3770 172.16.160.34:6367
slots: (0 slots) slave
replicates e4c7c9cb80caf727cb5724af7b47ce0b462b9749
M: 92ab349e2f5723cec93e8b3e26af1d4062cd1469 172.16.160.60:6377
slots:1365-5460 (4096 slots) master
1 additional replica(s)
S: bdc98c07bdcfc5141a3a41af25ac5b1826aa9f2a 172.16.160.61:6367
slots: (0 slots) slave
replicates 92ab349e2f5723cec93e8b3e26af1d4062cd1469
S: 0b6f0cabbb8488f43a6b5c8a44c781656d3075d2 172.16.160.60:6367
slots: (0 slots) slave
replicates 78fd5441a07f6762d821b51fa330d535239953fe
M: 78fd5441a07f6762d821b51fa330d535239953fe 172.16.160.61:6377
slots:6827-10922 (4096 slots) master
1 additional replica(s)
S: 0a170716fa820a056a8826c63a5a4c02a9aaa34a 172.16.160.34:6369
slots: (0 slots) slave
replicates afa51bebb90da31a1da4912c762edfdb713411c5
M: e4c7c9cb80caf727cb5724af7b47ce0b462b9749 172.16.160.34:6377
slots:12288-16383 (4096 slots) master
2 additional replica(s)
M: afa51bebb90da31a1da4912c762edfdb713411c5 172.16.160.34:6368
slots:0-1364,5461-6826,10923-12287 (4096 slots) master
1 additional replica(s)
S: 66c7917a991442068c2741207980fb5d8f60e218 172.16.160.61:6369
slots: (0 slots) slave
replicates e4c7c9cb80caf727cb5724af7b47ce0b462b9749
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
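The four batches of 819 slots follow from the ranges the node held: 0-818, 1365-2183, 6827-7645 and 12288-13106 are each 819 slots wide (the ranges are inclusive):

```python
def range_sizes(spec):
    """Sizes of comma-separated inclusive slot ranges like '0-818,1365-2183'."""
    sizes = []
    for part in spec.split(","):
        lo, _, hi = part.partition("-")
        sizes.append(int(hi or lo) - int(lo) + 1)   # single slot "n" counts as 1
    return sizes

sizes = range_sizes("0-818,1365-2183,6827-7645,12288-13106")
print(sizes, sum(sizes))   # → [819, 819, 819, 819] 3276
```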
- Remove the master's slave first
Unlike add-node, del-node takes the ip:port of any node in the cluster followed by the node id of the node to remove
[root@yoyo60 bin]# ./redis-trib.rb del-node 172.16.160.60:6377 66c7917a991442068c2741207980fb5d8f60e218
>>> Removing node 66c7917a991442068c2741207980fb5d8f60e218 from cluster 172.16.160.60:6377
>>> Sending CLUSTER FORGET messages to the cluster...
>>> SHUTDOWN the node.
The slave was removed successfully; check the cluster state:
[root@yoyo60 bin]# ./redis-trib.rb check 172.16.160.61:6368
>>> Performing Cluster Check (using node 172.16.160.61:6368)
M: 117dd5c58a92c602ee6fc2df2d76a6bb3216654f 172.16.160.61:6368
slots: (0 slots) master
0 additional replica(s)
S: c447385f64b9294ee9fdab634254505e06dd3770 172.16.160.34:6367
slots: (0 slots) slave
replicates e4c7c9cb80caf727cb5724af7b47ce0b462b9749
M: 92ab349e2f5723cec93e8b3e26af1d4062cd1469 172.16.160.60:6377
slots:1365-5460 (4096 slots) master
1 additional replica(s)
S: bdc98c07bdcfc5141a3a41af25ac5b1826aa9f2a 172.16.160.61:6367
slots: (0 slots) slave
replicates 92ab349e2f5723cec93e8b3e26af1d4062cd1469
S: 0b6f0cabbb8488f43a6b5c8a44c781656d3075d2 172.16.160.60:6367
slots: (0 slots) slave
replicates 78fd5441a07f6762d821b51fa330d535239953fe
M: 78fd5441a07f6762d821b51fa330d535239953fe 172.16.160.61:6377
slots:6827-10922 (4096 slots) master
1 additional replica(s)
S: 0a170716fa820a056a8826c63a5a4c02a9aaa34a 172.16.160.34:6369
slots: (0 slots) slave
replicates afa51bebb90da31a1da4912c762edfdb713411c5
M: e4c7c9cb80caf727cb5724af7b47ce0b462b9749 172.16.160.34:6377
slots:12288-16383 (4096 slots) master
1 additional replica(s)
M: afa51bebb90da31a1da4912c762edfdb713411c5 172.16.160.34:6368
slots:0-1364,5461-6826,10923-12287 (4096 slots) master
1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
- Remove the master; the steps are essentially the same as for the slave
[root@yoyo60 bin]# ./redis-trib.rb del-node 172.16.160.60:6377 117dd5c58a92c602ee6fc2df2d76a6bb3216654f
>>> Removing node 117dd5c58a92c602ee6fc2df2d76a6bb3216654f from cluster 172.16.160.60:6377
>>> Sending CLUSTER FORGET messages to the cluster...
>>> SHUTDOWN the node.
- The master was removed successfully; log in to the cluster and check its state.
[root@yoyo60 bin]# ./redis-cli -c -h 172.16.160.60 -p 6377
172.16.160.60:6377> cluster info
cluster_state:ok
cluster_slots_assigned:16384
cluster_slots_ok:16384
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:8
cluster_size:4
cluster_current_epoch:17
cluster_my_epoch:15
cluster_stats_messages_ping_sent:292968
cluster_stats_messages_pong_sent:308843
cluster_stats_messages_update_sent:24
cluster_stats_messages_sent:601835
cluster_stats_messages_ping_received:308834
cluster_stats_messages_pong_received:292968
cluster_stats_messages_meet_received:9
cluster_stats_messages_update_received:1
cluster_stats_messages_received:601812
- List the node information
172.16.160.60:6377> cluster nodes
afa51bebb90da31a1da4912c762edfdb713411c5 172.16.160.34:6368@16368 master - 0 1537195995792 14 connected 0-1364 5461-6826 10923-12287
bdc98c07bdcfc5141a3a41af25ac5b1826aa9f2a 172.16.160.61:6367@16367 slave 92ab349e2f5723cec93e8b3e26af1d4062cd1469 0 1537195994000 15 connected
e4c7c9cb80caf727cb5724af7b47ce0b462b9749 172.16.160.34:6377@16377 master - 0 1537195994789 17 connected 12288-16383
92ab349e2f5723cec93e8b3e26af1d4062cd1469 172.16.160.60:6377@16377 myself,master - 0 1537195993000 15 connected 1365-5460
78fd5441a07f6762d821b51fa330d535239953fe 172.16.160.61:6377@16377 master - 0 1537195993000 16 connected 6827-10922
0b6f0cabbb8488f43a6b5c8a44c781656d3075d2 172.16.160.60:6367@16367 slave 78fd5441a07f6762d821b51fa330d535239953fe 0 1537195993000 16 connected
0a170716fa820a056a8826c63a5a4c02a9aaa34a 172.16.160.34:6369@16369 slave afa51bebb90da31a1da4912c762edfdb713411c5 0 1537195996795 14 connected
c447385f64b9294ee9fdab634254505e06dd3770 172.16.160.34:6367@16367 slave e4c7c9cb80caf727cb5724af7b47ce0b462b9749 0 1537195994000 17 connected
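Before running del-node it is worth verifying programmatically that the node owns no slots. A sketch that parses CLUSTER NODES lines like the ones above, assuming the standard field layout from the CLUSTER NODES documentation (the sample reuses two real lines from this cluster):

```python
def masters_with_slots(cluster_nodes_output):
    """Map each master's node id to the slot ranges it owns.
    CLUSTER NODES fields: id, address, flags, master-id, ping-sent,
    pong-recv, config-epoch, link-state, then zero or more slot ranges."""
    owned = {}
    for line in cluster_nodes_output.strip().splitlines():
        fields = line.split()
        if "master" in fields[2].split(","):   # flags may be "myself,master"
            owned[fields[0]] = fields[8:]      # slot ranges, possibly empty
    return owned

sample = (
    "afa51bebb90da31a1da4912c762edfdb713411c5 172.16.160.34:6368@16368 "
    "master - 0 1537195995792 14 connected 0-1364 5461-6826 10923-12287\n"
    "bdc98c07bdcfc5141a3a41af25ac5b1826aa9f2a 172.16.160.61:6367@16367 "
    "slave 92ab349e2f5723cec93e8b3e26af1d4062cd1469 0 1537195994000 15 connected\n"
)
print(masters_with_slots(sample))
```

A master is safe to pass to del-node once its entry here is an empty list; slaves do not appear at all.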