OpenShift Cluster Failure Recovery: Only One Master Node Left Available
2020-03-23
陈光辉_akr8s
Version Information
- OpenShift v3.11
Failure Scenario
- A 3- or 5-master cluster (3 masters in this lab environment)
- Due to machine failures only one master remains available, and there is no backup
- The cluster's etcd is unavailable:
2020-01-07 14:20:58.154738 E | etcdserver: publish error: etcdserver: request timed out
2020-01-07 14:20:58.163767 W | rafthttp: health check for peer d9e07999afebd092 could not connect: dial tcp 192.168.0.128:2380: getsockopt: connection refused
2020-01-07 14:20:58.164548 W | rafthttp: health check for peer ebcee2e9c4c528b2 could not connect: dial tcp 192.168.0.110:2380: getsockopt: connection refused
2020-01-07 14:21:00.624133 I | raft: e00e9e227c7f69e4 is starting a new election at term 25
2020-01-07 14:21:00.624175 I | raft: e00e9e227c7f69e4 became candidate at term 26
2020-01-07 14:21:00.624189 I | raft: e00e9e227c7f69e4 received MsgVoteResp from e00e9e227c7f69e4 at term 26
2020-01-07 14:21:00.624199 I | raft: e00e9e227c7f69e4 [logterm: 4, index: 40800] sent MsgVote request to d9e07999afebd092 at term 26
2020-01-07 14:21:00.624219 I | raft: e00e9e227c7f69e4 [logterm: 4, index: 40800] sent MsgVote request to ebcee2e9c4c528b2 at term 26
- The oc command does not work:
# oc get nodes
Unable to connect to the server: EOF
Recovery Procedure
- First recover etcd so that the single remaining member is usable
- Modify the configuration so the remaining member forms a working single-member etcd cluster (ETCD_FORCE_NEW_CLUSTER drops the lost peers from the membership while keeping the existing data)
- Run the following on the surviving node, master3.b8d8.internal in my lab environment:
[root@master3 bak]# mv /etc/origin/node/pods/etcd.yaml .
[root@master3 bak]# cp /etc/etcd/etcd.conf etcd.conf.bak
[root@master3 bak]# echo "ETCD_FORCE_NEW_CLUSTER=true" >> /etc/etcd/etcd.conf
[root@master3 bak]# mv etcd.yaml /etc/origin/node/pods/.
- Remove the ETCD_FORCE_NEW_CLUSTER=true variable, restart etcd, and verify its health as sketched after the commands below:
[root@master3 bak]# mv /etc/origin/node/pods/etcd.yaml .
[root@master3 bak]# rm -rf /etc/etcd/etcd.conf
[root@master3 bak]# mv etcd.conf.bak /etc/etcd/etcd.conf
[root@master3 bak]# mv etcd.yaml /etc/origin/node/pods/.
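After the restart, it is worth confirming that the single remaining member reports healthy before touching anything else. A minimal health-check sketch, assuming the default OCP 3.11 etcd certificate paths under /etc/etcd and the client port 2379 (adjust to your environment):
[root@master3 bak]# etcdctl --ca-file /etc/etcd/ca.crt --cert-file /etc/etcd/peer.crt --key-file /etc/etcd/peer.key --endpoints https://master3.b8d8.internal:2379 cluster-health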
- With a single etcd member restored, the cluster is usable again, just without high availability:
[root@master3 bak]# oc get nodes
NAME STATUS ROLES AGE VERSION
infranode1.b8d8.internal Ready infra 2h v1.11.0+d4cacc0
infranode2.b8d8.internal Ready infra 2h v1.11.0+d4cacc0
master1.b8d8.internal NotReady master 2h v1.11.0+d4cacc0
master2.b8d8.internal NotReady master 2h v1.11.0+d4cacc0
master3.b8d8.internal Ready master 2h v1.11.0+d4cacc0
node1.b8d8.internal Ready compute 2h v1.11.0+d4cacc0
node2.b8d8.internal Ready compute 2h v1.11.0+d4cacc0
node3.b8d8.internal Ready compute 2h v1.11.0+d4cacc0
- Delete the failed nodes:
[root@master3 bak]# oc delete node master1.b8d8.internal master2.b8d8.internal
node "master1.b8d8.internal" deleted
node "master2.b8d8.internal" deleted
[root@master3 bak]# oc get nodes
NAME STATUS ROLES AGE VERSION
infranode1.b8d8.internal Ready infra 2h v1.11.0+d4cacc0
infranode2.b8d8.internal Ready infra 2h v1.11.0+d4cacc0
master3.b8d8.internal Ready master 2h v1.11.0+d4cacc0
node1.b8d8.internal Ready compute 2h v1.11.0+d4cacc0
node2.b8d8.internal Ready compute 2h v1.11.0+d4cacc0
node3.b8d8.internal Ready compute 2h v1.11.0+d4cacc0
- Rebuild the etcd CA certificates, because we lost the "first" master node listed in the inventory file. The "first" master is important: it holds the CA certificate material.
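For reference, the CA material the later scale-out depends on normally lives under /etc/origin/master on that "first" master. A quick check on the surviving master (the paths are the OCP 3.11 defaults; which of these files actually exist on a non-first master depends on your install):
[root@master3 bak]# ls -l /etc/origin/master/ca.crt /etc/origin/master/ca.key /etc/origin/master/ca.serial.txt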
- Edit the inventory file and remove the failed nodes:
[OSEv3:children]
lb
masters
etcd
nodes
nfs
[lb]
loadbalancer.b8d8.internal
[masters]
## Remove the failed nodes
#master1.b8d8.internal
#master2.b8d8.internal
master3.b8d8.internal
[etcd]
## Remove the failed nodes
#master1.b8d8.internal
#master2.b8d8.internal
master3.b8d8.internal
[nodes]
## These are the masters
## Remove the failed nodes
#master1.b8d8.internal openshift_node_group_name='node-config-master' openshift_node_problem_detector_install=true
#master2.b8d8.internal openshift_node_group_name='node-config-master' openshift_node_problem_detector_install=true
master3.b8d8.internal openshift_node_group_name='node-config-master' openshift_node_problem_detector_install=true
## These are infranodes
infranode1.b8d8.internal openshift_node_group_name='node-config-infra' openshift_node_problem_detector_install=true
infranode2.b8d8.internal openshift_node_group_name='node-config-infra' openshift_node_problem_detector_install=true
## These are regular nodes
node1.b8d8.internal openshift_node_group_name='node-config-compute' openshift_node_problem_detector_install=true
node2.b8d8.internal openshift_node_group_name='node-config-compute' openshift_node_problem_detector_install=true
node3.b8d8.internal openshift_node_group_name='node-config-compute' openshift_node_problem_detector_install=true
- Run the etcd CA redeployment playbook
- Run it on the Ansible (bastion) node:
[root@bastion ~]# cd /usr/share/ansible/openshift-ansible
[root@bastion openshift-ansible]# ansible-playbook -i /etc/ansible/hosts_redeploy-certificates playbooks/openshift-etcd/redeploy-ca.yml -vv
- Scale the master nodes back out to restore the 3 (or 5) master setup
- Edit the inventory file and add the nodes to be re-added:
[OSEv3:children]
lb
masters
etcd
## Add these two groups: new_nodes / new_masters
new_nodes
new_masters
nodes
nfs
[lb]
loadbalancer.b8d8.internal
[masters]
#master1.b8d8.internal
#master2.b8d8.internal
master3.b8d8.internal
## Masters being re-added
[new_masters]
master1.b8d8.internal
master2.b8d8.internal
[etcd]
#master1.b8d8.internal
#master2.b8d8.internal
master3.b8d8.internal
[nodes]
## These are the masters
#master1.b8d8.internal openshift_node_group_name='node-config-master' openshift_node_problem_detector_install=true
#master2.b8d8.internal openshift_node_group_name='node-config-master' openshift_node_problem_detector_install=true
master3.b8d8.internal openshift_node_group_name='node-config-master' openshift_node_problem_detector_install=true
## These are infranodes
infranode1.b8d8.internal openshift_node_group_name='node-config-infra' openshift_node_problem_detector_install=true
infranode2.b8d8.internal openshift_node_group_name='node-config-infra' openshift_node_problem_detector_install=true
## These are regular nodes
node1.b8d8.internal openshift_node_group_name='node-config-compute' openshift_node_problem_detector_install=true
node2.b8d8.internal openshift_node_group_name='node-config-compute' openshift_node_problem_detector_install=true
node3.b8d8.internal openshift_node_group_name='node-config-compute' openshift_node_problem_detector_install=true
## Masters being re-added
[new_nodes]
master1.b8d8.internal openshift_node_group_name='node-config-master' openshift_node_problem_detector_install=true
master2.b8d8.internal openshift_node_group_name='node-config-master' openshift_node_problem_detector_install=true
- On the surviving node, run the excluder's unexclude so that the yum exclude list no longer contains atomic-openshift*:
[root@master3 bak]# atomic-openshift-excluder unexclude
[root@master3 bak]# grep exclude /etc/yum.conf
exclude= docker*1.20* docker*1.19* docker*1.18* docker*1.17* docker*1.16* docker*1.15* docker*1.14*
- Place the ca.serial.txt file in /etc/origin/master/ on the surviving master. This file is generated when the cluster is deployed and lives on the cluster's "first" master node; without it, the cluster cannot be scaled back out to a highly available cluster of 3 (or 5) masters.
- What if it really is gone? You can try writing one by hand with the value 0F, as sketched after the command below:
[root@master3 bak]# cp ca.serial.txt /etc/origin/master/
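If no copy of ca.serial.txt survives anywhere, a minimal sketch of creating it by hand with the value 0F suggested above; the file holds, in hex, the serial number the CA will assign to the next certificate it signs:
[root@master3 bak]# echo "0F" > /etc/origin/master/ca.serial.txt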
- Run the master scaleup playbook:
[root@bastion ~]# cd /usr/share/ansible/openshift-ansible
[root@bastion openshift-ansible]# ansible-playbook -i /etc/ansible/hosts_add_masters playbooks/openshift-master/scaleup.yml -vv
- Approve the pending CSRs
- Run on the surviving node, then confirm the re-added masters go Ready as sketched below:
[root@master3 bak]# oc get csr -o name | xargs oc adm certificate approve
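Once the CSRs are approved, the re-added masters should register and become Ready; a quick check using the hostnames from this walkthrough:
[root@master3 bak]# oc get nodes master1.b8d8.internal master2.b8d8.internal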
- Scale etcd back out to a 3 (or 5) member cluster
- Edit the inventory file and add the etcd nodes to be added; note the current order of the master nodes:
[OSEv3:children]
lb
masters
etcd
## Add the new_etcd group
new_etcd
nodes
nfs
[lb]
loadbalancer.b8d8.internal
[masters]
## Note the order here: master3.b8d8.internal is now the "first" master
master3.b8d8.internal
master1.b8d8.internal
master2.b8d8.internal
[etcd]
#master1.b8d8.internal
#master2.b8d8.internal
master3.b8d8.internal
[new_etcd]
## etcd members being added
master1.b8d8.internal
master2.b8d8.internal
[nodes]
## These are the masters
master1.b8d8.internal openshift_node_group_name='node-config-master' openshift_node_problem_detector_install=true
master2.b8d8.internal openshift_node_group_name='node-config-master' openshift_node_problem_detector_install=true
master3.b8d8.internal openshift_node_group_name='node-config-master' openshift_node_problem_detector_install=true
## These are infranodes
infranode1.b8d8.internal openshift_node_group_name='node-config-infra' openshift_node_problem_detector_install=true
infranode2.b8d8.internal openshift_node_group_name='node-config-infra' openshift_node_problem_detector_install=true
## These are regular nodes
node1.b8d8.internal openshift_node_group_name='node-config-compute' openshift_node_problem_detector_install=true
node2.b8d8.internal openshift_node_group_name='node-config-compute' openshift_node_problem_detector_install=true
node3.b8d8.internal openshift_node_group_name='node-config-compute' openshift_node_problem_detector_install=true
- Run the etcd scaleup playbook:
[root@bastion ~]# cd /usr/share/ansible/openshift-ansible
[root@bastion openshift-ansible]# ansible-playbook -i /etc/ansible/hosts_add_etcd playbooks/openshift-etcd/scaleup.yml -vv
Verification
- Check the etcd / API pods:
[root@master3 bak]# oc get pod --all-namespaces |grep etcd
kube-system master-etcd-master1.b8d8.internal 1/1 Running 0 18s
kube-system master-etcd-master2.b8d8.internal 1/1 Running 0 1m
kube-system master-etcd-master3.b8d8.internal 1/1 Running 3 3h
[root@master3 bak]# oc get pod --all-namespaces |grep master-api
kube-system master-api-master1.b8d8.internal 1/1 Running 2 9m
kube-system master-api-master2.b8d8.internal 1/1 Running 2 9m
kube-system master-api-master3.b8d8.internal 1/1 Running 12 3h
- Check the etcd cluster status (an etcdctl sketch follows the pod listing below):
[root@master3 bak]# oc get pod --all-namespaces -owide |grep etcd
kube-system master-etcd-master1.b8d8.internal 1/1 Running 0 3m 192.168.0.128 master1.b8d8.internal <none>
kube-system master-etcd-master2.b8d8.internal 1/1 Running 0 4m 192.168.0.110 master2.b8d8.internal <none>
kube-system master-etcd-master3.b8d8.internal 1/1 Running 3 3h 192.168.0.152 master3.b8d8.internal <none>
[root@master3 bak]# oc get pod --all-namespaces -owide |grep master-api
kube-system master-api-master1.b8d8.internal 1/1 Running 2 12m 192.168.0.128 master1.b8d8.internal <none>
kube-system master-api-master2.b8d8.internal 1/1 Running 2 12m 192.168.0.110 master2.b8d8.internal <none>
kube-system master-api-master3.b8d8.internal 1/1 Running 12 3h 192.168.0.152 master3.b8d8.internal <none>
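The pod listing only shows that the etcd containers are running; to confirm the cluster actually has three healthy members, a sketch using etcdctl from the surviving master (same assumed certificate paths as earlier):
[root@master3 bak]# etcdctl --ca-file /etc/etcd/ca.crt --cert-file /etc/etcd/peer.crt --key-file /etc/etcd/peer.key --endpoints https://master3.b8d8.internal:2379 cluster-health
[root@master3 bak]# etcdctl --ca-file /etc/etcd/ca.crt --cert-file /etc/etcd/peer.crt --key-file /etc/etcd/peer.key --endpoints https://master3.b8d8.internal:2379 member list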
- Shut down the originally surviving node and check that the cluster still works; a sketch follows:
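A sketch of that final check, assuming you power off master3.b8d8.internal and then query the API from one of the re-added masters (or through the load balancer):
[root@master3 ~]# shutdown -h now
[root@master1 ~]# oc get nodes
[root@master1 ~]# oc get pod -n kube-system | grep -E 'master-(etcd|api)'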
Done.