OpenShift: a reliable Kubernetes distribution

OpenShift cluster failure recovery: only one master node left available

2020-03-23  陈光辉_akr8s

Version information

OpenShift Container Platform 3.11 (node version v1.11.0+d4cacc0)

Failure scenario

Two of the three master nodes went down. The surviving etcd member keeps timing out and starting elections it cannot win:

2020-01-07 14:20:58.154738 E | etcdserver: publish error: etcdserver: request timed out
2020-01-07 14:20:58.163767 W | rafthttp: health check for peer d9e07999afebd092 could not connect: dial tcp 192.168.0.128:2380: getsockopt: connection refused
2020-01-07 14:20:58.164548 W | rafthttp: health check for peer ebcee2e9c4c528b2 could not connect: dial tcp 192.168.0.110:2380: getsockopt: connection refused
2020-01-07 14:21:00.624133 I | raft: e00e9e227c7f69e4 is starting a new election at term 25
2020-01-07 14:21:00.624175 I | raft: e00e9e227c7f69e4 became candidate at term 26
2020-01-07 14:21:00.624189 I | raft: e00e9e227c7f69e4 received MsgVoteResp from e00e9e227c7f69e4 at term 26
2020-01-07 14:21:00.624199 I | raft: e00e9e227c7f69e4 [logterm: 4, index: 40800] sent MsgVote request to d9e07999afebd092 at term 26
2020-01-07 14:21:00.624219 I | raft: e00e9e227c7f69e4 [logterm: 4, index: 40800] sent MsgVote request to ebcee2e9c4c528b2 at term 26
# oc get nodes
Unable to connect to the server: EOF
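The raft log above is the textbook quorum failure: with two of three members unreachable, the survivor can never collect a majority of votes, so the API server behind it goes dark too. The arithmetic, as a minimal shell sketch:

```shell
# etcd needs a majority (quorum) of members to elect a leader.
# A cluster of n members has quorum floor(n/2)+1 and tolerates
# floor((n-1)/2) member failures.
quorum()    { echo $(( $1 / 2 + 1 )); }
tolerated() { echo $(( ($1 - 1) / 2 )); }

for n in 1 3 5; do
  echo "$n members: quorum=$(quorum $n), tolerated failures=$(tolerated $n)"
done
```

With n=3 the quorum is 2, which is exactly why losing two masters left the remaining member stuck campaigning at ever-higher terms.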

Recovery procedure

  1. First, recover etcd so that the single surviving member becomes usable again. Toggle ETCD_FORCE_NEW_CLUSTER on, let etcd come up as a one-member cluster, then turn the flag back off:
[root@master3 bak]# mv /etc/origin/node/pods/etcd.yaml .        # stop the etcd static pod
[root@master3 bak]# cp /etc/etcd/etcd.conf etcd.conf.bak        # back up the original config
[root@master3 bak]# echo "ETCD_FORCE_NEW_CLUSTER=true" >> /etc/etcd/etcd.conf
[root@master3 bak]# mv etcd.yaml /etc/origin/node/pods/.        # restart etcd as a new single-member cluster
[root@master3 bak]# mv /etc/origin/node/pods/etcd.yaml .        # stop etcd again once it is healthy
[root@master3 bak]# rm -f /etc/etcd/etcd.conf                   # drop the config carrying the force flag
[root@master3 bak]# mv etcd.conf.bak /etc/etcd/etcd.conf        # restore the original config
[root@master3 bak]# mv etcd.yaml /etc/origin/node/pods/.        # start etcd normally
[root@master3 bak]# oc get nodes
NAME                       STATUS     ROLES     AGE       VERSION
infranode1.b8d8.internal   Ready      infra     2h        v1.11.0+d4cacc0
infranode2.b8d8.internal   Ready      infra     2h        v1.11.0+d4cacc0
master1.b8d8.internal      NotReady   master    2h        v1.11.0+d4cacc0
master2.b8d8.internal      NotReady   master    2h        v1.11.0+d4cacc0
master3.b8d8.internal      Ready      master    2h        v1.11.0+d4cacc0
node1.b8d8.internal        Ready      compute   2h        v1.11.0+d4cacc0
node2.b8d8.internal        Ready      compute   2h        v1.11.0+d4cacc0
node3.b8d8.internal        Ready      compute   2h        v1.11.0+d4cacc0
[root@master3 bak]# oc delete node master1.b8d8.internal master2.b8d8.internal
node "master1.b8d8.internal" deleted
node "master2.b8d8.internal" deleted

[root@master3 bak]# oc get nodes
NAME                       STATUS    ROLES     AGE       VERSION
infranode1.b8d8.internal   Ready     infra     2h        v1.11.0+d4cacc0
infranode2.b8d8.internal   Ready     infra     2h        v1.11.0+d4cacc0
master3.b8d8.internal      Ready     master    2h        v1.11.0+d4cacc0
node1.b8d8.internal        Ready     compute   2h        v1.11.0+d4cacc0
node2.b8d8.internal        Ready     compute   2h        v1.11.0+d4cacc0
node3.b8d8.internal        Ready     compute   2h        v1.11.0+d4cacc0
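The ETCD_FORCE_NEW_CLUSTER dance above is easy to get wrong under pressure, so it is worth rehearsing the config toggle on a scratch copy first. A minimal sketch (paths and the config line are stand-ins for the real /etc/etcd/etcd.conf):

```shell
# Rehearse the etcd.conf toggle on a throwaway copy.
workdir=$(mktemp -d)
conf="$workdir/etcd.conf"
printf 'ETCD_NAME=master3\n' > "$conf"            # stand-in for /etc/etcd/etcd.conf

cp "$conf" "$conf.bak"                            # 1. back up the original
echo "ETCD_FORCE_NEW_CLUSTER=true" >> "$conf"     # 2. force a new single-member cluster
grep -q 'ETCD_FORCE_NEW_CLUSTER=true' "$conf" && echo "flag set"

mv "$conf.bak" "$conf"                            # 3. restore once etcd is healthy again
grep -q 'ETCD_FORCE_NEW_CLUSTER' "$conf" || echo "flag cleared"
```

The key point is that the flag must not survive the restore: if etcd keeps starting with ETCD_FORCE_NEW_CLUSTER=true, every restart resets the member list again.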
  2. Redeploy the etcd CA certificates. This is necessary because we lost the "first" master listed in the inventory file, and that "first" master matters: openshift-ansible treats it as the CA host, so it holds the CA certificate material. Comment out the failed hosts in the inventory:
[OSEv3:children]
lb
masters
etcd
nodes
nfs

[lb]
loadbalancer.b8d8.internal

[masters]
## Comment out the failed nodes
#master1.b8d8.internal
#master2.b8d8.internal
master3.b8d8.internal


[etcd]
## Comment out the failed nodes
#master1.b8d8.internal
#master2.b8d8.internal
master3.b8d8.internal


[nodes]
## These are the masters
## Comment out the failed nodes
#master1.b8d8.internal openshift_node_group_name='node-config-master' openshift_node_problem_detector_install=true
#master2.b8d8.internal openshift_node_group_name='node-config-master' openshift_node_problem_detector_install=true
master3.b8d8.internal openshift_node_group_name='node-config-master' openshift_node_problem_detector_install=true

## These are infranodes
infranode1.b8d8.internal openshift_node_group_name='node-config-infra' openshift_node_problem_detector_install=true
infranode2.b8d8.internal openshift_node_group_name='node-config-infra' openshift_node_problem_detector_install=true

## These are regular nodes
node1.b8d8.internal openshift_node_group_name='node-config-compute' openshift_node_problem_detector_install=true
node2.b8d8.internal openshift_node_group_name='node-config-compute' openshift_node_problem_detector_install=true
node3.b8d8.internal openshift_node_group_name='node-config-compute' openshift_node_problem_detector_install=true
[root@bastion ~]# cd /usr/share/ansible/openshift-ansible
[root@bastion openshift-ansible]# ansible-playbook -i /etc/ansible/hosts_redeploy-certificates playbooks/openshift-etcd/redeploy-ca.yml -vv 
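Once redeploy-ca finishes, it is worth eyeballing the new signer certificate before moving on (on 3.11 the etcd trust anchor lives at /etc/etcd/ca.crt). The inspection commands are shown here against a throwaway self-signed certificate; the subject name and temp paths are illustrative:

```shell
# Inspect a certificate's subject and validity window the same way you
# would check the redeployed etcd CA (e.g. /etc/etcd/ca.crt).
# A throwaway self-signed cert stands in for the real file.
tmp=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=etcd-signer-demo" \
  -keyout "$tmp/ca.key" -out "$tmp/ca.crt" 2>/dev/null

# Print the subject and notBefore/notAfter dates.
openssl x509 -in "$tmp/ca.crt" -noout -subject -dates
```

A freshly redeployed CA should show a notBefore close to now; an old date there means the playbook reused existing material instead of regenerating it.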
  3. Scale the master nodes back out, restoring the 3-member (or 5-member) cluster:
[OSEv3:children]
lb
masters
etcd
## Add these two groups: new_nodes / new_masters
new_nodes
new_masters
nodes
nfs

[lb]
loadbalancer.b8d8.internal

[masters]
#master1.b8d8.internal
#master2.b8d8.internal
master3.b8d8.internal

## Masters being added back
[new_masters]
master1.b8d8.internal
master2.b8d8.internal

[etcd]
#master1.b8d8.internal
#master2.b8d8.internal
master3.b8d8.internal


[nodes]
## These are the masters
#master1.b8d8.internal openshift_node_group_name='node-config-master' openshift_node_problem_detector_install=true
#master2.b8d8.internal openshift_node_group_name='node-config-master' openshift_node_problem_detector_install=true
master3.b8d8.internal openshift_node_group_name='node-config-master' openshift_node_problem_detector_install=true

## These are infranodes
infranode1.b8d8.internal openshift_node_group_name='node-config-infra' openshift_node_problem_detector_install=true
infranode2.b8d8.internal openshift_node_group_name='node-config-infra' openshift_node_problem_detector_install=true

## These are regular nodes
node1.b8d8.internal openshift_node_group_name='node-config-compute' openshift_node_problem_detector_install=true
node2.b8d8.internal openshift_node_group_name='node-config-compute' openshift_node_problem_detector_install=true
node3.b8d8.internal openshift_node_group_name='node-config-compute' openshift_node_problem_detector_install=true

## Masters being added back
[new_nodes]
master1.b8d8.internal openshift_node_group_name='node-config-master' openshift_node_problem_detector_install=true
master2.b8d8.internal openshift_node_group_name='node-config-master' openshift_node_problem_detector_install=true
[root@master3 bak]# atomic-openshift-excluder unexclude          # allow OpenShift packages to be installed again
[root@master3 bak]# grep exclude /etc/yum.conf                   # only the docker version pins remain excluded
exclude= docker*1.20*  docker*1.19*  docker*1.18*  docker*1.17*  docker*1.16*  docker*1.15*  docker*1.14*
[root@master3 bak]# cp ca.serial.txt /etc/origin/master/         # restore the CA serial file from the backup dir
[root@bastion ~]# cd /usr/share/ansible/openshift-ansible
[root@bastion openshift-ansible]# ansible-playbook -i /etc/ansible/hosts_add_masters playbooks/openshift-master/scaleup.yml -vv
[root@master3 bak]# oc get csr -o name | xargs oc adm certificate approve   # approve the rejoined nodes' pending CSRs
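The scaleup leaves the rejoined nodes' certificate signing requests in Pending state until someone approves them, which is what the one-liner above does: `oc get csr -o name` emits one resource name per line and xargs fans them out as arguments to the approve command. The same pipe pattern, demonstrated on canned input with `echo` standing in for `oc adm certificate approve` (the CSR names are illustrative):

```shell
# xargs collects the piped names and appends them to the command line.
printf 'certificatesigningrequest/csr-abc\ncertificatesigningrequest/csr-def\n' \
  | xargs echo oc adm certificate approve
# prints: oc adm certificate approve certificatesigningrequest/csr-abc certificatesigningrequest/csr-def
```

If new CSRs keep appearing (kubelets request client and serving certs separately), simply re-run the one-liner until `oc get csr` shows everything Approved.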
  4. Scale etcd back out, restoring the 3-member (or 5-member) cluster:
[OSEv3:children]
lb
masters
etcd
## Add the new_etcd group
new_etcd
nodes
nfs

[lb]
loadbalancer.b8d8.internal

[masters]
## Note the order: master3.b8d8.internal is now the "first" master
master3.b8d8.internal
master1.b8d8.internal
master2.b8d8.internal


[etcd]
#master1.b8d8.internal
#master2.b8d8.internal
master3.b8d8.internal

[new_etcd]
## etcd nodes being added back
master1.b8d8.internal
master2.b8d8.internal

[nodes]
## These are the masters
master1.b8d8.internal openshift_node_group_name='node-config-master' openshift_node_problem_detector_install=true
master2.b8d8.internal openshift_node_group_name='node-config-master' openshift_node_problem_detector_install=true
master3.b8d8.internal openshift_node_group_name='node-config-master' openshift_node_problem_detector_install=true

## These are infranodes
infranode1.b8d8.internal openshift_node_group_name='node-config-infra' openshift_node_problem_detector_install=true
infranode2.b8d8.internal openshift_node_group_name='node-config-infra' openshift_node_problem_detector_install=true

## These are regular nodes
node1.b8d8.internal openshift_node_group_name='node-config-compute' openshift_node_problem_detector_install=true
node2.b8d8.internal openshift_node_group_name='node-config-compute' openshift_node_problem_detector_install=true
node3.b8d8.internal openshift_node_group_name='node-config-compute' openshift_node_problem_detector_install=true

[root@bastion ~]# cd /usr/share/ansible/openshift-ansible
[root@bastion openshift-ansible]# ansible-playbook -i /etc/ansible/hosts_add_etcd playbooks/openshift-etcd/scaleup.yml -vv
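After the etcd scaleup playbook finishes, the cluster should report three started members with a single leader; running `etcdctl member list` inside one of the etcd pods (with the cluster's certs) shows this. A sketch of the counting checks on canned v2-style output, reusing the member IDs and addresses from the failure logs above (the exact field layout may differ by etcdctl version):

```shell
# Canned `etcdctl member list` output for a healthy 3-member cluster.
members='d9e07999afebd092: name=master1.b8d8.internal peerURLs=https://192.168.0.128:2380 clientURLs=https://192.168.0.128:2379 isLeader=false
ebcee2e9c4c528b2: name=master2.b8d8.internal peerURLs=https://192.168.0.110:2380 clientURLs=https://192.168.0.110:2379 isLeader=false
e00e9e227c7f69e4: name=master3.b8d8.internal peerURLs=https://192.168.0.152:2380 clientURLs=https://192.168.0.152:2379 isLeader=true'

echo "$members" | wc -l                    # expect 3 members
echo "$members" | grep -c 'isLeader=true'  # expect exactly 1 leader
```

Fewer than three lines, or zero leaders, means the new members registered but never caught up; check the rafthttp health-check messages in the etcd pod logs before proceeding.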

Verification

  1. Check the etcd and master-api pods:
[root@master3 bak]# oc get pod --all-namespaces |grep etcd
kube-system                         master-etcd-master1.b8d8.internal              1/1       Running            0          18s
kube-system                         master-etcd-master2.b8d8.internal              1/1       Running            0          1m
kube-system                         master-etcd-master3.b8d8.internal              1/1       Running            3          3h
[root@master3 bak]# oc get pod --all-namespaces |grep master-api
kube-system                         master-api-master1.b8d8.internal               1/1       Running            2          9m
kube-system                         master-api-master2.b8d8.internal               1/1       Running            2          9m
kube-system                         master-api-master3.b8d8.internal               1/1       Running            12         3h
  2. Check the etcd cluster state and pod placement:
[root@master3 bak]# oc get pod --all-namespaces -owide |grep etcd
kube-system                         master-etcd-master1.b8d8.internal              1/1       Running     0          3m        192.168.0.128   master1.b8d8.internal      <none>
kube-system                         master-etcd-master2.b8d8.internal              1/1       Running     0          4m        192.168.0.110   master2.b8d8.internal      <none>
kube-system                         master-etcd-master3.b8d8.internal              1/1       Running     3          3h        192.168.0.152   master3.b8d8.internal      <none>
[root@master3 bak]# oc get pod --all-namespaces -owide |grep master-api
kube-system                         master-api-master1.b8d8.internal               1/1       Running     2          12m       192.168.0.128   master1.b8d8.internal      <none>
kube-system                         master-api-master2.b8d8.internal               1/1       Running     2          12m       192.168.0.110   master2.b8d8.internal      <none>
kube-system                         master-api-master3.b8d8.internal               1/1       Running     12         3h        192.168.0.152   master3.b8d8.internal      <none>
  3. Stop the original node and check that the cluster still works.
