
26. Kubernetes Cluster Upgrade

2022-06-16  負笈在线

(1) Kubernetes Upgrade Notes

Notes:
1. Upgrade outside business peak hours, usually at night.
2. Keep the Kubernetes version matched with etcd, Calico, CoreDNS, and the other components (a quick way to record the current versions is shown below).
3. Read the changelog carefully before upgrading.
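
Before upgrading, it helps to record what is currently running. A minimal version check (the calico-node DaemonSet and coredns Deployment names assume the setup used later in this article):
# kubectl version --short
# etcdctl version
# kubectl get ds calico-node -n kube-system -oyaml | grep image:
# kubectl get deploy coredns -n kube-system -oyaml | grep image: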

ETCD
https://github.com/etcd-io/etcd/releases
https://github.com/etcd-io/etcd/releases/download/v3.5.2/etcd-v3.5.2-linux-amd64.tar.gz
https://github.com/etcd-io/etcd/blob/main/CHANGELOG/CHANGELOG-3.5.md

Kubernetes
https://kubernetes.io/
https://github.com/kubernetes/kubernetes/releases
https://github.com/kubernetes/kubernetes/archive/refs/tags/v1.23.5.tar.gz
https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.23.md
https://github.com/kubernetes/kubernetes/releases/tag/v1.23.5
https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.23.md#server-binaries

calico
https://www.tigera.io/project-calico/
https://projectcalico.docs.tigera.io/getting-started/kubernetes/self-managed-onprem/onpremises
Install Calico with Kubernetes API datastore, 50 nodes or less
https://projectcalico.docs.tigera.io/getting-started/kubernetes/self-managed-onprem/onpremises#install-calico-with-kubernetes-api-datastore-50-nodes-or-less

Install Calico with Kubernetes API datastore, more than 50 nodes
https://projectcalico.docs.tigera.io/getting-started/kubernetes/self-managed-onprem/onpremises#install-calico-with-kubernetes-api-datastore-more-than-50-nodes

Install Calico with etcd datastore
https://projectcalico.docs.tigera.io/getting-started/kubernetes/self-managed-onprem/onpremises#install-calico-with-etcd-datastore

Upgrade Calico on Kubernetes
https://projectcalico.docs.tigera.io/maintenance/kubernetes-upgrade

CoreDNS
https://github.com/coredns/coredns/releases/download/v1.9.1/coredns_1.9.1_linux_amd64.tgz
https://github.com/coredns/coredns/releases/
https://github.com/coredns/deployment/tree/master/kubernetes

(2) Etcd Cluster Upgrade

1.Back up the etcd data

# export ETCDCTL_API=3
# etcdctl --cacert=/etc/kubernetes/pki/etcd/etcd-ca.pem --cert=/etc/kubernetes/pki/etcd/etcd.pem --key=/etc/kubernetes/pki/etcd/etcd-key.pem --endpoints=https://192.168.8.19:2379 snapshot save YYYYMMDD.db
Here "snapshot save" writes the backup file, --endpoints selects the etcd member (IP:port), --cacert is the CA certificate for HTTPS, and --cert/--key are the client certificate and key. An equivalent multi-line form (certificate paths here follow the kubeadm layout; adjust them to your own cluster):
# ETCDCTL_API=3 etcdctl snapshot save snap.db \
    --endpoints=https://127.0.0.1:2379 \
    --cacert=/etc/kubernetes/pki/etcd/ca.crt \
    --cert=/etc/kubernetes/pki/etcd/server.crt \
    --key=/etc/kubernetes/pki/etcd/server.key
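
After the backup completes, it is worth verifying the snapshot before proceeding; etcdctl prints its hash, revision and size (in etcd 3.5 the same check is also available via etcdutl):
# ETCDCTL_API=3 etcdctl snapshot status snap.db -w table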

2.Download the new etcd package

# wget https://github.com/etcd-io/etcd/releases/download/v3.5.2/etcd-v3.5.2-linux-amd64.tar.gz
# tar xf etcd-v3.5.2-linux-amd64.tar.gz
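
Optionally confirm the version of the freshly extracted binaries before installing them (the directory name follows the release tarball layout):
# ./etcd-v3.5.2-linux-amd64/etcd --version
# ./etcd-v3.5.2-linux-amd64/etcdctl version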

3.Stop etcd

# systemctl stop etcd

4.Replace etcd and etcdctl

# which etcd
# mkdir /home/bak
# cp -p /usr/local/bin/etcd* /home/bak/
# cp -p etcd-v3.5.2-linux-amd64/etcd* /usr/local/bin/
# scp etcd-v3.5.2-linux-amd64/etcd* k8s-master02:/usr/local/bin/
# etcdctl version

5.Start etcd

# vi /etc/etcd/etcd.config.yml
 Change log-output to [default].
# systemctl restart etcd
# etcdctl --cacert=/etc/kubernetes/pki/etcd/etcd-ca.pem --cert=/etc/kubernetes/pki/etcd/etcd.pem --key=/etc/kubernetes/pki/etcd/etcd-key.pem --endpoints=https://192.168.8.19:2379 endpoint health

6.Upgrade the remaining etcd nodes the same way, and upgrade the leader node last.
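
To see which member is currently the leader (so it can be upgraded last), the endpoint status output includes an IS LEADER column; list every member in --endpoints (only one example endpoint is shown here) and reuse the certificate paths above:
# etcdctl --cacert=/etc/kubernetes/pki/etcd/etcd-ca.pem --cert=/etc/kubernetes/pki/etcd/etcd.pem --key=/etc/kubernetes/pki/etcd/etcd-key.pem --endpoints=https://192.168.8.19:2379 endpoint status -w table
The VERSION column of the same output also confirms that each member is running the new release after its upgrade.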

(3) Kubernetes Master Node Upgrade

1.kube-apiserver

# wget https://dl.k8s.io/v1.23.5/kubernetes-server-linux-amd64.tar.gz
# tar xf kubernetes-server-linux-amd64.tar.gz
# cd kubernetes/server/bin
# ./kubectl version
# systemctl stop kube-apiserver
# which kube-apiserver
# cp -rp kube-apiserver /usr/local/bin/kube-apiserver
# /usr/local/bin/kube-apiserver --version
# systemctl daemon-reload
# systemctl start kube-apiserver
# systemctl status kube-apiserver
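
After the restart, a quick probe through the API confirms the new apiserver is serving and reports the new version:
# kubectl get --raw /healthz
# kubectl get --raw /version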

2.kube-controller-manager kube-scheduler

# systemctl stop kube-controller-manager kube-scheduler
# cp -rp kube-controller-manager kube-scheduler /usr/local/bin/
# systemctl start kube-controller-manager
# systemctl status kube-controller-manager
# systemctl start kube-scheduler
# systemctl status kube-scheduler
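
The replaced binaries can also be checked directly for the new version:
# kube-controller-manager --version
# kube-scheduler --version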

3.Kube-proxy

# systemctl stop kube-proxy
# cp -rp kube-proxy /usr/local/bin
# systemctl start kube-proxy
# systemctl status kube-proxy

4.Verify

# kubectl get node
# cp -rp kubectl /usr/local/bin/
# kubectl cluster-info
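
kubectl version --short shows both the client and the apiserver version; note that the VERSION column of kubectl get node reflects the kubelet and only changes after each node's kubelet is upgraded (next section):
# kubectl version --short
# kubectl get node -owide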

5.Upgrade the other master nodes following the same steps.

(4) Kubernetes Node and Calico Upgrade

1.Drain the Node

# kubectl drain k8s-node01 --delete-emptydir-data --force --ignore-daemonsets
# kubectl get node

2.kubelet

# systemctl stop kubelet
# cp -rp kubelet /usr/local/bin/kubelet
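
A sketch of the remaining steps (node name as in the drain command above): restart the service, confirm the node reports the new kubelet version, then make it schedulable again:
# systemctl daemon-reload
# systemctl start kubelet
# kubectl get node k8s-node01
# kubectl uncordon k8s-node01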

3.Calico

# curl https://projectcalico.docs.tigera.io/manifests/calico.yaml -O
# kubectl apply -f <manifest-file-name>
It is recommended to switch the calico-node DaemonSet to the OnDelete update strategy, so pods are only replaced when you delete them.
# kubectl edit ds calico-node -n kube-system
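Equivalently, the strategy can be set non-interactively with a patch (a sketch; the field path follows the apps/v1 DaemonSet spec):
# kubectl patch ds calico-node -n kube-system -p '{"spec":{"updateStrategy":{"type":"OnDelete"}}}'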
# kubectl get po -n kube-system
# kubectl get po -n kube-system -owide
# kubectl delete po calico-node-7f9vj -n kube-system
 Delete the calico-node pod running on node01.
# kubectl get po -n kube-system -owide
 The upgrade is now in progress.
# kubectl describe po calico-node-3de4d -n kube-system
  Check that the newly created calico-node pod is pulling the new image version.

(5) Kubernetes CoreDNS Upgrade

# kubectl get po -n kube-system coredns-wewee23445 -oyaml | grep image
# git clone https://github.com/coredns/deployment.git
# cd deployment/kubernetes
# kubectl get cm,deploy -n kube-system
# kubectl get cm coredns -n kube-system -oyaml > /home/bak/coredns-cm.yaml
# kubectl get deploy coredns -n kube-system -oyaml > /home/bak/coredns-deploy.yaml
# ./deploy.sh -s
# kubectl get clusterrole system:coredns -oyaml > /home/bak/coredns-clusterrole.yaml
# kubectl get clusterrolebinding system:coredns -oyaml > /home/bak/coredns-clusterrolebinding.yaml
# ./deploy.sh -s | kubectl apply -f -
# kubectl logs -f coredns-edewdwe-34fcd -n kube-system
# kubectl delete --namespace=kube-system deployment kube-dns
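
To confirm the rollout, check that the Deployment converges and that the pods now run the new image:
# kubectl rollout status deployment coredns -n kube-system
# kubectl get deploy coredns -n kube-system -oyaml | grep image: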

Test

# kubectl get po
# kubectl exec -ti nginx-edwdesdwes -- bash
# curl kubernetes:443
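
A throwaway busybox pod is another common way to exercise the upgraded CoreDNS end to end (pod name and image tag are just examples):
# kubectl run -it --rm dns-test --image=busybox:1.28 --restart=Never -- nslookup kubernetes.default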