Upgrading Kubernetes from v1.19.4 to v1.20.0
2020-12-15
jinbulee
Reference:
https://kubernetes.io/zh/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/
Image download reference:
https://blog.51cto.com/14268033/2457006
1. First, refresh the yum cache
yum clean all
yum makecache
2. Use yum list to check the latest stable version
yum list --showduplicates kubeadm --disableexcludes=kubernetes
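The output lists all kubeadm builds available from the repository; pick the newest 1.20 entry. An abbreviated, illustrative excerpt (exact build numbers and architecture depend on your mirror):
kubeadm.x86_64  1.19.4-0   kubernetes
kubeadm.x86_64  1.19.5-0   kubernetes
kubeadm.x86_64  1.20.0-0   kubernetes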
3. Upgrade the master
3.1 Upgrade kubeadm
yum install -y kubeadm-1.20.0-0 --disableexcludes=kubernetes
3.2 Verify the kubeadm version
kubeadm version
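If the install succeeded, the reported GitVersion should be v1.20.0. Illustrative, trimmed output:
kubeadm version: &version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.0", ...}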
3.3 Drain the master node
kubectl drain master --ignore-daemonsets
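You can confirm the master has been cordoned before continuing. Illustrative output (the node name master comes from this cluster; the AGE and ROLES values are just examples):
kubectl get node master
NAME     STATUS                     ROLES    AGE    VERSION
master   Ready,SchedulingDisabled   master   100d   v1.19.4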
3.4 Then run
sudo kubeadm upgrade plan
The output looks like this:
[root@master ~]# sudo kubeadm upgrade plan
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks.
[upgrade] Running cluster health checks
[upgrade] Fetching available versions to upgrade to
[upgrade/versions] Cluster version: v1.19.4
[upgrade/versions] kubeadm version: v1.20.0
[upgrade/versions] Latest stable version: v1.20.0
[upgrade/versions] Latest stable version: v1.20.0
[upgrade/versions] Latest version in the v1.19 series: v1.19.5
[upgrade/versions] Latest version in the v1.19 series: v1.19.5
Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT CURRENT AVAILABLE
kubelet 3 x v1.19.4 v1.19.5
Upgrade to the latest version in the v1.19 series:
COMPONENT CURRENT AVAILABLE
kube-apiserver v1.19.4 v1.19.5
kube-controller-manager v1.19.4 v1.19.5
kube-scheduler v1.19.4 v1.19.5
kube-proxy v1.19.4 v1.19.5
CoreDNS 1.7.0 1.7.0
etcd 3.4.13-0 3.4.9-1
You can now apply the upgrade by executing the following command:
kubeadm upgrade apply v1.19.5
_____________________________________________________________________
Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT CURRENT AVAILABLE
kubelet 3 x v1.19.4 v1.20.0
Upgrade to the latest stable version:
COMPONENT CURRENT AVAILABLE
kube-apiserver v1.19.4 v1.20.0
kube-controller-manager v1.19.4 v1.20.0
kube-scheduler v1.19.4 v1.20.0
kube-proxy v1.19.4 v1.20.0
CoreDNS 1.7.0 1.7.0
etcd 3.4.13-0 3.4.13-0
You can now apply the upgrade by executing the following command:
kubeadm upgrade apply v1.20.0
_____________________________________________________________________
The table below shows the current state of component configs as understood by this version of kubeadm.
Configs that have a "yes" mark in the "MANUAL UPGRADE REQUIRED" column require manual config upgrade or
resetting to kubeadm defaults before a successful upgrade can be performed. The version to manually
upgrade to is denoted in the "PREFERRED VERSION" column.
API GROUP CURRENT VERSION PREFERRED VERSION MANUAL UPGRADE REQUIRED
kubeproxy.config.k8s.io v1alpha1 v1alpha1 no
kubelet.config.k8s.io v1beta1 v1beta1 no
_____________________________________________________________________
[root@master ~]#
3.5 Choose the version to upgrade to and run the following command
kubeadm upgrade apply v1.20.0
kubeadm will report that it cannot pull the images from the registry that was used when the cluster was first initialized with kubeadm init:
3.6 Pull the images manually, then re-tag them
# Pull the images required for the upgrade
kubeadm config images list \
| sed -e 's/^/docker pull /g' -e 's#k8s.gcr.io#registry.cn-hangzhou.aliyuncs.com/google_containers#g' | sh -x
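For reference, the pipeline above turns each image listed by kubeadm config images list into a docker pull from the Aliyun mirror. The generated commands look like the following (list abbreviated; the exact image names and tags come from kubeadm config images list for v1.20.0):
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.20.0
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.20.0
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.20.0
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.20.0
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.4.13-0
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.7.0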
3.7 Re-tag the images with the official k8s.gcr.io names
docker images | grep registry.cn-hangzhou.aliyuncs.com/google_containers | awk '{print "docker tag",$1":"$2,$1":"$2}' | sed -e 's/registry.cn-hangzhou.aliyuncs.com\/google_containers/k8s.gcr.io/2' | sh -x
3.8 Remove the original (Aliyun-tagged) images
docker images | grep registry.cn-hangzhou.aliyuncs.com/google_containers | awk '{print "docker rmi """$1""":"""$2}' | sh -x
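Taking kube-apiserver as an example, steps 3.7 and 3.8 together expand to commands like:
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.20.0 k8s.gcr.io/kube-apiserver:v1.20.0
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.20.0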
3.9 Then run the upgrade again
kubeadm upgrade apply v1.20.0
This time the upgrade should complete successfully.
3.10 Upgrade any other master (additional control-plane node)
kubeadm upgrade node
3.11 Install kubectl and kubelet
yum install -y kubelet-1.20.0-0 kubectl-1.20.0-0 --disableexcludes=kubernetes
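A quick sanity check that the new binaries are in place (illustrative, trimmed output):
kubelet --version
Kubernetes v1.20.0
kubectl version --client
Client Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.0", ...}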
3.12 Restart kubelet
sudo systemctl daemon-reload
sudo systemctl restart kubelet
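The master was drained in step 3.3, so remember to make it schedulable again once its upgrade is finished:
kubectl uncordon master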
Now run kubectl get nodes: the master reports version v1.20.0, while the worker nodes are still at v1.19.4.
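Illustrative output at this point (node names master, node1 and node2 as used in this article; AGE values are examples):
kubectl get nodes
NAME     STATUS   ROLES    AGE    VERSION
master   Ready    master   100d   v1.20.0
node1    Ready    <none>   100d   v1.19.4
node2    Ready    <none>   100d   v1.19.4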
4. Upgrade the worker nodes
Upgrading node1
4.1 Drain the node (run on the master)
kubectl drain node1 --ignore-daemonsets
4.2 Upgrade the kubelet configuration (run on the worker node)
sudo kubeadm upgrade node
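Note: kubeadm itself must already be at v1.20.0 on the worker node before running kubeadm upgrade node there; if it is not, install it first, the same way as on the master:
yum install -y kubeadm-1.20.0-0 --disableexcludes=kubernetes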
4.3 Upgrade kubelet and kubectl (run on the worker node)
yum install -y kubelet-1.20.0-0 kubectl-1.20.0-0 --disableexcludes=kubernetes
4.4 Restart kubelet (run on the worker node)
sudo systemctl daemon-reload
sudo systemctl restart kubelet
4.5 Uncordon the node (run on the master)
Mark the node as schedulable again to bring it back online:
kubectl uncordon node1
Repeat the same steps for node2
1 Drain the node (run on the master)
kubectl drain node2 --ignore-daemonsets
2 Upgrade the kubelet configuration (run on the worker node)
sudo kubeadm upgrade node
3 Upgrade kubelet and kubectl (run on the worker node)
yum install -y kubelet-1.20.0-0 kubectl-1.20.0-0 --disableexcludes=kubernetes
4 Restart kubelet (run on the worker node)
sudo systemctl daemon-reload
sudo systemctl restart kubelet
5 Uncordon the node (run on the master)
Mark the node as schedulable again to bring it back online:
kubectl uncordon node2
Final verification
kubectl get nodes
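If everything went well, every node now reports v1.20.0. Illustrative output (same assumed node names and example AGE values as above):
NAME     STATUS   ROLES    AGE    VERSION
master   Ready    master   100d   v1.20.0
node1    Ready    <none>   100d   v1.20.0
node2    Ready    <none>   100d   v1.20.0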