Kubernetes 1.10 to 1.11 Upgrade Notes
2019-03-07
SetZero
If you maintain these scripts yourself, you have to see the upgrade through, however painful it gets. The current patch release of Kubernetes 1.11 is 1.11.8. Before upgrading, be sure to read the official upgrade notes, Kubernetes 1.11 - Action Required Before Upgrading, several times.
🙌 Note 🙌
- After the upgrade, CoreDNS replaces the existing kube-dns. If you do not want to use CoreDNS, see here. If you do use CoreDNS, check whether kube-dns carries any custom configuration; if so, back it up first (for example with the command shown after this list) and migrate it following the guide.
- Upgrade the Master nodes first. If a Worker node is upgraded first, that Worker will run into insufficient-permission errors (see Issues).
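If kube-dns does have custom entries (stub domains, upstream nameservers, and so on), a simple way to preserve them is to dump the ConfigMap before starting the upgrade. This is only a sketch and assumes a default kubeadm deployment where the ConfigMap lives in kube-system:
# Back up any custom kube-dns configuration before CoreDNS takes over
$ kubectl -n kube-system get configmap kube-dns -o yaml > /root/kube-dns-configmap-backup.yaml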
1. Add the Aliyun yum repository
# Add the kubernetes yum repository
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
# Refresh the yum cache
$ yum makecache fast
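Before moving on, it can be worth confirming that the 1.11.8 packages are actually visible from the new repository; this check is optional and not part of the original steps:
# List all kubeadm versions known to yum and look for 1.11.8
$ yum list --showduplicates kubeadm | grep 1.11.8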
2. Upgrade kubeadm
$ yum install -y kubeadm-1.11.8
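Once kubeadm itself is upgraded, you can optionally let it preview the upgrade on the Master (depending on your setup, upgrade plan may also need the --config flag). These checks are not part of the original procedure:
# Confirm the new kubeadm version
$ kubeadm version
# Preview which component versions the upgrade would move to
$ kubeadm upgrade plan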
3. Generate and modify the configuration files
3.1 Generate and modify the Master node configuration file
3.1.1 Migrate the Master node configuration
$ kubeadm config migrate --old-config /etc/kubernetes/kubeadm-config.yaml --new-config /etc/kubernetes/kubeadm-config-v.1.11.yaml
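If /etc/kubernetes/kubeadm-config.yaml is not present on disk, the old configuration can usually be recovered from the kubeadm-config ConfigMap that kubeadm keeps in the cluster. This is a sketch that assumes a default kubeadm setup storing the config under the MasterConfiguration key:
# Dump the stored MasterConfiguration and use it as the --old-config input
$ kubectl -n kube-system get configmap kubeadm-config -o jsonpath='{.data.MasterConfiguration}' > /etc/kubernetes/kubeadm-config.yaml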
- If no error is reported, the contents of /etc/kubernetes/kubeadm-config-v.1.11.yaml should look roughly like this:
api:
  advertiseAddress: 192.168.12.159
  bindPort: 6443
  controlPlaneEndpoint: ""
apiServerCertSANs:
- kubernetes
- kubernetes.default
- kubernetes.default.svc
- kubernetes.default.svc.cluster.local
- 10.233.0.1
- localhost
- 127.0.0.1
- clusternode4
- clusternode5
- clusternode6
- 192.168.12.159
- 192.168.12.160
- 192.168.12.161
apiServerExtraArgs:
  admission-control: Initializers,NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,ValidatingAdmissionWebhook,ResourceQuota
  allow-privileged: "true"
  apiserver-count: "3"
  insecure-bind-address: 127.0.0.1
  insecure-port: "8080"
  runtime-config: admissionregistration.k8s.io/v1alpha1
  service-node-port-range: 30000-32767
  storage-backend: etcd3
apiVersion: kubeadm.k8s.io/v1alpha2
auditPolicy:
  logDir: /var/log/kubernetes/audit
  logMaxAge: 2
  path: ""
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 0s
  usages:
  - signing
  - authentication
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
etcd:
  external:
    caFile: /etc/kubernetes/ssl/etcd/ca.pem
    certFile: /etc/kubernetes/ssl/etcd/client.pem
    endpoints:
    - https://192.168.12.159:2379
    - https://192.168.12.160:2379
    - https://192.168.12.161:2379
    keyFile: /etc/kubernetes/ssl/etcd/client-key.pem
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
kind: MasterConfiguration
kubeProxy:
  config:
    bindAddress: 0.0.0.0
    clientConnection:
      acceptContentTypes: ""
      burst: 10
      contentType: application/vnd.kubernetes.protobuf
      kubeconfig: /var/lib/kube-proxy/kubeconfig.conf
      qps: 5
    clusterCIDR: 10.233.64.0/18
    configSyncPeriod: 15m0s
    conntrack:
      max: null
      maxPerCore: 32768
      min: 131072
      tcpCloseWaitTimeout: 1h0m0s
      tcpEstablishedTimeout: 24h0m0s
    enableProfiling: false
    healthzBindAddress: 0.0.0.0:10256
    hostnameOverride: ""
    iptables:
      masqueradeAll: false
      masqueradeBit: 14
      minSyncPeriod: 0s
      syncPeriod: 30s
    ipvs:
      excludeCIDRs: null
      minSyncPeriod: 0s
      scheduler: ""
      syncPeriod: 30s
    metricsBindAddress: 127.0.0.1:10249
    mode: ""
    nodePortAddresses: null
    oomScoreAdj: -999
    portRange: ""
    resourceContainer: /kube-proxy
    udpIdleTimeout: 250ms
kubeletConfiguration:
  baseConfig:
    address: 0.0.0.0
    authentication:
      anonymous:
        enabled: false
      webhook:
        cacheTTL: 2m0s
        enabled: true
      x509:
        clientCAFile: /etc/kubernetes/pki/ca.crt
    authorization:
      mode: Webhook
      webhook:
        cacheAuthorizedTTL: 5m0s
        cacheUnauthorizedTTL: 30s
    cgroupDriver: cgroupfs
    cgroupsPerQOS: true
    clusterDNS:
    - 10.233.0.10
    clusterDomain: cluster.local
    containerLogMaxFiles: 5
    containerLogMaxSize: 10Mi
    contentType: application/vnd.kubernetes.protobuf
    cpuCFSQuota: true
    cpuManagerPolicy: none
    cpuManagerReconcilePeriod: 10s
    enableControllerAttachDetach: true
    enableDebuggingHandlers: true
    enforceNodeAllocatable:
    - pods
    eventBurst: 10
    eventRecordQPS: 5
    evictionHard:
      imagefs.available: 15%
      memory.available: 100Mi
      nodefs.available: 10%
      nodefs.inodesFree: 5%
    evictionPressureTransitionPeriod: 5m0s
    failSwapOn: true
    fileCheckFrequency: 20s
    hairpinMode: promiscuous-bridge
    healthzBindAddress: 127.0.0.1
    healthzPort: 10248
    httpCheckFrequency: 20s
    imageGCHighThresholdPercent: 85
    imageGCLowThresholdPercent: 80
    imageMinimumGCAge: 2m0s
    iptablesDropBit: 15
    iptablesMasqueradeBit: 14
    kubeAPIBurst: 10
    kubeAPIQPS: 5
    makeIPTablesUtilChains: true
    maxOpenFiles: 1000000
    maxPods: 110
    nodeStatusUpdateFrequency: 10s
    oomScoreAdj: -999
    podPidsLimit: -1
    port: 10250
    registryBurst: 10
    registryPullQPS: 5
    resolvConf: /etc/resolv.conf
    rotateCertificates: true
    runtimeRequestTimeout: 2m0s
    serializeImagePulls: true
    staticPodPath: /etc/kubernetes/manifests
    streamingConnectionIdleTimeout: 4h0m0s
    syncFrequency: 1m0s
    volumeStatsAggPeriod: 1m0s
kubernetesVersion: v1.10.12
networking:
  dnsDomain: cluster.local
  podSubnet: 10.233.64.0/18
  serviceSubnet: 10.233.0.0/18
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: clusternode4
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
unifiedControlPlaneImage: ""
3.1.2 Modify the Master node configuration file
- Replace the kubernetesVersion value with v1.11.8.
- kube-apiserver drops the admission-control flag in favour of enable-admission-plugins and disable-admission-plugins, so under apiServerExtraArgs simply rename the admission-control key to enable-admission-plugins.
- Under nodeRegistration, add the following (see the excerpt after this list):
  kubeletExtraArgs:
    pod-infra-container-image: registry.aliyuncs.com/google_containers/pause:3.1
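Putting the three edits together, the affected fragments of the Master file should end up looking roughly like this. All values are taken from the sample above and the other keys stay untouched; this is only a sketch of the intended result:
apiServerExtraArgs:
  # renamed from admission-control; the plugin list itself is unchanged
  enable-admission-plugins: Initializers,NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,ValidatingAdmissionWebhook,ResourceQuota
  # ...the remaining apiServerExtraArgs entries stay as they were...
kubernetesVersion: v1.11.8
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  kubeletExtraArgs:
    pod-infra-container-image: registry.aliyuncs.com/google_containers/pause:3.1
  name: clusternode4
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master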
3.2 Generate and modify the Worker node configuration file
3.2.1 Generate the Worker node configuration file
$ kubeadm config migrate --old-config /etc/kubernetes/kubeadm-config.yaml --new-config /etc/kubernetes/kubeadm-config-v.1.11.yaml
- If no error is reported, the contents of /etc/kubernetes/kubeadm-config-v.1.11.yaml should look roughly like this:
apiVersion: kubeadm.k8s.io/v1alpha2
caCertPath: /etc/kubernetes/pki/ca.crt
clusterName: kubernetes
discoveryFile: ""
discoveryTimeout: 5m0s
discoveryToken: abcdef.0123456789abcdef
discoveryTokenAPIServers:
- 192.168.16.188:6443
discoveryTokenUnsafeSkipCAVerification: true
kind: NodeConfiguration
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: uat05
tlsBootstrapToken: abcdef.0123456789abcdef
token: abcdef.0123456789abcdef
3.2.2 Modify the Worker node configuration file
- Under nodeRegistration, add:
  kubeletExtraArgs:
    pod-infra-container-image: registry.aliyuncs.com/google_containers/pause:3.1
- Print the default configuration with kubeadm config print-default, take the items that were not generated in the previous step, and append them to /etc/kubernetes/kubeadm-config-v.1.11.yaml. Note that these are two separate objects, so they must be joined with --- (a sketch of the combined file follows below). The defaults look like this:
api:
  advertiseAddress: 192.168.16.188
  bindPort: 6443
  controlPlaneEndpoint: ""
apiVersion: kubeadm.k8s.io/v1alpha2
auditPolicy:
  logDir: /var/log/kubernetes/audit
  logMaxAge: 2
  path: ""
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
etcd:
  local:
    dataDir: /var/lib/etcd
    image: ""
imageRepository: k8s.gcr.io
kind: MasterConfiguration
kubeProxy:
  config:
    bindAddress: 0.0.0.0
    clientConnection:
      acceptContentTypes: ""
      burst: 10
      contentType: application/vnd.kubernetes.protobuf
      kubeconfig: /var/lib/kube-proxy/kubeconfig.conf
      qps: 5
    clusterCIDR: ""
    configSyncPeriod: 15m0s
    conntrack:
      max: null
      maxPerCore: 32768
      min: 131072
      tcpCloseWaitTimeout: 1h0m0s
      tcpEstablishedTimeout: 24h0m0s
    enableProfiling: false
    healthzBindAddress: 0.0.0.0:10256
    hostnameOverride: ""
    iptables:
      masqueradeAll: false
      masqueradeBit: 14
      minSyncPeriod: 0s
      syncPeriod: 30s
    ipvs:
      excludeCIDRs: null
      minSyncPeriod: 0s
      scheduler: ""
      syncPeriod: 30s
    metricsBindAddress: 127.0.0.1:10249
    mode: ""
    nodePortAddresses: null
    oomScoreAdj: -999
    portRange: ""
    resourceContainer: /kube-proxy
    udpIdleTimeout: 250ms
kubeletConfiguration:
  baseConfig:
    address: 0.0.0.0
    authentication:
      anonymous:
        enabled: false
      webhook:
        cacheTTL: 2m0s
        enabled: true
      x509:
        clientCAFile: /etc/kubernetes/pki/ca.crt
    authorization:
      mode: Webhook
      webhook:
        cacheAuthorizedTTL: 5m0s
        cacheUnauthorizedTTL: 30s
    cgroupDriver: cgroupfs
    cgroupsPerQOS: true
    clusterDNS:
    - 10.233.0.10
    clusterDomain: cluster.local
    containerLogMaxFiles: 5
    containerLogMaxSize: 10Mi
    contentType: application/vnd.kubernetes.protobuf
    cpuCFSQuota: true
    cpuManagerPolicy: none
    cpuManagerReconcilePeriod: 10s
    enableControllerAttachDetach: true
    enableDebuggingHandlers: true
    enforceNodeAllocatable:
    - pods
    eventBurst: 10
    eventRecordQPS: 5
    evictionHard:
      imagefs.available: 15%
      memory.available: 100Mi
      nodefs.available: 10%
      nodefs.inodesFree: 5%
    evictionPressureTransitionPeriod: 5m0s
    failSwapOn: true
    fileCheckFrequency: 20s
    hairpinMode: promiscuous-bridge
    healthzBindAddress: 127.0.0.1
    healthzPort: 10248
    httpCheckFrequency: 20s
    imageGCHighThresholdPercent: 85
    imageGCLowThresholdPercent: 80
    imageMinimumGCAge: 2m0s
    iptablesDropBit: 15
    iptablesMasqueradeBit: 14
    kubeAPIBurst: 10
    kubeAPIQPS: 5
    makeIPTablesUtilChains: true
    maxOpenFiles: 1000000
    maxPods: 110
    nodeStatusUpdateFrequency: 10s
    oomScoreAdj: -999
    podPidsLimit: -1
    port: 10250
    registryBurst: 10
    registryPullQPS: 5
    resolvConf: /etc/resolv.conf
    rotateCertificates: true
    runtimeRequestTimeout: 2m0s
    serializeImagePulls: true
    staticPodPath: /etc/kubernetes/manifests
    streamingConnectionIdleTimeout: 4h0m0s
    syncFrequency: 1m0s
    volumeStatsAggPeriod: 1m0s
kubernetesVersion: v1.11.8
networking:
  dnsDomain: cluster.local
  podSubnet: 10.233.64.0/18
  serviceSubnet: 10.233.0.0/18
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: uat05
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
unifiedControlPlaneImage: ""
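For clarity, the combined Worker file is simply the two documents separated by ---. The trimmed sketch below shows only the leading keys of each document (host name uat05 and the addresses come from the samples above); the remaining keys are carried over unchanged:
apiVersion: kubeadm.k8s.io/v1alpha2
kind: NodeConfiguration
discoveryTokenAPIServers:
- 192.168.16.188:6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: uat05
  kubeletExtraArgs:
    pod-infra-container-image: registry.aliyuncs.com/google_containers/pause:3.1
# ...remaining NodeConfiguration keys as generated above...
---
apiVersion: kubeadm.k8s.io/v1alpha2
kind: MasterConfiguration
kubernetesVersion: v1.11.8
# ...remaining MasterConfiguration keys taken from kubeadm config print-default...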
4. Upgrade kubelet and kubectl
$ yum install -y kubelet-1.11.8 kubectl-1.11.8
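A quick, optional check that the new binaries are in place:
$ kubelet --version
$ kubectl version --client --short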
5. Upgrade the remaining Kubernetes components
$ kubeadm upgrade apply v1.11.8 --config=/etc/kubernetes/kubeadm-config-v.1.11.yaml -f
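Once the apply step finishes, it is worth checking that the control-plane pods came back up on the new version; an optional sanity check:
$ kubectl -n kube-system get pods -o wide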
6. Restart kubelet
$ systemctl daemon-reload
$ systemctl restart kubelet
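After the restart, kubelet should be active and the node should eventually report v1.11.8 in the VERSION column; again an optional check, not part of the original steps:
$ systemctl status kubelet
$ kubectl get nodes -o wide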
Problems encountered
- After the upgrade, Pods on some nodes could not reach Pods on other nodes through their Services. Flushing the iptables rules on every node restored connectivity; the command is iptables -F. The iptables rules are generated automatically by kube-proxy, so flushing them is safe (see the sketch below).
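A minimal sketch of that workaround, assuming kube-proxy runs as the usual kubeadm DaemonSet labelled k8s-app=kube-proxy (the label is an assumption about a default install); recreating the kube-proxy pods makes them regenerate the flushed rules right away:
# On each affected node, flush the iptables rules (they are owned by kube-proxy)
$ iptables -F
# Optionally recreate the kube-proxy pods so the rules are rebuilt immediately
$ kubectl -n kube-system delete pod -l k8s-app=kube-proxy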