Deleting a node from a k8s cluster
Step 1: Cordon the node (mark it as unschedulable)
[root@master01 ~]# kubectl cordon worker01
node/worker01 already cordoned
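After cordoning, worker01 stays in the cluster but no new Pods will be scheduled onto it; its STATUS column should now include SchedulingDisabled. A quick check:
[root@master01 ~]# kubectl get node worker01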
Step 2: Drain the node (evict its Pods)
[root@master01 ~]# kubectl drain worker01 --ignore-daemonsets
node/worker01 already cordoned
WARNING: ignoring DaemonSet-managed Pods: kube-system/kube-proxy-4s6l
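If the drain is blocked by Pods that use emptyDir volumes or are not managed by a controller, kubectl drain accepts extra flags to force the eviction. A hedged sketch (--delete-emptydir-data discards emptyDir contents and --force evicts unmanaged Pods, so use them deliberately):
[root@master01 ~]# kubectl drain worker01 --ignore-daemonsets --delete-emptydir-data --force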
Step 3: Delete the node object
[root@master01 ~]# kubectl delete node worker01
node "worker01" deleted
[root@master01 ~]# kubectl get nodes
NAME       STATUS     ROLES                  AGE     VERSION
master01   NotReady   control-plane,master   4h46m   v1.23.4
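Note that deleting the Node object does not touch worker01 itself; if its kubelet keeps running, it may re-register the node. A sketch of stopping it on the worker first (assuming a systemd-managed kubelet):
[root@worker01 ~]# systemctl stop kubelet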
Step 4: If the node needs to rejoin the cluster later
Run kubeadm reset on the node to wipe its kubeadm state:
[root@worker01 ~]# kubeadm reset
[reset] WARNING: Changes made to this host by 'kubeadm init' or 'kubeadm join' will be reverted.
[reset] Are you sure you want to proceed? [y/N]: y
[preflight] Running pre-flight checks
W0308 05:13:53.138627 9272 removeetcdmember.go:80] [reset] No kubeadm config, using etcd pod spec to get data directory
[reset] No etcd config found. Assuming external etcd
[reset] Please, manually reset etcd to prevent further issues
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
[reset] Deleting contents of stateful directories: [/var/lib/kubelet /var/lib/dockershim /var/run/kubernetes /var/lib/cni]
The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d
The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually by using the "iptables" command.
If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.
The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.
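As the reset output says, the CNI config, iptables/IPVS rules and kubeconfig are left in place. A sketch of the remaining cleanup and of re-joining the node; the placeholder address, token and hash must come from your own control plane:
[root@worker01 ~]# rm -rf /etc/cni/net.d
[root@worker01 ~]# iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
[root@worker01 ~]# ipvsadm --clear     # only if kube-proxy runs in IPVS mode
[root@worker01 ~]# rm -rf $HOME/.kube/config
Then, on master01, generate a fresh join command and run its output on worker01:
[root@master01 ~]# kubeadm token create --print-join-command
[root@worker01 ~]# kubeadm join <control-plane-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>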