Common kubeadm Operations
Contents
Renew certificates
Upgrade Kubernetes
Upgrade the master node
Upgrade the remaining nodes
Use a custom kubeadm config
Add a node
Clean up a node
Renew certificates
Renew all certificates at once with the following command:
[root@k8s-m01 ~]# kubeadm certs renew all
[renew] Reading configuration from the cluster...
[renew] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
W0920 20:40:32.931747 5070 configset.go:77] Warning: No kubeproxy.config.k8s.io/v1alpha1 config is loaded. Continuing without it: configmaps "kube-proxy" not found
certificate embedded in the kubeconfig file for the admin to use and for kubeadm itself renewed
certificate for serving the Kubernetes API renewed
certificate the apiserver uses to access etcd renewed
certificate for the API server to connect to kubelet renewed
certificate embedded in the kubeconfig file for the controller manager to use renewed
certificate for liveness probes to healthcheck etcd renewed
certificate for etcd nodes to communicate with each other renewed
certificate for serving etcd renewed
certificate for the front proxy client renewed
certificate embedded in the kubeconfig file for the scheduler manager to use renewed
Done renewing certificates. You must restart the kube-apiserver, kube-controller-manager, kube-scheduler and etcd, so that they can use the new certificates.
[root@k8s-m01 ~]#
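As the output says, the control-plane components must be restarted before they pick up the renewed certificates, and the admin kubeconfig on the node should be refreshed as well. One way to do this is sketched below; it assumes the default manifest directory /etc/kubernetes/manifests and a root shell on the control-plane node, so adapt it to your environment:
# Check the new expiration dates
kubeadm certs check-expiration

# Restart the control-plane static Pods by temporarily moving their manifests away
mkdir -p /etc/kubernetes/manifests.bak
mv /etc/kubernetes/manifests/*.yaml /etc/kubernetes/manifests.bak/
sleep 30   # give the kubelet time to stop the old Pods
mv /etc/kubernetes/manifests.bak/*.yaml /etc/kubernetes/manifests/

# Refresh the local admin kubeconfig with the renewed certificate
cp /etc/kubernetes/admin.conf ~/.kube/config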
Upgrade Kubernetes
Upgrade the master node
Reference: https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/
# List the available kubeadm versions
yum list --showduplicates kubeadm --disableexcludes=kubernetes
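# Note: the versionlock commands used below are provided by the
# yum-plugin-versionlock package; if the subcommand is not available,
# install the plugin first (assumes a CentOS/RHEL host using yum)
yum install -y yum-plugin-versionlock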
# Install the new kubeadm version and verify it
yum versionlock del kubeadm
yum install -y kubeadm-1.21.9-0 --disableexcludes=kubernetes
kubeadm version
yum versionlock add kubeadm
# Review the upgrade plan and make sure the target version is correct (it should match the kubeadm version)
kubeadm upgrade plan
[root@k8s-m01 ~]# kubeadm upgrade plan
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks.
[upgrade] Running cluster health checks
[upgrade] Fetching available versions to upgrade to
[upgrade/versions] Cluster version: v1.21.7
[upgrade/versions] kubeadm version: v1.21.9
W0921 12:38:49.345889 7397 version.go:102] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable.txt": Get "https://cdn.dl.k8s.io/release/stable.txt": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
W0921 12:38:49.346004 7397 version.go:103] falling back to the local client version: v1.21.9
[upgrade/versions] Target version: v1.21.9
W0921 12:38:59.362137 7397 version.go:102] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.21.txt": Get "https://cdn.dl.k8s.io/release/stable-1.21.txt": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
W0921 12:38:59.362174 7397 version.go:103] falling back to the local client version: v1.21.9
[upgrade/versions] Latest version in the v1.21 series: v1.21.9
W0921 12:38:59.707062 7397 configset.go:77] Warning: No kubeproxy.config.k8s.io/v1alpha1 config is loaded. Continuing without it: configmaps "kube-proxy" not found
Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT   CURRENT       TARGET
kubelet     4 x v1.21.7   v1.21.9
Upgrade to the latest version in the v1.21 series:
COMPONENT                 CURRENT    TARGET
kube-apiserver            v1.21.7    v1.21.9
kube-controller-manager   v1.21.7    v1.21.9
kube-scheduler            v1.21.7    v1.21.9
kube-proxy                v1.21.7    v1.21.9
CoreDNS                   v1.8.0     v1.8.0
etcd                      3.4.13-0   3.4.13-0
You can now apply the upgrade by executing the following command:
kubeadm upgrade apply v1.21.9
_____________________________________________________________________
The table below shows the current state of component configs as understood by this version of kubeadm.
Configs that have a "yes" mark in the "MANUAL UPGRADE REQUIRED" column require manual config upgrade or
resetting to kubeadm defaults before a successful upgrade can be performed. The version to manually
upgrade to is denoted in the "PREFERRED VERSION" column.
API GROUP                 CURRENT VERSION   PREFERRED VERSION   MANUAL UPGRADE REQUIRED
kubeproxy.config.k8s.io   -                 v1alpha1            no
kubelet.config.k8s.io     v1beta1           v1beta1             no
_____________________________________________________________________
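Optionally, the required control-plane images can be pulled ahead of time so the apply step spends less time waiting, as the apply log below also suggests. A sketch, reusing the imageRepository shown in the custom-config section later in this document:
kubeadm config images pull --kubernetes-version v1.21.9 \
    --image-repository registry.cn-hangzhou.aliyuncs.com/google_containers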
Start the upgrade:
kubeadm upgrade apply v1.21.9
Relevant upgrade logs:
[root@k8s-m01 ~]# kubeadm upgrade apply v1.21.9
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
W0921 12:41:23.169228 8294 configset.go:77] Warning: No kubeproxy.config.k8s.io/v1alpha1 config is loaded. Continuing without it: configmaps "kube-proxy" not found
[preflight] Running pre-flight checks.
[upgrade] Running cluster health checks
[upgrade/version] You have chosen to change the cluster version to "v1.21.9"
[upgrade/versions] Cluster version: v1.21.7
[upgrade/versions] kubeadm version: v1.21.9
[upgrade/confirm] Are you sure you want to proceed with the upgrade? [y/N]: y
[upgrade/prepull] Pulling images required for setting up a Kubernetes cluster
[upgrade/prepull] This might take a minute or two, depending on the speed of your internet connection
[upgrade/prepull] You can also perform this action in beforehand using 'kubeadm config images pull'
[upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.21.9"...
Static pod: kube-apiserver-k8s-m01 hash: 03bdffad3f2449ff6607499dec09fbfe
Static pod: kube-controller-manager-k8s-m01 hash: 7831462d00b8a96ac4f2b00587ab197a
Static pod: kube-scheduler-k8s-m01 hash: 5d3751370e5d195bf49cd3d26dd73873
[upgrade/etcd] Upgrading to TLS for etcd
Static pod: etcd-k8s-m01 hash: f846fe3975899cd607a67d6ede4f42fd
[upgrade/staticpods] Preparing for "etcd" upgrade
[upgrade/staticpods] Current and new manifests of etcd are equal, skipping upgrade
[upgrade/etcd] Waiting for etcd to become available
[upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests275496096"
[upgrade/staticpods] Preparing for "kube-apiserver" upgrade
[upgrade/staticpods] Renewing apiserver certificate
[upgrade/staticpods] Renewing apiserver-kubelet-client certificate
[upgrade/staticpods] Renewing front-proxy-client certificate
[upgrade/staticpods] Renewing apiserver-etcd-client certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2023-09-21-12-41-52/kube-apiserver.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-apiserver-k8s-m01 hash: 03bdffad3f2449ff6607499dec09fbfe
Static pod: kube-apiserver-k8s-m01 hash: 03bdffad3f2449ff6607499dec09fbfe
Static pod: kube-apiserver-k8s-m01 hash: 03bdffad3f2449ff6607499dec09fbfe
Static pod: kube-apiserver-k8s-m01 hash: 9e78ffe647bed851b4d156c343563f79
[apiclient] Found 1 Pods for label selector component=kube-apiserver
[upgrade/staticpods] Component "kube-apiserver" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-controller-manager" upgrade
[upgrade/staticpods] Renewing controller-manager.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2023-09-21-12-41-52/kube-controller-manager.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-controller-manager-k8s-m01 hash: 7831462d00b8a96ac4f2b00587ab197a
...
Static pod: kube-controller-manager-k8s-m01 hash: a5876789b084c46842004f9f9f098983
[apiclient] Found 1 Pods for label selector component=kube-controller-manager
[upgrade/staticpods] Component "kube-controller-manager" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-scheduler" upgrade
[upgrade/staticpods] Renewing scheduler.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2023-09-21-12-41-52/kube-scheduler.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-scheduler-k8s-m01 hash: 5d3751370e5d195bf49cd3d26dd73873
...
Static pod: kube-scheduler-k8s-m01 hash: b21794d83b7d1052cbbc550e81d02817
[apiclient] Found 1 Pods for label selector component=kube-scheduler
[upgrade/staticpods] Component "kube-scheduler" upgraded successfully!
[upgrade/postupgrade] Applying label node-role.kubernetes.io/control-plane='' to Nodes with label node-role.kubernetes.io/master='' (deprecated)
[upgrade/postupgrade] Applying label node.kubernetes.io/exclude-from-external-load-balancers='' to control plane Nodes
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.21" in namespace kube-system with the configuration for the kubelets in the cluster
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[addons] Applied essential addon: CoreDNS
W0921 12:43:17.363688 8294 postupgrade.go:141] the ConfigMap "kube-proxy" in the namespace "kube-system" was not found. Assuming that kube-proxy was not deployed for this cluster. Note that once 'kubeadm upgrade apply' supports phases you will have to skip the kube-proxy upgrade manually
[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.21.9". Enjoy!
[upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.
Upgrade the kubelet on the master node
kubectl drain k8s-m01 --ignore-daemonsets
Relevant logs:
[root@k8s-m01 ~]# kubectl drain k8s-m01 --ignore-daemonsets
node/k8s-m01 cordoned
WARNING: ignoring DaemonSet-managed Pods: kube-system/cilium-cvn8r
node/k8s-m01 drained
[root@k8s-m01 ~]# kubectl get node
NAME      STATUS                     ROLES                  AGE    VERSION
k8s-m01   Ready,SchedulingDisabled   control-plane,master   261d   v1.21.7
k8s-w01   Ready                      <none>                 261d   v1.21.7
k8s-w02   Ready                      <none>                 261d   v1.21.7
k8s-w03   Ready                      <none>                 14h    v1.21.7
Upgrade kubelet and kubectl:
yum versionlock del kubectl kubelet
yum install -y kubelet-1.21.9-0 kubectl-1.21.9-0 --disableexcludes=kubernetes
# Restart the kubelet
sudo systemctl daemon-reload
sudo systemctl restart kubelet
yum versionlock add kubectl kubelet
kubectl uncordon k8s-m01
After the upgrade completes, confirm that the master node shows the new version:
[root@k8s-m01 ~]# kubectl get node
NAME      STATUS   ROLES                  AGE    VERSION
k8s-m01   Ready    control-plane,master   261d   v1.21.9
k8s-w01   Ready    <none>                 261d   v1.21.7
k8s-w02   Ready    <none>                 261d   v1.21.7
k8s-w03   Ready    <none>                 14h    v1.21.7
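As an extra sanity check after the control-plane upgrade, the version reported by the API server can also be queried directly:
kubectl version --short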
Upgrade the remaining nodes
Log in to each worker node in turn, upgrade the kubeadm package the same way as on the master, and run the following command:
kubeadm upgrade node
Relevant logs:
[root@k8s-w01 ~]# kubeadm upgrade node
[upgrade] Reading configuration from the cluster...
[upgrade] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
W0921 12:50:36.385469 14493 configset.go:77] Warning: No kubeproxy.config.k8s.io/v1alpha1 config is loaded. Continuing without it: configmaps "kube-proxy" is forbidden: User "system:node:k8s-w01" cannot get resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'k8s-w01' and this object
[preflight] Running pre-flight checks
[preflight] Skipping prepull. Not a control plane node.
[upgrade] Skipping phase. Not a control plane node.
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[upgrade] The configuration for this node was successfully updated!
[upgrade] Now you should go ahead and upgrade the kubelet package using your package manager.
On the master node, drain each worker node in turn before upgrading its kubelet:
kubectl drain k8s-w01 --ignore-daemonsets
Log in to the corresponding worker node and upgrade with the following commands:
yum versionlock del kubelet
yum install -y kubelet-1.21.9-0 --disableexcludes=kubernetes
# Restart the kubelet
sudo systemctl daemon-reload
sudo systemctl restart kubelet
yum versionlock add kubelet
Back on the master node, uncordon the node:
kubectl uncordon k8s-w01
Then repeat the steps above for the remaining worker nodes.
Finally, verify that every node has been upgraded:
[root@k8s-m01 ~]# kubectl get node
NAME      STATUS   ROLES                  AGE    VERSION
k8s-m01   Ready    control-plane,master   261d   v1.21.9
k8s-w01   Ready    <none>                 261d   v1.21.9
k8s-w02   Ready    <none>                 261d   v1.21.9
k8s-w03   Ready    <none>                 15h    v1.21.9
Use a custom kubeadm config
If you want to adjust the kubeadm configuration as part of an upgrade, you can export the current kubeadm config with the commands below, edit it, and then pass the modified file back in:
# Export the existing kubeadm-config ConfigMap to oldkubeadm.yaml
kubectl get cm -o yaml -n kube-system kubeadm-config > oldkubeadm.yaml
# Make a working copy
cp oldkubeadm.yaml newkubeadm.yaml
Edit the new file: keep only the ClusterConfiguration content from the ConfigMap's data field (remove the ConfigMap wrapper itself), then change kubernetesVersion to the target version (here v1.21.10):
# vi newkubeadm.yaml
apiServer:
  extraArgs:
    authorization-mode: Node,RBAC
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: v1.21.10
networking:
  dnsDomain: cluster.local
  podSubnet: 10.20.0.0/16
  serviceSubnet: 10.96.0.0/12
scheduler: {}
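For reference, the embedded ClusterConfiguration can also be extracted from the ConfigMap directly instead of trimming the wrapper by hand; a sketch, assuming admin access via kubectl:
kubectl get cm kubeadm-config -n kube-system \
    -o jsonpath='{.data.ClusterConfiguration}' > newkubeadm.yaml
After that, only kubernetesVersion (and any other fields you want to change) needs to be edited.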
Run kubeadm upgrade plan against the YAML file above to check the upgrade:
kubeadm upgrade plan --config=newkubeadm.yaml
[upgrade/config] Making sure the configuration is correct:
W0921 13:18:03.135784 19506 common.go:94] WARNING: Usage of the --config flag with kubeadm config types for reconfiguring the cluster during upgrade is not recommended!
[preflight] Running pre-flight checks.
[upgrade] Running cluster health checks
[upgrade] Fetching available versions to upgrade to
[upgrade/versions] Cluster version: v1.21.9
[upgrade/versions] kubeadm version: v1.21.10
[upgrade/versions] Target version: v1.21.10
[upgrade/versions] Latest version in the v1.21 series: v1.21.10
W0921 13:18:05.664970 19506 configset.go:77] Warning: No kubeproxy.config.k8s.io/v1alpha1 config is loaded. Continuing without it: configmaps "kube-proxy" not found
Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT   CURRENT       TARGET
kubelet     4 x v1.21.9   v1.21.10
Upgrade to the latest version in the v1.21 series:
COMPONENT                 CURRENT    TARGET
kube-apiserver            v1.21.9    v1.21.10
kube-controller-manager   v1.21.9    v1.21.10
kube-scheduler            v1.21.9    v1.21.10
kube-proxy                v1.21.9    v1.21.10
CoreDNS                   v1.8.0     v1.8.0
etcd                      3.4.13-0   3.4.13-0
You can now apply the upgrade by executing the following command:
kubeadm upgrade apply v1.21.10
_____________________________________________________________________
The table below shows the current state of component configs as understood by this version of kubeadm.
Configs that have a "yes" mark in the "MANUAL UPGRADE REQUIRED" column require manual config upgrade or
resetting to kubeadm defaults before a successful upgrade can be performed. The version to manually
upgrade to is denoted in the "PREFERRED VERSION" column.
API GROUP                 CURRENT VERSION   PREFERRED VERSION   MANUAL UPGRADE REQUIRED
kubeproxy.config.k8s.io   -                 v1alpha1            no
kubelet.config.k8s.io     v1beta1           v1beta1             no
Once the plan looks good, apply the upgrade with:
kubeadm upgrade apply v1.21.10 --config=newkubeadm.yaml
The remaining steps are the same as in the sections above.
Add a node
By default, the token generated by kubeadm is only valid for 24 hours. Once it has expired, new worker nodes can no longer join the cluster with the original join command, so you need to create a new token and generate a new join command, which can then be run directly on the worker:
kubeadm token create --print-join-command
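The printed command can be run as-is on the new worker; it looks roughly like the example below, where the address, token, and CA hash are placeholders rather than values from this cluster:
kubeadm join 192.168.0.10:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:<ca-cert-hash>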
Clean up a node
Uninstall Kubernetes from a node with the following commands (note that these steps do not remove leftover CNI components, so installing a different CNI afterwards may cause problems):
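A commonly used cleanup sequence looks roughly like the sketch below (the kubectl commands run on the control plane, the rest on the node being removed; <node-name> is a placeholder):
# On the control plane: drain the node and remove it from the cluster
kubectl drain <node-name> --ignore-daemonsets --delete-emptydir-data
kubectl delete node <node-name>

# On the node itself: revert the changes made by kubeadm
kubeadm reset
rm -rf /etc/cni/net.d ~/.kube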