1. Official upgrade documentation
Upgrading kubeadm clusters | Kubernetes
2. Version notes
For details, see: Version Skew Policy | Kubernetes
Kubernetes versions are expressed as x.y.z, where x is the major version, y is the minor version, and z is the patch version.
Version skew constraints:
        1. The newest and oldest kube-apiserver instances may differ by at most one minor version.
        2. kubelet must not be newer than kube-apiserver; it may be up to three minor versions older (for kubelet < 1.25, only two minor versions older). For example, with kube-apiserver at 1.29, kubelet 1.29, 1.28, 1.27, and 1.26 are supported.
        3. kube-proxy must not be newer than kube-apiserver; it may be up to three minor versions older (for kube-proxy < 1.25, only two minor versions older), and it may be up to three minor versions older or newer than the kubelet instance running alongside it (only two, for kube-proxy < 1.25).
        4. kube-controller-manager, kube-scheduler, and cloud-controller-manager must not be newer than the kube-apiserver instances they communicate with. They should match the kube-apiserver minor version, but may be up to one minor version older (to allow live upgrades).
        5. kubectl is supported within one minor version (older or newer) of kube-apiserver.
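The kubelet rule above can be sketched as a small check. This is illustrative only, not part of any Kubernetes tooling; versions are represented by the minor component of the 1.y series:

```python
def kubelet_supported(apiserver_minor: int, kubelet_minor: int) -> bool:
    """Check the kubelet vs. kube-apiserver skew rule for 1.y versions.

    kubelet must never be newer than kube-apiserver; since 1.25 it may
    be up to three minor versions older (only two before 1.25).
    """
    if kubelet_minor > apiserver_minor:
        return False
    max_skew = 3 if kubelet_minor >= 25 else 2
    return apiserver_minor - kubelet_minor <= max_skew

# With kube-apiserver at 1.29, kubelet 1.26 through 1.29 are supported:
print([m for m in range(24, 31) if kubelet_supported(29, m)])  # [26, 27, 28, 29]
```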
3. Overall upgrade process
3.1 Upgrade the master nodes first, then the worker nodes
3.1.1 Upgrade procedure for each component
1) Upgrade kubeadm
The Aliyun and Tsinghua apt mirrors currently only provide kubeadm up to 1.28.2, because they sync the old apt.kubernetes.io repository. You need to switch to the new community-owned package repository (pkgs.k8s.io). For how to change the source, see: Changing the Kubernetes package repository | Kubernetes
If you are using the Aliyun or Tsinghua mirror, run:
apt-cache madison kubeadm
You will see that the newest version offered is kubeadm 1.28.2, so you need to switch to the pkgs.k8s.io repository.
Download the public signing key for the Kubernetes package repositories. All repositories use the same signing key, so you can ignore the version in the URL:
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.28/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
Add the apt repository definition:
echo "deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.28/deb/ /" | sudo tee /etc/apt/sources.list.d/kubernetes.list
Run update and check the kubeadm versions available for upgrade:
apt-get update
apt-cache madison kubeadm
First, look at the upgrade plan:
kubeadm upgrade plan
The output shows that kubeadm is currently at 1.28.2; upgrade kubeadm itself to 1.28.4 first, then run the cluster upgrade.
# replace x in 1.28.x-* with the latest patch version
apt-mark unhold kubeadm && \
apt-get update && apt-get install -y kubeadm='1.28.4-1.1' && \
apt-mark hold kubeadm
2) Upgrade the master node
root@k8s-master:/etc/apt/keyrings# kubeadm upgrade apply v1.28.4
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks.
[upgrade] Running cluster health checks
[upgrade/version] You have chosen to change the cluster version to "v1.28.4"
[upgrade/versions] Cluster version: v1.28.2
[upgrade/versions] kubeadm version: v1.28.4
[upgrade] Are you sure you want to proceed? [y/N]: y
[upgrade/prepull] Pulling images required for setting up a Kubernetes cluster
[upgrade/prepull] This might take a minute or two, depending on the speed of your internet connection
[upgrade/prepull] You can also perform this action in beforehand using 'kubeadm config images pull'
[upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.28.4" (timeout: 5m0s)...
[upgrade/etcd] Upgrading to TLS for etcd
[upgrade/staticpods] Preparing for "etcd" upgrade
[upgrade/staticpods] Current and new manifests of etcd are equal, skipping upgrade
[upgrade/etcd] Waiting for etcd to become available
[upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests132025818"
[upgrade/staticpods] Preparing for "kube-apiserver" upgrade
[upgrade/staticpods] Renewing apiserver certificate
[upgrade/staticpods] Renewing apiserver-kubelet-client certificate
[upgrade/staticpods] Renewing front-proxy-client certificate
[upgrade/staticpods] Renewing apiserver-etcd-client certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2023-12-18-17-38-58/kube-apiserver.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
[apiclient] Found 1 Pods for label selector component=kube-apiserver
[upgrade/staticpods] Component "kube-apiserver" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-controller-manager" upgrade
[upgrade/staticpods] Renewing controller-manager.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2023-12-18-17-38-58/kube-controller-manager.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
[apiclient] Found 1 Pods for label selector component=kube-controller-manager
[upgrade/staticpods] Component "kube-controller-manager" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-scheduler" upgrade
[upgrade/staticpods] Renewing scheduler.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2023-12-18-17-38-58/kube-scheduler.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
[apiclient] Found 1 Pods for label selector component=kube-scheduler
[upgrade/staticpods] Component "kube-scheduler" upgraded successfully!
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upgrade] Backing up kubelet config file to /etc/kubernetes/tmp/kubeadm-kubelet-config3792099237/config.yaml
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.28.4". Enjoy!
[upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.
kubeadm upgrade apply does the following:
    Checks that your cluster is in an upgradeable state:
        the API server is reachable
        all nodes are in the Ready state
        the control plane is healthy
    Enforces the version skew policy.
    Makes sure the control plane images are available or can be pulled to the machine.
    Generates replacement configurations and/or uses user-supplied overrides if component configurations require version upgrades.
    Upgrades the control plane components, or rolls back if any of them fails to come up.
    Applies the new CoreDNS and kube-proxy manifests and makes sure all required RBAC rules are created.
    Creates new certificate and key files for the API server and backs up the old files if they are due to expire within 180 days.
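The "all nodes Ready" check in the list above can be reproduced by hand from `kubectl get nodes -o json`. A minimal sketch of that parsing; the sample JSON below is hypothetical, trimmed to the fields the function reads:

```python
import json

def all_nodes_ready(nodes_json: str) -> bool:
    """Return True if every node reports a Ready condition with status "True"."""
    for node in json.loads(nodes_json)["items"]:
        conditions = {c["type"]: c["status"] for c in node["status"]["conditions"]}
        if conditions.get("Ready") != "True":
            return False
    return True

# Hypothetical, trimmed output of: kubectl get nodes -o json
sample = json.dumps({"items": [
    {"metadata": {"name": "k8s-master"},
     "status": {"conditions": [{"type": "Ready", "status": "True"}]}},
]})
print(all_nodes_ready(sample))  # True
```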
The SUCCESS message in the output above indicates the upgrade completed.
Note: kubeadm upgrade also automatically renews the certificates that kubeadm manages on this node. To skip certificate renewal, use the flag --certificate-renewal=false.
Check certificate expiration times:
kubeadm certs check-expiration
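A quick way to sanity-check the remaining validity is to parse the timestamps that `kubeadm certs check-expiration` prints. A minimal sketch, assuming the "Dec 17, 2024 17:38 UTC" timestamp format (typical of kubeadm output, but verify against your version):

```python
from datetime import datetime, timezone

def days_until_expiry(not_after, now=None):
    """Parse an expiry timestamp in the assumed format "Dec 17, 2024 17:38 UTC"
    and return the whole number of days until it, relative to `now`
    (default: current UTC time)."""
    expiry = datetime.strptime(not_after, "%b %d, %Y %H:%M UTC").replace(tzinfo=timezone.utc)
    if now is None:
        now = datetime.now(timezone.utc)
    return (expiry - now).days

# Certificates renewed on 2023-12-18 are valid for roughly a year:
ref = datetime(2023, 12, 18, 17, 38, tzinfo=timezone.utc)
print(days_until_expiry("Dec 17, 2024 17:38 UTC", now=ref))  # 365
```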
Note:
Before v1.28, kubeadm defaulted to upgrading the addons (including CoreDNS and kube-proxy) immediately during kubeadm upgrade apply, regardless of whether other master node instances had been upgraded yet. This could cause compatibility issues. Starting with v1.28, kubeadm checks that all master node instances have been upgraded before starting to upgrade the addons. You must upgrade the master nodes sequentially, or at least ensure that the upgrade of the last master node instance does not start until all the other master node instances have finished, so the addon upgrade only runs after the last master node instance has been upgraded. To keep the old behavior, enable the UpgradeAddonsBeforeControlPlane feature gate with kubeadm upgrade apply --feature-gates=UpgradeAddonsBeforeControlPlane=true. The Kubernetes project generally does not recommend enabling this feature gate; you should instead adjust your upgrade process or cluster addons so the old behavior is not needed. The UpgradeAddonsBeforeControlPlane feature gate will be removed in a future release.
3) Upgrade the CNI plugin
Check whether your network plugin is compatible with the new version; upgrade it if not.
See the official documentation for addon details:
Installing Addons | Kubernetes
If the CNI provider runs as a DaemonSet, this step is not required on the other control plane nodes.
For example, this cluster uses Calico 3.26.3.
Check the Calico documentation:
About Calico | Calico Documentation
for the Kubernetes versions it supports.
4) Upgrade the other master nodes and the worker nodes
Use the command:
sudo kubeadm upgrade node
On the other master nodes, kubeadm upgrade node performs the following:
    Fetches the kubeadm ClusterConfiguration from the cluster.
    Optionally backs up the kube-apiserver certificates.
    Upgrades the static Pod manifests of the master node components.
    Upgrades the kubelet configuration for this node.
On worker nodes, kubeadm upgrade node performs the following:
    Fetches the kubeadm ClusterConfiguration from the cluster.
    Upgrades the kubelet configuration for this node.
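The same command thus takes a different path on a master node than on a worker node. One simple way to tell the two apart is whether the node hosts control plane static Pod manifests; the sketch below uses that manifest-path heuristic purely for illustration, and is an assumption rather than kubeadm's exact detection logic:

```python
import os
import tempfile

def is_control_plane(manifests_dir: str = "/etc/kubernetes/manifests") -> bool:
    """Treat a node as a master node if it hosts a kube-apiserver
    static Pod manifest in the (assumed default) manifests directory."""
    return os.path.exists(os.path.join(manifests_dir, "kube-apiserver.yaml"))

# Demonstration with a temporary directory standing in for the manifests dir:
with tempfile.TemporaryDirectory() as d:
    print(is_control_plane(d))  # False: no manifests -> worker node path
    open(os.path.join(d, "kube-apiserver.yaml"), "w").close()
    print(is_control_plane(d))  # True: apiserver manifest -> master node path
```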
5) Worker node upgrade
# First drain the pods off the node (this also marks the node unschedulable)
kubectl drain k8s-master --ignore-daemonsets
---
root@k8s-master:/etc/apt/keyrings# kubectl drain k8s-master --ignore-daemonsets
node/k8s-master cordoned
Warning: ignoring DaemonSet-managed Pods: calico-system/calico-node-2fcr6, calico-system/csi-node-driver-s4zvc, ingress-nginx/ingress-nginx-controller-6w5d7, kube-system/kube-proxy-x6pv5
evicting pod tigera-operator/tigera-operator-597bf4ddf6-gjthp
evicting pod default/curl-b747fd9ff-mvdtp
evicting pod calico-apiserver/calico-apiserver-7ff86ffc-b65hp
evicting pod calico-apiserver/calico-apiserver-7ff86ffc-sk6kz
evicting pod calico-system/calico-kube-controllers-6d5984f57f-rfw74
evicting pod calico-system/calico-typha-7d7d7c7d67-zmx2p
evicting pod ingress-nginx/ingress-nginx-admission-patch-xl8l2
evicting pod default/nginx-statefulset-0
evicting pod ingress-nginx/ingress-nginx-admission-create-789zw
evicting pod kube-system/coredns-66f779496c-459f4
evicting pod kube-system/coredns-66f779496c-s5blf
pod/ingress-nginx-admission-patch-xl8l2 evicted
pod/ingress-nginx-admission-create-789zw evicted
pod/tigera-operator-597bf4ddf6-gjthp evicted
I1218 17:47:25.344453 2767592 request.go:697] Waited for 1.070127575s due to client-side throttling, not priority and fairness, request: GET:https://k8s-master:6443/api/v1/namespaces/calico-apiserver/pods/calico-apiserver-7ff86ffc-b65hp
pod/nginx-statefulset-0 evicted
pod/calico-apiserver-7ff86ffc-sk6kz evicted
pod/calico-apiserver-7ff86ffc-b65hp evicted
pod/calico-kube-controllers-6d5984f57f-rfw74 evicted
pod/calico-typha-7d7d7c7d67-zmx2p evicted
pod/coredns-66f779496c-s5blf evicted
pod/coredns-66f779496c-459f4 evicted
pod/curl-b747fd9ff-mvdtp evicted
node/k8s-master drained
Now upgrade kubelet and kubectl (required on both the master and the worker nodes):
# Remove the version hold
root@k8s-master:~# apt-mark unhold kubelet kubectl
kubelet was already not on hold.
kubectl was already not on hold.
# Install 1.28.4-1.1
root@k8s-master:~# apt-get update && apt-get install -y kubelet='1.28.4-1.1' kubectl='1.28.4-1.1'
Hit:1 http://mirrors.aliyun.com/ubuntu jammy InRelease
Hit:2 http://mirrors.aliyun.com/ubuntu jammy-security InRelease
Hit:3 http://mirrors.aliyun.com/ubuntu jammy-updates InRelease
Hit:4 http://mirrors.aliyun.com/ubuntu jammy-proposed InRelease
Hit:5 http://mirrors.aliyun.com/ubuntu jammy-backports InRelease
Hit:6 https://prod-cdn.packages.k8s.io/repositories/isv:/kubernetes:/core:/stable:/v1.28/deb InRelease
Reading package lists... Done
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
The following packages will be upgraded:
kubectl kubelet
2 upgraded, 0 newly installed, 0 to remove and 193 not upgraded.
Need to get 29.8 MB of archives.
After this operation, 205 kB of additional disk space will be used.
Get:1 https://prod-cdn.packages.k8s.io/repositories/isv:/kubernetes:/core:/stable:/v1.28/deb kubectl 1.28.4-1.1 [10.3 MB]
Get:2 https://prod-cdn.packages.k8s.io/repositories/isv:/kubernetes:/core:/stable:/v1.28/deb kubelet 1.28.4-1.1 [19.5 MB]
Fetched 29.8 MB in 5s (5,638 kB/s)
(Reading database ... 89165 files and directories currently installed.)
Preparing to unpack .../kubectl_1.28.4-1.1_amd64.deb ...
Unpacking kubectl (1.28.4-1.1) over (1.28.2-00) ...
Preparing to unpack .../kubelet_1.28.4-1.1_amd64.deb ...
Unpacking kubelet (1.28.4-1.1) over (1.28.2-00) ...
Setting up kubectl (1.28.4-1.1) ...
Setting up kubelet (1.28.4-1.1) ...
Scanning processes...
Scanning linux images...
Running kernel seems to be up-to-date.
No services need to be restarted.
No containers need to be restarted.
No user sessions are running outdated binaries.
No VM guests are running outdated hypervisor (qemu) binaries on this host.
# Upgrade done; hold the package versions again
root@k8s-master:~# apt-mark hold kubelet kubectl
kubelet set on hold.
kubectl set on hold.
# Restart kubelet
root@k8s-master:~# systemctl daemon-reload
root@k8s-master:~# systemctl restart kubelet
# Uncordon the node to make it schedulable again
root@k8s-master:~# kubectl uncordon k8s-master
node/k8s-master uncordoned
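After uncordoning, it is worth confirming that every node now reports the target kubelet version. A minimal parser for `kubectl get nodes -o json` output; the sample data below is hypothetical and trimmed to the fields the function reads:

```python
import json

def node_kubelet_versions(nodes_json: str) -> dict:
    """Map node name -> kubeletVersion from 'kubectl get nodes -o json' output."""
    items = json.loads(nodes_json)["items"]
    return {n["metadata"]["name"]: n["status"]["nodeInfo"]["kubeletVersion"]
            for n in items}

# Hypothetical post-upgrade output:
sample = json.dumps({"items": [
    {"metadata": {"name": "k8s-master"},
     "status": {"nodeInfo": {"kubeletVersion": "v1.28.4"}}},
    {"metadata": {"name": "k8s-worker1"},
     "status": {"nodeInfo": {"kubeletVersion": "v1.28.4"}}},
]})
versions = node_kubelet_versions(sample)
print(all(v == "v1.28.4" for v in versions.values()))  # True
```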
The upgrade is complete.