Rapid Deployment of a Kubernetes 1.26 Cluster with kubeadm
I. Deploying the Kubernetes 1.26 Cluster
1.1 Environment Preparation for the Kubernetes 1.26 Cluster
1.1.1 Host Operating System
No. | OS and Version | Notes |
---|---|---|
1 | CentOS7u9 | |
1.1.2 Host Hardware Configuration
Requirement | CPU | Memory | Disk | Role | Hostname |
---|---|---|---|---|---|
Value | 4C | 8G | 100GB | master | k8s-master01 |
Value | 4C | 8G | 100GB | worker(node) | k8s-worker01 |
Value | 4C | 8G | 100GB | worker(node) | k8s-worker02 |
1.1.3 Host Configuration
1.1.3.1 Hostname Configuration
This deployment uses three hosts for the Kubernetes cluster: one master node named k8s-master01, and two worker nodes named k8s-worker01 and k8s-worker02.
On the master node:
# hostnamectl set-hostname k8s-master01
On the worker01 node:
# hostnamectl set-hostname k8s-worker01
On the worker02 node:
# hostnamectl set-hostname k8s-worker02
1.1.3.2 Host IP Address Configuration
The IP address of the k8s-master01 node is 192.168.10.141/24:
# vim /etc/sysconfig/network-scripts/ifcfg-ens33
TYPE="Ethernet"
PROXY_METHOD="none"
BROWSER_ONLY="no"
BOOTPROTO="none"
DEFROUTE="yes"
IPV4_FAILURE_FATAL="no"
IPV6INIT="yes"
IPV6_AUTOCONF="yes"
IPV6_DEFROUTE="yes"
IPV6_FAILURE_FATAL="no"
IPV6_ADDR_GEN_MODE="stable-privacy"
NAME="ens33"
DEVICE="ens33"
ONBOOT="yes"
IPADDR="192.168.10.141"
PREFIX="24"
GATEWAY="192.168.10.2"
DNS1="119.29.29.29"
The IP address of the k8s-worker01 node is 192.168.10.142/24:
# vim /etc/sysconfig/network-scripts/ifcfg-ens33
TYPE="Ethernet"
PROXY_METHOD="none"
BROWSER_ONLY="no"
BOOTPROTO="none"
DEFROUTE="yes"
IPV4_FAILURE_FATAL="no"
IPV6INIT="yes"
IPV6_AUTOCONF="yes"
IPV6_DEFROUTE="yes"
IPV6_FAILURE_FATAL="no"
IPV6_ADDR_GEN_MODE="stable-privacy"
NAME="ens33"
DEVICE="ens33"
ONBOOT="yes"
IPADDR="192.168.10.142"
PREFIX="24"
GATEWAY="192.168.10.2"
DNS1="119.29.29.29"
The IP address of the k8s-worker02 node is 192.168.10.143/24:
# vim /etc/sysconfig/network-scripts/ifcfg-ens33
TYPE="Ethernet"
PROXY_METHOD="none"
BROWSER_ONLY="no"
BOOTPROTO="none"
DEFROUTE="yes"
IPV4_FAILURE_FATAL="no"
IPV6INIT="yes"
IPV6_AUTOCONF="yes"
IPV6_DEFROUTE="yes"
IPV6_FAILURE_FATAL="no"
IPV6_ADDR_GEN_MODE="stable-privacy"
NAME="ens33"
DEVICE="ens33"
ONBOOT="yes"
IPADDR="192.168.10.143"
PREFIX="24"
GATEWAY="192.168.10.2"
DNS1="119.29.29.29"
1.1.3.3 Hostname and IP Address Resolution
All cluster hosts must be configured.
# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.10.141 k8s-master01
192.168.10.142 k8s-worker01
192.168.10.143 k8s-worker02
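On each host, the entries can be appended in one step; a small sketch using the addresses above:
# cat >> /etc/hosts <<EOF
192.168.10.141 k8s-master01
192.168.10.142 k8s-worker01
192.168.10.143 k8s-worker02
EOF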
1.1.3.4 Firewall Configuration
Required on all hosts.
Disable and stop the existing firewalld firewall:
# systemctl disable firewalld
# systemctl stop firewalld
# firewall-cmd --state
not running
1.1.3.5 SELinux Configuration
Required on all hosts. Changing the SELinux configuration requires an operating system reboot to become persistent.
# sed -ri 's/SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
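If you prefer not to reboot immediately, SELinux can also be switched off for the current session; a short sketch to apply and verify:
# setenforce 0                              # effective immediately, does not survive a reboot
# getenforce                                # should now report Permissive
# grep '^SELINUX=' /etc/selinux/config      # confirms the persistent setting is disabled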
1.1.3.6 Time Synchronization
Required on all hosts. On a minimally installed system, the ntpdate package must be installed first (a sketch follows the crontab entry below).
# crontab -l
0 */1 * * * /usr/sbin/ntpdate time1.aliyun.com
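On a minimal installation, a sketch of installing ntpdate and registering the cron entry shown above (time1.aliyun.com is the server used in this guide; any reachable NTP server works):
# yum -y install ntpdate
# ntpdate time1.aliyun.com                 # one-off sync to confirm the server is reachable
# (crontab -l 2>/dev/null; echo "0 */1 * * * /usr/sbin/ntpdate time1.aliyun.com") | crontab -
# crontab -l                               # the entry shown above should now be listed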
1.1.3.7 Upgrading the Operating System Kernel
Required on all hosts.
Import the elrepo GPG key:
# rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
Install the elrepo YUM repository:
# yum -y install https://www.elrepo.org/elrepo-release-7.0-4.el7.elrepo.noarch.rpm
Install the kernel-lt package (lt is the long-term support branch; ml is the mainline branch):
# yum --enablerepo="elrepo-kernel" -y install kernel-lt.x86_64
Set the default grub2 boot entry to 0:
# grub2-set-default 0
Regenerate the grub2 configuration file:
# grub2-mkconfig -o /boot/grub2/grub.cfg
After the update, reboot so that the upgraded kernel takes effect:
# reboot
After the reboot, verify that the running kernel is the upgraded version:
# uname -r
1.1.3.8 Configuring Kernel Forwarding and Bridge Filtering
Required on all hosts.
Create the bridge filtering and kernel forwarding configuration file:
# cat /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
vm.swappiness = 0
Load the br_netfilter module:
# modprobe br_netfilter
Check that the module is loaded:
# lsmod | grep br_netfilter
br_netfilter 22256 0
bridge 151336 1 br_netfilter
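The settings in /etc/sysctl.d/k8s.conf do not take effect until they are reloaded; a short sketch to apply and verify them:
# sysctl --system                          # loads every file under /etc/sysctl.d/
# sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward   # both should report 1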
1.1.3.9 Installing ipset and ipvsadm
Required on all hosts.
Install ipset and ipvsadm:
# yum -y install ipset ipvsadm
Configure how the IPVS kernel modules are loaded.
Add the modules that need to be loaded:
# cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack
EOF
Make the script executable, run it, and check that the modules are loaded:
# chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack
1.1.3.10 Disabling the Swap Partition
The change takes permanent effect after a reboot; without rebooting, swap can be turned off temporarily with swapoff -a.
To permanently disable the swap partition, comment out its entry in /etc/fstab and reboot:
# cat /etc/fstab
......
# /dev/mapper/centos-swap swap swap defaults 0 0
Add a # at the beginning of the swap line shown above (a scripted sketch follows).
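The edit can also be scripted; a sketch that comments out the swap entry, turns swap off for the current session, and verifies the result:
# sed -ri 's/^([^#].*[[:space:]]swap[[:space:]].*)$/#\1/' /etc/fstab   # comment out the active swap line
# swapoff -a                                                           # disable swap immediately
# free -m                                                              # the Swap line should now show 0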
1.2 Docker Preparation
1.2.1 Preparing the Docker YUM Repository
Use the Alibaba Cloud open source mirror site.
# wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
1.2.2 Installing Docker
# yum -y install docker-ce
1.2.3 Starting the Docker Service
# systemctl enable --now docker
1.2.4 Changing the cgroup Driver
/etc/docker/daemon.json does not exist by default and must be created.
Add the following content to /etc/docker/daemon.json:
# cat /etc/docker/daemon.json
{
"exec-opts": ["native.cgroupdriver=systemd"]
}
# systemctl restart docker
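After the restart, it is worth confirming that Docker actually picked up the systemd cgroup driver; a quick check:
# docker info 2>/dev/null | grep -i 'cgroup driver'
 Cgroup Driver: systemd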
1.2.5 Installing cri-dockerd
1.2.5.1 Preparing the Go Environment
# wget https://storage.googleapis.com/golang/getgo/installer_linux
# chmod +x ./installer_linux
# ./installer_linux
# source ~/.bash_profile
1.2.5.2 Building and Installing cri-dockerd
Clone the cri-dockerd source code:
# git clone https://github.com/Mirantis/cri-dockerd.git
Check the cloned directory:
# ls
cri-dockerd
List the directory contents:
# ls cri-dockerd/
LICENSE Makefile packaging README.md src VERSION
# cd cri-dockerd
Create the bin directory and build the cri-dockerd binary (the build runs inside the src subdirectory so that the output lands in bin/ at the repository root):
# mkdir bin
# cd src && go get && go build -o ../bin/cri-dockerd && cd ..
Create /usr/local/bin (can be skipped if it already exists):
# mkdir -p /usr/local/bin
Install cri-dockerd:
# install -o root -g root -m 0755 bin/cri-dockerd /usr/local/bin/cri-dockerd
Copy the systemd service files into /etc/systemd/system:
# cp -a packaging/systemd/* /etc/systemd/system
Point the service unit at the installed cri-dockerd binary:
# sed -i -e 's,/usr/bin/cri-dockerd,/usr/local/bin/cri-dockerd,' /etc/systemd/system/cri-docker.service
Enable and start the service:
# systemctl daemon-reload
# systemctl enable cri-docker.service
# systemctl enable --now cri-docker.socket
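Before moving on, a quick check that the units are enabled and the socket is active (the socket starts the service on demand):
# systemctl is-enabled cri-docker.service cri-docker.socket
enabled
enabled
# systemctl is-active cri-docker.socket
active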
1.3 Kubernetes 1.26.X Cluster Deployment
1.3.1 Cluster Software and Versions
 | kubeadm | kubelet | kubectl |
---|---|---|---|
Version | 1.26.X | 1.26.X | 1.26.X |
Install location | All cluster hosts | All cluster hosts | All cluster hosts |
Purpose | Initializes and manages the cluster | Receives instructions from the api-server and manages the pod lifecycle | Command-line tool for managing cluster applications |
1.3.2 Preparing the Kubernetes YUM Repository
1.3.2.1 Google YUM Repository
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
1.3.2.2 Alibaba Cloud YUM Repository
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
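Either repository definition goes into a file under /etc/yum.repos.d/; a sketch writing the Alibaba Cloud variant (the file name kubernetes.repo is arbitrary):
# cat > /etc/yum.repos.d/kubernetes.repo <<EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
# yum makecache fast        # refresh metadata so the new repository is picked up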
1.3.3 Installing the Cluster Software
Install on all cluster nodes.
Default installation (latest available version):
# yum -y install kubeadm kubelet kubectl
List the available versions:
# yum list kubeadm.x86_64 --showduplicates | sort -r
# yum list kubelet.x86_64 --showduplicates | sort -r
# yum list kubectl.x86_64 --showduplicates | sort -r
Install a specific version:
# yum -y install kubeadm-1.26.X-0 kubelet-1.26.X-0 kubectl-1.26.X-0
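For example, to pin the whole toolchain to a single release (assuming 1.26.0 is available in the repository):
# yum -y install kubeadm-1.26.0-0 kubelet-1.26.0-0 kubectl-1.26.0-0
# kubeadm version -o short        # should print the pinned version, e.g. v1.26.0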
1.3.4 Configuring kubelet
To keep the cgroup driver used by kubelet consistent with the one used by Docker, modify the following file:
# vim /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS="--cgroup-driver=systemd"
Only enable kubelet to start at boot; since no configuration file has been generated yet, it will start automatically after cluster initialization.
# systemctl enable kubelet
1.3.5 Preparing Cluster Images
A VPN can be used to download the images.
# kubeadm config images list --kubernetes-version=v1.26.X
# cat image_download.sh
#!/bin/bash
images_list='
<image list goes here>'
for i in $images_list
do
docker pull $i
done
docker save -o k8s-1-26-X.tar $images_list
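Because kubeadm can print the exact image list for the target release, the list can also be filled in automatically; a sketch (v1.26.0 is an assumed target version, adjust as needed):
#!/bin/bash
# Sketch: pull every image kubeadm needs for the chosen release and save them to a tarball.
K8S_VERSION=v1.26.0
images=$(kubeadm config images list --kubernetes-version=${K8S_VERSION})
for i in ${images}
do
  docker pull ${i}
done
docker save -o k8s-${K8S_VERSION}.tar ${images}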
1.3.6 Cluster Initialization
[root@k8s-master01 ~]# kubeadm init --kubernetes-version=v1.26.X --pod-network-cidr=10.224.0.0/16 --apiserver-advertise-address=192.168.10.141 --cri-socket unix:///var/run/cri-dockerd.sock
If the --cri-socket option is omitted, an error is reported with the following content:
Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
To see the stack trace of this error execute with --v=5 or higher
Output of the initialization process:
[init] Using Kubernetes version: v1.26.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master01 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.10.200]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master01 localhost] and IPs [192.168.10.200 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master01 localhost] and IPs [192.168.10.200 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 13.006785 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-master01 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node k8s-master01 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: 8x4o2u.hslo8xzwwlrncr8s
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.10.200:6443 --token 8x4o2u.hslo8xzwwlrncr8s \
--discovery-token-ca-cert-hash sha256:7323a8b0658fc33d89e627f078f6eb16ac94394f9a91b3335dd3ce73a3f313a0
1.3.7 Preparing the kubeconfig File for Cluster Management
[root@k8s-master01 ~]# mkdir -p $HOME/.kube
[root@k8s-master01 ~]# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8s-master01 ~]# chown $(id -u):$(id -g) $HOME/.kube/config
[root@k8s-master01 ~]# ls /root/.kube/
config
[root@k8s-master01 ~]# export KUBECONFIG=/etc/kubernetes/admin.conf
1.3.8 Preparing the Cluster Network
Deploy the cluster network with Calico.
Installation reference: https://projectcalico.docs.tigera.io/about/about-calico
1.3.8.1 Installing Calico
Download the operator manifest:
[root@k8s-master01 ~]# wget https://docs.projectcalico.org/manifests/tigera-operator.yaml
Apply the manifest to create the operator:
[root@k8s-master01 ~]# kubectl create -f tigera-operator.yaml
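Before applying the custom resources, you can confirm the operator has started (the pod name suffix will differ in your cluster):
# kubectl get pods -n tigera-operator
NAME                               READY   STATUS    RESTARTS   AGE
tigera-operator-6bbf97c9cf-xxxxx   1/1     Running   0          1m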
Install via the custom resources:
[root@k8s-master01 ~]# wget https://docs.projectcalico.org/manifests/custom-resources.yaml
Edit line 13 of the file so that the cidr matches the address range passed to kubeadm init --pod-network-cidr:
[root@k8s-master01 ~]# vim custom-resources.yaml
......
11 ipPools:
12 - blockSize: 26
13 cidr: 10.224.0.0/16
14 encapsulation: VXLANCrossSubnet
......
If nodes fail to run properly, consider adding the following to this file:
nodeAddressAutodetectionV4:
interface: ens.*
Apply the manifest:
[root@k8s-master01 ~]# kubectl apply -f custom-resources.yaml
Watch the pods in the calico-system namespace:
[root@k8s-master01 ~]# watch kubectl get pods -n calico-system
Wait until each pod has a STATUS of Running.
Remove the scheduling taints from the master node so that pods can run there (kubeadm applies the control-plane taint, and on some releases also the legacy master taint, as shown in the init output above):
[root@k8s-master01 ~]# kubectl taint nodes --all node-role.kubernetes.io/control-plane-
[root@k8s-master01 ~]# kubectl taint nodes --all node-role.kubernetes.io/master-
Once all pods are running:
[root@k8s-master01 ~]# kubectl get pods -n calico-system
NAME READY STATUS RESTARTS AGE
calico-kube-controllers-666bb9949-dzp68 1/1 Running 0 11m
calico-node-jhcf4 1/1 Running 4 11m
calico-typha-68b96d8d9c-7qfq7 1/1 Running 2 11m
Check the coredns status in the kube-system namespace; a Running status indicates that the cluster network is working.
[root@k8s-master01 ~]# kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-6d4b75cb6d-js5pl 1/1 Running 0 12h
coredns-6d4b75cb6d-zm8pc 1/1 Running 0 12h
etcd-k8s-master01 1/1 Running 0 12h
kube-apiserver-k8s-master01 1/1 Running 0 12h
kube-controller-manager-k8s-master01 1/1 Running 0 12h
kube-proxy-7nhr7 1/1 Running 0 12h
kube-proxy-fv4kr 1/1 Running 0 12h
kube-proxy-vv5vg 1/1 Running 0 12h
kube-scheduler-k8s-master01 1/1 Running 0 12h
1.3.8.2 Installing the Calico Client
Download the binary:
# curl -L https://github.com/projectcalico/calico/releases/download/v3.21.4/calicoctl-linux-amd64 -o calicoctl
Install calicoctl:
# mv calicoctl /usr/bin/
Make calicoctl executable:
# chmod +x /usr/bin/calicoctl
Verify the file after setting permissions:
# ls /usr/bin/calicoctl
/usr/bin/calicoctl
Check the calicoctl version:
# calicoctl version
Client Version: v3.21.4
Git commit: 220d04c94
Cluster Version: v3.21.4
Cluster Type: typha,kdd,k8s,operator,bgp,kubeadm
Connect to the Kubernetes cluster via ~/.kube/config and list the nodes that are already running:
# DATASTORE_TYPE=kubernetes KUBECONFIG=~/.kube/config calicoctl get nodes
NAME
k8s-master01
1.3.9 Adding Worker Nodes to the Cluster
Because the container images can be slow to download, errors may be reported, mainly that the CNI (cluster network plugin) is not ready; as long as the network works, simply wait patiently. Use the address, token, and discovery hash printed by your own kubeadm init output.
[root@k8s-worker01 ~]# kubeadm join 192.168.10.141:6443 --token 8x4o2u.hslo8xzwwlrncr8s --discovery-token-ca-cert-hash sha256:7323a8b0658fc33d89e627f078f6eb16ac94394f9a91b3335dd3ce73a3f313a0 --cri-socket unix:///var/run/cri-dockerd.sock
[root@k8s-worker02 ~]# kubeadm join 192.168.10.141:6443 --token 8x4o2u.hslo8xzwwlrncr8s \
--discovery-token-ca-cert-hash sha256:7323a8b0658fc33d89e627f078f6eb16ac94394f9a91b3335dd3ce73a3f313a0 --cri-socket unix:///var/run/cri-dockerd.sock
On the master node, check that the new nodes have been added to the Calico network:
# DATASTORE_TYPE=kubernetes KUBECONFIG=~/.kube/config calicoctl get nodes
NAME
k8s-master01
k8s-worker01
k8s-worker02
II. Verifying Cluster Availability
List all nodes:
[root@k8s-master01 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master01 Ready control-plane 12h v1.26.0
k8s-worker01 Ready <none> 12h v1.26.0
k8s-worker02 Ready <none> 12h v1.26.0
Check cluster health:
[root@k8s-master01 ~]# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME STATUS MESSAGE ERROR
controller-manager Healthy ok
scheduler Healthy ok
etcd-0 Healthy {"health":"true","reason":""}
Check the pods running in the Kubernetes cluster:
[root@k8s-master01 ~]# kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-6d4b75cb6d-js5pl 1/1 Running 0 12h
coredns-6d4b75cb6d-zm8pc 1/1 Running 0 12h
etcd-k8s-master01 1/1 Running 0 12h
kube-apiserver-k8s-master01 1/1 Running 0 12h
kube-controller-manager-k8s-master01 1/1 Running 0 12h
kube-proxy-7nhr7 1/1 Running 0 12h
kube-proxy-fv4kr 1/1 Running 0 12h
kube-proxy-vv5vg 1/1 Running 0 12h
kube-scheduler-k8s-master01 1/1 Running 0 12h
Check the pods in the calico-system namespace again.
[root@k8s-master01 ~]# kubectl get pods -n calico-system
NAME READY STATUS RESTARTS AGE
calico-kube-controllers-5b544d9b48-xgfnk 1/1 Running 0 12h
calico-node-7clf4 1/1 Running 0 12h
calico-node-cjwns 1/1 Running 0 12h
calico-node-hhr4n 1/1 Running 0 12h
calico-typha-6cb6976b97-5lnpk 1/1 Running 0 12h
calico-typha-6cb6976b97-9w9s8 1/1 Running 0 12h