Installing Kubernetes 1.26 with kubeadm on Kylin v10
Prebuilt el8 packages for kubelet/kubeadm/kubectl are hard to come by, and even if they existed, installing on an arm machine later would mean hunting for new packages all over again. After some exploration I found a build-from-source approach that works the same way on both arm and amd64.
Server layout
Server IP | Hostname | Role | Deployment notes |
---|---|---|---|
192.168.1.3 | kmaster1 | Control-plane node 1 | kubelet/kubeadm/kubectl/containerd/ipvs/golang, build the binaries, load images, load balancing |
192.168.1.4 | kmaster2 | Control-plane node 2 | kubelet/kubeadm/kubectl/containerd/ipvs, load images, load balancing |
192.168.1.5 | kmaster3 | Control-plane node 3 | kubelet/kubeadm/kubectl/containerd/ipvs, load images, load balancing |
192.168.1.6 | knode1 | Worker node 1 | kubelet/kubeadm/kubectl/containerd/ipvs, load images |
192.168.1.7 | knode2 | Worker node 2 | kubelet/kubeadm/kubectl/containerd/ipvs, load images |
192.168.1.2 | none | Control-plane VIP | none |
Initialize the servers and install IPVS (run on every control-plane and worker node)
Install IPVS
yum install -y ipset ipvsadm
Create the /etc/modules-load.d/containerd.conf config file
cat << EOF > /etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF
Run the following commands to apply the configuration
modprobe overlay
modprobe br_netfilter
Create the /etc/sysctl.d/99-kubernetes-cri.conf config file
cat << EOF > /etc/sysctl.d/99-kubernetes-cri.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
user.max_user_namespaces=28633
EOF
sysctl -p /etc/sysctl.d/99-kubernetes-cri.conf
Load the IPVS kernel modules (for kernels 4.19 and later; on older kernels, load nf_conntrack_ipv4 instead of nf_conntrack)
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack
EOF
Make the script executable, run it, and confirm the modules are loaded
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack
Disable swap, permanently
swapoff -a
sed -i "s/^[^#].*swap/#&/" /etc/fstab
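To see what that sed actually does, you can run it against a throwaway copy of a sample fstab (the device names below are made up for illustration):

```shell
# Try the fstab edit on a temporary copy; the entries are hypothetical samples
tmp=$(mktemp)
printf '%s\n' \
  'UUID=1234-abcd /    ext4 defaults 0 0' \
  '/dev/mapper/klas-swap swap swap defaults 0 0' > "$tmp"
sed -i 's/^[^#].*swap/#&/' "$tmp"   # comment out every active swap entry
result=$(cat "$tmp")
echo "$result"
rm -f "$tmp"
```

Only the swap line gets a leading `#`; the root filesystem entry is untouched.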
Configure /etc/hosts
cat >> /etc/hosts << EOF
192.168.1.3 kmaster1
192.168.1.4 kmaster2
192.168.1.5 kmaster3
192.168.1.6 knode1
192.168.1.7 knode2
EOF
Passwordless SSH login (optional)
ssh-keygen
Press Enter a few times to accept the defaults
ssh-copy-id kmaster1
Enter the password when prompted
ssh-copy-id kmaster2
ssh-copy-id kmaster3
ssh-copy-id knode1
ssh-copy-id knode2
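Optionally, the five ssh-copy-id calls can be collapsed into one loop. In this sketch, echo stands in for the real command so it is safe to run anywhere:

```shell
# Loop over every node; replace the echo with ssh-copy-id "$host" on a real machine
hosts="kmaster1 kmaster2 kmaster3 knode1 knode2"
for host in $hosts; do
  echo "ssh-copy-id $host"
done
```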
Install containerd and the CNI plugins (run on every control-plane and worker node)
#Official installation guide: https://github.com/containerd/containerd/blob/main/docs/getting-started.md
#cri-containerd download: https://github.com/containerd/containerd/releases/download/v1.6.25/cri-containerd-1.6.25-linux-amd64.tar.gz
#libseccomp download: https://github.com/opencontainers/runc/releases/download/v1.1.10/libseccomp-2.5.4.tar.gz
#gperf download: https://rpmfind.net/linux/centos/8-stream/PowerTools/x86_64/os/Packages/gperf-3.1-5.el8.x86_64.rpm
#cni download: https://github.com/containernetworking/plugins/releases/download/v1.3.0/cni-plugins-linux-amd64-v1.3.0.tgz
Install the build dependencies first; without libseccomp, runc will not run
yum install gcc gcc-c++ openssl-devel pcre-devel make autoconf -y
rpm -ivh gperf-3.1-5.el8.x86_64.rpm
tar xf libseccomp-2.5.4.tar.gz
cd libseccomp-2.5.4
./configure
make && make install
#Install cri-containerd (it bundles containerd and runc)
#Extract straight into the root directory
tar zxvf cri-containerd-1.6.25-linux-amd64.tar.gz -C /
#Generate the default config file
containerd config default > /etc/containerd/config.toml
#Adjust the defaults
sed -i 's#SystemdCgroup = false#SystemdCgroup = true#g' /etc/containerd/config.toml
sed -i 's#k8s.gcr.io/pause#registry.cn-hangzhou.aliyuncs.com/google_containers/pause#g' /etc/containerd/config.toml
sed -i 's#registry.gcr.io/pause#registry.cn-hangzhou.aliyuncs.com/google_containers/pause#g' /etc/containerd/config.toml
sed -i 's#registry.k8s.io/pause#registry.cn-hangzhou.aliyuncs.com/google_containers/pause#g' /etc/containerd/config.toml
sed -i s/pause:3.6/pause:3.9/g /etc/containerd/config.toml
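The combined effect of the pause-image edits can be checked on a throwaway snippet that mimics the relevant line of the default config.toml (the sample line below approximates what containerd 1.6 generates):

```shell
# Demonstrate the sandbox_image rewrite on a temporary sample file
tmp=$(mktemp)
echo 'sandbox_image = "registry.k8s.io/pause:3.6"' > "$tmp"
sed -i 's#registry.k8s.io/pause#registry.cn-hangzhou.aliyuncs.com/google_containers/pause#g' "$tmp"
sed -i 's/pause:3.6/pause:3.9/g' "$tmp"
result=$(cat "$tmp")
echo "$result"
rm -f "$tmp"
```

The result points the sandbox image at the Aliyun mirror with tag 3.9, matching what kubeadm 1.26 expects.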
#Start containerd
systemctl daemon-reload
systemctl start containerd
systemctl enable containerd
#Extract the CNI plugins into the default directory; if binDir was changed, find the path with: crictl info | grep binDir
mkdir -p /opt/cni/bin
tar Cxzvf /opt/cni/bin cni-plugins-linux-amd64-v1.3.0.tgz
Build the kubelet/kubeadm/kubectl binaries
Install golang (run on the first master only)
tar -xzf go1.21.1.linux-amd64.tar.gz -C /usr/local
echo 'export PATH=$PATH:/usr/local/go/bin' >> /etc/profile
source /etc/profile
Build the k8s binaries; run on the first master only, the other nodes copy them from kmaster1 afterwards
tar xf kubernetes-1.26.12.tar.gz
cd kubernetes-1.26.12
Extend the validity of the certificates kubeadm issues to 100 years
sed -i 's/365/365 * 100/g' cmd/kubeadm/app/constants/constants.go
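The constant being edited looks roughly like the line below (copied approximately from kubeadm's constants.go); running the same substitution on a sample shows the result:

```shell
# Show the certificate-validity substitution on a sample line (approximation of constants.go)
tmp=$(mktemp)
echo 'CertificateValidity = time.Hour * 24 * 365' > "$tmp"
sed -i 's/365/365 * 100/g' "$tmp"
result=$(cat "$tmp")
echo "$result"
rm -f "$tmp"
```

The lifetime becomes `time.Hour * 24 * 365 * 100`, i.e. roughly 100 years.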
Build commands; for the arm architecture use linux/arm64
KUBE_BUILD_PLATFORMS=linux/amd64 make WHAT=cmd/kubelet GOFLAGS=-v GOGCFLAGS="-N -l"
KUBE_BUILD_PLATFORMS=linux/amd64 make WHAT=cmd/kubectl GOFLAGS=-v GOGCFLAGS="-N -l"
KUBE_BUILD_PLATFORMS=linux/amd64 make WHAT=cmd/kubeadm GOFLAGS=-v GOGCFLAGS="-N -l"
cp _output/bin/kubelet /usr/bin/
cp _output/bin/kubectl /usr/bin/
cp _output/bin/kubeadm /usr/bin/
Copy to the other nodes
scp _output/bin/kube* kmaster2:/usr/bin/
scp _output/bin/kube* kmaster3:/usr/bin/
scp _output/bin/kube* knode1:/usr/bin/
scp _output/bin/kube* knode2:/usr/bin/
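Equivalently, one loop covers all four copies (echo stands in for scp here; note scp's `host:path` form):

```shell
# Copy the freshly built binaries to every other node; drop the echo to copy for real
binaries="_output/bin/kubelet _output/bin/kubectl _output/bin/kubeadm"
count=0
for host in kmaster2 kmaster3 knode1 knode2; do
  echo "scp $binaries $host:/usr/bin/"
  count=$((count + 1))
done
```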
Install kubelet and set it up as a systemd service
cat > /usr/lib/systemd/system/kubelet.service << EOF
[Unit]
Description=kubelet: The Kubernetes Node Agent
Documentation=https://kubernetes.io/docs/
Wants=network-online.target
After=network-online.target
[Service]
ExecStart=/usr/bin/kubelet
Restart=always
StartLimitInterval=0
RestartSec=10
[Install]
WantedBy=multi-user.target
EOF
mkdir /usr/lib/systemd/system/kubelet.service.d/
echo '[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
EnvironmentFile=-/etc/sysconfig/kubelet
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS' > /usr/lib/systemd/system/kubelet.service.d/10-kubeadm.conf
cat > /etc/sysconfig/kubelet << EOF
KUBELET_CGROUP_ARGS="--cgroup-driver=systemd"
KUBE_PROXY_MODE="ipvs"
EOF
systemctl daemon-reload
systemctl start kubelet
systemctl enable kubelet
Install the load balancer
Deploy keepalived + HAProxy
1. Fill in the values for your own environment, or reuse the ones here
2. The NIC is assumed to be ens33 throughout; adjust the configs below if yours differs
3. If the cluster DNS or domain changes, update kubelet-conf.yml accordingly
HA (haproxy + keepalived): skip HA entirely for a single-master cluster
First, install haproxy and keepalived on all masters
yum install haproxy keepalived -y
Generate the haproxy config for the masters; it is identical on all of them
cat << EOF | tee /etc/haproxy/haproxy.cfg
global
    log         127.0.0.1 local2
    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     4000
    user        haproxy
    group       haproxy
    daemon
defaults
    mode                tcp
    log                 global
    retries             3
    timeout connect     10s
    timeout client      1m
    timeout server      1m
frontend kubernetes
    bind *:8443
    mode tcp
    option tcplog
    default_backend kubernetes-apiserver
backend kubernetes-apiserver
    mode tcp
    balance roundrobin
    server kmaster1 192.168.1.3:6443 check maxconn 2000
    server kmaster2 192.168.1.4:6443 check maxconn 2000
    server kmaster3 192.168.1.5:6443 check maxconn 2000
EOF
Generate the keepalived config for kmaster1
cat << EOF | tee /etc/keepalived/keepalived.conf
global_defs {
    router_id LVS_DEVEL
}
vrrp_script check_haproxy {
    script "/etc/keepalived/check_haproxy.sh"
    interval 3
    fall 10
    timeout 9
    rise 2
}
vrrp_instance VI_1 {
    state MASTER                # change to BACKUP on the standby servers
    interface ens33             # change to your own interface
    virtual_router_id 51
    priority 100                # use a number below 100 on the standbys, e.g. 90, 80
    advert_int 1
    mcast_src_ip 192.168.1.3    # this node's IP
    nopreempt
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    unicast_peer {
        192.168.1.4             # the IPs of the other two masters
        192.168.1.5
    }
    virtual_ipaddress {
        192.168.1.2             # the virtual IP; choose your own
    }
    track_script {
        check_haproxy
    }
}
EOF
Generate the keepalived config for kmaster2
cat << EOF | tee /etc/keepalived/keepalived.conf
global_defs {
    router_id LVS_DEVEL_1
}
vrrp_script check_haproxy {
    script "/etc/keepalived/check_haproxy.sh"
    interval 3
    fall 10
    timeout 9
    rise 2
}
vrrp_instance VI_1 {
    state BACKUP                # BACKUP on the standby servers
    interface ens33             # change to your own interface
    virtual_router_id 51
    priority 90                 # below 100 on the standbys, e.g. 90, 80
    advert_int 1
    mcast_src_ip 192.168.1.4    # this node's IP
    nopreempt
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    unicast_peer {
        192.168.1.3             # the IPs of the other two masters
        192.168.1.5
    }
    virtual_ipaddress {
        192.168.1.2             # the virtual IP; choose your own
    }
    track_script {
        check_haproxy
    }
}
EOF
Generate the keepalived config for kmaster3
cat << EOF | tee /etc/keepalived/keepalived.conf
global_defs {
    router_id LVS_DEVEL_3
}
vrrp_script check_haproxy {
    script "/etc/keepalived/check_haproxy.sh"
    interval 3
    fall 10
    timeout 9
    rise 2
}
vrrp_instance VI_1 {
    state BACKUP                # BACKUP on the standby servers
    interface ens33             # change to your own interface
    virtual_router_id 51
    priority 80                 # below 100 on the standbys, e.g. 90, 80
    advert_int 1
    mcast_src_ip 192.168.1.5    # this node's IP
    nopreempt
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    unicast_peer {
        192.168.1.3             # the IPs of the other two masters
        192.168.1.4
    }
    virtual_ipaddress {
        192.168.1.2             # the virtual IP; choose your own
    }
    track_script {
        check_haproxy
    }
}
EOF
Add the keepalived health-check script; identical on every master
cat > /etc/keepalived/check_haproxy.sh <<EOF
#!/bin/bash
A=\`ps -C haproxy --no-header | wc -l\`
if [ \$A -eq 0 ];then
systemctl stop keepalived
fi
EOF
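The script's logic: count the running haproxy processes and, if there are none, stop keepalived so the VIP fails over to another master. A dry run with the count hard-coded to zero (on a real node, A comes from `ps -C haproxy --no-header | wc -l`):

```shell
# Dry run of the health-check decision; A=0 simulates "no haproxy process found"
A=0
if [ "$A" -eq 0 ]; then
  action="stop keepalived"   # the real script runs: systemctl stop keepalived
else
  action="keep running"
fi
echo "$action"
```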
chmod +x /etc/keepalived/check_haproxy.sh
#Start haproxy and keepalived, enabled at boot
systemctl enable --now haproxy keepalived
systemctl restart haproxy keepalived
Load the images (on every node), or use a private image registry
Method 1: pull directly with crictl.
#control-plane images
crictl pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.26.0
crictl pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.26.0
crictl pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.26.0
crictl pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.26.0
crictl pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.9
crictl pull registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.5.6-0
crictl pull registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:v1.9.3
#calico images
crictl pull registry.cn-beijing.aliyuncs.com/dotbalo/cni:v3.24.0
crictl pull registry.cn-beijing.aliyuncs.com/dotbalo/kube-controllers:v3.24.0
crictl pull registry.cn-beijing.aliyuncs.com/dotbalo/typha:v3.24.0
crictl pull registry.cn-beijing.aliyuncs.com/dotbalo/node:v3.24.0
#metrics-server image
crictl pull registry.cn-beijing.aliyuncs.com/dotbalo/metrics-server:0.6.1
Method 2: offline import; prepare the image tarballs elsewhere first, then import them
#control-plane images
ctr -n k8s.io images import coredns-v1.9.3.tar
ctr -n k8s.io images import etcd-3.5.6-0.tar
ctr -n k8s.io images import kube-apiserver-v1.26.0.tar
ctr -n k8s.io images import kube-controller-manager-v1.26.0.tar
ctr -n k8s.io images import kube-proxy-v1.26.0.tar
ctr -n k8s.io images import kube-scheduler-v1.26.0.tar
ctr -n k8s.io images import pause-3.9.tar
#calico images
ctr -n k8s.io images import cni-v3.24.0.tar
ctr -n k8s.io images import kube-controllers-v3.24.0.tar
ctr -n k8s.io images import node-v3.24.0.tar
ctr -n k8s.io images import typha-v3.24.0.tar
#metrics-server image
ctr -n k8s.io images import metrics-server-0.6.1.tar
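When there are many tarballs, a glob loop saves typing. This sketch creates two empty stand-in files in a temp directory and echoes the ctr command instead of running it:

```shell
# Import every .tar in the directory; drop the echo on a real node to run ctr for real
workdir=$(mktemp -d)
cd "$workdir"
touch pause-3.9.tar etcd-3.5.6-0.tar          # stand-ins for real image tarballs
imported=""
for tarball in *.tar; do
  echo "ctr -n k8s.io images import $tarball"
  imported="$imported $tarball"
done
cd - > /dev/null
rm -rf "$workdir"
```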
Method 3: pull the images with kubeadm. Recommended when you have network access, since the calico, metrics-server and dashboard images then need no extra manual pulling later either; with a poor network, use method 1 to download them one at a time; with no network at all, method 2 is the only option.
kubeadm config images pull --image-repository registry.cn-hangzhou.aliyuncs.com/google_containers
[root@kmatser1 ~]# kubeadm config images pull --image-repository registry.cn-hangzhou.aliyuncs.com/google_containers
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.26.0
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.26.0
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.26.0
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.26.0
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.9
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.5.6-0
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:v1.9.3
Initialize the first control-plane node
With the load balancer in place, use port 8443 and the VIP; for a single-master setup, use port 6443 and the master's IP. Run this on the first master only, and make sure the VIP currently sits on that server.
kubeadm init --control-plane-endpoint "192.168.1.2:8443" --image-repository registry.cn-hangzhou.aliyuncs.com/google_containers --cri-socket "unix:///var/run/containerd/containerd.sock" --kubernetes-version 1.26.0
Output like the following means it succeeded:
[init] Using Kubernetes version: v1.26.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kmaster1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.203.200]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [kmaster1 localhost] and IPs [192.168.1.2 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [kmaster1 localhost] and IPs [192.168.1.2 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 6.003773 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node kmaster1 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node kmaster1 as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: styztp.kt842zi3r4lc5ez8
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.1.2:8443 --token styztp.kt842zi3r4lc5ez8 \
--discovery-token-ca-cert-hash sha256:85d216d87b847ca609cd3bfe0099ff2dd776bc33ca33586db2dac354e720a80f
Copy the join command printed during init and run it on the worker nodes. The workers must already have completed the IPVS, containerd and kubelet installation steps above, or the join will fail; read carefully which servers each section of this document applies to.
kubeadm join 192.168.1.2:8443 --token styztp.kt842zi3r4lc5ez8 \
--discovery-token-ca-cert-hash sha256:85d216d87b847ca609cd3bfe0099ff2dd776bc33ca33586db2dac354e720a80f
Adding a master node: the init output above does not show how a master joins the cluster, so generate the credentials manually on the first master, then run the join on the new master.
#Generate a new token on the first master
kubeadm token create --print-join-command
#Generate the certificate key that lets a new master join
kubeadm init phase upload-certs --upload-certs
#Combine the output of the two commands above and run it on the new master
kubeadm join 192.168.1.2:8443 --token styztp.kt842zi3r4lc5ez8 \
--discovery-token-ca-cert-hash sha256:85d216d87b847ca609cd3bfe0099ff2dd776bc33ca33586db2dac354e720a80f \
--control-plane --certificate-key e799a655f667fc327ab8c91f4f2541b57b96d2693ab5af96314ebddea7a68526
Run the following on every master
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
export KUBECONFIG=/etc/kubernetes/admin.conf
Install calico
Download: https://raw.githubusercontent.com/projectcalico/calico/v3.24.0/manifests/calico.yaml
With multiple NICs, or if you hit NIC detection errors, edit calico.yaml around line 4530 and add the following parameters to pin the interface.
- name: IP_AUTODETECTION_METHOD
  value: interface=ens33
kubectl create -f calico.yaml
At this point, the k8s installation is complete.