
Installing k8s 1.26 on Kylin v10 with kubeadm



Prebuilt el8 packages for kubelet/kubeadm/kubectl are currently hard to come by, and even if they existed, a later install on an arm machine would mean hunting for packages all over again. After some digging I settled on building from source: the same procedure works on both arm and amd64.

Server layout

Server IP     Hostname   Role               Components
192.168.1.3   kmaster1   master 1           kubelet/kubeadm/kubectl/containerd/ipvs/golang; builds the binaries; loads images; load balancer
192.168.1.4   kmaster2   master 2           kubelet/kubeadm/kubectl/containerd/ipvs; loads images; load balancer
192.168.1.5   kmaster3   master 3           kubelet/kubeadm/kubectl/containerd/ipvs; loads images; load balancer
192.168.1.6   knode1     worker 1           kubelet/kubeadm/kubectl/containerd/ipvs; loads images
192.168.1.7   knode2     worker 2           kubelet/kubeadm/kubectl/containerd/ipvs; loads images
192.168.1.2   (none)     control-plane VIP  (none)

Initialize the servers and install IPVS (run on all masters and workers)

Install ipset and ipvsadm:

yum install -y ipset ipvsadm

Create the /etc/modules-load.d/containerd.conf configuration file:

cat << EOF > /etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF

Load the modules so the configuration takes effect immediately:

modprobe overlay
modprobe br_netfilter

Create the /etc/sysctl.d/99-kubernetes-cri.conf configuration file:

cat << EOF > /etc/sysctl.d/99-kubernetes-cri.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
user.max_user_namespaces=28633
EOF
sysctl -p /etc/sysctl.d/99-kubernetes-cri.conf

Load the IPVS kernel modules (the module names below are for kernel 4.19 and newer):

cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack
EOF

Make the script executable and run it:

chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack
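The trailing lsmod | grep is a visual check; the same verification can be scripted. A small sketch, with lsmod stubbed out so the loop can be exercised anywhere; delete the stub function to check a real node:

```shell
# Verify that every module loaded by ipvs.modules is actually present.
# lsmod is stubbed below purely for off-box demonstration; remove the
# stub to run this on a real node.
lsmod() { printf 'ip_vs 1\nip_vs_rr 1\nip_vs_wrr 1\nip_vs_sh 1\nnf_conntrack 1\n'; }

missing=0
for m in ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack; do
  if ! lsmod | awk '{print $1}' | grep -qx "$m"; then
    echo "missing module: $m"
    missing=1
  fi
done
[ "$missing" -eq 0 ] && echo "all ipvs modules loaded"
```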

Turn off swap now and disable it permanently:

swapoff -a
sed -i "s/^[^#].*swap/#&/" /etc/fstab
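The sed comments out any non-comment line in /etc/fstab that mentions swap. A throwaway demonstration (the device names and UUIDs below are invented):

```shell
# Demonstrate the swap-commenting sed on a throwaway fstab copy.
cat > fstab.demo << 'EOF'
UUID=aaaa-bbbb /      xfs  defaults 0 0
UUID=cccc-dddd /boot  xfs  defaults 0 0
/dev/mapper/klas-swap swap swap defaults 0 0
EOF
sed -i "s/^[^#].*swap/#&/" fstab.demo
cat fstab.demo   # the swap line is now commented out; the others are untouched
```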

Configure /etc/hosts:

cat >> /etc/hosts << EOF
192.168.1.3 kmaster1
192.168.1.4 kmaster2
192.168.1.5 kmaster3
192.168.1.6 knode1
192.168.1.7 knode2
EOF

Set up passwordless SSH between the nodes (optional):

ssh-keygen            # press Enter through all the prompts
ssh-copy-id kmaster1  # enter the password when prompted
ssh-copy-id kmaster2
ssh-copy-id kmaster3
ssh-copy-id knode1
ssh-copy-id knode2

Install containerd and the CNI plugins (run on all masters and workers)

# Official install guide: https://github.com/containerd/containerd/blob/main/docs/getting-started.md
# cri-containerd download: https://github.com/containerd/containerd/releases/download/v1.6.25/cri-containerd-1.6.25-linux-amd64.tar.gz
# libseccomp download: https://github.com/opencontainers/runc/releases/download/v1.1.10/libseccomp-2.5.4.tar.gz
# gperf download: https://rpmfind.net/linux/centos/8-stream/PowerTools/x86_64/os/Packages/gperf-3.1-5.el8.x86_64.rpm
# cni download: https://github.com/containernetworking/plugins/releases/download/v1.3.0/cni-plugins-linux-amd64-v1.3.0.tgz

Install the build dependencies and libseccomp first, otherwise runc will refuse to run:

yum install gcc gcc-c++ openssl-devel pcre-devel make autoconf -y
rpm -ivh gperf-3.1-5.el8.x86_64.rpm
tar xf libseccomp-2.5.4.tar.gz
cd libseccomp-2.5.4
./configure
make && make install

# Install cri-containerd (the bundle ships both containerd and runc)

# Extract straight into the root filesystem
tar zxvf cri-containerd-1.6.25-linux-amd64.tar.gz -C /
# Generate the default configuration file
containerd config default > /etc/containerd/config.toml
# Adjust the defaults
sed -i 's#SystemdCgroup = false#SystemdCgroup = true#g' /etc/containerd/config.toml
sed -i 's#k8s.gcr.io/pause#registry.cn-hangzhou.aliyuncs.com/google_containers/pause#g' /etc/containerd/config.toml
sed -i 's#registry.gcr.io/pause#registry.cn-hangzhou.aliyuncs.com/google_containers/pause#g' /etc/containerd/config.toml
sed -i 's#registry.k8s.io/pause#registry.cn-hangzhou.aliyuncs.com/google_containers/pause#g' /etc/containerd/config.toml
sed -i 's/pause:3.6/pause:3.9/g' /etc/containerd/config.toml
# Start containerd
systemctl daemon-reload
systemctl start containerd
systemctl enable containerd
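The sed edits can be dry-run against a minimal stand-in before touching the real file. Only the two kinds of lines the seds target are reproduced here; a real config.toml is far larger:

```shell
# Dry-run the config.toml rewrites on a minimal stand-in.
cat > config.demo.toml << 'EOF'
sandbox_image = "registry.k8s.io/pause:3.6"
SystemdCgroup = false
EOF
sed -i 's#SystemdCgroup = false#SystemdCgroup = true#g' config.demo.toml
sed -i 's#registry.k8s.io/pause#registry.cn-hangzhou.aliyuncs.com/google_containers/pause#g' config.demo.toml
sed -i 's/pause:3.6/pause:3.9/g' config.demo.toml
cat config.demo.toml
```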

# Extract the CNI plugins into the default directory; if you changed it, find the path with: crictl info | grep binDir

mkdir -p /opt/cni/bin
tar Cxzvf /opt/cni/bin cni-plugins-linux-amd64-v1.3.0.tgz

Build the kubelet/kubeadm/kubectl binaries

Install golang (first master only):

tar -xzf go1.21.1.linux-amd64.tar.gz -C /usr/local
echo 'export PATH=$PATH:/usr/local/go/bin' >> /etc/profile   # single quotes so $PATH expands at login, not now
source /etc/profile

Build the k8s binaries on the first master; the other nodes simply copy them from kmaster1:

tar xf kubernetes-1.26.12.tar.gz
cd kubernetes-1.26.12

Extend the lifetime of the certificates kubeadm generates from 1 year to 100 years:

sed -i 's/365/365 * 100/g' cmd/kubeadm/app/constants/constants.go
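The sed replaces every 365 in that file; in the 1.26 sources the occurrence that matters is the certificate-validity constant. Its effect, shown on a stand-in line of the same shape:

```shell
# Show the certificate-lifetime sed on a stand-in line shaped like the
# relevant one in cmd/kubeadm/app/constants/constants.go.
echo 'CertificateValidity = time.Hour * 24 * 365' > constants.demo
sed -i 's/365/365 * 100/g' constants.demo
cat constants.demo
```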

Build commands (for arm, use linux/arm64 instead):

KUBE_BUILD_PLATFORMS=linux/amd64 make WHAT=cmd/kubelet GOFLAGS=-v GOGCFLAGS="-N -l"
KUBE_BUILD_PLATFORMS=linux/amd64 make WHAT=cmd/kubectl GOFLAGS=-v GOGCFLAGS="-N -l"
KUBE_BUILD_PLATFORMS=linux/amd64 make WHAT=cmd/kubeadm GOFLAGS=-v GOGCFLAGS="-N -l"
cp _output/bin/kubelet /usr/bin/
cp _output/bin/kubectl /usr/bin/
cp _output/bin/kubeadm /usr/bin/

Copy the binaries to the other nodes:

scp _output/bin/kube* root@kmaster2:/usr/bin/
scp _output/bin/kube* root@kmaster3:/usr/bin/
scp _output/bin/kube* root@knode1:/usr/bin/
scp _output/bin/kube* root@knode2:/usr/bin/
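The per-node copies can be folded into a loop. The sketch below is a dry run that only prints the scp commands (note scp's host:/path target syntax; copying as root@ is an assumption, adjust to your setup):

```shell
# Dry run: print one scp command per remaining node instead of executing it.
out=$(for node in kmaster2 kmaster3 knode1 knode2; do
  echo "scp _output/bin/kubelet _output/bin/kubectl _output/bin/kubeadm root@${node}:/usr/bin/"
done)
printf '%s\n' "$out"
```

Pipe the printed lines to sh to actually run them.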

Install kubelet and register it as a systemd service

cat > /usr/lib/systemd/system/kubelet.service << EOF
[Unit]
Description=kubelet: The Kubernetes Node Agent
Documentation=https://kubernetes.io/docs/
Wants=network-online.target
After=network-online.target

[Service]
ExecStart=/usr/bin/kubelet
Restart=always
StartLimitInterval=0
RestartSec=10

[Install]
WantedBy=multi-user.target
EOF
mkdir -p /usr/lib/systemd/system/kubelet.service.d/

echo '[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
EnvironmentFile=-/etc/sysconfig/kubelet
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS' > /usr/lib/systemd/system/kubelet.service.d/10-kubeadm.conf

cat > /etc/sysconfig/kubelet << EOF
KUBELET_CGROUP_ARGS="--cgroup-driver=systemd"
KUBE_PROXY_MODE="ipvs"
EOF
systemctl daemon-reload
systemctl start kubelet
systemctl enable kubelet

Install the load balancer

Deploy keepalived + HAProxy.
1. Fill in the values for your own environment, or copy mine as-is.
2. The NIC is assumed to be named ens33 everywhere; adjust the configs below if yours differs.
3. If the cluster DNS or domain changes, kubelet-conf.yml must be changed to match.
With a single master, skip HA (haproxy+keepalived) altogether.

First, install haproxy and keepalived on all masters:

yum install haproxy keepalived -y

Generate the haproxy configuration file, identical on every master:

cat << EOF | tee /etc/haproxy/haproxy.cfg
global
  log     127.0.0.1 local2
  chroot   /var/lib/haproxy
  pidfile   /var/run/haproxy.pid
  maxconn   4000
  user    haproxy
  group    haproxy
  daemon

defaults
  mode          tcp
  log           global
  retries         3
  timeout connect     10s
  timeout client     1m
  timeout server     1m

frontend kubernetes
  bind *:8443
  mode tcp
  option tcplog
  default_backend kubernetes-apiserver

backend kubernetes-apiserver
  mode tcp
  balance roundrobin
  server kmaster1 192.168.1.3:6443 check maxconn 2000
  server kmaster2 192.168.1.4:6443 check maxconn 2000
  server kmaster3 192.168.1.5:6443 check maxconn 2000
EOF

Generate the keepalived configuration file for kmaster1:

cat << EOF | tee /etc/keepalived/keepalived.conf
global_defs {
  router_id LVS_DEVEL
}

vrrp_script check_haproxy {
  script "/etc/keepalived/check_haproxy.sh"
  interval 3
  fall 10
  timeout 9
  rise 2
}
vrrp_instance VI_1 {
  state MASTER     # BACKUP on the standby servers
  interface ens33    # change to your NIC
  virtual_router_id 51
  priority 100     # lower on the standbys, e.g. 90, 80
  advert_int 1
  mcast_src_ip 192.168.1.3   # this node's IP
  nopreempt
  authentication {
    auth_type PASS
    auth_pass 1111
  }
  unicast_peer {
    192.168.1.4    # the other two masters' IPs
    192.168.1.5
  }
  virtual_ipaddress {
    192.168.1.2     # the virtual IP (VIP), pick your own
  }
  track_script {
    check_haproxy
  }
}
EOF

Generate the keepalived configuration file for kmaster2:

cat << EOF | tee /etc/keepalived/keepalived.conf
global_defs {
  router_id LVS_DEVEL_1
}

vrrp_script check_haproxy {
  script "/etc/keepalived/check_haproxy.sh"
  interval 3
  fall 10
  timeout 9
  rise 2
}
vrrp_instance VI_1 {
  state BACKUP     # this node is a standby
  interface ens33    # change to your NIC
  virtual_router_id 51
  priority 90     # lower than the MASTER's 100
  advert_int 1
  mcast_src_ip 192.168.1.4  # this node's IP
  nopreempt
  authentication {
    auth_type PASS
    auth_pass 1111
  }
  unicast_peer {
    192.168.1.3    # the other two masters' IPs
    192.168.1.5
  }
  virtual_ipaddress {
    192.168.1.2     # the virtual IP (VIP), pick your own
  }
  track_script {
    check_haproxy
  }
}
EOF

Generate the keepalived configuration file for kmaster3:

cat << EOF | tee /etc/keepalived/keepalived.conf
global_defs {
  router_id LVS_DEVEL_3
}

vrrp_script check_haproxy {
  script "/etc/keepalived/check_haproxy.sh"
  interval 3
  fall 10
  timeout 9
  rise 2
}
vrrp_instance VI_1 {
  state BACKUP     # this node is a standby
  interface ens33    # change to your NIC
  virtual_router_id 51
  priority 80     # lower than the MASTER's 100
  advert_int 1
  mcast_src_ip 192.168.1.5  # this node's IP
  nopreempt
  authentication {
    auth_type PASS
    auth_pass 1111
  }
  unicast_peer {
    192.168.1.3    # the other two masters' IPs
    192.168.1.4
  }
  virtual_ipaddress {
    192.168.1.2     # the virtual IP (VIP), pick your own
  }
  track_script {
    check_haproxy
  }
}
EOF

Add the keepalived health-check script, identical on every master:

cat > /etc/keepalived/check_haproxy.sh <<EOF
#!/bin/bash
A=\`ps -C haproxy --no-header | wc -l\`
if [ \$A -eq 0 ];then
systemctl stop keepalived
fi
EOF
chmod +x /etc/keepalived/check_haproxy.sh
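The script's intent: when no haproxy process remains, stop keepalived so the VIP fails over to a healthy master. The same logic is shown below with ps and systemctl stubbed out, purely to demonstrate the failover behaviour off-box:

```shell
# check_haproxy.sh logic wrapped in a function, with the two external
# commands stubbed so it can be exercised anywhere.
check_haproxy() {
  n=$(ps -C haproxy --no-header | wc -l)
  if [ "$n" -eq 0 ]; then
    systemctl stop keepalived
  fi
}
ps() { :; }                                  # stub: pretend haproxy is down
systemctl() { echo "stub: systemctl $*"; }   # stub: record the call instead
result=$(check_haproxy)
echo "$result"
```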

# Start haproxy and keepalived and enable them at boot

systemctl enable --now haproxy keepalived
systemctl restart haproxy keepalived

Load the images (on every node), or use a private image registry

Option 1: pull the images directly.

# control-plane images
crictl pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.26.0
crictl pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.26.0
crictl pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.26.0
crictl pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.26.0
crictl pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.9
crictl pull registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.5.6-0
crictl pull registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:v1.9.3
# calico images
crictl pull registry.cn-beijing.aliyuncs.com/dotbalo/cni:v3.24.0
crictl pull registry.cn-beijing.aliyuncs.com/dotbalo/kube-controllers:v3.24.0
crictl pull registry.cn-beijing.aliyuncs.com/dotbalo/typha:v3.24.0
crictl pull registry.cn-beijing.aliyuncs.com/dotbalo/node:v3.24.0
# metrics-server image
crictl pull registry.cn-beijing.aliyuncs.com/dotbalo/metrics-server:0.6.1

Option 2: offline import. Prepare the image tarballs somewhere with internet access, then import them:

# control-plane images
ctr -n k8s.io images import coredns-v1.9.3.tar
ctr -n k8s.io images import etcd-3.5.6-0.tar
ctr -n k8s.io images import kube-apiserver-v1.26.0.tar
ctr -n k8s.io images import kube-controller-manager-v1.26.0.tar
ctr -n k8s.io images import kube-proxy-v1.26.0.tar
ctr -n k8s.io images import kube-scheduler-v1.26.0.tar
ctr -n k8s.io images import pause-3.9.tar
# calico images
ctr -n k8s.io images import cni-v3.24.0.tar
ctr -n k8s.io images import kube-controllers-v3.24.0.tar
ctr -n k8s.io images import node-v3.24.0.tar
ctr -n k8s.io images import typha-v3.24.0.tar
# metrics-server image
ctr -n k8s.io images import metrics-server-0.6.1.tar

Option 3: pull with kubeadm. Recommended whenever the nodes have internet access, since the calico, metrics-server, and dashboard images will then also be pulled automatically at deploy time. On a poor network, use option 1 to download image by image; with no network at all, option 2 is the only choice.

kubeadm config images pull --image-repository registry.cn-hangzhou.aliyuncs.com/google_containers

[root@kmaster1 ~]# kubeadm config images pull --image-repository registry.cn-hangzhou.aliyuncs.com/google_containers
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.26.0
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.26.0
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.26.0
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.26.0
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.9
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.5.6-0
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:v1.9.3

Initialize the cluster control plane

With the load balancer installed, use port 8443 and the VIP; for a single-master setup, use port 6443 and the master's own IP. Run the command on the first master only, and make sure the VIP currently resides on that server.

kubeadm init --apiserver-advertise-address 192.168.1.2 --apiserver-bind-port 8443 --image-repository registry.cn-hangzhou.aliyuncs.com/google_containers --cri-socket "unix:///var/run/containerd/containerd.sock" --kubernetes-version 1.26.0

Output like the following means it succeeded:

[init] Using Kubernetes version: v1.26.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kmaster1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.1.2]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [kmaster1 localhost] and IPs [192.168.1.2 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [kmaster1 localhost] and IPs [192.168.1.2 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 6.003773 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node kmaster1 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node kmaster1 as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: styztp.kt842zi3r4lc5ez8
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.1.2:8443 --token styztp.kt842zi3r4lc5ez8 \
	--discovery-token-ca-cert-hash sha256:85d216d87b847ca609cd3bfe0099ff2dd776bc33ca33586db2dac354e720a80f

復(fù)制初始化打印出來(lái)的命令,到node節(jié)點(diǎn)去執(zhí)行,nide需要完成上述ipvs安裝、containerd安裝、kubelet安裝等。否則會(huì)失敗,仔細(xì)看文檔里面提到的需要在哪些服務(wù)器執(zhí)行。

kubeadm join 192.168.1.2:8443 --token styztp.kt842zi3r4lc5ez8 \
	--discovery-token-ca-cert-hash sha256:85d216d87b847ca609cd3bfe0099ff2dd776bc33ca33586db2dac354e720a80f
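If the token or hash from the init output is lost, a fresh join command comes from `kubeadm token create --print-join-command`, and the --discovery-token-ca-cert-hash value can be recomputed from the cluster CA at /etc/kubernetes/pki/ca.crt: it is the SHA-256 of the CA certificate's DER-encoded public key. The pipeline below demonstrates this against a throwaway self-signed certificate, so the command shape can be verified on any machine with openssl:

```shell
# Recompute a kubeadm discovery hash. A throwaway self-signed cert stands
# in for /etc/kubernetes/pki/ca.crt here; substitute the real path on a master.
openssl req -x509 -newkey rsa:2048 -nodes -keyout ca.demo.key \
  -out ca.demo.crt -days 1 -subj "/CN=demo-ca" 2>/dev/null
hash=$(openssl x509 -pubkey -in ca.demo.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex \
  | sed 's/^.* //')
echo "sha256:${hash}"
```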

Adding another master: the init output does not include a control-plane join command, so generate one manually on the first master, then run it on the new master.

# Generate a new token on the first master
kubeadm token create --print-join-command
# Generate the certificate key needed for a new master to join
kubeadm init phase upload-certs --upload-certs
# Combine the output of the two commands above and run the result on the new master
kubeadm join 192.168.1.2:8443 --token styztp.kt842zi3r4lc5ez8 \
	--discovery-token-ca-cert-hash sha256:85d216d87b847ca609cd3bfe0099ff2dd776bc33ca33586db2dac354e720a80f \
	--control-plane --certificate-key e799a655f667fc327ab8c91f4f2541b57b96d2693ab5af96314ebddea7a68526

Run the following on every master:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
export KUBECONFIG=/etc/kubernetes/admin.conf

Install calico

Download: https://raw.githubusercontent.com/projectcalico/calico/v3.24.0/manifests/calico.yaml
If the host has multiple NICs, or calico reports interface-related errors, edit calico.yaml (around line 4530 in this version) and add the following to the calico-node container's env list to pin the NIC:
            - name: IP_AUTODETECTION_METHOD
              value: "interface=ens33"

kubectl create -f calico.yaml

At this point, the k8s installation is complete.

