
[Cloud Native | Kubernetes Series] — Deploying a Kubernetes 1.28 Cluster (on the containerd Container Runtime)


Kubernetes cluster plan

Hostname       IP address      Role
k8s-master01   192.168.0.109   master
k8s-node1      192.168.0.108   node1
k8s-node2      192.168.0.107   node2
k8s-node3      192.168.0.105   node3

Preparation

1. Host configuration

[root@k8s-master01 ~]# hostnamectl set-hostname k8s-master01
[root@k8s-master01 ~]# hostname
k8s-master01
[root@localhost ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.0.109 k8s-master01
192.168.0.108 k8s-node1
192.168.0.107 k8s-node2
192.168.0.105 k8s-node3
systemctl stop firewalld && systemctl disable  firewalld
setenforce 0 && sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config
sestatus # check SELinux status
# Time synchronization
ntpdate time1.aliyun.com
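ntpdate only performs a one-shot sync. To keep the clocks aligned after reboots, one option (a hedged suggestion, not part of the original article) is a cron entry or the chronyd daemon:
# Option A: hourly one-shot sync via a cron.d entry (the file name is arbitrary)
echo "0 * * * * root /usr/sbin/ntpdate time1.aliyun.com" > /etc/cron.d/ntpdate-sync
# Option B: run chronyd as a persistent time-sync service
yum -y install chrony && systemctl enable --now chronyd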

2. Upgrade the kernel

Import the elrepo GPG key for package verification
# rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
Install the elrepo YUM repository
# yum -y install https://www.elrepo.org/elrepo-release-7.0-4.el7.elrepo.noarch.rpm
Install the kernel: kernel-ml is the mainline (latest) branch and kernel-lt is the long-term support branch; here kernel-lt is installed
# yum --enablerepo="elrepo-kernel" -y install kernel-lt.x86_64
Set the default GRUB2 boot entry to 0 so the newly installed kernel is used after a reboot
# grub2-set-default 0
Regenerate the GRUB2 configuration
# grub2-mkconfig -o /boot/grub2/grub.cfg
A reboot is required for the upgraded kernel to take effect.
# reboot, then check the kernel version
# uname -r
# uname -r

[root@k8s-master01 ~]# uname -r
5.4.265-1.el7.elrepo.x86_64

3. Configure kernel forwarding and bridge filtering

Add the bridge-filtering and IP-forwarding configuration file
cat > /etc/sysctl.d/k8s.conf << EOF
# Pass bridged IPv6 traffic to ip6tables
net.bridge.bridge-nf-call-ip6tables = 1
# Pass bridged IPv4 traffic to iptables
net.bridge.bridge-nf-call-iptables = 1
# Enable IPv4 packet forwarding
net.ipv4.ip_forward = 1
# Prefer physical memory over swap
vm.swappiness = 0
EOF
Load the br_netfilter module
# modprobe br_netfilter
# sysctl -p /etc/sysctl.d/k8s.conf
Check that the module is loaded
# lsmod | grep br_netfilter
# lsmod | grep br_netfilter

[root@k8s-master01 ~]# lsmod | grep br_netfilter
br_netfilter           28672  0 

Make it load at boot: vi /etc/sysconfig/modules/br_netfilter.modules
#!/bin/bash
modprobe br_netfilter
Make the script executable
# chmod 755 /etc/sysconfig/modules/br_netfilter.modules
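On systemd-based systems such as CentOS 7 there is also a standard mechanism for loading modules at boot; this is an alternative sketch, not a step from the original article:
# systemd-modules-load reads this directory at boot
cat > /etc/modules-load.d/br_netfilter.conf << EOF
br_netfilter
EOF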

4. Install ipset and ipvsadm. IPVS (IP Virtual Server) is a Linux kernel load-balancing facility that can replace kube-proxy's default iptables mode; it provides more efficient and scalable load balancing, which matters especially in large clusters. (A kube-proxy IPVS configuration sketch follows after the module notes below.)


yum -y install ipset ipvsadm
Configure how the ipvs modules are loaded
Add the modules that need to be loaded
# cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack
EOF
Make the script executable, run it, and check that the modules are loaded
[root@k8s-master01 ~]# chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack
ip_vs_sh               16384  0 
ip_vs_wrr              16384  0 
ip_vs_rr               16384  0 
ip_vs                 155648  6 ip_vs_rr,ip_vs_sh,ip_vs_wrr
nf_conntrack          147456  4 xt_conntrack,nf_nat,xt_MASQUERADE,ip_vs
nf_defrag_ipv6         24576  2 nf_conntrack,ip_vs
nf_defrag_ipv4         16384  1 nf_conntrack
libcrc32c              16384  4 nf_conntrack,nf_nat,xfs,ip_vs

Module notes:
ip_vs: the core IPVS module for IP load balancing; it intercepts traffic and forwards it to backend nodes to provide load balancing and high availability.
ip_vs_rr: round-robin scheduling (Round Robin); requests are handed to the backend nodes in turn.
ip_vs_wrr: weighted round-robin scheduling (Weighted Round Robin); backends with a higher weight receive proportionally more requests.
ip_vs_sh: source-hash scheduling (Source Hash); requests are distributed by source IP, so the same source IP always lands on the same backend node.
nf_conntrack: connection tracking, needed so that connection state is handled correctly for load-balanced traffic.
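Loading the ipvs modules alone does not switch kube-proxy to IPVS mode; that is chosen in the kube-proxy configuration. The sketch below is an assumption on my part (the cluster in this article is initialized with plain CLI flags, which leaves kube-proxy in iptables mode); it shows a kubeadm configuration file that enables IPVS, in case you prefer it over the flag-based init used later:
# kubeadm-ipvs.yaml is a hypothetical file name; the values mirror the flags used later in this article
cat > kubeadm-ipvs.yaml << EOF
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: v1.28.0
networking:
  podSubnet: 10.244.0.0/16
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs
EOF
# kubeadm init --config kubeadm-ipvs.yaml
# Once the cluster is up, `ipvsadm -Ln` should list virtual servers if IPVS mode is active.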

5. Disable swap

swapoff -a && sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
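A quick check (not in the original article) that swap is really off and stays off:
free -h | grep -i swap        # the Swap line should show 0B in all columns
grep swap /etc/fstab          # the swap entry should now be commented out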

Deploying containerd

1. Download

wget https://github.com/containerd/containerd/releases/download/v1.7.7/cri-containerd-1.7.7-linux-amd64.tar.gz
scp -r  ./cri-containerd-1.7.7-linux-amd64.tar.gz  root@192.168.0.108:/root/
scp -r  ./cri-containerd-1.7.7-linux-amd64.tar.gz  root@192.168.0.107:/root/
scp -r  ./cri-containerd-1.7.7-linux-amd64.tar.gz  root@192.168.0.105:/root/

tar -xvf cri-containerd-1.7.7-linux-amd64.tar.gz -C /
Generate the configuration file
mkdir -p /etc/containerd
containerd config default > /etc/containerd/config.toml
vi /etc/containerd/config.toml
sandbox_image = "registry.k8s.io/pause:3.9" # changed from 3.8 to 3.9
Enable containerd and start it now
systemctl enable --now containerd
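The same config.toml edit can be made non-interactively; this is a minimal sketch assuming the default layout produced by `containerd config default`. Setting SystemdCgroup = true is an extra change not made in the original article, added here because kubelet is later configured with --cgroup-driver=systemd and the two drivers should match:
sed -i 's#sandbox_image = ".*"#sandbox_image = "registry.k8s.io/pause:3.9"#' /etc/containerd/config.toml
sed -i 's#SystemdCgroup = false#SystemdCgroup = true#' /etc/containerd/config.toml
systemctl restart containerd && systemctl is-active containerd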

Prepare runc (replace the runc shipped with the bundle, which has known issues)

github:https://github.com/opencontainers/runc/releases/tag/v1.1.9
Prepare libseccomp
Download and unpack the source package
tar -xvf libseccomp-2.5.4.tar.gz
yum -y install gperf
cd libseccomp-2.5.4
./configure
make
make install
find / -name "libseccomp.so"

[root@k8s-master01 libseccomp-2.5.4]# find / -name "libseccomp.so"
/root/libseccomp-2.5.4/src/.libs/libseccomp.so
/usr/local/lib/libseccomp.so
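make install places the library under /usr/local/lib. If runc still cannot find libseccomp at runtime, registering that path with the dynamic linker usually helps (an extra step, not in the original article):
echo "/usr/local/lib" > /etc/ld.so.conf.d/local-lib.conf
ldconfig
ldconfig -p | grep libseccomp   # should now list the freshly built 2.5.4 library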

Install runc
github: https://github.com/opencontainers/runc/releases/download/v1.1.9/runc.amd64
rm -rf `which runc`
chmod +x runc.amd64
mv runc.amd64 runc
mv runc /usr/local/sbin/

[root@k8s-master01 ~]# runc -version
runc version 1.1.9
commit: v1.1.9-0-gccaecfcb
spec: 1.0.2-dev
go: go1.20.3
libseccomp: 2.5.4

Deploying Kubernetes

1. Install the Kubernetes cluster packages (choose just one of the YUM repositories below)

cat > /etc/yum.repos.d/k8s.repo <<EOF
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
       https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF

cat > /etc/yum.repos.d/k8s.repo <<EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
       https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
Refresh the cache and check that the 1.28 packages are available
yum clean all
yum makecache
yum list kubeadm.x86_64 --showduplicates 
yum -y install kubeadm-1.28.0-0 kubelet-1.28.0-0 kubectl-1.28.0-0
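A quick sanity check after the install (optional, not in the original article):
kubeadm version -o short   # expect v1.28.0
kubelet --version
kubectl version --client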

2. Configure kubelet

# Configure kubelet
[root@k8s-master wrap]# vi /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS="--cgroup-driver=systemd"
systemctl enable kubelet

3. Initialize the cluster

kubeadm init --kubernetes-version=v1.28.0 --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=192.168.0.109 --cri-socket unix:///var/run/containerd/containerd.sock
With recent versions --cri-socket can be omitted; containerd is selected by default.
If the image pull fails, use:
kubeadm init --image-repository=registry.aliyuncs.com/google_containers --kubernetes-version=v1.28.1 --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=192.168.0.109 --cri-socket unix:///var/run/containerd/containerd.sock
kubeadm token create --print-join-command  # regenerate the join command after the token expires
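The worker nodes join with the command printed by kubeadm init (or by the token create command above); <token> and <hash> below are placeholders for your cluster's actual values:
# Run on each worker node (k8s-node1/2/3)
kubeadm join 192.168.0.109:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>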

4. Pull the images manually (if kubeadm cannot pull them by itself)

kubeadm config images list
[root@k8s-master01 ~]# kubeadm config images list
W1228 02:55:13.724483    8989 version.go:104] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.txt": Get "https://cdn.dl.k8s.io/release/stable-1.txt": dial tcp 146.75.113.55:443: i/o timeout (Client.Timeout exceeded while awaiting headers)
W1228 02:55:13.724662    8989 version.go:105] falling back to the local client version: v1.28.0
registry.k8s.io/kube-apiserver:v1.28.0
registry.k8s.io/kube-controller-manager:v1.28.0
registry.k8s.io/kube-scheduler:v1.28.0
registry.k8s.io/kube-proxy:v1.28.0
registry.k8s.io/pause:3.9
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/coredns/coredns:v1.10.1

ctr image  pull docker.io/gotok8s/kube-apiserver:v1.28.0
ctr image  pull docker.io/gotok8s/kube-controller-manager:v1.28.0
ctr image  pull docker.io/gotok8s/kube-scheduler:v1.28.0
ctr image  pull docker.io/gotok8s/kube-proxy:v1.28.0
ctr image  pull docker.io/gotok8s/pause:3.9
ctr image  pull docker.io/gotok8s/etcd:3.5.9-0
ctr image  pull docker.io/gotok8s/coredns:v1.10.1

ctr i tag  docker.io/gotok8s/kube-apiserver:v1.28.0  registry.k8s.io/kube-apiserver:v1.28.0
ctr i tag  docker.io/gotok8s/kube-controller-manager:v1.28.0 registry.k8s.io/kube-controller-manager:v1.28.0
ctr i tag  docker.io/gotok8s/kube-scheduler:v1.28.0 registry.k8s.io/kube-scheduler:v1.28.0
ctr i tag  docker.io/gotok8s/kube-proxy:v1.28.0  registry.k8s.io/kube-proxy:v1.28.0
ctr i tag  docker.io/gotok8s/pause:3.9  registry.k8s.io/pause:3.9
ctr i tag  docker.io/gotok8s/etcd:3.5.9-0  registry.k8s.io/etcd:3.5.9-0
ctr i tag  docker.io/gotok8s/coredns:v1.10.1 registry.k8s.io/coredns:v1.10.1
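One caveat: ctr without a namespace flag operates in its default namespace, while the kubelet/CRI looks for images in the k8s.io namespace, so images pulled and tagged as above may not be visible to Kubernetes. If kubeadm still reports missing images, repeating the pull/tag with -n k8s.io (the same approach used for the pause-image fix later in this article) should help; a sketch for one image:
ctr -n k8s.io image pull docker.io/gotok8s/kube-apiserver:v1.28.0
ctr -n k8s.io image tag docker.io/gotok8s/kube-apiserver:v1.28.0 registry.k8s.io/kube-apiserver:v1.28.0
ctr -n k8s.io image ls | grep kube-apiserver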


5. Set up kubeconfig

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
export KUBECONFIG=/etc/kubernetes/admin.conf

[root@k8s-master01 kubernetes-1.28.0]# kubectl get nodes
NAME           STATUS     ROLES           AGE   VERSION
k8s-master01   NotReady   control-plane   53m   v1.28.0
k8s-node1      NotReady   <none>          28s   v1.28.0
k8s-node2      NotReady   <none>          16s   v1.28.0
k8s-node3      NotReady   <none>          10s   v1.28.0

Network plugin

flannel

# On a containerd-only node, pre-pull the flannel CNI plugin image with crictl rather than docker:
crictl pull rancher/mirrored-flannelcni-flannel-cni-plugin:v1.2.0

wget https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml -O kube-flannel.yml

sed -i 's|docker.io/flannel/flannel-cni-plugin:v1.2.0|ccr.ccs.tencentyun.com/google_cn/mirrored-flannelcni-flannel-cni-plugin:v1.1.0|g' kube-flannel.yml

sed -i 's|docker.io/flannel/flannel:v0.23.0|ccr.ccs.tencentyun.com/google_cn/mirrored-flannelcni-flannel:v0.18.1|g' kube-flannel.yml

# Only needed if your pod network CIDR differs from flannel's default 10.244.0.0/16; the cluster here was initialized with 10.244.0.0/16, so this substitution can be skipped:
sed -i 's|10.244.0.0/16|10.1.0.0/16|' kube-flannel.yml

kubectl apply -f kube-flannel.yml
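A quick check once the manifest is applied (not in the original article):
kubectl -n kube-flannel get pods -o wide   # all kube-flannel-ds pods should reach Running
kubectl get nodes                          # nodes should move from NotReady to Ready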


Deploying Calico

Calico documentation: https://projectcalico.docs.tigera.io/about/about-calico
# kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.26.1/manifests/tigera-operator.yaml

# wget https://raw.githubusercontent.com/projectcalico/calico/v3.26.1/manifests/custom-resources.yaml

# vim custom-resources.yaml

# This section includes base Calico installation configuration.
# For more information, see: https://projectcalico.docs.tigera.io/master/reference/installation/api#operator.tigera.io/v1.Installation
apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
  name: default
spec:
  # Configures Calico networking.
  calicoNetwork:
    # Note: The ipPools section cannot be modified post-install.
    ipPools:
    - blockSize: 26
      cidr: 10.244.0.0/16    # change this line to the pod network CIDR defined at cluster init
      encapsulation: VXLANCrossSubnet
      natOutgoing: Enabled
      nodeSelector: all()

---

# This section configures the Calico API server.
# For more information, see: https://projectcalico.docs.tigera.io/master/reference/installation/api#operator.tigera.io/v1.APIServer
apiVersion: operator.tigera.io/v1
kind: APIServer
metadata:
  name: default
spec: {}

kubectl create -f custom-resources.yaml
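As with flannel, it is worth watching the Calico pods come up before continuing (a hedged extra step, not in the original article):
watch kubectl get pods -n calico-system   # wait for calico-node and calico-kube-controllers to reach Running
kubectl get nodes                          # nodes report Ready once the CNI is functional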

Error 1

[root@k8s-master01 ~]#  kubeadm init --image-repository=registry.aliyuncs.com/google_containers --kubernetes-version=v1.28.1 --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=192.168.0.109
[init] Using Kubernetes version: v1.28.1
[preflight] Running pre-flight checks
error execution phase preflight: [preflight] Some fatal errors occurred:
	[ERROR CRI]: container runtime is not running: output: time="2023-12-28T02:42:09-08:00" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/containerd/containerd.sock\": rpc error: code = Unimplemented desc = unknown service runtime.v1.RuntimeService"
, error: exit status 1
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher

Fix

vim  /etc/containerd/config.toml

Change disabled_plugins = ["cri"] to disabled_plugins = [""]

sudo systemctl restart containerd

Error 2

[init] Using Kubernetes version: v1.28.1
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
W1228 02:50:26.797707    8490 checks.go:835] detected that the sandbox image "registry.k8s.io/pause:3.8" of the container runtime is inconsistent with that used by kubeadm. It is recommended that using "registry.aliyuncs.com/google_containers/pause:3.9" as the CRI sandbox image.
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master01 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.0.109]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master01 localhost] and IPs [192.168.0.109 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master01 localhost] and IPs [192.168.0.109 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.

# Check the error details
journalctl -u kubelet -f
The gist of the error:
 Failed to create pod sandbox: rpc error: code = Unknown desc = failed to get sandbox image "registry.k8s.io/pause:3.8": failed to pull image "registry.k8s.io/pause:3.8": 
 failed to pull and unpack image "registry.k8s.io/pause:3.8": failed to resolve reference "registry.k8s.io/pause:3.8": 
 failed to do request: Head "https://europe-west2-docker.pkg.dev/v2/k8s-artifacts-prod/images/pause/manifests/3.8": dial tcp 173.194.174.82:443: i/o timeout


Fix

crictl pull registry.aliyuncs.com/google_containers/pause:3.9
ctr -n k8s.io images tag registry.aliyuncs.com/google_containers/pause:3.9  registry.k8s.io/pause:3.8
kubeadm reset 
kubeadm init --image-repository=registry.aliyuncs.com/google_containers --kubernetes-version=v1.28.0 --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=192.168.0.109 

Result

[root@k8s-master01 containerd]# kubectl get po -A
NAMESPACE      NAME                                   READY   STATUS    RESTARTS   AGE
kube-flannel   kube-flannel-ds-6nbh9                  1/1     Running   0          156m
kube-flannel   kube-flannel-ds-vpghq                  1/1     Running   0          16m
kube-flannel   kube-flannel-ds-wv5hb                  1/1     Running   0          15m
kube-flannel   kube-flannel-ds-zdsfg                  1/1     Running   0          16m
kube-system    coredns-66f779496c-fcxnb               1/1     Running   0          4h16m
kube-system    coredns-66f779496c-snjk2               1/1     Running   0          4h16m
kube-system    etcd-k8s-master01                      1/1     Running   0          4h16m
kube-system    kube-apiserver-k8s-master01            1/1     Running   0          4h16m
kube-system    kube-controller-manager-k8s-master01   1/1     Running   0          4h16m
kube-system    kube-proxy-6vdts                       1/1     Running   0          4h16m
kube-system    kube-proxy-7thjs                       1/1     Running   0          3m11s
kube-system    kube-proxy-nc5zh                       1/1     Running   0          3m11s
kube-system    kube-proxy-sgwcc                       1/1     Running   0          2m43s
kube-system    kube-scheduler-k8s-master01            1/1     Running   0          4h16m

Resource monitoring (metrics-server)

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: metrics-server
    rbac.authorization.k8s.io/aggregate-to-admin: "true"
    rbac.authorization.k8s.io/aggregate-to-edit: "true"
    rbac.authorization.k8s.io/aggregate-to-view: "true"
  name: system:aggregated-metrics-reader
rules:
- apiGroups:
  - metrics.k8s.io
  resources:
  - pods
  - nodes
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: metrics-server
  name: system:metrics-server
rules:
- apiGroups:
  - ""
  resources:
  - nodes/metrics
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - pods
  - nodes
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server-auth-reader
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: extension-apiserver-authentication-reader
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server:system:auth-delegator
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: system:metrics-server
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:metrics-server
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: v1
kind: Service
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  ports:
  - name: https
    port: 443
    protocol: TCP
    targetPort: https
  selector:
    k8s-app: metrics-server
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  replicas: 2
  selector:
    matchLabels:
      k8s-app: metrics-server
  strategy:
    rollingUpdate:
      maxUnavailable: 1
  template:
    metadata:
      labels:
        k8s-app: metrics-server
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchLabels:
                k8s-app: metrics-server
            namespaces:
            - kube-system
            topologyKey: kubernetes.io/hostname
      containers:
      - args:
        - --cert-dir=/tmp
        - --secure-port=4443
        - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
        - --kubelet-use-node-status-port
        - --metric-resolution=15s
        - --kubelet-insecure-tls
        image: registry.cn-hangzhou.aliyuncs.com/google_containers/metrics-server:v0.6.4
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /livez
            port: https
            scheme: HTTPS
          periodSeconds: 10
        name: metrics-server
        ports:
        - containerPort: 4443
          name: https
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /readyz
            port: https
            scheme: HTTPS
          initialDelaySeconds: 20
          periodSeconds: 10
        resources:
          requests:
            cpu: 100m
            memory: 200Mi
        securityContext:
          allowPrivilegeEscalation: false
          readOnlyRootFilesystem: true
          runAsNonRoot: true
          runAsUser: 1000
        volumeMounts:
        - mountPath: /tmp
          name: tmp-dir
      nodeSelector:
        kubernetes.io/os: linux
      priorityClassName: system-cluster-critical
      serviceAccountName: metrics-server
      volumes:
      - emptyDir: {}
        name: tmp-dir
---
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: metrics-server
  namespace: kube-system
spec:
  minAvailable: 1
  selector:
    matchLabels:
      k8s-app: metrics-server
---
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  labels:
    k8s-app: metrics-server
  name: v1beta1.metrics.k8s.io
spec:
  group: metrics.k8s.io
  groupPriorityMinimum: 100
  insecureSkipTLSVerify: true
  service:
    name: metrics-server
    namespace: kube-system
  version: v1beta1
  versionPriority: 100
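Assuming the manifest above is saved as metrics-server.yaml (the file name is my choice, not from the original article), it is applied and verified like this:
kubectl apply -f metrics-server.yaml
kubectl -n kube-system get pods -l k8s-app=metrics-server
# Once the pods are Ready, resource usage becomes available:
kubectl top nodes
kubectl top pods -A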
