
Deploying the Latest Kubernetes Cluster (v1.28) + the Docker Container Engine (Part 2)


Previous installment: Deploying the Latest Kubernetes Cluster (v1.28) + the Docker Container Engine (Part 1)

Main index: the 溫故知新 column

Next installment: Deploying and Using Kuboard, a Kubernetes Visual Management Tool, plus Notes on Common k8s Commands

Chapter 3: Deploying the Kubernetes Components

Install kubectl (or skip ahead to the kubeadm section and install everything in one go)

kubectl is the command-line tool for communicating with a Kubernetes cluster's control plane via the Kubernetes API. See the official installation steps for details.

Download the kubectl package

kubectl v1.28.1 download

Run the kubectl install

[kubernetes@renxiaozhao01 ~]$ sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
[kubernetes@renxiaozhao01 ~]$ 
[kubernetes@renxiaozhao01 ~]$ kubectl version --client
Client Version: v1.28.1
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
[kubernetes@renxiaozhao01 ~]$ 
[kubernetes@renxiaozhao01 ~]$ 

That's all there is to it. I had naively tried downloading it with curl earlier; it stalled somewhere and still wasn't done after an hour and a half. If your network is good, curl -LO https://dl.k8s.io/release/v1.28.1/bin/linux/amd64/kubectl is of course the more convenient route.
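If you do take the curl route, it's worth also verifying the binary against the published checksum. A minimal sketch for this exact version:

curl -LO "https://dl.k8s.io/release/v1.28.1/bin/linux/amd64/kubectl"
curl -LO "https://dl.k8s.io/release/v1.28.1/bin/linux/amd64/kubectl.sha256"
# Prints "kubectl: OK" when the download is intact
echo "$(cat kubectl.sha256)  kubectl" | sha256sum --check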

Verify kubectl

Run kubectl version --client or kubectl version --client --output=yaml

[kubernetes@renxiaozhao01 ~]$ kubectl version --client --output=yaml
clientVersion:
  buildDate: "2023-08-24T11:23:10Z"
  compiler: gc
  gitCommit: 8dc49c4b984b897d423aab4971090e1879eb4f23
  gitTreeState: clean
  gitVersion: v1.28.1
  goVersion: go1.20.7
  major: "1"
  minor: "28"
  platform: linux/amd64
kustomizeVersion: v5.0.4-0.20230601165947-6ce0bf390ce3

[kubernetes@renxiaozhao01 ~]$ 
[kubernetes@renxiaozhao01 ~]$ kubectl version --client
Client Version: v1.28.1
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
[kubernetes@renxiaozhao01 ~]$ 
[kubernetes@renxiaozhao01 ~]$ 

Install kubeadm

Official documentation

Add the yum repository config file kubernetes.repo

cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=1
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
        http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
exclude=kubelet kubeadm kubectl
EOF


Install kubeadm & kubelet & kubectl (the earlier kubectl install could have been done here in one step)

Install command: sudo yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes

Nothing gets reinstalled here; yum automatically skips the kubectl package that was installed earlier (a version-pinned variant is sketched below).
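If you ever need a specific release rather than whatever is newest in the repo, yum can pin versions. A sketch, assuming the v1.28.1 packages used throughout this article are available in the configured repo:

# See which versions the repo offers
yum list --showduplicates kubeadm --disableexcludes=kubernetes
# Install all three components pinned to the same release
sudo yum install -y kubelet-1.28.1 kubeadm-1.28.1 kubectl-1.28.1 --disableexcludes=kubernetes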

Check the version info

[kubernetes@renxiaozhao01 ~]$ kubelet --version
Kubernetes v1.28.1
[kubernetes@renxiaozhao01 ~]$ kubeadm  version
kubeadm version: &version.Info{Major:"1", Minor:"28", GitVersion:"v1.28.1", GitCommit:"8dc49c4b984b897d423aab4971090e1879eb4f23", GitTreeState:"clean", BuildDate:"2023-08-24T11:21:51Z", GoVersion:"go1.20.7", Compiler:"gc", Platform:"linux/amd64"}
[kubernetes@renxiaozhao01 ~]$ 
[kubernetes@renxiaozhao01 ~]$ 


Start kubelet

Run the sudo systemctl enable --now kubelet command:

  • kubelet runs on every node and is responsible for managing and monitoring that node's containers. It talks to the Kubernetes master, receives instructions from it, and creates, destroys, and manages containers accordingly.
  • enable sets the kubelet service to start on boot; the --now option additionally starts it immediately.
kubelet start / status / log commands
  • Check status: systemctl status kubelet

    kubelet now restarts every few seconds because it is stuck in a loop waiting for instructions from kubeadm.

    When initializing a Kubernetes cluster with kubeadm, several kubeadm commands are run on the master node. One of them, kubeadm init, generates a token used to join other nodes. At startup the kubelet service looks for the configuration that kubeadm writes for it; until that exists, kubelet exits and systemd restarts it, hence the loop.

  • Start manually: systemctl start kubelet

  • View logs: sudo journalctl -u kubelet

Installing the Kubernetes Components on the Other Machines

The core Kubernetes components are now installed on machine r1. Referring to the previous installment (mainly the firewall and related steps) and to this installment's kubeadm section (which covers the three core components kubeadm, kubelet, and kubectl), here is a condensed recap for installing on r2 and r3 (machine details are in Chapter 2 of the previous installment).

Disable the firewall, swap, and SELinux

  • Disable the firewall
    systemctl stop firewalld.service
    systemctl status firewalld.service
    systemctl disable firewalld.service
    
  • Disable swap
    Edit /etc/fstab (vi /etc/fstab) and comment out the line /dev/mapper/centos-swap swap...
  • Disable SELinux
    Edit /etc/sysconfig/selinux (vi /etc/sysconfig/selinux), changing SELINUX=enforcing to SELINUX=disabled
  • Reboot the VM for the changes to take effect
    Run reboot or shutdown -r now

Create the sudo install user kubernetes

useradd kubernetes -d /home/kubernetes
passwd kubernetes
echo 'kubernetes  ALL=(ALL)  NOPASSWD: ALL' >> /etc/sudoers
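Appending to /etc/sudoers directly works, but a drop-in file under /etc/sudoers.d plus a syntax check is the safer equivalent. A sketch, run as root like the commands above:

echo 'kubernetes ALL=(ALL) NOPASSWD: ALL' > /etc/sudoers.d/kubernetes
chmod 0440 /etc/sudoers.d/kubernetes
# visudo -c validates the file so a typo can't break sudo
visudo -cf /etc/sudoers.d/kubernetes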

Add the Kubernetes yum repository config file

cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=1
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
        http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
exclude=kubelet kubeadm kubectl
EOF

Install kubeadm & kubelet & kubectl

sudo yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes

Verify the installed commands

kubelet --version
kubeadm  version
kubectl version --client


Chapter 4: Deploying the Kubernetes Cluster

Official cluster-creation documentation

  • By default, kubeadm init and kubeadm join automatically pull the container images they need. These include the Kubernetes components (kube-apiserver, kube-controller-manager, kube-scheduler, etc.) and a few other common images (such as CoreDNS).
  • They do not, however, install the container runtime itself: kubeadm expects a working runtime (and, for Docker Engine, a CRI adapter) to already be present on each node. A pre-pull sketch follows this list.
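The init transcript later in this chapter hints at this with 'kubeadm config images pull': once the runtime and cri-dockerd are in place (installed below), you can preview and pre-pull the images so kubeadm init doesn't stall on the network. A sketch using this article's mirror, version, and socket:

kubeadm config images list --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.28.1
sudo kubeadm config images pull --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.28.1 --cri-socket unix:///var/run/cri-dockerd.sock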

I tried running kubeadm init directly and it failed either way, so let's install Docker and its related components manually on r1, r2, and r3.


Install Docker (on r1, r2, and r3)

  • Kubernetes is a container orchestration platform for managing and scheduling containerized applications. It can integrate with multiple container runtimes, including Docker, containerd, and CRI-O.
  • Kubernetes guides have traditionally defaulted to Docker as the container runtime (Docker being one of the most widely used runtimes, with broad community support and a mature ecosystem), which is why most installation tutorials suggest installing Docker first.

To sum up: Kubernetes and Docker are independent of each other and not inherently bound together, but in practice most installs still set up Docker first. Note that Kubernetes v1.24 removed the built-in dockershim, so using Docker Engine as the runtime now requires the cri-dockerd adapter installed later in this chapter.

Install via yum

Installing via yum suits most cases; it automatically fetches the latest stable version.

  • The yum-config-manager used below is one of the tools in yum-utils
    sudo yum install -y yum-utils
    
  • Set up the Docker CE repository
    sudo yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
    
  • Install Docker
    sudo yum install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
    
  • Start Docker
    sudo systemctl start docker
    
  • Enable Docker at boot (the VM gets rebooted a lot; enabling at boot saves hassle)
    sudo systemctl enable docker
    
  • Check Docker status
    sudo systemctl status docker
    
  • Check the Docker version: docker version (note the permission denied error below; a common fix is sketched at the end of this list)
    [kubernetes@renxiaozhao01 ~]$ docker version
    Client: Docker Engine - Community
     Version:           24.0.5
     API version:       1.43
     Go version:        go1.20.6
     Git commit:        ced0996
     Built:             Fri Jul 21 20:39:02 2023
     OS/Arch:           linux/amd64
     Context:           default
    permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Get "http://%2Fvar%2Frun%2Fdocker.sock/v1.24/version": dial unix /var/run/docker.sock: connect: permission denied
    [kubernetes@renxiaozhao01 ~]$ 
    
  • Configure a registry mirror (accelerator); see Problem 3 later in this article. Logging in to the Aliyun image service with Alipay is all it takes to get the address.
    sudo mkdir -p /etc/docker
    sudo tee /etc/docker/daemon.json <<-'EOF'
    {
      "registry-mirrors": ["https://xxxx.mirror.aliyuncs.com"]
    }
    EOF
    sudo systemctl daemon-reload
    sudo systemctl restart docker
    
  • Run sudo docker info to confirm the mirror you just configured shows up
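    The permission denied error in the docker version output above is the classic sign that the current user isn't in the docker group. A common fix (then log out and back in, or use newgrp as below):

    # Add the current user to the docker group
    sudo usermod -aG docker $USER
    # Pick up the new group membership in the current shell
    newgrp docker
    # Should now print both Client and Server sections
    docker version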

Install cri-dockerd (on r1, r2, and r3)

Official site
cri-dockerd is a Docker-based container runtime adapter: it implements the CRI interface so that Kubernetes can interact with Docker to create and manage containers.

Download the rpm package

Official instructions
Git repo; just click to download and drop it on the server (with a good network, run sudo wget https://github.com/Mirantis/cri-dockerd/releases/download/v0.3.4/cri-dockerd-0.3.4-3.el7.x86_64.rpm directly on the server). The download is quite slow…
This time it actually downloaded faster on the server itself
Exhausting… continuing tomorrow night
Fast download link for cri-dockerd-0.3.4-3.el7.x86_64.rpm

Install via rpm

If the rpm command is missing, install it with yum -y install rpm

  • Install via rpm: sudo rpm -ivh cri-dockerd-0.3.4-3.el7.x86_64.rpm (-i installs the package, -v prints verbose output, -h shows hash-mark progress)

Start cri-dockerd

# Reload the systemd daemon
sudo systemctl daemon-reload 
# Enable cri-dockerd to start on boot
sudo systemctl enable cri-docker.socket cri-docker 
# Start cri-dockerd
sudo systemctl start cri-docker.socket cri-docker 
# Check cri-dockerd status
sudo systemctl status cri-docker.socket


Initialize the K8S Cluster

Configure /etc/hosts (all three machines r1, r2, r3)

Add a k8s prefix to the names for easy identification; a future Hadoop cluster could then use hp-master, hp-slave01, hp-slave02

[kubernetes@renxiaozhao01 ~]$ sudo vi /etc/hosts
[kubernetes@renxiaozhao01 ~]$ sudo cat  /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.17.17	k8s-master
192.168.17.18	k8s-node01
192.168.17.19   k8s-node02
[kubernetes@renxiaozhao01 ~]$ 
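A quick sanity check that the new names resolve on every machine, as an illustrative one-liner:

for h in k8s-master k8s-node01 k8s-node02; do ping -c 1 "$h"; done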

Configure system parameters for initialization

Kernel module parameters
  • Kernel module config file

    cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
    overlay
    br_netfilter
    EOF
    
  • Load the kernel modules:

    • The overlay module supports the overlay/union filesystem used by containers

    • The br_netfilter module supports the bridging and packet-filtering side of Kubernetes networking.

    • Load commands:

      sudo modprobe overlay
      sudo modprobe br_netfilter
      
    • Verify they loaded:

      lsmod | grep br_netfilter
      lsmod | grep overlay  
      


sysctl parameters
  • net.bridge.bridge-nf-call-iptables: makes bridged traffic pass through iptables
  • net.bridge.bridge-nf-call-ip6tables: makes bridged traffic pass through ip6tables
  • net.ipv4.ip_forward: enables IP forwarding
  • Settings
    • Config file

      cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
      net.bridge.bridge-nf-call-iptables  = 1
      net.bridge.bridge-nf-call-ip6tables = 1
      net.ipv4.ip_forward                 = 1
      EOF
      
    • Apply the configuration

      sudo sysctl --system
      
    • Check the values:

      sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward
      


Initialize the Master Node (machine r1)

kubeadm init
  • Official documentation

  • Init command

    sudo kubeadm init --node-name=k8s-master --image-repository=registry.aliyuncs.com/google_containers --cri-socket=unix:///var/run/cri-dockerd.sock --apiserver-advertise-address=192.168.17.17 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12
    

Explanation of the kubeadm init parameters (also covered at the official link above):

  • --node-name=k8s-master: the master node's name within the cluster, matching the 192.168.17.17 k8s-master entry in /etc/hosts above.
  • --image-repository=registry.aliyuncs.com/google_containers: the registry to pull the component images from.
  • --cri-socket=unix:///var/run/cri-dockerd.sock: the CRI (Container Runtime Interface) socket path of the container runtime. (Docker's own config usually lives in /etc/docker/daemon.json; a CRI socket path is not normally set there. Without a shim, Kubernetes used to talk to Docker's default socket /var/run/docker.sock.)
  • --apiserver-advertise-address=192.168.17.17: the address the Kubernetes API server advertises, i.e. the master's IP, matching the /etc/hosts entry above.
  • --pod-network-cidr=10.244.0.0/16: the CIDR (Classless Inter-Domain Routing) range for the pod network; this value is fine as-is (it is also flannel's default).
  • --service-cidr=10.96.0.0/12: the CIDR range for the service network (pick a range not otherwise in use); the default shown is generally fine.
  • A successful run looks like this (it nearly broke me, stuck for ages; see Problem 4):
    [kubernetes@renxiaozhao01 ~]$ sudo kubeadm init --node-name=k8s-master --image-repository=registry.aliyuncs.com/google_containers --cri-socket=unix:///var/run/cri-dockerd.sock --apiserver-advertise-address=192.168.17.17 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12 
    [init] Using Kubernetes version: v1.28.1
    [preflight] Running pre-flight checks
    [preflight] Pulling images required for setting up a Kubernetes cluster
    [preflight] This might take a minute or two, depending on the speed of your internet connection
    [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
    [certs] Using certificateDir folder "/etc/kubernetes/pki"
    [certs] Generating "ca" certificate and key
    [certs] Generating "apiserver" certificate and key
    [certs] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.17.17]
    [certs] Generating "apiserver-kubelet-client" certificate and key
    [certs] Generating "front-proxy-ca" certificate and key
    [certs] Generating "front-proxy-client" certificate and key
    [certs] Generating "etcd/ca" certificate and key
    [certs] Generating "etcd/server" certificate and key
    [certs] etcd/server serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.17.17 127.0.0.1 ::1]
    [certs] Generating "etcd/peer" certificate and key
    [certs] etcd/peer serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.17.17 127.0.0.1 ::1]
    [certs] Generating "etcd/healthcheck-client" certificate and key
    [certs] Generating "apiserver-etcd-client" certificate and key
    [certs] Generating "sa" key and public key
    [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
    [kubeconfig] Writing "admin.conf" kubeconfig file
    [kubeconfig] Writing "kubelet.conf" kubeconfig file
    [kubeconfig] Writing "controller-manager.conf" kubeconfig file
    [kubeconfig] Writing "scheduler.conf" kubeconfig file
    [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
    [control-plane] Using manifest folder "/etc/kubernetes/manifests"
    [control-plane] Creating static Pod manifest for "kube-apiserver"
    [control-plane] Creating static Pod manifest for "kube-controller-manager"
    [control-plane] Creating static Pod manifest for "kube-scheduler"
    [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
    [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
    [kubelet-start] Starting the kubelet
    [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
    [apiclient] All control plane components are healthy after 16.526647 seconds
    [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
    [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
    [upload-certs] Skipping phase. Please see --upload-certs
    [mark-control-plane] Marking the node k8s-master as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
    [mark-control-plane] Marking the node k8s-master as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
    [bootstrap-token] Using token: 4ydg4t.7cjjm52hd4p86gmk
    [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
    [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
    [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
    [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
    [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
    [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
    [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
    [addons] Applied essential addon: CoreDNS
    [addons] Applied essential addon: kube-proxy
    
    Your Kubernetes control-plane has initialized successfully!
    
    To start using your cluster, you need to run the following as a regular user:
    
      mkdir -p $HOME/.kube
      sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
      sudo chown $(id -u):$(id -g) $HOME/.kube/config
    
    Alternatively, if you are the root user, you can run:
    
      export KUBECONFIG=/etc/kubernetes/admin.conf
    
    You should now deploy a pod network to the cluster.
    Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
      https://kubernetes.io/docs/concepts/cluster-administration/addons/
    
    Then you can join any number of worker nodes by running the following on each as root:
    
    kubeadm join 192.168.17.17:6443 --token 4ydg4t.7cjjm52hd4p86gmk \
    	--discovery-token-ca-cert-hash sha256:ee2c3ae1c2d702b77a0b52f9dafe734aa7e25f33c44cf7fa469c1adc8c176be1 
    
    
admin.conf


  • Per the init output above, the following also needs to be run (shown as the regular kubernetes user; the root case follows):
    [kubernetes@renxiaozhao01 ~]$ 
    [kubernetes@renxiaozhao01 ~]$   mkdir -p $HOME/.kube
    [kubernetes@renxiaozhao01 ~]$   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    [kubernetes@renxiaozhao01 ~]$   sudo chown $(id -u):$(id -g) $HOME/.kube/config
    [kubernetes@renxiaozhao01 ~]$ 
    [kubernetes@renxiaozhao01 ~]$ 
    
  • If you ran everything as root:
    Run export KUBECONFIG=/etc/kubernetes/admin.conf for a temporary effect in the current session; better, put it in a shell profile (a non-root user who did the .kube/config copy above doesn't need the variable). A minimal sketch follows.
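    A minimal way to persist it for root, assuming bash and the admin.conf path from the init output:

    echo 'export KUBECONFIG=/etc/kubernetes/admin.conf' >> /root/.bashrc
    source /root/.bashrc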

Install a Network Plugin (machines r1, r2, r3)

A network plugin appears to be mandatory. I skipped it at first and kubeadm join still appeared to succeed (though the node status wasn't right); later, kubeadm join with a node-name parameter kept failing, so I installed the flannel network plugin (download link). Brief notes below.

  • Download kube-flannel.yml and pin the network interface: search for kube-subnet-mgr and add - --iface=ens33 below it (matching your own static-IP interface; a sketch of the resulting args fragment appears at the end of this list)
  • Run kubectl apply -f kube-flannel.yml directly. As a non-root user, adding sudo makes it (and many other kubectl invocations) fail with localhost:8080 was refused; dropping sudo fixes it. The reason: sudo changes $HOME, so kubectl no longer finds ~/.kube/config and falls back to its localhost:8080 default; either run it without sudo or pass --kubeconfig /etc/kubernetes/admin.conf explicitly.
    [kubernetes@renxiaozhao03 ~]$ kubectl apply -f kube-flannel.yml
    namespace/kube-flannel unchanged
    serviceaccount/flannel unchanged
    clusterrole.rbac.authorization.k8s.io/flannel unchanged
    clusterrolebinding.rbac.authorization.k8s.io/flannel unchanged
    configmap/kube-flannel-cfg unchanged
    daemonset.apps/kube-flannel-ds unchanged
    [kubernetes@renxiaozhao03 ~]$ 
    [kubernetes@renxiaozhao03 ~]$ 
    [kubernetes@renxiaozhao03 ~]$ 
    [kubernetes@renxiaozhao03 ~]$ sudo kubectl apply -f kube-flannel.yml
    The connection to the server localhost:8080 was refused - did you specify the right host or port?
    [kubernetes@renxiaozhao03 ~]$ 
    [kubernetes@renxiaozhao03 ~]$ 
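    For reference, the edited fragment of kube-flannel.yml ends up looking roughly like this (image and surrounding fields vary between flannel releases; ens33 is this host's NIC, so adjust to your own):

      containers:
      - name: kube-flannel
        args:
        - --ip-masq
        - --kube-subnet-mgr
        - --iface=ens33   # pin flannel to the interface that carries the node IP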
    
    

Join the Nodes to the Master (machines r2, r3)

  • Sync the master's .kube directory to the node: scp -r .kube 192.168.17.18:/home/kubernetes/

  • Verify after syncing

  • Run the join command generated earlier:

    sudo kubeadm join 192.168.17.17:6443 --token 4ydg4t.7cjjm52hd4p86gmk --discovery-token-ca-cert-hash sha256:ee2c3ae1c2d702b77a0b52f9dafe734aa7e25f33c44cf7fa469c1adc8c176be1 --cri-socket=unix:///var/run/cri-dockerd.sock
    


  • After a successful join, check the nodes with kubectl get nodes

    One node isn't Ready… I meant to leave it for another day, tired, but it does seem to confirm that a network plugin such as flannel is a must
    Couldn't resist troubleshooting after all; it's fine now. I edited cri-docker.service per Problem 4 below and restarted services a few times. A few extra diagnostics are sketched after this transcript.

    [kubernetes@renxiaozhao03 ~]$ sudo systemctl stop cri-docker
    [kubernetes@renxiaozhao03 ~]$ #see Problem 4 below for the edit
    [kubernetes@renxiaozhao03 ~]$ sudo vi /usr/lib/systemd/system/cri-docker.service
    [kubernetes@renxiaozhao03 ~]$ sudo systemctl daemon-reload
    [kubernetes@renxiaozhao03 ~]$ sudo systemctl start cri-docker
    [kubernetes@renxiaozhao03 ~]$ sudo systemctl  restart kubelet docker cri-docker
    [kubernetes@renxiaozhao03 ~]$ 
    [kubernetes@renxiaozhao03 ~]$ 
    [kubernetes@renxiaozhao03 ~]$ sudo systemctl  restart kubelet docker cri-docker
    [kubernetes@renxiaozhao03 ~]$ kubectl get pod -A|grep kube-flannel-ds-rc8vq
    kube-flannel   kube-flannel-ds-rc8vq                1/1     Running   2 (33s ago)   164m
    [kubernetes@renxiaozhao03 ~]$ 
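    When a node lingers in NotReady, a few checks like these help narrow things down (a sketch; the names match this cluster):

    kubectl get nodes -o wide
    # The flannel pod scheduled on the node must reach Running
    kubectl get pods -n kube-flannel -o wide
    kubectl describe node k8s-node02 | grep -A 6 Conditions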
    

Summary

Problem Log

Problem 1: yum reports "network unreachable" when installing packages


[kubernetes@renxiaozhao01 ~]$ sudo yum -y install lrzsz
已加載插件:fastestmirror, langpacks
Loading mirror speeds from cached hostfile
Could not retrieve mirrorlist http://mirrorlist.centos.org/?release=7&arch=x86_64&repo=os&infra=stock error was
14: curl#7 - "Failed to connect to 2600:1f16:c1:5e01:4180:6610:5482:c1c0: 網(wǎng)絡(luò)不可達(dá)"


 One of the configured repositories failed (未知),
 and yum doesn't have enough cached data to continue. At this point the only
 safe thing yum can do is fail. There are a few ways to work "fix" this:

     1. Contact the upstream for the repository and get them to fix the problem.

     2. Reconfigure the baseurl/etc. for the repository, to point to a working
        upstream. This is most often useful if you are using a newer
        distribution release than is supported by the repository (and the
        packages for the previous distribution release still work).

     3. Run the command with the repository temporarily disabled
            yum --disablerepo=<repoid> ...

     4. Disable the repository permanently, so yum won't use it by default. Yum
        will then just ignore the repository until you permanently enable it
        again or use --enablerepo for temporary usage:

            yum-config-manager --disable <repoid>
        or
            subscription-manager repos --disable=<repoid>

     5. Configure the failing repository to be skipped, if it is unavailable.
        Note that yum will try to contact the repo. when it runs most commands,
        so will have to try and fail each time (and thus. yum will be be much
        slower). If it is a very temporary problem though, this is often a nice
        compromise:

            yum-config-manager --save --setopt=<repoid>.skip_if_unavailable=true

Cannot find a valid baseurl for repo: base/7/x86_64

Fix 1: restart the VM's network with sudo systemctl restart network

Back home from the office the network had changed and the VM had been rebooted; ping www.baidu.com worked at first, then later failed. I remembered switching wireless networks in between; restarting the network service fixed it. The real cause was simply an unstable home network (Great Wall Broadband).

[kubernetes@renxiaozhao01 yum.repos.d]$ curl https://www.baidu.com/
curl: (7) Failed connect to www.baidu.com:443; 沒有到主機的路由

Run the restart command: sudo systemctl restart network

Bonus: command to replace the base repo with the Aliyun mirror

sudo curl -o /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo


Problem 2: what "container runtime" actually refers to

"Container runtime" refers to the software component that creates, packages, and runs containers.

  • The container runtime is the main component responsible for managing and executing containers

    • It loads and runs container images, creates and manages container lifecycles, and provides the interfaces for interacting with containers
    • It handles resource isolation, networking, and storage configuration for containers
  • Common container runtimes include Docker, containerd, and CRI-O
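To make the CRI relationship concrete: a CRI client such as crictl talks to whichever runtime sits behind a given socket. A sketch, assuming crictl is installed and cri-dockerd is serving the socket (the same endpoint kubeadm's error hints reference later in this article):

# List all containers via the CRI socket exposed by cri-dockerd
sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps -a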

Problem 3: getting an Aliyun registry mirror (accelerator) address

  • How to obtain the mirror address
  • Just scan with Alipay to log in to Aliyun's Container Registry service

Problem 4: kubeadm init fails

[init] Using Kubernetes version: v1.28.1
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
W0829 19:08:22.039382   55875 checks.go:835] detected that the sandbox image "registry.k8s.io/pause:3.6" of the container runtime is inconsistent with that used by kubeadm. It is recommended that using "registry.aliyuncs.com/google_containers/pause:3.9" as the CRI sandbox image.
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.17.17]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.17.17 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.17.17 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.

Unfortunately, an error has occurred:
	timed out waiting for the condition

This error is likely caused by:
	- The kubelet is not running
	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	- 'systemctl status kubelet'
	- 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
	- 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps -a | grep kube | grep -v pause'
	Once you have found the failing container, you can inspect its logs with:
	- 'crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock logs CONTAINERID'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher

Error detail 1: detected that the sandbox image "registry.k8s.io/pause:3.6" of the container runtime is inconsistent with that used by kubeadm. It is recommended that using "registry.aliyuncs.com/google_containers/pause:3.9" as the CRI sandbox image

Error detail 2: [kubelet-check] Initial timeout of 40s passed.

Final fix: edit /usr/lib/systemd/system/cri-docker.service

Error 1 was fixed by editing cri-docker.service; error 2 was a consequence of error 1.
Fix steps (a non-interactive one-liner equivalent is sketched after these steps)

  • Stop the cri-docker service: sudo systemctl stop cri-docker

  • Edit /usr/lib/systemd/system/cri-docker.service

    • Find ExecStart and append --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.9
      ExecStart=/usr/bin/cri-dockerd --container-runtime-endpoint fd:// --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.9
      
  • Reload systemd: sudo systemctl daemon-reload

  • Start the cri-docker service: sudo systemctl start cri-docker

  • Check the cri-docker service status: sudo systemctl status cri-docker
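  • A non-interactive equivalent of the edit above, assuming the stock cri-dockerd 0.3.4 unit file (back it up first):

    sudo cp /usr/lib/systemd/system/cri-docker.service{,.bak}
    # Rewrite the ExecStart line to add the Aliyun pause image
    sudo sed -i 's|^ExecStart=.*|ExecStart=/usr/bin/cri-dockerd --container-runtime-endpoint fd:// --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.9|' /usr/lib/systemd/system/cri-docker.service
    sudo systemctl daemon-reload && sudo systemctl restart cri-docker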


Problem 5: kubeadm join fails

Error detail 1: [WARNING Hostname]: hostname "renxiaozhao03" could not be reached

Although it's only a warning, neither the name nor the IP matched what I had set in /etc/hosts, so something was clearly wrong.

Fix: add the --node-name parameter to specify the node name

Pass --node-name k8s-node02; the = between flag and value is optional (use a space if you omit it)

sudo kubeadm join 192.168.17.17:6443  --node-name=k8s-node02 --token 4ydg4t.7cjjm52hd4p86gmk --discovery-token-ca-cert-hash sha256:ee2c3ae1c2d702b77a0b52f9dafe734aa7e25f33c44cf7fa469c1adc8c176be1 --cri-socket=unix:///var/run/cri-dockerd.sock

The --cri-socket=unix:///var/run/cri-dockerd.sock parameter was likewise added in response to the error message below. I'm not sure which component pulled in containerd; I only installed cri-dockerd. This part is still fuzzy to me: some sources say using the Docker engine requires cri-dockerd, others say containerd alone suffices. One step at a time; get the cluster running first and dig deeper later.

[kubernetes@renxiaozhao02 .kube]$ sudo kubeadm join 192.168.17.17:6443  --node-name=k8s-node02 --token 4ydg4t.7cjjm52hd4p86gmk --discovery-token-ca-cert-hash sha256:ee2c3ae1c2d702b77a0b52f9dafe734aa7e25f33c44cf7fa469c1adc8c176be1
Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
To see the stack trace of this error execute with --v=5 or higher

Error detail 2: timeout, then error execution phase kubelet-start: error uploading crisocket: nodes "k8s-node01" not found

[kubernetes@renxiaozhao02 .kube]$ sudo systemctl stop kubelet
[kubernetes@renxiaozhao02 .kube]$ sudo rm -rf /etc/kubernetes/*
[kubernetes@renxiaozhao02 .kube]$ sudo kubeadm join 192.168.17.17:6443 --token urwtz8.3tvnbe2a63b3fnbl --discovery-token-ca-cert-hash sha256:ee2c3ae1c2d702b77a0b52f9dafe734aa7e25f33c44cf7fa469c1adc8c176be1  --node-name k8s-node01  --cri-socket=unix:///var/run/cri-dockerd.sock
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
[kubelet-check] Initial timeout of 40s passed.
error execution phase kubelet-start: error uploading crisocket: nodes "k8s-node01" not found
To see the stack trace of this error execute with --v=5 or higher
[kubernetes@renxiaozhao02 .kube]$ 

Fix: reset with kubeadm reset, then join again

Don't use the deletion approach above. I had previously joined this machine successfully as renxiaozhao02 (without --node-name it defaults to the hostname); although that node had since been deleted, something evidently lingered. Running kubeadm reset directly on the node machine and then joining again succeeded.

[kubernetes@renxiaozhao03 ~]$ sudo kubeadm reset --cri-socket=unix:///var/run/cri-dockerd.sock
W0830 17:11:12.123313   25230 preflight.go:56] [reset] WARNING: Changes made to this host by 'kubeadm init' or 'kubeadm join' will be reverted.
[reset] Are you sure you want to proceed? [y/N]: y
[preflight] Running pre-flight checks
W0830 17:11:14.355783   25230 removeetcdmember.go:106] [reset] No kubeadm config, using etcd pod spec to get data directory
[reset] Deleted contents of the etcd data directory: /var/lib/etcd
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Deleting contents of directories: [/etc/kubernetes/manifests /var/lib/kubelet /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]

The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d

The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually by using the "iptables" command.

If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.

The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.
[kubernetes@renxiaozhao03 ~]$ #check whether ipvs is in use (GPT's idea; let's just say it led me a long way down the wrong road)
[kubernetes@renxiaozhao03 ~]$ kubectl get svc -A | grep ipvs
[kubernetes@renxiaozhao03 ~]$ 
[kubernetes@renxiaozhao03 ~]$ 
[kubernetes@renxiaozhao03 ~]$  #check whether CNI is in use
[kubernetes@renxiaozhao03 ~]$ kubectl get svc -A | grep cni -i


Detours Taken
  • Re-running join failed because files from the previous attempt already existed; I deleted them outright, which triggered new errors
    sudo systemctl stop kubelet
    sudo rm -rf /etc/kubernetes/*
    
    Simply running sudo rm -rf /etc/kubernetes/* led to a new error complaining that /etc/kubernetes/manifests doesn't exist (from what I later read online, that directory may not exist by default on a worker, so the deletion may not even have caused it); it seems to be used mainly on the master. In the end I synced that directory over from the master; simply creating it with mkdir -p /etc/kubernetes/manifests would probably also work
    Check kubelet logs: sudo journalctl -u kubelet (another small trap: by default it doesn't wrap long lines, so use sudo journalctl -u kubelet --no-pager --follow instead)
  • Unable to register node with API server" err="nodes \"k8s-node01\" is forbidden: node \"renxiaozhao02\" is not allowed to modify node \"k8s-node01\"" node="k8s-node01"
    After syncing the directory over there were still errors; sudo journalctl -u kubelet --no-pager --follow shows them accurately
    Aug 30 15:37:47 renxiaozhao02 kubelet[9469]: I0830 15:37:47.153840    9469 kubelet_node_status.go:70] "Attempting to register node" node="k8s-node01"
    Aug 30 15:37:47 renxiaozhao02 kubelet[9469]: E0830 15:37:47.160006    9469 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes \"k8s-node01\" is forbidden: node \"renxiaozhao02\" is not allowed to modify node \"k8s-node01\"" node="k8s-node01"
    Aug 30 15:37:47 renxiaozhao02 kubelet[9469]: I0830 15:37:47.283440    9469 scope.go:117] "RemoveContainer" containerID="b0ab74aedc14936c3f0e95682f257b3a8981ecb3e9e36b590a35d65c3eafbd16"
    Aug 30 15:37:47 renxiaozhao02 kubelet[9469]: E0830 15:37:47.284025    9469 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-scheduler pod=kube-scheduler-k8s-node01_kube-system(7734ba1e9e5564ed51f1e93da8155ae7)\"" pod="kube-system/kube-scheduler-k8s-node01" podUID="7734ba1e9e5564ed51f1e93da8155ae7"
    
