
Kubernetes in Practice (9): Installing a k8s Cluster with kubeadm


1 Environment Preparation

1.1 Host Information

IP            Hostname
10.220.43.203 ops-master-1
10.220.43.204 ops-worker-1
10.220.43.205 ops-worker-2

1.2 System Information

$ cat /etc/redhat-release 
Alibaba Cloud Linux (Aliyun Linux) release 2.1903 LTS (Hunting Beagle)

2 Deployment Preparation

The following settings are required on both master and worker hosts.

2.1 Set the Hostnames

# ops-master-1
hostnamectl set-hostname ops-master-1

# ops-worker-1
hostnamectl set-hostname ops-worker-1
# ops-worker-2
hostnamectl set-hostname ops-worker-2

2.2 Configure /etc/hosts

$ vim /etc/hosts
# Add the following entries:
10.220.43.203 ops-master-1
10.220.43.204 ops-worker-1
10.220.43.205 ops-worker-2
# Save, exit, and log in to the hosts again

2.3 Network Configuration

# Bridge settings (master/node)

$ cat > /etc/sysctl.d/k8s.conf << EOF
# Enable bridge mode so bridged traffic is passed to the iptables chains
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
# Disable IPv6
net.ipv6.conf.all.disable_ipv6=1
net.ipv4.ip_forward=1
EOF
$ sysctl --system
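
Note: on some systems the net.bridge.* keys above only exist once the br_netfilter kernel module is loaded. A minimal sketch to load it now and keep it loaded across reboots (the modules-load.d path assumes a systemd-based distro such as this one):

$ modprobe br_netfilter
$ echo br_netfilter > /etc/modules-load.d/br_netfilter.conf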

3 Installation and Deployment

Install on both master and worker nodes.

3.1 Install Docker

For a binary installation of Docker, see: Docker deployment and common commands (CSDN blog).
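
If you would rather install from packages, a minimal sketch using the Aliyun docker-ce mirror (the repo URL and package name are assumptions about a CentOS-compatible host, not taken from the referenced post):

$ yum install -y yum-utils
$ yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
$ yum install -y docker-ce
$ systemctl enable --now docker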

3.2 Configure an Accelerated Kubernetes YUM Repository

Add the Aliyun domestic YUM mirror for Kubernetes.

$ cat > /etc/yum.repos.d/kubernetes.repo << EOF
[k8s]
name=k8s
enabled=1
gpgcheck=0 
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
EOF

3.3 Install kubeadm/kubelet/kubectl

# Substitute whichever version you want to install
$ yum install -y kubelet-1.21.9 kubectl-1.21.9 kubeadm-1.21.9
# kubelet cannot be started yet because it has no configuration; for now, just enable it at boot
$ systemctl enable kubelet
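
A quick sanity check that the pinned versions landed (the outputs shown are what 1.21.9 should print, not captured from these hosts):

$ kubeadm version -o short
v1.21.9
$ kubelet --version
Kubernetes v1.21.9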

3.4 Install a Container Runtime

If your k8s version is below 1.24, skip this step.

Kubernetes 1.24+ is not directly compatible with the Docker engine: Docker Engine does not implement the CRI, which is what a container runtime needs in order to work with Kubernetes. The built-in dockershim support for Docker was removed from the kubelet in 1.24, so an extra service, cri-dockerd, must be installed to keep using Docker Engine.

At the time of writing, the latest k8s release line is 1.28.x.


A container runtime must be installed on every node in the cluster so that Pods can run there. Recent Kubernetes versions require a runtime that conforms to the Container Runtime Interface (CRI).

Several container runtimes are commonly used with Kubernetes:

  • containerd
  • CRI-O
  • Docker Engine
  • Mirantis Container Runtime

The steps below use the cri-dockerd adapter to integrate Docker Engine with Kubernetes.

3.4.1 Install cri-dockerd

$ wget https://github.com/Mirantis/cri-dockerd/releases/download/v0.2.6/cri-dockerd-0.2.6.amd64.tgz
$ tar -xf cri-dockerd-0.2.6.amd64.tgz
$ cp cri-dockerd/cri-dockerd /usr/bin/
$ chmod +x /usr/bin/cri-dockerd
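
A quick check that the binary runs (the exact version string may differ slightly):

$ cri-dockerd --version
cri-dockerd 0.2.6 (...)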

3.4.2 Configure the Startup Service

$ cat <<"EOF" > /usr/lib/systemd/system/cri-docker.service
[Unit]
Description=CRI Interface for Docker Application Container Engine
Documentation=https://docs.mirantis.com
After=network-online.target firewalld.service docker.service
Wants=network-online.target
Requires=cri-docker.socket
[Service]
Type=notify
ExecStart=/usr/bin/cri-dockerd --network-plugin=cni --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.8
ExecReload=/bin/kill -s HUP $MAINPID
TimeoutSec=0
RestartSec=2
Restart=always
StartLimitBurst=3
StartLimitInterval=60s
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TasksMax=infinity
Delegate=yes
KillMode=process
[Install]
WantedBy=multi-user.target
EOF

The key line is ExecStart=/usr/bin/cri-dockerd --network-plugin=cni --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.8

The required pause image version can be checked with kubeadm config images list:

$ kubeadm config images list
W1210 17:27:44.009895   31608 version.go:104] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.txt": Get "https://cdn.dl.k8s.io/release/stable-1.txt": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
W1210 17:27:44.009935   31608 version.go:105] falling back to the local client version: v1.25.0
registry.k8s.io/kube-apiserver:v1.25.0
registry.k8s.io/kube-controller-manager:v1.25.0
registry.k8s.io/kube-scheduler:v1.25.0
registry.k8s.io/kube-proxy:v1.25.0
registry.k8s.io/pause:3.8
registry.k8s.io/etcd:3.5.4-0
registry.k8s.io/coredns/coredns:v1.9.3

3.4.3 Create the Socket File

$ cat <<"EOF" > /usr/lib/systemd/system/cri-docker.socket
[Unit]
Description=CRI Docker Socket for the API
PartOf=cri-docker.service
[Socket]
ListenStream=%t/cri-dockerd.sock
SocketMode=0660
SocketUser=root
SocketGroup=docker
[Install]
WantedBy=sockets.target
EOF

3.4.4 Start the cri-docker Service and Enable It at Boot

$ systemctl daemon-reload
$ systemctl enable cri-docker
$ systemctl start cri-docker
$ systemctl is-active cri-docker
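
To confirm the adapter is listening where kubeadm will later look for it (the socket path matches the unit files above):

$ systemctl is-active cri-docker.socket
$ ls -l /var/run/cri-dockerd.sock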

3.5 Deploy Kubernetes

Run kubeadm init on the master only; worker nodes do not execute it.

3.5.1 Check the Default Image Addresses

$ kubeadm config images list
I1216 16:58:50.892308    2873 version.go:254] remote version is much newer: v1.29.0; falling back to: stable-1.21
k8s.gcr.io/kube-apiserver:v1.21.14
k8s.gcr.io/kube-controller-manager:v1.21.14
k8s.gcr.io/kube-scheduler:v1.21.14
k8s.gcr.io/kube-proxy:v1.21.14
k8s.gcr.io/pause:3.4.1
k8s.gcr.io/etcd:3.4.13-0
k8s.gcr.io/coredns/coredns:v1.8.0

3.5.2 Switch to the Aliyun Image Repository

$ kubeadm config images list  --image-repository registry.aliyuncs.com/google_containers
I1216 16:59:02.975108    2912 version.go:254] remote version is much newer: v1.29.0; falling back to: stable-1.21
registry.aliyuncs.com/google_containers/kube-apiserver:v1.21.14
registry.aliyuncs.com/google_containers/kube-controller-manager:v1.21.14
registry.aliyuncs.com/google_containers/kube-scheduler:v1.21.14
registry.aliyuncs.com/google_containers/kube-proxy:v1.21.14
registry.aliyuncs.com/google_containers/pause:3.4.1
registry.aliyuncs.com/google_containers/etcd:3.4.13-0
registry.aliyuncs.com/google_containers/coredns:v1.8.0

3.5.3 Pull the Images

$ kubeadm config images pull  --image-repository registry.aliyuncs.com/google_containers
I1216 16:59:09.028597    2951 version.go:254] remote version is much newer: v1.29.0; falling back to: stable-1.21
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-apiserver:v1.21.14
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-controller-manager:v1.21.14
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-scheduler:v1.21.14
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-proxy:v1.21.14
[config/images] Pulled registry.aliyuncs.com/google_containers/pause:3.4.1
[config/images] Pulled registry.aliyuncs.com/google_containers/etcd:3.4.13-0
[config/images] Pulled registry.aliyuncs.com/google_containers/coredns:v1.8.0                                                   

3.5.4 Initialize Kubernetes

3.5.4.1 Command-Line Initialization
3.5.4.1.1 Version 1.24 and Above
$ kubeadm init \
--apiserver-advertise-address=10.220.43.203 \
--image-repository registry.aliyuncs.com/google_containers \
--kubernetes-version v1.25.0 \
--service-cidr=192.168.0.0/16 \
--pod-network-cidr=172.25.0.0/16 \
--ignore-preflight-errors=all \
--cri-socket unix:///var/run/cri-dockerd.sock
3.5.4.1.2 Below Version 1.24
$ kubeadm init \
--apiserver-advertise-address=10.220.43.203 \
--image-repository registry.aliyuncs.com/google_containers \
--kubernetes-version v1.21.9 \
--service-cidr=192.168.0.0/16 \
--pod-network-cidr=172.25.0.0/16 \
--ignore-preflight-errors=all 
3.5.4.2 Config-File Initialization
$ cat <<EOF > kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.20.4
controlPlaneEndpoint: "10.12.70.130:8443"  # VIP address
networking:
   podSubnet: "22.244.0.0/16"      # address range allocated to pods
   serviceSubnet: "22.96.0.0/12"   # address range allocated to services
EOF
$ kubeadm init --config=kubeadm-config.yaml --upload-certs
3.5.4.3 Parameter Notes
  • --apiserver-advertise-address: the master node's IP.
  • --pod-network-cidr: must match the pod network configured in the CNI plugin deployed later. kube-flannel.yml defaults to 10.244.0.0/16; this guide uses 172.25.0.0/16 instead, so the CNI manifest has to be adjusted to match (see the sketch below).
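
A quick way to check which subnet a downloaded kube-flannel.yml expects before applying it (the "Network" key sits in the manifest's net-conf.json section; the value shown is the upstream default):

$ grep '"Network"' kube-flannel.yml
      "Network": "10.244.0.0/16",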

Output:

[init] Using Kubernetes version: v1.25.0
[preflight] Running pre-flight checks
        [WARNING CRI]: container runtime is not running: output: time="2023-12-10T17:38:57+08:00" level=fatal msg="validate service connection: CRI v1 runtime API is not implemented for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = Unimplemented desc = unknown service runtime.v1.RuntimeService"
, error: exit status 1
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
        [WARNING ImagePull]: failed to pull image registry.aliyuncs.com/google_containers/kube-apiserver:v1.25.0: output: time="2023-12-10T17:38:57+08:00" level=fatal msg="validate service connection: CRI v1 image API is not implemented for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = Unimplemented desc = unknown service runtime.v1.ImageService"
, error: exit status 1
        [WARNING ImagePull]: failed to pull image registry.aliyuncs.com/google_containers/kube-controller-manager:v1.25.0: output: time="2023-12-10T17:38:57+08:00" level=fatal msg="validate service connection: CRI v1 image API is not implemented for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = Unimplemented desc = unknown service runtime.v1.ImageService"
, error: exit status 1
        [WARNING ImagePull]: failed to pull image registry.aliyuncs.com/google_containers/kube-scheduler:v1.25.0: output: time="2023-12-10T17:38:57+08:00" level=fatal msg="validate service connection: CRI v1 image API is not implemented for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = Unimplemented desc = unknown service runtime.v1.ImageService"
, error: exit status 1
        [WARNING ImagePull]: failed to pull image registry.aliyuncs.com/google_containers/kube-proxy:v1.25.0: output: time="2023-12-10T17:38:57+08:00" level=fatal msg="validate service connection: CRI v1 image API is not implemented for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = Unimplemented desc = unknown service runtime.v1.ImageService"
, error: exit status 1
        [WARNING ImagePull]: failed to pull image registry.aliyuncs.com/google_containers/pause:3.8: output: time="2023-12-10T17:38:57+08:00" level=fatal msg="validate service connection: CRI v1 image API is not implemented for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = Unimplemented desc = unknown service runtime.v1.ImageService"
, error: exit status 1
        [WARNING ImagePull]: failed to pull image registry.aliyuncs.com/google_containers/etcd:3.5.4-0: output: time="2023-12-10T17:38:57+08:00" level=fatal msg="validate service connection: CRI v1 image API is not implemented for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = Unimplemented desc = unknown service runtime.v1.ImageService"
, error: exit status 1
        [WARNING ImagePull]: failed to pull image registry.aliyuncs.com/google_containers/coredns:v1.9.3: output: time="2023-12-10T17:38:58+08:00" level=fatal msg="validate service connection: CRI v1 image API is not implemented for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = Unimplemented desc = unknown service runtime.v1.ImageService"
, error: exit status 1
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local master] and IPs [192.168.0.1 10.220.43.203]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost master] and IPs [10.220.43.203 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost master] and IPs [10.220.43.203 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 28.001898 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node master as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node master as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: 3u2q8d.u899qmv8lsm7sxyz
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.220.43.203:6443 --token 3u2q8d.u899qmv8lsm7sxyz \
        --discovery-token-ca-cert-hash sha256:d7b2a47417fbff13e11a50ae92aaa0666448a92eb4c8deaaae9e9aa5c0cbc930 
Because the installation runs through kubeadm init, the required Docker images are downloaded during execution, so the console often appears to hang for quite a while. That usually just means images are being pulled; run docker images to check whether new images are appearing.
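
One way to watch the pulls arrive from a second terminal (purely a convenience sketch):

$ watch -n 5 'docker images | grep google_containers'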

3.6 Test the kubectl Tool

Run on both master and worker nodes.

After kubeadm finishes, the console prints the commands to run next (the same ones shown at the end of the init output in section 3.5); follow them as-is.

3.6.1 Configure kubeconfig

Run on the master.

$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
$ scp /etc/kubernetes/admin.conf  10.220.43.204:/etc/kubernetes
root@10.220.43.204's password: 
admin.conf               100% 5641    19.2MB/s   00:00                                                        
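
The scp above only places admin.conf under /etc/kubernetes on the worker; for kubectl there to pick it up, one more step is needed (a minimal sketch, run on ops-worker-1):

$ mkdir -p $HOME/.kube
$ cp /etc/kubernetes/admin.conf $HOME/.kube/config
$ chown $(id -u):$(id -g) $HOME/.kube/config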

3.6.2 Test kubectl Commands

$ kubectl get nodes -o wide
NAME     STATUS     ROLES           AGE   VERSION   INTERNAL-IP     EXTERNAL-IP   OS-IMAGE                                                         KERNEL-VERSION            CONTAINER-RUNTIME
master   NotReady   control-plane   21m   v1.25.0   10.220.43.203   <none>        Alibaba Cloud Linux (Aliyun Linux) 2.1903 LTS (Hunting Beagle)   4.19.91-27.6.al7.x86_64   docker://20.10.21

Right after deployment the node status is NotReady; once the network plugin is installed, it changes to Ready.

3.7 Install a Network Plugin

Run on the master node; workers do not execute this step.

The commonly used CNI network plugins are Calico and Flannel. The main differences:

  • Flannel does not support complex network policies
  • Calico supports network policies

3.7.1 Install the Flannel Pod CNI Plugin

Run on the master (any host with a configured kubectl works).

$ kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
namespace/kube-flannel created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created

Error: The connection to the server raw.githubusercontent.com was refused - did you specify the right host or port?
Cause: the resource cannot be reached from inside mainland China.
Workaround: map the hostname to a reachable IP in /etc/hosts.

$ vim /etc/hosts
# Add this entry to /etc/hosts:
199.232.28.133 raw.githubusercontent.com

Re-run the command above and the installation succeeds.
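
To confirm the flannel DaemonSet came up (the kube-flannel namespace matches the apply output above):

$ kubectl -n kube-flannel get pods -o wide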

3.7.2 Deploy the Calico Pod CNI Plugin

Official docs: About Calico | Calico Documentation

3.7.2.1 Download the calico.yaml File
$ curl https://raw.githubusercontent.com/projectcalico/calico/v3.26.1/manifests/calico.yaml -O

If the download fails, map raw.githubusercontent.com in /etc/hosts as before.

$ vim /etc/hosts
# Add this entry to /etc/hosts:
199.232.28.133 raw.githubusercontent.com
3.7.2.2 Pull the Calico Images
$ grep -w image calico.yaml| uniq 
          image: docker.io/calico/cni:v3.26.1
          image: docker.io/calico/node:v3.26.1
          image: docker.io/calico/kube-controllers:v3.26.1
$ docker pull docker.io/calico/cni:v3.26.1
$ docker pull docker.io/calico/node:v3.26.1
$ docker pull docker.io/calico/kube-controllers:v3.26.1
3.7.2.3 Set the Calico Pod CIDR

In calico.yaml, set CALICO_IPV4POOL_CIDR to the same pod CIDR passed to kubeadm init. Keep the YAML indentation aligned, otherwise the apply will fail.

$ vim calico.yaml            
            - name: CALICO_IPV4POOL_CIDR
              value: "172.25.0.0/16"
3.7.2.4 Apply the calico.yaml File
$ kubectl apply -f calico.yaml
poddisruptionbudget.policy/calico-kube-controllers configured
serviceaccount/calico-kube-controllers unchanged
serviceaccount/calico-node unchanged
serviceaccount/calico-cni-plugin unchanged
configmap/calico-config unchanged
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/bgpfilters.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/caliconodestatuses.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/ipreservations.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org configured
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org configured
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers unchanged
clusterrole.rbac.authorization.k8s.io/calico-node unchanged
clusterrole.rbac.authorization.k8s.io/calico-cni-plugin unchanged
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers unchanged
clusterrolebinding.rbac.authorization.k8s.io/calico-node unchanged
clusterrolebinding.rbac.authorization.k8s.io/calico-cni-plugin unchanged
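
To confirm Calico is running, a quick check (the k8s-app=calico-node label comes from the manifest's DaemonSet):

$ kubectl -n kube-system get pods -l k8s-app=calico-node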

3.8 Join Worker Nodes to the Cluster

This step uses the join command printed at the end of the kubeadm init output in section 3.5:

$ kubeadm join 10.220.43.203:6443 --token 3u2q8d.u899qmv8lsm7sxyz \
        --discovery-token-ca-cert-hash sha256:d7b2a47417fbff13e11a50ae92aaa0666448a92eb4c8deaaae9e9aa5c0cbc930 

The actual command to run is:

$ kubeadm join 10.220.43.203:6443 --token 3u2q8d.u899qmv8lsm7sxyz \
        --discovery-token-ca-cert-hash sha256:d7b2a47417fbff13e11a50ae92aaa0666448a92eb4c8deaaae9e9aa5c0cbc930 \
        --ignore-preflight-errors=all \
        --cri-socket unix:///var/run/cri-dockerd.sock
  • --ignore-preflight-errors=all
  • --cri-socket unix:///var/run/cri-dockerd.sock

On clusters that use cri-dockerd, these two flags must be added, otherwise the join fails with errors such as:

[preflight] Running pre-flight checks
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR CRI]: container runtime is not running: output: time="2023-08-31T16:42:23+08:00" level=fatal msg="validate service connection: CRI v1 runtime API is not implemented for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = Unimplemented desc = unknown service runtime.v1.RuntimeService"
, error: exit status 1
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
To see the stack trace of this error execute with --v=5 or higher

3.9 Join Additional Master Nodes to the Cluster

Generate a new certificate key:

$ kubeadm init phase upload-certs --upload-certs
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
a96e54087b299b962dae6321e519386fd9bdb1876a6cd4067c55484a0fe0c5e0
$ kubeadm join 10.220.43.211:16443 --token zobvuq.sqr2roc558g6esvj --discovery-token-ca-cert-hash sha256:b8dab5c214d3d6d3f804d0695a11a17a0d4245b1a145d8dbd8ccf9b47e8d73d7  --control-plane --certificate-key a96e54087b299b962dae6321e519386fd9bdb1876a6cd4067c55484a0fe0c5e0
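
To regenerate the complete control-plane join command in one step, a sketch (this flag combination exists in recent kubeadm releases; substitute your own certificate key):

$ kubeadm token create --print-join-command --certificate-key a96e54087b299b962dae6321e519386fd9bdb1876a6cd4067c55484a0fe0c5e0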

3.10 Verification

On the master node:

$  kubectl get nodes -o wide
NAME           STATUS   ROLES                  AGE     VERSION   INTERNAL-IP     EXTERNAL-IP   OS-IMAGE                                                         KERNEL-VERSION            CONTAINER-RUNTIME
ops-master-1   Ready    control-plane,master   29m     v1.21.9   10.220.43.203   <none>        Alibaba Cloud Linux (Aliyun Linux) 2.1903 LTS (Hunting Beagle)   4.19.91-27.6.al7.x86_64   docker://20.10.21
ops-worker-1   Ready    <none>                 9m23s   v1.21.9   10.220.43.204   <none>        Alibaba Cloud Linux (Aliyun Linux) 2.1903 LTS (Hunting Beagle)   4.19.91-27.6.al7.x86_64   docker://20.10.21
ops-worker-2   Ready    <none>                 9m25s   v1.21.9   10.220.43.205   <none>        Alibaba Cloud Linux (Aliyun Linux) 2.1903 LTS (Hunting Beagle)   4.19.91-27.6.al7.x86_64   docker://20.10.21

4 Common Usage Questions

4.1 After kubeadm init, the join command was not recorded - how can it be recovered?

# Generate a new token; this prints a fresh join command
kubeadm token create --print-join-command
# The following command lists existing tokens
kubeadm token list
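
If only the --discovery-token-ca-cert-hash value is missing, it can be recomputed from the cluster CA with the standard openssl pipeline from the Kubernetes documentation:

$ openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | \
    openssl rsa -pubin -outform der 2>/dev/null | \
    openssl dgst -sha256 -hex | sed 's/^.* //'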

4.2 A node's kubeadm join failed - how do I re-join?

# Generate a new token; this prints a fresh join command
kubeadm token create --print-join-command
# The following command lists existing tokens
kubeadm token list
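
If the failed attempt left partial state behind, clean the node before re-joining (a minimal sketch, run on the failed node; on clusters using cri-dockerd, add --cri-socket unix:///var/run/cri-dockerd.sock to the reset as well):

$ kubeadm reset -f
$ rm -rf /etc/cni/net.d
# then re-run the kubeadm join command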

4.3 Restart kubelet

systemctl daemon-reload
systemctl restart kubelet

4.4 Query System Components

# List nodes
kubectl get nodes
# List pods; usually pass "-n" for the namespace - omitting it is equivalent to -n default
kubectl get pods -n kube-system

5 Troubleshooting

5.1 kubeadm init Errors

[root@k8s centos]# kubeadm init
I1205 06:44:01.459391   12097 version.go:94] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.txt": Get https://dl.k8s.io/release/stable-1.txt: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
I1205 06:44:01.459549   12097 version.go:95] falling back to the local client version: v1.13.0
[init] Using Kubernetes version: v1.13.0
[preflight] Running pre-flight checks
        [WARNING Service-Docker]: docker service is not enabled, please run 'systemctl enable docker.service'
        [WARNING Hostname]: hostname "k8s.novalocal" could not be reached
        [WARNING Hostname]: hostname "k8s.novalocal": lookup k8s.novalocal on 10.32.148.99:53: no such host
        [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase preflight: [preflight] Some fatal errors occurred:
        [ERROR FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables contents are not set to 1
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`

5.1.1 Network Settings Problem

5.1.1.1 Error
/proc/sys/net/bridge/bridge-nf-call-iptables contents are not set to 1
5.1.1.2 Fix
$ echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables
(the sysctl settings from section 2.3 make this persistent across reboots)

5.1.2 Enable Docker

5.1.2.1 Error
[WARNING Service-Docker]: docker service is not enabled, please run 'systemctl enable docker.service'
5.1.2.2 Fix
$ systemctl enable docker.service

5.1.3 Hostname Problem

5.1.3.1 Error
[WARNING Hostname]: hostname "slave" could not be reached
[WARNING Hostname]: hostname "slave": lookup slave on 10.32.148.99:53: no such host
5.1.3.2 Fix

1) Change the hostname

$ hostnamectl set-hostname slave

2) Update /etc/hostname to the same name

$ echo slave > /etc/hostname

5.1.4 Enable kubelet

5.1.4.1 Error
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
5.1.4.2 Fix
$ systemctl enable kubelet.service

6 Configure kubectl Tab Completion

$ kubectl --help | grep bash
  completion    Output shell completion code for the specified shell (bash or zsh)

Add source <(kubectl completion bash) to /etc/profile and apply the change:

$ cat /etc/profile | head -2
# /etc/profile
source <(kubectl completion bash)

$ source /etc/profile

Verify that kubectl can now auto-complete.

$ kubectl get nodes 
NAME           STATUS   ROLES                  AGE   VERSION
ops-master-1   Ready    control-plane,master   33m   v1.21.0
ops-worker-1   Ready    <none>                 30m   v1.21.0
ops-worker-2   Ready    <none>                 30m   v1.21.0

# Note: the bash-completion package (bash-completion-2.1-6.el7.noarch here) is required; without it, command completion does not work

$ rpm -qa | grep bash
bash-completion-2.1-6.el7.noarch
bash-4.2.46-30.el7.x86_64
bash-doc-4.2.46-30.el7.x86_64

That wraps up this walkthrough of installing a k8s cluster with kubeadm.
