
Building a Highly Available Kubernetes Cluster with nginx


This article walks through building a highly available Kubernetes cluster, with nginx acting as the load balancer in front of the kube-apiservers.

1. Environment Preparation

Server plan (virtual machines are used for this walkthrough):

IP              Hostname  Role
192.168.43.200  master    master
192.168.43.201  slave1    slave
192.168.43.202  slave2    slave
192.168.43.203  master2   master
192.168.43.165  nginx     nginx host (load balancer)

2. System Initialization (master && slave)

2.1 Disable the firewall

# Step 1
# Stop the firewall for the current session
systemctl stop firewalld
# Disable it permanently
systemctl disable firewalld

2.2 Disable SELinux

# Step 2
# Disable SELinux for the current session
setenforce 0
# Disable it permanently
sed -i '/SELINUX/s/enforcing/disabled/' /etc/selinux/config

2.3 Disable swap

# Step 3
# Disable swap for the current session
swapoff -a
# Disable it permanently
sed -ri 's/.*swap.*/#&/' /etc/fstab
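
To confirm that swap is actually off (an optional check, not part of the original steps), the following commands should both report no active swap:

# Verify that swap is disabled
swapon --show
free -m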

2.4 Set the hostnames

Use hostnamectl set-hostname <hostname> to set the hostname; the four cluster hosts are named as follows:

# Step 4
# Set the hostname (run the matching command on each host)
hostnamectl set-hostname master
hostnamectl set-hostname slave1
hostnamectl set-hostname slave2
hostnamectl set-hostname master2
# Show the current hostname
hostname

2.5 Add hosts entries

Add a hosts entry (node IP address plus node name) for every node on each machine.

# Step 5
cat >> /etc/hosts << EOF
192.168.43.200 master
192.168.43.201 slave1
192.168.43.202 slave2
192.168.43.203 master2
EOF
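
As an optional sanity check, verify from any node that the names added above resolve to the right addresses:

# Each hostname should answer from the IP listed in /etc/hosts
ping -c 1 master
ping -c 1 slave1
ping -c 1 slave2
ping -c 1 master2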

2.6 Pass bridged IPv4 traffic to iptables chains

# Step 6
# Configure the kernel parameters
cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
# Apply the settings
sysctl --system
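
If the two bridge parameters do not show up in the output of sysctl --system, the br_netfilter kernel module is probably not loaded yet. Loading it explicitly is a common extra step (an assumption here, not part of the original article):

# Load br_netfilter and make it persistent across reboots
modprobe br_netfilter
echo "br_netfilter" > /etc/modules-load.d/br_netfilter.conf
# Re-check the values
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables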

2.7 Time synchronization

Keep the time on every node (virtual machine) in sync with the host machine.

# Step 7
yum install ntpdate -y
ntpdate time.windows.com

Note: whether a virtual machine was shut down or merely suspended, re-run the time sync each time before continuing to work on it.
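
If you would rather not re-run ntpdate by hand after every resume, one possible approach (an assumption, not from the original article) is to sync periodically from cron:

# Append a cron entry that syncs the clock every 30 minutes
(crontab -l 2>/dev/null; echo "*/30 * * * * /usr/sbin/ntpdate time.windows.com >/dev/null 2>&1") | crontab -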

3. Install Docker (all nodes)

3.1 Remove old versions

# Step 8
yum remove docker \
                  docker-client \
                  docker-client-latest \
                  docker-common \
                  docker-latest \
                  docker-latest-logrotate \
                  docker-logrotate \
                  docker-engine

3.2 Configure the Docker package repository

# Step 9
# The default repository is hosted overseas; use the Aliyun mirror instead
yum-config-manager \
    --add-repo \
    http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

3.3 Install required packages

# Step 10
yum install -y yum-utils \
  device-mapper-persistent-data \
  lvm2

3.4 Refresh the yum package index

# Step 11
# Refresh the yum package index
yum makecache fast

3.5 Install the Docker engine

# Step 12
# Install a specific version
# List the available versions first
yum list docker-ce --showduplicates | sort -r
yum install docker-ce-<VERSION_STRING> docker-ce-cli-<VERSION_STRING> containerd.io
yum install docker-ce-20.10.21 docker-ce-cli-20.10.21 containerd.io
# Or install the latest version
yum install docker-ce docker-ce-cli containerd.io

3.6 Start Docker

# Step 13
systemctl enable docker && systemctl start docker

3.7 Configure a Docker registry mirror

# Step 14
vim /etc/docker/daemon.json
{
  "registry-mirrors": ["https://b9pmyelo.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
# Restart Docker
systemctl restart docker

3.8 Check that the mirror configuration took effect

# Step 15
docker info
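
In the docker info output you should see the Aliyun address under "Registry Mirrors" and systemd reported as the cgroup driver. To filter just those two items (optional):

# Show only the registry mirror and the cgroup driver
docker info | grep -A 1 "Registry Mirrors"
docker info | grep -i "cgroup driver"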

3.9 Verify the Docker version

# Step 16
docker -v

3.10 Other Docker commands

# Stop Docker
systemctl stop docker

# Check Docker status
systemctl status docker

3.11 Uninstall Docker

yum remove docker-ce-20.10.21 docker-ce-cli-20.10.21 containerd.io
rm -rf /var/lib/docker
rm -rf /var/lib/containerd

4. Add the Aliyun Kubernetes yum repository (master && slave)

Run this on every cluster node; the nginx node does not need it.

# Step 17
cat > /etc/yum.repos.d/kubernetes.repo << EOF
[Kubernetes]
name=kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
       https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

5. Install kubeadm, kubelet and kubectl (master && slave)

Run this on every cluster node; the nginx node does not need it.

# Step 18
yum install -y kubelet-1.21.0 kubeadm-1.21.0 kubectl-1.21.0 --disableexcludes=kubernetes

6. Enable the kubelet service (master && slave)

Run this on every cluster node; the nginx node does not need it.

# Step 19
systemctl enable kubelet && systemctl start kubelet

7. Install Nginx on the nginx node (nginx node)

Nginx is deployed here as a Docker container; the following steps only need to be performed on the nginx node.

# Step 20
# Set the hostname and disable the firewall on the nginx node
hostnamectl set-hostname nginx

systemctl stop firewalld
systemctl disable firewalld
# Step 21
# Pull the nginx image
[root@nginx ~]# docker pull nginx:1.17.2
1.17.2: Pulling from library/nginx
1ab2bdfe9778: Pull complete
c88f4a4e0a55: Pull complete
1a18b1b95ce1: Pull complete
Digest: sha256:5411d8897c3da841a1f45f895b43ad4526eb62d3393c3287124a56be49962d41
Status: Downloaded newer image for nginx:1.17.2
docker.io/library/nginx:1.17.2
# Step 22
# Create the configuration file
[root@nginx ~]# mkdir -p /data/nginx && cd /data/nginx
[root@nginx nginx]# vim nginx-lb.conf
user  nginx;
worker_processes  2; # adjust to the number of CPU cores on the server
error_log  /var/log/nginx/error.log warn;
pid        /var/run/nginx.pid;
events {
    worker_connections  8192;
}
stream {
    upstream apiserver {
        server 192.168.43.200:6443 weight=5 max_fails=3 fail_timeout=30s; # master apiserver IP and port
        server 192.168.43.203:6443 weight=5 max_fails=3 fail_timeout=30s; # master2 apiserver IP and port
    }
    server {
        listen 8443;  # listening port
        proxy_pass apiserver;
    }
}
# Step 23
# Start the container
[root@nginx nginx]# docker run -d --restart=unless-stopped -p 8443:8443 -v /data/nginx/nginx-lb.conf:/etc/nginx/nginx.conf --name nginx-lb --hostname nginx-lb nginx:1.17.2
fd9d945c1ae1c39ab6aa9da3675a523694a8ef1aaf687ad6d1509abc0b21b822
# Step 24
# Check that the container is running
[root@nginx nginx]# docker ps | grep nginx-lb
fd9d945c1ae1   nginx:1.17.2   "nginx -g 'daemon of…"   22 seconds ago   Up 21 seconds   80/tcp, 0.0.0.0:8443->8443/tcp   nginx-lb
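
At this point there are no API servers behind the upstream yet, but you can already confirm that the container accepted the configuration and that port 8443 is listening (an optional check):

# Validate the mounted configuration inside the container and confirm the listener on the host
docker exec nginx-lb nginx -t
ss -tlnp | grep 8443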

8. Deploy the Kubernetes masters

8.1 Initialize the cluster with kubeadm (master node)

Initialization of version 1.21.0 fails because the Aliyun repository does not contain the coredns/coredns image; that is, the image registry.aliyuncs.com/google_containers/coredns/coredns:v1.8.0 does not exist.

Workaround:

# Step 25
# Run on the master node
# Do this before kubeadm init, otherwise initialization fails because the image cannot be found
[root@master ~]# docker pull coredns/coredns:1.8.0
1.8.0: Pulling from coredns/coredns
c6568d217a00: Pull complete
5984b6d55edf: Pull complete
Digest: sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e
Status: Downloaded newer image for coredns/coredns:1.8.0
docker.io/coredns/coredns:1.8.0
[root@master ~]# docker tag coredns/coredns:1.8.0 registry.aliyuncs.com/google_containers/coredns/coredns:v1.8.0
[root@master ~]# docker rmi coredns/coredns:1.8.0
Untagged: coredns/coredns:1.8.0
Untagged: coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e
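
Optionally confirm that the re-tagged image is now present under the name kubeadm will look for:

# The image should be listed with the registry.aliyuncs.com/google_containers prefix
docker images | grep coredns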

Run the following command on the master node, replacing the advertise address (the master node IP), the kubeadm version and the --control-plane-endpoint value with the ones for your own environment.

# Step 26
# Run on the master node
[root@master ~]# kubeadm init \
 --apiserver-advertise-address=192.168.43.200 \
 --image-repository registry.aliyuncs.com/google_containers \
 --control-plane-endpoint=192.168.43.165:8443 \
 --kubernetes-version v1.21.0 \
 --service-cidr=10.96.0.0/12 \
 --pod-network-cidr=10.244.0.0/16
[init] Using Kubernetes version: v1.21.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local master] and IPs [10.96.0.1 192.168.43.200 192.168.43.165]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost master] and IPs [192.168.43.200 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost master] and IPs [192.168.43.200 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "admin.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[apiclient] All control plane components are healthy after 106.045002 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.21" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node master as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: fa1p76.qfwoidudtbxes0o5
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

  kubeadm join 192.168.43.165:8443 --token fa1p76.qfwoidudtbxes0o5 \
        --discovery-token-ca-cert-hash sha256:644548f3c2f5d5961bb7630bdcf4f4908c3be42185a544f3855ca7b21c98f0eb \
        --control-plane

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.43.165:8443 --token fa1p76.qfwoidudtbxes0o5 \
        --discovery-token-ca-cert-hash sha256:644548f3c2f5d5961bb7630bdcf4f4908c3be42185a544f3855ca7b21c98f0eb

Check the output of the command: the line "Your Kubernetes control-plane has initialized successfully!" confirms that the Kubernetes control plane on the master node has been set up.
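
Note that the bootstrap token shown in the join commands above is only valid for 24 hours. If it has expired by the time you add a node, a fresh worker join command can be generated on the master (standard kubeadm behaviour, not shown in the original output):

# Print a new join command with a freshly created token
kubeadm token create --print-join-command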

8.2 Enable the kubectl tool (master node)

# Step 27
# Run on the master node
[root@master ~]# mkdir -p $HOME/.kube
[root@master ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@master ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config

Check the cluster nodes:

# Step 28
# Run on the master node
[root@master ~]# kubectl get nodes
NAME     STATUS   ROLES                  AGE    VERSION
master   NotReady    control-plane,master   3m7s   v1.21.0

8.3 Join the slave nodes to the cluster (slave nodes)

# Step 29
# Run on the slave1 node
[root@slave1 ~]# kubeadm join 192.168.43.165:8443 --token fa1p76.qfwoidudtbxes0o5         --discovery-token-ca-cert-hash sha256:644548f3c2f5d5961bb7630bdcf4f4908c3be42185a544f3855ca7b21c98f0eb
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
# Step 30
# Run on the slave2 node
[root@slave2 ~]# kubeadm join 192.168.43.165:8443 --token fa1p76.qfwoidudtbxes0o5         --discovery-token-ca-cert-hash sha256:644548f3c2f5d5961bb7630bdcf4f4908c3be42185a544f3855ca7b21c98f0eb
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

Check the cluster nodes:

# Step 31
# Run on the master node
[root@master ~]# kubectl get nodes
NAME     STATUS      ROLES                  AGE     VERSION
master   NotReady    control-plane,master   7m46s   v1.21.0
slave1   NotReady    <none>                 89s     v1.21.0
slave2   NotReady    <none>                 81s     v1.21.0

8.4 Join master2 to the cluster (master2 node)

# Step 32
# Run on the master2 node
# Pull the images
[root@master2 ~]# docker pull registry.aliyuncs.com/google_containers/kube-apiserver:v1.21.0
[root@master2 ~]# docker pull registry.aliyuncs.com/google_containers/kube-controller-manager:v1.21.0
[root@master2 ~]# docker pull registry.aliyuncs.com/google_containers/kube-scheduler:v1.21.0
[root@master2 ~]# docker pull registry.aliyuncs.com/google_containers/kube-proxy:v1.21.0
[root@master2 ~]# docker pull registry.aliyuncs.com/google_containers/pause:3.4.1
[root@master2 ~]# docker pull registry.aliyuncs.com/google_containers/etcd:3.4.13-0
# In Kubernetes 1.21.0 the Aliyun registry does not provide registry.aliyuncs.com/google_containers/coredns/coredns:v1.8.0, so pull the image from elsewhere and re-tag it
[root@master2 ~]# docker pull coredns/coredns:1.8.0
[root@master2 ~]# docker tag coredns/coredns:1.8.0 registry.aliyuncs.com/google_containers/coredns/coredns:v1.8.0
[root@master2 ~]# docker rmi coredns/coredns:1.8.0

Copy the certificates:

# Step 33
# Run on the master2 node
# Create the target directory
[root@master2 ~]# mkdir -p /etc/kubernetes/pki/etcd
# Step 34
# Run on the master node
# Copy the certificates from the master node to master2
[root@master ~]# scp -rp /etc/kubernetes/pki/ca.* master2:/etc/kubernetes/pki
[root@master ~]# scp -rp /etc/kubernetes/pki/sa.* master2:/etc/kubernetes/pki
[root@master ~]# scp -rp /etc/kubernetes/pki/front-proxy-ca.* master2:/etc/kubernetes/pki
[root@master ~]# scp -rp /etc/kubernetes/pki/etcd/ca.* master2:/etc/kubernetes/pki/etcd
[root@master ~]# scp -rp /etc/kubernetes/admin.conf master2:/etc/kubernetes
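
As an alternative to copying the certificates with scp (not used in this article), kubeadm can distribute the control-plane certificates itself; a hedged sketch of that flow, with placeholder values in angle brackets:

# On master: upload the control-plane certificates as a temporary secret and print the certificate key
kubeadm init phase upload-certs --upload-certs
# On master2: join as a control plane, passing the key printed above
# kubeadm join 192.168.43.165:8443 --token <token> \
#     --discovery-token-ca-cert-hash sha256:<hash> \
#     --control-plane --certificate-key <certificate-key>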

Join the cluster:

# Step 35
# Run on the master2 node
[root@master2 ~]# kubeadm join 192.168.43.165:8443 --token fa1p76.qfwoidudtbxes0o5         --discovery-token-ca-cert-hash sha256:644548f3c2f5d5961bb7630bdcf4f4908c3be42185a544f3855ca7b21c98f0eb         --control-plane
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks before initializing the new control plane instance
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost master2] and IPs [192.168.43.203 127.0.0.1 ::1]
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost master2] and IPs [192.168.43.203 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local master2] and IPs [10.96.0.1 192.168.43.203 192.168.43.165]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[certs] Using the existing "sa" key
[kubeconfig] Generating kubeconfig files
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/admin.conf"
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[check-etcd] Checking that the etcd cluster is healthy
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
[etcd] Announced new etcd member joining to the existing etcd cluster
[etcd] Creating static Pod manifest for "etcd"
[etcd] Waiting for the new etcd member to join the cluster. This can take up to 40s
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[mark-control-plane] Marking the node master2 as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node master2 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]

This node has joined the cluster and a new control plane instance was created:

* Certificate signing request was sent to apiserver and approval was received.
* The Kubelet was informed of the new secure connection details.
* Control plane (master) label and taint were applied to the new node.
* The Kubernetes control plane instances scaled up.
* A new etcd member was added to the local/stacked etcd cluster.

To start administering your cluster from this node, you need to run the following as a regular user:

        mkdir -p $HOME/.kube
        sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
        sudo chown $(id -u):$(id -g) $HOME/.kube/config

Run 'kubectl get nodes' to see this node join the cluster.
# Step 36
# Run on the master2 node
[root@master2 ~]# mkdir -p $HOME/.kube
[root@master2 ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@master2 ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config

Check the nodes:

# Step 37
# Run on the master node
[root@master ~]# kubectl get nodes
NAME      STATUS      ROLES                  AGE     VERSION
master    NotReady    control-plane,master   11m     v1.21.0
master2   NotReady    control-plane,master   68s     v1.21.0
slave1    NotReady    <none>                 5m1s    v1.21.0
slave2    NotReady    <none>                 4m53s   v1.21.0
# Step 38
# Run on the master2 node
[root@master2 ~]# kubectl get nodes
NAME      STATUS      ROLES                  AGE     VERSION
master    NotReady    control-plane,master   11m     v1.21.0
master2   NotReady    control-plane,master   68s     v1.21.0
slave1    NotReady    <none>                 5m1s    v1.21.0
slave2    NotReady    <none>                 4m53s   v1.21.0

Note: because the network plugin has not been deployed yet, none of the nodes are ready and they all show NotReady. The network plugin is installed next.

9. Install the flannel network plugin (master node)

Check the cluster state:

# Step 39
# Run on the master node
[root@master ~]# kubectl get nodes
NAME      STATUS   ROLES                  AGE     VERSION
master    NotReady    control-plane,master   13m     v1.21.0
master2   NotReady    control-plane,master   2m50s   v1.21.0
slave1    NotReady    <none>                 6m43s   v1.21.0
slave2    NotReady    <none>                 6m35s   v1.21.0

[root@master ~]# kubectl get pods --all-namespaces
NAMESPACE     NAME                              READY   STATUS    RESTARTS   AGE
kube-system   coredns-545d6fc579-2cp4q          1/1     Running   0          12m
kube-system   coredns-545d6fc579-nv2bx          1/1     Running   0          12m
kube-system   etcd-master                       1/1     Running   0          12m
kube-system   etcd-master2                      1/1     Running   0          2m53s
kube-system   kube-apiserver-master             1/1     Running   1          13m
kube-system   kube-apiserver-master2            1/1     Running   0          2m56s
kube-system   kube-controller-manager-master    1/1     Running   1          12m
kube-system   kube-controller-manager-master2   1/1     Running   0          2m56s
kube-system   kube-proxy-6dtsk                  1/1     Running   0          2m57s
kube-system   kube-proxy-hc5tl                  1/1     Running   0          6m50s
kube-system   kube-proxy-kc824                  1/1     Running   0          6m42s
kube-system   kube-proxy-mltbt                  1/1     Running   0          12m
kube-system   kube-scheduler-master             1/1     Running   1          12m
kube-system   kube-scheduler-master2            1/1     Running   0          2m57
# Step 40
# Run on the master node
# Download the flannel manifest
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
# If that address is unreachable, fetch the manifest from the flannel project's current repository instead
# (use the raw URL so wget downloads the YAML itself rather than a GitHub HTML page)
wget https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml
# Step 41
# Run on the master node
# Edit kube-flannel.yml: the Network value must match the --pod-network-cidr used during kubeadm init
net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
# Step 42
# Run on the master node
[root@master ~]# kubectl apply -f kube-flannel.yml
namespace/kube-flannel created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created
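
It can take a minute or two for the flannel pods to pull their image and start on every node. One way to watch the rollout (optional):

# Wait until every kube-flannel pod reports Running
kubectl get pods -n kube-flannel -w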

Check the node status:

# Step 43
# Run on the master node
[root@master ~]# kubectl get nodes
NAME      STATUS   ROLES                  AGE     VERSION
master    Ready    control-plane,master   15m     v1.21.0
master2   Ready    control-plane,master   4m58s   v1.21.0
slave1    Ready    <none>                 8m51s   v1.21.0
slave2    Ready    <none>                 8m43s   v1.21.0
# Step 44
# Run on the master2 node
[root@master2 ~]# kubectl get nodes
NAME      STATUS   ROLES                  AGE     VERSION
master    Ready    control-plane,master   15m     v1.21.0
master2   Ready    control-plane,master   4m58s   v1.21.0
slave1    Ready    <none>                 8m51s   v1.21.0
slave2    Ready    <none>                 8m43s   v1.21.0

Check the pods:

# Step 45
# Run on the master node
[root@master ~]# kubectl get pods --all-namespaces
NAMESPACE      NAME                              READY   STATUS    RESTARTS   AGE
kube-flannel   kube-flannel-ds-2c8np             1/1     Running   0          53s
kube-flannel   kube-flannel-ds-2zrrm             1/1     Running   0          53s
kube-flannel   kube-flannel-ds-blr77             1/1     Running   0          53s
kube-flannel   kube-flannel-ds-llxlh             1/1     Running   0          53s
kube-system    coredns-545d6fc579-2cp4q          1/1     Running   0          15m
kube-system    coredns-545d6fc579-nv2bx          1/1     Running   0          15m
kube-system    etcd-master                       1/1     Running   0          15m
kube-system    etcd-master2                      1/1     Running   0          5m20s
kube-system    kube-apiserver-master             1/1     Running   1          15m
kube-system    kube-apiserver-master2            1/1     Running   0          5m23s
kube-system    kube-controller-manager-master    1/1     Running   1          15m
kube-system    kube-controller-manager-master2   1/1     Running   0          5m23s
kube-system    kube-proxy-6dtsk                  1/1     Running   0          5m24s
kube-system    kube-proxy-hc5tl                  1/1     Running   0          9m17s
kube-system    kube-proxy-kc824                  1/1     Running   0          9m9s
kube-system    kube-proxy-mltbt                  1/1     Running   0          15m
kube-system    kube-scheduler-master             1/1     Running   1          15m
kube-system    kube-scheduler-master2            1/1     Running   0          5m24s
# Step 46
# Run on the master2 node
[root@master2 ~]# kubectl get pods --all-namespaces
NAMESPACE      NAME                              READY   STATUS    RESTARTS   AGE
kube-flannel   kube-flannel-ds-2c8np             1/1     Running   0          53s
kube-flannel   kube-flannel-ds-2zrrm             1/1     Running   0          53s
kube-flannel   kube-flannel-ds-blr77             1/1     Running   0          53s
kube-flannel   kube-flannel-ds-llxlh             1/1     Running   0          53s
kube-system    coredns-545d6fc579-2cp4q          1/1     Running   0          15m
kube-system    coredns-545d6fc579-nv2bx          1/1     Running   0          15m
kube-system    etcd-master                       1/1     Running   0          15m
kube-system    etcd-master2                      1/1     Running   0          5m20s
kube-system    kube-apiserver-master             1/1     Running   1          15m
kube-system    kube-apiserver-master2            1/1     Running   0          5m23s
kube-system    kube-controller-manager-master    1/1     Running   1          15m
kube-system    kube-controller-manager-master2   1/1     Running   0          5m23s
kube-system    kube-proxy-6dtsk                  1/1     Running   0          5m24s
kube-system    kube-proxy-hc5tl                  1/1     Running   0          9m17s
kube-system    kube-proxy-kc824                  1/1     Running   0          9m9s
kube-system    kube-proxy-mltbt                  1/1     Running   0          15m
kube-system    kube-scheduler-master             1/1     Running   1          15m
kube-system    kube-scheduler-master2            1/1     Running   0          5m24s

10. Testing

# Step 47
[root@master ~]# curl -k https://192.168.43.165:8443/version
[root@slave1 ~]# curl -k https://192.168.43.165:8443/version
[root@slave2 ~]# curl -k https://192.168.43.165:8443/version
[root@master2 ~]# curl -k https://192.168.43.165:8443/version
{
  "major": "1",
  "minor": "21",
  "gitVersion": "v1.21.0",
  "gitCommit": "cb303e613a121a29364f75cc67d3d580833a7479",
  "gitTreeState": "clean",
  "buildDate": "2021-04-08T16:25:06Z",
  "goVersion": "go1.16.1",
  "compiler": "gc",
  "platform": "linux/amd64"
}
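
To check that the cluster really survives the loss of one control plane, a rough failover test (an assumption about how you might verify it, not part of the original article) is to stop the API server on master and repeat the request through the nginx endpoint:

# On master: temporarily remove the kube-apiserver static pod manifest to stop the local API server
mv /etc/kubernetes/manifests/kube-apiserver.yaml /tmp/
# From any node: the request should still succeed, now answered by master2 via nginx
curl -k https://192.168.43.165:8443/version
# Restore the manifest afterwards
mv /tmp/kube-apiserver.yaml /etc/kubernetes/manifests/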

At this point, a highly available Kubernetes cluster has been stood up quickly with kubeadm, fronted by an nginx load balancer.
