
Building a Highly Available Kubernetes Cluster with nginx + keepalived


This article uses nginx + keepalived to build a highly available Kubernetes cluster.

When nginx is used as a software load balancer in front of application servers, keepalived can float a virtual IP (VIP) between the primary and backup nodes. The VIP must be created when the servers are provisioned.

1) When the nginx service on the primary node cannot start, or the primary server goes down, the VIP floats to the backup node.

2) When the primary node recovers (the server is up and both keepalived and nginx are running normally), the backup node returns to backup state and releases the VIP, and the VIP floats back to the primary node.

Under normal circumstances this switchover is transparent to front-end users.

1、環(huán)境準備

服務器規(guī)劃(本實驗采用虛擬機):

IP                       hostname            Description
192.168.43.200           master              master
192.168.43.201           slave1              slave
192.168.43.202           slave2              slave
192.168.43.203           master2             master
192.168.43.200 (reused)  nginx+keepalived    nginx+keepalived
192.168.43.203 (reused)  nginx+keepalived    nginx+keepalived
192.168.43.205 (VIP)     VIP                 VIP

2、系統(tǒng)初始化(master&&slave)

2.1 Disable the firewall

# Step 1
# Stop now (temporary)
systemctl stop firewalld
# Disable permanently
systemctl disable firewalld

2.2 Disable SELinux

# Step 2
# Disable temporarily
setenforce 0
# Disable permanently
sed -i '/SELINUX/s/enforcing/disabled/' /etc/selinux/config

2.3 Disable swap

# Step 3
# Disable temporarily
swapoff -a
# Disable permanently
sed -ri 's/.*swap.*/#&/' /etc/fstab
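
A quick way to confirm that swap is really off (a small optional check, not part of the original steps):

# free should report Swap totals of 0, and swapon -s should print nothing
free -h
swapon -s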

2.4 Set the hostnames

Use hostnamectl set-hostname <hostname> to set each host's name; the four hosts are named as follows:

# Step 4
# Set the hostname (run the matching command on each host)
hostnamectl set-hostname master
hostnamectl set-hostname slave1
hostnamectl set-hostname slave2
hostnamectl set-hostname master2
# Check the current hostname
hostname

2.5 Add hosts entries

在每個節(jié)點中添加 hosts,即節(jié)點IP地址+節(jié)點名稱。

# Step 5
cat >> /etc/hosts << EOF
192.168.43.200 master
192.168.43.201 slave1
192.168.43.202 slave2
192.168.43.203 master2
EOF
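
To confirm that the new entries resolve from every node, a quick loop such as the following can be used (a sketch based on the hostnames defined above):

# Run on any node; each host should report "reachable"
for h in master slave1 slave2 master2; do
    ping -c 1 -W 1 "$h" > /dev/null && echo "$h reachable" || echo "$h UNREACHABLE"
done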

2.6 Pass bridged IPv4 traffic to the iptables chains

# Step 6
# Configure
cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
# Apply the settings
sysctl --system
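
On a minimal CentOS 7 install the br_netfilter kernel module may not be loaded yet, in which case the two keys above do not exist and sysctl --system silently skips them. A sketch that loads the module now and at every boot (the file name k8s.conf is arbitrary):

# Load the module immediately
modprobe br_netfilter
# Load it automatically at boot
cat > /etc/modules-load.d/k8s.conf << EOF
br_netfilter
EOF
# Re-apply and verify; the last command should print "net.bridge.bridge-nf-call-iptables = 1"
sysctl --system
sysctl net.bridge.bridge-nf-call-iptables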

2.7 Time synchronization

Keep the clocks of all nodes (virtual machines) in sync.

# Step 7
yum install ntpdate -y
ntpdate time.windows.com

Note: whether a virtual machine was shut down or suspended, re-sync its clock every time you resume working with it.

3. Installing Docker (master && slave)

3.1 Remove old versions

# Step 8
yum remove docker \
                  docker-client \
                  docker-client-latest \
                  docker-common \
                  docker-latest \
                  docker-latest-logrotate \
                  docker-logrotate \
                  docker-engine

3.2 Configure the package repository

# Step 9
# The default repository is hosted overseas; the Aliyun mirror is used here
yum-config-manager \
    --add-repo \
    http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

3.3 Install the required packages

# Step 10
yum install -y yum-utils \
  device-mapper-persistent-data \
  lvm2

3.4 Refresh the yum package index

# Step 11
# Refresh the yum package index
yum makecache fast

3.5 Install the Docker engine

# Step 12
# Install a specific version
# List the available versions
yum list docker-ce --showduplicates | sort -r
yum install docker-ce-<VERSION_STRING> docker-ce-cli-<VERSION_STRING> containerd.io
yum install docker-ce-20.10.21 docker-ce-cli-20.10.21 containerd.io
# Or install the latest version
yum install docker-ce docker-ce-cli containerd.io

3.6 Start Docker

# Step 13
systemctl enable docker && systemctl start docker

3.7 Configure a Docker registry mirror

# Step 14
vim /etc/docker/daemon.json
{
  "registry-mirrors": ["https://b9pmyelo.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
# Restart Docker
systemctl restart docker

3.8 Check that the mirror has taken effect

# Step 15
docker info
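
docker info prints a lot of output; to look only at the values configured in step 14, a filter like this can be used (a sketch):

# Both values come from /etc/docker/daemon.json
docker info 2> /dev/null | grep -A 1 "Registry Mirrors"
docker info 2> /dev/null | grep "Cgroup Driver"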

3.9 Verify the Docker version

# Step 16
docker -v

3.10 Other Docker commands

# Stop Docker
systemctl stop docker

# Check Docker status
systemctl status docker

3.11 Uninstalling Docker

yum remove docker-ce-20.10.21 docker-ce-cli-20.10.21 containerd.io
rm -rf /var/lib/docker
rm -rf /var/lib/containerd

4. Add the Aliyun Kubernetes yum Repository (master && slave)

所有節(jié)點都需要執(zhí)行,nginx節(jié)點不需要執(zhí)行。

# Step 17
cat > /etc/yum.repos.d/kubernetes.repo << EOF
[Kubernetes]
name=kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

5. Install kubeadm, kubelet, and kubectl (master && slave)

所有節(jié)點都需要執(zhí)行,nginx節(jié)點不需要執(zhí)行。

# Step 18
yum install -y kubelet-1.21.0 kubeadm-1.21.0 kubectl-1.21.0 --disableexcludes=kubernetes

6. Enable the kubelet Service (master && slave)

所有節(jié)點都需要執(zhí)行,nginx節(jié)點不需要執(zhí)行。

# Step 19
systemctl enable kubelet && systemctl start kubelet
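
Note that kubelet will keep restarting until the node has been initialized with kubeadm init or joined with kubeadm join; that is expected at this stage. A quick check of the installed versions (a sketch):

kubeadm version -o short    # expected: v1.21.0
kubelet --version           # expected: Kubernetes v1.21.0
kubectl version --client    # client version only; the cluster does not exist yet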

7. Install nginx + keepalived (master && master2)

  • Nginx is a mainstream web server and reverse proxy; here it provides layer-4 load balancing for the apiservers.

  • Keepalived is a mainstream high-availability tool that implements active/standby failover by binding a VIP. Keepalived decides whether to fail over (move the VIP) based on the running state of Nginx: if the Nginx primary node goes down, the VIP is automatically bound to the Nginx backup node, so the VIP stays reachable and Nginx remains highly available.

  • On a public cloud, keepalived is generally not supported; in that case use the provider's load-balancer product to balance traffic across the master kube-apiservers directly.

The following steps are performed on both master nodes.

7.1 Install the packages (master/master2)

# Step 20
yum install epel-release -y
yum install nginx keepalived -y

7.2 Nginx configuration file (identical on master and master2; the two masters act as primary and backup)

# Step 21
cat > /etc/nginx/nginx.conf << "EOF"
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;

include /usr/share/nginx/modules/*.conf;

events {
    worker_connections 1024;
}

# Layer-4 load balancing for the two master apiservers
stream {

    log_format  main  '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent';

    access_log  /var/log/nginx/k8s-access.log  main;

    upstream k8s-apiserver {
       server 192.168.43.200:6443;   # master APISERVER IP:PORT
       server 192.168.43.203:6443;   # master2 APISERVER IP:PORT
    }
    
    server {
       listen 16443; # nginx shares these hosts with the masters, so this port cannot be 6443 or it would conflict with the apiserver
       proxy_pass k8s-apiserver;
    }
}

http {
    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  /var/log/nginx/access.log  main;

    sendfile            on;
    tcp_nopush          on;
    tcp_nodelay         on;
    keepalive_timeout   65;
    types_hash_max_size 2048;

    include             /etc/nginx/mime.types;
    default_type        application/octet-stream;

    server {
        listen       80 default_server;
        server_name  _;

        location / {
        }
    }
}
EOF
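
Before starting nginx it is worth validating this file. If the "stream" directive is reported as unknown, the installed nginx was built without the stream module; section 7.4 below covers rebuilding nginx with --with-stream. A minimal check:

nginx -t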

7.3 keepalived configuration file (identical on master and master2)

# Step 22
cat > /etc/keepalived/keepalived.conf << EOF
global_defs { 
   notification_email { 
     acassen@firewall.loc 
     failover@firewall.loc 
     sysadmin@firewall.loc 
   } 
   notification_email_from Alexandre.Cassen@firewall.loc  
   smtp_server 127.0.0.1 
   smtp_connect_timeout 30 
   router_id NGINX_MASTER
} 

vrrp_script check_nginx {
    script "/etc/keepalived/check_nginx.sh"
}

vrrp_instance VI_1 { 
    state MASTER 
    interface ens33  # change to the actual NIC name
    virtual_router_id 51 # VRRP router ID; each instance must be unique
    priority 100    # priority; set 90 on the backup server
    advert_int 1    # VRRP advertisement (heartbeat) interval, default 1 second
    authentication { 
        auth_type PASS      
        auth_pass 1111 
    }  
    # Virtual IP
    virtual_ipaddress { 
        192.168.43.205/24 # virtual IP
    } 
    track_script {
        check_nginx
    } 
}
EOF

  • vrrp_script: specifies the script that checks the nginx running state (keepalived decides whether to fail over based on the nginx state)

  • virtual_ipaddress: the virtual IP (VIP)
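
Although the file above is described as identical on both nodes, the inline comments (and the keepalived log in section 7.5, where master2 reports "ours 90") suggest the backup node uses a lower priority and, typically, state BACKUP. A sketch of the adjustment that could be applied on master2 only (an assumption; the rest of the file stays the same):

# Hypothetical adjustment, run on master2 only
sed -i 's/state MASTER/state BACKUP/' /etc/keepalived/keepalived.conf
sed -i 's/priority 100/priority 90/' /etc/keepalived/keepalived.conf
sed -i 's/router_id NGINX_MASTER/router_id NGINX_BACKUP/' /etc/keepalived/keepalived.conf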

Prepare the script, referenced in the configuration above, that checks whether Nginx is running:

# Step 23
cat > /etc/keepalived/check_nginx.sh  << "EOF"
#!/bin/bash
count=$(ps -C nginx --no-heading | wc -l)

if [ "$count" -eq 0 ];then
    exit 1
else
    exit 0
fi
EOF

chmod +x /etc/keepalived/check_nginx.sh

Note: keepalived decides whether to fail over based on the script's exit code (0 means healthy, non-zero means unhealthy).

7.4 Add the stream module to Nginx (performed on master2)

7.4.1 Check the Nginx version and compiled-in modules

If nginx was already built with the --with-stream module, the rest of this section can be skipped.

# Step 24
[root@k8s-master2 nginx-1.20.1]# nginx -V
nginx version: nginx/1.20.1
built by gcc 4.8.5 20150623 (Red Hat 4.8.5-44) (GCC) 
configure arguments: --prefix=/usr/share/nginx --sbin-path=/usr/sbin/nginx --modules-path=/usr/lib64/nginx/modules --conf-path=/etc/nginx/nginx.conf --with-stream
# --with-stream means the stream module is compiled in

7.4.2 Download the same version of nginx

Download page: http://nginx.org/download/

The version downloaded here: http://nginx.org/download/nginx-1.20.1.tar.gz

7.4.3 Back up the original Nginx files

# Step 25
mv /usr/sbin/nginx /usr/sbin/nginx.bak
cp -r /etc/nginx{,.bak}

7.4.4 Recompile Nginx

# Take the modules found in step 24 and add the one needed now: --with-stream
# Check how a module is reported, e.g. the limit modules and the stream module
# "-without-http_limit_conn_module  disable" means the module is built in by default and does not need to be added
./configure --help | grep limit
# "-with-stream  enable" means the module is not built by default and must be added explicitly
./configure --help | grep stream

編譯環(huán)境準備:

# Step 26
yum -y install libxml2 libxml2-dev libxslt-devel 
yum -y install gd-devel 
yum -y install perl-devel perl-ExtUtils-Embed 
yum -y install GeoIP GeoIP-devel GeoIP-data
yum -y install pcre-devel
yum -y install openssl openssl-devel
yum -y install gcc make

Compile:

# Step 27
tar -xf nginx-1.20.1.tar.gz
cd nginx-1.20.1/
./configure --prefix=/usr/share/nginx --sbin-path=/usr/sbin/nginx --modules-path=/usr/lib64/nginx/modules --conf-path=/etc/nginx/nginx.conf  --with-stream
make

Note: after make finishes, do not run make install, or the currently installed nginx may be broken. Once the build completes, a new nginx binary is produced in the objs directory; verify it first:

# Step 28
[root@k8s-master2 nginx-1.20.1]# ./objs/nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
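
Before copying the new binary anywhere, a quick check that the stream module really is compiled in (a sketch, run inside the nginx-1.20.1 source directory):

# Should print the configure arguments line containing --with-stream
./objs/nginx -V 2>&1 | grep -- --with-stream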

7.4.5 Copy the new nginx binary to master/master2

# Step 29
cp ./objs/nginx /usr/sbin/ 
scp objs/nginx root@192.168.43.200:/usr/sbin/   # copy from master2 to master

7.4.6 Modify the nginx service file (master and master2)

# Step 30
vim /usr/lib/systemd/system/nginx.service
[Unit]
Description=The nginx HTTP and reverse proxy server
After=network.target remote-fs.target nss-lookup.target
[Service]
Type=forking
PIDFile=/run/nginx.pid
ExecStartPre=/usr/bin/rm -rf /run/nginx.pid
ExecStartPre=/usr/sbin/nginx -t
ExecStart=/usr/sbin/nginx
ExecStop=/usr/sbin/nginx -s stop
ExecReload=/usr/sbin/nginx -s reload
PrivateTmp=true
[Install]
WantedBy=multi-user.target

7.5 Start the services and enable them at boot (master/master2)

# Step 31
systemctl daemon-reload
systemctl start nginx keepalived
systemctl enable nginx keepalived
systemctl status nginx keepalived
[root@master ~]# systemctl status nginx keepalived
● nginx.service - The nginx HTTP and reverse proxy server
   Loaded: loaded (/usr/lib/systemd/system/nginx.service; enabled; vendor preset: disabled)
   Active: active (running) since Wed 2023-06-21 09:01:49 CST; 2s ago
  Process: 69549 ExecStop=/usr/sbin/nginx -s stop (code=exited, status=0/SUCCESS)
  Process: 69865 ExecStart=/usr/sbin/nginx (code=exited, status=0/SUCCESS)
  Process: 69857 ExecStartPre=/usr/sbin/nginx -t (code=exited, status=0/SUCCESS)
  Process: 69854 ExecStartPre=/usr/bin/rm -rf /run/nginx.pid (code=exited, status=0/SUCCESS)
 Main PID: 69868 (nginx)
    Tasks: 5
   Memory: 2.6M
   CGroup: /system.slice/nginx.service
           ├─69868 nginx: master process /usr/sbin/nginx
           ├─69870 nginx: worker process
           ├─69871 nginx: worker process
           ├─69873 nginx: worker process
           └─69875 nginx: worker process

Jun 21 09:01:49 master systemd[1]: Starting The nginx HTTP and reverse proxy server...
Jun 21 09:01:49 master nginx[69857]: nginx: [alert] could not open error log file: open() "/usr/share/nginx/logs/e...ctory)
Jun 21 09:01:49 master nginx[69857]: nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
Jun 21 09:01:49 master nginx[69857]: nginx: configuration file /etc/nginx/nginx.conf test is successful
Jun 21 09:01:49 master nginx[69865]: nginx: [alert] could not open error log file: open() "/usr/share/nginx/logs/e...ctory)
Jun 21 09:01:49 master systemd[1]: Started The nginx HTTP and reverse proxy server.

● keepalived.service - LVS and VRRP High Availability Monitor
   Loaded: loaded (/usr/lib/systemd/system/keepalived.service; enabled; vendor preset: disabled)
   Active: active (running) since Wed 2023-06-21 09:01:49 CST; 2s ago
  Process: 69855 ExecStart=/usr/sbin/keepalived $KEEPALIVED_OPTIONS (code=exited, status=0/SUCCESS)
 Main PID: 69858 (keepalived)
    Tasks: 3
   Memory: 1.5M
   CGroup: /system.slice/keepalived.service
           ├─69858 /usr/sbin/keepalived -D
           ├─69859 /usr/sbin/keepalived -D
           └─69861 /usr/sbin/keepalived -D

Jun 21 09:01:49 master systemd[1]: Starting LVS and VRRP High Availability Monitor...
Jun 21 09:01:49 master systemd[1]: Started LVS and VRRP High Availability Monitor.
Hint: Some lines were ellipsized, use -l to show in full.
[root@master2 ~]# systemctl status nginx keepalived
● nginx.service - The nginx HTTP and reverse proxy server
   Loaded: loaded (/usr/lib/systemd/system/nginx.service; enabled; vendor preset: disabled)
   Active: active (running) since Wed 2023-06-21 09:01:46 CST; 8s ago
  Process: 7614 ExecStop=/usr/sbin/nginx -s stop (code=exited, status=0/SUCCESS)
  Process: 7853 ExecStart=/usr/sbin/nginx (code=exited, status=0/SUCCESS)
  Process: 7843 ExecStartPre=/usr/sbin/nginx -t (code=exited, status=0/SUCCESS)
  Process: 7838 ExecStartPre=/usr/bin/rm -rf /run/nginx.pid (code=exited, status=0/SUCCESS)
 Main PID: 7855 (nginx)
    Tasks: 5
   Memory: 2.6M
   CGroup: /system.slice/nginx.service
           ├─7855 nginx: master process /usr/sbin/nginx
           ├─7856 nginx: worker process
           ├─7857 nginx: worker process
           ├─7858 nginx: worker process
           └─7859 nginx: worker process

Jun 21 09:01:46 master2 systemd[1]: Starting The nginx HTTP and reverse proxy server...
Jun 21 09:01:46 master2 nginx[7843]: nginx: [alert] could not open error log file: open() "/usr/share/nginx/logs/e...ctory)
Jun 21 09:01:46 master2 nginx[7843]: nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
Jun 21 09:01:46 master2 nginx[7843]: nginx: configuration file /etc/nginx/nginx.conf test is successful
Jun 21 09:01:46 master2 nginx[7853]: nginx: [alert] could not open error log file: open() "/usr/share/nginx/logs/e...ctory)
Jun 21 09:01:46 master2 systemd[1]: Started The nginx HTTP and reverse proxy server.

● keepalived.service - LVS and VRRP High Availability Monitor
   Loaded: loaded (/usr/lib/systemd/system/keepalived.service; enabled; vendor preset: disabled)
   Active: active (running) since Wed 2023-06-21 09:01:46 CST; 8s ago
  Process: 7839 ExecStart=/usr/sbin/keepalived $KEEPALIVED_OPTIONS (code=exited, status=0/SUCCESS)
 Main PID: 7840 (keepalived)
    Tasks: 4
   Memory: 1.8M
   CGroup: /system.slice/keepalived.service
           ├─  7840 /usr/sbin/keepalived -D
           ├─  7841 /usr/sbin/keepalived -D
           ├─  7842 /usr/sbin/keepalived -D
           └─120419 ps -ef

Jun 21 09:01:46 master2 Keepalived_vrrp[7842]: SECURITY VIOLATION - scripts are being executed but script_security ...bled.
Jun 21 09:01:46 master2 Keepalived_vrrp[7842]: VRRP_Instance(VI_1) removing protocol VIPs.
Jun 21 09:01:46 master2 Keepalived_vrrp[7842]: Using LinkWatch kernel netlink reflector...
Jun 21 09:01:46 master2 Keepalived_vrrp[7842]: VRRP_Instance(VI_1) Entering BACKUP STATE
Jun 21 09:01:46 master2 Keepalived_vrrp[7842]: VRRP sockpool: [ifindex(2), proto(112), unicast(0), fd(10,11)]
Jun 21 09:01:46 master2 Keepalived_vrrp[7842]: /etc/keepalived/check_nginx.sh exited with status 1
Jun 21 09:01:47 master2 Keepalived_vrrp[7842]: VRRP_Script(check_nginx) succeeded
Jun 21 09:01:50 master2 Keepalived_vrrp[7842]: VRRP_Instance(VI_1) Transition to MASTER STATE
Jun 21 09:01:50 master2 Keepalived_vrrp[7842]: VRRP_Instance(VI_1) Received advert with higher priority 100, ours 90
Jun 21 09:01:50 master2 Keepalived_vrrp[7842]: VRRP_Instance(VI_1) Entering BACKUP STATE
Hint: Some lines were ellipsized, use -l to show in full.

7.6 Check the keepalived working state

# Step 32
[root@master ~]# ip addr | grep inet
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
    inet 192.168.43.200/24 brd 192.168.43.255 scope global noprefixroute ens33
    inet 192.168.43.205/24 scope global secondary ens33
    inet6 2409:8903:304:bb7:41b4:9f94:9bc6:3a50/64 scope global noprefixroute dynamic
    inet6 fe80::c8e0:482b:7618:82bb/64 scope link noprefixroute
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
    inet6 fe80::42:fbff:fe1e:7fb7/64 scope link
    inet6 fe80::2c4c:e9ff:fee8:6134/64 scope link
    inet 10.96.0.10/32 scope global kube-ipvs0
    inet 10.96.0.1/32 scope global kube-ipvs0
    inet 10.101.110.138/32 scope global kube-ipvs0

[root@master2 ~]# ip addr | grep inet
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
    inet 192.168.43.203/24 brd 192.168.43.255 scope global noprefixroute ens33
    inet6 2409:8903:304:bb7:1d19:410:2404:9753/64 scope global noprefixroute dynamic
    inet6 fe80::9bc0:3f5:d3cd:a77b/64 scope link noprefixroute
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0

You can see that the virtual IP 192.168.43.205 is bound to the ens33 interface on master, which means keepalived is working correctly.

8. Deploy the k8s Masters

8.1 kubeadm init (master node)

Initialization with version 1.21.0 fails because the Aliyun registry does not contain the coredns/coredns image, i.e. the image registry.aliyuncs.com/google_containers/coredns/coredns:v1.8.0 does not exist.

Workaround:

# Step 33
# Run on the master node
# This step must be done in advance, otherwise initialization will fail because the image cannot be found
[root@master ~]# docker pull coredns/coredns:1.8.0
1.8.0: Pulling from coredns/coredns
c6568d217a00: Pull complete
5984b6d55edf: Pull complete
Digest: sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e
Status: Downloaded newer image for coredns/coredns:1.8.0
docker.io/coredns/coredns:1.8.0
[root@master ~]# docker tag coredns/coredns:1.8.0 registry.aliyuncs.com/google_containers/coredns/coredns:v1.8.0
[root@master ~]# docker rmi coredns/coredns:1.8.0
Untagged: coredns/coredns:1.8.0
Untagged: coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e

在 master 節(jié)點中執(zhí)行以下命令,注意將 master 節(jié)點 IP 和 kubeadm 版本號和 --control-plane-endpoint 修改為

自己主機中所對應的。

# Step 34
# Run on the master node
[root@master ~]# kubeadm init \
 --apiserver-advertise-address=192.168.43.200 \
 --image-repository registry.aliyuncs.com/google_containers \
 --control-plane-endpoint=192.168.43.205:16443 \
 --kubernetes-version v1.21.0 \
 --service-cidr=10.96.0.0/12 \
 --pod-network-cidr=10.244.0.0/16
[init] Using Kubernetes version: v1.21.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local master] and IPs [10.96.0.1 192.168.43.200 192.168.43.205]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost master] and IPs [192.168.43.200 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost master] and IPs [192.168.43.200 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "admin.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[apiclient] All control plane components are healthy after 63.524903 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.21" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node master as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: ukkdz8.qbzye91bxbv7kb8e
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

  kubeadm join 192.168.43.205:16443 --token ukkdz8.qbzye91bxbv7kb8e \
        --discovery-token-ca-cert-hash sha256:fac357dde165cf441faba5ed033b4cc76751527ac69c59780d962bc8033f1a97 \
        --control-plane

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.43.205:16443 --token ukkdz8.qbzye91bxbv7kb8e \
        --discovery-token-ca-cert-hash sha256:fac357dde165cf441faba5ed033b4cc76751527ac69c59780d962bc8033f1a97

The command output ends with "Your Kubernetes control-plane has initialized successfully!", which shows that the k8s control plane on the master node has been set up successfully.

8.2 Enable the kubectl tool (master node)

# Step 35
# Run on the master node
[root@master ~]# mkdir -p $HOME/.kube
[root@master ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@master ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config

Check the cluster nodes:

# Step 36
# Run on the master node
[root@master ~]# kubectl get nodes
NAME     STATUS   ROLES                  AGE     VERSION
master   NotReady    control-plane,master   3m50s   v1.21.0

8.3 Join the slave nodes to the cluster (slave nodes)

# Step 37
# Run on the slave1 node
[root@slave1 ~]# systemctl status nginx keepalived
Unit nginx.service could not be found.
Unit keepalived.service could not be found.
[root@slave1 ~]# kubeadm join 192.168.43.205:16443 --token ukkdz8.qbzye91bxbv7kb8e \
>         --discovery-token-ca-cert-hash sha256:fac357dde165cf441faba5ed033b4cc76751527ac69c59780d962bc8033f1a97
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

# Step 38
# Run on the slave2 node
[root@slave2 ~]# kubeadm join 192.168.43.205:16443 --token ukkdz8.qbzye91bxbv7kb8e \
>         --discovery-token-ca-cert-hash sha256:fac357dde165cf441faba5ed033b4cc76751527ac69c59780d962bc8033f1a97
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

Check the cluster nodes:

# Step 39
# Run on the master node
[root@master ~]# kubectl get nodes
NAME     STATUS   	 ROLES                  AGE     VERSION
master   NotReady    control-plane,master   5m35s   v1.21.0
slave1   NotReady    <none>                 50s     v1.21.0
slave2   NotReady    <none>                 45s     v1.21.0

8.4 Join master2 to the cluster (master2 node)

# Step 40
# Run on the master2 node
# Pull the images
[root@master2 ~]# docker pull registry.aliyuncs.com/google_containers/kube-apiserver:v1.21.0
[root@master2 ~]# docker pull registry.aliyuncs.com/google_containers/kube-controller-manager:v1.21.0
[root@master2 ~]# docker pull registry.aliyuncs.com/google_containers/kube-scheduler:v1.21.0
[root@master2 ~]# docker pull registry.aliyuncs.com/google_containers/kube-proxy:v1.21.0
[root@master2 ~]# docker pull registry.aliyuncs.com/google_containers/pause:3.4.1
[root@master2 ~]# docker pull registry.aliyuncs.com/google_containers/etcd:3.4.13-0
# For k8s 1.21.0 the Aliyun registry does not provide registry.aliyuncs.com/google_containers/coredns/coredns:v1.8.0, so the image must be pulled from elsewhere and then re-tagged
[root@master2 ~]# docker pull coredns/coredns:1.8.0
[root@master2 ~]# docker tag coredns/coredns:1.8.0 registry.aliyuncs.com/google_containers/coredns/coredns:v1.8.0
[root@master2 ~]# docker rmi coredns/coredns:1.8.0

Copy the certificates:

# Step 41
# Run on the master2 node
# Create the directory
[root@master2 ~]# mkdir -p /etc/kubernetes/pki/etcd

# Step 42
# Run on the master node
# Copy the certificates from the master node to master2
[root@master ~]# scp -rp /etc/kubernetes/pki/ca.* master2:/etc/kubernetes/pki
[root@master ~]# scp -rp /etc/kubernetes/pki/sa.* master2:/etc/kubernetes/pki
[root@master ~]# scp -rp /etc/kubernetes/pki/front-proxy-ca.* master2:/etc/kubernetes/pki
[root@master ~]# scp -rp /etc/kubernetes/pki/etcd/ca.* master2:/etc/kubernetes/pki/etcd
[root@master ~]# scp -rp /etc/kubernetes/admin.conf master2:/etc/kubernetes

Join the cluster:

# Step 43
# Run on the master2 node
[root@master2 ~]# kubeadm join 192.168.43.205:16443 --token ukkdz8.qbzye91bxbv7kb8e \
>         --discovery-token-ca-cert-hash sha256:fac357dde165cf441faba5ed033b4cc76751527ac69c59780d962bc8033f1a97 \
>         --control-plane
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks before initializing the new control plane instance
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local master2] and IPs [10.96.0.1 192.168.43.203 192.168.43.205]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost master2] and IPs [192.168.43.203 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost master2] and IPs [192.168.43.203 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[certs] Using the existing "sa" key
[kubeconfig] Generating kubeconfig files
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/admin.conf"
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[check-etcd] Checking that the etcd cluster is healthy
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
[etcd] Announced new etcd member joining to the existing etcd cluster
[etcd] Creating static Pod manifest for "etcd"
[etcd] Waiting for the new etcd member to join the cluster. This can take up to 40s
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[mark-control-plane] Marking the node master2 as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node master2 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]

This node has joined the cluster and a new control plane instance was created:

* Certificate signing request was sent to apiserver and approval was received.
* The Kubelet was informed of the new secure connection details.
* Control plane (master) label and taint were applied to the new node.
* The Kubernetes control plane instances scaled up.
* A new etcd member was added to the local/stacked etcd cluster.

To start administering your cluster from this node, you need to run the following as a regular user:

        mkdir -p $HOME/.kube
        sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
        sudo chown $(id -u):$(id -g) $HOME/.kube/config

Run 'kubectl get nodes' to see this node join the cluster.

# Step 44
# Run on the master2 node
[root@master2 ~]# mkdir -p $HOME/.kube
[root@master2 ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@master2 ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config

查看節(jié)點:

# Step 45
# Run on the master node
[root@master ~]# kubectl get nodes
NAME      STATUS      ROLES                  AGE     VERSION
master    NotReady    control-plane,master   9m18s   v1.21.0
master2   NotReady    control-plane,master   104s    v1.21.0
slave1    NotReady    <none>                 4m33s   v1.21.0
slave2    NotReady    <none>                 4m28s   v1.21.0

# Step 46
# Run on the master2 node
[root@master2 ~]# kubectl get nodes
NAME      STATUS      ROLES                  AGE     VERSION
master    NotReady    control-plane,master   9m18s   v1.21.0
master2   NotReady    control-plane,master   104s    v1.21.0
slave1    NotReady    <none>                 4m33s   v1.21.0
slave2    NotReady    <none>                 4m28s   v1.21.0

Note: because the network plugin has not yet been deployed, none of the nodes are ready and their status is NotReady. The network plugin is installed next.

9. Install the flannel Network Plugin (master node)

Check the cluster state:

# Step 47
# Run on the master node
[root@master ~]# kubectl get nodes
NAME      STATUS   ROLES                  AGE     VERSION
master    NotReady    control-plane,master   13m     v1.21.0
master2   NotReady    control-plane,master   2m50s   v1.21.0
slave1    NotReady    <none>                 6m43s   v1.21.0
slave2    NotReady    <none>                 6m35s   v1.21.0

[root@master ~]# kubectl get pods --all-namespaces
NAMESPACE     NAME                              READY   STATUS    RESTARTS   AGE
kube-system   coredns-545d6fc579-2wmjs          1/1     Running   0          12m
kube-system   coredns-545d6fc579-lp4dg          1/1     Running   0          12m
kube-system   etcd-master                       1/1     Running   0          12m
kube-system   etcd-master2                      1/1     Running   0          2m53s
kube-system   kube-apiserver-master             1/1     Running   1          13m
kube-system   kube-apiserver-master2            1/1     Running   0          2m56s
kube-system   kube-controller-manager-master    1/1     Running   1          12m
kube-system   kube-controller-manager-master2   1/1     Running   0          2m56s
kube-system   kube-proxy-6dtsk                  1/1     Running   0          2m57s
kube-system   kube-proxy-hc5tl                  1/1     Running   0          6m50s
kube-system   kube-proxy-kc824                  1/1     Running   0          6m42s
kube-system   kube-proxy-mltbt                  1/1     Running   0          12m
kube-system   kube-scheduler-master             1/1     Running   1          12m
kube-system   kube-scheduler-master2            1/1     Running   0          2m57

# Step 48
# Run on the master node
# Download the flannel manifest
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
# If that address is unreachable, the flannel project's official GitHub address below can be used instead
wget https://github.com/flannel-io/flannel/tree/master/Documentation/kube-flannel.yml

# Step 49
# Run on the master node
# Modify the file content
net-conf.json: |
    {
      "Network": "10.244.0.0/16", #這里的網(wǎng)段地址需要與master初始化的必須保持一致
      "Backend": {
        "Type": "vxlan"
      }
    }
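
Before applying the manifest, a quick sanity check (a sketch) that the Network value matches the --pod-network-cidr passed to kubeadm init in step 34:

grep -n '"Network"' kube-flannel.yml    # expect "Network": "10.244.0.0/16"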

# Step 50
# Run on the master node
[root@master ~]# kubectl apply -f kube-flannel.yml
namespace/kube-flannel created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created

查看節(jié)點情況:

# Step 51
# Run on the master node
[root@master ~]# kubectl get nodes
NAME      STATUS   ROLES                  AGE     VERSION
master    Ready    control-plane,master   15m     v1.21.0
master2   Ready    control-plane,master   4m58s   v1.21.0
slave1    Ready    <none>                 8m51s   v1.21.0
slave2    Ready    <none>                 8m43s   v1.21.0

# Step 52
# Run on the master2 node
[root@master2 ~]# kubectl get nodes
NAME      STATUS   ROLES                  AGE     VERSION
master    Ready    control-plane,master   15m     v1.21.0
master2   Ready    control-plane,master   4m58s   v1.21.0
slave1    Ready    <none>                 8m51s   v1.21.0
slave2    Ready    <none>                 8m43s   v1.21.0

Check the pod status:

# Step 53
# Run on the master node
[root@master ~]# kubectl get pods --all-namespaces
NAMESPACE      NAME                              READY   STATUS    RESTARTS   AGE
kube-flannel   kube-flannel-ds-8z6gt             1/1     Running   0          53s
kube-flannel   kube-flannel-ds-j7dt6             1/1     Running   0          53s
kube-flannel   kube-flannel-ds-xrb5p             1/1     Running   0          53s
kube-flannel   kube-flannel-ds-xs6rr             1/1     Running   0          53s
kube-system    coredns-545d6fc579-2wmjs          1/1     Running   0          15m
kube-system    coredns-545d6fc579-lp4dg          1/1     Running   0          15m
kube-system    etcd-master                       1/1     Running   0          15m
kube-system    etcd-master2                      1/1     Running   0          5m20s
kube-system    kube-apiserver-master             1/1     Running   1          15m
kube-system    kube-apiserver-master2            1/1     Running   0          5m23s
kube-system    kube-controller-manager-master    1/1     Running   1          15m
kube-system    kube-controller-manager-master2   1/1     Running   0          5m23s
kube-system    kube-proxy-6dtsk                  1/1     Running   0          5m24s
kube-system    kube-proxy-hc5tl                  1/1     Running   0          9m17s
kube-system    kube-proxy-kc824                  1/1     Running   0          9m9s
kube-system    kube-proxy-mltbt                  1/1     Running   0          15m
kube-system    kube-scheduler-master             1/1     Running   1          15m
kube-system    kube-scheduler-master2            1/1     Running   0          5m24s

# Step 54
# Run on the master2 node
[root@master2 ~]# kubectl get pods --all-namespaces
NAMESPACE      NAME                              READY   STATUS    RESTARTS   AGE
kube-flannel   kube-flannel-ds-8z6gt             1/1     Running   0          53s
kube-flannel   kube-flannel-ds-j7dt6             1/1     Running   0          53s
kube-flannel   kube-flannel-ds-xrb5p             1/1     Running   0          53s
kube-flannel   kube-flannel-ds-xs6rr             1/1     Running   0          53s
kube-system    coredns-545d6fc579-2wmjs          1/1     Running   0          15m
kube-system    coredns-545d6fc579-lp4dg          1/1     Running   0          15m
kube-system    etcd-master                       1/1     Running   0          15m
kube-system    etcd-master2                      1/1     Running   0          5m20s
kube-system    kube-apiserver-master             1/1     Running   1          15m
kube-system    kube-apiserver-master2            1/1     Running   0          5m23s
kube-system    kube-controller-manager-master    1/1     Running   1          15m
kube-system    kube-controller-manager-master2   1/1     Running   0          5m23s
kube-system    kube-proxy-6dtsk                  1/1     Running   0          5m24s
kube-system    kube-proxy-hc5tl                  1/1     Running   0          9m17s
kube-system    kube-proxy-kc824                  1/1     Running   0          9m9s
kube-system    kube-proxy-mltbt                  1/1     Running   0          15m
kube-system    kube-scheduler-master             1/1     Running   1          15m
kube-system    kube-scheduler-master2            1/1     Running   0          5m24s

10. Testing

# Step 55
[root@master ~]# curl -k https://192.168.43.205:16443/version
[root@slave1 ~]# curl -k https://192.168.43.205:16443/version
[root@slave2 ~]# curl -k https://192.168.43.205:16443/version
[root@master2 ~]# curl -k https://192.168.43.205:16443/version
{
  "major": "1",
  "minor": "21",
  "gitVersion": "v1.21.0",
  "gitCommit": "cb303e613a121a29364f75cc67d3d580833a7479",
  "gitTreeState": "clean",
  "buildDate": "2021-04-08T16:25:06Z",
  "goVersion": "go1.16.1",
  "compiler": "gc",
  "platform": "linux/amd64"
}

Access through the virtual IP works correctly.

11. nginx + keepalived High-Availability Test

關閉主節(jié)點 nginx,測試 VIP 是否漂移到備節(jié)點服務器。 在 nginx master 執(zhí)行 systemctl stop nginx;在 nginx

備節(jié)點,ip addr 命令查看已成功綁定 VIP。

# Step 56
[root@master ~]# ip addr | grep inet
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
    inet 192.168.43.200/24 brd 192.168.43.255 scope global noprefixroute ens33
    inet 192.168.43.205/24 scope global secondary ens33
    inet6 2409:8903:304:bb7:41b4:9f94:9bc6:3a50/64 scope global noprefixroute dynamic
    inet6 fe80::c8e0:482b:7618:82bb/64 scope link noprefixroute
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
    inet6 fe80::42:fbff:fe1e:7fb7/64 scope link
    inet6 fe80::2c4c:e9ff:fee8:6134/64 scope link
    inet 10.96.0.10/32 scope global kube-ipvs0
    inet 10.96.0.1/32 scope global kube-ipvs0
    inet 10.101.110.138/32 scope global kube-ipvs0
    inet 10.244.0.1/24 brd 10.244.0.255 scope global cni0

[root@master2 ~]# ip addr | grep inet
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
    inet 192.168.43.203/24 brd 192.168.43.255 scope global noprefixroute ens33
    inet6 2409:8903:304:bb7:1d19:410:2404:9753/64 scope global noprefixroute dynamic
    inet6 fe80::9bc0:3f5:d3cd:a77b/64 scope link noprefixroute
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0

[root@master ~]# systemctl stop nginx

[root@master ~]# ip addr | grep inet
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
    inet 192.168.43.200/24 brd 192.168.43.255 scope global noprefixroute ens33
    inet6 2409:8903:304:bb7:41b4:9f94:9bc6:3a50/64 scope global noprefixroute dynamic
    inet6 fe80::c8e0:482b:7618:82bb/64 scope link noprefixroute
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
    inet6 fe80::42:fbff:fe1e:7fb7/64 scope link
    inet6 fe80::2c4c:e9ff:fee8:6134/64 scope link
    inet 10.96.0.10/32 scope global kube-ipvs0
    inet 10.96.0.1/32 scope global kube-ipvs0
    inet 10.101.110.138/32 scope global kube-ipvs0
    inet 10.244.0.1/24 brd 10.244.0.255 scope global cni0

[root@master2 ~]# ip addr | grep inet
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
    inet 192.168.43.203/24 brd 192.168.43.255 scope global noprefixroute ens33
    inet 192.168.43.205/24 scope global secondary ens33
    inet6 2409:8903:304:bb7:1d19:410:2404:9753/64 scope global noprefixroute dynamic
    inet6 fe80::9bc0:3f5:d3cd:a77b/64 scope link noprefixroute
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0

keepalived fails over correctly.
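
To complete the scenario described in the introduction (the VIP floating back once the primary recovers), nginx can be restarted on master; with keepalived's default preemption behaviour the VIP should return to the higher-priority node within a few seconds (a sketch):

# On master
systemctl start nginx
# The VIP should reappear on ens33 shortly afterwards
ip addr show ens33 | grep 192.168.43.205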

12. Load Balancer Access Test

From any node in the k8s cluster, use curl against the VIP to query the K8s version:

# Step 57
[root@master ~]# curl -k https://192.168.43.205:16443/version
[root@slave1 ~]# curl -k https://192.168.43.205:16443/version
[root@slave2 ~]# curl -k https://192.168.43.205:16443/version
[root@master2 ~]# curl -k https://192.168.43.205:16443/version
{
  "major": "1",
  "minor": "21",
  "gitVersion": "v1.21.0",
  "gitCommit": "cb303e613a121a29364f75cc67d3d580833a7479",
  "gitTreeState": "clean",
  "buildDate": "2021-04-08T16:25:06Z",
  "goVersion": "go1.16.1",
  "compiler": "gc",
  "platform": "linux/amd64"
}

The K8s version information is returned correctly, which shows that the load balancer is working. The request path is curl -> VIP (nginx) -> apiserver, and the nginx log also shows which apiserver each request was forwarded to:

# Step 58
[root@master2 ~]# tailf /var/log/nginx/k8s-access.log
192.168.43.203 192.168.43.203:6443 - [21/Jun/2023:10:07:57 +0800] 200 1092
192.168.43.203 192.168.43.200:6443 - [21/Jun/2023:10:07:57 +0800] 200 1092
192.168.43.201 192.168.43.203:6443 - [21/Jun/2023:10:07:57 +0800] 200 1091
192.168.43.203 192.168.43.203:6443 - [21/Jun/2023:10:07:58 +0800] 200 1092
192.168.43.202 192.168.43.200:6443 - [21/Jun/2023:10:07:58 +0800] 200 1092
192.168.43.200 192.168.43.200:6443 - [21/Jun/2023:10:07:58 +0800] 200 1471
192.168.43.202 192.168.43.203:6443 - [21/Jun/2023:10:07:58 +0800] 200 1092
192.168.43.200 192.168.43.200:6443 - [21/Jun/2023:10:07:58 +0800] 200 1471
192.168.43.203 192.168.43.203:6443 - [21/Jun/2023:10:07:58 +0800] 200 375
192.168.43.202 192.168.43.203:6443 - [21/Jun/2023:10:07:58 +0800] 200 1092
192.168.43.201 192.168.43.200:6443 - [21/Jun/2023:10:09:49 +0800] 200 424
192.168.43.201 192.168.43.203:6443 - [21/Jun/2023:10:09:51 +0800] 200 424
192.168.43.201 192.168.43.200:6443 - [21/Jun/2023:10:09:51 +0800] 200 424
192.168.43.201 192.168.43.203:6443 - [21/Jun/2023:10:09:52 +0800] 200 424
192.168.43.201 192.168.43.200:6443 - [21/Jun/2023:10:09:52 +0800] 200 424
192.168.43.201 192.168.43.203:6443 - [21/Jun/2023:10:09:53 +0800] 200 424
192.168.43.201 192.168.43.200:6443 - [21/Jun/2023:10:09:53 +0800] 200 424
192.168.43.201 192.168.43.200:6443 - [21/Jun/2023:10:09:54 +0800] 200 424
192.168.43.201 192.168.43.203:6443 - [21/Jun/2023:10:09:54 +0800] 200 424
192.168.43.201 192.168.43.200:6443 - [21/Jun/2023:10:09:55 +0800] 200 424

This completes the quick construction of a highly available Kubernetes cluster using kubeadm.
