Environment: CentOS 7.9

Contents
1. Initialize the nodes that will run Kubernetes
2. Make the kube-apiserver highly available with keepalived + nginx
3. Initialize the Kubernetes cluster with kubeadm
4. Scale out the control plane: join xuegod62 to the cluster
5. Scale out the control plane: join xuegod64 to the cluster
6. Scale out the cluster: add the first worker node
7. Install the Kubernetes network component Calico
8. Verify DNS resolution and pod networking in the cluster
9. Configure etcd for high availability

Network plan for the lab environment:
podSubnet (pod network): 10.244.0.0/16
serviceSubnet (service network): 10.96.0.0/12
Physical network: 192.168.1.0/24

Host configuration:
OS: CentOS 7.9
Resources: 4 GiB RAM / 4 vCPU / 60 GB disk
Network: all machines can reach each other
1. Initialize the nodes that will run Kubernetes

Prepare the lab environment needed to install the cluster: four CentOS 7.9 Linux machines, each with 4 vCPU / 4 GiB RAM / 60 GB disk.

Environment (CentOS 7.9):
IP            Hostname  Role    Memory  CPU
192.168.1.63  xuegod63  master  4G      4vCPU
192.168.1.64  xuegod64  master  4G      4vCPU
192.168.1.62  xuegod62  master  4G      4vCPU
192.168.1.66  xuegod66  worker  4G      4vCPU
(xuegod62 and xuegod64 are joined as control-plane nodes in sections 4 and 5; xuegod66 is the only worker.)

1. Configure static IPs: every machine must use the same network mode, be able to reach the others, use the same NIC name, and have Internet access.
2. Permanently disable SELinux
[root@localhost ~]# sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
# Note: after editing the SELinux config file, reboot the machine for the change to take effect permanently
[root@localhost ~]# getenforce
Disabled
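If you want SELinux out of the way immediately, without waiting for a reboot, you can additionally switch it to permissive mode for the current boot (a convenience on top of, not a replacement for, the config change above):
[root@localhost ~]# setenforce 0   # effective immediately; /etc/selinux/config still governs the state after reboot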
3. Set the hostnames
On 192.168.1.63 run:
hostnamectl set-hostname xuegod63 && bash
On 192.168.1.64 run:
hostnamectl set-hostname xuegod64 && bash
On 192.168.1.62 run:
hostnamectl set-hostname xuegod62 && bash
On 192.168.1.66 run:
hostnamectl set-hostname xuegod66 && bash
4. Configure the hosts file
Edit /etc/hosts on every machine and append the following four lines (a small loop, sketched after this list, can distribute the same file to every node):
192.168.1.63 xuegod63
192.168.1.64 xuegod64
192.168.1.62 xuegod62
192.168.1.66 xuegod66
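As a convenience, the entries can be appended once on xuegod63 and pushed out to the other nodes; a small sketch that addresses the targets by IP (the hostnames are not resolvable until the file is in place) and will prompt for passwords until SSH keys are set up in step 6:
cat >> /etc/hosts <<'EOF'
192.168.1.63 xuegod63
192.168.1.64 xuegod64
192.168.1.62 xuegod62
192.168.1.66 xuegod66
EOF
# push the updated file to the other nodes
for ip in 192.168.1.64 192.168.1.62 192.168.1.66; do
  scp /etc/hosts $ip:/etc/hosts
done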
5. Install the base packages
Run the same installation on all four nodes:
[root@xuegod63 ~]# yum install -y yum-utils device-mapper-persistent-data lvm2 wget net-tools nfs-utils lrzsz gcc gcc-c++ make cmake libxml2-devel openssl-devel curl curl-devel unzip sudo ntp libaio-devel vim ncurses-devel autoconf automake zlib-devel python-devel epel-release openssh-server socat conntrack ntpdate telnet ipvsadm
[root@xuegod64 ~]# yum install -y yum-utils device-mapper-persistent-data lvm2 wget net-tools nfs-utils lrzsz gcc gcc-c++ make cmake libxml2-devel openssl-devel curl curl-devel unzip sudo ntp libaio-devel vim ncurses-devel autoconf automake zlib-devel python-devel epel-release openssh-server socat conntrack ntpdate telnet ipvsadm
[root@xuegod62 ~]# yum install -y yum-utils device-mapper-persistent-data lvm2 wget net-tools nfs-utils lrzsz gcc gcc-c++ make cmake libxml2-devel openssl-devel curl curl-devel unzip sudo ntp libaio-devel vim ncurses-devel autoconf automake zlib-devel python-devel epel-release openssh-server socat conntrack ntpdate telnet ipvsadm
[root@xuegod66 ~]# yum install -y yum-utils device-mapper-persistent-data lvm2 wget net-tools nfs-utils lrzsz gcc gcc-c++ make cmake libxml2-devel openssl-devel curl curl-devel unzip sudo ntp libaio-devel vim ncurses-devel autoconf automake zlib-devel python-devel epel-release openssh-server socat conntrack ntpdate telnet ipvsadm
6. Set up passwordless SSH logins between the hosts
1) Configure passwordless login from xuegod63 to the other machines
[root@xuegod63 ~]# ssh-keygen   # press Enter through all prompts; leave the passphrase empty
Install the local SSH public key into the matching account on each remote host:
[root@xuegod63 ~]# ssh-copy-id xuegod63
[root@xuegod63 ~]# ssh-copy-id xuegod64
[root@xuegod63 ~]# ssh-copy-id xuegod62
[root@xuegod63 ~]# ssh-copy-id xuegod66
2) Configure passwordless login from xuegod64 to the other machines
[root@xuegod64 ~]# ssh-keygen   # press Enter through all prompts; leave the passphrase empty
Install the local SSH public key into the matching account on each remote host:
[root@xuegod64 ~]# ssh-copy-id xuegod63
[root@xuegod64 ~]# ssh-copy-id xuegod64
[root@xuegod64 ~]# ssh-copy-id xuegod62
[root@xuegod64 ~]# ssh-copy-id xuegod66
3) Configure passwordless login from xuegod62 to the other machines
[root@xuegod62 ~]# ssh-keygen   # press Enter through all prompts; leave the passphrase empty
Install the local SSH public key into the matching account on each remote host:
[root@xuegod62 ~]# ssh-copy-id xuegod63
[root@xuegod62 ~]# ssh-copy-id xuegod64
[root@xuegod62 ~]# ssh-copy-id xuegod62
[root@xuegod62 ~]# ssh-copy-id xuegod66
4) Configure passwordless login from xuegod66 to the other machines
[root@xuegod66 ~]# ssh-keygen   # press Enter through all prompts; leave the passphrase empty
Install the local SSH public key into the matching account on each remote host:
[root@xuegod66 ~]# ssh-copy-id xuegod63
[root@xuegod66 ~]# ssh-copy-id xuegod64
[root@xuegod66 ~]# ssh-copy-id xuegod62
[root@xuegod66 ~]# ssh-copy-id xuegod66
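Since the four blocks above differ only in the source host, each node's key distribution can be compressed into a loop; a sketch (ssh-copy-id still prompts once per target for the root password):
ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa   # non-interactive key generation
for host in xuegod63 xuegod64 xuegod62 xuegod66; do
  ssh-copy-id $host
done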
7. Stop and disable the firewalld firewall on all hosts
[root@xuegod63 ~]# systemctl stop firewalld ; systemctl disable firewalld
[root@xuegod64 ~]# systemctl stop firewalld ; systemctl disable firewalld
[root@xuegod62 ~]# systemctl stop firewalld ; systemctl disable firewalld
[root@xuegod66 ~]# systemctl stop firewalld ; systemctl disable firewalld
8. Disable the swap partition
# Disable swap for the current boot:
[root@xuegod63 ~]# swapoff -a
[root@xuegod64 ~]# swapoff -a
[root@xuegod62 ~]# swapoff -a
[root@xuegod66 ~]# swapoff -a
Disable it permanently by commenting out the swap mount:
[root@xuegod63 ~]# vim /etc/fstab   # put a # at the start of the swap line
[root@xuegod64 ~]# vim /etc/fstab
[root@xuegod62 ~]# vim /etc/fstab
[root@xuegod66 ~]# vim /etc/fstab
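If you prefer not to edit /etc/fstab by hand on every node, both steps can be done in one command; a sketch that comments out any line containing a swap field (double-check the file afterwards if you have unusual mounts):
swapoff -a && sed -ri '/\sswap\s/s/^/#/' /etc/fstab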
9. Adjust the kernel parameters:
[root@xuegod63 ~]# modprobe br_netfilter
[root@xuegod64 ~]# modprobe br_netfilter
[root@xuegod62 ~]# modprobe br_netfilter
[root@xuegod66 ~]# modprobe br_netfilter
[root@xuegod63 ~]# cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
[root@xuegod63 ~]# sysctl -p /etc/sysctl.d/k8s.conf
[root@xuegod64 ~]# cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
[root@xuegod64 ~]# sysctl -p /etc/sysctl.d/k8s.conf
[root@xuegod62 ~]# cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
[root@xuegod62 ~]# sysctl -p /etc/sysctl.d/k8s.conf
[root@xuegod66 ~]# cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
[root@xuegod66 ~]# sysctl -p /etc/sysctl.d/k8s.conf
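Note that modprobe does not persist across reboots; to have br_netfilter loaded again at boot (without it, the two bridge sysctls above cannot be applied), register it with systemd-modules-load on every node and verify the result:
cat > /etc/modules-load.d/k8s.conf <<EOF
br_netfilter
EOF
# verify the module is loaded and the sysctls are live
lsmod | grep br_netfilter
sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward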
10. Configure the Alibaba Cloud yum repo needed to install docker and containerd
[root@xuegod63 ~]# yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
[root@xuegod64 ~]# yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
[root@xuegod62 ~]# yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
[root@xuegod66 ~]# yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
11. Configure the Alibaba Cloud yum repo needed for the Kubernetes command-line tools
Configure the Alibaba Cloud Kubernetes yum repo:
[root@xuegod63 ~]# cat > /etc/yum.repos.d/kubernetes.repo <<EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
EOF
This online yum source is used later (step 15) to install kubeadm and kubelet.
Copy the Kubernetes yum repo from xuegod63 to xuegod64, xuegod62 and xuegod66:
[root@xuegod63 ~]# scp /etc/yum.repos.d/kubernetes.repo xuegod64:/etc/yum.repos.d/
[root@xuegod63 ~]# scp /etc/yum.repos.d/kubernetes.repo xuegod62:/etc/yum.repos.d/
[root@xuegod63 ~]# scp /etc/yum.repos.d/kubernetes.repo xuegod66:/etc/yum.repos.d/
12. Configure time synchronization:
[root@xuegod63 ~]# yum install -y ntp ntpdate
[root@xuegod63 ~]# ntpdate cn.pool.ntp.org
# Add a cron job so the clock stays in sync
[root@xuegod63 ~]# crontab -e
* * * * * /usr/sbin/ntpdate cn.pool.ntp.org
[root@xuegod64 ~]# yum install -y ntp ntpdate
[root@xuegod64 ~]# ntpdate cn.pool.ntp.org
# Add a cron job so the clock stays in sync
[root@xuegod64 ~]# crontab -e
* * * * * /usr/sbin/ntpdate cn.pool.ntp.org
[root@xuegod62 ~]# yum install -y ntp ntpdate
[root@xuegod62 ~]# ntpdate cn.pool.ntp.org
# Add a cron job so the clock stays in sync
[root@xuegod62 ~]# crontab -e
* * * * * /usr/sbin/ntpdate cn.pool.ntp.org
[root@xuegod66 ~]# yum install -y ntp ntpdate
[root@xuegod66 ~]# ntpdate cn.pool.ntp.org
# Add a cron job so the clock stays in sync
[root@xuegod66 ~]# crontab -e
* * * * * /usr/sbin/ntpdate cn.pool.ntp.org
13. Install containerd
Install containerd on xuegod63:
[root@xuegod63 ~]# yum install -y containerd.io-1.6.6
Use this exact containerd version; in this environment other versions caused problems.
Generate the containerd configuration file:
[root@xuegod63 ~]# mkdir -p /etc/containerd
[root@xuegod63 ~]# containerd config default > /etc/containerd/config.toml
Edit the configuration file /etc/containerd/config.toml:
Change SystemdCgroup = false to SystemdCgroup = true
Change sandbox_image = "k8s.gcr.io/pause:3.6" to sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.7"
Find config_path = "" and point it at the following directory:
config_path = "/etc/containerd/certs.d"
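The three edits above can also be applied non-interactively; a sed sketch, assuming the stock config.toml generated by containerd 1.6.6 (verify with the grep at the end):
sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
sed -i 's#k8s.gcr.io/pause:3.6#registry.aliyuncs.com/google_containers/pause:3.7#' /etc/containerd/config.toml
sed -i 's#config_path = ""#config_path = "/etc/containerd/certs.d"#' /etc/containerd/config.toml
grep -nE 'SystemdCgroup|sandbox_image|config_path' /etc/containerd/config.toml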
Create the /etc/crictl.yaml file:
[root@xuegod63 ~]# cat > /etc/crictl.yaml <<EOF
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 10
debug: false
EOF
[root@xuegod63 ~]# mkdir -p /etc/containerd/certs.d/docker.io/
[root@xuegod63 ~]# vim /etc/containerd/certs.d/docker.io/hosts.toml
# Add the following content:
[host."https://vh3bm52y.mirror.aliyuncs.com",host."https://registry.docker-cn.com"]
  capabilities = ["pull","push"]
Start containerd and enable it at boot:
[root@xuegod63 ~]# systemctl enable containerd --now
# Install containerd on xuegod64
[root@xuegod64 ~]# yum install -y containerd.io-1.6.6
Generate the containerd configuration file:
[root@xuegod64 ~]# mkdir -p /etc/containerd
[root@xuegod64 ~]# containerd config default > /etc/containerd/config.toml
Apply the same three edits to /etc/containerd/config.toml as on xuegod63: set SystemdCgroup = true, change the sandbox_image to registry.aliyuncs.com/google_containers/pause:3.7, and set config_path = "/etc/containerd/certs.d".
Create the /etc/crictl.yaml file:
[root@xuegod64 ~]# cat > /etc/crictl.yaml <<EOF
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 10
debug: false
EOF
[root@xuegod64 ~]# mkdir -p /etc/containerd/certs.d/docker.io/
[root@xuegod64 ~]# vim /etc/containerd/certs.d/docker.io/hosts.toml
# Add the following content:
[host."https://vh3bm52y.mirror.aliyuncs.com",host."https://registry.docker-cn.com"]
  capabilities = ["pull","push"]
Start containerd and enable it at boot:
[root@xuegod64 ~]# systemctl enable containerd --now
# Install containerd on xuegod62
[root@xuegod62 ~]# yum install -y containerd.io-1.6.6
Generate the containerd configuration file:
[root@xuegod62 ~]# mkdir -p /etc/containerd
[root@xuegod62 ~]# containerd config default > /etc/containerd/config.toml
Apply the same three edits to /etc/containerd/config.toml as on xuegod63: set SystemdCgroup = true, change the sandbox_image to registry.aliyuncs.com/google_containers/pause:3.7, and set config_path = "/etc/containerd/certs.d".
Create the /etc/crictl.yaml file:
[root@xuegod62 ~]# cat > /etc/crictl.yaml <<EOF
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 10
debug: false
EOF
[root@xuegod62 ~]# mkdir -p /etc/containerd/certs.d/docker.io/
[root@xuegod62 ~]# vim /etc/containerd/certs.d/docker.io/hosts.toml
# Add the following content:
[host."https://vh3bm52y.mirror.aliyuncs.com",host."https://registry.docker-cn.com"]
  capabilities = ["pull","push"]
Start containerd and enable it at boot:
[root@xuegod62 ~]# systemctl enable containerd --now
# Install containerd on xuegod66
[root@xuegod66 ~]# yum install -y containerd.io-1.6.6
Again, use this exact containerd version; other versions caused problems in this environment.
Generate the containerd configuration file:
[root@xuegod66 ~]# mkdir -p /etc/containerd
[root@xuegod66 ~]# containerd config default > /etc/containerd/config.toml
Apply the same three edits to /etc/containerd/config.toml as on xuegod63: set SystemdCgroup = true, change the sandbox_image to registry.aliyuncs.com/google_containers/pause:3.7, and set config_path = "/etc/containerd/certs.d".
Create the /etc/crictl.yaml file:
[root@xuegod66 ~]# cat > /etc/crictl.yaml <<EOF
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 10
debug: false
EOF
[root@xuegod66 ~]# mkdir -p /etc/containerd/certs.d/docker.io/
[root@xuegod66 ~]# vim /etc/containerd/certs.d/docker.io/hosts.toml
# Add the following content:
[host."https://vh3bm52y.mirror.aliyuncs.com",host."https://registry.docker-cn.com"]
  capabilities = ["pull","push"]
Start containerd and enable it at boot:
[root@xuegod66 ~]# systemctl enable containerd --now
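Before moving on, it may be worth confirming on every node that containerd is active and that crictl reaches it over the socket configured in /etc/crictl.yaml; a quick sanity check (the grep is only a convenience to surface the cgroup setting):
systemctl is-active containerd   # expect: active
crictl info | grep -i cgroup     # SystemdCgroup should show as true if the config edit was picked up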
14. Install docker-ce
Starting with Kubernetes 1.24, docker is no longer supported as a container runtime, but we still install it on the k8s nodes so that docker build can produce images from a Dockerfile; docker and containerd do not conflict.
[root@xuegod63 ~]# yum install -y docker-ce-23.0.3
[root@xuegod63 ~]# systemctl start docker && systemctl enable docker.service
[root@xuegod63 ~]# tee /etc/docker/daemon.json << 'EOF'
{
  "registry-mirrors": ["https://vh3bm52y.mirror.aliyuncs.com","https://registry.docker-cn.com","https://docker.mirrors.ustc.edu.cn","https://dockerhub.azk8s.cn","http://hub-mirror.c.163.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
[root@xuegod63 ~]# systemctl restart docker
[root@xuegod64 ~]# yum install -y docker-ce-23.0.3
[root@xuegod64 ~]# systemctl start docker && systemctl enable docker.service
[root@xuegod64 ~]# tee /etc/docker/daemon.json << 'EOF'
{
  "registry-mirrors": ["https://vh3bm52y.mirror.aliyuncs.com","https://registry.docker-cn.com","https://docker.mirrors.ustc.edu.cn","https://dockerhub.azk8s.cn","http://hub-mirror.c.163.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
[root@xuegod64 ~]# systemctl restart docker
[root@xuegod62 ~]# yum install -y docker-ce-23.0.3
[root@xuegod62 ~]# systemctl start docker && systemctl enable docker.service
[root@xuegod62 ~]# tee /etc/docker/daemon.json << 'EOF'
{
  "registry-mirrors": ["https://vh3bm52y.mirror.aliyuncs.com","https://registry.docker-cn.com","https://docker.mirrors.ustc.edu.cn","https://dockerhub.azk8s.cn","http://hub-mirror.c.163.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
[root@xuegod62 ~]# systemctl restart docker
[root@xuegod66 ~]# yum install -y docker-ce-23.0.3
[root@xuegod66 ~]# systemctl start docker && systemctl enable docker.service
[root@xuegod66 ~]# tee /etc/docker/daemon.json << 'EOF'
{
  "registry-mirrors": ["https://vh3bm52y.mirror.aliyuncs.com","https://registry.docker-cn.com","https://docker.mirrors.ustc.edu.cn","https://dockerhub.azk8s.cn","http://hub-mirror.c.163.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
[root@xuegod66 ~]# systemctl restart docker
15. Install the components needed to bootstrap Kubernetes
(The original text installed kubeadm-1.25.0 on three of the nodes; that is a typo, since the whole cluster runs 1.26.0.)
[root@xuegod63 ~]# yum install -y kubelet-1.26.0 kubeadm-1.26.0 kubectl-1.26.0
[root@xuegod63 ~]# systemctl enable kubelet
[root@xuegod64 ~]# yum install -y kubelet-1.26.0 kubeadm-1.26.0 kubectl-1.26.0
[root@xuegod64 ~]# systemctl enable kubelet
[root@xuegod62 ~]# yum install -y kubelet-1.26.0 kubeadm-1.26.0 kubectl-1.26.0
[root@xuegod62 ~]# systemctl enable kubelet
[root@xuegod66 ~]# yum install -y kubelet-1.26.0 kubeadm-1.26.0 kubectl-1.26.0
[root@xuegod66 ~]# systemctl enable kubelet
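Since the install commands are typed once per node, a version mismatch is easy to introduce; a quick check that every node ended up on the same versions is worthwhile:
kubeadm version -o short   # expect v1.26.0
kubelet --version          # expect Kubernetes v1.26.0
kubectl version --client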
2. Make the kube-apiserver highly available with keepalived + nginx

1. Install nginx and keepalived
Install keepalived and nginx on xuegod63 and xuegod64 to load-balance and reverse-proxy the apiserver. xuegod63 is the keepalived master node and xuegod64 the backup.
[root@xuegod63 ~]# yum install -y epel-release nginx keepalived nginx-mod-stream
[root@xuegod64 ~]# yum install -y epel-release nginx keepalived nginx-mod-stream
2. Edit the nginx configuration file; it is identical on master and backup
[root@xuegod63 ~]# vim /etc/nginx/nginx.conf
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;
include /usr/share/nginx/modules/*.conf;

events {
    worker_connections 1024;
}

# Layer-4 load balancing for the apiserver on the three master nodes
stream {
    log_format main '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent';
    access_log /var/log/nginx/k8s-access.log main;
    upstream k8s-apiserver {
        server 192.168.1.63:6443 weight=5 max_fails=3 fail_timeout=30s;
        server 192.168.1.62:6443 weight=5 max_fails=3 fail_timeout=30s;
        server 192.168.1.64:6443 weight=5 max_fails=3 fail_timeout=30s;
    }

    server {
        listen 16443; # nginx shares the host with a master node, so it must not listen on 6443 or the ports would conflict
        proxy_pass k8s-apiserver;
    }
}

http {
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    access_log /var/log/nginx/access.log main;
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    server {
        listen 80 default_server;
        server_name _;
        location / {
        }
    }
}
[root@xuegod64 ~]# vim /etc/nginx/nginx.conf
Use exactly the same nginx.conf on xuegod64 as shown above for xuegod63; the configuration is identical on master and backup.
3. Edit the keepalived configuration files; master and backup differ, so keep them distinct
Master keepalived:
[root@xuegod63 ~]# vim /etc/keepalived/keepalived.conf
global_defs {
    notification_email {
        acassen@firewall.loc
        failover@firewall.loc
        sysadmin@firewall.loc
    }
    notification_email_from Alexandre.Cassen@firewall.loc
    smtp_server 127.0.0.1
    smtp_connect_timeout 30
    router_id NGINX_MASTER
}
vrrp_script check_nginx {
    script "/etc/keepalived/check_nginx.sh"
}
vrrp_instance VI_1 {
    state MASTER
    interface ens33        # change to the actual NIC name
    virtual_router_id 51   # VRRP routing ID; must be unique per instance
    priority 100           # priority; set 90 on the backup server
    advert_int 1           # VRRP advertisement interval, default 1 second
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    # virtual IP
    virtual_ipaddress {
        192.168.1.199/24
    }
    track_script {
        check_nginx
    }
}
# vrrp_script: the script that checks whether nginx is healthy (keepalived fails over based on its result)
# virtual_ipaddress: the virtual IP (VIP)
[root@xuegod63 ~]# vim /etc/keepalived/check_nginx.sh
#!/bin/bash
# 1. Check whether nginx is alive
counter=$(ps -ef | grep nginx | grep sbin | egrep -cv "grep|$$")
if [ $counter -eq 0 ]; then
    # 2. If it is not, try to start it
    service nginx start
    sleep 2
    # 3. After waiting 2 seconds, check the nginx status again
    counter=$(ps -ef | grep nginx | grep sbin | egrep -cv "grep|$$")
    # 4. If nginx is still down, stop keepalived so the VIP can fail over
    if [ $counter -eq 0 ]; then
        service keepalived stop
    fi
fi
[root@xuegod63 ~]# chmod +x /etc/keepalived/check_nginx.sh
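Before relying on this script for failover, it is worth running it by hand once to confirm it behaves as expected; for example:
[root@xuegod63 ~]# bash -x /etc/keepalived/check_nginx.sh; echo "exit code: $?"
With nginx stopped beforehand, the trace should show the script starting nginx again (or stopping keepalived if the restart fails).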
Backup keepalived:
[root@xuegod64 ~]# vim /etc/keepalived/keepalived.conf
global_defs {
    notification_email {
        acassen@firewall.loc
        failover@firewall.loc
        sysadmin@firewall.loc
    }
    notification_email_from Alexandre.Cassen@firewall.loc
    smtp_server 127.0.0.1
    smtp_connect_timeout 30
    router_id NGINX_BACKUP
}
vrrp_script check_nginx {
    script "/etc/keepalived/check_nginx.sh"
}
vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    virtual_router_id 51   # VRRP routing ID; must be unique per instance
    priority 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.1.199/24
    }
    track_script {
        check_nginx
    }
}
Create the same /etc/keepalived/check_nginx.sh on xuegod64 as on xuegod63, then make it executable:
[root@xuegod64 ~]# chmod +x /etc/keepalived/check_nginx.sh
4. Start the services:
[root@xuegod63 ~]# systemctl daemon-reload && systemctl start nginx
[root@xuegod63 ~]# systemctl start keepalived && systemctl enable nginx keepalived
[root@xuegod64 ~]# systemctl daemon-reload && systemctl start nginx
[root@xuegod64 ~]# systemctl start keepalived && systemctl enable nginx keepalived
5. Check that the VIP is bound
[root@xuegod63 ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:79:9e:36 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.63/24 brd 192.168.1.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet 192.168.1.199/24 scope global secondary ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::b6ef:8646:1cfc:3e0c/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
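Besides the address itself, you can confirm that nginx is actually listening on the proxy port reachable through the VIP (the apiserver backends will only answer once the cluster is initialized in the next section):
[root@xuegod63 ~]# ss -lntp | grep 16443   # expect nginx listening on *:16443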
6. Test that the VIP can fail over:
Stop keepalived on xuegod63; the VIP should move to xuegod64:
[root@xuegod63 ~]# service keepalived stop
[root@xuegod64 ~]# ip addr
# Start nginx and keepalived on xuegod63 again, and the VIP floats back:
[root@xuegod63 ~]# systemctl start nginx
[root@xuegod63 ~]# systemctl start keepalived
[root@xuegod63 ~]# ip addr
Notes on the nginx configuration parameters:
1. weight sets each backend server's weight and controls how requests are distributed; with all three backends at weight 5, each handles about a third of the requests.
2. max_fails sets the maximum number of failures: if a backend fails max_fails times within fail_timeout, it is temporarily marked unavailable and receives no further requests.
3. fail_timeout sets both the window in which those failures are counted and how long the backend is then considered unavailable.
3. Initialize the Kubernetes cluster with kubeadm

# Generate a default kubeadm configuration to start from
[root@xuegod63 ~]# kubeadm config print init-defaults > kubeadm.yaml
Adjust the configuration to our needs: change imageRepository to the Alibaba Cloud mirror and set the kube-proxy mode to ipvs. Because containerd is the runtime, the node must also be initialized with cgroupDriver set to systemd.
The resulting kubeadm.yaml looks like this:
apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
nodeRegistration:
  criSocket: unix:///run/containerd/containerd.sock   # use the containerd runtime
  imagePullPolicy: IfNotPresent
#  name: node   # comment out the default node name so the hostname is used
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers   # use the Alibaba Cloud image registry
kind: ClusterConfiguration
kubernetesVersion: 1.26.0
# Newly added: point the cluster at the VIP and the nginx proxy port
controlPlaneEndpoint: 192.168.1.199:16443
networking:
  dnsDomain: cluster.local
  podSubnet: 10.244.0.0/16       # pod network
  serviceSubnet: 10.96.0.0/12    # service network
scheduler: {}
# Appended: kube-proxy in ipvs mode, kubelet with the systemd cgroup driver
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
# Initialize the cluster from kubeadm.yaml
First import the prepared image bundle k8s_1.26.0.tar.gz on every node:
[root@xuegod63 ~]# ctr -n=k8s.io images import k8s_1.26.0.tar.gz
[root@xuegod62 ~]# ctr -n=k8s.io images import k8s_1.26.0.tar.gz
[root@xuegod64 ~]# ctr -n=k8s.io images import k8s_1.26.0.tar.gz
[root@xuegod66 ~]# ctr -n=k8s.io images import k8s_1.26.0.tar.gz
[root@xuegod63 ~]# kubeadm init --config=kubeadm.yaml --ignore-preflight-errors=SystemVerification
# Set up the kubectl config file; this authorizes kubectl to manage the cluster with the admin certificate
[root@xuegod63 ~]# mkdir -p $HOME/.kube
[root@xuegod63 ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@xuegod63 ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
[root@xuegod63 ~]# kubectl get nodes
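As a root-only alternative to copying admin.conf, kubeadm's init output also suggests pointing kubectl straight at the admin kubeconfig via an environment variable; either approach works:
[root@xuegod63 ~]# export KUBECONFIG=/etc/kubernetes/admin.conf
[root@xuegod63 ~]# kubectl get nodes   # the node shows NotReady until the Calico CNI is installed in section 7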
4. Scale out the control plane: join xuegod62 to the cluster

Create the certificate directories on xuegod62:
[root@xuegod62 ~]# cd /root && mkdir -p /etc/kubernetes/pki/etcd && mkdir -p ~/.kube/
# Copy the certificates from xuegod63 to xuegod62 (run on xuegod63):
scp /etc/kubernetes/pki/ca.crt xuegod62:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/ca.key xuegod62:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/sa.key xuegod62:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/sa.pub xuegod62:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/front-proxy-ca.crt xuegod62:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/front-proxy-ca.key xuegod62:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/etcd/ca.crt xuegod62:/etc/kubernetes/pki/etcd/
scp /etc/kubernetes/pki/etcd/ca.key xuegod62:/etc/kubernetes/pki/etcd/
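The eight copies above can be collapsed into a short loop; a sketch, run on xuegod63, that preserves the pki/ and pki/etcd/ layout (swap the target host when repeating this for xuegod64 in the next section):
for f in ca.crt ca.key sa.key sa.pub front-proxy-ca.crt front-proxy-ca.key; do
  scp /etc/kubernetes/pki/$f xuegod62:/etc/kubernetes/pki/
done
scp /etc/kubernetes/pki/etcd/ca.{crt,key} xuegod62:/etc/kubernetes/pki/etcd/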
On xuegod63 print the join command:
[root@xuegod63 ~]# kubeadm token create --print-join-command
The output looks like:
kubeadm join 192.168.1.199:16443 --token zwzcks.u4jd8lj56wpckcwv \
    --discovery-token-ca-cert-hash sha256:1ba1b274090feecfef58eddc2a6f45590299c1d0624618f1f429b18a064cb728
On xuegod62 run it with the control-plane flag added:
[root@xuegod62 ~]# kubeadm join 192.168.1.199:16443 --token zwzcks.u4jd8lj56wpckcwv \
    --discovery-token-ca-cert-hash sha256:1ba1b274090feecfef58eddc2a6f45590299c1d0624618f1f429b18a064cb728 \
    --control-plane --ignore-preflight-errors=SystemVerification
Check the cluster state on xuegod63:
[root@xuegod63 ~]# kubectl get nodes
The output shows that xuegod62 has joined the cluster.
5. Scale out the control plane: join xuegod64 to the cluster

Create the certificate directories on xuegod64:
[root@xuegod64 ~]# cd /root && mkdir -p /etc/kubernetes/pki/etcd && mkdir -p ~/.kube/
# Copy the certificates from xuegod63 to xuegod64 (run on xuegod63):
scp /etc/kubernetes/pki/ca.crt xuegod64:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/ca.key xuegod64:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/sa.key xuegod64:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/sa.pub xuegod64:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/front-proxy-ca.crt xuegod64:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/front-proxy-ca.key xuegod64:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/etcd/ca.crt xuegod64:/etc/kubernetes/pki/etcd/
scp /etc/kubernetes/pki/etcd/ca.key xuegod64:/etc/kubernetes/pki/etcd/
On xuegod63 print the join command:
[root@xuegod63 ~]# kubeadm token create --print-join-command
The output looks like:
kubeadm join 192.168.1.199:16443 --token zwzcks.u4jd8lj56wpckcwv \
    --discovery-token-ca-cert-hash sha256:1ba1b274090feecfef58eddc2a6f45590299c1d0624618f1f429b18a064cb728
On xuegod64 run it with the control-plane flag added:
[root@xuegod64 ~]# kubeadm join 192.168.1.199:16443 --token zwzcks.u4jd8lj56wpckcwv \
    --discovery-token-ca-cert-hash sha256:1ba1b274090feecfef58eddc2a6f45590299c1d0624618f1f429b18a064cb728 \
    --control-plane --ignore-preflight-errors=SystemVerification
Check the cluster state on xuegod63:
[root@xuegod63 ~]# kubectl get nodes
The output shows that both xuegod64 and xuegod62 have joined the cluster.
6. Scale out the cluster: add the first worker node
On xuegod63 print the join command:
[root@xuegod63 ~]# kubeadm token create --print-join-command
The output looks like:
kubeadm join 192.168.1.199:16443 --token vulvta.9ns7da3saibv4pg1 --discovery-token-ca-cert-hash sha256:72a0896e27521244850b8f1c3b600087292c2d10f2565adb56381f1f4ba7057a
Join xuegod66 to the cluster:
[root@xuegod66 ~]# kubeadm join 192.168.1.199:16443 --token vulvta.9ns7da3saibv4pg1 \
    --discovery-token-ca-cert-hash sha256:72a0896e27521244850b8f1c3b600087292c2d10f2565adb56381f1f4ba7057a \
    --ignore-preflight-errors=SystemVerification
# The output confirms that xuegod66 has joined the cluster as a worker node
# Check the cluster nodes on xuegod63:
[root@xuegod63 ~]# kubectl get nodes
# Optionally label xuegod66 so its ROLES column shows "work":
[root@xuegod63 ~]# kubectl label nodes xuegod66 node-role.kubernetes.io/work=work
[root@xuegod63 ~]# kubectl get nodes
7. Install the Kubernetes network component Calico

Upload the image bundle calico.tar.gz that Calico needs to the xuegod63, xuegod62, xuegod64 and xuegod66 nodes and import it manually:
[root@xuegod63 ~]# ctr -n=k8s.io images import calico.tar.gz
[root@xuegod62 ~]# ctr -n=k8s.io images import calico.tar.gz
[root@xuegod64 ~]# ctr -n=k8s.io images import calico.tar.gz
[root@xuegod66 ~]# ctr -n=k8s.io images import calico.tar.gz
Upload calico.yaml to xuegod63 and install the Calico network plugin from it.
Edit calico.yaml first:
If a machine has multiple NICs, the Calico config must name the NIC that can reach the network; even with a single NIC it is worth specifying, so Calico picks the usable interface directly:
- name: IP_AUTODETECTION_METHOD
  value: "interface=ens33"
[root@xuegod63 ~]# kubectl apply -f calico.yaml
[root@xuegod63 ~]# kubectl get nodes
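After the apply, you can watch the Calico pods come up before checking node readiness; assuming the standard k8s-app=calico-node label used in the stock calico.yaml manifest:
[root@xuegod63 ~]# kubectl get pods -n kube-system -l k8s-app=calico-node -o wide
# every node should report Ready once its calico-node pod is Running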
8. Verify DNS resolution and pod networking in the cluster
# Upload busybox-1-28.tar.gz to the xuegod66 node and import it manually
[root@xuegod66 ~]# ctr -n=k8s.io images import busybox-1-28.tar.gz
[root@xuegod63 ~]# kubectl run busybox --image docker.io/library/busybox:1.28 --image-pull-policy=IfNotPresent --restart=Never --rm -it busybox -- sh
/ # ping www.baidu.com
PING www.baidu.com (39.156.66.18): 56 data bytes
64 bytes from 39.156.66.18: seq=0 ttl=127 time=39.3 ms
# External network access works, so the Calico network plugin is installed correctly
/ # nslookup kubernetes.default.svc.cluster.local
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
Name:      kubernetes.default.svc.cluster.local
Address 1: 10.96.0.1 kubernetes.default.svc.cluster.local
This output shows that the cluster's CoreDNS service works.
/ # exit   # leave the pod
10.96.0.10 is the clusterIP of CoreDNS, which confirms CoreDNS is configured correctly.
Internal Service names are resolved through CoreDNS.
9. Configure etcd for high availability
Edit etcd.yaml on xuegod63, xuegod62 and xuegod64:
vim /etc/kubernetes/manifests/etcd.yaml
Change
- --initial-cluster=xuegod63=https://192.168.1.63:2380
to:
- --initial-cluster=xuegod63=https://192.168.1.63:2380,xuegod62=https://192.168.1.62:2380,xuegod64=https://192.168.1.64:2380
After the change, restart kubelet:
[root@xuegod63 ~]# systemctl restart kubelet
[root@xuegod62 ~]# systemctl restart kubelet
[root@xuegod64 ~]# systemctl restart kubelet
Test whether the etcd cluster is configured correctly:
[root@xuegod63 ~]# docker run --rm -it --net host -v /etc/kubernetes:/etc/kubernetes registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.5.4-0 etcdctl --cert /etc/kubernetes/pki/etcd/peer.crt --key /etc/kubernetes/pki/etcd/peer.key --cacert /etc/kubernetes/pki/etcd/ca.crt member list
Output like the following means the etcd cluster is configured correctly:
1203cdd3ad75e761, started, xuegod63, https://192.168.1.63:2380, https://192.168.1.63:2379, false
5c9f58513f7f9d01, started, xuegod62, https://192.168.1.62:2380, https://192.168.1.62:2379, false
e4a737a7dcdd6fb5, started, xuegod64, https://192.168.1.64:2380, https://192.168.1.64:2379, false
[root@xuegod63 ~]# docker run --rm -it --net host -v /etc/kubernetes:/etc/kubernetes registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.5.4-0 etcdctl --cert /etc/kubernetes/pki/etcd/peer.crt --key /etc/kubernetes/pki/etcd/peer.key --cacert /etc/kubernetes/pki/etcd/ca.crt --endpoints=https://192.168.1.63:2379,https://192.168.1.62:2379,https://192.168.1.64:2379 endpoint health --cluster
Output like the following means the etcd cluster is healthy:
https://192.168.1.62:2379 is healthy: successfully committed proposal: took = 10.808798ms
https://192.168.1.64:2379 is healthy: successfully committed proposal: took = 11.179877ms
https://192.168.1.63:2379 is healthy: successfully committed proposal: took = 12.32604ms
[root@xuegod63 ~]# docker run --rm -it --net host -v /etc/kubernetes:/etc/kubernetes registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.5.4-0 etcdctl -w table --cert /etc/kubernetes/pki/etcd/peer.crt --key /etc/kubernetes/pki/etcd/peer.key --cacert /etc/kubernetes/pki/etcd/ca.crt --endpoints=https://192.168.1.63:2379,https://192.168.1.62:2379,https://192.168.1.64:2379 endpoint status --cluster
This prints a table with the status of each etcd member, which completes the highly available multi-master kubeadm setup.