I. Kubeadm Deployment
Hostname | IP Address | Services Deployed
---|---|---
master01 (2C/4G, more than 2 CPU cores required) | 192.168.145.15 | docker, kubeadm, kubelet, kubectl, flannel
node01 (2C/2G) | 192.168.145.30 | docker, kubeadm, kubelet, kubectl, flannel
node02 (2C/2G) | 192.168.145.45 | docker, kubeadm, kubelet, kubectl, flannel
1. Install Docker and kubeadm on all nodes
2. Deploy the Kubernetes master
3. Deploy the container network plugin
4. Deploy the Kubernetes nodes and join them to the cluster
1. Environment Preparation
#On all nodes: disable the firewall, SELinux, and swap
systemctl stop firewalld
systemctl disable firewalld
setenforce 0
sed -i 's/enforcing/disabled/' /etc/selinux/config
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
swapoff -a                          #the swap partition must be disabled
sed -ri 's/.*swap.*/#&/' /etc/fstab #permanently disable swap; in sed, & stands for the matched text
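#Optional sanity check: the Swap line reported by free should now read all zeros
free -h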
#Load the ip_vs modules
for i in $(ls /usr/lib/modules/$(uname -r)/kernel/net/netfilter/ipvs|grep -o "^[^.]*");do echo $i; /sbin/modinfo -F filename $i >/dev/null 2>&1 && /sbin/modprobe $i;done
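#Optional check: the loaded ipvs modules should now be visible
lsmod | grep ip_vs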
#Set the hostnames (run the matching command on the corresponding node)
hostnamectl set-hostname master01
hostnamectl set-hostname node01
hostnamectl set-hostname node02
#On all nodes, update the hosts file
vim /etc/hosts
192.168.145.15 master01
192.168.145.30 node01
192.168.145.45 node02
#Tune kernel parameters
cat > /etc/sysctl.d/kubernetes.conf << EOF
#Enable bridge mode so bridged traffic is passed to the iptables chains
net.bridge.bridge-nf-call-ip6tables=1
net.bridge.bridge-nf-call-iptables=1
#Disable the IPv6 protocol
net.ipv6.conf.all.disable_ipv6=1
net.ipv4.ip_forward=1
EOF
#Apply the parameters
sysctl --system
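#Optional check: confirm the bridge and forwarding parameters took effect (if the bridge keys are missing, load the module first with modprobe br_netfilter)
sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward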
2. Install Docker on All Nodes
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum install -y docker-ce docker-ce-cli containerd.io
cd /etc/docker
cat > /etc/docker/daemon.json <<EOF
{
"registry-mirrors": ["https://00ub0bmk.mirror.aliyuncs.com"],
"exec-opts": ["native.cgroupdriver=systemd"],
"log-driver": "json-file",
"log-opts": {
"max-size": "500m", "max-file": "3"
}
}
EOF
#Use the systemd-managed cgroups for resource control; compared with cgroupfs, systemd's limits on CPU, memory, etc. are simpler, more mature, and more stable.
#Logs use the json-file driver, capped at 500 MB per file with at most 3 files kept, so log systems such as ELK can collect and manage them easily (kubelet exposes container logs under /var/log/containers).
systemctl daemon-reload            #reload unit configuration
systemctl start docker.service     #start the docker service
systemctl enable docker.service    #enable docker at boot
docker info | grep "Cgroup Driver" #verify the docker configuration
Cgroup Driver: systemd
3. Install kubeadm, kubelet, and kubectl on All Nodes
#Define the Kubernetes repo
cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
yum install -y kubelet-1.20.15 kubeadm-1.20.15 kubectl-1.20.15
#Enable kubelet at boot
systemctl enable kubelet.service
#In a kubeadm-installed cluster the K8S components run as Pods, i.e. as containers, so kubelet must be set to start at boot
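#Optional check: kubelet should report enabled; it will crash-loop until kubeadm init/join runs, which is expected at this stage
systemctl is-enabled kubelet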
4. Deploy the K8S Cluster
4.1 Configure the master01 Node
#List the images required for initialization
kubeadm config images list --kubernetes-version 1.20.15
#Generate the kubeadm initialization configuration
kubeadm config print init-defaults > /opt/kubeadm-config.yaml
cd /opt/
vim kubeadm-config.yaml
......
11 localAPIEndpoint:
12 advertiseAddress: 192.168.145.15 #the master node's IP address
13 bindPort: 6443
......
32 imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers #registry to pull images from; the default is k8s.gcr.io
33 kind: ClusterConfiguration
34 kubernetesVersion: v1.20.15 #the Kubernetes version
35 networking:
36 dnsDomain: cluster.local
37 podSubnet: 10.244.0.0/16 #the pod subnet; 10.244.0.0/16 matches flannel's default network
38 serviceSubnet: 10.96.0.0/16 #the service subnet
39 scheduler: {}
#Append the following to the end of the file
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs #switch kube-proxy from the default proxy mode to ipvs
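#Optional check once the cluster is up: confirm kube-proxy really runs in ipvs mode (ipvsadm requires: yum install -y ipvsadm)
kubectl -n kube-system get cm kube-proxy -o yaml | grep mode
ipvsadm -Ln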
#Pull the images online
kubeadm config images pull --config /opt/kubeadm-config.yaml
#List the pulled images
docker images
#Initialize the master
kubeadm init --config=/opt/kubeadm-config.yaml --upload-certs | tee kubeadm-init.log
#--upload-certs automatically distributes the certificate files when nodes join later
#tee kubeadm-init.log captures the output as a log
#Review the kubeadm-init log
less kubeadm-init.log
#Kubernetes configuration directory
ls /etc/kubernetes/
#Directory holding the CA and other certificates and keys
ls /etc/kubernetes/pki
#Set up kubectl
kubectl must be authenticated and authorized by the API server before it can perform management operations. A kubeadm-deployed cluster generates an admin kubeconfig with administrator privileges, /etc/kubernetes/admin.conf, which kubectl loads from the default path $HOME/.kube/config.
mkdir -p $HOME/.kube                                #create the .kube directory in the home directory
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config #copy the generated admin.conf into $HOME/.kube
chown $(id -u):$(id -g) $HOME/.kube/config          #fix ownership of the config file
#Deploy the flannel network plugin
#Upload the flannel image archives and the CNI plugin archive cni-plugins-linux-amd64-v0.8.6.tgz (bundled here in flannel-v0.21.5.zip) to /opt on all nodes; upload kube-flannel.yml to the master node
cd /opt/
unzip flannel-v0.21.5.zip
docker load -i flannel.tar
docker load -i flannel-cni-plugin.tar
mv /opt/cni /opt/cni_bak
mkdir -p /opt/cni/bin
tar xf cni-plugins-linux-amd64-v0.8.6.tgz -C /opt/cni/bin
#Create the flannel resources on the master node
kubectl apply -f kube-flannel.yml
#Check node status from the master node
kubectl get nodes
kubectl get pods -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-flannel kube-flannel-ds-xffxg 1/1 Running 0 24s
kube-system coredns-54d67798b7-7jjll 1/1 Running 0 9m49s
kube-system coredns-54d67798b7-txdlc 1/1 Running 0 9m49s
kube-system etcd-master01 1/1 Running 0 9m58s
kube-system kube-apiserver-master01 1/1 Running 0 9m58s
kube-system kube-controller-manager-master01 1/1 Running 0 9m58s
kube-system kube-proxy-ll5bh 1/1 Running 0 9m49s
kube-system kube-scheduler-master01 1/1 Running 0 9m58s
4.2 Configure the Worker Nodes
#Copy the flannel image archives from master01 to all worker nodes
scp flannel*tar node01:/opt #run on master
scp flannel*tar node02:/opt #run on master
#Join the worker nodes to the K8S cluster using the command printed by kubeadm init
kubeadm join 192.168.145.15:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:508d252e11aadcbe12da889eb90038a6f72d8aa4922f166a1fa4f765bfe723ce
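#The default token is only valid for 24 hours; if it has expired, generate a fresh join command on master01:
kubeadm token create --print-join-command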
#Deploy the flannel network plugin
cd /opt/
docker load -i flannel.tar
docker load -i flannel-cni-plugin.tar
#Check node status from the master node
kubectl get nodes
kubectl get pods -A
II. Kubeadm High-Availability Deployment
#Notes:
● Master nodes require more than 2 CPU cores
● The newest release is not necessarily best: its core features are stable relative to older versions, but new features and interfaces tend to be less so
● Learn the HA deployment for one version and the others work much the same
● Upgrade the hosts to CentOS 7.9 where possible
● Upgrade the kernel to a stable release, 4.19 or later
● When picking a K8S version, prefer a patch release of 1.xx.5 or higher (these are generally more stable)
Hostname | IP Address | Services Deployed
---|---|---
master01 (2C/4G, more than 2 CPU cores required) | 192.168.145.15 | docker, kubeadm, kubelet, kubectl, flannel
master02 (2C/4G, more than 2 CPU cores required) | 192.168.145.30 | docker, kubeadm, kubelet, kubectl, flannel
master03 (2C/4G, more than 2 CPU cores required) | 192.168.145.45 | docker, kubeadm, kubelet, kubectl, flannel
node01 (2C/2G) | 192.168.145.60 | docker, kubeadm, kubelet, kubectl, flannel
node02 (2C/2G) | 192.168.145.75 | docker, kubeadm, kubelet, kubectl, flannel
1. Environment Preparation
#On all nodes: disable the firewall, SELinux, and swap
systemctl stop firewalld
systemctl disable firewalld
setenforce 0
sed -i 's/enforcing/disabled/' /etc/selinux/config
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
swapoff -a
sed -ri 's/.*swap.*/#&/' /etc/fstab
#Set the hostnames (run the matching command on the corresponding node)
hostnamectl set-hostname master01
hostnamectl set-hostname master02
hostnamectl set-hostname master03
hostnamectl set-hostname node01
hostnamectl set-hostname node02
#On all nodes, update the hosts file
vim /etc/hosts
192.168.145.15 master01
192.168.145.30 master02
192.168.145.45 master03
192.168.145.60 node01
192.168.145.75 node02
#Synchronize time on all nodes
yum -y install ntpdate
ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
echo 'Asia/Shanghai' >/etc/timezone
ntpdate time2.aliyun.com
systemctl enable --now crond
crontab -e
*/30 * * * * /usr/sbin/ntpdate time2.aliyun.com
#Raise Linux resource limits on all nodes
vim /etc/security/limits.conf
* soft nofile 65536
* hard nofile 131072
* soft nproc 65535
* hard nproc 655350
* soft memlock unlimited
* hard memlock unlimited
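#The new limits apply from the next login session; verify with:
ulimit -n -u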
#Tune kernel parameters
cat > /etc/sysctl.d/k8s.conf <<EOF
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
fs.may_detach_mounts = 1
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_watches=89100
fs.file-max=52706963
fs.nr_open=52706963
net.netfilter.nf_conntrack_max=2310720
net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_probes = 3
net.ipv4.tcp_keepalive_intvl = 15
net.ipv4.tcp_max_tw_buckets = 36000
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_max_orphans = 327680
net.ipv4.tcp_orphan_retries = 3
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 16384
net.ipv4.ip_conntrack_max = 65536
net.ipv4.tcp_timestamps = 0
net.core.somaxconn = 16384
EOF
#Apply the parameters
sysctl --system
#Load the ip_vs modules
for i in $(ls /usr/lib/modules/$(uname -r)/kernel/net/netfilter/ipvs|grep -o "^[^.]*");do echo $i; /sbin/modinfo -F filename $i >/dev/null 2>&1 && /sbin/modprobe $i;done
2. Install Docker on All Nodes
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum install -y docker-ce docker-ce-cli containerd.io
cd /etc/docker
cat > /etc/docker/daemon.json <<EOF
{
"registry-mirrors": ["https://00ub0bmk.mirror.aliyuncs.com"],
"exec-opts": ["native.cgroupdriver=systemd"],
"log-driver": "json-file",
"log-opts": {
"max-size": "500m", "max-file": "3"
}
}
EOF
systemctl daemon-reload
systemctl restart docker.service
systemctl enable docker.service
3. Install kubeadm, kubelet, and kubectl on All Nodes
#Define the Kubernetes repo
cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
yum install -y kubelet-1.20.15 kubeadm-1.20.15 kubectl-1.20.15
#Configure kubelet to use Alibaba Cloud's pause image
cat > /etc/sysconfig/kubelet <<EOF
KUBELET_EXTRA_ARGS="--cgroup-driver=systemd --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google_containers/pause-amd64:3.2"
EOF
#Enable kubelet at boot
systemctl enable --now kubelet
4. Install and Configure the High-Availability Components
#Deploy Haproxy on all master nodes
yum -y install haproxy keepalived
cat > /etc/haproxy/haproxy.cfg << EOF
global
log 127.0.0.1 local0 info
log 127.0.0.1 local1 warning
chroot /var/lib/haproxy
pidfile /var/run/haproxy.pid
maxconn 4000
user haproxy
group haproxy
daemon
stats socket /var/lib/haproxy/stats
defaults
mode tcp
log global
option tcplog
option dontlognull
option redispatch
retries 3
timeout queue 1m
timeout connect 10s
timeout client 1m
timeout server 1m
timeout check 10s
maxconn 3000
frontend monitor-in
bind *:33305
mode http
option httplog
monitor-uri /monitor
frontend k8s-master
bind *:16443
mode tcp
option tcplog
default_backend k8s-master
backend k8s-master
mode tcp
option tcplog
option tcp-check
balance roundrobin
server k8s-master1 192.168.145.15:6443 check inter 10000 fall 2 rise 2 weight 1
server k8s-master2 192.168.145.30:6443 check inter 10000 fall 2 rise 2 weight 1
server k8s-master3 192.168.145.45:6443 check inter 10000 fall 2 rise 2 weight 1
EOF
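#Optional check: validate the configuration syntax now, and probe the monitor endpoint once haproxy is started below
haproxy -c -f /etc/haproxy/haproxy.cfg
curl http://127.0.0.1:33305/monitor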
#Deploy keepalived on all master nodes
cd /etc/keepalived/
vim keepalived.conf
! Configuration File for keepalived
global_defs {
router_id LVS_HA1 #router identifier; set a different value on each node
}
vrrp_script chk_haproxy {
script "/etc/keepalived/check_haproxy.sh"
interval 2
weight 2
}
vrrp_instance VI_1 {
state MASTER #instance state, MASTER/BACKUP; set BACKUP in the backup nodes' config file
interface ens33
virtual_router_id 51
priority 100 #initial priority of this node; set a lower value on backup nodes
advert_int 1
virtual_ipaddress {
192.168.145.100 #the VIP address
}
track_script {
chk_haproxy
}
}
vim check_haproxy.sh
#!/bin/bash
#killall -0 sends no signal; it only tests whether a haproxy process exists.
#If haproxy has died, stop keepalived so the VIP fails over to a backup node.
if ! killall -0 haproxy; then
    systemctl stop keepalived
fi
systemctl enable --now haproxy
systemctl enable --now keepalived
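#Optional check: the VIP should be bound on the ens33 interface of whichever node is currently MASTER
ip addr show ens33 | grep 192.168.145.100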
5. Deploy the K8S Cluster
#Generate the cluster initialization configuration file on the master01 node
kubeadm config print init-defaults > /opt/kubeadm-config.yaml
cd /opt/
vim kubeadm-config.yaml
......
11 localAPIEndpoint:
12 advertiseAddress: 192.168.145.15 #the current master node's IP address
13 bindPort: 6443
21 apiServer:
22 certSANs: #add a certSANs list under the apiServer section with every master node IP and the cluster VIP
23 - 192.168.145.100
24 - 192.168.145.15
25 - 192.168.145.30
26 - 192.168.145.45
30 clusterName: kubernetes
31 controlPlaneEndpoint: "192.168.145.100:6443" #the cluster VIP address
32 controllerManager: {}
38 imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers #registry to pull images from
39 kind: ClusterConfiguration
40 kubernetesVersion: v1.20.15 #the Kubernetes version
41 networking:
42 dnsDomain: cluster.local
43 podSubnet: "10.244.0.0/16" #the pod subnet; 10.244.0.0/16 matches flannel's default network
44 serviceSubnet: 10.96.0.0/16 #the service subnet
45 scheduler: {}
#Append the following to the end of the file
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs #switch kube-proxy from the default proxy mode to ipvs
#Migrate the initialization configuration file to the current schema
kubeadm config migrate --old-config kubeadm-config.yaml --new-config new.yaml
#Pull the images on all nodes
#Copy the yaml configuration to the other hosts so they can pull images from it
for i in master02 master03 node01 node02; do scp /opt/new.yaml $i:/opt/; done
kubeadm config images pull --config /opt/new.yaml
#Initialize on the master01 node
kubeadm init --config new.yaml --upload-certs | tee kubeadm-init.log
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of the control-plane node running the following command on each as root:
#Command for joining additional master nodes; record it!
kubeadm join 192.168.145.100:6443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:63df6e626b225f6b60112bbc592d0ff69c1ae2e7869c9b41f142e4776df71d53 \
--control-plane --certificate-key 0aad75a1c7f178755e4776e474e12616ab88086c93984ba311a465df1999d193
Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.
Then you can join any number of worker nodes by running the following on each as root:
#Command for joining worker nodes; record it!
kubeadm join 192.168.145.100:6443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:63df6e626b225f6b60112bbc592d0ff69c1ae2e7869c9b41f142e4776df71d53
#Join the remaining nodes to the cluster
#Join the master nodes
kubeadm join 192.168.145.100:6443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:63df6e626b225f6b60112bbc592d0ff69c1ae2e7869c9b41f142e4776df71d53 \
--control-plane --certificate-key 0aad75a1c7f178755e4776e474e12616ab88086c93984ba311a465df1999d193
#Join the worker nodes
kubeadm join 192.168.145.100:6443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:63df6e626b225f6b60112bbc592d0ff69c1ae2e7869c9b41f142e4776df71d53
#If initialization fails, reset and retry:
kubeadm reset -f
ipvsadm --clear
rm -rf ~/.kube
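#Note: kubeadm reset does not clean up CNI state; if flannel was already applied, also remove the leftover CNI configuration (default paths assumed here)
rm -rf /etc/cni/net.d /var/lib/cni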
#Then run the initialization again
#Configure the environment on the master01 node
#Set up kubectl
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
#Edit the controller-manager and scheduler manifests
vim /etc/kubernetes/manifests/kube-scheduler.yaml
vim /etc/kubernetes/manifests/kube-controller-manager.yaml
......
#- --port=0 #search for port=0 and comment this line out
systemctl restart kubelet
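#Optional check: with --port=0 commented out the health ports come back, and componentstatuses should report Healthy
kubectl get cs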
#Deploy the flannel network plugin
On all nodes, upload the flannel image archives and the CNI plugin archive cni-plugins-linux-amd64-v1.3.0.tgz (bundled in flannel-v0.21.5.zip) to /opt; upload kube-flannel.yml to the master node
cd /opt
unzip flannel-v0.21.5.zip
docker load -i flannel.tar
docker load -i flannel-cni-plugin.tar
mv /opt/cni /opt/cni_bak
mkdir -p /opt/cni/bin
tar xf cni-plugins-linux-amd64-v1.3.0.tgz -C /opt/cni/bin
kubectl apply -f kube-flannel.yml
#Operations on the other master nodes
#Set up kubectl
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
kubectl apply -f kube-flannel.yml
#Check cluster status from master01
kubectl get nodes
kubectl get pod -n kube-system
Summary
Kubeadm deployment process
1) Initialize all nodes and install the docker engine plus kubeadm, kubelet, and kubectl
2) Generate the cluster initialization configuration file and edit it
3) Run kubeadm init with the configuration file to bring up the K8S master control node
4) Install a CNI network plugin (flannel, calico, etc.)
5) Run kubeadm join on the remaining nodes to add them to the cluster as worker nodes
Renewing certificates in a kubeadm-deployed K8S cluster
1) Back up the old certificates and kubeconfig files
mkdir /etc/kubernetes.bak
cp -r /etc/kubernetes/pki/ /etc/kubernetes.bak
cp /etc/kubernetes/*.conf /etc/kubernetes.bak
2) Regenerate the certificates
kubeadm alpha certs renew all --config=kubeadm.yaml
3) Regenerate the kubeconfig files
kubeadm init phase kubeconfig all --config kubeadm.yaml
4) Restart kubelet and the Pod containers of the other K8S components
systemctl restart kubelet
mv /etc/kubernetes/manifests /tmp
mv /tmp/*.yaml /etc/kubernetes/manifests
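#Optional check: confirm the new certificate expiry dates (in v1.20 this command is also available without the alpha prefix)
kubeadm certs check-expiration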