I. Cluster Planning and Architecture
Environment plan:
- Pod subnet: 10.244.0.0/16
- Service subnet: 10.96.0.0/12
- Note: the Pod and Service subnets must not overlap with each other (or with the host network); overlapping ranges will make the cluster installation fail.
- Container runtime: containerd.
| Hostname | IP address   | OS         |
|----------|--------------|------------|
| master-1 | 16.32.15.200 | CentOS 7.8 |
| node-1   | 16.32.15.201 | CentOS 7.8 |
| node-2   | 16.32.15.202 | CentOS 7.8 |
II. System Initialization (run on all nodes)
1. Disable the firewall and SELinux

```bash
systemctl disable firewalld --now
# Turn SELinux off immediately, then disable it persistently
setenforce 0
sed -i -r 's/SELINUX=[ep].*/SELINUX=disabled/g' /etc/selinux/config
```
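A quick sanity check that both the runtime state and the persistent setting took effect:

```bash
# Should report Permissive (until the later reboot) or Disabled
getenforce
# The persistent setting should now read SELINUX=disabled
grep '^SELINUX=' /etc/selinux/config
```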
2. Configure name resolution

```bash
cat >> /etc/hosts << EOF
16.32.15.200 master-1
16.32.15.201 node-1
16.32.15.202 node-2
EOF
```
Set the hostname on each corresponding host:

```bash
hostnamectl set-hostname master-1 && bash   # on 16.32.15.200
hostnamectl set-hostname node-1 && bash     # on 16.32.15.201
hostnamectl set-hostname node-2 && bash     # on 16.32.15.202
```
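Once the hosts entries and hostnames are in place, a small loop (a sketch, run from any of the three machines) confirms every name resolves and answers:

```bash
for h in master-1 node-1 node-2; do
  ping -c 1 -W 1 "$h" > /dev/null 2>&1 && echo "$h ok" || echo "$h FAILED"
done
```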
3. Keep the server clocks in sync

```bash
yum -y install ntpdate
ntpdate ntp1.aliyun.com
```

Add a cron job so the time re-syncs automatically at 01:00 every day:

```bash
echo "0 1 * * * ntpdate ntp1.aliyun.com" >> /var/spool/cron/root
crontab -l
```
4. Disable the swap partition (Kubernetes requires swap to be off)

```bash
swapoff --all
# Comment out swap entries so it stays off after a reboot
sed -i -r '/swap/ s/^/#/' /etc/fstab
```
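To confirm swap is really gone:

```bash
free -h | grep -i swap   # should show 0B total/used
cat /proc/swaps          # should list no active swap devices
```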
5. Tune kernel parameters: bridge traffic filtering and IP forwarding
Load the bridge netfilter module first; the net.bridge.* sysctl keys only exist once it is loaded:

```bash
modprobe br_netfilter
lsmod | grep br_netfilter   # verify it is loaded
```

Then write and apply the settings:

```bash
cat >> /etc/sysctl.d/kubernetes.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
sysctl -p /etc/sysctl.d/kubernetes.conf
```
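Note that modprobe does not survive a reboot, and this guide reboots the servers later. A minimal sketch to have systemd load the module on every boot:

```bash
# systemd-modules-load reads this directory at boot
cat > /etc/modules-load.d/kubernetes.conf <<EOF
br_netfilter
EOF
```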
6. Enable IPVS support
In Kubernetes, kube-proxy implements Services with one of two proxy modes: iptables or IPVS. IPVS performs better at scale, but using it requires loading the IPVS kernel modules manually.

```bash
yum -y install ipset ipvsadm
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
chmod +x /etc/sysconfig/modules/ipvs.modules
# Run the script
/etc/sysconfig/modules/ipvs.modules
# Verify the IPVS modules are loaded
lsmod | grep -e ip_vs -e nf_conntrack_ipv4
```
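One caveat: on kernels 4.19 and newer, nf_conntrack_ipv4 was merged into nf_conntrack, so the script above fails there. A version-agnostic sketch:

```bash
# Load whichever conntrack module the running kernel actually ships
if modinfo nf_conntrack_ipv4 > /dev/null 2>&1; then
  modprobe nf_conntrack_ipv4   # CentOS 7's 3.10 kernel
else
  modprobe nf_conntrack        # kernels >= 4.19
fi
```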
7. Install Docker
Note: Docker is kept around for pulling images and building from Dockerfiles; it does not conflict with containerd.

```bash
curl -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo
wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
yum makecache
# yum-utils provides the yum-config-manager program
yum install -y yum-utils
# Add the Aliyun Docker CE repository with yum-config-manager
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum install docker-ce-20.10.6 docker-ce-cli-20.10.6 -y
```

Configure a registry mirror for Docker:

```bash
mkdir /etc/docker
cat <<EOF > /etc/docker/daemon.json
{
  "registry-mirrors": ["https://aoewjvel.mirror.aliyuncs.com"]
}
EOF
# Start Docker and enable it at boot
systemctl enable docker --now
systemctl status docker
```
8. Reboot the servers (optional)

```bash
reboot
```
III. Install and Configure the containerd Runtime
Run the following on all three servers.
1. Install containerd

```bash
yum -y install containerd.io-1.6.6
```

2. Generate the default containerd config, then edit it

```bash
mkdir -p /etc/containerd
containerd config default > /etc/containerd/config.toml
```

Open /etc/containerd/config.toml (e.g. with vim) and change these two settings, which switch the cgroup driver to systemd and point the pause (sandbox) image at an Aliyun mirror:

```toml
SystemdCgroup = true
sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.7"
```
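The same two edits can be made non-interactively; this sketch assumes the stock defaults produced by `containerd config default` (SystemdCgroup = false and an upstream pause image):

```bash
sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
sed -i 's#sandbox_image = ".*"#sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.7"#' /etc/containerd/config.toml
# Confirm both lines now carry the intended values
grep -E 'SystemdCgroup|sandbox_image' /etc/containerd/config.toml
```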
3. Start containerd and enable it at boot

```bash
systemctl enable containerd --now
```
4. Add a config file for the crictl tool
crictl is a command-line client for CRI (Container Runtime Interface) compatible runtimes. In Kubernetes the kubelet talks to the container runtime over the CRI; crictl lets you talk to the same runtime directly.

```bash
cat > /etc/crictl.yaml <<EOF
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 10
debug: false
EOF
```

- runtime-endpoint: the socket of the CRI runtime service.
- image-endpoint: the socket of the CRI image service (here the same containerd socket); this is not a registry address.
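With the endpoints in place, crictl talks to containerd directly; a couple of quick checks:

```bash
crictl info    # runtime name, version and status
crictl ps -a   # CRI-managed containers (empty until the cluster is up)
```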
5. Configure a registry mirror for containerd
Point containerd at a per-registry config directory in /etc/containerd/config.toml (the config_path key lives under the CRI plugin's registry section):

```toml
[plugins."io.containerd.grpc.v1.cri".registry]
  config_path = "/etc/containerd/certs.d"
```

Then add the mirror configuration for docker.io. Note that each mirror needs its own [host."..."] table in hosts.toml:

```bash
mkdir /etc/containerd/certs.d/docker.io/ -p
vim /etc/containerd/certs.d/docker.io/hosts.toml
```

```toml
server = "https://docker.io"

[host."https://vh3bm52y.mirror.aliyuncs.com"]
  capabilities = ["pull"]

[host."https://registry.docker-cn.com"]
  capabilities = ["pull"]
```

Restart containerd:

```bash
systemctl restart containerd
```
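One way to confirm the mirror configuration is picked up is a test pull of any small public image, e.g.:

```bash
crictl pull docker.io/library/busybox:1.28
crictl images | grep busybox
```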
IV. Install kubeadm (run on all nodes)
1. Configure an Aliyun yum repository and install kubeadm, kubelet and kubectl in one go

```bash
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
EOF
yum install -y kubelet-1.26.0 kubeadm-1.26.0 kubectl-1.26.0
```

2. kubeadm deploys the main Kubernetes components as containers via the kubelet service, so enable kubelet first. (It will restart in a loop until `kubeadm init` writes its configuration; that is expected.)

```bash
systemctl enable kubelet.service --now
```
V. Initialize the Cluster
Run the following on master-1.
1. Point crictl at the containerd socket

```bash
crictl config runtime-endpoint unix:///run/containerd/containerd.sock
```

2. Generate the default init configuration

```bash
kubeadm config print init-defaults > kubeadm.yaml
```

Adjust the defaults as needed; the changes made here are:
- advertiseAddress: set to the master's IP address
- criSocket: set to the containerd socket
- imageRepository: set to an Aliyun mirror for the control-plane images
- podSubnet: the Pod network CIDR
- serviceSubnet: the Service network CIDR
- two blocks appended at the end: a KubeProxyConfiguration selecting ipvs mode and a KubeletConfiguration selecting the systemd cgroup driver
- nodeRegistration.name: set to the current hostname
The final configuration file:
```yaml
apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 16.32.15.200
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///run/containerd/containerd.sock
  imagePullPolicy: IfNotPresent
  name: master-1
  taints: null
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: 1.26.0
networking:
  dnsDomain: cluster.local
  podSubnet: 10.244.0.0/16
  serviceSubnet: 10.96.0.0/12
scheduler: {}
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
```
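Optionally, pre-pull the control-plane images so the init step itself has less to do (the init output below also suggests this):

```bash
kubeadm config images pull --config kubeadm.yaml
```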
3. Run the initialization

```bash
kubeadm init --config=kubeadm.yaml --ignore-preflight-errors=SystemVerification
```

On success it prints output like the following:
```
[init] Using Kubernetes version: v1.26.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local node] and IPs [10.96.0.1 16.32.15.200]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost node] and IPs [16.32.15.200 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost node] and IPs [16.32.15.200 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 12.003782 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node node as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node node as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: abcdef.0123456789abcdef
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 16.32.15.200:6443 --token abcdef.0123456789abcdef \
        --discovery-token-ca-cert-hash sha256:02dc7cc7702f9814d01f9a4d5957da3053e74adcc2f583415e516a4b81fb37bc
```
4. Set up kubectl's config file; this hands kubectl the admin credentials it needs to manage the cluster

```bash
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```

Verify that kubectl now works:

```bash
kubectl get nodes
```
5. Listing images: images pulled through containerd do not show up in `docker images`; use ctr or crictl instead:

```bash
ctr -n k8s.io images list
crictl images
```

- `-n` selects the containerd namespace; the kubelet keeps its images in k8s.io.
VI. Join the Worker Nodes to the Cluster
Run the following on both node hosts.
1. Copy the join command from the init output and run it on both nodes:

```bash
kubeadm join 16.32.15.200:6443 --token abcdef.0123456789abcdef \
        --discovery-token-ca-cert-hash sha256:6839bcc102c7ab089554871dd1a8f3d4261e1482ff13eafdf32fc092ebaf9f7e
```

If the token has been lost, create a fresh one and print the matching join command:

```bash
kubeadm token create --print-join-command
```
2. Label the two worker nodes (run on master-1):

```bash
kubectl label nodes node-1 node-role.kubernetes.io/work=work
kubectl label nodes node-2 node-role.kubernetes.io/work=work
```
VII. Install the Calico Network Plugin
1. Upload the calico.yaml manifest to the server, then apply it on the master:

```bash
kubectl apply -f calico.yaml
```
2. Check the cluster state: all nodes should become Ready once Calico is up

```bash
kubectl get nodes
```

3. Check that the built-in Pods in kube-system are all in the Running state

```bash
kubectl get pods -n kube-system
```
VIII. Test CoreDNS Resolution
1. Pull the busybox:1.28 image

```bash
ctr -n k8s.io images pull docker.io/library/busybox:1.28
```

2. Test CoreDNS

```
kubectl run busybox --image busybox:1.28 --restart=Never --rm -it -- sh
If you don't see a command prompt, try pressing enter.
/ # nslookup kubernetes.default.svc.cluster.local
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      kubernetes.default.svc.cluster.local
Address 1: 10.96.0.1 kubernetes.default.svc.cluster.local
```

- Note: use busybox 1.28 specifically, not the latest tag; in newer busybox builds nslookup fails to resolve cluster DNS names and IPs.
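A non-interactive variant of the same test, handy for scripting (a one-shot pod that is removed on exit):

```bash
kubectl run dns-test --image=busybox:1.28 --restart=Never --rm -it -- \
  nslookup kubernetes.default.svc.cluster.local
```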
IX. Extras
1. Differences between ctr and crictl
Both are command-line tools for working with a container runtime, but they differ in several ways:

- ctr ships with containerd and is its native client; crictl comes from the Kubernetes community (the cri-tools project).
- ctr talks only to containerd, over containerd's own API; crictl works with any CRI (Container Runtime Interface) compatible runtime, such as containerd or CRI-O.
- ctr exposes more low-level functionality (image, namespace, snapshot and task management); crictl covers the Kubernetes-facing basics: pods, containers and images as the kubelet sees them.
- ctr has no notion of pods; crictl's commands mirror Kubernetes concepts, which makes it the better fit for debugging cluster nodes.

In short, both are useful; which one to reach for depends on whether you are debugging containerd itself or the Kubernetes view of it.
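A few rough command equivalences between the two tools (k8s.io being the containerd namespace the kubelet uses):

```bash
ctr -n k8s.io images ls                                    # ~ crictl images
ctr -n k8s.io images pull docker.io/library/busybox:1.28   # ~ crictl pull busybox:1.28
ctr -n k8s.io containers ls                                # ~ crictl ps -a
```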
2. Calico on hosts with multiple NICs
Some servers have several network interfaces with only one of them routable; in that case Calico may autodetect the wrong address, and the interface has to be pinned explicitly. The snippet below, added to the env section of the calico-node container in calico.yaml, pins ens33 (a regular expression can be used instead of a literal name):

```yaml
- name: IP_AUTODETECTION_METHOD
  value: "interface=ens33"
```

Re-apply the manifest for the change to take effect:

```bash
kubectl apply -f calico.yaml
```
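Alternatively, the same change can be pushed into the running DaemonSet without editing the manifest; a sketch assuming the default install placed calico-node in kube-system:

```bash
kubectl set env daemonset/calico-node -n kube-system \
  IP_AUTODETECTION_METHOD=interface=ens33
```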