System: CentOS Linux release 7.9.2009 (Core)
- Prepare 3 hosts

| IP | Hostname |
|---|---|
| 192.168.44.148 | k8s-master |
| 192.168.44.154 | k8s-worker01 |
| 192.168.44.155 | k8s-worker02 |
-
Preparation on all 3 hosts

Disable the firewall and SELinux:

```shell
systemctl disable firewalld --now
setenforce 0
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
```
Disable the swap partition (swap degrades performance, so it is turned off).
Reference: https://blog.csdn.net/dejunyang/article/details/97972399
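The swap-disable step can be sketched as follows: `swapoff -a` turns swap off immediately, and commenting out the swap entry in /etc/fstab keeps it off after a reboot. The sed substitution is demonstrated below on a sample fstab line (the device path is hypothetical; on a real node, run the same sed against /etc/fstab):

```shell
# Turn swap off for the running system (run as root on each node):
#   swapoff -a
# Keep it off across reboots by commenting out swap entries in /etc/fstab:
#   sed -ri 's|^([^#].*[[:space:]]swap[[:space:]])|#\1|' /etc/fstab
# The substitution, demonstrated on a sample fstab line:
fstab_line='/dev/mapper/centos-swap swap swap defaults 0 0'
echo "$fstab_line" | sed -r 's|^([^#].*[[:space:]]swap[[:space:]])|#\1|'
# prints: #/dev/mapper/centos-swap swap swap defaults 0 0
```

Afterwards, `free -h` should report 0 for swap.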
Set each node's hostname and configure /etc/hosts (this makes it easy to see which node a pod has been scheduled to).

Set the matching hostname on each node:

```shell
hostnamectl set-hostname k8s-master    # on 192.168.44.148
hostnamectl set-hostname k8s-worker01  # on 192.168.44.154
hostnamectl set-hostname k8s-worker02  # on 192.168.44.155
```

Run on every node:

```shell
cat >> /etc/hosts << EOF
192.168.44.148 k8s-master
192.168.44.154 k8s-worker01
192.168.44.155 k8s-worker02
EOF
```
-
Installing Kubernetes
-
Configure the yum repository (on all 3 hosts, using the Alibaba Cloud mirror):

```shell
cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
```
-
Install the Kubernetes components (on all nodes):

```shell
yum -y install kubeadm-1.19.0 kubectl-1.19.0 kubelet-1.19.0 --disableexcludes=kubernetes
```
-
- `--disableexcludes=kubernetes`: disables every repository except the kubernetes one
- A specific version is pinned here; without a version number, yum installs the latest
-
Enable and start the kubelet service (on all nodes):

```shell
systemctl enable kubelet --now
```

(Until `kubeadm init` or `kubeadm join` supplies a configuration, kubelet restarts in a loop; that is expected.)
-
Install Docker on all nodes:

```shell
yum install -y yum-utils
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum -y install docker-ce-19.03.10-3.el7 docker-ce-cli-19.03.10-3.el7 containerd.io docker-compose-plugin
```

Note: Kubernetes 1.19.0 is used here; the official recommendation for it is Docker 19.03.

Enable and start Docker:

```shell
systemctl enable docker --now
```
Configure a Docker registry mirror and change the cgroup driver to systemd (on all nodes).

About cgroups: cgroups (Linux Control Groups) limit, account for, and isolate the physical resources (CPU, memory, I/O, etc.) used by process groups. systemd is the system's own cgroup manager: it exists from system initialization, is tightly integrated with cgroups, and assigns a cgroup to every process, so it makes sense to let it manage them. Setting the driver to cgroupfs means there are two cgroup managers, which has been shown to cause instability under resource pressure. Because kubeadm manages the kubelet as a systemd service, the systemd driver is recommended (and cgroupfs discouraged) for kubeadm-based installs.

```shell
cat > /etc/docker/daemon.json << EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "registry-mirrors": ["https://l72pvsl6.mirror.aliyuncs.com"],
  "log-opts": {"max-size":"50M","max-file":"3"}
}
EOF
```
-
- `"exec-opts": ["native.cgroupdriver=systemd"]`: sets the cgroup driver to systemd
- `"registry-mirrors": ["https://l72pvsl6.mirror.aliyuncs.com"]`: sets the Docker registry mirror to the Alibaba Cloud mirror
- `"log-opts": {"max-size":"50M","max-file":"3"}`: caps each container log file at 50 MB, keeping at most 3 files
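Docker must be restarted for daemon.json to take effect, and dockerd refuses to start on malformed JSON, so it is worth validating the file first. A sketch, demonstrated on a temp file with the same contents as above (on a real node, point the check at /etc/docker/daemon.json):

```shell
# Validate the JSON before restarting Docker (syntax errors fail loudly):
tmp=$(mktemp)
cat > "$tmp" << 'EOF'
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "registry-mirrors": ["https://l72pvsl6.mirror.aliyuncs.com"],
  "log-opts": {"max-size":"50M","max-file":"3"}
}
EOF
python3 -m json.tool "$tmp" > /dev/null && echo "daemon.json: valid JSON"
# Then apply the new config on each node:
#   systemctl daemon-reload && systemctl restart docker
#   docker info | grep -i 'cgroup driver'   # should now report: systemd
```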
-
Allow iptables to see bridged traffic

To load the br_netfilter module explicitly, run `sudo modprobe br_netfilter`. Verify that the br_netfilter module is loaded with `lsmod | grep br_netfilter`:

```shell
sudo modprobe br_netfilter
lsmod | grep br_netfilter
```

For iptables on the Linux nodes to correctly see bridged traffic, confirm that `net.bridge.bridge-nf-call-iptables` is set to 1 in the sysctl config. For example:

```shell
cat > /etc/modules-load.d/k8s.conf << EOF
overlay
br_netfilter
EOF

modprobe overlay
modprobe br_netfilter
```
Set the required sysctl parameters (they persist across reboots):

```shell
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF
```

Apply the sysctl parameters (they also take effect after a reboot):

```shell
sysctl --system
```
-
Tweak kubeadm's default configuration

Save kubeadm's default configuration to a file; it will be imported to initialize the cluster later:

```shell
kubeadm config print init-defaults > init-config.yaml
```

Edit init-config.yaml (only a few key fields change):

```yaml
advertiseAddress: 192.168.44.148
imageRepository: registry.aliyuncs.com/google_containers
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
  podSubnet: 192.168.0.0/16
```
note:

- `advertiseAddress`: the IP the master uses to communicate with the cluster
- `imageRepository: registry.aliyuncs.com/google_containers`: the image registry address
- `networking`: only `podSubnet: 192.168.0.0/16` was added; everything else is the default. The subnet range in the CNI manifest will later be changed to match this.

List the images that need to be pulled:

```shell
kubeadm config images list --config=init-config.yaml
```

Pull the images:

```shell
kubeadm config images pull --config=init-config.yaml
```
-
Run kubeadm init to install the master node:

```shell
kubeadm init --config=init-config.yaml
```

note: the following warning can be ignored:

```
W0131 16:57:45.102290 3727 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
```

"Your Kubernetes control-plane has initialized successfully!" indicates that the control plane initialized successfully.
-
Configure credentials so kubectl can access the master

As a non-root user:

```shell
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```

As root:

```shell
echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> /etc/profile
source /etc/profile
```
-
Join the worker nodes to the cluster

When the cluster has been initialized successfully, the join command is printed at the very bottom of the output:

```shell
kubeadm join 192.168.44.148:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:88eb2767faaad801ce07501946b510d20bc180fd20cf35c9aa1822b3b345c2ef
```

To list the tokens for joining worker nodes (run on the master):

```shell
kubeadm token list
```

To get the --discovery-token-ca-cert-hash value:

```shell
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
```
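What that pipeline computes is the SHA-256 of the cluster CA's DER-encoded public key. The same output format can be seen on a throwaway key (generated here purely for illustration; on the master, use /etc/kubernetes/pki/ca.crt as above):

```shell
# Generate a disposable RSA key, then hash its DER-encoded public key,
# mirroring the ca.crt pipeline above. The result is 64 hex characters.
openssl genrsa -out demo.key 2048 2>/dev/null
openssl rsa -in demo.key -pubout -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
```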
The fixed format is (substitute your own token and hash):

```shell
kubeadm join 192.168.44.148:6443 --token abcdef.0123456789abcdef --discovery-token-ca-cert-hash sha256:88eb2767faaad801ce07501946b510d20bc180fd20cf35c9aa1822b3b345c2ef
```

If the token has expired, create a new one on the master:

```shell
kubeadm token create
```

To print the complete join command:

```shell
kubeadm token create --print-join-command
```

Run the join command, with your own cluster's token, on every worker node:

```shell
kubeadm join 192.168.44.148:6443 --token abcdef.0123456789abcdef --discovery-token-ca-cert-hash sha256:88eb2767faaad801ce07501946b510d20bc180fd20cf35c9aa1822b3b345c2ef
```
On the master, use `kubectl get pod -A` to check whether all components are installed and in the Running state (this takes a little while):

```shell
kubectl get pod -A
```

```
NAMESPACE     NAME                                 READY   STATUS    RESTARTS   AGE
kube-system   coredns-6d56c8448f-hsm4m             0/1     Pending   0          16m
kube-system   coredns-6d56c8448f-z66pp             0/1     Pending   0          16m
kube-system   etcd-k8s-master                      1/1     Running   0          16m
kube-system   kube-apiserver-k8s-master            1/1     Running   0          16m
kube-system   kube-controller-manager-k8s-master   1/1     Running   0          16m
kube-system   kube-proxy-447d7                     1/1     Running   0          119s
kube-system   kube-proxy-9gt2z                     1/1     Running   0          16m
kube-system   kube-proxy-s9hbb                     1/1     Running   0          114s
kube-system   kube-scheduler-k8s-master            1/1     Running   0          16m
```
On the master, view all node information:

```shell
kubectl get nodes
```

```
NAME           STATUS     ROLES    AGE     VERSION
k8s-master     NotReady   master   17m     v1.19.0
k8s-worker01   NotReady   <none>   2m48s   v1.19.0
k8s-worker02   NotReady   <none>   2m43s   v1.19.0
```

note: the nodes show NotReady because no CNI (Container Network Interface) plugin has been installed yet.
-
Install the Calico CNI network plugin (run on the master)

Download the manifest:

```shell
curl -O https://docs.tigera.io/archive/v3.18/manifests/calico.yaml
```

Adjust the pod subnet.
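"Adjust the pod subnet" means making CALICO_IPV4POOL_CIDR in calico.yaml match the podSubnet from init-config.yaml (192.168.0.0/16). In the v3.18 manifest this env var ships commented out; one way to uncomment it is with sed, sketched here on the two relevant lines (run the same substitutions against the real calico.yaml):

```shell
# Sample of the two relevant lines from calico.yaml (commented out by default):
cat > sample.yaml << 'EOF'
            # - name: CALICO_IPV4POOL_CIDR
            #   value: "192.168.0.0/16"
EOF
# Uncomment the env var while keeping YAML indentation intact:
sed -i -e 's|# \(- name: CALICO_IPV4POOL_CIDR\)|\1|' \
       -e 's|#   \(value: "192.168.0.0/16"\)|  \1|' sample.yaml
cat sample.yaml
# On the real file:
#   sed -i -e 's|# \(- name: CALICO_IPV4POOL_CIDR\)|\1|' \
#          -e 's|#   \(value: "192.168.0.0/16"\)|  \1|' calico.yaml
```

If your podSubnet differs, substitute your own CIDR into both the sed pattern and the value.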
Install it:

```shell
kubectl apply -f calico.yaml
```
Wait a while, then check the pods and nodes again:

```shell
kubectl get nodes,pod -A
```

```
NAME                STATUS   ROLES    AGE   VERSION
node/k8s-master     Ready    master   36m   v1.19.0
node/k8s-worker01   Ready    <none>   21m   v1.19.0
node/k8s-worker02   Ready    <none>   21m   v1.19.0

NAMESPACE     NAME                                           READY   STATUS             RESTARTS   AGE
kube-system   pod/calico-kube-controllers-56c7cdffc6-frc82   0/1     CrashLoopBackOff   6          13m
kube-system   pod/calico-node-7rxzn                          1/1     Running            0          13m
kube-system   pod/calico-node-dbjhv                          1/1     Running            0          13m
kube-system   pod/calico-node-fsw2d                          1/1     Running            0          13m
kube-system   pod/coredns-6d56c8448f-hsm4m                   0/1     Running            0          36m
kube-system   pod/coredns-6d56c8448f-z66pp                   0/1     Running            0          36m
kube-system   pod/etcd-k8s-master                            1/1     Running            0          36m
kube-system   pod/kube-apiserver-k8s-master                  1/1     Running            0          36m
kube-system   pod/kube-controller-manager-k8s-master         1/1     Running            0          36m
kube-system   pod/kube-proxy-447d7                           1/1     Running            0          21m
kube-system   pod/kube-proxy-9gt2z                           1/1     Running            0          36m
kube-system   pod/kube-proxy-s9hbb                           1/1     Running            0          21m
kube-system   pod/kube-scheduler-k8s-master                  1/1     Running            0          36m
```
The nodes are now Ready, but one pod is in an error state and two pods are not ready. Use `kubectl --namespace=kube-system describe pod <pod-name>` to view the pod's events:

```shell
kubectl --namespace=kube-system describe pod calico-kube-controllers-56c7cdffc6-frc82
```
I have not been able to resolve this error; deployments mostly end up in this state. It appears to be caused by version differences, but I do not know how to handle it!!!
If you know what causes this error, please share your advice.
This concludes the walkthrough of installing and deploying a Kubernetes cluster.