Preface:
Cloud native | Kubernetes | Getting started with kubekey, the Kubernetes cluster deployment tool (using kubekey on CentOS 7) (晚風(fēng)_END's blog on CSDN)
In the previous post I used kubekey to deploy a simple, non-highly-available Kubernetes cluster with a single etcd instance. After some further work it turns out the deployment can be simplified, skipping part of the download work (mainly the download of the Kubernetes components). The trade-off is that the Kubernetes version is pinned to 1.23.16. In return, etcd can be deployed as a production-grade external cluster, the apiserver and the other control-plane components are highly available, and the whole procedure stays very simple, which makes this approach quite nice.
1. The offline installation package
Note: this offline package is intended for CentOS 7 and has been verified across the whole CentOS 7 series; some openEuler versions should also work.
Link: https://pan.baidu.com/s/1d4YR_a244iZj5aj2DJLU2w?pwd=kkey
Extraction code: kkey
The package contains roughly the following files:
The first needs little explanation: it is the kubekey installer. Unpack it and check that the kk binary has execute permission; add it if it does not.
The second is the archive of Kubernetes component binaries; simply extract it into root's home directory.
The third holds the hard dependencies; extract it, change into the resulting directory and run rpm -ivh * (a combined sketch of these steps follows this list).
The fourth is the deployment manifest; fill in the IP addresses and server passwords according to your environment, and basically nothing else needs to change.
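Taken together, the preparation looks roughly like this. This is only a minimal sketch: the archive names (kubekey-offline.tar.gz, kubernetes-bin.tar.gz, deps-rpm.tar.gz) are placeholders for whatever the files in the network-disk package are actually called, and everything is assumed to run as root in the directory holding the downloads.

# 1. unpack kubekey and make sure the kk binary is executable
tar zxf kubekey-offline.tar.gz
chmod +x kk

# 2. extract the Kubernetes component binaries into root's home directory
tar zxf kubernetes-bin.tar.gz -C /root

# 3. install the hard dependencies from the rpm archive
tar zxf deps-rpm.tar.gz
cd deps-rpm
rpm -ivh *
cd -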
After that you can start the deployment. It will still pull some container images; by default they come from the KubeSphere registry, and if that is too slow you can export KKZONE=cn so that all images are pulled from the Alibaba Cloud mirror instead.
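Kicking off the deployment then comes down to something like the following. A minimal sketch: config-sample.yaml stands for the manifest shipped in the package (use whatever name the file actually has), and kk is assumed to sit in the current directory.

export KKZONE=cn                           # optional: pull images from the Aliyun mirror
./kk create cluster -f config-sample.yaml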
2. Walking through the deployment manifest
The file content is shown below; the key parts are the hosts section and the roleGroups section.
Under hosts, list one entry per node. For my test I used four VMware virtual machines, each with 4 GB of RAM and 2 CPUs; fill in the IP addresses and passwords according to your environment.
The user is root, mainly to avoid a class of failures; root has the highest privileges, and for installation work there is no point in getting fancy with an unprivileged user (yum installs are never run as a normal user either, for exactly that reason).
In roleGroups, the .11, .12 and .13 nodes are the control-plane nodes and also form the etcd cluster.
High availability is provided by haproxy; I have not yet dug into the implementation details.
The detailed installation and deployment logs are written under /root/kubekey/logs.
apiVersion: kubekey.kubesphere.io/v1alpha2
kind: Cluster
metadata:
  name: sample
spec:
  hosts:
  - {name: node1, address: 192.168.123.11, internalAddress: 192.168.123.11, user: root, password: "密碼"}
  - {name: node2, address: 192.168.123.12, internalAddress: 192.168.123.12, user: root, password: "密碼"}
  - {name: node3, address: 192.168.123.13, internalAddress: 192.168.123.13, user: root, password: "密碼"}
  - {name: node4, address: 192.168.123.14, internalAddress: 192.168.123.14, user: root, password: "密碼"}
  roleGroups:
    etcd:
    - node1
    - node2
    - node3
    control-plane:
    - node1
    - node2
    - node3
    worker:
    - node4
  controlPlaneEndpoint:
    ## Internal loadbalancer for apiservers
    internalLoadbalancer: haproxy
    domain: lb.kubesphere.local
    address: ""
    port: 6443
  kubernetes:
    version: v1.23.16
    clusterName: cluster.local
    autoRenewCerts: true
    containerManager: docker
  etcd:
    type: kubekey
  network:
    plugin: calico
    kubePodsCIDR: 10.244.0.0/18
    kubeServiceCIDR: 10.96.0.0/18
    ## multus support. https://github.com/k8snetworkplumbingwg/multus-cni
    multusCNI:
      enabled: false
  registry:
    privateRegistry: ""
    namespaceOverride: ""
    registryMirrors: []
    insecureRegistries: []
  addons: []
3. Checking the state of the finished deployment
[root@centos1 ~]# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME STATUS MESSAGE ERROR
controller-manager Healthy ok
scheduler Healthy ok
etcd-2 Healthy {"health":"true"}
etcd-1 Healthy {"health":"true"}
etcd-0 Healthy {"health":"true"}
[root@centos1 ~]# kubectl get po -A -owide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
kube-system calico-kube-controllers-84897d7cdf-hrj4f 1/1 Running 0 152m 10.244.28.2 node3 <none> <none>
kube-system calico-node-2m7hp 1/1 Running 0 152m 192.168.123.11 node1 <none> <none>
kube-system calico-node-5ztjk 1/1 Running 0 152m 192.168.123.14 node4 <none> <none>
kube-system calico-node-96dmb 1/1 Running 0 152m 192.168.123.13 node3 <none> <none>
kube-system calico-node-rqp2p 1/1 Running 0 152m 192.168.123.12 node2 <none> <none>
kube-system coredns-b7c47bcdc-bbxck 1/1 Running 0 152m 10.244.28.3 node3 <none> <none>
kube-system coredns-b7c47bcdc-qtvhf 1/1 Running 0 152m 10.244.28.1 node3 <none> <none>
kube-system haproxy-node4 1/1 Running 0 152m 192.168.123.14 node4 <none> <none>
kube-system kube-apiserver-node1 1/1 Running 0 152m 192.168.123.11 node1 <none> <none>
kube-system kube-apiserver-node2 1/1 Running 0 152m 192.168.123.12 node2 <none> <none>
kube-system kube-apiserver-node3 1/1 Running 0 152m 192.168.123.13 node3 <none> <none>
kube-system kube-controller-manager-node1 1/1 Running 0 152m 192.168.123.11 node1 <none> <none>
kube-system kube-controller-manager-node2 1/1 Running 0 152m 192.168.123.12 node2 <none> <none>
kube-system kube-controller-manager-node3 1/1 Running 0 152m 192.168.123.13 node3 <none> <none>
kube-system kube-proxy-649mn 1/1 Running 0 152m 192.168.123.14 node4 <none> <none>
kube-system kube-proxy-7q7ts 1/1 Running 0 152m 192.168.123.13 node3 <none> <none>
kube-system kube-proxy-dmd7v 1/1 Running 0 152m 192.168.123.12 node2 <none> <none>
kube-system kube-proxy-fpb6z 1/1 Running 0 152m 192.168.123.11 node1 <none> <none>
kube-system kube-scheduler-node1 1/1 Running 0 152m 192.168.123.11 node1 <none> <none>
kube-system kube-scheduler-node2 1/1 Running 0 152m 192.168.123.12 node2 <none> <none>
kube-system kube-scheduler-node3 1/1 Running 0 152m 192.168.123.13 node3 <none> <none>
kube-system nodelocaldns-565pz 1/1 Running 0 152m 192.168.123.12 node2 <none> <none>
kube-system nodelocaldns-dpwlx 1/1 Running 0 152m 192.168.123.13 node3 <none> <none>
kube-system nodelocaldns-ndlbw 1/1 Running 0 152m 192.168.123.14 node4 <none> <none>
kube-system nodelocaldns-r8gjl 1/1 Running 0 152m 192.168.123.11 node1 <none> <none>
[root@centos1 ~]# kubectl get no -owide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
node1 Ready control-plane,master 152m v1.23.16 192.168.123.11 <none> CentOS Linux 7 (Core) 3.10.0-1062.el7.x86_64 docker://20.10.8
node2 Ready control-plane,master 152m v1.23.16 192.168.123.12 <none> CentOS Linux 7 (Core) 3.10.0-1062.el7.x86_64 docker://20.10.8
node3 Ready control-plane,master 152m v1.23.16 192.168.123.13 <none> CentOS Linux 7 (Core) 3.10.0-1062.el7.x86_64 docker://20.10.8
node4 Ready worker 152m v1.23.16 192.168.123.14 <none> CentOS Linux 7 (Core) 3.10.0-1062.el7.x86_64 docker://20.10.8
After shutting down the .12 node, the Kubernetes cluster keeps running normally (.11 must not be shut down, because it is the management node; the cluster's kubeconfig has not been copied to the other nodes).
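If kubectl should keep working even while node1 is down, the admin kubeconfig can simply be copied to another control-plane node. A minimal sketch, assuming root SSH access between the nodes and the default kubeadm-style paths that kubekey sets up:

# run on node1: give node2 a usable kubeconfig of its own
ssh root@192.168.123.12 'mkdir -p /root/.kube'
scp /root/.kube/config root@192.168.123.12:/root/.kube/config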
That wraps up this walkthrough of using kubekey to install and deploy a highly available Kubernetes cluster in semi-offline fashion.