
A comprehensive project: publishing an intranet K8s cluster via SNAT+DNAT and simulating CI/CD with Jenkins + GitLab + Harbor


Table of Contents

Project Name

Project Architecture Diagram

Project Environment

Project Overview

Project Preparation

Project Steps

I. Configure each host's IP address, permanently disable the firewall and SELinux, set the hostnames, and enable routing plus an SNAT policy on the firewalld server

1. Configure the IP address on the firewalld server, permanently disable the firewall and SELinux, and set the hostname

2. Enable routing on the firewalld server and configure the SNAT policy so internal servers can reach the Internet

3. Configure the remaining servers' IP addresses, permanently disable the firewall and SELinux, and set their hostnames

II. Deploy the docker + k8s environment: a k8s cluster with 1 master and 2 node servers

1. Install docker on the 3 k8s servers, following the official documentation

2. Create the k8s cluster, installed here with kubeadm

2.1 Confirm docker is installed, start it, and enable it at boot

2.2 Configure Docker to use systemd as the default cgroup driver

2.3 Disable the swap partition

2.4 Edit the hosts file and the kernel parameter file

2.5 Install kubeadm, kubelet, and kubectl

2.6 Deploy the Kubernetes master

2.7 Join the node servers to the k8s cluster

2.8 Install the flannel network plugin

2.9 Check cluster status

III. Compile and install nginx, build a custom image, and push it to Docker Hub for the nodes to pull

1. Create a one-step nginx install script on the master

2. Create a Dockerfile

3. Build the image

4. Push the image to Docker Hub for the nodes to download

5. Pull the image on the nodes from Docker Hub

IV. Create an NFS server to provide identical web data to all nodes, combine pv + pvc with volume mounts to keep the data consistent, and use probes to check the state of containers in the pods

1. Deploy the NFS server environment with ansible

1.1 Set up passwordless SSH from the ansible server to the k8s cluster and the NFS server

1.2 Install ansible on the ansible server and write the host inventory

1.3 Write the nfs install scripts

1.4 Write a playbook to install and deploy nfs

1.5 Check the yaml file syntax

1.6 Run the yaml file

1.7 Verify that nfs installed successfully

2. Mount the web data onto the containers and use probes to check container state

2.1 Create the web page data file

2.1.1 Create the shared web page data file on the NFS server

2.2 Create the nginx.conf configuration file

2.2.1 Install nginx on the NFS server with the earlier one-step compile script to obtain nginx.conf

2.2.2 Edit nginx.conf to add the readiness and liveness probe location blocks

2.3 Edit /etc/exports and make it take effect

2.4 Mount the web page data file

2.4.1 Create a pv on the master

2.4.2 Create a pvc on the master to consume the pv

2.5 Mount the nginx.conf configuration file

2.5.1 Create a pv on the master

2.5.2 Create a pvc on the master to consume the pv

2.6 Create pods on the master that use the pvcs

2.7 Create a service to publish the deployment

2.8 Configure a DNAT policy on the firewalld server to publish the web service

2.9 Test access

V. Use HPA to autoscale pods horizontally on CPU utilization, with a minimum of 10 and a maximum of 20 pods

1. Install the metrics service

2. Configure HPA to autoscale pods horizontally when CPU utilization crosses the target, min 10, max 20 pods

2.1 Add resource requests to the original deployment yaml

2.2 Create the hpa

3. Load-test the cluster

3.1 Install ab on another machine

3.2 Run an ab load test against the cluster

4. Watch the hpa and observe the changes

5. Observe cluster performance

6. Tune the web cluster

VI. Use an ingress object with an ingress-controller to load balance the web service

1. Deploy the ingress environment with ansible

1.1 Copy the configuration files needed for the ingress controller to the ansible server

1.2 Write a script to pull the ingress images

1.3 Write a playbook to install and deploy the ingress controller

1.4 Check the result

2. Apply ingress-controller-deploy.yaml to start the ingress controller

3. Enable ingress to tie the ingress controller to the services

3.1 Write the ingress yaml file

3.2 Apply the file

3.3 Check the result

3.4 Confirm nginx.conf inside the ingress controller contains the matching ingress rules

4. Test access

4.1 Find the host port exposed by the ingress controller's service

4.2 Access by domain name from another host or a Windows machine

4.2.1 Edit the hosts file

4.2.2 Test access

5. Start a second service and pod

6. Test again and check whether www.xin.com is reachable

VII. Deploy Prometheus in the k8s cluster to monitor the web service, with Grafana for dashboards

1. Set up Prometheus to monitor the k8s cluster

1.1 Deploy node-exporter as a daemonset

1.2 Deploy Prometheus

1.3 Test

2. Set up Grafana with Prometheus for dashboards

2.1 Deploy grafana

2.2 Test

2.2.1 Add the Prometheus data source

2.2.2 Import a template

2.3 Dashboard results

VIII. Build a CI/CD environment: integrate gitlab with Jenkins and Harbor in a pipeline that pulls code, builds images, and pushes them automatically

1. Deploy the gitlab environment

1.1 Install gitlab

1.1.1 Configure the gitlab yum repository (install GitLab from the Tsinghua mirror)

1.1.2 Install gitlab

1.1.3 Configure the GitLab site URL

1.2 Start and access GitLab

1.2.1 Reconfigure and start

1.2.2 Configure a DNAT policy on the firewalld server so Windows can reach it

1.2.3 Access from Windows

1.2.4 Set the default access password

1.2.5 Log in

1.3 Log in with a user you created

2. Deploy the jenkins environment

2.1 Download the generic java war package from the official site; the LTS release is recommended

2.2 Download and install java (JDK 11 or later) and configure the JDK environment variables

2.2.1 Install with yum

2.2.2 Locate the JAVA install directory

2.2.3 Configure environment variables

2.3 Upload the downloaded jenkins.war to the server

2.4 Start the jenkins service

2.5 Test access

3. Deploy the harbor environment

3.1 Install docker and docker-compose

3.1.1 Install docker

3.1.2 Install docker-compose

3.2 Install harbor

3.2.1 Download the harbor release and upload it to the linux server

3.2.2 Unpack it and edit the configuration

3.3 Log in to harbor

4. Integrate gitlab with jenkins and Harbor in a pipeline task that pulls code, builds images, and pushes them

4.1 The jenkins server needs docker installed and a Harbor login configured for pulling images

4.1.1 Install docker on the jenkins server

4.1.2 Configure the jenkins server to log in to Harbor

4.1.3 Test the login

4.2 Install git on jenkins

4.3 Install maven on jenkins

4.3.1 Download the package

4.3.2 Unpack it

4.3.3 Configure environment variables

4.3.4 Verify with mvn

4.4 Create a test project in gitlab

4.5 Create a dev project in harbor

4.6 Configure JDK and Maven in the Jenkins UI

4.7 Create a pipeline task in the Jenkins view

4.7.1 A pipeline task needs a pipeline script, whose first step should be pulling the project from gitlab

4.7.2 Write the pipeline

5. Verify

IX. Deploy a jump server to restrict user access to the internal network

1. Configure a DNAT policy on firewalld so users who ssh to the firewalld server land on the jump server

2. Configure the jump server to accept ssh only from the 192.168.31.0/24 network

3. Set up passwordless SSH from the jump server to the other internal servers

4. Verify

X. Install zabbix to monitor all servers: CPU, memory, network bandwidth, etc.

1. Install the zabbix environment

2. Test access

3. Install the zabbix-agent service on the servers to be monitored

4. Install the zabbix-get service on the zabbix-server machine

5. Fetch data

6. Add the monitored hosts in the web UI

XI. Load-test the whole k8s cluster and related servers with ab

1. Install ab

2. Test

Problems encountered

1. After a reboot, every server except firewalld lost its xshell connection

2. Pods would not start: the pvc-to-pv binding failed because storageClassName differed between the pvc and pv yaml files

3. During test access the content served was not what was configured: the web data file mount failed while the nginx.conf mount succeeded

4. The last pipeline step failed

5. The last pipeline step failed to log in to harbor

Project takeaways


Project Name

A comprehensive project: publishing an intranet K8s cluster via SNAT+DNAT and simulating CI/CD with Jenkins + GitLab + Harbor

Project Architecture Diagram

[Figure: project architecture diagram]

Project Environment

CentOS 7.9

Docker 24.0.5

Docker Compose 2.7.0

kubelet 1.23.6

kubeadm 1.23.6

kubectl 1.23.6

nginx 1.21.1

Ansible 2.9.27

Prometheus 2.0.0

Grafana 6.1.4

Zabbix 5.0

GitLab 16.3.1

Jenkins 2.414.1

Harbor 2.1.0

Project Overview

Project name: A comprehensive project publishing an intranet K8s cluster via SNAT+DNAT and simulating CI/CD with Jenkins + GitLab + Harbor

Project environment: CentOS 7.9 (11 machines: 3 k8s servers at 2 CPU/2 GB, 1 GitLab server at 4 CPU/8 GB, 7 servers at 1 CPU/1 GB), Docker 24.0.5, nginx 1.21.1, Prometheus 2.0.0, Grafana 6.1.4, GitLab 16.3.1, Jenkins 2.414.1, Harbor 2.1.0, Zabbix 5.0, Ansible 2.9.27, etc.

Project description: This project simulates a production environment and publishes intranet services through SNAT+DNAT. It deploys a jump server to restrict user access to the internal network; sets up web, nfs, ansible, harbor, zabbix, gitlab, and jenkins environments; builds a highly available, high-performance web cluster on docker + k8s; monitors and graphs the web cluster's resources with prometheus + grafana inside k8s; and simulates a CI/CD flow to get a concrete feel for highly automated application delivery.

Project steps:

  1. Plan the overall cluster architecture, deploy the firewall server, enable routing and configure the SNAT policy, and deploy the web cluster on k8s (1 master, 2 nodes)
  2. Compile and install nginx and build a custom image for the servers inside the web cluster
  3. Deploy nfs to provide identical data to all web cluster nodes, combine pv + pvc + nfs volume mounts to keep data consistent, use readiness and liveness probes to check container state, and configure a DNAT policy so outside users can reach the web cluster's data
  4. Use HPA to autoscale pods horizontally on CPU utilization, with a minimum of 10 and a maximum of 20 pods
  5. Use an ingress object with an ingress-controller to give the web service domain-based load balancing
  6. Deploy Prometheus in the k8s web cluster to monitor the web service, with Grafana for dashboards
  7. Build a CI/CD environment: integrate gitlab with Jenkins and Harbor in a pipeline that pulls code, builds images, and pushes them automatically
  8. Deploy a jump server to restrict user access to the internal network
  9. Monitor all servers outside the web cluster with zabbix: CPU, memory, network bandwidth, etc.
  10. Load-test the whole cluster with ab to find its resource bottlenecks

Project takeaways:

Planning the whole cluster's architecture from a network topology diagram improved the project's execution and efficiency. I became more familiar with using k8s and deploying clusters, deepened my understanding of both prometheus + grafana and zabbix as monitoring approaches, and got a concrete feel for CI/CD automation by integrating gitlab with Jenkins and Harbor in a pipeline. Reading logs helped enormously with debugging and sharpened my troubleshooting skills.

Project Preparation

11 Linux servers, all on bridged networking (firewalld needs two NICs). Configure the IP addresses, set the hostnames, disable the firewall and SELinux, and keep both from starting at boot, so nothing interferes with the project later.

IP address                        Role
192.168.31.69, 192.168.107.10    firewalld (firewall server)
192.168.107.11                   master
192.168.107.12                   node1
192.168.107.13                   node2
192.168.107.14                   jump_server (jump host)
192.168.107.15                   nfs
192.168.107.16                   zabbix
192.168.107.17                   gitlab
192.168.107.18                   jenkins
192.168.107.19                   harbor
192.168.107.20                   ansible

Project Steps

I. Configure each host's IP address, permanently disable the firewall and SELinux, set the hostnames, and enable routing plus an SNAT policy on the firewalld server

Set each host's IP address and hostname. All hosts in this project use bridged networking; note that firewalld has two NICs and needs two IP addresses.

1. Configure the IP address on the firewalld server, permanently disable the firewall and SELinux, and set the hostname

The inline comments are only explanatory; remove them when configuring

[root@fiewalld ~]# cd /etc/sysconfig/network-scripts
[root@fiewalld network-scripts]# ls
ifcfg-ens33  ifdown       ifdown-ippp  ifdown-post    ifdown-sit       ifdown-tunnel  ifup-bnep  ifup-ipv6  ifup-plusb  ifup-routes  ifup-TeamPort  init.ipv6-global ifdown-bnep  ifdown-ipv6  ifdown-ppp     ifdown-Team      ifup           ifup-eth   ifup-isdn  ifup-post   ifup-sit     ifup-tunnel    network-functions
ifcfg-lo     ifdown-eth   ifdown-isdn  ifdown-routes  ifdown-TeamPort  ifup-aliases   ifup-ippp  ifup-plip  ifup-ppp    ifup-Team    ifup-wireless  network-functions-ipv6
[root@fiewalld network-scripts]# vi ifcfg-ens33
BOOTPROTO="none"  #將dhcp改為none,為了實(shí)驗(yàn)的方便防止后面由于ip地址改變而出錯(cuò),將ip地址靜態(tài)化
NAME="ens33"
DEVICE="ens33"
ONBOOT="yes"
IPADDR=192.168.31.69   # WAN-side IP address
PREFIX=24
GATEWAY=192.168.31.1
DNS1=114.114.114.114

Then configure this machine's second NIC

Copy ifcfg-ens33 to ifcfg-ens36 in the same directory and edit it as follows (the LAN port needs no gateway or DNS)

[root@fiewalld network-scripts]# cp ifcfg-ens33 ifcfg-ens36
[root@fiewalld network-scripts]# ls
ifcfg-ens33  ifdown       ifdown-ippp  ifdown-post    ifdown-sit       ifdown-tunnel  ifup-bnep  ifup-ipv6  ifup-plusb  ifup-routes  ifup-TeamPort  init.ipv6-global
ifcfg-ens36  ifdown-bnep  ifdown-ipv6  ifdown-ppp     ifdown-Team      ifup           ifup-eth   ifup-isdn  ifup-post   ifup-sit     ifup-tunnel    network-functions
ifcfg-lo     ifdown-eth   ifdown-isdn  ifdown-routes  ifdown-TeamPort  ifup-aliases   ifup-ippp  ifup-plip  ifup-ppp    ifup-Team    ifup-wireless  network-functions-ipv6
[root@fiewalld network-scripts]# vi ifcfg-ens36
BOOTPROTO="none"
NAME="ens36"
DEVICE="ens36"
ONBOOT="yes"
IPADDR=192.168.107.10    # LAN-side IP address
PREFIX=24

Then restart the network

[root@fiewalld network-scripts]# service network restart

Check that the new IP addresses took effect

[screenshot: both NICs show the configured addresses]

The IP addresses are configured successfully.

Permanently disable the firewall and SELinux

[root@fiewalld ~]# systemctl disable firewalld  # permanently disable the firewall
[root@fiewalld ~]# vim /etc/selinux/config
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#     enforcing - SELinux security policy is enforced.
#     permissive - SELinux prints warnings instead of enforcing.
#     disabled - No SELinux policy is loaded.
SELINUX=disabled     # change this line
# SELINUXTYPE= can take one of three values:
#     targeted - Targeted processes are protected,
#     minimum - Modification of targeted policy. Only selected processes are protected. 
#     mls - Multi Level Security protection.
SELINUXTYPE=targeted

Set the hostname

[root@fiewalld ~]# hostnamectl set-hostname firewalld
[root@fiewalld ~]# su - root

2. Enable routing on the firewalld server and configure the SNAT policy so internal servers can reach the Internet

Write a script and run it

[root@fiewalld ~]# vim snat_dnat.sh
#!/bin/bash
iptables -F
iptables -t nat -F

#enable route: turn on IP forwarding
echo 1 >/proc/sys/net/ipv4/ip_forward

#enable snat: let hosts on the 192.168.107.0/24 network reach the Internet through the WAN port
iptables -t nat -A POSTROUTING -s 192.168.107.0/24 -o ens33 -j SNAT --to-source  192.168.31.69    

Run the script

[root@fiewalld ~]# bash snat_dnat.sh

Check that it took effect

[root@fiewalld ~]# iptables -t nat -L -n
Chain PREROUTING (policy ACCEPT)
target     prot opt source               destination         

Chain INPUT (policy ACCEPT)
target     prot opt source               destination         

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination         

Chain POSTROUTING (policy ACCEPT)
target     prot opt source               destination         
SNAT       all  --  192.168.107.0/24     0.0.0.0/0            to:192.168.31.69
# this rule confirms the SNAT policy is in place
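
The rules written by the script live only in the kernel's running tables and are lost on reboot. A minimal way to persist them on CentOS 7; a sketch assuming the iptables-services package is acceptable (firewalld is disabled here, so the two do not conflict):

[root@fiewalld ~]# yum install -y iptables-services   # provides /etc/sysconfig/iptables and the save helper
[root@fiewalld ~]# service iptables save              # write the current rules to /etc/sysconfig/iptables
[root@fiewalld ~]# systemctl enable iptables          # restore them at boot
[root@fiewalld ~]# echo 'net.ipv4.ip_forward = 1' >> /etc/sysctl.conf   # keep IP forwarding on across reboots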

3. Configure the remaining servers' IP addresses, permanently disable the firewall and SELinux, and set their hostnames

Taking one of them as an example:

[root@nfs ~]# vi /etc/sysconfig/network-scripts/ifcfg-ens33
BOOTPROTO="none"
NAME="ens33"
DEVICE="ens33"
ONBOOT="yes"
IPADDR=192.168.107.15
PREFIX=24
GATEWAY=192.168.107.10    # note: the gateway is the firewalld server's LAN address, since traffic leaves through it
DNS1=114.114.114.114

Then restart the network

[root@nfs ~]# service network restart

Check that the new IP address took effect

[screenshot: the new IP address is in place]

The IP address has been updated.

Test Internet access

[screenshot: ping to an external host succeeds]

The firewalld server's SNAT policy works: the internal servers can now reach the Internet.

Permanently disable the firewall and SELinux

[root@nfs ~]# systemctl disable firewalld  # permanently disable the firewall
[root@nfs ~]# vim /etc/selinux/config
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#     enforcing - SELinux security policy is enforced.
#     permissive - SELinux prints warnings instead of enforcing.
#     disabled - No SELinux policy is loaded.
SELINUX=disabled     # change this line
# SELINUXTYPE= can take one of three values:
#     targeted - Targeted processes are protected,
#     minimum - Modification of targeted policy. Only selected processes are protected. 
#     mls - Multi Level Security protection.
SELINUXTYPE=targeted

Set the hostname (note: use this machine's own name, nfs)

[root@nfs ~]# hostnamectl set-hostname nfs
[root@nfs ~]# su - root

II. Deploy the docker + k8s environment: a k8s cluster with 1 master and 2 node servers

1. Install docker on the 3 k8s servers, following the official documentation

[root@master ~]# yum remove docker \
>                   docker-client \
>                   docker-client-latest \
>                   docker-common \
>                   docker-latest \
>                   docker-latest-logrotate \
>                   docker-logrotate \
>                   docker-engine
 
[root@master ~]# yum install -y yum-utils
 
[root@master ~]# yum-config-manager \
    --add-repo \
    https://download.docker.com/linux/centos/docker-ce.repo
 
[root@master ~]# yum install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin -y
 
[root@master ~]# systemctl start docker   # start docker
 
[root@master ~]# docker --version  # check that docker installed successfully
Docker version 24.0.5, build ced0996

2. Create the k8s cluster, installed here with kubeadm

2.1 Confirm docker is installed, start it, and enable it at boot
[root@master ~]# systemctl restart docker
[root@master ~]# systemctl enable docker
[root@master ~]# ps aux|grep docker
root      2190  1.4  1.5 1159376 59744 ?       Ssl  16:22   0:00 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
root      2387  0.0  0.0 112824   984 pts/0    S+   16:22   0:00 grep --color=auto docker
2.2 Configure Docker to use systemd as the default cgroup driver

Do this on every server, master and nodes alike

[root@master ~]# cat <<EOF > /etc/docker/daemon.json
> {
>    "exec-opts": ["native.cgroupdriver=systemd"]
> }
> EOF 
[root@master ~]# systemctl restart docker   # restart docker
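
A quick check that the driver change took effect; docker reports the active cgroup driver (a verification step, not in the original writeup):

[root@master ~]# docker info | grep -i 'cgroup driver'
 Cgroup Driver: systemd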
2.3 Disable the swap partition

k8s avoids using swap for data because swap degrades performance; do this on every server

[root@master ~]# swapoff -a   # disable for the current boot
[root@master ~]# sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab   # disable permanently
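
A quick way to confirm swap is really off (a verification sketch added here):

[root@master ~]# free -m | grep -i swap    # the Swap line should show 0 0 0
[root@master ~]# swapon -s                 # prints no entries when no swap is active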
2.4 Edit the hosts file and the kernel parameter file

Every machine's /etc/hosts needs these entries

[root@master ~]# cat >> /etc/hosts << EOF 
> 192.168.107.11 master
> 192.168.107.12 node1
> 192.168.107.13 node2
> EOF

Change the kernel parameters permanently, on every machine (master and nodes)

[root@master ~]# cat <<EOF >>  /etc/sysctl.conf   # append to the kernel parameter file
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_nonlocal_bind = 1
net.ipv4.ip_forward = 1
vm.swappiness=0
EOF
[root@master ~]# sysctl -p   # make the kernel reload and apply the parameters
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_nonlocal_bind = 1
net.ipv4.ip_forward = 1
vm.swappiness = 0
2.5 Install kubeadm, kubelet, and kubectl

kubeadm is the k8s management program. It runs on the master to bootstrap the whole cluster; behind the scenes it executes a large number of scripts that bring k8s up for us.

kubelet is the agent that runs on every node in the cluster and is how the master communicates with the nodes: it manages docker, telling it which containers to start, and it ensures containers are running in Pods.

kubectl is the command-line tool used on the master to issue instructions to the nodes and control what they do.

Add the kubernetes YUM repository

Every server in the cluster needs this

[root@master ~]# cat > /etc/yum.repos.d/kubernetes.repo << EOF
> [kubernetes]
> name=Kubernetes
> baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
> enabled=1
> gpgcheck=0
> repo_gpgcheck=0
> gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
> EOF

Install kubeadm, kubelet, and kubectl

[root@master ~]# yum install -y kubelet-1.23.6 kubeadm-1.23.6 kubectl-1.23.6
# pin the version: from 1.24 onward the default container runtime is no longer docker

Enable kubelet at boot; it is the k8s agent on the nodes and must always be running

[root@master ~]# systemctl enable  kubelet
2.6 Deploy the Kubernetes master

Run on the master host only

Pre-pull the coredns:1.8.4 image; it is needed later and must be pulled on every machine

[root@master ~]#  docker pull  coredns/coredns:1.8.4
[root@master ~]# docker tag coredns/coredns:1.8.4 registry.aliyuncs.com/google_containers/coredns:v1.8.4

Run the initialization on the master

[root@master ~]#kubeadm init \
	--apiserver-advertise-address=192.168.107.11 \
	--image-repository registry.aliyuncs.com/google_containers \
	--service-cidr=10.1.0.0/16 \
	--pod-network-cidr=10.244.0.0/16

#192.168.107.11 is the master's ip

#      --service-cidr string                  Use alternative range of IP address for service VIPs. (default "10.96.0.0/12")  service publishing/exposure --> dnat

#      --pod-network-cidr string              Specify range of IP addresses for the pod network. If set, the control plane will automatically allocate CIDRs for every node.

On success, record the following; the nodes will need it to join the cluster

kubeadm join 192.168.107.11:6443 --token i25xkd.0xrlqnee2gbky4uv \
	--discovery-token-ca-cert-hash sha256:7384e64dabec0ea4eb9f0b82729aa696f90ae8c8d9f6f7b2c87c33f71c611741
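
The token embedded in this command expires (24 hours by default). If a node joins later, a fresh join command can be generated on the master; this is standard kubeadm rather than part of the original run:

[root@master ~]# kubeadm token create --print-join-command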

Finish initialization by creating the kubeconfig directory and file, on the master

[root@master ~]# mkdir -p $HOME/.kube
[root@master ~]# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@master ~]# chown $(id -u):$(id -g) $HOME/.kube/config
2.7 Join the node servers to the k8s cluster

Test that node1 can reach the master

[root@node1 ~]# ping master
PING master (192.168.107.24) 56(84) bytes of data.
64 bytes from master (192.168.107.24): icmp_seq=1 ttl=64 time=0.765 ms
64 bytes from master (192.168.107.24): icmp_seq=2 ttl=64 time=1.34 ms
^C
--- master ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1002ms
rtt min/avg/max/mdev = 0.765/1.055/1.345/0.290 ms

Run on every node

[root@node1 ~]#kubeadm join 192.168.107.11:6443 --token i25xkd.0xrlqnee2gbky4uv \
	--discovery-token-ca-cert-hash sha256:7384e64dabec0ea4eb9f0b82729aa696f90ae8c8d9f6f7b2c87c33f71c611741

On the master, check that the nodes have joined the cluster

[root@master ~]# kubectl get node
NAME     STATUS     ROLES                  AGE    VERSION
master   NotReady   control-plane,master   5m2s   v1.23.6
node1    NotReady   <none>                 61s    v1.23.6
node2    NotReady   <none>                 58s    v1.23.6
2.8 Install the flannel network plugin

Run on the master node

flannel lets pods on the master and pods on the nodes communicate

Copy the flannel manifest to the master host

[screenshot: kube-flannel.yml transferred to the master]

Deploy flannel

[root@master ~]# kubectl apply -f kube-flannel.yml  # apply
namespace/kube-flannel created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds create
2.9 Check cluster status
[root@master ~]# kubectl get nodes
NAME     STATUS   ROLES                  AGE     VERSION
master   Ready    control-plane,master   9m49s   v1.23.6
node1    Ready    <none>                 5m48s   v1.23.6
node2    Ready    <none>                 5m45s   v1.23.6

This can take a little while; once all nodes show Ready, the k8s environment is up.

III. Compile and install nginx, build a custom image, and push it to Docker Hub for the nodes to pull

1. Create a one-step nginx install script on the master

[root@master ~]# mkdir /nginx
[root@master ~]# cd /nginx
[root@master nginx]# vim onekey_install_nginx.sh 
#!/bin/bash
 
# packages needed to satisfy the build dependencies
 
yum -y install zlib zlib-devel openssl openssl-devel pcre pcre-devel gcc gcc-c++ autoconf automake make psmisc net-tools lsof vim wget
 
# download nginx
 
mkdir  /nginx
 
cd /nginx
 
curl -O  http://nginx.org/download/nginx-1.21.1.tar.gz
 
# unpack
 
tar xf nginx-1.21.1.tar.gz
 
# enter the unpacked directory
 
cd nginx-1.21.1
 
# pre-build configuration
 
./configure --prefix=/usr/local/nginx1  --with-http_ssl_module   --with-threads  --with-http_v2_module  --with-http_stub_status_module  --with-stream
# compile
make -j 2
# compile and install
make  install

2. Create a Dockerfile

[root@master nginx]# vim Dockerfile 
# base image
FROM centos:7
# assign 1.21.1 to the NGINX_VERSION variable
ENV NGINX_VERSION 1.21.1
# author
ENV AUTHOR zhouxin
# maintainer label
LABEL maintainer="cali<695811769@qq.com>"
# command run inside the container at build time
RUN mkdir /nginx
# working directory on entering the container
WORKDIR /nginx
# copy files from the host into the container's /nginx
COPY . /nginx
# run the one-step nginx install script, then install some tools
# (comments sit on their own lines: a mid-line "#" is not a comment in a Dockerfile)
RUN set -ex; \
    bash  onekey_install_nginx.sh ; \
    yum install vim iputils  net-tools iproute -y
# declare the exposed port
EXPOSE 80
# put the compiled nginx on PATH
ENV PATH=/usr/local/nginx1/sbin:$PATH
# signal used to stop the container
STOPSIGNAL SIGQUIT
# start nginx in the foreground: -g "daemon off;" tells nginx not to daemonize
CMD ["nginx","-g","daemon off;"]

3. Build the image

[root@master nginx]# docker build -t zhouxin_nginx:1.0 .

Check the image

[screenshot: docker images lists zhouxin_nginx:1.0]
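
Before pushing, the image can be smoke-tested locally; a sketch assuming host port 8080 is free (the container name nginx-test is arbitrary):

[root@master nginx]# docker run -d --name nginx-test -p 8080:80 zhouxin_nginx:1.0
[root@master nginx]# curl http://127.0.0.1:8080/     # should return the compiled nginx's default page
[root@master nginx]# docker rm -f nginx-test         # clean up the test container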

4. Push the image to Docker Hub for the nodes to download

Push the custom image to my Docker Hub repository so the other 2 node servers can use it. This requires a Docker Hub account and a repository; I already created zhouxin03/nginx.

[screenshot: the zhouxin03/nginx repository on Docker Hub]

On the master, tag the image

[root@master nginx]# docker tag zhouxin_nginx:1.0 zhouxin03/nginx

Log in to Docker Hub

[root@master nginx]# docker login
Login with your Docker ID to push and pull images from Docker Hub. If you don't have a Docker ID, head over to https://hub.docker.com to create one.
Username: zhouxin03
Password: 
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store

Login Succeeded

Then push the image to the repository

[root@master nginx]# docker push zhouxin03/nginx
Using default tag: latest
The push refers to repository [docker.io/zhouxin03/nginx]
52bbda705d25: Pushed 
41e872683328: Pushed 
5f70bf18a086: Pushed 
5376459cbb05: Pushed 
174f56854903: Mounted from library/centos 
latest: digest: sha256:39801c440d239b8fec21fda5a750b38f96d64a13eef695c9394ffe244c5034a6 size: 1362

Now check the image on Docker Hub

[screenshot: the pushed image visible in the zhouxin03/nginx repository]

The image has been pushed to Docker Hub

5. Pull the image on the nodes from Docker Hub

[root@node1 ~]# docker pull zhouxin03/nginx:latest  # pull the image
latest: Pulling from zhouxin03/nginx
2d473b07cdd5: Pull complete 
63fe9f4e3ea7: Pull complete 
4f4fb700ef54: Pull complete 
947ca89e3d17: Pull complete 
0d4cea36d8fd: Pull complete 
Digest: sha256:39801c440d239b8fec21fda5a750b38f96d64a13eef695c9394ffe244c5034a6
Status: Downloaded newer image for zhouxin03/nginx:latest
docker.io/zhouxin03/nginx:latest
[root@node1 ~]# docker images
REPOSITORY                                           TAG       IMAGE ID       CREATED          SIZE
zhouxin03/nginx                                      latest    31274f1e297c   17 minutes ago   636MB
rancher/mirrored-flannelcni-flannel                  v0.19.2   8b675dda11bb   12 months ago    62.3MB
rancher/mirrored-flannelcni-flannel-cni-plugin       v1.1.0    fcecffc7ad4a   15 months ago    8.09MB
registry.aliyuncs.com/google_containers/kube-proxy   v1.23.6   4c0375452406   16 months ago    112MB
registry.aliyuncs.com/google_containers/coredns      v1.8.6    a4ca41631cc7   23 months ago    46.8MB
registry.aliyuncs.com/google_containers/pause        3.6       6270bb605e12   2 years ago      683kB
coredns/coredns                                      1.8.4     8d147537fb7d   2 years ago      47.6MB
registry.aliyuncs.com/google_containers/coredns      v1.8.4    8d147537fb7d   2 years ago      47.6MB

IV. Create an NFS server to provide identical web data to all nodes, combine pv + pvc with volume mounts to keep the data consistent, and use probes to check the state of containers in the pods

1. Deploy the NFS server environment with ansible

1.1 Set up passwordless SSH from the ansible server to the k8s cluster and the NFS server

Shown here is the process for the NFS server

[root@ansible ~]# ssh-keygen   # generate the key pair
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa): 
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:GtLchZ2flfBGzV5K3yqXePoIc9f1oT1WUOZzZ0AQdpw root@ansible
The key's randomart image is:
+---[RSA 2048]----+
|            ===+o|
|         o o =E*+|
|        . +  .*=B|
|     o . . . +.oB|
|    . + S   o. +o|
|     . o    o B.=|
|      .   o .*.+o|
|           +.o. .|
|            ...  |
+----[SHA256]-----+

[root@ansible ~]# ssh-copy-id -i /root/.ssh/id_rsa.pub 192.168.107.15  # copy the public key to the target server
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host '192.168.107.15 (192.168.107.15)' can't be established.
ECDSA key fingerprint is SHA256:/y4BmyQxo26qq5BDptWmP9KVykKwBX7YrugbGtSwN1Q.
ECDSA key fingerprint is MD5:8e:26:8d:24:1a:35:94:79:3e:b5:5a:1a:d3:9e:99:83.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@192.168.107.15's password:   # the first copy prompts for the remote server's login password

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh '192.168.107.15'"
and check to make sure that only the key(s) you wanted were added.

[root@ansible ~]# ssh root@192.168.107.15  # verify passwordless login works
Last login: Sat Sep  2 16:26:00 2023 from 192.168.31.67
[root@nfs ~]# 

For the other servers, just copy the ansible public key to each of them

[root@ansible ~]# ssh-copy-id -i /root/.ssh/id_rsa.pub 192.168.107.11  # copy the key to master
[root@ansible ~]# ssh-copy-id -i /root/.ssh/id_rsa.pub 192.168.107.12  # copy the key to node1
[root@ansible ~]# ssh-copy-id -i /root/.ssh/id_rsa.pub 192.168.107.13  # copy the key to node2
1.2 Install ansible on the ansible server and write the host inventory
[root@ansible ~]# yum install -y epel-release
[root@ansible ~]# yum install ansible -y
[root@ansible ~]# cd /etc/ansible/
[root@ansible ansible]# ls
ansible.cfg  hosts  roles
[root@ansible ansible]# vim hosts 
[nfs]
192.168.107.15  #nfs
[web]
192.168.107.11  #master
192.168.107.12  #node1
192.168.107.13  #node2
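
With the inventory written, the passwordless channels can be checked end to end before any playbook runs (a routine verification, using ansible's builtin ping module):

[root@ansible ansible]# ansible all -m ping   # every host in the inventory should answer "pong"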
1.3 Write the nfs install scripts

On the nfs server, install the nfs packages and enable the service at boot

[root@ansible ~]# vim nfs_install.sh
yum install -y nfs-utils    # install the nfs packages
systemctl start nfs         # start nfs
systemctl enable nfs        # enable nfs at boot

The k8s cluster also needs the nfs packages

[root@ansible ~]# vim web_nfs_install.sh
yum install -y nfs-utils    # install the nfs packages
1.4 Write a playbook to install and deploy nfs
[root@ansible ansible]# vim nfs_install.yaml
- hosts: nfs
  remote_user: root
  tasks:
  - name: install nfs in nfs
    script: /root/nfs_install.sh
- hosts: web
  remote_user: root
  tasks:
  - name: install nfs in web
    script: /root/web_nfs_install.sh

The script module copies a local script to the remote host and runs it there

1.5 Check the yaml file syntax
[root@ansible ansible]# ansible-playbook --syntax-check /etc/ansible/nfs_install.yaml

playbook: /etc/ansible/nfs_install.yaml
1.6 Run the yaml file
[root@ansible ansible]# ansible-playbook  nfs_install.yaml
1.7 Verify that nfs installed successfully

On the nfs server, check that the nfsd processes are running

[root@nfs ~]# ps aux|grep nfs
root       1693  0.0  0.0      0     0 ?        S<   17:05   0:00 [nfsd4_callbacks]
root       1699  0.0  0.0      0     0 ?        S    17:05   0:00 [nfsd]
root       1700  0.0  0.0      0     0 ?        S    17:05   0:00 [nfsd]
root       1701  0.0  0.0      0     0 ?        S    17:05   0:00 [nfsd]
root       1702  0.0  0.0      0     0 ?        S    17:05   0:00 [nfsd]
root       1703  0.0  0.0      0     0 ?        S    17:05   0:00 [nfsd]
root       1704  0.0  0.0      0     0 ?        S    17:05   0:00 [nfsd]
root       1705  0.0  0.0      0     0 ?        S    17:05   0:00 [nfsd]
root       1706  0.0  0.0      0     0 ?        S    17:05   0:00 [nfsd]
root       1745  0.0  0.0 112824   976 pts/0    R+   17:06   0:00 grep --color=auto nfs

nfs is installed and deployed successfully.

2. Mount the web data onto the containers and use probes to check container state

Using probes requires changes to nginx's configuration. I use a readiness probe (readinessProbe) and a liveness probe (livenessProbe), so location blocks for both must be added to the nginx configuration: edit nginx.conf on the nfs server, then mount it into the containers.

So two files get mounted here.

2.1 Create the web page data file
2.1.1 Create the shared web page data file on the NFS server
[root@nfs ~]# mkdir /web
[root@nfs ~]# cd /web
[root@nfs web]# vim index.html
<p>welcome!</p>
<h1>name:zhouxin</h1>
<h1>Hunan Agricultural University</h1>
<h1>age: 20</h1>
2.2 Create the nginx.conf configuration file
2.2.1 Install nginx on the NFS server with the earlier one-step compile script to obtain nginx.conf
[root@nfs nginx]# vim onekey_install_nginx.sh 
#!/bin/bash
 
# packages needed to satisfy the build dependencies
 
yum -y install zlib zlib-devel openssl openssl-devel pcre pcre-devel gcc gcc-c++ autoconf automake make psmisc net-tools lsof vim wget
 
# download nginx
 
mkdir  /nginx
 
cd /nginx
 
curl -O  http://nginx.org/download/nginx-1.21.1.tar.gz
 
# unpack
 
tar xf nginx-1.21.1.tar.gz
 
# enter the unpacked directory
 
cd nginx-1.21.1
 
# pre-build configuration
 
./configure --prefix=/usr/local/nginx1  --with-http_ssl_module   --with-threads  --with-http_v2_module  --with-http_stub_status_module  --with-stream
# compile
make -j 2
# compile and install
make  install
[root@nfs nginx]# bash onekey_install_nginx.sh  # run the script
2.2.2 Edit nginx.conf to add the readiness and liveness probe location blocks
[root@nfs ~]# cd /usr/local
[root@nfs local]# ls
bin  etc  games  include  lib  lib64  libexec  nginx1  sbin  share  src
[root@nfs local]# cd nginx1
[root@nfs nginx1]# ls
conf  html  logs  sbin
[root@nfs nginx1]# cd conf
[root@nfs conf]# ls
fastcgi.conf          fastcgi_params          koi-utf  mime.types          nginx.conf          scgi_params          uwsgi_params          win-utf
fastcgi.conf.default  fastcgi_params.default  koi-win  mime.types.default  nginx.conf.default  scgi_params.default  uwsgi_params.default
[root@nfs conf]# vim nginx.conf

Add these inside the server block under http

location /healthz {
  access_log off;
  return 200 'ok';
}

location /isalive {
  access_log off;
  return 200 'ok';
}

For example:

[screenshot: nginx.conf with the two probe location blocks in place]
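
For orientation, a minimal sketch of where the two blocks sit, assuming the stock nginx.conf layout (only the probe locations are additions; everything else is the default):

http {
    # ... other http-level settings unchanged
    server {
        listen       80;
        server_name  localhost;

        location / {
            root   html;
            index  index.html index.htm;
        }

        # probe endpoints for Kubernetes
        location /healthz {
            access_log off;
            return 200 'ok';
        }

        location /isalive {
            access_log off;
            return 200 'ok';
        }
        # ... rest of the default server block unchanged
    }
}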

2.3 Edit /etc/exports and make it take effect
[root@nfs web]# vim /etc/exports
/web 192.168.107.0/24 (rw,sync,all_squash)
/usr/local/nginx1/conf 192.168.107.0/24 (rw,sync,all_squash)

/web is the shared directory's path; use an absolute path
192.168.107.0/24 is the client network allowed access
(rw,all_squash,sync) are the permission options
      rw   readable and writable (read and write)
      ro   read-only
      all_squash   treat any user coming from any client as an unprivileged user
      root_squash  when an NFS client connects as root, map it to the NFS server's anonymous user
      no_root_squash   when an NFS client connects as root, map it to the NFS server's root
      sync   write data to memory and disk at the same time, so no data is lost
      async  keep data in memory first and flush to disk later; faster, but data can be lost

Note the space between the subnet and the options above: exportfs then attaches the options to "*" rather than the subnet, which is exactly what the warnings below complain about; the usual form is 192.168.107.0/24(rw,sync,all_squash).

Make /etc/exports take effect

[root@nfs web]#  exportfs -av
exportfs: No options for /web 192.168.107.0/24: suggest 192.168.107.0/24(sync) to avoid warning
exportfs: No host name given with /web (rw,sync,all_squash), suggest *(rw,sync,all_squash) to avoid warning
exportfs: No options for /usr/local/nginx1/conf 192.168.107.0/24: suggest 192.168.107.0/24(sync) to avoid warning
exportfs: No host name given with /usr/local/nginx1/conf (rw,sync,all_squash), suggest *(rw,sync,all_squash) to avoid warning
exporting 192.168.107.0/24:/usr/local/nginx1/conf
exporting 192.168.107.0/24:/web
exporting *:/usr/local/nginx1/conf
exporting *:/web

Set ownership on the shared directories

[root@nfs web]# chown nobody:nobody /web
[root@nfs web]# ll -d /web
drwxr-xr-x 2 nobody nobody 24 9月   2 17:08 /web
[root@nfs web]# chown nobody:nobody /usr/local/nginx1/conf
[root@nfs web]# ll -d /usr/local/nginx1/conf
drwxr-xr-x 2 nobody nobody 333 9月   2 18:25 /usr/local/nginx1/conf
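
Before wiring up pv and pvc, the exports can be verified from any k8s node (a verification sketch; nfs-utils is already on the nodes thanks to the playbook):

[root@node1 ~]# showmount -e 192.168.107.15    # should list /web and /usr/local/nginx1/conf
[root@node1 ~]# mount -t nfs 192.168.107.15:/web /mnt
[root@node1 ~]# cat /mnt/index.html            # should print the custom page
[root@node1 ~]# umount /mnt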
2.4 Mount the web page data file
2.4.1 Create a pv on the master
[root@master pod]# mkdir /pod
[root@master pod]# cd /pod
[root@master pod]# vim pv_nfs.yaml 
apiVersion: v1
kind: PersistentVolume   # resource type
metadata:
  name: zhou-nginx-pv   # name of the pv being created
  labels:
    type: zhou-nginx-pv
spec:
  capacity:
    storage: 5Gi 
  accessModes:
    - ReadWriteMany     # access mode: many clients, read-write
  persistentVolumeReclaimPolicy: Recycle    # reclaim policy: recyclable
  storageClassName: nfs      # storage class name; the pvc created later must use the same value
  nfs:
    path: "/web"        # path of the nfs shared directory
    server: 192.168.107.15  # nfs server's ip
    readOnly: false      # not read-only

Apply the pv yaml

[root@master pod]# kubectl apply -f pv_nfs.yaml
persistentvolume/zhou-nginx-pv created
[root@master pod]# kubectl get pv  # check
NAME            CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
zhou-nginx-pv   5Gi        RWX            Recycle          Available           nfs                     17s
2.4.2 Create a pvc on the master to consume the pv
[root@master pod]# vim pvc_nfs.yaml 
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: zhou-nginx-pvc
spec:
  accessModes:
  - ReadWriteMany      
  resources:
     requests:
       storage: 1Gi
  storageClassName: nfs  # note: must be the same as the pv's above

Apply and check

[root@master pod]# kubectl apply -f pvc_nfs.yaml
persistentvolumeclaim/zhou-nginx-pvc created
[root@master pod]# kubectl get pvc # check
NAME             STATUS   VOLUME          CAPACITY   ACCESS MODES   STORAGECLASS   AGE
zhou-nginx-pvc   Bound    zhou-nginx-pv   5Gi        RWX            nfs            8s
2.5 Mount the nginx.conf configuration file

This could also be done with a configmap

Reference: https://mp.csdn.net/mp_blog/creation/editor/129893723

2.5.1 Create a pv on the master
[root@master pod]# vim pv_nginx.yaml 
apiVersion: v1
kind: PersistentVolume   # resource type
metadata:
  name: zhou-nginx-conf-pv   # name of the pv being created
  labels:
    type: zhou-nginx-conf-pv
spec:
  capacity:
    storage: 5Gi 
  accessModes:
    - ReadWriteMany     # access mode: many clients, read-write
  persistentVolumeReclaimPolicy: Recycle    # reclaim policy: recyclable
  storageClassName: nginx-conf      # storage class name; the pvc created later must use the same value
  nfs:
    path: "/usr/local/nginx1/conf"        # path of the nfs shared directory
    server: 192.168.107.15  # nfs server's ip
    readOnly: false      # not read-only

Apply and check

[root@master pod]# kubectl apply -f pv_nginx.yaml 
persistentvolume/zhou-nginx-conf-pv created
[root@master pod]# kubectl get pv
NAME                 CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM                    STORAGECLASS   REASON   AGE
zhou-nginx-conf-pv   5Gi        RWX            Recycle          Available                            nginx-conf              8s
zhou-nginx-pv        5Gi        RWX            Recycle          Bound       default/zhou-nginx-pvc   nfs                     81m
2.5.2 Create a pvc on the master to consume the pv
[root@master pod]# vim pvc_nginx.yaml 
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: zhou-nginx-conf-pvc
spec:
  accessModes:
  - ReadWriteMany      
  resources:
     requests:
       storage: 1Gi
  storageClassName: nginx-conf  # note: must be the same as the pv's above

Apply and check

[root@master pod]# kubectl apply -f pvc_nginx.yaml 
persistentvolumeclaim/zhou-nginx-conf-pvc created
[root@master pod]# kubectl get pvc
NAME                  STATUS   VOLUME               CAPACITY   ACCESS MODES   STORAGECLASS   AGE
zhou-nginx-conf-pvc   Bound    zhou-nginx-conf-pv   5Gi        RWX            nginx-conf     3s
zhou-nginx-pvc        Bound    zhou-nginx-pv        5Gi        RWX            nfs            113m

Both show Bound status, so this succeeded

2.6 Create pods on the master that use the pvcs
[root@master pod]# vim pv_pod.yaml 
apiVersion: apps/v1
kind: Deployment   # created through a deployment replica controller
metadata:
  name: nginx-deployment      # name of the deployment
  labels:
    app: zhou-nginx
spec:
  replicas: 10    # create 10 replicas
  selector:
    matchLabels:
      app: zhou-nginx
  template:      # pod replicas (instances) are created from this template
    metadata:
      labels:
        app: zhou-nginx
    spec:
     volumes:
     - name: zhou-pv-storage-nfs
       persistentVolumeClaim:
          claimName: zhou-nginx-pvc   # use the pvc created earlier
     - name: zhou-pv-storage-conf-nfs
       persistentVolumeClaim:
          claimName: zhou-nginx-conf-pvc   # use the pvc created earlier
     containers:
     - name: zhou-pv-container-nfs     # container name
       image: zhouxin03/nginx:latest       # the custom image built earlier
       ports:
        - containerPort: 80       # port the application listens on inside the container
          name: "http-server"
       volumeMounts:     # one list with both mounts; a repeated volumeMounts key would silently drop the first mount
        - mountPath: "/usr/local/nginx1/html"     # html path of the compiled nginx inside the container
          name: zhou-pv-storage-nfs
        - mountPath: "/usr/local/nginx1/conf"     # conf path of the compiled nginx inside the container
          name: zhou-pv-storage-conf-nfs
       readinessProbe:    # readiness probe settings
            httpGet:       # use the httpGet check
              path: /healthz   # path defined in nginx.conf
              port: 80
            initialDelaySeconds: 10
            periodSeconds: 5
       livenessProbe:     # liveness probe settings
            httpGet:
              path: /isalive    # path defined in nginx.conf
              port: 80
            initialDelaySeconds: 15
            periodSeconds: 10

Apply and check

[root@master pod]#kubectl apply -f pv_pod.yaml
[root@master pod]# kubectl get deployment
NAME               READY   UP-TO-DATE   AVAILABLE   AGE
nginx-deployment   20/20   20           20          2m18s
[root@master pod]# kubectl get pod -o wide
NAME                                READY   STATUS    RESTARTS   AGE     IP            NODE    NOMINATED NODE   READINESS GATES
nginx-deployment-79878f849f-5gzfl   1/1     Running   0          2m46s   10.244.1.13   node1   <none>           <none>
nginx-deployment-79878f849f-6nrrf   1/1     Running   0          2m46s   10.244.2.9    node2   <none>           <none>
nginx-deployment-79878f849f-6pl8g   1/1     Running   0          2m46s   10.244.1.6    node1   <none>           <none>
nginx-deployment-79878f849f-82g94   1/1     Running   0          2m46s   10.244.1.14   node1   <none>           <none>
nginx-deployment-79878f849f-8zssk   1/1     Running   0          2m46s   10.244.1.15   node1   <none>           <none>
nginx-deployment-79878f849f-9n8ql   1/1     Running   0          2m46s   10.244.2.4    node2   <none>           <none>
nginx-deployment-79878f849f-bwp9s   1/1     Running   0          2m46s   10.244.1.10   node1   <none>           <none>
nginx-deployment-79878f849f-ct5k4   1/1     Running   0          2m46s   10.244.2.8    node2   <none>           <none>
nginx-deployment-79878f849f-hdj5f   1/1     Running   0          2m46s   10.244.1.7    node1   <none>           <none>
nginx-deployment-79878f849f-hhw4c   1/1     Running   0          2m46s   10.244.1.8    node1   <none>           <none>

It can take a while for all pods to reach Running with READY 1/1, which means they started successfully

If a pod is not Running or READY stays 0/1, something went wrong; debug it with kubectl describe pod <pod-name>

Test access

[root@master pod]# curl 10.244.1.13
<p>welcome!</p>
<h1>name:zhouxin</h1>
<h1>Hunan Agricultural University</h1>
<h1>age: 20</h1>

Check that the nginx.conf configuration file was mounted successfully

[root@master pod]# kubectl exec -it nginx-deployment-79878f849f-r4zsq -- bash
[root@nginx-deployment-79878f849f-r4zsq nginx]# cd /usr/local/nginx1/conf
[root@nginx-deployment-79878f849f-r4zsq conf]# ls
fastcgi.conf          fastcgi_params          koi-utf  mime.types          nginx.conf          scgi_params          uwsgi_params          win-utf
fastcgi.conf.default  fastcgi_params.default  koi-win  mime.types.default  nginx.conf.default  scgi_params.default  uwsgi_params.default
[root@nginx-deployment-79878f849f-r4zsq conf]# vim nginx.conf

[screenshot: nginx.conf inside the container contains the two probe location blocks]

The configuration file contains both entries, so the mount succeeded.

2.7 Create a service to publish the deployment
[root@master pod]# vim my_service.yaml 
apiVersion: v1
kind: Service
metadata:
  name: my-nginx-nfs   # service name; used later when configuring ingress
  labels:
    run: my-nginx-nfs
spec:
  type: NodePort
  ports:
  - port: 8070
    targetPort: 80
    protocol: TCP
    name: http
  selector:
    app: zhou-nginx   # note: select on the app label to match pv_pod.yaml; some examples use run instead, don't mix them up

Apply and check

[root@master pod]# kubectl apply -f my_service.yaml
service/my-nginx-nfs created
[root@master pod]# kubectl get service
NAME           TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)          AGE
kubernetes     ClusterIP   10.1.0.1      <none>        443/TCP          46h
my-nginx-nfs   NodePort    10.1.32.204   <none>        8070:32621/TCP   9s
# 32621 is the port exposed on the hosts; browse to that port on a host to verify
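
Before touching the firewall, the service can be exercised from inside the LAN; a quick check using the ClusterIP and NodePort from the listing above (these values come from this run and will differ on a redeploy):

[root@master pod]# curl 10.1.32.204:8070               # via the service's ClusterIP
[root@master pod]# curl http://192.168.107.11:32621/   # via the NodePort on any cluster host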
2.8 Configure a DNAT policy on the firewalld server to publish the web service
[root@fiewalld ~]# vim snat_dnat.sh
#!/bin/bash
iptables -F
iptables -t nat -F

#enable route: turn on IP forwarding
echo 1 >/proc/sys/net/ipv4/ip_forward
 
#enable snat: let hosts on the 192.168.107.0/24 network reach the Internet through the WAN port
iptables -t nat -A POSTROUTING -s 192.168.107.0/24 -o ens33 -j SNAT --to-source  192.168.31.69

#add the dnat policy below
#enable dnat: let outside users reach the internal data
#note: PREROUTING rules are first-match, so as written all port-80 traffic lands on .11; the two rules below never match
iptables -t nat -A PREROUTING -i ens33 -d 192.168.31.69 -p tcp --dport 80 -j DNAT --to-destination 192.168.107.11
iptables -t nat -A PREROUTING -i ens33 -d 192.168.31.69 -p tcp --dport 80 -j DNAT --to-destination 192.168.107.12
iptables -t nat -A PREROUTING -i ens33 -d 192.168.31.69 -p tcp --dport 80 -j DNAT --to-destination 192.168.107.13
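
As the comment in the script notes, first-match semantics send all port-80 traffic to .11. If connections should be spread across the three nodes at the firewall itself, the iptables statistic match can round-robin them; a sketch, assuming the xt_statistic module is available (the nth counters take every 3rd packet, then every 2nd of the remainder, then the rest):

iptables -t nat -A PREROUTING -i ens33 -d 192.168.31.69 -p tcp --dport 80 -m statistic --mode nth --every 3 --packet 0 -j DNAT --to-destination 192.168.107.11
iptables -t nat -A PREROUTING -i ens33 -d 192.168.31.69 -p tcp --dport 80 -m statistic --mode nth --every 2 --packet 0 -j DNAT --to-destination 192.168.107.12
iptables -t nat -A PREROUTING -i ens33 -d 192.168.31.69 -p tcp --dport 80 -j DNAT --to-destination 192.168.107.13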

Check that the configured firewall rules are in effect

[screenshot: iptables -t nat -L -n lists the SNAT and DNAT rules]

The rules are in effect.

2.9 Test access

Browsing to port 32621 on any of the 3 k8s servers shows the custom page from the nfs server

[screenshot: the custom web page rendered in a browser]

V. Use HPA to autoscale pods horizontally on CPU utilization, with a minimum of 10 and a maximum of 20 pods

1. Install the metrics service

HPA gets its metric data from the metrics service, so it must be installed first

Metrics Server collects resource metrics from the kubelets and exposes them in the Kubernetes apiserver through the Metrics API, for use by the Horizontal Pod Autoscaler (HPA) and the Vertical Pod Autoscaler (VPA): CPU, file descriptors, memory, request latency, and so on. metrics-server feeds consumers inside the k8s cluster such as kubectl, hpa, and the scheduler, and the metrics API can also be queried with kubectl top, which makes debugging autoscaling pipelines easier

[root@master ~]# vim metrics.yaml 
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: metrics-server
    rbac.authorization.k8s.io/aggregate-to-admin: "true"
    rbac.authorization.k8s.io/aggregate-to-edit: "true"
    rbac.authorization.k8s.io/aggregate-to-view: "true"
  name: system:aggregated-metrics-reader
rules:
- apiGroups:
  - metrics.k8s.io
  resources:
  - pods
  - nodes
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: metrics-server
  name: system:metrics-server
rules:
- apiGroups:
  - ""
  resources:
  - pods
  - nodes
  - nodes/stats
  - namespaces
  - configmaps
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server-auth-reader
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: extension-apiserver-authentication-reader
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server:system:auth-delegator
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: system:metrics-server
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:metrics-server
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: v1
kind: Service
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  ports:
  - name: https
    port: 443
    protocol: TCP
    targetPort: https
  selector:
    k8s-app: metrics-server
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  selector:
    matchLabels:
      k8s-app: metrics-server
  strategy:
    rollingUpdate:
      maxUnavailable: 0
  template:
    metadata:
      labels:
        k8s-app: metrics-server
    spec:
      containers:
      - args:
        - --cert-dir=/tmp
        - --secure-port=4443
        - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
        - --kubelet-use-node-status-port
        - --metric-resolution=15s
        - --kubelet-insecure-tls
        image: registry.cn-shenzhen.aliyuncs.com/zengfengjin/metrics-server:v0.5.0
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /livez
            port: https
            scheme: HTTPS
          periodSeconds: 10
        name: metrics-server
        ports:
        - containerPort: 4443
          name: https
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /readyz
            port: https
            scheme: HTTPS
          initialDelaySeconds: 20
          periodSeconds: 10
        resources:
          requests:
            cpu: 100m
            memory: 200Mi
        securityContext:
          readOnlyRootFilesystem: true
          runAsNonRoot: true
          runAsUser: 1000
        volumeMounts:
        - mountPath: /tmp
          name: tmp-dir
      nodeSelector:
        kubernetes.io/os: linux
      priorityClassName: system-cluster-critical
      serviceAccountName: metrics-server
      volumes:
      - emptyDir: {}
        name: tmp-dir
---
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  labels:
    k8s-app: metrics-server
  name: v1beta1.metrics.k8s.io
spec:
  group: metrics.k8s.io
  groupPriorityMinimum: 100
  insecureSkipTLSVerify: true
  service:
    name: metrics-server
    namespace: kube-system
  version: v1beta1
  versionPriority: 100

[screenshot: the metrics-server pod Running in kube-system]

metrics is installed successfully

Check node status

[root@master ~]# kubectl top nodes
NAME     CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%   
master   115m         5%     1101Mi          29%       
node1    61m          3%     766Mi           20%       
node2    59m          2%     740Mi           20%      

Check pod resource consumption

[root@master pod]# kubectl top pods
NAME                                CPU(cores)   MEMORY(bytes)   
nginx-deployment-6fd9b4f959-754lc   1m           1Mi             
nginx-deployment-6fd9b4f959-94p97   1m           1Mi             
nginx-deployment-6fd9b4f959-d66t7   1m           1Mi             
nginx-deployment-6fd9b4f959-hcffl   1m           1Mi             
nginx-deployment-6fd9b4f959-hjbfb   1m           1Mi             
nginx-deployment-6fd9b4f959-k2hvs   1m           1Mi             
nginx-deployment-6fd9b4f959-mgb6m   1m           1Mi             
nginx-deployment-6fd9b4f959-nb4sd   1m           1Mi             
nginx-deployment-6fd9b4f959-rcfnj   1m           1Mi             
nginx-deployment-6fd9b4f959-tv7t4   1m           1Mi      

This command depends on data from metrics-server; without it installed, it fails with error: Metrics API not available

2. Configure HPA to autoscale pods horizontally when CPU utilization crosses the target (30% in the manifest below), min 10, max 20 pods

2.1 Add resource requests to the original deployment yaml

HPA needs resource requests configured in the Deployment YAML. The earlier deployment had none, so first delete the pods it created

[root@master ~]# cd /pod
[root@master pod]# ls
my_service.yaml  pvc_nfs.yaml  pvc_nginx.yaml  pv_nfs.yaml  pv_nginx.yaml  pv_pod.yaml
[root@master pod]# kubectl delete -f pv_pod.yaml 
deployment.apps "nginx-deployment" deleted

Edit the pv_pod.yaml configuration file and add the resource requests

[root@master pod]# vim pv_pod.yaml 
apiVersion: apps/v1
kind: Deployment   # created through a deployment replica controller
metadata:
  name: nginx-deployment      # name of the deployment
  labels:
    app: zhou-nginx
spec:
  replicas: 10    # create 10 replicas
  selector:
    matchLabels:
      app: zhou-nginx
  template:      # pod replicas (instances) are created from this template
    metadata:
      labels:
        app: zhou-nginx
    spec:
     volumes:
     - name: zhou-pv-storage-nfs
       persistentVolumeClaim:
          claimName: zhou-nginx-pvc   # use the pvc created earlier
     - name: zhou-pv-storage-conf-nfs
       persistentVolumeClaim:
          claimName: zhou-nginx-conf-pvc   # use the pvc created earlier
     containers:
     - name: zhou-pv-container-nfs     # container name
       image: zhouxin03/nginx:latest       # the custom image built earlier
       ports:
        - containerPort: 80       # port the application listens on inside the container
          name: "http-server"
       volumeMounts:     # one list with both mounts; a repeated volumeMounts key would silently drop the first mount
        - mountPath: "/usr/local/nginx1/html"     # html path of the compiled nginx inside the container
          name: zhou-pv-storage-nfs
        - mountPath: "/usr/local/nginx1/conf"     # conf path of the compiled nginx inside the container
          name: zhou-pv-storage-conf-nfs
       readinessProbe:    # readiness probe settings
            httpGet:       # use the httpGet check
              path: /healthz   # path defined in nginx.conf
              port: 80
            initialDelaySeconds: 10
            periodSeconds: 5
       livenessProbe:     # liveness probe settings
            httpGet:
              path: /isalive    # path defined in nginx.conf
              port: 80
            initialDelaySeconds: 15
            periodSeconds: 10
       ############################# add the block below ##############################
       resources:
          requests:
            cpu: 300m    # CPU request of 300m
          limits:
            cpu: 500m    # CPU limit of 500m

Apply and check

[root@master pod]# kubectl apply -f pv_pod.yaml 
deployment.apps/nginx-deployment created
[root@master pod]# kubectl get pod
NAME                                READY   STATUS    RESTARTS   AGE
nginx-deployment-6fd9b4f959-754lc   1/1     Running   0          36s
nginx-deployment-6fd9b4f959-94p97   1/1     Running   0          36s
nginx-deployment-6fd9b4f959-d66t7   1/1     Running   0          36s
nginx-deployment-6fd9b4f959-hcffl   1/1     Running   0          36s
nginx-deployment-6fd9b4f959-hjbfb   1/1     Running   0          36s
nginx-deployment-6fd9b4f959-k2hvs   1/1     Running   0          36s
nginx-deployment-6fd9b4f959-mgb6m   1/1     Running   0          36s
nginx-deployment-6fd9b4f959-nb4sd   1/1     Running   0          36s
nginx-deployment-6fd9b4f959-rcfnj   1/1     Running   0          36s
nginx-deployment-6fd9b4f959-tv7t4   1/1     Running   0          36s
2.2 Create the hpa
[root@master ~]# vim hpa.yaml 
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: my-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: nginx-deployment   # name of the deployment above
  minReplicas: 10    # at least 10
  maxReplicas: 20    # at most 20
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 30   # scale when average CPU utilization exceeds 30%
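
For comparison, the same autoscaler can be created without a manifest using kubectl's imperative form (it creates an hpa named after the deployment; shown only as an alternative):

[root@master ~]# kubectl autoscale deployment nginx-deployment --cpu-percent=30 --min=10 --max=20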

Apply and check

[root@master ~]# kubectl apply -f hpa.yaml 

[root@master ~]# kubectl get hpa
NAME     REFERENCE                     TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
my-hpa   Deployment/nginx-deployment   0%/30%    10        20        10         48s

It can take a moment before TARGETS shows 0%/30%

3. Load-test the cluster

3.1 Install ab on another machine
[root@ansible pod]# yum install httpd-tools -y
3.2 Run an ab load test against the cluster

# 1000 concurrent connections, 100000000 requests

[root@ansible ~]# ab -c 1000 -n 100000000 http://192.168.107.11:32621/
This is ApacheBench, Version 2.3 <$Revision: 1430300 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking 127.0.0.1 (be patient)


4. Watch the hpa and observe the changes

[root@master pod]# kubectl get hpa
NAME     REFERENCE                     TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
my-hpa   Deployment/nginx-deployment   46%/30%   10        20        17         3m4s

hpa TARGETS reached 46%, above the 30% target, so the deployment scaled out: the pod count grew automatically to 17

5. Observe cluster performance

Check the throughput

Across several test runs, the peak throughput was around 4480 requests per second

[screenshot: ab output showing Requests per second around 4480]

6. Tune the web cluster

Tuning can be done through kernel parameters or settings in the nginx configuration file

Here the ulimit command is used

[root@master ~]# ulimit -n 10000
# raise the open-file limit to allow more concurrent connections

[screenshot: throughput after raising the limit]
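
ulimit changes only the current shell session. To make the larger file-descriptor limit survive new logins it can be written to /etc/security/limits.conf; a sketch (the 65535 value is an example, not from the original run):

[root@master ~]# cat >> /etc/security/limits.conf <<EOF
*  soft  nofile  65535
*  hard  nofile  65535
EOF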

VI. Use an ingress object with an ingress-controller to load balance the web service

1. Deploy the ingress environment with ansible

1.1 Copy the configuration files needed for the ingress controller to the ansible server

[screenshot: the ingress controller files on the ansible server]

1.2 Write a script to pull the ingress images

Deploying straight from the deploy.yaml on github works

If the images fail to pull because of network problems, the hub.docker images below can be used

Reference blog: ingress-nginx-controller deployment and tuning - 小兔幾白又白 - 博客園 (cnblogs.com)

[root@ansible ~]# vim ingress_images.sh
docker pull koala2020/ingress-nginx-controller:v1
docker pull koala2020/ingress-nginx-kube-webhook-certgen:v1
1.3 Write a playbook to install and deploy the ingress controller

Update the host inventory. ingress-controller-deployment.yaml only needs to go to the master, while the ingress images must be pulled on every k8s server

[root@ansible etc]# vim /etc/ansible/hosts
[nfs]
192.168.107.15
[web]
192.168.107.11
192.168.107.12
192.168.107.13
[master]   # added
192.168.107.11

Write the playbook

[root@ansible ansible]# vim ingress_install.yaml 
- hosts: web
  remote_user: root
  tasks:
  - name: install ingress controller
    script: /root/ingress_images.sh
- hosts: master
  remote_user: root
  tasks:
  - name: copy ingress controller deployment file
    copy: src=/root/ingress-controller-deploy.yaml dest=/root/

Check the yaml file syntax

[root@ansible ansible]# ansible-playbook --syntax-check /etc/ansible/ingress_install.yaml

playbook: /etc/ansible/ingress_install.yaml

Run the playbook:

[root@ansible ansible]# ansible-playbook  ingress_install.yaml
1.4 Verify the result

The images were pulled successfully on every node, and the deployment file was copied to the master.

2. Apply ingress-controller-deploy.yaml to start the ingress controller

On the master:

[root@master ~]# kubectl apply -f ingress-controller-deploy.yaml

Check the namespaces; an ingress-nginx namespace has been created for the controller.

Check the ingress controller's services:

[root@k8smaster 4-4]# kubectl get svc -n ingress-nginx
NAME                                 TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)                      AGE
ingress-nginx-controller             NodePort    10.99.160.10   <none>        80:30092/TCP,443:30263/TCP   91s
ingress-nginx-controller-admission   ClusterIP   10.99.138.23   <none>        443/TCP                      91s

Check the ingress controller's pods:

[root@master ~]# kubectl get pod -n ingress-nginx
NAME                                        READY   STATUS      RESTARTS   AGE
ingress-nginx-admission-create-fbz67        0/1     Completed   0          110s
ingress-nginx-admission-patch-4fsjz         0/1     Completed   1          110s
ingress-nginx-controller-7cd558c647-dgfbd   1/1     Running     0          110s
ingress-nginx-controller-7cd558c647-g9vvt   1/1     Running     0          110s

3. Create the Ingress that links the ingress controller to the services

3.1 Write the Ingress yaml file
[root@master ~]# vim zhou_ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: zhou-ingress       # name of the ingress
  annotations:
    kubernetes.io/ingress.class: nginx  # annotation tying this ingress to the ingress controller
spec:
  ingressClassName: nginx  # associate with the ingress controller
  rules:
  - host: www.zhou.com     # load-balance by host name
    http:
      paths:
      - pathType: Prefix
        path: /
        backend:
          service:
            name: my-nginx-nfs  # the service published earlier
            port:
              number: 80
  - host: www.xin.com
    http:
      paths:
      - pathType: Prefix
        path: /
        backend:
          service:
            name: my-nginx-nfs2  # this service will be created later
            port:
              number: 80
3.2 Apply the file
[root@master ~]# kubectl apply -f zhou_ingress.yaml 
ingress.networking.k8s.io/zhou-ingress created
3.3 Check the result
[root@master ~]# kubectl get ingress
NAME           CLASS   HOSTS                      ADDRESS                         PORTS   AGE
zhou-ingress   nginx   www.zhou.com,www.xin.com   192.168.107.12,192.168.107.13   80      85s

It can take a few minutes for the IP addresses to appear in the ADDRESS column.

3.4 Check that the nginx.conf inside the ingress controller contains the rules from the Ingress
[root@master ~]# kubectl get pod -n ingress-nginx
NAME                                        READY   STATUS      RESTARTS   AGE
ingress-nginx-admission-create-fbz67        0/1     Completed   0          12m
ingress-nginx-admission-patch-4fsjz         0/1     Completed   1          12m
ingress-nginx-controller-7cd558c647-dgfbd   1/1     Running     0          12m
ingress-nginx-controller-7cd558c647-g9vvt   1/1     Running     0          12m
[root@master ~]# kubectl exec -n ingress-nginx -it ingress-nginx-controller-7cd558c647-dgfbd -- bash
bash-5.1$ cat nginx.conf|grep zhou.com
	## start server www.zhou.com
		server_name www.zhou.com ;
	## end server www.zhou.com
bash-5.1$ cat nginx.conf|grep xin.com
	## start server www.xin.com
		server_name www.xin.com ;
	## end server www.xin.com
bash-5.1$ cat nginx.conf|grep -C3 upstream_balancer
	error_log  /var/log/nginx/error.log notice;
	
	upstream upstream_balancer {
		server 0.0.0.1:1234; # placeholder
		
		balancer_by_lua_block {

4. Test access

4.1 Get the host ports exposed by the ingress controller's service

Accessing a node on these ports verifies that the ingress controller load-balances correctly.

[root@master ~]# kubectl get svc -n ingress-nginx
NAME                                 TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)                      AGE
ingress-nginx-controller             NodePort    10.1.58.218   <none>        80:30289/TCP,443:32195/TCP   19m
ingress-nginx-controller-admission   ClusterIP   10.1.241.17   <none>        443/TCP                      19m
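
Before touching DNS, the rules can also be exercised directly against a node with an explicit Host header; the port 30289 below is taken from the service output above:

# hits the www.zhou.com rule through the ingress controller's NodePort
curl -H "Host: www.zhou.com" http://192.168.107.12:30289/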
4.2 Access by domain name from another host or from a Windows machine

Here the test is run from the ansible server.

4.2.1 Edit the hosts file
[root@ansible ansible]# vim /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.107.12 www.zhou.com
192.168.107.13 www.xin.com

因?yàn)槲覀兪腔谟蛎龅呢?fù)載均衡的配置,所有必須要在瀏覽器里使用域名去訪問(wèn),不能使用ip地址
同時(shí)ingress controller做負(fù)載均衡的時(shí)候是基于http協(xié)議的,7層負(fù)載均衡

4.2.2 Test access
[root@ansible ansible]# curl  www.zhou.com
<p>welcome!</p>
<h1>name:zhouxin</h1>
<h1>Hunan Agricultural University</h1>
<h1>age: 20</h1>

[root@ansible ansible]# curl  www.xin.com
<html>
<head><title>503 Service Temporarily Unavailable</title></head>
<body>
<center><h1>503 Service Temporarily Unavailable</h1></center>
<hr><center>nginx</center>
</body>
</html>
[root@ansible ansible]# 

www.zhou.com is served correctly, while www.xin.com returns a 503 error, because only the first service has been published; my-nginx-nfs2 does not exist yet.

5. Start the second service and its pods

[root@master ~]# vim zhou_nginx_svc.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: zhou-nginx-deploy
  labels:
    app: zhou-nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: zhou-nginx
  template:
    metadata:
      labels:
        app: zhou-nginx
    spec:
      containers:
      - name: zhou-nginx
        image: nginx
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name:  my-nginx-nfs2  # must match the name used in zhou_ingress.yaml
  labels:
    app: my-nginx-nfs2
spec:
  selector:
    app: zhou-nginx
  ports:
  - name: name-of-service-port
    protocol: TCP
    port: 80

Apply and check:

[root@master ~]# kubectl apply -f zhou_nginx_svc.yaml 
deployment.apps/zhou-nginx-deploy created
service/my-nginx-nfs2 created
[root@master ~]# kubectl get svc
NAME            TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
kubernetes      ClusterIP   10.1.0.1       <none>        443/TCP          2d1h
my-nginx-nfs    NodePort    10.1.32.204    <none>        8070:32621/TCP   173m
my-nginx-nfs2   ClusterIP   10.1.202.196   <none>        80/TCP           43s
[root@master ~]# kubectl get svc -n ingress-nginx
NAME                                 TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)                      AGE
ingress-nginx-controller             NodePort    10.1.58.218   <none>        80:30289/TCP,443:32195/TCP   33m
ingress-nginx-controller-admission   ClusterIP   10.1.241.17   <none>        443/TCP                      33m
[root@master ~]# kubectl get ingress
NAME           CLASS   HOSTS                      ADDRESS                         PORTS   AGE
zhou-ingress   nginx   www.zhou.com,www.xin.com   192.168.107.12,192.168.107.13   80      23m

6. Test again to check whether www.xin.com is now reachable

[root@ansible ansible]# curl  www.zhou.com
<p>welcome!</p>
<h1>name:zhouxin</h1>
<h1>Hunan Agricultural University</h1>
<h1>age: 20</h1>

[root@ansible ansible]# curl  www.xin.com
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

This time the request succeeds: the ingress load balancing is fully working.

VII. Deploy Prometheus in the k8s cluster to monitor the web service, with Grafana for dashboards

Reference: https://blog.csdn.net/rzy1248873545/article/details/125758153

To monitor node resources, run a node_exporter on each node. node_exporter is the Linux host collector; once deployed it exposes the node's CPU, memory, network I/O and similar metrics.

To monitor containers, k8s ships the cAdvisor collector built in; pod and container metrics are available without deploying anything extra, you only need to know how to scrape cAdvisor.

To monitor k8s resource objects, deploy the kube-state-metrics service, which periodically pulls object state from the API server and exposes it for Prometheus. Alerts go out through Alertmanager to the configured receivers, and Grafana visualizes everything.

1. Set up Prometheus to monitor the k8s cluster

1.1 Deploy node-exporter as a DaemonSet
[root@master /]# mkdir /prometheus
[root@master /]# cd /prometheus
[root@master prometheus]# vim node_exporter.yaml 
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-exporter
  namespace: kube-system
  labels:
    k8s-app: node-exporter
spec:
  selector:
    matchLabels:
      k8s-app: node-exporter
  template:
    metadata:
      labels:
        k8s-app: node-exporter
    spec:
      containers:
      - image: prom/node-exporter
        name: node-exporter
        ports:
        - containerPort: 9100
          protocol: TCP
          name: http
---
apiVersion: v1
kind: Service
metadata:
  labels:
    k8s-app: node-exporter
  name: node-exporter
  namespace: kube-system
spec:
  ports:
  - name: http
    port: 9100
    nodePort: 31672
    protocol: TCP
  type: NodePort
  selector:
    k8s-app: node-exporter

Apply it:

[root@master prometheus]# kubectl apply -f node_exporter.yaml
daemonset.apps/node-exporter created
service/node-exporter created
1.2 Deploy Prometheus
[root@master prometheus]# vim prometheus_rbac.yaml 
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: prometheus
rules:
- apiGroups: [""]
  resources:
  - nodes
  - nodes/proxy
  - services
  - endpoints
  - pods
  verbs: ["get", "list", "watch"]
- apiGroups:
  - extensions
  resources:
  - ingresses
  verbs: ["get", "list", "watch"]
- nonResourceURLs: ["/metrics"]
  verbs: ["get"]
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: prometheus
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: prometheus
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: prometheus
subjects:
- kind: ServiceAccount
  name: prometheus
  namespace: kube-system
[root@master prometheus]# vim prometheus_config.yaml 
apiVersion: v1
kind: ConfigMap
metadata:
  name: prometheus-config
  namespace: kube-system
data:
  prometheus.yml: |
    global:
      scrape_interval:     15s
      evaluation_interval: 15s
    scrape_configs:
 
    - job_name: 'kubernetes-apiservers'
      kubernetes_sd_configs:
      - role: endpoints
      scheme: https
      tls_config:
        ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
      bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
      relabel_configs:
      - source_labels: [__meta_kubernetes_namespace, __meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
        action: keep
        regex: default;kubernetes;https
 
    - job_name: 'kubernetes-nodes'
      kubernetes_sd_configs:
      - role: node
      scheme: https
      tls_config:
        ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
      bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
      relabel_configs:
      - action: labelmap
        regex: __meta_kubernetes_node_label_(.+)
      - target_label: __address__
        replacement: kubernetes.default.svc:443
      - source_labels: [__meta_kubernetes_node_name]
        regex: (.+)
        target_label: __metrics_path__
        replacement: /api/v1/nodes/${1}/proxy/metrics
 
    - job_name: 'kubernetes-cadvisor'
      kubernetes_sd_configs:
      - role: node
      scheme: https
      tls_config:
        ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
      bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
      relabel_configs:
      - action: labelmap
        regex: __meta_kubernetes_node_label_(.+)
      - target_label: __address__
        replacement: kubernetes.default.svc:443
      - source_labels: [__meta_kubernetes_node_name]
        regex: (.+)
        target_label: __metrics_path__
        replacement: /api/v1/nodes/${1}/proxy/metrics/cadvisor
 
    - job_name: 'kubernetes-service-endpoints'
      kubernetes_sd_configs:
      - role: endpoints
      relabel_configs:
      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape]
        action: keep
        regex: true
      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scheme]
        action: replace
        target_label: __scheme__
        regex: (https?)
      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_path]
        action: replace
        target_label: __metrics_path__
        regex: (.+)
      - source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port]
        action: replace
        target_label: __address__
        regex: ([^:]+)(?::\d+)?;(\d+)
        replacement: $1:$2
      - action: labelmap
        regex: __meta_kubernetes_service_label_(.+)
      - source_labels: [__meta_kubernetes_namespace]
        action: replace
        target_label: kubernetes_namespace
      - source_labels: [__meta_kubernetes_service_name]
        action: replace
        target_label: kubernetes_name
 
    - job_name: 'kubernetes-services'
      kubernetes_sd_configs:
      - role: service
      metrics_path: /probe
      params:
        module: [http_2xx]
      relabel_configs:
      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_probe]
        action: keep
        regex: true
      - source_labels: [__address__]
        target_label: __param_target
      - target_label: __address__
        replacement: blackbox-exporter.example.com:9115
      - source_labels: [__param_target]
        target_label: instance
      - action: labelmap
        regex: __meta_kubernetes_service_label_(.+)
      - source_labels: [__meta_kubernetes_namespace]
        target_label: kubernetes_namespace
      - source_labels: [__meta_kubernetes_service_name]
        target_label: kubernetes_name
 
    - job_name: 'kubernetes-ingresses'
      kubernetes_sd_configs:
      - role: ingress
      relabel_configs:
      - source_labels: [__meta_kubernetes_ingress_annotation_prometheus_io_probe]
        action: keep
        regex: true
      - source_labels: [__meta_kubernetes_ingress_scheme,__address__,__meta_kubernetes_ingress_path]
        regex: (.+);(.+);(.+)
        replacement: ${1}://${2}${3}
        target_label: __param_target
      - target_label: __address__
        replacement: blackbox-exporter.example.com:9115
      - source_labels: [__param_target]
        target_label: instance
      - action: labelmap
        regex: __meta_kubernetes_ingress_label_(.+)
      - source_labels: [__meta_kubernetes_namespace]
        target_label: kubernetes_namespace
      - source_labels: [__meta_kubernetes_ingress_name]
        target_label: kubernetes_name
 
    - job_name: 'kubernetes-pods'
      kubernetes_sd_configs:
      - role: pod
      relabel_configs:
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
        action: keep
        regex: true
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
        action: replace
        target_label: __metrics_path__
        regex: (.+)
      - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
        action: replace
        regex: ([^:]+)(?::\d+)?;(\d+)
        replacement: $1:$2
        target_label: __address__
      - action: labelmap
        regex: __meta_kubernetes_pod_label_(.+)
      - source_labels: [__meta_kubernetes_namespace]
        action: replace
        target_label: kubernetes_namespace
      - source_labels: [__meta_kubernetes_pod_name]
        action: replace
        target_label: kubernetes_pod_name
[root@master prometheus]# vim prometheus_deployment.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    name: prometheus-deployment
  name: prometheus
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: prometheus
  template:
    metadata:
      labels:
        app: prometheus
    spec:
      containers:
      - image: prom/prometheus:v2.0.0
        name: prometheus
        command:
        - "/bin/prometheus"
        args:
        - "--config.file=/etc/prometheus/prometheus.yml"
        - "--storage.tsdb.path=/prometheus"
        - "--storage.tsdb.retention=24h"
        ports:
        - containerPort: 9090
          protocol: TCP
        volumeMounts:
        - mountPath: "/prometheus"
          name: data
        - mountPath: "/etc/prometheus"
          name: config-volume
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
          limits:
            cpu: 500m
            memory: 2500Mi
      serviceAccountName: prometheus
      volumes:
      - name: data
        emptyDir: {}
      - name: config-volume
        configMap:
          name: prometheus-config
[root@master prometheus]# vim prometheus_service.yaml 
kind: Service
apiVersion: v1
metadata:
  labels:
    app: prometheus
  name: prometheus
  namespace: kube-system
spec:
  type: NodePort
  ports:
  - port: 9090
    targetPort: 9090
    nodePort: 30003
  selector:
    app: prometheus

Apply the manifests:

[root@master prometheus]# kubectl apply -f prometheus_rbac.yaml 
clusterrole.rbac.authorization.k8s.io/prometheus created
serviceaccount/prometheus created
clusterrolebinding.rbac.authorization.k8s.io/prometheus created
[root@master prometheus]# kubectl apply -f prometheus_config.yaml 
configmap/prometheus-config created
[root@master prometheus]# kubectl apply -f prometheus_deployment.yaml 
deployment.apps/prometheus created
[root@master prometheus]# kubectl apply -f prometheus_service.yaml 
service/prometheus created

Check:

[root@master prometheus]# kubectl get service -A
NAMESPACE       NAME                                 TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)                      AGE
default         kubernetes                           ClusterIP   10.1.0.1       <none>        443/TCP                      2d1h
default         my-nginx-nfs                         NodePort    10.1.32.204    <none>        8070:32621/TCP               3h9m
default         my-nginx-nfs2                        ClusterIP   10.1.202.196   <none>        80/TCP                       15m
ingress-nginx   ingress-nginx-controller             NodePort    10.1.58.218    <none>        80:30289/TCP,443:32195/TCP   47m
ingress-nginx   ingress-nginx-controller-admission   ClusterIP   10.1.241.17    <none>        443/TCP                      47m
kube-system     kube-dns                             ClusterIP   10.1.0.10      <none>        53/UDP,53/TCP,9153/TCP       2d1h
kube-system     metrics-server                       ClusterIP   10.1.33.66     <none>        443/TCP                      152m
kube-system     node-exporter                        NodePort    10.1.199.144   <none>        9100:31672/TCP               6m14s
kube-system     prometheus                           NodePort    10.1.178.35    <none>        9090:30003/TCP               98s
1.3 Test

Browse to 192.168.107.11:31672 to see the raw metrics collected by node-exporter.

Browse to 192.168.107.11:30003 for the Prometheus UI; under Status -> Targets the k8s apiserver and the other scrape jobs show up as connected.
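As a quick sanity check of the node-exporter data, a standard PromQL query such as the following can be run in the Prometheus UI (not specific to this setup):

# per-node CPU usage in percent, averaged over the last 5 minutes
100 - avg by (instance) (irate(node_cpu_seconds_total{mode="idle"}[5m])) * 100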

2. Set up Grafana with Prometheus for dashboards

2.1 Deploy grafana
[root@master prometheus]# vim grafana_deploy.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: grafana-core
  namespace: kube-system
  labels:
    app: grafana
    component: core
spec:
  replicas: 1
  selector:
    matchLabels:
      app: grafana
  template:
    metadata:
      labels:
        app: grafana
        component: core
    spec:
      containers:
      - image: grafana/grafana:6.1.4
        name: grafana-core
        imagePullPolicy: IfNotPresent
        # env:
        resources:
          # keep request = limit to keep this container in guaranteed class
          limits:
            cpu: 100m
            memory: 100Mi
          requests:
            cpu: 100m
            memory: 100Mi
        env:
          # The following env variables set up basic auth twith the default admin user and admin password.
          - name: GF_AUTH_BASIC_ENABLED
            value: "true"
          - name: GF_AUTH_ANONYMOUS_ENABLED
            value: "false"
          # - name: GF_AUTH_ANONYMOUS_ORG_ROLE
          #   value: Admin
          # does not really work, because of template variables in exported dashboards:
          # - name: GF_DASHBOARDS_JSON_ENABLED
          #   value: "true"
        readinessProbe:
          httpGet:
            path: /login
            port: 3000
          # initialDelaySeconds: 30
          # timeoutSeconds: 1
        #volumeMounts:   # not mounting persistent storage for now
        #- name: grafana-persistent-storage
        #  mountPath: /var
      #volumes:
      #- name: grafana-persistent-storage
        #emptyDir: {}
[root@master prometheus]# vim grafana_svc.yaml 
apiVersion: v1
kind: Service
metadata:
  name: grafana
  namespace: kube-system
  labels:
    app: grafana
    component: core
spec:
  type: NodePort
  ports:
    - port: 3000
  selector:
    app: grafana
    component: core
[root@master prometheus]# vim grafana_ing.yaml 
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
   name: grafana
   namespace: kube-system
spec:
   rules:
   - host: k8s.grafana
     http:
       paths:
       - path: /
         pathType: Prefix
         backend:
          service:
            name: grafana
            port: 
              number: 3000

Apply:

[root@master prometheus]# kubectl apply -f grafana_deploy.yaml 
deployment.apps/grafana-core created
[root@master prometheus]# kubectl apply -f grafana_svc.yaml 
service/grafana created
[root@master prometheus]# kubectl apply -f grafana_ing.yaml 
ingress.networking.k8s.io/grafana created

Check:

[root@master prometheus]# kubectl get service -A
NAMESPACE       NAME                                 TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)                      AGE
default         kubernetes                           ClusterIP   10.1.0.1       <none>        443/TCP                      2d1h
default         my-nginx-nfs                         NodePort    10.1.32.204    <none>        8070:32621/TCP               3h17m
default         my-nginx-nfs2                        ClusterIP   10.1.202.196   <none>        80/TCP                       24m
ingress-nginx   ingress-nginx-controller             NodePort    10.1.58.218    <none>        80:30289/TCP,443:32195/TCP   56m
ingress-nginx   ingress-nginx-controller-admission   ClusterIP   10.1.241.17    <none>        443/TCP                      56m
kube-system     grafana                              NodePort    10.1.254.118   <none>        3000:30276/TCP               71s
kube-system     kube-dns                             ClusterIP   10.1.0.10      <none>        53/UDP,53/TCP,9153/TCP       2d1h
kube-system     metrics-server                       ClusterIP   10.1.33.66     <none>        443/TCP                      160m
kube-system     node-exporter                        NodePort    10.1.199.144   <none>        9100:31672/TCP               14m
kube-system     prometheus                           NodePort    10.1.178.35    <none>        9090:30003/TCP               9m55s
2.2 Test

Browse to 192.168.107.11:30276 for the Grafana page; the default account and password are both admin.

2.2.1 Add the Prometheus data source
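The screenshots walked through adding a data source of type Prometheus. Given the service created above (name prometheus, namespace kube-system, port 9090), the in-cluster URL would be:

http://prometheus.kube-system.svc.cluster.local:9090

The NodePort address http://192.168.107.11:30003 should also work, since node IPs are reachable from the Grafana pod.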

2.2.2 Import a dashboard template

Enter a template ID; templates can be found at Dashboards | Grafana Labs.

2.3 Dashboard result

(screenshot: the imported dashboard rendering cluster metrics)

VIII. Build the CI/CD environment: integrate gitlab with Jenkins and Harbor into a pipeline that pulls code, builds an image, and pushes it

1. Deploy the gitlab environment

1.1 Install gitlab

Reference: https://blog.csdn.net/weixin_56270746/article/details/125427722

1.1.1 Configure the gitlab yum repository (installing GitLab from the Tsinghua mirror)

gitlab-ce is the community edition; gitlab-ee is the enterprise edition, which is paid.

Create gitlab-ce.repo under /etc/yum.repos.d/:

[root@gitlab ~]# cd /etc/yum.repos.d/
[root@gitlab yum.repos.d]# vim gitlab-ce.repo
[gitlab-ce]
name=gitlab-ce
baseurl=https://mirrors.tuna.tsinghua.edu.cn/gitlab-ce/yum/el7/
gpgcheck=0
enabled=1
[root@gitlab yum.repos.d]# yum clean all && yum makecache
1.1.2 Install gitlab

Install the latest version directly:

[root@gitlab yum.repos.d]# yum install -y gitlab-ce

On success gitlab-ce prints its ASCII-art banner.

1.1.3 Configure the GitLab site URL

GitLab's default configuration file is /etc/gitlab/gitlab.rb

The default site URL setting is: external_url 'http://gitlab.example.com'

Here the site URL is changed to http://192.168.107.17:8000

[root@gitlab gitlab]# cd /etc/gitlab
[root@gitlab gitlab]# vim gitlab.rb 
external_url 'http://192.168.107.17:8000'   # change this line
1.2 Start and access GitLab
1.2.1 Reconfigure and start
[root@gitlab gitlab]# gitlab-ctl reconfigure

When it finishes, a completion summary is printed.

1.2.2 Add a dnat rule on the firewalld server so Windows can reach GitLab
[root@fiewalld ~]# vim snat_dnat.sh 
#!/bin/bash
iptables -F
iptables -t nat -F

#enable route
echo 1 >/proc/sys/net/ipv4/ip_forward

#enable snat
iptables -t nat -A POSTROUTING -s 192.168.107.0/24 -o ens33 -j SNAT --to-source  192.168.31.69

#enable dnat
iptables -t nat -A PREROUTING -i ens33 -d 192.168.31.69 -p tcp --dport 80 -j DNAT --to-destination 192.168.107.11
iptables -t nat -A PREROUTING -i ens33 -d 192.168.31.69 -p tcp --dport 80 -j DNAT --to-destination 192.168.107.12
iptables -t nat -A PREROUTING -i ens33 -d 192.168.31.69 -p tcp --dport 80 -j DNAT --to-destination 192.168.107.13

# add the rule below; note the port is 8000
iptables -t nat -A PREROUTING -i ens33 -d 192.168.31.69 -p tcp --dport 8000 -j DNAT --to-destination 192.168.107.17
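
Note that of the three port-80 rules only the first ever matches: DNAT is a terminating target, so every web request lands on 192.168.107.11. If spreading new connections across the three nodes were desired, a sketch using the iptables statistic match could look like this:

# round-robin new port-80 connections across the three nodes (sketch)
iptables -t nat -A PREROUTING -i ens33 -d 192.168.31.69 -p tcp --dport 80 -m statistic --mode nth --every 3 --packet 0 -j DNAT --to-destination 192.168.107.11
iptables -t nat -A PREROUTING -i ens33 -d 192.168.31.69 -p tcp --dport 80 -m statistic --mode nth --every 2 --packet 0 -j DNAT --to-destination 192.168.107.12
iptables -t nat -A PREROUTING -i ens33 -d 192.168.31.69 -p tcp --dport 80 -j DNAT --to-destination 192.168.107.13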
1.2.3 Access from Windows

Open a browser at the gitlab server address and register a user.

After registering, logging in at http://192.168.107.17:8000 fails with an error: newly registered accounts have to be initialized/approved by an administrator.

1.2.4 Set the default root password

[root@gitlab gitlab]# cd /opt/gitlab/bin/     # switch to the command directory
[root@gitlab bin]# gitlab-rails console -e production    # open a console to reset the password
--------------------------------------------------------------------------------
 Ruby:         ruby 3.0.6p216 (2023-03-30 revision 23a532679b) [x86_64-linux]
 GitLab:       16.3.1 (ea817127f2a) FOSS
 GitLab Shell: 14.26.0
 PostgreSQL:   13.11
------------------------------------------------------------[ booted in 62.10s ]
Loading production environment (Rails 7.0.6)
irb(main):001:0> u=User.where(id:1).first
=> #<User id:1 @root>
irb(main):002:0> u.password='sc123456'
=> "sc123456"
irb(main):003:0> u.password_confirmation='sc123456'
=> "sc123456"
irb(main):004:0> u.save!
=> true
irb(main):005:0> exit

"true" means the password was saved.

The root/sc123456 credentials can now be used to log in.

1.2.5 Log in

The root user can now log in successfully.

1.3 Log in with the self-created user

The newly registered user has to be approved from the root (administrator) account.

After approval, logging in with the new user succeeds.

The gitlab environment is now fully set up.

2. Deploy the jenkins environment

2.1 Download the generic java war package from the official site; the LTS line is recommended

Download address:

https://www.jenkins.io/download/

The generic war package is used here.

2.2 Install Java (JDK 11 or newer) and configure the JDK environment variables

Reference: https://blog.csdn.net/m0_37048012/article/details/120519348

2.2.1 Install with yum
[root@jenkins javadoc]# yum install -y java-11-openjdk java-11-openjdk-devel		# install
[root@jenkins javadoc]# java -version  # verify the installation
openjdk version "11.0.20" 2023-07-18 LTS
OpenJDK Runtime Environment (Red_Hat-11.0.20.0.8-1.el7_9) (build 11.0.20+8-LTS)
OpenJDK 64-Bit Server VM (Red_Hat-11.0.20.0.8-1.el7_9) (build 11.0.20+8-LTS, mixed mode, sharing)
2.2.2 Find the Java installation directory
[root@jenkins javadoc]# whereis java
java: /usr/bin/java /usr/lib/java /etc/java /usr/share/java /usr/share/man/man1/java.1.gz

If it shows /usr/bin/java, follow the symlink chain:

[root@jenkins javadoc]# ls -lr /usr/bin/java
lrwxrwxrwx 1 root root 22 9月   3 19:46 /usr/bin/java -> /etc/alternatives/java
[root@jenkins javadoc]# ls -lrt /etc/alternatives/java
lrwxrwxrwx 1 root root 64 9月   3 19:46 /etc/alternatives/java -> /usr/lib/jvm/java-11-openjdk-11.0.20.0.8-1.el7_9.x86_64/bin/java
2.2.3 Configure environment variables
[root@jenkins ~]# vim /etc/profile
####### append the following ########
#JAVA environment
JAVA_HOME=/usr/lib/jvm/java-11-openjdk-11.0.20.0.8-1.el7_9.x86_64
JRE_HOME=$JAVA_HOME/jre
PATH=$PATH:$JAVA_HOME/bin:$JRE_HOME/bin
CLASS_PATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar:$JRE_HOME/lib:$CLASSPATH
export JAVA_HOME JRE_HOME PATH CLASS_PATH

Reload the profile so the variables take effect:

[root@jenkins ~]# source /etc/profile
2.3 Upload the downloaded jenkins.war to the server

2.4 Start the jenkins service
[root@jenkins ~]# nohup java -jar jenkins.war &

nohup keeps it running in the background:

[root@jenkins local]# ps aux|grep jenkins
root      11790  106 13.6 2492292 136172 pts/0  Sl   20:40   0:06 java -jar jenkins.war
root      11824  0.0  0.0 112824   980 pts/1    R+   20:40   0:00 grep --color=auto jenkins

The default port is 8080; to use another port, start it with "java -jar jenkins.war --httpPort=80".

2.5 Test access

Browse to the jenkins server address on port 8080.

The first load takes a while.

The "Unlock Jenkins" page confirms the installation; it asks for the administrator password.

The page points at /root/.jenkins/secrets/initialAdminPassword; read that file and enter the password:

[root@jenkins local]# cat /root/.jenkins/secrets/initialAdminPassword
80e0160b23cf4187a0abe4974e6e9ac1

Click "Continue" to reach the plugin installation page.

Wait for the plugin installation to finish. Some plugins fail on the first pass because they depend on others; after the run finishes, click "Retry" at the bottom right to install the rest. When done, click "Continue".

Create a user.

Jenkins is now installed and ready for continuous integration.

3. Deploy the harbor environment

3.1 Install docker and docker-compose
3.1.1 Install docker
[root@harbor ~]# yum install -y yum-utils
 
[root@harbor ~]# yum-config-manager \
    --add-repo \
    https://download.docker.com/linux/centos/docker-ce.repo
 
[root@harbor ~]# yum install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin -y
 
[root@harbor ~]# systemctl start docker

[root@harbor ~]# docker -v  # verify the installation
Docker version 24.0.5, build ced0996
3.1.2 Install docker-compose

Download and install the compose CLI plugin:

[root@harbor ~]# DOCKER_CONFIG=${DOCKER_CONFIG:-$HOME/.docker}
[root@harbor ~]# echo $DOCKER_CONFIG
/root/.docker
[root@harbor ~]# mkdir -p $DOCKER_CONFIG/cli-plugins
[root@harbor ~]# 

Upload the docker-compose binary to the host and place it in /root/.docker/cli-plugins/

[root@harbor ~]# mv docker-compose /root/.docker/cli-plugins/
[root@harbor ~]# cd /root/.docker/cli-plugins/
[root@harbor cli-plugins]# ls
docker-compose
[root@harbor cli-plugins]# chmod +x docker-compose  # make it executable
[root@harbor cli-plugins]# cp docker-compose /usr/bin/  # also put docker-compose on the PATH
[root@harbor cli-plugins]# docker-compose --version  # verify the installation
Docker Compose version v2.7.0
3.2 Install harbor
3.2.1 Download the harbor offline installer and upload it to the server

3.2.2 Unpack and edit the configuration
[root@harbor ~]# tar xf harbor-offline-installer-v2.1.0.tgz
[root@harbor ~]# ls
anaconda-ks.cfg  harbor  harbor-offline-installer-v2.1.0.tgz
[root@harbor ~]# cd harbor
[root@harbor harbor]# ls
common.sh  harbor.v2.1.0.tar.gz  harbor.yml.tmpl  install.sh  LICENSE  prepare
[root@harbor harbor]# cp harbor.yml.tmpl harbor.yml
[root@harbor harbor]# vim harbor.yml

Change the two settings below (the hostname and the http port), and comment out the https block.
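The screenshots showed the edited harbor.yml; based on the access URL used below, the relevant lines would look roughly like this (an excerpt, not the full file):

# harbor.yml (excerpt)
hostname: 192.168.107.19      # IP of the harbor host

http:
  port: 8089                  # harbor is published on 8089 below

# https is not enabled in this setup, so the block is commented out:
# https:
#   port: 443
#   certificate: /your/certificate/path
#   private_key: /your/private/key/path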

3.3 Log in to harbor
[root@harbor harbor]# ./install.sh

From the Windows machine, open the harbor site to configure it:
http://192.168.107.19:8089/

The default login credentials are:
admin
Harbor12345

All three environments (gitlab, jenkins, harbor) are now deployed.

4. Integrate gitlab, jenkins, and harbor into a pipeline job that pulls code, builds an image, and pushes it

Reference: https://www.cnblogs.com/linanjie/p/13986198.html

The jenkins pipeline job pulls the code from GitLab, packages it with maven, builds a docker image, and pushes the image to harbor.

4.1 The jenkins server needs docker installed and must be able to log in to the Harbor registry
4.1.1 Install docker on the jenkins server
[root@jenkins ~]# yum install -y yum-utils
 
[root@jenkins ~]# yum-config-manager \
    --add-repo \
    https://download.docker.com/linux/centos/docker-ce.repo
 
[root@jenkins ~]# yum install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin -y
 
[root@jenkins ~]# systemctl start docker

[root@jenkins ~]# docker -v  # verify the installation
Docker version 24.0.5, build ced0996
4.1.2 Configure the jenkins server to log in to Harbor
[root@jenkins local]# vim /etc/docker/daemon.json
{
"registry-mirrors": ["https://registry.docker-cn.com"],
"insecure-registries" : ["192.168.107.19:8089"]
}

Restart docker:

[root@jenkins local]# systemctl daemon-reload
[root@jenkins local]# systemctl restart docker
4.1.3 Test the login
[root@jenkins local]# docker login 192.168.107.19:8089
Username: admin   # the default credentials from above
Password: 
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store

Login Succeeded

The login succeeds.

4.2 Install git on jenkins
[root@jenkins .ssh]# yum install -y git
4.3 Install maven on jenkins

Reference: https://blog.csdn.net/liu_chen_yang/article/details/130106529

4.3.1 Download the package

Browse the Tsinghua open-source mirror and search for apache.

Open the apache directory, locate maven, and pick the desired version; the top level lists major versions, each containing minor releases.

Here the newest maven-4 line is chosen: 4.0.0-alpha-7, under binaries, in zip format.

After downloading, upload the archive to the server and unpack it.

4.3.2 Unpack the archive
[root@jenkins ~]# mkdir -p /usr/local/maven
[root@jenkins ~]# ls
anaconda-ks.cfg  apache-maven-4.0.0-alpha-7-bin.zip  jenkins.war  nohup.out
[root@jenkins ~]# mv apache-maven-4.0.0-alpha-7-bin.zip /usr/local/maven
[root@jenkins ~]# cd /usr/local/maven
[root@jenkins ~]# yum install unzip -y
[root@jenkins ~]# unzip apache-maven-4.0.0-alpha-7-bin.zip
4.3.3 Configure environment variables
[root@jenkins ~]# vim /etc/profile
###### append the following
MAVEN_HOME=/usr/local/maven/apache-maven-4.0.0-alpha-7
export PATH=${MAVEN_HOME}/bin:${PATH}

Reload the profile so the variables take effect:

[root@jenkins ~]# source /etc/profile
4.3.4 Verify mvn
[root@jenkins ~]# mvn -v
Unable to find the root directory. Create a .mvn directory in the root directory or add the root="true" attribute on the root project's model to identify it.
Apache Maven 4.0.0-alpha-7 (bf699a388cc04b8e4088226ba09a403b68de6b7b)
Maven home: /usr/local/maven/apache-maven-4.0.0-alpha-7
Java version: 11.0.20, vendor: Red Hat, Inc., runtime: /usr/lib/jvm/java-11-openjdk-11.0.20.0.8-1.el7_9.x86_64
Default locale: zh_CN, platform encoding: UTF-8
OS name: "linux", version: "3.10.0-1160.el7.x86_64", arch: "amd64", family: "unix"

This output confirms the installation; the "Unable to find the root directory" line is only a warning, emitted because mvn -v was run outside of a maven project.

4.4 Create a test project in gitlab

Reference: https://www.cnblogs.com/linanjie/p/13986198.html

Here a Spring project is created from a template; the project name is up to you.

The project is created from the template successfully.

4.5 Create a dev project in harbor

4.6 Configure the JDK and Maven in the Jenkins UI

Register the JDK and Maven installations under aliases (the pipeline below refers to them as jdk11 and maven4.0.0), then click Apply and Save.

4.7 Create the pipeline job in the Jenkins dashboard

Required jenkins plugins:

Pipeline, docker-build-step, Docker Pipeline, Docker plugin, Role-based Authorization Strategy

Make sure these plugins are installed in jenkins.

4.7.1 The first step of the pipeline script is pulling the project from gitlab

Click "Pipeline Syntax", generate a checkout snippet for the gitlab repository, click Add to create a credential, and select it.

Record the generated snippet: git credentialsId: '0e0ecf12-6c3d-449b-a957-124d18f2fbb7', url: 'http://192.168.107.17:8001/zhouxin/spring.git'

4.7.2 Write the pipeline
pipeline{
    agent any
	environment {
        // harbor address
		HARBOR_HOST = "192.168.107.19:8089" 
		BUILD_VERSION = createVersion()
	}
	tools{
		// tool aliases as defined in the Jenkins global configuration
		jdk 'jdk11'
              maven 'maven4.0.0'
    }
    stages{
		stage("pull code"){
			//check CODE
			steps {
                // use the snippet generated earlier from "Pipeline Syntax"
				git credentialsId: 'f7c7796f-810c-4ba5-83cb-573f1be3e707', url: 'http://192.168.107.17:8001/zhouxin/my-spring.git'
			}
		}
		stage("maven build"){
			steps {
				sh "mvn clean package -Dmaven.test.skip=true"
			}
		}
		stage("build docker image and push to harbor"){
			//docker push
			steps {
				sh '''
					docker build -t springproject:$BUILD_VERSION .
					docker tag springproject:$BUILD_VERSION ${HARBOR_HOST}/dev/springproject:$BUILD_VERSION
				'''
                // use your own harbor username and password
				sh "docker login -u admin -p Harbor12345" + " ${HARBOR_HOST}"
				sh "docker push ${HARBOR_HOST}/dev/springproject:$BUILD_VERSION"
				
			}
		}
	}
}

def createVersion() {
    // build a version string for this run, e.g. 20201116165759_1
    return new Date().format('yyyyMMddHHmmss') + "_${env.BUILD_ID}"
}

Make sure the dev project already exists in Harbor. Pipeline syntax is worth studying on its own; a script should normally not contain plaintext passwords. For simplicity the harbor password is inlined here, but properly you would create another credential holding the harbor username and password and read it from the script.
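
The docker build step also assumes a Dockerfile at the repository root, which the article does not show. A minimal sketch for a Spring Boot jar might look like this (the jar path is an assumption that depends on the project's pom.xml):

# Dockerfile -- package the Maven-built jar (illustrative sketch)
FROM openjdk:11-jre-slim
WORKDIR /app
# copy whatever jar the maven build produced under target/
COPY target/*.jar app.jar
EXPOSE 8080
ENTRYPOINT ["java", "-jar", "app.jar"]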

After writing the script, click Apply and Save.

Back on the dashboard, build the newly created pipeline job.

The first build is relatively slow because maven has to download its dependencies; here the dependencies were already cached, so it finished quickly.

After a few rounds of trial and error (the errors are described at the end of this article), the build succeeded.

5. Verify

In harbor, the image now appears in the dev project.

The pipeline is complete.

IX. Deploy a jump server to restrict access to the internal network

1. Add a dnat rule on the firewalld server so that ssh connections to it are forwarded to the jump server

[root@fiewalld ~]# vim snat_dnat.sh 
######### append the following rule #####
iptables -t nat -A PREROUTING -i ens33 -d 192.168.31.69 -p tcp --dport 22 -j DNAT --to-destination 192.168.107.14:22

Test: ssh from Windows to the firewalld server and confirm the session lands on the jump server.

The rule works as intended.

2. On the jump server, allow ssh only from the 192.168.31.0/24 network

[root@jump_server ~]# yum install iptables -y
[root@jump_server ~]# iptables -A INPUT -p tcp --dport 22 -s 192.168.31.0/24 -j ACCEPT
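
On its own the ACCEPT rule blocks nothing; "only allow" also needs a matching drop. A minimal sketch, assuming no other INPUT rules are in play:

# drop ssh from everywhere except 192.168.31.0/24 (evaluated after the ACCEPT rule above)
iptables -A INPUT -p tcp --dport 22 ! -s 192.168.31.0/24 -j DROP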

3. Set up passwordless ssh from the jump server to the other internal servers

Only one target is shown here; the others are identical, just copy the public key to each server in turn.

[root@jump_server ~]# ssh-keygen  # generate a key pair
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa): 
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:9axtEvUoH+VNh2MQCRO7UgwHn8CV6M05XOeQeCVgPg0 root@jump_server
The key's randomart image is:
+---[RSA 2048]----+
|       .++E*+=.  |
|        oOo**o.. |
|       . +X+o+= o|
|        .o** *.+.|
|        S +.= o .|
|         . * .   |
|          o +    |
|           o     |
|                 |
+----[SHA256]-----+
[root@jump_server ~]# ssh-copy-id -i /root/.ssh/id_rsa.pub 192.168.107.19  # copy the public key to the target server
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host '192.168.107.19 (192.168.107.19)' can't be established.
ECDSA key fingerprint is SHA256:YeJAjO9gERUBkV531t5TE3PJy74ezOWN5XlC98sMqxQ.
ECDSA key fingerprint is MD5:04:ab:31:bc:ad:88:80:7c:53:3d:77:95:55:01:9c:b0.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@192.168.107.19's password: 

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh '192.168.107.19'"
and check to make sure that only the key(s) you wanted were added.

[root@jump_server ~]# ssh root@192.168.107.19  # test the passwordless login
Last login: Mon Sep  4 20:41:37 2023 from 192.168.31.67
[root@harbor ~]# 

4. Verify

ssh to the firewalld server from a machine in 192.168.107.0/24: the session is not forwarded to the jump server.

ssh to it again from a machine in 192.168.31.0/24: this time the session is forwarded to the jump server.

The jump server works as designed.

X. Install zabbix to monitor all servers (CPU, memory, network bandwidth, etc.)

1. Install the zabbix environment

On the official site www.zabbix.com, pick the zabbix version matching your CentOS release.

Install the zabbix repository:

[root@zabbix ~]# rpm -Uvh https://repo.zabbix.com/zabbix/5.0/rhel/7/x86_64/zabbix-release-5.0-1.el7.noarch.rpm

Install the zabbix server packages:

[root@zabbix ~]# yum install zabbix-server-mysql zabbix-agent

Install the frontend prerequisites:

[root@zabbix ~]# yum install centos-release-scl

Edit the repo file to enable the frontend source (set enabled=1 in the frontend section):

[root@zabbix ~]# vim /etc/yum.repos.d/zabbix.repo

Install the web packages:

[root@zabbix ~]# yum install zabbix-web-mysql-scl zabbix-nginx-conf-scl

Install the database:

If MySQL already exists on the system, there is no need to reinstall it.

Otherwise install MariaDB:

[root@zabbix ~]# yum install mariadb mariadb-server -y

mariadb-server is the server package; mariadb provides the client commands.

Start the database:

[root@zabbix ~]# service mariadb start
Redirecting to /bin/systemctl start mariadb.service

Enable MariaDB at boot:

[root@zabbix ~]# systemctl enable mariadb
Created symlink from /etc/systemd/system/multi-user.target.wants/mariadb.service to /usr/lib/systemd/system/mariadb.service.

Check that the mysqld process is running:

[root@zabbix ~]# ps aux|grep mysqld
mysql      2574  0.0  0.1 113412  1596 ?        Ss   11:22   0:00 /bin/sh /usr/bin/mysqld_safe --basedir=/usr
mysql      2739  0.1  8.2 968920 81684 ?        Sl   11:22   0:00 /usr/libexec/mysqld --basedir=/usr --datadir=/var/lib/mysql --plugin-dir=/usr/lib64/mysql/plugin --log-error=/var/log/mariadb/mariadb.log --pid-file=/var/run/mariadb/mariadb.pid --socket=/var/lib/mysql/mysql.sock
root       2794  0.0  0.0 112824   980 pts/0    R+   11:26   0:00 grep --color=auto mysql

Check the listening port:

[root@zabbix ~]# yum install net-tools -y
[root@zabbix ~]# netstat -antplu|grep mysqld
tcp        0      0 0.0.0.0:3306            0.0.0.0:*               LISTEN      2739/mysqld

Log in to mysql:

[root@zabbix ~]# mysql -uroot -p
Enter password:    # no password yet, just press Enter
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 367
Server version: 5.5.68-MariaDB MariaDB Server

Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]> 

Create the initial database:

MariaDB [(none)]> create database zabbix character set utf8 collate utf8_bin;  # create the zabbix database
MariaDB [(none)]> create user zabbix@localhost identified by 'sc123456';   # create the user
MariaDB [(none)]> grant all privileges on zabbix.* to zabbix@localhost;   # grant privileges
MariaDB [(none)]> set global log_bin_trust_function_creators = 1;
MariaDB [(none)]> quit;

Import the initial schema and data; you will be prompted for the newly created password:

[root@zabbix ~]# zcat /usr/share/doc/zabbix-server-mysql*/create.sql.gz | mysql -uzabbix -p zabbix

Configure the database for Zabbix server:

[root@zabbix ~]# vim  /etc/zabbix/zabbix_server.conf
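
The screenshot showed the edited database settings; matching the database created above, the relevant line would be (DBName=zabbix and DBUser=zabbix are the package defaults):

# /etc/zabbix/zabbix_server.conf (excerpt)
DBPassword=sc123456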

Configure PHP for the Zabbix frontend (enable the listen/server_name settings in the zabbix nginx config, and the timezone in the php-fpm config):

[root@zabbix ~]# vim /etc/opt/rh/rh-nginx116/nginx/conf.d/zabbix.conf

[root@zabbix ~]# vim /etc/opt/rh/rh-php72/php-fpm.d/zabbix.conf

Serve zabbix's nginx on the default port 80.

Change the stock nginx configuration so it does not fight zabbix's nginx over the port: move the default nginx to port 8080.

[root@zabbix ~]# vim /etc/opt/rh/rh-nginx116/nginx/nginx.conf

Restart the zabbix services and enable them at boot:

[root@zabbix ~]# systemctl restart zabbix-server zabbix-agent rh-nginx116-nginx rh-php72-php-fpm
[root@zabbix ~]# systemctl enable zabbix-server zabbix-agent rh-nginx116-nginx rh-php72-php-fpm

2. Test access

Browse to http://192.168.107.16

Complete the initial setup wizard to reach the login page.

First-time login credentials:

Account: Admin

Password: zabbix

The zabbix environment is now up.

3. Install the zabbix-agent on every server to be monitored

One machine is shown here; the others are configured identically.

Install the zabbix repository:

[root@ansible ~]# rpm -Uvh https://repo.zabbix.com/zabbix/5.0/rhel/7/x86_64/zabbix-release-5.0-1.el7.noarch.rpm

Install the zabbix-agent service:

[root@ansible ~]# yum install zabbix-agent -y

Edit zabbix_agentd.conf so the zabbix server is allowed to pull data:

[root@ansible ~]# cd /etc/zabbix
[root@ansible zabbix]# ls
zabbix_agentd.conf  zabbix_agentd.d
[root@ansible zabbix]# vim zabbix_agentd.conf 
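
The screenshots showed the edited agent settings; pointed at the zabbix server above, they would look roughly like this (Hostname varies per machine):

# /etc/zabbix/zabbix_agentd.conf (excerpt)
Server=192.168.107.16          # zabbix server allowed to pull data (passive checks)
ServerActive=192.168.107.16    # zabbix server for active checks
Hostname=ansible               # must match the host name configured in the web UI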

Restart the zabbix-agent service:

[root@ansible zabbix]# service zabbix-agent restart
Redirecting to /bin/systemctl restart zabbix-agent.service

4. Install zabbix-get on the zabbix server

[root@zabbix fonts]# yum install zabbix-get

5. Pull data

[root@zabbix zabbix]# zabbix_get -s 192.168.31.67 -p 10050 -k "system.cpu.load[all,avg1]"
0.000000
[root@zabbix zabbix]# zabbix_get -s 192.168.107.14 -p 10050 -k "system.cpu.load[all,avg1]"
0.000000
[root@zabbix zabbix]# zabbix_get -s 192.168.107.15 -p 10050 -k "system.cpu.load[all,avg1]"
0.000000
[root@zabbix zabbix]# zabbix_get -s 192.168.107.17 -p 10050 -k "system.cpu.load[all,avg1]"
0.290000
[root@zabbix zabbix]# zabbix_get -s 192.168.107.18 -p 10050 -k "system.cpu.load[all,avg1]"
0.000000
[root@zabbix zabbix]# zabbix_get -s 192.168.107.19 -p 10050 -k "system.cpu.load[all,avg1]"
0.000000
[root@zabbix zabbix]# zabbix_get -s 192.168.107.20 -p 10050 -k "system.cpu.load[all,avg1]"
0.000000

The zabbix server can now pull data from all the monitored machines.

6. Add the monitored hosts in the web UI

Every host is added the same way; only one is shown here.

Optionally switch the UI language to Chinese first.

Add the monitored host, then select a template for it.

Templates are optional; you can also define your own applications and monitoring items.

Check the data graphs: data is already coming in.

Note: the garbled text boxes under the graphs are a locale/font issue; switching the language back to English renders them correctly.

zabbix monitoring of the servers outside the web cluster is complete.

XI. Stress-test the whole k8s cluster and the related servers with ab

The ansible server runs the tests.

1. Install ab

[root@ansible ~]# yum install httpd-tools -y

2. Test

One server's stress test is shown; the others are run the same way.

[root@ansible ~]# ab -n 1000 -c 1000  -r http://192.168.31.69/
This is ApacheBench, Version 2.3 <$Revision: 1430300 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking 192.168.31.69 (be patient)  # run progress
Completed 100 requests
Completed 200 requests
Completed 300 requests
Completed 400 requests
Completed 500 requests
Completed 600 requests
Completed 700 requests
Completed 800 requests
Completed 900 requests
Completed 1000 requests
Finished 1000 requests


Server Software:                        # server software version
Server Hostname:        192.168.31.69  # server host name
Server Port:            80  # server port

Document Path:          /         # page tested
Document Length:        0 bytes   # page size in bytes

Concurrency Level:      1000  # concurrency, i.e. the number of simulated clients
Time taken for tests:   0.384 seconds  # total time the test took
Complete requests:      1000  # successful requests
Failed requests:        2000  # failed requests
   (Connect: 0, Receive: 1000, Length: 0, Exceptions: 1000)
Write errors:           0
Total transferred:      0 bytes     # total data transferred during the test (headers included)
HTML transferred:       0 bytes    # actual HTML bytes transferred during the test
Requests per second:    2604.40 [#/sec] (mean)  # requests handled per second; the key throughput figure ("mean" marks an average)
Time per request:       383.966 [ms] (mean)  # average response time per request ("mean" marks an average)

# the per-request time across all concurrent requests is 0.384 ms
# concurrent requests are not truly processed in parallel; the CPU time-slices them one by one
# so the first Time per request roughly equals the second Time per request multiplied by the concurrency
Time per request:       0.384 [ms] (mean, across all concurrent requests)   
Transfer rate:          0.00 [Kbytes/sec] received  # transfer rate, average traffic per second; helps rule out network saturation as the cause of long response times

Connection Times (ms)   # connection timing breakdown
              min  mean[+/-sd] median   max
Connect:        0    0   0.0      0       0
Processing:     0    0   1.0      0       7
Waiting:        0    0   0.0      0       0
Total:          0    0   1.0      0       7

Percentage of the requests served within a certain time (ms)  # share of requests served within a given time
  50%      0
  66%      0
  75%      0
  80%      0
  90%      0
  95%      3
  98%      5
  99%      5
 100%      7 (longest request)

[root@ansible ~]# 

Problems encountered in the project

1. After a reboot, every server except the firewalld server lost its xshell connection

Troubleshooting:

Check whether the ssh process is running: it is, so that is not the problem.

Check the firewall rules on the firewalld server: the snat rules configured earlier are gone, because the snat script does not survive a reboot.

Fix: bash snat_dnat.sh

Checking the firewall rules again shows the snat policy back in effect, and the other servers can be reached over xshell again.

To make the snat rules survive future reboots, run bash snat_dnat.sh from the boot script:

[root@fiewalld ~]# chmod +x /root/snat_dnat.sh   # make the script executable
[root@fiewalld ~]# vi /etc/rc.d/rc.local   


#!/bin/bash
# THIS FILE IS ADDED FOR COMPATIBILITY PURPOSES
#
# It is highly advisable to create own systemd services or udev rules
# to run scripts during boot instead of using this file.
#
# In contrast to previous versions due to parallel execution during boot
# this script will NOT be run after all other services.
#
# Please note that you must run 'chmod +x /etc/rc.d/rc.local' to ensure
# that this script will be executed during boot.

touch /var/lock/subsys/local

/root/snat_dnat.sh  # add this line
[root@fiewalld ~]# chmod +x /etc/rc.d/rc.local   # in CentOS 7 the permissions on /etc/rc.d/rc.local were reduced, so it must be made executable for it to run at boot
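
As the comment block in rc.local itself advises, a dedicated systemd unit is the cleaner way to run such a script at boot; a minimal sketch, with a hypothetical unit name:

[root@fiewalld ~]# cat /etc/systemd/system/snat-dnat.service
[Unit]
Description=Apply SNAT/DNAT iptables rules at boot
After=network-online.target
Wants=network-online.target

[Service]
Type=oneshot
ExecStart=/root/snat_dnat.sh
RemainAfterExit=yes

[Install]
WantedBy=multi-user.target
[root@fiewalld ~]# systemctl enable snat-dnat.service   # run the script on every boot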
2. Pods would not start. The PVC failed to bind to the PV because the storageClassName values in the PVC and PV YAML files did not match (see the sketch below).
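
A minimal sketch of a correctly matched pair, with hypothetical resource names and an assumed NFS export (not necessarily the exact manifests used in this project); the point is that storageClassName must be byte-for-byte identical on both sides:

[root@master ~]# cat > pv-pvc-demo.yaml <<'EOF'    # hypothetical file name
apiVersion: v1
kind: PersistentVolume
metadata:
  name: demo-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  storageClassName: nfs            # must match the PVC below exactly
  nfs:
    path: /web                     # assumed NFS export path
    server: 192.168.107.18         # assumed NFS server address
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  storageClassName: nfs            # a mismatch here leaves the PVC (and the pod) Pending
EOF
[root@master ~]# kubectl apply -f pv-pvc-demo.yaml
[root@master ~]# kubectl get pvc                   # STATUS should show Bound, not Pending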
3. When testing access, the page content was not what had been configured: the web data volume had failed to mount, even though the nginx.conf volume mounted successfully (see the checks below).
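
In a case like this, checking the mount from both the pod side and the NFS side usually pinpoints the failure; a few generic checks (the pod name and web-root path here are placeholders):

[root@master ~]# kubectl describe pod <pod-name>            # inspect Mounts and Events for mount errors
[root@master ~]# kubectl exec -it <pod-name> -- ls /usr/local/nginx/html   # is the data visible inside the container?
[root@nfs ~]# cat /etc/exports    # is the web data directory actually exported?
[root@nfs ~]# exportfs -rv        # re-export and list the NFS shares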
4. The final stage of the pipeline failed


Check the error message:


Cause of the error: Docker was not running.

Solution: start Docker on the Jenkins server

[root@jenkins ~]# service docker start
Redirecting to /bin/systemctl start docker.service
[root@jenkins ~]# docker ps
CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
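
To keep the pipeline from hitting this again after a reboot, Docker can also be enabled at boot (standard systemd usage):

[root@jenkins ~]# systemctl enable docker   # start the Docker service automatically at boot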
5. The final stage of the pipeline failed: unable to log in to Harbor

The error message:


Cause: by default the login goes over HTTPS (port 443), which we had not enabled.

Solution: redeploying Harbor resolved it.

[root@harbor ~]# cd harbor
[root@harbor harbor]# ./install.sh 

Test:

[root@jenkins ~]# docker login -u admin -p Harbor12345 192.168.107.19:8089
WARNING! Using --password via the CLI is insecure. Use --password-stdin.
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store

Login Succeeded

Login succeeded.


Project takeaways

  1. Became more familiar with the principles and use of SNAT+DNAT policies
  2. Became more proficient at using k8s and deploying clusters
  3. Reading logs is a great help when troubleshooting
  4. Plan the project architecture diagram in advance, and be careful when deploying the environment
  5. Gained a deeper understanding of, and more fluency with, the Docker and k8s techniques involved, including PV+PVC+NFS volume mounts for data consistency, image building, and probes
  6. Observed HPA in action and came to understand its purpose and mechanism more deeply
  7. Gained a deeper understanding of Prometheus and Zabbix as two approaches to monitoring
  8. Deployed the CI/CD pipeline end to end; it took several failed attempts, and its usage is much clearer now
  9. Running many servers at once can make the computer sluggish; be patient and don't rush
  10. If troubleshooting keeps failing, don't get anxious; think about the problem from several angles
  11. Became more familiar with implementing load balancing with Ingress
  12. Gained a deeper understanding of the GitLab+Jenkins+Harbor pipeline workflow, how the three are wired together, and the principles behind it; it failed many times and took more than ten attempts to succeed, so stay calm and don't give up
  13. Came to a deep understanding of how a jump host works
  14. Learned the point of load testing
