Table of Contents
Project architecture diagram:
Project environment:
Project description:
Project steps:
IP plan:
I. Install and deploy Kubernetes on three machines: one master and two nodes
Installing and deploying k8s
Joining the node machines to the cluster:
Initializing the master node:
Installing the Calico network plugin:
II. Deploy an NFS service that all web pods can access, via PV, PVC, and volume mounts
1. Set up the NFS server
2. Configure the shared directory
3. Create a PV backed by the NFS share
Testing:
III. Start nginx and MySQL pods, using HPA to scale horizontally when CPU usage is high; stress-test with ab.
Deploying a MySQL pod on k8s:
Installing metrics-server
Starting an HPA-enabled nginx Deployment and its pods
Stress-testing from the NFS server with ab
IV. Use Ingress to load-balance web traffic by domain name and by URL
Step 1: Install the ingress controller
Step 2: Create pods and expose them as services
Step 3: Enable the Ingress, linking the ingress controller and the services
Testing from the NFS server (requires adding name-resolution records to /etc/hosts)
V. Monitor the web pods with probes so that problems trigger an immediate restart, improving pod reliability.
VI. Build a CI/CD environment: install and deploy Jenkins on the k8s master, and a Harbor registry on another machine.
Installing Jenkins:
Deploying Harbor:
Testing Harbor push and pull
VII. Use the dashboard to oversee all cluster resources
VIII. Install and deploy Prometheus + Grafana:
Installing Prometheus
On the master or any monitored machine (install the appropriate exporter, typically node_exporter):
Installing Grafana alongside Prometheus:
IX. Install and deploy a firewalld firewall and a JumpServer bastion host to protect the web cluster.
Installing and deploying JumpServer
Deploying the firewall server to protect the internal network
X. Install and deploy an Ansible machine and write the host inventory for day-to-day automated operations.
1. Set up passwordless SSH: generate a key pair on the Ansible host and push the public key to root's home directory on every server
2. Write the host inventory
3. Test
Project takeaways:
Project architecture diagram:
Project environment:
CentOS 7, k8s, Docker, Prometheus, NFS, JumpServer, Harbor, Ansible, Jenkins, etc.
Project description:
Simulate an enterprise k8s production environment and build a highly available, high-performance system.
Project steps:
IP plan:
k8smaster:192.168.220.100
k8snode1:192.168.220.101
k8snode2:192.168.220.102
nfs:192.168.220.103
harbor:192.168.220.104
Prometheus:192.168.220.105
jumpserver:192.168.220.106
firewalld: 192.168.220.107
ansible:192.168.220.108
I. Install and deploy Kubernetes on three machines: one master and two nodes
Installing and deploying k8s
### Every step below must be run on all three machines; in Xshell, enable "send input to all sessions"
1. Configure static IP addresses, set the hostnames, and disable SELinux and firewalld
hostnamectl set-hostname master && bash
hostnamectl set-hostname node1 && bash
hostnamectl set-hostname node2 && bash
# Stop the firewalld service and disable it at boot
service firewalld stop
systemctl disable firewalld

# Disable SELinux for the current session
setenforce 0
# Disable SELinux permanently
sed -i '/^SELINUX=/ s/enforcing/disabled/' /etc/selinux/config
Add name-resolution records
[root@master ~]# vim /etc/hosts
[root@master ~]# cat /etc/hosts
127.0.0.1 ? localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 ? ? ? ? localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.220.100 master
192.168.220.101 node1
192.168.220.102 node2
##### Note!!! Every step below must be run on all three machines
Disable the swap partition
For performance reasons, k8s is designed not to allow swap by default.
[root@master ~]# swapoff -a    # temporary
sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
[root@master ~]# vim /etc/fstab
Comment out the swap line:
#/dev/mapper/centos-swap swap      swap    defaults        0 0
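The effect of the sed one-liner above can be checked on a throwaway copy of fstab; a sketch using a /tmp path so it is safe to run anywhere:

```shell
# Build a sample fstab and comment out its swap entry the same way the
# one-liner above does on the real /etc/fstab.
cat > /tmp/fstab.test <<'EOF'
/dev/mapper/centos-root /      xfs  defaults 0 0
/dev/mapper/centos-swap swap   swap defaults 0 0
EOF
sed -i '/ swap / s/^\(.*\)$/#\1/g' /tmp/fstab.test
grep swap /tmp/fstab.test   # the swap line now starts with '#'
```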
Load the bridge and IP-forwarding kernel modules (needed for inter-container communication):
[root@master ~]# modprobe br_netfilter
[root@master ~]# modprobe overlay
# tee creates the file if it does not exist, and overwrites its contents if it does
[root@master ~]# cat << EOF | tee /etc/modules-load.d/k8s.conf
br_netfilter
overlay
EOF
Verify that the kernel modules loaded
# lsmod lists the kernel modules currently loaded on a Linux system
[root@master ~]# lsmod | grep -e br_netfilter -e overlay
overlay ? ? ? ? ? ? ? ?91659 ?0
br_netfilter ? ? ? ? ? 22256 ?0
bridge ? ? ? ? ? ? ? ?151336 ?1 br_netfilter
[root@master ~]# cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
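These settings only take effect after `sysctl --system` (or a reboot). A small sketch that stages the same file under /tmp (an illustrative path) and checks that every required key is spelled correctly before it would be applied:

```shell
# Stage the k8s sysctl settings and verify every required key is present.
# On a real node the file lives at /etc/sysctl.d/k8s.conf and is applied
# with `sysctl --system` run as root.
conf=/tmp/k8s-sysctl.conf
cat > "$conf" <<'EOF'
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
for key in net.bridge.bridge-nf-call-ip6tables \
           net.bridge.bridge-nf-call-iptables \
           net.ipv4.ip_forward; do
    grep -q "^$key = 1$" "$conf" || { echo "missing: $key"; exit 1; }
done
echo "all three keys present"
```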
Update and configure the package repositories
# Add the Alibaba Cloud yum repo
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
# Rebuild the yum metadata cache
yum clean all && yum makecache
# Install base packages
yum install -y vim wget

# Add the Alibaba Cloud Docker CE yum repo
yum install -y yum-utils
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
Configure IPVS (the load-balancing layer used by kube-proxy):
# Install ipset and ipvsadm
yum install -y ipset ipvsadm
# Write the required modules to a script so they are reloaded automatically after a node reboot
cat <<EOF > /etc/sysconfig/modules/ipvs.modules
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
# Make the script executable
chmod +x /etc/sysconfig/modules/ipvs.modules
# Run the script
/bin/bash /etc/sysconfig/modules/ipvs.modules
# Verify that the modules loaded
lsmod | grep -e ip_vs -e nf_conntrack_ipv4
nf_conntrack_ipv4 ? ? ?15053 ?24
nf_defrag_ipv4 ? ? ? ? 12729 ?1 nf_conntrack_ipv4
ip_vs_sh ? ? ? ? ? ? ? 12688 ?0
ip_vs_wrr ? ? ? ? ? ? ?12697 ?0
ip_vs_rr ? ? ? ? ? ? ? 12600 ?105
ip_vs ? ? ? ? ? ? ? ? 145497 ?111 ip_vs_rr,ip_vs_sh,ip_vs_wrr
nf_conntrack ? ? ? ? ?139264 ?10 ip_vs,nf_nat,nf_nat_ipv4,nf_nat_ipv6,xt_conntrack,nf_nat_masquerade_ipv4,nf_nat_masquerade_ipv6,nf_conntrack_netlink,nf_conntrack_ipv4,nf_conntrack_ipv6
libcrc32c ? ? ? ? ? ? ?12644 ?4 xfs,ip_vs,nf_nat,nf_conntrack
Configure time synchronization:
# Start and enable the chronyd service
systemctl start chronyd && systemctl enable chronyd
# Set the time zone
timedatectl set-timezone Asia/Shanghai
Docker setup
yum install -y docker-ce-20.10.24-3.el7 docker-ce-cli-20.10.24-3.el7 containerd.io
[root@master ~]# systemctl start docker
[root@master ~]# systemctl enable docker
# Create the config directory
mkdir -p /etc/docker
# Write the daemon config
cat > /etc/docker/daemon.json <<EOF
{
? "registry-mirrors": [
? ? "https://youraddr.mirror.aliyuncs.com",
? ? "http://hub-mirror.c.163.com",
? ? "https://reg-mirror.qiniu.com",
? ? "https://docker.mirrors.ustc.edu.cn"
? ],
? "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
systemctl daemon-reload && systemctl restart docker
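If daemon.json contains a syntax error, dockerd refuses to start after the restart, so it is worth validating the JSON first. A sketch on a copy under /tmp (python3 assumed to be available):

```shell
# Write a sample of the daemon config and check that it parses as JSON
# before restarting dockerd against the real /etc/docker/daemon.json.
cat > /tmp/daemon.json.test <<'EOF'
{
  "registry-mirrors": ["https://docker.mirrors.ustc.edu.cn"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
python3 -m json.tool /tmp/daemon.json.test > /dev/null && echo "daemon.json parses"
```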
[root@master ~]# docker ps
CONTAINER ID ? IMAGE ? ? COMMAND ? CREATED ? STATUS ? ?PORTS ? ? NAMES
[root@master ~]# getenforce
Disabled
[root@master ~]# cat /proc/swaps
Filename                Type        Size  Used    Priority
Joining the node machines to the cluster:
The steps below must be run on all three machines
Configure the k8s cluster environment:
# Add the Kubernetes package repo
cat <<EOF | tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
# Build the local yum cache
yum makecache
# Install the components
yum install -y kubeadm-1.23.17-0 kubelet-1.23.17-0 kubectl-1.23.17-0 --disableexcludes=kubernetes
[root@master ~]# cat <<EOF > /etc/sysconfig/kubelet
KUBELET_CGROUP_ARGS="--cgroup-driver=systemd"
KUBE_PROXY_MODE="ipvs"
EOF
# Enable kubelet and start it at boot
systemctl enable --now kubelet
Initializing the master node:
# Run on the master only
kubeadm init \
? ? --kubernetes-version=v1.23.17 \
? ? --pod-network-cidr=10.224.0.0/16 \
? ? --service-cidr=10.96.0.0/12 \
? ? --apiserver-advertise-address=192.168.220.100 \
? ? --image-repository=registry.aliyuncs.com/google_containers
(Note: no spaces may follow the line-continuation backslashes)
Result:
kubeadm join 192.168.220.100:6443 --token v7a9n5.ppfursc2nbica1fg \
    --discovery-token-ca-cert-hash sha256:6a4863a28201e03ee1b8083fc6fc08b6f7f39b44899b9c8bd6b627ab044b77ea
If it fails, inspect the errors with journalctl -u kubelet
On each node machine:
kubeadm join 192.168.220.100:6443 --token v7a9n5.ppfursc2nbica1fg --discovery-token-ca-cert-hash sha256:6a4863a28201e03ee1b8083fc6fc08b6f7f39b44899b9c8bd6b627ab044b77ea
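Bootstrap tokens expire after 24 hours by default; on the master, `kubeadm token create --print-join-command` prints a fresh join command. A sketch that pulls the token and CA hash out of such a line with plain text processing (the sample string below just reuses the values from this document, not live output):

```shell
# Extract --token and --discovery-token-ca-cert-hash from a join-command line.
join='kubeadm join 192.168.220.100:6443 --token v7a9n5.ppfursc2nbica1fg --discovery-token-ca-cert-hash sha256:6a4863a28201e03ee1b8083fc6fc08b6f7f39b44899b9c8bd6b627ab044b77ea'
token=$(printf '%s\n' "$join" | sed -n 's/.*--token \([^ ]*\).*/\1/p')
cahash=$(printf '%s\n' "$join" | sed -n 's/.*--discovery-token-ca-cert-hash \([^ ]*\).*/\1/p')
echo "token=$token"
echo "hash=$cahash"
```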
On the master:
[root@master ~]# kubectl get nodes
NAME ? ? STATUS ? ? ROLES ? ? ? ? ? ? ? ? ?AGE ? VERSION
master ? NotReady ? control-plane,master ? 22m ? v1.23.17
node1 ? ?NotReady ? <none> ? ? ? ? ? ? ? ? 50s ? v1.23.17
node2 ? ?NotReady ? <none> ? ? ? ? ? ? ? ? 44s ? v1.23.17
Label the node machines as workers
# Run on the master
kubectl label node node1 node-role.kubernetes.io/worker=worker
kubectl label node node2 node-role.kubernetes.io/worker=worker
[root@master ~]# kubectl get nodes
NAME ? ? STATUS ? ? ROLES ? ? ? ? ? ? ? ? ?AGE ? ? VERSION
master ? NotReady ? control-plane,master ? 28m ? ? v1.23.17
node1 ? ?NotReady ? worker ? ? ? ? ? ? ? ? 6m3s ? ?v1.23.17
node2 ? ?NotReady ? worker ? ? ? ? ? ? ? ? 5m57s ? v1.23.17
kubeadm reset can be used to undo the initialization
Installing the Calico network plugin:
# Run on the master
wget --no-check-certificate https://docs.projectcalico.org/archive/v3.25/manifests/calico.yaml   # downloaded with wget
Calico is a k8s network component that provides networking between the master and node machines; underneath it uses an overlay network
# Run on the master
kubectl apply -f https://docs.projectcalico.org/archive/v3.25/manifests/calico.yaml  # this version works with k8s 1.23
[root@master ~]# kubectl get node
NAME ? ? STATUS ? ROLES ? ? ? ? ? ? ? ? ?AGE ? VERSION
master ? Ready ? ?control-plane,master ? 53m ? v1.23.17
node1 ? ?Ready ? ?worker ? ? ? ? ? ? ? ? 30m ? v1.23.17
node2 ? ?Ready ? ?worker ? ? ? ? ? ? ? ? 30m ? v1.23.17
Configure kube-proxy to use IPVS:
On the master:
kubectl edit configmap kube-proxy -n kube-system
mode: "ipvs"
# Delete all kube-proxy pods so they restart with the new mode
kubectl delete pods -n kube-system -l k8s-app=kube-proxy
Inspect the k8s pods:
kubectl get pod -n kube-system
kubectl get pod lists the running pods (the k8s analogue of docker ps)
-n kube-system restricts the listing to the kube-system namespace
kube-system is the namespace where the control-plane pods live
A pod is the unit that runs containers
k8s manages pods with pods: the pods living in the kube-system namespace are the control-plane pods
kubectl get ns = kubectl get namespace   lists the namespaces
NAME              STATUS   AGE
default           Active   78m           # the namespace where ordinary, user-created pods run
kube-node-lease   Active   78m
kube-public       Active   78m
kube-system       Active   78m    # the management namespace
II. Deploy an NFS service that all web pods can access, via PV, PVC, and volume mounts
Notes:
On the NFS server:
# Stop the firewalld service and disable it at boot
service firewalld stop
systemctl disable firewalld

# Disable SELinux for the current session
setenforce 0
# Disable SELinux permanently
sed -i '/^SELINUX=/ s/enforcing/disabled/' /etc/selinux/config
1. Set up the NFS server
# Install nfs-utils on the NFS server and on every k8s machine
[root@nfs ~]# yum install nfs-utils -y
[root@master ~]# yum install nfs-utils -y
[root@node1 ~]# yum install nfs-utils -y
[root@node2 ~]# yum install nfs-utils -y
2. Configure the shared directory
[root@nfs ~]# vim /etc/exports
[root@nfs ~]# cat /etc/exports
/web ? 192.168.220.0/24(rw,no_root_squash,sync)
[root@nfs ~]# mkdir /web
[root@nfs ~]# cd /web
[root@nfs web]# echo "tiantianming" >index.html
[root@nfs web]# ls
index.html
[root@localhost web]# exportfs -rv        # re-export the NFS shares
exporting 192.168.220.0/24:/web
# Restart the service and enable it at boot
[root@nfs web]# systemctl restart nfs && systemctl enable nfs
Created symlink from /etc/systemd/system/multi-user.target.wants/nfs-server.service to /usr/lib/systemd/system/nfs-server.service.
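Before trusting `exportfs -rv`, the export line itself can be sanity-checked: the share path must exist and the options must grant rw to the cluster subnet. A sketch on a staged copy (the /tmp path is illustrative; the real file is /etc/exports):

```shell
# Stage the exports line and assert the share path and rw option are present.
echo '/web   192.168.220.0/24(rw,no_root_squash,sync)' > /tmp/exports.test
grep -Eq '^/web[[:space:]]+192\.168\.220\.0/24\(rw,' /tmp/exports.test \
    && echo "export line grants rw to 192.168.220.0/24"
```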
3. Create a PV backed by the NFS share
[root@master ~]# mkdir /pv
[root@master ~]# cd /pv/
[root@master pv]# vim nfs-pv.yml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-web
  labels:
    type: pv-web
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  storageClassName: nfs         # the storage class name this PV belongs to
  nfs:
    path: "/web"              # the directory shared over NFS
    server: 192.168.220.103   # the NFS server's IP address
    readOnly: false           # access mode
[root@master pv]# kubectl apply -f nfs-pv.yml
persistentvolume/pv-web created
[root@master pv]# kubectl get pv
NAME ? ? CAPACITY ? ACCESS MODES ? RECLAIM POLICY ? STATUS ? ? ?CLAIM ? STORAGECLASS ? REASON ? AGE
pv-web ? 10Gi ? ? ? RWX ? ? ? ? ? ?Retain ? ? ? ? ? Available ? ? ? ? ? nfs ? ? ? ? ? ? ? ? ? ? 12s
# Create a PVC that binds to the PV
[root@master pv]# vim nfs-pvc.yml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-web
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  storageClassName: nfs   # request a PV of the nfs storage class
[root@master pv]# kubectl apply -f nfs-pvc.yml
persistentvolumeclaim/pvc-web created
[root@master pv]# kubectl get pvc
NAME ? ? ?STATUS ? VOLUME ? CAPACITY ? ACCESS MODES ? STORAGECLASS ? AGE
pvc-web ? Bound ? ?pv-web ? 10Gi ? ? ? RWX ? ? ? ? ? ?nfs ? ? ? ? ? ?13s
# Create pods that use the PVC
[root@master pv]# vim nginx-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      volumes:
        - name: sc-pv-storage-nfs
          persistentVolumeClaim:
            claimName: pvc-web
      containers:
        - name: sc-pv-container-nfs
          image: nginx
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 80
              name: "http-server"
          volumeMounts:
            - mountPath: "/usr/share/nginx/html"
              name: sc-pv-storage-nfs
[root@master pv]# kubectl apply -f nginx-deployment.yaml
deployment.apps/nginx-deployment created
[root@master pv]# kubectl get pod -o wide
NAME ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ?READY ? STATUS ? ?RESTARTS ? AGE ? IP ? ? ? ? ? ? ? NODE ? ?NOMINATED NODE ? READINESS GATES
nginx-deployment-794d8c5666-dsxkq ? 1/1 ? ? Running ? 0 ? ? ? ? ?17m ? 10.224.166.130 ? node1 ? <none> ? ? ? ? ? <none>
nginx-deployment-794d8c5666-fsctm ? 1/1 ? ? Running ? 0 ? ? ? ? ?12m ? 10.224.104.4 ? ? node2 ? <none> ? ? ? ? ? <none>
nginx-deployment-794d8c5666-spkzs ? 1/1 ? ? Running ? 0 ? ? ? ? ?12m ? 10.224.104.3 ? ? node2 ? <none> ? ? ? ? ? <none>
Testing:
[root@master pv]# curl 10.224.166.130
tiantianming
Edit the contents of index.html on the NFS server
[root@nfs web]# vim index.html
[root@nfs web]# cat index.html
tiantianming
welcome to hangzhou!!!
The served page changed too, so the mount works:
[root@master pv]# curl 10.224.166.130
tiantianming
welcome to hangzhou!!!
III. Start nginx and MySQL pods, using HPA to scale horizontally when CPU usage is high; stress-test with ab.
Deploying a MySQL pod on k8s:
1. Write the YAML file, containing the Deployment and the Service
[root@master ~]# mkdir /mysql
[root@master ~]# cd /mysql/
[root@master mysql]# vim mysql.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: mysql
  name: mysql
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - image: mysql:latest
        name: mysql
        imagePullPolicy: IfNotPresent
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: "123456"  # the MySQL root password
        ports:
        - containerPort: 3306
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: svc-mysql
  name: svc-mysql
spec:
  selector:
    app: mysql
  type: NodePort
  ports:
  - port: 3306
    protocol: TCP
    targetPort: 3306
    nodePort: 30007
2. Deploy it
[root@master mysql]# kubectl apply -f mysql.yaml
deployment.apps/mysql created
service/svc-mysql created
[root@master mysql]# kubectl get svc
NAME ? ? ? ? TYPE ? ? ? ?CLUSTER-IP ? ? ?EXTERNAL-IP ? PORT(S) ? ? ? ? ?AGE
kubernetes ? ClusterIP ? 10.96.0.1 ? ? ? <none> ? ? ? ?443/TCP ? ? ? ? ?23h
php-apache ? ClusterIP ? 10.96.134.145 ? <none> ? ? ? ?80/TCP ? ? ? ? ? 21h
svc-mysql ? ?NodePort ? ?10.109.190.20 ? <none> ? ? ? ?3306:30007/TCP ? 9s
[root@master mysql]# kubectl get pod
NAME ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ?READY ? STATUS ? ? ? ? ? ? ?RESTARTS ? ? ?AGE
mysql-597ff9595d-tzqzl ? ? ? ? ? ? ?0/1 ? ? ContainerCreating ? 0 ? ? ? ? ? ? 27s
nginx-deployment-794d8c5666-dsxkq ? 1/1 ? ? Running ? ? ? ? ? ? 1 (15m ago) ? 22h
nginx-deployment-794d8c5666-fsctm ? 1/1 ? ? Running ? ? ? ? ? ? 1 (15m ago) ? 22h
nginx-deployment-794d8c5666-spkzs ? 1/1 ? ? Running ? ? ? ? ? ? 1 (15m ago) ? 22h
php-apache-7b9f758896-2q44p ? ? ? ? 1/1 ? ? Running ? ? ? ? ? ? 1 (15m ago) ? 21h
[root@master mysql]# kubectl exec -it mysql-597ff9595d-tzqzl -- bash
root@mysql-597ff9595d-tzqzl:/# mysql -uroot -p123456    # log in to MySQL inside the container
mysql: [Warning] Using a password on the command line interface can be insecure.
Welcome to the MySQL monitor. ?Commands end with ; or \g.
Your MySQL connection id is 8
Server version: 8.0.27 MySQL Community Server - GPL
Copyright (c) 2000, 2021, Oracle and/or its affiliates.
Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql>
Installing metrics-server
Download the manifest
wget https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
     args:
# Add the two arguments below
        - --kubelet-insecure-tls
        - --kubelet-preferred-address-types=InternalDNS,InternalIP,ExternalDNS,ExternalIP,Hostname
# Replace the image
        image: registry.aliyuncs.com/google_containers/metrics-server:v0.6.0
        imagePullPolicy: IfNotPresent

Deploy it:
kubectl apply -f components.yaml
[root@master ~]# kubectl get pod -n kube-system
NAME ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? READY ? STATUS ? ?RESTARTS ? AGE
calico-kube-controllers-6949477b58-tbkl8 ? 1/1 ? ? Running ? 1 ? ? ? ? ?7h10m
calico-node-4t8kx ? ? ? ? ? ? ? ? ? ? ? ? ?1/1 ? ? Running ? 1 ? ? ? ? ?7h10m
calico-node-6lbdw ? ? ? ? ? ? ? ? ? ? ? ? ?1/1 ? ? Running ? 1 ? ? ? ? ?7h10m
calico-node-p6ghl ? ? ? ? ? ? ? ? ? ? ? ? ?1/1 ? ? Running ? 1 ? ? ? ? ?7h10m
coredns-7f89b7bc75-dxc9v ? ? ? ? ? ? ? ? ? 1/1 ? ? Running ? 1 ? ? ? ? ?7h15m
coredns-7f89b7bc75-kw7ph ? ? ? ? ? ? ? ? ? 1/1 ? ? Running ? 1 ? ? ? ? ?7h15m
etcd-master ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ?1/1 ? ? Running ? 1 ? ? ? ? ?7h15m
kube-apiserver-master ? ? ? ? ? ? ? ? ? ? ?1/1 ? ? Running ? 2 ? ? ? ? ?7h15m
kube-controller-manager-master ? ? ? ? ? ? 1/1 ? ? Running ? 1 ? ? ? ? ?7h15m
kube-proxy-87ptg ? ? ? ? ? ? ? ? ? ? ? ? ? 1/1 ? ? Running ? 1 ? ? ? ? ?7h15m
kube-proxy-8gbsd ? ? ? ? ? ? ? ? ? ? ? ? ? 1/1 ? ? Running ? 1 ? ? ? ? ?7h15m
kube-proxy-x4fbj ? ? ? ? ? ? ? ? ? ? ? ? ? 1/1 ? ? Running ? 1 ? ? ? ? ?7h15m
kube-scheduler-master ? ? ? ? ? ? ? ? ? ? ?1/1 ? ? Running ? 1 ? ? ? ? ?7h15m
metrics-server-7787b94d94-jt9sc ? ? ? ? ? ?1/1 ? ? Running ? 0 ? ? ? ? ?47s
[root@master hpa]# kubectl top nodes
NAME ? ? CPU(cores) ? CPU% ? MEMORY(bytes) ? MEMORY% ??
master ? 129m ? ? ? ? 6% ? ? 1111Mi ? ? ? ? ?64% ? ? ??
node1 ? ?61m ? ? ? ? ?3% ? ? 608Mi ? ? ? ? ? 35% ? ? ??
node2 ? ?59m ? ? ? ? ?2% ? ? 689Mi ? ? ? ? ? 40% ? ? ??
[root@master hpa]#
Starting an HPA-enabled nginx Deployment and its pods
[root@master hpa]# vim nginx-hpa.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ab-nginx
spec:
  selector:
    matchLabels:
      run: ab-nginx
  template:
    metadata:
      labels:
        run: ab-nginx
    spec:
      #nodeName: node-2   # no longer pinning to a specific node
      containers:
      - name: ab-nginx
        image: nginx
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
        resources:
          limits:
            cpu: 100m
          requests:
            cpu: 50m
---
apiVersion: v1
kind: Service
metadata:
  name: ab-nginx-svc
  labels:
    run: ab-nginx-svc
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 80
    nodePort: 31000
  selector:
    run: ab-nginx
---
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: ab-nginx
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: ab-nginx
  minReplicas: 1
  maxReplicas: 10
  targetCPUUtilizationPercentage: 50
[root@master hpa]#
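The HPA above compares observed CPU utilization (relative to the 50m request) against the 50% target; its scaling rule is desiredReplicas = ceil(currentReplicas × currentUtilization / targetUtilization). A small sketch of that arithmetic (the helper function name is made up for illustration):

```shell
# HPA scaling rule: desired = ceil(current * observedUtil% / targetUtil%)
hpa_desired() {
    awk -v cur="$1" -v util="$2" -v target="$3" 'BEGIN {
        d = cur * util / target
        if (d > int(d)) d = int(d) + 1   # ceil
        print d
    }'
}
hpa_desired 1 120 50   # one replica at 120% of its request -> 3 replicas
hpa_desired 2 50 50    # exactly on target -> stays at 2
```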
Create the HPA-enabled nginx pods
[root@master hpa]# kubectl apply -f nginx-hpa.yaml
deployment.apps/ab-nginx created
service/ab-nginx-svc created
horizontalpodautoscaler.autoscaling/ab-nginx created
[root@master hpa]#
Check the status of the HPA, pods, deployment, and service
[root@master hpa]# kubectl get deploy
NAME ? ? ? READY ? UP-TO-DATE ? AVAILABLE ? AGE
ab-nginx ? 1/1 ? ? 1 ? ? ? ? ? ?1 ? ? ? ? ? 55s
[root@master hpa]# kubectl get hpa
NAME ? ? ? ? REFERENCE ? ? ? ? ? ? ? TARGETS ? MINPODS ? MAXPODS ? REPLICAS ? AGE
ab-nginx ? ? Deployment/ab-nginx ? ? 0%/50% ? ?1 ? ? ? ? 10 ? ? ? ?1 ? ? ? ? ?58s
php-apache ? Deployment/php-apache ? 0%/50% ? ?1 ? ? ? ? 10 ? ? ? ?1 ? ? ? ? ?20d
[root@master hpa]# kubectl get pod
NAME ? ? ? ? ? ? ? ? ? ? ? ?READY ? STATUS ? ?RESTARTS ? AGE
ab-nginx-5f4c4b9558-xbxjb ? 1/1 ? ? Running ? 0 ? ? ? ? ?63s
configmap-demo-pod ? ? ? ? ?1/1 ? ? Running ? 31 ? ? ? ? 2d23h
[root@master hpa]#
[root@master hpa]# kubectl get svc
NAME ? ? ? ? ? ? ? ?TYPE ? ? ? ?CLUSTER-IP ? ? ? EXTERNAL-IP ? PORT(S) ? ? ? ? ?AGE
ab-nginx-svc ? ? ? ?NodePort ? ?10.107.155.209 ? <none> ? ? ? ?80:31000/TCP ? ? 2m26s
Open port 31000 on the host:
http://192.168.220.100:31000/
to confirm the nginx pod started successfully
Stress-testing from the NFS server with ab
Install httpd-tools, which provides the ab tool
[root@nfs-server ~]# yum install httpd-tools -y
Simulate traffic
[root@nfs-server ~]# ab -n 1000 -c50 http://192.168.220.100:31000/index.html
[root@master hpa]# kubectl get hpa --watch
Increase the concurrency and the total number of requests
[root@nfs-server ~]# ab -n 5000 -c100 http://192.168.220.100:31000/index.html
[root@nfs-server ~]# ab -n 10000 -c200 http://192.168.220.100:31000/index.html
[root@nfs-server ~]# ab -n 20000 -c400 http://192.168.220.100:31000/index.html
[root@master hpa]# kubectl describe pod ab-nginx-5f4c4b9558-shtt5
Warning  OutOfmemory  98s   kubelet  Node didn't have enough resource: memory, requested: 268435456, used: 3584032768, capacity: 3848888320
[root@master hpa]#
The cause: node-2 no longer had enough memory to start new pods
IV. Use Ingress to load-balance web traffic by domain name and by URL
Step 1: Install the ingress controller
This uses the older ingress controller, v1.1
    Preparation: upload the images and YAML files below to a Linux machine inside the k8s cluster, ideally the master, then scp them to the node machines
[root@master .kube]# mkdir /ingress
[root@master .kube]# cd /ingress/
[root@master ingress]#
ingress-controller-deploy.yaml   the manifest used to deploy the ingress controller
ingress-nginx-controllerv1.1.0.tar.gz    the ingress-nginx-controller image
kube-webhook-certgen-v1.1.0.tar.gz       the kube-webhook-certgen image
# The kube-webhook-certgen image generates the certificates used by webhooks in the cluster,
# ensuring secure, authenticated communication for webhook services
ingress.yaml   the Ingress definition
nginx-svc-3.yaml    creates service 3 and its pods
nginx-svc-4.yaml    creates service 4 and its pods
1. scp the images to every node machine
[root@master ingress]# scp ingress-nginx-controllerv1.1.0.tar.gz node1:/root
ingress-nginx-controllerv1.1.0.tar.gz ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? 100% ?276MB ?42.7MB/s ? 00:06 ? ?
[root@master ingress]# scp ingress-nginx-controllerv1.1.0.tar.gz node2:/root
ingress-nginx-controllerv1.1.0.tar.gz ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? 100% ?276MB ?45.7MB/s ? 00:06 ? ?
[root@master ingress]# scp kube-webhook-certgen-v1.1.0.tar.gz node2:/root
kube-webhook-certgen-v1.1.0.tar.gz ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ?100% ? 47MB ?40.5MB/s ? 00:01 ? ?
[root@master ingress]# scp kube-webhook-certgen-v1.1.0.tar.gz node1:/root
kube-webhook-certgen-v1.1.0.tar.gz ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ?100% ? 47MB ?47.1MB/s ? 00:00 ? ?
[root@master ingress]#
2. Load the images; run this on every node machine (node-1 and node-2)
[root@k8smaster ingress]# docker load -i ingress-nginx-controllerv1.1.0.tar.gz?
[root@k8smaster ingress]# docker load -i kube-webhook-certgen-v1.1.0.tar.gz
[root@k8snode2 ~]# docker images
REPOSITORY ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? TAG ? ? ? ?IMAGE ID ? ? ? CREATED ? ? ? ? SIZE
registry.cn-hangzhou.aliyuncs.com/google_containers/nginx-ingress-controller ? v1.1.0 ? ? ae1a7201ec95 ? 16 months ago ? 285MB
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-webhook-certgen ? ? ? v1.1.1 ? ? c41e9fcadf5a ? 17 months ago ? 47.7MB
[root@k8smaster new]#
3. Start the ingress controller with ingress-controller-deploy.yaml
[root@k8smaster 4-4]# kubectl apply -f ingress-controller-deploy.yaml
namespace/ingress-nginx created
serviceaccount/ingress-nginx created
configmap/ingress-nginx-controller created
clusterrole.rbac.authorization.k8s.io/ingress-nginx created
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx created
role.rbac.authorization.k8s.io/ingress-nginx created
rolebinding.rbac.authorization.k8s.io/ingress-nginx created
service/ingress-nginx-controller-admission created
service/ingress-nginx-controller created
deployment.apps/ingress-nginx-controller created
ingressclass.networking.k8s.io/nginx created
validatingwebhookconfiguration.admissionregistration.k8s.io/ingress-nginx-admission created
serviceaccount/ingress-nginx-admission created
clusterrole.rbac.authorization.k8s.io/ingress-nginx-admission created
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx-admission created
role.rbac.authorization.k8s.io/ingress-nginx-admission created
rolebinding.rbac.authorization.k8s.io/ingress-nginx-admission created
job.batch/ingress-nginx-admission-create created
job.batch/ingress-nginx-admission-patch created
[root@k8smaster 4-4]#
Check the namespace created for the ingress controller
[root@k8smaster 4-4]# kubectl get ns
NAME ? ? ? ? ? ? ?STATUS ? AGE
default ? ? ? ? ? Active ? 11d
ingress-nginx ? ? Active ? 52s
kube-node-lease ? Active ? 11d
kube-public ? ? ? Active ? 11d
kube-system ? ? ? Active ? 11d
[root@k8smaster 4-4]#
Check the ingress controller's services
[root@k8smaster 4-4]# kubectl get svc -n ingress-nginx
NAME ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? TYPE ? ? ? ?CLUSTER-IP ? ? EXTERNAL-IP ? PORT(S) ? ? ? ? ? ? ? ? ? ? ?AGE
ingress-nginx-controller ? ? ? ? ? ? NodePort ? ?10.99.160.10 ? <none> ? ? ? ?80:30092/TCP,443:30263/TCP ? 91s
ingress-nginx-controller-admission ? ClusterIP ? 10.99.138.23 ? <none> ? ? ? ?443/TCP ? ? ? ? ? ? ? ? ? ? ?91s
[root@k8smaster 4-4]#
Check the ingress controller's pods
[root@k8smaster 4-4]# kubectl get pod -n ingress-nginx
NAME ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ?READY ? STATUS ? ? ?RESTARTS ? AGE
ingress-nginx-admission-create-k69t2 ? ? ? ?0/1 ? ? Completed ? 0 ? ? ? ? ?119s
ingress-nginx-admission-patch-zsrk8 ? ? ? ? 0/1 ? ? Completed ? 1 ? ? ? ? ?119s
ingress-nginx-controller-6c8ffbbfcf-bt94p ? 1/1 ? ? Running ? ? 0 ? ? ? ? ?119s
ingress-nginx-controller-6c8ffbbfcf-d49kx ? 1/1 ? ? Running ? ? 0 ? ? ? ? ?119s
[root@k8smaster 4-4]#
Step 2: Create pods and expose them as services
[root@master url]# cat sc-nginx-svc-3.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sc-nginx-deploy-3
  labels:
    app: sc-nginx-feng-3
spec:
  replicas: 3
  selector:
    matchLabels:
      app: sc-nginx-feng-3
  template:
    metadata:
      labels:
        app: sc-nginx-feng-3
    spec:
      containers:
      - name: sc-nginx-feng-3
        image: nginx
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: sc-nginx-svc-3
  labels:
    app: sc-nginx-svc-3
spec:
  selector:
    app: sc-nginx-feng-3
  ports:
  - name: name-of-service-port
    protocol: TCP
    port: 80
    targetPort: 80
[root@master url]#
[root@master url]# cat sc-nginx-svc-4.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sc-nginx-deploy-4
  labels:
    app: sc-nginx-feng-4
spec:
  replicas: 3
  selector:
    matchLabels:
      app: sc-nginx-feng-4
  template:
    metadata:
      labels:
        app: sc-nginx-feng-4
    spec:
      containers:
      - name: sc-nginx-feng-4
        image: nginx
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: sc-nginx-svc-4
  labels:
    app: sc-nginx-svc-4
spec:
  selector:
    app: sc-nginx-feng-4
  ports:
  - name: name-of-service-port
    protocol: TCP
    port: 80
    targetPort: 80
[root@master url]#
[root@master lb-url]# kubectl apply -f sc-nginx-svc-3.yaml
deployment.apps/sc-nginx-deploy-3 created
service/sc-nginx-svc-3 created
[root@master lb-url]# kubectl apply -f sc-nginx-svc-4.yaml
deployment.apps/sc-nginx-deploy-4 created
service/sc-nginx-svc-4 created
Step 3: Enable the Ingress, linking the ingress controller and the services
[root@master url]# cat ingress-url.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: simple-url-lb-example
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  ingressClassName: nginx
  rules:
  - host: www.guan.com
    http:
      paths:
      - path: /tian1
        pathType: Prefix
        backend:
          service:
            name: sc-nginx-svc-3  # must match the Service name above
            port:
              number: 80
      - path: /tian2
        pathType: Prefix
        backend:
          service:
            name: sc-nginx-svc-4
            port:
              number: 80
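With pathType: Prefix, matching is done per "/"-separated path element, so /tian1 matches /tian1 and /tian1/index.html but not /tian10. A sketch of that rule (the helper function is made up for illustration, not part of any k8s tooling):

```shell
# Emulate Ingress pathType: Prefix, which matches on "/"-separated elements.
matches_prefix() {   # matches_prefix <rule-path> <request-path>
    case "$2" in
        "$1"|"$1"/*) echo yes ;;
        *)           echo no  ;;
    esac
}
matches_prefix /tian1 /tian1/index.html   # yes
matches_prefix /tian1 /tian10             # no: element boundary is respected
```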
[root@master ingress]# kubectl apply -f ingress-url.yaml
ingress.networking.k8s.io/simple-fanout-example created
[root@master ingress]# kubectl get ingress
NAME ? ? ? ? ? ? ? ? ? ?CLASS ? HOSTS ? ? ? ? ?ADDRESS ? ? ? ? ? ? ? ? ? ? ? ? ? PORTS ? AGE
simple-fanout-example ? nginx ? www.guan.com ? 192.168.220.101,192.168.220.102 ? 80 ? ? ?29s
Testing from the NFS server (requires adding name-resolution records to /etc/hosts)
[root@nfs-server ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.220.101 www.guan.com  # both node IPs are added
192.168.220.102 www.guan.com
The first test could not find the page; is the problem in the ingress controller or in the backend pods?
Go into a pod behind service 3 and one behind service 4, and create the tian1 and tian2 directories with an index.html page in each
[root@master ingress]# kubectl exec -it sc-nginx-deploy-3-5c4b975ffc-6x4kc -- bash
root@sc-nginx-deploy-3-5c4b975ffc-6x4kc:/# cd /usr/share/nginx/html/
root@sc-nginx-deploy-3-5c4b975ffc-6x4kc:/usr/share/nginx/html# ls
50x.html ?index.html
root@sc-nginx-deploy-3-5c4b975ffc-6x4kc:/usr/share/nginx/html# mkdir tian1
root@sc-nginx-deploy-3-5c4b975ffc-6x4kc:/usr/share/nginx/html# echo "tiantianming" > tian1/index.html
root@sc-nginx-deploy-3-5c4b975ffc-6x4kc:/usr/share/nginx/html# ls
50x.html ?index.html ?tian1
Do the same in a pod behind service 4, creating the tian2 directory
[root@master ingress]# kubectl exec -it sc-nginx-deploy-4-7d4b5c487f-2sdvf -- bash
root@sc-nginx-deploy-4-7d4b5c487f-2sdvf:/# cd /usr/share/nginx/html/
root@sc-nginx-deploy-4-7d4b5c487f-2sdvf:/usr/share/nginx/html# ls
50x.html ?index.html
root@sc-nginx-deploy-4-7d4b5c487f-2sdvf:/usr/share/nginx/html# mkdir tian2
root@sc-nginx-deploy-4-7d4b5c487f-2sdvf:/usr/share/nginx/html# echo "tiantianming2222" > tian2/index.html
Test again from the NFS server, several times: the IPVS scheduler behind the Service round-robins across pods, so ideally the matching directory should be created in every pod
curl www.guan.com/tian1/index.html
curl www.guan.com/tian2/index.html
Results:
[root@nfs ~]# curl www.guan.com/tian1/index.html
tiantianming
[root@nfs ~]# curl www.guan.com/tian2/index.html
tiantianming2222
V. Monitor the web pods with probes so that problems trigger an immediate restart, improving pod reliability.
        livenessProbe:
          exec:
            command:
            - ls
            - /tmp
          initialDelaySeconds: 5
          periodSeconds: 5

        readinessProbe:
          exec:
            command:
            - ls
            - /tmp
          initialDelaySeconds: 5
          periodSeconds: 5

        startupProbe:
          httpGet:
            path: /
            port: 8000
          failureThreshold: 30
          periodSeconds: 10

[root@k8smaster probe]# vim my-web.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: myweb
  name: myweb
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myweb
  template:
    metadata:
      labels:
        app: myweb
    spec:
      containers:
      - name: myweb
        image: nginx
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 8000
        resources:
          limits:
            cpu: 300m
          requests:
            cpu: 100m
        livenessProbe:
          exec:
            command:
            - ls
            - /tmp
          initialDelaySeconds: 5
          periodSeconds: 5
        readinessProbe:
          exec:
            command:
            - ls
            - /tmp
          initialDelaySeconds: 5
          periodSeconds: 5
        startupProbe:
          httpGet:
            path: /
            port: 8000
          failureThreshold: 30
          periodSeconds: 10
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: myweb-svc
  name: myweb-svc
spec:
  selector:
    app: myweb
  type: NodePort
  ports:
  - port: 8000
    protocol: TCP
    targetPort: 8000
    nodePort: 30001

[root@k8smaster probe]# kubectl apply -f my-web.yaml
deployment.apps/myweb created
service/myweb-svc created
?
[root@master probe]# kubectl get pod | grep -i myweb
myweb-7df8f89d75-2c9v6 ? ? ? ? ? ? ? 0/1 ? ? Running ? 0 ? ? ? ? ? ? ?69s
myweb-7df8f89d75-cf82r ? ? ? ? ? ? ? 0/1 ? ? Running ? 0 ? ? ? ? ? ? ?69s
myweb-7df8f89d75-fmbpn ? ? ? ? ? ? ? 0/1 ? ? Running ? 0 ? ? ? ? ? ? ?69s
?
[root@k8smaster probe]# kubectl describe pod myweb-6b89fb9c7b-4cdh9
...
    Liveness:   exec [ls /tmp] delay=5s timeout=1s period=5s #success=1 #failure=3
    Readiness:  exec [ls /tmp] delay=5s timeout=1s period=5s #success=1 #failure=3
    Startup:    http-get http://:8000/ delay=0s timeout=1s period=10s #success=1 #failure=30
...
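The describe output confirms the timing budget: the startupProbe allows failureThreshold × periodSeconds = 30 × 10 = 300 s for the app to come up before the container is killed, while liveness and readiness each tolerate 3 × 5 = 15 s of consecutive failures. A sketch of that arithmetic (the helper name is made up for illustration):

```shell
# Worst-case time a probe tolerates before acting:
probe_budget() {   # probe_budget <failureThreshold> <periodSeconds>
    echo $(( $1 * $2 ))
}
probe_budget 30 10   # startupProbe: 300 s to become healthy
probe_budget 3 5     # liveness/readiness: 15 s of failures before restart/unready
```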
VI. Build a CI/CD environment: install and deploy Jenkins on the k8s master, and a Harbor registry on another machine.
Installing Jenkins:
# Deploy Jenkins into k8s
# 1. Install git
[root@k8smaster jenkins]# yum install git -y

# 2. Download the deployment manifests
[root@k8smaster jenkins]# git clone https://github.com/scriptcamp/kubernetes-jenkins
[root@k8smaster jenkins]# ls
kubernetes-jenkins
[root@k8smaster jenkins]# cd kubernetes-jenkins/
[root@k8smaster kubernetes-jenkins]# ls
deployment.yaml ?namespace.yaml ?README.md ?serviceAccount.yaml ?service.yaml ?volume.yaml
?
# 3.創(chuàng)建命名空間
[root@k8smaster kubernetes-jenkins]# cat namespace.yaml?
apiVersion: v1
kind: Namespace
metadata:
  name: devops-tools
[root@k8smaster kubernetes-jenkins]# kubectl apply -f namespace.yaml
namespace/devops-tools created

[root@k8smaster kubernetes-jenkins]# kubectl get ns
NAME              STATUS   AGE
default           Active   22h
devops-tools      Active   19s
ingress-nginx     Active   139m
kube-node-lease   Active   22h
kube-public       Active   22h
kube-system       Active   22h
# 4. Create the service account, cluster role, and binding
[root@k8smaster kubernetes-jenkins]# cat serviceAccount.yaml
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: jenkins-admin
rules:
  - apiGroups: [""]
    resources: ["*"]
    verbs: ["*"]

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: jenkins-admin
  namespace: devops-tools

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: jenkins-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: jenkins-admin
subjects:
- kind: ServiceAccount
  name: jenkins-admin
  namespace: devops-tools

[root@k8smaster kubernetes-jenkins]# kubectl apply -f serviceAccount.yaml
clusterrole.rbac.authorization.k8s.io/jenkins-admin created
serviceaccount/jenkins-admin created
clusterrolebinding.rbac.authorization.k8s.io/jenkins-admin created
?
# 5. Create the storage (StorageClass, PV, PVC) for the Jenkins data
[root@k8smaster kubernetes-jenkins]# cat volume.yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer

---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: jenkins-pv-volume
  labels:
    type: local
spec:
  storageClassName: local-storage
  claimRef:
    name: jenkins-pv-claim
    namespace: devops-tools
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  local:
    path: /mnt
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - node1   # change this to the hostname of one of your worker nodes

---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: jenkins-pv-claim
  namespace: devops-tools
spec:
  storageClassName: local-storage
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi
?
[root@k8smaster kubernetes-jenkins]# kubectl apply -f volume.yaml
storageclass.storage.k8s.io/local-storage created
persistentvolume/jenkins-pv-volume created
persistentvolumeclaim/jenkins-pv-claim created

[root@k8smaster kubernetes-jenkins]# kubectl get pv
NAME                CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                           STORAGECLASS    REASON   AGE
jenkins-pv-volume   10Gi       RWO            Retain           Bound    devops-tools/jenkins-pv-claim   local-storage            33s
pv-web              10Gi       RWX            Retain           Bound    default/pvc-web                 nfs                      21h

[root@k8smaster kubernetes-jenkins]# kubectl describe pv jenkins-pv-volume
Name:              jenkins-pv-volume
Labels:            type=local
Annotations:       <none>
Finalizers:        [kubernetes.io/pv-protection]
StorageClass:      local-storage
Status:            Bound
Claim:             devops-tools/jenkins-pv-claim
Reclaim Policy:    Retain
Access Modes:      RWO
VolumeMode:        Filesystem
Capacity:          10Gi
Node Affinity:
  Required Terms:
    Term 0:        kubernetes.io/hostname in [node1]
Message:
Source:
    Type:  LocalVolume (a persistent volume backed by local storage on a node)
    Path:  /mnt
Events:    <none>
?
# 6. Deploy Jenkins
[root@k8smaster kubernetes-jenkins]# cat deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jenkins
  namespace: devops-tools
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jenkins-server
  template:
    metadata:
      labels:
        app: jenkins-server
    spec:
      securityContext:
        fsGroup: 1000
        runAsUser: 1000
      serviceAccountName: jenkins-admin
      containers:
        - name: jenkins
          image: jenkins/jenkins:lts
          imagePullPolicy: IfNotPresent
          resources:
            limits:
              memory: "2Gi"
              cpu: "1000m"
            requests:
              memory: "500Mi"
              cpu: "500m"
          ports:
            - name: httpport
              containerPort: 8080
            - name: jnlpport
              containerPort: 50000
          livenessProbe:
            httpGet:
              path: "/login"
              port: 8080
            initialDelaySeconds: 90
            periodSeconds: 10
            timeoutSeconds: 5
            failureThreshold: 5
          readinessProbe:
            httpGet:
              path: "/login"
              port: 8080
            initialDelaySeconds: 60
            periodSeconds: 10
            timeoutSeconds: 5
            failureThreshold: 3
          volumeMounts:
            - name: jenkins-data
              mountPath: /var/jenkins_home
      volumes:
        - name: jenkins-data
          persistentVolumeClaim:
            claimName: jenkins-pv-claim
?
[root@k8smaster kubernetes-jenkins]# kubectl apply -f deployment.yaml
deployment.apps/jenkins created

[root@k8smaster kubernetes-jenkins]# kubectl get deploy -n devops-tools
NAME      READY   UP-TO-DATE   AVAILABLE   AGE
jenkins   1/1     1            1           5m36s

[root@k8smaster kubernetes-jenkins]# kubectl get pod -n devops-tools
NAME                       READY   STATUS    RESTARTS   AGE
jenkins-7fdc8dd5fd-bg66q   1/1     Running   0          19s
# 7. Create the Service that exposes the Jenkins pod
[root@k8smaster kubernetes-jenkins]# cat service.yaml
apiVersion: v1
kind: Service
metadata:
  name: jenkins-service
  namespace: devops-tools
  annotations:
      prometheus.io/scrape: 'true'
      prometheus.io/path:   /
      prometheus.io/port:   '8080'
spec:
  selector:
    app: jenkins-server
  type: NodePort
  ports:
    - port: 8080
      targetPort: 8080
      nodePort: 32000
?
[root@k8smaster kubernetes-jenkins]# kubectl apply -f service.yaml
service/jenkins-service created

[root@k8smaster kubernetes-jenkins]# kubectl get svc -n devops-tools
NAME              TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
jenkins-service   NodePort   10.104.76.252   <none>        8080:32000/TCP   24s

# 8. Browse to Jenkins from the Windows machine: node IP + NodePort
http://192.168.220.100:32000/

# 9. Get the initial admin password from inside the pod
[root@master kubernetes-jenkins]# kubectl exec -it jenkins-b96f7764f-qkzvd -n devops-tools -- bash
jenkins@jenkins-b96f7764f-qkzvd:/$ cat /var/jenkins_home/secrets/initialAdminPassword
557fc27bdf4149bb824b3c6e21a7f823
# Then log in and change the password
Deploy Harbor:
# Prerequisites: docker and docker compose must already be installed
# 1. Configure the Aliyun repo source
yum install -y yum-utils

yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

# 2. Install docker
yum install docker-ce-20.10.6 -y

# Start docker and enable it at boot
systemctl start docker && systemctl enable docker.service

# 3. Check the docker and docker compose versions
[root@harbor ~]# docker version
[root@harbor ~]# docker compose version
# 5. Install Harbor: download the offline installer from the Harbor website or GitHub releases page
wget https://github.com/goharbor/harbor/releases/download/v2.8.3/harbor-offline-installer-v2.8.3.tgz
[root@localhost ~]# ls
anaconda-ks.cfg  harbor-offline-installer-v2.8.3.tgz

# 6. Unpack it
[root@localhost ~]# tar xf harbor-offline-installer-v2.8.3.tgz
[root@harbor ~]# ls
anaconda-ks.cfg  harbor  harbor-offline-installer-v2.8.3.tgz
[root@harbor ~]# cd harbor
[root@harbor harbor]# ls
common.sh             harbor.yml.tmpl  LICENSE
harbor.v2.8.3.tar.gz  install.sh       prepare
# 7. Edit the configuration file
[root@harbor harbor]# vim harbor.yml.tmpl
# Configuration file of Harbor
# The IP address or hostname to access admin UI and registry service.
# DO NOT use localhost or 127.0.0.1, because Harbor needs to be accessed by external clients.
hostname: 192.168.220.104  # change to this host's IP address
# http related config
http:
  # port for http, default is 80. If https enabled, this port will redirect to https port
  port: 5001  # the port can be changed
# https related config
#https:
  # https port for harbor, default is 443
  #port: 443
  # The path of cert and key files for nginx
  #certificate: /your/certificate/path
  #private_key: /your/private/key/path
# # Uncomment following will enable tls communication between all harbor components
# internal_tls:
#   # set enabled to true means internal tls is enabled
#   enabled: true
#   # put your cert and key files on dir
#   dir: /etc/harbor/tls/internal

# Uncomment external_url if you want to enable external proxy
# And when it enabled the hostname will no longer used
# external_url: https://reg.mydomain.com:8433

# The initial password of Harbor admin
# It only works in first time to install harbor
# Remember Change the admin password from UI after launching Harbor.
harbor_admin_password: Harbor12345  # login password, can be changed

# Harbor DB configuration
database:
  # The password for the root user of Harbor DB. Change this before any production use.
  password: root123  # password of the root user of the Harbor database
  # The maximum number of connections in the idle connection pool. If it <=0, no idle connections are retained.
  max_idle_conns: 100  # maximum number of connections in the idle pool
  # The maximum number of open connections to the database. If it <= 0, then there is no limit on the number of open connections.
  # Note: the default number of connections is 1024 for postgres of harbor.
  max_open_conns: 900  # maximum number of open connections to the database
# 8. Run the install script
[root@harbor harbor]# ./install.sh
Check that the Harbor containers are running:
[root@harbor harbor]# docker compose ps | grep harbor
harbor-core         goharbor/harbor-core:v2.8.3          "/harbor/entrypoint.…"   core          About a minute ago   Up About a minute (healthy)
harbor-db           goharbor/harbor-db:v2.8.3            "/docker-entrypoint.…"   postgresql    About a minute ago   Up About a minute (healthy)
harbor-jobservice   goharbor/harbor-jobservice:v2.8.3    "/harbor/entrypoint.…"   jobservice    About a minute ago   Up About a minute (healthy)
harbor-log          goharbor/harbor-log:v2.8.3           "/bin/sh -c /usr/loc…"   log           About a minute ago   Up About a minute (healthy)   127.0.0.1:1514->10514/tcp
harbor-portal       goharbor/harbor-portal:v2.8.3        "nginx -g 'daemon of…"   portal        About a minute ago   Up About a minute (healthy)
nginx               goharbor/nginx-photon:v2.8.3         "nginx -g 'daemon of…"   proxy         About a minute ago   Up About a minute (healthy)   0.0.0.0:5001->8080/tcp, :::5001->8080/tcp
redis               goharbor/redis-photon:v2.8.3         "redis-server /etc/r…"   redis         About a minute ago   Up About a minute (healthy)
registry            goharbor/registry-photon:v2.8.3      "/home/harbor/entryp…"   registry      About a minute ago   Up About a minute (healthy)
registryctl         goharbor/harbor-registryctl:v2.8.3   "/home/harbor/start.…"   registryctl   About a minute ago   Up About a minute (healthy)
# 9. Test the login
http://192.168.220.104:5001/
Possible problem: prepare base dir is set to /root/harbor
no config file: /root/harbor/harbor.yml
Fix: the installer looks for harbor.yml, so rename the template first:
[root@harbor harbor]# mv harbor.yml.tmpl harbor.yml

# Account:  admin
# Password: Harbor12345
After logging in:
Create a project named k8s-harbor in Harbor.
Create a user guan with password Gxx123456.
Grant the user guan access to the k8s-harbor project with the project-admin role.
# 10. Make the whole pod cluster use this Harbor registry
On the master machine:
[root@master ~]# vim /etc/docker/daemon.json
{
  "registry-mirrors": ["https://ruk1gp3w.mirror.aliyuncs.com"],
  "insecure-registries" : ["192.168.220.104:5001"]
}
Then restart docker:
[root@master ~]# systemctl daemon-reload
[root@master ~]# systemctl restart docker
[root@master ~]# journalctl -xe
If you see "Failed to start Docker Application Container Engine.", the
daemon.json file is malformed (usually a JSON syntax error).
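A quick way to catch this before restarting docker is to run the file through a JSON parser. The helper below is a hypothetical sketch (the name `check_json` is mine; any JSON-aware tool works, python3 is used because it is usually present on these hosts):

```shell
# check_json: print "OK: <file>" if the file parses as JSON,
# otherwise python prints the exact position of the syntax error
check_json() { python3 -m json.tool "$1" >/dev/null && echo "OK: $1"; }

# demo against a known-good daemon.json written to a temp file
tmp=$(mktemp)
printf '{\n "registry-mirrors": ["https://ruk1gp3w.mirror.aliyuncs.com"],\n "insecure-registries": ["192.168.220.104:5001"]\n}\n' > "$tmp"
check_json "$tmp"
rm -f "$tmp"
```

On the real machine you would run `check_json /etc/docker/daemon.json` before `systemctl restart docker`.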
Test pushing to and pulling from Harbor
On the Harbor host, make sure the Harbor containers are up:
[root@harbor harbor]# cd /root
[root@harbor ~]# cd harbor
[root@harbor harbor]# docker compose up -d
On a cluster machine, pull an image (or build one from a Dockerfile) and retag it for the registry:
[root@master ~]# docker tag nginx:latest 192.168.220.104:5001/k8s-harbor/nginx:latest
[root@master ~]# docker images
192.168.220.104:5001/k8s-harbor/nginx   latest   605c77e624dd   2 years ago   141MB
...
Push from this machine:
First log in to the private registry as the user guan (password Gxx123456):
[root@master ~]# docker login 192.168.220.104:5001
Username: guan
Password:
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded
Push the image to the registry:
[root@master ~]# docker push 192.168.220.104:5001/k8s-harbor/nginx:latest
Verify in the Harbor web UI that the image arrived.
Pull the pushed image on the nfs machine:
# 1. Configure the Aliyun repo source
yum install -y yum-utils

yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

# 2. Install docker
yum install docker-ce-20.10.6 -y

# Start docker and enable it at boot
systemctl start docker && systemctl enable docker.service
[root@nfs ~]# vim /etc/docker/daemon.json
{
  "registry-mirrors": ["https://ruk1gp3w.mirror.aliyuncs.com"],
  "insecure-registries" : ["192.168.220.104:5001"]
}
Then restart docker:
[root@nfs ~]# systemctl daemon-reload
[root@nfs ~]# systemctl restart docker
[root@nfs ~]# docker login 192.168.220.104:5001
Username: guan
Password:
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded
The pull succeeds:
[root@nfs ~]# docker pull 192.168.220.104:5001/k8s-harbor/nginx:latest
latest: Pulling from k8s-harbor/nginx
a2abf6c4d29d: Pull complete
a9edb18cadd1: Pull complete
589b7251471a: Pull complete
186b1aaa4aa6: Pull complete
b4df32aa5a72: Pull complete
a0bcbecc962e: Pull complete
Digest: sha256:ee89b00528ff4f02f2405e4ee221743ebc3f8e8dd0bfd5c4c20a2fa2aaa7ede3
Status: Downloaded newer image for 192.168.220.104:5001/k8s-harbor/nginx:latest
192.168.220.104:5001/k8s-harbor/nginx:latest
[root@nfs ~]# docker images
REPOSITORY                              TAG       IMAGE ID       CREATED       SIZE
192.168.220.104:5001/k8s-harbor/nginx   latest    605c77e624dd   2 years ago   141MB
VII. Use the dashboard to oversee the whole cluster's resources
[root@master ~]# mkdir dashboard
[root@master ~]# cd dashboard/
[root@master dashboard]# wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.5.0/aio/deploy/recommended.yaml
[root@master dashboard]# ls
recommended.yaml
Edit the manifest: the Service must be reachable from outside the cluster, and the login user needs an RBAC binding to see cluster resources.
[root@master dashboard]# vim recommended.yaml
---
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort   # expose the service on a node port
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30088   # fixed host port
  selector:
    k8s-app: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: serviceaccount-cluster-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: system:serviceaccount:kubernetes-dashboard:kubernetes-dashboard
Check that the dashboard pods are running:
[root@master dashboard]# kubectl get pod --all-namespaces | grep dashboard
kubernetes-dashboard   dashboard-metrics-scraper-799d786dbf-lltbf   1/1     Running     0              39s
kubernetes-dashboard   kubernetes-dashboard-546cbc58cd-p4xlr        1/1     Running     0              39s
Check that the services were created:
[root@master dashboard]# kubectl get svc --all-namespaces | grep dash
kubernetes-dashboard   dashboard-metrics-scraper            ClusterIP   10.98.46.11      <none>        8000/TCP                     66s
kubernetes-dashboard   kubernetes-dashboard                 NodePort    10.109.239.147   <none>        443:30088/TCP                66s
Open it in a browser over https:
https://192.168.220.100:30088/
Click through the certificate warning:
https://192.168.220.100:30088/#/login
The login page asks for a token.
Get the name of the dashboard's service-account secret:
[root@master dashboard]# kubectl get secret -n kubernetes-dashboard | grep dashboard-token
kubernetes-dashboard-token-pnt2v   kubernetes.io/service-account-token   3      6m6s
Get the token from the secret:
[root@master dashboard]# kubectl describe secret kubernetes-dashboard-token-pnt2v -n kubernetes-dashboard
token: ? ? ?eyJhbGciOiJSUzI1NiIsImtpZCI6ImhvV1g5cTQ1Q2F1N1A5RGxCQnhrTkVkeFNmczgtRG5WNlFMNWJ4SzcyaTQifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZC10b2tlbi1wbnQydiIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6IjBkYmYwMGFkLWU1YTktNDc0Ny05YWZiLWM0MDk2N2RmY2I1MCIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlcm5ldGVzLWRhc2hib2FyZDprdWJlcm5ldGVzLWRhc2hib2FyZCJ9.l1U9GljbDpv3OpQcqijj10YaqymqLC18pY1Ut6-UPzNiY8sSyKvpnC9_6aCMFLz-mXV2x17TWmvOME5mK8DO0pV8QH3JJcsS-XCkn3RBxHygZfVYqcpSnoPibCA6QhaCCchSYDQ9a6fmriIztySgsXCtNV4Rfow49l5pkafTYLllV3dXp5SbxsL3TL46IOXw2uQ0iG0JD5QYj4pMfe_rZiwDQNwriaVqLb84K88AglDh3uniPg8XuYWs_nDIy3pztdwQOjWFLCy8NsQ1TGftZg6HRXD9pon2W8QeUj3vhKvA1B8L1MdzSfGpLPIojjHVLHB9C6aCnI3HqrjjvmrKjA
Paste this token into the login page and you can browse the cluster's resources.
VIII. Install and deploy Prometheus + Grafana:
Install Prometheus
[root@prometheus ~]# ls
anaconda-ks.cfg
grafana-enterprise-9.1.2-1.x86_64.rpm
mysqld_exporter-0.12.1.linux-amd64 (1).tar.gz
prometheus-2.43.0.linux-amd64.tar.gz
[root@prometheus ~]# mkdir /prom
[root@prometheus ~]# mv prometheus-2.43.0.linux-amd64.tar.gz /prom/
[root@prometheus ~]# cd /prom
[root@prometheus prom]# ls
prometheus-2.43.0.linux-amd64.tar.gz
[root@prometheus prom]# tar xf prometheus-2.43.0.linux-amd64.tar.gz
[root@prometheus prom]# ls
prometheus-2.43.0.linux-amd64
[root@prometheus prom]# mv prometheus-2.43.0.linux-amd64 prometheus
[root@prometheus prom]# ls
prometheus  prometheus-2.43.0.linux-amd64.tar.gz
Add the binary to PATH for this session, and persist it:
[root@prometheus prometheus]# PATH=/prom/prometheus:$PATH
[root@prometheus prometheus]# vim /etc/profile
# append at the end: PATH=/prom/prometheus:$PATH
[root@prometheus prometheus]# nohup prometheus --config.file=/prom/prometheus/prometheus.yml &   # run in the background
[1] 2137
[root@prometheus prometheus]# nohup: ignoring input and appending output to 'nohup.out'
Check the process:
[root@prometheus prometheus]# ps aux | grep prome
root       2137  0.4  2.3 798956 44252 pts/0    Sl   12:38   0:00 prometheus --config.file=/prom/prometheus/prometheus.yml
Check the port:
[root@prometheus prometheus]# netstat -anplut | grep prom
tcp6       0      0 :::9090                 :::*                    LISTEN      2137/prometheus
tcp6       0      0 ::1:48882               ::1:9090                ESTABLISHED 2137/prometheus
tcp6       0      0 ::1:9090                ::1:48882               ESTABLISHED 2137/prometheus
Stop the firewall:
[root@prometheus prometheus]# service firewalld stop
Redirecting to /bin/systemctl stop firewalld.service
[root@prometheus prometheus]# systemctl disable firewalld
Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
Manage Prometheus as a systemd service:
[root@prometheus prometheus]# vim /usr/lib/systemd/system/prometheus.service
[Unit]
Description=prometheus
[Service]
ExecStart=/prom/prometheus/prometheus --config.file=/prom/prometheus/prometheus.yml
ExecReload=/bin/kill -HUP $MAINPID
KillMode=process
Restart=on-failure
[Install]
WantedBy=multi-user.target
[root@prometheus prometheus]# systemctl daemon-reload   # reload the systemd units
Browse to ip:9090 to verify.
Kill the manually started Prometheus process so systemd can manage it:
[root@prometheus prometheus]# ps aux | grep prom
root       2137  0.0  2.9 799212 54400 pts/0    Sl   12:38   0:00 prometheus --config.file=/prom/prometheus/prometheus.yml
root       2346  0.0  0.0 112824   972 pts/0    S+   12:59   0:00 grep --color=auto prom
[root@prometheus prometheus]# kill -9 2137
On the master (or any machine to be monitored), install the appropriate exporter (node_exporter for host metrics):
[root@master ~]# tar xf node_exporter-1.4.0-rc.0.linux-amd64.tar.gz
[root@master ~]# mv node_exporter-1.4.0-rc.0.linux-amd64 /node_exporter
[root@master ~]# cd /node_exporter/
[root@master node_exporter]# ls
LICENSE  node_exporter  NOTICE
Add it to PATH:
[root@master node_exporter]# PATH=/node_exporter/:$PATH
[root@master node_exporter]# vim /root/.bashrc
PATH=/node_exporter/:$PATH
[root@master node_exporter]# nohup node_exporter --web.listen-address 0.0.0.0:8090 &
[1] 4844
[root@master node_exporter]# nohup: ignoring input and appending output to 'nohup.out'
[root@master node_exporter]# ps aux | grep node
root       4844  0.0  0.7 716544 13104 pts/0    Sl   13:55   0:00 node_exporter --web.listen-address 0.0.0.0:8090
Stop the firewall:
[root@master ~]# service firewalld stop
Redirecting to /bin/systemctl stop firewalld.service
[root@master ~]# systemctl disable firewalld
Check the selinux status:
[root@master ~]# getenforce
Disable selinux:
# temporarily
setenforce 0
# permanently
sed -i '/^SELINUX=/ s/enforcing/disabled/' /etc/selinux/config
Browse to ip:8090 to verify.
Start node_exporter at boot:
[root@master ~]# vim /etc/rc.local
nohup /node_exporter/node_exporter --web.listen-address 0.0.0.0:8090 &
[root@master ~]# chmod +x /etc/rc.d/rc.local
Tell Prometheus which machines run an exporter so it can pull their metrics; add scrape targets to /prom/prometheus/prometheus.yml on the Prometheus server:
  - job_name: "prometheus"
    static_configs:
      - targets: ["localhost:9090"]
  - job_name: "master"
    static_configs:
      - targets: ["192.168.220.100:8090"]
(...)

[root@prometheus prometheus]# service prometheus restart
Redirecting to /bin/systemctl restart prometheus.service
Install Grafana on the same machine as Prometheus:
wget https://dl.grafana.com/enterprise/release/grafana-enterprise-9.1.2-1.x86_64.rpm
[root@prometheus ~]# yum install grafana-enterprise-9.1.2-1.x86_64.rpm -y
Start it:
[root@prometheus ~]# service grafana-server start
Starting grafana-server (via systemctl):                   [  OK  ]
[root@prometheus ~]# ps aux | grep grafana
grafana    5115  2.1  3.6 1129728 68768 ?       Ssl  14:28   0:00 /usr/sbin/grafana-server --config=/etc/grafana/grafana.ini --pidfile=/var/run/grafana/grafana-server.pid --packaging=rpm cfg:default.paths.logs=/var/log/grafana cfg:default.paths.data=/var/lib/grafana cfg:default.paths.plugins=/var/lib/grafana/plugins cfg:default.paths.provisioning=/etc/grafana/provisioning
root       5124  0.0  0.0 112824   976 pts/0    S+   14:28   0:00 grep --color=auto grafana
[root@prometheus ~]# netstat -anlput | grep grafana
tcp        0      0 192.168.220.165:54852   34.120.177.193:443      ESTABLISHED 5115/grafana-server
tcp        0      0 192.168.220.165:46122   185.199.109.133:443     ESTABLISHED 5115/grafana-server
tcp6       0      0 :::3000                 :::*                    LISTEN      5115/grafana-server
Browse to ip:3000.
Default user: admin
Password: admin
Add a data source:
Administration -> Data sources -> Add Prometheus -> http://192.168.220.105:9090
Add a dashboard template: ID 1860
Dashboards -> Import
This monitors the master's resource usage.
IX. Deploy a firewalld firewall and a JumpServer bastion host to protect the web cluster.
Install JumpServer
Prepare a 64-bit Linux host with at least 2 cores and 4 GB RAM and Internet access;
as root, run the one-line installer: curl -sSL https://resource.fit2cloud.com/jumpserver/jumpserver/releases/latest/download/quick_start.sh | bash
Deploy the firewall server to protect the internal network
The WAN interface ens33 sits on the same network segment as the outside Windows host and connects to the Internet.
# Shut the VM down and add a second NIC (ens36) for the internal network
vim /etc/sysconfig/network-scripts/ifcfg-ens36
Disable dhcp and add:
IPADDR=192.168.220.107   # the firewall's internal (LAN) address
NETMASK=255.255.255.0
DNS1=114.114.114.114
All internal machines must use the firewall's internal IP as their gateway:
GATEWAY=192.168.220.107
# all outbound traffic from the internal machines goes through this firewalld machine
# Script implementing SNAT on the firewall
[root@firewalld ~]# cat snat_dnat.sh
#!/bin/bash

# enable IP forwarding
echo 1 >/proc/sys/net/ipv4/ip_forward

# stop firewalld
systemctl stop firewalld
systemctl disable firewalld

# clear existing iptables rules
iptables -F
iptables -t nat -F

# enable snat
iptables -t nat -A POSTROUTING -s 192.168.220.0/24 -o ens33 -j MASQUERADE
# every packet from the internal 192.168.220.0/24 segment is masqueraded
# (source address rewritten) to whatever address ens33 currently holds, so
# the rule keeps working no matter what the public IP of ens33 is

iptables -t filter -P INPUT ACCEPT   # the default policy is ACCEPT

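Note that `echo 1 > /proc/sys/net/ipv4/ip_forward` does not survive a reboot; the usual fix is a sysctl config entry. A sketch (writing to a temp file here as a stand-in for `/etc/sysctl.d/99-ipforward.conf`, which is an assumed filename):

```shell
# Persist IP forwarding: put the key in a sysctl config file and reload it.
conf=$(mktemp)                           # stand-in for /etc/sysctl.d/99-ipforward.conf
echo 'net.ipv4.ip_forward = 1' > "$conf"
cat "$conf"
# on the real firewall: sysctl -p /etc/sysctl.d/99-ipforward.conf
```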
# On the web servers, open only the needed ports and set the default
# policy to DROP, protecting the k8s cluster.
[root@k8smaster ~]# cat open.sh
#!/bin/bash

# open ssh
iptables -t filter -A INPUT -p tcp --dport 22 -j ACCEPT

# open dns
iptables -t filter -A INPUT -p udp --dport 53 -s 192.168.220.0/24 -j ACCEPT   # allow dns queries from the internal segment

# open dhcp
iptables -t filter -A INPUT -p udp --dport 67 -j ACCEPT

# open http/https
iptables -t filter -A INPUT -p tcp --dport 80 -j ACCEPT
iptables -t filter -A INPUT -p tcp --dport 443 -j ACCEPT

# open the mysql port
iptables -t filter -A INPUT -p tcp --dport 3306 -j ACCEPT

# set the default policy to DROP
iptables -t filter -P INPUT DROP

X. Deploy an ansible machine and write a host inventory for day-to-day automated operations.
1. Set up passwordless SSH: generate a key pair on the ansible host and copy the public key to the root home directory on every server.
# Every server must run sshd, open port 22, and permit root login.
[root@ansible ~]# yum install -y epel-release
[root@ansible ~]# yum install ansible -y
[root@ansible ~]# ssh-keygen
[root@ansible ~]# ssh-copy-id -i /root/.ssh/id_rsa.pub 192.168.220.100
[root@ansible ~]# ssh-copy-id -i /root/.ssh/id_rsa.pub 192.168.220.101
[root@ansible ~]# ssh-copy-id -i /root/.ssh/id_rsa.pub 192.168.220.102
[root@ansible ~]# ssh-copy-id -i /root/.ssh/id_rsa.pub 192.168.220.103
(...)
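The repeated ssh-copy-id calls can be collapsed into a loop over the inventory IPs. This dry-run sketch only echoes the commands (drop the `echo` to actually run them; each run will still prompt for the remote root password unless a tool like sshpass is used):

```shell
# Dry-run: print one ssh-copy-id command per managed host
hosts="192.168.220.100 192.168.220.101 192.168.220.102 192.168.220.103 192.168.220.104 192.168.220.105 192.168.220.106"
for ip in $hosts; do
  echo ssh-copy-id -i /root/.ssh/id_rsa.pub "root@$ip"
done
```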
2. Write the host inventory
[root@ansible ansible]# vim hosts
[master]
192.168.220.100
[node]
192.168.220.101
192.168.220.102
[nfs]
192.168.220.103
[harbor]
192.168.220.104
[prometheus]
192.168.220.105
[jumpserver]
192.168.220.106
3. Test
[root@ansible ansible]# ansible all -m shell -a "ip add"
Project takeaways:
1. Gained a much deeper understanding of the individual k8s features (Service, PV, PVC, Ingress, etc.).
2. Better understood the relationship between development and operations.
3. Learned about load balancing, high availability, and automatic scaling.
4. Got hands-on depth with the supporting services (Prometheus, NFS, etc.).