Installing kube-prometheus (v0.7) on Kubernetes

Contents
- I. Check the local Kubernetes version and download the matching package
- II. Pre-installation preparation
  - 1. Organize the files
  - 2. Check whether the cluster has NFS persistent storage; install and configure it if not
    - 1) Install the NFS service
    - 2) Register the NFS provisioner with Kubernetes
  - 3. Configure Prometheus persistence
  - 4. Configure Grafana persistence
  - 5. Change the Prometheus and Grafana Service port settings
- III. Install Prometheus
  - 1. Install the Prometheus Operator
  - 2. Install all remaining components
  - 3. Verify the installation
I. Check the local Kubernetes version and download the matching package
kubectl version
As the screenshot showed, this cluster runs v1.19.
Go to the kube-prometheus download page and check the compatibility matrix to see which kube-prometheus release fits your Kubernetes version (release-0.7 targets roughly Kubernetes 1.19-1.20), then download that release.
# You can also download the packaged release directly on the server with the command below,
# or paste the URL into a browser, download it there, and upload it to the server.
wget https://github.com/prometheus-operator/kube-prometheus/archive/refs/tags/v0.7.0.tar.gz
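If the server has direct access to GitHub, cloning the tagged release is an equivalent alternative (assumes git is installed):
git clone -b v0.7.0 --depth 1 https://github.com/prometheus-operator/kube-prometheus.git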
For this walkthrough the archive was uploaded manually.
tar -zxvf kube-prometheus-0.7.0.tar.gz
II. Pre-installation preparation
1. Organize the files
cd into the manifests directory and you will see that the stock layout dumps every manifest into one flat directory:
cd kube-prometheus-0.7.0/manifests/
Create directories and sort the manifests into them by component.
# Create the folders
mkdir -p node-exporter alertmanager grafana kube-state-metrics prometheus serviceMonitor adapter
# Move the yaml files into their per-component folders
# (order matters: prometheus-adapter* must be moved before the prometheus-* glob runs)
mv *-serviceMonitor* serviceMonitor/
mv grafana-* grafana/
mv kube-state-metrics-* kube-state-metrics/
mv alertmanager-* alertmanager/
mv node-exporter-* node-exporter/
mv prometheus-adapter* adapter/
mv prometheus-* prometheus/
The sorted directory tree looks like this:
.
├── adapter
│ ├── prometheus-adapter-apiService.yaml
│ ├── prometheus-adapter-clusterRole.yaml
│ ├── prometheus-adapter-clusterRoleAggregatedMetricsReader.yaml
│ ├── prometheus-adapter-clusterRoleBinding.yaml
│ ├── prometheus-adapter-clusterRoleBindingDelegator.yaml
│ ├── prometheus-adapter-clusterRoleServerResources.yaml
│ ├── prometheus-adapter-configMap.yaml
│ ├── prometheus-adapter-deployment.yaml
│ ├── prometheus-adapter-roleBindingAuthReader.yaml
│ ├── prometheus-adapter-service.yaml
│ └── prometheus-adapter-serviceAccount.yaml
├── alertmanager
│ ├── alertmanager-alertmanager.yaml
│ ├── alertmanager-secret.yaml
│ ├── alertmanager-service.yaml
│ └── alertmanager-serviceAccount.yaml
├── grafana
│ ├── grafana-dashboardDatasources.yaml
│ ├── grafana-dashboardDefinitions.yaml
│ ├── grafana-dashboardSources.yaml
│ ├── grafana-deployment.yaml
│ ├── grafana-pvc.yaml
│ ├── grafana-service.yaml
│ └── grafana-serviceAccount.yaml
├── kube-state-metrics
│ ├── kube-state-metrics-clusterRole.yaml
│ ├── kube-state-metrics-clusterRoleBinding.yaml
│ ├── kube-state-metrics-deployment.yaml
│ ├── kube-state-metrics-service.yaml
│ └── kube-state-metrics-serviceAccount.yaml
├── node-exporter
│ ├── node-exporter-clusterRole.yaml
│ ├── node-exporter-clusterRoleBinding.yaml
│ ├── node-exporter-daemonset.yaml
│ ├── node-exporter-service.yaml
│ └── node-exporter-serviceAccount.yaml
├── prometheus
│ ├── prometheus-clusterRole.yaml
│ ├── prometheus-clusterRoleBinding.yaml
│ ├── prometheus-prometheus.yaml
│ ├── prometheus-roleBindingConfig.yaml
│ ├── prometheus-roleBindingSpecificNamespaces.yaml
│ ├── prometheus-roleConfig.yaml
│ ├── prometheus-roleSpecificNamespaces.yaml
│ ├── prometheus-rules.yaml
│ ├── prometheus-service.yaml
│ └── prometheus-serviceAccount.yaml
├── serviceMonitor
│ ├── alertmanager-serviceMonitor.yaml
│ ├── grafana-serviceMonitor.yaml
│ ├── kube-state-metrics-serviceMonitor.yaml
│ ├── node-exporter-serviceMonitor.yaml
│ ├── prometheus-adapter-serviceMonitor.yaml
│ ├── prometheus-operator-serviceMonitor.yaml
│ ├── prometheus-serviceMonitor.yaml
│ ├── prometheus-serviceMonitorApiserver.yaml
│ ├── prometheus-serviceMonitorCoreDNS.yaml
│ ├── prometheus-serviceMonitorKubeControllerManager.yaml
│ ├── prometheus-serviceMonitorKubeScheduler.yaml
│ └── prometheus-serviceMonitorKubelet.yaml
└── setup
├── 0namespace-namespace.yaml
├── prometheus-operator-0alertmanagerConfigCustomResourceDefinition.yaml
├── prometheus-operator-0alertmanagerCustomResourceDefinition.yaml
├── prometheus-operator-0podmonitorCustomResourceDefinition.yaml
├── prometheus-operator-0probeCustomResourceDefinition.yaml
├── prometheus-operator-0prometheusCustomResourceDefinition.yaml
├── prometheus-operator-0prometheusruleCustomResourceDefinition.yaml
├── prometheus-operator-0servicemonitorCustomResourceDefinition.yaml
├── prometheus-operator-0thanosrulerCustomResourceDefinition.yaml
├── prometheus-operator-clusterRole.yaml
├── prometheus-operator-clusterRoleBinding.yaml
├── prometheus-operator-deployment.yaml
├── prometheus-operator-service.yaml
└── prometheus-operator-serviceAccount.yaml
8 directories, 68 files
2. Check whether the cluster has NFS persistent storage; install and configure it if not
kubectl get sc
The screenshot showed it was already installed in this cluster. The steps below cover installing and configuring NFS from scratch.
1) Install the NFS service
Ubuntu:
sudo apt update
sudo apt install nfs-kernel-server
CentOS:
yum update
yum -y install nfs-utils
# Create (or reuse) a directory as the NFS export
mkdir -p /home/data/nfs/share
vi /etc/exports
Add the following line (rw: read-write; no_root_squash: do not map root to an anonymous user; sync: commit writes to disk before replying; no_subtree_check: disable subtree checking):
/home/data/nfs/share *(rw,no_root_squash,sync,no_subtree_check)
# Reload the export table and check that it took effect
exportfs -r
exportfs
# Start the rpcbind and NFS services
# CentOS
systemctl restart rpcbind && systemctl enable rpcbind
systemctl restart nfs && systemctl enable nfs
# Ubuntu
systemctl restart rpcbind && systemctl enable rpcbind
systemctl start nfs-kernel-server && systemctl enable nfs-kernel-server
# Check the RPC service registrations
rpcinfo -p localhost
# Test with showmount
showmount -e localhost
If all of the above commands succeed, the NFS server is working.
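As an optional extra sanity check (assumes the NFS client utilities are installed; /mnt/nfs-test below is just an arbitrary scratch mount point), you can mount the export locally and write a test file:
mkdir -p /mnt/nfs-test
mount -t nfs localhost:/home/data/nfs/share /mnt/nfs-test
touch /mnt/nfs-test/hello && ls /mnt/nfs-test   # should list "hello"
umount /mnt/nfs-test && rm -r /mnt/nfs-test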
2) Register the NFS provisioner with Kubernetes
Create a file named storageclass-nfs.yaml and paste in the following:
## StorageClass definition
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-storage                     # StorageClass name, pick your own
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"  # mark this as the cluster's default StorageClass (KubeSphere, for one, requires a default, so it is set to "true" here)
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner # provisioner name; must match PROVISIONER_NAME below
parameters:
  archiveOnDelete: "true"               # whether to archive a PV's data when the PV is deleted
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
spec:
  replicas: 1                           # run a single replica
  strategy:                             # how existing Pods are replaced with new ones
    type: Recreate                      # Recreate: kill the old Pod before starting the new one
  selector:                             # select the backend Pod
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner   # the ServiceAccount created below
      containers:
        - name: nfs-client-provisioner
          image: registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/nfs-subdir-external-provisioner:v4.0.2 # NFS provisioner image
          # resources:
          #   limits:
          #     cpu: 10m
          #   requests:
          #     cpu: 10m
          volumeMounts:
            - name: nfs-client-root     # the volume defined below
              mountPath: /persistentvolumes  # mount path inside the container
          env:
            - name: PROVISIONER_NAME    # provisioner name
              value: k8s-sigs.io/nfs-subdir-external-provisioner # must match the StorageClass provisioner above
            - name: NFS_SERVER          # NFS server address; change this to your NFS server's IP
              value: 192.168.0.0
            - name: NFS_PATH            # directory exported by the NFS server
              value: /home/data/nfs/share
      volumes:
        - name: nfs-client-root         # volume name; must match the volumeMount above
          nfs:
            server: 192.168.0.0         # NFS server address; keep consistent with NFS_SERVER, change to your IP
            path: /home/data/nfs/share  # exported directory; keep consistent with NFS_PATH
---
apiVersion: v1
kind: ServiceAccount                    # the ServiceAccount referenced by the Deployment above
metadata:
  name: nfs-client-provisioner          # must match the serviceAccountName above
  # replace with namespace where provisioner is deployed
  namespace: default
---
# The ClusterRole, ClusterRoleBinding, Role, and RoleBinding below are the RBAC
# bindings the provisioner needs; no changes required, copy them as-is.
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io
The only values you need to change are the NFS server address and the exported directory; a quick way to patch both is shown below.
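For example, with sed (10.0.0.5 and /srv/nfs/share here are placeholder examples; substitute your real server IP and export path):
sed -i 's/192.168.0.0/10.0.0.5/g; s#/home/data/nfs/share#/srv/nfs/share#g' storageclass-nfs.yaml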
Create the StorageClass:
kubectl apply -f storageclass-nfs.yaml
# Check that it now exists
kubectl get sc
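To confirm that dynamic provisioning actually works end to end, you can create a throwaway PVC against the new class (the name test-claim is arbitrary); it should reach Bound within a few seconds:
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-claim
spec:
  storageClassName: nfs-storage
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi
EOF
kubectl get pvc test-claim     # STATUS should show Bound
kubectl delete pvc test-claim  # clean up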
3. Configure Prometheus persistence
vi prometheus/prometheus-prometheus.yaml
Add the retention and storage settings at the end of the file, under spec (the two existing lines above them are shown for context):
...
  serviceMonitorSelector: {}
  version: v2.11.0
  retention: 3d
  storage:
    volumeClaimTemplate:
      spec:
        storageClassName: nfs-storage
        resources:
          requests:
            storage: 5Gi
4. Configure Grafana persistence
# Create a new PVC manifest for Grafana
vi grafana/grafana-pvc.yaml
Full contents:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: grafana
  namespace: monitoring         # must be the monitoring namespace
spec:
  storageClassName: nfs-storage # the StorageClass created earlier
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
Next, edit grafana-deployment.yaml to mount the PVC created above, and take the opportunity to bump the Grafana image version (some dashboard templates do not support Grafana below 7.5):
vi grafana/grafana-deployment.yaml
Change the volumes section as follows:
      serviceAccountName: grafana
      volumes:
        - name: grafana-storage           # new persistent storage entry
          persistentVolumeClaim:
            claimName: grafana            # the PVC created above
        # - emptyDir: {}                  # comment out the old emptyDir volume
        #   name: grafana-storage
        - name: grafana-datasources
          secret:
            secretName: grafana-datasources
(Screenshots showed the image tag before and after the change.)
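For reference, the version bump amounts to editing the image field in the grafana container spec; the exact tag below is an example, any 7.5+ release should work:
      containers:
        - image: grafana/grafana:7.5.4    # bumped from the stock tag to a 7.5+ release
          name: grafana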
5. Change the Prometheus and Grafana Service port settings
Change the Prometheus Service:
vi prometheus/prometheus-service.yaml
Change it to the following:
apiVersion: v1
kind: Service
metadata:
  labels:
    prometheus: k8s
  name: prometheus-k8s
  namespace: monitoring
spec:
  type: NodePort
  ports:
    - name: web
      port: 9090
      targetPort: web
      nodePort: 32101
  selector:
    app: prometheus
    prometheus: k8s
  sessionAffinity: ClientIP
Change the Grafana Service:
vi grafana/grafana-service.yaml
Change it to the following:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: grafana
  name: grafana
  namespace: monitoring
spec:
  type: NodePort
  ports:
    - name: http
      port: 3000
      targetPort: http
      nodePort: 32102
  selector:
    app: grafana
III. Install Prometheus
1. Install the Prometheus Operator
Make sure you are still in the manifests directory, then install the Operator:
kubectl apply -f setup/
Check the Pods and wait until they are all Ready before moving on:
kubectl get pods -n monitoring
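Instead of polling by hand, kubectl can block until everything in the namespace is Ready (the 300s timeout is an arbitrary choice):
kubectl wait --for=condition=Ready pods --all -n monitoring --timeout=300s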
2. Install all remaining components
# Apply each directory in turn
kubectl apply -f adapter/
kubectl apply -f alertmanager/
kubectl apply -f node-exporter/
kubectl apply -f kube-state-metrics/
kubectl apply -f grafana/
kubectl apply -f prometheus/
kubectl apply -f serviceMonitor/
Then check that the Pods were created, and wait for them all to reach Running:
kubectl get pods -n monitoring
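It is also worth confirming that the Prometheus and Grafana volumes were actually provisioned from the nfs-storage class:
kubectl get pvc -n monitoring   # each claim should show STATUS Bound and STORAGECLASS nfs-storage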
3. Verify the installation
If you know a cluster node's address, you can open Prometheus directly at ip:32101. If not, open the Rancher management UI, select the monitoring namespace, find prometheus-k8s and grafana under Services, and click the target port to open each UI.
Run any query in the Prometheus UI to confirm it works.
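A quick smoke test is the built-in up metric, which should return 1 for every healthy scrape target; from a shell you can also query the Prometheus HTTP API directly (replace <node-ip> with one of your node addresses):
curl 'http://<node-ip>:32101/api/v1/query?query=up'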
Then log in to Grafana.
The default username and password are admin/admin; you will be prompted to change the password on first login. Once in, import a dashboard template to verify everything works. Recommended template IDs: 12884 and 13105.