Table of Contents
(一) Preface
(二) Running categraf as a DaemonSet to monitor Kubernetes components
 (1) Deployment on clusters below v1.24:
① Create auth.yaml to bind permissions
② Deploy the categraf DaemonSet to scrape kubelet and kube-proxy
③ Verify that data is collected
 (2) Deployment on clusters v1.24 and above:
① Create a secret token and bind it to the ServiceAccount
② Grant permissions to the ServiceAccount
③ Test authentication
④ Deploy the categraf DaemonSet to scrape kubelet and kube-proxy
⑤ Verify that collection succeeds
(三) Monitoring Kubernetes objects with kube-state-metrics
 (1) Download and configure kube-state-metrics
 (2) Scraping KSM metrics
① Scraping with Prometheus-agent
② Scraping with categraf
 (3) Dashboards
 (4) Sharding logic
(四) Closing words
(一) Preface
In the previous post we scraped metrics with Prometheus-agent, using its endpoints service discovery. But Prometheus-agent is fairly heavyweight; instead we can run categraf as a DaemonSet and use its prometheus input plugin, where we simply fill in the target URLs. Compared with Prometheus-agent this is lighter and simpler.
The plugin approach does have a drawback: if the physical machine behind a pod fails, the pod's IP address moves and the hard-coded URL in the categraf plugin stops working. If your pod IPs are unstable, Prometheus-agent with endpoints service discovery is the more robust choice, since it does not depend on any particular machine. Service discovery for categraf itself is planned for a later release, so keep an eye on the Nightingale (夜鶯) website.
One more point: when monitoring Kubernetes components with a DaemonSet, we can only cover kube-proxy and kubelet. A DaemonSet schedules a pod onto every node, but worker nodes do not run master-only components such as apiserver and etcd, so only kube-proxy and kubelet, which exist on both masters and workers, can be monitored this way. The DaemonSet does, however, cover per-node resources (host resources plus pod resources via cadvisor).
Below we first walk through running categraf as a DaemonSet to monitor Kubernetes components, then monitoring Kubernetes resources with kube-state-metrics (KSM). I hit a few small pitfalls doing both, and this post works through all of them.
(二) Running categraf as a DaemonSet to monitor Kubernetes components
Collecting through a DaemonSet brings up kubelet's authentication requirement: we bind the needed permissions to a ServiceAccount that categraf will use. One caveat when deploying: I am on Kubernetes 1.25, and since 1.24 Kubernetes no longer automatically binds a secret to each ServiceAccount. If a ServiceAccount needs a secret, you must create the secret token yourself and attach it to the SA.
 (1) Deployment on clusters below v1.24:
Scraping kube-proxy requires no extra permissions; scraping kubelet does. Let's first test without authentication and see whether the metrics endpoints respond:
## The kubelet listens on two fixed ports (in my environment; yours may differ): 10248 and 10250. The commands below show that 10248 is the health-check port:
[root@k8s-master ~]# ss -ntpl | grep kubelet
LISTEN 0 128 127.0.0.1:10248 *:* users:(("kubelet",pid=1935,fd=24))
LISTEN 0 128 [::]:10250 [::]:* users:(("kubelet",pid=1935,fd=30))
[root@k8s-master ~]# curl localhost:10248/healthz
ok
## Now look at 10250, which is the kubelet's main (default) port
[root@k8s-master ~]# curl https://localhost:10250/metrics
curl: (60) Peer's certificate issuer has been marked as not trusted by the user.
More details here: http://curl.haxx.se/docs/sslcerts.html
curl performs SSL certificate verification by default, using a "bundle"
of Certificate Authority (CA) public keys (CA certs). If the default
bundle file isn't adequate, you can specify an alternate file
using the --cacert option.
If this HTTPS server uses a certificate signed by a CA represented in
the bundle, the certificate verification probably failed due to a
problem with the certificate (it might be expired, or the name might
not match the domain name in the URL).
If you'd like to turn off curl's verification of the certificate, use
the -k (or --insecure) option.
[root@k8s-master ~]# curl -k https://localhost:10250/metrics
Unauthorized
The last command returns Unauthorized: authentication failed. Let's solve the authentication problem first.
① Create auth.yaml to bind permissions
Save the following as auth.yaml; it creates a ClusterRole, a ServiceAccount, and a ClusterRoleBinding.
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: categraf-daemonset
rules:
  - apiGroups:
      - ""
    resources:
      - nodes/metrics
      - nodes/stats
      - nodes/proxy
    verbs:
      - get
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: categraf-daemonset
  namespace: flashcat
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: categraf-daemonset
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: categraf-daemonset
subjects:
  - kind: ServiceAccount
    name: categraf-daemonset
    namespace: flashcat
A ClusterRole is cluster-scoped rather than namespaced, and the one above defines only read permissions, which is all monitoring needs. A ServiceAccount is namespace-scoped; here we create one named categraf-daemonset and bind it to the ClusterRole, giving it those read permissions. Just apply it:
[work@tt-fc-dev01.nj yamls]$ kubectl apply -f auth.yaml
clusterrole.rbac.authorization.k8s.io/categraf-daemonset created
serviceaccount/categraf-daemonset created
clusterrolebinding.rbac.authorization.k8s.io/categraf-daemonset created
[work@tt-fc-dev01.nj yamls]$ kubectl get ClusterRole | grep categraf-daemon
categraf-daemonset 2022-11-14T03:53:54Z
[work@tt-fc-dev01.nj yamls]$ kubectl get sa -n flashcat
NAME SECRETS AGE
categraf-daemonset 1 90m
default 1 4d23h
[work@tt-fc-dev01.nj yamls]$ kubectl get ClusterRoleBinding -n flashcat | grep categraf-daemon
categraf-daemonset ClusterRole/categraf-daemonset 91m
The ServiceAccount was created successfully. Print it out and note that a secret was bound to it automatically:
[root@tt-fc-dev01.nj qinxiaohui]# kubectl get sa categraf-daemonset -n flashcat -o yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"ServiceAccount","metadata":{"annotations":{},"name":"categraf-daemonset","namespace":"flashcat"}}
  creationTimestamp: "2022-11-14T03:53:54Z"
  name: categraf-daemonset
  namespace: flashcat
  resourceVersion: "120570510"
  uid: 22f5a785-871c-4454-b82e-12bf104450a0
secrets:
- name: categraf-daemonset-token-7mccq
Note the last two lines: the ServiceAccount is associated with a Secret. Let's look at that Secret's content:
[root@tt-fc-dev01.nj qinxiaohui]# kubectl get secret categraf-daemonset-token-7mccq -n flashcat -o yaml
apiVersion: v1
data:
ca.crt: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUMvakNDQWVhZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJeU1ERXdPVEF4TXpjek9Gb1hEVE15TURFd056QXhNemN6T0Zvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBS2F1Ck9wU3hHdXB0ZlNraW1zbmlONFVLWnp2b1p6akdoTks1eUVlZWFPcmptdXIwdTFVYlFHbTBRWlpMem8xVi9GV1gKVERBOUthcFRNVllyS2hBQjNCVXdqdGhCaFp1NjJVQzg5TmRNSDVzNFdmMGtMNENYZWQ3V2g2R05Md0MyQ2xKRwp3Tmp1UkZRTndxMWhNWjY4MGlaT1hLZk1NbEt6bWY4aDJWZmthREdpVHk0VzZHWE5sRlRJSFFkVFBVMHVMY3dYCmc1cUVsMkd2cklmd05JSXBOV3ZoOEJvaFhyc1pOZVNlNHhGMVFqY0R2QVE4Q0xta2J2T011UGI5bGtwalBCMmsKV055RTVtVEZCZ2NCQ3dzSGhjUHhyN0E3cXJXMmtxbU1MbUJpc2dHZm9ieXFWZy90cTYzS1oxYlRvWjBIbXhicQp6TkpOZUJpbm9jbi8xblJBK3NrQ0F3RUFBYU5aTUZjd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0hRWURWUjBPQkJZRUZLVkxrbVQ5RTNwTmp3aThsck5UdXVtRm1MWHNNQlVHQTFVZEVRUU8KTUF5Q0NtdDFZbVZ5Ym1WMFpYTXdEUVlKS29aSWh2Y05BUUVMQlFBRGdnRUJBSm5QR24rR012S1ZadFVtZVc2bQoxanY2SmYvNlBFS2JzSHRkN2dINHdwREI3YW9pQVBPeTE0bVlYL2d5WWgyZHdsRk9hTWllVS9vUFlmRDRUdGxGCkZMT08yVkdLVTJBSmFNYnVBekw4ZTlsTFREM0xLOGFJUm1FWFBhQkR2V3VUYXZuSTZCWDhiNUs4SndraVd0R24KUFh0ejZhOXZDK1BoaWZDR0phMkNxQWtJV0Nrc0lWenNJcWJ0dkEvb1pHK1dhMlduemFlMC9OUFl4QS8waldOMwpVcGtDWllFaUQ4VlUwenRIMmNRTFE4Z2Mrb21uc3ljaHNjaW5KN3JsZS9XbVFES3ZhVUxLL0xKVTU0Vm1DM2grCnZkaWZtQStlaFZVZnJaTWx6SEZRbWdzMVJGMU9VczNWWUd0REt5YW9uRkc0VFlKa1NvM0IvRlZOQ0ZtcnNHUTYKZWV3PQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
namespace: Zmxhc2hjYXQ=
token: ZXlKaGJHY2lPaUpTVXpJMU5pSXNJbXRwWkNJNklqRTJZVTlNU2pObFFVbEhlbmhDV1dsVmFIcEVTRlZVWVdoZlZVaDZSbmd6TUZGZlVWUjJUR0pzVUVraWZRLmV5SnBjM01pT2lKcmRXSmxjbTVsZEdWekwzTmxjblpwWTJWaFkyTnZkVzUwSWl3aWEzVmlaWEp1WlhSbGN5NXBieTl6WlhKMmFXTmxZV05qYjNWdWRDOXVZVzFsYzNCaFkyVWlPaUptYkdGemFHTmhkQ0lzSW10MVltVnlibVYwWlhNdWFXOHZjMlZ5ZG1salpXRmpZMjkxYm5RdmMyVmpjbVYwTG01aGJXVWlPaUpqWVhSbFozSmhaaTFrWVdWdGIyNXpaWFF0ZEc5clpXNHROMjFqWTNFaUxDSnJkV0psY201bGRHVnpMbWx2TDNObGNuWnBZMlZoWTJOdmRXNTBMM05sY25acFkyVXRZV05qYjNWdWRDNXVZVzFsSWpvaVkyRjBaV2R5WVdZdFpHRmxiVzl1YzJWMElpd2lhM1ZpWlhKdVpYUmxjeTVwYnk5elpYSjJhV05sWVdOamIzVnVkQzl6WlhKMmFXTmxMV0ZqWTI5MWJuUXVkV2xrSWpvaU1qSm1OV0UzT0RVdE9EY3hZeTAwTkRVMExXSTRNbVV0TVRKaVpqRXdORFExTUdFd0lpd2ljM1ZpSWpvaWMzbHpkR1Z0T25ObGNuWnBZMlZoWTJOdmRXNTBPbVpzWVhOb1kyRjBPbU5oZEdWbmNtRm1MV1JoWlcxdmJuTmxkQ0o5Lm03czJ2Z1JuZDJzMDJOUkVwakdpc0JYLVBiQjBiRjdTRUFqb2RjSk9KLWh6YWhzZU5FSDFjNGNDbXotMDN5Z1Rkal9NT1VKaWpCalRmaW9FSWpGZHRCS0hEMnNjNXlkbDIwbjU4VTBSVXVDemRYQl9tY0J1WDlWWFM2bE5zYVAxSXNMSGdscV9Sbm5XcDZaNmlCaWp6SU05QUNuckY3MGYtd1FZTkVLc2MzdGhubmhSX3E5MkdkZnhmdGU2NmhTRGthdGhPVFRuNmJ3ZnZMYVMxV1JCdEZ4WUlwdkJmVXpkQ1FBNVhRYVNPck00RFluTE5uVzAxWDNqUGVZSW5ka3NaQ256cmV6Tnp2OEt5VFRTSlJ2VHVKMlZOU2lHaDhxTEgyZ3IzenhtQm5Qb1d0czdYeFhBTkJadG0yd0E2OE5FXzY0SlVYS0tfTlhfYmxBbFViakwtUQ==
kind: Secret
metadata:
  annotations:
    kubernetes.io/service-account.name: categraf-daemonset
    kubernetes.io/service-account.uid: 22f5a785-871c-4454-b82e-12bf104450a0
  creationTimestamp: "2022-11-14T03:53:54Z"
  name: categraf-daemonset-token-7mccq
  namespace: flashcat
  resourceVersion: "120570509"
  uid: 0a228da5-6e60-4b22-beff-65cc56683e41
type: kubernetes.io/service-account-token
Take the token field, base64-decode it, and use it as a Bearer Token in a test request:
[root@tt-fc-dev01.nj qinxiaohui]# token=`kubectl get secret categraf-daemonset-token-7mccq -n flashcat -o jsonpath={.data.token} | base64 -d`
[root@tt-fc-dev01.nj qinxiaohui]# curl -s -k -H "Authorization: Bearer $token" https://localhost:10250/metrics > aaaa
[root@tt-fc-dev01.nj qinxiaohui]# head -n 5 aaaa
# HELP apiserver_audit_event_total [ALPHA] Counter of audit events generated and sent to the audit backend.
# TYPE apiserver_audit_event_total counter
apiserver_audit_event_total 0
# HELP apiserver_audit_requests_rejected_total [ALPHA] Counter of apiserver requests rejected due to an error in audit logging backend.
# TYPE apiserver_audit_requests_rejected_total counter
apiserver_audit_requests_rejected_total 0
It works!
This proves the ServiceAccount is usable. Next we run Categraf as a DaemonSet and set serviceAccountName on that DaemonSet; Kubernetes will then automatically mount the token into the pod's filesystem.
② Deploy the categraf DaemonSet to scrape kubelet and kube-proxy
First create categraf's ConfigMaps, configuring the n9e address and the prometheus input plugin.
vim categraf-configmap.yaml
---
kind: ConfigMap
metadata:
  name: categraf-config
apiVersion: v1
data:
  config.toml: |
    [global]
    hostname = "$HOSTNAME"
    interval = 15
    providers = ["local"]
    [writer_opt]
    batch = 2000
    chan_size = 10000
    [[writers]]
    url = "http://10.206.0.16:19000/prometheus/v1/write"
    ## this is your n9e remote-write address
    timeout = 5000
    dial_timeout = 2500
    max_idle_conns_per_host = 100
---
kind: ConfigMap
metadata:
  name: categraf-input-prometheus
apiVersion: v1
data:
  prometheus.toml: |
    ## each [[instances]] block is one scrape target
    [[instances]]
    urls = ["http://127.0.0.1:10249/metrics"]
    labels = { job="kube-proxy" }
    [[instances]]
    urls = ["https://127.0.0.1:10250/metrics"]
    bearer_token_file = "/var/run/secrets/kubernetes.io/serviceaccount/token"  ## path of the mounted secret token
    use_tls = true
    insecure_skip_verify = true  ## skip certificate verification
    labels = { job="kubelet" }   ## attach a job label
    [[instances]]
    urls = ["https://127.0.0.1:10250/metrics/cadvisor"]
    bearer_token_file = "/var/run/secrets/kubernetes.io/serviceaccount/token"
    use_tls = true
    insecure_skip_verify = true
    labels = { job="cadvisor" }
Then apply it to generate the configuration:
[work@tt-fc-dev01.nj yamls]$ kubectl apply -f categraf-configmap.yaml -n flashcat
configmap/categraf-config unchanged
configmap/categraf-input-prometheus configured
Next create the categraf DaemonSet, setting serviceAccountName to the ServiceAccount we created:
vim categraf-daemonset.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  labels:
    app: categraf-daemonset
  name: categraf-daemonset
spec:
  selector:
    matchLabels:
      app: categraf-daemonset
  template:
    metadata:
      labels:
        app: categraf-daemonset
    spec:
      containers:
        - env:
            - name: TZ
              value: Asia/Shanghai
            - name: HOSTNAME
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: spec.nodeName
            - name: HOSTIP
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: status.hostIP
          image: flashcatcloud/categraf:v0.2.18
          imagePullPolicy: IfNotPresent
          name: categraf
          volumeMounts:
            - mountPath: /etc/categraf/conf
              name: categraf-config
            - mountPath: /etc/categraf/conf/input.prometheus
              name: categraf-input-prometheus
      hostNetwork: true
      serviceAccountName: categraf-daemonset  ## the ServiceAccount created above
      restartPolicy: Always
      tolerations:
        - effect: NoSchedule
          operator: Exists
      volumes:
        - configMap:
            name: categraf-config
          name: categraf-config
        - configMap:
            name: categraf-input-prometheus
          name: categraf-input-prometheus
Apply to create it:
[work@tt-fc-dev01.nj yamls]$ kubectl apply -f categraf-daemonset.yaml -n flashcat
daemonset.apps/categraf-daemonset created
# waiting...
[work@tt-fc-dev01.nj yamls]$ kubectl get pods -n flashcat
NAME READY STATUS RESTARTS AGE
categraf-daemonset-d8jt8 1/1 Running 0 37s
categraf-daemonset-fpx8v 1/1 Running 0 43s
categraf-daemonset-mp468 1/1 Running 0 32s
categraf-daemonset-s775l 1/1 Running 0 40s
categraf-daemonset-wxkjk 1/1 Running 0 47s
categraf-daemonset-zwscc 1/1 Running 0 35s
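Before querying any metrics, we can optionally confirm that Kubernetes really mounted the ServiceAccount token into the pods. A quick check (the pod name comes from the output above; substitute one of yours):
kubectl exec -n flashcat categraf-daemonset-d8jt8 -- \
  ls /var/run/secrets/kubernetes.io/serviceaccount/
# expected entries: ca.crt  namespace  token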
③ Verify that data is collected
Test metric: kubelet_running_pods. For more commonly used kubelet metrics, see the end of the kubelet section in my previous post: 夜鶯(Flashcat)V6監(jiān)控(五):夜鶯監(jiān)控k8s
Dashboards are available too; the corresponding json files, and how to import them, are in that previous post: 夜鶯(Flashcat)V6監(jiān)控(五):夜鶯監(jiān)控k8s組件(上)
The json for the pod/container dashboard lives at: categraf/pod-dash.n · flashcatcloud/categraf · GitHub
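As a quick sanity check, here are a few instant queries you can paste into Nightingale's query page (metric names taken from the kubelet and cadvisor expositions; verify them against your versions):
kubelet_running_pods                             # pods each kubelet is running
kubelet_running_containers                       # containers per node, labeled by container_state
rate(container_cpu_usage_seconds_total[5m])      # per-container CPU usage, from cadvisor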
 (2) Deployment on clusters v1.24 and above:
① Create a secret token and bind it to the ServiceAccount
Since 1.24, Kubernetes no longer binds a secret token to a ServiceAccount automatically, so we create the SA first and then attach a secret by hand:
kubectl create sa categraf-daemonset -n flashcat
## create the ServiceAccount
kubectl create token categraf-daemonset -n flashcat
## issue a token for that ServiceAccount
Then create a secret that points at our SA:
vim categraf-daemonset-secre.yaml
apiVersion: v1
kind: Secret
metadata:
  name: categraf-daemonset-secret
  namespace: flashcat
  annotations:
    kubernetes.io/service-account.name: categraf-daemonset
type: kubernetes.io/service-account-token
Edit the SA we created so that it references this secret:
[root@k8s-master ~]# kubectl edit sa categraf-daemonset -n flashcat
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: v1
kind: ServiceAccount
metadata:
  creationTimestamp: "2023-05-16T06:37:55Z"
  name: categraf-daemonset
  namespace: flashcat
  resourceVersion: "521218"
  uid: abd1736b-c12c-4e76-a752-c1dcca9b22be
secrets:
- name: categraf-daemonset-secret   ## add these two lines to reference our secret
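If you'd rather not open an editor, a kubectl patch one-liner should achieve the same binding non-interactively (a sketch using the same names as above):
kubectl patch sa categraf-daemonset -n flashcat \
  -p '{"secrets":[{"name":"categraf-daemonset-secret"}]}'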
Then check whether the binding took effect:
[root@k8s-master ~]# kubectl get sa -n flashcat
NAME SECRETS AGE
categraf-daemonset 1 2d23h
default 0 5d21h
## a SECRETS count of 1 means the secret is bound
## we can also inspect it directly:
[root@k8s-master ~]# kubectl get sa categraf-daemonset -n flashcat -o yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  creationTimestamp: "2023-05-16T06:37:55Z"
  name: categraf-daemonset
  namespace: flashcat
  resourceVersion: "521218"
  uid: abd1736b-c12c-4e76-a752-c1dcca9b22be
secrets:
- name: categraf-daemonset-secret
[root@k8s-master ~]# kubectl get secrets categraf-daemonset-secret -o yaml -n flashcat
apiVersion: v1
data:
ca.crt: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUMvakNDQWVhZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJek1ETXlNVEUxTVRRME9Wb1hEVE16TURNeE9ERTFNVFEwT1Zvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTWlkCmNvUkRHTUsvZEcxSmRCVkZVbUxmZzF4YUs4SkxFOS9TTk5aTHYzUFYyWWdNQ3VuMExYSmEzdzV0STlNME04R2QKV3RaMnpsZW4rNWdTM3ZMbDVrd3piK1pla3I5TGpCQmN5ajczbW5lYVV4NW5SUlQvT085UERaVzBYaFNyenJ0QwpYQ3ZmNFJob05kRk1SWXlwSUF1VGFKNHhHQ2x4eU05cTlGaytreCtITGFPcnJVQ1ZUYk1wQXYyNm5DY1BjaWdrCjM3aXlnOEp5c3hXYk51UmhwYWp1Z2g3ODRsOHpNVHlidUdiNHZpMWFmQStGRXJtNGNWQnY5L0NqeDhvdFdlY1cKc1YxZW9VKzR2d2p4RFQwS1RuVHZ1cmdRTnpoVW5NcXRjcDZZdldGekpPSnE0SWs1RlM1d2ZaVVlCbGt2eGo0UQp3TmNaVGFGREVtTVFHdWY0NWNNQ0F3RUFBYU5aTUZjd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0hRWURWUjBPQkJZRUZNb3c5N3ZoTlNSVVJJV0VJWm0xSGRYR0ZjdEhNQlVHQTFVZEVRUU8KTUF5Q0NtdDFZbVZ5Ym1WMFpYTXdEUVlKS29aSWh2Y05BUUVMQlFBRGdnRUJBRTRJeHZ6OFVsWU9uZGVYRW9nWApsSE5QN0FsNno2QnZjRU54K1g5OVJrMDI2WGNaaUJqRmV0ZjhkMlhZWlhNSVVub3poSU5RNFNyRlpOR3lGeUtRCjZqelVUVjdhR2pKczJydnI1aDBJazFtTVU5VXJMVGJCSk5GOExqall2bVlyTEY5OTM4SldRRFVFSGhCaTJ0NW0KWFcyYS8vZkJ2ZHF3SkhPSDVIU082RUZFU2NjT05EZU5aQWhZTnJEMjZhZDU0c0U3Ti9adDcxenFjMHZ4SFdvRQpuQlZZOVBTcGRKTm1WWjgzL1FjbHViMWRhREpzR1R2UDJLdU1OTy9EcEQwa0Q2bHFuQmR1VndubHp2cFlqYXhUCnFXVCs0UHJ2OXM5RE91MXowYlMzUW1JM0E0cGtFM1JYdFBZUXN0Vmw1OXg1djM4QjI5U0lnNGl1SU1EZ1JPcXkKSzNJPQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
namespace: Zmxhc2hjYXQ=
token: ZXlKaGJHY2lPaUpTVXpJMU5pSXNJbXRwWkNJNklrZ3hhVlpXUkhacVMyOUhNSFpNWTAxVFVFeFhOa3hFUkhoWlFYTjJia1l3WVVKWU1HMWpTRGhLT1VVaWZRLmV5SnBjM01pT2lKcmRXSmxjbTVsZEdWekwzTmxjblpwWTJWaFkyTnZkVzUwSWl3aWEzVmlaWEp1WlhSbGN5NXBieTl6WlhKMmFXTmxZV05qYjNWdWRDOXVZVzFsYzNCaFkyVWlPaUptYkdGemFHTmhkQ0lzSW10MVltVnlibVYwWlhNdWFXOHZjMlZ5ZG1salpXRmpZMjkxYm5RdmMyVmpjbVYwTG01aGJXVWlPaUpqWVhSbFozSmhaaTFrWVdWdGIyNXpaWFF0YzJWamNtVjBJaXdpYTNWaVpYSnVaWFJsY3k1cGJ5OXpaWEoyYVdObFlXTmpiM1Z1ZEM5elpYSjJhV05sTFdGalkyOTFiblF1Ym1GdFpTSTZJbU5oZEdWbmNtRm1MV1JoWlcxdmJuTmxkQ0lzSW10MVltVnlibVYwWlhNdWFXOHZjMlZ5ZG1salpXRmpZMjkxYm5RdmMyVnlkbWxqWlMxaFkyTnZkVzUwTG5WcFpDSTZJbUZpWkRFM016WmlMV014TW1NdE5HVTNOaTFoTnpVeUxXTXhaR05qWVRsaU1qSmlaU0lzSW5OMVlpSTZJbk41YzNSbGJUcHpaWEoyYVdObFlXTmpiM1Z1ZERwbWJHRnphR05oZERwallYUmxaM0poWmkxa1lXVnRiMjV6WlhRaWZRLkFnTmwyNkMzUHg4ekE1Z0tGRDJya0N3Vzl6aS11ZzBQdnFMSnBUNEZ4QmFyTTVtSmE1VGlpQ1o5RzNKQmgtMm4tcTcwbHBPMVZLSUQzaGJMNzBIV3p6cEdKdThTUEZNeFN3OVRYTjhleFp3ZFVmcGVtbXF6azZvV3owRzBNbXVJSHFpRWJIVHNOZnluNEhDWHhHZmJRX0tmaDNEaURESThkYTBRdldLaWd0NTBMS0lRQlVFT3ZwSnpURVh1YmFxWHExbFdnV1VBQ0VPTktYZmxzbWEweDEwdUExY1JkOXF1UzdEWE93cWRnLUF1NGZVb0lmTTdRcTZyeEFFT2pXQnJWbmNYV1VzSnlNbk5uM0xQMXBWVUFCa21wQzFjYVJkSVB5bHBLYnpHaVlwSlloRjJKT3BwRWU2SUZsYTNwX1NHeVp5WUV2UkpPNXJNVVhTSWZJTnZMZw==
kind: Secret
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"Secret","metadata":{"annotations":{"kubernetes.io/service-account.name":"categraf-daemonset"},"name":"categraf-daemonset-secret","namespace":"flashcat"},"type":"kubernetes.io/service-account-token"}
    kubernetes.io/service-account.name: categraf-daemonset
    kubernetes.io/service-account.uid: abd1736b-c12c-4e76-a752-c1dcca9b22be
  creationTimestamp: "2023-05-16T06:39:26Z"
  name: categraf-daemonset-secret
  namespace: flashcat
  resourceVersion: "520851"
  uid: 1095820e-c420-47c4-9a52-c051c19b9f17
type: kubernetes.io/service-account-token
② Grant permissions to the ServiceAccount
Now create the RBAC objects and bind them to our SA, so that it is allowed to read and scrape metrics from the cluster:
vim categraf-daemonset-rbac.yaml
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: categraf-daemonset
rules:
  - apiGroups: [""]
    resources:
      - nodes
      - nodes/metrics
      - nodes/stats
      - nodes/proxy
      - services
      - endpoints
      - pods
    verbs: ["get", "list", "watch"]
  - apiGroups:
      - extensions
      - networking.k8s.io
    resources:
      - ingresses
    verbs: ["get", "list", "watch"]
  - nonResourceURLs: ["/metrics", "/metrics/cadvisor"]
    verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: categraf-daemonset
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: categraf-daemonset
subjects:
  - kind: ServiceAccount
    name: categraf-daemonset   ## the ServiceAccount we are binding
    namespace: flashcat
kubectl apply -f categraf-daemonset-rbac.yaml
③ Test authentication
With everything created, base64-decode the bound secret token and check that it works:
[root@k8s-master daemonset]# token=`kubectl get secret categraf-daemonset-secret -n flashcat -o jsonpath={.data.token} | base64 -d`
[root@k8s-master daemonset]# curl -s -k -H "Authorization: Bearer $token" https://localhost:10250/metrics > aaaa
[root@k8s-master daemonset]# head -n 5 aaaa
# HELP apiserver_audit_event_total [ALPHA] Counter of audit events generated and sent to the audit backend.
# TYPE apiserver_audit_event_total counter
apiserver_audit_event_total 0
# HELP apiserver_audit_requests_rejected_total [ALPHA] Counter of apiserver requests rejected due to an error in audit logging backend.
# TYPE apiserver_audit_requests_rejected_total counter
Finally, persist the decoded token into a fixed file, so that the categraf prometheus input config can reference it directly when scraping metrics:
kubectl get secret categraf-daemonset-secret -n flashcat -o jsonpath='{.data.token}' | base64 -d > /var/run/secrets/kubernetes.io/serviceaccount/token
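Note that this file lives on the node's filesystem and the directory usually does not exist there yet, so the step has to be repeated on every node that will run categraf. A small sketch (assumes root on the node and a working kubectl there):
mkdir -p /var/run/secrets/kubernetes.io/serviceaccount
kubectl get secret categraf-daemonset-secret -n flashcat \
  -o jsonpath='{.data.token}' | base64 -d \
  > /var/run/secrets/kubernetes.io/serviceaccount/token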
④ Deploy the categraf DaemonSet to scrape kubelet and kube-proxy
From here the steps match the pre-1.24 flow: write categraf's configuration, then start the categraf DaemonSet.
First create the ConfigMaps holding categraf's configuration:
---
kind: ConfigMap
metadata:
  name: categraf-config
apiVersion: v1
data:
  config.toml: |
    [global]
    hostname = "$HOSTNAME"
    interval = 15
    providers = ["local"]
    [writer_opt]
    batch = 2000
    chan_size = 10000
    [[writers]]
    url = "http://192.168.120.17:17000/prometheus/v1/write"
    timeout = 5000
    dial_timeout = 2500
    max_idle_conns_per_host = 100
---
kind: ConfigMap
metadata:
  name: categraf-input-prometheus
apiVersion: v1
data:
  prometheus.toml: |
    [[instances]]
    urls = ["http://127.0.0.1:10249/metrics"]
    labels = { job="kube-proxy" }
    [[instances]]
    urls = ["https://127.0.0.1:10250/metrics"]
    bearer_token_file = "/var/run/secrets/kubernetes.io/serviceaccount/token"
    use_tls = true
    insecure_skip_verify = true
    labels = { job="kubelet" }
    [[instances]]
    urls = ["https://127.0.0.1:10250/metrics/cadvisor"]
    bearer_token_file = "/var/run/secrets/kubernetes.io/serviceaccount/token"
    ## this token path is the file we wrote the base64-decoded token into above
    use_tls = true
    insecure_skip_verify = true
    labels = { job="cadvisor" }
kubectl apply -f categraf-configmap-v2.yaml -n flashcat
Then create the DaemonSet yaml:
apiVersion: apps/v1
kind: DaemonSet
metadata:
  labels:
    app: categraf-daemonset
  name: categraf-daemonset
spec:
  selector:
    matchLabels:
      app: categraf-daemonset
  template:
    metadata:
      labels:
        app: categraf-daemonset
    spec:
      containers:
        - env:
            - name: TZ
              value: Asia/Shanghai
            - name: HOSTNAME
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: spec.nodeName
            - name: HOSTIP
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: status.hostIP
          image: flashcatcloud/categraf:v0.2.18
          imagePullPolicy: IfNotPresent
          name: categraf
          volumeMounts:
            - mountPath: /etc/categraf/conf
              name: categraf-config
            - mountPath: /etc/categraf/conf/input.prometheus
              name: categraf-input-prometheus
      hostNetwork: true
      serviceAccountName: categraf-daemonset
      restartPolicy: Always
      tolerations:
        - effect: NoSchedule
          operator: Exists
      volumes:
        - configMap:
            name: categraf-config
          name: categraf-config
        - configMap:
            name: categraf-input-prometheus
          name: categraf-input-prometheus
kubectl apply -f categraf-daemonset-v2.yaml -n flashcat
Apply them all and the configuration takes effect:
[root@k8s-master daemonset]# kubectl get pod -n flashcat
NAME READY STATUS RESTARTS AGE
categraf-daemonset-26rsz 1/1 Running 0 3h36m
categraf-daemonset-7qc6p 1/1 Running 0 3h36m
⑤ Verify that collection succeeds
Testing is the same as in the pre-1.24 section: query the kubelet_running_pods metric. Commonly used kubelet metrics are listed at the end of my previous post (夜鶯(Flashcat)V6監(jiān)控(五):夜鶯監(jiān)控k8s), which also covers the kubelet dashboard json and how to import it (夜鶯(Flashcat)V6監(jiān)控(五):夜鶯監(jiān)控k8s組件(上)). The json for the pod/container dashboard is at: categraf/pod-dash.n · flashcatcloud/categraf · GitHub
(三) Monitoring Kubernetes objects with kube-state-metrics
kube-state-metrics on GitHub: kubernetes/kube-state-metrics (github.com)
Earlier posts in this series spent plenty of pages on the metrics of the individual Kubernetes components: kube-proxy and kubelet on the nodes, APIServer, controller-manager, scheduler, and etcd on the masters. But those metrics cannot answer questions like: how many Namespaces exist, how many Services, Deployments, and StatefulSets there are, or how many Pods a given Deployment expects to run versus how many are actually running.
Answering those requires reading Kubernetes metadata, and KSM exists precisely for that: it calls the kube-apiserver, watches the state of each Kubernetes object, and exposes that state as metrics.
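For example, once KSM is running, "does this Deployment have the Pods it expects" becomes a one-line query over two standard KSM metrics (the namespace here is just an example; a non-zero result means pods are missing):
kube_deployment_spec_replicas{namespace="flashcat"}
  - kube_deployment_status_replicas_available{namespace="flashcat"}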
Kube-state-metrics provides:
- Near-real-time cluster state, including resource usage, network, and security-related indicators.
- Automatic collection and storage of the data, with no need to restart nodes or applications.
- Extension points for adding more nodes to the cluster to raise monitoring coverage.
- Alerting, plus visualization of the monitoring data through charts and alerts.
- Multiple data formats and frequencies, to suit different needs.
Let's deploy it and try it out.
 (1) Download and configure kube-state-metrics
Since KSM calls the APIServer to read this information, it needs access control: a ServiceAccount, a ClusterRole, and a ClusterRoleBinding. All of these yamls can be found directly in the KSM code repository, together with the deployment and service yamls, in one step:
Either download the whole repo and copy its examples directory onto your Linux host, or copy the yaml files under examples/standard straight from GitHub.
Upload the standard folder to the Linux server.
Two things need to be configured: the time zone and the image. The default image, registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.8.2, comes from the official Kubernetes registry, which simply cannot be pulled from inside mainland China. You can build your own mirror of the kube-state-metrics image from its GitHub project.
I have already built one you can use directly:
registry.cn-qingdao.aliyuncs.com/dream-1/dream-ksm:v2.8.2
Or build your own; a step-by-step guide is in an earlier post of mine:
(一) Docker Hub網(wǎng)站倉庫國內(nèi)進不去了?手把手教你通過GitHub項目構建自己的鏡像倉庫站!_Dream云原生夢工廠的博客-CSDN博客
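Whichever image you use, the only edit needed in the downloaded yaml is the image field; a sed one-liner does it (the paths and tags below assume the v2.8.2 examples, so adjust to yours):
sed -i 's#registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.8.2#registry.cn-qingdao.aliyuncs.com/dream-1/dream-ksm:v2.8.2#' standard/deployment.yaml
The edited deployment.yaml then looks like this: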
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app.kubernetes.io/component: exporter
    app.kubernetes.io/name: kube-state-metrics
    app.kubernetes.io/version: 2.8.2
  name: kube-state-metrics
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: kube-state-metrics
  template:
    metadata:
      labels:
        app.kubernetes.io/component: exporter
        app.kubernetes.io/name: kube-state-metrics
        app.kubernetes.io/version: 2.8.2
    spec:
      automountServiceAccountToken: true
      containers:
        - image: registry.cn-qingdao.aliyuncs.com/dream-1/dream-ksm:v2.8.2
          ## our self-built kube-state-metrics image
          livenessProbe:
            httpGet:
              path: /healthz
              port: 8080
            initialDelaySeconds: 5
            timeoutSeconds: 5
          name: kube-state-metrics
          env:
            - name: TZ
              value: Asia/Shanghai  ## add these three env lines for the time zone
          ports:
            - containerPort: 8080
              name: http-metrics
            - containerPort: 8081
              name: telemetry
          readinessProbe:
            httpGet:
              path: /
              port: 8081
            initialDelaySeconds: 5
            timeoutSeconds: 5
          securityContext:
            allowPrivilegeEscalation: false
            capabilities:
              drop:
                - ALL
            readOnlyRootFilesystem: true
            runAsUser: 65534
      nodeSelector:
        kubernetes.io/os: linux
      serviceAccountName: kube-state-metrics
Then apply every yaml file in the folder:
kubectl apply -f standard/
As the attentive reader will have noticed, KSM's service exposes two HTTP ports, 8080 and 8081:
vim service.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app.kubernetes.io/component: exporter
    app.kubernetes.io/name: kube-state-metrics
    app.kubernetes.io/version: 2.8.2
  name: kube-state-metrics
  namespace: kube-system
spec:
  clusterIP: None
  ports:
    - name: http-metrics
      port: 8080
      targetPort: http-metrics
    - name: telemetry
      port: 8081
      targetPort: telemetry
  selector:
    app.kubernetes.io/name: kube-state-metrics
Port 8080 returns the metrics about the Kubernetes objects themselves, for example node-related information.
Port 8081 exposes KSM's own telemetry: KSM calls the APIServer and watches the relevant data, and the health of those operations needs measuring too.
We can curl both to verify. Note that the address must be the IP the pod actually exposes:
[root@k8s-master ~]# kubectl get pod -A -o wide | grep kube-state
kube-system kube-state-metrics-58b98984fc-nk9rm 1/1 Running 2 (4h14m ago) 18h 10.95.156.119 k8s-node1 <none> <none>
[root@k8s-master ~]# kubectl get endpoints -A | grep kube-state
kube-system kube-state-metrics 10.95.156.119:8081,10.95.156.119:8080 21h
[root@k8s-master ~]# curl 10.95.156.119:8080/metrics
# HELP kube_certificatesigningrequest_annotations Kubernetes annotations converted to Prometheus labels.
# TYPE kube_certificatesigningrequest_annotations gauge
# HELP kube_certificatesigningrequest_labels [STABLE] Kubernetes labels converted to Prometheus labels.
# TYPE kube_certificatesigningrequest_labels gauge
# HELP kube_certificatesigningrequest_created [STABLE] Unix creation timestamp
# TYPE kube_certificatesigningrequest_created gauge
# HELP kube_certificatesigningrequest_condition [STABLE] The number of each certificatesigningrequest condition
# TYPE kube_certificatesigningrequest_condition gauge
# HELP kube_certificatesigningrequest_cert_length [STABLE] Length of the issued cert
# TYPE kube_certificatesigningrequest_cert_length gauge
# HELP kube_configmap_annotations Kubernetes annotations converted to Prometheus labels.
# TYPE kube_configmap_annotations gauge
kube_configmap_annotations{namespace="kube-system",configmap="extension-apiserver-authentication"} 1
kube_configmap_annotations{namespace="kube-system",configmap="kube-proxy"} 1
kube_configmap_annotations{namespace="flashcat",configmap="categraf-config"} 1
[root@k8s-master ~]# curl 10.95.156.119:8081/metrics
kube_state_metrics_watch_total{resource="*v1.Deployment",result="success"} 14567
kube_state_metrics_watch_total{resource="*v1.Endpoints",result="success"} 14539
kube_state_metrics_list_total{resource="*v1.VolumeAttachment",result="error"} 1
kube_state_metrics_list_total{resource="*v1.VolumeAttachment",result="success"} 2
(2) Scraping KSM metrics
There are two ways to scrape these metrics as well: categraf's prometheus input plugin, or Prometheus-agent with endpoints service discovery.
The difference: if a machine in the cluster goes down or reboots, the pod IP changes; with categraf you then have to check pod status and fix the url each time. Prometheus-agent does not have this problem, because the endpoint ports stay at 8080 and 8081 and discovery follows the pod wherever it lands. On the other hand, Prometheus-agent is heavier on resources and categraf is lighter; once categraf gains service discovery in a later release, categraf alone will do. This time I'll configure both.
① Scraping with Prometheus-agent
Just add scrape jobs to the Prometheus-agent ConfigMap we used earlier (my previous post explains the Prometheus-agent deployment in detail); two job_names are all that's needed:
apiVersion: v1
kind: ConfigMap
metadata:
  name: prometheus-agent-conf
  labels:
    name: prometheus-agent-conf
  namespace: flashcat
data:
  prometheus.yml: |-
    global:
      scrape_interval: 15s
      evaluation_interval: 15s
    scrape_configs:
      - job_name: 'kube-state-metrics'
        kubernetes_sd_configs:
          - role: endpoints
        scheme: http
        relabel_configs:
          - source_labels: [__meta_kubernetes_namespace, __meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
            action: keep
            regex: kube-system;kube-state-metrics;http-metrics
      - job_name: 'kube-state-metrics-self'
        kubernetes_sd_configs:
          - role: endpoints
        scheme: http
        relabel_configs:
          - source_labels: [__meta_kubernetes_namespace, __meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
            action: keep
            regex: kube-system;kube-state-metrics;telemetry
    remote_write:
      - url: 'http://192.168.120.17:17000/prometheus/v1/write'
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: categraf
rules:
  - apiGroups: [""]
    resources:
      - nodes
      - nodes/metrics
      - nodes/stats
      - nodes/proxy
      - services
      - endpoints
      - pods
    verbs: ["get", "list", "watch"]
  - apiGroups:
      - extensions
      - networking.k8s.io
    resources:
      - ingresses
    verbs: ["get", "list", "watch"]
  - nonResourceURLs: ["/metrics", "/metrics/cadvisor"]
    verbs: ["get"]
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: categraf
  namespace: flashcat
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: categraf
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: categraf
subjects:
  - kind: ServiceAccount
    name: categraf
    namespace: flashcat
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: prometheus-agent
  namespace: flashcat
  labels:
    app: prometheus-agent
spec:
  replicas: 1
  selector:
    matchLabels:
      app: prometheus-agent
  template:
    metadata:
      labels:
        app: prometheus-agent
    spec:
      serviceAccountName: categraf
      containers:
        - name: prometheus
          image: prom/prometheus
          args:
            - "--config.file=/etc/prometheus/prometheus.yml"
            - "--web.enable-lifecycle"
            - "--enable-feature=agent"
          ports:
            - containerPort: 9090
          resources:
            requests:
              cpu: 500m
              memory: 500M
            limits:
              cpu: 1
              memory: 1Gi
          volumeMounts:
            - name: prometheus-config-volume
              mountPath: /etc/prometheus/
            - name: prometheus-storage-volume
              mountPath: /prometheus/
      volumes:
        - name: prometheus-config-volume
          configMap:
            defaultMode: 420
            name: prometheus-agent-conf
        - name: prometheus-storage-volume
          emptyDir: {}
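After applying, it's worth checking that the agent comes up cleanly and loads the two new jobs (the file name is whatever you saved the manifests as):
kubectl apply -f prometheus-agent.yaml -n flashcat
kubectl logs deploy/prometheus-agent -n flashcat | tail -n 20
# look for the configuration loading without errors before moving on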
② Scraping with categraf
Scraping with categraf is just as simple. KSM requires no authentication, so we add KSM instances to the categraf-input-prometheus ConfigMap:
---
kind: ConfigMap
metadata:
  name: categraf-config
apiVersion: v1
data:
  config.toml: |
    [global]
    hostname = "$HOSTNAME"
    interval = 15
    providers = ["local"]
    [writer_opt]
    batch = 2000
    chan_size = 10000
    [[writers]]
    url = "http://192.168.120.17:17000/prometheus/v1/write"
    timeout = 5000
    dial_timeout = 2500
    max_idle_conns_per_host = 100
---
kind: ConfigMap
metadata:
  name: categraf-input-prometheus
apiVersion: v1
data:
  prometheus.toml: |
    [[instances]]
    urls = ["http://127.0.0.1:10249/metrics"]
    labels = { job="kube-proxy" }
    [[instances]]
    urls = ["https://127.0.0.1:10250/metrics"]
    bearer_token_file = "/var/run/secrets/kubernetes.io/serviceaccount/token"
    use_tls = true
    insecure_skip_verify = true
    labels = { job="kubelet" }
    [[instances]]
    urls = ["https://127.0.0.1:10250/metrics/cadvisor"]
    bearer_token_file = "/var/run/secrets/kubernetes.io/serviceaccount/token"
    use_tls = true
    insecure_skip_verify = true
    labels = { job="cadvisor" }
    ## the new instances start here; KSM's ports speak plain HTTP, so no TLS options are needed
    [[instances]]
    urls = ["http://10.95.156.119:8080/metrics"]
    labels = { job="kube-state-metrics" }
    [[instances]]
    urls = ["http://10.95.156.119:8081/metrics"]
    labels = { job="kube-state-metrics-self" }
Then apply to regenerate the configuration and restart the categraf DaemonSet.
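The restart can be done without deleting pods by hand; assuming the configmap file name from before:
kubectl apply -f categraf-configmap-v2.yaml -n flashcat
kubectl rollout restart daemonset categraf-daemonset -n flashcat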
(3) Dashboards
Nightingale ships a dashboard json for KSM: categraf/inputs/kube_state_metrics at main · GitHub
Use the dashboard.json inside.
(4) Sharding logic
KSM has to read the information of every object in Kubernetes, which is a lot of data; on even a moderately large cluster, a pull from port 8080 returns a huge payload and can take ten seconds or several tens of seconds. KSM recently gained sharding support: instead of the single-replica Deployment used above, run a DaemonSet so that every node carries a KSM which only syncs data related to its own node. The official KSM README explains this clearly; the DaemonSet sample is below, so no further commentary:
apiVersion: apps/v1
kind: DaemonSet
spec:
  template:
    spec:
      containers:
        - image: registry.k8s.io/kube-state-metrics/kube-state-metrics:IMAGE_TAG
          name: kube-state-metrics
          args:
            - --resource=pods
            - --node=$(NODE_NAME)
          env:
            - name: NODE_NAME
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: spec.nodeName
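Besides this per-node mode, the KSM README also describes hash-based horizontal sharding through the --shard and --total-shards flags, typically run as a StatefulSet so each replica can derive its ordinal; a fragment of what the container args look like (a sketch based on the README, not a complete manifest):
args:
- --shard=0          # this replica's shard ordinal
- --total-shards=2   # total number of KSM shards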
Whether KSM itself is running healthily also needs alerting rules to catch; the project provides official alerting rules:
groups:
  - name: kube-state-metrics
    rules:
      - alert: KubeStateMetricsListErrors
        annotations:
          description: kube-state-metrics is experiencing errors at an elevated rate in list operations. This is likely causing it to not be able to expose metrics about Kubernetes objects correctly or at all.
          summary: kube-state-metrics is experiencing errors in list operations.
        expr: |
          (sum(rate(kube_state_metrics_list_total{job="kube-state-metrics",result="error"}[5m]))
            /
          sum(rate(kube_state_metrics_list_total{job="kube-state-metrics"}[5m])))
          > 0.01
        for: 15m
        labels:
          severity: critical
      - alert: KubeStateMetricsWatchErrors
        annotations:
          description: kube-state-metrics is experiencing errors at an elevated rate in watch operations. This is likely causing it to not be able to expose metrics about Kubernetes objects correctly or at all.
          summary: kube-state-metrics is experiencing errors in watch operations.
        expr: |
          (sum(rate(kube_state_metrics_watch_total{job="kube-state-metrics",result="error"}[5m]))
            /
          sum(rate(kube_state_metrics_watch_total{job="kube-state-metrics"}[5m])))
          > 0.01
        for: 15m
        labels:
          severity: critical
      - alert: KubeStateMetricsShardingMismatch
        annotations:
          description: kube-state-metrics pods are running with different --total-shards configuration, some Kubernetes objects may be exposed multiple times or not exposed at all.
          summary: kube-state-metrics sharding is misconfigured.
        expr: |
          stdvar (kube_state_metrics_total_shards{job="kube-state-metrics"}) != 0
        for: 15m
        labels:
          severity: critical
      - alert: KubeStateMetricsShardsMissing
        annotations:
          description: kube-state-metrics shards are missing, some Kubernetes objects are not being exposed.
          summary: kube-state-metrics shards are missing.
        expr: |
          2^max(kube_state_metrics_total_shards{job="kube-state-metrics"}) - 1
            -
          sum( 2 ^ max by (shard_ordinal) (kube_state_metrics_shard_ordinal{job="kube-state-metrics"}) )
          != 0
        for: 15m
        labels:
          severity: critical
KSM offers two ways to filter which objects it watches. The allowlist names the object types to watch via the startup flag --resources=daemonsets,deployments, meaning only daemonsets and deployments are watched. If, even with the object types restricted, some collected metrics are still unwanted, a denylist filters by metric name: --metric-denylist=kube_deployment_spec_.*
The filter accepts regular expressions; separate multiple regexes with commas.
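Put together, the two filters are just container args on the KSM Deployment; a fragment (the flag names are from the KSM docs, the regex is only an example):
args:
- --resources=daemonsets,deployments
- --metric-denylist=kube_deployment_spec_.*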
(四) Closing words
If you work through both of these posts on monitoring Kubernetes components with Nightingale, you will come away with a much better grasp of its monitoring features; this is a week of my own slow debugging and pit-filling, and it genuinely wasn't easy to put together! A like and a bookmark are much appreciated, and if you have questions, leave a comment; I answer everything I see.