
Deploying Prometheus + Grafana + Alertmanager Outside the Cluster to Monitor Kubernetes


Using a Prometheus server outside the cluster to monitor Kubernetes keeps the monitoring stack isolated from the cluster and reduces the resource overhead inside the cluster itself.

I. Environment Preparation

CentOS Linux release 7.7.1908 (Core), kernel 3.10.0-1062.el7.x86_64

Docker version 20.10.21

Hostname                    IP                    Notes
prometheus-server.test.cn   192.168.10.166
k8s cluster                 192.168.10.160:6443   cluster master VIP

II. Monitoring Metrics Overview

Monitoring metrics are collected through exporters across the following dimensions:

Dimension: Node performance
  Tool: node-exporter
  URL:  http://node-ip:9100/metrics
  Notes: node status

Dimension: Pod performance
  Tool: kubelet / cAdvisor
  URL:  https://192.168.10.160:6443/api/v1/nodes/node-name:10250/proxy/metrics
        https://192.168.10.160:6443/api/v1/nodes/node-name:10250/proxy/metrics/cadvisor
  Notes: container status

Dimension: Cluster resource objects
  Tool: kube-state-metrics
  URL:  http://192.168.10.160:30866/metrics
        http://192.168.10.160:30867/metrics
  Notes: state of Deployments, DaemonSets, and other objects

III. Kubernetes apiserver Authorization

Access to the Kubernetes apiserver requires authorization. A Prometheus instance running inside the cluster can use the in-cluster defaults, but a Prometheus server outside the cluster has to authenticate with a token plus the client certificate, so RBAC has to be set up first.

Since we need to query objects across namespaces, the simplest option is to bind the service account to cluster-admin to avoid running into missing permissions. The steps are as follows:

# Create the namespace
kubectl create ns devops
# Create the service account
kubectl create sa prometheus -n devops
# Bind the prometheus service account to the cluster-admin cluster role
kubectl create clusterrolebinding prometheus --clusterrole cluster-admin --serviceaccount=devops:prometheus
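
To confirm the binding took effect, you can impersonate the service account with kubectl auth can-i (an optional quick check):

# both should print "yes" once cluster-admin is bound
kubectl auth can-i list pods --all-namespaces --as=system:serviceaccount:devops:prometheus
kubectl auth can-i get nodes --as=system:serviceaccount:devops:prometheus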

Although the service account exists, the apiserver is not accessed with the service account itself but with its token. We therefore need the token that belongs to the prometheus service account; the value stored in its secret is base64-encoded and has to be decoded before use.

# 1. List the service accounts in the devops namespace
# kubectl get sa -n devops
NAME                 SECRETS   AGE
default              1         7d1h
prometheus           1         7d1h


# 2. Find the secret that belongs to the service account
# kubectl get sa prometheus -o yaml -n devops
apiVersion: v1
kind: ServiceAccount
metadata:
  creationTimestamp: "2022-11-28T14:01:29Z"
  name: prometheus
  namespace: devops
  resourceVersion: "2452117"
  uid: 949d3611-04fd-435f-8a93-df189ba27cdf
secrets:
- name: prometheus-token-c9f99

# 3. Get the token from the secret's YAML
# kubectl get secret  prometheus-token-c9f99 -o yaml -n devops
apiVersion: v1
data:
  ca.crt: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tL.................(omitted)
  namespace: ZGV2b3Bz
  token: ZXlKaGJHY2lPaUpTVXpJMU5pSXNJbXRwWkNJN...................(omitted)
kind: Secret
metadata:
  annotations:
    kubernetes.io/service-account.name: prometheus
    kubernetes.io/service-account.uid: 949d3611-04fd-435f-8a93-df189ba27cdf
  creationTimestamp: "2022-11-28T14:01:29Z"
  name: prometheus-token-c9f99
  namespace: devops
  resourceVersion: "2452116"
  uid: 43393401-e7f0-4b58-add5-b88b2abc302f
type: kubernetes.io/service-account-token

# 4. Decode the token
# The token in the secret is base64-encoded; decode it and keep the value, it will be needed later
# echo "ZXlKaGJHY2lPaUpTVXpJMU5pSXNJbXRwWkNJN...................(omitted)" |base64 -d

Save the decoded token into a file named k8s_token; it will later be used as the bearer_token when the Prometheus server is configured to talk to the Kubernetes apiserver.
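
A one-liner that pulls the token out of the secret and saves it (a sketch; the secret name suffix -c9f99 comes from the output above and will differ in your cluster). On Kubernetes 1.24 and later the secret is no longer created automatically; in that case `kubectl -n devops create token prometheus` can be used instead, but note that tokens created that way expire.

kubectl -n devops get secret prometheus-token-c9f99 -o jsonpath='{.data.token}' | base64 -d > k8s_token
# once /etc/prometheus/ exists on the Prometheus host (created in section V), copy the file there
scp k8s_token 192.168.10.166:/etc/prometheus/k8s_token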

IV. Installing the Metrics Collection Tools

1. Installing node-exporter for node metrics

node-exporter collects server-level metrics from the cluster nodes, such as CPU, memory, disk, and network traffic. Its metrics URL is http://node-ip:9100/metrics.

node-exporter could also be deployed on each server directly with Docker, but a standalone deployment scales poorly: every new node has to be set up by hand and the Prometheus server configuration has to be edited each time. We therefore deploy node-exporter as a DaemonSet in the Kubernetes cluster, which pairs nicely with Prometheus service discovery.

kind: DaemonSet
apiVersion: apps/v1
metadata:
  name: node-exporter
  annotations:
    prometheus.io/scrape: 'true'  # used by Prometheus service discovery
spec:
  selector:
    matchLabels:
      app: node-exporter
  template:
    metadata:
      labels:
        app: node-exporter
      name: node-exporter
    spec:
      containers:
      - image: quay.io/prometheus/node-exporter:latest
        name: node-exporter
        ports:
        - containerPort: 9100
          hostPort: 9100
          name: node-exporter
      hostNetwork: true  # use the host network so the exporter can be scraped directly via the node IP
      hostPID: true
      tolerations:  # tolerate the master taint so the exporter also runs on master nodes
      - key: "node-role.kubernetes.io/master"
        operator: "Exists"
        effect: "NoSchedule"

---
kind: Service
apiVersion: v1
metadata:
  annotations:
    prometheus.io/scrape: 'true'
  labels:
    app: node-exporter
  name: node-exporter
spec:
  type: ClusterIP
  clusterIP: None
  ports:
  - name: node-exporter
    port: 9100
    protocol: TCP
  selector:
    app: node-exporter

kubectl apply -f node-exporter.yaml -n devops

# kubectl get pod -n devops
NAME                                  READY   STATUS    RESTARTS       AGE
node-exporter-2bjln                   1/1     Running   1 (4d6h ago)   7d1h
node-exporter-784sc                   1/1     Running   0              7d1h
node-exporter-klts6                   1/1     Running   0              7d1h
node-exporter-nz29b                   1/1     Running   0              7d1h
node-exporter-tlgjn                   1/1     Running   0              7d1h
node-exporter-xnq67                   1/1     Running   0              7d1h
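
A quick spot-check that the exporters answer on the host network (a sketch; <node-ip> is any cluster node's address):

curl -s http://<node-ip>:9100/metrics | head
# e.g. look for a familiar series
curl -s http://<node-ip>:9100/metrics | grep '^node_load1'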

2. Pod metrics collection (kubelet / cAdvisor)

Pod metrics are collected by cAdvisor, which is built into the kubelet: when the kubelet starts, cAdvisor starts with it, and each cAdvisor instance only monitors its own node. cAdvisor automatically discovers every container on its node and collects CPU, memory, filesystem, and network usage statistics; through the node's root cgroup it also reports the overall usage of the machine.

The kubelet itself exposes monitoring metrics as well, so pod monitoring data comes from both the kubelet and cAdvisor, at the following URLs (proxied through the apiserver):

https://192.168.10.160:6443/api/v1/nodes/node-name:10250/proxy/metrics

https://192.168.10.160:6443/api/v1/nodes/node-name:10250/proxy/metrics/cadvisor

Because the kubelet is already present on every node, these endpoints can be used directly; nothing extra needs to be installed or configured.
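
The kubelet endpoints can be spot-checked through the apiserver proxy using the token obtained in section III (a sketch; <node-name> is any name shown by kubectl get nodes):

TOKEN=$(cat k8s_token)
curl -sk -H "Authorization: Bearer ${TOKEN}" \
  "https://192.168.10.160:6443/api/v1/nodes/<node-name>:10250/proxy/metrics" | head
curl -sk -H "Authorization: Bearer ${TOKEN}" \
  "https://192.168.10.160:6443/api/v1/nodes/<node-name>:10250/proxy/metrics/cadvisor" | head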

3. Installing kube-state-metrics for cluster resource objects

kube-state-metrics is a simple service that listens to the Kubernetes API server and generates metrics about the state of the objects in the cluster. It is not concerned with the health of individual Kubernetes components, but with the state of objects such as Deployments, nodes, and pods.

I am using kube-state-metrics 2.4.2 here; pick the release that matches your cluster version using the compatibility matrix at GitHub - kubernetes/kube-state-metrics: Add-on agent to generate and expose cluster-level metrics.

kube-state-metrics   Kubernetes client-go version
v2.3.0               v1.23
v2.4.2               v1.23
v2.5.0               v1.24
v2.6.0               v1.24
v2.7.0               v1.25
master               v1.25
# Download
https://github.com/kubernetes/kube-state-metrics/archive/refs/tags/v2.4.2.zip

# Unpack
unzip kube-state-metrics-2.4.2.zip
cd kube-state-metrics-2.4.2/examples/standard

# ls
cluster-role-binding.yaml  cluster-role.yaml  deployment.yaml  service-account.yaml  service.yaml
# There are 5 files; in 4 of them, set the namespace to the devops namespace created earlier
# (one way to do this in bulk is sketched after the grep output below)
# grep devops *
cluster-role-binding.yaml:  namespace: devops
deployment.yaml:  namespace: devops
service-account.yaml:  namespace: devops
service.yaml:  namespace: devops
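
One way to make the namespace change in bulk (a sketch; it assumes the upstream manifests ship with namespace: kube-system, the default in the kube-state-metrics standard examples):

cd kube-state-metrics-2.4.2/examples/standard
sed -i 's/namespace: kube-system/namespace: devops/' *.yaml
grep -n 'namespace: devops' *.yaml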

# Modify service.yaml and switch the service to NodePort so the Prometheus server outside the cluster can reach it

apiVersion: v1
kind: Service
metadata:
  labels:
    app.kubernetes.io/component: exporter
    app.kubernetes.io/name: kube-state-metrics
    app.kubernetes.io/version: 2.4.2
  name: kube-state-metrics
  namespace: devops
spec:
  #clusterIP: None
  type: NodePort  # expose via NodePort instead of a headless ClusterIP
  ports:
  - name: http-metrics
    port: 8080
    targetPort: http-metrics
    nodePort: 30866  # fixed node port for the http-metrics endpoint
  - name: telemetry
    port: 8081
    targetPort: telemetry
    nodePort: 30867  # fixed node port for the telemetry endpoint
  selector:
    app.kubernetes.io/name: kube-state-metrics

# apply all manifests in the directory
kubectl apply -f .

# kubectl get pod -n devops
NAME                                  READY   STATUS    RESTARTS       AGE
kube-state-metrics-554c4b8c57-5cwld   1/1     Running   0              7d
node-exporter-2bjln                   1/1     Running   1 (4d6h ago)   7d1h
node-exporter-784sc                   1/1     Running   0              7d1h
node-exporter-klts6                   1/1     Running   0              7d1h
node-exporter-nz29b                   1/1     Running   0              7d1h
node-exporter-tlgjn                   1/1     Running   0              7d1h
node-exporter-xnq67                   1/1     Running   0              7d1h
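
From the Prometheus host (192.168.10.166) you can verify that the NodePorts are reachable before wiring them into the scrape config (a sketch; kube_deployment_status_replicas is one of the standard kube-state-metrics series):

curl -s http://192.168.10.160:30866/metrics | grep -m 5 '^kube_deployment_status_replicas'
curl -s http://192.168.10.160:30867/metrics | head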

V. Deploying Prometheus + Grafana + Alertmanager

1. Deploying Prometheus

SSH into 192.168.10.166 and run Prometheus with Docker (note that with --restart=always the container will keep restarting until /etc/prometheus/prometheus.yml exists, so create the configuration right after starting it):

mkdir /etc/prometheus/
docker run -d -p 9090:9090 -v /etc/prometheus/:/etc/prometheus/ --restart=always --name=prometheus --net=bridge prom/prometheus

2. Editing the Prometheus configuration

vim /etc/prometheus/prometheus.yml

# my global config
global:
  scrape_interval: 15s # Set the scrape interval to every 15 seconds. Default is every 1 minute.
  evaluation_interval: 15s # Evaluate rules every 15 seconds. The default is every 1 minute.
  # scrape_timeout is set to the global default (10s).

# Alertmanager configuration
alerting:
  alertmanagers:
    - static_configs:
        - targets:
           - 192.168.10.166:9093

# Load rules once and periodically evaluate them according to the global 'evaluation_interval'.
rule_files:
  # - "first_rules.yml"
  # - "second_rules.yml"
    - "rules/node_exporter.yml"
    - "rules/process_exporter.yml"
    - "rules/pod_exporter.yml"

# A scrape configuration containing exactly one endpoint to scrape:
# Here it's Prometheus itself.
scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: "prometheus"
    static_configs:
      - targets: ["192.168.10.166:9090"]

# kube-state-metrics scrape job
  - job_name: "kube-state-metrics"
    static_configs:
      - targets: ["192.168.10.160:30866","192.168.10.160:30867"]

# API server metrics
  - job_name: 'kubernetes-apiservers-monitor'
    kubernetes_sd_configs:
    - role: endpoints
      api_server: https://192.168.10.160:6443
      tls_config:
        insecure_skip_verify: true
      bearer_token_file: k8s_token  # the k8s_token file generated earlier, stored under /etc/prometheus/
    scheme: https
    tls_config:
      insecure_skip_verify: true
    bearer_token_file: k8s_token
    relabel_configs:
    - source_labels: [__meta_kubernetes_namespace, __meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
      action: keep
      regex: default;kubernetes;https
    - source_labels: [__address__]
      regex: '(.*):6443'
      replacement: '${1}:9100'
      target_label: __address__
      action: replace
    - source_labels: [__scheme__]
      regex: https
      replacement: http
      target_label: __scheme__
      action: replace

# node metrics via node-exporter
  - job_name: 'kubernetes-nodes-monitor'
    scheme: http
    tls_config:
      insecure_skip_verify: true
    bearer_token_file: k8s_token
    kubernetes_sd_configs:
    - role: node
      api_server: https://192.168.10.160:6443
      tls_config:
        insecure_skip_verify: true
      bearer_token_file: k8s_token
    relabel_configs:
      - source_labels: [__address__]
        regex: '(.*):10250'
        replacement: '${1}:9100'
        target_label: __address__
        action: replace
      - source_labels: [__meta_kubernetes_node_label_failure_domain_beta_kubernetes_io_region]
        regex: '(.*)'
        replacement: '${1}'
        action: replace
        target_label: LOC
      - source_labels: [__meta_kubernetes_node_label_failure_domain_beta_kubernetes_io_region]
        regex: '(.*)'
        replacement: 'NODE'
        action: replace
        target_label: Type
      - source_labels: [__meta_kubernetes_node_label_failure_domain_beta_kubernetes_io_region]
        regex: '(.*)'
        replacement: 'K8S-test'
        action: replace
        target_label: Env
      - action: labelmap
        regex: __meta_kubernetes_node_label_(.+)

# pod metrics
# kubelet
  - job_name: "kube-node-kubelet"
    scheme: https
    tls_config:
      insecure_skip_verify: true
    bearer_token_file: k8s_token
    kubernetes_sd_configs:
    - role: node
      api_server: "https://192.168.10.160:6443"
      tls_config:
        insecure_skip_verify: true
      bearer_token_file: k8s_token
    relabel_configs:
    - target_label: __address__
      # replace the default __address__ with the apiserver address
      replacement: 192.168.10.160:6443
    - source_labels: [__meta_kubernetes_node_name]
      regex: (.+)
      # rewrite __metrics_path__ to go through the apiserver node proxy
      target_label: __metrics_path__
      replacement: /api/v1/nodes/${1}:10250/proxy/metrics
    - action: labelmap
      regex: __meta_kubernetes_service_label_(.+)
    - source_labels: [__meta_kubernetes_namespace]
      action: replace
      target_label: kubernetes_namespace
    - source_labels: [__meta_kubernetes_service_name]
      action: replace
      target_label: service_name

# cadvisor
  - job_name: "kube-node-cadvisor"
    scheme: https
    tls_config:
      insecure_skip_verify: true
    bearer_token_file: k8s_token
    kubernetes_sd_configs:
    - role: node
      api_server: "https://192.168.10.160:6443"
      tls_config:
        insecure_skip_verify: true
      bearer_token_file: k8s_token
    relabel_configs:
    - target_label: __address__
      # replace the default __address__ with the apiserver address
      replacement: 192.168.10.160:6443
    - source_labels: [__meta_kubernetes_node_name]
      regex: (.+)
      # rewrite __metrics_path__ to go through the apiserver node proxy
      target_label: __metrics_path__
      replacement: /api/v1/nodes/${1}:10250/proxy/metrics/cadvisor
    - action: labelmap
      regex: __meta_kubernetes_service_label_(.+)
    - source_labels: [__meta_kubernetes_namespace]
      action: replace
      target_label: kubernetes_namespace
    - source_labels: [__meta_kubernetes_service_name]
      action: replace
      target_label: service_name

After editing the configuration file, restart the Prometheus service and check the logs; once you see "Server is ready to receive web requests", the service is healthy:
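
A minimal restart-and-check sequence (a sketch; promtool ships inside the prom/prometheus image, and the config check will complain about the rules/*.yml files until they are created in section V.4):

docker exec prometheus promtool check config /etc/prometheus/prometheus.yml
docker restart prometheus
docker logs --tail 20 prometheus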

on=2.641481066s wal_replay_duration=5.355716351s total_replay_duration=8.304813832s
ts=2022-12-06T04:49:14.648Z caller=main.go:945 level=info fs_type=EXT4_SUPER_MAGIC
ts=2022-12-06T04:49:14.649Z caller=main.go:948 level=info msg="TSDB started"
ts=2022-12-06T04:49:14.649Z caller=main.go:1129 level=info msg="Loading configuration file" filename=/etc/prometheus/prometheus.yml
ts=2022-12-06T04:49:14.665Z caller=main.go:1166 level=info msg="Completed loading of configuration file" filename=/etc/prometheus/prometheus.yml totalDuration=16.486363ms db_storage=3.223μs remote_storage=32.417μs web_handler=2.464μs query_engine=3.066μs scrape=757.183μs scrape_sd=1.55814ms notify=60.724μs notify_sd=29.287μs rules=11.47234ms
ts=2022-12-06T04:49:14.665Z caller=main.go:897 level=info msg="Server is ready to receive web requests."

Once Prometheus is up, open http://192.168.10.166:9090/targets in a browser.

If every target shows the UP state, metrics are being collected correctly.

[Screenshot: Prometheus targets page with all targets UP]

3. Deploying Grafana

mkdir -p /opt/grafana-storage/
# the initial password is set to admin; you will be asked to choose a new one on first login
docker run -d -p 3000:3000 --restart=always --name prom-grafana -v /opt/grafana-storage:/var/lib/grafana -v /etc/localtime:/etc/localtime -e "GF_SECURITY_ADMIN_PASSWORD=admin" grafana/grafana

After deployment, open http://192.168.10.166:3000/ in a browser to reach Grafana. The initial username and password are both admin, and you will be prompted to set a new password on first login.

Add the Prometheus data source:

[Screenshot: adding the Prometheus data source in Grafana]
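
The data source can also be added without the UI, through Grafana's HTTP API (a sketch; replace <your-password> with the admin password you set on first login):

curl -s -u admin:<your-password> -H 'Content-Type: application/json' \
  -X POST http://192.168.10.166:3000/api/datasources \
  -d '{"name":"Prometheus","type":"prometheus","url":"http://192.168.10.166:9090","access":"proxy","isDefault":true}'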

You can then import dashboards from https://grafana.com/ or build your own.

4. Deploying Alertmanager

Alertmanager handles alert notifications by e-mail, WeChat, DingTalk, and other integrations. Only e-mail notifications are covered here.

SSH into 192.168.10.166:

mkdir -p /etc/alertmanager
mkdir -p /etc/alertmanager/template
docker run -d --restart=always --name=alertmanager -p 9093:9093 -v /etc/alertmanager:/etc/alertmanager -v /etc/localtime:/etc/localtime prom/alertmanager

# Edit the configuration file

vim /etc/alertmanager/alertmanager.yml

global:
  resolve_timeout: 5m
  smtp_smarthost: 'mail.test.cn:2525'
  smtp_from: 'monitor@test.cn'
  smtp_auth_username: 'monitor@test.cn'
  smtp_auth_password:  'xxxxxxxxxxx'
  smtp_require_tls: false

templates:
  - 'template/*.tmpl'

route:
  group_by: ['alertname']
  group_wait: 5s
  group_interval: 5s
  repeat_interval: 1m
  receiver: 'email'

receivers:
 - name: 'email'
   email_configs:
   - to: 'test01@test.cn,it@test.cn'
     html: '{{ template "email.html" . }}'
     send_resolved: true

inhibit_rules:
  - source_match:
      severity: 'critical'
    target_match:
      severity: 'warning'
    equal: ['alertname', 'dev', 'instance']

vim /etc/alertmanager/template/main.tmpl

{{ define "email.html" }}
{{- if gt (len .Alerts.Firing) 0 -}}
{{- range $index, $alert := .Alerts -}}
{{- if eq $index 0 -}}
********** Alert Notification **********  <br>
Alert name: {{ $alert.Labels.alertname }}  <br>
Severity: {{ $alert.Labels.severity }}   <br>
{{- end }}
=====================  <br>
Summary: {{ $alert.Annotations.summary }}  <br>
Description: {{ $alert.Annotations.description }}  <br>
Started at: {{ $alert.StartsAt.Local }}  <br>
{{ if gt (len $alert.Labels.instance) 0 -}}
Instance: {{ $alert.Labels.instance }}  <br>
{{- end -}}
{{- end }}
{{- end }}

{{- if gt (len .Alerts.Resolved) 0 -}}
{{- range $index, $alert := .Alerts -}}
{{- if eq $index 0 -}}
********** Resolved Notification **********  <br>
Alert name: {{ $alert.Labels.alertname }}  <br>
Severity: {{ $alert.Labels.severity }}  <br>
{{- end }}
=====================  <br>
Summary: {{ $alert.Annotations.summary }}  <br>
Description: {{ $alert.Annotations.description }}  <br>
Started at: {{ $alert.StartsAt.Local }}  <br>
Resolved at: {{ $alert.EndsAt.Local }}  <br>
{{ if gt (len $alert.Labels.instance) 0 -}}
Instance: {{ $alert.Labels.instance }} <br>
{{- end -}}
{{- end }}
{{- end }}
{{- end }}

After updating the configuration, restart the alertmanager container and open http://192.168.10.166:9093/#/status in a browser:
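
A minimal check-and-restart sequence (a sketch; amtool is bundled in the prom/alertmanager image):

docker exec alertmanager amtool check-config /etc/alertmanager/alertmanager.yml
docker restart alertmanager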

[Screenshot: Alertmanager status page]

Alertmanager can now send e-mail, but Prometheus still needs alerting rules that define when to fire: host down, CPU, memory, or disk usage above a threshold, and so on.

# Edit the main Prometheus configuration file and add the following

# Alertmanager configuration
alerting:
  alertmanagers:
    - static_configs:
        - targets:
           - 192.168.10.166:9093  # the Alertmanager endpoint to send alerts to

# Load rules once and periodically evaluate them according to the global 'evaluation_interval'.
rule_files:
  # - "first_rules.yml"
  # - "second_rules.yml"
    - "rules/node_exporter.yml"  #node資源報警規(guī)則
    - "rules/process_exporter.yml" #服務進程資源報警規(guī)則
    - "rules/pod_exporter.yml"   #k8s里的服務狀誠資源報警規(guī)則

# Put all of these rule files in the rules directory

mkdir -p /etc/prometheus/rules/

vim /etc/prometheus/rules/node_exporter.yml

groups:
  - name: host-monitoring
    rules:
    - alert: hostsDown
      expr: up == 0
      for: 1m
      annotations:
         summary: "Host {{ $labels.hostname }} ({{ $labels.instance }}) is down"

    - alert: MemoryUsageHigh
      expr: node_memory_MemAvailable_bytes / node_memory_MemTotal_bytes * 100 < 10
      for: 1m
      labels:
         severity: warning
      annotations:
         summary: "Memory usage > 90%"
         description: "Host: {{ $labels.hostname }}, {{ $labels.instance }}, current value: {{ humanize $value }}"

    - alert: CPUUsageHigh
      expr: 100 - (avg by(instance) (rate(node_cpu_seconds_total{mode="idle"}[2m])) * 100) > 60
      for: 1m
      labels:
         severity: warning
      annotations:
         summary: "CPU usage > 60%"
         description: "Host: {{ $labels.hostname }}, {{ $labels.instance }}, current value: {{ humanize $value }}"

    - alert: DiskUsageHigh
      expr: (node_filesystem_avail_bytes * 100) / node_filesystem_size_bytes < 10 and ON (instance, device, mountpoint) node_filesystem_readonly == 0
      for: 1m
      labels:
         severity: warning
      annotations:
         summary: "Disk usage > 90%"
         description: "Host: {{ $labels.hostname }}, {{ $labels.instance }}, current value: {{ humanize $value }}"

vim /etc/prometheus/rules/pod_exporter.yml

groups:
- name: node.rules
  rules:
  - alert: JobDown # fires when a job's metrics endpoint has been unreachable for 5 minutes; the alert is sent to Alertmanager
    expr: up == 0  # 0 means down, 1 means up
    for: 5m  # the condition must hold for 5 minutes before the alert fires
    labels:
      severity: error
      #cluster: k8s
    annotations:
      summary: "Job: {{ $labels.job }} down"
      description: "Instance: {{ $labels.instance }}, Job {{ $labels.job }} has stopped"
  - alert: PodDown
    expr: kube_pod_container_status_running != 1
    for: 2s
    labels:
      severity: warning
      #cluster: k8s
    annotations:
      summary: 'Container: {{ $labels.container }} down'
      description: 'Namespace: {{ $labels.namespace }}, Pod: {{ $labels.pod }} is not running'
  - alert: PodNotReady
    expr: kube_pod_container_status_ready != 1
    for: 5m   # not Ready for 5 minutes usually indicates a startup problem
    labels:
      severity: warning
      #cluster: k8s
    annotations:
      summary: 'Container: {{ $labels.container }} not ready'
      description: 'Namespace: {{ $labels.namespace }}, Pod: {{ $labels.pod }} has not become ready for 5 minutes'
  - alert: PodRestart
    expr: changes(kube_pod_container_status_restarts_total[30m])>0 # pod restarted within the last 30 minutes
    for: 2s
    labels:
      severity: warning
      #cluster: k8s
    annotations:
      summary: 'Container: {{ $labels.container }} restart'
      description: 'namespace: {{ $labels.namespace }}, pod: {{ $labels.pod }} restarted {{ $value }} times'

vim /etc/prometheus/rules/process_exporter.yml

These rules rely on the namedprocess_namegroup_num_procs metric, which is exposed by process-exporter; process-exporter itself has to be deployed separately and is not covered in this article.

groups:
  - name: Server-monitoring
    rules:
    - alert: etcd
      expr: (namedprocess_namegroup_num_procs{groupname="map[:etcd]"}) == 0       ## the groupname must be map[:<process-name>], matching the name configured in process-exporter
      for: 30s
      labels:
         severity: error
      annotations:
         summary: "{{ $labels.instance }}: etcd進程服務掛了,已經(jīng)超過30秒"
         value: "{{ $value }}"
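
Before restarting, the rule files can be validated with promtool (a sketch, using the promtool binary inside the running Prometheus container):

docker exec prometheus promtool check rules \
  /etc/prometheus/rules/node_exporter.yml \
  /etc/prometheus/rules/pod_exporter.yml \
  /etc/prometheus/rules/process_exporter.yml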

Finally, restart the Prometheus service. The active rules can then be seen at http://192.168.10.166:9090/rules#host-monitoring.
