
Deploying cAdvisor + Prometheus + Grafana Monitoring on a k8s Cluster

This article walks through deploying a cAdvisor + Prometheus + Grafana monitoring stack on a Kubernetes cluster. If you spot errors or omissions, feedback is welcome.

Contents

1. Create the monitor namespace

2. Deployment

2.1 Deploy cAdvisor

2.2 Deploy node_exporter

2.3 Deploy Prometheus

2.4 Deploy RBAC permissions

2.5 Deploy kube-state-metrics

2.6 Deploy Grafana

3. Test the monitoring


Reference:

k8s集群部署cadvisor+node-exporter+prometheus+grafana監(jiān)控系統(tǒng) - cyh00001 - 博客園

Preparation:

Cluster nodes:

master: 192.168.136.21 (all steps below are performed on this node)

worker: 192.168.136.22

worker: 192.168.136.23

Tip: if vim mangles indentation when you paste YAML, run :set paste before pasting and :set nopaste afterwards (the default).

1. Create the monitor namespace

kubectl create ns monitor
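Equivalently, the namespace can be declared as a manifest and applied with kubectl apply -f, which keeps it alongside the other YAML files in this guide:

```yaml
# Namespace for all monitoring components in this guide
apiVersion: v1
kind: Namespace
metadata:
  name: monitor
```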


Pull the cAdvisor image. The official image is hosted on Google's registry, which is not reachable from mainland China, so a mirrored copy is used here; note the image name is lagoudocker/cadvisor:v0.37.0.

docker pull lagoudocker/cadvisor:v0.37.0


2. Deployment

There are quite a few config files, so create a dedicated directory, /opt/cadvisor_prome_gra, to keep them together.

2.1 Deploy cAdvisor

Deploy cAdvisor as a DaemonSet. A DaemonSet guarantees that every node in the cluster runs one copy of the same pod, and newly joined nodes automatically get one as well.

vim case1-daemonset-deploy-cadvisor.yaml

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: cadvisor
  namespace: monitor
spec:
  selector:
    matchLabels:
      app: cAdvisor
  template:
    metadata:
      labels:
        app: cAdvisor
    spec:
      tolerations:    # tolerate the master taint (ignore its NoSchedule effect)
        - effect: NoSchedule
          key: node-role.kubernetes.io/master
      hostNetwork: true
      restartPolicy: Always   # restart policy
      containers:
      - name: cadvisor
        image: lagoudocker/cadvisor:v0.37.0
        imagePullPolicy: IfNotPresent  # image pull policy
        ports:
        - containerPort: 8080
        volumeMounts:
          - name: root
            mountPath: /rootfs
          - name: run
            mountPath: /var/run
          - name: sys
            mountPath: /sys
          - name: docker
            mountPath: /var/lib/containerd
      volumes:
      - name: root
        hostPath:
          path: /
      - name: run
        hostPath:
          path: /var/run
      - name: sys
        hostPath:
          path: /sys
      - name: docker
        hostPath:
          path: /var/lib/containerd

kubectl apply -f case1-daemonset-deploy-cadvisor.yaml

Check with: kubectl get pod -n monitor -o wide

With three nodes there will be three pods; if worker nodes are added later, the DaemonSet creates pods on them automatically.


Test cAdvisor at <masterIP>:8080.


2.2 Deploy node_exporter

Deploy node-exporter as a DaemonSet plus a Service.

vim case2-daemonset-deploy-node-exporter.yaml

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-exporter
  namespace: monitor
  labels:
    k8s-app: node-exporter
spec:
  selector:
    matchLabels:
      k8s-app: node-exporter
  template:
    metadata:
      labels:
        k8s-app: node-exporter
    spec:
      tolerations:
        - effect: NoSchedule
          key: node-role.kubernetes.io/master
      containers:
      - image: prom/node-exporter:v1.3.1 
        imagePullPolicy: IfNotPresent
        name: prometheus-node-exporter
        ports:
        - containerPort: 9100
          hostPort: 9100
          protocol: TCP
          name: metrics
        volumeMounts:
        - mountPath: /host/proc
          name: proc
        - mountPath: /host/sys
          name: sys
        - mountPath: /host
          name: rootfs
        args:
        - --path.procfs=/host/proc
        - --path.sysfs=/host/sys
        - --path.rootfs=/host
      volumes:
        - name: proc
          hostPath:
            path: /proc
        - name: sys
          hostPath:
            path: /sys
        - name: rootfs
          hostPath:
            path: /
      hostNetwork: true
      hostPID: true
---
apiVersion: v1
kind: Service
metadata:
  annotations:
    prometheus.io/scrape: "true"
  labels:
    k8s-app: node-exporter
  name: node-exporter
  namespace: monitor
spec:
  type: NodePort
  ports:
  - name: http
    port: 9100
    nodePort: 39100
    protocol: TCP
  selector:
    k8s-app: node-exporter

kubectl get pod -n monitor


Verify the node-exporter data; note the port is 9100: <nodeIP>:9100


2.3 Deploy Prometheus

The Prometheus resources comprise a ConfigMap, a Deployment, and a Service.

vim case3-1-prometheus-cfg.yaml

---
kind: ConfigMap
apiVersion: v1
metadata:
  labels:
    app: prometheus
  name: prometheus-config
  namespace: monitor 
data:
  prometheus.yml: |
    global:
      scrape_interval: 15s
      scrape_timeout: 10s
      evaluation_interval: 1m
    scrape_configs:
    - job_name: 'kubernetes-node'
      kubernetes_sd_configs:
      - role: node
      relabel_configs:
      - source_labels: [__address__]
        regex: '(.*):10250'
        replacement: '${1}:9100'
        target_label: __address__
        action: replace
      - action: labelmap
        regex: __meta_kubernetes_node_label_(.+)
    - job_name: 'kubernetes-node-cadvisor'
      kubernetes_sd_configs:
      - role:  node
      scheme: https
      tls_config:
        ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
      bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
      relabel_configs:
      - action: labelmap
        regex: __meta_kubernetes_node_label_(.+)
      - target_label: __address__
        replacement: kubernetes.default.svc:443
      - source_labels: [__meta_kubernetes_node_name]
        regex: (.+)
        target_label: __metrics_path__
        replacement: /api/v1/nodes/${1}/proxy/metrics/cadvisor
    - job_name: 'kubernetes-apiserver'
      kubernetes_sd_configs:
      - role: endpoints
      scheme: https
      tls_config:
        ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
      bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
      relabel_configs:
      - source_labels: [__meta_kubernetes_namespace, __meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
        action: keep
        regex: default;kubernetes;https
    - job_name: 'kubernetes-service-endpoints'
      kubernetes_sd_configs:
      - role: endpoints
      relabel_configs:
      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape]
        action: keep
        regex: true
      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scheme]
        action: replace
        target_label: __scheme__
        regex: (https?)
      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_path]
        action: replace
        target_label: __metrics_path__
        regex: (.+)
      - source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port]
        action: replace
        target_label: __address__
        regex: ([^:]+)(?::\d+)?;(\d+)
        replacement: $1:$2
      - action: labelmap
        regex: __meta_kubernetes_service_label_(.+)
      - source_labels: [__meta_kubernetes_namespace]
        action: replace
        target_label: kubernetes_namespace
      - source_labels: [__meta_kubernetes_service_name]
        action: replace
        target_label: kubernetes_service_name
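The relabel rules above can be sanity-checked locally. This is only a rough illustration: Prometheus uses RE2 with full-string anchoring while sed performs ERE substring substitution, and the last pattern is rewritten without the non-capturing group (sed's ERE has none), with the config's `${1}` references becoming sed backreferences:

```shell
# 'kubernetes-node' job: swap the kubelet port 10250 for node-exporter's 9100
echo "192.168.136.22:10250" | sed -E 's/(.*):10250/\1:9100/'
# -> 192.168.136.22:9100

# 'kubernetes-node-cadvisor' job: build the apiserver proxy metrics path
# from the discovered node name
echo "k8s-node1" | sed -E 's#(.+)#/api/v1/nodes/\1/proxy/metrics/cadvisor#'
# -> /api/v1/nodes/k8s-node1/proxy/metrics/cadvisor

# 'kubernetes-service-endpoints' job: replace the endpoint's port with the
# port taken from the prometheus.io/port annotation (here "9100")
echo "10.2.0.5:8080;9100" | sed -E 's/([^:]+)(:[0-9]+)?;([0-9]+)/\1:\3/'
# -> 10.2.0.5:9100
```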

Note: in the case3-2 manifest, remember to adjust nodeName: k8s-master to your own master's hostname; do not replace it with the host's IP address (reason unknown).

Use /data/prometheusdata on the 192.168.136.21 (k8s-master) node as the Prometheus data directory, and create it on that node first (mkdir -p /data/prometheusdata); the volume is declared with type: Directory, so the kubelet will not create it for you.


vim case3-2-prometheus-deployment.yaml

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: prometheus-server
  namespace: monitor
  labels:
    app: prometheus
spec:
  replicas: 1
  selector:
    matchLabels:
      app: prometheus
      component: server
    #matchExpressions:
    #- {key: app, operator: In, values: [prometheus]}
    #- {key: component, operator: In, values: [server]}
  template:
    metadata:
      labels:
        app: prometheus
        component: server
      annotations:
        prometheus.io/scrape: 'false'
    spec:
      nodeName: k8s-master
      serviceAccountName: monitor
      containers:
      - name: prometheus
        image: prom/prometheus:v2.31.2
        imagePullPolicy: IfNotPresent
        command:
          - prometheus
          - --config.file=/etc/prometheus/prometheus.yml
          - --storage.tsdb.path=/prometheus
          - --storage.tsdb.retention=720h
        ports:
        - containerPort: 9090
          protocol: TCP
        volumeMounts:
        - mountPath: /etc/prometheus/prometheus.yml
          name: prometheus-config
          subPath: prometheus.yml
        - mountPath: /prometheus/
          name: prometheus-storage-volume
      volumes:
        - name: prometheus-config
          configMap:
            name: prometheus-config
            items:
              - key: prometheus.yml
                path: prometheus.yml
                mode: 0644
        - name: prometheus-storage-volume
          hostPath:
           path: /data/prometheusdata
           type: Directory

Create the ServiceAccount and the ClusterRoleBinding:

kubectl create serviceaccount monitor -n monitor

kubectl create clusterrolebinding monitor-clusterrolebinding -n monitor --clusterrole=cluster-admin --serviceaccount=monitor:monitor
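For reference, the two imperative commands above correspond roughly to the following manifest (note that this grants the broad cluster-admin role to the monitor service account):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: monitor
  namespace: monitor
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: monitor-clusterrolebinding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: monitor
  namespace: monitor
```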

kubectl apply -f case3-2-prometheus-deployment.yaml


The case3-2 step has a big pitfall: with nodeName set to "k8s-master" scheduling worked, but with "192.168.136.21" it did not: the Deployment's pod never started, and its logs reported that the host "192.168.136.21" could not be found. Switching back to "k8s-master" did not immediately help either; a few days (and a reboot) later it suddenly worked. (Root cause unknown.)


vim case3-3-prometheus-svc.yaml

---
apiVersion: v1
kind: Service
metadata:
  name: prometheus
  namespace: monitor
  labels:
    app: prometheus
spec:
  type: NodePort
  ports:
    - port: 9090
      targetPort: 9090
      nodePort: 30090
      protocol: TCP
  selector:
    app: prometheus
    component: server

kubectl apply -f case3-3-prometheus-svc.yaml


2.4 Deploy RBAC permissions

This includes a Secret, a ServiceAccount, a ClusterRole, and a ClusterRoleBinding. The ServiceAccount is the service account, the ClusterRole defines the permission rules, and the ClusterRoleBinding binds the ServiceAccount to the ClusterRole.

The credentials a pod uses to authenticate to the apiserver are defined in a Secret. Because they are sensitive, they are stored in a Secret resource and mounted into the pod as a volume; the application inside the pod then uses the mounted credentials to connect to the apiserver and authenticate.
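Concretely, in every pod that runs with a ServiceAccount the kubelet mounts these credentials at a fixed, well-known path; the ca_file and bearer_token_file entries in the case3-1 scrape config point at exactly these files:

```shell
# Standard in-pod ServiceAccount mount path (a Kubernetes constant)
SA_DIR=/var/run/secrets/kubernetes.io/serviceaccount
echo "$SA_DIR/token"   # bearer token used to authenticate to the apiserver
echo "$SA_DIR/ca.crt"  # CA bundle used to verify the apiserver certificate
```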

RBAC is Kubernetes' authorization system; the above is only a brief sketch. For a deeper look, see: k8s APIserver 安全機(jī)制之 rbac 授權(quán)_笨小孩@GF 知行合一的博客-CSDN博客

vim case4-prom-rbac.yaml

apiVersion: v1
kind: ServiceAccount
metadata:
  name: prometheus
  namespace: monitor

---
apiVersion: v1
kind: Secret
type: kubernetes.io/service-account-token
metadata:
  name: monitor-token
  namespace: monitor
  annotations:
    kubernetes.io/service-account.name: "prometheus"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: prometheus
rules:
- apiGroups:
  - ""
  resources:
  - nodes
  - services
  - endpoints
  - pods
  - nodes/proxy
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - "extensions"
  resources:
    - ingresses
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - configmaps
  - nodes/metrics
  verbs:
  - get
- nonResourceURLs:
  - /metrics
  verbs:
  - get
---
#apiVersion: rbac.authorization.k8s.io/v1beta1
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: prometheus
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: prometheus
subjects:
- kind: ServiceAccount
  name: prometheus
  namespace: monitor

kubectl apply -f case4-prom-rbac.yaml


2.5 Deploy kube-state-metrics

This includes a Deployment, a Service, a ServiceAccount, a ClusterRole, and a ClusterRoleBinding.

Note that these resources are deployed in the kube-system namespace!


vim case5-kube-state-metrics-deploy.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: kube-state-metrics
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kube-state-metrics
  template:
    metadata:
      labels:
        app: kube-state-metrics
    spec:
      serviceAccountName: kube-state-metrics
      containers:
      - name: kube-state-metrics
        image: registry.cn-hangzhou.aliyuncs.com/zhangshijie/kube-state-metrics:v2.6.0 
        ports:
        - containerPort: 8080

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: kube-state-metrics
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: kube-state-metrics
rules:
- apiGroups: [""]
  resources: ["nodes", "pods", "services", "resourcequotas", "replicationcontrollers", "limitranges", "persistentvolumeclaims", "persistentvolumes", "namespaces", "endpoints"]
  verbs: ["list", "watch"]
- apiGroups: ["extensions"]
  resources: ["daemonsets", "deployments", "replicasets"]
  verbs: ["list", "watch"]
- apiGroups: ["apps"]
  resources: ["statefulsets"]
  verbs: ["list", "watch"]
- apiGroups: ["batch"]
  resources: ["cronjobs", "jobs"]
  verbs: ["list", "watch"]
- apiGroups: ["autoscaling"]
  resources: ["horizontalpodautoscalers"]
  verbs: ["list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kube-state-metrics
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kube-state-metrics
subjects:
- kind: ServiceAccount
  name: kube-state-metrics
  namespace: kube-system

---
apiVersion: v1
kind: Service
metadata:
  annotations:
    prometheus.io/scrape: 'true'
  name: kube-state-metrics
  namespace: kube-system
  labels:
    app: kube-state-metrics
spec:
  type: NodePort
  ports:
  - name: kube-state-metrics
    port: 8080
    targetPort: 8080
    nodePort: 31666
    protocol: TCP
  selector:
    app: kube-state-metrics

kubectl apply -f case5-kube-state-metrics-deploy.yaml


2.6 Deploy Grafana

Grafana provides the web UI on top of the Prometheus data source; this includes a Deployment and a Service.

vim grafana-enterprise.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: grafana-enterprise
  namespace: monitor
spec:
  replicas: 1
  selector:
    matchLabels:
      app: grafana-enterprise
  template:
    metadata:
      labels:
        app: grafana-enterprise
    spec:
      containers:
      - image: grafana/grafana
        imagePullPolicy: Always
        #command:
        #  - "tail"
        #  - "-f"
        #  - "/dev/null"
        securityContext:
          allowPrivilegeEscalation: false
          runAsUser: 0
        name: grafana
        ports:
        - containerPort: 3000
          protocol: TCP
        volumeMounts:
        - mountPath: "/var/lib/grafana"
          name: data
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
          limits:
            cpu: 500m
            memory: 2500Mi
      volumes:
      - name: data
        emptyDir: {}
---
apiVersion: v1
kind: Service
metadata:
  name: grafana
  namespace: monitor
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 3000
    nodePort: 31000
  selector:
    app: grafana-enterprise

kubectl apply -f grafana-enterprise.yaml


Log in with username admin, password admin.

Add a data source (Data sources), name it prometheus, and note the port number: 30090.
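As an alternative to clicking through the UI, Grafana can pick the data source up from a provisioning file. A sketch, assuming the NodePort 30090 Service from case3-3 (the file would live under /etc/grafana/provisioning/datasources/ inside the Grafana container):

```yaml
# datasource.yaml -- Grafana data source provisioning (sketch)
apiVersion: 1
datasources:
  - name: prometheus
    type: prometheus
    access: proxy
    url: http://192.168.136.21:30090
    isDefault: true
```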


Import dashboard 13332; other dashboards can be added as well, for example 14981, 13824, and 14518.

Click the "+" icon on the left and choose "Import" to import a dashboard.


Dashboard 13332:


The cAdvisor dashboard is 14282. There is an unresolved quirk here: it can display resource metrics for all containers in the cluster, but selecting a single container shows no data. (This should be fixable.)


By default the dashboard shows pod IDs, which is awkward for an administrator to browse. To display pod names instead, open the dashboard's settings (gear icon on the right), choose "Variables", select the second variable, and change "name" to "pod".


Each panel on the dashboard also needs updating: click the panel title, choose "Edit", and change "name" to "pod".


3. Test the monitoring

Create a Deployment named nginx01 and check that it shows up in the monitoring.

vim nginx01.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx01
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx01
  template:
    metadata:
      labels:
        app: nginx01
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9

kubectl apply -f nginx01.yaml


Two nginx01 pods appear, because replicas is set to 2.


This completes the cAdvisor + Prometheus + Grafana cluster monitoring deployment.
