How to set up EFK log collection for a Kubernetes cluster
To deploy a log collection stack in an offline environment, I went with Elasticsearch + Kibana + Fluentd (EFK).
First, like most write-ups, create the namespace and the headless service for Elasticsearch. The YAML is as follows:
apiVersion: v1
kind: Namespace
metadata:
  name: logging
---
kind: Service
apiVersion: v1
metadata:
  name: elasticsearch
  namespace: logging
  labels:
    app: elasticsearch
spec:
  selector:
    app: elasticsearch
  clusterIP: None
  ports:
    - port: 9200
      name: rest
    - port: 9300
      name: inter-node
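A quick sanity check for this step is to apply the manifest and confirm the headless service exists (the file name es-svc.yaml is just an example):

$ kubectl apply -f es-svc.yaml
$ kubectl get svc -n logging elasticsearch
# For a headless service the CLUSTER-IP column should read None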
2. Deploy the ES cluster
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: es
  namespace: logging
spec:
  serviceName: elasticsearch
  replicas: 3
  selector:
    matchLabels:
      app: elasticsearch
  template:
    metadata:
      labels:
        app: elasticsearch
    spec:
      nodeSelector:
        es: log
      initContainers:
      - name: increase-vm-max-map
        image: busybox
        imagePullPolicy: "IfNotPresent"
        command: ["sysctl", "-w", "vm.max_map_count=262144"]
        securityContext:
          privileged: true
      - name: increase-fd-ulimit
        image: busybox
        command: ["sh", "-c", "ulimit -n 65536"]
        securityContext:
          privileged: true
      containers:
      - name: elasticsearch
        image: docker.elastic.co/elasticsearch/elasticsearch:7.17.1
        imagePullPolicy: "IfNotPresent"
        ports:
        - name: rest
          containerPort: 9200
        - name: inter
          containerPort: 9300
        resources:
          limits:
            cpu: 1000m
          requests:
            cpu: 1000m
        volumeMounts:
        - name: data
          mountPath: /usr/share/elasticsearch/data
        env:
        - name: cluster.name
          value: k8s-logs
        - name: node.name
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: cluster.initial_master_nodes
          value: "es-0,es-1,es-2"
        - name: discovery.zen.minimum_master_nodes
          value: "2"
        - name: discovery.seed_hosts
          value: "elasticsearch"
        - name: ES_JAVA_OPTS
          value: "-Xms512m -Xmx512m"
        - name: network.host
          value: "0.0.0.0"
      volumes:
      - name: data
        hostPath:
          path: /var/app/ocr
Here I use a hostPath mount because I have not yet set aside a dedicated disk. Next, we can test the ES cluster through its REST API. Use the following command to forward local port 9200 to the corresponding port on an Elasticsearch node (e.g. es-0):
$ kubectl port-forward es-0 9200:9200 --namespace=logging
Forwarding from 127.0.0.1:9200 -> 9200
Forwarding from [::1]:9200 -> 9200
Then, in another terminal, run:
$ curl http://localhost:9200/_cluster/state?pretty
This dumps the full cluster state; /_cluster/health?pretty is a more concise check (look for "status" : "green" and "number_of_nodes" : 3).
If the cluster looks healthy, move on to Kibana. First, a ConfigMap for its configuration:
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: logging
  name: kibana-config
  labels:
    app: kibana
data:
  kibana.yml: |
    server.name: kibana
    server.host: "0.0.0.0"
    i18n.locale: zh-CN  # set the default UI language to Chinese
    elasticsearch:
      hosts: ${ELASTICSEARCH_HOSTS}  # ES connection address; since everything is deployed in k8s within the same namespace, the service name can be used directly
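Note that this ConfigMap has no effect unless it is mounted into the Kibana pod; as mentioned below, my deployment does not mount it and configures Kibana through environment variables instead. For reference, a minimal sketch of how it could be wired in (the volume name kibana-config is illustrative; /usr/share/kibana/config/kibana.yml is the default config path in the official image):

      containers:
      - name: kibana
        # ... image, env, and ports as in the Deployment below ...
        volumeMounts:
        - name: kibana-config
          mountPath: /usr/share/kibana/config/kibana.yml
          subPath: kibana.yml   # mount only the kibana.yml key from the ConfigMap
      volumes:
      - name: kibana-config
        configMap:
          name: kibana-config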
---
apiVersion: v1
kind: Service
metadata:
  name: kibana
  namespace: logging
  labels:
    app: kibana
spec:
  ports:
  - port: 5601
  type: NodePort
  selector:
    app: kibana
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kibana
  namespace: logging
  labels:
    app: kibana
spec:
  selector:
    matchLabels:
      app: kibana
  template:
    metadata:
      labels:
        app: kibana
    spec:
      nodeSelector:
        es: log
      containers:
      - name: kibana
        image: harbor.domain.com/efk/kibana:7.17.1
        imagePullPolicy: "IfNotPresent"
        resources:
          limits:
            cpu: 1000m
          requests:
            cpu: 1000m
        env:
        - name: ELASTICSEARCH_URL
          value: http://elasticsearch:9200  # just point this at the headless service DNS name
        - name: ELASTICSEARCH_HOSTS
          value: http://elasticsearch:9200
        ports:
        - containerPort: 5601
Here I still did not mount the config file; accessing Kibana through the NodePort works, and the page opens normally.
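Since the Service is of type NodePort without a fixed nodePort, Kubernetes assigns a random port from the 30000-32767 range; you can look it up before opening Kibana in a browser:

$ kubectl get svc kibana -n logging
# The PORT(S) column shows something like 5601:3XXXX/TCP;
# open http://<any-node-ip>:3XXXX to reach Kibana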
3. The steps above follow other people's write-ups directly, but when I continued with their configuration I found my cluster could not ship logs to ES. After browsing the official docs I realized there is no need to write the collection rules yourself: the official image already bundles the collection configuration.
So my YAML only swaps in a different image:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: fluentd-es
  namespace: logging
  labels:
    k8s-app: fluentd-es
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: fluentd-es
  labels:
    k8s-app: fluentd-es
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
rules:
- apiGroups:
  - ""
  resources:
  - "namespaces"
  - "pods"
  verbs:
  - "get"
  - "watch"
  - "list"
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: fluentd-es
  labels:
    k8s-app: fluentd-es
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
subjects:
- kind: ServiceAccount
  name: fluentd-es
  namespace: logging
  apiGroup: ""
roleRef:
  kind: ClusterRole
  name: fluentd-es
  apiGroup: ""
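Before rolling out the DaemonSet, it's worth verifying that the binding actually grants what fluentd needs, by impersonating the service account:

$ kubectl auth can-i list pods --as=system:serviceaccount:logging:fluentd-es
# should print: yes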
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd-es
  namespace: logging
  labels:
    k8s-app: fluentd-es
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  selector:
    matchLabels:
      k8s-app: fluentd-es
  template:
    metadata:
      labels:
        k8s-app: fluentd-es
        kubernetes.io/cluster-service: "true"
      # This annotation keeps fluentd from being evicted when a node is under
      # pressure, supporting the critical pod-annotation-based priority scheme.
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
    spec:
      serviceAccountName: fluentd-es
      containers:
      - name: fluentd-es
        image: harbor.domain.com/efk/fluentd:v3.4.0
        imagePullPolicy: "IfNotPresent"
        env:
        - name: FLUENTD_ARGS
          value: --no-supervisor -q
        resources:
          limits:
            memory: 500Mi
          requests:
            cpu: 100m
            memory: 200Mi
        volumeMounts:
        - name: varlog
          mountPath: /var/log
        - name: varlibdockercontainers
          mountPath: /data/docker/containers
          readOnly: true
        - name: config-volume
          mountPath: /etc/fluent/config.d
      tolerations:
      - operator: Exists
      terminationGracePeriodSeconds: 30
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      - name: varlibdockercontainers
        hostPath:
          path: /data/docker/containers  # must match the container mountPath above; this is where docker stores container logs on my nodes
      - name: config-volume
        configMap:
          name: fluentd-config
I found another bug here: when mounting, the mountPath has to stay consistent with the hostPath directory, and the hostPath must be docker's local log storage location on the node (in my environment docker's data root is /data/docker, so both paths are /data/docker/containers in the manifest above).
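The reason the two paths must match: the files under /var/log/containers are symlinks that ultimately point into docker's data directory, and fluentd resolves them inside its own filesystem, so the link target must exist at the identical path inside the container. You can confirm the real target on a node (the log file name is a placeholder):

$ readlink -f /var/log/containers/<some-pod>.log
# resolves to a path under the docker data root,
# e.g. .../containers/<container-id>/<container-id>-json.log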
4. Test the service
During testing, logs were pushed normally and the indices were visible through the ES REST API, but after about two days of normal operation logs could no longer be sent to ES. My initial judgment was that this was related to the newly built ES cluster; in addition, installing Kibana the way those guides describe leaves security issues that can stop logs from reaching ES. Our company happens to run a shared ES cluster, so I added its address and account information to the environment variables and found that logs displayed correctly, in real time. So I switched over to the shared ES, which also spares us from maintaining our own.
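How the address and credentials are injected depends on the fluentd image. Assuming an image based on fluent/fluentd-kubernetes-daemonset, which reads its connection settings from environment variables, the switch looks roughly like this (all values are placeholders):

        env:
        - name: FLUENT_ELASTICSEARCH_HOST
          value: "es.example.internal"   # placeholder: shared ES address
        - name: FLUENT_ELASTICSEARCH_PORT
          value: "9200"
        - name: FLUENT_ELASTICSEARCH_SCHEME
          value: "http"
        - name: FLUENT_ELASTICSEARCH_USER
          value: "elastic"               # placeholder account
        - name: FLUENT_ELASTICSEARCH_PASSWORD
          value: "changeme"              # placeholder password

In a real deployment the credentials would normally come from a Secret via valueFrom rather than literal values.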
My advice: treat the official documentation as the authority. Every environment is different, and you cannot blindly copy someone else's setup.
Reference: https://blog.csdn.net/qq_36200932/article/details/123166613