Table of Contents
Background
Technical Architecture
Deployment and Installation
Environment Preparation
Configuring Logback and Simulating Log Output
Building the Fluentd Image
Running docker-compose
Results
Background
In modern software development and operations, monitoring and log management are critical tasks. As applications scale up and distributed systems become the norm, tracking and analyzing log data effectively becomes a real challenge. Elasticsearch, Fluentd, and Kibana (the EFK stack) are a popular combination of tools for efficient log collection, storage, and analysis.
Taking the collection of Spring Cloud Logback logs as an example, this article shows how to use Docker to quickly deploy the EFK stack for your monitoring and log-management needs.
Project code: GitHub - huangyang1230/springboot_efk: collecting and visualizing Spring Boot logs with EFK
Technical Architecture
As shown in the architecture diagram:
- The application produces log files in real time;
- Fluentd's tail input collects them, and the multiline parser reassembles multi-line entries;
- After Fluentd's processing, the log data is sent to Elasticsearch, a high-performance search and analytics engine that stores and indexes it. The link between Fluentd and Elasticsearch is typically provided by the fluent-plugin-elasticsearch plugin. Once the data reaches Elasticsearch, it is indexed automatically for later queries and analysis;
- Kibana is used to view, search, and analyze the logs (a data-flow sketch follows this list).
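To make the pipeline concrete, here is a sketch of one log entry's journey through the stack. The field names follow the multiline regex defined in fluent.conf later in this article; the exact values, class, and package names are illustrative assumptions, not actual output.

Raw line appended to /usr/local/logs/springBoot.log:
2023-10-21 11:21:00.123 [main] INFO  [com.example.efk.LogFactory:22] - service started

Fluentd event after tail + multiline parsing (the time capture becomes the event timestamp; include_tag_key adds the tag as @log_name):
tag:    test.usr.local.logs.springBoot.log
record: {"thread":"main","level":"INFO","message":"[com.example.efk.LogFactory:22] - service started"}

Elasticsearch then stores the record in a daily index such as test-2023-10-21 (per logstash_prefix and logstash_dateformat), which Kibana reads.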
Deployment and Installation
Environment Preparation
- Docker and docker-compose: see the official documentation for installation instructions.
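A quick sanity check that both tools are installed (assuming the standalone docker-compose binary; the newer docker compose plugin uses a slightly different invocation):

docker --version
docker-compose --version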
Configuring Logback and Simulating Log Output
logback-spring.xml:
<?xml version="1.0" encoding="UTF-8"?>
<configuration scan="false" scanPeriod="60 seconds" debug="false">
    <springProperty scope="context" name="APP_NAME" source="spring.application.name" defaultValue="springBoot"/>
    <property name="LOG_HOME" value="/usr/local/logs"/>
    <property name="CONSOLE_LOG_PATTERN" value="%d{yyyy-MM-dd HH:mm:ss.SSS} [%thread] %highlight(%-5level) %cyan(%logger{50}:%line) - %highlight(%msg) %n"/>
    <property name="FILE_LOG_PATTERN" value="%d{yyyy-MM-dd HH:mm:ss.SSS} [%thread] %-5level [%logger{50}:%line] - %msg%n"/>
    <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
        <encoder class="ch.qos.logback.classic.encoder.PatternLayoutEncoder">
            <!-- Pattern: %d is the date, %thread the thread name, %-5level the level padded to 5 characters, %msg the log message, %n a newline -->
            <pattern>${CONSOLE_LOG_PATTERN}</pattern>
            <charset>UTF-8</charset>
        </encoder>
    </appender>
    <!-- Log file output -->
    <appender name="FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <file>${LOG_HOME}/${APP_NAME}.log</file>
        <rollingPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedRollingPolicy">
            <fileNamePattern>${LOG_HOME}/%d{yyyy-MM}/${APP_NAME}-%d{yyyy-MM-dd}-%i.log.gz</fileNamePattern>
            <maxFileSize>50MB</maxFileSize>
            <maxHistory>30</maxHistory>
        </rollingPolicy>
        <encoder class="ch.qos.logback.classic.encoder.PatternLayoutEncoder">
            <pattern>${FILE_LOG_PATTERN}</pattern>
            <charset>UTF-8</charset>
        </encoder>
    </appender>
    <logger name="java.sql.Connection" level="debug"/>
    <logger name="java.sql.Statement" level="debug"/>
    <logger name="java.sql.PreparedStatement" level="debug"/>
    <!-- Logback levels, most to least severe: ERROR > WARN > INFO > DEBUG > TRACE -->
    <root level="DEBUG">
        <appender-ref ref="STDOUT"/>
        <appender-ref ref="FILE"/>
    </root>
</configuration>
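With the FILE_LOG_PATTERN above, every new entry in the log file begins with a millisecond timestamp. That leading date is exactly what the Fluentd multiline parser will key on later, folding stack-trace lines into the preceding entry. A representative (hypothetical) excerpt:

2023-10-21 11:21:03.456 [main] ERROR [com.example.efk.LogFactory:26] - An error occurred
java.lang.ArithmeticException: / by zero
	at com.example.efk.LogFactory.log(LogFactory.java:24)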
Java code that simulates log output:
import lombok.extern.slf4j.Slf4j;
import org.springframework.beans.factory.InitializingBean;
import org.springframework.stereotype.Component;

import java.util.UUID;

/**
 * @author Yang Huang
 * @create 2023-10-21-11:21
 * @description continuously generates sample log entries
 */
@Component
@Slf4j
public class LogFactory implements InitializingBean {

    public void log() {
        log.debug("Starting to write logs");
        while (true) {
            log.debug("This is a debug message, {}", UUID.randomUUID());
            log.info("This is an info message, {}", UUID.randomUUID());
            try {
                // Deliberately divide by zero to produce an ERROR entry with a multi-line stack trace
                int i = 1 / 0;
            } catch (Exception e) {
                log.error("An error occurred", e);
            }
            try {
                Thread.sleep(3000);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                throw new RuntimeException(e);
            }
        }
    }

    @Override
    public void afterPropertiesSet() {
        // Run the infinite loop on its own daemon thread so it does not block Spring startup
        Thread generator = new Thread(this::log, "log-generator");
        generator.setDaemon(true);
        generator.start();
    }
}
Start the Spring Boot application so that log files are written to /usr/local/logs; the next step is to have Fluentd tail them in real time.
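To confirm that logs are actually being written before involving Fluentd, tail the file (the name defaults to springBoot.log when spring.application.name is unset, per the logback config above):

tail -f /usr/local/logs/springBoot.log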
Building the Fluentd Image
The official fluentd image does not ship with the fluent-plugin-elasticsearch plugin, so a custom image is required. Dockerfile contents:
FROM fluent/fluentd:v1.16-debian-1

# Use root account to use apt
USER root

# The RUN below installs fluent-plugin-elasticsearch; add or remove plugins as you wish
RUN buildDeps="sudo make gcc g++ libc-dev" \
 && apt-get update \
 && apt-get install -y --no-install-recommends $buildDeps \
 && sudo gem install fluent-plugin-elasticsearch \
 && sudo gem sources --clear-all \
 && SUDO_FORCE_REMOVE=yes \
    apt-get purge -y --auto-remove \
      -o APT::AutoRemove::RecommendsImportant=false \
      $buildDeps \
 && rm -rf /var/lib/apt/lists/* \
 && rm -rf /tmp/* /var/tmp/* /usr/lib/ruby/gems/*/cache/*.gem

#COPY fluent.conf /fluentd/etc/
#COPY entrypoint.sh /bin/

USER fluent
Run the following command in the directory containing the Dockerfile:
docker build -t fluentd_es:v1 .
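An optional check that the plugin really is baked into the image (this assumes gem is on the image's PATH, which holds for the official fluentd base images):

docker run --rm fluentd_es:v1 gem list fluent-plugin-elasticsearch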
Running docker-compose
1. Prepare the directory layout:
fluentd/
    fluent.conf
docker-compose.yml
2. fluent.conf contents:
<source>
  @type tail                                 # built-in input plugin: follows new lines appended to the source files
  path /usr/local/logs/*.log                 # the application log directory mounted into the container
  pos_file /usr/local/logs/fluentd.log.pos   # must be a single file, not a glob: tail records read positions for all watched files here
  tag test.*                                 # log tag; * is replaced with the file path
  read_from_head true
  <parse>
    @type multiline
    format_firstline /\d{4}-\d{1,2}-\d{1,2}/ # a new entry starts with a date; continuation lines (stack traces) are folded in
    format1 /^(?<time>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d{3}) \[(?<thread>[^\]]+)\] (?<level>[^\s]+) (?<message>.*)/
  </parse>
</source>
<match **>
  @id elasticsearch                          # unique identifier
  @type elasticsearch                        # fluent-plugin-elasticsearch output
  # @log_level info
  host "192.168.0.110"                       # replace with your host IP
  port "9200"
  user "elastic"
  password "B6P0hW7x"
  logstash_format true
  logstash_prefix test
  logstash_dateformat %Y-%m-%d
  include_tag_key true
  tag_key @log_name
  <buffer>
    @type file                               # buffer chunks on disk
    path /usr/local/logs/fluentd.buffer
    flush_mode interval
    retry_type exponential_backoff
    flush_thread_count 2
    flush_interval 5s
    retry_forever true
    retry_max_interval 30
    overflow_action block
  </buffer>
</match>
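Fluentd can syntax-check this file before the full stack is started. A hedged one-liner using the image built above (assuming the fluentd/ directory from step 1 sits in the current working directory):

docker run --rm -v $(pwd)/fluentd:/fluentd/etc fluentd_es:v1 fluentd --dry-run -c /fluentd/etc/fluent.conf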
3. docker-compose.yml contents:
version: "3.8"

# Network configuration
networks:
  network:
    ipam:
      config:
        - subnet: "10.10.10.0/24"

# Service configuration
services:
  # Elasticsearch
  elasticsearch:
    image: elasticsearch:7.14.0
    container_name: es
    privileged: true
    environment:
      ES_JAVA_OPTS: -Xms1g -Xmx1g
      node.name: es-single
      cluster.name: es-cluster
      discovery.type: single-node
      # Enable CORS
      http.cors.enabled: "true"
      http.cors.allow-origin: "*"
      http.cors.allow-headers: Authorization
      # Security controls (enable or disable as needed)
      xpack.security.enabled: "true"
      xpack.security.transport.ssl.enabled: "true"
      ELASTIC_PASSWORD: "B6P0hW7x"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - ./es/data:/usr/share/elasticsearch/data
      - ./es/plugins:/usr/share/elasticsearch/plugins
      - ./es/logs:/usr/share/elasticsearch/logs
    ports:
      - "9200:9200"
      - "9300:9300"
    # Fixed IP on the compose network
    networks:
      network:
        ipv4_address: 10.10.10.100

  # Kibana
  kibana:
    image: kibana:7.14.0
    restart: always
    container_name: kibana
    ports:
      - "5601:5601"
    depends_on:
      - elasticsearch
    environment:
      ELASTICSEARCH_HOSTS: '["http://192.168.0.110:9200"]'
      ELASTICSEARCH_USERNAME: 'elastic'
      ELASTICSEARCH_PASSWORD: 'B6P0hW7x'
    # Fixed IP on the compose network
    networks:
      network:
        ipv4_address: 10.10.10.120

  # Fluentd
  fluentd:
    image: fluentd_es:v1
    container_name: fluentd
    restart: always
    environment:
      TZ: "Asia/Shanghai"
    volumes:
      - ./fluentd:/fluentd/etc
      # The left-hand /usr/local/logs is where your application writes logs; adjust as needed
      - /usr/local/logs:/usr/local/logs
    depends_on:
      - elasticsearch
    # Fixed IP on the compose network
    networks:
      network:
        ipv4_address: 10.10.10.130
Note: "192.168.0.110" in the files above must be replaced with your machine's IP. (Since all three services share the compose network, Elasticsearch could in principle also be addressed by its fixed address 10.10.10.100 or its service name, but this article follows the host-IP approach.)
Finally, start the containers:
docker-compose up -d
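Once the containers are up, the stack can be verified from the command line; the credentials come from the compose file, and the index name follows logstash_prefix test plus the daily date format:

docker-compose ps
curl -u elastic:B6P0hW7x "http://localhost:9200/_cat/indices/test-*?v"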
Results
1. Open http://localhost:5601/login?next=%2F and log in; the username and password are the ones set in the compose file.
2. Create an index pattern (with logstash_prefix set to test, the pattern test-* matches the daily indices).
3. View the index data in Discover.
At this point, you can view, search, and analyze your application logs in Kibana. Done!