Preface
As any seasoned developer will tell you, troubleshooting almost always means reading the application logs, and good logs let you pinpoint and fix real problems quickly. In a typical setup, though, the logs just sit in the package's runtime directory, which makes them awkward to browse and hard to organize. So today we bring in the ELK log-processing architecture to solve exactly that.
Technical background
ELK components and what they do
ELK is short for Elasticsearch, Logstash, and Kibana; as the name suggests, an ELK stack simply wires these three components together into a logging system.
First, the application integrates a Logstash client (appender) that collects its logs and ships them to the Logstash server, which filters and transforms them. The transformed logs are written to Elasticsearch, which provides storage plus tokenization and inverted indexes for fast queries. Finally, Kibana is the analysis and visualization platform that renders the log data.
Setup basics
To keep the setup simple we orchestrate everything with docker-compose; as long as the three ELK components sit on the same network, they can talk to each other by service name.
As for ports exposed to the outside, we only need Logstash's port for uploading data and Elasticsearch's port for external queries. Each application service should have its own Logstash pipeline configuration that declares the input, the output destination, and the filter rules, and the corresponding input port also has to be exposed so the application can ship its logs.
ELK environment setup
File tree under the elk directory:
./
├── docker-compose.yml
├── elasticsearch
│   ├── config
│   │   └── elasticsearch.yml
│   ├── data
│   └── logs
├── kibana
│   └── config
│       └── kibana.yml
└── logstash
    ├── config
    │   ├── logstash.yml
    │   └── small-tools
    │       └── demo.config
    └── data
Elasticsearch configuration
mkdir elk
# create the Elasticsearch directories
cd elk
mkdir -p ./elasticsearch/logs ./elasticsearch/data ./elasticsearch/config
chmod 777 ./elasticsearch/data
# add the ES config file under ./elasticsearch/config
cd elasticsearch/config
vim elasticsearch.yml
cluster.name: "docker-cluster"
network.host: 0.0.0.0
http.port: 9200
# enable cross-origin requests to ES
http.cors.enabled: true
http.cors.allow-origin: "*"
http.cors.allow-headers: Authorization
# enable security (authentication)
xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
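Once the whole stack is up (the docker-compose file comes later in this guide), it is worth confirming that the node is reachable and that the password is actually enforced. A minimal sanity-check sketch, assuming you run it on the Docker host itself (swap localhost for 10.10.22.174 otherwise) and use the elastic/123456 credentials from this article:
# expect "status" : "green" or "yellow" on a single-node cluster
curl -u elastic:123456 http://localhost:9200/_cluster/health?pretty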
Kibana configuration
cd elk
mkdir -p ./kibana/config
# add the Kibana config file under ./kibana/config
cd kibana/config
vim kibana.yml
server.name: kibana
server.host: "0.0.0.0"
server.publicBaseUrl: "http://kibana:5601"
elasticsearch.hosts: [ "http://elasticsearch:9200" ]
xpack.monitoring.ui.container.elasticsearch.enabled: true
elasticsearch.username: "elastic"
elasticsearch.password: "123456"
i18n.locale: zh-CN
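Kibana can be sanity-checked the same way once the containers are running. A small check, again assuming the Docker host is local and that security requires the elastic credentials:
# Kibana's status API; the overall state should become available once it can reach ES
curl -u elastic:123456 http://localhost:5601/api/status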
Logstash configuration
cd elk
mkdir -p ./logstash/data ./logstash/config ./logstash/config/small-tools
chmod 777 ./logstash/data
# add the Logstash config file under ./logstash/config
cd logstash/config
vim logstash.yml
http.host: "0.0.0.0"
xpack.monitoring.enabled: true
xpack.monitoring.elasticsearch.hosts: [ "http://elasticsearch:9200" ]
xpack.monitoring.elasticsearch.username: "elastic"
xpack.monitoring.elasticsearch.password: "123456"
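Logstash itself also exposes a monitoring API on port 9600, which is handy for confirming later that the pipeline defined below was actually loaded. A quick check once everything is running (same localhost assumption as above):
# list the loaded pipelines and their settings
curl http://localhost:9600/_node/pipelines?pretty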
# add the demo project pipeline config under ./logstash/config/small-tools
cd small-tools
vim demo.config
input {    # input
    tcp {
        mode => "server"
        host => "0.0.0.0"       # accept logs from any host
        type => "demo"          # set a type to tell the input sources apart
        port => 9999
        codec => json_lines     # data format
    }
}
filter {
    mutate {
        # fields to drop on ingest
        remove_field => ["LOG_MAX_HISTORY_DAY", "LOG_HOME", "APP_NAME"]
        remove_field => ["@version", "_score", "port", "level_value", "tags", "_type", "host"]
    }
}
output {    # output - console
    stdout {
        codec => rubydebug
    }
}
output {    # output - Elasticsearch
    if [type] == "demo" {
        elasticsearch {
            action => "index"                       # index action (mapping is created on write)
            hosts => "http://elasticsearch:9200"    # ES address and port
            user => "elastic"                       # ES username
            password => "123456"                    # ES password
            index => "demo-%{+YYYY.MM.dd}"          # index name - one per day
            codec => "json"
        }
    }
}
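Before wiring a real application into this input, you can push a single JSON line at port 9999 to verify the pipeline end to end. A rough smoke test, assuming netcat is installed (the -w flag differs slightly between netcat variants) and the stack is already up:
# the event should appear on the rubydebug stdout output and in today's demo-* index
echo '{"message":"pipeline smoke test"}' | nc -w 1 localhost 9999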
Add the docker-compose file under the elk directory
docker-compose.yml
version: '3.3'

networks:
  elk:
    driver: bridge

services:
  elasticsearch:
    image: registry.cn-hangzhou.aliyuncs.com/zhengqing/elasticsearch:7.14.1
    container_name: elk_elasticsearch
    restart: unless-stopped
    volumes:
      - "./elasticsearch/data:/usr/share/elasticsearch/data"
      - "./elasticsearch/logs:/usr/share/elasticsearch/logs"
      - "./elasticsearch/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml"
    environment:
      TZ: Asia/Shanghai
      LANG: en_US.UTF-8
      TAKE_FILE_OWNERSHIP: "true"  # fix ownership of the mounted data/log dirs
      discovery.type: single-node
      ES_JAVA_OPTS: "-Xmx512m -Xms512m"
      ELASTIC_PASSWORD: "123456"   # password for the elastic account
    ports:
      - "9200:9200"
      - "9300:9300"
    networks:
      - elk

  kibana:
    image: registry.cn-hangzhou.aliyuncs.com/zhengqing/kibana:7.14.1
    container_name: elk_kibana
    restart: unless-stopped
    volumes:
      - "./kibana/config/kibana.yml:/usr/share/kibana/config/kibana.yml"
    ports:
      - "5601:5601"
    depends_on:
      - elasticsearch
    links:
      - elasticsearch
    networks:
      - elk

  logstash:
    image: registry.cn-hangzhou.aliyuncs.com/zhengqing/logstash:7.14.1
    container_name: elk_logstash
    restart: unless-stopped
    environment:
      LS_JAVA_OPTS: "-Xmx512m -Xms512m"
    volumes:
      - "./logstash/data:/usr/share/logstash/data"
      - "./logstash/config/logstash.yml:/usr/share/logstash/config/logstash.yml"
      - "./logstash/config/small-tools:/usr/share/logstash/config/small-tools"
    command: logstash -f /usr/share/logstash/config/small-tools
    ports:
      - "9600:9600"
      - "9999:9999"
    depends_on:
      - elasticsearch
    networks:
      - elk
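Compose files are sensitive to indentation, so it is worth validating the file before starting anything. One quick way:
# exits quietly if the file is valid, prints the error otherwise
docker-compose config -q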
View the elk directory file tree
yum -y install tree
# show 4 levels under the current directory
tree -L 4
# show all files and folders
tree -a
# show sizes
tree -s
[root@devops-01 elk]# pwd
/home/test/demo/elk
[root@devops-01 elk]# tree ./
./
├── docker-compose.yml
├── elasticsearch
│   ├── config
│   │   └── elasticsearch.yml
│   ├── data
│   └── logs
├── kibana
│   └── config
│       └── kibana.yml
└── logstash
    ├── config
    │   ├── logstash.yml
    │   └── small-tools
    │       └── demo.config
    └── data
10 directories, 5 files
Bring up the ELK stack
docker-compose up -d
Once compose finishes, check that the containers started successfully
[root@devops-01 elk]# docker ps | grep elk
edcf6c1cecb3 registry.cn-hangzhou.aliyuncs.com/zhengqing/kibana:7.14.1 "/bin/tini -- /usr/l…" 6 minutes ago Up 10 seconds 0.0.0.0:5601->5601/tcp, :::5601->5601/tcp elk_kibana
7c24b65d2a27 registry.cn-hangzhou.aliyuncs.com/zhengqing/logstash:7.14.1 "/usr/local/bin/dock…" 6 minutes ago Up 13 seconds 5044/tcp, 9600/tcp elk_logstash
b4be2f1c0a28 registry.cn-hangzhou.aliyuncs.com/zhengqing/elasticsearch:7.14.1 "/bin/tini -- /usr/l…" 6 minutes ago Up 6 minutes 0.0.0.0:9800->9200/tcp, :::9800->9200/tcp, 0.0.0.0:9900->9300/tcp, :::9900->9300/tcp elk_elasticsearch
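If a container shows as up but no data arrives later, the Logstash container log is the first place to look. For example:
# follow the Logstash log and wait for the pipeline-started message before shipping logs
docker logs -f elk_logstash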
Once the stack is up, open the Kibana page
http://10.10.22.174:5601/app/home#/
Integrating Logstash into the Spring Boot project
pom.xml
<!-- logstash start -->
<dependency>
    <groupId>net.logstash.logback</groupId>
    <artifactId>logstash-logback-encoder</artifactId>
    <version>6.6</version>
</dependency>
<!-- logstash end -->
logback-spring.xml
<springProfile name="uat">
    <appender name="logstash" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
        <destination>10.10.22.174:9999</destination>
        <encoder charset="UTF-8" class="net.logstash.logback.encoder.LogstashEncoder"/>
    </appender>
    <root level="INFO">
        <appender-ref ref="logstash"/>
    </root>
</springProfile>
Start the project and let Logstash collect the logs
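Before moving on to Kibana, you can confirm on the Elasticsearch side that log events are actually arriving. A quick check, with the same host and credential assumptions as above:
# a demo-YYYY.MM.dd index with a non-zero docs.count should be listed
curl -u elastic:123456 'http://localhost:9200/_cat/indices/demo-*?v'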
Configure Kibana to view the logs
Open http://10.10.22.174:5601/app/home#/ and enter the ES username and password to reach the Kibana console.
Click the Management button to open the management screen.
Click Index Patterns, then Create index pattern.
Enter the index pattern expression for the logs (demo-* in this guide), then click Next.
Select the timestamp field, then click Create index pattern.
If it completes as shown, the index pattern was created successfully.
View the logs
In the menu, click Discover.
Final thoughts
Deploying an ELK environment and collecting logs from a Spring Boot project is fairly straightforward: stand up the ELK stack with Docker containers, then have your own project ship its log data to it. That said, a basic understanding of Logstash, Elasticsearch, and Kibana individually is still needed so that you can operate them comfortably in practice.