Preface: we use nginx to simulate a log-producing service. Filebeat collects the logs and ships them to Kafka, which acts as the message queue; Logstash then consumes the data from the Kafka cluster and forwards it to Elasticsearch + Kibana for monitoring.
Part 1: Environment

Server roles:
192.168.2.1: elasticsearch
192.168.2.2: filebeat + nginx
192.168.2.3: kafka
192.168.2.4: logstash
Part 2: Installing the services
elasticsearch + filebeat + kafka + logstash (6.6.0), Tsinghua mirror download: https://mirrors.tuna.tsinghua.edu.cn/elasticstack/6.x/yum/6.6.0/
ZooKeeper official download: https://zookeeper.apache.org/releases.html
Kafka official download: https://kafka.apache.org/downloads
1. Configure Elasticsearch
1. Verify the Java environment (skip if already installed)
java -version    # verify the Java environment
Install JDK 1.8 if needed: yum -y install java-1.8.0-openjdk.x86_64
2. Install Elasticsearch
rpm -ivh /mnt/elk-6.6/elasticsearch-6.6.0.rpm
3. Edit the configuration file
vi /etc/elasticsearch/elasticsearch.yml
Modify the following:
node.name: node-1    # this node's name within the cluster
network.host: 192.168.2.1,127.0.0.1    # IP addresses to listen on
http.port: 9200
4. Start Elasticsearch
systemctl start elasticsearch
5. Verify it is running
[root@localhost ~]# netstat -anpt | grep java
tcp6  0  0  192.168.2.1:9200  :::*               LISTEN       12564/java
tcp6  0  0  127.0.0.1:9200    :::*               LISTEN       12564/java
tcp6  0  0  192.168.2.1:9300  :::*               LISTEN       12564/java
tcp6  0  0  127.0.0.1:9300    :::*               LISTEN       12564/java
tcp6  0  0  192.168.2.1:9200  192.168.2.4:34428  ESTABLISHED  12564/java
tcp6  0  0  192.168.2.1:9200  192.168.2.4:34436  ESTABLISHED  12564/java
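The lines to look for are the two LISTEN entries: 9200 is the HTTP API and 9300 is the transport port between nodes. As a sanity check on output like the above, here is a small Python sketch (the sample text is copied from the listing; the helper function is our own, not part of any tool) that extracts the listening ports:

```python
# Sample of the netstat output shown above.
netstat_output = """\
tcp6  0  0  192.168.2.1:9200  :::*  LISTEN  12564/java
tcp6  0  0  127.0.0.1:9200    :::*  LISTEN  12564/java
tcp6  0  0  192.168.2.1:9300  :::*  LISTEN  12564/java
tcp6  0  0  127.0.0.1:9300    :::*  LISTEN  12564/java
"""

def listening_ports(text):
    """Return the set of local ports that are in LISTEN state."""
    ports = set()
    for line in text.splitlines():
        fields = line.split()
        if len(fields) >= 6 and fields[5] == "LISTEN":
            # fields[3] is the local address, e.g. "192.168.2.1:9200"
            ports.add(int(fields[3].rsplit(":", 1)[1]))
    return ports

print(sorted(listening_ports(netstat_output)))  # [9200, 9300]
```

If either port is missing, Elasticsearch has not come up correctly and the later Logstash and Kibana steps will fail to connect.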
2. Configure filebeat + nginx
1. Install nginx
yum -y install nginx
2. Install filebeat
rpm -ivh /mnt/elk-6.6/filebeat-6.6.0-x86_64.rpm
3. Edit the filebeat configuration file
[root@localhost ~]# vi /etc/filebeat/filebeat.yml
Add the following:
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/nginx/access.log

output.kafka:
  enabled: true
  hosts: ["192.168.2.3:9092"]    # Kafka broker address and port
  topic: test1                   # Kafka topic
4. Start the services
systemctl start nginx
systemctl start filebeat
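Each line filebeat tails from access.log is an nginx access record in the default "combined" format. As a reference for what flows through the pipeline, here is a Python sketch that splits one such line into fields (the sample line and the regex are illustrative, not taken from filebeat itself):

```python
import re

# A made-up nginx combined-format access log line, like those filebeat tails.
line = ('192.168.2.100 - - [12/Mar/2024:10:15:32 +0800] '
        '"GET /index.html HTTP/1.1" 200 612 "-" "curl/7.61.1"')

# Regex for the default nginx "combined" log format.
pattern = re.compile(
    r'(?P<client>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] '
    r'"(?P<request>[^"]*)" (?P<status>\d{3}) (?P<bytes>\d+)'
)

record = pattern.match(line).groupdict()
print(record["client"], record["status"], record["request"])
```

Filebeat ships the whole line as an opaque string; structured parsing like this would normally happen later, e.g. with a grok filter in Logstash.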
3. Configure the Kafka environment
1. Install the Java environment (skip if already installed)
yum -y install java-1.8.0-openjdk.x86_64
2. Install and configure ZooKeeper
tar xf /mnt/zookeeper-3.4.9.tar.gz -C /usr/local/
mv /usr/local/zookeeper-3.4.9/ /usr/local/zookeeper
cd /usr/local/zookeeper/conf
cp zoo_sample.cfg zoo.cfg
vi zoo.cfg
Add the following:
dataDir=/usr/local/zookeeper/data
dataLogDir=/usr/local/zookeeper/logs
server.1=192.168.2.3:3188:3288
Save and exit, then create the data directories and this node's id file:
cd /usr/local/zookeeper
mkdir data logs
echo 1 > data/myid
Start ZooKeeper and check its status:
/usr/local/zookeeper/bin/zkServer.sh start
/usr/local/zookeeper/bin/zkServer.sh status
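The number in each server.N line of zoo.cfg must match the contents of data/myid on that host; with our single node, server.1 pairs with the "1" we echoed into myid. A short Python sketch of that mapping (the two extra hosts are hypothetical, to show a three-node ensemble):

```python
# Each "server.N=host:peerPort:electionPort" line names ensemble member N;
# the data/myid file on that host must contain exactly N.
# Only the first entry is from this tutorial; the others are hypothetical.
zoo_cfg_lines = [
    "server.1=192.168.2.3:3188:3288",
    "server.2=192.168.2.13:3188:3288",
    "server.3=192.168.2.23:3188:3288",
]

def myid_for(host, lines):
    """Return the id a given host should write into data/myid."""
    for line in lines:
        key, value = line.split("=", 1)
        if value.split(":", 1)[0] == host:
            return int(key.split(".", 1)[1])
    raise ValueError(host + " is not listed in zoo.cfg")

print(myid_for("192.168.2.3", zoo_cfg_lines))  # 1
```

A mismatched myid is a common cause of a ZooKeeper node refusing to join its ensemble.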
3. Install Kafka
tar xf /mnt/kafka_2.11-2.2.1.tgz -C /usr/local/
mv /usr/local/kafka_2.11-2.2.1/ /usr/local/kafka
4. Configure Kafka
cd /usr/local/kafka/config/
cp server.properties server.properties.bak
vi server.properties
Modify the following:
broker.id=1
listeners=PLAINTEXT://192.168.2.3:9092
zookeeper.connect=192.168.2.3:2181
5. Start Kafka
cd /usr/local/kafka/
./bin/kafka-server-start.sh ./config/server.properties &
6. Create a topic in Kafka
./bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test1    # create a topic named test1
./bin/kafka-topics.sh --list --zookeeper localhost:2181    # list the existing topics
./bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test1 --from-beginning    # inspect the messages in test1
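What the console consumer prints is one JSON document per filebeat event, with the raw nginx log line in a message field. A Python sketch of that shape (the field set is illustrative only; the exact envelope varies by filebeat version):

```python
import json

# Sketch of the kind of JSON document filebeat's kafka output publishes.
# Field names and values here are illustrative, not captured output.
event = {
    "@timestamp": "2024-03-12T02:15:32.000Z",
    "message": '192.168.2.100 - - [12/Mar/2024:10:15:32 +0800] "GET / HTTP/1.1" 200 612',
    "source": "/var/log/nginx/access.log",
    "beat": {"hostname": "web01", "version": "6.6.0"},
}

# The consumer shows one JSON string per event; downstream, Logstash
# decodes it back into structured fields with its json codec.
payload = json.dumps(event)
decoded = json.loads(payload)
print(decoded["message"])
```

This round trip is why the Logstash input in the next section sets codec => json: it turns each Kafka message back into an event with these fields.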
4. Configure Logstash
1. Install the Java environment (skip if already installed)
yum -y install java-1.8.0-openjdk.x86_64
2. Install Logstash
rpm -ivh /mnt/elk-6.6/logstash-6.6.0.rpm
3. Write the configuration file
vi /etc/logstash/conf.d/kafka.conf
input {
  kafka {
    bootstrap_servers => ["192.168.2.3:9092"]
    group_id => "es-test"
    topics => ["test1"]    # must match the topic filebeat writes to
    codec => json
  }
}

output {
  elasticsearch {
    hosts => "http://192.168.2.1:9200"
    index => "kafka-%{+YYYY.MM.dd}"
  }
}
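The %{+YYYY.MM.dd} sprintf pattern in the index setting expands to each event's date, so every day of logs lands in its own index. A small Python sketch of the resulting index names (strftime stands in for Logstash's own date formatting):

```python
from datetime import date

def index_name(day):
    """Expand the Logstash pattern kafka-%{+YYYY.MM.dd} for a given date."""
    return day.strftime("kafka-%Y.%m.%d")

print(index_name(date(2024, 3, 12)))  # kafka-2024.03.12
```

Daily indices like these are what you later match in Kibana with an index pattern such as kafka-*.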
4. Start the service
/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/kafka.conf
5. Configure Kibana
1. Install Kibana
rpm -ihv /mnt/elk-6.6/kibana-6.6.0-x86_64.rpm
2. Configure Kibana
vim /etc/kibana/kibana.yml
Modify:
server.port: 5601
server.host: "192.168.2.5"
server.name: "db01"
elasticsearch.hosts: ["http://192.168.2.1:9200"]    # the Elasticsearch server's IP, so Kibana can read the log data
3. Start the Kibana service
systemctl start kibana
Part 3: Collecting logs
Viewing the collected logs in Kibana:
1. Add the log index pattern
2. Choose the log format
3. View the log entries
That concludes this walkthrough of a filebeat + kafka + logstash + elasticsearch + kibana log collection solution.