ELK should need no introduction by now, so let's skip the chatter and get straight to deployment.
Kafka sits in the middle as a message queue: it decouples the pipeline and lets Logstash run as multiple instances for high availability.
For installing Kafka and ZooKeeper, see this article:
深入理解Kafka3.6.0的核心概念,搭建與使用-CSDN博客
Step 1: Download the packages from the official sites
You will need:
elasticsearch-8.10.4
logstash-8.10.4
kibana-8.10.4
kafka_2.13-3.6.0
apache-zookeeper-3.9.1-bin.tar
filebeat-8.10.4-linux-x86_64.tar
Step 2: Environment setup (on every node)
Create the es user:
useradd es
Set the hostname and IP address on each host, and add name resolution to /etc/hosts on every node:
192.168.1.1 es1
192.168.1.2 es2
192.168.1.3 es3
Raise the soft and hard limits on open files to 65536 and the maximum number of processes/threads for all users to 65536.
Edit /etc/security/limits.conf and append the following (on every node):
vim /etc/security/limits.conf
* soft nofile 65536
* hard nofile 65536
* soft nproc 65536
* hard nproc 65536
es hard core unlimited # enable core dump files
es soft core unlimited
es soft memlock unlimited # allow the user to lock memory
es hard memlock unlimited
soft xxx: a warning threshold; it can be exceeded, but a warning is issued.
hard xxx: a strict limit; it can never be exceeded.
nproc: the per-user limit on the number of processes.
nofile: the per-process limit on the number of open files.
soft nproc: maximum number of processes per user (warns when exceeded);
hard nproc: maximum number of processes per user (errors when exceeded);
soft nofile: maximum number of open file descriptors (warns when exceeded);
hard nofile: maximum number of open file descriptors (errors when exceeded);
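These limits only take effect for new login sessions, so after editing the file, log in again and verify them with ulimit. A quick sanity check (assumes a Bourne-compatible shell):

```shell
# Show the soft/hard open-file limits for the current session
soft_nofile=$(ulimit -Sn)   # soft limit on open file descriptors
hard_nofile=$(ulimit -Hn)   # hard limit on open file descriptors
echo "soft nofile: $soft_nofile"
echo "hard nofile: $hard_nofile"

# Warn if the soft limit is still below what Elasticsearch needs
if [ "$soft_nofile" != "unlimited" ] && [ "$soft_nofile" -lt 65536 ]; then
  echo "WARNING: soft nofile is below 65536; re-check /etc/security/limits.conf and log in again"
fi
```

Run the same check as the es user (`su - es`), since the limits are applied per login session.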
Edit /etc/sysctl.conf, add the lines below, then run sysctl -p to apply them:
vim /etc/sysctl.conf
vm.max_map_count=262144 # caps the number of VMAs (virtual memory areas) a process may own; Elasticsearch requires at least 262144
net.ipv4.tcp_retries2=5 # once retransmissions exceed tcp_retries2, the kernel gives up and closes the TCP stream
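Once applied, the effective values can be read straight back from /proc; a small check script (assumes a Linux host):

```shell
# Read the live value from the proc filesystem
map_count=$(cat /proc/sys/vm/max_map_count)
echo "vm.max_map_count = $map_count"

# Elasticsearch's bootstrap check requires at least 262144
if [ "$map_count" -lt 262144 ]; then
  echo "WARNING: vm.max_map_count is below 262144; run 'sysctl -p' as root"
fi
```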
Unpack the Elasticsearch archive, enter the config folder, and edit elasticsearch.yml:
cluster.name: elk # cluster name
node.name: es1 # node name
node.roles: [ master,data ] # node roles
node.attr.rack: r1 # rack location; rarely meaningful, optional
path.data: /data/esdata
path.logs: /data/eslog
bootstrap.memory_lock: true # lock the heap in memory
network.host: 0.0.0.0
http.max_content_length: 200mb
network.tcp.keep_alive: true
network.tcp.no_delay: true
http.port: 9200
http.cors.enabled: true # allow HTTP cross-origin access; required by the es_head plugin
http.cors.allow-origin: "*" # allow HTTP cross-origin access; required by the es_head plugin
discovery.seed_hosts: ["es1", "es2"]
cluster.initial_master_nodes: ["es1", "es2"]
xpack.monitoring.collection.enabled: true # without this, Kibana shows the cluster as offline instead of online
xpack.security.enabled: true
#xpack.security.enrollment.enabled: true
xpack.security.http.ssl.enabled: true
xpack.security.http.ssl.keystore.path: elastic-certificates.p12 # I keep all the files under config, so a bare filename works; use a full path if they live elsewhere
xpack.security.http.ssl.truststore.path: elastic-certificates.p12
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: elastic-certificates.p12
Configure the JVM heap size by editing jvm.options:
-Xms6g # half of the server's RAM, at most 32 GB
-Xmx6g # half of the server's RAM, at most 32 GB
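The "half of RAM, capped" rule can be computed rather than guessed. A sketch that derives the heap size from /proc/meminfo (Linux only; the 31 GB cap here is a common conservative reading of the 32 GB limit, chosen to keep compressed object pointers enabled):

```shell
# Half of physical RAM in GB, floored at 1 GB and capped at 31 GB
total_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
half_gb=$(( total_kb / 1024 / 1024 / 2 ))
if [ "$half_gb" -lt 1 ]; then half_gb=1; fi    # floor for tiny machines
if [ "$half_gb" -gt 31 ]; then half_gb=31; fi  # stay under the compressed-oops threshold
echo "-Xms${half_gb}g"
echo "-Xmx${half_gb}g"
```

Always keep -Xms and -Xmx identical so the heap never resizes at runtime.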
With the config in place, generate the keys and certificates.
Create the CA certificate; nothing needs to be typed, just press Enter twice (this produces elastic-stack-ca.p12 in the current directory):
bin/elasticsearch-certutil ca
Use that CA to create the node certificate; press Enter three times through the prompts, which produces elastic-certificates.p12 in the current directory:
bin/elasticsearch-certutil cert --ca elastic-stack-ca.p12
Generate the HTTP certificate, following the prompts; the key steps are:
bin/elasticsearch-certutil http
Generate a CSR? [y/N]n
Use an existing CA? [y/N]y
CA Path: /usr/local/elasticsearch-8.10.4/config/certs/elastic-stack-ca.p12
Password for elastic-stack-ca.p12: (just press Enter; no password)
For how long should your certificate be valid? [5y] 50y # validity period
Generate a certificate per node? [y/N]n
Enter all the hostnames that you need, one per line. # enter the ES node hostnames, then press Enter twice to confirm
When you are done, press <ENTER> once more to move on to the next step.
es1
es2
es3
You entered the following hostnames.
- es1
- es2
- es3
Is this correct [Y/n]y
When you are done, press <ENTER> once more to move on to the next step. # enter the ES node IPs, then press Enter twice to confirm
192.168.1.1
192.168.1.2
192.168.1.3
You entered the following IP addresses.
- 192.168.1.1
- 192.168.1.2
- 192.168.1.3
Is this correct [Y/n]y
Do you wish to change any of these options? [y/N]n
Keep pressing Enter through the remaining prompts; a file named elasticsearch-ssl-http.zip is created in the current directory.
Unzip it under config; the certificate, named http.p12, is inside the http folder of the archive — mv it out into config.
Make sure everything under the elasticsearch directory is owned by the es user:
chown -R es:es /home/es/elasticsearch-8.10.4
Start Elasticsearch:
su - es # switch to the es user
bin/elasticsearch # run in the foreground the first time; once it looks healthy, run it in the background:
bin/elasticsearch -d
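A side note on the .p12 files generated earlier: they are ordinary PKCS#12 bundles, and openssl can show what is inside. The sketch below first creates a throwaway self-signed bundle so it runs anywhere; to inspect the real keystore, point the last command at elastic-certificates.p12 instead (demo-key.pem, demo-cert.pem, and demo.p12 are made-up names for this demo):

```shell
# Build a disposable self-signed key + cert (a stand-in for the certutil output)
openssl req -x509 -newkey rsa:2048 -days 365 -nodes \
  -keyout demo-key.pem -out demo-cert.pem -subj "/CN=es1" 2>/dev/null

# Bundle them into a PKCS#12 keystore with an empty password, like the article's certs
openssl pkcs12 -export -out demo.p12 -inkey demo-key.pem -in demo-cert.pem -passout pass:

# List the bundle's contents; for the real file, use: openssl pkcs12 -info -in elastic-certificates.p12 -nodes
openssl pkcs12 -in demo.p12 -nodes -passin pass: | grep subject
```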
Copy the whole elasticsearch folder to es2 and es3. Only the following needs to change on each:
node.name: es2 # node name
network.host: 192.168.1.2 # node IP
node.name: es3 # node name
network.host: 192.168.1.3 # node IP
Check the Elasticsearch web UI in a browser:
https://192.168.1.1:9200
Generate passwords for the built-in accounts:
bin/elasticsearch-setup-passwords interactive
warning: ignoring JAVA_HOME=/usr/local/java/jdk1.8.0_361; using bundled JDK
******************************************************************************
Note: The 'elasticsearch-setup-passwords' tool has been deprecated. This command will be removed in a future release.
******************************************************************************
Initiating the setup of passwords for reserved users elastic,apm_system,kibana,kibana_system,logstash_system,beats_system,remote_monitoring_user.
You will be prompted to enter passwords as the process progresses.
Please confirm that you would like to continue [y/N]y
Enter password for [elastic]:
Reenter password for [elastic]:
Enter password for [apm_system]:
Reenter password for [apm_system]:
Enter password for [kibana_system]:
Reenter password for [kibana_system]:
Enter password for [logstash_system]:
Reenter password for [logstash_system]:
Enter password for [beats_system]:
Reenter password for [beats_system]:
Enter password for [remote_monitoring_user]:
Reenter password for [remote_monitoring_user]:
Changed password for user [apm_system]
Changed password for user [kibana_system]
Changed password for user [kibana]
Changed password for user [logstash_system]
Changed password for user [beats_system]
Changed password for user [remote_monitoring_user]
Changed password for user [elastic]
From here on, access requires a username and password.
Create a user for Kibana:
bin/elasticsearch-users useradd kibanauser
Kibana is not allowed to connect as the es superuser; the superuser line below is only to demonstrate the roles syntax:
bin/elasticsearch-users roles -a superuser kibanauser
Add these two roles, otherwise the user lacks monitoring permissions:
bin/elasticsearch-users roles -a kibana_admin kibanauser
bin/elasticsearch-users roles -a monitoring_user kibanauser
Next, configure Kibana. Unpack it and edit kibana.yml:
server.port: 5601
server.host: "0.0.0.0"
server.ssl.enabled: true
server.ssl.certificate: /data/elasticsearch-8.10.4/config/client.cer
server.ssl.key: /data/elasticsearch-8.10.4/config/client.key
elasticsearch.hosts: ["https://192.168.1.1:9200"]
elasticsearch.username: "kibanauser"
elasticsearch.password: "kibanauser"
elasticsearch.ssl.certificate: /data/elasticsearch-8.10.4/config/client.cer
elasticsearch.ssl.key: /data/elasticsearch-8.10.4/config/client.key
elasticsearch.ssl.certificateAuthorities: [ "/data/elasticsearch-8.10.4/config/client-ca.cer" ]
elasticsearch.ssl.verificationMode: certificate
i18n.locale: "zh-CN"
xpack.encryptedSavedObjects.encryptionKey: encryptedSavedObjects1234567890987654321
xpack.security.encryptionKey: encryptionKeysecurity1234567890987654321
xpack.reporting.encryptionKey: encryptionKeyreporting1234567890987654321
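The three xpack *.encryptionKey values just need to be random strings of at least 32 characters; rather than hand-typing them as above, you can generate proper random ones, e.g.:

```shell
# Generate three independent 64-hex-character keys for kibana.yml
for key in encryptedSavedObjects security reporting; do
  echo "${key}: $(openssl rand -hex 32)"
done
```

Keep these keys stable across restarts (and identical across Kibana instances), or saved objects and reports encrypted with them become unreadable.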
Start Kibana:
bin/kibana
and visit https://ip:5601
Now configure Logstash. After unpacking, create a pipeline file under conf; I named mine logstash.conf:
input {
  kafka {
    bootstrap_servers => "192.168.1.1:9092"
    group_id => "logstash_test"
    client_id => 1 # same topic + same group_id + different client_id lets multiple Logstash instances consume Kafka in parallel
    topics => ["testlog"]
    consumer_threads => 2 # should equal the topic's partition count
    codec => json { # Filebeat ships JSON, so decode it here
      charset => "UTF-8"
    }
    decorate_events => false # when true, topic/offset/group/partition metadata is added to the event
    type => "testlog" # duplicates the topic name, because the output block cannot read the topics setting
  }
}
filter {
  mutate {
    remove_field => "@version" # drop fields we do not need
    remove_field => "event"
    remove_field => "fields"
  }
}
output {
  elasticsearch {
    cacert => "/data/elasticsearch-8.10.4/config/client-ca.cer"
    ssl => true
    ssl_certificate_verification => false
    user => elastic
    password => "123456"
    action => "index"
    hosts => "https://192.168.1.1:9200"
    index => "%{type}-%{+YYYY.MM.dd}"
  }
}
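The index => "%{type}-%{+YYYY.MM.dd}" pattern expands the event's type field plus the date taken from the event's @timestamp, giving one index per day. In shell terms the resulting name looks like:

```shell
# Simulate the daily index name Logstash will write to
type="testlog"
index="${type}-$(date +%Y.%m.%d)"
echo "$index"   # e.g. testlog-2023.11.21
```

Daily indices make retention easy: old days can be deleted or rolled over wholesale instead of deleting documents out of one big index.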
Edit jvm.options here as well:
-Xms6g # half of the server's RAM, at most 32 GB
-Xmx6g # half of the server's RAM, at most 32 GB
Start Logstash:
bin/logstash -f conf/logstash.conf
Finally, deploy Filebeat on the application servers:
filebeat.inputs:
- type: filestream # similar to the old log input type; fine for ordinary log files
  id: testlog1
  enabled: true
  paths:
    - /var/log/testlog1.log
  fields:
    log_topic: testlog1 # kept nested under "fields" (fields_under_root left at its default of false) so the Kafka output can resolve topic: '%{[fields.log_topic]}'; the id setting cannot be referenced as a variable there
  parsers:
    - multiline: # merge rule: any line that does not start with four digits (e.g. a 2023-... timestamp) is appended to the previous event
        type: pattern
        pattern: '^\d{4}'
        negate: true # lines NOT matching the pattern...
        match: after # ...are merged after the matching line
- type: filestream
  id: testlog2
  enabled: true
  paths:
    - /var/log/testlog2
  fields:
    log_topic: testlog2
  parsers:
    - multiline:
        type: pattern
        pattern: '^\d{4}'
        negate: true
        match: after
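The multiline rule can be sanity-checked offline: every line starting with four digits (a timestamp such as 2023-...) begins a new event, and anything else (stack-trace continuation lines, say) is glued to the previous one. A quick simulation with grep over sample log lines:

```shell
# Two events: the indented stack-trace lines do not start with four digits,
# so under negate+after they would be merged into the first event
cat > sample.log <<'EOF'
2023-11-21 10:00:01 ERROR something broke
    at com.example.Foo.bar(Foo.java:42)
    at com.example.Main.main(Main.java:7)
2023-11-21 10:00:05 INFO recovered
EOF

# Count event starts, i.e. lines matching the filebeat pattern '^\d{4}'
grep -cE '^[0-9]{4}' sample.log   # prints 2
```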
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: true # reload the config at runtime
  #reload.period: 10s
path.home: /data/filebeat-8.10.4/ # Filebeat's home directory; needed when running several instances
path.data: /data/filebeat-8.10.4/data/
path.logs: /data/filebeat-8.10.4/logs/
processors:
  - drop_fields: # remove fields we do not want shipped
      fields: ["agent","event","input","log","type","ecs"]
output.kafka:
  enabled: true
  hosts: ["192.168.1.1:9092"] # Kafka address(es); separate multiple brokers with commas
  topic: '%{[fields.log_topic]}' # route to different topics based on the field added above
That completes the initial deployment. Day-to-day use is where the real work begins — a long road lies ahead.