
ELK + Kafka + Zookeeper Log Collection System


Environment Preparation

Node IP        Planned components                                               Hostname
192.168.112.3  Elasticsearch + Kibana + Logstash + Zookeeper + Kafka + Nginx    elk-node1
192.168.112.4  Elasticsearch + Logstash + Zookeeper + Kafka                     elk-node2
192.168.112.5  Elasticsearch + Logstash + Zookeeper + Kafka + Nginx             elk-node3

Base Environment (run on all three nodes)

systemctl disable firewalld --now && setenforce 0
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
mv /etc/yum.repos.d/CentOS-* /tmp/
curl -o /etc/yum.repos.d/centos.repo http://mirrors.aliyun.com/repo/Centos-7.repo
curl -o /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
yum install -y vim net-tools wget unzip

Set the Hostnames

[root@localhost ~]# hostnamectl set-hostname elk-node1
[root@localhost ~]# bash

[root@localhost ~]# hostnamectl set-hostname elk-node2
[root@localhost ~]# bash

[root@localhost ~]# hostnamectl set-hostname elk-node3
[root@localhost ~]# bash

Configure Hostname Mapping

Add all three nodes to /etc/hosts on every machine:

[root@elk-node1 ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.112.3 elk-node1
192.168.112.4 elk-node2
192.168.112.5 elk-node3
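As a quick sanity check (a minimal sketch, assuming the same /etc/hosts has been copied to every machine), confirm each hostname resolves and responds:

[root@elk-node1 ~]# for h in elk-node1 elk-node2 elk-node3; do ping -c 1 -W 1 $h > /dev/null && echo "$h OK" || echo "$h UNREACHABLE"; done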

Elasticsearch Deployment

Install Elasticsearch

Java and Elasticsearch must be installed on all three hosts.

[root@elk-node1 ~]# yum install -y java-1.8.0-*

[root@elk-node1 ~]# wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-6.0.0.rpm

[root@elk-node1 ~]# rpm -ivh elasticsearch-6.0.0.rpm
### Option meanings: i = install, v = verbose output, h = print hash marks to show progress

Fixing a startup error

### If the JDK was installed from a binary tarball rather than via yum, Elasticsearch may fail to start because it cannot find java; link the binary onto the PATH:
[root@elk-node1 ~]# ln -s /opt/jdk1.8.0_391/bin/java /usr/bin/java
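A quick check that the link works before moving on (the exact output depends on your JDK build):

[root@elk-node1 ~]# java -version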

Elasticsearch Configuration

elk-node1 configuration
[root@elk-node1 ~]# cat /etc/elasticsearch/elasticsearch.yml | grep -v ^# | grep -v ^$
cluster.name: ELK
node.name: elk-node1
node.master: true
node.data: true
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
network.host: 192.168.112.3
http.port: 9200
discovery.zen.ping.unicast.hosts: ["elk-node1", "elk-node2","elk-node3"]
elk-node2 configuration
[root@elk-node2 ~]# cat /etc/elasticsearch/elasticsearch.yml | grep -v ^# | grep -v ^$
cluster.name: ELK
node.name: elk-node2
node.master: true
node.data: true
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
network.host: 192.168.112.4
http.port: 9200
discovery.zen.ping.unicast.hosts: ["elk-node1", "elk-node2","elk-node3"]
elk-node3 configuration
[root@elk-node3 ~]# cat /etc/elasticsearch/elasticsearch.yml | grep -v ^# | grep -v ^$
cluster.name: ELK
node.name: elk-node3
node.master: true
node.data: true
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
network.host: 192.168.112.5
http.port: 9200
discovery.zen.ping.unicast.hosts: ["elk-node1", "elk-node2","elk-node3"]

Start the Service (on all three nodes)

[root@elk-node1 ~]# systemctl daemon-reload
[root@elk-node1 ~]# systemctl enable elasticsearch --now
Created symlink from /etc/systemd/system/multi-user.target.wants/elasticsearch.service to /usr/lib/systemd/system/elasticsearch.service.

Check the Process and Ports

[root@elk-node1 ~]# ps -ef | grep elasticsearch
elastic+  12663      1 99 22:28 ?        00:00:11 /bin/java -Xms1g -Xmx1g -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+AlwaysPreTouch -server -Xss1m -Djava.awt.headless=true -Dfile.encoding=UTF-8 -Djna.nosys=true -XX:-OmitStackTraceInFastThrow -Dio.netty.noUnsafe=true -Dio.netty.noKeySetOptimization=true -Dio.netty.recycler.maxCapacityPerThread=0 -Dlog4j.shutdownHookEnabled=false -Dlog4j2.disable.jmx=true -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/var/lib/elasticsearch -Des.path.home=/usr/share/elasticsearch -Des.path.conf=/etc/elasticsearch -cp /usr/share/elasticsearch/lib/* org.elasticsearch.bootstrap.Elasticsearch -p /var/run/elasticsearch/elasticsearch.pid --quiet
root      12720   1822  0 22:28 pts/0    00:00:00 grep --color=auto elasticsearch
[root@elk-node1 ~]# netstat -ntpl
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name    
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      1021/sshd           
tcp        0      0 127.0.0.1:25            0.0.0.0:*               LISTEN      1175/master         
tcp6       0      0 192.168.112.3:9200      :::*                    LISTEN      12663/java          
tcp6       0      0 192.168.112.3:9300      :::*                    LISTEN      12663/java          
tcp6       0      0 :::22                   :::*                    LISTEN      1021/sshd           
tcp6       0      0 ::1:25                  :::*                    LISTEN      1175/master

Check the Cluster Status

[root@elk-node1 ~]# curl 'elk-node1:9200/_cluster/health?pretty'
{
  "cluster_name" : "ELK",   		//集群名稱
  "status" : "green",   				//集群健康狀態(tài),green為健康,yellow或者red則是集群有問題
  "timed_out" : false   				//是否超時,
  "number_of_nodes" : 3,   			//集群中節(jié)點數(shù)
  "number_of_data_nodes" : 3,   //集群中data節(jié)點數(shù)量
  "active_primary_shards" : 0,
  "active_shards" : 0,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 100.0
}
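Beyond _cluster/health, the _cat APIs give a per-node view; a minimal check that all three nodes joined and which one was elected master (column layout may differ slightly between Elasticsearch versions):

[root@elk-node1 ~]# curl 'elk-node1:9200/_cat/nodes?v'
[root@elk-node1 ~]# curl 'elk-node1:9200/_cat/master?v'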

Kibana Deployment

Install Kibana

[root@elk-node1 ~]# wget https://artifacts.elastic.co/downloads/kibana/kibana-6.0.0-x86_64.rpm

[root@elk-node1 ~]# rpm -ivh kibana-6.0.0-x86_64.rpm

Nginx Load Balancer for Elasticsearch

Kibana will reach Elasticsearch through an Nginx load balancer listening on port 80, so set up Nginx first.

Add the nginx repository

[root@elk-node1 ~]# vim /etc/yum.repos.d/nginx.repo
[nginx]
name = nginx repo
baseurl = https://nginx.org/packages/mainline/centos/7/$basearch/
gpgcheck = 0
enabled = 1

Install nginx

[root@elk-node1 ~]# yum install -y nginx

Start the service

[root@elk-node1 ~]# systemctl enable nginx --now
Created symlink from /etc/systemd/system/multi-user.target.wants/nginx.service to /usr/lib/systemd/system/nginx.service.

Configure nginx load balancing

[root@elk-node1 ~]# cat /etc/nginx/nginx.conf

user  nginx;
worker_processes  auto;

error_log  /var/log/nginx/error.log notice;
pid        /var/run/nginx.pid;


events {
    worker_connections  1024;
}


http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;

    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  /var/log/nginx/access.log  main;

    sendfile        on;
    #tcp_nopush     on;

    keepalive_timeout  65;

    #gzip  on;


    upstream elasticsearch {
        zone elasticsearch 64K;
        server elk-node1:9200;
        server elk-node2:9200;
        server elk-node3:9200;
    }

    server {
        listen 80;
        server_name 192.168.112.3;

        location / {
            proxy_pass http://elasticsearch;
            proxy_redirect off;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }

        access_log /var/log/es_access.log;
    }


    include /etc/nginx/conf.d/*.conf;
}

Restart the service

[root@elk-node1 ~]# nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
[root@elk-node1 ~]# nginx -s reload
[root@elk-node1 ~]# systemctl restart nginx
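To confirm the load balancer actually proxies to the Elasticsearch cluster, query it on port 80; each request should come back from one of the upstream nodes (a minimal check, assuming port 80 is reachable from the node):

[root@elk-node1 ~]# curl http://192.168.112.3/
[root@elk-node1 ~]# curl 'http://192.168.112.3/_cluster/health?pretty'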

Kibana Configuration

[root@elk-node1 ~]# cat /etc/kibana/kibana.yml | grep -v ^#
server.port: 5601
server.host: 192.168.112.3
elasticsearch.url: "http://192.168.112.3:80"

Start the service

[root@elk-node1 ~]# systemctl enable kibana --now
Created symlink from /etc/systemd/system/multi-user.target.wants/kibana.service to /etc/systemd/system/kibana.service.
[root@elk-node1 ~]# ps -ef | grep kibana
kibana    13384      1 32 06:02 ?        00:00:02 /usr/share/kibana/bin/../node/bin/node --no-warnings /usr/share/kibana/bin/../src/cli -c /etc/kibana/kibana.yml
root      13396   1822  0 06:03 pts/0    00:00:00 grep --color=auto kibana

Browser access

Open http://192.168.112.3:5601 to reach the Kibana UI.

(Screenshot: Kibana web interface)

Zookeeper Cluster Deployment

Install Zookeeper

[root@elk-node1 ~]# tar -zxvf apache-zookeeper-3.8.3-bin.tar.gz -C /usr/local/
[root@elk-node1 ~]# mv /usr/local/apache-zookeeper-3.8.3-bin/ /usr/local/zookeeper
[root@elk-node1 ~]# cp /usr/local/zookeeper/conf/zoo_sample.cfg /usr/local/zookeeper/conf/zoo.cfg

Configure environment variables

### Quote the heredoc delimiter so the variables are written to /etc/profile literally instead of being expanded now
[root@elk-node1 ~]# cat >> /etc/profile << 'EOF'
export ZOOKEEPER_HOME=/usr/local/zookeeper
export PATH=$ZOOKEEPER_HOME/bin:$PATH
EOF

[root@elk-node1 ~]# source /etc/profile

[root@elk-node1 ~]# scp /etc/profile 192.168.112.4:/etc/profile
[root@elk-node1 ~]# scp /etc/profile 192.168.112.5:/etc/profile

[root@elk-node2 ~]# source /etc/profile
[root@elk-node3 ~]# source /etc/profile

Configure Zookeeper

[root@elk-node1 ~]# cat /usr/local/zookeeper/conf/zoo.cfg
# The number of milliseconds of each tick
tickTime=2000   ### heartbeat interval between zookeeper nodes: 2 seconds
# The number of ticks that the initial 
# synchronization phase can take
initLimit=10    ### leader/follower initial connection time limit (in ticks)
# The number of ticks that can pass between 
# sending a request and getting an acknowledgement
syncLimit=5     ### leader/follower sync time limit (in ticks)
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just 
# example sakes.
dataDir=/tmp/zookeeper  ### directory where zookeeper stores its data (the sample config warns against /tmp for real deployments)
dataLogDir=/usr/local/zookeeper/logs ### directory where zookeeper writes its log files
# the port at which the clients will connect
clientPort=2181         ### port clients use to connect to the zookeeper server
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
maxClientCnxns=60
#
# Be sure to read the maintenance section of the 
# administrator guide before turning on autopurge.
#
# https://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1
autopurge.purgeInterval=1

server.1=elk-node1:2888:3888
server.2=elk-node2:2888:3888
server.3=elk-node3:2888:3888

## Metrics Providers
#
# https://prometheus.io Metrics Exporter
#metricsProvider.className=org.apache.zookeeper.metrics.prometheus.PrometheusMetricsProvider
#metricsProvider.httpHost=0.0.0.0
#metricsProvider.httpPort=7000
#metricsProvider.exportJvmInfo=true

Configure the node IDs (myid)

[root@elk-node1 ~]# scp /usr/local/zookeeper/conf/zoo.cfg 192.168.112.4:/usr/local/zookeeper/conf/zoo.cfg
[root@elk-node1 ~]# scp /usr/local/zookeeper/conf/zoo.cfg 192.168.112.5:/usr/local/zookeeper/conf/zoo.cfg

[root@elk-node1 ~]# mkdir /tmp/zookeeper
[root@elk-node1 ~]# echo "1" > /tmp/zookeeper/myid

[root@elk-node2 ~]# mkdir /tmp/zookeeper
[root@elk-node2 ~]# echo "2" > /tmp/zookeeper/myid

[root@elk-node3 ~]# mkdir /tmp/zookeeper
[root@elk-node3 ~]# echo "3" > /tmp/zookeeper/myid

Start the service

Zookeeper must be started on all three nodes, otherwise the status check reports errors.

[root@elk-node1 ~]# zkServer.sh start
/bin/java
ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED

Check the service status

[root@elk-node1 ~]# zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper/bin/../conf/zoo.cfg
Client port found: 2181. Client address: localhost. Client SSL: false.
Mode: follower
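Run the same status check on elk-node2 and elk-node3; a healthy three-node ensemble shows exactly one "Mode: leader" and two "Mode: follower". As an optional extra probe (assuming nc is installed), the srvr four-letter command, whitelisted by default, reports the node's role and connection counts:

[root@elk-node2 ~]# zkServer.sh status
[root@elk-node3 ~]# zkServer.sh status
[root@elk-node1 ~]# echo srvr | nc 127.0.0.1 2181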

Kafka Cluster Deployment

Install Kafka

[root@elk-node1 ~]# tar -zxvf kafka_2.12-3.6.1.tgz -C /usr/local/
[root@elk-node1 ~]# mv /usr/local/kafka_2.12-3.6.1/ /usr/local/kafka
[root@elk-node1 ~]# cp /usr/local/kafka/config/server.properties{,.bak}
[root@elk-node1 ~]# scp kafka_2.12-3.6.1.tgz 192.168.112.4:/root
[root@elk-node1 ~]# scp kafka_2.12-3.6.1.tgz 192.168.112.5:/root

Configure environment variables

### As before, quote the heredoc delimiter so the variables are written literally
[root@elk-node1 ~]# cat >> /etc/profile << 'EOF'
export KAFKA_HOME=/usr/local/kafka
export PATH=$KAFKA_HOME/bin:$PATH
EOF

[root@elk-node1 ~]# source /etc/profile
[root@elk-node1 ~]# echo $KAFKA_HOME
/usr/local/kafka

[root@elk-node1 ~]# scp /etc/profile 192.168.112.4:/etc/profile
[root@elk-node1 ~]# scp /etc/profile 192.168.112.5:/etc/profile

[root@elk-node2 ~]# source /etc/profile
[root@elk-node3 ~]# source /etc/profile

Configure Kafka

[root@elk-node1 ~]# grep -v "^#" /usr/local/kafka/config/server.properties.bak > /usr/local/kafka/config/server.properties
[root@elk-node1 ~]# vim /usr/local/kafka/config/server.properties
# Unique ID of this broker in the cluster; must be a positive integer
broker.id=1

# Address and port the broker listens on
listeners=PLAINTEXT://192.168.112.3:9092

# Number of threads the broker uses for network requests; usually no need to change this
num.network.threads=3

# Number of threads the broker uses for disk I/O; should be at least the number of disks
num.io.threads=8

# Socket send buffer size
socket.send.buffer.bytes=102400

# Socket receive buffer size
socket.receive.buffer.bytes=102400

# Maximum size of a socket request
socket.request.max.bytes=104857600

# Directory (or comma-separated directories) where Kafka stores its data; spreading directories
# across different disks improves read/write performance, e.g. /tmp/kafka-log,/tmp/kafka-log2
log.dirs=/usr/local/kafka/kafka-logs

# Default number of partitions per topic
num.partitions=1

num.recovery.threads.per.data.dir=1

offsets.topic.replication.factor=1

transaction.state.log.replication.factor=1

transaction.state.log.min.isr=1

# How long data is retained; 168 hours here. Retention can also be set in minutes or by size.
log.retention.hours=168

# How often (in ms) to check whether log segments are eligible for deletion under the retention policy
log.retention.check.interval.ms=300000

# Zookeeper cluster connection string
zookeeper.connect=elk-node1:2181,elk-node2:2181,elk-node3:2181

# Timeout for Kafka's connection to Zookeeper
zookeeper.connection.timeout.ms=6000

group.initial.rebalance.delay.ms=0
[root@elk-node1 ~]# scp /usr/local/kafka/config/server.properties 192.168.112.4:/usr/local/kafka/config/server.properties
[root@elk-node1 ~]# scp /usr/local/kafka/config/server.properties 192.168.112.5:/usr/local/kafka/config/server.properties

########  Adjust broker.id on each node
# broker.id must be unique for every broker in the cluster (listeners must also carry each node's own IP; see the sketch below)
broker.id=1   # elk-node1
broker.id=2   # elk-node2
broker.id=3   # elk-node3
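A sketch of adjusting the copied file on the other two nodes (it assumes server.properties was copied unmodified from elk-node1):

### On elk-node2
[root@elk-node2 ~]# sed -i 's/^broker.id=1/broker.id=2/' /usr/local/kafka/config/server.properties
[root@elk-node2 ~]# sed -i 's/192.168.112.3:9092/192.168.112.4:9092/' /usr/local/kafka/config/server.properties

### On elk-node3
[root@elk-node3 ~]# sed -i 's/^broker.id=1/broker.id=3/' /usr/local/kafka/config/server.properties
[root@elk-node3 ~]# sed -i 's/192.168.112.3:9092/192.168.112.5:9092/' /usr/local/kafka/config/server.properties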

Start Kafka

Kafka must be started on all three nodes.

### Start
[root@elk-node1 ~]# kafka-server-start.sh -daemon /usr/local/kafka/config/server.properties
### Stop
[root@elk-node1 ~]# kafka-server-stop.sh

Note: a Kafka broker asks for 1G of heap by default. In production this is often raised by editing the KAFKA_HEAP_OPTS setting in kafka-server-start.sh, for example: export KAFKA_HEAP_OPTS="-Xmx2G -Xms2G"
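For example, a one-line edit of the startup script (this assumes the default heap line in kafka-server-start.sh is "-Xmx1G -Xms1G"; check first before substituting):

[root@elk-node1 ~]# grep KAFKA_HEAP_OPTS /usr/local/kafka/bin/kafka-server-start.sh
[root@elk-node1 ~]# sed -i 's/-Xmx1G -Xms1G/-Xmx2G -Xms2G/' /usr/local/kafka/bin/kafka-server-start.sh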

Test Kafka

[root@elk-node1 ~]# jps 
24099 QuorumPeerMain
48614 Jps
47384 Kafka
23258 Elasticsearch
Create a Topic
`On elk-node1 (one of the brokers) create a test topic named test-ken, specifying 3 replicas and 2 partitions.`

[root@elk-node1 ~]# kafka-topics.sh --create --bootstrap-server elk-node1:9092 --replication-factor 3 --partitions 2 --topic test-ken
Created topic test-ken.

Kafka warns at creation time that "." and "_" can collide in metric names, so avoid mixing them in topic names. Option explanations:
--create: create a new Topic
--bootstrap-server: the Kafka broker (host:port) to connect to; the address must match the one configured in listeners
--zookeeper: older Kafka releases created topics through Zookeeper with this option; it was removed in Kafka 3.x, so use --bootstrap-server instead
--replication-factor: the number of replicas for each partition of the Topic, i.e. the Topic's replica count; it should normally equal the number of brokers and cannot exceed it, otherwise creation fails
--partitions: the number of partitions for the Topic
--topic: the Topic name
List Topics

After the Topic is created on elk-node1 it is also replicated to the other two brokers (elk-node2, elk-node3). List the topics known to a given broker with:

[root@elk-node1 ~]# kafka-topics.sh --list --bootstrap-server elk-node1:9092 
test-ken

[root@elk-node1 ~]# kafka-topics.sh --list --bootstrap-server elk-node2:9092
__consumer_offsets
test-ken
Describe a Topic
[root@elk-node3 ~]# kafka-topics.sh --describe --bootstrap-server elk-node1:9092 --topic test-ken
Topic: test-ken TopicId: CMsPBF2XQySuUyr9ekEf7Q PartitionCount: 2       ReplicationFactor: 3    Configs: 
        Topic: test-ken Partition: 0    Leader: 3       Replicas: 3,2,1 Isr: 3,2,1
        Topic: test-ken Partition: 1    Leader: 1       Replicas: 1,3,2 Isr: 1,3,2
        
`Topic: test-ken`# Topic name
`PartitionCount: 2`# number of partitions
`ReplicationFactor: 3`# number of replicas of the Topic
Produce messages
Send a few messages to the topic test-ken through the broker with id=1:

[root@elk-node1 ~]# kafka-console-producer.sh --broker-list elk-node1:9092 --topic test-ken
>this is test   
>bye

--broker-list: which broker(s) to produce the messages through
--topic: which Topic to produce the messages into
Verify the messages are received
### Consumers:
### Consume from the beginning (every node can read the messages)

### Test from elk-node1
[root@elk-node1 ~]# kafka-console-consumer.sh --bootstrap-server elk-node2:9092 --topic test-ken --from-beginning 
this is test
bye

Processed a total of 2 messages

### Test from elk-node2
[root@elk-node2 ~]# kafka-console-consumer.sh --bootstrap-server elk-node1:9092 --topic test-ken --from-beginning     
this is test
bye

Processed a total of 2 messages
### Consumer groups:
### One consumer group can hold multiple consumer processes, up to the number of partitions
### test-ken has only 2 partitions, so at most two consumers in the group can poll messages

[root@elk-node1 ~]# kafka-console-consumer.sh --bootstrap-server elk-node1:9092 --topic test-ken --group testgroup_ken
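To see how the group's consumers were assigned to the two partitions and how far they have read, describe the group with the consumer-groups tool (run while, or shortly after, the consumers above are attached):

[root@elk-node1 ~]# kafka-consumer-groups.sh --bootstrap-server elk-node1:9092 --describe --group testgroup_ken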
Delete a Topic
[root@elk-node1 ~]# kafka-topics.sh --delete --bootstrap-server elk-node1:9092 --topic test-ken
Verify the deletion
[root@elk-node3 ~]# kafka-topics.sh --describe --bootstrap-server elk-node1:9092 --topic test-ken
Error while executing topic command : Topic 'test-ken' does not exist as expected
[2024-01-13 15:14:10,659] ERROR java.lang.IllegalArgumentException: Topic 'test-ken' does not exist as expected
        at kafka.admin.TopicCommand$.kafka$admin$TopicCommand$$ensureTopicExists(TopicCommand.scala:400)
        at kafka.admin.TopicCommand$TopicService.describeTopic(TopicCommand.scala:312)
        at kafka.admin.TopicCommand$.main(TopicCommand.scala:63)
        at kafka.admin.TopicCommand.main(TopicCommand.scala)
 (kafka.admin.TopicCommand$)

The Role of Zookeeper

1. Brokers register in Zookeeper
When each Kafka broker (one node, i.e. one machine) starts, it registers itself in Zookeeper under /brokers/ids using its broker.id. When a node fails, Zookeeper removes that entry, which makes it easy to watch broker membership across the whole cluster and rebalance in time.

[root@elk-node1 ~]# zkCli.sh -server elk-node1:2181
WatchedEvent state:SyncConnected type:None path:null
[zk: elk-node1:2181(CONNECTED) 0] ls /brokers 
[ids, seqid, topics]
[zk: elk-node1:2181(CONNECTED) 1] ls /brokers/ids
[1, 2, 3]
[zk: elk-node1:2181(CONNECTED) 2]
2. Topics register in Zookeeper
Kafka can hold many topics, and each topic is split into partitions. Normally each partition replica lives on a broker, and the mapping between topics, partitions and brokers is maintained in Zookeeper.

The Topic was deleted above, so create it again:
[root@elk-node1 ~]# kafka-topics.sh --create --bootstrap-server elk-node1:9092 --replication-factor 3 --partitions 2 --topic test-ken
Created topic test-ken.

[root@elk-node1 ~]# zkCli.sh -server elk-node1:2181
WatchedEvent state:SyncConnected type:None path:null
[zk: elk-node1:2181(CONNECTED) 0] ls /brokers/topics/test-ken/partitions
[0, 1]
3. Consumers and Zookeeper
Note: since Kafka 0.9, consumer group and offset information is no longer stored in Zookeeper but on the brokers themselves. Once a consumer with a group name (group.id) starts, that group name and the offsets for the topics it consumes are recorded on the broker side. Zookeeper is not well suited to heavy bulk reads and writes, especially writes, so Kafka introduced the internal __consumer_offsets topic and writes offset information there instead.

[zk: elk-node1:2181(CONNECTED) 0] ls /brokers/topics
[__consumer_offsets, test-ken]
[zk: elk-node1:2181(CONNECTED) 1] ls /brokers/topics/__consumer_offsets/partitions
[0, 1, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 2, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 3, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 4, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 5, 6, 7, 8, 9]

Beats Log Collection Deployment

Install Filebeat (on all three nodes)

[root@elk-node1 ~]# scp filebeat-6.0.0-x86_64.rpm 192.168.112.4:/root
[root@elk-node1 ~]# scp filebeat-6.0.0-x86_64.rpm 192.168.112.5:/root

[root@elk-node1 ~]# rpm -ivh filebeat-6.0.0-x86_64.rpm 
warning: filebeat-6.0.0-x86_64.rpm: Header V4 RSA/SHA512 Signature, key ID d88e42b4: NOKEY
Preparing...                          ################################# [100%]
Updating / installing...
   1:filebeat-6.0.0-1                 ################################# [100%]

Filebeat Configuration

elk-node1 node
### Edit the configuration file
[root@elk-node1 ~]# > /etc/filebeat/filebeat.yml
[root@elk-node1 ~]# vim /etc/filebeat/filebeat.yml
filebeat.prospectors:
- type: log
  enabled: true
  paths:
    - /var/log/es_access.log	### change this to the log file you want to monitor

output.kafka:
  enabled: true
  hosts: ["elk-node1:9092","elk-node2:9092","elk-node3:9092"]
  topic: "es_access"		### 對應(yīng)zookeeper?成的topic
  keep_alive: 10s
elk-node2 node
[root@elk-node2 ~]# > /etc/filebeat/filebeat.yml
[root@elk-node2 ~]# vim /etc/filebeat/filebeat.yml 
filebeat.prospectors:
- type: log
  enabled: true
  paths:
    - /var/log/vmware-network.log

output.kafka:
  enabled: true
  hosts: ["elk-node1:9092","elk-node2:9092","elk-node3:9092"]
  topic: "vmware-network"
  keep_alive: 10s
elk-node3 node
[root@elk-node3 ~]# > /etc/filebeat/filebeat.yml
[root@elk-node3 ~]# vim /etc/filebeat/filebeat.yml
filebeat.prospectors:
- type: log
  enabled: true
  paths:
    - /var/log/access.log

output.kafka:
  enabled: true
  hosts: ["elk-node1:9092","elk-node2:9092","elk-node3:9092"]
  topic: "access"
  keep_alive: 10s
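Before starting the service, Filebeat can check its own configuration and the connection to the Kafka output. A quick sanity check on each node (a sketch, assuming the 6.x "test" subcommand is available in this build):

[root@elk-node1 ~]# filebeat test config -c /etc/filebeat/filebeat.yml
[root@elk-node1 ~]# filebeat test output -c /etc/filebeat/filebeat.yml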

Start the service (on all three nodes)

[root@elk-node1 ~]# systemctl enable filebeat --now
Created symlink from /etc/systemd/system/multi-user.target.wants/filebeat.service to /usr/lib/systemd/system/filebeat.service.

[root@elk-node1 ~]# systemctl status filebeat       
● filebeat.service - filebeat
   Loaded: loaded (/usr/lib/systemd/system/filebeat.service; enabled; vendor preset: disabled)
   Active: active (running) since Sat 2024-01-13 15:43:19 CST; 6s ago
     Docs: https://www.elastic.co/guide/en/beats/filebeat/current/index.html
 Main PID: 55537 (filebeat)
   CGroup: /system.slice/filebeat.service
           └─55537 /usr/share/filebeat/bin/filebeat -c /etc/filebeat/filebeat.yml -path.home /usr/share/filebeat -path.config /etc/filebeat -path.data /var/lib/filebeat...

Jan 13 15:43:19 elk-node1 systemd[1]: Started filebeat
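At this point Filebeat should be publishing into the Kafka topics. Before wiring up Logstash it is worth confirming that messages are actually arriving; a hedged check is to generate some traffic against the nginx on elk-node1 so es_access.log gets new lines, then list the topics and tail one of them (Ctrl+C to stop the consumer):

[root@elk-node1 ~]# curl -s http://192.168.112.3/ > /dev/null
[root@elk-node1 ~]# kafka-topics.sh --list --bootstrap-server elk-node1:9092
[root@elk-node1 ~]# kafka-console-consumer.sh --bootstrap-server elk-node1:9092 --topic es_access --from-beginning --max-messages 5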

Logstash Deployment

Install Logstash

[root@elk-node1 ~]# wget https://artifacts.elastic.co/downloads/logstash/logstash-6.0.0.rpm

[root@elk-node1 ~]# rpm -ivh logstash-6.0.0.rpm 
warning: logstash-6.0.0.rpm: Header V4 RSA/SHA512 Signature, key ID d88e42b4: NOKEY
Preparing...                          ################################# [100%]
Updating / installing...
   1:logstash-1:6.0.0-1               ################################# [100%]
Using provided startup.options file: /etc/logstash/startup.options
Successfully created system startup script for Logstash

Configure Logstash

elk-node1 node
### Edit /etc/logstash/logstash.yml, modifying or adding the following
[root@elk-node1 ~]# grep -v '^#' /etc/logstash/logstash.yml 
http.host: "192.168.112.3"
path.data: /var/lib/logstash
path.config: /etc/logstash/conf.d/*.conf
path.logs: /var/log/logstash
elk-node2 node
### Configure logstash to consume the es_access logs
[root@elk-node2 ~]# cat /etc/logstash/conf.d/es_access.conf
# Logstash pipeline configuration
input {
  kafka {
    bootstrap_servers => "elk-node1:9092,elk-node2:9092,elk-node3:9092"
    group_id => "logstash"
    auto_offset_reset => "earliest"
    decorate_events => true
    topics => ["es_access"]
    type => "messages"
  }
}

output {
  if [type] == "messages" {
    elasticsearch {
      hosts => ["elk-node1:9200","elk-node2:9200","elk-node3:9200"]
      index => "es_access-%{+YYYY.MM.dd}"
    }
  }
}
### Configure logstash to consume the vmware logs
### (note: the topics value below must match the Filebeat output topic, which was set to "vmware-network" above)
[root@elk-node2 ~]# cat /etc/logstash/conf.d/vmware.conf
# Logstash pipeline configuration
input {
  kafka {
    bootstrap_servers => "elk-node1:9092,elk-node2:9092,elk-node3:9092"
    group_id => "logstash"
    auto_offset_reset => "earliest"
    decorate_events => true
    topics => ["vmware"]
    type => "messages"
  }
}

output {
  if [type] == "messages" {
    elasticsearch {
      hosts => ["elk-node1:9200","elk-node2:9200","elk-node3:9200"]
      index => "vmware-%{+YYYY.MM.dd}"
    }
  }
}
### Configure logstash to consume the nginx logs
### (note: the topics value must match the Filebeat output topic, "access" on elk-node3; also, since all three
### files are loaded into one pipeline and share type => "messages", give each input a distinct type value if
### each log should only reach its own index)
[root@elk-node2 ~]# cat /etc/logstash/conf.d/nginx.conf
# Logstash pipeline configuration
input {
  kafka {
    bootstrap_servers => "elk-node1:9092,elk-node2:9092,elk-node3:9092"
    group_id => "logstash"
    auto_offset_reset => "earliest"
    decorate_events => true
    topics => ["nginx"]
    type => "messages"
  }
}

output {
  if [type] == "messages" {
    elasticsearch {
      hosts => ["elk-node1:9200","elk-node2:9200","elk-node3:9200"]
      index => "nginx-%{+YYYY.MM.dd}"
    }
  }
}

Check the configuration files for errors

[root@elk-node2 ~]# ln -s /usr/share/logstash/bin/logstash /usr/bin/

### Check es_access
[root@elk-node2 ~]# logstash --path.settings /etc/logstash/ -f /etc/logstash/conf.d/es_access.conf --config.test_and_exit
Sending Logstash's logs to /var/log/logstash which is now configured via log4j2.properties
Configuration OK

### Check vmware
[root@elk-node2 ~]# logstash --path.settings /etc/logstash/ -f /etc/logstash/conf.d/vmware.conf --config.test_and_exit   
Sending Logstash's logs to /var/log/logstash which is now configured via log4j2.properties
Configuration OK

### Check nginx
[root@elk-node2 ~]# logstash --path.settings /etc/logstash/ -f /etc/logstash/conf.d/nginx.conf --config.test_and_exit
Sending Logstash's logs to /var/log/logstash which is now configured via log4j2.properties
Configuration OK

### "Configuration OK" means the file has no problems

### Option explanations:
 --path.settings : the directory containing the Logstash settings files
 -f : the path of the configuration file to check
 --config.test_and_exit : test the configuration and exit instead of starting Logstash

Start Logstash

Logstash needs to be started on all three nodes.

### Once the configuration files check out, start the Logstash service
[root@elk-node2 ~]# systemctl enable logstash --now
Created symlink from /etc/systemd/system/multi-user.target.wants/logstash.service to /etc/systemd/system/logstash.service.

### Check the process
[root@elk-node2 ~]# ps -ef | grep logstash
logstash  17845      1  0 17:32 ?        00:00:00 /bin/java -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+DisableExplicitGC -Djava.awt.headless=true -Dfile.encoding=UTF-8 -XX:+HeapDumpOnOutOfMemoryError -Xmx1g -Xms256m -Xss2048k -Djffi.boot.library.path=/usr/share/logstash/vendor/jruby/lib/jni -Xbootclasspath/a:/usr/share/logstash/vendor/jruby/lib/jruby.jar -classpath : -Djruby.home=/usr/share/logstash/vendor/jruby -Djruby.lib=/usr/share/logstash/vendor/jruby/lib -Djruby.script=jruby -Djruby.shell=/bin/sh org.jruby.Main /usr/share/logstash/lib/bootstrap/environment.rb logstash/runner.rb --path.settings /etc/logstash

### Check the ports
[root@elk-node2 ~]# netstat -ntpl
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name    
tcp        0      0 127.0.0.1:25            0.0.0.0:*               LISTEN      1151/master         
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      1020/sshd           
tcp6       0      0 ::1:25                  :::*                    LISTEN      1151/master         
tcp6       0      0 :::9092                 :::*                    LISTEN      15757/java          
tcp6       0      0 :::2181                 :::*                    LISTEN      14812/java          
tcp6       0      0 :::40039                :::*                    LISTEN      15757/java          
tcp6       0      0 :::42696                :::*                    LISTEN      14812/java          
tcp6       0      0 192.168.112.4:3888      :::*                    LISTEN      14812/java          
tcp6       0      0 :::8080                 :::*                    LISTEN      14812/java          
tcp6       0      0 192.168.112.4:9200      :::*                    LISTEN      13070/java          
tcp6       0      0 192.168.112.4:9300      :::*                    LISTEN      13070/java          
tcp6       0      0 :::22                   :::*                    LISTEN      1020/sshd
Fixing a startup error
[root@elk-node2 ~]# systemctl start logstash
Failed to start logstash.service: Unit not found.

[root@elk-node2 ~]# sudo /usr/share/logstash/bin/system-install /etc/logstash/startup.options systemd
which: no java in (/sbin:/bin:/usr/sbin:/usr/bin)
could not find java; set JAVA_HOME or ensure java is in PATH

[root@elk-node2 ~]# ln -s /opt/jdk1.8.0_391/bin/java /usr/bin/java

[root@elk-node2 ~]# sudo /usr/share/logstash/bin/system-install /etc/logstash/startup.options systemd
Using provided startup.options file: /etc/logstash/startup.options
Manually creating startup for specified platform: systemd
Successfully created system startup script for Logstash

If the service starts and the process is running but port 9600 is not listening, it is a permissions problem: Logstash was previously run from the terminal as root, so the files it created are owned by root. Fix it as follows:

[root@elk-node2 ~]# cat /var/log/logstash/logstash-plain.log | grep que
[2024-01-13T17:23:56,589][INFO ][logstash.setting.writabledirectory] Creating directory {:setting=>"path.queue", :path=>"/var/lib/logstash/queue"}
[2024-01-13T17:23:56,589][INFO ][logstash.setting.writabledirectory] Creating directory {:setting=>"path.dead_letter_queue", :path=>"/var/lib/logstash/dead_letter_queue"}

[root@elk-node2 ~]# ll /var/lib/logstash/
total 0
drwxr-xr-x. 2 root root 6 Jan 13 17:23 dead_letter_queue
drwxr-xr-x. 2 root root 6 Jan 13 17:23 queue
### Change the owner of /var/lib/logstash/ to the logstash user and restart the service
[root@elk-node2 ~]# chown -R logstash /var/lib/logstash/
[root@elk-node2 ~]# ll /var/lib/logstash/               
total 0
drwxr-xr-x. 2 logstash root 6 Jan 13 17:23 dead_letter_queue
drwxr-xr-x. 2 logstash root 6 Jan 13 17:23 queue

[root@elk-node2 ~]# systemctl restart logstash

[root@elk-node2 ~]# netstat -ntpl
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name    
tcp        0      0 127.0.0.1:25            0.0.0.0:*               LISTEN      1151/master         
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      1020/sshd           
tcp6       0      0 ::1:25                  :::*                    LISTEN      1151/master         
tcp6       0      0 127.0.0.1:9600          :::*                    LISTEN      18707/java          
tcp6       0      0 :::9092                 :::*                    LISTEN      15757/java          
tcp6       0      0 :::2181                 :::*                    LISTEN      14812/java          
tcp6       0      0 :::40039                :::*                    LISTEN      15757/java          
tcp6       0      0 :::42696                :::*                    LISTEN      14812/java          
tcp6       0      0 192.168.112.4:3888      :::*                    LISTEN      14812/java          
tcp6       0      0 :::8080                 :::*                    LISTEN      14812/java          
tcp6       0      0 192.168.112.4:9200      :::*                    LISTEN      13070/java          
tcp6       0      0 192.168.112.4:9300      :::*                    LISTEN      13070/java          
tcp6       0      0 :::22                   :::*                    LISTEN      1020/sshd

View the Logs in Kibana

[root@elk-node1 ~]# curl 'elk-node1:9200/_cat/indices?v'
health status index                uuid                   pri rep docs.count docs.deleted store.size pri.store.size
green  open   .kibana              sQtNJsqNQ3mW4Bs62m5hpQ   1   1          1            0     26.1kb           13kb
green  open   nginx-2024.01.13     KVTsisxoRGKs60LYwdlbVA   5   1        424            0    517.9kb        258.9kb
green  open   vmware-2024.01.13    S_uEeLq6TluD4fajPGAz-g   5   1        424            0    549.8kb        274.9kb
green  open   es_access-2024.01.13 -743RqwoQMOBhBOlkOdVWg   5   1        424            0    540.5kb        270.2kb
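Documents can also be pulled straight from Elasticsearch to confirm the pipeline end to end; a minimal example against the nginx index created above (the index name carries the current date, so a wildcard is used here):

[root@elk-node1 ~]# curl 'elk-node1:9200/nginx-*/_search?size=1&pretty'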

Web UI Configuration

Open 192.168.112.3:5601 in a browser and configure the index patterns in Kibana.

For the Index pattern values, use curl 'elk-node1:9200/_cat/indices?v' to look up the index names.

(Screenshots: creating the index patterns in Kibana)

Production Deployment Recommendations

In a production cluster the nodes can be split by role.

It is recommended to run at least 3 dedicated master-eligible nodes:

node.master: true, node.data: false

These nodes only act as masters and maintain the overall cluster state.

Then, sized according to the data volume, a group of data nodes:

node.master: false, node.data: true

These nodes only store data and serve indexing and search requests, so when user requests are frequent they carry most of the load.

It is therefore also recommended to add a group of client (coordinating) nodes:

node.master: false, node.data: false

These nodes only handle user requests, doing request routing and load balancing.

Master nodes: ordinary servers are fine (moderate CPU and memory usage).

Data nodes: mainly consume disk and memory.

Client nodes: ordinary servers are fine (if heavy aggregations are expected, give these nodes extra memory as well).
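As a sketch, the three roles map onto these elasticsearch.yml combinations (taken straight from the description above; adjust node names and counts to your environment):

### Dedicated master node
node.master: true
node.data: false

### Data node
node.master: false
node.data: true

### Client (coordinating) node
node.master: false
node.data: false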
