
Installing ELK (Elasticsearch + Logstash + Kibana) + Filebeat with Docker, based on CentOS 7.9

This article walks through installing ELK (Elasticsearch + Logstash + Kibana) plus Filebeat with Docker on CentOS 7.9. If anything here is wrong or incomplete, corrections and suggestions are welcome.

Contents

I. Install JDK

II. Deploy Elasticsearch

III. Deploy Kibana

IV. Deploy Logstash

V. Deploy Filebeat

VI. Filebeat collects, Logstash filters, Kibana displays

VII. Add an index in Kibana


Note: throughout this article, ip stands for the IP address of the server hosting the containers, and esip stands for the internal IP address of the Elasticsearch container.

I. Install JDK

1. Update the system

sudo yum update

2. Install Java

Install OpenJDK with the following command:

sudo yum install java-1.8.0-openjdk

3. Verify the installation

java -version
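
If the JDK is installed correctly, the output looks roughly like this (the exact build number will differ):

openjdk version "1.8.0_372"
OpenJDK Runtime Environment (build 1.8.0_372-b07)
OpenJDK 64-Bit Server VM (build 25.372-b07, mixed mode)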

II. Deploy Elasticsearch

1. Check whether Docker is installed

docker version
Client: Docker Engine - Community
 Version:           24.0.5
 API version:       1.43
 Go version:        go1.20.6
 Git commit:        ced0996
 Built:             Fri Jul 21 20:39:02 2023
 OS/Arch:           linux/amd64
 Context:           default

Server: Docker Engine - Community
 Engine:
  Version:          24.0.5
  API version:      1.43 (minimum version 1.12)
  Go version:       go1.20.6
  Git commit:       a61e2b4
  Built:            Fri Jul 21 20:38:05 2023
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.6.21
  GitCommit:        3dce8eb055cbb6872793272b4f20ed16117344f8
 runc:
  Version:          1.1.7
  GitCommit:        v1.1.7-0-g860f061
 docker-init:
  Version:          0.19.0
  GitCommit:        de40ad0

The newest Docker release may be incompatible with some systems; an earlier version can be installed instead, as sketched below.
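
If compatibility problems show up, a specific older release can be pinned from the docker-ce repository; a sketch, assuming the repository is already configured (the version string here is only an example):

yum list docker-ce --showduplicates | sort -r
sudo yum install docker-ce-20.10.9 docker-ce-cli-20.10.9 containerd.io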

2. Search for and pull the elasticsearch image

Search:

docker search elasticsearch
NAME                                     DESCRIPTION                                     STARS     OFFICIAL   AUTOMATED
elasticsearch                            Elasticsearch is a powerful open source sear…   6122      [OK]       
kibana                                   Kibana gives shape to any kind of data — str…   2626      [OK]       
bitnami/elasticsearch                    Bitnami Docker Image for Elasticsearch          67                   [OK]
bitnami/elasticsearch-exporter           Bitnami Elasticsearch Exporter Docker Image     7                    [OK]
rancher/elasticsearch-conf                                                               2                    
rapidfort/elasticsearch                  RapidFort optimized, hardened image for Elas…   10                   
bitnami/elasticsearch-curator-archived   A copy of the container images of the deprec…   0                    
rapidfort/elasticsearch-official         RapidFort optimized, hardened image for Elas…   0                    
bitnamicharts/elasticsearch                                                              0                    
onlyoffice/elasticsearch                                                                 1                    
rancher/elasticsearch                                                                    1                    
couchbase/elasticsearch-connector        Couchbase Elasticsearch Connector               0                    
rancher/elasticsearch-bootstrap                                                          1                    
dtagdevsec/elasticsearch                 T-Pot Elasticsearch                             4                    [OK]
corpusops/elasticsearch                  https://github.com/corpusops/docker-images/     0                    
vulhub/elasticsearch                                                                     0                    
uselagoon/elasticsearch-7                                                                0                    
securecodebox/elasticsearch                                                              0                    
eucm/elasticsearch                       Elasticsearch 1.7.5 Docker Image                1                    [OK]
ilios/elasticsearch                                                                      0                    
uselagoon/elasticsearch-6                                                                0                    
openup/elasticsearch-0.90                                                                0                    
litmuschaos/elasticsearch-stress                                                         0                    
drud/elasticsearch_exporter                                                              0                    
geekzone/elasticsearch-curator                                                           0                    

Pull:

docker pull elasticsearch:7.7.1
7.7.1: Pulling from library/elasticsearch
524b0c1e57f8: Pull complete 
4f79045bc94a: Pull complete 
4602c5830f92: Pull complete 
10ef2eb1c9b1: Pull complete 
47fca9194a1b: Pull complete 
c282e1371ecc: Pull complete 
302e1effd34b: Pull complete 
50acbec75309: Pull complete 
f89bc5c60b5f: Pull complete 
Digest: sha256:dff614393a31b93e8bbe9f8d1a77be041da37eac2a7a9567166dd5a2abab7c67
Status: Downloaded newer image for elasticsearch:7.7.1
docker.io/library/elasticsearch:7.7.1

3. List the installed Docker images

docker images
REPOSITORY                                                        TAG       IMAGE ID       CREATED       SIZE
elasticsearch                                                     7.7.1     830a894845e3   3 years ago   804MB
k8s.gcr.io/kube-proxy                                             v1.17.4   6dec7cfde1e5   3 years ago   116MB
registry.aliyuncs.com/google_containers/kube-proxy                v1.17.4   6dec7cfde1e5   3 years ago   116MB
registry.aliyuncs.com/google_containers/kube-apiserver            v1.17.4   2e1ba57fe95a   3 years ago   171MB
k8s.gcr.io/kube-apiserver                                         v1.17.4   2e1ba57fe95a   3 years ago   171MB
k8s.gcr.io/kube-controller-manager                                v1.17.4   7f997fcf3e94   3 years ago   161MB
registry.aliyuncs.com/google_containers/kube-controller-manager   v1.17.4   7f997fcf3e94   3 years ago   161MB
registry.aliyuncs.com/google_containers/kube-scheduler            v1.17.4   5db16c1c7aff   3 years ago   94.4MB
k8s.gcr.io/kube-scheduler                                         v1.17.4   5db16c1c7aff   3 years ago   94.4MB
k8s.gcr.io/coredns                                                1.6.5     70f311871ae1   3 years ago   41.6MB
k8s.gcr.io/etcd                                                   3.4.3-0   303ce5db0e90   3 years ago   288MB
registry.aliyuncs.com/google_containers/etcd                      3.4.3-0   303ce5db0e90   3 years ago   288MB
k8s.gcr.io/pause                                                  3.1       da86e6ba6ca1   5 years ago   742kB
registry.aliyuncs.com/google_containers/pause                     3.1       da86e6ba6ca1   5 years ago   742kB
kubeguide/hadoop                                                  latest    e0af06208032   6 years ago   830MB

4. Create the mount directories

[root@ceph-node4 ~]# mkdir -p /data/elk/es/{config,data,logs}

5. Set ownership

Inside the Docker image, Elasticsearch runs as UID 1000, so give that UID ownership of the mounted directories.

[root@ceph-node4 ~]# chown -R 1000:1000 /data/elk/es
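
Verify that the ownership took effect:

ls -ld /data/elk/es /data/elk/es/{config,data,logs}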

6. Create the mounted configuration file

cd /data/elk/es/config
touch elasticsearch.yml
vi elasticsearch.yml
#[elasticsearch.yml]
cluster.name: "my-es"
network.host: 0.0.0.0
http.port: 9200

7. Run Elasticsearch

Start a container from the image and map ports 9200 and 9300 to the host. Elasticsearch's HTTP API listens on 9200, so host port 9200 is mapped straight to container port 9200; 9300 is the transport port used for node-to-node communication.

docker run -it  -d -p 9200:9200 -p 9300:9300 --name es -e ES_JAVA_OPTS="-Xms1g -Xmx1g" -e "discovery.type=single-node" --restart=always -v /data/elk/es/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml -v /data/elk/es/data:/usr/share/elasticsearch/data -v /data/elk/es/logs:/usr/share/elasticsearch/logs elasticsearch:7.7.1
9e70d30eaa571c6a54572d5babb14e688220494ca039b292d0cb62a54a982ebb

8. Verify the installation

curl http://localhost:9200
{
  "name" : "9e70d30eaa57",
  "cluster_name" : "my-es",
  "cluster_uuid" : "nWsyXGd1RtGATFs4itJ4nQ",
  "version" : {
    "number" : "7.7.1",
    "build_flavor" : "default",
    "build_type" : "docker",
    "build_hash" : "ad56dce891c901a492bb1ee393f12dfff473a423",
    "build_date" : "2020-05-28T16:30:01.040088Z",
    "build_snapshot" : false,
    "lucene_version" : "8.5.1",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}
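
As a further check, the cluster health API should report the node (a fresh single-node cluster typically shows green here, turning yellow once indices with replicas exist):

curl http://localhost:9200/_cat/health?v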

III. Deploy Kibana

1. Pull the Kibana image

docker pull kibana:7.7.1
7.7.1: Pulling from library/kibana
524b0c1e57f8: Already exists 
103dc10f20b6: Pull complete 
e397e023efd5: Pull complete 
f0ee6620405c: Pull complete 
17e4e03944f0: Pull complete 
eff8f4cc3749: Pull complete 
fa92cc28ed7e: Pull complete 
afda7e77e6ed: Pull complete 
019e109bb7c5: Pull complete 
e82949888e47: Pull complete 
15f31b4d9a52: Pull complete 
Digest: sha256:ea0eab16b0330e6b3d9083e3c8fd6e82964fc9659989a75ecda782fbd160fdaa
Status: Downloaded newer image for kibana:7.7.1
docker.io/library/kibana:7.7.1

2. Confirm the image was pulled

docker images

3. Get the Elasticsearch container's IP (esip)

docker inspect --format '{{ .NetworkSettings.IPAddress }}' es
172.17.0.2

This esip is the address used for container-to-container communication, not an externally reachable IP.

Check the IP:

docker inspect es | grep IPAddress

Inspect the container in full, which shows its status and the detailed esip:

docker inspect es
"IPAddress": "172.20.0.2"

(The exact address depends on which Docker network the container is attached to, so it may differ from the value shown earlier.)

4. Create the configuration file

Create the directory, create the yml file, and make it writable.

sudo mkdir -p /data/elk/kibana
sudo touch /data/elk/kibana/kibana.yml
sudo chmod +w /data/elk/kibana/kibana.yml

Edit the configuration file:

vi /data/elk/kibana/kibana.yml
#[kibana.yml]
#Default Kibana configuration for docker target
server.name: kibana
server.host: "0"
elasticsearch.hosts: ["http://172.17.0.2:9200"]
xpack.monitoring.ui.container.elasticsearch.enabled: true

The elasticsearch.hosts value here is http://esip:9200, i.e. the container IP obtained above.
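
If you script the setup, the container IP can be filled in automatically instead of edited by hand; a sketch, assuming the container is named es as above and the file already contains an elasticsearch.hosts line:

ESIP=$(docker inspect --format '{{ .NetworkSettings.IPAddress }}' es)
sed -i "s|http://[0-9.]*:9200|http://${ESIP}:9200|" /data/elk/kibana/kibana.yml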

5. Run Kibana

docker run -d --restart=always --log-driver json-file --log-opt max-size=100m --log-opt max-file=2 --name kibana -p 5601:5601 -v /data/elk/kibana/kibana.yml:/usr/share/kibana/config/kibana.yml kibana:7.7.1
87b91be986938ad581fb79354bd41895eb874ce74b0688ed6e46396691e040a4

Check the status:

docker ps | grep kibana
docker ps

To stop and remove the Kibana container:

docker stop kibana
docker rm kibana

6. Access the UI

In a browser, open: http://ip:5601


If the UI cannot be reached:

1. Check the Kibana container's configuration file

Make sure the elasticsearch.hosts entry in the configuration points at the Elasticsearch container's address.

docker exec -it kibana /bin/bash
vi config/kibana.yml
#[kibana.yml]
#Default Kibana configuration for docker target
server.name: kibana
server.host: "0"
elasticsearch.hosts: ["http://172.17.0.2:9200"]
xpack.monitoring.ui.container.elasticsearch.enabled: true

Make sure this matches what was configured earlier, and pay special attention to the esip: it is assigned dynamically, so it may be different after every server restart. A name-based alternative is sketched below.
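
A more robust alternative is a user-defined Docker network, on which containers resolve each other by name, so the address never changes; a sketch using the container names from this article:

docker network create elk
docker network connect elk es
docker network connect elk kibana

With both containers attached, kibana.yml can use a stable hostname instead of the esip:

elasticsearch.hosts: ["http://es:9200"]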

2. Restart Kibana

docker restart kibana
kibana

3. Check the running containers

docker ps

Check the Kibana logs:

docker logs kibana

4. Visit http://ip:5601 again

If startup is slow, refresh the page a few times.

Switching the UI to Chinese:

Kibana's configuration file, kibana.yml, should be at /data/elk/kibana/kibana.yml.

To set i18n.locale to zh-CN, open /data/elk/kibana/kibana.yml and append the following line at the end:

i18n.locale: "zh-CN"

Then restart the Kibana container so the change takes effect:

docker restart kibana

Finally, reopen the web UI; it is now displayed in Chinese.

IV. Deploy Logstash

1. Pull the logstash image

docker pull logstash:7.7.1
7.7.1: Pulling from library/logstash
524b0c1e57f8: Already exists 
1a7635b4d6e8: Pull complete 
92c26c13a43f: Pull complete 
189edda23928: Pull complete 
4b71f12aa7b2: Pull complete 
8eae4815fe1e: Pull complete 
4c2df663cec5: Pull complete 
bc06e285e821: Pull complete 
2fadaff2f68a: Pull complete 
89a9ec66a044: Pull complete 
724600a30902: Pull complete 
Digest: sha256:cf2a17d96e76e5c7a04d85d0f2e408a0466481b39f441e9d6d0aad652e033026
Status: Downloaded newer image for logstash:7.7.1
docker.io/library/logstash:7.7.1

2. Create the logstash.yml configuration file. The directories it references must be created as well.

mkdir /data/elk/logstash/
touch /data/elk/logstash/logstash.yml
vi /data/elk/logstash/logstash.yml
#[logstash.yml]
http.host: "0.0.0.0"
xpack.monitoring.elasticsearch.hosts: [ "http://172.17.0.2:9200" ]
xpack.monitoring.elasticsearch.username: elastic
xpack.monitoring.elasticsearch.password: changeme
#path.config: /data/elk/logstash/conf.d/*.conf
path.config: /data/docker/logstash/conf.d/*.conf
path.logs: /var/log/logstash

Here too, the monitoring elasticsearch.hosts is the esip. Note that path.config is a path inside the container: the docker run command in step 6 mounts the host directory /data/elk/logstash/conf.d/ at /data/docker/logstash/conf.d/, which is why the active path.config line points there rather than at the host path.

3. Create the pipeline file syslog.conf. At this stage, Logstash receives local syslog data directly and sends it to Elasticsearch.

mkdir /data/elk/logstash/conf.d/
touch /data/elk/logstash/conf.d/syslog.conf
vi /data/elk/logstash/conf.d/syslog.conf
cat /data/elk/logstash/conf.d/syslog.conf


#[syslog.conf]
input {
  syslog {
    type => "system-syslog"
    port => 5044
  }
}
output {
  elasticsearch {
    hosts => ["ip:9200"] 
    index => "system-syslog-%{+YYYY.MM}" 
  }
}

Here, ip is the IP address of the server hosting the containers.

4. Add a forwarding rule to the local rsyslog configuration:

vi /etc/rsyslog.conf 
*.* @@ip:5044

Here again, ip is the address of the server hosting the containers. The @@ prefix forwards over TCP; a single @ would forward over UDP.

5. Restart the service after changing the configuration

systemctl restart rsyslog

6. Run Logstash

docker run -d --restart=always --log-driver json-file --log-opt max-size=100m --log-opt max-file=2 -p 5044:5044 --name logstash -v /data/elk/logstash/logstash.yml:/usr/share/logstash/config/logstash.yml -v /data/elk/logstash/conf.d/:/data/docker/logstash/conf.d/ logstash:7.7.1
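
Once the container is up, it is worth checking the Logstash logs for pipeline startup messages (the exact wording varies by version):

docker logs --tail 50 logstash

A quick end-to-end test is to emit a message through the local syslog and then count the documents in the index (the index name follows the pattern defined in syslog.conf above):

logger "elk pipeline test"
curl "http://localhost:9200/system-syslog-*/_count?pretty"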

7. Verify that Elasticsearch receives data from Logstash

curl http://localhost:9200/_cat/indices?v

health status index                    uuid                   pri rep docs.count docs.deleted store.size pri.store.size
green  open   .apm-custom-link         SUOJEG0hRlCrQ2cQUr1S7g   1   0          0            0       208b           208b
green  open   .kibana_task_manager_1   c7ZI_gS_T1GbFrlOMlB4bw   1   0          5            0     54.9kb         54.9kb
green  open   .apm-agent-configuration f684gzXURZK6Q13GPGZIhg   1   0          0            0       208b           208b
green  open   .kibana_1                xtNccoc-Ru2zSoXJe8AA1Q   1   0         36            2     55.8kb         55.8kb
yellow open   system-syslog-2023.07    AUPeJ5I8R6-iWkdeTTJuAw   1   1         29            0     60.9kb         60.9kb


Once system-syslog-* entries appear, Elasticsearch is receiving data from Logstash, and the same data shows up in Kibana.

V. Deploy Filebeat

1. Install Filebeat via yum on each machine to be monitored

wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-7.7.1-x86_64.rpm

yum install filebeat-7.7.1-x86_64.rpm

2. Configure Filebeat. At this stage, Filebeat sends data directly to Elasticsearch (the matching output snippet is shown right after the input section below).

vim /etc/filebeat/filebeat.yml
#=========================== Filebeat inputs =============================

filebeat.inputs:

- type: log

  enabled: true

  paths:
    - /var/log/ceph/*.log
    - /var/log/messages
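
The input section above only defines what Filebeat reads. For this direct-to-Elasticsearch test, an output must also be enabled; a minimal sketch (substitute the host server's ip, as elsewhere in this article):

output.elasticsearch:
  hosts: ["ip:9200"]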



Full filebeat.yml (final version, tailored for Ceph logs). Note that this final version ships to Logstash (output.logstash) rather than directly to Elasticsearch; it matches the setup completed in section VI. For the direct-to-ES test in step 2, enable output.elasticsearch and comment out output.logstash instead.

cat /etc/filebeat/filebeat.yml

###################### Filebeat Configuration Example #########################

# This file is an example configuration file highlighting only the most common
# options. The filebeat.reference.yml file from the same directory contains all the
# supported options with more comments. You can use it as a reference.
#
# You can find the full configuration reference here:
# https://www.elastic.co/guide/en/beats/filebeat/index.html

# For more available modules and options, please see the filebeat.reference.yml sample
# configuration file.

#=========================== Filebeat inputs =============================

filebeat.inputs:

# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.
# Below are the input specific configurations.

- type: log

  # Change to true to enable this input configuration.
  enabled: true

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /var/log/ceph/*.log
    #- c:\programdata\elasticsearch\logs\*
  fields:
    log_type: ceph
  # Exclude lines. A list of regular expressions to match. It drops the lines that are
  # matching any regular expression from the list.
  #exclude_lines: ['^DBG']

  # Include lines. A list of regular expressions to match. It exports the lines that are
  # matching any regular expression from the list.
  #include_lines: ['^ERR', '^WARN']

  # Exclude files. A list of regular expressions to match. Filebeat drops the files that
  # are matching any regular expression from the list. By default, no files are dropped.
  #exclude_files: ['.gz$']

  # Optional additional fields. These fields can be freely picked
  # to add additional information to the crawled log files for filtering
  #fields:
  #  level: debug
  #  review: 1

  ### Multiline options

  # Multiline can be used for log messages spanning multiple lines. This is common
  # for Java Stack Traces or C-Line Continuation

  # The regexp Pattern that has to be matched. The example pattern matches all lines starting with [
  #multiline.pattern: ^\[

  # Defines if the pattern set under pattern should be negated or not. Default is false.
  #multiline.negate: false

  # Match can be set to "after" or "before". It is used to define if lines should be append to a pattern
  # that was (not) matched before or after or as long as a pattern is not matched based on negate.
  # Note: After is the equivalent to previous and before is the equivalent to to next in Logstash
  #multiline.match: after


#============================= Filebeat modules ===============================

filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml

  # Set to true to enable config reloading
  reload.enabled: false

  # Period on which files under path should be checked for changes
  #reload.period: 10s

#==================== Elasticsearch template setting ==========================

setup.template.settings:
  index.number_of_shards: 1
  #index.codec: best_compression
  #_source.enabled: false

#================================ General =====================================

# The name of the shipper that publishes the network data. It can be used to group
# all the transactions sent by a single shipper in the web interface.
#name:

# The tags of the shipper are included in their own field with each
# transaction published.
#tags: ["service-X", "web-tier"]

# Optional fields that you can specify to add additional information to the
# output.
#fields:
#  env: staging


#============================== Dashboards =====================================
# These settings control loading the sample dashboards to the Kibana index. Loading
# the dashboards is disabled by default and can be enabled either by setting the
# options here or by using the `setup` command.
#setup.dashboards.enabled: false

# The URL from where to download the dashboards archive. By default this URL
# has a value which is computed based on the Beat name and version. For released
# versions, this URL points to the dashboard archive on the artifacts.elastic.co
# website.
#setup.dashboards.url:

#============================== Kibana =====================================

# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana:

  # Kibana Host
  # Scheme and port can be left out and will be set to the default (http and 5601)
  # In case you specify and additional path, the scheme is required: http://localhost:5601/path
  # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
  #host: "localhost:5601"

  # Kibana Space ID
  # ID of the Kibana Space into which the dashboards should be loaded. By default,
  # the Default Space will be used.
  #space.id:

#============================= Elastic Cloud ==================================

# These settings simplify using Filebeat with the Elastic Cloud (https://cloud.elastic.co/).

# The cloud.id setting overwrites the `output.elasticsearch.hosts` and
# `setup.kibana.host` options.
# You can find the `cloud.id` in the Elastic Cloud web UI.
#cloud.id:

# The cloud.auth setting overwrites the `output.elasticsearch.username` and
# `output.elasticsearch.password` settings. The format is `<user>:<pass>`.
#cloud.auth:

#================================ Outputs =====================================

# Configure what output to use when sending the data collected by the beat.

#-------------------------- Elasticsearch output ------------------------------
#output.elasticsearch:
  # Array of hosts to connect to.
  #hosts: ["ip:9200"]

  # Protocol - either `http` (default) or `https`.
  #protocol: "https"

  # Authentication credentials - either API key or username/password.
  #api_key: "id:api_key"
  #username: "elastic"
  #password: "changeme"

#----------------------------- Logstash output --------------------------------
output.logstash:
  # The Logstash hosts
  hosts: ["ip:5044"]

  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"

#================================ Processors =====================================

# Configure processors to enhance or manipulate events generated by the beat.

processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~

#================================ Logging =====================================

# Sets log level. The default log level is info.
# Available log levels are: error, warning, info, debug
#logging.level: debug

# At debug level, you can selectively enable logging only for some components.
# To enable all selectors use ["*"]. Examples of other selectors are "beat",
# "publish", "service".
#logging.selectors: ["*"]

#============================== X-Pack Monitoring ===============================
# filebeat can export internal metrics to a central Elasticsearch monitoring
# cluster.  This requires xpack monitoring to be enabled in Elasticsearch.  The
# reporting is disabled by default.

# Set to true to enable the monitoring reporter.
#monitoring.enabled: false

# Sets the UUID of the Elasticsearch cluster under which monitoring data for this
# Filebeat instance will appear in the Stack Monitoring UI. If output.elasticsearch
# is enabled, the UUID is derived from the Elasticsearch cluster referenced by output.elasticsearch.
#monitoring.cluster_uuid:

# Uncomment to send the metrics to Elasticsearch. Most settings from the
# Elasticsearch output are accepted here as well.
# Note that the settings should point to your Elasticsearch *monitoring* cluster.
# Any setting that is not set is automatically inherited from the Elasticsearch
# output configuration, so if you have the Elasticsearch output configured such
# that it is pointing to your Elasticsearch monitoring cluster, you can simply
# uncomment the following line.
#monitoring.elasticsearch:

#================================= Migration ==================================

# This allows to enable 6.7 migration aliases
#migration.6_to_7.enabled: true

3. Start the service

[root@ceph-node3 ~]# systemctl restart filebeat.service
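
Filebeat ships with self-checks that are useful after editing the configuration; these confirm the config parses and that the configured output is reachable:

filebeat test config
filebeat test output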

4. Query Elasticsearch for the incoming data

curl http://localhost:9200/_cat/indices?v

health status index                            uuid                   pri rep docs.count docs.deleted store.size pri.store.size
green  open   .apm-custom-link                 SUOJEG0hRlCrQ2cQUr1S7g   1   0          0            0       208b           208b
green  open   .kibana_task_manager_1           c7ZI_gS_T1GbFrlOMlB4bw   1   0          5            0     54.9kb         54.9kb
green  open   .apm-agent-configuration         f684gzXURZK6Q13GPGZIhg   1   0          0            0       208b           208b
yellow open   filebeat-7.7.1-2023.07.28-000001 38f_nqi_TdWXDRbXdTV0ng   1   1      75872            0     19.8mb         19.8mb
green  open   .kibana_1                        xtNccoc-Ru2zSoXJe8AA1Q   1   0         39            2     70.3kb         70.3kb
yellow open   system-syslog-2023.07            AUPeJ5I8R6-iWkdeTTJuAw   1   1         31            0    111.5kb        111.5kb

The filebeat-7.7.1-* index is now queryable, and the corresponding data also appears in Kibana.

VI. Filebeat collects, Logstash filters, Kibana displays

1. Delete the test data generated by Logstash earlier

curl -XDELETE http://localhost:9200/system-syslog-2023.07
{"acknowledged":true}

2. Edit filebeat.yml, then restart the service

vim /etc/filebeat/filebeat.yml
cat /etc/filebeat/filebeat.yml
#-------------------------- Elasticsearch output ------------------------------
#output.elasticsearch:
  # Array of hosts to connect to.
  #hosts: ["localhost:9200"]

  # Protocol - either `http` (default) or `https`.
  #protocol: "https"

  # Authentication credentials - either API key or username/password.
  #api_key: "id:api_key"
  #username: "elastic"
  #password: "changeme"

#----------------------------- Logstash output --------------------------------
output.logstash:
  # The Logstash hosts
  hosts: ["ip:5044"]

  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"

后重啟服務(wù)

systemctl restart filebeat.service

3. Update the Logstash pipeline configuration (see the restart note after the file below)

touch /data/elk/logstash/conf.d/logstash.conf
vi /data/elk/logstash/conf.d/logstash.conf
cat /data/elk/logstash/conf.d/logstash.conf

input {
  beats {
    port => 5044
  }
}
output {
  elasticsearch {
    hosts => ["172.17.0.2:9200"]
    index => "filebeat_g-%{+YYYY.MM.dd}"
  }
}
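
Logstash does not reload pipeline files automatically unless config.reload.automatic is enabled, so restart the container to pick up the new pipeline:

docker restart logstash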


4. Check whether Elasticsearch is receiving the data

curl http://localhost:9200/_cat/indices?v
health status index                            uuid                   pri rep docs.count docs.deleted store.size pri.store.size
green  open   .apm-custom-link                 SUOJEG0hRlCrQ2cQUr1S7g   1   0          0            0       208b           208b
green  open   .kibana_task_manager_1           c7ZI_gS_T1GbFrlOMlB4bw   1   0          5            0     54.9kb         54.9kb
green  open   .apm-agent-configuration         f684gzXURZK6Q13GPGZIhg   1   0          0            0       208b           208b
yellow open   filebeat-7.7.1-2023.07.28-000001 38f_nqi_TdWXDRbXdTV0ng   1   1      76257            0     19.9mb         19.9mb
green  open   .kibana_1                        xtNccoc-Ru2zSoXJe8AA1Q   1   0         39            2     70.3kb         70.3kb
yellow open   system-syslog-2023.07            -sFCBdQJTx62qc6omgKEiA   1   1         25            0      291kb          291kb

The filebeat_g-* data is arriving; all that remains is to add the matching index pattern in Kibana.

VII. Add the index in Kibana and observe the system

(Screenshots of the Kibana index-pattern setup are omitted.)
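
In Kibana 7.7 the flow is roughly: open Management → Index Patterns → Create index pattern, enter filebeat_g-* (or system-syslog-*), choose @timestamp as the time field, and the documents then show up under Discover (the menu naming shifts slightly across 7.x releases). The pattern can also be created through Kibana's saved-objects API; a sketch, assuming Kibana is reachable at http://ip:5601:

curl -X POST "http://ip:5601/api/saved_objects/index-pattern" \
  -H "kbn-xsrf: true" \
  -H "Content-Type: application/json" \
  -d '{"attributes":{"title":"filebeat_g-*","timeFieldName":"@timestamp"}}'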

That completes the walkthrough of installing ELK (Elasticsearch + Logstash + Kibana) + Filebeat with Docker on CentOS 7.9.
