
Setting Up an EFK (Elasticsearch + Filebeat + Kibana) Log Collection System [Windows]



Preface

EFK at a Glance
Elasticsearch is a real-time, distributed, scalable search engine that supports full-text and structured search. It is most often used to index and search large volumes of log data, but it works equally well for many other kinds of documents.

Filebeat is a capable data shipper. Install Beats on your servers alongside your containers, or deploy Beats as a function, and centralize the data in Elasticsearch. If you need more processing power, Beats can also ship the data to Logstash for transformation and parsing.

Kibana ships with a set of classic visualizations: bar charts, line charts, pie charts, sunbursts, and more. You can also design your own visualizations with the Vega grammar. All of them are backed by Elasticsearch's full aggregation capabilities.

Elasticsearch is usually deployed together with Kibana, a powerful data-visualization dashboard for Elasticsearch that lets you browse Elasticsearch log data through a web interface.

The difference between ELK and EFK:
ELK is a log analysis stack that many organizations run today; it makes it easy to collect the logs you care about and present them visually.

ELK is short for Elasticsearch, Logstash, and Kibana. All three are open source and are typically used together.

  1. Elasticsearch -> stores the data

A real-time, distributed search and analytics engine for full-text search, structured search, and analytics. It is built on top of the full-text search library Apache Lucene, is written in Java, and can store, search, and analyze large volumes of data in near real time.

  2. Logstash -> collects the data

A data-collection engine. It dynamically gathers data from a wide range of sources, filters, parses, enriches, and normalizes it, and then ships it to a destination of your choice.

  3. Kibana -> presents the data

A data analysis and visualization platform. It is usually paired with Elasticsearch to search and analyze the data there and present it as statistical charts.

EFK is a variant of the ELK stack: Filebeat is added to collect source logs more effectively, laying the groundwork for log analysis.

Pros and Cons
Advantages of Filebeat over Logstash:

Low footprint: no changes to the Elasticsearch or Kibana configuration are required.
High performance: its I/O and resource usage is far lower than Logstash's.
Logstash does have advantages of its own, most notably its log parsing and formatting capabilities. Filebeat essentially just reads lines out of log files; it can do some formatting when the logs already follow a fixed structure, but the results fall well short of what Logstash can do.


Below we build a simple EFK log collection system on the latest release at the time of writing, 8.7. The three components should run the same version, otherwise you will run into problems.

I. Download

Download version 8.7.0:
Elasticsearch: https://www.elastic.co/cn/downloads/elasticsearch
Filebeat: https://www.elastic.co/cn/downloads/beats/filebeat
Kibana: https://www.elastic.co/cn/downloads/kibana

II. Setup Steps

1. Install Elasticsearch

Unpack the Elasticsearch archive.
Edit elasticsearch.yml in the config directory and add the following two lines to allow cross-origin requests:

http.cors.enabled: true
http.cors.allow-origin: "*"

Set the bind address:

network.host: 192.168.100.22

Go to the bin directory and run elasticsearch (elasticsearch.bat) in a cmd window.

On first startup Elasticsearch generates a password for the built-in elastic user; copy it from the startup log.

Edit elasticsearch.yml again and set xpack.security.http.ssl enabled to false, so the HTTP endpoint is served over plain HTTP.
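In the security section that Elasticsearch 8.x auto-generates on first start, that means flipping one flag; roughly like this (a sketch, since the generated file differs per install; leave the other generated keys as they are):

xpack.security.http.ssl:
  enabled: false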

Open localhost:9200 (or http://192.168.100.22:9200 as configured above) and log in with the user elastic and the copied password.
If the cluster info is returned, the installation succeeded.

(Screenshot: Elasticsearch cluster info returned in the browser)
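The same check works from the command line; a minimal sketch assuming the address configured above, with the password as a placeholder:

curl -u elastic:<your-password> http://192.168.100.22:9200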

2. Install Kibana

Unpack the Kibana archive.
Edit kibana.yml under the config directory and change the following settings:

i18n.locale: "zh-CN"
server.host: "192.168.100.22"
server.port: 5601

Reset the password of the built-in kibana account in Elasticsearch.
Go to the Elasticsearch bin directory and run (the tool ships as elasticsearch-reset-password.bat on Windows):

elasticsearch-reset-password -u kibana

Copy the generated password and configure it in kibana.yml:

elasticsearch.username: "kibana"
elasticsearch.password: "MK=iUF0fuJYXx-QbC=TF"

Go to the Kibana bin directory and start kibana.bat.

Open http://192.168.100.22:5601 and log in as elastic, using the password copied during the Elasticsearch install.
(Screenshot: Kibana home page after logging in)

3. Install Filebeat

Unpack the Filebeat archive.
Edit filebeat.yml; the settings are explained in the inline comments below.

......

# ============================== Filebeat inputs ===============================

filebeat.inputs:

# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.
# Below are the input specific configurations.

# The log input collects log messages from files.
- type: log

  # Unique ID among all inputs, an ID is required.
  id: my-newframe-access-id

  # Change to true to enable this input configuration.
  enabled: true

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - D:\ideaProjects\newframe-by-maven\logs\dmo-uaa\*\log_access.log
    #- c:\programdata\elasticsearch\logs\*
  
  # Custom fields used to tag this log
  fields:
    type: newframe-log-access
  # Lines that do not start with a yyyy-MM-dd date are treated as
  # continuations of the previous event (e.g. stack trace lines)
  multiline.type: pattern
  multiline.pattern: '^\d{4}-\d{2}-\d{2}'
  multiline.negate: true
  multiline.match: after
  
  
- type: log

  # Unique ID among all inputs, an ID is required.
  id: my-newframe-error-id

  # Change to true to enable this input configuration.
  enabled: true

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - D:\ideaProjects\newframe-by-maven\logs\dmo-uaa\log_error.log
    #- c:\programdata\elasticsearch\logs\*
  
  # Custom fields used to tag this log
  fields:
    type: newframe-log-error
  multiline.type: pattern  
  multiline.pattern: '^\d{4}-\d{2}-\d{2}'
  multiline.negate: true
  multiline.match: after
  
- type: log

  # Unique ID among all inputs, an ID is required.
  id: my-newframe-info-id

  # Change to true to enable this input configuration.
  enabled: true

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - D:\ideaProjects\newframe-by-maven\logs\dmo-uaa\log_info.log
    #- c:\programdata\elasticsearch\logs\*
  
  # Custom fields used to tag this log
  fields:
    type: newframe-log-info
  multiline.type: pattern  
  multiline.pattern: '^\d{4}-\d{2}-\d{2}'
  multiline.negate: true
  multiline.match: after
  
- type: log

  # Unique ID among all inputs, an ID is required.
  id: my-newframe-warn-id

  # Change to true to enable this input configuration.
  enabled: true

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - D:\ideaProjects\newframe-by-maven\logs\dmo-uaa\log_warn.log
    #- c:\programdata\elasticsearch\logs\*
  
  # Custom fields used to tag this log
  fields:
    type: newframe-log-warn
  multiline.type: pattern  
  multiline.pattern: '^\d{4}-\d{2}-\d{2}'
  multiline.negate: true
  multiline.match: after

......
setup.ilm.enabled: false                 # must be false when creating multiple custom indices
setup.template.name: station_log         # name of the index template
setup.template.pattern: station_log-*    # template match pattern; the index name prefix must match it
setup.template.overwrite: true           # overwrite with the newly configured template
setup.template.enabled: false            # disable the default template settings

# ---------------------------- Elasticsearch Output ----------------------------
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["192.168.100.22:9200"]
  index: station_log-%{[fields.type]}-%{+yyyy.MM.dd}     # default index name, built from the fields.type value set above; arguably optional (events matching none of the conditions below fall back to it)
  indices:                                             # indices routes events to multiple indices
    - index: station_log-newframe-log-access-%{+yyyy.MM.dd}       # index for the access log; the station_log prefix must match setup.template.pattern
      when.equals:                                     # condition: used only when fields.type equals newframe-log-access
        fields.type: newframe-log-access
    - index: station_log-newframe-log-error-%{+yyyy.MM.dd}
      when.equals:
        fields.type: newframe-log-error
    - index: station_log-newframe-log-info-%{+yyyy.MM.dd}
      when.equals:
        fields.type: newframe-log-info
    - index: station_log-newframe-log-warn-%{+yyyy.MM.dd}
      when.equals:
        fields.type: newframe-log-warn

  # Protocol - either `http` (default) or `https`.
  #protocol: "https"

  # Authentication credentials - either API key or username/password.
  #api_key: "id:api_key"
  username: "elastic"
  password: "QbxGioLkbOy_jNkuvCkV"
......
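A note on the multiline settings used above: every log event in this project starts with a yyyy-MM-dd timestamp, so with negate: true and match: after, any line that does not match the date pattern is appended to the preceding event. As an illustration, a (hypothetical) stack trace like the one below is shipped as a single event together with its timestamp line, rather than as three separate ones:

2023-05-20 10:15:30 ERROR c.e.uaa.DemoController - request failed
java.lang.NullPointerException
	at com.example.uaa.DemoController.handle(DemoController.java:42)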

Start Filebeat (setup loads the initial index setup; -e logs to the console and -c points at the config file):

filebeat.exe setup
filebeat.exe -e -c filebeat.yml

啟動(dòng)后查看索引,多了幾個(gè)日志的索引
(Screenshot: newly created station_log-* indices)
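The indices can also be listed from the command line; a sketch using the _cat indices API, with the password again a placeholder:

curl -u elastic:<your-password> "http://192.168.100.22:9200/_cat/indices/station_log-*?v"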

4. View the Logs in Kibana

Open Kibana:
http://192.168.100.22:5601/
Go to Discover.
Create a data view whose index pattern matches the new indices (for example station_log-*) and browse the logs, as in the screenshot below.
(Screenshot: log documents in Kibana Discover)
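In Discover you can then filter on the custom field set in filebeat.yml; for example, a KQL query like the following (values as configured above) shows only the error log:

fields.type : "newframe-log-error"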

Appendix: the complete filebeat.yml

###################### Filebeat Configuration Example #########################

# This file is an example configuration file highlighting only the most common
# options. The filebeat.reference.yml file from the same directory contains all the
# supported options with more comments. You can use it as a reference.
#
# You can find the full configuration reference here:
# https://www.elastic.co/guide/en/beats/filebeat/index.html

# For more available modules and options, please see the filebeat.reference.yml sample
# configuration file.

# ============================== Filebeat inputs ===============================

filebeat.inputs:

# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.
# Below are the input specific configurations.

# The log input collects log messages from files.
- type: log

  # Unique ID among all inputs, an ID is required.
  id: my-newframe-access-id

  # Change to true to enable this input configuration.
  enabled: true

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - D:\ideaProjects\newframe-by-maven\logs\dmo-uaa\*\log_access.log
    #- c:\programdata\elasticsearch\logs\*
  
  # Custom fields used to tag this log
  fields:
    type: newframe-log-access
  # Lines that do not start with a yyyy-MM-dd date are treated as
  # continuations of the previous event (e.g. stack trace lines)
  multiline.type: pattern
  multiline.pattern: '^\d{4}-\d{2}-\d{2}'
  multiline.negate: true
  multiline.match: after
  
  
- type: log

  # Unique ID among all inputs, an ID is required.
  id: my-newframe-error-id

  # Change to true to enable this input configuration.
  enabled: true

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - D:\ideaProjects\newframe-by-maven\logs\dmo-uaa\log_error.log
    #- c:\programdata\elasticsearch\logs\*
  
  # Custom fields used to tag this log
  fields:
    type: newframe-log-error
  multiline.type: pattern  
  multiline.pattern: '^\d{4}-\d{2}-\d{2}'
  multiline.negate: true
  multiline.match: after
  
- type: log

  # Unique ID among all inputs, an ID is required.
  id: my-newframe-info-id

  # Change to true to enable this input configuration.
  enabled: true

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - D:\ideaProjects\newframe-by-maven\logs\dmo-uaa\log_info.log
    #- c:\programdata\elasticsearch\logs\*
  
  # Custom fields used to tag this log
  fields:
    type: newframe-log-info
  multiline.type: pattern  
  multiline.pattern: '^\d{4}-\d{2}-\d{2}'
  multiline.negate: true
  multiline.match: after
  
- type: log

  # Unique ID among all inputs, an ID is required.
  id: my-newframe-warn-id

  # Change to true to enable this input configuration.
  enabled: true

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - D:\ideaProjects\newframe-by-maven\logs\dmo-uaa\log_warn.log
    #- c:\programdata\elasticsearch\logs\*
  
  # Custom fields used to tag this log
  fields:
    type: newframe-log-warn
  multiline.type: pattern  
  multiline.pattern: '^\d{4}-\d{2}-\d{2}'
  multiline.negate: true
  multiline.match: after

  #multiline.pattern: '^[[:space:]]+(at|\.{3})[[:space:]]+\b|^Caused by:'
  #multiline.negate: false
  #multiline.match: after

  # Exclude lines. A list of regular expressions to match. It drops the lines that are
  # matching any regular expression from the list.
  # Line filtering happens after the parsers pipeline. If you would like to filter lines
  # before parsers, use include_message parser.
  #exclude_lines: ['^DBG']

  # Include lines. A list of regular expressions to match. It exports the lines that are
  # matching any regular expression from the list.
  # Line filtering happens after the parsers pipeline. If you would like to filter lines
  # before parsers, use include_message parser.
  #include_lines: ['^ERR', '^WARN']

  # Exclude files. A list of regular expressions to match. Filebeat drops the files that
  # are matching any regular expression from the list. By default, no files are dropped.
  #prospector.scanner.exclude_files: ['.gz$']

  # Optional additional fields. These fields can be freely picked
  # to add additional information to the crawled log files for filtering
  #fields:
  #  level: debug
  #  review: 1

# ============================== Filebeat modules ==============================

filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml

  # Set to true to enable config reloading
  reload.enabled: false

  # Period on which files under path should be checked for changes
  #reload.period: 10s

# ======================= Elasticsearch template setting =======================

setup.template.settings:
  index.number_of_shards: 1
  #index.codec: best_compression
  #_source.enabled: false


# ================================== General ===================================

# The name of the shipper that publishes the network data. It can be used to group
# all the transactions sent by a single shipper in the web interface.
#name:

# The tags of the shipper are included in their own field with each
# transaction published.
#tags: ["service-X", "web-tier"]

# Optional fields that you can specify to add additional information to the
# output.
#fields:
#  env: staging

# ================================= Dashboards =================================
# These settings control loading the sample dashboards to the Kibana index. Loading
# the dashboards is disabled by default and can be enabled either by setting the
# options here or by using the `setup` command.
#setup.dashboards.enabled: false

# The URL from where to download the dashboards archive. By default this URL
# has a value which is computed based on the Beat name and version. For released
# versions, this URL points to the dashboard archive on the artifacts.elastic.co
# website.
#setup.dashboards.url:

# =================================== Kibana ===================================

# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana:

  # Kibana Host
  # Scheme and port can be left out and will be set to the default (http and 5601)
  # In case you specify and additional path, the scheme is required: http://localhost:5601/path
  # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
  #host: "localhost:5601"

  # Kibana Space ID
  # ID of the Kibana Space into which the dashboards should be loaded. By default,
  # the Default Space will be used.
  #space.id:

# =============================== Elastic Cloud ================================

# These settings simplify using Filebeat with the Elastic Cloud (https://cloud.elastic.co/).

# The cloud.id setting overwrites the `output.elasticsearch.hosts` and
# `setup.kibana.host` options.
# You can find the `cloud.id` in the Elastic Cloud web UI.
#cloud.id:

# The cloud.auth setting overwrites the `output.elasticsearch.username` and
# `output.elasticsearch.password` settings. The format is `<user>:<pass>`.
#cloud.auth:

# ================================== Outputs ===================================

# Configure what output to use when sending the data collected by the beat.


setup.ilm.enabled: false                  # must be false when creating multiple custom indices
setup.template.name: station_log          # name of the index template
setup.template.pattern: station_log-*     # template match pattern; the index name prefix must match it
setup.template.overwrite: true            # overwrite with the newly configured template
setup.template.enabled: false             # disable the default template settings

# ---------------------------- Elasticsearch Output ----------------------------
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["192.168.100.22:9200"]
  index: station_log-%{[fields.type]}-%{+yyyy.MM.dd}     # default index name, built from the fields.type value set above; arguably optional (events matching none of the conditions below fall back to it)
  indices:                                             # indices routes events to multiple indices
    - index: station_log-newframe-log-access-%{+yyyy.MM.dd}       # index for the access log; the station_log prefix must match setup.template.pattern
      when.equals:                                     # condition: used only when fields.type equals newframe-log-access
        fields.type: newframe-log-access
    - index: station_log-newframe-log-error-%{+yyyy.MM.dd}
      when.equals:
        fields.type: newframe-log-error
    - index: station_log-newframe-log-info-%{+yyyy.MM.dd}
      when.equals:
        fields.type: newframe-log-info
    - index: station_log-newframe-log-warn-%{+yyyy.MM.dd}
      when.equals:
        fields.type: newframe-log-warn

  # Protocol - either `http` (default) or `https`.
  #protocol: "https"

  # Authentication credentials - either API key or username/password.
  #api_key: "id:api_key"
  username: "elastic"
  password: "QbxGioLkbOy_jNkuvCkV"

# ------------------------------ Logstash Output -------------------------------
#output.logstash:
  # The Logstash hosts
  #hosts: ["localhost:5044"]

  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"

# ================================= Processors =================================
processors:
  - add_host_metadata:
      when.not.contains.tags: forwarded
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~

# ================================== Logging ===================================

# Sets log level. The default log level is info.
# Available log levels are: error, warning, info, debug
#logging.level: debug

# At debug level, you can selectively enable logging only for some components.
# To enable all selectors use ["*"]. Examples of other selectors are "beat",
# "publisher", "service".
#logging.selectors: ["*"]

# ============================= X-Pack Monitoring ==============================
# Filebeat can export internal metrics to a central Elasticsearch monitoring
# cluster.  This requires xpack monitoring to be enabled in Elasticsearch.  The
# reporting is disabled by default.

# Set to true to enable the monitoring reporter.
#monitoring.enabled: false

# Sets the UUID of the Elasticsearch cluster under which monitoring data for this
# Filebeat instance will appear in the Stack Monitoring UI. If output.elasticsearch
# is enabled, the UUID is derived from the Elasticsearch cluster referenced by output.elasticsearch.
#monitoring.cluster_uuid:

# Uncomment to send the metrics to Elasticsearch. Most settings from the
# Elasticsearch output are accepted here as well.
# Note that the settings should point to your Elasticsearch *monitoring* cluster.
# Any setting that is not set is automatically inherited from the Elasticsearch
# output configuration, so if you have the Elasticsearch output configured such
# that it is pointing to your Elasticsearch monitoring cluster, you can simply
# uncomment the following line.
#monitoring.elasticsearch:

# ============================== Instrumentation ===============================

# Instrumentation support for the filebeat.
#instrumentation:
    # Set to true to enable instrumentation of filebeat.
    #enabled: false

    # Environment in which filebeat is running on (eg: staging, production, etc.)
    #environment: ""

    # APM Server hosts to report instrumentation results to.
    #hosts:
    #  - http://localhost:8200

    # API Key for the APM Server(s).
    # If api_key is set then secret_token will be ignored.
    #api_key:

    # Secret token for the APM Server(s).
    #secret_token:


# ================================= Migration ==================================

# This allows to enable 6.7 migration aliases
#migration.6_to_7.enabled: true

