
Installing and Deploying Elasticsearch, Logstash, and Kibana (ELK Stack)



Preface

In today's digital era, the rapid growth of information confronts organizations and enterprises with the challenge of processing and analyzing massive volumes of data. Against this backdrop, the ELK Stack (Elasticsearch, Logstash, and Kibana), a powerful combination of open-source tools, has become a go-to solution for data management, search, and visualization. Whether for log monitoring, real-time data analysis, or building dashboards to track business metrics, the ELK Stack offers a one-stop solution.

Each component of the ELK Stack plays a key role:

  • Elasticsearch: As a distributed search and analytics engine, Elasticsearch can efficiently store, search, and analyze massive amounts of data. Its powerful full-text search capability and distributed architecture make it possible to quickly locate the information you need within huge datasets.
  • Logstash: A data-processing engine for collecting, transforming, and shipping data. It can ingest data from a wide range of sources, process it, and send it on to Elasticsearch or other destinations. Whether the input is logs, events, or metrics, Logstash can normalize the data and deliver it accurately to the right place.
  • Kibana: As the visualization layer of the ELK Stack, Kibana provides an intuitive, user-friendly interface for exploring, analyzing, and presenting data through dashboards, charts, and visualizations. Even users without deep data-analysis expertise can extract valuable insights from the data.

In this document, we will walk through how to install, configure, and use the ELK Stack.

The system environment is as follows:

  • OS: Ubuntu 20.04 LTS
  • Hardware: 8 cores, 12 GB RAM, 500 GB disk

Install Java

sudo apt-get update
# Install the JDK matching your distribution; verify the installed version with: java --version
sudo apt install openjdk-16-jre-headless

Add the Elastic repository

wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -
sudo sh -c 'echo "deb https://artifacts.elastic.co/packages/8.x/apt stable main" > /etc/apt/sources.list.d/elastic-8.x.list'

Update the package index

apt-get update

Install Elasticsearch

apt-get install elasticsearch

After installation, reload systemd, then enable the service at boot and start it:

sudo systemctl daemon-reload
systemctl enable elasticsearch.service && systemctl start elasticsearch.service
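
Optionally, confirm the service came up before continuing (a quick check, assuming systemd):

systemctl status elasticsearch.service --no-pager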

Generate a password for logging in to Elasticsearch. The username is elastic; a random password will be printed to the screen.

cd /usr/share/elasticsearch && bin/elasticsearch-reset-password -u elastic
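
As a quick smoke test, you can query the secured HTTP endpoint with the new password. The CA path below is the one the Debian package of Elasticsearch 8 normally auto-generates; adjust it if yours differs:

curl --cacert /etc/elasticsearch/certs/http_ca.crt -u elastic https://127.0.0.1:9200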

Back up the original elasticsearch.yml before making changes, so you can restore it later if needed.

cp /etc/elasticsearch/elasticsearch.yml /etc/elasticsearch/elasticsearch.yml.bak

Generate an enrollment token; it is required for verification the first time you log in to Kibana.

cd /usr/share/elasticsearch && bin/elasticsearch-create-enrollment-token --scope kibana

Install Kibana

apt install kibana
systemctl enable kibana.service && systemctl start kibana.service

Generate the verification code required during enrollment. When you first open Kibana in a browser (port 5601 by default) and paste the enrollment token, you will be prompted for this code:

cd /usr/share/kibana/ && bin/kibana-verification-code

Note: the "L" in ELK refers to Logstash; this guide, however, installs Filebeat as the collection tool.

Logstash and Filebeat are both tools for collecting and shipping data, but they differ in functionality and usage. The main differences are as follows:

Logstash:

Logstash is a powerful engine for collecting, transforming, and shipping data. Its main job is to gather data from different sources (logs, events, metrics, and so on), filter, parse, and transform it, and then send the processed data to a specified destination such as Elasticsearch, another storage system, or an analysis tool. Logstash's key characteristics include:

  1. Data processing: Logstash offers a rich set of plugins that can parse, filter, and normalize data, ensuring it is properly prepared before being shipped.
  2. Diverse data sources: Logstash can ingest data from many sources, including log files, network traffic, and message queues, which makes it useful across a wide range of data types and formats.
  3. Data delivery: Logstash can send processed data to many destinations, such as Elasticsearch, files, or message queues, to satisfy different storage and analysis needs.
  4. Flexibility: Logstash configuration is highly flexible; you define each stage of the data flow in a configuration file, enabling fully customized processing pipelines (a sketch follows this list).
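
To make the pipeline idea concrete, here is a minimal, illustrative Logstash pipeline. The file path, grok pattern, credentials, and CA path are placeholders for this sketch, not values from this deployment. Saved as e.g. /etc/logstash/conf.d/beats.conf, it would receive events from Beats, parse a syslog-style line into fields, and index the result into Elasticsearch:

input {
  beats {
    port => 5044                                    # accept events from Filebeat/Beats
  }
}

filter {
  grok {
    # parse a syslog-style line into structured fields
    match => { "message" => "%{SYSLOGTIMESTAMP:timestamp} %{SYSLOGHOST:hostname} %{GREEDYDATA:msg}" }
  }
}

output {
  elasticsearch {
    hosts    => ["https://127.0.0.1:9200"]
    user     => "elastic"
    password => "<your-elastic-password>"           # placeholder
    cacert   => "/etc/logstash/certs/http_ca.crt"   # placeholder path
  }
}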

Filebeat:

Filebeat is a lightweight log shipper designed to collect log data from the file system and forward it to a central storage or analysis system. Its key characteristics include:

  1. Lightweight: Filebeat is designed to be lightweight and frugal with resources, making it suitable for resource-constrained environments.
  2. Real-time: Filebeat monitors log files for changes and ships newly appended content as soon as it appears.
  3. Simplified processing: Filebeat's main job is to collect and forward log data; its processing capabilities are limited, and it cannot perform the complex parsing and transformation that Logstash can.
  4. Easy to deploy: Thanks to its small footprint, Filebeat suits distributed deployment and scales out easily.

In short, Logstash suits scenarios that require complex processing and transformation of data, while Filebeat suits lightweight, real-time log shipping. In practice, choose Logstash, Filebeat, or a combination of the two according to your needs to build an appropriate collection and transport pipeline.

Install the Filebeat collector

curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-8.9.0-amd64.deb
dpkg -i filebeat-8.9.0-amd64.deb
systemctl start filebeat && systemctl enable filebeat
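
With the packages installed, point Filebeat's output at the secured Elasticsearch instance. Below is a minimal sketch of the relevant parts of /etc/filebeat/filebeat.yml; the password is a placeholder, and the CA path assumes the certificate auto-generated by the Elasticsearch 8 Debian package:

filebeat.inputs:
  - type: filestream
    id: syslog
    paths:
      - /var/log/syslog

output.elasticsearch:
  hosts: ["https://127.0.0.1:9200"]
  username: "elastic"
  password: "<your-elastic-password>"                                  # placeholder
  ssl:
    certificate_authorities: ["/etc/elasticsearch/certs/http_ca.crt"]  # placeholder path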

Once everything is installed, check that each service's status is healthy (for example with systemctl status), then move on to configuration.

Elasticsearch configuration

vi /etc/elasticsearch/elasticsearch.yml

The key settings here are the listen address and port:

network.host: 127.0.0.1
http.port: 9200

The full configuration is shown below for reference.

# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
#       Before you set out to tweak and tune the configuration, make sure you
#       understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://www.elastic.co/guide/en/elasticsearch/reference/index.html
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
#cluster.name: my-application
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
#node.name: node-1
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
path.data: /var/lib/elasticsearch
#
# Path to log files:
#
path.logs: /var/log/elasticsearch
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# By default Elasticsearch is only accessible on localhost. Set a different
# address here to expose this node on the network:
#
#network.host: 192.168.0.1
#
# By default Elasticsearch listens for HTTP traffic on the first free port it
# finds starting at 9200. Set a specific HTTP port here:
#
#http.port: 9200
network.host: 127.0.0.1
http.port: 9200
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when this node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
#discovery.seed_hosts: ["host1", "host2"]
#
# Bootstrap the cluster using an initial set of master-eligible nodes:
#
#cluster.initial_master_nodes: ["node-1", "node-2"]
#
# For more information, consult the discovery and cluster formation module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Allow wildcard deletion of indices:
#
#action.destructive_requires_name: false

#----------------------- BEGIN SECURITY AUTO CONFIGURATION -----------------------
#
# The following settings, TLS certificates, and keys have been automatically      
# generated to configure Elasticsearch security features on 09-08-2023 02:38:11
#
# --------------------------------------------------------------------------------

# Enable security features
xpack.security.enabled: true

xpack.security.enrollment.enabled: true

# Enable encryption for HTTP API client connections, such as Kibana, Logstash, and Agents
xpack.security.http.ssl:
  enabled: true
  keystore.path: certs/http.p12

# Enable encryption and mutual authentication between cluster nodes
xpack.security.transport.ssl:
  enabled: true
  verification_mode: certificate
  keystore.path: certs/transport.p12
  truststore.path: certs/transport.p12
# Create a new cluster with the current node only
# Additional nodes can still join the cluster later
cluster.initial_master_nodes: ["ubuntu"]

# Allow HTTP API connections from anywhere
# Connections are encrypted and require user authentication
http.host: 0.0.0.0
#logger.org.elasticsearch: "ERROR"

# Allow other nodes to join the cluster from anywhere
# Connections are encrypted and mutually authenticated
#transport.host: 0.0.0.0

#----------------------- END SECURITY AUTO CONFIGURATION -------------------------
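
After editing elasticsearch.yml, restart the service so the changes take effect:

systemctl restart elasticsearch.service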

Kibana configuration

Edit /etc/kibana/kibana.yml; the full configuration is shown below for reference.

# For more configuration options see the configuration guide for Kibana in
# https://www.elastic.co/guide/index.html

# =================== System: Kibana Server ===================
# Kibana is served by a back end server. This setting specifies the port to use.
#server.port: 5601

# Specifies the address to which the Kibana server will bind. IP addresses and host names are both valid values.
# The default is 'localhost', which usually means remote machines will not be able to connect.
# To allow connections from remote users, set this parameter to a non-loopback address.
server.host: "123.58.97.169"

# Enables you to specify a path to mount Kibana at if you are running behind a proxy.
# Use the `server.rewriteBasePath` setting to tell Kibana if it should remove the basePath
# from requests it receives, and to prevent a deprecation warning at startup.
# This setting cannot end in a slash.
#server.basePath: ""

# Specifies whether Kibana should rewrite requests that are prefixed with
# `server.basePath` or require that they are rewritten by your reverse proxy.
# Defaults to `false`.
#server.rewriteBasePath: false

# Specifies the public URL at which Kibana is available for end users. If
# `server.basePath` is configured this URL should end with the same basePath.
#server.publicBaseUrl: ""

# The maximum payload size in bytes for incoming server requests.
#server.maxPayload: 1048576

# The Kibana server's name. This is used for display purposes.
#server.name: "your-hostname"

# =================== System: Kibana Server (Optional) ===================
# Enables SSL and paths to the PEM-format SSL certificate and SSL key files, respectively.
# These settings enable SSL for outgoing requests from the Kibana server to the browser.
#server.ssl.enabled: false
#server.ssl.certificate: /path/to/your/server.crt
#server.ssl.key: /path/to/your/server.key

# =================== System: Elasticsearch ===================
# The URLs of the Elasticsearch instances to use for all your queries.
#elasticsearch.hosts: ["http://localhost:9200"]

# If your Elasticsearch is protected with basic authentication, these settings provide
# the username and password that the Kibana server uses to perform maintenance on the Kibana
# index at startup. Your Kibana users still need to authenticate with Elasticsearch, which
# is proxied through the Kibana server.
#elasticsearch.username: "kibana_system"
#elasticsearch.password: "pass"

# Kibana can also authenticate to Elasticsearch via "service account tokens".
# Service account tokens are Bearer style tokens that replace the traditional username/password based configuration.
# Use this token instead of a username/password.
# elasticsearch.serviceAccountToken: "my_token"

# Time in milliseconds to wait for Elasticsearch to respond to pings. Defaults to the value of
# the elasticsearch.requestTimeout setting.
#elasticsearch.pingTimeout: 1500

# Time in milliseconds to wait for responses from the back end or Elasticsearch. This value
# must be a positive integer.
#elasticsearch.requestTimeout: 30000

# The maximum number of sockets that can be used for communications with elasticsearch.
# Defaults to `Infinity`.
#elasticsearch.maxSockets: 1024

# Specifies whether Kibana should use compression for communications with elasticsearch
# Defaults to `false`.
#elasticsearch.compression: false

# List of Kibana client-side headers to send to Elasticsearch. To send *no* client-side
# headers, set this value to [] (an empty list).
#elasticsearch.requestHeadersWhitelist: [ authorization ]

# Header names and values that are sent to Elasticsearch. Any custom headers cannot be overwritten
# by client-side headers, regardless of the elasticsearch.requestHeadersWhitelist configuration.
#elasticsearch.customHeaders: {}

# Time in milliseconds for Elasticsearch to wait for responses from shards. Set to 0 to disable.
#elasticsearch.shardTimeout: 30000

# =================== System: Elasticsearch (Optional) ===================
# These files are used to verify the identity of Kibana to Elasticsearch and are required when
# xpack.security.http.ssl.client_authentication in Elasticsearch is set to required.
#elasticsearch.ssl.certificate: /path/to/your/client.crt
#elasticsearch.ssl.key: /path/to/your/client.key

# Enables you to specify a path to the PEM file for the certificate
# authority for your Elasticsearch instance.
#elasticsearch.ssl.certificateAuthorities: [ "/path/to/your/CA.pem" ]

# To disregard the validity of SSL certificates, change this setting's value to 'none'.
#elasticsearch.ssl.verificationMode: full

# =================== System: Logging ===================
# Set the value of this setting to off to suppress all logging output, or to debug to log everything. Defaults to 'info'
#logging.root.level: debug

# Enables you to specify a file where Kibana stores log output.
logging:
  appenders:
    file:
      type: file
      fileName: /var/log/kibana/kibana.log
      layout:
        type: json
  root:
    appenders:
      - default
      - file
#  layout:
#    type: json

# Logs queries sent to Elasticsearch.
#logging.loggers:
#  - name: elasticsearch.query
#    level: debug

# Logs http responses.
#logging.loggers:
#  - name: http.server.response
#    level: debug

# Logs system usage information.
#logging.loggers:
#  - name: metrics.ops
#    level: debug

# =================== System: Other ===================
# The path where Kibana stores persistent data not saved in Elasticsearch. Defaults to data
#path.data: data

# Specifies the path where Kibana creates the process ID file.
pid.file: /run/kibana/kibana.pid

# Set the interval in milliseconds to sample system and process performance
# metrics. Minimum is 100ms. Defaults to 5000ms.
#ops.interval: 5000

# Specifies locale to be used for all localizable strings, dates and number formats.
# Supported languages are the following: English (default) "en", Chinese "zh-CN", Japanese "ja-JP", French "fr-FR".
#i18n.locale: "en"

# =================== Frequently used (Optional)===================

# =================== Saved Objects: Migrations ===================
# Saved object migrations run at startup. If you run into migration-related issues, you might need to adjust these settings.

# The number of documents migrated at a time.
# If Kibana can't start up or upgrade due to an Elasticsearch `circuit_breaking_exception`,
# use a smaller batchSize value to reduce the memory pressure. Defaults to 1000 objects per batch.
#migrations.batchSize: 1000

# The maximum payload size for indexing batches of upgraded saved objects.
# To avoid migrations failing due to a 413 Request Entity Too Large response from Elasticsearch.
# This value should be lower than or equal to your Elasticsearch cluster’s `http.max_content_length`
# configuration option. Default: 100mb
#migrations.maxBatchSizeBytes: 100mb

# The number of times to retry temporary migration failures. Increase the setting
# if migrations fail frequently with a message such as `Unable to complete the [...] step after
# 15 attempts, terminating`. Defaults to 15
#migrations.retryAttempts: 15

# =================== Search Autocomplete ===================
# Time in milliseconds to wait for autocomplete suggestions from Elasticsearch.
# This value must be a whole number greater than zero. Defaults to 1000ms
#unifiedSearch.autocomplete.valueSuggestions.timeout: 1000

# Maximum number of documents loaded by each shard to generate autocomplete suggestions.
# This value must be a whole number greater than zero. Defaults to 100_000
#unifiedSearch.autocomplete.valueSuggestions.terminateAfter: 100000
i18n.locale: "zh-CN"

# This section was automatically generated during setup.
elasticsearch.hosts: ['https://123.58.97.169:9200']
elasticsearch.serviceAccountToken: AAEAAWVsYXN0aWMva2liYW5hL2Vucm9sbC1wcm9jZXNzLXRva2VuLTE2OTE1NDk3NTYyNDE6NE55LU1IdVFRRTY0UkVpUloyZDhQdw
elasticsearch.ssl.certificateAuthorities: [/var/lib/kibana/ca_1691549757740.crt]
xpack.fleet.outputs: [{id: fleet-default-output, name: default, is_default: true, is_default_monitoring: true, type: elasticsearch, hosts: ['https://123.58.97.169:9200'], ca_trusted_fingerprint: 27991095e8dddf17d06a00968bd1b693fc906ea2d52d9f5563134505625791f1}]
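
After editing kibana.yml, restart Kibana so the changes take effect:

systemctl restart kibana.service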

Common problems

1. Why don't my dashboard panels display after I add them?

A: Once you have confirmed the index configuration is correct, don't forget to run "sudo filebeat setup" to initialize the dashboards; running this initialization resolves the issue.
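
As a sketch, the two common forms of the command (both standard Filebeat CLI invocations):

sudo filebeat setup --dashboards   # load only the prebuilt Kibana dashboards
sudo filebeat setup                # also load index templates and ingest pipelines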

2. After installing Filebeat and enabling and configuring the system module, clicking "Check data" under the module status shows "Not connected".

A: This happens when the filesets in Filebeat's modules.d/system.yml are not configured correctly, i.e. the log file paths cannot be found. After correcting the configuration, run systemctl status filebeat to check the service state and look for error logs.
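
A minimal sketch of modules.d/system.yml, assuming Ubuntu's default log locations (adjust var.paths to match your system):

- module: system
  syslog:
    enabled: true
    var.paths: ["/var/log/syslog*"]
  auth:
    enabled: true
    var.paths: ["/var/log/auth.log*"]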

3. Why can't I delete an index in Index Management?

A: To delete an index you must first stop the data source. With Filebeat, for example, run systemctl stop filebeat first, then open Data Streams under Index Management and delete the data stream; this removes the indices behind it.
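
Equivalently, the data stream can be deleted through the Elasticsearch API. The data-stream name below is an assumption for a default Filebeat 8.9.0 setup; adjust it to match yours:

curl --cacert /etc/elasticsearch/certs/http_ca.crt -u elastic \
  -X DELETE "https://127.0.0.1:9200/_data_stream/filebeat-8.9.0"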
