
Deploying Elasticsearch 7.10 + Kibana + Cerebro on Kubernetes with the ECK Operator


1 Deploying Elasticsearch

1.1 Deploying ECK

Elastic Cloud on Kubernetes (ECK) is an orchestration product built on the Kubernetes Operator pattern that lets users provision, manage, and run Elasticsearch clusters on Kubernetes. ECK's vision is to provide a SaaS-like experience for Elastic products and solutions on Kubernetes.

# Official documentation
https://www.elastic.co/guide/en/cloud-on-k8s/1.9/k8s-deploy-eck.html

Kubernetes version:

root@sz-k8s-master-01:~# kubectl version
Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.1", GitCommit:"b7394102d6ef778017f2ca4046abbaa23b88c290", GitTreeState:"clean", BuildDate:"2019-04-08T17:11:31Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"14+", GitVersion:"v1.14.1-jobgc-dirty", GitCommit:"4c19bc5525dc468017cc2cf14585537ed24e7d4c", GitTreeState:"dirty", BuildDate:"2020-02-21T04:47:43Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"}
root@sz-k8s-master-01:~#

Deploy ECK

1) If you are running a Kubernetes version before 1.16, you have to use the legacy version of the manifests:

kubectl create -f https://download.elastic.co/downloads/eck/1.9.1/crds-legacy.yaml -n logs
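
To confirm the CRDs were registered before moving on, you can list them by their API group (a quick check, assuming the ECK 1.9.x group names):

# All Elastic CRDs live under *.k8s.elastic.co
kubectl get crd | grep k8s.elastic.co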

2) Download the YAML file and change its namespace to logs (if you do not want to change it, skip this step and apply the file directly):

wget https://download.elastic.co/downloads/eck/1.9.1/operator-legacy.yaml
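
A minimal sketch of that namespace change, assuming you want every metadata.namespace reference rewritten to logs (review the result before applying; note that the Namespace object's own name: elastic-system field is not matched by this pattern, which is why the file shown below still contains it):

# Back up the original, then rewrite the namespace references
cp operator-legacy.yaml operator-legacy.yaml.bak
sed -i 's/namespace: elastic-system/namespace: logs/g' operator-legacy.yaml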

3) Change the image address

# https://hub.docker.com/r/elastic/eck-operator/tags

docker pull elastic/eck-operator:1.9.1

This is needed because the original image cannot be pulled:

root@sz-k8s-master-01:~# docker pull docker.elastic.co/eck-operator:1.9.1
Error response from daemon: Get https://docker.elastic.co/v2/: x509: certificate signed by unknown authority
root@sz-k8s-master-01:~#
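
One workaround, sketched here on the assumption that the nodes can reach Docker Hub (or a private registry of your own), is to pull the Docker Hub mirror of the image, optionally retag and push it, and point the manifest at the reachable reference; the exact image string in your downloaded manifest may differ, so adjust the sed pattern to match it:

# Pull the operator image from Docker Hub instead of docker.elastic.co
docker pull elastic/eck-operator:1.9.1
# Optional: retag and push to a private registry (registry address is a placeholder)
# docker tag elastic/eck-operator:1.9.1 registry.example.com/elastic/eck-operator:1.9.1
# docker push registry.example.com/elastic/eck-operator:1.9.1
# Point the StatefulSet at the image that can actually be pulled
sed -i 's#docker.elastic.co/eck/eck-operator:1.9.1#elastic/eck-operator:1.9.1#' operator-legacy.yaml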

The resulting file looks like this:

root@sz-k8s-master-01:~# cat operator-legacy.yaml
# Source: eck-operator/templates/operator-namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: elastic-system
  labels:
    name: elastic-system
---
# Source: eck-operator/templates/service-account.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: elastic-operator
  namespace: logs
  labels:
    control-plane: elastic-operator
    app.kubernetes.io/version: "1.9.1"
---
# Source: eck-operator/templates/webhook.yaml
apiVersion: v1
kind: Secret
metadata:
  name: elastic-webhook-server-cert
  namespace: logs
  labels:
    control-plane: elastic-operator
    app.kubernetes.io/version: "1.9.1"
---
# Source: eck-operator/templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: elastic-operator
  namespace: logs
  labels:
    control-plane: elastic-operator
    app.kubernetes.io/version: "1.9.1"
data:
  eck.yaml: |-
    log-verbosity: 0
    metrics-port: 0
    container-registry: docker.elastic.co
    max-concurrent-reconciles: 3
    ca-cert-validity: 8760h
    ca-cert-rotate-before: 24h
    cert-validity: 8760h
    cert-rotate-before: 24h
    set-default-security-context: true
    kube-client-timeout: 60s
    elasticsearch-client-timeout: 180s
    disable-telemetry: false
    distribution-channel: all-in-one
    validate-storage-class: true
    enable-webhook: true
    webhook-name: elastic-webhook.k8s.elastic.co
---
# Source: eck-operator/templates/cluster-roles.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: elastic-operator
  labels:
    control-plane: elastic-operator
    app.kubernetes.io/version: "1.9.1"
rules:
  - apiGroups:
      - "authorization.k8s.io"
    resources:
      - subjectaccessreviews
    verbs:
      - create
  - apiGroups:
      - ""
    resources:
      - endpoints
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - pods
      - events
      - persistentvolumeclaims
      - secrets
      - services
      - configmaps
    verbs:
      - get
      - list
      - watch
      - create
      - update
      - patch
      - delete
  - apiGroups:
      - apps
    resources:
      - deployments
      - statefulsets
      - daemonsets
    verbs:
      - get
      - list
      - watch
      - create
      - update
      - patch
      - delete
  - apiGroups:
      - policy
    resources:
      - poddisruptionbudgets
    verbs:
      - get
      - list
      - watch
      - create
      - update
      - patch
      - delete
  - apiGroups:
      - elasticsearch.k8s.elastic.co
    resources:
      - elasticsearches
      - elasticsearches/status
      - elasticsearches/finalizers # needed for ownerReferences with blockOwnerDeletion on OCP
    verbs:
      - get
      - list
      - watch
      - create
      - update
      - patch
  - apiGroups:
      - kibana.k8s.elastic.co
    resources:
      - kibanas
      - kibanas/status
      - kibanas/finalizers # needed for ownerReferences with blockOwnerDeletion on OCP
    verbs:
      - get
      - list
      - watch
      - create
      - update
      - patch
  - apiGroups:
      - apm.k8s.elastic.co
    resources:
      - apmservers
      - apmservers/status
      - apmservers/finalizers # needed for ownerReferences with blockOwnerDeletion on OCP
    verbs:
      - get
      - list
      - watch
      - create
      - update
      - patch
  - apiGroups:
      - enterprisesearch.k8s.elastic.co
    resources:
      - enterprisesearches
      - enterprisesearches/status
      - enterprisesearches/finalizers # needed for ownerReferences with blockOwnerDeletion on OCP
    verbs:
      - get
      - list
      - watch
      - create
      - update
      - patch
  - apiGroups:
      - beat.k8s.elastic.co
    resources:
      - beats
      - beats/status
      - beats/finalizers # needed for ownerReferences with blockOwnerDeletion on OCP
    verbs:
      - get
      - list
      - watch
      - create
      - update
      - patch
  - apiGroups:
      - agent.k8s.elastic.co
    resources:
      - agents
      - agents/status
      - agents/finalizers # needed for ownerReferences with blockOwnerDeletion on OCP
    verbs:
      - get
      - list
      - watch
      - create
      - update
      - patch
  - apiGroups:
      - maps.k8s.elastic.co
    resources:
      - elasticmapsservers
      - elasticmapsservers/status
      - elasticmapsservers/finalizers # needed for ownerReferences with blockOwnerDeletion on OCP
    verbs:
      - get
      - list
      - watch
      - create
      - update
      - patch
  - apiGroups:
      - storage.k8s.io
    resources:
      - storageclasses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - admissionregistration.k8s.io
    resources:
      - validatingwebhookconfigurations
    verbs:
      - get
      - list
      - watch
      - create
      - update
      - patch
      - delete
---
# Source: eck-operator/templates/cluster-roles.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: "elastic-operator-view"
  labels:
    rbac.authorization.k8s.io/aggregate-to-view: "true"
    rbac.authorization.k8s.io/aggregate-to-edit: "true"
    rbac.authorization.k8s.io/aggregate-to-admin: "true"
    control-plane: elastic-operator
    app.kubernetes.io/version: "1.9.1"
rules:
  - apiGroups: ["elasticsearch.k8s.elastic.co"]
    resources: ["elasticsearches"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["apm.k8s.elastic.co"]
    resources: ["apmservers"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["kibana.k8s.elastic.co"]
    resources: ["kibanas"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["enterprisesearch.k8s.elastic.co"]
    resources: ["enterprisesearches"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["beat.k8s.elastic.co"]
    resources: ["beats"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["agent.k8s.elastic.co"]
    resources: ["agents"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["maps.k8s.elastic.co"]
    resources: ["elasticmapsservers"]
    verbs: ["get", "list", "watch"]
---
# Source: eck-operator/templates/cluster-roles.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: "elastic-operator-edit"
  labels:
    rbac.authorization.k8s.io/aggregate-to-edit: "true"
    rbac.authorization.k8s.io/aggregate-to-admin: "true"
    control-plane: elastic-operator
    app.kubernetes.io/version: "1.9.1"
rules:
  - apiGroups: ["elasticsearch.k8s.elastic.co"]
    resources: ["elasticsearches"]
    verbs: ["create", "delete", "deletecollection", "patch", "update"]
  - apiGroups: ["apm.k8s.elastic.co"]
    resources: ["apmservers"]
    verbs: ["create", "delete", "deletecollection", "patch", "update"]
  - apiGroups: ["kibana.k8s.elastic.co"]
    resources: ["kibanas"]
    verbs: ["create", "delete", "deletecollection", "patch", "update"]
  - apiGroups: ["enterprisesearch.k8s.elastic.co"]
    resources: ["enterprisesearches"]
    verbs: ["create", "delete", "deletecollection", "patch", "update"]
  - apiGroups: ["beat.k8s.elastic.co"]
    resources: ["beats"]
    verbs: ["create", "delete", "deletecollection", "patch", "update"]
  - apiGroups: ["agent.k8s.elastic.co"]
    resources: ["agents"]
    verbs: ["create", "delete", "deletecollection", "patch", "update"]
  - apiGroups: ["maps.k8s.elastic.co"]
    resources: ["elasticmapsservers"]
    verbs: ["create", "delete", "deletecollection", "patch", "update"]
---
# Source: eck-operator/templates/role-bindings.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: elastic-operator
  labels:
    control-plane: elastic-operator
    app.kubernetes.io/version: "1.9.1"
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: elastic-operator
subjects:
  - kind: ServiceAccount
    name: elastic-operator
    namespace: logs
---
# Source: eck-operator/templates/webhook.yaml
apiVersion: v1
kind: Service
metadata:
  name: elastic-webhook-server
  namespace: logs
  labels:
    control-plane: elastic-operator
    app.kubernetes.io/version: "1.9.1"
spec:
  ports:
    - name: https
      port: 443
      targetPort: 9443
  selector:
    control-plane: elastic-operator
---
# Source: eck-operator/templates/statefulset.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: elastic-operator
  namespace: logs
  labels:
    control-plane: elastic-operator
    app.kubernetes.io/version: "1.9.1"
spec:
  selector:
    matchLabels:
      control-plane: elastic-operator
  serviceName: elastic-operator
  replicas: 1
  template:
    metadata:
      annotations:
        # Rename the fields "error" to "error.message" and "source" to "event.source"
        # This is to avoid a conflict with the ECS "error" and "source" documents.
        "co.elastic.logs/raw": "[{\"type\":\"container\",\"json.keys_under_root\":true,\"paths\":[\"/var/log/containers/*${data.kubernetes.container.id}.log\"],\"processors\":[{\"convert\":{\"mode\":\"rename\",\"ignore_missing\":true,\"fields\":[{\"from\":\"error\",\"to\":\"_error\"}]}},{\"convert\":{\"mode\":\"rename\",\"ignore_missing\":true,\"fields\":[{\"from\":\"_error\",\"to\":\"error.message\"}]}},{\"convert\":{\"mode\":\"rename\",\"ignore_missing\":true,\"fields\":[{\"from\":\"source\",\"to\":\"_source\"}]}},{\"convert\":{\"mode\":\"rename\",\"ignore_missing\":true,\"fields\":[{\"from\":\"_source\",\"to\":\"event.source\"}]}}]}]"
        "checksum/config": 239de074c87fe1f7254f5c93ff9f4a0949c8f111ba15696c460d786d6279e4d6
      labels:
        control-plane: elastic-operator
    spec:
      terminationGracePeriodSeconds: 10
      serviceAccountName: elastic-operator
      securityContext:
        runAsNonRoot: true
      containers:
        - image: "elastic/eck-operator:1.9.1"
          imagePullPolicy: IfNotPresent
          name: manager
          args:
            - "manager"
            - "--config=/conf/eck.yaml"
          env:
            - name: OPERATOR_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: POD_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
            - name: WEBHOOK_SECRET
              value: elastic-webhook-server-cert
          resources:
            limits:
              cpu: 1
              memory: 512Mi
            requests:
              cpu: 100m
              memory: 150Mi
          ports:
            - containerPort: 9443
              name: https-webhook
              protocol: TCP
          volumeMounts:
            - mountPath: "/conf"
              name: conf
              readOnly: true
            - mountPath: /tmp/k8s-webhook-server/serving-certs
              name: cert
              readOnly: true
      volumes:
        - name: conf
          configMap:
            name: elastic-operator
        - name: cert
          secret:
            defaultMode: 420
            secretName: elastic-webhook-server-cert
---
# Source: eck-operator/templates/webhook.yaml
apiVersion: admissionregistration.k8s.io/v1beta1
kind: ValidatingWebhookConfiguration
metadata:
  name: elastic-webhook.k8s.elastic.co
  labels:
    control-plane: elastic-operator
    app.kubernetes.io/version: "1.9.1"
webhooks:
  - clientConfig:
      caBundle: Cg==
      service:
        name: elastic-webhook-server
        namespace: logs
        path: /validate-agent-k8s-elastic-co-v1alpha1-agent
    failurePolicy: Ignore
    name: elastic-agent-validation-v1alpha1.k8s.elastic.co
    rules:
      - apiGroups:
          - agent.k8s.elastic.co
        apiVersions:
          - v1alpha1
        operations:
          - CREATE
          - UPDATE
        resources:
          - agents
  - clientConfig:
      caBundle: Cg==
      service:
        name: elastic-webhook-server
        namespace: logs
        path: /validate-apm-k8s-elastic-co-v1-apmserver
    failurePolicy: Ignore
    name: elastic-apm-validation-v1.k8s.elastic.co
    rules:
      - apiGroups:
          - apm.k8s.elastic.co
        apiVersions:
          - v1
        operations:
          - CREATE
          - UPDATE
        resources:
          - apmservers
  - clientConfig:
      caBundle: Cg==
      service:
        name: elastic-webhook-server
        namespace: logs
        path: /validate-apm-k8s-elastic-co-v1beta1-apmserver
    failurePolicy: Ignore
    name: elastic-apm-validation-v1beta1.k8s.elastic.co
    rules:
      - apiGroups:
          - apm.k8s.elastic.co
        apiVersions:
          - v1beta1
        operations:
          - CREATE
          - UPDATE
        resources:
          - apmservers
  - clientConfig:
      caBundle: Cg==
      service:
        name: elastic-webhook-server
        namespace: logs
        path: /validate-beat-k8s-elastic-co-v1beta1-beat
    failurePolicy: Ignore
    name: elastic-beat-validation-v1beta1.k8s.elastic.co
    rules:
      - apiGroups:
          - beat.k8s.elastic.co
        apiVersions:
          - v1beta1
        operations:
          - CREATE
          - UPDATE
        resources:
          - beats
  - clientConfig:
      caBundle: Cg==
      service:
        name: elastic-webhook-server
        namespace: logs
        path: /validate-enterprisesearch-k8s-elastic-co-v1-enterprisesearch
    failurePolicy: Ignore
    name: elastic-ent-validation-v1.k8s.elastic.co
    rules:
      - apiGroups:
          - enterprisesearch.k8s.elastic.co
        apiVersions:
          - v1
        operations:
          - CREATE
          - UPDATE
        resources:
          - enterprisesearches
  - clientConfig:
      caBundle: Cg==
      service:
        name: elastic-webhook-server
        namespace: logs
        path: /validate-enterprisesearch-k8s-elastic-co-v1beta1-enterprisesearch
    failurePolicy: Ignore
    name: elastic-ent-validation-v1beta1.k8s.elastic.co
    rules:
      - apiGroups:
          - enterprisesearch.k8s.elastic.co
        apiVersions:
          - v1beta1
        operations:
          - CREATE
          - UPDATE
        resources:
          - enterprisesearches
  - clientConfig:
      caBundle: Cg==
      service:
        name: elastic-webhook-server
        namespace: logs
        path: /validate-elasticsearch-k8s-elastic-co-v1-elasticsearch
    failurePolicy: Ignore
    name: elastic-es-validation-v1.k8s.elastic.co
    rules:
      - apiGroups:
          - elasticsearch.k8s.elastic.co
        apiVersions:
          - v1
        operations:
          - CREATE
          - UPDATE
        resources:
          - elasticsearches
  - clientConfig:
      caBundle: Cg==
      service:
        name: elastic-webhook-server
        namespace: logs
        path: /validate-elasticsearch-k8s-elastic-co-v1beta1-elasticsearch
    failurePolicy: Ignore
    name: elastic-es-validation-v1beta1.k8s.elastic.co
    rules:
      - apiGroups:
          - elasticsearch.k8s.elastic.co
        apiVersions:
          - v1beta1
        operations:
          - CREATE
          - UPDATE
        resources:
          - elasticsearches
  - clientConfig:
      caBundle: Cg==
      service:
        name: elastic-webhook-server
        namespace: logs
        path: /validate-kibana-k8s-elastic-co-v1-kibana
    failurePolicy: Ignore
    name: elastic-kb-validation-v1.k8s.elastic.co
    rules:
      - apiGroups:
          - kibana.k8s.elastic.co
        apiVersions:
          - v1
        operations:
          - CREATE
          - UPDATE
        resources:
          - kibanas
  - clientConfig:
      caBundle: Cg==
      service:
        name: elastic-webhook-server
        namespace: logs
        path: /validate-kibana-k8s-elastic-co-v1beta1-kibana
    failurePolicy: Ignore
    name: elastic-kb-validation-v1beta1.k8s.elastic.co
    rules:
      - apiGroups:
          - kibana.k8s.elastic.co
        apiVersions:
          - v1beta1
        operations:
          - CREATE
          - UPDATE
        resources:
          - kibanas

root@sz-k8s-master-01:~#

Apply operator-legacy.yaml

kubectl apply -f operator-legacy.yaml

After a while, check the operator logs:

kubectl -n logs logs -f statefulset.apps/elastic-operator

Then check that elastic-operator is running normally; ECK runs only a single elastic-operator pod:

root@sz-k8s-master-01:~# kubectl -n logs get pod|grep elastic
elastic-operator-0              1/1     Running   1          5d1h
root@sz-k8s-master-01:~#
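
It is also worth confirming that the validating webhook and its Service defined in the manifest were created (the resource names below are taken from operator-legacy.yaml above):

kubectl get validatingwebhookconfiguration elastic-webhook.k8s.elastic.co
kubectl -n logs get svc,endpoints elastic-webhook-server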

1.2 Deploying an Elasticsearch cluster with ECK (storage backed by Baidu Cloud CFS)

cat <<EOF | kubectl apply -f - -n logs
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: cluster01
  namespace: logs
spec:
  version: 7.10.1
  nodeSets:
  - name: master-nodes
    count: 3
    config:
      node.master: true
      node.data: false
    podTemplate:
      spec:
        initContainers:
        - name: sysctl
          securityContext:
            privileged: true
            runAsUser: 0
          command: ['sh', '-c', 'sysctl -w vm.max_map_count=262144']
        containers:
        - name: elasticsearch
          image: elasticsearch:7.10.1
          env:
          - name: ES_JAVA_OPTS
            value: "-Xms6g -Xmx6g"
          resources:
            limits:
              memory: 8Gi
              cpu: 2
            requests:
              memory: 8Gi
              cpu: 2
        imagePullSecrets:
        - name: mlpull
    volumeClaimTemplates:
    - metadata:
        name: elasticsearch-data
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 50Gi
        storageClassName: capacity-bdycloud-cfs
  - name: data-nodes
    count: 3
    config:
      node.master: false
      node.data: true
    podTemplate:
      spec:
        initContainers:
        - name: sysctl
          securityContext:
            privileged: true
            runAsUser: 0
          command: ['sh', '-c', 'sysctl -w vm.max_map_count=262144']
        containers:
        - name: elasticsearch
          image: elasticsearch:7.10.1
          env:
          - name: ES_JAVA_OPTS
            value: "-Xms8g -Xmx8g"
          resources:
            limits:
              memory: 10Gi
              cpu: 4
            requests:
              memory: 10Gi
              cpu: 4
        imagePullSecrets:
        - name: mlpull
    volumeClaimTemplates:
    - metadata:
        name: elasticsearch-data
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 1000Gi
        storageClassName: capacity-bdycloud-cfs
  http:
    tls:
      selfSignedCertificate:
        disabled: true
EOF

After a while, check the Elasticsearch cluster status; HEALTH green means the deployment succeeded:

root@sz-k8s-master-01:~# kubectl -n logs get elasticsearch
NAME        HEALTH   NODES   VERSION   PHASE   AGE
cluster01   green    6       7.10.1     Ready   4m26s
root@sz-k8s-master-01:~#
# check the pods
root@sz-k8s-master-01:~# kubectl -n logs get pod|grep cluster01
cluster01-es-data-nodes-0       1/1     Running                 0          5m8s
cluster01-es-data-nodes-1       1/1     Running                 0          5m8s
cluster01-es-data-nodes-2       1/1     Running                 0          5m8s
cluster01-es-master-nodes-0     1/1     Running                 0          5m8s
cluster01-es-master-nodes-1     1/1     Running                 0          5m8s
cluster01-es-master-nodes-2     1/1     Running                 0          5m8s
root@sz-k8s-master-01:~#

Check the requested PVCs; they were created and bound successfully (Bound):

root@sz-k8s-master-01:~# kubectl -n logs get pvc
NAME                                             STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS            AGE
elasticsearch-data-cluster01-es-data-nodes-0     Bound    pvc-b253e810-e62c-11ec-bc77-fa163efe36ed   1000Gi     RWO            capacity-bdycloud-cfs   6m18s
elasticsearch-data-cluster01-es-data-nodes-1     Bound    pvc-b2561699-e62c-11ec-bc77-fa163efe36ed   1000Gi     RWO            capacity-bdycloud-cfs   6m18s
elasticsearch-data-cluster01-es-data-nodes-2     Bound    pvc-b259b21a-e62c-11ec-bc77-fa163efe36ed   1000Gi     RWO            capacity-bdycloud-cfs   6m18s
elasticsearch-data-cluster01-es-master-nodes-0   Bound    pvc-b2482de7-e62c-11ec-bc77-fa163efe36ed   50Gi       RWO            capacity-bdycloud-cfs   6m18s
elasticsearch-data-cluster01-es-master-nodes-1   Bound    pvc-b24a8bf4-e62c-11ec-bc77-fa163efe36ed   50Gi       RWO            capacity-bdycloud-cfs   6m18s
elasticsearch-data-cluster01-es-master-nodes-2   Bound    pvc-b24d8c2a-e62c-11ec-bc77-fa163efe36ed   50Gi       RWO            capacity-bdycloud-cfs   6m18s
logstash                                         Bound    pvc-7bb27b91-8ed0-11ec-8e66-fa163efe36ed   2Gi        RWX            bdycloud-cfs            111d
root@sz-k8s-master-01:~#

1.3 Creating an Ingress for external access

cat <<EOF | kubectl apply -f - -n logs
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: bdy-sz-es
  namespace: logs
spec:
  rules:
  - host: sz-es.test.cn
    http:
      paths:
      - backend:
          serviceName: cluster01-es-http
          servicePort: 9200
        path: /
EOF
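
Once DNS or a local hosts entry points sz-es.test.cn at the ingress controller, a quick check through the Ingress might look like this; replace the placeholder with the elastic user's password retrieved in section 1.4 below:

curl -u 'elastic:<password>' http://sz-es.test.cn/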

1.4 Elasticsearch username and password

By default the cluster enables basic authentication; the username is elastic and the password can be read from a Secret. The cluster also enables HTTPS with a self-signed certificate by default (we disabled it in the manifest in section 1.2, which is why plain HTTP works below). We can access Elasticsearch through its Service:

root@sz-k8s-master-01:~# kubectl get secret -n logs cluster01-es-elastic-user -o=jsonpath='{.data.elastic}' | base64 --decode; echo
uOGv138dX123434342t130
root@sz-k8s-master-01:~#

In-cluster access test

root@sz-k8s-master-01:~# kubectl -n logs get svc
NAME                        TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
cerebro                     ClusterIP   10.2.214.84    <none>        9000/TCP   4h21m
cluster01-es-data-nodes     ClusterIP   None           <none>        9200/TCP   7m10s
cluster01-es-http           ClusterIP   10.2.178.25    <none>        9200/TCP   7m12s
cluster01-es-master-nodes   ClusterIP   None           <none>        9200/TCP   7m10s
cluster01-es-transport      ClusterIP   None           <none>        9300/TCP   7m12s
elastic-webhook-server      ClusterIP   10.2.46.89     <none>        443/TCP    5d5h
kibana-kb-http              ClusterIP   10.2.247.192   <none>        5601/TCP   23h
logstash                    ClusterIP   10.2.97.210    <none>        5044/TCP   2y88d
root@sz-k8s-master-01:~# curl http://10.2.178.25:9200 -u 'elastic:uOGv138dX123434342t130' -k
{
  "name" : "cluster01-es-data-nodes-2",
  "cluster_name" : "cluster01",
  "cluster_uuid" : "pKUYTOzuS_i3yT2UCiQw3w",
  "version" : {
    "number" : "7.10.1",
    "build_flavor" : "default",
    "build_type" : "docker",
    "build_hash" : "1c34507e66d7db1211f66f3513706fdf548736aa",
    "build_date" : "2020-12-05T01:00:33.671820Z",
    "build_snapshot" : false,
    "lucene_version" : "8.7.0",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}
root@sz-k8s-master-01:~#
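
A couple of optional follow-up checks against the same Service confirm cluster health and node membership (same credentials and ClusterIP as above):

curl -u 'elastic:uOGv138dX123434342t130' http://10.2.178.25:9200/_cat/health?v
curl -u 'elastic:uOGv138dX123434342t130' http://10.2.178.25:9200/_cat/nodes?v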

To uninstall Elasticsearch, run the following:

root@sz-k8s-master-01:~# kubectl -n logs delete elasticsearch cluster01
elasticsearch.elasticsearch.k8s.elastic.co "cluster01" deleted
root@sz-k8s-master-01:~#

To modify the Elasticsearch configuration:

kubectl -n logs edit elasticsearch cluster01
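
For example, to scale the data tier you can raise count on the data-nodes nodeSet; a sketch using a JSON patch (the index 1 assumes data-nodes is the second nodeSet, as in the manifest above; kubectl edit works just as well):

# Scale data-nodes from 3 to 5 replicas
kubectl -n logs patch elasticsearch cluster01 --type json \
  -p '[{"op": "replace", "path": "/spec/nodeSets/1/count", "value": 5}]'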

2 Kibana

# Official documentation
https://www.elastic.co/guide/en/cloud-on-k8s/1.9/k8s-deploy-kibana.html

Because Kibana also enables self-signed HTTPS by default, we can choose to disable it. Let's deploy Kibana with ECK.

2.1 Deploying Kibana and associating it with the Elasticsearch cluster

cat <<EOF | kubectl apply -f - -n logs
apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
  name: kibana
spec:
  version: 7.10.1
  count: 1
  elasticsearchRef:
    name: cluster01   # obtained with: kubectl -n logs get elasticsearch
EOF
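
If you would rather disable Kibana's self-signed HTTPS, as mentioned above, the Kibana spec accepts the same http.tls block used for Elasticsearch in section 1.2; a sketch of the variant manifest:

cat <<EOF | kubectl apply -f - -n logs
apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
  name: kibana
spec:
  version: 7.10.1
  count: 1
  elasticsearchRef:
    name: cluster01
  http:
    tls:
      selfSignedCertificate:
        disabled: true
EOF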

2.2 Checking the status

root@sz-k8s-master-01:~# kubectl -n logs get kibana
NAME     HEALTH   NODES   VERSION   AGE
kibana   green    1       7.10.1     18m
root@sz-k8s-master-01:~#

2.3 Accessing Kibana inside the cluster

Check the Service address:

root@sz-k8s-master-01:~# kubectl -n logs get service kibana-kb-http
NAME             TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)    AGE
kibana-kb-http   ClusterIP   10.2.195.7   <none>        5601/TCP   17m
root@sz-k8s-master-01:~#

Use kubectl port-forward to access Kibana from within the cluster:

root@sz-k8s-master-01:~# kubectl -n logs port-forward service/kibana-kb-http 5601
Forwarding from 127.0.0.1:5601 -> 5601
Forwarding from [::1]:5601 -> 5601
^Croot@sz-k8s-master-01:~#
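
With the port-forward running in another terminal, a quick status check might look like this (Kibana still serves self-signed HTTPS here, hence -k; use plain http:// if you disabled TLS as sketched in section 2.1):

curl -k -u 'elastic:uOGv138dX123434342t130' https://127.0.0.1:5601/api/status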

2.4 Creating an Ingress for external access

cat <<EOF | kubectl apply -f - -n logs
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: bdy-sz-kibana
  namespace: logs
spec:
  rules:
  - host: sz-kibana.test.cn
    http:
      paths:
      - backend:
          serviceName: kibana-kb-http
          servicePort: 5601
        path: /
EOF

3 Deploying Cerebro and associating it with the Elasticsearch cluster

3.1 Publishing the configuration

Define the configuration file:

cat application.conf
es = {
  gzip = true
}
auth = {
  type: basic
  settings {
    username = "admin"
    password = "uOGv138dX123434342t130"
  }
}
hosts = [
  {
    host = "http://cluster01-es-http:9200"
    name = "sz-cerebro"
    auth = {
      username = "elastic"
      password = "uOGv138dX123434342t130"
    }
  }
]

Create the ConfigMap

root@sz-k8s-master-01:/opt/cerebro# kubectl create configmap cerebro-application --from-file=application.conf -n logs
configmap/cerebro-application created
root@sz-k8s-master-01:/opt/cerebro#
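
Before mounting it, you can confirm that application.conf actually landed in the ConfigMap:

kubectl -n logs get configmap cerebro-application -o yaml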

3.2 Publishing the Deployment, Service, and Ingress

cat <<EOF | kubectl apply -f - -n logs
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: cerebro
  name: cerebro
  namespace: logs
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cerebro
  template:
    metadata:
      labels:
        app: cerebro
      name: cerebro
    spec:
      containers:
      - image: lmenezes/cerebro:0.8.3
        imagePullPolicy: IfNotPresent
        name: cerebro
        resources:
          limits:
            cpu: 1
            memory: 1Gi
          requests:
            cpu: 1
            memory: 1Gi
        volumeMounts:
          - name: cerebro-conf
            mountPath: /etc/cerebro
      volumes:
      - name: cerebro-conf
        configMap:
          name: cerebro-application

---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: cerebro
  name: cerebro
  namespace: logs
spec:
  ports:
  - port: 9000
    protocol: TCP
    targetPort: 9000
  selector:
    app: cerebro
  type: ClusterIP
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: cerebro
  namespace: logs
spec:
  rules:
  - host: sz-cerebro.test.cn
    http:
      paths:
      - backend:
          serviceName: cerebro
          servicePort: 9000
        path: /
EOF
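
Finally, a quick sanity check that Cerebro is up and reachable inside the cluster; the ClusterIP below comes from the service listing in section 1.4 and will differ in your environment:

kubectl -n logs get deploy,svc,ingress cerebro
# Expect an HTTP 200 from Cerebro's UI on the service port
curl -s -o /dev/null -w '%{http_code}\n' http://10.2.214.84:9000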
