
Containerized deployment of an etcd cluster and APISIX on a Kubernetes cluster


前期準(zhǔn)備

創(chuàng)建StorageClass,支持動態(tài)pvc創(chuàng)建,StorageClass使用nfs-client,同時使用華為云sfs作為數(shù)據(jù)持久化存儲目錄

etcd deployment steps

  1. RBAC authorization (rbac.yaml)
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
- apiGroups: [""]
  resources: ["persistentvolumes"]
  verbs: ["get", "list", "watch", "create", "delete"]
- apiGroups: [""]
  resources: ["persistentvolumeclaims"]
  verbs: ["get", "list", "watch", "update"]
- apiGroups: [""]
  resources: ["endpoints"]
  verbs: ["get", "list", "watch", "create", "update", "patch"]
- apiGroups: ["storage.k8s.io"]
  resources: ["storageclasses"]
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources: ["events"]
  verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
- kind: ServiceAccount
  name: nfs-client-provisioner
  namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
rules:
- apiGroups: [""]
  resources: ["endpoints"]
  verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
subjects:
- kind: ServiceAccount
  name: nfs-client-provisioner
  # replace with the namespace where the provisioner is deployed
  namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io
  1. 創(chuàng)建nfs-provisioner(nfs-provisioner.yaml)
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: vbouchaud/nfs-client-provisioner:latest
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: fuseim.pri/ifs   # external provisioner name; change as needed
            - name: NFS_SERVER
              value: 172.18.0.181     # NFS server address
            - name: NFS_PATH
              value: /                # directory exported by the NFS server
      volumes:
        - name: nfs-client-root
          nfs:
            server: 172.18.0.181      # NFS server address
            path: /
  1. 設(shè)置nfs-client(nfs-client.yaml)
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-client
provisioner: fuseim.pri/ifs    # external provisioner name; must match PROVISIONER_NAME in nfs-provisioner.yaml
parameters:
  archiveOnDelete: "false"     # whether to archive on delete: "false" deletes the data under oldPath, "true" archives it by renaming the path
reclaimPolicy: Retain          # reclaim policy; defaults to Delete, can be set to Retain
volumeBindingMode: Immediate   # defaults to Immediate, i.e. the PVC is bound as soon as it is created
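
Before moving on, dynamic provisioning can be verified by creating a throwaway PVC against this StorageClass; a minimal sketch (the claim name test-claim is illustrative and not part of the deployment):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-claim           # hypothetical name, used only for this check
spec:
  storageClassName: nfs-client
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi

If kubectl get pvc test-claim shows STATUS Bound shortly after applying it, the provisioner is working; clean up with kubectl delete pvc test-claim.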
  1. 創(chuàng)建svc,后續(xù)apisix中會使用到(svc.yaml)
apiVersion: v1
kind: Service
metadata:
  name: apisix-etcd-headless
  namespace: default
  labels:
    app.kubernetes.io/instance: apisix-etcd
    app.kubernetes.io/name: apisix-etcd
spec:
  ports:
  - name: client
    port: 2379
    protocol: TCP
    targetPort: 2379
  - name: peer
    port: 2380
    protocol: TCP
    targetPort: 2380
  clusterIP: None
  selector:
    app.kubernetes.io/instance: apisix-etcd
    app.kubernetes.io/name: apisix-etcd
  publishNotReadyAddresses: true
--- 
apiVersion: v1
kind: Service
metadata:
  name: apisix-etcd
  namespace: default
  labels:
    app.kubernetes.io/instance: apisix-etcd
    app.kubernetes.io/name: apisix-etcd
spec:
  ports:
  - name: client
    port: 2379
    protocol: TCP
    targetPort: 2379
  - name: peer
    port: 2380
    protocol: TCP
    targetPort: 2380
  selector:
    app.kubernetes.io/instance: apisix-etcd
    app.kubernetes.io/name: apisix-etcd
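
The headless Service (clusterIP: None) is what gives each etcd pod a stable per-pod DNS name of the form <pod>.apisix-etcd-headless.default.svc.cluster.local, which the peer and client URLs in step 6 rely on; publishNotReadyAddresses: true keeps those records resolvable before the pods pass their readiness probes, which etcd needs while bootstrapping. Once the StatefulSet from step 6 is running, the records can be checked from a throwaway pod; a sketch (the pod name dns-test is illustrative):

kubectl run dns-test --rm -it --image=busybox:1.28 --restart=Never -- \
  nslookup apisix-etcd-headless.default.svc.cluster.local
# Expect one A record per etcd pod (apisix-etcd-0/1/2)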
  1. 挨個執(zhí)行以上yaml文件,kubectl apply -f ***.yaml

  2. 創(chuàng)建etcd有狀態(tài)服務(wù)(etcd.yaml)

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: apisix-etcd
  namespace: default
  labels:
    app.kubernetes.io/instance: apisix-etcd
    app.kubernetes.io/name: apisix-etcd
spec:
  podManagementPolicy: Parallel
  replicas: 3
  serviceName: apisix-etcd-headless
  selector:
    matchLabels:
      app.kubernetes.io/instance: apisix-etcd
      app.kubernetes.io/name: apisix-etcd
  template:
    metadata:
      labels:
        app.kubernetes.io/instance: apisix-etcd
        app.kubernetes.io/name: apisix-etcd
    spec:
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - podAffinityTerm:
              labelSelector:
                matchLabels:
                  app.kubernetes.io/instance: apisix-etcd
                  app.kubernetes.io/name: apisix-etcd
              topologyKey: kubernetes.io/hostname
            weight: 1
      containers:
      - name: apisix-etcd-app
        image: bitnami/etcd:3.4.24
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 2379
          name: client
          protocol: TCP
        - containerPort: 2380
          name: peer
          protocol: TCP
        env:
        - name: BITNAMI_DEBUG
          value: 'false'
        - name: MY_POD_IP
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: status.podIP
        - name: MY_POD_NAME
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.name
        - name: MY_STS_NAME
          value: apisix-etcd
        - name: ETCDCTL_API
          value: '3'
        - name: ETCD_ON_K8S
          value: 'yes'
        - name: ETCD_START_FROM_SNAPSHOT
          value: 'no'
        - name: ETCD_DISASTER_RECOVERY
          value: 'no'
        - name: ETCD_NAME
          value: $(MY_POD_NAME)
        - name: ETCD_DATA_DIR
          value: /bitnami/etcd/data
        - name: ETCD_LOG_LEVEL
          value: info
        - name: ALLOW_NONE_AUTHENTICATION
          value: 'yes'
        - name: ETCD_ADVERTISE_CLIENT_URLS
          value: http://$(MY_POD_NAME).apisix-etcd-headless.default.svc.cluster.local:2379
        - name: ETCD_LISTEN_CLIENT_URLS
          value: http://0.0.0.0:2379
        - name: ETCD_INITIAL_ADVERTISE_PEER_URLS
          value: http://$(MY_POD_NAME).apisix-etcd-headless.default.svc.cluster.local:2380
        - name: ETCD_LISTEN_PEER_URLS
          value: http://0.0.0.0:2380
        - name: ETCD_INITIAL_CLUSTER_TOKEN
          value: apisix-etcd-cluster-k8s
        - name: ETCD_INITIAL_CLUSTER_STATE
          value: new
        - name: ETCD_INITIAL_CLUSTER
          value: apisix-etcd-0=http://apisix-etcd-0.apisix-etcd-headless.default.svc.cluster.local:2380,apisix-etcd-1=http://apisix-etcd-1.apisix-etcd-headless.default.svc.cluster.local:2380,apisix-etcd-2=http://apisix-etcd-2.apisix-etcd-headless.default.svc.cluster.local:2380
        - name: ETCD_CLUSTER_DOMAIN
          value: apisix-etcd-headless.default.svc.cluster.local
        volumeMounts:
        - name: data
          mountPath: /bitnami/etcd
        lifecycle:
          preStop:
            exec:
              command:
              - /opt/bitnami/scripts/etcd/prestop.sh
        livenessProbe:
          exec:
            command:
            - /opt/bitnami/scripts/etcd/healthcheck.sh
          initialDelaySeconds: 60
          timeoutSeconds: 5
          periodSeconds: 30
          successThreshold: 1
          failureThreshold: 5
        readinessProbe:
          exec:
            command:
            - /opt/bitnami/scripts/etcd/healthcheck.sh
          initialDelaySeconds: 60
          timeoutSeconds: 5
          periodSeconds: 10
          successThreshold: 1
          failureThreshold: 5
      securityContext:
        fsGroup: 1001
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes:
      - ReadWriteOnce
      storageClassName: nfs-client
      resources:
        requests:
          storage: 1Gi
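
With the StatefulSet applied, the cluster can be checked from any of its pods; no credentials are needed here because ALLOW_NONE_AUTHENTICATION is set to yes, and ETCDCTL_API=3 is already exported in the container. A sketch:

# All three pods should be Running
kubectl get pods -l app.kubernetes.io/name=apisix-etcd
# Query membership and health from inside one member
kubectl exec apisix-etcd-0 -- etcdctl member list
kubectl exec apisix-etcd-0 -- etcdctl endpoint health --cluster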


APISIX deployment

  1. apisix-admin (apisix.yaml)
kind: Deployment
apiVersion: apps/v1
metadata:
  name: apisix
  namespace: default
  labels:
    app.kubernetes.io/instance: apisix
    app.kubernetes.io/name: apisix
    app.kubernetes.io/version: 2.10.0
spec:
  replicas: 2
  selector:
    matchLabels:
      app.kubernetes.io/instance: apisix
      app.kubernetes.io/name: apisix
  template:
    metadata:
      creationTimestamp: null
      labels:
        app.kubernetes.io/instance: apisix
        app.kubernetes.io/name: apisix
    spec:
      volumes:
        - name: apisix-config
          configMap:
            name: apisix
            defaultMode: 420
      initContainers:
        - name: wait-etcd
          image: busybox:1.28
          command:
            - sh
            - '-c'
            - >-
              until nc -z 172.18.0.14 2379; do echo
              waiting for etcd `date`; sleep 2; done;
          resources: {}
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          imagePullPolicy: IfNotPresent
      containers:
        - name: apisix
          image: apache/apisix:2.10.0-alpine
          ports:
            - name: http
              containerPort: 9080
              protocol: TCP
            - name: tls
              containerPort: 9443
              protocol: TCP
            - name: admin
              containerPort: 9180
              protocol: TCP
          resources: {}
          volumeMounts:
            - name: apisix-config
              mountPath: /usr/local/apisix/conf/config.yaml
              subPath: config.yaml
          readinessProbe:
            tcpSocket:
              port: 9080
            initialDelaySeconds: 10
            timeoutSeconds: 1
            periodSeconds: 10
            successThreshold: 1
            failureThreshold: 6
          lifecycle:
            preStop:
              exec:
                command:
                  - /bin/sh
                  - '-c'
                  - sleep 30
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          imagePullPolicy: IfNotPresent
      restartPolicy: Always
      terminationGracePeriodSeconds: 30
      dnsPolicy: ClusterFirst
      securityContext: {}
      schedulerName: default-scheduler
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 25%
      maxSurge: 25%
  revisionHistoryLimit: 10
  progressDeadlineSeconds: 600
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: apisix
  namespace: default
data:
  config.yaml: >-
    apisix:
      node_listen: 9080             # APISIX listening port
      enable_heartbeat: true
      enable_admin: true
      enable_admin_cors: true
      enable_debug: false
      enable_dev_mode: false          # Sets nginx worker_processes to 1 if set to true
      enable_reuseport: true          # Enable nginx SO_REUSEPORT switch if set to true.
      enable_ipv6: true
      config_center: etcd             # etcd: use etcd to store the config value
      proxy_cache:                     # Proxy Caching configuration
        cache_ttl: 10s                 # The default caching time if the upstream does not specify the cache time
        zones:                         # The parameters of a cache
        - name: disk_cache_one         # The name of the cache, administrator can be specify
          memory_size: 50m             # The size of shared memory, it's used to store the cache index
          disk_size: 1G                # The size of disk, it's used to store the cache data
          disk_path: "/tmp/disk_cache_one" # The path to store the cache data
          cache_levels: "1:2"           # The hierarchy levels of a cache
      allow_admin:                  # http://nginx.org/en/docs/http/ngx_http_access_module.html#allow
        - 127.0.0.1/24
      port_admin: 9180
      admin_key:
        # admin: can everything for configuration data
        - name: "admin"
          key: edd1c9f034335f136f87ad84b625c8f1
          role: admin
        # viewer: only can view configuration data
        - name: "viewer"
          key: 4054f7cf07e344346cd3f287985e76a2
          role: viewer
      router:
        http: 'radixtree_uri'         # radixtree_uri: match route by uri(base on radixtree)
                                      # radixtree_host_uri: match route by host + uri(base on radixtree)
        ssl: 'radixtree_sni'          # radixtree_sni: match route by SNI(base on radixtree)
      dns_resolver_valid: 30
      resolver_timeout: 5
      ssl:
        enable: false
        enable_http2: true
        listen_port: 9443
        ssl_protocols: "TLSv1 TLSv1.1 TLSv1.2 TLSv1.3"
        ssl_ciphers: "ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:DHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES256-SHA256:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:DES-CBC3-SHA"
    discovery:
      nacos:
        host:
          - "http://nacos:nacos.123qwe.@172.18.0.127:8848"
        prefix: "/nacos/v1/"
        fetch_interval: 30
        weight: 100
        timeout:
          connect: 2000
          send: 2000
          read: 5000
    nginx_config:                     # config for rendering the template to generate nginx.conf
      error_log: "/dev/stderr"
      error_log_level: "warn"         # warn,error
      worker_rlimit_nofile: 20480     # the number of files a worker process can open, should be larger than worker_connections
      event:
        worker_connections: 10620
      http:
        access_log: "/dev/stdout"
        keepalive_timeout: 60s         # timeout during which a keep-alive client connection will stay open on the server side.
        client_header_timeout: 60s     # timeout for reading client request header, then 408 (Request Time-out) error is returned to the client
        client_body_timeout: 60s       # timeout for reading client request body, then 408 (Request Time-out) error is returned to the client
        send_timeout: 10s              # timeout for transmitting a response to the client.then the connection is closed
        underscores_in_headers: "on"   # default enables the use of underscores in client request header fields
        real_ip_header: "X-Real-IP"    # http://nginx.org/en/docs/http/ngx_http_realip_module.html#real_ip_header
        real_ip_from:                  # http://nginx.org/en/docs/http/ngx_http_realip_module.html#set_real_ip_from
          - 127.0.0.1
          - 'unix:'

    etcd:
      host:                                 # it's possible to define multiple etcd hosts addresses of the same etcd cluster.
        - "http://172.18.0.14:2379" #etcd集群,作者將etcd服務(wù)端口 通過svc映射出來,綁定到華文云elbs上,此ip為elb地址
      prefix: "/apisix"     # apisix configurations prefix
      timeout: 30   # 30 seconds
    plugins:                          # plugin list
      - api-breaker
      - authz-keycloak
      - basic-auth
      - batch-requests
      - consumer-restriction
      - cors
      - echo
      - fault-injection
      - grpc-transcode
      - hmac-auth
      - http-logger
      - ip-restriction
      - ua-restriction
      - jwt-auth
      - kafka-logger
      - key-auth
      - limit-conn
      - limit-count
      - limit-req
      - node-status
      - openid-connect
      - authz-casbin
      - prometheus
      - proxy-cache
      - proxy-mirror
      - proxy-rewrite
      - redirect
      - referer-restriction
      - request-id
      - request-validation
      - response-rewrite
      - serverless-post-function
      - serverless-pre-function
      - sls-logger
      - syslog
      - server-info
      - tcp-logger
      - udp-logger
      - uri-blocker
      - wolf-rbac
      - zipkin
      - traffic-split
      - gzip
      - real-ip
    stream_plugins:
      - mqtt-proxy
      - ip-restriction
      - limit-conn
    plugin_attr:
      server-info:
        report_interval: 60
        report_ttl: 3600
---
kind: Service
apiVersion: v1
metadata:
  name: apisix-admin
  namespace: default
  labels:
    app.kubernetes.io/instance: apisix
    app.kubernetes.io/name: apisix
    app.kubernetes.io/version: 2.10.0
spec:
  ports:
    - name: apisix-admin
      protocol: TCP
      port: 9180
      targetPort: 9180
  selector:
    app.kubernetes.io/instance: apisix
    app.kubernetes.io/name: apisix
  type: ClusterIP
---
kind: Service
apiVersion: v1
metadata:
  name: apisix-gateway
  namespace: default
  labels:
    app.kubernetes.io/instance: apisix
    app.kubernetes.io/name: apisix
    app.kubernetes.io/version: 2.10.0
spec:
  ports:
    - name: apisix-gateway
      protocol: TCP
      port: 80
      targetPort: 9080
      nodePort: 31684
  selector:
    app.kubernetes.io/instance: apisix
    app.kubernetes.io/name: apisix
  type: NodePort
  sessionAffinity: None
  externalTrafficPolicy: Cluster
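
After the Deployment is up, the Admin API can be sanity-checked with the admin key from the ConfigMap. Because allow_admin restricts admin access to 127.0.0.1/24, the call has to come from inside an APISIX pod; a sketch, assuming curl is available in the apache/apisix image:

kubectl exec deploy/apisix -- curl -s http://127.0.0.1:9180/apisix/admin/routes \
  -H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1'
# A JSON response rather than a connection error indicates APISIX can reach etcd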
  2. Deploy the APISIX dashboard (apisix-dashboard)
kind: Deployment
apiVersion: apps/v1
metadata:
  name: apisix-dashboard
  namespace: default
  labels:
    app.kubernetes.io/instance: apisix-dashboard
    app.kubernetes.io/name: apisix-dashboard
    app.kubernetes.io/version: 2.9.0
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/instance: apisix-dashboard
      app.kubernetes.io/name: apisix-dashboard
  template:
    metadata:
      creationTimestamp: null
      labels:
        app.kubernetes.io/instance: apisix-dashboard
        app.kubernetes.io/name: apisix-dashboard
    spec:
      volumes:
        - name: apisix-dashboard-config
          configMap:
            name: apisix-dashboard
            defaultMode: 420
      containers:
        - name: apisix-dashboard
          image: apache/apisix-dashboard:2.9.0
          ports:
            - name: http
              containerPort: 9000
              protocol: TCP
          resources: {}
          volumeMounts:
            - name: apisix-dashboard-config
              mountPath: /usr/local/apisix-dashboard/conf/conf.yaml
              subPath: conf.yaml
          livenessProbe:
            httpGet:
              path: /ping
              port: http
              scheme: HTTP
            timeoutSeconds: 1
            periodSeconds: 10
            successThreshold: 1
            failureThreshold: 3
          readinessProbe:
            httpGet:
              path: /ping
              port: http
              scheme: HTTP
            timeoutSeconds: 1
            periodSeconds: 10
            successThreshold: 1
            failureThreshold: 3
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          imagePullPolicy: IfNotPresent
          securityContext: {}
      restartPolicy: Always
      terminationGracePeriodSeconds: 30
      dnsPolicy: ClusterFirst
      serviceAccountName: apisix-dashboard
      serviceAccount: apisix-dashboard
      securityContext: {}
      schedulerName: default-scheduler
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 25%
      maxSurge: 25%
  revisionHistoryLimit: 10
  progressDeadlineSeconds: 600
---
#kind: Service   # commented out because the author created this Service manually in the Huawei Cloud CCE console, exposing the dashboard's port 9000 through an ELB
#apiVersion: v1
#metadata:
#  name: apisix-dashboard
#  namespace: default
#  labels:
#    app.kubernetes.io/instance: apisix-dashboard
#    app.kubernetes.io/name: apisix-dashboard
#    app.kubernetes.io/version: 2.9.0
#spec:
#  ports:
#    - name: http
#      protocol: TCP
#      port: 80
#      targetPort: http
#  selector:
#    app.kubernetes.io/instance: apisix-dashboard
#    app.kubernetes.io/name: apisix-dashboard
#  type: ClusterIP
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: apisix-dashboard
  namespace: default
  labels:
    app.kubernetes.io/instance: apisix-dashboard
    app.kubernetes.io/name: apisix-dashboard
    app.kubernetes.io/version: 2.9.0
data:
  conf.yaml: |-
    conf:
      listen:
        host: 0.0.0.0
        port: 9000
      etcd:
        endpoints:
          - 172.18.0.170:2379
      log:
        error_log:
          level: warn
          file_path: /dev/stderr
        access_log:
          file_path: /dev/stdout
    authentication:
      secret: secret
      expire_time: 3600
      users:
        - username: admin
          password: admin
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: apisix-dashboard
  namespace: default
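
Since the dashboard Service above is commented out (the author exposes port 9000 through an ELB instead), a quick local check is still possible with a port-forward; a sketch:

kubectl port-forward deployment/apisix-dashboard 9000:9000
# Then open http://127.0.0.1:9000 and log in with the admin/admin account from conf.yaml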

Public cloud resources used in this article: ELB and SFS shared storage. Comments pointing out any shortcomings are welcome.
