
Lesson 12: Mastering the Ingress-Nginx Controller on K8s: A Production Configuration Guide


------> Course videos are also shared on Toutiao and Bilibili

Hi everyone, I'm Boge (博哥愛運維). This lesson covers Ingress, the traffic entry point of K8s. As the public-facing entry for your services, its importance is self-evident, so please read carefully and follow along with the hands-on steps to understand it.

Ingress Basics

In a Kubernetes cluster, Ingress is the access point that exposes in-cluster services to the outside world, and it carries almost all of the traffic destined for those services. Ingress is a Kubernetes resource object that manages how external clients access services inside the cluster. You can configure different forwarding rules on an Ingress resource so that requests matching different rules reach the backend Pods of different Services.

An Ingress resource only describes rules for HTTP traffic; it cannot express advanced features such as load-balancing algorithms or Session Affinity. Those features are configured on the Ingress Controller instead.
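With ingress-nginx, for example, such controller-level features are typically switched on through annotations on the Ingress object rather than in its spec. A minimal sketch (the cookie name here is arbitrary):

```yaml
metadata:
  annotations:
    # cookie-based session affinity is implemented by the controller,
    # not expressible in the Ingress spec itself
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/session-cookie-name: "route"
```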

How an Ingress Controller Works

For Ingress resources to take effect, the cluster must run an Ingress Controller that interprets the Ingress forwarding rules. When the Ingress Controller receives a request, it matches it against the Ingress rules and forwards it to a backend Service; the Service forwards it to a Pod, which finally handles the request. In Kubernetes, Service, Ingress and Ingress Controller relate to each other as follows:

  • A Service is an abstraction over real backend workloads; one Service can represent multiple identical backends.
  • An Ingress is a set of reverse-proxy rules that decide which Service an HTTP/HTTPS request should be forwarded to, e.g. routing to different Services based on the request's Host and URL path.
  • An Ingress Controller is the reverse-proxy program that interprets those rules. Whenever an Ingress is created, updated or deleted, the Ingress Controller refreshes its forwarding rules accordingly, and forwards incoming requests to the matching Service.
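The second bullet translates directly into an Ingress manifest. A sketch with made-up hosts and Service names:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo
spec:
  ingressClassName: nginx
  rules:
  - host: api.example.com        # routing by Host header...
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: api-svc
            port:
              number: 80
  - host: web.example.com        # ...and by URL path
    http:
      paths:
      - path: /static
        pathType: Prefix
        backend:
          service:
            name: static-svc
            port:
              number: 80
```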

The Ingress Controller watches Ingress resources through the API Server, dynamically generates the configuration file the load balancer needs (e.g. nginx.conf for Nginx), and then reloads the load balancer (e.g. running nginx -s reload) to apply the new routing rules.

This lesson uses a lot of YAML, and Toutiao tends to mangle the formatting, so I keep a copy of the notes on my GitHub; you can copy the YAML manifests from there to create the services:

https://github.com/bogeit/LearnK8s/blob/main/2023/%E7%AC%AC12%E5%85%B3%20%20%E7%94%9F%E4%BA%A7%E7%8E%AF%E5%A2%83%E4%B8%8B%E7%9A%84%E6%B5%81%E9%87%8F%E5%85%A5%E5%8F%A3%E6%8E%A7%E5%88%B6%E5%99%A8Ingress-nginx-controller.md

Earlier we learned to reach Pod resources through a Service, and also to expose them outside the K8s cluster by switching the Service type to NodePort and mapping a public IP's port to it, but that is not a very elegant approach.
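For reference, the NodePort approach described above looks roughly like this (a sketch; the app name and port are made up):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: NodePort        # every node opens this port on its own IP
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 80
    nodePort: 30080     # must be in the cluster's NodePort range (default 30000-32767)
```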

Normally, Services and Pods are reachable only from within the cluster network; any traffic that hits an edge router is dropped or forwarded elsewhere.
Conceptually, this looks like:

    internet
        |
  ------------
  [ Services ]

Alternatively, a LoadBalancer Service can expose external traffic, but in real production this is inconvenient: every service needs its own load balancer and its own public IP.

That is why we use Ingress here: a single public IP can serve every service on the K8s cluster. Ingress works at layer 7 (HTTP) and routes each request to the right service based on its host name and path, as shown below:

(figure: Ingress layer-7 routing by host name and path)

Ingress is a set of rules that allow inbound connections to reach cluster services, i.e. forwarding rules sitting between the physical network and the cluster's Services.
In effect it implements L4/L7 load balancing.
Note: Ingress does not forward external traffic through the Service itself; it only uses the Service to look up the corresponding Endpoints, discover the Pods, and forward to them directly:

   
    internet
        |
   [ Ingress ]   ---> [ Services ] ---> [ Endpoint ]
   --|-----|--                                 |
   [ Pod,pod,...... ]<-------------------------|

To use Ingress on K8s, we must deploy an Ingress-controller; Ingress only works while a controller is running in the cluster. There are many Ingress controllers, such as Traefik, but here we use ingress-nginx, which is built on OpenResty (Nginx plus Lua-based rules).

Now the key part. As I keep stressing in this course, everything here comes from production experience, so the ingress-nginx we use is not the plain community build: it is Alibaba Cloud's fork of the community version, hardened under very large production traffic and heavily customized for production, basically ready to use out of the box. Below is an introduction to aliyun-ingress-controller:

The excerpt below covers only the latest part; for more, see the official docs: https://help.aliyun.com/zh/ack/product-overview/nginx-ingress-controller#title-ek8-hx4-hlm

Component introduction
Ingress basics
In a Kubernetes cluster, Ingress is the access point through which in-cluster services are exposed externally, carrying almost all traffic to those services. Ingress is a Kubernetes resource object for managing external access to services inside the cluster. You can configure different forwarding rules on an Ingress resource so that requests matching different rules reach the backend Pods behind different Services.

How the Nginx Ingress Controller works
For Nginx Ingress resources to take effect, the cluster must run an Nginx Ingress Controller that interprets the Nginx Ingress forwarding rules. When the controller receives a request, it matches it against those rules and forwards it to a Pod backing the corresponding Service, and the Pod handles the request. In Kubernetes, Service, Nginx Ingress and Nginx Ingress Controller relate as follows:

A Service is an abstraction over real backend workloads; one Service can represent multiple identical backends.

An Nginx Ingress is a set of reverse-proxy rules specifying which Service's Pods an HTTP/HTTPS request should be forwarded to, e.g. routing requests to the Pods of different Services based on the request's Host and URL path.

The Nginx Ingress Controller is a component in the Kubernetes cluster that interprets those reverse-proxy rules. Whenever an Nginx Ingress is created, updated or deleted, the controller refreshes its forwarding rules accordingly, and forwards incoming requests to the Pods of the matching Service.



Change log
October 2023

Version: v1.9.3-aliyun.1
Image: registry-cn-hangzhou.ack.aliyuncs.com/acs/aliyun-ingress-controller:v1.9.3-aliyun.1
Changed on: October 24, 2023

Changes
Important:
For security reasons, starting with this version the component disables all snippet annotations (such as nginx.ingress.kubernetes.io/configuration-snippet) by default.

Given the security and stability risks, enabling snippet annotations is not recommended. If you need them, assess the risks first, then enable them manually by adding allow-snippet-annotations: "true" to the ConfigMap kube-system/nginx-configuration.
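Concretely, enabling it comes down to adding that one key to the ConfigMap, something like:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configuration
  namespace: kube-system
data:
  # re-enables snippet annotations; assess the security risk before doing this
  allow-snippet-annotations: "true"
```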

Adding snippets via annotations is disabled by default.

Added the --enable-annotation-validation flag; annotation content validation is on by default, mitigating CVE-2023-5044.

Fixed CVE-2023-44487.


Impact

Upgrade during off-peak business hours; established connections may be briefly interrupted during the change.

aliyun-ingress-controller has one very important change: it supports dynamic updates of the routing configuration. Anyone who has used Nginx knows that after editing its configuration you must run nginx -s reload for the change to take effect, and the same applies on K8s. But a cluster runs a great many services, so its configuration changes very frequently; without dynamic updates, frequent Nginx reloads under high churn cause noticeable request problems:

  1. QPS jitter and some failed requests
  2. Long-lived connections are frequently dropped
  3. Many Nginx worker processes stuck in "shutting down", which inflates memory usage

For a detailed analysis of the mechanism, see this article: https://developer.aliyun.com/article/692732

Let's deploy aliyun-ingress-controller. Below is the exact YAML we use in production; save it as aliyun-ingress-nginx.yaml and we are ready to deploy:

Let's walk through each part of the YAML configuration below.

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: ingress-nginx
  namespace: kube-system

---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.9.3
  name: ingress-nginx
  namespace: kube-system
rules:
- apiGroups:
  - ""
  resources:
  - namespaces
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - configmaps
  - pods
  - secrets
  - endpoints
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - services
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - networking.k8s.io
  resources:
  - ingresses
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - networking.k8s.io
  resources:
  - ingresses/status
  verbs:
  - update
- apiGroups:
  - networking.k8s.io
  resources:
  - ingressclasses
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resourceNames:
  - ingress-controller-leader-nginx
  resources:
  - configmaps
  verbs:
  - get
  - update
- apiGroups:
  - ""
  resources:
  - configmaps
  verbs:
  - create
- apiGroups:
  - coordination.k8s.io
  resourceNames:
  - ingress-controller-leader-nginx
  resources:
  - leases
  verbs:
  - get
  - update
- apiGroups:
  - coordination.k8s.io
  resources:
  - leases
  verbs:
  - create
- apiGroups:
  - ""
  resources:
  - events
  verbs:
  - create
  - patch
- apiGroups:
  - discovery.k8s.io
  resources:
  - endpointslices
  verbs:
  - list
  - watch
  - get
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.9.3
  name: ingress-nginx
rules:
- apiGroups:
  - ""
  resources:
  - configmaps
  - endpoints
  - nodes
  - pods
  - secrets
  - namespaces
  verbs:
  - list
  - watch
- apiGroups:
  - coordination.k8s.io
  resources:
  - leases
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - services
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - networking.k8s.io
  resources:
  - ingresses
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - events
  verbs:
  - create
  - patch
- apiGroups:
  - networking.k8s.io
  resources:
  - ingresses/status
  verbs:
  - update
- apiGroups:
  - networking.k8s.io
  resources:
  - ingressclasses
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - discovery.k8s.io
  resources:
  - endpointslices
  verbs:
  - list
  - watch
  - get
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.9.3
  name: ingress-nginx
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: ingress-nginx
subjects:
- kind: ServiceAccount
  name: ingress-nginx
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.9.3
  name: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: ingress-nginx
subjects:
- kind: ServiceAccount
  name: ingress-nginx
  namespace: kube-system


# https://www.cnblogs.com/dudu/p/12197646.html
#---
#apiVersion: monitoring.coreos.com/v1
#kind: ServiceMonitor
#metadata:
#  labels:
#    app: ingress-nginx
#  name: nginx-ingress-scraping
#  namespace: kube-system
#spec:
#  endpoints:
#  - interval: 30s
#    path: /metrics
#    port: metrics
#  jobLabel: app
#  namespaceSelector:
#    matchNames:
#    - ingress-nginx
#  selector:
#    matchLabels:
#      app: ingress-nginx

---

apiVersion: v1
kind: Service
metadata:
  labels:
    app: ingress-nginx
  name: nginx-ingress-lb
  namespace: kube-system
spec:
  # DaemonSet need:
  # ----------------
  type: ClusterIP
  # ----------------
  # Deployment need:
  # ----------------
#  type: NodePort
  # ----------------
  ports:
  - name: http
    port: 80
    targetPort: 80
    protocol: TCP
  - name: https
    port: 443
    targetPort: 443
    protocol: TCP
  - name: metrics
    port: 10254
    protocol: TCP
    targetPort: 10254
  selector:
    app: ingress-nginx

---
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller-admission
  namespace: kube-system
spec:
  type: ClusterIP
  ports:
    - name: https-webhook
      port: 443
      targetPort: 8443
  selector:
    app: ingress-nginx

---
# all configmaps means:
# https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/nginx-configuration/configmap.md

kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-configuration
  namespace: kube-system
  labels:
    app: ingress-nginx
data:
  allow-snippet-annotations: "true"
  allow-backend-server-header: "true"
  disable-access-log: "false"
  enable-underscores-in-headers: "true"
  generate-request-id: "true"
  ignore-invalid-headers: "true"
  keep-alive: "900"
  keep-alive-requests: "10000"
  large-client-header-buffers: 5 20k
  log-format-upstream: '{"@timestamp": "$time_iso8601","remote_addr": "$remote_addr","x-forward-for": "$proxy_add_x_forwarded_for","request_id": "$req_id","remote_user": "$remote_user","bytes_sent": $bytes_sent,"request_time": $request_time,"status": $status,"vhost": "$host","request_proto": "$server_protocol","path": "$uri","request_query": "$args","request_length": $request_length,"duration": $request_time,"method": "$request_method","http_referrer": "$http_referer","http_user_agent":  "$http_user_agent","upstream-sever":"$proxy_upstream_name","proxy_alternative_upstream_name":"$proxy_alternative_upstream_name","upstream_addr":"$upstream_addr","upstream_response_length":$upstream_response_length,"upstream_response_time":$upstream_response_time,"upstream_status":$upstream_status}'
  max-worker-connections: "65536"
  proxy-body-size: 20m
  proxy-connect-timeout: "10"
  proxy-read-timeout: "60"
  proxy-send-timeout: "60"
  reuse-port: "true"
  server-tokens: "false"
  ssl-redirect: "false"
  upstream-keepalive-connections: "300"
  upstream-keepalive-requests: "1000"
  upstream-keepalive-timeout: "900"
  worker-cpu-affinity: ""
  worker-processes: "1"
  http-redirect-code: "301"
  proxy-next-upstream: error timeout http_502
  ssl-ciphers: ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA
  ssl-protocols: TLSv1 TLSv1.1 TLSv1.2


---
kind: ConfigMap
apiVersion: v1
metadata:
  name: tcp-services
  namespace: kube-system
  labels:
    app: ingress-nginx

---
kind: ConfigMap
apiVersion: v1
metadata:
  name: udp-services
  namespace: kube-system
  labels:
    app: ingress-nginx

---
apiVersion: apps/v1
kind: DaemonSet
#kind: Deployment
metadata:
  name: nginx-ingress-controller
  namespace: kube-system
  labels:
    app: ingress-nginx
  annotations:
    component.revision: "2"
    component.version: 1.9.3
spec:
  # Deployment need:
  # ----------------
#  replicas: 1
  # ----------------
  selector:
    matchLabels:
      app: ingress-nginx
  template:
    metadata:
      labels:
        app: ingress-nginx
      annotations:
        prometheus.io/port: "10254"
        prometheus.io/scrape: "true"
    spec:
      # DaemonSet need:
      # ----------------
      hostNetwork: true
      # ----------------
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: app
                  operator: In
                  values:
                  - ingress-nginx
              topologyKey: kubernetes.io/hostname
            weight: 100
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: type
                operator: NotIn
                values:
                - virtual-kubelet
              - key: k8s.aliyun.com
                operator: NotIn
                values:
                - "true"
      containers:
      - args:
          - /nginx-ingress-controller
          - --election-id=ingress-controller-leader-nginx
          - --ingress-class=nginx
          - --watch-ingress-without-class
          - --controller-class=k8s.io/ingress-nginx
          - --configmap=$(POD_NAMESPACE)/nginx-configuration
          - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
          - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
          - --annotations-prefix=nginx.ingress.kubernetes.io
          - --publish-service=$(POD_NAMESPACE)/nginx-ingress-lb
          - --validating-webhook=:8443
          - --validating-webhook-certificate=/usr/local/certificates/cert
          - --validating-webhook-key=/usr/local/certificates/key
          - --enable-metrics=false
          - --v=2
        env:
          - name: POD_NAME
            valueFrom:
              fieldRef:
                fieldPath: metadata.name
          - name: POD_NAMESPACE
            valueFrom:
              fieldRef:
                fieldPath: metadata.namespace
          - name: LD_PRELOAD
            value: /usr/local/lib/libmimalloc.so
        image: registry-cn-hangzhou.ack.aliyuncs.com/acs/aliyun-ingress-controller:v1.9.3-aliyun.1
        imagePullPolicy: IfNotPresent
        lifecycle:
          preStop:
            exec:
              command:
                - /wait-shutdown
        livenessProbe:
          failureThreshold: 5
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
          initialDelaySeconds: 10
          periodSeconds: 10
          timeoutSeconds: 1
          successThreshold: 1
        name: nginx-ingress-controller
        ports:
          - name: http
            containerPort: 80
            protocol: TCP
          - name: https
            containerPort: 443
            protocol: TCP
          - name: webhook
            containerPort: 8443
            protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
          initialDelaySeconds: 10
          periodSeconds: 10
          timeoutSeconds: 1
          successThreshold: 1
#        resources:
#          limits:
#            cpu: 1
#            memory: 2G
#          requests:
#            cpu: 1
#            memory: 2G
        securityContext:
          allowPrivilegeEscalation: true
          capabilities:
            drop:
              - ALL
            add:
              - NET_BIND_SERVICE
          runAsUser: 101
          # if get 'mount: mounting rw on /proc/sys failed: Permission denied', use:
#          privileged: true
#          procMount: Default
#          runAsUser: 0
        volumeMounts:
        - name: webhook-cert
          mountPath: /usr/local/certificates/
          readOnly: true
        - mountPath: /etc/localtime
          name: localtime
          readOnly: true
      dnsPolicy: ClusterFirst
      initContainers:
      - command:
        - /bin/sh
        - -c
        - |
          if [ "$POD_IP" != "$HOST_IP" ]; then
          mount -o remount rw /proc/sys
          sysctl -w net.core.somaxconn=65535
          sysctl -w net.ipv4.ip_local_port_range="1024 65535"
          sysctl -w kernel.core_uses_pid=0
          fi
        env:
        - name: POD_IP
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: status.podIP
        - name: HOST_IP
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: status.hostIP
        image: registry.cn-shanghai.aliyuncs.com/acs/busybox:v1.29.2
        imagePullPolicy: IfNotPresent
        name: init-sysctl
        resources:
          limits:
            cpu: 100m
            memory: 70Mi
          requests:
            cpu: 100m
            memory: 70Mi
        securityContext:
          capabilities:
            add:
            - SYS_ADMIN
            drop:
            - ALL
          # if get 'mount: mounting rw on /proc/sys failed: Permission denied', use:
          privileged: true
          procMount: Default
          runAsUser: 0
      # choose node with set this label running
      # kubectl label node xx.xx.xx.xx boge/ingress-controller-ready=true
      # kubectl get node --show-labels
      # kubectl label node xx.xx.xx.xx boge/ingress-controller-ready-
      nodeSelector:
        boge/ingress-controller-ready: "true"
      priorityClassName: system-node-critical
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      serviceAccount: ingress-nginx
      serviceAccountName: ingress-nginx
      terminationGracePeriodSeconds: 300
      # kubectl taint nodes xx.xx.xx.xx boge/ingress-controller-ready="true":NoExecute
      # kubectl taint nodes xx.xx.xx.xx boge/ingress-controller-ready:NoExecute-
      tolerations:
      - operator: Exists
#      tolerations:
#      - effect: NoExecute
#        key: boge/ingress-controller-ready
#        operator: Equal
#        value: "true"
      volumes:
      - name: webhook-cert
        secret:
          defaultMode: 420
          secretName: ingress-nginx-admission
      - hostPath:
          path: /etc/localtime
          type: File
        name: localtime

---
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: nginx-master
  namespace: kube-system
  annotations:
   ingressclass.kubernetes.io/is-default-class: "true"
spec:
  controller: k8s.io/ingress-nginx


---
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  labels:
    name: ingress-nginx
  name: ingress-nginx-admission
webhooks:
  - name: validate.nginx.ingress.kubernetes.io
    matchPolicy: Equivalent
    rules:
      - apiGroups:
          - networking.k8s.io
        apiVersions:
          - v1
        operations:
          - CREATE
          - UPDATE
        resources:
          - ingresses
    failurePolicy: Fail
    sideEffects: None
    admissionReviewVersions:
      - v1
    clientConfig:
      service:
        namespace: kube-system
        name: ingress-nginx-controller-admission
        path: /networking/v1/ingresses
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: ingress-nginx-admission
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: ingress-nginx-admission
rules:
  - apiGroups:
      - admissionregistration.k8s.io
    resources:
      - validatingwebhookconfigurations
    verbs:
      - get
      - update
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: ingress-nginx-admission
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: ingress-nginx-admission
subjects:
  - kind: ServiceAccount
    name: ingress-nginx-admission
    namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: ingress-nginx-admission
  namespace: kube-system
rules:
  - apiGroups:
      - ""
    resources:
      - secrets
    verbs:
      - get
      - create
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: ingress-nginx-admission
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: ingress-nginx-admission
subjects:
  - kind: ServiceAccount
    name: ingress-nginx-admission
    namespace: kube-system
---
apiVersion: batch/v1
kind: Job
metadata:
  name: ingress-nginx-admission-create
  labels:
    name: ingress-nginx
  namespace: kube-system
spec:
  template:
    metadata:
      name: ingress-nginx-admission-create
      labels:
        name: ingress-nginx
    spec:
      containers:
        - name: create
#          image: registry-vpc.cn-hangzhou.aliyuncs.com/acs/kube-webhook-certgen:v1.1.1
          image: registry.cn-beijing.aliyuncs.com/acs/kube-webhook-certgen:v1.5.1
          imagePullPolicy: IfNotPresent
          args:
            - create
            - --host=ingress-nginx-controller-admission,ingress-nginx-controller-admission.$(POD_NAMESPACE).svc
            - --namespace=$(POD_NAMESPACE)
            - --secret-name=ingress-nginx-admission
          env:
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
      restartPolicy: OnFailure
      serviceAccountName: ingress-nginx-admission
      securityContext:
        runAsNonRoot: true
        runAsUser: 2000
---
apiVersion: batch/v1
kind: Job
metadata:
  name: ingress-nginx-admission-patch
  labels:
    name: ingress-nginx
  namespace: kube-system
spec:
  template:
    metadata:
      name: ingress-nginx-admission-patch
      labels:
        name: ingress-nginx
    spec:
      containers:
        - name: patch
          image: registry.cn-hangzhou.aliyuncs.com/acs/kube-webhook-certgen:v1.1.1  # if error use this image
          imagePullPolicy: IfNotPresent
          args:
            - patch
            - --webhook-name=ingress-nginx-admission
            - --namespace=$(POD_NAMESPACE)
            - --patch-mutating=false
            - --secret-name=ingress-nginx-admission
            - --patch-failure-policy=Fail
          env:
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          securityContext:
            allowPrivilegeEscalation: false
      restartPolicy: OnFailure
      serviceAccountName: ingress-nginx-admission
      securityContext:
        runAsNonRoot: true
        runAsUser: 2000

---
## Deployment need for aliyun'k8s:
#apiVersion: v1
#kind: Service
#metadata:
#  annotations:
#    service.beta.kubernetes.io/alibaba-cloud-loadbalancer-id: "lb-xxxxxxxxxxxxxxxxx"
#    service.beta.kubernetes.io/alibaba-cloud-loadbalancer-force-override-listeners: "true"
#  labels:
#    app: nginx-ingress-lb
#  name: nginx-ingress-lb-boge
#  namespace: kube-system
#spec:
#  externalTrafficPolicy: Local
#  ports:
#  - name: http
#    port: 80
#    protocol: TCP
#    targetPort: 80
#  - name: https
#    port: 443
#    protocol: TCP
#    targetPort: 443
#  selector:
#    app: ingress-nginx
#  type: LoadBalancer




Let's deploy it:

# kubectl  apply -f aliyun-ingress-nginx.yaml 
namespace/ingress-nginx created
serviceaccount/nginx-ingress-controller created
clusterrole.rbac.authorization.k8s.io/nginx-ingress-controller created
clusterrolebinding.rbac.authorization.k8s.io/nginx-ingress-controller created
service/nginx-ingress-lb created
configmap/nginx-configuration created
configmap/tcp-services created
configmap/udp-services created
daemonset.apps/nginx-ingress-controller created

# Installed here as a DaemonSet resource
# A DaemonSet's YAML looks much like a Deployment's, but a Deployment can run multiple Pod replicas on each node, while a DaemonSet runs only one Pod per node
# Deploying ingress-nginx is a good opportunity to explain the DaemonSet resource

# Check the pods — nothing there. Why?
# kubectl -n kube-system get pod
Note the node selector in the YAML above: only nodes carrying the label I specified are allowed to have the Pod scheduled onto them
      nodeSelector:
        boge/ingress-controller-ready: "true"

# Now label the nodes
# kubectl label node 10.0.1.201 boge/ingress-controller-ready=true
node/10.0.1.201 labeled
# kubectl label node 10.0.1.202 boge/ingress-controller-ready=true
node/10.0.1.202 labeled

# The pods are then scheduled onto these two nodes and start up
# kubectl -n kube-system get pod -o wide
NAME                             READY   STATUS    RESTARTS   AGE    IP           NODE         NOMINATED NODE   READINESS GATES
nginx-ingress-controller-lchgr   1/1     Running   0          9m1s   10.0.1.202   10.0.1.202   <none>           <none>
nginx-ingress-controller-x87rp   1/1     Running   0          9m6s   10.0.1.201   10.0.1.201   <none>           <none>

Building on the Deployment and Service we learned earlier, let's create the corresponding resources for an nginx service, saved as nginx.yaml:

Note: remember to delete the resources from the earlier tests to avoid conflicts.

---
kind: Service
apiVersion: v1
metadata:
  name: new-nginx
spec:
  selector:
    app: new-nginx
  ports:
    - name: http-port
      port: 80
      protocol: TCP
      targetPort: 80

---
# Ingress configuration for newer k8s versions
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: new-nginx
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/force-ssl-redirect: "false"
    nginx.ingress.kubernetes.io/whitelist-source-range: 0.0.0.0/0
    nginx.ingress.kubernetes.io/configuration-snippet: |
      if ($host != 'www.boge.com' ) {
        rewrite ^ http://www.boge.com$request_uri permanent;
      }
spec:
  rules:
    - host: boge.com
      http:
        paths:
          - backend:
              service:
                name: new-nginx
                port:
                  number: 80
            path: /
            pathType: Prefix
    - host: m.boge.com
      http:
        paths:
          - backend:
              service:
                name: new-nginx
                port:
                  number: 80
            path: /
            pathType: Prefix
    - host: www.boge.com
      http:
        paths:
          - backend:
              service:
                name: new-nginx
                port:
                  number: 80
            path: /
            pathType: Prefix
#  tls:
#      - hosts:
#          - boge.com
#          - m.boge.com
#          - www.boge.com
#        secretName: boge-com-tls

# tls secret create command:
#   kubectl -n <namespace> create secret tls boge-com-tls --key boge-com.key --cert boge-com.csr

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: new-nginx
  labels:
    app: new-nginx
spec:
  replicas: 3  # adjust to the number of nodes
  selector:
    matchLabels:
      app: new-nginx
  template:
    metadata:
      labels:
        app: new-nginx
    spec:
      containers:
#--------------------------------------------------
      - name: new-nginx
        image: nginx:1.21.6
        env:
          - name: TZ
            value: Asia/Shanghai
        ports:
        - containerPort: 80
        volumeMounts:
          - name: html-files
            mountPath: "/usr/share/nginx/html"
#--------------------------------------------------
      - name: busybox
        image: registry.cn-hangzhou.aliyuncs.com/acs/busybox:v1.29.2
        args:
        - /bin/sh
        - -c
        - >
           while :; do
             if [ -f /html/index.html ];then
               echo "[$(date +%F\ %T)] ${MY_POD_NAMESPACE}-${MY_POD_NAME}-${MY_POD_IP}" > /html/index.html
               sleep 1
             else
               touch /html/index.html
             fi
           done
        env:
          - name: TZ
            value: Asia/Shanghai
          - name: MY_POD_NAME
            valueFrom:
              fieldRef:
                apiVersion: v1
                fieldPath: metadata.name
          - name: MY_POD_NAMESPACE
            valueFrom:
              fieldRef:
                apiVersion: v1
                fieldPath: metadata.namespace
          - name: MY_POD_IP
            valueFrom:
              fieldRef:
                apiVersion: v1
                fieldPath: status.podIP
        volumeMounts:
          - name: html-files
            mountPath: "/html"
          - mountPath: /etc/localtime
            name: tz-config

#--------------------------------------------------
      volumes:
        - name: html-files
          emptyDir:
            medium: Memory
            sizeLimit: 10Mi
        - name: tz-config
          hostPath:
            path: /usr/share/zoneinfo/Asia/Shanghai

---


Run it:

# kubectl apply -f nginx.yaml 

# check the created ingress resource
# kubectl get ingress
NAME        CLASS          HOSTS                              ADDRESS   PORTS   AGE
new-nginx   nginx-master   boge.com,m.boge.com,www.boge.com             80      8s


# On other nodes, add these local hosts entries to test the effect
10.0.1.201 boge.com m.boge.com www.boge.com

# the request succeeds
[root@node-2 ~]# curl www.boge.com


# Back on node 201, check the ingress-nginx log
# kubectl -n kube-system  logs --tail=1 nginx-ingress-controller-nblb9
Defaulted container "nginx-ingress-controller" out of: nginx-ingress-controller, init-sysctl (init)
{"@timestamp": "2023-11-22T22:13:14+08:00","remote_addr": "10.0.1.1","x-forward-for": "10.0.1.1","request_id": "f21a1e569751fb55299ef5f1b039852d","remote_user": "-","bytes_sent": 250,"request_time": 0.003,"status": 200,"vhost": "www.boge.com","request_proto": "HTTP/2.0","path": "/","request_query": "-","request_length": 439,"duration": 0.003,"method": "GET","http_referrer": "-","http_user_agent":  "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/119.0.0.0 Safari/537.36","upstream-sever":"default-new-nginx-80","proxy_alternative_upstream_name":"","upstream_addr":"172.20.247.18:80","upstream_response_length":71,"upstream_response_time":0.003,"upstream_status":200}

HTTP transmits data in plain text, which is insecure, so in production we usually configure encrypted HTTPS. Let's now practice Ingress TLS configuration.

# First, self-sign an HTTPS certificate

#1. Generate the private key
# openssl genrsa -out boge.key 2048
Generating RSA private key, 2048 bit long modulus
..............................................................................................+++
.....+++
e is 65537 (0x10001)

#2. Generate the TLS certificate from the key (note: I use *.boge.com here, which produces a wildcard certificate, so any third-level subdomain added later can reuse it)
# openssl req -new -x509 -key boge.key -out boge.csr -days 360 -subj /CN=*.boge.com

# check the result
# ll
total 8
-rw-r--r-- 1 root root 1099 Nov 27 11:44 boge.csr
-rw-r--r-- 1 root root 1679 Nov 27 11:43 boge.key
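With these two files you can create the TLS secret and then uncomment the tls block of the Ingress above to use it (note the certificate above was written to a file named boge.csr, even though it actually contains the X.509 certificate, so that is what gets passed as --cert):

```yaml
# kubectl -n default create secret tls boge-com-tls --key boge.key --cert boge.csr
spec:
  tls:
  - hosts:
    - boge.com
    - m.boge.com
    - www.boge.com
    secretName: boge-com-tls   # the kubernetes.io/tls secret created above
```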

In production, if you run your own datacenter, you usually run the ingress-nginx pods on at least two nodes, and it is worth putting load-balancing software in front of those nodes for scheduling and high availability; here we use haproxy + keepalived. If your production environment is in the cloud, say Alibaba Cloud, you only need to buy an SLB load balancer, add the nodes running the ingress-nginx pods to its backend, and point your domain names at the SLB's public IP. The communication architecture of our binary-deployed K8s cluster currently looks like this:

(figure: cluster communication architecture)

Note that each node already runs kube-lb, a slimmed-down nginx doing layer-4 load balancing to forward apiserver requests. So we only need to pick two nodes, deploy keepalived, and reconfigure kube-lb to create a VIP for HA. For the details, follow this document:

https://github.com/easzlab/kubeasz/blob/master/docs/setup/ex-lb.md

Having made it this far, you should feel a sense of accomplishment. Now that we know what Ingress can do for us, let's go back and understand how it works — that makes your grasp of Ingress much more solid, and it's also how I usually learn.

As the figure below shows, the client does a DNS lookup for nginx.boge.com, and the DNS server (here, a local hosts entry) returns the Ingress controller's IP — our VIP, 10.0.1.222. The client then sends an HTTP request to the Ingress controller with nginx.boge.com in the Host header. From that header the controller determines which service the client wants, looks up the concrete pod IPs via the Endpoints object associated with that service, and forwards the client's request to one of those pods.

(Figure: Ingress request flow — client DNS lookup, Host-header routing, Endpoints lookup, pod)
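The "local hosts" DNS step can be simulated without editing /etc/hosts by using curl's `--resolve` flag, which pins a hostname to an IP exactly the way a hosts entry does. A sketch — a throwaway local server stands in for the controller here, just so something answers:

```shell
# Serve the current directory on 127.0.0.1:8080 as a stand-in backend,
# then pin nginx.boge.com to that IP with --resolve (hosts-file equivalent).
python3 -m http.server 8080 --bind 127.0.0.1 >/dev/null 2>&1 &
srv=$!
sleep 1
curl -s -o /dev/null -w '%{http_code}\n' --resolve nginx.boge.com:8080:127.0.0.1 http://nginx.boge.com:8080/
kill $srv
```

Against the real setup you would pin to the VIP instead: `--resolve nginx.boge.com:80:10.0.1.222`.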

In production, one Ingress normally maps to one Service, but in some special cases you need to reuse a single Ingress to reach multiple services. Let's try that out.

Create another nginx deployment and service — note that the names are changed so they don't conflict with the earlier ones.

# Finally, execute these commands:
# kubectl create deployment old-nginx --image=nginx:1.21.6 --replicas=1
# kubectl expose deployment old-nginx --port=80 --target-port=80

# Once everything is deployed, test with:
#  curl -H "Host: www.boge.com" -H "foo: bar" http://10.0.0.201
#  curl -H "Host: www.boge.com"  http://10.0.0.201

# 1 pod (2 containers) + ingress-nginx L7 routing + 2 services
---
# namespace
apiVersion: v1
kind: Namespace
metadata:
  name: test-nginx

---
# SVC
kind: Service
apiVersion: v1
metadata:
  name: new-nginx
  namespace: test-nginx
spec:
  selector:
    app: new-nginx
  ports:
    - name: http-port
      port: 80
      protocol: TCP
      targetPort: 80
#      nodePort: 30088
#  type: NodePort

---
# ingress-nginx L7
#  https://yq.aliyun.com/articles/594019
#  https://help.aliyun.com/document_detail/200941.html?spm=a2c4g.11186623.6.787.254168fapBIi0A
#  KAE multi-parameter match example: query("boge_id", /^aaa$|^bbb$/)
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: test-nginx
  namespace: test-nginx
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    # Query-parameter matching (an Alibaba Cloud annotation, see the links above):
    # only requests like curl "www.boge.com?boge_id=aaa" are routed to new-nginx
    nginx.ingress.kubernetes.io/service-match: |
      new-nginx: query("boge_id", /^aaa$|^bbb$/)
    # Header matching: only requests whose "foo" header matches ^bar$ are routed
    # to the new-version service new-nginx
    #nginx.ingress.kubernetes.io/service-match: |
    #  new-nginx: header("foo", /^bar$/)
    # On top of the match rule above, route only 50% of the matching traffic
    # to the new-version service new-nginx
    #nginx.ingress.kubernetes.io/service-weight: |
    #    new-nginx: 50, old-nginx: 50
spec:
  rules:
    - host: www.boge.com
      http:
        paths:
          - backend:
              service:
                name: new-nginx  # new-version service
                port:
                  number: 80
            path: /
            pathType: Prefix
          - backend:
              service:
                name: old-nginx  # old-version service
                port:
                  number: 80
            path: /
            pathType: Prefix

---
# deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: new-nginx
  namespace: test-nginx
  labels:
    app: new-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: new-nginx
  template:
    metadata:
      labels:
        app: new-nginx
    spec:
      containers:
#--------------------------------------------------
      - name: new-nginx
        image: nginx:1.21.6
        ports:
        - containerPort: 80
        volumeMounts:
          - name: html-files
            mountPath: "/usr/share/nginx/html"
#--------------------------------------------------
      - name: busybox
        image: registry.cn-hangzhou.aliyuncs.com/acs/busybox:v1.29.2
        args:
        - /bin/sh
        - -c
        - >
           while :; do
             if [ -f /html/index.html ];then
               echo "[$(date +%F\ %T)] hello" > /html/index.html
               sleep 1
             else
               touch /html/index.html
             fi
           done
        volumeMounts:
          - name: html-files
            mountPath: "/html"

#--------------------------------------------------
      volumes:
        - name: html-files
          # emptyDir: {} would use the disk of the node running the pod
          emptyDir:
            medium: Memory    # back the volume with the node's memory (tmpfs)
            # medium: ""      # the default: temporary disk space on the node
            # if the emptyDir grows past 1Gi, the pod is Evicted (in effect restarted)
            sizeLimit: 1Gi
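The service-match annotation above sends only requests whose boge_id query value matches `^aaa$|^bbb$` to new-nginx; everything else falls through to old-nginx. The regex itself can be sanity-checked locally with `grep -E` before you rely on the controller to evaluate it:

```shell
# Values matching ^aaa$|^bbb$ would be routed to new-nginx; the rest to old-nginx.
for v in aaa bbb aaab ccc; do
  if printf '%s\n' "$v" | grep -Eq '^aaa$|^bbb$'; then
    echo "$v -> new-nginx"
  else
    echo "$v -> old-nginx"
  fi
done
# aaa -> new-nginx
# bbb -> new-nginx
# aaab -> old-nginx
# ccc -> old-nginx
```

Note the anchors: without `^` and `$`, a value like aaab would also match, which is exactly the kind of surprise this local check catches.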

