
Deploying Airflow v2.6.0 on Kubernetes (Rancher)

This post walks through deploying Airflow v2.6.0 on a Kubernetes cluster managed by Rancher. Corrections and suggestions are welcome.

Prerequisites

  • A Kubernetes cluster managed by Rancher, with Alibaba Cloud NAS storage
  • A physical machine (to mount the PVCs: dags, plugins, and logs)
  • A MySQL database and Redis
  • A base image containing Airflow and its dependencies

This deployment runs Airflow on Kubernetes with the CeleryExecutor, not the KubernetesExecutor.
Building the base image
Dockerfile
The image is based on the official Airflow v2.6.0 slim image for Python 3.10.

FROM apache/airflow:slim-2.6.0-python3.10
USER root
EXPOSE 8080 5555 8793
COPY config/airflow.cfg /opt/airflow/airflow.cfg
RUN set -ex \
    && buildDeps=' \
        freetds-dev \
        libkrb5-dev \
        libsasl2-dev \
        libssl-dev \
        libffi-dev \
        libpq-dev \
        git \
        python3-dev \
        gcc \
        sasl2-bin \
        libsasl2-2 \
        libsasl2-modules \
    ' \
    && apt-get update -yqq \
    && apt-get upgrade -yqq \
    && apt-get install -yqq --no-install-recommends \
        $buildDeps \
        freetds-bin \
        build-essential \
        default-libmysqlclient-dev \
        apt-utils \
        curl \
        rsync \
        netcat \
        locales \
        procps \
        telnet

USER airflow
RUN pip install --no-cache-dir \
        celery \
        flower \
        pymysql \
        mysqlclient \
        redis \
        livy==0.6.0 \
        apache-airflow-providers-mysql \
        apache-airflow-providers-apache-hive

RUN airflow db init
# Keep the base image safe: the config file contains credentials, so remove it
# once the database initialization has run
RUN rm -f /opt/airflow/airflow.cfg

Build the base image and push it to the registry:

  • Building the Airflow base image also initializes the corresponding metadata database.
    Deployment code: https://github.com/itnoobzzy/EasyAirflow.git
    After cloning, enter the EasyAirflow project, create a logs directory, and run sudo chmod -R 777 logs.
    Because the metadata database is initialized during the image build, the config file must be edited first. Four changes are needed:
    • mv config/default_airflow.cfg config/airflow.cfg, then edit airflow.cfg
    • Change executor to CeleryExecutor
    • Change sql_alchemy_conn to point at the existing MySQL database; note that the connection driver must be mysql+pymysql
    • Change broker_url and result_backend: the broker communicates over Redis, and result_backend stores results in MySQL; note that result_backend must use the db+mysql driver
  • Run the build command and push the image to the registry
docker build -t airflow:2.6.0 .
docker tag airflow:2.6.0 itnoobzzy/airflow:v2.6.0-python3.10
docker push itnoobzzy/airflow:v2.6.0-python3.10
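
The four edits above might look like the following in config/airflow.cfg (the hosts, credentials, and database names here are placeholders, not values from the original deployment; depending on the config template, sql_alchemy_conn may sit under the deprecated [core] section instead of [database]):

```ini
[core]
executor = CeleryExecutor

[database]
sql_alchemy_conn = mysql+pymysql://airflow:<password>@<mysql-host>:3306/airflow

[celery]
broker_url = redis://<redis-host>:6379/0
result_backend = db+mysql://airflow:<password>@<mysql-host>:3306/airflow
```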

Deployment steps

  1. Create the namespace: airflow-v2

  2. Create the PVCs
    volumes.yaml: https://github.com/itnoobzzy/EasyAirflow/blob/main/scripts/k8s/volumes.yaml
    Importing it into Rancher creates three PVCs, storing dags, logs, and plugins respectively.
    Mount the PVCs onto a physical machine so that dags, logs, and plugins are easy to manage: open the PVC details and run the corresponding mount command (for example, to mount airflow-dags-pv onto the machine).
    After mounting, run df -h to verify that dags, logs, and plugins are all mounted correctly.
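
With Alibaba Cloud NAS backing the PVs, the mount command is an ordinary NFS mount; a sketch with a placeholder server address and export path (copy the real command from the PVC details page in Rancher):

```shell
# Placeholder NAS endpoint and paths; the real values come from the PVC details
sudo mkdir -p /data/app/k8s/EasyAirflow/dags
sudo mount -t nfs -o vers=3,nolock,proto=tcp \
    <nas-id>.cn-hangzhou.nas.aliyuncs.com:/airflow/dags \
    /data/app/k8s/EasyAirflow/dags
```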

  3. Create the ConfigMap
    configmap.yaml: https://github.com/itnoobzzy/EasyAirflow/blob/main/scripts/k8s/configmap.yaml
    Import the yaml file into Rancher.

  4. Create the Secret (optional)
    secret.yaml: https://github.com/itnoobzzy/EasyAirflow/blob/main/scripts/k8s/secret.yaml
    Import the yaml file into Rancher. Note that the database values in secret.yaml must be base64-encoded.
    Storing the database credentials in a Kubernetes Secret lets the Deployment yaml files read them through environment variables.
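
Encoding a value for the Secret is a one-liner (the connection string below is a placeholder):

```shell
# printf avoids encoding a trailing newline, which would silently corrupt the value
printf '%s' 'mysql+pymysql://airflow:<password>@<mysql-host>:3306/airflow' | base64 -w0
```

Decode with `base64 -d` to double-check a value before importing the yaml.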

  5. Create the scheduler Deployment
    scheduler-dp.yaml: https://github.com/itnoobzzy/EasyAirflow/blob/main/scripts/k8s/scheduler-dp.yaml

    kind: Deployment
    apiVersion: apps/v1
    metadata:
      name: airflow-scheduler
      namespace: airflow-v2
    spec:
      replicas: 1
      selector:
        matchLabels:
          tier: airflow
          component: scheduler
          release: v2.6.0
      template:
        metadata:
          labels:
            tier: airflow
            component: scheduler
            release: v2.6.0
          annotations:
            cluster-autoscaler.kubernetes.io/safe-to-evict: "true"
        spec:
          restartPolicy: Always
          terminationGracePeriodSeconds: 10
          containers:
            - name: scheduler
              image: itnoobzzy/airflow:v2.6.0-python3.10
              imagePullPolicy: IfNotPresent
              args: ["airflow", "scheduler"]
              env:
                - name: AIRFLOW__CORE__FERNET_KEY
                  value: cwmLHK76Sp9XclhLzHwCNXNiAr04OSMKQ--6WXRjmss=
                - name: AIRFLOW__CORE__EXECUTOR
                  value: CeleryExecutor
                - name: AIRFLOW__DATABASE__SQL_ALCHEMY_CONN
                  valueFrom:
                    secretKeyRef:
                      name: airflow-secrets
                      key: sql_alchemy_conn
                - name: AIRFLOW__CELERY__BROKER_URL
                  valueFrom:
                    secretKeyRef:
                      name: airflow-secrets
                      key: broker_url
                - name: AIRFLOW__CELERY__RESULT_BACKEND
                  valueFrom:
                    secretKeyRef:
                      name: airflow-secrets
                      key: result_backend
              volumeMounts:
                - name: logs-pv
                  mountPath: "/opt/airflow/logs"
                - name: dags-pv
                  mountPath: "/opt/airflow/dags"
                - name: plugins-pv
                  mountPath: "/opt/airflow/plugins"
                - name: config
                  mountPath: "/opt/airflow/airflow.cfg"
                  subPath: airflow.cfg
          volumes:
            - name: config
              configMap:
                name: airflow-configmap
            - name: logs-pv
              persistentVolumeClaim:
                claimName: airflow-logs-pvc
            - name: dags-pv
              persistentVolumeClaim:
                claimName: airflow-dags-pvc
            - name: plugins-pv
              persistentVolumeClaim:
                claimName: airflow-plugins-pvc
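
The Deployment above sets AIRFLOW__CORE__FERNET_KEY inline. If you reuse these manifests, generate your own key (and consider moving it into the airflow-secrets Secret alongside the connection strings). A Fernet key is 32 random bytes, url-safe base64-encoded, which the standard library can produce; the official Airflow docs use cryptography's Fernet.generate_key(), which does the same thing:

```python
import base64
import os

# A Fernet key is 32 random bytes, url-safe base64-encoded (44 characters)
key = base64.urlsafe_b64encode(os.urandom(32)).decode()
print(key)
```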
    
  6. Create the webserver Deployment and Service
    webserver.yaml: https://github.com/itnoobzzy/EasyAirflow/blob/main/scripts/k8s/webserver.yaml

kind: Deployment
apiVersion: apps/v1
metadata:
  name: airflow-webserver
  namespace: airflow-v2
spec:
  replicas: 1
  selector:
    matchLabels:
      tier: airflow
      component: webserver
      release: v2.6.0
  template:
    metadata:
      labels:
        tier: airflow
        component: webserver
        release: v2.6.0
      annotations:
        cluster-autoscaler.kubernetes.io/safe-to-evict: "true"
    spec:
      restartPolicy: Always
      terminationGracePeriodSeconds: 10
      containers:
        - name: webserver
          image: itnoobzzy/airflow:v2.6.0-python3.10
          imagePullPolicy: IfNotPresent
          args: ["airflow", "webserver"]
          env:
            - name: AIRFLOW__CORE__FERNET_KEY
              value: cwmLHK76Sp9XclhLzHwCNXNiAr04OSMKQ--6WXRjmss=
            - name: AIRFLOW__CORE__EXECUTOR
              value: CeleryExecutor
            - name: AIRFLOW__DATABASE__SQL_ALCHEMY_CONN
              valueFrom:
                secretKeyRef:
                  name: airflow-secrets
                  key: sql_alchemy_conn
            - name: AIRFLOW__CELERY__BROKER_URL
              valueFrom:
                secretKeyRef:
                  name: airflow-secrets
                  key: broker_url
            - name: AIRFLOW__CELERY__RESULT_BACKEND
              valueFrom:
                secretKeyRef:
                  name: airflow-secrets
                  key: result_backend
          volumeMounts:
            - name: logs-pv
              mountPath: "/opt/airflow/logs"
            - name: dags-pv
              mountPath: "/opt/airflow/dags"
            - name: plugins-pv
              mountPath: "/opt/airflow/plugins"
            - name: config
              mountPath: "/opt/airflow/airflow.cfg"
              subPath: airflow.cfg
      volumes:
        - name: config
          configMap:
            name: airflow-configmap
        - name: logs-pv
          persistentVolumeClaim:
            claimName: airflow-logs-pvc
        - name: dags-pv
          persistentVolumeClaim:
            claimName: airflow-dags-pvc
        - name: plugins-pv
          persistentVolumeClaim:
            claimName: airflow-plugins-pvc

---

apiVersion: v1
kind: Service
metadata:
  name: airflow-webserver-svc
spec:
  type: ClusterIP
  ports:
    - name: airflow-webserver
      port: 8080
      targetPort: 8080
      protocol: TCP
  selector:
    tier: airflow
    component: webserver
    release: v2.6.0
  7. Create the flower Deployment and Service
    flower.yaml: https://github.com/itnoobzzy/EasyAirflow/blob/main/scripts/k8s/flower.yaml

    kind: Deployment
    apiVersion: apps/v1
    metadata:
      name: airflow-flower
      namespace: airflow-v2
    spec:
      replicas: 1
      selector:
        matchLabels:
          tier: airflow
          component: flower
          release: v2.6.0
      template:
        metadata:
          labels:
            tier: airflow
            component: flower
            release: v2.6.0
          annotations:
            cluster-autoscaler.kubernetes.io/safe-to-evict: "true"
        spec:
          restartPolicy: Always
          terminationGracePeriodSeconds: 10
          containers:
            - name: flower
              image: itnoobzzy/airflow:v2.6.0-python3.10
              imagePullPolicy: IfNotPresent
              args: ["airflow", "celery", "flower"]
              env:
                - name: AIRFLOW__CORE__FERNET_KEY
                  value: cwmLHK76Sp9XclhLzHwCNXNiAr04OSMKQ--6WXRjmss=
                - name: AIRFLOW__CORE__EXECUTOR
                  value: CeleryExecutor
                - name: AIRFLOW__DATABASE__SQL_ALCHEMY_CONN
                  valueFrom:
                    secretKeyRef:
                      name: airflow-secrets
                      key: sql_alchemy_conn
                - name: AIRFLOW__CELERY__BROKER_URL
                  valueFrom:
                    secretKeyRef:
                      name: airflow-secrets
                      key: broker_url
                - name: AIRFLOW__CELERY__RESULT_BACKEND
                  valueFrom:
                    secretKeyRef:
                      name: airflow-secrets
                      key: result_backend
              volumeMounts:
                - name: logs-pv
                  mountPath: "/opt/airflow/logs"
                - name: dags-pv
                  mountPath: "/opt/airflow/dags"
                - name: plugins-pv
                  mountPath: "/opt/airflow/plugins"
                - name: config
                  mountPath: "/opt/airflow/airflow.cfg"
                  subPath: airflow.cfg
          volumes:
            - name: config
              configMap:
                name: airflow-configmap
            - name: logs-pv
              persistentVolumeClaim:
                claimName: airflow-logs-pvc
            - name: dags-pv
              persistentVolumeClaim:
                claimName: airflow-dags-pvc
            - name: plugins-pv
              persistentVolumeClaim:
                claimName: airflow-plugins-pvc
    
    ---
    
    apiVersion: v1
    kind: Service
    metadata:
      name: airflow-flower-svc
    spec:
      type: ClusterIP
      ports:
        - name: airflow-flower
          port: 5555
          targetPort: 5555
          protocol: TCP
      selector:
        tier: airflow
        component: flower
        release: v2.6.0
    
  8. Create the worker Deployment
    worker.yaml: https://github.com/itnoobzzy/EasyAirflow/blob/main/scripts/k8s/worker.yaml

    kind: Deployment
    apiVersion: apps/v1
    metadata:
      name: airflow-worker
      namespace: airflow-v2
    spec:
      replicas: 1
      selector:
        matchLabels:
          tier: airflow
          component: worker
          release: v2.6.0
      template:
        metadata:
          labels:
            tier: airflow
            component: worker
            release: v2.6.0
          annotations:
            cluster-autoscaler.kubernetes.io/safe-to-evict: "true"
        spec:
          restartPolicy: Always
          terminationGracePeriodSeconds: 10
          containers:
            - name: worker
              image: itnoobzzy/airflow:v2.6.0-python3.10
              imagePullPolicy: IfNotPresent
              args: ["airflow", "celery", "worker"]
              env:
                - name: AIRFLOW__CORE__FERNET_KEY
                  value: cwmLHK76Sp9XclhLzHwCNXNiAr04OSMKQ--6WXRjmss=
                - name: AIRFLOW__CORE__EXECUTOR
                  value: CeleryExecutor
                - name: AIRFLOW__DATABASE__SQL_ALCHEMY_CONN
                  valueFrom:
                    secretKeyRef:
                      name: airflow-secrets
                      key: sql_alchemy_conn
                - name: AIRFLOW__CELERY__BROKER_URL
                  valueFrom:
                    secretKeyRef:
                      name: airflow-secrets
                      key: broker_url
                - name: AIRFLOW__CELERY__RESULT_BACKEND
                  valueFrom:
                    secretKeyRef:
                      name: airflow-secrets
                      key: result_backend
              volumeMounts:
                - name: logs-pv
                  mountPath: "/opt/airflow/logs"
                - name: dags-pv
                  mountPath: "/opt/airflow/dags"
                - name: plugins-pv
                  mountPath: "/opt/airflow/plugins"
                - name: config
                  mountPath: "/opt/airflow/airflow.cfg"
                  subPath: airflow.cfg
          volumes:
            - name: config
              configMap:
                name: airflow-configmap
            - name: logs-pv
              persistentVolumeClaim:
                claimName: airflow-logs-pvc
            - name: dags-pv
              persistentVolumeClaim:
                claimName: airflow-dags-pvc
            - name: plugins-pv
              persistentVolumeClaim:
                claimName: airflow-plugins-pvc
    
  9. Create the Ingress for the webserver and flower
    ingress.yaml: https://github.com/itnoobzzy/EasyAirflow/blob/main/scripts/k8s/ingress.yaml

    ---
    
    apiVersion: extensions/v1beta1
    kind: Ingress
    metadata:
      name: airflow-ingress
    spec:
      rules:
        - host: airflow-webserver.akulaku.com
          http:
            paths:
            - path: /
              backend:
                serviceName: airflow-webserver-svc
                servicePort: 8080
        - host: airflow-flower.akulaku.com
          http:
            paths:
                - path: /
                  backend:
                    serviceName: airflow-flower-svc
                    servicePort: 5555
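
Note that the extensions/v1beta1 Ingress API used above was removed in Kubernetes v1.22. On newer clusters, the same rules would be written against networking.k8s.io/v1, roughly as follows (a sketch; the ingressClassName depends on your controller and is omitted here):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: airflow-ingress
spec:
  rules:
    - host: airflow-webserver.akulaku.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: airflow-webserver-svc
                port:
                  number: 8080
    - host: airflow-flower.akulaku.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: airflow-flower-svc
                port:
                  number: 5555
```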
    

Verification

After deployment, open http://airflow-webserver.akulaku.com/ in a browser to reach the webserver UI (note that /etc/hosts must be configured to resolve the domain). The initial admin username and password are both admin.
Open http://airflow-flower.akulaku.com/ to reach the flower UI.
The flower worker name here is the name of the worker Pod.
Trigger a DAG run and view the corresponding logs on the machine where the PVCs are mounted.
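
If there is no DNS record for the two hostnames, point them at the ingress controller's node IP in /etc/hosts (the IP is a placeholder):

```
<ingress-node-ip>  airflow-webserver.akulaku.com  airflow-flower.akulaku.com
```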

The directory /data/app/k8s/EasyAirflow/logs on the machine is where the content of the Kubernetes logs PVC was mounted earlier:

(base) [admin@data-landsat-test03 logs]$ view /data/app/k8s/EasyAirflow/logs/dag_id\=tutorial/run_id\=manual__2023-05-15T09\:27\:44.253944+00\:00/task_id\=sleep/attempt\=1.log 
[2023-05-15T09:27:47.187+0000] {taskinstance.py:1125} INFO - Dependencies all met for dep_context=non-requeueable deps ti=<TaskInstance: tutorial.sleep manual__2023-05-15T09:27:44.253944+00:00 [queued]>
[2023-05-15T09:27:47.195+0000] {taskinstance.py:1125} INFO - Dependencies all met for dep_context=requeueable deps ti=<TaskInstance: tutorial.sleep manual__2023-05-15T09:27:44.253944+00:00 [queued]>
[2023-05-15T09:27:47.195+0000] {taskinstance.py:1331} INFO - Starting attempt 1 of 4
[2023-05-15T09:27:47.206+0000] {taskinstance.py:1350} INFO - Executing <Task(BashOperator): sleep> on 2023-05-15 09:27:44.253944+00:00
[2023-05-15T09:27:47.209+0000] {standard_task_runner.py:57} INFO - Started process 71 to run task
[2023-05-15T09:27:47.213+0000] {standard_task_runner.py:84} INFO - Running: ['airflow', 'tasks', 'run', 'tutorial', 'sleep', 'manual__2023-05-15T09:27:44.253944+00:00', '--job-id', '75', '--raw', '--subdir', 'DAGS_FOLDER/tutorial.py', '--cfg-path', '/tmp/tmpy65q1a3h']
[2023-05-15T09:27:47.213+0000] {standard_task_runner.py:85} INFO - Job 75: Subtask sleep
[2023-05-15T09:27:47.260+0000] {task_command.py:410} INFO - Running <TaskInstance: tutorial.sleep manual__2023-05-15T09:27:44.253944+00:00 [running]> on host airflow-worker-6f9ffb7fb8-t6j9p
[2023-05-15T09:27:47.330+0000] {taskinstance.py:1568} INFO - Exporting env vars: AIRFLOW_CTX_DAG_EMAIL='airflow@example.com' AIRFLOW_CTX_DAG_OWNER='airflow' AIRFLOW_CTX_DAG_ID='tutorial' AIRFLOW_CTX_TASK_ID='sleep' AIRFLOW_CTX_EXECUTION_DATE='2023-05-15T09:27:44.253944+00:00' AIRFLOW_CTX_TRY_NUMBER='1' AIRFLOW_CTX_DAG_RUN_ID='manual__2023-05-15T09:27:44.253944+00:00'
[2023-05-15T09:27:47.331+0000] {subprocess.py:63} INFO - Tmp dir root location:
 /tmp
[2023-05-15T09:27:47.331+0000] {subprocess.py:75} INFO - Running command: ['/bin/bash', '-c', 'sleep 5']
[2023-05-15T09:27:47.355+0000] {subprocess.py:86} INFO - Output:
[2023-05-15T09:27:52.360+0000] {subprocess.py:97} INFO - Command exited with return code 0
[2023-05-15T09:27:52.383+0000] {taskinstance.py:1368} INFO - Marking task as SUCCESS. dag_id=tutorial, task_id=sleep, execution_date=20230515T092744, start_date=20230515T092747, end_date=20230515T092752
[2023-05-15T09:27:52.426+0000] {local_task_job_runner.py:232} INFO - Task exited with return code 0
[2023-05-15T09:27:52.440+0000] {taskinstance.py:2674} INFO - 0 downstream tasks scheduled from follow-on schedule check

Summary

This deployment runs Airflow on Kubernetes without the KubernetesExecutor, so it cannot automatically stop Pods once tasks finish to cut costs.
However, Airflow mostly runs batch jobs concentrated in a fixed time window. In our case, Airflow runs a large volume of offline jobs at night, so some worker Pods can be stopped during the day and scaled back up at night as the workload requires.
There are a few things to watch when starting and stopping worker Pods:

  • Can the start/stop be automated, stopping worker Pods at a set time during the day and starting them again at night?
  • The stop must be graceful: before shutdown, wait for the worker's in-flight tasks to finish (or kill the task processes after some maximum grace period), and make sure no new tasks are routed to a Pod that is about to stop.
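
Both points map onto standard Kubernetes behavior (a sketch, not part of the original deployment): on scale-down each worker Pod receives SIGTERM, which a Celery worker treats as a warm shutdown, accepting no new tasks and finishing in-flight ones, until Kubernetes force-kills the Pod after terminationGracePeriodSeconds — so the 10 s used in the manifests above should be raised to the longest acceptable task drain time:

```shell
# Day: drain the workers; SIGTERM triggers Celery's warm shutdown
kubectl -n airflow-v2 scale deployment airflow-worker --replicas=0

# Night: bring workers back before the batch window
kubectl -n airflow-v2 scale deployment airflow-worker --replicas=3

# The two commands can be automated with a Kubernetes CronJob or plain crontab
```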

后邊針對(duì)上邊所說(shuō)的問題進(jìn)行研究,一旦發(fā)現(xiàn)好的解決方法和步驟,將與大家一起分享~文章來(lái)源地址http://www.zghlxwxcb.cn/news/detail-491342.html
