Deploying a Hadoop Cluster on k8s with NFS as Storage

This article describes how to configure a Hadoop cluster on k8s with NFS as the storage backend. Hopefully it is useful as a reference; if anything is wrong or incomplete, feedback is welcome.

Contents

I. Introduction

II. NFS Service & nfs-provisioner Setup

1. Install the NFS client on every K8S node

2. Install and configure the NFS server

3. Use nfs-provisioner to create PVs dynamically (file already patched)

III. Hadoop Configuration Files

1. # cat hadoop.yaml

2. # cat hadoop-datanode.yaml

3. # cat yarn-node.yaml

4. Apply the files and verify

5. Connectivity check

IV. Errors & Fixes

1. NFS error: selfLink was empty

2. NFS error: waiting for a volume to be created


I. Introduction

The base environment is a K8S cluster deployed with kubeasz (https://github.com/easzlab/kubeasz).
K8S setup example: https://blog.csdn.net/zhangxueleishamo/article/details/108670578
NFS is used as the storage backend.

II. NFS Service & nfs-provisioner Setup

1. Install the NFS client on every K8S node

yum -y install nfs-utils
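
To confirm a node can actually reach the export, you can query the server from the client side (the server address 10.2.1.190 is the one used by the provisioner config later in this article):

showmount -e 10.2.1.190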

2. Install and configure the NFS server

yum -y install nfs-utils rpcbind nfs-server

# cat /etc/exports
/data/hadoop	*(rw,no_root_squash,no_all_squash,sync)
###export directory and permissions; details are out of scope here

systemctl start rpcbind 
systemctl enable rpcbind

systemctl start nfs
systemctl enable nfs
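
After starting the services, verify that the export is actually published on the server:

exportfs -rv
showmount -e localhost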

3. Use nfs-provisioner to create PVs dynamically (file already patched)

# cat nfs-provisioner.yaml 

apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  namespace: dev
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["list", "watch", "create", "update", "patch"]
  - apiGroups: [""]
    resources: ["services"]
    verbs: ["get"]
  - apiGroups: ["extensions"]
    resources: ["podsecuritypolicies"]
    resourceNames: ["nfs-provisioner"]
    verbs: ["use"]
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: dev 
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io

---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: nfs-client-provisioner
  namespace: dev
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          #image: quay.io/external_storage/nfs-client-provisioner:latest
          image: jmgao1983/nfs-client-provisioner:latest
          imagePullPolicy: IfNotPresent
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              # provisioner name; referenced by the StorageClass below
              value: nfs-storage
            - name: NFS_SERVER
              value: 10.2.1.190
            - name: NFS_PATH
              value: /data/hadoop
      volumes:
        - name: nfs-client-root
          nfs:
            server: 10.2.1.190
            path: /data/hadoop

---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-storage
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: nfs-storage
volumeBindingMode: Immediate
reclaimPolicy: Delete


###apply the file and check the sa & sc ###

# kubectl apply -f nfs-provisioner.yaml 

# kubectl get sa,sc  -n dev 
NAME                                    SECRETS   AGE
serviceaccount/nfs-client-provisioner   1         47m

NAME                                                PROVISIONER   RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
storageclass.storage.k8s.io/nfs-storage (default)   nfs-storage   Delete          Immediate           false                  45m
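
Optionally, before deploying Hadoop, you can confirm that dynamic provisioning works with a throwaway claim; the name test-claim below is made up for this check:

# cat test-claim.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-claim
  namespace: dev
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 100Mi
  storageClassName: nfs-storage

# kubectl apply -f test-claim.yaml
# kubectl get pvc test-claim -n dev
# kubectl delete -f test-claim.yaml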

III. Hadoop Configuration Files

1. # cat hadoop.yaml

apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-hadoop-conf
  namespace: dev
data:
  HDFS_MASTER_SERVICE: hadoop-hdfs-master
  HDOOP_YARN_MASTER: hadoop-yarn-master
---
apiVersion: v1
kind: Service
metadata:
  name: hadoop-hdfs-master
  namespace: dev
spec:
  type: NodePort
  selector:
    name: hdfs-master
  ports:
    - name: rpc
      port: 9000
      targetPort: 9000
    - name: http
      port: 50070
      targetPort: 50070
      nodePort: 32007
---
apiVersion: v1
kind: Service
metadata:
  name: hadoop-yarn-master
  namespace: dev
spec:
  type: NodePort
  selector:
    name: yarn-master
  ports:
     - name: "8030"
       port: 8030
     - name: "8031"
       port: 8031
     - name: "8032"
       port: 8032
     - name: http
       port: 8088
       targetPort: 8088
       nodePort: 32088
---
apiVersion: v1
kind: Service
metadata:
  name: yarn-node
  namespace: dev
spec:
  clusterIP: None
  selector:
    name: yarn-node
  ports:
     - port: 8040
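
Note that yarn-node is a headless Service (clusterIP: None), so each pod of the yarn-node StatefulSet defined later gets a stable DNS record. A quick way to check from any pod in the cluster, assuming the default cluster.local domain:

nslookup yarn-node-0.yarn-node.dev.svc.cluster.local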

2. # cat hadoop-datanode.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: hdfs-master
  namespace: dev
spec:
  replicas: 1
  selector:
    matchLabels:
      name: hdfs-master
  template:
    metadata:
      labels:
        name: hdfs-master
    spec:
      containers:
        - name: hdfs-master
          image: kubeguide/hadoop:latest
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 9000
            - containerPort: 50070
          env:
            - name: HADOOP_NODE_TYPE
              value: namenode
            - name: HDFS_MASTER_SERVICE
              valueFrom:
                configMapKeyRef:
                  name: kube-hadoop-conf
                  key: HDFS_MASTER_SERVICE
            - name: HDOOP_YARN_MASTER
              valueFrom:
                configMapKeyRef:
                  name: kube-hadoop-conf
                  key: HDOOP_YARN_MASTER
      restartPolicy: Always
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: hadoop-datanode
  namespace: dev
spec:
  replicas: 3
  selector:
    matchLabels:
      name: hadoop-datanode
  serviceName: hadoop-datanode
  template:
    metadata:
      labels:
        name: hadoop-datanode
    spec:
      containers:
        - name: hadoop-datanode
          image: kubeguide/hadoop:latest
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 9000
            - containerPort: 50070
          volumeMounts:
            - name: data
              mountPath: /root/hdfs/
              subPath: hdfs
            - name: data
              mountPath: /usr/local/hadoop/logs/
              subPath: logs
          env:
            - name: HADOOP_NODE_TYPE
              value: datanode
            - name: HDFS_MASTER_SERVICE
              valueFrom:
                configMapKeyRef:
                  name: kube-hadoop-conf
                  key: HDFS_MASTER_SERVICE
            - name: HDOOP_YARN_MASTER
              valueFrom:
                configMapKeyRef:
                  name: kube-hadoop-conf
                  key: HDOOP_YARN_MASTER
      restartPolicy: Always
  volumeClaimTemplates:
    - metadata:
        name: data
        namespace: dev
      spec:
        accessModes:
          - ReadWriteMany
        resources:
          requests:
            storage: 2Gi
        storageClassName: "nfs-storage"
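
Once these claims bind, the provisioner creates one subdirectory per volume under the NFS export, named ${namespace}-${pvcName}-${pvName}; on the NFS server you should see entries along these lines:

# ls /data/hadoop/
dev-data-hadoop-datanode-0-pvc-2bf83ccf-85eb-43d7-8d49-10a2617c1bde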

3. # cat yarn-node.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: yarn-master
  namespace: dev
spec:
  replicas: 1
  selector:
    matchLabels:
      name: yarn-master
  template:
    metadata:
      labels:
        name: yarn-master
    spec:
      containers:
        - name: yarn-master
          image: kubeguide/hadoop:latest
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 9000
            - containerPort: 50070
          env:
            - name: HADOOP_NODE_TYPE
              value: resourceman
            - name: HDFS_MASTER_SERVICE
              valueFrom:
                configMapKeyRef:
                  name: kube-hadoop-conf
                  key: HDFS_MASTER_SERVICE
            - name: HDOOP_YARN_MASTER
              valueFrom:
                configMapKeyRef:
                  name: kube-hadoop-conf
                  key: HDOOP_YARN_MASTER
      restartPolicy: Always
---
 
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: yarn-node
  namespace: dev
spec:
  replicas: 3
  selector:
    matchLabels:
      name: yarn-node
  serviceName: yarn-node
  template:
    metadata:
      labels:
        name: yarn-node
    spec:
      containers:
        - name: yarn-node
          image: kubeguide/hadoop:latest
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 8040
            - containerPort: 8041
            - containerPort: 8042
          volumeMounts:
            - name: yarn-data
              mountPath: /root/hdfs/
              subPath: hdfs
            - name: yarn-data
              mountPath: /usr/local/hadoop/logs/
              subPath: logs
          env:
            - name: HADOOP_NODE_TYPE
              value: yarnnode
            - name: HDFS_MASTER_SERVICE
              valueFrom:
                configMapKeyRef:
                  name: kube-hadoop-conf
                  key: HDFS_MASTER_SERVICE
            - name: HDOOP_YARN_MASTER
              valueFrom:
                configMapKeyRef:
                  name: kube-hadoop-conf
                  key: HDOOP_YARN_MASTER
      restartPolicy: Always
  volumeClaimTemplates:
    - metadata:
        name: yarn-data
        namespace: dev
      spec:
        accessModes:
          - ReadWriteMany
        resources:
          requests:
            storage: 2Gi
        storageClassName: "nfs-storage"
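
Because both worker tiers are StatefulSets with volumeClaimTemplates, scaling out automatically creates a matching PVC for each new replica, for example:

kubectl -n dev scale statefulset yarn-node --replicas=5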

4. Apply the files and verify

kubectl apply -f hadoop.yaml
kubectl apply -f hadoop-datanode.yaml
kubectl apply -f yarn-node.yaml
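
You can wait for the workloads to finish rolling out before inspecting them:

kubectl -n dev rollout status deployment/hdfs-master
kubectl -n dev rollout status statefulset/hadoop-datanode
kubectl -n dev rollout status statefulset/yarn-node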


# kubectl get pv,pvc -n dev 
NAME                                                        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                        STORAGECLASS   REASON   AGE
persistentvolume/pvc-2bf83ccf-85eb-43d7-8d49-10a2617c1bde   2Gi        RWX            Delete           Bound    dev/data-hadoop-datanode-0   nfs-storage             34m
persistentvolume/pvc-5ecff2b2-ea9d-4d6f-851b-0ab2cecbbe54   2Gi        RWX            Delete           Bound    dev/yarn-data-yarn-node-1    nfs-storage             32m
persistentvolume/pvc-91132f6d-a3e1-4938-b8d7-674d6b0656a8   2Gi        RWX            Delete           Bound    dev/data-hadoop-datanode-2   nfs-storage             34m
persistentvolume/pvc-a44adf12-2505-4133-ab57-99a61c4d4476   2Gi        RWX            Delete           Bound    dev/data-hadoop-datanode-1   nfs-storage             34m
persistentvolume/pvc-c4bf1e26-936f-46f6-8529-98d2699a916e   2Gi        RWX            Delete           Bound    dev/yarn-data-yarn-node-2    nfs-storage             32m
persistentvolume/pvc-e6d360be-2f72-4c47-a99b-fee79ca5e03b   2Gi        RWX            Delete           Bound    dev/yarn-data-yarn-node-0    nfs-storage             32m

NAME                                           STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
persistentvolumeclaim/data-hadoop-datanode-0   Bound    pvc-2bf83ccf-85eb-43d7-8d49-10a2617c1bde   2Gi        RWX            nfs-storage    39m
persistentvolumeclaim/data-hadoop-datanode-1   Bound    pvc-a44adf12-2505-4133-ab57-99a61c4d4476   2Gi        RWX            nfs-storage    34m
persistentvolumeclaim/data-hadoop-datanode-2   Bound    pvc-91132f6d-a3e1-4938-b8d7-674d6b0656a8   2Gi        RWX            nfs-storage    34m
persistentvolumeclaim/yarn-data-yarn-node-0    Bound    pvc-e6d360be-2f72-4c47-a99b-fee79ca5e03b   2Gi        RWX            nfs-storage    32m
persistentvolumeclaim/yarn-data-yarn-node-1    Bound    pvc-5ecff2b2-ea9d-4d6f-851b-0ab2cecbbe54   2Gi        RWX            nfs-storage    32m
persistentvolumeclaim/yarn-data-yarn-node-2    Bound    pvc-c4bf1e26-936f-46f6-8529-98d2699a916e   2Gi        RWX            nfs-storage    32m


# kubectl get all -n dev  -o wide 
NAME                                         READY   STATUS    RESTARTS   AGE   IP            NODE         NOMINATED NODE   READINESS GATES
pod/hadoop-datanode-0                        1/1     Running   0          40m   172.20.4.65   10.2.1.194   <none>           <none>
pod/hadoop-datanode-1                        1/1     Running   0          35m   172.20.4.66   10.2.1.194   <none>           <none>
pod/hadoop-datanode-2                        1/1     Running   0          35m   172.20.4.67   10.2.1.194   <none>           <none>
pod/hdfs-master-5946bb8ff4-lt5mp             1/1     Running   0          40m   172.20.4.64   10.2.1.194   <none>           <none>
pod/nfs-client-provisioner-8ccc8b867-ndssr   1/1     Running   0          52m   172.20.4.63   10.2.1.194   <none>           <none>
pod/yarn-master-559c766d4c-jzz4s             1/1     Running   0          33m   172.20.4.68   10.2.1.194   <none>           <none>
pod/yarn-node-0                              1/1     Running   0          33m   172.20.4.69   10.2.1.194   <none>           <none>
pod/yarn-node-1                              1/1     Running   0          33m   172.20.4.70   10.2.1.194   <none>           <none>
pod/yarn-node-2                              1/1     Running   0          33m   172.20.4.71   10.2.1.194   <none>           <none>

NAME                         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                                                       AGE   SELECTOR
service/hadoop-hdfs-master   NodePort    10.68.193.79    <none>        9000:26007/TCP,50070:32007/TCP                                40m   name=hdfs-master
service/hadoop-yarn-master   NodePort    10.68.243.133   <none>        8030:34657/TCP,8031:35352/TCP,8032:33633/TCP,8088:32088/TCP   40m   name=yarn-master
service/yarn-node            ClusterIP   None            <none>        8040/TCP                                                      40m   name=yarn-node

NAME                                     READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS               IMAGES                                    SELECTOR
deployment.apps/hdfs-master              1/1     1            1           40m   hdfs-master              kubeguide/hadoop:latest                   name=hdfs-master
deployment.apps/nfs-client-provisioner   1/1     1            1           52m   nfs-client-provisioner   jmgao1983/nfs-client-provisioner:latest   app=nfs-client-provisioner
deployment.apps/yarn-master              1/1     1            1           33m   yarn-master              kubeguide/hadoop:latest                   name=yarn-master

NAME                                               DESIRED   CURRENT   READY   AGE   CONTAINERS               IMAGES                                    SELECTOR
replicaset.apps/hdfs-master-5946bb8ff4             1         1         1       40m   hdfs-master              kubeguide/hadoop:latest                   name=hdfs-master,pod-template-hash=5946bb8ff4
replicaset.apps/nfs-client-provisioner-8ccc8b867   1         1         1       52m   nfs-client-provisioner   jmgao1983/nfs-client-provisioner:latest   app=nfs-client-provisioner,pod-template-hash=8ccc8b867
replicaset.apps/yarn-master-559c766d4c             1         1         1       33m   yarn-master              kubeguide/hadoop:latest                   name=yarn-master,pod-template-hash=559c766d4c

NAME                               READY   AGE   CONTAINERS        IMAGES
statefulset.apps/hadoop-datanode   3/3     40m   hadoop-datanode   kubeguide/hadoop:latest
statefulset.apps/yarn-node         3/3     33m   yarn-node         kubeguide/hadoop:latest

Visit http://ip:32007 and http://ip:32088 to reach the HDFS and YARN web UIs.


5. Connectivity check

Create a directory on HDFS:

# kubectl exec -it hdfs-master-5946bb8ff4-lt5mp -n dev -- /bin/bash

# hdfs dfs -mkdir /BigData
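
While inside the namenode pod, you can also confirm that all three datanodes registered:

# hdfs dfsadmin -report
###the report should list all three datanodes as live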

Then check the newly created directory in the Hadoop web UI.


IV. Errors & Fixes

1. NFS error: selfLink was empty

Unexpected error getting claim reference to claim "dev/data-hadoop-datanode-0": selfLink was empty, can't make reference

Cause: the provisioner did not have permission to create the volume.

Fix:

1. chmod 777 /data/hadoop   ###permissions on the NFS export; 777 is just for easy testing

2. Update the rules section of nfs-provisioner.yaml (already reflected in the file above):

rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["list", "watch", "create", "update", "patch"]
  - apiGroups: [""]
    resources: ["services"]
    verbs: ["get"]
  - apiGroups: ["extensions"]
    resources: ["podsecuritypolicies"]
    resourceNames: ["nfs-provisioner"]
    verbs: ["use"]
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]

2. NFS error: waiting for a volume to be created

persistentvolume-controller  waiting for a volume to be created, either by external provisioner "nfs-provisioner" or manually created by system administrator


Cause: Kubernetes 1.20 disabled selfLink, which this provisioner still depends on.

https://github.com/kubernetes/kubernetes/pull/94397

Fix:

1. In the three Hadoop files, keep the namespace consistent with the NFS provisioner's.

2. Edit the kube-apiserver.yaml manifest and add the following:

apiVersion: v1
-----
    - --tls-private-key-file=/etc/kubernetes/pki/apiserver.key
    - --feature-gates=RemoveSelfLink=false   # add this line

This cluster was deployed with kubeasz, so that manifest does not exist here; instead, edit the kube-apiserver service unit on every master node directly and add the --feature-gates=RemoveSelfLink=false flag to ExecStart:

# cat /etc/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
ExecStart=/opt/kube/bin/kube-apiserver \
  --advertise-address=10.2.1.190 \
  --allow-privileged=true \
  --anonymous-auth=false \
  --api-audiences=api,istio-ca \
  --authorization-mode=Node,RBAC \
  --token-auth-file=/etc/kubernetes/ssl/basic-auth.csv \
  --bind-address=10.2.1.190 \
  --client-ca-file=/etc/kubernetes/ssl/ca.pem \
  --endpoint-reconciler-type=lease \
  --etcd-cafile=/etc/kubernetes/ssl/ca.pem \
  --etcd-certfile=/etc/kubernetes/ssl/kubernetes.pem \
  --etcd-keyfile=/etc/kubernetes/ssl/kubernetes-key.pem \
  --etcd-servers=https://10.2.1.190:2379,https://10.2.1.191:2379,https://10.2.1.192:2379 \
  --kubelet-certificate-authority=/etc/kubernetes/ssl/ca.pem \
  --kubelet-client-certificate=/etc/kubernetes/ssl/admin.pem \
  --kubelet-client-key=/etc/kubernetes/ssl/admin-key.pem \
  --service-account-issuer=kubernetes.default.svc \
  --service-account-signing-key-file=/etc/kubernetes/ssl/ca-key.pem \
  --service-account-key-file=/etc/kubernetes/ssl/ca.pem \
  --service-cluster-ip-range=10.68.0.0/16 \
  --service-node-port-range=20000-40000 \
  --tls-cert-file=/etc/kubernetes/ssl/kubernetes.pem \
  --tls-private-key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
  --feature-gates=RemoveSelfLink=false \
  --requestheader-client-ca-file=/etc/kubernetes/ssl/ca.pem \
  --requestheader-allowed-names= \
  --requestheader-extra-headers-prefix=X-Remote-Extra- \
  --requestheader-group-headers=X-Remote-Group \
  --requestheader-username-headers=X-Remote-User \
  --proxy-client-cert-file=/etc/kubernetes/ssl/aggregator-proxy.pem \
  --proxy-client-key-file=/etc/kubernetes/ssl/aggregator-proxy-key.pem \
  --enable-aggregator-routing=true \
  --v=2
Restart=always
RestartSec=5
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

Restart the service: systemctl daemon-reload && systemctl restart kube-apiserver
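
To confirm the running apiserver actually picked up the flag:

ps -ef | grep kube-apiserver | grep feature-gates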

Ideally, reboot the servers entirely afterwards.
