Kubernetes Persistent Storage Volumes
1. Introduction to Storage Volumes
Pods have a lifecycle; when that lifecycle ends, the data inside the pod (configuration files, business data, and so on) disappears.
Solution: separate the data from the pod and place it on a dedicated storage volume.
Pods can be scheduled across the nodes of a k8s cluster; if a pod dies and is rescheduled onto another node, the link between the pod and its data is broken.
Solution: we need a storage system that is decoupled from the cluster nodes to achieve data persistence.
In short: volumes give containers the ability to mount external storage.
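In practice, a volume is declared at the pod level under spec.volumes and then mounted by each container that needs it via volumeMounts. A minimal sketch (all names here are placeholders; emptyDir stands in for any supported volume type):
apiVersion: v1
kind: Pod
metadata:
  name: volume-skeleton
spec:
  containers:
  - name: app
    image: busybox
    command: ["/bin/sleep", "3600"]
    volumeMounts:
    - name: data              # must match a volume name declared below
      mountPath: /data        # where the volume appears inside the container
  volumes:
  - name: data                # the volume itself; its type is chosen here
    emptyDir: {}              # swap in nfs, hostPath, etc. as needed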
2. Types of Storage Volumes
Kubernetes supports a rich set of volume types; list them with the following command:
# kubectl explain pod.spec.volumes
Or see: https://kubernetes.io/docs/concepts/storage/
The volume types supported by Kubernetes are:
- awsElasticBlockStore
- azureDisk
- azureFile
- cephfs
- cinder
- configMap
- csi
- downwardAPI
- emptyDir
- fc (fibre channel)
- flexVolume
- flocker
- gcePersistentDisk
- gitRepo (deprecated)
- glusterfs
- hostPath
- iscsi
- local
- nfs
- persistentVolumeClaim
- projected
- portworxVolume
- quobyte
- rbd
- scaleIO
- secret
- storageos
- vsphereVolume
We can roughly group the volume types above:
- Local volumes
  - emptyDir: data is removed when the pod is deleted; used for temporary data storage
  - hostPath: maps a host directory into the pod (local volume)
- Network volumes
  - NAS: nfs, etc.
  - SAN: iscsi, FC, etc.
  - Distributed storage: glusterfs, cephfs, rbd, cinder, etc.
  - Cloud storage: awsElasticBlockStore, azureFile, etc.
3. Choosing a Storage Volume
There are many storage products on the market, but from an application standpoint they fall into three main categories:
- File storage, e.g. nfs, glusterfs, cephfs
  - Pros: data sharing (multiple pods can mount it and read and write concurrently)
  - Cons: relatively poor performance
- Block storage, e.g. iscsi, rbd
  - Pros: better performance than file storage
  - Cons: usually no data sharing across consumers (with some exceptions)
- Object storage, e.g. Ceph object storage
  - Pros: good performance, data sharing
  - Cons: unusual access model, limited support
With so many volume types supported by Kubernetes, choosing one can be difficult. When selecting storage, focus on the core requirements:
- Does the data need to persist?
- Data reliability: does the storage cluster have a single point of failure, are there data replicas, etc.
- Performance
- Scalability: can it be expanded easily to keep up with data growth?
- Operational complexity: storage is hard to operate, so prefer a stable open-source solution or a commercial product
- Cost
In short, choosing storage involves many factors. Get familiar with the various storage products, understand their strengths and weaknesses, and match them against your own requirements to find the right fit.
4. Local Volumes: emptyDir
- Use case: sharing data between containers within a pod
- Characteristic: the volume is deleted together with the pod
- Create the YAML file
[root@k8s-master1 ~]# vim volume-emptydir.yml
apiVersion: v1
kind: Pod
metadata:
  name: volume-emptydir
spec:
  containers:
  - name: write
    image: centos:centos7        # pinned tag, matching the pull events below
    imagePullPolicy: IfNotPresent
    command: ["bash","-c","echo haha > /data/1.txt ; sleep 6000"]
    volumeMounts:
    - name: data
      mountPath: /data
  - name: read
    image: centos:centos7
    imagePullPolicy: IfNotPresent
    command: ["bash","-c","cat /data/1.txt; sleep 6000"]
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    emptyDir: {}
- Create the pod from the YAML file
[root@k8s-master1 ~]# kubectl apply -f volume-emptydir.yml
pod/volume-emptydir created
- Check that the pod started
[root@k8s-master1 ~]# kubectl get pods |grep volume-emptydir
NAME READY STATUS RESTARTS AGE
volume-emptydir 2/2 Running 0 15s
- Inspect the pod's events
[root@k8s-master1 ~]# kubectl describe pod volume-emptydir | tail -10
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 50s default-scheduler Successfully assigned default/volume-emptydir to k8s-worker1
Normal Pulling 50s kubelet Pulling image "centos:centos7"
Normal Pulled 28s kubelet Successfully pulled image "centos:centos7" in 21.544912361s
Normal Created 28s kubelet Created container write
Normal Started 28s kubelet Started container write
Normal Pulled 28s kubelet Container image "centos:centos7" already present on machine
Normal Created 28s kubelet Created container read
Normal Started 28s kubelet Started container read
- Verify
[root@k8s-master1 ~]# kubectl logs volume-emptydir -c write    # empty: echo wrote to the file, not stdout
[root@k8s-master1 ~]# kubectl logs volume-emptydir -c read
haha
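Besides node-disk backing, emptyDir can be backed by RAM. A hedged variant of the volumes section above (medium and sizeLimit are standard emptyDir fields; the 64Mi limit is an arbitrary example):
  volumes:
  - name: data
    emptyDir:
      medium: Memory      # back the volume with tmpfs (RAM) instead of node disk
      sizeLimit: 64Mi     # kubelet evicts the pod if usage exceeds this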
5. Local Volumes: hostPath
- Use case: mapping a node directory into the pod (a container needs data from the node itself; monitoring is a typical example, since an agent can only report host state if it can read the host's files)
- Drawback: if the node dies and a controller brings the container up on another node, the data it sees belongs to that other node (no data sharing)
- Create the YAML file
[root@k8s-master1 ~]# vim volume-hostpath.yml
apiVersion: v1
kind: Pod
metadata:
  name: volume-hostpath
spec:
  containers:
  - name: busybox
    image: busybox
    imagePullPolicy: IfNotPresent
    command: ["/bin/sh","-c","echo haha > /data/1.txt ; sleep 600"]
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    hostPath:
      path: /opt
      type: Directory
- Create the pod from the YAML file
[root@k8s-master1 ~]# kubectl apply -f volume-hostpath.yml
pod/volume-hostpath created
[root@k8s-master1 ~]# kubectl get pods -o wide |grep volume-hostpath
volume-hostpath 1/1 Running 0 29s 10.224.194.120 k8s-worker1 <none> <none>
The pod is running on node k8s-worker1.
- Verify the mounted file on the node where the pod runs
[root@k8s-worker1 ~]# cat /opt/1.txt
haha
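Note that type: Directory in the manifest above requires /opt to already exist on the node, otherwise the pod fails to start. A hedged variant that creates the directory on demand (DirectoryOrCreate is a standard hostPath type; the path is a hypothetical example):
  volumes:
  - name: data
    hostPath:
      path: /opt/app-data       # created as an empty directory if absent
      type: DirectoryOrCreate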
6. Network Volumes: NFS
- Set up the NFS server
[root@nfsserver ~]# mkdir -p /data/nfs
[root@nfsserver ~]# vim /etc/exports
/data/nfs *(rw,no_root_squash,sync)
[root@nfsserver ~]# systemctl restart nfs-server
[root@nfsserver ~]# systemctl enable nfs-server
- Install the NFS client packages on all nodes
[root@k8s-worker1 ~]# yum install nfs-utils -y
[root@k8s-worker2 ~]# yum install nfs-utils -y
- Verify NFS availability from the worker nodes
[root@k8s-worker1 ~]# showmount -e 192.168.10.129
Export list for 192.168.10.129:
/data/nfs *
[root@k8s-worker2 ~]# showmount -e 192.168.10.129
Export list for 192.168.10.129:
/data/nfs *
- Create the YAML file on the master node
[root@k8s-master1 ~]# vim volume-nfs.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: volume-nfs
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.15-alpine
        imagePullPolicy: IfNotPresent
        volumeMounts:
        - name: documentroot
          mountPath: /usr/share/nginx/html
        ports:
        - containerPort: 80
      volumes:
      - name: documentroot
        nfs:
          server: 192.168.10.129
          path: /data/nfs
- Apply the YAML
[root@k8s-master1 ~]# kubectl apply -f volume-nfs.yml
deployment.apps/volume-nfs created
- Create a test file in the shared directory on the NFS server
[root@nfsserver ~]# echo "volume-nfs" > /data/nfs/index.html
- Verify the pods
[root@k8s-master1 ~]# kubectl get pod |grep volume-nfs
volume-nfs-649d848b57-qg4bz 1/1 Running 0 10s
volume-nfs-649d848b57-wrnpn 1/1 Running 0 10s
[root@k8s-master1 ~]# kubectl exec -it volume-nfs-649d848b57-qg4bz -- /bin/sh
/ # ls /usr/share/nginx/html/
index.html
/ # cat /usr/share/nginx/html/index.html
volume-nfs # matches the file created on the NFS server
/ # exit
[root@k8s-master1 ~]# kubectl exec -it volume-nfs-649d848b57-wrnpn -- /bin/sh
/ # ls /usr/share/nginx/html/
index.html
/ # cat /usr/share/nginx/html/index.html
volume-nfs # matches the file created on the NFS server
/ # exit
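As an optional check without entering the pods, HTTP can be exercised directly (the nginx:1.15-alpine image ships BusyBox wget; the pod name is the one from the transcript above, and the output should be the content of index.html):
[root@k8s-master1 ~]# kubectl exec volume-nfs-649d848b57-qg4bz -- wget -qO- http://localhost
volume-nfs    # expected: nginx serves index.html from the NFS export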
7. PV (PersistentVolume) and PVC (PersistentVolumeClaim)
7.1 Understanding PV and PVC
Kubernetes supports so many volume types, each with its own interface and parameters, that maintenance and management become harder.
A PersistentVolume (PV) is a pre-configured piece of storage (backed by any volume type).
- In other words, shared network storage is exported and defined as a PV.
A PersistentVolumeClaim (PVC) is a user pod's request to use a PV.
- Users don't need to care about the underlying volume implementation, only about what they need.
7.2 The Relationship Between PV and PVC
- A PV provides storage resources (producer)
- A PVC consumes storage resources (consumer)
- A PVC binds to a PV
7.3 Implementing an NFS-backed PV and PVC
- Write the YAML file for the PV
[root@k8s-master1 ~]# vim pv-nfs.yml
apiVersion: v1
kind: PersistentVolume          # resource type: PersistentVolume (pv)
metadata:
  name: pv-nfs                  # name
spec:
  capacity:
    storage: 1Gi                # size
  accessModes:
  - ReadWriteMany               # access mode
  nfs:
    path: /data/nfs             # nfs export path
    server: 192.168.10.129      # nfs server IP
There are three access modes (see https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes):
- ReadWriteOnce: read-write, mountable by a single node
- ReadOnlyMany: read-only, mountable by many nodes
- ReadWriteMany: read-write, mountable by many nodes
NFS volumes support all three modes. Since we want multiple nginx pods on different nodes to share data, we choose ReadWriteMany.
- Create the PV and verify
[root@k8s-master1 ~]# kubectl apply -f pv-nfs.yml
persistentvolume/pv-nfs created
[root@k8s-master1 ~]# kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pv-nfs 1Gi RWX Retain Available 81s
Notes:
- RWX is short for ReadWriteMany
- Retain is the reclaim policy
  - Retain means the volume must be reclaimed manually once it is no longer in use
  - See https://kubernetes.io/docs/concepts/storage/persistent-volumes/#reclaiming
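The policy can also be set explicitly on a PV; a minimal fragment (persistentVolumeReclaimPolicy is a standard PV field, and Retain is already the default for statically created PVs):
spec:
  persistentVolumeReclaimPolicy: Retain   # or Delete; Recycle is deprecated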
- Write the YAML file for the PVC
[root@k8s-master1 ~]# vim pvc-nfs.yml
apiVersion: v1
kind: PersistentVolumeClaim     # resource type: PersistentVolumeClaim (pvc)
metadata:
  name: pvc-nfs                 # pvc name
spec:
  accessModes:
  - ReadWriteMany               # access mode
  resources:
    requests:
      storage: 1Gi              # must not exceed the PV's capacity (1Gi matches the PV exactly)
- Create the PVC and verify
[root@k8s-master1 ~]# kubectl apply -f pvc-nfs.yml
persistentvolumeclaim/pvc-nfs created
[root@k8s-master1 ~]# kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
pvc-nfs Bound pv-nfs 1Gi RWX 38s
Note: STATUS must be Bound (Bound means the PVC is bound to the PV successfully)
- Write the deployment YAML
[root@k8s-master1 ~]# vim deploy-nginx-nfs.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deploy-nginx-nfs
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.15-alpine
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
      volumes:
      - name: www
        persistentVolumeClaim:
          claimName: pvc-nfs
- Apply the YAML to create the deployment
[root@k8s-master1 ~]# kubectl apply -f deploy-nginx-nfs.yml
deployment.apps/deploy-nginx-nfs created
- Verify the pods
[root@k8s-master1 ~]# kubectl get pod |grep deploy-nginx-nfs
deploy-nginx-nfs-6f9bc4546c-gbzcl 1/1 Running 0 1m46s
deploy-nginx-nfs-6f9bc4546c-hp4cv 1/1 Running 0 1m46s
- Verify the data in the volume inside the pods
[root@k8s-master1 ~]# kubectl exec -it deploy-nginx-nfs-6f9bc4546c-gbzcl -- /bin/sh
/ # ls /usr/share/nginx/html/
index.html
/ # cat /usr/share/nginx/html/index.html
volume-nfs
/ # exit
[root@k8s-master1 ~]# kubectl exec -it deploy-nginx-nfs-6f9bc4546c-hp4cv -- /bin/sh
/ # ls /usr/share/nginx/html/
index.html
/ # cat /usr/share/nginx/html/index.html
volume-nfs
/ # exit
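Because the claim is ReadWriteMany, any extra replica mounts the same data; a quick hedged check:
[root@k8s-master1 ~]# kubectl scale deployment deploy-nginx-nfs --replicas=3
deployment.apps/deploy-nginx-nfs scaled
# the new pod should serve the same index.html from the shared NFS export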
7.4 Using subPath
subPath lets you mount different subdirectories of the same volume at different paths inside a container. The following case demonstrates it:
Edit the file
# vim 01_create_pod.yaml
Check it after editing; just keep the content identical
# cat 01_create_pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod1
spec:
  containers:
  - name: c1
    image: busybox
    command: ["/bin/sleep","100000"]
    volumeMounts:
    - name: data
      mountPath: /opt/data1
      subPath: data1              # mounts only the data1 subdirectory of the volume
    - name: data
      mountPath: /opt/data2
      subPath: data2              # mounts only the data2 subdirectory of the volume
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: pvc-nfs
Apply the file to create the pod (it stays Pending until the PVC and PV below are created and bound)
# kubectl apply -f 01_create_pod.yaml
pod/pod1 created
Edit the file
# vim 02_create_pvc.yaml
Check it after editing; just keep the content identical
# cat 02_create_pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim     # resource type: PersistentVolumeClaim (pvc)
metadata:
  name: pvc-nfs                 # pvc name
spec:
  accessModes:
  - ReadWriteMany               # access mode
  resources:
    requests:
      storage: 1Gi              # must not exceed the PV's capacity (1Gi matches the PV exactly)
Apply the file to create the pvc
# kubectl apply -f 02_create_pvc.yaml
persistentvolumeclaim/pvc-nfs created
Edit the file
# vim 03_create_pv_nfs.yaml
Check it after editing; keep the content identical, but change the NFS server and export path to match your environment
# cat 03_create_pv_nfs.yaml
apiVersion: v1
kind: PersistentVolume          # resource type: PersistentVolume (pv)
metadata:
  name: pv-nfs                  # name
spec:
  capacity:
    storage: 1Gi                # size
  accessModes:
  - ReadWriteMany               # access mode
  nfs:
    path: /sdb                  # nfs export path
    server: 192.168.10.214
Apply the file to create the pv
# kubectl apply -f 03_create_pv_nfs.yaml
persistentvolume/pv-nfs created
On the NFS server, verify that the pod's subdirectories were created automatically under /sdb:
[root@nfsserver ~]# ls /sdb
data1 data2
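From inside the pod, each subPath is an independent mount of its own subdirectory; a quick hedged check (f1 and f2 are arbitrary file names):
# kubectl exec pod1 -- sh -c "echo a > /opt/data1/f1 ; echo b > /opt/data2/f2"
# on the NFS server, /sdb/data1 should now contain f1 and /sdb/data2 should contain f2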
8. Dynamic Provisioning
8.1 What Is Dynamic Provisioning
Creating a PV and then a PVC every time you need storage is tedious, so we can use dynamic provisioning instead.
- With static provisioning, a user's PVC must match a pre-created PV exactly in capacity and access mode; dynamic provisioning has no such requirement.
- Administrators no longer need to pre-create large numbers of PVs as storage resources.
Starting with version 1.4, Kubernetes introduced a new resource, StorageClass, which defines storage as classes with distinct characteristics rather than as concrete PVs. A user's PVC simply targets the desired class and is either matched against PVs the administrator created in advance or satisfied by a PV created dynamically on demand, eliminating the need to create the PV first.
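From the user's side, requesting dynamically provisioned storage is just a PVC that names a class; a minimal sketch (the class name nfs-client anticipates the StorageClass created in 8.2 below):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-dynamic
spec:
  storageClassName: nfs-client   # the class to provision from
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 1Gi               # the provisioner creates a matching PV on demand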
8.2 Dynamic Provisioning Backed by NFS
PV support for a given storage system is implemented through plugins; the types Kubernetes currently supports are listed in the official documentation:
https://kubernetes.io/docs/concepts/storage/storage-classes/
The in-tree plugins do not support dynamic provisioning for NFS, but a third-party plugin can provide it.
Third-party plugins: https://github.com/kubernetes-retired/external-storage
- Download and create the StorageClass
[root@k8s-master1 ~]# wget https://raw.githubusercontent.com/kubernetes-sigs/nfs-subdir-external-provisioner/master/deploy/class.yaml
[root@k8s-master1 ~]# mv class.yaml storageclass-nfs.yml
[root@k8s-master1 ~]# cat storageclass-nfs.yml
apiVersion: storage.k8s.io/v1
kind: StorageClass                                           # resource type
metadata:
  name: nfs-client                                           # name; PVCs reference the class by this name
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner     # dynamic provisioning plugin
parameters:
  archiveOnDelete: "false"                                   # whether to archive data on delete: "false" = discard, "true" = archive
[root@k8s-master1 ~]# kubectl apply -f storageclass-nfs.yml
storageclass.storage.k8s.io/nfs-client created
[root@k8s-master1 ~]# kubectl get storageclass
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
nfs-client k8s-sigs.io/nfs-subdir-external-provisioner Delete Immediate false 10s
# RECLAIMPOLICY: the PV reclaim policy; whether a PV is deleted or kept after its pod/pvc is deleted.
# VOLUMEBINDINGMODE: Immediate binds the PVC to a PV right away, without waiting for a consuming pod to be scheduled or caring which node it runs on; WaitForFirstConsumer instead delays PV binding until a pod using the PVC has been scheduled.
# ALLOWVOLUMEEXPANSION: whether PVCs of this class can be expanded.
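To avoid naming the class in every PVC, a StorageClass can be marked as the cluster default; a hedged fragment (this is the standard default-class annotation):
metadata:
  name: nfs-client
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"   # PVCs without storageClassName use this class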
- Download and create the RBAC objects
Because the provisioner creates PVs automatically through kube-apiserver, it needs authorization.
[root@k8s-master1 ~]# wget https://raw.githubusercontent.com/kubernetes-sigs/nfs-subdir-external-provisioner/master/deploy/rbac.yaml
[root@k8s-master1 ~]# mv rbac.yaml storageclass-nfs-rbac.yaml
[root@k8s-master1 ~]# cat storageclass-nfs-rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io
[root@k8s-master1 ~]# kubectl apply -f storageclass-nfs-rbac.yaml
serviceaccount/nfs-client-provisioner created
clusterrole.rbac.authorization.k8s.io/nfs-client-provisioner-runner created
clusterrolebinding.rbac.authorization.k8s.io/run-nfs-client-provisioner created
role.rbac.authorization.k8s.io/leader-locking-nfs-client-provisioner created
rolebinding.rbac.authorization.k8s.io/leader-locking-nfs-client-provisioner created
- Create the provisioner deployment
A dedicated deployment runs the provisioner that creates PVs for PVCs automatically.
[root@k8s-master1 ~]# vim deploy-nfs-client-provisioner.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner   # the ServiceAccount granted the RBAC rules above
      containers:
      - name: nfs-client-provisioner
        image: registry.cn-beijing.aliyuncs.com/pylixm/nfs-subdir-external-provisioner:v4.0.0
        volumeMounts:
        - name: nfs-client-root
          mountPath: /persistentvolumes
        env:
        - name: PROVISIONER_NAME
          value: k8s-sigs.io/nfs-subdir-external-provisioner
        - name: NFS_SERVER
          value: 192.168.10.129
        - name: NFS_PATH
          value: /data/nfs
      volumes:
      - name: nfs-client-root
        nfs:
          server: 192.168.10.129
          path: /data/nfs
[root@k8s-master1 ~]# kubectl apply -f deploy-nfs-client-provisioner.yml
deployment.apps/nfs-client-provisioner created
[root@k8s-master1 ~]# kubectl get pods |grep nfs-client-provisioner
nfs-client-provisioner-5b5ddcd6c8-b6zbq 1/1 Running 0 34s
Test whether dynamic provisioning works
# vim nginx-sc.yaml
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None
  selector:
    app: nginx
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  selector:
    matchLabels:
      app: nginx
  serviceName: "nginx"
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx
    spec:
      imagePullSecrets:               # site-specific registry secret from the original environment; drop if your cluster has no such secret
      - name: huoban-harbor
      terminationGracePeriodSeconds: 10
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: www
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: "nfs-client"   # the StorageClass created above
      resources:
        requests:
          storage: 1Gi
# kubectl apply -f nginx-sc.yaml
[root@k8s-master1 nfs]# kubectl get pods
NAME READY STATUS RESTARTS AGE
nfs-client-provisioner-9c988bc46-pr55n 1/1 Running 0 95s
web-0 1/1 Running 0 95s
web-1 1/1 Running 0 61s
[root@nfsserver ~]# ls /data/nfs/
default-www-web-0-pvc-c4f7aeb0-6ee9-447f-a893-821774b8d11f default-www-web-1-pvc-8b8a4d3d-f75f-43af-8387-b7073d07ec01
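The provisioner creates one subdirectory per PVC, named ${namespace}-${pvcName}-${pvName}, which is exactly what the two default-www-web-... entries above show. The claims themselves can be checked as well (expected result sketched, not captured from a live run):
[root@k8s-master1 ~]# kubectl get pvc
# expected: www-web-0 and www-web-1, both Bound, STORAGECLASS nfs-client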
Extension: download the files in a batch:
# for file in class.yaml deployment.yaml rbac.yaml ; do
>   wget https://raw.githubusercontent.com/kubernetes-incubator/external-storage/master/nfs-client/deploy/$file
> done