I. Volume
1. What is a Volume?
Official docs: https://kubernetes.io/docs/concepts/storage/volumes/
On-disk files in a Container are ephemeral, which presents some problems for non-trivial applications when running in Containers. First, when a Container crashes, kubelet will restart it, but the files will be lost - the Container starts with a clean state. Second, when running Containers together in a Pod it is often necessary to share files between those Containers. The Kubernetes Volume abstraction solves both of these problems.
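As a minimal sketch of the second problem above (sharing files between containers in one Pod), an emptyDir volume, whose lifetime is tied to the Pod, can be mounted by both containers. The names and paths below are arbitrary choices for illustration:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: shared-files-demo      # arbitrary name
spec:
  containers:
  - name: writer
    image: busybox
    command: ['sh', '-c', 'echo hello > /data/msg.txt && sleep 3600']
    volumeMounts:
    - name: shared
      mountPath: /data         # the writer's view of the volume
  - name: reader
    image: busybox
    command: ['sh', '-c', 'sleep 3600']
    volumeMounts:
    - name: shared
      mountPath: /data         # the reader sees the same files
  volumes:
  - name: shared
    emptyDir: {}               # scratch space that lives and dies with the Pod
```

Note that emptyDir only addresses file sharing within the Pod; its contents are still lost when the Pod is removed, which is what hostPath and PersistentVolume address.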
2. hostPath Volume in practice (not recommended)
Define a Pod containing two containers, both mounting the same Pod-level Volume.
Create volume-pod.yaml:
apiVersion: v1
kind: Pod
metadata:
  name: volume-pod
spec:
  containers:
  - name: nginx-container        # nginx container
    image: nginx
    ports:
    - containerPort: 80
    volumeMounts:                # mount a volume
    - name: volume-pod           # volume name
      mountPath: /nginx-volume   # path inside this container
  - name: busybox-container      # busybox container
    image: busybox
    command: ['sh', '-c', 'echo The app is running! && sleep 3600']
    volumeMounts:
    - name: volume-pod
      mountPath: /busybox-volume # path inside this container
  volumes:                       # define the volume
  - name: volume-pod             # volume name
    hostPath:                    # volume type
      path: /tmp/volume-pod      # path on the host node
# Apply it
kubectl apply -f volume-pod.yaml
# Check the pod; here it was scheduled onto node w1
kubectl get pods -o wide
# On w1, enter each container
docker exec -it a2e9dbc52a11 /bin/bash
docker exec -it 27c66caa2b85 sh
# Check that the container's /nginx-volume (and /busybox-volume) matches the host's /tmp/volume-pod, then create and edit files to confirm changes are synchronized between the two containers and the host
(1) Summary
With a hostPath volume, the containers share a directory with the host node, and the contents on both sides stay identical.
If the pod dies, the files on the host survive, so the data itself is preserved.
However, hostPath only ties the pod to a directory on whichever node it is currently running on; across a cluster this guarantee breaks down (if the pod is later scheduled onto a different node, the files are not there).
We could pin the pod to one specific node with labels, but if that node goes down, the pod cannot run at all.
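The label pinning idea mentioned above can be sketched with a nodeSelector. kubernetes.io/hostname is a standard node label, while the node name w1 and the pod name are assumptions for this environment:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pinned-volume-pod        # hypothetical name
spec:
  nodeSelector:
    kubernetes.io/hostname: w1   # assumed node name; pins the pod to w1
  containers:
  - name: nginx-container
    image: nginx
    volumeMounts:
    - name: volume-pod
      mountPath: /nginx-volume
  volumes:
  - name: volume-pod
    hostPath:
      path: /tmp/volume-pod
```

If w1 goes down, this pod stays Pending rather than being rescheduled elsewhere, which is exactly the limitation described above.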
II. PersistentVolume (recommended)
1. What is a PersistentVolume
Official docs: https://kubernetes.io/docs/concepts/storage/persistent-volumes/
# Example: defining a PV
apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-pv
spec:
  capacity:
    storage: 5Gi               # storage capacity
  volumeMode: Filesystem
  accessModes:
  - ReadWriteOnce              # can be mounted read-write by a single node
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: slow
  mountOptions:
  - hard
  - nfsvers=4.1
  nfs:
    path: /tmp                 # directory on the remote server
    server: 172.17.0.2         # remote server address
In short, a PV is a Kubernetes resource: a volume plugin implementation whose lifecycle is independent of any Pod, and which encapsulates the details of the underlying storage. It can be bound to third-party storage technologies, such as a distributed file system.
Note: PVs are usually created and maintained by operations staff or cluster administrators.
2. What is a PersistentVolumeClaim
Official docs: https://kubernetes.io/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims
Given a PV, how does a Pod use it? To make this convenient, Kubernetes provides the PVC (PersistentVolumeClaim): the PVC binds to a PV, and the Pod then simply uses the PVC.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
spec:
  accessModes:
  - ReadWriteOnce
  volumeMode: Filesystem
  resources:
    requests:
      storage: 8Gi
  storageClassName: slow
  selector:
    matchLabels:
      release: "stable"
    matchExpressions:
    - {key: environment, operator: In, values: [dev]}
In short, a PVC matches a PV that satisfies its requirements (matching is done on capacity and access modes), they are bound one to one, and both then move to the Bound state.
That is, the PVC requests a capacity and an access mode, and the Pod can then use the PVC directly.
Note: PVCs are usually maintained by developers. A developer does not need to care about storage details; they only declare the resources they need in a PVC, and the PVC finds a suitable PV.
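As a sketch, a PV that the myclaim PVC above could bind to might look like the following. The NFS server address and path are placeholders, and the labels are chosen only to satisfy the claim's selector:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-for-myclaim         # hypothetical name
  labels:
    release: "stable"          # satisfies the claim's matchLabels
    environment: dev           # satisfies the claim's matchExpressions (In [dev])
spec:
  capacity:
    storage: 10Gi              # at least the claim's 8Gi request
  volumeMode: Filesystem
  accessModes:
  - ReadWriteOnce              # same access mode as the claim
  storageClassName: slow       # same class as the claim
  nfs:
    path: /tmp                 # placeholder path
    server: 172.17.0.2         # placeholder server
```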
3. How a Pod uses a PVC
Official docs: https://kubernetes.io/docs/concepts/storage/persistent-volumes/#claims-as-volumes
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
  - name: myfrontend
    image: nginx
    volumeMounts:
    - mountPath: "/var/www/html"
      name: mypd
  volumes:
  - name: mypd
    persistentVolumeClaim:
      claimName: myclaim       # reference the PVC by name
III. Hands-on: using a PVC in a Pod
(1) Use NFS as the shared storage, hosted on the m (master) node
(2) Create the PV and PVC
(3) Use the PVC in an nginx pod
NFS (Network File System) is a distributed file system protocol that lets computers on a network share resources with each other over TCP/IP.
1. Setting up NFS
Set up an NFS server on the master node (192.168.56.100), exporting the directory /nfs/data.
01 The master node acts as the NFS server, so run the following on the master node:
# Install NFS
yum install -y nfs-utils
# Create the NFS directories
mkdir -p /nfs/data/
mkdir -p /nfs/data/mysql
# Grant permissions
chmod -R 777 /nfs/data
# Edit the exports file and add the line below
vi /etc/exports
/nfs/data *(rw,no_root_squash,sync)
# Make the configuration take effect
exportfs -r
# Verify; this should print: /nfs/data <world>
exportfs
# Start rpcbind and nfs and enable them at boot
systemctl restart rpcbind && systemctl enable rpcbind
systemctl restart nfs && systemctl enable nfs
# Check the services registered with rpc
rpcinfo -p localhost
# Test with showmount, e.g. showmount -e 192.168.56.100
showmount -e master-ip
02 Install the NFS client on every node:
yum -y install nfs-utils
systemctl start nfs && systemctl enable nfs
2. Creating the PV, PVC and Nginx
(1) Create the required directory on the NFS server
mkdir -p /nfs/data/nginx
(2) Define the PV, the PVC, and Nginx in one YAML file
nginx-pv-demo.yaml
# Define the PV
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nginx-pv              # PV name
spec:
  accessModes:
  - ReadWriteMany             # can be mounted read-write by many nodes
  capacity:
    storage: 2Gi              # capacity
  nfs:
    path: /nfs/data/nginx     # path on the NFS server
    server: 192.168.56.100    # NFS server address
---
# Define the PVC that consumes the PV
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nginx-pvc             # PVC name
spec:
  accessModes:
  - ReadWriteMany             # must be compatible with the PV's access modes for matching
  resources:
    requests:
      storage: 2Gi            # requested capacity
---
# Define the Deployment that uses the PVC
apiVersion: apps/v1           # apps/v1beta1 has been removed; use apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx
        name: nginx
        ports:
        - containerPort: 80
        volumeMounts:
        - name: nginx-persistent-storage
          mountPath: /usr/share/nginx/html   # shared directory inside the container
      volumes:
      - name: nginx-persistent-storage
        persistentVolumeClaim:
          claimName: nginx-pvc               # reference the PVC
(3) Create the resources from the YAML file and inspect them
# Apply
[root@m ~]# kubectl apply -f nginx-pv-demo.yaml
persistentvolume/nginx-pv created
persistentvolumeclaim/nginx-pvc created
deployment.apps/nginx created
# Check the PV and PVC
[root@m ~]# kubectl get pv,pvc
NAME                        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM               STORAGECLASS   REASON   AGE
persistentvolume/nginx-pv   2Gi        RWX            Retain           Bound    default/nginx-pvc                           10s
NAME                              STATUS   VOLUME     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
persistentvolumeclaim/nginx-pvc   Bound    nginx-pv   2Gi        RWX                           9s
# Check the pod
[root@m ~]# kubectl get pods -o wide
NAME                     READY   STATUS    RESTARTS   AGE   IP               NODE   NOMINATED NODE   READINESS GATES
nginx-77945f44db-8rfhr   1/1     Running   0          53s   192.168.80.207   w2     <none>           <none>
# To delete everything later
kubectl delete -f nginx-pv-demo.yaml
(4) Testing the persistent storage
01 On the NFS server, create a file 1.html under /nfs/data/nginx containing "hello nginx"
02 Run kubectl get pods -o wide to get the nginx pod's IP: 192.168.80.207
03 curl nginx-pod-ip/1.html
curl 192.168.80.207/1.html prints "hello nginx"
04 Run kubectl exec -it nginx-pod bash and look inside /usr/share/nginx/html: 1.html is there
05 kubectl delete pod nginx-pod
The old pod is deleted and a replacement starts automatically; the new pod's IP is 192.168.80.208
06 Get the new pod's IP and request nginx-pod-ip/1.html again
curl 192.168.80.208/1.html still works and prints "hello nginx", confirming the data survived the pod being replaced
IV. StorageClass
1. What is a StorageClass
Managing PVs by hand, as above, is clumsy. Can we make it more flexible?
Official docs: https://kubernetes.io/docs/concepts/storage/storage-classes/
NFS provisioner on GitHub: https://github.com/kubernetes-incubator/external-storage/tree/master/nfs
A StorageClass provides a way for administrators to describe the “classes” of storage they offer. Different classes might map to quality-of-service levels, or to backup policies, or to arbitrary policies determined by the cluster administrators. Kubernetes itself is unopinionated about what classes represent. This concept is sometimes called “profiles” in other storage systems.
Each StorageClass contains the fields provisioner, parameters, and reclaimPolicy, which are used when a PersistentVolume belonging to the class needs to be dynamically provisioned.
The name of a StorageClass object is significant, and is how users can request a particular class. Administrators set the name and other parameters of a class when first creating StorageClass objects, and the objects cannot be updated once they are created.
A StorageClass declares a storage plugin and is used to create PVs automatically.
In short, it is a template for creating PVs, with two key parts: the attributes of the PVs and the provisioner plugin needed to create them.
A PVC can then match a PV by "class".
A PV can be given a storageClassName attribute identifying which class it belongs to.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
reclaimPolicy: Retain
allowVolumeExpansion: true
mountOptions:
- debug
volumeBindingMode: Immediate
01 A PV or a StorageClass corresponds to exactly one storage backend
02 With manual provisioning, administrators typically pre-create many PVs so that PVCs can use them when needed
03 With dynamic provisioning, the StorageClass creates and manages PVs automatically
04 A Pod that wants shared storage usually creates a PVC describing the desired backend type, capacity, and so on; Kubernetes then matches a PV, and if no PV matches, the Pod stays Pending. Inside the Pod, the PVC is used just like a volume, referenced by name
05 A Pod can use multiple PVCs, and a PVC can be shared by multiple Pods
06 A PVC binds to exactly one PV, and a PV corresponds to exactly one storage backend
With a StorageClass, the PVC can be written like this:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim1
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 1Mi
  storageClassName: nfs
A StorageClass can supply PVs dynamically thanks to its provisioner; this is called Dynamic Provisioning.
However, Kubernetes has no built-in provisioner for NFS, so we need to deploy one ourselves.
2. StorageClass in practice
GitHub: https://github.com/kubernetes-incubator/external-storage/tree/master/nfs
(1) Prepare the NFS server (make sure NFS works correctly) and create the directory used for persistence
mkdir -p /nfs/data/cxf
chmod 777 /nfs/data
# server: 192.168.56.100
(2) Create the RBAC resources from rbac.yaml
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-provisioner-runner
rules:
- apiGroups: [""]
  resources: ["persistentvolumes"]
  verbs: ["get", "list", "watch", "create", "delete"]
- apiGroups: [""]
  resources: ["persistentvolumeclaims"]
  verbs: ["get", "list", "watch", "update"]
- apiGroups: ["storage.k8s.io"]
  resources: ["storageclasses"]
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources: ["events"]
  verbs: ["create", "update", "patch"]
- apiGroups: [""]
  resources: ["services", "endpoints"]
  verbs: ["get"]
- apiGroups: ["extensions"]
  resources: ["podsecuritypolicies"]
  resourceNames: ["nfs-provisioner"]
  verbs: ["use"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-provisioner
subjects:
- kind: ServiceAccount
  name: nfs-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-provisioner
rules:
- apiGroups: [""]
  resources: ["endpoints"]
  verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-provisioner
subjects:
- kind: ServiceAccount
  name: nfs-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-provisioner
  apiGroup: rbac.authorization.k8s.io
kubectl apply -f rbac.yaml
(3) Create the provisioner from deployment.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-provisioner
---
kind: Deployment
apiVersion: apps/v1            # extensions/v1beta1 has been removed; use apps/v1
metadata:
  name: nfs-provisioner
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nfs-provisioner     # required by apps/v1; must match the template labels
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nfs-provisioner
    spec:
      serviceAccountName: nfs-provisioner
      containers:
      - name: nfs-provisioner
        image: registry.cn-hangzhou.aliyuncs.com/open-ali/nfs-client-provisioner
        volumeMounts:
        - name: nfs-client-root
          mountPath: /persistentvolumes
        env:
        - name: PROVISIONER_NAME
          value: example.com/nfs
        - name: NFS_SERVER
          value: 192.168.56.100    # NFS server address
        - name: NFS_PATH
          value: /nfs/data/cxf     # NFS directory
      volumes:
      - name: nfs-client-root
        nfs:
          server: 192.168.56.100   # NFS server address
          path: /nfs/data/cxf      # NFS directory
# Create it
kubectl apply -f deployment.yaml
(4) Create the StorageClass from class.yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: example-nfs
provisioner: example.com/nfs   # must match the provisioner's PROVISIONER_NAME value
# Create it
kubectl apply -f class.yaml
(5) Create the claim from pvc.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: my-pvc
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 1Mi
  # must match the name of the StorageClass created above
  storageClassName: example-nfs
# Create it
kubectl apply -f pvc.yaml
# Get the PVC; its status is already Bound
kubectl get pvc
# Get the PVs; a PV has been created automatically
kubectl get pv
(6) Create the pod from nginx-pod.yaml
kind: Pod
apiVersion: v1
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx
    volumeMounts:
    - name: my-pvc             # volume name, matching the entry under volumes
      mountPath: "/usr/cxf"    # path inside the container
  restartPolicy: "Never"
  volumes:
  - name: my-pvc               # volume name
    persistentVolumeClaim:
      claimName: my-pvc        # reference the PVC
# Create it
kubectl apply -f nginx-pod.yaml
# Check the pod
kubectl get pods -o wide
# Enter the nginx container
kubectl exec -it nginx bash
cd /usr/cxf
# Test that data is synchronized with the NFS directory
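The sync test can go roughly like this. The file name is arbitrary, and note that the nfs-client provisioner creates a subdirectory per PV under the export, so the exact path on the server depends on the generated PV name:

```shell
# Inside the nginx container:
echo "hello from the pod" > /usr/cxf/test.txt

# On the NFS server (192.168.56.100), look for the file under the export;
# it appears inside the subdirectory the provisioner created for this PV
ls -R /nfs/data/cxf
```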
V. PV states and reclaim policies
- PV states
Available: the PV is not bound to any claim yet
Bound: the PV has been bound to a PVC
Released: the PVC has been deleted, but the PV has not yet been reclaimed; an administrator must release it manually
Failed: automatic reclamation of the volume failed
- PV reclaim policies
Retain: when the PVC is deleted, the PV is not deleted with it; it moves to the Released state and waits for manual cleanup by an administrator
Recycle: deprecated in newer Kubernetes versions in favor of dynamic provisioning
Delete: when the PVC is deleted, the PV is deleted too, together with the actual storage it points to
Note: currently only NFS and HostPath support the Recycle policy; AWS EBS, GCE PD, Azure Disk, and Cinder support the Delete policy.
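A related administrative task: the reclaim policy of an existing PV can be changed in place with kubectl patch, for example to keep the data of a dynamically provisioned PV that would otherwise default to Delete. The PV name below is a placeholder:

```shell
kubectl patch pv <your-pv-name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
```

After this change, deleting the bound PVC leaves the PV in the Released state instead of removing the underlying storage.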