1. Concept
A StorageClass is a storage class: once created, it can dynamically provision storage volumes for Kubernetes users.
With a StorageClass, PVs are created dynamically in response to PVCs, which reduces the administrator's manual PV-creation work.
A StorageClass definition mainly consists of a name, the backend storage provisioner, and the configuration parameters for that backend. Once created, a StorageClass cannot be modified; to change it, you must delete it and recreate it.
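As a sketch, the three parts of a definition named above map onto manifest fields like this (the names here are placeholders for illustration, not from this article's setup):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: example-nfs            # the name
provisioner: example.com/nfs   # the backend storage provisioner
parameters:                    # backend-specific parameter configuration
  archiveOnDelete: "true"
```

Because the object is immutable after creation, changing any of these fields means `kubectl delete sc example-nfs` followed by re-applying the manifest.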
2. Creation
To use a StorageClass, we must install the matching automatic provisioning program. Since this article uses NFS as the storage backend, we need the NFS-Subdir-External-Provisioner, which we also call the Provisioner for short.
Project page: https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner. This program uses an NFS server we have already configured to create persistent volumes automatically, i.e. it creates the PVs for us.
Automatically created PVs are placed in the NFS server's shared data directory under names of the form ${namespace}-${pvcName}-${pvName}.
When such a PV is reclaimed, its directory is kept on the NFS server, renamed to archived-${namespace}-${pvcName}-${pvName}.
Before deploying the NFS-Subdir-External-Provisioner, the NFS server itself must already be installed and working; the installation was covered in an earlier article: https://blog.csdn.net/u011837804/article/details/128588864
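The directory naming scheme above can be illustrated with sample values (the namespace, PVC name, and PV name here are the ones used later in this article; any others would work the same way):

```shell
#!/bin/sh
# Illustration only: sample values matching the PVC created in section 3.
namespace=dev
pvcName=storage-pvc
pvName=pvc-8b5da590-2436-472e-a671-038822f15252

# Directory the provisioner creates on the NFS share for a bound PVC:
echo "${namespace}-${pvcName}-${pvName}"
# Directory name after the PV is reclaimed (with archiveOnDelete enabled):
echo "archived-${namespace}-${pvcName}-${pvName}"
```

Running this prints `dev-storage-pvc-pvc-8b5da590-2436-472e-a671-038822f15252` and its `archived-` counterpart, which is exactly the directory we will see appear under the NFS share in section 3.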
2.1. Cluster environment
[root@k8s-master ~]# kubectl cluster-info
Kubernetes control plane is running at https://10.211.55.11:6443
CoreDNS is running at https://10.211.55.11:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
[root@k8s-master ~]#
[root@k8s-master ~]# kubectl get nodes -owide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
k8s-master Ready control-plane 19h v1.25.0 10.211.55.11 <none> CentOS Stream 8 4.18.0-408.el8.x86_64 docker://20.10.22
k8s-node1 Ready <none> 19h v1.25.0 10.211.55.12 <none> CentOS Stream 8 4.18.0-408.el8.x86_64 docker://20.10.22
k8s-node2 Ready <none> 19h v1.25.0 10.211.55.13 <none> CentOS Stream 8 4.18.0-408.el8.x86_64 docker://20.10.22
[root@k8s-master ~]#
2.2. Create the ServiceAccount
Most Kubernetes clusters today use RBAC for access control, so we need to create a ServiceAccount with the required permissions and bind it to the NFS-Subdir-External-Provisioner component deployed below.
Note: the ServiceAccount is mandatory. Without it, no PV will be created dynamically and the PVC will stay Pending forever.
RBAC resource file nfs-rbac.yaml:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  namespace: dev
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: dev
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  namespace: dev
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  namespace: dev
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: dev
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io
2.3. Deploy the NFS-Subdir-External-Provisioner
We use the master (10.211.55.11) as the NFS server, with /root/data/nfs as the shared directory and storage-nfs as the provisioner name, and deploy the NFS-Subdir-External-Provisioner.
Create nfs-provisioner-deploy.yaml:
kind: Deployment
apiVersion: apps/v1
metadata:
  name: nfs-client-provisioner
  namespace: dev
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nfs-client-provisioner
  strategy:
    type: Recreate                 # upgrade strategy: delete then recreate (default is rolling update)
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner   # name of the ServiceAccount created in the previous step
      containers:
        - name: nfs-client-provisioner
          image: registry.cn-beijing.aliyuncs.com/mydlq/nfs-subdir-external-provisioner:v4.0.0
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME   # provisioner name; the StorageClass created later must use the same value
              value: storage-nfs
            - name: NFS_SERVER         # NFS server address; must match the volumes section below
              value: 10.211.55.11
            - name: NFS_PATH           # NFS data directory; must match the volumes section below
              value: /root/data/nfs
            - name: ENABLE_LEADER_ELECTION
              value: "true"
      volumes:
        - name: nfs-client-root
          nfs:
            server: 10.211.55.11     # NFS server address
            path: /root/data/nfs     # NFS shared directory
Result:
# create
[root@k8s-master ~]# kubectl apply -f nfs-provisioner-deploy.yaml
deployment.apps/nfs-client-provisioner created
[root@k8s-master ~]#
[root@k8s-master ~]#
# check
[root@k8s-master ~]# kubectl get deploy,pod -n dev
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/nfs-client-provisioner 1/1 1 1 9s
NAME READY STATUS RESTARTS AGE
pod/nfs-client-provisioner-59b496764-5kts2 1/1 Running 0 9s
[root@k8s-master ~]#
2.4. Create the NFS StorageClass
When creating a PVC we often need to set storageClassName. That field holds the name of a StorageClass resource: the PVC uses it to pick a StorageClass, whose associated Provisioner component then dynamically creates the PV. So we need to create a StorageClass resource in advance.
Create nfs-storageclass.yaml:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  # StorageClass is cluster-scoped, so no namespace is set
  name: nfs-storage
  annotations:
    storageclass.kubernetes.io/is-default-class: "false"   # whether this is the default StorageClass
provisioner: storage-nfs    # provisioner name; must match the PROVISIONER_NAME env var in the Deployment above
parameters:
  archiveOnDelete: "true"   # "false": data is deleted when the PVC is deleted; "true": data is archived and kept
mountOptions:
  - hard                    # hard mount
  - nfsvers=4               # NFS version; set this according to the NFS server's version
Check the NFS server version:
# On the NFS server; "Server nfs v4" in the output means the server speaks NFSv4
[root@k8s-master ~]# nfsstat -v
Server packet stats:
packets udp tcp tcpconn
813 0 813 237
Server rpc stats:
calls badcalls badfmt badauth badclnt
585 228 228 0 0
Server reply cache:
hits misses nocache
0 0 585
Server io stats:
read write
0 0
Server read ahead cache:
size 0-10% 10-20% 20-30% 30-40% 40-50% 50-60% 60-70% 70-80% 80-90% 90-100% notfound
32 0 0 0 0 0 0 0 0 0 0 0
Server file handle cache:
lookup anon ncachedir ncachenondir stale
0 0 0 0 0
Server nfs v4:
null compound
8 1% 577 98%
Server nfs v4 operations:
op0-unused op1-unused op2-future access close
0 0% 0 0% 0 0% 36 2% 0 0%
commit create delegpurge delegreturn getattr
0 0% 5 0% 0 0% 0 0% 335 22%
getfh link lock lockt locku
55 3% 0 0% 0 0% 0 0% 0 0%
lookup lookup_root nverify open openattr
51 3% 0 0% 0 0% 0 0% 0 0%
open_conf open_dgrd putfh putpubfh putrootfh
0 0% 0 0% 344 23% 0 0% 21 1%
read readdir readlink remove rename
0 0% 3 0% 0 0% 0 0% 3 0%
renew restorefh savefh secinfo setattr
0 0% 0 0% 3 0% 0 0% 5 0%
setcltid setcltidconf verify write rellockowner
0 0% 0 0% 0 0% 0 0% 0 0%
bc_ctl bind_conn exchange_id create_ses destroy_ses
0 0% 0 0% 17 1% 10 0% 8 0%
free_stateid getdirdeleg getdevinfo getdevlist layoutcommit
0 0% 0 0% 0 0% 0 0% 0 0%
layoutget layoutreturn secinfononam sequence set_ssv
0 0% 0 0% 10 0% 535 36% 0 0%
test_stateid want_deleg destroy_clid reclaim_comp allocate
0 0% 0 0% 7 0% 9 0% 0 0%
copy copy_notify deallocate ioadvise layouterror
0 0% 0 0% 0 0% 0 0% 0 0%
layoutstats offloadcancel offloadstatus readplus seek
0 0% 0 0% 0 0% 0 0% 0 0%
write_same
0 0%
[root@k8s-master ~]#
# On an NFS client; "Client nfs v4" in the output means the client speaks NFSv4
[root@k8s-node1 ~]# nfsstat -c
Client rpc stats:
calls retrans authrefrsh
586 0 586
Client nfs v4:
null read write commit open
8 1% 0 0% 0 0% 0 0% 0 0%
open_conf open_noat open_dgrd close setattr
0 0% 0 0% 0 0% 0 0% 5 0%
fsinfo renew setclntid confirm lock
30 5% 0 0% 0 0% 0 0% 0 0%
lockt locku access getattr lookup
0 0% 0 0% 36 6% 46 7% 51 8%
lookup_root remove rename link symlink
10 1% 0 0% 3 0% 0 0% 0 0%
create pathconf statfs readlink readdir
5 0% 20 3% 92 15% 0 0% 3 0%
server_caps delegreturn getacl setacl fs_locations
50 8% 0 0% 0 0% 0 0% 0 0%
rel_lkowner secinfo fsid_present exchange_id create_session
0 0% 0 0% 0 0% 17 2% 10 1%
destroy_session sequence get_lease_time reclaim_comp layoutget
8 1% 165 28% 1 0% 9 1% 0 0%
getdevinfo layoutcommit layoutreturn secinfo_no test_stateid
0 0% 0 0% 0 0% 10 1% 0 0%
free_stateid getdevicelist bind_conn_to_ses destroy_clientid seek
0 0% 0 0% 0 0% 7 1% 0 0%
allocate deallocate layoutstats clone
0 0% 0 0% 0 0% 0 0%
Result:
# create
[root@k8s-master ~]# kubectl apply -f nfs-storageclass.yaml
storageclass.storage.k8s.io/nfs-storage created
[root@k8s-master ~]#
# check
[root@k8s-master ~]# kubectl get sc -n dev
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
nfs-storage storage-nfs Delete Immediate false 7s
3. Testing a PVC against the StorageClass
Create storage-pvc.yaml:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: storage-pvc
  namespace: dev
spec:
  storageClassName: nfs-storage   # must match the name of the StorageClass created above
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Mi
Result:
# create
[root@k8s-master ~]# kubectl apply -f storage-pvc.yaml
persistentvolumeclaim/storage-pvc created
[root@k8s-master ~]#
[root@k8s-master ~]#
# check the PVC
[root@k8s-master ~]# kubectl get pvc -n dev
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
storage-pvc Bound pvc-8b5da590-2436-472e-a671-038822f15252 1Mi RWO nfs-storage 6s
[root@k8s-master ~]#
[root@k8s-master ~]#
# check whether a PV was created dynamically
[root@k8s-master ~]# kubectl get pv -n dev
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pvc-8b5da590-2436-472e-a671-038822f15252 1Mi RWO Delete Bound dev/storage-pvc nfs-storage 19s
[root@k8s-master ~]#
# check whether a directory was created dynamically in the shared directory
[root@k8s-master ~]# cd /root/data/nfs/
[root@k8s-master nfs]# ls
dev-storage-pvc-pvc-8b5da590-2436-472e-a671-038822f15252
[root@k8s-master nfs]#
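To actually use the claim, mount it in a Pod like any other PVC. A minimal sketch (the Pod name and busybox image here are illustrative, not part of the article's setup):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: storage-pvc-test   # illustrative name
  namespace: dev
spec:
  containers:
    - name: app
      image: busybox:1.35
      # write a file into the volume, then stay alive so we can inspect it
      command: ["sh", "-c", "echo hello > /data/hello.txt && sleep 3600"]
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: storage-pvc   # the PVC created above
```

After the Pod starts, hello.txt should appear inside the dev-storage-pvc-... directory on the NFS share.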
4. Troubleshooting
If you run into problems, see https://blog.csdn.net/u011837804/article/details/128693933 for solutions.