Preface
Environment: CentOS 7.9, Kubernetes 1.22.17, and a Ceph cluster.
First, be clear about one thing: an RBD block device can only be mounted exclusively, that is, by a single client at a time, no matter whether that client is a physical server or a k8s pod. In short, one RBD image, one client. This is why a PV backed by RBD cannot, and does not, support the ReadWriteMany access mode.
Because an RBD block device can only be used by one client at a time, a pod that mounts an RBD volume must belong either to a single-replica Deployment or to a StatefulSet (which may run multiple replicas, since each replica claims its own volume).
Install the Ceph cluster
You need a Ceph cluster first; for full installation details see https://blog.csdn.net/MssGuo/article/details/122280657. Only the essential steps are given here.
Note: this cluster is installed with the ceph-deploy tool, which upstream no longer recommends; consult the official Ceph site.
#Prepare three servers and configure local hostname resolution
vim /etc/hosts
192.168.118.128 node1
192.168.118.129 node2
192.168.118.130 node3
#Basic preparation: stop the firewall, disable SELinux, set up NTP
systemctl stop firewalld
systemctl disable firewalld
vim /etc/selinux/config #set SELINUX=disabled
setenforce 0
yum install ntp -y
systemctl enable ntpd
systemctl start ntpd
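Before moving on, it is worth confirming that time is actually synchronizing on each node, since clock skew is a common cause of Ceph health warnings; a quick check:
ntpq -p #the peer prefixed with * is the currently selected time source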
#Configure the EPEL and Ceph repositories
yum install epel-release -y #install the EPEL repository
vim /etc/yum.repos.d/ceph.repo #configure the Ceph repository
[ceph]
name=ceph
baseurl=http://mirrors.aliyun.com/ceph/rpm-mimic/el7/x86_64/
enabled=1
gpgcheck=0
priority=1
[ceph-noarch]
name=cephnoarch
baseurl=http://mirrors.aliyun.com/ceph/rpm-mimic/el7/noarch/
enabled=1
gpgcheck=0
priority=1
[ceph-source]
name=Ceph source packages
baseurl=http://mirrors.aliyun.com/ceph/rpm-mimic/el7/SRPMS
enabled=1
gpgcheck=0
priority=1
#Passwordless SSH between the nodes
ssh-keygen -t rsa
ssh-copy-id -i /root/.ssh/id_rsa.pub root@node1
ssh-copy-id -i /root/.ssh/id_rsa.pub root@node2
ssh-copy-id -i /root/.ssh/id_rsa.pub root@node3
ssh root@node1 #verify each login works without a password
ssh root@node2
ssh root@node3
#Install the deployment tool on node1
yum install ceph-deploy -y
mkdir /etc/ceph && cd /etc/ceph #also create a directory to hold the files ceph-deploy generates
yum install -y python-setuptools #install the python-setuptools dependency first to avoid errors
ceph-deploy new node1 #create a cluster; node1 is a hostname, not the cluster name
yum install ceph ceph-radosgw -y #run on node1, node2 and node3 to install the packages
#On a client server (if any), install:
yum -y install ceph-common
cd /etc/ceph/ #all of the following commands are run from the cluster configuration directory
vim ceph.conf
public_network = 192.168.118.0/24 #the monitor network; a CIDR is enough
ceph-deploy mon create-initial #create and initialize the monitors
ceph-deploy admin node1 node2 node3 #push the config files to all nodes
ceph-deploy mon add node2 #add a second mon
ceph-deploy mon add node3 #add a third mon
ceph-deploy mgr create node1 #create an mgr; node1 is the hostname
ceph-deploy mgr create node2 #likewise create one on node2
ceph-deploy mgr create node3 #and one on node3
#List the disks on every node; each has two disks, sda and sdb, and sdb is the one we will add to the distributed storage
ceph-deploy disk list node1 #list the disks on node1
ceph-deploy disk list node2 #list the disks on node2
ceph-deploy disk list node3 #list the disks on node3
#zap destroys the data on the disk, effectively formatting it
ceph-deploy disk zap node1 /dev/sdb #wipe sdb on node1
ceph-deploy disk zap node2 /dev/sdb #wipe sdb on node2
ceph-deploy disk zap node3 /dev/sdb #wipe sdb on node3
ceph-deploy osd create --data /dev/sdb node1 #create an OSD from sdb on node1
ceph-deploy osd create --data /dev/sdb node2 #same for sdb on node2
ceph-deploy osd create --data /dev/sdb node3 #same for sdb on node3
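With the monitors, managers and OSDs in place, a quick sanity check from the admin node confirms the cluster is healthy before we start creating pools:
ceph -s #overall status; expect HEALTH_OK with 3 mons, 3 mgrs and 3 osds
ceph osd tree #all three OSDs should be up and in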
Create RBD block storage on the Ceph cluster
We are going to demonstrate static PVs in k8s, which use Ceph RBD block storage, so the Ceph cluster needs an RBD block device first. Below we create one on the Ceph admin node node1.
#First we need a pool, so create one
ceph osd pool create k8s-pool 16 #create a pool named k8s-pool
rbd create k8s --pool k8s-pool --size 1024 #create an RBD block device named k8s, 1 GiB in size
rbd feature disable k8s-pool/k8s object-map fast-diff deep-flatten #the CentOS 7 kernel RBD client does not support these newer image features, so disable them
#Do NOT run rbd map k8s-pool/k8s to map the image on a node, otherwise the k8s pod will report the image as already in use
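To double-check the image and its remaining feature set (purely a sanity check):
rbd info k8s-pool/k8s #the features line should now list only layering (plus exclusive-lock, if it was enabled by default)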
RBD block storage does not support ReadWriteMany
As the official page below documents, RBD volumes do not support the ReadWriteMany access mode; only ReadWriteOnce and ReadOnlyMany are available.
https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes
Configuring RBD block storage in k8s (static provisioning)
Create the secret
We need the Ceph cluster's client access key, and we store it as a Kubernetes Secret object, as follows:
#Run this command on the Ceph cluster to obtain the keyring
[root@node1 ceph]# ceph auth get-key client.admin
AQAgv4ZkabOqHBAAq+8Eh/Q/8raOcRLW/atLxA==
[root@node1 ceph]# cat /etc/ceph/ceph.mon.keyring #or look at the ceph.mon.keyring file in the cluster configuration directory
[mon.]
key = AQA8vYZkAAAAABAAhuMfp97xZYf8JgkWlHZsCA==
caps mon = allow *
[root@node1 ceph]#
#Now we have the cluster keyring, i.e. the key of client.admin
#Any client that maps an RBD device must present this key, so we create a secret to store it
echo -n 'AQAgv4ZkabOqHBAAq+8Eh/Q/8raOcRLW/atLxA==' | base64 #base64-encode the string; the -n flag matters because echo appends a newline by default
QVFBZ3Y0WmthYk9xSEJBQXErOEVoL1EvOHJhT2NSTFcvYXRMeEE9PQ== #the encoded value
[root@master ceph]# echo 'QVFBZ3Y0WmthYk9xSEJBQXErOEVoL1EvOHJhT2NSTFcvYXRMeEE9PQ==' | base64 --decode #decode it to verify
AQAgv4ZkabOqHBAAq+8Eh/Q/8raOcRLW/atLxA==[root@master ceph]# #no trailing newline, which is correct
Alternatively, encode it directly on the Ceph cluster:
ceph auth get-key client.admin | base64 #produces the same encoded value as above
#Write the Secret manifest
vim ceph-secret.yaml
apiVersion: v1
data:
  key: QVFBZ3Y0WmthYk9xSEJBQXErOEVoL1EvOHJhT2NSTFcvYXRMeEE9PQ== #the encoded value from above
kind: Secret
metadata:
  name: ceph-secret
  namespace: default
type: kubernetes.io/rbd #the rbd secret type built into k8s
kubectl apply -f ceph-secret.yaml #create the secret
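A quick check that the secret was created and carries the key:
kubectl get secret ceph-secret -o yaml #the data.key field should hold the base64 value from above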
Create the PV
#First, look at the fields the rbd volume source of a PV accepts
[root@master ceph]# kubectl explain pv.spec.rbd
FIELDS:
   fsType     filesystem type, e.g. "ext4" or "xfs"; defaults to ext4
   image      the rados image name; required
   keyring    keyring file for the RBD user; defaults to /etc/ceph/keyring
   monitors   the Ceph monitors; required
   pool       the rados pool name; defaults to rbd
   readOnly   whether to mount read-only; defaults to false
   secretRef  a secret holding the RBD user's authentication key; overrides keyring when set
   user       the rados user name; defaults to admin
#Write the PV manifest (PVs are cluster-scoped, so no namespace is needed)
vim rbd-pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: rdb-pv
spec:
  accessModes: #RBD only supports ReadWriteOnce and ReadOnlyMany
  - ReadWriteOnce
  capacity:
    storage: 200M
  rbd:
    monitors: #the Ceph monitors as IP:port; list several for high availability
    - '192.168.158.142:6789' #the Ceph monitor port is 6789
    - '192.168.158.143:6789'
    - '192.168.158.144:6789'
    pool: k8s-pool #the pool holding the RBD image, i.e. the k8s-pool created above
    image: k8s #the RBD image name, i.e. the k8s block device created on the Ceph cluster; "image" is simply what Ceph calls it
    fsType: xfs #filesystem used for the mount point inside the pod
    readOnly: false
    user: admin #the rados user; admin is the cluster's default and what we use
    secretRef:
      name: ceph-secret #the secret storing the admin user's keyring
  persistentVolumeReclaimPolicy: Delete
kubectl apply -f rbd-pv.yaml
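Until a claim binds it, the PV should be listed as Available:
kubectl get pv rdb-pv #STATUS should be Available for now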
Create the PVC
[root@master ceph]# cat rbd-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rbd-pvc
  namespace: default
spec:
  accessModes:
  - ReadWriteOnce #RBD only supports ReadWriteOnce and ReadOnlyMany
  resources:
    requests:
      storage: 200M
  storageClassName: "" #an empty string means: do not use any storage class
kubectl apply -f rbd-pvc.yaml
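Since the claim's size and access mode match the PV, binding should happen almost immediately:
kubectl get pvc rbd-pvc #STATUS should be Bound and VOLUME should be rdb-pv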
Install the client package on the k8s nodes
#Since we cannot know in advance which node a pod will be scheduled to, install ceph-common on every k8s node
#The package provides the rbd command, which kubelet invokes to map and mount the RBD device
yum install ceph-common -y
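To confirm kubelet will find the binary it needs on each node:
which rbd #should print /usr/bin/rbd
rbd --version #prints the installed Ceph client version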
Deploy the pod
vim nginx-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 1 #start with a single replica
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.18
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
      volumes:
      - name: www
        persistentVolumeClaim:
          claimName: rbd-pvc
kubectl apply -f nginx-deployment.yaml
Check the pod
[root@master ceph]# kubectl get pod
NAME READY STATUS RESTARTS AGE
nginx-77cbdf8dc8-sqwt2 1/1 Running 0 9m22s
Verify persistence
[root@master ceph]# kubectl exec -it nginx-77cbdf8dc8-sqwt2 -- bash #exec into the pod
root@nginx-77cbdf8dc8-sqwt2:/# cd /usr/share/nginx/html
root@nginx-77cbdf8dc8-sqwt2://usr/share/nginx/html# echo "good" >index.html #create an index page with some content
root@nginx-77cbdf8dc8-sqwt2://usr/share/nginx/html# curl localhost:80
good
root@nginx-77cbdf8dc8-sqwt2://usr/share/nginx/html# exit
[root@master ceph]# kubectl get pod -owide
NAME READY STATUS RESTARTS AGE IP NODE
nginx-77cbdf8dc8-sqwt2 1/1 Running 0 12m 10.244.1.45 node1
[root@master ceph]# curl 10.244.1.45:80 #reachable
good
#delete the pod
[root@master ceph]# kubectl delete pod nginx-77cbdf8dc8-sqwt2 --grace-period=0 --force
[root@master ceph]# kubectl get pod -owide
NAME READY STATUS RESTARTS AGE IP NODE
nginx-77cbdf8dc8-fkxrc 1/1 Running 0 20s 10.244.2.45 node2
[root@master ceph]# curl 10.244.2.45:80 #still reachable after the pod was recreated on node2, so the data persisted
good
[root@master ceph]#
#scale the deployment to 2 replicas and see what happens
[root@master ceph]# kubectl scale deployment nginx --replicas=2
[root@master ceph]# kubectl get pod -owide
NAME READY STATUS RESTARTS AGE IP NODE
nginx-77cbdf8dc8-564pv 0/1 ContainerCreating 0 9s <none> node1
nginx-77cbdf8dc8-fkxrc 1/1 Running 0 118s 10.244.2.45 node2
[root@master ceph]# kubectl describe pod nginx-77cbdf8dc8-564pv
.......
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 17s default-scheduler Successfully assigned default/nginx-77cbdf8dc8-564pv to node1
Warning FailedAttachVolume 17s attachdetach-controller Multi-Attach error for volume "rdb-pv" Volume is already used by pod(s) nginx-77cbdf8dc8-fkxrc
[root@master ceph]#
This confirms that when pods come from a Deployment, only one pod can mount the RBD volume; with two or more pods the attach fails. The reason is simple: as the official docs state, multiple pods cannot mount the same RBD volume, because an RBD block device cannot be used by more than one client.
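To recover, simply scale back down to one replica; the stuck pod is removed and the running pod keeps serving:
kubectl scale deployment nginx --replicas=1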
Configuring RBD block storage in k8s (dynamic provisioning)
First delete all the resources created during the static-provisioning test above.
Check the official docs
The rbd provisioner built into k8s has a pitfall; read on.
#The official page below shows that k8s ships a built-in rbd provisioner, so in theory we do not need to create one ourselves.
https://kubernetes.io/zh-cn/docs/concepts/storage/storage-classes/
Create a pool on the Ceph cluster
This assumes you already have a Ceph cluster. The storage class will create RBD block devices dynamically, so all we need to prepare on the Ceph side is the pool:
#create the pool on the Ceph cluster
ceph osd pool create k8s-pool 16 #create a pool named k8s-pool
Create the secret
#We again need the cluster keyring, i.e. the key of client.admin
#Any client that maps an RBD device must present this key, so we create a secret to store it
ceph auth get-key client.admin | base64 #run on the Ceph admin node
QVFBZ3Y0WmthYk9xSEJBQXErOEVoL1EvOHJhT2NSTFcvYXRMeEE9PQ== #the encoded value
#Write the Secret manifest
vim ceph-secret.yaml
apiVersion: v1
data:
  key: QVFBZ3Y0WmthYk9xSEJBQXErOEVoL1EvOHJhT2NSTFcvYXRMeEE9PQ== #the encoded value from above
kind: Secret
metadata:
  name: ceph-secret
  namespace: default
type: kubernetes.io/rbd #the rbd secret type built into k8s
kubectl apply -f ceph-secret.yaml #create the secret
Create the RBD storage class
#Official example: https://kubernetes.io/zh-cn/docs/concepts/storage/storage-classes/#ceph-rbd
vim rbd-storageclass.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ceph-rbd-storageclass
provisioner: kubernetes.io/rbd #the rbd provisioner built into k8s
allowVolumeExpansion: true #allow volume expansion
parameters:
  monitors: 192.168.158.142:6789,192.168.158.143:6789,192.168.158.144:6789 #all monitors on one line, comma-separated
  adminId: admin #the Ceph admin user
  adminSecretName: ceph-secret #the secret holding its key
  adminSecretNamespace: default
  pool: k8s-pool #the pool already created on the Ceph cluster
  userId: admin #the Ceph user used to map the image; normally a regular Ceph user, but we use admin here
  userSecretName: ceph-secret
  userSecretNamespace: default
  fsType: ext4
  imageFormat: "2"
  imageFeatures: "layering"
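Apply the storage class and confirm it exists before creating the PVC:
kubectl apply -f rbd-storageclass.yaml
kubectl get sc #ceph-rbd-storageclass should now be listed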
Create the PVC
vim rbd-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rbd-pvc
  namespace: default
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 200M
  storageClassName: "ceph-rbd-storageclass"
kubectl apply -f rbd-pvc.yaml
The PVC stays Pending
[root@master ceph]# kubectl describe pvc rbd-pvc
Warning ProvisioningFailed 9s (x2 over 18s) persistentvolume-controller Failed to provision volume with StorageClass "ceph-rbd-storageclass": failed to create rbd image: executable file not found in $PATH, command output:
#Some digging shows this problem has existed for years. The root cause is that the built-in rbd provisioner runs inside kube-controller-manager, whose container image does not ship the rbd binary. The controller-manager log confirms it:
[root@master ceph]# kubectl logs kube-controller-manager-master -n kube-system
E0613 03:26:28.072518 1 rbd.go:706] rbd: create volume failed, err: failed to create rbd image: executable file not found in $PATH, command output:
E0613 03:26:28.072599 1 goroutinemap.go:150] Operation for "provision-default/rbd-pvc[2759f972-7d36-44eb-bbdf-d35c049f4f9d]" failed. No retries permitted until 2023-06-13 03:28:30.072573469 +0000 UTC m=+6526.000817530 (durationBeforeRetry 2m2s). Error: failed to create rbd image: executable file not found in $PATH, command output:
I0613 03:26:28.097211 1 event.go:291] "Event occurred" object="default/rbd-pvc" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="Failed to provision volume with StorageClass \"ceph-rbd-storageclass\": failed to create rbd image: executable file not found in $PATH, command output: "
#The fix is to deploy a separate external provisioner instead of the built-in rbd provisioner.
Create the storage provisioner
Official docs: https://github.com/kubernetes-retired/external-storage/tree/master/ceph/rbd/deploy
The project offers two ways to install the provisioner, one without RBAC and one with RBAC; either works.
Method 1 (no RBAC):
vim deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rbd-provisioner
spec:
  replicas: 1
  selector:
    matchLabels:
      app: rbd-provisioner
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: rbd-provisioner
    spec:
      containers:
      - name: rbd-provisioner
        image: "quay.io/external_storage/rbd-provisioner:latest"
        env:
        - name: PROVISIONER_NAME
          value: ceph.com/rbd #remember this value, it is the provisioner's name
kubectl apply -f deployment.yaml
Method 2 (RBAC):
[root@master rbac]# vim clusterrole.yaml
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: rbd-provisioner
rules:
- apiGroups: [""]
  resources: ["persistentvolumes"]
  verbs: ["get", "list", "watch", "create", "delete"]
- apiGroups: [""]
  resources: ["persistentvolumeclaims"]
  verbs: ["get", "list", "watch", "update"]
- apiGroups: ["storage.k8s.io"]
  resources: ["storageclasses"]
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources: ["events"]
  verbs: ["create", "update", "patch"]
- apiGroups: [""]
  resources: ["services"]
  resourceNames: ["kube-dns","coredns"]
  verbs: ["list", "get"]
- apiGroups: [""]
  resources: ["endpoints"]
  verbs: ["get", "list", "watch", "create", "update", "patch"]
[root@master rbac]#
[root@master rbac]# vim clusterrolebinding.yaml
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: rbd-provisioner
subjects:
- kind: ServiceAccount
  name: rbd-provisioner
  namespace: default #adjust the namespace as needed
roleRef:
  kind: ClusterRole
  name: rbd-provisioner
  apiGroup: rbac.authorization.k8s.io
[root@master rbac]#
[root@master rbac]# vim role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: rbd-provisioner
rules:
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["get"]
- apiGroups: [""]
  resources: ["endpoints"]
  verbs: ["get", "list", "watch", "create", "update", "patch"]
[root@master rbac]#
[root@master rbac]# cat rolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: rbd-provisioner
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: rbd-provisioner
subjects:
- kind: ServiceAccount
  name: rbd-provisioner
  namespace: default #adjust the namespace as needed
[root@master rbac]#
[root@master rbac]# vim serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: rbd-provisioner
[root@master rbac]#
[root@master rbac]# vim deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rbd-provisioner
spec:
  replicas: 1
  selector:
    matchLabels:
      app: rbd-provisioner
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: rbd-provisioner
    spec:
      containers:
      - name: rbd-provisioner
        image: "quay.io/external_storage/rbd-provisioner:latest"
        env:
        - name: PROVISIONER_NAME
          value: ceph.com/rbd #remember this value, it is the provisioner's name
      serviceAccount: rbd-provisioner
[root@master rbac]#
kubectl apply -f ./rbac/
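Whichever method you chose, confirm the provisioner pod is running before recreating the storage class:
kubectl get pod -l app=rbd-provisioner #should show 1/1 Running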
Recreate the StorageClass with the new provisioner
[root@master ceph]# cat rbd-storageclass.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ceph-rbd-storageclass
provisioner: ceph.com/rbd #change this to the provisioner's name (the PROVISIONER_NAME value, not the deployment name)
allowVolumeExpansion: true
parameters:
  monitors: 192.168.158.142:6789,192.168.158.143:6789,192.168.158.144:6789
  adminId: admin
  adminSecretName: ceph-secret
  adminSecretNamespace: default
  pool: k8s-pool
  userId: admin
  userSecretName: ceph-secret
  userSecretNamespace: default
  fsType: ext4
  imageFormat: "2"
  imageFeatures: "layering"
[root@master ceph]#
kubectl delete -f rbd-storageclass.yaml
kubectl apply -f rbd-storageclass.yaml
#then create the PVC again; this time a PV is provisioned right away
Create the PVC again
kubectl delete -f rbd-pvc.yaml
kubectl apply -f rbd-pvc.yaml
[root@master ceph]# kubectl get -f rbd-pvc.yaml #the storage class has provisioned a PV for the claim
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
rbd-pvc Bound pvc-ab63babe-c743-4ea4-8d59-9944dc9ac000 191Mi RWO ceph-rbd-storageclass 8m44s
[root@master ceph]#
#back on the Ceph admin node:
[root@node1 ~]# rbd ls k8s-pool #an image, i.e. an RBD block device, has been created
kubernetes-dynamic-pvc-d2ce2db1-09a7-11ee-a9cd-72b4f5f91329 #(the name does not matter; the rbd-provisioner log shows the image being created)
[root@node1 ~]#
Create a Deployment to verify
[root@master ceph]# cat nginx-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.18
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
      volumes:
      - name: www
        persistentVolumeClaim:
          claimName: rbd-pvc
[root@master ceph]#
[root@master ceph]# kubectl apply -f nginx-deployment.yaml
Verify
[root@master ceph]# kubectl get pod
NAME READY STATUS RESTARTS AGE
nginx-77cbdf8dc8-nsdwn 1/1 Running 0 2m20s
[root@master ceph]# kubectl exec -it nginx-77cbdf8dc8-nsdwn -- bash
root@nginx-77cbdf8dc8-nsdwn:/# df -Th
Filesystem Type Size Used Avail Use% Mounted on
/dev/rbd0 ext4 181M 1.6M 176M 1% /usr/share/nginx/html
root@nginx-77cbdf8dc8-nsdwn:/# echo "good" >>/usr/share/nginx/html/index.html
root@nginx-77cbdf8dc8-nsdwn:/# curl localhost:80
good
root@nginx-77cbdf8dc8-nsdwn:/# exit
[root@master ceph]# kubectl describe pod nginx-77cbdf8dc8-nsdwn | grep -i ip
IP: 10.244.1.47
[root@master ceph]# curl 10.244.1.47:80
good
[root@master ceph]# kubectl delete pod nginx-77cbdf8dc8-nsdwn
[root@master ceph]# kubectl describe pod nginx-77cbdf8dc8-dczhq | grep -i ip
IP: 10.244.2.48
[root@master ceph]# curl 10.244.2.48:80
good
[root@master ceph]#
#verification succeeded; we will not repeat the demonstration that two or more pods cannot mount the same RBD volume for simultaneous read-write
Summary
1. You need a Ceph cluster first.
2. Install the client package on every k8s node: yum -y install ceph-common.
3. For static provisioning, first create the pool and the RBD block device (also called an image) on the Ceph cluster, then create a secret whose main job is to hold the keyring.
4. Create the PV, the PVC and the pod.
5. For dynamic provisioning, only the pool needs to be created on the Ceph cluster (no RBD image; the storage class creates those automatically), and likewise create the secret holding the keyring.
6. Create a storage provisioner: the one built into k8s is broken for rbd, so deploy the external rbd provisioner by following the upstream docs.
7. Create the storage class, referencing the provisioner and the secret created above.
8. Create a PVC; the storage class provisions the PV automatically. On the Ceph admin node, rbd ls k8s-pool shows the auto-created image.
9. Mount the PVC in a pod.
10. Because RBD PVs cannot use the ReadWriteMany access mode, RBD block storage does not suit workloads where several pods read and write one PVC concurrently; it is a better fit for pods created by a StatefulSet, where each pod owns its own PV, which matches RBD exactly (see the sketch below).
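To illustrate point 10, here is a minimal sketch of a StatefulSet that uses volumeClaimTemplates with the ceph-rbd-storageclass created above, so every replica gets its own PVC and therefore its own RBD image; it assumes a headless Service named nginx already exists, and was not tested in this environment:
vim nginx-statefulset.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: "nginx" #assumes a headless Service named nginx exists
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.18
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates: #one PVC (and thus one RBD image) per pod: www-web-0, www-web-1, www-web-2
  - metadata:
      name: www
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: "ceph-rbd-storageclass"
      resources:
        requests:
          storage: 200M
kubectl apply -f nginx-statefulset.yaml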