Initial setup
Create an RBD pool in Ceph:
ceph osd pool create k8s-data 32 32 replicated
ceph osd pool application enable k8s-data rbd
rbd pool init -p k8s-data
Add Ceph authorizations. Two users are needed: one for mounting RBD images and one for mounting CephFS (the mds capability is only required for the CephFS user):
ceph auth get-or-create client.k8s-user mon 'allow r' mds 'allow' osd 'allow * pool=k8s-data' -o /etc/ceph/ceph.client.k8s-user.keyring
ceph auth get-or-create client.k8s-cephfs-user mon 'allow r' mds 'allow' osd 'allow * pool=cephfs_data' -o /etc/ceph/ceph.client.k8s-cephfs-user.keyring
Install ceph-common on every node in the k8s cluster:
apt -y install ceph-common
Copy the Ceph configuration file and the keyring files to every node in the k8s cluster:
scp /etc/ceph/ceph.conf /etc/ceph/ceph.client.k8s-user.keyring /etc/ceph/ceph.client.k8s-cephfs-user.keyring root@xxx:/etc/ceph/
Using Ceph RBD in k8s
Pods in a k8s cluster can mount RBD images either directly through a pod volume or through a PV.
Volume
Create the image in the pool ahead of time:
rbd create volume1 -s 2G -p k8s-data
Configure a pod that mounts volume1 through a volume:
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-rbd-volume
spec:
  containers:
  - name: busybox
    image: busybox
    imagePullPolicy: IfNotPresent
    command: ["/bin/sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: data-volume
      mountPath: /data/
  volumes:
  - name: data-volume
    rbd:
      monitors: ["192.168.211.23:6789", "192.168.211.24:6789", "192.168.211.25:6789"] # mon node addresses
      pool: k8s-data # pool to use
      image: volume1 # RBD image to mount
      user: k8s-user # user for mounting the image; defaults to admin
      keyring: /etc/ceph/ceph.client.k8s-user.keyring # path to the user's keyring file
      fsType: xfs # filesystem the image is formatted with; defaults to ext4
Create the pod in the cluster; once it is ready, exec into it to verify the mount.
Because the pod uses the host's kernel, the RBD image is actually mapped and mounted on the host (visible there with rbd showmapped).
Alternatively, the keyring can be stored in a Secret and referenced from the pod through that Secret.
For example, save the k8s-user key in a Secret (note: the key must be base64-encoded first):
apiVersion: v1
kind: Secret
metadata:
  name: k8s-user-keyring
type: "kubernetes.io/rbd" # the type must be kubernetes.io/rbd
data:
  key: "QVFCSWlvVmszNVh5RXhBQWhkK1lwb3k3VHhvQkswQ2VkRE1zcWc9PQo="
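The data.key value above is simply the base64 encoding of the key line from the keyring file. A minimal, self-contained sketch of deriving it, using the sample (non-production) key from this article:

```shell
# Write a demo keyring containing the sample key from this article.
cat > /tmp/demo.keyring <<'EOF'
[client.k8s-user]
	key = AQBIioVk35XyExAAhd+Ypoy7TxoBK0CedDMsqg==
EOF

# Extract the key value (third field of the "key = ..." line).
key=$(awk '$1 == "key" {print $3}' /tmp/demo.keyring)

# base64-encode it for the Secret's data.key field. Note that echo appends a
# trailing newline, which is why the sample value above ends in "PQo=";
# printf '%s' "$key" | base64 would encode the key without the newline.
echo "$key" | base64
```

The sample Secret in this article includes the encoded trailing newline and evidently worked, but stripping the newline before encoding is the safer habit when you later need to compare keys.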
Create the Secret in the cluster, then configure a pod that references the keyring through the Secret (the image volume2 must first be created in the pool, like volume1):
apiVersion: v1
kind: Pod
metadata:
  name: pod-use-secret-keyring
spec:
  containers:
  - name: nginx
    image: nginx
    imagePullPolicy: IfNotPresent
    volumeMounts:
    - name: web
      mountPath: /usr/share/nginx/html/
  volumes:
  - name: web
    rbd:
      monitors: ["192.168.211.23:6789", "192.168.211.24:6789", "192.168.211.25:6789"]
      pool: k8s-data
      image: volume2
      user: k8s-user
      secretRef: # reference the keyring through the Secret
        name: k8s-user-keyring
      fsType: xfs
PV
Static PV
The static PV approach also requires creating the image in the pool ahead of time.
Create the PV:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: rbd-pv
spec:
  accessModes: ["ReadWriteOnce"]
  capacity:
    storage: 2Gi
  persistentVolumeReclaimPolicy: Retain
  rbd:
    monitors: ["192.168.211.23:6789", "192.168.211.24:6789", "192.168.211.25:6789"]
    pool: k8s-data
    image: volume3
    user: k8s-user
    secretRef:
      name: k8s-user-keyring
    fsType: xfs
Create the PVC:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rbd-pvc
  namespace: default
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 2Gi
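One caveat not covered in the original text: on a cluster that has a default StorageClass, a PVC with no storageClassName may be dynamically provisioned instead of binding to rbd-pv. To pin the claim to the static PV, set volumeName and an empty storageClassName; a hypothetical variant of the PVC above:

```yaml
# Variant of the PVC that binds explicitly to the static PV rbd-pv.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rbd-pvc
  namespace: default
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: ""   # disable dynamic provisioning for this claim
  volumeName: rbd-pv     # bind to the static PV created above
  resources:
    requests:
      storage: 2Gi
```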
Configure a pod that uses the PVC:
apiVersion: v1
kind: Pod
metadata:
  name: pod-use-rbd-pvc
spec:
  containers:
  - name: nginx
    image: nginx
    imagePullPolicy: IfNotPresent
    volumeMounts:
    - name: web
      mountPath: /usr/share/nginx/html/
  volumes:
  - name: web
    persistentVolumeClaim:
      claimName: rbd-pvc
      readOnly: false
Create the pod in the cluster; once it is ready, exec in to verify the mount.
Dynamic PV
Create the StorageClass:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rbd-storageclass
provisioner: kubernetes.io/rbd
reclaimPolicy: Retain
parameters:
  monitors: 192.168.211.23:6789,192.168.211.24:6789,192.168.211.25:6789 # mon node addresses
  adminId: k8s-user # user the provisioner uses to create images in the pool
  adminSecretName: k8s-user-keyring # Secret holding that user's key
  adminSecretNamespace: default # namespace of that Secret
  pool: k8s-data # storage pool
  userId: k8s-user # user used when mounting the RBD images
  userSecretName: k8s-user-keyring
  userSecretNamespace: default
  fsType: xfs
  imageFormat: "2" # format of the created RBD images
  imageFeatures: "layering" # features enabled on the created RBD images
Note that the in-tree kubernetes.io/rbd provisioner runs inside kube-controller-manager, which must have the rbd binary available; where it does not (e.g. a containerized control plane), an external provisioner or ceph-csi is needed.
Create the PVC:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rbd-pvc-with-sc
  namespace: default
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 2Gi
  storageClassName: rbd-storageclass
Configure a pod that uses the dynamically provisioned PVC:
apiVersion: v1
kind: Pod
metadata:
  name: pod-use-dynamic-pvc
spec:
  containers:
  - name: redis
    image: redis
    imagePullPolicy: IfNotPresent
    volumeMounts:
    - name: data
      mountPath: /data/redis/
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: rbd-pvc-with-sc
      readOnly: false
Create the pod in the cluster; once it is ready, exec in to verify the mount.
Using CephFS in k8s
CephFS can be mounted by multiple pods at the same time, providing shared data access.
Volume
First save the key of the client.k8s-cephfs-user user created earlier into a Secret (note that --from-literal takes the raw key; kubectl base64-encodes it for you):
key=$(cat /etc/ceph/ceph.client.k8s-cephfs-user.keyring |grep key|awk '{print $3}')
kubectl create secret generic k8s-cephfs-user-keyring --type=kubernetes.io/rbd --from-literal=key=$key
Configure pods that mount CephFS:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        imagePullPolicy: IfNotPresent
        volumeMounts:
        - name: web-data
          mountPath: /usr/share/nginx/html
      volumes:
      - name: web-data
        cephfs:
          monitors:
          - 192.168.211.23:6789
          - 192.168.211.24:6789
          - 192.168.211.25:6789
          path: /
          user: k8s-cephfs-user
          secretRef:
            name: k8s-cephfs-user-keyring
Verify the mount inside the pods.
Write a test page in one pod, then fetch it from the other pods to confirm the data is shared.
Static PV
Create the CephFS PV and PVC:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: cephfs-pv
spec:
  accessModes: ["ReadWriteMany"]
  capacity:
    storage: 2Gi
  persistentVolumeReclaimPolicy: Retain
  cephfs:
    monitors: ["192.168.211.23:6789", "192.168.211.24:6789", "192.168.211.25:6789"]
    path: /
    user: k8s-cephfs-user
    secretRef:
      name: k8s-cephfs-user-keyring
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cephfs-pvc
  namespace: default
spec:
  accessModes: ["ReadWriteMany"]
  resources:
    requests:
      storage: 2Gi
Configure a Deployment that uses cephfs-pvc:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-with-pvc
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
      storage: cephfs-pvc
  template:
    metadata:
      labels:
        app: nginx
        storage: cephfs-pvc
    spec:
      containers:
      - name: nginx
        image: nginx
        imagePullPolicy: IfNotPresent
        volumeMounts:
        - name: web
          mountPath: /usr/share/nginx/html/
      volumes:
      - name: web
        persistentVolumeClaim:
          claimName: cephfs-pvc
          readOnly: false
Exec into a pod to verify the mount.
The in-tree CephFS storage plugin in k8s does not support dynamic PVs; Ceph's official ceph-csi driver can be used instead to provide dynamic PVs on CephFS: https://github.com/ceph/ceph-csi
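As a rough illustration (not from the original article), a CephFS StorageClass for ceph-csi looks roughly like the following. The clusterID, fsName, and Secret names are placeholders that depend on your ceph-csi deployment; consult the ceph-csi examples for the authoritative parameter list:

```yaml
# Hypothetical ceph-csi CephFS StorageClass; all values are placeholders.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-cephfs-sc
provisioner: cephfs.csi.ceph.com  # ceph-csi CephFS driver name
parameters:
  clusterID: <ceph-cluster-id>    # Ceph fsid, as configured in the csi config map
  fsName: <cephfs-name>           # CephFS filesystem name
  csi.storage.k8s.io/provisioner-secret-name: csi-cephfs-secret
  csi.storage.k8s.io/provisioner-secret-namespace: default
  csi.storage.k8s.io/node-stage-secret-name: csi-cephfs-secret
  csi.storage.k8s.io/node-stage-secret-namespace: default
reclaimPolicy: Delete
```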