I. Ceph RadosGW Object Storage
Data is not placed in a directory hierarchy; it lives at a single level in a flat address space.
Applications identify each individual data object by a unique address.
Each object can carry metadata that aids retrieval.
In Ceph's object storage gateway, access at the application level through a RESTful API means that applications interact with the gateway directly over HTTP/HTTPS. Access is granted to the application as a whole rather than to a specific user, allowing programs to perform object storage operations programmatically: creating, reading, updating, and deleting objects, managing buckets, permissions, and so on. The gateway provides a web-based interface to the Ceph object store and operates with the identity of the application instead of relying on per-user authentication.
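The flat address space described above can be sketched in a few lines of Python (illustrative only; the class and keys are invented for this example, not Ceph code):

```python
# Minimal sketch (not Ceph code): a flat object store keyed by unique IDs,
# where each object carries both its data and retrieval-aiding metadata.
class FlatObjectStore:
    def __init__(self):
        self._objects = {}  # one flat level: key -> (data, metadata)

    def put(self, key, data, metadata=None):
        # no directories: the key is the object's only address;
        # any "/" in it is just part of the name, not a path separator
        self._objects[key] = (data, dict(metadata or {}))

    def get(self, key):
        data, _ = self._objects[key]
        return data

    def head(self, key):
        # metadata can be read without fetching the data itself
        _, metadata = self._objects[key]
        return metadata


store = FlatObjectStore()
store.put("photos/2023/cat.jpg", b"\xff\xd8", {"content-type": "image/jpeg"})
print(store.head("photos/2023/cat.jpg"))  # {'content-type': 'image/jpeg'}
```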
1. RadosGW Object Storage Overview
RadosGW is an implementation of object storage (OSS, Object Storage Service). The RADOS gateway, also called the Ceph Object Gateway, RadosGW, or RGW, is a service that lets clients access a Ceph cluster through standard object storage APIs; it supports the AWS S3 and Swift APIs. Since Ceph 0.80 it has used the Civetweb web server (https://github.com/civetweb/civetweb) to answer API requests. Clients talk to RGW over HTTP/HTTPS using the RESTful API, while RGW talks to the Ceph cluster through librados. An RGW client authenticates as an RGW user via the S3 or Swift API, and the RGW gateway in turn authenticates to Ceph storage on the user's behalf using cephx.
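As a concrete illustration of the S3-style authentication RGW accepts: the classic S3 signature v2 scheme signs each request as Base64(HMAC-SHA1(secret_key, StringToSign)). A minimal sketch, with purely illustrative credentials and request line:

```python
import base64
import hashlib
import hmac


def sign_s3_v2(secret_key: str, string_to_sign: str) -> str:
    """AWS S3 signature v2: Base64(HMAC-SHA1(secret, StringToSign))."""
    digest = hmac.new(secret_key.encode(), string_to_sign.encode(),
                      hashlib.sha1).digest()
    return base64.b64encode(digest).decode()


# Illustrative values only (not real credentials or a real request)
string_to_sign = "GET\n\n\nTue, 27 Mar 2007 19:36:42 +0000\n/example-bucket/photo.jpg"
signature = sign_s3_v2("EXAMPLE-SECRET-KEY", string_to_sign)
print(signature)  # a 28-character Base64 string, sent in the Authorization header
```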
2. Characteristics of Object Storage
- Data is stored as objects; besides the data itself, each object carries its own metadata.
- Objects are retrieved by object ID. They cannot be accessed directly by file path and file name as in an ordinary file system; access goes through the API, or through third-party client tools (which are themselves wrappers around the API).
- Objects are not organized into a directory tree but stored in a flat namespace. Amazon S3 calls this flat namespace a bucket, while Swift calls it a container.
- A bucket must be authorized before it can be accessed; one account can be granted access to multiple buckets with different permissions on each. This makes horizontal scaling and fast retrieval easy. Object storage does not support client-side mounting, requires the client to name the object it wants on each access, and is not well suited to workloads with very frequent file modification and deletion.
Ceph uses buckets (storage spaces) to hold object data and to isolate multiple users. Data is stored in buckets, and user permissions are granted per bucket; a user can be given different permissions on different buckets, which is how access control is implemented.
2.1 Bucket Characteristics
(1) A bucket (storage space) is the container that holds objects (Objects); every object must belong to some bucket. Bucket attributes such as region, access permissions, and lifecycle can be set and modified, and these settings apply to every object in the bucket, so different management policies can be implemented simply by creating different buckets.
(2) The inside of a bucket is flat; there is no notion of file system directories, and every object belongs directly to its bucket.
(3) Each user can own multiple buckets.
(4) A bucket name must be globally unique within OSS and cannot be changed after creation.
(5) There is no limit on the number of objects inside a bucket.
2.2 Bucket Naming Rules
1. May contain only lowercase letters, digits, and hyphens (-).
2. Must start and end with a lowercase letter or a digit.
3. Must be 3-63 bytes long.
4. Must not be formatted like an IP address.
5. A bucket name must be globally unique.
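The rules above can be checked programmatically. A minimal validator sketch (general S3-style rules; rule 5, global uniqueness, can only be enforced server-side, and individual RGW deployments may differ in details such as allowing dots):

```python
import re

# Rules 1 and 2: lowercase letters, digits, hyphens; alphanumeric at both ends.
# The quantifier {1,61} also enforces rule 3 (3-63 characters total).
_BUCKET_RE = re.compile(r"^[a-z0-9][a-z0-9-]{1,61}[a-z0-9]$")
# Rule 4: reject names shaped like dotted-quad IP addresses.
_IP_RE = re.compile(r"^\d{1,3}(\.\d{1,3}){3}$")


def is_valid_bucket_name(name: str) -> bool:
    if not 3 <= len(name) <= 63:      # rule 3, stated explicitly
        return False
    if _IP_RE.match(name):            # rule 4
        return False
    return bool(_BUCKET_RE.match(name))


print(is_valid_bucket_name("test-bucket"))   # True
print(is_valid_bucket_name("Test_Bucket"))   # False (uppercase, underscore)
print(is_valid_bucket_name("192.168.1.1"))   # False (IP format)
```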
3. Object Storage Access Compared
Amazon S3: provides user, bucket, and object, representing users, buckets, and objects respectively. A bucket belongs to a user; access permissions on a bucket's namespace can be granted per user, and different users are allowed to access the same bucket.
OpenStack Swift: provides user, container, and object, corresponding to users, buckets, and objects. In addition it gives user a parent component, account, which represents a project or tenant (an OpenStack user); one account can contain one or more users, who can share the same set of containers, and the account provides the namespace for its containers.
RadosGW: provides user, subuser, bucket, and object. Its user corresponds to the S3 user, while subuser corresponds to the Swift user; neither user nor subuser provides a namespace for buckets, so buckets belonging to different users may not share a name. Since the Jewel release, however, RadosGW has offered an optional tenant component that provides a namespace for users and buckets. RadosGW uses ACLs to grant different users different permissions, such as:
Read: read permission
Write: write permission
Readwrite: read and write permission
full-control: full control
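A sketch of how such per-bucket grants might be evaluated (the grant table and function are illustrative, not RGW's internal representation):

```python
# Permission names follow the RGW ACL levels listed above; "acl" stands in
# for the extra rights that full-control adds beyond read/write.
PERMS = {
    "read":         {"read"},
    "write":        {"write"},
    "readwrite":    {"read", "write"},
    "full-control": {"read", "write", "acl"},
}

# Illustrative grant table: (user, bucket) -> permission name
grants = {
    ("user1", "test-bucket"): "readwrite",
    ("user2", "test-bucket"): "read",
}


def allowed(user: str, bucket: str, action: str) -> bool:
    perm = grants.get((user, bucket))
    return perm is not None and action in PERMS[perm]


print(allowed("user1", "test-bucket", "write"))  # True
print(allowed("user2", "test-bucket", "write"))  # False
```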
4. Deploy the RadosGW Service
4.1 Install and Initialize the RadosGW Service
root@ceph-mgr1:~#apt -y install radosgw
root@ceph-mgr2:~#apt -y install radosgw
cephadmin@ceph-mon1:~/ceph-cluster$ceph-deploy rgw create ceph-mgr1
cephadmin@ceph-mon1:~/ceph-cluster$ceph-deploy rgw create ceph-mgr2
#Verify the RadosGW service status
root@ceph-mon1:~# su - cephadmin
cephadmin@ceph-mon1:~$ cd ceph-cluster/
cephadmin@ceph-mon1:~/ceph-cluster$ ceph -s
cluster:
id: 3bc181dd-a0ef-4d72-a58d-ee4776e9870f
health: HEALTH_OK
services:
mon: 3 daemons, quorum ceph-mon1,ceph-mon2,ceph-mon3 (age 9m)
mgr: ceph-mgr1(active, since 2h), standbys: ceph-mgr2
mds: 2/2 daemons up, 2 standby
osd: 9 osds: 9 up (since 2h), 9 in (since 4d)
rgw: 2 daemons active (2 hosts, 1 zones)
data:
volumes: 1/1 healthy
pools: 9 pools, 289 pgs
objects: 358 objects, 200 MiB
usage: 781 MiB used, 1.8 TiB / 1.8 TiB avail
pgs: 289 active+clean
4.2 Verify the RadosGW Service Process
root@ceph-mgr1:~# ps -ef|grep rados
ceph 4716 1 0 01:48 ? 00:00:41 /usr/bin/radosgw -f --cluster ceph --name client.rgw.ceph-mgr1 --setuser ceph --setgroup ceph
root 5606 5583 0 03:57 pts/0 00:00:00 grep --color=auto rados
4.3 RadosGW Pool Types
root@ceph-mgr1:~# ceph osd pool ls
cephfs-metadata
cephfs-data
.rgw.root #contains realm information, e.g. zone and zonegroup
default.rgw.log #stores log data, used to record all kinds of log entries
default.rgw.control #system control pool; used to notify other RGW instances to refresh their caches when data changes
default.rgw.meta #metadata pool; stores rados objects under separate namespaces, including users.uid (user UIDs and their bucket mappings), users.keys (user keys), users.email (user email addresses), users.swift (user subusers), and root (buckets)
device_health_metrics
default.rgw.buckets.index #holds the bucket-to-object index
default.rgw.buckets.data #holds the object data
#Verify the RGW zone information
root@ceph-mgr1:~# radosgw-admin zone get --rgw-zone=default
{
"id": "055243e4-d13b-4858-b7cd-90aca81befe2",
"name": "default",
"domain_root": "default.rgw.meta:root",
"control_pool": "default.rgw.control",
"gc_pool": "default.rgw.log:gc",
"lc_pool": "default.rgw.log:lc",
"log_pool": "default.rgw.log",
"intent_log_pool": "default.rgw.log:intent",
"usage_log_pool": "default.rgw.log:usage",
"roles_pool": "default.rgw.meta:roles",
"reshard_pool": "default.rgw.log:reshard",
"user_keys_pool": "default.rgw.meta:users.keys",
"user_email_pool": "default.rgw.meta:users.email",
"user_swift_pool": "default.rgw.meta:users.swift",
"user_uid_pool": "default.rgw.meta:users.uid",
"otp_pool": "default.rgw.otp",
"system_key": {
"access_key": "",
"secret_key": ""
},
"placement_pools": [
{
"key": "default-placement",
"val": {
"index_pool": "default.rgw.buckets.index",
"storage_classes": {
"STANDARD": {
"data_pool": "default.rgw.buckets.data"
}
},
"data_extra_pool": "default.rgw.buckets.non-ec",
"index_type": 0
}
}
],
"realm_id": "",
"notif_pool": "default.rgw.log:notif"
}
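The zone document maps each RGW function to a backing pool, and values of the form pool:namespace share one RADOS pool. A small sketch that extracts the distinct pools from a fragment of the output above:

```python
import json

# Fragment of the `radosgw-admin zone get` output shown above
zone = json.loads("""
{
  "control_pool": "default.rgw.control",
  "log_pool": "default.rgw.log",
  "user_uid_pool": "default.rgw.meta:users.uid",
  "placement_pools": [
    {"key": "default-placement",
     "val": {"index_pool": "default.rgw.buckets.index",
             "storage_classes": {"STANDARD": {"data_pool": "default.rgw.buckets.data"}}}}
  ]
}
""")


def backing_pools(zone: dict) -> set:
    """Collect RADOS pool names; 'pool:namespace' values share one pool."""
    pools = set()
    for key, val in zone.items():
        if key.endswith("_pool") and isinstance(val, str):
            pools.add(val.split(":")[0])
    for placement in zone.get("placement_pools", []):
        val = placement["val"]
        pools.add(val["index_pool"])
        for sc in val["storage_classes"].values():
            pools.add(sc["data_pool"])
    return pools


print(sorted(backing_pools(zone)))
```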
5. RadosGW High-Availability Configuration
5.1 RadosGW over HTTP
5.1.1 Customizing the HTTP Port
The configuration file can be edited on the ceph-deploy node and pushed to every node, or each radosgw server's configuration can be edited individually to the same settings; then restart the RGW service.
root@ceph-mgr1:~# cat /etc/ceph/ceph.conf
[global]
fsid = 3bc181dd-a0ef-4d72-a58d-ee4776e9870f
public_network = 172.17.0.0/16
cluster_network = 192.168.10.0/24
mon_initial_members = ceph-mon1
mon_host = 172.17.10.61
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
mon clock drift allowed = 2
mon clock drift warn backoff = 30
mon_allow_pool_delete = true
osd pool default ec profile = /var/log/ceph_pool_healthy
[client.rgw.ceph-mgr1]
rgw_host = ceph-mgr1
rgw_frontends = civetweb port=9900
request_timeout_ms=30000 num_threads=200
rgw_dns_name = rgw.qiange.com
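Since ceph.conf is plain INI, the section above can be generated or sanity-checked with configparser before pushing it out. A sketch (the temporary path stands in for /etc/ceph/ceph.conf; note the civetweb frontend options belong on one logical line):

```python
import configparser
import os
import tempfile

# Values copied from the sample configuration above
section = "client.rgw.ceph-mgr1"
cfg = configparser.ConfigParser()
cfg[section] = {
    "rgw_host": "ceph-mgr1",
    "rgw_frontends": "civetweb port=9900 request_timeout_ms=30000 num_threads=200",
    "rgw_dns_name": "rgw.qiange.com",
}

# Stand-in for /etc/ceph/ceph.conf so the sketch is side-effect free
path = os.path.join(tempfile.mkdtemp(), "ceph.conf")
with open(path, "w") as f:
    cfg.write(f)

# Read it back the way a deployment check might
check = configparser.ConfigParser()
check.read(path)
print(check[section]["rgw_frontends"])
```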
5.2 RadosGW over HTTPS
Generate a self-signed certificate on the rgw node and configure radosgw to enable SSL.
5.2.1 Self-Signed Certificate
root@ceph-mgr1:/etc/ceph# mkdir certs
root@ceph-mgr1:/etc/ceph# cd certs
root@ceph-mgr1:/etc/ceph/certs#openssl genrsa -out web.key 2048
root@ceph-mgr1:/etc/ceph/certs#openssl req -new -x509 -key /etc/ceph/certs/web.key -out web.crt -subj "/CN=rgw.qiange.com"
root@ceph-mgr1:/etc/ceph/certs#cat web.crt web.key > web.pem
root@ceph-mgr1:/etc/ceph/certs# tree
.
├── web.crt
├── web.key
└── web.pem
0 directories, 3 files
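Before pointing radosgw at the certificate, it is worth confirming that the subject CN matches the rgw_dns_name clients will use. A sketch that repeats the generation in a throwaway directory and inspects the result:

```shell
# Repeat the generation in a throwaway directory so the check is
# side-effect free, then inspect the certificate subject.
cd "$(mktemp -d)"
openssl genrsa -out web.key 2048
openssl req -new -x509 -key web.key -out web.crt -subj "/CN=rgw.qiange.com"
cat web.crt web.key > web.pem
# Confirm the subject CN matches the rgw_dns_name the clients will use
openssl x509 -in web.crt -noout -subject
```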
5.2.2 SSL Configuration
[root@ceph-mgr2 certs]# vim /etc/ceph/ceph.conf
[client.rgw.ceph-mgr2]
rgw_host = ceph-mgr2
rgw_frontends = "civetweb port=9900+9443s ssl_certificate=/etc/ceph/certs/web.pem"
#Restart the service
[root@ceph-mgr1 certs]# systemctl restart ceph-radosgw@rgw.ceph-mgr1.service
#Verify: check that the service ports are listening
[root@ceph-mgr2 certs]# lsof -i:9900
5.3 Logging and Other Tuning
#Create the log directory:
[root@ceph-mgr1 certs]# mkdir /var/log/radosgw
[root@ceph-mgr1 certs]# chown ceph.ceph /var/log/radosgw
#Current configuration
[root@ceph-mgr1 ceph]# vim ceph.conf
[client.rgw.ceph-mgr1]
rgw_host = ceph-mgr1
rgw_frontends = civetweb port=9900+8443s
ssl_certificate=/etc/ceph/certs/civetweb.pem
error_log_file=/var/log/radosgw/civetweb.error.log
access_log_file=/var/log/radosgw/civetweb.access.log
request_timeout_ms=30000 num_threads=200
#Restart the service
[root@ceph-mgr2 certs]# systemctl restart ceph-radosgw@rgw.ceph-mgr2.service
#Access test:
[root@ceph-mgr2 certs]# curl -k https://172.31.6.108:8443
6. Testing Data Reads and Writes
6.1 RGW Server Configuration
In production, the configuration parameters of every RGW node are kept identical.
root@ceph-mgr1:/etc/ceph# cat ceph.conf
[global]
fsid = 3bc181dd-a0ef-4d72-a58d-ee4776e9870f
public_network = 172.17.0.0/16
cluster_network = 192.168.10.0/24
mon_initial_members = ceph-mon1
mon_host = 172.17.10.61
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
mon clock drift allowed = 2
mon clock drift warn backoff = 30
mon_allow_pool_delete = true
osd pool default ec profile = /var/log/ceph_pool_healthy
[client.rgw.ceph-mgr1]
rgw_host = ceph-mgr1
rgw_frontends = civetweb port=9900
request_timeout_ms=30000 num_threads=200
rgw_dns_name = rgw.qiange.com
[client.rgw.ceph-mgr2]
rgw_host = ceph-mgr2
rgw_frontends = civetweb port=9900
request_timeout_ms=30000 num_threads=200
rgw_dns_name = rgw.qiange.com
6.2 Create an RGW Account
cephadmin@ceph-mon1:~/ceph-cluster$ radosgw-admin user create --uid="user1" --display-name="user2"
{
"user_id": "user1",
"display_name": "user2",
"email": "",
"suspended": 0,
"max_buckets": 1000,
"subusers": [],
"keys": [
{
"user": "user1",
"access_key": "NETEGA5FB1O2QK9OHOEA",
"secret_key": "7DivWbNrfEdc5usucGFqvCtJDPCMNVF0QcSjfTIy"
}
],
"swift_keys": [],
"caps": [],
"op_mask": "read, write, delete",
"default_placement": "",
"default_storage_class": "",
"placement_tags": [],
"bucket_quota": {
"enabled": false,
"check_on_raw": false,
"max_size": -1,
"max_size_kb": 0,
"max_objects": -1
},
"user_quota": {
"enabled": false,
"check_on_raw": false,
"max_size": -1,
"max_size_kb": 0,
"max_objects": -1
},
"temp_url_keys": [],
"type": "rgw",
"mfa_ids": []
}
6.3 Install the s3cmd Client
s3cmd is a command-line client that accesses the Ceph RGW to create buckets, upload, download, and otherwise manage data in object storage.
1. Download and install the s3cmd tool
cephadmin@ceph-mon1:~/ceph-cluster$sudo apt-cache madison s3cmd
cephadmin@ceph-mon1:~/ceph-cluster$sudo apt install s3cmd
2. Configure the command environment
cephadmin@ceph-mon1:~/ceph-cluster$ s3cmd --configure
Enter new values or accept defaults in brackets with Enter.
Refer to user manual for detailed description of all options.
Access key and Secret key are your identifiers for Amazon S3. Leave them empty for using the env variables.
Access Key: NETEGA5FB1O2QK9OHOEA #enter the user's access key
Secret Key: 7DivWbNrfEdc5usucGFqvCtJDPCMNVF0QcSjfTIy #enter the user's secret key
Default Region [US]: #default region; just press Enter
Use "s3.amazonaws.com" for S3 Endpoint and not modify it to the target Amazon S3.
S3 Endpoint [s3.amazonaws.com]: rgw.qiange.com:9900 #the RGW domain name
Use "%(bucket)s.s3.amazonaws.com" to the target Amazon S3. "%(bucket)s" and "%(location)s" vars can be used
if the target S3 system supports dns based buckets.
DNS-style bucket+hostname:port template for accessing a bucket [%(bucket)s.s3.amazonaws.com]: rgw.qiange.com:9900/%(bucket) #bucket domain-name template
Encryption password is used to protect your files from reading
by unauthorized persons while in transfer to S3
Encryption password:
Path to GPG program [/usr/bin/gpg]: #just press Enter; path to the gpg binary, used for encryption
When using secure HTTPS protocol all communication with Amazon S3
servers is protected from 3rd party eavesdropping. This method is
slower than plain HTTP, and can only be proxied with Python 2.7 or newer
Use HTTPS protocol [Yes]: No #whether to use HTTPS
On some networks all internet access must go through a HTTP proxy.
Try setting it here if you can't connect to S3 directly
HTTP Proxy server name:
New settings: #final configuration
Access Key: NETEGA5FB1O2QK9OHOEA
Secret Key: 7DivWbNrfEdc5usucGFqvCtJDPCMNVF0QcSjfTIy
Default Region: region
S3 Endpoint: rgw.qiange.com:9900
DNS-style bucket+hostname:port template for accessing a bucket: rgw.qiange.com:9900/%(bucket)
Encryption password:
Path to GPG program: /usr/bin/gpg
Use HTTPS protocol: False
HTTP Proxy server name:
HTTP Proxy server port: 0
Test access with supplied credentials? [Y/n] y #test access with these settings
Please wait, attempting to list all buckets...
Success. Your access key and secret key worked fine :-)
Now verifying that encryption works...
Not configured. Never mind.
Save settings? [y/N] y
Configuration saved to '/home/cephadmin/.s3cfg' #path where the configuration is saved
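The dialog writes its answers to ~/.s3cfg; the lines most relevant to RGW access look roughly like this (an excerpt only; the generated file contains many more options):

```ini
# Excerpt of /home/cephadmin/.s3cfg after the dialog above
access_key = NETEGA5FB1O2QK9OHOEA
secret_key = 7DivWbNrfEdc5usucGFqvCtJDPCMNVF0QcSjfTIy
host_base = rgw.qiange.com:9900
host_bucket = rgw.qiange.com:9900/%(bucket)
use_https = False
```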
6.4 Verify Data Upload with the s3cmd Client
6.4.1 Create Buckets to Verify Permissions
A bucket (Bucket) is the container that holds objects (Objects); before uploading an Object of any type, you must first create a Bucket.
cephadmin@ceph-mon1:~/ceph-cluster$ s3cmd mb s3://test-bucket
Bucket 's3://test-bucket/' created
cephadmin@ceph-mon1:~/ceph-cluster$ s3cmd mb s3://test1-bucket
Bucket 's3://test1-bucket/' created
cephadmin@ceph-mon1:~/ceph-cluster$ s3cmd mb s3://test2-bucket
Bucket 's3://test2-bucket/' created
6.4.2 Upload and Verify Data
#Upload data
cephadmin@ceph-mon1:~$ s3cmd put 1.jpg s3://test-bucket
upload: '1.jpg' -> 's3://test-bucket/1.jpg' [1 of 1]
11532 of 11532 100% in 0s 204.64 kB/s done
cephadmin@ceph-mon1:~$ s3cmd put /etc/passwd s3://test1-bucket
upload: '/etc/passwd' -> 's3://test1-bucket/passwd' [1 of 1]
1778 of 1778 100% in 0s 46.85 kB/s done
cephadmin@ceph-mon1:~$ s3cmd put /etc/hosts s3://test2-bucket
upload: '/etc/hosts' -> 's3://test2-bucket/hosts' [1 of 1]
411 of 411 100% in 0s 7.22 kB/s done
#Verify the data
cephadmin@ceph-mon1:~$ s3cmd ls s3://test-bucket
2023-07-17 04:34 11532 s3://test-bucket/1.jpg
cephadmin@ceph-mon1:~$ s3cmd ls s3://test1-bucket
2023-07-17 04:34 1778 s3://test1-bucket/passwd
cephadmin@ceph-mon1:~$ s3cmd ls s3://test2-bucket
2023-07-17 04:34 411 s3://test2-bucket/hosts
6.4.3 Verify Data Download
cephadmin@ceph-mon1:~$ s3cmd get s3://test2-bucket/hosts /tmp
download: 's3://test2-bucket/hosts' -> '/tmp/hosts' [1 of 1]
411 of 411 100% in 0s 7.49 kB/s done
cephadmin@ceph-mon1:~$ s3cmd get s3://test-bucket/1.jpg /tmp
download: 's3://test-bucket/1.jpg' -> '/tmp/1.jpg' [1 of 1]
11532 of 11532 100% in 0s 741.34 kB/s done
6.4.4 Delete a File
cephadmin@ceph-mon1:/tmp$ s3cmd rm s3://test2-bucket/hosts
delete: 's3://test2-bucket/hosts'
cephadmin@ceph-mon1:/tmp$ s3cmd ls s3://test2-bucket
7. Errors
Cause: the domain name was set manually when configuring the command environment on the client.
II. Ceph Dashboard
2.1 Enable the Dashboard Plugin
Ceph mgr is a modular, multi-plugin component whose modules can be enabled or disabled individually. The following is run on the ceph-deploy server:
Note: newer releases require the dashboard package to be installed, and it must be installed on the mgr nodes, otherwise an error is raised.
[ceph@ceph-deploy ceph-cluster]$ ceph mgr module enable dashboard #enable the module
Note: after the module is enabled it still cannot be accessed directly; you must also disable or enable SSL and set the listen address.
2.2 Configure the Dashboard Module
The Ceph dashboard is configured on the mgr node, where SSL can be turned on or off, as follows:
[ceph@ceph-deploy ceph-cluster]$ceph config set mgr mgr/dashboard/ssl false #disable SSL
[ceph@ceph-deploy ceph-cluster]$ceph config set mgr mgr/dashboard/ceph-mgr1/server_addr 172.17.10.64 #set the dashboard listen address
[ceph@ceph-deploy ceph-cluster]$ceph config set mgr mgr/dashboard/ceph-mgr1/server_port 9999 #set the dashboard listen port
#Verify the state of the ceph cluster
cephadmin@ceph-mon1:~/ceph-cluster$ ceph -s
cluster:
id: 3bc181dd-a0ef-4d72-a58d-ee4776e9870f
health: HEALTH_OK
services:
mon: 3 daemons, quorum ceph-mon1,ceph-mon2,ceph-mon3 (age 78m)
mgr: ceph-mgr1(active, since 10m), standbys: ceph-mgr2
mds: 2/2 daemons up, 2 standby
osd: 9 osds: 9 up (since 3h), 9 in (since 4d)
rgw: 2 daemons active (2 hosts, 1 zones)
data:
volumes: 1/1 healthy
pools: 9 pools, 289 pgs
objects: 401 objects, 200 MiB
usage: 860 MiB used, 1.8 TiB / 1.8 TiB avail
pgs: 289 active+clean
2.3 Set the Dashboard Username and Password
ceph@ceph-deploy:/home/ceph/ceph-cluster$ echo "12345678" > pass.txt
#set the username to admin and the password to 12345678
ceph@ceph-deploy:/home/ceph/ceph-cluster$ ceph dashboard set-login-credentials admin -i pass.txt