You can paste both outputs in your question.

Finally, set pools to use the rules.
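Assigning a rule to a pool is a single command; the pool and rule names below are placeholders, and the second command simply confirms that the assignment took:

ceph osd pool set {pool-name} crush_rule {rule-name}   # point the pool at the rule
ceph osd pool get {pool-name} crush_rule               # verify the change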

The second rule works a little differently.


ceph osd crush add-bucket CK13 rack
ceph osd crush move CK13 room=0513-R-0050
ceph osd crush move 0513-R-0050 root=default
ceph osd crush move cephflash21a-ff5578c275 rack=CK13

Now you are one step away from having a functional cluster.
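To sanity-check the hierarchy after these moves (a suggestion, not part of the original sequence), print the tree:

ceph osd tree         # buckets, OSDs, and their up/in state
ceph osd crush tree   # the CRUSH hierarchy on its own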

2. Modify the test pool.

CRUSH Rules: When you store data in a pool, placement of the object and its replicas (or chunks for erasure-coded pools) in your cluster is governed by CRUSH rules.
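To see which rules already exist on a cluster and what they contain, list them and dump one by name; replicated_rule is only the usual default name, so substitute whatever the list returns:

ceph osd crush rule ls                     # names of all rules
ceph osd crush rule dump replicated_rule   # full definition of one rule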

Because systemctl status ceph-osd@60 reports success, and running ceph -s shows it as up and in.
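To see where that OSD sits in the CRUSH hierarchy, which is what actually governs whether it receives data, query it directly; osd.60 here is just the OSD from the status check above:

ceph osd find 60   # reports the OSD's address, host, and CRUSH location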

To get the CRUSH map for your cluster, execute the following: ceph osd getcrushmap -o {compiled-crushmap-filename}. Ceph will output (-o) a compiled CRUSH map to the filename you specified.
On a fresh cluster, or one without any custom rules, decompiling that file shows only the default buckets and the stock replicated rule.
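A minimal round trip for inspecting and editing the map looks like the following; the filenames are arbitrary:

ceph osd getcrushmap -o crushmap.bin        # export the compiled (binary) map
crushtool -d crushmap.bin -o crushmap.txt   # decompile it to editable text
crushtool -c crushmap.txt -o crushmap.new   # recompile after editing
ceph osd setcrushmap -i crushmap.new        # inject the new map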

# ceph osd pool set $POOL_NAME crush_rule replicated_nvme

To remove an OSD entirely, delete its authentication key and then remove it from the cluster map:

ceph auth del osd.{num}   # delete the OSD authentication key
ceph osd rm osd.{num}     # remove the OSD
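The replicated_nvme rule referenced above has to exist before a pool can point at it. Assuming the target OSDs carry the nvme device class, a device-class-aware replicated rule can be created as follows; the root (default) and failure domain (host) are the usual choices, not something dictated by the text:

ceph osd crush rule create-replicated replicated_nvme default host nvme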

For example, ceph osd crush rule create-simple deleteme creates a throwaway rule named deleteme; the command also expects a root bucket and a failure-domain type.
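A complete invocation, assuming the stock default root and a host failure domain, would look like this; the rule can be dropped again once you are done experimenting:

ceph osd crush rule create-simple deleteme default host   # create a simple replicated rule
ceph osd crush rule rm deleteme                           # remove it again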

The first two commands are simply removing and then re-adding a distinct label on each OSD you want to create a new pool for.
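Those two commands are not quoted here; if the label in question is a CRUSH device class, a plausible reconstruction looks like this, with osd.60 and the nvme class purely illustrative:

ceph osd crush rm-device-class osd.60         # clear the OSD's existing class
ceph osd crush set-device-class nvme osd.60   # assign the distinct class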

You can then run a Python check script to ensure that the OSDs in all the PGs reside in separate failure domains.
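If such a script is not at hand, crushtool can simulate mappings against a rule offline; the rule id 0, replica count 3, and crushmap.bin (the file exported earlier) are assumptions:

crushtool -i crushmap.bin --test --rule 0 --num-rep 3 --show-mappings       # print simulated OSD sets
crushtool -i crushmap.bin --test --rule 0 --num-rep 3 --show-bad-mappings   # show only mappings that fail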

