
Ceph: placing different pools on different OSDs

2018-08-13  Joncc

Editing the CRUSH map

# extract the current (binary) CRUSH map from the cluster
ceph osd getcrushmap -o crush

# decompile it into an editable text file
crushtool -d crush -o crushmap.new

# edit the text map
vi crushmap.new

# recompile the edited map
crushtool -c crushmap.new -o new.crush

# inject the new map back into the cluster
ceph osd setcrushmap -i new.crush
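
After injecting the new map, it is worth confirming that the cluster sees the rules and hierarchy you expect. A minimal check (ceph osd crush tree assumes a reasonably recent release, Luminous or later; output format varies by version):

# list the CRUSH rules known to the cluster
ceph osd crush rule ls

# show the bucket hierarchy (roots, hosts, OSDs) and their weights
ceph osd crush tree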

PLACING DIFFERENT POOLS ON DIFFERENT OSDS:

Suppose you want to have most pools default to OSDs backed by large hard drives, but have some pools mapped to OSDs backed by fast solid-state drives (SSDs). It’s possible to have multiple independent CRUSH hierarchies within the same CRUSH map. Define two hierarchies with two different root nodes–one for hard disks (e.g., “root platter”) and one for SSDs (e.g., “root ssd”) as shown below:


device 0 osd.0
device 1 osd.1
device 2 osd.2
device 3 osd.3
device 4 osd.4
device 5 osd.5
device 6 osd.6
device 7 osd.7

      host ceph-osd-ssd-server-1 {
              id -1
              alg straw
              hash 0
              item osd.0 weight 1.00
              item osd.1 weight 1.00
      }

      host ceph-osd-ssd-server-2 {
              id -2
              alg straw
              hash 0
              item osd.2 weight 1.00
              item osd.3 weight 1.00
      }

      host ceph-osd-platter-server-1 {
              id -3
              alg straw
              hash 0
              item osd.4 weight 1.00
              item osd.5 weight 1.00
      }

      host ceph-osd-platter-server-2 {
              id -4
              alg straw
              hash 0
              item osd.6 weight 1.00
              item osd.7 weight 1.00
      }

      root platter {
              id -5
              alg straw
              hash 0
              item ceph-osd-platter-server-1 weight 2.00
              item ceph-osd-platter-server-2 weight 2.00
      }

      root ssd {
              id -6
              alg straw
              hash 0
              item ceph-osd-ssd-server-1 weight 2.00
              item ceph-osd-ssd-server-2 weight 2.00
      }

      rule data {
              ruleset 0
              type replicated
              min_size 2
              max_size 2
              step take platter
              step chooseleaf firstn 0 type host
              step emit
      }

      rule metadata {
              ruleset 1
              type replicated
              min_size 0
              max_size 10
              step take platter
              step chooseleaf firstn 0 type host
              step emit
      }

      rule rbd {
              ruleset 2
              type replicated
              min_size 0
              max_size 10
              step take platter
              step chooseleaf firstn 0 type host
              step emit
      }

      rule platter {
              ruleset 3
              type replicated
              min_size 0
              max_size 10
              step take platter
              step chooseleaf firstn 0 type host
              step emit
      }

      rule ssd {
              ruleset 4
              type replicated
              min_size 0
              max_size 4
              step take ssd
              step chooseleaf firstn 0 type host
              step emit
      }

      rule ssd-primary {
              ruleset 5
              type replicated
              min_size 5
              max_size 10
              step take ssd
              step chooseleaf firstn 1 type host
              step emit
              step take platter
              step chooseleaf firstn -1 type host
              step emit
      }

You can then set a pool to use the SSD rule by:

ceph osd pool set <poolname> crush_rule ssd
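
As a concrete sketch, the commands below create a pool, point it at the ssd rule from the map above, and read the setting back. The pool name ssd-pool and the PG count 128 are placeholders, and crush_rule (set by rule name) assumes a Luminous-or-later cluster; older releases use crush_ruleset with the numeric ruleset id instead.

# create a pool (name and pg_num are placeholders; adjust for your cluster)
ceph osd pool create ssd-pool 128 128

# map it to the ssd rule defined in the CRUSH map above
ceph osd pool set ssd-pool crush_rule ssd

# verify which rule the pool now uses
ceph osd pool get ssd-pool crush_rule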

Similarly, using the ssd-primary rule will cause each placement group in the pool to be placed with an SSD as the primary and platters as the replicas.
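Assigning that rule works the same way as above; <poolname> is again whichever pool should get the mixed SSD-primary/HDD-replica placement:

ceph osd pool set <poolname> crush_rule ssd-primary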

TUNING CRUSH, THE HARD WAY

If you can ensure that all clients are running recent code, you can adjust the tunables by extracting the CRUSH map, modifying the values, and reinjecting it into the cluster.
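As a sketch of that extract-modify-reinject workflow, the commands below set a few tunables to the values the upstream documentation suggests for recent clients; treat the specific numbers as an example rather than a prescription for your cluster:

# extract the current binary CRUSH map
ceph osd getcrushmap -o /tmp/crush

# adjust tunables on the binary map (example values for recent clients)
crushtool -i /tmp/crush --set-choose-local-tries 0 --set-choose-local-fallback-tries 0 --set-choose-total-tries 50 -o /tmp/crush.new

# reinject the tuned map into the cluster
ceph osd setcrushmap -i /tmp/crush.new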

LEGACY VALUES

For reference, the legacy values for the CRUSH tunables can be set with:

crushtool -i /tmp/crush --set-choose-local-tries 2 --set-choose-local-fallback-tries 5 --set-choose-total-tries 19 --set-chooseleaf-descend-once 0 --set-chooseleaf-vary-r 0 -o /tmp/crush.legacy
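
To actually apply those legacy values to a running cluster, inject the resulting map the same way as any other compiled CRUSH map:

ceph osd setcrushmap -i /tmp/crush.legacy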
