
Introducing the device classes feature in kolla ceph

2019-04-23  wangwDavid

CRUSH device classes

kolla ceph is currently based on the Luminous release, which introduced the device classes feature.

https://ceph.com/community/new-luminous-crush-device-classes/

Here is a brief description of why this feature was introduced:

The CRUSH management problem

Ceph clusters are often built with multiple types of storage devices: HDD, SSD, NVMe, and so on. Ceph calls these different types of storage devices device classes, to avoid confusion with the types of other CRUSH buckets (host, rack, and so on). As we all know, Ceph can create RADOS pools for different datasets or workloads and assign each pool a different CRUSH rule to control its data placement.

However, writing CRUSH rules that place data on a specific class of device used to be a tedious process. To solve this, Luminous adds a new property to every OSD: its device class. By default, an OSD automatically sets its device class to hdd, ssd, or nvme based on the hardware properties exposed by the Linux kernel. You can check the result with ceph osd tree.
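The hardware property involved is mainly the block device's rotational flag. A quick way to see what the kernel reports for a given disk (the device name sda here is only an example):

$ cat /sys/block/sda/queue/rotational
1

A value of 1 marks a rotational disk (classified as hdd); 0 marks a non-rotational one (ssd or nvme).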

Related commands:

$ ceph osd crush rm-device-class osd.2 osd.3
done removing class of osd(s): 2,3
$ ceph osd crush set-device-class ssd osd.2 osd.3
set osd(s) 2,3 to class 'ssd'

A CRUSH rule can restrict data placement to a specific device class. The following command creates a replicated rule that places data on the given device class:

ceph osd crush rule create-replicated <rule-name> <root> <failure-domain-type> <device-class>

For example:

$ ceph osd crush rule create-replicated fast default host ssd
$ ceph osd erasure-code-profile set myprofile k=4 m=2 crush-device-class=ssd crush-failure-domain=host

Then create the corresponding pool:

$ ceph osd pool create ecpool 64 erasure myprofile

The command above automatically creates a crush rule with the same name as the pool:

rule ecpool {
    id 2
    type erasure
    min_size 3
    max_size 6
    step set_chooseleaf_tries 5
    step set_choose_tries 100
    step take default class ssd
    step chooseleaf indep 0 type host
    step emit
}
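
The replicated rule fast created earlier is attached the same way, at pool creation time. A minimal sketch (the pool name fastpool and the PG counts are arbitrary):

$ ceph osd pool create fastpool 64 64 replicated fast
$ ceph osd pool get fastpool crush_rule
crush_rule: fast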

Other useful commands list the known classes, list the OSDs in a class, and rename a class:

$ ceph osd crush class ls
[
  "hdd",
  "ssd"
]
$ ceph osd crush class ls-osd ssd
0
1
$ ceph osd crush class rename <srcname> <dstname>

Introducing the ceph device classes feature in kolla

commit: https://review.opendev.org/#/c/610896/

The inventory configuration is as follows:

[storage-osd]
ceph-node1 ansible_user=kollasu network_interface=eth0 api_interface=eth0 storage_interface=eth0 cluster_interface=eth0 device_class=hdd
ceph-node2 ansible_user=kollasu network_interface=eth0 api_interface=eth0 storage_interface=eth0 cluster_interface=eth0 device_class=hdd
ceph-node3 ansible_user=kollasu network_interface=eth0 api_interface=eth0 storage_interface=eth0 cluster_interface=eth0 device_class=hdd
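
Nodes backed by different media simply declare different classes. A hypothetical line for an SSD-backed node, following the same pattern:

ceph-node4 ansible_user=kollasu network_interface=eth0 api_interface=eth0 storage_interface=eth0 cluster_interface=eth0 device_class=ssd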

Then redeploy ceph, and every OSD will show a device class in front of it:

ID CLASS WEIGHT  TYPE NAME              STATUS REWEIGHT PRI-AFF
-1       0.44989 root default                                   
-3       0.14996     host 192.168.10.11                         
 1   hdd 0.04999         osd.1              up  1.00000 1.00000
 4   hdd 0.04999         osd.4              up  1.00000 1.00000
 7   hdd 0.04999         osd.7              up  1.00000 1.00000
-2       0.14996     host 192.168.10.12                         
 0   hdd 0.04999         osd.0              up  1.00000 1.00000
 3   hdd 0.04999         osd.3              up  1.00000 1.00000
 5   hdd 0.04999         osd.5              up  1.00000 1.00000
-4       0.14996     host 192.168.10.13                          
 2   hdd 0.04999         osd.2              up  1.00000 1.00000
 6   hdd 0.04999         osd.6              up  1.00000 1.00000
 8   hdd 0.04999         osd.8              up  1.00000 1.00000

Pools can then be created against the corresponding device class.

kolla's previous way of creating pools did not support erasure coding or device classes particularly well: customizing pool parameters meant overriding parameters kolla defines in defaults/main.yml, and it could not create an arbitrary number of pools, which was rather rigid. I changed the pool creation mechanism to define each set of pools as a dictionary. For the cinder pools, for example, you can customize cinder_pools in globals.yml and add one dictionary entry per pool you want to create, like this:

cinder_pools:
  cinder_volume:
    pool_name: "{{ ceph_cinder_pool_name }}"
    pool_type: "erasure"
    pool_pg_num: 512
    pool_pgp_num: 512
    pool_erasure_name: "erasure-profile-cinder"
    pool_erasure_profile: "k=2 m=1 ruleset-failure-domain=host crush-device-class=hdd"
    pool_cache_enable: "true"
    pool_cache_mode: "writeback"
    pool_cache_rule_name: "cache-cinder"
    pool_cache_rule: "cache host ssd"
    pool_cache_pg_num: 128
    pool_cache_pgp_num: 128
    pool_application: "rbd"
  cinder_backup:
    pool_name: "{{ ceph_cinder_backup_pool_name }}"
    pool_type: "replicated"
    pool_pg_num: 128
    pool_pgp_num: 128
    pool_rule_name: "disks-cinder-backup"
    pool_rule: "default host hdd"
    pool_application: "rbd"
  cinder_volume2:
    pool_name: "volumes2"
    pool_type: "replicated"
    pool_pg_num: 128
    pool_pgp_num: 128
    pool_rule_name: "disks-cinder-volumes2"
    pool_rule: "default host hdd"
    pool_application: "rbd"

The above defines three pools; the type and parameters of each pool can be customized independently.
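
After redeploying, you can confirm what was created, for example (output omitted):

$ ceph osd pool ls detail
$ ceph osd crush rule ls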

The exact rules are as follows:

A complete pool definition contains the following items:
======================================================
item name               requirement
======================================================
pool_name               Required
pool_type               Required
pool_pg_num             Required
pool_pgp_num            Required
pool_erasure_name       Optional, required when pool_type is erasure
pool_erasure_profile    Optional, required when pool_type is erasure
pool_rule_name          Optional, required when pool_type is replicated
pool_rule               Optional, required when pool_type is replicated
pool_cache_enable       Optional, default is false
pool_cache_mode         Optional, required when pool_cache_enable is true
pool_cache_rule_name    Optional, required when pool_cache_enable is true
pool_cache_rule         Optional, required when pool_cache_enable is true
pool_cache_pg_num       Optional, required when pool_cache_enable is true
pool_cache_pgp_num      Optional, required when pool_cache_enable is true
pool_application        Required

For an erasure-coded pool, you only need to define pool_erasure_profile; the profile is created under the name pool_erasure_name and is passed when the pool is created, and ceph then automatically creates a crush rule with the same name as the pool.
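
Roughly speaking, the cinder_volume definition above corresponds to the following manual steps. This is only a sketch: it assumes the pool name resolves to volumes, that the cache pool is named volumes-cache, and that a separate crush root named cache holds the cache OSDs (as the pool_cache_rule string suggests); the profile keys use the Luminous crush-* spelling, as in the earlier example:

$ ceph osd erasure-code-profile set erasure-profile-cinder k=2 m=1 crush-failure-domain=host crush-device-class=hdd
$ ceph osd pool create volumes 512 512 erasure erasure-profile-cinder
$ ceph osd crush rule create-replicated cache-cinder cache host ssd
$ ceph osd pool create volumes-cache 128 128 replicated cache-cinder
$ ceph osd tier add volumes volumes-cache
$ ceph osd tier cache-mode volumes-cache writeback
$ ceph osd tier set-overlay volumes volumes-cache
$ ceph osd pool application enable volumes rbd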
