Ceph Deployment
I. Preparation
1. Environment:
Role | Hostname | IP |
---|---|---|
ceph node 1 | ceph-node1 | 10.0.0.2 |
ceph node 2 | ceph-node2 | 10.0.0.3 |
ceph node 3 | ceph-node3 | 10.0.0.4 |
ceph-deploy admin node | ceph-deploy | 10.0.0.5 |
client (for mount testing) | whatever | 10.0.0.6 |
Each of the three ceph nodes needs a spare disk that is not mounted and not formatted.
OS: CentOS 7.5 (SELinux disabled, firewall disabled)
2. Install NTP on the ceph nodes and configure time synchronization (recommended)
sudo yum install ntp ntpdate ntp-doc
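Then enable and start the service and verify the node is actually syncing (a minimal check, assuming ntpd is the time daemon in use):
sudo systemctl enable ntpd
sudo systemctl start ntpd
ntpq -p   # the selected upstream source is marked with '*'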
Disable SELinux and disable the firewall (or open the required ports).
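On CentOS 7 that typically means the following (a sketch; if you keep the firewall, open the monitor port 6789/tcp and the OSD/MDS range 6800-7300/tcp instead):
setenforce 0                                                      # disable SELinux for the running system
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config   # persist across reboots
systemctl stop firewalld
systemctl disable firewalld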
3. Update the yum repos on all nodes
Switch to the Aliyun mirrors and add the EPEL repo:
cp -a /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.bak
cp -a /etc/yum.repos.d/epel.repo /etc/yum.repos.d/epel.repo.bak
curl -o /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
curl -o /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
Add the Ceph yum repo:
cat << EOM > /etc/yum.repos.d/ceph.repo
[Ceph]
name=Ceph packages for \$basearch
baseurl=https://mirrors.aliyun.com/ceph/rpm-nautilus/el7/\$basearch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://mirrors.aliyun.com/ceph/keys/release.asc
priority=1
[Ceph-noarch]
name=Ceph noarch packages
baseurl=https://mirrors.aliyun.com/ceph/rpm-nautilus/el7/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://mirrors.aliyun.com/ceph/keys/release.asc
priority=1
[ceph-source]
name=Ceph source packages
baseurl=https://mirrors.aliyun.com/ceph/rpm-nautilus/el7/SRPMS
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://mirrors.aliyun.com/ceph/keys/release.asc
priority=1
EOM
4. Create a cephdeploy user on every ceph node and on the ceph-deploy admin node
groupadd cephdeploy -g 1024
useradd cephdeploy -u 1024 -g 1024
Grant it sudo privileges:
echo "cephdeploy ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers
5. Configure /etc/hosts on every ceph node and on the ceph-deploy admin node
The entries must match each machine's hostname, and hostnames must not begin with a digit.
vim /etc/hosts
10.0.0.2 ceph-node1
10.0.0.3 ceph-node2
10.0.0.4 ceph-node3
10.0.0.5 ceph-deploy
6. Configure passwordless SSH login
As the cephdeploy user on the ceph-deploy admin node, generate an SSH key pair:
su - cephdeploy
ssh-keygen
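If password authentication is still enabled on the ceph nodes (and the cephdeploy user has a password, set with passwd cephdeploy if needed), the simplest way to distribute the key is ssh-copy-id:
ssh-copy-id cephdeploy@ceph-node1
ssh-copy-id cephdeploy@ceph-node2
ssh-copy-id cephdeploy@ceph-node3
Otherwise, set up authorized_keys by hand on each ceph node as shown next.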
On each ceph node, allow the ceph-deploy admin node to log in as cephdeploy without a password:
sed -i 's/#PubkeyAuthentication yes/PubkeyAuthentication yes/g' /etc/ssh/sshd_config
systemctl restart sshd
mkdir /home/cephdeploy/.ssh
chmod 700 /home/cephdeploy/.ssh
# paste the admin node's public key (~/.ssh/id_rsa.pub of cephdeploy on ceph-deploy) into this file:
vim /home/cephdeploy/.ssh/authorized_keys
chown -R cephdeploy:cephdeploy /home/cephdeploy/.ssh
chmod 600 /home/cephdeploy/.ssh/authorized_keys
7. Configure ~/.ssh/config on the ceph-deploy admin node
As the cephdeploy user:
vim .ssh/config
Host ceph-node1
Hostname ceph-node1
User cephdeploy
Port 22
Host ceph-node2
Hostname ceph-node2
User cephdeploy
Port 22
Host ceph-node3
Hostname ceph-node3
User cephdeploy
Port 22
Fix the permissions:
chmod 600 .ssh/config
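At this point passwordless login should work from the admin node; a quick check:
ssh ceph-node1 hostname   # should print ceph-node1 without prompting for a password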
8. Install ceph-deploy on the ceph-deploy admin node
sudo yum install ceph-deploy -y
9. Install ceph on the ceph nodes
sudo yum -y install ceph ceph-radosgw
II. Ceph Cluster
1. Initialize the ceph cluster
On the admin node, logged in as cephdeploy, create a directory to hold the configuration files and keys that ceph-deploy generates for the cluster:
mkdir my-cluster
cd my-cluster
ceph-deploy new ceph-node1 ceph-node2 ceph-node3
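ceph-deploy new writes an initial ceph.conf and a monitor keyring into this directory. If the nodes have more than one network interface, it helps to pin the public network in ceph.conf before deploying (10.0.0.0/24 is an assumption based on the environment table above):
echo "public_network = 10.0.0.0/24" >> ceph.conf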
Deploy the initial monitor(s) and gather the keys:
ceph-deploy mon create-initial
Use ceph-deploy to copy the configuration file and admin key to your admin node and your Ceph nodes, so that you can use the ceph CLI without having to specify the monitor address and ceph.client.admin.keyring each time you execute a command:
ceph-deploy admin ceph-node1 ceph-node2 ceph-node3
Deploy manager daemons:
ceph-deploy mgr create ceph-node1 ceph-node2 ceph-node3
Add three OSDs. For the purposes of these instructions, we assume you have an unused disk in each node called /dev/vdb. Be sure that the device is not currently in use and does not contain any important data.
ceph-deploy osd create --data {device} {ceph-node}
For example:
ceph-deploy osd create --data /dev/vdb ceph-node1
ceph-deploy osd create --data /dev/vdb ceph-node2
ceph-deploy osd create --data /dev/vdb ceph-node3
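With the OSDs in place, the cluster should reach HEALTH_OK; a quick check from any node holding the admin keyring:
sudo ceph -s   # expect HEALTH_OK with 3 osds up and in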
2. CephFS
Deploy metadata servers, create the data and metadata pools (32 placement groups each), then create the filesystem:
ceph-deploy mds create ceph-node1 ceph-node2 ceph-node3
ceph osd pool create cephfs_data 32 32
ceph osd pool create cephfs_metadata 32 32
ceph fs new mycephfs cephfs_metadata cephfs_data
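To confirm the filesystem came up (the MDS should transition to active):
ceph mds stat   # e.g. mycephfs:1 {0=ceph-node1=up:active} 2 up:standby
ceph fs ls      # lists mycephfs with cephfs_metadata and cephfs_data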
To mount the filesystem root on the client as admin, read the key from the admin keyring and pass it to mount.ceph:
cat /etc/ceph/ceph.client.admin.keyring
mount.ceph ceph-node1:6789:/ /mnt/ -o name=admin,secret="xxxxx"
To restrict a client to a subdirectory, first create /testdir inside the filesystem (e.g. under the admin mount above), then authorize a dedicated user and mount with that user's key (ceph fs authorize prints the key; it can also be retrieved with ceph auth get-key client.testuser). Note that the filesystem name must match the one created above:
ceph fs authorize mycephfs client.testuser /testdir rw
mount.ceph 10.0.0.2:6789,10.0.0.3:6789,10.0.0.4:6789:/testdir /mnt/ -o name=testuser,secret="pass"
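To make the mount persistent, an /etc/fstab entry along these lines works; storing the key in a separate file (the path here is just an example) keeps it out of fstab:
10.0.0.2:6789,10.0.0.3:6789,10.0.0.4:6789:/testdir  /mnt  ceph  name=testuser,secretfile=/etc/ceph/testuser.secret,noatime,_netdev  0  0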
3. Deploy RGW instances
ceph-deploy rgw create ceph-node1 ceph-node2 ceph-node3
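Each RGW instance listens on port 7480 by default; a quick smoke test:
curl http://ceph-node1:7480   # returns an anonymous S3 ListAllMyBuckets XML document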
4. Block storage (RBD)
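The commands below assume a pool named rbdpool01; if it does not exist yet, create and initialize it first:
ceph osd pool create rbdpool01 32 32
rbd pool init rbdpool01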
Create an image, map it on a client, then format and mount it (the monitor addresses follow the environment table above):
rbd create foo --size 4096 --image-feature layering -m 10.0.0.2,10.0.0.3,10.0.0.4 -k /etc/ceph/ceph.client.admin.keyring -p rbdpool01
sudo rbd map foo --name client.admin -m 10.0.0.2,10.0.0.3,10.0.0.4 -k /etc/ceph/ceph.client.admin.keyring -p rbdpool01
sudo mkfs.ext4 -m0 /dev/rbd/rbdpool01/foo
sudo mkdir /mnt/ceph-block-device
sudo mount /dev/rbd/rbdpool01/foo /mnt/ceph-block-device
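To verify the mount, and to detach the image cleanly when finished:
df -h /mnt/ceph-block-device
sudo umount /mnt/ceph-block-device
sudo rbd unmap /dev/rbd/rbdpool01/foo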
5. Ceph Dashboard
Official docs: https://docs.ceph.com/docs/master/mgr/dashboard/
yum -y install ceph-mgr-dashboard
ceph config set mgr mgr/dashboard/ssl false
ceph mgr module enable dashboard
ceph dashboard ac-user-create <username> <password> administrator
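The dashboard URL can then be read from the active mgr (with SSL disabled it defaults to port 8080):
ceph mgr services   # e.g. {"dashboard": "http://ceph-node1:8080/"}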
1. Enable the dashboard's Object Gateway management
Create a system user for RGW and hand its keys to the dashboard:
radosgw-admin user create --uid=CephDashboard --display-name=CephDashboard --system
radosgw-admin user info --uid=CephDashboard   # note the access_key and secret_key
ceph dashboard set-rgw-api-access-key <access_key>
ceph dashboard set-rgw-api-secret-key <secret_key>
6. System tuning
Widen the ephemeral port range (useful on busy OSD/RGW/client nodes):
echo "net.ipv4.ip_local_port_range = 1024 65535" >> /etc/sysctl.conf
References:
https://docs.ceph.com/docs/master/start/quick-ceph-deploy/
Li Hang: 分布式存储 Ceph 介绍及原理架构分享 (an introduction to Ceph distributed storage and its architecture)