【MongoDB】— Deploying a MongoDB Sharded Cluster on CentOS 7.6
This article walks through deploying a MongoDB sharded cluster (Sharding Cluster) on CentOS 7.6, using three nodes with multiple mongod/mongos instances per node.
1 Sharded Cluster Planning
1.1 Server Nodes
[root@mysql-master data]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
172.21.209.122 mysql-manager mysql-manager.cloudserver.com
172.21.209.123 mysql-master mysql-master.cloudserver.com
172.21.209.124 mysql-slave01 mysql-slave01.cloudserver.com
172.21.209.125 mysql-slave02 mysql-slave02.cloudserver.com
[root@mysql-master data]#
1.2 Cluster Node Roles
Server nodes and ports:
Server: 172.21.209.123, ports: 38017, 38018, 38019, 38020, 38021
Server: 172.21.209.124, ports: 38017, 38018, 38019, 38020, 38021
Server: 172.21.209.125, ports: 38017, 38018, 38019, 38020, 38021
Port roles:
mongos【38017】: runs on all three nodes for high availability.
config server【38018】: runs on all three nodes as a replica set; arbiters are not supported.
Each node hosts one complete shard replica set:
Shard replica set 1: 172.21.209.123:【primary (38019), secondary (38020), arbiter (38021)】
Shard replica set 2: 172.21.209.124:【primary (38019), secondary (38020), arbiter (38021)】
Shard replica set 3: 172.21.209.125:【primary (38019), secondary (38020), arbiter (38021)】
Role summary:
Node / Role       Router    Config server    Shard replica set
172.21.209.123    mongos    config server    shard server1 primary, secondary, arbiter
172.21.209.124    mongos    config server    shard server2 primary, secondary, arbiter
172.21.209.125    mongos    config server    shard server3 primary, secondary, arbiter
Note: this layout can also serve as a template for a production deployment.
1.3 Detailed Node Role and Port Plan
Node / Role       Router          Config server          Shard replica set
172.21.209.123    mongos (38017)  config server (38018)  shard server1 primary (38019), secondary (38020), arbiter (38021)
172.21.209.124    mongos (38017)  config server (38018)  shard server2 primary (38019), secondary (38020), arbiter (38021)
172.21.209.125    mongos (38017)  config server (38018)  shard server3 primary (38019), secondary (38020), arbiter (38021)
Note: this plan is for a lab environment. Also note that the config server replica set does not support arbiters.
2 System Deployment
2.1 Obtain the MongoDB software and install it on all three nodes
For the base MongoDB installation, see the earlier article 《【MongoDB】— CentOS7.6部署MongoDB数据库》.
2.2 Cluster environment preparation
1. Create the cluster directory on all three nodes
[root@mysql-master mongodb]# su - mongod
[mongod@mysql-master mongodb]$ mkdir -pv /data/mongodb/shard-mongodb
2. Create the per-instance directories
# Run on all three nodes
[root@mysql-master mongodb]# su - mongod
[mongod@mysql-master mongodb]$ for path in {38018..38021}
> do
> mkdir -pv /data/mongodb/shard-mongodb/$path/{log,data,conf}
> done
mkdir: created directory ‘/data/mongodb/shard-mongodb’
mkdir: created directory ‘/data/mongodb/shard-mongodb/38018’
mkdir: created directory ‘/data/mongodb/shard-mongodb/38018/log’
mkdir: created directory ‘/data/mongodb/shard-mongodb/38018/data’
mkdir: created directory ‘/data/mongodb/shard-mongodb/38018/conf’
mkdir: created directory ‘/data/mongodb/shard-mongodb/38019’
mkdir: created directory ‘/data/mongodb/shard-mongodb/38019/log’
mkdir: created directory ‘/data/mongodb/shard-mongodb/38019/data’
mkdir: created directory ‘/data/mongodb/shard-mongodb/38019/conf’
mkdir: created directory ‘/data/mongodb/shard-mongodb/38020’
mkdir: created directory ‘/data/mongodb/shard-mongodb/38020/log’
mkdir: created directory ‘/data/mongodb/shard-mongodb/38020/data’
mkdir: created directory ‘/data/mongodb/shard-mongodb/38020/conf’
mkdir: created directory ‘/data/mongodb/shard-mongodb/38021’
mkdir: created directory ‘/data/mongodb/shard-mongodb/38021/log’
mkdir: created directory ‘/data/mongodb/shard-mongodb/38021/data’
mkdir: created directory ‘/data/mongodb/shard-mongodb/38021/conf’
[mongod@mysql-master mongodb]$
# Verify the created directories
[mongod@mysql-master shard-mongodb]$ pwd
/data/mongodb/shard-mongodb
[mongod@mysql-master shard-mongodb]$ ll
total 0
drwxrwxr-x 5 mongod mongod 41 Sep 24 10:58 38018
drwxrwxr-x 5 mongod mongod 41 Sep 24 10:58 38019
drwxrwxr-x 5 mongod mongod 41 Sep 24 10:58 38020
drwxrwxr-x 5 mongod mongod 41 Sep 24 10:58 38021
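If any of these directories were created as root rather than as mongod, normalize ownership before starting the instances; a minimal fix (run as root on each node, assuming the mongod user/group from the base installation):
chown -R mongod:mongod /data/mongodb/shard-mongodb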
3 Prepare the Configuration Files
3.1 First shard replica set (1 primary, 1 secondary, 1 arbiter)
Prepare the configuration files for the multiple instances on node 172.21.209.123.
1. Create the configuration file
cat > /data/mongodb/shard-mongodb/38019/conf/mongodb.conf <<EOF
systemLog:
  destination: file
  path: /data/mongodb/shard-mongodb/38019/log/mongodb.log
  logAppend: true
storage:
  journal:
    enabled: true
  dbPath: /data/mongodb/shard-mongodb/38019/data
  directoryPerDB: true
  #engine: wiredTiger
  wiredTiger:
    engineConfig:
      cacheSizeGB: 1
      directoryForIndexes: true
    collectionConfig:
      blockCompressor: zlib
    indexConfig:
      prefixCompression: true
net:
  bindIp: 172.21.209.123,127.0.0.1
  port: 38019
replication:
  oplogSizeMB: 2048
  replSetName: my_ReplSet1
sharding:
  clusterRole: shardsvr
processManagement:
  fork: true
EOF
2. Copy the configuration file to the other instances
cp /data/mongodb/shard-mongodb/38019/conf/mongodb.conf /data/mongodb/shard-mongodb/38020/conf/
cp /data/mongodb/shard-mongodb/38019/conf/mongodb.conf /data/mongodb/shard-mongodb/38021/conf/
3. Update the port and paths in the copies
sed 's#38019#38020#g' /data/mongodb/shard-mongodb/38020/conf/mongodb.conf -i
sed 's#38019#38021#g' /data/mongodb/shard-mongodb/38021/conf/mongodb.conf -i
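An optional sanity check to confirm the substitutions took effect — each file should now reference only its own port and paths:
grep -n '380' /data/mongodb/shard-mongodb/38020/conf/mongodb.conf
grep -n '380' /data/mongodb/shard-mongodb/38021/conf/mongodb.conf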
3.2 Second shard replica set (1 primary, 1 secondary, 1 arbiter)
Prepare the configuration files for the multiple instances on node 172.21.209.124.
1. Create the configuration file
cat > /data/mongodb/shard-mongodb/38019/conf/mongodb.conf <<EOF
systemLog:
  destination: file
  path: /data/mongodb/shard-mongodb/38019/log/mongodb.log
  logAppend: true
storage:
  journal:
    enabled: true
  dbPath: /data/mongodb/shard-mongodb/38019/data
  directoryPerDB: true
  #engine: wiredTiger
  wiredTiger:
    engineConfig:
      cacheSizeGB: 1
      directoryForIndexes: true
    collectionConfig:
      blockCompressor: zlib
    indexConfig:
      prefixCompression: true
net:
  bindIp: 172.21.209.124,127.0.0.1
  port: 38019
replication:
  oplogSizeMB: 2048
  replSetName: my_ReplSet2
sharding:
  clusterRole: shardsvr
processManagement:
  fork: true
EOF
2. Copy the configuration file to the other instances
cp /data/mongodb/shard-mongodb/38019/conf/mongodb.conf /data/mongodb/shard-mongodb/38020/conf/
cp /data/mongodb/shard-mongodb/38019/conf/mongodb.conf /data/mongodb/shard-mongodb/38021/conf/
3. Update the port and paths in the copies
sed 's#38019#38020#g' /data/mongodb/shard-mongodb/38020/conf/mongodb.conf -i
sed 's#38019#38021#g' /data/mongodb/shard-mongodb/38021/conf/mongodb.conf -i
3.3 Third shard replica set (1 primary, 1 secondary, 1 arbiter)
Prepare the configuration files for the multiple instances on node 172.21.209.125.
This replica set is only prepared for now; it will be added after the sharded cluster is up, demonstrating how to scale out an existing sharded cluster.
1. Create the configuration file
cat > /data/mongodb/shard-mongodb/38019/conf/mongodb.conf <<EOF
systemLog:
  destination: file
  path: /data/mongodb/shard-mongodb/38019/log/mongodb.log
  logAppend: true
storage:
  journal:
    enabled: true
  dbPath: /data/mongodb/shard-mongodb/38019/data
  directoryPerDB: true
  #engine: wiredTiger
  wiredTiger:
    engineConfig:
      cacheSizeGB: 1
      directoryForIndexes: true
    collectionConfig:
      blockCompressor: zlib
    indexConfig:
      prefixCompression: true
net:
  bindIp: 172.21.209.125,127.0.0.1
  port: 38019
replication:
  oplogSizeMB: 2048
  replSetName: my_ReplSet3
sharding:
  clusterRole: shardsvr
processManagement:
  fork: true
EOF
2. Copy the configuration file to the other instances
cp /data/mongodb/shard-mongodb/38019/conf/mongodb.conf /data/mongodb/shard-mongodb/38020/conf/
cp /data/mongodb/shard-mongodb/38019/conf/mongodb.conf /data/mongodb/shard-mongodb/38021/conf/
3. Update the port and paths in the copies
sed 's#38019#38020#g' /data/mongodb/shard-mongodb/38020/conf/mongodb.conf -i
sed 's#38019#38021#g' /data/mongodb/shard-mongodb/38021/conf/mongodb.conf -i
4 Start the Instances and Check Processes and Ports
4.1 Start the instances
Start the instances on 172.21.209.123 as shown below; the other nodes are started the same way.
mongod -f /data/mongodb/shard-mongodb/38019/conf/mongodb.conf
mongod -f /data/mongodb/shard-mongodb/38020/conf/mongodb.conf
mongod -f /data/mongodb/shard-mongodb/38021/conf/mongodb.conf
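The three commands above can also be written as a loop; a sketch, assuming the directory layout from section 2.2:
for port in 38019 38020 38021
do
    mongod -f /data/mongodb/shard-mongodb/${port}/conf/mongodb.conf
done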
4.2 Check instance processes and ports
# Processes
[root@mysql-slave02 38019]# ps -ef|grep mongo
root 30481 1 5 07:47 ? 00:00:01 mongod -f /data/mongodb/shard-mongodb/38019/conf/mongodb.conf
root 30518 1 7 07:47 ? 00:00:01 mongod -f /data/mongodb/shard-mongodb/38020/conf/mongodb.conf
root 30556 1 13 07:48 ? 00:00:01 mongod -f /data/mongodb/shard-mongodb/38021/conf/mongodb.conf
root 30592 2615 0 07:48 pts/0 00:00:00 grep --color=auto mongo
# Listening ports
[root@mysql-slave02 38019]# netstat -lntp|grep 380
tcp 0 0 172.21.209.125:38019 0.0.0.0:* LISTEN 30481/mongod
tcp 0 0 127.0.0.1:38019 0.0.0.0:* LISTEN 30481/mongod
tcp 0 0 172.21.209.125:38020 0.0.0.0:* LISTEN 30518/mongod
tcp 0 0 127.0.0.1:38020 0.0.0.0:* LISTEN 30518/mongod
tcp 0 0 172.21.209.125:38021 0.0.0.0:* LISTEN 30556/mongod
tcp 0 0 127.0.0.1:38021 0.0.0.0:* LISTEN 30556/mongod
5 Initialize the Shard Replica Sets
5.1 First shard replica set
Node: 172.21.209.123, instance ports: 38019, 38020, 38021
Log in to the member that will become the primary:
mongo --port 38019
use admin
config = {_id: 'my_ReplSet1', members: [
{_id: 0, host: '172.21.209.123:38019'},
{_id: 1, host: '172.21.209.123:38020'},
{_id: 2, host: '172.21.209.123:38021',"arbiterOnly":true}]
}
rs.initiate(config)
Note: my_ReplSet1 is the replica set name; it must match the replSetName in the instance configuration files.
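To confirm the members settle into the expected roles after rs.initiate(), the member states can be checked from the OS shell; a minimal sketch:
[mongod@mysql-master ~]$ mongo --port 38019 --eval 'rs.status().members.forEach(function(m){ print(m.name, m.stateStr); })'
After a few seconds the three members should report PRIMARY, SECONDARY, and ARBITER respectively.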
5.2 Second shard replica set
Node: 172.21.209.124, instance ports: 38019, 38020, 38021
Log in to the member that will become the primary:
mongo --port 38019
use admin
config = {_id: 'my_ReplSet2', members: [
{_id: 0, host: '172.21.209.124:38019'},
{_id: 1, host: '172.21.209.124:38020'},
{_id: 2, host: '172.21.209.124:38021',"arbiterOnly":true}]
}
rs.initiate(config)
Note: my_ReplSet2 is the replica set name; it must match the replSetName in the instance configuration files.
5.3 Third shard replica set
Node: 172.21.209.125, instance ports: 38019, 38020, 38021
Log in to the member that will become the primary:
mongo --port 38019
use admin
config = {_id: 'my_ReplSet3', members: [
{_id: 0, host: '172.21.209.125:38019'},
{_id: 1, host: '172.21.209.125:38020'},
{_id: 2, host: '172.21.209.125:38021',"arbiterOnly":true}]
}
rs.initiate(config)
Note: my_ReplSet3 is the replica set name; it must match the replSetName in the instance configuration files.
6 Config Server Replica Set
Nodes and ports: 172.21.209.123:38018, 172.21.209.124:38018, 172.21.209.125:38018
1. The data directories were already created above:
[mongod@mysql-master 38018]$ pwd
/data/mongodb/shard-mongodb/38018
[mongod@mysql-master 38018]$ ll
total 0
drwxrwxr-x 2 mongod mongod 6 Sep 24 10:58 conf
drwxrwxr-x 2 mongod mongod 6 Sep 24 10:58 data
drwxrwxr-x 2 mongod mongod 6 Sep 24 10:58 log
2. Create the configuration file on all three nodes (adjust bindIp to each node's own address)
cat > /data/mongodb/shard-mongodb/38018/conf/mongodb.conf <<EOF
systemLog:
  destination: file
  path: /data/mongodb/shard-mongodb/38018/log/mongodb.log
  logAppend: true
storage:
  journal:
    enabled: true
  dbPath: /data/mongodb/shard-mongodb/38018/data
  directoryPerDB: true
  #engine: wiredTiger
  wiredTiger:
    engineConfig:
      cacheSizeGB: 1
      directoryForIndexes: true
    collectionConfig:
      blockCompressor: zlib
    indexConfig:
      prefixCompression: true
net:
  bindIp: 172.21.209.123,127.0.0.1
  port: 38018
replication:
  oplogSizeMB: 2048
  replSetName: my_configReplSet
sharding:
  clusterRole: configsvr
processManagement:
  fork: true
EOF
3. Start the service
mongod -f /data/mongodb/shard-mongodb/38018/conf/mongodb.conf
Note: bindIp must list each node's own listen address; the example above shows node 172.21.209.123.
4. Initialize the config server replica set
mongo --port 38018
use admin
config = {_id: 'my_configReplSet', members: [
{_id: 0, host: '172.21.209.123:38018'},
{_id: 1, host: '172.21.209.124:38018'},
{_id: 2, host: '172.21.209.125:38018'}]
}
rs.initiate(config)
Note: the config server must be a replica set, and config server replica sets do not support arbiters.
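One optional way to double-check that an instance really started in the configsvr role is to inspect its parsed startup options; a sketch:
[mongod@mysql-master ~]$ mongo --port 38018 --eval 'printjson(db.serverCmdLineOpts().parsed.sharding)'
This should print { "clusterRole" : "configsvr" } on each node.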
7 mongos Configuration
Nodes and ports: 172.21.209.123:38017, 172.21.209.124:38017, 172.21.209.125:38017
7.1 Configure mongos on all three nodes
Run the following on all three nodes, again adjusting bindIp to each node's own address.
1. Create the config and log directories
mkdir -p /data/mongodb/shard-mongodb/38017/conf
mkdir -p /data/mongodb/shard-mongodb/38017/log
2. Create the configuration file
cat > /data/mongodb/shard-mongodb/38017/conf/mongos.conf <<EOF
systemLog:
  destination: file
  path: /data/mongodb/shard-mongodb/38017/log/mongos.log
  logAppend: true
net:
  bindIp: 172.21.209.123,127.0.0.1
  port: 38017
sharding:
  configDB: my_configReplSet/172.21.209.123:38018,172.21.209.124:38018,172.21.209.125:38018
processManagement:
  fork: true
EOF
3. Start the mongos service
mongos -f /data/mongodb/shard-mongodb/38017/conf/mongos.conf
4. Check the mongos process and ports
[mongod@mysql-slave01 ~]$ ps -ef|grep mongos
mongod 31056 28832 0 09:15 pts/0 00:00:00 vim mongos.conf
mongod 31184 28832 0 09:27 pts/0 00:00:00 vim mongos.conf
mongod 31275 1 0 09:33 ? 00:00:00 mongos -f /data/mongodb/shard-mongodb/38017/conf/mongos.conf
mongod 31302 31251 0 09:33 pts/1 00:00:00 grep --color=auto mongos
[mongod@mysql-slave01 ~]$
[mongod@mysql-master 38017]$ netstat -lntp|grep mongo
(Not all processes could be identified, non-owned process info
will not be shown, you would have to be root to see it all.)
tcp 0 0 172.21.209.123:38017 0.0.0.0:* LISTEN 5269/mongos
tcp 0 0 127.0.0.1:38017 0.0.0.0:* LISTEN 5269/mongos
tcp 0 0 172.21.209.123:38018 0.0.0.0:* LISTEN 5014/mongod
tcp 0 0 127.0.0.1:38018 0.0.0.0:* LISTEN 5014/mongod
tcp 0 0 172.21.209.123:38019 0.0.0.0:* LISTEN 3763/mongod
tcp 0 0 127.0.0.1:38019 0.0.0.0:* LISTEN 3763/mongod
tcp 0 0 172.21.209.123:38020 0.0.0.0:* LISTEN 4762/mongod
tcp 0 0 127.0.0.1:38020 0.0.0.0:* LISTEN 4762/mongod
tcp 0 0 172.21.209.123:38021 0.0.0.0:* LISTEN 3935/mongod
tcp 0 0 127.0.0.1:38021 0.0.0.0:* LISTEN 3935/mongod
7.2 Add the shard replica sets to the cluster
Configure from any one of the nodes; 172.21.209.123 is used here.
1. Connect to the admin database through mongos
[mongod@mysql-master shard-mongodb]$ mongo 172.21.209.123:38017/admin
MongoDB shell version v4.2.12
connecting to: mongodb://172.21.209.123:38017/admin?compressors=disabled&gssapiServiceName=mongodb
Implicit session: session { "id" : UUID("2d7e3566-d0b0-4e48-9cf8-107e4605c6fe") }
MongoDB server version: 4.2.12
Server has startup warnings:
2021-09-25T09:38:27.292-0400 I CONTROL [main]
2021-09-25T09:38:27.292-0400 I CONTROL [main] ** WARNING: Access control is not enabled for the database.
2021-09-25T09:38:27.292-0400 I CONTROL [main] ** Read and write access to data and configuration is unrestricted.
2021-09-25T09:38:27.292-0400 I CONTROL [main]
mongos>
2. Add the shards
db.runCommand( { addshard : "my_ReplSet1/172.21.209.123:38019,172.21.209.123:38020,172.21.209.123:38021",name:"my_shard1"} )
db.runCommand( { addshard : "my_ReplSet2/172.21.209.124:38019,172.21.209.124:38020,172.21.209.124:38021",name:"my_shard2"} )
3. List the shards
mongos> db.runCommand( { listshards : 1 } )
{
"shards" : [
{
"_id" : "my_shard1",
"host" : "my_ReplSet1/172.21.209.123:38019,172.21.209.123:38020",
"state" : 1
},
{
"_id" : "my_shard2",
"host" : "my_ReplSet2/172.21.209.124:38019,172.21.209.124:38020",
"state" : 1
}
],
"ok" : 1,
"operationTime" : Timestamp(1632577863, 26),
"$clusterTime" : {
"clusterTime" : Timestamp(1632577863, 26),
"signature" : {
"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
"keyId" : NumberLong(0)
}
}
}
mongos>
4. Check the sharding status
mongos> sh.status();
--- Sharding Status ---
sharding version: {
"_id" : 1,
"minCompatibleVersion" : 5,
"currentVersion" : 6,
"clusterId" : ObjectId("614f1eef4006239b861de8e0")
}
shards:
{ "_id" : "my_shard1", "host" : "my_ReplSet1/172.21.209.123:38019,172.21.209.123:38020", "state" : 1 }
{ "_id" : "my_shard2", "host" : "my_ReplSet2/172.21.209.124:38019,172.21.209.124:38020", "state" : 1 }
active mongoses:
"4.2.12" : 2
autosplit:
Currently enabled: yes
balancer:
Currently enabled: yes
Currently running: no
Failed balancer rounds in last 5 attempts: 0
Migration Results for the last 24 hours:
81 : Success
databases:
{ "_id" : "config", "primary" : "config", "partitioned" : true }
config.system.sessions
shard key: { "_id" : 1 }
unique: false
balancing: true
chunks:
my_shard1 943
my_shard2 81
too many chunks to print, use verbose if you want to force print
mongos>
8 Sharding Strategies: Configuration and Testing
(Figures: sharding strategy and chunk migration)
8.1 Range sharding: configuration and testing
After the cluster is deployed, data is only distributed once sharding has been enabled, from a mongos, on both the target database and the target collection; the collection must also have an index on its shard key. The sequence is outlined in the sketch below, and the concrete commands follow.
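A sketch of the full sequence, using a hypothetical database mydb and collection mycoll (hypothetical names; the real student.users commands are in the steps below):
mongos> use admin
mongos> db.runCommand( { enablesharding : "mydb" } )                          // 1. enable sharding on the database
mongos> db.getSiblingDB("mydb").mycoll.createIndex( { id: 1 } )               // 2. index the intended shard key
mongos> db.runCommand( { shardcollection : "mydb.mycoll", key : { id: 1 } } ) // 3. shard the collection on that key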
Case 1: range-shard the users collection in the student database.
1. Enable sharding on the database
[mongod@mysql-master shard-mongodb]$ mongo --port 38017 admin
mongos> db.runCommand( { enablesharding : "<database name>" } )
e.g.:
mongos> use admin
mongos> db.runCommand( { enablesharding : "student" } )
{
"ok" : 1,
"operationTime" : Timestamp(1632617427, 5),
"$clusterTime" : {
"clusterTime" : Timestamp(1632617427, 5),
"signature" : {
"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
"keyId" : NumberLong(0)
}
}
}
2. Shard the collection on a designated shard key
e.g. a range shard key.
Create the index:
mongos> use student
switched to db student
mongos> db.users.ensureIndex( { id: 1 } )
{
"raw" : {
"my_ReplSet2/172.21.209.124:38019,172.21.209.124:38020" : {
"createdCollectionAutomatically" : false,
"numIndexesBefore" : 1,
"numIndexesAfter" : 2,
"ok" : 1
}
},
"ok" : 1,
"operationTime" : Timestamp(1632617491, 8),
"$clusterTime" : {
"clusterTime" : Timestamp(1632617491, 8),
"signature" : {
"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
"keyId" : NumberLong(0)
}
}
}
mongos>
3. Enable sharding on the collection, here the users collection in the student database
mongos> use admin
switched to db admin
mongos> db.runCommand( { shardcollection : "student.users",key : {id: 1} } )
{
"collectionsharded" : "student.users",
"collectionUUID" : UUID("d768960a-2262-4671-9e6f-72291121915b"),
"ok" : 1,
"operationTime" : Timestamp(1632617551, 8),
"$clusterTime" : {
"clusterTime" : Timestamp(1632617551, 8),
"signature" : {
"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
"keyId" : NumberLong(0)
}
}
}
mongos>
Note: key : {id: 1} selects range sharding.
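For reference, the sh.shardCollection() helper wraps the same command; the call below (a sketch) is equivalent to the runCommand form used above:
mongos> sh.shardCollection( "student.users", { id: 1 } )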
4. Insert test data into the collection
mongos> use student
switched to db student
mongos> db
student
mongos> for(i=1;i<=1000;i++){ db.users.insert({id:i,name:"RonnyStack",age:20,addr:"Chain",date:new Date()}); }
WriteResult({ "nInserted" : 1 })
5. Verify the sharding result
5.1 On mongos, count the inserted documents.
mongos> db.users.count()
1000
5.2 Log in to shard replica set 1 and count the documents there.
[mongod@mysql-slave01 shard-mongodb]$ mongo --port 38019 # log in on the local node
[mongod@mysql-slave01 shard-mongodb]$ mongo 172.21.209.123:38019/admin # log in to a remote node
my_ReplSet1:PRIMARY> show dbs
admin 0.000GB
config 0.000GB
local 0.000GB
my_ReplSet1:PRIMARY> use student
switched to db student
my_ReplSet1:PRIMARY> show tables
my_ReplSet1:PRIMARY>
5.3 Count the documents on shard replica set 2.
[mongod@mysql-slave02 38017]# mongo 172.21.209.124:38019/admin
my_ReplSet2:PRIMARY> show dbs
admin 0.000GB
config 0.000GB
local 0.003GB
student 0.000GB ##
my_ReplSet2:PRIMARY> use student
switched to db student
my_ReplSet2:PRIMARY>
my_ReplSet2:PRIMARY> db.users.count()
1000
my_ReplSet2:PRIMARY>
Problem: after inserting 1,000 documents, the range sharding had no visible effect: shard replica set 1 held no data and everything landed on shard replica set 2. Raising the count to 1,000,000 gave the same result. A likely explanation: with a monotonically increasing shard key such as id, every new document falls into the chunk that owns the current upper range, so all writes go to the one shard holding that chunk (here my_shard2, the primary shard for the student database). Data only spreads out once chunks exceed the split threshold (64 MB by default) and the balancer migrates some of them; a monotonically increasing range key is a known anti-pattern for write distribution. The sketch below shows how to inspect the chunk layout directly.
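The chunk layout behind this behavior can be inspected in the config database through mongos; a minimal sketch:
mongos> use config
mongos> db.chunks.find( { ns: "student.users" } ).forEach( function(c){ print(c.shard, tojson(c.min), "-->>", tojson(c.max)); } )
For range-sharded student.users this typically shows the few existing chunks all owned by a single shard.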
8.2 Hash sharding: configuration and testing
Case 2: hash-shard the courses collection in the ronnystack database.
1. Connect to mongos and, in the admin database, enable sharding on the ronnystack database.
[mongod@mysql-master ~]$ mongo --port 38017 admin
mongos> use admin
switched to db admin
mongos> db.runCommand( { enablesharding : "ronnystack" } )
{
"ok" : 1,
"operationTime" : Timestamp(1632620756, 682),
"$clusterTime" : {
"clusterTime" : Timestamp(1632620756, 691),
"signature" : {
"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
"keyId" : NumberLong(0)
}
}
}
mongos>
2. Create a hashed index on the courses collection in ronnystack
mongos> use ronnystack
switched to db ronnystack
mongos> db.courses.ensureIndex( { id: "hashed" } )
{
"raw" : {
"my_ReplSet1/172.21.209.123:38019,172.21.209.123:38020" : {
"createdCollectionAutomatically" : false,
"numIndexesBefore" : 1,
"numIndexesAfter" : 2,
"ok" : 1
}
},
"ok" : 1,
"operationTime" : Timestamp(1632620867, 177),
"$clusterTime" : {
"clusterTime" : Timestamp(1632620867, 198),
"signature" : {
"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
"keyId" : NumberLong(0)
}
}
}
mongos>
3. Shard the collection
mongos> sh.shardCollection( "ronnystack.courses", { id: "hashed" } )
{
"collectionsharded" : "ronnystack.courses",
"collectionUUID" : UUID("94582910-999a-4cd6-84f7-117ded878d7b"),
"ok" : 1,
"operationTime" : Timestamp(1632620929, 316),
"$clusterTime" : {
"clusterTime" : Timestamp(1632620929, 328),
"signature" : {
"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
"keyId" : NumberLong(0)
}
}
}
mongos>
4. Switch to the database and insert test data
mongos> use ronnystack
switched to db ronnystack
mongos>
mongos> for(i=1;i<=100000;i++){ db.courses.insert({"id":i,"name":"Python","keshi":240,"date":new Date()}); }
WriteResult({ "nInserted" : 1 })
mongos>
5. Verify the hash sharding result
5.1 On mongos, count the inserted documents.
mongos> show dbs
admin 0.000GB
config 0.003GB
ronnystack 0.005GB
student 0.017GB
mongos> db
ronnystack
mongos> show tables
courses
mongos> db.courses.count()
100000
mongos>
5.2 Log in to shard replica set 1 and count the documents there.
[mongod@mysql-slave01 shard-mongodb]$ mongo 172.21.209.123:38019/admin # log in to a remote node
my_ReplSet1:PRIMARY> show dbs
admin 0.000GB
config 0.000GB
local 0.001GB
ronnystack 0.003GB
my_ReplSet1:PRIMARY> use ronnystack
switched to db ronnystack
my_ReplSet1:PRIMARY> show tables
courses
my_ReplSet1:PRIMARY> db.courses.count()
50393
my_ReplSet1:PRIMARY>
5.3 Count the documents on shard replica set 2.
[mongod@mysql-slave01 shard-mongodb]$ mongo 172.21.209.124:38019/admin
my_ReplSet2:PRIMARY> show dbs
admin 0.000GB
config 0.000GB
local 0.026GB
ronnystack 0.003GB
student 0.019GB
my_ReplSet2:PRIMARY> use ronnystack
switched to db ronnystack
my_ReplSet2:PRIMARY> show tables
courses
my_ReplSet2:PRIMARY> db.courses.count()
49607
my_ReplSet2:PRIMARY>
Result: the 100,000 documents are split roughly evenly between the two shards (50,393 vs 49,607), which is the expected behavior for hash sharding.
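getShardDistribution() gives another per-shard view of document counts and data size; for example:
mongos> use ronnystack
mongos> db.courses.getShardDistribution()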
8.3 Querying and managing the sharded cluster
1. Check whether the connection is to a sharded cluster (mongos)
mongos> db.runCommand({ isdbgrid : 1})
{
"isdbgrid" : 1,
"hostname" : "mysql-master",
"ok" : 1,
"operationTime" : Timestamp(1632621882, 1),
"$clusterTime" : {
"clusterTime" : Timestamp(1632621882, 1),
"signature" : {
"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
"keyId" : NumberLong(0)
}
}
}
2. List shard information
mongos> use admin
switched to db admin
mongos> db.runCommand({ listshards : 1})
{
"shards" : [
{
"_id" : "my_shard1",
"host" : "my_ReplSet1/172.21.209.123:38019,172.21.209.123:38020",
"state" : 1
},
{
"_id" : "my_shard2",
"host" : "my_ReplSet2/172.21.209.124:38019,172.21.209.124:38020",
"state" : 1
}
],
"ok" : 1,
"operationTime" : Timestamp(1632621918, 2),
"$clusterTime" : {
"clusterTime" : Timestamp(1632621918, 2),
"signature" : {
"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
"keyId" : NumberLong(0)
}
}
}
mongos>
3. List the databases with sharding enabled
mongos> use config
switched to db config
mongos> db.databases.find( { "partitioned": true } )
{ "_id" : "student", "primary" : "my_shard2", "partitioned" : true, "version" : { "uuid" : UUID("8a85a0ed-9c0b-4f35-8118-e02dcaa1ff33"), "lastMod" : 1 } }
{ "_id" : "ronnystack", "primary" : "my_shard1", "partitioned" : true, "version" : { "uuid" : UUID("c9941a41-a5f1-4132-a8aa-5ccc5c48b539"), "lastMod" : 1 } }
4. View the shard keys
mongos> db.collections.find().pretty()
{
"_id" : "config.system.sessions",
"lastmodEpoch" : ObjectId("614f29099e23bfe1e57052c5"),
"lastmod" : ISODate("1970-02-19T17:02:47.296Z"),
"dropped" : false,
"key" : {
"_id" : 1
},
"unique" : false,
"uuid" : UUID("5521fcbf-01b7-4f18-b831-90efc55f7ffe")
}
{
"_id" : "student.users",
"lastmodEpoch" : ObjectId("000000000000000000000000"),
"lastmod" : ISODate("2021-09-26T01:37:45.327Z"),
"dropped" : true
}
{
"_id" : "ronnystack.courses",
"lastmodEpoch" : ObjectId("614fd1819e23bfe1e572673f"),
"lastmod" : ISODate("1970-02-19T17:02:47.299Z"),
"dropped" : false,
"key" : {
"id" : "hashed"
},
"unique" : false,
"uuid" : UUID("94582910-999a-4cd6-84f7-117ded878d7b")
}
mongos>
5. View detailed sharding information and status
mongos> sh.status()
--- Sharding Status ---
sharding version: {
"_id" : 1,
"minCompatibleVersion" : 5,
"currentVersion" : 6,
"clusterId" : ObjectId("614f1eef4006239b861de8e0")
}
shards:
{ "_id" : "my_shard1", "host" : "my_ReplSet1/172.21.209.123:38019,172.21.209.123:38020", "state" : 1 }
{ "_id" : "my_shard2", "host" : "my_ReplSet2/172.21.209.124:38019,172.21.209.124:38020", "state" : 1 }
active mongoses:
"4.2.12" : 3
autosplit:
Currently enabled: yes
balancer:
Currently enabled: yes
Currently running: no
Failed balancer rounds in last 5 attempts: 0
Migration Results for the last 24 hours:
512 : Success
databases:
{ "_id" : "config", "primary" : "config", "partitioned" : true }
config.system.sessions
shard key: { "_id" : 1 }
unique: false
balancing: true
chunks:
my_shard1 512
my_shard2 512
too many chunks to print, use verbose if you want to force print
{ "_id" : "ronnystack", "primary" : "my_shard1", "partitioned" : true, "version" : { "uuid" : UUID("c9941a41-a5f1-4132-a8aa-5ccc5c48b539"), "lastMod" : 1 } }
ronnystack.courses
shard key: { "id" : "hashed" }
unique: false
balancing: true
chunks:
my_shard1 2
my_shard2 2
{ "id" : { "$minKey" : 1 } } -->> { "id" : NumberLong("-4611686018427387902") } on : my_shard1 Timestamp(1, 0)
{ "id" : NumberLong("-4611686018427387902") } -->> { "id" : NumberLong(0) } on : my_shard1 Timestamp(1, 1)
{ "id" : NumberLong(0) } -->> { "id" : NumberLong("4611686018427387902") } on : my_shard2 Timestamp(1, 2)
{ "id" : NumberLong("4611686018427387902") } -->> { "id" : { "$maxKey" : 1 } } on : my_shard2 Timestamp(1, 3)
{ "_id" : "student", "primary" : "my_shard2", "partitioned" : true, "version" : { "uuid" : UUID("8a85a0ed-9c0b-4f35-8118-e02dcaa1ff33"), "lastMod" : 1 } }
mongos>
9 Shard Management (Adding and Removing Shards)
Adding or removing a shard automatically triggers chunk balancing.
Removing a shard calls for care: any data on it must first be migrated to the remaining shards, and only then is the shard actually removed. The cluster performs this draining and removal automatically.
9.1 Adding a shard
Basic steps for adding a shard:
1. Prepare the servers for the new replica set: install the operating system and, where applicable, configure RAID and networking.
2. Initialize the system: set a hostname consistent with the cluster, configure /etc/hosts, and disable the firewall and SELinux.
3. Upload and install the MongoDB software.
4. Configure the replica set.
5. Add the replica set to the existing sharded cluster.
Adding the shard:
Steps 1-4 were essentially completed during the deployment above; what remains is to add the replica set to the cluster and check the status.
1. Connect to the admin database through mongos
[root@mysql-master shard-mongodb]# su - mongod
[mongod@mysql-master ~]$ mongo 172.21.209.123:38017/admin
2. Add the shard
mongos> db.runCommand( { addshard : "my_ReplSet3/172.21.209.125:38019,172.21.209.125:38020,172.21.209.125:38021",name:"my_shard3"} )
{
"shardAdded" : "my_shard3",
"ok" : 1,
"operationTime" : Timestamp(1632622809, 5),
"$clusterTime" : {
"clusterTime" : Timestamp(1632622809, 5),
"signature" : {
"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
"keyId" : NumberLong(0)
}
}
}
mongos>
3. List the shards
mongos> db.runCommand( { listshards : 1 } )
{
"shards" : [
{
"_id" : "my_shard1",
"host" : "my_ReplSet1/172.21.209.123:38019,172.21.209.123:38020",
"state" : 1
},
{
"_id" : "my_shard2",
"host" : "my_ReplSet2/172.21.209.124:38019,172.21.209.124:38020",
"state" : 1
},
{
"_id" : "my_shard3",
"host" : "my_ReplSet3/172.21.209.125:38019,172.21.209.125:38020",
"state" : 1
}
],
"ok" : 1,
"operationTime" : Timestamp(1632622833, 40),
"$clusterTime" : {
"clusterTime" : Timestamp(1632622833, 40),
"signature" : {
"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
"keyId" : NumberLong(0)
}
}
}
4. Check the overall status
mongos> sh.status()
--- Sharding Status ---
sharding version: {
"_id" : 1,
"minCompatibleVersion" : 5,
"currentVersion" : 6,
"clusterId" : ObjectId("614f1eef4006239b861de8e0")
}
shards:
{ "_id" : "my_shard1", "host" : "my_ReplSet1/172.21.209.123:38019,172.21.209.123:38020", "state" : 1 }
{ "_id" : "my_shard2", "host" : "my_ReplSet2/172.21.209.124:38019,172.21.209.124:38020", "state" : 1 }
{ "_id" : "my_shard3", "host" : "my_ReplSet3/172.21.209.125:38019,172.21.209.125:38020", "state" : 1 }
active mongoses:
"4.2.12" : 3
autosplit:
Currently enabled: yes
balancer:
Currently enabled: yes
Currently running: no
Failed balancer rounds in last 5 attempts: 0
Migration Results for the last 24 hours:
567 : Success
databases:
{ "_id" : "config", "primary" : "config", "partitioned" : true }
config.system.sessions
shard key: { "_id" : 1 }
unique: false
balancing: true
chunks:
my_shard1 485
my_shard2 485
my_shard3 54
too many chunks to print, use verbose if you want to force print
{ "_id" : "ronnystack", "primary" : "my_shard1", "partitioned" : true, "version" : { "uuid" : UUID("c9941a41-a5f1-4132-a8aa-5ccc5c48b539"), "lastMod" : 1 } }
ronnystack.courses
shard key: { "id" : "hashed" }
unique: false
balancing: true
chunks:
my_shard1 2
my_shard2 1
my_shard3 1
{ "id" : { "$minKey" : 1 } } -->> { "id" : NumberLong("-4611686018427387902") } on : my_shard1 Timestamp(1, 0)
{ "id" : NumberLong("-4611686018427387902") } -->> { "id" : NumberLong(0) } on : my_shard1 Timestamp(1, 1)
{ "id" : NumberLong(0) } -->> { "id" : NumberLong("4611686018427387902") } on : my_shard3 Timestamp(2, 0)
{ "id" : NumberLong("4611686018427387902") } -->> { "id" : { "$maxKey" : 1 } } on : my_shard2 Timestamp(2, 1)
{ "_id" : "student", "primary" : "my_shard2", "partitioned" : true, "version" : { "uuid" : UUID("8a85a0ed-9c0b-4f35-8118-e02dcaa1ff33"), "lastMod" : 1 } }
mongos>
5. Log in to shard replica set 3
[mongod@mysql-slave01 shard-mongodb]$ mongo 172.21.209.125:38019/admin
The hash sharding policy has taken effect: chunks have been migrated onto shard 3.
my_ReplSet3:PRIMARY> show dbs
admin 0.000GB
config 0.000GB
local 0.001GB
ronnystack 0.001GB
my_ReplSet3:PRIMARY> use ronnystack
switched to db ronnystack
my_ReplSet3:PRIMARY> show tables
courses
my_ReplSet3:PRIMARY> db.courses.count()
24887
my_ReplSet3:PRIMARY>
9.2 Removing a shard
1. Confirm the balancer is enabled
mongos> sh.getBalancerState()
true
2. Remove shard my_shard1
mongos> db.runCommand( { removeShard: "my_shard1" } )
{
"msg" : "draining started successfully",
"state" : "started",
"shard" : "my_shard1",
"note" : "you need to drop or movePrimary these databases",
"dbsToMove" : [
"ronnystack"
],
"ok" : 1,
"operationTime" : Timestamp(1632623190, 32),
"$clusterTime" : {
"clusterTime" : Timestamp(1632623190, 33),
"signature" : {
"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
"keyId" : NumberLong(0)
}
}
}
mongos>
Caution: removal must be done with care. It immediately triggers balancer activity, and draining the shard can take considerable time; chunk migration runs first, and once balancing completes the shard is removed automatically.
my_shard1 is the name under which the shard was registered when it was added.
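Draining progress can be polled by re-issuing the same command; the reported state moves from "started" through "ongoing" to "completed". Because ronnystack lists my_shard1 as its primary shard (the dbsToMove field above), its unsharded data must also be moved before removal can finish; a sketch:
mongos> db.runCommand( { removeShard: "my_shard1" } )                       // re-run to poll draining status
mongos> db.adminCommand( { movePrimary: "ronnystack", to: "my_shard2" } )   // run once chunk draining has finished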
3. Check the cluster status after the removal
mongos> sh.status()
10 Data Balancing: Working with the Balancer
The balancer is an important cluster component (since MongoDB 3.4 it runs on the primary of the config server replica set): based on cluster state, it automatically inspects the chunk distribution across all shards and migrates chunks as needed.
10.1 When migrations are triggered
1. Based on cluster load: migrations run when the system is not busy.
2. When a shard is removed, draining migrations start immediately.
3. The balancer only runs within its configured time window.
4. For backups and similar maintenance, the balancer can be stopped and restarted manually:
mongos> sh.stopBalancer()
mongos> sh.startBalancer()
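Whether a balancing round is currently in progress can be checked as well:
mongos> sh.isBalancerRunning()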
10.2 Customizing the balancing window
mongos> use config # switch to the config database
switched to db config
First make sure the balancer is enabled:
mongos> sh.setBalancerState( true )
{
"ok" : 1,
"operationTime" : Timestamp(1632628197, 5),
"$clusterTime" : {
"clusterTime" : Timestamp(1632628197, 5),
"signature" : {
"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
"keyId" : NumberLong(0)
}
}
}
mongos>
Set the balancing window:
mongos> db.settings.update({ _id : "balancer" }, { $set : { activeWindow : { start : "3:00", stop : "5:00" } } }, true )
WriteResult({ "nMatched" : 1, "nUpserted" : 0, "nModified" : 1 })
mongos>
View the balancing window:
mongos> sh.getBalancerWindow()
{ "start" : "3:00", "stop" : "5:00" }
Check the status:
mongos> sh.status()
Disable balancing for a specific collection (note: the namespace must be the exact sharded collection name; students.users here does not match the student.users collection created earlier, hence nMatched: 0):
mongos> sh.disableBalancing("students.users")
WriteResult({ "nMatched" : 0, "nUpserted" : 0, "nModified" : 0 })
Re-enable balancing for a collection:
mongos> sh.enableBalancing("students.users")
WriteResult({ "nMatched" : 0, "nUpserted" : 0, "nModified" : 0 })
Check whether balancing is enabled or disabled for a collection:
mongos> db.getSiblingDB("config").collections.findOne({_id : "students.users"}).noBalance;
This completes the MongoDB sharded-cluster walkthrough.