Deploying MySQL InnoDB Cluster: MySQL's official solution
0. Follow-up notes on problems encountered
0.1 Error when importing data: Specified key was too long; max key length is 767 bytes
See https://www.jianshu.com/p/87467d0ff1b4
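In short, this is InnoDB's 767-byte index-prefix limit for the older COMPACT/REDUNDANT row formats. A minimal sketch of the usual fix on MySQL 5.7, assuming the affected table (t is a placeholder) needs a long utf8mb4 index:
mysql> set global innodb_large_prefix = ON; # allow index prefixes up to 3072 bytes
mysql> set global innodb_file_format = Barracuda;
mysql> alter table t row_format=DYNAMIC; # DYNAMIC (or COMPRESSED) row format is required for long prefixes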
0.2 Too many connections
set global max_connections=1000;
Raising the maximum connection count helps, but it can still run out again. In that case also set wait_timeout and interactive_timeout, which control how long an idle connection stays open before the server closes it.
- wait_timeout: for non-interactive connections, e.g. JDBC connections
- interactive_timeout: for interactive connections, e.g. the mysql shell
Normally, though, the right fix is to configure the connection pools in the application code so that the total number of pooled connections across all applications using this database never exceeds max_connections.
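Note that set global does not survive a restart; to make the values permanent, put them in /etc/my.cnf as well (the timeouts below are example values in seconds, not recommendations):
[mysqld]
max_connections = 1000
wait_timeout = 600
interactive_timeout = 600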
0.3 Can't connect to remote MySQL server for client '0.0.0.0:6446'
After some period of use (maybe hours, maybe days or months; it is not deterministic), mysql-router can no longer connect to MySQL. See the official discussion thread; it names no concrete cause, and I never figured one out either.
Because MySQL was at version 5.7 at first, mysql-shell and mysql-router were installed at correspondingly old versions. It later turned out that the newest mysql-shell and mysql-router work fine against MySQL 5.7. After upgrading mysql-router to 8.0 the problem never came back.
0.4 Restarting a node
In an InnoDB cluster, nodes are either primary (read-write) or secondary (read-only). If the primary goes down, a secondary is elected as the new primary; if a secondary goes down there is no impact. Note, however, that a restarted node does not rejoin the cluster automatically; you have to run:
cluster.rejoinInstance("{userName}@{host}:{port}");
After rejoining, the node's status is RECOVERING; once it has caught up on data it becomes ONLINE.
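For example, with the nodes from this walkthrough, rejoining db-node02 would look like this in mysqlsh (connect to the current primary first):
mysql-js> shell.connect('root@db-node01:3000');
mysql-js> var cluster = dba.getCluster('testCluster');
mysql-js> cluster.rejoinInstance('root@db-node02:3000');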
In practice, I once hit a node that could not catch up after a restart: the logs showed errors applying a binlog file, and I never found the cause (possibly data corruption). In the end recovery required dumping the data, rebuilding the cluster, and importing the dump.
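For reference, a rough outline of that recovery, assuming GTID mode and a single application schema (mydb is a placeholder):
# 1. dump the application data from a healthy node
mysqldump -h db-node01 -P 3000 -u root -p --databases mydb --triggers --routines --events --set-gtid-purged=OFF > mydb.sql
# 2. tear down and recreate the cluster in mysqlsh (dba.dropMetadataSchema(), then createCluster/addInstance as in section 6)
# 3. load the dump back in through the router's read-write port
mysql -h 127.0.0.1 -P 6446 -u root -p < mydb.sql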
0.5 Enabling multi-threaded replication
slave-parallel-type=LOGICAL_CLOCK
slave-parallel-workers=8
With those two settings added, the secondary restarted normally but failed when joining the cluster:
The server is not configured properly to be an active member of the group. Please see more details on error log.. Query: START group_replication (RuntimeError)
At first glance, "the server is not configured properly to be an active member of the group" suggested the settings written by dba.configureLocalInstance() had been lost, but the node had merely been restarted, so that made no sense. Following the hint, I looked at the secondary's error log:
Plugin group_replication reported: 'Group Replication requires slave-preserve-commit-order to be set to ON when using more than 1 applier threads.'
It turns out my.cnf needs one more setting:
slave_preserve_commit_order=on
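For completeness, the three multi-threaded replication settings together in my.cnf (eight workers is just the value used here, not a recommendation):
[mysqld]
slave-parallel-type=LOGICAL_CLOCK
slave-parallel-workers=8
slave_preserve_commit_order=on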
That ends the troubleshooting notes; the deployment walkthrough itself follows.
1. Preparation
From the MySQL website, download mysql-community-server (which requires three dependency packages: mysql-community-common, mysql-community-libs, and mysql-community-client), mysql-router, and mysql-shell. The versions I installed are 5.7.23, 2.1.4, and 1.0.9 respectively.
2. Environment
CentOS 7 (uninstall mariadb first; it conflicts with MySQL)
3. Installation layout
At least three database nodes are needed, so that the cluster keeps working with at most one node down. Configure the hosts file:
192.168.0.22 db-node01
192.168.0.182 db-node02
192.168.0.248 db-node03
These three nodes store the data (one primary, two secondaries) and need mysql and mysql-shell installed.
At least one management node is also needed, with mysql-router and mysql-shell installed. There can be several management nodes, fronted by an nginx proxy for high availability.
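For reference, a minimal nginx TCP proxy sketch in front of two routers (the router addresses are hypothetical; nginx must include the stream module):
stream {
    upstream mysql_rw {
        server 192.168.0.100:6446;
        server 192.168.0.101:6446;
    }
    server {
        listen 3306;
        proxy_pass mysql_rw;
    }
}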
4. Installing the data nodes
Remember to remove mariadb first.
# find installed mariadb packages
rpm -qa | grep mariadb
# remove what the search returned
rpm -e {packages found above}
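If several packages show up, they can be removed in one go (note that --nodeps skips dependency checks; use with care):
rpm -qa | grep mariadb | xargs rpm -e --nodeps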
Install MySQL and mysql-shell:
rpm -ivh mysql-community-common-5.7.23-1.el7.x86_64.rpm
rpm -ivh mysql-community-libs-5.7.23-1.el7.x86_64.rpm
rpm -ivh mysql-community-client-5.7.23-1.el7.x86_64.rpm
rpm -ivh mysql-community-server-5.7.23-1.el7.x86_64.rpm
rpm -ivh mysql-shell-1.0.9-1.el7.x86_64.rpm
Edit /etc/my.cnf:
[mysqld]
port=3000
datadir = /data1/mysql/data
socket = /var/lib/mysql/mysql.sock
symbolic-links = 0
log-error = /data1/mysql/log/mysqld.log
pid-file = /var/run/mysqld/mysqld.pid
server_id=1
gtid_mode=ON
enforce_gtid_consistency=ON
binlog_checksum=NONE
log_bin=binlog
log_slave_updates=ON
binlog_format=ROW
master_info_repository=TABLE
relay_log_info_repository=TABLE
transaction_write_set_extraction=XXHASH64
loose-group_replication_group_name="aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa"
loose-group_replication_start_on_boot=off
loose-group_replication_local_address="db-node01:33061"
loose-group_replication_group_seeds="db-node01:33061,db-node02:33061,db-node03:33061"
loose-group_replication_bootstrap_group=off
loose-group_replication_single_primary_mode = on
disabled_storage_engines = MyISAM,BLACKHOLE,FEDERATED,CSV,ARCHIVE
report_port = 3000
group_replication = ON
loose-group_replication_allow_local_disjoint_gtids_join=on
Start MySQL:
systemctl start mysqld
Look up the temporary password that MySQL generated for root on first startup (substitute the log path you set in my.cnf):
cat /data1/mysql/log/mysqld.log | grep 'A temporary password'
Log in with that password and set a new one:
mysql -h localhost -P 3000 -u root -p # enter the temporary password at the prompt
mysql> set password=password("123456");
mysql> flush privileges;
If the password you set is too simple, you get ERROR 1819 (HY000): Your password does not satisfy the current policy requirements. In that case relax the policy:
set global validate_password_policy=0;
set global validate_password_length=1;
Install all three nodes this way. Note that in the config files of the other two nodes you must change server_id to 2 and 3, and set loose-group_replication_local_address to "db-node02:33061" and "db-node03:33061" respectively.
As an aside, the three servers do not form a cluster yet, but each MySQL instance is already usable standalone:
mysql -h localhost -P 3000 -u root -p
5. Configuring the data nodes
Set up a user and password that the other hosts in the cluster can use to connect to this machine's MySQL; without that, the other nodes cannot connect, and replication is a non-starter. For simplicity I just reuse the root account.
Run:
mysqlsh
This opens the shell, indicated by the mysql-js> prompt. In it, run:
mysql-js> shell.connect('root@localhost:3000'); # the MySQL instance installed above: user root, port 3000
Please provide the password for 'root@localhost:3000': # enter the password you set above
Creating a Session to 'root@localhost:3000'
Classic Session successfully established. No default schema selected. # the shell is now connected to MySQL
mysql-js> dba.configureLocalInstance("localhost:3000");
Please provide the password for 'root@localhost:3000': # enter the password
Detecting the configuration file...
Found configuration file at standard location: /etc/my.cnf
Do you want to modify this file? [Y|n]: # yes, use the /etc/my.cnf created above
MySQL user 'root' cannot be verified to have access to other hosts in the network.
1) Create root@% with necessary grants
2) Create account with different name
3) Continue without creating account
4) Cancel
Please select an option [1]: 1 # option 1: grant root access from other hosts, i.e. let other machines connect as root, and set its password
Password for new account: # enter a password; for simplicity, the same one as before
Confirm password:
Validating instance...
The instance 'localhost:3000' is valid for Cluster usage
You can now use it in an InnoDB Cluster. # this node is ready to join a cluster
{
"status": "ok"
}
mysql-js>
The data node can now join an InnoDB cluster. Run this configuration step on all three data nodes.
6. Creating the cluster
Pick a machine that can reach the data nodes and install mysql-shell on it; from there, connect to a data node and create the cluster. Run:
mysqlsh
At the mysql-js> prompt, run:
mysql-js> shell.connect('root@db-node01:3000'); # connect to db-node01; once the cluster exists it is the primary (read-write), the other nodes read-only
Please provide the password for 'root@db-node01:3000': # enter the password
Creating a Session to 'root@db-node01:3000'
Classic Session successfully established. No default schema selected.
mysql-js> var cluster = dba.createCluster('testCluster'); # create a cluster named 'testCluster'
A new InnoDB cluster will be created on instance 'root@db-node01:3000'.
Creating InnoDB cluster 'testCluster' on 'root@db-node01:3000'...
Adding Seed Instance...
Cluster successfully created. Use Cluster.addInstance() to add MySQL instances.
At least 3 instances are needed for the cluster to be able to withstand up to
one server failure.
If creation succeeded, the output contains a line like "Cluster successfully created.". Now check the cluster status:
mysql-js> cluster.status();
{
    "clusterName": "testCluster",
    "defaultReplicaSet": {
        "name": "default",
        "primary": "db-node01:3000",
        "status": "OK_NO_TOLERANCE",
        "statusText": "Cluster is NOT tolerant to any failures.",
        "topology": {
            "db-node01:3000": {
                "address": "db-node01:3000",
                "mode": "R/W",
                "readReplicas": {},
                "role": "HA",
                "status": "ONLINE"
            }
        }
    }
}
mysql-js> cluster.addInstance('root@db-node02:3000');
A new instance will be added to the InnoDB cluster. Depending on the amount of
data on the cluster this might take from a few seconds to several hours.
Please provide the password for 'root@db-node02:3000':
Adding instance to the cluster ...
The instance 'root@db-node02:3000' was successfully added to the cluster.
mysql-js> cluster.addInstance('root@db-node03:3000');
A new instance will be added to the InnoDB cluster. Depending on the amount of
data on the cluster this might take from a few seconds to several hours.
Please provide the password for 'root@db-node03:3000':
Adding instance to the cluster ...
The instance 'root@db-node03:3000' was successfully added to the cluster.
mysql-js> cluster.status();
{
    "clusterName": "testCluster",
    "defaultReplicaSet": {
        "name": "default",
        "primary": "db-node01:3000",
        "status": "OK",
        "statusText": "Cluster is ONLINE and can tolerate up to ONE failure.",
        "topology": {
            "db-node01:3000": {
                "address": "db-node01:3000",
                "mode": "R/W",
                "readReplicas": {},
                "role": "HA",
                "status": "ONLINE"
            },
            "db-node02:3000": {
                "address": "db-node02:3000",
                "mode": "R/O",
                "readReplicas": {},
                "role": "HA",
                "status": "ONLINE"
            },
            "db-node03:3000": {
                "address": "db-node03:3000",
                "mode": "R/O",
                "readReplicas": {},
                "role": "HA",
                "status": "ONLINE"
            }
        }
    }
}
From the cluster status above, db-node01 is the primary with R/W (read-write) access, while the other two nodes are secondaries with R/O (read-only) access.
If you exit the shell in between and come back, reconnect with:
shell.connect('root@db-node01:3000'); # connect to the primary; if a failover promoted a secondary, use the new primary instead
cluster = dba.getCluster("testCluster"); # fetch the cluster created earlier
7. Exposing the cluster through mysql-router
Install mysql-router from its rpm, then bootstrap it against the cluster (note the MySQL port 3000 used throughout this setup):
/usr/local/mysql-route/bin/mysqlrouter --bootstrap root@db-node01:3000 -d myrouter --user=root
You will see output like the following:
Please enter MySQL password for root:
Bootstrapping MySQL Router instance at /root/myrouter...
MySQL Router has now been configured for the InnoDB cluster 'testCluster'.
The following connection information can be used to connect to the cluster.
Classic MySQL protocol connections to cluster 'testCluster':
- Read/Write Connections: localhost:6446 # read-write port
- Read/Only Connections: localhost:6447 # read-only port
X protocol connections to cluster 'testCluster':
- Read/Write Connections: localhost:64460
- Read/Only Connections: localhost:64470
This creates a myrouter directory under the current directory containing the router configuration:
data log mysqlrouter.conf mysqlrouter.key run start.sh stop.sh
You can edit mysqlrouter.conf or leave it as generated (editing is mainly useful for moving data such as the logs to a location of your choice). The listening ports can be changed via the bind_port setting in each [routing:...] section; I kept the defaults. Through the router, port 6446 allows read-write operations and port 6447 is read-only.
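For reference, the generated routing sections look roughly like this (section names vary with the cluster name; bind_port is the knob for the ports):
[routing:testCluster_default_rw]
bind_address=0.0.0.0
bind_port=6446
destinations=metadata-cache://testCluster/default?role=PRIMARY
mode=read-write

[routing:testCluster_default_ro]
bind_address=0.0.0.0
bind_port=6447
destinations=metadata-cache://testCluster/default?role=SECONDARY
mode=read-only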
Then start the router:
myrouter/start.sh
Check that the router process is running:
ps -ef | grep route
Check that ports 6446 and 6447 are listening:
netstat -ntlp | grep -E '6446|6447'
You can now connect directly through the router:
mysql -u root -h 127.0.0.1 -P 6446 -p # -h localhost does not work here: the client would use the Unix socket instead of TCP and never reach the router
The final step is to test that writes to the primary replicate to the secondaries, which I leave to the reader.
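A quick sanity check, assuming the router runs locally (repl_test is a throwaway schema name; Group Replication requires InnoDB tables with a primary key):
# write through the read-write port
mysql -u root -h 127.0.0.1 -P 6446 -p -e "CREATE DATABASE repl_test; CREATE TABLE repl_test.t (id INT PRIMARY KEY); INSERT INTO repl_test.t VALUES (1);"
# read through the read-only port; the row should come back from a secondary
mysql -u root -h 127.0.0.1 -P 6447 -p -e "SELECT * FROM repl_test.t;"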