Integrating ShardingSphere 5.0.0-alpha with Spring Boot
Background
I recently needed to build a unified comment microservice: the comments of all apps in operation are written to the database through a single service. Since the data volume keeps growing, sharding tables within a single database would limit future scalability, so I used the open-source middleware ShardingSphere to shard across both databases and tables. In the 5.x line this middleware can also scale out or scale in without downtime, which saves a lot of trouble. What follows is a simple database-and-table sharding setup; read/write splitting and online scaling are left for further study.
Official documentation (Chinese): https://shardingsphere.apache.org/document/current/cn/overview/
Official usage examples on GitHub: https://github.com/apache/shardingsphere/tree/master/examples
Preparing the Data
Create two databases in MySQL, zhaohy and zhaohy1:
create database zhaohy;
create database zhaohy1;
In each database, create the two tables t_test_0 and t_test_1 with identical structure:
CREATE TABLE IF NOT EXISTS `t_test_0`(
`id` INT UNSIGNED AUTO_INCREMENT,
`title` VARCHAR(100) NOT NULL,
`author` VARCHAR(40) NOT NULL,
`date` DATETIME,
`title_id` VARCHAR(32),
`column_id` VARCHAR(32),
PRIMARY KEY ( `id` )
)ENGINE=InnoDB DEFAULT CHARSET=utf8;
CREATE TABLE IF NOT EXISTS `t_test_1`(
`id` INT UNSIGNED AUTO_INCREMENT,
`title` VARCHAR(100) NOT NULL,
`author` VARCHAR(40) NOT NULL,
`date` DATETIME,
`title_id` VARCHAR(32),
`column_id` VARCHAR(32),
PRIMARY KEY ( `id` )
)ENGINE=InnoDB DEFAULT CHARSET=utf8;
Framework Integration (HikariCP / DBCP connection pools as the example)
Maven dependencies:
<!-- https://mvnrepository.com/artifact/org.apache.shardingsphere/shardingsphere-jdbc-core -->
<dependency>
<groupId>org.apache.shardingsphere</groupId>
<artifactId>shardingsphere-jdbc-core</artifactId>
<version>5.0.0-alpha</version>
</dependency>
<!-- https://mvnrepository.com/artifact/org.apache.commons/commons-dbcp2 -->
<dependency>
<groupId>org.apache.commons</groupId>
<artifactId>commons-dbcp2</artifactId>
<version>2.1.1</version>
</dependency>
<dependency>
<groupId>mysql</groupId>
<artifactId>mysql-connector-java</artifactId>
<version>5.1.47</version>
<scope>provided</scope>
</dependency>
Comment out the original database connection settings in application.properties and configure the data sources through the Java API instead (I tried for several days, and this was the only approach I could get working...).
Register the DataSource bean in the Spring Boot startup class (configured according to the latest Java API example on the official site):
@Bean
public DataSource dataSource() {
// Configure the physical data sources
Map<String, DataSource> dataSourceMap = new HashMap<>();
// Data source 1 (database zhaohy)
BasicDataSource dataSource1 = new BasicDataSource();
dataSource1.setDriverClassName("com.mysql.jdbc.Driver");
dataSource1.setUrl("jdbc:mysql://127.0.0.1:3306/zhaohy?useUnicode=true&characterEncoding=UTF-8&zeroDateTimeBehavior=convertToNull&allowMultiQueries=true&serverTimezone=UTC");
dataSource1.setUsername("root");
dataSource1.setPassword("root");
dataSourceMap.put("ds0", dataSource1);
// Data source 2 (database zhaohy1)
BasicDataSource dataSource2 = new BasicDataSource();
dataSource2.setDriverClassName("com.mysql.jdbc.Driver");
dataSource2.setUrl("jdbc:mysql://127.0.0.1:3306/zhaohy1?useUnicode=true&characterEncoding=UTF-8&zeroDateTimeBehavior=convertToNull&allowMultiQueries=true&serverTimezone=UTC");
dataSource2.setUsername("root");
dataSource2.setPassword("root");
dataSourceMap.put("ds1", dataSource2);
// Rule for the logical table t_test, mapped to ds0/ds1 and t_test_0/t_test_1
ShardingTableRuleConfiguration orderTableRuleConfig = new ShardingTableRuleConfiguration("t_test", "ds${0..1}.t_test_${0..1}");
// Database sharding strategy (shard key: column_id)
orderTableRuleConfig.setDatabaseShardingStrategy(new StandardShardingStrategyConfiguration("column_id", "dbShardingAlgorithm"));
// Table sharding strategy (shard key: title_id)
orderTableRuleConfig.setTableShardingStrategy(new StandardShardingStrategyConfiguration("title_id", "tableShardingAlgorithm"));
// Further table rules (e.g. the official example's t_order_item) would be added here the same way
// Sharding rule configuration
ShardingRuleConfiguration shardingRuleConfig = new ShardingRuleConfiguration();
shardingRuleConfig.getTables().add(orderTableRuleConfig);
// Database sharding algorithm (INLINE expression)
Properties dbShardingAlgorithmrProps = new Properties();
dbShardingAlgorithmrProps.setProperty("algorithm-expression", "ds${column_id % 2}");
shardingRuleConfig.getShardingAlgorithms().put("dbShardingAlgorithm", new ShardingSphereAlgorithmConfiguration("INLINE", dbShardingAlgorithmrProps));
// Table sharding algorithm (INLINE expression)
Properties tableShardingAlgorithmrProps = new Properties();
tableShardingAlgorithmrProps.setProperty("algorithm-expression", "t_test_${title_id % 2}");
shardingRuleConfig.getShardingAlgorithms().put("tableShardingAlgorithm", new ShardingSphereAlgorithmConfiguration("INLINE", tableShardingAlgorithmrProps));
DataSource dataSource = null;
try {
dataSource = ShardingSphereDataSourceFactory.createDataSource(dataSourceMap, Collections.singleton(shardingRuleConfig), new Properties());
} catch (SQLException e) {
e.printStackTrace();
}
//logger.info("datasource : {}", dataSource);
return dataSource;
}
As the code above shows, two databases, zhaohy and zhaohy1, are configured: rows are split across the two databases by column_id modulo 2, and within each database across the two tables by title_id modulo 2, with the logical table name t_test standing in for the physical tables t_test_0 and t_test_1. BasicDataSource is the DBCP connection pool class.
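To make the routing concrete, here is a minimal sketch of my own (it is not ShardingSphere code) showing what the two INLINE expressions above evaluate to for a given row, assuming the shard key values are numeric:
public final class RoutingSketch {
    // Mirrors "ds${column_id % 2}" and "t_test_${title_id % 2}" from the rule above
    static String route(long columnId, long titleId) {
        String database = "ds" + (columnId % 2);   // ds0 -> zhaohy, ds1 -> zhaohy1
        String table = "t_test_" + (titleId % 2);  // physical table inside that database
        return database + "." + table;
    }

    public static void main(String[] args) {
        System.out.println(route(2, 2)); // ds0.t_test_0
        System.out.println(route(3, 2)); // ds1.t_test_0
        System.out.println(route(2, 3)); // ds0.t_test_1
    }
}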
application.properties is basically unchanged; here it is for reference:
spring.profiles.active=sit
# Maximum total size of a multipart upload request
spring.servlet.multipart.max-request-size=50MB
# Maximum size of a single uploaded file
spring.servlet.multipart.max-file-size=10MB
#spring.datasource.url=jdbc\:oracle\:thin\:@127.0.0.1\:1521\:xe
#spring.datasource.username=zhaohy
#spring.datasource.password=oracle
#spring.datasource.driver-class-name=oracle.jdbc.OracleDriver
#spring.datasource.hikari.driver-class-name=net.sf.log4jdbc.sql.jdbcapi.DriverSpy
#spring.datasource.hikari.jdbc-url=jdbc:log4jdbc:oracle:thin:@192.168.1.16:1521:DBNT
#spring.datasource.hikari.username=IDS
#spring.datasource.hikari.password=IDS
## Database configuration
#spring.datasource.driver-class-name=com.mysql.jdbc.Driver
#spring.datasource.url=jdbc:mysql://127.0.0.1:3306/zhaohy?useUnicode=true&characterEncoding=UTF-8&zeroDateTimeBehavior=convertToNull&allowMultiQueries=true&serverTimezone=UTC
#spring.datasource.username=root
#spring.datasource.password=root
## HikariCP connection pool settings ------ see https://github.com/brettwooldridge/HikariCP for details
## Minimum number of idle connections
#spring.datasource.hikari.minimum-idle=5
### Maximum idle time for a connection, default 600000 (10 minutes)
#spring.datasource.hikari.idle-timeout=180000
### Maximum pool size, default 10
#spring.datasource.hikari.maximum-pool-size=10
### Default auto-commit behavior of connections returned from the pool, default: true
#spring.datasource.hikari.auto-commit=true
### Connection pool name
#spring.datasource.hikari.pool-name=MyHikariCP
### Maximum lifetime of a connection in the pool; 0 means unlimited, default 1800000 (30 minutes)
#spring.datasource.hikari.max-lifetime=1800000
### Connection timeout, default 30000 (30 seconds)
#spring.datasource.hikari.connection-timeout=30000
#spring.datasource.hikari.connection-test-query=SELECT 1
#set druid
#spring.datasource.druid.stat-view-servlet.login-username=admin
#spring.datasource.druid.stat-view-servlet.login-password=123456
#set default datasource
#spring.datasource.dynamic.primary=master
#spring.datasource.dynamic.datasource.master.url=jdbc\:oracle\:thin\:@127.0.0.1\:1521\:xe
#spring.datasource.dynamic.datasource.master.username=zhaohy
#spring.datasource.dynamic.datasource.master.password=oracle
#spring.datasource.dynamic.datasource.master.driver-class-name=oracle.jdbc.OracleDriver
#spring.datasource.dynamic.datasource.slave_1.url=jdbc\:oracle\:thin\:@39.100.143.84\:1521\:xe
#spring.datasource.dynamic.datasource.slave_1.username=zhaohy
#spring.datasource.dynamic.datasource.slave_1.password=oracle
#spring.datasource.dynamic.datasource.slave_1.driver-class-name=oracle.jdbc.OracleDriver
#spring.datasource.dynamic.datasource.master.druid.initial-size=3
#spring.datasource.dynamic.datasource.master.druid.max-active=8
#spring.datasource.dynamic.datasource.master.druid.min-idle=2
#spring.datasource.dynamic.datasource.master.druid.max-wait=-1
#spring.datasource.dynamic.datasource.master.druid.min-evictable-idle-time-millis=30000
#spring.datasource.dynamic.datasource.master.druid.max-evictable-idle-time-millis=30000
#spring.datasource.dynamic.datasource.master.druid.time-between-eviction-runs-millis=0
#spring.datasource.dynamic.datasource.master.druid.validation-query=select 1 from dual
#spring.datasource.dynamic.datasource.master.druid.validation-query-timeout=-1
#spring.datasource.dynamic.datasource.master.druid.test-on-borrow=false
#spring.datasource.dynamic.datasource.master.druid.test-on-return=false
#spring.datasource.dynamic.datasource.master.druid.test-while-idle=true
#spring.datasource.dynamic.datasource.master.druid.pool-prepared-statements=true
#spring.datasource.dynamic.datasource.master.druid.filters=stat,wall
#spring.datasource.dynamic.datasource.master.druid.share-prepared-statements=true
#spring.datasource.hikari.auto-commit=false
#mybatis.config-location=classpath:mybatis/mybatis-config.xml
mybatis.mapper-locations=classpath:mapper/*Mapper.xml
mybatis.configuration.jdbc-type-for-null=null
#thymeleaf
spring.thymeleaf.prefix=classpath:/templates/
spring.thymeleaf.suffix=.html
spring.thymeleaf.mode=LEGACYHTML5
spring.thymeleaf.encoding=UTF-8
spring.thymeleaf.servlet.content-type=text/html
spring.thymeleaf.cache=false
#redis
spring.redis.host=127.0.0.1
spring.redis.port=6379
spring.redis.password=8533
# Redis database index (default 0)
spring.redis.database=0
# Connection timeout (milliseconds)
spring.redis.timeout=5000
# Maximum number of connections in the pool (negative value means no limit)
spring.redis.jedis.pool.max-active=8
# Maximum blocking wait time for a connection (negative value means no limit)
spring.redis.jedis.pool.max-wait=-1
# Maximum idle connections in the pool
spring.redis.jedis.pool.max-idle=8
# Minimum idle connections in the pool
spring.redis.jedis.pool.min-idle=0
#spring.redis.cluster.nodes=115.28.208.105:6379,47.105.92.89:6379,118.190.151.92:6379
logging.config=classpath:log4j2-spring.xml
Here is the full pom.xml as well (the Spring Boot parent is 2.2.1.RELEASE):
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<parent>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-parent</artifactId>
<version>2.2.1.RELEASE</version>
<relativePath /> <!-- lookup parent from repository -->
</parent>
<groupId>com.zhaohy</groupId>
<artifactId>springbootSSM</artifactId>
<version>0.0.1-SNAPSHOT</version>
<name>springbootSSM-myblog</name>
<description>Demo project for Spring Boot</description>
<packaging>jar</packaging>
<properties>
<java.version>1.8</java.version>
</properties>
<dependencies>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-thymeleaf</artifactId>
<exclusions>
<exclusion>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-logging</artifactId>
</exclusion>
</exclusions>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-web</artifactId>
<!-- Exclude the default Logback logging and use Log4j2 instead -->
<exclusions>
<exclusion>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-logging</artifactId>
</exclusion>
</exclusions>
</dependency>
<dependency>
<groupId>org.mybatis.spring.boot</groupId>
<artifactId>mybatis-spring-boot-starter</artifactId>
<version>2.1.1</version>
</dependency>
<!-- <dependency>
<groupId>com.oracle.ojdbc</groupId>
<artifactId>ojdbc8</artifactId>
<scope>runtime</scope>
</dependency> -->
<!-- Spring Boot hot reload (devtools) -->
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-devtools</artifactId>
<optional>true</optional>
</dependency>
<!-- JSON dependency -->
<dependency>
<groupId>com.alibaba</groupId>
<artifactId>fastjson</artifactId>
<version>1.2.58</version>
</dependency>
<!-- log4j -->
<!-- Log4j2 support; note that the default logging must be excluded from spring-boot-starter and spring-boot-starter-web -->
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-log4j2</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-tomcat</artifactId>
<scope>provided</scope>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-test</artifactId>
<scope>test</scope>
<exclusions>
<exclusion>
<groupId>org.junit.vintage</groupId>
<artifactId>junit-vintage-engine</artifactId>
</exclusion>
</exclusions>
</dependency>
<dependency>
<groupId>commons-fileupload</groupId>
<artifactId>commons-fileupload</artifactId>
<version>1.4</version>
</dependency>
<dependency>
<groupId>commons-io</groupId>
<artifactId>commons-io</artifactId>
<version>2.6</version>
</dependency>
<dependency>
<groupId>net.sf.flexjson</groupId>
<artifactId>flexjson</artifactId>
<version>3.3</version>
</dependency>
<dependency>
<groupId>nl.bitwalker</groupId>
<artifactId>UserAgentUtils</artifactId>
<version>1.2.4</version>
</dependency>
<dependency>
<groupId>com.auth0</groupId>
<artifactId>java-jwt</artifactId>
<version>3.10.2</version>
</dependency>
<!-- Dependencies used by HttpClient -->
<dependency>
<groupId>org.apache.httpcomponents</groupId>
<artifactId>httpcore</artifactId>
<version>4.4.13</version>
</dependency>
<dependency>
<groupId>org.apache.httpcomponents</groupId>
<artifactId>httpclient</artifactId>
<version>4.5.12</version>
</dependency>
<!-- https://mvnrepository.com/artifact/org.apache.httpcomponents/httpmime -->
<dependency>
<groupId>org.apache.httpcomponents</groupId>
<artifactId>httpmime</artifactId>
<version>4.5.12</version>
</dependency>
<!-- Dependencies for AOP aspects -->
<!-- https://mvnrepository.com/artifact/org.aspectj/aspectjweaver -->
<dependency>
<groupId>org.aspectj</groupId>
<artifactId>aspectjweaver</artifactId>
<version>1.9.6</version>
<scope>runtime</scope>
</dependency>
<!-- https://mvnrepository.com/artifact/aspectj/aspectjrt -->
<dependency>
<groupId>aspectj</groupId>
<artifactId>aspectjrt</artifactId>
<version>1.5.4</version>
</dependency>
<!-- Dynamic data source dependency -->
<!-- <dependency>
<groupId>com.baomidou</groupId>
<artifactId>dynamic-datasource-spring-boot-starter</artifactId>
<version>3.2.1</version>
</dependency> -->
<!-- https://mvnrepository.com/artifact/com.alibaba/druid-spring-boot-starter -->
<!-- <dependency>
<groupId>com.alibaba</groupId>
<artifactId>druid-spring-boot-starter</artifactId>
<version>1.1.24</version>
</dependency> -->
<!-- Redis dependency -->
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-data-redis</artifactId>
<version>2.1.1.RELEASE</version>
</dependency>
<!-- https://mvnrepository.com/artifact/org.apache.shardingsphere/shardingsphere-jdbc-core -->
<dependency>
<groupId>org.apache.shardingsphere</groupId>
<artifactId>shardingsphere-jdbc-core</artifactId>
<version>5.0.0-alpha</version>
</dependency>
<!-- https://mvnrepository.com/artifact/org.apache.commons/commons-dbcp2 -->
<dependency>
<groupId>org.apache.commons</groupId>
<artifactId>commons-dbcp2</artifactId>
<version>2.1.1</version>
</dependency>
<!-- https://mvnrepository.com/artifact/org.apache.shardingsphere/shardingsphere-jdbc-spring-boot-starter-infra -->
<!-- <dependency>
<groupId>org.apache.shardingsphere</groupId>
<artifactId>shardingsphere-jdbc-spring-boot-starter-infra</artifactId>
<version>5.0.0-alpha</version>
</dependency> -->
<dependency>
<groupId>mysql</groupId>
<artifactId>mysql-connector-java</artifactId>
<version>5.1.47</version>
<scope>provided</scope>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-configuration-processor</artifactId>
<optional>true</optional>
</dependency>
</dependencies>
<build>
<plugins>
<plugin>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-maven-plugin</artifactId>
</plugin>
</plugins>
</build>
</project>
Here is the startup class as well:
package com.zhaohy.app;
import java.sql.SQLException;
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import java.util.Properties;
import javax.sql.DataSource;
import org.apache.commons.dbcp2.BasicDataSource;
import org.apache.shardingsphere.driver.api.ShardingSphereDataSourceFactory;
import org.apache.shardingsphere.infra.config.algorithm.ShardingSphereAlgorithmConfiguration;
import org.apache.shardingsphere.sharding.api.config.ShardingRuleConfiguration;
import org.apache.shardingsphere.sharding.api.config.rule.ShardingTableRuleConfiguration;
import org.apache.shardingsphere.sharding.api.config.strategy.sharding.StandardShardingStrategyConfiguration;
import org.mybatis.spring.annotation.MapperScan;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.boot.web.servlet.FilterRegistrationBean;
import org.springframework.boot.web.servlet.ServletListenerRegistrationBean;
import org.springframework.context.annotation.Bean;
import com.zhaohy.app.sys.filter.LoginProcessFilter;
import com.zhaohy.app.utils.OnLineCountListener;
//@SpringBootApplication(exclude = DruidDataSourceAutoConfigure.class)
@SpringBootApplication
@MapperScan("com.zhaohy.app.dao")
//@ImportResource({"classpath:applicationContext.xml"})
public class App {
public static void main(String[] args) {
SpringApplication.run(App.class, args);
System.out.println("springboot started...");
}
@SuppressWarnings({ "rawtypes", "unchecked" })
@Bean
public FilterRegistrationBean myFilterRegistration() {
FilterRegistrationBean regist = new FilterRegistrationBean(new LoginProcessFilter());
// Apply the filter to all requests
regist.addUrlPatterns("/*");// URL patterns to filter
regist.setOrder(1);// filter order
return regist;
}
@SuppressWarnings({ "rawtypes", "unchecked" })
@Bean
public ServletListenerRegistrationBean listenerRegist() {
ServletListenerRegistrationBean srb = new ServletListenerRegistrationBean();
srb.setListener(new OnLineCountListener());
System.out.println("listener====");
return srb;
}
@Bean
public DataSource dataSource() {
// Configure the physical data sources
Map<String, DataSource> dataSourceMap = new HashMap<>();
// Data source 1 (database zhaohy)
BasicDataSource dataSource1 = new BasicDataSource();
dataSource1.setDriverClassName("com.mysql.jdbc.Driver");
dataSource1.setUrl("jdbc:mysql://127.0.0.1:3306/zhaohy?useUnicode=true&characterEncoding=UTF-8&zeroDateTimeBehavior=convertToNull&allowMultiQueries=true&serverTimezone=UTC");
dataSource1.setUsername("root");
dataSource1.setPassword("root");
dataSourceMap.put("ds0", dataSource1);
// Data source 2 (database zhaohy1)
BasicDataSource dataSource2 = new BasicDataSource();
dataSource2.setDriverClassName("com.mysql.jdbc.Driver");
dataSource2.setUrl("jdbc:mysql://127.0.0.1:3306/zhaohy1?useUnicode=true&characterEncoding=UTF-8&zeroDateTimeBehavior=convertToNull&allowMultiQueries=true&serverTimezone=UTC");
dataSource2.setUsername("root");
dataSource2.setPassword("root");
dataSourceMap.put("ds1", dataSource2);
// Rule for the logical table t_test, mapped to ds0/ds1 and t_test_0/t_test_1
ShardingTableRuleConfiguration orderTableRuleConfig = new ShardingTableRuleConfiguration("t_test", "ds${0..1}.t_test_${0..1}");
// Database sharding strategy (shard key: column_id)
orderTableRuleConfig.setDatabaseShardingStrategy(new StandardShardingStrategyConfiguration("column_id", "dbShardingAlgorithm"));
// Table sharding strategy (shard key: title_id)
orderTableRuleConfig.setTableShardingStrategy(new StandardShardingStrategyConfiguration("title_id", "tableShardingAlgorithm"));
// Further table rules (e.g. the official example's t_order_item) would be added here the same way
// Sharding rule configuration
ShardingRuleConfiguration shardingRuleConfig = new ShardingRuleConfiguration();
shardingRuleConfig.getTables().add(orderTableRuleConfig);
// Database sharding algorithm (INLINE expression)
Properties dbShardingAlgorithmrProps = new Properties();
dbShardingAlgorithmrProps.setProperty("algorithm-expression", "ds${column_id % 2}");
shardingRuleConfig.getShardingAlgorithms().put("dbShardingAlgorithm", new ShardingSphereAlgorithmConfiguration("INLINE", dbShardingAlgorithmrProps));
// Table sharding algorithm (INLINE expression)
Properties tableShardingAlgorithmrProps = new Properties();
tableShardingAlgorithmrProps.setProperty("algorithm-expression", "t_test_${title_id % 2}");
shardingRuleConfig.getShardingAlgorithms().put("tableShardingAlgorithm", new ShardingSphereAlgorithmConfiguration("INLINE", tableShardingAlgorithmrProps));
DataSource dataSource = null;
try {
dataSource = ShardingSphereDataSourceFactory.createDataSource(dataSourceMap, Collections.singleton(shardingRuleConfig), new Properties());
} catch (SQLException e) {
e.printStackTrace();
}
//logger.info("datasource : {}", dataSource);
return dataSource;
}
}
With that, the framework is configured; next comes the test code.
Business Code and Testing
controller:
package com.zhaohy.app.controller;
import java.util.concurrent.TimeUnit;
import javax.servlet.http.HttpServletRequest;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.data.redis.core.StringRedisTemplate;
import org.springframework.stereotype.Controller;
import org.springframework.web.bind.annotation.RequestMapping;
import com.zhaohy.app.service.TestService;
import com.zhaohy.app.sys.annotation.RecordLog;
@Controller
public class TestController {
@Autowired
TestService testService;
@RequestMapping("/test/insert.do")
public void test3(HttpServletRequest request) {
testService.test3();
}
@RequestMapping("/test/select.do")
public void select(HttpServletRequest request) {
testService.select();
}
}
serviceImpl:
package com.zhaohy.app.service.impl;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;
import com.alibaba.fastjson.JSON;
import com.zhaohy.app.dao.TestMapper;
import com.zhaohy.app.service.TestService;
@Service("TestService")
public class TestServiceImpl implements TestService {
@Autowired
TestMapper testMapper;
@Override
public void test3() {
Map<String, Object> paramsMap = new HashMap<String, Object>();
//paramsMap.put("id", "2");
paramsMap.put("columnId", 2);
paramsMap.put("title", "标题2");
paramsMap.put("author", "zhaohy2");
paramsMap.put("titleId", 2);
testMapper.insertTest(paramsMap);
System.out.println("插入完成!");
}
@Override
public void select() {
List<Map<String, Object>> list = new ArrayList<Map<String,Object>>();
Map<String, Object> paramsMap = new HashMap<String, Object>();
list = testMapper.getListByDb(paramsMap);
System.out.println("===" + JSON.toJSONString(list));
}
}
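Note that test3() always uses columnId = 2 and titleId = 2, so every call lands in the same physical table. To watch rows spread across all four physical tables, a hypothetical extra method inside TestServiceImpl (the method name and loop are mine, not part of the original service) could vary the shard keys:
// Hypothetical addition to TestServiceImpl: insert four rows whose shard keys differ,
// so every database/table combination receives one row.
public void insertAcrossShards() {
    for (int i = 0; i < 4; i++) {
        Map<String, Object> paramsMap = new HashMap<String, Object>();
        paramsMap.put("columnId", i % 2);        // database route: ds${column_id % 2}
        paramsMap.put("titleId", (i / 2) % 2);   // table route: t_test_${title_id % 2}
        paramsMap.put("title", "title" + i);
        paramsMap.put("author", "zhaohy" + i);
        testMapper.insertTest(paramsMap);
    }
}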
dao mapper interface:
package com.zhaohy.app.dao;
import java.util.List;
import java.util.Map;
//import com.baomidou.dynamic.datasource.annotation.DS;
public interface TestMapper {
List<Map<String, Object>> getListByDb(Map<String, Object> paramsMap);
void insertTest(Map<String, Object> paramsMap);
}
mapper.xml:
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE mapper PUBLIC "-//mybatis.org//DTD Mapper 3.0//EN" "http://mybatis.org/dtd/mybatis-3-mapper.dtd">
<mapper namespace="com.zhaohy.app.dao.TestMapper">
<select id="getListByDb" resultType="java.util.HashMap">
select * from t_test t
</select>
<insert id="insertTest" parameterType="java.util.HashMap">
insert into t_test (title, author, date, title_id, column_id) values (#{title}, #{author}, now(), #{titleId}, #{columnId})
</insert>
</mapper>
Test run: http://127.0.0.1:8081/test/insert.do inserts into the databases according to the sharding rules.
http://127.0.0.1:8081/test/select.do queries and merges all data from both databases; note that the SQL here uses the logical table t_test configured via the Java API.
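To summarize the routing behind those two calls: the insert carries both shard keys (columnId = 2, titleId = 2), so ShardingSphere rewrites it against the single physical table ds0.t_test_0, while the unfiltered select has no shard key and is broadcast to all four physical tables, with the result sets merged. A quick sanity check of that merge can be done with plain JDBC against the DataSource bean configured above (an illustrative sketch; it assumes the bean is injected as dataSource and needs the java.sql.Connection/Statement/ResultSet/SQLException imports):
try (Connection conn = dataSource.getConnection();               // ShardingSphere DataSource bean
     Statement stmt = conn.createStatement();
     ResultSet rs = stmt.executeQuery("select * from t_test")) { // logical table, no shard key -> broadcast
    int rows = 0;
    while (rs.next()) {
        rows++;
    }
    System.out.println("rows merged from all four physical tables: " + rows);
} catch (SQLException e) {
    e.printStackTrace();
}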
With that, basic database and table sharding is in place.
Supplement: Druid connection pool configuration
If you don't want the combination of Spring Boot's default HikariCP and the DBCP pool used above, you can use Alibaba's Druid instead.
Add druid-spring-boot-starter in Maven and comment out the commons-dbcp2 dependency:
<!-- https://mvnrepository.com/artifact/com.alibaba/druid-spring-boot-starter -->
<dependency>
<groupId>com.alibaba</groupId>
<artifactId>druid-spring-boot-starter</artifactId>
<version>1.1.24</version>
</dependency>
<!-- https://mvnrepository.com/artifact/org.apache.commons/commons-dbcp2 -->
<!-- <dependency>
<groupId>org.apache.commons</groupId>
<artifactId>commons-dbcp2</artifactId>
<version>2.1.1</version>
</dependency> -->
In the startup class, exclude DruidDataSourceAutoConfigure.class and create each data source with DruidDataSource dataSource1 = new DruidDataSource(); instead:
package com.zhaohy.app;
import java.sql.SQLException;
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import java.util.Properties;
import javax.sql.DataSource;
import org.apache.shardingsphere.driver.api.ShardingSphereDataSourceFactory;
import org.apache.shardingsphere.infra.config.algorithm.ShardingSphereAlgorithmConfiguration;
import org.apache.shardingsphere.sharding.api.config.ShardingRuleConfiguration;
import org.apache.shardingsphere.sharding.api.config.rule.ShardingTableRuleConfiguration;
import org.apache.shardingsphere.sharding.api.config.strategy.sharding.StandardShardingStrategyConfiguration;
import org.mybatis.spring.annotation.MapperScan;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.boot.web.servlet.FilterRegistrationBean;
import org.springframework.boot.web.servlet.ServletListenerRegistrationBean;
import org.springframework.context.annotation.Bean;
import com.alibaba.druid.pool.DruidDataSource;
import com.alibaba.druid.spring.boot.autoconfigure.DruidDataSourceAutoConfigure;
import com.zhaohy.app.config.DatabaseConfig;
import com.zhaohy.app.sys.filter.LoginProcessFilter;
import com.zhaohy.app.utils.OnLineCountListener;
@SpringBootApplication(exclude = DruidDataSourceAutoConfigure.class)
//@SpringBootApplication
@MapperScan("com.zhaohy.app.dao")
//@ImportResource({"classpath:applicationContext.xml"})
public class App {
private final static Logger logger = LoggerFactory.getLogger(App.class);
public static void main(String[] args) {
SpringApplication.run(App.class, args);
System.out.println("springboot started...");
}
@SuppressWarnings({ "rawtypes", "unchecked" })
@Bean
public FilterRegistrationBean myFilterRegistration() {
FilterRegistrationBean regist = new FilterRegistrationBean(new LoginProcessFilter());
// Apply the filter to all requests
regist.addUrlPatterns("/*");// URL patterns to filter
regist.setOrder(1);// filter order
return regist;
}
@SuppressWarnings({ "rawtypes", "unchecked" })
@Bean
public ServletListenerRegistrationBean listenerRegist() {
ServletListenerRegistrationBean srb = new ServletListenerRegistrationBean();
srb.setListener(new OnLineCountListener());
System.out.println("listener====");
return srb;
}
@Bean
public DataSource dataSource() {
// Configure the physical data sources
Map<String, DataSource> dataSourceMap = new HashMap<>();
// Data source 1 (database zhaohy)
DruidDataSource dataSource1 = new DruidDataSource();
dataSource1.setDriverClassName("com.mysql.jdbc.Driver");
dataSource1.setUrl("jdbc:mysql://127.0.0.1:3306/zhaohy?useUnicode=true&characterEncoding=UTF-8&zeroDateTimeBehavior=convertToNull&allowMultiQueries=true&serverTimezone=UTC");
dataSource1.setUsername("root");
dataSource1.setPassword("root");
dataSourceMap.put("ds0", dataSource1);
// Data source 2 (database zhaohy1)
DruidDataSource dataSource2 = new DruidDataSource();
dataSource2.setDriverClassName("com.mysql.jdbc.Driver");
dataSource2.setUrl("jdbc:mysql://127.0.0.1:3306/zhaohy1?useUnicode=true&characterEncoding=UTF-8&zeroDateTimeBehavior=convertToNull&allowMultiQueries=true&serverTimezone=UTC");
dataSource2.setUsername("root");
dataSource2.setPassword("root");
dataSourceMap.put("ds1", dataSource2);
// Rule for the logical table t_test, mapped to ds0/ds1 and t_test_0/t_test_1
ShardingTableRuleConfiguration orderTableRuleConfig = new ShardingTableRuleConfiguration("t_test", "ds${0..1}.t_test_${0..1}");
// Database sharding strategy (shard key: column_id)
orderTableRuleConfig.setDatabaseShardingStrategy(new StandardShardingStrategyConfiguration("column_id", "dbShardingAlgorithm"));
// Table sharding strategy (shard key: title_id)
orderTableRuleConfig.setTableShardingStrategy(new StandardShardingStrategyConfiguration("title_id", "tableShardingAlgorithm"));
// Further table rules (e.g. the official example's t_order_item) would be added here the same way
// Sharding rule configuration
ShardingRuleConfiguration shardingRuleConfig = new ShardingRuleConfiguration();
shardingRuleConfig.getTables().add(orderTableRuleConfig);
// Database sharding algorithm (INLINE expression)
Properties dbShardingAlgorithmrProps = new Properties();
dbShardingAlgorithmrProps.setProperty("algorithm-expression", "ds${column_id % 2}");
shardingRuleConfig.getShardingAlgorithms().put("dbShardingAlgorithm", new ShardingSphereAlgorithmConfiguration("INLINE", dbShardingAlgorithmrProps));
// Table sharding algorithm (INLINE expression)
Properties tableShardingAlgorithmrProps = new Properties();
tableShardingAlgorithmrProps.setProperty("algorithm-expression", "t_test_${title_id % 2}");
shardingRuleConfig.getShardingAlgorithms().put("tableShardingAlgorithm", new ShardingSphereAlgorithmConfiguration("INLINE", tableShardingAlgorithmrProps));
DataSource dataSource = null;
try {
dataSource = ShardingSphereDataSourceFactory.createDataSource(dataSourceMap, Collections.singleton(shardingRuleConfig), new Properties());
} catch (SQLException e) {
e.printStackTrace();
}
logger.info("datasource : {}", dataSource);
return dataSource;
}
}
Nothing else changes, and running it produces the same result.
Other Druid data source properties can be configured as shown below:
// Additional pool settings; the values (initialSize, minIdle, maxActive, ...) come from your own
// configuration, and 'datasource' stands for each DruidDataSource instance (dataSource1 / dataSource2)
datasource.setInitialSize(initialSize);
datasource.setMinIdle(minIdle);
datasource.setMaxActive(maxActive);
datasource.setMaxWait(maxWait);
datasource.setTimeBetweenEvictionRunsMillis(timeBetweenEvictionRunsMillis);
datasource.setMinEvictableIdleTimeMillis(minEvictableIdleTimeMillis);
datasource.setValidationQuery(validationQuery);
datasource.setTestWhileIdle(testWhileIdle);
datasource.setTestOnBorrow(testOnBorrow);
datasource.setTestOnReturn(testOnReturn);
datasource.setPoolPreparedStatements(poolPreparedStatements);
datasource.setMaxPoolPreparedStatementPerConnectionSize(maxPoolPreparedStatementPerConnectionSize);
try {
datasource.setFilters(filters);
} catch (SQLException e) {
System.err.println("druid configuration initialization filter: " + e);
}
datasource.setConnectionProperties(connectionProperties);
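For example, a few of these could be set directly inside dataSource() with literal values (the numbers are illustrative only, not tuned; repeat the same calls for dataSource2):
DruidDataSource dataSource1 = new DruidDataSource();
dataSource1.setDriverClassName("com.mysql.jdbc.Driver");
dataSource1.setUrl("jdbc:mysql://127.0.0.1:3306/zhaohy?useUnicode=true&characterEncoding=UTF-8&zeroDateTimeBehavior=convertToNull&allowMultiQueries=true&serverTimezone=UTC");
dataSource1.setUsername("root");
dataSource1.setPassword("root");
dataSource1.setInitialSize(3);          // connections created at startup
dataSource1.setMinIdle(2);              // minimum idle connections kept in the pool
dataSource1.setMaxActive(8);            // maximum active connections
dataSource1.setValidationQuery("select 1");
dataSource1.setTestWhileIdle(true);     // validate idle connections in the background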
For a detailed Druid connection pool configuration reference, see: https://cloud.tencent.com/developer/article/1080560