SpringCloud Microservices in Action: Building an Enterprise-Grade Development Framework (Part 29)
Storing images and files in a microservice application differs from a monolith. A monolith can simply read and write files on its local disk, whereas microservices must use distributed storage and persist images and files to a stable, shared storage service. Many cloud vendors offer object storage, such as Alibaba Cloud OSS, Tencent Cloud COS, Qiniu Cloud Kodo, and Baidu Cloud BOS; there are also open-source object storage servers such as FastDFS and MinIO.
If the framework supported only one storage service, later extension or replacement would be limited. So we define an abstract interface here: implement it for whichever service you want to use, and when multiple services are configured, the caller can choose which one to use at upload time. For the cloud option we integrate Qiniu Cloud and for the open-source option MinIO; other services can be added by implementing the same interface.
Before building this out, prepare the environment, using MinIO and Qiniu Cloud as examples. MinIO is very easy to install; here we use the Linux binary package (see http://docs.minio.org.cn/docs/ for details). For Qiniu Cloud, just register on the official site and complete real-name verification to get 10 GB of free storage: https://www.qiniu.com/.
I. Base Library Implementation
1. In GitEgg-Platform, create the gitegg-platform-dfs sub-module (dfs: Distributed File System) to define the abstract interface for object storage, and create IDfsBaseService to declare the common upload/download methods.
/**
 * Distributed file storage operation interface.
 * To preserve the system's operation history, uploaded files are in principle never physically deleted or modified.
 * When business data needs to change or remove a file, only the association is updated: upload a new file and link it to the business record.
 */
public interface IDfsBaseService {
/**
 * Get a simple upload token
 * @param bucket
 * @return
 */
String uploadToken(String bucket);
/**
 * Get an overwrite upload token
 * @param bucket
 * @param key
 * @return
 */
String uploadToken(String bucket, String key);
/**
 * Create a bucket
 * @param bucket
 */
void createBucket(String bucket);
/**
 * Upload a file from a stream with the given file name
 * @param inputStream
 * @param fileName
 * @return
 */
GitEggDfsFile uploadFile(InputStream inputStream, String fileName);
/**
 * Upload a file from a stream with the given bucket and file name
 * @param inputStream
 * @param bucket
 * @param fileName
 * @return
 */
GitEggDfsFile uploadFile(InputStream inputStream, String bucket, String fileName);
/**
 * Get the file access URL by file name
 * @param fileName
 * @return
 */
String getFileUrl(String fileName);
/**
 * Get the file access URL by bucket and file name
 * @param bucket
 * @param fileName
 * @return
 */
String getFileUrl(String bucket, String fileName);
/**
 * Get the file access URL by bucket and file name, with an expiry
 * @param bucket
 * @param fileName
 * @param duration
 * @param unit
 * @return
 */
String getFileUrl(String bucket, String fileName, int duration, TimeUnit unit);
/**
 * Download an object as a stream by file name
 * @param fileName
 * @param outputStream
 * @return
 */
OutputStream getFileObject(String fileName, OutputStream outputStream);
/**
 * Download an object as a stream by bucket and file name
 * @param bucket
 * @param fileName
 * @param outputStream
 * @return
 */
OutputStream getFileObject(String bucket, String fileName, OutputStream outputStream);
/**
 * Delete a file by file name
 * @param fileName
 * @return
 */
String removeFile(String fileName);
/**
 * Delete a file in the given bucket by file name
 * @param bucket
 * @param fileName
 * @return
 */
String removeFile(String bucket, String fileName);
/**
 * Delete files in batch by file name list
 * @param fileNames
 * @return
 */
String removeFiles(List<String> fileNames);
/**
 * Delete files in the given bucket in batch by file name list
 * @param bucket
 * @param fileNames
 * @return
 */
String removeFiles(String bucket, List<String> fileNames);
}
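The methods above return a GitEggDfsFile describing the upload result. Its definition (also in the gitegg-platform-dfs module) is not reproduced in this article; the following is a minimal sketch inferred from the accessors used by the implementations below (setBucket, setBucketDomain, setFileUrl, setKey, setHash, and so on), so treat the exact field list as an assumption rather than the actual class.
import lombok.Data;
import java.io.Serializable;

/**
 * Assumed minimal sketch of the upload result model returned by IDfsBaseService.
 * Field names are inferred from the getters/setters used in the implementations in this article.
 */
@Data
public class GitEggDfsFile implements Serializable {
    /** Bucket the file was uploaded to */
    private String bucket;
    /** Domain of the upload server */
    private String bucketDomain;
    /** URL prefix used to access the file */
    private String fileUrl;
    /** File name as stored (usually the content hash plus extension) */
    private String encodedFileName;
    /** Original file name, kept for display */
    private String fileName;
    /** Object key */
    private String key;
    /** Content hash */
    private String hash;
    /** File size in bytes */
    private long fileSize;
}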
2. In GitEgg-Platform, create the gitegg-platform-dfs-minio sub-module, and add MinioDfsServiceImpl and MinioDfsProperties to implement the IDfsBaseService upload/download interface.
@Data
@Component
@ConfigurationProperties(prefix = "dfs.minio")
public class MinioDfsProperties {
/**
* AccessKey
*/
private String accessKey;
/**
* SecretKey
*/
private String secretKey;
/**
 * Region: the physical location configured on the MinIO server; defaults to us-east-1 (US East 1), which is also Amazon S3's default region.
 */
private String region;
/**
 * Bucket
 */
private String bucket;
/**
 * Access control: public or private
 */
private Integer accessControl;
/**
 * Upload server (endpoint) address
 */
private String uploadUrl;
/**
 * URL prefix for file access
 */
private String accessUrlPrefix;
/**
 * Upload directory prefix
 */
private String uploadDirPrefix;
}
@Slf4j
@AllArgsConstructor
public class MinioDfsServiceImpl implements IDfsBaseService {
private final MinioClient minioClient;
private final MinioDfsProperties minioDfsProperties;
@Override
public String uploadToken(String bucket) {
return null;
}
@Override
public String uploadToken(String bucket, String key) {
return null;
}
@Override
public void createBucket(String bucket) {
BucketExistsArgs bea = BucketExistsArgs.builder().bucket(bucket).build();
try {
if (!minioClient.bucketExists(bea)) {
MakeBucketArgs mba = MakeBucketArgs.builder().bucket(bucket).build();
minioClient.makeBucket(mba);
}
} catch (ErrorResponseException | InsufficientDataException | InternalException | InvalidKeyException
| InvalidResponseException | IOException | NoSuchAlgorithmException | ServerException | XmlParserException e) {
log.error("Failed to create MinIO bucket " + bucket, e);
}
}
@Override
public GitEggDfsFile uploadFile(InputStream inputStream, String fileName) {
return this.uploadFile(inputStream, minioDfsProperties.getBucket(), fileName);
}
@Override
public GitEggDfsFile uploadFile(InputStream inputStream, String bucket, String fileName) {
GitEggDfsFile dfsFile = new GitEggDfsFile();
try {
dfsFile.setBucket(bucket);
dfsFile.setBucketDomain(minioDfsProperties.getUploadUrl());
dfsFile.setFileUrl(minioDfsProperties.getAccessUrlPrefix());
dfsFile.setEncodedFileName(fileName);
minioClient.putObject(PutObjectArgs.builder()
.bucket(bucket)
.stream(inputStream, -1, 5*1024*1024)
.object(fileName)
.build());
} catch (ErrorResponseException | InsufficientDataException | InternalException | InvalidKeyException
| InvalidResponseException | IOException | NoSuchAlgorithmException | ServerException | XmlParserException e) {
log.error("Failed to upload file to MinIO: " + fileName, e);
}
return dfsFile;
}
@Override
public String getFileUrl(String fileName) {
return this.getFileUrl(minioDfsProperties.getBucket(), fileName);
}
@Override
public String getFileUrl(String bucket, String fileName) {
return this.getFileUrl(bucket, fileName, DfsConstants.DFS_FILE_DURATION, DfsConstants.DFS_FILE_DURATION_UNIT);
}
@Override
public String getFileUrl(String bucket, String fileName, int duration, TimeUnit unit) {
String url = null;
try {
url = minioClient.getPresignedObjectUrl(
GetPresignedObjectUrlArgs.builder()
.method(Method.GET)
.bucket(bucket)
.object(fileName)
.expiry(duration, unit)
.build());
} catch (ErrorResponseException | InsufficientDataException | InternalException | InvalidKeyException
| InvalidResponseException | IOException | NoSuchAlgorithmException | ServerException | XmlParserException e) {
log.error("Failed to get presigned URL from MinIO: " + fileName, e);
}
return url;
}
@Override
public OutputStream getFileObject(String fileName, OutputStream outputStream) {
return this.getFileObject(minioDfsProperties.getBucket(), fileName, outputStream);
}
@Override
public OutputStream getFileObject(String bucket, String fileName, OutputStream outputStream) {
BufferedInputStream bis = null;
InputStream stream = null;
try {
stream = minioClient.getObject(
GetObjectArgs.builder()
.bucket(bucket)
.object(fileName)
.build());
bis = new BufferedInputStream(stream);
IOUtils.copy(bis, outputStream);
} catch (ErrorResponseException | InsufficientDataException | InternalException | InvalidKeyException
| InvalidResponseException | IOException | NoSuchAlgorithmException | ServerException | XmlParserException e) {
log.error("Failed to read file from MinIO: " + fileName, e);
} finally {
IOUtils.closeQuietly(bis);
IOUtils.closeQuietly(stream);
}
return outputStream;
}
@Override
public String removeFile(String fileName) {
return this.removeFile(minioDfsProperties.getBucket(), fileName);
}
@Override
public String removeFile(String bucket, String fileName) {
return this.removeFiles(bucket, Collections.singletonList(fileName));
}
@Override
public String removeFiles(List<String> fileNames) {
return this.removeFiles(minioDfsProperties.getBucket(), fileNames);
}
@Override
public String removeFiles(String bucket, List<String> fileNames) {
List<DeleteObject> deleteObject = new ArrayList<>();
if (!CollectionUtils.isEmpty(fileNames))
{
fileNames.stream().forEach(item -> {
deleteObject.add(new DeleteObject(item));
});
}
Iterable<Result<DeleteError>> result = minioClient.removeObjects(RemoveObjectsArgs.builder()
.bucket(bucket)
.objects(deleteObject)
.build());
try {
return JsonUtils.objToJsonIgnoreNull(result);
} catch (Exception e) {
e.printStackTrace();
}
return null;
}
}
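The wiring of the MinioClient and MinioDfsServiceImpl beans inside gitegg-platform-dfs-minio is not shown in this article. If you want a single, statically configured MinIO instance (as opposed to the per-tenant factory approach in part II), a configuration class along these lines could do it. The class name MinioDfsConfiguration and the bean layout are assumptions; the MinioClient.builder() calls are the regular MinIO SDK API, the same one used by DfsMinioFactory later.
import io.minio.MinioClient;
import lombok.RequiredArgsConstructor;
import org.springframework.boot.context.properties.EnableConfigurationProperties;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

/**
 * Hypothetical configuration for the MinIO implementation: builds a MinioClient
 * from MinioDfsProperties and exposes MinioDfsServiceImpl as a bean.
 */
@Configuration
@RequiredArgsConstructor
@EnableConfigurationProperties(MinioDfsProperties.class)
public class MinioDfsConfiguration {

    private final MinioDfsProperties minioDfsProperties;

    @Bean
    public MinioClient minioClient() {
        // uploadUrl is the MinIO server endpoint, e.g. http://127.0.0.1:9000 (example value)
        return MinioClient.builder()
                .endpoint(minioDfsProperties.getUploadUrl())
                .credentials(minioDfsProperties.getAccessKey(), minioDfsProperties.getSecretKey())
                .build();
    }

    @Bean
    public MinioDfsServiceImpl minioDfsService(MinioClient minioClient) {
        return new MinioDfsServiceImpl(minioClient, minioDfsProperties);
    }
}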
3. In GitEgg-Platform, create the gitegg-platform-dfs-qiniu sub-module, and add QiNiuDfsServiceImpl and QiNiuDfsProperties to implement the IDfsBaseService upload/download interface.
@Data
@Component
@ConfigurationProperties(prefix = "dfs.qiniu")
public class QiNiuDfsProperties {
/**
* AccessKey
*/
private String accessKey;
/**
* SecretKey
*/
private String secretKey;
/**
 * Qiniu Cloud region (data center)
 */
private String region;
/**
 * Bucket (storage space)
 */
private String bucket;
/**
 * Access control: public or private
 */
private Integer accessControl;
/**
 * Upload server (domain) address
 */
private String uploadUrl;
/**
 * URL prefix for file access
 */
private String accessUrlPrefix;
/**
 * Upload directory prefix
 */
private String uploadDirPrefix;
}
@Slf4j
@AllArgsConstructor
public class QiNiuDfsServiceImpl implements IDfsBaseService {
private final Auth auth;
private final UploadManager uploadManager;
private final BucketManager bucketManager;
private final QiNiuDfsProperties qiNiuDfsProperties;
/**
*
* @param bucket
* @return
*/
@Override
public String uploadToken(String bucket) {
Auth auth = Auth.create(qiNiuDfsProperties.getAccessKey(), qiNiuDfsProperties.getSecretKey());
String upToken = auth.uploadToken(bucket);
return upToken;
}
/**
*
* @param bucket
* @param key
* @return
*/
@Override
public String uploadToken(String bucket, String key) {
Auth auth = Auth.create(qiNiuDfsProperties.getAccessKey(), qiNiuDfsProperties.getSecretKey());
String upToken = auth.uploadToken(bucket, key);
return upToken;
}
@Override
public void createBucket(String bucket) {
try {
String[] buckets = bucketManager.buckets();
if (!ArrayUtil.contains(buckets, bucket)) {
bucketManager.createBucket(bucket, qiNiuDfsProperties.getRegion());
}
} catch (QiniuException e) {
e.printStackTrace();
}
}
/**
*
* @param inputStream
* @param fileName
* @return
*/
@Override
public GitEggDfsFile uploadFile(InputStream inputStream, String fileName) {
return this.uploadFile(inputStream, qiNiuDfsProperties.getBucket(), fileName);
}
/**
*
* @param inputStream
* @param bucket
* @param fileName
* @return
*/
@Override
public GitEggDfsFile uploadFile(InputStream inputStream, String bucket, String fileName) {
GitEggDfsFile dfsFile = null;
// If no key is specified, Qiniu uses the hash of the file content as the file name by default
String key = null;
if (!StringUtils.isEmpty(fileName))
{
key = fileName;
}
try {
String upToken = auth.uploadToken(bucket);
Response response = uploadManager.put(inputStream, key, upToken, null, null);
// Parse the upload result
dfsFile = JsonUtils.jsonToPojo(response.bodyString(), GitEggDfsFile.class);
if (dfsFile != null) {
dfsFile.setBucket(bucket);
dfsFile.setBucketDomain(qiNiuDfsProperties.getUploadUrl());
dfsFile.setFileUrl(qiNiuDfsProperties.getAccessUrlPrefix());
dfsFile.setEncodedFileName(fileName);
}
} catch (QiniuException ex) {
Response r = ex.response;
log.error(r.toString());
try {
log.error(r.bodyString());
} catch (QiniuException ex2) {
log.error(ex2.toString());
}
} catch (Exception e) {
log.error(e.toString());
}
return dfsFile;
}
@Override
public String getFileUrl(String fileName) {
return this.getFileUrl(qiNiuDfsProperties.getBucket(), fileName);
}
@Override
public String getFileUrl(String bucket, String fileName) {
return this.getFileUrl(bucket, fileName, DfsConstants.DFS_FILE_DURATION, DfsConstants.DFS_FILE_DURATION_UNIT);
}
@Override
public String getFileUrl(String bucket, String fileName, int duration, TimeUnit unit) {
String finalUrl = null;
try {
Integer accessControl = qiNiuDfsProperties.getAccessControl();
if (accessControl != null && DfsConstants.DFS_FILE_PRIVATE == accessControl.intValue()) {
String encodedFileName = URLEncoder.encode(fileName, "utf-8").replace("+", "%20");
String publicUrl = String.format("%s/%s", qiNiuDfsProperties.getAccessUrlPrefix(), encodedFileName);
String accessKey = qiNiuDfsProperties.getAccessKey();
String secretKey = qiNiuDfsProperties.getSecretKey();
Auth auth = Auth.create(accessKey, secretKey);
long expireInSeconds = unit.toSeconds(duration);
finalUrl = auth.privateDownloadUrl(publicUrl, expireInSeconds);
}
else {
finalUrl = String.format("%s/%s", qiNiuDfsProperties.getAccessUrlPrefix(), fileName);
}
} catch (UnsupportedEncodingException e) {
e.printStackTrace();
}
return finalUrl;
}
@Override
public OutputStream getFileObject(String fileName, OutputStream outputStream) {
return this.getFileObject(qiNiuDfsProperties.getBucket(), fileName, outputStream);
}
@Override
public OutputStream getFileObject(String bucket, String fileName, OutputStream outputStream) {
URL url = null;
HttpURLConnection conn = null;
BufferedInputStream bis = null;
try {
String path = this.getFileUrl(bucket, fileName, DfsConstants.DFS_FILE_DURATION, DfsConstants.DFS_FILE_DURATION_UNIT);
url = new URL(path);
conn = (HttpURLConnection)url.openConnection();
// Set the connection timeout
conn.setConnectTimeout(DfsConstants.DOWNLOAD_TIMEOUT);
// Set a browser-like User-Agent to avoid 403 responses from anti-crawler filtering
conn.setRequestProperty("User-Agent", "Mozilla/4.0 (compatible; MSIE 5.0; Windows NT; DigExt)");
conn.connect();
// Copy the response stream to the output stream
bis = new BufferedInputStream(conn.getInputStream());
IOUtils.copy(bis, outputStream);
} catch (Exception e) {
log.error("Failed to read remote file: " + fileName, e);
}
finally {
if (conn != null) {
conn.disconnect();
}
if (bis != null) {
try {
bis.close();
} catch (IOException e) {
e.printStackTrace();
}
}
}
return outputStream;
}
/**
*
* @param fileName
* @return
*/
@Override
public String removeFile(String fileName) {
return this.removeFile( qiNiuDfsProperties.getBucket(), fileName);
}
/**
*
* @param bucket
* @param fileName
* @return
*/
@Override
public String removeFile(String bucket, String fileName) {
String resultStr = null;
try {
Response response = bucketManager.delete(bucket, fileName);
resultStr = JsonUtils.objToJson(response);
} catch (QiniuException e) {
Response r = e.response;
log.error(r.toString());
try {
log.error(r.bodyString());
} catch (QiniuException ex2) {
log.error(ex2.toString());
}
} catch (Exception e) {
log.error(e.toString());
}
return resultStr;
}
/**
*
* @param fileNames
* @return
*/
@Override
public String removeFiles(List<String> fileNames) {
return this.removeFiles(qiNiuDfsProperties.getBucket(), fileNames);
}
/**
*
* @param bucket
* @param fileNames
* @return
*/
@Override
public String removeFiles(String bucket, List<String> fileNames) {
String resultStr = null;
try {
if (!CollectionUtils.isEmpty(fileNames) && fileNames.size() > GitEggConstant.Number.THOUSAND)
{
throw new BusinessException("A single batch request may contain at most 1000 files");
}
BucketManager.BatchOperations batchOperations = new BucketManager.BatchOperations();
batchOperations.addDeleteOp(bucket, fileNames.toArray(new String[0]));
Response response = bucketManager.batch(batchOperations);
BatchStatus[] batchStatusList = response.jsonToObject(BatchStatus[].class);
resultStr = JsonUtils.objToJson(batchStatusList);
} catch (QiniuException ex) {
log.error(ex.response.toString());
} catch (Exception e) {
log.error(e.toString());
}
return resultStr;
}
}
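Likewise, gitegg-platform-dfs-qiniu needs the Auth, UploadManager and BucketManager collaborators wired somewhere. A hedged sketch of such a configuration is shown below; the class name and bean layout are assumptions, while the SDK calls mirror the ones used by DfsQiniuFactory in part II. Note the fully qualified @Configuration annotation to avoid clashing with Qiniu's own com.qiniu.storage.Configuration.
import com.qiniu.storage.BucketManager;
import com.qiniu.storage.Configuration;
import com.qiniu.storage.Region;
import com.qiniu.storage.UploadManager;
import com.qiniu.util.Auth;
import lombok.RequiredArgsConstructor;
import org.springframework.boot.context.properties.EnableConfigurationProperties;
import org.springframework.context.annotation.Bean;

/**
 * Hypothetical configuration for the Qiniu implementation: builds Auth, UploadManager
 * and BucketManager from QiNiuDfsProperties and exposes QiNiuDfsServiceImpl as a bean.
 */
@org.springframework.context.annotation.Configuration
@RequiredArgsConstructor
@EnableConfigurationProperties(QiNiuDfsProperties.class)
public class QiNiuDfsConfiguration {

    private final QiNiuDfsProperties qiNiuDfsProperties;

    @Bean
    public QiNiuDfsServiceImpl qiNiuDfsService() {
        Auth auth = Auth.create(qiNiuDfsProperties.getAccessKey(), qiNiuDfsProperties.getSecretKey());
        // autoRegion() lets the SDK resolve the region from the bucket
        Configuration cfg = new Configuration(Region.autoRegion());
        UploadManager uploadManager = new UploadManager(cfg);
        BucketManager bucketManager = new BucketManager(auth, cfg);
        return new QiNiuDfsServiceImpl(auth, uploadManager, bucketManager, qiNiuDfsProperties);
    }
}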
4. In GitEgg-Platform, create the gitegg-platform-dfs-starter sub-module, which aggregates all the file upload/download sub-modules so that business services can pull in every implementation with a single dependency.
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<parent>
<artifactId>GitEgg-Platform</artifactId>
<groupId>com.gitegg.platform</groupId>
<version>1.0-SNAPSHOT</version>
</parent>
<modelVersion>4.0.0</modelVersion>
<artifactId>gitegg-platform-dfs-starter</artifactId>
<name>${project.artifactId}</name>
<packaging>jar</packaging>
<dependencies>
<!-- gitegg distributed file storage extension - MinIO -->
<dependency>
<groupId>com.gitegg.platform</groupId>
<artifactId>gitegg-platform-dfs-minio</artifactId>
</dependency>
<!-- gitegg distributed file storage extension - Qiniu Cloud -->
<dependency>
<groupId>com.gitegg.platform</groupId>
<artifactId>gitegg-platform-dfs-qiniu</artifactId>
</dependency>
</dependencies>
</project>
5. Add the file-storage dependencies to gitegg-platform-bom
<!-- gitegg distributed file storage abstraction -->
<dependency>
<groupId>com.gitegg.platform</groupId>
<artifactId>gitegg-platform-dfs</artifactId>
<version>${gitegg.project.version}</version>
</dependency>
<!-- gitegg distributed file storage extension - MinIO -->
<dependency>
<groupId>com.gitegg.platform</groupId>
<artifactId>gitegg-platform-dfs-minio</artifactId>
<version>${gitegg.project.version}</version>
</dependency>
<!-- gitegg distributed file storage extension - Qiniu Cloud -->
<dependency>
<groupId>com.gitegg.platform</groupId>
<artifactId>gitegg-platform-dfs-qiniu</artifactId>
<version>${gitegg.project.version}</version>
</dependency>
<!-- gitegg distributed file storage extension - starter -->
<dependency>
<groupId>com.gitegg.platform</groupId>
<artifactId>gitegg-platform-dfs-starter</artifactId>
<version>${gitegg.project.version}</version>
</dependency>
<!-- minio文件存储服务 https://mvnrepository.com/artifact/io.minio/minio -->
<dependency>
<groupId>io.minio</groupId>
<artifactId>minio</artifactId>
<version>${dfs.minio.version}</version>
</dependency>
<!-- Qiniu Cloud object storage SDK -->
<dependency>
<groupId>com.qiniu</groupId>
<artifactId>qiniu-java-sdk</artifactId>
<version>${dfs.qiniu.version}</version>
</dependency>
II. Business Feature Implementation
The distributed file storage feature lives in the gitegg-service-extension module as a system extension. It is split into the following parts:
- File server configuration module
- File upload/download record module (only downloads of private files are recorded; publicly accessible files need no download record)
- Front-end upload/download integration
1. Create the file server configuration table to store the storage-service settings, define the table structure, and generate the CRUD code with the code generator.
CREATE TABLE `t_sys_dfs` (
`id` bigint(20) NOT NULL AUTO_INCREMENT COMMENT 'Primary key',
`tenant_id` bigint(20) NOT NULL DEFAULT 0 COMMENT 'Tenant id',
`dfs_type` bigint(20) NULL DEFAULT NULL COMMENT 'Distributed storage type',
`dfs_code` varchar(32) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL COMMENT 'Distributed storage code',
`access_url_prefix` varchar(255) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL COMMENT 'URL prefix for file access',
`upload_url` varchar(255) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL COMMENT 'Upload endpoint of the distributed storage',
`bucket` varchar(255) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL COMMENT 'Bucket name',
`app_id` varchar(255) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL COMMENT 'Application id',
`region` varchar(255) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL COMMENT 'Region',
`access_key` varchar(255) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL COMMENT 'accessKey',
`secret_key` varchar(255) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL COMMENT 'secretKey',
`dfs_default` tinyint(2) NOT NULL DEFAULT 0 COMMENT 'Default storage: 0 no, 1 yes',
`dfs_status` tinyint(2) NOT NULL DEFAULT 1 COMMENT 'Status: 0 disabled, 1 enabled',
`access_control` tinyint(2) NOT NULL DEFAULT 0 COMMENT 'Access control: 0 private, 1 public',
`comments` varchar(255) CHARACTER SET utf8 COLLATE utf8_general_ci NULL DEFAULT NULL COMMENT 'Remarks',
`create_time` datetime(0) NULL DEFAULT NULL COMMENT 'Creation time',
`creator` bigint(20) NULL DEFAULT NULL COMMENT 'Creator',
`update_time` datetime(0) NULL DEFAULT NULL COMMENT 'Update time',
`operator` bigint(20) NULL DEFAULT NULL COMMENT 'Updater',
`del_flag` tinyint(2) NULL DEFAULT 0 COMMENT 'Deleted flag: 1 deleted, 0 not deleted',
PRIMARY KEY (`id`) USING BTREE
) ENGINE = InnoDB AUTO_INCREMENT = 1 CHARACTER SET = utf8mb4 COLLATE = utf8mb4_general_ci COMMENT = 'Distributed storage configuration table' ROW_FORMAT = DYNAMIC;
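The generated code includes a DfsDTO that carries one row of t_sys_dfs and is consumed by the factory classes in the next step. The DTO itself is not listed in this article; a minimal sketch, with field names inferred from the table columns and the getters used below, could look like the following (treat it as an assumption, not the exact class).
import lombok.Data;
import java.io.Serializable;

/**
 * Assumed sketch of the DTO carrying one row of t_sys_dfs;
 * field names follow the getters used by DfsQiniuFactory / DfsMinioFactory.
 */
@Data
public class DfsDTO implements Serializable {
    /** Primary key, also used as the cache key in DfsFactory */
    private Long id;
    /** Storage type (data dictionary id, see DfsFactoryClassEnum) */
    private Long dfsType;
    /** Storage code selected by the caller */
    private String dfsCode;
    /** URL prefix for file access */
    private String accessUrlPrefix;
    /** Upload endpoint / server address */
    private String uploadUrl;
    /** Bucket name */
    private String bucket;
    /** Region */
    private String region;
    /** AccessKey */
    private String accessKey;
    /** SecretKey */
    private String secretKey;
    /** Access control: 0 private, 1 public */
    private Integer accessControl;
}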
2. Create the factory classes DfsQiniuFactory and DfsMinioFactory, which instantiate the matching IDfsBaseService implementation based on the storage the current user has selected.
/**
 * Factory for the Qiniu Cloud upload service implementation
 */
public class DfsQiniuFactory {
public static IDfsBaseService getDfsBaseService(DfsDTO dfsDTO) {
Auth auth = Auth.create(dfsDTO.getAccessKey(), dfsDTO.getSecretKey());
Configuration cfg = new Configuration(Region.autoRegion());
UploadManager uploadManager = new UploadManager(cfg);
BucketManager bucketManager = new BucketManager(auth, cfg);
QiNiuDfsProperties qiNiuDfsProperties = new QiNiuDfsProperties();
qiNiuDfsProperties.setAccessKey(dfsDTO.getAccessKey());
qiNiuDfsProperties.setSecretKey(dfsDTO.getSecretKey());
qiNiuDfsProperties.setRegion(dfsDTO.getRegion());
qiNiuDfsProperties.setBucket(dfsDTO.getBucket());
qiNiuDfsProperties.setUploadUrl(dfsDTO.getUploadUrl());
qiNiuDfsProperties.setAccessUrlPrefix(dfsDTO.getAccessUrlPrefix());
qiNiuDfsProperties.setAccessControl(dfsDTO.getAccessControl());
return new QiNiuDfsServiceImpl(auth, uploadManager, bucketManager, qiNiuDfsProperties);
}
}
/**
 * Factory for the MinIO upload service implementation
 */
public class DfsMinioFactory {
public static IDfsBaseService getDfsBaseService(DfsDTO dfsDTO) {
MinioClient minioClient =
MinioClient.builder()
.endpoint(dfsDTO.getUploadUrl())
.credentials(dfsDTO.getAccessKey(), dfsDTO.getSecretKey()).build();
MinioDfsProperties minioDfsProperties = new MinioDfsProperties();
minioDfsProperties.setAccessKey(dfsDTO.getAccessKey());
minioDfsProperties.setSecretKey(dfsDTO.getSecretKey());
minioDfsProperties.setRegion(dfsDTO.getRegion());
minioDfsProperties.setBucket(dfsDTO.getBucket());
minioDfsProperties.setUploadUrl(dfsDTO.getUploadUrl());
minioDfsProperties.setAccessUrlPrefix(dfsDTO.getAccessUrlPrefix());
minioDfsProperties.setAccessControl(dfsDTO.getAccessControl());
return new MinioDfsServiceImpl(minioClient, minioDfsProperties);
}
}
3. Create the DfsFactory class and register it with @Component so the container manages it (singleton by default); it creates and caches the upload/download implementation that matches the system/tenant configuration.
/**
 * DfsFactory: creates and caches the upload/download implementation that matches the system/tenant configuration
 */
@Component
public class DfsFactory {
/**
 * Cache of IDfsBaseService instances, keyed by dfsId
 */
private final static Map<Long, IDfsBaseService> dfsBaseServiceMap = new ConcurrentHashMap<>();
/**
 * Get the IDfsBaseService for the given configuration
 *
 * @param dfsDTO distributed storage configuration
 * @return the matching IDfsBaseService
 */
public IDfsBaseService getDfsBaseService(DfsDTO dfsDTO) {
// Look up the storage service implementation by dfsId; dfsId is unique, and each tenant has its own dfsId
Long dfsId = dfsDTO.getId();
IDfsBaseService dfsBaseService = dfsBaseServiceMap.get(dfsId);
if (null == dfsBaseService) {
Class<?> cls = null;
try {
cls = Class.forName(DfsFactoryClassEnum.getValue(String.valueOf(dfsDTO.getDfsType())));
Method staticMethod = cls.getDeclaredMethod(DfsConstants.DFS_SERVICE_FUNCTION, DfsDTO.class);
dfsBaseService = (IDfsBaseService) staticMethod.invoke(cls, dfsDTO);
dfsBaseServiceMap.put(dfsId, dfsBaseService);
} catch (ClassNotFoundException | NoSuchMethodException | IllegalAccessException | InvocationTargetException e) {
e.printStackTrace();
}
}
return dfsBaseService;
}
}
4. Create the enum DfsFactoryClassEnum, used by DfsFactory to instantiate the factory class for the configured storage provider via reflection.
/**
 * @ClassName: DfsFactoryClassEnum
 * @Description: Enum of distributed storage factory classes. The dfs table stores the data dictionary id, so the dictionary id is used directly as the code here to save one extra database lookup.
 * @author GitEgg
 * @date 2020-09-19 23:49:45
 */
public enum DfsFactoryClassEnum {
/**
 * MinIO
 */
MINIO("2", "com.gitegg.service.extension.dfs.factory.DfsMinioFactory"),
/**
 * Qiniu Cloud Kodo (QINIUYUN_KODO)
 */
QI_NIU("3", "com.gitegg.service.extension.dfs.factory.DfsQiniuFactory"),
/**
 * Alibaba Cloud OSS (ALIYUN_OSS)
 */
ALI_YUN("4", "com.gitegg.service.extension.dfs.factory.DfsAliyunFactory"),
/**
 * Tencent Cloud COS (TENCENT_COS)
 */
TENCENT("5", "com.gitegg.service.extension.dfs.factory.DfsTencentFactory");
public String code;
public String value;
DfsFactoryClassEnum(String code, String value) {
this.code = code;
this.value = value;
}
public static String getValue(String code) {
DfsFactoryClassEnum[] dfsFactoryClassEnums = values();
for (DfsFactoryClassEnum dfsFactoryClassEnum : dfsFactoryClassEnums) {
if (dfsFactoryClassEnum.getCode().equals(code)) {
return dfsFactoryClassEnum.getValue();
}
}
return null;
}
public String getCode() {
return code;
}
public void setCode(String code) {
this.code = code;
}
public String getValue() {
return value;
}
public void setValue(String value) {
this.value = value;
}
}
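Several DfsConstants are referenced by the code above and below (DFS_SERVICE_FUNCTION, DFS_FILE_DURATION, DFS_FILE_DURATION_UNIT, DFS_FILE_PRIVATE, DOWNLOAD_TIMEOUT) but never listed in this article. The sketch below shows plausible definitions: the constant names come from the code in this article, while the concrete values are illustrative assumptions.
import java.util.concurrent.TimeUnit;

/**
 * Assumed sketch of the DfsConstants referenced by the DFS implementations.
 * Names are taken from the code in this article; values are illustrative defaults.
 */
public final class DfsConstants {
    /** Name of the static factory method looked up reflectively by DfsFactory */
    public static final String DFS_SERVICE_FUNCTION = "getDfsBaseService";
    /** Default validity of a presigned/private download URL (illustrative value) */
    public static final int DFS_FILE_DURATION = 30;
    /** Unit of DFS_FILE_DURATION (illustrative value) */
    public static final TimeUnit DFS_FILE_DURATION_UNIT = TimeUnit.MINUTES;
    /** access_control value meaning "private" (t_sys_dfs.access_control: 0 private, 1 public) */
    public static final int DFS_FILE_PRIVATE = 0;
    /** HTTP connect timeout in milliseconds used when streaming a remote file (illustrative value) */
    public static final int DOWNLOAD_TIMEOUT = 10 * 1000;

    private DfsConstants() {
    }
}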
5. Create the IGitEggDfsService interface, which defines the upload/download operations needed by the business layer.
/**
 * Business-level file upload/download interface
 *
 */
public interface IGitEggDfsService {
/**
 * Get an upload token
 * @param dfsCode
 * @return
 */
String uploadToken(String dfsCode);
/**
 * Upload a file
 *
 * @param dfsCode
 * @param file
 * @return
 */
GitEggDfsFile uploadFile(String dfsCode, MultipartFile file);
/**
 * Get the file access URL
 * @param dfsCode
 * @param fileName
 * @return
 */
String getFileUrl(String dfsCode, String fileName);
/**
 * Download a file
 * @param dfsCode
 * @param fileName
 * @param outputStream
 * @return
 */
OutputStream downloadFile(String dfsCode, String fileName, OutputStream outputStream);
}
6. Create GitEggDfsServiceImpl, the implementation of IGitEggDfsService, which provides the business upload/download operations.
@Slf4j
@Service
@RequiredArgsConstructor(onConstructor_ = @Autowired)
public class GitEggDfsServiceImpl implements IGitEggDfsService {
private final DfsFactory dfsFactory;
private final IDfsService dfsService;
private final IDfsFileService dfsFileService;
@Override
public String uploadToken(String dfsCode) {
QueryDfsDTO queryDfsDTO = new QueryDfsDTO();
queryDfsDTO.setDfsCode(dfsCode);
DfsDTO dfsDTO = dfsService.queryDfs(queryDfsDTO);
IDfsBaseService dfsBaseService = dfsFactory.getDfsBaseService(dfsDTO);
String token = dfsBaseService.uploadToken(dfsDTO.getBucket());
return token;
}
@Override
public GitEggDfsFile uploadFile(String dfsCode, MultipartFile file) {
QueryDfsDTO queryDfsDTO = new QueryDfsDTO();
DfsDTO dfsDTO = null;
// If no storage is selected at upload time, fall back to the default storage configuration
if(StringUtils.isEmpty(dfsCode)) {
queryDfsDTO.setDfsDefault(GitEggConstant.ENABLE);
}
else {
queryDfsDTO.setDfsCode(dfsCode);
}
GitEggDfsFile gitEggDfsFile = null;
DfsFile dfsFile = new DfsFile();
try {
dfsDTO = dfsService.queryDfs(queryDfsDTO);
IDfsBaseService dfsBaseService = dfsFactory.getDfsBaseService(dfsDTO);
// Original file name
String originalName = file.getOriginalFilename();
// File extension
String extension = FilenameUtils.getExtension(originalName);
String hash = Etag.stream(file.getInputStream(), file.getSize());
String fileName = hash + "." + extension;
// Save the upload record
dfsFile.setDfsId(dfsDTO.getId());
dfsFile.setOriginalName(originalName);
dfsFile.setFileName(fileName);
dfsFile.setFileExtension(extension);
dfsFile.setFileSize(file.getSize());
dfsFile.setFileStatus(GitEggConstant.ENABLE);
// Perform the actual upload
gitEggDfsFile = dfsBaseService.uploadFile(file.getInputStream(), fileName);
if (gitEggDfsFile != null)
{
gitEggDfsFile.setFileName(originalName);
gitEggDfsFile.setKey(hash);
gitEggDfsFile.setHash(hash);
gitEggDfsFile.setFileSize(file.getSize());
dfsFile.setAccessUrl(gitEggDfsFile.getFileUrl());
}
} catch (IOException e) {
log.error("文件上传失败:{}", e);
dfsFile.setFileStatus(GitEggConstant.DISABLE);
dfsFile.setComments(String.valueOf(e));
} finally {
dfsFileService.save(dfsFile);
}
return gitEggDfsFile;
}
@Override
public String getFileUrl(String dfsCode, String fileName) {
String fileUrl = null;
QueryDfsDTO queryDfsDTO = new QueryDfsDTO();
DfsDTO dfsDTO = null;
// If no storage is selected, fall back to the default storage configuration
if(StringUtils.isEmpty(dfsCode)) {
queryDfsDTO.setDfsDefault(GitEggConstant.ENABLE);
}
else {
queryDfsDTO.setDfsCode(dfsCode);
}
try {
dfsDTO = dfsService.queryDfs(queryDfsDTO);
IDfsBaseService dfsBaseService = dfsFactory.getDfsBaseService(dfsDTO);
fileUrl = dfsBaseService.getFileUrl(fileName);
}
catch (Exception e)
{
log.error("Failed to get the file access URL", e);
}
return fileUrl;
}
@Override
public OutputStream downloadFile(String dfsCode, String fileName, OutputStream outputStream) {
QueryDfsDTO queryDfsDTO = new QueryDfsDTO();
DfsDTO dfsDTO = null;
// If no storage is selected, fall back to the default storage configuration
if(StringUtils.isEmpty(dfsCode)) {
queryDfsDTO.setDfsDefault(GitEggConstant.ENABLE);
}
else {
queryDfsDTO.setDfsCode(dfsCode);
}
try {
dfsDTO = dfsService.queryDfs(queryDfsDTO);
IDfsBaseService dfsBaseService = dfsFactory.getDfsBaseService(dfsDTO);
outputStream = dfsBaseService.getFileObject(fileName, outputStream);
}
catch (Exception e)
{
log.error("File download failed", e);
}
return outputStream;
}
}
7. Create GitEggDfsController, the common controller for file upload and download.
@RestController
@RequestMapping("/extension")
@RequiredArgsConstructor(onConstructor_ = @Autowired)
@Api(value = "GitEggDfsController|文件上传前端控制器")
@RefreshScope
public class GitEggDfsController {
private final IGitEggDfsService gitEggDfsService;
/**
* Upload files
* @param uploadFile
* @param dfsCode
* @return
*/
@PostMapping("/upload/file")
public Result<?> uploadFile(@RequestParam("uploadFile") MultipartFile[] uploadFile, String dfsCode) {
GitEggDfsFile gitEggDfsFile = null;
if (ArrayUtils.isNotEmpty(uploadFile))
{
for (MultipartFile file : uploadFile) {
gitEggDfsFile = gitEggDfsService.uploadFile(dfsCode, file);
}
}
return Result.data(gitEggDfsFile);
}
/**
* Get the file access URL by file name
*/
@GetMapping("/get/file/url")
@ApiOperation(value = "查询分布式存储配置表详情")
public Result<?> query(String dfsCode, String fileName) {
String fileUrl = gitEggDfsService.getFileUrl(dfsCode, fileName);
return Result.data(fileUrl);
}
/**
* Download a file as a stream by file name
*/
@GetMapping("/get/file/download")
public void downloadFile(HttpServletResponse response, HttpServletRequest request, String dfsCode, String fileName) {
if (fileName != null) {
response.setCharacterEncoding(request.getCharacterEncoding());
response.setContentType("application/octet-stream");
response.addHeader("Content-Disposition", "attachment;fileName=" + fileName);
OutputStream os = null;
try {
os = response.getOutputStream();
os = gitEggDfsService.downloadFile(dfsCode, fileName, os);
os.flush();
} catch (Exception e) {
e.printStackTrace();
} finally {
if (os != null) {
try {
os.close();
} catch (IOException e) {
e.printStackTrace();
}
}
}
}
}
}
8. Front-end upload and download. Note: when requesting a file stream with axios, responseType: 'blob' must be set.
- Upload
handleUploadTest (row) {
this.fileList = []
this.uploading = false
this.uploadForm.dfsType = row.dfsType
this.uploadForm.dfsCode = row.dfsCode
this.uploadForm.uploadFile = null
this.dialogTestUploadVisible = true
},
handleRemove (file) {
const index = this.fileList.indexOf(file)
const newFileList = this.fileList.slice()
newFileList.splice(index, 1)
this.fileList = newFileList
},
beforeUpload (file) {
this.fileList = [...this.fileList, file]
return false
},
handleUpload () {
this.uploadedFileName = ''
const { fileList } = this
const formData = new FormData()
formData.append('dfsCode', this.uploadForm.dfsCode)
fileList.forEach(file => {
formData.append('uploadFile', file)
})
this.uploading = true
dfsUpload(formData).then(() => {
this.fileList = []
this.uploading = false
this.$message.success('Upload succeeded')
}).catch(err => {
console.log('uploading', err)
this.$message.error('Upload failed')
})
}
- Download
getFileUrl (row) {
this.listLoading = true
this.fileDownload.dfsCode = row.dfsCode
this.fileDownload.fileName = row.fileName
dfsGetFileUrl(this.fileDownload).then(response => {
window.open(response.data)
this.listLoading = false
})
},
downLoadFile (row) {
this.listLoading = true
this.fileDownload.dfsCode = row.dfsCode
this.fileDownload.fileName = row.fileName
this.fileDownload.responseType = 'blob'
dfsDownloadFileUrl(this.fileDownload).then(response => {
const blob = new Blob([response.data])
var fileName = row.originalName
const elink = document.createElement('a')
elink.download = fileName
elink.style.display = 'none'
elink.href = URL.createObjectURL(blob)
document.body.appendChild(elink)
elink.click()
URL.revokeObjectURL(elink.href)
document.body.removeChild(elink)
this.listLoading = false
})
}
- Front-end API
import request from '@/utils/request'
export function dfsUpload (formData) {
return request({
url: '/gitegg-service-extension/extension/upload/file',
method: 'post',
data: formData
})
}
export function dfsGetFileUrl (query) {
return request({
url: '/gitegg-service-extension/extension/get/file/url',
method: 'get',
params: query
})
}
export function dfsDownloadFileUrl (query) {
return request({
url: '/gitegg-service-extension/extension/get/file/download',
method: 'get',
responseType: 'blob',
params: query
})
}
III. Feature Test UI
1. Batch upload
(Screenshot: batch upload dialog)
2. File-stream download and file URL retrieval
(Screenshot: file-stream download and file URL retrieval)
Notes
1. To avoid duplicate file names, stored names are uniformly derived from Qiniu's hash (Etag) algorithm, which also prevents storing duplicate content; the original name shown in the UI is kept in a separate database column. Every upload leaves a record.
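For reference, a minimal helper that derives the stored name the same way GitEggDfsServiceImpl does (Qiniu Etag of the content plus the original extension) might look like this; the class name DfsFileNameUtils is made up for illustration, but Etag.stream and FilenameUtils.getExtension are the same calls used above.
import com.qiniu.util.Etag;
import org.apache.commons.io.FilenameUtils;
import java.io.IOException;
import java.io.InputStream;

public final class DfsFileNameUtils {

    /**
     * Builds the stored file name from the content hash plus the original extension,
     * so identical content always maps to the same object key.
     */
    public static String hashFileName(InputStream in, long size, String originalName) throws IOException {
        String hash = Etag.stream(in, size);                          // Qiniu Etag of the content
        String extension = FilenameUtils.getExtension(originalName);  // keep the original extension
        return hash + "." + extension;
    }

    private DfsFileNameUtils() {
    }
}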
GitEgg-Cloud is an enterprise-grade microservice application development framework built on Spring Cloud. Source code:
Gitee: https://gitee.com/wmz1930/GitEgg
GitHub: https://github.com/wmz1930/GitEgg
If you find the project useful, a Star would be much appreciated.