Using rclone
2023-05-09 · 酱油王0901
Installation
- Download the official release binary package, taking rclone v1.60.0 as an example:
wget https://github.com/rclone/rclone/releases/download/v1.60.0/rclone-v1.60.0-linux-amd64.zip
- Unzip and install:
➜ /home/deeproute/filestash ☞ unzip rclone-v1.60.0-linux-amd64.zip
Archive: rclone-v1.60.0-linux-amd64.zip
creating: rclone-v1.60.0-linux-amd64/
inflating: rclone-v1.60.0-linux-amd64/git-log.txt
inflating: rclone-v1.60.0-linux-amd64/README.txt
inflating: rclone-v1.60.0-linux-amd64/README.html
inflating: rclone-v1.60.0-linux-amd64/rclone
inflating: rclone-v1.60.0-linux-amd64/rclone.1
The extracted rclone-v1.60.0-linux-amd64/rclone binary can be used directly.
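Optionally, copy the binary onto your PATH so it can be invoked from anywhere (a minimal sketch; /usr/local/bin as the install location is an assumption):
# copy the extracted binary to a directory on PATH (/usr/local/bin is assumed here)
cp rclone-v1.60.0-linux-amd64/rclone /usr/local/bin/rclone
chmod 755 /usr/local/bin/rclone
# confirm the binary works
rclone version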
Usage
- Config file: run rclone config file to see where the configuration file used by rclone is located.
🍺 /home/deeproute/filestash ☞ rclone config file
Configuration file is stored at:
/root/.config/rclone/rclone.conf
The configuration file can also be specified explicitly with --config.
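For instance, to point rclone at a config file in a non-default location (the path below is only illustrative):
# use an alternative config file instead of /root/.config/rclone/rclone.conf
rclone --config /etc/rclone/rclone.conf lsd huarui-cold: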
- Configuration: rclone.conf uses the [INI](https://en.wikipedia.org/wiki/INI_file) format, as shown below:
[huarui-cold]
type = s3
provider = Other
access_key_id = <ak>
secret_access_key = <sk>
endpoint = http://s3-cold.deeproute.cn:80
[smd]
type = s3
provider = Other
access_key_id = <ak>
secret_access_key = <sk>
endpoint = http://10.9.8.72:80
Adjust access_key_id, secret_access_key, and endpoint to match your storage; multiple S3 remotes can be defined in the same file (a command-line alternative is sketched below).
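Besides editing rclone.conf by hand, a remote can be added non-interactively with rclone config create (a sketch; the remote name new-s3 and the credential values are placeholders, and the key=value argument form may differ between rclone versions, see rclone config create --help):
# create an S3 remote named "new-s3" (name, keys, and endpoint are placeholders)
rclone config create new-s3 s3 provider=Other access_key_id=AKxxxx secret_access_key=SKxxxx endpoint=http://10.9.8.72:80
# inspect the resulting configuration
rclone config show new-s3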
- Access
  - lsd: List all directories/containers/buckets in the path.
🍺 /root ☞ rclone lsd huarui-cold:
-1 2022-11-07 11:20:42 -1 bucket
-1 2022-11-11 16:03:35 -1 bucket2
-1 2022-11-07 14:45:03 -1 simulation-platform-dev-backup
-1 2022-11-07 14:45:19 -1 simulation-platform-prod-backup
-1 2022-11-07 11:21:35 -1 simulation-platform-staging
-1 2022-11-07 16:46:15 -1 simulation-platform-staging-backup
  - ls: List the objects in the path with size and path.
🍺 /root ☞ rclone ls huarui-cold:simulation-platform-staging-backup
11 wonderful-rhino-zcc4t/wonderful-rhino-zcc4t/main.log
- Data backup: just specify a source and a destination; each side can be a local path, a mount point, or an S3 remote (a cron-style wrapper script is sketched after the examples below).
  - Local copy to NFS:
rclone copy /home/deeproute/external/ /mnt/nfs --log-level DEBUG --log-file /var/log/backup.log --stats-log-level DEBUG --ignore-times --progress --checkers 32 --transfers 32
  - The copied data can then be verified:
rclone check /home/deeproute/external/ /mnt/nfs --log-level DEBUG --log-file /var/log/backup.log --stats-log-level DEBUG --checksum
  - Local copy to S3:
rclone copy /home/deeproute/external/ huarui-cold:bucket --log-level DEBUG --log-file /var/log/rclone.log --checkers 128 --transfers 64
  - S3 to S3:
rclone copy smd:test-bucket huarui-cold:bucket2 --log-level DEBUG --log-file /var/log/rclone.log --checkers 128 --transfers 64
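For recurring backups, the copy and check commands can be wrapped in a small script and scheduled with cron (a minimal sketch; the paths, remote name, and log location are assumptions):
#!/usr/bin/env bash
# nightly backup sketch: local directory -> S3 bucket (paths and remote name are placeholders)
set -euo pipefail

SRC=/home/deeproute/external/
DST=huarui-cold:bucket
LOG=/var/log/rclone-backup.log

# copy new or changed files, then verify checksums
rclone copy "$SRC" "$DST" --checkers 128 --transfers 64 --log-level INFO --log-file "$LOG"
rclone check "$SRC" "$DST" --checksum --log-level INFO --log-file "$LOG"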
rclone flags
--config string Config file (default "/root/.config/rclone/rclone.conf")
--s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5Mi)
--s3-copy-cutoff SizeSuffix Cutoff for switching to multipart copy (default 4.656Gi)
--s3-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200Mi)
--size-only Skip based on size only, not mod-time or checksum
--bwlimit BwTimetable Bandwidth limit in KiB/s, or use suffix B|K|M|G|T|P or a full timetable
--bwlimit-file BwTimetable Bandwidth limit per file in KiB/s, or use suffix B|K|M|G|T|P or a full timetable
--checkers int Number of checkers to run in parallel (default 8)
-c, --checksum Skip based on checksum (if available) & size, not mod-time & size
-P, --progress Show progress during transfer
--transfers int Number of file transfers to run in parallel (default 4)
--log-file string Log everything to this file
--log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--dump DumpFlags List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
--retries int Retry operations this many times if they fail (default 3)
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping use mod-time or checksum
-I, --ignore-times Don't skip files that match size and time - transfer all files
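As an example of combining these flags, the following invocation (an illustrative sketch, not taken from a real deployment) throttles bandwidth on a timetable and skips files whose sizes already match:
# throttle to 10 MiB/s during working hours, unlimited after 19:00
rclone copy /home/deeproute/external/ huarui-cold:bucket \
    --bwlimit "08:00,10M 19:00,off" \
    --size-only \
    --retries 5 \
    --progress \
    --log-level INFO --log-file /var/log/rclone.log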