How to Download SRA Data on Windows and Run the DADA2 Pipeline

2020-07-02  Dayueban

Without access to a server, I had to make do with the Windows cmd command line to batch-download some data. Downloading high-throughput sequencing data (such as SRA files) on Windows is a genuinely poor experience, but with no alternative I pushed ahead.

1. Download the sratoolkit software

1) Software location: https://trace.ncbi.nlm.nih.gov/Traces/sra/sra.cgi?view=software

Figure 1.1.1 Select the Windows version indicated by the arrow

2) Software installation

Figures 1.2.1–1.2.5 (installation step screenshots)

3) Check that the software works properly

To verify the installation, open cmd and run one of the bundled tools with its --version flag (e.g., prefetch --version or fastq-dump --version); a printed version number means the toolkit is working.

Figure 1.3.2 (version check output)

2. Download test data

If you don't have your own data (16S rRNA gene sequencing data), you can download data that others have deposited in the major public databases; here NCBI serves as the example.

  1. Open the NCBI website https://www.ncbi.nlm.nih.gov/, enter a keyword such as 16S rRNA sequencing, select the SRA database, and search.

    Figure 2.1.1
  2. Select the runs you want to download; the accession list (the SRR_Acc_List.txt file) can be exported from the SRA Run Selector and fed to prefetch with --option-file, as in the command below.


    Figure 2.2.1
    Figure 2.2.2
F:\>bio-tools\sratoolkit\sratoolkit.2.10.7-win64\bin\prefetch.exe --option-file C:\Users\jnzd_\Desktop\SRR_Acc_List.txt

The download log confirms the downloads succeeded:

2020-06-28T00:49:08 prefetch.2.10.7: 1) Downloading 'SRR12073641'...
2020-06-28T00:49:08 prefetch.2.10.7:  Downloading via HTTPS...
2020-06-28T00:54:27 prefetch.2.10.7:  HTTPS download succeed
2020-06-28T00:54:27 prefetch.2.10.7:   verifying 'SRR12073641'...
2020-06-28T00:54:27 prefetch.2.10.7:  'SRR12073641' is valid
2020-06-28T00:54:27 prefetch.2.10.7: 1) 'SRR12073641' was downloaded successfully
2020-06-28T00:54:28 prefetch.2.10.7: 'SRR12073641' has 0 unresolved dependencies

(SRR12073642, SRR12073643 and SRR12073644 were downloaded and verified the same way.)

With the .sra files downloaded, convert them to fastq with fastq-dump, looping over the four accessions in cmd:

for /l %i in (41,1,44) do (fastq-dump.exe --split-files --gzip D:\R\R-exercise\ncbi-data\SRR120736%i\SRR120736%i.sra -O D:\R\R-exercise\ncbi-data\output)

# I originally wanted to use --split-3, but it kept throwing errors on Windows for reasons I never figured out; --split-files works fine. -O specifies the output directory for the converted files.
Figure 2.2.3 The converted data, with filenames ending in .fastq.gz

3. Run DADA2 in R

#############################################
##  FGFP sample inference from amplicon data 
##  by Raul Tito & Rodrigo Bacigalupe
##  date: March 28th 2018
#############################################
### Pipeline created from https://benjjneb.github.io/dada2/tutorial.html
### More information available at https://benjjneb.github.io/dada2/index.html

### This pipeline requires demultiplexed files: Two fastq files per sample (*R1 and *R2), they can be *.gz

### Load required modules
# module load R/3.4.2   ### very important to use this version

### Load necessary libraries and set seed
# Use the Tsinghua CRAN mirror to speed up package downloads for users in China
site="https://mirrors.tuna.tsinghua.edu.cn/CRAN"
# Install BiocManager (the Bioconductor installer) if it is not already present
if (!requireNamespace("BiocManager", quietly = TRUE))
 install.packages("BiocManager", repos=site)
# Load the installer and install dada2
library(BiocManager)
BiocManager::install("dada2")

library(dada2)
packageVersion("dada2")
set.seed(12345)

### Every new MiSeq or HiSeq run is quality checked for number of reads and presence of unused barcodes to estimate 
### levels of carryover from previous runs or contamination events. As part of this initial analysis demultiplexed files 
### per each sample are generated (using LotuS).

### Set directory and files
#' Datasets used were downloaded from: https://trace.ncbi.nlm.nih.gov/Traces/study/?acc=SRP268421&o=acc_s%3Aa
path <- "D:\\R\\R-exercise\\ncbi-data\\output" # CHANGE to the directory containing your demultiplexed R1 and R2 fastq files
fns <- list.files(path) # list all files in the directory
#fns

####################
### load your data
####################
fastqs <- fns[grepl(".fastq.gz$", fns)]  # grepl returns TRUE/FALSE for each element, whereas grep returns the indices of the matches
fastqs <- sort(fastqs) # Sort ensures forward/reverse reads are in same order
### make sure that R1 is for forward read and R2 for reverse

fnFs <- fastqs[grepl("_1.fastq.gz", fastqs)] ## Just the forward read files
fnRs <- fastqs[grepl("_2.fastq.gz", fastqs)] ## Just the reverse read files
## Get sample names from the first part of the forward read filenames
sample.names <- sapply(strsplit(fnFs, "_1.fastq.gz"), `[`, 1) ## fastq-dump --split-files names mates _1/_2; check whether your files use _1 or another suffix

## Fully specify the path for the fnFs and fnRs
fnFs <- file.path(path, fnFs)
fnRs <- file.path(path, fnRs)
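
A quick sanity check worth adding here (my addition, not in the original script): confirm the forward and reverse files pair up one-to-one, and see the grep/grepl difference from the comment above in action.

# Added sanity check: every forward file should have a matching reverse file
if (length(fnFs) != length(fnRs)) stop("forward and reverse file counts differ")
head(sample.names)
# grep vs grepl on the same input:
grep("_1.fastq.gz", fastqs)   # integer indices of the matching files
grepl("_1.fastq.gz", fastqs)  # logical vector, one TRUE/FALSE per file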
###########################################
## Examine quality profiles of F & R reads
###########################################
pdf("plotQualityProfile.pdf", onefile=T)
plotQualityProfile(fnFs[1:2]) ## remove 20 plus 10 (primers and first odd bases)
plotQualityProfile(fnRs[1:2]) ## remove 20 plus 10 (primers and first odd bases)
dev.off()
Figure 3.1 Quality profiles of the raw forward and reverse reads
- Filter and truncate the data according to the quality profiles
##################################
## Perform filtering and trimming
##################################
filt_path <- file.path(path, "filtered") # Place filtered files in filtered/subdirectory
filtFs <- file.path(filt_path, paste0(sample.names, "_F_filt.fastq.gz"))
filtRs <- file.path(filt_path, paste0(sample.names, "_R_filt.fastq.gz"))

out <- filterAndTrim(fnFs, filtFs, fnRs, filtRs, truncLen=c(130,200), ## Different settings tried; these work well for the current MiSeq/HiSeq primer constructs. truncLen truncates reads at these lengths and discards reads that are shorter.
                     trimLeft=c(30, 30), # the first ~30 bases of each read (primers plus low-quality leading cycles) are trimmed; adjust to your own primer design
                     maxN=0, maxEE=c(2,2), truncQ=11, rm.phix=TRUE,
                     compress=TRUE, multithread=TRUE) # note: the official tutorial advises multithread=FALSE on Windows for this step
head(out)
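
Before plotting, a quick look at what fraction of reads survived the filter can catch overly aggressive settings early; this check is my addition, using the reads.in/reads.out columns that filterAndTrim returns.

# Added check: per-sample fraction of reads passing the filter
round(out[, "reads.out"] / out[, "reads.in"], 3)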

## Examine quality profiles of filtered reads
pdf("plotQualityProfile.filt.pdf", onefile=T)
plotQualityProfile(filtFs[1:2])
plotQualityProfile(filtRs[1:2])
dev.off()
Figure 3.2 Quality profiles of the filtered reads
## Learn the Error Rates
## DADA2 uses a parametric error model, on the premise that every amplicon sequencing run has its own error rates. learnErrors alternates between estimating the error rates and inferring the sample composition until the two converge on a jointly consistent solution. As with many machine-learning methods, the algorithm starts from an initial guess, namely the maximum possible error rates in the data: only the most abundant sequence is assumed correct and everything else an error.
#########################
## Learn forward error rates
errF <- learnErrors(filtFs, nreads=1e6, multithread=TRUE) ## minimum number of reads used to learn the error model
## Learn reverse error rates
errR <- learnErrors(filtRs, nreads=1e6, multithread=TRUE) ## minimum number of reads used to learn the error model
## Sample inference and merger of paired-end reads
mergers <- vector("list", length(sample.names))
names(mergers) <- sample.names

## Plot estimated error as sanity check 
pdf("plotErrors_F.pdf", onefile=T)
plotErrors(errF, nominalQ=TRUE)
dev.off()

pdf("plotErrors_R.pdf", onefile=T)
plotErrors(errR, nominalQ=TRUE)
dev.off()
Figure 3.3 Estimated error-rate plots

Similar to the dereplication step in USEARCH: all identical reads are collapsed into a single unique sequence, with its abundance recorded.

#########################
## Perform dereplication
#########################
## Dereplicate the filtered fastq files
derepRs <- derepFastq(filtRs, verbose=TRUE)
derepFs <- derepFastq(filtFs, verbose=TRUE)

# Name the derep-class objects by the sample names
names(derepFs) <- sample.names
names(derepRs) <- sample.names


####################
## Sample Inference
####################
## Apply the core sequence-variant inference algorithm to the dereplicated data
## Infer the sequence variants in each sample
## Further denoising based on the learned error model
dadaFs <- dada(derepFs, err=errF, multithread=TRUE)
dadaRs <- dada(derepRs, err=errR, multithread=TRUE)

## Inspect the dada-class object returned by dada
dadaFs[[1]]
# 473 sequence variants were inferred from 5288 input unique sequences

Merging criteria: by default, a merged sequence is only output when the forward and reverse reads overlap by at least 12 bases and are identical across the entire overlap region (a parameter sketch follows the code below).

## Merge the denoised forward and reverse reads
mergers <- mergePairs(dadaFs, derepFs, dadaRs, derepRs, verbose=TRUE)
## Inspect the merger data.frame from the first sample
head(mergers[[1]])

The mergers object is a list of R data.frames, one per sample, each containing the merged sequences and their abundances; read pairs that failed to merge are removed.
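
The overlap criteria can be tuned if your amplicon design needs it. A sketch with the relevant mergePairs arguments written out explicitly (the values shown are the documented defaults, so this call is equivalent to the one above):

# Same call with the merge criteria spelled out:
# minOverlap = 12 bases of overlap required, maxMismatch = 0 mismatches allowed
mergers <- mergePairs(dadaFs, derepFs, dadaRs, derepRs,
                      minOverlap = 12, maxMismatch = 0, verbose = TRUE)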

Next, build the amplicon sequence variant (ASV) table, the analogue of a traditional OTU table, using the makeSequenceTable function.

############################
## Construct sequence table 
############################
seqtab <- makeSequenceTable(mergers)
## Get dimensions
dim(seqtab)
# [1]    4 1237  -> 4 samples, 1237 ASVs

## Inspect distribution of sequence lengths
table(nchar(getSequences(seqtab)))
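
If the length table shows sequences far outside the expected amplicon size, they may come from non-specific priming and can be removed, as in the official tutorial. A sketch; the 240:260 window is a placeholder to be set from your own length table:

# Optional: keep only ASVs in the expected length range (adjust 240:260 to your data)
seqtab2 <- seqtab[, nchar(colnames(seqtab)) %in% 240:260]
dim(seqtab2)
# the steps below continue with the unfiltered seqtab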

The core dada denoising step corrects sequencing errors, but chimeras remain. Fortunately, the accuracy of denoised sequences makes chimera identification simpler than it is with fuzzy OTUs.

###################
## Remove chimeras
###################
seqtab.nochim <- removeBimeraDenovo(seqtab, method="consensus", multithread=TRUE, verbose=TRUE)
dim(seqtab.nochim)

sum(seqtab.nochim)/sum(seqtab) # overall fraction of reads kept after chimera removal
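
The overall ratio can hide a single bad sample, so a per-sample version (my addition) is also worth a look:

# Added: per-sample fraction of reads retained after chimera removal
rowSums(seqtab.nochim) / rowSums(seqtab)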

After all the filtering steps above, let's track each read's arduous journey from raw input to the final usable set.

####################################
## Track reads through the pipeline
####################################
getN <- function(x) sum(getUniques(x))
track <- cbind(out, sapply(dadaFs, getN), sapply(mergers, getN), rowSums(seqtab), rowSums(seqtab.nochim))
# If processing a single sample, remove the sapply calls: e.g. replace sapply(dadaFs, getN) with getN(dadaFs)
colnames(track) <- c("input", "filtered", "denoised", "merged", "tabled", "nonchim")
rownames(track) <- sample.names
head(track)
Figure 3.4 Read counts at each step of the pipeline
###################
## Assign taxonomy
###################
#' Commonly used reference databases for taxonomy: https://benjjneb.github.io/dada2/training.html
taxHS <- assignTaxonomy(seqtab.nochim, "data/rdp_train_set_16.fa.gz", multithread=TRUE) ## CHANGE to directory and pertinent database
taxHS_spe <- addSpecies(taxHS, "data/rdp_species_assignment_16.fa.gz") # species-level annotation requires the matching species-assignment database
colnames(taxHS) <- c("Kingdom", "Phylum", "Class", "Order", "Family", "Genus")
unname(head(taxHS)) # drop the dimnames (the long sequences) for cleaner display
unname(tail(taxHS))
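
addSpecies appends a Species column, and it is useful to know how many ASVs actually received a species-level call; this count is my addition:

# Added: how many ASVs got an exact species-level assignment?
table(species_assigned = !is.na(taxHS_spe[, "Species"]))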


#######################
## Write data to files
#######################
write.table(track, file = "track.tsv", quote=FALSE)
seqtab.nochim <- as.data.frame(seqtab.nochim) # convert to a data.frame
names(seqtab.nochim) <- paste("ASV",seq(1,dim(seqtab.nochim)[2],1), sep = "_")
# the raw ASV labels (the full sequences) are far too long, so rename them ASV_1 ... ASV_n
write.table(seqtab.nochim, file = "sequence_table_SV.tsv", quote=FALSE)
write.table(taxHS, file = "taxa_SV.tsv", quote=FALSE)
taxHS_spe <- as.data.frame(taxHS_spe)
rownames(taxHS_spe) <- paste("ASV",seq(1,dim(seqtab.nochim)[2],1), sep = "_")
write.table(taxHS_spe, file = "taxa_SV_spe.txt", quote = FALSE)
Figure 3.5 ASV taxonomy assignments / Figure 3.6 Per-sample ASV abundance table
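
One caveat with the renaming above: the ASV_1 ... ASV_n labels replace the actual sequences in the output tables. A base-R sketch (my addition) that also writes the sequences to a FASTA file so the labels can be mapped back later; it relies on rownames(taxHS) still holding the full sequences, which is true at this point since only the column names of taxHS were changed:

# Added: write the ASV representative sequences to FASTA, labelled to match the ASV_ names
asv_seqs  <- rownames(taxHS)
asv_names <- paste("ASV", seq_along(asv_seqs), sep = "_")
writeLines(paste0(">", asv_names, "\n", asv_seqs), "ASV_seqs.fasta")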
