
Bioinformatics / Web Scraping | Asynchronously Scraping Table Data from a Web Page

2021-06-11 · 生信卷王

Preface

A while back, a senior labmate asked me to scrape some lncRNA information from the website below:
LncBook: https://ngdc.cncb.ac.cn/lncbook/lncrnas
Because the table is built with AJAX, a traditional crawler sees none of the table contents in the page source, and the URL does not change when you page through the results. Fetching the asynchronous requests directly is therefore the natural approach: pressing F12 to open the browser's developer tools reveals the clues we need to grab the table data straight from its source.

Implementation

1. Inspecting the AJAX request pattern

Findings

https://ngdc.cncb.ac.cn/lncbook/lncrnas/lncall?page=1
https://ngdc.cncb.ac.cn/lncbook/lncrnas/lncall?page=2
https://ngdc.cncb.ac.cn/lncbook/lncrnas/lncall?page=3
#To fetch more data per request and avoid an IP ban from hammering the server, we can raise the value of size; parameters are joined with &
https://ngdc.cncb.ac.cn/lncbook/lncrnas/lncall?page=3&size=50
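Before building the full crawler, it is worth probing the endpoint once to confirm that it returns plain JSON and to check the field names. Below is a minimal sketch; the keys total and transInfo are assumptions based on what the response looks like in DevTools:

import json
import urllib.request

# Fetch one tiny page (size=5) so the response is easy to eyeball.
probe = "https://ngdc.cncb.ac.cn/lncbook/lncrnas/lncall?page=1&size=5"
req = urllib.request.Request(probe, headers={"User-Agent": "Mozilla/5.0"})
with urllib.request.urlopen(req) as resp:
    obj = json.loads(resp.read().decode("utf-8"))

print(obj.keys())            # expected top-level keys (assumption): total, transInfo, page
print(obj["total"])          # total record count, e.g. 268848
print(obj["transInfo"][0])   # first lncRNA record on this page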

2. Scraping the data

import json
import urllib.request,urllib.error
import re
import openpyxl
import time
def main():
    baseurl = "https://ngdc.cncb.ac.cn/lncbook/lncrnas/lncall?page="  # same endpoint as above (bigd.big.ac.cn is the old domain)
    datalist = getData(baseurl)
    savepath = "lncrnas.xlsx"
    saveData(datalist,savepath)
def getData(baseurl):
    datalist = []
    for i in range(1, 28):   # pages are 1-indexed (see the URLs above); 27 pages x 10,000 cover all 268,848 records
        #time.sleep(1)       # uncomment to be gentler on the server
        url = baseurl + str(i) + "&size=10000"  # size=10000 fetches 10,000 records per request
        html = askURL(url)
        # pull the transInfo array out of the JSON response; not hard-coding
        # the total keeps the pattern working if the record count changes
        data = re.findall(r'"transInfo":(.+?),"page"', html)
        jsonObj = json.loads(data[0])
        for item in jsonObj:
            items = [item['transid'], item['geneid'], item['chrome'],
                     item['startsite'], item['endsite'], item['strand'], item['length'],
                     item['exonNum'], item['orfLength'], item['gcContent'],item['classification']]
            datalist.append(items)
            #print(items)
        print("第%d页已完成"%(i+1))
    return datalist
def saveData(datalist, savepath):
    book = openpyxl.Workbook()
    sheet = book.active                      # reuse the default sheet instead of creating an extra one
    sheet.title = "lncrnas"
    col = ("Transcript ID", "Gene ID", "Chrome", "Startsite", "Endsite", "Strand",
           "Length (nt)", "Exon Number", "ORF Length (nt)", "GC Content (%)", "Classification")
    sheet.append(col)                        # header goes in row 1
    for i, data in enumerate(datalist):      # data rows start at row 2, below the header
        sheet.append(data)
        if (i + 1) % 10000 == 0:
            print("%d records written" % (i + 1))
    print("Saving...")
    book.save(savepath)
def askURL(url):
    head = {"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/89.0.4389.90 Safari/537.36 Edg/89.0.774.57"}
    request = urllib.request.Request(url,headers=head)
    html=""
    try:
        response = urllib.request.urlopen(request)
        html= response.read().decode("utf-8")
        #print(html)
        pass
    except urllib.error.URLError as e:
        if hasattr(e,"code"):
            print(e.code)
        if hasattr(e,"reason"):
            print(e.reason)
            pass
        pass
    return html
if __name__ == "__main__":
    main()
    print('Scraping finished')
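As a side note, if pandas is available, the whole saveData function can be replaced with a couple of lines; this is a sketch assuming datalist holds rows in the same column order as the header tuple above (to_excel writes .xlsx via openpyxl):

import pandas as pd

# Hypothetical drop-in alternative to saveData(); column names match the header above.
columns = ["Transcript ID", "Gene ID", "Chrome", "Startsite", "Endsite", "Strand",
           "Length (nt)", "Exon Number", "ORF Length (nt)", "GC Content (%)", "Classification"]
df = pd.DataFrame(datalist, columns=columns)
df.to_excel("lncrnas.xlsx", sheet_name="lncrnas", index=False)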
Results

Final Thoughts
