
Scraping Tuandai.com Investment Plans with Scrapy FormRequest POST

2017-06-15  今夕何夕_walker

What does this spider do?

It scrapes investment plans and investor records from tuandai.com; everything collected is public data shown on the site.

Why write it?

As a favor for a former colleague.

How long did it take?

I had not written a spider in a long while; in the end it took about three hours.

How far does it go, and how could it be improved?

After enough requests the API starts returning 404. Two improvements: rotate the User-Agent randomly (to test whether the block is purely IP-based), and plug in proxy IPs; a middleware sketch follows below.
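
As a sketch of the User-Agent rotation idea, here is a minimal downloader middleware; the USER_AGENTS list and the module path are illustrative assumptions, not part of the original project. It overwrites whatever UA the spider set on the request.

# middlewares.py -- random-UA rotation, a minimal sketch (names are illustrative)
import random

USER_AGENTS = [
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:53.0) Gecko/20100101 Firefox/53.0",
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) "
    "Chrome/58.0.3029.110 Safari/537.36",
]

class RandomUserAgentMiddleware(object):
    def process_request(self, request, spider):
        # Replace the User-Agent header on every outgoing request.
        request.headers['User-Agent'] = random.choice(USER_AGENTS)

Enable it in settings.py:

DOWNLOADER_MIDDLEWARES = {'tuandai.middlewares.RandomUserAgentMiddleware': 400}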

Start page:

https://www.tuandai.com/pages/weplan/welist.aspx#a

Steps

Hit the ajax endpoints directly (a quick sanity check with requests is sketched after this list):
1. Fetch the plan list.
2. Build the paginated links for each plan.
3. Fetch the investment records on each page.
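
Before wiring this into Scrapy, the first endpoint can be sanity-checked with a plain requests POST. A minimal sketch, assuming the same form fields the spider below uses:

import requests

url = 'https://www.tuandai.com/pages/ajax/invest_list.ashx'
data = {"Cmd": "GetProductPageList", "pagesize": "10",
        "pageindex": "1", "type": "0"}
headers = {"X-Requested-With": "XMLHttpRequest",
           "Referer": "https://www.tuandai.com/pages/weplan/welist.aspx"}
resp = requests.post(url, data=data, headers=headers)
print(resp.json()['ProductList'][0]['Id'])  # a plan id, used as projectid in step 3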

Spider code

# -*- coding: utf-8 -*-
import json
import scrapy
from scrapy import FormRequest
import time


class TuandaiSpider(scrapy.Spider):
    name = "tuandai_spider"
    allowed_domains = ["www.tuandai.com"]
    start_urls = ['https://www.tuandai.com/pages/weplan/welist.aspx', ]
    invest_url = 'https://www.tuandai.com/pages/ajax/invest_list.ashx'
    item_url = 'https://www.tuandai.com/ajaxCross/ajax_invest.ashx'
    headers = {"Host": "www.tuandai.com",
               "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:53.0) Gecko/20100101 Firefox/53.0",
               "Accept": "application/json, text/javascript, */*; q=0.01",
               "Accept-Language": "zh-CN,zh;q=0.8,en-US;q=0.5,en;q=0.3",
               "Referer": "https://www.tuandai.com/pages/weplan/welist.aspx",
               "Accept-Encoding": "gzip, deflate, br",
               "Content-Type": "application/x-www-form-urlencoded; charset=UTF-8",
               "X-Requested-With": "XMLHttpRequest",
               "Connection": "keep-alive",
               "Cache-Control": "max-age=0"}
    
    def start_requests(self):
        # For testing, only the first page of plans is fetched; range(1, 21) scrapes them all.
        for i in range(1, 2):
            formdata = {"Cmd": "GetProductPageList",
                        "pagesize": "10",
                        "pageindex": str(i),
                        "type": "0"}
            time.sleep(1)  # crude throttle; the non-blocking way is DOWNLOAD_DELAY in settings.py
            yield FormRequest(url=self.invest_url, headers=self.headers, formdata=formdata,
                              method='POST', callback=self.parse_invest)
    
    def parse_invest(self, response):
        jsonres = json.loads(response.text)
        id_list = [invest['Id'] for invest in jsonres['ProductList']]
        print(id_list)
        # Build a fresh formdata dict per request: reusing one mutable dict across
        # yielded requests is the classic shallow-copy pitfall noted in the original.
        for project_id in id_list:
            f = {"Cmd": "GetWePlanSubscribeList",
                 "pagesize": "15",
                 "pageindex": "1",
                 "projectid": project_id}
            yield FormRequest(url=self.item_url, method='POST', formdata=f, meta={'formdata': f},
                              headers=self.headers, callback=self.parse_itemlist)
            print(f)
            
            
    def parse_itemlist(self, response):
        jsonres = json.loads(response.text)
        print(jsonres)
        # Extract the first page's items here, before the pagination loop below.
        item_list = jsonres['list']
        print(item_list)
        for item in item_list:
            yield item
        count = int(jsonres['totalcount'])
        pageindex = 0
        formdata = response.meta['formdata']
        print("formdata: ", formdata)

        # Scrapy's dupefilter skips duplicate requests, so the first page's items must be
        # yielded above. Only 3 pages are fetched while testing; drop the `if` for all pages.
        for i in range((count // 15) + 1):
            if i < 3:
                pageindex += 1
                formdata['pageindex'] = str(pageindex)
                print("yield response url: ", response.url,
                      "projectid: ", formdata['projectid'],
                      "pageindex: ", formdata['pageindex'])
                yield FormRequest(url=response.url, method='POST', formdata=formdata,
                                  meta={'formdata': formdata}, headers=self.headers,
                                  callback=self.parse)

    def parse(self, response):
        jsonres = json.loads(response.text)
        item_list = jsonres['list']
        print(item_list)
        for item in item_list:
            yield item
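
A note on the dupefilter comment above: parse_invest already posted pageindex=1 for every plan, so when the pagination loop re-posts that identical URL and body, Scrapy's duplicate filter silently drops the request; that is why page 1's items have to be yielded before the loop. An alternative is to bypass the filter with dont_filter=True and treat every page uniformly:

yield FormRequest(url=response.url, method='POST', formdata=formdata,
                  headers=self.headers, callback=self.parse, dont_filter=True)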

items.py code

import scrapy


class TuandaiItem(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    NickName = scrapy.Field()
    Amount = scrapy.Field()
    RepeatInvestType = scrapy.Field()
    OrderDate = scrapy.Field()
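
One more improvement: the time.sleep(1) in start_requests blocks Scrapy's event loop. The idiomatic throttle lives in settings.py; a minimal sketch with illustrative values:

# settings.py -- illustrative values
DOWNLOAD_DELAY = 1                   # non-blocking per-request delay, replaces time.sleep(1)
RANDOMIZE_DOWNLOAD_DELAY = True      # adds jitter so the request rhythm looks less mechanical
CONCURRENT_REQUESTS_PER_DOMAIN = 2   # keep pressure on the API low
RETRY_HTTP_CODES = [404, 500, 502, 503]  # also retry the 404s the API returns when blocking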