Scraping Book Information from JD.com

2018-03-24  MR_ChanHwang

Project Requirements

Scrape the names and prices of all Python books on JD.com.

Implementation

First, create the Spider class in the splash_examples project directory with the scrapy genspider command:

$ scrapy genspider jd_book search.jd.com

As analyzed earlier, a snippet of JavaScript has to be executed on every book-list page so that all of the books are loaded, so the Splash execute endpoint is used for this task. JdBookSpider is implemented as follows:

# -*- coding: utf-8 -*-
import scrapy
from scrapy import Request
from scrapy_splash import SplashRequest
from ..items import BookItem

lua_script = '''
function main(splash)
    splash:go(splash.args.url)
    splash:wait(2)
    -- scroll the pager into view so that all lazily loaded books are rendered
    splash:runjs("document.getElementsByClassName('page')[0].scrollIntoView(true)")
    splash:wait(2)
    return splash:html()
end
'''


class JdBookSpider(scrapy.Spider):
    name = 'jd_book'
    allowed_domains = ['search.jd.com']
    base_url = 'http://search.jd.com/Search?keyword=python&enc=utf-8&book=y&wq=python'

    def start_requests(self):
        # Request the first list page; no JS rendering is needed to read the result count
        yield Request(self.base_url, callback=self.parse_urls, dont_filter=True)

    def parse_urls(self, response):
        # Get the total number of results and compute the number of list pages
        total = int(response.css('span#J_resCount::text').extract_first().replace("+", ""))
        pageNum = total // 60 + (1 if total % 60 else 0)

        # Build the URL of every list page and send it to Splash's execute endpoint
        for i in range(pageNum):
            url = '%s&page=%s' % (self.base_url, 2 * i + 1)
            yield SplashRequest(url,
                                endpoint='execute',
                                args={'lua_source': lua_script},
                                cache_args=['lua_source'])

    def parse(self, response):
        # Extract the name and price of every book on the page
        for sel in response.css('ul.gl-warp.clearfix > li.gl-item'):
            book = BookItem()
            book['name'] = sel.css('div.p-name').xpath('string(.//em)').extract_first()
            book['price'] = sel.css('div.p-price i::text').extract_first()
            yield book

The code above works as follows:

start_requests sends a plain Request for the first list page; no JavaScript rendering is needed just to read the result count, so this request bypasses Splash and is handled by parse_urls.

parse_urls reads the total number of results from the span#J_resCount element, computes the page count at 60 books per page, builds the URL of every list page (the page parameter takes odd values, hence 2 * i + 1), and sends each one to Splash's execute endpoint with a SplashRequest; cache_args=['lua_source'] avoids re-sending the Lua script with every request.

parse is the default callback of SplashRequest; it iterates over the li.gl-item elements of the rendered page and yields a BookItem with each book's name and price.
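The spider imports BookItem from the project's items.py, which this article does not show. Only the two fields used above are needed; a minimal sketch (the layout and comments here are assumptions, not the original file):

# -*- coding: utf-8 -*-
# items.py -- minimal sketch with only the fields used by JdBookSpider
import scrapy


class BookItem(scrapy.Item):
    name = scrapy.Field()   # book title
    price = scrapy.Field()  # listed price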

In addition, JD's servers check the User-Agent header of incoming requests, so USER_AGENT needs to be set in the configuration file settings.py to impersonate a regular browser:

USER_AGENT = 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko)'
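The splash_examples project is assumed to already have scrapy-splash itself enabled in settings.py; for reference, the configuration recommended by the scrapy-splash documentation looks roughly like this (the SPLASH_URL value is an assumption and must point at your own Splash instance):

SPLASH_URL = 'http://localhost:8050'   # assumed local Splash service

DOWNLOADER_MIDDLEWARES = {
    'scrapy_splash.SplashCookiesMiddleware': 723,
    'scrapy_splash.SplashMiddleware': 725,
    'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware': 810,
}

SPIDER_MIDDLEWARES = {
    'scrapy_splash.SplashDeduplicateArgsMiddleware': 100,
}

DUPEFILTER_CLASS = 'scrapy_splash.SplashAwareDupeFilter'
HTTPCACHE_STORAGE = 'scrapy_splash.SplashAwareFSCacheStorage'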

With the coding and configuration done, run the spider and observe the results.
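For example, the scraped items can be dumped to a CSV file using one of Scrapy's built-in exporters (the output file name here is arbitrary):

$ scrapy crawl jd_book -o books.csv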

An ExcelItemExporter has also been added to the project; it is described in detail at https://www.jianshu.com/p/a50b19b6258d.

Create a my_exporters.py in the project (in the same directory as settings.py) and implement ExcelItemExporter in it:

# -*- coding: utf-8 -*-

from scrapy.exporters import BaseItemExporter
import xlwt


class ExcelItemExporter(BaseItemExporter):
    """
    Export items to an Excel (.xls) file.
    Specify the output format on the command line,
    e.g. scrapy crawl jd_book -t excel -o books.xls
    """

    def __init__(self, file, **kwargs):
        self._configure(kwargs)
        self.file = file
        self.wbook = xlwt.Workbook(encoding='utf-8')
        self.wsheet = self.wbook.add_sheet('scrapy')
        self._headers_not_written = True
        self.fields_to_export = list()
        self.row = 0

    def finish_exporting(self):
        self.wbook.save(self.file)

    def export_item(self, item):
        if self._headers_not_written:
            self._headers_not_written = False
            self._write_headers_and_set_fields_to_export(item)

        fields = self._get_serialized_fields(item)
        for col, v in enumerate(x for _, x in fields):
            self.wsheet.write(self.row, col, v)
        self.row += 1

    def _write_headers_and_set_fields_to_export(self, item):
        if not self.fields_to_export:
            if isinstance(item, dict):
                self.fields_to_export = list(item.keys())
            else:
                self.fields_to_export = list(item.fields.keys())
        for column, v in enumerate(self.fields_to_export):
            self.wsheet.write(self.row, column, v)
        self.row += 1

Add the custom export format in the configuration file settings.py:

FEED_EXPORTERS = {'excel': 'splash_examples.my_exporters.ExcelItemExporter'}

Run

$ scrapy crawl jd_book -t excel -o books.xls

and observe the results.
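To spot-check the contents of books.xls without opening Excel, the workbook can be read back with the xlrd package (assumed to be installed separately; it is not part of the project); a small verification sketch:

# quick check of the exported workbook; assumes xlrd is installed
import xlrd

wbook = xlrd.open_workbook('books.xls')
wsheet = wbook.sheet_by_index(0)        # the 'scrapy' sheet written by ExcelItemExporter
print('rows:', wsheet.nrows)            # header row plus one row per book
for i in range(min(6, wsheet.nrows)):
    print(wsheet.row_values(i))         # ['name', 'price'], then book rows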
