Scrapy Crawler Framework (5) ------ CrawlSpider

2019-08-06  千喜Ya

1. CrawlSpiders

Principle diagram

sequenceDiagram
start_urls ->> Scheduler: initial URLs
Scheduler ->> Downloader: request
Downloader ->> Rules: response
Rules ->> Data extraction: response
Rules ->> Scheduler: new URLs

The following command quickly generates a CrawlSpider from the built-in template:

scrapy genspider -t crawl <spider_name> <allowed_domain>

First, a word about Spider: it is the base class that all spiders derive from, and CrawlSpider is a subclass of Spider. A plain Spider is designed to crawl only the pages listed in start_urls; when you also need to extract links from the crawled pages and keep following them, CrawlSpider is the better fit.
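
For reference, the crawl template generates a skeleton roughly like the following (the spider name, domain, and the allow pattern are placeholders that genspider fills in from the command line):

import scrapy
from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule


class ExampleSpider(CrawlSpider):
    name = 'example'
    allowed_domains = ['example.com']
    start_urls = ['http://example.com/']

    rules = (
        # Follow links matching the pattern and hand each response to parse_item
        Rule(LinkExtractor(allow=r'Items/'), callback='parse_item', follow=True),
    )

    def parse_item(self, response):
        item = {}
        # populate item fields from the response here
        return item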

2. The Rule object

Both the Rule class and the CrawlSpider class live in the scrapy.spiders module (older Scrapy releases exposed them under scrapy.contrib.spiders, which is now deprecated).

class scrapy.spiders.Rule(
    link_extractor, callback=None, cb_kwargs=None, follow=None,
    process_links=None, process_request=None
)

Parameter meanings:

- link_extractor: a LinkExtractor instance that defines which links to extract from each crawled page.
- callback: the name (as a string) of the method called with the response of each extracted link. Do not use parse here: CrawlSpider relies on parse() internally, so overriding it breaks the crawl logic.
- cb_kwargs: a dict of keyword arguments passed to the callback.
- follow: whether to keep following links from responses matched by this rule. Defaults to True when callback is None, otherwise False.
- process_links: a callable that receives the list of extracted links and can filter or modify them.
- process_request: a callable that receives each Request generated by this rule and can modify it or drop it by returning None.
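
A minimal sketch of how these parameters fit together; the URL patterns, the drop_empty_links helper, and the callback name are illustrative, not from the original post:

from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import Rule


def drop_empty_links(links):
    # process_links hook: keep only links that have visible anchor text
    return [link for link in links if link.text.strip()]

rules = (
    # Follow listing pages without parsing them (no callback, so follow defaults to True)
    Rule(LinkExtractor(allow=r'/list/')),
    # Parse detail pages; extracted links are filtered by drop_empty_links first
    Rule(LinkExtractor(allow=r'/detail/'), callback='parse_item',
         process_links=drop_empty_links),
)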

3. LinkExtractors

3.1 Concept

As the name suggests: a link extractor.

3.2 Purpose

It extracts links from a response object, and those links are crawled next.
Every LinkExtractor has a single public method, extract_links(), which receives a Response object and returns a list of scrapy.link.Link objects.

3.3 Usage

class scrapy.linkextractors.LinkExtractor(
    allow = (),
    deny = (),
    allow_domains = (),
    deny_domains = (),
    deny_extensions = None,
    restrict_xpaths = (),
    tags = ('a','area'),
    attrs = ('href',),
    canonicalize = True,
    unique = True,
    process_value = None
)

Main parameters:

- allow: a regular expression (or list of them) that a URL must match to be extracted. If empty, all links match.
- deny: a regular expression (or list of them) that excludes matching URLs; it takes precedence over allow.
- allow_domains: only links pointing to these domains are extracted.
- deny_domains: links pointing to these domains are never extracted.
- restrict_xpaths: XPath expressions selecting the regions of the page from which links are extracted.
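
A quick sketch of how allow and deny interact (the patterns here are illustrative, not from the original post):

from scrapy.linkextractors import LinkExtractor

# Extract chapter links under /read/, but skip image URLs;
# deny wins when both patterns match the same URL.
le = LinkExtractor(allow=(r'/read/\d+/',), deny=(r'\.(jpg|png)$',))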

3.3.1 Checking the result (verifying in the shell)

First, run:

scrapy shell http://www.fhxiaoshuo.com/read/33/33539/17829387.shtml 

Then import the relevant module:

from scrapy.linkextractors import LinkExtractor

Build an extractor for the links on the current page:

link = LinkExtractor(restrict_xpaths=(r'//div[@class="bottem"]/a[4]'))

Call the extract_links() method on the LinkExtractor instance to see what matches:

link.extract_links(response)
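
extract_links() returns a list of Link objects; a quick way to inspect them in the same shell session:

for l in link.extract_links(response):
    print(l.url, l.text)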

3.3.2 Checking the result: the CrawlSpider version

from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule
from xiaoshuo.items import XiaoshuoItem


class XiaoshuoSpiderSpider(CrawlSpider):
    name = 'xiaoshuo_spider'
    allowed_domains = ['fhxiaoshuo.com']
    start_urls = ['http://www.fhxiaoshuo.com/read/33/33539/17829387.shtml']

    rules = [
        # note: this rule pulls in chapter 1 from the chapter list page
        Rule(LinkExtractor(restrict_xpaths=(r'//dl/dd[2]/a')), callback='parse_item', follow=True),
        # note: this rule follows the "next chapter" link, so crawling continues from chapter 2
        Rule(LinkExtractor(restrict_xpaths=(r'//div[@class="bottem"]/a[4]')), callback='parse_item', follow=True),
    ]

    def parse_item(self, response):
        info = response.xpath("//div[@id='TXT']/text()").extract()
        it = XiaoshuoItem()
        it['info'] = info
        yield it
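
The spider imports XiaoshuoItem from xiaoshuo.items; a minimal items.py to match would look something like this (a sketch, with the single field name taken from the spider above):

import scrapy


class XiaoshuoItem(scrapy.Item):
    # the chapter text extracted in parse_item
    info = scrapy.Field()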

Note: never name the callback parse. CrawlSpider implements its own parse() method to drive the rules, so overriding it breaks the whole mechanism. Also, follow defaults to False when a callback is given, so a rules list like the one below parses the matched pages but does not keep following links from them:

rules = [
    Rule(LinkExtractor(restrict_xpaths=(r'//div[@class="bottem"]/a[4]')), callback='parse_item'),
]