Generic spider

2017-07-17  方方块

use case - generic spiders provide useful methods for common crawling actions, such as following all links on a site based on certain rules, crawling from Sitemaps, or parsing an XML/CSV feed
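
For reference, the generic spiders described above all ship in the scrapy.spiders module:

```python
# The generic spiders bundled with Scrapy:
from scrapy.spiders import (
    CrawlSpider,    # follow links on a site based on rules
    XMLFeedSpider,  # parse an XML feed node by node
    CSVFeedSpider,  # parse a CSV feed row by row
    SitemapSpider,  # crawl URLs discovered from Sitemaps
)
```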

CrawlSpider

rules - a list of Rule objects that define the crawling behavior
parse_start_url - a method that can be overridden to parse the initial responses; it must return an iterable of items and/or Request objects
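
A minimal sketch tying these together (the spider name, domain, and URL pattern are hypothetical):

```python
from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule

class ExampleSpider(CrawlSpider):
    name = 'example'                      # hypothetical name
    allowed_domains = ['example.com']     # hypothetical domain
    start_urls = ['http://example.com']

    # one rule: follow links matching /items/ and parse them
    rules = (
        Rule(LinkExtractor(allow=r'/items/'), callback='parse_item'),
    )

    def parse_start_url(self, response):
        # called for the initial responses; must return an iterable
        # of items and/or Requests (an empty list is fine)
        return []

    def parse_item(self, response):
        yield {'url': response.url}
```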

Rules

scrapy.spiders.Rule
you can declare multiple rules for following links; when there is only one Rule, always add a trailing , so that rules stays a tuple (see the sketch below)


avoid using parse as a callback name, since CrawlSpider itself uses the parse method to implement its rule logic
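
A sketch showing both points, the trailing comma and the callback naming (URL patterns are hypothetical):

```python
from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule

class RulesSpider(CrawlSpider):
    name = 'rules_demo'                  # hypothetical
    start_urls = ['http://example.com']  # hypothetical

    rules = (
        # follow category pages without extracting anything
        Rule(LinkExtractor(allow=r'/category/')),
        # extract items from detail pages; the callback must NOT be
        # named 'parse', which CrawlSpider reserves for its own logic
        Rule(LinkExtractor(allow=r'/item/'), callback='parse_item'),
    )
    # with a single Rule, the trailing comma is what makes `rules`
    # a one-element tuple instead of a parenthesised expression

    def parse_item(self, response):
        yield {'url': response.url}
```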

Scrapy filters out duplicate requests by default
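
When a duplicate genuinely needs to be re-fetched, scrapy.Request accepts a dont_filter flag to bypass that filter; a minimal sketch (spider name and URL are hypothetical):

```python
import scrapy

class RefreshSpider(scrapy.Spider):
    name = 'refresh'                     # hypothetical
    start_urls = ['http://example.com']  # hypothetical

    def parse(self, response):
        # a second request to the same URL would normally be dropped
        # by the duplicate filter; dont_filter=True lets it through
        yield scrapy.Request(response.url, callback=self.parse_again,
                             dont_filter=True)

    def parse_again(self, response):
        yield {'url': response.url}
```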
beware that start_urls should not contain a trailing slash
works: start_urls without the trailing slash
does not work: start_urls with a trailing slash
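
A hypothetical reconstruction of that contrast, assuming example.com:

```python
# works (per the author's observation):
start_urls = ['http://example.com']

# does not work:
start_urls = ['http://example.com/']
```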