
Crawler Basics (6): Using Scrapy with Redis

2017-05-19  Maxim_Tian

Using Redis with Scrapy makes it possible to run distributed crawls.

I am still a beginner with Redis internals, so I won't go into much depth here. But using Redis in a crawler does speed up page crawling, for the following reason:

Redis runs in memory, so the crawler's queue and the data it has fetched can be kept in memory. Compared with reading that data back from disk, Redis can greatly improve crawl efficiency.
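The deduplication half of this is a good illustration: scrapy-redis's `RFPDupeFilter` stores a fingerprint (hash) of every request in a Redis set, and set membership checks in memory are O(1). A minimal plain-Python sketch of the idea, using a local `set` in place of a real Redis connection (the fingerprint function here is simplified; the real one also hashes the body and canonicalizes the URL):

```python
import hashlib

def fingerprint(method, url):
    """Hash a request into a fixed-size fingerprint (simplified sketch of
    Scrapy's request fingerprinting)."""
    return hashlib.sha1(f"{method} {url}".encode("utf-8")).hexdigest()

class InMemoryDupeFilter:
    """Stand-in for scrapy_redis.dupefilter.RFPDupeFilter: the real class
    issues SADD against a Redis set; here a local set plays that role."""
    def __init__(self):
        self.fingerprints = set()

    def request_seen(self, method, url):
        fp = fingerprint(method, url)
        if fp in self.fingerprints:
            return True          # duplicate: the scheduler drops it
        self.fingerprints.add(fp)
        return False             # first time: the request gets crawled

df = InMemoryDupeFilter()
print(df.request_seen("GET", "http://www.daomubiji.com/"))  # False: first visit
print(df.request_seen("GET", "http://www.daomubiji.com/"))  # True: duplicate
```

Because the real fingerprint set lives in Redis rather than in one process's memory, every spider in a distributed crawl shares it, which is what makes cross-machine deduplication work.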

To summarize, the steps for using Redis with Scrapy:

Install the Redis server:

sudo apt-get install redis-server

Install the scrapy-redis package:

pip install scrapy-redis

Start the Redis server:

sudo redis-server

After a successful start, Redis prints its startup banner. When you later want to stop the server:

sudo redis-cli shutdown

Using Scrapy-Redis

This covers only the most basic usage. To use Scrapy-Redis in a Scrapy project, add the following lines to settings.py.

# Ensure all spiders deduplicate requests through Redis
DUPEFILTER_CLASS = "scrapy_redis.dupefilter.RFPDupeFilter"

# Schedule requests from a queue kept in Redis
SCHEDULER = "scrapy_redis.scheduler.Scheduler"

# Persist scheduler state: don't clear the Redis queues, so crawls can be paused and resumed
SCHEDULER_PERSIST = True

# Use a priority queue for request scheduling (the default)
SCHEDULER_QUEUE_CLASS = 'scrapy_redis.queue.SpiderPriorityQueue'

# URL to use when connecting to Redis (optional)
REDIS_URL = None

# Host and port to use when connecting to Redis
REDIS_HOST = '127.0.0.1'
REDIS_PORT = 6379

For more scrapy-redis settings, see this blog post:

http://cuiqingcai.com/4048.html
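For intuition about `SpiderPriorityQueue`: it keeps pending requests in a Redis sorted set scored by negative priority, so the highest-priority request pops first. A rough local sketch of that behaviour (a real deployment uses ZADD/ZRANGE on Redis; the URLs here are just illustrative):

```python
import heapq
import itertools

class PrioritySketch:
    """Mimics the ordering of scrapy_redis.queue.SpiderPriorityQueue:
    requests are stored with score = -priority, and pop() returns the
    lowest score first, i.e. the highest-priority request."""
    def __init__(self):
        self._heap = []
        self._counter = itertools.count()  # FIFO tiebreak for equal scores

    def push(self, request, priority=0):
        heapq.heappush(self._heap, (-priority, next(self._counter), request))

    def pop(self):
        return heapq.heappop(self._heap)[2] if self._heap else None

q = PrioritySketch()
q.push("http://www.daomubiji.com/dao-mu-bi-ji-1", priority=0)
q.push("http://www.daomubiji.com/", priority=10)  # e.g. a more urgent request
print(q.pop())  # http://www.daomubiji.com/  (priority 10 pops before priority 0)
```

Because the real queue lives in Redis, any number of spider processes can push to and pop from it, which is exactly what the `SCHEDULER = "scrapy_redis.scheduler.Scheduler"` setting wires up.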

One more thing to note: in the spider file (myspiders.py), you no longer import

from scrapy.spiders import Spider

but instead use

from scrapy_redis.spiders import RedisSpider

Code example

This continues the 盗墓笔记 (Daomu Biji) example from my previous post, with one small improvement: the spider now also crawls the content of each chapter.
settings.py is as follows:

# -*- coding: utf-8 -*-

BOT_NAME = 'daomubiji'

SPIDER_MODULES = ['daomubiji.spiders']
NEWSPIDER_MODULE = 'daomubiji.spiders'

ITEM_PIPELINES = {
    'daomubiji.pipelines.DaomubijiPipeline': 300 # the number is this pipeline's priority; any value from 0 to 1000 works
}

USER_AGENT = 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.96 Safari/537.36'
COOKIES_ENABLED = True

DUPEFILTER_CLASS = "scrapy_redis.dupefilter.RFPDupeFilter"
SCHEDULER = "scrapy_redis.scheduler.Scheduler"
SCHEDULER_QUEUE_CLASS = 'scrapy_redis.queue.SpiderPriorityQueue'
SCHEDULER_PERSIST = True
REDIS_URL = None # can usually be left out
REDIS_HOST = '127.0.0.1' # can also be set to 'localhost' if appropriate
REDIS_PORT = 6379

MONGODB_HOST = '127.0.0.1'
MONGODB_PORT = 27017
MONGODB_DBNAME = 'Mydaomubiji'
MONGODB_DOCNAME = 'daomubiji_v3'

spider.py is as follows:

# -*- coding: utf-8 -*-

from scrapy_redis.spiders import RedisSpider
from scrapy.selector import Selector
from scrapy.http import Request
from daomubiji.items import DaomubijiItem

class daomubijiSpider(RedisSpider):
    name = "daomubijiSpider"
    redis_key = 'daomubijiSpider:start_urls'
    start_urls = ['http://www.daomubiji.com/']  # not used by RedisSpider; start URLs are read from redis_key

    def parse_content(self, response): # crawl the content of each chapter
        selector = Selector(response)
        chapter_content = selector.xpath('//article[@class="article-content"]/p/text()').extract()
        item = response.meta['item']
        item['content'] = '\n'.join(chapter_content)
        yield item

    def parse_title(self, response): # parse each book's sub-page
        selector = Selector(response)

        book_order_name = selector.xpath('//h1/text()').extract()[0]
        pos = book_order_name.find(u':')
        book_order = book_order_name[:pos] # the book's number
        book_name = book_order_name[pos + 1:] # the book's title

        chapter_list = selector.xpath('//article[@class="excerpt excerpt-c3"]//text()').extract()
        chapter_link = selector.xpath('//article[@class="excerpt excerpt-c3"]/a/@href').extract()
        chapter_link_flag = 0 # index into chapter_link
        for each in chapter_list:
            pos_first = each.find(' ')
            pos_last = each.rfind(' ')
            chapter_first = ''
            chapter_mid = ''
            chapter_last = ''
            if pos_first != pos_last:
                chapter_first = each[:pos_first]
                chapter_mid = each[(pos_first + 1): pos_last]
                chapter_last = each[pos_last + 1:]
            else:
                chapter_first = each[:pos_first]
                chapter_last = each[pos_last + 1:]

            # store the extracted information
            item = DaomubijiItem()
            item['bookOrder'] = book_order
            item['bookName'] = book_name
            item['chapterFirst'] = chapter_first
            item['chapterMid'] = chapter_mid
            item['chapterLast'] = chapter_last
            yield Request(chapter_link[chapter_link_flag], callback=self.parse_content, meta={'item': item})
            chapter_link_flag += 1

    def parse(self, response): # parsing starts from this method
        selector = Selector(response)

        book_filed = selector.xpath('//article/div') # grab the book title blocks

        book_link = selector.xpath('//article/p/a/@href').extract() # grab the link to each book
        # '//article/p/a/@href' could also be written as '//article//@href'

        link_flag = 0
        for each in book_filed:
            book_name_title = each.xpath('h2/text()').extract()[0]
            pos = book_name_title.find(u':')
            if pos == -1: # only crawl books whose titles match the expected format
                continue
            yield Request(book_link[link_flag], callback=self.parse_title) # hand the response off to parse_title
            link_flag += 1

About this line:

yield Request(chapter_link[chapter_link_flag], callback=self.parse_content, meta={'item': item})

Passing the meta argument to Request carries the item object through to parse_content.
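Scrapy copies `request.meta` onto the response it hands to the callback, so whatever `parse_title` attaches is available again inside `parse_content`. A plain-Python sketch of that handoff (not the Scrapy API; the classes and URL here are just illustrative):

```python
class FakeRequest:
    """Toy stand-in for scrapy.http.Request."""
    def __init__(self, url, callback, meta=None):
        self.url, self.callback, self.meta = url, callback, meta or {}

class FakeResponse:
    """Toy stand-in for the response Scrapy builds for the callback."""
    def __init__(self, request):
        self.meta = request.meta  # Scrapy exposes request.meta as response.meta

def parse_title():
    item = {"bookName": "七星鲁王宫"}  # partially filled item
    return FakeRequest("http://www.daomubiji.com/some-chapter.html",
                       callback=parse_content, meta={"item": item})

def parse_content(response):
    item = response.meta["item"]      # the same dict object, picked back up
    item["content"] = "chapter text..."
    return item

req = parse_title()
item = req.callback(FakeResponse(req))
print(item["bookName"])  # 七星鲁王宫: the item survived the round trip
```

The key point is that the item is built incrementally across two callbacks, and `meta` is the channel that ties the two halves of the work together.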

A note on stalling:

When using Redis, you may find the crawl stalls: the program keeps printing output like this in the console:

2017-05-19 20:08:53 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2017-05-19 20:08:53 [scrapy.extensions.telnet] DEBUG: Telnet console listening on 127.0.0.1:6023
2017-05-19 20:09:53 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2017-05-19 20:10:53 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2017-05-19 20:11:53 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2017-05-19 20:12:53 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2017-05-19 20:13:53 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
...

When that happens, run these two commands in a terminal:

redis-cli
lpush daomubijiSpider:start_urls http://www.daomubiji.com/

The key name must match what my spider code defines:

redis_key = 'daomubijiSpider:start_urls'
start_urls = ['http://www.daomubiji.com/']

My understanding:

The crawler pulls the pages it should fetch from Redis, but the Redis queue starts out empty.
The spider has to wait until a start URL is pushed into Redis before it can begin pulling work from the queue.
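In other words, the idle spider polls its `redis_key` list on every scheduler tick, and the manual `lpush` is what finally gives it something to pop. A toy simulation of that handshake (real scrapy-redis issues LPOP/BLPOP against Redis; this uses a plain deque):

```python
from collections import deque

# Stands in for the Redis list at key 'daomubijiSpider:start_urls'
start_urls_queue = deque()

def next_request():
    """What the idle spider does on each poll: try to pop a start URL."""
    return start_urls_queue.popleft() if start_urls_queue else None

# Before the lpush: every poll comes back empty, so Scrapy just keeps
# logging "Crawled 0 pages (at 0 pages/min)".
print(next_request())   # None

# The manual `lpush daomubijiSpider:start_urls http://www.daomubiji.com/`
# corresponds to:
start_urls_queue.append("http://www.daomubiji.com/")

# Now the next poll returns a URL and crawling begins.
print(next_request())   # http://www.daomubiji.com/
```

This is also why the key name in the `lpush` command has to match `redis_key` exactly: a push to any other key goes to a list the spider never looks at.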

After that, the program crawls normally.

That's about it. When the crawler has finished, it will sit idle at output similar to the log above; at that point, just quit the program.

ps: remember to flush the Redis cache before every run; otherwise the persisted dupefilter still remembers the previous crawl and the spider won't fetch anything:

redis-cli flushdb

The full code for this post:

https://github.com/MaximTian/Daomubiji_Scrapy/tree/master

Keep studying hard and improving every day~
