
Getting Started with the Scrapy Crawler Framework

2019-03-30  smallest_one

Contents

  1. References
  2. Overview
  3. Installation
  4. Writing a Scrapy program
  5. Troubleshooting

1. References

2. Overview

Scrapy is an application framework written in Python for crawling websites and extracting structured data.

Scrapy architecture diagram (the green lines show the direction of data flow)

3. Installation

Install the build dependencies:

apt-get install python-dev python-pip libxml2-dev libxslt1-dev zlib1g-dev libffi-dev libssl-dev

Install Scrapy:

pip3 install Scrapy
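
Once the install finishes, you can confirm that the command-line tool is available (the reported version will depend on what pip installed):

scrapy version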

4. Writing a Scrapy program

Create a Scrapy project:

scrapy startproject project_name
project_name/
    scrapy.cfg            # deploy configuration file
    project_name/         # project's Python module, you'll import your code from here
        __init__.py
        items.py          # project items definition file
        middlewares.py    # project middlewares file
        pipelines.py      # project pipelines file
        settings.py       # project settings file
        spiders/          # a directory where you'll later put your spiders
            __init__.py
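
Instead of writing a spider file by hand, you can also let Scrapy generate a skeleton inside the project; the spider name and domain below are just examples:

cd project_name
scrapy genspider quotes quotes.toscrape.com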

The purpose of each file is described in the comments above.

A sample spider, saved in the spiders directory:

import scrapy


class QuotesSpider(scrapy.Spider):
    # Unique name used to run the spider: scrapy crawl quotes
    name = "quotes"

    def start_requests(self):
        # Build the initial requests; each response is handled by parse().
        urls = [
            'http://quotes.toscrape.com/page/1/',
            'http://quotes.toscrape.com/page/2/',
        ]
        for url in urls:
            yield scrapy.Request(url=url, callback=self.parse)

    def parse(self, response):
        # Save the raw HTML of each downloaded page to a local file.
        page = response.url.split("/")[-2]
        filename = 'quotes-%s.html' % page
        with open(filename, 'wb') as f:
            f.write(response.body)
        self.log('Saved file %s' % filename)

QuotesSpider subclasses scrapy.Spider and defines a few attributes and methods: name identifies the spider and must be unique within the project; start_requests() returns the initial requests the spider starts crawling from; and parse() is the callback that handles each downloaded response, here saving the page body to a file.
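
As a shortcut, the same spider can be written without start_requests() at all: declare a start_urls class attribute and the inherited default implementation will create a request for each URL and send the responses to parse(). A minimal sketch following the official Scrapy tutorial:

import scrapy


class QuotesSpider(scrapy.Spider):
    name = "quotes"
    # The inherited start_requests() turns each URL into a scrapy.Request
    # whose callback is parse().
    start_urls = [
        'http://quotes.toscrape.com/page/1/',
        'http://quotes.toscrape.com/page/2/',
    ]

    def parse(self, response):
        # Same as before: save the raw page body to a local HTML file.
        page = response.url.split("/")[-2]
        filename = 'quotes-%s.html' % page
        with open(filename, 'wb') as f:
            f.write(response.body)
        self.log('Saved file %s' % filename)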

Run the spider:

scrapy crawl quotes
(omitted for brevity)
2016-12-16 21:24:05 [scrapy.core.engine] INFO: Spider opened
2016-12-16 21:24:05 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2016-12-16 21:24:05 [scrapy.extensions.telnet] DEBUG: Telnet console listening on 127.0.0.1:6023
2016-12-16 21:24:05 [scrapy.core.engine] DEBUG: Crawled (404) <GET http://quotes.toscrape.com/robots.txt> (referer: None)
2016-12-16 21:24:05 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://quotes.toscrape.com/page/1/> (referer: None)
2016-12-16 21:24:05 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://quotes.toscrape.com/page/2/> (referer: None)
2016-12-16 21:24:05 [quotes] DEBUG: Saved file quotes-1.html
2016-12-16 21:24:05 [quotes] DEBUG: Saved file quotes-2.html
2016-12-16 21:24:05 [scrapy.core.engine] INFO: Closing spider (finished)

Check the files in the current directory: two new files, quotes-1.html and quotes-2.html, should have been created, containing the body of the corresponding URLs, just as our parse method instructs.

The results can be exported through a Feed export; the following command writes the scraped data as JSON:

scrapy crawl quotes -o quotes.json
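
Note that the parse() shown above only writes raw HTML to disk, so the exported JSON will not contain structured data. Below is a minimal sketch of a spider that yields one item per quote; it assumes a reasonably recent Scrapy version, and the spider name and CSS selectors are assumptions based on the quotes.toscrape.com markup used in the official tutorial:

import scrapy


class QuotesItemSpider(scrapy.Spider):
    name = "quotes_items"   # hypothetical name, distinct from the spider above
    start_urls = ['http://quotes.toscrape.com/page/1/']

    def parse(self, response):
        # Each quote on the page sits inside a <div class="quote"> block.
        for quote in response.css('div.quote'):
            yield {
                'text': quote.css('span.text::text').get(),
                'author': quote.css('small.author::text').get(),
                'tags': quote.css('div.tags a.tag::text').getall(),
            }

Running scrapy crawl quotes_items -o quotes.json would then fill quotes.json with the yielded dictionaries.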

5. Troubleshooting

5.1 HTTP 403

Solution:
Add a USER_AGENT setting in settings.py so that requests appear to come from a regular browser:

USER_AGENT = 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_13_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/64.0.3282.186 Safari/537.36'
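
If you prefer not to change the project-wide settings, the same override can be applied to a single spider through the custom_settings class attribute; a brief sketch:

import scrapy


class QuotesSpider(scrapy.Spider):
    name = "quotes"
    # Per-spider settings override the values in settings.py.
    custom_settings = {
        'USER_AGENT': ('Mozilla/5.0 (Macintosh; Intel Mac OS X 10_13_2) '
                       'AppleWebKit/537.36 (KHTML, like Gecko) '
                       'Chrome/64.0.3282.186 Safari/537.36'),
    }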

5.2 Spaces in a class name

As described in reference [5], replace the space with a '.' in the selector, as in the sketch below.
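
An attribute such as class="quote text" actually marks the element with two classes, quote and text, so the selector chains them with dots rather than keeping the space. A small self-contained sketch using parsel (the selector library that ships with Scrapy); the HTML snippet is made up for illustration:

from parsel import Selector

# class="quote text" means the element carries two classes: "quote" and "text".
sel = Selector(text='<div class="quote text">hello</div>')

# Chain the class names with '.' instead of writing the space:
print(sel.css('div.quote.text::text').get())  # prints: hello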

5.3 How to parse a local file

Use a file:// URL as the address, as shown below:

file:///path_of_directory/example.html
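
An easy way to experiment with such a URL is the interactive shell; the path below is the same placeholder as above:

scrapy shell 'file:///path_of_directory/example.html'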