Python Crawler 01: Installing Scrapy and a Small Example

2017-02-03  ChZ_CC

I've tried this on Windows too; it works fine.


Installation: [install Scrapy inside a virtual environment]
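A minimal setup sketch for the virtual-environment install (the directory name `venv` is my own choice, not from the original post):

```shell
# Create and activate a virtual environment, then install Scrapy
python3 -m venv venv
source venv/bin/activate     # on Windows: venv\Scripts\activate
pip install scrapy
scrapy version               # verify the install worked
```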

Command-line usage: scrapy shell

scrapy shell 'http://quotes.toscrape.com/page/1/'

scrapy shell "http://quotes.toscrape.com/page/1/"
[On Windows you must use double quotes. It's just that finicky.]

The shell is handy for testing extraction expressions and getting immediate feedback. After launching, it looks like this:

I have IPython installed, so it drops straight into an IPython shell. Without IPython you get the plain Python prompt with the three greater-than signs (>>>). Like this:

[ ... Scrapy log here ... ]
2016-09-19 12:09:27 [scrapy] DEBUG: Crawled (200) <GET http://quotes.toscrape.com/page/1/> (referer: None)
[s] Available Scrapy objects:
[s]   scrapy     scrapy module (contains scrapy.Request, scrapy.Selector, etc)
[s]   crawler    <scrapy.crawler.Crawler object at 0x7fa91d888c90>
[s]   item       {}
[s]   request    <GET http://quotes.toscrape.com/page/1/>
[s]   response   <200 http://quotes.toscrape.com/page/1/>
[s]   settings   <scrapy.settings.Settings object at 0x7fa91d888c10>
[s]   spider     <DefaultSpider 'default' at 0x7fa91c8af990>
[s] Useful shortcuts:
[s]   shelp()           Shell help (print this help)
[s]   fetch(req_or_url) Fetch request (or URL) and update local objects
[s]   view(response)    View response in a browser
>>>

An example

Crawl the title, author, and reply count of Baidu Tieba threads.

Create a Scrapy project with scrapy startproject tutorial (the project name tutorial matches the import path used in the spider):

Edit items.py:

import scrapy


class TiebaItem(scrapy.Item):
    # name = scrapy.Field()
    title = scrapy.Field()
    author = scrapy.Field()
    comment_num = scrapy.Field()

Write the spider: tieba.py

Put it in the spiders folder.

from scrapy.spiders import Spider

from tutorial.items import TiebaItem


class TiebaSpider(Spider):
    name = "tieba"
    allowed_domains = ['tieba.baidu.com']   # list of allowed domains
    start_urls = [
            'http://tieba.baidu.com/f?kw=%E7%BD%91%E7%BB%9C%E7%88%AC%E8%99%AB&ie=utf-8',
        ]

    def parse(self, response):
        # Earlier experiment: save the raw page to a file.
        #page = response.url.split("/")[-1]
        #filename = 'tieba.baidu-%s.html' % page
        #with open(filename, 'wb') as f:
        #    f.write(response.body)
        #self.log('Saved file %s' % filename)
        items = []
        for sel in response.xpath('//li[@class=" j_thread_list clearfix"]'):
            item = TiebaItem()
            item['title'] = sel.xpath('./div/div[2]/div[1]/div[1]/a/text()').extract()
            item['author'] = sel.xpath('.//span[@class="frs-author-name-wrap"]/a/text()').extract()
            item['comment_num'] = sel.xpath('.//span[@class="threadlist_rep_num center_text"]/text()').extract()
            items.append(item)
        return items

I'm using XPath here. In Chrome: right-click → Inspect, then right-click the HTML node → Copy → Copy XPath. Copy selector gives you a CSS selector instead.

Run it in the terminal:

scrapy crawl tieba -o items.json

This runs the spider and writes the output as JSON. Other formats such as txt and csv also work.

Results

The title, author, and reply count were extracted successfully. The JSON file shows the text as Unicode escape sequences.

Saving in CSV format shows the Chinese characters directly.
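If you'd rather keep JSON output but still see the characters directly, Scrapy has a feed export encoding setting; a sketch of the relevant line in the project's settings.py:

```python
# settings.py
# By default Scrapy's JSON exporter escapes non-ASCII text (\uXXXX).
# Setting the feed export encoding to UTF-8 writes the characters as-is.
FEED_EXPORT_ENCODING = "utf-8"
```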

This is only the most basic Scrapy project; still learning... [smile]
