Python (Cohort 3) Web-Scraping Homework

[Python Scraper] - Counting Each Member's Homework Submissions

2017-08-20  Ubuay

Contents

I. Approach
II. The Functions
III. Results

I. Approach

Scrape the article data from the homework-submission collection and count how many assignments each member has completed.
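The collection is paginated via a `page` query parameter, with `order_by=added_at` keeping submissions in chronological order. A minimal sketch of generating all 16 page URLs used by the script below:

```python
# Base URL of the homework-submission collection; each page is
# addressed by the page query parameter.
base_url = 'http://www.jianshu.com/c/1b31f26b6af0?order_by=added_at&page=%s'

urls = [base_url % page for page in range(1, 17)]  # 16 pages in total
print(len(urls))
print(urls[0])
```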

Points to note:

- The response encoding must be set to `utf-8` explicitly, otherwise the Chinese titles come back garbled.
- The `href` attributes on the collection page are relative, so the `http://www.jianshu.com` prefix has to be added by hand.
- The collection spans 16 pages, so every page must be crawled before counting.
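Because the `href` attributes on the collection page are relative, the extraction step can be sketched offline with `lxml` on a small hand-written HTML snippet (the class names mirror those targeted by the scraper below):

```python
from lxml import etree

# A minimal stand-in for one entry of the collection page.
html = '''
<ul class="note-list">
  <li><div class="content">
    <a class="title" href="/p/abc123">Homework 1</a>
  </div></li>
</ul>
'''

selector = etree.HTML(html)
links = selector.xpath('//ul[@class="note-list"]//div[@class="content"]//a[@class="title"]')
for link in links:
    # The href is relative, so the domain must be prefixed by hand.
    full_url = 'http://www.jianshu.com' + link.xpath('@href')[0]
    title = link.xpath('text()')[0]
    print(full_url, title)
```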

II. The Functions

#!/usr/bin/env python3
# -*- coding: utf-8 -*-
import requests
from lxml import etree

b = []  # one entry per submitted article: the author's name


# Collection page: extract each article's URL and title.
def get_title(url):
    res = requests.get(url)
    res.encoding = "utf-8"  # avoid garbled Chinese titles
    selector = etree.HTML(res.text)
    infos = selector.xpath('//ul[@class="note-list"]//div[@class="content"]//a[@class="title"]')
    for info in infos:
        # hrefs are relative, so prefix the domain
        title_url = 'http://www.jianshu.com' + info.xpath('@href')[0]
        title_text = info.xpath('text()')[0]
        get_title_source(title_url, title_text)


# Article page: extract the author's name and record it.
def get_title_source(title_url, title_text):
    new_res = requests.get(title_url)
    new_res.encoding = "utf-8"
    new_selector = etree.HTML(new_res.text)
    new_fos = new_selector.xpath('//div[@class="note"]//div[@class="article"]//div[@class="info"]//a')
    for new_fo in new_fos:
        author_name = new_fo.xpath('text()')[0]
        b.append(author_name)


# Count how many times each author appears, i.e. how many assignments they submitted.
def t_imes(b):
    for item in set(b):
        print("%r submissions: %r" % (item, b.count(item)))


if __name__ == '__main__':
    base_url = 'http://www.jianshu.com/c/1b31f26b6af0?order_by=added_at&page=%s'
    for page in range(1, 17):
        # print('page %r' % page)
        get_title(base_url % page)
    t_imes(b)
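Calling `list.count` inside a loop over the set rescans the whole list for every author, which is O(n²). The standard library's `collections.Counter` does the same tally in a single pass; a sketch with hypothetical author names:

```python
from collections import Counter


def count_submissions(names):
    # One pass over the list; maps each author to their submission count.
    return Counter(names)


names = ['Alice', 'Bob', 'Alice', 'Alice', 'Bob']
for author, n in count_submissions(names).most_common():
    print("%r submissions: %r" % (author, n))
```

`most_common()` also sorts the result by count, which is handy when posting the tally.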

III. Results

(Screenshot: homework submission counts per member)
