Writing a crawler: scraping a Taobao item's monthly sales

2019-08-16  by 无罪的坏人

First of all, thanks to [小甲鱼]'s "极客Python之效率革命" (Geek Python: The Efficiency Revolution) course. It is clearly explained, easy to follow, and well suited to beginners.

If you are interested, you can visit https://fishc.com.cn/forum-319-1.html to support 小甲鱼. Thank you.
To learn more about the requests library, see: https://fishc.com.cn/forum.php?mod=viewthread&tid=95893&extra=page%3D1%26filter%3Dtypeid%26typeid%3D701

1. Find the target URL

https://s.taobao.com/search?q=XXXX(item name)XXXXXX

Let's first fetch the page source and take a look:

# -*- coding:UTF-8 -*-
import requests

def open_url(keyword):
    payload = {'q': keyword, "sort": "sale-desc"}  # sort by sales, descending
    url = "https://s.taobao.com/search"
    headers = {
        "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/76.0.3809.87 Safari/537.36",
    }
    res = requests.get(url, params=payload, headers=headers)
    return res

def main():
    keyword = input("Please enter a search keyword: ")
    res = open_url(keyword)

    with open('items.txt', 'w', encoding='utf-8') as file:
        file.write(res.text)

if __name__ == '__main__':
    main()
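For reference, requests builds the final URL by url-encoding the `params` dict onto the base address. The same encoding can be reproduced with the standard library alone (the keyword value below is just an illustration):

```python
# Sketch: reproduce the query string that requests assembles from `params`.
from urllib.parse import urlencode

payload = {'q': '零基础入门学习Python', 's': '44', 'sort': 'sale-desc'}
url = "https://s.taobao.com/search?" + urlencode(payload)
print(url)  # the Chinese keyword comes out percent-encoded
```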

Looking through the saved source, the content we want appears to be right here! Next we bring in a regular expression to cut that block out.


[Screenshot: the fetched page source (源码.png)]

2. Locate the data with a regular expression

# -*- coding:UTF-8 -*-
import re

def main():
    with open("items.txt", 'r', encoding="utf-8") as file1:
        # re.search(pattern, string, flags=0)
        g_page_config = re.search(r"g_page_config = (.*?);\n", file1.read())  # .*? matches any number of characters, but as few as possible while still letting the overall match succeed (non-greedy)
        with open("g_page_config.txt", 'w', encoding="utf-8") as file2:
            file2.write(g_page_config.group(1))

if __name__ == '__main__':
    main()
[Screenshot: content extracted by the regex (正则抠出来的内容.png)]
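A quick self-contained demo (with made-up strings) of why the lazy qualifier matters here:

```python
# Demo: the non-greedy .*? stops at the FIRST ";\n" after the assignment,
# while a greedy .* (with DOTALL) runs on to the LAST one.
import re

text = 'g_page_config = {"a": 1};\nother = {"b": 2};\n'
lazy = re.search(r"g_page_config = (.*?);\n", text).group(1)
greedy = re.search(r"g_page_config = (.*);\n", text, re.S).group(1)
print(repr(lazy))    # '{"a": 1}'
print(repr(greedy))  # '{"a": 1};\nother = {"b": 2}'
```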

The result is still a lot of content: dictionaries nested inside dictionaries, which nest yet more dictionaries. Headache. What now?
We use the old trick: rename the file with a .json extension and open it in Firefox, whose built-in JSON viewer makes the structure browsable.
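If you would rather stay in Python, the extracted string can also be pretty-printed with the stdlib json module (the one-liner below is a stand-in for the real g_page_config contents):

```python
# Pretty-print the extracted JSON so the nesting is readable in any editor.
import json

g_page_config = '{"mods": {"itemlist": {"data": {"auctions": []}}}}'  # stand-in sample
obj = json.loads(g_page_config)
print(json.dumps(obj, indent=2, ensure_ascii=False))
```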


[Screenshot: locating the data in Firefox's JSON viewer (定位.png)]

3. Extract the data we want (sort by sales and total the sales across the first 3 pages)

# -*- coding:UTF-8 -*-
import re
import json
import requests

def open_url(keyword, page=1):
    # s=0 starts the listing at the first item; each page holds 44 items, so s=44 is page 2
    payload = {'q': keyword, 's': str((page - 1) * 44), "sort": "sale-desc"}
    url = "https://s.taobao.com/search"
    headers = {
        "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/76.0.3809.87 Safari/537.36",
    }
    res = requests.get(url, params=payload, headers=headers)
    return res


# Collect all items on one listing page
def get_items(res):
    g_page_config = re.search(r"g_page_config = (.*?);\n", res.text)
    page_config_json = json.loads(g_page_config.group(1))  # decode the JSON string into a Python object
    page_items = page_config_json['mods']['itemlist']['data']['auctions']

    results = []  # keep only the fields we care about
    for each_item in page_items:
        dict1 = dict.fromkeys(('nid', 'title', 'detail_url', 'view_price', 'view_sales', 'nick'))
        dict1['nid'] = each_item['nid']
        dict1['title'] = each_item['title']
        dict1['detail_url'] = each_item['detail_url']
        dict1['view_price'] = each_item['view_price']
        dict1['view_sales'] = each_item['view_sales']
        dict1['nick'] = each_item['nick']
        results.append(dict1)

    return results


# Total the sales of the items on this page whose title mentions 小甲鱼
def count_sales(items):
    count = 0
    for each in items:
        if '小甲鱼' in each['title']:
            count += int(re.search(r'\d+', each['view_sales']).group())
    return count


def main():
    keyword = input("Please enter a search keyword: ")
    page = 3  # first three pages
    total = 0
    for each in range(page):
        res = open_url(keyword, each+1)
        items = get_items(res)
        total += count_sales(items)
    print("Total sales:", total)


if __name__ == '__main__':
    main()
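Note that view_sales arrives as a display string, so count_sales pulls the leading digits out with \d+. A quick demo with made-up samples (the strings below are assumptions about Taobao's display format, not scraped values):

```python
# Demo: extract the leading number from a sales display string like "1000+人付款"
# ("1000+ people paid") so the values can be summed.
import re

samples = ["1000+人付款", "356人付款", "2.5万人付款"]
counts = [int(re.search(r'\d+', s).group()) for s in samples]
print(counts)  # [1000, 356, 2]
```

One limitation of the simple \d+ pattern: a listing shown as "2.5万" (25,000) is read as just 2, so very popular items would be badly undercounted.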
[Screenshot: program output (输出.png)]