Python Learning: A Little Something Every Day, Part 5
2016-06-19 · 盐巴有点咸
Today's scraper crawls product listings from a classifieds site. The hard part is the page-view count: you have to forge the Referer header, and the count cannot be scraped straight from the listing page, or it will read 0. The number is filled in by JavaScript; in Chrome's Network panel you can find the request that supplies it:
http://jst1.58.com/counter?infoid={}
The view count is fetched via the infoid parameter, which is part of the listing URL, so it has to be extracted from the URL first.
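A minimal sketch of that extraction with a regular expression (the sample URL below is made up for illustration; the post's own code uses `strip('x.shtml')`, which removes characters from a set rather than a suffix, so a regex is a little safer):

```python
import re

def extract_infoid(url):
    # The infoid is the run of digits right before 'x.shtml' in the listing URL.
    m = re.search(r'/(\d+)x\.shtml', url)
    return m.group(1) if m else None

# Hypothetical listing URL, for illustration only
url = 'http://cd.58.com/taishiji/26089860377571x.shtml'
api = 'http://jst1.58.com/counter?infoid={}'.format(extract_infoid(url))
print(api)
```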
The code is as follows:
```python
from bs4 import BeautifulSoup
import requests
import time

headers = {
    'User-Agent': 'xxxxx',
    'Referer': 'xxxxx',
    'Cookie': 'xxxxx'
}

# Build the list of page URLs to crawl
def get_pages_num(who_sells, page_num):
    base_urls = ['http://cd.58.com/taishiji/{}/pn{}'.format(who_sells, n)
                 for n in range(1, page_num + 1)]
    return base_urls

# Collect the listing links from every page
def get_links_from(who_sells, page_num):
    base_urls = get_pages_num(who_sells, page_num)
    links = []
    for url in base_urls:
        time.sleep(1)  # be polite between requests
        r = requests.get(url, headers=headers).text
        soup = BeautifulSoup(r, 'lxml')
        for link in soup.select('td.t > a'):
            href = link.get('href').split('?')[0]
            if len(href) == 46:  # real listing URLs have a fixed length; this filters out ads
                links.append(href)
    return links

# Fetch the view count through the counter API
def get_views(url):
    # the infoid is the digits before 'x.shtml' in the listing URL
    id_num = url.split('/')[-1].strip('x.shtml')
    api = 'http://jst1.58.com/counter?infoid={}'.format(id_num)
    js = requests.get(api, headers=headers)
    views = js.text.split('=')[-1]
    return views

# Scrape the details of each listing
def get_item_info(who_sells=0, page_num=1):
    urls = get_links_from(who_sells, page_num)
    for url in urls:
        time.sleep(2)
        r = requests.get(url, headers=headers)
        soup = BeautifulSoup(r.text, 'lxml')
        title = soup.title.text
        price = soup.findAll('span', 'price c_f50')[0].text
        area = list(soup.select('.c_25d')[-1].stripped_strings)
        date = soup.select('li.time')[0].text
        data = {
            'title': title,
            'price': price,
            'date': date,
            'area': ''.join(area) if len(soup.select('.c_25d')) == 2 else None,
            'cate': '个人' if who_sells == 0 else '商家',  # seller type from the parameter
            'views': get_views(url)
        }
        print(data)

get_item_info(page_num=3)
```
The function takes two parameters: one selects the seller type (0 for individual, 1 for merchant), the other sets how many pages to crawl.
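The counter endpoint returns the count as whatever follows the last `=` in the response body, which is why `get_views` uses `split('=')[-1]`. A small offline check of that parsing, with a made-up response string (the real response format may differ):

```python
def parse_views(body):
    # Same logic as get_views: take everything after the last '=' as the count.
    return body.split('=')[-1].strip()

# Hypothetical response body, for illustration only
sample = 'Counter58.userCount=1024'
print(parse_views(sample))  # → 1024
```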