Information Organization and Information Retrieval

Scraping the Popular Short Reviews of the Movie 《我不是药神》 (Dying to Survive)

2018-08-02 · cathy1997

1. Target Data:

Data source:

我不是药神 short reviews - https://movie.douban.com/subject/26752088/comments?status=P

Goal description:

Build a crawler project that scrapes the popular short reviews of 《我不是药神》 from Douban. The fields collected are: username, comment date, number of "useful" votes, and the comment text (the username/date/useful/comment columns written out at the end of the full code).

2. Starting the Crawler

Collection strategy:

The data is collected by fetching the HTML pages with the requests.get() function from the Requests library, then parsing and traversing the downloaded HTML "tag tree" with the BeautifulSoup library.
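As a minimal sketch of this fetch-and-parse pattern (my own illustration, reusing the target URL and the User-Agent header from the full code below):

import requests
from bs4 import BeautifulSoup

# fetch one page of comments and build the tag tree
url = 'https://movie.douban.com/subject/26752088/comments?status=P'
headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/62.0.3202.89 Safari/537.36'}

html = requests.get(url, headers=headers).text
soup = BeautifulSoup(html, 'html.parser')
print(soup.title.get_text())  # sanity check: the page title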

Implementing the simulated login:

Douban has anti-crawling measures: without logging in, only 60 comments can be fetched at a time, so the most important part of this crawler is implementing a simulated login:

def Login(headers, loginUrl, formData):
    # s is a requests.Session() created at module level (see the full code below)
    r = s.post(loginUrl, data=formData, headers=headers)  # submit the login form
    print(r.url)
    print(formData["redir"])
    if r.url == formData["redir"]:
        # if the page returned after login is the page we want to crawl, login succeeded
        print("Login succeeded")
    else:
        print("First login attempt failed")
        page = r.text
        soup = BeautifulSoup(page, "html.parser")
        captchaAddr = soup.find('img', id='captcha_image')['src']  # URL of the login captcha image
        print(captchaAddr)

        reCaptchaID = r'<input type="hidden" name="captcha-id" value="(.*?)"/'
        captchaID = re.findall(reCaptchaID, page)[0]  # take the first match

        captcha = input('Enter the captcha: ')

        formData['captcha-solution'] = captcha
        formData['captcha-id'] = captchaID

        # resubmit the login form, now with the manually entered captcha
        r = s.post(loginUrl, data=formData, headers=headers)
        print(r.status_code)
    return r.cookies  # return the session cookies either way so the caller can reuse them

Whether login succeeded is checked by comparing the page returned after login against the page we want to crawl: if they match, login succeeded; otherwise the captcha must be entered manually and the login form resubmitted.
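In use, the cookies returned by Login() are sent with every later request through the shared session. A minimal sketch, assuming the headers, loginUrl, and formData defined in the full code below:

cookies = Login(headers, loginUrl, formData)
# stay logged in by sending the cookies with each subsequent request
html = s.get('https://movie.douban.com/subject/26752088/comments', cookies=cookies, headers=headers).text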

Dynamic IPs:

To keep the machine's IP from being banned for requesting too quickly, the crawler rotates through proxy IPs. This part follows the CSDN blog post 爬虫笔记-使用python爬取豆瓣短评.

# fetch a list of proxy IPs so our own IP does not get banned
def get_ip_list(url, headers):
    web_data = requests.get(url, headers=headers)
    soup = BeautifulSoup(web_data.text, 'lxml')
    ips = soup.find_all('tr')
    ip_list = []
    for i in range(1, len(ips)):  # skip the table header row
        tds = ips[i].find_all('td')
        ip_list.append(tds[1].text + ':' + tds[2].text)  # "host:port"
    return ip_list

# pick one proxy at random from the list
def get_random_ip(ip_list):
    proxy_list = ['http://' + ip for ip in ip_list]
    proxy_ip = random.choice(proxy_list)
    proxies = {'http': proxy_ip}
    return proxies
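For reference, the proxies dict returned by get_random_ip() plugs straight into requests.get(). A minimal sketch, assuming the xicidaili proxy list used below is still reachable:

ip_list = get_ip_list('http://www.xicidaili.com/nn/', headers=headers)
proxies = get_random_ip(ip_list)  # e.g. {'http': 'http://1.2.3.4:8080'}
r = requests.get('https://movie.douban.com', headers=headers, proxies=proxies)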
Full code:
import requests
from bs4 import BeautifulSoup
import re
import random
import time
import pandas as pd

s = requests.session()  # use a session to persist the login state

# fetch a list of proxy IPs so our own IP does not get banned
def get_ip_list(url, headers):
    web_data = requests.get(url, headers=headers)
    soup = BeautifulSoup(web_data.text, 'lxml')
    ips = soup.find_all('tr')
    ip_list = []
    for i in range(1, len(ips)):  # skip the table header row
        tds = ips[i].find_all('td')
        ip_list.append(tds[1].text + ':' + tds[2].text)  # "host:port"
    return ip_list

# pick one proxy at random from the list
def get_random_ip(ip_list):
    proxy_list = ['http://' + ip for ip in ip_list]
    proxy_ip = random.choice(proxy_list)
    proxies = {'http': proxy_ip}
    return proxies
 
# implement the simulated login
def Login(headers, loginUrl, formData):
    r = s.post(loginUrl, data=formData, headers=headers)  # submit the login form
    print(r.url)
    print(formData["redir"])
    if r.url == formData["redir"]:
        # if the page returned after login is the page we want to crawl, login succeeded
        print("Login succeeded")
    else:
        print("First login attempt failed")
        page = r.text
        soup = BeautifulSoup(page, "html.parser")
        captchaAddr = soup.find('img', id='captcha_image')['src']  # URL of the login captcha image
        print(captchaAddr)

        reCaptchaID = r'<input type="hidden" name="captcha-id" value="(.*?)"/'
        captchaID = re.findall(reCaptchaID, page)[0]  # take the first match

        captcha = input('Enter the captcha: ')

        formData['captcha-solution'] = captcha
        formData['captcha-id'] = captchaID

        # resubmit the login form, now with the manually entered captcha
        r = s.post(loginUrl, data=formData, headers=headers)
        print(r.status_code)
    return r.cookies  # return the session cookies either way so the caller can reuse them
# extract the comment fields and the link to the next page
def get_data(html):
    soup = BeautifulSoup(html, "lxml")
    username_list = [tag.get_text() for tag in soup.select('.comment-info > a')]
    time_list = [tag.get_text() for tag in soup.select('.comment-info > span.comment-time')]
    useful_list = [tag.get_text() for tag in soup.select('.comment-vote > span')]
    comment_list = [tag.get_text() for tag in soup.select('.comment > p')]
    next_link = soup.select('.next')  # empty on the last page
    next_page = next_link[0].get('href') if next_link else None
    return username_list, time_list, useful_list, comment_list, next_page
 
if __name__ == "__main__":
    absolute = 'https://movie.douban.com/subject/26752088/comments'
    headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/62.0.3202.89 Safari/537.36'}
    loginUrl = 'https://www.douban.com/accounts/login?source=movie'
    formData = {
        "redir": "https://movie.douban.com/subject/26752088/comments",
        "form_email": "your account",
        "form_password": "your password",
        "login": u'登录'  # literal value of the Douban login button
    }
    # fetch the proxy list and log in
    url = 'http://www.xicidaili.com/nn/'
    cookies = Login(headers, loginUrl, formData)
    ip_list = get_ip_list(url, headers=headers)
    proxies = get_random_ip(ip_list)
 
    current_page = absolute
    username_list = []
    time_list = []
    useful_list = []
    comment_list = []
    num = 0
    while True:
        html = s.get(current_page, cookies=cookies, headers=headers, proxies=proxies).content
        temp0_list, temp1_list, temp2_list, temp3_list, next_page = get_data(html)
        username_list = username_list + temp0_list
        time_list = time_list + temp1_list
        useful_list = useful_list + temp2_list
        comment_list = comment_list + temp3_list
        if next_page is None:  # no "next" link: the last page has been collected
            break
        current_page = absolute + next_page
        # time.sleep(1 + float(random.randint(1, 100)) / 20)
        num = num + 1
        # switch to a new proxy every 20 pages
        if num % 20 == 0:
            proxies = get_random_ip(ip_list)
        print(current_page)

    # write the results to a CSV file
    infos = {'username': username_list, 'date': time_list, 'useful': useful_list, 'comment': comment_list}
    data = pd.DataFrame(infos, columns=['username', 'date', 'useful', 'comment'])
    data.to_csv("D:/豆瓣《我不是药神》.csv")

3. Inspecting the Data:

A total of 480 popular short reviews were collected.
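To spot-check the result, the CSV can be read back with pandas. A minimal sketch, using the output path from the full code:

import pandas as pd

data = pd.read_csv("D:/豆瓣《我不是药神》.csv", index_col=0)
print(data.shape)  # expect (480, 4): username, date, useful, comment
print(data.head())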
