
Scraping a Novel with Python (Step 3)

2018-01-13  by 肥宅_Sean

This assumes you already have the bs4 and requests libraries installed.
The novel here was picked more or less at random, so don't read too much into the choice (it is for learning purposes only).
Everything is written for Python 3; there are plenty of Python 2 scraper tutorials online, but Python 3 ones are still relatively scarce.
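
If the libraries aren't installed yet, a quick pip install should take care of both (note that bs4 is published on PyPI as beautifulsoup4):

pip install requests beautifulsoup4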

Step 3: stitch the chapters together and write them to a file

import requests
import time
import random
from bs4 import BeautifulSoup

begin_url = "https://www.qu.la/book/12763/10664294.html"
base = begin_url[:begin_url.rindex('/') + 1]
urls = [begin_url]  # initialize the URL pool; next-chapter links are appended as we go
first = True
for url in urls:
    req = requests.get(url)
    req.encoding = 'utf-8'
    soup = BeautifulSoup(req.text, 'html.parser')
    try:
        content = soup.find(id='content')
        title = soup.find(attrs={"class": "bookname"})
        title = title.find('h1').text
    except AttributeError:  # page doesn't match the chapter layout: stop
        break
    # strip layout characters and normalize the quotation marks
    string = (content.text
              .replace('\u3000', '').replace('\t', '')
              .replace('\n', '').replace('\r', '')
              .replace('『', '“').replace('』', '”')
              .replace('\ufffd', ''))
    string = string.split('\xa0')  # split on non-breaking spaces (an encoding artifact)
    string = list(filter(lambda x: x, string))  # drop empty fragments
    for i in range(len(string)):
        string[i] = '    ' + string[i]  # indent each paragraph
        if "本站重要通知" in string[i]:  # cut off the site notice at the end of the chapter
            t = string[i].index('本站重要通知')
            string[i] = string[i][:t]
    string = '\n'.join(string)
    string = '\n' + title + '\n' + string
    # overwrite on the first chapter, append on every later one;
    # encoding='utf-8' keeps Windows from choking on the Chinese text
    mode = 'w' if first else 'a'
    first = False
    with open('E:/Code/Python/Project/txtGet/1.txt', mode, encoding='utf-8') as f:
        f.write(string)
    print(title + ' written')
    next_ = soup.find(attrs={"class": "next"})
    next_url = base + next_['href']
    urls.append(next_url)
    time.sleep(random.randint(1, 5))  # random delay: not so fast we risk a ban, not so regular we look like a bot
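
One detail worth calling out: the for loop iterates over urls while the loop body appends to it, and Python's list iteration picks up elements added mid-loop, which is what turns urls into a work queue of chapters. A tiny standalone demo of that behavior:

pool = [1]
for x in pool:
    if x < 5:
        pool.append(x + 1)  # appended items are visited by the same loop
print(pool)  # [1, 2, 3, 4, 5]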
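The base + next_['href'] concatenation works here because this site's chapter links are plain relative filenames, but it breaks on absolute URLs or paths like './'. A more robust variant (a sketch using the standard library's urllib.parse.urljoin, not something the original code does) would be:

from urllib.parse import urljoin

next_ = soup.find(attrs={"class": "next"})
if next_ is None or not next_.get('href'):  # no next link: we've reached the end
    break
next_url = urljoin(url, next_['href'])  # resolves relative and absolute hrefs correctly
urls.append(next_url)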

Follow-up articles:
Scraping a Novel with Python (Step 4)
Scraping a Novel with Python (Step 5)
Scraping a Novel with Python (Step 6)
