Nonsense

2018-05-06  WesterosDoge

Author: Arya Stark

As a loyal reader of this public account, you must be wondering what this account is actually about. Is it just for talking nonsense?
Of course not.
Talking nonsense is out of the question.
I could never talk nonsense in my life!



Besides,
nonsense is hardly a theme you can build a public account around,
so
today let's get to some practical, hands-on material.

If you often run into tedious information-gathering chores, consider simplifying things and letting a scraper do the manual work for you. This post is one such example: scraping enterprise detail information from a credit information publicity system.

As shown in the screenshot below:


[Screenshot: enterprise information detail page]

First, work out the XPath expressions for lxml.

[Screenshot: XPath Finder highlighting the matched elements]

The XPath Finder browser extension displays the matched results visually.
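For reference, these are the two expressions the rest of this post relies on, one for the field names and one for the field values (the variable names here are just for illustration; the expressions themselves are the ones used in the session and script below):

label_xpath = ".//tr/td/span[@class='label']"      # field names, e.g. 统一社会信用代码/注册号:
content_xpath = ".//tr/td/span[@class='content']"  # field values, e.g. 914406063454106971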

Then verify the expressions in IPython:

In [1]: import requests

In [2]: from lxml import html

In [3]: resp=requests.get('http://www.sdsszt.com/GSpublicity/GSpublicityList.html?service=entInfo_QuIz54WYBCp98MAnDE+TOjSI6nj4d
   ...: DhPid4wNzIOjLyqVswLC8L8we/iqFGcaayM-q1d+FAeb99tNXz0PkuiXwA==&localSetting=sd&from=singlemessage')

In [4]: text=resp.content.decode('utf-8')

In [7]: root=html.fromstring(text)

In [21]: root.findall('.//tr/td/span[@class=\'label\']')[0].xpath('text()')
Out[21]: ['统一社会信用代码/注册号:']

In [22]: root.findall('.//table//tr/td/span[@class=\'label\']')[0].xpath('text()')
Out[22]: ['统一社会信用代码/注册号:']

In [23]: root.findall('.//table//tr/td/span[@class=\'content\']')[0].xpath('text()')
Out[23]: ['914406063454106971']
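Each label lines up with the content span at the same position, so pairing them is just a matter of walking the two lists together. A minimal sketch, assuming the root element from the session above (the full script below does the same thing with an index loop):

labels = root.findall(".//tr/td/span[@class='label']")
contents = root.findall(".//tr/td/span[@class='content']")
for label, content in zip(labels, contents):
    # text_content() grabs the span text; strip the stray '\r\n' padding around it
    print(label.text_content().strip(), content.text_content().strip())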

Now put it all into a script, in one go:

# encoding: utf-8
__author__ = 'fengshenjie'
import requests
from lxml import html
import json
import csv, random

conf = {
    'start_url': [
        'http://www.sdsszt.com/GSpublicity/GSpublicityList.html?service=entInfo_QuIz54WYBCp98MAnDE+TOjSI6nj4dDhPid4wNzIOjLyqVswLC8L8we/iqFGcaayM-q1d+FAeb99tNXz0PkuiXwA==&localSetting=sd&from=singlemessage'
    ],
    'raw_headers': ['''Host: www.sdsszt.com
Connection: keep-alive
Upgrade-Insecure-Requests: 1
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/66.0.3359.117 Safari/537.36
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8
Accept-Encoding: gzip, deflate
Accept-Language: zh-CN,zh;q=0.9,en;q=0.8,zh-TW;q=0.7,da;q=0.6
''']
}


def getHeader():
    # Parse one of the raw browser header blocks into a dict usable by requests.
    headerrow = random.choice(conf['raw_headers'])
    res = {}
    lines = headerrow.split('\n')
    for line in lines:
        if not line.strip():
            continue
        try:
            # Split on the first colon only, so values that contain ':' stay intact.
            k, v = line.split(':', 1)
            res[k.strip()] = v.strip()
        except Exception as e:
            print(e, line)
    return res


def downloader(url):
    # Send the browser-like headers so the request looks like a normal page visit.
    resp = requests.get(url, headers=getHeader())
    return resp.content.decode('utf-8')


def parser(text):
    assert isinstance(text, str)
    root = html.fromstring(text)
    res = []
    # Field names and field values sit in parallel span lists inside the detail table.
    labels = root.findall('.//tr/td/span[@class=\'label\']')
    contents = root.findall('.//tr/td/span[@class=\'content\']')
    assert len(labels) == len(contents)
    for i in range(len(labels)):
        label = labels[i].xpath('text()')
        content = contents[i].xpath('text()')
        res.append({
            'label': label[0].replace('\r\n', '').strip(),
            # Some content spans have empty text(), so guard before indexing.
            'content': content[0].strip() if content else ''
        })
    # print(json.dumps(res, ensure_ascii=False))
    outputer(res)


def outputer(res, fname='./shunde.csv'):
    assert isinstance(res, list)
    for d in res:
        print(d['label'], d['content'])
    lines = [(d['label'], d['content']) for d in res]
    # newline='' keeps csv.writer from inserting blank lines on Windows;
    # utf-8-sig lets Excel open the Chinese text correctly.
    with open(fname, 'w', encoding='utf-8-sig', newline='') as f:
        w = csv.writer(f)
        w.writerows(lines)


def main():
    for url in conf['start_url']:
        print('->', url)
        parser(downloader(url))


if __name__ == '__main__':
    main()
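To sanity-check the result, the CSV can be read straight back. This is just a quick verification snippet; the file name matches the default ./shunde.csv used in outputer:

import csv

with open('./shunde.csv', encoding='utf-8-sig', newline='') as f:
    for label, content in csv.reader(f):
        print(label, content)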

This is the final output file:


[Screenshot: the shunde.csv output]

[Image: this issue's guest]