
Python Web Crawling and Information Extraction (1): The Rules of Web Crawling

2017-03-14  娄叔啊喂

This series of notes is based on Prof. 嵩天's Python course series (Beijing Institute of Technology) on 中国大学MOOC.


1. Getting Started with the Requests Library

import requests

# General framework for fetching a web page safely
def getHTMLText(url):
    try:
        r = requests.get(url, timeout=30)
        r.raise_for_status()              # raise an exception for non-200 status codes
        r.encoding = r.apparent_encoding  # guess the encoding from the page content
        return r.text
    except:
        return "An exception occurred"

2. The Ethics of Web Crawling
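This part of the course concerns the Robots Exclusion Protocol (robots.txt), which tells crawlers which paths a site allows. As a rough sketch (not from the original notes), a site's robots.txt can be fetched with Requests like any other page; the JD.com URL below is only an example.

import requests

# Fetch and print a site's robots.txt to see which paths crawlers may visit
r = requests.get("https://www.jd.com/robots.txt", timeout=30)
r.raise_for_status()
print(r.text)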

3. Crawling Examples with the Requests Library

# Example 1: fetch a JD.com product page
import requests
url = "http://item.jd.com/2967929.html"
try:
    r = requests.get(url)
    r.raise_for_status()
    r.encoding = r.apparent_encoding
    print(r.text[:1000])
except:
    print("Crawl failed")

# Example 2: fetch an Amazon product page, sending a browser-like User-Agent header
import requests
url = "http://www.amazon.cn/gp/product/B01M8L5Z3Y"
try:
    kv = {'user-agent': 'Mozilla/5.0'}
    r = requests.get(url, headers=kv)
    r.raise_for_status()
    r.encoding = r.apparent_encoding
    print(r.text[1000:2000])
except:
    print("Crawl failed")

# Example 3: submit a Baidu search query via the 'wd' parameter
import requests
keyword = "Python"
try:
    kv = {'wd': keyword}
    r = requests.get("http://www.baidu.com/s", params=kv)
    print(r.request.url)    # the full URL that was actually requested
    r.raise_for_status()
    print(len(r.text))
except:
    print("Crawl failed")

# Example 4: download an image and save it to a local file
import requests
import os
url = "http://image.nationalgeographic.com.cn/2017/0311/20170311024522382.jpg"
root = "D://pics//"
path = root + url.split('/')[-1]
try:
    if not os.path.exists(root):
        os.mkdir(root)
    if not os.path.exists(path):
        r = requests.get(url)
        with open(path, 'wb') as f:
            f.write(r.content)    # r.content is the binary body of the response
        print("File saved successfully")
    else:
        print("File already exists")
except:
    print("Crawl failed")

# Example 5: query the ip138 IP-location service by appending an IP address to the URL
import requests
url = "http://m.ip138.com/ip.asp?ip="
try:
    r = requests.get(url + '202.204.80.112')
    r.raise_for_status()
    r.encoding = r.apparent_encoding
    print(r.text[-500:])
except:
    print("Crawl failed")