【3】Data Filtering 2 - requests

2018-11-24  夏夏夏夏颜曦

Contents

    1. Overview
    2. Installation
    3. Getting started
    4. Request object: request methods
    5. Request object: GET parameters
    6. Request object: POST parameters
    7. Request object: custom request headers
    8. Request object: cookies
    9. Response object

1. Overview

requests is a third-party Python HTTP library that wraps the standard-library urllib with a much simpler API for sending requests and reading responses, which is why it is the usual choice for crawler code.

2. Installation
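requests is not part of the standard library, so it has to be installed first; the usual way is via pip:

```shell
pip install requests
```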

3. Getting started

# import the dependency
import requests
# send the request and receive the server's response
response = requests.get("http://www.sina.com.cn")
# print the response body
print(response.text)

4. Request object: request methods
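requests exposes one module-level helper per HTTP verb, and they all share the same calling convention (url first, then optional params, data, headers, cookies, and so on). A minimal sketch that only inspects the helpers, without sending anything:

```python
import requests

# requests provides a helper function for each HTTP verb;
# they all accept the same keyword arguments (params, data, headers, ...)
for verb in ("get", "post", "put", "delete", "head", "options", "patch"):
    print(verb, callable(getattr(requests, verb)))
```

So `requests.post(url, data=...)` is used exactly like `requests.get(url, params=...)` from the sections below.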

5. Request object: GET parameters

# import the dependency
import requests
# query-string parameters are passed as a dict via params=
target_url = 'http://www.baidu.com/s'
data = {'wd': '魔道祖师'}
response = requests.get(target_url, params=data)
print(response.text)
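The params dict is percent-encoded into the final query string for you. One way to see the resulting URL without actually sending the request is to prepare it first (an illustrative sketch, not part of the original example):

```python
import requests

# build and prepare (but do not send) the GET request
req = requests.Request('GET', 'http://www.baidu.com/s', params={'wd': '魔道祖师'})
prepared = req.prepare()
# the keyword is percent-encoded into the query string
print(prepared.url)
```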

6. Request object: POST parameters

# import the dependency
import requests
# define the target url
# url = 'http://fanyi.youdao.com/translate_o?smartresult=dict&smartresult=rule'
url = 'http://fanyi.youdao.com/translate?smartresult=dict&smartresult=rule'
# form parameters sent in the POST body
# (salt and sign were copied from a captured browser request and may be stale)
data = {
    "i": "hello",
    "from": "AUTO",
    "to": "AUTO",
    "smartresult": "dict",
    "client": "fanyideskweb",
    "salt": "1541660576025",
    "sign": "4425d0e75778b94cf440841d47cc64fb",
    "doctype": "json",
    "version": "2.1",
    "keyfrom": "fanyi.web",
    "action": "FY_BY_REALTIME",
    "typoResult": "false",
}
# send the request and read the server's response
response = requests.post(url, data=data)
print(response.text)
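Passing `data=` sends a form-encoded body, while `json=` sends a JSON body; the difference can be seen offline by preparing two requests (http://example.com is just a placeholder here, nothing is sent):

```python
import requests

# data= produces an application/x-www-form-urlencoded body
form = requests.Request('POST', 'http://example.com', data={'i': 'hello'}).prepare()
print(form.body)                     # i=hello
print(form.headers['Content-Type'])  # application/x-www-form-urlencoded

# json= serializes the dict to JSON and sets the Content-Type accordingly
jsn = requests.Request('POST', 'http://example.com', json={'i': 'hello'}).prepare()
print(jsn.body)
print(jsn.headers['Content-Type'])   # application/json
```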

7. Request object: custom request headers

# import the dependencies
import requests
from fake_useragent import UserAgent
ua = UserAgent()
# define the request url, headers, and query parameters
url = 'http://www.baidu.com/s'
headers = {'User-agent': ua.random}  # a randomly chosen User-Agent string
param = {'wd': 'PYTHON 爬虫'}
# send the request with the custom headers and the query parameters
response = requests.get(url, headers=headers, params=param)
print(response.text)

8. Request object: cookies

# import the dependencies
import requests
from requests.cookies import RequestsCookieJar
url = 'http://httpbin.org/cookies'
# option 1: pass the cookies as a plain dict
# cookie_data = {'name': 'jerry'}
# option 2: build a RequestsCookieJar object
cookie_data = RequestsCookieJar()
cookie_data.set('name', 'tom')
response = requests.get(url, cookies=cookie_data)
response.encoding = 'utf-8'
print(response.text)

9. Response object
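Every call above returns a Response object whose main attributes are status_code, encoding, headers, text (the decoded body), content (the raw bytes), and json(). An offline sketch that fills in a Response by hand just to show the attributes (in real use these fields are populated by the server's reply):

```python
import requests

# build a Response manually so the attributes can be shown offline;
# a real Response is what requests.get()/requests.post() return
response = requests.models.Response()
response.status_code = 200
response.encoding = 'utf-8'
response._content = b'{"name": "tom"}'  # raw bytes, normally the server's body

print(response.ok)           # True for status codes below 400
print(response.status_code)  # 200
print(response.text)         # body decoded with response.encoding
print(response.json())       # body parsed as JSON
```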
