python_spider

Scraping Lianjia rentals with Python: getting the URL of every listing on a page (continuously updated)

2017-06-13  by 宁静消失何如
__author__ = 'Lee'
import requests
from bs4 import BeautifulSoup

url_text = 'https://bj.lianjia.com/zufang/xicheng/'

area_list = []
headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.36',
}
# Optional proxy; pass proxies=proxies to requests.get() to route through it
proxies = {"http": "http://119.57.105.241:8080"}
wb_data = requests.get(url_text, headers=headers)
soup = BeautifulSoup(wb_data.text, 'lxml')
# Each <li> under #house-lst is one listing on the page
bread_crumbs = soup.select('#house-lst > li')
# The <a> inside each listing's <h2> holds the detail-page link
item_url = soup.select('#house-lst > li > div > h2 > a')
for url in item_url:
    url1 = url.get('href')
    print(url1)
'''
Note on the code above: .get() cannot be called on item_url directly,
because the two variables have different types. item_url is a list
(a ResultSet), while each url is a bs4.element.Tag; only a Tag
supports the .get() method.
'''
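The type difference described above can be seen in a minimal, self-contained sketch. The HTML string below is a hypothetical stand-in for Lianjia's listing markup; it shows that `select()` returns a list-like `ResultSet`, and only the individual `Tag` elements inside it support `.get()`:

```python
from bs4 import BeautifulSoup

# Hypothetical HTML mimicking Lianjia's listing structure
html = '''
<ul id="house-lst">
  <li><div><h2><a href="https://bj.lianjia.com/zufang/1.html">Room A</a></h2></div></li>
  <li><div><h2><a href="https://bj.lianjia.com/zufang/2.html">Room B</a></h2></div></li>
</ul>
'''
soup = BeautifulSoup(html, 'html.parser')

item_url = soup.select('#house-lst > li > div > h2 > a')
print(type(item_url).__name__)     # ResultSet -- list-like, no .get()
print(type(item_url[0]).__name__)  # Tag -- supports .get()
print([a.get('href') for a in item_url])
```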

'''
Full selector as copied from Chrome DevTools ("Copy selector"); the
:nth-child(1) pins it to the first listing only:
#house-lst > li:nth-child(1) > div.info-panel > h2 > a
'''
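DevTools' copied selector only matches one item because of `:nth-child(1)`; stripping that pseudo-class makes it match every listing. A small sketch (again using hypothetical stand-in HTML for Lianjia's markup):

```python
import re
from bs4 import BeautifulSoup

# Hypothetical HTML mimicking Lianjia's listing structure
html = '''
<ul id="house-lst">
  <li><div class="info-panel"><h2><a href="/zufang/1.html">A</a></h2></div></li>
  <li><div class="info-panel"><h2><a href="/zufang/2.html">B</a></h2></div></li>
</ul>
'''
soup = BeautifulSoup(html, 'html.parser')

devtools = '#house-lst > li:nth-child(1) > div.info-panel > h2 > a'
# Remove the :nth-child(...) pseudo-class to generalize the selector
general = re.sub(r':nth-child\(\d+\)', '', devtools)

print(len(soup.select(devtools)))  # matches only the first listing
print(len(soup.select(general)))   # matches every listing on the page
```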

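Since the title promises every page, a note on pagination: Lianjia appends a `pg{n}/` path segment for pages after the first (an assumption based on the site's URL scheme at the time of writing). A sketch that only builds the page URLs, without fetching them:

```python
# Assumption: page 1 is the bare listing URL; later pages append pg2/, pg3/, ...
base = 'https://bj.lianjia.com/zufang/xicheng/'

def page_urls(base, pages):
    # Build the URL for each result page up to `pages`
    urls = [base]
    urls += ['{}pg{}/'.format(base, n) for n in range(2, pages + 1)]
    return urls

for u in page_urls(base, 3):
    print(u)
```

Each URL from `page_urls()` can then be fed to the same `requests.get()` / `select()` pipeline shown earlier.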