Setting HttpClient Request Headers in a Java Web Crawler

2017-08-31  蔡俊宇

While writing a crawler, I found that if no request headers were set, every request HttpClient issued got back the mobile layout of the page, so the content I wanted could not be scraped. After some searching I learned that you need to set the request headers yourself, so the server mistakes the request for one coming from a real browser:

HttpClient httpClient = new DefaultHttpClient();

// Create an HttpGet request
HttpGet httpGet = new HttpGet("xxxxx");

// Set the request headers on the HttpGet
httpGet.setHeader("Accept", "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8");
httpGet.setHeader("Accept-Charset", "GB2312,utf-8;q=0.7,*;q=0.7");
httpGet.setHeader("Accept-Encoding", "gzip, deflate");
httpGet.setHeader("Accept-Language", "zh-cn,zh;q=0.5");
httpGet.setHeader("Connection", "keep-alive");
httpGet.setHeader("Cookie", "");
httpGet.setHeader("Host", "");
httpGet.setHeader("Referer", "");
httpGet.setHeader("User-Agent", "Mozilla/5.0 (Windows NT 6.1; rv:6.0.2) Gecko/20100101 Firefox/6.0.2");

A most basic crawler takes only a few lines of code. The crawler code is attached below; it mainly uses HttpClient, WebMagic, and Jsoup.

import org.apache.http.HttpResponse;
import org.apache.http.client.HttpClient;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.util.EntityUtils;
import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;
import org.jsoup.select.Elements;

public class WebMagicService {
    public static void main(String[] args) throws Exception {
        HttpClient httpClient = HttpClients.createDefault();
        HttpGet httpGet = new HttpGet("http://bbs.e763.com/");
        // Pretend to be a desktop Firefox so the server returns the desktop page
        httpGet.setHeader("User-Agent", "Mozilla/5.0 (Windows NT 6.1; rv:6.0.2) Gecko/20100101 Firefox/6.0.2");
        HttpResponse response = httpClient.execute(httpGet);
        // The target forum is GBK-encoded, so decode the body as GBK rather than UTF-8
        String contents = EntityUtils.toString(response.getEntity(), "gbk");
        // Parse the HTML and select the hot-topic links
        Document document = Jsoup.parse(contents);
        Elements elements = document.select("div#hza11 div.boxtxthot a");
        for (Element element : elements) {
            System.out.println(element.text() + " : " + element.attr("href"));
        }
    }
}


The crawled output looks like this:
三大运营商9月1日起取消漫游费 用户无需申请自动生效 : viewthread.php?tid=907629
2年前月桂湖弃婴案:被蚂蚁咬过的孩子, 精神发育迟滞 : viewthread.php?tid=907607
.....
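
As a side note, for a static page like this one, Jsoup can also handle the HTTP request itself, so the whole crawl works without HttpClient. The sketch below is only an illustration (the class name JsoupOnlyCrawler and the 10-second timeout are my own choices, not from the original code); Jsoup picks the charset up from the response, so the GBK decoding is automatic as long as the server declares it:

import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;

public class JsoupOnlyCrawler {
    public static void main(String[] args) throws Exception {
        // Jsoup fetches the page, detects the charset, and parses it in one step
        Document document = Jsoup.connect("http://bbs.e763.com/")
                .userAgent("Mozilla/5.0 (Windows NT 6.1; rv:6.0.2) Gecko/20100101 Firefox/6.0.2")
                .timeout(10_000)   // 10-second connect/read timeout
                .get();
        for (Element element : document.select("div#hza11 div.boxtxthot a")) {
            System.out.println(element.text() + " : " + element.attr("href"));
        }
    }
}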
