
The Process and Approach of Writing a Zhihu Crawler in Java

2017-05-01 · 关耳金名

0. Required background

Making HTTP requests in Java (HttpURLConnection / Apache HttpClient), regular expressions, and storing results over JDBC (MySQL).

1. Fetching the main topics

Zhihu has 33 main topics, and under them a total of 15,776 sub-topics, so the first step is to fetch the 33 main topics.
(Note: I initially planned to use HttpURLConnection for the network requests, but since later steps need to POST a form to the server, the later code switches to HttpClient.)

(Screenshot: the data-id attribute in the page source is the main topic's id.)
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public static void getTopicId(){
        new Thread(new Runnable() {

            @Override
            public void run() {
                int id = 1;
                try {
                    // GET the topics page
                    URL url = new URL("https://www.zhihu.com/topics");
                    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
                    conn.setRequestMethod("GET");
                    conn.connect();
                    BufferedReader bfr = new BufferedReader(new InputStreamReader(conn.getInputStream(), "UTF-8"));
                    String line;
                    StringBuilder sb = new StringBuilder();
                    while ((line = bfr.readLine()) != null){
                        sb.append(line);
                    }
                    bfr.close();
                    String result = sb.toString();
                    // data-id="1234" -> the main topic's id
                    Pattern pattern = Pattern.compile("data-id=\"[0-9]{0,6}\"");
                    Matcher m = pattern.matcher(result);
                    // href="#..." -> the main topic's name (the anchor text after '#')
                    Pattern p = Pattern.compile("href=\"#.*?\"");
                    Matcher mn = p.matcher(result);
                    // Open one DB connection for all inserts instead of one per row
                    Connection connection = JDBCUtil.getConn();
                    while (m.find() && mn.find()){
                        String s = m.group();
                        s = s.substring(9, s.length() - 1);    // strip data-id=" and the closing quote
                        String sn = mn.group();
                        sn = sn.substring(7, sn.length() - 1); // strip href="# and the closing quote
                        System.out.println(s + " " + sn);
                        PreparedStatement state = connection.prepareStatement("insert into main_topic values(?,?,?)");
                        state.setInt(1, id++);
                        state.setInt(2, Integer.valueOf(s));
                        state.setString(3, sn);
                        state.execute();
                        state.close();
                    }
                    connection.close();
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }
        }).start();
    }

2. Fetching the sub-topics under each main topic

By capturing the network traffic we can see that the sub-topic page loads its entries from the server via POST requests: each scroll to the bottom loads 20 more sub-topics, paged by an offset parameter. We can therefore POST to the server ourselves and receive the JSON data.


(Screenshots: the structure of the POST request, and the response data obtained by replaying the request in Postman.)

The Chinese text in the returned JSON is Unicode-escaped, so a method in the Utils class converts it back to Chinese before it is stored in the database.
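
The article doesn't show Utils.decodeUnicode itself; a minimal sketch of such a decoder, assuming plain \uXXXX escapes (no surrogate-pair handling), might look like this:

// Minimal sketch of a \uXXXX decoder; the project's actual Utils.decodeUnicode
// is not shown in the article, so treat this as an illustration.
public static String decodeUnicode(String s) {
    StringBuilder out = new StringBuilder();
    int i = 0;
    while (i < s.length()) {
        // A "\u" followed by four hex digits encodes one character
        if (s.charAt(i) == '\\' && i + 5 < s.length() && s.charAt(i + 1) == 'u') {
            out.append((char) Integer.parseInt(s.substring(i + 2, i + 6), 16));
            i += 6;
        } else {
            out.append(s.charAt(i));
            i++;
        }
    }
    return out.toString();
}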

public static void getChildTopics(int topic_id, Connection conn) throws Exception{
        int offset = 0;
        // Build the HttpClient once and reuse it across requests
        PoolingHttpClientConnectionManager cm = new PoolingHttpClientConnectionManager();
        CloseableHttpClient httpClient = HttpClients.custom()
                .setRetryHandler(new DefaultHttpRequestRetryHandler())
                .setConnectionManager(cm)
                .build();
        while (true){
            // POST the paging parameters; each request returns up to 20 sub-topics
            HttpPost req = new HttpPost("https://www.zhihu.com/node/TopicsPlazzaListV2");
            List<NameValuePair> params = new ArrayList<NameValuePair>();
            params.add(new BasicNameValuePair("method", "next"));
            params.add(new BasicNameValuePair("params", "{\"topic_id\":" + topic_id + ",\"offset\":" + offset + ",\"hash_id\":\"37492588249aa9b50ee49d1797e9af81\"}"));
            req.setEntity(new UrlEncodedFormEntity(params, Consts.UTF_8));
            HttpResponse resp = httpClient.execute(req);
            String sb = EntityUtils.toString(resp.getEntity());
            // Once all sub-topics are exhausted the response body shrinks to an empty shell
            if (sb.length() < 25) break;
            // <strong>...<\/strong> wraps the sub-topic name (HTML inside the JSON is escaped)
            Pattern p = Pattern.compile("<strong>.*?<\\\\/strong>");
            Matcher m = p.matcher(sb);
            // href=\"\/topic\/1234\" carries the sub-topic id
            Pattern p1 = Pattern.compile("href=\\\\\"\\\\/topic\\\\/[0-9]+\\\\\"");
            Matcher m1 = p1.matcher(sb);
            while (m.find() && m1.find()){
                String temp = m.group().substring(8, m.group().length() - 10);
                String sql = "insert into child_topic values(null,?,?,?)";
                PreparedStatement state = conn.prepareStatement(sql);
                state.setInt(1, topic_id);
                state.setString(2, m1.group().substring(16, m1.group().length() - 2));
                // The JSON escapes Chinese as \uXXXX, so decode before storing
                state.setString(3, Utils.decodeUnicode(temp));
                state.execute();
                state.close();
            }
            offset += 20;
        }
        httpClient.close();
    }

Each main topic's id is then used to fetch the ids of the sub-topics under it.
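
For completeness, a driver that walks all main topics could look like the sketch below. The column names of main_topic aren't given in the article, so the index-based access is an assumption based on the insert in step 1 (row id, Zhihu topic id, topic name):

// Hypothetical driver: fetch the sub-topics of every main topic saved in step 1.
// Assumes main_topic's column order follows the insert in getTopicId().
public static void loadAllChildTopics() throws Exception {
    Connection conn = JDBCUtil.getConn();
    Statement stmt = conn.createStatement();
    ResultSet rs = stmt.executeQuery("select * from main_topic");
    while (rs.next()) {
        getChildTopics(rs.getInt(2), conn); // column 2 holds the Zhihu topic id
    }
    rs.close();
    stmt.close();
    conn.close();
}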

3. Fetching the hot Q&As via the sub-topic id

public static void loadOneTopicHotQA(int child_topic_id) throws Exception{
        PoolingHttpClientConnectionManager cm = new PoolingHttpClientConnectionManager();
        CloseableHttpClient httpClient = HttpClients.custom()
                .setRetryHandler(new DefaultHttpRequestRetryHandler())
                .setConnectionManager(cm)
                .build();
        // GET the sub-topic's "hot" page
        HttpGet req = new HttpGet("https://www.zhihu.com/topic/" + child_topic_id + "/hot");
        HttpResponse resp = httpClient.execute(req);
        String result = EntityUtils.toString(resp.getEntity());
        // The regex literals are too escape-heavy to render here; see the screenshot below
        Pattern p_qa_id = Pattern.compile(regex_qa_id);
        Pattern p_qa_name = Pattern.compile(regex_qa_name);
        Pattern p_qa_username = Pattern.compile(regex_qa_username);
        Matcher m_qa_id = p_qa_id.matcher(result);
        Matcher m_qa_name = p_qa_name.matcher(result);
        Matcher m_qa_username = p_qa_username.matcher(result);
        while (m_qa_id.find() && m_qa_name.find() && m_qa_username.find()){
            // The matched link has the shape /question/<q_id>/answer/<a_id>
            String[] qanda_id = m_qa_id.group().split("/");
            String q_id = qanda_id[2];
            String a_id = qanda_id[4].substring(0, qanda_id[4].length() - 2);
            String q_name = m_qa_name.group().split("\n")[1];
            String temp = m_qa_username.group();
            String q_username;
            // Anonymous answers show "匿名用户"; when not logged in the author's
            // name may be replaced with "知乎用户", so handle both explicitly
            if (temp.contains("匿名用户")) q_username = "匿名用户";
            else if (temp.contains("知乎用户")) q_username = "知乎用户";
            else q_username = temp.substring(30, temp.length() - 1);
            HotQA qa = new HotQA(child_topic_id, Integer.valueOf(q_id), q_name, Integer.valueOf(a_id), q_username);
            DaoImpl daoimpl = new DaoImpl();
            daoimpl.save(qa, child_topic_id);
        }
        httpClient.close();
    }
(Screenshot: the regex patterns.)
Because an answerer's name may be replaced with "知乎用户" when the crawler is not logged in, that case has to be handled explicitly (as do anonymous answers, shown as "匿名用户").

4. What the crawled data can be used for

(Screenshot: the crawled data.)

5. Extensions

The DAO layer provides a method that returns the latest activity for a given topic, as well as a method that fetches a specific answer by its answer_id. A feature for crawling user profiles may be added in the future.
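
As a rough sketch, that DAO surface might look like the interface below; apart from save(), which appears in step 3, the method names and signatures are illustrative assumptions, not the project's actual API:

// Illustrative DAO sketch: only save() is shown in the article's code;
// the other names and signatures are assumptions.
public interface HotQADao {
    void save(HotQA qa, int child_topic_id);     // persist one hot Q&A (used in step 3)
    List<HotQA> getLatestActivity(int topic_id); // latest activity of a topic (hypothetical)
    HotQA getAnswerById(int answer_id);          // fetch one answer by answer_id (hypothetical)
}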

6. Thanks

Thanks for reading this far. If you don't mind, please give the GitHub project a star, and following me would be even better!
GitHub project (you can click it directly)
You can also reach me on Weibo: 关耳金名
Follow-up post: how to use the project and the pitfalls I hit while coding it
