Web Information Paper Review (1)

2016-04-04  凌晨2点的北京

1. Andrei Broder et al.: Graph structure in the Web. WWW 2000

Links: Graph structure in the web; Introduction to Graph Theory

This paper analyzes the structure of the web as a graph. It introduces several new definitions from a graph-structure point of view, verifies earlier theories about the web's structure, and provides a setting in which to study the behavior of the Internet. The paper can be divided into two parts: (1) verification of the Power Law; (2) application of graph-theoretic methods to the web.

The Power Law states that the fraction of pages with degree i (in-degree or out-degree) is proportional to 1/i^x, where x is a number larger than one. In other words, the number of pages with a given degree is inversely proportional to a power of that degree: most pages have few links, while only a few pages have a very large number of links. This phenomenon is especially pronounced for in-degrees.

The authors ran experiments to verify this law. They used BFS to crawl pages from the Internet and found that the degree distributions indeed follow the Power Law, with x around 2.1 for in-degrees and around 2.72 for out-degrees. The plots show that most pages have small in-degree or out-degree; fewer than ten pages have in-degree around 10,000. This is reasonable, since a page with more incoming links is more popular or more often visited. We can predict that pages with high PageRank will lie toward the bottom right of the log-log plot. They also measured the sizes of the connected components and found that the size distribution follows the Power Law as well: most WCCs or SCCs contain few nodes, while only a few large components contain a very large number of nodes.
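To make this concrete for myself, here is a minimal sketch of how such a check could be done. The edge-list representation, the toy data, and the simple log-log least-squares fit are my own assumptions, not the paper's methodology:

```python
import math
from collections import Counter

def fit_power_law_exponent(edges):
    """Estimate the exponent x in P(in-degree = i) ~ 1/i^x by a simple
    least-squares fit of log(page count) against log(in-degree)."""
    in_deg = Counter(dst for _, dst in edges)       # in-degree of every page
    dist = Counter(in_deg.values())                 # in-degree -> number of pages
    points = [(math.log(d), math.log(c)) for d, c in dist.items()]
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    slope = (sum((x - mx) * (y - my) for x, y in points)
             / sum((x - mx) ** 2 for x, _ in points))
    return -slope                                   # the log-log slope is -x

# Hypothetical toy edge list (source page, target page); a real crawl is far larger.
edges = [("a", "c"), ("b", "c"), ("d", "c"), ("a", "b"), ("c", "d")]
print(fit_power_law_exponent(edges))
```

On a real crawl the fit would be done over many more degree values, but the shape of the computation is the same: count degrees, bucket pages by degree, and read the exponent off the log-log slope.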

In the second part, the paper introduces the basic ideas of web graph theory, presents some discoveries based on it, and conducts experiments to verify them. The web graph can be divided into six parts: SCC, IN, OUT, TENDRILS, TUBES, and DISCONNECTED components. The SCC is the strongly connected component: every node inside it can reach every other node of the SCC along a directed path. The opposite notion is the WCC (weakly connected component), which is formed by considering connectivity while ignoring edge directions, so it does not guarantee that every node can reach every other node. IN is the component whose pages can reach the SCC along directed paths but cannot be reached from it; OUT is the component whose pages can be reached from the SCC but cannot reach it; TENDRILS are pages that can neither reach nor be reached from the SCC. The paper also introduces notions such as directed and undirected graphs and the average connected distance. The crawling strategy used in the paper is breadth-first search.
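As a rough sketch of these definitions (the dictionary-based graph representation, the function names, and the toy graph are my own; a real web graph would need far more scalable machinery), the nodes can be split around an SCC using one forward and one backward BFS from a node known to be inside it:

```python
from collections import deque

def bfs(adj, start):
    """Return the set of nodes reachable from start by directed BFS."""
    seen, queue = {start}, deque([start])
    while queue:
        u = queue.popleft()
        for v in adj.get(u, ()):
            if v not in seen:
                seen.add(v)
                queue.append(v)
    return seen

def bow_tie(adj, radj, scc_node):
    """Classify nodes relative to the SCC containing scc_node.
    adj: forward adjacency lists, radj: reversed adjacency lists."""
    forward = bfs(adj, scc_node)     # reaches SCC plus OUT
    backward = bfs(radj, scc_node)   # reaches SCC plus IN
    scc = forward & backward
    out_part = forward - scc
    in_part = backward - scc
    nodes = set(adj) | set(radj) | {v for vs in adj.values() for v in vs}
    rest = nodes - scc - in_part - out_part   # TENDRILS, TUBES, DISCONNECTED
    return scc, in_part, out_part, rest

# Hypothetical toy graph: b<->c form the SCC, a is IN, d is OUT, e is disconnected.
adj = {"a": ["b"], "b": ["c"], "c": ["b", "d"], "e": []}
radj = {"b": ["a", "c"], "c": ["b"], "d": ["c"], "e": []}
print(bow_tie(adj, radj, "b"))
```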

In their experiment, they ran forward and backward BFS from 570 random starting nodes and drew two conclusions: (1) the fraction of starting nodes whose BFS "explodes" (reaches a large part of the web) equals the fraction of the SCC in the overall web; (2) whenever an in-link explosion happens, the number of nodes reached is the same every time, and likewise every out-link explosion reaches the same number of nodes. With these two observations, the sizes of SCC, IN and OUT can be estimated. First, multiply the total number of pages by the fraction of exploding start nodes to get the size of the SCC. Then subtract the SCC size from the number of nodes reached by an in-link explosion to get IN, and from the number reached by an out-link explosion to get OUT. They also estimate the directed diameter, the average connected distance, the maximum finite shortest-path length, and so on.
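A small sketch of this arithmetic, following the reasoning described above; the function name and all the input numbers are made-up placeholders, not the paper's measurements:

```python
def estimate_components(total_pages, explode_fraction,
                        in_explosion_reached, out_explosion_reached):
    """Estimate SCC, IN and OUT sizes from the BFS-explosion observations.
    explode_fraction: fraction of random start nodes whose BFS explodes.
    in_explosion_reached: nodes reached by an exploding backward (in-link) BFS.
    out_explosion_reached: nodes reached by an exploding forward (out-link) BFS."""
    scc = total_pages * explode_fraction      # observation (1)
    in_size = in_explosion_reached - scc      # in-link explosion covers SCC + IN
    out_size = out_explosion_reached - scc    # out-link explosion covers SCC + OUT
    return scc, in_size, out_size

# Placeholder inputs purely for illustration.
print(estimate_components(total_pages=1_000_000, explode_fraction=0.28,
                          in_explosion_reached=500_000,
                          out_explosion_reached=490_000))
```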

From this paper, I learned the basics of applying graph theory to the web structure, as well as some features and properties of this special graph.

2. Sergey Brin, Lawrence Page: The Anatomy of a Large-Scale Hypertextual Web Search Engine. WWW 1998

Links: The Anatomy of a Large-Scale Hypertextual Web Search Engine; Chinese notes on CSDN; The Google PageRank Algorithm

This paper describes the technologies behind the Google search engine. It mainly introduces the overall structure of a practical large-scale system that can exploit the information in hypertext, and it also discusses how to make the search engine more accurate and effective given that anyone can publish web pages on the Internet. Basically, a search engine needs to store a huge amount of data and search through it quickly, which requires the system to be robust.

The paper first introduces the PageRank system. PageRank is used to rank all the pages on the Internet and put them in order; this ordering helps Google return more relevant search results when weighing pages against each other. PageRank is calculated using the following formula:

PR(A) = (1-d) + d (PR(T1)/C(T1) + ... + PR(Tn)/C(Tn))

PR(A) is the PageRank of page A, T1 … Tn are the pages that link to A, C(Ti) is the out-degree of page Ti, and d is the damping factor, which prevents any single page from having too much influence. Google also differs from most other search engines in that it associates anchor (link) text with the page the link points to. Anchor text often describes a page better than the page itself, which is useful when the page contains little text of its own, such as pure images or videos.
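As a minimal sketch of iterating this formula to convergence (the graph representation, iteration count, and variable names are my own choices, not Google's implementation):

```python
def pagerank(links, d=0.85, iterations=50):
    """Iteratively apply PR(A) = (1 - d) + d * sum(PR(T)/C(T)) over pages T that link to A.
    links: dict mapping each page to the list of pages it links to."""
    pages = set(links) | {t for ts in links.values() for t in ts}
    pr = {p: 1.0 for p in pages}                 # initial PageRank values
    for _ in range(iterations):
        new_pr = {}
        for a in pages:
            incoming = (pr[t] / len(links[t])
                        for t in pages if a in links.get(t, ()))
            new_pr[a] = (1 - d) + d * sum(incoming)
        pr = new_pr
    return pr

# Hypothetical toy link structure.
links = {"home": ["about", "news"], "about": ["home"], "news": ["home", "about"]}
print(pagerank(links))
```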

The paper then presents the architecture of Google. Google runs distributed crawler programs that fetch web pages and store them in a local repository. Every page is converted into word occurrences called hits and is addressed by a docID. Hits are stored in barrels; the sorter takes the barrels, which are sorted by docID, and re-sorts them by wordID to build the inverted index. Every time a user issues a query, the engine first converts the query text into wordIDs and then searches through the barrels until it finds the best-matching results.
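Here is a toy sketch of that lookup path; the class and its data structures are heavy simplifications of Google's lexicon, barrels and docID mapping, with names of my own:

```python
from collections import defaultdict

class TinyIndex:
    """Very small stand-in for the lexicon plus inverted barrels described above."""
    def __init__(self):
        self.lexicon = {}                      # word -> wordID
        self.inverted = defaultdict(list)      # wordID -> list of (docID, position)

    def add_document(self, doc_id, text):
        for position, word in enumerate(text.lower().split()):
            word_id = self.lexicon.setdefault(word, len(self.lexicon))
            self.inverted[word_id].append((doc_id, position))

    def search(self, query):
        """Convert query words to wordIDs and intersect their docID lists."""
        doc_sets = []
        for word in query.lower().split():
            word_id = self.lexicon.get(word)
            if word_id is None:
                return set()
            doc_sets.append({doc_id for doc_id, _ in self.inverted[word_id]})
        return set.intersection(*doc_sets)

index = TinyIndex()
index.add_document(1, "graph structure in the web")
index.add_document(2, "anatomy of a search engine")
print(index.search("search engine"))          # -> {2}
```

A real engine would rank the intersected documents (by PageRank and hit types) instead of just returning the set, but the wordID/docID plumbing is the same idea.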

For such a large-scale system, storage and indexing speed are also vital. The crawler stores each page in the repository prefixed by its docID, length and URL, and there is a dedicated mechanism for mapping URLs to docIDs. The hit lists use a hand-optimized compact encoding that stores the position, font and capitalization of each word occurrence in a page; this encoding is much more space-efficient than traditional encodings. The crawlers can keep around 300 connections open at the same time, with each connection in a different state of crawling, and a DNS cache is used to speed up DNS resolution. The search results use PageRank as a weight, and feedback from trusted users is used to improve the weighting system.
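A toy version of such a compact hit encoding is sketched below. The exact field widths (1 bit for capitalization, 3 bits for font importance, 12 bits for position, packed into two bytes) follow my reading of the paper's plain hits, so treat them as an assumption rather than the exact format:

```python
def pack_hit(capitalized, font_importance, position):
    """Pack one hit into 16 bits: 1 bit capitalization, 3 bits font, 12 bits position."""
    assert 0 <= font_importance < 8 and 0 <= position < 4096
    return (int(capitalized) << 15) | (font_importance << 12) | position

def unpack_hit(hit):
    """Recover (capitalized, font_importance, position) from the 16-bit encoding."""
    return bool(hit >> 15), (hit >> 12) & 0x7, hit & 0xFFF

packed = pack_hit(capitalized=True, font_importance=3, position=42)
print(hex(packed), unpack_hit(packed))   # two bytes per hit instead of a full record
```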

Finally, the paper shows Google's performance. It returns reliable search results at a satisfying speed, and the whole system can run on a PC with modest storage, which shows that the system has a good storage strategy.

From this paper, I learned the architecture of Google, or more generally of a large-scale search engine, and gained a basic idea of how search engines work and how to improve their performance.
