Open Access Proceedings Article

The PageRank Citation Ranking : Bringing Order to the Web

Lawrence Page, Sergey Brin, Rajeev Motwani, Terry Winograd
Vol. 98, pp. 161-172
TLDR
This paper describes PageRank, a method for rating Web pages objectively and mechanically, effectively measuring the human interest and attention devoted to them, and shows how to efficiently compute PageRank for large numbers of pages.
Abstract
The importance of a Web page is an inherently subjective matter, which depends on the reader's interests, knowledge and attitudes. But there is still much that can be said objectively about the relative importance of Web pages. This paper describes PageRank, a method for rating Web pages objectively and mechanically, effectively measuring the human interest and attention devoted to them. We compare PageRank to an idealized random Web surfer. We show how to efficiently compute PageRank for large numbers of pages. And we show how to apply PageRank to search and to user navigation.
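As a concrete illustration of the random-surfer computation the abstract describes, here is a minimal power-iteration sketch in Python. The toy graph, the damping factor d = 0.85, and all names are illustrative assumptions, not details taken from this page.

def pagerank(links, d=0.85, iters=50):
    # links: dict mapping each page to the list of pages it links to
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}  # start from a uniform distribution
    for _ in range(iters):
        new = {p: (1.0 - d) / n for p in pages}  # random-jump component
        for p, outs in links.items():
            if outs:  # a page shares its rank equally among its out-links
                share = d * rank[p] / len(outs)
                for q in outs:
                    new[q] += share
            else:  # dangling page: spread its rank evenly over all pages
                for q in pages:
                    new[q] += d * rank[p] / n
        rank = new
    return rank

# Toy example: page C is linked to by both A and B, so it ranks highest.
print(pagerank({"A": ["B", "C"], "B": ["C"], "C": ["A"]}))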


Citations
Proceedings Article

Relevance search in heterogeneous networks

TL;DR: Empirical studies show that HeteSim can effectively evaluate the relatedness of heterogeneous objects, and that in query and clustering tasks it can achieve better performance than conventional measures.
Journal Article

Centrality Measures in Networks

TL;DR: It is shown that although the prominent centrality measures in network analysis make use of different information about nodes' positions, they all process that information in an identical way: they all spring from a common family characterized by the same simple axioms.
Proceedings Article

Measuring author contributions to the Wikipedia

TL;DR: The problem of measuring user contributions to versioned, collaborative bodies of information, such as wikis, is considered, along with various criteria that take into account the quality of a contribution in addition to its quantity.
Journal Article

On new approaches of assessing network vulnerability: hardness and approximation

TL;DR: The objective is to identify the minimum set of critical network elements, namely nodes and edges, whose removal results in a specific degradation of the network's global pairwise connectivity; the NP-completeness and inapproximability of this problem are proved.
Journal Article

Link Analysis in Web Information Retrieval

TL;DR: This survey describes two successful link analysis algorithms and the state of the art of the field.
References
Journal Article

The Anatomy of a Large-Scale Hypertextual Web Search Engine

Sergey Brin, Lawrence Page
01 Jan 1998
TL;DR: Google, as discussed by the authors, is a prototype of a large-scale search engine which makes heavy use of the structure present in hypertext, and is designed to crawl and index the Web efficiently and produce much more satisfying search results than existing systems.
Journal Article

Efficient crawling through URL ordering

TL;DR: In this paper, the authors study in what order a crawler should visit the URLs it has seen in order to obtain more "important" pages first, and they show that a good ordering scheme can obtain important pages significantly faster than one without ordering.
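To make the ordering idea concrete, here is a minimal Python sketch of a crawler driven by a priority queue keyed on an importance estimate. The importance() and fetch_links() callables are hypothetical stand-ins (the paper studies several real importance metrics, such as in-link counts and PageRank); everything here is an illustrative assumption, not the paper's implementation.

import heapq

def crawl(seed_urls, fetch_links, importance, budget=100):
    # fetch_links(url) -> iterable of out-link URLs (hypothetical fetcher)
    # importance(url)  -> numeric importance estimate (hypothetical metric)
    seen = set(seed_urls)
    frontier = [(-importance(u), u) for u in seed_urls]  # max-heap via negation
    heapq.heapify(frontier)
    visited = []
    while frontier and len(visited) < budget:
        _, url = heapq.heappop(frontier)  # most "important" known URL first
        visited.append(url)
        for nxt in fetch_links(url):
            if nxt not in seen:
                seen.add(nxt)
                heapq.heappush(frontier, (-importance(nxt), nxt))
    return visited

# Toy usage over an in-memory "web": importance here is a precomputed in-link count.
web = {"a": ["b", "c"], "b": ["c"], "c": []}
inlinks = {"a": 0, "b": 1, "c": 2}
print(crawl(["a"], web.__getitem__, inlinks.__getitem__))  # -> ['a', 'c', 'b']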
Proceedings Article

Silk from a sow's ear: extracting usable structures from the Web

TL;DR: This paper explores techniques that utilize both the topology and textual similarity between items, as well as usage data collected by servers and page meta-information like title and size.
Proceedings Article

HyPursuit: a hierarchical network search engine that exploits content-link hypertext clustering

TL;DR: Experience with HyPursuit suggests that abstraction functions based on hypertext clustering can be used to construct meaningful and scalable cluster hierarchies; preliminary results on clustering based on both document contents and hyperlink structures are encouraging.
Journal Article

The quest for correct information on the Web: hyper search engines

TL;DR: This paper presents a novel method to extract from a web object its “hyper” informative content, in contrast with current search engines, which only deal with the “textual” informative content.