Automatic Resource Compilation by Analyzing Hyperlink
Structure and Associated Text
Soumen Chakrabarti, Byron Dom, Prabhakar Raghavan, Sridhar Rajagopalan
IBM Almaden Research Center K53, 650 Harry Road
San Jose, CA 95120, USA.
David Gibson
Computer Science Division, Soda Hall
University of California, Berkeley, CA 94720, USA.
Jon Kleinberg
Department of Computer Science, Upson Hall
Cornell University, Ithaca, NY 14853, USA.
Abstract
We describe the design, prototyping and evaluation of ARC, a system for automatically
compiling a list of authoritative web resources on any (sufficiently broad) topic. The goal of
ARC is to compile resource lists similar to those provided by Yahoo! or Infoseek. The
fundamental difference is that these services construct lists either manually or through a
combination of human and automated effort, while ARC operates fully automatically. We
describe the evaluation of ARC, Yahoo!, and Infoseek resource lists by a panel of human
users. This evaluation suggests that the resources found by ARC frequently fare almost as
well as, and sometimes better than, lists of resources that are manually compiled or
classified into a topic. We also provide examples of ARC resource lists for the reader to
examine.
Keywords: Search, taxonomies, link analysis, anchor text, information retrieval.
1. Overview
The subject of this paper is the design and evaluation of an automatic resource compiler. An automatic
resource compiler is a system which, given a topic that is broad and well-represented on the web, will
seek out and return a list of web resources that it considers the most authoritative for that topic. Our
system is built on an algorithm that performs a local analysis of both text and links to arrive at a "global
consensus" of the best resources for the topic. We describe a user-study, comparing our resource
compiler with commercial, human-compiled/assisted services. To our knowledge, this is one of the first
systematic user-studies comparing the quality of multiple web resource lists compiled using different
methods. Our study suggests that, although our resource lists are compiled wholly automatically (and
despite being presented to users without any embellishments in the "look and feel" or the presentation
context), they fare relatively well compared to the commercial human-compiled lists.
When web users seek definitive information on a broad topic, they frequently go to a hierarchical,
manually-compiled taxonomy such as Yahoo!, or a human-assisted compilation such as Infoseek. The
role of such a taxonomy is to provide, for any broad topic, a list of high-quality resources on that
topic. In this paper we describe ARC (for Automatic Resource Compiler), a part of the CLEVER project on
information retrieval at the IBM Almaden Research Center. The goal of ARC is to
automatically compile a resource list on any topic that is broad and well-represented on the web. By
using an automated system to compile resource lists, we obtain a faster coverage of the available
resources and of the topic space than a human can achieve (or, alternatively, are able to update and
maintain more resource lists more frequently). As our studies with human users show, the loss in quality
is not significant compared to manually or semi-manually compiled lists.
1.1. Related prior work
The use of links for ranking documents is similar to work on citation analysis in the field of
bibliometrics (see e.g. [White and McCain]). In the context of the Web, links have been used for
enhancing relevance judgments by [Rivlin, Botafogo, and Shneiderman] and [Weiss et al]. They have
been incorporated into query-based frameworks for searching by [Arocena, Mendelzon, and Mihaila]
and by [Spertus].
Our work is oriented in a different direction - namely, to use links as a means of harnessing the latent
human annotation in hyperlinks so as to broaden a user search and focus on a type of ‘high-quality’
page. Similar motivation arises in the work of [Pirolli, Pitkow, and Rao]; [Carriere and Kazman]; and Brin
and Page [BrinPage97]. Pirolli et al. discuss a method based on link and text-based information for
grouping and categorizing WWW pages. Carriere and Kazman use the number of neighbors (without
regard to the directions of links) of a page in the link structure as a method of ranking pages; and Brin
and Page model a random walk on the web's link graph to assign a topic-independent "rank" to each page on the
WWW, which can then be used to re-order the output of a search engine. [For a more detailed review of
search engines and their rank functions (including some based on the number of links pointing to a web
page) see Search Engine Watch [SEW].] Finally, the link-based algorithm of Kleinberg [Kleinberg97]
serves as one of the building blocks of our method here; this connection is described in more detail in
Section 2 below, where we explain how we enhance it with textual analysis.
1.2. Road map
We begin in Section 2 below with a description of our technique and how some of its parameters are
fixed. In Section 3 we describe our experiments using a number of topics, with a diverse set of users
from many different backgrounds. In Section 4 we summarize the ratings given by these evaluators, as
well as their qualitative comments and suggestions.
2. Algorithm
We now describe our algorithm, and the experiments that we use to set values for the small number of
parameters in the algorithm. The algorithm has three phases: a search-and-growth phase, a weighting
phase, and an iteration-and-reporting phase.
Given a topic, the algorithm first gathers a collection of pages from among which it will distill ones that
it considers the best for the topic. This is the intent of the first phase, which is nearly identical to that in
Kleinberg’s HITS technique [Kleinberg97]. The topic is sent to a term-based search engine - AltaVista
in our case - and a root set of 200 documents containing the topic term(s) is collected. The particular
root set returned by the search engine (among all the web resources containing the topic as a text string)
is determined by its own scoring function. The root set is then augmented through the following
expansion step: we add to the root set (1) any document that points to a document in the root set, and (2)
any document that is pointed to by a document in the root set. We perform this expansion step twice (in
Kleinberg’s work, this was performed only once), thus including all pages which are link-distance two
or less from at least one page in the root set. We will call the set of documents obtained in this way the
augmented set. In our experience, the augmented set contained between a few hundred and 3000 distinct
pages, depending on the topic.
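As a concrete illustration, the following sketch (in Python) carries out the expansion step over an in-memory link graph; the outlinks/inlinks maps and the toy data are stand-ins for the crawler and the AltaVista query described above, not part of the original system.

# A minimal sketch of the root-set expansion, assuming the relevant portion of
# the web is available as in-memory adjacency maps (a stand-in for a crawler).

def expand(root_set, outlinks, inlinks, rounds=2):
    """Grow the root set by adding, in each round, every page that points to
    or is pointed to by a page already in the set (two rounds => distance <= 2)."""
    augmented = set(root_set)
    for _ in range(rounds):
        frontier = set()
        for page in augmented:
            frontier |= outlinks.get(page, set())   # documents the page points to
            frontier |= inlinks.get(page, set())    # documents pointing to the page
        augmented |= frontier
    return augmented

# Toy usage with hypothetical pages "a".."d":
outlinks = {"a": {"b"}, "b": {"c"}, "d": {"a"}}
inlinks = {"b": {"a"}, "c": {"b"}, "a": {"d"}}
print(sorted(expand({"a"}, outlinks, inlinks)))   # ['a', 'b', 'c', 'd']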
We now develop two fundamental ideas. The first idea, due to Kleinberg [Kleinberg97], is that there are
two types of useful pages. An authority page is one that contains a lot of information about the topic. A
hub page is one that contains a large number of links to pages containing information about the topic -
an example of a hub page is a resource list on some specific topic. The basic principle here is the
following mutually reinforcing relationship between hubs and authorities. A good hub page points to
many good authority pages. A good authority page is pointed to by many good hub pages. To convert
this principle into a method for finding good hubs and authorities, we first describe a local iterative
process [Kleinberg97] that "bootstraps" the mutually reinforcing relationship described above to locate
good hubs and authorities. We then present the second fundamental notion underlying our algorithm,
which sharpens its accuracy when focusing on a topic. Finally, we present our overall algorithm and a
description of the experiments that help us fix its parameters.
Kleinberg maintains, for each page p in the augmented set, a hub score, h(p) and an authority score,
a(p). Each iteration consists of two steps: (1) replace each a(p) by the sum of the h(p) values of pages
pointing to p; (2) replace each h(p) by the sum of the a(p) values of pages pointed to by p. Note that this
iterative process ignores the text describing the topics; we remedy this by altering these sums to be
weighted in a fashion described below, so as to maintain focus on the topic. The idea is to iterate this
new, text-weighted process for a number of steps, then pick the pages with the top hub and authority
scores.
To this end we introduce our second fundamental notion: the text around href links to a page p is
descriptive of the contents of p; note that these href’s are not in p, but in pages pointing to p. In
particular, if text descriptive of a topic occurs in the text around an href into p from a good hub, it
reinforces our belief that p is an authority on the topic. How do we incorporate this textual conferral of
authority into the basic iterative process described above? The idea is to assign to each link (from page p
to page q of the augmented set) a positive numerical weight w(p,q) that increases with the amount of
topic-related text in the vicinity of the href from p to q. This assignment is the second, weighting phase
mentioned above. The precise mechanism we use for computing these weights is described in Section
2.1 below; for now, let us continue to the iteration and reporting phase, assuming that this
topic-dependent link weighting has been done.
In the final phase, we compute two vectors h (for hub) and a (for authority), with one entry for each
page in the augmented set. The entries of the first contain scores for the value of each page as a hub, and
the second describes the value of each page as an authority. We construct a matrix W that contains an
entry corresponding to each ordered pair p,q of pages in the augmented set. This entry is w(p,q)
(computed as below) when page p points to q, and 0 otherwise. Let Z be the matrix transpose of W. We
set the vector h equal to 1 initially and iteratively execute the following two steps k times.
a = Zh
h = Wa
After k iterations we output the pages with the 15 highest values in h as the hubs, and the 15 highest
values in a as the authorities, without further annotation or human filtering. Thus our process is
completely automated. Our choice of the quantity 15 here is somewhat arbitrary: it arose from our sense
that a good resource list should offer the user a set of pointers that is easy to grasp from a single browser
frame.
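For concreteness, the sketch below (in Python) performs the iteration-and-reporting phase, using a nested dictionary as a sparse stand-in for the matrix W; the normalization described in Section 2.1 is folded in so the entries stay bounded, and the function name and data layout are our own illustrative choices rather than the paper's implementation.

# Sketch of the iteration-and-reporting phase. W is stored sparsely as
# {p: {q: w(p,q)}} for links p -> q; its transpose Z is applied implicitly.

def top_hubs_and_authorities(W, k=5, top=15):
    pages = set(W) | {q for row in W.values() for q in row}
    h = {p: 1.0 for p in pages}   # h is initialized to all ones
    a = {p: 1.0 for p in pages}
    for _ in range(k):
        # a = Zh : authority of q accumulates w(p,q) * h(p) over links into q.
        a = {p: 0.0 for p in pages}
        for p, row in W.items():
            for q, w in row.items():
                a[q] += w * h[p]
        # h = Wa : hub score of p accumulates w(p,q) * a(q) over links out of p.
        h = {p: sum(w * a[q] for q, w in W.get(p, {}).items()) for p in pages}
        # Normalize both vectors; only their relative values matter.
        for v in (a, h):
            s = sum(v.values()) or 1.0
            for p in v:
                v[p] /= s
    hubs = sorted(pages, key=h.get, reverse=True)[:top]
    authorities = sorted(pages, key=a.get, reverse=True)[:top]
    return hubs, authorities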
Intuitively, the first step in each iteration reflects the notion that good authority pages are pointed to by
hub pages and are described in them as being relevant to the topic text. The second step in each iteration
reflects the notion that good hub pages point to good authority pages and describe them as being
relevant to the topic text. What do we set k to? It follows from the theory of eigenvectors [Golub89] that,
as k increases, the relative values of the components of h and a converge to a unique steady state, given
that the entries of W are non-negative real numbers. In our case, a very small value of k is sufficient --
and hence the computation can be performed extremely efficiently -- for two reasons. First, we have
empirically observed that convergence is quite rapid for the matrices that we are dealing with. Second,
we need something considerably weaker than convergence: we only require that the identities of the top
15 hubs/authorities become stable, since this is all that goes into the final resource list. This is an
important respect in which our goals differ from those of classical matrix eigenvector computations:
whereas that literature focuses on the time required for the values of the eigenvector components to
reach a stable state, our emphasis is only on identifying the 15 largest entries without regard to the actual
values of these entries. We found on a wide range of tests that this type of "near-convergence" occurs
around five iterations. We therefore decided to fix k to be 5.
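The "near-convergence" test can be made explicit: rather than waiting for the score values to settle, one can stop as soon as the identities of the top 15 hubs and authorities no longer change between successive iterations. A small sketch of this check, assuming score dictionaries like those produced in the iteration above:

def top_identities(scores, k=15):
    # The set of the k pages with the highest scores, ignoring the values.
    return set(sorted(scores, key=scores.get, reverse=True)[:k])

def has_stabilized(prev_scores, curr_scores, k=15):
    # True when two successive iterations agree on which pages are in the
    # top k, even if the underlying score values are still changing.
    return top_identities(prev_scores, k) == top_identities(curr_scores, k)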
2.1. Computing the weights w(p,q)
Recall that the weight w(p,q) is a measure of the authority on the topic invested by p in q. If the text in
the vicinity of the href from p to q contains text descriptive of the topic at hand, we want to increase
w(p,q); this idea of using anchor text first arose in the work of McBryan [McBryan94]. The immediate
questions, then, are (1) what precisely is "vicinity"? and (2) how do we map the occurrences of
descriptive text into a real-valued weight? Our idea is to look on either side of the href for a window of
B bytes, where B is a parameter determined through an experiment described below; we call this the
anchor window. Note that this includes the text between the <a href="..."> and </a> tags. Let n(t)
denote the number of matches between terms in the topic description and the text in this anchor window.
For this purpose, a term may be specified as a contiguous string of words. We set
w(p,q) = 1 + n(t).
Since many entries of W are larger than one, the entries of h and a may grow as we iterate; however,
since we only need their relative values, we normalize after each iteration to keep the entries small.
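The following sketch computes this weight under simplifying assumptions of our own: the page source is plain HTML held in memory, each href to q is located with a regular expression, topic terms are matched case-insensitively inside the B-byte anchor window, and the contributions of multiple hrefs from p to q are summed (the paper does not prescribe these details).

import re

B = 50  # anchor-window half-width in bytes, as determined below

def link_weight(page_html, target_url, topic_terms, window=B):
    # w(p,q) = 1 + n(t), where n(t) counts topic-term matches within the
    # anchor window: B bytes on either side of the href, including the
    # anchor text between <a href="..."> and </a>.
    weight = 1
    pattern = re.compile(
        r'<a\s+[^>]*href="' + re.escape(target_url) + r'"[^>]*>(.*?)</a>',
        re.IGNORECASE | re.DOTALL)
    for m in pattern.finditer(page_html):
        vicinity = page_html[max(0, m.start() - window):m.end() + window].lower()
        weight += sum(vicinity.count(term.lower()) for term in topic_terms)
    return weight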
Finally, we describe the determination of B, the parameter governing the width of the anchor window.
Postulating that the string <a href="http://www.yahoo.com"> would typically co-occur with the text
Yahoo in close proximity, we studied - on a test set of over 5000 web pages drawn from the web - the
distance to the nearest occurrence of Yahoo around all href’s to http://www.yahoo.com in these pages.
The results are shown below in Table 1; the first row indicates distance from the string <a
href="http://www.yahoo.com">, while the second row indicates the number of occurrences of the string
Yahoo at that distance. Here a distance of zero corresponds to occurrences between <a
href="http://www.yahoo.com"> and </a>. A negative distance connotes occurrences before the href, and
positive distances after.
Distance       -100    -75    -50    -25      0     25     50     75    100
Occurrences       1      6     11     31    880     73    112     21      7

Table 1: Occurrences of the text Yahoo versus byte distance from the href.
The table suggests that most occurrences are within 50 bytes of the href. Qualitatively similar
experiments with hrefs other than those to Yahoo! (where the text associated with the URL is likely to be
equally clear-cut) suggested similar values of B. We therefore set B to 50.
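The experiment that fixes B is easy to reproduce in outline. The sketch below bins, in 25-byte buckets, the signed distance from each href to the nearest occurrence of the descriptive string; the corpus, target URL, and descriptive text are placeholders for the 5000-page test set described above, and the refinement that occurrences inside the anchor text count as distance zero is omitted.

import re
from collections import Counter

def distance_histogram(pages_html, target_url, descriptive_text, bin_size=25):
    # For every href to target_url, record the signed byte distance to the
    # nearest occurrence of descriptive_text (negative = before the href),
    # binned into bin_size-byte buckets.
    histogram = Counter()
    href_pattern = re.compile(
        r'<a\s+[^>]*href="' + re.escape(target_url) + r'"', re.IGNORECASE)
    text_pattern = re.compile(re.escape(descriptive_text), re.IGNORECASE)
    for html in pages_html:
        positions = [m.start() for m in text_pattern.finditer(html)]
        if not positions:
            continue
        for href in href_pattern.finditer(html):
            nearest = min(positions, key=lambda pos: abs(pos - href.start()))
            bucket = ((nearest - href.start()) // bin_size) * bin_size
            histogram[bucket] += 1
    return histogram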
2.2. Implementation
Our experimental system consists of a computation kernel written in C, and a control layer and GUI
written in Tcl/Tk. An 80GB disk-based web-cache hosted on a PC enables us to store augmented sets for
various topics locally, allowing us to repeat text and link analysis for various parameter settings. The
emphasis in our current implementation has not been on heavy-duty performance (we do not
envision our system fielding thousands of queries per second and producing answers in real time);
instead, we focused on the quality of our resource lists. The iterative computation at the core of the
analysis takes about a second for a single resource list on a variety of modern platforms. We expect
that, in full-fledged taxonomy generation, the principal bottleneck will be the time cost of crawling the
web and extracting all the root and augmented sets.
3. Experiments
In this section we describe the setup by which a panel of human users evaluated our resource lists in
comparison with Yahoo! and Infoseek. The parameters for this experiment are: (1) the choice of topics;
(2) well-known sources to compare with the output of ARC; (3) the metrics for evaluation; and (4) the
choice of volunteers to test the output of our system.
3.1. Topics and baselines for comparison
Our test topics had to be chosen so that, with some reasonable browsing effort, our volunteers could make
judgments about the quality of our output even if they were not experts on the topic. One way the
volunteers could do this relatively easily was through comparison with similar resource pages in
well-known Web directories. Several Web directories, such as Yahoo! and Infoseek, are regarded as
"super-hubs"; therefore it is natural to pick such directories for comparison. This in part dictated the
possible topics we could experiment on.
We started by picking a set of topics, each described by a word or a short phrase (2-3 words). Most
topics were picked so that there were representative "resource" pages in both Yahoo! and Infoseek. We
tried to touch topics in arts, sciences, health, entertainment, and social issues. We picked 28 topics for
our study: affirmative action, alcoholism, amusement parks, architecture, bicycling, blues, classical
guitar, cheese, cruises, computer vision, field hockey, gardening, graphic design, Gulf war, HIV, lyme
disease, mutual funds, parallel architecture, rock climbing, recycling cans, stamp collecting,
Shakespeare, sushi, telecommuting, Thailand tourism, table tennis, vintage cars and zen buddhism. We
therefore believe that our system was tested on fairly representative topics for which typical web users
are likely to seek authoritative resource lists.
3.2. Volunteers and test setup
The participants ranged in age from their early 20's to their 50's, and were spread across North America and Asia.

References

[Golub89] G. H. Golub and C. F. Van Loan. Matrix Computations. Johns Hopkins University Press, 1989.

[BrinPage97] S. Brin and L. Page. The anatomy of a large-scale hypertextual Web search engine. In Proceedings of the 7th International World Wide Web Conference, 1998.

[Kleinberg97] J. Kleinberg. Authoritative sources in a hyperlinked environment. In Proceedings of the 9th Annual ACM-SIAM Symposium on Discrete Algorithms, 1998.

[Pirolli, Pitkow, and Rao] P. Pirolli, J. Pitkow, and R. Rao. Silk from a sow's ear: Extracting usable structures from the Web. In Proceedings of the ACM SIGCHI Conference on Human Factors in Computing Systems (CHI '96), 1996.