Author

Andrei Z. Broder

Other affiliations: AltaVista, IBM, Columbia University

Bio: Andrei Z. Broder is an academic researcher at Google. He has contributed to research topics including Web search queries and Web query classification. He has an h-index of 67 and has co-authored 241 publications receiving 27,310 citations. His previous affiliations include AltaVista and IBM.


Papers
Patent
03 Apr 2008
TL;DR: In this patent, a method is provided to match an advertisement to a search query by generating an ad query that includes unigram features, classification features with respect to an external classification system, and phrase features.
Abstract: A method is provided to match an advertisement to a search query comprising: receiving search results produced by a search engine in response to a search query; producing an ad query that includes unigram features, classification features with respect to an external classification system, and phrase features; producing a plurality of representations of corresponding advertisements in terms of the same types of features; and selecting one or more advertisements based upon a measure of similarity of ad query features to advertisements represented in terms of the same features.
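The matching step reduces to representing the ad query and each ad in one common feature space and scoring by similarity. Below is a minimal sketch of that idea, assuming cosine similarity over bag-of-features vectors; the featurizer, class labels, and ad texts are hypothetical, and phrase features are omitted for brevity.

```python
from collections import Counter
from math import sqrt

def featurize(text, classes=()):
    # Unigram features from the text plus (hypothetical) labels from an
    # external classification system; phrase features omitted for brevity.
    feats = Counter(text.lower().split())
    feats.update({f"class:{c}": 1 for c in classes})
    return feats

def cosine(a, b):
    dot = sum(a[k] * b[k] for k in a.keys() & b.keys())
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

ad_query = featurize("cheap flights to paris", classes=["Travel"])
ads = {
    "ad-1": featurize("discount airfare to paris", classes=["Travel"]),
    "ad-2": featurize("garden tools spring sale", classes=["Home"]),
}
# Select the ad whose representation is most similar to the ad query.
best = max(ads, key=lambda name: cosine(ad_query, ads[name]))
```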

34 citations

Proceedings ArticleDOI
01 Dec 1984
TL;DR: The analysis involves the surviving route graph, which consists of all non-faulty nodes in the network with two nodes being connected by a directed edge iff the route from the first to the second is still intact after a set of component failures.
Abstract: We analyze the problem of constructing a network which will have a fixed routing and which will be highly fault tolerant. A construction is presented which forms a “product route graph” from two or more constituent “route graphs.” The analysis involves the surviving route graph, which consists of all non-faulty nodes in the network with two nodes being connected by a directed edge iff the route from the first to the second is still intact after a set of component failures. The diameter of the surviving route graph, that is, the maximum distance between any pair of nodes, is a measure of the worst-case performance degradation caused by the faults. The number of faults tolerated, the diameter, and the degree of the product graph are related in a simple way to the corresponding parameters of the constituent graphs. In addition, there is a “padding theorem” which allows one to add nodes to a graph and to extend a previous routing.
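To make the central object concrete, here is a small sketch of building the surviving route graph and measuring its diameter, assuming routes are given as an explicit table mapping each ordered pair of endpoints to its fixed path; the networkx usage is illustrative.

```python
import networkx as nx

def surviving_route_graph(nodes, routes, faults):
    # Keep the non-faulty nodes; add a directed edge u -> v iff the fixed
    # route from u to v survives, i.e. avoids every faulty node.
    ok = set(nodes) - set(faults)
    g = nx.DiGraph()
    g.add_nodes_from(ok)
    for (u, v), path in routes.items():
        if u in ok and v in ok and not set(path) & set(faults):
            g.add_edge(u, v)
    return g

# The diameter of this graph measures worst-case performance degradation
# (nx.diameter requires the surviving graph to be strongly connected):
# worst_case = nx.diameter(surviving_route_graph(nodes, routes, faults))
```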

33 citations

Book ChapterDOI
08 Oct 1998
TL;DR: It is shown that approximate min-wise independence allows similar uses, by presenting a derandomization of the RNC algorithm for approximate set cover due to S. Rajagopalan and V. Vazirani.
Abstract: Min-wise independence is a recently introduced notion of limited independence, similar in spirit to pairwise independence. The latter has proven essential for the derandomization of many algorithms. Here we show that approximate min-wise independence allows similar uses, by presenting a derandomization of the RNC algorithm for approximate set cover due to S. Rajagopalan and V. Vazirani. We also discuss how to derandomize their set multi-cover and multi-set multi-cover algorithms in restricted cases. The multi-cover case leads us to discuss the concept of k-minima-wise independence, a natural counterpart to k-wise independence.
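For context, the signature application of (approximately) min-wise independent hash families is estimating set resemblance: the probability that two sets attain the same minimum under a random min-wise hash equals their Jaccard similarity. A minimal sketch, using simple random linear hash functions as a stand-in for a truly min-wise independent family (this illustrates the notion, not the derandomization in the paper):

```python
import random

def resemblance_estimate(a, b, k=200, seed=0):
    # k random linear hash functions approximate min-wise independent
    # permutations; Pr[min-hashes agree] ~= |a & b| / |a | b|.
    rng = random.Random(seed)
    p = 2**31 - 1
    coeffs = [(rng.randrange(1, p), rng.randrange(p)) for _ in range(k)]
    agree = 0
    for c0, c1 in coeffs:
        h = lambda x: (c0 * hash(x) + c1) % p
        agree += min(map(h, a)) == min(map(h, b))
    return agree / k

print(resemblance_estimate({1, 2, 3, 4}, {2, 3, 4, 5}))  # roughly 3/5
```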

31 citations

Proceedings ArticleDOI
01 Jan 1993
TL;DR: A generic deterministic on-line algorithm and a generic randomized on-line algorithm for P that are competitive over all possible inputs are constructed, and it is shown that their competitive ratios are optimal up to constant factors.
Abstract: Let {A1, A2, ..., Am} be a set of on-line algorithms for a problem P with input set I. We assume that P can be represented as a metrical task system. Each Ai has a competitive ratio si with respect to the optimum offline algorithm, but only for a subset of the possible inputs, such that the union of these subsets covers I. Given this setup, we construct a generic deterministic on-line algorithm and a generic randomized on-line algorithm for P that are competitive over all possible inputs. We show that their competitive ratios are optimal up to constant factors. Our analysis proceeds via an amusing card game.
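The flavor of such a deterministic combiner can be sketched with the standard budget-doubling technique: cycle through the candidate algorithms, abandoning one as soon as its cumulative cost would exceed the current budget, and double the budget once every candidate has been tried. This is an illustrative sketch under simplified assumptions (it ignores the switching costs a metrical task system charges, so it is not the paper's exact construction); the algorithm objects and the cost callback are hypothetical.

```python
def combine_online(algorithms, requests, cost):
    # Budget-doubling combiner (illustrative): serve each request with the
    # current candidate unless that would push its cumulative cost past the
    # budget; otherwise rotate to the next candidate, doubling the budget
    # after a full cycle. Terminates since the budget grows geometrically.
    budget = 1.0
    spent = [0.0] * len(algorithms)
    i = 0
    for r in requests:
        while spent[i] + cost(algorithms[i], r) > budget:
            i = (i + 1) % len(algorithms)
            if i == 0:
                budget *= 2.0
        spent[i] += cost(algorithms[i], r)
        algorithms[i].serve(r)
```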

31 citations

Proceedings ArticleDOI
28 Jan 1996
TL;DR: A randomized polynomial time algorithm that works for almost all graphs; more precisely, in the G_{n,m} or G_{n,p} models, the algorithm succeeds with high probability for all edge densities above the connectivity threshold.
Abstract: Given a graph G = (V, E) and a set of pairs of vertices in V, we are interested in finding for each pair (a_i, b_i) a path connecting a_i to b_i, such that the set of paths so found is vertex-disjoint. (The problem is NP-complete for general graphs as well as for planar graphs. It is in P if the number of pairs is fixed.) Our model is that the graph is chosen first, then an adversary chooses the pairs of endpoints, subject only to obvious feasibility constraints, namely, all pairs must be disjoint, no more than a constant fraction of the vertices can be required for the paths, and not "too many" neighbors of a vertex can be endpoints. We present a randomized polynomial time algorithm that works for almost all graphs; more precisely, in the G_{n,m} or G_{n,p} models, the algorithm succeeds with high probability for all edge densities above the connectivity threshold. The set of pairs that can be accommodated is optimal up to constant factors. Although the analysis is intricate, the algorithm itself is quite simple and suggests a practical heuristic. We include two applications of the main result, one in the context of circuit switching communication, the other in the context of topological embeddings of graphs.
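The abstract notes the algorithm suggests a practical heuristic. A naive greedy version of the idea routes each pair along a shortest path in the remaining graph and then removes that path's vertices so later paths stay disjoint; this sketch conveys the flavor only and is not the paper's algorithm.

```python
import networkx as nx

def greedy_vertex_disjoint_paths(g, pairs):
    # Route each pair along a shortest path in what remains of the graph,
    # then delete the path's vertices to enforce vertex-disjointness.
    # A naive heuristic sketch, not the paper's randomized algorithm.
    h = g.copy()
    paths = {}
    for a, b in pairs:
        try:
            p = nx.shortest_path(h, a, b)
        except (nx.NodeNotFound, nx.NetworkXNoPath):
            paths[(a, b)] = None
            continue
        paths[(a, b)] = p
        h.remove_nodes_from(p)
    return paths
```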

30 citations


Cited by
Journal ArticleDOI
TL;DR: In this paper, a simple model based on growth and preferential attachment was proposed that reproduces the power-law degree distribution of real networks and captures the evolution of networks, not just their static topology.
Abstract: The emergence of order in natural systems is a constant source of inspiration for both the physical and biological sciences. While the spatial order characterizing, for example, crystals has been the basis of many advances in contemporary physics, most complex systems in nature do not offer such a high degree of order. Many of these systems form complex networks whose nodes are the elements of the system and whose edges represent the interactions between them. Traditionally, complex networks have been described by the random graph theory founded in 1959 by Paul Erdős and Alfréd Rényi. One of the defining features of random graphs is that they are statistically homogeneous, and their degree distribution (characterizing the spread in the number of edges starting from a node) is a Poisson distribution. In contrast, recent empirical studies, including the work of our group, indicate that the topology of real networks is much richer than that of random graphs. In particular, the degree distribution of real networks is a power law, indicating a heterogeneous topology in which the majority of the nodes have a small degree, but there is a significant fraction of highly connected nodes that play an important role in the connectivity of the network. The scale-free topology of real networks has very important consequences for their functioning. For example, we have discovered that scale-free networks are extremely resilient to the random disruption of their nodes. On the other hand, the selective removal of the nodes with highest degree induces a rapid breakdown of the network into isolated subparts that cannot communicate with each other. The non-trivial scaling of the degree distribution of real networks is also an indication of their assembly and evolution. Indeed, our modeling studies have shown that there are general principles governing the evolution of networks. Most networks start from a small seed and grow by the addition of new nodes, which attach to the nodes already in the system. This process obeys preferential attachment: the new nodes are more likely to connect to nodes with already high degree. We have proposed a simple model based on these two principles which was able to reproduce the power-law degree distribution of real networks. Perhaps even more importantly, this model paved the way to a new paradigm of network modeling, trying to capture the evolution of networks, not just their static topology.
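The two principles, growth and preferential attachment, translate into a very short generative procedure. A minimal sketch of the standard construction follows; n is the final network size and m is the number of edges each new node brings (both parameter names are ours).

```python
import random

def preferential_attachment_edges(n, m, seed=0):
    # Grow from a small seed; each new node attaches m edges to existing
    # nodes chosen with probability proportional to their current degree,
    # implemented by sampling from a degree-weighted pool of endpoints.
    rng = random.Random(seed)
    edges = []
    pool = []                       # nodes, with multiplicity = degree
    targets = set(range(m))         # seed nodes
    for v in range(m, n):
        edges.extend((v, t) for t in targets)
        pool.extend(targets)
        pool.extend([v] * m)
        targets = set()
        while len(targets) < m:
            targets.add(rng.choice(pool))
    return edges                    # degree distribution approaches a power law
```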

18,415 citations

Journal ArticleDOI
TL;DR: Developments in this field are reviewed, including such concepts as the small-world effect, degree distributions, clustering, network correlations, random graph models, models of network growth and preferential attachment, and dynamical processes taking place on networks.
Abstract: Inspired by empirical studies of networked systems such as the Internet, social networks, and biological networks, researchers have in recent years developed a variety of techniques and models to help us understand or predict the behavior of these systems. Here we review developments in this field, including such concepts as the small-world effect, degree distributions, clustering, network correlations, random graph models, models of network growth and preferential attachment, and dynamical processes taking place on networks.

17,647 citations

Journal ArticleDOI
TL;DR: This article proposes a method for detecting communities, built around the idea of using centrality indices to find community boundaries, and tests it on computer-generated and real-world graphs whose community structure is already known and finds that the method detects this known structure with high sensitivity and reliability.
Abstract: A number of recent studies have focused on the statistical properties of networked systems such as social networks and the Worldwide Web. Researchers have concentrated particularly on a few properties that seem to be common to many networks: the small-world property, power-law degree distributions, and network transitivity. In this article, we highlight another property that is found in many networks, the property of community structure, in which network nodes are joined together in tightly knit groups, between which there are only looser connections. We propose a method for detecting such communities, built around the idea of using centrality indices to find community boundaries. We test our method on computer-generated and real-world graphs whose community structure is already known and find that the method detects this known structure with high sensitivity and reliability. We also apply the method to two networks whose community structure is not well known—a collaboration network and a food web—and find that it detects significant and informative community divisions in both cases.
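The mechanism is easy to state in code: repeatedly recompute edge betweenness on the current graph and delete the highest-scoring edge until the network splits. A minimal sketch using networkx:

```python
import networkx as nx

def girvan_newman_split(g):
    # Remove the highest-betweenness edge, recomputing betweenness after
    # every removal, until a new connected component appears; the boundary
    # edges between communities carry the most shortest paths.
    h = g.copy()
    start = nx.number_connected_components(h)
    while nx.number_connected_components(h) == start:
        betweenness = nx.edge_betweenness_centrality(h)
        h.remove_edge(*max(betweenness, key=betweenness.get))
    return list(nx.connected_components(h))
```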

14,429 citations

Journal ArticleDOI
TL;DR: Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis.
Abstract: Machine Learning is the study of methods for programming computers to learn. Computers are applied to a wide range of tasks, and for most of these it is relatively easy for programmers to design and implement the necessary software. However, there are many tasks for which this is difficult or impossible. These can be divided into four general categories. First, there are problems for which there exist no human experts. For example, in modern automated manufacturing facilities, there is a need to predict machine failures before they occur by analyzing sensor readings. Because the machines are new, there are no human experts who can be interviewed by a programmer to provide the knowledge necessary to build a computer system. A machine learning system can study recorded data and subsequent machine failures and learn prediction rules. Second, there are problems where human experts exist, but where they are unable to explain their expertise. This is the case in many perceptual tasks, such as speech recognition, handwriting recognition, and natural language understanding. Virtually all humans exhibit expert-level abilities on these tasks, but none of them can describe the detailed steps that they follow as they perform them. Fortunately, humans can provide machines with examples of the inputs and correct outputs for these tasks, so machine learning algorithms can learn to map the inputs to the outputs. Third, there are problems where phenomena are changing rapidly. In finance, for example, people would like to predict the future behavior of the stock market, of consumer purchases, or of exchange rates. These behaviors change frequently, so that even if a programmer could construct a good predictive computer program, it would need to be rewritten frequently. A learning program can relieve the programmer of this burden by constantly modifying and tuning a set of learned prediction rules. Fourth, there are applications that need to be customized for each computer user separately. Consider, for example, a program to filter unwanted electronic mail messages. Different users will need different filters. It is unreasonable to expect each user to program his or her own rules, and it is infeasible to provide every user with a software engineer to keep the rules up-to-date. A machine learning system can learn which mail messages the user rejects and maintain the filtering rules automatically. Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis. Statistics focuses on understanding the phenomena that have generated the data, often with the goal of testing different hypotheses about those phenomena. Data mining seeks to find patterns in the data that are understandable by people. Psychological studies of human learning aspire to understand the mechanisms underlying the various learning behaviors exhibited by people (concept learning, skill acquisition, strategy change, etc.).
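The fourth category, the per-user mail filter, is the classic supervised-learning setup: learn a mapping from message features to keep/reject labels from the user's own behavior. A minimal sketch with scikit-learn; the toy messages and labels are invented for illustration.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Toy training data standing in for messages the user kept or rejected.
messages = ["win a free prize now", "meeting moved to noon",
            "free offer click here", "quarterly report attached"]
labels = ["reject", "keep", "reject", "keep"]

vectorizer = CountVectorizer()
classifier = MultinomialNB().fit(vectorizer.fit_transform(messages), labels)

# As the user labels more mail, refitting keeps the filter up to date.
print(classifier.predict(vectorizer.transform(["click now for a free prize"])))
# -> ['reject']
```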

13,246 citations

Journal ArticleDOI
TL;DR: It is demonstrated that the algorithms proposed are highly effective at discovering community structure in both computer-generated and real-world network data, and can be used to shed light on the sometimes dauntingly complex structure of networked systems.
Abstract: We propose and study a set of algorithms for discovering community structure in networks-natural divisions of network nodes into densely connected subgroups. Our algorithms all share two definitive features: first, they involve iterative removal of edges from the network to split it into communities, the edges removed being identified using any one of a number of possible "betweenness" measures, and second, these measures are, crucially, recalculated after each removal. We also propose a measure for the strength of the community structure found by our algorithms, which gives us an objective metric for choosing the number of communities into which a network should be divided. We demonstrate that our algorithms are highly effective at discovering community structure in both computer-generated and real-world network data, and show how they can be used to shed light on the sometimes dauntingly complex structure of networked systems.
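The objective metric the abstract refers to is modularity; in practice one scores each successive split and keeps the best. A short usage sketch with networkx's implementations of both pieces:

```python
import networkx as nx
from networkx.algorithms.community import girvan_newman, modularity

g = nx.karate_club_graph()   # classic benchmark with known community structure
# Score every successive betweenness-based split by modularity; keep the best.
best = max(girvan_newman(g), key=lambda split: modularity(g, split))
print(len(best), "communities, modularity", round(modularity(g, best), 3))
```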

12,882 citations