Author

Andrei Z. Broder

Other affiliations: AmeriCorps VISTA, IBM, Columbia University

Bio: Andrei Z. Broder is an academic researcher from Google. The author has contributed to research in topics including Web search query and Web query classification. The author has an h-index of 67 and has co-authored 241 publications receiving 27,310 citations. Previous affiliations of Andrei Z. Broder include AmeriCorps VISTA and IBM.


Papers
Journal ArticleDOI
TL;DR: The probability-generating function in closed form for the asymptotic cost of insertion via random probing with secondary clustering is derived, and it is shown that, for higher-order clustering, all the moments of the probability distribution of the insertion cost exist and are asymptotically equal to the corresponding moments of the cost distribution under uniform hashing.
Abstract: A new approach to the analysis of random probing hashing algorithms is presented. The probability-generating function in closed form for the asymptotic cost of insertion via random probing with secondary clustering is derived. For higher-order clustering, it is shown that all the moments of the probability distribution of the insertion cost exist and are asymptotically equal to the corresponding moments of the cost distribution under uniform hashing. The method in this paper also leads to simple derivations for the expected cost of insertion for random probing with secondary and higher-order clustering.

5 citations
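
The paper's analysis is exact and based on generating functions; the following is only a rough Monte Carlo sketch of the quantity being analyzed (insertion cost under random probing with secondary clustering), with table size, load factor, and sample size chosen purely for illustration.

```python
import random

def insert(table, m, key_hash):
    """Probe until an empty slot is found; return the number of probes.

    Secondary clustering: the probe sequence is a pseudo-random permutation
    determined solely by the key's initial hash value, so all keys with the
    same hash value follow the same sequence.
    """
    rng = random.Random(key_hash)
    for probes, slot in enumerate(rng.sample(range(m), m), start=1):
        if not table[slot]:
            table[slot] = True
            return probes
    raise RuntimeError("table is full")

def mean_cost_at_load(m=1009, load=0.9, seed=0):
    """Empirical mean cost of the insertions made near the target load factor."""
    rng = random.Random(seed)
    table = [False] * m
    n = int(load * m)
    costs = [insert(table, m, rng.randrange(m)) for _ in range(n)]
    return sum(costs[-50:]) / 50            # average the last 50 insertions

# Uniform hashing would need roughly 1 / (1 - load) probes at this load factor.
print("empirical:", round(mean_cost_at_load(), 2), " 1/(1-0.9) =", 10.0)
```
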

Patent
07 May 2008
TL;DR: In this paper, a set of features extracted from the plurality of digital ads and one of the webpage content or the search query is used to build a model that predicts a degree of relevance between a set of candidate ads and a second webpage content or a second search query.
Abstract: Systems and methods for building a prediction model to predict a degree of relevance between digital ads and a search query or webpage content are disclosed. Generally, an indication of relevance is received between a plurality of digital ads and one of a webpage content or a search query. A set of features is extracted from the plurality of digital ads and one of the webpage content or the search query. A prediction model is then built to predict a degree of relevance between the set of candidate digital ads and one of a second webpage content or a second search query, where the prediction model is built based at least in part on the received indication of relevance and the extracted set of features.

5 citations
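
As a rough illustration of the pipeline in the abstract (extract features from ad/query pairs with known relevance labels, then fit a model that scores new pairs), here is a sketch using scikit-learn. The particular features and the logistic-regression learner are assumptions for the example, not the patent's specification.

```python
# Sketch: learn to predict ad/query relevance from extracted features.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics.pairwise import cosine_similarity
import numpy as np

def extract_features(ads, queries, vectorizer):
    """One row per (ad, query) pair: tf-idf cosine similarity and word overlap."""
    A = vectorizer.transform(ads)
    Q = vectorizer.transform(queries)
    cos = cosine_similarity(A, Q).diagonal()
    overlap = [len(set(a.lower().split()) & set(q.lower().split()))
               for a, q in zip(ads, queries)]
    return np.column_stack([cos, overlap])

# Toy training data: (ad text, query text, human relevance label)
ads     = ["cheap flights to rome", "running shoes sale", "rome hotel deals"]
queries = ["flights rome",          "laptop repair",      "hotels in rome"]
labels  = [1, 0, 1]

vec = TfidfVectorizer().fit(ads + queries)
model = LogisticRegression().fit(extract_features(ads, queries, vec), labels)

# Score a candidate ad against a new query.
X_new = extract_features(["rome city tours"], ["things to do in rome"], vec)
print("predicted relevance:", model.predict_proba(X_new)[0, 1])
```
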

Proceedings ArticleDOI
28 Mar 2011
TL;DR: This work develops efficient algorithms for evaluating graph constraints over arbitrary directed graphs G and presents experimental results that demonstrate the effectiveness and scalability of the proposed algorithms using a realistic dataset from Yahoo!'s Web advertising exchange.
Abstract: We introduce the problem of evaluating graph constraints in content-based publish/subscribe (pub/sub) systems. This problem formulation extends traditional content-based pub/sub systems in the following manner: publishers and subscribers are connected via a (logical) directed graph G with node and edge constraints, which limits the set of valid paths between them. Such graph constraints can be used to model a Web advertising exchange (where there may be restrictions on how advertising networks can connect advertisers and publishers) and content delivery problems in social networks (where there may be restrictions on how information can be shared via the social graph). In this context, we develop efficient algorithms for evaluating graph constraints over arbitrary directed graphs G. We also present experimental results that demonstrate the effectiveness and scalability of the proposed algorithms using a realistic dataset from Yahoo!'s Web advertising exchange.

4 citations
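
A naive per-event version of the evaluation problem (not the paper's efficient algorithms) can be sketched as a constrained reachability check: is there a path from publisher to subscriber along which every node and edge constraint accepts the event? The graph, constraints, and event fields below are made up for illustration.

```python
from collections import deque

def has_valid_path(graph, node_ok, edge_ok, event, source, target):
    """BFS restricted to nodes and edges whose constraints accept `event`.

    graph   : dict node -> list of successor nodes
    node_ok : dict node -> predicate(event) -> bool
    edge_ok : dict (u, v) -> predicate(event) -> bool
    """
    if not node_ok[source](event):
        return False
    seen, queue = {source}, deque([source])
    while queue:
        u = queue.popleft()
        if u == target:
            return True
        for v in graph.get(u, []):
            if v in seen:
                continue
            if edge_ok.get((u, v), lambda e: True)(event) and node_ok[v](event):
                seen.add(v)
                queue.append(v)
    return False

# Toy ad-exchange graph: advertiser -> network -> publisher
graph   = {"advertiser": ["network"], "network": ["publisher"]}
node_ok = {"advertiser": lambda e: True,
           "network":    lambda e: e["category"] != "restricted",
           "publisher":  lambda e: e["bid"] >= 0.10}
edge_ok = {("network", "publisher"): lambda e: e["geo"] == "US"}

event = {"category": "travel", "bid": 0.25, "geo": "US"}
print(has_valid_path(graph, node_ok, edge_ok, event, "advertiser", "publisher"))
```
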

Proceedings ArticleDOI
02 Feb 2015
TL;DR: A distinguished panel of eminent scientists, from both Industry and Academia, will share their point of view and take questions from the moderator and the audience to answer what Big Data means from a scientific perspective.
Abstract: Gartner's 2014 Hype Cycle, released last August, moves Big Data technology from the Peak of Inflated Expectations to the beginning of the Trough of Disillusionment, when interest starts to wane as reality does not live up to previous promises. As the hype is starting to dissipate, it is worth asking what Big Data (however defined) means from a scientific perspective: Did the emergence of gigantic corpora expose the limits of classical information retrieval and data mining and lead to new concepts and challenges, the way, say, the study of electromagnetism showed the limits of Newtonian mechanics and led to Relativity Theory, or is it all just "sound and fury, signifying nothing", simply a matter of scaling up well understood technologies? To answer this question, we have assembled a distinguished panel of eminent scientists, from both Industry and Academia: Lada Adamic (Facebook), Michael Franklin (University of California at Berkeley), Maarten de Rijke (University of Amsterdam), Eric Xing (Carnegie Mellon University), and Kai Yu (Baidu) will share their point of view and take questions from the moderator and the audience.

4 citations

Proceedings Article
01 Jan 2008
TL;DR: This work proposes augmenting collaborative reviewing systems with an automatic annotation capability that helps users interpret reviews and describes an algorithm that is able to derive annotations of the form “This reviewer rates this movie better than 4 out of 6 other Woody Allen comedies that he rated”.
Abstract: We propose augmenting collaborative reviewing systems with an automatic annotation capability that helps users interpret reviews. Given an item and its review by a certain author, our approach is to find a reference set of similar items that is both easy to describe and meaningful to users. Depending on the number of available same-author reviews of items in the reference set, an annotation produced by our system may consist of similar items that the author has reviewed, the rank of the reviewed item among items in this set, a comparison of the author’s scores to averages, and other similar information that indicates the biases and competencies of the reviewer. We validate our approach in the context of movie reviews and describe an algorithm that, for example, presented with a review of a Woody Allen comedy, is able to derive annotations of the form: “This reviewer rates this movie better than 4 out of 6 other Woody Allen comedies that he rated” or “This is the only Woody Allen comedy among the 29 movies rated by this reviewer” or “This reviewer rated 85 comedies. He likes this movie more than 60% of them. He likes comedies less than the average reviewer.”

4 citations
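
A minimal sketch of one annotation type from the abstract: ranking the reviewed item among the author's ratings of a reference set of similar items. The data and the notion of "similar items" are placeholders, not the paper's algorithm for choosing reference sets.

```python
def rank_annotation(author_ratings, item, reference_set):
    """author_ratings: dict item -> score; reference_set: similar items."""
    others = [i for i in reference_set if i != item and i in author_ratings]
    if not others:
        return None
    beaten = sum(author_ratings[item] > author_ratings[i] for i in others)
    return (f"This reviewer rates this movie better than "
            f"{beaten} out of {len(others)} other items in the reference set.")

ratings = {"Annie Hall": 9, "Manhattan": 8, "Bananas": 6, "Sleeper": 7}
woody_allen_comedies = ["Annie Hall", "Manhattan", "Bananas", "Sleeper", "Zelig"]
print(rank_annotation(ratings, "Annie Hall", woody_allen_comedies))
```
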


Cited by
Journal ArticleDOI
TL;DR: A simple model based on growth and preferential attachment is proposed that reproduces the power-law degree distribution of real networks and opens the way to capturing the evolution of networks, not just their static topology.
Abstract: The emergence of order in natural systems is a constant source of inspiration for both physical and biological sciences. While the spatial order characterizing, for example, crystals has been the basis of many advances in contemporary physics, most complex systems in nature do not offer such a high degree of order. Many of these systems form complex networks whose nodes are the elements of the system and whose edges represent the interactions between them. Traditionally, complex networks have been described by the random graph theory founded in 1959 by Paul Erdős and Alfréd Rényi. One of the defining features of random graphs is that they are statistically homogeneous, and their degree distribution (characterizing the spread in the number of edges starting from a node) is a Poisson distribution. In contrast, recent empirical studies, including the work of our group, indicate that the topology of real networks is much richer than that of random graphs. In particular, the degree distribution of real networks is a power-law, indicating a heterogeneous topology in which the majority of the nodes have a small degree, but there is a significant fraction of highly connected nodes that play an important role in the connectivity of the network. The scale-free topology of real networks has very important consequences for their functioning. For example, we have discovered that scale-free networks are extremely resilient to the random disruption of their nodes. On the other hand, the selective removal of the nodes with highest degree induces a rapid breakdown of the network into isolated subparts that cannot communicate with each other. The non-trivial scaling of the degree distribution of real networks is also an indication of their assembly and evolution. Indeed, our modeling studies have shown us that there are general principles governing the evolution of networks. Most networks start from a small seed and grow by the addition of new nodes which attach to the nodes already in the system. This process obeys preferential attachment: the new nodes are more likely to connect to nodes with already high degree. We have proposed a simple model based on these two principles which was able to reproduce the power-law degree distribution of real networks. Perhaps even more importantly, this model paved the way to a new paradigm of network modeling, trying to capture the evolution of networks, not just their static topology.

18,415 citations
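
The growth-plus-preferential-attachment mechanism described above is easy to simulate. This sketch is a bare-bones version of that construction (with illustrative parameters), showing the heavy-tailed degree distribution emerging: many low-degree nodes and a few highly connected hubs.

```python
import random
from collections import Counter

def preferential_attachment(n_nodes, m_per_node, seed=0):
    """Grow a network: each new node links to m existing nodes,
    chosen with probability proportional to their current degree."""
    rng = random.Random(seed)
    seed_nodes = range(m_per_node + 1)                 # small starting clique
    edges = [(i, j) for i in seed_nodes for j in seed_nodes if i < j]
    # `stubs` lists each node once per incident edge, so uniform sampling
    # from it is sampling proportionally to degree.
    stubs = [n for e in edges for n in e]
    for new in range(m_per_node + 1, n_nodes):
        chosen = set()
        while len(chosen) < m_per_node:
            chosen.add(rng.choice(stubs))
        for old in chosen:
            edges.append((new, old))
            stubs.extend([new, old])
    return edges

edges = preferential_attachment(10_000, 3)
degree = Counter(n for e in edges for n in e)
hist = Counter(degree.values())
for k in sorted(hist)[:8]:
    print(k, hist[k])          # counts fall off roughly as a power law
```
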

Journal ArticleDOI
TL;DR: Developments in this field are reviewed, including such concepts as the small-world effect, degree distributions, clustering, network correlations, random graph models, models of network growth and preferential attachment, and dynamical processes taking place on networks.
Abstract: Inspired by empirical studies of networked systems such as the Internet, social networks, and biological networks, researchers have in recent years developed a variety of techniques and models to help us understand or predict the behavior of these systems. Here we review developments in this field, including such concepts as the small-world effect, degree distributions, clustering, network correlations, random graph models, models of network growth and preferential attachment, and dynamical processes taking place on networks.

17,647 citations

Journal ArticleDOI
TL;DR: This article proposes a method for detecting communities, built around the idea of using centrality indices to find community boundaries, and tests it on computer-generated and real-world graphs whose community structure is already known and finds that the method detects this known structure with high sensitivity and reliability.
Abstract: A number of recent studies have focused on the statistical properties of networked systems such as social networks and the Worldwide Web. Researchers have concentrated particularly on a few properties that seem to be common to many networks: the small-world property, power-law degree distributions, and network transitivity. In this article, we highlight another property that is found in many networks, the property of community structure, in which network nodes are joined together in tightly knit groups, between which there are only looser connections. We propose a method for detecting such communities, built around the idea of using centrality indices to find community boundaries. We test our method on computer-generated and real-world graphs whose community structure is already known and find that the method detects this known structure with high sensitivity and reliability. We also apply the method to two networks whose community structure is not well known—a collaboration network and a food web—and find that it detects significant and informative community divisions in both cases.

14,429 citations
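
The betweenness-based idea can be tried directly with networkx's girvan_newman routine; the karate-club graph below merely stands in for a network whose community structure is already known.

```python
import networkx as nx
from networkx.algorithms.community import girvan_newman

G = nx.karate_club_graph()
# Each step removes the highest-betweenness edges and yields a finer division;
# the first split separates the graph into two communities.
first_split = next(girvan_newman(G))
for community in first_split:
    print(sorted(community))
```
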

Journal ArticleDOI
TL;DR: Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis.
Abstract: Machine Learning is the study of methods for programming computers to learn. Computers are applied to a wide range of tasks, and for most of these it is relatively easy for programmers to design and implement the necessary software. However, there are many tasks for which this is difficult or impossible. These can be divided into four general categories. First, there are problems for which there exist no human experts. For example, in modern automated manufacturing facilities, there is a need to predict machine failures before they occur by analyzing sensor readings. Because the machines are new, there are no human experts who can be interviewed by a programmer to provide the knowledge necessary to build a computer system. A machine learning system can study recorded data and subsequent machine failures and learn prediction rules. Second, there are problems where human experts exist, but where they are unable to explain their expertise. This is the case in many perceptual tasks, such as speech recognition, hand-writing recognition, and natural language understanding. Virtually all humans exhibit expert-level abilities on these tasks, but none of them can describe the detailed steps that they follow as they perform them. Fortunately, humans can provide machines with examples of the inputs and correct outputs for these tasks, so machine learning algorithms can learn to map the inputs to the outputs. Third, there are problems where phenomena are changing rapidly. In finance, for example, people would like to predict the future behavior of the stock market, of consumer purchases, or of exchange rates. These behaviors change frequently, so that even if a programmer could construct a good predictive computer program, it would need to be rewritten frequently. A learning program can relieve the programmer of this burden by constantly modifying and tuning a set of learned prediction rules. Fourth, there are applications that need to be customized for each computer user separately. Consider, for example, a program to filter unwanted electronic mail messages. Different users will need different filters. It is unreasonable to expect each user to program his or her own rules, and it is infeasible to provide every user with a software engineer to keep the rules up-to-date. A machine learning system can learn which mail messages the user rejects and maintain the filtering rules automatically. Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis. Statistics focuses on understanding the phenomena that have generated the data, often with the goal of testing different hypotheses about those phenomena. Data mining seeks to find patterns in the data that are understandable by people. Psychological studies of human learning aspire to understand the mechanisms underlying the various learning behaviors exhibited by people (concept learning, skill acquisition, strategy change, etc.).

13,246 citations
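
For the fourth category above, the mail-filter example, here is a toy sketch of learning a per-user filter from messages the user has already accepted or rejected. The bag-of-words features and naive Bayes learner are assumptions for the illustration, not anything prescribed in the text.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

messages = ["win a free prize now", "meeting moved to 3pm",
            "cheap loans click here", "lunch tomorrow?"]
rejected = [1, 0, 1, 0]                      # 1 = the user rejected the message

vec = CountVectorizer().fit(messages)
clf = MultinomialNB().fit(vec.transform(messages), rejected)

# Filter a new message: 1 means "likely to be rejected".
print(clf.predict(vec.transform(["free loans, click now"]))[0])
```
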

Journal ArticleDOI
TL;DR: It is demonstrated that the algorithms proposed are highly effective at discovering community structure in both computer-generated and real-world network data, and can be used to shed light on the sometimes dauntingly complex structure of networked systems.
Abstract: We propose and study a set of algorithms for discovering community structure in networks-natural divisions of network nodes into densely connected subgroups. Our algorithms all share two definitive features: first, they involve iterative removal of edges from the network to split it into communities, the edges removed being identified using any one of a number of possible "betweenness" measures, and second, these measures are, crucially, recalculated after each removal. We also propose a measure for the strength of the community structure found by our algorithms, which gives us an objective metric for choosing the number of communities into which a network should be divided. We demonstrate that our algorithms are highly effective at discovering community structure in both computer-generated and real-world network data, and show how they can be used to shed light on the sometimes dauntingly complex structure of networked systems.

12,882 citations
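
The second ingredient described in the abstract, an objective measure for choosing the number of communities, can be sketched by scoring successive betweenness-based splits with modularity and keeping the best. networkx provides both pieces; the graph and the number of candidate divisions are illustrative.

```python
from itertools import islice
import networkx as nx
from networkx.algorithms.community import girvan_newman, modularity

G = nx.karate_club_graph()
splits = list(islice(girvan_newman(G), 10))      # first 10 candidate divisions
best = max(splits, key=lambda division: modularity(G, division))
print("communities:", len(best), " modularity:", round(modularity(G, best), 3))
```
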