Author

Robert E. Tarjan

Bio: Robert E. Tarjan is an academic researcher from Princeton University. The author has contributed to research in topics including time complexity and spanning trees. The author has an h-index of 114 and has co-authored 400 publications receiving 67,305 citations. Previous affiliations of Robert E. Tarjan include AT&T and the Massachusetts Institute of Technology.


Papers
Proceedings ArticleDOI
01 Jan 1988
TL;DR: An algorithm for the assignment problem (minimum cost perfect bipartite matching) that improves the best known sequential algorithm and is within a factor of log(nN) of the best known bound for the problem without costs (maximum cardinality matching).
Abstract: We present algorithms for matching and related problems that run on an EREW PRAM with p processors. Given is a bipartite graph G with n vertices, m edges, and integral edge costs at most N in magnitude. We give an algorithm for the assignment problem (minimum cost perfect bipartite matching) that runs in O(√n m log(nN) log(2p)/p) time and O(m) space, for p ≤ m/(√n log²n). For p = 1 this improves the best known sequential algorithm, and it is within a factor of log(nN) of the best known bound for the problem without costs (maximum cardinality matching). For p > 1 the time is within a factor of log p of optimum speed-up. Extensions include an algorithm for maximum cardinality bipartite matching with slightly better processor bounds, and similar results for bipartite degree-constrained subgraph problems (with and without costs). Our ideas also extend to general graph matching problems.
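For orientation, here is a minimal sketch of the problem the paper parallelizes, solved with SciPy's off-the-shelf sequential assignment solver. The dense cost matrix and its entries are illustrative assumptions; the paper's parallel algorithm is not reproduced here.

```python
# A small assignment-problem instance (minimum cost perfect bipartite
# matching) solved with SciPy's sequential Hungarian-style solver, not the
# paper's parallel algorithm. Rows are left vertices, columns right ones,
# and the integral entries play the role of the N-bounded edge costs.
import numpy as np
from scipy.optimize import linear_sum_assignment

cost = np.array([[4, 1, 3],
                 [2, 0, 5],
                 [3, 2, 2]])

rows, cols = linear_sum_assignment(cost)              # optimal matching
print(list(zip(rows, cols)), cost[rows, cols].sum())  # pairs and total cost
```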

51 citations

Proceedings ArticleDOI
19 May 2012
TL;DR: This work presents the first pointer-based heap implementation with time bounds matching those of Fibonacci heaps in the worst case, and uses the pigeonhole principle in place of the redundant counter mechanism.
Abstract: We present the first pointer-based heap implementation with time bounds matching those of Fibonacci heaps in the worst case. We support make-heap, insert, find-min, meld and decrease-key in worst-case O(1) time, and delete and delete-min in worst-case O(lg n) time, where n is the size of the heap. The data structure uses linear space. A previous, very complicated, solution achieving the same time bounds in the RAM model made essential use of arrays and extensive use of redundant counter schemes to maintain balance. Our solution uses neither. Our key simplification is to discard the structure of the smaller heap when doing a meld. We use the pigeonhole principle in place of the redundant counter mechanism.
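For reference, a toy mergeable heap with the same interface is sketched below in Python. It is built on the array-backed heapq module with lazy deletion, so its bounds are only amortized, nothing like the paper's worst-case guarantees; the meld method merely mimics the paper's idea of discarding the smaller heap's structure, and item names are assumed distinct across heaps.

```python
# Toy mergeable heap: make-heap, insert, find-min, decrease-key,
# delete-min, meld. Amortized bounds only; illustrative, not the paper's
# pointer-based structure.
import heapq
import itertools

REMOVED = object()  # sentinel marking lazily deleted entries

class LazyHeap:
    def __init__(self):                    # make-heap
        self.pq = []                       # array-backed binary heap
        self.entry = {}                    # item -> [key, tiebreak, item]
        self.count = itertools.count()

    def insert(self, item, key):
        e = [key, next(self.count), item]
        self.entry[item] = e
        heapq.heappush(self.pq, e)

    def decrease_key(self, item, new_key):
        self.entry.pop(item)[2] = REMOVED  # invalidate the old entry lazily
        self.insert(item, new_key)

    def find_min(self):
        while self.pq and self.pq[0][2] is REMOVED:
            heapq.heappop(self.pq)         # purge stale entries
        return self.pq[0][2] if self.pq else None

    def delete_min(self):
        item = self.find_min()
        if item is not None:
            heapq.heappop(self.pq)
            del self.entry[item]
        return item

    def meld(self, other):
        # Echo of the paper's simplification: absorb the smaller heap's
        # items wholesale instead of preserving its internal structure.
        small, big = (self, other) if len(self.pq) <= len(other.pq) else (other, self)
        for key, _, item in small.pq:
            if item is not REMOVED:
                big.insert(item, key)
        self.pq, self.entry, self.count = big.pq, big.entry, big.count
        return self
```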

51 citations

Journal ArticleDOI
TL;DR: This paper describes an algorithm to construct, for each expression in a given program text, a symbolic expression whose value is equal to the value of the text expression for all executions of the program.
Abstract: This paper describes an algorithm to construct, for each expression in a given program text, a symbolic expression whose value is equal to the value of the text expression for all executions of the program. We call such a mapping from text expressions to symbolic expressions a cover. Covers are useful in such program optimization techniques as constant propagation and code motion. The particular cover constructed by our methods is in general weaker than the covers obtainable by the methods of [Ki], [FKU], [RL], [R2], but our method has the advantage of being very efficient. It requires $O(m\alpha(m,n) + l)$ operations if extended bit vector operations have unit cost, where n is the number of vertices in the control flow graph of the program, m is the number of edges, l is the length of the program text, and $\alpha$ is related to a functional inverse of Ackermann's function [T2]. Our method does not require that the program be well-structured or that the flow graph be reducible.
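The $\alpha(m,n)$ term in the bound comes from the disjoint-set union structure analyzed in [T2]. As background, a standard sketch of that structure with union by rank and path compression follows; it is not the cover-construction algorithm itself.

```python
# Disjoint-set union with union by rank and path compression: a sequence of
# m operations on n elements costs O(m * alpha(m, n)), the inverse-Ackermann
# bound cited in the abstract.
class DisjointSets:
    def __init__(self, n):
        self.parent = list(range(n))
        self.rank = [0] * n

    def find(self, x):
        root = x
        while self.parent[root] != root:   # locate the root
            root = self.parent[root]
        while self.parent[x] != root:      # path compression pass
            self.parent[x], x = root, self.parent[x]
        return root

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return
        if self.rank[ra] < self.rank[rb]:  # union by rank
            ra, rb = rb, ra
        self.parent[rb] = ra
        if self.rank[ra] == self.rank[rb]:
            self.rank[ra] += 1
```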

51 citations

Proceedings Article
15 Apr 2004
TL;DR: A complete, correct, simpler linear-time dominators algorithm, implementable on either a random-access machine or a pointer machine, together with a linear-time reduction of the dominators problem to a nearest common ancestors problem.
Abstract: The problem of finding dominators in a flowgraph arises in many kinds of global code optimization and other settings. In 1979 Lengauer and Tarjan gave an almost-linear-time algorithm to find dominators. In 1985 Harel claimed a linear-time algorithm, but this algorithm was incomplete; Alstrup et al. [1999] gave a complete and “simpler” linear-time algorithm on a random-access machine. In 1998, Buchsbaum et al. claimed a “new, simpler” linear-time algorithm with implementations both on a random-access machine and on a pointer machine. In this paper, we begin by noting that the key lemma of Buchsbaum et al. does not in fact apply to their algorithm, and their algorithm does not run in linear time. Then we provide a complete, correct, simpler linear-time dominators algorithm. One key result is a linear-time reduction of the dominators problem to a nearest common ancestors problem, implementable on either a random-access machine or a pointer machine.
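To make the object being computed concrete, here is a naive fixed-point sketch of the dominator relation in Python. It follows the textbook data-flow definition and runs in polynomial time, nothing like the linear time achieved by the paper.

```python
# Naive dominator computation: v is dominated by itself plus every vertex
# that dominates all of v's predecessors. Iterate until a fixed point.
def dominators(succ, root):
    """succ maps every vertex to its successor list; root is the entry."""
    nodes = set(succ)
    pred = {v: set() for v in nodes}
    for u in succ:
        for v in succ[u]:
            pred[v].add(u)
    dom = {v: set(nodes) for v in nodes}   # start from the full vertex set
    dom[root] = {root}
    changed = True
    while changed:
        changed = False
        for v in nodes - {root}:
            new = {v} | set.intersection(*(dom[p] for p in pred[v])) if pred[v] else {v}
            if new != dom[v]:
                dom[v], changed = new, True
    return dom

# In the diamond r -> {a, b} -> c, vertex c is dominated only by r and itself.
print(dominators({'r': ['a', 'b'], 'a': ['c'], 'b': ['c'], 'c': []}, 'r')['c'])
```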

50 citations

Proceedings ArticleDOI
03 May 1976
TL;DR: It is shown that for each graph with n vertices and maximum in-degree d, there is a pebbling strategy which requires at most c(d) n/log n pebbles, and this bound is tight to within a constant factor.
Abstract: We study a one-person game played by placing pebbles, according to certain rules, on the vertices of a directed graph. In [3] it was shown that for each graph with n vertices and maximum in-degree d, there is a pebbling strategy which requires at most c(d) n/log n pebbles. Here we show that this bound is tight to within a constant factor. We also analyze a variety of pebbling algorithms, including one which achieves the O(n/log n) bound.
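As a concrete illustration of the rules, here is a toy move checker in Python running the naive strategy that pebbles in topological order and never removes a pebble; it can hold up to n pebbles at once, whereas the strategies analyzed in the paper get by with O(n/log n).

```python
# Pebble game rule: a pebble may be placed on a vertex only when all of its
# immediate predecessors carry pebbles; any pebble may be removed at will.
# This naive strategy never removes, so its peak usage is the whole graph.
def max_pebbles_topological(pred, order):
    pebbled, peak = set(), 0
    for v in order:
        assert all(p in pebbled for p in pred[v]), "illegal placement"
        pebbled.add(v)
        peak = max(peak, len(pebbled))
    return peak

# On the path a -> b -> c the naive strategy holds 3 pebbles at once,
# though removing pebbles behind the frontier would need only 2.
print(max_pebbles_topological({'a': [], 'b': ['a'], 'c': ['b']}, ['a', 'b', 'c']))
```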

49 citations


Cited by
Book
01 Jan 1988
TL;DR: Probabilistic Reasoning in Intelligent Systems is a complete and accessible account of the theoretical foundations and computational methods that underlie plausible reasoning under uncertainty, providing a coherent explication of probability as a language for reasoning with partial belief.
Abstract: From the Publisher: Probabilistic Reasoning in Intelligent Systems is a complete and accessible account of the theoretical foundations and computational methods that underlie plausible reasoning under uncertainty. The author provides a coherent explication of probability as a language for reasoning with partial belief and offers a unifying perspective on other AI approaches to uncertainty, such as the Dempster-Shafer formalism, truth maintenance systems, and nonmonotonic logic. The author distinguishes syntactic and semantic approaches to uncertainty and offers techniques, based on belief networks, that provide a mechanism for making semantics-based systems operational. Specifically, network-propagation techniques serve as a mechanism for combining the theoretical coherence of probability theory with modern demands of reasoning-systems technology: modular declarative inputs, conceptually meaningful inferences, and parallel distributed computation. Application areas include diagnosis, forecasting, image interpretation, multi-sensor fusion, decision support systems, plan recognition, planning, and speech recognition; in short, almost every task requiring that conclusions be drawn from uncertain clues and incomplete information. Probabilistic Reasoning in Intelligent Systems will be of special interest to scholars and researchers in AI, decision theory, statistics, logic, philosophy, cognitive psychology, and the management sciences. Professionals in the areas of knowledge-based systems, operations research, engineering, and statistics will find theoretical and computational tools of immediate practical use. The book can also be used as an excellent text for graduate-level courses in AI, operations research, or applied probability.
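As a toy illustration of probability as a language for partial belief, consider a two-variable belief network evaluated by direct enumeration; the numbers are invented for the example, and none of the book's network-propagation machinery is shown.

```python
# A two-node belief network, Disease -> Test, queried by brute-force
# enumeration. All probabilities here are hypothetical.
p_disease = 0.01                             # prior belief in the disease
p_pos_given = {True: 0.95, False: 0.05}      # P(positive test | disease?)

# P(disease | positive) by Bayes' rule, summing the joint over both cases.
joint = {d: (p_disease if d else 1 - p_disease) * p_pos_given[d]
         for d in (True, False)}
posterior = joint[True] / (joint[True] + joint[False])
print(round(posterior, 3))   # ~0.161: the test raises but does not settle belief
```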

15,671 citations

Journal ArticleDOI
22 Dec 2000, Science
TL;DR: Locally linear embedding (LLE) is introduced, an unsupervised learning algorithm that computes low-dimensional, neighborhood-preserving embeddings of high-dimensional inputs that learns the global structure of nonlinear manifolds.
Abstract: Many areas of science depend on exploratory data analysis and visualization. The need to analyze large amounts of multivariate data raises the fundamental problem of dimensionality reduction: how to discover compact representations of high-dimensional data. Here, we introduce locally linear embedding (LLE), an unsupervised learning algorithm that computes low-dimensional, neighborhood-preserving embeddings of high-dimensional inputs. Unlike clustering methods for local dimensionality reduction, LLE maps its inputs into a single global coordinate system of lower dimensionality, and its optimizations do not involve local minima. By exploiting the local symmetries of linear reconstructions, LLE is able to learn the global structure of nonlinear manifolds, such as those generated by images of faces or documents of text.
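A minimal usage sketch, assuming scikit-learn's later implementation of the algorithm (the paper itself predates the library): LLE unrolls the classic swiss-roll manifold into two dimensions.

```python
# Locally linear embedding on the swiss roll via scikit-learn; the
# neighborhood size of 12 is an illustrative choice, not from the paper.
from sklearn.datasets import make_swiss_roll
from sklearn.manifold import LocallyLinearEmbedding

X, _ = make_swiss_roll(n_samples=1000, random_state=0)
lle = LocallyLinearEmbedding(n_neighbors=12, n_components=2, random_state=0)
Y = lle.fit_transform(X)   # neighborhood-preserving 2-D embedding
print(Y.shape)             # (1000, 2)
```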

15,106 citations

Book
01 Jan 1974
TL;DR: This text introduces the basic data structures and programming techniques often used in efficient algorithms, and covers use of lists, push-down stacks, queues, trees, and graphs.
Abstract: From the Publisher: With this text, you gain an understanding of the fundamental concepts of algorithms, the very heart of computer science. It introduces the basic data structures and programming techniques often used in efficient algorithms, and covers the use of lists, push-down stacks, queues, trees, and graphs. Later chapters go into sorting, searching, and graph algorithms, string-matching algorithms, and the Schönhage-Strassen integer-multiplication algorithm. The book provides numerous graded exercises at the end of each chapter.

9,262 citations

Journal ArticleDOI
TL;DR: A thorough exposition of community structure, or clustering, is attempted, from the definition of the main elements of the problem, to the presentation of most methods developed, with a special focus on techniques designed by statistical physicists.
Abstract: The modern science of networks has brought significant advances to our understanding of complex systems. One of the most relevant features of graphs representing real systems is community structure, or clustering, i.e. the organization of vertices in clusters, with many edges joining vertices of the same cluster and comparatively few edges joining vertices of different clusters. Such clusters, or communities, can be considered as fairly independent compartments of a graph, playing a role similar to that of, e.g., the tissues or the organs in the human body. Detecting communities is of great importance in sociology, biology, and computer science, disciplines where systems are often represented as graphs. This problem is very hard and not yet satisfactorily solved, despite the huge effort of a large interdisciplinary community of scientists working on it over the past few years. We will attempt a thorough exposition of the topic, from the definition of the main elements of the problem, to the presentation of most methods developed, with a special focus on techniques designed by statistical physicists, from the discussion of crucial issues like the significance of clustering and how methods should be tested and compared against each other, to the description of applications to real networks.
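As one concrete instance of the problem surveyed, the sketch below detects communities by greedy modularity maximization with NetworkX on Zachary's karate-club graph; the survey covers many other methods, and this choice of algorithm and dataset is only illustrative.

```python
# Greedy modularity maximization: merge communities while the modularity
# score (dense inside, sparse between) keeps improving.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

G = nx.karate_club_graph()
communities = greedy_modularity_communities(G)
print([sorted(c) for c in communities])   # vertex clusters, largest first
```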

9,057 citations

Journal ArticleDOI
TL;DR: This paper gives a thorough exposition of the main elements of the clustering problem, with a special focus on techniques designed by statistical physicists, covering crucial issues like the significance of clustering, how methods should be tested and compared against each other, and applications to real networks.

8,432 citations