Author

Robert E. Tarjan

Bio: Robert E. Tarjan is an academic researcher from Princeton University. The author has contributed to research in topics including time complexity and spanning trees. The author has an h-index of 114 and has co-authored 400 publications receiving 67,305 citations. Previous affiliations of Robert E. Tarjan include AT&T and the Massachusetts Institute of Technology.


Papers
Journal ArticleDOI
TL;DR: A modified version of the dynamic trees of Sleator and Tarjan is developed that is suitable for efficient recursive algorithms, and used to reduce the running time of the algorithms for both problems to O(mα(m,n)), where α is a functional inverse of Ackermann's function.
Abstract: We consider the twin problems of maintaining the bridge-connected components and the biconnected components of a dynamic undirected graph. The allowed changes to the graph are vertex and edge insertions. We give an algorithm for each problem. With simple data structures, each algorithm runs in O(n log n + m) time, where n is the number of vertices and m is the number of operations. We develop a modified version of the dynamic trees of Sleator and Tarjan that is suitable for efficient recursive algorithms, and use it to reduce the running time of the algorithms for both problems to O(mα(m,n)), where α is a functional inverse of Ackermann's function. This time bound is optimal. All of the algorithms use O(n) space.
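
As an aside for readers meeting inverse-Ackermann bounds for the first time: the classic structure with O(α(m,n)) amortized cost per operation is the disjoint-set forest with union by rank and path compression, which is also the natural way to maintain ordinary connected components under edge insertions. The Python sketch below illustrates where such bounds come from; it is not the paper's modified dynamic trees.

class DisjointSets:
    """Union-find with union by rank and path compression.
    Any m operations on n elements take O(m * alpha(m, n)) time in total."""
    def __init__(self, n):
        self.parent = list(range(n))
        self.rank = [0] * n

    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path compression (halving)
            x = self.parent[x]
        return x

    def union(self, x, y):
        rx, ry = self.find(x), self.find(y)
        if rx == ry:
            return False              # already in the same component
        if self.rank[rx] < self.rank[ry]:
            rx, ry = ry, rx
        self.parent[ry] = rx          # attach the shallower tree to the deeper
        if self.rank[rx] == self.rank[ry]:
            self.rank[rx] += 1
        return True

# Maintaining connected components under edge insertions:
ds = DisjointSets(5)
for u, v in [(0, 1), (1, 2), (3, 4)]:
    ds.union(u, v)
print(ds.find(0) == ds.find(2))  # True: the inserted edges connect 0 and 2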

137 citations

Book
30 Aug 2011
TL;DR: In this article, the authors present linear time algorithms for solving the following problems involving a simple planar polygon P: (i) computing the collection of all shortest paths inside P from a given source vertex s to all the other vertices of P; (ii) computing a subpolygon of P consisting of points that are visible from a segment within P; and (iii) preprocessing P so that for any query ray r emerging from some fixed edge e of P, we can find in logarithmic time the first intersection of r with the boundary of P.
Abstract: We present linear time algorithms for solving the following problems involving a simple planar polygon P: (i) Computing the collection of all shortest paths inside P from a given source vertex s to all the other vertices of P; (ii) Computing the subpolygon of P consisting of points that are visible from a segment within P; (iii) Preprocessing P so that for any query ray r emerging from some fixed edge e of P, we can find in logarithmic time the first intersection of r with the boundary of P; (iv) Preprocessing P so that for any query point x in P, we can find in logarithmic time the portion of the edge e that is visible from x; (v) Preprocessing P so that for any query point x inside P and direction u, we can find in logarithmic time the first point on the boundary of P hit by the ray at direction u from x; (vi) Calculating a hierarchical decomposition of P into smaller polygons by recursive polygon cutting, as in [Ch]; (vii) Calculating the (clockwise and counterclockwise) “convex ropes” (in the terminology of [PS]) from a fixed vertex s of P lying on its convex hull, to all other vertices of P. All these algorithms are based on a recent linear time algorithm of Tarjan and Van Wyk for triangulating a simple polygon, but use additional techniques to make all subsequent phases of these algorithms also linear.
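
For concreteness, here is a brute-force counterpart to query (v): shoot a ray from a point x in direction u and report the first boundary point hit, testing every edge at O(n) per query. This is a sketch in Python under my own representation of points and polygons; the paper's contribution is that, after linear-time preprocessing, such queries can instead be answered in O(log n) each.

def cross(u, v):
    return u[0] * v[1] - u[1] * v[0]

def first_boundary_hit(origin, direction, polygon):
    """First intersection of the ray origin + t*direction (t > 0) with the
    polygon boundary, found by testing every edge: O(n) per query."""
    best_t, best_point = float("inf"), None
    n = len(polygon)
    for i in range(n):
        a, b = polygon[i], polygon[(i + 1) % n]
        edge = (b[0] - a[0], b[1] - a[1])
        denom = cross(direction, edge)
        if abs(denom) < 1e-12:            # ray parallel to this edge
            continue
        ao = (a[0] - origin[0], a[1] - origin[1])
        t = cross(ao, edge) / denom       # parameter along the ray
        s = cross(ao, direction) / denom  # parameter along the edge
        if t > 1e-12 and 0.0 <= s <= 1.0 and t < best_t:
            best_t = t
            best_point = (origin[0] + t * direction[0],
                          origin[1] + t * direction[1])
    return best_point

square = [(0, 0), (4, 0), (4, 4), (0, 4)]
print(first_boundary_hit((1, 1), (1, 0), square))  # (4.0, 1.0)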

130 citations

Proceedings ArticleDOI
01 Dec 1983
TL;DR: In this paper, self-adjusting trees are used to create simple data structures for priority queues and search trees; for heaps the adjustment is a sequence of exchanges of children, whereas for search trees it is a sequence of rotations.
Abstract: We use the idea of self-adjusting trees to create new, simple data structures for priority queues (which we call heaps) and search trees. Unlike other efficient implementations of these data structures, self-adjusting trees have no balance condition. Instead, whenever the tree is accessed, certain adjustments take place. (In the case of heaps, the adjustment is a sequence of exchanges of children; in the case of search trees, the adjustment is a sequence of rotations.) Self-adjusting trees are efficient in an amortized sense: any particular operation may be slow, but any sequence of operations must be fast. Self-adjusting trees have two advantages over the corresponding balanced trees in both applications. First, they are simpler to implement because there are fewer cases in the algorithms. Second, they are more storage-efficient because no balance information needs to be stored. Furthermore, a self-adjusting search tree has the remarkable property that its running time (for any sufficiently long sequence of search operations) is within a constant factor of the running time for the same set of searches on any fixed binary tree. It follows that a self-adjusting tree is (up to a constant factor) as fast as the optimal fixed tree for a particular probability distribution of search requests, even though the distribution is unknown.
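
The heap in question is commonly known as the skew heap, and the "exchanges of children" happen in its meld routine, to which every other operation reduces. Below is a minimal Python sketch of the top-down recursive variant; this is one standard presentation, not necessarily the exact formulation in the paper.

class Node:
    def __init__(self, key):
        self.key, self.left, self.right = key, None, None

def meld(a, b):
    """Meld two skew heaps in O(log n) amortized time."""
    if a is None:
        return b
    if b is None:
        return a
    if b.key < a.key:
        a, b = b, a                    # keep the smaller root on top
    a.right = meld(a.right, b)         # meld into the right subtree...
    a.left, a.right = a.right, a.left  # ...then unconditionally swap children
    return a

def insert(heap, key):
    return meld(heap, Node(key))

def delete_min(heap):
    return heap.key, meld(heap.left, heap.right)

heap = None
for k in [5, 1, 4, 2, 3]:
    heap = insert(heap, k)
out = []
while heap is not None:
    k, heap = delete_min(heap)
    out.append(k)
print(out)  # [1, 2, 3, 4, 5]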

130 citations

Journal ArticleDOI
TL;DR: This paper describes an O(n)-time algorithm for recognizing and sorting Jordan sequences that uses level-linked search trees and a reduction of the recognition and sorting problem to a list-splitting problem.
Abstract: For a Jordan curve C in the plane nowhere tangent to the x axis, let x1, x2,…, xn be the abscissas of the intersection points of C with the x axis, listed in the order the points occur on C. We call x1, x2,…, xn a Jordan sequence. In this paper we describe an O(n)-time algorithm for recognizing and sorting Jordan sequences. The problem of sorting such sequences arises in computational geometry and computational geography. Our algorithm is based on a reduction of the recognition and sorting problem to a list-splitting problem. To solve the list-splitting problem we use level-linked search trees.
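
To make the definition concrete: a sequence is a Jordan sequence exactly when the arcs of the curve on each side of the x axis are pairwise non-crossing (nested or disjoint). The Python sketch below tests this directly, assuming a closed curve (so n must be even); it runs in O(n log n) because it sorts endpoints, which is precisely the step the paper avoids to reach O(n).

def nests_properly(arcs):
    """Check that intervals are pairwise nested or disjoint (laminar)
    by sweeping their endpoints left to right with a stack."""
    events = sorted((x, kind, i)
                    for i, (lo, hi) in enumerate(arcs)
                    for x, kind in ((lo, 1), (hi, 0)))  # 1 = open, 0 = close
    stack = []
    for _, kind, i in events:
        if kind == 1:
            stack.append(i)
        elif not stack or stack.pop() != i:
            return False               # an arc closed out of order: a crossing
    return not stack

def is_jordan_sequence(seq):
    n = len(seq)
    if n % 2 or len(set(seq)) != n:    # closed curve: even, distinct crossings
        return False
    arcs = [tuple(sorted((seq[i], seq[(i + 1) % n]))) for i in range(n)]
    return nests_properly(arcs[0::2]) and nests_properly(arcs[1::2])

print(is_jordan_sequence([1, 4, 3, 2]))  # True
print(is_jordan_sequence([1, 3, 2, 4]))  # False: two same-side arcs cross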

129 citations

Journal ArticleDOI
TL;DR: In this paper, two themes in data structure design are explored: amortized computational complexity and self-adjustment.
Abstract: In this paper we explore two themes in data structure design: amortized computational complexity and self-adjustment. We are motivated by the following observations. In most applications of data structures, we wish to perform not just a single operation but a sequence of operations, possibly having correlated behavior. By averaging the running time per operation over a worst-case sequence of operations, we can sometimes obtain an overall time bound much smaller than the worst-case time per operation multiplied by the number of operations. We call this kind of averaging amortization. Standard kinds of data structures, such as the many varieties of balanced trees, are specifically designed so that the worst-case time per operation is small. Such efficiency is achieved by imposing an explicit structural constraint that must be maintained during updates, at a cost of both running time and storage space. However, if amortized running time is the complexity measure of interest, we can guarantee efficiency without maintaining such an explicit structural constraint.
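
A textbook illustration of amortization (my example, not one from the paper): a binary counter whose increment flips Θ(log n) bits in the worst case, yet any sequence of n increments flips fewer than 2n bits in total, so the amortized cost per increment is O(1).

def increment(bits):
    """Increment a little-endian binary counter; return the number of bit flips."""
    i = 0
    while i < len(bits) and bits[i] == 1:
        bits[i] = 0                    # flip the trailing 1s to 0
        i += 1
    if i == len(bits):
        bits.append(1)
    else:
        bits[i] = 1
    return i + 1                       # i bits cleared plus one bit set

n = 1 << 12
bits = [0]
flips = [increment(bits) for _ in range(n)]
print(max(flips), sum(flips) / n)      # worst case 13 flips, average just under 2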

128 citations


Cited by
Book
01 Jan 1988
TL;DR: Probabilistic Reasoning in Intelligent Systems is a complete and accessible account of the theoretical foundations and computational methods that underlie plausible reasoning under uncertainty, providing a coherent explication of probability as a language for reasoning with partial belief.
Abstract: From the Publisher: Probabilistic Reasoning in Intelligent Systems is a complete and accessible account of the theoretical foundations and computational methods that underlie plausible reasoning under uncertainty. The author provides a coherent explication of probability as a language for reasoning with partial belief and offers a unifying perspective on other AI approaches to uncertainty, such as the Dempster-Shafer formalism, truth maintenance systems, and nonmonotonic logic. The author distinguishes syntactic and semantic approaches to uncertainty, and offers techniques, based on belief networks, that provide a mechanism for making semantics-based systems operational. Specifically, network-propagation techniques serve as a mechanism for combining the theoretical coherence of probability theory with modern demands of reasoning-systems technology: modular declarative inputs, conceptually meaningful inferences, and parallel distributed computation. Application areas include diagnosis, forecasting, image interpretation, multi-sensor fusion, decision support systems, plan recognition, planning, speech recognition—in short, almost every task requiring that conclusions be drawn from uncertain clues and incomplete information. Probabilistic Reasoning in Intelligent Systems will be of special interest to scholars and researchers in AI, decision theory, statistics, logic, philosophy, cognitive psychology, and the management sciences. Professionals in the areas of knowledge-based systems, operations research, engineering, and statistics will find theoretical and computational tools of immediate practical use. The book can also be used as an excellent text for graduate-level courses in AI, operations research, or applied probability.

15,671 citations

Journal ArticleDOI
22 Dec 2000, Science
TL;DR: Locally linear embedding (LLE) is introduced, an unsupervised learning algorithm that computes low-dimensional, neighborhood-preserving embeddings of high-dimensional inputs that learns the global structure of nonlinear manifolds.
Abstract: Many areas of science depend on exploratory data analysis and visualization. The need to analyze large amounts of multivariate data raises the fundamental problem of dimensionality reduction: how to discover compact representations of high-dimensional data. Here, we introduce locally linear embedding (LLE), an unsupervised learning algorithm that computes low-dimensional, neighborhood-preserving embeddings of high-dimensional inputs. Unlike clustering methods for local dimensionality reduction, LLE maps its inputs into a single global coordinate system of lower dimensionality, and its optimizations do not involve local minima. By exploiting the local symmetries of linear reconstructions, LLE is able to learn the global structure of nonlinear manifolds, such as those generated by images of faces or documents of text.
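
A quick way to experiment with the algorithm is scikit-learn's implementation; the sketch below (illustrative parameter choices, not the authors' original code) unrolls a synthetic "swiss roll" manifold into two dimensions.

from sklearn.datasets import make_swiss_roll
from sklearn.manifold import LocallyLinearEmbedding

# A synthetic nonlinear manifold: 3-D points lying on a rolled-up sheet.
X, color = make_swiss_roll(n_samples=1000, random_state=0)

# Map each point into 2-D while preserving its local neighborhood geometry.
lle = LocallyLinearEmbedding(n_neighbors=12, n_components=2, random_state=0)
Y = lle.fit_transform(X)
print(X.shape, "->", Y.shape)  # (1000, 3) -> (1000, 2)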

15,106 citations

Book
01 Jan 1974
TL;DR: This text introduces the basic data structures and programming techniques often used in efficient algorithms, and covers use of lists, push-down stacks, queues, trees, and graphs.
Abstract: From the Publisher: With this text, you gain an understanding of the fundamental concepts of algorithms, the very heart of computer science. It introduces the basic data structures and programming techniques often used in efficient algorithms. Covers use of lists, push-down stacks, queues, trees, and graphs. Later chapters go into sorting, searching, and graph algorithms, string-matching algorithms, and the Schönhage-Strassen integer-multiplication algorithm. Provides numerous graded exercises at the end of each chapter.

9,262 citations

Journal ArticleDOI
TL;DR: A thorough exposition of community structure, or clustering, is attempted, from the definition of the main elements of the problem, to the presentation of most methods developed, with a special focus on techniques designed by statistical physicists.
Abstract: The modern science of networks has brought significant advances to our understanding of complex systems. One of the most relevant features of graphs representing real systems is community structure, or clustering, i.e., the organization of vertices in clusters, with many edges joining vertices of the same cluster and comparatively few edges joining vertices of different clusters. Such clusters, or communities, can be considered as fairly independent compartments of a graph, playing a role similar to that of, e.g., the tissues or the organs in the human body. Detecting communities is of great importance in sociology, biology and computer science, disciplines where systems are often represented as graphs. This problem is very hard and not yet satisfactorily solved, despite the huge effort of a large interdisciplinary community of scientists working on it over the past few years. We will attempt a thorough exposition of the topic, from the definition of the main elements of the problem, to the presentation of most methods developed, with a special focus on techniques designed by statistical physicists, from the discussion of crucial issues like the significance of clustering and how methods should be tested and compared against each other, to the description of applications to real networks.
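
For a hands-on example of one family of methods the survey covers, modularity optimization, the networkx library ships a greedy implementation; a short sketch on the classic karate-club benchmark graph:

import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

G = nx.karate_club_graph()                      # 34 members, 78 friendship edges
communities = greedy_modularity_communities(G)  # greedy modularity maximization
for i, members in enumerate(communities):
    print(f"community {i}: {sorted(members)}")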

9,057 citations

Journal ArticleDOI
TL;DR: This paper gives a thorough exposition of the main elements of the clustering problem, with a special focus on techniques designed by statistical physicists, covering crucial issues such as the significance of clustering and how methods should be tested and compared against each other, as well as applications to real networks.

8,432 citations