Author

Robert E. Tarjan

Bio: Robert E. Tarjan is an academic researcher from Princeton University. The author has contributed to research topics including time complexity and spanning trees. The author has an h-index of 114 and has co-authored 400 publications receiving 67,305 citations. Previous affiliations of Robert E. Tarjan include AT&T and the Massachusetts Institute of Technology.


Papers
Proceedings ArticleDOI
01 Nov 1986
TL;DR: This work shows how to triangulate a simple polygon in O(n) time and suggests an approach to the triangulation problem: use Jordan sorting in a divide-and-conquer fashion.
Abstract: A simple polygon with n vertices is triangulated by adding to it n − 3 line segments between its vertices that partition the interior of the polygon into triangles. We present an algorithm for triangulating a simple polygon in time proportional to its size. This result has a number of applications in computational geometry. Introduction: A simple polygon with n vertices is triangulated by adding to it n − 3 line segments between its vertices to partition the interior of the polygon into triangles. We show how to triangulate a simple polygon in O(n) time. The result relies on the linear-time equivalence of triangulation and the problem of computing visibility information [6]. The algorithm uses divide-and-conquer, recursive finger search trees [1, 12, 14], and a variation of Jordan sorting [10, 11]. Since Garey, Johnson, Preparata, and Tarjan gave an O(n log n) algorithm for triangulation [7], work on this problem has proceeded in two directions. Some authors have presented linear-time algorithms for triangulating special classes of polygons such as monotone polygons [6] and star-shaped polygons [18]. Other authors have given triangulation algorithms whose complexity is of the form O(n log k), where k is a property of the polygon such as the number of reflex angles [9] or its sinuosity [3]. Since there exist classes of polygons with k = Ω(n), however, the worst-case performance of these algorithms is still O(n log n). Deciding whether there is an O(n)-time algorithm has been one of the foremost open problems in computational geometry. Fournier and Montuno have shown that computing a triangulation of a polygon is linear-time reducible to computing its internal horizontal edge-vertex visibility information [6]: given the edge or two edges internally visible from each vertex of a simple polygon, one can compute a triangulation of the polygon in linear time. They call the result of computing internal horizontal edge-vertex visibility information a trapezoidization of the polygon, because the horizontal line segments that connect each vertex to its internally visible edge or edges partition the interior of the polygon into trapezoids. Hoffman, Mehlhorn, Rosenstiehl, and Tarjan [10, 11] have presented a linear-time algorithm for Jordan sorting: given k points at which the edges of a polygon intersect a horizontal line, in the order in which they are encountered in a traversal of the boundary of the polygon, sort them into the order in which they appear along the line. The output of Jordan sorting gives internal and external edge-edge visibility information along the given horizontal line. We show below that Jordan sorting is linear-time reducible to the computation of all edge-vertex and edge-edge visibility information. This implies that the triangulation problem is at least as hard as Jordan sorting, and suggests an approach to the triangulation problem: use Jordan sorting in a divide-and-conquer fashion. Our algorithm is a realization of this approach.
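The linear-time algorithm itself is intricate, so as a point of contrast here is a minimal sketch of the classic quadratic ear-clipping method, which shows concretely what a triangulation of a simple polygon produces. It is not the algorithm of this paper; the Python setting and function names are illustrative assumptions.

```python
# Minimal ear-clipping triangulation sketch (O(n^2)), for contrast with the
# paper's O(n) algorithm. Assumes a simple polygon with vertices given in
# counterclockwise order and no three consecutive collinear vertices.

def cross(o, a, b):
    """Cross product of vectors o->a and o->b; > 0 means a left turn."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def point_in_triangle(p, a, b, c):
    """True if p lies inside (or on) triangle abc, given ccw orientation."""
    return cross(a, b, p) >= 0 and cross(b, c, p) >= 0 and cross(c, a, p) >= 0

def triangulate(poly):
    """Return n - 2 triangles (as vertex index triples) covering the polygon."""
    idx = list(range(len(poly)))
    triangles = []
    while len(idx) > 3:
        for i in range(len(idx)):
            a, b, c = idx[i - 1], idx[i], idx[(i + 1) % len(idx)]
            # An "ear" is a convex corner whose triangle contains no other vertex.
            if cross(poly[a], poly[b], poly[c]) <= 0:
                continue
            if any(point_in_triangle(poly[j], poly[a], poly[b], poly[c])
                   for j in idx if j not in (a, b, c)):
                continue
            triangles.append((a, b, c))
            idx.pop(i)          # clip the ear and repeat
            break
    triangles.append(tuple(idx))
    return triangles

# A simple polygon with one reflex vertex; yields n - 2 = 3 triangles.
print(triangulate([(0, 0), (4, 0), (4, 3), (2, 1), (0, 3)]))
```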

48 citations

Proceedings ArticleDOI
11 Jan 2004
TL;DR: A complete, correct, simpler linear-time dominators algorithm, implementable on either a random-access machine or a pointer machine; one key result is a linear-time reduction of the dominators problem to a nearest common ancestors problem.
Abstract: The problem of finding dominators in a flowgraph arises in many kinds of global code optimization and other settings. In 1979 Lengauer and Tarjan gave an almost-linear-time algorithm to find dominators. In 1985 Harel claimed a linear-time algorithm, but this algorithm was incomplete; Alstrup et al. [1999] gave a complete and "simpler" linear-time algorithm on a random-access machine. In 1998, Buchsbaum et al. claimed a "new, simpler" linear-time algorithm with implementations both on a random access machine and on a pointer machine. In this paper, we begin by noting that the key lemma of Buchsbaum et al. does not in fact apply to their algorithm, and their algorithm does not run in linear time. Then we provide a complete, correct, simpler linear-time dominators algorithm. One key result is a linear-time reduction of the dominators problem to a nearest common ancestors problem, implementable on either a random-access machine or a pointer machine.
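For orientation, the sketch below shows the well-known simple iterative dominators computation in the style of Cooper, Harvey, and Kennedy. It is not the linear-time algorithm of this paper, and the dictionary-based flowgraph encoding is an assumption for illustration.

```python
# Iterative immediate-dominators sketch (Cooper-Harvey-Kennedy style), not the
# linear-time algorithm of the paper. Flowgraph: dict mapping node -> successors.

def immediate_dominators(succ, root):
    # Reverse postorder of the nodes reachable from root.
    order, seen = [], set()
    def dfs(u):
        seen.add(u)
        for v in succ.get(u, ()):
            if v not in seen:
                dfs(v)
        order.append(u)
    dfs(root)
    rpo = list(reversed(order))
    rpo_num = {u: i for i, u in enumerate(rpo)}
    pred = {u: [] for u in rpo}
    for u in rpo:
        for v in succ.get(u, ()):
            if v in rpo_num:
                pred[v].append(u)

    idom = {root: root}
    def intersect(a, b):
        # Walk the two candidates up the current dominator tree until they meet.
        while a != b:
            while rpo_num[a] > rpo_num[b]:
                a = idom[a]
            while rpo_num[b] > rpo_num[a]:
                b = idom[b]
        return a

    changed = True
    while changed:
        changed = False
        for u in rpo:
            if u == root:
                continue
            ps = [p for p in pred[u] if p in idom]  # processed predecessors
            new = ps[0]
            for p in ps[1:]:
                new = intersect(new, p)
            if idom.get(u) != new:
                idom[u] = new
                changed = True
    return idom

# Example diamond flowgraph: the immediate dominator of 'd' is 'a'.
print(immediate_dominators({'a': ['b', 'c'], 'b': ['d'], 'c': ['d']}, 'a'))
```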

48 citations

Proceedings ArticleDOI
01 May 1990
TL;DR: The major new techniques employed are the efficient location of horizontal visibility edges which partition the interior of the polygon into regions of approximately equal size, and a linear-time algorithm for obtaining the horizontal visibility partition of a subchain of a polygonal chain from the horizontal visibility partition of the entire chain.
Abstract: We give a new O(n log log n)-time deterministic algorithm for triangulating simple n-vertex polygons, which avoids the use of complicated data structures. In addition, for polygons whose vertices have integer coordinates of polynomially bounded size, the algorithm can be modified to run in O(n log* n) time. The major new techniques employed are the efficient location of horizontal visibility edges which partition the interior of the polygon into regions of approximately equal size, and a linear-time algorithm for obtaining the horizontal visibility partition of a subchain of a polygonal chain from the horizontal visibility partition of the entire chain. This latter technique has other interesting applications, including a linear-time algorithm to convert a Steiner triangulation of a polygon into a true triangulation. This research was partially supported by DIMACS and the following grants: NSERC 583584, NSERC 580485, NSF-STC88-09648, ONR-N00014-87-0467.
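The visibility information such algorithms build on can be illustrated with a brute-force sketch: for each vertex, find the nearest non-adjacent edge crossed by a horizontal ray to its left and to its right. This O(n²) sketch ignores degenerate cases and does not separate internal from external visibility; it is illustrative only and is not the paper's subchain technique.

```python
# Brute-force O(n^2) horizontal visibility sketch: for each vertex, find the
# nearest non-adjacent edge crossed by a horizontal ray to its left and right.
# Degeneracies (horizontal edges, rays through vertices) are ignored.

def horizontal_visibility(poly):
    n = len(poly)
    result = []
    for i, (x, y) in enumerate(poly):
        left, right = None, None  # (distance, edge index)
        for j in range(n):
            if j in (i, (i - 1) % n):     # skip the two edges incident to vertex i
                continue
            (x1, y1), (x2, y2) = poly[j], poly[(j + 1) % n]
            if (y1 > y) == (y2 > y):      # edge does not cross the line y = const
                continue
            xc = x1 + (x2 - x1) * (y - y1) / (y2 - y1)   # crossing abscissa
            if xc < x and (left is None or x - xc < left[0]):
                left = (x - xc, j)
            if xc > x and (right is None or xc - x < right[0]):
                right = (xc - x, j)
        result.append((left, right))
    return result

# Rectangle with vertices listed counterclockwise; each corner sees the
# opposite vertical edge along its horizontal ray.
print(horizontal_visibility([(0, 0), (4, 0), (4, 3), (0, 3)]))
```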

48 citations

Proceedings ArticleDOI
30 Apr 1979
TL;DR: A pebbling problem which has been used to study the storage requirements of various models of computation is examined, and the original problem is proved PSPACE-complete by employing a modification of Lingas's proof.
Abstract: We examine a pebbling problem which has been used to study the storage requirements of various models of computation. Sethi has shown this problem to be NP-hard, and Lingas has shown a generalization to be PSPACE-complete. We prove the original problem PSPACE-complete by employing a modification of Lingas's proof. The pebbling problem is one of the few examples of a PSPACE-complete problem not exhibiting any obvious quantifier alternation.
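To make the problem concrete, the sketch below plays the black pebble game by brute force: a breadth-first search over pebble configurations decides whether a DAG vertex can be pebbled with at most k pebbles. The encoding is an illustrative assumption, and the exponentially large configuration space such a search must explore is what motivates the complexity question.

```python
# Black pebble game sketch: can we pebble `target` using at most k pebbles?
# Moves: place a pebble on a vertex whose predecessors are all pebbled
# (sources have none), or remove any pebble. Brute-force BFS over the
# exponentially many configurations; illustrative, not from the paper.
from collections import deque

def can_pebble(preds, target, k):
    """preds: dict vertex -> tuple of predecessors in the DAG."""
    start = frozenset()
    queue, seen = deque([start]), {start}
    while queue:
        config = queue.popleft()
        if target in config:
            return True
        moves = []
        for v in preds:
            if v not in config and all(p in config for p in preds[v]):
                moves.append(config | {v})          # place a pebble
        moves.extend(config - {v} for v in config)  # remove a pebble
        for nxt in moves:
            if len(nxt) <= k and nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

# Pyramid DAG of height 2: pebbling the apex needs 4 pebbles, so the
# search succeeds with k = 4 and fails with k = 3.
dag = {'a': (), 'b': (), 'c': (), 'ab': ('a', 'b'), 'bc': ('b', 'c'),
       'top': ('ab', 'bc')}
print(can_pebble(dag, 'top', 4), can_pebble(dag, 'top', 3))
```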

47 citations

Proceedings ArticleDOI
29 May 1995
TL;DR: An efficient purely functional implementation of stacks with catenation is described, with a worst-case running time of O(1) for each push, pop, and catenation; the solution is not only faster than the best previously known one but simpler, and it may be practical.
Abstract: We describe an efficient purely functional implementation of stacks with catenation. In addition to being an intriguing problem in its own right, functional implementation of catenable stacks is the tool required to add certain sophisticated programming constructs to functional programming languages. Our solution has a worst-case running time of O(1) for each push, pop, and catenation. The best previously known solution has an O(log* k) time bound for the kth stack operation. Our solution is not only faster but simpler, and indeed we hope it may be practical. The major new ingredient in our result is a general technique that we call recursive slowdown. Recursive slowdown is an algorithmic design principle that can give constant worst-case time bounds for operations on data structures. We expect this technique to have additional applications. Indeed, we have recently been able to extend the result described here to obtain a purely functional implementation of double-ended queues with catenation that takes constant time per operation.
History of the Problem: A persistent data structure is one in which a change to the structure can be made without destroying the old version, so that all versions of the structure persist and can be accessed or (possibly) modified. In the functional programming literature, persistent structures are often called immutable. Purely functional programming, without side effects, has the property that every structure created is automatically persistent. Persistent data structures arise not only in functional programming but also in text, program, and file editing and maintenance; computational geometry; and other algorithmic application areas. (See [5, 8, 9, 10, 11, 12, 13, 14, 21, 28, 29, 30, 31, 32, 33, 35].) Several papers have dealt with the problem of adding persistence to general data structures in a way that is more efficient than the obvious solution of copying the entire structure whenever a change is made. In particular, Driscoll, Sarnak, Sleator, and Tarjan [11] described how to make pointer-based structures persistent using a technique called node-splitting, which is related to fractional cascading [6] in a way that is not yet fully understood. Dietz [10] described a method for making array-based structures persistent. Additional references on persistence can be found in those papers.
The general techniques in [10] and [11] fail to work on data structures that can be combined with each other rather than just changed locally. (For the purposes of this paper, a "purely functional" data structure is one built using only the LISP functions car, cons, and cdr. Though we do not state our constructions explicitly in terms of these functions, it is routine to verify that our structures are purely functional.) Perhaps the simplest and probably the
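As a hedged illustration of the persistence property discussed above, here is the obvious purely functional stack built from cons pairs. It is not the paper's recursive-slowdown structure: push and pop are O(1), but catenation of this naive structure costs O(n) rather than the paper's O(1).

```python
# Purely functional stack as a cons list: push and pop are O(1) and never
# mutate, so every earlier version persists. Catenation of this naive
# structure is O(n); achieving O(1) catenation is the paper's contribution.

def push(stack, x):
    return (x, stack)          # cons

def pop(stack):
    return stack[0], stack[1]  # car, cdr

def catenate(a, b):
    """Return a stack holding a's elements on top of b's. O(|a|), not O(1)."""
    if a is None:
        return b
    top, rest = pop(a)
    return push(catenate(rest, b), top)

s1 = push(push(None, 2), 1)    # stack 1, 2
s2 = push(s1, 0)               # stack 0, 1, 2 -- s1 is untouched
top, rest = pop(s2)
assert top == 0 and rest is s1 # the old version persists unchanged
print(pop(catenate(s1, s2)))   # (1, ...): s1's elements on top of s2's
```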

47 citations


Cited by
Book
01 Jan 1988
TL;DR: Probabilistic Reasoning in Intelligent Systems is a complete and accessible account of the theoretical foundations and computational methods that underlie plausible reasoning under uncertainty, providing a coherent explication of probability as a language for reasoning with partial belief.
Abstract: From the Publisher: Probabilistic Reasoning in Intelligent Systems is a complete and accessible account of the theoretical foundations and computational methods that underlie plausible reasoning under uncertainty. The author provides a coherent explication of probability as a language for reasoning with partial belief and offers a unifying perspective on other AI approaches to uncertainty, such as the Dempster-Shafer formalism, truth maintenance systems, and nonmonotonic logic. The author distinguishes syntactic and semantic approaches to uncertainty and offers techniques, based on belief networks, that provide a mechanism for making semantics-based systems operational. Specifically, network-propagation techniques serve as a mechanism for combining the theoretical coherence of probability theory with modern demands of reasoning-systems technology: modular declarative inputs, conceptually meaningful inferences, and parallel distributed computation. Application areas include diagnosis, forecasting, image interpretation, multi-sensor fusion, decision support systems, plan recognition, planning, and speech recognition; in short, almost every task requiring that conclusions be drawn from uncertain clues and incomplete information. Probabilistic Reasoning in Intelligent Systems will be of special interest to scholars and researchers in AI, decision theory, statistics, logic, philosophy, cognitive psychology, and the management sciences. Professionals in the areas of knowledge-based systems, operations research, engineering, and statistics will find theoretical and computational tools of immediate practical use. The book can also be used as an excellent text for graduate-level courses in AI, operations research, or applied probability.
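As a toy illustration of inference in a belief network (the two-node network and its probabilities are invented for this example and do not come from the book):

```python
# Toy belief network: Rain -> WetGrass. Posterior P(Rain | WetGrass) by direct
# enumeration over the joint distribution. The network and probabilities are
# invented for illustration; real belief-network engines propagate messages
# through the network instead of enumerating the joint.

p_rain = 0.2
p_wet_given = {True: 0.9, False: 0.1}   # P(WetGrass = true | Rain)

def joint(rain, wet):
    pr = p_rain if rain else 1 - p_rain
    pw = p_wet_given[rain] if wet else 1 - p_wet_given[rain]
    return pr * pw

# Condition on the evidence WetGrass = true and normalize.
evidence = [joint(rain, True) for rain in (True, False)]
posterior = evidence[0] / sum(evidence)
print(f"P(Rain | WetGrass) = {posterior:.3f}")   # 0.18 / 0.26, about 0.692
```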

15,671 citations

Journal ArticleDOI
22 Dec 2000-Science
TL;DR: Locally linear embedding (LLE) is introduced, an unsupervised learning algorithm that computes low-dimensional, neighborhood-preserving embeddings of high-dimensional inputs and learns the global structure of nonlinear manifolds.
Abstract: Many areas of science depend on exploratory data analysis and visualization. The need to analyze large amounts of multivariate data raises the fundamental problem of dimensionality reduction: how to discover compact representations of high-dimensional data. Here, we introduce locally linear embedding (LLE), an unsupervised learning algorithm that computes low-dimensional, neighborhood-preserving embeddings of high-dimensional inputs. Unlike clustering methods for local dimensionality reduction, LLE maps its inputs into a single global coordinate system of lower dimensionality, and its optimizations do not involve local minima. By exploiting the local symmetries of linear reconstructions, LLE is able to learn the global structure of nonlinear manifolds, such as those generated by images of faces or documents of text.
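A minimal sketch of LLE following the paper's three-step recipe is shown below; the regularization constant and parameter choices are illustrative assumptions.

```python
# Minimal LLE sketch: (1) find k nearest neighbors, (2) solve for the local
# reconstruction weights W, (3) embed using the bottom eigenvectors of
# (I - W)^T (I - W). Regularization and parameters are illustrative.
import numpy as np

def lle(X, k=10, d=2, reg=1e-3):
    n = X.shape[0]
    W = np.zeros((n, n))
    for i in range(n):
        dists = np.linalg.norm(X - X[i], axis=1)
        nbrs = np.argsort(dists)[1:k + 1]          # skip the point itself
        Z = X[nbrs] - X[i]                          # neighbors in the local frame
        C = Z @ Z.T                                 # local covariance
        C += reg * np.trace(C) * np.eye(k)          # regularize for stability
        w = np.linalg.solve(C, np.ones(k))
        W[i, nbrs] = w / w.sum()                    # weights sum to one
    M = (np.eye(n) - W).T @ (np.eye(n) - W)
    vals, vecs = np.linalg.eigh(M)                  # eigenvalues ascending
    return vecs[:, 1:d + 1]                         # drop the constant eigenvector

# Example: unroll a noisy 3-D helix into 2 dimensions.
rng = np.random.default_rng(0)
t = rng.uniform(0, 4 * np.pi, 400)
X = np.column_stack([np.sin(t), np.cos(t), t / 2]) + 0.01 * rng.normal(size=(400, 3))
Y = lle(X, k=10, d=2)
print(Y.shape)   # (400, 2)
```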

15,106 citations

Book
01 Jan 1974
TL;DR: This text introduces the basic data structures and programming techniques often used in efficient algorithms, and covers the use of lists, push-down stacks, queues, trees, and graphs.

Abstract: From the Publisher: With this text, you gain an understanding of the fundamental concepts of algorithms, the very heart of computer science. It introduces the basic data structures and programming techniques often used in efficient algorithms, and covers the use of lists, push-down stacks, queues, trees, and graphs. Later chapters go into sorting, searching, and graph algorithms, string-matching algorithms, and the Schonhage-Strassen integer-multiplication algorithm. Numerous graded exercises are provided at the end of each chapter.

9,262 citations

Journal ArticleDOI
TL;DR: A thorough exposition of community structure, or clustering, is attempted, from the definition of the main elements of the problem, to the presentation of most methods developed, with a special focus on techniques designed by statistical physicists.
Abstract: The modern science of networks has brought significant advances to our understanding of complex systems. One of the most relevant features of graphs representing real systems is community structure, or clustering, i.e. the organization of vertices in clusters, with many edges joining vertices of the same cluster and comparatively few edges joining vertices of different clusters. Such clusters, or communities, can be considered as fairly independent compartments of a graph, playing a role similar to that of, e.g., the tissues or the organs in the human body. Detecting communities is of great importance in sociology, biology, and computer science, disciplines where systems are often represented as graphs. This problem is very hard and not yet satisfactorily solved, despite the huge effort of a large interdisciplinary community of scientists working on it over the past few years. We will attempt a thorough exposition of the topic, from the definition of the main elements of the problem, to the presentation of most methods developed, with a special focus on techniques designed by statistical physicists, from the discussion of crucial issues like the significance of clustering and how methods should be tested and compared against each other, to the description of applications to real networks.
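As a sketch of one simple community-detection heuristic of the kind such surveys cover, here is label propagation, in which each vertex repeatedly adopts the most common label among its neighbors; the implementation details are illustrative.

```python
# Label-propagation sketch for community detection: each vertex repeatedly
# adopts the most common label among its neighbors until labels stabilize.
# A simple heuristic; illustrative only, not a method from the survey's text.
import random
from collections import Counter

def label_propagation(adj, max_rounds=100, seed=0):
    rng = random.Random(seed)
    labels = {v: v for v in adj}             # start with one label per vertex
    nodes = list(adj)
    for _ in range(max_rounds):
        rng.shuffle(nodes)
        changed = False
        for v in nodes:
            if not adj[v]:
                continue
            counts = Counter(labels[u] for u in adj[v])
            best = max(counts.values())
            # Break ties randomly among the most frequent neighbor labels.
            new = rng.choice([l for l, c in counts.items() if c == best])
            if new != labels[v]:
                labels[v], changed = new, True
        if not changed:
            break
    return labels

# Two triangles joined by a single bridge edge: expect two communities.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
print(label_propagation(adj))
```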

9,057 citations

Journal ArticleDOI
TL;DR: A thorough exposition of the main elements of the clustering problem can be found in this paper, with a special focus on techniques designed by statistical physicists, ranging from the discussion of crucial issues like the significance of clustering and how methods should be tested and compared against each other, to the description of applications to real networks.

8,432 citations