
Showing papers by "Robert E. Tarjan published in 1986"


Proceedings ArticleDOI
01 Nov 1986
TL;DR: By incorporating the dynamic tree data structure of Sleator and Tarjan, a version of the algorithm running in O(nm log(n²/m)) time on an n-vertex, m-edge graph is obtained, as fast as any known method for any graph density and faster on graphs of moderate density.
Abstract: All previously known efficient maximum-flow algorithms work by finding augmenting paths, either one path at a time (as in the original Ford and Fulkerson algorithm) or all shortest-length augmenting paths at once (using the layered network approach of Dinic). An alternative method based on the preflow concept of Karzanov is introduced. A preflow is like a flow, except that the total amount flowing into a vertex is allowed to exceed the total amount flowing out. The method maintains a preflow in the original network and pushes local flow excess toward the sink along what are estimated to be shortest paths. The algorithm and its analysis are simple and intuitive, yet the algorithm runs as fast as any other known method on dense graphs, achieving an O(n³) time bound on an n-vertex graph. By incorporating the dynamic tree data structure of Sleator and Tarjan, we obtain a version of the algorithm running in O(nm log(n²/m)) time on an n-vertex, m-edge graph. This is as fast as any known method for any graph density and faster on graphs of moderate density. The algorithm also admits efficient distributed and parallel implementations. A parallel implementation running in O(n² log n) time using n processors and O(m) space is obtained. This time bound matches that of the Shiloach-Vishkin algorithm.
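To make the preflow idea concrete, here is a minimal Python sketch of a generic FIFO push-relabel routine on a capacity matrix. This is a simplified illustration, not the paper's exact algorithm: it omits the refinements and the dynamic-tree machinery needed for the stated bounds.

```python
from collections import deque

def max_flow_push_relabel(cap, s, t):
    """Simplified FIFO push-relabel max flow.
    cap[u][v] is the capacity of edge (u, v); s is the source, t the sink."""
    n = len(cap)
    flow = [[0] * n for _ in range(n)]
    height = [0] * n
    excess = [0] * n
    height[s] = n                      # source starts at height n
    for v in range(n):                 # initial preflow: saturate edges out of s
        if cap[s][v] > 0:
            flow[s][v] = cap[s][v]
            flow[v][s] = -cap[s][v]
            excess[v] = cap[s][v]
            excess[s] -= cap[s][v]
    active = deque(v for v in range(n) if v not in (s, t) and excess[v] > 0)
    while active:
        u = active.popleft()
        while excess[u] > 0:
            pushed = False
            for v in range(n):
                # push along an admissible residual edge (height drops by 1)
                if cap[u][v] - flow[u][v] > 0 and height[u] == height[v] + 1:
                    d = min(excess[u], cap[u][v] - flow[u][v])
                    flow[u][v] += d
                    flow[v][u] -= d
                    excess[u] -= d
                    excess[v] += d
                    if v not in (s, t) and excess[v] > 0 and v not in active:
                        active.append(v)
                    pushed = True
                    if excess[u] == 0:
                        break
            if not pushed:             # relabel: lift u just above its lowest
                height[u] = 1 + min(height[v] for v in range(n)
                                    if cap[u][v] - flow[u][v] > 0)
    return sum(flow[s][v] for v in range(n))
```

Excess is pushed "downhill" toward the sink; when a vertex has excess but no admissible edge, it is relabeled upward, which is what steers leftover excess back to the source.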

1,374 citations


Proceedings ArticleDOI
01 Nov 1986
TL;DR: This paper develops simple, systematic, and efficient techniques for making linked data structures persistent, and uses them to devise persistent forms of binary search trees with logarithmic access, insertion, and deletion times and O(1) space bounds for insertion and deletion.
Abstract: This paper is a study of persistence in data structures. Ordinary data structures are ephemeral in the sense that a change to the structure destroys the old version, leaving only the new version available for use. In contrast, a persistent structure allows access to any version, old or new, at any time. We develop simple, systematic, and efficient techniques for making linked data structures persistent. We use our techniques to devise persistent forms of binary search trees with logarithmic access, insertion, and deletion times and O(1) space bounds for insertion and deletion.
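The simplest way to see what persistence means is path copying in a binary search tree: each update copies only the nodes on the search path, so every old root still describes a complete, queryable tree. Note that this naive sketch costs O(log n) space per update; the paper's node-copying techniques reduce that to O(1).

```python
class Node:
    __slots__ = ("key", "left", "right")
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right

def insert(root, key):
    """Insert by path copying: copy only the nodes on the search path,
    sharing all other subtrees with the previous version."""
    if root is None:
        return Node(key)
    if key < root.key:
        return Node(root.key, insert(root.left, key), root.right)
    elif key > root.key:
        return Node(root.key, root.left, insert(root.right, key))
    return root  # key already present; reuse the old version unchanged

def contains(root, key):
    while root is not None:
        if key == root.key:
            return True
        root = root.left if key < root.key else root.right
    return False

# Every insertion yields a new version; all old versions stay queryable.
versions = [None]
for k in [5, 2, 8, 1]:
    versions.append(insert(versions[-1], k))
```

Here `versions[i]` is the tree after the first i insertions, so a query can be run against any point in the update history.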

866 citations


Journal ArticleDOI
TL;DR: A data compression scheme that exploits locality of reference, such as occurs when words are used frequently over short intervals and then fall into long periods of disuse, is described; the scheme provably never performs much worse than Huffman coding and can perform substantially better.
Abstract: A data compression scheme that exploits locality of reference, such as occurs when words are used frequently over short intervals and then fall into long periods of disuse, is described. The scheme is based on a simple heuristic for self-organizing sequential search and on variable-length encodings of integers. We prove that it never performs much worse than Huffman coding and can perform substantially better; experiments on real files show that its performance is usually quite close to that of Huffman coding. Our scheme has many implementation advantages: it is simple, allows fast encoding and decoding, and requires only one pass over the data to be compressed (static Huffman coding takes two passes).
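A minimal sketch of the two ingredients named above: a move-to-front list, which turns recently used words into small ranks, and a variable-length integer code (Elias gamma is used here as an illustrative stand-in, not necessarily the encoding the paper employs).

```python
def elias_gamma(n):
    """A variable-length prefix code for a positive integer n:
    len(binary)-1 zeros, then the binary representation."""
    b = bin(n)[2:]
    return "0" * (len(b) - 1) + b

def mtf_encode(words):
    """Encode each word by its current position in a self-organizing list,
    then move it to the front, so recently used words get small ranks."""
    table, out = [], []
    for w in words:
        if w in table:
            i = table.index(w)
            out.append(("rank", i + 1))   # small rank -> short code
            table.pop(i)
        else:
            out.append(("new", w))        # first occurrence: sent literally
        table.insert(0, w)
    return out
```

For example, `mtf_encode("a b a a c b".split())` yields ranks 2 and 1 for the repeated "a", and the ranks would then be transmitted with a code such as `elias_gamma`, so bursts of reuse cost very few bits.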

564 citations


Journal ArticleDOI
TL;DR: This paper uses F-heaps to obtain fast algorithms for finding minimum spanning trees in undirected and directed graphs; both algorithms can be extended to allow a degree constraint at one vertex.
Abstract: Recently, Fredman and Tarjan invented a new, especially efficient form of heap (priority queue). Their data structure, the Fibonacci heap (or F-heap), supports arbitrary deletion in O(log n) amortized time and other heap operations in O(1) amortized time. In this paper we use F-heaps to obtain fast algorithms for finding minimum spanning trees in undirected and directed graphs. For an undirected graph containing n vertices and m edges, our minimum spanning tree algorithm runs in O(m log β(m, n)) time, improved from O(m β(m, n)) time, where β(m, n) = min {i | log^(i) n ≤ m/n}. Our minimum spanning tree algorithm for directed graphs runs in O(n log n + m) time, improved from O(n log n + m log log log_(m/n+2) n). Both algorithms can be extended to allow a degree constraint at one vertex.
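For contrast with the F-heap bounds, here is Prim's algorithm with an ordinary binary heap and lazy deletion, which gives O(m log n) rather than the improved O(m log β(m, n)) of the paper.

```python
import heapq

def prim_mst(n, adj):
    """Prim's MST with a binary heap (a stand-in for the F-heap).
    adj[u] is a list of (weight, v) pairs of an undirected graph."""
    seen = [False] * n
    total, added = 0, 0
    heap = [(0, 0)]                    # (edge weight, vertex), start at 0
    while heap and added < n:
        w, u = heapq.heappop(heap)
        if seen[u]:
            continue                   # stale entry; lazy deletion
        seen[u] = True
        total += w
        added += 1
        for wt, v in adj[u]:
            if not seen[v]:
                heapq.heappush(heap, (wt, v))
    return total
```

The asymptotic gain of F-heaps comes from their O(1) amortized decrease-key, which this binary-heap version sidesteps by leaving stale entries in the heap.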

543 citations


Journal ArticleDOI
TL;DR: This work develops a persistent form of binary search tree that supports insertions and deletions in the present and queries in the past, and provides an alternative to Chazelle's "hive graph" structure, which has a variety of applications in geometric retrieval.
Abstract: A classical problem in computational geometry is the planar point location problem. This problem calls for preprocessing a polygonal subdivision of the plane defined by n line segments so that, given a sequence of points, the polygon containing each point can be determined quickly on-line. Several ways of solving this problem in O(log n) query time and O(n) space are known, but they are all rather complicated. We propose a simple O(log n)-query-time, O(n)-space solution, using persistent search trees. A persistent search tree differs from an ordinary search tree in that after an insertion or deletion, the old version of the tree can still be accessed. We develop a persistent form of binary search tree that supports insertions and deletions in the present and queries in the past. The time per query or update is O(log m), where m is the total number of updates, and the space needed is O(1) per update. Our planar point location algorithm is an immediate application of this data structure. The structure also provides an alternative to Chazelle's "hive graph" structure, which has a variety of applications in geometric retrieval.

529 citations


Journal ArticleDOI
TL;DR: This work proposes a linear-time algorithm, a variant of one by Otten and van Wijk, that generally produces a more compact layout than theirs and allows the dual of the graph to be laid out in an interlocking way.
Abstract: We propose a linear-time algorithm for generating a planar layout of a planar graph. Each vertex is represented by a horizontal line segment and each edge by a vertical line segment. All endpoints of the segments have integer coordinates. The total space occupied by the layout is at most n by at most 2n − 4. Our algorithm, a variant of one by Otten and van Wijk, generally produces a more compact layout than theirs and allows the dual of the graph to be laid out in an interlocking way. The algorithm is based on the concept of a bipolar orientation. We discuss relationships among the bipolar orientations of a planar graph.

335 citations


Journal ArticleDOI
TL;DR: A new form of heap is described, intended to be competitive with the Fibonacci heap in theory and easy to implement and fast in practice, and a partial complexity analysis of pairing heaps is provided.
Abstract: Recently, Fredman and Tarjan invented a new, especially efficient form of heap (priority queue) called the Fibonacci heap. Although theoretically efficient, Fibonacci heaps are complicated to implement and not as fast in practice as other kinds of heaps. In this paper we describe a new form of heap, called the pairing heap, intended to be competitive with the Fibonacci heap in theory and easy to implement and fast in practice. We provide a partial complexity analysis of pairing heaps. Complete analysis remains an open problem.
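A sketch of a pairing heap. The essential operations are linking two roots (the smaller key becomes the parent) and the two-pass pairing rule used by delete-min; this is one common formulation, not necessarily the paper's exact presentation.

```python
class PairingHeap:
    """Pairing heap: a root with an ordered list of child heaps.
    An empty heap has key None. meld/insert return the new root."""
    def __init__(self, key=None, children=None):
        self.key, self.children = key, children or []

    def empty(self):
        return self.key is None

    def meld(self, other):
        if self.empty():
            return other
        if other.empty():
            return self
        if self.key <= other.key:      # smaller root wins the link
            self.children.append(other)
            return self
        other.children.append(self)
        return other

    def insert(self, key):
        return self.meld(PairingHeap(key))

    def find_min(self):
        return self.key

    def delete_min(self):
        kids = self.children
        # first pass: pair up the children left to right
        paired = [kids[i].meld(kids[i + 1]) if i + 1 < len(kids) else kids[i]
                  for i in range(0, len(kids), 2)]
        # second pass: meld the pairs right to left
        h = PairingHeap()
        for t in reversed(paired):
            h = t.meld(h)
        return h
```

The two-pass pairing in delete-min is the operation whose complete amortized analysis the paper leaves open.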

243 citations


Proceedings ArticleDOI
01 Nov 1986
TL;DR: A tight bound is established on the maximum rotation distance between two n-node trees for all large n, using volumetric arguments in hyperbolic 3-space, and a tight bound is also given on the minimum number of tetrahedra needed to dissect a polyhedron in the worst case.
Abstract: A rotation in a binary tree is a local restructuring that changes the tree into another tree. Rotations are useful in the design of tree-based data structures. The rotation distance between a pair of trees is the minimum number of rotations needed to convert one tree into the other. In this paper we establish a tight bound of 2n − 6 on the maximum rotation distance between two n-node trees for all large n, using volumetric arguments in hyperbolic 3-space. Our proof also gives a tight bound on the minimum number of tetrahedra needed to dissect a polyhedron in the worst case, and reveals connections among binary trees, triangulations, and hyperbolic geometry.
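Rotation distance between small trees can be computed directly by breadth-first search over all single rotations, which makes the definition concrete (the paper's contribution is the tight worst-case bound, which brute force cannot reach).

```python
from collections import deque

def rotations(t):
    """All trees one rotation away from t. Trees are nested pairs:
    a leaf is None, an internal node is the tuple (left, right)."""
    if t is None:
        return
    l, r = t
    if l is not None:                    # right rotation at the root
        yield (l[0], (l[1], r))
    if r is not None:                    # left rotation at the root
        yield ((l, r[0]), r[1])
    for l2 in rotations(l):              # rotations inside the subtrees
        yield (l2, r)
    for r2 in rotations(r):
        yield (l, r2)

def rotation_distance(a, b):
    """Breadth-first search over tree space; feasible only for small trees."""
    dist = {a: 0}
    q = deque([a])
    while q:
        t = q.popleft()
        if t == b:
            return dist[t]
        for u in rotations(t):
            if u not in dist:
                dist[u] = dist[t] + 1
                q.append(u)
    return None
```

For example, turning a left spine of 3 internal nodes into a right spine takes 2 rotations, matching the intuition that each rotation moves one node across the root.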

183 citations


Journal ArticleDOI
TL;DR: This paper describes an O(n)-time algorithm for recognizing and sorting Jordan sequences that uses level-linked search trees and a reduction of the recognition and sorting problem to a list-splitting problem.
Abstract: For a Jordan curve C in the plane nowhere tangent to the x axis, let x1, x2,…, xn be the abscissas of the intersection points of C with the x axis, listed in the order the points occur on C. We call x1, x2,…, xn a Jordan sequence. In this paper we describe an O(n)-time algorithm for recognizing and sorting Jordan sequences. The problem of sorting such sequences arises in computational geometry and computational geography. Our algorithm is based on a reduction of the recognition and sorting problem to a list-splitting problem. To solve the list-splitting problem we use level-linked search trees.

129 citations


Journal ArticleDOI
TL;DR: In this paper, two themes in data structure design are explored: amortized computational complexity and self-adjustment.
Abstract: In this paper we explore two themes in data structure design: amortized computational complexity and self-adjustment. We are motivated by the following observations. In most applications of data structures, we wish to perform not just a single operation but a sequence of operations, possibly having correlated behavior. By averaging the running time per operation over a worst-case sequence of operations, we can sometimes obtain an overall time bound much smaller than the worst-case time per operation multiplied by the number of operations. We call this kind of averaging amortization. Standard kinds of data structures, such as the many varieties of balanced trees, are specifically designed so that the worst-case time per operation is small. Such efficiency is achieved by imposing an explicit structural constraint that must be maintained during updates, at a cost of both running time and storage space. However, if amortized running time is the complexity measure of interest, we can guarantee efficiency without maintaining such an explicit structural constraint.
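A standard toy example of amortization in the sense described above is the binary counter: a single increment may flip many bits, but any sequence of n increments flips fewer than 2n bits in total, so the amortized cost per increment is O(1).

```python
def increment(bits):
    """Increment a binary counter stored as a list of bits (LSB first).
    Returns the number of bit flips this increment performed."""
    flips = 0
    i = 0
    while i < len(bits) and bits[i] == 1:
        bits[i] = 0                    # carry: flip trailing 1s to 0
        flips += 1
        i += 1
    if i == len(bits):
        bits.append(1)                 # grow the counter
    else:
        bits[i] = 1
    return flips + 1                   # +1 for the final 0 -> 1 flip

bits = []
total = sum(increment(bits) for _ in range(1000))
# One increment can flip ~log n bits, yet total < 2 * 1000:
# bit i flips only once every 2**i increments.
```

The same averaging argument, applied with a potential function instead of direct counting, underlies the analysis of self-adjusting structures.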

128 citations


Proceedings ArticleDOI
01 Nov 1986
TL;DR: This work shows how to triangulate a simple polygon in O (n) time and suggests an approach to the triangulation problem: use Jordan sorting in a divide-and-conquer fashion.
Abstract: A simple polygon with n vertices is triangulated by adding to it n − 3 line segments between its vertices that partition the interior of the polygon into triangles. We present an algorithm for triangulating a simple polygon in time proportional to its size. This result has a number of applications in computational geometry.
Introduction. A simple polygon with n vertices is triangulated by adding to it n − 3 line segments between its vertices to partition the interior of the polygon into triangles. We show how to triangulate a simple polygon in O(n) time. The result relies on the linear-time equivalence of triangulation and the problem of computing visibility information [6]. The algorithm uses divide-and-conquer, recursive finger search trees [1, 12, 14], and a variation of Jordan sorting [10, 11]. Since Garey, Johnson, Preparata, and Tarjan gave an O(n log n) algorithm for triangulation [7], work on this problem has proceeded in two directions. Some authors have presented linear-time algorithms for triangulating special classes of polygons such as monotone polygons [6] and star-shaped polygons [18]. Other authors have given triangulation algorithms whose complexity is of the form O(n log k), where k is a property of the polygon such as the number of reflex angles [9] or its sinuosity [3]. Since there exist classes of polygons with k = Ω(n), however, the worst-case performance of these algorithms is still O(n log n). Deciding whether there is an O(n)-time algorithm has been one of the foremost open problems in computational geometry. Fournier and Montuno have shown that computing a triangulation of a polygon is linear-time reducible to computing its internal horizontal edge-vertex visibility information [6]: given the edge or two edges internally visible from each vertex of a simple polygon, one can compute a triangulation of the polygon in linear time. They call the result of computing internal horizontal edge-vertex visibility information a trapezoidization of the polygon, because the horizontal line segments that connect each vertex to its internally visible edge or edges partition the interior of the polygon into trapezoids. Hoffman, Mehlhorn, Rosenstiehl, and Tarjan [10, 11] have presented a linear-time algorithm for Jordan sorting: given k points at which the edges of a polygon intersect a horizontal line, in the order in which they are encountered in a traversal of the boundary of the polygon, sort them into the order in which they appear along the line. The output of Jordan sorting gives internal and external edge-edge visibility information along the given horizontal line. We show below that Jordan sorting is linear-time reducible to the computation of all edge-vertex and edge-edge visibility information. This implies that the triangulation problem is at least as hard as Jordan sorting, and suggests an approach to the triangulation problem: use Jordan sorting in a divide-and-conquer fashion. Our algorithm is a realization of this approach.
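For contrast with the linear-time result, here is the classical ear-clipping method, a simple but slower (roughly O(n²) per pass) way to triangulate a counterclockwise simple polygon; it is unrelated to the paper's algorithm but makes the n − 3 diagonals concrete.

```python
def cross(o, a, b):
    """Twice the signed area of triangle (o, a, b); > 0 means a left turn."""
    return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])

def point_in_triangle(p, a, b, c):
    d1, d2, d3 = cross(a, b, p), cross(b, c, p), cross(c, a, p)
    has_neg = d1 < 0 or d2 < 0 or d3 < 0
    has_pos = d1 > 0 or d2 > 0 or d3 > 0
    return not (has_neg and has_pos)

def triangulate(poly):
    """Ear clipping: repeatedly cut off a convex vertex whose triangle
    contains no other polygon vertex. poly: CCW list of (x, y) points."""
    verts = list(poly)
    tris = []
    while len(verts) > 3:
        n = len(verts)
        for i in range(n):
            a, b, c = verts[i-1], verts[i], verts[(i+1) % n]
            if cross(a, b, c) <= 0:
                continue                       # reflex vertex, not an ear
            if any(point_in_triangle(p, a, b, c)
                   for p in verts if p not in (a, b, c)):
                continue                       # another vertex blocks the ear
            tris.append((a, b, c))
            verts.pop(i)                       # clip the ear
            break
    tris.append(tuple(verts))
    return tris
```

An n-vertex polygon always yields exactly n − 2 triangles, consistent with the n − 3 added diagonals mentioned in the abstract.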

Journal ArticleDOI
H Gajewska, Robert E. Tarjan
TL;DR: This paper describes an implementation of deques with heap order for which the worst-case time per operation is O(1) and the space is O(n), where n is the number of items in the deque.
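One way to approximate such a structure is a deque built from two stacks whose entries carry running minima; rebalancing when one side empties makes this sketch amortized O(1) per operation, weaker than the worst-case O(1) bound of the paper.

```python
class MinDeque:
    """Deque with find_min. Each stack entry is (value, min of that stack
    up to and including this entry); each stack runs inner -> outer."""
    def __init__(self):
        self.front, self.back = [], []

    @staticmethod
    def _push(stack, x):
        m = min(x, stack[-1][1]) if stack else x
        stack.append((x, m))

    def push_front(self, x):
        self._push(self.front, x)

    def push_back(self, x):
        self._push(self.back, x)

    def _borrow(self, target, source):
        """Move the inner half of source onto target (empty), preserving
        deque order; this is what makes the bounds only amortized."""
        items = [v for v, _ in source]       # inner -> outer values
        k = (len(items) + 1) // 2
        source.clear()
        for v in reversed(items[:k]):        # inner half, innermost on top
            self._push(target, v)
        for v in items[k:]:                  # outer half stays on source
            self._push(source, v)

    def pop_front(self):
        if not self.front:
            self._borrow(self.front, self.back)
        return self.front.pop()[0]

    def pop_back(self):
        if not self.back:
            self._borrow(self.back, self.front)
        return self.back.pop()[0]

    def find_min(self):
        return min(s[-1][1] for s in (self.front, self.back) if s)
```

Achieving the paper's worst-case bound requires incremental rather than wholesale rebalancing, which this sketch deliberately omits.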


Book ChapterDOI
Robert E. Tarjan
01 Jan 1986
TL;DR: A survey of efficient algorithms for the maximum flow problem, from the point of view of a theoretical computer scientist, and of the most efficient known algorithm for sparse graphs, which makes use of a novel data structure for representing rooted trees.
Abstract: This paper is a survey, from the point of view of a theoretical computer scientist, of efficient algorithms for the maximum flow problem. Included is a discussion of the most efficient known algorithm for sparse graphs, which makes use of a novel data structure for representing rooted trees. Also discussed are the potential practical significance of the algorithms and open problems.
