
Showing papers by "Robert E. Tarjan published in 1985"


Journal ArticleDOI
TL;DR: This article shows that move-to-front is within a constant factor of optimum among a wide class of list maintenance rules, and analyzes the amortized complexity of LRU, showing that its efficiency differs from that of the off-line paging rule by a factor that depends on the size of fast memory.
Abstract: In this article we study the amortized efficiency of the “move-to-front” and similar rules for dynamically maintaining a linear list. Under the assumption that accessing the ith element from the front of the list takes t(i) time, we show that move-to-front is within a constant factor of optimum among a wide class of list maintenance rules. Other natural heuristics, such as the transpose and frequency count rules, do not share this property. We generalize our results to show that move-to-front is within a constant factor of optimum as long as the access cost is a convex function. We also study paging, a setting in which the access cost is not convex. The paging rule corresponding to move-to-front is the “least recently used” (LRU) replacement rule. We analyze the amortized complexity of LRU, showing that its efficiency differs from that of the off-line paging rule (Belady's MIN algorithm) by a factor that depends on the size of fast memory. No on-line paging algorithm has better amortized performance.
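The move-to-front rule itself is simple to state; the subtlety in the paper is the competitive analysis. Below is a minimal sketch of the rule (not the paper's code), assuming the linear cost model t(i) = i, so accessing the element at position i from the front costs i:

    # Self-organizing list using the move-to-front rule.
    # Accessing the element at (1-based) position i is charged cost i,
    # matching the t(i) = i cost model; the paper allows more general t(i).
    class MoveToFrontList:
        def __init__(self, items):
            self.items = list(items)
            self.total_cost = 0

        def access(self, x):
            i = self.items.index(x)      # 0-based position of x
            self.total_cost += i + 1     # pay for scanning to position i + 1
            self.items.pop(i)            # move x to the front
            self.items.insert(0, x)

    # Example: repeated accesses to a few "hot" items stay cheap,
    # because move-to-front keeps them near the front of the list.
    lst = MoveToFrontList("abcdef")
    for x in "ffffeeffff":
        lst.access(x)
    print(lst.items, lst.total_cost)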

2,378 citations


Journal ArticleDOI
TL;DR: The splay tree, a self-adjusting form of binary search tree, is developed and analyzed and is found to be as efficient as balanced trees when total running time is the measure of interest.
Abstract: The splay tree, a self-adjusting form of binary search tree, is developed and analyzed. The binary search tree is a data structure for representing tables and lists so that accessing, inserting, and deleting items is easy. On an n-node splay tree, all the standard search tree operations have an amortized time bound of O(log n) per operation, where by “amortized time” is meant the time per operation averaged over a worst-case sequence of operations. Thus splay trees are as efficient as balanced trees when total running time is the measure of interest. In addition, for sufficiently long access sequences, splay trees are as efficient, to within a constant factor, as static optimum search trees. The efficiency of splay trees comes not from an explicit structural constraint, as with balanced trees, but from applying a simple restructuring heuristic, called splaying, whenever the tree is accessed. Extensions of splaying give simplified forms of two other data structures: lexicographic or multidimensional search trees and link/cut trees.
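A compact way to convey the splaying heuristic is the standard recursive formulation with zig-zig and zig-zag rotations. The sketch below is a generic textbook-style splay access and insert (not the paper's code), kept minimal:

    # Minimal splay on a plain binary search tree: rotate the accessed key
    # (or the last node on its search path) to the root via zig-zig / zig-zag steps.
    class Node:
        def __init__(self, key, left=None, right=None):
            self.key, self.left, self.right = key, left, right

    def rotate_right(t):
        l = t.left
        t.left, l.right = l.right, t
        return l

    def rotate_left(t):
        r = t.right
        t.right, r.left = r.left, t
        return r

    def splay(t, key):
        if t is None or t.key == key:
            return t
        if key < t.key:
            if t.left is None:
                return t
            if key < t.left.key:                       # zig-zig
                t.left.left = splay(t.left.left, key)
                t = rotate_right(t)
            elif key > t.left.key:                     # zig-zag
                t.left.right = splay(t.left.right, key)
                if t.left.right is not None:
                    t.left = rotate_left(t.left)
            return t if t.left is None else rotate_right(t)
        else:
            if t.right is None:
                return t
            if key > t.right.key:                      # zig-zig
                t.right.right = splay(t.right.right, key)
                t = rotate_left(t)
            elif key < t.right.key:                    # zig-zag
                t.right.left = splay(t.right.left, key)
                if t.right.left is not None:
                    t.right = rotate_right(t.right)
            return t if t.right is None else rotate_left(t)

    def insert(t, key):
        # Splay key to the root, then split the old root's subtrees around it.
        if t is None:
            return Node(key)
        t = splay(t, key)
        if key == t.key:
            return t
        new = Node(key)
        if key < t.key:
            new.left, new.right, t.left = t.left, t, None
        else:
            new.right, new.left, t.right = t.right, t, None
        return new

    root = None
    for k in [5, 1, 9, 3, 7]:
        root = insert(root, k)
    root = splay(root, 3)     # 3 is now at the root
    print(root.key)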

1,321 citations


Journal ArticleDOI
TL;DR: A linear-time algorithm is given for the special case of the disjoint set union problem in which the structure of the unions (defined by a "union tree") is known in advance; this special case arises, for example, in finding maximum cardinality matchings in nonbipartite graphs.
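For context, the general-purpose disjoint set union structure (union by rank with path compression) runs in almost-linear time, with an inverse-Ackermann factor; the paper removes that factor when the union tree is known in advance. The sketch below is the generic structure, not the paper's specialized linear-time algorithm:

    # Generic disjoint set union with union by rank and path halving.
    # Runs in O(m alpha(n)) time for m operations; the paper's algorithm
    # achieves O(m) when the structure of the unions is known in advance.
    class DSU:
        def __init__(self, n):
            self.parent = list(range(n))
            self.rank = [0] * n

        def find(self, x):
            while self.parent[x] != x:
                self.parent[x] = self.parent[self.parent[x]]   # path halving
                x = self.parent[x]
            return x

        def union(self, x, y):
            rx, ry = self.find(x), self.find(y)
            if rx == ry:
                return
            if self.rank[rx] < self.rank[ry]:
                rx, ry = ry, rx
            self.parent[ry] = rx
            if self.rank[rx] == self.rank[ry]:
                self.rank[rx] += 1

    d = DSU(5)
    d.union(0, 1); d.union(3, 4)
    print(d.find(1) == d.find(0), d.find(2) == d.find(3))   # True False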

608 citations


Journal ArticleDOI
TL;DR: This paper surveys recent work by several researchers on amortized complexity and obtains “self-adjusting” data structures that are simple, flexible and efficient.
Abstract: A powerful technique in the complexity analysis of data structures is amortization, or averaging over time. Amortized running time is a realistic but robust complexity measure for which we can obtain surprisingly tight upper and lower bounds on a variety of algorithms. By following the principle of designing algorithms whose amortized complexity is low, we obtain “self-adjusting” data structures that are simple, flexible and efficient. This paper surveys recent work by several researchers on amortized complexity.
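A standard illustration of amortization (not taken from the survey's text, but the kind of example it covers) is the binary counter: a single increment may flip many bits, yet n increments flip fewer than 2n bits in total, so the amortized cost per increment is O(1). A quick empirical check:

    # Binary counter: count total bit flips over n increments.
    # A single increment can cost O(log n) flips, but the total over n
    # increments is always below 2n, i.e., amortized O(1) per increment.
    def total_flips(n, width=64):
        bits = [0] * width
        flips = 0
        for _ in range(n):
            i = 0
            while bits[i] == 1:      # carry: flip trailing 1s to 0
                bits[i] = 0
                flips += 1
                i += 1
            bits[i] = 1              # flip the first 0 to 1
            flips += 1
        return flips

    n = 1000
    print(total_flips(n), "<", 2 * n)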

523 citations


Journal ArticleDOI
Robert E. Tarjan
TL;DR: An O(nm)-time algorithm is given for finding a decomposition of an n-vertex, m-edge graph by clique separators, and it is shown how this decomposition can be used in divide-and-conquer algorithms for various graph problems, such as graph coloring and finding maximum independent sets.

468 citations


Journal ArticleDOI
TL;DR: Two related classes of biased search trees are described whose average access time is within a constant factor of the minimum and that are easy to update under insertions, deletions, and more radical update operations.
Abstract: We consider the problem of storing items from a totally ordered set in a search tree so that the access time for a given item depends on a known estimate of the access frequency of the item. We describe two related classes of biased search trees whose average access time is within a constant factor of the minimum and that are easy to update under insertions, deletions and more radical update operations. We present and analyze efficient update algorithms for biased search trees. We list several applications of such trees.
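A rough static illustration of the idea, assuming the access weights are known up front, is to build the tree by repeatedly choosing the root that best balances the total weight of the two sides, so that heavy items end up shallow. This is only a hedged sketch of the flavor of biased trees, not the paper's dynamic structures, which also support efficient updates:

    # Build a BST over sorted keys with positive weights, choosing each root to
    # minimize the heavier side's total weight. Heavy keys end up shallow,
    # roughly mimicking the O(log(W / w_i)) access depth of biased search trees.
    def build_weighted_bst(keys, weights):
        if not keys:
            return None
        total = sum(weights)
        prefix, best_i, best_cost = 0, 0, float("inf")
        for i, w in enumerate(weights):
            cost = max(prefix, total - prefix - w)   # heavier side if i is root
            if cost < best_cost:
                best_i, best_cost = i, cost
            prefix += w
        left = build_weighted_bst(keys[:best_i], weights[:best_i])
        right = build_weighted_bst(keys[best_i + 1:], weights[best_i + 1:])
        return (keys[best_i], left, right)

    # Example: key 'c' carries most of the weight and becomes the root.
    print(build_weighted_bst(list("abcde"), [1, 1, 10, 1, 1]))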

160 citations


Journal ArticleDOI
TL;DR: This work considers the problem of finding a set of k edge-disjoint spanning trees in G of minimum total edge cost and presents an implementation of the matroid greedy algorithm that runs in O(m log m + k²n²) time.
Abstract: Let G be an undirected graph with n vertices and m edges, such that each edge has a real-valued cost. We consider the problem of finding a set of k edge-disjoint spanning trees in G of minimum total edge cost. This problem can be solved in polynomial time by the matroid greedy algorithm. We present an implementation of this algorithm that runs in O(m log m + k²n²) time. If all edge costs are the same, the algorithm runs in O(k²n²) time. The algorithm can also be extended to find the largest k such that k edge-disjoint spanning trees exist in O(m²) time. We mention several applications of the algorithm.
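For k = 1 the matroid greedy algorithm on the graphic matroid is exactly Kruskal's minimum spanning tree algorithm: scan edges in nondecreasing cost order and keep an edge whenever it is independent, i.e., joins two different components. A minimal sketch of that special case only, not the paper's k-tree implementation:

    # Kruskal's algorithm: the k = 1 case of the matroid greedy algorithm,
    # where "independent" means the edge creates no cycle in the graphic matroid.
    def kruskal(n, edges):
        """edges: list of (cost, u, v) on vertices 0..n-1; returns (cost, tree_edges)."""
        parent = list(range(n))

        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x

        tree, total = [], 0
        for cost, u, v in sorted(edges):
            ru, rv = find(u), find(v)
            if ru != rv:                 # edge is independent: keep it
                parent[ru] = rv
                tree.append((u, v))
                total += cost
        return total, tree

    print(kruskal(4, [(1, 0, 1), (2, 1, 2), (3, 2, 3), (4, 0, 3), (5, 0, 2)]))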

141 citations


Journal ArticleDOI
Robert E. Tarjan
TL;DR: This paper proves the sequential access theorem, a special case of the dynamic optimality conjecture: accessing the items of a splay tree in sequential order takes linear time, i.e., O(1) amortized time per access.
Abstract: Sleator and Tarjan have invented a form of self-adjusting binary search tree called the splay tree. On any sufficiently long access sequence, splay trees are as efficient, to within a constant factor, as both dynamically balanced and static optimum search trees. Sleator and Tarjan have made a much stronger conjecture; namely, that on any sufficiently long access sequence and to within a constant factor, splay trees are as efficient as any form of dynamically updated search tree. This dynamic optimality conjecture implies as a special case that accessing the items in a splay tree in sequential order takes linear time, i.e. O(1) time per access. In this paper we prove this special case of the conjecture, generalizing an unpublished result of Wegman. Our sequential access theorem not only supports belief in the dynamic optimality conjecture but provides additional insight into the workings of splay trees. As a corollary of our result, we show that splay trees can be used to simulate output-restricted deques (double-ended queues) in linear time. We pose several open problems related to our result.

97 citations


Journal ArticleDOI
TL;DR: A new algorithm is presented that solves the single function coarsest partition problem in O(n) time and space using a different, constructive approach; the algorithm can be applied to the automated manufacturing of woven fabric.
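The single function coarsest partition problem asks, given a function f on a set and an initial partition, for the coarsest refinement in which any two elements of the same block have their images in the same block. The sketch below is a straightforward quadratic-time refinement loop for that problem, included only to make the statement concrete; it is not the paper's O(n) algorithm:

    # Single function coarsest partition, solved by naive repeated refinement:
    # split any block whose elements are mapped by f into different blocks.
    # A simple quadratic-time baseline, not the paper's O(n) algorithm.
    def coarsest_partition(f, blocks):
        """f: dict mapping each element to an element; blocks: list of sets."""
        blocks = [set(b) for b in blocks]
        changed = True
        while changed:
            changed = False
            block_of = {x: i for i, b in enumerate(blocks) for x in b}
            new_blocks = []
            for b in blocks:
                groups = {}
                for x in b:
                    groups.setdefault(block_of[f[x]], set()).add(x)
                if len(groups) > 1:
                    changed = True
                new_blocks.extend(groups.values())
            blocks = new_blocks
        return blocks

    # Example: f acts on 0..5; start from a two-block partition.
    f = {0: 1, 1: 2, 2: 0, 3: 4, 4: 5, 5: 3}
    print(coarsest_partition(f, [{0, 1, 2, 3}, {4, 5}]))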

87 citations


Journal ArticleDOI
01 Dec 1985-Networks
TL;DR: It is proved that, if a mixed multigraph of radius r has any strongly connected orientation, it must have an orientation of radius at most 4r² + 4r, and the proof gives a polynomial-time algorithm for constructing such an orientation.
Abstract: We study the problem of orienting all the undirected edges of a mixed multigraph so as to preserve reachability. Extending work by Robbins and by Boesch and Tindell, we develop a linear-time algorithm to test whether there is an orientation that preserves strong connectivity and to construct such an orientation whenever possible. This algorithm makes no attempt to minimize distances in the resulting directed graph, and indeed the maximum distance, for example, can blow up by a factor proportional to the number of vertices in the graph. Extending work by Chvátal and Thomassen, we then prove that, if a mixed multigraph of radius r has any strongly connected orientation, it must have an orientation of radius at most 4r² + 4r. The proof gives a polynomial-time algorithm for constructing such an orientation.
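For the purely undirected, bridgeless case (Robbins' theorem, which the paper extends), a strongly connected orientation can be produced by a depth-first search: orient tree edges away from the root and every other edge toward the ancestor. The sketch below covers only that classical special case, under the assumption that the input graph is connected and has no bridges; it is not the paper's mixed multigraph algorithm:

    # DFS orientation of a connected, bridgeless undirected graph (Robbins' theorem):
    # tree edges are oriented parent -> child, non-tree edges descendant -> ancestor.
    # The result is strongly connected precisely because no edge is a bridge.
    def orient(n, edges):
        """edges: list of (u, v) on vertices 0..n-1; returns a list of directed (u, v)."""
        adj = [[] for _ in range(n)]
        for idx, (u, v) in enumerate(edges):
            adj[u].append((v, idx))
            adj[v].append((u, idx))
        order = [-1] * n                     # DFS discovery times
        oriented, used = [], [False] * len(edges)
        time = 0
        stack = [(0, iter(adj[0]))]
        order[0] = time; time += 1
        while stack:
            u, it = stack[-1]
            advanced = False
            for v, idx in it:
                if used[idx]:
                    continue
                used[idx] = True
                if order[v] == -1:           # tree edge: away from the root
                    order[v] = time; time += 1
                    oriented.append((u, v))
                    stack.append((v, iter(adj[v])))
                    advanced = True
                    break
                else:                        # non-tree edge: toward the ancestor
                    oriented.append((u, v))
            if not advanced:
                stack.pop()
        return oriented

    # A 4-cycle plus a chord: bridgeless, so the orientation is strongly connected.
    print(orient(4, [(0, 1), (1, 2), (2, 3), (3, 0), (1, 3)]))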

58 citations


Proceedings ArticleDOI
01 Jun 1985
TL;DR: This paper describes an O(n)-time algorithm for recognizing and sorting Jordan sequences based on a reduction of the recognition and sorting problem to a list-splitting problem, which is solved using level-linked search trees.
Abstract: For a Jordan curve C in the plane, let x_{1},x_{2},...,x_{n} be the abscissas of the intersection points of C with the x-axis, listed in the order the points occur on C. We call x_{1},x_{2},...,x_{n} a Jordan sequence. In this paper we describe an O(n)-time algorithm for recognizing and sorting Jordan sequences. The problem of sorting such sequences arises in computational geometry and computational geography. Our algorithm is based on a reduction of the recognition and sorting problem to a list-splitting problem. To solve the list-splitting problem we use level linked search trees.
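The linear-time algorithm relies on level-linked search trees and fast list splitting. A much simpler O(n log n) recognizer can be written directly from the geometric picture, under the assumption that the curve is closed and crosses the axis transversally: the arcs drawn above the x-axis must form a non-crossing family of chords, and likewise the arcs drawn below. The sketch below is that simpler check (sorting is then just sorted(xs)), not the paper's algorithm:

    # O(n log n) Jordan-sequence recognizer based on the geometric characterization:
    # the chords above the x-axis must be pairwise non-crossing, and so must the
    # chords below. Assumes a closed curve with transversal crossings, so n is
    # even and all abscissas are distinct.
    def noncrossing(chords):
        """chords: list of (a, b) with all endpoints distinct; True if non-crossing."""
        left = {min(a, b): i for i, (a, b) in enumerate(chords)}
        right = {max(a, b): i for i, (a, b) in enumerate(chords)}
        stack = []
        for x in sorted(left.keys() | right.keys()):
            if x in left:
                stack.append(left[x])
            elif not stack or stack.pop() != right[x]:
                return False
        return True

    def is_jordan_sequence(xs):
        n = len(xs)
        if n % 2 == 1 or len(set(xs)) != n:
            return False
        above = [(xs[i], xs[i + 1]) for i in range(0, n - 1, 2)]
        below = [(xs[i], xs[(i + 1) % n]) for i in range(1, n, 2)]
        return noncrossing(above) and noncrossing(below)

    print(is_jordan_sequence([2, 5, 6, 1]))   # True: realizable by a simple closed curve
    print(is_jordan_sequence([1, 3, 2, 4]))   # False: the two upper arcs would cross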


Journal ArticleDOI
TL;DR: The authors study sets of strings Y that code a set X robustly, meaning that X can still be recovered even after an arbitrary subset of size k is deleted from Y, and explore the limitations of coding schemes of this nature.
Abstract: Let $X, Y \subset \{0,1\}^*$. We say Y codes X if every $x \in X$ can be obtained by applying a short program to some $y \in Y$. We are interested in sets Y that code X robustly in the sense that even if we delete an arbitrary subset $Y' \subset Y$ of size k, say, the remaining set of strings $Y \backslash Y'$ still codes X. In general, this can be achieved only by making, in some sense, more than k copies of each $x \in X$ and distributing these copies over different strings in Y. Thus if the strings in X and Y have the same length, then $\# Y \geq (k+1) \# X$. If we allow coding of X by Y in a way that every $x \in X$ is obtained from strings $y, z \in Y$ by application of a short program, then we can do better. Let $Y = \{ \oplus_{x \in S} x \mid S \subset X \}$, where $\oplus$ denotes bitwise sum mod 2. Then $\# Y = 2^{\# X}$. Yet Y codes X robustly for $k = 2^{\# X - 1} - 1$. This paper explores the limitations of coding schemes of this nature.
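The XOR construction in the abstract is easy to make concrete: take Y to be all subset-XORs of X; then any x in a subset S is recovered as the XOR of the codes of S and of S \ {x}. A small demonstration under the abstract's assumption of fixed-length strings (the element values below are arbitrary illustrations):

    # Robust coding by subset XORs: Y = { XOR of S : S a subset of X }.
    # Each x in X equals the XOR of two members of Y (the codes of S and S \ {x}),
    # and there are 2^(#X - 1) such pairs per x, which is what makes Y robust.
    from itertools import combinations
    from functools import reduce

    def xor(a, b):
        return bytes(p ^ q for p, q in zip(a, b))

    def code_set(X):
        subsets = []
        for r in range(len(X) + 1):
            subsets.extend(combinations(X, r))
        zero = bytes(len(X[0]))
        return {S: reduce(xor, S, zero) for S in subsets}

    X = [b"\x0f\x0f", b"\xf0\x01", b"\x3c\xaa"]
    Y = code_set(X)                      # #Y = 2^#X = 8 strings
    x = X[0]
    S = (X[0], X[2])
    # Recover x from the codes of S and of S without x.
    print(xor(Y[S], Y[(X[2],)]) == x)    # True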
