
Showing papers by "Robert E. Tarjan" published in 2011


Book
27 Aug 2011
TL;DR: Efficient implementations of Dijkstra's shortest path algorithm are investigated and a new data structure, called the radix heap, is proposed for use in this algorithm.
Abstract: Efficient implementations of Dijkstra's shortest path algorithm are investigated. A new data structure, called the radix heap, is proposed for use in this algorithm. On a network with n vertices, m edges, and nonnegative integer arc costs bounded by C, a one-level form of radix heap gives a time bound for Dijkstra's algorithm of O(m + n log C). A two-level form of radix heap gives a bound of O(m + n log C/log log C). A combination of a radix heap and a previously known data structure called a Fibonacci heap gives a bound of O(m + n√(log C)).

637 citations
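To make the radix-heap idea concrete, here is a minimal Python sketch of a one-level radix heap for a monotone priority queue (integer keys bounded by C that never fall below the last extracted minimum, as in Dijkstra's algorithm). The bucket rule below is a simplification for illustration, not the paper's exact layout.

```python
# Sketch of a one-level radix heap: bucket 0 holds keys equal to the
# current minimum; bucket i > 0 holds keys whose highest bit differing
# from it is bit i-1.

class RadixHeap:
    def __init__(self, C):
        self.buckets = [[] for _ in range(C.bit_length() + 2)]
        self.last = 0                       # last extracted minimum

    def _bucket(self, key):
        return (key ^ self.last).bit_length()

    def push(self, key, item):
        assert key >= self.last, "monotone: keys never go below the last min"
        self.buckets[self._bucket(key)].append((key, item))

    def pop(self):
        i = next(j for j, b in enumerate(self.buckets) if b)
        if i > 0:
            # Re-anchor at the bucket's minimum and redistribute: every
            # moved key lands in a strictly lower bucket, so each key is
            # touched O(log C) times overall.
            moved, self.buckets[i] = self.buckets[i], []
            self.last = min(k for k, _ in moved)
            for k, it in moved:
                self.buckets[self._bucket(k)].append((k, it))
        return self.buckets[0].pop()
```

Driving Dijkstra's algorithm with this queue yields the O(m + n log C) bound quoted above; the two-level variant reduces the bucket-scanning cost further.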


Book
25 Aug 2011
TL;DR: A new algorithm for finding the blocks (biconnected components) of an undirected graph and a general algorithmic technique that simplifies and improves computation of various functions on trees is introduced.
Abstract: In this paper we propose a new algorithm for finding the blocks (biconnected components) of an undirected graph. A serial implementation runs in $O(n + m)$ time and space on a graph of n vertices and m edges. A parallel implementation runs in $O(\log n)$ time and $O(n + m)$ space using $O(n + m)$ processors on a concurrent-read, concurrent-write parallel RAM. An alternative implementation runs in $O(n^2 /p)$ time and $O(n^2 )$ space using any number $p \leqq n^2 /\log ^2 n$ of processors, on a concurrent-read, exclusive-write parallel RAM. The last algorithm has optimal speedup, assuming an adjacency matrix representation of the input. A general algorithmic technique that simplifies and improves computation of various functions on trees is introduced. This technique typically requires $O(\log n)$ time using $O(n/\log n)$ processors and $O(n)$ space on an exclusive-read exclusive-write parallel RAM.

501 citations
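The serial half of the result is essentially the classic low-point computation. A compact sketch (recursive for clarity; large graphs would need an explicit stack in Python):

```python
# One DFS maintains discovery times, low-points, and a stack of tree/back
# edges; when low[w] >= disc[v] for a tree edge (v, w), the edges above
# (v, w) on the stack form one block.

def biconnected_components(n, adj):
    """adj[v] lists the neighbours of v in a simple undirected graph."""
    disc = [0] * n                      # discovery times; 0 means unvisited
    low = [0] * n
    timer = 1
    estack = []                         # edges not yet assigned to a block
    comps = []

    def dfs(v, parent):
        nonlocal timer
        disc[v] = low[v] = timer
        timer += 1
        for w in adj[v]:
            if not disc[w]:             # tree edge
                estack.append((v, w))
                dfs(w, v)
                low[v] = min(low[v], low[w])
                if low[w] >= disc[v]:   # v separates w's subtree: emit a block
                    comp = []
                    while True:
                        e = estack.pop()
                        comp.append(e)
                        if e == (v, w):
                            break
                    comps.append(comp)
            elif w != parent and disc[w] < disc[v]:
                estack.append((v, w))   # back edge
                low[v] = min(low[v], disc[w])

    for v in range(n):
        if not disc[v]:
            dfs(v, -1)
    return comps
```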


Book
27 Aug 2011
TL;DR: This paper combines several techniques to yield an algorithm running in O(nm log log U log(nC)) time on networks with n vertices, m edges, maximum arc capacity U, and maximum arc cost magnitude C, and discusses a capacity-bounding approach to the minimum-cost flow problem.
Abstract: Several researchers have recently developed new techniques that give fast algorithms for the minimum-cost flow problem. In this paper we combine several of these techniques to yield an algorithm running in O(nm log log U log(nC)) time on networks with n vertices, m edges, maximum arc capacity U, and maximum arc cost magnitude C. The major techniques used are the capacity-scaling approach of Edmonds and Karp, the excess-scaling approach of Ahuja and Orlin, the cost-scaling approach of Goldberg and Tarjan, and the dynamic tree data structure of Sleator and Tarjan. For nonsparse graphs with large maximum arc capacity, we obtain a similar but slightly better bound. We also obtain a slightly better bound for the (noncapacitated) transportation problem. In addition, we discuss a capacity-bounding approach to the minimum-cost flow problem.

141 citations
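The double-scaling algorithm itself is too involved to sketch here. As a reference point for the primitives it builds on (residual arcs, shortest paths, node potentials), here is the much simpler textbook successive-shortest-paths scheme; it is not the paper's algorithm, and its worst-case bound is far weaker than the one quoted above.

```python
# Successive shortest paths with node potentials: repeatedly send flow
# along a cheapest s-t path in the residual graph, using Dijkstra on
# reduced costs (valid because arc costs are assumed nonnegative).

import heapq

def min_cost_flow(n, arcs, s, t, want):
    """arcs: list of (u, v, capacity, cost), cost >= 0. Returns (flow, cost)."""
    g = [[] for _ in range(n)]          # residual arcs: [to, cap, cost, rev]
    for u, v, cap, cost in arcs:
        g[u].append([v, cap, cost, len(g[v])])
        g[v].append([u, 0, -cost, len(g[u]) - 1])

    flow = total_cost = 0
    pot = [0] * n                       # potentials keep reduced costs >= 0
    while flow < want:
        dist = [float('inf')] * n
        prev = [None] * n               # (vertex, arc index) on shortest path
        dist[s] = 0
        pq = [(0, s)]
        while pq:
            d, u = heapq.heappop(pq)
            if d > dist[u]:
                continue
            for i, (v, cap, cost, _) in enumerate(g[u]):
                nd = d + cost + pot[u] - pot[v]
                if cap > 0 and nd < dist[v]:
                    dist[v] = nd
                    prev[v] = (u, i)
                    heapq.heappush(pq, (nd, v))
        if dist[t] == float('inf'):
            break                       # requested flow value is infeasible
        for v in range(n):
            if dist[v] < float('inf'):
                pot[v] += dist[v]
        push = want - flow              # bottleneck along the path
        v = t
        while v != s:
            u, i = prev[v]
            push = min(push, g[u][i][1])
            v = u
        v = t
        while v != s:                   # augment and update residual arcs
            u, i = prev[v]
            g[u][i][1] -= push
            g[v][g[u][i][3]][1] += push
            total_cost += push * g[u][i][2]
            v = u
        flow += push
    return flow, total_cost
```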


Book
30 Aug 2011
TL;DR: In this article, the authors present linear time algorithms for solving the following problems involving a simple planar polygon P: (i) computing the collection of all shortest paths inside P from a given source vertex s to all the other vertices of P; (ii) computing a subpolygon of P consisting of points that are visible from a segment within P; and (iii) preprocessing P so that for any query ray r emerging from some fixed edge e of P, we can find in logarithmic time the first intersection of r with the boundary of P.
Abstract: We present linear time algorithms for solving the following problems involving a simple planar polygon P: (i) Computing the collection of all shortest paths inside P from a given source vertex s to all the other vertices of P; (ii) Computing the subpolygon of P consisting of points that are visible from a segment within P; (iii) Preprocessing P so that for any query ray r emerging from some fixed edge e of P, we can find in logarithmic time the first intersection of r with the boundary of P; (iv) Preprocessing P so that for any query point x in P, we can find in logarithmic time the portion of the edge e that is visible from x; (v) Preprocessing P so that for any query point x inside P and direction u, we can find in logarithmic time the first point on the boundary of P hit by the ray at direction u from x; (vi) Calculating a hierarchical decomposition of P into smaller polygons by recursive polygon cutting, as in [Ch]. (vii) Calculating the (clockwise and counterclockwise) “convex ropes” (in the terminology of [PS]) from a fixed vertex s of P lying on its convex hull, to all other vertices of P. All these algorithms are based on a recent linear time algorithm of Tarjan and Van Wyk for triangulating a simple polygon, but use additional techniques to make all subsequent phases of these algorithms also linear.

130 citations
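Problem (v), for example, asks for the first boundary point hit by a query ray from an interior point. The naive O(n) scan over the polygon's edges below is the baseline that the paper's preprocessing reduces to O(log n) per query; function and parameter names are illustrative.

```python
# Naive first-hit query: intersect the ray x + t*u (t > 0) with every
# boundary edge and keep the nearest intersection.

def first_hit(poly, x, u):
    """poly: list of (x, y) vertices in order; x: interior point; u: direction."""
    px, py = x
    ux, uy = u
    best_t, best_pt = float('inf'), None
    m = len(poly)
    for i in range(m):
        (ax, ay), (bx, by) = poly[i], poly[(i + 1) % m]
        ex, ey = bx - ax, by - ay
        denom = ux * ey - uy * ex       # zero when the ray is parallel to the edge
        if denom == 0:
            continue
        # Solve x + t*u = a + s*e for ray parameter t and edge parameter s.
        t = ((ax - px) * ey - (ay - py) * ex) / denom
        s = ((ax - px) * uy - (ay - py) * ux) / denom
        if t > 1e-12 and 0.0 <= s <= 1.0 and t < best_t:
            best_t, best_pt = t, (px + t * ux, py + t * uy)
    return best_pt
```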


Book ChapterDOI
05 Sep 2011
TL;DR: The incremental breadth-first search (IBFS) method is introduced, which uses ideas from BK but augments on shortest paths and usually outperforms BK on vision problems.
Abstract: Maximum flow and minimum s-t cut algorithms are used to solve several fundamental problems in computer vision. These problems have special structure, and standard techniques perform worse than the special-purpose Boykov-Kolmogorov (BK) algorithm. We introduce the incremental breadth-first search (IBFS) method, which uses ideas from BK but augments on shortest paths. IBFS is theoretically justified (runs in polynomial time) and usually outperforms BK on vision problems.

91 citations
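IBFS's defining choice is augmenting along shortest residual paths. The classical embodiment of that rule is the Edmonds-Karp algorithm, sketched below; this is not IBFS itself, which additionally maintains BK-style source and sink search trees incrementally between augmentations instead of rebuilding a BFS tree each time.

```python
# Edmonds-Karp: repeatedly find a shortest augmenting path by BFS in the
# residual graph and saturate its bottleneck arc.

from collections import deque

def max_flow(n, arcs, s, t):
    """arcs: list of (u, v, capacity). Returns the s-t max flow value."""
    g = [[] for _ in range(n)]          # residual arcs: [to, cap, rev index]
    for u, v, cap in arcs:
        g[u].append([v, cap, len(g[v])])
        g[v].append([u, 0, len(g[u]) - 1])

    flow = 0
    while True:
        prev = [None] * n               # (vertex, arc index) on a BFS path
        prev[s] = (s, -1)
        q = deque([s])
        while q and prev[t] is None:
            u = q.popleft()
            for i, (v, cap, _) in enumerate(g[u]):
                if cap > 0 and prev[v] is None:
                    prev[v] = (u, i)
                    q.append(v)
        if prev[t] is None:
            return flow                 # no augmenting path remains
        push = float('inf')             # bottleneck along the shortest path
        v = t
        while v != s:
            u, i = prev[v]
            push = min(push, g[u][i][1])
            v = u
        v = t
        while v != s:                   # augment
            u, i = prev[v]
            g[u][i][1] -= push
            g[v][g[u][i][2]][1] += push
            v = u
        flow += push
```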


Book
03 Sep 2011
TL;DR: This work offers asymptotic improvements in both time and space to Chase's bottom-up algorithm for pattern preprocessing and shows how to modify the algorithm using a new decomposition method to obtain a space/time tradeoff.
Abstract: Pattern matching in trees is fundamental to a variety of programming language systems. However, progress has been slow in satisfying a pressing need for general-purpose pattern-matching algorithms that are efficient in both time and space. We offer asymptotic improvements in both time and space to Chase's bottom-up algorithm for pattern preprocessing. A preliminary implementation of our algorithm runs ten times faster than Chase's (1987) implementation on the hardest problem instances. Our preprocessing algorithm has the advantage of being on-line with respect to pattern additions and deletions. It also adapts to favorable input instances, and on Hoffmann and O'Donnell's (1982) class of simple patterns, it performs better than their special-purpose algorithm tailored to this class. We show how to modify our algorithm using a new decomposition method to obtain a space/time tradeoff. Finally, we trade a log factor in time for a linear space bottom-up pattern-matching algorithm that handles a wide subclass of Hoffmann and O'Donnell's (1982) simple patterns.

37 citations
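Below is a naive matcher in the bottom-up, match-set style of Hoffmann and O'Donnell that these preprocessing results accelerate; trees are nested tuples and '*' is a wildcard. The point of the preprocessing is to turn the per-node set computation into table lookups, which this sketch instead recomputes at every node.

```python
# Trees are (symbol, *children) tuples or bare leaf symbols; '*' matches
# any subtree. For each subject node we compute which pattern subtrees
# match there, from the children's match sets.

def subpatterns(p, acc):
    acc.add(p)
    if isinstance(p, tuple):
        for c in p[1:]:
            subpatterns(c, acc)
    return acc

def matches(subject, patterns):
    """Return list of (node, patterns matching at that node), bottom-up."""
    parts = set()
    for p in patterns:
        subpatterns(p, parts)
    out = []

    def walk(node):
        kid_sets = [walk(c) for c in node[1:]] if isinstance(node, tuple) else []
        mine = {'*'}                    # the wildcard matches everywhere
        for p in parts:
            if p == '*':
                continue
            if isinstance(p, tuple):
                if (isinstance(node, tuple) and p[0] == node[0]
                        and len(p) == len(node)
                        and all(c in ks for c, ks in zip(p[1:], kid_sets))):
                    mine.add(p)
            elif p == node:             # matching leaf symbols
                mine.add(p)
        out.append((node, [p for p in patterns if p in mine]))
        return mine

    walk(subject)
    return out
```

For example, matches(('f', ('g', 'a'), 'a'), [('f', '*', 'a')]) reports a match at the root, since '*' covers the ('g', 'a') subtree.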


Journal ArticleDOI
TL;DR: The rank-pairing heap is introduced, an implementation of heaps that combines the asymptotic efficiency of Fibonacci heaps with much of the simplicity of pairing heaps.
Abstract: We introduce the rank-pairing heap, an implementation of heaps that combines the asymptotic efficiency of Fibonacci heaps with much of the simplicity of pairing heaps. Other heap implementations that match the bounds of Fibonacci heaps do so by maintaining a balance condition on the trees representing the heap. In contrast to these structures but like pairing heaps, our trees can evolve to have arbitrary (unbalanced) structure. Also like pairing heaps, our structure requires at most one cut and no other restructuring per key decrease, in the worst case: the only changes that can cascade during a key decrease are changes in node ranks. Although our data structure is simple, its analysis is not.

34 citations
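For contrast, the pairing heap whose simplicity the abstract invokes can be sketched in a few lines. This is the classical structure, not the rank-pairing heap; rank-pairing heaps add node ranks so that Fibonacci-style bounds can be certified.

```python
# A pairing heap is one multiway tree, heap-ordered by making the larger
# of two roots a child of the smaller; delete-min pairs the orphaned
# children left to right, then melds the pairs right to left.

class PairingNode:
    __slots__ = ('key', 'children')
    def __init__(self, key):
        self.key, self.children = key, []

def meld(a, b):
    if a is None or b is None:
        return a or b
    if b.key < a.key:
        a, b = b, a
    a.children.append(b)                # one comparison, one link
    return a

def insert(root, key):
    return meld(root, PairingNode(key))  # O(1)

def delete_min(root):
    """Return (minimum key, new root)."""
    kids = root.children
    paired = [meld(kids[i], kids[i + 1] if i + 1 < len(kids) else None)
              for i in range(0, len(kids), 2)]
    new_root = None
    for t in reversed(paired):          # second pass, right to left
        new_root = meld(new_root, t)
    return root.key, new_root
```

Decrease-key (omitted here) detaches the node's subtree and melds it with the root: the single cut per key decrease that the abstract contrasts with cascading restructuring.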


Posted Content
TL;DR: Two online algorithms are presented for maintaining a topological order of a directed n-vertex acyclic graph as arcs are added, and for detecting a cycle when one is created.
Abstract: We present two on-line algorithms for maintaining a topological order of a directed $n$-vertex acyclic graph as arcs are added, and detecting a cycle when one is created. Our first algorithm handles $m$ arc additions in $O(m^{3/2})$ time. For sparse graphs ($m/n = O(1)$), this bound improves the best previous bound by a logarithmic factor, and is tight to within a constant factor among algorithms satisfying a natural {\em locality} property. Our second algorithm handles an arbitrary sequence of arc additions in $O(n^{5/2})$ time. For sufficiently dense graphs, this bound improves the best previous bound by a polynomial factor. Our bound may be far from tight: we show that the algorithm can take $\Omega(n^2 2^{\sqrt{2\lg n}})$ time by relating its performance to a generalization of the $k$-levels problem of combinatorial geometry. A completely different algorithm running in $\Theta(n^2 \log n)$ time was given recently by Bender, Fineman, and Gilbert. We extend both of our algorithms to the maintenance of strong components, without affecting the asymptotic time bounds.

16 citations
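As a hedged sketch of the simplest member of this family of algorithms (in the style of Pearce and Kelly, not either of the paper's two algorithms): keep an explicit order and, when a new arc (u, v) contradicts it, search forward from v only inside the affected region; reaching u means the arc closes a cycle, and otherwise the region is locally reordered.

```python
class IncrementalTopo:
    def __init__(self, n):
        self.out = [[] for _ in range(n)]
        self.pos = list(range(n))       # pos[v] = position of v in the order
        self.at = list(range(n))        # at[i] = vertex at position i

    def add_arc(self, u, v):
        """Add u -> v; return False and reject the arc if it creates a cycle."""
        if self.pos[u] < self.pos[v]:   # order already consistent
            self.out[u].append(v)
            return True
        seen, stack = {v}, [v]          # forward search, bounded by pos[u]
        while stack:
            x = stack.pop()
            if x == u:
                return False            # cycle detected
            for y in self.out[x]:
                if y not in seen and self.pos[y] <= self.pos[u]:
                    seen.add(y)
                    stack.append(y)
        # Move everything reachable from v to just after u, preserving the
        # relative order within each of the two groups.
        window = self.at[self.pos[v]:self.pos[u] + 1]
        reordered = [x for x in window if x not in seen] + \
                    [x for x in window if x in seen]
        for i, x in enumerate(reordered, start=self.pos[v]):
            self.at[i] = x
            self.pos[x] = i
        self.out[u].append(v)
        return True
```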


Book
30 Aug 2011
TL;DR: A linear-time algorithm is devised for finding an ambitus, a cycle in a graph containing two distinguished vertices such that certain groups of bridges satisfy the property that a bridge in one group does not interlace with any bridge in the other groups.
Abstract: We devise a linear-time algorithm for finding an ambitus in an undirected graph. An ambitus is a cycle in a graph containing two distinguished vertices such that certain different groups of bridges (called $B_P$-, $B_Q$-, and $B_{PQ}$-bridges) satisfy the property that a bridge in one group does not interlace with any bridge in the other groups. Thus, an ambitus allows the graph to be cut into pieces, where, in each piece, certain graph properties may be investigated independently and recursively, and then the pieces can be pasted together to yield information about these graph properties in the original graph. In order to achieve a good time-complexity for such an algorithm employing the divide-and-conquer paradigm, it is necessary to find an ambitus quickly. We also show that, using ambitus, linear-time algorithms can be devised for abiding-path-finding and nonseparating-induced-cycle-finding problems.

8 citations
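The grouping in the abstract rests on the bridge decomposition relative to a cycle, which is easy to state naively: each bridge is either a chord of the cycle or a connected component of the graph minus the cycle together with its attachment vertices. The sketch below computes only that decomposition; choosing the cycle so the resulting groups do not interlace is the hard, linear-time part of the paper. Names are illustrative.

```python
def bridges_of_cycle(adj, cycle):
    """adj: dict mapping each vertex to its neighbours; cycle: vertex list."""
    on_c = set(cycle)
    seen = set()
    bridges = []
    for v in adj:                       # components of G - C
        if v in on_c or v in seen:
            continue
        comp, attach, stack = {v}, set(), [v]
        seen.add(v)
        while stack:
            x = stack.pop()
            for y in adj[x]:
                if y in on_c:
                    attach.add(y)       # attachment vertex on the cycle
                elif y not in comp:
                    comp.add(y)
                    seen.add(y)
                    stack.append(y)
        bridges.append((comp, attach))
    cyc_edges = {frozenset(e) for e in zip(cycle, cycle[1:] + cycle[:1])}
    for v in cycle:                     # chords of C; v < w avoids duplicates
        for w in adj[v]:                # (assumes orderable vertex labels)
            if w in on_c and frozenset((v, w)) not in cyc_edges and v < w:
                bridges.append(({v, w}, {v, w}))
    return bridges
```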


Posted Content
TL;DR: In this article, the authors considered the problem of detecting a cycle in a directed graph that grows by arc insertions, and the related problems of maintaining a topological order and the strong components of such a graph.
Abstract: We consider the problem of detecting a cycle in a directed graph that grows by arc insertions, and the related problems of maintaining a topological order and the strong components of such a graph. For these problems, we give two algorithms, one suited to sparse graphs, and the other to dense graphs. The former takes the minimum of O(m^{3/2}) and O(mn^{2/3}) time to insert m arcs into an n-vertex graph; the latter takes O(n^2 log(n)) time. Our sparse algorithm is considerably simpler than a previous O(m^{3/2})-time algorithm; it is also faster on graphs of sufficient density. The time bound of our dense algorithm beats the previously best time bound of O(n^{5/2}) for dense graphs. Our algorithms rely for their efficiency on topologically ordered vertex numberings; bounds on the size of the numbers give bounds on running times.

7 citations


Journal ArticleDOI
TL;DR: In this article, the authors consider a variant of the problem of efficiently maintaining a forest of dynamic rooted trees, where an operation that merges two tree paths can add and delete up to a linear number of arcs.
Abstract: Motivated by an application in computational geometry, we consider a novel variant of the problem of efficiently maintaining a forest of dynamic rooted trees. This variant includes an operation that merges two tree paths. In contrast to the standard problem, in which a single operation can only add or delete one arc, one merge can add and delete up to a linear number of arcs. In spite of this, we develop three different methods that need only polylogarithmic time per operation. The first method extends a solution of Farach and Thorup [1998] for the special case of paths. Each merge takes O(log^2 n) amortized time on an n-node forest and each standard dynamic tree operation takes O(log n) time; the latter bound is amortized, worst case, or randomized depending on the underlying data structure. For the special case that occurs in the motivating application, in which arbitrary arc deletions (cuts) do not occur, we give a method that takes O(log n) time per operation, including merging. This is best possible in a model of computation with an Ω(n log n) lower bound for sorting n numbers, since such sorting can be done in O(n) tree operations. For the even-more-special case in which there are no cuts and no parent queries, we give a method that uses standard dynamic trees as a black box: each mergeable tree operation becomes a constant number of standard dynamic tree operations. This third method can also be used in the motivating application, but only by changing the algorithm in the application. Each of our three methods needs different analytical tools and reveals different properties of dynamic trees.
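To pin down the merge semantics, here is a naive reference model, assuming distinct labels that increase away from the root (heap order): merging interleaves the two root paths by label, at O(path length) cost per merge, which is what the paper's structures reduce to polylogarithmic time. This is a sketch of the interface, not any of the three methods.

```python
class Node:
    def __init__(self, label):
        self.label, self.parent = label, None

def root_path(v):
    path = []
    while v is not None:
        path.append(v)
        v = v.parent
    return path                         # v first, root last

def merge(v, w):
    """Relink so the union of the two root paths forms one path sorted by
    label, smallest label at the root."""
    combined = sorted(root_path(v) + root_path(w),
                      key=lambda n: n.label, reverse=True)
    path = []                           # deepest node first, deduplicated
    for n in combined:
        if not path or n is not path[-1]:
            path.append(n)
    for child, par in zip(path, path[1:]):
        child.parent = par
    path[-1].parent = None              # smallest label becomes the root
```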

Patent
27 Oct 2011
TL;DR: In this article, a shiftable memory supporting in-memory data structures employs built-in data shifting capability, which facilitates data insertion, deletion and insertion of data within the data structure.
Abstract: A shiftable memory supporting in-memory data structures employs built-in data shifting capability. The shiftable memory includes a memory having built-in shifting capability to shift a contiguous subset of data from a first location to a second location within the memory. The shiftable memory further includes a data structure defined on the memory to contain data comprising the contiguous subset. The built-in shifting capability of the memory facilitates one or more of movement of the data, insertion of the data, and deletion of the data within the data structure.
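A toy software model may make the interface concrete: a memory exposing one shift primitive over a contiguous range of words, on top of which insertion into a sorted region becomes a single shift plus a single write. All names and layout choices here are illustrative, not taken from the patent.

```python
import bisect

class ShiftableMemory:
    def __init__(self, size):
        self.words = [None] * size

    def shift(self, first, last, by=1):
        """Built-in primitive: move words[first..last] right by `by` slots
        (assumes the destination range fits within the memory)."""
        self.words[first + by:last + 1 + by] = self.words[first:last + 1]

    def insert_sorted(self, used, value):
        """Insert value into the sorted prefix words[0..used-1]; returns
        the new prefix length. Issues at most one shift."""
        i = bisect.bisect_left(self.words, value, 0, used)
        if i < used:
            self.shift(i, used - 1)     # one shift opens the gap at i
        self.words[i] = value
        return used + 1
```

For example, starting from used = 0, successive insert_sorted calls with 5, 2, and 7 keep the prefix sorted while issuing at most one shift each.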

Patent
27 Oct 2011
TL;DR: In this article, a shiftable memory that supports in-memory data structures employs a built-in data-shifting capability.
Abstract: A shiftable memory that supports in-memory data structures employs a built-in data-shifting capability. The shiftable memory has a memory with built-in shifting capability for moving a contiguous subset of data from a first location to a second location within the memory. The shiftable memory further has a data structure defined on the memory to contain data comprising the contiguous subset. The built-in shifting capability of the memory facilitates the movement of the data, the insertion of the data, and/or the deletion of the data within the data structure.