
Showing papers by "Robert E. Tarjan published in 1988"


Journal ArticleDOI
TL;DR: An alternative method based on the preflow concept of Karzanov is introduced; it runs as fast as any other known method on dense graphs, achieving an O(n³) time bound on an n-vertex graph, and is faster on graphs of moderate density.
Abstract: All previously known efficient maximum-flow algorithms work by finding augmenting paths, either one path at a time (as in the original Ford and Fulkerson algorithm) or all shortest-length augmenting paths at once (using the layered network approach of Dinic). An alternative method based on the preflow concept of Karzanov is introduced. A preflow is like a flow, except that the total amount flowing into a vertex is allowed to exceed the total amount flowing out. The method maintains a preflow in the original network and pushes local flow excess toward the sink along what are estimated to be shortest paths. The algorithm and its analysis are simple and intuitive, yet the algorithm runs as fast as any other known method on dense graphs, achieving an O(n³) time bound on an n-vertex graph. By incorporating the dynamic tree data structure of Sleator and Tarjan, we obtain a version of the algorithm running in O(nm log(n²/m)) time on an n-vertex, m-edge graph. This is as fast as any known method for any graph density and faster on graphs of moderate density. The algorithm also admits efficient distributed and parallel implementations. A parallel implementation running in O(n² log n) time using n processors and O(m) space is obtained. This time bound matches that of the Shiloach-Vishkin algorithm, which also uses n processors but requires O(n²) space.
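The abstract describes the preflow idea only at a high level, so here is a minimal Python sketch of a generic preflow-push (push/relabel) maximum-flow routine in that spirit. It is an illustration, not the paper's algorithm: it omits the vertex-selection rules and the dynamic-tree refinement that give the stated bounds, and the graph representation and function name are assumptions.

from collections import defaultdict

def max_flow_preflow_push(n, edges, s, t):
    """Generic preflow-push maximum flow on vertices 0..n-1.

    edges is a list of (u, v, capacity) triples; returns the max-flow value.
    Vertices may hold excess, which is pushed "downhill" according to
    height labels; a stuck vertex is relabeled.  No particular vertex
    ordering and no dynamic trees, so this is only a plain sketch of the
    preflow idea, not the paper's tuned variants."""
    cap = defaultdict(int)          # residual capacities
    adj = defaultdict(set)
    for u, v, c in edges:
        cap[(u, v)] += c
        adj[u].add(v)
        adj[v].add(u)

    height = [0] * n
    excess = [0] * n
    height[s] = n                   # source starts at height n
    for v in adj[s]:                # saturate all arcs out of the source
        flow = cap[(s, v)]
        cap[(s, v)] -= flow
        cap[(v, s)] += flow
        excess[v] += flow
        excess[s] -= flow

    active = [v for v in range(n) if v not in (s, t) and excess[v] > 0]
    while active:
        u = active.pop()
        while excess[u] > 0:
            pushed = False
            for v in adj[u]:
                # push along an admissible residual arc (exactly one level downhill)
                if cap[(u, v)] > 0 and height[u] == height[v] + 1:
                    delta = min(excess[u], cap[(u, v)])
                    cap[(u, v)] -= delta
                    cap[(v, u)] += delta
                    excess[u] -= delta
                    excess[v] += delta
                    if v not in (s, t) and v not in active:
                        active.append(v)
                    pushed = True
                    if excess[u] == 0:
                        break
            if not pushed:
                # relabel: lift u just above its lowest residual neighbor
                height[u] = 1 + min(height[v] for v in adj[u] if cap[(u, v)] > 0)
    return excess[t]

# Example usage: a small network whose maximum flow value is 3
print(max_flow_preflow_push(4, [(0, 1, 2), (0, 2, 2), (1, 3, 1), (2, 3, 2), (1, 2, 1)], 0, 3))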

1,700 citations


Journal ArticleDOI
TL;DR: In this paper, the authors establish a tight bound of 2n - 6 on the maximum rotation distance between two n-node trees for all large n, using volumetric arguments in hyperbolic 3-space.
Abstract: A rotation in a binary tree is a local restructuring that changes the tree into another tree. Rotations are useful in the design of tree-based data structures. The rotation distance between a pair of trees is the minimum number of rotations needed to convert one tree into the other. In this paper we establish a tight bound of 2n - 6 on the maximum rotation distance between two n-node trees for all large n, using volumetric arguments in hyperbolic 3-space. Our proof also gives a tight bound on the minimum number of tetrahedra needed to dissect a polyhedron in the worst case, and reveals connections between these problems and hyperbolic geometry. This is a revised and expanded version of a paper that appeared in the 18th Annual ACM Symposium on Theory of Computing.
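As an aside on the rotation operation itself (the local restructuring the abstract refers to, not the distance bound), here is a minimal Python sketch; the Node class and function names are illustrative.

class Node:
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right

def rotate_right(y):
    """Right rotation at y: its left child x becomes the local root.
    The in-order sequence of keys is preserved; only two pointers change."""
    x = y.left
    y.left = x.right
    x.right = y
    return x

def rotate_left(x):
    """Left rotation at x: the inverse of rotate_right."""
    y = x.right
    x.right = y.left
    y.left = x
    return y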

368 citations


Proceedings ArticleDOI
24 Oct 1988
TL;DR: In this article, a randomized algorithm with O(1) worst-case time for lookup and O(1) amortized expected time for insertion and deletion was given for the dictionary problem.
Abstract: A randomized algorithm is given for the dictionary problem with O(1) worst-case time for lookup and O(1) amortized expected time for insertion and deletion. An Ω(log n) lower bound is proved for the amortized worst-case time complexity of any deterministic algorithm in a class of algorithms encompassing realistic hashing-based schemes. If the worst-case lookup time is restricted to k, then the lower bound for insertion becomes Ω(k·n^(1/k)).

344 citations


Journal ArticleDOI
TL;DR: It is shown that the problem of finding minimum cost schedules is NP-complete; however, an efficient algorithm is given that finds minimum cost schedules whenever the tasks either all have the same length or are required to be executed in a given fixed sequence.
Abstract: We consider one-processor scheduling problems having the following form: Tasks T_1, T_2, ..., T_N are given, with each T_i having a specified length l_i and a preferred starting time a_i or, equivalently, a preferred completion time b_i. The tasks are to be scheduled nonpreemptively (i.e., a task cannot be split) on a single processor, to begin as close to their preferred starting times as possible. We examine two different cost measures for such schedules, the sum of the absolute discrepancies from the preferred starting times and the maximum such discrepancy. For the first of these, we show that the problem of finding minimum cost schedules is NP-complete; however, we give an efficient algorithm that finds minimum cost schedules whenever the tasks either all have the same length or are required to be executed in a given fixed sequence. For the second cost measure, we give an efficient algorithm that finds minimum cost schedules in general, with no constraints on the ordering or lengths of the tasks.
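To make the two cost measures concrete, the small sketch below builds one nonpreemptive schedule and evaluates it against the preferred starting times. The greedy placement, the function names, and the example data are illustrative; the greedy is not claimed to be optimal, and the paper's algorithms are not reproduced here.

def schedule_in_sequence(lengths, preferred_starts):
    """Greedy left-to-right placement in the given fixed sequence: each task
    starts at its preferred time unless the previous task is still running.
    (Illustrative only; an optimal schedule may also shift tasks earlier.)"""
    starts, t = [], 0
    for l, a in zip(lengths, preferred_starts):
        s = max(t, a)
        starts.append(s)
        t = s + l
    return starts

def costs(starts, preferred_starts):
    """Sum of absolute discrepancies and maximum discrepancy."""
    d = [abs(s - a) for s, a in zip(starts, preferred_starts)]
    return sum(d), max(d)

# Example: lengths 3, 2, 4 with preferred starting times 0, 1, 4
starts = schedule_in_sequence([3, 2, 4], [0, 1, 4])   # [0, 3, 5]
print(costs(starts, [0, 1, 4]))                        # (3, 2)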

341 citations


Journal ArticleDOI
TL;DR: The relaxed heap is a priority queue data structure that achieves the same amortized time bounds as the Fibonacci heap—a sequence of m decrease_key and n delete_min operations takes time O(m + n log n).
Abstract: The relaxed heap is a priority queue data structure that achieves the same amortized time bounds as the Fibonacci heap—a sequence of m decrease_key and n delete_min operations takes time O(m + n log n). A variant of relaxed heaps achieves similar bounds in the worst case—O(1) time for decrease_key and O(log n) for delete_min. Relaxed heaps give a processor-efficient parallel implementation of Dijkstra's shortest path algorithm, and hence other algorithms in network optimization. A relaxed heap is a type of binomial queue that allows heap order to be violated.
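The practical payoff mentioned in the abstract is Dijkstra's algorithm, where every successful edge relaxation is a decrease_key. The sketch below only marks where that call sits, using Python's heapq with lazy deletion as a stand-in for a relaxed heap; the data structure itself is not reproduced, and the graph format and names are assumptions.

import heapq

def dijkstra(adj, source):
    """Single-source shortest paths; adj[u] is a list of (v, weight) pairs.

    Each relaxation that lowers dist[v] plays the role of decrease_key: with
    a relaxed or Fibonacci heap it costs O(1), giving the O(m + n log n)
    bound; here heapq just pushes a duplicate entry and skips stale ones."""
    dist = {source: 0}
    heap = [(0, source)]
    done = set()
    while heap:
        d, u = heapq.heappop(heap)            # delete_min
        if u in done:
            continue                           # stale (lazily deleted) entry
        done.add(u)
        for v, w in adj.get(u, []):
            if v not in dist or d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(heap, (dist[v], v))   # the "decrease_key" step
    return dist

# Example usage on a hypothetical graph:
adj = {'a': [('b', 2), ('c', 5)], 'b': [('c', 1)], 'c': []}
print(dijkstra(adj, 'a'))   # {'a': 0, 'b': 2, 'c': 3}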

219 citations


Journal ArticleDOI
TL;DR: A bottleneck optimization problem on a graph with edge costs asks for a subgraph of a certain kind that minimizes the maximum edge cost in the subgraph; fast algorithms for two such bottleneck optimization problems are proposed.
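One of the simplest bottleneck problems is the bottleneck spanning tree: find the smallest cost threshold at which the cheap edges already span the graph. The sketch below does this by binary search over the sorted edge costs with a union-find connectivity probe; it is an illustrative baseline under that framing, not the paper's algorithm, and all names are assumptions.

def bottleneck_spanning_threshold(n, edges):
    """Smallest c such that the edges of cost <= c connect all n vertices.
    Binary search over the distinct costs with a union-find probe per step;
    an O(m log m) baseline, not the paper's faster method."""
    if n <= 1:
        return 0                     # nothing to connect

    def connected(limit):
        parent = list(range(n))
        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]   # path halving
                x = parent[x]
            return x
        comps = n
        for u, v, c in edges:
            if c <= limit:
                ru, rv = find(u), find(v)
                if ru != rv:
                    parent[ru] = rv
                    comps -= 1
        return comps == 1

    costs = sorted({c for _, _, c in edges})
    lo, hi = 0, len(costs) - 1
    if not costs or not connected(costs[hi]):
        return None                  # the graph is not connected at all
    while lo < hi:
        mid = (lo + hi) // 2
        if connected(costs[mid]):
            hi = mid
        else:
            lo = mid + 1
    return costs[lo]

# Example: triangle 0-1 (cost 4), 1-2 (cost 7), 0-2 (cost 10)
print(bottleneck_spanning_threshold(3, [(0, 1, 4), (1, 2, 7), (0, 2, 10)]))  # 7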

194 citations


Proceedings ArticleDOI
01 Jan 1988
TL;DR: It is shown that a judicious choice of cycles for canceling leads to a polynomial bound on the number of iterations in this algorithm; the resulting time bound is comparable to those of the fastest previously known algorithms.
Abstract: A classical algorithm for finding a minimum-cost circulation consists of repeatedly finding a residual cycle of negative cost and canceling it by pushing enough flow around the cycle to saturate an arc. We show that a judicious choice of cycles for canceling leads to a polynomial bound on the number of iterations in this algorithm. This gives a very simple strongly polynomial algorithm that uses no scaling. A variant of the algorithm that uses dynamic trees runs in O(nm(log n) min{log(nC), m log n}) time on a network of n vertices, m arcs, and arc costs of maximum absolute value C. This bound is comparable to those of the fastest previously known algorithms.
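Below is a sketch of the classical cycle-canceling loop the abstract starts from, using Bellman-Ford to find a negative residual cycle. The paper's contribution, choosing a minimum-mean cycle to cancel, is only noted in a comment; the arc representation and names are assumptions, and at most one arc between any pair of vertices (in either direction) is assumed.

def cancel_negative_cycles(n, arcs):
    """Minimum-cost circulation by generic negative-cycle canceling.

    arcs: list of (u, v, capacity, cost) with vertices 0..n-1; lower bounds
    are taken to be 0, so the zero circulation is a feasible start.  The
    paper's refinement is the choice of cycle: canceling a minimum-mean
    cycle each time bounds the number of iterations polynomially; this
    sketch cancels whatever negative cycle Bellman-Ford finds."""
    res, cost = {}, {}
    for u, v, cap, c in arcs:
        res[(u, v)] = cap
        cost[(u, v)] = c
        res.setdefault((v, u), 0)     # residual reverse arc
        cost.setdefault((v, u), -c)

    def find_negative_cycle():
        # Bellman-Ford with an implicit source connected to every vertex.
        dist, pred = [0] * n, [None] * n
        x = None
        for _ in range(n):
            x = None
            for (u, v), r in res.items():
                if r > 0 and dist[u] + cost[(u, v)] < dist[v]:
                    dist[v] = dist[u] + cost[(u, v)]
                    pred[v] = u
                    x = v
        if x is None:                 # no relaxation in the nth pass
            return None
        for _ in range(n):            # walk back until we are on the cycle
            x = pred[x]
        cycle, v = [x], pred[x]
        while v != x:
            cycle.append(v)
            v = pred[v]
        cycle.reverse()
        return cycle

    total = 0
    while True:
        cyc = find_negative_cycle()
        if cyc is None:
            break
        pairs = list(zip(cyc, cyc[1:] + cyc[:1]))
        delta = min(res[(u, v)] for u, v in pairs)   # saturate some arc
        for u, v in pairs:
            res[(u, v)] -= delta
            res[(v, u)] += delta
            total += delta * cost[(u, v)]
    return total, res

# Example: one directed triangle of capacity 2 whose cycle cost is -1
print(cancel_negative_cycles(3, [(0, 1, 2, 1), (1, 2, 2, 1), (2, 0, 2, -3)])[0])  # -2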

192 citations


Journal ArticleDOI
TL;DR: An O(n log log n)-time algorithm for triangulating a simple polygon is proposed, improving on the previously best bound and showing that triangulation is not as hard as sorting; improved algorithms for several other computational geometry problems, including testing whether a polygon is simple, follow from this result.
Abstract: Given a simple n-vertex polygon, the triangulation problem is to partition the interior of the polygon into $n - 2$ triangles by adding $n - 3$ nonintersecting diagonals. We propose an $O(n\log \log n)$-time algorithm for this problem, improving on the previously best bound of $O(n\log n)$ and showing that triangulation is not as hard as sorting. Improved algorithms for several other computational geometry problems, including testing whether a polygon is simple, follow from our result.
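The O(n log log n) algorithm is far beyond a short sketch, but a naive cubic-time ear-clipping baseline shows what the output looks like: n - 2 triangles formed by n - 3 nonintersecting diagonals. It assumes the vertices are given in counterclockwise order and in general position (no three vertices collinear); all names are illustrative.

def cross(o, a, b):
    """Signed area of the parallelogram spanned by (a - o) and (b - o)."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def point_in_triangle(p, a, b, c):
    """True if p lies in the closed triangle a, b, c (given counterclockwise)."""
    return cross(a, b, p) >= 0 and cross(b, c, p) >= 0 and cross(c, a, p) >= 0

def ear_clip(poly):
    """Triangulate a simple CCW polygon by ear clipping; returns index triples."""
    idx = list(range(len(poly)))
    tris = []
    while len(idx) > 3:
        n = len(idx)
        for k in range(n):
            i, j, l = idx[k - 1], idx[k], idx[(k + 1) % n]
            a, b, c = poly[i], poly[j], poly[l]
            if cross(a, b, c) <= 0:          # reflex vertex: not an ear
                continue
            if any(point_in_triangle(poly[m], a, b, c)
                   for m in idx if m not in (i, j, l)):
                continue                      # another vertex intrudes: not an ear
            tris.append((i, j, l))            # clip the ear
            del idx[k]
            break
        else:
            raise ValueError("no ear found; is the polygon simple and CCW?")
    tris.append(tuple(idx))
    return tris

# Example: a simple nonconvex polygon with 5 vertices yields 3 triangles
print(ear_clip([(0, 0), (4, 0), (4, 3), (2, 1), (0, 3)]))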

166 citations


Proceedings ArticleDOI
01 Jan 1988
TL;DR: An algorithm for the assignment problem (minimum cost perfect bipartite matching) is given that, run sequentially, improves the best known sequential algorithm and is within a factor of log(nN) of the best known bound for the problem without costs (maximum cardinality matching).
Abstract: We present algorithms for matching and related problems that run on an EREW PRAM with p processors. Given is a bipartite graph G with n vertices, m edges, and integral edge costs at most N in magnitude. We give an algorithm for the assignment problem (minimum cost perfect bipartite matching) that runs in O(√n m log(nN) (log 2p)/p) time and O(m) space, for p ≤ m/(√n log² n). For p = 1 this improves the best known sequential algorithm, and is within a factor of log(nN) of the best known bound for the problem without costs (maximum cardinality matching). For p > 1 the time is within a factor of log p of optimum speed-up. Extensions include an algorithm for maximum cardinality bipartite matching with slightly better processor bounds, and similar results for bipartite degree-constrained subgraph problems (with and without costs). Our ideas also extend to general graph matching problems.
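For the problem statement only (not the paper's parallel algorithm), a brute-force sketch of minimum cost perfect bipartite matching makes the objective explicit; it tries all permutations and is usable only for tiny instances. The cost-matrix format and names are assumptions.

from itertools import permutations

def min_cost_perfect_matching(cost):
    """Brute-force assignment: cost[i][j] is the cost of matching left vertex i
    to right vertex j.  Tries all n! permutations, so this is a specification
    of the objective, nothing like the paper's parallel algorithm."""
    n = len(cost)
    best, best_perm = None, None
    for perm in permutations(range(n)):
        c = sum(cost[i][perm[i]] for i in range(n))
        if best is None or c < best:
            best, best_perm = c, perm
    return best, best_perm

# Example: a 3x3 cost matrix
print(min_cost_perfect_matching([[4, 1, 3], [2, 0, 5], [3, 2, 2]]))  # (5, (1, 0, 2))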

51 citations


Proceedings ArticleDOI
06 Jan 1988
TL;DR: An algorithm is presented that triangulates a simple polygon on n vertices in O(n log* n) expected time; it uses random sampling on the input, and its running time does not depend on any assumptions about a probability distribution from which the polygon is drawn.
Abstract: We present an algorithm that triangulates a simple polygon on n vertices in O(n log* n) expected time. The algorithm uses random sampling on the input, and its running time does not depend on any assumptions about a probability distribution from which the polygon is drawn.

46 citations


01 Mar 1988
TL;DR: In this article, a variant of the classical disjoint set union (equivalence) problem was studied in which an extra operation, called deunion, can undo the most recently performed union operation not yet undone.
Abstract: Mannila and Ukkonen have studied a variant of the classical disjoint set union (equivalence) problem in which an extra operation, called deunion, can undo the most recently performed union operation not yet undone. They proposed a way to modify standard set union algorithms to handle deunion operations. This document analyzes several algorithms based on their approach. The most efficient such algorithms have an amortized running time of O(log n/log log n) per operation, where n is the total number of elements in all the sets. These algorithms use O(n log n) space, but the space usage can be reduced to O(n) by a simple change. It is proven that any separable pointer-based algorithm for the problem requires Ω(log n/log log n) time per operation, thus showing that the upper bound on amortized time is tight.
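To illustrate deunion (though not the O(log n/log log n) structures the report analyzes), here is a union-find with union by rank, no path compression, and a history stack so that the most recent union not yet undone can be reversed. The class and method names are illustrative.

class UnionFindWithDeunion:
    """Union by rank, no path compression, plus a history stack so that
    deunion() undoes the most recent union not yet undone.  Find costs
    O(log n) here; the report's algorithms achieve O(log n/log log n)
    amortized per operation."""
    def __init__(self, n):
        self.parent = list(range(n))
        self.rank = [0] * n
        self.history = []

    def find(self, x):
        while self.parent[x] != x:     # no path compression: keeps unions undoable
            x = self.parent[x]
        return x

    def union(self, x, y):
        rx, ry = self.find(x), self.find(y)
        if rx == ry:
            self.history.append(None)             # record a no-op union
            return
        if self.rank[rx] > self.rank[ry]:
            rx, ry = ry, rx
        self.history.append((rx, self.rank[ry]))  # what to restore on deunion
        self.parent[rx] = ry
        if self.rank[rx] == self.rank[ry]:
            self.rank[ry] += 1

    def deunion(self):
        """Undo the most recently performed union operation not yet undone."""
        entry = self.history.pop()
        if entry is None:
            return
        child, old_rank = entry
        root = self.parent[child]
        self.parent[child] = child
        self.rank[root] = old_rank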

Journal ArticleDOI
TL;DR: It is shown that a minimum cost spanning pseudoforest of a graph with n vertices and m edges can be found in O(m+n) time, which implies that a minimum spanning tree can be found in O(m) time for graphs with girth at least log^(i) n for some constant i.



ReportDOI
15 Jul 1988
TL;DR: The authors study the following problem: given a strongly connected digraph, find a minimal strongly connected spanning subgraph of it. They present a parallel algorithm for this problem that runs in polylog parallel time and uses O(n³) processors on a PRAM.
Abstract: The authors study the following problem: given a strongly connected digraph, find a minimal strongly connected spanning subgraph of it. Their main result is a parallel algorithm for this problem, which runs in polylog parallel time and uses O(n³) processors on a PRAM. The authors' algorithm is simple and the major tool it uses is computing a minimum-weight branching with zero-one weights. They also present sequential algorithms for the problem that run in O(m + n log n) time.
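The trivial sequential baseline for this problem is easy to sketch: greedily delete edges whose removal preserves strong connectivity. The result is minimal (no further edge can be deleted) but the method is far slower than the report's algorithms; names and representation are assumptions.

def minimal_scss(n, edges):
    """Greedy minimal strongly connected spanning subgraph on vertices 0..n-1.
    Roughly O(m^2 + mn): one reachability test in each direction per candidate
    edge.  A baseline only, not the report's O(m + n log n) algorithms."""
    def reaches_all(adj, s):
        seen, stack = {s}, [s]
        while stack:
            u = stack.pop()
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    stack.append(v)
        return len(seen) == n

    def strongly_connected(edge_set):
        adj = [[] for _ in range(n)]
        radj = [[] for _ in range(n)]
        for u, v in edge_set:
            adj[u].append(v)
            radj[v].append(u)
        return reaches_all(adj, 0) and reaches_all(radj, 0)

    kept = set(edges)
    for e in list(edges):
        if strongly_connected(kept - {e}):   # still strongly connected without e?
            kept.discard(e)
    return kept

# Example: a directed 4-cycle plus a chord; the chord gets removed.
print(minimal_scss(4, [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]))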

01 Jan 1988
TL;DR: An Ω(log n) lower bound is proved for the amortized worst-case time complexity of any deterministic algorithm in a class of algorithms encompassing realistic hashing-based schemes.
Abstract: The dynamic dictionary problem is considered: provide an algorithm for storing a dynamic set, allowing the operations insert, delete, and lookup. A dynamic perfect hashing strategy is given: a randomized algorithm for the dynamic dictionary problem that takes $O(1)$ worst-case time for lookups and $O(1)$ amortized expected time for insertions and deletions; it uses space proportional to the size of the set stored. Furthermore, lower bounds for the time complexity of a class of deterministic algorithms for the dictionary problem are proved. This class encompasses realistic hashing-based schemes that use linear space. Such algorithms have amortized worst-case time complexity $\Omega(\log n)$ for a sequence of $n$ insertions and lookups; if the worst-case lookup time is restricted to $k$, then the lower bound becomes $\Omega(k\cdot n^{1/k})$.
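The static two-level ("FKS") structure underlying this result can be sketched directly: a top-level hash splits the keys into buckets, and each bucket gets a quadratically sized secondary table with a collision-free hash function found by random retries. The paper's dynamic scheme additionally rebuilds tables as insertions and deletions accumulate, which is omitted here; the hash family, the prime, and all names are illustrative assumptions, and keys are assumed to be distinct non-negative integers below the prime.

import random

PRIME = (1 << 31) - 1   # a prime larger than any key we hash (illustrative)

def random_hash(m):
    """A member of the universal family h(x) = ((a*x + b) mod p) mod m."""
    a = random.randrange(1, PRIME)
    b = random.randrange(0, PRIME)
    return lambda x: ((a * x + b) % PRIME) % m

class StaticPerfectHash:
    """Static FKS two-level perfect hashing for a set of distinct integer keys.
    Lookup is O(1) worst case; construction is expected O(n).  The paper's
    dynamic scheme wraps this idea with periodic rebuilding to support
    insertions and deletions in O(1) amortized expected time."""
    def __init__(self, keys):
        n = max(len(keys), 1)
        # retry the top-level hash until the bucket sizes are acceptable
        while True:
            self.h = random_hash(n)
            buckets = [[] for _ in range(n)]
            for k in keys:
                buckets[self.h(k)].append(k)
            if sum(len(b) ** 2 for b in buckets) <= 4 * n:
                break
        # each bucket gets a collision-free secondary table of size |b|^2
        self.tables, self.hs = [], []
        for b in buckets:
            m = len(b) ** 2
            while True:
                g = random_hash(m) if m else (lambda x: 0)
                table = [None] * max(m, 1)
                ok = True
                for k in b:
                    i = g(k)
                    if table[i] is not None:   # collision: retry with a new g
                        ok = False
                        break
                    table[i] = k
                if ok:
                    break
            self.tables.append(table)
            self.hs.append(g)

    def __contains__(self, key):
        i = self.h(key)
        return self.tables[i][self.hs[i](key)] == key

# Example usage:
s = StaticPerfectHash([3, 17, 4242, 99, 100])
print(17 in s, 18 in s)   # True False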