
Showing papers by "Robert E. Tarjan published in 1987"


Journal ArticleDOI
TL;DR: Fibonacci heaps (F-heaps), a new data structure for implementing heaps that extends the binomial queues proposed by Vuillemin and studied further by Brown, yield improved running times for several network optimization algorithms; the improved bound for minimum spanning trees is the most striking.
Abstract: In this paper we develop a new data structure for implementing heaps (priority queues). Our structure, Fibonacci heaps (abbreviated F-heaps), extends the binomial queues proposed by Vuillemin and studied further by Brown. F-heaps support arbitrary deletion from an n-item heap in O(log n) amortized time and all other standard heap operations in O(1) amortized time. Using F-heaps we are able to obtain improved running times for several network optimization algorithms. In particular, we obtain the following worst-case bounds, where n is the number of vertices and m the number of edges in the problem graph:

- O(n log n + m) for the single-source shortest path problem with nonnegative edge lengths, improved from O(m log_(m/n+2) n);
- O(n^2 log n + nm) for the all-pairs shortest path problem, improved from O(nm log_(m/n+2) n);
- O(n^2 log n + nm) for the assignment problem (weighted bipartite matching), improved from O(nm log_(m/n+2) n);
- O(m β(m, n)) for the minimum spanning tree problem, improved from O(m log log_(m/n+2) n), where β(m, n) = min {i | log^(i) n ≤ m/n}. Note that β(m, n) ≤ log* n if m ≥ n.

Of these results, the improved bound for minimum spanning trees is the most striking, although all the results give asymptotic improvements for graphs of appropriate densities.
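As a concrete illustration of the heap operations behind the single-source bound, here is a standard Dijkstra sketch in Python. It is not the paper's F-heap: `heapq` has no decrease-key, so stale queue entries are skipped instead (lazy deletion), which costs an extra log factor relative to the O(n log n + m) bound that an F-heap's O(1) amortized decrease-key achieves.

```python
import heapq

def dijkstra(n, adj, s):
    """Single-source shortest paths with nonnegative edge lengths.

    adj: adjacency list, adj[u] = list of (v, weight) pairs.
    heapq lacks decrease-key, so outdated entries are left in the
    queue and skipped on pop; an F-heap would instead update the
    key in place in O(1) amortized time.
    """
    INF = float("inf")
    dist = [INF] * n
    dist[s] = 0
    pq = [(0, s)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:          # stale entry, skip
            continue
        for v, w in adj[u]:
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(pq, (dist[v], v))
    return dist
```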

2,484 citations


Journal ArticleDOI
TL;DR: This work presents improved partition refinement algorithms for three problems: lexicographic sorting, relational coarsest partition, and double lexical ordering; the double lexical ordering algorithm uses a new, efficient method for unmerging two sorted sets.
Abstract: We present improved partition refinement algorithms for three problems: lexicographic sorting, relational coarsest partition, and double lexical ordering. Our double lexical ordering algorithm uses a new, efficient method for unmerging two sorted sets.
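The refinement step shared by all three problems can be sketched naively as follows: every block of the current partition is split against a pivot set. This toy version copies blocks wholesale; the paper's algorithms reach their improved bounds with linked structures and process-the-smaller-half bookkeeping, which this sketch does not attempt.

```python
def refine(partition, pivot):
    """Split every block of `partition` against the set `pivot`.

    Each block B becomes B & pivot and B - pivot, with empty parts
    dropped. Repeated refinement by well-chosen pivots is the common
    engine behind partition refinement algorithms.
    """
    pivot = set(pivot)
    result = []
    for block in partition:
        inside = block & pivot
        outside = block - pivot
        if inside and outside:
            result.append(inside)
            result.append(outside)
        else:
            result.append(block)   # pivot did not split this block
    return result
```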

1,267 citations


Journal ArticleDOI
TL;DR: Given a triangulation of a simple polygon P, linear-time algorithms for solving a collection of problems concerning shortest paths and visibility within P are presented.
Abstract: Given a triangulation of a simple polygon P, we present linear-time algorithms for solving a collection of problems concerning shortest paths and visibility within P. These problems include calculation of the collection of all shortest paths inside P from a given source vertex S to all the other vertices of P, calculation of the subpolygon of P consisting of points that are visible from a given segment within P, preprocessing P for fast "ray shooting" queries, and several related problems.

544 citations


Proceedings ArticleDOI
01 Jan 1987
TL;DR: This work introduces a framework for solving minimum-cost flow problems and shows how to extend techniques developed for the maximum flow problem to improve the quality of a solution.
Abstract: We introduce a framework for solving minimum-cost flow problems. Our approach measures the quality of a solution by the amount that the complementary slackness conditions are violated. We show how to extend techniques developed for the maximum flow problem to improve the quality of a solution. This framework allows us to achieve O(min(n^3, n^(5/3) m^(2/3), nm log n) log(nC)) running time.
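A minimal sketch of the quality measure, using an arc-list layout of my own (not the paper's): a flow/price pair is eps-optimal if no residual arc has reduced cost below -eps, and eps = 0 recovers exact complementary slackness. The paper's algorithms progressively shrink this violation.

```python
def eps_optimal(arcs, flow, price, eps):
    """Check eps-complementary-slackness for a flow/price pair.

    arcs: list of (u, v, capacity, cost); flow[i] is the flow on
    arcs[i]; price maps node -> potential. A residual arc exists
    forward when flow < capacity and backward when flow > 0.
    The pair is eps-optimal if every residual arc has reduced
    cost >= -eps. Illustrative data layout, not the paper's.
    """
    for (u, v, cap, cost), f in zip(arcs, flow):
        rc = cost + price[u] - price[v]   # reduced cost of (u, v)
        if f < cap and rc < -eps:          # forward residual arc
            return False
        if f > 0 and -rc < -eps:           # backward residual arc
            return False
    return True
```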

197 citations


Journal ArticleDOI
TL;DR: The combination of the original George-Liu nested dissection algorithm and the Lipton-Tarjan planar separator algorithm is analyzed, guaranteeing bounds of O(n log n) on fill and O(n^(3/2)) on operation count for planar graphs, two-dimensional finite element graphs, graphs of bounded genus, and graphs of bounded degree with n^(1/2)-separators.
Abstract: Nested dissection is an algorithm invented by Alan George for preserving sparsity in Gaussian elimination on symmetric positive definite matrices. Nested dissection can be viewed as a recursive divide-and-conquer algorithm on an undirected graph; it uses separators in the graph, which are small sets of vertices whose removal divides the graph approximately in half. George and Liu gave an implementation of nested dissection that used a heuristic to find separators. Lipton and Tarjan gave an algorithm to find n^(1/2)-separators in planar graphs and two-dimensional finite element graphs, and Lipton, Rose, and Tarjan used these separators in a modified version of nested dissection, guaranteeing bounds of O(n log n) on fill and O(n^(3/2)) on operation count. We analyze the combination of the original George-Liu nested dissection algorithm and the Lipton-Tarjan planar separator algorithm. This combination is interesting because it is easier to implement than the Lipton-Rose-Tarjan version, especially in the framework of existing sparse matrix software. Using some topological graph theory, we prove O(n log n) fill and O(n^(3/2)) operation count bounds for planar graphs, two-dimensional finite element graphs, graphs of bounded genus, and graphs of bounded degree with n^(1/2)-separators. For planar and finite element graphs, the leading constant factor is smaller than that in the Lipton-Rose-Tarjan analysis. We also construct a class of graphs with n^(1/2)-separators for which our algorithm does not achieve an O(n log n) bound on fill.
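The recursive structure of nested dissection can be sketched on a grid graph, where a middle row or column is an obvious n^(1/2)-separator. This toy version stands in for both the George-Liu heuristic and the Lipton-Tarjan separator algorithm: it numbers the two halves recursively and the separator vertices last.

```python
def nested_dissection_order(rows, cols):
    """Elimination order for a rows x cols grid graph via nested
    dissection: recursively number the two halves, then number the
    separator (a middle row or column) last.
    """
    order = []

    def dissect(r0, r1, c0, c1):
        # cells [r0, r1) x [c0, c1)
        h, w = r1 - r0, c1 - c0
        if h <= 0 or w <= 0:
            return
        if h * w <= 2:                      # base case: number directly
            order.extend((r, c) for r in range(r0, r1)
                                for c in range(c0, c1))
            return
        if h >= w:                          # split on a middle row
            m = (r0 + r1) // 2
            dissect(r0, m, c0, c1)
            dissect(m + 1, r1, c0, c1)
            order.extend((m, c) for c in range(c0, c1))
        else:                               # split on a middle column
            m = (c0 + c1) // 2
            dissect(r0, r1, c0, m)
            dissect(r0, r1, m + 1, c1)
            order.extend((r, m) for r in range(r0, r1))

    dissect(0, rows, 0, cols)
    return order
```

Eliminating vertices in this order keeps fill confined to the separator blocks, which is the source of the O(n log n) fill bound.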

119 citations


Journal ArticleDOI
TL;DR: The quest for efficiency in computational methods yields not only fast algorithms, but also insights that lead to elegant, simple, and general problem-solving methods.
Abstract: The quest for efficiency in computational methods yields not only fast algorithms, but also insights that lead to elegant, simple, and general problem-solving methods.

27 citations


Journal ArticleDOI
TL;DR: In the following interview, John Hopcroft and Robert Tarjan discuss their collaboration and its influence on their separate research today and comment on supercomputing and parallelism.
Abstract: In the following interview, which took place at the 1986 Fall Joint Computer Conference in Dallas, Texas, John Hopcroft and Robert Tarjan discuss their collaboration and its influence on their separate research today. They also comment on supercomputing and parallelism, particularly with regard to statements by FJCC Keynote speakers Kenneth Wilson, Nobel laureate and director of Cornell University's Supercomputer Center, and C. Gordon Bell, chief architect on the team that designed DEC's VAX and now with the National Science Foundation. Finally the Turing Award winners air their views on the direction of computer science as a whole and on funding and the Strategic Defense Initiative.

5 citations


01 Jul 1987
TL;DR: The recent maximum flow algorithm of Goldberg and Tarjan can be extended to solve an important class of such parametric maximum flow problems, at the cost of only a constant factor in its worst case time bound.
Abstract: The classical maximum flow problem sometimes occurs in settings in which the capacities are not fixed but are functions of a single parameter, and the goal is to find the value of the parameter such that the corresponding maximum flow or minimum cut satisfies some side condition. Finding the desired parameter value requires solving a sequence of related maximum flow problems. We show that the recent maximum flow algorithm of Goldberg and Tarjan can be extended to solve an important class of such parametric maximum flow problems, at the cost of only a constant factor in its worst-case time bound. Faster algorithms for a variety of combinatorial optimization problems follow from our result. Keywords: Algorithms; Data structures; Graphs; Maximum flow; Network flows; Networks.
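For contrast with the paper's result, a naive way to search the parameter is to bisect on lambda and solve a fresh maximum flow at each probe. This sketch uses a small Edmonds-Karp solver and assumes capacities (hence the flow value) are nondecreasing in the parameter; all names and the data layout are illustrative. The paper's point is that the Goldberg-Tarjan algorithm can handle the whole parameter sweep within a constant factor of a single run, avoiding the repeated solves below.

```python
from collections import deque

def max_flow(n, cap, s, t):
    """Edmonds-Karp max flow on a dense capacity matrix (mutates cap)."""
    flow = 0
    while True:
        parent = [-1] * n
        parent[s] = s
        q = deque([s])
        while q and parent[t] == -1:
            u = q.popleft()
            for v in range(n):
                if parent[v] == -1 and cap[u][v] > 0:
                    parent[v] = u
                    q.append(v)
        if parent[t] == -1:
            return flow
        b, v = float("inf"), t             # bottleneck along the path
        while v != s:
            b = min(b, cap[parent[v]][v])
            v = parent[v]
        v = t
        while v != s:                      # augment along the path
            cap[parent[v]][v] -= b
            cap[v][parent[v]] += b
            v = parent[v]
        flow += b

def smallest_lam(n, cap_at, s, t, target, lo, hi, iters=50):
    """Bisect for the least lambda whose max flow reaches `target`.

    cap_at(lam) builds a fresh capacity matrix; monotonicity of the
    capacities in lambda is assumed. Each probe is a full max flow,
    which is exactly the cost the parametric algorithm avoids.
    """
    for _ in range(iters):
        mid = (lo + hi) / 2
        if max_flow(n, cap_at(mid), s, t) >= target:
            hi = mid
        else:
            lo = mid
    return hi
```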

3 citations


01 Jul 1987
TL;DR: The relaxed heap is a priority queue data structure that achieves the same amortized time bounds as the Fibonacci heap: a sequence of m decrease-key and n delete-min operations takes O(m + n log n) time.
Abstract: The relaxed heap is a priority queue data structure that achieves the same amortized time bounds as the Fibonacci heap: a sequence of m decrease-key and n delete-min operations takes O(m + n log n) time. A variant of relaxed heaps achieves similar bounds in the worst case: O(1) time for decrease-key and O(log n) for delete-min. A relaxed heap is a type of binomial queue that allows heap order to be violated.

3 citations


Proceedings ArticleDOI
12 Oct 1987
TL;DR: In "A linear-time algorithm for triangulating a simple polygon" [Proceedings of the Eighteenth Annual ACM Symposium on Theory of Computing (1986), 380-388], the analysis showing that the authors' triangulation algorithm runs in linear time is incorrect, and indeed the algorithm does not run in linear time in the worst case.
Abstract: In "A linear-time algorithm for triangulating a simple polygon" [Proceedings of the Eighteenth Annual ACM Symposium on Theory of Computing (1986), 380-388], the analysis showing that the authors' triangulation algorithm runs in linear time is incorrect, and indeed the algorithm does not run in linear time in the worst case. So far they have been unable to obtain a linear-time algorithm for the triangulation problem. They have been able to obtain an O(n log log n)-time algorithm, however. The details are described in "An O(n log log n)-Time Algorithm for Triangulating a Simple Polygon," SIAM Journal on Computing 17, 1 (February, 1988), to appear.

2 citations