Journal ArticleDOI

Fast Estimation of Diameter and Shortest Paths (Without Matrix Multiplication)

01 Mar 1999-SIAM Journal on Computing (Society for Industrial and Applied Mathematics)-Vol. 28, Iss: 4, pp 1167-1181
TL;DR: In this article, a combinatorial algorithm was proposed that solves the APSP problem in unweighted, undirected graphs with an additive error of 2 in time $O(n^{2.5}\sqrt{\log n})$.
Abstract: In the recent past, there has been considerable progress in devising algorithms for the all-pairs shortest paths (APSP) problem running in time significantly smaller than the obvious time bound of $O(n^3)$. Unfortunately, all the new algorithms are based on fast matrix multiplication algorithms that are notoriously impractical. Our work is motivated by the goal of devising purely combinatorial algorithms that match these improved running times. Our results come close to achieving this goal, in that we present algorithms with a small additive error in the length of the paths obtained. Our algorithms are easy to implement, have the desired property of being combinatorial in nature, and the hidden constants in the running time bound are fairly small. Our main result is an algorithm which solves the APSP problem in unweighted, undirected graphs with an additive error of 2 in time $O(n^{2.5}\sqrt{\log n})$. This algorithm returns actual paths and not just the distances. In addition, we give more efficient algorithms with running time $O(n^{1.5} \sqrt{k \log n} + n^2 \log^2 n)$ for the case where we are only required to determine shortest paths between k specified pairs of vertices rather than all pairs of vertices. The starting point for all our results is an $O(m \sqrt{n \log n})$ algorithm for distinguishing between graphs of diameter 2 and 4, and this is later extended to obtaining a ratio-2/3 approximation to the diameter in time $O(m \sqrt{n \log n} + n^2 \log n)$. Unlike in the case of APSP, our results for approximate diameter computation can be extended to the case of directed graphs with arbitrary positive real weights on the edges.
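The dominating-set idea behind the additive-error bound can be sketched briefly. The Python fragment below is a minimal illustration, not the paper's full algorithm: it omits the sparse-subgraph handling of low-degree pairs and makes no claim about matching the stated running time. It builds a set D that dominates every high-degree vertex, runs BFS from each vertex of D, and upper-bounds d(u, v) by the minimum of d(w, u) + d(w, v) over w in D.

from collections import deque
import math

def bfs(adj, src):
    # Standard BFS on an unweighted adjacency list; returns distances from src.
    dist = [math.inf] * len(adj)
    dist[src] = 0
    queue = deque([src])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if dist[v] == math.inf:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def additive2_estimates(adj, deg_threshold=None):
    # Returns est with d(u, v) <= est[u][v], and est[u][v] <= d(u, v) + 2
    # whenever u or v has degree >= deg_threshold.
    n = len(adj)
    if deg_threshold is None:
        deg_threshold = max(1, int(math.sqrt(n * max(1.0, math.log2(n + 1)))))
    # Greedily pick D so that every high-degree vertex is in D or adjacent to D.
    dominated = [False] * n
    D = []
    for u in range(n):
        if len(adj[u]) >= deg_threshold and not dominated[u]:
            D.append(u)
            dominated[u] = True
            for v in adj[u]:
                dominated[v] = True
    est = [[math.inf] * n for _ in range(n)]
    for u in range(n):
        est[u][u] = 0
    for w in D:
        dw = bfs(adj, w)  # exact distances from the "surrogate" vertex w
        for u in range(n):
            for v in range(u + 1, n):
                via_w = dw[u] + dw[v]  # never underestimates d(u, v)
                if via_w < est[u][v]:
                    est[u][v] = est[v][u] = via_w
    return est

For a pair (u, v) where u is high-degree, some w in D has d(w, u) <= 1, so the estimate is at most d(u, v) + 2; the paper completes the picture by running BFS in a sparse subgraph to handle pairs in which both endpoints have low degree.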
Citations
Journal ArticleDOI
TL;DR: The most impressive feature of the data structure is its constant query time, hence the name "oracle", and it provides faster constructions of sparse spanners of weighted graphs, and improved tree covers and distance labelings of weighted or unweighted graphs.
Abstract: Let G = (V, E) be an undirected weighted graph with $|V| = n$ and $|E| = m$. Let $k \ge 1$ be an integer. We show that G = (V, E) can be preprocessed in $O(kmn^{1/k})$ expected time, constructing a data structure of size $O(kn^{1+1/k})$, such that any subsequent distance query can be answered, approximately, in $O(k)$ time. The approximate distance returned is of stretch at most $2k-1$, that is, the quotient obtained by dividing the estimated distance by the actual distance lies between 1 and $2k-1$. A 1963 girth conjecture of Erdős implies that $\Omega(n^{1+1/k})$ space is needed in the worst case for any real stretch strictly smaller than $2k+1$. The space requirement of our algorithm is, therefore, essentially optimal. The most impressive feature of our data structure is its constant query time, hence the name "oracle". Previously, data structures that used only $O(n^{1+1/k})$ space had a query time of $\Omega(n^{1/k})$. Our algorithms are extremely simple and easy to implement efficiently. They also provide faster constructions of sparse spanners of weighted graphs, and improved tree covers and distance labelings of weighted or unweighted graphs.
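For intuition, here is a toy Python rendering of the k = 2 case restricted to unweighted graphs (the class and method names are ours; the cited construction handles weighted graphs and general k). Sample a set A of roughly n^{1/2} vertices, store a BFS tree from each sampled vertex, and for every vertex u store exact distances to its bunch B(u) = {w : d(u, w) < d(u, A)}; a query then returns an estimate of stretch at most 3.

import math, random
from collections import deque

def bfs_from(adj, sources):
    # Multi-source BFS on an unweighted graph.
    dist = [math.inf] * len(adj)
    queue = deque()
    for s in sources:
        dist[s] = 0
        queue.append(s)
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if dist[v] == math.inf:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

class StretchThreeOracle:
    # Toy k = 2 oracle for an unweighted, connected graph.
    def __init__(self, adj, seed=0):
        n = len(adj)
        rng = random.Random(seed)
        self.A = [v for v in range(n) if rng.random() < 1.0 / math.sqrt(n)] or [0]
        self.from_A = {w: bfs_from(adj, [w]) for w in self.A}  # BFS tree per sample
        self.d_A = bfs_from(adj, self.A)                       # d(v, A) for every v
        self.pivot = [min(self.A, key=lambda w: self.from_A[w][v]) for v in range(n)]
        # Bunch of u: all vertices strictly closer to u than A is, found by a
        # truncated BFS that never expands past depth d(u, A) - 1.
        self.bunch = []
        for u in range(n):
            b = {u: 0}
            queue = deque([u])
            while queue:
                x = queue.popleft()
                if b[x] + 1 >= self.d_A[u]:
                    continue
                for y in adj[x]:
                    if y not in b:
                        b[y] = b[x] + 1
                        queue.append(y)
            self.bunch.append(b)

    def query(self, u, v):
        # Returns an estimate e with d(u, v) <= e <= 3 * d(u, v).
        if v in self.bunch[u]:
            return self.bunch[u][v]            # exact distance
        if u in self.bunch[v]:
            return self.bunch[v][u]            # exact distance
        w = self.pivot[u]                      # d(u, w) <= d(u, v) since v is outside B(u)
        return self.d_A[u] + self.from_A[w][v]

The stretch argument is the one from the paper: either an exact distance is stored, or d(u, pivot(u)) <= d(u, v) and d(pivot(u), v) <= 2 d(u, v) by the triangle inequality, so the returned value is at most 3 d(u, v).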

618 citations

Proceedings ArticleDOI
06 Jul 2001
TL;DR: The most impressive feature of the data structure is its constant query time, hence the name "oracle", and it provides faster constructions of sparse spanners of weighted graphs, and improved tree covers and distance labelings of weighted or unweighted graphs.
Abstract: Let G = (V, E) be an undirected weighted graph with $|V| = n$ and $|E| = m$. Let $k \ge 1$ be an integer. We show that G = (V, E) can be preprocessed in $O(kmn^{1/k})$ expected time, constructing a data structure of size $O(kn^{1+1/k})$, such that any subsequent distance query can be answered, approximately, in $O(k)$ time. The approximate distance returned is of stretch at most $2k-1$, i.e., the quotient obtained by dividing the estimated distance by the actual distance lies between 1 and $2k-1$. We show that a 1963 girth conjecture of Erdős implies that $\Omega(n^{1+1/k})$ space is needed in the worst case for any real stretch strictly smaller than $2k+1$. The space requirement of our algorithm is, therefore, essentially optimal. The most impressive feature of our data structure is its constant query time, hence the name oracle. Previously, data structures that used only $O(n^{1+1/k})$ space had a query time of $\Omega(n^{1/k})$ and a slightly larger, non-optimal, stretch. Our algorithms are extremely simple and easy to implement efficiently. They also provide faster constructions of sparse spanners of weighted graphs, and improved tree covers and distance labelings of weighted or unweighted graphs.

563 citations

Posted Content
TL;DR: In this article, the authors consider several well-studied problems in dynamic algorithms and prove that sufficient progress on any of them would imply a breakthrough on one of five major open problems in the theory of algorithms.
Abstract: We consider several well-studied problems in dynamic algorithms and prove that sufficient progress on any of them would imply a breakthrough on one of five major open problems in the theory of algorithms: 1. Is the 3SUM problem on $n$ numbers solvable in $O(n^{2-\epsilon})$ time for some $\epsilon>0$? 2. Can one determine the satisfiability of a CNF formula on $n$ variables in $O((2-\epsilon)^n \,\mathrm{poly}(n))$ time for some $\epsilon>0$? 3. Is the All Pairs Shortest Paths problem for graphs on $n$ vertices solvable in $O(n^{3-\epsilon})$ time for some $\epsilon>0$? 4. Is there a linear-time algorithm that detects whether a given graph contains a triangle? 5. Is there an $O(n^{3-\epsilon})$-time combinatorial algorithm for $n\times n$ Boolean matrix multiplication? The problems we consider include dynamic versions of bipartite perfect matching, bipartite maximum weight matching, single source reachability, single source shortest paths, strong connectivity, subgraph connectivity, diameter approximation, and some nongraph problems such as Pagh's problem defined in a recent paper by Patrascu [STOC 2010].
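As a point of reference for the first conjecture, the standard quadratic-time 3SUM routine (sort, then scan with two pointers) is the baseline that a hypothetical $O(n^{2-\epsilon})$-time algorithm would have to beat. The sketch below is only that baseline, not part of the cited paper.

def has_3sum(nums):
    # Classic O(n^2) algorithm: sort, then for each index i scan the rest
    # with two pointers looking for a pair summing to -nums[i].
    a = sorted(nums)
    n = len(a)
    for i in range(n - 2):
        lo, hi = i + 1, n - 1
        target = -a[i]
        while lo < hi:
            s = a[lo] + a[hi]
            if s == target:
                return True
            if s < target:
                lo += 1
            else:
                hi -= 1
    return False

# Example: 5 + (-8) + 3 == 0, so the first call returns True.
assert has_3sum([5, 1, -8, 3, 7])
assert not has_3sum([1, 2, 4, 8])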

308 citations

Journal ArticleDOI
Uri Zwick1
TL;DR: Two new algorithms for solving the All Pairs Shortest Paths (APSP) problem for weighted directed graphs using fast matrix multiplication algorithms are presented.
Abstract: We present two new algorithms for solving the All Pairs Shortest Paths (APSP) problem for weighted directed graphs. Both algorithms use fast matrix multiplication algorithms. The first algorithm solves the APSP problem for weighted directed graphs in which the edge weights are integers of small absolute value in $O(n^{2+\mu})$ time, where $\mu$ satisfies the equation $\omega(1, \mu, 1) = 1 + 2\mu$ and $\omega(1, \mu, 1)$ is the exponent of the multiplication of an $n \times n^{\mu}$ matrix by an $n^{\mu} \times n$ matrix. Currently, the best available bounds on $\omega(1, \mu, 1)$, obtained by Coppersmith, imply that $\mu < 0.575$, so the running time of the algorithm is $O(n^{2.575})$. The second algorithm solves the APSP problem almost exactly for directed graphs with arbitrary non-negative real edge weights; its running time is $\tilde{O}((n^{\omega}/\epsilon)\log(W/\epsilon))$, where $\epsilon > 0$ is an error parameter and W is the largest edge weight in the graph, after the edge weights are scaled so that the smallest non-zero edge weight in the graph is 1. It returns estimates of all the distances in the graph with a stretch of at most $1 + \epsilon$. Corresponding paths can also be found efficiently.
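The common starting point for matrix-multiplication-based APSP algorithms is that shortest paths reduce to repeated distance (min-plus) products; the fast algorithms then replace the naive cubic product with clever reductions to fast integer or Boolean matrix multiplication. The sketch below shows only the naive reduction by repeated squaring, for illustration; it is not Zwick's algorithm.

import math

def min_plus_product(A, B):
    # Naive O(n^3) distance product: C[i][j] = min over k of A[i][k] + B[k][j].
    n = len(A)
    C = [[math.inf] * n for _ in range(n)]
    for i in range(n):
        for k in range(n):
            if A[i][k] == math.inf:
                continue
            for j in range(n):
                if A[i][k] + B[k][j] < C[i][j]:
                    C[i][j] = A[i][k] + B[k][j]
    return C

def apsp_by_squaring(W):
    # W[i][j] is the edge weight (math.inf if absent), with 0 on the diagonal.
    # After ceil(log2 n) squarings every shortest path (no negative cycles)
    # of up to n - 1 edges has been accounted for.
    n = len(W)
    D = W
    for _ in range(max(1, math.ceil(math.log2(n)))):
        D = min_plus_product(D, D)
    return D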

286 citations

Journal ArticleDOI
TL;DR: A simple argument shows that computing all distances in G with an additive one-sided error of at most 1 is as hard as Boolean matrix multiplication, and describes an APASP2 algorithm, which is simple, easy to implement, and faster than the fastest known matrix-multiplication algorithm.
Abstract: Let G=(V,E) be an unweighted undirected graph on n vertices. A simple argument shows that computing all distances in G with an additive one-sided error of at most 1 is as hard as Boolean matrix multiplication. Building on recent work of Aingworth et al. [SIAM J. Comput., 28 (1999), pp. 1167--1181], we describe an ${\tilde{O}}(\min\{n^{3/2}m^{1/2},n^{7/3}\})$-time algorithm APASP2 for computing all distances in G with an additive one-sided error of at most 2. Algorithm APASP2 is simple, easy to implement, and faster than the fastest known matrix-multiplication algorithm. Furthermore, for every even k>2, we describe an ${\tilde{O}}(\min\{n^{2-{2}/{(k+2)}}m^{{2}/{(k+2)}}, n^{2+{2}/{(3k-2)}}\})$-time algorithm APASPk for computing all distances in G with an additive one-sided error of at most k. We also give an ${\tilde{O}}(n^2)$-time algorithm APASP$_\infty$ for producing stretch-3 estimated distances in an unweighted and undirected graph on n vertices. No constant stretch factor was previously achieved in ${\tilde{O}}(n^2)$ time. We say that a weighted graph F=(V,E') k-emulates an unweighted graph G=(V,E) if for every $u,v\in V$ we have $\delta_G(u,v)\le \delta_F(u,v)\le \delta_G(u,v)+k$. We show that every unweighted graph on n vertices has a 2-emulator with ${\tilde{O}}(n^{3/2})$ edges and a 4-emulator with ${\tilde{O}}(n^{4/3})$ edges. These results are asymptotically tight. Finally, we show that any weighted undirected graph on n vertices has a 3-spanner with ${\tilde{O}}(n^{3/2})$ edges and that such a 3-spanner can be built in ${\tilde{O}}(mn^{1/2})$ time. We also describe an ${\tilde{O}}(n(m^{2/3}+n))$-time algorithm for estimating all distances in a weighted undirected graph on n vertices with a stretch factor of at most 3.
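For context on the spanner results mentioned at the end, the classical greedy construction (due to Althöfer et al., not the construction of the cited paper) already yields a stretch-3 spanner: process edges by nondecreasing weight and keep an edge only if the spanner built so far offers no path of length at most 3 times its weight. A hedged Python sketch follows.

import heapq, math

def spanner_dist(adj, src, dst, cutoff):
    # Dijkstra on the partial spanner, abandoning paths longer than cutoff.
    dist = {src: 0.0}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            return d
        if d > dist.get(u, math.inf) or d > cutoff:
            continue
        for v, w in adj[u]:
            nd = d + w
            if nd <= cutoff and nd < dist.get(v, math.inf):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist.get(dst, math.inf)

def greedy_spanner(n, edges, stretch=3):
    # edges: list of (u, v, w). An edge is kept only if the spanner so far
    # has no u-v path of length <= stretch * w, so the result has stretch
    # at most `stretch` with respect to the input graph.
    adj = [[] for _ in range(n)]
    kept = []
    for u, v, w in sorted(edges, key=lambda e: e[2]):
        if spanner_dist(adj, u, v, stretch * w) > stretch * w:
            adj[u].append((v, w))
            adj[v].append((u, w))
            kept.append((u, v, w))
    return kept

With stretch = 2k - 1 the same greedy rule keeps only $O(n^{1+1/k})$ edges, which is the size bound the abstract quotes for its 3-spanners when k = 2.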

280 citations

References
Book
01 Jan 1990
TL;DR: The updated new edition of the classic Introduction to Algorithms is intended primarily for use in undergraduate or graduate courses in algorithms or data structures and presents a rich variety of algorithms and covers them in considerable depth while making their design and analysis accessible to all levels of readers.
Abstract: From the Publisher: The updated new edition of the classic Introduction to Algorithms is intended primarily for use in undergraduate or graduate courses in algorithms or data structures. Like the first edition, this text can also be used for self-study by technical professionals since it discusses engineering issues in algorithm design as well as the mathematical aspects. In its new edition, Introduction to Algorithms continues to provide a comprehensive introduction to the modern study of algorithms. The revision has been updated to reflect changes in the years since the book's original publication. New chapters on the role of algorithms in computing and on probabilistic analysis and randomized algorithms have been included. Sections throughout the book have been rewritten for increased clarity, and material has been added wherever a fuller explanation has seemed useful or new information warrants expanded coverage. As in the classic first edition, this new edition of Introduction to Algorithms presents a rich variety of algorithms and covers them in considerable depth while making their design and analysis accessible to all levels of readers. Further, the algorithms are presented in pseudocode to make the book easily accessible to students from all programming language backgrounds. Each chapter presents an algorithm, a design technique, an application area, or a related topic. The chapters are not dependent on one another, so the instructor can organize his or her use of the book in the way that best suits the course's needs. Additionally, the new edition offers a 25% increase over the first edition in the number of problems, giving the book 155 problems and over 900 exercises that reinforce the concepts the students are learning.

21,651 citations

Book
01 Sep 1985

7,736 citations

Journal ArticleDOI
TL;DR: In this paper, an algorithm is given which computes the coefficients of the product of two square matrices A and B of order n with fewer than $4.7 \cdot n^{\log 7}$ arithmetical operations (all logarithms in the paper are for base 2).
Abstract: 1. Below we will give an algorithm which computes the coefficients of the product of two square matrices A and B of order n from the coefficients of A and B with less than $4.7 \cdot n^{\log 7}$ arithmetical operations (all logarithms in this paper are for base 2, thus $\log 7 \approx 2.8$; the usual method requires approximately $2n^3$ arithmetical operations). The algorithm induces algorithms for inverting a matrix of order n, solving a system of n linear equations in n unknowns, computing a determinant of order n, etc., all requiring less than $\mathrm{const} \cdot n^{\log 7}$ arithmetical operations. This fact should be compared with the result of Klyuyev and Kokovkin-Shcherbak [1] that Gaussian elimination for solving a system of linear equations is optimal if one restricts oneself to operations upon rows and columns as a whole. We also note that Winograd [2] modifies the usual algorithms for matrix multiplication and inversion and for solving systems of linear equations, trading roughly half of the multiplications for additions and subtractions. It is a pleasure to thank D. Brillinger for inspiring discussions about the present subject and St. Cook and B. Parlett for encouraging me to write this paper. 2. We define algorithms $\alpha_{m,k}$ which multiply matrices of order $m 2^k$, by induction on k: $\alpha_{m,0}$ is the usual algorithm for matrix multiplication (requiring $m^3$ multiplications and $m^2(m-1)$ additions). With $\alpha_{m,k}$ already being known, define $\alpha_{m,k+1}$ as follows: If A, B are matrices of order $m 2^{k+1}$ to be multiplied, write
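The recursion described in section 2 is easy to state concretely: at each level the two matrices are split into 2 x 2 blocks and multiplied with 7 block products instead of 8. A small Python sketch (plain nested lists, padding and base-case tuning omitted) is given below for illustration.

def strassen(A, B):
    # Multiply two n x n matrices, n a power of two, using 7 recursive
    # block products per level (Strassen's identities) instead of 8.
    n = len(A)
    if n == 1:
        return [[A[0][0] * B[0][0]]]
    h = n // 2
    def split(M):
        return ([r[:h] for r in M[:h]], [r[h:] for r in M[:h]],
                [r[:h] for r in M[h:]], [r[h:] for r in M[h:]])
    def add(X, Y):
        return [[x + y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]
    def sub(X, Y):
        return [[x - y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]
    A11, A12, A21, A22 = split(A)
    B11, B12, B21, B22 = split(B)
    M1 = strassen(add(A11, A22), add(B11, B22))
    M2 = strassen(add(A21, A22), B11)
    M3 = strassen(A11, sub(B12, B22))
    M4 = strassen(A22, sub(B21, B11))
    M5 = strassen(add(A11, A12), B22)
    M6 = strassen(sub(A21, A11), add(B11, B12))
    M7 = strassen(sub(A12, A22), add(B21, B22))
    C11 = add(sub(add(M1, M4), M5), M7)
    C12 = add(M3, M5)
    C21 = add(M2, M4)
    C22 = add(sub(add(M1, M3), M2), M6)
    top = [r1 + r2 for r1, r2 in zip(C11, C12)]
    bottom = [r1 + r2 for r1, r2 in zip(C21, C22)]
    return top + bottom

Each level trades one multiplication for a constant number of additions, which is what yields the $n^{\log 7} \approx n^{2.81}$ operation count stated in the abstract.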

2,581 citations

Journal ArticleDOI
TL;DR: For the problem of finding the maximum clique in a graph, no algorithm has been found for which the ratio does not grow at least as fast as $n^{\epsilon}$, where n is the problem size and $\epsilon > 0$ depends on the algorithm.

2,472 citations

Journal ArticleDOI
TL;DR: In this article, a new method for accelerating matrix multiplication asymptotically is presented; it builds on the ideas of Volker Strassen by using a basic trilinear form which is not a matrix product.

2,454 citations