
Fast approximation algorithms for the diameter and radius of sparse graphs

TL;DR: This paper presents the first improvement over the diameter approximation algorithm of Aingworth et al., matching its 3/2-approximation guarantee in Õ(m√n) expected time.
Abstract: The diameter and the radius of a graph are fundamental topological parameters that have many important practical applications in real world networks. The fastest combinatorial algorithm for both parameters works by solving the all-pairs shortest paths problem (APSP) and has a running time of Õ(mn) in m-edge, n-node graphs. In a seminal paper, Aingworth, Chekuri, Indyk and Motwani [SODA'96 and SICOMP'99] presented an algorithm that computes in Õ(m√n + n²) time an estimate D̂ for the diameter D, such that ⌊2D/3⌋ ≤ D̂ ≤ D. Their paper spawned a long line of research on approximate APSP. For the specific problem of diameter approximation, however, no improvement has been achieved in over 15 years. Our paper presents the first improvement over the diameter approximation algorithm of Aingworth et al., producing an algorithm with the same estimate but with an expected running time of Õ(m√n). We thus show that for all sparse enough graphs, the diameter can be 3/2-approximated in o(n²) time. Our algorithm is obtained using a surprisingly simple method of neighborhood depth estimation that is strong enough to also approximate, in the same running time, the radius and more generally, all of the eccentricities, i.e. for every node the distance to its furthest node. We also provide strong evidence that our diameter approximation result may be hard to improve. We show that if for some constant ε > 0 there is an O(m^(2−ε)) time (3/2 − ε)-approximation algorithm for the diameter of undirected unweighted graphs, then there is an O*((2 − δ)^n) time algorithm for CNF-SAT on n variables for constant δ > 0, and the strong exponential time hypothesis of [Impagliazzo, Paturi, Zane JCSS'01] is false. Motivated by this negative result, we give several improved diameter approximation algorithms for special cases.
We show for instance that for unweighted graphs of constant diameter D not divisible by 3, there is an O(m^(2−ε)) time algorithm that gives a (3/2 − ε) approximation for constant ε > 0. This is interesting since the diameter approximation problem is hardest to solve for small D.

Summary

1. INTRODUCTION

  • The diameter and the radius are two of the most basic graph parameters.
  • The diameter of a graph is the largest distance between its vertices.
  • Being able to compute the diameter, center and radius of a graph efficiently has become an increasingly important problem in the analysis of large networks [35].
  • For general graphs with arbitrary edge weights, the only known algorithms computing the diameter and radius exactly compute the distance between every pair of vertices in the graph, thus solving the all-pairs shortest paths problem (APSP).

Our contributions.

  • The authors give the first improvement over the diameter approximation algorithm of Aingworth et al. for sparse graphs.
  • The authors present an algorithm with a slightly better approximation and an expected running time of Õ(m √ n).
  • The fastest known algorithm for CNF-SAT is the exhaustive search algorithm that runs in O*(2^n) time by trying all possible 2^n assignments to the variables.
  • The authors elaborate on this hypothesis (SETH) later in the paper.

Notation.

  • The graph G = (V, E) can be directed or undirected; this is specified in each context.
  • Unless explicitly specified, the graphs the authors consider are unweighted.
  • In an unweighted graph, the eccentricity of a vertex v denoted with ecc(v) is the depth of its BFS tree BFS(v).
  • The authors assume throughout the paper that for each v and each s ≤ n, |N_s^in(v)| = |N_s^out(v)| = s, as otherwise the diameter of the graph would be ∞, and this can be checked with two BFS runs from and to an arbitrary node.
  • The authors use the following standard notation for running times.

2. DIAMETER

  • The authors first revisit the algorithm of Aingworth et al. and tighten its approximation analysis.
  • The authors then present their new neighborhood estimation approach that is at the basis of their improved algorithm.
  • Aingworth et al. set s = √n and obtain their running time.
  • The authors note that if one sets s = m^(1/3) instead, one can get a runtime of Õ(m^(2/3)·n) that is better for sparse graphs; they later show that both of these runtimes can be improved using their new method.
  • The authors now analyze the quality of the estimate returned by the algorithm.

2.2 Improving the running time

  • The authors show how to remove the ns² term from the running time while keeping the quality of the estimate unchanged.
  • In the next lemma the authors analyze the running time of the algorithm.
  • Let D̂ be the estimate returned by the above algorithm.
  • The authors can also assume that d_out(w) < 2h + z, since the algorithm computes BFS_out(w), and if d_out(w) ≥ 2h + z the returned estimate is already at least 2h + z.
  • In each run, S ∩ N_s^out(w) = ∅ holds with very small probability: S is large enough so that whp it intersects the s-neighborhoods of all n vertices of the graph.

3. ECCENTRICITIES

  • The goal is to compute, for every vertex v, a good approximation ê(v) of its eccentricity ecc(v).
  • The authors note that their eccentricities algorithm can also be made to work for undirected graphs with nonnegative weights at most W by again using Dijkstra’s algorithm in place of BFS.
  • Then it computes all BFS trees for the vertices of S ∪ N_s(w) for s = √n. Let v_t ∈ N_s(w) be the closest vertex to v on the shortest path between w and v. Such a vertex exists since w ∈ N_s(w), and for every v it can be computed during the computation of the BFS tree from w.
  • In the next three lemmas the authors prove the bounds on the approximation.
  • Consider the node v′_t that is after v_t on the shortest path between w and v. Since v_t is the closest node to v on the shortest path between w and v that belongs to N_s(w), it follows that v′_t ∉ N_s(w).

4. HARDNESS UNDER SETH

  • Impagliazzo, Paturi, and Zane [23, 24] introduced the Exponential Time Hypothesis (ETH) and its stronger variant, the Strong Exponential Time Hypothesis (SETH).
  • These two complexity hypotheses assume lower bounds on how fast satisfiability problems can be solved.
  • A natural question is how fast can one solve r-SAT as r grows.
  • Create a vertex for every one of the 2^(n/2) partial assignments to the variables in S1. Pǎtraşcu and Williams [29] are able to show that improving the runtime for k-dominating set can be reduced to improving the known algorithms for a problem related to CNF-SAT, but that problem could still be harder than CNF-SAT.

5. IMPROVED APPROXIMATIONS

  • In this section the authors show that in some cases it is possible to obtain fast (3/2 − ε)-approximations for the diameter.
  • The authors present two algorithms, one works well for dense graphs and the other for sparse graphs.

5.1 Dense graphs

  • Both theorems rely on algorithm Approx-Diam(G) that works as follows.
  • Next, the algorithm scans all pairs of vertices u and v and checks whether the following condition holds: BFS_out(u, d_s^out(u)−1) and BFS_in(v, d_s^in(v)−1) are disjoint and there is no edge between BFS_out(u, d_s^out(u)−1) and BFS_in(v, d_s^in(v)−1).
  • If Approx-Diam(G) returns the value that it gets from one of the runs of Aingworth et al.'s algorithm, then the claim follows from Lemma 2.
  • In this case let w be the vertex with the largest d_s^out(w) value.
  • (The condition requires both that there is no edge between the two trees and that they have no vertex in common.)

5.2 Sparse graphs

  • The authors now show that for graphs of constant diameter, it is sometimes possible to obtain a better than 3/2-approximation in Õ(m^(2−ε)) time for constant ε > 0.
  • Finally, it returns the maximum depth of all computed BFS trees.
  • The authors now analyze the quality of the approximation.
  • Let y_i be the vertex with the deepest incoming BFS among the vertices of BFS_out(w, h′), where h′ = min{h̃ + …}.
  • The algorithm computes a BFS tree for every vertex of H. |H| = O(m/∆) since there are at most that many vertices of outdegree at least ∆.

6. REFERENCES

  • Fast estimation of diameter and shortest paths (without matrix multiplication).
  • On the possibility of faster SAT algorithms.
  • On the all-pairs-shortest-path problem in unweighted undirected graphs.
  • All pairs shortest paths using bridging sets and rectangular matrix multiplication.



Fast Approximation Algorithms for the Diameter and
Radius of Sparse Graphs
Liam Roditty
Bar Ilan University
liam.roditty@biu.ac.il
Virginia Vassilevska Williams
UC Berkeley and Stanford University
virgi@eecs.berkeley.edu
ABSTRACT
The diameter and the radius of a graph are fundamental topological parameters that have many important practical applications in real world networks. The fastest combinatorial algorithm for both parameters works by solving the all-pairs shortest paths problem (APSP) and has a running time of Õ(mn) in m-edge, n-node graphs. In a seminal paper, Aingworth, Chekuri, Indyk and Motwani [SODA'96 and SICOMP'99] presented an algorithm that computes in Õ(m√n + n²) time an estimate D̂ for the diameter D, such that ⌊2D/3⌋ ≤ D̂ ≤ D. Their paper spawned a long line of research on approximate APSP. For the specific problem of diameter approximation, however, no improvement has been achieved in over 15 years.
Our paper presents the first improvement over the diameter approximation algorithm of Aingworth et al., producing an algorithm with the same estimate but with an expected running time of Õ(m√n). We thus show that for all sparse enough graphs, the diameter can be 3/2-approximated in o(n²) time. Our algorithm is obtained using a surprisingly simple method of neighborhood depth estimation that is strong enough to also approximate, in the same running time, the radius and more generally, all of the eccentricities, i.e. for every node the distance to its furthest node.
We also provide strong evidence that our diameter approximation result may be hard to improve. We show that if for some constant ε > 0 there is an O(m^(2−ε)) time (3/2 − ε)-approximation algorithm for the diameter of undirected unweighted graphs, then there is an O*((2 − δ)^n) time algorithm for CNF-SAT on n variables for constant δ > 0, and the strong exponential time hypothesis of [Impagliazzo, Paturi, Zane JCSS'01] is false.
Work supported by the Israel Science Foundation (grant no. 822/10).
Partially supported by NSF Grants CCF-0830797 and CCF-1118083 at UC Berkeley, and by NSF Grants IIS-0963478 and IIS-0904325, and an AFOSR MURI Grant, at Stanford University.
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee.
STOC'13, June 1-4, 2013, Palo Alto, California, USA.
Copyright 2013 ACM 978-1-4503-2029-0/13/06 ...$15.00.
Motivated by this negative result, we give several improved diameter approximation algorithms for special cases. We show for instance that for unweighted graphs of constant diameter D not divisible by 3, there is an O(m^(2−ε)) time algorithm that gives a (3/2 − ε) approximation for constant ε > 0. This is interesting since the diameter approximation problem is hardest to solve for small D.

Categories and Subject Descriptors
F.2.2 [Nonnumerical Algorithms and Problems]; G.2.2 [Graph Theory]: Graph algorithms

Keywords
graph diameter; approximation algorithm; shortest paths
1. INTRODUCTION
The diameter and the radius are two of the most basic graph parameters. The diameter of a graph is the largest distance between its vertices. The center of a graph is a vertex that minimizes the maximum distance to all other nodes, and the radius is the distance from the center to the node furthest from it. Being able to compute the diameter, center and radius of a graph efficiently has become an increasingly important problem in the analysis of large networks [35]. The diameter of the web graph for instance is the largest number of clicks necessary to get from one document to another, and Albert et al. were able to show experimentally that it is roughly 19 [2]. The problem of computing a center vertex and the radius of a graph is often studied as a facility location problem for networks: pick a single vertex facility so that the maximum distance from a demand point (client) in the network is minimized.
The algorithmic complexity of the diameter and radius problems is very well-studied. For special classes of graphs there are efficient algorithms [21, 19, 15, 11, 12, 5]. E.g. the radius in chordal graphs can be found in linear time. However, for general graphs with arbitrary edge weights, the only known algorithms computing the diameter and radius exactly compute the distance between every pair of vertices in the graph, thus solving the all-pairs shortest paths problem (APSP).
For dense directed unweighted graphs, one can compute both the diameter and the radius using fast matrix multiplication (this is folklore; for a recent simple algorithm see [17]), thus obtaining Õ(n^ω) time algorithms, where ω < 2.38 is the matrix multiplication exponent [14, 33, 34] and n is the number of nodes in the graph. It is not known whether APSP in such graphs can be solved in Õ(n^ω) time; the best algorithm is by Zwick [36] running in O(n^2.54) time [25], and hence for directed unweighted graphs diameter and radius can be solved somewhat faster than APSP. For undirected unweighted graphs the best known algorithm for diameter and radius is Seidel's Õ(n^ω) time APSP algorithm [32].
For sparse directed or undirected unweighted graphs, the best known algorithm (ignoring poly-logarithmic factors)¹ for APSP, diameter and radius, does breadth-first search (BFS) from every node and hence runs in O(mn) time, where m is the number of edges in the graph. For sparse graphs with m = O(n), the running time is Θ(n²), which is natural for APSP since the algorithm needs to output n² distances. However, for the diameter and the radius the output is a single integer, and it is not immediately clear why one should spend Ω(n²) time to compute them.
A natural question is whether one can get substantially faster diameter and radius algorithms by settling for an approximation. It is well-known that a 2-approximation for both the diameter and the radius in an undirected graph is easy to achieve in O(m + n) time using BFS from an arbitrary node. On the other hand, for APSP, Dor et al. [18] show that any (2 − ε)-approximation algorithm in unweighted undirected graphs running in T(n) time would imply an O(T(n)) time algorithm for Boolean matrix multiplication (BMM). Hence a priori it could be that (2 − ε)-approximating the diameter and radius of a graph may also require solving BMM.
In a seminal paper from 1996, Aingworth et al. [1] showed that it is in fact possible to get a subcubic (2 − ε)-approximation algorithm for the diameter in both directed and undirected graphs without resorting to fast matrix multiplication. They designed an Õ(m√n + n²) time algorithm computing an estimate D̂ that satisfies 2D/3 ≤ D̂ ≤ D. Their algorithm has several important and interesting properties. It is the only known algorithm for approximating the diameter polynomially faster than O(mn) for every m that is superlinear in n. It always runs in truly subcubic time even in dense graphs, and does not explicitly compute all-pairs approximate shortest paths.
For the radius problem, Berman and Kasiviswanathan [6] showed that the approach of Aingworth et al. can be used to obtain in Õ(m√n + n²) time an estimate r̂ that satisfies r ≤ r̂ ≤ 3r/2, where r is the radius of the graph. Thus both radius and diameter admit Õ(m√n + n²) time 3/2-approximations.
Aingworth et al. also presented an algorithm that computes an additive 2-approximation for the APSP problem in Õ(n^2.5) time, that is, for every u, v ∈ V the algorithm returns a value d̂(u, v) such that d(u, v) ≤ d̂(u, v) ≤ d(u, v) + 2, where d(u, v) is the distance between u and v. Their paper spawned a long line of research on distance approximation. However, none of the following works considered the specific problems of diameter and radius approximation, but rather focused on approximation algorithms for APSP. Dor, Halperin, and Zwick [18] presented an additive 2-approximation for APSP in unweighted undirected graphs with a running time of Õ(min{n^(3/2) m^(1/2), n^(7/3)}), thus improving on Aingworth et al.'s APSP approximation algorithm. Baswana et al. [3] presented an algorithm for unweighted undirected graphs with an expected running time of O(m^(2/3) n log n + n²) that computes an approximation of all distances with a multiplicative error of 2 and an additive error of 1. Elkin [20] presented an algorithm for unweighted undirected graphs with a running time of O(mn^ρ + n²ζ) that approximates the distances with a multiplicative error of (1 + ε) and an additive error that is a function of ζ, ρ and ε. Cohen and Zwick [13] extended the results of [18] to weighted graphs. Baswana and Kavitha [4] presented an Õ(m√n + n²) time multiplicative 2-approximation algorithm and an Õ(m^(2/3) n + n²) time 7/3-approximation algorithm for APSP in weighted undirected graphs.
Since Aingworth et al.'s paper, the only paper that considers the diameter approximation problem directly is by Boitmanis et al. [9]. They presented an algorithm with Õ(m√n) running time that computes the diameter with an additive error of √n. Although such an additive error could be small for graphs with large diameter, it is prohibitive when it comes to graphs with small diameter.
A simple random sampling argument shows that for all graphs with diameter at least n^δ, there is an Õ(mn^(1−δ)) time (1 + ε)-approximation algorithm for all ε > 0. Hence diameter approximation is hardest for graphs with small diameter. For such graphs the additive approximation of Boitmanis et al. presents no significant approximation guarantee.
¹Chan [10] and Blelloch et al. [8] presented algorithms with O(mn/poly log n) running times.
Our contributions.
We give the first improvement over the diameter approximation algorithm of Aingworth et al. for sparse graphs. We present an algorithm with a slightly better approximation and an expected running time of Õ(m√n). This is always faster than the runtime of [1] for m = o(n^1.5).

Theorem 1. Let G = (V, E) be a directed or an undirected unweighted graph with diameter D = 3h + z, where h ≥ 0 and z ∈ {0, 1, 2}. In Õ(m√n) expected time one can compute an estimate D̂ of D such that 2h + z ≤ D̂ ≤ D for z ∈ {0, 1} and 2h + 1 ≤ D̂ ≤ D for z = 2.

We obtain our efficient algorithm by a surprisingly simple node sampling technique that allows us to replace an expensive neighborhood computation with a cheap estimate.
The diameter and radius are the maximum and minimum eccentricities in the graph, respectively. In an unweighted graph, the eccentricity of a vertex is the distance to its furthest node. Our techniques are general enough that we can obtain good estimates of all n eccentricities in an undirected unweighted graph in Õ(m√n) time. We prove:

Theorem 2. Let G = (V, E) be an undirected unweighted graph with diameter D and radius r. In Õ(m√n) expected time one can compute for every node v ∈ V an estimate ê(v) of its eccentricity ecc(v) such that: max{r, (2/3)ecc(v)} ≤ ê(v) ≤ min{D, (3/2)ecc(v)}.

We note that until now the only known approximation algorithm for all node eccentricities that runs in o(n²) time for sparse graphs is the simple 2-approximation algorithm for radius and diameter that runs BFS from a single node. That algorithm only achieves estimates ê(v) for which max{r, ecc(v)/2} ≤ ê(v) ≤ min{D, 2ecc(v)}.
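One way to realize the single-BFS baseline just described (the paper does not spell out the estimator, so this is our reconstruction under the stated bounds): run BFS from one arbitrary vertex u and output ê(v) = ecc(u) for every v. In a connected undirected graph, d(u, v) ≤ ecc(u) and the triangle inequality give ecc(v)/2 ≤ ecc(u) ≤ 2·ecc(v), while trivially r ≤ ecc(u) ≤ D.

```python
from collections import deque

def bfs_dist(adj, src):
    """Distances from src in an unweighted undirected graph."""
    dist = {src: 0}
    q = deque([src])
    while q:
        x = q.popleft()
        for y in adj[x]:
            if y not in dist:
                dist[y] = dist[x] + 1
                q.append(y)
    return dist

def baseline_ecc_estimates(adj):
    """Single-BFS baseline: use ecc(u) of one arbitrary vertex u as the
    estimate for every v; it satisfies
    max{r, ecc(v)/2} <= e_hat(v) <= min{D, 2 ecc(v)}."""
    e = max(bfs_dist(adj, 0).values())
    return {v: e for v in range(len(adj))}
```

The point of Theorem 2 is precisely that the 2 in these bounds can be improved to 3/2 while staying in o(n²) time for sparse graphs.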

Our approximation algorithm for radius follows directly from Theorem 2 by taking r̂ = min_v ê(v). We obtain:

Theorem 3. In Õ(m√n) expected time one can compute an estimate r̂ of the radius r of an undirected unweighted graph such that r ≤ r̂ ≤ 3r/2.

Our diameter, radius and eccentricity algorithms naturally extend to graphs with nonnegative edge weights, similar to the algorithm of Aingworth et al.
A natural question is whether there is an almost linear time approximation scheme for the diameter problem: an algorithm that for any constant ε > 0 runs in Õ(m) time and returns an estimate D̂ such that (1 − ε)D ≤ D̂ ≤ D. Bernstein [7] showed that related problems in directed graphs such as the second shortest path between two nodes and the replacement paths problem admit such approximation schemes. Such an algorithm for diameter would be of immense interest, and has not so far been explicitly ruled out, even conditionally.
Here we give strong evidence that a fast (3/2 − ε)-diameter approximation algorithm may be very hard to find, even for undirected unweighted graphs. We prove:

Theorem 4. Suppose there is a constant ε > 0 so that there is a (3/2 − ε)-approximation algorithm for the diameter in m-edge undirected unweighted graphs that runs in O(m^(2−ε)) time for every m. Then, SAT for CNF formulas on n variables can be solved in O*((2 − δ)^n) time for some constant δ > 0.

The fastest known algorithm for CNF-SAT is the exhaustive search algorithm that runs in O*(2^n) time by trying all possible 2^n assignments to the variables. It is a major open problem whether there is a faster algorithm. Several other NP-hard problems are known to be equivalent to CNF-SAT so that if one of these problems has a faster algorithm than exhaustive search, then all of them do [16]. Hence, our result has the following surprising implication: if the diameter can be approximated fast enough, then problems such as Hitting Set, Set Splitting, or NAE-SAT, all seemingly unrelated to the diameter, can be solved faster than exhaustive search.
The strong exponential time hypothesis (SETH) of Impagliazzo, Paturi, and Zane [23, 24] implies that there is no improved O*((2 − δ)^n) time algorithm for CNF-SAT, and hence our result also implies that there is no (3/2 − ε)-approximation algorithm for the diameter running in O(m^(2−ε)) time unless SETH fails. (We elaborate on this hypothesis later on in the paper.)
We prove Theorem 4 by showing that any O(n^(2−ε)) time algorithm that distinguishes whether the diameter of a given sparse (m = O(n)) undirected unweighted graph is 2 or at least 3 would imply an improved CNF-SAT algorithm. This implies that unless SETH fails, O(n²) time is essentially required to get a (3/2 − ε)-approximation algorithm for the diameter in sparse graphs, within n^o(1) factors. Hence, within n^o(1) factors, the time for (3/2 − ε)-approximating the diameter in a sparse graph is the same as the time required for computing APSP exactly!
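The flavor of this "diameter 2 vs. 3" reduction can be illustrated on toy instances. The sketch below is our reconstruction under stated assumptions, not the paper's exact construction: split the variables into two halves, create a vertex per partial assignment of each half and per clause, connect a partial assignment to every clause it does NOT satisfy, and add two hub vertices so that every distance other than a left-right assignment pair is at most 2. A left-right pair then has distance 2 exactly when some clause is violated by both halves, so the diameter is 2 iff the formula is unsatisfiable, and 3 if a satisfying assignment exists.

```python
import itertools
from collections import deque

def diameter(adj):
    """Exact diameter by BFS from every vertex (fine for toy graphs)."""
    best = 0
    for s in adj:
        dist = {s: 0}
        q = deque([s])
        while q:
            x = q.popleft()
            for y in adj[x]:
                if y not in dist:
                    dist[y] = dist[x] + 1
                    q.append(y)
        best = max(best, max(dist.values()))
    return best

def sat_to_graph(n_vars, clauses):
    """clauses: list of sets of literals, e.g. {1, -2} means (x1 or not x2).
    Variables 1..n_vars are split into two halves."""
    half = n_vars // 2
    left = [dict(zip(range(1, half + 1), bits))
            for bits in itertools.product([False, True], repeat=half)]
    right = [dict(zip(range(half + 1, n_vars + 1), bits))
             for bits in itertools.product([False, True], repeat=n_vars - half)]
    adj = {}
    def add_edge(a, b):
        adj.setdefault(a, set()).add(b)
        adj.setdefault(b, set()).add(a)
    def satisfies(assign, clause):
        return any(abs(lit) in assign and assign[abs(lit)] == (lit > 0)
                   for lit in clause)
    # assignment vertex -- clause vertex, when the half-assignment fails the clause
    for i, a in enumerate(left):
        for k, c in enumerate(clauses):
            if not satisfies(a, c):
                add_edge(('L', i), ('C', k))
    for j, b in enumerate(right):
        for k, c in enumerate(clauses):
            if not satisfies(b, c):
                add_edge(('R', j), ('C', k))
    # hubs keep all non-(L,R) distances at most 2
    for i in range(len(left)):
        add_edge('uL', ('L', i))
    for j in range(len(right)):
        add_edge('uR', ('R', j))
    for k in range(len(clauses)):
        add_edge('uL', ('C', k))
        add_edge('uR', ('C', k))
    add_edge('uL', 'uR')
    return adj
```

The graph has 2·2^(n/2) + |clauses| + 2 vertices, so a fast diameter-2-vs-3 distinguisher would beat 2^n exhaustive search, which is the engine behind Theorem 4.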
In their paper, Aingworth et al. showed that one can distinguish between graphs of diameter 2 and 4 in Õ(m√n) time, whereas we show that distinguishing between 2 and 3 fast may be difficult. We further explore which graph diameters can be efficiently distinguished, and prove the following two theorems that improve upon the approximation of Aingworth et al.'s algorithm.

Theorem 5. Let G = (V, E) be a directed or undirected unweighted graph with diameter D = 3h + z, where h ≥ 0 and z ∈ {0, 1, 2}. There is an Õ(m^(2/3) n^(4/3)) time algorithm that reports an estimate D̂ such that 2h + z ≤ D̂ ≤ D.

Theorem 6. There is an Õ(m^(2/3) n^(4/3)) time algorithm that when run on an undirected unweighted graph with diameter D, reports an estimate D̂ with 4D/5 ≤ D̂ ≤ D.

Theorem 5 shows for instance that one can efficiently distinguish between directed or undirected graphs of diameter 3 and 5, and Theorem 6 obtains a 5/4-approximation for the diameter that runs in O(mn/n^ε) time for some constant ε > 0 in all undirected graphs with a superlinear number of edges. The previous best approximation quality achievable polynomially faster than O(mn) time for such graphs was Aingworth et al.'s 3/2-approximation.
We further investigate whether one can ever obtain a (3/2 − ε)-approximation for the diameter in O(m^(2−ε)) time, and show that this is indeed possible for graphs with constant diameter that is not divisible by 3. This is intriguing since, as we pointed out earlier, the diameter approximation problem is hardest for graphs with small diameter. We prove:

Theorem 7. There is an Õ(m^(2−1/(2h+3))) time deterministic algorithm that computes an estimate D̂ with 2D/3 ≤ D̂ ≤ D for all m-edge unweighted graphs of diameter D = 3h + z with h ≥ 0 and z ∈ {0, 1, 2}. In particular, D̂ ≥ 2h + z.
Notation.
Let G = (V, E) denote a graph. It can be directed or undirected; this will be specified in each context. If the graph is weighted, then there is a function on the edges w : E → Q⁺ ∪ {0}. Unless explicitly specified, the graphs we consider are unweighted.
For any u, v ∈ V, let d(u, v) denote the distance from u to v in G. Let BFS_in(v) and BFS_out(v) be the incoming and outgoing breadth-first search (BFS) trees of v, respectively, that is, the BFS trees starting at v in G and in G with the edges reversed. Let d_in(v) be the depth of BFS_in(v), i.e. the largest distance from a vertex of BFS_in(v) to v. Similarly, let d_out(v) be the depth of BFS_out(v).
In an unweighted graph, the eccentricity of a vertex v, denoted with ecc(v), is the depth of its BFS tree BFS(v). In a weighted graph, the eccentricity ecc(v) of v is the maximum over all u ∈ V of d(v, u). The radius of a graph is r = min_{v∈V} ecc(v), and the diameter is D = max_{v∈V} ecc(v).
For h ≤ d_in(v), let BFS_in(v, h) be the vertices in the first h levels of BFS_in(v). Similarly, for h ≤ d_out(v), let BFS_out(v, h) be the vertices in the first h levels of BFS_out(v).
Let N_s^in(v) (N_s^out(v)) be the set of the s closest incoming (outgoing) vertices of v, where ties are broken by taking the vertex with the smaller id. We assume throughout the paper that for each v and each s ≤ n, |N_s^in(v)| = |N_s^out(v)| = s, as otherwise the diameter of the graph would be ∞, and this can be checked with two BFS runs from and to an arbitrary node. For undirected graphs N_s(v) = N_s^in(v) = N_s^out(v).
Let d_s^in(v) be the largest distance from a vertex of N_s^in(v) to v, and d_s^out(v) be the largest distance from v to a vertex of N_s^out(v). Let d_s^in = max_{v∈V} d_s^in(v) and d_s^out = max_{v∈V} d_s^out(v).
For a set S ⊆ V and a vertex v ∈ V we define p_S(v) to be a vertex of S such that d(v, p_S(v)) ≤ d(v, w) for every w ∈ S, i.e. the closest vertex of S to v.
For a degree ∆ we define p_∆(v) to be the closest vertex to v of degree at least ∆, that is, d(v, p_∆(v)) ≤ d(v, w) for every w ∈ V of degree at least ∆.
We use the following standard notation for running times. For a function of n, f(n), Õ(f(n)) denotes O(f(n) poly log n) and O*(f(n)) denotes O(f(n) poly(n)).
We write whp to mean with high probability, i.e. with probability at least 1 − 1/poly(n).
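The neighborhood notation above is easy to compute with a truncated BFS. The sketch below is illustrative (our code, not the paper's): it returns N_s^out(v) together with d_s^out(v). We include v itself in its own neighborhood, and approximate the paper's tie-breaking by visiting each vertex's out-neighbors in increasing id order.

```python
from collections import deque

def n_s_out(adj, v, s):
    """N_s^out(v): the s closest outgoing vertices of v (including v),
    plus d_s^out(v), the largest distance from v to a vertex of the set.
    Ties at equal distance are approximated by smaller-id-first order."""
    dist = {v: 0}
    order = [v]
    q = deque([v])
    while q and len(order) < s:
        u = q.popleft()
        for w in sorted(adj[u]):          # smaller id first
            if w not in dist:
                dist[w] = dist[u] + 1
                order.append(w)
                q.append(w)
                if len(order) == s:
                    break
    return set(order), max(dist[x] for x in order)
```

Stopping the BFS after s vertices is what makes the all-vertices computation cost O(ns²) in total (each truncated BFS touches at most s² edges among the s closest vertices), which is exactly the term Section 2.2 removes.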
2. DIAMETER
In this section we present the proof of Theorem 1. We first revisit the algorithm of Aingworth et al. and tighten its approximation analysis. We then present our new neighborhood estimation approach that is at the basis of our improved algorithm.

2.1 The algorithm of Aingworth et al.
The algorithm of Aingworth, Chekuri, Indyk and Motwani [1] computes a (roughly) 3/2-approximation of the diameter of a directed (or undirected) graph in Õ(m√n + n²) time. Let s be a given parameter in [1, n]. The algorithm works as follows. First, it computes N_s^out(v) for every v ∈ V. Then, for a vertex w where d_s^out(w) = d_s^out, it computes BFS_out(w) and for every u ∈ N_s^out(w) it computes BFS_in(u). Next, it computes a set S that hits N_s^out(v) for every v ∈ V and for every u ∈ S it computes BFS_out(u). As an estimate, the algorithm returns the depth of the deepest computed BFS tree. The next lemma appears in [1]. We state it for completeness.

Lemma 1. The algorithm runtime is Õ(ns² + (n/s + s)m).

Aingworth et al. set s = √n and obtain their running time. We note that if one sets s = m^(1/3) instead, one can get a runtime of Õ(m^(2/3) n) that is better for sparse graphs; we later show that both of these runtimes can be improved using our new method.
We now analyze the quality of the estimate returned by the algorithm. Aingworth et al. [1] proved that this estimate is at least ⌊2D/3⌋ in graphs with diameter D. Here we present a tighter analysis.
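The algorithm of Aingworth et al. as just described can be sketched end to end. This is an illustrative sketch, not the paper's code: the deterministic hitting-set construction of [1] is replaced by a simple greedy set cover, and the neighborhood tie-breaking is approximated by id order.

```python
from collections import deque

def bfs_depths(adj, src):
    """All distances from src; the tree depth is their maximum."""
    dist = {src: 0}
    q = deque([src])
    while q:
        x = q.popleft()
        for y in adj[x]:
            if y not in dist:
                dist[y] = dist[x] + 1
                q.append(y)
    return dist

def n_s(adj, v, s):
    """s closest vertices of v (BFS order, ids break ties) and d_s(v)."""
    dist = {v: 0}; order = [v]; q = deque([v])
    while q and len(order) < s:
        u = q.popleft()
        for w in sorted(adj[u]):
            if w not in dist:
                dist[w] = dist[u] + 1; order.append(w); q.append(w)
                if len(order) == s:
                    break
    return order, max(dist[x] for x in order)

def aingworth_estimate(adj, radj, s):
    """3/2-style diameter estimate for a directed graph (adj = out-edges,
    radj = in-edges); for undirected graphs pass the same list twice."""
    n = len(adj)
    nbrs = {v: n_s(adj, v, s) for v in range(n)}
    w = max(range(n), key=lambda v: nbrs[v][1])     # d_s^out(w) = d_s^out
    est = max(bfs_depths(adj, w).values())          # BFS_out(w)
    for u in nbrs[w][0]:                            # BFS_in(u), u in N_s^out(w)
        est = max(est, max(bfs_depths(radj, u).values()))
    # greedy hitting set S for {N_s^out(v)}; BFS_out from every u in S
    uncovered, S = set(range(n)), []
    while uncovered:
        counts = {}
        for v in uncovered:
            for x in nbrs[v][0]:
                counts[x] = counts.get(x, 0) + 1
        best = max(counts, key=counts.get)
        S.append(best)
        uncovered = {v for v in uncovered if best not in nbrs[v][0]}
    for u in S:
        est = max(est, max(bfs_depths(adj, u).values()))
    return est
```

Every candidate is the depth of a real shortest-paths tree, so the estimate never exceeds D; Lemma 2 below supplies the matching lower bound.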
Lemma 2. Let G = (V, E) be a directed graph with diameter D = 3h + z, where h ≥ 0 and z ∈ {0, 1, 2}. Let D̂ be the estimate returned by the algorithm. For z ∈ {0, 1}, we have 2h + z ≤ D̂ ≤ D. For z = 2, we have that 2h + 1 ≤ D̂ ≤ D.

Proof. Let a, b ∈ V such that d(a, b) = D. First notice that the algorithm always returns the depth of some shortest paths tree and hence D̂ ≤ D.
If d_s^out(w) ≤ h then also d_s^out(a) ≤ h and as S hits N_s^out(a), one of the BFS trees computed for vertices of S has depth at least 2h + z. Hence, assume that d_s^out(w) > h. We can also assume that d_out(w) < 2h + z as otherwise when we compute BFS_out(w), the estimate would become at least 2h + z.
As d_out(w) < 2h + z, also d(w, b) < 2h + z. Since d_s^out(w) > h, we have that BFS_out(w, h) ⊆ N_s^out(w). Hence there is a vertex w′ ∈ N_s^out(w) on the path from w to b such that d(w, w′) = h and hence d(w′, b) < h + z. Since d(a, b) = 3h + z, we must have that d(a, w′) ≥ 2h + 1. As the algorithm computes BFS_in(u) for every u ∈ N_s^out(w), in particular, it computes BFS_in(w′), and returns an estimate ≥ 2h + 1. For z ∈ {0, 1}, d(a, w′) ≥ 2h + 1 ≥ 2h + z and hence the final estimate returned is always at least 2h + z. For z = 2 we only have that d(a, w′) ≥ 2h + 1 and if the algorithm returns d(a, w′) as an estimate, it may return 2h + 1 instead of 2h + z. □
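Lemma 2's guarantee, as a function of D alone, can be tabulated directly; the helper name below is ours. Writing D = 3h + z, the guaranteed lower bound is 2h + z for z ∈ {0, 1} and 2h + 1 for z = 2, which matches the classic ⌊2D/3⌋ bound except at z = 1, where it is one better.

```python
def lemma2_lower_bound(D):
    """Guaranteed lower bound on the estimate for diameter D = 3h + z
    (Lemma 2): 2h + z if z in {0, 1}, else 2h + 1."""
    h, z = divmod(D, 3)
    return 2 * h + z if z in (0, 1) else 2 * h + 1
```

For example, D = 7 gives h = 2, z = 1 and a guaranteed estimate of at least 5, whereas ⌊2·7/3⌋ only promises 4; this is the sense in which the analysis here is tighter.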
2.2 Improving the running time
The algorithm of Aingworth et al. [1] runs in Õ(ns² + (n/s + s)m) time. In this section we show how to get rid of the ns² term with some randomization, while keeping the quality of the estimate unchanged. By choosing s = √n, we get an algorithm running in Õ(m√n) time.
The term of ns² in the running time comes from the computation of N_s^out(v) for every v ∈ V. This computation is done to accomplish two tasks. One task is to obtain d_s^out(v) for every v ∈ V and then to use it to find a vertex w such that d_s^out(w) = d_s^out. A second task is to obtain, deterministically, a hitting set S of size Õ(n/s) that hits the set N_s^out(v) of every v ∈ V.
Our main idea is to accomplish these two tasks without explicitly computing N_s^out(v) for every v ∈ V. The major step in our approach is to completely modify the first task above by picking a different type of vertex to play the role of w. Making the second task above fast can be accomplished easily with randomization. We elaborate on this below.
Our algorithm works as follows. First, it computes a hitting set by using randomization, that is, it picks a random sample S of the vertices of size Θ((n/s) log n). This guarantees that with high probability (at least 1 − n^(−c), for some constant c), S ∩ N_s^out(v) ≠ ∅, for every v ∈ V. This accomplishes the second task above in Õ(n) time, with high probability. Similarly to the algorithm of Aingworth et al. [1], our algorithm computes BFS_out(v), for every v ∈ S.
We now explain the main idea of our algorithm, i.e., how to replace the first task above with a much faster step. First, for every v ∈ V our algorithm computes p_S(v), the closest node of S to v, by creating a new graph as follows. It adds an additional vertex r with edges (u, r) for every u ∈ S, and computes BFS^in(r) in this graph. It is easy to see that for every v ∈ V, the last vertex before r on the shortest path from v to r is p_S(v). This step takes O(m) time.
Now, the crucial point of our algorithm is that, as opposed to the algorithm of Aingworth et al., which picks a vertex w such that d^out_s(w) = d^out_s, our algorithm finds a vertex w ∈ V that is furthest away from S, i.e., such that d(w, p_S(w)) ≥ d(u, p_S(u)) for every u ∈ V. The vertex w plays the same role as its counterpart in [1]: our algorithm computes BFS^out(w) and obtains N^out_s(w) from it. Finally, it computes BFS^in(u) for every u ∈ N^out_s(w). As an estimate, the algorithm returns the depth of the deepest BFS tree that it has computed.
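Putting the pieces together, the whole estimator described above can be sketched as follows. This is an illustrative transcription under simplifying assumptions: graphs are given as dict-of-lists adjacency maps `adj` (out-edges) and `radj` (in-edges), and the sampling constant is our own choice.

```python
from collections import deque
import math
import random

def bfs(adj, src):
    """Plain BFS; returns the distance dict from src."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for x in adj.get(u, ()):
            if x not in dist:
                dist[x] = dist[u] + 1
                q.append(x)
    return dist

def approx_diameter(adj, radj, s=None, seed=None):
    """Diameter estimate D_hat with 2h + z <= D_hat <= D whp (z in {0,1}).

    Assumes a strongly connected directed graph.  Every value taken into
    the maximum is a true distance, so the estimate never exceeds D."""
    nodes = list(adj)
    n = len(nodes)
    s = s or max(1, int(math.isqrt(n)))
    rng = random.Random(seed)
    # Random sample S of Theta((n/s) log n) vertices (hitting set whp).
    k = min(n, 3 * math.ceil((n / s) * math.log(max(n, 2))))
    S = rng.sample(nodes, k)

    est = 0
    # Task 2: BFS^out(v) for every v in S.
    for v in S:
        est = max(est, max(bfs(adj, v).values()))
    # p_S(v) for all v via one multi-source BFS in the reversed graph.
    dS = {u: 0 for u in S}
    q = deque(S)
    while q:
        u = q.popleft()
        for x in radj.get(u, ()):
            if x not in dS:
                dS[x] = dS[u] + 1
                q.append(x)
    # w: the vertex furthest away from S.
    w = max(nodes, key=lambda v: dS.get(v, -1))
    dw = bfs(adj, w)
    est = max(est, max(dw.values()))
    # N^out_s(w): the s vertices closest to w, then BFS^in from each.
    for u in sorted(dw, key=dw.get)[:s]:
        est = max(est, max(bfs(radj, u).values()))
    return est
```

On an undirected graph, `radj` is simply `adj` again.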

In the next lemma we analyze the running time of the algorithm.

Lemma 3. The algorithm runs in Õ((n/s + s)m) time.
Proof. A hitting set S is formed in Õ(n) time. With a single BFS computation, in O(m) time, we find p_S(v) for every v ∈ V, and hence also find w. The cost of computing a BFS tree for every v ∈ S ∪ N^out_s(w) is Õ((n/s + s)m). □
Next, we show that, whp, the estimate produced by our algorithm is of the same quality as the estimate produced by the Aingworth et al. algorithm.
Lemma 4. Let G = (V, E) be a directed (or undirected) graph with diameter D = 3h + z, where h ≥ 0 and z ∈ {0, 1, 2}. Let D̂ be the estimate returned by the above algorithm. With high probability, 2h + z ≤ D̂ ≤ D whenever z ∈ {0, 1}, and 2h + 1 ≤ D̂ ≤ D whenever z = 2.
Proof. Let a, b ∈ V be such that d(a, b) = D. Let w be a vertex that satisfies d(w, p_S(w)) ≥ d(u, p_S(u)) for all u ∈ V. If d(w, p_S(w)) ≤ h then also d(a, p_S(a)) ≤ h. As the algorithm computes BFS^out(v) for every v ∈ S, it follows that BFS^out(p_S(a)) is computed as well and its depth is at least 2h + z, as required. Hence, assume that d(w, p_S(w)) > h. We can also assume that d^out(w) < 2h + z, since the algorithm computes BFS^out(w), and if d^out(w) ≥ 2h + z then it computes a BFS tree of depth at least 2h + z.
Since d^out(w) < 2h + z, it follows that d(w, b) < 2h + z. Moreover, since d(w, p_S(w)) > h and S hits N^out_s(w) whp, we must have that N^out_s(w) contains a node at distance > h from w, and hence BFS^out(w, h) ⊆ N^out_s(w). This implies that there is a vertex w′ ∈ N^out_s(w) on the path from w to b such that d(w, w′) = h, and hence d(w′, b) < h + z. Since d(a, b) = 3h + z, we also have that d(a, w′) ≥ 2h + 1. The algorithm computes BFS^in(u) for every u ∈ N^out_s(w), and in particular it computes BFS^in(w′), thus returning an estimate of at least d(a, w′) ≥ 2h + 1. Hence for z ∈ {0, 1} the final estimate is always ≥ 2h + z, and for z = 2 the estimate could be 2h + 1 but no less. □
We now turn to prove Theorem 1 from the introduction.

Reminder of Theorem 1. Let G = (V, E) be a directed or an undirected graph with diameter D = 3h + z, where h ≥ 0 and z ∈ {0, 1, 2}. In Õ(m√n) expected time one can compute an estimate D̂ of D such that 2h + z ≤ D̂ ≤ D for z ∈ {0, 1} and 2h + 1 ≤ D̂ ≤ D for z = 2.
Proof. From Lemma 3 we have that if we set s = √n, the algorithm runs in Õ(m√n) worst-case time. From Lemma 4 we have that, whp, the algorithm returns an estimate of the desired quality. We now show how to convert the algorithm into a Las Vegas one, so that it always returns an estimate of the desired quality but the running time is Õ(m√n) in expectation.
Randomization is used only in order to obtain a set that hits N^out_s(v) for every v ∈ V. The only place where the hitting set affects the quality of the approximation is in Lemma 4, where we used the fact that, whp, S contains a node of N^out_s(w), so that if d(w, S) > h then N^out_s(w) contains a node at distance > h from w.
Algorithm 1: Approx-Ecc(G)
  Let S be a random sample of Θ((n/s) log n) nodes.
  Let w be such that d(w, p_S(w)) ≥ d(u, p_S(u)) for all u ∈ V.
  foreach x ∈ N_s(w) ∪ S do BFS(x).
  foreach v ∈ V do
    if d(v, v_t) ≤ d(v_t, w) then
      ê(v) = max{ max_{q∈S} d(v, q), d(v, w), ecc(v_t) }
    else
      ê(v) = max{ max_{q∈S} d(v, q), d(v, w), min_{q∈S} ecc(q) }
Note that the algorithm computes N^out_s(w), and we can check whether S intersects it in Õ(s) time. If it does not, we can rerun the algorithm until we have verified that S ∩ N^out_s(w) ≠ ∅. In each run, S ∩ N^out_s(w) = ∅ holds with very small probability: S is large enough so that, whp, it intersects the s-neighborhoods of all n vertices of the graph. Thus, the expected running time of the algorithm is Õ(m√n) and its estimate is guaranteed to have the required quality. □
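The Las Vegas conversion in the proof is a generic rerun-until-verified loop. The sketch below illustrates the pattern; `run_once` and `check` are hypothetical stand-ins for one Monte Carlo run of the algorithm and the Õ(s)-time test that S intersects N^out_s(w).

```python
def las_vegas(run_once, check, max_tries=64):
    """Repeat a Monte Carlo run until its certificate verifies.

    run_once() returns (estimate, S, Nsw); check(S, Nsw) verifies that
    the sample hit the s-neighborhood of w.  Since each run fails with
    only polynomially small probability, the expected number of
    iterations is O(1), preserving the expected running time."""
    for _ in range(max_tries):
        est, S, Nsw = run_once()
        if check(S, Nsw):
            return est
    raise RuntimeError("exceeded retry budget")
```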
Just as in [1], our algorithm works for graphs with nonnegative weights as well, by replacing every use of BFS with Dijkstra's algorithm. The proofs are analogous, the running time is increased by at most a log n factor, and the quality of the approximation only suffers an additive W term, where W is the maximum edge weight in the graph. (The same approximation quality is achieved by Aingworth et al., but with an Õ(m√n + n²) running time.) We obtain:
Theorem 8. Let G = (V, E) be a directed or an undirected graph with nonnegative edge weights at most W and diameter D. In Õ(m√n) expected time one can compute an estimate D̂ of D such that 2D/3 − W < D̂ ≤ D.
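The BFS-to-Dijkstra substitution described above only needs a standard nonnegative-weight shortest-path routine; a minimal binary-heap version (accounting for the extra log n factor) might look like this, with an adjacency format of our own choosing:

```python
import heapq

def dijkstra(adj, src):
    """Dijkstra's algorithm; adj[u] lists (neighbor, weight >= 0) pairs.

    Drop-in replacement for BFS when edges carry nonnegative weights;
    returns the dict of shortest-path distances from src."""
    dist = {src: 0}
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue  # stale heap entry
        for x, wgt in adj.get(u, ()):
            nd = d + wgt
            if nd < dist.get(x, float("inf")):
                dist[x] = nd
                heapq.heappush(pq, (nd, x))
    return dist
```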
3. ECCENTRICITIES

In this section we show that our method can be generalized to compute, for every vertex v in an undirected unweighted graph, a good approximation ê(v) of its eccentricity ecc(v). We prove Theorem 2.

Reminder of Theorem 2. Let G = (V, E) be an undirected graph with diameter D and radius r. In Õ(m√n) expected time one can compute, for every node v ∈ V, an estimate ê(v) of its eccentricity ecc(v) such that:

max{r, (2/3)ecc(v)} ≤ ê(v) ≤ min{D, (3/2)ecc(v)}.
We note that our eccentricities algorithm can also be made to work for undirected graphs with nonnegative weights at most W, by again using Dijkstra's algorithm in place of BFS. Then the running time is still Õ(m√n) and the approximation quality becomes (2/3)ecc(v) − 2W < ê(v) < (3/2)ecc(v) + W.
One can immediately obtain our 3/2-approximation of the radius in unweighted undirected graphs, stated in Theorem 3, as a corollary to Theorem 2 by taking r̂ = min_v ê(v). For this choice, r̂ ≥ r, and r̂ ≤ min_v (3/2)ecc(v) = (3/2)r.
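The corollary is literally a one-line reduction: given any table of eccentricity estimates satisfying Theorem 2's guarantee, the minimum entry is a 3/2-approximate radius. A sketch (the function name and input format are ours):

```python
def radius_estimate(ecc_hat):
    """Given per-node estimates with max(r, 2/3*ecc(v)) <= e_hat(v)
    <= min(D, 3/2*ecc(v)), the minimum over v satisfies
    r <= r_hat <= 3/2 * r, i.e. a 3/2-approximation of the radius."""
    return min(ecc_hat.values())
```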
