
Showing papers on "Connectivity published in 2015"


Journal ArticleDOI
TL;DR: Exponential convergence of the proposed algorithm is established under strongly connected and weight-balanced digraph topologies when the local costs are strongly convex with globally Lipschitz gradients, and an upper bound on the stepsize is provided that guarantees exponential convergence over connected graphs for implementations with periodic communication.
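
Algorithms in this line of work combine a local gradient term with consensus feedback over the digraph. The sketch below is a minimal Euler-discretized consensus-plus-gradient illustration on a weight-balanced directed cycle with made-up quadratic local costs; it is not the paper's algorithm, and the stepsize and weights are assumptions chosen for the toy example.

```python
import numpy as np

# Illustrative quadratic local costs f_i(x) = 0.5 * (x - b_i)^2;
# the global cost sum_i f_i(x) is minimized at mean(b) = 3.0.
b = np.array([1.0, 2.0, 4.0, 5.0])

# Weight-balanced, strongly connected digraph: a directed 4-cycle
# (every node has in-degree = out-degree = 1).
A = np.array([[0, 1, 0, 0],
              [0, 0, 1, 0],
              [0, 0, 0, 1],
              [1, 0, 0, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A      # out-degree Laplacian

x = np.zeros(4)                     # agents' local estimates
z = np.zeros(4)                     # integral states (their sum stays zero)
h = 0.01                            # Euler stepsize, assumed small enough
for _ in range(5000):
    grad = x - b                    # local gradients, computed locally
    # Tuple assignment evaluates both right-hand sides with the old x, z.
    x, z = x + h * (-grad - L @ x - z), z + h * (L @ x)

print(x)                            # approaches [3, 3, 3, 3], the global minimizer
```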

543 citations


Proceedings ArticleDOI
21 Jul 2015
TL;DR: This paper presents several new bounds on the time and message complexities of randomized Monte Carlo algorithms for Graph Connectivity and Minimum Spanning Tree in the Congested Clique, and shows that Ω(n²) messages are needed by any algorithm that solves GC, regardless of the number of rounds used.
Abstract: We study two fundamental graph problems, Graph Connectivity (GC) and Minimum Spanning Tree (MST), in the well-studied Congested Clique model, and present several new bounds on the time and message complexities of randomized algorithms for these problems. No non-trivial (i.e., super-constant) time lower bounds are known for either of the aforementioned problems; in particular, an important open question is whether or not constant-round algorithms exist for these problems. We make progress toward answering this question by presenting randomized Monte Carlo algorithms for both problems that run in O(log log log n) rounds (where n is the size of the clique). Our results improve by an exponential factor on the long-standing (deterministic) time bound of O(log log n) rounds for these problems due to Lotker et al. (SICOMP 2005). Our algorithms make use of several algorithmic tools including graph sketching, random sampling, and fast sorting. The second contribution of this paper is to present several almost-tight bounds on the message complexity of these problems. Specifically, we show that Ω(n²) messages are needed by any algorithm (including randomized Monte Carlo algorithms, and regardless of the number of rounds) that solves the GC (and hence also the MST) problem if each machine in the Congested Clique has initial knowledge only of itself (the so-called KT0 model). In contrast, if the machines have initial knowledge of their neighbors' IDs (the so-called KT1 model), we present a randomized Monte Carlo algorithm for MST that uses O(n polylog n) messages and runs in O(polylog n) rounds. To complement this, we also present a lower bound in the KT1 model that shows that Ω(n) messages are required by any algorithm that solves GC, regardless of the number of rounds used. Our results are a step toward understanding the power of randomization in the Congested Clique with respect to both time and message complexity.
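
At the heart of such connectivity algorithms is repeated merging of components; the sketching and sampling tools let the Congested Clique perform many merges per round. Below is a minimal sequential union-find sketch of that merging primitive, purely illustrative and not the O(log log log n)-round algorithm itself.

```python
class DSU:
    """Union-find with path halving: the sequential analogue of the
    component-merging primitive that Congested Clique GC/MST algorithms
    parallelize via graph sketching and random sampling."""
    def __init__(self, n):
        self.parent = list(range(n))

    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra != rb:
            self.parent[ra] = rb
            return True          # a merge happened
        return False

edges = [(0, 1), (1, 2), (3, 4)]
dsu = DSU(5)
for u, v in edges:
    dsu.union(u, v)
components = len({dsu.find(v) for v in range(5)})
print(components)  # 2: {0,1,2} and {3,4}
```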

132 citations


Journal ArticleDOI
TL;DR: This Letter shows the intuition that graph connectivity is an indicator of fast quantum search, a belief supported by fast quantum search on complete graphs, strongly regular graphs, and hypercubes, to be false by giving two examples of graphs for which the opposite holds true: one with low connectivity but fast search, and one with high connectivity but slow search.
Abstract: A randomly walking quantum particle evolving by Schrödinger's equation searches on d-dimensional cubic lattices in O(√N) time when d≥5, and with progressively slower runtime as d decreases. This suggests that graph connectivity (including vertex, edge, algebraic, and normalized algebraic connectivities) is an indicator of fast quantum search, a belief supported by fast quantum search on complete graphs, strongly regular graphs, and hypercubes, all of which are highly connected. In this Letter, we show this intuition to be false by giving two examples of graphs for which the opposite holds true: one with low connectivity but fast search, and one with high connectivity but slow search. The second example is a novel two-stage quantum walk algorithm in which the walking rate must be adjusted to yield high search probability.
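
The setting here is the standard continuous-time quantum walk search with Hamiltonian H = -γL - |w⟩⟨w|. The small simulation below sketches search on the complete graph, where the textbook jumping rate γ = 1/N and runtime t ≈ (π/2)√N apply; both values are standard for K_N and are assumptions of this sketch, not results of the Letter.

```python
import numpy as np
from scipy.linalg import expm

N = 64
A = np.ones((N, N)) - np.eye(N)       # adjacency of the complete graph K_N
L = np.diag(A.sum(axis=1)) - A        # graph Laplacian
w = np.zeros(N); w[0] = 1.0           # marked vertex |w>
gamma = 1.0 / N                       # critical jumping rate for K_N (assumed)
H = -gamma * L - np.outer(w, w)       # search Hamiltonian H = -gamma*L - |w><w|

s = np.ones(N) / np.sqrt(N)           # uniform superposition start state |s>
t = (np.pi / 2) * np.sqrt(N)          # O(sqrt(N)) runtime
psi = expm(-1j * H * t) @ s           # Schroedinger evolution
print(abs(psi[0]) ** 2)               # success probability, close to 1
```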

81 citations


Journal ArticleDOI
01 Oct 2015-Networks
TL;DR: This study considers a class of critical node detection problems that involves minimization of a distance‐based connectivity measure of a given unweighted graph via the removal of a subset of nodes subject to a budgetary constraint and develops an effective exact algorithm that iteratively solves a series of simpler IPs to obtain an optimal solution for the original problem.
Abstract: This study considers a class of critical node detection problems that involves minimization of a distance-based connectivity measure of a given unweighted graph via the removal of a subset of nodes (referred to as critical nodes) subject to a budgetary constraint. The distance-based connectivity measure of a graph is assumed to be a function of the actual pairwise distances between nodes in the remaining graph (e.g., graph efficiency, Harary index, characteristic path length, residual closeness) rather than simply whether nodes are connected or not, a typical assumption in the literature. We derive linear integer programming (IP) formulations, along with additional enhancements, aimed at improving the performance of standard solvers. For handling larger instances, we develop an effective exact algorithm that iteratively solves a series of simpler IPs to obtain an optimal solution for the original problem. The edge-weighted generalization is also considered, which results in some interesting implications for distance-based clique relaxations, namely, s-clubs. Finally, we conduct extensive computational experiments with real-world and randomly generated network instances under various settings that reveal interesting insights and demonstrate the advantages and limitations of the proposed approach. In particular, one important conclusion of our work is that vulnerability of real-world networks to targeted attacks can be significantly more pronounced than what can be estimated by centrality-based heuristic methods commonly used in the literature. Networks, Vol. 66(3), 170-195, 2015.
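
For concreteness, a distance-based measure such as graph efficiency can be evaluated before and after a node removal; this re-evaluation is the basic step any critical-node method must perform. A small sketch using networkx, with the karate club graph as stand-in data:

```python
import networkx as nx

G = nx.karate_club_graph()

def efficiency(G):
    """Graph efficiency: average of 1/d(u,v) over ordered pairs, one of the
    distance-based connectivity measures considered in this line of work.
    Unreachable pairs contribute 0 (the 1/infinity convention)."""
    n = G.number_of_nodes()
    total = 0.0
    for u, dists in nx.all_pairs_shortest_path_length(G):
        for v, d in dists.items():
            if u != v:
                total += 1.0 / d
    return total / (n * (n - 1))

base = efficiency(G)
# Removing a candidate "critical" node and re-measuring connectivity is the
# evaluation step inside any critical node detection method.
G2 = G.copy()
G2.remove_node(0)
print(base, efficiency(G2))
```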

80 citations


Journal ArticleDOI
TL;DR: The connectivity controller is modified to use only local information and can be used in conjunction with artificial potential-based formation controllers and the performance is demonstrated in an experiment on a team of wheeled mobile robots.
Abstract: The preservation of connectivity in mobile robot networks is critical to the success of most existing algorithms designed to achieve various goals. The most basic method to preserve connectivity is to have each agent preserve its set of neighbors for all time. More advanced methods preserve a (minimum) spanning tree in the network. Other methods are based on increasing the algebraic graph connectivity, which is given by the second smallest eigenvalue $\lambda_{2}({\cal L})$ of the graph Laplacian ${\cal L} $ that represents the network. These methods typically result in a monotonic increase in connectivity until the network is completely connected. In previous work by the authors, a continuous feedback control method was proposed that allows the connectivity to decrease, that is, edges in the network may be broken. This method requires agents to have knowledge of the entire network. In this paper, we modify the controller to use only local information. The connectivity controller is based on maximization of $\lambda_{2}({\cal L} ) $ and artificial potential functions and can be used in conjunction with artificial potential-based formation controllers. The controllers are extended for implementation on nonholonomic-wheeled mobile robots, and the performance is demonstrated in an experiment on a team of wheeled mobile robots.
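
The quantity being maximized is the Fiedler value $\lambda_{2}({\cal L})$, computable from the Laplacian spectrum. A minimal sketch for a path graph follows; it is illustrative only, since the paper's contribution is estimating and controlling this quantity with local information.

```python
import numpy as np

# Laplacian of the path graph P5.
A = np.zeros((5, 5))
for i in range(4):
    A[i, i + 1] = A[i + 1, i] = 1.0
L = np.diag(A.sum(axis=1)) - A

eigvals = np.sort(np.linalg.eigvalsh(L))
lambda2 = eigvals[1]            # algebraic connectivity lambda_2(L)
print(lambda2 > 1e-12)          # True exactly when the graph is connected
print(lambda2)                  # about 0.382 for P5
```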

79 citations


Journal ArticleDOI
01 Jan 2015
TL;DR: A path in an edge-colored graph is properly colored if no two consecutive edges receive the same color, as mentioned in this paper, which surveys results on notions of graph connectivity involving properly colored paths.
Abstract: A path in an edge-colored graph is properly colored if no two consecutive edges receive the same color. In this survey, we gather results concerning notions of graph connectivity involving properly colored paths.
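
Whether two vertices are connected by a properly colored path can be decided by searching over (vertex, last-edge-color) states. A small BFS sketch on a hypothetical colored graph:

```python
from collections import deque

def properly_connected(adj, u, v):
    """Return True if some u-v path never repeats a color on consecutive
    edges. adj[x] is a list of (neighbor, color) pairs; the search runs
    over (vertex, color-of-last-edge) states."""
    if u == v:
        return True
    seen = {(nbr, c) for nbr, c in adj[u]}
    queue = deque(seen)
    while queue:
        x, last = queue.popleft()
        if x == v:
            return True
        for y, c in adj[x]:
            if c != last and (y, c) not in seen:
                seen.add((y, c))
                queue.append((y, c))
    return False

# Hypothetical colored graph: edges 0-1 red, 1-2 red, 1-3 blue.
adj = {0: [(1, 'r')], 1: [(0, 'r'), (2, 'r'), (3, 'b')],
       2: [(1, 'r')], 3: [(1, 'b')]}
print(properly_connected(adj, 0, 3))  # True: red then blue
print(properly_connected(adj, 0, 2))  # False: red then red is forbidden
```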

79 citations


Journal ArticleDOI
TL;DR: In this paper, the authors present an extension of the g-extra connectivity of an n-dimensional folded hypercube to the full range of g.

59 citations


Journal ArticleDOI
TL;DR: The proposed SCMAG algorithm uses a cell-based subspace clustering approach to identify cells with dense connectivity in the subspaces, and introduces a new cell-combining strategy for dimensions of categorical attributes together with a novel mechanism to handle multi-valued attributes.

56 citations


Posted Content
TL;DR: This paper considers fully dynamic graph algorithms with both faster worst case update time and sublinear space, and shows that 2-edge connectivity can be maintained using O(n log^2 n) words with an amortized update time of O(log^6 n).
Abstract: This paper considers fully dynamic graph algorithms with both faster worst case update time and sublinear space. The fully dynamic graph connectivity problem is the following: given a graph on a fixed set of n nodes, process an online sequence of edge insertions, edge deletions, and queries of the form "Is there a path between nodes a and b?" In 2013, the first data structure was presented with worst case time per operation which was polylogarithmic in n. In this paper, we shave off a factor of log n from that time, to O(log^4 n) per update. For sequences which are polynomial in length, our algorithm answers queries in O(log n / log log n) time correctly with high probability and using O(n log^2 n) words (of size log n). This matches the amount of space used by the most space-efficient graph connectivity streaming algorithm. We also show that 2-edge connectivity can be maintained using O(n log^2 n) words with an amortized update time of O(log^6 n).

54 citations


Journal ArticleDOI
TL;DR: This paper formulates the problem of detecting deep communities as multi-stage node removal that maximizes a new centrality measure, called the local Fiedler vector centrality (LFVC), at each stage and proves that a greedy node/edge removal strategy, based on successive maximization of LFVC, has bounded performance loss.
Abstract: A deep community in a graph is a connected component that can only be seen after removal of nodes or edges from the rest of the graph. This paper formulates the problem of detecting deep communities as multi-stage node removal that maximizes a new centrality measure, called the local Fiedler vector centrality (LFVC), at each stage. The LFVC is associated with the sensitivity of algebraic connectivity to node or edge removals. We prove that a greedy node/edge removal strategy, based on successive maximization of LFVC, has bounded performance loss relative to the optimal, but intractable, combinatorial batch removal strategy. Under a stochastic block model framework, we show that the greedy LFVC strategy can extract deep communities with probability one as the number of observations becomes large. We apply the greedy LFVC strategy to real-world social network datasets. Compared with conventional community detection methods, we demonstrate improved ability to identify important communities and key members in the network.
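
The LFVC score of a node is derived from the Fiedler vector of the graph Laplacian; one common way to write such a sensitivity score is the sum of squared Fiedler-vector differences over incident edges. The sketch below uses that assumed form (the paper's exact normalization may differ) on stand-in data:

```python
import numpy as np
import networkx as nx

G = nx.karate_club_graph()
L = nx.laplacian_matrix(G).toarray().astype(float)

vals, vecs = np.linalg.eigh(L)
fiedler = vecs[:, 1]            # eigenvector for lambda_2 (Fiedler vector)

# LFVC-style score: sensitivity of algebraic connectivity to removing node i,
# written here (as an assumed form) as the sum over incident edges of
# squared Fiedler-vector differences.
lfvc = {i: sum((fiedler[i] - fiedler[j]) ** 2 for j in G.neighbors(i))
        for i in G.nodes()}
top = sorted(lfvc, key=lfvc.get, reverse=True)[:3]
print(top)                      # candidate nodes for greedy removal
```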

54 citations


Journal ArticleDOI
TL;DR: A fully distributed, linear, and iterative algorithm based on the complex-valued Laplacian associated with the sensor network is proposed, which converges globally and gives the correct localization result.
Abstract: This paper studies the 2D localization problem of a sensor network given anchor node positions in a common global coordinate frame and relative position measurements in local coordinate frames between node pairs. It is assumed that the local coordinate frames of different sensors have different orientations and that the orientation differences with respect to the global coordinate frame are not known. In terms of graph connectivity, a necessary and sufficient condition is obtained for self-localizability that leads to a fully distributed localization algorithm. Moreover, a distributed verification algorithm is developed to check the graph connectivity condition, which can terminate successfully when the sensor network is self-localizable. Finally, a fully distributed, linear, and iterative algorithm based on the complex-valued Laplacian associated with the sensor network is proposed, which converges globally and gives the correct localization result.
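
If, for illustration, all measurements are already expressed in the global frame (sidestepping the unknown orientations that the complex-valued Laplacian handles), localization reduces to a linear least-squares problem. A minimal sketch with two anchors and two free nodes, with all coordinates made up:

```python
import numpy as np

# Anchors 0,1 at known positions; nodes 2,3 unknown. Each measurement is
# z_ij = p_j - p_i, assumed here to be given in the global frame.
anchors = {0: np.array([0.0, 0.0]), 1: np.array([4.0, 0.0])}
meas = [(0, 2, np.array([1.0, 2.0])),
        (1, 2, np.array([-3.0, 2.0])),
        (1, 3, np.array([-1.0, 3.0])),
        (2, 3, np.array([2.0, 1.0]))]

free = {2: 0, 3: 1}                     # unknown node -> column index
A = np.zeros((len(meas), len(free)))
B = np.zeros((len(meas), 2))
for k, (i, j, z) in enumerate(meas):
    B[k] = z
    if j in free: A[k, free[j]] += 1.0  # +p_j on the left-hand side
    else:         B[k] -= anchors[j]    # known anchor moves to the right
    if i in free: A[k, free[i]] -= 1.0  # -p_i on the left-hand side
    else:         B[k] += anchors[i]

P, *_ = np.linalg.lstsq(A, B, rcond=None)
print(P)   # rows: estimated positions of nodes 2 and 3 -> (1,2) and (3,3)
```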

Book ChapterDOI
07 Oct 2015
TL;DR: This paper shows that if the initial imbalance in degree between the two opinions satisfies $|d(A)-d(B)|/2m \ge 2\lambda^2$, then with high probability voting completes in $O(\log n)$ steps, and the opinion with the larger initial degree wins.
Abstract: Distributed voting is a fundamental topic in distributed computing. In the standard model of pull voting, at each step every vertex chooses a neighbour uniformly at random and adopts its opinion. The voting is completed when all vertices hold the same opinion. In the simplest case, each vertex initially holds one of two different opinions. This partitions the vertices into arbitrary sets A and B. For many graphs, including regular graphs and irrespective of their expansion properties, if both A and B are sufficiently large sets, then pull voting requires $\Omega(n)$ expected steps, where n is the number of vertices of the graph. In this paper we consider a related class of voting processes based on sampling two opinions. In the simplest case, every vertex v chooses two random neighbours at each step. If both these neighbours have the same opinion, then v adopts this opinion. Otherwise, v keeps its own opinion. Let G be a connected graph with n vertices and m edges. Let P be the transition matrix of a simple random walk on G with second largest eigenvalue $\lambda < 1/\sqrt{2}$. We show that if the initial imbalance in degree between the two opinions satisfies $|d(A)-d(B)|/2m \ge 2\lambda^2$, then with high probability voting completes in $O(\log n)$ steps, and the opinion with the larger initial degree wins. When the condition on $\lambda$ does not hold, or only a bound on the conductance of the graph is known, the sampling process can be modified so that voting still provably completes in $O(\log n)$ steps with high probability. The modification uses two-sample voting based on probing to a fixed depth $O(1/\epsilon)$ from any vertex. In its most general form our voting process allows vertices to bias their sampling of opinions among their neighbours to achieve a desired outcome. This is done by allocating weights to edges.
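
A toy simulation of the two-sample process is easy to write: every vertex draws two random neighbours per step and adopts their opinion only if the samples agree. The graph, initial split, and seed below are illustrative assumptions:

```python
import random
import networkx as nx

def two_sample_voting(G, opinion, rng, max_steps=1000):
    """Each step, every vertex samples two random neighbours (with
    replacement) and adopts their opinion only if the two samples agree."""
    for step in range(1, max_steps + 1):
        new = {}
        for v in G:
            nbrs = list(G[v])
            a, b = rng.choice(nbrs), rng.choice(nbrs)
            new[v] = opinion[a] if opinion[a] == opinion[b] else opinion[v]
        opinion = new
        if len(set(opinion.values())) == 1:
            return step, opinion[v]      # steps used, winning opinion
    return max_steps, None               # did not converge in time

rng = random.Random(1)
G = nx.random_regular_graph(8, 200, seed=1)          # a decent expander
opinion = {v: ('A' if v < 120 else 'B') for v in G}  # initial imbalance
print(two_sample_voting(G, opinion, rng))
```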

Journal ArticleDOI
TL;DR: This paper considers the problem of adding a small set of nonexisting edges (shortcuts) in a social graph with the main objective of minimizing its characteristic path length, which determines the average distance between pairs of vertices and essentially controls how broadly information can propagate through a network.
Abstract: Small changes on the structure of a graph can have a dramatic effect on its connectivity. While in the traditional graph theory, the focus is on well-defined properties of graph connectivity, such as biconnectivity, in the context of a social graph, connectivity is typically manifested by its ability to carry on social processes. In this paper, we consider the problem of adding a small set of nonexisting edges (shortcuts) in a social graph with the main objective of minimizing its characteristic path length. This property determines the average distance between pairs of vertices and essentially controls how broadly information can propagate through a network. We formally define the problem of interest, characterize its hardness and propose a novel method, path screening, which quickly identifies important shortcuts to guide the augmentation of the graph. We devise a sampling-based variant of our method that can scale up the computation in larger graphs. The claims of our methods are formally validated. Through experiments on real and synthetic data, we demonstrate that our methods are many times faster than standard approaches, that their accuracy outperforms sensible baselines, and that they can ease the spread of information in a network over a wide range of conditions.
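
The effect of a shortcut can be measured directly as the drop in characteristic path length. A brute-force single-step sketch follows; the paper's path-screening method exists precisely to avoid this exhaustive scan:

```python
import itertools
import networkx as nx

def best_shortcut(G, candidates):
    """One greedy step of shortcut selection: among candidate non-edges,
    return the one whose addition minimizes the characteristic path length
    (average shortest-path distance)."""
    best, best_len = None, nx.average_shortest_path_length(G)
    for u, v in candidates:
        H = G.copy()
        H.add_edge(u, v)
        length = nx.average_shortest_path_length(H)
        if length < best_len:
            best, best_len = (u, v), length
    return best, best_len

G = nx.path_graph(10)
non_edges = [e for e in itertools.combinations(G, 2) if not G.has_edge(*e)]
# Prints the best single chord and the resulting characteristic path length.
print(best_shortcut(G, non_edges))
```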

Journal ArticleDOI
TL;DR: A hierarchical estimation procedure that implements power iteration in a decentralized manner, exploiting an algorithm for balancing strongly connected directed graphs and guaranteeing preservation of the strong connectivity property is introduced.
Abstract: In order to accomplish cooperative tasks, decentralized systems are required to communicate among each other. Thus, maintaining the connectivity of the communication graph is a fundamental issue. Connectivity maintenance has been extensively studied in the last few years, but generally considering undirected communication graphs. In this paper, we introduce a decentralized control and estimation strategy to maintain the strong connectivity property of directed communication graphs. In particular, we introduce a hierarchical estimation procedure that implements power iteration in a decentralized manner, exploiting an algorithm for balancing strongly connected directed graphs. The output of the estimation system is then utilized for guaranteeing preservation of the strong connectivity property. The control strategy is validated by means of analytical proofs and simulation results.
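
Power iteration itself is the simple centralized primitive sketched below; the paper's contribution is carrying it out in a decentralized way over a balanced, strongly connected digraph. The matrix here is an arbitrary symmetric example, not the paper's estimator:

```python
import numpy as np

def power_iteration(M, iters=200):
    """Centralized power iteration: repeatedly apply M and renormalize to
    estimate the dominant eigenpair."""
    v = np.ones(M.shape[0]) / np.sqrt(M.shape[0])
    for _ in range(iters):
        v = M @ v
        v /= np.linalg.norm(v)
    return v, v @ (M @ v)        # eigenvector estimate, Rayleigh quotient

M = np.array([[2.0, 1.0, 0.0],
              [1.0, 2.0, 1.0],
              [0.0, 1.0, 2.0]])
vec, val = power_iteration(M)
print(val)                        # approx 2 + sqrt(2), the largest eigenvalue
```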

Journal ArticleDOI
TL;DR: Hartsfield and Ringel conjectured that every connected graph other than the single edge K2 has an antimagic labeling; this paper proves the conjecture for regular graphs of odd degree.
Abstract: An antimagic labeling of a graph G with m edges is a bijection from E(G) to {1,2,...,m} such that for all vertices u and v, the sum of labels on edges incident to u differs from that for edges incident to v. Hartsfield and Ringel conjectured that every connected graph other than the single edge K2 has an antimagic labeling. We prove this conjecture for regular graphs of odd degree.

Posted Content
TL;DR: It is proved that every connected graph (not necessarily planar) with $\Delta(G)=3$ other than the Petersen graph satisfies $\chi_l(G^2)\leq 8$ (and this is best possible).
Abstract: The "square" $G^2$ of a graph $G$ is the graph with the same vertex set as $G$ and with two vertices adjacent if their distance in $G$ is at most 2. Thomassen showed that every planar graph $G$ with maximum degree $\Delta(G)=3$ satisfies $\chi(G^2)\leq 7$. Kostochka and Woodall conjectured that for every graph, the list-chromatic number of $G^2$ equals the chromatic number of $G^2$, that is $\chi_l(G^2)=\chi(G^2)$ for all $G$. If true, this conjecture (together with Thomassen's result) implies that every planar graph $G$ with $\Delta(G)=3$ satisfies $\chi_l(G^2)\leq 7$. We prove that every connected graph (not necessarily planar) with $\Delta(G)=3$ other than the Petersen graph satisfies $\chi_l(G^2)\leq 8$ (and this is best possible). In addition, we show that if $G$ is a planar graph with $\Delta(G)=3$ and girth $g(G)\geq 7$, then $\chi_l(G^2)\leq 7$. Dvořák, Škrekovski, and Tancer showed that if $G$ is a planar graph with $\Delta(G) = 3$ and girth $g(G) \geq 10$, then $\chi_l(G^2)\leq 6$. We improve the girth bound to show that if $G$ is a planar graph with $\Delta(G)=3$ and $g(G) \geq 9$, then $\chi_l(G^2) \leq 6$. All of our proofs can be easily translated into linear-time coloring algorithms.
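
The square of a graph and a (non-list) greedy coloring of it are easy to compute with networkx. The sketch below also shows why the Petersen graph is exceptional: it has diameter 2, so its square is the complete graph $K_{10}$ and needs 10 colors.

```python
import networkx as nx

G = nx.petersen_graph()            # the exceptional graph in the theorem
G2 = nx.power(G, 2)                # square: adjacent iff distance <= 2 in G

# Greedy coloring gives an upper bound on chi(G^2); the paper's result is
# about the harder list-chromatic number chi_l(G^2).
coloring = nx.greedy_color(G2, strategy='largest_first')
print(max(coloring.values()) + 1)  # 10: Petersen squared is K_10
```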

Journal ArticleDOI
TL;DR: This article focuses on developing a novel noniterative fat water decomposition algorithm more robust to fat water swaps and related ambiguities.
Abstract: Purpose This article focuses on developing a novel noniterative fat water decomposition algorithm more robust to fat water swaps and related ambiguities. Methods Field map estimation is reformulated as a constrained surface estimation problem to exploit the spatial smoothness of the field, thus minimizing the ambiguities in the recovery. Specifically, the differences in the field map-induced frequency shift between adjacent voxels are constrained to be in a finite range. The discretization of the above problem yields a graph optimization scheme, where each node of the graph is only connected with few other nodes. Thanks to the low graph connectivity, the problem is solved efficiently using a noniterative graph cut algorithm. The global minimum of the constrained optimization problem is guaranteed. The performance of the algorithm is compared with that of state-of-the-art schemes. Quantitative comparisons are also made against reference data. Results The proposed algorithm is observed to yield more robust fat water estimates with fewer fat water swaps and better quantitative results than other state-of-the-art algorithms in a range of challenging applications. Conclusion The proposed algorithm is capable of considerably reducing the swaps in challenging fat water decomposition problems. The experiments demonstrate the benefit of using explicit smoothness constraints in field map estimation and solving the problem using a globally convergent graph-cut optimization algorithm. Magn Reson Med 73:1289–1299, 2015.

Journal ArticleDOI
TL;DR: This work gives a correct, different, and self-contained proof of the ratio 1.5 that is also substantially simpler and shorter than the previous proofs.
Abstract: The Tree Augmentation Problem (TAP) is as follows: given a connected graph G=(V, E₀) and an edge set E on V, find a minimum-size subset of edges F⊆E such that (V, E₀ ∪ F) is 2-edge-connected. In the conference version [Even et al. 2001], a 1.5-approximation algorithm for the problem was sketched. Since a full proof was very complex and long, the journal version was cut into two parts. The first part [Even et al. 2009] only proved ratio 1.8. An attempt to simplify the second part produced an error in Even et al. [2011]. Here we give a correct, different, and self-contained proof of the ratio 1.5 that is also substantially simpler and shorter than the previous proofs.

Book ChapterDOI
01 Jan 2015
TL;DR: A linear time algorithm for deciding the feasibility of problems for distinguishable pebbles (robots) residing on the vertices of an \(n\)-vertex connected graph with \(p \le n\) and an \(O(n^3)\) algorithm for planning complete paths are established.
Abstract: We study the problem of planning paths for \(p\) distinguishable pebbles (robots) residing on the vertices of an \(n\)-vertex connected graph with \(p \le n\). A pebble may move from a vertex to an adjacent one in a time step provided that it does not collide with other pebbles. When \(p = n\), the only collision free moves are synchronous rotations of pebbles on disjoint cycles of the graph. We show that the feasibility of such problems is intrinsically determined by the diameter of a (unique) permutation group induced by the underlying graph. Roughly speaking, the diameter of a group \(\mathbf G\) is the minimum length of the generator product required to reach an arbitrary element of \(\mathbf G\) from the identity element. Through bounding the diameter of this associated permutation group, which assumes a maximum value of \(O(n^2)\), we establish a linear time algorithm for deciding the feasibility of such problems and an \(O(n^3)\) algorithm for planning complete paths.

Journal ArticleDOI
Wayne Pullan1
TL;DR: Extensive computational experiments, using a range of sparse real-world graphs, and a comparison with previous exact results demonstrate the effectiveness of the proposed algorithms.
Abstract: Given a graph, the critical node detection problem can be broadly defined as identifying the minimum subset of nodes such that, if these nodes were removed, some metric of graph connectivity is minimised. In this paper, two variants of the critical node detection problem are addressed. Firstly, the basic critical node detection problem where, given the maximum number of nodes that can be removed, the objective is to minimise the total number of connected nodes in the graph. Secondly, the cardinality constrained critical node detection problem where, given the maximum allowed connected graph component size, the objective is to minimise the number of nodes required to be removed to achieve this. Extensive computational experiments, using a range of sparse real-world graphs, and a comparison with previous exact results demonstrate the effectiveness of the proposed algorithms.

Journal ArticleDOI
TL;DR: It is proved that with probability tending to one as $n$ goes to infinity the rainbow connection of G satisfies $rc(G)=O(\log n)$, which is best possible up to a hidden constant.
Abstract: An edge colored graph $G$ is rainbow edge connected if any two vertices are connected by a path whose edges have distinct colors. The rainbow connection of a connected graph $G$, denoted by $rc(G)$, is the smallest number of colors that are needed in order to make $G$ rainbow connected. In this work we study the rainbow connection of the random $r$-regular graph $G=G(n,r)$ of order $n$, where $r\ge 4$ is a constant. We prove that with probability tending to one as $n$ goes to infinity the rainbow connection of $G$ satisfies $rc(G)=O(\log n)$, which is best possible up to a hidden constant.

Proceedings ArticleDOI
10 Dec 2015
TL;DR: This study proposes an unsupervised, state-of-the-art saliency map generation algorithm based on Quantum Cuts, a recently proposed link between quantum mechanics and spectral graph clustering, and introduces a novel approach that produces several candidate saliency maps.
Abstract: In this study, we propose an unsupervised, state-of-the-art saliency map generation algorithm based on Quantum Cuts, a recently proposed link between quantum mechanics and spectral graph clustering. The proposed algorithm forms a graph among superpixels extracted from an image and optimizes a criterion related to the image boundary, local contrast, and area information. The effects of graph connectivity, superpixel shape irregularity, superpixel size, and the choice of affinity between superpixels are analyzed in detail. In addition, we introduce a novel approach that produces several candidate saliency maps. The resulting saliency maps consistently achieve state-of-the-art performance on a large number of publicly available benchmark datasets in this domain, containing around 18k images in total.

Proceedings ArticleDOI
27 May 2015
TL;DR: In this paper, the authors studied the problem of finding a minimum Wiener connector: a subgraph connecting a given set of query vertices while minimizing the Wiener index, the sum of all pairwise shortest-path distances between its vertices.
Abstract: The Wiener index of a graph is the sum of all pairwise shortest-path distances between its vertices. In this paper we study the novel problem of finding a minimum Wiener connector: given a connected graph G=(V,E) and a set Q ⊆ V of query vertices, find a subgraph of G that connects all query vertices and has minimum Wiener index. We show that MIN WIENER CONNECTOR admits a polynomial-time (albeit impractical) exact algorithm for the special case where the number of query vertices is bounded. We show that in general the problem is NP-hard, and has no PTAS unless P = NP. Our main contribution is a constant-factor approximation algorithm running in time O(|Q||E|). A thorough experimentation on a large variety of real-world graphs confirms that our method returns smaller and denser solutions than other methods, and does so by adding to the query set Q a small number of ``important'' vertices (i.e., vertices with high centrality).
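
A naive connector, the union of shortest paths between query pairs, gives a feasible (but generally suboptimal) baseline against which the Wiener objective can be evaluated. A small sketch on a grid graph, with an arbitrarily chosen query set:

```python
import networkx as nx

G = nx.grid_2d_graph(4, 4)
Q = [(0, 0), (3, 3), (0, 3)]       # query vertices (illustrative)

# Naive connector: induced subgraph on the union of pairwise shortest paths.
# The paper's constant-factor approximation does better than this heuristic.
nodes = set()
for i in range(len(Q)):
    for j in range(i + 1, len(Q)):
        nodes.update(nx.shortest_path(G, Q[i], Q[j]))
H = G.subgraph(nodes)

# Wiener index of the connector: the objective MIN WIENER CONNECTOR minimizes.
print(len(nodes), nx.wiener_index(H))
```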

Journal ArticleDOI
TL;DR: The complete cubic network structure is proposed to extend the existing class of hierarchical cubic networks, and a general connectivity result is established which states that the surviving graph of a complete cubicnetwork, when a linear number of vertices are removed, consists of a large (connected) component and a number of smaller components which altogether contain a limited number of Vertices.
Abstract: We propose the complete cubic network structure to extend the existing class of hierarchical cubic networks, and establish a general connectivity result which states that the surviving graph of a complete cubic network, when a linear number of vertices are removed, consists of a large (connected) component and a number of smaller components which altogether contain a limited number of vertices. As applications, we characterize several fault-tolerance properties for the complete cubic network, including its restricted connectivity, i.e., the size of a minimum vertex cut such that the degree of every vertex in the surviving graph has a guaranteed lower bound; its cyclic vertex-connectivity, i.e., the size of a minimum vertex cut such that at least two components in the surviving graph contain a cycle; its component connectivity, i.e., the size of a minimum vertex cut whose removal leads to a certain number of components in its surviving graph; and its conditional diagnosability, i.e., the maximum number of faulty vertices that can be detected via a self-diagnostic process, in terms of the common Comparison Diagnosis model.

Journal ArticleDOI
TL;DR: In this paper, it was shown that the problem of testing whether a graph has a disconnected cut is polynomially equivalent to testing whether a graph has a 2K2-partition, testing whether a graph allows a vertex-surjective homomorphism to the reflexive 4-cycle, and testing whether a graph has a spanning subgraph that consists of at most two bicliques.

Proceedings ArticleDOI
01 Jan 2015
TL;DR: The first tight Omega(n * log(n))-bit space lower bounds are provided for randomized algorithms which succeed with constant probability in a stream of edge insertions for a number of graph problems.
Abstract: Despite the large amount of work on solving graph problems in the data stream model, there do not exist tight space bounds for almost any of them, even in a stream with only edge insertions. For example, for testing connectivity, the upper bound is O(n * log(n)) bits, while the lower bound is only Omega(n) bits. We remedy this situation by providing the first tight Omega(n * log(n)) space lower bounds for randomized algorithms which succeed with constant probability in a stream of edge insertions for a number of graph problems. Our lower bounds apply to testing bipartiteness, connectivity, cycle-freeness, whether a graph is Eulerian, planarity, H-minor freeness, finding a minimum spanning tree of a connected graph, and testing if the diameter of a sparse graph is constant. We also give the first Omega(n * k * log(n)) space lower bounds for deterministic algorithms for k-edge connectivity and k-vertex connectivity; these are optimal in light of known deterministic upper bounds (for k-vertex connectivity we also need to allow edge duplications, which known upper bounds allow). Finally, we give an Omega(n * log^2(n)) lower bound for randomized algorithms approximating the minimum cut up to a constant factor with constant probability in a graph with integer weights between 1 and n, presented as a stream of insertions and deletions to its edges. This lower bound also holds for cut sparsifiers, and gives the first separation of maintaining a sparsifier in the data stream model versus the offline model.

Posted Content
TL;DR: In this paper, the authors presented an almost optimal distributed randomized algorithm for graph connectivity in a message-passing model for distributed computing, where the input graph is initially randomly partitioned among the machines, and the goal is to minimize the number of communication rounds.
Abstract: Motivated by the increasing need to understand the algorithmic foundations of distributed large-scale graph computations, we study a number of fundamental graph problems in a message-passing model for distributed computing where $k \geq 2$ machines jointly perform computations on graphs with $n$ nodes (typically, $n \gg k$). The input graph is assumed to be initially randomly partitioned among the $k$ machines, a common implementation in many real-world systems. Communication is point-to-point, and the goal is to minimize the number of communication rounds of the computation. Our main result is an (almost) optimal distributed randomized algorithm for graph connectivity. Our algorithm runs in $\tilde{O}(n/k^2)$ rounds ($\tilde{O}$ notation hides a $\mathrm{poly}\log(n)$ factor and an additive $\mathrm{poly}\log(n)$ term). This improves over the best previously known bound of $\tilde{O}(n/k)$ [Klauck et al., SODA 2015], and is optimal (up to a polylogarithmic factor) in view of an existing lower bound of $\tilde{\Omega}(n/k^2)$. Our improved algorithm uses a bunch of techniques, including linear graph sketching, that prove useful in the design of efficient distributed graph algorithms. Using the connectivity algorithm as a building block, we then present fast randomized algorithms for computing minimum spanning trees, (approximate) min-cuts, and for many graph verification problems. All these algorithms take $\tilde{O}(n/k^2)$ rounds, and are optimal up to polylogarithmic factors. We also show an almost matching lower bound of $\tilde{\Omega}(n/k^2)$ rounds for many graph verification problems by leveraging lower bounds in random-partition communication complexity.

Journal ArticleDOI
TL;DR: In this paper, the authors studied the problem of finding exact values or sharp bounds for the strong metric dimension of strong product graphs and express these in terms of invariants of the factor graphs.
Abstract: Let G be a connected graph. A vertex $w \in V(G)$ strongly resolves two vertices $u, v \in V(G)$ if there exists some shortest $u$-$w$ path containing $v$ or some shortest $v$-$w$ path containing $u$. A set $S$ of vertices is a strong resolving set for $G$ if every pair of vertices of $G$ is strongly resolved by some vertex of $S$. The smallest cardinality of a strong resolving set for $G$ is called the strong metric dimension of $G$. It is well known that the problem of computing this invariant is NP-hard. In this paper we study the problem of finding exact values or sharp bounds for the strong metric dimension of strong product graphs and express these in terms of invariants of the factor graphs.
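
The definitions translate directly into a brute-force computation that is feasible only on tiny graphs, consistent with the NP-hardness noted above. A sketch:

```python
import itertools
import networkx as nx

def strongly_resolves(w, u, v, dist):
    """w strongly resolves u,v if u lies on some shortest w-v path or
    v lies on some shortest w-u path (checked via distance sums)."""
    return (dist[w][u] + dist[u][v] == dist[w][v] or
            dist[w][v] + dist[v][u] == dist[w][u])

def strong_metric_dimension(G):
    # Exhaustive search over vertex subsets of increasing size;
    # exponential time, so only suitable for very small graphs.
    dist = dict(nx.all_pairs_shortest_path_length(G))
    V = list(G)
    for k in range(1, len(V) + 1):
        for S in itertools.combinations(V, k):
            if all(any(strongly_resolves(w, u, v, dist) for w in S)
                   for u, v in itertools.combinations(V, 2)):
                return k
    return len(V)

print(strong_metric_dimension(nx.cycle_graph(5)))  # 3 for the 5-cycle
```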

Journal ArticleDOI
TL;DR: This technical combinatorial theorem can be used to derive an even smaller linear vertex kernel for general graphs, and it is shown that the related maximization problem allows for a polynomial-time factor-14 approximation algorithm.

Journal ArticleDOI
TL;DR: In this article, it was shown that the tree $T_{n-3,1,1}$ gives the second minimum distance signless Laplacian spectral radius among the trees with a fixed number of vertices.