
Showing papers on "Vertex cover published in 2017"


Proceedings Article
04 Dec 2017
TL;DR: This paper proposes a unique combination of reinforcement learning and graph embedding; the learned greedy policy behaves like a meta-algorithm that incrementally constructs a solution, and the action is determined by the output of a graph embedding network capturing the current state of the solution.
Abstract: The design of good heuristics or approximation algorithms for NP-hard combinatorial optimization problems often requires significant specialized knowledge and trial-and-error. Can we automate this challenging, tedious process, and learn the algorithms instead? In many real-world applications, it is typically the case that the same optimization problem is solved again and again on a regular basis, maintaining the same problem structure but differing in the data. This provides an opportunity for learning heuristic algorithms that exploit the structure of such recurring problems. In this paper, we propose a unique combination of reinforcement learning and graph embedding to address this challenge. The learned greedy policy behaves like a meta-algorithm that incrementally constructs a solution, and the action is determined by the output of a graph embedding network capturing the current state of the solution. We show that our framework can be applied to a diverse range of optimization problems over graphs, and learns effective algorithms for the Minimum Vertex Cover, Maximum Cut and Traveling Salesman problems.
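As a purely illustrative sketch of the greedy construction loop described above (the paper's learned graph-embedding network is replaced here by a hypothetical hand-crafted score function), vertices are added one at a time for Minimum Vertex Cover until every edge is covered:

def greedy_vertex_cover(adj, score):
    # adj: dict mapping each vertex to a set of neighbours.
    # score: stand-in for the learned embedding network; given the current
    # partial solution and a candidate vertex, it returns a value and the
    # greedy policy picks the argmax.
    solution = set()
    uncovered = {frozenset((u, v)) for u in adj for v in adj[u]}
    while uncovered:
        candidates = {v for e in uncovered for v in e}
        best = max(candidates, key=lambda v: score(adj, solution, v))
        solution.add(best)
        uncovered = {e for e in uncovered if best not in e}
    return solution

# Hypothetical hand-crafted score: number of still-uncovered edges the vertex
# would cover. The paper learns this quantity from data instead.
def coverage_score(adj, solution, v):
    return sum(1 for u in adj[v] if u not in solution)

edges = [(0, 1), (1, 2), (2, 3), (3, 0), (1, 3)]
adj = {v: set() for e in edges for v in e}
for u, v in edges:
    adj[u].add(v)
    adj[v].add(u)
print(greedy_vertex_cover(adj, coverage_score))   # e.g. {1, 3}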

717 citations


Posted Content
TL;DR: In this paper, a combination of reinforcement learning and graph embedding is proposed to learn heuristics for combinatorial optimization problems over graphs, such as Minimum Vertex Cover, Maximum Cut and Traveling Salesman problems.
Abstract: The design of good heuristics or approximation algorithms for NP-hard combinatorial optimization problems often requires significant specialized knowledge and trial-and-error. Can we automate this challenging, tedious process, and learn the algorithms instead? In many real-world applications, it is typically the case that the same optimization problem is solved again and again on a regular basis, maintaining the same problem structure but differing in the data. This provides an opportunity for learning heuristic algorithms that exploit the structure of such recurring problems. In this paper, we propose a unique combination of reinforcement learning and graph embedding to address this challenge. The learned greedy policy behaves like a meta-algorithm that incrementally constructs a solution, and the action is determined by the output of a graph embedding network capturing the current state of the solution. We show that our framework can be applied to a diverse range of optimization problems over graphs, and learns effective algorithms for the Minimum Vertex Cover, Maximum Cut and Traveling Salesman problems.

455 citations


Proceedings ArticleDOI
01 Jul 2017
TL;DR: This work proposes a random mutation rate α/n, where α is chosen from a power-law distribution, and proves that the (1 + 1) EA with this heavy-tailed mutation rate optimizes any Jump_{m,n} function in a time that is only a small polynomial (in m) factor above the one stemming from the optimal rate for this m.
Abstract: For genetic algorithms (GAs) using a bit-string representation of length n, the general recommendation is to take 1/n as mutation rate. In this work, we discuss whether this is justified for multi-modal functions. Taking jump functions and the (1+1) evolutionary algorithm (EA) as the simplest example, we observe that larger mutation rates give significantly better runtimes. For the Jump_{m,n} function, any mutation rate between 2/n and m/n leads to a speedup at least exponential in m compared to the standard choice. The asymptotically best runtime, obtained from using the mutation rate m/n and leading to a speed-up super-exponential in m, is very sensitive to small changes of the mutation rate. Any deviation by a small (1 ± ε) factor leads to a slow-down exponential in m. Consequently, any fixed mutation rate gives strongly sub-optimal results for most jump functions. Building on this observation, we propose to use a random mutation rate α/n, where α is chosen from a power-law distribution. We prove that the (1 + 1) EA with this heavy-tailed mutation rate optimizes any Jump_{m,n} function in a time that is only a small polynomial (in m) factor above the one stemming from the optimal rate for this m. Our heavy-tailed mutation operator yields similar speed-ups (over the best known performance guarantees) for the vertex cover problem in bipartite graphs and the matching problem in general graphs. Following the example of fast simulated annealing, fast evolution strategies, and fast evolutionary programming, we propose to call genetic algorithms using a heavy-tailed mutation operator fast genetic algorithms.
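A small sketch of the heavy-tailed ("fast") mutation operator described above, under the assumption of a power-law distribution on α truncated to {1, ..., n/2}; the exponent 1.5 below and the Jump formulation are illustrative choices, not the paper's exact settings:

import random

def sample_alpha(n, beta=1.5):
    # Power-law distribution on {1, ..., n//2}: P(alpha = i) proportional to i^(-beta).
    support = range(1, n // 2 + 1)
    weights = [i ** (-beta) for i in support]
    return random.choices(list(support), weights=weights, k=1)[0]

def heavy_tailed_mutation(bits):
    # Standard bit mutation, but with rate alpha/n where alpha is freshly
    # drawn from the power law for every offspring.
    n = len(bits)
    rate = sample_alpha(n) / n
    return [b ^ (random.random() < rate) for b in bits]

def one_plus_one_ea(fitness, n, budget=10_000):
    # (1+1) EA: keep the offspring if it is at least as good as the parent.
    parent = [random.randint(0, 1) for _ in range(n)]
    for _ in range(budget):
        child = heavy_tailed_mutation(parent)
        if fitness(child) >= fitness(parent):
            parent = child
    return parent

def jump(bits, m=3):
    # One common Jump_{m,n} formulation: OneMax plus a gap of width m just
    # below the optimum that the EA has to "jump" over.
    n, ones = len(bits), sum(bits)
    if ones == n or ones <= n - m:
        return m + ones
    return n - ones

best = one_plus_one_ea(lambda b: jump(b, m=3), n=30)
print(sum(best), "ones out of", len(best))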

184 citations


Journal ArticleDOI
TL;DR: This study is motivated by results establishing that for many NP-hard problems, the classical complexity of reconfiguration is PSPACE-complete, and addresses the question for several important graph properties under two natural parameterizations.
Abstract: We present the first results on the parameterized complexity of reconfiguration problems, where a reconfiguration variant of an optimization problem Q takes as input two feasible solutions S and T and determines if there is a sequence of reconfiguration steps, i.e. a reconfiguration sequence, that can be applied to transform S into T such that each step results in a feasible solution to Q. For most of the results in this paper, S and T are sets of vertices of a given graph and a reconfiguration step adds or removes a vertex. Our study is motivated by results establishing that for many NP-hard problems, the classical complexity of reconfiguration is PSPACE-complete. We address the question for several important graph properties under two natural parameterizations: k, a bound on the size of solutions, and ℓ, a bound on the length of reconfiguration sequences. Our first general result is an algorithmic paradigm, the reconfiguration kernel, used to obtain fixed-parameter tractable algorithms for reconfiguration variants of Vertex Cover and, more generally, Bounded Hitting Set and Feedback Vertex Set, all parameterized by k. In contrast, we show that reconfiguring Unbounded Hitting Set is W[2]-hard when parameterized by k+ℓ. We also demonstrate the W[1]-hardness of reconfiguration variants of a large class of maximization problems parameterized by k+ℓ, and of their corresponding deletion problems parameterized by ℓ; in doing so, we show that there exist problems in FPT when parameterized by k, but whose reconfiguration variants are W[1]-hard when parameterized by k+ℓ.
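For concreteness, a tiny sketch (not from the paper) of the vertex addition/removal model described above for Vertex Cover reconfiguration: each step adds or removes one vertex, and every intermediate set must remain a vertex cover of size at most k.

def is_vertex_cover(cover, edges):
    return all(u in cover or v in cover for u, v in edges)

def is_valid_reconfiguration(sequence, edges, k):
    # sequence: list of vertex sets; consecutive sets differ by adding or
    # removing exactly one vertex, and every set is a cover of size <= k.
    for s in sequence:
        if len(s) > k or not is_vertex_cover(s, edges):
            return False
    return all(len(a ^ b) == 1 for a, b in zip(sequence, sequence[1:]))

# Path on 4 vertices: 0-1-2-3. Transform cover {1, 3} into cover {0, 2}
# while never exceeding k = 3 vertices.
edges = [(0, 1), (1, 2), (2, 3)]
steps = [{1, 3}, {1, 2, 3}, {1, 2}, {0, 1, 2}, {0, 2}]
print(is_valid_reconfiguration(steps, edges, k=3))   # True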

86 citations


Proceedings ArticleDOI
19 Jun 2017
TL;DR: In this article, a polynomial size α-approximate kernel is defined, which is a pre-processing algorithm that takes as input an instance (I, k) to a parameterized problem, and outputs another instance (I′, k′) to the same problem, such that |I′| + k′ ≤ k^{O(1)}.
Abstract: In this paper we propose a new framework for analyzing the performance of preprocessing algorithms. Our framework builds on the notion of kernelization from parameterized complexity. However, as opposed to the original notion of kernelization, our definitions combine well with approximation algorithms and heuristics. The key new definition is that of a polynomial size α-approximate kernel. Loosely speaking, a polynomial size α-approximate kernel is a polynomial time pre-processing algorithm that takes as input an instance (I, k) to a parameterized problem, and outputs another instance (I′, k′) to the same problem, such that |I′| + k′ ≤ k^{O(1)}. Additionally, for every c ≥ 1, a c-approximate solution s′ to the pre-processed instance (I′, k′) can be turned in polynomial time into a (c · α)-approximate solution s to the original instance (I, k). Amongst our main technical contributions are α-approximate kernels of polynomial size for three problems, namely Connected Vertex Cover, Disjoint Cycle Packing and Disjoint Factors. These problems are known not to admit any polynomial size kernels unless NP ⊆ coNP/Poly. Our approximate kernels simultaneously beat both the lower bounds on the (normal) kernel size, and the hardness of approximation lower bounds for all three problems. On the negative side we prove that Longest Path parameterized by the length of the path and Set Cover parameterized by the universe size do not admit even an α-approximate kernel of polynomial size, for any α ≥ 1, unless NP ⊆ coNP/Poly. In order to prove this lower bound we need to combine in a non-trivial way the techniques used for showing kernelization lower bounds with the methods for showing hardness of approximation.
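A schematic sketch (a hypothetical interface, not code from the paper) of the two polynomial-time pieces that make up an α-approximate kernel as defined above: a reduction producing an instance of size k^{O(1)}, and a solution-lifting procedure that turns any c-approximate solution of the reduced instance into a (c · α)-approximate solution of the original one.

from typing import Any, Callable

Instance = Any   # an instance (I, k) of the parameterized problem
Solution = Any

class ApproximateKernel:
    # Container for the two polynomial-time procedures in the definition of a
    # polynomial-size alpha-approximate kernel (a sketch of the interface,
    # not an implementation for any particular problem).
    def __init__(self,
                 reduce: Callable[[Instance], Instance],
                 lift: Callable[[Instance, Instance, Solution], Solution],
                 alpha: float):
        self.reduce = reduce   # (I, k) -> (I', k') with |I'| + k' <= k^O(1)
        self.lift = lift       # (I, k), (I', k'), c-approx s' -> (c*alpha)-approx s
        self.alpha = alpha

def solve_with_kernel(kernel: ApproximateKernel,
                      instance: Instance,
                      approx_solver: Callable[[Instance], Solution]) -> Solution:
    # Run any c-approximation on the kernel, then lift the answer; the result
    # is a (c * alpha)-approximate solution to the original instance.
    reduced = kernel.reduce(instance)
    s_reduced = approx_solver(reduced)
    return kernel.lift(instance, reduced, s_reduced)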

80 citations


Posted Content
TL;DR: This paper gives a single unified approach that yields better approximation algorithms for matching and vertex cover across several models of computation for massive graphs: the streaming model, the distributed communication model, and the massively parallel computation (MPC) model that is a common abstraction of MapReduce-style computation.
Abstract: As massive graphs become more prevalent, there is a rapidly growing need for scalable algorithms that solve classical graph problems, such as maximum matching and minimum vertex cover, on large datasets. For massive inputs, several different computational models have been introduced, including the streaming model, the distributed communication model, and the massively parallel computation (MPC) model that is a common abstraction of MapReduce-style computation. In each model, algorithms are analyzed in terms of resources such as space used or rounds of communication needed, in addition to the more traditional approximation ratio. In this paper, we give a single unified approach that yields better approximation algorithms for matching and vertex cover in all these models. The highlights include: * The first one pass, significantly-better-than-2-approximation for matching in random arrival streams that uses subquadratic space, namely a $(1.5+\epsilon)$-approximation streaming algorithm that uses $O(n^{1.5})$ space for constant $\epsilon > 0$. * The first 2-round, better-than-2-approximation for matching in the MPC model that uses subquadratic space per machine, namely a $(1.5+\epsilon)$-approximation algorithm with $O(\sqrt{mn} + n)$ memory per machine for constant $\epsilon > 0$. By building on our unified approach, we further develop parallel algorithms in the MPC model that give a $(1 + \epsilon)$-approximation to matching and an $O(1)$-approximation to vertex cover in only $O(\log\log{n})$ MPC rounds and $O(n/poly\log{(n)})$ memory per machine. These results settle multiple open questions posed in the recent paper of Czumaj et al. [STOC 2018].

71 citations


Proceedings ArticleDOI
19 Jun 2017
TL;DR: A candidate reduction from the 3-Lin problem to the 2-to-2 Games problem is presented and a combinatorial hypothesis about Grassmann graphs is presented which, if correct, is sufficient to show the soundness of the reduction in a certain non-standard sense.
Abstract: We present a candidate reduction from the 3-Lin problem to the 2-to-2 Games problem and present a combinatorial hypothesis about Grassmann graphs which, if correct, is sufficient to show the soundness of the reduction in a certain non-standard sense. A reduction that is sound in this non-standard sense implies that it is NP-hard to distinguish whether an n-vertex graph has an independent set of size (1 - 1/√2)n - o(n) or whether every independent set has size o(n), and consequently, that it is NP-hard to approximate the Vertex Cover problem within a factor √2 - o(1).

70 citations


Posted Content
TL;DR: In this article, a heavy-tailed mutation operator was proposed for genetic algorithms with a bit-string representation of length n. On Jump_{m,n} functions, any mutation rate between 2/n and m/n achieves a speed-up at least exponential in m compared to the standard mutation rate 1/n.
Abstract: For genetic algorithms using a bit-string representation of length $n$, the general recommendation is to take $1/n$ as mutation rate. In this work, we discuss whether this is really justified for multimodal functions. Taking jump functions and the $(1+1)$ evolutionary algorithm as the simplest example, we observe that larger mutation rates give significantly better runtimes. For the $\mathrm{Jump}_{m,n}$ function, any mutation rate between $2/n$ and $m/n$ leads to a speed-up at least exponential in $m$ compared to the standard choice. The asymptotically best runtime, obtained from using the mutation rate $m/n$ and leading to a speed-up super-exponential in $m$, is very sensitive to small changes of the mutation rate. Any deviation by a small $(1 \pm \epsilon)$ factor leads to a slow-down exponential in $m$. Consequently, any fixed mutation rate gives strongly sub-optimal results for most jump functions. Building on this observation, we propose to use a random mutation rate $\alpha/n$, where $\alpha$ is chosen from a power-law distribution. We prove that the $(1+1)$ EA with this heavy-tailed mutation rate optimizes any $\mathrm{Jump}_{m,n}$ function in a time that is only a small polynomial (in $m$) factor above the one stemming from the optimal rate for this $m$. Our heavy-tailed mutation operator yields similar speed-ups (over the best known performance guarantees) for the vertex cover problem in bipartite graphs and the matching problem in general graphs. Following the example of fast simulated annealing, fast evolution strategies, and fast evolutionary programming, we propose to call genetic algorithms using a heavy-tailed mutation operator fast genetic algorithms.

70 citations


Proceedings ArticleDOI
19 Jun 2017
TL;DR: In this paper, the authors give new results for the set cover problem in the fully dynamic model, where the set of "active" elements to be covered changes over time, and the goal is to maintain a near-optimal solution for the currently active elements, while making few changes in each timestep.
Abstract: In this paper, we give new results for the set cover problem in the fully dynamic model. In this model, the set of "active" elements to be covered changes over time. The goal is to maintain a near-optimal solution for the currently active elements, while making few changes in each timestep. This model is popular in both dynamic and online algorithms: in the former, the goal is to minimize the update time of the solution, while in the latter, the recourse (number of changes) is bounded. We present generic techniques for the dynamic set cover problem inspired by the classic greedy and primal-dual offline algorithms for set cover. The former leads to a competitive ratio of O(log n_t), where n_t is the number of currently active elements at timestep t, while the latter yields competitive ratios dependent on f_t, the maximum number of sets that a currently active element belongs to. We demonstrate that these techniques are useful for obtaining tight results in both settings: update time bounds and limited recourse, exhibiting algorithmic techniques common to these two parallel threads of research.
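A toy sketch (an assumption-laden simplification, not the paper's algorithm) of the fully dynamic setting described above: active elements arrive and depart, a cover of the currently active elements is recomputed greedily after each update, and the recourse (number of changed sets) is recorded. Note that this naive rebuild gives none of the update-time or recourse guarantees the paper is about; it only illustrates the model.

class DynamicSetCover:
    # sets: dict name -> iterable of elements that the set can cover.
    def __init__(self, sets):
        self.sets = {name: frozenset(s) for name, s in sets.items()}
        self.active = set()
        self.cover = set()                  # names of currently chosen sets

    def _rebuild_greedy(self):
        # Classic greedy on the active elements; returns the new cover.
        remaining, chosen = set(self.active), set()
        while remaining:
            best = max(self.sets, key=lambda s: len(self.sets[s] & remaining))
            if not self.sets[best] & remaining:
                break                       # some active element is uncoverable
            chosen.add(best)
            remaining -= self.sets[best]
        return chosen

    def update(self, element, insert=True):
        # Insert or delete an active element, then recompute; the recourse is
        # the number of sets that changed in the maintained cover.
        (self.active.add if insert else self.active.discard)(element)
        new_cover = self._rebuild_greedy()
        recourse = len(new_cover ^ self.cover)
        self.cover = new_cover
        return recourse

dsc = DynamicSetCover({"A": {1, 2}, "B": {2, 3}, "C": {3, 4}})
for e in (1, 3, 4):
    print("insert", e, "recourse", dsc.update(e))
print("delete 1, recourse", dsc.update(1, insert=False))
print("final cover:", dsc.cover)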

58 citations


Proceedings ArticleDOI
16 Jan 2017
TL;DR: A polynomial gap is bridged between the worst case and amortised update times for this problem, without using any randomisation, for all sufficiently small constants ϵ.
Abstract: We consider the problem of maintaining an approximately maximum (fractional) matching and an approximately minimum vertex cover in a dynamic graph. Starting with the seminal paper by Onak and Rubinfeld [STOC 2010], this problem has received significant attention in recent years. There remains, however, a polynomial gap between the best known worst case update time and the best known amortised update time for this problem, even after allowing for randomisation. Specifically, Bernstein and Stein [ICALP 2015, SODA 2016] have the best known worst case update time. They present a deterministic data structure with approximation ratio (3/2 + ϵ) and worst case update time O(m^{1/4}/ϵ^2), where m is the number of edges in the graph. In recent past, Gupta and Peng [FOCS 2013] gave a deterministic data structure with approximation ratio (1 + ϵ) and worst case update time O(√m/ϵ^2). No known randomised data structure beats the worst case update times of these two results. In contrast, the paper by Onak and Rubinfeld [STOC 2010] gave a randomised data structure with approximation ratio O(1) and amortised update time O(log^2 n), where n is the number of nodes in the graph. This was later improved by Baswana, Gupta and Sen [FOCS 2011] and Solomon [FOCS 2016], leading to a randomised data structure with approximation ratio 2 and amortised update time O(1). We bridge the polynomial gap between the worst case and amortised update times for this problem, without using any randomisation. We present a deterministic data structure with approximation ratio (2 + ϵ) and worst case update time O(log^3 n), for all sufficiently small constants ϵ.

56 citations


Journal ArticleDOI
TL;DR: This paper introduces carousel greedy, an enhanced greedy algorithm which seeks to overcome the traditional weaknesses of greedy approaches and can be combined with other approaches to create a powerful, new metaheuristic.

Proceedings ArticleDOI
19 Jun 2017
TL;DR: Exact algorithms are given for 2-perturbation-resilient instances of clustering problems with natural center-based objectives and for (2-2/k)-stable instances of Minimum Multiway Cut with k terminals.
Abstract: We study the notion of stability and perturbation resilience introduced by Bilu and Linial (2010) and Awasthi, Blum, and Sheffet (2012). A combinatorial optimization problem is α-stable or α-perturbation-resilient if the optimal solution does not change when we perturb all parameters of the problem by a factor of at most α. In this paper, we give improved algorithms for stable instances of various clustering and combinatorial optimization problems. We also prove several hardness results. We first give an exact algorithm for 2-perturbation resilient instances of clustering problems with natural center-based objectives. The class of clustering problems with natural center-based objectives includes such problems as k-means, k-median, and k-center. Our result improves upon the result of Balcan and Liang (2016), who gave an algorithm for clustering (1+√2) ≈ 2.41 perturbation-resilient instances. Our result is tight in the sense that no polynomial-time algorithm can solve (2-ε)-perturbation resilient instances of k-center unless NP = RP, as was shown by Balcan, Haghtalab, and White (2016). We then give an exact algorithm for (2-2/k)-stable instances of Minimum Multiway Cut with k terminals, improving the previous result of Makarychev, Makarychev, and Vijayaraghavan (2014), who gave an algorithm for 4-stable instances. We also give an algorithm for (2-2/k+δ)-weakly stable instances of Minimum Multiway Cut. Finally, we show that there are no robust polynomial-time algorithms for n^{1-ε}-stable instances of Set Cover, Minimum Vertex Cover, and Min 2-Horn Deletion (unless P = NP).
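A brute-force illustration (purely didactic, exponential time, and not from the paper) of the α-perturbation-resilience definition above, here for k-center: an instance is α-perturbation-resilient if the optimal clustering stays the same under every perturbation that scales each distance by a factor in [1, α]. Since the space of perturbations is continuous, the sketch only samples random perturbations, so it can refute resilience but never certify it.

import itertools, random

def kcenter_cost(dist, centers, points):
    return max(min(dist[p][c] for c in centers) for p in points)

def optimal_clustering(dist, points, k):
    # Brute force over all center sets; return the partition induced by an
    # optimal set of centers (each point goes to its nearest chosen center).
    centers = min(itertools.combinations(points, k),
                  key=lambda cs: kcenter_cost(dist, cs, points))
    return frozenset(
        frozenset(p for p in points
                  if min(centers, key=lambda c: (dist[p][c], c)) == c)
        for c in centers)

def looks_perturbation_resilient(dist, points, k, alpha, trials=200):
    # Sample perturbations d'(u,v) in [d(u,v), alpha * d(u,v)] and check that
    # the optimal clustering never changes.
    base = optimal_clustering(dist, points, k)
    for _ in range(trials):
        pert = {u: {u: 0.0} for u in points}
        for u, v in itertools.combinations(points, 2):
            d = dist[u][v] * random.uniform(1, alpha)
            pert[u][v] = pert[v][u] = d
        if optimal_clustering(pert, points, k) != base:
            return False
    return True

# Tiny 1-D instance: two tight, well-separated clusters, k = 2.
coords = {0: 0.0, 1: 1.0, 2: 10.0, 3: 11.0}
points = list(coords)
dist = {u: {v: abs(coords[u] - coords[v]) for v in points} for u in points}
print(looks_perturbation_resilient(dist, points, k=2, alpha=2))   # expect True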

Proceedings ArticleDOI
09 May 2017
TL;DR: This paper proposes a Reducing-Peeling framework, which iteratively reduces the graph size by applying reduction rules to vertices with very low degrees and temporarily removing the vertex with the highest degree when the reduction rules cannot be applied, together with a linear-time algorithm and a near-linear-time algorithm that generate high-quality independent sets from a graph in practice.
Abstract: This paper studies the problem of efficiently computing a maximum independent set from a large graph, a fundamental problem in graph analysis. Due to the hardness results of computing an exact maximum independent set or an approximate maximum independent set with accuracy guarantee, the existing algorithms resort to heuristic techniques for approximately computing a maximum independent set with good performance in practice but no accuracy guarantee theoretically. Observing that the existing techniques have various limits, in this paper, we aim to develop efficient algorithms (with linear or near-linear time complexity) that can generate a high-quality (large-size) independent set from a graph in practice. In particular, firstly we develop a Reducing-Peeling framework which iteratively reduces the graph size by applying reduction rules on vertices with very low degrees (Reducing) and temporarily removing the vertex with the highest degree (Peeling) if the reduction rules cannot be applied. Secondly, based on our framework we design two baseline algorithms, BDOne and BDTwo, by utilizing the existing reduction rules for handling degree-one and degree-two vertices, respectively. Both algorithms can generate higher-quality (larger-size) independent sets than the existing algorithms. Thirdly, we propose a linear-time algorithm, LinearTime, and a near-linear time algorithm, NearLinear, by designing new reduction rules and developing techniques for efficiently and incrementally applying reduction rules. In practice, LinearTime takes similar time and space to BDOne but computes a higher quality independent set, similar in size to that of an independent set generated by BDTwo. Moreover, in practice NearLinear has a good chance to generate a maximum independent set and it often generates near-maximum independent sets. Fourthly, we extend our techniques to accelerate the existing iterated local search algorithms. Extensive empirical studies show that all our algorithms output much larger independent sets than the existing linear-time algorithms while having a similar running time, as well as achieve significant speedup against the existing iterated local search algorithms.
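A compact sketch of the Reducing-Peeling idea described above, under simplifying assumptions: only the degree-0/degree-1 reduction rule is shown (the paper also uses degree-two rules and much more careful bookkeeping), and no accuracy guarantee is implied. Low-degree vertices are reduced exactly; when no rule applies, the highest-degree vertex is peeled away.

def reducing_peeling_mis(adj):
    # adj: dict vertex -> set of neighbours (a working copy is modified).
    adj = {v: set(ns) for v, ns in adj.items()}
    independent = set()

    def remove(v):
        for u in adj[v]:
            adj[u].discard(v)
        del adj[v]

    while adj:
        # Reducing: a degree-0 or degree-1 vertex can always be taken.
        reducible = next((v for v in adj if len(adj[v]) <= 1), None)
        if reducible is not None:
            independent.add(reducible)
            for u in list(adj[reducible]):   # exclude its (single) neighbour
                remove(u)
            remove(reducible)
        else:
            # Peeling: temporarily discard the highest-degree vertex.
            peel = max(adj, key=lambda v: len(adj[v]))
            remove(peel)
    return independent

edges = [(0, 1), (0, 2), (0, 3), (1, 2), (2, 3), (3, 4)]
adj = {v: set() for e in edges for v in e}
for u, v in edges:
    adj[u].add(v)
    adj[v].add(u)
print(reducing_peeling_mis(adj))   # an independent set, e.g. {1, 4}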

Posted Content
TL;DR: The main insight of this work is that the intractability of matching and vertex cover in the simultaneous communication model is inherently connected to an adversarial partitioning of the underlying graph across machines.
Abstract: A common approach for designing scalable algorithms for massive data sets is to distribute the computation across, say $k$, machines and process the data using limited communication between them. A particularly appealing framework here is the simultaneous communication model whereby each machine constructs a small representative summary of its own data and one obtains an approximate/exact solution from the union of the representative summaries. If the representative summaries needed for a problem are small, then this results in a communication-efficient and round-optimal protocol. While many fundamental graph problems admit efficient solutions in this model, two prominent problems are notably absent from the list of successes, namely, the maximum matching problem and the minimum vertex cover problem. Indeed, it was shown recently that for both these problems, even achieving a polylog$(n)$ approximation requires essentially sending the entire input graph from each machine. The main insight of our work is that the intractability of matching and vertex cover in the simultaneous communication model is inherently connected to an adversarial partitioning of the underlying graph across machines. We show that when the underlying graph is randomly partitioned across machines, both these problems admit randomized composable coresets of size $\widetilde{O}(n)$ that yield an $\widetilde{O}(1)$-approximate solution. This results in an $\widetilde{O}(1)$-approximation simultaneous protocol for these problems with $\widetilde{O}(nk)$ total communication when the input is randomly partitioned across $k$ machines. We further prove the optimality of our results. Finally, by a standard application of composable coresets, our results also imply MapReduce algorithms with the same approximation guarantee in one or two rounds of communication

Posted Content
TL;DR: The first super-linear lower bounds for natural graph problems in the CONGEST model are shown in this article: any exact computation of a minimum vertex cover or a maximum independent set requires Ω(n^2/log^2 n) rounds in the worst case, and a simple Ω(n) lower bound is shown for computing an exact solution to weighted all-pairs-shortest-paths (APSP).
Abstract: We present the first super-linear lower bounds for natural graph problems in the CONGEST model, answering a long-standing open question. Specifically, we show that any exact computation of a minimum vertex cover or a maximum independent set requires $\Omega(n^2/\log^2{n})$ rounds in the worst case in the CONGEST model, as well as any algorithm for $\chi$-coloring a graph, where $\chi$ is the chromatic number of the graph. We further show that such strong lower bounds are not limited to NP-hard problems, by showing two simple graph problems in P which require a quadratic and near-quadratic number of rounds. Finally, we address the problem of computing an exact solution to weighted all-pairs-shortest-paths (APSP), which arguably may be considered as a candidate for having a super-linear lower bound. We show a simple $\Omega(n)$ lower bound for this problem, which implies a separation between the weighted and unweighted cases, since the latter is known to have a complexity of $\Theta(n/\log{n})$. We also formally prove that the standard Alice-Bob framework is incapable of providing a super-linear lower bound for exact weighted APSP, whose complexity remains an intriguing open question.

Journal ArticleDOI
TL;DR: Some interesting structural properties of the dissociation sets and 3-path vertex covers of maximum size and minimum size in graphs are revealed, which allow them to be solved in O*(1.4656^n) time and polynomial space or O*(1.3659^n) time and exponential space.

Proceedings ArticleDOI
24 Jul 2017
TL;DR: In this article, it was shown that the intractability of matching and vertex cover in the simultaneous communication model is inherently connected to an adversarial partitioning of the underlying graph across machines.
Abstract: A common approach for designing scalable algorithms for massive data sets is to distribute the computation across, say k, machines and process the data using limited communication between them. A particularly appealing framework here is the simultaneous communication model whereby each machine constructs a small representative summary of its own data and one obtains an approximate/exact solution from the union of the representative summaries. If the representative summaries needed for a problem are small, then this results in a communication-efficient and round-optimal (requiring essentially no interaction between the machines) protocol. Some well-known examples of techniques for creating summaries include sampling, linear sketching, and composable coresets. These techniques have been successfully used to design communication efficient solutions for many fundamental graph problems. However, two prominent problems are notably absent from the list of successes, namely, the maximum matching problem and the minimum vertex cover problem. Indeed, it was shown recently that for both these problems, even achieving a modest approximation factor of polylog(n) requires using representative summaries of size Ω̃(n^2), i.e. essentially no better summary exists than each machine simply sending its entire input graph. The main insight of our work is that the intractability of matching and vertex cover in the simultaneous communication model is inherently connected to an adversarial partitioning of the underlying graph across machines. We show that when the underlying graph is randomly partitioned across machines, both these problems admit randomized composable coresets of size Õ(n) that yield an Õ(1)-approximate solution (here and throughout, the Õ(·) notation suppresses polylog(n) factors, where n is the number of vertices in the graph). In other words, a small subgraph of the input graph at each machine can be identified as its representative summary and the final answer then is obtained by simply running any maximum matching or minimum vertex cover algorithm on these combined subgraphs. This results in an Õ(1)-approximation simultaneous protocol for these problems with Õ(nk) total communication when the input is randomly partitioned across k machines. We also prove our results are optimal in a very strong sense: we not only rule out existence of smaller randomized composable coresets for these problems but in fact show that our Õ(nk) bound for total communication is optimal for any simultaneous communication protocol (i.e. not only for randomized coresets) for these two problems. Finally, by a standard application of composable coresets, our results also imply MapReduce algorithms with the same approximation guarantee in one or two rounds of communication, improving the previous best known round complexity for these problems.
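A simplified sketch (hypothetical parameters; a greedy maximal matching is used both as each machine's summary and as the final solver, which is not the paper's coreset or its guarantee) of the random-partition workflow described above: partition edges at random across k machines, keep a small subgraph per machine, and run a standard algorithm on the union of the summaries.

import random

def greedy_maximal_matching(edges):
    matched, matching = set(), []
    for u, v in edges:
        if u not in matched and v not in matched:
            matching.append((u, v))
            matched.update((u, v))
    return matching

def random_partition(edges, k, seed=0):
    rng = random.Random(seed)
    parts = [[] for _ in range(k)]
    for e in edges:
        parts[rng.randrange(k)].append(e)
    return parts

def simultaneous_matching(edges, k):
    # Each "machine" sends only a maximal matching of its share as its
    # summary; the coordinator runs the same algorithm on the union.
    parts = random_partition(edges, k)
    summaries = [greedy_maximal_matching(p) for p in parts]
    combined = [e for s in summaries for e in s]
    matching = greedy_maximal_matching(combined)
    # The endpoints of a maximal matching form a 2-approximate vertex cover
    # of the combined subgraph.
    cover = {v for e in matching for v in e}
    return matching, cover

edges = [(i, i + 1) for i in range(20)] + [(0, 10), (5, 15)]
m, c = simultaneous_matching(edges, k=4)
print(len(m), "matched edges;", len(c), "vertices in cover")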

Posted Content
TL;DR: The best known worst-case update time for this problem is O(m^{1/4}/ε^2), where m is the number of edges in the graph, as discussed by the authors.
Abstract: We consider the problem of maintaining an approximately maximum (fractional) matching and an approximately minimum vertex cover in a dynamic graph. Starting with the seminal paper by Onak and Rubinfeld [STOC 2010], this problem has received significant attention in recent years. There remains, however, a polynomial gap between the best known worst case update time and the best known amortised update time for this problem, even after allowing for randomisation. Specifically, Bernstein and Stein [ICALP 2015, SODA 2016] have the best known worst case update time. They present a deterministic data structure with approximation ratio $(3/2+\epsilon)$ and worst case update time $O(m^{1/4}/\epsilon^2)$, where $m$ is the number of edges in the graph. In recent past, Gupta and Peng [FOCS 2013] gave a deterministic data structure with approximation ratio $(1+\epsilon)$ and worst case update time $O(\sqrt{m}/\epsilon^2)$. No known randomised data structure beats the worst case update times of these two results. In contrast, the paper by Onak and Rubinfeld [STOC 2010] gave a randomised data structure with approximation ratio $O(1)$ and amortised update time $O(\log^2 n)$, where $n$ is the number of nodes in the graph. This was later improved by Baswana, Gupta and Sen [FOCS 2011] and Solomon [FOCS 2016], leading to a randomised data structure with approximation ratio $2$ and amortised update time $O(1)$. We bridge the polynomial gap between the worst case and amortised update times for this problem, without using any randomisation. We present a deterministic data structure with approximation ratio $(2+\epsilon)$ and worst case update time $O(\log^3 n)$, for all sufficiently small constants $\epsilon$.

Journal ArticleDOI
TL;DR: The generalized Nemhauser and Trotter's theorem is refined for each d ≥ 0, and a linear-vertex kernel is given for d-Bounded-Degree Vertex Deletion, parameterized by the deletion size k, for each fixed d ≥ 3.

Proceedings ArticleDOI
01 Jan 2017
TL;DR: It is formally proved that the standard Alice-Bob framework is incapable of providing a super-linear lower bound for exact weighted APSP, whose complexity remains an intriguing open question.
Abstract: We present the first super-linear lower bounds for natural graph problems in the CONGEST model, answering a long-standing open question. Specifically, we show that any exact computation of a minimum vertex cover or a maximum independent set requires a near-quadratic number of rounds in the CONGEST model, as well as any algorithm for computing the chromatic number of the graph. We further show that such strong lower bounds are not limited to NP-hard problems, by showing two simple graph problems in P which require a quadratic and near-quadratic number of rounds. Finally, we address the problem of computing an exact solution to weighted all-pairs-shortest-paths (APSP), which arguably may be considered as a candidate for having a super-linear lower bound. We show a simple linear lower bound for this problem, which implies a separation between the weighted and unweighted cases, since the latter is known to have a sub-linear complexity. We also formally prove that the standard Alice-Bob framework is incapable of providing a super-linear lower bound for exact weighted APSP, whose complexity remains an intriguing open question.

Journal ArticleDOI
TL;DR: Wang et al. as mentioned in this paper proposed a low-complexity heuristic algorithm for solving the problem of finding a minimum vertex cover (MinVC) in a large-scale real-world graph.
Abstract: The problem of finding a minimum vertex cover (MinVC) in a graph is a well-known NP-hard combinatorial optimization problem of great importance in theory and practice. Due to its NP-hardness, there has been much interest in developing heuristic algorithms for finding a small vertex cover in reasonable time. Previously, heuristic algorithms for MinVC have focused on solving graphs of relatively small size, and they are not suitable for solving massive graphs as they usually have high-complexity heuristics. This paper explores techniques for solving MinVC in very large scale real-world graphs, including a construction algorithm, a local search algorithm and a preprocessing algorithm. Both the construction and search algorithms are based on low-complexity heuristics, and we combine them to develop a heuristic algorithm for MinVC called FastVC. Experimental results on a broad range of real-world massive graphs show that our algorithms are very fast and have better performance than previous heuristic algorithms for MinVC. We also develop a preprocessing algorithm to simplify graphs for MinVC algorithms. By applying the preprocessing algorithm to local search algorithms, we obtain two efficient MinVC solvers called NuMVC2+p and FastVC2+p, which show further improvement on the massive graphs.
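A rough sketch (not the paper's FastVC; the construction rule, the exchange step and the stopping criterion below are simplified assumptions) of the two phases the abstract mentions: a low-complexity construction pass that builds a vertex cover from the edges, followed by a local search that repeatedly removes a vertex, repairs the cover, and strips redundant vertices.

import random

def construct_cover(adj):
    # Construction phase: scan the edges once and add the higher-degree
    # endpoint of any still-uncovered edge (a cheap linear-time heuristic).
    cover = set()
    for u in adj:
        for v in adj[u]:
            if u < v and u not in cover and v not in cover:
                cover.add(u if len(adj[u]) >= len(adj[v]) else v)
    return cover

def remove_redundant(adj, cover):
    # Drop any vertex whose incident edges are all covered by other vertices.
    for v in sorted(cover, key=lambda x: len(adj[x])):
        if all(u in cover for u in adj[v]):
            cover.discard(v)
    return cover

def local_search(adj, cover, steps=1000, seed=0):
    # Search phase: remove a random vertex, re-cover the edges it exposes by
    # adding their other endpoints, strip redundancies, and remember the
    # smallest cover seen.
    rng = random.Random(seed)
    cover = remove_redundant(adj, set(cover))
    best = set(cover)
    for _ in range(steps):
        v = rng.choice(sorted(cover))
        cover.discard(v)
        for u in adj[v]:
            if u not in cover:
                cover.add(u)
        cover = remove_redundant(adj, cover)
        if len(cover) < len(best):
            best = set(cover)
    return best

edges = [(0, 1), (0, 2), (0, 3), (1, 4), (2, 4), (3, 4), (4, 5)]
adj = {v: set() for e in edges for v in e}
for u, v in edges:
    adj[u].add(v)
    adj[v].add(u)
print(sorted(local_search(adj, construct_cover(adj))))   # e.g. [0, 4]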

Journal ArticleDOI
TL;DR: In this paper, the authors examine the computational complexity and algorithmics of enumeration, the task to output all solutions of a given problem, from the point of view of parameterized complexity.
Abstract: The aim of the paper is to examine the computational complexity and algorithmics of enumeration, the task to output all solutions of a given problem, from the point of view of parameterized complexity. First, we define formally different notions of efficient enumeration in the context of parameterized complexity: FPT-enumeration and delayFPT. Second, we show how different algorithmic paradigms can be used in order to get parameter-efficient enumeration algorithms in a number of examples. These paradigms use well-known principles from the design of parameterized decision as well as enumeration techniques, like for instance kernelization and self-reducibility. The concept of kernelization, in particular, leads to a characterization of fixed-parameter tractable enumeration problems. Furthermore, we study the parameterized complexity of enumerating all models of Boolean formulas having weight at least k, where k is the parameter, in the famous Schaefer's framework. We consider propositional formulas that are conjunctions of constraints taken from a fixed finite set Γ. Given such a formula and an integer k, we are interested in enumerating all the models of the formula that have weight at least k. We obtain a dichotomy classification and prove that, according to the properties of the constraint language Γ, either one can enumerate all such models in delayFPT, or no such delayFPT enumeration algorithm exists under some complexity-theoretic assumptions.
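To make the enumeration task concrete, here is a small branching sketch loosely in the spirit of the self-reducibility paradigm the paper surveys (it is not taken from the paper, and no delay guarantee is claimed): it lists every vertex cover of size at most k exactly once by deciding the vertices one by one.

def enumerate_vertex_covers(vertices, adj, k):
    # Yield every vertex cover of size at most k exactly once.
    vertices = list(vertices)

    def branch(i, included, excluded):
        if len(included) > k:
            return                       # cannot stay within the budget
        if i == len(vertices):
            yield set(included)
            return
        v = vertices[i]
        # Branch 1: put v into the cover.
        yield from branch(i + 1, included | {v}, excluded)
        # Branch 2: leave v out; then no neighbour of v may be left out too,
        # otherwise some edge would stay uncovered.
        if not (adj[v] & excluded):
            yield from branch(i + 1, included, excluded | {v})

    yield from branch(0, frozenset(), frozenset())

edges = [(0, 1), (1, 2), (2, 3)]
adj = {v: set() for e in edges for v in e}
for u, v in edges:
    adj[u].add(v)
    adj[v].add(u)
for cover in enumerate_vertex_covers(list(adj), adj, k=2):
    print(sorted(cover))   # [0, 2], [1, 2], [1, 3]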

Proceedings ArticleDOI
16 Jan 2017
TL;DR: This work designs a tight algorithm for random models and extends it to give the same guarantee for arbitrary instances, and shows that this is tight under plausible complexity conjectures: the problem cannot be approximated better than O(n^{1/4}) assuming an extension of the so-called "Dense versus Random" conjecture for DkS to hypergraphs.
Abstract: In the Minimum k-Union problem (MkU) we are given a set system with n sets and are asked to select k sets in order to minimize the size of their union. Despite being a very natural problem, it has received surprisingly little attention: the only known approximation algorithm is an [EQUATION]-approximation due to [Chlamtac et al APPROX '16]. This problem can also be viewed as the bipartite version of the Small Set Vertex Expansion problem (SSVE), which we call the Small Set Bipartite Vertex Expansion problem (SSBVE). SSVE, in which we are asked to find a set of k nodes to minimize their vertex expansion, has not been as well studied as its edge-based counterpart Small Set Expansion (SSE), but has recently received significant attention, e.g. [Louis-Makarychev APPROX '15]. However, due to the connection to Unique Games and hardness of approximation the focus has mostly been on sets of size k = Ω(n), while we focus on the case of general k, for which no polylogarithmic approximation is known. We improve the upper bound for this problem by giving an n^{1/4+ε} approximation for SSBVE for any constant ε > 0. Our algorithm follows in the footsteps of Densest k-Subgraph (DkS) and related problems, by designing a tight algorithm for random models, and then extending it to give the same guarantee for arbitrary instances. Moreover, we show that this is tight under plausible complexity conjectures: it cannot be approximated better than O(n^{1/4}) assuming an extension of the so-called "Dense versus Random" conjecture for DkS to hypergraphs. In addition to conjectured hardness via our reduction, we show that the same lower bound is also matched by an integrality gap for a super-constant number of rounds of the Sherali-Adams LP hierarchy, and an even worse integrality gap for the natural SDP relaxation. Finally, we note that there exists a simple bicriteria [EQUATION] approximation for the more general SSVE problem (where no non-trivial approximations were known for general k).
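To pin down the objective, here is a brute-force illustration of the Minimum k-Union problem on a made-up set system (exponential in k, for tiny instances only; the paper's n^{1/4+ε}-approximation algorithm is not reproduced here): choose k of the given sets so that their union is as small as possible.

from itertools import combinations

def minimum_k_union(sets, k):
    # Exhaustive search over all k-subsets of the set system; returns the
    # chosen indices and the size of their union.
    best_choice, best_union = None, None
    for choice in combinations(range(len(sets)), k):
        union = set().union(*(sets[i] for i in choice))
        if best_union is None or len(union) < len(best_union):
            best_choice, best_union = choice, union
    return best_choice, len(best_union)

system = [{1, 2, 3}, {3, 4}, {4, 5}, {1, 3, 4}, {6, 7, 8, 9}]
print(minimum_k_union(system, k=3))   # picks heavily overlapping sets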

Posted Content
TL;DR: It is shown that one can achieve an O(log n) approximation to minimum vertex cover in only O(log log n) rounds of the massive parallel computation (MPC) framework, when the memory per machine is O(n).
Abstract: Recently, Czumaj et al. (arXiv 2017) presented a parallel (almost) $2$-approximation algorithm for the maximum matching problem in only $O({(\log\log{n})^2})$ rounds of the massive parallel computation (MPC) framework, when the memory per machine is $O(n)$. The main approach in their work is a way of compressing $O(\log{n})$ rounds of a distributed algorithm for maximum matching into only $O({(\log\log{n})^2})$ MPC rounds. In this note, we present a similar algorithm for the closely related problem of approximating the minimum vertex cover in the MPC framework. We show that one can achieve an $O(\log{n})$ approximation to minimum vertex cover in only $O(\log\log{n})$ MPC rounds when the memory per machine is $O(n)$. Our algorithm for vertex cover is similar to the maximum matching algorithm of Czumaj et al. but avoids many of the intricacies in their approach and as a result admits a considerably simpler analysis (at a cost of a worse approximation guarantee). We obtain this result by modifying a previous parallel algorithm by Khanna and the author (SPAA 2017) for vertex cover that allowed for compressing $O(\log{n})$ rounds of a distributed algorithm into constant MPC rounds when the memory allowed per machine is $O(n\sqrt{n})$.

Book ChapterDOI
26 Jun 2017
TL;DR: In this article, a deterministic fully dynamic algorithm was given that maintains an O(1)-approximate minimum vertex cover and maximum fractional matching with O(1) amortized update time; it generalizes to an O(f^3)-approximation with O(f^2) amortized update time for hypergraph vertex cover and fractional matching.
Abstract: We consider the problems of maintaining approximate maximum matching and minimum vertex cover in a dynamic graph. Starting with the seminal work of Onak and Rubinfeld [STOC 2010], this problem has received significant attention in recent years. Very recently, extending the framework of Baswana, Gupta and Sen [FOCS 2011], Solomon [FOCS 2016] gave a randomized 2-approximation dynamic algorithm for this problem that has amortized update time of O(1) with high probability. We consider the natural open question of derandomizing this result. We present a new deterministic fully dynamic algorithm that maintains a O(1)-approximate minimum vertex cover and maximum fractional matching, with an amortized update time of O(1). Previously, the best deterministic algorithm for this problem was due to Bhattacharya, Henzinger and Italiano [SODA 2015]; it had an approximation ratio of \((2+\epsilon )\) and an amortized update time of \(O(\log n/\epsilon ^2)\). Our result can be generalized to give a fully dynamic \(O(f^3)\)-approximation algorithm with \(O(f^2)\) amortized update time for the hypergraph vertex cover and fractional matching problems, where every hyperedge has at most f vertices.

Journal ArticleDOI
TL;DR: In this article, it was shown that some Planar F-Minor-Free Deletion problems do not have uniformly polynomial kernels (unless NP ⊆ coNP/poly), not even when parameterized by the vertex cover number.
Abstract: The F-Minor-Free Deletion problem asks, for a fixed set F and an input consisting of a graph G and integer k, whether k vertices can be removed from G such that the resulting graph does not contain any member of F as a minor. At FOCS 2012, Fomin et al. showed that the special case when F contains at least one planar graph has a kernel of size f(F) · k^{g(F)} for some functions f and g. They left open whether this Planar F-Minor-Free Deletion problem has kernels whose size is uniformly polynomial, of the form f(F) · k^c for some universal constant c. We prove that some Planar F-Minor-Free Deletion problems do not have uniformly polynomial kernels (unless NP ⊆ coNP/poly), not even when parameterized by the vertex cover number. On the positive side, we consider the problem of determining whether k vertices can be removed to obtain a graph of treedepth at most η. We prove that this problem admits uniformly polynomial kernels with O(k^6) vertices for every fixed η.

Journal ArticleDOI
TL;DR: This paper introduces an efficient local search algorithm with a tabu strategy and a perturbation mechanism for the generalized vertex cover problem, and shows that it performs better than a state-of-the-art algorithm in terms of both solution quality and computational efficiency on most instances.
Abstract: The generalized vertex cover problem, an extension of the classic minimum vertex cover problem, is an important NP-hard combinatorial optimization problem with a wide range of applications. The aim of this paper is to design an efficient local search algorithm with a tabu strategy and a perturbation mechanism to solve this problem. Firstly, we use the tabu strategy to prevent the local search from immediately returning to a previously visited candidate solution and to avoid cycling. Secondly, we propose the flip gain for each vertex, and then the tabu strategy is combined with the flip gain for vertex selection. Finally, we apply a simple perturbation mechanism to help the search escape from deep local optima and to bring diversification into the search. The experiments are carried out on random instances with up to 1000 vertices and 450,000 edges. The experimental results show that our algorithm performs better than a state-of-the-art algorithm in terms of both solution quality and computational efficiency on most instances.
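A condensed sketch of the three ingredients named above, under simplifying assumptions (the objective is reduced to "vertex costs plus a penalty per uncovered edge", and the tabu tenure and perturbation strength are illustrative placeholders, not the paper's settings): a flip gain per vertex, a tabu list forbidding recently flipped vertices with an aspiration criterion, and a periodic random perturbation to escape deep local optima.

import random

def objective(selected, vcost, edges, penalty):
    # Simplified generalized-vertex-cover objective (an assumption for this
    # sketch): pay the cost of every selected vertex plus a penalty for every
    # edge left uncovered.
    cost = sum(vcost[v] for v in selected)
    cost += sum(penalty[i] for i, (u, w) in enumerate(edges)
                if u not in selected and w not in selected)
    return cost

def tabu_search(vcost, edges, penalty, iters=2000, tenure=5,
                perturb_every=200, seed=0):
    rng = random.Random(seed)
    vertices = sorted(vcost)
    selected = set()
    best, best_cost = set(), objective(selected, vcost, edges, penalty)
    tabu = {}                               # vertex -> iteration until which it is tabu
    for it in range(iters):
        cur = objective(selected, vcost, edges, penalty)
        # Flip gain of v = objective after flipping v minus current objective;
        # tabu vertices are only allowed if they would beat the best cost.
        gains = {}
        for v in vertices:
            gain = objective(selected ^ {v}, vcost, edges, penalty) - cur
            if tabu.get(v, -1) < it or cur + gain < best_cost:
                gains[v] = gain
        if not gains:
            continue
        v = min(gains, key=gains.get)       # best (possibly uphill) flip
        selected ^= {v}
        tabu[v] = it + tenure               # forbid flipping v back for a while
        if cur + gains[v] < best_cost:
            best, best_cost = set(selected), cur + gains[v]
        if (it + 1) % perturb_every == 0:   # perturbation: a few random flips
            for u in rng.sample(vertices, max(1, len(vertices) // 10)):
                selected ^= {u}
    return best, best_cost

edges = [(0, 1), (1, 2), (2, 3), (3, 0), (1, 3)]
vcost = {v: 1.0 for v in range(4)}
penalty = [5.0] * len(edges)                # cost of leaving each edge uncovered
print(tabu_search(vcost, edges, penalty))   # expect ({1, 3}, 2.0)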

Journal ArticleDOI
TL;DR: It is conjectured that DL-Hom(H) is fixed-parameter tractable for the class of graphs H for which the list homomorphism problem (without deletions) is polynomial-time solvable.
Abstract: In the deletion version of the list homomorphism problem, we are given graphs G and H, a list L(v) ⊆ V(H) for each vertex v ∈ V(G), and an integer k. The task is to decide whether there exists a set W ⊆ V(G) of size at most k such that there is a homomorphism from G \ W to H respecting the lists. We show that DL-Hom(H), parameterized by k and |H|, is fixed-parameter tractable for any (P6, C6)-free bipartite graph H; already for this restricted class of graphs, the problem generalizes Vertex Cover, Odd Cycle Transversal, and Vertex Multiway Cut parameterized by the size of the cutset and the number of terminals. We conjecture that DL-Hom(H) is fixed-parameter tractable for the class of graphs H for which the list homomorphism problem (without deletions) is polynomial-time solvable; by a result of Feder et al. (Combinatorica 19(4):487-505, 1999), a graph H belongs to this class precisely if it is a bipartite graph whose complement is a circular arc graph. We show that this conjecture is equivalent to the fixed-parameter tractability of a single fairly natural satisfiability problem, Clause Deletion Chain-SAT.

Journal ArticleDOI
Jinkun Chen, Yaojin Lin, Guoping Lin, Jinjin Li, Yan-Lan Zhang
TL;DR: First, it is shown that finding the attribute reduction of a covering decision system is equivalent to finding the minimal vertex cover of a derivative hypergraph, and a graph-theoretic model for covering decision systems is presented.
Abstract: Attribute reduction (also called feature subset selection) plays an important role in rough set theory. Different from the classical attribute reduction algorithms, the methods of attribute reduction based on covering rough sets appear to be suitable for numerical data. However, they are time-consuming when dealing with large-scale data. In this paper, we study the problem of attribute reduction of covering decision systems based on graph theory. First, we translate this problem into a graph model and show that finding the attribute reduction of a covering decision system is equivalent to finding the minimal vertex cover of a derivative hypergraph. Then, based on the proposed model, an algorithm for attribute reduction of covering decision systems is presented. Experiments show that the new proposed method is more effective in handling large-scale data.
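A small sketch of the reduction described above, with an illustrative made-up hypergraph standing in for the "derivative hypergraph" of a covering decision system (how that hypergraph is built from the data is not reproduced here): attributes are the hypergraph's vertices, and a greedily computed minimal vertex cover (hitting set) of the hyperedges yields a candidate attribute reduct.

def greedy_minimal_hitting_set(hyperedges):
    # Greedily pick the attribute hitting the most unhit hyperedges, then
    # prune redundant picks so the result is a minimal vertex cover.
    unhit = [set(e) for e in hyperedges]
    cover = []
    while unhit:
        attrs = {a for e in unhit for a in e}
        best = max(attrs, key=lambda a: sum(a in e for e in unhit))
        cover.append(best)
        unhit = [e for e in unhit if best not in e]
    # Minimality pass: drop any attribute whose removal still hits all edges.
    for a in list(cover):
        rest = set(cover) - {a}
        if all(rest & set(e) for e in hyperedges):
            cover.remove(a)
    return set(cover)

# Hypothetical derivative hypergraph: each hyperedge is the set of attributes
# that can distinguish one pair of objects with different decisions.
hyperedges = [{"a1", "a2"}, {"a2", "a3"}, {"a3", "a4"}, {"a1", "a4"}]
print(greedy_minimal_hitting_set(hyperedges))   # a candidate reduct of 2 attributes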

Journal ArticleDOI
TL;DR: This article considers kernelization for problems on d-degenerate graphs, that is, graphs such that any subgraph contains a vertex of degree at most d, and proves that unless coNP ⊆ NP/poly, Dominating Set has no kernels of size O(k^{(d−1)(d−3)−ε}) for any ε > 0.
Abstract: Kernelization is a strong and widely applied technique in parameterized complexity. In a nutshell, a kernelization algorithm for a parameterized problem transforms in polynomial time a given instance of the problem into an equivalent instance whose size depends solely on the parameter. Recent years have seen major advances in the study of both upper and lower bound techniques for kernelization, and by now this area has become one of the major research threads in parameterized complexity. In this article, we consider kernelization for problems on d-degenerate graphs, that is, graphs such that any subgraph contains a vertex of degree at most d. This graph class generalizes many classes of graphs for which effective kernelization is known to exist, for example, planar graphs, H-minor free graphs, and H-topological-minor free graphs. We show that for several natural problems on d-degenerate graphs the best-known kernelization upper bounds are essentially tight. In particular, using intricate constructions of weak compositions, we prove that unless coNP ⊆ NP/poly: (i) Dominating Set has no kernels of size O(k^{(d−1)(d−3)−ε}) for any ε > 0; the current best upper bound is O(k^{(d+1)^2}). (ii) Independent Dominating Set has no kernels of size O(k^{d−4−ε}) for any ε > 0; the current best upper bound is O(k^{d+1}). (iii) Induced Matching has no kernels of size O(k^{d−3−ε}) for any ε > 0; the current best upper bound is O(k^d). To the best of our knowledge, Dominating Set is the first problem where a lower bound with superlinear dependence on d (in the exponent) can be proved. In the last section of the article, we also give simple kernels for Connected Vertex Cover and Capacitated Vertex Cover of size O(k^d) and O(k^{d+1}), respectively. We show that the latter problem has no kernels of size O(k^{d−ε}) unless coNP ⊆ NP/poly by a simple reduction from d-Exact Set Cover (the same lower bound for Connected Vertex Cover on d-degenerate graphs is already known).