
Showing papers on "Vertex cover published in 2009"


Journal ArticleDOI
TL;DR: In this article, a convex-concave programming approach is proposed for the labeled weighted graph matching problem, obtained by rewriting the problem as a least-squares problem on the set of permutation matrices and relaxing it to two different optimization problems.
Abstract: We propose a convex-concave programming approach for the labeled weighted graph matching problem. The convex-concave programming formulation is obtained by rewriting the weighted graph matching problem as a least-squares problem on the set of permutation matrices and relaxing it to two different optimization problems: a quadratic convex and a quadratic concave optimization problem on the set of doubly stochastic matrices. The concave relaxation has the same global minimum as the initial graph matching problem, but the search for its global minimum is also a hard combinatorial problem. We therefore construct an approximation of the concave problem solution by following a solution path of a convex-concave problem obtained by linear interpolation of the convex and concave formulations, starting from the convex relaxation. This method makes it easy to integrate information on graph label similarities into the optimization problem and, therefore, to perform labeled weighted graph matching. The algorithm is compared with some of the best performing graph matching methods on four data sets: simulated graphs, QAPLib, retina vessel images, and handwritten Chinese characters. In all cases, the results are competitive with the state of the art.
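For intuition, the least-squares matching objective that the relaxations start from can be solved exactly by enumeration on toy graphs. This is only an illustrative sketch with our own function names; the paper's path-following method exists precisely because this enumeration is factorial in the graph size.

```python
import itertools

# Least-squares weighted graph matching: minimize ||A1 - P A2 P^T||_F^2
# over permutation matrices P, written here directly as a permutation of
# vertex indices. Brute force is feasible only for tiny graphs; the
# paper's convex-concave path following replaces this enumeration.
def match_brute_force(A1, A2):
    n = len(A1)
    best_cost, best_perm = float("inf"), None
    for perm in itertools.permutations(range(n)):
        cost = sum((A1[i][j] - A2[perm[i]][perm[j]]) ** 2
                   for i in range(n) for j in range(n))
        if cost < best_cost:
            best_cost, best_perm = cost, perm
    return best_cost, best_perm

# Two isomorphic weighted triangles; vertex i of the first graph
# corresponds to vertex (i + 1) mod 3 of the second.
A1 = [[0, 1, 2], [1, 0, 3], [2, 3, 0]]
A2 = [[0, 2, 3], [2, 0, 1], [3, 1, 0]]
cost, perm = match_brute_force(A1, A2)  # cost 0 at perm (1, 2, 0)
```

Because the three edge weights are distinct, the zero-cost matching is unique, which is what makes the toy example unambiguous.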

442 citations


Book ChapterDOI
06 Jul 2009
TL;DR: This paper shows how to combine these recent lower-bound frameworks with combinatorial reductions which use colors and IDs in order to prove kernelization lower bounds for a variety of basic problems, and rules out the existence of compression algorithms for many of the problems in question.
Abstract: In parameterized complexity each problem instance comes with a parameter k, and a parameterized problem is said to admit a polynomial kernel if there are polynomial time preprocessing rules that reduce the input instance to an instance with size polynomial in k. Many problems have been shown to admit polynomial kernels, but it is only recently that a framework for showing the non-existence of polynomial kernels has been developed by Bodlaender et al. [4] and Fortnow and Santhanam [9]. In this paper we show how to combine these results with combinatorial reductions which use colors and IDs in order to prove kernelization lower bounds for a variety of basic problems: We show that the Steiner Tree problem parameterized by the number of terminals and solution size k, and the Connected Vertex Cover and Capacitated Vertex Cover problems, do not admit a polynomial kernel. The two latter results are surprising because the closely related Vertex Cover problem admits a kernel with 2k vertices. Alon and Gutner obtain a k^poly(h) kernel for Dominating Set in H-Minor-Free Graphs parameterized by h = |H| and solution size k, and ask whether kernels of smaller size exist [2]. We partially resolve this question by showing that Dominating Set in H-Minor-Free Graphs does not admit a kernel with size polynomial in k + h. Harnik and Naor obtain a "compression algorithm" for the Sparse Subset Sum problem [13]. We show that their algorithm is essentially optimal, since the instances cannot be compressed further. Hitting Set and Set Cover admit a kernel of size k^O(d) when parameterized by solution size k and maximum set size d. We show that neither of them, along with the Unique Coverage and Bounded Rank Disjoint Sets problems, admits a polynomial kernel. All results are under the assumption that the polynomial hierarchy does not collapse to the third level.
The existence of polynomial kernels for several of the problems mentioned above was stated as an open problem in the literature [2,3,11,12,14]. Many of our results also rule out the existence of compression algorithms, a notion similar to kernelization defined by Harnik and Naor [13], for the problems in question.
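To make the notion of a kernel concrete: the 2k-vertex Vertex Cover kernel mentioned above comes from stronger LP/crown reductions, but the flavor of kernelization is already visible in Buss's classical high-degree rule, sketched below with our own function names.

```python
# Sketch of Buss's kernelization for Vertex Cover (illustrative; the 2k
# kernel cited in the abstract uses stronger LP/crown reductions).
# Rule: a vertex of degree > k must be in every vertex cover of size <= k.
def buss_kernel(edges, k):
    edges = {tuple(sorted(e)) for e in edges}
    forced = set()                      # vertices forced into the cover
    changed = True
    while changed and k > 0:
        changed = False
        deg = {}
        for u, v in edges:
            deg[u] = deg.get(u, 0) + 1
            deg[v] = deg.get(v, 0) + 1
        for v, d in deg.items():
            if d > k:                   # v must be in any small cover
                forced.add(v)
                edges = {e for e in edges if v not in e}
                k -= 1
                changed = True
                break
    # If a cover of size <= k exists, at most k^2 edges can remain.
    feasible = len(edges) <= k * k
    return forced, edges, k, feasible

# Star with centre 0 plus one extra edge: the centre is forced into the
# cover, and the kernel shrinks to the single remaining edge.
forced, kernel, k_left, feasible = buss_kernel(
    [(0, 1), (0, 2), (0, 3), (0, 4), (0, 5), (6, 7)], k=2)
```

The preprocessing is polynomial time, and the surviving instance has size bounded by a function of k alone, which is exactly what a kernel requires.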

233 citations


01 Jan 2009
TL;DR: For any integer d ≥ 3 and positive real ε, it was shown in this article that if satisfiability for n-variable d-CNF formulas has a communication protocol of cost O(n^(d−ε)), then coNP is in NP/poly, where the cost is the number of bits of communication from the first player to the second player.
Abstract: Consider the following two-player communication process to decide a language L: The first player holds the entire input x but is polynomially bounded; the second player is computationally unbounded but does not know any part of x; their goal is to decide cooperatively whether x belongs to L at small cost, where the cost measure is the number of bits of communication from the first player to the second player. For any integer d ≥ 3 and positive real ε, we show that, if satisfiability for n-variable d-CNF formulas has a protocol of cost O(n^(d−ε)), then coNP is in NP/poly, which implies that the polynomial-time hierarchy collapses to its third level. The result even holds when the first player is conondeterministic, and is tight as there exists a trivial protocol for ε = 0. Under the hypothesis that coNP is not in NP/poly, our result implies tight lower bounds for parameters of interest in several areas, namely sparsification, kernelization in parameterized complexity, lossy compression, and probabilistically checkable proofs. By reduction, similar results hold for other NP-complete problems. For the vertex cover problem on n-vertex d-uniform hypergraphs, this statement holds for any integer d ≥ 2. The case d = 2 implies that no NP-hard vertex deletion problem based on a graph property that is inherited by subgraphs can have kernels consisting of O(k^(2−ε)) edges unless coNP is in NP/poly, where k denotes the size of the deletion set. Kernels consisting of O(k^2) edges are known for several problems in the class, including vertex cover, feedback vertex set, and bounded-degree deletion.

225 citations


Proceedings ArticleDOI
25 Oct 2009
TL;DR: This paper presents a rounding 2-approximation algorithm for the submodular vertex cover problem based on the half-integrality of the continuous relaxation problem, and shows that the rounding algorithm can be performed by one application of submodular function minimization on a ring family.
Abstract: This paper addresses the problems of minimizing nonnegative submodular functions under covering constraints, which generalize the vertex cover, edge cover, and set cover problems. We give approximation algorithms for these problems exploiting the discrete convexity of submodular functions. We first present a rounding 2-approximation algorithm for the submodular vertex cover problem based on the half-integrality of the continuous relaxation problem, and show that the rounding algorithm can be performed by one application of submodular function minimization on a ring family. We also show that a rounding algorithm and a primal-dual algorithm for the submodular cost set cover problem are both constant factor approximation algorithms if the maximum frequency is fixed. In addition, we give an essentially tight lower bound on the approximability of the submodular edge cover problem.
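For the classical (linear-cost) special case, the half-integrality that drives the rounding can be seen directly: the vertex cover LP always has an optimum with coordinates in {0, 1/2, 1}, so on toy instances one can find it by searching half-integral vectors and rounding up. This is only a sketch of the standard rounding; the paper's contribution is carrying out the analogous step for submodular costs via submodular function minimization.

```python
import itertools
from fractions import Fraction

# The vertex cover LP (min sum x_v subject to x_u + x_v >= 1 per edge)
# always has a half-integral optimum, so for tiny graphs we can search
# over x in {0, 1/2, 1}^n and then round every x_v >= 1/2 up to 1. The
# rounded set is a vertex cover of cost at most twice the LP optimum.
def halfintegral_vc(n, edges):
    values = (Fraction(0), Fraction(1, 2), Fraction(1))
    best_x, best_val = None, None
    for x in itertools.product(values, repeat=n):
        if all(x[u] + x[v] >= 1 for u, v in edges):
            val = sum(x)
            if best_val is None or val < best_val:
                best_val, best_x = val, x
    cover = {v for v in range(n) if best_x[v] >= Fraction(1, 2)}
    return best_val, cover

# A 5-cycle: the LP optimum is 5/2 (all variables 1/2), so rounding
# selects every vertex, within factor 2 of the integer optimum 3.
lp_value, cover = halfintegral_vc(5, [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)])
```

The odd cycle is the standard example where the LP optimum is strictly below the integer optimum, so it exercises both the half-integrality and the factor-2 loss.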

162 citations


Proceedings ArticleDOI
31 May 2009
TL;DR: In this paper, integrality gaps are established for SDP relaxations of constraint satisfaction problems in the hierarchy of SDPs defined by Lasserre: for the general MAX k-CSP problem, the ratio of the SDP optimum to the integer optimum may be as large as 2^k/(2k) − ε even after Ω(n) rounds of the hierarchy.
Abstract: We study integrality gaps for SDP relaxations of constraint satisfaction problems, in the hierarchy of SDPs defined by Lasserre. Schoenebeck [23] recently showed the first integrality gaps for these problems, showing that for MAX k-XOR, the ratio of the SDP optimum to the integer optimum may be as large as 2 even after Ω(n) rounds of the Lasserre hierarchy. We show that for the general MAX k-CSP problem, this ratio can be as large as 2^k/(2k) − ε when the alphabet is binary, and q^k/(q(q−1)k) − ε when the alphabet size is a prime q, even after Ω(n) rounds of the Lasserre hierarchy. We also explore how to translate gaps for CSPs into integrality gaps for other problems using reductions, and establish SDP gaps for Maximum Independent Set, Approximate Graph Coloring, Chromatic Number and Minimum Vertex Cover. For Independent Set and Chromatic Number, we show integrality gaps of n/2^(O(√(log n log log n))) even after 2^(Ω(√(log n log log n))) rounds. In the case of Approximate Graph Coloring, for every constant l, we construct graphs with chromatic number Ω(2^(l/2)/l^2) which admit a vector l-coloring for the SDP obtained by Ω(n) rounds. For Vertex Cover, we show an integrality gap of 1.36 for Ω(n^δ) rounds, for a small constant δ. The results for CSPs provide the first examples of Ω(n)-round integrality gaps matching hardness results known only under the Unique Games Conjecture. This, together with some additional properties of the integrality gap instance, allows for gaps for Independent Set and Chromatic Number which are stronger than the NP-hardness results known even under the Unique Games Conjecture.

151 citations


Proceedings ArticleDOI
31 May 2009
TL;DR: A conceptually simple geometric approach to constructing Sherali-Adams gap examples via constructions of consistent local SDP solutions is developed, which is surprisingly versatile.
Abstract: We prove strong lower bounds on integrality gaps of Sherali-Adams relaxations for MAX CUT, Vertex Cover, Sparsest Cut and other problems. Our constructions show gaps for Sherali-Adams relaxations that survive n^δ rounds of lift and project. For MAX CUT and Vertex Cover, these show that even n^δ rounds of Sherali-Adams do not yield a better than 2 − ε approximation. The main combinatorial challenge in constructing these gap examples is the construction of a fractional solution that is far from an integer solution, but yet admits consistent distributions of local solutions for all small subsets of variables. Satisfying this consistency requirement is one of the major hurdles to constructing Sherali-Adams gap examples. We present a modular recipe for achieving this, building on previous work on metrics with a local-global structure. We develop a conceptually simple geometric approach to constructing Sherali-Adams gap examples via constructions of consistent local SDP solutions. This geometric approach is surprisingly versatile. We construct Sherali-Adams gap examples for Unique Games based on our construction for MAX CUT together with a parallel repetition like procedure. This in turn allows us to obtain Sherali-Adams gap examples for any problem that has a Unique Games based hardness result (with some additional conditions on the reduction from Unique Games). Using this, we construct 2 − ε gap examples for Maximum Acyclic Subgraph that rule out any family of linear constraints with support at most n^δ.

148 citations


Journal ArticleDOI
Abstract: We reduce the approximation factor for the vertex cover to 2 − Θ(1/√(log n)) (instead of the previous 2 − (ln ln n)/(2 ln n) obtained by Bar-Yehuda and Even [1985] and Monien and Speckenmeyer [1985]). The improvement of the vanishing factor comes as an application of the recent results of Arora et al. [2004] that improved the approximation factor of the sparsest cut and balanced cut problems. In particular, we use the existence of two big and well-separated sets of nodes in the solution of the semidefinite relaxation for balanced cut, proven by Arora et al. [2004]. We observe that a solution of the semidefinite relaxation for vertex cover, when strengthened with the triangle inequalities, can be transformed into a solution of a balanced cut problem, and therefore the existence of big well-separated sets in the sense of Arora et al. [2004] translates into the existence of a big independent set.

140 citations


Proceedings ArticleDOI
25 Oct 2009
TL;DR: A long code test with one free bit, completeness 1 − ε and soundness δ is presented, and the following two inapproximability results are proved.
Abstract: For arbitrarily small constants ε, δ > 0, we present a long code test with one free bit, completeness 1 − ε and soundness δ. Using the test, we prove the following two inapproximability results: 1. Assuming the Unique Games Conjecture of Khot, given an n-vertex graph that has two disjoint independent sets of size (1/2 − ε)n each, it is NP-hard to find an independent set of size δn. 2. Assuming a (new) stronger version of the Unique Games Conjecture, the scheduling problem of minimizing weighted completion time with precedence constraints is inapproximable within factor 2 − ε.

125 citations


Journal ArticleDOI
TL;DR: This work proposes an algorithm that solves the problem of removing at most k clauses from a 2-CNF formula to make it satisfiable in O(15^k · k · m^3) time, showing that this problem is fixed-parameter tractable.

115 citations


Journal ArticleDOI
TL;DR: A theoretical analysis explaining these empirical results is presented for the random local search algorithm and the (1+1)-EA, and a lower bound of slightly less than two on the worst-case approximation ratio is proved.
Abstract: Vertex cover is one of the best known NP-hard combinatorial optimization problems. Experimental work has claimed that evolutionary algorithms (EAs) perform fairly well for the problem and can compete with problem-specific ones. A theoretical analysis that explains these empirical results is presented concerning the random local search algorithm and the (1+1)-EA. Since it is not expected that an algorithm can solve the vertex cover problem in polynomial time, a worst case approximation analysis is carried out for the two considered algorithms and comparisons with the best known problem-specific ones are presented. By studying instance classes of the problem, general results are derived. Although arbitrarily bad approximation ratios of the (1+1)-EA can be proved for a bipartite instance class, the same algorithm can quickly find the minimum cover of the graph when a restart strategy is used. Instance classes where multiple runs cannot considerably improve the performance of the (1+1)-EA are considered and the characteristics of the graphs that make the optimization task hard for the algorithm are investigated and highlighted. An instance class is designed to prove that the (1+1)-EA cannot guarantee better solutions than the state-of-the-art algorithm for vertex cover if worst cases are considered. In particular, a lower bound for the worst case approximation ratio, slightly less than two, is proved. Nevertheless, there are subclasses of the vertex cover problem for which the (1+1)-EA is efficient. It is proved that if the vertex degree is at most two, then the algorithm can solve the problem in polynomial time.
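As a concrete reference point, here is a minimal sketch of a (1+1)-EA for vertex cover with the common lexicographic fitness (first minimize uncovered edges, then cover size). The fitness choice, parameter values, and function names are our illustrative assumptions, not necessarily the exact setup analyzed in the paper.

```python
import random

# (1+1)-EA for vertex cover: keep one bit string, flip each bit with
# probability 1/n, and accept the offspring if it is no worse under the
# lexicographic fitness (number of uncovered edges, cover size).
def one_plus_one_ea(n, edges, steps, seed=0):
    rng = random.Random(seed)

    def fitness(x):
        uncovered = sum(1 for u, v in edges if not x[u] and not x[v])
        return (uncovered, sum(x))

    x = [rng.random() < 0.5 for _ in range(n)]
    for _ in range(steps):
        y = [bit != (rng.random() < 1.0 / n) for bit in x]  # mutate
        if fitness(y) <= fitness(x):                        # accept ties
            x = y
    return x, fitness(x)

# On a path 0-1-2 the algorithm quickly reaches a feasible cover; once
# all edges are covered, infeasible offspring are never accepted again.
solution, (uncovered, size) = one_plus_one_ea(3, [(0, 1), (1, 2)], steps=3000)
```

Note that nothing here guarantees a minimum cover; the paper's point is precisely that the worst-case approximation ratio of such a scheme can approach two on adversarial instances.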

108 citations


Proceedings ArticleDOI
31 May 2009
TL;DR: An algorithm to approximate the size of some maximal independent set with additive error εn whose running time is O(d^2) is presented, and it is shown that there are approximation algorithms for many other problems, e.g., the maximum matching problem, the minimum vertex cover problem, and the minimum set cover problem, that run exponentially faster than existing algorithms.
Abstract: This paper studies approximation algorithms for problems on degree-bounded graphs. Let n and d be the number of vertices and the degree bound, respectively. This paper presents an algorithm to approximate the size of some maximal independent set with additive error εn whose running time is O(d^2). Using this algorithm, it also shows that there are approximation algorithms for many other problems, e.g., the maximum matching problem, the minimum vertex cover problem, and the minimum set cover problem, that run exponentially faster than existing algorithms with respect to d and 1/ε. Its approximation algorithm for the maximum matching problem can be transformed to a testing algorithm for the property of having a perfect matching with two-sided error. On the contrary, it also shows that every one-sided error tester for the property requires at least Ω(n) queries.
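The maximal independent set whose size is being estimated is, conceptually, the output of a fixed greedy rule. Below is a hypothetical sequential version of that rule; the constant-time algorithm achieves its speedup by simulating the same rule only locally around a constant number of sampled vertices.

```python
# Greedy maximal independent set in a fixed vertex order. A constant-time
# estimator can approximate the size of exactly this kind of set by
# simulating the greedy rule locally instead of running it globally.
def greedy_mis(adj):
    in_set, blocked = set(), set()
    for v in sorted(adj):
        if v not in blocked:
            in_set.add(v)
            blocked.update(adj[v])   # neighbours can no longer join
    return in_set

# Path 0-1-2-3: the greedy rule picks vertices 0 and 2.
mis = greedy_mis({0: [1], 1: [0, 2], 2: [1, 3], 3: [2]})
```

The set is maximal by construction: every vertex not chosen was blocked by a chosen neighbour.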

Proceedings ArticleDOI
25 Oct 2009
TL;DR: This paper introduces an algorithmic framework for studying combinatorial problems in the presence of multiple agents with submodular cost functions and studies several fundamental covering problems in this setting to establish tight upper and lower bounds for the approximability of these problems.
Abstract: Applications in complex systems such as the Internet have spawned recent interest in studying situations involving multiple agents with their individual cost or utility functions. In this paper, we introduce an algorithmic framework for studying combinatorial problems in the presence of multiple agents with submodular cost functions. We study several fundamental covering problems (Vertex Cover, Shortest Path, Perfect Matching, and Spanning Tree) in this setting and establish tight upper and lower bounds for the approximability of these problems.

Journal ArticleDOI
TL;DR: Much improved FPT algorithms are described for a large number of graph problems, for input graphs G for which ml(G)≤k, based on the polynomial-time extremal structure theory canonically associated to this parameter.
Abstract: In the framework of parameterized complexity, exploring how one parameter affects the complexity of a different parameterized (or unparameterized) problem is of general interest. A well-developed example is the investigation of how the parameter treewidth influences the complexity of (other) graph problems. The reason why such investigations are of general interest is that real-world input distributions for computational problems often inherit structure from the natural computational processes that produce the problem instances (not necessarily in obvious, or well-understood ways). The max leaf number ml(G) of a connected graph G is the maximum number of leaves in a spanning tree for G. Exploring questions analogous to the well-studied case of treewidth, we can ask: how hard is it to solve 3-Coloring, Hamilton Path, Minimum Dominating Set, Minimum Bandwidth or many other problems, for graphs of bounded max leaf number? What optimization problems are W[1]-hard under this parameterization? We do two things: We describe much improved FPT algorithms for a large number of graph problems, for input graphs G for which ml(G)≤k, based on the polynomial-time extremal structure theory canonically associated to this parameter. We consider improved algorithms both from the point of view of kernelization bounds, and in terms of improved fixed-parameter tractable (FPT) runtimes O*(f(k)). The way that we obtain these concrete algorithmic results is general and systematic. We describe the approach, and raise programmatic questions.

Journal ArticleDOI
TL;DR: For various values of t, NP-completeness and approximability results (both upper and lower bounds) and FPT algorithms for problems concerned with finding the minimum size of a t-total vertex cover, t- total edge cover and connected vertex cover are presented, in particular improving on a previous FPT algorithm for the latter problem.

Journal ArticleDOI
01 Oct 2009
TL;DR: The presented algorithm for finding maximum weight matchings in bipartite graphs with nonnegative integer weights works in Õ(Wn^ω) time, where ω is the matrix multiplication exponent, and W is the highest edge weight in the graph.
Abstract: In this paper we consider the problem of finding maximum weight matchings in bipartite graphs with nonnegative integer weights. The presented algorithm for this problem works in Õ(Wn^ω) time, where ω is the matrix multiplication exponent, and W is the highest edge weight in the graph. As a consequence of this result we obtain Õ(Wn^ω) time algorithms for computing: minimum weight bipartite vertex cover, single source shortest paths, and minimum weight vertex-disjoint s-t paths. All of the presented algorithms are randomized and with small probability can return suboptimal solutions.
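As a point of reference for what the algorithm computes, here is a brute-force maximum weight perfect matching on a complete bipartite graph (a sketch with our own naming; weight 0 can encode a missing edge). The enumeration is n!, which is exactly what the algebraic Õ(Wn^ω) algorithm avoids.

```python
import itertools

# Maximum weight bipartite matching by enumerating all permutations
# (left vertex i matched to right vertex p[i]). Only usable for tiny n;
# the paper's randomized algorithm reaches the same optimum via fast
# matrix multiplication.
def max_weight_matching(W):
    n = len(W)
    best = max(itertools.permutations(range(n)),
               key=lambda p: sum(W[i][p[i]] for i in range(n)))
    return sum(W[i][best[i]] for i in range(n)), best

# 2x2 weight matrix: matching (0->0, 1->1) has weight 3 + 2 = 5,
# beating the cross matching of weight 1 + 1 = 2.
weight, matching = max_weight_matching([[3, 1], [1, 2]])
```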

Book ChapterDOI
23 Sep 2009
TL;DR: A distributed 2-approximation algorithm for the minimum vertex cover problem is presented, and it runs in (Δ + 1)^2 synchronous communication rounds, where Δ is the maximum degree of the graph.
Abstract: We present a distributed 2-approximation algorithm for the minimum vertex cover problem. The algorithm is deterministic, and it runs in (Δ + 1)^2 synchronous communication rounds, where Δ is the maximum degree of the graph. For Δ = 3, we give a 2-approximation algorithm also for the weighted version of the problem.
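The factor-2 guarantee rests on the classical fact that the endpoints of any maximal matching form a vertex cover of at most twice the optimum size. A sequential sketch of that certificate follows; the paper's contribution is obtaining the same factor deterministically in a distributed model.

```python
# Endpoints of a greedily computed maximal matching form a vertex cover:
# every edge touches a matched endpoint, and any cover must take at
# least one endpoint per matched edge, giving the factor-2 bound.
def matching_cover(edges):
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:  # edge joins the matching
            cover.update((u, v))
    return cover

# Star graph: the first edge (0, 1) is matched, and {0, 1} covers all
# edges, at most twice the optimum cover {0}.
cover = matching_cover([(0, 1), (0, 2), (0, 3)])
```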

Journal ArticleDOI
Wayne Pullan1
TL;DR: This paper extends the recently introduced Phased Local Search (PLS) maximum clique algorithm to unweighted/weighted maximum independent set and minimum vertex cover problems.


Book ChapterDOI
24 Jun 2009
TL;DR: The purposes of this paper are to give an exposition of the main ideas of parameterized complexity, and to discuss some of the current research frontiers and directions.
Abstract: The purposes of this paper are two: (1) to give an exposition of the main ideas of parameterized complexity, and (2) to discuss some of the current research frontiers and directions.

Journal ArticleDOI
TL;DR: A deterministic local algorithm for finding a 3-approximate vertex cover in bounded-degree graphs, requiring no auxiliary information besides port numbering.

Journal ArticleDOI
TL;DR: The presented results have a methodological interest because, to the best of the authors' knowledge, this is the first time a new parameterized upper bound has been obtained through the design and analysis of an exact exponential algorithm.

Book ChapterDOI
02 Dec 2009
TL;DR: It is shown that ROOTED MAXIMUM LEAF OUTBRANCHING admits an edge-quadratic kernel, improving over the vertex-cubic kernel given by Fernau et al. [13], using the notion of s-t numbering.
Abstract: The ROOTED MAXIMUM LEAF OUTBRANCHING problem consists in finding a spanning directed tree rooted at some prescribed vertex of a digraph with the maximum number of leaves. Its parameterized version asks if there exists such a tree with at least k leaves. We use the notion of s-t numbering studied in [19,6,20] to exhibit combinatorial bounds on the existence of spanning directed trees with many leaves. These combinatorial bounds allow us to produce a constant factor approximation algorithm for finding directed trees with many leaves, whereas the best known approximation algorithm has a √(OPT)-factor [11]. We also show that ROOTED MAXIMUM LEAF OUTBRANCHING admits an edge-quadratic kernel, improving over the vertex-cubic kernel given by Fernau et al. [13].

Proceedings ArticleDOI
10 Aug 2009
TL;DR: The paper presents distributed and parallel δ-approximation algorithms for covering problems, where δ is the maximum number of variables on which any constraint depends (for example, δ = 2 for VERTEX COVER).
Abstract: The paper presents distributed and parallel δ-approximation algorithms for covering problems, where δ is the maximum number of variables on which any constraint depends (for example, δ = 2 for VERTEX COVER). Specific results include the following: For WEIGHTED VERTEX COVER, the first distributed 2-approximation algorithm taking O(log n) rounds and the first parallel 2-approximation algorithm in RNC; the algorithms generalize to covering mixed integer linear programs (CMIP) with two variables per constraint (δ = 2). For any covering problem with monotone constraints and submodular cost, a distributed δ-approximation algorithm taking O(log^2 |C|) rounds, where |C| is the number of constraints (special cases include CMIP, facility location, and probabilistic (two-stage) variants of these problems).
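A sequential sketch of the primal-dual scheme that underlies δ-approximations for covering problems with linear costs (names here are hypothetical, and linear costs only; the paper extends constant-factor guarantees to submodular costs in distributed and parallel models): raise the dual variable of an uncovered element until some containing set becomes tight, then buy every tight set. Since each element lies in at most δ sets, the purchased cost is at most δ times the dual lower bound.

```python
# Primal-dual f-approximation for weighted set cover, where f (the
# paper's delta) bounds how many sets any single element appears in.
def primal_dual_cover(sets, costs, elements):
    slack = dict(costs)       # remaining cost before a set becomes tight
    chosen = []
    for e in elements:
        if any(e in sets[s] for s in chosen):
            continue          # element already covered
        containing = [s for s in sets if e in sets[s]]
        raise_by = min(slack[s] for s in containing)  # raise dual of e
        for s in containing:
            slack[s] -= raise_by
            if slack[s] == 0 and s not in chosen:
                chosen.append(s)      # set went tight: buy it
    return chosen

# Vertex cover of a triangle viewed as set cover (delta = 2): the dual
# of edge (0, 1) makes vertices 0 and 1 tight, covering everything,
# which matches the optimum of 2.
sets = {0: {(0, 1), (0, 2)}, 1: {(0, 1), (1, 2)}, 2: {(1, 2), (0, 2)}}
chosen = primal_dual_cover(sets, {0: 1, 1: 1, 2: 1}, [(0, 1), (1, 2), (0, 2)])
```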

Journal ArticleDOI
TL;DR: This paper makes a first step into the rigorous analysis of such combinations for combinatorial optimization problems, the subject of which is the vertex cover problem for which several approximation algorithms have been proposed.
Abstract: Hybrid methods are very popular for solving problems from combinatorial optimization. In contrast, the theoretical understanding of the interplay of different optimization methods is rare. In this paper, we make a first step into the rigorous analysis of such combinations for combinatorial optimization problems. The subject of our analyses is the vertex cover problem for which several approximation algorithms have been proposed. We point out specific instances where solutions can (or cannot) be improved by the search process of a simple evolutionary algorithm in expected polynomial time.

Journal ArticleDOI
TL;DR: In this article, it was shown that the addressed scheduling problem is a special case of the vertex cover problem, which implies that previous results for the scheduling problem can be explained, and in some cases improved, by means of vertex cover theory.
Abstract: In this paper we study the single machine precedence constrained scheduling problem of minimizing the sum of weighted completion time. Specifically, we settle an open problem first raised by Chudak and Hochbaum and whose answer was subsequently conjectured by Correa and Schulz. As shown by Correa and Schulz, the proof of this conjecture implies that the addressed scheduling problem is a special case of the vertex cover problem. This means that previous results for the scheduling problem can be explained, and in some cases improved, by means of vertex cover theory. For example, the conjecture implies the existence of a polynomial time algorithm for the special case of two-dimensional partial orders. This considerably extends Lawler’s result from 1978 for series-parallel orders.

Book ChapterDOI
21 Aug 2009
TL;DR: This work shows that if the set of assignments accepted by P contains the support of a balanced pairwise independent distribution over the domain of the inputs, then such a problem on n variables cannot be approximated better than the trivial (random) approximation, even using Ω(n) levels of the Sherali-Adams LP hierarchy.
Abstract: This work considers the problem of approximating fixed predicate constraint satisfaction problems (MAX k-CSP(P)). We show that if the set of assignments accepted by P contains the support of a balanced pairwise independent distribution over the domain of the inputs, then such a problem on n variables cannot be approximated better than the trivial (random) approximation, even using Ω(n) levels of the Sherali-Adams LP hierarchy. It was recently shown [3] that under the Unique Games Conjecture, CSPs with predicates satisfying this condition cannot be approximated better than the trivial approximation. Our results can be viewed as an unconditional analogue of this result in the restricted computational model defined by the Sherali-Adams hierarchy. We also introduce a new generalization of techniques to define consistent "local distributions" over partial assignments to variables in the problem, which is often the crux of proving lower bounds for such hierarchies.

Proceedings ArticleDOI
06 Jul 2009
TL;DR: The treewidth parameter of graphs, which roughly measures the degree of tree-likeness of a given graph and has previously been used to tackle many classical NP-hard problems, is considered; it is shown to determine the complexity of Target Set Selection to a large extent and should be taken into consideration when tackling this problem in any scenario.
Abstract: The Target Set Selection problem proposed by Kempe, Kleinberg, and Tardos gives a nice clean combinatorial formulation for many problems arising in economics, sociology, and medicine. Its input is a graph with vertex thresholds, the social network, and the goal is to find a subset of vertices, the target set, that "activates" a prespecified number of vertices in the graph. Activation of a vertex is defined via a so-called activation process as follows: Initially, all vertices in the target set become active. Then at each step i of the process, each vertex gets activated if the number of its active neighbors at iteration i − 1 exceeds its threshold. The activation process is "monotone" in the sense that once a vertex is activated, it remains active for the entire process. Unsurprisingly perhaps, Target Set Selection is NP-complete. More surprising is the fact that both its maximization and minimization variants turn out to be extremely hard to approximate, even for very restrictive special cases. The only case for which the problem is known to have some sort of acceptable worst-case solution is the case where the given social network is a tree, where the problem becomes polynomial-time solvable. In this paper, we attempt to extend this sparse landscape of tractable instances by considering the treewidth parameter of graphs. This parameter roughly measures the degree of tree-likeness of a given graph, e.g., the treewidth of a tree is 1, and has previously been used to tackle many classical NP-hard problems in the literature. Our contribution is twofold: First, we present an algorithm for Target Set Selection running in n^O(w) time, for graphs with n vertices and treewidth bounded by w. The algorithm utilizes various combinatorial properties of the problem, drifting somewhat from standard dynamic-programming algorithms for small-treewidth graphs. Also, it can be adapted to much more general settings, including the case of directed graphs, weighted edges, and weighted vertices.
On the other hand, we also show that it is highly unlikely to find an n^O(√w) time algorithm for Target Set Selection, as this would imply a subexponential algorithm for all problems in SNP. Together with our upper bound, this shows that the treewidth parameter determines the complexity of Target Set Selection to a large extent, and should be taken into consideration when tackling this problem in any scenario.
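The activation process is straightforward to simulate directly. A sketch, using the common "at least threshold" activation convention (conventions in the literature differ on "reaches" versus "exceeds"):

```python
# Monotone activation process from Target Set Selection: start from the
# target set and repeatedly activate any vertex whose number of active
# neighbours reaches its threshold (once active, always active).
def activate(adj, threshold, target):
    active = set(target)
    changed = True
    while changed:
        changed = False
        for v in adj:
            if v not in active and \
               sum(1 for u in adj[v] if u in active) >= threshold[v]:
                active.add(v)
                changed = True
    return active

# Path 0-1-2-3 with unit thresholds: targeting vertex 0 activates the
# whole path, one hop at a time.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
result = activate(adj, {v: 1 for v in adj}, {0})
```

The hard part of Target Set Selection is of course choosing the target set, not simulating the process; that is where the treewidth-based algorithm comes in.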

Proceedings ArticleDOI
01 Jan 2009
TL;DR: It was shown that on planar graphs both problems can be solved in time 2^O(k) n^O(1), whereas on general graphs they are hard for parameterized complexity classes when parameterized by k.
Abstract: Partial Cover problems are optimization versions of fundamental and well studied problems like Vertex Cover and Dominating Set. Here one is interested in covering (or dominating) the maximum number of edges (or vertices) using a given number (k) of vertices, rather than covering all edges (or vertices). In general graphs, these problems are hard for parameterized complexity classes when parameterized by k. It was recently shown by Amini et al. [FSTTCS 08] that Partial Vertex Cover and Partial Dominating Set are fixed-parameter tractable on large classes of sparse graphs, namely H-minor-free graphs, which include planar graphs and graphs of bounded genus. In particular, it was shown that on planar graphs both problems can be solved in time 2^O(k) n^O(1).
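As a baseline for what these FPT results improve on, Partial Vertex Cover can always be solved by exhaustive search over k-subsets (a sketch with hypothetical names); the point of the sparse-graph results is replacing this n^k enumeration by a 2^O(k) n^O(1) algorithm.

```python
import itertools

# Partial Vertex Cover by exhaustive search: among all k-subsets of
# vertices, pick one covering the maximum number of edges. This n^k
# baseline is what the FPT results on sparse graph classes improve on.
def best_partial_cover(n, edges, k):
    def covered(subset):
        return sum(1 for u, v in edges if u in subset or v in subset)
    best = max(itertools.combinations(range(n), k), key=covered)
    return set(best), covered(best)

# Triangle plus a pendant edge: the single vertex 0 covers 3 of the
# 4 edges, which is the best any 1-subset can do.
subset, num_covered = best_partial_cover(4, [(0, 1), (0, 2), (0, 3), (1, 2)], 1)
```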

Proceedings Article
11 Jul 2009
TL;DR: It is shown that for several classes of sparse graphs, including planar graphs, graphs of bounded vertex degree and graphs excluding some fixed graph as a minor, an improved solution in the k-exchange neighborhood for many problems can be found much more efficiently.
Abstract: Many local search algorithms are based on searching in the k-exchange neighborhood. This is the set of solutions that can be obtained from the current solution by exchanging at most k elements. As a rule of thumb, the larger k is, the better are the chances of finding an improved solution. However, for inputs of size n, a naive brute-force search of the k-exchange neighborhood requires n^O(k) time, which is not practical even for very small values of k. We show that for several classes of sparse graphs, like planar graphs, graphs of bounded vertex degree and graphs excluding some fixed graph as a minor, an improved solution in the k-exchange neighborhood for many problems can be found much more efficiently. Our algorithms run in time O(τ(k) · n^c), where τ is a function depending on k only and c is a constant independent of k. We demonstrate the applicability of this approach on different problems like r-CENTER, VERTEX COVER, ODD CYCLE TRANSVERSAL, MAX-CUT, and MIN-BISECTION. In particular, on planar graphs, all our algorithms searching for a k-local improvement run in time O(2^O(k) · n^2), which is polynomial for k = O(log n). We also complement the algorithms with complexity results indicating that brute-force search is unavoidable in more general classes of sparse graphs.
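To make the k-exchange neighborhood concrete, here is a naive improving step for vertex cover, exactly the n^O(k) brute force the paper speeds up on sparse graph classes (helper names are ours).

```python
import itertools

# One improving step of k-exchange local search for vertex cover: look
# for a strictly smaller cover differing from the current one in at most
# k vertices. Naive enumeration costs n^O(k); the paper shows how to
# search this neighbourhood much faster on planar and other sparse graphs.
def k_exchange_step(vertices, edges, cover, k):
    def is_cover(c):
        return all(u in c or v in c for u, v in edges)

    outside = [v for v in vertices if v not in cover]
    for out_size in range(1, k + 1):
        # in_size < out_size forces strict improvement; total swaps <= k.
        for in_size in range(min(out_size, k - out_size + 1)):
            for removed in itertools.combinations(sorted(cover), out_size):
                for added in itertools.combinations(outside, in_size):
                    candidate = (set(cover) - set(removed)) | set(added)
                    if is_cover(candidate):
                        return candidate      # strictly smaller cover
    return set(cover)                         # locally optimal

# Path 0-1-2: the cover {0, 2} improves to {1} via a 3-exchange
# (remove two vertices, add one).
improved = k_exchange_step([0, 1, 2], [(0, 1), (1, 2)], {0, 2}, k=3)
```

Note that with k = 2 the same swap would be out of reach, illustrating why larger exchange neighborhoods find improvements that smaller ones miss.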