
Showing papers on "Vertex cover" published in 2012


Proceedings ArticleDOI
20 Oct 2012
TL;DR: In this paper, a constant factor approximation algorithm for the optimization version of Planar F-deletion was proposed, along with a parameterized algorithm running in time 2^{O(k)}·n.
Abstract: Let F be a finite set of graphs. In the F-deletion problem, we are given an n-vertex graph G and an integer k as input, and asked whether at most k vertices can be deleted from G such that the resulting graph does not contain a graph from F as a minor. F-deletion is a generic problem and by selecting different sets of forbidden minors F, one can obtain various fundamental problems such as Vertex Cover, Feedback Vertex Set or Treewidth-k Deletion. In this paper we obtain a number of generic algorithmic results about F-deletion, when F contains at least one planar graph. The highlights of our work are: 1. A constant factor approximation algorithm for the optimization version of Planar F-deletion, 2. A linear time and single exponential parameterized algorithm, that is, an algorithm running in time 2^{O(k)}·n, for the parameterized version of Planar F-deletion where all graphs in F are connected, 3. A polynomial kernel for parameterized F-deletion. These algorithms unify, generalize, and improve a multitude of results in the literature. Our main results have several direct applications, but also the methods we develop on the way have applicability beyond the scope of this paper. Our results -- constant factor approximation, polynomial kernelization and FPT algorithms -- are strung together by a common theme of polynomial time preprocessing.
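For concreteness, two of the specializations named above can be written out via their standard obstruction sets: a graph has a K_2 minor iff it has an edge, and a K_3 minor iff it has a cycle. In the notation of the abstract:

```latex
\mathcal{F} = \{K_2\} \;\Longrightarrow\; \textsc{Vertex Cover}, \qquad
\mathcal{F} = \{K_3\} \;\Longrightarrow\; \textsc{Feedback Vertex Set}
```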

211 citations


Journal ArticleDOI
TL;DR: It is shown that any FO property can be decided in both of these classes with a singly exponential parameter dependence and that MSO logic can be decided on graphs of bounded vertex cover with a doubly exponential parameter dependence; it is also proved that these upper bounds cannot be improved significantly, under widely believed complexity assumptions.
Abstract: Possibly the most famous algorithmic meta-theorem is Courcelle’s theorem, which states that all MSO-expressible graph properties are decidable in linear time for graphs of bounded treewidth. Unfortunately, the running time’s dependence on the formula describing the problem is in general a tower of exponentials of unbounded height, and there exist lower bounds proving that this cannot be improved even if we restrict ourselves to deciding FO logic on trees. We investigate whether this parameter dependence can be improved by focusing on two proper subclasses of the class of bounded treewidth graphs: graphs of bounded vertex cover and graphs of bounded max-leaf number. We prove stronger algorithmic meta-theorems for these more restricted classes of graphs. More specifically, we show it is possible to decide any FO property in both of these classes with a singly exponential parameter dependence and that it is possible to decide MSO logic on graphs of bounded vertex cover with a doubly exponential parameter dependence. We also prove lower bound results which show that our upper bounds cannot be improved significantly, under widely believed complexity assumptions. Our work addresses an open problem posed by Michael Fellows.

205 citations


Posted Content
TL;DR: In this paper, the authors introduce the cross-composition framework for proving kernelization lower bounds, which generalizes and strengthens the recent techniques of using composition algorithms and of transferring the lower bounds via polynomial parameter transformations.
Abstract: We introduce the cross-composition framework for proving kernelization lower bounds. A classical problem L AND/OR-cross-composes into a parameterized problem Q if it is possible to efficiently construct an instance of Q with polynomially bounded parameter value that expresses the logical AND or OR of a sequence of instances of L. Building on work by Bodlaender et al. (ICALP 2008) and using a result by Fortnow and Santhanam (STOC 2008) with a refinement by Dell and van Melkebeek (STOC 2010), we show that if an NP-hard problem OR-cross-composes into a parameterized problem Q then Q does not admit a polynomial kernel unless NP ⊆ coNP/poly and the polynomial hierarchy collapses. Similarly, an AND-cross-composition for Q rules out polynomial kernels for Q under Bodlaender et al.'s AND-distillation conjecture. Our technique generalizes and strengthens the recent techniques of using composition algorithms and of transferring the lower bounds via polynomial parameter transformations. We show its applicability by proving kernelization lower bounds for a number of important graph problems with structural (non-standard) parameterizations, e.g., Clique, Chromatic Number, Weighted Feedback Vertex Set, and Weighted Odd Cycle Transversal do not admit polynomial kernels with respect to the vertex cover number of the input graphs unless the polynomial hierarchy collapses, contrasting the fact that these problems are trivially fixed-parameter tractable for this parameter. After learning of our results, several teams of authors have successfully applied the cross-composition framework to different parameterized problems. For completeness, our presentation of the framework includes several extensions based on this follow-up work. For example, we show how a relaxed version of OR-cross-compositions may be used to give lower bounds on the degree of the polynomial in the kernel size.

196 citations


Posted Content
TL;DR: In this article, the authors investigated the parameterized complexity of vertex cover parameterized by the difference between the size of the optimal solution and the value of the LP relaxation of the problem.
Abstract: We investigate the parameterized complexity of Vertex Cover parameterized by the difference between the size of the optimal solution and the value of the linear programming (LP) relaxation of the problem. By carefully analyzing the change in the LP value in the branching steps, we argue that combining previously known preprocessing rules with the most straightforward branching algorithm yields an $O^*((2.618)^k)$ algorithm for the problem. Here $k$ is the excess of the vertex cover size over the LP optimum, and we write $O^*(f(k))$ for a time complexity of the form $O(f(k)n^{O(1)})$, where $f(k)$ grows exponentially with $k$. We proceed to show that a more sophisticated branching algorithm achieves a runtime of $O^*(2.3146^k)$. Following this, using known and new reductions, we give $O^*(2.3146^k)$ algorithms for the parameterized versions of Above Guarantee Vertex Cover, Odd Cycle Transversal, Split Vertex Deletion and Almost 2-SAT, and an $O^*(1.5214^k)$ algorithm for König Vertex Deletion, Vertex Cover parameterized by OCT and Vertex Cover parameterized by KVD. These algorithms significantly improve the best known bounds for these problems. The most notable improvement is the new bound for Odd Cycle Transversal - this is the first algorithm which beats the dependence on $k$ of the seminal $O^*(3^k)$ algorithm of Reed, Smith and Vetta. Finally, using our algorithm, we obtain a kernel for the standard parameterization of Vertex Cover with at most $2k - c\log k$ vertices. Our kernel is simpler than previously known kernels achieving the same size bound.
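The measure used here is the gap between the cover size and the LP optimum, and that LP optimum is cheap to compute: by half-integrality (Nemhauser–Trotter) and König's theorem, LP(G) equals half the maximum matching size of the bipartite double cover of G. A minimal sketch of that computation, assuming the networkx library (function names are mine):

```python
import networkx as nx

def vertex_cover_lp_value(G):
    """LP optimum of Vertex Cover via the bipartite double cover.

    The VC relaxation is half-integral; its optimum equals half the
    size of a minimum vertex cover of the double cover H, which by
    Konig's theorem equals half the maximum matching of H.
    """
    H = nx.Graph()
    left = {v: (v, 0) for v in G}
    right = {v: (v, 1) for v in G}
    H.add_nodes_from(left.values())
    H.add_nodes_from(right.values())
    for u, v in G.edges():
        H.add_edge(left[u], right[v])
        H.add_edge(left[v], right[u])
    matching = nx.bipartite.hopcroft_karp_matching(H, top_nodes=set(left.values()))
    return (len(matching) // 2) / 2.0  # the dict stores both directions

# Example: an odd cycle C5 has LP optimum 5/2, below the integral optimum 3.
print(vertex_cover_lp_value(nx.cycle_graph(5)))  # 2.5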

136 citations


Book ChapterDOI
01 Jan 2012
TL;DR: The effectiveness of linear and semidefinite relaxations in approximating the optimum for combinatorial optimization problems is discussed, and some positive applications of these hierarchies are surveyed, where their use yields improved approximation algorithms.
Abstract: We discuss the effectiveness of linear and semidefinite relaxations in approximating the optimum for combinatorial optimization problems. Various hierarchies of these relaxations, such as the ones defined by Lovász and Schrijver, Sherali and Adams, and Lasserre, generate increasingly strong linear and semidefinite programming relaxations starting from a basic one. We survey some positive applications of these hierarchies, where their use yields improved approximation algorithms. We also discuss known lower bounds on the integrality gaps of relaxations arising from these hierarchies, demonstrating limits on the applicability of such hierarchies for certain optimization problems.

114 citations


Book ChapterDOI
01 Jan 2012
TL;DR: An overview of some of the early work on kernelization algorithms, which are pre-processing algorithms that simplify the instances given as input in polynomial time, together with a survey of newer techniques that have emerged in their design and analysis.
Abstract: Data reduction techniques are widely applied to deal with computationally hard problems in real world applications. It has been a long-standing challenge to formally express the efficiency and accuracy of these "pre-processing" procedures. The framework of parameterized complexity turns out to be particularly suitable for a mathematical analysis of pre-processing heuristics. A kernelization algorithm is a pre-processing algorithm which simplifies the instances given as input in polynomial time, and the extent of simplification desired is quantified with the help of the additional parameter. We give an overview of some of the early work in the area and also survey newer techniques that have emerged in the design and analysis of kernelization algorithms. We also outline the framework of Bodlaender et al. [9] and Fortnow and Santhanam [38] for showing kernelization lower bounds under reasonable assumptions from classical complexity theory, and highlight some of the recent results that strengthen and generalize this framework.
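The canonical introductory example of such a pre-processing routine is the Buss kernel for Vertex Cover: a vertex of degree greater than k must be in every cover of size at most k, and once no such vertex remains, a yes-instance can have at most k^2 edges. A minimal sketch, with function and variable names of my own choosing:

```python
def buss_kernel(edges, k):
    """Buss kernelization for Vertex Cover.

    Returns (kernel_edges, k', forced) where the forced vertices must
    be in any cover of size <= k, or None for a no-instance.  The
    kernel has at most k'^2 edges.
    """
    edges = {frozenset(e) for e in edges}
    forced = set()
    changed = True
    while changed and k >= 0:
        changed = False
        deg = {}
        for e in edges:
            for v in e:
                deg[v] = deg.get(v, 0) + 1
        for v, d in deg.items():
            if d > k:  # v must be in every cover of size <= k
                forced.add(v)
                edges = {e for e in edges if v not in e}
                k -= 1
                changed = True
                break
    if k < 0 or len(edges) > k * k:
        return None  # a size-k cover covers at most k^2 edges of max degree k
    return edges, k, forced
```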

106 citations


Proceedings ArticleDOI
17 Jan 2012
TL;DR: This work shows lower bounds for the kernelization of d-Set Matching and other packing problems, and applies a general reduction scheme to the vertex cover problem, replacing the number-theoretical construction of Dell and Van Melkebeek with shorter elementary arguments.
Abstract: Kernelization algorithms are polynomial-time reductions from a problem to itself that guarantee their output to have a size not exceeding some bound. For example, d-Set Matching for integers d ≥ 3 is the problem of finding a matching of size at least k in a given d-uniform hypergraph and has kernels with O(k^d) edges. Recently, Bodlaender et al. [ICALP 2008], Fortnow and Santhanam [STOC 2008], Dell and Van Melkebeek [STOC 2010] developed a framework for proving lower bounds on the kernel size for certain problems, under the complexity-theoretic hypothesis that coNP is not contained in NP/poly. Under the same hypothesis, we show lower bounds for the kernelization of d-Set Matching and other packing problems. Our bounds are tight for d-Set Matching: It does not have kernels with O(k^{d−ε}) edges for any ε > 0 unless the hypothesis fails. By reduction, this transfers to a bound of O(k^{d−1−ε}) for the problem of finding k vertex-disjoint cliques of size d in standard graphs. It is natural to ask for tight bounds on the kernel sizes of such graph packing problems. We make first progress in that direction by showing non-trivial kernels with O(k^{2.5}) edges for the problem of finding k vertex-disjoint paths of three edges each. This does not quite match the best lower bound of O(k^{2−ε}) that we can prove. Most of our lower bound proofs follow a general scheme that we discover: To exclude kernels of size O(k^{d−ε}) for a problem in d-uniform hypergraphs, one should reduce from a carefully chosen d-partite problem that is still NP-hard. As an illustration, we apply this scheme to the vertex cover problem, which allows us to replace the number-theoretical construction by Dell and Van Melkebeek [STOC 2010] with shorter elementary arguments.

104 citations


Proceedings ArticleDOI
17 Jan 2012
TL;DR: An algorithm is given that outputs a (2, ε)-estimate of the size of a minimum vertex cover whose query complexity and running time are O(n) · poly(1/ε), and the result is nearly optimal.
Abstract: We give a nearly optimal sublinear-time algorithm for approximating the size of a minimum vertex cover in a graph G. The algorithm may query the degree deg(v) of any vertex v of its choice, and for each 1 ≤ i ≤ deg(v), it may ask for the ith neighbor of v. Letting VCopt(G) denote the minimum size of vertex cover in G, the algorithm outputs, with high constant success probability, an estimate α such that VCopt(G) ≤ α ≤ 2·VCopt(G) + εn, where ε is a given additive approximation parameter. We refer to such an estimate as a (2, ε)-estimate. The query complexity and running time of the algorithm are Õ(d̄) · poly(1/ε), where d̄ denotes the average vertex degree in the graph. The best previously known sublinear algorithm, of Yoshida et al. (STOC 2009), has query complexity and running time O(d^4/ε^2), where d is the maximum degree in the graph. Given the lower bound of Ω(d̄) (for constant ε) for obtaining such an estimate (with any constant multiplicative factor) due to Parnas and Ron (TCS 2007), our result is nearly optimal. In the case that the graph is dense, that is, the number of edges is Θ(n^2), we consider another model, in which the algorithm may ask, for any pair of vertices u and v, whether there is an edge between u and v. We show how to adapt the algorithm that uses neighbor queries to this model and obtain an algorithm that outputs a (2, ε)-estimate of the size of a minimum vertex cover whose query complexity and running time are O(n) · poly(1/ε).
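The multiplicative factor 2 in such estimates typically rests on the classical fact that the endpoints of any maximal matching form a vertex cover of size at most twice the optimum; sublinear algorithms in this line estimate the size of such a matching by local simulation rather than computing it. The global version of that fact, as a sketch:

```python
def matching_based_vertex_cover(edges):
    """Endpoints of a greedily built maximal matching.

    The result is a vertex cover of size at most 2 * OPT: every edge
    touches the matching (maximality), and any cover must contain at
    least one endpoint of each matching edge (disjointness).
    """
    matched = set()
    for u, v in edges:
        if u not in matched and v not in matched:
            matched.update((u, v))
    return matched
```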

97 citations


Book ChapterDOI
27 Aug 2012
TL;DR: This work discusses several aspects of efficiency and focuses on the search for "stronger parameterizations" in developing fixed-parameter algorithms, particularly in the case of kernelization algorithms.
Abstract: Once having classified an NP-hard problem fixed-parameter tractable with respect to a certain parameter, the race for the most efficient fixed-parameter algorithm starts. Herein, the attention usually focuses on improving the running time factor exponential in the considered parameter, and, in case of kernelization algorithms, to improve the bound on the kernel size. Both from a practical as well as a theoretical point of view, however, there are further aspects of efficiency that deserve attention. We discuss several of these aspects and particularly focus on the search for "stronger parameterizations" in developing fixed-parameter algorithms.

87 citations


Journal ArticleDOI
TL;DR: It is proved that the Metric Dimension problem is not approximable within (1−ε)·ln n for any ε > 0 unless NP ⊆ DTIME(n^{log log n}), and an approximation algorithm is given which matches the lower bound.
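A matching ln n-type upper bound for Metric Dimension is of the greedy set-cover kind: a vertex set is resolving iff for every vertex pair some chosen landmark has distinct distances to the two, so one can repeatedly pick the vertex distinguishing the most still-unresolved pairs. A hedged sketch of that standard greedy (names are mine; it assumes an unweighted, connected graph given as an adjacency dict):

```python
from collections import deque
from itertools import combinations

def bfs_distances(adj, src):
    dist = {src: 0}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                queue.append(w)
    return dist

def greedy_resolving_set(adj):
    """ln(n)-style greedy for Metric Dimension.

    A landmark v 'resolves' a pair (a, b) if d(v, a) != d(v, b);
    greedily add the landmark resolving the most unresolved pairs.
    """
    dist = {v: bfs_distances(adj, v) for v in adj}
    unresolved = set(combinations(sorted(adj), 2))
    landmarks = set()
    while unresolved:
        best = max(adj, key=lambda v: sum(
            dist[v][a] != dist[v][b] for a, b in unresolved))
        landmarks.add(best)
        unresolved = {(a, b) for a, b in unresolved
                      if dist[best][a] == dist[best][b]}
    return landmarks
```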

86 citations


Proceedings ArticleDOI
26 Jun 2012
TL;DR: The strong exponential time hypothesis (SETH) by Impagliazzo and Paturi [JCSS 2001] asserts that, for every ε > 0, there is a clause width w such that CNF-Sat restricted to formulas of width w cannot be solved in time O(2^{(1−ε)n}); in other words, that 2 is the optimal growth rate.
Abstract: The field of exact exponential time algorithms for NP-hard problems has thrived over the last decade. While exhaustive search remains asymptotically the fastest known algorithm for some basic problems, difficult and non-trivial exponential time algorithms have been found for a myriad of problems, including Graph Coloring, Hamiltonian Path, Dominating Set and 3-CNF-Sat. In some instances, improving these algorithms further seems to be out of reach. The CNF-Sat problem is the canonical example of a problem for which the trivial exhaustive search algorithm runs in time O(2^n), where n is the number of variables in the input formula. While there exist non-trivial algorithms for CNF-Sat that run in time o(2^n), no algorithm was able to improve the growth rate 2 to a smaller constant, and hence it is natural to conjecture that 2 is the optimal growth rate. The strong exponential time hypothesis (SETH) by Impagliazzo and Paturi [JCSS 2001] goes a little bit further and asserts that, for every ε > 0, there is a (large) clause width w such that CNF-Sat restricted to formulas of width w cannot be solved in time O(2^{(1−ε)n}).

Journal ArticleDOI
TL;DR: In this paper, the authors show that for several classes of sparse graphs, including planar graphs, graphs of bounded vertex degree and graphs excluding some fixed graph as a minor, an improved solution in the k-exchange neighborhood for many problems can be found much more efficiently.

Proceedings ArticleDOI
20 May 2012
TL;DR: This work proposes a novel disk-based, tree-structured index built on the concept of vertex cover, together with an I/O-efficient algorithm to construct the index when the input graph is too large to fit in main memory.
Abstract: We propose a novel disk-based index for processing single-source shortest path or distance queries. The index is useful in a wide range of important applications (e.g., network analysis, routing planning, etc.). Our index is a tree-structured index constructed based on the concept of vertex cover. We propose an I/O-efficient algorithm to construct the index when the input graph is too large to fit in main memory. We give detailed analysis of I/O and CPU complexity for both index construction and query processing, and verify the efficiency of our index for query processing in massive real-world graphs.

Book ChapterDOI
01 Jan 2012
TL;DR: In this article, the authors present a survey of parameterized complexity results for problems that arise in the context of backdoor sets, such as the problem of finding a backdoor set of size at most k, parameterized by k.
Abstract: A backdoor set is a set of variables of a propositional formula such that fixing the truth values of the variables in the backdoor set moves the formula into some polynomial-time decidable class. If we know a small backdoor set we can reduce the question of whether the given formula is satisfiable to the same question for one or several easy formulas that belong to the tractable class under consideration. In this survey we review parameterized complexity results for problems that arise in the context of backdoor sets, such as the problem of finding a backdoor set of size at most k, parameterized by k. We also discuss recent results on backdoor sets for problems that are beyond NP.
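To make the definition concrete, here is a hedged sketch of the basic use of a strong backdoor set, with Horn as the tractable base class: try all assignments to the backdoor variables and solve each residual formula by positive-unit propagation, which decides Horn-SAT. All names are my own; clauses are sets of signed integers in DIMACS style:

```python
from itertools import product

def horn_sat(clauses):
    """Horn-SAT by positive-unit propagation (clauses: sets of ints)."""
    clauses = [set(c) for c in clauses]
    while True:
        if any(not c for c in clauses):
            return False            # empty clause: unsatisfiable
        unit = next((next(iter(c)) for c in clauses
                     if len(c) == 1 and next(iter(c)) > 0), None)
        if unit is None:
            return True             # setting everything else false works
        clauses = [c - {-unit} for c in clauses if unit not in c]

def backdoor_sat(clauses, backdoor):
    """Decide satisfiability given a strong Horn-backdoor set of variables."""
    clauses = [set(c) for c in clauses]
    for signs in product((1, -1), repeat=len(backdoor)):
        assignment = {v * s for v, s in zip(backdoor, signs)}
        reduced = [c - {-lit for lit in assignment}
                   for c in clauses if not (c & assignment)]
        # if `backdoor` really is a strong Horn backdoor, every residual
        # formula here is Horn, so horn_sat decides it correctly
        if horn_sat(reduced):
            return True
    return False
```

The 2^|B| loop is exactly where the parameter k = |B| enters the running time of the fixed-parameter algorithms the survey discusses.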

Journal ArticleDOI
TL;DR: Empirical testing reveals crucial but latent features of high-throughput biological data that distinguish real data from random data intended to reproduce salient topological features; novel decomposition strategies are tuned to these features and coupled with the best FPT MCE implementations.
Abstract: The maximum clique enumeration (MCE) problem asks that we identify all maximum cliques in a finite, simple graph. MCE is closely related to two other well-known and widely-studied problems: the maximum clique optimization problem, which asks us to determine the size of a largest clique, and the maximal clique enumeration problem, which asks that we compile a listing of all maximal cliques. Naturally, these three problems are NP-hard, given that they subsume the classic version of the NP-complete clique decision problem. MCE can be solved in principle with standard enumeration methods due to Bron, Kerbosch, Kose and others. Unfortunately, these techniques are ill-suited to graphs encountered in our applications. We must solve MCE on instances deeply seeded in data mining and computational biology, where high-throughput data capture often creates graphs of extreme size and density. MCE can also be solved in principle using more modern algorithms based in part on vertex cover and the theory of fixed-parameter tractability (FPT). While FPT is an improvement, these algorithms too can fail to scale sufficiently well as the sizes and densities of our datasets grow. An extensive testbed of benchmark graphs is created using publicly available transcriptomic datasets from the Gene Expression Omnibus (GEO). Empirical testing reveals crucial but latent features of such high-throughput biological data. In turn, it is shown that these features distinguish real data from random data intended to reproduce salient topological features. In particular, with real data there tends to be an unusually high degree of maximum clique overlap. Armed with this knowledge, novel decomposition strategies are tuned to the data and coupled with the best FPT MCE implementations. Several algorithmic improvements to MCE are made which progressively decrease the run time on graphs in the testbed. Frequently the final runtime improvement is several orders of magnitude. As a result, instances which were once prohibitively time-consuming to solve are brought into the domain of realistic feasibility.
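The classical enumeration baseline the abstract refers to is Bron–Kerbosch; a minimal (non-pivoting) sketch, from which the maximum cliques are simply the largest maximal ones (the graph is assumed given as {vertex: set of neighbors}):

```python
def bron_kerbosch(adj, R=frozenset(), P=None, X=frozenset()):
    """Yield all maximal cliques of the graph adj."""
    if P is None:
        P = frozenset(adj)
    if not P and not X:
        yield R                     # R cannot be extended: maximal clique
        return
    for v in list(P):
        yield from bron_kerbosch(adj, R | {v}, P & adj[v], X & adj[v])
        P = P - {v}                 # v handled: exclude from later branches
        X = X | {v}

def maximum_cliques(adj):
    cliques = list(bron_kerbosch(adj))
    best = max(map(len, cliques))
    return [c for c in cliques if len(c) == best]
```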

Journal ArticleDOI
01 Jun 2012
TL;DR: This paper proposes a simple and efficient population-based iterated greedy algorithm for tackling the minimum weight vertex cover problem and shows that this algorithm outperforms current state-of-the-art approaches.
Abstract: Given an undirected, vertex-weighted graph, the goal of the minimum weight vertex cover problem is to find a subset of the vertices of the graph such that the subset is a vertex cover and the sum of the weights of its vertices is minimal. This problem is known to be NP-hard and no efficient algorithm is known to solve it to optimality. Therefore, most existing techniques are based on heuristics for providing approximate solutions in a reasonable computation time. Population-based search approaches have been shown to be effective for solving a multitude of combinatorial optimization problems. Their advantage can be identified as their ability to find areas of the space containing high quality solutions. This paper proposes a simple and efficient population-based iterated greedy algorithm for tackling the minimum weight vertex cover problem. At each iteration, a population of solutions is established and refined using a fast randomized iterated greedy heuristic based on successive phases of destruction and reconstruction. An extensive experimental evaluation on a commonly used set of benchmark instances shows that our algorithm outperforms current state-of-the-art approaches.
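A hedged sketch of the destruction/reconstruction loop at the core of such an iterated greedy heuristic (the parameter values and names are my own illustration of the scheme, not the authors' exact procedure, and the population layer is omitted):

```python
import random

def greedy_repair(edges, weights, cover):
    """Add weight-efficient vertices until every edge is covered."""
    uncovered = [e for e in edges if not (set(e) & cover)]
    while uncovered:
        gain = {}
        for u, v in uncovered:
            gain[u] = gain.get(u, 0) + 1
            gain[v] = gain.get(v, 0) + 1
        best = min(gain, key=lambda v: weights[v] / gain[v])
        cover.add(best)
        uncovered = [e for e in uncovered if best not in e]
    return cover

def iterated_greedy_mwvc(edges, weights, iterations=1000,
                         destroy_frac=0.3, seed=0):
    rng = random.Random(seed)
    cover = greedy_repair(edges, weights, set())
    best = set(cover)
    for _ in range(iterations):
        # destruction: drop a random fraction of the incumbent cover
        partial = {v for v in cover if rng.random() > destroy_frac}
        candidate = greedy_repair(edges, weights, partial)  # reconstruction
        if sum(weights[v] for v in candidate) <= sum(weights[v] for v in cover):
            cover = candidate
            if sum(weights[v] for v in cover) < sum(weights[v] for v in best):
                best = set(cover)
    return best
```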

Journal ArticleDOI
TL;DR: This paper provides a complete description of the complexity status of the problem in subclasses of triangle-free graphs obtained by forbidding a forest with at most 6 vertices, and proves polynomial-time solvability of the problem in many classes of this type.

Proceedings ArticleDOI
01 Jan 2012
TL;DR: It is argued that even the most straightforward branching algorithm (after some preprocessing) results in an O*(2.6181^r) algorithm for the problem, where r is the excess of the vertex cover size over the LP optimum.
Abstract: We investigate the parameterized complexity of Vertex Cover parameterized above the optimum value of the linear programming (LP) relaxation of the integer linear programming formulation of the problem. By carefully analyzing the change in the LP value in the branching steps, we argue that even the most straightforward branching algorithm (after some preprocessing) results in an O*(2.6181^r) algorithm for the problem, where r is the excess of the vertex cover size over the LP optimum. We write O*(f(r)) for a time complexity of the form O(f(r)·n^{O(1)}), where f(r) grows exponentially with r.
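The constant 2.6181 is, up to rounding, the square of the golden ratio, which is what a branching analysis yields when each step lowers the measure r by at least 1/2 in one branch and by at least 1 in the other; a sketch of where such a constant can come from (my presentation, not a transcription of the paper's case analysis):

```latex
% branching vector (1/2, 1) on the measure r:
T(r) \le T\!\left(r - \tfrac12\right) + T(r - 1)
\;\Rightarrow\; x^{-1/2} + x^{-1} = 1
\;\Rightarrow\; y^2 = y + 1 \ \text{with } y = \sqrt{x}
\;\Rightarrow\; x = \varphi^2 = \tfrac{3+\sqrt5}{2} \approx 2.6180
```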

Journal ArticleDOI
TL;DR: In this article, a pure 0-1 optimization model and a metaheuristic algorithm based on the variable neighborhood search methodology are proposed for the vertex separation problem on general graphs; the algorithm is able to find high quality solutions with a moderate computing time for large-scale instances.

Proceedings ArticleDOI
17 Jan 2012
TL;DR: In this paper, several key ideas from bidimensionality theory are used to give a generic approach to designing EPTASs and subexponential time parameterized algorithms for problems on classes of graphs which are not minor-closed, but instead exhibit a geometric structure.
Abstract: Bidimensionality theory was introduced by Demaine et al. [JACM 2005] as a framework to obtain algorithmic results for hard problems on minor closed graph classes. The theory has been successfully applied to yield subexponential time parameterized algorithms, EPTASs and linear kernels for many problems on families of graphs excluding a fixed graph H as a minor. In this paper we use several of the key ideas from Bidimensionality to give a new generic approach to design EPTASs and subexponential time parameterized algorithms for problems on classes of graphs which are not minor closed, but instead exhibit a geometric structure. In particular we present EPTASs and subexponential time parameterized algorithms for Feedback Vertex Set, Vertex Cover, Connected Vertex Cover, on map graphs and unit disk graphs, PTASs for Diamond Hitting Set on map graphs and unit disk graphs, and a PTAS and a subexponential time algorithm for Cycle Packing on unit disk graphs. To the best of our knowledge, these results were previously unknown, with the exception of the EPTAS and a subexponential time parameterized algorithm on unit disk graphs for Vertex Cover, which were obtained by Marx [ESA 2005] and Alber and Fiala [J. Algorithms 2004], respectively. Our results are based on the recent decomposition theorems proved by Fomin et al. in [SODA 2011] and novel grid-excluding theorems in unit disk and map graphs without large cliques. Our algorithms work directly on the input graph and do not require the geometric representations of the input graph. We also show that our approach cannot be extended in its full generality to more general classes of geometric graphs, such as intersection graphs of unit balls in R^d, d ≥ 3. Specifically, we prove that Feedback Vertex Set on unit-ball graphs in R^3 neither admits PTASs unless P=NP, nor subexponential time algorithms unless the Exponential Time Hypothesis fails. Additionally, we show that the decomposition theorems which our approach is based on fail for disk graphs and that therefore any extension of our results to disk graphs would require new algorithmic ideas. On the other hand, we prove that our EPTASs and subexponential time algorithms for Vertex Cover and Connected Vertex Cover carry over both to disk graphs and to unit-ball graphs in R^d for every fixed d.

Journal ArticleDOI
TL;DR: An algorithm with constant approximation factor 18 is provided to solve the discrete unit disk cover problem, a geometric version of the general set cover problem which is NP-hard.
Abstract: Given a set P of n points and a set D of m unit disks on a 2-dimensional plane, the discrete unit disk cover (DUDC) problem is (i) to check whether each point in P is covered by at least one disk in D or not and (ii) if so, then find a minimum cardinality subset D* ⊆ D such that the unit disks in D* cover all the points in P. The discrete unit disk cover problem is a geometric version of the general set cover problem, which is NP-hard. The general set cover problem is not approximable within c·log n for some constant c, but the DUDC problem was shown to admit a constant factor approximation. In this paper, we provide a polynomial-time algorithm with constant approximation factor 18. The previous best known tractable solution for the same problem was a 22-factor approximation algorithm.

Journal ArticleDOI
TL;DR: It is shown that Bounded-Degree Vertex Deletion becomes fixed-parameter tractable when parameterized by the combined parameter treewidth and number of vertices to delete, and when parameterized by the feedback edge set number.

Book ChapterDOI
27 Aug 2012
TL;DR: This work studies the fixed parameter tractability of basic graph theoretic problems related to coloring and Hamiltonicity parameterized by cluster vertex deletion number and pushes the borderline between tractability and intractability towards the clique-width parameter.
Abstract: The cluster vertex deletion number of a graph is the minimum number of its vertices whose deletion results in a disjoint union of complete graphs. This generalizes the vertex cover number, provides an upper bound to the clique-width and is related to the previously studied notion of the twin cover of the graph under consideration. We study the fixed parameter tractability of basic graph theoretic problems related to coloring and Hamiltonicity parameterized by cluster vertex deletion number. Our results show that most of these problems remain fixed parameter tractable as well, and thus we push the borderline between tractability and intractability towards the clique-width parameter.
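A graph is a disjoint union of complete graphs exactly when it contains no induced path on three vertices, which gives the textbook 3^k branching algorithm for computing a cluster vertex deletion set. A minimal sketch (names mine; the graph is given as {vertex: set of neighbors}):

```python
def find_induced_p3(adj):
    """Return an induced path (u, v, w) with middle vertex v, or None."""
    for v, nbrs in adj.items():
        nbrs = sorted(nbrs)
        for i, u in enumerate(nbrs):
            for w in nbrs[i + 1:]:
                if w not in adj[u]:
                    return u, v, w
    return None

def cluster_vertex_deletion(adj, k):
    """A deletion set of size <= k leaving a cluster graph, or None."""
    p3 = find_induced_p3(adj)
    if p3 is None:
        return set()          # already a disjoint union of cliques
    if k == 0:
        return None
    for x in p3:              # one of the three P3 vertices must go
        rest = {v: nbrs - {x} for v, nbrs in adj.items() if v != x}
        sol = cluster_vertex_deletion(rest, k - 1)
        if sol is not None:
            return sol | {x}
    return None
```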

Journal ArticleDOI
TL;DR: It is shown that Connected Feedback Vertex Set can be solved in time O(2^{O(k)} n^{O(1)}) on general graphs and in time O(2^{O(√k log k)} n^{O(1)}) on graphs excluding a fixed graph H as a minor.
Abstract: We study the recently introduced Connected Feedback Vertex Set (CFVS) problem from the view-point of parameterized algorithms. CFVS is the connected variant of the classical Feedback Vertex Set problem and is defined as follows: given a graph G=(V,E) and an integer k, decide whether there exists F ⊆ V, |F| ≤ k, such that G[V ∖ F] is a forest and G[F] is connected. We show that Connected Feedback Vertex Set can be solved in time O(2^{O(k)} n^{O(1)}) on general graphs and in time O(2^{O(√k log k)} n^{O(1)}) on graphs excluding a fixed graph H as a minor. Our result on general undirected graphs uses, as a subroutine, a parameterized algorithm for Group Steiner Tree, a well studied variant of Steiner Tree. We find the algorithm for Group Steiner Tree of independent interest and believe that it could be useful for obtaining parameterized algorithms for other connectivity problems.

Journal ArticleDOI
TL;DR: This work investigates generalizations of two well-known problems in the framework of parameterized complexity, the feedback set problem and the cycle packing problem, and gives the first fixed-parameter algorithms for the two problems.

01 Jan 2012
TL;DR: An algorithm is presented that maintains a maximal matching under edge insertions and deletions in O(log n) expected amortized time per update, improving on the O((n + m)^{0.7072}) update time algorithm of Ivković and Lloyd, which is sublinear only for a sparse graph.
Abstract: We present an algorithm for maintaining maximal matching in a graph under addition and deletion of edges. Our data structure is randomized and takes O(log n) expected amortized time for each edge update, where n is the number of vertices in the graph. While there is a trivial O(n) algorithm for edge update, the previous best known result for this problem was due to Ivković and Lloyd [4]. For a graph with n vertices and m edges, they give an O((n + m)^{0.7072}) update time algorithm, which is sublinear only for a sparse graph. For the related problem of maximum matching, Onak and Rubinfeld [5] designed a randomized data structure that achieves O(log^2 n) expected amortized time for each update for maintaining a c-approximate maximum matching for some large constant c. In contrast, we can maintain a factor two approximate maximum matching in O(log n) expected amortized time per update as a direct corollary of the maximal matching scheme. This in turn also implies a two approximate vertex cover maintenance scheme that takes O(log n) expected amortized time per update.
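For contrast with the O(log n) result, the "trivial O(n) algorithm for edge update" mentioned above is easy to state: on insertion, match the endpoints if both are free; on deletion of a matched edge, rescan the two freed endpoints for free neighbors. A minimal sketch (names mine):

```python
class TrivialDynamicMatching:
    """Maximal matching under edge updates, O(n) worst case per update.

    Invariant: no edge of the current graph has two free endpoints.
    """

    def __init__(self, vertices):
        self.adj = {v: set() for v in vertices}
        self.mate = {v: None for v in vertices}

    def _try_match(self, u):
        if self.mate[u] is None:
            for w in self.adj[u]:
                if self.mate[w] is None:
                    self.mate[u], self.mate[w] = w, u
                    return

    def insert(self, u, v):
        self.adj[u].add(v)
        self.adj[v].add(u)
        if self.mate[u] is None and self.mate[v] is None:
            self.mate[u], self.mate[v] = v, u

    def delete(self, u, v):
        self.adj[u].discard(v)
        self.adj[v].discard(u)
        if self.mate[u] == v:      # the deleted edge was matched
            self.mate[u] = self.mate[v] = None
            self._try_match(u)     # O(deg) rescans restore maximality
            self._try_match(v)
```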

Journal ArticleDOI
TL;DR: It is shown that the natural decision versions of all problems in two prominent classes of constant-factor approximable problems, namely MIN F+Π1 and MAX NP, admit polynomial problem kernels.
Abstract: It has been observed in many places that constant-factor approximable problems often admit polynomial or even linear problem kernels for their decision versions, e.g., Vertex Cover, Feedback Vertex Set, and Triangle Packing. While there exist examples like Bin Packing, which does not admit any kernel unless P = NP, there apparently is a strong relation between these two polynomial-time techniques. We add to this picture by showing that the natural decision versions of all problems in two prominent classes of constant-factor approximable problems, namely MIN F+Π1 and MAX NP, admit polynomial problem kernels. Problems in MAX SNP, a subclass of MAX NP, are shown to admit kernels with a linear base set, e.g., the set of vertices of a graph. This extends results of Cai and Chen (J. Comput. Syst. Sci. 54(3): 465–474, 1997), stating that the standard parameterizations of problems in MAX SNP and MIN F+Π1 are fixed-parameter tractable, and complements recent research on problems that do not admit polynomial kernelizations (Bodlaender et al. in J. Comput. Syst. Sci. 75(8): 423–434, 2009).

Book ChapterDOI
09 Jul 2012
TL;DR: In this paper, the authors propose a general method for converting online algorithms to local computation algorithms, by selecting a random permutation of the input, and simulating running the online algorithm.
Abstract: We propose a general method for converting online algorithms to local computation algorithms, by selecting a random permutation of the input, and simulating running the online algorithm. We bound the number of steps of the algorithm using a query tree, which models the dependencies between queries. We improve previous analyses of query trees on graphs of bounded degree, and extend this improved analysis to the cases where the degrees are distributed binomially, and to a special case of bipartite graphs. Using this method, we give a local computation algorithm for maximal matching in graphs of bounded degree, which runs in time and space O(log^3 n). We also show how to convert a large family of load balancing algorithms (related to balls and bins problems) to local computation algorithms. This gives several local load balancing algorithms which achieve the same approximation ratios as the online algorithms, but run in O(log n) time and space. Finally, we modify existing local computation algorithms for hypergraph 2-coloring and k-CNF and use our improved analysis to obtain better time and space bounds, of O(log^4 n), removing the dependency on the maximal degree of the graph from the exponent.

Journal ArticleDOI
TL;DR: An algorithm that decides whether a vertex is contained in some fixed maximal independent set, with expected query complexity $O(d^2)$ where $d$ is the degree bound, is presented.
Abstract: We study constant-time approximation algorithms for bounded-degree graphs, which run in time independent of the number of vertices $n$. We present an algorithm that decides whether a vertex is contained in some fixed maximal independent set with expected query complexity $O(d^2)$, where $d$ is the degree bound. Using this algorithm, we show constant-time approximation algorithms with certain multiplicative error and additive error $\epsilon n$ for many other problems, e.g., the maximum matching problem, the minimum vertex cover problem, and the minimum set cover problem, that run exponentially faster than existing algorithms with respect to $d$ and $\frac{1}{\epsilon}$. Our approximation algorithm for the maximum matching problem can be transformed to a two-sided error tester for the property of having a perfect matching. On the contrary, we show that every one-sided error tester for the property requires at least $\Omega(n)$ queries.
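The oracle underlying such results is the rank-based local simulation of greedy MIS: fix a random rank per vertex; a vertex is in the MIS iff none of its lower-ranked neighbors is, and querying neighbors in increasing rank order is the refinement that keeps the recursion trees small. A minimal sketch (names mine; in a true sublinear implementation the ranks would be generated on the fly rather than stored for the whole graph):

```python
import random
from functools import lru_cache

def mis_oracle(adj, seed=0):
    """Local oracle for membership in one fixed (random greedy) MIS."""
    rng = random.Random(seed)
    rank = {v: rng.random() for v in adj}  # an implicit random vertex order

    @lru_cache(maxsize=None)
    def in_mis(v):
        # v joins the greedy MIS iff no lower-ranked neighbor joined;
        # recursing in increasing rank order shrinks the query tree.
        for u in sorted(adj[v], key=rank.get):
            if rank[u] < rank[v] and in_mis(u):
                return False
        return True

    return in_mis

# Example: membership queries against the same implicit MIS.
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
oracle = mis_oracle(adj)
print([v for v in adj if oracle(v)])  # a maximal independent set
```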

Journal ArticleDOI
TL;DR: A new method based on "approximation and tidying" is proposed for kernelizing vertex deletion problems whose goal graphs can be characterized by forbidden induced subgraphs; structural properties of the specific problem are exploited to significantly improve the running time of the proposed kernelization method.
Abstract: We introduce the NP-hard graph-based data clustering problem s-Plex Cluster Vertex Deletion, where the task is to delete at most k vertices from a graph so that the connected components of the resulting graph are s-plexes. In an s-plex, every vertex has an edge to all but at most s−1 other vertices; cliques are 1-plexes. We propose a new method based on “approximation and tidying” for kernelizing vertex deletion problems whose goal graphs can be characterized by forbidden induced subgraphs. The method exploits polynomial-time approximation results and thus provides a useful link between approximation and kernelization. Employing “approximation and tidying”, we develop data reduction rules that, in O(ksn^2) time, transform an s-Plex Cluster Vertex Deletion instance with n vertices into an equivalent instance with O(k^2 s^3) vertices, yielding a problem kernel. To this end, we also show how to exploit structural properties of the specific problem in order to significantly improve the running time of the proposed kernelization method.
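The target condition is easy to state operationally: a connected component C is an s-plex iff every vertex of C has degree at least |C| − s within C. A minimal checker, usable as the goal-graph test in such vertex deletion routines (sketch; names mine, graph given as an adjacency dict of sets):

```python
def is_splex_cluster_graph(adj, s):
    """True iff every connected component of adj is an s-plex."""
    seen = set()
    for start in adj:
        if start in seen:
            continue
        component, stack = set(), [start]   # collect one component by DFS
        while stack:
            v = stack.pop()
            if v in component:
                continue
            component.add(v)
            stack.extend(adj[v])
        seen |= component
        if any(len(adj[v] & component) < len(component) - s
               for v in component):
            return False                    # some vertex misses > s-1 others
    return True

# cliques are exactly the 1-plexes:
triangle = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}
print(is_splex_cluster_graph(triangle, 1))  # True
```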