Journal ArticleDOI

Kernel(s) for problems with no kernel: On out-trees with many leaves

TL;DR: For the k-Leaf-Out-Branching problem, the paper shows that no polynomial-sized kernel is possible unless coNP is in NP/poly.
Abstract: The k-Leaf-Out-Branching problem is to find an out-branching, that is, a rooted oriented spanning tree, with at least k leaves in a given digraph. The problem has recently received much attention from the viewpoint of parameterized algorithms. Here, we take a kernelization-based approach to the k-Leaf-Out-Branching problem. We give the first polynomial kernel for Rooted k-Leaf-Out-Branching, a variant of k-Leaf-Out-Branching in which the root of the tree searched for is also part of the input. Our kernel with O(k^3) vertices is obtained using extremal combinatorics. For the k-Leaf-Out-Branching problem, we show that no polynomial-sized kernel is possible unless coNP is in NP/poly. However, our positive results for Rooted k-Leaf-Out-Branching immediately imply that the seemingly intractable k-Leaf-Out-Branching problem admits a data reduction to n independent polynomial-sized kernels. These two results, tractability and intractability side by side, are the first ones separating Karp kernelization from Turing kernelization. This answers affirmatively an open problem regarding "cheat kernelization" raised by Mike Fellows and Jiong Guo independently.
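
The "n independent kernels" claim is easy to make concrete: every out-branching is rooted at some vertex, so one can kernelize the rooted variant once per candidate root and accept iff any of the resulting small instances is a yes-instance. A minimal Python sketch of this Turing-style scheme follows; rooted_kernel and solve_small are hypothetical stand-ins for the paper's O(k^3)-vertex kernelization of Rooted k-Leaf-Out-Branching and for any exact solver run on a kernelized instance.

    def has_out_branching_with_k_leaves(vertices, k, rooted_kernel, solve_small):
        """Turing-style data reduction for k-Leaf-Out-Branching.

        rooted_kernel(root, k) is assumed to return an equivalent
        Rooted k-Leaf-Out-Branching instance with O(k^3) vertices;
        solve_small(instance) decides that small instance exactly.
        Both are hypothetical stand-ins, not the paper's actual code.
        """
        # Every out-branching is rooted at some vertex, so trying all
        # n roots is exhaustive: n independent polynomial-sized kernels.
        for root in vertices:
            small_instance = rooted_kernel(root, k)
            if solve_small(small_instance):
                return True
        return False

The n kernelized instances are mutually independent (the reduction is non-adaptive), which is precisely why this scheme can coexist with the lower bound ruling out a single polynomial-sized Karp kernel.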


Citations
Book
28 Feb 2019
TL;DR: Kernelization: Theory of Parameterized Preprocessing, by Fomin et al., is unique as a text devoted solely to kernelization, and this focus lets it showcase and teach the tools of the field more effectively than a more traditional text on fixed-parameter complexity.
Abstract: Preprocessing, or data reduction, is a standard technique for simplifying and speeding up computation. Written by a team of experts in the field, this book introduces a rapidly developing area of preprocessing analysis known as kernelization. The authors provide an overview of basic methods and important results, with accessible explanations of the most recent advances in the area, such as meta-kernelization, representative sets, polynomial lower bounds, and lossy kernelization. The text is divided into four parts, which cover the different theoretical aspects of the area: upper bounds, meta-theorems, lower bounds, and beyond kernelization. The methods are demonstrated through extensive examples using a single data set. Written to be self-contained, the book only requires a basic background in algorithmics and will be of use to professionals, researchers and graduate students in theoretical computer science, optimization, combinatorics, and related fields.

181 citations

Journal ArticleDOI
TL;DR: Under the hypothesis that coNP is not in NP/poly, the result implies tight lower bounds for parameters of interest in several areas, namely sparsification, kernelization in parameterized complexity, lossy compression, and probabilistically checkable proofs.
Abstract: Consider the following two-player communication process to decide a language L: The first player holds the entire input x but is polynomially bounded; the second player is computationally unbounded but does not know any part of x; their goal is to decide cooperatively whether x belongs to L at small cost, where the cost measure is the number of bits of communication from the first player to the second player. For any integer d ≥ 3 and positive real ε, we show that, if satisfiability for n-variable d-CNF formulas has a protocol of cost O(n^(d−ε)), then coNP is in NP/poly, which implies that the polynomial-time hierarchy collapses to its third level. The result even holds when the first player is co-nondeterministic, and it is tight, as there exists a trivial protocol for ε = 0. Under the hypothesis that coNP is not in NP/poly, our result implies tight lower bounds for parameters of interest in several areas, namely sparsification, kernelization in parameterized complexity, lossy compression, and probabilistically checkable proofs. By reduction, similar results hold for other NP-complete problems. For the vertex cover problem on n-vertex d-uniform hypergraphs, this statement holds for any integer d ≥ 2. The case d = 2 implies that no NP-hard vertex deletion problem based on a graph property that is inherited by subgraphs can have kernels consisting of O(k^(2−ε)) edges unless coNP is in NP/poly, where k denotes the size of the deletion set. Kernels consisting of O(k^2) edges are known for several problems in the class, including vertex cover, feedback vertex set, and bounded-degree deletion.
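
Written in display form (a restatement of the theorem above, with the exponent intact):

\[
\mathrm{cost}(n) = O\!\left(n^{\,d-\varepsilon}\right) \text{ for } n\text{-variable } d\text{-CNF satisfiability}
\;\Longrightarrow\;
\mathrm{coNP} \subseteq \mathrm{NP/poly}
\;\Longrightarrow\;
\mathrm{PH} = \Sigma_3^{\mathrm{p}}.
\]

The ε = 0 case is tight because there are only O(n^d) possible d-clauses over n variables, so the first player can always send a bitmap of the whole formula at that cost.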

112 citations

Journal Article
TL;DR: This survey gives a general introduction to the area of kernelization, discusses some recent developments, and attempts a reasonably self-contained update on topics including lower bounds for kernelization, taking into account the recent progress on the AND-conjecture.
Abstract: Kernelization is a formalization of efficient preprocessing, aimed mainly at combinatorially hard problems. Preprocessing is highly successful in practice, e.g., in state-of-the-art SAT and ILP solvers. The notion of kernelization from parameterized complexity makes it possible to rigorously prove upper and lower bounds on, e.g., the maximum output size of a preprocessing in terms of one or more problem-specific parameters. This avoids the often-raised issue that we should not expect an efficient algorithm that provably shrinks every instance of any NP-hard problem. In this survey, we give a general introduction to the area of kernelization and then discuss some recent developments. After the introductory material we attempt a reasonably self-contained update and introduction on the following topics: (1) Lower bounds for kernelization, taking into account the recent progress on the AND-conjecture. (2) The use of matroids and representative sets for kernelization. (3) Turing kernelization, i.e., understanding preprocessing that adaptively or non-adaptively creates a large number of small outputs.

106 citations

Journal ArticleDOI
TL;DR: This work conjectures that no WK[1]-hard problem admits a polynomial Turing kernel, and it defines two kernelization hardness hierarchies, akin to the M- and W-hierarchies of parameterized complexity.
Abstract: The framework of Bodlaender et al. (J Comput Sys Sci 75(8):423–434, 2009) and Fortnow and Santhanam (J Comput Sys Sci 77(1):91–106, 2011) allows us to exclude the existence of polynomial kernels for a range of problems under reasonable complexity-theoretic assumptions. However, some issues are not addressed by this framework, including the existence of Turing kernels such as the "kernelization" of Leaf Out-Branching(k) that outputs n instances each of size poly(k). Observing that Turing kernels are preserved by polynomial parametric transformations (PPTs), we define two kernelization hardness hierarchies by the PPT-closure of problems that seem fundamentally unlikely to admit efficient Turing kernelizations. This gives rise to the MK- and WK-hierarchies, which are akin to the M- and W-hierarchies of parameterized complexity. We find that several previously considered problems are complete for the fundamental hardness class WK[1], including Min Ones d-SAT(k), Binary NDTM Halting(k), Connected Vertex Cover(k), and Clique parameterized by k log n. We conjecture that no WK[1]-hard problem admits a polynomial Turing kernel. Our hierarchy subsumes an earlier hierarchy of Harnik and Naor that, from a parameterized perspective, is restricted to classical problems parameterized by witness size. Our results provide the first natural complete problems for, e.g., their class VC_1; this had been left open.

77 citations

Dissertation
01 Jul 2013
TL;DR: The concept of kernelization, developed within the field of parameterized complexity theory, is used to give a mathematical analysis of the power of data reduction for dealing with fundamental NP-hard graph problems; for example, it is proved that Treewidth and Pathwidth do not admit polynomial kernels parameterized by the vertex-deletion distance to a clique, unless the polynomial hierarchy collapses.
Abstract: The purpose of this thesis is to give a mathematical analysis of the power of data reduction for dealing with fundamental NP-hard graph problems. It has often been observed that the use of heuristic reduction rules in a preprocessing phase gives significant performance gains when solving such problems. However, there is little scientific explanation for these empirically observed successes. We use the concept of kernelization, developed within the field of parameterized complexity theory, to give a mathematical analysis of the power of such data reduction techniques. A kernelization, or kernel, is a polynomial-time preprocessing algorithm that transforms an instance of a parameterized problem into an equivalent instance whose size depends only on the parameter. The concept of kernelization therefore formalizes efficient and provably effective preprocessing. In our analysis of fundamental graph problems we utilize various structural measures of graphs as the complexity parameter; these include the vertex cover number, the feedback vertex number, the treewidth, and the vertex-deletion distance to various well-studied graph classes. We parameterize four fundamental classes of graph problems by such graph-structural measures. We determine which of these parameterizations admit kernelizations for which the size of the output is bounded by a polynomial in the parameter. Towards this end, we also develop technical tools to prove that a parameterized problem does not admit a kernel of polynomial size, subject to certain complexity-theoretic assumptions. The four fundamental problems we study are Vertex Cover, Treewidth, Graph Coloring, and Longest Path. For the Vertex Cover problem we introduce novel reduction rules that provably reduce the size of an instance to at most O(k^3) vertices in polynomial time, where k is the size of a feedback vertex set of the input graph. We also prove that the existence of a kernel for the parameterization by the vertex-deletion distance to an outerplanar graph or a clique leads to a collapse of the polynomial hierarchy and is therefore unlikely. In our analysis of the Treewidth problem, we prove that preprocessing rules that were initially developed for heuristic algorithms lead to a polynomial kernel for Treewidth parameterized by the vertex cover number. By developing additional rules that eliminate almost-simplicial vertices and shrink clique-seeing paths, we obtain a polynomial kernel parameterized by the feedback vertex number. Finally, we prove that Treewidth and Pathwidth do not admit polynomial kernels parameterized by the vertex-deletion distance to a clique, unless the polynomial hierarchy collapses. We analyze the kernelization complexity of graph coloring problems with respect to parameterizations that measure the vertex-deletion distance to graph classes such as cographs and co-chordal graphs. We show that the existence of polynomial kernels is determined by the extremal properties of no-instances of the List Coloring problem on such graph classes. Finally, we investigate Longest Path and related problems with structural parameterizations. We obtain polynomial kernels for parameterizations by the vertex cover number, the max leaf number, and the vertex-deletion distance to a cluster graph. These results are complemented by a lower bound for the parameterization by the deletion distance to an outerplanar graph.
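
To make the kernelization notion defined above concrete, here is the textbook Buss kernel for Vertex Cover parameterized by the solution size k — a classical illustration, not one of the thesis's own reduction rules, which target structural parameters instead. The sketch assumes a simple undirected graph given as an adjacency dictionary.

    def vertex_cover_kernel(adj, k):
        """Buss kernelization for Vertex Cover(k).

        adj maps each vertex to the set of its neighbours. Returns an
        equivalent reduced instance (adj', k'), or None if the instance
        is provably a no-instance.
        """
        adj = {v: set(nbrs) for v, nbrs in adj.items()}  # work on a copy
        changed = True
        while changed and k >= 0:
            changed = False
            # Rule 1: isolated vertices never help a cover; drop them.
            for v in [v for v, nbrs in adj.items() if not nbrs]:
                del adj[v]
            # Rule 2: a vertex of degree > k must be in every size-k cover.
            for v, nbrs in list(adj.items()):
                if len(nbrs) > k:
                    for u in nbrs:
                        adj[u].discard(v)
                    del adj[v]
                    k -= 1
                    changed = True
                    break
        edges = sum(len(nbrs) for nbrs in adj.values()) // 2
        if k < 0 or edges > k * k:
            return None   # > k^2 edges with max degree <= k: a no-instance
        return adj, k     # kernel: at most k^2 edges remain

After the rules have been applied exhaustively, every remaining vertex has degree at most k, so a size-k cover can cover at most k^2 edges; checking this bound is what turns the reduction rules into a kernel whose size is bounded by a polynomial in k alone.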

45 citations

References
01 Jan 1972
TL;DR: Throughout the 1960s I worked on combinatorial optimization problems, including logic circuit design with Paul Roth and assembly line balancing and the traveling salesman problem with Mike Held; this work made me aware of the importance of the distinction between polynomial-time and superpolynomial-time solvability.
Abstract: Throughout the 1960s I worked on combinatorial optimization problems including logic circuit design with Paul Roth and assembly line balancing and the traveling salesman problem with Mike Held. These experiences made me aware that seemingly simple discrete optimization problems could hold the seeds of combinatorial explosions. The work of Dantzig, Fulkerson, Hoffman, Edmonds, Lawler and other pioneers on network flows, matching and matroids acquainted me with the elegant and efficient algorithms that were sometimes possible. Jack Edmonds’ papers and a few key discussions with him drew my attention to the crucial distinction between polynomial-time and superpolynomial-time solvability. I was also influenced by Jack’s emphasis on min-max theorems as a tool for fast verification of optimal solutions, which foreshadowed Steve Cook’s definition of the complexity class NP. Another influence was George Dantzig’s suggestion that integer programming could serve as a universal format for combinatorial optimization problems.

7,714 citations

Book
06 Nov 1998
TL;DR: An approach to complexity theory that offers a means of analysing algorithms in terms of their tractability and introduces readers to new classes of algorithms that can be analysed more precisely than was previously possible.
Abstract: An approach to complexity theory which offers a means of analysing algorithms in terms of their tractability. The authors consider problems in terms of parameterized languages and, by taking "k-slices" of the language, introduce readers to new classes of algorithms which may be analysed more precisely than was the case until now. The book is as self-contained as possible and includes a great deal of background material. As a result, computer scientists, mathematicians, and graduate students interested in the design and analysis of algorithms will find much of interest.
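
The "k-slices" mentioned here can be made precise (standard notation, not a quotation from the book): for a parameterized language L ⊆ Σ* × ℕ, the k-th slice is

\[
  L_k = \{\, x \in \Sigma^{*} : (x, k) \in L \,\},
\]

so fixed-parameter tractability can be read as a uniform statement about the whole family of classical languages L_1, L_2, L_3, …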

3,651 citations

Book
01 Jan 2010
TL;DR: Fixed-Parameter Tractability.
Abstract: Fixed-Parameter Tractability.- Reductions and Parameterized Intractability.- The Class W[P].- Logic and Complexity.- Two Fundamental Hierarchies.- The First Level of the Hierarchies.- The W-Hierarchy.- The A-Hierarchy.- Kernelization and Linear Programming Techniques.- The Automata-Theoretic Approach.- Tree Width.- Planarity and Bounded Local Tree Width.- Homomorphisms and Embeddings.- Parameterized Counting Problems.- Bounded Fixed-Parameter Tractability.- Subexponential Fixed-Parameter Tractability.- Appendix: Background from Complexity Theory.- References.- Notation.- Index.

2,343 citations

Book
01 Jan 2006
TL;DR: This book introduces fixed-parameter algorithms and parameterized complexity theory, illustrating the main algorithmic techniques through selected case studies.
Abstract: PART I: FOUNDATIONS 1. Introduction to Fixed-Parameter Algorithms 2. Preliminaries and Agreements 3. Parameterized Complexity Theory - A Primer 4. Vertex Cover - An Illustrative Example 5. The Art of Problem Parameterization 6. Summary and Concluding Remarks PART II: ALGORITHMIC METHODS 7. Data Reduction and Problem Kernels 8. Depth-Bounded Search Trees 9. Dynamic Programming 10. Tree Decompositions of Graphs 11. Further Advanced Techniques 12. Summary and Concluding Remarks PART III: SOME THEORY, SOME CASE STUDIES 13. Parameterized Complexity Theory 14. Connections to Approximation Algorithms 15. Selected Case Studies 16. Zukunftsmusik References Index

1,730 citations