Book

Parameterized complexity theory

TL;DR: Fixed-Parameter Tractability.
Abstract: Fixed-Parameter Tractability.- Reductions and Parameterized Intractability.- The Class W[P].- Logic and Complexity.- Two Fundamental Hierarchies.- The First Level of the Hierarchies.- The W-Hierarchy.- The A-Hierarchy.- Kernelization and Linear Programming Techniques.- The Automata-Theoretic Approach.- Tree Width.- Planarity and Bounded Local Tree Width.- Homomorphisms and Embeddings.- Parameterized Counting Problems.- Bounded Fixed-Parameter Tractability.- Subexponential Fixed-Parameter Tractability.- Appendix: Background from Complexity Theory.- References.- Notation.- Index.
Citations
Book
27 Jul 2015
TL;DR: This comprehensive textbook presents a clean and coherent account of the most fundamental tools and techniques in Parameterized Algorithms and is a self-contained guide to the area, providing a toolbox of algorithmic techniques.
Abstract: This comprehensive textbook presents a clean and coherent account of the most fundamental tools and techniques in Parameterized Algorithms and is a self-contained guide to the area. The book covers many of the recent developments in the field, including applications of important separators, branching based on linear programming, Cut & Count to obtain faster algorithms on tree decompositions, algorithms based on representative families of matroids, and use of the Strong Exponential Time Hypothesis. A number of older results are revisited and explained in a modern and didactic way. The book provides a toolbox of algorithmic techniques. Part I is an overview of basic techniques, each chapter discussing a certain algorithmic paradigm. The material covered in this part can be used for an introductory course on fixed-parameter tractability. Part II discusses more advanced and specialized algorithmic ideas, bringing the reader to the cutting edge of current research. Part III presents complexity results and lower bounds, giving negative evidence by way of W[1]-hardness, the Exponential Time Hypothesis, and kernelization lower bounds. All the results and concepts are introduced at a level accessible to graduate students and advanced undergraduate students. Every chapter is accompanied by exercises, many with hints, while the bibliographic notes point to original publications and related work.
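
As a minimal illustration of the fixed-parameter tractability covered in Part I, consider the classic bounded search tree (branching) algorithm for Vertex Cover parameterized by the solution size k, which runs in O(2^k · poly(n)) time. The Python sketch below is not taken from either book; it is an illustrative toy in which a graph is represented simply as a set of edge pairs.

    def has_vertex_cover(edges, k):
        """Decide whether the graph given by `edges` has a vertex cover of size <= k.
        Bounded search tree: any uncovered edge (u, v) must have an endpoint in the
        cover, so branch on u and on v with budget k - 1. The recursion has depth at
        most k and branching factor 2, giving O(2^k * poly(n)) time, i.e. FPT in k."""
        if not edges:
            return True          # nothing left to cover
        if k == 0:
            return False         # an edge remains but the budget is spent
        u, v = next(iter(edges))
        # Branch 1: take u into the cover and delete all edges incident to u.
        if has_vertex_cover({e for e in edges if u not in e}, k - 1):
            return True
        # Branch 2: take v into the cover instead.
        return has_vertex_cover({e for e in edges if v not in e}, k - 1)

    # A 4-cycle has a vertex cover of size 2 (two opposite vertices) but not of size 1.
    cycle = {(1, 2), (2, 3), (3, 4), (4, 1)}
    print(has_vertex_cover(cycle, 2))  # True
    print(has_vertex_cover(cycle, 1))  # False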

1,544 citations


Cites background from "Parameterized complexity theory"

  • ...The formal foundation for Turing kernelization is the concept of oracle Turing machines (see [189]), where we constrain the power of the oracle to being able to answer only queries that are short in terms of the parameter....

  • ...The book of Flum and Grohe [189] is an extensive introduction to the area with a strong emphasis on the complexity viewpoint....

  • ...The area has been developing at such a fast rate that even the two books that appeared in 2006, by Flum and Grohe [189] and Niedermeier [376], do not contain some of the new tools and techniques that we feel need to be taught in a modern introductory course....

  • ...The book of Flum and Grohe [189] focuses to a large extent on complexity aspects of parameterized algorithmics from the viewpoint of logic, while the material we wanted to cover in the school is primarily algorithmic, viewing complexity as a tool for proving that certain kinds of algorithms do not exist....

  • ...Our kernelization for d-Hitting Set follows the lines of [189].... (a sketch of the underlying sunflower rule follows this list)

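The last quotation refers to the kernelization for d-Hitting Set in [189], which rests on the sunflower lemma: while the instance has more than d!·k^d sets, find a sunflower with k+1 petals and replace its petals by the core. The Python below is a simplified, hedged sketch of that rule (it assumes distinct sets of size at most d and glosses over some edge cases); it illustrates the technique rather than reproducing the book's exact presentation.

    import math

    def find_sunflower(family, r):
        """Greedy, constructive form of the sunflower lemma. `family` is a list of
        distinct frozensets; returns (core, petals) where `petals` are r sets from
        `family` whose pairwise intersections all equal `core`, or None if the
        greedy search fails."""
        disjoint, used = [], set()
        for s in family:                      # maximal pairwise-disjoint subfamily
            if not (s & used):
                disjoint.append(s)
                used |= s
        if len(disjoint) >= r:
            return frozenset(), disjoint[:r]  # sunflower with empty core
        # Otherwise some element of `used` occurs in many sets; recurse without it.
        x = max(used, key=lambda e: sum(1 for s in family if e in s))
        reduced = [s - {x} for s in family if x in s and len(s) > 1]
        sub = find_sunflower(reduced, r) if reduced else None
        if sub is None:
            return None
        core, petals = sub
        return core | {x}, [p | {x} for p in petals]

    def kernelize_hitting_set(family, k, d):
        """Sunflower reduction rule for d-Hitting Set: a sunflower with k+1 petals
        forces any hitting set of size <= k to hit the core, so the petal sets can
        be replaced by the core; an empty core means k+1 disjoint sets, i.e. a
        NO-instance (signalled by returning None)."""
        family = set(frozenset(s) for s in family)
        while len(family) > math.factorial(d) * k ** d:
            found = find_sunflower(list(family), k + 1)
            if found is None:
                break
            core, petals = found
            if not core:
                return None
            family -= set(petals)
            family.add(core)
        return family

By the sunflower lemma, a d-uniform family with more than d!·k^d sets always contains a sunflower with k+1 petals, so the loop shrinks the instance to at most d!·k^d sets, which gives the kernel size bound in the uniform case.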

Proceedings ArticleDOI
21 Oct 2011
TL;DR: In this article, the authors discuss an emerging field of study: adversarial machine learning (AML), the study of effective machine learning techniques against an adversarial opponent, and give a taxonomy for classifying attacks against online machine learning algorithms.
Abstract: In this paper (expanded from an invited talk at AISEC 2010), we discuss an emerging field of study: adversarial machine learning---the study of effective machine learning techniques against an adversarial opponent. In this paper, we: give a taxonomy for classifying attacks against online machine learning algorithms; discuss application-specific factors that limit an adversary's capabilities; introduce two models for modeling an adversary's capabilities; explore the limits of an adversary's knowledge about the algorithm, feature space, training, and input data; explore vulnerabilities in machine learning algorithms; discuss countermeasures against attacks; introduce the evasion challenge; and discuss privacy-preserving learning techniques.

947 citations

Journal ArticleDOI
TL;DR: The author briefly introduces the emerging field of adversarial machine learning, in which opponents can cause traditional machine learning algorithms to behave poorly in security applications.
Abstract: The author briefly introduces the emerging field of adversarial machine learning, in which opponents can cause traditional machine learning algorithms to behave poorly in security applications. He gives a high-level overview and mentions several types of attacks, as well as several types of defenses, and theoretical limits derived from a study of near-optimal evasion.

703 citations

Journal ArticleDOI
TL;DR: Using the notion of distillation algorithms, a generic lower-bound engine is developed that allows showing that a variety of FPT problems, fulfilling certain criteria, cannot have polynomial kernels unless the polynomially-bounded hierarchy collapses.
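
For context, the distillation framework this TL;DR refers to can be summarized roughly as follows (a paraphrase from memory; see the cited paper and the Fortnow–Santhanam result for the precise statements):

    An OR-distillation algorithm for a language L takes instances x_1, ..., x_t, runs in time
    polynomial in |x_1| + ... + |x_t|, and outputs a single instance y of some language L' with
    |y| bounded by a polynomial in max_i |x_i|, such that y is a yes-instance of L' if and only
    if at least one x_i is a yes-instance of L. If an NP-hard language admits such a distillation,
    then coNP is contained in NP/poly and the polynomial hierarchy collapses. A composition
    algorithm combined with a polynomial kernel for a parameterized problem whose unparameterized
    version is NP-hard yields exactly such a distillation, which is the "generic lower-bound
    engine" mentioned above.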

671 citations


Cites background from "Parameterized complexity theory"

  • ...For more background on parameterized complexity, the reader is referred to [7,24,32]....

  • ...It is known that FPT = M[1] iff n-variable 3-SAT can be solved in 2^{o(n)} time [13,23,32], and FPT = W[1] iff k-Step NonDeterministic Turing Machine Halting is fixed-parameter tractable [22,24,29]....

Journal ArticleDOI
TL;DR: This paper considers multi-user massive MIMO systems and proposes a distributed compressive CSIT estimation scheme so that the compressed measurements are observed at the users locally, while the CSIT recovery is performed at the base station jointly.
Abstract: To fully utilize the spatial multiplexing gains or array gains of massive MIMO, the channel state information must be obtained at the transmitter side (CSIT). However, conventional CSIT estimation approaches are not suitable for FDD massive MIMO systems because of the overwhelming training and feedback overhead. In this paper, we consider multi-user massive MIMO systems and deploy the compressive sensing (CS) technique to reduce the training as well as the feedback overhead in the CSIT estimation. Multi-user massive MIMO systems exhibit a hidden joint sparsity structure in the user channel matrices due to the shared local scatterers in the physical propagation environment. As such, instead of naively applying conventional CS to the CSIT estimation, we propose a distributed compressive CSIT estimation scheme so that the compressed measurements are observed at the users locally, while the CSIT recovery is performed at the base station jointly. A joint orthogonal matching pursuit recovery algorithm is proposed to perform the CSIT recovery, with the capability of exploiting the hidden joint sparsity in the user channel matrices. We analyze the obtained CSIT quality in terms of the normalized mean absolute error, and through closed-form expressions we obtain simple insights into how the joint channel sparsity can be exploited to improve the CSIT recovery performance.
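
The joint recovery step can be pictured with a simultaneous-OMP-style sketch: at every iteration, the column index with the largest correlation aggregated across users is added to a common support, and each user's coefficients are then refit by least squares. The real-valued Python toy below assumes all users share one support of known size; it is not the paper's exact J-OMP algorithm, and all names and dimensions are illustrative.

    import numpy as np

    def joint_omp(A_list, y_list, s):
        """Simplified simultaneous OMP for jointly sparse recovery.
        A_list[i] is the m_i x n sensing matrix of user i and y_list[i] its
        measurements; all users' sparse vectors are assumed to share one support
        of size s. Each iteration picks the column with the largest aggregate
        correlation with the residuals, then refits every user by least squares
        on the selected columns."""
        n = A_list[0].shape[1]
        support = []
        residuals = [y.astype(float) for y in y_list]
        estimates = [np.zeros(n) for _ in y_list]
        for _ in range(s):
            corr = np.zeros(n)
            for A, r in zip(A_list, residuals):
                corr += np.abs(A.T @ r)          # aggregate correlation across users
            corr[support] = 0.0                  # never reselect a chosen index
            support.append(int(np.argmax(corr)))
            for i, (A, y) in enumerate(zip(A_list, y_list)):
                coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
                estimates[i] = np.zeros(n)
                estimates[i][support] = coef
                residuals[i] = y - A[:, support] @ coef
        return support, estimates

    # Toy check: 3 users, 64-dimensional channels sharing a 4-element support.
    rng = np.random.default_rng(0)
    true_support = rng.choice(64, size=4, replace=False)
    A_list, y_list = [], []
    for _ in range(3):
        A = rng.standard_normal((20, 64))
        h = np.zeros(64)
        h[true_support] = rng.standard_normal(4)
        A_list.append(A)
        y_list.append(A @ h)
    print(sorted(joint_omp(A_list, y_list, 4)[0]), sorted(true_support))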

642 citations