# Parameterized Complexity of d-Hitting Set with Quotas

25 Jan 2021, pp. 293-307

TL;DR: In this paper, the authors study a variant of the classic d-Hitting Set problem in which every set in the family (each of size at most d) carries a lower and an upper capacity constraint that the solution must respect, and give parameterized algorithms for it.

Abstract: In this paper we study a variant of the classic d-Hitting Set problem with lower and upper capacity constraints, say A and B, respectively. The input to the problem consists of a universe U; a set family \(\mathscr{S}\) of sets over U, each of size at most d; a non-negative integer k; and two functions \(\alpha :\mathscr {S} \rightarrow \{1,\ldots ,A\}\) and \(\beta :\mathscr {S} \rightarrow \{1,\ldots ,B\}\). The goal is to decide if there exists a hitting set of size at most k such that for every set S in the family \(\mathscr {S} \), the solution contains at least \(\alpha (S)\) elements and at most \(\beta (S)\) elements from S. We call this the \((A, B)\)-Multi d-Hitting Set problem. We study the problem in the realm of parameterized complexity. We show that \((A, B)\)-Multi d-Hitting Set can be solved in \(\mathcal {O}^{\star }(d^{k}) \) time. For the special cases \(d=3\) and \(d=4\), we obtain improved bounds of \(\mathcal {O}^\star (2.2738^k)\) and \(\mathcal {O}^\star (3.562^{k})\), respectively. The former matches the running time of the classical 3-Hitting Set problem. Furthermore, we show that if we do not have an upper bound constraint and the lower bound constraint is the same for all the sets in the family, say \(A>1\), then the problem can be solved even faster than d-Hitting Set.
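The branching idea behind the \(\mathcal{O}^\star(d^k)\) bound can be sketched directly from the problem definition. The following is a minimal illustrative sketch, not the authors' algorithm: the function name `multi_hitting_set` and the encoding of \(\alpha\) and \(\beta\) as lists indexed like the set family are our own assumptions.

```python
def multi_hitting_set(sets, alpha, beta, k):
    """Decide (A, B)-Multi d-Hitting Set by exhaustive branching (sketch).

    sets  : list of sets over the universe, each of size <= d
    alpha : alpha[i] = lower quota of sets[i]
    beta  : beta[i]  = upper quota of sets[i]
    k     : maximum size of the hitting set

    Returns a feasible hitting set H (frozenset) with |H| <= k, or None.
    """
    sets = [frozenset(S) for S in sets]

    def branch(H, budget):
        # Prune any branch that already violates an upper quota beta(S).
        if any(len(H & S) > b for S, b in zip(sets, beta)):
            return None
        # Pick any set whose lower quota alpha(S) is not yet met.
        i = next((j for j, S in enumerate(sets) if len(H & S) < alpha[j]), None)
        if i is None:
            return H            # every quota is satisfied
        if budget == 0:
            return None         # no budget left to fix the deficiency
        # Branch on the <= d elements of sets[i] not yet in H; depth is
        # at most k, which gives the O*(d^k) behaviour.
        for v in sets[i] - H:
            result = branch(H | {v}, budget - 1)
            if result is not None:
                return result
        return None

    return branch(frozenset(), k)
```

For example, with `sets = [{1, 2, 3}, {2, 3, 4}]`, `alpha = [1, 2]`, `beta = [2, 2]` and `k = 2`, the procedure returns a two-element solution that meets both lower quotas, while `k = 1` is correctly rejected.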

##### References


27 Jul 2015

TL;DR: This comprehensive textbook presents a clean and coherent account of the most fundamental tools and techniques in Parameterized Algorithms and is a self-contained guide to the area, providing a toolbox of algorithmic techniques.

Abstract: This comprehensive textbook presents a clean and coherent account of the most fundamental tools and techniques in Parameterized Algorithms and is a self-contained guide to the area. The book covers many of the recent developments of the field, including application of important separators, branching based on linear programming, Cut & Count to obtain faster algorithms on tree decompositions, algorithms based on representative families of matroids, and use of the Strong Exponential Time Hypothesis. A number of older results are revisited and explained in a modern and didactic way. The book provides a toolbox of algorithmic techniques. Part I is an overview of basic techniques, each chapter discussing a certain algorithmic paradigm. The material covered in this part can be used for an introductory course on fixed-parameter tractability. Part II discusses more advanced and specialized algorithmic ideas, bringing the reader to the cutting edge of current research. Part III presents complexity results and lower bounds, giving negative evidence by way of W[1]-hardness, the Exponential Time Hypothesis, and kernelization lower bounds. All the results and concepts are introduced at a level accessible to graduate students and advanced undergraduate students. Every chapter is accompanied by exercises, many with hints, while the bibliographic notes point to original publications and related work.

1,348 citations


06 Dec 2013

TL;DR: This comprehensive and self-contained textbook presents an accessible overview of the state of the art of multivariate algorithmics and complexity, enabling the reader who masters the complexity issues under discussion to use the positive and negative toolkits in their own research.

Abstract: This comprehensive and self-contained textbook presents an accessible overview of the state of the art of multivariate algorithmics and complexity. Increasingly, multivariate algorithmics is having significant practical impact in many application domains, with even more developments on the horizon. The text describes how the multivariate framework allows an extended dialog with a problem, enabling the reader who masters the complexity issues under discussion to use the positive and negative toolkits in their own research. Features: describes many of the standard algorithmic techniques available for establishing parametric tractability; reviews the classical hardness classes; explores the various limitations and relaxations of the methods; showcases the powerful new lower bound techniques; examines various different algorithmic solutions to the same problems, highlighting the insights to be gained from each approach; demonstrates how complexity methods and ideas have evolved over the past 25 years.

1,219 citations


TL;DR: This paper proposes a new concept, kernel diagnosis, which avoids the inadequacy of minimal diagnosis, and also considers restricting the axioms used to describe the system so that the concept of minimal diagnosis remains adequate.

Abstract: Most approaches to model-based diagnosis describe a diagnosis for a system as a set of failing components that explains the symptoms. In order to characterize the typically very large number of diagnoses, usually only the minimal such sets of failing components are represented. This method of characterizing all diagnoses is inadequate in general, in part because not every superset of the faulty components of a diagnosis necessarily provides a diagnosis. In this paper we analyze the concept of diagnosis in depth exploiting the notions of implicate/implicant and prime implicate/implicant. We use these notions to consider two alternative approaches for addressing the inadequacy of the concept of minimal diagnosis. First, we propose a new concept, that of kernel diagnosis, which is free of this problem with minimal diagnosis. This concept is useful to both the consistency and abductive views of diagnosis. Second, we consider restricting the axioms used to describe the system to ensure that the concept of minimal diagnosis is adequate.

587 citations


TL;DR: A kernelization algorithm for the 3-Hitting Set problem is presented, along with a general kernelization for d-Hitting Set that guarantees a kernel whose order does not exceed \((2d-1)k^{d-1}+k\).

Abstract: For a given parameterized problem \(\Pi\), a kernelization algorithm is a polynomial-time pre-processing procedure that transforms an arbitrary instance of \(\Pi\) into an equivalent one whose size depends only on the input parameter(s). The resulting instance is called a problem kernel. In this paper, a kernelization algorithm for the 3-Hitting Set problem is presented along with a general kernelization for d-Hitting Set. For 3-Hitting Set, an arbitrary instance is reduced into an equivalent one that contains at most \(5k^2+k\) elements. This kernelization is an improvement over previously known methods that guarantee cubic-order kernels. Our method is used also to obtain quadratic kernels for several other problems. For a constant \(d\ge 3\), a kernelization of d-Hitting Set is achieved by a non-trivial generalization of the 3-Hitting Set method, and guarantees a kernel whose order does not exceed \((2d-1)k^{d-1}+k\).
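To illustrate the flavour of such pre-processing, here is one standard, elementary reduction rule for Hitting Set (a small sketch of our own, not the paper's kernelization): a set that is a superset of another set is redundant, since any solution hitting the smaller set also hits the larger one.

```python
def remove_supersets(sets):
    """Elementary Hitting Set reduction rule (illustrative sketch):
    if S2 is a proper subset of S1, any hitting set for S2 also hits S1,
    so S1 can be discarded without changing the answer."""
    sets = [frozenset(S) for S in sets]
    # Keep S only if the family contains no proper subset of S.
    kept = [S for S in sets if not any(T < S for T in sets)]
    # Deduplicate while preserving the original order.
    seen, out = set(), []
    for S in kept:
        if S not in seen:
            seen.add(S)
            out.append(S)
    return out
```

For instance, `remove_supersets([{1, 2, 3}, {1, 2}, {3, 4}])` keeps only `{1, 2}` and `{3, 4}`. Real kernels such as the \(5k^2+k\) bound above need considerably stronger, parameter-aware rules.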

148 citations


01 Jan 2007

TL;DR: The topic of exact, exponential-time algorithms for NP-hard problems has received a lot of attention, particularly with the focus of producing algorithms with stronger theoretical guarantees, e.g. ...

Abstract: The topic of exact, exponential-time algorithms for NP-hard problems has received a lot of attention, particularly with the focus of producing algorithms with stronger theoretical guarantees, e.g. ...

138 citations
