
Showing papers on "Vertex cover published in 2022"


Book ChapterDOI
01 Jan 2022
TL;DR: A lower bound on the space complexity of two-pass semi-streaming algorithms that approximate the maximum matching problem is shown in this paper, parameterized by the density of Ruzsa-Szemerédi graphs, where RS(n) denotes the maximum number of induced matchings of size Θ(n) in any n-vertex graph.
A Two-Pass (Conditional) Lower Bound for Semi-Streaming Maximum Matching (Sepehr Assadi, SODA 2022, pp. 708-742, https://doi.org/10.1137/1.9781611977073.32)
Abstract: We prove a lower bound on the space complexity of two-pass semi-streaming algorithms that approximate the maximum matching problem. The lower bound is parameterized by the density of Ruzsa-Szemerédi graphs: the approximation ratio achievable by any two-pass semi-streaming algorithm for maximum matching is bounded in terms of RS(n), where RS(n) denotes the maximum number of induced matchings of size Θ(n) in any n-vertex graph, i.e., the largest density of a Ruzsa-Szemerédi graph. Closing the (large) gap between the known upper and lower bounds on RS(n) has remained a notoriously difficult problem in combinatorics. Under the plausible hypothesis that RS(n) = n^{Ω(1)}, our lower bound is the first to rule out small-constant-approximation two-pass semi-streaming algorithms for the maximum matching problem, making progress on a longstanding open question in the graph streaming literature.
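For context, a minimal sketch (my own, not from the paper) of the classic one-pass semi-streaming baseline: greedy maximal matching, which gives the 1/2-approximation that multi-pass algorithms try to beat while storing only O(n) state.

```python
# Hypothetical sketch: one-pass greedy maximal matching over an edge stream.
def greedy_matching(edge_stream):
    matched = set()   # endpoints already used; O(n) words of state
    matching = []
    for u, v in edge_stream:
        if u not in matched and v not in matched:
            matching.append((u, v))
            matched.update((u, v))
    return matching   # maximal, hence at least 1/2 of the maximum size

# A path on 4 vertices, streamed in order:
print(greedy_matching([(1, 2), (2, 3), (3, 4)]))  # [(1, 2), (3, 4)]
```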

9 citations


Book ChapterDOI
01 Jan 2022
TL;DR: In this article, a large suite of data reduction rules for the vertex clique cover (VCC) problem is introduced, enabling large, sparse, real-world graphs to be solved significantly faster than the state of the art.
Abstract: The vertex clique cover (VCC) problem—the problem of computing a minimum cardinality set of cliques covering all vertices of a graph—is a classic NP-hard problem. Despite recent advances in parameterized algorithms that have been used to solve NP-hard problems in practice, the VCC problem has been almost completely unexplored. In particular, data reduction rules, which transform the input graph to a smaller equivalent instance, are well studied and highly effective at solving other NP-hard problems (e.g., the minimum vertex cover problem) in practice on sparse graphs of millions of vertices. Practical rules for the VCC problem, on the other hand, are nearly nonexistent: instead, the complementary graph coloring problem has received the lion's share of attention, and the available rules for that problem are either theoretical or they do not translate to effective rules for solving the VCC problem on sparse graphs. In this paper, we introduce a large suite of data reduction rules for the VCC problem. These rules enable us to solve large, sparse, real-world graphs significantly faster than the state of the art. Of the 52 graphs tested, without any additional techniques, our reduction rules completely solve 14 graphs with up to 326K vertices in a few milliseconds. Furthermore, applying our rules as a preprocessing step accelerates the state-of-the-art iterated greedy (IG) approach due to Chalupa, enabling us to find higher-quality solutions up to multiple orders of magnitude faster than previously possible. Finally, we integrate our data reductions into the branch-and-reduce framework, exactly solving instances on up to millions of vertices. As an added bonus, our data reduction rules partially explain why the clique cover number and independence number have been observed to match for many sparse instances—our data reduction rules apply to both the maximum independent set and VCC problems.
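To illustrate the flavor of data reduction, here is a sketch of the classic degree-one rule for Minimum Vertex Cover, which the abstract cites as a success story; it is not one of the paper's VCC rules.

```python
# Degree-one rule for Minimum Vertex Cover: if v has a single neighbor u,
# some optimal cover contains u, so u can be taken and deleted.
def degree_one_reduction(adj):
    """adj: dict vertex -> set of neighbors (undirected graph, mutated)."""
    forced = set()                       # vertices forced into the cover
    changed = True
    while changed:
        changed = False
        for v in list(adj):
            if v in adj and len(adj[v]) == 1:
                (u,) = adj[v]            # v's unique neighbor
                forced.add(u)            # taking u covers edge (u, v) optimally
                for w in adj[u]:         # delete u and its incident edges
                    adj[w].discard(u)
                del adj[u]
                changed = True
    return forced, adj                   # forced vertices + reduced instance

adj = {1: {2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}   # the path 1-2-3-4
print(degree_one_reduction(adj)[0])             # {2, 4}: an optimal cover
```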

7 citations


Journal ArticleDOI
TL;DR: In this article, it was shown that symbolic powers of the vertex cover ideal are componentwise linear for trees (a subclass of chordal graphs), and that if G is a unicyclic vertex decomposable graph, then the symbolic powers of J(G) are componentwise linear.

5 citations


Book ChapterDOI
01 Jan 2022
TL;DR: Cáceres et al. present two parameterized MPC algorithms for directed acyclic graphs, running in time O(k^2|V| log |V| + |E|) and O(k^3|V| + |E|), together with a parallel algorithm in the PRAM model.
Sparsifying, Shrinking and Splicing for Minimum Path Cover in Parameterized Linear Time (Manuel Cáceres, Massimo Cairo, Brendan Mumey, Romeo Rizzi, and Alexandru I. Tomescu, SODA 2022, pp. 359-376, https://doi.org/10.1137/1.9781611977073.18)
Abstract: A minimum path cover (MPC) of a directed acyclic graph (DAG) G = (V, E) is a minimum-size set of paths that together cover all the vertices of the DAG. Computing an MPC is a basic polynomial problem, dating back to Dilworth's and Fulkerson's results in the 1950s. Since the size k of an MPC (also known as the width) can be small in practical applications, research has also studied algorithms whose running time is parameterized on k. We obtain two new MPC parameterized algorithms for DAGs running in time O(k^2|V| log |V| + |E|) and O(k^3|V| + |E|). We also obtain a parallel algorithm running in O(k^2|V| + |E|) parallel steps and using O(log |V|) processors (in the PRAM model). Our latter two algorithms are the first solving the problem in parameterized linear time. Finally, we show that we can transform (in O(k^2|V|) time) a given MPC into another MPC that uses less than 2|V| distinct edges, which we prove to be asymptotically tight. As such, we also obtain edge sparsification algorithms preserving the width of the DAG with the same running time as our MPC algorithms. At the core of all our algorithms we interleave the usage of three techniques: transitive sparsification, shrinking of a path cover, and the splicing of a set of paths along a given path.
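For contrast with the parameterized-linear algorithms above, a sketch of the classical polynomial-time route (my code, assuming the standard Dilworth/Fulkerson reduction): the MPC size equals |V| minus a maximum bipartite matching on the transitive closure.

```python
# Classical MPC via matching on the transitive closure (not the paper's method).
def mpc_size(n, edges):
    """Vertices 0..n-1 of a DAG; returns the minimum path cover size."""
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
    # Transitive closure: reach[s] = all vertices reachable from s.
    reach = [set() for _ in range(n)]
    for s in range(n):
        stack = list(adj[s])
        while stack:
            x = stack.pop()
            if x not in reach[s]:
                reach[s].add(x)
                stack.extend(adj[x])
    # Kuhn's augmenting-path matching: left u -- right v iff u reaches v.
    match = [-1] * n                       # match[v] = left endpoint for right v

    def augment(u, seen):
        for v in reach[u]:
            if v not in seen:
                seen.add(v)
                if match[v] == -1 or augment(match[v], seen):
                    match[v] = u
                    return True
        return False

    matched = sum(augment(u, set()) for u in range(n))
    return n - matched                     # Dilworth/Fulkerson: MPC = n - matching

# Two disjoint chains 0->1 and 2->3: width k = 2, so the MPC has 2 paths.
print(mpc_size(4, [(0, 1), (2, 3)]))       # 2
```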

5 citations


Journal ArticleDOI
TL;DR: In this paper, the authors show that deciding the feasibility of a PESP instance is NP-hard even when the treewidth is 2, the branchwidth is 2, or the carvingwidth is 3.
Abstract: Public transportation networks are typically operated with a periodic timetable. The periodic event scheduling problem (PESP) is the standard mathematical modeling tool for periodic timetabling. PESP is a computationally very challenging problem: For example, solving the instances of the benchmarking library PESPlib to optimality seems out of reach. Since PESP can be solved in linear time on trees, and the treewidth is a rather small graph parameter in the networks of the PESPlib, it is a natural question to ask whether there are polynomial-time algorithms for input networks of bounded treewidth, or even better, fixed-parameter tractable algorithms. We show that deciding the feasibility of a PESP instance is NP-hard even when the treewidth is 2, the branchwidth is 2, or the carvingwidth is 3. Analogous results hold for the optimization of reduced PESP instances, where the feasibility problem is trivial. Moreover, we show W[1]-hardness of the general feasibility problem with respect to treewidth, which means that we can most likely only accomplish pseudo-polynomial-time algorithms on input networks with bounded tree- or branchwidth. We present two such algorithms based on dynamic programming. We further analyze the parameterized complexity of PESP with bounded cyclomatic number, diameter, or vertex cover number. For event-activity networks with a special—but standard—structure, we give explicit and sharp bounds on the branchwidth in terms of the maximum degree and the carvingwidth of an underlying line network. Finally, we investigate several parameters on the smallest instance of the benchmarking library PESPlib.
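To make the PESP model concrete, a toy brute-force feasibility check (a sketch under my reading of the standard formulation; the paper's algorithms are far more sophisticated): event times live modulo the period T, and each arc (i, j, l, u) requires the periodic tension pi[j] - pi[i] to lie in [l, u] modulo T.

```python
from itertools import product

# Brute-force PESP feasibility: try all T^m timetables (illustration only).
def pesp_feasible(num_events, arcs, T):
    """arcs: list of (i, j, l, u); returns a feasible timetable or None."""
    for pi in product(range(T), repeat=num_events):
        if all((pi[j] - pi[i] - l) % T <= u - l for i, j, l, u in arcs):
            return pi
    return None

# Period 10: event 1 must happen 2 to 4 minutes after event 0.
print(pesp_feasible(2, [(0, 1, 2, 4)], T=10))  # e.g. (0, 2)
```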

5 citations


Journal ArticleDOI
TL;DR: In this paper, a general, partial-order-based formulation of extension problems is proposed and studied for extension variants of dominating set and feedback vertex/edge set, where the parameter is a measure of the pre-solution as defined by the framework.
Abstract: The question of whether a given partial solution to a problem can be extended reasonably occurs in many algorithmic approaches for optimization problems. For instance, when enumerating minimal vertex covers of a graph G=(V,E), one usually arrives at the problem of deciding, for a vertex set U⊆V (a pre-solution), if there exists a minimal vertex cover S (i.e., a vertex cover S⊆V such that no proper subset of S is a vertex cover) with U⊆S (a minimal extension of U). We propose a general, partial-order-based formulation of such extension problems which allows modeling parameterization and approximation aspects of extension, and also highlights relationships between extension tasks for different specific problems. As examples, we study a number of specific problems which can be expressed and related in this framework. In particular, we discuss extension variants of the problems dominating set and feedback vertex/edge set. All these problems are shown to be NP-complete even when restricted to bipartite graphs of bounded degree, with the exception of our extension version of feedback edge set on undirected graphs, which is shown to be solvable in polynomial time. For the extension variants of dominating and feedback vertex set, we also show NP-completeness for the restriction to planar graphs of bounded degree. As a non-graph problem, we also study an extension version of the bin packing problem. We further consider the parameterized complexity of all these extension variants, where the parameter is a measure of the pre-solution as defined by our framework.
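A small checker for the minimal-extension notion from the abstract (my sketch, using the standard characterization that a vertex cover S is minimal iff every v in S has an incident edge whose other endpoint lies outside S). Note it only verifies a candidate S; deciding whether any minimal S ⊇ U exists is the NP-complete extension problem studied here.

```python
# Verify that S is a minimal vertex cover extending the pre-solution U.
def is_minimal_extension(edges, U, S):
    covers = all(u in S or v in S for u, v in edges)
    private = {v: False for v in S}        # v needs an edge only it covers
    for u, v in edges:
        if u in S and v not in S:
            private[u] = True
        if v in S and u not in S:
            private[v] = True
    return covers and all(private.values()) and U <= S

edges = [(1, 2), (2, 3), (3, 4)]
print(is_minimal_extension(edges, U={2}, S={2, 3}))     # True
print(is_minimal_extension(edges, U={2}, S={1, 2, 3}))  # False: 1 and 2 are redundant
```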

4 citations


Journal ArticleDOI
TL;DR: This work introduces the minimum idleness connectivity-constrained multi-robot patrolling problem, shows that it is NP-hard, models it as a mixed-integer linear program (MILP), and develops approximation algorithms that take a solution for MMCCP as input.
Abstract: We consider a multi-robot patrolling scenario with intermittent connectivity constraints, ensuring that robots' data finally arrive at a base station. In particular, each robot traverses a closed tour periodically and meets with the robots on neighboring tours to exchange data. We model the problem as a variant of the min-max vertex cycle cover problem (MMCCP), which is the problem of covering all vertices with a given number of disjoint tours such that the longest tour length is minimal. In this work, we introduce the minimum idleness connectivity-constrained multi-robot patrolling problem, show that it is NP-hard, and model it as a mixed-integer linear program (MILP). The computational complexity of exactly solving this problem restrains practical applications, and therefore we develop approximation algorithms taking a solution for MMCCP as input. Our simulation experiments on instances with 10 vertices and up to 3 robots compare the results of different solution approaches (including solving the MILP formulation) and show that our greedy algorithm can obtain an objective value close to that of the MILP formulation while requiring much less computation time. Experiments on instances with up to 100 vertices and 10 robots indicate that the greedy approximation algorithm tries to keep the length of the longest tour small by extending smaller tours for data exchange.

4 citations



Journal ArticleDOI
TL;DR: In this paper, the complexity of TVC and Delta-TVC on sparse graphs is studied; it is shown that Delta-TVC is NP-hard even when the underlying topology is described by a path or a cycle, whereas TVC is polynomial-time solvable in that setting.
Abstract: Temporal graphs naturally model graphs whose underlying topology changes over time. Recently, the problems Temporal Vertex Cover (or TVC) and Sliding-Window Temporal Vertex Cover (or Δ-TVC for time windows of a fixed length Δ) have been established as natural extensions of the classic Vertex Cover problem on static graphs, with connections to areas such as surveillance in sensor networks. In this paper we initiate a systematic study of the complexity of TVC and Δ-TVC on sparse graphs. Our main result shows that for every Δ ≥ 2, Δ-TVC is NP-hard even when the underlying topology is described by a path or a cycle. This resolves an open problem from the literature and shows a surprising contrast between Δ-TVC and TVC, for which we provide a polynomial-time algorithm in the same setting. To circumvent this hardness, we present a number of exact and approximation algorithms for temporal graphs whose underlying topologies are given by a path, that have bounded vertex degree in every time step, or that admit a small-sized temporal vertex cover.
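A definition-level checker for Δ-TVC (my sketch, following the sliding-window definition as stated in this line of work: every edge appearing within any window of Δ consecutive steps must be covered inside that window by a pair (v, t) with v an endpoint and the edge present at time t):

```python
# Check a sliding-window temporal vertex cover.
def is_delta_tvc(layers, cover, delta):
    """layers: list of edge sets per step; cover: set of (vertex, time) pairs."""
    T = len(layers)
    for start in range(T - delta + 1):
        window = range(start, start + delta)
        for e in set().union(*(layers[t] for t in window)):
            covered = any((v, t) in cover and e in layers[t]
                          for t in window for v in e)
            if not covered:
                return False
    return True

layers = [{(1, 2)}, {(2, 3)}]
print(is_delta_tvc(layers, {(2, 0), (2, 1)}, delta=1))  # True
print(is_delta_tvc(layers, {(2, 0)}, delta=2))          # False: (2, 3) never covered
```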

4 citations




Proceedings ArticleDOI
29 Nov 2022
TL;DR: In this paper, it was shown that at least n^{1.2-o(1)} queries in the adjacency list model are needed for obtaining a (2/3 + Ω(1))-approximation of the maximum matching size.
Abstract: Sublinear time algorithms for approximating maximum matching size have long been studied. Much of the progress over the last two decades on this problem has been on the algorithmic side. For instance, an algorithm of [Behnezhad; FOCS'21] obtains a 1/2-approximation in O(n) time for n-vertex graphs. A more recent algorithm by [Behnezhad, Roghani, Rubinstein, and Saberi; SODA'23] obtains a slightly-better-than-1/2 approximation in O(n^{1+ε}) time (for arbitrarily small constant ε > 0). On the lower bound side, [Parnas and Ron; TCS'07] showed 15 years ago that obtaining any constant approximation of maximum matching size requires Ω(n) time. Proving any super-linear in n lower bound, even for (1−ε)-approximations, has remained elusive since then. In this paper, we prove the first super-linear in n lower bound for this problem. We show that at least n^{1.2-o(1)} queries in the adjacency list model are needed for obtaining a (2/3 + Ω(1))-approximation of the maximum matching size. This holds even if the graph is bipartite and is promised to have a matching of size Θ(n). Our lower bound argument builds on techniques such as correlation decay that to our knowledge have not been used before in proving sublinear time lower bounds. We complement our lower bound by presenting two algorithms that run in strongly sublinear time of n^{2-Ω(1)}. The first algorithm achieves a (2/3−ε)-approximation (for any arbitrarily small constant ε > 0); this significantly improves prior close-to-1/2 approximations. Our second algorithm obtains an even better approximation factor of (2/3 + Ω(1)) for bipartite graphs. This breaks the 2/3-approximation barrier that has appeared in various settings of the matching problem, and importantly shows that our n^{1.2-o(1)} time lower bound for (2/3 + Ω(1))-approximations cannot be improved all the way to n^{2-o(1)}.
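A sketch of the adjacency-list query model that the n^{1.2-o(1)} bound counts probes in (interface names are mine): the algorithm may ask only for degrees and for the i-th neighbor of a vertex, and each such probe costs one query.

```python
# Query-counting oracle for the adjacency list model.
class AdjacencyListOracle:
    def __init__(self, adj):         # adj: dict vertex -> list of neighbors
        self._adj = adj
        self.queries = 0

    def degree(self, v):
        self.queries += 1
        return len(self._adj[v])

    def neighbor(self, v, i):        # i-th neighbor of v, 0-indexed
        self.queries += 1
        return self._adj[v][i]

oracle = AdjacencyListOracle({0: [1], 1: [0, 2], 2: [1]})
print(oracle.degree(1), oracle.neighbor(1, 1), oracle.queries)  # 2 2 2
```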

Journal ArticleDOI
01 May 2022-Sensors
TL;DR: Two self-stabilizing capacitated vertex cover algorithms for WSNs are proposed, the first attempts of this kind; they are faster than their competitors, use less energy, and offer better vertex cover solutions.
Abstract: Wireless sensor networks (WSNs) achieving environmental sensing are fundamental communication layer technologies in the Internet of Things. Battery-powered sensor nodes may face many problems, such as battery drain and software faults. Therefore, the utilization of self-stabilization, one of the fault-tolerance techniques, brings the network back to a legitimate state when the topology changes because nodes leave. In this technique, a scheduler decides which nodes may execute their rules, regarding spatial and temporal properties. A useful graph-theoretical structure is the vertex cover, which can be utilized in various WSN applications such as routing, clustering, replica placement and link monitoring. A capacitated vertex cover is the generalized version of the problem which restricts the number of edges covered by a vertex by applying a capacity constraint. In this paper, we propose two self-stabilizing capacitated vertex cover algorithms for WSNs. To the best of our knowledge, these algorithms are the first attempts in this manner. The first algorithm stabilizes under an unfair distributed scheduler (that is, a scheduler which does not grant all enabled nodes their moves but guarantees the global progress of the system) in at most O(n^2) steps, where n is the number of nodes. The second algorithm assumes 2-hop (degree-2) knowledge about the network and runs under the unfair scheduler, which subsumes the synchronous and distributed fair schedulers; it stabilizes after O(n) moves in O(n) steps, which is acceptable for most WSN setups. We theoretically analyze the algorithms to provide proofs of correctness and their step complexities. Moreover, we provide simulation setups applying IRIS sensor node parameters and compare our algorithms with their counterparts. The measurements gathered from the simulations reveal that the proposed algorithms are faster than their competitors, use less energy and offer better vertex cover solutions.
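A toy round-based simulation of the self-stabilization mechanics (emphatically not the paper's algorithms: no capacities, no fault model; the rule below only shows how enabled nodes move under a scheduler until no rule fires):

```python
import random

# Toy rule: a node enters the cover if some incident edge is uncovered.
# Each move adds a node, so the system stabilizes after at most n moves
# with a (possibly far from optimal) vertex cover.
def stabilize(adj, seed=0):
    rng = random.Random(seed)
    in_cover = {v: False for v in adj}
    moves = 0
    while True:
        enabled = [v for v in adj
                   if not in_cover[v] and any(not in_cover[u] for u in adj[v])]
        if not enabled:
            return {v for v in in_cover if in_cover[v]}, moves
        in_cover[rng.choice(enabled)] = True   # the scheduler picks one enabled node
        moves += 1

print(stabilize({1: {2}, 2: {1, 3}, 3: {2}}))  # e.g. ({2}, 1) or ({1, 3}, 2)
```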

Journal ArticleDOI
TL;DR: In this article, it is shown that the existence of a polynomial kernel for k-Dominating Set on graphs of twin-width at most 4 would contradict a standard complexity-theoretic assumption.
Abstract: We study the existence of polynomial kernels, for parameterized problems without a polynomial kernel on general graphs, when restricted to graphs of bounded twin-width. Our main result is that a polynomial kernel for k-Dominating Set on graphs of twin-width at most 4 would contradict a standard complexity-theoretic assumption. The reduction is quite involved, especially to get the twin-width upper bound down to 4, and can be tweaked to work for Connected k-Dominating Set and Total k-Dominating Set (albeit with a worse upper bound on the twin-width). The k-Independent Set problem admits the same lower bound by a much simpler argument, previously observed [ICALP '21], which extends to k-Independent Dominating Set, k-Path, k-Induced Path, k-Induced Matching, etc. On the positive side, we obtain a simple quadratic vertex kernel for Connected k-Vertex Cover and Capacitated k-Vertex Cover on graphs of bounded twin-width. Interestingly the kernel applies to graphs of Vapnik–Chervonenkis density 1, and does not require a witness sequence. We also present a more intricate O(k^{1.5}) vertex kernel for Connected k-Vertex Cover. Finally we show that deciding if a graph has twin-width at most 1 can be done in polynomial time, and observe that most optimization/decision graph problems can be solved in polynomial time on graphs of twin-width at most 1.


Journal ArticleDOI
TL;DR: In this article, the authors propose a distributed localized algorithm for constructing a vertex cover using 2-hop local neighborhood information in distributed systems, where the score of each node determines the average coverage ratio by each neighbor of that node.

Journal ArticleDOI
TL;DR: In this paper, the authors study the problem of multistage vertex cover in temporal graphs and show that it is computationally hard even in fairly restricted settings; they also spot several fixed-parameter tractability results based on some of the most natural parameterizations.
Abstract: The NP-complete Vertex Cover problem asks to cover all edges of a graph by a small (given) number of vertices. It is among the most prominent graph-algorithmic problems. Following a recent trend in studying temporal graphs (a sequence of graphs, so-called layers, over the same vertex set but, over time, changing edge sets), we initiate the study of Multistage Vertex Cover. Herein, given a temporal graph, the goal is to find for each layer of the temporal graph a small vertex cover and to guarantee that the vertex cover sets of every two consecutive layers do not differ too much (specified by a given parameter). We show that, different from classic Vertex Cover and some other dynamic or temporal variants of it, Multistage Vertex Cover is computationally hard even in fairly restricted settings. On the positive side, however, we also spot several fixed-parameter tractability results based on some of the most natural parameterizations.
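A checker matching the Multistage Vertex Cover statement (my sketch; the names k for the cover size bound and ell for the allowed change between consecutive layers are assumptions):

```python
# Verify layer-wise covers of bounded size with bounded consecutive change.
def is_multistage_vc(layers, covers, k, ell):
    """layers: list of edge sets; covers: one vertex set per layer."""
    for edges, S in zip(layers, covers):
        if len(S) > k or any(u not in S and v not in S for u, v in edges):
            return False                       # too large, or an uncovered edge
    return all(len(covers[i] ^ covers[i + 1]) <= ell   # symmetric difference
               for i in range(len(covers) - 1))

layers = [{(1, 2)}, {(2, 3)}]
print(is_multistage_vc(layers, [{2}, {2}], k=1, ell=0))  # True
print(is_multistage_vc(layers, [{1}, {3}], k=1, ell=0))  # False: covers jump
```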


Proceedings ArticleDOI
04 Feb 2022
TL;DR: A framework for sparsification of planar graphs which approximately preserves all separators and near-separators between subsets of the given terminal set is developed, yielding an improvement over the state-of-the-art approximation algorithms for Vertex planarization.
Abstract: In the F-minor-free deletion problem we are given an undirected graph G and the goal is to find a minimum vertex set that intersects all minor models of graphs from the family F. This captures numerous important problems including Vertex cover, Feedback vertex set, Treewidth-η modulator, and Vertex planarization. In the latter one, we ask for a minimum vertex set whose removal makes the graph planar. This is a special case of F-minor-free deletion for the family F = {K5, K3,3}. Whenever the family F contains at least one planar graph, then F-minor-free deletion is known to admit a constant-factor approximation algorithm and a polynomial kernelization [Fomin, Lokshtanov, Misra, and Saurabh, FOCS'12]. A polynomial kernelization is a polynomial-time algorithm that, given a graph G and integer k, outputs a graph G′ on poly(k) vertices and integer k′, so that OPT(G) ≤ k if and only if OPT(G′) ≤ k′. The Vertex planarization problem is arguably the simplest setting for which F does not contain a planar graph and the existence of a constant-factor approximation or a polynomial kernelization remains a major open problem. In this work we show that Vertex planarization admits an algorithm which is a combination of both approaches. Namely, we present a polynomial α-approximate kernelization, for some constant α > 1, based on the framework of lossy kernelization [Lokshtanov, Panolan, Ramanujan, and Saurabh, STOC'17]. Simply speaking, when given a graph G and integer k, we show how to compute a graph G′ on poly(k) vertices so that any β-approximate solution to G′ can be lifted to an (α·β)-approximate solution to G, as long as OPT(G) ≤ k/(α·β). In order to achieve this, we develop a framework for sparsification of planar graphs which approximately preserves all separators and near-separators between subsets of the given terminal set. Our result yields an improvement over the state-of-the-art approximation algorithms for Vertex planarization. The problem admits a polynomial-time O(n^ε)-approximation algorithm, for any ε > 0, and a quasi-polynomial-time (log n)^{O(1)}-approximation algorithm, where n is the input size, both randomized [Kawarabayashi and Sidiropoulos, FOCS'17]. By pipelining these algorithms with our approximate kernelization, we improve the approximation factors to respectively O(OPT^ε) and (log OPT)^{O(1)}.

Proceedings ArticleDOI
01 Jul 2022
TL;DR: In this article, a multivariate complexity analysis of the network untangling problem is performed, involving the following parameters: the number of vertices, the lifetime of the temporal graph, the number of intervals per vertex, and the interval length bound.
Abstract: We study the recently introduced network untangling problem, a variant of Vertex Cover on temporal graphs---graphs whose edge set changes over discrete time steps. There are two versions of this problem. The goal is to select at most k time intervals for each vertex such that all time-edges are covered and (depending on the problem variant) either the maximum interval length or the total sum of interval lengths is minimized. This problem has data mining applications in finding activity timelines that explain the interactions of entities in complex networks. Both variants of the problem are NP-hard. In this paper, we initiate a multivariate complexity analysis involving the following parameters: number of vertices, lifetime of the temporal graph, number of intervals per vertex, and the interval length bound. For both problem versions, we (almost) completely settle the parameterized complexity for all combinations of those four parameters, thereby delineating the border of fixed-parameter tractability.

Journal ArticleDOI
TL;DR: In this paper, two new versions of enumeration kernels were proposed, requiring that the solutions of the original instance can be enumerated in polynomial time, or with polynomial delay, from the kernel's solutions.

Journal ArticleDOI
TL;DR: In this article, it was shown that the eternal vertex cover problem can be solved in polynomial time for locally connected graphs, a graph class which includes all biconnected internally triangulated planar graphs.

Book ChapterDOI
01 Jan 2022
TL;DR: In this article, Behnezhad et al. present a (2 + ε)-approximation for general graphs that queries few edges per vertex, and a 1.367-approximation for bipartite graphs that queries poly(1/p) edges per vertex.
Stochastic Vertex Cover with Few Queries (Soheil Behnezhad, Avrim Blum, and Mahsa Derakhshan, SODA 2022, pp. 1808-1846, https://doi.org/10.1137/1.9781611977073.73)
Abstract: We study the minimum vertex cover problem in the following stochastic setting. Let G be an arbitrary given graph, p ∈ (0, 1] a parameter of the problem, and let Gp be a random subgraph that includes each edge of G independently with probability p. We are unaware of the realization Gp, but can learn if an edge e exists in Gp by querying it. The goal is to find an approximate minimum vertex cover (MVC) of Gp by querying few edges of G non-adaptively. This stochastic setting has been studied extensively for various problems such as minimum spanning trees, matroids, shortest paths, and matchings. To our knowledge, however, no non-trivial bound was known for MVC prior to our work. In this work, we present a (2 + ε)-approximation for general graphs which queries few edges per vertex, and a 1.367-approximation for bipartite graphs which queries poly(1/p) edges per vertex. Additionally, we show that at the expense of a triple-exponential dependence on 1/p in the number of queries, the approximation ratio can be improved down to (1 + ε) for bipartite graphs. Our techniques also lead to improved bounds for bipartite stochastic matching. We obtain a 0.731-approximation with nearly-linear in 1/p per-vertex queries. This is the first result to break the prevalent (2/3 ≈ 0.66)-approximation barrier in the poly(1/p) query regime, improving algorithms of [Behnezhad et al., SODA'19] and [Assadi and Bernstein, SOSA'19].
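To make the setting concrete, a naive baseline (my sketch, not the paper's algorithm): query every edge to reveal Gp, then 2-approximate MVC on the realization via a maximal matching. The paper's contribution is getting comparable ratios with only few per-vertex queries.

```python
import random

# Naive baseline: |E| queries, then the classic matching-based 2-approximation.
def query_all_then_cover(edges, p, seed=0):
    rng = random.Random(seed)
    realized = [e for e in edges if rng.random() < p]   # one query per edge
    cover = set()
    for u, v in realized:          # greedy maximal matching on G_p
        if u not in cover and v not in cover:
            cover.update((u, v))   # take both endpoints of each matched edge
    return cover, len(edges)       # vertex cover of G_p, queries spent

print(query_all_then_cover([(1, 2), (2, 3), (3, 4)], p=0.5))
```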

Book ChapterDOI
11 Jan 2022
TL;DR: In this article, the partial vertex cover (PVC) problem is shown to be W[1]-hard on general graphs, while admitting a parameterized subexponential-time algorithm with running time 2^{O(√k)} n^{O(1)} on planar and apex-minor-free graphs.
Abstract: In the Partial Vertex Cover (PVC) problem, we are given an n-vertex graph G and a positive integer k, and the objective is to find a vertex subset S of size k maximizing the number of edges with at least one end-point in S. This problem is W[1]-hard on general graphs, but admits a parameterized subexponential-time algorithm with running time 2^{O(√k)} n^{O(1)} on planar and apex-minor-free graphs [Fomin et al. (FSTTCS 2009, IPL 2011)], and a k^{O(k)} n^{O(1)} time algorithm on bounded-degeneracy graphs [Amini et al. (FSTTCS 2009, JCSS 2011)]. Graphs of bounded degeneracy contain many sparse graph classes like planar graphs, H-minor-free graphs, and bounded-treewidth graphs. In this work, we prove the following results:
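As a definition aid for the PVC objective, a brute-force reference of my own (exponential and purely illustrative next to the parameterized algorithms above):

```python
from itertools import combinations

# Try every k-subset S and count edges with at least one endpoint in S.
def max_partial_cover(vertices, edges, k):
    best = max(combinations(vertices, k),
               key=lambda S: sum(u in S or v in S for u, v in edges))
    covered = sum(u in best or v in best for u, v in edges)
    return set(best), covered

edges = [(1, 2), (1, 3), (1, 4), (3, 4)]
print(max_partial_cover([1, 2, 3, 4], edges, k=1))  # ({1}, 3)
```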

Journal ArticleDOI
TL;DR: In this article, the authors consider online graph problems in models with relaxed irrevocability, including the Late Accept model, in which a request can be accepted at a later point but any acceptance is irrevocable, and show that for Maximum Independent Set the Late Accept/Reject model is necessary to obtain a constant competitive ratio.
Abstract: Online graph problems are considered in models where the irrevocability requirement is relaxed. We consider the Late Accept model, where a request can be accepted at a later point, but any acceptance is irrevocable. Similarly, we consider a Late Reject model, where an accepted request can later be rejected, but any rejection is irrevocable (this is sometimes called preemption). Finally, we consider the Late Accept/Reject model, where late accepts and rejects are both allowed, but any late reject is irrevocable. We consider four classical graph problems: For Maximum Independent Set, the Late Accept/Reject model is necessary to obtain a constant competitive ratio, for Minimum Vertex Cover the Late Accept model is sufficient, and for Minimum Spanning Forest the Late Reject model is sufficient. The Maximum Matching problem admits constant competitive ratios in all cases. We also consider Maximum Acyclic Subgraph and Maximum Planar Subgraph, which exhibit patterns similar to Maximum Independent Set.


Book ChapterDOI
01 Jan 2022
TL;DR: In this article, the authors investigate the parameterized complexity of the MMDS problem and obtain the following results about MMDS: 1. W[1]-hardness of the problem parameterized by the pathwidth (and thus, treewidth) of the input graph.
Abstract: Given a graph G = (V, E) and an integer k, the Minimum Membership Dominating Set (MMDS) problem seeks to find a dominating set S ⊆ V of G such that for each v ∈ V, |N[v] ∩ S| is at most k. We investigate the parameterized complexity of the problem and obtain the following results about MMDS: 1. W[1]-hardness of the problem parameterized by the pathwidth (and thus, treewidth) of the input graph. 2. W[1]-hardness parameterized by k on split graphs. 3. An algorithm running in time 2^{O(vc)} |V|^{O(1)}, where vc is the size of a minimum vertex cover of the input graph. 4. An ETH-based lower bound showing that the algorithm mentioned in the previous item is optimal.
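A direct checker for the MMDS constraints (a sketch straight from the definition above):

```python
# S must dominate every vertex, and |N[v] ∩ S| may not exceed k for any v.
def is_mmds(adj, S, k):
    """adj: dict v -> set of neighbors; closed neighborhood N[v] = {v} | adj[v]."""
    for v in adj:
        hits = len(({v} | adj[v]) & S)
        if hits == 0 or hits > k:
            return False
    return True

adj = {1: {2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}      # the path 1-2-3-4
print(is_mmds(adj, S={1, 4}, k=1))  # True: every N[v] meets S exactly once
print(is_mmds(adj, S={2, 3}, k=1))  # False: N[2] contains both 2 and 3
```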

Book ChapterDOI
01 Jan 2022
TL;DR: Guruswami and Sandeep give an approximation algorithm based on LP rounding for an algorithmic version of the hypergraph Turán problem (AHTP), whose objective is to pick the smallest collection of (t−1)-sized subsets of vertices of an input t-uniform hypergraph such that every hyperedge contains one of these subsets.
Approximate Hypergraph Vertex Cover and generalized Tuza's conjecture (Venkatesan Guruswami and Sai Sandeep, SODA 2022, pp. 927-944, https://doi.org/10.1137/1.9781611977073.40)
Abstract: A famous conjecture of Tuza states that the minimum number of edges needed to cover all the triangles in a graph is at most twice the maximum number of edge-disjoint triangles. This conjecture was couched in a broader setting by Aharoni and Zerbib, who proposed a hypergraph version of this conjecture and also studied its implied fractional versions. We establish the fractional version of the Aharoni-Zerbib conjecture up to lower-order terms. Specifically, we give an approximation algorithm based on LP rounding for an algorithmic version of the hypergraph Turán problem (AHTP). The objective in AHTP is to pick the smallest collection of (t−1)-sized subsets of vertices of an input t-uniform hypergraph such that every hyperedge contains one of these subsets. Aharoni and Zerbib also posed whether Tuza's conjecture and its hypergraph versions could follow from non-trivial duality gaps between vertex covers and matchings on hypergraphs that exclude certain sub-hypergraphs, for instance, a "tent" structure that cannot occur in the incidence of triangles and edges. We give a strong negative answer to this question, by exhibiting tent-free hypergraphs, and indeed ℱ-free hypergraphs for any finite family ℱ of excluded sub-hypergraphs, whose vertex covers must include almost all the vertices. The algorithmic questions arising in the above study can be phrased as instances of vertex cover on simple hypergraphs, whose hyperedges can pairwise share at most one vertex. We prove that the trivial factor-t approximation for vertex cover is hard to improve for simple t-uniform hypergraphs. However, for set cover on simple n-vertex hypergraphs, the greedy algorithm achieves a factor (ln n)/2, better than the optimal ln n factor for general hypergraphs.
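A greedy sketch for the AHTP objective as defined above (my implementation; the paper's algorithm is LP rounding, not this greedy): repeatedly pick the (t−1)-subset contained in the largest number of still-uncovered hyperedges.

```python
from itertools import combinations

# Greedy AHTP: cover t-uniform hyperedges with (t-1)-sized vertex subsets.
def greedy_ahtp(hyperedges, t):
    uncovered = set(range(len(hyperedges)))
    chosen = []
    while uncovered:
        counts = {}
        for i in uncovered:                      # count candidate subsets
            for sub in combinations(sorted(hyperedges[i]), t - 1):
                counts[sub] = counts.get(sub, 0) + 1
        best = set(max(counts, key=counts.get))  # most frequent (t-1)-subset
        chosen.append(best)
        uncovered = {i for i in uncovered if not best <= hyperedges[i]}
    return chosen

triangles = [frozenset({1, 2, 3}), frozenset({2, 3, 4})]
print(greedy_ahtp(triangles, t=3))  # [{2, 3}] covers both hyperedges
```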


Journal ArticleDOI
TL;DR: An improved algorithm is shown to compute a Δ-temporal matching of maximum size with a running time of Δ^{O(ν)}·|G|, hence providing an exponential speedup in terms of Δ.