Journal ArticleDOI

Fully Dynamic Maximal Matching in $O(\log n)$ Update Time

05 Feb 2015-SIAM Journal on Computing (Society for Industrial and Applied Mathematics)-Vol. 44, Iss: 1, pp 88-113
TL;DR: An algorithm for maintaining a maximal matching in a graph under addition and deletion of edges in expected amortized $O(\log n)$ time per update; as a direct corollary, the same scheme maintains a factor-2 approximate maximum matching within the same update time.
Abstract: We present an algorithm for maintaining maximal matching in a graph under addition and deletion of edges. Our algorithm is randomized and it takes expected amortized $O(\log n)$ time for each edge update, where $n$ is the number of vertices in the graph. While there exists a trivial $O(n)$ time algorithm for each edge update, the previous best known result for this problem is due to Ivković and Lloyd [Lecture Notes in Comput. Sci. 790, Springer-Verlag, London, 1994, pp. 99--111]. For a graph with $n$ vertices and $m$ edges, they gave an $O((n+m)^{0.7072})$ update time algorithm which is sublinear only for a sparse graph. For the related problem of maximum matching, Onak and Rubinfeld [Proceedings of STOC'10, Cambridge, MA, 2010, pp. 457--464] designed a randomized algorithm that achieves expected amortized $O(\log^2 n)$ time for each update for maintaining a $c$-approximate maximum matching for some unspecified large constant $c$. In contrast, we can maintain a factor 2 approximate maximum matching in expected amortized $O(\log n)$ time per update as a direct corollary of the maximal matching scheme. This in turn also implies a 2-approximate vertex cover maintenance scheme that takes expected amortized $O(\log n)$ time per update.
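
For intuition, here is a minimal sketch of the trivial $O(n)$-per-update approach mentioned in the abstract, together with the standard observation that the matched vertices of any maximal matching form a 2-approximate vertex cover. This is illustrative background only, not the paper's randomized $O(\log n)$ amortized scheme; the class and method names are invented for the example.

```python
# Illustrative sketch of the trivial O(n)-per-update maximal matching
# maintenance mentioned in the abstract; NOT the paper's randomized
# expected amortized O(log n) scheme.
class TrivialDynamicMatching:
    def __init__(self):
        self.adj = {}    # vertex -> set of neighbors
        self.mate = {}   # vertex -> matched partner (absent if free)

    def _try_match(self, u):
        # Scan u's neighbors for a free partner: O(deg(u)) = O(n) work.
        if u in self.mate:
            return
        for v in self.adj.get(u, ()):
            if v not in self.mate:
                self.mate[u] = v
                self.mate[v] = u
                return

    def insert_edge(self, u, v):
        self.adj.setdefault(u, set()).add(v)
        self.adj.setdefault(v, set()).add(u)
        # The new edge can violate maximality only if both endpoints are free.
        if u not in self.mate and v not in self.mate:
            self.mate[u] = v
            self.mate[v] = u

    def delete_edge(self, u, v):
        # Assumes (u, v) was previously inserted.
        self.adj[u].discard(v)
        self.adj[v].discard(u)
        # If (u, v) was a matched edge, both endpoints become free and
        # must be rematched to restore maximality.
        if self.mate.get(u) == v:
            del self.mate[u]
            del self.mate[v]
            self._try_match(u)
            self._try_match(v)

    def vertex_cover(self):
        # Endpoints of a maximal matching form a 2-approximate vertex cover,
        # and the matching itself is a 2-approximate maximum matching.
        return set(self.mate)
```

The costly step is rematching a freed endpoint after a matched edge is deleted, which can scan up to $n$ neighbors; the paper's randomized scheme achieves expected amortized $O(\log n)$ per update precisely by avoiding this linear cost on typical deletions.
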
Citations
Proceedings ArticleDOI
01 Oct 2017
TL;DR: In this article, a distributed model of probabilistically checkable proofs (PCPs) is proposed, in which Alice and Bob jointly write a PCP that $x$ satisfies a CNF formula while exchanging little or no information.
Abstract: We present a new distributed model of probabilistically checkable proofs (PCP). A satisfying assignment $x \in \{0,1\}^n$ to a CNF formula $\phi$ is shared between two parties, where Alice knows $x_1, \dots, x_{n/2}$, Bob knows $x_{n/2+1}, \dots, x_n$, and both parties know $\phi$. The goal is to have Alice and Bob jointly write a PCP that $x$ satisfies $\phi$, while exchanging little or no information. Unfortunately, this model as-is does not allow for nontrivial query complexity. Instead, we focus on a non-deterministic variant, where the players are helped by Merlin, a third party who knows all of $x$. Using our framework, we obtain, for the first time, PCP-like reductions from the Strong Exponential Time Hypothesis (SETH) to approximation problems in P. In particular, under SETH we show that there are no truly subquadratic approximation algorithms for the following problems: Maximum Inner Product over $\{0,1\}$-vectors, LCS Closest Pair over permutations, Approximate Partial Match, Approximate Regular Expression Matching, and Diameter in Product Metric. All our inapproximability factors are nearly tight. In particular, for the first three problems we obtain nearly polynomial factors of $2^{(\log n)^{1-o(1)}}$; only $(1+o(1))$-factor lower bounds (under SETH) were known before. As an additional feature of our reduction, we obtain new SETH lower bounds for the exact monochromatic Closest Pair problem in the Euclidean, Manhattan, and Hamming metrics.

75 citations

Proceedings ArticleDOI
19 Jun 2016
TL;DR: This work presents two deterministic dynamic algorithms for the maximum matching problem, including the first deterministic algorithm that maintains an $o(\log n)$-approximate maximum matching with polylogarithmic update time.
Abstract: We present two deterministic dynamic algorithms for the maximum matching problem. (1) An algorithm that maintains a $(2+\epsilon)$-approximate maximum matching in general graphs with $O(\mathrm{poly}(\log n, 1/\epsilon))$ update time. (2) An algorithm that maintains an $\alpha_K$ approximation of the value of the maximum matching with $O(n^{2/K})$ update time in bipartite graphs, for every sufficiently large constant positive integer $K$. Here, $1 \le \alpha_K < 2$ is a constant determined by the value of $K$. Result (1) is the first deterministic algorithm that can maintain an $o(\log n)$-approximate maximum matching with polylogarithmic update time, improving the seminal result of Onak et al. [STOC 2010]. Its approximation guarantee almost matches the guarantee of the best randomized polylogarithmic update time algorithm [Baswana et al. FOCS 2011]. Result (2) achieves a better-than-two approximation with arbitrarily small polynomial update time on bipartite graphs. Previously the best update time for this problem was $O(m^{1/4})$ [Bernstein et al. ICALP 2015], where $m$ is the current number of edges in the graph.

58 citations

Proceedings ArticleDOI
19 Jun 2017
TL;DR: In this paper, the authors give new results for the set cover problem in the fully dynamic model, where the set of "active" elements to be covered changes over time, and the goal is to maintain a near-optimal solution for the currently active elements, while making few changes in each timestep.
Abstract: In this paper, we give new results for the set cover problem in the fully dynamic model. In this model, the set of "active" elements to be covered changes over time. The goal is to maintain a near-optimal solution for the currently active elements, while making few changes in each timestep. This model is popular in both dynamic and online algorithms: in the former, the goal is to minimize the update time of the solution, while in the latter, the recourse (number of changes) is bounded. We present generic techniques for the dynamic set cover problem inspired by the classic greedy and primal-dual offline algorithms for set cover. The former leads to a competitive ratio of $O(\log n_t)$, where $n_t$ is the number of currently active elements at timestep $t$, while the latter yields competitive ratios dependent on $f_t$, the maximum number of sets that a currently active element belongs to. We demonstrate that these techniques are useful for obtaining tight results in both settings: update time bounds and limited recourse, exhibiting algorithmic techniques common to these two parallel threads of research.
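
As background for the abstract above, here is a minimal sketch of the classic offline greedy set cover rule (repeatedly pick the set covering the most uncovered elements), which is the source of the $O(\log n)$-style guarantee the dynamic algorithm mirrors. The function name and example instance are made up, and this is not the paper's dynamic data structure.

```python
# Background sketch: the classic offline greedy set cover, which achieves an
# O(log n) approximation; the paper's dynamic algorithm is inspired by (but
# is not the same as) this greedy rule.
def greedy_set_cover(universe, sets):
    """universe: set of elements; sets: dict name -> set of elements."""
    uncovered = set(universe)
    chosen = []
    while uncovered:
        # Pick the set covering the largest number of uncovered elements.
        best = max(sets, key=lambda s: len(sets[s] & uncovered))
        if not sets[best] & uncovered:
            raise ValueError("universe is not coverable by the given sets")
        chosen.append(best)
        uncovered -= sets[best]
    return chosen

# Example: cover {1..5} with three sets.
print(greedy_set_cover({1, 2, 3, 4, 5},
                       {"A": {1, 2, 3}, "B": {3, 4}, "C": {4, 5}}))
```
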

58 citations

Posted Content
TL;DR: The main insight of this work is that the intractability of matching and vertex cover in the simultaneous communication model is inherently connected to an adversarial partitioning of the underlying graph across machines.
Abstract: A common approach for designing scalable algorithms for massive data sets is to distribute the computation across, say $k$, machines and process the data using limited communication between them. A particularly appealing framework here is the simultaneous communication model whereby each machine constructs a small representative summary of its own data and one obtains an approximate/exact solution from the union of the representative summaries. If the representative summaries needed for a problem are small, then this results in a communication-efficient and round-optimal protocol. While many fundamental graph problems admit efficient solutions in this model, two prominent problems are notably absent from the list of successes, namely, the maximum matching problem and the minimum vertex cover problem. Indeed, it was shown recently that for both these problems, even achieving a polylog$(n)$ approximation requires essentially sending the entire input graph from each machine. The main insight of our work is that the intractability of matching and vertex cover in the simultaneous communication model is inherently connected to an adversarial partitioning of the underlying graph across machines. We show that when the underlying graph is randomly partitioned across machines, both these problems admit randomized composable coresets of size $\widetilde{O}(n)$ that yield an $\widetilde{O}(1)$-approximate solution. This results in an $\widetilde{O}(1)$-approximation simultaneous protocol for these problems with $\widetilde{O}(nk)$ total communication when the input is randomly partitioned across $k$ machines. We further prove the optimality of our results. Finally, by a standard application of composable coresets, our results also imply MapReduce algorithms with the same approximation guarantee in one or two rounds of communication.
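
Below is a schematic of the simultaneous communication pattern described above, assuming purely for illustration that each machine's summary is a greedy maximal matching of its locally held edges; the paper's randomized composable coresets and their $\widetilde{O}(1)$-approximation analysis are more involved, and all function names here are hypothetical.

```python
# Schematic of the simultaneous communication pattern: each machine sends a
# small summary of its (randomly assigned) edges, and a coordinator solves the
# problem on the union of summaries. For illustration the summary here is a
# greedy maximal matching of the local edges; the paper's randomized
# composable coresets are more refined than this.
import random

def partition_edges(edges, k, seed=0):
    """Randomly partition the edge set across k machines."""
    rng = random.Random(seed)
    machines = [[] for _ in range(k)]
    for e in edges:
        machines[rng.randrange(k)].append(e)
    return machines

def local_summary(local_edges):
    """Each machine's summary: a greedy maximal matching of its edges."""
    matched, summary = set(), []
    for u, v in local_edges:
        if u not in matched and v not in matched:
            matched.update((u, v))
            summary.append((u, v))
    return summary

def coordinator(summaries):
    """Solve matching on the union of the k summaries (greedy here)."""
    union = [e for s in summaries for e in s]
    return local_summary(union)

edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (0, 5)]
summaries = [local_summary(part) for part in partition_edges(edges, k=3)]
print(coordinator(summaries))
```
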

46 citations

Proceedings ArticleDOI
TL;DR: This work initiates the study of dynamic algorithms for graph sparsification problems and obtains fully dynamic algorithms, allowing both edge insertions and edge deletions, that take polylogarithmic time after each update in the graph.
Abstract: We initiate the study of dynamic algorithms for graph sparsification problems and obtain fully dynamic algorithms, allowing both edge insertions and edge deletions, that take polylogarithmic time after each update in the graph. Our three main results are as follows. First, we give a fully dynamic algorithm for maintaining a $(1 \pm \epsilon)$-spectral sparsifier with amortized update time $\mathrm{poly}(\log n, \epsilon^{-1})$. Second, we give a fully dynamic algorithm for maintaining a $(1 \pm \epsilon)$-cut sparsifier with worst-case update time $\mathrm{poly}(\log n, \epsilon^{-1})$. Both sparsifiers have size $n \cdot \mathrm{poly}(\log n, \epsilon^{-1})$. Third, we apply our dynamic sparsifier algorithm to obtain a fully dynamic algorithm for maintaining a $(1 + \epsilon)$-approximation to the value of the maximum flow in an unweighted, undirected, bipartite graph with amortized update time $\mathrm{poly}(\log n, \epsilon^{-1})$.
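
For reference, a $(1 \pm \epsilon)$-cut sparsifier $H$ of $G$ is a reweighted subgraph in which every cut has weight within a $(1 \pm \epsilon)$ factor of the corresponding cut in $G$. The brute-force checker below (exponential in the number of vertices, so for tiny examples only) simply spells out that definition; it is not part of the paper's algorithm, and the function names are illustrative.

```python
# Brute-force check of the (1 +/- eps)-cut sparsifier property on a small
# graph: every cut (S, V \ S) of H must have weight within (1 +/- eps) of the
# corresponding cut of G. Exponential in |V|; for intuition only.
from itertools import combinations

def cut_weight(side, edges):
    """edges: dict (u, v) -> weight; side: set of vertices on one side."""
    return sum(w for (u, v), w in edges.items()
               if (u in side) != (v in side))

def is_cut_sparsifier(vertices, edges_g, edges_h, eps):
    vs = list(vertices)
    for r in range(1, len(vs)):
        for side in combinations(vs, r):
            side = set(side)
            g = cut_weight(side, edges_g)
            h = cut_weight(side, edges_h)
            if not ((1 - eps) * g <= h <= (1 + eps) * g):
                return False
    return True
```
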

46 citations

References
Journal ArticleDOI
TL;DR: The classic Gale–Shapley paper that introduced the stable marriage and college admissions problems and the deferred acceptance algorithm, proving that a stable matching always exists and can be found by a simple iterative proposal procedure.
Abstract: (2013). College Admissions and the Stability of Marriage. The American Mathematical Monthly: Vol. 120, No. 5, pp. 386-391.
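
A compact sketch of the deferred acceptance procedure from this reference: proposers propose in preference order, and each receiver holds on to the best offer seen so far. The input format and names below are illustrative.

```python
# Sketch of deferred acceptance (Gale-Shapley): proposers propose in
# preference order, receivers keep their best offer so far; the result is
# always a stable matching (assuming equal numbers of proposers/receivers
# and complete preference lists).
def deferred_acceptance(proposer_prefs, receiver_prefs):
    """prefs: dict name -> list of names in decreasing order of preference."""
    rank = {r: {p: i for i, p in enumerate(prefs)}
            for r, prefs in receiver_prefs.items()}
    free = list(proposer_prefs)               # proposers not yet matched
    next_choice = {p: 0 for p in proposer_prefs}
    engaged = {}                              # receiver -> proposer
    while free:
        p = free.pop()
        r = proposer_prefs[p][next_choice[p]]
        next_choice[p] += 1
        if r not in engaged:
            engaged[r] = p                    # r tentatively accepts p
        elif rank[r][p] < rank[r][engaged[r]]:
            free.append(engaged[r])           # r trades up; old proposer freed
            engaged[r] = p
        else:
            free.append(p)                    # r rejects p; p proposes again later
    return {p: r for r, p in engaged.items()}

print(deferred_acceptance(
    {"a": ["X", "Y"], "b": ["X", "Y"]},
    {"X": ["a", "b"], "Y": ["a", "b"]}))
```
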

5,655 citations

Journal ArticleDOI
TL;DR: This paper shows how to construct a maximum matching in a bipartite graph with n vertices and m edges in a number of computation steps proportional to $(m + n)\sqrt n $.
Abstract: The present paper shows how to construct a maximum matching in a bipartite graph with n vertices and m edges in a number of computation steps proportional to $(m + n)\sqrt n $.
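
For contrast with the $(m+n)\sqrt{n}$ bound above, here is the simpler single-path augmenting approach for bipartite matching (often attributed to Kuhn), which runs in $O(nm)$ time; Hopcroft and Karp's improvement comes from augmenting along a maximal set of vertex-disjoint shortest augmenting paths in each phase. The sketch below is the simple variant, not their algorithm, and its names are illustrative.

```python
# Simple augmenting-path bipartite matching (Kuhn's algorithm), O(V * E);
# Hopcroft-Karp improves this to O((m + n) * sqrt(n)) by augmenting along a
# maximal set of vertex-disjoint shortest augmenting paths per phase.
def max_bipartite_matching(adj, left):
    """adj: dict left-vertex -> iterable of right-vertices."""
    match_right = {}   # right vertex -> matched left vertex

    def try_augment(u, seen):
        for v in adj.get(u, ()):
            if v in seen:
                continue
            seen.add(v)
            # v is free, or its current partner can be rematched elsewhere.
            if v not in match_right or try_augment(match_right[v], seen):
                match_right[v] = u
                return True
        return False

    size = 0
    for u in left:
        if try_augment(u, set()):
            size += 1
    return size, match_right

print(max_bipartite_matching({"u1": ["v1", "v2"], "u2": ["v1"]}, ["u1", "u2"]))
```
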

2,785 citations

Book
16 Aug 2021

2,526 citations

Journal ArticleDOI
TL;DR: The solution of the Chinese postman problem using matching theory is given and the convex hull of integer solutions is described as a linear programming polyhedron, used to show that a good algorithm gives an optimum solution.
Abstract: The solution of the Chinese postman problem using matching theory is given. The convex hull of integer solutions is described as a linear programming polyhedron. This polyhedron is used to show that a good algorithm gives an optimum solution. The algorithm is a specialization of the more general b-matching blossom algorithm. Algorithms for finding Euler tours and related problems are also discussed.
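
A small worked sketch of the matching-based idea: on a connected undirected weighted graph, the optimal postman tour has length equal to the total edge weight plus the cost of a minimum-weight pairing of the odd-degree vertices along shortest paths. The pairing is brute-forced below instead of using the blossom algorithm this reference describes, so it is only meant for tiny illustrative instances.

```python
# Worked sketch of the matching-based Chinese postman bound: tour length =
# total edge weight + minimum-weight pairing of odd-degree vertices along
# shortest paths. The pairing is brute-forced here instead of using the
# blossom algorithm from the abstract; suitable for tiny graphs only.
from itertools import permutations

def floyd_warshall(n, edges):
    dist = [[float("inf")] * n for _ in range(n)]
    for i in range(n):
        dist[i][i] = 0
    for u, v, w in edges:
        dist[u][v] = min(dist[u][v], w)
        dist[v][u] = min(dist[v][u], w)
    for k in range(n):
        for i in range(n):
            for j in range(n):
                dist[i][j] = min(dist[i][j], dist[i][k] + dist[k][j])
    return dist

def chinese_postman_length(n, edges):
    degree = [0] * n
    for u, v, _ in edges:
        degree[u] += 1
        degree[v] += 1
    odd = [v for v in range(n) if degree[v] % 2]   # always an even count
    dist = floyd_warshall(n, edges)
    pairing_cost = 0 if not odd else min(
        sum(dist[p[i]][p[i + 1]] for i in range(0, len(p), 2))
        for p in permutations(odd))
    return sum(w for _, _, w in edges) + pairing_cost

# Triangle plus a pendant edge: vertices 2 and 3 have odd degree, so the
# pendant edge (weight 2) must be retraced; the answer is 5 + 2 = 7.
print(chinese_postman_length(4, [(0, 1, 1), (1, 2, 1), (0, 2, 1), (2, 3, 2)]))
```
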

963 citations

Proceedings ArticleDOI
13 Oct 1980
TL;DR: An $O(\sqrt{|V|}\,|E|)$ algorithm for finding a maximum matching in general graphs that works in 'phases'.
Abstract: In this paper we present an $O(\sqrt{|V|}\,|E|)$ algorithm for finding a maximum matching in general graphs. This algorithm works in 'phases'. In each phase a maximal set of disjoint minimum length augmenting paths is found, and the existing matching is increased along these paths. Our contribution consists in devising a special way of handling blossoms, which enables an $O(|E|)$ implementation of a phase. In each phase, the algorithm grows Breadth First Search trees at all unmatched vertices. When it detects the presence of a blossom, it does not 'shrink' the blossom immediately. Instead, it delays the shrinking in such a way that the first augmenting path found is of minimum length. Furthermore, it achieves the effect of shrinking a blossom by a special labeling procedure which enables it to find an augmenting path through a blossom quickly.

943 citations