Journal Article

Fully Dynamic Maximal Matching in $O(\log n)$ Update Time (Corrected Version)

03 May 2018-SIAM Journal on Computing (Society for Industrial and Applied Mathematics)-Vol. 47, Iss: 3, pp 617-650
TL;DR: In this article, the authors present an algorithm for maintaining a maximal matching in a graph under addition and deletion of edges; the algorithm is randomized and takes expected amortized $O(\log n)$ time per edge update.
Abstract: We present an algorithm for maintaining a maximal matching in a graph under addition and deletion of edges. Our algorithm is randomized and it takes expected amortized $O(\log n)$ time for each edge update. [...]
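To make the maintained object concrete, here is a minimal Python sketch of a fully dynamic maximal matching with a naive repair rule. This is emphatically not the paper's algorithm: the deletion handler below may scan a vertex's whole neighborhood, which is exactly the O(degree) cost that the paper's randomized, level-based scheme reduces to expected amortized O(log n). All names are illustrative.

```python
# A minimal, illustrative sketch of the object being maintained: a maximal
# matching under edge insertions and deletions. NOT the paper's algorithm --
# the naive repair in delete() costs O(degree) per update.
from collections import defaultdict

class NaiveDynamicMaximalMatching:
    def __init__(self):
        self.adj = defaultdict(set)  # current graph
        self.mate = {}               # mate[u] == v iff edge (u, v) is matched

    def is_free(self, u):
        return u not in self.mate

    def insert(self, u, v):
        self.adj[u].add(v)
        self.adj[v].add(u)
        # If both endpoints are free, the new edge must be matched,
        # otherwise maximality would be violated.
        if self.is_free(u) and self.is_free(v):
            self.mate[u], self.mate[v] = v, u

    def delete(self, u, v):
        self.adj[u].discard(v)
        self.adj[v].discard(u)
        if self.mate.get(u) == v:
            # A matched edge disappeared; both endpoints became free and
            # each must look for a free neighbor to restore maximality.
            del self.mate[u], self.mate[v]
            self._settle(u)
            self._settle(v)

    def _settle(self, w):
        if not self.is_free(w):
            return
        for x in self.adj[w]:        # naive O(deg(w)) scan
            if self.is_free(x):
                self.mate[w], self.mate[x] = x, w
                return
```

A matching is maximal when no edge has two free endpoints; both handlers above restore that invariant after every update, only without the paper's efficiency guarantee.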
Citations
Posted Content
TL;DR: The first non-trivial efficient adaptive algorithms for maintaining spanners and cut sparsifiers are presented; these in turn imply improvements over existing algorithms for other problems.
Abstract: Designing dynamic graph algorithms against an adaptive adversary is a major goal in the field of dynamic graph algorithms. While a few such algorithms are known for spanning trees, matchings, and single-source shortest paths, very little was known for an important primitive like graph sparsifiers. The challenge is how to approximately preserve so much information about the graph (e.g., all-pairs distances and all cuts) without revealing the algorithms' underlying randomness to the adaptive adversary. In this paper we present the first non-trivial efficient adaptive algorithms for maintaining spanners and cut sparsifiers. These algorithms in turn imply improvements over existing algorithms for other problems. Our first algorithm maintains a polylog$(n)$-spanner of size $\tilde O(n)$ in polylog$(n)$ amortized update time. The second algorithm maintains an $O(k)$-approximate cut sparsifier of size $\tilde O(n)$ in $\tilde O(n^{1/k})$ amortized update time, for any $k\ge1$, which is polylog$(n)$ time when $k=\log(n)$. The third algorithm maintains a polylog$(n)$-approximate spectral sparsifier in polylog$(n)$ amortized update time. The amortized update time of both algorithms can be made worst-case by paying some sub-polynomial factors. Prior to our result, there were near-optimal algorithms against oblivious adversaries (e.g., Baswana et al. [TALG'12] and Abraham et al. [FOCS'16]), but the only non-trivial adaptive dynamic algorithm requires $O(n)$ amortized update time to maintain $3$- and $5$-spanners of size $O(n^{1+1/2})$ and $O(n^{1+1/3})$, respectively [Ausiello et al. ESA'05]. Our results are based on two novel techniques. The first technique is a generic black-box reduction that allows us to assume that the graph undergoes only edge deletions and, more importantly, remains an expander with almost-uniform degree. The second technique we call proactive resampling. [...]
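As background, recall that a subgraph H of G is an α-spanner if dist_H(u, v) ≤ α · dist_G(u, v) for all u, v. The sketch below is the classical static greedy construction for unweighted graphs, included only to make the maintained object concrete; it is not this paper's dynamic, adaptive-adversary algorithm, and the function names are illustrative.

```python
# Classical static greedy (2k-1)-spanner for an unweighted graph. Keep an
# edge only if its endpoints are currently more than 2k-1 apart in the
# spanner built so far; every skipped edge then has stretch <= 2k-1.
from collections import deque, defaultdict

def greedy_spanner(edges, k):
    """Return a subset H of `edges` with dist_H(u, v) <= 2k-1 for every
    input edge (u, v), hence stretch 2k-1 for all vertex pairs."""
    adj = defaultdict(set)
    kept = []
    for u, v in edges:
        if truncated_bfs_distance(adj, u, v, limit=2 * k - 1) > 2 * k - 1:
            adj[u].add(v)
            adj[v].add(u)
            kept.append((u, v))
    return kept

def truncated_bfs_distance(adj, s, t, limit):
    """BFS from s, abandoned beyond depth `limit`; inf if t is not reached."""
    if s == t:
        return 0
    dist = {s: 0}
    queue = deque([s])
    while queue:
        x = queue.popleft()
        if dist[x] == limit:
            continue
        for y in adj[x]:
            if y not in dist:
                dist[y] = dist[x] + 1
                if y == t:
                    return dist[y]
                queue.append(y)
    return float("inf")
```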

41 citations

Proceedings Article
01 Nov 2019
TL;DR: The first algorithm for maintaining a maximal independent set (MIS) of a fully dynamic graph, which undergoes both edge insertions and deletions, in polylogarithmic time is presented; a simpler variant of the algorithm maintains a random-order lexicographically first maximal matching in the same update time.
Abstract: We present the first algorithm for maintaining a maximal independent set (MIS) of a fully dynamic graph, one which undergoes both edge insertions and deletions, in polylogarithmic time. Our algorithm is randomized and, per update, takes O(log^2 Δ · log^2 n) expected time. Furthermore, the algorithm can be adjusted to have O(log^2 Δ · log^4 n) worst-case update time with high probability. Here, n denotes the number of vertices and Δ is the maximum degree in the graph. The MIS problem in fully dynamic graphs has attracted significant attention after a breakthrough result of Assadi, Onak, Schieber, and Solomon [STOC'18], who presented an algorithm with O(m^{3/4}) update time (and thus broke the natural Ω(m) barrier), where m denotes the number of edges in the graph. This result was improved in a series of subsequent papers, though the update time remained polynomial. In particular, the fastest algorithm prior to our work had Õ(min{√n, m^{1/3}}) update time [Assadi et al. SODA'19]. Our algorithm maintains the lexicographically first MIS over a random order of the vertices. As a result, the same algorithm also maintains a 3-approximation of correlation clustering. We also show that a simpler variant of our algorithm can be used to maintain a random-order lexicographically first maximal matching in the same update time.
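The object being maintained, the lexicographically first MIS under a uniformly random vertex order, is simple to compute statically with one greedy pass, as sketched below. Names are illustrative; the paper's contribution is maintaining exactly this set under edge updates in polylogarithmic time, which this sketch does not attempt.

```python
# Static version of the maintained invariant: the greedy (lexicographically
# first) MIS over a uniformly random permutation of the vertices.
import random

def random_order_lfmis(vertices, adj):
    """adj: dict mapping each vertex to a set of its neighbors."""
    order = list(vertices)
    random.shuffle(order)            # random ranks, fixed once up front
    in_mis = set()
    for v in order:                  # take v iff no earlier-ranked
        if adj.get(v, set()).isdisjoint(in_mis):  # neighbor was taken
            in_mis.add(v)
    return in_mis
```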

39 citations


Proceedings Article
01 Nov 2019
TL;DR: A deterministic dynamic algorithm maintaining a (1+ε)f-approximate minimum cost set cover with O(f log(Cn)/ε^2) amortized update time is presented.
Abstract: We present a deterministic dynamic algorithm for maintaining a (1+ε)f-approximate minimum cost set cover with O(f log(Cn)/ε^2) amortized update time, when the input set system is undergoing element insertions and deletions. Here, n denotes the number of elements, each element appears in at most f sets, and the cost of each set lies in the range [1/C, 1]. Our result, together with that of Gupta et al. [STOC'17], implies that there is a deterministic algorithm for this problem with O(f log(Cn)) amortized update time and O(min(log n, f))-approximation ratio, which nearly matches the polynomial-time hardness of approximation for minimum set cover in the static setting. Our update time is only O(log(Cn)) away from a trivial lower bound. Prior to our work, the previous best approximation ratio guaranteed by deterministic algorithms was O(f^2), due to Bhattacharya et al. [ICALP'15]. In contrast, the only result that guaranteed an O(f)-approximation was obtained very recently by Abboud et al. [STOC'19], who designed a dynamic algorithm with a (1+ε)f-approximation ratio and O(f^2 log n/ε) amortized update time. Besides the extra O(f) factor in the update time compared to our and Gupta et al.'s results, the Abboud et al. algorithm is randomized, and it works only when the adversary is oblivious and the sets are unweighted (each set has the same cost). We achieve our result via the primal-dual approach, by maintaining a fractional packing solution as a dual certificate. This approach was pursued previously by Bhattacharya et al. and Gupta et al., but not in the recent paper by Abboud et al. Unlike previous primal-dual algorithms that try to satisfy some local constraints for individual sets at all times, our algorithm essentially waits until the dual solution changes significantly globally, and it fixes the solution only where the fix is needed.
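To see what "maintaining a fractional packing solution as a dual certificate" refers to, here is the classical static primal-dual f-approximation for weighted set cover, in which each element's dual variable is raised until some containing set goes tight. This is only a hedged static baseline with illustrative names, not the paper's dynamic algorithm.

```python
# Classical static primal-dual f-approximation for weighted set cover.
# The dual variables y form a fractional packing certifying the cost bound:
# total cost of bought sets <= f * sum(y) <= f * OPT.
def primal_dual_set_cover(sets, costs, universe):
    """sets: dict name -> frozenset of elements; costs: dict name -> cost in
    (0, 1]. Assumes every element of `universe` appears in some set."""
    y = {e: 0.0 for e in universe}   # dual packing variables
    slack = dict(costs)              # cost minus dual load, per set
    cover, covered = [], set()
    for e in universe:
        if e in covered:
            continue
        containing = [s for s in sets if e in sets[s]]
        raise_by = min(slack[s] for s in containing)
        y[e] += raise_by             # raise y[e] until some set goes tight
        for s in containing:
            slack[s] -= raise_by
            if slack[s] == 0:        # tight set: buy it
                cover.append(s)
                covered |= sets[s]
    return cover
```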

17 citations

Proceedings Article
15 Jun 2021
TL;DR: In this paper, the authors introduce a new framework for computing approximate maximum weight matchings. Their primary focus is on the fully dynamic setting, where there is a large gap between the guarantees of the best known algorithms for computing weighted and unweighted matchings.
Abstract: We introduce a new framework for computing approximate maximum weight matchings. Our primary focus is on the fully dynamic setting, where there is a large gap between the guarantees of the best known algorithms for computing weighted and unweighted matchings. Indeed, almost all current weighted matching algorithms that reduce to the unweighted problem lose a factor of two in the approximation ratio. In contrast, in other sublinear models such as the distributed and streaming models, recent work has largely closed this weighted/unweighted gap. For bipartite graphs, we almost completely settle the gap with a general reduction that converts any algorithm for α-approximate unweighted matching to an algorithm for (1−ε)α-approximate weighted matching, while only increasing the update time by an O(log n) factor for constant ε. We also show that our framework leads to significant improvements for non-bipartite graphs, though not in the form of a universal reduction. In particular, we give two algorithms for weighted non-bipartite matching:
1. A randomized (Las Vegas) fully dynamic algorithm that maintains a (1/2−ε)-approximate maximum weight matching in worst-case update time O(polylog n) with high probability against an adaptive adversary. Our bounds are essentially the same as those of the unweighted algorithm of Wajc [STOC 2020].
2. A deterministic fully dynamic algorithm that maintains a (2/3−ε)-approximate maximum weight matching in amortized update time O(m^{1/4}). Our bounds are essentially the same as those of the unweighted algorithm of Bernstein and Stein [SODA 2016].
A key feature of our framework is that it uses existing algorithms for unweighted matching as black boxes. As a result, our framework is simple and versatile. Moreover, our framework easily translates to other models, and we use it to derive new results for the weighted matching problem in streaming and communication complexity models.
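As a concrete reference point for the factor-of-two loss mentioned above: the textbook static greedy algorithm, taking the heaviest available edge first, is already a 1/2-approximation for maximum weight matching. The sketch below shows that baseline under illustrative names; it is not the paper's framework, whose point is to obtain (1−ε)α-approximations dynamically from unweighted black boxes.

```python
# Textbook static greedy 1/2-approximation for maximum weight matching:
# repeatedly take the heaviest remaining edge whose endpoints are both free.
def greedy_weighted_matching(weighted_edges):
    """weighted_edges: iterable of (weight, u, v) triples. Returns a matching
    whose total weight is at least half that of a maximum weight matching."""
    matched = set()
    matching = []
    for w, u, v in sorted(weighted_edges, key=lambda e: e[0], reverse=True):
        if u not in matched and v not in matched:   # heaviest edge first
            matching.append((u, v, w))
            matched.update((u, v))
    return matching
```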

13 citations