Showing papers by "Robert E. Tarjan published in 2020"


Proceedings ArticleDOI
06 Jul 2020
TL;DR: In this paper, the authors presented an O(log d + log log_{m/n} n)-time randomized PRAM algorithm for computing the connected components of an n-vertex, m-edge undirected graph with maximum component diameter d. The algorithm runs on an ARBITRARY CRCW (concurrent-read, concurrent-write with arbitrary write resolution) PRAM using O(m) processors.
Abstract: We present an O(log d + log log_{m/n} n)-time randomized PRAM algorithm for computing the connected components of an n-vertex, m-edge undirected graph with maximum component diameter d. The algorithm runs on an ARBITRARY CRCW (concurrent-read, concurrent-write with arbitrary write resolution) PRAM using O(m) processors. The time bound holds with good probability. Our algorithm is based on the breakthrough results of Andoni et al. [FOCS'18] and Behnezhad et al. [FOCS'19]. Their algorithms run on the more powerful MPC model and rely on sorting and computing prefix sums in O(1) time, tasks that take Ω(log n / log log n) time on a CRCW PRAM with poly(n) processors. Our simpler algorithm uses limited-collision hashing and does not sort or do prefix sums. It matches the time and space bounds of the algorithm of Behnezhad et al., who improved the time bound of Andoni et al. It is widely believed that the larger private memory per processor and unbounded local computation of the MPC model admit algorithms faster than those possible on a PRAM. Our result suggests that such additional power might not be necessary, at least for fundamental graph problems like connected components and spanning forest.

8 citations
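
To make the problem concrete, the following is a minimal, purely sequential sketch of the classic label-propagation idea for connected components, in which every vertex repeatedly adopts the smallest label among its neighbors. It is not the paper's algorithm: there is no hashing, no parallelism, and the number of passes grows with the component diameter d rather than log d. The class name LabelPropagationCC and the edge-list input format are illustrative assumptions.

import java.util.Arrays;

class LabelPropagationCC {
    // Returns an array in which two vertices get equal values exactly when they
    // lie in the same connected component (the value is the smallest id in it).
    static int[] connectedComponents(int n, int[][] edges) {
        int[] label = new int[n];
        for (int v = 0; v < n; v++) label[v] = v;     // each vertex starts as its own component
        boolean changed = true;
        while (changed) {                             // at most about d+1 passes in this naive form
            changed = false;
            for (int[] e : edges) {
                int u = e[0], w = e[1];
                int m = Math.min(label[u], label[w]);
                if (label[u] != m) { label[u] = m; changed = true; }
                if (label[w] != m) { label[w] = m; changed = true; }
            }
        }
        return label;
    }

    public static void main(String[] args) {
        int[][] edges = {{0, 1}, {1, 2}, {3, 4}};
        // Prints [0, 0, 0, 3, 3]: components {0, 1, 2} and {3, 4}.
        System.out.println(Arrays.toString(connectedComponents(5, edges)));
    }
}

Running each pass in parallel over the edges and contracting the graph between passes is, roughly, the starting point of PRAM and MPC connectivity algorithms; the paper's contribution is driving the number of rounds down to O(log d + log log_{m/n} n) on a CRCW PRAM without the sorting and prefix-sum machinery of the MPC algorithms.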


Posted Content
TL;DR: This work presents an O(log d + log log_{m/n} n)-time randomized PRAM algorithm for computing the connected components of an n-vertex, m-edge undirected graph with maximum component diameter d and suggests that additional power might not be necessary for fundamental graph problems like connected components and spanning forest.
Abstract: We present an $O(\log d + \log\log_{m/n} n)$-time randomized PRAM algorithm for computing the connected components of an $n$-vertex, $m$-edge undirected graph with maximum component diameter $d$. The algorithm runs on an ARBITRARY CRCW (concurrent-read, concurrent-write with arbitrary write resolution) PRAM using $O(m)$ processors. The time bound holds with good probability. Our algorithm is based on the breakthrough results of Andoni et al. [FOCS'18] and Behnezhad et al. [FOCS'19]. Their algorithms run on the more powerful MPC model and rely on sorting and computing prefix sums in $O(1)$ time, tasks that take $\Omega(\log n / \log\log n)$ time on a CRCW PRAM with $\text{poly}(n)$ processors. Our simpler algorithm uses limited-collision hashing and does not sort or do prefix sums. It matches the time and space bounds of the algorithm of Behnezhad et al., who improved the time bound of Andoni et al. It is widely believed that the larger private memory per processor and unbounded local computation of the MPC model admit algorithms faster than those possible on a PRAM. Our result suggests that such additional power might not be necessary, at least for fundamental graph problems like connected components and spanning forest.

6 citations
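
As a quick, informal instantiation of the bound (not taken from the paper's text): for a sparse graph with $m = 2n$, we have $\log\log_{m/n} n = \log\log_2 n = \Theta(\log\log n)$, so the running time is $O(\log d + \log\log n)$; for a denser graph with $m = n^{1+\epsilon}$ for constant $\epsilon > 0$, we have $\log_{m/n} n = \log n / (\epsilon \log n) = 1/\epsilon$, so the second term is $O(1)$ and the bound collapses to $O(\log d)$.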


Posted Content
TL;DR: In this article, the authors studied the disjoint set union (union-find) problem in the shared-memory, asynchronous multiprocessor model of computation, with CAS (compare-and-swap) or DCAS (double compare-and-swap) as the synchronization primitive.
Abstract: We develop and analyze concurrent algorithms for the disjoint set union (union-find) problem in the shared memory, asynchronous multiprocessor model of computation, with CAS (compare and swap) or DCAS (double compare and swap) as the synchronization primitive. We give a deterministic bounded wait-free algorithm that uses DCAS and has a total work bound of $O(m \cdot (\log(np/m + 1) + \alpha(n, m/(np))))$ for a problem with $n$ elements and $m$ operations solved by $p$ processes, where $\alpha$ is a functional inverse of Ackermann's function. We give two randomized algorithms that use only CAS and have the same work bound in expectation. The analysis of the second randomized algorithm is valid even if the scheduler is adversarial. Our DCAS and randomized algorithms take $O(\log n)$ steps per operation, worst case for the DCAS algorithm and with high probability for the randomized algorithms. Our work and step bounds grow only logarithmically with $p$, making our algorithms truly scalable. We prove that for a class of symmetric algorithms that includes ours, no better step or work bound is possible.

5 citations
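
As a concrete, hedged companion to this abstract, the sketch below is a simple lock-free union-find using only CAS, with path splitting in find and roots linked by index. It is a simplification, not the paper's algorithm: it omits the DCAS-based and randomized linking rules, so it does not carry the paper's $O(m \cdot (\log(np/m + 1) + \alpha(n, m/(np))))$ work bound or its $O(\log n)$ step bound; the class name ConcurrentDSU is illustrative.

import java.util.concurrent.atomic.AtomicIntegerArray;

class ConcurrentDSU {
    private final AtomicIntegerArray parent;

    ConcurrentDSU(int n) {
        parent = new AtomicIntegerArray(n);
        for (int i = 0; i < n; i++) parent.set(i, i);   // every element starts as its own root
    }

    // Find with path splitting: try to swing each visited node to its grandparent.
    int find(int x) {
        while (true) {
            int p = parent.get(x);
            int gp = parent.get(p);
            if (p == gp) return p;                      // parent(p) == p, so p is a root
            parent.compareAndSet(x, p, gp);             // shortcut; a failed CAS is harmless
            x = p;
        }
    }

    // Link the smaller-indexed root under the larger-indexed one; retry on contention.
    boolean union(int x, int y) {
        while (true) {
            int rx = find(x), ry = find(y);
            if (rx == ry) return false;                 // already in the same set
            if (rx > ry) { int t = rx; rx = ry; ry = t; }
            if (parent.compareAndSet(rx, rx, ry)) return true;
            // Otherwise rx acquired a parent concurrently; re-find the roots and retry.
        }
    }

    boolean sameSet(int x, int y) {
        while (true) {
            int rx = find(x), ry = find(y);
            if (rx == ry) return true;
            if (parent.get(rx) == rx) return false;     // rx is still a root, so the sets differ
        }
    }
}

The CAS in union succeeds only if the losing root rx is still a root, which is what makes concurrent links safe; a failed CAS just means another process linked rx first, so the operation re-finds the roots and tries again.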