
Showing papers by Robert E. Tarjan published in 2021



Journal ArticleDOI
TL;DR: It is proved that for a class of symmetric algorithms that includes the authors', no better step or work bound is possible; the authors' work and step bounds grow only logarithmically with the number of processes, making their algorithms truly scalable.
Abstract: We develop and analyze concurrent algorithms for the disjoint set union ("union-find") problem in the shared-memory, asynchronous multiprocessor model of computation, with CAS (compare and swap) or DCAS (double compare and swap) as the synchronization primitive. We give a deterministic bounded wait-free algorithm that uses DCAS and has a total work bound of $O\left(m \cdot \left(\log\left(\frac{np}{m} + 1\right) + \alpha\left(n, \frac{m}{np}\right)\right)\right)$ for a problem with $n$ elements and $m$ operations solved by $p$ processes, where $\alpha$ is a functional inverse of Ackermann's function. We give two randomized algorithms that use only CAS and have the same work bound in expectation. The analysis of the second randomized algorithm is valid even if the scheduler is adversarial. Our DCAS and randomized algorithms take $O(\log n)$ steps per operation, worst case for the DCAS algorithm and with high probability for the randomized algorithms. Our work and step bounds grow only logarithmically with $p$, making our algorithms truly scalable. We prove that for a class of symmetric algorithms that includes ours, no better step or work bound is possible. Our work is theoretical, but Alistarh et al. (In search of the fastest concurrent union-find algorithm, 2019), Dhulipala et al. (A framework for static and incremental parallel graph connectivity algorithms, 2020), and Hong et al. (Exploring the design space of static and incremental graph connectivity algorithms on GPUs, 2020) have implemented some of our algorithms on CPUs and GPUs and experimented with them. On many realistic data sets, our algorithms run as fast as or faster than all others.
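For concreteness, the sketch below shows one simple CAS-only union-find in this spirit: find with path halving and linking by random priority, written in Java. It is an illustration only, not the authors' bounded wait-free or randomized algorithms; the names are invented for this sketch, and the refinements behind the stated work and step bounds are omitted.

    import java.util.concurrent.ThreadLocalRandom;
    import java.util.concurrent.atomic.AtomicIntegerArray;

    // Illustrative CAS-based concurrent union-find: find with path halving and
    // linking by random priority. A simplification in the spirit of the abstract,
    // not the authors' algorithm; all names are invented for this sketch.
    final class ConcurrentUnionFindSketch {
        private final AtomicIntegerArray parent;   // parent.get(i) == i  <=>  i is a root
        private final int[] priority;              // fixed random priorities, one per element

        ConcurrentUnionFindSketch(int n) {
            parent = new AtomicIntegerArray(n);
            priority = new int[n];
            for (int i = 0; i < n; i++) {
                parent.set(i, i);
                priority[i] = ThreadLocalRandom.current().nextInt();
            }
        }

        // Follow parent pointers to the root, halving the path as we go:
        // each visited node is redirected (by CAS) to its grandparent.
        int find(int x) {
            while (true) {
                int p = parent.get(x);
                int g = parent.get(p);
                if (p == g) return p;              // p is a root
                parent.compareAndSet(x, p, g);     // losing this race is harmless
                x = g;
            }
        }

        // Merge the sets containing x and y by linking the lower-priority root
        // under the higher-priority one; ties are broken by index so concurrent
        // links cannot form a cycle.
        void union(int x, int y) {
            while (true) {
                int rx = find(x), ry = find(y);
                if (rx == ry) return;
                boolean rxLower = priority[rx] < priority[ry]
                        || (priority[rx] == priority[ry] && rx < ry);
                int child = rxLower ? rx : ry;
                int top = rxLower ? ry : rx;
                if (parent.compareAndSet(child, child, top)) return;
                // child acquired a parent concurrently; recompute the roots and retry
            }
        }
    }

Random-priority linking stands in here for the rank- or size-based linking of sequential union-find, which is awkward to maintain atomically with a single CAS.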

2 citations


Posted Content
TL;DR: The smooth heap and the closely related slim heap are recently invented self-adjusting implementations of the heap (priority queue) data structure, and the efficiency of these data structures is analyzed in this article.
Abstract: The smooth heap and the closely related slim heap are recently invented self-adjusting implementations of the heap (priority queue) data structure. We analyze the efficiency of these data structures. We obtain the following amortized bounds on the time per operation: $O(1)$ for make-heap, insert, find-min, and meld; $O(\log\log n)$ for decrease-key; and $O(\log n)$ for delete-min and delete, where $n$ is the current number of items in the heap. These bounds are tight not only for smooth and slim heaps but for any heap implementation in Iacono and Ozkan's pure heap model, intended to capture all possible "self-adjusting" heap implementations. Slim and smooth heaps are the first known data structures to match Iacono and Ozkan's lower bounds and to satisfy the constraints of their model. Our analysis builds on Pettie's insights into the efficiency of pairing heaps, a classical self-adjusting heap implementation.
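Read as an API, the bounds above correspond to the following per-operation costs; the Java interface below merely restates that list, with names chosen for this sketch rather than taken from the paper (make-heap corresponds to constructing an empty heap).

    // Hypothetical interface restating the amortized bounds from the abstract;
    // "Handle" stands in for a reference to an item stored in the heap.
    interface SelfAdjustingHeap<K extends Comparable<K>> {
        interface Handle<K> {}                          // reference to a stored item

        Handle<K> insert(K key);                        // O(1) amortized
        K findMin();                                    // O(1) amortized
        void meld(SelfAdjustingHeap<K> other);          // O(1) amortized
        void decreaseKey(Handle<K> item, K newKey);     // O(log log n) amortized
        K deleteMin();                                  // O(log n) amortized
        void delete(Handle<K> item);                    // O(log n) amortized
    }

Here n is the current number of items in the heap, as in the abstract.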

1 citation


Journal ArticleDOI
TL;DR: The zip tree is a randomized binary search tree that performs insertions and deletions top-down by unzipping and zipping paths; it allows rank ties and uses fewer random bits per node than treaps.
Abstract: We introduce the zip tree, a form of randomized binary search tree that integrates previous ideas into one practical, performant, and pleasant-to-implement package. A zip tree is a binary search tree in which each node has a numeric rank and the tree is (max)-heap-ordered with respect to ranks, with rank ties broken in favor of smaller keys. Zip trees are essentially treaps [8], except that ranks are drawn from a geometric distribution instead of a uniform distribution, and we allow rank ties. These changes enable us to use fewer random bits per node. We perform insertions and deletions by unmerging and merging paths (unzipping and zipping) rather than by doing rotations, which avoids some pointer changes and improves efficiency. The methods of zipping and unzipping take inspiration from previous top-down approaches to insertion and deletion by Stephenson [10], Martinez and Roura [5], and Sprugnoli [9]. From a theoretical standpoint, this work provides two main results. First, zip trees require only O(log log n) bits (with high probability) to represent the largest rank in an n-node binary search tree; previous data structures require O(log n) bits for the largest rank. Second, zip trees are naturally isomorphic to skip lists [7], and simplify Dean and Jones' mapping between skip lists and binary search trees.
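To make insertion by unzipping concrete, here is a hypothetical Java sketch assuming distinct integer keys; the names are illustrative and deletion (zipping) is omitted. The search descends while the current node outranks the new node (with rank ties going to the smaller key), and the rest of the search path is then split into the nodes smaller and larger than the new key, which become the new node's two subtrees.

    import java.util.concurrent.ThreadLocalRandom;

    // Sketch of zip-tree insertion by unzipping; field and method names are
    // illustrative, not taken from the paper.
    final class ZipTreeSketch {
        static final class Node {
            final int key, rank;
            Node left, right;
            Node(int key, int rank) { this.key = key; this.rank = rank; }
        }

        private Node root;

        // Geometric rank: the number of consecutive heads before the first
        // tail of a fair coin.
        private static int randomRank() {
            int r = 0;
            while (ThreadLocalRandom.current().nextBoolean()) r++;
            return r;
        }

        void insert(int key) {
            Node x = new Node(key, randomRank());
            // Descend while the current node outranks x; rank ties are broken
            // in favor of the smaller key, per the max-heap order on ranks.
            Node cur = root, parent = null;
            while (cur != null
                    && (cur.rank > x.rank || (cur.rank == x.rank && cur.key < key))) {
                parent = cur;
                cur = key < cur.key ? cur.left : cur.right;
            }
            // Splice x in where the search stopped.
            if (parent == null) root = x;
            else if (key < parent.key) parent.left = x;
            else parent.right = x;
            // Unzip the rest of the search path: keys smaller than x's form its
            // left subtree (hung along a right spine), larger keys its right
            // subtree (hung along a left spine).
            Node leftTail = x, rightTail = x;
            while (cur != null) {
                if (cur.key < key) {
                    if (leftTail == x) x.left = cur; else leftTail.right = cur;
                    leftTail = cur;
                    cur = cur.right;
                } else {
                    if (rightTail == x) x.right = cur; else rightTail.left = cur;
                    rightTail = cur;
                    cur = cur.left;
                }
            }
            if (leftTail != x) leftTail.right = null;
            if (rightTail != x) rightTail.left = null;
        }
    }

Because ranks are geometric, the largest rank in an n-node tree is O(log n) with high probability, which is what allows it to be stored in the O(log log n) bits mentioned above.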

1 citation


Posted Content
TL;DR: In this article, the authors give a self-contained analysis of smooth heaps and slim heaps in unrestricted operation, obtaining amortized bounds that match the best bounds known for self-adjusting heaps.
Abstract: The smooth heap is a recently introduced self-adjusting heap [Kozma, Saranurak, 2018] similar to the pairing heap [Fredman, Sedgewick, Sleator, Tarjan, 1986]. The smooth heap was obtained as a heap-counterpart of Greedy BST, a binary search tree updating strategy conjectured to be \emph{instance-optimal} [Lucas, 1988], [Munro, 2000]. Several adaptive properties of smooth heaps follow from this connection; moreover, the smooth heap itself has been conjectured to be instance-optimal within a certain class of heaps. Nevertheless, no general analysis of smooth heaps has existed until now, the only previous analysis showing that, when used in \emph{sorting mode} ($n$ insertions followed by $n$ delete-min operations), smooth heaps sort $n$ numbers in $O(n\lg n)$ time. In this paper we describe a simpler variant of the smooth heap we call the \emph{slim heap}. We give a new, self-contained analysis of smooth heaps and slim heaps in unrestricted operation, obtaining amortized bounds that match the best bounds known for self-adjusting heaps. Previous experimental work has found the pairing heap to dominate other data structures in this class in various settings. Our tests show that smooth heaps and slim heaps are competitive with pairing heaps, outperforming them in some cases, while being comparably easy to implement.
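All of the heaps compared here (pairing, smooth, slim) restructure by repeatedly linking heap-ordered multiway trees: the root with the larger key becomes a child of the root with the smaller key. The sketch below shows only that shared primitive, in a leftmost-child/right-sibling representation with invented names; where the loser lands in the winner's child list and the order in which links are performed during delete-min are precisely where the variants differ, and those choices are not reproduced here.

    // A single comparison-link of two heap-ordered multiway trees, the building
    // block of pairing, smooth, and slim heaps. Placing the loser at the front
    // of the winner's child list is one common convention, not necessarily the
    // paper's.
    final class LinkSketch {
        static final class Node {
            final int key;
            Node firstChild, nextSibling;
            Node(int key) { this.key = key; }
        }

        // Return the root with the smaller key, with the other root prepended
        // to its child list.
        static Node link(Node a, Node b) {
            if (b.key < a.key) { Node t = a; a = b; b = t; }  // make a the smaller-keyed root
            b.nextSibling = a.firstChild;
            a.firstChild = b;
            return a;
        }
    }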