
Showing papers by "Robert E. Tarjan" published in 2019


Proceedings ArticleDOI
06 Jan 2019
TL;DR: A simulation embedding for Splay is constructed and used to show that Splay is dynamically optimal if (and, as a corollary, only if) the cost of splaying a sequence of items is an upper bound on the cost of splaying every subsequence of it; this "subsequence property" also implies the traversal and deque conjectures.
Abstract: Consider the task of performing a sequence of searches in a binary search tree. After each search, an algorithm is allowed to arbitrarily restructure the tree, at a cost proportional to the amount of restructuring performed. The cost of an execution is the sum of the time spent searching and the time spent optimizing those searches with restructuring operations. This notion was introduced by Sleator and Tarjan in 1985 [27], along with an algorithm and a conjecture. The algorithm, Splay, is an elegant procedure for performing adjustments while moving searched items to the top of the tree. The conjecture, called dynamic optimality, is that the cost of splaying is always within a constant factor of the optimal algorithm for performing searches. The conjecture stands to this day. We offer the first systematic proposal for settling the dynamic optimality conjecture. At the heart of our methods is what we term a simulation embedding: a mapping from executions to lists of keys that induces a target algorithm to simulate the execution. We build a simulation embedding for Splay by inducing it to perform arbitrary subtree transformations, and use this to show that if the cost of splaying a sequence of items is an upper bound on the cost of splaying every subsequence thereof, then Splay is dynamically optimal. We call this the subsequence property. Building on this machinery, we show that if Splay is dynamically optimal, then with respect to optimal costs, its additive overhead is at most linear in the sum of initial tree size and number of requests. As a corollary, the subsequence property is also a necessary condition for dynamic optimality. The subsequence property also implies both the traversal [27] and deque [30] conjectures. The notions of simulation embeddings and bounding additive overheads should be of general interest in competitive analysis. For readers especially interested in dynamic optimality, we provide an outline of a proof that a lower bound on search costs by Wilber [32] has the subsequence property, and extensive suggestions for adapting this proof to Splay.
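The Splay restructuring step itself is compact. As a point of reference, here is a minimal Python sketch (illustrative names and structure, not the paper's code) of the standard zig, zig-zig, and zig-zag cases that rotate a searched key up to the root:

```python
# Minimal sketch of splaying (illustrative; assumes a plain BST of distinct keys).

class Node:
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right

def rotate_right(y):
    x = y.left
    y.left, x.right = x.right, y
    return x

def rotate_left(x):
    y = x.right
    x.right, y.left = y.left, x
    return y

def splay(root, key):
    """Rotate the node holding `key` to the root; if `key` is absent,
    a node adjacent to it on the search path ends up at or near the root."""
    if root is None or root.key == key:
        return root
    if key < root.key:
        if root.left is None:
            return root
        if key < root.left.key:                      # zig-zig
            root.left.left = splay(root.left.left, key)
            root = rotate_right(root)
        elif key > root.left.key:                    # zig-zag
            root.left.right = splay(root.left.right, key)
            if root.left.right is not None:
                root.left = rotate_left(root.left)
        return root if root.left is None else rotate_right(root)   # final zig
    else:
        if root.right is None:
            return root
        if key > root.right.key:                     # zig-zig
            root.right.right = splay(root.right.right, key)
            root = rotate_left(root)
        elif key < root.right.key:                   # zig-zag
            root.right.left = splay(root.right.left, key)
            if root.right.left is not None:
                root.right = rotate_right(root.right)
        return root if root.right is None else rotate_left(root)   # final zig

# Example: searching for 3 splays it to the root.
root = Node(2, Node(1), Node(4, Node(3), Node(5)))
root = splay(root, 3)
assert root.key == 3
```

A search is then a splay followed by a comparison at the new root, so recently accessed items sit near the top of the tree.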

17 citations


Proceedings ArticleDOI
16 Jul 2019
TL;DR: This work designs a randomized algorithm that performs at most O(log n) work per operation, identifies a class of "symmetric algorithms" that captures the complexities of all known algorithms for the disjoint set union problem, and proves that the algorithm has optimal total work complexity within this class.
Abstract: We consider the disjoint set union problem in the asynchronous shared memory multiprocessor computation model. We design a randomized algorithm that performs at most O(log n) work per operation (with high probability), and performs at most O(m · (α(n, m/(np)) + log(np/m + 1))) total work in expectation for a problem instance with m operations on n elements solved by p processes. Our algorithm is the first to have work bounds that grow sublinearly with p against an adversarial scheduler. We use Jayanti's Wake Up problem and our newly defined Generalized Wake Up problem to prove several lower bounds on concurrent set union. We show an Ω(log min{n, p}) expected work lower bound on the cost of any single operation of a set union algorithm. This shows that our single-operation upper bound is optimal across all algorithms when p = n^Ω(1). Furthermore, we identify a class of "symmetric algorithms" that captures the complexities of all the known algorithms for the disjoint set union problem, and prove an Ω(m · (α(n, m/(np)) + log(np/m + 1))) expected total work lower bound on algorithms of this class, thereby showing that our algorithm has optimal total work complexity for this class. Finally, we prove that any randomized algorithm, symmetric or not, cannot breach an Ω(m · (α(n, m/n) + log log(np/m + 1))) expected total work lower bound.
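For orientation, the sequential data structure behind this problem is the classic union-find forest. The sketch below (illustrative, with assumed names; not the paper's concurrent algorithm) shows two sequential ingredients the randomized concurrent algorithm builds on: linking roots by random priority and compacting paths during finds. The concurrent version performs analogous pointer updates with atomic compare-and-swap operations under an asynchronous adversarial scheduler.

```python
import random

# Sequential sketch of disjoint set union with random-priority linking and
# path splitting (illustrative only; the paper's algorithm is concurrent).

class DSU:
    def __init__(self, n):
        self.parent = list(range(n))
        self.priority = [random.random() for _ in range(n)]  # random link priorities

    def find(self, x):
        # Path splitting: point each visited node at its grandparent while walking up.
        while self.parent[x] != x:
            self.parent[x], x = self.parent[self.parent[x]], self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return False
        # Link the root of lower random priority below the root of higher priority.
        if self.priority[ra] < self.priority[rb]:
            ra, rb = rb, ra
        self.parent[rb] = ra
        return True

# Example usage.
d = DSU(4)
d.union(0, 1)
d.union(2, 3)
assert d.find(0) == d.find(1) and d.find(0) != d.find(2)
```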

15 citations


DOI
01 Jan 2019
TL;DR: This paper studies a class of simple algorithms for concurrently computing the connected components of an n-vertex, m-edge graph, all of which are easy to implement in either the COMBINING CRCW PRAM or the MPC computing model.
Abstract: We study a class of simple algorithms for concurrently computing the connected components of an $n$-vertex, $m$-edge graph. Our algorithms are easy to implement in either the COMBINING CRCW PRAM or the MPC computing model. For two related algorithms in this class, we obtain $\Theta(\lg n)$ step and $\Theta(m \lg n)$ work bounds. For two others, we obtain $O(\lg^2 n)$ step and $O(m \lg^2 n)$ work bounds, which are tight for one of them. All our algorithms are simpler than related algorithms in the literature. We also point out some gaps and errors in the analysis of previous algorithms. Our results show that even a basic problem like connected components still has secrets to reveal.
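For illustration, one simple algorithm in this general family alternates a hooking step (pointing the larger of two endpoints' labels at the smaller) with a shortcutting step (pointer jumping). The sketch below is a sequential simulation using assumed names; it is not claimed to be one of the exact variants analyzed here, whose steps run concurrently on a COMBINING CRCW PRAM or in MPC.

```python
# Sequential simulation of a simple "hook and shortcut" connected-components
# algorithm (illustrative sketch of the general approach, not the paper's variants).

def connected_components(n, edges):
    """Return a list whose entries are equal exactly for vertices
    in the same connected component."""
    parent = list(range(n))            # every vertex starts as its own root

    def hook():
        # For each edge, point the larger of the two current parent labels
        # at the smaller one; report whether anything changed.
        changed = False
        for u, v in edges:
            pu, pv = parent[u], parent[v]
            if pu < pv:
                parent[pv], changed = pu, True
            elif pv < pu:
                parent[pu], changed = pv, True
        return changed

    def shortcut():
        # Pointer jumping: replace parent pointers by grandparent pointers
        # until every tree in the forest is flat.
        while any(parent[parent[v]] != parent[v] for v in range(n)):
            for v in range(n):
                parent[v] = parent[parent[v]]

    while hook():
        shortcut()
    return parent

# Example: a path 0-1-2 plus an isolated vertex 3.
assert connected_components(4, [(0, 1), (1, 2)]) == [0, 0, 0, 3]
```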

11 citations


Book ChapterDOI
05 Aug 2019
TL;DR: It is demonstrated that preorders and postorders of balanced search trees do not contain many large "jumps" in symmetric order; combined with the dynamic finger theorem, this yields linear-time splaying bounds and provides further evidence in favor of the elusive "dynamic optimality conjecture".
Abstract: Let T be a binary search tree of n nodes with root r, left subtree \(L=\text{left}(r)\), and right subtree \(R=\text{right}(r)\). The preorder and postorder of T are defined as follows: the preorder and postorder of the empty tree are the empty sequence, and
$$\begin{aligned} \text{preorder}(T)&= (r)\oplus \text{preorder}(L)\oplus \text{preorder}(R)\\ \text{postorder}(T)&= \text{postorder}(L)\oplus \text{postorder}(R)\oplus (r), \end{aligned}$$
where \(\oplus\) denotes sequence concatenation. (We will refer to any such sequence as a preorder or a postorder.) We prove the following results about the behavior of splaying [21] preorders and postorders:
1. Inserting the nodes of preorder(T) into an empty tree via splaying costs O(n). (Theorem 2.)
2. Inserting the nodes of postorder(T) into an empty tree via splaying costs O(n). (Theorem 3.)
3. If \(T'\) has the same keys as T and T is weight-balanced [18], then splaying either preorder(T) or postorder(T) starting from \(T'\) costs O(n). (Theorem 4.)
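The recursive definitions above translate directly into code; the following short Python sketch (with an illustrative Node class) computes both orders exactly as defined.

```python
# Direct transcription of the preorder/postorder definitions above
# (Node is an illustrative binary-search-tree node; not code from the paper).

class Node:
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right

def preorder(t):
    # preorder(T) = (r) ⊕ preorder(L) ⊕ preorder(R)
    return [] if t is None else [t.key] + preorder(t.left) + preorder(t.right)

def postorder(t):
    # postorder(T) = postorder(L) ⊕ postorder(R) ⊕ (r)
    return [] if t is None else postorder(t.left) + postorder(t.right) + [t.key]

# Example: the tree with root 2, left child 1, right child 3.
T = Node(2, Node(1), Node(3))
assert preorder(T) == [2, 1, 3] and postorder(T) == [1, 3, 2]
```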

5 citations


Posted Content
TL;DR: This work attempts to lay the foundations for a proof of the dynamic optimality conjecture, which is that the cost of splaying is always within a constant factor of the optimal algorithm for performing searches.
Abstract: Consider the task of performing a sequence of searches in a binary search tree. After each search, an algorithm is allowed to arbitrarily restructure the tree, at a cost proportional to the amount of restructuring performed. The cost of an execution is the sum of the time spent searching and the time spent optimizing those searches with restructuring operations. This notion was introduced by Sleator and Tarjan (JACM, 1985), along with an algorithm and a conjecture. The algorithm, Splay, is an elegant procedure for performing adjustments while moving searched items to the top of the tree. The conjecture, called "dynamic optimality," is that the cost of splaying is always within a constant factor of the optimal algorithm for performing searches. The conjecture stands to this day. In this work, we attempt to lay the foundations for a proof of the dynamic optimality conjecture.

1 citation


Posted Content
TL;DR: In this paper, the authors proved that preorders and postorders are pattern-avoiding, i.e. they contain no subsequences that are order-isomorphic to $(2,3,1)$ and $(3,1,2)$, respectively.
Abstract: Let $T$ be a binary search tree. We prove two results about the behavior of the Splay algorithm (Sleator and Tarjan 1985). Our first result is that inserting keys into an empty binary search tree via splaying in the order of either $T$'s preorder or $T$'s postorder takes linear time. Our proof uses the fact that preorders and postorders are pattern-avoiding: i.e. they contain no subsequences that are order-isomorphic to $(2,3,1)$ and $(3,1,2)$, respectively. Pattern-avoidance implies certain constraints on the manner in which items are inserted. We exploit this structure with a simple potential function that counts inserted nodes lying on access paths to uninserted nodes. Our methods can likely be extended to permutations that avoid more general patterns. Second, if $T'$ is any other binary search tree with the same keys as $T$ and $T$ is weight-balanced (Nievergelt and Reingold 1973), then splaying $T$'s preorder sequence or $T$'s postorder sequence starting from $T'$ takes linear time. To prove this, we demonstrate that preorders and postorders of balanced search trees do not contain many large "jumps" in symmetric order, and exploit this fact by using the dynamic finger theorem (Cole et al. 2000). Both of our results provide further evidence in favor of the elusive "dynamic optimality conjecture."
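To make the pattern-avoidance claim concrete, the following sketch (illustrative, not taken from the paper) checks in linear time whether a sequence of distinct keys avoids $(2,3,1)$, which for distinct keys characterizes the preorders of binary search trees; the postorder / $(3,1,2)$ case is symmetric under reversing the sequence and flipping the comparisons.

```python
# Illustrative stack-based check: does a sequence of distinct keys avoid (2,3,1)?
# Equivalently (for distinct keys): is it the preorder of some binary search tree?

def is_231_avoiding(seq):
    stack = []                    # decreasing stack: keys on the current right spine
    lower_bound = float('-inf')   # any later key below this completes a (2,3,1)
    for x in seq:
        if x < lower_bound:
            return False          # x plays the "1" of a (2,3,1) pattern
        while stack and stack[-1] < x:
            lower_bound = stack.pop()   # x lies in the right subtree of the popped key
        stack.append(x)
    return True

assert is_231_avoiding([2, 1, 3])       # preorder of a small BST
assert not is_231_avoiding([2, 3, 1])   # the forbidden pattern itself
```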