
Showing papers by "Gerth Stølting Brodal published in 1996"


Proceedings ArticleDOI
28 Jan 1996
TL;DR: An implementation of priority queues is presented that supports the operations MAKEQUEUE, FINDMIN, INSERT, MELD and DECREASEKEY in worst case time O(1) and DELETEMIN and DELETE in worst case time O(log n).
Abstract: An implementation of priority queues is presented that supports the operations MAKEQUEUE, FINDMIN, INSERT, MELD and DECREASEKEY in worst case time O(1) and DELETEMIN and DELETE in worst case time O(log n). The space requirement is linear. The data structure presented is the first achieving this worst case performance.
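As a reading aid, here is an assumed OCaml-style signature for the operation set and bounds quoted in the abstract; the names, types and option-returning variants are our own conventions, not the paper's, which describes an imperative, pointer-based structure using linear space.

```ocaml
(* Assumed interface sketch only.  The handle type [node] stands for the
   reference to an inserted element that DECREASEKEY and DELETE require. *)
module type BRODAL_QUEUE = sig
  type elem
  type queue
  type node                                         (* handle to an inserted element *)

  val make_queue   : unit -> queue                  (* O(1) worst case *)
  val find_min     : queue -> elem option           (* O(1) worst case *)
  val insert       : queue -> elem -> node          (* O(1) worst case *)
  val meld         : queue -> queue -> queue        (* O(1) worst case *)
  val decrease_key : queue -> node -> elem -> unit  (* O(1) worst case *)
  val delete_min   : queue -> elem option           (* O(log n) worst case *)
  val delete       : queue -> node -> unit          (* O(log n) worst case *)
end
```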

106 citations


01 Jan 1996
TL;DR: In this article, the problem of answering d-queries over a set of binary strings is considered, where a d-query is to report whether there exists a string in the set within Hamming distance d of a query string α.
Abstract: Given a set of n binary strings, each of length m, we consider the problem of answering d-queries: given a binary query string α of length m, a d-query is to report whether there exists a string in the set within Hamming distance d of α.
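As a point of reference, the following OCaml sketch spells out what a single d-query asks, using a naive O(nm) scan rather than the paper's data structure; representing the strings as bool arrays of equal length m is our assumption.

```ocaml
(* Hamming distance between two bool arrays of the same length. *)
let hamming_distance (a : bool array) (b : bool array) : int =
  let dist = ref 0 in
  Array.iteri (fun i ai -> if ai <> b.(i) then incr dist) a;
  !dist

(* d-query: is some string in [set] within Hamming distance [d] of [alpha]?
   This naive scan costs O(nm) per query; the paper studies answering it faster. *)
let d_query (set : bool array list) (alpha : bool array) (d : int) : bool =
  List.exists (fun s -> hamming_distance s alpha <= d) set

(* Example: with d = 1, the query 101 matches 100 but not 010. *)
let () =
  let s1 = [| true; false; false |] and s2 = [| false; true; false |] in
  assert (d_query [ s1; s2 ] [| true; false; true |] 1)
```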

53 citations


Book ChapterDOI
10 Jun 1996
TL;DR: This work considers the problem of answering d-queries: given a binary query string α of length m, a d-query is to report whether there exists a string in the set within Hamming distance d of α.
Abstract: Given a set of n binary strings, each of length m, we consider the problem of answering d-queries: given a binary query string α of length m, a d-query is to report whether there exists a string in the set within Hamming distance d of α.

50 citations


Journal ArticleDOI
TL;DR: This paper adapts Brodal’s data structure to a purely functional setting and clarifies its relationship to the binomial queues of Vuillemin, which support all four operations in O(log n) time.
Abstract: Brodal recently introduced the first implementation of imperative priority queues to support findMin, insert and meld in O(1) worst-case time, and deleteMin in O(log n) worst-case time. These bounds are asymptotically optimal among all comparison-based priority queues. In this paper, we adapt Brodal's data structure to a purely functional setting. In doing so, we both simplify the data structure and clarify its relationship to the binomial queues of Vuillemin, which support all four operations in O(log n) time. Specifically, we derive our implementation from binomial queues in three steps: first, we reduce the running time of insert to O(1) by eliminating the possibility of cascading links; second, we reduce the running time of findMin to O(1) by adding a global root to hold the minimum element; and finally, we reduce the running time of meld to O(1) by allowing priority queues to contain other priority queues. Each of these steps is expressed using ML-style functors. The last transformation, known as data-structural bootstrapping, is an interesting application of higher-order functors and recursive structures.
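To make the second of the three steps concrete, here is a minimal OCaml functor sketch, assuming simple ORDERED and PQUEUE signatures of our own; it is not Okasaki's published code. It wraps an arbitrary queue with a global root holding the minimum, so find_min becomes O(1). Note that meld here still defers to the underlying Q.meld; making meld itself O(1) is what the third, bootstrapping step achieves.

```ocaml
module type ORDERED = sig
  type t
  val leq : t -> t -> bool
end

module type PQUEUE = sig
  module Elem : ORDERED
  type t
  val empty      : t
  val is_empty   : t -> bool
  val insert     : Elem.t -> t -> t
  val meld       : t -> t -> t
  val find_min   : t -> Elem.t option
  val delete_min : t -> t
end

(* Functor: given any queue Q, produce a queue whose find_min is O(1) by
   keeping the overall minimum at a global root next to the rest of Q. *)
module AddRoot (Q : PQUEUE) : PQUEUE with module Elem = Q.Elem = struct
  module Elem = Q.Elem

  type t = Empty | Root of Elem.t * Q.t   (* minimum element + the rest *)

  let empty = Empty
  let is_empty = function Empty -> true | Root _ -> false

  let insert x = function
    | Empty -> Root (x, Q.empty)
    | Root (m, q) ->
        if Elem.leq x m then Root (x, Q.insert m q)
        else Root (m, Q.insert x q)

  let meld a b = match a, b with
    | Empty, q | q, Empty -> q
    | Root (m1, q1), Root (m2, q2) ->
        if Elem.leq m1 m2 then Root (m1, Q.insert m2 (Q.meld q1 q2))
        else Root (m2, Q.insert m1 (Q.meld q1 q2))

  let find_min = function Empty -> None | Root (m, _) -> Some m

  let delete_min = function
    | Empty -> Empty
    | Root (_, q) ->
        (match Q.find_min q with
         | None -> Empty
         | Some m -> Root (m, Q.delete_min q))
end
```

The wrapper preserves the inner queue's bounds for the other operations, which is why the steps can be layered one on top of another.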

41 citations


Journal ArticleDOI
TL;DR: A new strategy for Dietz and Raman's dynamic two-player pebble game on graphs is presented. The upper bound on the required number of pebbles is improved from 2b+2d+O(√b) to d+2b, and a lower bound is given showing that the number of pebbles depends on the out-degree d.
Abstract: The problem of making bounded in-degree and out-degree data structures partially persistent is considered. The node copying method of Driscoll et al. is extended so that updates can be performed in worst-case constant time on the pointer machine model; previously this was only known to be possible in amortised constant time. The result is presented in terms of a new strategy for Dietz and Raman's dynamic two-player pebble game on graphs. It is shown how to implement the strategy, and the upper bound on the required number of pebbles is improved from 2b+2d+O(√b) to d+2b, where b is the bound on the in-degree and d the bound on the out-degree. We also give a lower bound showing that the number of pebbles depends on the out-degree d.
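The sketch below is not the paper's node-copying scheme; it shows the simpler fat-node idea (a per-field list of version-stamped values) that node copying refines by bounding how many versions a node may hold before it is copied, and that bounded budget is exactly what the pebble game above models. All names and the integer version stamps are our assumptions.

```ocaml
(* Fat-node illustration of partial persistence: every update to a field is
   recorded with a version stamp, so any past version can still be read.
   Stamps are assumed to be pushed in increasing order. *)
type 'a fat_field = { mutable versions : (int * 'a) list }  (* newest first *)

let make_field v0 = { versions = [ (0, v0) ] }

(* Update in the present: record the new value under version stamp [stamp]. *)
let set_field f stamp value = f.versions <- (stamp, value) :: f.versions

(* Read the field as it was at version [stamp]: newest entry not newer than it. *)
let get_field f stamp =
  let rec find = function
    | (s, v) :: rest -> if s <= stamp then Some v else find rest
    | [] -> None
  in
  find f.versions

let () =
  let f = make_field "a" in
  set_field f 1 "b";
  set_field f 2 "c";
  assert (get_field f 1 = Some "b");  (* reading an old version *)
  assert (get_field f 0 = Some "a")
```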

36 citations


Book ChapterDOI
03 Jul 1996
TL;DR: In this paper, the complexity of maintaining a set under the operations Insert, Delete and FindMin is considered, and it is shown that any randomized algorithm with expected amortized cost t comparisons per Insert and Delete has expected cost at least n/(e2^{2t}) − 1 comparisons for FindMin.
Abstract: The complexity of maintaining a set under the operations Insert, Delete and FindMin is considered. In the comparison model it is shown that any randomized algorithm with expected amortized cost t comparisons per Insert and Delete has expected cost at least n/(e2^{2t}) − 1 comparisons for FindMin. If FindMin is replaced by a weaker operation, FindAny, then it is shown that a randomized algorithm with constant expected cost per operation exists, but no deterministic algorithm. Finally, a deterministic algorithm with constant amortized cost per operation for an offline version of the problem is given.
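Reading the stated trade-off off directly (a worked consequence of the bound above, not an additional claim from the paper):

```latex
\[
  \operatorname{cost}(\mathrm{FindMin}) \;\ge\; \frac{n}{e\,2^{2t}} - 1 .
\]
```

With t = O(1) expected amortized comparisons per Insert and Delete the right-hand side is Ω(n), so cheap updates force a linear-cost FindMin; conversely, pushing FindMin down to O(1) comparisons requires 2^{2t} = Ω(n), i.e. t = Ω(log n) comparisons per update.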

26 citations


Journal ArticleDOI
TL;DR: This paper adapts Brodal's data structure to a purely functional setting and clarifies its relationship to the binomial queues of Vuillemin, which support all four operations in O(log n) time.
Abstract: Brodal recently introduced the first implementation of imperative priority queues to support findMin, insert, and meld in O(1) worst-case time, and deleteMin in O(log n) worst-case time. These bounds are asymptotically optimal among all comparison-based priority queues. In this paper, we adapt Brodal's data structure to a purely functional setting. In doing so, we both simplify the data structure and clarify its relationship to the binomial queues of Vuillemin, which support all four operations in O(log n) time. Specifically, we derive our implementation from binomial queues in three steps: first, we reduce the running time of insert to O(1) by eliminating the possibility of cascading links; second, we reduce the running time of findMin to O(1) by adding a global root to hold the minimum element; and finally, we reduce the running time of meld to O(1) by allowing priority queues to contain other priority queues. Each of these steps is expressed using ML-style functors. The last transformation, known as data-structural bootstrapping, is an interesting application of higher-order functors and recursive structures.

21 citations


Journal ArticleDOI
TL;DR: A direct protocol with logarithmic communication that finds an element in the symmetric difference of two sets of different sizes is presented; it yields a simple proof that symmetric functions have logarithmic circuit depth.
Abstract: We present a direct protocol with logarithmic communication that finds an element in the symmetric difference of two sets of different sizes. This yields a simple proof that symmetric functions have logarithmic circuit depth.
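For intuition, the hedged OCaml sketch below finds an element of the symmetric difference by the classical binary search on counts, assuming both sets live in the universe {0, …, u−1} and have different sizes. Exchanging one count per round would cost O(log n) bits over O(log u) rounds, so this naive version does not achieve the paper's logarithmic total communication; it only illustrates the task.

```ocaml
module IntSet = Set.Make (Int)

let count_in_range s lo hi =
  IntSet.fold (fun x c -> if lo <= x && x < hi then c + 1 else c) s 0

(* Returns an element on which the two sets differ, assuming the counts on
   the whole universe [0, u) differ (true whenever |a| <> |b|). *)
let find_in_symmetric_difference (a : IntSet.t) (b : IntSet.t) (u : int) : int =
  let rec search lo hi =
    if hi - lo = 1 then lo
    else
      let mid = (lo + hi) / 2 in
      (* one round: "Alice sends her count on [lo, mid); Bob compares with his" *)
      if count_in_range a lo mid <> count_in_range b lo mid
      then search lo mid
      else search mid hi        (* counts must then differ on the right half *)
  in
  search 0 u

let () =
  let a = IntSet.of_list [ 1; 4; 6 ] and b = IntSet.of_list [ 1; 6 ] in
  let x = find_in_symmetric_difference a b 8 in
  assert (IntSet.mem x a <> IntSet.mem x b)
```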

15 citations


Journal Article
TL;DR: The complexity of maintaining a set under the operations Insert, Delete and FindMin is considered in this paper, where it is shown that any randomized algorithm with expected amortized cost t comparisons per Insert and Delete has expected cost at least n/(e2^{2t}) − 1 comparisons for FindMin; if FindMin is replaced by the weaker operation FindAny, a randomized algorithm with constant expected cost per operation exists, whereas no deterministic algorithm can achieve this.
Abstract: The complexity of maintaining a set under the operations Insert, Delete and FindMin is considered. In the comparison model it is shown that any randomized algorithm with expected amortized cost t comparisons per Insert and Delete has expected cost at least n/(e2^{2t}) − 1 comparisons for FindMin. If FindMin is replaced by a weaker operation, FindAny, then it is shown that a randomized algorithm with constant expected cost per operation exists; in contrast, it is shown that no deterministic algorithm can have constant cost per operation. Finally, a deterministic algorithm with constant amortized cost per operation for an offline version of the problem is given.

13 citations


01 Jan 1996
TL;DR: A pipelined version of the priority queues adapts to a processor array of size O(log n), supporting the operations MakeQueue, Insert, Meld, FindMin, ExtractMin, Delete and DecreaseKey in constant time.
Abstract: We present time and work optimal priority queues for the CREW PRAM, supporting FindMin in constant time with one processor and MakeQueue, Insert, Meld, FindMin, ExtractMin, Delete and DecreaseKey in constant time with O(log n) processors. A priority queue can be built in time O(log n) with O(n/log n) processors. A pipelined version of the priority queues adapts to a processor array of size O(log n), supporting the operations MakeQueue, Insert, Meld, FindMin, ExtractMin, Delete and DecreaseKey in constant time. By applying the k-bandwidth technique we get a data structure for the CREW PRAM which supports MultiInsert_k operations in O(log k) time and MultiExtractMin_k in O(log log k) time.

8 citations


Book ChapterDOI
03 Jul 1996
TL;DR: In this article, the authors presented time and work optimal priority queues for the CREW PRAM, supporting FindMin in constant time with one processor and MakeQueue, Insert, Meld, FindMin, ExtractMin, Delete and DecreaseKey in constant time with O(log n) processors.
Abstract: We present time and work optimal priority queues for the CREW PRAM, supporting FindMin in constant time with one processor and MakeQueue, Insert, Meld, FindMin, ExtractMin, Delete and DecreaseKey in constant time with O(log n) processors. A priority queue can be built in time O(log n) with O(n/log n) processors, and k elements can be inserted into a priority queue in time O(log k) with O((log n + k)/log k) processors. With a slowdown of O(log log n) in time, the priority queues adapt to the EREW PRAM while increasing the required work by only a constant factor. A pipelined version of the priority queues adapts to a processor array of size O(log n), supporting the operations MakeQueue, Insert, Meld, FindMin, ExtractMin, Delete and DecreaseKey in constant time.
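Multiplying the quoted time and processor bounds gives the total work (a direct consequence of the numbers above, not an extra claim):

```latex
\[
  W_{\mathrm{build}} = O(\log n)\cdot O\!\left(\frac{n}{\log n}\right) = O(n),
  \qquad
  W_{\mathrm{insert}\,k} = O(\log k)\cdot O\!\left(\frac{\log n + k}{\log k}\right) = O(\log n + k).
\]
```

So construction takes linear total work and a batch of k insertions takes O(log n + k) work, in line with the work optimality claimed above.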

Journal ArticleDOI
TL;DR: It is shown that the problem of computing all contiguous k-ary compositions of a sequence of n values under an associative and commutative operator requires (3(k−1)/(k+1))n − O(k) operations.
Abstract: We show that the problem of computing all contiguous k-ary compositions of a sequence of n values under an associative and commutative operator requires (3(k−1)/(k+1))n − O(k) operations. For the operator max we show in contrast that in the decision tree model the complexity is (1 + Θ(1/√k))n − O(k). Finally we show that the complexity of the corresponding on-line problem for the operator max is (2 − 1/(k−1))n − O(k).
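For orientation, the following OCaml sketch computes all contiguous windows of size k with the standard two-stack technique; it uses roughly three applications of the operator per element (once on push, once when the back stack is flipped into suffix aggregates, once per reported window), which is the kind of 3n cost the bound above concerns. This is an assumed illustration of the problem, not necessarily the algorithm analysed in the paper.

```ocaml
(* All contiguous windows of size k of [xs] under associative operator [op]. *)
let sliding_windows (op : 'a -> 'a -> 'a) (k : int) (xs : 'a array) : 'a list =
  let front = ref [] in          (* (value, aggregate of it and everything below) *)
  let back = ref [] and back_agg = ref None in
  let push v =
    back := v :: !back;
    back_agg := Some (match !back_agg with None -> v | Some a -> op a v)
  in
  let flip () =                  (* move back stack into front, with suffix aggregates *)
    List.iter
      (fun v ->
         let agg = match !front with [] -> v | (_, a) :: _ -> op v a in
         front := (v, agg) :: !front)
      !back;
    back := [];
    back_agg := None
  in
  let pop () =                   (* drop the oldest element of the window *)
    (match !front with [] -> flip () | _ -> ());
    match !front with
    | _ :: rest -> front := rest
    | [] -> invalid_arg "window is empty"
  in
  let window_agg () =            (* combine front aggregate with back aggregate *)
    match !front, !back_agg with
    | (_, a) :: _, Some b -> op a b
    | (_, a) :: _, None -> a
    | [], Some b -> b
    | [], None -> invalid_arg "window is empty"
  in
  let results = ref [] in
  Array.iteri
    (fun i v ->
       push v;
       if i >= k - 1 then begin
         results := window_agg () :: !results;
         pop ()
       end)
    xs;
  List.rev !results

(* Example: window maxima of size 3. *)
let () = assert (sliding_windows max 3 [| 1; 3; 2; 5; 4 |] = [ 3; 5; 5 ])
```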

Journal ArticleDOI
TL;DR: In this paper, the complexity of maintaining a set under the operations Insert, Delete and FindMin is considered, and it is shown that any randomized algorithm with expected amortized cost t comparisons per Insert and Delete has expected cost at least n/(e2^{2t}) − 1 comparisons for FindMin.
Abstract: The complexity of maintaining a set under the operations Insert, Delete and FindMin is considered. In the comparison model it is shown that any randomized algorithm with expected amortized cost t comparisons per Insert and Delete has expected cost at least n/(e2^{2t}) − 1 comparisons for FindMin. If FindMin is replaced by a weaker operation, FindAny, then it is shown that a randomized algorithm with constant expected cost per operation exists; in contrast, it is shown that no deterministic algorithm can have constant cost per operation. Finally, a deterministic algorithm with constant amortized cost per operation for an offline version of the problem is given.