Proceedings ArticleDOI

A fast probabilistic parallel sorting algorithm

28 Oct 1981, pp. 212-219
TL;DR: A probabilistic parallel algorithm sorts n keys drawn from an arbitrary totally ordered set with average runtime bounded by O(log n), so the product of time and number of processors meets the information-theoretic lower bound for sorting.
Abstract: We describe a probabilistic parallel algorithm to sort n keys drawn from an arbitrary totally ordered set. This algorithm can be implemented on a parallel computer consisting of n RAMs, each with a small private memory, and a common memory of size O(n), such that the average runtime is bounded by O(log n). Hence for this algorithm the product of time and number of processors meets the information-theoretic lower bound for sorting.
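As a back-of-the-envelope check of that claim (my own arithmetic, not taken from the paper): any comparison-based sort needs at least $\log_2(n!)$ comparisons, and

$\log_2(n!) = \Theta(n \log n)$, while $\underbrace{n}_{\text{processors}} \cdot \underbrace{O(\log n)}_{\text{average time}} = O(n \log n)$,

so the time-processor product matches the information-theoretic lower bound up to a constant factor.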
Citations
Journal ArticleDOI
TL;DR: A class of new applications of the nested dissection method is presented, this time to path algebra computations, where the path algebra problem is defined by a symmetric matrix A whose associated undirected graph G has a known family of separators of small size s(n) (in many cases of interest).

43 citations

Book ChapterDOI
08 Jul 1996
TL;DR: A novel, simple sequential algorithm for constructing suffix trees is presented, and its parallelization yields a Las Vegas algorithm that builds the suffix tree of a binary string of length n in O(log n) time and O(n) work with high probability, in contrast to the previously known work-optimal algorithms, which require Ω(log²n) time.
Abstract: The suffix tree of a string, the fundamental data structure in the area of combinatorial pattern matching, has many elegant applications. In this paper, we present a novel, simple sequential algorithm for the construction of suffix trees. We are also able to parallelize our algorithm so that we settle the main open problem in the construction of suffix trees: we give a Las Vegas CRCW PRAM algorithm that constructs the suffix tree of a binary string of length n in O(log n) time and O(n) work with high probability. In contrast, the previously known work-optimal algorithms, while deterministic, take Ω(log²n) time.
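For readers unfamiliar with the data structure, the following naive construction (an illustrative sketch only: it builds an uncompressed suffix trie rather than the compressed suffix tree of the paper, and takes quadratic rather than linear work) shows what is being computed:

def build_suffix_trie(s):
    # Insert every suffix of s + "$" character by character into a trie of
    # nested dictionaries. The terminator "$" guarantees that each suffix
    # ends at its own leaf.
    s = s + "$"
    root = {}
    for i in range(len(s)):            # insert suffix s[i:]
        node = root
        for ch in s[i:]:
            node = node.setdefault(ch, {})
    return root

# Example: suffix trie of the binary string "0110".
trie = build_suffix_trie("0110")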

41 citations

Proceedings ArticleDOI
01 Dec 1983
TL;DR: A randomized algorithm sorts on an N-node network with constant valence in O(log N) time; for some constant k and all large enough α, it terminates within kα log N time with probability at least 1 − N^(−α).
Abstract: We give a randomized algorithm that sorts on an N node network with constant valence in O(log N) time. More particularly, the algorithm sorts N items on an N node cube-connected cycles graph, and for some constant k, for all large enough α, it terminates within kα log N time with probability at least 1 − N^(−α).

38 citations

Journal ArticleDOI
TL;DR: These algorithms tackle some very basic problems, such as binary search and load balancing, that are taken for granted in PRAM models, and they are the first nontrivial geometric algorithms to attain this performance on fixed-connection networks.
Abstract: There are now a number of fundamental problems in computational geometry that have optimal algorithms on PRAM models. This paper presents randomized parallel algorithms that execute on an $n$-processor butterfly interconnection network in $O(\log n)$ time for the following problems of input size $n$: trapezoidal decomposition, visibility, triangulation, and two-dimensional convex hull. These algorithms involve tackling some of the very basic problems, like binary search and load balancing, that are taken for granted in PRAM models. Apart from a two-dimensional convex hull algorithm, these are the first nontrivial geometric algorithms that attain this performance on fixed connection networks. These techniques use a number of ideas from Flashsort that have to be modified to handle more difficult situations; it seems likely that they will have wider applications.

36 citations

Proceedings ArticleDOI
01 Nov 1986
TL;DR: It is shown that in the deterministic comparison model for parallel computation, n processors can select the kth smallest item from a set of n numbers in O(log log n) parallel time.
Abstract: We show that in the deterministic comparison model for parallel computation, n processors can select the kth smallest item from a set of n numbers in O(log log n) parallel time. With this result all comparison tasks (selection, merging, sorting) now have upper and lower bounds of the same order in both random and deterministic models. 1 INTRODUCTION The study of parallel algorithms is important from both practical and theoretical points of view. It provides a context in which one may identify the difficult computational problems and a framework within which we may understand inherent similarities and differences between tasks. Comparison problems (selecting, sorting, and merging) are an interesting group of tasks, partly because they are so well understood in serial models.

35 citations

References
Journal ArticleDOI
TL;DR: The worst-case time complexity of algorithms for multiprocessor computers with binary comparisons as the basic operations is investigated, and the algorithm for finding the maximum is shown to be optimal for all values of k and n.
Abstract: The worst-case time complexity of algorithms for multiprocessor computers with binary comparisons as the basic operations is investigated. It is shown that for the problems of finding the maximum, sorting, and merging a pair of sorted lists, if n, the size of the input set, is not less than k, the number of processors, speedups of at least $O(k/\log \log k)$ can be achieved with respect to comparison operations. The algorithm for finding the maximum is shown to be optimal for all values of k and n.
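For intuition about where such speedups come from, here is a small round-counting simulation (a simplified sketch in the spirit of this scheme, not the paper's exact algorithm) of parallel maximum finding: with p processors and m surviving candidates, the candidates are grouped so that all pairwise comparisons inside every group fit into one round, which shrinks m to roughly m²/(2p) and yields O(log log n) rounds when p = n.

def max_with_rounds(items, processors=None):
    # Each round, split the candidates into groups small enough that all
    # pairwise comparisons within a group can run simultaneously on the
    # available processors, then keep only each group's maximum.
    cand = list(items)
    p = processors if processors is not None else len(cand)
    rounds = 0
    while len(cand) > 1:
        m = len(cand)
        g = max(2, 2 * p // m + 1)     # group size the comparison budget allows
        cand = [max(cand[i:i + g]) for i in range(0, m, g)]
        rounds += 1
    return cand[0], rounds

# Example: with p = n = 2**16 the simulation finishes in 4 rounds,
# matching log log n.
value, rounds = max_with_rounds(range(1 << 16))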

412 citations

Journal ArticleDOI
TL;DR: A new selection algorithm is presented which is shown to be very efficient on the average, both theoretically and practically.
Abstract: A new selection algorithm is presented which is shown to be very efficient on the average, both theoretically and practically. The number of comparisons used to select the ith smallest of n numbers is n + min(i,n-i) + o(n). A lower bound within 9 percent of the above formula is also derived.
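The following sampling-based sketch conveys the flavor of such a selection algorithm (illustrative only; this simplified version does not attain the n + min(i, n-i) + o(n) comparison count stated above):

import random

def select(a, i):
    # Return the i-th smallest (0-indexed) element of a. Draw a small
    # random sample, pick two pivots likely to bracket the answer, and
    # keep only the partition that must contain it.
    a = list(a)
    while True:
        n = len(a)
        if n <= 32:
            return sorted(a)[i]
        s = sorted(random.sample(a, int(n ** 0.5)))
        pos = i * len(s) // n               # expected position of the answer in the sample
        lo = s[max(0, pos - 2)]
        hi = s[min(len(s) - 1, pos + 2)]
        less = [x for x in a if x < lo]
        middle = [x for x in a if lo <= x <= hi]
        if len(middle) == n:                # no progress (e.g. many equal keys)
            return sorted(a)[i]
        if i < len(less):
            a = less
        elif i < len(less) + len(middle):
            a, i = middle, i - len(less)
        else:
            a, i = [x for x in a if x > hi], i - len(less) - len(middle)

# Example: select(list(range(10001)), 5000) returns the median, 5000.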

319 citations

Journal ArticleDOI
TL;DR: A family of parallel sorting algorithms for a multiprocessor system is presented; they are enumeration sorts whose count-acquisition phase is implemented by parallel merging, matching the performance of Hirschberg's algorithm, which, however, is not free of fetch conflicts.
Abstract: In this paper, we describe a family of parallel-sorting algorithms for a multiprocessor system. These algorithms are enumeration sortings and comprise the following phases: 1) count acquisition: the keys are subdivided into subsets and for each key we determine the number of smaller keys (count) in every subset; 2) rank determination: the rank of a key is the sum of the previously obtained counts; 3) data rearrangement: each key is placed in the position specified by its rank. The basic novelty of the algorithms is the use of parallel merging to implement count acquisition. By using Valiant's merging scheme, we show that n keys can be sorted in parallel with n log₂ n processors in time C log₂ n + o(log₂ n); in addition, if memory fetch conflicts are not allowed, using a modified version of Batcher's merging algorithm to implement phase 1), we show that n keys can be sorted with n^(1+α) processors in time (C'/α) log₂ n + o(log₂ n), thereby matching the performance of Hirschberg's algorithm, which, however, is not free of fetch conflicts.
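A minimal sequential sketch of the three phases described above (ties are broken by index so every key gets a distinct rank; the paper's actual contribution, implementing count acquisition by parallel merging, is not reproduced here):

def enumeration_sort(keys):
    # Phases 1 and 2: the rank of keys[j] is the number of keys smaller
    # than it, with ties broken by index.
    n = len(keys)
    ranks = [sum(1 for i in range(n)
                 if keys[i] < keys[j] or (keys[i] == keys[j] and i < j))
             for j in range(n)]
    # Phase 3: data rearrangement - place each key at the position given
    # by its rank.
    out = [None] * n
    for j in range(n):
        out[ranks[j]] = keys[j]
    return out

# Example: enumeration_sort([3, 1, 2, 1]) == [1, 1, 2, 3]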

169 citations