Journal ArticleDOI

Lower bounds for parallel algebraic decision trees, parallel complexity of convex hulls and related problems

30 Nov 1997 - Theoretical Computer Science (Elsevier Science Publishers Ltd.) - Vol. 188, Iss. 1, pp. 59-78
TL;DR: It is shown that any parallel algorithm in the fixed degree algebraic decision tree model that answers membership queries in W ⊆ ℝⁿ using p processors requires Ω(log #W / (n log(p/n))) rounds, where #W is the number of connected components of W.
About: This article is published in Theoretical Computer Science. The article was published on 1997-11-30 and is currently open access. It has received 4 citations to date. The article focuses on the topics: Average-case complexity & Analysis of parallel algorithms.
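For orientation, the bound can be written out as follows (a sketch in LaTeX notation, with #W the number of connected components of W as in the TL;DR; the element-distinctness instance below is our illustration, not taken from the abstract):

    T(n, p) \;=\; \Omega\!\left( \frac{\log \#W}{\, n \log(p/n) \,} \right)

where T(n, p) is the number of rounds used by a p-processor tree deciding membership in W \subseteq \mathbb{R}^n. For instance, the element-distinctness set W = \{ x \in \mathbb{R}^n : x_i \neq x_j \text{ for all } i \neq j \} has \#W = n! components, so \log \#W = \Theta(n \log n) and the bound reads \Omega(\log n / \log(p/n)); with p = n^{1+\epsilon} processors, \Omega(1/\epsilon) rounds are still required.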
Citations
Journal ArticleDOI
TL;DR: It is shown that it is not possible to speed up the Knapsack problem efficiently in the parallel algebraic decision tree model; the result is extended to the PRAM model without bit operations, consistent with Mulmuley's recent result on the separation of the strongly-polynomial class and the corresponding NC class in the arithmetic PRAM model.
Abstract: We show that it is not possible to speed up the Knapsack problem efficiently in the parallel algebraic decision tree model. More specifically, we prove that any parallel algorithm in the fixed degree algebraic decision tree model that solves the decision version of the Knapsack problem requires Ω(√n) rounds even by using 2^√n processors. We extend the result to the PRAM model without bit-operations. These results are consistent with Mulmuley's [6] recent result on the separation of the strongly-polynomial class and the corresponding NC class in the arithmetic PRAM model. Keywords: lower bounds, parallel algorithms, algebraic decision trees. 1 Introduction. The primary objective of designing parallel algorithms is to obtain faster algorithms. Nonetheless, the pursuit of higher speed has to be weighed against concerns of efficiency, namely, whether we are getting our money's (processors') worth. It has been an open theoretical problem whether all the problems in the class P can be made to run in polylogarithmic running time.
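To make the speed-up statement concrete (a sketch restating the figures in the abstract, in LaTeX notation):

    \text{rounds}(n) \;=\; \Omega(\sqrt{n}) \qquad \text{even with } p = 2^{\sqrt{n}} \text{ processors.}

In particular, no algorithm in this model solves the decision version of Knapsack in polylogarithmic time even with a subexponential (2^√n) number of processors, which is the sense in which the problem cannot be sped up efficiently.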

11 citations


Cites background from "Lower bounds for parallel algebraic..."

  • ...It has been an open theoretical problem whether all the problems in the class P can be made to run in polylogarithmic running time. ∗Part of the work was done when the author was visiting BRICS, University of Aarhus, Denmark in summer of 1998....

Journal ArticleDOI
TL;DR: An optimal speed-up (with respect to the input size only) sublogarithmic-time algorithm that uses a superlinear number of processors for vector maxima in three dimensions, faster than previously known algorithms.
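For readers new to the problem: a vector is maximal in a set if no other vector dominates it coordinatewise. The cited algorithm is parallel and works in three dimensions; the sketch below is only a minimal sequential two-dimensional illustration of the problem being solved (a standard plane sweep, not the paper's method):

    def maxima_2d(points):
        """Return the maximal points of a 2D point set.

        A point is maximal if no other point is >= in both coordinates.
        Standard O(n log n) sweep: scan by decreasing x and keep each
        point whose y beats the best y seen so far.
        """
        result = []
        best_y = float("-inf")
        # Sort by x descending, breaking ties by y descending so a point
        # cannot be shadowed by a later point with the same x.
        for x, y in sorted(points, key=lambda p: (-p[0], -p[1])):
            if y > best_y:
                result.append((x, y))
                best_y = y
        return result

    # Example: the maximal points form a "staircase".
    print(maxima_2d([(1, 5), (2, 3), (4, 4), (3, 1), (5, 2)]))
    # -> [(5, 2), (4, 4), (1, 5)]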

11 citations


Cites background or methods from "Lower bounds for parallel algebraic..."

  • ...Lemma 2.6 (Sen [Sen97])....

  • ...Lemma 2.6 (Sen [Sen97]). The vector maxima of n vectors in two dimensions can be computed...

  • ...We also make use of a number of sophisticated techniques like bootstrapping and superlinear-processor parallel algorithms for convex hulls [Sen97], combined with a very fine-tuned analysis....

  • ...Lemma 1.1 (Sen [Sen97])....

  • ...Lemma 1.1 (Sen [Sen97]). Any randomized algorithm in the parallel fixed degree decision tree model...

Journal ArticleDOI
TL;DR: This paper describes an O(log n · (log H + log log n)) time deterministic algorithm for the problem that achieves the O(n log H) work bound for H = Ω(log n), and presents a fast randomized algorithm that runs in expected time O(log H).

2 citations


Cites methods from "Lower bounds for parallel algebraic..."

  • ...The underlying algorithm is similar to the algorithm described for the planar convex hulls by Gupta and Sen [21]....

  • ...The technique of Sen [28,34] to filter the redundant line segments and the processor allocation scheme used there are not particularly effective here....

  • ...The algorithm uses the iterative method of Gupta and Sen [21]....

  • ...The general technique given by Sen [34] to develop sub-logarithmic algorithms can be used to design an O(log n / log k) algorithm for the problem....

  • ...The analysis of Gupta and Sen [21] goes through here also....

Book ChapterDOI
13 Dec 2001
TL;DR: This paper describes an O(log n · (log H + log log n)) time deterministic algorithm for the problem that achieves the O(n log H) work bound for H = Ω(log n), and presents a fast randomized algorithm that runs in expected time O(log H · log log n) with high probability and does O(n log H) work.
Abstract: In this paper we focus on the problem of designing very fast parallel algorithms for constructing the upper envelope of straight-line segments that achieve the O(n log H) work-bound for input size n and output size H. Our algorithms are designed for the arbitrary CRCW PRAM model. We first describe an O(log n · (log H + log log n)) time deterministic algorithm for the problem, that achieves the O(n log H) work bound for H = Ω(log n). We present a fast randomized algorithm that runs in expected time O(log H · log log n) with high probability and does O(n log H) work. For log H = Ω(log log n), we can achieve the running time of O(log H) while simultaneously keeping the work optimal. We also present a fast randomized algorithm that runs in O(log n / log k) time with nk processors, k > log^Ω(1) n. The algorithms do not assume any input distribution and the running times hold with high probability.
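The upper envelope of a family of segments is their pointwise maximum, viewed as partial functions of x. The algorithms above are parallel and handle arbitrary segments; as a point of reference only, here is a minimal sequential sketch for the much simpler special case of full lines (the classic slope-sort-and-stack method, not the algorithm of the paper):

    def upper_envelope_of_lines(lines):
        """Upper envelope of lines y = m*x + b over all real x.

        Returns the sub-list of lines that appear on the envelope,
        ordered by increasing slope. Classic O(n log n) method: sort by
        slope, then discard lines made redundant by their neighbours
        using a stack-based scan.
        """
        def bad(l1, l2, l3):
            # l2 is redundant if l3 overtakes l1 at or before the point
            # where l2 overtakes l1 (cross-multiplied, no division).
            (m1, b1), (m2, b2), (m3, b3) = l1, l2, l3
            return (b3 - b1) * (m1 - m2) <= (b2 - b1) * (m1 - m3)

        lines = sorted(set(lines))          # by slope, then intercept
        # Among equal slopes keep only the highest intercept.
        filtered = []
        for m, b in lines:
            if filtered and filtered[-1][0] == m:
                filtered[-1] = (m, b)       # later b is larger after sort
            else:
                filtered.append((m, b))
        hull = []
        for line in filtered:
            while len(hull) >= 2 and bad(hull[-2], hull[-1], line):
                hull.pop()
            hull.append(line)
        return hull

    # Example: the middle line lies below the crossing of the other two.
    print(upper_envelope_of_lines([(-1, 0), (0, -1), (1, 0)]))
    # -> [(-1, 0), (1, 0)]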

1 citation


Cites background or methods from "Lower bounds for parallel algebraic..."

  • ...This algorithm is based on the general technique given by Sen to develop sublogarithmic algorithms [21]....

  • ...The other issue is to design a sub-logarithmic time algorithm using a superlinear number of processors for k > 1. The technique of Sen [18, 21] to filter the redundant line segments to control the blowup in the problem size and the processor allocation scheme used there are not particularly effective here....

  • ...we can find their upper envelope in expected O(log n / log k) time with high probability. Proof. Refer to Sen [21]....

  • ...The general technique given by Sen [21] to develop sub-logarithmic algorithms can be used to design an O(log n / log k) algorithm for the problem....

References
Book
01 Oct 1992
TL;DR: This book provides an introduction to the design and analysis of parallel algorithms, with the emphasis on the application of the PRAM model of parallel computation, with all its variants, to algorithm analysis.
Abstract: Written by an authority in the field, this book provides an introduction to the design and analysis of parallel algorithms. The emphasis is on the application of the PRAM (parallel random access machine) model of parallel computation, with all its variants, to algorithm analysis. Special attention is given to the selection of relevant data structures and to algorithm design principles that have proved to be useful. Features: uses the PRAM model for parallel computation; covers all essential classes of parallel algorithms; rich exercise sets; written by a highly respected author within the field.

1,577 citations

Proceedings ArticleDOI
30 Sep 1977
TL;DR: Two approaches to the study of the expected running time of algorithms lead naturally to two different definitions of the intrinsic complexity of a problem: the distributional complexity and the randomized complexity, respectively.
Abstract: 1. Introduction. The study of expected running time of algorithms is an interesting subject from both a theoretical and a practical point of view. Basically there exist two approaches to this study. In the first approach (we shall call it the distributional approach), some "natural" distribution is assumed for the input of a problem, and one looks for fast algorithms under this assumption (see Knuth [8]). For example, in sorting n numbers, it is usually assumed that all n! initial orderings of the numbers are equally likely. A common criticism of this approach is that distributions vary a great deal in real life situations; furthermore, very often the true distribution of the input is simply not known. An alternative approach which attempts to overcome this shortcoming by allowing stochastic moves in the computation has recently been proposed. This is the randomized approach made popular by Rabin [10] (also see Gill [3], Solovay and Strassen [13]), although the concept was familiar to statisticians (for example, see Luce and Raiffa [9]). Note that by allowing stochastic moves in an algorithm, the input is effectively being randomized. We shall refer to such an algorithm as a randomized algorithm. These two approaches lead naturally to two different definitions of intrinsic complexity of a problem, which we term the distributional complexity and the randomized complexity, respectively. (Precise definitions and examples will be given in Sections 2 and 3.) To solidify the ideas, we look at familiar combinatorial problems that can be modeled by decision trees. In particular, we consider (a) the testing of an arbitrary graph property from an adjacency matrix (Section 2), and (b) partial order problems on n elements (Section 3). We will show that for these two classes of problems, the two complexity measures always agree by virtue of a famous theorem, the Minimax Theorem of von Neumann [14]. The connection between the two approaches lends itself to applications. With two different views (and in a sense complementary to each other) on the complexity of a problem, it is frequently easier to derive upper and lower bounds. For example, using the adjacency matrix representation for a graph, it can be shown that no randomized algorithm can determine the existence of a perfect matching in less than O(n²) probes. Such lower bounds for the randomized approach were lacking previously. As another example of application, we can prove that for the partial order problems in (b), assuming uniform …
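The agreement of the two measures that the abstract establishes is now commonly called Yao's minimax principle. A hedged sketch of the statement, writing D_\mu(f) for the distributional complexity (best deterministic algorithm's expected cost against input distribution \mu) and R(f) for the randomized complexity (best randomized algorithm's worst-case expected cost):

    R(f) \;=\; \max_{\mu} D_{\mu}(f)

The direction \max_\mu D_\mu(f) \le R(f) holds in general and is the one used for lower bounds: exhibiting a single hard distribution \mu bounds every randomized algorithm from below. Equality, via von Neumann's Minimax Theorem, relies on the finiteness assumptions of the decision-tree setting.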

1,188 citations

Proceedings ArticleDOI
Kenneth L. Clarkson
06 Jan 1988
TL;DR: Asymptotically tight bounds for a combinatorial quantity of interest in discrete and computational geometry, related to halfspace partitions of point sets, are given.
Abstract: Random sampling is used for several new geometric algorithms. The algorithms are "Las Vegas," and their expected bounds are with respect to the random behavior of the algorithms. One algorithm reports all the intersecting pairs of a set of line segments in the plane, and requires O(A + n log n) expected time, where A is the size of the answer, the number of intersecting pairs reported. The algorithm requires O(n) space in the worst case. Another algorithm computes the convex hull of a point set in E³ in O(n log A) expected time, where n is the number of points and A is the number of points on the surface of the hull. A simple Las Vegas algorithm triangulates simple polygons in O(n log log n) expected time. Algorithms for half-space range reporting are also given. In addition, this paper gives asymptotically tight bounds for a combinatorial quantity of interest in discrete and computational geometry, related to halfspace partitions of point sets.
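A note on the qualifier "Las Vegas" (standard terminology, not specific to this paper): such an algorithm is always correct, and only its running time T is random, so a bound like O(n log n) expected time means

    \mathbb{E}[\,T(x)\,] = O(n \log n) \quad \text{for every input } x,

with the expectation taken over the algorithm's own random choices rather than over any assumed input distribution.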

1,163 citations

Journal ArticleDOI
Richard Cole
TL;DR: A parallel implementation of merge sort on a CREW PRAM that uses n processors and O(log n) time; the constant in the running time is small.
Abstract: We give a parallel implementation of merge sort on a CREW PRAM that uses n processors and O(log n) time; the constant in the running time is small. We also give a more complex version of the algorithm for the EREW PRAM; it also uses n processors and O(log n) time. The constant in the running time is still moderate, though not as small.
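A quick check of why this is optimal (our arithmetic, not from the abstract): the processor-time product is

    p \cdot T \;=\; n \cdot O(\log n) \;=\; O(n \log n),

which matches the Ω(n log n) comparison lower bound for sequential sorting, so the algorithm is work-optimal while running in O(log n) time.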

847 citations

Proceedings ArticleDOI
01 Dec 1983
TL;DR: All the apparently known lower bounds for linear decision trees are extended to bounded degree algebraic decision trees, thus answering the open questions raised by Steele and Yao [20].
Abstract: A topological method is given for obtaining lower bounds for the height of algebraic computation trees, and algebraic decision trees. Using this method we are able to generalize, and present in a uniform and easy way, almost all the known nonlinear lower bounds for algebraic computations. Applying the method to decision trees we extend all the apparently known lower bounds for linear decision trees to bounded degree algebraic decision trees, thus answering the open questions raised by Steele and Yao [20]. We also show how this new method can be used to establish lower bounds on the complexity of constructions with ruler and compass in plane Euclidean geometry.
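The core inequality behind the topological method (a sketch in the notation used for the main article above, where #W counts connected components): any fixed-degree algebraic decision tree of height h deciding membership in W \subseteq \mathbb{R}^n satisfies

    h \;=\; \Omega(\log \#W - n),

because, by the Milnor-Thom bound, a depth-h tree with bounded-degree tests can separate only exponentially-in-(h + n) many connected components. The parallel round lower bound of the main article can be read as a many-processor analogue of this counting argument.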

584 citations