Journal ArticleDOI

Optimal Randomized Parallel Algorithms for Computational Geometry I

John H. Reif, Sandeep Sen (Duke University)
01 Jan 1988 - Algorithmica - Vol. 7, Iss. 1, pp. 91-117
TL;DR: In this paper, the authors present optimal parallel algorithms for 3-D maxima and two-set dominance counting by an application of integer sorting; the algorithms run in $O(\log n)$ time using $n$ processors, with very high probability.
Abstract: We present parallel algorithms for some fundamental problems in computational geometry which have running time $O(\log n)$ using $n$ processors, with very high probability (approaching 1 as $n \rightarrow \infty$). These include planar point location, triangulation, and trapezoidal decomposition. We also present optimal algorithms for 3-D maxima and two-set dominance counting by an application of integer sorting. Most of these algorithms run on the CREW PRAM model and have an optimal processor-time product, improving on the previously best known algorithms of Atallah and Goodrich [3] for these problems. The crux of these algorithms is a useful data structure which emulates the plane-sweeping paradigm used in sequential algorithms. We extend some of the techniques used by Reischuk [22] and by Reif and Valiant [21] for the flashsort algorithm to perform divide and conquer in the plane very efficiently, which leads to the improved performance of our approach.
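The flashsort-style divide step referred to above rests on splitting a problem around a sorted random sample so that, with high probability, every subproblem is small. The following minimal sketch is an illustrative, sequential Python simulation of that one-dimensional splitting step only, not the paper's parallel geometric algorithm; the function names and the oversampling parameter are our own:

```python
import random
from bisect import bisect_left

def sample_split(items, num_buckets, oversample=32):
    """Choose splitters from an oversampled random sample and partition
    `items` around them; the oversampling is what keeps every bucket close
    to len(items) / num_buckets with high probability."""
    sample_size = min(len(items), num_buckets * oversample)
    sample = sorted(random.sample(items, sample_size))
    splitters = sample[oversample - 1::oversample][:num_buckets - 1]
    buckets = [[] for _ in range(len(splitters) + 1)]
    for x in items:
        buckets[bisect_left(splitters, x)].append(x)
    return splitters, buckets

if __name__ == "__main__":
    data = [random.random() for _ in range(100_000)]
    _, buckets = sample_split(data, 64)
    print("largest bucket:", max(len(b) for b in buckets), "ideal:", len(data) // 64)
```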
Citations
Proceedings ArticleDOI
01 Sep 1991
TL;DR: An optimal algorithm for computing hyperplane cuttings results in a new kind of cutting, which enjoys all the properties of the previous ones and, in addition, can be refined by composition.
Abstract: An optimal algorithm for computing hyperplane cuttings is given. It results in a new kind of cutting, which enjoys all the properties of the previous ones and, in addition, can be refined by composition. An optimal algorithm for computing the convex hull of a finite point set in any fixed dimension is also given.
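For context (a standard definition, not stated in the abstract above): a $(1/r)$-cutting of a set $H$ of $n$ hyperplanes in $\mathbb{R}^d$ is a subdivision of space into simplices such that the interior of each simplex is crossed by at most $n/r$ hyperplanes of $H$; the optimal size, which is the bound achieved by the algorithm above, is

$$\Theta(r^d) \ \text{simplices}.$$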

82 citations

Book ChapterDOI
11 Jul 1989
TL;DR: An effective procedure is given for stratifying a real semi-algebraic set into cells of constant description size; the number of cells is singly exponential in the number of input variables, which compares favorably with the doubly exponential size of Collins’ decomposition, and the results can be applied in interesting ways to problems of point location and geometric optimization.
Abstract: Chazelle, B., H. Edelsbrunner, L.J. Guibas and M. Sharir, A singly exponential stratification scheme for real semi-algebraic varieties and its applications, Theoretical Computer Science 84 (1991) 77-105. This paper describes an effective procedure for stratifying a real semi-algebraic set into cells of constant description size. The attractive feature of our method is that the number of cells produced is singly exponential in the number of input variables. This compares favorably with the doubly exponential size of Collins’ decomposition. Unlike Collins’ construction, however, our scheme does not produce a cell complex but only a smooth stratification. Nevertheless, we are able to apply our results in interesting ways to problems of point location and geometric optimization.
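Schematically, and suppressing the dependence on the number and degree of the input polynomials, the contrast drawn above is between cell counts that grow doubly versus singly exponentially in the number $r$ of variables:

$$2^{2^{O(r)}} \ \text{(Collins' decomposition)} \qquad \text{versus} \qquad 2^{O(r)} \ \text{(the stratification above)}.$$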

66 citations

Proceedings ArticleDOI
23 May 1994
TL;DR: This work will survey some of its principal accomplishments, and in light of recent developments, it will discuss the profound transformations the field has begun to undergo.
Abstract: Bernard Chazelle (Department of Computer Science, Princeton University, Princeton, NJ 08544, USA). Computational geometry is at a crossroads. New challenges and opportunities are likely to reshape the field rather drastically in the years ahead. I will survey some of its principal accomplishments, and in light of recent developments, I will discuss the profound transformations the field has begun to undergo. There are reasons to believe that computational geometry will emerge from this transition far richer and stronger but barely recognizable from what it was ten years ago. Over the last two decades the field has enjoyed tremendous successes. Some of them might be dismissed as the cheap payoffs to be expected from any field lacking maturity. But others are the products of indisputable creativity and should be held as genuine scientific achievements. More important, the field is now able to claim a broad, solid foundation upon which its future can be securely built. To mature fully as an original subfield of computer science, however, computational geometry must broaden its connections to applied mathematics while at the same time paying more than lip service to the application areas that it purports to serve. Happily, active efforts to meet these challenges are underway. Three recent developments are particularly encouraging: one is the building of a theory of geometric sampling and its revolutionary impact on the design of geometric algorithms. Another is the maturing of computational real-algebraic geometry and computational topology; both subjects are being revitalized by the introduction of geometric (as opposed to purely algebraic) methods. On the practical end of the spectrum, the emergence of a sub-area concerned specifically with issues of finite precision and degeneracy in geometric computing is a most welcome development.

56 citations

Proceedings ArticleDOI
01 Feb 1989
TL;DR: A new randomized sampling technique, called Polling, is introduced which has applications to deriving efficient parallel algorithms for fundamental problems like the convex hull in three dimensions, Voronoi diagram of point sites on a plane and Euclidean minimal spanning tree.
Abstract: We introduce a new randomized sampling technique, called Polling, which has applications to deriving efficient parallel algorithms. As an example of its use in computational geometry, we present an optimal parallel randomized algorithm for intersection of half-spaces in three dimensions. Because of well-known reductions, our methods also yield equally efficient algorithms for fundamental problems like the convex hull in three dimensions, the Voronoi diagram of point sites on a plane, and the Euclidean minimal spanning tree. Our algorithms run in time $T = O(\log n)$ for worst-case inputs and use $P = O(n)$ processors in a CREW PRAM model, where $n$ is the input size. They are randomized in the sense that they use a total of only $O(\log^2 n)$ random bits and terminate in the claimed time bound with probability $1 - n^{-\alpha}$ for any $\alpha > 0$. They are also optimal in the $P \cdot T$ product, since the sequential time bound for all these problems is $O(n \log n)$. The best known deterministic parallel algorithms for the 2-D Voronoi diagram and the 3-D convex hull run in $O(\log^2 n)$ and $O(\log^2 n \log^* n)$ time, respectively, while using $O(n)$ processors.
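Written out, the guarantees quoted above (a restatement in symbols, not an additional result) are

$$\Pr\bigl[T \le c\log n\bigr] \;\ge\; 1 - n^{-\alpha} \quad \text{for any fixed } \alpha > 0, \qquad P \cdot T \;=\; O(n)\cdot O(\log n) \;=\; O(n\log n),$$

the latter matching the $\Omega(n\log n)$ sequential lower bound for these problems, which is what makes the processor-time product optimal.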

52 citations

Proceedings ArticleDOI
29 May 1995
TL;DR: This work gives the first work-optimal deterministic parallel algorithm for constructing the arrangement of a set of line segments, gives methods for covering an arrangement of simplices by $m = O(n^{d-1} \log^c n + k)$ cells of constant descriptive complexity, and describes a sequential algorithm for computing a single face in an arrangement of $n$ line segments that improves on a previous $O(n \log^2 n)$-time algorithm.
Abstract: For a set $S$ of $n$ line segments in the plane, we give the first work-optimal deterministic parallel algorithm for constructing their arrangement. It runs in $O(\log^2 n)$ time using $O(n \log n + k)$ work in the EREW PRAM model, where $k$ is the number of intersecting line segment pairs, and provides a fairly simple divide-and-conquer alternative to the optimal sequential “plane-sweep” algorithm of Chazelle and Edelsbrunner. Moreover, our method can be used to output all $k$ intersecting pairs while using only $O(n)$ working space, which solves an open problem posed by Chazelle and Edelsbrunner. We also describe a sequential algorithm for computing a single face in an arrangement of $n$ line segments that runs in $O(n \, 2^{\alpha(n)} \log n)$ time, which improves on a previous $O(n \log^2 n)$-time algorithm. For collections of simplices in $\mathbb{R}^d$, we give methods for constructing a set of $m = O(n^{d-1} \log^c n + k)$ cells of constant descriptive complexity that covers their arrangement, where $c > 1$ is a constant and $k$ is the number of faces in the arrangement. The construction is performed sequentially in $O(m)$ time, or in $O(\log n)$ time using $O(m)$ work in the EREW PRAM model. The covering can be augmented to answer point-location queries in $O(\log n)$ time. In addition to supplying the first parallel methods for these problems, we improve on the previous best sequential methods by reducing the query times (from $O(\log^2 n)$ in $\mathbb{R}^3$ and $O(\log^3 n)$ in $\mathbb{R}^d$, $d > 3$) and also the size and construction cost of the covering (from $O(n^{d-1+\epsilon} + k)$).

40 citations

References
01 Jan 1985
TL;DR: This book offers a coherent treatment, at the graduate textbook level, of the field that has come to be known in the last decade or so as computational geometry.
Abstract: From the reviews: "This book offers a coherent treatment, at the graduate textbook level, of the field that has come to be known in the last decade or so as computational geometry...The book is well organized and lucidly written; a timely contribution by two founders of the field. It clearly demonstrates that computational geometry in the plane is now a fairly well-understood branch of computer science and mathematics. It also points the way to the solution of the more challenging problems in dimensions higher than two."

6,525 citations

Journal ArticleDOI
TL;DR: In this paper, it was shown that the likelihood ratio test for fixed sample size can be reduced to a sum-threshold form, and that for large samples, a sample of size $n$ with the first test gives about the same probabilities of error as a sample of size $en$ with the second test.
Abstract: In many cases an optimum or computationally convenient test of a simple hypothesis $H_0$ against a simple alternative $H_1$ may be given in the following form. Reject $H_0$ if $S_n = \sum^n_{j=1} X_j \leqq k,$ where $X_1, X_2, \cdots, X_n$ are $n$ independent observations of a chance variable $X$ whose distribution depends on the true hypothesis and where $k$ is some appropriate number. In particular the likelihood ratio test for fixed sample size can be reduced to this form. It is shown that with each test of the above form there is associated an index $\rho$. If $\rho_1$ and $\rho_2$ are the indices corresponding to two alternative tests $e = \log \rho_1/\log \rho_2$ measures the relative efficiency of these tests in the following sense. For large samples, a sample of size $n$ with the first test will give about the same probabilities of error as a sample of size $en$ with the second test. To obtain the above result, use is made of the fact that $P(S_n \leqq na)$ behaves roughly like $m^n$ where $m$ is the minimum value assumed by the moment generating function of $X - a$. It is shown that if $H_0$ and $H_1$ specify probability distributions of $X$ which are very close to each other, one may approximate $\rho$ by assuming that $X$ is normally distributed.
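The exponential-moment step behind the index $\rho$ is the standard one (included here only for orientation): for any $t \le 0$,

$$\Pr(S_n \le na) \;=\; \Pr\bigl(e^{tS_n} \ge e^{tna}\bigr) \;\le\; e^{-tna}\,E\bigl[e^{tS_n}\bigr] \;=\; \Bigl(E\bigl[e^{t(X-a)}\bigr]\Bigr)^{n},$$

and minimizing the right-hand side over $t \le 0$ yields the bound $m^n$, with $m$ the minimum of the moment generating function of $X - a$, as described in the abstract.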

3,760 citations

Proceedings ArticleDOI
Kenneth L. Clarkson
06 Jan 1988
TL;DR: Asymptotically tight bounds for a combinatorial quantity of interest in discrete and computational geometry, related to halfspace partitions of point sets, are given.
Abstract: Random sampling is used for several new geometric algorithms. The algorithms are “Las Vegas,” and their expected bounds are with respect to the random behavior of the algorithms. One algorithm reports all the intersecting pairs of a set of line segments in the plane, and requires $O(A + n \log n)$ expected time, where $A$ is the size of the answer, the number of intersecting pairs reported. The algorithm requires $O(n)$ space in the worst case. Another algorithm computes the convex hull of a point set in $E^3$ in $O(n \log A)$ expected time, where $n$ is the number of points and $A$ is the number of points on the surface of the hull. A simple Las Vegas algorithm triangulates simple polygons in $O(n \log\log n)$ expected time. Algorithms for half-space range reporting are also given. In addition, this paper gives asymptotically tight bounds for a combinatorial quantity of interest in discrete and computational geometry, related to halfspace partitions of point sets.
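“Las Vegas” here means the output is always correct and only the running time is random, with the expectation taken over the algorithm's coin flips rather than over inputs. A minimal illustration of that guarantee (randomized quicksort, not one of the geometric algorithms in the paper) is the following sketch:

```python
import random

def randomized_quicksort(a):
    """Always returns a correctly sorted list; only the running time is
    random, and its O(n log n) expectation holds for every input."""
    if len(a) <= 1:
        return list(a)
    pivot = random.choice(a)                 # the only source of randomness
    less = [x for x in a if x < pivot]
    equal = [x for x in a if x == pivot]
    greater = [x for x in a if x > pivot]
    return randomized_quicksort(less) + equal + randomized_quicksort(greater)

if __name__ == "__main__":
    data = [random.randint(0, 999) for _ in range(5000)]
    assert randomized_quicksort(data) == sorted(data)
```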

1,163 citations

Journal ArticleDOI
Richard Cole
TL;DR: A parallel implementation of merge sort on a CREW PRAM is given that uses $n$ processors and $O(\log n)$ time; the constant in the running time is small.
Abstract: We give a parallel implementation of merge sort on a CREW PRAM that uses n processors and $O(\log n)$ time; the constant in the running time is small. We also give a more complex version of the algorithm for the EREW PRAM; it also uses n processors and $O(\log n)$ time. The constant in the running time is still moderate, though not as small.
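Cole's algorithm achieves $O(\log n)$ time by pipelining the merges; the sketch below is not that algorithm, just a plain Python simulation of the underlying divide-sort-merge structure on a process pool (the worker count and helper names are our own), included only as a point of reference for what the PRAM algorithm parallelizes far more aggressively:

```python
from concurrent.futures import ProcessPoolExecutor
from heapq import merge
import random

def merge_sort(a):
    """Ordinary sequential merge sort, run independently on each block."""
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    return list(merge(merge_sort(a[:mid]), merge_sort(a[mid:])))

def parallel_merge_sort(a, workers=4):
    """Sort `workers` blocks concurrently, then merge the sorted blocks."""
    if len(a) <= 1:
        return list(a)
    step = -(-len(a) // workers)              # ceiling division
    blocks = [a[i:i + step] for i in range(0, len(a), step)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        sorted_blocks = list(pool.map(merge_sort, blocks))
    return list(merge(*sorted_blocks))

if __name__ == "__main__":
    data = [random.randint(0, 10**6) for _ in range(100_000)]
    assert parallel_merge_sort(data) == sorted(data)
```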

847 citations

Journal ArticleDOI
TL;DR: This work presents a practical algorithm for subdivision search that achieves the same (optimal) worst case complexity bounds as the significantly more complex algorithm of Lipton and Tarjan, namely $O(\log n)$ search time with $O(n)$ storage.
Abstract: A planar subdivision is any partition of the plane into (possibly unbounded) polygonal regions. The subdivision search problem is the following: given a subdivision $S$ with $n$ line segments and a query point $p$, determine which region of $S$ contains $p$. We present a practical algorithm for subdivision search that achieves the same (optimal) worst case complexity bounds as the significantly more complex algorithm of Lipton and Tarjan, namely $O(\log n)$ search time with $O(n)$ storage. Our subdivision search structure can be constructed in linear time from the subdivision representation used in many applications.
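For readers unfamiliar with the problem, the simplest subdivision-search structure is the classical slab method, sketched below in Python. It illustrates the query model only: it gives $O(\log n)$ query time but $\Theta(n^2)$ space in the worst case, unlike the linear-space structure of the paper; all names here are our own.

```python
from bisect import bisect_right

def y_at(seg, x):
    """y-coordinate of segment `seg` = ((x1, y1), (x2, y2)) at abscissa x."""
    (x1, y1), (x2, y2) = seg
    if x1 == x2:                       # degenerate vertical segment
        return min(y1, y2)
    return y1 + (x - x1) * (y2 - y1) / (x2 - x1)

class SlabLocator:
    """Classical slab method: cut the plane at every endpoint abscissa and,
    in each slab, keep the segments spanning it sorted bottom-to-top."""
    def __init__(self, segments):
        self.xs = sorted({p[0] for s in segments for p in s})
        self.slabs = []
        for xl, xr in zip(self.xs, self.xs[1:]):
            mid = (xl + xr) / 2.0
            spanning = [s for s in segments
                        if min(s[0][0], s[1][0]) <= xl and max(s[0][0], s[1][0]) >= xr]
            spanning.sort(key=lambda s: y_at(s, mid))
            self.slabs.append(spanning)

    def locate(self, x, y):
        """Return (slab index, number of spanning segments below (x, y)),
        which identifies the region of the subdivision containing the point."""
        i = bisect_right(self.xs, x) - 1
        if i < 0 or i >= len(self.slabs):
            return None                # outside the x-range of the subdivision
        segs, lo, hi = self.slabs[i], 0, len(self.slabs[i])
        while lo < hi:                 # binary search on the segments below y
            m = (lo + hi) // 2
            if y_at(segs[m], x) < y:
                lo = m + 1
            else:
                hi = m
        return i, lo

if __name__ == "__main__":
    segs = [((0, 0), (4, 0)), ((0, 2), (4, 2)), ((0, 4), (4, 4))]
    print(SlabLocator(segs).locate(2, 1))   # -> (0, 1): slab 0, one segment below
```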

810 citations