Journal ArticleDOI

Optimal Randomized Parallel Algorithms for Computational Geometry I

John H. Reif, Sandeep Sen (Duke University)
01 Jan 1988 - Algorithmica - Vol. 7, Iss. 1, pp. 91-117

TL;DR: In this paper, the authors present parallel algorithms for 3-D maxima and two-set dominance counting based on integer sorting, which run in $O(\log n)$ time using $n$ processors, with very high probability.

Abstract: We present parallel algorithms for some fundamental problems in computational geometry which have running time of $O(\log n)$ using $n$ processors, with very high probability (approaching 1 as $n \rightarrow \infty$). These include planar point location, triangulation, and trapezoidal decomposition. We also present optimal algorithms for 3-D maxima and two-set dominance counting by an application of integer sorting. Most of these algorithms run on the CREW PRAM model and have an optimal processor-time product, improving on the previously best known algorithms of Atallah and Goodrich [3] for these problems. The crux of these algorithms is a useful data structure which emulates the plane-sweeping paradigm used in sequential algorithms. We extend some of the techniques used by Reischuk [22] and by Reif and Valiant [21] for the flashsort algorithm to perform divide-and-conquer in the plane very efficiently, leading to the improved performance of our approach.
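The following is a minimal sequential sketch (not the authors' parallel construction) of the flashsort-style random-sampling step behind divide-and-conquer in the plane: the x-coordinates of a small random sample partition the input into vertical slabs, each of which would then be handled recursively, and concurrently in the PRAM setting. Function and variable names are illustrative only.

```python
import random

def sample_partition(points, sample_size):
    """Split `points` (a list of (x, y) tuples) into vertical slabs whose
    boundaries are the x-coordinates of a random sample.  This mimics, in
    sequential form, the splitter step of flashsort-style divide-and-conquer;
    a PRAM algorithm would build the slabs and recurse into them in parallel."""
    if len(points) <= sample_size:
        return [list(points)]
    splitters = sorted(p[0] for p in random.sample(points, sample_size))
    slabs = [[] for _ in range(sample_size + 1)]
    for p in points:
        lo, hi = 0, sample_size          # binary search for p's slab
        while lo < hi:
            mid = (lo + hi) // 2
            if p[0] < splitters[mid]:
                hi = mid
            else:
                lo = mid + 1
        slabs[lo].append(p)
    return slabs

# With a sample of size about sqrt(n), every slab holds roughly O(sqrt(n) log n)
# points with high probability, keeping the recursion shallow; this is the kind
# of balance that random-sampling divide-and-conquer relies on.
pts = [(random.random(), random.random()) for _ in range(1000)]
slabs = sample_partition(pts, 31)
```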



Citations
Proceedings ArticleDOI
Kenneth L. Clarkson
06 Jan 1988
TL;DR: Asymptotically tight bounds for a combinatorial quantity of interest in discrete and computational geometry, related to halfspace partitions of point sets, are given.
Abstract: Random sampling is used for several new geometric algorithms. The algorithms are “Las Vegas,” and their expected bounds are with respect to the random behavior of the algorithms. One algorithm reports all the intersecting pairs of a set of line segments in the plane, and requires O(A + n log n) expected time, where A is the size of the answer, the number of intersecting pairs reported. The algorithm requires O(n) space in the worst case. Another algorithm computes the convex hull of a point set in E^3 in O(n log A) expected time, where n is the number of points and A is the number of points on the surface of the hull. A simple Las Vegas algorithm triangulates simple polygons in O(n log log n) expected time. Algorithms for half-space range reporting are also given. In addition, this paper gives asymptotically tight bounds for a combinatorial quantity of interest in discrete and computational geometry, related to halfspace partitions of point sets.

1,138 citations

Journal ArticleDOI
TL;DR: This paper describes an effective procedure for stratifying a real semi-algebraic set into cells of constant description size that compares favorably with the doubly exponential size of Collins' decomposition.
Abstract: This paper describes an effective procedure for stratifying a real semi-algebraic set into cells of constant description size. The attractive feature of our method is that the number of cells produced is singly exponential in the number of input variables. This compares favorably with the doubly exponential size of Collins' decomposition. Unlike Collins' construction, however, our scheme does not produce a cell complex but only a smooth stratification. Nevertheless, we are able to apply our results in interesting ways to problems of point location and geometric optimization.

175 citations

Book
09 Sep 2015
TL;DR: In this article, the authors present techniques for parallel divide-and-conquer, resulting in improved parallel algorithms for a number of problems including intersection detection, trapezoidal decomposition, and planar point location.
Abstract: We present techniques for parallel divide-and-conquer, resulting in improved parallel algorithms for a number of problems. The problems for which we give improved algorithms include intersection detection, trapezoidal decomposition (hence, polygon triangulation), and planar point location (hence, Voronoi diagram construction). We also give efficient parallel algorithms for fractional cascading, 3-dimensional maxima, 2-set dominance counting, and visibility from a point. All of our algorithms run in O(log n) time with either a linear or sub-linear number of processors in the CREW PRAM model.
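As a point of reference for one of the listed problems, here is a hedged sequential sketch of 3-dimensional maxima by plane sweep (a point is maximal if no other point exceeds it in all three coordinates; distinct coordinates are assumed). The parallel algorithms cited above compute the same set in O(log n) time; this sketch only illustrates the problem and the sweep idea.

```python
import bisect

def maxima_3d(points):
    """Return the 3-D maxima of `points` (those not exceeded by another point
    in all three coordinates), assuming all coordinates are distinct.
    Sequential plane sweep: process points by decreasing x and maintain the
    staircase of (y, z) pairs that are 2-D maxima among the points seen so far."""
    pts = sorted(points, reverse=True)      # sweep by decreasing x
    ys, zs = [], []                         # staircase: ys increasing, zs decreasing
    maxima = []
    for x, y, z in pts:
        i = bisect.bisect_right(ys, y)      # first staircase entry with y' > y
        if i < len(ys) and zs[i] > z:
            continue                        # dominated by an earlier (larger-x) point
        maxima.append((x, y, z))
        j = i                               # drop staircase entries dominated by (y, z)
        while j > 0 and zs[j - 1] < z:
            j -= 1
        ys[j:i] = [y]
        zs[j:i] = [z]
    return maxima

print(maxima_3d([(3, 1, 1), (2.5, 0.5, 5), (2, 2, 2), (1, 1, 1)]))
```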

162 citations

Proceedings ArticleDOI
24 Oct 1988
TL;DR: It is shown how to compute, in polynomial time, a simplicial packing of size O(r^d) that covers d-space, each of whose simplices intersects O(n/r) hyperplanes.
Abstract: A number of efficient probabilistic algorithms based on the combination of divide-and-conquer and random sampling have been recently discovered. It is shown that all those algorithms can be derandomized with only polynomial overhead. In the process, results of independent interest concerning the covering of hypergraphs are established, and various probabilistic bounds in geometric complexity are improved. For example, given n hyperplanes in d-space and any large enough integer r, it is shown how to compute, in polynomial time, a simplicial packing of size O(r^d) that covers d-space, each of whose simplices intersects O(n/r) hyperplanes. It is also shown how to locate a point among n hyperplanes in d-space in O(log n) query time, using O(n^d) storage and polynomial preprocessing.

136 citations

Journal ArticleDOI
30 Oct 1989
TL;DR: The general form of the case for which the method of conditional probabilities can be applied in the parallel context is given and the reason why this form does not lend itself to parallelization is discussed.
Abstract: A method is provided for converting randomized parallel algorithms into deterministic parallel algorithms. The approach is based on a parallel implementation of the method of conditional probabilities. Results obtained by applying the method to the set balancing problem, lattice approximation, edge-coloring graphs, random sampling, and combinatorial constructions are presented. The general form in which the method of conditional probabilities is applied sequentially is described, and the reasons why this form does not lend itself to parallelization are discussed. The general form of the case for which the method of conditional probabilities can be applied in the parallel context is given.
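For orientation, the sequential form of the method mentioned here can be sketched for set balancing: each element's sign is chosen greedily so that a pessimistic estimator (a sum of hyperbolic cosines of the running signed sums) does no worse than its average over a random sign. This is an illustrative sketch only, with an assumed parameter choice, not the paper's parallel implementation.

```python
import math

def balance_signs(sets, n):
    """Sequential method of conditional probabilities for set balancing:
    choose a sign in {-1, +1} for each of n elements so that every set's
    signed sum stays small.  `sets` is a list of lists of element indices.
    Each sign is picked so that the estimator sum_i cosh(lam * signed_sum_i)
    is no larger than its average over a random sign, which recovers the
    O(sqrt(n log m)) discrepancy a random assignment achieves in expectation.
    Illustrative sketch; the parameter lam below is one standard (assumed) choice."""
    m = max(len(sets), 1)
    lam = math.sqrt(2.0 * math.log(2 * m) / max(n, 1))
    member = [[] for _ in range(n)]         # element -> sets containing it
    for i, s in enumerate(sets):
        for e in s:
            member[e].append(i)
    sums = [0.0] * len(sets)                # running signed sums per set
    signs = []
    for e in range(n):
        def increase(sign):                 # change in the estimator for this sign
            return sum(math.cosh(lam * (sums[i] + sign)) - math.cosh(lam * sums[i])
                       for i in member[e])
        sign = 1 if increase(+1) <= increase(-1) else -1
        signs.append(sign)
        for i in member[e]:
            sums[i] += sign
    return signs, max((abs(s) for s in sums), default=0.0)

signs, disc = balance_signs([[0, 1, 2, 3], [1, 3, 5, 7], [0, 2, 4, 6]], 8)
print(signs, disc)
```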

123 citations


References
Journal ArticleDOI
TL;DR: The concept of an ɛ-net of a set of points for an abstract set of ranges is introduced, and sufficient conditions that a random sample is an ɛ-net with any desired probability are given.
Abstract: We demonstrate the existence of data structures for half-space and simplex range queries on finite point sets in d-dimensional space, $d \ge 2$, with linear storage and $O(n^{\alpha})$ query time, where $\alpha = \frac{d(d-1)}{d(d-1)+1} + \gamma$ for all $\gamma > 0$. These bounds are better than those previously published for all $d \ge 2$. Based on ideas due to Vapnik and Chervonenkis, we introduce the concept of an ɛ-net of a set of points for an abstract set of ranges and give sufficient conditions that a random sample is an ɛ-net with any desired probability. Using these results, we demonstrate how random samples can be used to build a partition-tree structure that achieves the above query time.
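To make the ɛ-net notion concrete, here is a brute-force checker for a finite point set and an explicitly listed family of ranges: every range capturing at least ɛ·|points| of the points must contain a sample point. The demo ranges (1-D intervals) are an assumption for illustration; the paper's contribution is the sufficient conditions under which a random sample passes such a check with any desired probability.

```python
import random

def is_eps_net(sample, points, ranges, eps):
    """Return True if `sample` is an eps-net for `points` with respect to
    `ranges`: every range containing at least eps * len(points) of the points
    must also contain at least one sample point.  Brute force, for
    illustration; `ranges` is an explicit list of sets of points."""
    threshold = eps * len(points)
    sample_set = set(sample)
    for r in ranges:
        captured = sum(1 for p in points if p in r)
        if captured >= threshold and not (sample_set & r):
            return False
    return True

# Demo with 1-D points and interval ranges (an assumed, simplified range space).
pts = list(range(100))
ranges = [set(range(a, a + 25)) for a in range(0, 76, 5)]
net = random.sample(pts, 12)
print(is_eps_net(net, pts, ranges, eps=0.25))
```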

766 citations

Journal ArticleDOI
TL;DR: There is a distributed randomized algorithm that can route every packet to its destination without two packets passing down the same wire at any one time, and finishes within time $O(\log N)$ with overwhelming probability for all such routing requests.
Abstract: Consider $N = 2^n$ nodes connected by wires to make an n-dimensional binary cube. Suppose that initially the nodes contain one packet each addressed to distinct nodes of the cube. We show that the...

650 citations

Journal ArticleDOI
TL;DR: A substantial refinement of the technique of Lee and Preparata for locating a point in $\mathcal{S}$ based on separating chains is exhibited, which can be implemented in a simple and practical way, and is extensible to subdivisions with edges more general than straight-line segments.
Abstract: Point location, often known in graphics as “hit detection,” is one of the fundamental problems of computational geometry. In a point location query we want to identify which of a given collection of geometric objects contains a particular point. Let $\mathcal{S}$ denote a subdivision of the Euclidean plane into monotone regions by a straight-line graph of m edges. In this paper we exhibit a substantial refinement of the technique of Lee and Preparata [SIAM J. Comput., 6 (1977), pp. 594–606] for locating a point in $\mathcal{S}$ based on separating chains. The new data structure, called a layered dag, can be built in $O(m)$ time, uses $O(m)$ storage, and makes possible point location in $O(\log m)$ time. Unlike previous structures that attain these optimal bounds, the layered dag can be implemented in a simple and practical way, and is extensible to subdivisions with edges more general than straight-line segments.
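A minimal sketch of the separating-chain idea that the layered dag refines (assuming x-monotone chains that pairwise do not cross and span the query's x-range, which is a simplification): locating a point costs a binary search over the chains, each discrimination itself a binary search along one chain, for O(log^2 m) total; the layered dag shares work across these discriminations to reach O(log m). This is not the layered dag itself.

```python
import bisect

def above_chain(chain, q):
    """Return True if q = (qx, qy) lies above the x-monotone chain, given as a
    list of vertices (x, y) with strictly increasing x that spans qx.
    One discrimination = one binary search along the chain."""
    qx, qy = q
    xs = [x for x, _ in chain]
    i = bisect.bisect_right(xs, qx) - 1          # segment whose x-range holds qx
    i = max(0, min(i, len(chain) - 2))
    (x1, y1), (x2, y2) = chain[i], chain[i + 1]
    y_at_qx = y1 + (y2 - y1) * (qx - x1) / (x2 - x1)
    return qy > y_at_qx

def locate(chains, q):
    """Chains are ordered bottom-to-top and non-crossing; binary search over
    them returns how many chains lie strictly below q, i.e. the horizontal
    band containing q.  This is the O(log^2 m) separating-chain search that
    the layered dag improves to O(log m)."""
    lo, hi = 0, len(chains)
    while lo < hi:
        mid = (lo + hi) // 2
        if above_chain(chains[mid], q):
            lo = mid + 1
        else:
            hi = mid
    return lo

# Two non-crossing x-monotone chains; the query point lies between them.
chains = [[(0, 0), (5, 1), (10, 0)], [(0, 3), (5, 4), (10, 3)]]
print(locate(chains, (4.0, 2.0)))   # 1: exactly one chain lies below the query
```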

539 citations

Journal ArticleDOI
TL;DR: A sorting network with cn log n comparisons is given, where in the i-th step of the algorithm the contents of registers R_{j(i)} and R_{k(i)}, for absolute constants j(i), k(i), are compared and swapped or not according to the result of the comparison.
Abstract: We give a sorting network with cn log n comparisons. The algorithm can be performed in c log n parallel steps as well, where in a parallel step we compare n/2 disjoint pairs. In the i-th step of the algorithm we compare the contents of registers R_{j(i)} and R_{k(i)}, where j(i), k(i) are absolute constants, then change their contents or not according to the result of the comparison.
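The data-oblivious structure described here, a fixed schedule of comparators R_{j(i)}, R_{k(i)} applied regardless of the input, can be simulated in a few lines. The sketch below runs an arbitrary comparator schedule grouped into parallel steps of disjoint pairs and demonstrates it on a standard 5-comparator network for 4 inputs; the AKS result is the existence of such a schedule of total size cn log n, and the construction itself is not reproduced here.

```python
def run_network(values, steps):
    """Apply a comparison network to `values`.  `steps` is a list of parallel
    steps; each step is a list of disjoint register pairs (j, k) with j < k,
    compared simultaneously, swapping so the smaller value lands in R_j."""
    regs = list(values)
    for step in steps:
        for j, k in step:            # disjoint pairs, so order within a step is irrelevant
            if regs[j] > regs[k]:
                regs[j], regs[k] = regs[k], regs[j]
    return regs

# A standard sorting network for 4 inputs (3 parallel steps, 5 comparators).
FOUR_INPUT_NETWORK = [[(0, 1), (2, 3)], [(0, 2), (1, 3)], [(1, 2)]]
assert run_network([3, 1, 4, 1], FOUR_INPUT_NETWORK) == [1, 1, 3, 4]
```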

485 citations

Proceedings ArticleDOI
21 Oct 1985
TL;DR: A bottom-up algorithm to handle trees is given, which has two major advantages over the top-down approach: the control structure is straightforward and easier to implement, facilitating new algorithms that use fewer processors and less time; and problems for which it was too difficult or too complicated to find polylog parallel algorithms become easy.
Abstract: Trees play a fundamental role in many computations, both for sequential and parallel problems. The classic paradigm applied to generate parallel algorithms in the presence of trees has been divide-and-conquer: finding a 1/3 - 2/3 separator and recursively solving the two subproblems. A now classic example is Brent's work on parallel evaluation of arithmetic expressions. This top-down approach has several complications, one of which is finding the separators. We define dynamic expression evaluation as the task of evaluating the expression with no free preprocessing. If we apply Brent's method, finding the separators seems to add a factor of log n to the running time. We give a bottom-up algorithm to handle trees; that is, all modifications to the tree are done locally. This bottom-up approach, which we call CONTRACT, has two major advantages over the top-down approach: (1) the control structure is straightforward and easier to implement, facilitating new algorithms that use fewer processors and less time; and (2) problems for which it was too difficult or too complicated to find polylog parallel algorithms are now easy.
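A hedged sequential simulation of the bottom-up flavor of contraction described here: each round applies only local modifications, raking away leaves and splicing out chain nodes chosen by independent coin flips so that no two adjacent nodes disappear in the same round. This is an illustrative sketch rather than the paper's CONTRACT procedure; expression evaluation, which requires composing partial functions during compression, is omitted.

```python
import random

def contract_rounds(parent):
    """Count rounds of bottom-up tree contraction on a rooted tree given as a
    parent array (parent[root] == root).  Each round works from a snapshot and
    applies only local changes: RAKE deletes every leaf, and COMPRESS splices
    out a chain node (one child, and itself an only child) when it flips heads
    while its parent flips tails, so no two adjacent nodes vanish together.
    The tree shrinks to its root in logarithmically many rounds in expectation."""
    parent = list(parent)
    n = len(parent)
    alive = [True] * n
    rounds = 0
    while sum(alive) > 1:
        children = [[] for _ in range(n)]
        for v in range(n):
            if alive[v] and parent[v] != v:
                children[parent[v]].append(v)
        heads = [random.random() < 0.5 for _ in range(n)]
        remove = set()
        new_parent = list(parent)
        for v in range(n):
            if not alive[v] or parent[v] == v:
                continue
            if not children[v]:                                    # RAKE
                remove.add(v)
            elif (len(children[v]) == 1 and len(children[parent[v]]) == 1
                  and heads[v] and not heads[parent[v]]):          # COMPRESS
                remove.add(v)
                new_parent[children[v][0]] = parent[v]
        for v in remove:
            alive[v] = False
        parent = new_parent
        rounds += 1
    return rounds

# A path of 1024 nodes rooted at node 0 contracts in a logarithmic number of rounds.
print(contract_rounds([0] + list(range(1024 - 1))))
```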

427 citations