Journal Article

Optimal Randomized Parallel Algorithms for Computational Geometry I

John H. Reif, Sandeep Sen (Duke University)
01 Jan 1988, Algorithmica, Vol. 7, Iss. 1, pp. 91-117
TL;DR: In this paper, the authors present optimal parallel algorithms for 3-D maxima and two-set dominance counting by an application of integer sorting; the algorithms run in $O(\log n)$ time using $n$ processors, with very high probability.
Abstract: We present parallel algorithms for some fundamental problems in computational geometry which have running time $O(\log n)$ using $n$ processors, with very high probability (approaching 1 as $n \rightarrow \infty$). These include planar point location, triangulation, and trapezoidal decomposition. We also present optimal algorithms for 3-D maxima and two-set dominance counting by an application of integer sorting. Most of these algorithms run on the CREW PRAM model and have an optimal processor-time product, improving on the previously best known algorithms of Atallah and Goodrich [3] for these problems. The crux of these algorithms is a useful data structure that emulates the plane-sweeping paradigm used in sequential algorithms. We extend some of the techniques used by Reischuk [22] and by Reif and Valiant [21] for the flashsort algorithm to perform divide-and-conquer in the plane very efficiently, leading to the improved performance of our approach.
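The plane-sweep paradigm that the paper's data structure emulates is easiest to see sequentially. Below is a minimal sketch (our illustration, not the paper's parallel construction) of a sweep solving 2-D maxima, the planar relative of the 3-D maxima problem treated in the paper; a point is maximal if no other point weakly dominates it in both coordinates.

```python
# Sequential plane sweep for 2-D maxima (illustration only; the paper's
# contribution is an O(log n)-time, n-processor parallel emulation of
# this kind of sweep, which this sketch does not attempt).

def maxima_2d(points):
    """Return points not dominated by any other point (x and y both >=)."""
    best_y = float("-inf")
    result = []
    # Sweep right-to-left in x; ties broken by y descending so a point
    # sharing x with a higher point is correctly treated as dominated.
    for x, y in sorted(points, key=lambda p: (-p[0], -p[1])):
        if y > best_y:              # nothing to the right is at least as high
            result.append((x, y))
            best_y = y
    return result

print(maxima_2d([(1, 4), (2, 3), (3, 1), (2, 5)]))  # [(3, 1), (2, 5)]
```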
Citations
Journal Article
TL;DR: This paper describes several parallel algorithms that solve geometric problems in a vector model of computation, the scan model, and describes how to implement the CREW version of Cole's merge sort in O(lg n) calls to the scan primitives.
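For intuition, the scan model's central primitive is the prefix scan, which the model charges as unit cost. A minimal sequential definition of its semantics (our sketch, not code from the paper):

```python
# Exclusive prefix scan: out[i] = op(xs[0], ..., xs[i-1]); the scan model
# treats one such call as a unit-cost primitive.

def exclusive_scan(xs, op=lambda a, b: a + b, identity=0):
    out, acc = [], identity
    for x in xs:
        out.append(acc)
        acc = op(acc, x)
    return out

print(exclusive_scan([3, 1, 7, 0, 4]))   # [0, 3, 4, 11, 11]
# Typical use: scanning 0/1 keep-flags yields each survivor's output slot.
print(exclusive_scan([1, 0, 1, 1, 0]))   # [0, 1, 1, 2, 3]
```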

39 citations

Proceedings Article
11 Jul 2016
TL;DR: This article shows that most sequential randomized incremental algorithms are in fact parallel: the dependence structure between iterations is shallow in all of the algorithms studied, implying high parallelism. Three types of dependences are identified, and a framework is presented for analyzing each type.
Abstract: In this paper we show that most sequential randomized incremental algorithms are in fact parallel. We consider several random incremental algorithms including algorithms for comparison sorting and Delaunay triangulation; linear programming, closest pair, and smallest enclosing disk in constant dimensions; as well as least-element lists and strongly connected components on graphs. We analyze the dependence between iterations in an algorithm, and show that the dependence structure is shallow for all of the algorithms, implying high parallelism. We identify three types of dependences found in the algorithms studied and present a framework for analyzing each type of algorithm. Using the framework gives work-efficient polylogarithmic-depth parallel algorithms for most of the problems that we study. Some of these algorithms are straightforward (e.g., sorting and linear programming), while others are more novel and require more effort to obtain the desired bounds (e.g., Delaunay triangulation and strongly connected components). The most surprising of these results is for planar Delaunay triangulation for which the incremental approach is by far the most commonly used in practice, but for which it was not previously known whether it is theoretically efficient in parallel.
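A toy illustration of shallow iteration dependence (our example, not one from the paper): inserting keys in random order into an unbalanced binary search tree is essentially randomized incremental comparison sorting, and each insertion depends only on the nodes along its search path, whose length is O(log n) with high probability.

```python
# Randomized incremental BST insertion: the "dependence depth" of each
# insertion is its depth in the tree, O(log n) w.h.p. for a random order.
import random

def bst_depths(keys):
    """Insert keys in order into an unbalanced BST; return each key's depth."""
    root, depths = None, []
    for k in keys:
        node, parent, d = root, None, 0
        while node is not None:
            parent, d = node, d + 1
            node = node[1] if k < node[0] else node[2]
        new = [k, None, None]           # [key, left child, right child]
        if parent is None:
            root = new
        elif k < parent[0]:
            parent[1] = new
        else:
            parent[2] = new
        depths.append(d)
    return depths

keys = list(range(1000))
random.shuffle(keys)
print(max(bst_depths(keys)))  # typically ~20-30, far below the n of a chain
```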

36 citations

Proceedings Article
11 Jun 2020
TL;DR: This paper presents new parallel algorithms for Euclidean exact DBSCAN and approximate DBSCAN that match the work bounds of their sequential counterparts and are highly parallel (polylogarithmic depth).
Abstract: The DBSCAN method for spatial clustering has received significant attention due to its applicability in a variety of data analysis tasks. There are fast sequential algorithms for DBSCAN in Euclidean space that take O(nlog n) work for two dimensions, sub-quadratic work for three or more dimensions, and can be computed approximately in linear work for any constant number of dimensions. However, existing parallel DBSCAN algorithms require quadratic work in the worst case. This paper bridges the gap between theory and practice of parallel DBSCAN by presenting new parallel algorithms for Euclidean exact DBSCAN and approximate DBSCAN that match the work bounds of their sequential counterparts, and are highly parallel (polylogarithmic depth). We present implementations of our algorithms along with optimizations that improve their practical performance. We perform a comprehensive experimental evaluation of our algorithms on a variety of datasets and parameter settings. Our experiments on a 36-core machine with two-way hyper-threading show that our implementations outperform existing parallel implementations by up to several orders of magnitude, and achieve speedups of up to 33x over the best sequential algorithms.
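For readers who have not seen DBSCAN, a naive quadratic-work sketch of the core-point rule the algorithm is built on (this is the brute-force baseline, not the paper's work-efficient parallel method; the names are ours):

```python
# Brute-force DBSCAN core points: p is a core point if at least min_pts
# points (p included) lie within distance eps of p. O(n^2) work overall.

def core_points(points, eps, min_pts):
    def within(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q)) <= eps * eps
    return [p for p in points
            if sum(within(p, q) for q in points) >= min_pts]

pts = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1), (5.0, 5.0)]
print(core_points(pts, eps=0.2, min_pts=3))  # the three clustered points
```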

35 citations

Journal Article
TL;DR: An algorithm for locating a query point q in an arrangement of n hyperplanes in $R^d$ that also reports, in addition to the face of the arrangement that contains q, the first hyperplane hit by a ray shot from q in some fixed direction.
Abstract: We present an algorithm for locating a query point q in an arrangement of n hyperplanes in $R^d$. The size of the data structure is $O(n^d)$ and the time to answer any query is $O(\log n)$. Unlike previous data structures, our solution will also report, in addition to the face of the arrangement that contains q, the first hyperplane that is hit (if any) by a ray shot from q in some fixed direction. Actually, if this ray-shooting capability is all that is needed, or if one only desires to know a single vertex of the face enclosing q, then the storage can be reduced to $O(n^d/(\log n)^{\lceil d/2 \rceil - \epsilon})$, for any fixed $\epsilon > 0$.
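For contrast, the no-preprocessing baseline (our sketch, not the paper's structure): the face of the arrangement containing q is encoded by q's sign with respect to each hyperplane a·x = b, computable in O(nd) time per query; the point of the O(n^d)-space structure is to replace this linear scan with an O(log n) search.

```python
# Sign vector of q with respect to hyperplanes a.x = b: two points lie in
# the same face of the arrangement iff their sign vectors agree.

def sign_vector(hyperplanes, q, tol=1e-12):
    signs = []
    for a, b in hyperplanes:
        s = sum(ai * qi for ai, qi in zip(a, q)) - b
        signs.append(0 if abs(s) <= tol else (1 if s > 0 else -1))
    return signs

planes = [((1.0, 0.0), 0.0), ((0.0, 1.0), 0.0), ((1.0, 1.0), 1.0)]
print(sign_vector(planes, (0.25, 0.25)))  # [1, 1, -1]
```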

35 citations

Proceedings Article
11 Jul 2018
TL;DR: This paper designs parallel write-efficient geometric algorithms that perform asymptotically fewer writes than standard algorithms for the same problem, and introduces several techniques for obtaining write-efficiency, including DAG tracing, prefix doubling, and α-labeling.
Abstract: In this paper, we design parallel write-efficient geometric algorithms that perform asymptotically fewer writes than standard algorithms for the same problem. This is motivated by emerging non-volatile memory technologies with read performance being close to that of random access memory but writes being significantly more expensive in terms of energy and latency. We design algorithms for planar Delaunay triangulation, k-d trees, and static and dynamic augmented trees. Our algorithms are designed in the recently introduced Asymmetric Nested-Parallel Model, which captures the parallel setting in which there is a small symmetric memory where reads and writes are unit cost, as well as a large asymmetric memory where writes are $\omega$ times more expensive than reads. In designing these algorithms, we introduce several techniques for obtaining write-efficiency, including DAG tracing, prefix doubling, and α-labeling, which we believe will be useful for designing other parallel write-efficient algorithms.
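A toy cost-accounting sketch of the asymmetry (our illustration, not the Asymmetric Nested-Parallel Model's formal machinery): once writes cost ω times more than reads, re-deriving intermediate values with extra reads, as in DAG tracing, can be far cheaper than materializing them.

```python
# Count reads and writes separately, charging omega per write.

class AsymmetricCounter:
    def __init__(self, omega):
        self.omega, self.reads, self.writes = omega, 0, 0
    def read(self):  self.reads += 1
    def write(self): self.writes += 1
    @property
    def cost(self):
        return self.reads + self.omega * self.writes

c = AsymmetricCounter(omega=100)
for _ in range(1000):
    c.read()    # e.g. re-tracing a computation DAG instead of caching
for _ in range(10):
    c.write()   # writing only the final answers
print(c.cost)   # 1000 + 100*10 = 2000; writing 1000 intermediates: 100,000+
```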

32 citations

References
01 Jan 1985
TL;DR: This book offers a coherent treatment, at the graduate textbook level, of the field that has come to be known in the last decade or so as computational geometry.
Abstract: From the reviews: "This book offers a coherent treatment, at the graduate textbook level, of the field that has come to be known in the last decade or so as computational geometry...The book is well organized and lucidly written; a timely contribution by two founders of the field. It clearly demonstrates that computational geometry in the plane is now a fairly well-understood branch of computer science and mathematics. It also points the way to the solution of the more challenging problems in dimensions higher than two."

6,525 citations

Journal Article
TL;DR: In this paper, it is shown that the likelihood ratio test for fixed sample size can be reduced to this form, and that, for large samples, a sample of size $n$ with the first test gives about the same probabilities of error as a sample of size $en$ with the second test.
Abstract: In many cases an optimum or computationally convenient test of a simple hypothesis $H_0$ against a simple alternative $H_1$ may be given in the following form. Reject $H_0$ if $S_n = \sum^n_{j=1} X_j \leqq k,$ where $X_1, X_2, \cdots, X_n$ are $n$ independent observations of a chance variable $X$ whose distribution depends on the true hypothesis and where $k$ is some appropriate number. In particular the likelihood ratio test for fixed sample size can be reduced to this form. It is shown that with each test of the above form there is associated an index $\rho$. If $\rho_1$ and $\rho_2$ are the indices corresponding to two alternative tests $e = \log \rho_1/\log \rho_2$ measures the relative efficiency of these tests in the following sense. For large samples, a sample of size $n$ with the first test will give about the same probabilities of error as a sample of size $en$ with the second test. To obtain the above result, use is made of the fact that $P(S_n \leqq na)$ behaves roughly like $m^n$ where $m$ is the minimum value assumed by the moment generating function of $X - a$. It is shown that if $H_0$ and $H_1$ specify probability distributions of $X$ which are very close to each other, one may approximate $\rho$ by assuming that $X$ is normally distributed.
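A small numeric sanity check of the statement that $P(S_n \leqq na)$ behaves roughly like $m^n$, with $m$ the minimum of the moment generating function of $X - a$ (the Bernoulli parameters below are our choice, purely for illustration):

```python
# Compare m^n against the exact binomial tail for X ~ Bernoulli(1/2), a = 1/4.
import math

p, a, n = 0.5, 0.25, 200

# m = min over t of E[exp(t(X - a))], found here by a crude grid search.
m = min((1 - p + p * math.exp(t)) * math.exp(-t * a)
        for t in (i / 1000.0 for i in range(-5000, 1)))

# Exact tail P(S_n <= na).
tail = sum(math.comb(n, k) * p ** k * (1 - p) ** (n - k)
           for k in range(int(n * a) + 1))

# Same exponential rate; the ratio m**n / tail grows only polynomially in n.
print(m ** n, tail)   # ~4e-12 vs ~4e-13
```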

3,760 citations

Proceedings Article
Kenneth L. Clarkson
06 Jan 1988
TL;DR: Asymptotically tight bounds for a combinatorial quantity of interest in discrete and computational geometry, related to halfspace partitions of point sets, are given.
Abstract: Random sampling is used for several new geometric algorithms. The algorithms are “Las Vegas,” and their expected bounds are with respect to the random behavior of the algorithms. One algorithm reports all the intersecting pairs of a set of line segments in the plane, and requires O(A + n log n) expected time, where A is the size of the answer, the number of intersecting pairs reported. The algorithm requires O(n) space in the worst case. Another algorithm computes the convex hull of a point set in $E^3$ in O(n log A) expected time, where n is the number of points and A is the number of points on the surface of the hull. A simple Las Vegas algorithm triangulates simple polygons in O(n log log n) expected time. Algorithms for half-space range reporting are also given. In addition, this paper gives asymptotically tight bounds for a combinatorial quantity of interest in discrete and computational geometry, related to halfspace partitions of point sets.
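A terminology aside that may help: a “Las Vegas” algorithm is always correct, and only its running time is a random variable. Randomized quickselect (below, our example, unrelated to the paper's geometric constructions) is the classic minimal instance of random sampling driving an expected-time bound.

```python
# Las Vegas selection: always returns the k-th smallest (0-indexed);
# the random pivot only affects the running time, expected O(n).
import random

def quickselect(xs, k):
    pivot = random.choice(xs)                 # random sample as splitter
    lo = [x for x in xs if x < pivot]
    eq = [x for x in xs if x == pivot]
    if k < len(lo):
        return quickselect(lo, k)
    if k < len(lo) + len(eq):
        return pivot
    return quickselect([x for x in xs if x > pivot], k - len(lo) - len(eq))

print(quickselect([9, 1, 8, 2, 7, 3], 2))  # 3
```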

1,163 citations

Journal Article
Richard Cole
TL;DR: A parallel implementation of merge sort on a CREW PRAM that uses n processors and O(logn) time; the constant in the running time is small.
Abstract: We give a parallel implementation of merge sort on a CREW PRAM that uses n processors and $O(\log n)$ time; the constant in the running time is small. We also give a more complex version of the algorithm for the EREW PRAM; it also uses n processors and $O(\log n)$ time. The constant in the running time is still moderate, though not as small.
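Cole's cascading-merge pipeline is intricate; the rank-based merge below (a standard textbook idea, not Cole's algorithm) shows why merging parallelizes at all: every output position is computable independently by binary search, giving an O(log n)-step merge with n processors, at O(n log n) work rather than the O(n) Cole achieves.

```python
# Merge two sorted arrays by cross-ranks: element x of a lands at
# index (rank of x in a) + (rank of x in b); all slots are independent,
# so "one processor per element" needs only the binary searches.
from bisect import bisect_left, bisect_right

def rank_merge(a, b):
    out = [None] * (len(a) + len(b))
    for i, x in enumerate(a):
        out[i + bisect_left(b, x)] = x       # ties: a's copies placed first
    for j, y in enumerate(b):
        out[j + bisect_right(a, y)] = y
    return out

print(rank_merge([1, 3, 5], [2, 3, 4]))  # [1, 2, 3, 3, 4, 5]
```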

847 citations

Journal Article
TL;DR: This work presents a practical algorithm for subdivision search that achieves the same (optimal) worst case complexity bounds as the significantly more complex algorithm of Lipton and Tarjan, namely $O(\log n)$ search time with $O(n)$ storage.
Abstract: A planar subdivision is any partition of the plane into (possibly unbounded) polygonal regions. The subdivision search problem is the following: given a subdivision $S$ with $n$ line segments and a query point $p$, determine which region of $S$ contains $p$. We present a practical algorithm for subdivision search that achieves the same (optimal) worst case complexity bounds as the significantly more complex algorithm of Lipton and Tarjan, namely $O(\log n)$ search time with $O(n)$ storage. Our subdivision search structure can be constructed in linear time from the subdivision representation used in many applications.
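For context, a sketch of the classic slab method (our illustration, not the paper's structure): it achieves the same $O(\log n)$ query time by binary search over x-slabs, but storing every slab's segment order costs $O(n^2)$ space in the worst case, which is precisely the gap the $O(n)$-storage structure closes.

```python
# Slab method: cut the plane at segment endpoints' x-coordinates; inside
# one slab the segments are totally ordered in y, so two binary searches
# (slab, then height) locate a query point.
from bisect import bisect_right

def y_at(seg, x):
    (x1, y1), (x2, y2) = seg
    return y1 + (y2 - y1) * (x - x1) / (x2 - x1)

def build_slabs(segments):
    """segments: ((x1, y1), (x2, y2)) with x1 < x2."""
    xs = sorted({x for s in segments for x in (s[0][0], s[1][0])})
    slabs = []
    for xl, xr in zip(xs, xs[1:]):
        mid = (xl + xr) / 2.0
        spanning = [s for s in segments if s[0][0] <= xl and s[1][0] >= xr]
        spanning.sort(key=lambda s: y_at(s, mid))  # up to O(n) per slab
        slabs.append(spanning)
    return xs, slabs

def locate(xs, slabs, q):
    """Return the number of segments of q's slab strictly below q.
    (A real implementation binary-searches the y-order; linear for brevity.)"""
    i = bisect_right(xs, q[0]) - 1
    if not 0 <= i < len(slabs):
        return None                 # q is outside the subdivision's x-range
    return sum(y_at(s, q[0]) < q[1] for s in slabs[i])

segs = [((0, 0), (4, 0)), ((0, 2), (4, 4))]
xs, slabs = build_slabs(segs)
print(locate(xs, slabs, (2, 1)))    # 1: q lies above only the lower segment
```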

810 citations