Journal ArticleDOI

Faster output-sensitive parallel algorithms for 3D convex hulls and vector maxima

01 Apr 2003-Journal of Parallel and Distributed Computing (Academic Press, Inc.)-Vol. 63, Iss: 4, pp 488-500
TL;DR: An optimal-speedup (with respect to the input size only), sublogarithmic-time algorithm for vector maxima in three dimensions that uses a superlinear number of processors and is faster than previously known algorithms.
About: This article was published in the Journal of Parallel and Distributed Computing on 2003-04-01. It has received 11 citations to date. The article focuses on the topics: Output-sensitive algorithm & Convex hull.
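
To make the problem concrete, here is a plain sequential reference sketch in Python (not taken from the paper) of the three-dimensional vector maxima, or dominance, problem that the paper solves output-sensitively in parallel: a point is maximal if no other point is at least as large in every coordinate. The sweep order and staircase bookkeeping below are ordinary textbook choices, not the paper's method.

```python
# Sequential reference for 3D vector maxima (dominance): sweep by decreasing x
# and keep a 2D "staircase" of the (y, z) maxima seen so far.
import random
from bisect import bisect_left

def maxima_3d(points):
    points = sorted(points, reverse=True)   # every earlier point has a larger x
    ys, zs = [], []                         # staircase: ys ascending, zs descending
    result = []
    for x, y, z in points:
        i = bisect_left(ys, y)              # first staircase entry with y' >= y
        if i < len(ys) and zs[i] >= z:
            continue                        # dominated by an earlier point
        result.append((x, y, z))
        while i > 0 and zs[i - 1] <= z:     # drop staircase entries p now dominates
            ys.pop(i - 1)
            zs.pop(i - 1)
            i -= 1
        ys.insert(i, y)
        zs.insert(i, z)
    return result

pts = [(random.random(), random.random(), random.random()) for _ in range(100000)]
print(len(maxima_3d(pts)), "maximal points out of", len(pts))
```
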
Citations
Journal ArticleDOI
TL;DR: A novel parallel algorithm for computing the convex hull of a set of points in 3D using the CUDA programming model. Based on the QuickHull approach, it starts by constructing an initial tetrahedron from four extreme points, discards the internal points, and distributes the external points to the four faces.
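
As a rough illustration of the initial step described above, the following Python sketch (a sequential stand-in, not the paper's CUDA implementation) builds an initial tetrahedron from four extreme points, drops the points inside it, and buckets the remaining points by a face they lie outside of. The choice of extreme directions and the tolerance eps are assumptions made for the sketch.

```python
# Sketch of the QuickHull-style first step: initial tetrahedron, interior
# discard, and per-face distribution of the surviving points.
import numpy as np

def signed_volume(a, b, c, d):
    # > 0 when d lies on the positive side of the plane through a, b, c.
    return np.linalg.det(np.stack([b - a, c - a, d - a]))

def initial_partition(pts, eps=1e-12):
    # Four extreme points: min/max x, then the point farthest from that
    # segment, then the point farthest from the resulting plane.
    a = pts[np.argmin(pts[:, 0])]
    b = pts[np.argmax(pts[:, 0])]
    ab = b - a
    c = pts[np.argmax(np.linalg.norm(np.cross(pts - a, ab), axis=1))]
    n = np.cross(ab, c - a)
    d = pts[np.argmax(np.abs((pts - a) @ n))]
    tet = [a, b, c, d]
    # Orient each face so the opposite tetrahedron vertex is on its negative side.
    faces = []
    for i in range(4):
        f = [tet[j] for j in range(4) if j != i]
        if signed_volume(*f, tet[i]) > 0:
            f[1], f[2] = f[2], f[1]          # flip so tet[i] is "inside"
        faces.append(f)
    # Each point goes to (one of) the faces it lies strictly outside of;
    # points inside every face plane are inside the tetrahedron and discarded.
    buckets = [[] for _ in faces]
    for p in pts:
        for k, (u, v, w) in enumerate(faces):
            if signed_volume(u, v, w, p) > eps:
                buckets[k].append(p)
                break
    return faces, buckets

pts = np.random.rand(10000, 3)
faces, buckets = initial_partition(pts)
print("points per face:", [len(b) for b in buckets],
      "| total surviving the interior filter:", sum(len(b) for b in buckets))
```
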

52 citations


Cites methods from "Faster output-sensitive parallel algorithms for 3D convex hulls and vector maxima"

  • ...Many parallel algorithms for convex-hull construction were proposed; most of them use the parallel random-access machine (PRAM) to design algorithms because of its close relationship with the sequential models [8]....

Journal ArticleDOI
TL;DR: A hybrid algorithm to compute the convex hull of points in three or higher dimensional spaces. A GPU-based interior-point filter culls away many of the points that do not lie on the boundary, and a pseudo-hull that is contained inside the convex hull of the original points is computed.
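
A hedged sketch of the interior-point-filter idea, written with NumPy/SciPy rather than the paper's GPU pipeline: take the extreme input point along a handful of directions, build the small polytope those extremes span, and discard every point strictly inside it, since such points cannot be vertices of the full hull. The number of directions and the tolerance are arbitrary illustration choices.

```python
# Interior-point filter sketch: points strictly inside the hull of a few
# extreme points are strictly inside the full hull, so they can be culled.
import numpy as np
from scipy.spatial import ConvexHull

def interior_filter(pts, n_dirs=32, eps=1e-9):
    rng = np.random.default_rng(0)
    dirs = rng.normal(size=(n_dirs, 3))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    extremes = pts[np.unique(np.argmax(pts @ dirs.T, axis=0))]
    inner = ConvexHull(extremes)                   # small hull of the extremes
    A, b = inner.equations[:, :3], inner.equations[:, 3]
    inside = np.all(pts @ A.T + b < -eps, axis=1)  # strictly inside every facet
    return pts[~inside]                            # only these can be hull vertices

pts = np.random.randn(200000, 3)
survivors = interior_filter(pts)
print(len(pts), "->", len(survivors), "candidate hull points")
```
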

46 citations

Proceedings ArticleDOI
06 Jul 2020
TL;DR: A strong theoretical analysis is provided showing that for n points in any constant dimension the standard incremental algorithm is inherently parallel; for problems where the size of the support set can be bounded by a constant, the depth of the configuration dependence graph is shallow.
Abstract: The randomized incremental convex hull algorithm is one of the most practical and important geometric algorithms in the literature. Due to its simplicity, and the fact that many points or facets can be added independently, it is also widely used in parallel convex hull implementations. However, to date there have been no non-trivial theoretical bounds on the parallelism available in these implementations. In this paper, we provide a strong theoretical analysis showing that the standard incremental algorithm is inherently parallel. In particular, we show that for n points in any constant dimension, the algorithm has O(log n) dependence depth with high probability. This leads to a simple work-optimal parallel algorithm with polylogarithmic span with high probability. Our key technical contribution is a new definition and analysis of the configuration dependence graph extending the traditional configuration space, which allows for asynchrony in adding configurations. To capture the "true" dependence between configurations, we define the support set of configuration c to be the set of already added configurations that it depends on. We show that for problems where the size of the support set can be bounded by a constant, the depth of the configuration dependence graph is shallow (O(log n) with high probability for input size n). In addition to convex hull, our approach also extends to several related problems, including half-space intersection and finding the intersection of a set of unit circles. We believe that the configuration dependence graph and its analysis is a general idea that could potentially be applied to more problems.
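
The following Python sketch is a rough empirical companion to the dependence-depth claim. It uses a simplified 2D upper-hull analogue rather than the paper's configuration dependence graph: points are inserted in random order, each insertion records the previously inserted points it interacts with (the hull vertices it removes or attaches to, or the edge it falls below), and the longest chain of such dependencies is reported. What counts as a "dependency" here is a simplifying assumption for illustration; the observed depth should grow slowly, on the order of log n.

```python
# Randomized incremental 2D upper hull with crude dependency tracking.
import math
import random
from bisect import bisect_left

def turn(o, a, b):
    # > 0: b lies to the left of the directed line o -> a; < 0: to the right.
    return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])

def dependence_depth(n, seed=0):
    rng = random.Random(seed)
    pts = [(rng.random(), rng.random()) for _ in range(n)]
    hull, depth = [], {}                 # upper hull kept sorted by x; depth per point
    for p in pts:                        # random insertion order
        i = bisect_left(hull, p)
        if 0 < i < len(hull) and turn(hull[i-1], hull[i], p) <= 0:
            deps = [hull[i-1], hull[i]]  # p lies below an existing hull edge: discarded
        else:
            hull.insert(i, p)
            deps = []
            while i >= 2 and turn(hull[i-2], hull[i-1], p) >= 0:
                deps.append(hull.pop(i-1))           # left vertex hidden by p
                i -= 1
            while i + 2 < len(hull) and turn(p, hull[i+1], hull[i+2]) >= 0:
                deps.append(hull.pop(i+1))           # right vertex hidden by p
            if i > 0:
                deps.append(hull[i-1])               # surviving neighbours p attaches to
            if i + 1 < len(hull):
                deps.append(hull[i+1])
        depth[p] = 1 + max((depth[q] for q in deps), default=0)
    return max(depth.values())

for n in (1000, 10000, 100000):
    print(n, dependence_depth(n), "log2(n) ~", round(math.log2(n), 1))
```
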

26 citations


Cites methods from "Faster output-sensitive parallel algorithms for 3D convex hulls and vector maxima"

  • ...Gupta and Sen [42] later used the parallel divide-and-conquer approach to develop output-sensitive algorithms for convex hull....

  • ...In the parallel setting, there have been several asymptotically efficient parallel algorithms for convex hull [5, 7, 8, 42, 49, 52], although none of them are based on the incremental approach....

  • ...Reif and Sen [52] developed the first work-optimal PRAM algorithm for 3D convex hull, and it also had optimal logarithmic span....

  • ...This idea is used in many parallel implementations of convex hull [27, 34, 38, 40, 42, 47, 56, 59], although with no strong theoretical bounds....

Journal ArticleDOI
TL;DR: The work demonstrates that the GPU can be used to solve nontrivial computational geometry problems with significant performance benefit, running up to an order of magnitude faster than sequential convex hull implementations on the CPU for inputs of millions of points.
Abstract: A novel algorithm is presented to compute the convex hull of a point set in ℝ³ using the graphics processing unit (GPU). By exploiting the relationship between the Voronoi diagram and the convex hull, the algorithm derives the approximation of the convex hull from the former. The other extreme vertices of the convex hull are then found by using a two-round checking in the digital and the continuous space successively. The algorithm does not need explicit locking or any other concurrency control mechanism, thus it can maximize the parallelism available on the modern GPU. The implementation using the CUDA programming model on NVIDIA GPUs is exact and efficient. The experiments show that it is up to an order of magnitude faster than other sequential convex hull implementations running on the CPU for inputs of millions of points. The work demonstrates that the GPU can be used to solve nontrivial computational geometry problems with significant performance benefit.
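
A small CPU-side check (SciPy, in 2D) of the Voronoi/convex-hull relationship the algorithm exploits: in general position, a point's Voronoi cell is unbounded exactly when the point is a vertex of the convex hull. The paper works with a digital (grid) Voronoi diagram on the GPU in 3D; this sketch only verifies the underlying geometric fact.

```python
# Points with unbounded Voronoi cells coincide with the convex hull vertices.
import numpy as np
from scipy.spatial import Voronoi, ConvexHull

pts = np.random.randn(2000, 2)
vor = Voronoi(pts)
# A Voronoi region is unbounded iff it lists the "vertex at infinity" (-1).
unbounded = {i for i, r in enumerate(vor.point_region) if -1 in vor.regions[r]}
hull_vertices = set(ConvexHull(pts).vertices)
print(unbounded == hull_vertices)   # expected: True for points in general position
```
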

22 citations

Journal ArticleDOI
TL;DR: This paper presents a fast, simple-to-implement and robust Smart Convex Hull (S-CH) algorithm for computing the convex hull of a set of points in E³, based on "spherical" space subdivision.

10 citations


Cites background from "Faster output-sensitive parallel algorithms for 3D convex hulls and vector maxima"

  • ...Reif and Sen [14] proposed a randomized algorithm for three-dimensional convex hulls that runs in O(log n) time using a divide-and-conquer approach on n processors....

  • ...Gupta and Sen [11] proposed a fast parallel convex hull algorithm that is output-size sensitive....

References
Proceedings ArticleDOI
Kenneth L. Clarkson
06 Jan 1988
TL;DR: Asymptotically tight bounds for a combinatorial quantity of interest in discrete and computational geometry, related to halfspace partitions of point sets, are given.
Abstract: Random sampling is used for several new geometric algorithms. The algorithms are “Las Vegas,” and their expected bounds are with respect to the random behavior of the algorithms. One algorithm reports all the intersecting pairs of a set of line segments in the plane, and requires O(A + n log n) expected time, where A is the size of the answer, the number of intersecting pairs reported. The algorithm requires O(n) space in the worst case. Another algorithm computes the convex hull of a point set in E3 in O(n log A) expected time, where n is the number of points and A is the number of points on the surface of the hull. A simple Las Vegas algorithm triangulates simple polygons in O(n log log n) expected time. Algorithms for half-space range reporting are also given. In addition, this paper gives asymptotically tight bounds for a combinatorial quantity of interest in discrete and computational geometry, related to halfspace partitions of point sets.

1,163 citations

Journal ArticleDOI
TL;DR: It is shown that arithmetic expressions with n ≥ 1 variables and constants; operations of addition, multiplication, and division; and any depth of parenthesis nesting can be evaluated in time 4 log₂ n + 10(n − 1)/p using p ≥ 1 processors which can independently perform arithmetic operations in unit time.
Abstract: It is shown that arithmetic expressions with n ≥ 1 variables and constants; operations of addition, multiplication, and division; and any depth of parenthesis nesting can be evaluated in time 4 log₂ n + 10(n − 1)/p using p ≥ 1 processors which can independently perform arithmetic operations in unit time. This bound is within a constant factor of the best possible. A sharper result is given for expressions without the division operation, and the question of numerical stability is discussed.
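
A small illustration of the Brent-style tradeoff behind this bound: evaluating a balanced binary expression tree of n leaves level by level with p processors takes Σ ⌈ops-per-level / p⌉ rounds, which is at most (n − 1)/p + log₂ n, i.e. the work divided across processors plus the tree depth. The paper's 4 log₂ n + 10(n − 1)/p bound additionally covers restructuring arbitrary, deeply nested expressions, which this sketch does not attempt.

```python
# Level-by-level evaluation rounds for a perfect binary expression tree with
# n leaves and p processors, compared to the (n-1)/p + log2(n) bound.
import math

def rounds_level_by_level(n_leaves, p):
    ops_per_level = []
    ops = n_leaves // 2                  # n/2, n/4, ..., 1 internal operations per level
    while ops >= 1:
        ops_per_level.append(ops)
        ops //= 2
    return sum(math.ceil(ops / p) for ops in ops_per_level)

for n, p in [(2**16, 1), (2**16, 8), (2**16, 64), (2**16, 1024)]:
    r = rounds_level_by_level(n, p)
    bound = (n - 1) / p + math.log2(n)
    print(f"n={n:6d} p={p:5d}  rounds={r:6d}  (n-1)/p + log2(n)={bound:9.1f}")
```
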

864 citations

Journal ArticleDOI
TL;DR: The concept of an ε-net of a set of points for an abstract set of ranges is introduced, and sufficient conditions are given that a random sample is an ε-net with any desired probability.
Abstract: We demonstrate the existence of data structures for half-space and simplex range queries on finite point sets in d-dimensional space, d ≥ 2, with linear storage and O(n^α) query time, where α = d(d − 1)/(d(d − 1) + 1) + γ for all γ > 0. These bounds are better than those previously published for all d ≥ 2. Based on ideas due to Vapnik and Chervonenkis, we introduce the concept of an ε-net of a set of points for an abstract set of ranges and give sufficient conditions that a random sample is an ε-net with any desired probability. Using these results, we demonstrate how random samples can be used to build a partition-tree structure that achieves the above query time.
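
A hedged empirical check (in 2D, for halfplane ranges) of the sampling claim: a uniform random sample of size roughly (c/ε) log(1/ε) should, with good probability, hit every halfplane containing at least an ε fraction of the points. Only halfplanes whose boundary passes through two input points are tested, a standard way to enumerate essentially all distinct halfplane ranges on a finite point set; the constant c and the point distribution are arbitrary choices for the sketch.

```python
# Check whether a random sample is an eps-net for halfplane ranges.
import numpy as np

rng = np.random.default_rng(1)
n, eps, c = 300, 0.15, 8
pts = rng.random((n, 2))
m = int(c / eps * np.log(1 / eps))          # sample size ~ (c/eps) log(1/eps)
sample = pts[rng.choice(n, size=m, replace=False)]

def misses_heavy_halfplane(a, b):
    """True if a halfplane bounded by line ab holds >= eps*n points but no sample point."""
    normal = np.array([b[1] - a[1], a[0] - b[0]])   # perpendicular to b - a
    side_pts = (pts - a) @ normal
    side_smp = (sample - a) @ normal
    for sign in (1.0, -1.0):                        # test both open halfplanes
        if np.mean(sign * side_pts > 0) >= eps and not np.any(sign * side_smp > 0):
            return True
    return False

violations = sum(misses_heavy_halfplane(pts[i], pts[j])
                 for i in range(n) for j in range(i + 1, n))
print(f"sample size {m}, heavy halfplanes missed by the sample: {violations}")
```
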

799 citations

Journal ArticleDOI
TL;DR: The presented algorithms use the "divide and conquer" technique and recursively apply a merge procedure for two nonintersecting convex hulls; their O(n log n) time complexity is optimal within a multiplicative constant.
Abstract: The convex hulls of sets of n points in two and three dimensions can be determined with O(n log n) operations. The presented algorithms use the “divide and conquer” technique and recursively apply a merge procedure for two nonintersecting convex hulls. Since any convex hull algorithm requires at least O(n log n) operations, the time complexity of the proposed algorithms is optimal within a multiplicative constant.
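
A compact 2D upper-hull sketch in the same divide-and-conquer spirit: split the x-sorted points in half, recurse, and merge the two nonintersecting upper hulls in linear time. The merge here is a single convexity-repair scan over the two x-sorted chains, whereas Preparata and Hong's actual merge finds the tangent (bridge) faces directly and also handles the 3D case; the total time is O(n log n) either way.

```python
# Divide-and-conquer 2D upper hull with a linear-time merge of two
# x-separated sub-hulls.
import random

def turn(o, a, b):
    return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])

def merge(left, right):
    out = []
    for p in left + right:                 # both chains are already sorted by x
        while len(out) >= 2 and turn(out[-2], out[-1], p) >= 0:
            out.pop()                      # repair convexity around the seam
        out.append(p)
    return out

def upper_hull(pts):                       # pts must be sorted by x
    if len(pts) <= 2:
        return list(pts)
    mid = len(pts) // 2
    return merge(upper_hull(pts[:mid]), upper_hull(pts[mid:]))

pts = sorted((random.random(), random.random()) for _ in range(1000))
print(len(upper_hull(pts)), "vertices on the upper hull")
```
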

731 citations

Book
01 Jan 1994
TL;DR: A textbook on randomized algorithms in computational geometry, covering quick-sort and search, incremental and dynamic algorithms, random sampling and ε-nets, arrangements of hyperplanes, convex polytopes, Voronoi diagrams, range search, and derandomization.
Abstract: I BASICS 1 Quick-sort and Search Quick-sort Another view of quick-sort Randomized binary trees Skip lists 2 What Is Computational Geometry? Range queries Arrangements Trapezoidal decompositions Convex polytopes Voronoi diagrams Hidden surface removal Numerical precision and degeneracies Early deterministic algorithms Deterministic vs randomized algorithms The model of randomness 3 Incremental Algorithms Trapezoidal decompositions Convex polytopes Voronoi diagrams Configuration spaces Tail estimates 4 Dynamic Algorithms trapezoidal decompositions Voronoi diagrams History and configuration spaces Rebuilding history Deletions in history Dynamic shuffling 5 Random Sampling Configuration spaces with bounded valence Top-down sampling Bottom-up sampling Dynamic sampling Average conflict size More dynamic algorithms Range spaces and E-nets Comparisons II APPLICATIONS 6 Arrangements of Hyperplanes Incremental construction Zone Theorem Canonical triangulations Point location and ray shooting Point location and range queries 7 Convex Polytopes Linear Programming The number of faces Incremental construction The expected structural and conflict change Dynamic maintenance Voronoi diagrams Search problems Levels and Voronoi diagrams of order k 8 Range Search Orthogonal intersection search Nonintersecting segments in the plane Dynamic point location Simplex range search Half-space range queries Decomposable search problems Parametric search 9 Computer Graphics Hidden surface removal Binary Space Partitions Moving viewpoint 10 How Crucial Is Randomness? Pseudo-random sources Derandomization Appendix: Tail Estimates Chernoff's technique Chebychev's technique Bibliography Index

595 citations