Journal ArticleDOI

New applications of random sampling in computational geometry

Kenneth L. Clarkson1
01 Jun 1987 - Discrete and Computational Geometry (Springer New York) - Vol. 2, Iss. 1, pp. 195-222
TL;DR: This paper gives several new demonstrations of the usefulness of random sampling in computational geometry, including a search structure for arrangements of hyperplanes built by sampling the hyperplanes and using the resulting arrangement to divide and conquer.
Abstract: This paper gives several new demonstrations of the usefulness of random sampling techniques in computational geometry. One new algorithm creates a search structure for arrangements of hyperplanes by sampling the hyperplanes and using information from the resulting arrangement to divide and conquer. This algorithm requires O(s^{d+ε}) expected preprocessing time to build a search structure for an arrangement of s hyperplanes in d dimensions. The expectation, as with all expected times reported here, is with respect to the random behavior of the algorithm, and holds for any input. Given the data structure and a query point p, the cell of the arrangement containing p can be found in O(log s) worst-case time. (The bound holds for any fixed ε > 0, with the constant factors dependent on d and ε.) Using point-plane duality, the algorithm may be used for answering halfspace range queries. Another algorithm finds random samples of simplices to determine the separation distance of two polytopes. The algorithm uses expected O(n^{⌊d/2⌋}) time, where n is the total number of vertices of the two polytopes. This matches previous results [10] for the case d = 3 and extends them. Another algorithm samples points in the plane to determine their order-k Voronoi diagram, and requires expected O(s^{1+ε} k) time for s points. (It is assumed that no four of the points are cocircular.) This sharpens the bound O(s k^2 log s) for Lee's algorithm [21], and O(s^2 log s + k(s−k) log^2 s) for Chazelle and Edelsbrunner's algorithm [4]. Finally, random sampling is used to show that any set of s points in E^3 has O(s k^2 log^8 s / (log log s)^6) distinct j-sets with j ≤ k. (For S ⊂ E^d, a set S' ⊂ S with |S'| = j is a j-set of S if there is a half-space h^+ with S' = S ∩ h^+.) This sharpens, with respect to k, the previous bound O(s k^5) [5]. The proof of the bound given here is an instance of a "probabilistic method" [15].
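To make the sample-and-recurse idea concrete, here is a minimal sketch in the simplest possible setting: hyperplanes in E^1 are just numbers on a line, and the cells of a sample's arrangement are the intervals between sampled values. This is only a schematic 1-D analogue, not the paper's actual data structure; the names, the constant sample size, and the recursion cutoff are illustrative choices.

```python
import bisect
import random

SAMPLE_SIZE = 8   # r: size of the random sample taken at each node (illustrative)
LEAF_SIZE = 16    # below this many distinct values, answer queries by brute force

def build(hyperplanes):
    """Recursively build a search tree over a set of 1-D 'hyperplanes' (numbers)."""
    distinct = sorted(set(hyperplanes))
    if len(distinct) <= LEAF_SIZE:
        return {"leaf": distinct}
    sample = sorted(random.sample(distinct, SAMPLE_SIZE))
    # Conflict lists: cell i of the sample's arrangement is the closed interval
    # [sample[i-1], sample[i]]; each input value is assigned to every cell it touches.
    cells = [[] for _ in range(SAMPLE_SIZE + 1)]
    for h in distinct:
        lo, hi = bisect.bisect_left(sample, h), bisect.bisect_right(sample, h)
        for i in range(lo, hi + 1):
            cells[i].append(h)
    return {"sample": sample, "children": [build(c) for c in cells]}

def locate(node, q):
    """Return (lo, hi): the nearest input values with lo < q <= hi (±inf at the ends)."""
    if "leaf" in node:
        pts = node["leaf"]
        i = bisect.bisect_left(pts, q)
        return (pts[i - 1] if i > 0 else float("-inf"),
                pts[i] if i < len(pts) else float("inf"))
    return locate(node["children"][bisect.bisect_left(node["sample"], q)], q)

if __name__ == "__main__":
    random.seed(0)
    pts = [random.uniform(0, 1000) for _ in range(10_000)]
    tree = build(pts)
    print(locate(tree, 500.0))  # the two input values bracketing the query
    assert locate(tree, 500.0) == (max(p for p in pts if p < 500.0),
                                   min(p for p in pts if p >= 500.0))
```

In the paper's setting the cells are regions of a hyperplane arrangement in d dimensions rather than intervals, and the conflict list of a cell holds the hyperplanes crossing it, but the sample-partition-recurse pattern is the same.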


Citations
Book ChapterDOI
01 Jan 2000
TL;DR: This work describes general methods for designing deterministic parallel algorithms in computational geometry and focuses on techniques for shared-memory parallel machines.
Abstract: We describe general methods for designing deterministic parallel algorithms in computational geometry. We focus on techniques for shared-memory parallel machines, which we describe and illustrate with examples. We also discuss some open problems in this area.

16 citations

Journal ArticleDOI
TL;DR: For planar convex sets the expected number of vertices of the convex hull of random points is increasing in n; in higher dimensions, the expected number of facets of the convex hull of points distributed uniformly and independently in a smooth compact convex body is asymptotically increasing.
Abstract: Let $K$ be a compact convex body in ${\mathbb R}^d$, let $K_n$ be the convex hull of $n$ points chosen uniformly and independently in $K$, and let $f_{i}(K_n)$ denote the number of $i$-dimensional faces of $K_n$. We show that for planar convex sets, $E[f_0 (K_n)]$ is increasing in $n$. In dimension $d \geq 3$ we prove that if $\lim_{n \to \infty} \frac{E[f_{d-1}(K_n)]}{An^c}=1$ for some constants $A$ and $c>0$ then the function $n \mapsto E[f_{d-1}(K_n)]$ is increasing for $n$ large enough. In particular, the number of facets of the convex hull of $n$ random points distributed uniformly and independently in a smooth compact convex body is asymptotically increasing. Our proof relies on a random sampling argument.
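The monotonicity claim is easy to probe numerically. The snippet below is a quick Monte Carlo check, not the paper's random sampling argument: it estimates E[f_0(K_n)] for points drawn uniformly in a disk, using a small convex hull routine; the sample sizes and trial count are arbitrary choices.

```python
import math
import random

def hull_vertices(points):
    """Number of vertices of the convex hull (Andrew's monotone chain)."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return len(pts)
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    def half(seq):
        chain = []
        for p in seq:
            while len(chain) >= 2 and cross(chain[-2], chain[-1], p) <= 0:
                chain.pop()
            chain.append(p)
        return chain[:-1]          # drop the endpoint shared with the other chain
    return len(half(pts)) + len(half(reversed(pts)))

def random_point_in_disk():
    r, t = math.sqrt(random.random()), random.uniform(0, 2 * math.pi)
    return (r * math.cos(t), r * math.sin(t))

def expected_f0(n, trials=200):
    """Monte Carlo estimate of the expected number of hull vertices of n points."""
    return sum(hull_vertices([random_point_in_disk() for _ in range(n)])
               for _ in range(trials)) / trials

if __name__ == "__main__":
    for n in (10, 100, 1000):
        print(n, round(expected_f0(n), 2))   # estimates grow with n
```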

15 citations

Proceedings ArticleDOI
Noga Alon1
23 Oct 2010
TL;DR: It is shown that the minimum possible size of an ε-net for point objects and line (or rectangle) ranges in the plane is (slightly) bigger than linear in 1/ε.
Abstract: We show that the minimum possible size of an ε-net for point objects and line (or rectangle) ranges in the plane is (slightly) bigger than linear in 1/ε. This settles a problem raised by Matousek, Seidel and Welzl in 1990.
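For context, the sketch below spells out the ε-net property the lower bound is about, for a finite point set and a supplied list of axis-parallel rectangle ranges (a true ε-net must hit every heavy range, not just a sampled list). The function names, the rectangle encoding, and the net size used in the demo are mine; the net size is just the standard random-sample heuristic, not a construction from the cited paper.

```python
import random

def in_rect(p, rect):
    """Membership in an axis-parallel rectangle range given as (xmin, xmax, ymin, ymax)."""
    (x, y), (x0, x1, y0, y1) = p, rect
    return x0 <= x <= x1 and y0 <= y <= y1

def is_epsilon_net(points, rects, net, eps):
    """Check that net hits every supplied rectangle containing more than eps*|points| points."""
    n = len(points)
    for rect in rects:
        heavy = sum(in_rect(p, rect) for p in points) > eps * n
        if heavy and not any(in_rect(q, rect) for q in net):
            return False
    return True

if __name__ == "__main__":
    random.seed(0)
    pts = [(random.random(), random.random()) for _ in range(1000)]
    rects = [tuple(sorted((random.random(), random.random()))) +
             tuple(sorted((random.random(), random.random())))
             for _ in range(500)]
    # A random sample of size about (d/eps) * log(1/eps) is an eps-net with good
    # probability; the cited result says that for some range families this cannot
    # be pushed all the way down to O(1/eps).
    net = random.sample(pts, 100)
    print(is_epsilon_net(pts, rects, net, eps=0.1))
```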

15 citations


Cites background from "New applications of random sampling..."

  • ...A linear (in 1/ε) upper bound for the size of ε-nets has been established for several special geometric cases, such as point objects and halfspace ranges in two and three dimensions, and point objects and disk or pseudo-disk ranges in the plane; see [11], [1], [28], [10], [24], [22] and the survey [14] for some earlier results on the subject....


Journal ArticleDOI
TL;DR: An improved variant of K-nearest neighborhood (KNN) rule is proposed, aimed at ensuring sensitivity of data for critical applications and enhancing classification accuracy.
Abstract: There is currently a great need for research on gene expression data to help with cancer classification in the field of oncogenomics, especially since the disease occurs sporadically and often does not show symptoms. Typically, gene expression data is disproportionate, with a large number of features and a small number of samples. A small sample size is likely to adversely affect classification accuracy, as the performance of a classifier depends largely on the data, so there is a pressing need to generate data that can serve as better input to classifiers. Primitive augmentation techniques such as uniform random generation and addition of noise do not assure a good probability distribution. Moreover, since critical applications are involved, the augmented data needs to closely resemble the original values. We therefore propose an improved variant of the K-nearest neighborhood (KNN) rule. We use a Counting Quotient Filter, Euclidean distance, and the mean best value from the k neighbors of each target sample to generate synthetic samples. A comparison is drawn among the raw data from the public domain (original data), data generated using the standard K-nearest neighbor rule, and data generated using the improved K-nearest neighbor rule. The data generated through these approaches is then classified using state-of-the-art classifiers such as SVM, J48 and DNN. The samples generated through the improved technique yield better recall values than the standard implementation, ensuring sensitivity of the data. Average classification accuracy across the three classifiers shows an enhancement of 7.72% compared to the traditional KNN approach, and of 16% when raw data is given as input to the classifiers. Thus, the proposed algorithm attains two objectives: ensuring sensitivity of data for critical applications, and enhancing classification accuracy.
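As a rough illustration of the mean-of-neighbors idea described in the abstract, the sketch below generates one synthetic sample per original sample by averaging its k nearest neighbors under Euclidean distance. The paper's Counting Quotient Filter and "best value" selection are not reproduced here, and all names and parameters are illustrative.

```python
import numpy as np

def knn_synthetic_samples(X, k=5):
    """For each row of X, average its k nearest neighbors (Euclidean distance)
    to produce one synthetic row. Generic mean-of-neighbors augmentation only;
    the cited paper's refinements are not reproduced."""
    # Pairwise squared Euclidean distances between samples.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)           # exclude each sample from its own neighbor list
    nn = np.argsort(d2, axis=1)[:, :k]     # indices of the k nearest neighbors per sample
    return X[nn].mean(axis=1)              # one synthetic row per original row

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(60, 2000))        # few samples, many features, as in gene expression data
    X_aug = np.vstack([X, knn_synthetic_samples(X, k=5)])
    print(X.shape, "->", X_aug.shape)      # (60, 2000) -> (120, 2000)
```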

15 citations

Proceedings ArticleDOI
Peyman Afshani1
17 Jun 2012
TL;DR: New techniques are developed that lead to new and improved lower bounds for simplex range reporting as well as some other geometric problems, and a new framework for proving lower bounds in the external memory model is offered.
Abstract: We investigate one of the fundamental areas in computational geometry: lower bounds for range reporting problems in the pointer machine and the external memory models. We develop new techniques that lead to new and improved lower bounds for simplex range reporting as well as some other geometric problems. Simplex range reporting is the problem of storing n points in a data structure such that the k points that lie inside a query simplex can be reported efficiently. This is one of the fundamental and extensively studied problems in computational geometry. Currently, the best data structures for the problem achieve Q(n) + O(k) query time using Õ((n/Q(n))^d) space, in which the Õ(·) notation hides either a polylogarithmic or an n^ε factor for any constant ε > 0 (depending on the data structure and Q(n)). The best lower bound on this problem is due to Chazelle and Rosenberg, who proved a space lower bound of Ω(n^{d−ε−dγ}) for pointer machine data structures that can answer queries in O(n^γ + k) time. For data structures with Q(n) + O(k) query time, we improve the space lower bound to Ω((n/Q(n))^d / 2^{O(√(log Q(n)))}). Not only does this reduce the overhead from polynomial to sub-polynomial, it also offers a smooth trade-off curve. For instance, for polylogarithmic values of Q(n), our lower bound is within an o(log n) factor of the conjectured trade-off curve. By a simple geometric transformation, we also improve the best lower bounds for the halfspace range reporting problem. Furthermore, we also study the external memory model and offer a new framework for proving lower bounds in this model. For the first time we show that answering simplex range reporting queries with Q(n) + k/B I/Os requires Ω(B (n/(B Q(n)))^d / 2^{O(√(log Q(n)))}) space, in which B is the block size.
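For readers unfamiliar with the problem being bounded, the sketch below is the trivial O(n)-time, linear-space baseline for simplex (here, triangle) range reporting in the plane; the data structures and lower bounds discussed above concern how much better one can do with preprocessing. The function names are mine.

```python
import random

def inside_triangle(p, a, b, c):
    """Point-in-triangle test via signs of cross products (boundary counts as inside)."""
    def cross(o, u, v):
        return (u[0] - o[0]) * (v[1] - o[1]) - (u[1] - o[1]) * (v[0] - o[0])
    s1, s2, s3 = cross(a, b, p), cross(b, c, p), cross(c, a, p)
    return (s1 >= 0 and s2 >= 0 and s3 >= 0) or (s1 <= 0 and s2 <= 0 and s3 <= 0)

def simplex_report(points, a, b, c):
    """Naive simplex range reporting: scan all points and keep those inside the triangle."""
    return [p for p in points if inside_triangle(p, a, b, c)]

if __name__ == "__main__":
    random.seed(1)
    pts = [(random.random(), random.random()) for _ in range(10_000)]
    print(len(simplex_report(pts, (0, 0), (1, 0), (0, 1))))  # roughly half the unit square
```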

15 citations


Cites background from "New applications of random sampling..."

  • ...At the other side of the spectrum, data structures with polylogarithmic query time were obtained through cuttings [12, 16, 11]....


References
Book
01 Jan 1968

17,939 citations

01 Jan 1985
TL;DR: This book offers a coherent treatment, at the graduate textbook level, of the field that has come to be known in the last decade or so as computational geometry.
Abstract: From the reviews: "This book offers a coherent treatment, at the graduate textbook level, of the field that has come to be known in the last decade or so as computational geometry...The book is well organized and lucidly written; a timely contribution by two founders of the field. It clearly demonstrates that computational geometry in the plane is now a fairly well-understood branch of computer science and mathematics. It also points the way to the solution of the more challenging problems in dimensions higher than two."

6,525 citations


"New applications of random sampling..." refers background in this paper

  • ...(In fact the mapping γ is not unique in this regard: see [13, 23, 2]....


Book ChapterDOI
TL;DR: This chapter reproduces the English translation by B. Seckler of the paper by Vapnik and Chervonenkis in which they gave proofs for the innovative results they had obtained in a draft form in July 1966 and announced in 1968 in their note in Soviet Mathematics Doklady.
Abstract: This chapter reproduces the English translation by B. Seckler of the paper by Vapnik and Chervonenkis in which they gave proofs for the innovative results they had obtained in a draft form in July 1966 and announced in 1968 in their note in Soviet Mathematics Doklady. The paper was first published in Russian as Vapnik, V. N. and Chervonenkis, A. Ya., "On the uniform convergence of relative frequencies of events to their probabilities," Teoriya Veroyatnostei i ee Primeneniya 16(2), 264–279 (1971).
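The point Clarkson draws on, that one random sample can estimate many probabilities simultaneously, can be illustrated with a tiny Monte Carlo check; the event family (half-lines), sample size, and grid below are arbitrary choices for illustration only.

```python
import random

# One sample from U(0,1) simultaneously estimates P(X <= t) for every threshold t:
# the worst-case deviation over the whole family of half-line events stays small.
random.seed(0)
sample = [random.random() for _ in range(2000)]
worst = max(abs(sum(x <= t for x in sample) / len(sample) - t)
            for t in [i / 100 for i in range(101)])
print(f"max deviation over 101 half-line events: {worst:.3f}")
```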

3,939 citations


"New applications of random sampling..." refers background in this paper

  • ...Vapnik and Chervonenkis [27] have derived general conditions under which several probabilities may be uniformly estimated using one random sample....


Book
01 Jan 1978

3,419 citations