
Showing papers on "Computational geometry" published in 1986


Journal ArticleDOI
TL;DR: This work introduces a new technique for solving problems of the following form: preprocess a set of objects so that those satisfying a given property with respect to a query object can be listed very effectively.
Abstract: We introduce a new technique for solving problems of the following form: preprocess a set of objects so that those satisfying a given property with respect to a query object can be listed very effectively. Well-known problems that fall into this category include range search, point enclosure, intersection, and near-neighbor problems. The approach which we take is very general and rests on a new concept called filtering search. We show on a number of examples how it can be used to improve the complexity of known algorithms and simplify their implementations as well. In particular, filtering search allows us to improve on the worst-case complexity of the best algorithms known so far for solving the problems mentioned above.
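
To give a feel for the output-sensitive flavour of filtering search, here is a deliberately simple one-dimensional reporting structure (not one of the paper's data structures): preprocessing sorts the points, and a query pays O(log n) to locate the range plus O(1) per reported answer, so the cost adapts to the output size.

```python
import bisect

class RangeReporter:
    """Toy 1-D reporting structure: sort once, then report all points in
    [lo, hi] in O(log n + k) time, k being the number of answers.
    Illustrative only -- not Chazelle's filtering-search structures."""

    def __init__(self, points):
        self.pts = sorted(points)

    def report(self, lo, hi):
        i = bisect.bisect_left(self.pts, lo)   # O(log n) to find the start
        out = []
        while i < len(self.pts) and self.pts[i] <= hi:
            out.append(self.pts[i])            # O(1) per reported point
            i += 1
        return out

rr = RangeReporter([7, 1, 9, 4, 12, 3])
print(rr.report(3, 9))   # [3, 4, 7, 9]
```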

318 citations


Book
01 Jun 1986
TL;DR: This paper presents some of this theoretical robotics research, emphasizing work relating to the geometric aspects of robot motion planning.
Abstract: Robotics has come to attract the attention of mathematicians and theoretical computer scientists to a rapidly increasing degree. Initial investigations have shown that robotics is a rich source of deep theoretical problems, which range over computational geometry, control theory, and many aspects of physics, and whose solutions draw upon methods developed in subjects as diverse as automata theory, algebraic topology, and Fourier analysis. Also presented is some of this theoretical robotics research, emphasizing work relating to the geometric aspects of robot motion planning.

217 citations


Journal ArticleDOI
TL;DR: This paper considers the problem of approximating a piecewise linear curve by another whose vertices are a subset of the vertices of the former, and shows that an optimum solution of this problem can be found in polynomial time.
Abstract: In cartography, computer graphics, pattern recognition, etc., we often encounter the problem of approximating a given finer piecewise linear curve by another coarser piecewise linear curve consisting of fewer line segments. In connection with this problem, a number of papers have been published, but it seems that the problem itself has not been well modelled from the standpoint of specific applications, nor has a nice algorithm, nice from the computational-geometric viewpoint, been proposed. In the present paper, we first consider (i) the problem of approximating a piecewise linear curve by another whose vertices are a subset of the vertices of the former, and show that an optimum solution of this problem can be found in polynomial time. We also mention recent results on related problems by several researchers including the authors themselves. We then pose (ii) a problem of covering a sequence of n points by a minimum number of rectangles with a given width, and present an O(n log n)-time algorithm by making use of some fundamental established techniques in computational geometry. Furthermore, an O(mn(log n)^2)-time algorithm is presented for finding the minimum width w such that a sequence of n points can be covered by at most m rectangles with width w. Finally, (iii) several related problems are discussed.
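
A minimal sketch of the vertex-subset model in (i), under the common min-# formulation: keep as few of the original vertices as possible (endpoints included) subject to a tolerance eps. This is a plain O(n^3) dynamic program for illustration only, not the paper's algorithm; the tolerance parameter eps is an assumption of the sketch.

```python
import math

def point_segment_distance(p, a, b):
    # Euclidean distance from point p to segment ab.
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return math.hypot(px - ax, py - ay)
    t = ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)
    t = max(0.0, min(1.0, t))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def simplify_min_vertices(pts, eps):
    """Approximate the polyline pts by one using the fewest vertices drawn
    from pts, so that every skipped vertex lies within eps of its shortcut."""
    n = len(pts)
    # ok[i][j]: the shortcut from vertex i to vertex j respects the tolerance.
    ok = [[False] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            ok[i][j] = all(point_segment_distance(pts[k], pts[i], pts[j]) <= eps
                           for k in range(i + 1, j))
    # Fewest edges = shortest path from vertex 0 to vertex n-1 in the shortcut graph.
    INF = float("inf")
    cost, prev = [INF] * n, [-1] * n
    cost[0] = 0
    for j in range(1, n):
        for i in range(j):
            if ok[i][j] and cost[i] + 1 < cost[j]:
                cost[j], prev[j] = cost[i] + 1, i
    path, v = [], n - 1
    while v != -1:
        path.append(v)
        v = prev[v]
    return [pts[v] for v in reversed(path)]

poly = [(0, 0), (1, 0.1), (2, -0.1), (3, 0), (4, 2)]
print(simplify_min_vertices(poly, eps=0.2))   # keeps (0,0), (3,0), (4,2)
```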

190 citations


Proceedings ArticleDOI
27 Oct 1986
TL;DR: This paper formulates criteria for a satisfactory solution to the discrete version of the problem of finding all intersections of a collection of line segments, and designs an interface between the continuous domain and the discrete domain which supports certain invariants.
Abstract: Geometric algorithms are usually designed with continuous parameters in mind. When the underlying geometric space is intrinsically discrete, as is the case for computer graphics problems, such algorithms are apt to give invalid solutions if properties of a finite-resolution space are not taken into account. In this paper we discuss an approach for transforming geometric concepts and algorithms from the continuous domain to the discrete domain. As an example we consider the discrete version of the problem of finding all intersections of a collection of line segments. We formulate criteria for a satisfactory solution to this problem, and design an interface between the continuous domain and the discrete domain which supports certain invariants. This interface enables us to obtain a satisfactory solution by using plane-sweep and a variant of the continued fraction algorithm.
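
One ingredient of any continuous-to-discrete interface of this kind is computing segment intersections exactly and only then committing them to the finite-resolution grid. A small sketch using exact rational arithmetic (Python's fractions); the grid-snapping rule at the end is chosen purely for illustration and is not the paper's rounding scheme.

```python
from fractions import Fraction

def exact_intersection(p, q, r, s):
    """Exact intersection of segments pq and rs with integer endpoints,
    returned as a pair of Fractions, or None if they do not cross."""
    (x1, y1), (x2, y2), (x3, y3), (x4, y4) = p, q, r, s
    d = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    if d == 0:
        return None                                   # parallel or collinear
    t = Fraction((x1 - x3) * (y3 - y4) - (y1 - y3) * (x3 - x4), d)
    u = Fraction((x1 - x3) * (y1 - y2) - (y1 - y3) * (x1 - x2), d)
    if not (0 <= t <= 1 and 0 <= u <= 1):
        return None                                   # lines cross outside the segments
    return (x1 + t * (x2 - x1), y1 + t * (y2 - y1))

def snap(point):
    # Round the exact intersection to the nearest integer grid point.
    return tuple(round(c) for c in point)

p = exact_intersection((0, 0), (7, 3), (0, 3), (7, 0))
print(p, snap(p))   # (Fraction(7, 2), Fraction(3, 2)) -> (4, 2)
```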

184 citations


Journal ArticleDOI
TL;DR: An algorithm of time-complexity $O(3^{(d + 2)^2} n)$ is derived for this problem, improving the best previous bound even in the case $d = 2$.
Abstract: The paper is divided into two main sections. The first deals with a multidimensional search technique of Megiddo [J. Assoc. Comput. Mach., 31 (1984), pp. 114–127], and suggests an improvement. The second gives an application of the technique to the Euclidean one-centre problem in $\mathbb{R}^d $. An algorithm of time-complexity $O(3^{(d + 2)^2 } n)$ is derived for this problem. This improves the best previous bound even in the case $d = 2$.
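
For comparison, the planar, unweighted version of the 1-centre problem (the smallest enclosing circle) can also be solved by a short randomized incremental method in the style of Welzl; the sketch below is that simpler alternative, not the Megiddo-style multidimensional search technique the paper improves.

```python
import math
import random

def circle_from_two(a, b):
    cx, cy = (a[0] + b[0]) / 2.0, (a[1] + b[1]) / 2.0
    return (cx, cy, math.dist(a, b) / 2.0)

def circumcircle(a, b, c):
    ax, ay = a; bx, by = b; cx, cy = c
    d = 2.0 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    if d == 0:
        return None                                   # collinear points
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    return (ux, uy, math.dist((ux, uy), a))

def inside(circ, p, eps=1e-9):
    return circ is not None and math.dist((circ[0], circ[1]), p) <= circ[2] + eps

def smallest_enclosing_circle(points, seed=0):
    """Randomized incremental smallest enclosing circle (Welzl-style).
    Assumes general position: no three collinear points end up defining the circle."""
    pts = list(points)
    random.Random(seed).shuffle(pts)
    circ = None
    for i, p in enumerate(pts):
        if inside(circ, p):
            continue
        circ = (p[0], p[1], 0.0)
        for j, q in enumerate(pts[:i]):
            if inside(circ, q):
                continue
            circ = circle_from_two(p, q)
            for r in pts[:j]:
                if not inside(circ, r):
                    c = circumcircle(p, q, r)
                    if c is not None:
                        circ = c
    return circ

print(smallest_enclosing_circle([(0, 0), (4, 0), (2, 3), (2, 1)]))
# -> roughly (2.0, 0.833, 2.167)
```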

178 citations


Journal ArticleDOI
TL;DR: This paper describes an O(n)-time algorithm for recognizing and sorting Jordan sequences that uses level-linked search trees and a reduction of the recognition and sorting problem to a list-splitting problem.
Abstract: For a Jordan curve C in the plane nowhere tangent to the x axis, let x1, x2,…, xn be the abscissas of the intersection points of C with the x axis, listed in the order the points occur on C. We call x1, x2,…, xn a Jordan sequence. In this paper we describe an O(n)-time algorithm for recognizing and sorting Jordan sequences. The problem of sorting such sequences arises in computational geometry and computational geography. Our algorithm is based on a reduction of the recognition and sorting problem to a list-splitting problem. To solve the list-splitting problem we use level-linked search trees.
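
The recognition half of the problem has a compact characterization: the arcs of the curve strictly above the axis and those strictly below must each form a non-crossing (laminar) family of intervals. The sketch below checks exactly that, but in O(n log n) time by sorting; the paper's contribution is doing recognition and sorting in linear time, which this toy does not attempt.

```python
def is_jordan_sequence(xs):
    """xs lists the crossing abscissas in the order they occur along a closed
    curve that meets the x axis transversally. The sequence is realizable by a
    simple closed curve iff the arcs above the axis and the arcs below it each
    form a non-crossing family of intervals."""
    n = len(xs)
    if n % 2 != 0 or len(set(xs)) != n:
        return False
    arcs = [tuple(sorted((xs[i], xs[(i + 1) % n]))) for i in range(n)]
    above = arcs[0::2]        # arcs on one side of the axis
    below = arcs[1::2]        # arcs on the other side (includes the closing arc)
    return _laminar(above) and _laminar(below)

def _laminar(intervals):
    # Any two intervals must be disjoint or nested; partial overlap means a crossing.
    stack = []
    for lo, hi in sorted(intervals, key=lambda iv: (iv[0], -iv[1])):
        while stack and stack[-1] < lo:
            stack.pop()
        if stack and stack[-1] < hi:
            return False      # interleaving arcs -> the curve would self-intersect
        stack.append(hi)
    return True

print(is_jordan_sequence([1, 4, 2, 3]))   # True
print(is_jordan_sequence([1, 3, 2, 4]))   # False
```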

129 citations


Journal ArticleDOI
TL;DR: An Ω(n log n) lower bound is proved for these problems under appropriate models of computation for a set of n demand points with weights Wi, i = 1, 2, ..., n, in the plane.

Abstract: Given a set of n demand points with weights Wi, i = 1, 2, ..., n, in the plane, we consider several geometric facility location problems. Specifically we study the complexity of the Euclidean 1-line center problem, the discrete 1-point center problem and a competitive location problem. The Euclidean 1-line center problem is to locate a line which minimizes the maximum weighted distance from the line (or the center) to the demand points. The discrete 1-point center problem is to locate one of the demand points so as to minimize the maximum unweighted distance from the point to the other demand points. The competitive location problem studied is to locate a new facility point to compete against an existing facility so that a certain objective function is optimized. An Ω(n log n) lower bound is proved for these problems under appropriate models of computation. Efficient algorithms that achieve the lower bound, as well as algorithms for other related problems, are also given.
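
For concreteness, the discrete 1-point center problem can be stated in a few lines of brute force; the quadratic-time sketch below is only a problem statement in code, far from the Θ(n log n) bounds the paper establishes.

```python
import math

def discrete_one_point_center(points):
    """Pick the demand point minimizing its maximum (unweighted) Euclidean
    distance to the other demand points -- O(n^2) brute force."""
    best_point, best_radius = None, float("inf")
    for p in points:
        radius = max(math.dist(p, q) for q in points)
        if radius < best_radius:
            best_point, best_radius = p, radius
    return best_point, best_radius

print(discrete_one_point_center([(0, 0), (4, 0), (2, 1), (2, 5)]))
```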

122 citations


Journal ArticleDOI
TL;DR: The z-buffer display algorithm operates directly on CSG, does not require explicit boundary data, and is easier to implement than ray casting, which may lead to machines simpler than those now being built for ray casting.
Abstract: Solid modelers based on constructive solid geometry (CSG) typically generate shaded displays directly from CSG by using ray-casting techniques, which do not require information on the faces, edges, and vertices that bound a solid. This article describes an alternative: a simple new algorithm based on a depth-buffering or z-buffering approach. The z-buffer display algorithm operates directly on CSG, does not require explicit boundary data, and is easier to implement than ray casting. Ray-casting and z-buffering algorithms have comparable performance, but z-buffering is often faster for objects with complex surfaces, because it avoids expensive curve/surface intersection calculations. Because of their simplicity, depth-buffering algorithms for CSG are well suited to hardware implementations, and may lead to machines simpler than those now being built for ray casting.
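
The core per-pixel operation is just a depth comparison. A toy illustration for CSG union only (intersection and difference need the extra passes the article describes); the synthetic "depth maps" here simply stand in for whatever a rasterizer would produce for each primitive.

```python
import numpy as np

W, H = 8, 6
FAR = np.inf

def patch_depth(x0, x1, y0, y1, depth):
    # A flat rectangular patch at constant depth; everything else is background.
    z = np.full((H, W), FAR)
    z[y0:y1, x0:x1] = depth
    return z

z_a = patch_depth(0, 5, 0, 4, depth=2.0)   # hypothetical primitive A
z_b = patch_depth(3, 8, 2, 6, depth=1.0)   # hypothetical primitive B

z_union = np.minimum(z_a, z_b)             # per-pixel depth test: keep the nearer surface
print(z_union)
```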

99 citations


Journal ArticleDOI
TL;DR: A method using techniques of computational geometry for triangular mesh generation for regions with complicated polygonal boundaries in the plane is presented, which can be extended to provide additional control of the triangulation by a mesh distribution function.
Abstract: A method using techniques of computational geometry for triangular mesh generation for regions with complicated polygonal boundaries in the plane is presented. The input to the method includes the desired number of triangles and a mesh smoothness parameter to be specified, as well as the polygonal curves of the region's boundary and, possibly, internal interfaces. The triangulation generated conforms to the length scales of the edges of the boundary curves, but the method can be extended to provide additional control of the triangulation by a mesh distribution function. The region is decomposed into convex subregions in two stages, such that triangles of one length scale can be generated in each subregion. This decomposition uses algorithms which run in times that are linear in the number of vertices of the input polygons. Details of two major computational experiments are provided.

77 citations


Journal ArticleDOI
TL;DR: An asymptotically optimal algorithm to locate all the axes of mirror symmetry of a planar point set is presented by reducing the 2-D symmetry problem to linear pattern-matching.

62 citations


Journal ArticleDOI
TL;DR: A table-driven algorithm for drawing a variety of space-filling curves is presented and a method for discovering new curves is described.
Abstract: A table-driven algorithm for drawing a variety of space-filling curves is presented. A method for discovering new curves is described. Numerous examples are shown.
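
As a concrete example of drawing one space-filling curve by index arithmetic, here is the standard Hilbert-curve index-to-coordinate conversion. It is a fixed rule-based routine for a single curve, not the paper's general table-driven framework for many curves.

```python
def hilbert_d2xy(order, d):
    """Map index d along the Hilbert curve of the given order to (x, y)
    coordinates on a 2**order x 2**order grid."""
    x = y = 0
    s = 1
    t = d
    while s < (1 << order):
        rx = 1 & (t >> 1)
        ry = 1 & (t ^ rx)
        if ry == 0:                      # rotate/reflect the quadrant
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t >>= 2
        s <<= 1
    return x, y

# Successive indices visit adjacent grid cells, which is what makes the
# curve useful for drawing and for locality-preserving orderings.
points = [hilbert_d2xy(3, d) for d in range(64)]
print(points[:6])
```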

Journal ArticleDOI
TL;DR: This tutorial examines resultants, curve implicitization, curve inversion, and curve intersection, using classical algebraic geometry, which deals strictly with algorithms, rather than modern algebraic geometry, whose abstractions are far removed from the algorithmic nature of computer-aided design.
Abstract: Classical algebraic geometry has been virtually ignored in computer-aided geometric design. However, because it deals strictly with algorithms, it is really more suited to this field than is modern algebraic geometry, which introduces abstractions far removed from the algorithmic nature of computer-aided design. This tutorial examines resultants, curve implicitization, curve inversion, and curve intersection. Discussion follows a series of examples simple enough for those with only a modest algebra background to follow.
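
The resultant-based operations the tutorial covers are easy to experiment with in a computer algebra system. A small sketch using SymPy (not code from the tutorial): implicitizing a parametric curve, and intersecting two implicit curves, by eliminating a variable with a resultant.

```python
from sympy import symbols, resultant, factor

t, x, y = symbols("t x y")

# Implicitization: eliminate t from x = t**2, y = t**3. The resultant of
# (x - t**2) and (y - t**3) with respect to t vanishes exactly on the curve.
f = x - t**2
g = y - t**3
print(factor(resultant(f, g, t)))      # a constant multiple of x**3 - y**2

# Curve intersection in the same spirit: eliminate y between two implicit
# curves to get a univariate polynomial whose roots are the intersection x's.
c1 = x**2 + y**2 - 1                   # unit circle
c2 = y - x                             # the line y = x
print(factor(resultant(c1, c2, y)))    # 2*x**2 - 1
```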

Journal ArticleDOI
TL;DR: A number of algorithms are presented for obtaining power series expansions of curves and surfaces at a point by means of truncated series, and some results on the radius of convergence are given.
Abstract: A number of algorithms are presented for obtaining power series expansions of curves and surfaces at a point. Some results on the radius of convergence are given. Two applications of series are given: (1) in curve tracing algorithms, where a truncated series is used to approximate the curve of intersection of two surfaces, and (2) to define nth-degree geometric continuity for arbitrary n.
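
A toy of the first application, using SymPy: a truncated power series serving as a local approximation to a curve branch (here one branch of a circle; the paper's algorithms handle general curve/surface intersections, which this example does not).

```python
from sympy import symbols, series, sqrt

x = symbols("x")

# One branch of x**2 + y**2 = 1 near the point (0, 1), expanded to degree 7.
y_branch = sqrt(1 - x**2)
approx = series(y_branch, x, 0, 8)
print(approx)   # 1 - x**2/2 - x**4/8 - x**6/16 + O(x**8)
```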

Proceedings ArticleDOI
Kenneth L. Clarkson
01 Nov 1986
TL;DR: This paper gives several new demonstrations of the usefulness of random sampling techniques in computational geometry by creating a search structure for arrangements of hyperplanes by sampling the hyperplanes and using information from the resulting arrangement to divide and conquer.
Abstract: This paper gives several new demonstrations of the usefulness of random sampling techniques in computational geometry. One new algorithm creates a search structure for arrangements of hyperplanes by sampling the hyperplanes and using information from the resulting arrangement to divide and conquer. This algorithm requires randomized O(s^(d+ε)) preprocessing time to build a search structure for an arrangement of s hyperplanes in d dimensions. The structure has a query time that is worst-case O(log s). (The bound holds for any fixed ε > 0, with the constant factors dependent on d and ε.) Using point-plane duality, the algorithm may be used for answering halfspace range queries. Another algorithm finds random samples of simplices to determine the separation distance of two polytopes. The algorithm uses randomized O(n^⌊d/2⌋) time, where n is the total number of vertices of the two polytopes. This matches previous results [DK85] for the case d = 3 and extends them. Another algorithm samples points in the plane to determine their order-k Voronoi diagram, and requires randomized O(s^(1+ε) k) time for s points. This sharpens the bound O(sk^2 log s) for Lee's algorithm [Lee82], and O(s^2 log s + s(s−k) log^2 s) for Chazelle and Edelsbrunner's algorithm [CE85]. Finally, random sampling is used to show that any set of s points in E^3 has O(sk^2 log^9 s / (log log s)^6) distinct j-sets with j < k. (For S ⊂ E^d, a set S' ⊂ S with |S'| = j is a j-set of S if there is a halfspace h+ with S' = S ∩ h+.) This sharpens with respect to k the previous bound O(sk^5) [CP85]. The proof of the bound given here is an instance of a "probabilistic method" [ES74]. The use of random sampling to divide and conquer is quite old: the partitioning step of quicksort may be viewed as an example. This paper describes several new applications of this technique. Searching arrangements. Given a set …
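
The quicksort analogy mentioned above is easy to make concrete: draw a random sample, sort only the sample, and split the input into the slabs it defines; with high probability no slab is much larger than about (n/r) log r, which is what makes recursing on the slabs cheap. A toy numeric illustration (not any of the paper's algorithms):

```python
import random
from bisect import bisect_left

def sample_partition(items, r, seed=0):
    """Random-sampling divide and conquer: sort an r-element sample and
    distribute the n items into the r+1 slabs the sample defines."""
    rng = random.Random(seed)
    sample = sorted(rng.sample(items, r))
    slabs = [[] for _ in range(r + 1)]
    for v in items:
        slabs[bisect_left(sample, v)].append(v)
    return sample, slabs

items = list(range(10000))
random.Random(1).shuffle(items)
sample, slabs = sample_partition(items, 64)
print(max(len(s) for s in slabs))   # typically a few hundred, not ~10000
```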

Journal ArticleDOI
01 Mar 1986
TL;DR: An algorithm that finds the externally visible vertices of a polygon is described. It generates a new geometric construction, termed the convex ropes of each visible vertex, and is useful in image interpretation and graphics, where efficient computation of visible points is important.
Abstract: An algorithm that finds the externally visible vertices of a polygon is described. This algorithm generates a new geometric construction, termed the convex ropes of each visible vertex. The convex ropes give the range of angles from which each vertex is visible, and they give all the pairs of vertices which are reachable by a straight robot finger. All of the convex ropes can be found in expected time O(n), where n is the number of vertices of the polygon. We discuss the application of this geometric construction to automated grasp planning. The algorithm may also be useful in image interpretation and graphics, where efficient computation of visible points is important. The direct application of the algorithm is restricted to two dimensions, since a sequential ordering of the vertices is required. Extension to three dimensions would rely on well-chosen intersecting or projective planes.

Proceedings ArticleDOI
27 Oct 1986
TL;DR: Efficient algorithms are presented for preprocessing a 2-D polyhedral terrain to support fast ray shooting queries from a fixed point, and for determining whether two disjoint interlocking simple polygons can be separated from one another by a sequence of translations.
Abstract: We present efficient algorithms for the following geometric problems: (i) Preprocessing of a 2-D polyhedral terrain so as to support fast ray shooting queries from a fixed point. (ii) Determining whether two disjoint interlocking simple polygons can be separated from one another by a sequence of translations. (iii) Determining whether a given convex polygon can be translated and rotated so as to fit into another given polygonal region. (iv) Motion planning for a convex polygon in the plane amidst polygonal barriers. All our algorithms make use of Davenport-Schinzel sequences and of some generalizations of them; these sequences are a powerful combinatorial tool applicable in contexts which involve the calculation of the pointwise maximum or minimum of a collection of functions.

Journal ArticleDOI
TL;DR: The algorithms can be generalized to solve the d-dimensional maximal elements and ECDF searching problems in O(n^(1/2) + log^(2(d−2)) n) time (d > 2).

Journal ArticleDOI
TL;DR: This work presents efficient algorithms for computing union-of-rectangle representations of derived sets (union, intersection, complement) and for conversion between the union of rectangles and other representations of a subset.
Abstract: The digital medial axis transform (MAT) represents an image subset S as the union of maximal upright squares contained in S. Brute-force algorithms for computing geometric properties of S from its MAT require time O(n^2), where n is the number of squares. Over the past few years, however, algorithms have been developed that compute properties for a union of upright rectangles in time O(n log n), which makes the use of the MAT much more attractive. We review these algorithms and also present efficient algorithms for computing union-of-rectangle representations of derived sets (union, intersection, complement) and for conversion between the union of rectangles and other representations of a subset.
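
As a baseline for the kind of quantity these algorithms compute, here is the area of a union of upright rectangles by plain coordinate compression, a cubic-time brute force rather than the O(n log n) methods the paper reviews.

```python
def union_area(rects):
    """Area of a union of axis-aligned rectangles (x1, y1, x2, y2)."""
    xs = sorted({x for r in rects for x in (r[0], r[2])})
    ys = sorted({y for r in rects for y in (r[1], r[3])})
    area = 0
    for i in range(len(xs) - 1):
        for j in range(len(ys) - 1):
            cx = (xs[i] + xs[i + 1]) / 2          # cell centre
            cy = (ys[j] + ys[j + 1]) / 2
            if any(r[0] <= cx <= r[2] and r[1] <= cy <= r[3] for r in rects):
                area += (xs[i + 1] - xs[i]) * (ys[j + 1] - ys[j])
    return area

# Two overlapping 2x2 squares: 4 + 4 - 1 = 7
print(union_area([(0, 0, 2, 2), (1, 1, 3, 3)]))
```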

Book
01 Jan 1986
TL;DR: This dissertation presents algorithms that support persistent search trees, with applications in computational geometry, and a general result is shown that allows making arbitrary ephemeral data structures partially persistent with an O(1) space overhead per update operation.
Abstract: This dissertation introduces the concept of persistence in data structures. Classical algorithms operate on data structures in such a manner that modifications to the structure do not preserve its state as it appeared before the modification. A persistent data structure is one in which multiple versions of the structure as it varies through time are maintained. Data structures that do not maintain the history of states of the structure are called ephemeral. A differentiation between two types of persistence, partial persistence and full persistence, is made. A partially persistent data structure allows the modification only of the most recent version of the structure. This makes partial persistence useful in cases where the history of update operations is required for query purposes but no changes of prior versions are desired. Under certain constraints, any ephemeral data structure may be made persistent without a major blow-up of the space and time complexity measures. Full persistence allows modification of any version of the data structure. This dissertation presents algorithms that support persistent search trees, with applications in computational geometry. In particular, the planar point location problem will be solved using persistent binary search trees with an O(log n) query time and O(n) space. Persistent lists are described, with applications in applicative programming languages. In particular, persistent deques are presented that have constant space overhead per deque operation, while still maintaining O(1) update times. Persistent finger search trees are also presented, with applications in text editing. Persistent finger search trees are implemented with an O(log d) space overhead per update, and an O(log d) time bound, where d is the distance between the finger and the affected position. A general result is shown that allows making arbitrary ephemeral data structures partially persistent with an O(1) space overhead per update operation.
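
The simplest way to see persistence in action is path copying on an (unbalanced) binary search tree: each update copies only the search path and shares the rest, so every old root still describes its old version. This costs O(depth) extra space per update, which is weaker than the O(1)-overhead node-copying result described above; it illustrates the concept, not the dissertation's method.

```python
class Node:
    __slots__ = ("key", "left", "right")
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right

def insert(root, key):
    """Persistent insert by path copying: returns the root of a new version
    without modifying the old one."""
    if root is None:
        return Node(key)
    if key < root.key:
        return Node(root.key, insert(root.left, key), root.right)
    return Node(root.key, root.left, insert(root.right, key))

def contains(root, key):
    while root is not None:
        if key == root.key:
            return True
        root = root.left if key < root.key else root.right
    return False

versions = [None]                     # version 0 is the empty tree
for k in [5, 2, 8, 1]:
    versions.append(insert(versions[-1], k))

print(contains(versions[2], 8), contains(versions[4], 8))   # False True
```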

Book ChapterDOI
01 Jan 1986
TL;DR: It is shown that the "rotating calipers" can be used to obtain an extremely simple O(n log n) divide-and-conquer algorithm that neither sorts nor uses backtracking.
Abstract: Finding convex hulls, intersecting convex polygons, and triangulating sets of points are computational geometric problems that occur frequently in pattern recognition. In this paper we present new results on these three problems. Let S be a set of n points in the plane specified by their cartesian coordinates. Almost all O(n log n) convex hull algorithms published to date contain explicit sorting and/or backtracking as steps towards their final goal. We show here that the "rotating calipers" can be used to obtain an extremely simple O(n log n) divide-and-conquer algorithm that neither sorts nor uses backtracking. For the triangulation problem, an O(n^2 log n) algorithm was recently proposed for computing the "onion" triangulation of a set S. We show here that the "rotating calipers" can be used in conjunction with a result of Chazelle to obtain O(n log n) algorithms for computing the "onion" and "spiral" triangulations of S. Finally, let P and Q be two convex polygons with m and n vertices, respectively, which are specified by their cartesian coordinates in order. A simple O(m+n) algorithm is presented for computing the intersection of P and Q. Unlike previous algorithms, the new algorithm consists of a two-step combination of two simple procedures for finding convex hulls and triangulations of polygons.
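
A small taste of the rotating-calipers technique itself, applied to a classical use, the diameter of a convex polygon via antipodal pairs. This particular application is standard and is not one of the chapter's three results.

```python
def area2(a, b, c):
    # twice the signed area of triangle abc
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def dist(p, q):
    return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5

def diameter(hull):
    """Diameter of a convex polygon given in counter-clockwise order with no
    collinear vertices, via the rotating-calipers antipodal-pair scan."""
    n = len(hull)
    if n == 1:
        return 0.0
    if n == 2:
        return dist(hull[0], hull[1])
    best, j = 0.0, 1
    for i in range(n):
        ni = (i + 1) % n
        # advance the antipodal vertex while it moves farther from edge (i, ni)
        while area2(hull[i], hull[ni], hull[(j + 1) % n]) > area2(hull[i], hull[ni], hull[j]):
            j = (j + 1) % n
        best = max(best, dist(hull[i], hull[j]), dist(hull[ni], hull[j]))
    return best

# Unit square: the diameter is the diagonal, sqrt(2)
print(diameter([(0, 0), (1, 0), (1, 1), (0, 1)]))
```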

Journal ArticleDOI
TL;DR: This article describes the experiences of implementing programs to solve several problems in Prolog, including a subset of the Graphical Kernel System, planar graph traversal, recognition of groupings of objects, Boolean combinations of polygons using multiple precision rational numbers, and cartographic map overlay.
Abstract: Prolog is a useful tool for geometry and graphics implementations because its primitives, such as unification, match the requirements of many geometric algorithms. During the last two years, we have implemented programs to solve several problems in Prolog, including a subset of the Graphical Kernel System, convex-hull calculation, planar graph traversal, recognition of groupings of objects, Boolean combinations of polygons using multiple-precision rational numbers, and cartographic map overlay. Certain paradigms or standard forms of geometric programming in Prolog are becoming evident. They include applying a function to every element of a set, executing a procedure so long as a certain geometric pattern exists, and using unification to propagate a transitive function. This article describes these experiences, including paradigms of programming that seem useful, and finally lists what we see as the advantages and disadvantages of Prolog.



Journal ArticleDOI
08 Jun 1986
TL;DR: Comparable accuracy and computation time can be achieved by evaluating I(u, v) with a brute-force fast Fourier transform (FFT) described in this note, and there is virtually no restriction on the reflector geometry when using the brute-force FFT.
Abstract: Using high-frequency approximations, the secondary pattern of a reflector antenna can be calculated by numerically evaluating a radiation integral I(u, v). In recent years, tremendous effort has been expended on reducing I(u, v) to Fourier integrals. These reduction schemes are invariably reflector-geometry dependent. Hence, different analyses and computer software development must be carried out for different reflector shapes/boundaries. The purpose of this communication is to point out that, as computer power improves, these reduction schemes may not always be necessary. Comparable accuracy and computation time can be achieved by evaluating I(u, v) by a brute-force fast Fourier transform (FFT) described in this note. Furthermore, there is virtually no restriction on the reflector geometry when using the brute-force FFT.
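
A sketch of the brute-force FFT idea in NumPy: sample a toy aperture distribution on a grid and evaluate a Fourier-type integral by one 2-D FFT. The sign convention, sampling parameters and normalization below are assumptions for illustration; the actual radiation integral carries reflector-dependent amplitude and phase terms that are omitted here.

```python
import numpy as np

# Evaluate I(u, v) = integral of f(x, y) * exp(-j 2*pi (u x + v y)) dx dy
# on a regular grid by a single 2-D FFT.
M = N = 256
dx = dy = 0.05                                  # sample spacing (arbitrary units)
x = (np.arange(M) - M // 2) * dx
y = (np.arange(N) - N // 2) * dy
X, Y = np.meshgrid(x, y, indexing="ij")

f = (X**2 + Y**2 <= 4.0).astype(float)          # toy circular aperture, radius 2

# DFT with the continuous-transform normalization dx*dy; ifftshift puts the
# x = y = 0 sample first, fftshift re-centres the zero frequency afterwards.
I = dx * dy * np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(f)))
u = np.fft.fftshift(np.fft.fftfreq(M, d=dx))
v = np.fft.fftshift(np.fft.fftfreq(N, d=dy))

print(I.shape, I[M // 2, N // 2])               # value at (u, v) = (0, 0) ~ aperture area (~12.6)
```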

Proceedings ArticleDOI
01 Aug 1986
TL;DR: Combined with existing algorithms for computing Voronoi diagrams on the surface of polyhedra, this structure provides an efficient solution to the nearest neighbor query problem on polyhedral surfaces.
Abstract: A common structure arising in computational geometry is the subdivision of a plane defined by the faces of a straight line planar graph. We consider a natural generalization of this structure on a polyhedral surface. The regions of the subdivision are bounded by geodesics on the surface of the polyhedron. A method is given for representing such a subdivision that is efficient both with respect to space and the time required to answer a number of different queries involving the subdivision. For example, given a point

Journal ArticleDOI
TL;DR: This note describes how to perform a general decomposition of a set of polygons with fixed orientations in order to solve various computational geometry problems which are important in VLSI design.
Abstract: Objects with fixed orientations play an important role in many application areas, for instance VLSI design. Problems involving only rectilinearly oriented (rectangular) objects, as a simplest case, have been studied with the VLSI design application in mind. These objects can be transistors, cells or macros. In reality, they are more suitably represented by polygons rather than just rectangles. In this note we describe how to perform a general decomposition of a set of polygons with fixed orientations in order to solve various computational geometry problems which are important in VLSI design. The decomposition is very simple and efficiently computable, and it allows the subsequent application of algorithms for the rectilinear case, leading to some very efficient and some optimal solutions. We illustrate the technique in detail on the problem of finding the connected components of a set of polygons, for which we derive an optimal solution. The wide applicability of the method is then demonstrated on the problem of finding all pairs of intersecting polygons, yielding an optimal solution.
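
To make the connected-components problem concrete in the simplest (rectilinear, rectangular) setting, here is a brute-force quadratic solution with union-find; the note's decomposition technique is what turns such problems into optimal algorithms for general fixed-orientation polygons.

```python
def rects_intersect(a, b):
    # Closed axis-parallel rectangles (x1, y1, x2, y2); touching counts.
    return a[0] <= b[2] and b[0] <= a[2] and a[1] <= b[3] and b[1] <= a[3]

def connected_components(rects):
    """Group rectangles that touch or overlap, O(n^2) pairwise tests + union-find."""
    parent = list(range(len(rects)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]       # path halving
            i = parent[i]
        return i
    def union(i, j):
        parent[find(i)] = find(j)
    for i in range(len(rects)):
        for j in range(i + 1, len(rects)):
            if rects_intersect(rects[i], rects[j]):
                union(i, j)
    groups = {}
    for i in range(len(rects)):
        groups.setdefault(find(i), []).append(i)
    return list(groups.values())

print(connected_components([(0, 0, 2, 2), (1, 1, 3, 3), (10, 10, 11, 11)]))
# -> [[0, 1], [2]]
```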


Journal ArticleDOI
TL;DR: The method is found to be effective in achieving consistency between maps with different attributes and can be an alternative to calculating risk factors by complex algorithms.
Abstract: A geographic information overlay method is defined whereby the intersection of figures caused by overlaid maps is detected and socio-economic information about the detected region is restructured. By this method, map information compiled with different social attributes can be compared and its consistency can be checked for regional analysis. This method can be used as an alternative simulation model in generating indices for certain problems. For given map data, geometric figure intersections are handled by intersection calculus from computational geometry. For socio-economic information, integration and distribution of numerical data and representation of attributes are implemented from the practical point of view. As a case study, this method is applied to the overlay of geological and administrative maps in earthquake disaster prevention. It is found that the method is effective in achieving consistency between maps with different attributes and can be an alternative to calculating risk factors by complex algorithms.

Journal ArticleDOI
TL;DR: This paper argues that at least for sets of orthogonal objects divide-and-conquer is competitive, if a suitable representation of the objects is used, and sketches three (new) time-optimal divide-and-conquer algorithms to solve the line segment intersection problem, the measure problem and the contour problem.
Abstract: In the last few years line-sweep has become the standard method to solve problems that involve computing some property of a set of planar objects. In this paper we argue that at least for sets of orthogonal objects divide-and-conquer is competitive, if a suitable representation of the objects is used. We support this claim by sketching three (new) time-optimal divide-and-conquer algorithms to solve the line segment intersection problem, the measure problem and the contour problem, respectively. It turns out that divide-and-conquer requires simpler supporting data structures while line-sweep permits an easier reduction to a one-dimensional problem.
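
A one-dimensional toy with the same divide-and-conquer flavour: each half of the input is reduced to a representation of its union (sorted disjoint intervals) and the two representations are merged, rather than sweeping. This only illustrates the style of the argument, not the segment-intersection, measure or contour algorithms themselves.

```python
def union_intervals(ivs):
    """Union of 1-D intervals by divide and conquer, returned as a sorted
    list of disjoint intervals. The merge uses sorted() for brevity; a
    linear-time merge would keep the recurrence at O(n log n)."""
    if len(ivs) <= 1:
        return list(ivs)
    mid = len(ivs) // 2
    left, right = union_intervals(ivs[:mid]), union_intervals(ivs[mid:])
    merged = sorted(left + right)
    out = [merged[0]]
    for lo, hi in merged[1:]:
        if lo <= out[-1][1]:
            out[-1] = (out[-1][0], max(out[-1][1], hi))   # overlap: extend
        else:
            out.append((lo, hi))                          # gap: start a new piece
    return out

ivs = [(0, 3), (2, 5), (7, 9), (8, 10)]
u = union_intervals(ivs)
print(u, sum(hi - lo for lo, hi in u))   # [(0, 5), (7, 10)] 8  (the 1-D "measure")
```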