
Showing papers on "Computational geometry published in 1987"


Book
01 Jan 1987
TL;DR: This book offers a modern approach to computational geometry, an area that studies the computational complexity of geometric problems, in which combinatorial investigations play an important role.
Abstract: This book offers a modern approach to computational geometry, an area that studies the computational complexity of geometric problems. Combinatorial investigations play an important role in this study.

2,284 citations



Journal ArticleDOI
01 Mar 1987
TL;DR: This work considers uncertain points, curves and surfaces, and shows how they can be manipulated and transformed between coordinate frames in an efficient and consistent manner.
Abstract: Robots must operate in an environment which is inherently uncertain. This uncertainty is important in areas such as modeling, planning and the motion of manipulators and objects; areas where geometric analysis also plays an important part. To operate efficiently, a robot system must be able to represent, account for, and reason about the effects of uncertainty in these geometries in a consistent manner. We maintain that uncertainty should be represented as an intrinsic part of all geometric descriptions. We develop a description of uncertain geometric features as families of parameterized functions together with a distribution function defined on the associated parameter vector. We consider uncertain points, curves and surfaces, and show how they can be manipulated and transformed between coordinate frames in an efficient and consistent manner. The effectiveness of these techniques is demonstrated by application to the problem of developing maximal information sensing strategies.
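As an illustration of the kind of manipulation described, the sketch below (an assumption-laden example, not code from the paper) represents an uncertain 2-D point by a mean and a covariance over its parameter vector and propagates both through a rigid change of coordinate frame; for a nonlinear transform the rotation matrix would be replaced by the transform's Jacobian.

```python
import numpy as np

def transform_uncertain_point(mean, cov, theta, t):
    """Map an uncertain 2-D point (mean, covariance) into another frame.

    The target frame differs by a rotation `theta` and a translation `t`.
    For the linear map x' = R x + t the covariance transforms exactly as
    R C R^T; for a nonlinear map the Jacobian would take the place of R.
    """
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    return R @ mean + t, R @ cov @ R.T

# Hypothetical data: a point known more precisely in x than in y.
mean, cov = np.array([1.0, 2.0]), np.diag([0.01, 0.04])
new_mean, new_cov = transform_uncertain_point(mean, cov, np.pi / 4, np.array([0.5, -1.0]))
```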

226 citations


Journal ArticleDOI
TL;DR: A model for determining the directional relationship in 2-D space between two simply-connected polygons of arbitrary shape, size and distance from each other is developed and found to have a computational complexity of O(n), where n is the total number of vertices or cells used to represent the two polygons.

212 citations


Journal ArticleDOI
TL;DR: For this problem, none of the previous lower bounds are valid and algorithms are proposed requiring sublinear time for their solution in two and three dimensions.
Abstract: One of the basic geometric operations involves determining whether a pair of convex objects intersect. This problem is well understood in a model of computation in which the objects are given as input and their intersection is returned as output. For many applications, however, it may be assumed that the objects already exist within the computer and that the only output desired is a single piece of data giving a common point if the objects intersect or reporting no intersection if they are disjoint. For this problem, none of the previous lower bounds are valid and algorithms are proposed requiring sublinear time for their solution in two and three dimensions.
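The setting assumed here is that the convex objects are already stored in structured form (say, vertex arrays in convex-position order), so a query can be answered without reading the whole input. The sketch below is not the paper's intersection test; it only shows the flavor of such sublinear queries, using the standard O(log n) point-in-convex-polygon test by binary search on the triangle fan from vertex 0.

```python
def cross(o, a, b):
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def point_in_convex_polygon(poly, p):
    """O(log n) membership test for a convex polygon given in CCW vertex order."""
    n = len(poly)
    # p must lie in the wedge at poly[0] spanned by poly[1] and poly[-1].
    if cross(poly[0], poly[1], p) < 0 or cross(poly[0], poly[-1], p) > 0:
        return False
    # Binary search for the fan triangle (poly[0], poly[lo], poly[lo+1]) containing p.
    lo, hi = 1, n - 1
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if cross(poly[0], poly[mid], p) >= 0:
            lo = mid
        else:
            hi = mid
    return cross(poly[lo], poly[hi], p) >= 0

# Example: point_in_convex_polygon([(0, 0), (2, 0), (2, 2), (0, 2)], (1, 1)) -> True
```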

175 citations


Proceedings ArticleDOI
01 Oct 1987
TL;DR: A data structure for representing three-dimensional cell complexes is proposed along with the primitive operations necessary to make it useful and applications of the structure are given.
Abstract: Algorithms for manipulating three-dimensional cell complexes are seldom implemented due to the lack of a suitable data structure for representing them. Such a data structure is proposed here along with the primitive operations necessary to make it useful. Applications of the structure are also given.

164 citations


Proceedings ArticleDOI
01 Jan 1987
TL;DR: An assortment of algorithms, termed three-dimensional (3D) scan-conversion algorithms, is presented; all are incremental and use only additions, subtractions, tests, and simpler operations inside the inner algorithm loops.
Abstract: An assortment of algorithms, termed three-dimensional (3D) scan-conversion algorithms, is presented. These algorithms scan-convert 3D geometric objects into their discrete voxel-map representation within a Cubic Frame Buffer (CFB). The geometric objects that are studied here include three-dimensional lines, polygons (optionally filled), polyhedra (optionally filled), cubic parametric curves, bicubic parametric surface patches, circles (optionally filled), and quadratic objects (optionally filled) like those used in constructive solid geometry: cylinders, cones, and spheres. All algorithms presented here do scan-conversion with computational complexity which is linear in the number of voxels written to the CFB. All algorithms are incremental and use only additions, subtractions, tests and simpler operations inside the inner algorithm loops. Since the algorithms are basically sequential, the temporal complexity is also linear. However, the polyhedron-fill and sphere-fill algorithms have less than linear temporal complexity, as they use a mechanism for writing a voxel run into the CFB. The temporal complexity would then be linear with the number of pixels in the object's 2D projection. All algorithms have been implemented as part of the CUBE Architecture, which is a voxel-based system for 3D graphics. The CUBE architecture is also presented.
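A minimal sketch of the incremental style described (only additions, subtractions and tests in the inner loop), here for a 3-D line between two voxel coordinates. This is a generic driving-axis generalization of Bresenham's algorithm written for illustration, not necessarily the CUBE implementation.

```python
def voxelize_line(p0, p1):
    """Return the voxels on a 3-D line between integer points p0 and p1."""
    x, y, z = p0
    x1, y1, z1 = p1
    dx, dy, dz = abs(x1 - x), abs(y1 - y), abs(z1 - z)
    sx = 1 if x1 > x else -1
    sy = 1 if y1 > y else -1
    sz = 1 if z1 > z else -1
    voxels = [(x, y, z)]
    if dx >= dy and dx >= dz:          # x is the driving axis
        e_y, e_z = 2 * dy - dx, 2 * dz - dx
        for _ in range(dx):
            x += sx
            if e_y > 0:
                y += sy
                e_y -= 2 * dx
            if e_z > 0:
                z += sz
                e_z -= 2 * dx
            e_y += 2 * dy
            e_z += 2 * dz
            voxels.append((x, y, z))
    elif dy >= dx and dy >= dz:        # y is the driving axis
        e_x, e_z = 2 * dx - dy, 2 * dz - dy
        for _ in range(dy):
            y += sy
            if e_x > 0:
                x += sx
                e_x -= 2 * dy
            if e_z > 0:
                z += sz
                e_z -= 2 * dy
            e_x += 2 * dx
            e_z += 2 * dz
            voxels.append((x, y, z))
    else:                              # z is the driving axis
        e_x, e_y = 2 * dx - dz, 2 * dy - dz
        for _ in range(dz):
            z += sz
            if e_x > 0:
                x += sx
                e_x -= 2 * dz
            if e_y > 0:
                y += sy
                e_y -= 2 * dz
            e_x += 2 * dx
            e_y += 2 * dy
            voxels.append((x, y, z))
    return voxels
```

The error terms track how far the true line has drifted from the current voxel row and slab, so each step needs only comparisons and additions, in the spirit of the algorithms described above.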

163 citations


Book
28 Dec 1987
TL;DR: Expanded edition of Perceptrons: An Introduction to Computational Geometry.

145 citations


Journal ArticleDOI
P. Widmayer, Y. F. Wu, C. K. Wong
TL;DR: The distance concept is generalized to the case where any fixed set of orientations is allowed, a family of naturally induced metrics is introduced, and geometrical concepts are generalized accordingly.
Abstract: In VLSI design, technology requirements often dictate the use of only two orthogonal orientations, determining both the shape of objects and the distance function, the $L_1$-metric, to be used for wiring objects. More recent VLSI fabrication technology is capable of creating edges and wires in both the orthogonal and diagonal orientations. We generalize the distance concept to the case where any fixed set of orientations is allowed, and introduce a family of naturally induced metrics, and the subsequent generalization of geometrical concepts. A shortest connection between two points is in this case a path composed of line segments with only the given orientations. We derive optimal solutions for various basic planar distance problems in this setting, such as the computation of a Voronoi diagram, a minimum spanning tree, and the (minimum and maximum) distance between two convex polygons. Many other theoretically interesting and practically relevant problems remain to be solved. In particular, the new famil...
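The sketch below illustrates the induced metric for a fixed orientation set (an illustrative reduction, not the paper's algorithms): the distance between two points is the length of a shortest path built from segments in the allowed orientations, which amounts to decomposing the difference vector over two adjacent allowed directions.

```python
import math

def fixed_orientation_distance(p, q, orientations):
    """Distance from p to q when segments may only use the given orientations.

    `orientations` are angles in radians in [0, pi); each orientation may be
    traversed in both directions.  With at least two distinct orientations,
    the shortest path uses the two allowed directions adjacent to q - p.
    """
    vx, vy = q[0] - p[0], q[1] - p[1]
    if vx == 0 and vy == 0:
        return 0.0
    # All usable unit direction vectors (each orientation and its opposite).
    dirs = []
    for a in orientations:
        dirs.append((math.cos(a), math.sin(a)))
        dirs.append((-math.cos(a), -math.sin(a)))
    dirs.sort(key=lambda d: math.atan2(d[1], d[0]))
    best = math.inf
    for i in range(len(dirs)):
        u1, u2 = dirs[i], dirs[(i + 1) % len(dirs)]
        det = u1[0] * u2[1] - u1[1] * u2[0]
        if abs(det) < 1e-12:
            continue
        # Solve v = a*u1 + b*u2 for nonnegative a, b (Cramer's rule).
        a = (vx * u2[1] - vy * u2[0]) / det
        b = (u1[0] * vy - u1[1] * vx) / det
        if a >= -1e-12 and b >= -1e-12:
            best = min(best, a + b)
    return best

# With orientations {0, pi/2} this is the L1 metric; adding pi/4 and 3*pi/4
# gives a metric suited to orthogonal plus diagonal wiring.
```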

122 citations


Journal ArticleDOI
01 May 1987
TL;DR: Views of objects composed of smooth surface patches whose intersections form smooth space curves are classified by mappings and diagrams of mappings from the line to the plane or from the plane to the plane.
Abstract: Views of objects composed of smooth surface patches whose intersections form smooth space curves are classified. These views may be described algebraically by mappings and diagrams of mappings from the line to the plane (for the crease) or from the plane to the plane (for the apparent contour). It is possible to derive a finite catalogue of generic views and their transitions, so that every view is either (up to smooth coordinate changes in source and target) equivalent to one of those in the catalogue or is of sufficiently high codimension.

73 citations


Proceedings ArticleDOI
01 Mar 1987
TL;DR: An efficient and reliable algorithm for computing the Euclidean distance between a pair of convex sets in R^m is described; the algorithm has special features which make its application in a variety of robotics problems attractive.
Abstract: An efficient and reliable algorithm for computing the Euclidean distance between a pair of convex sets in R^m is described. Extensive numerical experience with a broad family of polytopes in R^3 shows that the computational cost is approximately linear in the total number of vertices specifying the two polytopes. The algorithm has special features which make its application in a variety of robotics problems attractive. These are discussed and an example of collision detection is given.
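The algorithm itself is not reproduced here; the brute-force 2-D baseline below (which assumes disjoint convex polygons and uses made-up helper names) only makes the quantity being computed concrete, at O(|P|·|Q|) cost, where the paper's method is roughly linear in the total number of vertices and works in R^m.

```python
import math

def point_segment_distance(p, a, b):
    """Euclidean distance from point p to segment ab."""
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return math.hypot(px - ax, py - ay)
    t = ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)
    t = max(0.0, min(1.0, t))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def convex_polygon_distance(P, Q):
    """Distance between two DISJOINT convex polygons given as vertex lists.

    Brute force over boundary segment pairs; for non-crossing segments the
    minimum distance is attained at an endpoint of one of them.
    """
    best = math.inf
    for i in range(len(P)):
        a, b = P[i], P[(i + 1) % len(P)]
        for j in range(len(Q)):
            c, d = Q[j], Q[(j + 1) % len(Q)]
            best = min(best,
                       point_segment_distance(a, c, d),
                       point_segment_distance(b, c, d),
                       point_segment_distance(c, a, b),
                       point_segment_distance(d, a, b))
    return best
```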

Journal ArticleDOI
TL;DR: Three special cases of increasing difficulty and generality of the hidden line elimination problem are studied, and applying some methods from computational geometry these problems can be solved with better worst-case bounds than those of the best known algorithms for the general problem.
Abstract: Hidden line elimination is a well-known problem in computer graphics and many practical solutions have been proposed. Only recently has the problem been studied from a theoretical point of view, taking asymptotic worst-case time and space bounds into account. Here we study three special cases of increasing difficulty and generality of the hidden line elimination problem. Applying some methods from computational geometry, these problems can be solved with better worst-case bounds than those of the best known algorithms for the general problem.

Journal ArticleDOI
TL;DR: A new methodology reveals the structure of free space and constructs the hypergraph representation through a directed search for a set of fundamental circuits in an abstract graphical representation of the environment geometry.
Abstract: This paper presents a method of structuring the free space of a roving robot's environment into a set of overlapping convex regions ideally suited to path planning and navigation tasks. The structure of the free space environment is maintained as a hypergraph with each convex region represented by a hyperedge identifying the boundary walls of the region. A new methodology reveals the structure of free space and constructs the hypergraph representation through a directed search for a set of fundamental circuits in an abstract graphical representation of the environment geometry.

Proceedings ArticleDOI
01 Oct 1987
TL;DR: It is proved that it is always possible to find piecewise-linear homeomorphisms between rectangular regions, and they are described in terms of a joint triangulation of the domain and the range rectangular regions.
Abstract: In rubber-sheeting applications in cartography, it is useful to seek piecewise-linear homeomorphisms (PLH maps) between rectangular regions which map an arbitrary sequence of n points {p1, p2, …, pn} from the interior of one rectangle to a corresponding sequence {q1, q2, …, qn} of n points in the interior of the second region. This paper proves that it is always possible to find such PLH maps and describes them in terms of a joint triangulation of the domain and the range rectangular regions. One naive approach to finding a PLH map is to triangulate (in any fashion) the domain rectangle on its n points and four corners and to define a piecewise affine map on each triangle △p_{i1}p_{i2}p_{i3} to be the unique affine map that sends the three vertices p_{i1}, p_{i2}, p_{i3} of the triangle to the three corresponding vertices q_{i1}, q_{i2}, q_{i3} of the image triangle △q_{i1}q_{i2}q_{i3}. Such piecewise affine maps send triangles to triangles, agree on shared edges, and thus extend globally, and will be called triangulation maps. The shortcoming of building transformations in this fashion is that the resulting triangulation map need not be one-to-one, although there is a simple test to determine if such a map is one-to-one (see Theorem 2 below). If the map is one-to-one, then the image triangles will form a triangulation of the range space, and we will have a joint triangulation. If the map is not one-to-one, then there will be folding over of triangles. It may be possible to alleviate this folding by choosing a different triangulation of the n domain points, or it may be the case that no triangulation of the n domain points will work (see figures 5 and 6 below). We show that it will be possible, in all cases, to rectify the folding by adding appropriate additional triangulation vertex pairs {p_{n+1}, p_{n+2}, …, p_{n+m}} and {q_{n+1}, q_{n+2}, …, q_{n+m}} and retriangulating (see Theorem 1 below). This paper examines conditions for triangulation maps to be homeomorphisms and explores different ways of modifying triangulations and triangulation maps to make them joint triangulations and homeomorphisms. The paper concludes with a section on alternative constructive approaches to the open problem of finding joint triangulations on the original sequences of vertex pairs without augmenting those sequences of pairs. The existence proofs in this paper do not solve computational geometry problems per se; instead they permit us to formulate new computational geometry problems. The problems we pose are of interest to us because of a particular application in automated cartography.
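A small sketch of the folding test described above (the general idea behind the "simple test", not necessarily the exact statement of Theorem 2): a triangulation map folds over precisely where an image triangle is degenerate or has its orientation reversed relative to its domain triangle.

```python
def signed_area(a, b, c):
    """Twice-signed-area-free convention: positive for counter-clockwise a, b, c."""
    return 0.5 * ((b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0]))

def folded_triangles(triangles, domain_pts, range_pts):
    """Return the triangles whose image is degenerate or orientation-reversed.

    `triangles` is a list of vertex-index triples over the common index set of
    `domain_pts` and `range_pts`.  A nonempty result means the triangulation
    map folds over and is therefore not one-to-one.
    """
    bad = []
    for (i, j, k) in triangles:
        src = signed_area(domain_pts[i], domain_pts[j], domain_pts[k])
        img = signed_area(range_pts[i], range_pts[j], range_pts[k])
        if src * img <= 0:          # orientation reversed (or collapsed)
            bad.append((i, j, k))
    return bad
```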

01 Aug 1987
TL;DR: This thesis deals with several constructive (in contrast to query) problems in computational geometry and presents algorithms whose running time depends non-trivially on the output size.
Abstract: In computer science the efficiency of algorithms is usually measured in terms of the size of the input. The output size, on the other hand, has been used for this purpose rather infrequently, except in certain enumerative query problems. This thesis deals with several constructive (in contrast to query) problems in computational geometry and presents algorithms whose running time depends non-trivially on the output size. We present an algorithm that finds the convex hull of n points in the plane in worst-case time O(n log H), where H is the number of points that turn out to be vertices of the convex hull. We examine the d-dimensional maximal vector problem and show that as long as V, the number of maximal vectors in a set, is not too large, these maximal vectors can be found in time O(n log V). We present an algorithm for solving the planar convex subdivision overlay problem in time proportional to the combined input and output size. Finally we show that, after some preprocessing in the form of linear programs, d-dimensional convex hulls can be constructed at logarithmic cost per face.
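For contrast with the O(n log H) bound, the gift-wrapping (Jarvis march) sketch below also has output-sensitive cost, O(nH); it is not the thesis's algorithm, but it shows concretely how H, the number of hull vertices, enters the running time.

```python
def convex_hull_gift_wrap(points):
    """Convex hull by gift wrapping: O(n) work per hull vertex, O(n*H) total."""
    pts = list(set(map(tuple, points)))
    if len(pts) < 3:
        return pts

    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    start = min(pts)                 # leftmost (then lowest) point is on the hull
    hull, p = [], start
    while True:
        hull.append(p)
        q = pts[0] if pts[0] != p else pts[1]
        for r in pts:
            c = cross(p, q, r)
            # Keep the candidate with no point to its right; break ties by distance.
            if c < 0 or (c == 0 and
                         (r[0] - p[0]) ** 2 + (r[1] - p[1]) ** 2 >
                         (q[0] - p[0]) ** 2 + (q[1] - p[1]) ** 2):
                q = r
        p = q
        if p == start:
            break
    return hull                      # hull vertices in counter-clockwise order
```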

Journal ArticleDOI
W.E. Blanz1, Jorge L. C. Sanz1, E.B. Hinkle1
01 Mar 1987
TL;DR: A general image-segmentation architecture is proposed, which enables the computation of the necessary low-level image features as well as pixel classification at video-rate speed and can be considered as a control-free image segmentation paradigm.
Abstract: Machine vision methods are presented for the analysis of solder balls in integrated circuits. The algorithms are founded on contour fitting using a multiparameter Hough transform and on polynomial-classifier-based pattern recognition. The first method is used to show the complexity of the inspection problem, especially in the presence of high-precision requirements. In this connection, it is shown that subpixel accuracy is not obtainable even under the assumption of a perfect camera system, which determines the resolution necessary for the measurement of a given maximum-volume distortion. The second method is carried out by computing a large number of features on the original image after individual solder balls are segmented by a projection technique. This approach can be considered as a control-free image segmentation paradigm, since it does not rely on properly sequencing several image-analysis modules. Further experimentation with a large pool of defective solder balls is necessary to confirm the applicability of these machine vision algorithms to real-world manufacturing inspection systems. A general image-segmentation architecture is proposed, which enables the computation of the necessary low-level image features as well as pixel classification at video-rate speed.

Proceedings ArticleDOI
M. Sharir1
01 Mar 1987
TL;DR: A collection of results is presented representing recent progress in the design and analysis of efficient algorithms for planning purely translational collision-free motion of rigid objects moving in two- or three-dimensional space amidst a collection of obstacles whose geometry is known to the system.
Abstract: In this abstract we present a collection of results representing recent progress in the design and analysis of efficient algorithms for planning purely translational collision-free motion of rigid objects moving in two- or three-dimensional space amidst a collection of obstacles whose geometry is known to the system. (The results reported here were obtained in collaboration with K. …) Motion planning problems of this kind are particularly favorable because they involve only two or three (translational) degrees of freedom, and because they do not have to consider rotational degrees of freedom, which tend to make the structure of the free configuration space of the moving system more complex to analyze. Besides obvious applications for mobile (or flying) autonomous systems, these problems are significant for two reasons. First, they constitute the simplest of all motion planning problems, so careful analysis of their complexity is called for before moving to more difficult and complex problems. Second, these translational problems often arise as subtasks in motion planning algorithms for systems with additional rotational degrees of freedom (as in [LS], [KS]). In spite of their relative simplicity, precise analysis of the combinatorial structure of the configuration space of such a translating system leads to many deep and difficult geometric problems, whose efficient solutions require development and application of sophisticated tools from computational geometry. It is the purpose of this abstract to review these problems, describe the techniques used in their solution, present sharp upper bounds on the complexity of these solutions, and mention some open problems that arise in connection with certain variants and extensions of the translational motion planning problem. In the two-dimensional case, let B be a rigid k-sided polygonal object free to translate in the plane amidst a collection of polygonal obstacles; the goal is to calculate the free configuration space FP of B, consisting of all free placements of B (i.e. placements in which B does not intersect any obstacle). Having calculated FP, we can then decompose it into its arcwise connected components, so that, given any pair of free placements Z1, Z2 of B, we can determine whether they lie in the same connected component of FP, in which case collision-free translational motion of B from Z1 to Z2 is possible. (We can use FP for other applications, e.g. preprocess FP so that we can determine in logarithmic time whether a given "query" placement of B is free …
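The standard construction underlying this kind of translational planning is sketched below (hedged: the paper's contribution is the combinatorial analysis and the efficient algorithms, not this naive computation). The placements of B that collide with a convex obstacle P form the Minkowski sum of P with the reflected robot −B, which for convex inputs is the convex hull of the pairwise vertex differences.

```python
import numpy as np
from scipy.spatial import ConvexHull

def c_obstacle(obstacle, robot):
    """Configuration-space obstacle for a translating convex robot.

    `obstacle` and `robot` are (k, 2) arrays of convex polygon vertices.
    A placement z of the robot's reference point collides with the obstacle
    iff z lies in obstacle (+) (-robot); for convex inputs that Minkowski sum
    is the convex hull of all pairwise vertex differences p - b.
    """
    sums = np.array([p - b for p in obstacle for b in robot])
    hull = ConvexHull(sums)
    return sums[hull.vertices]          # C-obstacle vertices, CCW order

# Hypothetical example: unit-square obstacle, small triangular robot.
P = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float)
B = np.array([[0, 0], [0.2, 0], [0, 0.2]], dtype=float)
cobs = c_obstacle(P, B)
```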

01 Jan 1987
TL;DR: A number of new techniques are presented for solving many of the fundamental problems in computational geometry efficiently in parallel; the resulting algorithms have linear or "almost" linear speed-ups over the best known sequential algorithms for these problems.
Abstract: In this thesis we present a number of new techniques for solving many of the fundamental problems in computational geometry efficiently in parallel. The resulting algorithms all have linear or "almost" linear speed-ups over the best known sequential algorithms for these problems. Specifically, the problems we address include the following: computing the diameter of a convex polygon, planar convex hull finding, finding a closest pair of points, polygon triangulation, 3-dimensional maxima finding, dominance counting, determining the visibility from a point, trapezoidal decomposition, and planar point location. The techniques presented are quite different from the ones used in the efficient sequential algorithms. All of our results are for the CREW PRAM or EREW PRAM computational models.

Proceedings ArticleDOI
01 Dec 1987
TL;DR: It is shown that a number of geometric problems can be solved on a √n × √n mesh-connected computer (MCC) in O(√n) time, which is optimal to within a constant factor, since a nontrivial data movement on an MCC requires Ω(√n) time.
Abstract: We show that a number of geometric problems can be solved on a √n × √n mesh-connected computer (MCC) in O(√n) time, which is optimal to within a constant factor, since a nontrivial data movement on an MCC requires Ω(√n) time. The problems studied here include multipoint location, planar point location, trapezoidal decomposition, intersection detection, intersection of two convex polygons, Voronoi diagram, the largest empty circle, the smallest enclosing circle, etc. The O(√n) algorithms for all of the above problems are based on the classical divide-and-conquer problem solving strategy.

Journal ArticleDOI
TL;DR: This work presents an algorithm for solving the problem of determining whether a set of polygons is multi-directionally separable, and shows how to compute all directions of unidirectional separability for sets of arbitrary simple polygons.
Abstract: We consider the problem of separating a set of polygons by a sequence of translations (one such collision-free translation motion for each polygon). If all translations are performed in a common direction the separability problem so obtained has been referred to as the uni-directional separability problem; for different translation directions, the more general multi-directional separability problem arises. The class of such separability problems has been studied previously and arises e.g. in computer graphics and robotics. Existing solutions to the uni-directional problem typically assume the objects to have a certain predetermined shape (e.g., rectangular or convex objects), or to have a direction of separation already available. Here we show how to compute all directions of unidirectional separability for sets of arbitrary simple polygons. The problem of determining whether a set of polygons is multi-directionally separable had been posed by G.T. Toussaint. Here we present an algorithm for solving this problem which, in addition to detecting whether or not the given set is multidirectionally separable, also provides an ordering in which to separate the polygons. In case that the entire set is not multi-directionally separable, the algorithm will find the largest separable subset.


01 Jun 1987
TL;DR: The main contributions presented are optimal procedures for computing minimum-area (minimum-perimeter) equiangular enclosures, a characterization that makes possible the best-known solution for the unrestricted minimum-area k-gonal enclosure problem, characterization of various other modes of polygon approximation, and characterization and solution of the unrestricted minimum-perimeter triangular enclosure problem.
Abstract: Computational Geometry is a new branch of research in the larger field of algorithm design and analysis. In his ground-breaking thesis (Sh78), M. I. Shamos writes in his abstract: "(Computational Geometry) is a study of the computational aspects of geometry within the framework of analysis of algorithms." In this current thesis, we continue this trend of recasting classical geometric notions in the light of present-day computational capabilities. It is often useful and sometimes crucial to be able to represent complex geometric objects using simpler substitutes that somehow capture the original objects' properties. For example, in two-dimensional space, one could use polygonal enclosures to model the clutter that might lie in a robot's locus of operation. Or, the enclosures may represent packaging schemes for products with complex shapes. We look at different criteria by which such enclosures may be optimized. Among these are area, perimeter, number of edges or vertices, and clearance. The analysis is limited to the two-dimensional case, although a number of results do find extensions in higher dimensions. The typical problem that is analyzed in this work can be stated as follows: Given a polygon on the plane with property A, find an approximating polygon with property B that optimizes criterion C. A rich collection of problems is analyzed and solutions are characterized. Algorithms are then designed based on these characterizations and their complexity estimated. The main contributions presented are optimal procedures for computing minimum-area (minimum-perimeter) equiangular enclosures, a characterization that makes possible the best-known solution for the unrestricted minimum-area k-gonal enclosure problem, characterization and solution of the unrestricted minimum-perimeter triangular enclosure problem, and characterization of various other modes of polygon approximation. A number of remaining open problems are discussed at the conclusion of the work.

Proceedings ArticleDOI
12 Oct 1987
TL;DR: The most significant of the results is that the lower envelope of n triangles in three dimensions has combinatorial complexity at most O(n²α(n)) (where α(n) is the extremely slowly growing inverse of Ackermann's function), that this bound is tight in the worst case, and that this envelope can be calculated in time O(n²α(n)).
Abstract: We consider the problem of obtaining sharp (nearly quadratic) bounds for the combinatorial complexity of the lower envelope (i.e. pointwise minimum) of a collection of n bivariate (or generally multi-variate) continuous and "simple" functions, and of designing efficient algorithms for the calculation of this envelope. This problem generalizes the well-studied univariate case (whose analysis is based on the theory of Davenport-Schinzel sequences), but appears to be much more difficult and still largely unsolved. It is a central problem that arises in many areas in computational and combinatorial geometry, and has numerous applications including generalized planar Voronoi diagrams, hidden surface elimination for intersecting surfaces, purely translational motion planning, finding common transversals of polyhedra, and more. In this abstract we provide several partial solutions and generalizations of this problem, and apply them to the problems mentioned above. The most significant of our results is that the lower envelope of n triangles in three dimensions has combinatorial complexity at most O(n²α(n)) (where α(n) is the extremely slowly growing inverse of Ackermann's function), that this bound is tight in the worst case, and that this envelope can be calculated in time O(n²α(n)).
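A brute-force, grid-sampled picture of the object being analyzed (not the near-quadratic combinatorial construction): the lower envelope of triangles in 3-space is, above each point of the plane, the minimum height over the triangles whose vertical projection covers that point.

```python
import numpy as np

def sample_lower_envelope(triangles, xs, ys):
    """Sample the lower envelope of 3-D triangles on an (xs, ys) grid.

    Each triangle is a triple of (x, y, z) vertices.  Returns a grid of minimum
    heights, np.inf where no triangle projects onto the sample point.  This is
    only an illustrative picture, not the combinatorial envelope itself.
    """
    env = np.full((len(ys), len(xs)), np.inf)
    for a, b, c in triangles:
        det = (b[1] - c[1]) * (a[0] - c[0]) + (c[0] - b[0]) * (a[1] - c[1])
        if abs(det) < 1e-12:
            continue                            # vertically degenerate triangle
        for iy, y in enumerate(ys):
            for ix, x in enumerate(xs):
                # Barycentric coordinates of (x, y) in the projected triangle.
                w1 = ((b[1] - c[1]) * (x - c[0]) + (c[0] - b[0]) * (y - c[1])) / det
                w2 = ((c[1] - a[1]) * (x - c[0]) + (a[0] - c[0]) * (y - c[1])) / det
                w3 = 1.0 - w1 - w2
                if min(w1, w2, w3) >= 0:        # sample point is covered
                    z = w1 * a[2] + w2 * b[2] + w3 * c[2]
                    env[iy, ix] = min(env[iy, ix], z)
    return env
```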

Proceedings ArticleDOI
01 Jan 1987
TL;DR: The efficiency of the proposed approach is illustrated by a simulation of a spherical robot navigating in a 3-D room with static obstacles.
Abstract: In applications of robotics to surveillance and mapping at nuclear facilities the scene to be described is three-dimensional. Using range data a 3-D model of the environment can be built. First, each measured point on the object surface is surrounded by a solid sphere with a radius determined by the range to that point. Then the 3-D shapes of the visible surfaces are obtained by taking the (Boolean) union of the spheres. Using this representation distances to boundary surfaces can be efficiently calculated. This feature is particularly useful for navigation purposes. The efficiency of the proposed approach is illustrated by a simulation of a spherical robot navigating in a 3-D room with static obstacles.
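A minimal sketch (with hypothetical data) of the distance query this representation makes cheap: for a point outside the union, the distance to the union of spheres is exactly the minimum over spheres of the distance to the centre minus the radius; a negative value signals penetration.

```python
import numpy as np

def distance_to_sphere_union(p, centers, radii):
    """Distance from point p to the union of spheres (centers, radii).

    One pass over the spheres; exact for points outside the union, which is
    what makes this representation convenient for navigation queries.
    """
    p = np.asarray(p, dtype=float)
    centers = np.asarray(centers, dtype=float)
    return float(np.min(np.linalg.norm(centers - p, axis=1) - np.asarray(radii)))

# Hypothetical example: two measured surface points blown up into spheres.
d = distance_to_sphere_union([0.0, 0.0, 0.0],
                             [[1.0, 0.0, 0.0], [0.0, 2.0, 0.0]],
                             [0.3, 0.5])   # -> 0.7
```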

Journal ArticleDOI
TL;DR: A new scan-line algorithm for displaying bicubic surfaces that computes the intersection of the surfaces with only a restricted subset of scan planes and obtains the intersection with other scan planes by linear interpolation between exact intersections.
Abstract: This article presents a new scan-line algorithm for displaying bicubic surfaces. Patches are decomposed into regions of constant sign of the z component of the normal before the scan process. Most of the computations are done in parametric space. The algorithm computes the intersection of the surfaces with only a restricted subset of scan planes and obtains the intersection with other scan planes by linear interpolation between exact intersections. A bound of the algorithm's error is given. Finally, the new method is compared with Whitted's algorithm.

Book ChapterDOI
25 May 1987
TL;DR: It is pointed out that most problems in computational geometry in fact have fast parallel algorithms, by reduction to the cell decomposition result of Kozen and Yap; this is illustrated using a new notion of generalized Voronoi diagrams that subsumes all known instances.
Abstract: This paper has two goals. First, we point out that most problems in computational geometry in fact have fast parallel algorithms (that is, in NC*) by reduction to the cell decomposition result of Kozen and Yap. We illustrate this using a new notion of generalized Voronoi diagrams that subsumes all known instances. While the existence of NC* algorithms for computational geometry is theoretically significant, it leaves much to be desired for specific problems. Therefore, the second part of the paper surveys some recent results in a fast growing list of parallel algorithms for computational geometry.


Journal ArticleDOI
TL;DR: A set of criteria is proposed that a digitization of a collection of line segments should satisfy in order to be said to represent the same structure as their continuous plane counterpart, and it is shown that these criteria cannot be satisfied by grid cells of uniform size.

Journal ArticleDOI
TL;DR: This paper looks at several occurrences of this problem in computational geometry and proposes various lines of attack to improve the solutions of several specific problems; for example, computing order statistics, performing polygonal range searching, testing algebraic predicates, etc.
Abstract: There are many efficient ways of searching a set when all its elements can be represented in memory. Often, however, the domain of the search is too large to have each element stored separately, and some implicit representation must be used. Whether it is still possible to search efficiently in these conditions is the underlying theme of this paper. We look at several occurrences of this problem in computational geometry and we propose various lines of attack. In the course of doing so, we improve the solutions of several specific problems; for example, computing order statistics, performing polygonal range searching, testing algebraic predicates, etc.

Journal ArticleDOI
TL;DR: This work presents the first time- and space-optimal algorithm for the problem of computing the contours of the disjoint polygons defined by the union of n rectangles in the plane, where e is the total number of edges in the contour cycles.