
Showing papers presented at "Symposium on Computational Geometry in 1995"


Proceedings ArticleDOI
01 Sep 1995
TL;DR: This paper considers the main known classes of algorithms for solving convex polytope enumeration problems and argues that they all have at least one of two weaknesses: inability to deal well with degeneracies, or inability to control the sizes of intermediate results.
Abstract: A convex polytope P can be specified in two ways: as the convex hull of the vertex set V of P, or as the intersection of the set H of its facet-inducing halfspaces. The vertex enumeration problem is to compute V from H; the facet enumeration problem is to compute H from V. These two problems are essentially equivalent under point-hyperplane duality. They are among the central computational problems in the theory of polytopes. It is open whether they can be solved in time polynomial in |H| + |V|. In this paper we consider the main known classes of algorithms for solving these problems. We argue that they all have at least one of two weaknesses: inability to deal well with degeneracies, or inability to control the sizes of intermediate results. We then introduce families of polytopes that exercise those weaknesses. Roughly speaking, fat-lattice or intricate polytopes cause algorithms with bad degeneracy handling to perform badly; dwarfed polytopes cause algorithms with bad intermediate-size control to perform badly. We also present computational experience with trying to solve these problems on these hard polytopes, using various implementations of the main algorithms.
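The vertex-enumeration problem described above can be illustrated with a brute-force sketch (not one of the paper's algorithm classes): in 2-D, intersect every pair of bounding lines of a halfspace system and keep the feasible intersection points. Exact rational arithmetic sidesteps the degeneracy issues the paper highlights.

```python
from fractions import Fraction
from itertools import combinations

def enumerate_vertices(halfspaces):
    """Brute-force vertex enumeration for a 2-D polytope.

    halfspaces: list of (a1, a2, b) meaning a1*x + a2*y <= b.
    Returns the vertex set as exact Fraction pairs. Runs in O(|H|^3):
    every pair of bounding lines is intersected and the candidate
    point is checked against all constraints.
    """
    vertices = set()
    for (a1, a2, b), (c1, c2, d) in combinations(halfspaces, 2):
        det = a1 * c2 - a2 * c1
        if det == 0:                      # parallel boundaries: no unique point
            continue
        x = Fraction(b * c2 - a2 * d, det)   # Cramer's rule
        y = Fraction(a1 * d - b * c1, det)
        if all(p * x + q * y <= r for p, q, r in halfspaces):
            vertices.add((x, y))
    return vertices
```

For the unit square given as four halfspaces, this returns its four corners; the cubic behavior on degenerate inputs is exactly the kind of weakness the paper's hard polytope families are designed to expose.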

205 citations


Proceedings ArticleDOI
01 Sep 1995
TL;DR: A new deterministic algorithm for finding intersecting pairs from a given set of N segments in the plane; it is asymptotically optimal, with time and space complexity O(N log N + K) and O(N) respectively.
Abstract: This paper deals with a new deterministic algorithm for finding intersecting pairs from a given set of N segments in the plane. The algorithm is asymptotically optimal and has time and space complexity O(N log N + K) and O(N) respectively, where K is the number of intersecting pairs. The algorithm may be used for finding intersections not only of line segments but also of curve segments.
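A minimal baseline for the problem above, assuming integer coordinates: an exact orientation-based segment-pair test applied to all O(N^2) pairs. The paper's algorithm achieves O(N log N + K); this sketch only pins down the predicate semantics.

```python
from itertools import combinations

def orient(p, q, r):
    """Sign of the cross product (q-p) x (r-p): >0 left turn, <0 right, 0 collinear."""
    return (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])

def on_segment(p, q, r):
    """Assuming p, q, r collinear, does r lie on segment pq?"""
    return (min(p[0], q[0]) <= r[0] <= max(p[0], q[0]) and
            min(p[1], q[1]) <= r[1] <= max(p[1], q[1]))

def segments_intersect(s, t):
    """Exact intersection test for two closed segments with integer coordinates."""
    (p1, p2), (p3, p4) = s, t
    d1, d2 = orient(p3, p4, p1), orient(p3, p4, p2)
    d3, d4 = orient(p1, p2, p3), orient(p1, p2, p4)
    if ((d1 > 0 and d2 < 0) or (d1 < 0 and d2 > 0)) and \
       ((d3 > 0 and d4 < 0) or (d3 < 0 and d4 > 0)):
        return True                       # proper crossing
    if d1 == 0 and on_segment(p3, p4, p1): return True
    if d2 == 0 and on_segment(p3, p4, p2): return True
    if d3 == 0 and on_segment(p1, p2, p3): return True
    if d4 == 0 and on_segment(p1, p2, p4): return True
    return False

def intersecting_pairs(segments):
    """Quadratic baseline enumeration of all intersecting index pairs."""
    return [(i, j) for (i, si), (j, sj) in combinations(enumerate(segments), 2)
            if segments_intersect(si, sj)]
```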

191 citations


Proceedings ArticleDOI
01 Sep 1995
TL;DR: An experimental comparison of a number of different algorithms for computing the Delaunay triangulation, analyzing the major high-level primitives that the algorithms use and how often implementations of these algorithms perform each operation.
Abstract: This paper presents an experimental comparison of a number of different algorithms for computing the Delaunay triangulation. The algorithms examined are: Dwyer’s divide and conquer algorithm, Fortune’s sweepline algorithm, several versions of the incremental algorithm (including one by Ohya, Iri, and Murota, a new bucketing-based algorithm described in this paper, and Devillers’s version of a Delaunay-tree based algorithm that appears in LEDA), an algorithm that incrementally adds a correct Delaunay triangle adjacent to a current triangle in a manner similar to gift wrapping algorithms for convex hulls, and Barber’s convex hull based algorithm. Most of the algorithms examined are designed for good performance on uniformly distributed sites. However, we also test implementations of these algorithms on a number of non-uniform distributions. The experiments go beyond measuring total running time, which tends to be machine-dependent. We also analyze the major high-level primitives that algorithms use and do an experimental analysis of how often implementations of these algorithms perform each operation.
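The incremental algorithms compared above all rest on the same numerical primitive, the in-circle test; a sketch of it as a 3×3 determinant (the standard textbook formulation, not code from the paper):

```python
def in_circle(a, b, c, d):
    """Return >0 if d lies inside the circle through a, b, c (given in
    counterclockwise order), <0 if outside, 0 if on the circle.
    This is the predicate incremental Delaunay algorithms evaluate
    when deciding whether an edge must be flipped."""
    rows = []
    for (x, y) in (a, b, c):
        # Translate so d is the origin; lift onto the paraboloid z = x^2 + y^2.
        rows.append((x - d[0], y - d[1], (x - d[0]) ** 2 + (y - d[1]) ** 2))
    (ax, ay, aq), (bx, by, bq), (cx, cy, cq) = rows
    return (ax * (by * cq - bq * cy)
            - ay * (bx * cq - bq * cx)
            + aq * (bx * cy - by * cx))
```

With integer input the Python expression is exact, which makes it convenient for counting how often the primitive fires, the kind of operation count the experiments above measure.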

171 citations


Proceedings ArticleDOI
01 Sep 1995
TL;DR: The r-reduced surface of a set of n spheres representing a molecule is defined in relation to the r-accessible and r-excluded surfaces, and algorithms are given to compute its outer component and the analytical description of the corresponding r-excluded surface.
Abstract: In this paper we define the r-reduced surface of a set of n spheres representing a molecule, in relation to the r-accessible and r-excluded surfaces. Algorithms are given to compute the outer component of the r-reduced surface in O(n log n) operations and the analytical description of the corresponding r-excluded surface in O(n) time. An algorithm to handle the self-intersecting parts of that surface is described. These algorithms have been implemented in a program called MSMS, which was tested on a set of 709 proteins. The CPU times spent in the different algorithms composing MSMS are given for this set of molecules.

137 citations


Proceedings ArticleDOI
01 Sep 1995
TL;DR: It is shown that if one is willing to allow approximate ranges, then it is possible to do much better than current state-of-the-art results, and empirical evidence is given showing that allowing small relative errors can significantly improve query execution times.
Abstract: The range searching problem is a fundamental problem in computational geometry, with numerous important applications. Most research has focused on solving this problem exactly, but lower bounds show that if linear space is assumed, the problem cannot be solved in polylogarithmic time, except for the case of orthogonal ranges. In this paper we show that if one is willing to allow approximate ranges, then it is possible to do much better. In particular, given a bounded range Q of diameter w and ε > 0, an approximate range query treats the range as a fuzzy object, meaning that points lying within distance εw of the boundary of Q either may or may not be counted. We show that in any fixed dimension d, a set of n points in R^d can be preprocessed in O(n log n) time and O(n) space, such that approximate queries can be answered in O(log n (1/ε)^d) time. The only assumption we make about ranges is that the intersection of a range and a d-dimensional cube can be computed in constant time (depending on dimension). For convex ranges, we tighten this to O(log n + (1/ε)^{d-1}) time. We also present a lower bound for approximate range searching based on partition trees of Ω(log n + (1/ε)^{d-1}), which implies optimality for convex ranges (assuming fixed dimensions). Finally, we give empirical evidence showing that allowing small relative errors can significantly improve query execution times.
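The fuzzy-range semantics can be made concrete with a small checker (a hypothetical helper, not from the paper): for a disk query, any count between the population of the ε-shrunken disk and that of the ε-expanded disk is a legal approximate answer.

```python
import math

def approx_count_bounds(points, center, radius, eps):
    """For a disk query of diameter w = 2*radius, return (lo, hi): any count
    k with lo <= k <= hi is a valid answer under fuzzy-range semantics,
    because points within distance eps*w of the boundary may or may not
    be counted."""
    w = 2 * radius
    lo = sum(1 for p in points if math.dist(p, center) <= radius - eps * w)
    hi = sum(1 for p in points if math.dist(p, center) <= radius + eps * w)
    return lo, hi
```

A real data structure (the paper uses a tree-based decomposition of space) exploits this slack by classifying whole cells near the boundary in one step instead of testing each point.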

135 citations


Proceedings ArticleDOI
01 Sep 1995
TL;DR: This work proposes a hierarchy of detail levels for a polyhedral terrain (or triangulated irregular network) that allows this: given a viewpoint, it is possible to select the appropriate level of detail for each part of the terrain in such a way that the parts still fit together continuously.
Abstract: In many applications it is important that one can view a scene at different levels of detail. A prime example is flight simulation: a high level of detail is needed when flying low, whereas a low level of detail suffices when flying high. More precisely, one would like to visualize the part of the scene that is close at a high level of detail, and the part that is far away at a low level of detail. We propose a hierarchy of detail levels for a polyhedral terrain (or, triangulated irregular network) that allows this: given a viewpoint, it is possible to select the appropriate level of detail for each part of the terrain in such a way that the parts still fit together continuously. The main advantage of our structure is that it uses the Delaunay triangulation at each level, so that triangles with very small angles are avoided. This is the first method that uses the Delaunay triangulation and still allows different levels to be combined into a single representation.

121 citations


Proceedings ArticleDOI
01 Sep 1995
TL;DR: Improved time bounds are obtained for other problems, including levels in arrangements and linear programming with few violated constraints, along with an algorithm that computes the vertices of all the convex layers of P in \(O(n^{2-\gamma})\) time for any constant \(\gamma\).
Abstract: We use known data structures for ray-shooting and linear-programming queries to derive new output-sensitive results on convex hulls, extreme points, and related problems. We show that the f-face convex hull of an n-point set P in a fixed dimension d ≥ 2 can be constructed in \(O(n\log f+(nf)^{1-1/(\lfloor d/2\rfloor+1)}\log^{O(1)}n)\) time; this is optimal if \(f=O(n^{1/\lfloor d/2\rfloor}/\log^{K}n)\) for some sufficiently large constant K. We also show that the h extreme points of P can be computed in \(O(n\log^{O(1)}h+(nh)^{1-1/(\lfloor d/2\rfloor+1)}\log^{O(1)}n)\) time. These results are then applied to produce an algorithm that computes the vertices of all the convex layers of P in \(O(n^{2-\gamma})\) time for any constant \(\gamma<2/(\lfloor d/2\rfloor^{2}+1)\). Finally, we obtain improved time bounds for other problems, including levels in arrangements and linear programming with few violated constraints. In all of our algorithms the input is assumed to be in general position.
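For contrast with the bounds above, the simplest output-sensitive hull algorithm is the O(nh) Jarvis march; a planar sketch (illustrating output-sensitivity only, not the paper's method):

```python
def gift_wrap_hull(points):
    """Jarvis march: planar convex hull in O(n*h) time, where h is the
    number of hull vertices -- the running time depends on the output
    size, which is the idea the paper's stronger bounds refine."""
    def orient(p, q, r):
        return (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])

    start = min(points)            # lexicographically smallest point is on the hull
    hull, p = [], start
    while True:
        hull.append(p)
        q = points[0] if points[0] != p else points[1]
        for r in points:
            if r == p or r == q:
                continue
            turn = orient(p, q, r)
            # Pick r if it is more clockwise than q, or collinear but farther.
            if turn < 0 or (turn == 0 and
                            (r[0] - p[0]) ** 2 + (r[1] - p[1]) ** 2 >
                            (q[0] - p[0]) ** 2 + (q[1] - p[1]) ** 2):
                q = r
        p = q
        if p == start:
            break
    return hull
```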

117 citations


Proceedings ArticleDOI
01 Sep 1995
TL;DR: It is observed that in practice one often need not partition the polyhedron itself but only its boundary, in other words, it often suffices to decompose a polyhedral surface into a small number of convex patches.
Abstract: This paper addresses the problem of decomposing a complex polyhedral surface into a small number of “convex” patches (i.e., boundary parts of convex polyhedra). The corresponding optimization problem is shown to be NP-complete and an experimental search for good heuristics is undertaken.

116 citations


Proceedings ArticleDOI
01 Sep 1995
TL;DR: The combinatorial complexity of the vertical decomposition of the ${\le}k$-level of an arrangement of $n$ bivariate algebraic functions of constant maximum degree is shown to be $O(k^{3+\varepsilon}\psi(n/k))$, which implies the existence of shallow cuttings of small size in arrangements of bivariate algebraic functions.
Abstract: Let ${\cal F}$ be a collection of $n$ bivariate algebraic functions of constant maximum degree. We show that the combinatorial complexity of the vertical decomposition of the ${\le}k$-level of the arrangement ${\cal A}({\cal F})$ is $O(k^{3+\varepsilon}\psi({n/k}))$, for any $\varepsilon>0$, where $\psi (r)$ is the maximum complexity of the lower envelope of a subset of at most $r$ functions of ${\cal F}$. This bound is nearly optimal in the worst case, and implies the existence of shallow cuttings of small size in arrangements of bivariate algebraic functions. We also present numerous applications of these results, including: (i) data structures for several generalized three-dimensional range searching problems; (ii) dynamic data structures for planar nearest- and farthest-neighbor searching under various fairly general distance functions; (iii) an improved (near-quadratic) algorithm for minimum-weight bipartite Euclidean matching in the plane; and (iv) efficient algorithms for certain geometric optimization problems in static and dynamic settings.

97 citations


Proceedings ArticleDOI
01 Sep 1995
TL;DR: It is shown that the k free bitangents of a collection of n pairwise disjoint convex plane sets can be computed in time O(k + n log n) and O(n) working space.
Abstract: We show that the k free bitangents of a collection of n pairwise disjoint convex plane sets can be computed in time O(k + n log n) and O(n) working space. The algorithm uses only one advanced data structure, namely a splittable queue. We introduce (weakly) greedy pseudo-triangulations, whose combinatorial properties are crucial for our method.

83 citations


Proceedings ArticleDOI
01 Sep 1995
TL;DR: This paper shows how a given set of curves can be refined such that the resulting curves define a “well-behaved” Voronoi diagram, and gives a randomized incremental algorithm to compute this diagram.
Abstract: Voronoi diagrams of curved objects can show certain phenomena that are often considered artifacts: The Voronoi diagram is not connected; there are pairs of objects whose bisector is a closed curve or even a two-dimensional object; there are Voronoi edges between different parts of the same site (so-called self-Voronoi-edges); these self-Voronoi-edges may end at seemingly arbitrary points not on a site, and, in the case of a circular site, even degenerate to a single isolated point. We give a systematic study of these phenomena, characterizing their differential-geometric and topological properties. We show how a given set of curves can be refined such that the resulting curves define a “well-behaved” Voronoi diagram. We also give a randomized incremental algorithm to compute this diagram. The expected running time of this algorithm is O(n log n).

Proceedings ArticleDOI
01 Sep 1995
TL;DR: An algorithm with an O(log m) approximation bound is provided for RBSP, where m < n is the minimum number of sides.

Proceedings ArticleDOI
01 Sep 1995
TL;DR: An algorithm which computes the overlay of two simply connected planar subdivisions Πb and Πg in O(n + k) time, where n denotes the total number of edges of Πb and Πg and k the number of intersections between blue and green edges.
Abstract: We present an algorithm which computes the overlay of two simply connected planar subdivisions Πb and Πg; we assume that Πb (resp. Πg) and all its components are colored blue (resp. green). The algorithm runs in O(n + k) time and space, where n denotes the total number of edges of Πb and Πg and k the number of intersections between blue and green edges.

Proceedings ArticleDOI
01 Sep 1995
TL;DR: Almost all geometric algorithms are based on the Real RAM model, but implementors often simply replace the exact real arithmetic of this model with fixed-precision arithmetic, thereby making correct algorithms incorrect and preventing application areas from making use of the rich literature of geometric algorithms developed in computational geometry.

Proceedings ArticleDOI
01 Sep 1995
TL;DR: This work describes a robust, dynamic algorithm to compute the arrangement of a set of line segments in the plane, and its implementation that marries the robustness of the Greene and Hobby algorithms with Mulmuley’s dynamic algorithm in a way that preserves the desirable properties of each.
Abstract: We describe a robust, dynamic algorithm to compute the arrangement of a set of line segments in the plane, and its implementation. The algorithm is robust because, following Greene and Hobby, it rounds the endpoints and intersections of all line segments to representable points, but in a way that is globally topologically consistent. The algorithm is dynamic because, following Mulmuley, it uses a randomized hierarchy of vertical cell decompositions to make locating points, and inserting and deleting line segments, efficient. Our algorithm is novel because it marries the robustness of the Greene and Hobby algorithms with Mulmuley's dynamic algorithm in a way that preserves the desirable properties of each.

Proceedings ArticleDOI
01 Sep 1995
TL;DR: The paper bounds the combinatorial complexity of the Voronoi diagram of a set of points under certain polyhedral distance functions to be \(\Theta(n^2)\) .
Abstract: The paper bounds the combinatorial complexity of the Voronoi diagram of a set of points under certain polyhedral distance functions. Specifically, if S is a set of n points in general position in \(R^d\), the maximum complexity of its Voronoi diagram under the \(L_\infty\) metric, and also under a simplicial distance function, are both shown to be \(\Theta(n^{\lceil d/2 \rceil})\). The upper bound for the case of the \(L_\infty\) metric follows from a new upper bound, also proved in this paper, on the maximum complexity of the union of n axis-parallel hypercubes in \(R^d\). This complexity is \(\Theta(n^{\left\lceil d/2 \right\rceil})\) for d ≥ 1, and it improves to \(\Theta(n^{\left\lfloor d/2 \right\rfloor})\) for d ≥ 2 if all the hypercubes have the same size. Under the \(L_1\) metric, the maximum complexity of the Voronoi diagram of a set of n points in general position in \(R^3\) is shown to be \(\Theta(n^2)\). We also show that the general position assumption is essential, and give examples where the complexity of the diagram increases significantly when the points are in degenerate configurations. (This increase does not occur with an appropriate modification of the diagram definition.) Finally, on-line algorithms are proposed for computing the Voronoi diagram of n points in \(R^d\) under a simplicial or \(L_\infty\) distance function. Their expected randomized complexities are \(O(n \log n + n ^{\left\lceil d/2 \right\rceil})\) for simplicial diagrams and \(O(n ^{\left\lceil d/2 \right\rceil} \log ^{d-1} n)\) for \(L_\infty\)-diagrams.

Proceedings ArticleDOI
01 Sep 1995
TL;DR: An accurate analysis of the number of cells visited in nearest-neighbor searching by the bucketing and k-d tree algorithms is provided, and empirical evidence is presented showing that the analysis applies even in low dimensions.
Abstract: Given n data points in d-dimensional space, nearest neighbor searching involves determining the nearest of these data points to a given query point. Most average-case analyses of nearest neighbor searching algorithms are made under the simplifying assumption that d is fixed and that n is so large relative to d that boundary effects can be ignored. This means that for any query point the statistical distribution of the data points surrounding it is independent of the location of the query point. However, in many applications of nearest neighbor searching (such as data compression by vector quantization) this assumption is not met, since the number of data points n grows roughly as 2^d. Largely for this reason, the actual performances of many nearest neighbor algorithms tend to be much better than their theoretical analyses would suggest. We present evidence of why this is the case. We provide an accurate analysis of the number of cells visited in nearest neighbor searching by the bucketing and k-d tree algorithms. We assume m^d points uniformly distributed in dimension d, where m is a fixed integer ≥ 2. Further, we assume that distances are measured in the L1 metric. Our analysis is tight in the limit as d approaches infinity. Empirical evidence is presented showing that the analysis applies even in low dimensions.
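A minimal k-d tree with the pruning rule whose visited-cell count the paper analyses (a sketch only: it uses the Euclidean metric, whereas the paper measures distances in the L1 metric):

```python
import math

def build_kdtree(pts, depth=0):
    """Simple k-d tree: split on the median, cycling through coordinate axes."""
    if not pts:
        return None
    axis = depth % len(pts[0])
    pts = sorted(pts, key=lambda p: p[axis])
    mid = len(pts) // 2
    return (pts[mid], axis,
            build_kdtree(pts[:mid], depth + 1),
            build_kdtree(pts[mid + 1:], depth + 1))

def nearest(node, q, best=None):
    """Branch-and-bound descent; a subtree is pruned when its splitting plane
    is farther from q than the best distance found so far. The number of
    nodes this recursion touches is the 'cells visited' quantity the paper
    analyses for bucketing and k-d tree searching."""
    if node is None:
        return best
    point, axis, left, right = node
    if best is None or math.dist(point, q) < math.dist(best, q):
        best = point
    near, far = (left, right) if q[axis] <= point[axis] else (right, left)
    best = nearest(near, q, best)
    if abs(q[axis] - point[axis]) < math.dist(best, q):
        best = nearest(far, q, best)       # boundary effect: must cross the plane
    return best
```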

Proceedings ArticleDOI
01 Sep 1995
Abstract: We present a competitive strategy for walking into the kernel of an initially unknown star-shaped polygon. From an arbitrary start point, s, within the polygon, our strategy finds a path to the closest kernel point, k, whose length does not exceed 5.3331... times the distance from s to k. This is complemented by a general lower bound of √2. Our analysis relies on a result about a new and interesting class of curves which are self-approaching in the following sense: for any three consecutive points a, b, c on the curve, the point b is closer to c than a is to c. We show a tight upper bound of 5.3331... for the length of a self-approaching curve over the distance between its endpoints.
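The self-approaching property is easy to state in code; a discrete sketch that checks vertex triples of a polyline (the paper treats smooth curves, so checking only vertices is an approximation of the definition):

```python
import math

def is_self_approaching(polyline):
    """Check the defining property on a polygonal chain: for any three
    points a, b, c in order along the curve, |bc| <= |ac|. This discrete
    sketch tests only vertex triples, which is necessary but not quite
    sufficient for the continuous definition."""
    n = len(polyline)
    for i in range(n):
        for j in range(i + 1, n):
            for k in range(j + 1, n):
                a, b, c = polyline[i], polyline[j], polyline[k]
                if math.dist(b, c) > math.dist(a, c) + 1e-12:
                    return False
    return True
```

Intuitively, the curve never moves away from any point it will still visit, which is what bounds its total length against the straight-line distance between its endpoints.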

Proceedings ArticleDOI
01 Sep 1995
TL;DR: This paper presents an algorithm with running time O(n^{8/5+ε}) for the case of translational movements and running time O(n^{5/3+ε}) for rotational movements, where ε is an arbitrary positive constant.
Abstract: In this paper we consider the following problem: given two general polyhedra of complexity n, one of which is moving translationally or rotating about a fixed axis, determine the first collision (if any) between them. We present an algorithm with running time O(n^{8/5+ε}) for the case of translational movements and running time O(n^{5/3+ε}) for rotational movements, where ε is an arbitrary positive constant. This is the first known algorithm with sub-quadratic running time.

Proceedings ArticleDOI
01 Sep 1995
TL;DR: The main contribution of this paper is the presentation of a heuristic approach that uses A's result in order to guarantee the same optimal running-time efficiency, a method which is new as far as the authors know.
Abstract: The lettering of maps is a classical problem of cartography that consists of placing names, symbols, or other data near to specified sites on a map. Certain design rules have to be obeyed. A practically interesting special case, the Map Labeling Problem, consists of placing axis-parallel rectangular labels of common size so that one of the corners of each label is its site, no two labels overlap, and the labels are of maximum size in order to have legible inscriptions. The problem is NP-hard; it is even NP-hard to approximate the solution with a quality guarantee better than 50 percent. There is an approximation algorithm A with a quality guarantee of 50 percent and running time O(n log n), so A is the best possible algorithm from a theoretical point of view. This is even true for the running time, since there is a lower bound of Ω(n log n) on the running time of any such approximation algorithm. Unfortunately, A is useless in practice, as it typically produces results that are intolerably far off the maximum size. The main contribution of this paper is the presentation of a heuristic approach that has A's advantages while avoiding its disadvantages: 1. It uses A's result in order to guarantee the same optimal running-time efficiency, a method which is new as far as we know. 2. Its practical results are close to the optimum. The practical quality is analysed by comparing our results to the exact optimum where this is known, and to lower and upper bounds on the optimum otherwise. The sample data consists of three different classes of random problems and a selection of problems arising in the production of groundwater-quality maps by the authorities of the City of München.
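The placement constraint of the Map Labeling Problem can be sketched with a naive greedy placer (an illustration of the model only, not the paper's heuristic or the approximation algorithm A):

```python
def place_labels(sites, size):
    """Greedy sketch of the Map Labeling setting: each site gets an
    axis-parallel square label of side `size` with one corner at the site;
    the four corner positions are tried in a fixed order and the first one
    that does not overlap an already placed label is kept.
    Returns a dict mapping each labeled site to its rectangle (x1, y1, x2, y2)."""
    def overlaps(r1, r2):
        return not (r1[2] <= r2[0] or r2[2] <= r1[0] or
                    r1[3] <= r2[1] or r2[3] <= r1[1])

    placed = {}
    for (x, y) in sites:
        for rect in ((x, y, x + size, y + size),          # site at lower-left corner
                     (x - size, y, x, y + size),          # lower-right
                     (x, y - size, x + size, y),          # upper-left
                     (x - size, y - size, x, y)):         # upper-right
            if all(not overlaps(rect, r) for r in placed.values()):
                placed[(x, y)] = rect
                break
    return placed
```

Maximizing the common label size subject to this constraint is what makes the problem NP-hard; the greedy placer above only checks feasibility for one fixed size.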

Proceedings ArticleDOI
01 Sep 1995
TL;DR: Geomview displays objects in three-space and lets you move them around, view them from different angles, and adjust other parameters such as lighting.
Abstract: Geomview displays objects in three-space and lets you move them around, view them from different angles, and adjust other parameters such as lighting. It is interactive, easy to use, and interfaces well with other software.

Proceedings ArticleDOI
László Lovász
01 Sep 1995
TL;DR: About 40 years ago, Conway conjectured that the number of edges of a thrackle cannot exceed the number of its vertices; it is shown here that a thrackle has at most twice as many edges as vertices.
Abstract: A thrackle is a graph drawn in the plane so that its edges are represented by Jordan arcs and any two distinct arcs either meet at exactly one common vertex or cross at exactly one point interior to both arcs. About 40 years ago, J. H. Conway conjectured that the number of edges of a thrackle cannot exceed the number of its vertices. We show that a thrackle has at most twice as many edges as vertices. Some related problems and generalizations are also considered.

Proceedings ArticleDOI
01 Sep 1995
TL;DR: An extensive experimental study comparing three general-purpose graph drawing algorithms that take as input general graphs and construct orthogonal grid drawings, which are widely used in software and database visualization applications.
Ashim Garg, Giuseppe Liotta, Emanuele Tassinari
Abstract: In this paper we present an extensive experimental study comparing three general-purpose graph drawing algorithms. The three algorithms take as input general graphs (with no restrictions whatsoever on the connectivity, planarity, etc.) and construct orthogonal grid drawings, which are widely used in software and database visualization applications. The test data (available by anonymous ftp) are 11,582 graphs, ranging from 10 to 100 vertices, which have been generated from a core set of 112 graphs used in “real-life” software engineering and database applications. The experiments provide a detailed quantitative evaluation of the performance of the three algorithms, and show that they exhibit trade-offs between “aesthetic” properties (e.g., crossings, bends, edge length) and running time. The observed practical behavior of the algorithms is consistent with their theoretical properties.

Proceedings ArticleDOI
01 Sep 1995
TL;DR: This work shows how to compute the smallest-area parallelogram enclosing a convex n-gon in the plane in linear time, and describes an application of this result in digital image processing.
Abstract: We show how to compute the smallest-area parallelogram enclosing a convex n-gon in the plane in linear time, and we describe an application of this result in digital image processing. Related work has been done on finding a minimal enclosing triangle, see e.g. [OAMB86], a minimal enclosing rectangle [FS75], a minimal enclosing k-gon [ACY85], and a minimal enclosing k-gon that has sides of equal lengths or a fixed angle sequence [DA84]. Note that whereas e.g. a rectangle would be contained in the latter class of polygons, our problem is different since the angles of the desired enclosing parallelogram are not given in advance. Nevertheless, our method clearly borrows from the techniques developed in the computational geometry literature, and our contribution is to show how these methods can help to obtain the result as requested by the application. In fact, we learned that the linear-time algorithm has been previously published in a Russian journal [Vai90]. There are two key facts which lead to the algorithm. First, let us consider the edges e1, e2, e3 and e4 of an enclosing parallelogram (in counterclockwise order), and let l1, l2, l3 and l4, respectively, be their supporting lines. Then there is an optimal enclosing parallelogram which has at least one of the edges e1 and e3 flush with an edge of the polygon.
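The flush-edge observation suggests a brute-force baseline: try pairs of polygon edge directions as the parallelogram's side directions and keep the smallest area. Since only one side is guaranteed flush, this pair-enumeration sketch is a heuristic, not the paper's linear-time algorithm.

```python
import math
from itertools import combinations

def min_parallelogram_area(poly):
    """Heuristic: smallest area over all enclosing parallelograms whose two
    side directions are BOTH polygon edge directions. Two strips of widths
    w1, w2 whose side directions meet at angle t intersect in a
    parallelogram of area w1 * w2 / sin(t)."""
    n = len(poly)
    dirs = []
    for i in range(n):
        dx = poly[(i + 1) % n][0] - poly[i][0]
        dy = poly[(i + 1) % n][1] - poly[i][1]
        L = math.hypot(dx, dy)
        dirs.append((dx / L, dy / L))

    def width(normal):
        # Extent of the polygon's vertices along a unit normal direction.
        dots = [p[0] * normal[0] + p[1] * normal[1] for p in poly]
        return max(dots) - min(dots)

    best = math.inf
    for u, v in combinations(dirs, 2):
        sin_t = abs(u[0] * v[1] - u[1] * v[0])
        if sin_t < 1e-12:                 # parallel directions: no parallelogram
            continue
        area = width((-u[1], u[0])) * width((-v[1], v[0])) / sin_t
        best = min(best, area)
    return best
```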

Proceedings ArticleDOI
01 Sep 1995
TL;DR: A method to evaluate signs of 2×2 and 3×3 determinants with b-bit integer entries using b- and (b+1)-bit arithmetic respectively, which is typically half the number of bits usually required.
Abstract: We propose a method to evaluate signs of 2×2 and 3×3 determinants with b-bit integer entries using b- and (b+1)-bit arithmetic respectively, that is typically half the number of bits usually required. Algorithms of this kind are very relevant to computational geometry, since most of the numerical aspects of geometric applications are reducible to evaluations of determinants. Therefore such algorithms provide a practical approach to robustness. The algorithm has been implemented, and experimental results show that it slows down the computing time by only a small factor with respect to (error-prone) floating-point calculation, and compares favorably with other exact methods.
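For comparison with the paper's reduced-precision method, the exact baseline is trivial in Python, whose integers are arbitrary-precision (the paper's contribution is doing this within roughly b bits of hardware arithmetic):

```python
def sign_det2(a, b, c, d):
    """Exact sign of the 2x2 determinant |a b; c d| for integer entries."""
    det = a * d - b * c
    return (det > 0) - (det < 0)

def sign_det3(m):
    """Exact sign of a 3x3 integer determinant by cofactor expansion.
    This is the predicate behind orientation and side-of-line tests."""
    det = (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
           - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
           + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
    return (det > 0) - (det < 0)
```

Only the sign matters for robustness: geometric predicates branch on it, so any method that gets the sign right, however few bits it uses, makes the enclosing algorithm exact.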

Proceedings ArticleDOI
01 Sep 1995
TL;DR: An algorithm is described that constructs homeomorphisms with prescribed area distortion; such homeomorphisms can be used to generate cartograms, which are geographic maps purposely distorted so their area distribution reflects a variable different from area.
Abstract: A homeomorphism from R^2 to itself distorts metric quantities, such as distance and area. We describe an algorithm that constructs homeomorphisms with prescribed area distortion. Such homeomorphisms can be used to generate cartograms, which are geographic maps purposely distorted so their area distribution reflects a variable different from area, for example population density. The algorithm generates the homeomorphism through a sequence of local piecewise linear homeomorphic changes. Sample results produced by the preliminary implementation of the method are included.

Proceedings ArticleDOI
01 Sep 1995
TL;DR: A general technique that yields faster randomized algorithms for solving a number of geometric optimization problems, including computing the width of a point set in 3-space, and computing the minimum-width annulus enclosing a set ofn points in the plane.
Abstract: In this paper we first prove the following combinatorial bound, concerning the complexity of the vertical decomposition of the minimization diagram of trivariate functions: Let \(\mathcal{F}\) be a collection of n totally or partially defined algebraic trivariate functions of constant maximum degree, with the additional property that, for a given pair of functions \(f, f'\in\mathcal{F}\), the surface f(x, y, z) = f′(x, y, z) is xy-monotone (actually, we need a somewhat weaker property). We show that the vertical decomposition of the minimization diagram of \(\mathcal{F}\) consists of \(O(n^{3+\varepsilon})\) cells (each of constant description complexity), for any ε > 0. In the second part of the paper, we present a general technique that yields faster randomized algorithms for solving a number of geometric optimization problems, including (i) computing the width of a point set in 3-space, (ii) computing the minimum-width annulus enclosing a set of n points in the plane, and (iii) computing the “biggest stick” inside a simple polygon in the plane. Using the above result on vertical decompositions, we show that the expected running time of all three algorithms is \(O(n^{3/2+\varepsilon})\), for any ε > 0. Our algorithm improves and simplifies previous solutions of all three problems.

Proceedings ArticleDOI
Hisao Tamaki, Takeshi Tokuyama
01 Sep 1995
TL;DR: This work investigates how to cut pseudo-parabolas into the minimum number of curve segments so that each pair of segments intersect at most once, giving an O(n^{23/12}) bound on the complexity of a level of pseudo-parabolas and an O(n^{11/6}) bound on the complexity of a combinatorially concave chain of pseudo-parabolas.
Abstract: Let Γ be a collection of unbounded x-monotone Jordan arcs intersecting each other at most twice, which we call pseudo-parabolas, since two axis-parallel parabolas intersect at most twice. We investigate how to cut pseudo-parabolas into the minimum number of curve segments so that each pair of segments intersect at most once. We give an Ω(n^{4/3}) lower bound and O(n^{5/3}) upper bound. We give the same bounds for an arrangement of circles. Applying the upper bound, we give an O(n^{23/12}) bound on the complexity of a level of pseudo-parabolas, and an O(n^{11/6}) bound on the complexity of a combinatorially concave chain of pseudo-parabolas. We also give some upper bounds on the number of transitions of the minimum-weight matroid base when the weight of each element changes as a quadratic function of a single parameter.

Proceedings ArticleDOI
01 Sep 1995
TL;DR: This work improves and generalizes the previously best-known results on computing rectilinear shortest paths among weighted polygonal obstacles, and applies the techniques to processing two-point L1 shortest obstacle-avoiding path queries among arbitrary (i.e., not necessarily rectilinear) polygonal obstacles in the plane.
Abstract: We study the problems of processing single-source and two-point shortest path queries among weighted polygonal obstacles in the rectilinear plane. For the single-source case, we construct a data structure in O(n log n) time and O(n log n) space, where n is the number of obstacle vertices; this data structure enables us to report the length of a shortest path between the source and any query point in O(log n) time, and an actual shortest path in O(log n + k) time, where k is the number of edges on the output path. For the two-point case, we construct a data structure in O(n^2 log n) time and space; this data structure enables us to report the length of a shortest path between two arbitrary query points in O(log n) time, and an actual shortest path in O(log n + k) time. Our work improves and generalizes the previously best-known results on computing rectilinear shortest paths among weighted polygonal obstacles. We also apply our techniques to processing two-point L1 shortest obstacle-avoiding path queries among arbitrary (i.e., not necessarily rectilinear) polygonal obstacles in the plane. No algorithm for processing two-point shortest path queries among weighted obstacles was previously known.

Proceedings ArticleDOI
01 Sep 1995
TL;DR: In this paper, the authors consider the planning problem of finding pivot grasps given a polyhedral part shape, coefficient of friction, and stable configurations as input, and give an O(m^2 n log n) algorithm to generate the m × m matrix of pivot grasps for a part with n faces and m stable configurations.
Abstract: To rapidly feed industrial parts on an assembly line, Carlisle et al. (1994) proposed a flexible part feeding system that drops parts on a flat conveyor belt, determines the pose of parts with a vision system and manipulates them into a desired pose. A robot arm with 4 DOF is capable of moving parts through 6 DOF when equipped with a passive pivoting axis between the parallel jaws of its gripper. We refer to these actions as pivot grasps. This paper considers the planning problem. Given a polyhedral part shape, coefficient of friction and a pair of stable configurations as input, find pairs of grasp points that will cause the part to pivot from one stable configuration to the other. For some transitions, pivot grasps may not exist. For a part with n faces and m stable configurations, we give an O(m^2 n log n) algorithm to generate the m × m matrix of pivot grasps. When the part is star-shaped, this reduces to O(m^2 n). We also study a generalization that considers "capture regions" around stable configurations. Both algorithms are complete in that they are guaranteed to find pivot grasps when they exist.