
Showing papers in "Computational Geometry: Theory and Applications in 2001"


Journal ArticleDOI
TL;DR: In this paper, the authors consider the problem of approximating the medial axis transform of a 3D object with a finite union of balls and define a new piecewise linear approximation to the object surface, which they call the power crust.
Abstract: The medial axis transform (or MAT) is a representation of an object as an infinite union of balls. We consider approximating the MAT of a three-dimensional object, and its complement, with a finite union of balls. Using this approximate MAT we define a new piecewise-linear approximation to the object surface, which we call the power crust. We assume that we are given as input a sufficiently dense sample of points from the object surface. We select a subset of the Voronoi balls of the sample, the polar balls, as the union of balls representation. We bound the geometric error of the union, and of the corresponding power crust, and show that both representations are topologically correct as well. Thus, our results provide a new algorithm for surface reconstruction from sample points. By construction, the power crust is always the boundary of a polyhedral solid, so we avoid the polygonization, hole-filling or manifold extraction steps used in previous algorithms. The union of balls representation and the power crust have corresponding piecewise-linear dual representations, which in some sense approximate the medial axis. We show a geometric relationship between these duals and the medial axis by proving that, as the sampling density goes to infinity, the set of poles, the centers of the polar balls, converges to the medial axis.

570 citations


Journal ArticleDOI
TL;DR: Vroni is a topology-oriented algorithm for the computation of Voronoi diagrams of points and line segments in the two-dimensional Euclidean space that is completely reliable and fast in practice and has been successfully tested within and integrated into several industrial software packages.
Abstract: We discuss the design and implementation of a topology-oriented algorithm for the computation of Voronoi diagrams of points and line segments in the two-dimensional Euclidean space. The main focus of our work was on designing and engineering an algorithm that is completely reliable and fast in practice. The algorithm was implemented in ANSI C, using standard floating-point arithmetic. In addition to Sugihara and Iri's topology-oriented approach, it is based on a very careful implementation of the numerical computations required, an automatic relaxation of epsilon thresholds, and a multi-level recovery process combined with a "desperate mode". The resulting code, named Vroni, was tested extensively on real-world data and turned out to be reliable. CPU-time statistics document that it is always faster than other popular Voronoi codes. In our computing environment, Vroni needs about 0.01 n log₂ n milliseconds to compute the Voronoi diagram of n line segments, and this formula holds for a wide variety of synthetic and real-world data. In particular, its CPU-time consumption is hardly affected by the actual distribution of the input data. Vroni also features a function for computing offset curves, and it has been successfully tested within and integrated into several industrial software packages.

314 citations


Journal ArticleDOI
TL;DR: It is shown by mathematical means that both concepts for solving the point in polygon problem for arbitrary polygons are very closely related, thereby developing a first version of an algorithm for determining the winding number.
Abstract: A detailed discussion of the point in polygon problem for arbitrary polygons is given. Two concepts for solving this problem are known in the literature: the even–odd rule and the winding number, the former leading to ray-crossing algorithms, the latter to angle summation algorithms. First we show by mathematical means that both concepts are very closely related, thereby developing a first version of an algorithm for determining the winding number. Then we examine how to accelerate this algorithm and how to handle special cases. Furthermore we compare these algorithms with those found in the literature and discuss the results.
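The two concepts can be sketched side by side. Below is a minimal Python illustration of the ray-crossing (even–odd) and angle-summation (winding number) approaches; it is not the paper's accelerated algorithm, and the function names are ours:

```python
import math

def crossing_number(pt, poly):
    """Even-odd rule: cast a rightward ray from pt and count edge crossings."""
    x, y = pt
    inside = False
    n = len(poly)
    for i in range(n):
        (x1, y1), (x2, y2) = poly[i], poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):  # edge straddles the horizontal line through pt
            # x-coordinate where the edge crosses that horizontal line
            xc = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if xc > x:
                inside = not inside
    return inside

def winding_number(pt, poly):
    """Sum the signed angles subtended by the edges; nonzero means inside."""
    x, y = pt
    total = 0.0
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        # signed angle of the edge as seen from pt
        total += math.atan2((x1 - x) * (y2 - y) - (x2 - x) * (y1 - y),
                            (x1 - x) * (x2 - x) + (y1 - y) * (y2 - y))
    return round(total / (2 * math.pi))
```

For simple polygons the two tests agree; they differ only on self-intersecting polygons, which is exactly the relationship the paper examines.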

309 citations


Journal ArticleDOI
TL;DR: This paper considers the following basic problem in polyhedral computation: Given two polyhedra in R^d, P and Q, decide whether their union is convex, and, if so, compute it, and shows that the first two problems are polynomially solvable and the third problem is strongly-polynomiallysolvable.
Abstract: In this paper we consider the following basic problem in polyhedral computation: Given two polyhedra in R^d, P and Q, decide whether their union is convex, and, if so, compute it. We consider the three natural specializations of the problem: (1) when the polyhedra are given by halfspaces (H-polyhedra), (2) when they are given by vertices and extreme rays (V-polyhedra), and (3) when both H- and V-polyhedral representations are available. Both the bounded (polytopes) and the unbounded case are considered. We show that the first two problems are polynomially solvable, and that the third problem is strongly-polynomially solvable.
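The paper's algorithms are exact and polynomial; as a naive plane-case illustration only, the sketch below checks a necessary condition for two convex polygons given by vertices (V-polygons): if the union is convex, every sampled point on a segment between a vertex of P and a vertex of Q must lie in P or Q. Names and tolerances are ours:

```python
def point_in_convex(pt, poly):
    """pt inside a convex polygon given in CCW order (boundary counts)."""
    x, y = pt
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        # pt must be on the left of (or on) every directed edge
        if (x2 - x1) * (y - y1) - (y2 - y1) * (x - x1) < -1e-12:
            return False
    return True

def union_looks_convex(P, Q, samples=20):
    """Sampling-based necessary-condition check for convexity of P ∪ Q."""
    in_union = lambda pt: point_in_convex(pt, P) or point_in_convex(pt, Q)
    for (ax, ay) in P:
        for (bx, by) in Q:
            for k in range(1, samples):
                t = k / samples
                if not in_union((ax + t * (bx - ax), ay + t * (by - ay))):
                    return False
    return True
```

A False answer certifies non-convexity; a True answer is only evidence, which is precisely why the exact H-/V-polyhedral algorithms of the paper are needed.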

104 citations


Journal ArticleDOI
TL;DR: It is shown that the natural neighbor coordinates of a point X belonging to S tend to behave as a local system of coordinates on the surface when the density of points increases.
Abstract: Natural neighbor coordinates and natural neighbor interpolation have been introduced by Sibson for interpolating multivariate scattered data. In this paper, we consider the case where the data points belong to a smooth surface S, i.e., a (d-1)-manifold of R^d. We show that the natural neighbor coordinates of a point X belonging to S tend to behave as a local system of coordinates on the surface when the density of points increases. Our result does not assume any knowledge about the ordering, connectivity or topology of the data points or of the surface. An important ingredient in our proof is the fact that a subset of the vertices of the Voronoi diagram of the data points converges towards the medial axis of S when the sampling density increases.

93 citations


Journal ArticleDOI
TL;DR: A linear-time connectivity compression scheme built upon Edgebreaker is constructed which explicitly takes advantage of regularity, and it is proved rigorously that, for sufficiently large and regular meshes, it produces encodings not longer than 0.811 bits per triangle.
Abstract: One of the most natural measures of regularity of a triangular mesh homeomorphic to the two-dimensional sphere is the fraction of its vertices having degree 6. We construct a linear-time connectivity compression scheme built upon Edgebreaker which explicitly takes advantage of regularity, and prove rigorously that, for sufficiently large and regular meshes, it produces encodings not longer than 0.811 bits per triangle: 50% below the information-theoretic lower bound for the class of all meshes. Our method uses predictive techniques enabled by the Spirale Reversi decoding algorithm.
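The regularity measure itself is straightforward to compute from a triangle list. A small sketch (ours, not from the paper), using the fact that in a closed triangle mesh the degree of a vertex equals the number of triangles incident to it:

```python
from collections import defaultdict

def degree_six_fraction(triangles):
    """Fraction of vertices with degree 6 in a closed triangle mesh,
    where a vertex's degree equals its number of incident triangles."""
    deg = defaultdict(int)
    for tri in triangles:
        for v in tri:
            deg[v] += 1
    return sum(1 for d in deg.values() if d == 6) / len(deg)
```

For a perfectly regular (torus-like) mesh the fraction is 1; for an octahedron, where every vertex has degree 4, it is 0.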

77 citations


Journal ArticleDOI
TL;DR: A heuristic to reconstruct nonsmooth curves with multiple components is presented and the effectiveness of the algorithm is revealed in contrast with the other competing algorithms.
Abstract: In this paper we present a heuristic to reconstruct nonsmooth curves with multiple components. Experiments with several input data sets reveal the effectiveness of the algorithm in contrast with other competing algorithms.

65 citations


Journal ArticleDOI
TL;DR: An algorithm which locally builds an iso-surface with two significant properties: it is a 2-manifold and the surface is a subcomplex of the Delaunay tetrahedrization of its vertices, and a graph is associated to each skeleton for two purposes: the amount of noise can be identified and quantified on the graph and the selection of the graph subpart that does not correspond to noise induces a filtering on the skeleton.
Abstract: Iso-surfaces are routinely used for the visualization of volumetric structures. Further processing (such as quantitative analysis, morphometric measurements, shape description) requires volume representations. The skeleton representation matches these requirements by providing a concise description of the object. This paper has two parts. First, we exhibit an algorithm which locally builds an iso-surface with two significant properties: it is a 2-manifold and the surface is a subcomplex of the Delaunay tetrahedrization of its vertices. Secondly, because of the latter property, the skeleton can in turn be computed from the dual of the Delaunay tetrahedrization of the iso-surface vertices. The skeleton representation, although informative, is very sensitive to noise. This is why we associate a graph to each skeleton for two purposes: (i) the amount of noise can be identified and quantified on the graph and (ii) the selection of the graph subpart that does not correspond to noise induces a filtering on the skeleton. Finally, we show some results on synthetic and medical images. An application, measuring the thickness of objects (heart ventricles, bone samples) is also presented.

59 citations


Journal ArticleDOI
TL;DR: Regular interpolants are introduced, which are polygonal approximations of curves and surfaces satisfying a new regularity condition that can be checked on the basis of the samples alone and can be turned into a provably correct curve and surface reconstruction algorithm.
Abstract: In this paper, we address the problem of curve and surface reconstruction from sets of points. We introduce regular interpolants, which are polygonal approximations of curves and surfaces satisfying a new regularity condition. This new condition, which is an extension of the popular notion of r-sampling to the practical case of discrete shapes, seems much more realistic than previously proposed conditions based on properties of the underlying continuous shapes. Indeed, contrary to previous sampling criteria, our regularity condition can be checked on the basis of the samples alone and can be turned into a provably correct curve and surface reconstruction algorithm. Our reconstruction methods can also be applied to non-regular and unorganized point sets, revealing a larger part of the inner structure of such point sets than past approaches. Several real-size reconstruction examples validate the new method.

53 citations


Journal ArticleDOI
TL;DR: In this paper, an algorithm for computing the exact interior medial axis of a union of balls in R^d is presented, which combines the simple characterization of this medial axis given by Attali and Montanvert with the combinatorial information provided by Edelsbrunner's α-shape.
Abstract: We present an algorithm for computing the exact interior medial axis of a union of balls in R^d. Our algorithm combines the simple characterization of this medial axis given by Attali and Montanvert with the combinatorial information provided by Edelsbrunner's α-shape. This leads to a simple algorithm, which we have implemented for d=3.

47 citations


Journal ArticleDOI
TL;DR: This paper estimates the maximum number of convex quadrilaterals in all partitions of a planar point set P into disjoint convex polygons.
Abstract: For a given planar point set P, consider a partition of P into disjoint convex polygons. In this paper, we estimate the maximum number of convex quadrilaterals in all partitions.

Journal ArticleDOI
TL;DR: In this paper, the authors consider three closely related optimization problems, arising from the graph drawing and the VLSI research areas and conjectured to be NP-hard, and prove that, in fact, they are NP-complete.
Abstract: We consider three closely related optimization problems, arising from the graph drawing and the VLSI research areas, and conjectured to be NP-hard, and we prove that, in fact, they are NP-complete. Starting from an orthogonal representation of a graph, i.e., a description of the shape of the edges that does not specify segment lengths or vertex positions, the three problems consist of providing an orthogonal grid drawing of it, while minimizing the area, the total edge length, or the maximum edge length, respectively. This result confirms a long surviving conjecture of NP-hardness, justifies the research about applying sophisticated, yet possibly time consuming, techniques to obtain optimally compacted orthogonal grid drawings, and discourages the quest for an optimally compacting polynomial-time algorithm.

Journal ArticleDOI
TL;DR: A new technique to prove lower bounds for geometric on-line searching problems based on the competitive ratio, that is, the ratio of the distance traveled by the searcher to the distance of the target.
Abstract: We present a new technique to prove lower bounds for geometric on-line searching problems. We assume that a target of unknown location is hidden somewhere in a known environment and a searcher is trying to find it. We are interested in lower bounds on the competitive ratio of the search strategy, that is, the ratio of the distance traveled by the searcher to the distance of the target. The technique we present is applicable to a number of problems, such as biased searching on m rays and on-line construction of on-line algorithms. For each problem we prove tight lower bounds.
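A classical target of such lower bounds is the doubling strategy for searching a line (two rays), whose competitive ratio is 9. The simulation below (our sketch, not the paper's proof technique) shows worst-case target placements, just beyond a turning point, approaching that bound:

```python
def doubling_search(target_ray, target_dist):
    """Doubling strategy on two rays: on step i walk to distance 2**i on
    ray i % 2 and return to the origin, until the target is reached.
    Returns the total distance traveled by the searcher."""
    walked = 0.0
    i = 0
    while True:
        ray, reach = i % 2, 2.0 ** i
        if ray == target_ray and reach >= target_dist:
            return walked + target_dist
        walked += 2 * reach  # out and back
        i += 1

# place the target just past a turning point on ray 0: the ratio tends to 9
worst = max(doubling_search(0, 2.0 ** j + 0.001) / (2.0 ** j + 0.001)
            for j in range(2, 20))
```

The supremum 9 of these ratios is exactly the kind of quantity the paper's lower-bound technique addresses.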

Journal ArticleDOI
TL;DR: It is shown that there exist configurations of points such that any matching with the above properties matches fewer than 98.95% of the points, and two new algorithms are presented which compute a large matching, with an improved guarantee in the number of matched points.
Abstract: Let S be a set with n=w+b points in general position in the plane, w of them white, and b of them black. We consider the problem of computing G(S), a largest non-crossing matching of pairs of points of the same color, using straight line segments. We present two new algorithms which compute a large matching, with an improved guarantee on the number of matched points. The first one runs in O(n^2) time and finds a matching of at least 85.71% of the points. The second algorithm runs in O(n log n) time and achieves a performance guarantee as close as we want to that of the first algorithm. On the other hand, we show that there exist configurations of points such that any matching with the above properties matches fewer than 98.95% of the points. We further extend these results to point sets with a prescribed ratio of the sizes of the two color classes. In the end, we discuss the more general problem when the points are colored with any fixed number of colors.
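To make the quantity G(S) concrete, here is an exponential brute-force computation of it for tiny point sets (our illustration only; the paper's algorithms are polynomial and approximate):

```python
from itertools import combinations

def segs_cross(p, q, r, s):
    """Proper crossing of segments pq and rs (general position assumed)."""
    def orient(a, b, c):
        v = (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])
        return (v > 0) - (v < 0)
    return orient(p, q, r) != orient(p, q, s) and orient(r, s, p) != orient(r, s, q)

def largest_matching(white, black):
    """Number of points in a largest non-crossing same-color matching."""
    pairs = [c for grp in (white, black) for c in combinations(grp, 2)]
    upper = len(white) // 2 + len(black) // 2
    for k in range(min(upper, len(pairs)), 0, -1):  # try larger matchings first
        for M in combinations(pairs, k):
            pts = [p for e in M for p in e]
            if len(set(pts)) != 2 * k:
                continue  # two edges share an endpoint
            if any(segs_cross(*e, *f) for e, f in combinations(M, 2)):
                continue  # two edges cross
            return 2 * k
    return 0
```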

Journal ArticleDOI
TL;DR: A new class of polygons that models wood, stone, glass and ceramic shapes that can be cut with a table saw, lapidary trim saw, or other circular saw is introduced and it is proved that a polygon has this property precisely if it does not have two adjacent reflex vertices.
Abstract: We introduce and characterize a new class of polygons that models wood, stone, glass and ceramic shapes that can be cut with a table saw, lapidary trim saw, or other circular saw. In this model, a circular saw is a line segment (in projection) that can move freely in empty space, but can only cut straight into a portion of material. Once a region of material is separated from the rest, it can be picked up and removed to allow the saw to move more freely. A polygon is called cuttable by a circular saw if it can be cut out of a convex shape of material by a sufficiently small circular saw. We prove that a polygon has this property precisely if it does not have two adjacent reflex vertices. As a consequence, every polygon can be modified slightly to make it cuttable by a circular saw. We give a linear-time algorithm to cut out such a polygon using a number of cuts and total length of cuts that are at most 2.5 times the optimal. We also study polygons cuttable by an arbitrarily large circular saw, which is equivalent to a ray, and give two linear-time recognition algorithms.
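The characterization (no two adjacent reflex vertices) is a simple local test. A sketch of that test, assuming the polygon is given in counterclockwise order (function name is ours):

```python
def cuttable_by_circular_saw(poly):
    """A simple polygon (vertices in CCW order) is cuttable by a small
    circular saw iff it has no two adjacent reflex vertices."""
    n = len(poly)
    def reflex(i):
        ax, ay = poly[i - 1]
        bx, by = poly[i]
        cx, cy = poly[(i + 1) % n]
        # negative cross product means a right turn, i.e. a reflex vertex
        return (bx - ax) * (cy - by) - (by - ay) * (cx - bx) < 0
    return not any(reflex(i) and reflex((i + 1) % n) for i in range(n))
```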

Journal ArticleDOI
TL;DR: In this paper, the authors introduce nonlinear Theil-Sen and repeated median (RM) variants for estimating the center and radius of a circular arc, and for estimating center and horizontal and vertical radii of an axis-aligned ellipse.
Abstract: Fitting two-dimensional conic sections (e.g., circular and elliptical arcs) to a finite collection of points in the plane is an important problem in statistical estimation and has significant industrial applications. Recently there has been a great deal of interest in robust estimators, because of their lack of sensitivity to outlying data points. The basic measure of the robustness of an estimator is its breakdown point, that is, the fraction (up to 50%) of outlying data points that can corrupt the estimator. In this paper we introduce nonlinear Theil–Sen and repeated median (RM) variants for estimating the center and radius of a circular arc, and for estimating the center and horizontal and vertical radii of an axis-aligned ellipse. The circular arc estimators have breakdown points of ≈ 21% and 50%, respectively, and the ellipse estimators have breakdown points of ≈ 16% and 50%, respectively. We present randomized algorithms for these estimators, whose expected running times are O(n^2 log n) for the circular case and O(n^3 log n) for the elliptical case. All algorithms use O(n) space in the worst case.
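The Theil–Sen idea for circles can be sketched directly: a circle is determined by three points, so take the coordinatewise median of the circumcenters over all triples. This is a brute-force O(n^3) illustration of the estimator's robustness, not the paper's randomized O(n^2 log n) algorithm:

```python
import math
from itertools import combinations
from statistics import median

def circumcenter(a, b, c):
    """Center of the circle through three points, or None if near-collinear."""
    (ax, ay), (bx, by), (cx, cy) = a, b, c
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    if abs(d) < 1e-12:
        return None
    ux = ((ax * ax + ay * ay) * (by - cy) + (bx * bx + by * by) * (cy - ay)
          + (cx * cx + cy * cy) * (ay - by)) / d
    uy = ((ax * ax + ay * ay) * (cx - bx) + (bx * bx + by * by) * (ax - cx)
          + (cx * cx + cy * cy) * (bx - ax)) / d
    return ux, uy

def theil_sen_circle(points):
    """Median of circumcenters over all point triples, then median radius."""
    centers = [c for c in (circumcenter(*t) for t in combinations(points, 3)) if c]
    cx = median(c[0] for c in centers)
    cy = median(c[1] for c in centers)
    r = median(math.hypot(x - cx, y - cy) for x, y in points)
    return (cx, cy), r
```

A single gross outlier corrupts only the triples containing it, so the medians still recover the underlying circle.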

Journal ArticleDOI
TL;DR: The issues involved with the automatic placement of edge labels are considered and a model for the ELP problem is presented and the computational complexity of the ELPs is investigated and it is proved that various forms of theELP problem are NP-hard.
Abstract: Let G(V,E) be a graph, and let Γ be a drawing of G in the plane. We consider the problem of positioning text or symbol labels corresponding to edges of G , called the Edge Label Placement (ELP) problem. The goal is to convey the information associated with each edge (that a label for that edge describes) in the best possible way, by positioning each label in the most appropriate place. In this paper we consider the issues involved with the automatic placement of edge labels and present a model for the ELP problem. Also, we investigate the computational complexity of the ELP problem and prove that various forms of the ELP problem are NP-hard.

Journal ArticleDOI
TL;DR: It is shown that 9 is a lower bound on the competitive ratio for two large classes of strategies if m>=2, and that 1+2(k+1)^(k+1)/k^k, with k=⌈log m⌉, is a lower bound when the minimum distance to the target is not known, where log denotes the base-2 logarithm.
Abstract: We investigate parallel searching on m concurrent rays. We assume that a target t is located somewhere on one of the rays; we are given a group of m point robots each of which has to reach t. Furthermore, we assume that the robots have no way of communicating over distance. Given a strategy S, we are interested in the competitive ratio defined as the ratio of the time needed by the robots to reach t using S and the time needed to reach t if the location of t is known in advance. If a lower bound on the distance to the target is known, then there is a simple strategy which achieves a competitive ratio of 9, independent of m. We show that 9 is a lower bound on the competitive ratio for two large classes of strategies if m>=2. If the minimum distance to the target is not known in advance, we show a lower bound on the competitive ratio of 1+2(k+1)^(k+1)/k^k, where k=⌈log m⌉ and log is used to denote the base-2 logarithm. We also give a strategy that obtains this ratio.
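The stated bound is easy to evaluate numerically. A sketch, under our reading of the (garbled) condition in the abstract as k = ⌈log₂ m⌉; note that for m = 2 it reproduces the ratio 9 from the known-distance case:

```python
import math

def parallel_search_lower_bound(m):
    """Lower bound 1 + 2*(k+1)**(k+1) / k**k on the competitive ratio for
    m robots on m rays with unknown minimum target distance, with
    k = ceil(log2(m)) (our reading of the abstract's condition)."""
    k = math.ceil(math.log2(m))
    return 1 + 2 * (k + 1) ** (k + 1) / k ** k
```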

Journal ArticleDOI
TL;DR: In this paper, it was shown that in all dimensions d⩾4, every simple open polygonal chain and every tree can be straightened, and every simple closed polygonal chain can be convexified.
Abstract: We prove that, in all dimensions d⩾4, every simple open polygonal chain and every tree may be straightened, and every simple closed polygonal chain may be convexified. These reconfigurations can be achieved by algorithms that use polynomial time in the number of vertices, and result in a polynomial number of “moves”. These results contrast to those known for d=2, where trees can “lock”, and for d=3, where open and closed chains can lock.

Journal ArticleDOI
TL;DR: In this article, the problem of computing a minimal representation of the convex hull of the union of k H-polytopes in a two-dimensional space is addressed by projecting the polytopes onto the 2D space and solving a linear program.
Abstract: In this paper we address the problem of computing a minimal representation of the convex hull of the union of k H- polytopes in . Our method applies the reverse search algorithm to a shelling ordering of the facets of the convex hull. Efficient wrapping is done by projecting the polytopes onto the two-dimensional space and solving a linear program. The resulting algorithm is polynomial in the sizes of input and output under the general position assumption.

Journal ArticleDOI
TL;DR: In this article, it was shown that there is a direct motion from any convex polygon to any polygon with the same counterclockwise sequence of edge lengths that preserves the lengths of the edges, and keeps the polygon convex at all times.
Abstract: We prove that there is a motion from any convex polygon to any convex polygon with the same counterclockwise sequence of edge lengths, that preserves the lengths of the edges, and keeps the polygon convex at all times. Furthermore, the motion is “direct” (avoiding any intermediate canonical configuration like a subdivided triangle) in the sense that each angle changes monotonically throughout the motion. In contrast, we show that it is impossible to achieve such a result with each vertex-to-vertex distance changing monotonically. We also demonstrate that there is a motion between any two such polygons using three-dimensional moves known as pivots, although the complexity of the motion cannot be bounded as a function of the number of vertices in the polygon.

Journal ArticleDOI
TL;DR: In this paper, properties of shortest paths in LR-visibility polygons are derived, and a characterization of LR-visibility polygons in terms of shortest paths between vertices is presented, which suggests a simple algorithm for the corresponding recognition problem.
Abstract: A simple polygon P is said to be an LR-visibility polygon if there exist two points s and t on the boundary of P such that every point of the clockwise boundary of P from s to t (denoted as L) is visible from some point of the counterclockwise boundary of P from s to t (denoted as R) and vice versa. In this paper we derive properties of shortest paths in LR-visibility polygons and present a characterization of LR-visibility polygons in terms of shortest paths between vertices. This characterization suggests a simple algorithm for the following recognition problem: given a polygon P with distinguished vertices s and t, determine whether P is an LR-visibility polygon with respect to s and t. Our algorithm for this problem checks LR-visibility by traversing shortest path trees rooted at s and t in DFS manner, and it runs in linear time. Using our characterization of LR-visibility polygons, we show that the shortest path tree rooted at a vertex or a boundary point can be computed in linear time for a class of polygons which contains LR-visibility polygons as a subclass. As a result, this algorithm can be used as a procedure for computing the shortest path tree in our recognition algorithm as well as in the recognition algorithm of Das, Heffernan and Narasimhan. Our algorithm computes the shortest path tree by scanning the boundary of the given polygon, and it does not require triangulation as a preprocessing step.

Journal ArticleDOI
TL;DR: The technical part of this paper focuses on the second task: the specification of a deformation mixing two or more shapes in continuously changing proportions.
Abstract: The construction of shape spaces is studied from a mathematical and a computational viewpoint. A program is outlined reducing the problem to four tasks: the representation of geometry, the canonical deformation of geometry, the measuring of distance in shape space, and the selection of base shapes. The technical part of this paper focuses on the second task: the specification of a deformation mixing two or more shapes in continuously changing proportions.

Journal ArticleDOI
TL;DR: In this article, a triangulation path enumerator for a given point set with respect to a given segment is presented, which takes O(t·n^3·log n) time and O(n) space.
Abstract: Recently, Aichholzer introduced the remarkable concept of the so-called triangulation path (of a triangulation with respect to a segment), which has the potential of providing efficient counting of triangulations of a point set, and efficient representations of all such triangulations. Experiments support such evidence, although – apart from the basic uniqueness properties – little has been proved so far. In this paper we provide an algorithm which enumerates all triangulation paths (of all triangulations of a given point set with respect to a given segment) in O(t·n^3·log n) time and O(n) space, where n denotes the number of points and t is the number of triangulation paths. For the algorithm we introduce the notion of flips between such paths, and define a structure on all paths such that the reverse search approach can be applied. We also refute Aichholzer's conjecture that points in convex position maximize the number of such paths: there are configurations that allow Ω(2^(2n−Θ(log n))) paths.

Journal ArticleDOI
TL;DR: A software tool for planning, analyzing and visualizing deformations between two shapes in R^2 that is generated automatically without any user intervention or specification of feature correspondences.
Abstract: Shape deformation refers to the continuous change of one geometric object to another. We develop a software tool for planning, analyzing and visualizing deformations between two shapes in R^2. The deformation is generated automatically without any user intervention or specification of feature correspondences. A unique property of the tool is the explicit availability of a two-dimensional shape space, which can be used for designing the deformation either automatically by following constraints and objectives or manually by drawing deformation paths.

Journal ArticleDOI
TL;DR: A lower bound on β (β = (1/6)√(2√3 + 45) ≈ 1.16) is proved such that if β is less than this value, the β-skeleton of S may not always be a subgraph of the minimum weight triangulation (MWT) of S.
Abstract: Given a set S of points in the Euclidean plane, the β-skeleton (β > 1) of S is a set of edges with endpoints in S, where each edge e in the set satisfies the empty-disks condition, i.e., no element of S lies inside the two disks of diameter β|e| that pass through both endpoints of e. In this paper, we prove a lower bound on β (β = (1/6)√(2√3 + 45) ≈ 1.16) such that if β is less than this value, the β-skeleton of S may not always be a subgraph of the minimum weight triangulation (MWT) of S. Thus, we disprove Keil's conjecture that, for β = 2√3/3, the β-skeleton is a subgraph of the MWT (Keil, 1994).
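The empty-disks condition for β > 1 translates directly into code. A brute-force sketch (ours) that computes the circle-based β-skeleton of a small point set; the two candidate disk centers lie on the perpendicular bisector of each edge:

```python
import math
from itertools import combinations

def beta_skeleton(points, beta):
    """Circle-based beta-skeleton for beta > 1: keep edge (p, q) iff both
    disks of diameter beta*|pq| through p and q contain no other point."""
    edges = []
    for p, q in combinations(points, 2):
        e = math.dist(p, q)
        r = beta * e / 2                       # disk radius
        mx, my = (p[0] + q[0]) / 2, (p[1] + q[1]) / 2
        nx, ny = -(q[1] - p[1]) / e, (q[0] - p[0]) / e   # unit normal to pq
        h = math.sqrt(r * r - (e / 2) ** 2)    # center offset along the normal
        centers = [(mx + h * nx, my + h * ny), (mx - h * nx, my - h * ny)]
        if all(math.dist(c, s) >= r - 1e-12
               for c in centers for s in points if s not in (p, q)):
            edges.append((p, q))
    return edges
```

For the corners of a unit square with β slightly above 1, the four sides survive and the diagonals are rejected, since each corner lies inside a diagonal's disks.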

Journal ArticleDOI
TL;DR: An approximate algorithm for computing an approximate solution with error ratio e=1−α (called an α-sensitive solution) is proposed, whose time complexity is O((t/e)^(d−1) n), which is very efficient if the threshold ratio t=n/k is small.
Abstract: We present a unified scheme for detecting digital components of various planar curves in a binary edge image. A digital component of a curve is the set of input edge points from each of which the horizontal or vertical distance to the curve is at most 0.5. Our algorithm outputs all curve components containing at least k points (k is a given threshold) in O(n^d) time (if d⩾2) and linear space, where n is the number of points, and d is a measure that reflects the complexity of a family of curves; for example, d=2, 3 and 5 for lines, circles and ellipses, respectively. For most of the popular families of curves, our only primitive operations are algebraic operations of bounded degree and comparisons. We also propose an approximate algorithm for computing an approximate solution with error ratio e=1−α (called an α-sensitive solution), whose time complexity is O((t/e)^(d−1) n), which is very efficient if the threshold ratio t=n/k is small.


Journal ArticleDOI
TL;DR: This paper shows that plane spanning trees can be enumerated efficiently in the order of their total length, which makes it possible to efficiently find the k best plane trees, or all those shorter than a given bound.
Abstract: A spanning tree constructed of straight line segments over a set of points in the Euclidean plane is called “non-crossing” or “plane tree”, if no two segments intersect. Imposing the additional constraint of non-crossing segments makes many combinatorial geometric problems harder. In the case of plane spanning trees, however, we show that they can be enumerated efficiently in the order of their total length. This makes it possible to efficiently find the k best plane trees, or all those shorter than a given bound.
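For very small point sets the objects being enumerated can be generated by brute force, which makes the definition concrete (our illustration; the paper's contribution is doing this enumeration efficiently in length order):

```python
import math
from itertools import combinations

def segs_cross(p, q, r, s):
    """Proper crossing of segments pq and rs (general position assumed)."""
    def orient(a, b, c):
        v = (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])
        return (v > 0) - (v < 0)
    return orient(p, q, r) != orient(p, q, s) and orient(r, s, p) != orient(r, s, q)

def plane_spanning_trees(points):
    """All non-crossing (plane) spanning trees, sorted by total edge length."""
    n = len(points)
    result = []
    for sub in combinations(combinations(range(n), 2), n - 1):
        # acyclicity (hence spanning, with n-1 edges) via union-find
        parent = list(range(n))
        def find(x):
            while parent[x] != x:
                x = parent[x]
            return x
        acyclic = True
        for u, v in sub:
            ru, rv = find(u), find(v)
            if ru == rv:
                acyclic = False
                break
            parent[ru] = rv
        if not acyclic:
            continue
        if any(not set(e) & set(f) and
               segs_cross(points[e[0]], points[e[1]], points[f[0]], points[f[1]])
               for e, f in combinations(sub, 2)):
            continue  # two disjoint edges cross
        length = sum(math.dist(points[u], points[v]) for u, v in sub)
        result.append((length, sub))
    result.sort()
    return result
```

On four points in convex position, 4 of the 16 spanning trees use both (crossing) diagonals, leaving 12 plane trees; the shortest consists of three sides.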

Journal ArticleDOI
TL;DR: In this paper, the authors present a direct approach for routing a shortest rectilinear path between two points among a set of rectinear obstacles in a two-layer interconnection model that is used for VLSI routing applications.
Abstract: In this paper, we present a direct approach for routing a shortest rectilinear path between two points among a set of rectilinear obstacles in a two-layer interconnection model that is used for VLSI routing applications. The previously best known direct approach for this problem takes O(n log^2 n) time and O(n log n) space, where n is the total number of obstacle edges. By using integer data structures and an implicit graph representation scheme (i.e., a generalization of the distance table method), we improve the time bound to O(n log^(3/2) n) while still maintaining the O(n log n) space bound. Compared with the indirect approach for this problem, our algorithm is simpler to implement and is probably faster for a quite large range of input sizes.
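The underlying problem is easy to state on a grid: a breadth-first search gives the rectilinear shortest-path length in O(grid size) time. This is a naive baseline sketch (ours), not the paper's O(n log^(3/2) n) algorithm, which works on obstacle edges rather than grid cells:

```python
from collections import deque

def rectilinear_shortest_path(start, goal, obstacles, size):
    """BFS on a unit grid: length of a shortest rectilinear path from start
    to goal avoiding obstacle cells, or -1 if the goal is unreachable."""
    W, H = size
    blocked = set(obstacles)
    dist = {start: 0}
    q = deque([start])
    while q:
        x, y = q.popleft()
        if (x, y) == goal:
            return dist[(x, y)]
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nx < W and 0 <= ny < H \
                    and (nx, ny) not in blocked and (nx, ny) not in dist:
                dist[(nx, ny)] = dist[(x, y)] + 1
                q.append((nx, ny))
    return -1
```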