
Showing papers on "Computational geometry published in 2010"




Book ChapterDOI
01 Jun 2010
TL;DR: Appearing as a chapter of the volume "Boolean Methods and Models," this chapter describes the construction of Boolean models and gives examples showing how to model Boolean functions using LaSalle's inequality.
Abstract: To appear as a chapter of the volume "Boolean Methods and Models".

468 citations


Journal ArticleDOI
TL;DR: The aim of this work is to identify those relative sensor-target geometries which minimize a measure of the uncertainty ellipse, and to show that an optimal sensor-target configuration is not, in general, unique.

393 citations


Journal ArticleDOI
TL;DR: This paper presents approximation algorithms for the minimum vertex and edge guard problems for polygons, with or without holes, having a total of n vertices; both achieve an approximation ratio of O(log n) times the optimal solution.

138 citations


Journal ArticleDOI
TL;DR: The proposed method combines topological and geometric techniques to provide interactive visualizations of discretely sampled high-dimensional scalar fields and is illustrated on several synthetic examples of two dimensional functions.
Abstract: An important goal of scientific data analysis is to understand the behavior of a system or process based on a sample of the system. In many instances it is possible to observe both input parameters and system outputs, and characterize the system as a high-dimensional function. Such data sets arise, for instance, in large numerical simulations, as energy landscapes in optimization problems, or in the analysis of image data relating to biological or medical parameters. This paper proposes an approach to analyzing and visualizing such data sets. The proposed method combines topological and geometric techniques to provide interactive visualizations of discretely sampled high-dimensional scalar fields. The method relies on a segmentation of the parameter space using an approximate Morse-Smale complex on the cloud of point samples. For each crystal of the Morse-Smale complex, a regression of the system parameters with respect to the output yields a curve in the parameter space. The result is a simplified geometric representation of the Morse-Smale complex in the high-dimensional input domain. Finally, the geometric representation is embedded in 2D, using dimension reduction, to provide a visualization platform. The geometric properties of the regression curves enable the visualization of additional information about each crystal, such as local and global shape, width, length, and sampling densities. The method is illustrated on several synthetic examples of two-dimensional functions. Two use cases, using data sets from the UCI machine learning repository, demonstrate the utility of the proposed approach on real data. Finally, in collaboration with domain experts, the proposed method is applied to two scientific challenges: the analysis of climate-simulation parameters and their relationship to predicted global energy flux, and the analysis of chemical-species concentrations in a combustion simulation and their interaction with temperature.

111 citations


Journal ArticleDOI
TL;DR: This is the first method that guarantees polylogarithmic update and query cost for arbitrary sequences of insertions and deletions, and it improves the previous O(n^ε)-time method of Agarwal and Matoušek from a decade ago.
Abstract: We present a fully dynamic randomized data structure that can answer queries about the convex hull of a set of n points in three dimensions, where insertions take O(log^3 n) expected amortized time, deletions take O(log^6 n) expected amortized time, and extreme-point queries take O(log^2 n) worst-case time. This is the first method that guarantees polylogarithmic update and query cost for arbitrary sequences of insertions and deletions, and it improves the previous O(n^ε)-time method of Agarwal and Matoušek from a decade ago. As a consequence, we obtain similar results for nearest neighbor queries in two dimensions and improved results for numerous fundamental geometric problems (such as levels in three dimensions and dynamic Euclidean minimum spanning trees in the plane).
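As a point of reference for these bounds, a static baseline (not the paper's dynamic structure) computes the hull with Andrew's monotone chain and answers an extreme-point query by an O(h) scan over hull vertices; a sketch in Python:

```python
def convex_hull(points):
    """Andrew's monotone chain; returns 2D hull vertices in CCW order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])

    lower, upper = [], []
    for p in pts:                      # build lower hull left to right
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):            # build upper hull right to left
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def extreme_point(hull, direction):
    """Vertex maximizing the dot product with `direction` (O(h) scan)."""
    return max(hull, key=lambda p: p[0]*direction[0] + p[1]*direction[1])
```

The dynamic structure in the paper replaces this linear scan with polylogarithmic-time queries under insertions and deletions.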

90 citations


Journal ArticleDOI
TL;DR: This work introduces a rigorous and practical approach for automatic N-RoSy field design on arbitrary surfaces with user-defined field topologies and proposes to simplify the Riemannian metric to make it flat almost everywhere.
Abstract: Designing rotational symmetry fields on surfaces is an important task for a wide range of graphics applications. This work introduces a rigorous and practical approach for automatic N-RoSy field design on arbitrary surfaces with user-defined field topologies. The user has full control of the number, positions, and indexes of the singularities (as long as they are compatible with necessary global constraints) and the turning numbers of the loops, and is able to edit the field interactively. We formulate N-RoSy field construction as designing a Riemannian metric such that the holonomy along any loop is compatible with the local symmetry of N-RoSy fields. We prove the compatibility condition using discrete parallel transport. The complexity of N-RoSy field design is caused by curvatures. In our work, we propose to simplify the Riemannian metric to make it flat almost everywhere. This approach greatly simplifies the process and improves the flexibility such that it can design N-RoSy fields with a single singularity and mixed-RoSy fields. This approach can also be generalized to construct regular remeshings on surfaces. To demonstrate the effectiveness of our approach, we apply our design system to pen-and-ink sketching and geometry remeshing. Furthermore, based on our remeshing results with high global symmetry, we generate Celtic knots on surfaces directly.

74 citations


Journal ArticleDOI
TL;DR: In this paper, a surface remeshing technique based on harmonic maps is proposed, together with a cubic representation of the geometry based on curved PN triangles; it can recover high-quality meshes from both low-quality input STL triangulations and complex surfaces defined by many CAD patches.
Abstract: In this paper, we present an efficient and robust technique for surface remeshing based on harmonic maps. We show how to ensure a one-to-one mapping for the discrete harmonic map and introduce a cubic representation of the geometry based on curved PN triangles. Topological and geometrical limitations of harmonic maps are also put to the fore and discussed. We show that, with the proposed approach, we are able to recover high quality meshes from both low input STL triangulations and complex surfaces defined by many CAD patches. The overall procedure is implemented in the open-source mesh generator Gmsh.

67 citations


Journal ArticleDOI
TL;DR: In this article, the authors propose distance oracles for finding shortest paths and nearest neighbors in a spatial network, and show experimentally that the oracles are scalable and can be applied to large road networks.
Abstract: The popularity of location-based services and the need to do real-time processing on them has led to an interest in performing queries on transportation networks, such as finding shortest paths and finding nearest neighbors. The challenge here is that the efficient execution of spatial operations usually involves the computation of distance along a spatial network instead of "as the crow flies," which is not simple. Techniques are described that enable the determination of the network distance between any pair of points (i.e., vertices) with as little as O(n) space rather than having to store the n^2 distances between all pairs. This is done by being willing to expend a bit more time to achieve this goal, such as O(log n) instead of O(1), as well as by accepting an error ε in the accuracy of the distance that is provided. The strategy that is adopted reduces the space requirements and is based on the ability to identify groups of source and destination vertices for which the distance is approximately the same within some ε. The reductions are achieved by introducing a construct termed a distance oracle that yields an estimate of the network distance (termed the ε-approximate distance) between any two vertices in the spatial network. The distance oracle is obtained by showing how to adapt the well-separated pair technique from computational geometry to spatial networks. Initially, an ε-approximate distance oracle of size O(n/ε^2) is used that is capable of retrieving the approximate network distance in O(log n) time using a B-tree. The retrieval time can be theoretically reduced further to O(1) time by proposing another ε-approximate distance oracle of size O((n log n)/ε^2) that uses a hash table. Experimental results indicate that the proposed technique is scalable and can be applied to sufficiently large road networks.
For example, a 10-percent-approximate oracle (ε = 0.1) on a large network yielded an average error of 0.9 percent, with 90 percent of the answers having an error of 2 percent or less and an average retrieval time of 68 μs. The fact that the network distance can be approximated by one value is used to show how a number of spatial queries can be formulated using appropriate SQL constructs and a few built-in primitives. The result is that these operations can be executed on almost any modern database with no modifications, while taking advantage of the existing query optimizers and query processing strategies.
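As a toy illustration of the oracle idea only (the paper's actual construction uses well-separated pairs, not the fixed vertex grouping assumed here), one can precompute one representative network distance per pair of vertex groups and answer queries from a hash table in O(1):

```python
from collections import deque

def bfs_dist(adj, src):
    """Unit-weight network distances from src (stand-in for road lengths)."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

# Toy road network: a path 0-1-2-3-4-5, split into two vertex groups.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 5], 5: [4]}
group = {0: "A", 1: "A", 2: "A", 3: "B", 4: "B", 5: "B"}
reps = {"A": 1, "B": 4}   # one representative vertex per group

# Precompute one representative distance per ordered group pair.
oracle = {}
for ga, ra in reps.items():
    d = bfs_dist(adj, ra)
    for gb, rb in reps.items():
        oracle[(ga, gb)] = d[rb]

def approx_dist(u, v):
    """O(1) hash-table lookup of the representative network distance."""
    return oracle[(group[u], group[v])]
```

The approximation error depends entirely on how well the grouping separates sources from destinations, which is exactly what the well-separated pair decomposition controls.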

65 citations


Proceedings ArticleDOI
13 Jun 2010
TL;DR: This paper defines some similarity measures of the distributions based on an information geometry framework and shows how this conceptually simple approach can provide a satisfactory performance, comparable to the bag-of-keypoints for scene classification tasks.
Abstract: Local features provide powerful cues for generic image recognition. An image is represented by a “bag” of local features, which form a probabilistic distribution in the feature space. The problem is how to exploit the distributions efficiently. One of the most successful approaches is the bag-of-keypoints scheme, which can be interpreted as sparse sampling of high-level statistics, in the sense that it describes a complex structure of a local feature distribution using a relatively small number of parameters. In this paper, we propose the opposite approach, dense sampling of low-level statistics. A distribution is represented by a Gaussian in the entire feature space. We define similarity measures of the distributions based on an information geometry framework and show how this conceptually simple approach can provide satisfactory performance, comparable to bag-of-keypoints for scene classification tasks. Furthermore, because our method and bag-of-keypoints capture different statistical aspects of the data, we can further improve classification performance by combining both in kernels.
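One closed-form dissimilarity in this information-geometric spirit (not necessarily the exact measure used in the paper) is the KL divergence between Gaussians; a sketch for the diagonal-covariance case:

```python
import math

def kl_gaussian_diag(mu0, var0, mu1, var1):
    """KL(N0 || N1) for diagonal-covariance Gaussians, in closed form:
    0.5 * (tr(S1^-1 S0) + (mu1-mu0)^T S1^-1 (mu1-mu0) - k + ln det S1/det S0)."""
    k = len(mu0)
    trace = sum(v0 / v1 for v0, v1 in zip(var0, var1))
    maha = sum((m1 - m0) ** 2 / v1 for m0, m1, v1 in zip(mu0, mu1, var1))
    logdet = sum(math.log(v1 / v0) for v0, v1 in zip(var0, var1))
    return 0.5 * (trace + maha - k + logdet)
```

KL divergence is asymmetric; symmetrized variants (e.g., averaging both directions) are commonly plugged into kernels for classification.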

Journal ArticleDOI
TL;DR: A novel framework for studying partially observable Markov decision processes (POMDPs) with finite state, action, and observation sets and discounted rewards is presented; it is based on future-reward vectors associated with future policies and is more parsimonious than the traditional framework based on belief vectors.
Abstract: This paper presents a novel framework for studying partially observable Markov decision processes (POMDPs) with finite state, action, and observation sets and discounted rewards. The new framework is based solely on future-reward vectors associated with future policies, which is more parsimonious than the traditional framework based on belief vectors. It reveals the connection between the POMDP problem and two computational geometry problems, i.e., finding the vertices of a convex hull and finding the Minkowski sum of convex polytopes, which can help solve the POMDP problem more efficiently. The new framework can clarify some existing algorithms over both finite and infinite horizons and shed new light on them. It also facilitates the comparison of POMDPs with respect to their degree of observability, as a useful structural result.
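For intuition about the second geometric primitive mentioned, the Minkowski sum of two convex polygons can be computed, inefficiently but simply, as the convex hull of all pairwise vertex sums; a sketch:

```python
def cross(o, a, b):
    return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])

def convex_hull(points):
    """Andrew's monotone chain, CCW order, collinear points dropped."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def minkowski_sum(P, Q):
    """Hull of all pairwise vertex sums; O(|P||Q| log(|P||Q|))."""
    return convex_hull([(p[0] + q[0], p[1] + q[1]) for p in P for q in Q])
```

The classical linear-time algorithm instead merges the edges of the two polygons by angle; the hull-of-sums form is just the most transparent definition.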

Proceedings ArticleDOI
03 Dec 2010
TL;DR: This paper presents an alternative system that makes use of stereo vision and combines two complementary techniques: bag-of-words to detect loop closing candidate images, and conditional random fields to discard those which are not geometrically consistent.
Abstract: Place recognition is a challenging task in any SLAM system. Algorithms based on visual appearance are becoming popular to detect locations already visited, also known as loop closures, because cameras are easily available and provide rich scene detail. These algorithms typically result in pairs of images considered depicting the same location. To avoid mismatches, most of them rely on epipolar geometry to check spatial consistency. In this paper we present an alternative system that makes use of stereo vision and combines two complementary techniques: bag-of-words to detect loop closing candidate images, and conditional random fields to discard those which are not geometrically consistent. We evaluate this system in public indoor and outdoor datasets from the Rawseeds project, with hundred-metre long trajectories. Our system achieves more robust results than using spatial consistency based on epipolar geometry.

Journal ArticleDOI
TL;DR: The presented theory of the β-shape and the β-complex will be equally useful for diverse areas such as structural biology, computer graphics, geometric modelling, computational geometry, CAD, physics, and chemistry, where the core hurdle lies in determining the proximity among spherical particles.
Abstract: The proximity and topology among particles are often the most important factors for understanding the spatial structure of particles. Reasoning about the morphological structure of molecules and reconstructing a surface from a point set are examples where proximity among particles is important. Traditionally, the Voronoi diagram of points, the power diagram, the Delaunay triangulation, and the regular triangulation have been used for understanding proximity among particles. In this paper, we present the theory of the β-shape and the β-complex and the corresponding algorithms for reasoning about proximity among a set of spherical particles, both using the quasi-triangulation, which is the dual of the Voronoi diagram of spheres. Given the Voronoi diagram of spheres, we first transform the Voronoi diagram to the quasi-triangulation. Then, we compute intervals called β-intervals for the singular, regular, and interior states of each simplex in the quasi-triangulation. From the sorted set of simplexes, the β-shape and the β-complex corresponding to a particular value of β can be found efficiently. Given the Voronoi diagram of spheres, the quasi-triangulation can be obtained in O(m) time in the worst case, where m represents the number of simplexes in the quasi-triangulation. Then, the β-intervals for all simplexes in the quasi-triangulation can also be computed in O(m) time in the worst case. After sorting the simplexes using the lower-bound values of the β-intervals of each simplex in O(m log m) time, the β-shape and the β-complex can be computed in O(log m + k) time in the worst case by a binary search followed by a sequential search in the neighborhood, where k represents the number of simplexes in the β-shape or the β-complex.
The presented theory of the β-shape and the β-complex will be equally useful for diverse areas such as structural biology, computer graphics, geometric modelling, computational geometry, CAD, physics, and chemistry, where the core hurdle lies in determining the proximity among spherical particles.
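The final query step described above can be sketched as a binary search over simplices sorted by the lower bounds of their β-intervals (a schematic simplification with made-up interval data, not the authors' code):

```python
import bisect

# Each simplex carries a β-interval [lo, hi]; schematically, it belongs
# to the β-complex when lo <= β <= hi.
simplexes = [("s1", 0.2, 1.0), ("s2", 0.5, 0.9), ("s3", 0.7, 2.0), ("s4", 1.5, 3.0)]
simplexes.sort(key=lambda s: s[1])        # sort once by lower bound, O(m log m)
lows = [s[1] for s in simplexes]

def beta_complex(beta):
    """Binary search for the last lo <= beta, then filter by hi."""
    idx = bisect.bisect_right(lows, beta)  # O(log m)
    return [name for name, lo, hi in simplexes[:idx] if hi >= beta]
```

In the paper the trailing filter is a sequential search in the neighborhood of the found position, giving the stated O(log m + k) bound.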

Journal ArticleDOI
TL;DR: Two new algorithms for finding the 6 degrees of freedom of the motion are described and compared; one algorithm gives a linear solution and the other is a geometric algorithm that minimizes the maximum measurement error (the optimal L∞ solution).
Abstract: We investigate the problem of estimating the ego-motion of a multicamera rig from two positions of the rig. We describe and compare two new algorithms for finding the 6 degrees of freedom (3 for rotation and 3 for translation) of the motion. One algorithm gives a linear solution, and the other is a geometric algorithm that minimizes the maximum measurement error (the optimal L∞ solution). They are described in the context of the General Camera Model (GCM), and we pay particular attention to multicamera systems in which the cameras have nonoverlapping or minimally overlapping fields of view. Many nonlinear algorithms have been developed to solve the multicamera motion estimation problem. However, no linear solution or guaranteed optimal geometric solution has previously been proposed. We make two contributions: 1) a fast linear algebraic method using the GCM and 2) a guaranteed globally optimal algorithm based on the L∞ geometric error using the branch-and-bound technique. In deriving the linear method using the GCM, we give a detailed analysis of degeneracy of camera configurations. In finding the globally optimal solution, we apply a rotation space search technique recently proposed by Hartley and Kahl. Our experiments conducted on both synthetic and real data have shown excellent results.

Journal ArticleDOI
TL;DR: This work introduces as a foundational element the design of a container data structure that both provides concurrent addition and removal operations and is compact in memory, which makes it especially well-suited for storing large dynamic graphs such as Delaunay triangulations.
Abstract: Computers with multiple processor cores using shared memory are now ubiquitous. In this paper, we present several parallel geometric algorithms that specifically target this environment, with the goal of exploiting the additional computing power. The algorithms we describe are (a) 2-/3-dimensional spatial sorting of points, as is typically used for preprocessing before using incremental algorithms, (b) d-dimensional axis-aligned box intersection computation, and finally (c) 3D bulk insertion of points into Delaunay triangulations, which can be used for mesh generation algorithms, or simply for constructing 3D Delaunay triangulations. For the latter, we introduce as a foundational element the design of a container data structure that both provides concurrent addition and removal operations and is compact in memory. This makes it especially well-suited for storing large dynamic graphs such as Delaunay triangulations. We show experimental results for these algorithms, using our implementations based on the Computational Geometry Algorithms Library (CGAL). This work is a step towards what we hope will become a parallel mode for CGAL, where algorithms automatically use the available parallel resources without requiring significant user intervention.
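Spatial sorting as in (a) typically orders points along a space-filling curve; a minimal Morton (Z-order) sort for 2D integer points (illustrative only; CGAL's spatial sort actually uses a Hilbert curve with a multiscale strategy):

```python
def interleave(x, y, bits=16):
    """Morton code: interleave the bits of non-negative ints x and y."""
    code = 0
    for i in range(bits):
        code |= ((x >> i) & 1) << (2 * i)      # x bits go to even positions
        code |= ((y >> i) & 1) << (2 * i + 1)  # y bits go to odd positions
    return code

def spatial_sort(points):
    """Order points along the Z-order curve for cache-friendly insertion."""
    return sorted(points, key=lambda p: interleave(p[0], p[1]))
```

Feeding points to an incremental Delaunay construction in this order keeps consecutive insertions spatially close, which is what makes the preprocessing pay off.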

Journal ArticleDOI
TL;DR: This paper introduces and studies new optimization problems in the plane based on the bichromatic reverse nearest neighbor (BRNN) rule, and provides efficient algorithms to compute a new blue point under criteria such as: the number of associated red points is maximum (MAXCOV criterion); the maximum distance to the associated red points is minimum (MINMAX criterion).

Journal ArticleDOI
TL;DR: This paper surveys online algorithms in computational geometry that have been designed for mobile robots for searching a target and for exploring a region in the plane.

Journal ArticleDOI
TL;DR: The first (albeit small) improvement is given: the new algorithm runs in time n^(d/2) 2^(O(log* n)), where log* denotes the iterated logarithm; the time bound for the related problem of computing the depth in an arrangement of n boxes is also improved.
Abstract: Given n axis-parallel boxes in a fixed dimension d >= 3, how efficiently can we compute the volume of the union? This standard problem in computational geometry, commonly referred to as Klee's measure problem, can be solved in time O(n^(d/2) log n) by an algorithm of Overmars and Yap (FOCS 1988). We give the first (albeit small) improvement: our new algorithm runs in time n^(d/2) 2^(O(log* n)), where log* denotes the iterated logarithm. For the related problem of computing the depth in an arrangement of n boxes, we further improve the time bound to near O(n^(d/2) / log^(d/2-1) n), ignoring log log n factors. Other applications and lower-bound possibilities are discussed. The ideas behind the improved algorithms are simple.
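For d = 2, a simple coordinate-compression baseline computes the union area exactly in O(n^3) time (nowhere near the bounds above, but handy as a correctness check):

```python
def union_area(boxes):
    """Area of the union of axis-parallel rectangles (x1, y1, x2, y2)."""
    xs = sorted({x for b in boxes for x in (b[0], b[2])})
    ys = sorted({y for b in boxes for y in (b[1], b[3])})
    area = 0
    for i in range(len(xs) - 1):
        for j in range(len(ys) - 1):
            cx = (xs[i] + xs[i + 1]) / 2   # cell midpoint: covered iff
            cy = (ys[j] + ys[j + 1]) / 2   # the whole cell is covered
            if any(b[0] <= cx <= b[2] and b[1] <= cy <= b[3] for b in boxes):
                area += (xs[i + 1] - xs[i]) * (ys[j + 1] - ys[j])
    return area
```

Compression works because box edges partition the plane into O(n^2) cells, each entirely inside or outside the union; the hard part, which the paper addresses, is avoiding this blow-up in higher dimensions.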

Proceedings ArticleDOI
Qi Song, Xiaodong Wu, Yunlong Liu, Milan Sonka, Mona K. Garvin
13 Jun 2010
TL;DR: A novel method for globally optimal multi-surface searching with a shape prior represented by convex pairwise energies based on a 3-D graph-theoretic framework that provides more local and flexible control of the shape.
Abstract: Multiple surface searching with only image intensity information is a difficult job in the presence of high noise and weak edges. We present in this paper a novel method for globally optimal multi-surface searching with a shape prior represented by convex pairwise energies. A 3-D graph-theoretic framework is employed. An arc-weighted graph is constructed based on a shape model built from training datasets. A wide spectrum of constraints is then incorporated. The shape prior term penalizes the local topological change from the original shape model. The globally optimal solution for multiple surfaces can be obtained by computing a maximum flow in low-order polynomial time. Compared with other graph-based methods, our approach provides more local and flexible control of the shape. We also prove that our algorithm can handle the detection of multiple crossing surfaces with no shared voxels. Our method was applied to several application problems, including medical image segmentation, scenic image segmentation, and image resizing. Compared with results without using shape prior information, our improvement was quite impressive, demonstrating the promise of our method.

Proceedings ArticleDOI
Huijing Zhao, Yiming Liu, Xiaolong Zhu, Yipu Zhao, Hongbin Zha
03 May 2010
TL;DR: This research proposes a framework for simultaneous segmentation and classification of range images, where the classification of each segment is based on its geometric properties and the homogeneity of each segment is evaluated conditioned on each object class.
Abstract: It is now well established that a map of a complex environment containing low-level geometric primitives (such as laser points) can be generated by a robot with laser scanners. This research is motivated by the need to obtain semantic knowledge of a large urban outdoor environment after the robot explores it and generates a low-level sensing data set. An algorithm is developed with the data represented as a range image, where each pixel can be converted into a 3D coordinate. Using an existing segmentation method that models only geometric homogeneities, the data of a single object of complex geometry, such as a person, car, or tree, is partitioned into different segments. Such a segmentation result greatly restricts the capability of object recognition. This research proposes a framework for simultaneous segmentation and classification of range images, where the classification of each segment is based on its geometric properties and the homogeneity of each segment is evaluated conditioned on each object class. Experiments are presented using data from a large dynamic urban outdoor environment, and the performance of the algorithm is evaluated.

Proceedings ArticleDOI
21 Jul 2010
TL;DR: A new linear algebra package for computing Gaussian elimination of Gröbner basis matrices is presented, and the efficiency of the new software is demonstrated with computational results for well-known benchmarks as well as some crypto-challenges.
Abstract: Polynomial system solving is one of the important areas of Computer Algebra, with many applications in Robotics, Cryptology, Computational Geometry, etc. To this end, computing a Gröbner basis is often a crucial step. The most efficient algorithms [6, 7] for computing Gröbner bases [2] rely heavily on linear algebra techniques. In this paper, we present a new linear algebra package for computing Gaussian elimination of Gröbner basis matrices. The library is written in C and contains specific algorithms [11] to compute Gaussian elimination as well as specific internal representations of matrices (sparse triangular blocks, sparse rectangular blocks, and hybrid rectangular blocks). The efficiency of the new software is demonstrated by showing computational results for well-known benchmarks as well as some crypto-challenges. For instance, for a medium-size problem such as Katsura 15, it takes 849.7 sec on a PC with 8 cores to compute a DRL Gröbner basis modulo p.
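The core step, row reduction modulo a prime p, can be sketched densely in a few lines (the paper's contribution is precisely the sparse block representations this dense sketch ignores):

```python
def rref_mod_p(M, p):
    """Reduced row echelon form of integer matrix M over GF(p), p prime.
    Returns (rref, rank). Dense Gauss-Jordan, O(rows * cols^2)."""
    M = [row[:] for row in M]
    rows, cols = len(M), len(M[0])
    r = 0
    for c in range(cols):
        piv = next((i for i in range(r, rows) if M[i][c] % p), None)
        if piv is None:
            continue                          # no pivot in this column
        M[r], M[piv] = M[piv], M[r]
        inv = pow(M[r][c], p - 2, p)          # Fermat inverse since p is prime
        M[r] = [(x * inv) % p for x in M[r]]  # normalize pivot row
        for i in range(rows):
            if i != r and M[i][c] % p:
                f = M[i][c]
                M[i] = [(a - f * b) % p for a, b in zip(M[i], M[r])]
        r += 1
        if r == rows:
            break
    return M, r
```

In F4-style Gröbner basis computation, the matrices are huge but highly structured, which is why the specialized sparse and hybrid block formats described above matter.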

Journal ArticleDOI
TL;DR: The interaction between computational geometry and music yields new insights into the theories of rhythm, melody, and voice-leading, as well as new problems for research in several areas, ranging from mathematics and computer science to music theory, music perception, and musicology.
Abstract: Many problems concerning the theory and technology of rhythm, melody, and voice-leading are fundamentally geometric in nature. It is therefore not surprising that the field of computational geometry can contribute greatly to these problems. The interaction between computational geometry and music yields new insights into the theories of rhythm, melody, and voice-leading, as well as new problems for research in several areas, ranging from mathematics and computer science to music theory, music perception, and musicology. Recent results on the geometric and computational aspects of rhythm, melody, and voice-leading are reviewed, connections to established areas of computer science, mathematics, statistics, computational biology, and crystallography are pointed out, and new open problems are proposed.
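One emblematic construction at this geometry-music interface is the Euclidean rhythm, which distributes k onsets as evenly as possible among n pulses; a floor-function sketch, equivalent up to rotation to Bjorklund's algorithm:

```python
def euclidean_rhythm(k, n):
    """k onsets spread as evenly as possible over n pulses (assumes k <= n).
    Pulse i carries an onset exactly when floor((i+1)k/n) > floor(ik/n)."""
    return [(i + 1) * k // n - i * k // n for i in range(n)]
```

For k = 3, n = 8 this yields a rotation of the Cuban tresillo pattern.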

Journal ArticleDOI
TL;DR: The geodesic curvature flow equation on general smooth manifolds is presented based on an energy minimization of curves, discretized as dGCF, and applied to three problems: closed-curve evolution on manifolds, discrete scale-space construction, and edge detection of images painted on triangulated surfaces.
Abstract: Curvature flow (planar geometric heat flow) has been extensively applied to image processing, computer vision, and material science. Extending the numerical schemes and algorithms of this flow to surfaces is significant for corresponding motions of curves and images defined on surfaces. In this work, we are interested in the geodesic curvature flow over triangulated surfaces using a level set formulation. First, we present the geodesic curvature flow equation on general smooth manifolds based on an energy minimization of curves. The equation is then discretized by a semi-implicit finite volume method (FVM). For convenience of description, we refer to the discretized geodesic curvature flow as dGCF. The existence and uniqueness of dGCF are discussed. The regularization behavior of dGCF is also studied. Finally, we apply our dGCF to three problems: the closed-curve evolution on manifolds, the discrete scale-space construction, and the edge detection of images painted on triangulated surfaces. Our method works for compact triangular meshes of arbitrary geometry and topology, as long as there are no degenerate triangles. The implementation of the method is also simple.

Journal ArticleDOI
TL;DR: It is shown here how to preprocess a set of disjoint regions in the plane of total complexity n in O(n log n) time so that if one point per region is specified with precise coordinates, a triangulation of the points can be computed in linear time.
Abstract: Traditional algorithms in computational geometry assume that the input points are given precisely. In practice, data is usually imprecise, but information about the imprecision is often available. In this context, we investigate what the value of this information is. We show here how to preprocess a set of disjoint regions in the plane of total complexity n in O(n log n) time so that if one point per region is specified with precise coordinates, a triangulation of the points can be computed in linear time. In our solution, we solve another problem which we believe to be of independent interest. Given a triangulation with red and blue vertices, we show how to compute a triangulation of only the blue vertices in linear time.


Book ChapterDOI
13 Sep 2010
TL;DR: Core 2, a package designed for applications such as non-linear computational geometry, is described, along with how its design addresses key software issues such as modularity, extensibility, and efficiency in a setting that combines algebraic and transcendental elements.
Abstract: There is a growing interest in numeric-algebraic techniques in the computer algebra community, as such techniques can speed up many applications. This paper is concerned with one such approach, called Exact Numeric Computation (ENC). The ENC approach to algebraic number computation is based on iterative verified approximations, combined with constructive zero bounds. This paper describes Core 2, the latest version of the Core Library, a package designed for applications such as non-linear computational geometry. The adaptive complexity of ENC combined with filters makes such libraries practical. Core 2 smoothly integrates our algebraic ENC subsystem with transcendental functions with ε-accurate comparisons. This paper describes how the design of Core 2 addresses key software issues such as modularity, extensibility, and efficiency in a setting that combines algebraic and transcendental elements. Our redesign preserves the original goals of the Core Library, namely, to provide a simple and natural interface for ENC computation to support rapid prototyping and exploration. We present examples, experimental results, and timings for our new system, released as Core Library 2.0.

Proceedings ArticleDOI
17 Jan 2010
TL;DR: In this paper, a streaming algorithm for maintaining a blurred ball cover whose working space is linear in d and independent of n is presented, together with lower bounds on the worst-case approximation ratio of any streaming algorithm that uses poly(d) space.
Abstract: We develop (single-pass) streaming algorithms for maintaining extent measures of a stream S of n points in R^d. We focus on designing streaming algorithms whose working space is polynomial in d (poly(d)) and sublinear in n. For the problems of computing the diameter, width, and minimum enclosing ball of S, we obtain lower bounds on the worst-case approximation ratio of any streaming algorithm that uses poly(d) space. On the positive side, we introduce the notion of a blurred ball cover and use it for answering approximate farthest-point queries and maintaining the approximate minimum enclosing ball and diameter of S. We describe a streaming algorithm for maintaining a blurred ball cover whose working space is linear in d and independent of n.
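A classic example of such a small-space streaming algorithm is Zarrabi-Zadeh and Chan's 3/2-approximate minimum enclosing ball, shown here as a sketch (it is not the paper's blurred-ball-cover method, but it illustrates the one-pass, O(d)-space setting):

```python
import math

def streaming_meb(stream):
    """One pass over the points: when a point falls outside the current
    ball, grow the ball minimally toward it. The final radius is at most
    1.5 times the optimal radius (Zarrabi-Zadeh and Chan)."""
    it = iter(stream)
    c = next(it)          # center starts at the first point
    r = 0.0
    for p in it:
        d = math.dist(c, p)
        if d > r:
            delta = (d - r) / 2
            r += delta    # new ball is tangent to the old one and touches p
            c = tuple(ci + delta * (pi - ci) / d for ci, pi in zip(c, p))
    return c, r
```

The working space is one center and one radius, i.e., linear in d and independent of n, which is exactly the regime the paper studies.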

Book ChapterDOI
01 Jan 2010
TL;DR: In this article, the pattern 2413 is studied and a lower bound of 0.10472422757673209041 is established for its packing density.
Abstract: We give a new lower bound of 0.10472422757673209041 for the packing density of 2413, justify it by a construction, and conjecture that this value is actually equal to the packing density. Along the way we define the packing rate of a permutation with respect to a measure, and show that maximizing the packing rate of a pattern over all measures gives the packing density of the pattern. In this paper we consider the packing density of the pattern 2413. This pattern is significant because it is not layered. (Fig. 1: the conjecture is based on this measure, μ2.)
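Concretely, packing density asks how large a fraction of the C(n, 4) length-4 subsequences of an n-permutation can be order-isomorphic to 2413; a brute-force counter for small cases (illustrative only):

```python
from itertools import combinations
from math import comb

def occurrences(perm, pattern):
    """Count subsequences of perm order-isomorphic to pattern
    (same relative order; both must have distinct values)."""
    k = len(pattern)
    rank = sorted(range(k), key=lambda i: pattern[i])
    return sum(
        1 for sub in combinations(perm, k)
        if sorted(range(k), key=lambda i: sub[i]) == rank
    )

def packing_rate(perm, pattern):
    """Fraction of length-k subsequences matching the pattern."""
    return occurrences(perm, pattern) / comb(len(perm), len(pattern))
```

The packing density is the limit of the maximum packing rate over n-permutations as n grows; the paper's lower bound comes from an explicit measure-based construction rather than enumeration.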

Journal ArticleDOI
TL;DR: It is proved that the algorithm runs in linear time in the case of convex parts, thanks to the specificity of digital data, and in O(n log n) time otherwise.