
Showing papers on "Computational geometry" published in 2013


Proceedings ArticleDOI
17 Jul 2013
TL;DR: The Multi-Parametric Toolbox, a Matlab-based collection of algorithms for modeling, control, analysis, and deployment of constrained optimal controllers, features a powerful geometric library that extends the application of the toolbox beyond optimal control to various problems arising in computational geometry.
Abstract: The Multi-Parametric Toolbox is a collection of algorithms for modeling, control, analysis, and deployment of constrained optimal controllers developed under Matlab. It features a powerful geometric library that extends the application of the toolbox beyond optimal control to various problems arising in computational geometry. The new version 3.0 is a complete rewrite of the original toolbox with a more flexible structure that offers faster integration of new algorithms. The numerical side of the toolbox has been improved by adding interfaces to state-of-the-art solvers and by incorporating a new parametric solver that relies on solving linear-complementarity problems. The toolbox provides algorithms for design and implementation of real-time model predictive controllers that have been extensively tested.
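
MPT itself is a Matlab toolbox, so the following is only a rough Python analogue of the kind of primitive its geometric library provides: converting a polytope from halfspace form to vertex form. The scipy-based sketch below is illustrative and is not part of MPT.

```python
# Hypothetical sketch (not MPT code): enumerate the vertices of a polytope
# given in halfspace form A x <= b, via scipy's HalfspaceIntersection.
import numpy as np
from scipy.spatial import HalfspaceIntersection

# The unit square as four halfspaces; scipy expects rows [A | b'] with
# A x + b' <= 0, so we stack A with -b.
A = np.array([[-1.0, 0.0], [1.0, 0.0], [0.0, -1.0], [0.0, 1.0]])
b = np.array([0.0, 1.0, 0.0, 1.0])
halfspaces = np.hstack([A, -b[:, None]])

interior_point = np.array([0.5, 0.5])   # any strictly feasible point
hs = HalfspaceIntersection(halfspaces, interior_point)
print(hs.intersections)                 # the four vertices of the square
```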

1,054 citations


Journal ArticleDOI
TL;DR: The advantages and problems of techniques operating on quadrilateral meshes are discussed, including surface analysis and mesh quality, simplification, adaptive refinement, alignment with features, parametrisation and remeshing.
Abstract: Triangle meshes have been nearly ubiquitous in computer graphics, and a large body of data structures and geometry processing algorithms based on them has been developed in the literature. At the same time, quadrilateral meshes, especially semi-regular ones, have advantages for many applications, and significant progress was made in quadrilateral mesh generation and processing during the last several years. In this survey we discuss the advantages and problems of techniques operating on quadrilateral meshes, including surface analysis and mesh quality, simplification, adaptive refinement, alignment with features, parametrisation and remeshing.

296 citations


Journal ArticleDOI
TL;DR: GloMIQO is introduced, a numerical solver addressing mixed-integer quadratically-constrained quadratic programs to ε-global optimality, and its algorithmic components are presented for reformulating user input, detecting special structure including convexity and edge-concavity, generating tight convex relaxations, and finding good feasible solutions.
Abstract: This paper introduces the global mixed-integer quadratic optimizer, GloMIQO, a numerical solver addressing mixed-integer quadratically-constrained quadratic programs to ε-global optimality. The algorithmic components are presented for: reformulating user input, detecting special structure including convexity and edge-concavity, generating tight convex relaxations, partitioning the search space, bounding the variables, and finding good feasible solutions. To demonstrate the capacity of GloMIQO, we extensively tested its performance on a test suite of 399 problems of diverse size and structure. The test cases are taken from process networks applications, computational geometry problems, GLOBALLib, MINLPLib, and the Bonmin test set. We compare the performance of GloMIQO with respect to four state-of-the-art global optimization solvers: BARON 10.1.2, Couenne 0.4, LindoGLOBAL 6.1.1.588, and SCIP 2.1.0.
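
A core building block behind the "tight convex relaxations" mentioned above is the McCormick envelope for bilinear terms. The sketch below illustrates that textbook relaxation in Python; it is not GloMIQO code, and the function name is ours.

```python
# Textbook McCormick envelope for a bilinear term w = x*y with bounds
# xL <= x <= xU and yL <= y <= yU (illustrative; not GloMIQO code).
def mccormick_bounds(x, y, xL, xU, yL, yU):
    """Bounds on w = x*y implied by the four McCormick inequalities."""
    lower = max(xL * y + x * yL - xL * yL,
                xU * y + x * yU - xU * yU)
    upper = min(xU * y + x * yL - xU * yL,
                xL * y + x * yU - xL * yU)
    return lower, upper

lo, hi = mccormick_bounds(0.4, 0.7, 0.0, 1.0, 0.0, 1.0)
print(lo, 0.4 * 0.7, hi)   # 0.1 <= 0.28 <= 0.4: the envelope encloses x*y
```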

200 citations


BookDOI
03 Oct 2013
TL;DR: This book contains the first systematic treatment of epsilon-nets, geometric transversal theory, partitions of Euclidean spaces and a general method for the analysis of randomized geometric algorithms.
Abstract: Discrete and computational geometry are two fields which in recent years have benefitted from the interaction between mathematics and computer science. The results are applicable in areas such as motion planning, robotics, scene analysis, and computer aided design. The book consists of twelve chapters summarizing the most recent results and methods in discrete and computational geometry. All authors are well-known experts in these fields. They give concise and self-contained surveys of the most efficient combinatorial, probabilistic and topological methods that can be used to design effective geometric algorithms for the applications mentioned above. Most of the methods and results discussed in the book have not appeared in any previously published monograph. In particular, this book contains the first systematic treatment of epsilon-nets, geometric transversal theory, partitions of Euclidean spaces and a general method for the analysis of randomized geometric algorithms. Apart from mathematicians working in discrete and computational geometry this book will also be of great use to computer scientists and engineers, who would like to learn about the most recent results.

112 citations


Proceedings ArticleDOI
05 Nov 2013
TL;DR: CG_Hadoop is introduced, a suite of scalable and efficient MapReduce algorithms for various fundamental computational geometry problems, namely polygon union, skyline, convex hull, farthest pair, and closest pair, which present a set of key components for other geometric algorithms.
Abstract: Hadoop, employing the MapReduce programming paradigm, has been widely accepted as the standard framework for analyzing big data in distributed environments. Unfortunately, this rich framework has not been truly exploited for processing large-scale computational geometry operations. This paper introduces CG_Hadoop, a suite of scalable and efficient MapReduce algorithms for various fundamental computational geometry problems, namely polygon union, skyline, convex hull, farthest pair, and closest pair, which present a set of key components for other geometric algorithms. For each computational geometry operation, CG_Hadoop has two versions, one for the Apache Hadoop system and one for the SpatialHadoop system, a Hadoop-based system that is more suited for spatial operations. These proposed algorithms form a nucleus of a comprehensive MapReduce library of computational geometry operations. Extensive experimental results on a cluster of 25 machines with datasets of up to 128GB show that CG_Hadoop achieves up to 29x and 260x better performance than traditional algorithms when using Hadoop and SpatialHadoop systems, respectively.
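
The divide-and-conquer structure of the MapReduce convex-hull algorithm (local hulls in the map phase, a hull of hulls in the reduce phase) can be sketched on a single machine. The following illustrates the idea only; it is not CG_Hadoop code.

```python
# Single-machine sketch of the map/reduce convex-hull idea (illustrative;
# not CG_Hadoop code): mappers keep only local hull vertices, the reducer
# computes the hull of the surviving points.
import numpy as np
from scipy.spatial import ConvexHull

def map_local_hull(partition):
    """Map phase: reduce one partition to its convex hull vertices."""
    return partition[ConvexHull(partition).vertices]

def reduce_global_hull(local_hulls):
    """Reduce phase: hull of the union of all local hull vertices."""
    pts = np.vstack(local_hulls)
    return pts[ConvexHull(pts).vertices]

points = np.random.rand(100_000, 2)
partitions = np.array_split(points, 8)    # stand-in for HDFS blocks
hull = reduce_global_hull([map_local_hull(p) for p in partitions])
print(len(hull), "global hull vertices")
```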

103 citations


Proceedings ArticleDOI
23 Jun 2013
TL;DR: This work shows that using a light field instead of an image not only makes it possible to train classifiers that overcome many of these problems, but also provides an optimal data structure for label optimization by implicitly providing scene geometry information.
Abstract: We present the first variational framework for multi-label segmentation on the ray space of 4D light fields. For traditional segmentation of single images, features need to be extracted from the 2D projection of a three-dimensional scene. The associated loss of geometry information can cause severe problems, for example if different objects have a very similar visual appearance. In this work, we show that using a light field instead of an image not only makes it possible to train classifiers that overcome many of these problems, but also provides an optimal data structure for label optimization by implicitly providing scene geometry information. It is thus possible to consistently optimize label assignment over all views simultaneously. As a further contribution, we make all light fields available online with complete depth and segmentation ground truth data where available, and thus establish the first benchmark data set for light field analysis to facilitate competitive further development of algorithms.

99 citations


Proceedings ArticleDOI
23 Feb 2013
TL;DR: This work proposes efficient techniques to perform concurrent subgraph addition, subgraph deletion, conflict detection and several optimizations to improve the scalability of morph algorithms and provides several insights into how other morph algorithms can be efficiently implemented on GPUs.
Abstract: There is growing interest in using GPUs to accelerate graph algorithms such as breadth-first search, computing page-ranks, and finding shortest paths. However, these algorithms do not modify the graph structure, so their implementation is relatively easy compared to general graph algorithms like mesh generation and refinement, which morph the underlying graph in non-trivial ways by adding and removing nodes and edges. We know relatively little about how to implement morph algorithms efficiently on GPUs. In this paper, we present and study four morph algorithms: (i) a computational geometry algorithm called Delaunay Mesh Refinement (DMR), (ii) an approximate SAT solver called Survey Propagation (SP), (iii) a compiler analysis called Points-To Analysis (PTA), and (iv) Boruvka's Minimum Spanning Tree algorithm (MST). Each of these algorithms modifies the graph data structure in different ways and thus poses interesting challenges. We overcome these challenges using algorithmic and GPU-specific optimizations. We propose efficient techniques to perform concurrent subgraph addition, subgraph deletion, conflict detection and several optimizations to improve the scalability of morph algorithms. For an input mesh with 10 million triangles, our DMR code achieves an 80x speedup over the highly optimized serial Triangle program and a 2.3x speedup over a multicore implementation running with 48 threads. Our SP code is 3x faster than a multicore implementation with 48 threads on an input with 1 million literals. The PTA implementation is able to analyze six SPEC 2000 benchmark programs in just 74 milliseconds, achieving a geometric mean speedup of 9.3x over a 48-thread multicore version. Our MST code is slower than a multicore version with 48 threads for sparse graphs but significantly faster for denser graphs. This work provides several insights into how other morph algorithms can be efficiently implemented on GPUs.
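
Of the four morph algorithms, Boruvka's MST has the most compact sequential form; the sketch below shows its component-merging structure on the CPU (the paper's contribution, an efficient GPU implementation, is not reproduced here).

```python
# Minimal sequential Boruvka MST (illustrative baseline; the paper's point
# is the GPU implementation). Each round, every component picks its
# cheapest outgoing edge and components are merged along those edges.
def boruvka_mst(n, edges):
    """n vertices; edges as (weight, u, v) tuples; returns MST edges."""
    parent = list(range(n))

    def find(x):                          # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    mst, components = [], n
    while components > 1:
        cheapest = {}                     # component root -> best edge
        for w, u, v in edges:
            ru, rv = find(u), find(v)
            if ru != rv:
                for r in (ru, rv):
                    if r not in cheapest or w < cheapest[r][0]:
                        cheapest[r] = (w, u, v)
        if not cheapest:
            break                         # graph is disconnected
        for w, u, v in cheapest.values():
            ru, rv = find(u), find(v)
            if ru != rv:
                parent[ru] = rv
                mst.append((u, v, w))
                components -= 1
    return mst

print(boruvka_mst(4, [(1, 0, 1), (2, 1, 2), (3, 2, 3), (4, 0, 3)]))
```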

90 citations


Journal ArticleDOI
TL;DR: A framework and several algorithms are proposed that automatically recognize building patterns from topographic data, with a focus on collinear and curvilinear alignments, together with a mechanism that combines results from different algorithms to improve the recognition quality.
Abstract: Building patterns are important features that should be preserved in the map generalization process. However, the patterns are not explicitly accessible to automated systems. This paper proposes a framework and several algorithms that automatically recognize building patterns from topographic data, with a focus on collinear and curvilinear alignments. For both patterns two algorithms are developed, which are able to recognize alignment-of-center and alignment-of-side patterns. The presented approach integrates aspects of computational geometry, graph-theoretic concepts and theories of visual perception. Although the individual algorithms for collinear and curvilinear patterns show great potential for each type of pattern, the recognized patterns are neither complete nor of sufficiently good quality. We therefore advocate the use of a multi-algorithm paradigm, in which a mechanism combines results from different algorithms to improve the recognition quality. The potential of our method is demonstrated by an application of the framework to several real topographic datasets. The quality of the recognition results is validated in an expert survey.
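
A minimal collinearity test of the kind such a framework needs can be phrased as a line fit over building centroids. The sketch below is our illustration, not the authors' algorithm: it fits the principal axis via SVD and thresholds the perpendicular offsets (the threshold value is an assumption).

```python
# Illustrative collinear-alignment test (not the authors' algorithm):
# fit a line through building centroids and accept the candidate group
# if every perpendicular offset is below a map-unit threshold.
import numpy as np

def is_collinear(centroids, max_offset=5.0):
    """centroids: (n, 2) array; max_offset in map units (assumed value)."""
    c = centroids - centroids.mean(axis=0)
    _, _, vt = np.linalg.svd(c, full_matrices=False)
    residuals = c @ vt[1]      # offsets perpendicular to the best-fit line
    return np.abs(residuals).max() <= max_offset

row = np.array([[0, 0], [10, 0.5], [20, -0.3], [30, 0.2]], dtype=float)
print(is_collinear(row))       # True: a near-collinear alignment
```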

73 citations


Proceedings ArticleDOI
01 Jun 2013
TL;DR: Bounds are given on the ε-rank of a real matrix A, defined for any ε > 0 as the minimum rank over matrices that approximate every entry of A to within an additive ε.
Abstract: We study the ε-rank of a real matrix A, defined for any ε > 0 as the minimum rank over matrices that approximate every entry of A to within an additive ε. This parameter is connected to other notions of approximate rank and is motivated by problems from various topics including communication complexity, combinatorial optimization, game theory, computational geometry and learning theory. Here we give bounds on the ε-rank and use them for algorithmic applications. Our main algorithmic results are (a) polynomial-time additive approximation schemes for Nash equilibria for 2-player games when the payoff matrices are positive semidefinite or have logarithmic rank and (b) an additive PTAS for the densest subgraph problem for similar classes of weighted graphs. We use combinatorial, geometric and spectral techniques; our main new tool is an algorithm for efficiently covering a convex body with translates of another convex body.
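
A classical point of comparison for the ε-rank is truncated SVD: the smallest k whose best rank-k approximation happens to be entrywise within ε gives an upper bound on the ε-rank (only an upper bound, since SVD minimizes Frobenius rather than entrywise error). A hedged numerical sketch:

```python
# Upper bound on the eps-rank via truncated SVD (a baseline, not the
# paper's method): SVD optimizes Frobenius error, so the k found here
# can exceed the true eps-rank.
import numpy as np

def eps_rank_upper_bound(A, eps):
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    for k in range(len(s) + 1):
        Ak = (U[:, :k] * s[:k]) @ Vt[:k]     # best rank-k approximation
        if np.abs(A - Ak).max() <= eps:      # entrywise additive error
            return k
    return len(s)

A = np.outer(np.arange(1, 5), np.arange(1, 5)) + 0.01 * np.random.randn(4, 4)
print(eps_rank_upper_bound(A, eps=0.1))      # typically 1: A is near rank 1
```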

68 citations


Journal ArticleDOI
TL;DR: A novel volumetric parameterization and spline construction framework is developed, which is an effective modeling tool for converting surface meshes to volumetric splines and has great potential in shape modeling, engineering analysis, and reverse engineering applications.
Abstract: This paper develops a novel volumetric parameterization and spline construction framework, which is an effective modeling tool for converting surface meshes to volumetric splines. Our new splines are defined upon a novel parametric domain called generalized polycubes (GPCs). A GPC comprises a set of regular cube domains topologically glued together. Compared with conventional polycubes (CPCs), the GPC is much more powerful and flexible and has improved numerical accuracy and computational efficiency when serving as a parametric domain. We design an automatic algorithm to construct the GPC domain while also permitting the user to improve shape abstraction via interactive intervention. We then parameterize the input model on the GPC domain. Finally, we devise a new volumetric spline scheme based on this seamless volumetric parameterization. With a hierarchical fitting scheme, the proposed splines can fit data accurately while reducing the number of superfluous control points. Our volumetric modeling scheme has great potential in shape modeling, engineering analysis, and reverse engineering applications.

62 citations


Journal ArticleDOI
TL;DR: A new method called Fast Approximate Convex Decomposition (FACD) is proposed that improves the quality of the decomposition and reduces the cost of computing it for both 2D and 3D models and uses a dynamic programming approach to select a set of non-crossing (independent) cuts that can be simultaneously applied to decompose the component into n_c+1 components.
Abstract: Approximate convex decomposition (ACD) is a technique that partitions an input object into approximately convex components. Decomposition into approximately convex pieces is both more efficient to compute than exact convex decomposition and can also generate a more manageable number of components. It can be used as a basis of divide-and-conquer algorithms for applications such as collision detection, skeleton extraction and mesh generation. In this paper, we propose a new method called Fast Approximate Convex Decomposition (FACD) that improves the quality of the decomposition and reduces the cost of computing it for both 2D and 3D models. In particular, we propose a new strategy for evaluating potential cuts that aims to reduce the relative concavity, rather than absolute concavity. As shown in our results, this leads to more natural and smaller decompositions that include components for small but important features such as toes or fingers while not decomposing larger components, such as the torso, that may have concavities due to surface texture. Second, instead of decomposing a component into two pieces at each step, as in the original ACD, we propose a new strategy that uses a dynamic programming approach to select a set of n_c non-crossing (independent) cuts that can be simultaneously applied to decompose the component into n_c+1 components. This reduces the depth of recursion and, together with a more efficient method for computing the concavity measure, leads to significant gains in efficiency. We provide comparative results for 2D and 3D models illustrating the improvements obtained by FACD over ACD and we compare with the segmentation methods in the Princeton Shape Benchmark by Chen et al. (2009) [31].
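
The concavity measure driving these cuts can be illustrated in 2D as the largest distance from the polygon's boundary to its convex hull; the sketch below uses vertices only and is an illustration in the spirit of ACD, not the paper's exact measure.

```python
# Illustrative 2D concavity in the spirit of ACD/FACD (not the paper's
# exact measure): maximum distance from a polygon vertex to the boundary
# of the polygon's convex hull.
import numpy as np
from scipy.spatial import ConvexHull

def point_segment_dist(p, a, b):
    ab, ap = b - a, p - a
    t = np.clip(np.dot(ap, ab) / np.dot(ab, ab), 0.0, 1.0)
    return np.linalg.norm(p - (a + t * ab))

def concavity(polygon):
    """polygon: (n, 2) vertex array; returns 0 for convex polygons."""
    hull = ConvexHull(polygon)
    edges = [(polygon[i], polygon[j])
             for i, j in zip(hull.vertices, np.roll(hull.vertices, -1))]
    return max(min(point_segment_dist(p, a, b) for a, b in edges)
               for p in polygon)

notch = np.array([[0, 0], [4, 0], [4, 4], [2, 1.5], [0, 4]], dtype=float)
print(concavity(notch))        # 1.5: depth of the notch below the hull
```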

Journal ArticleDOI
TL;DR: This paper analyzes an approximation scheme that keeps the representation linear in the size of the input, while maintaining the guarantees on the inference quality close to those for the exact but costly representation.
Abstract: Distance functions to compact sets play a central role in several areas of computational geometry. Methods that rely on them are robust to the perturbations of the data by the Hausdorff noise, but fail in the presence of outliers. The recently introduced distance to a measure offers a solution by extending the distance function framework to reasoning about the geometry of probability measures, while maintaining theoretical guarantees about the quality of the inferred information. A combinatorial explosion hinders working with distance to a measure as an ordinary power distance function. In this paper, we analyze an approximation scheme that keeps the representation linear in the size of the input, while maintaining the guarantees on the inference quality close to those for the exact but costly representation.
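
For the empirical measure of n sample points, the distance to a measure has a concrete nearest-neighbor form: the root-mean-square distance to the k = ⌈m0·n⌉ nearest samples. The sketch below implements that textbook form, not the paper's linear-size approximation scheme.

```python
# Distance to the empirical measure of a point cloud in its textbook k-NN
# form (not the paper's approximation scheme); robust to sparse outliers.
import numpy as np
from scipy.spatial import cKDTree

def distance_to_measure(samples, queries, m0=0.05):
    k = max(1, int(np.ceil(m0 * len(samples))))
    d, _ = cKDTree(samples).query(queries, k=k)   # (q, k) NN distances
    if d.ndim == 1:                               # k == 1 edge case
        d = d[:, None]
    return np.sqrt((d ** 2).mean(axis=1))         # RMS over the k neighbors

pts = np.vstack([np.random.randn(500, 2),               # signal
                 np.random.uniform(-8, 8, (20, 2))])    # outliers
print(distance_to_measure(pts, np.array([[0.0, 0.0], [7.0, 7.0]])))
# small near the data, large in the outlier region
```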

Journal ArticleDOI
TL;DR: New methods are presented for feature analysis in atom probe tomography data, with useful applications in materials characterisation and interfacial excess mapping of an InGaAs quantum dot.


BookDOI
04 Jan 2013
TL;DR: Polyhedral and Algebraic Methods in Computational Geometry provides a thorough introduction to algorithmic geometry and its applications and presents its primary topics from the viewpoints of discrete, convex and elementary algebraic geometry.
Abstract: Polyhedral and Algebraic Methods in Computational Geometry provides a thorough introduction to algorithmic geometry and its applications. It presents its primary topics from the viewpoints of discrete, convex and elementary algebraic geometry. The first part of the book studies classical problems and techniques that refer to polyhedral structures. The authors include a study of algorithms for computing convex hulls as well as the construction of Voronoi diagrams and Delone triangulations. The second part of the book develops the primary concepts of (non-linear) computational algebraic geometry. Here, the book looks at Gröbner bases and solving systems of polynomial equations. The theory is illustrated by applications in computer graphics, curve reconstruction and robotics. Throughout the book, interconnections between computational geometry and other disciplines (such as algebraic geometry, optimization and numerical mathematics) are established. Polyhedral and Algebraic Methods in Computational Geometry is directed towards advanced undergraduates in mathematics and computer science, as well as towards engineering students who are interested in the applications of computational geometry.

Journal ArticleDOI
TL;DR: This work proposes the first graphics processing unit (GPU) solution to compute the 2D constrained Delaunay triangulation (CDT) of a planar straight line graph (PSLG) consisting of points and edges using the CUDA programming model on NVIDIA GPUs, and accelerates the entire computation on the GPU.
Abstract: We propose the first graphics processing unit (GPU) solution to compute the 2D constrained Delaunay triangulation (CDT) of a planar straight line graph (PSLG) consisting of points and edges. There are many existing CPU algorithms to solve the CDT problem in computational geometry, yet there has been no prior approach to solve this problem efficiently using the parallel computing power of the GPU. For the special case of the CDT problem where the PSLG consists of just points, which is simply the normal Delaunay triangulation (DT) problem, a hybrid approach using the GPU together with the CPU to partially speed up the computation has already been presented in the literature. Our work, on the other hand, accelerates the entire computation on the GPU. Our implementation using the CUDA programming model on NVIDIA GPUs is numerically robust, and runs up to an order of magnitude faster than the best sequential implementations on the CPU. This result is reflected in our experiment with both randomly generated PSLGs and real-world GIS data having millions of points and edges.
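
The points-only special case mentioned above (the plain Delaunay triangulation) is available on the CPU through scipy; the snippet below shows that baseline only, since neither constrained edges nor the paper's GPU algorithm are covered by it.

```python
# CPU baseline for the points-only special case (plain DT). scipy has no
# constrained Delaunay triangulation; this just shows the data a CDT
# algorithm must maintain and repair around constraint edges.
import numpy as np
from scipy.spatial import Delaunay

pts = np.random.rand(10_000, 2)
tri = Delaunay(pts)
print(tri.simplices.shape)   # (n_triangles, 3): vertex indices per triangle
print(tri.neighbors.shape)   # triangle adjacency used during edge flips
```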

Journal ArticleDOI
TL;DR: A biharmonic model for cross-object volumetric mapping that provides C1 smoothness along the segmentation boundary interfaces is proposed, and its efficacy is demonstrated on various geometric models with complex geometry or heterogeneous interior structures.
Abstract: We propose a biharmonic model for cross-object volumetric mapping. This new computational model aims to facilitate the mapping of solid models with complicated geometry or heterogeneous inner structures. In order to solve cross-shape mapping between such models through divide and conquer, solid models can be decomposed into subparts upon which mappings are computed individually. The biharmonic volumetric mapping can be performed in each subregion separately. Unlike the widely used harmonic mapping, which only allows C0 continuity along the segmentation boundary interfaces, this biharmonic model can provide C1 smoothness. We demonstrate the efficacy of our mapping framework on various geometric models with complex geometry (which are decomposed into subparts with simpler and solvable geometry) or heterogeneous interior structures (whose different material layers can be segmented and processed separately).

Journal ArticleDOI
TL;DR: A new approach for defining continuous non‐oriented gradient fields from discrete inputs, a fundamental stage for a variety of computer graphics applications such as surface or curve reconstruction, and image stylization, is introduced.
Abstract: We introduce a new approach for defining continuous non-oriented gradient fields from discrete inputs, a fundamental stage for a variety of computer graphics applications such as surface or curve reconstruction, and image stylization. Our approach builds on a moving least square formalism that computes higher-order local approximations of non-oriented input gradients. In particular, we show that our novel isotropic linear approximation outperforms its lower-order alternative: surface or image structures are much better preserved, and instabilities are significantly reduced. Thanks to its ease of implementation (on both CPU and GPU) and small performance overhead, we believe our approach will find a widespread use in graphics applications, as demonstrated by the variety of our results.

Journal ArticleDOI
TL;DR: Three important variations of the minimum enclosing circle problem are studied, including computing k identical circles of minimum radius, centered on L, whose union covers all the points in P, and computing the minimum-radius circle centered on L that encloses at least k points of P.
Abstract: Given a set P of n points and a straight line L, we study three important variations of minimum enclosing circle problem as follows:
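
The enumerated variants are truncated in this listing. For context, the classic unconstrained version (no line L) is solved in expected linear time by Welzl's randomized algorithm; a compact sketch follows, with degenerate collinear triples left unhandled.

```python
# Welzl's expected-linear-time minimum enclosing circle (the classic,
# unconstrained baseline; the paper's variants constrain centers to L).
import random

def dist(p, q):
    return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5

def circle_from(ps):
    if len(ps) == 0: return ((0.0, 0.0), 0.0)
    if len(ps) == 1: return (ps[0], 0.0)
    if len(ps) == 2:
        c = ((ps[0][0] + ps[1][0]) / 2, (ps[0][1] + ps[1][1]) / 2)
        return (c, dist(c, ps[0]))
    (ax, ay), (bx, by), (cx, cy) = ps     # circumcircle of three points
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d   # degenerate d == 0 unhandled
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    return ((ux, uy), dist((ux, uy), ps[0]))

def welzl(points, boundary=()):
    if not points or len(boundary) == 3:
        return circle_from(list(boundary))
    p, rest = points[0], points[1:]
    c, r = welzl(rest, boundary)
    if dist(p, c) <= r + 1e-9:            # p already enclosed
        return c, r
    return welzl(rest, boundary + (p,))   # p must lie on the boundary

pts = [(random.random(), random.random()) for _ in range(200)]
random.shuffle(pts)                       # random order gives E[O(n)] time
print(welzl(pts))
```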

Posted Content
TL;DR: A novel variant of the Frank-Wolfe (FW) method based on a new way to perform away steps is presented and analyzed; specialized to quadratic forms, it is strongly related to classic computational geometry methods and enjoys a linear rate of convergence.
Abstract: Recently, there has been a renewed interest in the machine learning community for variants of a sparse greedy approximation procedure for concave optimization known as the Frank-Wolfe (FW) method. In particular, this procedure has been successfully applied to train large-scale instances of non-linear Support Vector Machines (SVMs). Specializing FW to SVM training has made it possible to obtain efficient algorithms as well as important theoretical results, including convergence analysis of training algorithms and new characterizations of model sparsity. In this paper, we present and analyze a novel variant of the FW method based on a new way to perform away steps, a classic strategy used to accelerate the convergence of the basic FW procedure. Our formulation and analysis are focused on a general concave maximization problem on the simplex. However, the specialization of our algorithm to quadratic forms is strongly related to some classic methods in computational geometry, namely the Gilbert and MDM algorithms. On the theoretical side, we demonstrate that the method matches the guarantees in terms of convergence rate and number of iterations obtained by using classic away steps. In particular, the method enjoys a linear rate of convergence, a result that has recently been proved for MDM on quadratic forms. On the practical side, we provide experiments on several classification datasets, and evaluate the results using statistical tests. Experiments show that our method is faster than the FW method with classic away steps, and works well even in cases in which classic away steps slow down the algorithm. Furthermore, these improvements are obtained without sacrificing the predictive accuracy of the obtained SVM model.
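
The connection to Gilbert's algorithm is easiest to see on a small example: for a quadratic over the simplex, the Frank-Wolfe linear oracle just selects the coordinate with the smallest gradient entry. The sketch below is a plain FW iteration without the paper's away steps.

```python
# Minimal Frank-Wolfe for min 0.5 x'Qx over the probability simplex
# (away steps, the paper's focus, are omitted). On the simplex the linear
# oracle reduces to an argmin over gradient entries.
import numpy as np

def frank_wolfe(Q, iters=200):
    n = Q.shape[0]
    x = np.ones(n) / n                  # start at the barycenter
    for _ in range(iters):
        g = Q @ x                       # gradient of 0.5 x'Qx
        s = np.zeros(n)
        s[np.argmin(g)] = 1.0           # simplex vertex minimizing <g, s>
        d = s - x
        denom = d @ Q @ d               # exact line search for a quadratic
        gamma = 1.0 if denom <= 0 else min(1.0, max(0.0, -(g @ d) / denom))
        x = x + gamma * d
    return x

M = np.random.randn(5, 3)
Q = M @ M.T                             # PSD case, as in polytope-distance
x = frank_wolfe(Q)                      # problems solved by Gilbert's method
print(x, 0.5 * x @ Q @ x)
```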

Proceedings ArticleDOI
23 Jun 2013
TL;DR: This work shows that ambient occlusion can be approximated using simple, per-pixel statistics over image stacks, based on a simplified image formation model, and uses the derived AO measure to compute reflectance and illumination for objects without relying on additional smoothness priors.
Abstract: We present a method for computing ambient occlusion (AO) for a stack of images of a scene from a fixed viewpoint. Ambient occlusion, a concept common in computer graphics, characterizes the local visibility at a point: it approximates how much light can reach that point from different directions without getting blocked by other geometry. While AO has received surprisingly little attention in vision, we show that it can be approximated using simple, per-pixel statistics over image stacks, based on a simplified image formation model. We use our derived AO measure to compute reflectance and illumination for objects without relying on additional smoothness priors, and demonstrate state-of-the-art performance on the MIT Intrinsic Images benchmark. We also demonstrate our method on several synthetic and real scenes, including 3D printed objects with known ground truth geometry.

Journal ArticleDOI
TL;DR: A Monte Carlo approximation algorithm for the Tukey depth problem in high dimensions is introduced that is a generalization of an algorithm presented by Rousseeuw and Struyf (1998) and studied both analytically and experimentally.
Abstract: A Monte Carlo approximation algorithm for the Tukey depth problem in high dimensions is introduced. The algorithm is a generalization of an algorithm presented by Rousseeuw and Struyf (1998) [20]. The performance of this algorithm is studied both analytically and experimentally.
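
The random-direction idea behind such estimators is simple to state: the Tukey depth of x is a minimum over all halfspaces through x, so sampling directions yields an upper-bound estimate that tightens as more directions are drawn. A generic sketch in that spirit, not the paper's exact algorithm:

```python
# Generic Monte Carlo upper bound on Tukey depth (in the spirit of, not
# identical to, the paper's algorithm): depth(x) is a minimum over
# halfspace directions, so sampled directions can only overestimate it.
import numpy as np

def mc_tukey_depth(x, points, n_dirs=10_000, seed=0):
    rng = np.random.default_rng(seed)
    u = rng.standard_normal((n_dirs, points.shape[1]))
    u /= np.linalg.norm(u, axis=1, keepdims=True)    # random unit vectors
    proj = (points - x) @ u.T                        # (n_points, n_dirs)
    counts = (proj >= 0).sum(axis=0)                 # points per halfspace
    return counts.min() / len(points)                # normalized depth

pts = np.random.default_rng(1).standard_normal((1_000, 3))
print(mc_tukey_depth(pts.mean(axis=0), pts))          # ~0.5 at the center
print(mc_tukey_depth(np.array([5.0, 5.0, 5.0]), pts)) # ~0 far outside
```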

Proceedings ArticleDOI
17 Jun 2013
TL;DR: This video demonstrates how the robot platform IRMA3D can produce high-resolution, virtual 3D environments based on a limited number of laser scans, where computing an optimal set of scan positions amounts to solving an instance of the Art Gallery Problem.
Abstract: In this video, we illustrate how one of the classical areas of computational geometry has gained in practical relevance, which in turn gives rise to new, fascinating geometric problems. In particular, we demonstrate how the robot platform IRMA3D can produce high-resolution, virtual 3D environments, based on a limited number of laser scans. Computing an optimal set of scans amounts to solving an instance of the Art Gallery Problem (AGP): Place a minimum number of stationary guards in a polygonal region P, such that all points in P are guarded.

Journal ArticleDOI
TL;DR: In this paper, the authors derived bounds on the accuracy achievable by localization techniques under the assumption that the localization system is map-aware, i.e., it can benefit not only from the availability of observations, but also from the a priori knowledge provided by the map of the environment where it operates.
Abstract: Establishing bounds on the accuracy achievable by localization techniques represents a fundamental technical issue. Bounds on localization accuracy have been derived for cases in which the position of an agent is estimated on the basis of a set of observations and, possibly, of some a priori information related to them (e.g., information about anchor positions and properties of the communication channel). In this paper, new bounds are derived under the assumption that the localization system is map-aware, i.e., it can benefit not only from the availability of observations, but also from the a priori knowledge provided by the map of the environment where it operates. Our results show that: a) map-aware estimation accuracy can be related to some features of the map (e.g., its shape and area) even though, in general, the relation is complicated; b) maps are really useful in the presence of some combination of low SNRs and specific geometrical features of the map (e.g., the size of obstructions); c) in most cases, there is no need for refined maps since additional details do not improve estimation accuracy.

Journal ArticleDOI
TL;DR: A set of direct geometric processing techniques is presented that enables the telefabrication of physical objects without any intermediate geometric models such as STL, polygons or non-uniform rational B-splines that are otherwise commonly used in prevalent approaches.
Abstract: This paper presents a new approach for telefabrication where a physical object is scanned in one location and fabricated in another location. This approach integrates three-dimensional (3D) scanning, geometric processing of scanned data, and additive manufacturing (AM) technologies. In this paper, we focus on a set of direct geometric processing techniques that enable the telefabrication. In this approach, 3D scan data are directly sliced into layer-wise contours. Sacrificial supports are generated directly from the contours and digital mask images of the objects and the supports for stereolithography apparatus (SLA) processes are then automatically generated. The salient feature of this approach is that it does not involve any intermediate geometric models such as STL, polygons, or nonuniform rational B-splines (NURBS) that are otherwise commonly used in prevalent approaches. The experimental results on a set of objects fabricated on several SLA machines confirm the effectiveness of the approach in faithfully telefabricating physical objects.
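
The direct-slicing step at the heart of this pipeline (intersecting scan triangles with a horizontal plane to obtain a layer contour) can be sketched compactly. The version below is a simplification for illustration only: it returns unordered segments and ignores vertices lying exactly on the plane.

```python
# Simplified direct-slicing sketch (not the paper's pipeline): intersect
# each triangle with the plane z = z0 and collect 2D contour segments.
import numpy as np

def slice_mesh(triangles, z0):
    """triangles: (n, 3, 3) vertex coordinates; returns (p, q) segments."""
    segments = []
    for tri in triangles:
        pts = []
        for a, b in ((0, 1), (1, 2), (2, 0)):        # the three edges
            za, zb = tri[a, 2], tri[b, 2]
            if (za - z0) * (zb - z0) < 0:            # edge crosses plane
                t = (z0 - za) / (zb - za)
                pts.append(tri[a] + t * (tri[b] - tri[a]))
        if len(pts) == 2:
            segments.append((pts[0][:2], pts[1][:2]))
    return segments

tet = np.array([[[0, 0, 0], [1, 0, 0], [0, 1, 0]],
                [[0, 0, 0], [1, 0, 0], [0, 0, 1]],
                [[0, 0, 0], [0, 1, 0], [0, 0, 1]],
                [[1, 0, 0], [0, 1, 0], [0, 0, 1]]], dtype=float)
print(slice_mesh(tet, z0=0.5))   # three segments: the layer contour
```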

Journal ArticleDOI
TL;DR: This paper proposes an O(n^{1+ε}) time algorithm for the maximum diameter color-spanning set problem, where ε could be an arbitrarily small positive constant, and proposes two efficient constant factor approximation algorithms for the planar smallest perimeter color-spanning convex hull problem.
Abstract: In this paper we study several geometric problems of color-spanning sets: given n points with m colors in the plane, selecting m points with m distinct colors such that some geometric properties of the m selected points are minimized or maximized. The geometric properties studied in this paper are the maximum diameter, the largest closest pair, the planar smallest minimum spanning tree, the planar largest minimum spanning tree and the planar smallest perimeter convex hull. We propose an O(n^{1+ε}) time algorithm for the maximum diameter color-spanning set problem where ε could be an arbitrarily small positive constant. Then, we present hardness proofs for the other problems and propose two efficient constant factor approximation algorithms for the planar smallest perimeter color-spanning convex hull problem.

Journal ArticleDOI
TL;DR: This paper proposes approximate static range search (ARS), which combines two approaches, namely (i) lowerbound approximate range search and (ii) upperbound approximate range search, and is able to deliver better performance, together with low false hit and reasonable false miss rates.
Abstract: For many years, spatial range search has been applied to computational geometry and multimedia problems to find interest objects within a given radius. Range search query has traditionally been used to return all objects within a given radius. However, having all objects is not necessary, especially when there are already enough objects closer to the query point. Furthermore, expanding the radius may give users better results, especially when there are a lot of objects just outside the search boundary. Therefore, in this paper, we focus on approximate range search, where the query results are approximate, rather than exact. We propose approximate static range search (ARS), which combines two approaches, namely (i) lowerbound approximate range search, and (ii) upperbound approximate range search. Using ARS, we are able to deliver better performance, together with low false hit and reasonable false miss rates. We also extend ARS in the context of a continuous query setting, in which the query moves. This is particularly important in spatial databases as a mobile user who invokes the query is moving. In terms of continuous range search, the intention is to find split points, the locations where the query results will be updated. Accordingly, we propose two methods for approximate continuous range search, namely (i) range search minimization, and (ii) split points minimization. Our performance evaluation, which compares our methods with the traditional continuous range search, shows that our methods considerably reduce the number of split points, thereby improving overall performance.
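
The lower/upper-bound pair can be illustrated with a KD-tree by shrinking or expanding the query radius: the shrunken query returns no false hits (but may miss objects near the boundary), while the expanded query returns no false misses. This sketch is our illustration, not the paper's ARS.

```python
# Illustrative lower/upper-bound range search with a KD-tree (not the
# paper's ARS): radius r*(1-eps) guarantees no false hits, r*(1+eps)
# guarantees no false misses; the exact result sits between them.
import numpy as np
from scipy.spatial import cKDTree

pts = np.random.rand(100_000, 2)
tree = cKDTree(pts)
q, r, eps = np.array([0.5, 0.5]), 0.1, 0.2

lower = tree.query_ball_point(q, r * (1 - eps))   # subset of exact result
exact = tree.query_ball_point(q, r)
upper = tree.query_ball_point(q, r * (1 + eps))   # superset of exact result
print(len(lower), len(exact), len(upper))         # nested result sizes
```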

Proceedings ArticleDOI
20 May 2013
TL;DR: This work revisits the distributed polygon overlay problem and its implementation on the MapReduce platform, and develops algorithms geared towards maximizing local processing and minimizing the communication overhead inherent in the shuffle and sort phases of MapReduce.
Abstract: Polygon overlay is one of the complex operations in computational geometry. It is applied in many fields such as Geographic Information Systems (GIS), computer graphics and VLSI CAD. Sequential algorithms for this problem are in abundance in the literature, but there is a lack of distributed algorithms, especially for the MapReduce platform. In GIS, spatial data files tend to be large in size (in GBs) and the underlying overlay computation is highly irregular and compute intensive. The MapReduce paradigm is now standard in industry and academia for processing large-scale data. Motivated by the MapReduce programming model, we revisit the distributed polygon overlay problem and its implementation on the MapReduce platform. Our algorithms are geared towards maximizing local processing and minimizing the communication overhead inherent in the shuffle and sort phases of MapReduce. We have experimented with two data sets and achieved up to 22x speedup with dataset 1 using 64 CPU cores.

Proceedings ArticleDOI
23 Jun 2013
TL;DR: The algorithm is applied to study the constrained human brain surface registration problem, and experiments demonstrate that, by changing the Riemannian metric, the registrations are always diffeomorphic and achieve relatively high performance when evaluated with some popular cortical surface registration evaluation standards.
Abstract: Automatic computation of surface correspondence via harmonic map is an active research field in computer vision, computer graphics and computational geometry. It may help document and understand physical and biological phenomena and also has broad applications in biometrics, medical imaging and motion capture. Although numerous studies have been devoted to harmonic map research, limited progress has been made toward computing a diffeomorphic harmonic map on general topology surfaces with landmark constraints. This work addresses this problem by changing the Riemannian metric on the target surface to a hyperbolic metric, so that the harmonic mapping is guaranteed to be a diffeomorphism under landmark constraints. The computational algorithms are based on the Ricci flow method, and the method is general and robust. We apply our algorithm to study the constrained human brain surface registration problem. Experimental results demonstrate that, by changing the Riemannian metric, the registrations are always diffeomorphic and achieve relatively high performance when evaluated with some popular cortical surface registration evaluation standards.

Journal ArticleDOI
TL;DR: An array of variations of this problem in 2-D and 3-D, including points with weights, approximation with violations, and approximation using step functions or, more generally, piecewise linear functions, is studied based on the uniform error metric.
Abstract: Approximating points by piecewise linear functions is an intensively researched topic in computational geometry. In this paper, we study, based on the uniform error metric, an array of variations of this problem in 2-D and 3-D, including points with weights, approximation with violations, using step functions or more generally piecewise linear functions. We consider both the min-# (i.e., given an error tolerance ϵ, minimizing the size k of the approximating function) and min-ϵ (i.e., given a size k of the approximating function, minimizing the error tolerance ϵ) versions of the problems. Our algorithms either improve on the previously best-known solutions or are the first known results for the respective problems. Our approaches are based on interesting geometric observations and algorithmic techniques. Some data structures we develop are of independent interest and may find other applications.
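
For the simplest of these settings, the 1-D min-# problem with unweighted points and step functions has a classic greedy optimum under the uniform metric: sweep the points in x-order and open a new step whenever the running spread exceeds 2ϵ. A sketch of that baseline (the paper's algorithms handle far more general variants):

```python
# Greedy 1-D min-# step-function approximation under the uniform metric
# (unweighted baseline only; the paper covers far more general variants).
def min_steps(ys, eps):
    """ys: point values in x-order; returns one step value per segment."""
    steps, lo, hi = [], ys[0], ys[0]
    for y in ys[1:]:
        if max(hi, y) - min(lo, y) > 2 * eps:   # y breaks the tolerance
            steps.append((lo + hi) / 2)          # close current segment
            lo = hi = y                          # open a new one
        else:
            lo, hi = min(lo, y), max(hi, y)
    steps.append((lo + hi) / 2)
    return steps

print(min_steps([1.0, 1.1, 0.9, 3.0, 3.2, 5.0], eps=0.25))
# [1.0, 3.1, 5.0]: each step is within +/-0.25 of its points
```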