
Showing papers on "Computational geometry published in 2017"


Proceedings ArticleDOI
21 Jul 2017
TL;DR: This work proposes a new mesh registration method that uses both 3D geometry and texture information to register all scans in a sequence to a common reference topology, and shows how using geometry alone results in significant errors in alignment when the motions are fast and non-rigid.
Abstract: While the ready availability of 3D scan data has influenced research throughout computer vision, less attention has focused on 4D data, that is 3D scans of moving non-rigid objects, captured over time. To be useful for vision research, such 4D scans need to be registered, or aligned, to a common topology. Consequently, extending mesh registration methods to 4D is important. Unfortunately, no ground-truth datasets are available for quantitative evaluation and comparison of 4D registration methods. To address this we create a novel dataset of high-resolution 4D scans of human subjects in motion, captured at 60 fps. We propose a new mesh registration method that uses both 3D geometry and texture information to register all scans in a sequence to a common reference topology. The approach exploits consistency in texture over both short and long time intervals and deals with temporal offsets between shape and texture capture. We show how using geometry alone results in significant errors in alignment when the motions are fast and non-rigid. We evaluate the accuracy of our registration and provide a dataset of 40,000 raw and aligned meshes. Dynamic FAUST extends the popular FAUST dataset to dynamic 4D data, and is available for research purposes at http://dfaust.is.tue.mpg.de.

330 citations


Book ChapterDOI
01 Jan 2017
TL;DR: In a geometric context, a collision or proximity query reports information about the relative configuration or placement of two objects; common examples of such queries include checking whether two objects overlap in space, whether their boundaries intersect, or computing the minimum Euclidean separation distance between their boundaries.
Abstract: In a geometric context, a collision or proximity query reports information about the relative configuration or placement of two objects. Some of the common examples of such queries include checking whether two objects overlap in space, or whether their boundaries intersect, or computing the minimum Euclidean separation distance between their boundaries. Hundreds of papers have been published on different aspects of these queries in computational geometry and related areas such as robotics, computer graphics, virtual environments, and computer-aided design. These queries arise in different applications including robot motion planning, dynamic simulation, haptic rendering, virtual prototyping, interactive walkthroughs, computer gaming, and molecular modeling. For example, a large-scale virtual environment, e.g., a walkthrough, creates a model of the environment with virtual objects. Such an environment is used to give the user a sense of presence in a synthetic world, and it should make the images of both the user and the surrounding objects feel solid. The objects should not pass through each other, and objects should move as expected when pushed, pulled, or grasped; see Fig. 39.0.1. Such actions require fast and accurate collision detection between the geometric representations of both real and virtual objects. Another example is rapid prototyping, where digital representations of mechanical parts, tools, and machines need to be tested for interconnectivity, functionality, and reliability. In Fig. 39.0.2, the motion of the pistons within the combustion chamber wall is simulated to check for tolerances and verify the design.
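
As a concrete and deliberately minimal illustration of the two basic queries named in this abstract, overlap testing and minimum separation distance, the sketch below implements them for axis-aligned bounding boxes in Python. The class and function names are hypothetical; this is only the elementary primitive that full collision-detection pipelines build on, not the chapter's algorithms.

```python
# Minimal sketch: overlap and separation-distance queries on axis-aligned
# bounding boxes (AABBs), the simplest proximity primitives used as a first
# filter in most collision-detection pipelines. Hypothetical example only.
import math

class AABB:
    def __init__(self, lo, hi):
        self.lo = lo  # tuple of per-axis minima
        self.hi = hi  # tuple of per-axis maxima

def overlap(a, b):
    """Do the two boxes intersect?"""
    return all(a.lo[i] <= b.hi[i] and b.lo[i] <= a.hi[i]
               for i in range(len(a.lo)))

def separation(a, b):
    """Minimum Euclidean distance between the two boxes (0 if they overlap)."""
    d2 = 0.0
    for i in range(len(a.lo)):
        gap = max(a.lo[i] - b.hi[i], b.lo[i] - a.hi[i], 0.0)
        d2 += gap * gap
    return math.sqrt(d2)

box1 = AABB((0, 0, 0), (1, 1, 1))
box2 = AABB((2, 0, 0), (3, 1, 1))
print(overlap(box1, box2))     # False
print(separation(box1, box2))  # 1.0
```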

273 citations


Journal ArticleDOI
TL;DR: In this article, the Hopcroft-Karp algorithm and auction algorithm are implemented using k-d trees to replace a linear scan with a geometric proximity query, which leads to a substantial performance gain over their purely combinatorial counterparts.
Abstract: Exploiting geometric structure to improve the asymptotic complexity of discrete assignment problems is a well-studied subject. In contrast, the practical advantages of using geometry for such problems have not been explored. We implement geometric variants of the Hopcroft-Karp algorithm for bottleneck matching (based on previous work by Efrat et al.) and of the auction algorithm by Bertsekas for Wasserstein distance computation. Both implementations use k-d trees to replace a linear scan with a geometric proximity query. Our interest in this problem stems from the desire to compute distances between persistence diagrams, a problem that comes up frequently in topological data analysis. We show that our geometric matching algorithms lead to a substantial performance gain, both in running time and in memory consumption, over their purely combinatorial counterparts. Moreover, our implementation significantly outperforms the only other implementation available for comparing persistence diagrams.
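
The core geometric idea, replacing a linear scan with a k-d tree proximity query, can be illustrated independently of the paper's Hopcroft-Karp and auction implementations. The sketch below is a hypothetical greedy nearest-partner lookup between two persistence diagrams using scipy's cKDTree under the L∞ metric; it shows only why the proximity query helps, not the authors' matching algorithms.

```python
# Illustration only: a k-d tree turns "nearest candidate partner" lookups
# from a linear scan into a logarithmic-ish proximity query. This is not the
# paper's Hopcroft-Karp / auction code, just the underlying geometric trick.
import numpy as np
from scipy.spatial import cKDTree

def nearest_partners(diagram_a, diagram_b):
    """For each point of diagram_a, its nearest point of diagram_b under the
    l-infinity metric (the metric used for persistence diagrams)."""
    tree = cKDTree(diagram_b)                          # build once
    dists, idx = tree.query(diagram_a, k=1, p=np.inf)  # one query per point
    return dists, idx

a = np.array([[0.0, 1.0], [2.0, 5.0], [3.0, 4.0]])  # (birth, death) pairs
b = np.array([[0.1, 1.1], [2.2, 4.9]])
dists, idx = nearest_partners(a, b)
print(idx, dists)  # index in b and distance of each point's nearest partner
```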

177 citations


Journal ArticleDOI
TL;DR: This work proposes to optimize for the geometric representation during the network learning process using a novel metric alignment layer that maps unstructured geometric data to a regular domain by minimizing the metric distortion of the map using the regularized Gromov–Wasserstein objective.
Abstract: Deep neural networks provide a promising tool for incorporating semantic information in geometry processing applications. Unlike image and video processing, however, geometry processing requires handling unstructured geometric data, and thus data representation becomes an important challenge in this framework. Existing approaches tackle this challenge by converting point clouds, meshes, or polygon soups into regular representations using, e.g., multi-view images, volumetric grids or planar parameterizations. In each of these cases, geometric data representation is treated as a fixed pre-process that is largely disconnected from the machine learning tool. In contrast, we propose to optimize for the geometric representation during the network learning process using a novel metric alignment layer. Our approach maps unstructured geometric data to a regular domain by minimizing the metric distortion of the map using the regularized Gromov-Wasserstein objective. This objective is parameterized by the metric of the target domain and is differentiable; thus, it can be easily incorporated into a deep network framework. Furthermore, the objective aims to align the metrics of the input and output domains, promoting consistent output for similar shapes. We show the effectiveness of our layer within a deep network trained for shape classification, demonstrating state-of-the-art performance for nonrigid shapes.

93 citations


Journal ArticleDOI
TL;DR: This work shows that Gaussian mixtures admit coresets of size polynomial in dimension and the number of mixture components, while being independent of the data set size, which means that one can harness computationally intensive algorithms to compute a good approximation on a significantly smaller data set.
Abstract: How can we train a statistical mixture model on a massive data set? In this work we show how to construct coresets for mixtures of Gaussians. A coreset is a weighted subset of the data, which guarantees that models fitting the coreset also provide a good fit for the original data set. We show that, perhaps surprisingly, Gaussian mixtures admit coresets of size polynomial in dimension and the number of mixture components, while being independent of the data set size. Hence, one can harness computationally intensive algorithms to compute a good approximation on a significantly smaller data set. More importantly, such coresets can be efficiently constructed both in distributed and streaming settings and do not impose restrictions on the data generating process. Our results rely on a novel reduction of statistical estimation to problems in computational geometry and new combinatorial complexity results for mixtures of Gaussians. Empirical evaluation on several real-world data sets suggests that our coreset-based approach enables significant reduction in training-time with negligible approximation error.
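
The interface such a coreset exposes, a weighted subset of the points on which one then fits the mixture, can be sketched with a toy importance-sampling routine. The code below is illustrative only: the sampling score is a crude stand-in (uniform mixed with distance to the centroid), not the sensitivity-based scheme with the guarantees proved in the paper, and all names and parameters are hypothetical.

```python
# Illustration of the coreset interface only: a weighted subsample on which a
# mixture model is then fitted. The sampling score below is a crude stand-in
# for the paper's sensitivity-based importance sampling.
import numpy as np

def toy_coreset(X, m, seed=0):
    """Return m points and weights; weighted sums over the coreset are
    unbiased estimates of the corresponding sums over all of X."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    score = np.linalg.norm(X - X.mean(axis=0), axis=1) ** 2
    prob = 0.5 / n + 0.5 * score / score.sum()     # mix with uniform; sums to 1
    idx = rng.choice(n, size=m, replace=True, p=prob)
    return X[idx], 1.0 / (m * prob[idx])

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(c, 0.3, size=(500, 2)) for c in ((0, 0), (4, 4), (8, 0))])
C, w = toy_coreset(X, m=100)
print(C.shape, w.sum())   # 100 weighted points; weights sum to roughly len(X)
# A weighted EM fit on (C, w) then stands in for a fit on the full data set.
```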

63 citations


Posted Content
TL;DR: In this paper, the lemmas of Sperner and Tucker from combinatorial topology and the theorems of Carathéodory, Helly, and Tverberg are discussed.
Abstract: We discuss five discrete results: the lemmas of Sperner and Tucker from combinatorial topology and the theorems of Carathéodory, Helly, and Tverberg from combinatorial geometry. We explore their connections and emphasize their broad impact in application areas such as game theory, graph theory, mathematical optimization, computational geometry, etc.

52 citations


Posted Content
TL;DR: In this paper, coresets for mixtures of Gaussians are constructed: a coreset is a weighted subset of the data which guarantees that models fitting the coreset also provide a good fit for the original data set, and its size is independent of the data set size.
Abstract: How can we train a statistical mixture model on a massive data set? In this work we show how to construct coresets for mixtures of Gaussians. A coreset is a weighted subset of the data, which guarantees that models fitting the coreset also provide a good fit for the original data set. We show that, perhaps surprisingly, Gaussian mixtures admit coresets of size polynomial in dimension and the number of mixture components, while being independent of the data set size. Hence, one can harness computationally intensive algorithms to compute a good approximation on a significantly smaller data set. More importantly, such coresets can be efficiently constructed both in distributed and streaming settings and do not impose restrictions on the data generating process. Our results rely on a novel reduction of statistical estimation to problems in computational geometry and new combinatorial complexity results for mixtures of Gaussians. Empirical evaluation on several real-world datasets suggests that our coreset-based approach enables significant reduction in training-time with negligible approximation error.

35 citations


Journal ArticleDOI
TL;DR: In this article, the authors survey the literature on protein cavity detection and classify algorithms into three categories: evolution-based, energy-based, and geometry-based algorithms, whose taxonomy is extended to include not only sphere-, grid-, and tessellation-based methods, but also surface-based, hybrid geometric, consensus, and time-varying methods.
Abstract: Detecting and analyzing protein cavities provides significant information about active sites for biological processes (e.g., protein-protein or protein-ligand binding) in molecular graphics and modeling. Using the three-dimensional structure of a given protein (i.e., atom types and their locations in 3D) as retrieved from a PDB (Protein Data Bank) file, it is now computationally viable to determine a description of these cavities. Such cavities correspond to pockets, clefts, invaginations, voids, tunnels, channels, and grooves on the surface of a given protein. In this work, we survey the literature on protein cavity computation and classify algorithmic approaches into three categories: evolution-based, energy-based, and geometry-based. Our survey focuses on geometric algorithms, whose taxonomy is extended to include not only sphere-, grid-, and tessellation-based methods, but also surface-based, hybrid geometric, consensus, and time-varying methods. Finally, we detail those techniques that have been customized for GPU (Graphics Processing Unit) computing.
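
Among the surveyed families, the grid-based approach is the easiest to sketch: voxelize the atoms and call empty voxels that cannot reach the bounding box an interior void. The code below is a hypothetical, stripped-down version of that idea (no probe radius, no pocket detection), not any specific surveyed tool; the spacing, the toy shell, and all names are illustrative assumptions.

```python
# Stripped-down grid-based cavity idea: voxelize the atoms, then report
# connected empty regions that cannot reach the bounding box ("interior
# voids"). Real tools add probe radii, pocket detection, and better geometry.
import numpy as np
from scipy.ndimage import label

def interior_voids(centers, radii, spacing=0.5, pad=2.0):
    lo = centers.min(axis=0) - pad
    hi = centers.max(axis=0) + pad
    dims = np.ceil((hi - lo) / spacing).astype(int) + 1
    grid = lo + spacing * np.stack(
        np.meshgrid(*[np.arange(d) for d in dims], indexing="ij"), axis=-1)
    occupied = np.zeros(tuple(dims), dtype=bool)
    for c, r in zip(centers, radii):          # brute force; fine for a sketch
        occupied |= np.linalg.norm(grid - c, axis=-1) <= r
    comp, ncomp = label(~occupied)            # connected components of empty space
    outside = set()                           # labels that touch the grid boundary
    for face in (comp[0], comp[-1], comp[:, 0], comp[:, -1],
                 comp[:, :, 0], comp[:, :, -1]):
        outside |= set(np.unique(face))
    return [k for k in range(1, ncomp + 1) if k not in outside]

# Toy "protein": a hollow shell of overlapping atoms enclosing a central void.
shell = np.array([(x, y, z) for x in range(5) for y in range(5) for z in range(5)
                  if min(x, y, z) == 0 or max(x, y, z) == 4], dtype=float)
print(interior_voids(shell, radii=np.full(len(shell), 1.2)))  # one enclosed cavity
```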

34 citations


Proceedings ArticleDOI
07 Nov 2017
TL;DR: This paper describes an implementation of fast near-neighbours queries (also known as range searching) with respect to the Fréchet distance by using a quadtree data structure to enumerate all curves in the database that have similar start and endpoints as the query curve.
Abstract: This paper describes an implementation of fast near-neighbours queries (also known as range searching) with respect to the Fréchet distance. The algorithm is designed to be efficient on practical data such as GPS trajectories. Our approach is to use a quadtree data structure to enumerate all curves in the database that have similar start and end points to the query curve. On these curves we run positive and negative filters to narrow the set of potential results. Only for those trajectories where these heuristics fail do we compute the Fréchet distance exactly, by running a novel recursive variant of the classic free-space diagram algorithm. Our implementation is among the top 3 submissions to the ACM SIGSPATIAL GIS Cup 2017 (more specific standings will be available on the competition's website).
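
The cheapest parts of such a pipeline, the endpoint pre-filter and an exact verification step, can be sketched as follows. Note the stand-ins: the paper uses a quadtree rather than the linear candidate scan shown here, and it computes the continuous Fréchet distance via a free-space diagram, whereas this sketch verifies with the classic discrete Fréchet dynamic program; all names are hypothetical.

```python
# Sketch of the pipeline's cheapest ideas: (1) prune candidates whose start or
# end point is already too far from the query (the paper does this with a
# quadtree; a linear scan is used here for brevity), then (2) verify the
# survivors. The discrete Fréchet DP below stands in for the paper's exact
# continuous free-space-diagram computation.
from functools import lru_cache
from math import dist

def discrete_frechet(P, Q):
    @lru_cache(maxsize=None)
    def c(i, j):
        d = dist(P[i], Q[j])
        if i == 0 and j == 0:
            return d
        if i == 0:
            return max(c(0, j - 1), d)
        if j == 0:
            return max(c(i - 1, 0), d)
        return max(min(c(i - 1, j), c(i - 1, j - 1), c(i, j - 1)), d)
    return c(len(P) - 1, len(Q) - 1)

def range_query(database, query, eps):
    """Return curves within (discrete) Fréchet distance eps of `query`."""
    hits = []
    for curve in database:
        # Negative filter: endpoints alone can certify distance > eps,
        # because endpoints must be matched to endpoints.
        if dist(curve[0], query[0]) > eps or dist(curve[-1], query[-1]) > eps:
            continue
        if discrete_frechet(tuple(curve), tuple(query)) <= eps:
            hits.append(curve)
    return hits

db = [[(0, 0), (1, 0), (2, 0)], [(0, 5), (1, 6), (2, 5)]]
print(range_query(db, [(0, 0.2), (1, -0.1), (2, 0.1)], eps=0.5))
```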

29 citations


Journal ArticleDOI
TL;DR: In this paper, a systematic approach that integrates computational geometry is used to study the mechanical behavior of gravelly sands, a behavior that is greatly affected by the morphology and size of gravel grains.
Abstract: The mechanical behaviors of gravelly sands are greatly affected by the morphology and size of gravel grains. In this paper, a systematic approach that integrated computational geometry was ...

28 citations


Journal ArticleDOI
TL;DR: For 2D convex hulls, it is proved that a version of the well-known algorithm by Kirkpatrick and Seidel (1986) or by Chan, Snoeyink, and Yap (1995) already attains this lower bound; for 3D convex hulls, a new algorithm is proposed.
Abstract: We prove the existence of an algorithm A for computing 2D or 3D convex hulls that is optimal for every point set in the following sense: for every sequence σ of n points and for every algorithm A′ in a certain class A, the running time of A on input σ is at most a constant factor times the running time of A′ on the worst possible permutation of σ for A′. In fact, we can establish a stronger property: for every sequence σ of points and every algorithm A′, the running time of A on σ is at most a constant factor times the average running time of A′ over all permutations of σ. We call algorithms satisfying these properties instance optimal in the order-oblivious and random-order setting. Such instance-optimal algorithms simultaneously subsume output-sensitive algorithms and distribution-dependent average-case algorithms, and all algorithms that do not take advantage of the order of the input or that assume the input is given in a random order. The class A under consideration consists of all algorithms in a decision tree model where the tests involve only multilinear functions with a constant number of arguments. To establish an instance-specific lower bound, we deviate from traditional Ben-Or-style proofs and adopt a new adversary argument. For 2D convex hulls, we prove that a version of the well-known algorithm by Kirkpatrick and Seidel [1986] or Chan, Snoeyink, and Yap [1995] already attains this lower bound. For 3D convex hulls, we propose a new algorithm. We further obtain instance-optimal results for a few other standard problems in computational geometry, such as maxima in 2D and 3D, orthogonal line segment intersection in 2D, finding bichromatic L∞-close pairs in 2D, offline orthogonal range searching in 2D, offline dominance reporting in 2D and 3D, offline half-space range reporting in 2D and 3D, and offline point location in 2D. Our framework also reveals a connection to distribution-sensitive data structures and yields new results as a byproduct, for example, on online orthogonal range searching in 2D and online half-space range reporting in 2D and 3D.
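
For readers who want a baseline to compare against, the sketch below is the textbook O(n log n) monotone-chain 2D convex hull, not the instance-optimal Kirkpatrick-Seidel-style algorithm analyzed in the paper; it is the kind of worst-case-optimal routine that instance optimality improves upon on easy inputs.

```python
# Baseline for comparison: Andrew's monotone-chain 2D convex hull, O(n log n)
# on every input. The paper's point is that a Kirkpatrick-Seidel-style
# algorithm can do better instance by instance; this is only the baseline.
def cross(o, a, b):
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(points):
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def half(seq):
        h = []
        for p in seq:
            while len(h) >= 2 and cross(h[-2], h[-1], p) <= 0:
                h.pop()
            h.append(p)
        return h
    lower, upper = half(pts), half(reversed(pts))
    return lower[:-1] + upper[:-1]   # counter-clockwise, no repeated endpoints

print(convex_hull([(0, 0), (2, 0), (1, 1), (2, 2), (0, 2), (1, 0.5)]))
# -> [(0, 0), (2, 0), (2, 2), (0, 2)]
```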

Journal ArticleDOI
TL;DR: A novel and efficient algorithm is introduced for labelling area features externally, i.e. outside their polygonal boundary; it can efficiently place labels with a quality that is close to the quality of traditional cartographic products made by human cartographers.
Abstract: One of the subtasks of automated map labelling that has received little attention so far is the labelling of areas. Geographic areas are often represented by concave polygons, which pose severe limitations on straightforward solutions due to their great variety of shape, a fact worsened by the lack of measures for quantifying feature-label relationships. We introduce a novel and efficient algorithm for labelling area features externally, i.e. outside their polygonal boundary. Two main contributions are presented in the following. First, a highly optimized algorithm for generating candidate placements is presented, utilizing algorithms from the field of computational geometry. Second, we describe a measure for scoring label positions. Both solutions are based on a series of well-established cartographic precepts about name positioning in the case of semantic enclaves such as islands or lakes. The results of our experiments show that our algorithm can efficiently place labels with a quality that is close to the quality of traditional cartographic products made by human cartographers.

Journal ArticleDOI
TL;DR: This work conquers the constrained surface registration problem by changing the Riemannian metric on the target surface to a hyperbolic metric so that the harmonic mapping is guaranteed to be a diffeomorphism under landmark constraints.
Abstract: Automatic computation of surface correspondence via harmonic map is an active research field in computer vision, computer graphics and computational geometry. It may help document and understand physical and biological phenomena and also has broad applications in biometrics, medical imaging and motion capture industries. Although numerous studies have been devoted to harmonic map research, limited progress has been made to compute a diffeomorphic harmonic map on general topology surfaces with landmark constraints. This work conquers this problem by changing the Riemannian metric on the target surface to a hyperbolic metric so that the harmonic mapping is guaranteed to be a diffeomorphism under landmark constraints. The computational algorithms are based on Ricci flow and nonlinear heat diffusion methods. The approach is general and robust. We employ our algorithm to study the constrained surface registration problem which applies to both computer vision and medical imaging applications. Experimental results demonstrate that, by changing the Riemannian metric, the registrations are always diffeomorphic and achieve relatively high performance when evaluated with some popular surface registration evaluation standards.


Journal ArticleDOI
TL;DR: In this article, localized surface geometry is integrated with ANCF elements by including such detail in the element mass matrix and forces; the integration can be implemented by overlapping the local geometry on the original ANCF element or by directly trimming the element domain to fit the required shape.

Journal ArticleDOI
TL;DR: A novel geometric predicate and a class of objects are defined that enable the authors to prove a linear bound on the number of intersecting polygon pairs for colliding 3D objects in that class.
Abstract: We define a novel geometric predicate and a class of objects that enables us to prove a linear bound on the number of intersecting polygon pairs for colliding 3D objects in that class. Our predicate is relevant both in theory and in practice: it is easy to check and it needs to consider only the geometric properties of the individual objects – it does not depend on the configuration of a given pair of objects. In addition, it characterizes a practically relevant class of objects: we checked our predicate on a large database of real‐world 3D objects and the results show that it holds for all but the most pathological ones.

Proceedings ArticleDOI
14 Jun 2017
TL;DR: An efficient and generalized symmetric-key geometric range search scheme on encrypted spatial data in the cloud is proposed; it supports queries with different range shapes and dimensions and extends secure kNN computation with a dynamic geometric transformation that transforms the points in the dataset and the queried geometric range simultaneously.
Abstract: With cloud services, users can easily host their data in the cloud and retrieve the part needed by search. Searchable encryption is proposed to conduct such a process in a privacy-preserving way, which allows a cloud server to perform search over the encrypted data in the cloud according to the search token submitted by the user. However, existing works mainly focus on textual data and merely take numerical spatial data into account. In particular, geometric range search is an important class of queries on spatial data and has wide applications in machine learning, location-based services (LBS), computer-aided design (CAD), and computational geometry. In this paper, we propose an efficient and generalized symmetric-key geometric range search scheme on encrypted spatial data in the cloud, which supports queries with different range shapes and dimensions. To provide secure and efficient search, we extend the secure kNN computation with dynamic geometric transformation, which dynamically transforms the points in the dataset and the queried geometric range simultaneously. Besides, we further extend the proposed scheme to support sub-linear search efficiency through novel usage of tree structures. We also present extensive experiments to evaluate the proposed schemes on a real-world dataset. The results show that the proposed schemes are efficient over encrypted datasets and secure against curious cloud servers.

Posted Content
TL;DR: In this article, the authors define interleaving of functors with common codomain as solutions to an extension problem, and they obtain comparisons with previous notions of interleaveaving via the study of future equivalences.
Abstract: One of the central notions to emerge from the study of persistent homology is that of interleaving distance. It has found recent applications in symplectic and contact geometry, sheaf theory, computational geometry, and phylogenetics. Here we present a general study of this topic. We define interleaving of functors with common codomain as solutions to an extension problem. In order to define interleaving distance in this setting we are led to categorical generalizations of Hausdorff distance, Gromov-Hausdorff distance, and the space of metric spaces. We obtain comparisons with previous notions of interleaving via the study of future equivalences. As an application we recover a definition of shift equivalences of discrete dynamical systems.

Journal ArticleDOI
TL;DR: In this paper, the authors introduce the notion of approximate mean curvature and show various convergence results for sequences of discrete varifolds associated with point clouds or pixel/voxel-type discretizations of d-surfaces in the Euclidean n-space, without restrictions on dimension and codimension.
Abstract: We show that the theory of varifolds can be suitably enriched to open the way to applications in the field of discrete and computational geometry. Using appropriate regularizations of the mass and of the first variation of a varifold we introduce the notion of approximate mean curvature and show various convergence results that hold, in particular, for sequences of discrete varifolds associated with point clouds or pixel/voxel-type discretizations of d-surfaces in the Euclidean n-space, without restrictions on dimension and codimension. The variational nature of the approach also allows us to consider surfaces with singularities, and in that case the approximate mean curvature is consistent with the generalized mean curvature of the limit surface. A series of numerical tests are provided in order to illustrate the effectiveness and generality of the method.

Journal ArticleDOI
TL;DR: In this paper, a computational formulation able to simulate crack initiation and growth in layered structural systems is proposed, in which a moving mesh strategy, based on the Arbitrary Lagrangian-Eulerian (ALE) approach, is combined with a cohesive interface methodology where weak-based moving connections are implemented by using a finite element formulation.
Abstract: A computational formulation able to simulate crack initiation and growth in layered structural systems is proposed. In order to identify the position of the onset interfacial defects and their dynamic debonding mechanisms, a moving mesh strategy, based on the Arbitrary Lagrangian-Eulerian (ALE) approach, is combined with a cohesive interface methodology, in which weak-based moving connections are implemented by using a finite element formulation. The numerical formulation is implemented in separate steps, concerned first with identifying the correct position of the crack onset and subsequently with the growth, by changing the computational geometry of the interfaces. In order to verify the accuracy and to validate the proposed methodology, comparisons with experimental and numerical results are developed. In particular, results obtained by the proposed model, in terms of location and speed of the debonding front, are compared with those arising from the literature. Moreover, a parametric study in terms of the geometrical characteristics of the layered structure is developed. The investigation reveals the impact of the stiffening of the reinforced strip and of the adhesive thickness on the dynamic debonding mechanisms.

Journal ArticleDOI
TL;DR: A framework is proposed that is capable of decomposing, and efficiently solving, a wide variety of complex piecewise polynomial constraint systems that include both zero constraints and inequality constraints, with zero-dimensional or univariate solution spaces.
Abstract: Piecewise polynomial constraint systems are common in numerous problems in computational geometry, such as constraint programming, modeling, and kinematics. We propose a framework that is capable of decomposing, and efficiently solving, a wide variety of complex piecewise polynomial constraint systems that include both zero constraints and inequality constraints, with zero-dimensional or univariate solution spaces. Our framework combines a subdivision-based polynomial solver with a decomposition algorithm in order to handle large and complex systems. We demonstrate the capabilities of our framework on several types of problems and show its performance improvement over a state-of-the-art solver.
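
The univariate kernel of a subdivision-based polynomial solver of the kind this framework builds on can be sketched in a few lines: represent the polynomial on an interval by its Bernstein coefficients, discard the interval if all coefficients share a sign, and otherwise split with de Casteljau. This is only the basic subdivision idea, not the paper's decomposition framework or its handling of inequality constraints; names and tolerances are hypothetical.

```python
# Minimal univariate kernel of a subdivision-based polynomial solver: a
# polynomial on [a, b] given by its Bernstein coefficients is discarded if all
# coefficients share a sign (convex-hull property: no root possible), reported
# if the interval is tiny, and otherwise split with de Casteljau.
def de_casteljau_split(coeffs, t=0.5):
    left, right = [coeffs[0]], [coeffs[-1]]
    level = list(coeffs)
    while len(level) > 1:
        level = [(1 - t) * x + t * y for x, y in zip(level, level[1:])]
        left.append(level[0])
        right.append(level[-1])
    return left, right[::-1]

def subdivision_roots(coeffs, a, b, tol=1e-9):
    if all(c > 0 for c in coeffs) or all(c < 0 for c in coeffs):
        return []                       # no root in [a, b]
    if b - a < tol:
        return [(a + b) / 2]            # interval small enough: report a root
    mid = (a + b) / 2
    left, right = de_casteljau_split(coeffs)
    return subdivision_roots(left, a, mid, tol) + subdivision_roots(right, mid, b, tol)

# x^2 - 0.36 on [0, 1] in Bernstein form has coefficients [-0.36, -0.36, 0.64]
# (values p(0) and p(1) at the ends); the root 0.6 is recovered.
print(subdivision_roots([-0.36, -0.36, 0.64], 0.0, 1.0))  # -> a root near 0.6
```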

Proceedings ArticleDOI
09 Jan 2017
TL;DR: In this article, an isogeometric Kirchhoff-Love inverse-shell element (iKLS) is developed on the basis of a weighted-least-squares functional that uses membrane and bending strain measures consistent with the Kirchhoff-Love shell theory.
Abstract: This paper presents a novel isogeometric inverse Finite Element Method (iFEM) formulation, which couples the NURBS-based isogeometric analysis (IGA) together with the iFEM methodology for shape sensing of complex/curved thin shell structures. The primary goal is to be geometrically exact regardless of the discretization size and to obtain a smoother shape sensing even with fewer strain sensors. For this purpose, an isogeometric Kirchhoff-Love inverse-shell element (iKLS) is developed on the basis of a weighted-least-squares functional that uses membrane and bending strain measures consistent with the Kirchhoff-Love shell theory. The novel iKLS element employs NURBS not only as a geometry discretization technology, but also as a discretization tool for the displacement domain. Therefore, this development serves the following beneficial aspects of the IGA for the shape-sensing analysis based on the iFEM methodology: (1) exact representation of the computational geometry, (2) simplified mesh refinement, (3) smooth (high-order continuity) basis functions, and finally (4) integration of design and analysis in only one computational domain. The superior capabilities of the iKLS element for shape sensing of curved shells are demonstrated by various case studies, including a pinched hemisphere and a partly clamped hyperbolic paraboloid. Finally, the effect of sensor locations, number of sensors, and the discretization of the geometry on solution accuracy is examined.

Journal ArticleDOI
TL;DR: A new algorithm is proposed that, for the current node, chooses among its neighbors in the graph the node with the nearest polar angle with respect to the node found in the previous iteration; the algorithm is then modified to obtain a procedure of the given complexity.

Journal ArticleDOI
TL;DR: The study of Macbeath regions in a combinatorial setting is initiated and near-optimal bounds for several basic geometric set systems are established.
Abstract: The existence of Macbeath regions is a classical theorem in convex geometry (Macbeath, Ann. Math. 56:269–293, 1952), with recent applications in discrete and computational geometry. In this paper, we initiate the study of Macbeath regions in a combinatorial setting and establish near-optimal bounds for several basic geometric set systems.

Proceedings Article
01 Jan 2017
TL;DR: For both the Fréchet distance and the symmetric difference, it is shown that finding the simple polygon S restricted to a grid that best resembles a simple polygon P is NP-complete, even if: (1) S and P are required to have equal area; (2) turns are required to occur in a specified sequence for the Fréchet distance; (3) S is permitted to have holes for the symmetric difference.
Abstract: For both the Fréchet distance and the symmetric difference, we show that finding the simple polygon S restricted to a grid that best resembles a simple polygon P is NP-complete, even if: (1) we require that S and P have equal area; (2) we require turns to occur in a specified sequence for the Fréchet distance; (3) we permit S to have holes for the symmetric difference.

Journal ArticleDOI
TL;DR: It is observed that the local priors extracted from a family of 3D shapes lie in a very low-dimensional manifold; consequently, a compact and informative subset of priors can be learned to efficiently encode all shapes of the same family.
Abstract: We present a sparse optimization framework for extracting sparse shape priors from a collection of 3D models. Shape priors are defined as point‐set neighborhoods sampled from shape surfaces which convey important information encompassing normals and local shape characterization. A 3D shape model can be considered to be formed with a set of 3D local shape priors, while most of them are likely to have similar geometry. Our key observation is that the local priors extracted from a family of 3D shapes lie in a very low‐dimensional manifold. Consequently, a compact and informative subset of priors can be learned to efficiently encode all shapes of the same family. A comprehensive library of local shape priors is first built with the given collection of 3D models of the same family. We then formulate a global, sparse optimization problem which enforces selecting representative priors while minimizing the reconstruction error. To solve the optimization problem, we design an efficient solver based on the Augmented Lagrangian Multipliers method (ALM). Extensive experiments exhibit the power of our data‐driven sparse priors in elegantly solving several high‐level shape analysis applications and geometry processing tasks, such as shape retrieval, style analysis and symmetry detection.

Proceedings ArticleDOI
16 Jan 2017
TL;DR: This work shows that, for any ε > 0, the triangles can be cut into O(n^{3/2+ε}) pieces, where each piece is a connected semi-algebraic set whose description complexity depends only on the choice of ε, such that the depth relation among these pieces is now a proper partial order.
Abstract: Given n non-vertical pairwise disjoint triangles in 3-space, their vertical depth (above/below) relation may contain cycles. We show that, for any ε > 0, the triangles can be cut into O(n^{3/2+ε}) pieces, where each piece is a connected semi-algebraic set whose description complexity depends only on the choice of ε, such that the depth relation among these pieces is now a proper partial order. This bound is nearly tight in the worst case. We are not aware of any previous study of this problem with a subquadratic bound on the number of pieces. This work extends the recent study by two of the authors on eliminating depth cycles among lines in 3-space. Our approach is again algebraic, and makes use of a recent variant of the polynomial partitioning technique, due to Guth, which leads to a recursive procedure for cutting the triangles. In contrast to the case of lines, our analysis here is considerably more involved, due to the two-dimensional nature of the objects being cut, so additional tools, from topology and algebra, need to be brought to bear. Our result essentially settles a 35-year-old open problem in computational geometry, motivated by hidden-surface removal in computer graphics.

Proceedings ArticleDOI
01 Jun 2017
TL;DR: The first data structure with near-linear space is given that achieves truly sublinear query time when the dimension is any constant multiple of log n.
Abstract: We revisit the orthogonal range searching problem and the exact l_infinity nearest neighbor searching problem for a static set of n points when the dimension d is moderately large. We give the first data structure with near linear space that achieves truly sublinear query time when the dimension is any constant multiple of log n. Specifically, the preprocessing time and space are O(n^{1+delta}) for any constant delta>0, and the expected query time is n^{1-1/O(c log c)} for d = c log n. The data structure is simple and is based on a new "augmented, randomized, lopsided" variant of k-d trees. It matches (in fact, slightly improves) the performance of previous combinatorial algorithms that work only in the case of offline queries [Impagliazzo, Lovett, Paturi, and Schneider (2014) and Chan (SODA'15)]. It leads to slightly faster combinatorial algorithms for all-pairs shortest paths in general real-weighted graphs and rectangular Boolean matrix multiplication. In the offline case, we show that the problem can be reduced to the Boolean orthogonal vectors problem and thus admits an n^{2-1/O(log c)}-time non-combinatorial algorithm [Abboud, Williams, and Yu (SODA'15)]. This reduction is also simple and is based on range trees. Finally, we use a similar approach to obtain a small improvement to Indyk's data structure [FOCS'98] for approximate l_infinity nearest neighbor search when d = c log n.
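
As a point of reference for what is being modified, the sketch below is a plain k-d tree with standard orthogonal range reporting, not the paper's augmented, randomized, lopsided variant; all identifiers are hypothetical.

```python
# Reference point only: a textbook k-d tree with orthogonal range reporting.
# The paper's data structure is an "augmented, randomized, lopsided" variant
# with very different guarantees when d = c log n.
class KDNode:
    __slots__ = ("point", "axis", "left", "right")
    def __init__(self, point, axis, left, right):
        self.point, self.axis, self.left, self.right = point, axis, left, right

def build(points, depth=0):
    if not points:
        return None
    axis = depth % len(points[0])
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2
    return KDNode(points[mid], axis,
                  build(points[:mid], depth + 1),
                  build(points[mid + 1:], depth + 1))

def range_report(node, lo, hi, out):
    """Collect all points p with lo[i] <= p[i] <= hi[i] on every axis i."""
    if node is None:
        return
    p, a = node.point, node.axis
    if all(lo[i] <= p[i] <= hi[i] for i in range(len(p))):
        out.append(p)
    if lo[a] <= p[a]:           # the query box can extend into the left subtree
        range_report(node.left, lo, hi, out)
    if p[a] <= hi[a]:           # ... and/or into the right subtree
        range_report(node.right, lo, hi, out)

tree = build([(1, 1), (2, 5), (4, 3), (7, 2), (5, 6)])
hits = []
range_report(tree, lo=(1, 1), hi=(5, 5), out=hits)
print(hits)   # the points inside the box [1,5] x [1,5]
```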

Proceedings ArticleDOI
01 Oct 2017
TL;DR: A novel automatic method is proposed for non-rigid 3D dynamic surface tracking with surface Ricci flow and Teichmüller map methods; it is applicable to low-quality meshes and is applied to multiple sequences of 3D facial surfaces with large expression deformations.
Abstract: 3D dynamic surface tracking is an important research problem and plays a vital role in many computer vision and medical imaging applications. However, it is still challenging to efficiently register surface sequences which have large deformations and strong noise. In this paper, we propose a novel automatic method for non-rigid 3D dynamic surface tracking with surface Ricci flow and Teichmüller map methods. According to quasi-conformal Teichmüller theory, the Teichmüller map minimizes the maximal dilation, so that our method is able to automatically register surfaces with large deformations. Besides, the adoption of Delaunay triangulation and quadrilateral meshes makes our method applicable to low-quality meshes. In our work, the 3D dynamic surfaces are acquired by a high-speed 3D scanner. We first identify sparse surface features using machine learning methods in the texture space. Then we assign landmark features with different curvature settings, and the Riemannian metric of the surface is computed by the dynamic Ricci flow method, such that all the curvatures are concentrated on the feature points and the surface is flat everywhere else. The registration among frames is computed by the Teichmüller mappings, which align the feature points with least angle distortion. We apply our new method to multiple sequences of 3D facial surfaces with large expression deformations and compare it with two other state-of-the-art tracking methods. The effectiveness of our method is demonstrated by the clearly improved accuracy and efficiency.

Proceedings ArticleDOI
01 Jun 2017
TL;DR: In this paper, the polynomial method, specifically Chebyshev polynomials, is applied to obtain new geometric approximation algorithms in low constant dimensions, including an algorithm for constructing epsilon-kernels of any d-dimensional n-point set in close to optimal time, up to a small near-(1/epsilon)^{3/2} factor.
Abstract: We apply the polynomial method - specifically, Chebyshev polynomials - to obtain a number of new results on geometric approximation algorithms in low constant dimensions. For example, we give an algorithm for constructing epsilon-kernels (coresets for approximate width and approximate convex hull) in close to optimal time O(n + (1/epsilon)^{(d-1)/2}), up to a small near-(1/epsilon)^{3/2} factor, for any d-dimensional n-point set. We obtain an improved data structure for Euclidean *approximate nearest neighbor search* with close to O(n log n + (1/epsilon)^{d/4} n) preprocessing time and O((1/epsilon)^{d/4} log n) query time. We obtain improved approximation algorithms for discrete Voronoi diagrams, diameter, and bichromatic closest pair in the L_s-metric for any even integer constant s >= 2. The techniques are general and may have further applications.
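
The object being constructed, an epsilon-kernel, is easy to illustrate with the much older direction-sampling idea: keep, for each of a set of roughly uniformly spaced directions, the extreme input point in that direction. The sketch below shows that idea in 2D and makes no attempt at the paper's Chebyshev-based near-optimal running time; it also assumes the point set has already been made fat by an affine normalization, and its names are hypothetical.

```python
# Classical direction-sampling sketch of an epsilon-kernel (coreset for
# directional width) in 2D: keep the extreme point in each of k roughly
# uniformly spaced directions. Assumes a fat (affinely normalized) point set;
# not the paper's Chebyshev-based construction.
import math

def direction_kernel(points, k):
    kernel = set()
    for j in range(k):
        theta = 2 * math.pi * j / k
        u = (math.cos(theta), math.sin(theta))
        extreme = max(points, key=lambda p: p[0] * u[0] + p[1] * u[1])
        kernel.add(extreme)
    return list(kernel)

pts = [(math.cos(t / 40), math.sin(t / 40)) for t in range(252)]  # near-circle
print(len(pts), "->", len(direction_kernel(pts, k=16)))  # far fewer points kept
```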