
Showing papers on "Point (geometry)" published in 2012


Journal ArticleDOI
TL;DR: In this paper, dimensionality features are computed on spherical neighborhoods at various radii from combinations of the eigenvalues of the local structure tensor, indicating whether the local geometry is more linear (1D), planar (2D), or volumetric (3D), and the optimal neighborhood size for each point is selected accordingly.
Abstract: This paper presents a multi-scale method that computes robust geometric features on lidar point clouds in order to retrieve the optimal neighborhood size for each point. Three dimensionality features are calculated on spherical neighborhoods at various radii. Based on combinations of the eigenvalues of the local structure tensor, they describe the shape of the neighborhood, indicating whether the local geometry is more linear (1D), planar (2D), or volumetric (3D). Two radius-selection criteria have been tested and compared for automatically finding the optimal neighborhood radius for each point. Moreover, this procedure yields a dimensionality labelling, giving significant hints for classification and segmentation purposes. The method is successfully applied to 3D point clouds from airborne, terrestrial, and mobile mapping systems, since no a priori knowledge of the distribution of the 3D points is required. The extracted dimensionality features and labellings are then favorably compared to those computed from constant-size neighborhoods.
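For illustration, here is a minimal sketch (not the authors' code) of how dimensionality features of this kind can be derived from the eigenvalues of the local structure tensor of a spherical neighborhood; the exact feature definitions and the radius-selection criteria used in the paper may differ.

```python
import numpy as np

def dimensionality_features(points, center, radius):
    """Linearity/planarity/scattering features of one spherical neighborhood,
    derived from the eigenvalues of the local structure tensor.
    Illustrative sketch; the paper's exact feature definitions may differ."""
    dist = np.linalg.norm(points - center, axis=1)
    nbrs = points[dist <= radius]
    if len(nbrs) < 3:
        return None
    cov = np.cov(nbrs.T)                         # local structure tensor
    lam = np.linalg.eigvalsh(cov)[::-1]          # lambda1 >= lambda2 >= lambda3
    lam = np.clip(lam, 1e-12, None)
    linearity  = (lam[0] - lam[1]) / lam[0]      # 1D-like neighborhood
    planarity  = (lam[1] - lam[2]) / lam[0]      # 2D-like neighborhood
    scattering = lam[2] / lam[0]                 # 3D (volumetric) neighborhood
    return linearity, planarity, scattering

# Example on a synthetic, roughly planar cloud; the label is the dominant feature.
rng = np.random.default_rng(0)
cloud = rng.normal(size=(2000, 3)) * [5.0, 5.0, 0.05]
for radius in (1.0, 2.0, 4.0):
    feats = dimensionality_features(cloud, cloud[0], radius)
    if feats is not None:
        print(radius, [round(f, 2) for f in feats], ["1D", "2D", "3D"][int(np.argmax(feats))])
```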

297 citations


Journal ArticleDOI
01 Nov 2012
TL;DR: A novel formulation of the recently-introduced concept of capacity-constrained Voronoi tessellation as an optimal transport problem and an efficient optimization technique of point distributions via constrained minimization in the space of power diagrams are presented.
Abstract: We present a fast, scalable algorithm to generate high-quality blue noise point distributions of arbitrary density functions. At its core is a novel formulation of the recently-introduced concept of capacity-constrained Voronoi tessellation as an optimal transport problem. This insight leads to a continuous formulation able to enforce the capacity constraints exactly, unlike previous work. We exploit the variational nature of this formulation to design an efficient optimization technique of point distributions via constrained minimization in the space of power diagrams. Our mathematical, algorithmic, and practical contributions lead to high-quality blue noise point sets with improved spectral and spatial properties.

232 citations


Journal ArticleDOI
TL;DR: In this article, the authors investigated the minimum uncut chip thickness (MUCT) of rounded-edge tools and proposed analytical models based on identifying the stagnation point of the workpiece material during the machining process.

201 citations


Journal ArticleDOI
TL;DR: This paper provides a formal description of the Iterative Closest Point algorithm, extends it to registration of partially overlapping surfaces, proves its convergence, derives the required covariance matrices for a set of selected applications, and presents means for optimizing the runtime.
Abstract: Since its introduction in the early 1990s, the Iterative Closest Point (ICP) algorithm has become one of the most well-known methods for geometric alignment of 3D models. Given two roughly aligned shapes represented by two point sets, the algorithm iteratively establishes point correspondences given the current alignment of the data and computes a rigid transformation accordingly. From a statistical point of view, however, it implicitly assumes that the points are observed with isotropic Gaussian noise. In this paper, we show that this assumption may lead to errors and generalize the ICP such that it can account for anisotropic and inhomogeneous localization errors. We 1) provide a formal description of the algorithm, 2) extend it to registration of partially overlapping surfaces, 3) prove its convergence, 4) derive the required covariance matrices for a set of selected applications, and 5) present means for optimizing the runtime. An evaluation on publicly available surface meshes as well as on a set of meshes extracted from medical imaging data shows a dramatic increase in accuracy compared to the original ICP, especially in the case of partial surface registration. As point-based surface registration is a central component in various applications, the potential impact of the proposed method is high.
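The paper's contribution is an anisotropic generalization; as background only, the following is a minimal sketch of the standard isotropic ICP loop it builds on, using brute-force closest-point search and the usual SVD-based rigid fit (not the anisotropic variant described above).

```python
import numpy as np

def best_rigid_transform(P, Q):
    """Least-squares rigid transform (R, t) mapping point set P onto Q (Kabsch/SVD)."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    return R, cq - R @ cp

def icp(source, target, n_iter=30):
    """Plain isotropic ICP: alternate closest-point matching and rigid fitting.
    Brute-force O(N*M) matching; not the anisotropic variant of the paper."""
    src = source.copy()
    R_total, t_total = np.eye(3), np.zeros(3)
    for _ in range(n_iter):
        d2 = ((src[:, None, :] - target[None, :, :]) ** 2).sum(-1)
        matches = target[d2.argmin(axis=1)]          # closest target point per source point
        R, t = best_rigid_transform(src, matches)
        src = src @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total

# Toy example: recover a small known rotation and translation.
rng = np.random.default_rng(1)
target = rng.uniform(-1.0, 1.0, size=(300, 3))
angle = 0.1
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([0.05, 0.0, 0.02])
source = (target - t_true) @ R_true                  # so that target = R_true @ source + t_true
R_est, t_est = icp(source, target)
print(np.round(R_est, 3))
print(np.round(t_est, 3))                            # should approach t_true
```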

134 citations


Journal ArticleDOI
TL;DR: In this article, the authors present an algorithm for contact detection between polygonal or polyhedral (3-D) convex particles in the Discrete Element Method (DEM).

131 citations


Journal Article
TL;DR: In this paper, direct proofs of some common fixed point results for two and three mappings under weak contractive conditions are given and some of these results are improved by using different arguments of control functions.
Abstract: In this paper direct proofs of some common fixed point results for two and three mappings under weak contractive conditions are given. Some of these results are improved by using different arguments of control functions. Examples are presented showing that some generalizations cannot be obtained and also that our results are distinct from the existing ones.
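For readers unfamiliar with the terminology, a representative weak contractive condition of the kind referred to here (in the sense of Rhoades) can be written as follows; the specific conditions for two and three mappings treated in the paper may differ in detail.

```latex
% A representative weak contractive condition (in the sense of Rhoades);
% the paper's hypotheses for two and three mappings may differ in detail.
d(Tx, Ty) \le d(x, y) - \varphi\bigl(d(x, y)\bigr) \qquad \text{for all } x, y \in X,
```

where T is a self-map of a metric space (X, d) and φ : [0, ∞) → [0, ∞) is a continuous, nondecreasing control function with φ(t) = 0 if and only if t = 0.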

109 citations


Journal ArticleDOI
TL;DR: In this paper, the authors proposed a method based on a fusion of persistent scatterer (PS) point clouds obtained from several stacks of meter-resolution synthetic aperture radar (SAR) data.
Abstract: In this paper, the feasibility of monitoring the shape and deformation of single buildings from space is investigated. The methodology is based on a fusion of persistent scatterer (PS) point clouds obtained from several stacks of meter-resolution synthetic aperture radar (SAR) data. This kind of high-resolution imagery as well as accurate orbit information is available from, e.g., TerraSAR-X. The stacks are processed individually applying persistent scatterer interferometry (PSI), which provides deformation and height estimates for the PS. However, the geocoded PS point clouds cannot simply be merged in one common coordinate system, like UTM, because of residual offsets with respect to their final true positions. These deviations originate from the height uncertainty of the reference point, which has to be chosen during PSI processing of each stack. The presented methodology allows for a fusion of several PS point clouds, i.e., the correct reference point heights can be recovered. The algorithm is based on a point cloud matching procedure that consists of a determination of appropriate point correspondences and a minimization of the distances between all selected pairs of points in a least-squares sense. In addition, the reconstruction of the original motion vector from the deformation measurements in line of sight provided by PSI is desirable. The availability of separated motion components in vertical and horizontal directions greatly enhances the insights into deformation events at buildings and ground. To this end, the fused point clouds are used for a decomposition of motion components. Because of the limited sensitivity of ascending and descending TerraSAR-X stacks to deformation in the north-south direction, the reconstruction of motion vector components is restricted to components in the east-west and vertical directions. The latter are estimated in a least-squares adjustment including all PS within a spatially limited area. Deformation estimates of stacks from ascending and descending tracks must be included in order to separate motion components; they cannot be determined precisely from a combination of solely equal-heading tracks, as the lines of sight do not differ enough. Finally, deformation maps of the urban area are available that separately show seasonal and linear deformation in horizontal as well as vertical directions. These maps comprise important information on subsidence or uplift as well as structural stress at buildings due to thermal dilation. As a result of the presented methodology, and for the first time, sufficient and precise motion estimates are available for a detailed monitoring of single objects using meter-resolution SAR data in PSI. Several examples are discussed using the results of motion component estimation based on the fusion of four data stacks evaluated by PSI.
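As a concrete illustration of the motion decomposition step described above, the sketch below solves in a least-squares sense for east-west and vertical motion from line-of-sight (LOS) displacements observed in ascending and descending geometries. The function name, the example LOS unit vectors, and the displacement values are assumptions for illustration, not values from the paper.

```python
import numpy as np

def decompose_los(los_unit_vectors, d_los):
    """Least-squares east-west and vertical motion from LOS displacements.

    los_unit_vectors : (k, 3) array, rows = (east, north, up) components of the
                       line-of-sight unit vector for each acquisition geometry.
    d_los            : (k,) observed LOS displacements.
    The north component is dropped, reflecting the weak N-S sensitivity noted in
    the text, so at least one ascending and one descending geometry are needed.
    """
    A = los_unit_vectors[:, [0, 2]]               # keep east and up columns only
    (d_east, d_up), *_ = np.linalg.lstsq(A, d_los, rcond=None)
    return d_east, d_up

# Hypothetical example: true motion of 3 mm east and -5 mm vertical (subsidence),
# observed from one ascending and one descending right-looking geometry.
# The unit vectors below are illustrative, not actual TerraSAR-X values.
los = np.array([
    [-0.62, -0.11, 0.78],   # ascending
    [ 0.62, -0.11, 0.78],   # descending
])
truth = np.array([3.0, 0.0, -5.0])                # (east, north, up) motion in mm
d_obs = los @ truth
print(decompose_los(los, d_obs))                  # approximately (3.0, -5.0)
```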

95 citations


Journal ArticleDOI
TL;DR: Two new, efficient solutions to the two-view, relative pose problem from three image point correspondences and one common reference direction are presented: a simple, closed-form solution and a solution based on algebraic geometry that offers numerical advantages.
Abstract: This paper presents two new, efficient solutions to the two-view, relative pose problem from three image point correspondences and one common reference direction. This three-plus-one problem can be used either as a substitute for the classic five-point algorithm, using a vanishing point for the reference direction, or to make use of an inertial measurement unit commonly available on robots and mobile devices where the gravity vector becomes the reference direction. We provide a simple, closed-form solution and a solution based on algebraic geometry which offers numerical advantages. In addition, we introduce a new method for computing visual odometry with RANSAC and four point correspondences per hypothesis. In a set of real experiments, we demonstrate the power of our approach by comparing it to the five-point method in a hypothesize-and-test visual odometry setting.

86 citations


01 Jan 2012
TL;DR: In this article, a new method is proposed to identify vortices directly from velocity field information, which defines a point inside a vortex as a point around which there is rotation, and a measure of the strength of rotation at the point is also introduced based on the magnitude of the velocity components in the neighbourhood.
Abstract: Initially, this paper presents an overview of the existing methods used for the identification of vortices in fluids, attempting to point out their different strengths and weaknesses. In addition, a new method is proposed to identify vortices directly from velocity field information. It defines a point inside a vortex as a point around which there is rotation. By looking at the velocity directions in the neighbourhood of a point, we decide if there is rotation. A measure of the strength of rotation at the point is also introduced based on the magnitude of the velocity components in the neighbourhood. The method is tested on simulated data and performs well. Finally, a Lagrangian identification method (Direct Lyapunov Exponents, DLE) is used to attempt to calibrate the most widely used Eulerian method in order to match an objectively defined edge. The results here indicate that it might not be possible to find a threshold for this method to match the edges found using the DLE method.
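As a rough illustration of the idea of deciding rotation from the velocity directions around a point, the sketch below checks whether the tangential sense of the neighbouring velocities is consistent and reports a simple strength measure; the decision rule, threshold, and strength definition are illustrative, not the paper's exact criterion.

```python
import numpy as np

def is_rotation_point(positions, velocities, center, tol=0.95):
    """Decide whether the 2D flow rotates around `center`.

    positions, velocities : (k, 2) arrays for the neighbouring sample points.
    The point counts as a rotation centre if the tangential sense of the
    neighbouring velocities (sign of the 2D cross product r x v) is consistent;
    the strength is the mean absolute tangential speed. Illustrative criterion,
    not the paper's exact decision rule.
    """
    r = positions - center
    cross = r[:, 0] * velocities[:, 1] - r[:, 1] * velocities[:, 0]
    r_norm = np.clip(np.linalg.norm(r, axis=1), 1e-12, None)
    tangential = cross / r_norm                     # signed tangential speed
    consistency = abs(np.sign(tangential).sum()) / len(tangential)
    return consistency >= tol, float(np.mean(np.abs(tangential)))

# Solid-body vortex u = (-y, x) around the origin vs. a uniform flow.
rng = np.random.default_rng(2)
pos = rng.uniform(-1.0, 1.0, size=(200, 2))
vortex = np.stack([-pos[:, 1], pos[:, 0]], axis=1)
uniform = np.tile([1.0, 0.0], (200, 1))
print(is_rotation_point(pos, vortex, np.zeros(2)))    # (True, ...)
print(is_rotation_point(pos, uniform, np.zeros(2)))   # (False, ...)
```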

82 citations


Journal ArticleDOI
01 Nov 2012
TL;DR: A unified analysis and general synthesis algorithms for point distributions are proposed: the synthesis algorithms can generate distributions with given target characteristics, possibly extracted from an example point set, and the analysis introduces a unified characterization of distributions by mapping them to a space implied by pair correlations.
Abstract: Analyzing and synthesizing point distributions are of central importance for a wide range of problems in computer graphics. Existing synthesis algorithms can only generate white or blue-noise distributions with characteristics dictated by the underlying processes used, and analysis tools have not been focused on exploring relations among distributions. We propose a unified analysis and general synthesis algorithms for point distributions. We employ the pair correlation function as the basis of our methods and design synthesis algorithms that can generate distributions with given target characteristics, possibly extracted from an example point set, and introduce a unified characterization of distributions by mapping them to a space implied by pair correlations. The algorithms accept example and output point sets of different sizes and dimensions, are applicable to multi-class distributions and non-Euclidean domains, simple to implement and run in O(n) time. We illustrate applications of our method to real world distributions.
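As a small building block for this kind of analysis, the sketch below estimates the pair correlation function of a 2D point set in the unit square with a Gaussian kernel; the normalization is rough (edge effects are ignored) and it is not the estimator used in the paper.

```python
import numpy as np

def pair_correlation(points, radii, sigma=0.01):
    """Kernel estimate of the pair correlation function g(r) of a 2D point set
    in the unit square. Normalization is rough and edge effects are ignored;
    this is not the estimator used in the paper."""
    n = len(points)
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    d = d[np.triu_indices(n, k=1)]                 # all pairwise distances
    g = np.empty(len(radii))
    for i, r in enumerate(radii):
        k = np.exp(-0.5 * ((d - r) / sigma) ** 2) / (np.sqrt(2.0 * np.pi) * sigma)
        # Normalize by the expected pair density of a Poisson process of the same intensity.
        g[i] = k.sum() / (n * n * np.pi * r)
    return g

rng = np.random.default_rng(3)
pts = rng.uniform(size=(500, 2))                   # white-noise (Poisson-like) sample
r = np.linspace(0.02, 0.2, 10)
print(np.round(pair_correlation(pts, r), 2))       # near 1 at small r; edge effects lower it at larger r
```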

82 citations


Journal ArticleDOI
TL;DR: In this paper, the existence of a unique best proximity point for Geraghty contractions is proved and sufficient conditions for its existence are provided, which is an extension of a result due to Gaghty (Proc. Am. Math. Soc. 40:604-608, 1973).
Abstract: The purpose of this paper is to provide sufficient conditions for the existence of a unique best proximity point for Geraghty-contractions. Our paper provides an extension of a result due to Geraghty (Proc. Am. Math. Soc. 40:604-608, 1973).
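For reference, the notions involved can be stated as follows (standard definitions; the paper's precise hypotheses may include additional assumptions on the pair of sets).

```latex
% Standard definitions (the paper's precise hypotheses may add further assumptions).
% Geraghty class: functions \beta : [0,\infty) \to [0,1) such that
%   \beta(t_n) \to 1 \ \Longrightarrow\ t_n \to 0.
% A mapping T : A \to B between nonempty subsets of a metric space (X, d)
% is a Geraghty-contraction if, for some such \beta,
d(Tx, Ty) \le \beta\bigl(d(x, y)\bigr)\, d(x, y) \qquad \text{for all } x, y \in A .
% A best proximity point of T is a point x^{*} \in A with
d(x^{*}, Tx^{*}) = d(A, B) := \inf\{\, d(a, b) : a \in A,\ b \in B \,\}.
```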

Proceedings ArticleDOI
16 Jun 2012
TL;DR: This approach is based on denoising a height vector field by comparing the neighborhood of a point with neighborhoods of other points on the surface, using a low/high frequency decomposition and denoising only the high frequency.
Abstract: Denoising surfaces is a crucial step in the surface processing pipeline. This is even more challenging when no underlying structure of the surface is known, that is, when the surface is represented as a set of unorganized points. In this paper, a denoising method based on local similarities is introduced. The contributions are threefold: first, we do not denoise the point positions directly but use a low/high frequency decomposition and denoise only the high frequency. Second, we introduce a local surface parameterization which is proven to be stable. Finally, this method works directly on point clouds, thus avoiding building a mesh of a noisy surface, which is a difficult problem. Our approach is based on denoising a height vector field by comparing the neighborhood of a point with neighborhoods of other points on the surface. It falls into the non-local denoising framework that has been extensively used in image processing, but extends it to unorganized point clouds.
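The following toy sketch illustrates the non-local principle on a scalar height signal: each height is replaced by a weighted average of heights at points whose neighborhood descriptors look similar. The descriptor, the weighting, and all parameters are hypothetical simplifications; the paper's stable local parameterization and frequency decomposition are more involved.

```python
import numpy as np

def nonlocal_denoise_heights(heights, descriptors, h=0.05):
    """Non-local averaging of a scalar height field.

    heights     : (n,) noisy height values (the 'high frequency' part).
    descriptors : (n, d) per-point neighborhood descriptors used to compare points.
    h           : filtering bandwidth.
    Toy illustration of the non-local principle only; the paper's local
    parameterization and weighting are more involved.
    """
    d2 = ((descriptors[:, None, :] - descriptors[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (h * h))
    w /= w.sum(axis=1, keepdims=True)
    return w @ heights

# Example with a hypothetical descriptor: a window of neighbouring height samples.
rng = np.random.default_rng(4)
x = np.linspace(0.0, 1.0, 200)
clean = 0.05 * np.sin(20.0 * x)
noisy = clean + rng.normal(scale=0.02, size=x.size)
desc = np.stack([np.roll(noisy, s) for s in (-2, -1, 0, 1, 2)], axis=1)
denoised = nonlocal_denoise_heights(noisy, desc, h=0.05)
print(np.abs(noisy - clean).mean(), np.abs(denoised - clean).mean())   # error should drop
```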


Journal ArticleDOI
TL;DR: This work builds a map from the space of all streamlines to points in ℝ^n based on the preservation of the Hausdorff metric in streamline space to provide a global analysis of 3D vector fields which incorporates the topological segmentation but yields additional information.
Abstract: We propose a new technique for visual exploration of streamlines in 3D vector fields. We construct a map from the space of all streamlines to points in ℝ^n based on the preservation of the Hausdorff metric in streamline space. The image of a vector field under this map is a set of 2-manifolds in ℝ^n with characteristic geometry and topology. Then standard clustering methods applied to the point sets in ℝ^n yield a segmentation of the original vector field. Our approach provides a global analysis of 3D vector fields which incorporates the topological segmentation but yields additional information. In addition to a pure segmentation, the established map provides a natural "parametrization" visualized by the manifolds. We test our approach on a number of synthetic and real-world data sets.
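For concreteness, the sketch below computes the discrete, symmetric Hausdorff distance between two sampled streamlines, which is the metric the map aims to preserve; the construction of the embedding into ℝ^n itself is not shown.

```python
import numpy as np

def hausdorff(A, B):
    """Discrete, symmetric Hausdorff distance between two streamlines given as
    sampled point arrays A (m, 3) and B (k, 3)."""
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)
    return max(d.min(axis=1).max(), d.min(axis=0).max())

# Two toy "streamlines": concentric circles of radius 1.0 and 1.2.
t = np.linspace(0.0, 2.0 * np.pi, 100)
s1 = np.stack([np.cos(t), np.sin(t), np.zeros_like(t)], axis=1)
s2 = np.stack([1.2 * np.cos(t), 1.2 * np.sin(t), np.zeros_like(t)], axis=1)
print(hausdorff(s1, s2))   # approximately 0.2
```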

Journal ArticleDOI
TL;DR: In this paper, an optimal measurement setup is defined such that the optimum stand-points are identified to fulfill predefined quality requirements and to ensure a complete spatial coverage, which can improve the quality of individual point measurements and results in a more uniform registered point cloud.
Abstract: One of the main applications of the terrestrial laser scanner is the visualization, modeling and monitoring of man-made structures like buildings. Surveying applications in particular require, on the one hand, a quickly obtainable, high-resolution point cloud, but also need observations with a known and well-described quality. To obtain a 3D point cloud, the scene is scanned from different positions around the considered object. The scanning geometry plays an important role in the quality of the resulting point cloud. The ideal set-up for scanning a surface of an object is to position the laser scanner in such a way that the laser beam is near perpendicular to the surface. Due to scanning conditions, such an ideal set-up is in practice not possible. The different incidence angles and ranges of the laser beam on the surface result in 3D points of varying quality. The stand-point of the scanner that gives the best accuracy is generally not known. Using an optimal stand-point of the laser scanner on a scene improves the quality of individual point measurements and results in a more uniform registered point cloud. The design of an optimum measurement setup is defined such that the optimum stand-points are identified to fulfill predefined quality requirements and to ensure complete spatial coverage. The additional incidence angle and range constraints on the visibility from a view point ensure that individual scans are not affected by bad scanning geometry effects. A complex and large room that would normally require five view points to be fully covered would require nineteen view points to obtain full coverage under the range and incidence angle constraints.
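A minimal sketch of the kind of per-patch check implied by the incidence angle and range constraints is given below; the thresholds, names, and the omission of occlusion testing are assumptions for illustration.

```python
import numpy as np

def satisfies_constraints(standpoint, patch_center, patch_normal,
                          max_range=30.0, max_incidence_deg=60.0):
    """Check the range and incidence-angle constraints for one surface patch.
    Thresholds and names are illustrative, not values from the paper;
    occlusion/visibility is not tested here."""
    beam = patch_center - standpoint
    dist = np.linalg.norm(beam)
    if dist > max_range:
        return False
    # Incidence angle = angle between the laser beam and the surface normal.
    cos_inc = abs(beam @ patch_normal) / (dist * np.linalg.norm(patch_normal))
    incidence = np.degrees(np.arccos(np.clip(cos_inc, -1.0, 1.0)))
    return incidence <= max_incidence_deg

wall_center = np.array([10.0, 0.0, 1.5])
wall_normal = np.array([-1.0, 0.0, 0.0])
print(satisfies_constraints(np.array([0.0, 0.0, 1.5]), wall_center, wall_normal))   # head-on: True
print(satisfies_constraints(np.array([9.5, 20.0, 1.5]), wall_center, wall_normal))  # grazing: False
```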

Journal ArticleDOI
TL;DR: A new approach for reliable and efficient segmentation of planar patches from a 3D laser point cloud using an adaptive cylinder and an octree space partitioning method to detect and extract peaks from the attribute space.
Abstract: Automatic processing and object extraction from 3D laser point clouds is one of the major research topics in the field of photogrammetry. Segmentation is an essential step in the processing of laser point clouds, and the quality of extracted objects from laser data is highly dependent on the validity of the segmentation results. This paper presents a new approach for reliable and efficient segmentation of planar patches from a 3D laser point cloud. In this method, the neighbourhood of each point is first established using an adaptive cylinder while considering the local point density and surface trend. This neighbourhood definition has a major effect on the computational accuracy of the segmentation attributes. In order to efficiently cluster planar surfaces and prevent introducing ambiguities, the coordinates of the origin's projection on each point's best-fit plane are used as the clustering attributes. Then, an octree space partitioning method is utilized to detect and extract peaks from the attribute space. Each detected peak represents a specific cluster of points which are located on a distinct planar surface in object space. Experimental results show the potential and feasibility of applying this method to the segmentation of both airborne and terrestrial laser data.
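The sketch below illustrates the clustering attribute described above, i.e. the projection of the origin onto each point's best-fit local plane, using a plain k-nearest-neighbour neighborhood as a stand-in for the paper's adaptive cylinder; it is an illustration, not the authors' implementation.

```python
import numpy as np

def origin_projection_attribute(points, k=20):
    """For each point, fit a plane to its k nearest neighbours (plain k-NN here,
    standing in for the paper's adaptive cylinder) and return the projection of
    the origin onto that plane as the clustering attribute."""
    attrs = np.empty_like(points)
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    for i, nbr_idx in enumerate(np.argsort(d2, axis=1)[:, :k]):
        nbrs = points[nbr_idx]
        centroid = nbrs.mean(axis=0)
        _, _, Vt = np.linalg.svd(nbrs - centroid)
        normal = Vt[-1]                              # direction of least variance
        attrs[i] = (centroid @ normal) * normal      # projection of the origin onto the plane
    return attrs

# Points on a single plane map to (nearly) the same attribute, so they cluster tightly.
rng = np.random.default_rng(5)
plane_pts = np.c_[rng.uniform(-1.0, 1.0, (100, 2)), np.full(100, 2.0)]   # the plane z = 2
print(np.round(origin_projection_attribute(plane_pts).std(axis=0), 3))   # ~ [0. 0. 0.]
```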

Patent
Simon Winder
03 Aug 2012
TL;DR: Point Cloud Smoother as mentioned in this paper provides various techniques for refining a 3D point cloud or other 3D input model to generate a smoothed and denoised 3D output model.
Abstract: A “Point Cloud Smoother” provides various techniques for refining a 3D point cloud or other 3D input model to generate a smoothed and denoised 3D output model. Smoothing and denoising is achieved, in part, by robustly fitting planes to a neighborhood of points around each point of the input model and using those planes to estimate new points and corresponding normals of the 3D output model. These techniques are useful for a number of purposes, including, but not limited to, free viewpoint video (FVV), which, when combined with the smoothing techniques enabled by the Point Cloud Smoother, allows 3D data of videos or images to be denoised and then rendered and viewed from any desired viewpoint that is supported by the input data.

Journal ArticleDOI
TL;DR: A hybrid algorithm to compute the convex hull of points in three or higher dimensional spaces: a GPU-based interior point filter culls away many of the points that do not lie on the boundary, and a pseudo-hull that is contained inside the convex hull of the original points is computed.

Journal ArticleDOI
TL;DR: A PTAS is described for the problem of computing a minimum cover of given points by a set of weighted fat objects that allows the objects to expand by some prespecified δ-fraction of their diameter.
Abstract: We study several set cover problems in low-dimensional geometric settings. Specifically, we describe a PTAS for the problem of computing a minimum cover of given points by a set of weighted fat objects. Here, we allow the objects to expand by some prespecified δ-fraction of their diameter. Next, we show that the problem of computing a minimum weight cover of points by weighted halfplanes (without expansion) can be solved exactly in the plane. We also study the problem of covering ℝ^d by weighted halfspaces, and provide approximation algorithms and hardness results. We also investigate the dual setting of computing a minimum weight simplex that covers a given target point. Finally, we provide a near-linear time algorithm for the problem of solving an LP minimizing the total weight of violated constraints needed to be removed to make it feasible.

Patent
07 Dec 2012
TL;DR: In this article, a three-dimensional model built with an extrusion-based digital manufacturing system, and having a perimeter based on a contour tool path that defines an interior region of a layer of the 3D model, is presented.
Abstract: A three-dimensional model built with an extrusion-based digital manufacturing system, and having a perimeter based on a contour tool path that defines an interior region of a layer of the three-dimensional model, where at least one of a start point and a stop point of the contour tool path is located within the interior region of the layer.

Journal ArticleDOI
TL;DR: In this article, the authors study infinite families of biquotients defined by Eschenburg and Bazaikin from this viewpoint, together with torus quotients of S^3 × S^3.
Abstract: As a means to better understanding manifolds with positive curvature, there has been much recent interest in the study of non-negatively curved manifolds which contain either a point or an open dense set of points at which all 2-planes have positive curvature. We study infinite families of biquotients defined by Eschenburg and Bazaikin from this viewpoint, together with torus quotients of S^3 × S^3.

Journal ArticleDOI
TL;DR: In this article, it was shown that higher dimensional anti-self-duality equations on the total spaces of spinor bundles over low-dimensional manifolds can be interpreted as the Taubes-Pidstrygach generalization of the Seiberg-Witten equations.
Abstract: In this paper, connections between different gauge-theoretical problems in high and low dimensions are established. In particular, it is shown that higher dimensional anti-self-duality equations on the total spaces of spinor bundles over low-dimensional manifolds can be interpreted as the Taubes–Pidstrygach generalization of the Seiberg–Witten equations. By collapsing each fibre of the spinor bundle to a point, solutions of the Taubes–Pidstrygach equations are related to generalized harmonic spinors. This approach is also generalized to arbitrary fibrations (without singular fibres) compatible with an appropriate calibration.

Journal ArticleDOI
TL;DR: This work wants to summarize some established results on periodic surfaces which are minimal or have constant mean curvature, along with some recent results.
Abstract: We want to summarize some established results on periodic surfaces which are minimal or have constant mean curvature, along with some recent results. We will do this from a mathematical point of view with a general readership in mind.

Journal ArticleDOI
TL;DR: This work introduces a quasi-interpolation framework based on compactly supported RBF, which is robust and can successfully reconstruct surfaces on non-uniform and noisy point sets and can be easily parallelized on multi-core CPUs.

Patent
05 Sep 2012
TL;DR: In this article, a method for segmenting different objects in a 3D scene is proposed, which comprises the following steps of: establishing adjacency relations and a spatial searching mechanism of point cloud data to estimate a normal vector and a residual of each point for the outdoor scene three-dimensional point clouds acquired by laser scanning; determining the point with the minimum residual as a seed point and performing plane clustering by using a plane consistency restrictive condition and a region growing strategy to form the state that the entire plane building is segmented from other objects in the 3D space.
Abstract: The invention discloses a method for segmenting different objects in a three-dimensional scene. The method comprises the following steps of: establishing adjacency relations and a spatial searching mechanism of point cloud data to estimate a normal vector and a residual of each point for the outdoor scene three-dimensional point cloud data acquired by laser scanning; determining the point with the minimum residual as a seed point and performing plane clustering by using a plane consistency restrictive condition and a region growing strategy, so that the entire planar building is segmented from other objects in the three-dimensional scene; establishing a locally connected search over the planar building region for the segmented entire building part, and clustering the points with connectivity in the same plane by using different seed point rules to realize the detailed segmentation of the planes of the building; and constructing distance label-based initial cluster blocks for the other segmented objects and establishing a weighting control restriction for cluster merging to realize the optimal segmentation result for trees. Tests on a plurality of data sets show that the method can be used for effectively segmenting the trees and buildings in the three-dimensional scene.

Journal ArticleDOI
TL;DR: An algorithm to generate point distributions with high-quality blue noise characteristics on discrete surfaces based on the concept of Capacity-Constrained Surface Triangulation (CCST), which approximates the underlying continuous surface as a well-formed triangle mesh with uniform triangle areas.

Journal ArticleDOI
TL;DR: The aim of this paper is to demonstrate how the knowledge of the optimal neighborhood of each 3D point can improve the speed and the accuracy of each of these steps of the ICP process.
Abstract: Automatic 3D point cloud registration is a major issue in computer vision and photogrammetry. The most commonly adopted solution is the well-known ICP (Iterative Closest Point) algorithm. This standard approach performs a fine registration of two overlapping point clouds by iteratively estimating the transformation parameters, assuming that a good a priori alignment is provided. A large body of literature has proposed many variations of this algorithm in order to improve each step of the process. The aim of this paper is to demonstrate how knowledge of the optimal neighborhood of each 3D point can improve the speed and the accuracy of each of these steps. We first present the geometrical features that are the basis of this work. These low-level attributes describe the shape of the neighborhood of each 3D point, computed by combining the eigenvalues of the local structure tensor. Furthermore, they allow us to retrieve the optimal size for analyzing the neighborhood as well as the privileged local dimension (linear, planar, or volumetric). In addition, several variations of each step of the ICP process are proposed and analyzed by introducing these features. These variations are then compared on real datasets, as well as with the original algorithm, in order to retrieve the most efficient algorithm for the whole process. Finally, the method is successfully applied to various 3D lidar point clouds from airborne, terrestrial, and mobile mapping systems.

Journal ArticleDOI
TL;DR: In this article, a point to triangular patch (i.e., closest three points) match is established by checking if the point falls within the triangular dipyramid, which has the three triangular patch points as a base and a user-chosen normal distance as the height to establish the two peaks.
Abstract: The registration of multiple surface point clouds into a common reference frame is a well-addressed topic, and the Iterative Closest Point (ICP) is perhaps the most used method when registering laser scans due to their irregular nature. In this paper, we examine the proposed Iterative Closest Projected Point (ICPP) algorithm for the simultaneous registration of multiple point clouds. First, a point-to-triangular-patch (i.e., closest three points) match is established by checking whether the point falls within the triangular dipyramid that has the three triangular patch points as a base and a user-chosen normal distance as the height to establish the two peaks. Then, the point is projected onto the patch surface, and its projection is then used as a match for the original point. It is also shown through empirical experimentation that Delaunay triangles are not a requirement for establishing matches. In fact, Delaunay triangles in some scenarios may force blunders into the final solution, while using the closest three points avoids some undesired erroneous points. In addition, we review the algorithm that inspired the ICPP, namely the Iterative Closest Patch (ICPatch), where conjugate point-patch pairs are extracted in the overlapping surface areas, and the transformation parameters between all neighbouring surfaces are estimated in a pairwise manner. Then, using the conjugate point-patch pairs, and applying the transformation parameters from the pairwise registration as initial approximations, the final surface transformation parameters are solved for simultaneously. Finally, we evaluate the assumptions made and examine the performance of the new algorithm against the ICPatch.
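As an illustration of the point-to-patch matching step, the sketch below projects a point onto the plane of a candidate triangular patch and accepts the match if the projection falls inside the triangle and the normal distance is within a user-chosen threshold; this barycentric formulation is a simplified stand-in for the dipyramid test described above.

```python
import numpy as np

def project_onto_patch(p, tri, max_normal_dist=0.05):
    """Project point p onto the plane of the triangular patch `tri` (3x3 vertices).
    Return the projected point if it lies inside the triangle and the normal
    distance is within `max_normal_dist`, else None. A simplified barycentric
    stand-in for the dipyramid test; the threshold is a user choice."""
    a, b, c = tri
    n = np.cross(b - a, c - a)
    n = n / np.linalg.norm(n)
    dist = (p - a) @ n                               # signed normal distance to the plane
    if abs(dist) > max_normal_dist:
        return None
    q = p - dist * n                                 # projection onto the patch plane
    # Barycentric coordinates of q with respect to the triangle.
    T = np.column_stack([b - a, c - a])
    u, v = np.linalg.lstsq(T, q - a, rcond=None)[0]
    if u >= 0.0 and v >= 0.0 and u + v <= 1.0:
        return q
    return None

tri = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
print(project_onto_patch(np.array([0.2, 0.3, 0.02]), tri))   # matched: [0.2 0.3 0. ]
print(project_onto_patch(np.array([0.2, 0.3, 0.50]), tri))   # too far from the patch: None
```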

Journal ArticleDOI
TL;DR: In this paper, the authors provide very accurate analytical solutions for modeling transient heat conduction processes in 2D Cartesian finite bodies for small values of the time, with an accuracy of one part in 10^n (n = 1, 2, ..., 10, ...).

Book
31 May 2012
TL;DR: This book reviews the algorithms for processing geometric data, with a practical focus on important techniques not covered by traditional courses on computer vision and computer graphics.
Abstract: This book reviews the algorithms for processing geometric data, with a practical focus on important techniques not covered by traditional courses on computer vision and computer graphics. Features: presents an overview of the underlying mathematical theory, covering vector spaces, metric space, affine spaces, differential geometry, and finite difference methods for derivatives and differential equations; reviews geometry representations, including polygonal meshes, splines, and subdivision surfaces; examines techniques for computing curvature from polygonal meshes; describes algorithms for mesh smoothing, mesh parametrization, and mesh optimization and simplification; discusses point location databases and convex hulls of point sets; investigates the reconstruction of triangle meshes from point clouds, including methods for registration of point clouds and surface reconstruction; provides additional material at a supplementary website; includes self-study exercises throughout the text.