
Showing papers on "Point (geometry) published in 2017"


Posted Content
TL;DR: A hierarchical neural network that applies PointNet recursively on a nested partitioning of the input point set and proposes novel set learning layers to adaptively combine features from multiple scales to learn deep point set features efficiently and robustly.
Abstract: Few prior works study deep learning on point sets. PointNet by Qi et al. is a pioneer in this direction. However, by design PointNet does not capture local structures induced by the metric space points live in, limiting its ability to recognize fine-grained patterns and generalizability to complex scenes. In this work, we introduce a hierarchical neural network that applies PointNet recursively on a nested partitioning of the input point set. By exploiting metric space distances, our network is able to learn local features with increasing contextual scales. With further observation that point sets are usually sampled with varying densities, which results in greatly decreased performance for networks trained on uniform densities, we propose novel set learning layers to adaptively combine features from multiple scales. Experiments show that our network called PointNet++ is able to learn deep point set features efficiently and robustly. In particular, results significantly better than state-of-the-art have been obtained on challenging benchmarks of 3D point clouds.
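
As a concrete illustration of the set-abstraction idea described above, the sketch below (not the authors' implementation; sizes, radius, and weights are illustrative placeholders) shows one PointNet++-style level in NumPy: farthest point sampling picks centroids, a ball query groups neighbors in local coordinates, and a shared one-layer "MLP" with max pooling summarizes each group.

```python
import numpy as np

def farthest_point_sampling(points, n_centroids):
    """Greedily pick n_centroids points that are mutually far apart."""
    chosen = [0]
    dist = np.linalg.norm(points - points[0], axis=1)
    for _ in range(n_centroids - 1):
        idx = int(np.argmax(dist))
        chosen.append(idx)
        dist = np.minimum(dist, np.linalg.norm(points - points[idx], axis=1))
    return np.array(chosen)

def set_abstraction(points, feats, n_centroids, radius, weight):
    """One PointNet++-style level: sample, group by ball query, pool a tiny shared MLP."""
    centroids = points[farthest_point_sampling(points, n_centroids)]
    out = []
    for c in centroids:
        mask = np.linalg.norm(points - c, axis=1) < radius
        group = np.hstack([points[mask] - c, feats[mask]])  # local coords + features
        hidden = np.maximum(group @ weight, 0.0)            # shared "MLP" (one layer, ReLU)
        out.append(hidden.max(axis=0))                       # symmetric max pooling
    return centroids, np.vstack(out)

# toy usage with random data and random weights (placeholders)
pts = np.random.rand(1024, 3)
fts = np.random.rand(1024, 4)
w = np.random.randn(7, 32) * 0.1
new_pts, new_fts = set_abstraction(pts, fts, n_centroids=128, radius=0.2, weight=w)
print(new_pts.shape, new_fts.shape)  # (128, 3) (128, 32)
```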

4,802 citations


Proceedings Article
07 Jun 2017
TL;DR: PointNet++ as discussed by the authors applies PointNet recursively on a nested partitioning of the input point set to learn local features with increasing contextual scales, and proposes novel set learning layers to adaptively combine features from multiple scales.
Abstract: Few prior works study deep learning on point sets. PointNet is a pioneer in this direction. However, by design PointNet does not capture local structures induced by the metric space points live in, limiting its ability to recognize fine-grained patterns and generalizability to complex scenes. In this work, we introduce a hierarchical neural network that applies PointNet recursively on a nested partitioning of the input point set. By exploiting metric space distances, our network is able to learn local features with increasing contextual scales. With further observation that point sets are usually sampled with varying densities, which results in greatly decreased performance for networks trained on uniform densities, we propose novel set learning layers to adaptively combine features from multiple scales. Experiments show that our network called PointNet++ is able to learn deep point set features efficiently and robustly. In particular, results significantly better than state-of-the-art have been obtained on challenging benchmarks of 3D point clouds.

3,316 citations


Journal ArticleDOI
TL;DR: In this article, a design methodology based on a combination of isogeometric analysis (IGA), level set, and pointwise density mapping techniques is presented for topology optimization of piezoelectric/flexoelectric materials.

279 citations



Journal ArticleDOI
TL;DR: Numerical simulations in image denoising, inpainting, and superresolution problems show that LDMM is a powerful method in image processing.
Abstract: In this paper, we propose a novel low dimensional manifold model (LDMM) and apply it to some image processing problems. LDMM is based on the fact that the patch manifolds of many natural images have low dimensional structure. Based on this fact, the dimension of the patch manifold is used as a regularization to recover the image. The key step in LDMM is solving a Laplace–Beltrami equation over a point cloud, which is done with the point integral method. The point integral method enforces the sample point constraints correctly and gives better results than the standard graph Laplacian. Numerical simulations in image denoising, inpainting, and superresolution problems show that LDMM is a powerful method in image processing.
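
As a rough illustration of the patch-manifold viewpoint, the hedged sketch below extracts image patches as points and builds a Gaussian-weighted graph Laplacian over them; this is the standard graph-Laplacian baseline that the abstract says the point integral method improves upon, not LDMM itself, and the patch size and bandwidth are arbitrary.

```python
import numpy as np

def patch_matrix(img, s=8):
    """Collect all s x s patches of a grayscale image as points in R^(s*s)."""
    h, w = img.shape
    patches = [img[i:i+s, j:j+s].ravel()
               for i in range(h - s + 1) for j in range(w - s + 1)]
    return np.array(patches)

def graph_laplacian(X, sigma=0.5):
    """Dense Gaussian-weighted graph Laplacian over the patch set."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * sigma ** 2))
    return np.diag(W.sum(1)) - W

img = np.random.rand(32, 32)
P = patch_matrix(img)          # each row is one point on the patch manifold
L = graph_laplacian(P[:200])   # subsample; the full dense matrix is large
print(L.shape)
```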

136 citations


Posted Content
TL;DR: In this article, the authors adopt the dynamical systems point of view, and analyze the lesioning properties of ResNet both theoretically and experimentally, and propose a novel method for accelerating ResNet training.
Abstract: Deep residual networks (ResNets) and their variants are widely used in many computer vision applications and natural language processing tasks. However, the theoretical principles for designing and training ResNets are still not fully understood. Recently, several points of view have emerged to try to interpret ResNet theoretically, such as unraveled view, unrolled iterative estimation and dynamical systems view. In this paper, we adopt the dynamical systems point of view, and analyze the lesioning properties of ResNet both theoretically and experimentally. Based on these analyses, we additionally propose a novel method for accelerating ResNet training. We apply the proposed method to train ResNets and Wide ResNets for three image classification benchmarks, reducing training time by more than 40% with superior or on-par accuracy.
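
Under the dynamical-systems view mentioned above, a residual block x_{k+1} = x_k + F(x_k) is a forward-Euler step of dx/dt = F(x). The minimal NumPy sketch below makes that correspondence explicit; the two-layer residual function and its weights are placeholders, not the paper's architecture.

```python
import numpy as np

def residual_block(x, W1, W2):
    """x_{k+1} = x_k + F(x_k): the skip connection acts as a forward-Euler update."""
    return x + W2 @ np.tanh(W1 @ x)

def euler_flow(x, W1, W2, h, steps):
    """Explicit Euler integration of dx/dt = F(x) with step size h."""
    for _ in range(steps):
        x = x + h * (W2 @ np.tanh(W1 @ x))
    return x

rng = np.random.default_rng(0)
W1, W2 = rng.normal(size=(16, 16)) * 0.1, rng.normal(size=(16, 16)) * 0.1
x0 = rng.normal(size=16)
# stacking n residual blocks corresponds to n Euler steps with h = 1
print(np.allclose(residual_block(x0, W1, W2),
                  euler_flow(x0, W1, W2, h=1.0, steps=1)))  # True
```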

91 citations


Journal ArticleDOI
TL;DR: This work proposes a new back-projection technique for NLOS reconstruction, up to a thousand times faster than previous work, with almost no quality loss, and demonstrates the efficiency and quality of the technique compared against previous methods.
Abstract: Recent works have demonstrated non-line-of-sight (NLOS) reconstruction by using the time-resolved signal from multiply scattered light. These works combine ultrafast imaging systems with computation, which back-projects the recorded space-time signal to build a probabilistic map of the hidden geometry. Unfortunately, this computation is slow, becoming a bottleneck as the imaging technology improves. In this work, we propose a new back-projection technique for NLOS reconstruction, which is up to a thousand times faster than previous work, with almost no quality loss. Our technique is based on the observation that the hidden geometry probability map can be built as the intersection of the three-bounce space-time manifolds defined by the light illuminating the hidden geometry and the visible point receiving the scattered light from that hidden geometry. This allows us to pose the reconstruction of the hidden geometry as the voxelization of these space-time manifolds, which has lower theoretical complexity and is easily implementable on the GPU. We demonstrate the efficiency and quality of our technique compared against previous methods on both captured and synthetic data.
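
For reference, the slow baseline that the paper accelerates can be sketched as a brute-force back-projection: every voxel accumulates the transient samples whose three-bounce path length (laser point to voxel to sensor point) matches the recorded time bin. The NumPy sketch below is only this baseline under simplified assumptions (no wall geometry, illustrative grid and bin sizes), not the voxelization technique proposed in the paper.

```python
import numpy as np

def backproject(transient, laser_pts, sensor_pts, voxels, c=3e8, dt=1e-12):
    """Accumulate time-resolved intensity into a probability map of hidden geometry.

    transient[i, j, k]: signal for laser point i, sensor point j, time bin k.
    """
    volume = np.zeros(len(voxels))
    n_bins = transient.shape[2]
    for i, l in enumerate(laser_pts):
        d_l = np.linalg.norm(voxels - l, axis=1)          # laser point -> voxel
        for j, s in enumerate(sensor_pts):
            d_s = np.linalg.norm(voxels - s, axis=1)      # voxel -> sensor point
            t_bin = ((d_l + d_s) / (c * dt)).astype(int)  # time-of-flight bin
            valid = t_bin < n_bins
            volume[valid] += transient[i, j, t_bin[valid]]
    return volume

# toy example: random data on a small grid (illustrative sizes only)
vox = np.stack(np.meshgrid(*[np.linspace(0, 1, 16)] * 3), -1).reshape(-1, 3)
lasers = np.random.rand(4, 3)
sensors = np.random.rand(4, 3)
sig = np.random.rand(4, 4, 2048)
pmap = backproject(sig, lasers, sensors, vox, dt=1e-11)  # coarse bins for the toy scene
print(pmap.shape)  # (4096,)
```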

86 citations


Patent
23 Feb 2017
TL;DR: In this article, a sound localization point simulates a location in empty space from where an origin of a voice of one individual occurs for the other individual during an electronic call between two individuals.
Abstract: During an electronic call between two individuals, a sound localization point simulates a location in empty space from where an origin of a voice of one individual occurs for the other individual.

79 citations


Journal ArticleDOI
TL;DR: A feature-retained DSM interpolation method for extracting buildings from LiDAR point cloud data by combining point-based and grid-based features, which obtains promising results at both the area level and the object level.
Abstract: This paper proposes a method for extracting buildings from LiDAR point cloud data by combining point-based and grid-based features. To accurately discriminate buildings from vegetation, a point feature based on the variance of normal vectors is proposed. For robust building extraction, a graph cuts algorithm is employed to combine the used features and to take the neighborhood context information into account. As the grid feature computation and the graph cuts algorithm are performed on a grid structure, a feature-retained DSM interpolation method is proposed in this paper. The proposed method is validated on the benchmark ISPRS Test Project on Urban Classification and 3D Building Reconstruction and compared to state-of-the-art methods. The evaluation shows that the proposed method obtains promising results at both the area level and the object level. The method is further applied to the entire ISPRS dataset and to a real dataset of Wuhan City. The results show a completeness of 94.9% and a correctness of 92.2% at the per-area level for the former dataset, and a completeness of 94.4% and a correctness of 95.8% for the latter. The proposed method has good potential for large-scale LiDAR data.
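
A hedged sketch of the variance-of-normals point feature described above: normals are estimated by local PCA over k nearest neighbors, and the feature is the variance of the neighboring normals, which tends to be low on planar roofs and high on vegetation. The neighborhood size is illustrative and the orientation step is a simplification.

```python
import numpy as np
from scipy.spatial import cKDTree

def normal_variance_feature(points, k=15):
    """Per-point variance of neighboring normal vectors (low on roofs, high on trees)."""
    tree = cKDTree(points)
    _, nbr_idx = tree.query(points, k=k)

    # 1. estimate a unit normal per point via PCA of its neighborhood
    normals = np.empty_like(points)
    for i, idx in enumerate(nbr_idx):
        nbrs = points[idx] - points[idx].mean(axis=0)
        _, _, vt = np.linalg.svd(nbrs, full_matrices=False)
        normals[i] = vt[-1]                 # direction of smallest variance

    # orient roughly upward so sign flips do not inflate the variance
    normals[normals[:, 2] < 0] *= -1

    # 2. variance of the normals over each neighborhood
    return np.array([normals[idx].var(axis=0).sum() for idx in nbr_idx])

cloud = np.random.rand(1000, 3)
feat = normal_variance_feature(cloud)
print(feat.shape)  # (1000,)
```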

79 citations


Journal ArticleDOI
TL;DR: This paper analyzes a parallel implementation of the bilateral filter adapted for point clouds, which denoises a point with respect to its neighbors by considering not only the distance from the neighbors to the point but also the distance along a normal direction.
Abstract: Point sets obtained by 3D scanners are often corrupted with noise, which can have several causes, such as a tangential acquisition direction, changing environmental lights, or a reflective object material. It is thus crucial to design efficient tools to remove noise from the acquired data without removing important information such as sharp edges or shape details. To do so, Fleishman et al. introduced a bilateral filter for meshes adapted from the bilateral filter for gray level images. This anisotropic filter denoises a point with respect to its neighbors by considering not only the distance from the neighbors to the point but also the distance along a normal direction. This simple fact allows for a much better preservation of sharp edges. In this paper, we analyze a parallel implementation of the bilateral filter adapted for point clouds. Source Code: The ANSI C++ source code for reproducing the results of the online demo is available on the web page of the article.
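
The filter described above can be sketched directly: each point is moved along its normal by a weighted average of its neighbors' offsets, with weights combining a spatial Gaussian and a Gaussian on the signed distance along the normal. The NumPy sketch below assumes normals are given and uses placeholder parameter values; it is a serial illustration, not the parallel implementation analyzed in the paper.

```python
import numpy as np
from scipy.spatial import cKDTree

def bilateral_filter_points(points, normals, radius=0.05, sigma_d=0.03, sigma_n=0.01):
    """Denoise a point cloud while preserving sharp edges (Fleishman-style filter)."""
    tree = cKDTree(points)
    out = points.copy()
    for i, (p, n) in enumerate(zip(points, normals)):
        idx = tree.query_ball_point(p, radius)
        q = points[idx]
        d = np.linalg.norm(q - p, axis=1)      # distance to each neighbor
        h = (q - p) @ n                         # signed distance along the normal
        w = np.exp(-d**2 / (2 * sigma_d**2)) * np.exp(-h**2 / (2 * sigma_n**2))
        if w.sum() > 0:
            out[i] = p + n * (w @ h) / w.sum()  # move the point along its normal only
    return out

pts = np.random.rand(500, 3)
nrm = np.tile([0.0, 0.0, 1.0], (500, 1))        # toy normals; real ones come from PCA
print(bilateral_filter_points(pts, nrm).shape)
```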

68 citations


Proceedings Article
12 Feb 2017
TL;DR: A robust transformation estimation method based on manifold regularization for non-rigid point set registration that iteratively recovers the point correspondence and estimates the spatial transformation between two point sets.
Abstract: In this paper, we propose a robust transformation estimation method based on manifold regularization for non-rigid point set registration. The method iteratively recovers the point correspondence and estimates the spatial transformation between two point sets. The correspondence is established based on existing local feature descriptors, which typically results in a number of outliers. To achieve an accurate estimate of the transformation from such putative point correspondences, we formulate the registration problem by a mixture model with a set of latent variables introduced to identify outliers, and a prior involving manifold regularization is imposed on the transformation to capture the underlying intrinsic geometry of the input data. The non-rigid transformation is specified in a reproducing kernel Hilbert space, and a sparse approximation is adopted to achieve a fast implementation. Extensive experiments on both 2D and 3D data demonstrate that our method yields superior results compared to other state-of-the-art methods, especially in the case of badly degraded data.
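
A minimal sketch of the transformation-estimation step under strong simplifying assumptions: given putative correspondences, a displacement field in a Gaussian RKHS is fitted by plain kernel ridge regression. The paper's method additionally uses latent outlier variables, manifold regularization, and a sparse approximation, none of which are reproduced here; λ and the kernel width are illustrative.

```python
import numpy as np

def gaussian_kernel(X, Y, beta=0.5):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * beta ** 2))

def fit_nonrigid(src, dst, lam=1e-2, beta=0.5):
    """Kernel ridge regression for a displacement field T(x) = x + K(x, src) C."""
    K = gaussian_kernel(src, src, beta)
    C = np.linalg.solve(K + lam * np.eye(len(src)), dst - src)  # coefficient matrix
    return lambda X: X + gaussian_kernel(X, src, beta) @ C

# toy usage: noisy correspondences between a warped point set and its original
src = np.random.rand(100, 2)
dst = src + 0.1 * np.sin(3 * src) + 0.01 * np.random.randn(100, 2)
T = fit_nonrigid(src, dst)
print(np.abs(T(src) - dst).mean())  # small residual after fitting
```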

Journal ArticleDOI
TL;DR: In this paper, a necessary and sufficient condition for fully synchronous robots to solve the plane formation problem that does not depend on obliviousness, i.e., the availability of local memory at robots, was presented.
Abstract: Creating a swarm of mobile computing entities, frequently called robots, agents, or sensor nodes, with self-organization ability is a contemporary challenge in distributed computing. Motivated by this, we investigate the plane formation problem that requires a swarm of robots moving in the three-dimensional Euclidean space to land on a common plane. The robots are fully synchronous and endowed with visual perception. But they do not have identifiers, nor access to the global coordinate system, nor any means of explicit communication with each other. Though there are plenty of results on the agreement problem for robots in the two-dimensional plane, for example, the point formation problem, the pattern formation problem, and so on, this is the first result for robots in the three-dimensional space. This article presents a necessary and sufficient condition for fully synchronous robots to solve the plane formation problem that does not depend on obliviousness, i.e., the availability of local memory at robots. An implication of the result is somewhat counter-intuitive: The robots cannot form a plane from most of the semi-regular polyhedra, while they can form a plane from every regular polyhedron (except a regular icosahedron), whose symmetry is usually considered to be higher than any semi-regular polyhedron.

Journal ArticleDOI
TL;DR: A system for positioning markers and tracking the pose of a rigid object with 6 degrees of freedom in real time using 3D digital image correlation, with two examples for medical imaging applications, indicating great potential for tracking muscle and skin movements.

Journal ArticleDOI
TL;DR: The spatial locality of point sets is identified and exploited, and a novel search algorithm called local start search (LSS) is proposed to compute the exact Hausdorff distance, addressing the problem of comparing the similarity of large-scale point sets in the era of big data.

Journal ArticleDOI
TL;DR: An efficient framework and two complementary subalgorithms to directly compute the exact Hausdorff distance for general 3D point sets, with a comparison of the proposed approach against other state-of-the-art algorithms.
Abstract: The Hausdorff distance measure is very important in CAD/CAE/CAM related applications. This manuscript presents an efficient framework and two complementary subalgorithms to directly compute the exact Hausdorff distance for general 3D point sets. The first algorithm, Nonoverlap Hausdorff Distance (NOHD), combines branch-and-bound with early breaking to cut down the octree traversal time in the case of spatial nonoverlap. The second algorithm, Overlap Hausdorff Distance (OHD), integrates a point culling strategy and nearest neighbor search to reduce the number of points traversed in the case of spatial overlap. The two complementary subalgorithms achieve a highly efficient and balanced result. Both NOHD and OHD compute the exact Hausdorff distance directly for arbitrary 3D point sets. We conduct a number of experiments on benchmark models and CAD application models and compare the proposed approach with other state-of-the-art algorithms. The results demonstrate the effectiveness of our method.
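
The early-breaking idea mentioned for NOHD can be seen even in a brute-force directed Hausdorff computation: the inner minimum scan stops as soon as it falls below the current maximum, since that source point can no longer raise the result. The sketch below shows only this generic principle, not the paper's octree-based algorithms.

```python
import numpy as np

def directed_hausdorff_early_break(A, B):
    """max_{a in A} min_{b in B} ||a - b||, with early termination of the inner loop."""
    cmax = 0.0
    for a in A:
        cmin = np.inf
        for b in B:
            d = np.linalg.norm(a - b)
            if d < cmax:          # this 'a' cannot raise the maximum: break early
                cmin = d
                break
            cmin = min(cmin, d)
        cmax = max(cmax, cmin)
    return cmax

def hausdorff(A, B):
    return max(directed_hausdorff_early_break(A, B),
               directed_hausdorff_early_break(B, A))

A = np.random.rand(200, 3)
B = np.random.rand(200, 3) + 0.1
print(hausdorff(A, B))
```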


Patent
06 Nov 2017
TL;DR: In this article, the detection of abnormalities in multi-dimensional data is performed by processing the multidimensional data to obtain a reduced dimension embedding matrix and applying an out-of-sample extension procedure in the embedded space to compute coordinates of a newly arrived data point.
Abstract: Detection of abnormalities in multi-dimensional data is performed by processing the multi-dimensional data to obtain a reduced dimension embedding matrix, using the reduced dimension embedding matrix to form a lower dimension (of at least 2D) embedded space, applying an out-of-sample extension procedure in the embedded space to compute coordinates of a newly arrived data point and using the computed coordinates of the newly arrived data point and Euclidean distances to determine whether the newly arrived data point is normal or abnormal.
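
A loose illustration of the patent's pipeline under substitute assumptions: PCA stands in for the (unspecified) dimension-reduction step, the out-of-sample extension is a simple projection onto the learned components, and a newly arrived point is flagged abnormal when its Euclidean distance to the embedded training points exceeds a quantile threshold. All names and thresholds are illustrative.

```python
import numpy as np

def fit_embedding(X, dim=2):
    """Learn a reduced-dimension embedding (PCA stand-in for the patent's method)."""
    mu = X.mean(axis=0)
    _, _, vt = np.linalg.svd(X - mu, full_matrices=False)
    components = vt[:dim]
    return mu, components, (X - mu) @ components.T          # embedded training set

def is_abnormal(x_new, mu, components, embedded, k=5, quantile=0.99):
    """Out-of-sample extension: embed the new point, then test its k-NN distance."""
    e = (x_new - mu) @ components.T
    d = np.sort(np.linalg.norm(embedded - e, axis=1))[:k].mean()
    # threshold derived from the training set's own k-NN distances
    pairwise = np.linalg.norm(embedded[:, None] - embedded[None, :], axis=-1)
    train_d = np.sort(pairwise, axis=1)[:, 1:k + 1].mean(1)
    return d > np.quantile(train_d, quantile)

X = np.random.randn(500, 10)
mu, comps, emb = fit_embedding(X)
print(is_abnormal(np.random.randn(10), mu, comps, emb))  # in-distribution: usually False
print(is_abnormal(np.full(10, 8.0), mu, comps, emb))     # far-away point: usually True
```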

Journal ArticleDOI
TL;DR: A method based on a 3D point cloud is proposed to semi-automatically extract rock mass structural plane information, which can provide a reference for rock mechanics, 3D geological modelling, and other related fields.

Proceedings Article
18 Jun 2017
TL;DR: In this article, the authors describe a frontier of new lower bounds on the performance of online learning algorithms, reflecting a tradeoff between a term that depends on the optimal parameter value and a term depending on the gradients' rate of growth.
Abstract: The vast majority of optimization and online learning algorithms today require some prior information about the data (often in the form of bounds on gradients or on the optimal parameter value). When this information is not available, these algorithms require laborious manual tuning of various hyperparameters, motivating the search for algorithms that can adapt to the data with no prior information. We describe a frontier of new lower bounds on the performance of such algorithms, reflecting a tradeoff between a term that depends on the optimal parameter value and a term that depends on the gradients' rate of growth. Further, we construct a family of algorithms whose performance matches any desired point on this frontier, which no previous algorithm reaches.

Journal ArticleDOI
TL;DR: This paper proposes a robust non-rigid point set registration method based on spatially constrained context-aware Gaussian fields that reaches better performance than the state-of-the-art algorithms.
Abstract: Estimating transformations from degraded point sets is necessary for many computer vision and pattern recognition applications. In this paper, we propose a robust non-rigid point set registration method based on spatially constrained context-aware Gaussian fields. We first construct a context-aware representation (e.g., shape context) for assignment initialization. Then, we use graph Laplacian regularized Gaussian fields to estimate the underlying transformation from the likely correspondences. On the one hand, the intrinsic manifold is considered and used to preserve the geometrical structure, and a priori knowledge of the point set is extracted. On the other hand, by using deterministic annealing, the presented method is extended to a projected high-dimensional feature space, i.e., a reproducing kernel Hilbert space, through a kernel trick to solve the transformation, in which the local structure is propagated by a coarse-to-fine scaling strategy. In this way, the proposed method gradually recovers many more correct correspondences and then estimates the transformation parameters accurately and robustly in the presence of degradations. Experimental results on 2D and 3D synthetic and real data (point sets) demonstrate that the proposed method achieves better performance than state-of-the-art algorithms.

Journal ArticleDOI
TL;DR: This work describes how paired sampling can be used in conjunction with generalized linear (GLM) or generalized linear mixed models (GLMM) to estimate correction factors (δ) to remove biases between ARUs and traditional point counts and shows that density estimates can be derived from single ARUs by estimating EDRA as a function of EDRH and δ, thus providing alternatives to more complicated and expensive approaches.
Abstract: Acoustic recordings are increasingly used to quantify occupancy and abundance in avian monitoring and research. The recent development of relatively inexpensive programmable autonomous recording units (ARUs) has further increased the utility of acoustic recording technologies. Despite their potential advantages, persistent questions remain as to how comparable data are between ARUs and traditional (human observer) point counts. We suggest that differences in counts obtained from ARUs versus human observers primarily stem from differences in the effective detection radius of humans (EDRH) versus ARUs (EDRA). We describe how paired sampling can be used in conjunction with generalized linear models (GLM) or generalized linear mixed models (GLMM) to estimate correction factors (δ) that remove biases between ARUs and traditional point counts. Furthermore, if human observers conduct distance estimation, we show that density estimates can be derived from single ARUs by estimating EDRA as a function of EDRH and δ, thus providing alternatives to more complicated and expensive approaches. We demonstrate our approach using data from 363 point count stations in 105 unique boreal study sites at which field staff conducted point count surveys that were simultaneously recorded by an ARU and later transcribed in the lab. Finally, we used repeated random subsampling to split the data into model creation (70%) and validation (30%) subsets to iteratively estimate δ and validate ARU-based density estimates against densities calculated from human observers at the same independent validation locations. We modeled the density of 35 species of boreal forest birds and show that incorporating δ in statistical offsets successfully removes systematic biases in estimated avian counts and/or density between human- and ARU-derived surveys. Our method is therefore easily implemented and will facilitate the integration of ARU and human observer point count data, enabling expanded monitoring efforts and meta-analyses with historic point count data.
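
A minimal sketch of the correction-factor idea on simulated paired counts: with a method indicator in a Poisson GLM, δ = exp(β_ARU) is the multiplicative correction between ARU and human counts. The paper's full approach uses mixed models, offsets, and effective detection radii, which are not reproduced here; statsmodels is assumed available and the data below are synthetic.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# paired counts: each station surveyed by a human observer and an ARU (toy data)
rng = np.random.default_rng(1)
n = 200
true_density = rng.gamma(2.0, 1.0, n)
human = rng.poisson(true_density)          # human observer counts
aru = rng.poisson(0.7 * true_density)      # ARU counts: smaller effective radius

df = pd.DataFrame({
    "count": np.concatenate([human, aru]),
    "is_aru": np.repeat([0, 1], n),        # method indicator (0 = human, 1 = ARU)
    "station": np.tile(np.arange(n), 2),   # pairing id (unused in this simple GLM)
})

# Poisson GLM: log E[count] = b0 + b1 * is_aru; delta = exp(b1) is the correction factor
X = sm.add_constant(df["is_aru"])
fit = sm.GLM(df["count"], X, family=sm.families.Poisson()).fit()
delta = np.exp(fit.params["is_aru"])
print(f"estimated delta ~ {delta:.2f} (true ratio 0.7)")
```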

Posted Content
19 Dec 2017
TL;DR: Two new operations to improve PointNet with more efficient exploitation of local structures are presented: one focuses on local 3D geometric structures, and the other exploits local feature structures by recursive feature aggregation on a nearest-neighbor graph computed from 3D positions.
Abstract: Unlike on images, semantic learning on 3D point clouds using a deep network is challenging due to the naturally unordered data structure. Among existing works, PointNet has achieved promising results by directly learning on point sets. However, it does not take full advantage of a point's local neighborhood that contains fine-grained structural information which turns out to be helpful towards better semantic learning. In this regard, we present two new operations to improve PointNet with more efficient exploitation of local structures. The first one focuses on local 3D geometric structures. In analogy with a convolution kernel for images, we define a point-set kernel as a set of learnable points that jointly respond to a set of neighboring data points according to their geometric affinity measured by kernel correlation, adapted from a similar technique for point cloud registration. The second one exploits local feature structures by recursive feature aggregation on a nearest-neighbor-graph computed from 3D positions. Experiments show that our network is able to robustly capture local information and efficiently achieve better performance on major datasets.
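
A hedged NumPy sketch of the kernel-correlation response described above: a small set of kernel points responds to each point's local neighborhood through a Gaussian affinity. The kernel points are fixed here rather than learned, and radius and bandwidth are placeholders.

```python
import numpy as np
from scipy.spatial import cKDTree

def kernel_correlation_response(points, kernel_pts, radius=0.1, sigma=0.05):
    """Per-point response of a point-set kernel to the local neighborhood."""
    tree = cKDTree(points)
    responses = np.zeros(len(points))
    for i, p in enumerate(points):
        nbrs = points[tree.query_ball_point(p, radius)] - p   # neighbors in local coords
        # Gaussian kernel correlation between kernel points and neighboring points
        d2 = ((kernel_pts[:, None, :] - nbrs[None, :, :]) ** 2).sum(-1)
        responses[i] = np.exp(-d2 / (2 * sigma ** 2)).mean()
    return responses

pts = np.random.rand(1000, 3)
kpts = np.random.rand(16, 3) * 0.1 - 0.05    # 16 "learnable" kernel points (fixed here)
print(kernel_correlation_response(pts, kpts).shape)  # (1000,)
```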

Journal ArticleDOI
TL;DR: The direction of movement and the final position of each node are used as important clues for community membership assignment, and the proposed algorithm outperformed state-of-the-art community detection techniques on two benchmark networks with known communities using the Normalized Mutual Information criterion.
Abstract: Many real world complex systems naturally map to network data structures instead of geometric spaces because the only available information is the presence or absence of a link between two entities in the system. To enable data mining techniques to solve problems in the network domain, the nodes need to be mapped to a geometric space. We propose this mapping by representing each network node with its geodesic distances from all other nodes. The space spanned by the geodesic distance vectors is the geodesic space of that network. The positions of different nodes in the geodesic space encode the network structure. In this space, each node induces a continuous density field, and the density at a specific point is the summation of the density fields induced by all nodes. We drift each node in the direction of the positive density gradient using an iterative algorithm until each node reaches a local maximum. Due to the network structure captured by this space, the nodes that drift to the same region of space belong to the same communities in the original network. We use the direction of movement and the final position of each node as important clues for community membership assignment. The proposed algorithm is compared with more than 10 state-of-the-art community detection techniques on two benchmark networks with known communities using the Normalized Mutual Information criterion. The proposed algorithm outperformed these methods by a significant margin. Moreover, the proposed algorithm has also shown excellent performance on many real-world networks.
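
A compact sketch of the pipeline described above: nodes are represented by their geodesic (shortest-path) distance vectors, drifted uphill in a Gaussian density field by mean-shift iterations, and grouped by the mode they converge to. The bandwidth, iteration count, and merge threshold are illustrative, and the toy graph is two cliques joined by an edge.

```python
import numpy as np
from scipy.sparse.csgraph import shortest_path

def geodesic_space_communities(adj, bandwidth=1.0, iters=30):
    """Map nodes to geodesic-distance vectors, mean-shift them, group by converged mode."""
    X = shortest_path(adj, method="D", unweighted=True)  # rows = geodesic coordinates
    Y = X.copy()
    for _ in range(iters):
        d2 = ((Y[:, None, :] - X[None, :, :]) ** 2).sum(-1)
        W = np.exp(-d2 / (2 * bandwidth ** 2))
        Y = (W @ X) / W.sum(axis=1, keepdims=True)       # drift toward higher density
    # nodes whose final positions are close share a community label
    labels, modes = np.full(len(X), -1), []
    for i, y in enumerate(Y):
        for k, m in enumerate(modes):
            if np.linalg.norm(y - m) < bandwidth / 2:
                labels[i] = k
                break
        else:
            modes.append(y)
            labels[i] = len(modes) - 1
    return labels

# toy graph: two 4-cliques joined by a single edge
A = np.zeros((8, 8), int)
A[:4, :4] = 1
A[4:, 4:] = 1
A[3, 4] = A[4, 3] = 1
np.fill_diagonal(A, 0)
print(geodesic_space_communities(A))  # nodes drifting to the same mode share a label
```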

Journal ArticleDOI
TL;DR: In this article, a method called the point integral method (PIM) is proposed to solve Poisson-type equations from point clouds, based on integral equations that involve only the values of the unknown function rather than its derivatives.
Abstract: Partial differential equations (PDEs) on manifolds arise in many areas, including mathematics and many applied fields. Due to the complicated geometrical structure of the manifold, it is difficult to obtain an efficient numerical method for solving PDEs on manifolds. In this paper, we propose a method called the point integral method (PIM) to solve Poisson-type equations from point clouds. Among different kinds of PDEs, Poisson-type equations, including the standard Poisson equation and the related eigenproblem of the Laplace–Beltrami operator, are among the most important. In PIM, the key idea is to derive integral equations that approximate the Poisson-type equations and contain no derivatives, only the values of the unknown function. This feature makes the integral equations easy to discretize from a point cloud. In this paper, we explain the derivation of the integral equations, describe the point integral method and its implementation, and present numerical experiments to demonstrate the convergence of PIM.

Journal ArticleDOI
TL;DR: This work proposes a comprehensive and efficient point cloud management system based on a database server that works on groups of points (patches) rather than individual points, and is built on open source technologies; therefore it can be easily extended and customised.
Abstract: In addition to more traditional geographical data such as images (rasters) and vectors, point cloud data are becoming increasingly available. Such data are appreciated for their precision and true three-dimensional (3D) nature. However, managing point clouds can be difficult due to scaling problems and the specificities of this data type. Several methods exist but are usually fairly specialised and solve only one aspect of the management problem. In this work, we propose a comprehensive and efficient point cloud management system based on a database server that works on groups of points (patches) rather than individual points. This system is specifically designed to cover the basic needs of point cloud users: fast loading, compressed storage, powerful patch and point filtering, easy data access and exporting, and integrated processing. Moreover, the proposed system fully integrates metadata (such as sensor position) and can jointly use point clouds with other geospatial data, such as images, vectors, topology and other point clouds. Point cloud (parallel) processing can be done in-base with fast prototyping capabilities. Lastly, the system is built on open source technologies; therefore it can be easily extended and customised. We test the proposed system with several billion points obtained from Lidar (aerial and terrestrial) and stereo-vision. We demonstrate loading speeds in the ∼50 million pts/h per-process range, transparent-to-the-user compression at ratios of 2:1 to 4:1 or better, patch filtering in the 0.1 to 1 s range, and output in the 0.1 million pts/s per-process range, along with classical processing methods such as object detection.

Journal ArticleDOI
TL;DR: In this article, the problem of controlling and stabilizing solutions to the Kuramoto–Sivashinsky (KS) equation is studied for a generalized form of the equation that includes the effects of an electric field and dispersion.
Abstract: The problem of controlling and stabilizing solutions to the Kuramoto–Sivashinsky (KS) equation is studied in this paper. We consider a generalized form of the equation in which the effects of an electric field and dispersion are included. Both the feedback and optimal control problems are studied. We prove that we can control arbitrary non-trivial steady states of the KS equation, including travelling wave solutions, using a finite number of point actuators. The number of point actuators needed is related to the number of unstable modes of the equation. Furthermore, the proposed control methodology is shown to be robust with respect to changing the parameters in the equation, e.g. the viscosity coefficient or the intensity of the electric field. We also study the problem of controlling solutions of coupled systems of KS equations. Possible applications to controlling thin film flows are discussed. Our rigorous results are supported by extensive numerical simulations.
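
For concreteness, the standard (non-generalized) KS equation with a finite set of point actuators can be written as below; the generalized equation studied in the paper additionally contains electric-field and dispersive terms, which are omitted from this hedged sketch.

```latex
% Standard Kuramoto--Sivashinsky equation with m point actuators at x_1, ..., x_m
% (the paper's generalized form adds electric-field and dispersive terms):
\[
  u_t + \nu\, u_{xxxx} + u_{xx} + u\,u_x
    = \sum_{i=1}^{m} b_i(x)\, f_i(t),
  \qquad
  b_i(x) = \delta(x - x_i),
\]
% where the controls f_i(t) are chosen, e.g. by feedback on the observed state,
% to stabilize a desired steady state or travelling wave.
```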

Posted Content
TL;DR: In this article, the authors developed a theory parallel to that of motivic complexes based on cycles and correspondences with coefficients in quadratic forms, which is closer to the point of view of $\mathbb{A}^1$-homotopy than the original one envisioned by Beilinson and set up by Voevodsky.
Abstract: The aim of this work is to develop a theory parallel to that of motivic complexes based on cycles and correspondences with coefficients in quadratic forms. This framework is closer to the point of view of $\mathbb{A}^1$-homotopy than the original one envisioned by Beilinson and set up by Voevodsky.

Patent
14 Mar 2017
TL;DR: In this paper, a multi-channel 2D space (IS, HS, DS) is defined for a 3D point cloud from a predetermined view point of a depth sensor.
Abstract: A processing method of a three-dimensional (3D) point cloud, comprising the steps of: obtaining a 3D point cloud from a predetermined view point of a depth sensor; extracting 3D coordinates (xyz) and intensity (i) data from each point of the 3D point cloud with respect to the view point; transforming the 3D coordinates (xyz) and intensity (i) data into at least three two-dimensional (2D) spaces, namely an intensity 2D space (IS) as a function of the intensity (i) data of each point, a height 2D space (HS) as a function of the elevation (h) data of each point, and a distance 2D space (DS) as a function of the distance (d) data between each point of the 3D point cloud and the view point, defining a single multi-channel 2D space (IS, HS, DS).
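
A rough sketch of the claimed transformation under a simplified spherical projection model (the patent does not specify the projection): each point maps to a pixel of a 2D grid seen from the view point, and three channels store intensity (IS), height (HS), and distance to the view point (DS). Resolution and the projection model are assumptions.

```python
import numpy as np

def point_cloud_to_channels(xyz, intensity, view_point, h=64, w=256):
    """Project a 3D point cloud into a 3-channel 2D space: intensity, height, distance."""
    rel = xyz - view_point
    dist = np.linalg.norm(rel, axis=1)                        # distance channel (DS)
    azimuth = np.arctan2(rel[:, 1], rel[:, 0])                # horizontal angle
    elevation_angle = np.arcsin(rel[:, 2] / np.maximum(dist, 1e-9))

    # map angles to pixel coordinates of the 2D spaces (simple spherical model)
    col = ((azimuth + np.pi) / (2 * np.pi) * (w - 1)).astype(int)
    row = ((elevation_angle + np.pi / 2) / np.pi * (h - 1)).astype(int)

    channels = np.zeros((3, h, w))
    channels[0, row, col] = intensity                          # IS: intensity 2D space
    channels[1, row, col] = xyz[:, 2]                          # HS: height 2D space
    channels[2, row, col] = dist                               # DS: distance 2D space
    return channels

pts = np.random.rand(5000, 3) * 20 - 10
inten = np.random.rand(5000)
img = point_cloud_to_channels(pts, inten, view_point=np.zeros(3))
print(img.shape)  # (3, 64, 256)
```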

Journal ArticleDOI
TL;DR: It is shown that the projection point identified by the envelopment model is not the one used by the multiplier model to measure the division efficiencies for general two-stage systems.

Journal ArticleDOI
TL;DR: A new fully automatic segmentation method is proposed in this article, capable of processing a roof point dataset with an arbitrary shape, and robustly selecting the seed points in a parameter space with reduced dimensions and segmenting the planar patches in a sub-dataset with similar attributes when region growing in the object space.
Abstract: Planar patches are important primitives for polyhedral building models. One of the key challenges for successful reconstruction of three-dimensional (3D) building models from airborne lidar point clouds is achieving high quality recognition and segmentation of the roof planar points. Unfortunately, the current automatic extraction processes for planar surfaces continue to suffer from limitations such as sensitivity to the selection of seed points and the lack of computational efficiency. In order to address these drawbacks, a new fully automatic segmentation method is proposed in this article, which is capable of the following: (1) processing a roof point dataset with an arbitrary shape; (2) robustly selecting the seed points in a parameter space with reduced dimensions; and (3) segmenting the planar patches in a sub-dataset with similar attributes when region growing in the object space. The detection of seed points in the parameter space was improved by mapping the accumulator array to a 1D space. The range for region growing in the object space was reduced by an attribute similarity measure that split the roof dataset into candidate and non-candidate subsets. The experimental results confirmed that the proposed approach can extract planar patches of building roofs robustly and efficiently.
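
A bare-bones region-growing sketch of roof-plane segmentation in the spirit of the method above: normals and planarity come from local PCA, the most planar points act as seeds, and growth is constrained by normal similarity and distance to the seed plane. The paper's actual contributions (1D accumulator seed selection, attribute-based candidate subsetting) are not reproduced, and all thresholds are illustrative.

```python
import numpy as np
from scipy.spatial import cKDTree

def region_grow_planes(points, k=12, angle_thresh=np.deg2rad(10), dist_thresh=0.05):
    """Segment a point set into planar patches by seeded region growing."""
    tree = cKDTree(points)
    _, nbrs = tree.query(points, k=k)

    # normals and planarity (smallest-eigenvalue ratio) from local PCA
    normals = np.empty_like(points)
    planarity = np.empty(len(points))
    for i, idx in enumerate(nbrs):
        c = points[idx] - points[idx].mean(0)
        w, v = np.linalg.eigh(c.T @ c)
        normals[i] = v[:, 0]                          # eigenvector of smallest eigenvalue
        planarity[i] = w[0] / max(w.sum(), 1e-12)     # small value = locally planar

    labels = np.full(len(points), -1)
    current = 0
    for seed in np.argsort(planarity):                # most planar points first
        if labels[seed] != -1:
            continue
        queue, labels[seed] = [seed], current
        while queue:
            i = queue.pop()
            for j in nbrs[i]:
                if labels[j] != -1:
                    continue
                same_dir = abs(normals[i] @ normals[j]) > np.cos(angle_thresh)
                in_plane = abs((points[j] - points[seed]) @ normals[seed]) < dist_thresh
                if same_dir and in_plane:
                    labels[j] = current
                    queue.append(j)
        current += 1
    return labels

# toy roof: two planes meeting at a ridge
x, y = np.meshgrid(np.linspace(0, 1, 20), np.linspace(0, 1, 20))
z = 0.5 - np.abs(x - 0.5)
roof = np.column_stack([x.ravel(), y.ravel(), z.ravel()])
print(np.unique(region_grow_planes(roof)))  # a small number of planar patch labels
```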