Journal ArticleDOI

Automatic reconstruction of as-built building information models from laser-scanned point clouds: A review of related techniques

TL;DR: This article surveys techniques developed in civil engineering and computer science that can be utilized to automate the process of creating as-built BIMs and outlines the main methods used by these algorithms for representing knowledge about shape, identity, and relationships.
About: This article is published in Automation in Construction. The article was published on 2010-11-01. It has received 789 citations to date. The article focuses on the topics: Information model & Computer Aided Design.
Citations
01 Jan 2017

2 citations


Cites background from "Automatic reconstruction of as-buil..."

  • ...3D imagery data collection technology such as 3D laser scanners has the capacity to capture dense point cloud data of a structure with mm-level accuracy (Akinci et al. 2006; Tang et al. 2010)....

  • ...Tang et al. reviewed a broad range of algorithms and techniques that are used for the recognition and reconstruction of building elements from 3D laser scan data for as-built modeling (Tang et al. 2010)....


Proceedings ArticleDOI
01 Jan 2019

2 citations

Proceedings ArticleDOI
01 Jul 2014
TL;DR: A new logarithmically proportional objective function that can be used in both heuristic and metaheuristic algorithms to discover planar surfaces in a point cloud without exploiting any prior knowledge about those surfaces is introduced.
Abstract: 3D laser scanning is becoming a standard technology for generating building models of a facility's as-is condition. Since most structures are built from planar surfaces, recognizing those surfaces paves the way for automating the generation of building models. This paper introduces a new logarithmically proportional objective function that can be used in both heuristic and metaheuristic (MH) algorithms to discover planar surfaces in a point cloud without exploiting any prior knowledge about those surfaces. It can also adapt itself to the structural density of a scanned construction. In this paper, a metaheuristic method, the genetic algorithm (GA), is used to test the introduced objective function on a synthetic point cloud. The results show that the proposed method is capable of finding all plane configurations of planar surfaces (with a wide variety of sizes) in the point cloud with only a minor deviation from the actual configurations.
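
As a rough illustration of how such an objective might score a candidate plane against a point cloud, the sketch below uses a distance-based, log-damped score. The logarithmic weighting and the `eps` scale parameter are assumptions for illustration only; the paper's exact "logarithmically proportional" formulation is not reproduced here.

```python
# Minimal sketch (not the paper's exact formulation): score a candidate plane
# n.x + d = 0 against an (N, 3) point cloud. Points near the plane contribute
# more; the log term damps the penalty for small, noise-level offsets.
import numpy as np

def plane_objective(points, normal, d, eps=0.01):
    normal = normal / np.linalg.norm(normal)
    dist = np.abs(points @ normal + d)              # point-to-plane distances
    return np.sum(1.0 / np.log(np.e + dist / eps))  # higher is better

# Toy check: a noisy horizontal plane z = 0 plus uniform clutter.
rng = np.random.default_rng(0)
plane_pts = np.c_[rng.uniform(-1, 1, (500, 2)), rng.normal(0, 0.002, 500)]
cloud = np.vstack([plane_pts, rng.uniform(-1, 1, (200, 3))])
print(plane_objective(cloud, np.array([0.0, 0.0, 1.0]), 0.0))  # true plane scores high
print(plane_objective(cloud, np.array([0.0, 1.0, 0.0]), 0.5))  # wrong plane scores low
```

In a GA setting, each chromosome would encode the parameters of one or more candidate planes (normal and offset), and a score of this kind would serve as the fitness function.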

2 citations

Book ChapterDOI
24 Aug 2018
TL;DR: In this chapter, a systematic review of previous studies touching upon PCD-enabled facility management applications is presented; the identified gaps include overlooking non-geometric properties (e.g., specifications of materials, relations between objects) and a lack of focus on providing decision-support functions.
Abstract: Although the value of 3D point cloud data (PCD) has been increasingly recognized by the architectural, engineering, construction and facility operations (AECO) sectors, there is much less actual application of PCD in facility management (FM) than in other stages. In order to facilitate the exploration of using PCD for FM, this study aims to summarize existing research effort and identify the gaps based on a systematic review of previous studies touching upon PCD-enabled FM. The review was guided by a conceptual model consisting of four key components of the PCD application process: target objects, PCD sensing, model output and applications. 47 papers published in 21 academic journals were collected for the analysis. It was found that Light Detection and Ranging (LiDAR), photogrammetry and Synthetic Aperture Radar (SAR) were the three most frequently used technologies for collecting PCD. The raw signals collected by these technologies, such as fragments of point cloud and photos, need to be pre-processed to generate the PCD, and segmentation and meshing are the two general aspects of PCD post-processing for creating models. It was also found that most studies focused on geometric properties, data processing, feature extraction, object recognition and model generation, and seldom dug deeper to provide decision-making support for FM applications. Based on the results, three major gaps in PCD-enabled FM were identified: (1) overlooking valuable non-geometric properties (e.g. specifications of materials, relations between objects); (2) insufficient focus on providing decision-support functions; and (3) remaining at the data level rather than the information level. Eleven possible research directions, including semantics enrichment, real-time model generation, longitudinal analysis and smart living applications of PCD-enabled FM, were suggested for future research.

1 citation

References
Journal ArticleDOI
TL;DR: New results are derived on the minimum number of landmarks needed to obtain a solution, and algorithms are presented for computing these minimum-landmark solutions in closed form; these results provide the basis for an automatic system that can solve the Location Determination Problem under difficult viewing and analysis conditions.
Abstract: A new paradigm, Random Sample Consensus (RANSAC), for fitting a model to experimental data is introduced. RANSAC is capable of interpreting/smoothing data containing a significant percentage of gross errors, and is thus ideally suited for applications in automated image analysis where interpretation is based on the data provided by error-prone feature detectors. A major portion of this paper describes the application of RANSAC to the Location Determination Problem (LDP): Given an image depicting a set of landmarks with known locations, determine that point in space from which the image was obtained. In response to a RANSAC requirement, new results are derived on the minimum number of landmarks needed to obtain a solution, and algorithms are presented for computing these minimum-landmark solutions in closed form. These results provide the basis for an automatic system that can solve the LDP under difficult viewing and analysis conditions.
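
To make the sample-and-consensus pattern concrete, here is a generic RANSAC loop applied to plane fitting in a point cloud (point clouds being the context of the surveyed article); the paper's own application is the Location Determination Problem, which this sketch does not implement. The tolerance and iteration count are illustrative defaults.

```python
# Generic RANSAC: repeatedly fit a model to a minimal random sample, score it
# by its consensus set (inliers), keep the best, then refit on the inliers.
import numpy as np

def ransac_plane(points, n_iters=200, inlier_tol=0.01, seed=None):
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(points), dtype=bool)
    for _ in range(n_iters):
        # Minimal sample: three non-collinear points define a plane.
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(p1 - p0, p2 - p0)
        if np.linalg.norm(normal) < 1e-9:      # degenerate (collinear) sample
            continue
        normal /= np.linalg.norm(normal)
        # Consensus: count points within the distance tolerance.
        inliers = np.abs((points - p0) @ normal) < inlier_tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # Final least-squares refit on the consensus set (plane normal via SVD).
    centroid = points[best_inliers].mean(axis=0)
    _, _, vh = np.linalg.svd(points[best_inliers] - centroid)
    return vh[-1], centroid, best_inliers
```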

23,396 citations

Journal ArticleDOI
TL;DR: This paper presents a taxonomy of dense, two-frame stereo methods and a stand-alone, flexible C++ implementation that enables the evaluation of individual components and can easily be extended to include new algorithms.
Abstract: Stereo matching is one of the most active research areas in computer vision. While a large number of algorithms for stereo correspondence have been developed, relatively little work has been done on characterizing their performance. In this paper, we present a taxonomy of dense, two-frame stereo methods designed to assess the different components and design decisions made in individual stereo algorithms. Using this taxonomy, we compare existing stereo methods and present experiments evaluating the performance of many different variants. In order to establish a common software platform and a collection of data sets for easy evaluation, we have designed a stand-alone, flexible C++ implementation that enables the evaluation of individual components and that can be easily extended to include new algorithms. We have also produced several new multiframe stereo data sets with ground truth, and are making both the code and data sets available on the Web.
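
As one concrete instance of the matching-cost and aggregation components that the taxonomy separates out, the sketch below computes a dense disparity map by winner-take-all block matching with a sum-of-squared-differences cost. The window size, disparity range and border handling are illustrative assumptions; the methods evaluated in the paper add further optimization and refinement stages.

```python
# Dense two-frame stereo by SSD block matching with winner-take-all selection.
import numpy as np
from scipy.ndimage import uniform_filter

def block_match_disparity(left, right, max_disp=16, radius=3):
    """Rectified grayscale images (2-D float arrays) -> integer disparity map."""
    h, w = left.shape
    best_cost = np.full((h, w), np.inf)
    disparity = np.zeros((h, w), dtype=np.int32)
    for d in range(max_disp):
        # Matching cost: squared difference between left(x) and right(x - d).
        shifted = right.copy()
        if d > 0:
            shifted[:, d:] = right[:, :-d]
            shifted[:, :d] = right[:, :1]          # crude left-border handling
        cost = (left - shifted) ** 2
        # Cost aggregation: mean over a (2*radius + 1)^2 window.
        cost = uniform_filter(cost, size=2 * radius + 1)
        # Winner-take-all disparity selection per pixel.
        better = cost < best_cost
        best_cost[better] = cost[better]
        disparity[better] = d
    return disparity
```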

7,458 citations


"Automatic reconstruction of as-buil..." refers background in this paper

  • ...In other fields, such as computer vision, standard test sets and performance metrics have been established [72,83], but no standard evaluation metrics have been established for as-built BIM creation as yet....


Journal ArticleDOI
TL;DR: Recognition-by-components (RBC) provides a principled account of the heretofore undecided relation between the classic principles of perceptual organization and pattern recognition.
Abstract: The perceptual recognition of objects is conceptualized to be a process in which the image of the input is segmented at regions of deep concavity into an arrangement of simple geometric components, such as blocks, cylinders, wedges, and cones. The fundamental assumption of the proposed theory, recognition-by-components (RBC), is that a modest set of generalized-cone components, called geons (N ≤ 36), can be derived from contrasts of five readily detectable properties of edges in a two-dimensional image: curvature, collinearity, symmetry, parallelism, and cotermination. The detection of these properties is generally invariant over viewing position and image quality and consequently allows robust object perception when the image is projected from a novel viewpoint or is degraded. RBC thus provides a principled account of the heretofore undecided relation between the classic principles of perceptual organization and pattern recognition: The constraints toward regularization (Prägnanz) characterize not the complete object but the object's components. Representational power derives from an allowance of free combinations of the geons. A Principle of Componential Recovery can account for the major phenomena of object recognition: If an arrangement of two or three geons can be recovered from the input, objects can be quickly recognized even when they are occluded, novel, rotated in depth, or extensively degraded. The results from experiments on the perception of briefly presented pictures by human observers provide empirical support for the theory. Any single object can project an infinity of image configurations to the retina. The orientation of the object to the viewer can vary continuously, each giving rise to a different two-dimensional projection. The object can be occluded by other objects or texture fields, as when viewed behind foliage. The object need not be presented as a full-colored textured image but instead can be a simplified line drawing. Moreover, the object can even be missing some of its parts or be a novel exemplar of its particular category. But it is only with rare exceptions that an image fails to be rapidly and readily classified, either as an instance of a familiar object category or as an instance that cannot be so classified (itself a form of classification).

5,464 citations


"Automatic reconstruction of as-buil..." refers background in this paper

  • ...Various researchers have proposed candidate sets of primitives, such as geons [9], superquadrics [3], and generalized cylinders [10]....


Journal ArticleDOI
TL;DR: Two of the most critical requirements in support of producing reliable face-recognition systems are a large database of facial images and a testing procedure to evaluate systems.
Abstract: Two of the most critical requirements in support of producing reliable face-recognition systems are a large database of facial images and a testing procedure to evaluate systems. The Face Recognition Technology (FERET) program has addressed both issues through the FERET database of facial images and the establishment of the FERET tests. To date, 14,126 images from 1,199 individuals are included in the FERET database, which is divided into development and sequestered portions of the database. In September 1996, the FERET program administered the third in a series of FERET face-recognition tests. The primary objectives of the third test were to 1) assess the state of the art, 2) identify future areas of research, and 3) measure algorithm performance.

4,816 citations

Proceedings ArticleDOI
01 Aug 1996
TL;DR: This paper presents a volumetric method for integrating range images that is able to integrate a large number of range images yielding seamless, high-detail models of up to 2.6 million triangles.
Abstract: A number of techniques have been developed for reconstructing surfaces by integrating groups of aligned range images. A desirable set of properties for such algorithms includes: incremental updating, representation of directional uncertainty, the ability to fill gaps in the reconstruction, and robustness in the presence of outliers. Prior algorithms possess subsets of these properties. In this paper, we present a volumetric method for integrating range images that possesses all of these properties. Our volumetric representation consists of a cumulative weighted signed distance function. Working with one range image at a time, we first scan-convert it to a distance function, then combine this with the data already acquired using a simple additive scheme. To achieve space efficiency, we employ a run-length encoding of the volume. To achieve time efficiency, we resample the range image to align with the voxel grid and traverse the range and voxel scanlines synchronously. We generate the final manifold by extracting an isosurface from the volumetric grid. We show that under certain assumptions, this isosurface is optimal in the least squares sense. To fill gaps in the model, we tessellate over the boundaries between regions seen to be empty and regions never observed. Using this method, we are able to integrate a large number of range images (as many as 70) yielding seamless, high-detail models of up to 2.6 million triangles.
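
A minimal sketch of the cumulative weighted signed-distance update described above is given below, fusing one depth image at a time into flat TSDF and weight arrays defined over a voxel grid; the pinhole camera model, unit per-observation weights, and truncation distance are illustrative assumptions, and the run-length encoding and scanline resampling optimizations from the paper are omitted.

```python
# One-image TSDF update: project voxel centers into the range image, compute a
# truncated signed distance along the viewing ray, and fold it into a running
# weighted average, as in volumetric range-image integration.
import numpy as np

def integrate_depth(tsdf, weight, voxel_centers, depth, K, cam_pose, trunc=0.05):
    """tsdf, weight: flat (V,) arrays; voxel_centers: (V, 3); depth in metres."""
    h, w = depth.shape
    world_to_cam = np.linalg.inv(cam_pose)                  # 4x4 camera-to-world pose
    pts = (world_to_cam[:3, :3] @ voxel_centers.T + world_to_cam[:3, 3:]).T
    z = pts[:, 2]
    front = np.where(z > 1e-6)[0]                           # voxels in front of the camera
    u = np.round(K[0, 0] * pts[front, 0] / z[front] + K[0, 2]).astype(int)
    v = np.round(K[1, 1] * pts[front, 1] / z[front] + K[1, 2]).astype(int)
    inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    idx = front[inside]
    d_obs = depth[v[inside], u[inside]]
    sdf = np.clip(d_obs - z[idx], -trunc, trunc)            # truncated signed distance
    keep = (d_obs > 0) & (d_obs - z[idx] > -trunc)          # skip unseen / far-behind voxels
    idx, sdf = idx[keep], sdf[keep]
    w_new = weight[idx] + 1.0                               # unit weight per observation
    tsdf[idx] = (tsdf[idx] * weight[idx] + sdf) / w_new     # cumulative weighted average
    weight[idx] = w_new
    return tsdf, weight
```

After all range images have been integrated, the final mesh would be obtained by extracting the zero isosurface of the TSDF volume, for example with marching cubes.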

3,282 citations


"Automatic reconstruction of as-buil..." refers background in this paper

  • ...Non-parametric geometric modeling reconstructs a surface, typically in the form of a triangle mesh [41], or a volume [18]....
