Journal ArticleDOI

Automatic reconstruction of as-built building information models from laser-scanned point clouds: A review of related techniques

TL;DR: This article surveys techniques developed in civil engineering and computer science that can be utilized to automate the process of creating as-built BIMs and outlines the main methods used by these algorithms for representing knowledge about shape, identity, and relationships.
About: This article is published in Automation in Construction. The article was published on 2010-11-01 and has received 789 citations to date. The article focuses on the topics: Information model & Computer Aided Design.
Citations
Journal ArticleDOI
25 Feb 2014
TL;DR: In this paper, the authors investigated the effectiveness of using terrestrial laser scanning system for topographic survey by carrying out field test in Universiti Teknologi Malaysia (UTM), Skudai, Johor.
Abstract: In this decade, the terrestrial laser scanner (TLS) has become popular in many fields, such as reconstruction, monitoring, surveying, as-built documentation of facilities, archaeology, and topographic surveying. This is due to its high data-collection speed, about 50,000 to 1,000,000 three-dimensional (3D) points per second at high accuracy. The main advantage of a 3D representation of the data is that it is closer to the real world. Therefore, the aim of this paper is to show the use of High-Definition Surveying (HDS), also known as 3D laser scanning, for topographic survey. This research investigates the effectiveness of using a terrestrial laser scanning system for topographic survey by carrying out a field test at Universiti Teknologi Malaysia (UTM), Skudai, Johor. The 3D laser scanner used in this study was a Leica ScanStation C10. Data acquisition was carried out by applying the traversing method. In this study, the result of the topographic survey met 1st-class survey standards. At the completion of this study, a standard procedure was proposed for topographic data acquisition using laser scanning systems. This proposed procedure serves as a guideline for users who wish to fully utilize laser scanning systems in topographic surveys.

6 citations

Journal ArticleDOI
TL;DR: A methodology for quantifying the historical value of the Cathedral of Christ the King, in the municipality of Huejutla de Reyes, Hidalgo, Mexico, was developed through the applicati... as discussed by the authors.
Abstract: In this study, a methodology for quantifying the historical value of the Cathedral of Christ the King, in the municipality of Huejutla de Reyes, Hidalgo, Mexico, was developed through the applicati...

6 citations

Journal ArticleDOI
TL;DR: In this paper, a framework for automated digital documentation and progress reporting of mechanical pipes in building construction projects, using smartphones, was presented to optimize video frame rate to achieve a desired image overlap; define metric scale for 3D reconstruction; extract pipes from point clouds; and classify pipes according to their planned bill of quantity radii.

6 citations

Journal ArticleDOI
TL;DR: In this paper, a conceptual framework has been devised with 21 effective parameters under five significant categories, i.e., target object, technical, external interference, occlusions, and sensing.
Abstract: The construction industry is moving toward digitalization, and technologies support various construction processes. In the automated construction progress monitoring domain, several modern progress measurement techniques have been introduced; however, a hesitant attitude has been observed toward their adoption. Researchers have highlighted that a lack of theoretical understanding of effective implementation is one of the significant reasons. This study aims to analyze general technological parameters related to automated monitoring technologies and devise a theory-based conceptual framework explaining the aspects affecting the adequate operation of automated monitoring. The study was executed by following a systematic process for the identification of effective parameters, comprising a structured literature review, semi-structured interviews, a pilot survey, a questionnaire survey, and a structural equation modeling (SEM)-based mathematical model. A refined conceptual framework has been devised with 21 effective parameters under five significant categories, i.e., "Target Object," "Technical," "External Interference," "Occlusions," and "Sensing." A knowledge framework has been established by adopting the SEM technique, designed on a characteristics-based theme. This conceptual framework provides a theoretical base for practitioners toward a conceptual understanding of automated monitoring processes and of the technological parameters that affect their outcomes. This study is unique in that it focuses on the general criteria and parameters that affect the performance and outcomes of the digital monitoring process, and it is easily understandable by the user or operator.

6 citations

Proceedings ArticleDOI
19 Sep 2012
TL;DR: This paper proposes a method based on sparsity-inducing optimization to address the planar surface extraction problem and experimental results on a typical noisy PCD demonstrate the effectiveness of the algorithm.
Abstract: Most of the manual labor needed to create the geometric building information model (BIM) of an existing facility is spent converting raw point cloud data (PCD) to a BIM description. Automating this process would drastically reduce the modeling cost. Surface extraction from PCD is a fundamental step in this process. Compact modeling of redundant points in PCD as a set of planes leads to smaller file size and fast interactive visualization on cheap hardware. Traditional approaches for smooth surface reconstruction do not explicitly model the sparse scene structure or significantly exploit the redundancy. This paper proposes a method based on sparsity-inducing optimization to address the planar surface extraction problem. Through sparse optimization, points in PCD are segmented according to their embedded linear subspaces. Within each segmented part, plane models can be estimated. Experimental results on a typical noisy PCD demonstrate the effectiveness of the algorithm.

5 citations
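The abstract above only names the approach. As a loose illustration of the general idea, segmenting points by the planar subspace they lie in and then fitting a plane per segment, the sketch below uses a simple alternating k-planes scheme rather than the paper's sparsity-inducing optimization; all function names and the toy data are illustrative assumptions, not the paper's method:

```python
import numpy as np

def fit_plane(pts):
    """Least-squares plane through pts: returns (centroid, unit normal)."""
    c = pts.mean(axis=0)
    # The singular vector for the smallest singular value of the centered
    # points is the direction of least variance, i.e. the plane normal.
    _, _, vt = np.linalg.svd(pts - c)
    return c, vt[-1]

def k_planes(points, k=2, n_iters=10):
    """Alternate point-to-plane assignment and per-segment plane fitting."""
    labels = np.arange(len(points)) % k  # deterministic starting segmentation
    for _ in range(n_iters):
        planes = [fit_plane(points[labels == j]) for j in range(k)]
        # Point-to-plane distance |(p - c) . n| for every candidate plane.
        dists = np.stack([np.abs((points - c) @ n) for c, n in planes], axis=1)
        labels = dists.argmin(axis=1)
    return labels, planes

# Two parallel planar patches, e.g. a floor at z = 0 and a ceiling at z = 1.
xs, ys = np.meshgrid(np.arange(5.0), np.arange(5.0))
floor = np.stack([xs.ravel(), ys.ravel(), np.zeros(25)], axis=1)
ceiling = np.stack([xs.ravel(), ys.ravel(), np.ones(25)], axis=1)
labels, planes = k_planes(np.concatenate([floor, ceiling]), k=2)
```

The alternation converges quickly on clean data; robust variants add an outlier threshold on the point-to-plane distance.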

References
Journal ArticleDOI
TL;DR: New results are derived on the minimum number of landmarks needed to obtain a solution, and algorithms are presented for computing these minimum-landmark solutions in closed form, providing the basis for an automatic system that can solve the Location Determination Problem under difficult viewing conditions.
Abstract: A new paradigm, Random Sample Consensus (RANSAC), for fitting a model to experimental data is introduced. RANSAC is capable of interpreting/smoothing data containing a significant percentage of gross errors, and is thus ideally suited for applications in automated image analysis where interpretation is based on the data provided by error-prone feature detectors. A major portion of this paper describes the application of RANSAC to the Location Determination Problem (LDP): Given an image depicting a set of landmarks with known locations, determine that point in space from which the image was obtained. In response to a RANSAC requirement, new results are derived on the minimum number of landmarks needed to obtain a solution, and algorithms are presented for computing these minimum-landmark solutions in closed form. These results provide the basis for an automatic system that can solve the LDP under difficult viewing conditions.

23,396 citations
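The core RANSAC loop described above (fit a model to a random minimal sample, count inliers, keep the largest consensus set) fits in a few lines. This 2-D line-fitting example is a generic illustration, not code from the paper:

```python
import random
import math

def ransac_line(points, n_iters=200, inlier_tol=0.1, seed=0):
    """Fit a 2-D line to noisy points with RANSAC.

    Repeatedly hypothesize a line from a random minimal sample of two
    points and keep the hypothesis with the largest consensus set.
    """
    rng = random.Random(seed)
    best_inliers = []
    for _ in range(n_iters):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        # Line through the two sampled points in implicit form ax + by + c = 0.
        a, b = y2 - y1, x1 - x2
        norm = math.hypot(a, b)
        if norm == 0:
            continue  # degenerate sample
        c = -(a * x1 + b * y1)
        inliers = [p for p in points
                   if abs(a * p[0] + b * p[1] + c) / norm < inlier_tol]
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    return best_inliers

# Ten points on y = 2x plus two gross outliers that defeat least squares.
pts = [(x, 2 * x) for x in range(10)] + [(3.0, 40.0), (7.0, -5.0)]
consensus = ransac_line(pts)
```

Unlike least squares, the gross outliers never contaminate the fit; they simply fail to join the consensus set.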

Journal ArticleDOI
TL;DR: This paper has designed a stand-alone, flexible C++ implementation that enables the evaluation of individual components and that can easily be extended to include new algorithms.
Abstract: Stereo matching is one of the most active research areas in computer vision. While a large number of algorithms for stereo correspondence have been developed, relatively little work has been done on characterizing their performance. In this paper, we present a taxonomy of dense, two-frame stereo methods designed to assess the different components and design decisions made in individual stereo algorithms. Using this taxonomy, we compare existing stereo methods and present experiments evaluating the performance of many different variants. In order to establish a common software platform and a collection of data sets for easy evaluation, we have designed a stand-alone, flexible C++ implementation that enables the evaluation of individual components and that can be easily extended to include new algorithms. We have also produced several new multiframe stereo data sets with ground truth, and are making both the code and data sets available on the Web.

7,458 citations


"Automatic reconstruction of as-buil..." refers background in this paper

  • ...In other fields, such as computer vision, standard test sets and performance metrics have been established [72,83], but no standard evaluation metrics have been established for as-built BIM creation as yet....

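A minimal instance of the dense two-frame methods the taxonomy covers is winner-take-all SSD block matching. The sketch below is a generic illustration with assumed toy data, not code from the paper's C++ platform:

```python
import numpy as np

def block_match(left, right, max_disp, win=1):
    """Winner-take-all SSD block matching over horizontal shifts."""
    h, w = left.shape
    L = np.pad(left.astype(float), win, mode="edge")
    R = np.pad(right.astype(float), win, mode="edge")
    disp = np.zeros((h, w), dtype=int)
    for y in range(h):
        for x in range(w):
            patch = L[y:y + 2 * win + 1, x:x + 2 * win + 1]
            best_cost, best_d = np.inf, 0
            # Try every disparity that keeps the candidate window in-image.
            for d in range(min(max_disp, x) + 1):
                cand = R[y:y + 2 * win + 1, x - d:x - d + 2 * win + 1]
                cost = np.sum((patch - cand) ** 2)
                if cost < best_cost:
                    best_cost, best_d = cost, d
            disp[y, x] = best_d
    return disp

# Synthetic pair: the right image is the left shifted by 2 pixels,
# so the true disparity is 2 wherever it is observable.
rng = np.random.default_rng(0)
base = rng.random((6, 12))
left, right = base[:, :10], base[:, 2:]
disp = block_match(left, right, max_disp=4)
```

In the taxonomy's terms this combines an SSD matching cost, square-window aggregation, and winner-take-all optimization; real methods vary each component independently.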

Journal ArticleDOI
TL;DR: Recognition-by-components (RBC) provides a principled account of the heretofore undecided relation between the classic principles of perceptual organization and pattern recognition.
Abstract: The perceptual recognition of objects is conceptualized to be a process in which the image of the input is segmented at regions of deep concavity into an arrangement of simple geometric components, such as blocks, cylinders, wedges, and cones. The fundamental assumption of the proposed theory, recognition-by-components (RBC), is that a modest set of generalized-cone components, called geons (N ≤ 36), can be derived from contrasts of five readily detectable properties of edges in a two-dimensional image: curvature, collinearity, symmetry, parallelism, and cotermination. The detection of these properties is generally invariant over viewing position and image quality and consequently allows robust object perception when the image is projected from a novel viewpoint or is degraded. RBC thus provides a principled account of the heretofore undecided relation between the classic principles of perceptual organization and pattern recognition: The constraints toward regularization (Pragnanz) characterize not the complete object but the object's components. Representational power derives from an allowance of free combinations of the geons. A Principle of Componential Recovery can account for the major phenomena of object recognition: If an arrangement of two or three geons can be recovered from the input, objects can be quickly recognized even when they are occluded, novel, rotated in depth, or extensively degraded. The results from experiments on the perception of briefly presented pictures by human observers provide empirical support for the theory. Any single object can project an infinity of image configurations to the retina. The orientation of the object to the viewer can vary continuously, each giving rise to a different two-dimensional projection. The object can be occluded by other objects or texture fields, as when viewed behind foliage. The object need not be presented as a full-colored textured image but instead can be a simplified line drawing.
Moreover, the object can even be missing some of its parts or be a novel exemplar of its particular category. But it is only with rare exceptions that an image fails to be rapidly and readily classified, either as an instance of a familiar object category or as an instance that cannot be so classified (itself a form of classification).

5,464 citations


"Automatic reconstruction of as-buil..." refers background in this paper

  • ...Various researchers have proposed candidate sets of primitives, such as geons [9], superquadrics [3], and generalized cylinders [10]....


Journal ArticleDOI
TL;DR: Two of the most critical requirements in support of producing reliable face-recognition systems are a large database of facial images and a testing procedure to evaluate systems.
Abstract: Two of the most critical requirements in support of producing reliable face-recognition systems are a large database of facial images and a testing procedure to evaluate systems. The Face Recognition Technology (FERET) program has addressed both issues through the FERET database of facial images and the establishment of the FERET tests. To date, 14,126 images from 1,199 individuals are included in the FERET database, which is divided into development and sequestered portions of the database. In September 1996, the FERET program administered the third in a series of FERET face-recognition tests. The primary objectives of the third test were to 1) assess the state of the art, 2) identify future areas of research, and 3) measure algorithm performance.

4,816 citations

Proceedings ArticleDOI
01 Aug 1996
TL;DR: This paper presents a volumetric method for integrating range images that is able to integrate a large number of range images yielding seamless, high-detail models of up to 2.6 million triangles.
Abstract: A number of techniques have been developed for reconstructing surfaces by integrating groups of aligned range images. A desirable set of properties for such algorithms includes: incremental updating, representation of directional uncertainty, the ability to fill gaps in the reconstruction, and robustness in the presence of outliers. Prior algorithms possess subsets of these properties. In this paper, we present a volumetric method for integrating range images that possesses all of these properties. Our volumetric representation consists of a cumulative weighted signed distance function. Working with one range image at a time, we first scan-convert it to a distance function, then combine this with the data already acquired using a simple additive scheme. To achieve space efficiency, we employ a run-length encoding of the volume. To achieve time efficiency, we resample the range image to align with the voxel grid and traverse the range and voxel scanlines synchronously. We generate the final manifold by extracting an isosurface from the volumetric grid. We show that under certain assumptions, this isosurface is optimal in the least squares sense. To fill gaps in the model, we tessellate over the boundaries between regions seen to be empty and regions never observed. Using this method, we are able to integrate a large number of range images (as many as 70) yielding seamless, high-detail models of up to 2.6 million triangles.

3,282 citations


"Automatic reconstruction of as-buil..." refers background in this paper

  • ...Non-parametric geometric modeling reconstructs a surface, typically in the form of a triangle mesh [41], or a volume [18]....

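The cumulative weighted signed-distance update described in the abstract can be illustrated in one dimension. This toy sketch (illustrative names, uniform per-scan weights, a single ray of voxels) shows voxels accumulating a weighted average of truncated signed distances, with the fused surface at the zero crossing; it is a simplification, not the paper's run-length-encoded 3-D implementation:

```python
import numpy as np

def fuse_range_measurements(measurements, grid, trunc=0.5):
    """Toy 1-D weighted signed-distance fusion.

    Each measurement is an observed surface depth along one ray.  Every
    voxel keeps a running weighted average D of truncated signed
    distances and a cumulative weight W; the fused surface is recovered
    as the zero crossing of D.
    """
    D = np.zeros_like(grid)
    W = np.zeros_like(grid)
    for z in measurements:
        d = np.clip(z - grid, -trunc, trunc)  # truncated signed distance
        w = 1.0                               # uniform per-scan confidence
        D = (W * D + w * d) / (W + w)         # cumulative weighted average
        W = W + w
    return D

grid = np.linspace(0.0, 2.0, 21)   # voxel centers spaced 0.1 apart
# Three noisy scans of the same surface near depth 1.0.
sdf = fuse_range_measurements([0.95, 1.00, 1.05], grid)
surface = grid[np.argmin(np.abs(sdf))]
```

The averaging is what makes the fused zero crossing a least-squares-optimal surface estimate under the paper's assumptions; in 3-D the same update runs per voxel with an isosurface extraction at the end.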