Journal ArticleDOI

Automatic reconstruction of as-built building information models from laser-scanned point clouds: A review of related techniques

TL;DR: This article surveys techniques developed in civil engineering and computer science that can be utilized to automate the process of creating as-built BIMs and outlines the main methods used by these algorithms for representing knowledge about shape, identity, and relationships.
About: This article is published in Automation in Construction. The article was published on 2010-11-01 and has received 789 citations to date. The article focuses on the topics: Information model & Computer Aided Design.
Citations
Journal ArticleDOI
TL;DR: For facility management of the campus, a geo-information system is developed that can be used for planning, constructing, and maintaining Jatinangor ITB’s facilities and infrastructure, using openMAINT, an open source solution for Property & Facility Management.
Abstract: The new ITB-Indonesia campus, located at Jatinangor, requires good facilities and infrastructure to support all campus activities; these cannot be separated from procurement and maintenance activities. Computer-based technology (information systems) for the procurement and maintenance of facilities and infrastructure is known as Building Information Modeling (BIM). This technology is now more affordable, with free software that is easy to use and can be tailored to user needs. BIM has some disadvantages and requires another technology to complement it, namely Geographic Information Systems (GIS). BIM and GIS require surveying data to visualize the landscape and buildings of the Jatinangor ITB campus. This paper presents the on-going internal service program conducted by the researchers, academic staff, and students for the university. The program includes 3D surveying to support the data requirements for 3D modeling of buildings in the CityGML and Industry Foundation Classes (IFC) data models. The 3D surveying produces point clouds that can be used to build the 3D models. The 3D modeling is divided into low and high levels of detail: the low-level model is stored in a 3D CityGML database, and the high-level model, including interiors, is stored in a BIM server. The 3D models can be used to visualize the buildings and site of the Jatinangor ITB campus. For facility management of the campus, a geo-information system is developed that can be used for planning, constructing, and maintaining Jatinangor ITB’s facilities and infrastructure. The system uses openMAINT, an open source solution for Property & Facility Management.

4 citations


Cites background from "Automatic reconstruction of as-buil..."

  • ...…scans and targets are predetermined, (2) data capturing stage, which is usually done when construction is complete, and (3) data processing stage, in which scanned data are converted to 3D components through reverse engineering approaches (e.g., Akinci et al. 2006, Arayici 2007, Tang et al. 2010)....


Journal ArticleDOI
TL;DR: In this paper, parametric representations for H-BIM-related uses are semi-automatically reconstructed by means of the most recent 3D data classification techniques that exploit Artificial Intelligence (AI).
Abstract: Cultural heritage information systems, such as H-BIM, are becoming more and more widespread today, thanks to their potential to bring together, around a 3D representation, the wealth of knowledge related to a given object of study. However, the reconstruction of such tools starting from 3D architectural surveying is still largely deemed a lengthy and time-consuming process, with inherent complexities related to managing and interpreting unstructured and unorganized data derived, e.g., from laser scanning or photogrammetry. Tackling this issue and starting from reality-based surveying, the purpose of this paper is to semi-automatically reconstruct parametric representations for H-BIM-related uses by means of the most recent 3D data classification techniques that exploit Artificial Intelligence (AI). The presented methodology consists of a first semantic segmentation phase, aiming at the automatic recognition through AI of architectural elements of historic buildings within point clouds; a Random Forest classifier is used for the classification task, evaluating each time the performance of the predictive model. At a second stage, visual programming techniques are applied to the reconstruction of a conceptual mock-up of each detected element and to the subsequent propagation of the 3D information to other objects with similar characteristics. The resulting parametric model can be used for heritage preservation and dissemination purposes, as in common practices implemented in modern H-BIM documentation systems. The methodology is tailored to representative case studies related to the typology of the medieval cloister and scattered over the Tuscan territory.
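
A minimal sketch of the per-point classification step described above, assuming pre-computed geometric features (e.g. normals, height above ground, local covariance descriptors) and placeholder labels; it uses scikit-learn's RandomForestClassifier and is not the authors' implementation:

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# X: (n_points, n_features) geometric features per point; y: architectural-element
# labels (e.g. column, vault, wall) encoded as integers. Random placeholders here.
rng = np.random.default_rng(0)
X = rng.normal(size=(10_000, 8))
y = rng.integers(0, 4, size=10_000)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# Train the Random Forest and evaluate the predictive model on held-out points,
# mirroring the per-class performance evaluation mentioned in the abstract.
clf = RandomForestClassifier(n_estimators=200, n_jobs=-1, random_state=0)
clf.fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))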

4 citations

Journal ArticleDOI
TL;DR: In this paper, the Integrated Data Assessment and Modeling (IDAM) method, based on digital scanning and modelling technologies for capturing geometry and material composition data, is proposed to enable the generation of as-built Building Information Modelling (BIM) models from acquired point clouds and non-geometric data.
Abstract: Buildings are the largest consumer of raw materials and are simultaneously responsible for 40% of global energy consumption as well as for about 30% of global CO2 emissions. In order to reach sustainability goals such as reducing the use of primary resources, it is of utmost importance to reuse or recycle the existing stock – a strategy labelled “Urban Mining”. The fact that the new construction rate is only 3% underlines the importance of Urban Mining. However, there is a lack of knowledge about the exact material composition and geometry of the existing stock, which represents the main obstacle to Urban Mining and, accordingly, to reaching high recycling rates. In this paper, the Integrated Data Assessment and Modelling (IDAM) method, based on digital scanning and modelling technologies for capturing geometry and material composition data, is proposed to enable the generation of as-built Building Information Modelling (BIM) models from acquired point clouds and non-geometric data. The main aim of this research is to explore the potential of the IDAM method for the generation of a BIM model that serves as the basis for BIM-based Material Passports (MP), a major element enabling Circular Economy (CE) and Urban Mining strategies as well as the creation of a digital secondary raw materials cadastre. In order to deliver a proof of concept for IDAM, a real use case is assessed in terms of geometry and material composition, and the possibilities of data capturing via laser scanning and ground penetrating radar (GPR) for follow-up generation of a BIM-based MP are explored. Laser scanning is used for capturing the geometry, and GPR for capturing the material composition. The use of GPR for the generation of a BIM model that incorporates material information addresses a research gap – the capturing and modelling of geometry is already well explored, but methods and tools for capturing and modelling the material composition of buildings are largely lacking. Results show that the coupled use of capturing technologies has great potential to serve as the basis for a BIM-based MP. Moreover, the use of GPR enables the determination of embedded materials within a building, but is confronted with various difficulties. As a result, a framework that can serve as groundwork for follow-up research is presented.

4 citations

Proceedings ArticleDOI
TL;DR: Two promising unsupervised techniques, One-Class SVM (OCSVM) and Isolation Forest (IF), are investigated; both optimize the separation between relevant/normal points and irrelevant/noisy points.
Abstract: 3D point clouds are receiving increasing attention for perceiving 3D environments, which is needed in many emerging applications. This data structure is challenging due to its characteristics and to the limitations of the acquisition step, which adds a considerable amount of noise. Therefore, enhancing 3D point clouds is a crucial and critical step. In this paper, we investigate two promising unsupervised techniques, One-Class SVM (OCSVM) and Isolation Forest (IF). These two techniques optimize the separation between relevant/normal points and irrelevant/noisy points. For evaluation, three metrics are computed: the processing time, the number of detected noisy points, and the Peak Signal-to-Noise Ratio (PSNR). These are used to compare the two proposed techniques with one of the filters recommended in the literature, the Moving Least Squares (MLS) filter. The obtained results reveal promising capability in terms of effectiveness. However, the OCSVM technique suffers from high computational time; therefore, its efficiency is enhanced using a modern Graphics Processing Unit (GPU), with an average improvement rate of 1.8.
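
As a hedged illustration of the two unsupervised filters compared above (the dataset, parameters, and contamination rate are assumptions, not the paper's setup), the following sketch flags noisy points in a 3D point cloud with scikit-learn's One-Class SVM and Isolation Forest:

import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.svm import OneClassSVM

# Synthetic placeholder cloud: a planar patch plus uniformly scattered noise points.
rng = np.random.default_rng(1)
plane = np.column_stack([rng.uniform(0, 1, (5000, 2)), np.zeros(5000)])
noise = rng.uniform(-0.5, 0.5, (250, 3))
points = np.vstack([plane, noise])

# Both models label inliers as +1 and candidate noise as -1.
ocsvm = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale").fit(points)
iforest = IsolationForest(contamination=0.05, random_state=1).fit(points)

keep_svm = ocsvm.predict(points) == 1
keep_if = iforest.predict(points) == 1
print(f"OCSVM kept {keep_svm.sum()} points, Isolation Forest kept {keep_if.sum()} points")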

4 citations


Cites background from "Automatic reconstruction of as-buil..."

  • ...Many emerging applications in various fields as augmented reality, robot perception [1], shape designing [2], damage detection and quantification [3], emergency preparedness [4]–[7], digitizing cultural heritage [8], building surveying repair and maintenance [9], and many others require the perception of 3D environment or the interaction with 3D objects....


Journal ArticleDOI
01 Jul 2022-Sensors
TL;DR: NrtNet, as discussed by the authors, uses a transformer-based correspondence matrix generation module to learn the correspondence probability between pairs of point sets, and then normalizes it to obtain the correct point-set correspondence matrix.
Abstract: Self-attention networks have revolutionized the field of natural language processing and have also made impressive progress in image analysis tasks. Corrnet3D introduced the idea of first obtaining the point cloud correspondence in point cloud registration. Inspired by these successes, we propose an unsupervised network for non-rigid point cloud registration, namely NrtNet, which is the first network to use a transformer for unsupervised large-deformation non-rigid point cloud registration. Specifically, NrtNet consists of a feature extraction module, a correspondence matrix generation module, and a reconstruction module. Given a pair of point clouds, our model first learns the point-by-point features and feeds them to the transformer-based correspondence matrix generation module, which utilizes the transformer to learn the correspondence probability between pairs of point sets; the correspondence probability matrix is then normalized to obtain the correct point-set correspondence matrix. We then permute the point clouds and learn the relative drift of the point pairs to reconstruct the point clouds for registration. Extensive experiments on synthetic and real datasets of non-rigid 3D shapes show that NrtNet outperforms state-of-the-art methods, including methods that use grids as input and methods that directly compute point drift.
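
The sketch below only illustrates the normalization step described above: pairwise feature similarities are row-normalized into a soft correspondence matrix and used to softly permute the target cloud. The random "features" stand in for NrtNet's learned transformer features and are an assumption, not the network itself.

import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

n, d = 1024, 64
rng = np.random.default_rng(2)
src = rng.normal(size=(n, 3))   # source point cloud
tgt = rng.normal(size=(n, 3))   # target point cloud

# Placeholder per-point embeddings; NrtNet learns these with its feature-extraction module.
feat_src = rng.normal(size=(n, d))
feat_tgt = rng.normal(size=(n, d))

# Row i of `corr` is a probability distribution over target points for source point i.
similarity = feat_src @ feat_tgt.T             # (n, n) pairwise similarities
corr = softmax(similarity / np.sqrt(d), axis=1)

# Soft permutation of the target cloud and the resulting per-point drift, which a
# reconstruction module could use to drive registration.
matched = corr @ tgt                           # (n, 3)
drift = matched - src
print(matched.shape, drift.shape)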

4 citations

References
Journal ArticleDOI
TL;DR: New results are derived on the minimum number of landmarks needed to obtain a solution, and algorithms are presented for computing these minimum-landmark solutions in closed form; these results provide the basis for an automatic system that can solve the Location Determination Problem under difficult viewing.
Abstract: A new paradigm, Random Sample Consensus (RANSAC), for fitting a model to experimental data is introduced. RANSAC is capable of interpreting/smoothing data containing a significant percentage of gross errors, and is thus ideally suited for applications in automated image analysis where interpretation is based on the data provided by error-prone feature detectors. A major portion of this paper describes the application of RANSAC to the Location Determination Problem (LDP): Given an image depicting a set of landmarks with known locations, determine that point in space from which the image was obtained. In response to a RANSAC requirement, new results are derived on the minimum number of landmarks needed to obtain a solution, and algorithms are presented for computing these minimum-landmark solutions in closed form. These results provide the basis for an automatic system that can solve the LDP under difficult viewing
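
A minimal, generic RANSAC sketch (robust plane fitting on 3D points, not the paper's Location Determination Problem; the threshold and iteration count are arbitrary assumptions):

import numpy as np

def ransac_plane(points, n_iters=500, threshold=0.01, seed=3):
    """Fit a plane to points containing gross outliers via random sample consensus."""
    rng = np.random.default_rng(seed)
    best_inliers, best_model = np.array([], dtype=int), None
    for _ in range(n_iters):
        # Minimal sample: three non-collinear points define a candidate plane.
        p0, p1, p2 = points[rng.choice(len(points), size=3, replace=False)]
        normal = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(normal)
        if norm < 1e-9:
            continue  # degenerate (collinear) sample, draw again
        normal /= norm
        # Consensus set: points whose distance to the candidate plane is below threshold.
        inliers = np.flatnonzero(np.abs((points - p0) @ normal) < threshold)
        if len(inliers) > len(best_inliers):
            best_inliers, best_model = inliers, (p0, normal)
    return best_model, best_inliers

# Example: a noisy plane with 20% gross outliers.
rng = np.random.default_rng(0)
plane = np.column_stack([rng.uniform(size=(800, 2)), rng.normal(scale=0.002, size=800)])
outliers = rng.uniform(-1, 1, size=(200, 3))
model, inliers = ransac_plane(np.vstack([plane, outliers]))
print(len(inliers), "inliers found")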

23,396 citations

Journal ArticleDOI
TL;DR: This paper presents a stand-alone, flexible C++ implementation that enables the evaluation of individual components and can easily be extended to include new algorithms.
Abstract: Stereo matching is one of the most active research areas in computer vision. While a large number of algorithms for stereo correspondence have been developed, relatively little work has been done on characterizing their performance. In this paper, we present a taxonomy of dense, two-frame stereo methods designed to assess the different components and design decisions made in individual stereo algorithms. Using this taxonomy, we compare existing stereo methods and present experiments evaluating the performance of many different variants. In order to establish a common software platform and a collection of data sets for easy evaluation, we have designed a stand-alone, flexible C++ implementation that enables the evaluation of individual components and that can be easily extended to include new algorithms. We have also produced several new multiframe stereo data sets with ground truth, and are making both the code and data sets available on the Web.
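
To make the pipeline concrete, here is a toy local stereo matcher of the kind the taxonomy decomposes into matching cost, aggregation, and disparity selection (sum-of-squared-differences block matching with winner-take-all). It is a slow illustrative sketch, not the paper's C++ test bed, and the window size and disparity range are arbitrary:

import numpy as np

def block_match(left, right, max_disp=16, win=5):
    """Return an integer disparity map for two rectified grayscale images."""
    h, w = left.shape
    half = win // 2
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(half, h - half):
        for x in range(half + max_disp, w - half):
            ref = left[y - half:y + half + 1, x - half:x + half + 1].astype(np.float64)
            # Matching cost (SSD) aggregated over a square window, for each disparity.
            costs = [
                np.sum((ref - right[y - half:y + half + 1,
                                    x - d - half:x - d + half + 1]) ** 2)
                for d in range(max_disp)
            ]
            disp[y, x] = int(np.argmin(costs))  # winner-take-all disparity selection
    return disp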

7,458 citations


"Automatic reconstruction of as-buil..." refers background in this paper

  • ...In other fields, such as computer vision, standard test sets and performance metrics have been established [72,83], but no standard evaluation metrics have been established for as-built BIM creation as yet....


Journal ArticleDOI
TL;DR: Recognition-by-components (RBC) provides a principled account of the heretofore undecided relation between the classic principles of perceptual organization and pattern recognition.
Abstract: The perceptual recognition of objects is conceptualized to be a process in which the image of the input is segmented at regions of deep concavity into an arrangement of simple geometric components, such as blocks, cylinders, wedges, and cones. The fundamental assumption of the proposed theory, recognition-by-components (RBC), is that a modest set of generalized-cone components, called geons (N ≤ 36), can be derived from contrasts of five readily detectable properties of edges in a two-dimensional image: curvature, collinearity, symmetry, parallelism, and cotermination. The detection of these properties is generally invariant over viewing position and image quality and consequently allows robust object perception when the image is projected from a novel viewpoint or is degraded. RBC thus provides a principled account of the heretofore undecided relation between the classic principles of perceptual organization and pattern recognition: The constraints toward regularization (Prägnanz) characterize not the complete object but the object's components. Representational power derives from an allowance of free combinations of the geons. A Principle of Componential Recovery can account for the major phenomena of object recognition: If an arrangement of two or three geons can be recovered from the input, objects can be quickly recognized even when they are occluded, novel, rotated in depth, or extensively degraded. The results from experiments on the perception of briefly presented pictures by human observers provide empirical support for the theory. Any single object can project an infinity of image configurations to the retina. The orientation of the object to the viewer can vary continuously, each giving rise to a different two-dimensional projection. The object can be occluded by other objects or texture fields, as when viewed behind foliage. The object need not be presented as a full-colored textured image but instead can be a simplified line drawing. Moreover, the object can even be missing some of its parts or be a novel exemplar of its particular category. But it is only with rare exceptions that an image fails to be rapidly and readily classified, either as an instance of a familiar object category or as an instance that cannot be so classified (itself a form of classification).

5,464 citations


"Automatic reconstruction of as-buil..." refers background in this paper

  • ...Various researchers have proposed candidate sets of primitives, such as geons [9], superquadrics [3], and generalized cylinders [10]....


Journal ArticleDOI
TL;DR: Two of the most critical requirements in support of producing reliable face-recognition systems are a large database of facial images and a testing procedure to evaluate systems.
Abstract: Two of the most critical requirements in support of producing reliable face-recognition systems are a large database of facial images and a testing procedure to evaluate systems. The Face Recognition Technology (FERET) program has addressed both issues through the FERET database of facial images and the establishment of the FERET tests. To date, 14,126 images from 1,199 individuals are included in the FERET database, which is divided into development and sequestered portions of the database. In September 1996, the FERET program administered the third in a series of FERET face-recognition tests. The primary objectives of the third test were to 1) assess the state of the art, 2) identify future areas of research, and 3) measure algorithm performance.

4,816 citations

Proceedings ArticleDOI
01 Aug 1996
TL;DR: This paper presents a volumetric method for integrating range images that is able to integrate a large number of range images yielding seamless, high-detail models of up to 2.6 million triangles.
Abstract: A number of techniques have been developed for reconstructing surfaces by integrating groups of aligned range images. A desirable set of properties for such algorithms includes: incremental updating, representation of directional uncertainty, the ability to fill gaps in the reconstruction, and robustness in the presence of outliers. Prior algorithms possess subsets of these properties. In this paper, we present a volumetric method for integrating range images that possesses all of these properties. Our volumetric representation consists of a cumulative weighted signed distance function. Working with one range image at a time, we first scan-convert it to a distance function, then combine this with the data already acquired using a simple additive scheme. To achieve space efficiency, we employ a run-length encoding of the volume. To achieve time efficiency, we resample the range image to align with the voxel grid and traverse the range and voxel scanlines synchronously. We generate the final manifold by extracting an isosurface from the volumetric grid. We show that under certain assumptions, this isosurface is optimal in the least squares sense. To fill gaps in the model, we tessellate over the boundaries between regions seen to be empty and regions never observed. Using this method, we are able to integrate a large number of range images (as many as 70) yielding seamless, high-detail models of up to 2.6 million triangles.
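
A minimal sketch of the cumulative weighted signed-distance update at the heart of the method above; real implementations scan-convert whole range images, truncate distances near the surface, run-length encode the volume, and extract an isosurface, all of which are omitted here:

import numpy as np

def integrate(D, W, d_new, w_new):
    """Fuse one new signed-distance observation per voxel into the running average.

    D, W  : current per-voxel signed distances and accumulated weights
    d_new : signed distance of each voxel to the newly scan-converted range surface
    w_new : per-voxel weight of the new observation (e.g. sensor confidence)
    """
    W_out = W + w_new
    D_out = (W * D + w_new * d_new) / np.maximum(W_out, 1e-12)
    return D_out, W_out

# Example: fuse two observations into a tiny voxel grid.
D = np.zeros((4, 4, 4))
W = np.zeros_like(D)
D, W = integrate(D, W, np.full(D.shape, 0.02), np.ones(D.shape))
D, W = integrate(D, W, np.full(D.shape, 0.04), 2 * np.ones(D.shape))
print(D[0, 0, 0])  # weighted average (0.02*1 + 0.04*2) / 3 ≈ 0.0333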

3,282 citations


"Automatic reconstruction of as-buil..." refers background in this paper

  • ...Non-parametric geometric modeling reconstructs a surface, typically in the form of a triangle mesh [41], or a volume [18]....
