Open Access Journal Article (DOI)

Automatic Creation of Semantically Rich 3D Building Models from Laser Scanner Data

TL;DR
A method that automatically converts the raw 3D point data from a laser scanner positioned at multiple locations throughout a facility into a compact, semantically rich information model, identifying and modeling the main visible structural components of an indoor environment despite significant clutter and occlusion.
About
This article was published in Automation in Construction on 2013-05-01 and is currently open access. It has received 576 citations to date. The article focuses on the topics: Laser scanning & Point cloud.


Citations
Journal Article (DOI)

Incremental map refinement of building information using lidar point clouds

TL;DR: This paper proposes a method to incrementally refine the map using several measurements from different campaigns and to represent the map hierarchically, with a measure indicating uncertainty and the level of detail for each object.
Journal Article (DOI)

Volumetric wall detection in unorganized indoor point clouds using continuous segments in 2D grids

TL;DR: In this paper, a new wall detection method for indoor building point clouds is presented. The point cloud is segmented into horizontal layers, a concept of continuous segments in a 2D grid representation is used to extract the footprints of the wall structures, and the 2D blocks are projected into 3D space to obtain the wall segments in the initial 3D point cloud.
Journal Article (DOI)

Low-Cost Prototype to Automate the 3D Digitization of Pieces: An Application Example and Comparison.

TL;DR: In this paper, the authors describe the design of a mechanical, programmable 3D capture system that can be used with either a 3D scanner or a DSLR camera through photogrammetry.
Book Chapter (DOI)

Towards the Semantic Enrichment of Existing Online 3D Building Geometry to Publish Linked Building Data

TL;DR: The goal is to investigate whether online 3D content from different repositories can be processed by a single algorithm to produce the desired semantics.

Automatic Object Recognition and Registration of Dynamic Heavy Equipment Using a Hybrid LADAR System

Mengmeng Gai
TL;DR: In this article, a model-based automatic object recognition and registration framework, Projection-Recognition-Projection (PRP), is introduced to assist heavy equipment operators in rapidly perceiving the 3D working environment at dynamic construction sites.
References
Journal Article (DOI)

Original Contribution: Stacked generalization

David H. Wolpert
05 Feb 1992
TL;DR: The conclusion is that for almost any real-world generalization problem one should use some version of stacked generalization to minimize the generalization error rate.
Proceedings Article (DOI)

Image inpainting

TL;DR: A novel algorithm for digital inpainting of still images that attempts to replicate the basic techniques used by professional restorers and does not require the user to specify where the novel information comes from.
Journal Article (DOI)

Photo tourism: exploring photo collections in 3D

TL;DR: This work presents a system for interactively browsing and exploring large unstructured collections of photographs of a scene using a novel 3D interface, consisting of an image-based modeling front end that automatically computes the viewpoint of each photograph, a sparse 3D model of the scene, and image-to-model correspondences.
Proceedings Article (DOI)

Modeling and rendering architecture from photographs: a hybrid geometry- and image-based approach

TL;DR: This work presents a new approach for modeling and rendering existing architectural scenes from a sparse set of still photographs that combines geometry-based and image-based techniques, and presents view-dependent texture mapping, a method of compositing multiple views of a scene that better simulates geometric detail on basic models.
Frequently Asked Questions (11)
Q1. What are the contributions in "Automatic creation of semantically rich 3D building models from laser scanner data"?

This paper presents a method to automatically convert the raw 3D point data from a laser scanner positioned at multiple locations throughout a building into a compact, semantically rich model. Then, the authors perform a detailed analysis of the recognized surfaces to locate windows and doorways. The authors evaluated the method on a large, highly cluttered data set of a building with forty separate rooms, yielding promising results.

Their experiments suggest that the context aspect of their algorithm improves recognition performance by about 6% and that the most useful contextual features are coplanarity and orthogonality. 

In the first phase, planar patches are extracted from the point cloud and a context-based machine learning algorithm is used to label the patches as wall, ceiling, floor, or clutter. 
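As a rough illustration of this first phase, the following sketch (Python with NumPy only) peels planar patches off a point cloud with a simple RANSAC loop and then labels each patch using coarse orientation and height rules. The rules and thresholds are assumptions standing in for the paper's learned, context-based classifier.

```python
# Rough sketch only: extract planar patches from an (N, 3) point cloud with a
# simple RANSAC loop, then label each patch from orientation and height as a
# stand-in for the learned, context-based classifier described in the paper.
import numpy as np

def ransac_plane(points, n_iters=200, dist_thresh=0.05, seed=0):
    """Return (normal, d, inlier_mask) for the best plane n.x + d = 0."""
    rng = np.random.default_rng(seed)
    best_mask, best_model = None, None
    for _ in range(n_iters):
        sample = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-9:
            continue                                  # degenerate (collinear) sample
        normal /= norm
        d = -normal @ sample[0]
        mask = np.abs(points @ normal + d) < dist_thresh
        if best_mask is None or mask.sum() > best_mask.sum():
            best_mask, best_model = mask, (normal, d)
    return best_model[0], best_model[1], best_mask

def label_patch(normal, centroid, room_mid_height):
    """Toy orientation/height rules instead of the learned classifier."""
    if abs(normal[2]) > 0.9:                          # horizontal patch
        return "floor" if centroid[2] < room_mid_height else "ceiling"
    if abs(normal[2]) < 0.1:                          # vertical patch
        return "wall"
    return "clutter"

def extract_patches(points, max_patches=10, min_inliers=500):
    """Iteratively extract planar patches; returns a list of (normal, patch_points)."""
    patches, remaining = [], points
    for _ in range(max_patches):
        if len(remaining) < min_inliers:
            break
        normal, d, mask = ransac_plane(remaining)
        if mask.sum() < min_inliers:
            break
        patches.append((normal, remaining[mask]))
        remaining = remaining[~mask]                  # remove the patch and repeat
    return patches
```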

The detailed surface modeling phase of the algorithm operates on each planar patch produced by the context-based modeling process, detecting the occluded regions and the regions within openings in the surface.

A learning algorithm is used to encode the characteristics of opening shape and location, which allows the algorithm to infer the shape of an opening even when it is partially occluded. 
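Purely as an illustration of that idea, the sketch below encodes each opening with a handful of shape features (width, height, sill height, aspect ratio), fits a simple prior from fully visible training examples, and fills in whichever edges of a partially observed opening were occluded. The feature set and the completion rule are assumptions for the sketch, not the paper's learning algorithm.

```python
# Illustrative only: learn a simple prior over opening shape from complete
# examples and use it to complete the occluded edges of a partial detection.
import numpy as np

def opening_features(rect):
    """rect = (left, bottom, right, top) in wall-plane coordinates (metres)."""
    left, bottom, right, top = rect
    width, height = right - left, top - bottom
    return np.array([width, height, bottom, width / height])

def fit_prior(training_rects):
    """Mean/std of the features over fully visible openings."""
    feats = np.array([opening_features(r) for r in training_rects])
    return feats.mean(axis=0), feats.std(axis=0) + 1e-6

def complete_opening(partial_rect, visible_edges, prior):
    """Keep observed edges; replace occluded ones with the prior mean."""
    (w_mean, h_mean, sill_mean, _), _ = prior
    left, bottom, right, top = partial_rect
    if "bottom" not in visible_edges:
        bottom = sill_mean
    if "top" not in visible_edges:
        top = bottom + h_mean
    if "left" not in visible_edges:
        left = right - w_mean
    if "right" not in visible_edges:
        right = left + w_mean
    return (left, bottom, right, top)

# Example: doors = [(0.0, 0.0, 0.9, 2.1), (3.0, 0.0, 3.9, 2.1)]
# prior = fit_prior(doors)
# complete_opening((5.0, 0.0, 5.9, 1.2), {"left", "right", "bottom"}, prior)
```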

The authors are currently working on completing the points-to-BIM pipeline by implementing an automated method to convert the surface-based representation produced by their algorithm into a volumetric representation that is commonly used for BIMs. 

Detecting openings in unoccluded surfaces can be achieved by analyzing the data density and classifying low density areas as openings. 
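A minimal sketch of that density test, assuming the patch's points have already been projected to 2D coordinates in the wall plane: rasterize the points into a grid and flag cells whose point count falls below a threshold. The cell size and threshold are illustrative assumptions.

```python
# Minimal density test: histogram the wall-plane points into a 2D grid and
# mark sparsely populated cells as candidate opening regions.
import numpy as np

def opening_mask(uv_points, cell=0.05, min_pts_per_cell=3):
    """uv_points: (N, 2) array; returns a boolean grid, True where data is sparse."""
    mins = uv_points.min(axis=0)
    idx = np.floor((uv_points - mins) / cell).astype(int)
    density = np.zeros(idx.max(axis=0) + 1, dtype=int)
    np.add.at(density, (idx[:, 0], idx[:, 1]), 1)     # count hits per grid cell
    return density < min_pts_per_cell                  # candidate opening cells

# Example: uv = np.random.rand(20000, 2) * np.array([4.0, 2.5])
# sparse = opening_mask(uv)   # connected sparse regions are candidate openings
```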

The classifier uses local features computed on each patch in isolation as well as features describing the relationship between each patch and its nearest neighbors. 
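One way such a feature vector could be assembled is sketched below: a few local features per patch, concatenated with pairwise context features (orthogonality and coplanarity, the cues the experiments found most useful) against the k nearest neighboring patches. The specific features and the value of k are assumptions; the resulting vectors could then be fed to any standard patch classifier.

```python
# Sketch of combining local and contextual patch features; feature choices and
# the neighbour count k are assumptions, not the paper's exact feature set.
import numpy as np

def local_features(patch_points, normal):
    """Features computed on the patch in isolation."""
    size_proxy = float(len(patch_points))                # point count as an area proxy
    mean_height = patch_points[:, 2].mean()
    verticality = 1.0 - abs(normal[2])                    # ~1 for walls, ~0 for floor/ceiling
    return [size_proxy, mean_height, verticality]

def context_features(normal_a, center_a, normal_b, center_b):
    """Pairwise cues between a patch and one neighbour."""
    alignment = abs(normal_a @ normal_b)                  # ~0 if orthogonal, ~1 if parallel
    plane_offset = abs((center_b - center_a) @ normal_a)  # ~0 if coplanar
    return [alignment, plane_offset]

def patch_feature_vector(i, patches, k=3):
    """patches: list of (points, normal, centroid) triples."""
    points_i, normal_i, center_i = patches[i]
    dists = np.array([np.linalg.norm(center_i - c) for _, _, c in patches])
    dists[i] = np.inf                                     # exclude the patch itself
    features = local_features(points_i, normal_i)
    for j in np.argsort(dists)[:k]:
        _, normal_j, center_j = patches[j]
        features += context_features(normal_i, center_i, normal_j, center_j)
    return np.array(features)                             # input to the patch classifier
```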

These models, which are generally known as building information models (BIMs), are used for many purposes, including planning and visualization during the design phase, detection of mistakes made during construction, and simulation and space planning during the management phase. 

The result of this process is a compact model of the walls, floor, and ceiling of a room, with each patch labeled according to its type. 

Building modeling algorithms are frequently demonstrated on simple examples like hallways that are devoid of furniture or other objects that would obscure the surfaces to be modeled.