
Thomas Czerniawski

Researcher at the University of Texas at Austin

Publications: 21
Citations: 436

Thomas Czerniawski is an academic researcher at the University of Texas at Austin. His research focuses on point clouds and building information modeling. He has an h-index of 8 and has co-authored 19 publications receiving 235 citations. His previous affiliations include Arizona State University and the University of Waterloo.

Papers
Journal Article

Semantic segmentation of point clouds of building interiors with deep learning: Augmenting training datasets with synthetic BIM-based point clouds

TL;DR: The experimental results confirmed the viability of combining synthetic point clouds generated from building information models with small datasets of real point clouds, opening up the possibility of a segmentation model for building interiors that can be applied to as-built modeling of buildings containing previously unseen indoor structures.
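The paper's exact pipeline is not reproduced here; the sketch below only illustrates the general idea of padding a small set of real scans with a larger pool of synthetic, BIM-derived point clouds during training. The dataset sizes, point counts, 6-channel point features, and 13-class label space are assumptions, and random tensors stand in for real data.

```python
# Illustrative sketch only (not the paper's code): mix a small real dataset with
# a larger synthetic, BIM-derived dataset so a segmentation model trains on both.
import torch
from torch.utils.data import Dataset, ConcatDataset, DataLoader


class PointCloudSegDataset(Dataset):
    """Holds (points, labels) pairs: points are N x 6 (XYZ + RGB), labels are N."""

    def __init__(self, clouds, labels):
        self.clouds = clouds
        self.labels = labels

    def __len__(self):
        return len(self.clouds)

    def __getitem__(self, idx):
        return self.clouds[idx], self.labels[idx]


def random_clouds(num_scenes, points_per_scene=2048, num_classes=13):
    # Stand-in data; in practice these would be loaded from real scans or
    # sampled from BIM surfaces.
    clouds = [torch.rand(points_per_scene, 6) for _ in range(num_scenes)]
    labels = [torch.randint(0, num_classes, (points_per_scene,)) for _ in range(num_scenes)]
    return clouds, labels


# Small real dataset, larger synthetic dataset (the ratio is a guess, not from the paper).
real = PointCloudSegDataset(*random_clouds(num_scenes=20))
synthetic = PointCloudSegDataset(*random_clouds(num_scenes=200))

combined = ConcatDataset([real, synthetic])
loader = DataLoader(combined, batch_size=4, shuffle=True)

points, labels = next(iter(loader))
print(points.shape, labels.shape)  # torch.Size([4, 2048, 6]) torch.Size([4, 2048])
```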
Journal Article

6D DBSCAN-based segmentation of building point clouds for planar object classification

TL;DR: A study of the semantic information stored in the planar objects of noisy building point clouds, using the Scene Meshes Dataset with aNNotations (SceneNN), a collection of over 100 indoor scenes captured by consumer-grade depth cameras.
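As a rough illustration of the technique named in the title (not the authors' implementation), the sketch below clusters points in a 6D space of coordinates plus estimated surface normals with scikit-learn's DBSCAN, which tends to separate coplanar regions such as walls and floors. The neighbourhood size, normal weighting, eps, and toy scene are assumed values.

```python
# Illustrative sketch: DBSCAN over 6D features (XYZ + weighted surface normals).
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.neighbors import NearestNeighbors


def estimate_normals(points, k=20):
    """Per-point normals from PCA of each point's k-nearest-neighbour patch."""
    nbrs = NearestNeighbors(n_neighbors=k).fit(points)
    _, idx = nbrs.kneighbors(points)
    normals = np.empty_like(points)
    for i, neighbourhood in enumerate(points[idx]):
        centered = neighbourhood - neighbourhood.mean(axis=0)
        # Right singular vector with the smallest singular value = surface normal.
        _, _, vt = np.linalg.svd(centered, full_matrices=False)
        n = vt[-1]
        if n @ np.array([1.0, 1.0, 1.0]) < 0:  # orient normals consistently
            n = -n
        normals[i] = n
    return normals


# Toy scene: a horizontal "floor" patch and a vertical "wall" patch.
rng = np.random.default_rng(0)
floor = np.c_[rng.uniform(0, 2, 500), rng.uniform(0, 2, 500), np.zeros(500)]
wall = np.c_[rng.uniform(0, 2, 500), np.zeros(500), rng.uniform(0, 2, 500)]
points = np.vstack([floor, wall])

normals = estimate_normals(points)
features = np.hstack([points, 0.5 * normals])  # 6D: XYZ + weighted normal

labels = DBSCAN(eps=0.3, min_samples=10).fit_predict(features)
print("clusters found:", len(set(labels)) - (1 if -1 in labels else 0))
```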
Journal Article

Pipe spool recognition in cluttered point clouds using a curvature-based shape descriptor

TL;DR: An automated method for locating and extracting pipe spools in cluttered point cloud scans is presented, based on local data level curvature estimation, clustering, and bag-of-features matching.
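The sketch below is only a simplified stand-in for the described pipeline: a per-point curvature estimate (surface variation from neighbourhood covariance eigenvalues), a small KMeans codebook, and bag-of-features histograms for matching. The toy shapes, codebook size, and neighbourhood size are assumptions.

```python
# Rough sketch of the general idea (not the paper's algorithm).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neighbors import NearestNeighbors


def surface_variation(points, k=20):
    """Per-point curvature proxy: lambda_min / (lambda_0 + lambda_1 + lambda_2)."""
    nbrs = NearestNeighbors(n_neighbors=k).fit(points)
    _, idx = nbrs.kneighbors(points)
    curv = np.empty(len(points))
    for i, nb in enumerate(points[idx]):
        centered = nb - nb.mean(axis=0)
        eigvals = np.linalg.eigvalsh(centered.T @ centered)  # ascending order
        curv[i] = eigvals[0] / max(eigvals.sum(), 1e-12)
    return curv


def bag_of_features(curvatures, codebook):
    """Histogram of codeword assignments, normalised to unit L1 norm."""
    words = codebook.predict(curvatures.reshape(-1, 1))
    hist = np.bincount(words, minlength=codebook.n_clusters).astype(float)
    return hist / max(hist.sum(), 1e-12)


rng = np.random.default_rng(1)
# Toy shapes: a flat patch (low curvature) and a half-cylinder standing in for a pipe.
plane = np.c_[rng.uniform(0, 1, 800), rng.uniform(0, 1, 800), np.zeros(800)]
theta = rng.uniform(0, np.pi, 800)
pipe = np.c_[np.cos(theta) * 0.1, rng.uniform(0, 1, 800), np.sin(theta) * 0.1]

# Learn a curvature codebook from all available local descriptors.
all_curv = np.concatenate([surface_variation(plane), surface_variation(pipe)])
codebook = KMeans(n_clusters=8, n_init=10, random_state=0).fit(all_curv.reshape(-1, 1))

h_plane = bag_of_features(surface_variation(plane), codebook)
h_pipe = bag_of_features(surface_variation(pipe), codebook)
print("plane-vs-pipe histogram distance:", np.abs(h_plane - h_pipe).sum())
```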
Journal Article

Automated digital modeling of existing buildings: A review of visual object recognition methods

TL;DR: A summary of the efforts of the past ten years in automating the digital modeling of existing buildings by applying reality capture devices and computer vision algorithms, with a particular focus on object recognition methods.
Journal Article

Automated segmentation of RGB-D images into a comprehensive set of building components using deep learning

TL;DR: It is shown that a deep neural network can semantically segment RGB-D (i.e., color and depth) images into 13 building component classes simultaneously, despite the use of a small training dataset with only 1490 object instances.
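The paper's network is not specified here; as a minimal sketch of the setup described, the following tiny fully convolutional model consumes 4-channel RGB-D input and emits per-pixel logits over 13 classes. Layer widths and image resolution are assumptions.

```python
# Minimal sketch, not the architecture from the paper.
import torch
import torch.nn as nn


class TinyRGBDSegNet(nn.Module):
    def __init__(self, num_classes=13):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(4, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                                  # 1/2 resolution
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                                  # 1/4 resolution
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, kernel_size=2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(32, 16, kernel_size=2, stride=2), nn.ReLU(),
            nn.Conv2d(16, num_classes, kernel_size=1),        # per-pixel class logits
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))


# RGB-D batch: 3 colour channels + 1 depth channel, concatenated along dim 1.
rgb = torch.rand(2, 3, 128, 160)
depth = torch.rand(2, 1, 128, 160)
logits = TinyRGBDSegNet()(torch.cat([rgb, depth], dim=1))
print(logits.shape)  # torch.Size([2, 13, 128, 160])
```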