Journal Article

A survey of structure from motion

TLDR
This survey includes a review of the fundamentals of feature extraction and matching, various recent methods for handling ambiguities in 3D scenes, SfM techniques involving relatively uncommon camera models and image features, and popular sources of data and SfM software.
Abstract
The structure from motion (SfM) problem in computer vision is to recover the three-dimensional (3D) structure of a stationary scene from a set of projective measurements, represented as a collection of two-dimensional (2D) images, via estimation of motion of the cameras corresponding to these images. In essence, SfM involves the three main stages of (i) extracting features in images (e.g. points of interest, lines, etc.) and matching these features between images, (ii) camera motion estimation (e.g. using relative pairwise camera positions estimated from the extracted features), and (iii) recovery of the 3D structure using the estimated motion and features (e.g. by minimizing the so-called reprojection error). This survey mainly focuses on relatively recent developments in the literature pertaining to stages (ii) and (iii). More specifically, after touching upon the early factorization-based techniques for motion and structure estimation, we provide a detailed account of some of the recent camera location estimation methods in the literature, followed by discussion of notable techniques for 3D structure recovery. We also cover the basics of the simultaneous localization and mapping (SLAM) problem, which can be viewed as a specific case of the SfM problem. Further, our survey includes a review of the fundamentals of feature extraction and matching (i.e. stage (i) above), various recent methods for handling ambiguities in 3D scenes, SfM techniques involving relatively uncommon camera models and image features, and popular sources of data and SfM software.
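As a concrete illustration of stages (i) and (ii), the sketch below detects and matches keypoints between two views and recovers the relative camera pose from the essential matrix. The use of OpenCV, SIFT features, and a known intrinsic matrix K are assumptions made for illustration only; they are not prescribed by the survey.

# Sketch of stages (i) and (ii) for a single image pair (illustrative assumptions:
# OpenCV, SIFT features, known intrinsic matrix K).
import cv2
import numpy as np

def relative_pose(img_path_1, img_path_2, K):
    """Estimate the relative rotation R and unit-norm translation t between two views."""
    img1 = cv2.imread(img_path_1, cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread(img_path_2, cv2.IMREAD_GRAYSCALE)

    # Stage (i): detect interest points, compute descriptors, and match them.
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)
    matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des1, des2, k=2)
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]  # Lowe's ratio test

    pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in good])

    # Stage (ii): robustly estimate the essential matrix and decompose it
    # into the relative camera motion (R, t).
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
    return R, t

Chaining such pairwise estimates over many images, and then triangulating and refining the 3D points (stage (iii)), is what the camera location estimation and structure recovery methods surveyed here address.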


Citations
Proceedings Article

Pix2Vox: Context-aware 3D Reconstruction from Single and Multi-view Images

TL;DR: Pix2Vox proposes a context-aware fusion module that adaptively selects high-quality reconstructions for each part from different coarse 3D volumes to obtain a fused 3D volume.
Journal Article

Evaluating the Performance of Structure from Motion Pipelines

TL;DR: A comparison of different state-of-the-art SfM pipelines in terms of their ability to reconstruct different scenes is reported, and an evaluation procedure is proposed that considers both the reconstruction errors and the estimation errors of the camera poses used in the reconstruction.
Book Chapter

OmniDepth: Dense Depth Estimation for Indoors Spherical Panoramas

TL;DR: The authors circumvent the challenges of acquiring high-quality 3D datasets with ground-truth depth annotations by re-using recently released large-scale 3D datasets and re-purposing them as omnidirectional images via rendering.
Journal Article

Automated continuous construction progress monitoring using multiple workplace real time 3D scans

TL;DR: A new method is proposed in which changes are continuously perceived and the as-built model is continuously updated during the construction process, instead of periodically scanning the whole building under construction, which enables more efficient project management.
Posted Content

GRF: Learning a General Radiance Field for 3D Scene Representation and Rendering

TL;DR: The key to the approach is to explicitly integrate the principle of multi-view geometry to obtain internal representations from observed 2D views, guaranteeing that the learned implicit representations are meaningful and multi-view consistent.
References
Journal Article

Distinctive Image Features from Scale-Invariant Keypoints

TL;DR: This paper presents a method for extracting distinctive invariant features from images that can be used to perform reliable matching between different views of an object or scene and can robustly identify objects among clutter and occlusion while achieving near real-time performance.
Book

Multiple view geometry in computer vision

TL;DR: In this book, the authors provide comprehensive background material and explain how to apply the methods and implement the algorithms directly in a unified framework, covering geometric principles and how to represent objects algebraically so that they can be computed and applied.
Book Chapter

SURF: speeded up robust features

TL;DR: A novel scale- and rotation-invariant interest point detector and descriptor, coined SURF (Speeded Up Robust Features), is presented; it approximates or even outperforms previously proposed schemes with respect to repeatability, distinctiveness, and robustness, yet can be computed and compared much faster.
Journal Article

A performance evaluation of local descriptors

TL;DR: It is observed that the ranking of the descriptors is mostly independent of the interest region detector, that the SIFT-based descriptors perform best, and that moments and steerable filters show the best performance among the low-dimensional descriptors.
Book Chapter

Bundle Adjustment - A Modern Synthesis

TL;DR: This chapter surveys the theory and methods of photogrammetric bundle adjustment, focusing on general robust cost functions rather than restricting attention to traditional nonlinear least squares.
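The reprojection-error minimization with a robust cost function referred to above can be sketched for the simplest case as follows. The single-camera parameterization, the Huber loss, the principal point fixed at the image origin, and the use of SciPy are illustrative assumptions, not the chapter's formulation.

# Minimal sketch of the core of bundle adjustment for one camera and fixed 3D points:
# refine the pose (Rodrigues rotation vector + translation) and focal length by
# minimizing reprojection error under a robust Huber loss.
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def reprojection_residuals(params, points_3d, points_2d):
    rvec, tvec, f = params[:3], params[3:6], params[6]
    cam_pts = Rotation.from_rotvec(rvec).apply(points_3d) + tvec   # world -> camera frame
    proj = f * cam_pts[:, :2] / cam_pts[:, 2:3]                    # pinhole projection
    return (proj - points_2d).ravel()

def refine_camera(params0, points_3d, points_2d):
    # The Huber loss down-weights outlier correspondences, in the spirit of the
    # robust cost functions emphasized by the chapter.
    result = least_squares(reprojection_residuals, params0, loss='huber',
                           f_scale=1.0, args=(points_3d, points_2d))
    return result.x

A full bundle adjustment jointly refines all camera poses and 3D points, typically exploiting the sparse block structure of the normal equations for efficiency.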