Author
P.F. McLauchlan
Bio: P.F. McLauchlan is an academic researcher from the University of Surrey. The author has contributed to research in the topics of motion estimation and iterative reconstruction, has an h-index of 1, and has co-authored 1 publication receiving 52 citations.
Papers
01 Jun 2000
TL;DR: The main theoretical advance here is showing how to adjust the system information matrix when scene/camera parameters are removed from the reconstruction, and thus how to achieve an efficient recursive solution to the reconstruction problem.
Abstract: We present a new formulation of sequential least-squares applied to scene and motion reconstruction from image features. We argue that recursive techniques will become more important both for real-time control applications and also interactive vision applications. The aim is to approximate as well as possible the result of the batch bundle adjustment method. In previously published work we described an algorithm which works well if the same features are visible throughout the sequence. Here we show how to deal with new features in a way that avoids deterioration of the results. The main theoretical advance here is showing how to adjust the system information matrix when scene/camera parameters are removed from the reconstruction. We show how this procedure affects the sparseness of the information matrix, and thus how to achieve an efficient recursive solution to the reconstruction problem.
52 citations
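The "adjustment of the system information matrix" described above is, in information form, a marginalization: removing a block of scene/camera parameters amounts to taking a Schur complement over the parameters that remain. The following is a minimal NumPy sketch of that step, not the paper's implementation; the function and variable names are illustrative.

```python
import numpy as np

def marginalize(H, b, keep, drop):
    """Remove the parameter block `drop` from the information-form
    system (H, b) by a Schur complement over the `keep` indices.

    H : (n, n) information matrix;  b : (n,) information vector.
    keep, drop : integer index arrays partitioning 0..n-1.
    """
    Hkk = H[np.ix_(keep, keep)]
    Hkd = H[np.ix_(keep, drop)]
    Hdd = H[np.ix_(drop, drop)]
    Hdd_inv = np.linalg.inv(Hdd)

    # The reduced system encodes the same marginal estimate for the
    # kept parameters as the full system did.
    H_new = Hkk - Hkd @ Hdd_inv @ Hkd.T
    b_new = b[keep] - Hkd @ Hdd_inv @ b[drop]
    return H_new, b_new
```

Note that the Schur complement couples together every kept parameter that interacted with the removed block, so marginalization tends to destroy sparsity; tracking exactly which blocks fill in is the bookkeeping the paper addresses.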
Cited by
Book
30 Sep 2010
TL;DR: Computer Vision: Algorithms and Applications explores the variety of techniques commonly used to analyze and interpret images and takes a scientific approach to basic vision problems, formulating physical models of the imaging process before inverting them to produce descriptions of a scene.
Abstract: Humans perceive the three-dimensional structure of the world with apparent ease. However, despite all of the recent advances in computer vision research, the dream of having a computer interpret an image at the same level as a two-year-old remains elusive. Why is computer vision such a challenging problem, and what is the current state of the art? Computer Vision: Algorithms and Applications explores the variety of techniques commonly used to analyze and interpret images. It also describes challenging real-world applications where vision is being successfully used, both for specialized applications such as medical imaging, and for fun, consumer-level tasks such as image editing and stitching, which students can apply to their own personal photos and videos. More than just a source of recipes, this exceptionally authoritative and comprehensive textbook/reference also takes a scientific approach to basic vision problems, formulating physical models of the imaging process before inverting them to produce descriptions of a scene. These problems are also analyzed using statistical models and solved using rigorous engineering techniques. Topics and features: structured to support active curricula and project-oriented courses, with tips in the Introduction for using the book in a variety of customized courses; presents exercises at the end of each chapter, with a heavy emphasis on testing algorithms, and contains numerous suggestions for small mid-term projects; provides additional material and more detailed mathematical topics in the Appendices, which cover linear algebra, numerical techniques, and Bayesian estimation theory; suggests additional reading at the end of each chapter, including the latest research in each sub-field, in addition to a full Bibliography at the end of the book; supplies supplementary course material for students at the associated website, http://szeliski.org/Book/. Suitable for an upper-level undergraduate or graduate-level course in computer science or engineering, this textbook focuses on basic techniques that work under real-world conditions and encourages students to push their creative boundaries. Its design and exposition also make it eminently suitable as a unique reference to the fundamental techniques and current research literature in computer vision.
4,146 citations
21 Sep 1999
TL;DR: A survey of the theory and methods of photogrammetric bundle adjustment can be found in this article, with a focus on general robust cost functions rather than restricting attention to traditional nonlinear least squares.
Abstract: This paper is a survey of the theory and methods of photogrammetric bundle adjustment, aimed at potential implementors in the computer vision community. Bundle adjustment is the problem of refining a visual reconstruction to produce jointly optimal structure and viewing parameter estimates. Topics covered include: the choice of cost function and robustness; numerical optimization including sparse Newton methods, linearly convergent approximations, updating and recursive methods; gauge (datum) invariance; and quality control. The theory is developed for general robust cost functions rather than restricting attention to traditional nonlinear least squares.
3,521 citations
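The survey's emphasis on general robust cost functions can be illustrated with iteratively reweighted least squares: a Huber weight downweights large residuals before each Gauss-Newton step. Below is a minimal dense sketch, assuming the caller supplies the stacked residual vector and its Jacobian; real bundle adjusters additionally exploit the sparse block structure (e.g. a Schur complement over the structure parameters), as the survey describes. All names here are illustrative.

```python
import numpy as np

def huber_weights(r, delta=1.0):
    """Per-residual IRLS weights for the Huber cost."""
    a = np.abs(r)
    w = np.ones_like(r)
    w[a > delta] = delta / a[a > delta]
    return w

def robust_gauss_newton(x, residual_fn, jacobian_fn, iters=10, delta=1.0):
    """Gauss-Newton with Huber reweighting: a sketch of robust
    nonlinear least squares as used in bundle adjustment."""
    for _ in range(iters):
        r = residual_fn(x)            # stacked reprojection residuals
        J = jacobian_fn(x)            # Jacobian w.r.t. all parameters
        w = huber_weights(r, delta)
        JW = J * w[:, None]           # row-weighted Jacobian, W @ J
        # Weighted normal equations: (J^T W J) dx = -(J^T W r)
        dx = np.linalg.solve(JW.T @ J, -(JW.T @ r))
        x = x + dx
    return x
```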
15 May 2006
TL;DR: It is demonstrated that with a few augmentations, existing 2D SLAM technology can be extended to perform full 3D SLAM in less benign, outdoor, undulating environments, using data acquired with a 3D laser range finder.
Abstract: Traditional simultaneous localization and mapping (SLAM) algorithms have been used to great effect in flat, indoor environments such as corridors and offices. We demonstrate that with a few augmentations, existing 2D SLAM technology can be extended to perform full 3D SLAM in less benign, outdoor, undulating environments. In particular, we use data acquired with a 3D laser range finder. We use a simple segmentation algorithm to separate the data stream into distinct point clouds, each referenced to a vehicle position. The SLAM technique we then adopt inherits much from 2D delayed-state (or scan-matching) SLAM in that the state vector is an ever-growing stack of past vehicle positions, and inter-scan registrations are used to form measurements between them. The registration algorithm used is a novel combination of previous techniques, carefully balancing the need for maximally wide convergence basins, robustness, and speed. In addition, we introduce a novel post-registration classification technique to detect matches which have converged to incorrect local minima.
378 citations
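The delayed-state idea above reduces to a small amount of machinery: the state is just the growing stack of past vehicle poses, and each inter-scan registration becomes a relative-pose measurement between two of them. A toy 2D skeleton under that reading follows; it is illustrative, not the authors' implementation, and it omits angle wrapping and the optimizer itself.

```python
import numpy as np

poses = []          # growing stack of past vehicle poses [x, y, theta]
constraints = []    # (i, j, z_ij): measured pose of j in the frame of i

def add_pose(pose):
    poses.append(np.asarray(pose, dtype=float))
    return len(poses) - 1

def add_registration(i, j, z_ij):
    """Record an inter-scan registration as a measurement between the
    delayed states i and j (a loop closure when j is much older)."""
    constraints.append((i, j, np.asarray(z_ij, dtype=float)))

def relative_pose(xi, xj):
    """Pose of state j expressed in the frame of state i."""
    c, s = np.cos(xi[2]), np.sin(xi[2])
    R_T = np.array([[c, s], [-s, c]])     # world -> frame i rotation
    t = R_T @ (xj[:2] - xi[:2])
    return np.array([t[0], t[1], xj[2] - xi[2]])

def total_error():
    """Sum of squared residuals a pose-graph optimizer would minimize."""
    return sum(np.sum((relative_pose(poses[i], poses[j]) - z) ** 2)
               for i, j, z in constraints)
```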
TL;DR: The novel insight that the simultaneous localization and mapping (SLAM) information matrix is exactly sparse in a delayed-state framework is reported, which means the information filter can produce results equivalent to the full-covariance solution.
Abstract: This paper reports the novel insight that the simultaneous localization and mapping (SLAM) information matrix is exactly sparse in a delayed-state framework. Such a framework is used in view-based representations of the environment that rely upon scan-matching raw sensor data to obtain virtual observations of robot motion with respect to a place it has previously been. The exact sparseness of the delayed-state information matrix is in contrast to other recent feature-based SLAM information algorithms, such as the sparse extended information filter or the thin junction-tree filter, since these methods have to make approximations in order to force the feature-based SLAM information matrix to be sparse. The benefit of the exact sparsity of the delayed-state framework is that it allows one to take advantage of the information space parameterization without incurring any sparse approximation error. Therefore, it can produce results equivalent to the full-covariance solution. The approach is validated experimentally using monocular imagery for two datasets: a test-tank experiment with ground truth, and a remotely operated vehicle survey of the RMS Titanic.
320 citations
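The "exactly sparse" observation can be seen directly in information form: a virtual relative-pose observation between delayed states i and j contributes only to the four information blocks involving i and j, so no sparsification approximation is ever needed. A minimal sketch under that assumption follows; the function name and arguments are illustrative, not the paper's API.

```python
import numpy as np

def add_relative_measurement(Lam, i, j, J_i, J_j, R_inv, d=3):
    """Fold a relative-pose measurement between delayed states i and j
    into the information matrix `Lam` (square blocks of size d).

    J_i, J_j : (m, d) measurement Jacobians w.r.t. poses i and j.
    R_inv    : (m, m) inverse measurement covariance.

    Only the (i,i), (i,j), (j,i), (j,j) blocks change, so the
    delayed-state information matrix stays exactly sparse.
    """
    si = slice(d * i, d * i + d)
    sj = slice(d * j, d * j + d)
    Lam[si, si] += J_i.T @ R_inv @ J_i
    Lam[sj, sj] += J_j.T @ R_inv @ J_j
    Lam[si, sj] += J_i.T @ R_inv @ J_j
    Lam[sj, si] += J_j.T @ R_inv @ J_i
    return Lam
```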
14 Oct 2008
TL;DR: The key aspect of the system is a fast and simple pose estimation algorithm that uses information not only from the estimated 3D map, but also from the epipolar constraint, which leads to a much more stable estimation of the camera trajectory than the conventional approach.
Abstract: We present a system for monocular simultaneous localization and mapping (mono-SLAM) relying solely on video input. Our algorithm makes it possible to precisely estimate the camera trajectory without relying on any motion model. The estimation is completely incremental: at a given time frame, only the current location is estimated, while the previous camera positions are never modified. In particular, we do not perform any simultaneous iterative optimization of the camera positions and estimated 3D structure (local bundle adjustment). The key aspect of the system is a fast and simple pose estimation algorithm that uses information not only from the estimated 3D map, but also from the epipolar constraint. We show that the latter leads to a much more stable estimation of the camera trajectory than the conventional approach. We perform high-precision camera trajectory estimation in urban scenes with a large amount of clutter. Using an omnidirectional camera placed on a vehicle, we cover one of the longest distances ever reported, up to 2.5 kilometers.
259 citations
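One way to read the key idea above is as a pose cost that mixes reprojection error against the estimated 3D map with an epipolar (essential-matrix) error on 2D-2D matches to the previous frame, so that even unmapped points constrain the pose. A minimal sketch in calibrated normalized coordinates, assuming for simplicity that the previous camera frame is the reference frame so that (R, t) also relates the two views; the weighting and names are illustrative, not the authors' formulation.

```python
import numpy as np

def skew(t):
    """3x3 cross-product matrix of a vector t."""
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

def pose_cost(R, t, X_map, x_obs, x_prev, x_curr, w_epi=1.0):
    """Score a candidate pose (R, t) of the current camera.

    X_map  : (N, 3) triangulated map points (reference frame)
    x_obs  : (N, 2) their normalized observations in the current frame
    x_prev, x_curr : (M, 2) matched normalized points in the previous
        and current frames (2D-2D matches, no 3D point required)
    """
    # Reprojection error of the known 3D map into the current frame.
    Xc = (R @ X_map.T).T + t             # map points in camera frame
    proj = Xc[:, :2] / Xc[:, 2:3]        # pinhole projection
    e_map = np.sum((proj - x_obs) ** 2)

    # Epipolar error: matches need only satisfy q^T E p = 0, which
    # constrains the pose even for points not yet in the map.
    E = skew(t) @ R
    p = np.hstack([x_prev, np.ones((len(x_prev), 1))])
    q = np.hstack([x_curr, np.ones((len(x_curr), 1))])
    e_epi = np.sum(np.einsum('ni,ij,nj->n', q, E, p) ** 2)

    return e_map + w_epi * e_epi
```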