Journal ArticleDOI

Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography

01 Jun 1981 - Communications of the ACM (ACM) - Vol. 24, Iss: 6, pp 381-395
TL;DR: New results are derived on the minimum number of landmarks needed to obtain a solution, and algorithms are presented for computing these minimum-landmark solutions in closed form; together these provide the basis for an automatic system that can solve the Location Determination Problem under difficult viewing conditions.
Abstract: A new paradigm, Random Sample Consensus (RANSAC), for fitting a model to experimental data is introduced. RANSAC is capable of interpreting/smoothing data containing a significant percentage of gross errors, and is thus ideally suited for applications in automated image analysis where interpretation is based on the data provided by error-prone feature detectors. A major portion of this paper describes the application of RANSAC to the Location Determination Problem (LDP): Given an image depicting a set of landmarks with known locations, determine that point in space from which the image was obtained. In response to a RANSAC requirement, new results are derived on the minimum number of landmarks needed to obtain a solution, and algorithms are presented for computing these minimum-landmark solutions in closed form. These results provide the basis for an automatic system that can solve the LDP under difficult viewing and analysis conditions.
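As a rough sketch of the paradigm (illustrative only; the tolerance, iteration count, and toy data below are our assumptions, not values from the paper), the following Python code fits a 2-D line by repeatedly drawing minimal two-point samples and keeping the model with the largest consensus set of inliers:

    # Minimal RANSAC sketch: fit a line y = a*x + b to data with gross outliers.
    # Tolerance, iteration count, and data are illustrative choices.
    import random

    def ransac_line(points, n_iter=200, tol=0.5):
        best_model, best_inliers = None, []
        for _ in range(n_iter):
            (x1, y1), (x2, y2) = random.sample(points, 2)  # minimal sample: 2 points
            if x1 == x2:
                continue  # degenerate sample; cannot define y = a*x + b
            a = (y2 - y1) / (x2 - x1)
            b = y1 - a * x1
            inliers = [(x, y) for x, y in points if abs(y - (a * x + b)) <= tol]
            if len(inliers) > len(best_inliers):  # keep the largest consensus set
                best_model, best_inliers = (a, b), inliers
        return best_model, best_inliers

    pts = [(x, 2.0 * x + 1.0) for x in range(10)] + [(3.0, 40.0), (7.0, -25.0)]
    model, inliers = ransac_line(pts)
    print(model, len(inliers))  # expected: (a, b) near (2.0, 1.0) with 10 inliers

The paper additionally recommends re-estimating the model from the final consensus set (e.g., by least squares) once the best sample has been found.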


Citations
Journal ArticleDOI
TL;DR: A new criterion for triggering the extension of word hits, combined with a new heuristic for generating gapped alignments, yields a gapped BLAST program that runs at approximately three times the speed of the original.
Abstract: The BLAST programs are widely used tools for searching protein and DNA databases for sequence similarities. For protein comparisons, a variety of definitional, algorithmic and statistical refinements described here permits the execution time of the BLAST programs to be decreased substantially while enhancing their sensitivity to weak similarities. A new criterion for triggering the extension of word hits, combined with a new heuristic for generating gapped alignments, yields a gapped BLAST program that runs at approximately three times the speed of the original. In addition, a method is introduced for automatically combining statistically significant alignments produced by BLAST into a position-specific score matrix, and searching the database using this matrix. The resulting Position-Specific Iterated BLAST (PSI-BLAST) program runs at approximately the same speed per iteration as gapped BLAST, but in many cases is much more sensitive to weak but biologically relevant sequence similarities. PSI-BLAST is used to uncover several new and interesting members of the BRCT superfamily.
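As a toy illustration of the word-hit idea only (not BLAST itself; the exact-match seeds and +1/-1 scoring are our simplifications), the sketch below finds k-mer seeds between a query and a subject and extends each hit to the right until the running score drops a fixed amount below its maximum. Real BLAST uses scored word neighborhoods, the two-hit extension trigger described above, and gapped extension:

    # Toy seed-and-extend sketch of the word-hit idea; not the real BLAST algorithm.
    def seeds(query, subject, k=3):
        """Return (i, j) pairs where a length-k word of query matches subject."""
        index = {}
        for j in range(len(subject) - k + 1):
            index.setdefault(subject[j:j + k], []).append(j)
        return [(i, j)
                for i in range(len(query) - k + 1)
                for j in index.get(query[i:i + k], [])]

    def extend_right(query, subject, i, j, drop=2):
        """Extend a hit rightward (+1 match, -1 mismatch) until the score
        falls `drop` below its running maximum; return the best score seen."""
        score = best = 0
        while i < len(query) and j < len(subject):
            score += 1 if query[i] == subject[j] else -1
            best = max(best, score)
            if best - score >= drop:
                break
            i, j = i + 1, j + 1
        return best

    for i, j in seeds("GATTACA", "TTGATTGCA"):
        print((i, j), extend_right("GATTACA", "TTGATTGCA", i, j))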

70,111 citations

01 Jan 2001
Multiple View Geometry in Computer Vision

14,282 citations

Journal ArticleDOI
TL;DR: In this paper, the authors prove that under some suitable assumptions, it is possible to recover both the low-rank and the sparse components exactly by solving a very convenient convex program called Principal Component Pursuit; among all feasible decompositions, simply minimize a weighted combination of the nuclear norm and of the ℓ1 norm.
Abstract: This article is about a curious phenomenon. Suppose we have a data matrix, which is the superposition of a low-rank component and a sparse component. Can we recover each component individually? We prove that under some suitable assumptions, it is possible to recover both the low-rank and the sparse components exactly by solving a very convenient convex program called Principal Component Pursuit; among all feasible decompositions, simply minimize a weighted combination of the nuclear norm and of the ℓ1 norm. This suggests the possibility of a principled approach to robust principal component analysis since our methodology and results assert that one can recover the principal components of a data matrix even though a positive fraction of its entries are arbitrarily corrupted. This extends to the situation where a fraction of the entries are missing as well. We discuss an algorithm for solving this optimization problem, and present applications in the area of video surveillance, where our methodology allows for the detection of objects in a cluttered background, and in the area of face recognition, where it offers a principled way of removing shadows and specularities in images of faces.
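For reference, the convex program described above, Principal Component Pursuit, can be stated compactly (our notation; M denotes the observed data matrix):

    minimize   ||L||_* + λ ||S||_1
    subject to L + S = M

where ||L||_* is the nuclear norm (sum of singular values) and ||S||_1 is the entrywise ℓ1 norm; the paper's analysis uses the weight λ = 1/sqrt(max(n1, n2)) for an n1 × n2 matrix.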

6,783 citations

Journal ArticleDOI
Roger Y. Tsai
01 Aug 1987
TL;DR: In this paper, a two-stage technique for 3D camera calibration using TV cameras and lenses is described, aimed at efficient computation of the camera's external position and orientation relative to the object reference coordinate system, as well as the effective focal length, radial lens distortion, and image scanning parameters.
Abstract: A new technique for three-dimensional (3D) camera calibration for machine vision metrology using off-the-shelf TV cameras and lenses is described. The two-stage technique is aimed at efficient computation of the camera's external position and orientation relative to the object reference coordinate system, as well as the effective focal length, radial lens distortion, and image scanning parameters. The two-stage technique has advantages in accuracy, speed, and versatility over the existing state of the art. A critical review of the state of the art is given at the beginning. A theoretical framework is established, supported by comprehensive proofs in five appendixes, and may pave the way for future research on 3D robotics vision. Test results using real data are described. Both accuracy and speed are reported. The experimental results are analyzed and compared with theoretical predictions. Recent effort indicates that, with slight modification, the two-stage calibration can be done in real time.
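For concreteness, a sketch of the single-coefficient radial distortion model standard in this line of work (the notation below is ours, not quoted from the paper): distorted sensor coordinates (x_d, y_d) relate to their undistorted counterparts by

    x_u = x_d (1 + k1 r^2),   y_u = y_d (1 + k1 r^2),   r^2 = x_d^2 + y_d^2

where k1 is the radial distortion coefficient, recovered in the second stage together with the effective focal length.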

5,940 citations

Book
03 Jan 1992

5,816 citations

References
Book
01 Jan 1973
TL;DR: In this article, a unified, comprehensive, and up-to-date treatment of both statistical and descriptive methods for pattern recognition is provided, including Bayesian decision theory, supervised and unsupervised learning, nonparametric techniques, discriminant analysis, clustering, preprocessing of pictorial data, spatial filtering, shape description techniques, perspective transformations, projective invariants, linguistic procedures, and artificial intelligence techniques for scene analysis.
Abstract: Provides a unified, comprehensive, and up-to-date treatment of both statistical and descriptive methods for pattern recognition. The topics treated include Bayesian decision theory, supervised and unsupervised learning, nonparametric techniques, discriminant analysis, clustering, preprocessing of pictorial data, spatial filtering, shape description techniques, perspective transformations, projective invariants, linguistic procedures, and artificial intelligence techniques for scene analysis.

13,647 citations

Book
01 Jan 1972
TL;DR: This book covers Taylor polynomials, numerical linear algebra, and the finite difference method for PDEs, among other core topics in introductory numerical analysis.
Abstract: Chapter 1. Taylor Polynomials. Chapter 2. Error and Computer Arithmetic. Chapter 3. Rootfinding. Chapter 4. Interpolation and Approximation. Chapter 5. Numerical Integration and Differentiation. Chapter 6. Solution of Systems of Linear Equations. Chapter 7. Numerical Linear Algebra: Advanced Topics. Chapter 8. Ordinary Differential Equations. Chapter 9. Finite Difference Method for PDEs. Appendix A: Mean Value Theorems. Appendix B: Mathematical Formulas. Appendix C: Numerical Analysis Software Packages. Appendix D: Matlab: An Introduction. Appendix E: The Binary Number System. Answers to Selected Problems. Bibliography. Index.

1,266 citations

Book
01 Jan 1976
TL;DR: In this paper, the authors present an introduction to the mathematical theory underlying computer graphic applications, including transformations, projections, 2-D and 3-D curve definition schemes, and surface definitions.
Abstract: From the Publisher: This text is ideal for junior-, senior-, and graduate-level courses in computer graphics and computer-aided design taught in departments of mechanical and aeronautical engineering and computer science. It presents in a unified manner an introduction to the mathematical theory underlying computer graphic applications. It covers topics of keen interest to students in engineering and computer science: transformations, projections, 2-D and 3-D curve definition schemes, and surface definitions. It also includes techniques, such as B-splines, which are incorporated as part of the software in advanced engineering workstations. A basic knowledge of vector and matrix algebra and calculus is required.

1,086 citations

Journal ArticleDOI
TL;DR: In this paper, a discussion is directed to least-squares estimation theory, from its inception by Gauss to its modern form, as developed by Kalman, and it is enlightening to consider just how far (or how little) we have advanced since the initial developments and to recognize the truth in the saying that we stand on the shoulders of giants.
Abstract: This discussion is directed to least-squares estimation theory, from its inception by Gauss to its modern form, as developed by Kalman. To aid in furnishing the desired perspective, the contributions and insights provided by Gauss are described and related to developments that have appeared more recently (that is, in the 20th century). In the author's opinion, it is enlightening to consider just how far (or how little) we have advanced since the initial developments and to recognize the truth in the saying that we "stand on the shoulders of giants."
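For reference, the estimate at the core of this lineage is the classical least-squares solution: for an overdetermined linear system A x ≈ b, minimize the sum of squared residuals, which (when A^T A is invertible) has the closed form

    x_hat = argmin_x ||A x - b||^2 = (A^T A)^(-1) A^T b.

The Kalman filter cited above can be viewed as computing this estimate recursively as new measurements arrive.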

752 citations


"Random sample consensus: a paradigm..." refers background in this paper

  • ...to determine the number of points required to produce a desired precision [10]....
