Author

Martin A. Fischler

Other affiliations: Artificial Intelligence Center
Bio: Martin A. Fischler is an academic researcher from SRI International. The author has contributed to research in topics: Image processing & Machine perception. The author has an h-index of 18, co-authored 50 publications receiving 23,690 citations. Previous affiliations of Martin A. Fischler include the Artificial Intelligence Center.

Papers
Journal ArticleDOI
TL;DR: New results are derived on the minimum number of landmarks needed to obtain a solution, and algorithms are presented for computing these minimum-landmark solutions in closed form; together they provide the basis for an automatic system that can solve the Location Determination Problem under difficult viewing conditions.
Abstract: A new paradigm, Random Sample Consensus (RANSAC), for fitting a model to experimental data is introduced. RANSAC is capable of interpreting/smoothing data containing a significant percentage of gross errors, and is thus ideally suited for applications in automated image analysis where interpretation is based on the data provided by error-prone feature detectors. A major portion of this paper describes the application of RANSAC to the Location Determination Problem (LDP): Given an image depicting a set of landmarks with known locations, determine that point in space from which the image was obtained. In response to a RANSAC requirement, new results are derived on the minimum number of landmarks needed to obtain a solution, and algorithms are presented for computing these minimum-landmark solutions in closed form. These results provide the basis for an automatic system that can solve the LDP under difficult viewing and analysis conditions.
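
The hypothesize-and-verify loop RANSAC describes is compact enough to sketch. Below is a minimal Python illustration for 2-D line fitting; the function name ransac_line, the tolerance, and the iteration count are illustrative assumptions, not values from the paper.

# A minimal RANSAC sketch for 2-D line fitting, assuming NumPy is available.
import numpy as np

def ransac_line(points, n_iters=100, inlier_tol=1.0, min_inliers=10, rng=None):
    """Fit y = a*x + b to points (N x 2) containing gross errors."""
    rng = np.random.default_rng(rng)
    best_model, best_count = None, 0
    for _ in range(n_iters):
        # 1. Draw the minimal sample needed to instantiate the model (2 points).
        i, j = rng.choice(len(points), size=2, replace=False)
        (x1, y1), (x2, y2) = points[i], points[j]
        if x1 == x2:
            continue
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        # 2. Count points consistent with the hypothesized model.
        residuals = np.abs(points[:, 1] - (a * points[:, 0] + b))
        inliers = residuals < inlier_tol
        # 3. Keep the model with the largest consensus set, refined by
        #    least squares over that consensus set only.
        if inliers.sum() > max(best_count, min_inliers):
            a, b = np.polyfit(points[inliers, 0], points[inliers, 1], deg=1)
            best_model, best_count = (a, b), int(inliers.sum())
    return best_model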

23,396 citations

Journal ArticleDOI
TL;DR: The criteria that are important for evaluating the effectiveness of various computational stereo techniques are presented and a representative sampling of computational stereo research is provided.
Abstract: Perception of depth is a central problem in machine vision. Stereo is an attractive technique for depth perception because, compared with monocular techniques, it leads to more direct, unambiguous, and quantitative depth measurements, and unlike "active" approaches such as radar and laser ranging, it is suitable in almost all application domains. Computational stereo is broadly defined as the recovery of the three-dimensional characteristics of a scene from multiple images taken from different points of view. First, each of the functional components of the computational stereo paradigm--image acquisition, camera modeling, feature acquisition, image matching, depth determination, and interpolation--is identified and discussed. Then, the criteria that are important for evaluating the effectiveness of various computational stereo techniques are presented. Finally, a representative sampling of computational stereo research is provided.
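
For a rectified two-camera rig, the depth-determination component reduces to triangulation: depth Z = f*B/d, with focal length f, baseline B, and disparity d. A minimal sketch under those assumptions (the function and parameter names are illustrative):

# Minimal depth-from-disparity sketch for a rectified stereo pair.
# f_pixels (focal length in pixels) and baseline_m (in meters) are assumed
# known from the camera-modeling step.
import numpy as np

def disparity_to_depth(disparity, f_pixels, baseline_m, eps=1e-6):
    """Z = f * B / d for a 2-D array of pixel disparities."""
    d = np.asarray(disparity, dtype=float)
    depth = np.full(d.shape, np.inf)
    valid = d > eps                    # zero disparity means infinite depth
    depth[valid] = f_pixels * baseline_m / d[valid]
    return depth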

774 citations

Journal ArticleDOI
TL;DR: In this article, a computer-based approach to the problem of detecting and precisely delineating roads, and similar line-like structures, appearing in low-resolution aerial imagery is described.
Abstract: This paper describes a computer-based approach to the problem of detecting and precisely delineating roads, and similar "line-like" structures, appearing in low-resolution aerial imagery. The approach is based on a new paradigm for combining local information from multiple, and possibly incommensurate, sources, including various line and edge detection operators, map knowledge about the likely path of roads through an image, and generic knowledge about roads (e.g., connectivity, curvature, and width constraints). The final interpretation of the scene is achieved by using either a graph search or dynamic programming technique to optimize a global figure of merit. Implementation details and experimental results are included.
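
The dynamic-programming step can be illustrated with a toy version: track a line-like structure left to right through a per-pixel cost image, allowing the path to move at most one row per column as a crude curvature constraint. The cost image and the one-row constraint are illustrative assumptions, not the paper's exact formulation.

# Sketch: minimum-cost left-to-right path through a "road likelihood" image.
import numpy as np

def best_path(cost):
    """Return the minimum-cost row index per column through cost (H x W)."""
    H, W = cost.shape
    acc = cost.astype(float).copy()
    back = np.zeros((H, W), dtype=int)
    for x in range(1, W):
        for y in range(H):
            # The path may move at most one row between adjacent columns.
            lo, hi = max(0, y - 1), min(H, y + 2)
            prev = int(np.argmin(acc[lo:hi, x - 1])) + lo
            back[y, x] = prev
            acc[y, x] += acc[prev, x - 1]
    # Trace back from the cheapest endpoint in the last column.
    path = [int(np.argmin(acc[:, -1]))]
    for x in range(W - 1, 0, -1):
        path.append(back[path[-1], x])
    return path[::-1]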

415 citations

Book
01 Jan 1987
TL;DR: Each chapter addresses a problem, surveys major issues, ideas, and research projects, and reprints key papers; in total, the book presents sixty research papers, most written since 1980.
Abstract: Each chapter of the book addresses a problem, provides a survey of major issues, ideas, and research projects, and presents reprints of key papers. In total, the book presents sixty research papers, most written since 1980.

393 citations

Proceedings Article
24 Aug 1981
TL;DR: The technique is specifically designed to filter out gross errors before applying a smoothing procedure to compute a precise model in order to solve the problem of locating cylinders in range data.
Abstract: General principles for fitting models to data containing "gross" errors in addition to "measurement" errors are presented. A fitting technique is described and illustrated by its application to the problem of locating cylinders in range data; two key steps in this process are fitting ellipses to partial data and fitting lines to sets of three-dimensional points. The technique is specifically designed to filter out gross errors before applying a smoothing procedure to compute a precise model. Such a technique is particularly applicable to computer vision tasks because the data in these tasks are often produced by local computations that are inherently unreliable.
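
The "filter gross errors, then smooth" idea can be sketched for one of the two steps named above, fitting a line to sets of three-dimensional points: discard points far from a trial fit, then refit by least squares. The tolerance, round count, and helper names below are illustrative assumptions.

# Sketch: robust 3-D line fit -- reject gross errors, then refit via SVD.
import numpy as np

def fit_line_3d(points):
    """Least-squares 3-D line: returns (centroid, unit direction)."""
    c = points.mean(axis=0)
    # First right-singular vector = direction of maximum variance.
    _, _, vt = np.linalg.svd(points - c)
    return c, vt[0]

def robust_line_3d(points, tol=0.5, rounds=3):
    keep = np.ones(len(points), dtype=bool)
    for _ in range(rounds):
        c, d = fit_line_3d(points[keep])
        # Perpendicular distance of every point to the current line.
        offsets = points - c
        dist = np.linalg.norm(offsets - np.outer(offsets @ d, d), axis=1)
        keep = dist < tol          # drop gross errors, then refit
    return fit_line_3d(points[keep])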

329 citations


Cited by
Journal ArticleDOI
TL;DR: A new criterion for triggering the extension of word hits, combined with a new heuristic for generating gapped alignments, yields a gapped BLAST program that runs at approximately three times the speed of the original.
Abstract: The BLAST programs are widely used tools for searching protein and DNA databases for sequence similarities. For protein comparisons, a variety of definitional, algorithmic and statistical refinements described here permits the execution time of the BLAST programs to be decreased substantially while enhancing their sensitivity to weak similarities. A new criterion for triggering the extension of word hits, combined with a new heuristic for generating gapped alignments, yields a gapped BLAST program that runs at approximately three times the speed of the original. In addition, a method is introduced for automatically combining statistically significant alignments produced by BLAST into a position-specific score matrix, and searching the database using this matrix. The resulting Position-Specific Iterated BLAST (PSI-BLAST) program runs at approximately the same speed per iteration as gapped BLAST, but in many cases is much more sensitive to weak but biologically relevant sequence similarities. PSI-BLAST is used to uncover several new and interesting members of the BRCT superfamily.
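
The "word hit" seeding that the extension criterion operates on can be illustrated with a toy version: index all k-letter words of the query, then scan the subject for exact hits that would trigger extension. Real BLAST scores word neighborhoods and, in the gapped version, requires two nearby hits; this exact-match sketch is a simplified assumption.

# Toy word-hit seeding: exact k-mer matches between query and subject.
from collections import defaultdict

def word_hits(query, subject, k=3):
    index = defaultdict(list)
    for i in range(len(query) - k + 1):
        index[query[i:i + k]].append(i)
    hits = []                      # (query_pos, subject_pos) seed pairs
    for j in range(len(subject) - k + 1):
        for i in index.get(subject[j:j + k], ()):
            hits.append((i, j))
    return hits

print(word_hits("MKVLAT", "QKVLARMKV"))  # seeds where 3-mers match exactly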

70,111 citations

Journal ArticleDOI
TL;DR: This paper summarizes an approach to synthesizing decision trees that has been used in a variety of systems, describes one such system, ID3, in detail, and discusses a reported shortcoming of the basic algorithm together with two means of overcoming it.
Abstract: The technology for building knowledge-based systems by inductive inference from examples has been demonstrated successfully in several practical applications. This paper summarizes an approach to synthesizing decision trees that has been used in a variety of systems, and it describes one such system, ID3, in detail. Results from recent studies show ways in which the methodology can be modified to deal with information that is noisy and/or incomplete. A reported shortcoming of the basic algorithm is discussed and two means of overcoming it are compared. The paper concludes with illustrations of current research directions.
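
The core of ID3, selecting the attribute with the greatest information gain and recursing, fits in a short sketch. The version below assumes categorical features given as dicts and uses simplified stopping rules; names such as id3 and info_gain are illustrative.

# Compact ID3-style decision-tree induction for categorical data.
import math
from collections import Counter

def entropy(labels):
    n = len(labels)
    return -sum(c / n * math.log2(c / n) for c in Counter(labels).values())

def info_gain(rows, labels, attr):
    # Entropy of the parent minus the weighted entropy of the splits.
    total, n = entropy(labels), len(rows)
    for value in set(row[attr] for row in rows):
        subset = [lab for row, lab in zip(rows, labels) if row[attr] == value]
        total -= len(subset) / n * entropy(subset)
    return total

def id3(rows, labels, attrs):
    if len(set(labels)) == 1 or not attrs:      # pure node or no attrs left
        return Counter(labels).most_common(1)[0][0]
    best = max(attrs, key=lambda a: info_gain(rows, labels, a))
    tree = {}
    for value in set(row[best] for row in rows):
        idx = [i for i, row in enumerate(rows) if row[best] == value]
        tree[value] = id3([rows[i] for i in idx], [labels[i] for i in idx],
                          [a for a in attrs if a != best])
    return {best: tree}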

17,177 citations

01 Jan 2001
Multiple View Geometry in Computer Vision

14,282 citations

Journal ArticleDOI
TL;DR: The authors design a stand-alone, flexible C++ implementation that enables the evaluation of individual components and that can easily be extended to include new algorithms.
Abstract: Stereo matching is one of the most active research areas in computer vision. While a large number of algorithms for stereo correspondence have been developed, relatively little work has been done on characterizing their performance. In this paper, we present a taxonomy of dense, two-frame stereo methods designed to assess the different components and design decisions made in individual stereo algorithms. Using this taxonomy, we compare existing stereo methods and present experiments evaluating the performance of many different variants. In order to establish a common software platform and a collection of data sets for easy evaluation, we have designed a stand-alone, flexible C++ implementation that enables the evaluation of individual components and that can be easily extended to include new algorithms. We have also produced several new multiframe stereo data sets with ground truth, and are making both the code and data sets available on the Web.
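
One of the simplest points in this taxonomy, winner-take-all block matching with a sum-of-absolute-differences cost, makes a compact illustration. The window size and disparity range below are illustrative choices, and the sketch is in Python rather than the authors' C++ platform.

# Dense two-frame stereo sketch: SAD block matching, winner-take-all.
import numpy as np

def sad_disparity(left, right, max_disp=16, win=5):
    left = np.asarray(left, dtype=float)
    right = np.asarray(right, dtype=float)
    H, W = left.shape
    half = win // 2
    disp = np.zeros((H, W), dtype=int)
    for y in range(half, H - half):
        for x in range(half + max_disp, W - half):
            patch = left[y - half:y + half + 1, x - half:x + half + 1]
            # Cost of shifting the right-image window by each disparity d.
            costs = [np.abs(patch - right[y - half:y + half + 1,
                                          x - d - half:x - d + half + 1]).sum()
                     for d in range(max_disp + 1)]
            disp[y, x] = int(np.argmin(costs))   # winner-take-all
    return disp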

7,458 citations

Journal ArticleDOI
TL;DR: In this paper, the authors prove that under some suitable assumptions, it is possible to recover both the low-rank and the sparse components exactly by solving a very convenient convex program called Principal Component Pursuit; among all feasible decompositions, simply minimize a weighted combination of the nuclear norm and of the ℓ1 norm.
Abstract: This article is about a curious phenomenon. Suppose we have a data matrix, which is the superposition of a low-rank component and a sparse component. Can we recover each component individually? We prove that under some suitable assumptions, it is possible to recover both the low-rank and the sparse components exactly by solving a very convenient convex program called Principal Component Pursuit; among all feasible decompositions, simply minimize a weighted combination of the nuclear norm and of the ℓ1 norm. This suggests the possibility of a principled approach to robust principal component analysis since our methodology and results assert that one can recover the principal components of a data matrix even though a positive fraction of its entries are arbitrarily corrupted. This extends to the situation where a fraction of the entries are missing as well. We discuss an algorithm for solving this optimization problem, and present applications in the area of video surveillance, where our methodology allows for the detection of objects in a cluttered background, and in the area of face recognition, where it offers a principled way of removing shadows and specularities in images of faces.
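
The Principal Component Pursuit program, minimize ||L||_* + λ||S||_1 subject to L + S = M, admits a compact augmented-Lagrangian sketch: alternate singular-value thresholding (the low-rank step) with entrywise soft thresholding (the sparse step). The step size, stopping rule, and function names below are illustrative assumptions, one common variant rather than the paper's exact algorithm.

# Sketch of Principal Component Pursuit via an inexact ALM iteration.
import numpy as np

def shrink(X, tau):                       # entrywise soft thresholding
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def pcp(M, n_iters=200, tol=1e-7):
    M = np.asarray(M, dtype=float)
    m, n = M.shape
    lam = 1.0 / np.sqrt(max(m, n))        # weight on the l1 term
    mu = m * n / (4.0 * np.abs(M).sum())  # common heuristic step size
    S, Y = np.zeros_like(M), np.zeros_like(M)
    for _ in range(n_iters):
        # Low-rank update: singular-value thresholding.
        U, sig, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
        L = U @ np.diag(shrink(sig, 1.0 / mu)) @ Vt
        # Sparse update: soft thresholding of the residual.
        S = shrink(M - L + Y / mu, lam / mu)
        # Dual update on the constraint M = L + S.
        R = M - L - S
        Y += mu * R
        if np.linalg.norm(R) <= tol * np.linalg.norm(M):
            break
    return L, S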

6,783 citations