Proceedings ArticleDOI

Parallel Depth Recovery By Changing Camera Parameters

Abstract
A new method is described for recovering the distance of objects in a scene from images formed by lenses. The recovery is based on measuring the change in the scene’s image due to a known change in the three intrinsic camera parameters: (i) distance between the lens and the image detector, (ii) focal length of the lens, and (iii) diameter of the lens aperture. The method is parallel, involving only simple local computations. Unlike stereo vision and structure-from-motion methods, it does not give rise to the correspondence problem. This method for depth-map recovery may also be used for (i) obtaining focused images (i.e., images having large depth of field) from two images having finite depth of field, and (ii) rapid autofocusing of computer-controlled video cameras.
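To make the geometry concrete, here is a minimal worked relation under a standard thin-lens blur model; the notation (object distance u, focused image distance v, detector distance s, focal length f, aperture diameter D, blur-circle diameter d) is introduced only for illustration, and the paper's exact formulation may differ. A point at distance u from a lens of focal length f comes to focus at distance v behind the lens, where

\[ \frac{1}{f} = \frac{1}{u} + \frac{1}{v}. \]

If the image detector sits at distance s from the lens rather than at v, similar triangles on the cone of light leaving the aperture give a blur circle of diameter

\[ d = D\,s\left|\frac{1}{f} - \frac{1}{u} - \frac{1}{s}\right|. \]

Two images taken with known settings (s1, f1, D1) and (s2, f2, D2) therefore yield two such equations at each image location in which the only unknown is the object distance u; estimating the two local blur levels d1 and d2 and solving the pair for u produces the depth map. Because every location is handled independently from its own small neighborhood, the computation is local and parallel, and no correspondence between images has to be established.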



Citations
Proceedings Article

Robot vision

TL;DR: A scheme is developed for classifying the types of motion perceived by a humanlike robot, and equations, theorems, concepts, clues, etc., relating the objects, their positions, and their motion to their images on the focal plane are presented.
Journal ArticleDOI

Local scale control for edge detection and blur estimation

TL;DR: Local scale control is shown to be important for the estimation of blur in complex images, where the potential for interference between nearby edges of very different blur scale requires that estimates be made at the minimum reliable scale.
Journal ArticleDOI

Depth from defocus: a spatial domain approach

TL;DR: A new method named STM is described for determining the distance of objects and for rapid autofocusing of camera systems, based on a new Spatial-Domain Convolution/Deconvolution Transform that requires only two images taken with different camera parameters such as lens position, focal length, and aperture diameter.
Journal ArticleDOI

An investigation of methods for determining depth from focus

TL;DR: A general, matrix-based method using regularization is presented, which eliminates some fundamental problems with inverse filtering: inaccuracies in finding the frequency domain representation, windowing effects, and border effects.
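As a concrete illustration of the inverse-filtering-with-regularization idea referred to above, the following is a minimal 1-D sketch in Python: the blur is written as a matrix equation g = H f and the restored signal is obtained from the Tikhonov-regularized normal equations (H^T H + lambda I) f = H^T g. This is the generic matrix-based regularization technique, not the paper's specific method; the Gaussian point-spread function, the helper names (gaussian_kernel, convolution_matrix, tikhonov_deblur), and the regularization weight lam are assumptions made for the example.

    import numpy as np

    def gaussian_kernel(sigma, radius):
        # Discrete, normalized Gaussian point-spread function (an assumed blur model).
        x = np.arange(-radius, radius + 1)
        k = np.exp(-0.5 * (x / sigma) ** 2)
        return k / k.sum()

    def convolution_matrix(kernel, n):
        # n x n matrix H such that H @ f equals the zero-padded blur of f
        # (exact for the symmetric kernel used here).
        H = np.zeros((n, n))
        r = len(kernel) // 2
        for i in range(n):
            for j, kj in enumerate(kernel):
                col = i + j - r
                if 0 <= col < n:
                    H[i, col] = kj
        return H

    def tikhonov_deblur(blurred, kernel, lam=1e-3):
        # Solve (H^T H + lam * I) f = H^T g; lam trades noise suppression for sharpness.
        n = len(blurred)
        H = convolution_matrix(kernel, n)
        A = H.T @ H + lam * np.eye(n)
        return np.linalg.solve(A, H.T @ blurred)

    # Usage: blur a step edge, then restore it.
    f_true = np.zeros(64)
    f_true[32:] = 1.0
    k = gaussian_kernel(sigma=2.0, radius=6)
    g = np.convolve(f_true, k, mode="same")
    f_est = tikhonov_deblur(g, k, lam=1e-3)

A larger lam damps noise amplification at the cost of residual blur, which is the usual trade-off any regularized deblurring scheme has to manage.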
Proceedings ArticleDOI

Depth from focusing and defocusing

TL;DR: A model of the blurring effect that takes both geometric blurring and imaging blurring into consideration in its calibration is proposed, and a maximal resemblance estimation method is proposed to reduce or eliminate the window effect.
References
Journal ArticleDOI

Introduction to Fourier Optics

Joseph W. Goodman et al., 01 Apr 1969
TL;DR: The second edition of this respected text considerably expands the original and reflects the tremendous advances made in the discipline since 1968, with a special emphasis on applications to diffraction, imaging, optical data processing, and holography.
Book

Robot Vision

TL;DR: Robot Vision gives a broad overview of the field of computer vision, using a consistent notation based on a detailed understanding of the image formation process, and provides a useful reference for professionals working in machine vision, image processing, and pattern recognition.
Proceedings Article

Robot vision

TL;DR: A scheme is developed for classifying the types of motion perceived by a humanlike robot, and equations, theorems, concepts, clues, etc., relating the objects, their positions, and their motion to their images on the focal plane are presented.
Journal ArticleDOI

A New Sense for Depth of Field

Alex Pentland
TL;DR: In this paper, the author examined a novel source of depth information: focal gradients resulting from the limited depth of field inherent in most optical systems, and proved that this source of information can be used to make reliable depth maps of useful accuracy with relatively minimal computation.