Proceedings ArticleDOI

Depth estimation from single image using Defocus and Texture cues

TL;DR: A model that combines two monocular depth cues, namely Texture and Defocus, is presented; it focuses on correcting erroneous regions in the defocus map using the texture energy present at those regions.
Abstract
As imaging is a process of 2D projection of a 3D scene, depth information is lost at the time of image capture with a conventional camera. This depth information can be inferred back from a set of visual cues present in the image. In this work, we present a model that combines two monocular depth cues, namely Texture and Defocus. Depth is related to the spatial extent of the defocus blur under the assumption that the more an object is blurred, the farther it is from the camera. At first, we estimate the amount of defocus blur present at the edge pixels of an image; this is referred to as the sparse defocus map. Using the sparse defocus map we generate the full defocus map. However, such defocus maps always contain hole regions and ambiguity in depth. To handle this problem, an additional depth cue, in our case texture, is integrated to generate a better defocus map. This integration focuses on correcting erroneous regions in the defocus map using the texture energy present at those regions. The sparse defocus map is corrected using texture-based rules. Hole regions, where there are no significant edges or texture, are detected and corrected in the sparse defocus map. We use region-wise propagation to generate the full defocus map, which increases its accuracy.
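The first stage described above (estimating defocus blur at edge pixels, plus a texture-energy measure for flagging hole regions) can be sketched roughly as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: it uses the common gradient-ratio approach to single-image blur estimation (re-blur the image with a known Gaussian and infer the unknown blur from the ratio of gradient magnitudes at edges), and windowed variance as a stand-in for the paper's texture-energy cue. Function names, thresholds, and the clipping bound are invented for the example.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def sparse_defocus_map(img, sigma0=1.0, edge_thresh=0.1, sigma_max=10.0):
    """Estimate per-pixel defocus blur sigma at edge pixels.

    Re-blurring a step edge of blur sigma with a Gaussian of known
    sigma0 scales its peak gradient by sigma / sqrt(sigma^2 + sigma0^2),
    so the gradient-magnitude ratio R = g1/g2 gives
    sigma = sigma0 / sqrt(R^2 - 1).
    """
    g1 = np.hypot(sobel(img, 0), sobel(img, 1))       # gradients of input
    reblurred = gaussian_filter(img, sigma0)
    g2 = np.hypot(sobel(reblurred, 0), sobel(reblurred, 1))

    edges = g1 > edge_thresh * g1.max()               # crude edge mask
    ratio = g1 / np.maximum(g2, 1e-8)

    sigma = np.zeros_like(img, dtype=float)
    valid = edges & (ratio > 1.0)                     # ratio <= 1 is unusable
    sigma[valid] = sigma0 / np.sqrt(ratio[valid] ** 2 - 1.0)
    # Clip blow-ups at edge tails where the ratio is barely above 1.
    return np.clip(sigma, 0.0, sigma_max), edges

def texture_energy(img, sigma_t=3.0):
    """Local texture energy as windowed variance (a stand-in for the
    paper's texture cue); low energy flags candidate hole regions."""
    mean = gaussian_filter(img, sigma_t)
    sq_mean = gaussian_filter(img ** 2, sigma_t)
    return np.maximum(sq_mean - mean ** 2, 0.0)
```

In the full method, the sparse values would then be corrected by texture-based rules and propagated region-wise to obtain the dense defocus map, with low-texture-energy regions treated as holes.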


Citations
Proceedings ArticleDOI

Monocular 3D metric scale reconstruction using depth from defocus and image velocity

TL;DR: It is shown in real experiments that the proposed approach has the potential to enhance robot navigation algorithms that rely on monocular cameras, converging to an accurate, metric-scale, sparse depth map and 3D camera poses from the images of a monocular camera.
Proceedings ArticleDOI

A Defocus Based Novel Keyboard Design

TL;DR: In this article, the authors proposed an application of depth map estimation from defocus to a novel keyboard design for detecting keystrokes; the keyboard can be integrated with devices such as mobile phones, PCs, and tablets, and can be produced either by printing on plain paper or by projection onto a flat surface.
Book ChapterDOI

A Defocus Based Novel Keyboard Design

TL;DR: The proposed design utilizes measured defocus together with a precalibrated relation between the defocus amount and the keyboard pattern to infer the depth, which, along with the azimuth position of the stroke identifies the key.
Book ChapterDOI

Calibration of Depth Map Using a Novel Target

TL;DR: This work proposes an efficient method that relies on defocus, or blur variation, in an image to indicate the depth map for a given camera focus, and uses real data and a simple calibration method to extract the depth maps.
Dissertation

Scale estimation for monocular SLAM using depth from defocus

TL;DR: It is demonstrated that integrating DfD into monocular SLAM eliminates scale drift and results in accurate metric scale maps.
References
Book

Digital Image Processing Using MATLAB

TL;DR: A textbook covering the fundamentals of image processing, intensity transformations and spatial filtering, and frequency-domain processing.
Journal ArticleDOI

A Closed-Form Solution to Natural Image Matting

TL;DR: A closed-form solution to natural image matting that allows us to find the globally optimal alpha matte by solving a sparse linear system of equations and predicts the properties of the solution by analyzing the eigenvectors of a sparse matrix, closely related to matrices used in spectral image segmentation algorithms.
Journal ArticleDOI

Make3D: Learning 3D Scene Structure from a Single Still Image

TL;DR: This work considers the problem of estimating detailed 3D structure from a single still image of an unstructured environment and uses a Markov random field (MRF) to infer a set of "plane parameters" that capture both the 3D location and 3D orientation of the patch.
Journal ArticleDOI

Colorization using optimization

TL;DR: This paper presents a simple colorization method that requires neither precise image segmentation, nor accurate region tracking, and demonstrates that high quality colorizations of stills and movie clips may be obtained from a relatively modest amount of user input.
Proceedings Article

Learning Depth from Single Monocular Images

TL;DR: This work begins by collecting a training set of monocular images (of unstructured outdoor environments which include forests, trees, buildings, etc.) and their corresponding ground-truth depthmaps, and applies supervised learning to predict the depthmap as a function of the image.