Proceedings ArticleDOI

Defocus map estimation from a single image using principal components

01 Sep 2015, pp. 163-167
Abstract: Light is a mixture of multiple spectral components, and an image is the response of the scene to these spectra. Principal components provide a more compact representation of the data than other commonly used representations, so the accuracy of the estimated defocus parameter is higher in the principal component representation. In this paper, we present a comparison between the principal component representation and the customary grayscale representation for depth map creation. The presented results show that the depth maps obtained using principal components are smoother than those obtained using the grayscale representation. In addition, noise estimation using principal components is considerably more accurate than the Wiener strategy.
Topics: Depth map (57%), Principal component analysis (57%), Grayscale (50%)
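As a rough illustration of the idea, a principal component representation of a colour image can be obtained by projecting each pixel's RGB vector onto the first principal component of the image's colour distribution, concentrating most of the spectral variance in a single channel. The sketch below is generic; the function and variable names are illustrative and not taken from the paper:

    import numpy as np

    def first_principal_component_image(rgb):
        """Project each RGB pixel onto the first principal component of the
        image's colour distribution (illustrative sketch, not the paper's code)."""
        h, w, c = rgb.shape
        pixels = rgb.reshape(-1, c).astype(np.float64)
        pixels -= pixels.mean(axis=0)             # centre the colour data
        cov = np.cov(pixels, rowvar=False)        # 3x3 colour covariance matrix
        eigvals, eigvecs = np.linalg.eigh(cov)    # eigenvalues in ascending order
        pc1 = eigvecs[:, -1]                      # direction of largest variance
        return (pixels @ pc1).reshape(h, w)       # single-channel PC representation

The customary grayscale representation would instead use a fixed luminance weighting of the channels, which is the baseline the paper compares against.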
Citations

Proceedings ArticleDOI
01 Dec 2019
Abstract: Depth map estimation from defocus is a computer vision technique with wide applications, such as constructing a 3D setup from 2D image(s), image refocusing and reconstructing 3D scenes. In this paper, we propose an application of Depth from Defocus to a novel keyboard design for detecting keystrokes. The proposed keyboard can be integrated with devices such as mobile phones, PCs and tablets, and can be generated either by printing on plain paper or by projection on a flat surface. The design uses the measured defocus together with a precalibrated relation between the defocus amount and the keyboard pattern to infer the depth, which, along with the azimuth position of the stroke, identifies the key. As the design does not require any hardware besides a monocular camera, the proposed approach is a cost-effective and feasible solution for a portable keyboard.
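To make the lookup concrete, here is a minimal sketch of how a measured blur value and an azimuth position might be mapped to a key, assuming a hypothetical row-wise calibration table. All names, the layout and the calibration values are made up for illustration and are not taken from the paper:

    # Hypothetical calibration: defocus value (sigma, in pixels) measured at the
    # centre of each keyboard row during a one-off calibration pass.
    ROW_CALIBRATION = [(1.2, 0), (2.0, 1), (2.9, 2), (3.9, 3)]  # (sigma, row index)
    KEY_LAYOUT = [
        "1234567890",
        "qwertyuiop",
        "asdfghjkl;",
        "zxcvbnm,./",
    ]

    def infer_key(sigma, azimuth):
        """Map a measured defocus value and a normalised azimuth (0..1) to a key."""
        # Nearest calibrated row by blur value (blur acts as a depth proxy).
        row = min(ROW_CALIBRATION, key=lambda rc: abs(rc[0] - sigma))[1]
        # The azimuth selects the column within that row.
        col = min(int(azimuth * len(KEY_LAYOUT[row])), len(KEY_LAYOUT[row]) - 1)
        return KEY_LAYOUT[row][col]

    print(infer_key(sigma=2.1, azimuth=0.07))  # -> 'q' under this toy calibration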

Book ChapterDOI
19 Jul 2020
TL;DR: The proposed design utilizes measured defocus together with a precalibrated relation between the defocus amount and the keyboard pattern to infer the depth, which, along with the azimuth position of the stroke identifies the key.
Abstract: Defocus-based depth estimation has been widely applied for constructing 3D setups from 2D image(s), reconstructing 3D scenes and image refocusing. Using defocus enables us to infer depth information from a single image using visual cues that can be captured by a monocular camera. In this paper, we propose an application of Depth from Defocus to a novel, portable keyboard design. Our estimation technique is based on the observation that the depth of the finger with respect to the camera and its defocus blur value are correlated, so a mapping can be obtained to detect the finger position accurately. We utilise the near-focus region for our design, assuming that the closer an object is to the camera, the greater its defocus blur. The proposed keyboard can be integrated with smartphones, tablets and personal computers, and only requires printing on plain paper or projection on a flat surface. The detection approach involves tracking the finger’s position as the user types, measuring its defocus value when a key is pressed, and combining the measured defocus with a precalibrated relation between the defocus amount and the keyboard pattern. This is used to infer the finger’s depth, which, along with the azimuth position of the stroke, identifies the pressed key. Our minimalistic design only requires a monocular camera and no external hardware, making the proposed approach a cost-effective and feasible solution for a portable keyboard.
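The near-focus assumption (a closer object produces a larger blur) follows from standard thin-lens geometry. The sketch below is a generic illustration of that relation and not code or numbers from the paper; the focal length, f-number and focus distance are made-up values:

    def blur_circle_diameter(s, f=0.004, N=2.0, s_f=0.40):
        """Thin-lens blur circle diameter (m) for an object at distance s (m),
        given focal length f (m), f-number N and in-focus distance s_f (m)."""
        A = f / N                                        # aperture diameter
        return A * f * abs(s - s_f) / (s * (s_f - f))

    # In the near-focus region (object closer than the focus plane), moving
    # the fingertip closer to the camera increases the blur circle:
    for s in (0.15, 0.20, 0.25, 0.30, 0.35):
        print(f"{s:.2f} m -> {blur_circle_diameter(s) * 1e6:.1f} um blur circle")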

Cites methods from "Defocus map estimation from a singl..."

  • ...As per the approach employed by Kumar [33], the defocus blur kernel is assumed to be Gaussian with kernel parameter σ. Kernel parameter σ varies with the depth....


References

Journal ArticleDOI
TL;DR: A closed-form solution to natural image matting that allows us to find the globally optimal alpha matte by solving a sparse linear system of equations and predicts the properties of the solution by analyzing the eigenvectors of a sparse matrix, closely related to matrices used in spectral image segmentation algorithms.
Abstract: Interactive digital matting, the process of extracting a foreground object from an image based on limited user input, is an important task in image and video editing. From a computer vision perspective, this task is extremely challenging because it is massively ill-posed - at each pixel we must estimate the foreground and the background colors, as well as the foreground opacity ("alpha matte") from a single color measurement. Current approaches either restrict the estimation to a small part of the image, estimating foreground and background colors based on nearby pixels where they are known, or perform iterative nonlinear estimation by alternating foreground and background color estimation with alpha estimation. In this paper, we present a closed-form solution to natural image matting. We derive a cost function from local smoothness assumptions on foreground and background colors and show that in the resulting expression, it is possible to analytically eliminate the foreground and background colors to obtain a quadratic cost function in alpha. This allows us to find the globally optimal alpha matte by solving a sparse linear system of equations. Furthermore, the closed-form formula allows us to predict the properties of the solution by analyzing the eigenvectors of a sparse matrix, closely related to matrices used in spectral image segmentation algorithms. We show that high-quality mattes for natural images may be obtained from a small amount of user input.
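The citing paper uses this matting machinery to spread a sparse defocus map over the whole image (see the excerpts below). A minimal sketch of that kind of constrained propagation follows, using a plain 4-neighbour graph Laplacian as a simplified stand-in for Levin's colour-affinity matting Laplacian, so it illustrates only the sparse linear solve, not the actual matrix construction:

    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.linalg import spsolve

    def propagate_sparse_map(sparse_map, known_mask, lam=0.005):
        """Spread sparse values over an image by solving (L + lam*D) d = lam*D*d_hat,
        where L is a 4-neighbour graph Laplacian (a simplification of the matting
        Laplacian) and D is diagonal with ones at the known (e.g. edge) locations."""
        h, w = sparse_map.shape
        n = h * w
        idx = np.arange(n).reshape(h, w)
        # Horizontal and vertical neighbour pairs.
        pairs = np.vstack([
            np.column_stack([idx[:, :-1].ravel(), idx[:, 1:].ravel()]),
            np.column_stack([idx[:-1, :].ravel(), idx[1:, :].ravel()]),
        ])
        i, j = pairs[:, 0], pairs[:, 1]
        W = sp.coo_matrix((np.ones(len(i)), (i, j)), shape=(n, n))
        W = W + W.T                                          # symmetric adjacency
        L = sp.diags(np.asarray(W.sum(axis=1)).ravel()) - W  # graph Laplacian
        D = sp.diags(known_mask.ravel().astype(float))
        d = spsolve((L + lam * D).tocsr(), lam * (D @ sparse_map.ravel()))
        return d.reshape(h, w)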

1,660 citations




"Defocus map estimation from a singl..." refers methods in this paper

  • ...The full defocus map was obtained by spreading the sparse defocus map over the entire image using Levin’s [2] matting matrix....


  • ...Here L is the matting matrix as described by Levin [2] and D is the edge map and d̂ is the given sparse defocus map....


  • ...To compare the performance of the full defocus map creation between PCR and GSR, Levin’s [2] method is applied on benchmark images as shown in the Figure (5)....



Journal ArticleDOI
TL;DR: This paper presents a simple yet effective approach to estimate the amount of spatially varying defocus blur at edge locations, and demonstrates the effectiveness of this method in providing a reliable estimation of the defocus map.
Abstract: In this paper, we address the challenging problem of recovering the defocus map from a single image. We present a simple yet effective approach to estimate the amount of spatially varying defocus blur at edge locations. The input defocused image is re-blurred using a Gaussian kernel and the defocus blur amount can be obtained from the ratio between the gradients of input and re-blurred images. By propagating the blur amount at edge locations to the entire image, a full defocus map can be obtained. Experimental results on synthetic and real images demonstrate the effectiveness of our method in providing a reliable estimation of the defocus map.
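The abstract above describes the core of the method: re-blur the image with a known Gaussian and read the blur amount off the gradient-magnitude ratio at edges. Below is a rough sketch of that idea; the edge detection, thresholds and the closed-form step sigma = sigma0 / sqrt(R^2 - 1) are a simplified reconstruction, not the authors' exact implementation:

    import numpy as np
    from scipy import ndimage

    def sparse_defocus_at_edges(gray, sigma0=1.0, edge_frac=0.05):
        """Estimate blur sigma at edge pixels from the ratio of gradient magnitudes
        of the input and a re-blurred copy (simplified sketch of the gradient-ratio
        idea; edge handling and thresholds are not the authors' exact choices)."""
        gray = gray.astype(float)
        reblurred = ndimage.gaussian_filter(gray, sigma0)
        gx, gy = np.gradient(gray)
        rx, ry = np.gradient(reblurred)
        g = np.hypot(gx, gy)                       # input gradient magnitude
        r = np.hypot(rx, ry)                       # re-blurred gradient magnitude
        edges = g > edge_frac * g.max()            # crude edge mask
        ratio = g / np.maximum(r, 1e-8)
        sigma = np.zeros_like(gray)
        valid = edges & (ratio > 1.0)
        # For a Gaussian-blurred step edge: R = sqrt((s^2 + s0^2) / s^2),
        # hence s = s0 / sqrt(R^2 - 1).
        sigma[valid] = sigma0 / np.sqrt(ratio[valid] ** 2 - 1.0)
        return sigma, edges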

313 citations


"Defocus map estimation from a singl..." refers background or methods in this paper

  • ...More accurate and reliable depth map can be found using σ estimate from the PCR. Zhuo and Sim [1] proposed a methodology for the full defocus map estimation using the defocus amount at the edges....


  • ...The first column in the Table (II) represents the amount of blur in different spectral components with 30db SNR....


  • ...The defocus map is created using Zhuo’s strategy [1] due to its simplicity and effectiveness....



Journal ArticleDOI
15 May 2013 - Optics Letters
TL;DR: The proposed method takes into consideration not only the effect of light refraction but also the blur texture of an image, and is more reliable in defocus map estimation compared to various state-of-the-art methods.
Abstract: We present an effective method for defocus map estimation from a single natural image. It is inspired by the observation that defocusing can significantly affect the spectrum amplitude at object edge locations in an image. By establishing the relationship between the amount of spatially varying defocus blur and the spectrum contrast at edge locations, we first estimate the blur amount at these edge locations; a full defocus map is then obtained by propagating the blur amount at edge locations over the entire image with a nonhomogeneous optimization procedure. The proposed method takes into consideration not only the effect of light refraction but also the blur texture of an image. Experimental results demonstrate that our proposed method is more reliable in defocus map estimation compared to various state-of-the-art methods.

77 citations


"Defocus map estimation from a singl..." refers methods in this paper

  • ...In all these cases, GSR for the image was used to determine the defocus parameter....



Journal ArticleDOI
01 Feb 2007
TL;DR: A novel application of the diffusion principle is made to generate the defocus space of the scene, i.e. the set of all possible observations of a given scene that can be captured with a physical lens system; using this defocus space, the depth in the scene is estimated and the corresponding fully focused, equivalent pin-hole image is generated.
Abstract: An intrinsic property of real aperture imaging has been that the observations tend to be defocused. This artifact has been used in an innovative manner by researchers for depth estimation, since the amount of defocus varies with varying depth in the scene. There have been various methods to model the defocus blur. We model the defocus process using the model of diffusion of heat. The diffusion process has been traditionally used in low level vision problems like smoothing, segmentation and edge detection. In this paper a novel application of the diffusion principle is made for generating the defocus space of the scene. The defocus space is the set of all possible observations for a given scene that can be captured using a physical lens system. Using the notion of defocus space we estimate the depth in the scene and also generate the corresponding fully focused equivalent pin-hole image. The algorithm described here also brings out the equivalence of the two modalities, viz. depth from focus and depth from defocus for structure recovery.
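The link between isotropic diffusion and defocus can be illustrated numerically: evolving the discrete heat equation for time t approximates Gaussian smoothing with sigma = sqrt(2t). The snippet below is a generic illustration of the diffusion model of blur, not the paper's algorithm; the test image, time step and diffusion time are arbitrary:

    import numpy as np
    from scipy import ndimage

    def diffuse(img, t, dt=0.1):
        """Evolve the isotropic heat equation dI/dt = laplacian(I) up to time t
        with explicit Euler steps (dt <= 0.25 keeps the 2D scheme stable)."""
        out = img.astype(float).copy()
        for _ in range(int(round(t / dt))):
            out += dt * ndimage.laplace(out)
        return out

    rng = np.random.default_rng(0)
    img = rng.random((64, 64))
    t = 2.0
    diffused = diffuse(img, t)
    blurred = ndimage.gaussian_filter(img, sigma=np.sqrt(2 * t))
    # The two results agree closely: diffusion time maps to blur scale.
    print(np.max(np.abs(diffused - blurred)))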

64 citations


"Defocus map estimation from a singl..." refers methods in this paper

  • ...The GSR was used to compute the blur parameter....



Proceedings ArticleDOI
13 Jun 2010
TL;DR: A novel depth estimation method that places a diffuser in the scene prior to image capture that is analogous to conventional depth-from-defocus (DFD), where the scatter angle of the diffuser determines the effective aperture of the system.
Abstract: An optical diffuser is an element that scatters light and is commonly used to soften or shape illumination. In this paper, we propose a novel depth estimation method that places a diffuser in the scene prior to image capture. We call this approach depth-from-diffusion (DFDiff). We show that DFDiff is analogous to conventional depth-from-defocus (DFD), where the scatter angle of the diffuser determines the effective aperture of the system. The main benefit of DFDiff is that while DFD requires very large apertures to improve depth sensitivity, DFDiff only requires an increase in the diffusion angle – a much less expensive proposition. We perform a detailed analysis of the image formation properties of a DFDiff system, and show a variety of examples demonstrating greater precision in depth estimation when using DFDiff.

48 citations


"Defocus map estimation from a singl..." refers methods in this paper

  • ...The GSR was used to compute the blur parameter....



Performance Metrics

No. of citations received by the paper in previous years:

Year    Citations
2020    1
2019    1