Proceedings ArticleDOI

Defocus map estimation from a single image using principal components

TL;DR: In this paper, a comparison is made between the principal-component representation and the traditional gray-scale representation for depth-map creation; the results show that depth maps obtained using principal components are smoother than those obtained with the gray-scale representation.
Abstract: Light is a mixture of multiple spectral components, and an image is the scene's response to these spectra. Principal components give a more compact representation of the data than other commonly used representations, so the accuracy of the estimated defocus parameter is higher in the principal-component representation. In this paper, we present a comparison between the principal-component representation and the customary gray-scale representation for depth-map creation. The presented results show that depth maps obtained using principal components are smoother than those obtained using the gray-scale representation. In addition, noise estimation using principal components is considerably more accurate than with the Wiener strategy.
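As a rough illustration of the two representations being compared, the first principal component of an RGB (or multispectral) image can be computed per pixel as below. This is a minimal sketch, not the paper's implementation; the function names and the fixed luminance weights for the gray-scale baseline are our own choices.

```python
import numpy as np

def principal_component_image(img):
    """Project an H x W x C image onto its first principal component,
    computed across the spectral channels (illustrative sketch only)."""
    h, w, c = img.shape
    X = img.reshape(-1, c).astype(np.float64)
    X -= X.mean(axis=0)                     # center each spectral channel
    cov = X.T @ X / (X.shape[0] - 1)        # C x C channel covariance
    _, eigvecs = np.linalg.eigh(cov)        # eigenvalues in ascending order
    pc1 = eigvecs[:, -1]                    # direction of maximum variance
    return (X @ pc1).reshape(h, w)

def grayscale_image(img):
    """Fixed-weight luminance baseline for comparison (ITU-R BT.601 weights)."""
    return img[..., :3] @ np.array([0.299, 0.587, 0.114])
```

Both functions reduce the multichannel image to a single plane on which a defocus parameter can then be estimated; the PCA projection adapts its weights to the data, whereas the gray-scale weights are fixed.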
Citations
Proceedings ArticleDOI
01 Dec 2019
TL;DR: In this article, the authors propose an application of depth-map estimation from defocus to a novel keyboard design for detecting keystrokes; the keyboard can be integrated with devices such as mobile phones, PCs, and tablets, and can be produced either by printing on plain paper or by projection on a flat surface.
Abstract: Depth-map estimation from defocus is a computer vision technique with wide applications such as constructing a 3D setup from 2D image(s), image refocusing, and reconstructing 3D scenes. In this paper, we propose an application of Depth from Defocus to a novel keyboard design for detecting keystrokes. The proposed keyboard can be integrated with devices such as mobile phones, PCs, and tablets, and can be generated either by printing on plain paper or by projection on a flat surface. The proposed design uses the measured defocus together with a precalibrated relation between the defocus amount and the keyboard pattern to infer the depth, which, along with the azimuth position of the stroke, identifies the key. As the design requires no hardware other than a monocular camera, the proposed approach is a cost-effective and feasible solution for a portable keyboard.
Book ChapterDOI
19 Jul 2020
TL;DR: The proposed design utilizes measured defocus together with a precalibrated relation between the defocus amount and the keyboard pattern to infer the depth, which, along with the azimuth position of the stroke identifies the key.
Abstract: Defocus-based depth estimation has been widely applied to constructing 3D setups from 2D image(s), reconstructing 3D scenes, and image refocusing. Using defocus enables us to infer depth information from a single image using visual cues that can be captured by a monocular camera. In this paper, we propose an application of Depth from Defocus to a novel, portable keyboard design. Our estimation technique is based on the observation that the depth of the finger with respect to the camera is correlated with its defocus blur value, so a map can be obtained to detect the finger position accurately. We utilise the near-focus region for our design, assuming that the closer an object is to the camera, the greater its defocus blur. The proposed keyboard can be integrated with smartphones, tablets, and personal computers, and only requires printing on plain paper or projection on a flat surface. The detection approach involves tracking the finger's position as the user types, measuring its defocus value when a key is pressed, and mapping the measured defocus through a precalibrated relation between the defocus amount and the keyboard pattern. This is used to infer the finger's depth, which, along with the azimuth position of the stroke, identifies the pressed key. Our minimalistic design only requires a monocular camera, with no need for any external hardware. This makes the proposed approach a cost-effective and feasible solution for a portable keyboard.
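The calibrate-then-invert step the abstract describes can be sketched as follows. All numbers here (row depths, blur values, pitches) are made-up placeholders, not the authors' calibration, and the function names are ours.

```python
import numpy as np

# Hypothetical calibration table: blur sigma measured at each keyboard
# row's depth (near-focus region: closer rows are blurrier).
CAL_DEPTH_CM = np.array([20.0, 25.0, 30.0, 35.0])
CAL_SIGMA_PX = np.array([4.0, 3.1, 2.4, 1.9])

def depth_from_sigma(sigma):
    """Invert the precalibrated sigma -> depth relation by linear
    interpolation (np.interp needs ascending x, so flip the tables)."""
    return float(np.interp(sigma, CAL_SIGMA_PX[::-1], CAL_DEPTH_CM[::-1]))

def identify_key(sigma, azimuth_px, row_pitch_cm=5.0, col_pitch_px=40):
    """Map a keystroke's (blur, horizontal position) to a key index:
    the inferred depth selects the keyboard row, azimuth the column."""
    depth = depth_from_sigma(sigma)
    row = int(round((depth - CAL_DEPTH_CM[0]) / row_pitch_cm))
    col = int(azimuth_px) // col_pitch_px
    return row, col
```

For example, a measured blur of 3.1 px maps to the 25 cm calibration row, and the stroke's horizontal pixel position then picks the key within that row.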

Cites methods from "Defocus map estimation from a singl..."

  • ...As per the approach employed by Kumar [33], the defocus blur kernel is assumed to be Gaussian with kernel parameter σ. Kernel parameter σ varies with the depth....

    [...]
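The model quoted above — a Gaussian defocus kernel whose parameter σ varies with depth — can be sketched as below. The σ(d) formula and its constants are illustrative thin-lens-style assumptions of ours, not taken from [33].

```python
import numpy as np

def gaussian_kernel(sigma, radius=None):
    """2D Gaussian defocus PSF with parameter sigma, normalized to sum 1."""
    if radius is None:
        radius = max(1, int(3 * sigma))     # cover +/- 3 sigma
    ax = np.arange(-radius, radius + 1)
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return k / k.sum()

def sigma_from_depth(d_cm, focus_cm=50.0, gain=400.0):
    """Illustrative monotone blur model: sigma grows with distance from
    the focal plane, proportional to |1/focus - 1/d| (thin-lens style)."""
    return gain * abs(1.0 / focus_cm - 1.0 / d_cm)
```

Under this model an object on the focal plane has σ = 0 (sharp), and σ increases as the object moves toward the camera, which is the dependence the keyboard design exploits.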

Journal ArticleDOI
TL;DR: Spectral 3D computer vision as discussed by the authors examines both the geometric and spectral properties of objects, and provides a deeper understanding of an object's physical properties by providing information from narrow bands in various regions of the electromagnetic spectrum.
Abstract: Spectral 3D computer vision examines both the geometric and spectral properties of objects. It provides a deeper understanding of an object's physical properties by providing information from narrow bands in various regions of the electromagnetic spectrum. Mapping the spectral information onto the 3D model reveals changes in the spectra-structure space or enhances 3D representations with properties such as reflectance, chromatic aberration, and varying defocus blur. This emerging paradigm advances traditional computer vision and opens new avenues of research in 3D structure, depth estimation, motion analysis, and more. It has found applications in areas such as smart agriculture, environment monitoring, building inspection, geological exploration, and digital cultural heritage records. This survey offers a comprehensive overview of spectral 3D computer vision, including a unified taxonomy of methods, key application areas, and future challenges and prospects.
References
Journal ArticleDOI
TL;DR: The method builds on the universal imaging principle: only scene points at the focus distance converge to a single sharp point on the imaging sensor, while other scene points yield blur that varies with their distance from the camera lens.
Abstract: We present a technique to recover and refine the depth map from a single image captured by a conventional camera. Our method builds on the universal imaging principle: only scene points at the focus distance converge to a single sharp point on the imaging sensor, while other scene points yield blur that varies with their distance from the camera lens. We first estimate depth values at edge locations via spectrum contrast and then recover the full depth map using a depth-matting optimization method. Because some blurred textures, such as soft shadows or blur patterns, produce ambiguous results during depth estimation, we apply a total-variation-based image smoothing method to the original image, generating a smoothed image with fine texture suppressed. Taking this smoothed image as a reference, a guided filter is used to refine the final depth map.
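A common way to obtain such sparse per-edge blur estimates — shown here via the reblur gradient-ratio trick rather than this paper's spectrum-contrast measure — looks like the sketch below; the edge threshold and σ₀ value are our own choices.

```python
import numpy as np
from scipy import ndimage

def sparse_defocus_at_edges(img, sigma0=1.0, edge_frac=0.1):
    """Estimate defocus blur sigma_b at edge pixels by reblurring with a
    known sigma0 and comparing gradient magnitudes. For an ideal step
    edge, g / g_re = sqrt((sigma_b**2 + sigma0**2) / sigma_b**2), so
    sigma_b = sigma0 / sqrt(ratio**2 - 1). This is a stand-in for the
    paper's spectrum-contrast step, not its exact formulation."""
    img = img.astype(np.float64)
    gy, gx = np.gradient(img)
    g = np.hypot(gx, gy)                        # gradient magnitude
    ry, rx = np.gradient(ndimage.gaussian_filter(img, sigma0))
    g_re = np.hypot(rx, ry)                     # gradient after reblur
    edges = g > edge_frac * g.max()             # crude edge mask
    ratio = np.where(g_re > 1e-8, g / np.maximum(g_re, 1e-8), 0.0)
    sigma_b = np.zeros_like(g)
    valid = edges & (ratio > 1.0)
    sigma_b[valid] = sigma0 / np.sqrt(ratio[valid] ** 2 - 1.0)
    return sigma_b, edges
```

The result is a sparse defocus map that is nonzero only at edge pixels; a propagation step such as the depth-matting optimization mentioned above is then needed to fill in the full map.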

35 citations


"Defocus map estimation from a singl..." refers methods in this paper

  • ...In all these cases, GSR for the image was used to determine the defocus parameter....

    [...]

Proceedings ArticleDOI
01 Sep 2009
TL;DR: A postprocessing approach that corrects defocused, blurry edges to sharp ones with the aid of a parametric edge model and then renders this cue as a novel local prior to ensure the sharpness of the refocused image.
Abstract: In this paper, we present a postprocessing approach to the single-image focus-editing problem. The proposed method accomplishes the tasks of focus-map estimation, image refocusing, and defocusing. Given an image with a mixture of focused and defocused objects, we first detect the edges and then estimate the focus map from the edge blurriness, which is depicted explicitly with a well-parameterized model. The image refocusing problem is addressed in an elaborate blind-deconvolution framework, where the image prior is modeled using both global and local constraints. In particular, we correct defocused, blurry edges to sharp ones with the aid of the parametric edge model and then render this cue as a novel local prior to ensure the sharpness of the refocused image. Experimental results demonstrate that the proposed approach performs well in producing different styles of realistic images from a single input by focus editing.
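As a minimal stand-in for the refocusing-by-deconvolution step (the paper's blind framework additionally estimates the kernel and uses global and local priors, all of which this sketch omits), a non-blind Wiener deconvolution with a known PSF looks like:

```python
import numpy as np

def wiener_deconvolve(blurred, psf, nsr=1e-2):
    """Frequency-domain Wiener filter: F = G * conj(H) / (|H|^2 + nsr),
    where `nsr` is an assumed noise-to-signal ratio (regularizer).
    Assumes circular (wrap-around) boundary conditions."""
    h, w = blurred.shape
    ph, pw = psf.shape
    psf_pad = np.zeros((h, w))
    psf_pad[:ph, :pw] = psf
    # Shift the PSF centre to the origin so the output is not translated.
    psf_pad = np.roll(psf_pad, (-(ph // 2), -(pw // 2)), axis=(0, 1))
    H = np.fft.fft2(psf_pad)
    G = np.fft.fft2(blurred)
    F = G * np.conj(H) / (np.abs(H) ** 2 + nsr)
    return np.real(np.fft.ifft2(F))
```

With `nsr = 0` this reduces to exact inverse filtering, which is unstable in the presence of noise; a small positive `nsr` trades residual blur for noise suppression.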

34 citations


"Defocus map estimation from a singl..." refers methods in this paper

  • ...The GSR was used to compute the blur parameter....

    [...]