Proceedings ArticleDOI

Defocus map estimation from a single image using principal components

TL;DR: In this paper, a comparison between the principal component representation and the traditional gray-scale representation for depth map creation is made, and the results show that the depth maps obtained using principal components are smoother than the ones obtained using the gray-scale representation.
Abstract: Light is a mixture of multiple spectral components, and an image is the response of the scene to these spectra. Principal components are a more compact representation of the data than other commonly used representations; hence the accuracy of the estimated defocus parameter is higher in the principal component representation. In this paper, we present a comparison between the principal component representation and the customary gray-scale representation for depth map creation. The presented results show that the depth maps obtained using principal components are smoother than those obtained using the gray-scale representation. In addition, noise estimation using principal components is considerably more accurate than the Wiener strategy.
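
To make the idea concrete, the projection described above can be sketched as a per-pixel PCA over the image's spectral channels, with defocus estimation then run on the resulting single channel instead of a fixed gray-scale combination. This is a minimal Python sketch under our own assumptions (the function name, centering, and eigen-decomposition route are ours); it is not the paper's exact procedure.

    import numpy as np

    def first_principal_component(img):
        # Treat every pixel as a point in C-dimensional spectral space and
        # project onto the direction of maximum variance across channels.
        h, w, c = img.shape
        x = img.reshape(-1, c).astype(np.float64)
        x -= x.mean(axis=0)                      # center each spectral channel
        cov = x.T @ x / (x.shape[0] - 1)         # C x C channel covariance
        _, eigvecs = np.linalg.eigh(cov)         # eigenvalues in ascending order
        pc1 = eigvecs[:, -1]                     # first principal direction
        return (x @ pc1).reshape(h, w)           # compact one-channel image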
Citations
Proceedings ArticleDOI
01 Dec 2019
TL;DR: In this article, the authors proposed an application of depth map estimation from defocus to a novel keyboard design for detecting keystrokes; the keyboard can be integrated with devices such as mobile phones, PCs, and tablets, and can be generated either by printing on plain paper or by projection onto a flat surface.
Abstract: Depth map estimation from defocus is a computer vision technique with wide applications such as constructing a 3D setup from 2D image(s), image refocusing, and reconstructing 3D scenes. In this paper, we propose an application of Depth from Defocus to a novel keyboard design for detecting keystrokes. The proposed keyboard can be integrated with devices such as mobile phones, PCs, and tablets, and can be generated either by printing on plain paper or by projection onto a flat surface. The proposed design utilizes the measured defocus together with a precalibrated relation between the defocus amount and the keyboard pattern to infer the depth, which, along with the azimuth position of the stroke, identifies the key. As the proposed design does not require any hardware besides a monocular camera, the proposed approach is a cost-effective and feasible solution for a portable keyboard.
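
The "precalibrated relation" mentioned above can be pictured as a fitted curve from measured blur to depth, with the inferred depth snapped to the nearest keyboard row. The sketch below uses hypothetical calibration values and names of our own; it illustrates the mapping step only, not the paper's calibration procedure.

    import numpy as np

    # Hypothetical calibration samples: blur sigma measured at known
    # key-row depths (cm). Placeholder values, not from the paper.
    cal_sigma = np.array([0.8, 1.4, 2.1, 2.9])
    cal_depth = np.array([12.0, 16.0, 20.0, 24.0])

    coeffs = np.polyfit(cal_sigma, cal_depth, deg=2)  # sigma -> depth curve

    def depth_from_defocus(sigma):
        # Evaluate the precalibrated mapping at a newly measured blur value.
        return np.polyval(coeffs, sigma)

    def nearest_row(sigma):
        # Snap the inferred depth to the closest calibrated key row.
        return int(np.argmin(np.abs(cal_depth - depth_from_defocus(sigma))))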
Book ChapterDOI
19 Jul 2020
TL;DR: The proposed design utilizes the measured defocus together with a precalibrated relation between the defocus amount and the keyboard pattern to infer the depth, which, along with the azimuth position of the stroke, identifies the key.
Abstract: Defocus-based depth estimation has been widely applied for constructing 3D setups from 2D image(s), reconstructing 3D scenes, and image refocusing. Using defocus enables us to infer depth information from a single image using visual cues captured by a monocular camera. In this paper, we propose an application of Depth from Defocus to a novel, portable keyboard design. Our estimation technique is based on the concept that the depth of the finger with respect to the camera and its defocus blur value are correlated, so a map can be obtained to detect the finger position accurately. We utilise the near-focus region for our design, assuming that the closer an object is to the camera, the greater its defocus blur. The proposed keyboard can be integrated with smartphones, tablets, and personal computers, and only requires printing on plain paper or projection onto a flat surface. The detection approach involves tracking the finger's position as the user types, measuring its defocus value when a key is pressed, and combining the measured defocus with a precalibrated relation between the defocus amount and the keyboard pattern. This is used to infer the finger's depth, which, along with the azimuth position of the stroke, identifies the pressed key. Our minimalistic design only requires a monocular camera, with no need for any external hardware, which makes the proposed approach a cost-effective and feasible solution for a portable keyboard.
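
Decoding a keystroke then combines the two measurements: the blur value selects the key row and the azimuth (horizontal image position) selects the column. The layout, blur values, and pixel boundaries below are hypothetical placeholders; the sketch covers only this final mapping, not the finger-tracking or blur-measurement stages.

    import numpy as np

    # Hypothetical two-row layout with precalibrated per-row blur values
    # and pixel x-boundaries for the columns; placeholders only.
    LAYOUT = [["q", "w", "e", "r"],
              ["a", "s", "d", "f"]]
    ROW_SIGMAS = np.array([1.0, 2.0])              # expected blur per key row
    COL_EDGES = np.array([0, 160, 320, 480, 640])  # column boundaries (pixels)

    def identify_key(sigma, x_pixel):
        # Row from defocus (nearest calibrated blur), column from azimuth.
        row = int(np.argmin(np.abs(ROW_SIGMAS - sigma)))
        col = int(np.clip(np.searchsorted(COL_EDGES, x_pixel, side="right") - 1,
                          0, len(LAYOUT[0]) - 1))
        return LAYOUT[row][col]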

Cites methods from "Defocus map estimation from a single image using principal components"

  • As per the approach employed by Kumar [33], the defocus blur kernel is assumed to be Gaussian with kernel parameter σ, and σ varies with the depth (see the sketch below).
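
One standard way to realize this Gaussian-kernel assumption is the reblur gradient-ratio estimator: blur the image once more with a known σ0 and recover σ from how much the edge gradients shrink, since a step edge blurred by σ and then σ0 has gradient magnitude proportional to 1/sqrt(σ² + σ0²). The sketch below follows that common recipe (in the style of Zhuo and Sim) and is not necessarily the exact estimator of [33]; in practice the result is trusted only at strong edges and propagated elsewhere.

    import numpy as np
    from scipy.ndimage import gaussian_filter, sobel

    def defocus_sigma_map(gray, sigma0=1.0, eps=1e-6):
        # Reblur with a known sigma0 and compare gradient magnitudes:
        # R = |grad I| / |grad (I * G_sigma0)| = sqrt(sigma^2 + sigma0^2) / sigma,
        # hence sigma = sigma0 / sqrt(R^2 - 1).
        reblur = gaussian_filter(gray, sigma0)
        g1 = np.hypot(sobel(gray, 0), sobel(gray, 1))
        g2 = np.hypot(sobel(reblur, 0), sobel(reblur, 1))
        ratio = np.clip(g1 / (g2 + eps), 1.0 + eps, None)  # ratio >= 1 at edges
        return sigma0 / np.sqrt(ratio**2 - 1.0)            # larger sigma = more blur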

Journal ArticleDOI
TL;DR: Spectral 3D computer vision, as discussed by the authors, examines both the geometric and spectral properties of objects, yielding a deeper understanding of an object's physical properties through information from narrow bands in various regions of the electromagnetic spectrum.
Abstract: Spectral 3D computer vision examines both the geometric and spectral properties of objects. It provides a deeper understanding of an object's physical properties by providing information from narrow bands in various regions of the electromagnetic spectrum. Mapping the spectral information onto the 3D model reveals changes in the spectra-structure space or enhances 3D representations with properties such as reflectance, chromatic aberration, and varying defocus blur. This emerging paradigm advances traditional computer vision and opens new avenues of research in 3D structure, depth estimation, motion analysis, and more. It has found applications in areas such as smart agriculture, environment monitoring, building inspection, geological exploration, and digital cultural heritage records. This survey offers a comprehensive overview of spectral 3D computer vision, including a unified taxonomy of methods, key application areas, and future challenges and prospects.