Proceedings ArticleDOI

A Defocus Based Novel Keyboard Design

TL;DR: In this article, the authors propose an application of depth map estimation from defocus to a novel keyboard design for detecting keystrokes. The keyboard can be integrated with devices such as mobile phones, PCs and tablets, and can be generated either by printing on plain paper or by projection on a flat surface.
Abstract: Depth map estimation from defocus is a computer vision technique with wide applications, such as constructing a 3D setup from 2D image(s), image refocusing and reconstructing 3D scenes. In this paper, we propose an application of Depth from Defocus to a novel keyboard design for detecting keystrokes. The proposed keyboard can be integrated with devices such as mobile phones, PCs and tablets, and can be generated either by printing on plain paper or by projection on a flat surface. The proposed design uses the measured defocus, together with a precalibrated relation between the defocus amount and the keyboard pattern, to infer the depth of a keystroke, which, along with the azimuth position of the stroke, identifies the key. As the design requires no hardware other than a monocular camera, the proposed approach is a cost-effective and feasible solution for a portable keyboard.
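To make the key-detection step concrete, here is a minimal sketch of the decoding the abstract describes: a measured defocus blur selects the keyboard row via a precalibrated lookup, and the azimuth of the stroke selects the column. The layout, function names and calibration values are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Hypothetical sketch: defocus blur picks the keyboard row (depth),
# azimuth picks the column. Values below are illustrative assumptions.

# Precalibrated relation: blur radius (pixels) measured for each
# keyboard row during a one-time calibration pass.
ROW_BLUR_CALIBRATION = np.array([2.1, 3.4, 4.8, 6.3])  # rows 0..3

# Keyboard layout, rows ordered by increasing distance (hence
# increasing defocus) from the camera.
LAYOUT = [
    list("1234567890"),
    list("qwertyuiop"),
    list("asdfghjkl;"),
    list("zxcvbnm,./"),
]

def decode_key(blur_sigma: float, azimuth_frac: float) -> str:
    """Map a measured blur and a normalized azimuth (0..1) to a key."""
    # Depth (row): nearest calibrated blur level.
    row = int(np.argmin(np.abs(ROW_BLUR_CALIBRATION - blur_sigma)))
    # Azimuth (column): proportional position along the row.
    col = min(int(azimuth_frac * len(LAYOUT[row])), len(LAYOUT[row]) - 1)
    return LAYOUT[row][col]

print(decode_key(3.6, 0.12))  # -> 'w' (row 1, second key)
```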
References
Journal ArticleDOI
01 Feb 2012-Sensors
TL;DR: The calibration of the Kinect sensor is discussed, and an analysis of the accuracy and resolution of its depth data is provided, based on a mathematical model of depth measurement from disparity.
Abstract: Consumer-grade range cameras such as the Kinect sensor have the potential to be used in mapping applications where accuracy requirements are less strict. To realize this potential, insight into the geometric quality of the data acquired by the sensor is essential. In this paper we discuss the calibration of the Kinect sensor, and provide an analysis of the accuracy and resolution of its depth data. Based on a mathematical model of depth measurement from disparity, a theoretical error analysis is presented which provides insight into the factors influencing the accuracy of the data. Experimental results show that the random error of depth measurement increases with increasing distance to the sensor, and ranges from a few millimeters up to about 4 cm at the maximum range of the sensor. The quality of the data is also found to be influenced by the low resolution of the depth measurements.
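The disparity-based error model the abstract refers to can be sketched in a few lines: for a triangulation sensor, depth is Z = f*b/d, so a fixed disparity noise sigma_d propagates to a depth noise sigma_Z = (Z^2/(f*b))*sigma_d that grows quadratically with distance. The parameter values below are rough Kinect-like assumptions, chosen to roughly reproduce the reported few-millimeters-to-4 cm error range; they are not the paper's calibrated numbers.

```python
import numpy as np

# Illustrative first-order error propagation for depth from disparity:
# Z = f*b/d  =>  sigma_Z = (Z**2 / (f*b)) * sigma_d
f = 580.0        # focal length in pixels (assumed)
b = 0.075        # baseline in metres (assumed)
sigma_d = 0.07   # disparity noise in pixels (assumed)

for Z in [1.0, 2.0, 3.0, 4.0, 5.0]:          # depth in metres
    sigma_Z = (Z ** 2 / (f * b)) * sigma_d   # propagated depth noise
    print(f"Z = {Z:.0f} m -> sigma_Z ~ {100 * sigma_Z:.1f} cm")
```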

1,671 citations

Proceedings ArticleDOI
29 Jul 2007
TL;DR: A simple modification to a conventional camera is proposed: inserting a patterned occluder within the aperture of the camera lens to create a coded aperture. A criterion for depth discriminability is introduced and used to design the preferred aperture pattern.
Abstract: A conventional camera captures blurred versions of scene information away from the plane of focus. Camera systems have been proposed that allow for recording all-focus images, or for extracting depth, but to record both simultaneously has required more extensive hardware and reduced spatial resolution. We propose a simple modification to a conventional camera that allows for the simultaneous recovery of both (a) high resolution image information and (b) depth information adequate for semi-automatic extraction of a layered depth representation of the image. Our modification is to insert a patterned occluder within the aperture of the camera lens, creating a coded aperture. We introduce a criterion for depth discriminability which we use to design the preferred aperture pattern. Using a statistical model of images, we can recover both depth information and an all-focus image from single photographs taken with the modified camera. A layered depth map is then extracted, requiring user-drawn strokes to clarify layer assignments in some cases. The resulting sharp image and layered depth map can be combined for various photographic applications, including automatic scene segmentation, post-exposure refocusing, or re-rendering of the scene from an alternate viewpoint.
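The role of the aperture pattern can be illustrated with a toy computation: the paper's discriminability criterion favors patterns whose blur kernels at different scales (depths) are easy to tell apart. The sketch below uses a crude proxy for this, the mean difference between the log power spectra of a kernel at two adjacent scales; the binary pattern and the measure are assumptions for illustration, not the paper's exact criterion.

```python
import numpy as np

# Toy proxy for depth discriminability: how different do an aperture's
# blur kernels look at two adjacent scales (i.e. two depths)?

def spectrum(kernel, n=64):
    """Log power spectrum of a normalized kernel, zero-padded to n x n."""
    K = np.fft.fft2(kernel / kernel.sum(), s=(n, n))
    return np.log(np.abs(K) ** 2 + 1e-8)

def scaled(pattern, s):
    """Blur kernel for the pattern at integer scale s (depth proxy)."""
    return np.kron(pattern, np.ones((s, s)))

open_aperture = np.ones((3, 3))                              # conventional
coded = np.array([[1, 0, 1], [0, 1, 0], [1, 0, 1]], float)   # occluded

for name, pat in [("open ", open_aperture), ("coded", coded)]:
    d = np.abs(spectrum(scaled(pat, 2)) - spectrum(scaled(pat, 3))).mean()
    print(f"{name} aperture: mean |log-spectrum difference| = {d:.2f}")
```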

1,489 citations

Journal ArticleDOI
TL;DR: It is shown that this scheme will correctly decompose scenes containing arbitrary rigid objects in motion, recovering their three dimensional structure and motion.
Abstract: The interpretation of structure from motion is examined from a computational point of view. The question addressed is how the three-dimensional structure and motion of objects can be inferred from the two-dimensional transformations of their projected images when no three-dimensional information is conveyed by the individual projections. The following scheme is proposed: (i) divide the image into groups of four elements each; (ii) test each group for a rigid interpretation; (iii) combine the results obtained in (ii). It is shown that this scheme will correctly decompose scenes containing arbitrary rigid objects in motion, recovering their three-dimensional structure and motion. The analysis is based primarily on the 'structure from motion' theorem, which states that the structure of four non-coplanar points is recoverable from three orthographic projections. The interpretation scheme is extended to cover perspective projections, and its psychological relevance is discussed.
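Step (ii), testing image elements for a rigid interpretation, can be illustrated numerically. Ullman's own test operates on four points over three orthographic views; the sketch below instead uses the related rank constraint later popularized by Tomasi and Kanade: for a rigid scene under orthographic projection, the mean-centered measurement matrix stacking all views has rank at most 3. This is an illustrative stand-in, not the paper's algorithm.

```python
import numpy as np

# Rigidity check via the orthographic rank constraint (Tomasi-Kanade
# observation), shown on synthetic data: a rigid point cloud seen in
# three orthographic views yields a centred measurement matrix of
# rank <= 3, so only three singular values are non-zero.

rng = np.random.default_rng(1)
P = 6                                   # number of scene points
X = rng.standard_normal((3, P))         # rigid 3D point cloud

def random_rotation():
    q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
    return q

rows = []
for _ in range(3):                      # three orthographic views
    x = (random_rotation() @ X)[:2]     # rotate, drop depth: 2 x P
    rows.append(x - x.mean(axis=1, keepdims=True))   # centre per view
W = np.vstack(rows)                     # 6 x P measurement matrix

s = np.linalg.svd(W, compute_uv=False)
print("singular values:", np.round(s, 6))   # only 3 are non-zero
```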

930 citations

Journal ArticleDOI
TL;DR: A new method named STM is described for determining the distance of objects and for rapid autofocusing of camera systems, based on a new Spatial-Domain Convolution/Deconvolution Transform that requires only two images taken with different camera parameters such as lens position, focal length, and aperture diameter.
Abstract: A new method named STM is described for determining the distance of objects and for rapid autofocusing of camera systems. STM uses image defocus information and is based on a new Spatial-Domain Convolution/Deconvolution Transform. The method requires only two images taken with different camera parameters, such as lens position, focal length, and aperture diameter. Both images can be arbitrarily blurred and neither of them needs to be a focused image. STM is therefore very fast in comparison with Depth-from-Focus methods, which search for the lens position or focal length of best focus. The method involves simple local operations and can easily be implemented in parallel to obtain the depth map of a scene. STM has been implemented on an actual camera system named SPARCS. Experiments on the performance of STM with real-world planar objects and their results are presented. The results indicate that the accuracy of STM compares well with Depth-from-Focus methods and is useful in practical applications. The utility of the method is demonstrated for rapid autofocusing of electronic cameras.
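STM's Spatial-Domain Convolution/Deconvolution Transform is specific to the paper, but the two-image setting it operates in can be sketched generically: the relative blur between two photographs of the same scene taken with different camera parameters encodes depth. The brute-force matching below is a stand-in for the paper's transform, and a calibrated lens model would still be needed to map the recovered relative blur to an actual distance.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Generic two-image depth-from-defocus sketch (not the S Transform):
# re-blur the sharper image with trial Gaussians and find the blur
# that best matches the more defocused image.

rng = np.random.default_rng(2)
scene = gaussian_filter(rng.standard_normal((128, 128)), 1.0)  # texture

sigma1, sigma2 = 1.0, 2.5            # two different camera settings
img1 = gaussian_filter(scene, sigma1)
img2 = gaussian_filter(scene, sigma2)

# For Gaussian blur, the relative blur s satisfies sigma2^2 = sigma1^2 + s^2.
trials = np.arange(0.1, 4.0, 0.05)
errors = [np.mean((gaussian_filter(img1, s) - img2) ** 2) for s in trials]
s_hat = trials[int(np.argmin(errors))]
print(f"estimated relative blur {s_hat:.2f}, "
      f"true {np.sqrt(sigma2**2 - sigma1**2):.2f}")   # ~2.29
```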

514 citations

Journal ArticleDOI
TL;DR: This paper presents a simple yet effective approach for estimating the amount of spatially varying defocus blur at edge locations, and demonstrates that the method provides a reliable estimate of the defocus map.
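A common way to estimate defocus blur at an edge, in the spirit of the approach summarized above, is the re-blur gradient ratio: re-blurring a defocused step edge with a known Gaussian sigma0 changes its gradient magnitude by a predictable factor R, from which the unknown blur follows as sigma = sigma0 / sqrt(R^2 - 1). The 1-D demo below is illustrative and not necessarily the paper's exact formulation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

# Re-blur gradient-ratio estimate of defocus blur at a step edge.
true_sigma, sigma0 = 2.0, 1.0
edge = np.zeros(256); edge[128:] = 1.0          # ideal step edge
blurred = gaussian_filter1d(edge, true_sigma)   # observed defocused edge

g1 = np.abs(np.gradient(blurred))                            # original
g2 = np.abs(np.gradient(gaussian_filter1d(blurred, sigma0))) # re-blurred

i = int(np.argmax(g1))                          # edge location
R = g1[i] / g2[i]                               # gradient ratio > 1
sigma_hat = sigma0 / np.sqrt(R**2 - 1)
print(f"estimated blur {sigma_hat:.2f} (true {true_sigma})")
```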

370 citations