Book Chapter

A Defocus Based Novel Keyboard Design

TL;DR: The proposed design combines the measured defocus with a precalibrated relation between the defocus amount and the keyboard pattern to infer the finger's depth, which, along with the azimuth position of the stroke, identifies the pressed key.
Abstract: Defocus-based depth estimation has been widely applied to constructing 3D setups from 2D images, reconstructing 3D scenes and refocusing images. Defocus enables us to infer depth information from a single image using visual cues that can be captured by a monocular camera. In this paper, we propose an application of Depth from Defocus to a novel, portable keyboard design. Our estimation technique is based on the observation that the depth of the finger with respect to the camera is correlated with its defocus blur, so a map can be obtained to detect the finger position accurately. We utilise the near-focus region for our design, under the assumption that the closer an object is to the camera, the greater its defocus blur. The proposed keyboard can be integrated with smartphones, tablets and personal computers, and only requires printing on plain paper or projection onto a flat surface. The detection approach involves tracking the finger's position as the user types, measuring its defocus value when a key is pressed, and combining the measured defocus with a precalibrated relation between the defocus amount and the keyboard pattern. This yields the finger's depth, which, along with the azimuth position of the stroke, identifies the pressed key. Our minimalistic design requires only a monocular camera and no external hardware, making the proposed approach a cost-effective and feasible solution for a portable keyboard.
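To make the detection pipeline concrete, here is a minimal sketch of the final key-identification step, assuming a hypothetical precalibrated table of per-row blur values. The layout, names and numbers are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Hypothetical precalibrated relation: defocus blur (sigma, in pixels)
# measured once per keyboard row during calibration. In the near-focus
# regime assumed by the paper, rows closer to the camera are blurrier.
ROW_SIGMAS = np.array([6.0, 4.5, 3.2, 2.1])  # rows 0..3, near to far
ROW_KEYS = [
    list("1234567890"),
    list("qwertyuiop"),
    list("asdfghjkl;"),
    list("zxcvbnm,./"),
]

def identify_key(sigma: float, x_norm: float) -> str:
    """Map a measured blur value and a normalised azimuth (0..1) to a key.

    sigma  -- defocus blur estimated at the fingertip when a stroke occurs
    x_norm -- horizontal fingertip position, normalised to keyboard width
    """
    # Depth (row) from defocus: nearest calibrated blur value wins.
    row = int(np.argmin(np.abs(ROW_SIGMAS - sigma)))
    # Azimuth -> column within that row.
    keys = ROW_KEYS[row]
    col = min(int(x_norm * len(keys)), len(keys) - 1)
    return keys[col]

print(identify_key(sigma=4.3, x_norm=0.12))  # -> 'w'
```

A real implementation would presumably interpolate between calibrated rows and reject strokes whose blur falls outside the calibrated range.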
References
Journal Article
01 Feb 2012 - Sensors
TL;DR: The calibration of the Kinect sensor is discussed, and an analysis of the accuracy and resolution of its depth data is provided, based on a mathematical model of depth measurement from disparity.
Abstract: Consumer-grade range cameras such as the Kinect sensor have the potential to be used in mapping applications where accuracy requirements are less strict. To realize this potential, insight into the geometric quality of the data acquired by the sensor is essential. In this paper we discuss the calibration of the Kinect sensor, and provide an analysis of the accuracy and resolution of its depth data. Based on a mathematical model of depth measurement from disparity, a theoretical error analysis is presented, which provides an insight into the factors influencing the accuracy of the data. Experimental results show that the random error of depth measurement increases with increasing distance to the sensor, and ranges from a few millimeters up to about 4 cm at the maximum range of the sensor. The quality of the data is also found to be influenced by the low resolution of the depth measurements.
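As a hedged aside, the roughly quadratic growth of the random error follows from first-order error propagation through the triangulation model Z = f·b/d. The sketch below uses assumed, approximate Kinect constants for illustration, not the paper's calibrated values.

```python
# First-order error propagation for Z = f*b/d gives
# sigma_Z ~ Z^2 / (f*b) * sigma_d.
# The constants below are rough assumptions, not calibrated values.
def depth_std(z_m, f_px=580.0, baseline_m=0.075, disparity_std_px=0.07):
    return (z_m ** 2) / (f_px * baseline_m) * disparity_std_px

for z in (1.0, 3.0, 5.0):
    print(f"Z = {z:.0f} m  ->  sigma_Z ~ {100 * depth_std(z):.1f} cm")
# Millimetres at 1 m, rising to roughly 4 cm near a 5 m maximum range,
# consistent with the trend reported in the abstract.
```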

1,671 citations


"A Defocus Based Novel Keyboard Desi..." refers methods in this paper

  • ...Another method for depth estimation from a single image was presented by Khoshelham [16], but it requires a Kinect sensor and thus adds to the hardware requirements as well as the cost associated with the keyboard....


Journal Article
TL;DR: It is shown that this scheme will correctly decompose scenes containing arbitrary rigid objects in motion, recovering their three-dimensional structure and motion.
Abstract: The interpretation of structure from motion is examined from a computational point of view. The question addressed is how the three-dimensional structure and motion of objects can be inferred from the two-dimensional transformations of their projected images when no three-dimensional information is conveyed by the individual projections. The following scheme is proposed: (i) divide the image into groups of four elements each; (ii) test each group for a rigid interpretation; (iii) combine the results obtained in (ii). It is shown that this scheme will correctly decompose scenes containing arbitrary rigid objects in motion, recovering their three-dimensional structure and motion. The analysis is based primarily on the 'structure from motion' theorem, which states that the structure of four non-coplanar points is recoverable from three orthographic projections. The interpretation scheme is extended to cover perspective projections, and its psychological relevance is discussed.

930 citations

Journal Article
TL;DR: A new method named STM is described for determining the distance of objects and for rapid autofocusing of camera systems. It is based on a new Spatial-Domain Convolution/Deconvolution Transform and requires only two images taken with different camera parameters such as lens position, focal length, and aperture diameter.
Abstract: A new method named STM is described for determining the distance of objects and for rapid autofocusing of camera systems. STM uses image defocus information and is based on a new Spatial-Domain Convolution/Deconvolution Transform. The method requires only two images taken with different camera parameters such as lens position, focal length, and aperture diameter. Both images can be arbitrarily blurred and neither of them needs to be a focused image. Therefore STM is very fast in comparison with Depth-from-Focus methods, which search for the lens position or focal length of best focus. The method involves simple local operations and can be easily implemented in parallel to obtain the depth map of a scene. STM has been implemented on an actual camera system named SPARCS. Experiments on the performance of STM and their results on real-world planar objects are presented. The results indicate that the accuracy of STM compares well with Depth-from-Focus methods and is useful in practical applications. The utility of the method is demonstrated for rapid autofocusing of electronic cameras.
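The toy sketch below is not the STM transform itself; it only illustrates the underlying two-image principle under simplified assumptions: two registered shots taken with different camera parameters differ in blur, so the ratio of local gradient energy carries relative depth information. All names are ours.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, uniform_filter

def relative_sharpness(img_a: np.ndarray, img_b: np.ndarray, win: int = 15):
    """Per-pixel ratio of local gradient energy between two registered shots."""
    def grad_energy(img):
        gy, gx = np.gradient(img.astype(float))
        return uniform_filter(gx**2 + gy**2, size=win) + 1e-8
    # Ratio > 1 where image A is locally sharper than image B; a camera-
    # specific calibration would be needed to turn this into metric depth.
    return grad_energy(img_a) / grad_energy(img_b)

# Synthetic check: the less-blurred copy should come out sharper on average.
scene = np.random.rand(128, 128)
shot_a = gaussian_filter(scene, sigma=1.0)  # e.g. smaller aperture
shot_b = gaussian_filter(scene, sigma=3.0)  # e.g. larger aperture
print(relative_sharpness(shot_a, shot_b).mean() > 1.0)  # True
```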

514 citations


"A Defocus Based Novel Keyboard Desi..." refers methods in this paper

  • ...However, other methods such as [29], [30], [31] and [32] can also be employed instead for defocus estimation....


Journal Article
TL;DR: This paper presents a simple yet effective approach to estimate the amount of spatially varying defocus blur at edge locations, and demonstrates the effectiveness of this method in providing a reliable estimation of the defocus map.

370 citations


"A Defocus Based Novel Keyboard Desi..." refers methods in this paper

  • ...We have used Zhuo’s strategy [28] due to its simplicity and effectiveness....

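As context for the excerpt above: the core of Zhuo's strategy, as we understand it, is a gradient-ratio estimate of blur at edges. The sketch below re-blurs the image with a known Gaussian and recovers the unknown edge blur from the ratio of gradient magnitudes; variable names are ours, and the propagation of the sparse edge estimates to a dense defocus map is omitted.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def edge_blur(image: np.ndarray, edge_mask: np.ndarray, sigma0: float = 1.0):
    """Estimate the defocus sigma at the edge pixels selected by edge_mask."""
    def grad_mag(img):
        gy, gx = np.gradient(img.astype(float))
        return np.hypot(gx, gy)
    g_orig = grad_mag(image)
    g_reblur = grad_mag(gaussian_filter(image.astype(float), sigma0))
    ratio = g_orig[edge_mask] / np.maximum(g_reblur[edge_mask], 1e-8)
    # For a step edge blurred by sigma, the gradient-magnitude ratio is
    # R = sqrt(sigma^2 + sigma0^2) / sigma, hence sigma = sigma0 / sqrt(R^2 - 1).
    return sigma0 / np.sqrt(np.maximum(ratio**2 - 1.0, 1e-8))
```

In the keyboard setting, such a per-stroke blur estimate at the fingertip would play the role of the "measured defocus" fed into the precalibrated depth map.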

Journal Article
TL;DR: These data show that disparate shading (even in the absence of disparate edges) yields a vivid stereoscopic depth perception; the results are compared with computer-vision algorithms for both single cues and their integration for three-dimensional vision.
Abstract: We studied the integration of image disparities, edge information, and shading in the three-dimensional perception of complex yet well-controlled images generated with a computer-graphics system. The images showed end-on views of flat- and smooth-shaded ellipsoids, i.e., images with and without intensity discontinuities (edges). A map of perceived depth was measured by adjusting a small stereo depth probe interactively to the perceived surface. Our data show that disparate shading (even in the absence of disparate edges) yields a vivid stereoscopic depth perception. The perceived depth is significantly reduced if the disparities are completely removed (shape-from-shading). If edge information is available, it overrides both shape-from-shading and disparate shading. Degradations of depth perception corresponded to a reduced depth rather than to an increased scatter in the depth measurement. The results are compared with computer-vision algorithms for both single cues and their integration for three-dimensional vision.

317 citations


"A Defocus Based Novel Keyboard Desi..." refers methods in this paper

  • ...Depth estimation using vision can take different approaches such as Depth from Focus [12], Stereo Depth [13] and Structure from Motion [14]....
