
Showing papers on "Image sensor published in 2007"


Journal ArticleDOI
TL;DR: The proposed technique would also allow precise coregistration of images for the measurement of surface displacements due to ice-flow or geomorphic processes, or for any other change detection applications.
Abstract: We describe a procedure to accurately measure ground deformations from optical satellite images. Precise orthorectification is obtained owing to an optimized model of the imaging system, where look directions are linearly corrected to compensate for attitude drifts, and sensor orientation uncertainties are accounted for. We introduce a new computation of the inverse projection matrices, for which a rigorous resampling is proposed. The irregular resampling problem is explicitly addressed to avoid introducing aliasing in the orthorectified images. Image registration and correlation are achieved with a new iterative unbiased processor that estimates the phase plane in the Fourier domain for subpixel shift detection. Without using supplementary data, raw images are warped onto the digital elevation model and coregistered with 1/50-pixel accuracy. The procedure applies to images from any pushbroom imaging system. We analyze its performance using Satellite pour l'Observation de la Terre (SPOT) images in the case of a null test (no coseismic deformation) and in the case of large coseismic deformations due to the Mw 7.1 Hector Mine, California, earthquake of 1999. The proposed technique would also allow precise coregistration of images for the measurement of surface displacements due to ice flow or geomorphic processes, or for any other change detection applications. A complete software package, the Coregistration of Optically Sensed Images and Correlation (COSI-Corr), is available for download from the Caltech Tectonics Observatory website.

777 citations
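The core of the correlator described above is a subpixel shift estimate obtained by fitting a plane to the cross-spectrum phase. A minimal sketch of that idea in Python (NumPy only; the function name and frequency mask are illustrative, and the paper's iterative, bias-corrected processor is considerably more elaborate):

```python
import numpy as np

def subpixel_shift(a, b, max_freq=0.15):
    """Estimate the translation (dy, dx) of patch b relative to patch a
    by fitting a plane to the cross-spectrum phase.

    A shift d in the spatial domain is a linear phase ramp
    2*pi*(fy*dy + fx*dx) in the Fourier domain, so a least-squares
    plane fit recovers d with subpixel precision. Toy version: valid
    for small shifts (no phase unwrapping), whereas the paper's
    processor also weights and masks noisy frequencies.
    """
    F = np.fft.fft2(a) * np.conj(np.fft.fft2(b))
    phase = np.angle(F)                               # cross-spectrum phase
    fy = np.fft.fftfreq(a.shape[0])
    fx = np.fft.fftfreq(a.shape[1])
    FY, FX = np.meshgrid(fy, fx, indexing="ij")
    # Keep low frequencies only, where wrapping and noise are least severe.
    m = (np.abs(FY) < max_freq) & (np.abs(FX) < max_freq)
    A = np.column_stack([FY[m], FX[m]])
    coef, *_ = np.linalg.lstsq(A, phase[m], rcond=None)
    return coef / (2.0 * np.pi)                       # (dy, dx) in pixels
```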


Journal ArticleDOI
TL;DR: In this article, a solution-processed photodetector that exhibits D* (normalized detectivity) greater than 5 × 10^12 Jones (a unit of detectivity equivalent to cm Hz^1/2 W^−1) was presented.
Abstract: One billion image sensors worldwide render optical images as digital photographs in video cameras, still cameras and camera phones. These silicon-based sensors monolithically integrate photodetection with electronic readout. However, silicon photodiodes rely on a smaller bandgap than that required for visible detection; this degrades visible-wavelength sensitivity and produces unwanted infrared sensitivity. Thin-film top-surface visible photodetectors have therefore been investigated based on amorphous [1], organic [2] and colloidal quantum-dot [3] semiconductors. However, none of these devices has exhibited visible sensitivity approaching that of silicon. Here we report a sensitive solution-processed photodetector that, across the entire visible spectrum, exhibits D* (normalized detectivity) greater than 5 × 10^12 Jones (a unit of detectivity equivalent to cm Hz^1/2 W^−1). A photoconductive gain of >100 has been measured, facilitating high-fidelity electronic readout, and the linear dynamic range is greater than 60 dB, as required for high-contrast applications. These photodetectors are also compatible with flexible organic-based electronics.

423 citations
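The detectivity figure quoted above follows from the standard definition D* = R·sqrt(A)/i_n. A worked example in Python (the numbers below are illustrative, not the paper's measurements):

```python
import math

def detectivity_jones(responsivity_A_per_W, noise_A_per_rtHz, area_cm2):
    """D* = R * sqrt(A) / i_n, in Jones (cm Hz^1/2 W^-1), from the
    standard definition D* = sqrt(A * df) / NEP with
    NEP = i_n * sqrt(df) / R."""
    return responsivity_A_per_W * math.sqrt(area_cm2) / noise_A_per_rtHz

# Illustrative numbers only (high responsivity from photoconductive gain):
print(f"D* = {detectivity_jones(50.0, 1e-13, 3e-4):.2e} Jones")  # ~8.7e12
```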


Patent
27 Sep 2007
TL;DR: In this article, a solid-state image sensor equipped with a plurality of charge-storage sections discriminates photoelectrons generated by incoming light according to their arrival timing, sorts them to the charge-storage sections to measure the timing of the incoming light, and includes a plurality of capacitors and a control section that controls the conducted state between the charge-storage sections and the capacitors.
Abstract: A solid-state image sensor using a charge-sorting method for time-of-flight measurement, in which noise caused by background light reflected from the subject is eliminated, and reflection from the subject of light from a predetermined source, set in advance in the sensor, is effectively extracted as the signal component to achieve high sensitivity and low noise. The sensor is equipped with a plurality of charge-storage sections; it discriminates photoelectrons generated by incoming light according to their arrival timing, sorts them into the charge-storage sections, and thereby measures the timing of the incoming light. The sensor further has a plurality of capacitors capable of conducting to the charge-storage sections, and a control section that controls the conducted state between the charge-storage sections and the capacitors. By selectively conducting the charge-storage sections and the capacitors under control of the control section, the difference component of the charge stored in the charge-storage sections is extracted.

339 citations
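A minimal sketch of the charge-sorting idea, assuming two storage sections and a separately measured background charge (the function and variable names are hypothetical; real devices perform the subtraction in the charge domain via the switched capacitors rather than in software):

```python
def tof_distance(q1, q2, q_bg, pulse_width_s, c=3.0e8):
    """Toy charge-sorting time-of-flight estimate.

    q1, q2 : charges in two storage sections whose transfer windows are
             synchronized to the emitted light pulse
    q_bg   : charge collected with the source off; subtracting it from
             each bin is the 'difference component' the patent extracts
    The echo delay splits the pulse energy between the bins, so their
    ratio gives the delay fraction and hence the distance.
    """
    s1 = max(q1 - q_bg, 0.0)
    s2 = max(q2 - q_bg, 0.0)
    delay_s = pulse_width_s * s2 / (s1 + s2)
    return 0.5 * c * delay_s                  # halve the round trip

print(f"{tof_distance(1200.0, 400.0, 100.0, 100e-9):.2f} m")  # ~3.21 m
```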


Patent
09 Mar 2007
TL;DR: In this article, an electronic camera for producing an output image of a scene from a captured image signal includes: (a) a first imaging stage comprising a first image sensor for generating a first sensor output; a first lens for forming an image of the scene on the first sensor; and (b) a second imaging stage consisting of a second image sensor and a second lens focus adjuster for adjusting focus of the second lens responsive to a second focus detection signal.
Abstract: An electronic camera for producing an output image of a scene from a captured image signal includes: (a) a first imaging stage comprising a first image sensor for generating a first sensor output; a first lens for forming a first image of the scene on the first image sensor; and a first lens focus adjuster for adjusting focus of the first lens responsive to a first focus detection signal; and (b) a second imaging stage comprising a second image sensor for generating a second sensor output; a second lens for forming a second image of the scene on the second image sensor; and a second lens focus adjuster for adjusting focus of the second lens responsive to a second focus detection signal. A processing stage either (a) selects the sensor output from the first imaging stage as the captured image signal and uses the sensor output from the second imaging stage to generate the first focus detection signal for the selected imaging stage, or (b) selects the sensor output from the second imaging stage as the captured image signal and uses the sensor output from the first imaging stage to generate the second focus detection signal for the selected imaging stage.

261 citations


Journal ArticleDOI
TL;DR: This paper introduces nonintrusive component forensics as a new methodology for the forensic analysis of visual sensing information, aiming to identify the algorithms and parameters employed inside various processing modules of a digital device by only using the device output data without breaking the device apart.
Abstract: Rapid technology development and the widespread use of visual sensors have led to a number of new problems related to protecting intellectual property rights, handling patent infringements, authenticating acquisition sources, and identifying content manipulations. This paper introduces nonintrusive component forensics as a new methodology for the forensic analysis of visual sensing information, aiming to identify the algorithms and parameters employed inside various processing modules of a digital device by only using the device output data without breaking the device apart. We propose techniques to estimate the algorithms and parameters employed by important camera components, such as color filter array and color interpolation modules. The estimated interpolation coefficients provide useful features to construct an efficient camera identifier to determine the brand and model from which an image was captured. The results obtained from such component analysis are also useful to examine the similarities between the technologies employed by different camera models to identify potential infringement/licensing and to facilitate studies on technology evolution.

247 citations
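As a rough illustration of the coefficient-estimation step, the sketch below fits linear interpolation weights to a demosaicked green channel by least squares. It is a simplification: the paper estimates coefficients per image region and per CFA position, and the Bayer-mask input here is an assumption.

```python
import numpy as np

def estimate_interp_coeffs(green, cfa_mask, radius=1):
    """Least-squares fit of linear CFA interpolation weights.

    green    : demosaicked green channel, 2-D float array
    cfa_mask : True where the sensor actually sampled green
    Each interpolated pixel is modeled as a weighted sum of its sampled
    neighbors; the fitted weights are the kind of features the paper
    feeds to its camera brand/model classifier.
    """
    rows, targets = [], []
    h, w = green.shape
    for y in range(radius, h - radius):
        for x in range(radius, w - radius):
            if cfa_mask[y, x]:
                continue                       # model interpolated pixels only
            win = np.s_[y - radius:y + radius + 1, x - radius:x + radius + 1]
            neigh = np.where(cfa_mask[win], green[win], 0.0).ravel()
            rows.append(neigh)
            targets.append(green[y, x])
    A, b = np.asarray(rows), np.asarray(targets)
    coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
    return coeffs                              # (2*radius+1)**2 weights
```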


Book
19 Sep 2007
TL;DR: In this paper, the authors present a general overview of smart CMOS image sensors, covering fundamentals such as basic pixel structures, sensor peripherals, and sensor characteristics; smart functions and materials, including color, pixel sharing, analog operation, digital processing, materials other than silicon, and structures other than standard CMOS technologies; smart imaging; and applications.
Abstract: Contents: Introduction (a general overview; brief history of CMOS image sensors; brief history of smart CMOS image sensors; organization of the book). Fundamentals of CMOS Image Sensors (fundamentals of photodetection; photodetectors for smart CMOS image sensors; accumulation mode in PD; basic pixel structures; sensor peripherals; basic sensor characteristics; color; pixel sharing; comparison between pixel architectures; comparison with CCDs). Smart Functions and Materials (pixel structure; analog operation; pulse modulation; digital processing; materials other than silicon; structures other than standard CMOS technologies). Smart Imaging (low-light imaging; high speed; wide dynamic range; demodulation; 3D range finder; target tracking; dedicated arrangement of pixels and optics). Applications (information and communication applications; biotechnology applications; medical applications). Appendices: A. Tables of Constants; B. Illuminance; C. Human Eye and CMOS Image Sensors; D. Fundamental Characteristics of the MOS Capacitor; E. Fundamental Characteristics of the MOSFET; F. Optical Format and Resolution. References. Index.

247 citations


Journal ArticleDOI
TL;DR: In this paper, eyeglass regions are detected using an ellipse fitting method and replaced with eye template patterns to preserve the details useful for face recognition in the fused image.
Abstract: This paper describes a new software-based registration and fusion of visible and thermal infrared (IR) image data for face recognition in challenging operating environments that involve illumination variations. The combined use of visible and thermal IR imaging sensors offers a viable means for improving the performance of face recognition techniques based on a single imaging modality. Despite successes in indoor access control applications, imaging in the visible spectrum demonstrates difficulties in recognizing faces under varying illumination conditions. Thermal IR sensors measure the energy radiated from objects, are less sensitive to illumination changes, and are operable even in darkness. However, thermal images do not provide high-resolution data. Data fusion of visible and thermal images can produce face images robust to illumination variations. However, thermal face images with eyeglasses may fail to provide useful information around the eyes, since glass blocks a large portion of thermal energy. In this paper, eyeglass regions are detected using an ellipse fitting method and replaced with eye template patterns to preserve the details useful for face recognition in the fused image. Software registration of images replaces a special-purpose imaging sensor assembly and produces co-registered image pairs at a reasonable cost for large-scale deployment. Face recognition techniques using visible, thermal IR, and data-fused visible-thermal images are compared using commercial face recognition software (FaceIt®) and two visible-thermal face image databases (the NIST/Equinox and the UTK-IRIS databases). The proposed multiscale data-fusion technique improved the recognition accuracy under a wide range of illumination changes. Experimental results showed that the eyeglass replacement increased the number of correct first-match subjects by 85% (NIST/Equinox) and 67% (UTK-IRIS).

211 citations
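A plausible reconstruction of the eyeglass-detection step using OpenCV (thresholds and parameters are guesses; this is not the paper's exact pipeline): glass blocks thermal emission, so lenses appear as the coldest elliptical blobs on the face.

```python
import cv2
import numpy as np

def find_eyeglass_ellipses(thermal, cold_percentile=10, min_area=200):
    """Fit ellipses to the coldest blobs in a thermal face image.

    Threshold the coldest pixels, extract contours, and fit an ellipse
    to each sufficiently large one (cv2.fitEllipse needs >= 5 points).
    Each returned ellipse: ((cx, cy), (major, minor), angle_deg).
    """
    t = np.percentile(thermal, cold_percentile)
    cold = (thermal <= t).astype(np.uint8) * 255
    contours, _ = cv2.findContours(cold, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.fitEllipse(c) for c in contours
            if len(c) >= 5 and cv2.contourArea(c) >= min_area]
```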


Journal ArticleDOI
TL;DR: The assembled system is the first spherical compound eye able to capture images and is evaluated by analyzing resolution and cross-talk between the single channels.
Abstract: A spherical artificial compound eye is fabricated, comprising an imaging microlens array and a pinhole array in the focal plane that serves as the receptor matrix. The arrays are patterned on separate spherical bulk lenses by means of a specially modified laser lithography system capable of generating structures with low shape deviation on curved surfaces. Design considerations for the imaging system are presented, as well as the characterization of the constituent elements on curved surfaces, with special attention to homogeneity over the array. The assembled system is the first spherical compound eye able to capture images. It is evaluated by analyzing resolution and cross-talk between the single channels.

209 citations


Patent
08 Jan 2007
TL;DR: An image sensing system for a vehicle includes an imaging sensor comprising a two-dimensional array of light-sensing photosensor elements. The system includes a logic and control circuit comprising an image processor for processing image data derived from the imaging sensor, as discussed by the authors.
Abstract: An image sensing system for a vehicle includes an imaging sensor comprising a two-dimensional array of light-sensing photosensor elements. The system includes a logic and control circuit comprising an image processor for processing image data derived from the imaging sensor. The logic and control circuit generates at least one control output for controlling at least one accessory of the vehicle. The imaging sensor is disposed at an interior portion of the cabin of the vehicle and preferably has a field of view exterior of the vehicle through a window of the vehicle.

183 citations


Patent
09 Mar 2007
TL;DR: In this paper, an electronic camera for producing an output image of a scene from a captured image signal includes a first imaging stage comprising a first image sensor for generating a first sensor output and a first lens for forming a first image of the scene on the first image sensor.
Abstract: An electronic camera for producing an output image of a scene from a captured image signal includes a first imaging stage comprising a first image sensor for generating a first sensor output and a first lens for forming a first image of the scene on the first image sensor, and a second imaging stage comprising a second image sensor for generating a second sensor output and a second lens for forming a second image of the scene on the second image sensor. The sensor output from the first imaging stage is used as a primary output image for forming the captured image signal and the sensor output from the second imaging stage is used as a secondary output image for modifying the primary output image, thereby generating an enhanced, captured image signal.

171 citations


Journal ArticleDOI
TL;DR: A learning-based chromatic distribution-matching scheme is proposed to determine the image's skin chroma distribution online such that it can tolerate chromatic deviations coming from special lighting without increasing false alarms.

Proceedings ArticleDOI
03 Dec 2007
TL;DR: FireFly Mosaic is presented: a wireless sensor network image processing framework with operating system, networking and image processing primitives that assist in the development of distributed vision-sensing tasks; it is the first wireless sensor networking system to integrate multiple coordinating cameras performing local processing.
Abstract: With the advent of CMOS cameras, it is now possible to make compact, cheap and low-power image sensors capable of on-board image processing. These embedded vision sensors provide a rich new sensing modality enabling new classes of wireless sensor networking applications. In order to build these applications, system designers need to overcome challenges associated with limited bandwidth, limited power, group coordination and fusing of multiple camera views with various other sensory inputs. Real-time properties must be upheld if multiple vision sensors are to process data, communicate with each other and make a group decision before the measured environmental feature changes. In this paper, we present FireFly Mosaic, a wireless sensor network image processing framework with operating system, networking and image processing primitives that assist in the development of distributed vision-sensing tasks. Each FireFly Mosaic wireless camera consists of a FireFly (Rowe et al., 2006) node coupled with a CMUcam3 (Rowe et al., 2007) embedded vision processor. The FireFly nodes run the nano-RK (Eswaran et al., 2005) real-time operating system and communicate using the RT-link (Rowe et al., 2006) collision-free TDMA link protocol. Using FireFly Mosaic, we demonstrate an assisted living application capable of fusing multiple cameras with overlapping views to discover and monitor daily activities in a home. Using this application, we show how an integrated platform with support for time synchronization, a collision-free TDMA link layer, an underlying RTOS and an interface to an embedded vision sensor provides a stable framework for distributed real-time vision processing. To the best of our knowledge, this is the first wireless sensor networking system to integrate multiple coordinating cameras performing local processing.

Patent
01 Mar 2007
TL;DR: In this paper, a method and apparatus for capturing image data from multiple image sensors and generating an output image sequence are disclosed, where the data from different image sensors is processed and interleaved to generate an improved output motion sequence relative to an image motion sequence generated from a single equivalent sensor.
Abstract: A method and apparatus for capturing image data from multiple image sensors and generating an output image sequence are disclosed. The multiple image sensors capture data with one or more different characteristics, such as staggered exposure periods, different-length exposure periods, different frame rates, different spatial resolutions, different lens systems, and different focal lengths. The data from the multiple image sensors is processed and interleaved to generate an improved output motion sequence relative to an output motion sequence generated from a single equivalent image sensor.
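A minimal sketch of one of the described configurations, staggered exposure periods interleaved to double the output frame rate (names are illustrative; the patent covers many other sensor-characteristic combinations):

```python
def interleave_streams(frames_a, frames_b):
    """Interleave two time-ordered frame streams whose exposures are
    staggered by half a frame period, yielding a sequence at twice
    either sensor's native rate. A real system would also equalize
    color and exposure between the two sensors."""
    out = []
    for a, b in zip(frames_a, frames_b):
        out.append(a)    # sensor A covers the first half-period
        out.append(b)    # sensor B covers the offset half-period
    return out
```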

Proceedings ArticleDOI
02 Jul 2007
TL;DR: An introduction to the major processing stages inside a digital camera is provided and several methods for source digital camera identification and forgery detection are reviewed.
Abstract: There are two main interests in digital camera image forensics, namely source identification and forgery detection. In this paper, we first briefly provide an introduction to the major processing stages inside a digital camera and then review several methods for source digital camera identification and forgery detection. Existing methods for source identification explore the various processing stages inside a digital camera to derive the clues for distinguishing the source cameras while forgery detection checks for inconsistencies in image quality or for presence of certain characteristics as evidence of tampering.

Journal ArticleDOI
TL;DR: Aplanatic telescopes with two aspheric mirrors, configured to correct spherical and coma aberrations, are considered for application in γ-ray astronomy utilizing the ground-based atmospheric Cherenkov technique as discussed by the authors.

Journal ArticleDOI
TL;DR: In this paper, a new type of CMOS time-of-flight (TOF) range image sensor using single-layer gates on field oxide structure for photo conversion and charge transfer is presented.
Abstract: This paper presents a new type of CMOS time-of-flight (TOF) range image sensor using single-layer gates on a field oxide structure for photo conversion and charge transfer. This simple structure allows the realization of a dense TOF range imaging array with 15 × 15 μm² pixels in a standard CMOS process. Only an additional process step to create an n-type buried layer, which is necessary for high-speed charge transfer, is added to the fabrication process. The sensor operates based on time-delay-dependent modulation of photocharge induced by infrared light pulses from an active illumination source reflected back by the scene. To reduce the influence of background light, a small-duty-cycle light pulse is used and charge-draining structures are included in the pixel. The fabricated TOF sensor chip achieves a range resolution of 2.35 cm at 30 frames per second, improving to 0.74 cm at three frames per second with a pulse width of 100 ns.

Journal ArticleDOI
TL;DR: A 3D scene reconstruction in a depth of 650 to 1550 m from only three images with an accuracy of <30 m is demonstrated, which is 10 times better than estimated from the classical resolution limit obtained for depth scanning active imaging with a similar number of images.
Abstract: We present a technique to overcome the depth resolution limitation of 3D active imaging. Applying microsecond laser pulses and sensor gate widths, a scene several hundred meters deep is illuminated and recorded in a single image. The trapezoid-shaped range-intensity profile is analyzed to obtain both the reflectivity and the depth of the scene. We demonstrate a 3D scene reconstruction at a depth of 650 to 1550 m from only three images with an accuracy of <30 m. This depth accuracy is 10 times better than estimated from the classical resolution limit for depth-scanning active imaging with a similar number of images. Therefore, this technique enables superresolution depth mapping with a reduction of image data processing.
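A toy version of the range-from-intensity-ratio idea, assuming one image gated onto the rising ramp of the trapezoidal profile and one onto its flat top (the paper's three-image scheme additionally disambiguates ramp segments):

```python
def gated_depth(i_ramp, i_flat, gate_start_m, ramp_length_m):
    """Per-pixel depth from two range-gated images (NumPy arrays).

    i_ramp : gate timed so targets fall on the rising ramp of the
             trapezoidal range-intensity profile
    i_flat : gate timed so the same targets fall on the flat top,
             dividing out reflectivity and illumination falloff
    On the ramp, intensity rises linearly with range, so the ratio
    maps linearly to depth; this assumes the rising ramp only.
    """
    ratio = (i_ramp / (i_flat + 1e-12)).clip(0.0, 1.0)
    return gate_start_m + ratio * ramp_length_m
```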

Patent
09 Mar 2007
TL;DR: In this article, an electronic camera for producing an output image of a scene from a captured image signal includes a first imaging stage comprising a first image sensor for generating a first sensor output and a first lens for forming a first image of the scene on the first image sensor, and a second imaging stage comprising a second image sensor and a second lens, where the lenses have different focal lengths.
Abstract: An electronic camera for producing an output image of a scene from a captured image signal includes a first imaging stage comprising a first image sensor for generating a first sensor output and a first lens for forming a first image of the scene on the first image sensor, and a second imaging stage comprising a second image sensor for generating a second sensor output and a second lens for forming a second image of the scene on the second image sensor, where the lenses have different focal lengths. A processing stage uses the sensor output from one of the imaging stages as the captured image signal and uses the images from both imaging stages to generate a range map identifying distances to the different portions of the scene.
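The patent does not disclose its range-map algorithm; the sketch below shows the textbook two-view relation such a map would typically rest on, assuming rectified views and a known baseline between the two stages:

```python
import numpy as np

def range_map(disparity_px, focal_px, baseline_m):
    """Textbook two-view triangulation, z = f * B / d, applied per
    pixel. disparity_px is the per-pixel shift between the two
    stages' images; focal_px is the focal length in pixels."""
    d = np.maximum(disparity_px, 1e-6)   # guard against zero disparity
    return focal_px * baseline_m / d
```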

Patent
Giora Yahav1
15 Nov 2007
TL;DR: In this article, a dual-mode depth imaging system and method are provided, the system comprising first and second image sensors and a processor able to switch between a first mode of depth imaging and a second mode according to at least one predefined threshold.
Abstract: A dual-mode depth imaging system and method are provided, the system comprising first and second image sensors and a processor able to switch between a first mode of depth imaging and a second mode of depth imaging according to at least one predefined threshold. The method comprises providing depth sensing by time of flight if the distance of the sensed object from the camera is not below a first threshold and/or if a depth resolution above a second threshold is not required, and providing depth sensing by triangulation if the distance of the sensed object from the camera is below the first threshold and/or if a depth resolution above the second threshold is required.
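A minimal sketch of the claimed mode-selection rule (threshold values are placeholders, not taken from the patent):

```python
def choose_depth_mode(distance_m, required_resolution_m,
                      near_threshold_m=1.0, fine_resolution_m=0.01):
    """Mode selection per the claim: triangulate for near objects or
    when depth resolution finer than a threshold is required; use
    time of flight otherwise."""
    needs_fine = required_resolution_m < fine_resolution_m
    if distance_m < near_threshold_m or needs_fine:
        return "triangulation"
    return "time_of_flight"
```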

Journal ArticleDOI
TL;DR: In this article, the authors investigated the impact of spectral differences between satellite sensors when attempting cross-calibration based on near-simultaneous imaging of common ground targets in analogous spectral bands.

Patent
10 Jul 2007
TL;DR: In this paper, a first sensor detects a variation in inclination of an image pickup device to generate first sensing data, a second sensor detects position movement of an image sensor in the image pickup device to generate second sensing data, and a driving unit is coupled to the image sensor.
Abstract: Image pickup systems capable of preventing blurred images are provided, in which a first sensor detects a variation in inclination of an image pickup device to generate first sensing data, a second sensor detects a position movement of an image sensor in the image pickup device to generate second sensing data, and a driving unit is coupled to the image sensor. A processing module receives the first and second sensing data, integrates the first sensing data, combines the integrated first sensing data with the second sensing data to obtain control information, and enables the driving unit to adjust the position of the image sensor according to the control information.
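A toy single-axis version of the described data path: integrate the inclination-rate sample into an angle, project it to an image-plane shift, and subtract the sensor's measured offset (the interfaces and the small-angle model are assumptions):

```python
import math

def stabilization_step(angle_deg, gyro_rate_dps, dt_s,
                       focal_mm, sensor_offset_mm):
    """One update of a toy single-axis sensor-shift stabilizer.

    Integrates the inclination-rate sample into an accumulated angle,
    projects it to an image-plane shift (small-angle: shift ~ f * angle),
    and subtracts the sensor's already-applied offset to get the
    residual move the driving unit should perform.
    """
    angle_deg += gyro_rate_dps * dt_s                # integrate rate -> angle
    blur_shift_mm = focal_mm * math.radians(angle_deg)
    command_mm = blur_shift_mm - sensor_offset_mm    # residual correction
    return angle_deg, command_mm
```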

Patent
27 Sep 2007
TL;DR: In this paper, a camera solution includes an image sensor and an image processing and control system, with at least two different operating modes, with one of the modes having a higher dynamic range.
Abstract: A camera solution includes an image sensor and an image processing and control system. At least two different operating modes are supported, with one of the modes having a higher dynamic range. Control of the dynamic range is provided at the system level. The system supports statically or dynamically selecting an operating mode that determines the dynamic range of a camera. In one implementation, the system supports the use of either a conventional image sensor that does not natively support a high dynamic range or a dual-mode image sensor.

Patent
24 Sep 2007
TL;DR: In this article, a method for using a capture device to capture at least two video signals corresponding to a scene includes: providing a two-dimensional image sensor having a plurality of pixels; reading a first group of pixels from the image sensor at a first frame rate to produce a first video signal of the scene; reading a second group of pixels from the image sensor at a second frame rate to produce a second video signal; and using at least one of the video signals to adjust one or more of the capture device parameters.
Abstract: A method for using a capture device to capture at least two video signals corresponding to a scene includes: providing a two-dimensional image sensor having a plurality of pixels; reading a first group of pixels from the image sensor at a first frame rate to produce a first video signal of the scene; reading a second group of pixels from the image sensor at a second frame rate to produce a second video signal; and using at least one of the video signals to adjust one or more of the capture device parameters.

Journal ArticleDOI
TL;DR: The results demonstrate that the proposed image fusion method using the support value transform is effective and is superior to conventional image fusion methods in terms of pertinent quantitative fusion evaluation indexes, such as the quality of visual information (Q^AB/F), the mutual information, etc.
Abstract: With the development of numerous imaging sensors, many images can be simultaneously pictured by various sensors. However, there are many scenarios where no one sensor can give the complete picture. Image fusion is an important approach to solve this problem; it produces a single image which preserves all relevant information from a set of different sensors. In this paper, we propose a new image fusion method using the support value transform, which uses support values to represent the salient features of an image. This is based on the fact that, in support vector machines (SVMs), data with larger support values have a physical meaning in the sense that they reveal the relatively greater importance of those data points for contributing to the SVM model. The mapped least squares SVM (mapped LS-SVM) is used to efficiently compute the support values of an image. The support value analysis is developed using a series of multiscale support value filters, which are obtained by filling zeros into the basic support value filter deduced from the mapped LS-SVM to match the resolution of the desired level. Compared with widely used image fusion methods, such as the Laplacian pyramid and discrete wavelet transform methods, the proposed method is an undecimated transform-based approach. The fusion experiments are undertaken on multisource images. The results demonstrate that the proposed approach is effective and is superior to conventional image fusion methods in terms of pertinent quantitative fusion evaluation indexes, such as the quality of visual information (Q^AB/F), the mutual information, etc.
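A simplified analogue of the method using a generic undecimated ("à trous") transform in place of the mapped LS-SVM support value filters; it shows the zero-filled multiscale filters and a salience-based fusion rule, but the B3-spline kernel here is a stand-in, not the paper's basic support value filter:

```python
import numpy as np
from scipy.ndimage import convolve

def atrous_details(img, levels=3):
    """Undecimated decomposition: each level's smoothing kernel is
    dilated by inserting zeros, mirroring how the paper builds
    multiscale filters by zero-filling its basic filter."""
    k = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0
    details, approx = [], img.astype(float)
    for j in range(levels):
        kj = np.zeros((len(k) - 1) * 2**j + 1)
        kj[::2**j] = k                               # zero-filled kernel
        smooth = convolve(convolve(approx, kj[None, :]), kj[:, None])
        details.append(approx - smooth)              # salient detail layer
        approx = smooth
    return details, approx

def fuse(img_a, img_b, levels=3):
    """Fuse two co-registered images: at each scale keep the detail
    with larger magnitude (higher salience), average the coarse
    residuals, and sum everything to invert the transform."""
    da, ra = atrous_details(img_a, levels)
    db, rb = atrous_details(img_b, levels)
    detail = sum(np.where(np.abs(a) >= np.abs(b), a, b)
                 for a, b in zip(da, db))
    return detail + 0.5 * (ra + rb)
```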

Patent
03 Oct 2007
TL;DR: In this article, the authors present a method, apparatus and software product for enhancing a dynamic range of an image with a multi-exposure pixel pattern taken by an image sensor of a camera for one or more color channels, wherein a plurality of groups of pixels of the image sensor have different exposure times.
Abstract: The specification and drawings present a new method, apparatus and software product for enhancing the dynamic range of an image with a multi-exposure pixel pattern taken by an image sensor of a camera for one or more color channels, wherein a plurality of groups of pixels of the image sensor have different exposure times (e.g., pre-selected, adjusted by a user through a user interface using viewfinder feedback, or adjusted by a user through a user interface after taking and storing a RAW image). Processing of the captured image to construct an enhanced image for each of the one or more color channels can be performed using a weighted combination of the exposure times of pixels having different pre-selected exposure times according to a predetermined criterion.
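A hedged sketch of the weighted combination for one color channel (the patent does not disclose its weighting criterion; weighting by exposure time and masking clipped pixels is one common choice):

```python
import numpy as np

def fuse_exposures(pixels, exposure_times_s, saturation=0.95):
    """Weighted radiance estimate for one color channel from pixel
    groups captured with different exposure times.

    pixels           : list of same-size arrays, normalized to [0, 1]
    exposure_times_s : matching exposure times
    """
    num = np.zeros_like(pixels[0], dtype=float)
    den = np.zeros_like(pixels[0], dtype=float)
    for p, t in zip(pixels, exposure_times_s):
        w = np.where(p < saturation, t, 0.0)   # drop saturated samples
        num += w * (p / t)                     # scale to common radiance
        den += w
    return num / np.maximum(den, 1e-12)
```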

Patent
09 Mar 2007
TL;DR: In this paper, a processor enables capture and display of the separate images, and further responds to an operator selection of one of the imaging stages as a primary capture unit which is to be primarily used for capturing an image of the scene that is stored by the digital camera.
Abstract: An electronic camera includes first and second imaging stages for capturing separate images of a scene, one of the stages being designated as a default imaging stage. A processor enables capture and display of the separate images, and further responds to an operator selection of one of the imaging stages as a primary capture unit which is to be primarily used for capturing an image of the scene that is stored by the digital camera. If the operator selection does not occur within a predetermined time period, or if the camera is actuated before the time has run out, the processor automatically selects the default imaging stage as the primary capture unit.

Patent
11 May 2007
TL;DR: In this article, the first pixel cell array and the first photographic lens may be configured to cooperate to capture a first image of a scene, and the second pixel cells array and second photographic lenses may be configurable to cooperate for capturing a second image of the scene.
Abstract: Present embodiments relate to techniques for capturing images. One embodiment may include an image sensor, comprising a substrate, a first pixel cell array disposed on the substrate, a first photographic lens arranged to focus light onto the first pixel cell array, a second pixel cell array disposed on the substrate, a second photographic lens arranged to focus light onto the second pixel cell array, and an image coordination circuit configured to coordinate the first array and lens with the second array and lens to provide an image. The first pixel cell array and the first photographic lens may be configured to cooperate to capture a first image of a scene, and the second pixel cell array and the second photographic lens may be configured to cooperate to capture a second image of the scene.

Journal ArticleDOI
TL;DR: High-resolution digital holography and pattern projection techniques such as coded light or fringe projection for real-time extraction of 3D object positions and color information could manifest themselves as an alternative to traditional camera-based methods.
Abstract: Advances in image sensors and the evolution of digital computation are a strong stimulus for the development and implementation of sophisticated methods for capturing, processing and analyzing 3D data from dynamic scenes. Research on perspective time-varying 3D scene capture technologies is important for the upcoming 3DTV displays. Methods such as shape-from-texture, shape-from-shading, shape-from-focus, and shape-from-motion extraction can restore 3D shape information from single-camera data. The existing techniques for 3D extraction from single-camera video sequences are especially useful for conversion of the already available vast mono-view content to 3DTV systems. Scene-oriented single-camera methods such as human face reconstruction and facial motion analysis, body modeling and body motion tracking, and motion recognition solve a variety of tasks efficiently. 3D multicamera dynamic acquisition and reconstruction, their hardware specifics including calibration and synchronization, and their software demands form another area of intensive research. Different classes of multiview stereo algorithms, such as those based on cost function computing and optimization, fusing of multiple views, and feature-point reconstruction, are possible candidates for dynamic 3D reconstruction. High-resolution digital holography and pattern projection techniques such as coded light or fringe projection for real-time extraction of 3D object positions and color information could manifest themselves as an alternative to traditional camera-based methods. Apart from all of these approaches, there also are some active imaging devices capable of 3D extraction, such as the 3D time-of-flight camera, which provides 3D image data of its environment by means of a modulated infrared light source.

Journal ArticleDOI
TL;DR: Simulation results show that the image quality degrades as objects move away from the sensor surface, and the spatial resolution of contact imaging depends on the sensor size as well as the distance between objects and the sensor surface.
Abstract: We report simulated and experimental image quality for contact imaging, a method for imaging objects close to the sensor surface without intervening optics. This technique preserves microscale resolution for applications that cannot tolerate the size or weight of conventional optical elements. In order to assess image quality, we investigated the spatial resolution of contact imaging, which depends on the sensor size as well as the distance between objects and the sensor surface. We studied how this distance affects image quality using a commercial optical simulator. Simulation results show that the image quality degrades as objects move away from the sensor surface. To experimentally validate these results, an image sensor was designed and fabricated in a commercially available three-metal, two-poly, 0.5 μm CMOS technology. Experiments with the contact imager corroborate the simulation results. Two specific applications of contact imaging are demonstrated.

Journal ArticleDOI
TL;DR: This paper introduces a novel approach for solving the problem of camera calibration from spheres by exploiting the relationship between the dual images of spheres and the dual image of the absolute conic (IAC), which provides two constraints for estimating the IAC.
Abstract: This paper introduces a novel approach for solving the problem of camera calibration from spheres. By exploiting the relationship between the dual images of spheres and the dual image of the absolute conic (IAC), it is shown that the common pole and polar with regard to the conic images of two spheres are also the pole and polar with regard to the IAC. This provides two constraints for estimating the IAC and, hence, allows a camera to be calibrated from an image of at least three spheres. Experimental results show the feasibility of the proposed approach.
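The abstract's argument compresses to a single relation; a restatement in LaTeX, with notation assumed rather than taken from the paper:

```latex
% Notation (assumed): C_1, C_2 are the conic images of two spheres;
% x, l their common pole and polar; \omega the image of the absolute
% conic (IAC); K the camera calibration matrix.
\[
  l \,\sim\, C_1 x \,\sim\, C_2 x
  \qquad\Longrightarrow\qquad
  l \,\sim\, \omega x ,
  \qquad
  \omega = K^{-\top} K^{-1} .
\]
% The relations hold only up to scale, so each sphere pair yields two
% linear constraints on \omega; three spheres give three pairs, hence
% six constraints, enough to estimate \omega (five degrees of freedom)
% and recover K from \omega^{-1} = K K^{\top} by Cholesky factorization.
```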