
Showing papers on "Image sensor published in 2010"


Journal ArticleDOI
TL;DR: A comprehensive optimization method to arrive at the spatial and spectral layout of the color filter array of a GAP camera is presented and a novel algorithm for reconstructing the under-sampled channels of the image while minimizing aliasing artifacts is developed.
Abstract: We propose the concept of a generalized assorted pixel (GAP) camera, which enables the user to capture a single image of a scene and, after the fact, control the tradeoff between spatial resolution, dynamic range, and spectral detail. The GAP camera uses a complex array (or mosaic) of color filters. A major problem with using such an array is that the captured image is severely under-sampled for at least some of the filter types. This leads to reconstructed images with strong aliasing. We make four contributions in this paper: 1) we present a comprehensive optimization method to arrive at the spatial and spectral layout of the color filter array of a GAP camera; 2) we develop a novel algorithm for reconstructing the under-sampled channels of the image while minimizing aliasing artifacts; 3) we demonstrate how the user can capture a single image and then control the tradeoff of spatial resolution to generate a variety of images, including monochrome, high dynamic range (HDR) monochrome, RGB, HDR RGB, and multispectral images; 4) finally, the performance of our GAP camera has been verified using extensive simulations that use multispectral images of real world scenes. A large database of these multispectral images has been made available at http://www1.cs.columbia.edu/CAVE/projects/gap_camera/ for use by the research community.

833 citations
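
As a simple illustration of the under-sampling problem described above, the sketch below fills in one sparsely sampled channel by normalized convolution. This is only a generic interpolation baseline under an assumed 4-pixel sampling lattice and box kernel, not the paper's optimized anti-aliasing reconstruction.

```python
# Minimal sketch: reconstruct one under-sampled CFA channel with
# normalized convolution. Mask layout and kernel size are assumptions.
import numpy as np
from scipy.ndimage import convolve

def reconstruct_channel(raw, mask, kernel_size=5):
    """raw: full mosaic image; mask: 1 where this filter type samples."""
    k = np.ones((kernel_size, kernel_size))
    num = convolve(raw * mask, k, mode="mirror")          # sum of known samples
    den = convolve(mask.astype(float), k, mode="mirror")  # count of known samples
    return num / np.maximum(den, 1e-12)

# Example: a channel sampled on a sparse 4x4 lattice
raw = np.random.rand(64, 64)
mask = np.zeros((64, 64)); mask[::4, ::4] = 1
channel = reconstruct_channel(raw, mask)
```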


Patent
11 Aug 2010
TL;DR: In this article, an optical reading device is described having an image sensor with a sensor array of pixels which are exposed to an image; a printed circuit board (PCB) for carrying the image sensor; a lens assembly for focusing light on the sensor array; a lens retainer; a support assembly integral with the lens retainer, the support assembly having a containment section for containing the image sensor; and a thermally and electrically conductive elastomeric gasket disposed between the containment section and the image sensor.
Abstract: An optical reading device is described having an image sensor having a sensor array of pixels which are exposed to an image; a printed circuit board (PCB) for carrying the image sensor; a lens assembly for focusing light on the sensor array; a lens retainer for retaining the lens; a support assembly integral with the lens retainer, the support assembly having a containment section for containing the image sensor; and a thermally and electrically conductive elastomeric gasket disposed between the containment section and the image sensor and for contacting the image sensor.

344 citations


Journal ArticleDOI
TL;DR: This work proposes a novel approach for attenuating the influence of details from scenes on SPNs so as to improve the device identification rate of the identifier.
Abstract: Sensor pattern noises (SPNs), extracted from digital images to serve as the fingerprints of imaging devices, have been proven an effective means of digital device identification. However, as we demonstrate in this work, the limitation of the current method of extracting SPNs is that the SPNs extracted from images can be severely contaminated by details from scenes, and as a result, the identification rate is unsatisfactory unless images of a large size are used. In this work, we propose a novel approach for attenuating the influence of details from scenes on SPNs so as to improve the device identification rate of the identifier. The hypothesis underlying our SPN enhancement method is that the stronger a signal component in an SPN is, the less trustworthy the component should be, and thus the more it should be attenuated. This hypothesis suggests that an enhanced SPN can be obtained by assigning weighting factors inversely proportional to the magnitude of the SPN components.

344 citations
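
The attenuation idea can be made concrete with a small sketch: suppress strong components of the noise residual, since they are more likely scene detail than true sensor noise. The exponential weighting and the parameter alpha below are illustrative choices; the paper evaluates several such attenuation models.

```python
# Sketch of SPN enhancement: weight components inversely with their
# magnitude so scene-driven (strong) components are attenuated.
import numpy as np

def enhance_spn(spn, alpha=7.0):
    """spn: noise residual (image minus its denoised version)."""
    # weight -> 1 for weak components, -> 0 for strong ones
    weight = np.exp(-0.5 * spn**2 / alpha**2)
    return weight * spn

residual = np.random.randn(512, 512) * 3  # stand-in for a real residual
enhanced = enhance_spn(residual)
```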


Patent
26 Mar 2010
TL;DR: In this article, the authors describe a hand-held device having a two-dimensional image sensor, with imaging optics for focusing light reflected from a target onto the two-dimensional imager.
Abstract: There is described a device having a two-dimensional imager; the device can be a hand-held device. Imaging optics can be provided for focusing light reflected from a target onto the two-dimensional imager. An image including imaging data can be obtained utilizing the hand-held device.

340 citations


Journal ArticleDOI
TL;DR: The polarization imaging sensor has a signal-to-noise ratio of 45 dB and captures intensity, angle and degree of linear polarization in the visible spectrum at 40 frames per second with 300 mW of power consumption.
Abstract: We report an imaging sensor capable of recording the optical properties of partially polarized light by monolithically integrating aluminum nanowire optical filters with a CCD imaging array. The imaging sensor, composed of 1000 by 1000 imaging elements with 7.4 μm pixel pitch, is covered with an array of pixel-pitch matched nanowire optical filters with four different orientations offset by 45°. The polarization imaging sensor has a signal-to-noise ratio of 45 dB and captures intensity, angle and degree of linear polarization in the visible spectrum at 40 frames per second with 300 mW of power consumption.

338 citations
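
The quantities the sensor reports follow from the linear Stokes parameters, which a division-of-focal-plane sensor can estimate from the four polarizer orientations in each 2x2 super-pixel. The grouping below (no demosaicing) and the variable names are assumptions; the Stokes relations themselves are standard.

```python
# Recover intensity, degree, and angle of linear polarization from
# pixel responses behind 0/45/90/135-degree nanowire filters.
import numpy as np

def linear_stokes(i0, i45, i90, i135):
    s0 = 0.5 * (i0 + i45 + i90 + i135)     # total intensity
    s1 = i0 - i90                          # 0/90 difference
    s2 = i45 - i135                        # 45/135 difference
    dolp = np.sqrt(s1**2 + s2**2) / np.maximum(s0, 1e-12)  # degree
    aolp = 0.5 * np.arctan2(s2, s1)        # angle, radians
    return s0, dolp, aolp
```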


Patent
08 Jan 2010
TL;DR: In this paper, a terminal having an image sensor array and a plurality of operator selectable operating modes is set forth, where the image sensor array can have an associated light source bank.
Abstract: There is set forth herein a terminal having an image sensor array and a plurality of operator selectable operating modes. The image sensor array can have an associated light source bank. The operator selectable operating modes can include at least one camera operating mode and at least one flashlight operating mode. In the at least one camera operating mode the image sensor array and light source bank can be controlled for optimization of frame capture. In the at least one flashlight operating mode the image sensor array and the light source bank can be controlled for optimizing illumination of an operator's viewing area with reduced average power consumption.

332 citations


Patent
19 Oct 2010
TL;DR: An optical imager includes an image sensor for capturing images of targets and outputting image signals; a lens for focusing the target on the image sensor as a function of lens position; a memory for storing predetermined lens positions determined from predetermined target sizes; and a controller for determining current target size based on captured images and positioning the lens at a predetermined lens position by correlating current target size with the predetermined target sizes.
Abstract: An optical imager includes: an image sensor for capturing images of targets and outputting image signals; a lens for focusing the target on the image sensor as a function of lens position; a memory for storing predetermined lens positions determined from predetermined target sizes; and a controller for determining current target size based on captured images and positioning the lens at a predetermined lens position by correlating current target size with predetermined target sizes.

321 citations
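
The controller's correlation step is essentially a table lookup. A minimal sketch follows; the table values and units are placeholders, not taken from the patent.

```python
# Map a measured target size to a stored lens position (assumed table).
import numpy as np

target_sizes = np.array([20, 50, 100, 200, 400])      # pixels (assumed)
lens_positions = np.array([880, 640, 410, 230, 90])   # actuator steps (assumed)

def lens_position_for(size_px):
    # interpolate between the predetermined (size, position) pairs
    return float(np.interp(size_px, target_sizes, lens_positions))
```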


Proceedings ArticleDOI
13 Jun 2010
TL;DR: This paper shows the surprising result that 3D scans of reasonable quality can be obtained even with a sensor of such low data quality, using a new combination of a 3D superresolution method with a probabilistic scan alignment approach that explicitly takes into account the sensor's noise characteristics.
Abstract: We describe a method for 3D object scanning by aligning depth scans that were taken from around an object with a time-of-flight camera. These ToF cameras can measure depth scans at video rate. Due to their comparably simple technology, they bear potential for low-cost production in large volumes. Our easy-to-use, cost-effective scanning solution based on such a sensor could make 3D scanning technology more accessible to everyday users. The algorithmic challenge we face is that the sensor's level of random noise is substantial and there is a non-trivial systematic bias. In this paper we show the surprising result that 3D scans of reasonable quality can nevertheless be obtained with a sensor of such low data quality. Established filtering and scan alignment techniques from the literature fail to achieve this goal. In contrast, our algorithm is based on a new combination of a 3D superresolution method with a probabilistic scan alignment approach that explicitly takes into account the sensor's noise characteristics.

308 citations
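
The intuition behind superresolving noisy ToF data can be shown with a heavily simplified sketch: averaging N registered depth maps reduces random noise roughly as 1/sqrt(N). The paper's actual method is a probabilistic superresolution and alignment scheme that also models systematic bias; none of that is reproduced here, and the invalid-pixel convention is an assumption.

```python
# Average aligned depth maps to suppress random sensor noise.
import numpy as np

def fuse_depth(scans):
    """scans: list of HxW depth maps already aligned to one view."""
    stack = np.stack(scans)
    valid = stack > 0                      # 0 = no return (assumed code)
    count = valid.sum(axis=0)
    summed = (stack * valid).sum(axis=0)
    return np.where(count > 0, summed / np.maximum(count, 1), 0.0)
```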


Journal ArticleDOI
TL;DR: This work analyzes the focused plenoptic camera in optical phase space and presents basic, blended, and depth-based rendering algorithms for producing high-quality, high-resolution images in real time.
Abstract: Plenoptic cameras, constructed with internal microlens arrays, capture both spatial and angular information, i.e., the full 4-D radiance, of a scene. The design of traditional plenoptic cameras assumes that each microlens image is completely defocused with respect to the image created by the main camera lens. As a result, only a single pixel in the final image is rendered from each microlens image, resulting in disappointingly low resolution. A recently developed alternative approach based on the focused plenoptic camera uses the microlens array as an imaging system focused on the image plane of the main camera lens. The flexible spatioangular tradeoff that becomes available with this design enables rendering of final images with significantly higher resolution than those from traditional plenoptic cameras. We analyze the focused plenoptic camera in optical phase space and present basic, blended, and depth-based rendering algorithms for producing high-quality, high-resolution images. We also present our graphics-processing-unit-based implementations of these algorithms, which are able to render full screen refocused images in real time. © 2010 SPIE and IS&T.

274 citations
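
The "basic" rendering algorithm mentioned above amounts to taking a patch from the center of every microlens image and tiling the patches; the patch size selects the plane of focus. The sketch below follows that idea for a grayscale raw image; the microimage size and patch size are assumptions.

```python
# Basic focused-plenoptic rendering: tile central MxM patches from
# each microlens image into the output.
import numpy as np

def render_basic(lightfield, mu=16, patch=8):
    """lightfield: (H, W) raw sensor image made of mu x mu microimages."""
    H, W = lightfield.shape
    ny, nx = H // mu, W // mu
    off = (mu - patch) // 2                 # center the patch in the microimage
    out = np.empty((ny * patch, nx * patch))
    for j in range(ny):
        for i in range(nx):
            tile = lightfield[j*mu+off : j*mu+off+patch,
                              i*mu+off : i*mu+off+patch]
            out[j*patch:(j+1)*patch, i*patch:(i+1)*patch] = tile
    return out
```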


Proceedings ArticleDOI
03 Aug 2010
TL;DR: This paper reviews the rationale and history of this event-based approach, introduces sensor functionalities, and gives an overview of the papers in this session.
Abstract: The four chips [1–4] presented in the special session on "Activity-driven, event-based vision sensors" quickly output compressed digital data in the form of events. These sensors reduce redundancy and latency and increase dynamic range compared with conventional imagers. The digital sensor output is easily interfaced to conventional digital post processing, where it reduces the latency and cost of post processing compared to imagers. The asynchronous data could spawn a new area of DSP that breaks from conventional Nyquist rate signal processing. This paper reviews the rationale and history of this event-based approach, introduces sensor functionalities, and gives an overview of the papers in this session. The paper concludes with a brief discussion on open questions.

237 citations


Journal ArticleDOI
TL;DR: In this paper, the authors developed a new analytical pulse pileup model for both peak and tail pileup effects for nonparalyzable detectors, which takes into account the bipolar shape of the pulse, the distribution function of time intervals between random events, and the input probability density function of photon energies.
Abstract: Purpose: Recently, novel CdTe photon counting x-ray detectors (PCXDs) with energy discrimination capabilities have been developed. When such detectors are operated under a high x-ray flux, however, coincident pulses distort the recorded energy spectrum. These distortions are called pulse pileup effects. It is essential to compensate for these effects on the recorded energy spectrum in order to take full advantage of the spectral information PCXDs provide. Such compensation can be achieved by incorporating a pileup model into the image reconstruction process for computed tomography, that is, as a part of the forward imaging process, and iteratively estimating either the imaged object or the line integrals using, e.g., a maximum likelihood approach. The aim of this study was to develop a new analytical pulse pileup model for both peak and tail pileup effects for nonparalyzable detectors. Methods: The model takes into account the following factors: the bipolar shape of the pulse, the distribution function of time intervals between random events, and the input probability density function of photon energies. The authors used Monte Carlo simulations to evaluate the model. Results: The recorded spectra estimated by the model were in excellent agreement with those obtained by Monte Carlo simulations for various levels of pulse pileup effects. The coefficients of variation (i.e., the root mean square difference divided by the mean of measurements) were 5.3%–10.0% for deadtime losses of 1%–50% with a polychromatic incident x-ray spectrum. Conclusions: The proposed pulse pileup model can predict the recorded spectrum with relatively good accuracy.
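
The deadtime losses quoted above follow from the standard nonparalyzable dead-time relation, which underlies such pileup models: for true rate n and dead time tau, the recorded rate is m = n / (1 + n*tau). A worked example with illustrative numbers:

```python
# Nonparalyzable dead-time loss, back-of-envelope.
def recorded_rate(n, tau):
    return n / (1.0 + n * tau)

n = 5e6             # true count rate [counts/s] (illustrative)
tau = 100e-9        # detector dead time [s] (illustrative)
m = recorded_rate(n, tau)
loss = 1.0 - m / n  # fractional dead-time loss
print(f"recorded {m:.3e} cps, loss {loss:.1%}")  # ~33% loss
```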

Proceedings ArticleDOI
13 Jun 2010
TL;DR: This work proposes a weighting function that produces statistically optimal estimates under the assumption of compound-Gaussian noise, based on a calibrated camera model that accounts for all noise sources and allows us to simultaneously estimate the irradiance and its uncertainty.
Abstract: Given a multi-exposure sequence of a scene, our aim is to recover the absolute irradiance falling onto a linear camera sensor. The established approach is to perform a weighted average of the scaled input exposures. However, there is no clear consensus on the appropriate weighting to use. We propose a weighting function that produces statistically optimal estimates under the assumption of compound-Gaussian noise. Our weighting is based on a calibrated camera model that accounts for all noise sources. This model also allows us to simultaneously estimate the irradiance and its uncertainty. We evaluate our method on simulated and real world photographs, and show that we consistently improve the signal-to-noise ratio over previous approaches. Finally, we show the effectiveness of our model for optimal exposure sequence selection and HDR image denoising.
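
For independent per-pixel noise, the minimum-variance combination of scaled exposures weights each sample by the inverse of its scaled variance. The sketch below uses a simple shot-plus-read noise model, which is an assumption standing in for the paper's fully calibrated camera model.

```python
# Inverse-variance fusion of a multi-exposure stack from a linear sensor.
import numpy as np

def fuse_exposures(images, times, gain=1.0, read_var=25.0):
    """images: list of raw exposures y_i ~ t_i * X + noise; times: t_i."""
    num = np.zeros_like(images[0], dtype=float)
    den = np.zeros_like(images[0], dtype=float)
    for y, t in zip(images, times):
        var = gain * y + read_var          # per-pixel noise variance (assumed model)
        w = t**2 / np.maximum(var, 1e-9)   # inverse variance of y/t
        num += w * (y / t)                 # scaled irradiance sample
        den += w
    return num / den                       # irradiance; its variance is 1/den
```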

Patent
Alexander Shpunt, Gerard Medioni, Daniel Cohen, Erez Sali, Ronen Deitch
28 Jul 2010
TL;DR: In this article, a first image of the pattern on the object is captured using a first image sensor, and this image is processed to generate pattern-based depth data with respect to the object.
Abstract: A method for depth mapping includes projecting a pattern of optical radiation onto an object. A first image of the pattern on the object is captured using a first image sensor, and this image is processed to generate pattern-based depth data with respect to the object. A second image of the object is captured using a second image sensor, and the second image is processed together with another image to generate stereoscopic depth data with respect to the object. The pattern-based depth data is combined with the stereoscopic depth data to create a depth map of the object.

Proceedings ArticleDOI
TL;DR: In this paper, MATLAB code for synthetic aperture radar (SAR) image reconstruction using the matched filter and backprojection algorithms is provided, and a manipulation of the backprojection imaging equations shows how common MATLAB functions, ifft and interp1, may be used for straightforward SAR image formation.
Abstract: While many synthetic aperture radar (SAR) image formation techniques exist, two of the most intuitive methods for implementation by SAR novices are the matched filter and backprojection algorithms. The matched filter and (non-optimized) backprojection algorithms are undeniably computationally complex. However, the backprojection algorithm may be successfully employed for many SAR research endeavors not involving considerably large data sets and not requiring time-critical image formation. Execution of both image reconstruction algorithms in MATLAB is explicitly addressed. In particular, a manipulation of the backprojection imaging equations is supplied to show how common MATLAB functions, ifft and interp1, may be used for straightforward SAR image formation. In addition, limits for scene size and pixel spacing are derived to aid in the selection of an appropriate imaging grid to avoid aliasing. Example SAR images generated through use of the backprojection algorithm are provided given four publicly available SAR datasets. Finally, MATLAB code for SAR image reconstruction using the matched filter and backprojection algorithms is provided.
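
The ifft/interp1 structure described above transcribes naturally to NumPy, with np.fft.ifft and np.interp standing in for the MATLAB functions. The sketch below is one common de-chirped geometry; the coordinate conventions, range axis, and variable names are assumptions, not the paper's exact code.

```python
# Backprojection sketch: range-compress each pulse, then for every
# image pixel interpolate the profile at the pixel's range and
# remove the residual carrier phase before accumulating.
import numpy as np

def backproject(phase_history, ant_pos, r_axis, grid_x, grid_y, fc):
    """phase_history: (n_pulses, n_freq) de-chirped data;
    ant_pos: (n_pulses, 3) antenna positions; r_axis: increasing range
    bins matching the IFFT output; grid_x/grid_y: image coordinates."""
    c = 299792458.0
    X, Y = np.meshgrid(grid_x, grid_y)
    img = np.zeros(X.shape, dtype=complex)
    for p in range(phase_history.shape[0]):
        # range compression of one pulse (MATLAB: ifft)
        profile = np.fft.ifftshift(np.fft.ifft(phase_history[p]))
        # range from this antenna position to each pixel
        R = np.sqrt((X - ant_pos[p, 0])**2 + (Y - ant_pos[p, 1])**2
                    + ant_pos[p, 2]**2)
        # sample the profile at each pixel's range (MATLAB: interp1)
        samp = (np.interp(R, r_axis, profile.real)
                + 1j * np.interp(R, r_axis, profile.imag))
        # matched-filter phase correction, then coherent accumulation
        img += samp * np.exp(1j * 4 * np.pi * fc / c * R)
    return img
```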

Proceedings ArticleDOI
13 Jun 2010
TL;DR: This work presents a flexible method for fusing information from optical and range sensors based on an accelerated high-dimensional filtering approach, and describes how to integrate priors on object motion and appearance and how to achieve an efficient implementation using parallel processing hardware such as GPUs.
Abstract: We present a flexible method for fusing information from optical and range sensors based on an accelerated high-dimensional filtering approach. Our system takes as input a sequence of monocular camera images as well as a stream of sparse range measurements as obtained from a laser or other sensor system. In contrast with existing approaches, we do not assume that the depth and color data streams have the same data rates or that the observed scene is fully static. Our method produces a dense, high-resolution depth map of the scene, automatically generating confidence values for every interpolated depth point. We describe how to integrate priors on object motion and appearance and how to achieve an efficient implementation using parallel processing hardware such as GPUs.
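
A much-simplified stand-in for the high-dimensional filtering used here is joint bilateral upsampling: interpolate the sparse range samples with weights from both spatial distance and similarity in the guide image. The paper's accelerated filter, motion/appearance priors, and confidence output are not reproduced; the window size and sigmas below are assumptions, and the guide image is taken to be grayscale in [0, 1].

```python
# Joint bilateral interpolation of sparse depth guided by a camera image.
import numpy as np

def joint_bilateral_depth(sparse_depth, mask, guide, radius=7,
                          sigma_s=3.0, sigma_r=0.1):
    """mask: True where sparse_depth holds a measurement."""
    H, W = guide.shape
    out = np.zeros((H, W))
    for y in range(H):
        for x in range(W):
            y0, y1 = max(0, y - radius), min(H, y + radius + 1)
            x0, x1 = max(0, x - radius), min(W, x + radius + 1)
            m = mask[y0:y1, x0:x1]
            if not m.any():
                continue
            gy, gx = np.mgrid[y0:y1, x0:x1]
            w_s = np.exp(-((gy - y)**2 + (gx - x)**2) / (2 * sigma_s**2))
            w_r = np.exp(-(guide[y0:y1, x0:x1] - guide[y, x])**2
                         / (2 * sigma_r**2))
            w = w_s * w_r * m                       # only measured pixels count
            if w.sum() > 0:
                out[y, x] = (w * sparse_depth[y0:y1, x0:x1]).sum() / w.sum()
    return out
```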

Journal ArticleDOI
TL;DR: In this article, the sensing mechanism, design issues, performance evaluation, and applications of planar capacitive sensors are presented. In the context of characterisation and imaging of a dielectric material under test (MUT), a systematic study of sensor modelling, features, and design issues is needed.
Abstract: Purpose – The purpose of this paper is to present the sensing mechanism, design issues, performance evaluation and applications for planar capacitive sensors. In the context of characterisation and imaging of a dielectric material under test (MUT), a systematic study of sensor modelling, features and design issues is needed. In addition, the influencing factors on sensitivity distribution, and the effect of conductivity on sensor performance need to be further studied for planar capacitive sensors.Design/methodology/approach – While analytical methods can provide accurate solutions to sensors of simple geometries, numerical modelling is preferred to obtain sensor response to different design parameters and properties of MUT, and to derive the sensitivity distributions of various electrode designs. Several important parameters have been used to evaluate the response of the sensors in different sensing modes. The designs of different planar capacitive sensor arrays are presented and experimentally evaluated...

Patent
19 Feb 2010
TL;DR: In this paper, a method and a system for processing multi-aperture image data are described, wherein the method comprises: capturing image data associated with one or more objects by simultaneously exposing an image sensor in an imaging system to spectral energy associated with at least a first part of the electromagnetic spectrum using at least first aperture, and to spectral information associated with a second part of electromagnetic spectrum with both second and third aperture.
Abstract: A method and a system for processing multi-aperture image data are described, wherein the method comprises: capturing image data associated with one or more objects by simultaneously exposing an image sensor in an imaging system to spectral energy associated with at least a first part of the electromagnetic spectrum using at least a first aperture and to spectral energy associated with at least a second part of the electromagnetic spectrum using at least a second and third aperture; generating first image data associated with said first part of the electromagnetic spectrum and second image data associated with said second part of the electromagnetic spectrum; and, generating depth information associated with said captured image on the basis displacement information in said second image data, preferably on the basis of displacement information in an auto-correlation function of the high-frequency image data associated with said second image data.

Proceedings ArticleDOI
03 Dec 2010
TL;DR: This work presents an efficient graph-theoretic algorithm for segmenting a colored laser point cloud derived from a laser scanner and camera that enables combination of color information from a wide field of view camera with a 3D LIDAR point cloud from an actuated planar laser scanner.
Abstract: We present an efficient graph-theoretic algorithm for segmenting a colored laser point cloud derived from a laser scanner and camera. Segmentation of raw sensor data is a crucial first step for many high level tasks such as object recognition, obstacle avoidance and terrain classification. Our method enables combination of color information from a wide field of view camera with a 3D LIDAR point cloud from an actuated planar laser scanner. We extend previous work on robust camera-only graph-based segmentation to the case where spatial features, such as surface normals, are available. Our combined method produces segmentation results superior to those derived from either cameras or laser-scanners alone. We verify our approach on both indoor and outdoor scenes.
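
Extending image-domain graph segmentation to colored point clouds mainly changes the edge weight: color difference is combined with a surface-normal term. The sketch below pairs such a weight with the standard Felzenszwalb-Huttenlocher merge criterion; the mixing parameter lam and the specific weighting are assumptions, not the paper's exact formulation.

```python
# Graph-based segmentation over a colored point cloud (sketch).
import numpy as np

def edge_weight(c1, c2, n1, n2, lam=0.5):
    """Blend RGB distance with a normal-angle term (assumed mix)."""
    color_term = np.linalg.norm(c1 - c2)
    normal_term = 1.0 - abs(float(np.dot(n1, n2)))   # 0 for parallel normals
    return (1.0 - lam) * color_term + lam * normal_term

def segment(n_nodes, edges, k=1.0):
    """Felzenszwalb-Huttenlocher merging; edges are (weight, i, j) tuples."""
    parent = list(range(n_nodes))
    size = [1] * n_nodes
    internal = [0.0] * n_nodes       # max MST edge inside each component

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]   # path halving
            a = parent[a]
        return a

    for w, i, j in sorted(edges):
        a, b = find(i), find(j)
        # merge if the joining edge is no heavier than both components'
        # internal variation plus the size-dependent slack k/|C|
        if a != b and w <= min(internal[a] + k / size[a],
                               internal[b] + k / size[b]):
            parent[b] = a
            size[a] += size[b]
            internal[a] = w          # edges arrive in ascending order
    return [find(i) for i in range(n_nodes)]
```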

Proceedings ArticleDOI
18 Mar 2010
TL;DR: The column-parallel ADC is the most widely used ADC architecture in CMOS image sensors for high-speed, low-power operation; delta-sigma (ΔΣ) ADCs, by contrast, have so far been applied only to low-speed imaging with large pixel pitch.
Abstract: Over the last few years, the demands for high-density and high-speed imaging have increased drastically. Since CMOS image sensors have the advantages of low power consumption and easy system integration, they have become dominant over CCDs in the consumer market [1–4]. A column-parallel ADC architecture is the most widely used ADC in CMOS image sensors for high-speed and low-power operation [2–6]. The column-parallel architecture can be classified as: successive-approximation register (SAR) [2], cyclic [3], single-slope (SS) [4], and delta-sigma (ΔΣ) [5,6] ADCs. Although SAR ADCs have been utilized for high-speed imaging, such as UDTV, they require a DAC in each column, whose area is unacceptably large for consumer electronics with a fine pixel pitch. Cyclic ADCs have also been reported in high-speed imaging, but they have high power consumption and high noise levels. Since SS ADCs provide relatively high resolution with minimum area, they have been widely used in CMOS image sensors. However, SS ADCs require very fast clock signals, leading to high power consumption in the case of high-speed imaging. Although ΔΣ ADCs have been investigated for low-noise imaging, they have only been applied to low-speed imaging with large pixel pitch because of the complexity of ΔΣ modulators and the following decimation filters.
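
The clock-speed penalty of single-slope ADCs is easy to quantify: an N-bit SS conversion counts up to 2^N clock cycles per sample. A back-of-envelope example with illustrative numbers (12-bit, 1080 rows, 60 fps):

```python
# Why SS ADCs need fast clocks at high frame rates (illustrative numbers).
bits, rows, fps = 12, 1080, 60
conversions_per_s = rows * fps            # each column ADC converts once per row
clock_hz = conversions_per_s * 2**bits    # worst-case ramp length per conversion
print(f"required counter clock ~ {clock_hz/1e6:.0f} MHz")  # ~265 MHz
```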

Proceedings ArticleDOI
13 Jun 2010
TL;DR: A new framework that incorporates radiative transfer theory to estimate object reflectance and the mean shift algorithm to simultaneously track the object based on its reflectance spectra is proposed and the combination of spectral detection and motion prediction enables the tracker to be robust against abrupt motions, and facilitate fast convergence of themean shift tracker.
Abstract: Recent advances in electronics and sensor design have enabled the development of a hyperspectral video camera that can capture hyperspectral datacubes at near video rates. The sensor offers the potential for novel and robust methods for surveillance by combining methods from computer vision and hyperspectral image analysis. Here, we focus on the problem of tracking objects through challenging conditions, such as rapid illumination and pose changes, occlusions, and in the presence of confusers. A new framework that incorporates radiative transfer theory to estimate object reflectance and the mean shift algorithm to simultaneously track the object based on its reflectance spectra is proposed. The combination of spectral detection and motion prediction enables the tracker to be robust against abrupt motions and facilitates fast convergence of the mean shift tracker. In addition, the system achieves good computational efficiency by using random projection to reduce spectral dimension. The tracker has been evaluated on real hyperspectral video data.
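
The random-projection step mentioned above can be sketched directly: project each pixel's spectrum onto a few random directions (Johnson-Lindenstrauss style) before running the tracker. The band and projection counts below are assumptions.

```python
# Reduce per-pixel spectral dimension with a fixed random projection.
import numpy as np

rng = np.random.default_rng(0)
n_bands, n_proj = 120, 10
P = rng.standard_normal((n_proj, n_bands)) / np.sqrt(n_proj)

def reduce_spectra(cube):
    """cube: (H, W, n_bands) hyperspectral frame -> (H, W, n_proj)."""
    return cube @ P.T   # pairwise distances are approximately preserved
```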

Journal ArticleDOI
22 Apr 2010
TL;DR: In this paper, the inner products are computed in the analog domain using a computational focal plane and an analog vector-matrix multiplier (VMM), which is more than mere postprocessing as the processing circuity is integrated as part of the sensing circuity itself.
Abstract: This paper demonstrates a computational image sensor capable of implementing compressive sensing operations. Instead of sensing raw pixel data, this image sensor projects the image onto a separable 2-D basis set and measures the corresponding expansion coefficients. The inner products are computed in the analog domain using a computational focal plane and an analog vector-matrix multiplier (VMM). This is more than mere postprocessing, as the processing circuity is integrated as part of the sensing circuity itself. We implement compressive imaging on the sensor by using pseudorandom vectors called noiselets for the measurement basis. This choice allows us to reconstruct the image from only a small percentage of the transform coefficients. This effectively compresses the image without any digital computation and reduces the throughput of the analog-to-digital converter (ADC). The reduction in throughput has the potential to reduce power consumption and increase the frame rate. The general architecture and a detailed circuit implementation of the image sensor are discussed. We also present experimental results that demonstrate the advantages of using the sensor for compressive imaging rather than more traditional coded imaging strategies.
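
The separable measurement the sensor performs in analog is Y = A X B^T for basis matrices A and B. The sketch below substitutes random orthonormal matrices (via QR) for the noiselet basis purely for illustration; keeping only a subset of coefficients is the compression step, and the 10% threshold is an arbitrary choice.

```python
# Separable transform measurement and linear reconstruction (sketch).
import numpy as np

rng = np.random.default_rng(1)
N = 64
A = np.linalg.qr(rng.standard_normal((N, N)))[0]   # stand-in for noiselets
B = np.linalg.qr(rng.standard_normal((N, N)))[0]

X = rng.random((N, N))            # scene (stand-in)
Y = A @ X @ B.T                   # what the focal plane + VMM would compute

keep = np.abs(Y) >= np.quantile(np.abs(Y), 0.9)    # digitize only top 10%
X_hat = A.T @ (Y * keep) @ B      # exact inverse when all coefficients kept

err = np.linalg.norm(X - X_hat) / np.linalg.norm(X)
```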

Patent
28 Jun 2010
TL;DR: In this article, an image sensor pixel that includes a photoelectric conversion unit supported by a substrate and an insulator adjacent to the substrate is constructed with a process that optimizes the upper aperture of the light guide.
Abstract: An image sensor pixel that includes a photoelectric conversion unit supported by a substrate and an insulator adjacent to the substrate. The pixel includes a light guide that is located within an opening of the insulator and extends above the insulator such that a portion of the light guide has an air interface. The air interface improves the internal reflection of the light guide. Additionally, the light guide and an adjacent color filter are constructed with a process that optimizes the upper aperture of the light guide. These characteristics of the light guide eliminate the need for a microlens.

Patent
12 Oct 2010
TL;DR: In this article, the authors describe an optical device for imaging, comprising at least one microlens field (10) having at least two microlenses (10a, 10b) and an image sensor (30), each of which comprises a plurality of image detectors (32a, 32b).
Abstract: The invention relates to an optical device for imaging, comprising at least one microlens field (10) having at least two microlenses (10a, 10b) and an image sensor (30) having at least two image detector matrices (30a, 30b). The at least two image detector matrices each comprise a plurality of image detectors (32a, 32b), and a correlation exists between the image detector matrices and the microlenses, such that each microlens, together with an image detector matrix, forms an optical channel. The centers (34a, 34b) of the image detector matrices are laterally shifted to varying extents in relation to the area centroids of the microlens apertures (13a, 13b) of the associated optical channels that are projected on the image detector matrices, such that the optical channels have different, partly overlapping capturing regions, and an overlapping region of the capturing regions of two channels is imaged on the image detector matrices with an offset with respect to the image detector grid of the image detector matrices. In addition, the invention relates to an image processing device and to a method for optical imaging.

Patent
19 Apr 2010
TL;DR: In this article, a rolling shutter is used to capture the radiation from the scene in successive, respective exposure periods from different, respective areas of the scene so as to form an electronic image of a scene.
Abstract: Imaging apparatus includes an illumination assembly, including a plurality of radiation sources and projection optics, which are configured to project radiation from the radiation sources onto different, respective regions of a scene. An imaging assembly includes an image sensor and objective optics configured to form an optical image of the scene on the image sensor, which includes an array of sensor elements arranged in multiple groups, which are triggered by a rolling shutter to capture the radiation from the scene in successive, respective exposure periods from different, respective areas of the scene so as to form an electronic image of the scene. A controller is coupled to actuate the radiation sources sequentially in a pulsed mode so that the illumination assembly illuminates the different, respective areas of the scene in synchronization with the rolling shutter.

Proceedings ArticleDOI
01 Dec 2010
TL;DR: A 3×3 prototype image sensor array consisting of 2µm diameter CMOS avalanche photodiodes with 3-transistor NMOS pixel circuitry is integrated in a 90nm CMOS image sensor technology.
Abstract: A 3×3 prototype image sensor array consisting of 2µm diameter CMOS avalanche photodiodes with 3-transistor NMOS pixel circuitry is integrated in a 90nm CMOS image sensor technology. The 5µm pixel pitch is the smallest achieved to date and is obtained with <1% crosstalk, 250Hz mean dark count rate (DCR) at 20°C, 36% photon detection efficiency (PDE) at 410nm, and 107ps FWHM jitter. The small pixel pitch makes it possible to recover the 12.5% fill factor by standard wafer-level microlenses. A 5-stage capacitive charge pump generates the 11V breakdown voltage from a standard 2.5V supply, obviating external high voltage generation.

Journal ArticleDOI
TL;DR: Analyses of sensitivity and link budget are presented to guide the design of a high-sensitivity noncontact vital sign detector; measurement results show that the fabricated chip has a sensitivity of better than -101 dBm for ideal detection in the absence of random body movement.
Abstract: In this paper, analyses of sensitivity and link budget are presented to guide the design of a high-sensitivity noncontact vital sign detector. Important design issues such as flicker noise, baseband bandwidth, and gain budget have been discussed with practical considerations of the analog-to-digital interface and signal processing methods in noncontact vital sign detection. Based on the analyses, a direct-conversion 5.8-GHz radar sensor chip with 1-GHz bandwidth was designed and fabricated. This radar sensor chip is software configurable to set the operation point and detection range for optimal performance. It integrates all the analog functions on-chip so that the output can be directly sampled for digital signal processing. Measurement results show that the fabricated chip has a sensitivity of better than -101 dBm for ideal detection in the absence of random body movement. Experiments have been performed successfully in a laboratory environment to detect the vital signs of human subjects.

Proceedings ArticleDOI
18 Mar 2010
TL;DR: Conventional image/video sensors acquire visual information from a scene in time-quantized fashion at some predetermined frame rate, which leads, depending on the dynamic contents of the scene, to a more or less high degree of redundancy in the image data.
Abstract: Conventional image/video sensors acquire visual information from a scene in time-quantized fashion at some predetermined frame rate. Each frame carries the information from all pixels, regardless of whether or not this information has changed since the last frame had been acquired, which is usually not long ago. This method obviously leads, depending on the dynamic contents of the scene, to a more or less high degree of redundancy in the image data. Acquisition and handling of these dispensable data consume valuable resources; sophisticated and resource-hungry video compression methods have been developed to deal with these data.

Proceedings ArticleDOI
01 Dec 2010
TL;DR: In this paper, a novel hybrid complementary metal oxide semiconductor (CMOS) image sensor architecture utilizing nanometer scale amorphous In-Ga-Zn-O (a-IGZO) thin film transistors (TFT) combined with a conventional Si photo diode was proposed.
Abstract: In this article, we propose a novel hybrid complementary metal oxide semiconductor (CMOS) image sensor architecture utilizing nanometer scale amorphous In-Ga-Zn-O (a-IGZO) thin film transistors (TFT) combined with a conventional Si photo diode. This approach will overcome the loss of quantum efficiency and image quality due to the downscaling of the photodiode. The 180nm gate length a-IGZO TFT exhibits remarkable short channel device performance including a low 1/f noise and a high output gain, despite fabrication temperatures as low as 200°C. The excellent device performance has been achieved by a double layer gate dielectric (Al2O3/SiO2) and a trapezoidal active region formed by a tailored etching process. A self aligned top gate structure was employed for low parasitic capacitance. 3D process simulation tools were applied to optimize a four pixel CMOS image sensor structure. The results demonstrate how our stacked hybrid device approach contributes to new device strategies in image sensor architectures. We expect that this approach is applicable to numerous devices and systems in future micro- and nano-electronics.

Patent
13 Apr 2010
TL;DR: In this article, the authors propose a removable, pluggable and disposable opto-electronic modules for illumination and imaging for endoscopy or borescopy for use with portable display devices.
Abstract: Various embodiments for providing removable, pluggable and disposable opto-electronic modules for illumination and imaging for endoscopy or borescopy are provided for use with portable display devices. Generally, various medical or industrial devices can include one or more solid state or other compact electro-optic illuminating elements located thereon. Additionally, such opto-electronic modules may include illuminating optics, imaging optics, and/or image capture devices. The illuminating elements may have different wavelengths and can be time-synchronized with an image sensor to illuminate an object for imaging, detecting, or other conditioning purposes. The removable opto-electronic modules may be plugged onto the exterior surface of another medical device, deployably coupled to the distal end of the device, or otherwise disposed on the device.

Patent
Yosuke Kusaka1
25 Feb 2010
TL;DR: In this paper, a focus adjustment device is described that includes an image sensor with imaging pixels for capturing an image formed via an imaging optical system and focus detection pixels for detecting the focus adjustment state of the imaging optical system through a first pupil division-type image shift detection method.
Abstract: A focus adjustment device includes an image sensor that includes imaging pixels for capturing an image formed via an imaging optical system and focus detection pixels for detecting a focus adjustment state at the imaging optical system through a first pupil division-type image shift detection method, a focus detector that detects a focus adjustment state at the imaging optical system through a second pupil division-type image shift detection method different from the first pupil division-type image shift detection method, and a focus adjustment controller that executes focus adjustment for the imaging optical system based upon the focus adjustment states detected by the image sensor and the focus detector.