
Showing papers on "Image sensor" published in 2011


Journal ArticleDOI
TL;DR: The high-speed and high-resolution pattern projection capability offered by the digital light projection technology may enable new generation systems for 3D surface measurement applications that will provide much better functionality and performance than existing ones in terms of speed, accuracy, resolution, modularization, and ease of use.
Abstract: We provide a review of recent advances in 3D surface imaging technologies. We focus particularly on noncontact 3D surface measurement techniques based on structured illumination. The high-speed and high-resolution pattern projection capability offered by the digital light projection technology, together with the recent advances in imaging sensor technologies, may enable new generation systems for 3D surface measurement applications that will provide much better functionality and performance than existing ones in terms of speed, accuracy, resolution, modularization, and ease of use. Performance indexes of 3D imaging system are discussed, and various 3D surface imaging schemes are categorized, illustrated, and compared. Calibration techniques are also discussed, since they play critical roles in achieving the required precision. Numerous applications of 3D surface imaging technologies are discussed with several examples.
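As a concrete illustration of the structured-illumination principle reviewed above, the sketch below recovers a wrapped phase map from four sinusoidal fringe images and converts it to a relative depth map. It is a minimal, idealized example: the four-step phase-shift formula is standard, but the `phase_to_depth` scaling that stands in for the calibrated projector-camera geometry, and all names, are illustrative assumptions rather than details from the review.

```python
import numpy as np

def wrapped_phase(images):
    """Recover the wrapped phase from four sinusoidal fringe images
    shifted by 90 degrees (a common structured-illumination scheme)."""
    i0, i1, i2, i3 = images
    return np.arctan2(i3 - i1, i0 - i2)

def phase_to_depth(phase, phase_ref, scale=1.0):
    """Toy triangulation: depth is taken proportional to the deviation of
    the measured phase from a reference-plane phase; 'scale' lumps the
    projector-camera geometry and is assumed to come from calibration."""
    return scale * (phase - phase_ref)

# usage with synthetic fringe images
imgs = [np.random.rand(4, 4) for _ in range(4)]
depth = phase_to_depth(wrapped_phase(imgs), phase_ref=np.zeros((4, 4)), scale=0.5)
```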

1,331 citations


Journal ArticleDOI
TL;DR: The biomimetic CMOS dynamic vision and image sensor described in this paper is based on a QVGA array of fully autonomous pixels containing event-based change detection and pulse-width-modulation imaging circuitry, which ideally results in lossless video compression through complete temporal redundancy suppression at the pixel level.
Abstract: The biomimetic CMOS dynamic vision and image sensor described in this paper is based on a QVGA (304×240) array of fully autonomous pixels containing event-based change detection and pulse-width-modulation (PWM) imaging circuitry. Exposure measurements are initiated and carried out locally by the individual pixel that has detected a change of brightness in its field-of-view. Pixels do not rely on external timing signals and independently and asynchronously request access to an (asynchronous arbitrated) output channel when they have new grayscale values to communicate. Pixels that are not stimulated visually do not produce output. The visual information acquired from the scene, temporal contrast and grayscale data, are communicated in the form of asynchronous address-events (AER), with the grayscale values being encoded in inter-event intervals. The pixel-autonomous and massively parallel operation ideally results in lossless video compression through complete temporal redundancy suppression at the pixel level. Compression factors depend on scene activity and peak at ~1000 for static scenes. Due to the time-based encoding of the illumination information, very high dynamic range - intra-scene DR of 143 dB static and 125 dB at 30 fps equivalent temporal resolution - is achieved. A novel time-domain correlated double sampling (TCDS) method yields array FPN of 56 dB (9.3 bit) for >10 Lx illuminance.
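The grayscale readout described above encodes intensity in the time between two address-events. A minimal sketch of that decoding step, assuming a fixed reference charge and ignoring nonidealities such as dark current, is shown below; `q_ref` and the function name are illustrative, not taken from the paper.

```python
import numpy as np

def grayscale_from_events(t_start, t_end, q_ref=1.0):
    """Time-based (PWM) exposure readout: each pixel integrates a fixed
    reference charge q_ref, so the brighter the pixel, the shorter the
    interval between the two events bounding the measurement, and
    intensity is proportional to 1 / (t_end - t_start)."""
    dt = np.asarray(t_end) - np.asarray(t_start)
    return q_ref / dt

# two pixels: the first finishes integrating twice as fast, so it is twice as bright
print(grayscale_from_events([0.0, 0.0], [1e-3, 2e-3]))
```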

632 citations


Patent
20 Jun 2011
TL;DR: In this article, an image sensor integrated circuit can be configured to capture a frame of image data by reading out a plurality of analog signals, each read out analog signal can be representative of light incident on a group of two or more pixels of the plurality of pixels.
Abstract: An indicia reading terminal can comprise an image sensor integrated circuit having a two-dimensional image sensor, a hand held housing encapsulating the two-dimensional image sensor, and an imaging lens configured to focus an image of a target decodable indicia onto the two-dimensional image sensor. The two-dimensional image sensor can include a plurality of pixels arranged in repetitive patterns. Each pattern can include at least one pixel sensitive in a first spectrum region, at least one pixel sensitive in a second spectrum region, and at least one pixel sensitive in a third spectrum region. The image sensor integrated circuit can be configured to capture a frame of image data by reading out a plurality of analog signals. Each read out analog signal can be representative of light incident on a group of two or more pixels of the plurality of pixels. The image sensor integrated circuit can be further configured to convert the plurality of analog signals to a plurality of digital signals and to store the plurality of digital signals in a memory. The indicia reading terminal can be operative to process the frame of image data for attempting to decode for decodable indicia.

346 citations


Patent
Jun Lu, Yong Liu, Ynjiun Paul Wang
07 Nov 2011
TL;DR: In this article, an optical indicia reading terminal can comprise a microprocessor, a memory, and an image sensor integrated circuit, all coupled to a system bus, and a hand held housing encapsulating the two-dimensional image sensor.
Abstract: An optical indicia reading terminal can comprise a microprocessor, a memory, and an image sensor integrated circuit, all coupled to a system bus, and a hand held housing encapsulating the two-dimensional image sensor. The image sensor integrated circuit can comprise a two-dimensional image sensor including a plurality of pixels. The image sensor integrated circuit can be configured to read out a plurality of analog signals. Each analog signal of the plurality of analog signals can be representative of light incident on at least one pixel of the plurality of pixels. The image sensor integrated circuit can be further configured to derive a plurality of luminance signals from the plurality of analog signals, each luminance signal being representative of the luminance of light incident on at least one pixel of the plurality of pixels. The image sensor integrated circuit can be further configured to store a frame of image data in the terminal's memory by converting the plurality of luminance signals to a plurality of digital values, each digital value being representative of the luminance of light incident on at least one pixel of the plurality of pixels. The optical indicia reading terminal can be configured to process the frame of image data for decoding decodable indicia.

344 citations


Patent
19 Jan 2011
TL;DR: In this paper, an imaging terminal having an image sensor array and a variable lens assembly for focusing an image onto the imaging sensor array is described. But the focus setting of the imaging lens assembly can be fixed so that a predetermined lens assembly focus setting is active when a trigger signal is active.
Abstract: There is set forth herein an imaging terminal having an image sensor array and a variable lens assembly for focusing an image onto the image sensor array. In one embodiment, an imaging terminal can include one or more focusing configuration selected from the group comprising a full set focusing configuration, a truncated set focusing configuration and a fixed focusing configuration. When a full set focusing configuration is active, a full set of candidate focus settings can be active when the imaging terminal determines a focus setting of the terminal responsively to a trigger signal activation. When a truncated set focusing configuration is active, a truncated range of candidate focus settings can be active when the imaging terminal determines a focus setting of the terminal responsively to a trigger signal activation. When a fixed focusing configuration is active, the focus setting of the imaging lens assembly can be fixed so that a predetermined lens assembly focus setting is active when a trigger signal is active.

338 citations


Patent
31 Jan 2011
TL;DR: In this article, a decodable indicia reading system is presented for use in locating and decoding a bar code symbol represented within a frame of image data, which can include a central processing unit (CPU), a memory communicatively coupled to the CPU, and two or more image sensors.
Abstract: A decodable indicia reading system can be provided for use in locating and decoding a bar code symbol represented within a frame of image data. The system can comprise a central processing unit (CPU), a memory communicatively coupled to the CPU, and two or more image sensors communicatively coupled to the CPU or to the memory. The system can be configured to select an image sensor for indicia reading by cycling through available image sensors to detect an image sensor suitable for an attempted indicia reading operation by comparing a measured parameter value to a pre-defined sensor-specific threshold value. The system can be further configured to select the first suitable or the best suitable image sensor for the attempted decodable indicia reading operation based upon the comparison result. The system can be further configured to notify the system operator which image sensor has been selected. The system can be further configured to obtain a decodable indicia image by the selected image sensor.

335 citations


Patent
09 Sep 2011
TL;DR: An apparatus for decoding a bar code symbol may include an image sensor integrated circuit having a plurality of pixels, timing and control circuitry, gain circuitry for controlling gain, and analog to digital conversion circuitry for conversion of an analog signal to a digital signal.
Abstract: An apparatus for use in decoding a bar code symbol may include an image sensor integrated circuit having a plurality of pixels, timing and control circuitry for controlling an image sensor, gain circuitry for controlling gain, and analog to digital conversion circuitry for conversion of an analog signal to a digital signal. The apparatus may also include a printed circuit board for receiving the image sensor integrated circuit. The connection between the image sensor integrated circuit and the printed circuit board can be characterized by a plurality of conductive adhesive connectors disposed between a plurality of electrode pads and a plurality of contact pads, where the conductive adhesive connectors provide electrical input/output and mechanical connections between the image sensor integrated circuit and the printed circuit board. The apparatus may be operative for processing image signals generated by the image sensor integrated circuit for attempting to decode the bar code symbol.

323 citations


Patent
08 Jun 2011
TL;DR: In this article, a secure indicia encoding system with a lock receiving portion is described, where an imaging subsystem, a memory, a processor, and a housing are used to decode a decodable feature represented in at least one of the frames of image data.
Abstract: A securable indicia encoding system with a lock receiving portion is disclosed herein. In one illustrative embodiment, a securable indicia decoding device may include an imaging subsystem, a memory, a processor, and a housing. The imaging subsystem may include an image sensor array and an imaging optics assembly operative for focusing an image onto the image sensor array. The memory may be capable of storing frames of image data comprising data communicated through the read-out portion of at least some of the pixels during the imaging operation. The processor may be operative for receiving one or more of the frames of image data from the data storage element and performing a decode operation for attempting to decode a decodable feature represented in at least one of the frames of image data. The housing may encapsulate the illumination subsystem and the imaging subsystem. The housing may include a lock receiving portion for receiving a security lock.

313 citations


Patent
26 Sep 2011
TL;DR: An optical indicia reading terminal (100) can include an image sensor (62, 1032), an imaging lens (1110), an analog-to-digital converter (1037), and an illumination assembly (1207).
Abstract: An optical indicia reading terminal (100) can comprise an image sensor (62, 1032), an imaging lens (1110) configured to focus an image of decodable indicia (15) on the image sensor (62, 1032), an analog-to-digital converter (1037) configured to convert an analog signal read out of the image sensor (62, 1032) into a digital signal representative of light incident on the image sensor (62, 1032), a hand held housing (52) encapsulating the image sensor (62, 1032), a microprocessor (1060) configured to output decoded message data corresponding to the decodable indicia (15) by processing the digital signal, and an illumination assembly (1207). The illumination assembly (1207) can include at least one visible spectrum illumination source (322a-322z) and at least one invisible spectrum illumination source (324a-324z). The visible spectrum illumination source (322a-322z) can be configured to emit a light having a wavelength belonging to a visible spectrum region. The invisible spectrum illumination source (324a-324z) can be configured to emit a light having a wavelength belonging to an invisible spectrum region. The intensities of light emitted by the visible spectrum light sources (322a-322z) and invisible spectrum light sources (324a-324z) can be chosen to minimize the perceived combined light intensity while providing illumination sufficient for obtaining an image suitable for decoding the decodable indicia (15).

302 citations


Patent
30 Jun 2011
TL;DR: A decodable reading terminal can include a laser-based scanner, an image sensor, a photo-detector, and an A/D converter as mentioned in this paper, which can be configured to convert the first analog signal into a first digital signal.
Abstract: A decodable indicia reading terminal can comprise a laser-based scanner, an imager-based scanner, a central processing unit (CPU), and an illumination assembly. The laser-based scanner can include a laser source, a photo-detector, and an analog-to-digital (A/D) converter. The laser source can be configured to emit a laser beam onto a substrate bearing decodable indicia. The photo-detector can be configured to receive a beam of a variable intensity reflected by the decodable indicia, and to output a first analog signal representative of the variable intensity. The A/D converter can be configured to convert the first analog signal into a first digital signal. The imager-based scanner can include a multiple pixel image sensor, an imaging lens, and an A/D converter. The imaging lens can be configured to focus an image of the decodable indicia on the image sensor. The A/D converter can be configured to convert into a second digital signal a second analog signal read out of the image sensor and representative of light incident on the image sensor. The CPU can be configured to output a decoded message data corresponding to the decodable indicia by processing the first digital signal and/or the second digital signal. The illumination assembly can include an indicator light bar and an illumination light bar. The ON/OFF state and color of the indicator light bar can reflect the state of the decodable indicia reading terminal. The illumination light bar can be configured to generate a high intensity illumination for illuminating the substrate bearing the decodable indicia. The wavelength of the light generated by the indicator light bar can be substantially equal to the wavelength of the light generated by the illumination light bar, and the light generated by the illumination light bar can have a very low perceived intensity.

299 citations


Proceedings ArticleDOI
06 Nov 2011
TL;DR: It is shown that the proposed techniques for sampling, representing and reconstructing the space-time volume can effectively reconstruct a video from a single image while maintaining high spatial resolution.
Abstract: Cameras face a fundamental tradeoff between spatial and temporal resolution - digital still cameras can capture images with high spatial resolution, but most high-speed video cameras suffer from low spatial resolution. It is hard to overcome this tradeoff without incurring a significant increase in hardware costs. In this paper, we propose techniques for sampling, representing and reconstructing the space-time volume in order to overcome this tradeoff. Our approach has two important distinctions compared to previous works: (1) we achieve sparse representation of videos by learning an over-complete dictionary on video patches, and (2) we adhere to practical constraints on the sampling scheme imposed by the architectures of present image sensor devices. Consequently, our sampling scheme can be implemented on image sensors by making a straightforward modification to the control unit. To demonstrate the power of our approach, we have implemented a prototype imaging system with per-pixel coded exposure control using a liquid crystal on silicon (LCoS) device. Using both simulations and experiments on a wide range of scenes, we show that our method can effectively reconstruct a video from a single image while maintaining high spatial resolution.
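To make the sparse-coding step of this approach concrete, here is a minimal sketch of recovering one space-time patch from a single coded measurement with a greedy orthogonal matching pursuit. The dictionary, sampling mask, and sparsity level are random toy stand-ins, not the learned dictionary or the hardware-constrained sampling scheme described in the paper.

```python
import numpy as np

def omp(A, y, k):
    """Minimal orthogonal matching pursuit: greedily pick k atoms and
    re-solve least squares on the selected support."""
    residual, support, coef = y.copy(), [], None
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

def reconstruct_patch(y, S, D, k):
    """Recover a space-time patch x = D @ a from one coded measurement
    y = S @ x, where S models the per-pixel coded exposure and D is an
    over-complete dictionary (here both are random toy stand-ins)."""
    a = omp(S @ D, y, k)
    return D @ a

x_dim, n_atoms, m = 256, 512, 16           # 4x4x16 patch, 512 atoms, 16 measurements
D = np.random.randn(x_dim, n_atoms)
S = (np.random.rand(m, x_dim) < 0.1).astype(float)
y = S @ (D[:, :3] @ np.ones(3))            # synthetic 3-sparse ground truth
x_hat = reconstruct_patch(y, S, D, k=3)
```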

Proceedings ArticleDOI
20 Jun 2011
TL;DR: A reconstruction algorithm that uses the data from P2C2 along with additional priors about videos performs temporal super-resolution; by modeling spatio-temporal redundancies in a video volume, one can faithfully recover the underlying high-speed video frames from the observed low-speed coded video.
Abstract: We describe an imaging architecture for compressive video sensing termed programmable pixel compressive camera (P2C2). P2C2 allows us to capture fast phenomena at frame rates higher than that of the camera sensor. In P2C2, each pixel has an independent shutter that is modulated at a rate higher than the camera frame rate. The observed intensity at a pixel is an integration of the incoming light modulated by its specific shutter. We propose a reconstruction algorithm that uses the data from P2C2 along with additional priors about videos to perform temporal super-resolution. We model the spatial redundancy of videos using sparse representations and the temporal redundancy using brightness constancy constraints inferred via optical flow. We show that by modeling such spatio-temporal redundancies in a video volume, one can faithfully recover the underlying high-speed video frames from the observed low-speed coded video. The imaging architecture and the reconstruction algorithm allow us to achieve temporal super-resolution without loss in spatial resolution. We implement a prototype of P2C2 using an LCoS modulator and recover several videos at 200 fps using a 25 fps camera.
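The P2C2 forward model reduces to a per-pixel modulated integration over sub-frames; a minimal sketch of that capture model (not the reconstruction, which uses dictionary and optical-flow priors) is given below with toy dimensions.

```python
import numpy as np

def p2c2_capture(video, shutter):
    """Per-pixel coded exposure forward model: 'video' is the high-speed
    volume (T, H, W) and 'shutter' a binary (T, H, W) mask toggled faster
    than the camera frame rate; the camera records one low-speed frame
    that integrates the modulated light."""
    return (video * shutter).sum(axis=0)

T, H, W = 8, 4, 4
video = np.random.rand(T, H, W)                          # 8 sub-frames
shutter = (np.random.rand(T, H, W) < 0.5).astype(float)  # per-pixel shutter pattern
coded_frame = p2c2_capture(video, shutter)               # one observed frame
```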

Proceedings ArticleDOI
07 Apr 2011
TL;DR: This work states that the introduction of SPAD devices in deep-submicron CMOS has enabled the design of massively parallel arrays where the entire photon detection and ToA circuitry is integrated on-pixel.
Abstract: Image sensors capable of resolving the time-of-arrival (ToA) of individual photons with high resolution are needed in several applications, such as fluorescence lifetime imaging microscopy (FLIM), Förster resonance energy transfer (FRET), optical rangefinding, and positron emission tomography. In FRET, for example, the typical fluorescence lifetime is of the order of 100 to 300 ps, thus deep-subnanosecond resolutions are needed in the instrument response function (IRF). This in turn requires new time-resolved image sensors with better time resolution, increased throughput, and lower costs. Solid-state avalanche photodiodes operated in Geiger mode, or single-photon avalanche diodes (SPADs), have existed for decades [1], but only recently have SPADs been integrated in CMOS. However, as array sizes have grown, the readout bottleneck has also become evident, leading to hybrid designs or more integration and more parallelism on-chip [2,3]. This trend has accelerated with the introduction of SPAD devices in deep-submicron CMOS, which has enabled the design of massively parallel arrays where the entire photon detection and ToA circuitry is integrated on-pixel [4,5].
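Downstream of such sensors, photon times of arrival are typically histogrammed (TCSPC-style) and a lifetime is estimated; the sketch below shows a simple first-moment estimate on synthetic data. It ignores the IRF deconvolution mentioned above, and the bin count and time range are arbitrary assumptions.

```python
import numpy as np

def estimate_lifetime(toa_ps, t_range=(0.0, 5000.0), bins=256):
    """Histogram photon times of arrival (TCSPC-style) and estimate a
    single-exponential lifetime from the first moment of the histogram.
    A real instrument would also deconvolve the IRF."""
    hist, edges = np.histogram(toa_ps, bins=bins, range=t_range)
    centers = 0.5 * (edges[:-1] + edges[1:])
    tau = np.sum(hist * centers) / np.sum(hist)
    return tau, hist

# synthetic photons from a 200 ps decay
toa = np.random.exponential(scale=200.0, size=10000)
tau_hat, _ = estimate_lifetime(toa)
print(tau_hat)   # close to 200 ps
```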

Journal ArticleDOI
TL;DR: Flexible DOF imaging can open a new creative dimension in photography and lead to new capabilities in scientific imaging, vision, and graphics.
Abstract: The range of scene depths that appear focused in an image is known as the depth of field (DOF). Conventional cameras are limited by a fundamental trade-off between depth of field and signal-to-noise ratio (SNR). For a dark scene, the aperture of the lens must be opened up to maintain SNR, which causes the DOF to reduce. Also, today's cameras have DOFs that correspond to a single slab that is perpendicular to the optical axis. In this paper, we present an imaging system that enables one to control the DOF in new and powerful ways. Our approach is to vary the position and/or orientation of the image detector during the integration time of a single photograph. Even when the detector motion is very small (tens of microns), a large range of scene depths (several meters) is captured, both in and out of focus. Our prototype camera uses a micro-actuator to translate the detector along the optical axis during image integration. Using this device, we demonstrate four applications of flexible DOF. First, we describe extended DOF where a large depth range is captured with a very wide aperture (low noise) but with nearly depth-independent defocus blur. Deconvolving a captured image with a single blur kernel gives an image with extended DOF and high SNR. Next, we show the capture of images with discontinuous DOFs. For instance, near and far objects can be imaged with sharpness, while objects in between are severely blurred. Third, we show that our camera can capture images with tilted DOFs (Scheimpflug imaging) without tilting the image detector. Finally, we demonstrate how our camera can be used to realize nonplanar DOFs. We believe flexible DOF imaging can open a new creative dimension in photography and lead to new capabilities in scientific imaging, vision, and graphics.
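Since the detector sweep produces a nearly depth-independent blur, the extended-DOF result follows from deconvolving the captured image with one kernel; below is a minimal frequency-domain Wiener deconvolution sketch. The box kernel and SNR constant are chosen only for illustration; the paper's actual kernel comes from the detector motion.

```python
import numpy as np

def wiener_deconvolve(image, kernel, snr=100.0):
    """Frequency-domain Wiener deconvolution with a single blur kernel,
    as used to sharpen an extended-DOF capture whose blur is nearly
    depth independent."""
    H = np.fft.fft2(kernel, s=image.shape)
    G = np.fft.fft2(image)
    W = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)
    return np.real(np.fft.ifft2(W * G))

# toy example: circularly blur a random image with a 5x5 box kernel, then invert
img = np.random.rand(64, 64)
k = np.ones((5, 5)) / 25.0
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(k, s=img.shape)))
sharp = wiener_deconvolve(blurred, k)
```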

Proceedings ArticleDOI
11 Oct 2011
TL;DR: Preliminary results indicate that the Kinect sensor does indeed work in a wider range of operating conditions and it can produce activity descriptions that match that of a human.
Abstract: Previously, we put forth a new computer vision system for indoor well-being monitoring of elderly populations based on the use of multiple stereo camera pairs. That approach involves combining the strengths of image space with three-dimensional volume element (voxel) space techniques. However, that system is fundamentally limited because it is based on color imagery from visible light cameras. In this article, we extend our prior research and consider a new, inexpensive infrared depth camera device, the Microsoft Kinect. Advantages, such as the ability to operate 24-7 in low-to-no light conditions, and shortcomings are detailed. In addition, we discuss necessary algorithmic extensions to our mixed image and voxel space framework for the Kinect sensor. Experiments are performed in a laboratory designed to resemble an elder's living quarters. Vision findings are evaluated using our prior work on high-level linguistic summarization of human activity. Preliminary results indicate that the Kinect sensor does indeed work in a wider range of operating conditions and that it can produce activity descriptions that match those of a human.

Patent
11 Aug 2011
TL;DR: In this paper, the authors present techniques for processing image data acquired using a digital image sensor 90. In accordance with aspects of the present disclosure, one such technique may relate to the processing of image data in a system 10 that supports multiple image sensors 90.
Abstract: Various techniques are provided for processing image data acquired using a digital image sensor 90. In accordance with aspects of the present disclosure, one such technique may relate to the processing of image data in a system 10 that supports multiple image sensors 90. In one embodiment, the image processing system 32 may include control circuitry configured to determine whether a device is operating in a single sensor mode (one active sensor) or a dual sensor mode (two active sensors). When operating in the single sensor mode, data may be provided directly to a front-end pixel processing unit 80 from the sensor interface of the active sensor. When operating in a dual sensor mode, the image frames from the first and second sensors 90a, 90b are provided to the front-end pixel processing unit 80 in an interleaved manner. For instance, in one embodiment, the image frames from the first and second sensors 90a, 90b are written to a memory 108, and then read out to the front-end pixel processing unit 80 in an interleaved manner.

Proceedings ArticleDOI
Cha Zhang, Zhengyou Zhang
11 Jul 2011
TL;DR: A maximum likelihood solution is presented for joint depth and color calibration based on two principles: points on the checkerboard shall be co-planar, with the plane known from color camera calibration.
Abstract: Commodity depth cameras have created many interesting new applications in the research community recently. These applications often require the calibration information between the color and the depth cameras. Traditional checkerboard based calibration schemes fail to work well for the depth camera, since its corner features cannot be reliably detected in the depth image. In this paper, we present a maximum likelihood solution for the joint depth and color calibration based on two principles. First, in the depth image, points on the checker-board shall be co-planar, and the plane is known from color camera calibration. Second, additional point correspondences between the depth and color images may be manually specified or automatically established to help improve calibration accuracy. Uncertainty in depth values has been taken into account systematically. The proposed algorithm is reliable and accurate, as demonstrated by extensive experimental results on simulated and real-world examples.
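One term of the maximum likelihood objective is the co-planarity constraint: depth points on the board, mapped into the color-camera frame, should lie on the plane recovered from color calibration. A minimal sketch of that residual, with assumed extrinsics `(R, t)` and plane parameters, is given below; it omits the depth-uncertainty weighting and the point-correspondence term described in the paper.

```python
import numpy as np

def coplanarity_cost(depth_points, R, t, plane_n, plane_d):
    """Sum of squared plane residuals: depth-camera points on the board,
    mapped into the color-camera frame by (R, t), should satisfy
    n . p + d = 0 for the board plane known from color calibration."""
    p_color = depth_points @ R.T + t
    residuals = p_color @ plane_n + plane_d
    return np.sum(residuals ** 2)

# toy check: identity extrinsics, points lying exactly on the plane z = 1
pts = np.column_stack([np.random.rand(10), np.random.rand(10), np.ones(10)])
print(coplanarity_cost(pts, np.eye(3), np.zeros(3), np.array([0.0, 0.0, 1.0]), -1.0))
```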

Patent
04 May 2011
TL;DR: In this paper, a color digital camera with direct luminance detection is described, where the luminance signals are obtained directly from a broadband image sensor channel without interpolation of RGB data.
Abstract: Digital camera systems and methods are described that provide a color digital camera with direct luminance detection. The luminance signals are obtained directly from a broadband image sensor channel without interpolation of RGB data. The chrominance signals are obtained from one or more additional image sensor channels comprising red and/or blue color band detection capability. The red and blue signals are directly combined with the luminance image sensor channel signals. The digital camera generates and outputs an image in YCrCb color space by directly combining outputs of the broadband, red and blue sensors.
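A rough sketch of the output stage described above follows: luminance is taken directly from the broadband channel, and chrominance is formed as scaled R-Y and B-Y differences. The gains `kr` and `kb` and the function name are illustrative placeholders, not values from the patent.

```python
import numpy as np

def ycrcb_from_direct_luminance(y_broadband, r, b, kr=0.5, kb=0.5):
    """Form YCrCb when luminance comes straight from a broadband channel
    and only R and B are measured separately: chrominance is a scaled
    color difference.  kr and kb are illustrative gains."""
    cr = kr * (r - y_broadband)
    cb = kb * (b - y_broadband)
    return np.stack([y_broadband, cr, cb], axis=-1)

frame = ycrcb_from_direct_luminance(np.random.rand(4, 4),
                                    np.random.rand(4, 4),
                                    np.random.rand(4, 4))
```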

Journal ArticleDOI
TL;DR: A 2.1 M pixel, 120 frame/s CMOS image sensor with column-parallel delta-sigma (ΔΣ) ADC architecture with second-order ΔΣ ADC improves the conversion speed while reducing the random noise (RN) level as well.
Abstract: This paper presents a 2.1 Mpixel, 120 frame/s CMOS image sensor with a column-parallel delta-sigma (ΔΣ) ADC architecture. The use of a second-order ΔΣ ADC improves the conversion speed while reducing the random noise (RN) level as well. The ΔΣ ADC, employing an inverter-based ΔΣ modulator and a compact decimation filter, is accommodated within a fine pixel pitch of 2.25 μm and improves energy efficiency while providing a high frame rate of 120 frame/s. A prototype image sensor has been fabricated in a 0.13-μm CMOS process. Measurement results show an RN of 2.4 e− rms and a dynamic range of 73 dB. The power consumption of the prototype image sensor is only 180 mW. This work achieves an energy efficiency of 1.7 e−·nJ.
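For readers unfamiliar with the ADC architecture, the behavioral sketch below simulates a generic second-order ΔΣ modulator (two integrators, a 1-bit quantizer, feedback) and averages the bitstream as a crude decimation filter. The topology and 0.5 gains are a textbook arrangement assumed for illustration, not the inverter-based circuit of the paper.

```python
import numpy as np

def second_order_dsm(x):
    """Behavioral second-order delta-sigma modulator: two integrators
    with 0.5 gains, a 1-bit quantizer, and feedback of the quantized
    value into both integrators."""
    i1 = i2 = 0.0
    y = 1.0
    bits = np.empty_like(x)
    for n, xn in enumerate(x):
        i1 += 0.5 * (xn - y)              # first integrator
        i2 += 0.5 * (i1 - y)              # second integrator
        y = 1.0 if i2 >= 0.0 else -1.0    # 1-bit quantizer
        bits[n] = y
    return bits

# DC input of 0.3 oversampled 256x; averaging the bitstream is a crude
# decimation filter and approximately recovers the input value
bits = second_order_dsm(np.full(256, 0.3))
print(bits.mean())
```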

Journal ArticleDOI
TL;DR: The ability of the sensor to capture very fast moving objects, rotating at 10 K revolutions per second, has been verified experimentally, and a compact preamplification stage has been introduced that improves the minimum detectable contrast over previous designs.
Abstract: This paper presents a 128 × 128 dynamic vision sensor. Each pixel detects temporal changes in the local illumination. A minimum illumination temporal contrast of 10% can be detected. A compact preamplification stage has been introduced that improves the minimum detectable contrast over previous designs while reducing the pixel area by one third. The pixel responds to illumination changes in less than 3.6 μs. The ability of the sensor to capture very fast-moving objects, rotating at 10 K revolutions per second, has been verified experimentally. A frame-based sensor capable of achieving this would require at least 100 K frames per second.

Patent
10 Nov 2011
TL;DR: In this paper, the SLM imparts a programmable pattern of attenuation that may be used to correct for asymmetries between the first and second modes of illumination or imaging.
Abstract: Methods are disclosed for measuring target structures formed by a lithographic process on a substrate. A grating structure within the target is smaller than an illumination spot and field of view of a measurement optical system. The optical system has a first branch leading to a pupil plane imaging sensor and a second branch leading to a substrate plane imaging sensor. A spatial light modulator is arranged in an intermediate pupil plane of the second branch of the optical system. The SLM imparts a programmable pattern of attenuation that may be used to correct for asymmetries between the first and second modes of illumination or imaging. By use of specific target designs and machine-learning processes, the attenuation patterns may also be programmed to act as filter functions, enhancing sensitivity to specific parameters of interest, such as focus.

Journal ArticleDOI
TL;DR: The demonstrated performance of the temporal-convolution method shows it is a powerful tool for reducing reconstruction artifacts originating from the detector's finite size and improving the quality of optoacoustic reconstructions.
Abstract: Purpose: Optoacoustic imaging enables mapping the optical absorption of biological tissue using optical excitation and acoustic detection. Although most image-reconstruction algorithms are based on the assumption of a detector with an isotropic sensitivity, the geometry of the detector often leads to a response with spatially dependent magnitude and bandwidth. This effect may lead to attenuation or distortion in the recorded signal and, consequently, in the reconstructed image. Methods: Herein, an accurate numerical method for simulating the spatially dependent response of an arbitrary-shape acoustic transducer is presented. The method is based on an analytical solution obtained for a two-dimensional line detector. The calculated response is incorporated in the forward-model matrix of an optoacoustic imaging setup using temporal convolution, and image reconstruction is performed by inverting the matrix relation. Results: The method was numerically and experimentally demonstrated in two dimensions for both flat and focused transducers and compared to the spatial-convolution method. In forward simulations, the developed method did not suffer from the numerical errors exhibited by the spatial-convolution method. In reconstruction simulations and experiments, the use of both the temporal-convolution and spatial-convolution methods led to an enhancement in resolution compared to a reconstruction with a point-detector model. However, because of its higher modeling accuracy, the temporal-convolution method achieved a noise figure approximately three times lower than that of the spatial-convolution method. Conclusions: The demonstrated performance of the temporal-convolution method shows it is a powerful tool for reducing reconstruction artifacts originating from the detector's finite size and improving the quality of optoacoustic reconstructions. Furthermore, the method may be used for assessing new system designs. Specifically, detectors with nonstandard shapes may be investigated.
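The temporal-convolution idea can be sketched as follows: each row of a point-detector forward model is convolved with an impulse response that depends on the source position relative to the finite-size detector. The Hanning-window responses below are placeholders for the analytically computed responses used in the paper.

```python
import numpy as np

def apply_detector_response(signals, irfs):
    """Temporal-convolution step: each ideal point-detector pressure
    trace (row of 'signals', shape (n_sources, n_t)) is convolved with
    an impulse response that depends on the source position relative to
    the finite-size detector."""
    out = np.empty_like(signals)
    for i, (s, h) in enumerate(zip(signals, irfs)):
        out[i] = np.convolve(s, h, mode="same")
    return out

sig = np.random.rand(2, 64)                # two source positions, 64 time samples
irf = [np.hanning(9), np.hanning(17)]      # broader response farther off-axis
modeled = apply_detector_response(sig, irf)
```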

Journal ArticleDOI
01 Apr 2011
TL;DR: The historic and physical foundations of integral imaging are overviewed; different optical pickup and display schemes are discussed and system parameters and performance metrics are described; computational methods for reconstruction and range estimation are presented and several applications including 3-D underwater imaging, near-infrared passive sensing, imaging in photon-starved environments, and 3-D optical microscopy are discussed.
Abstract: Three-dimensional (3-D) optical image sensing and visualization technologies have been researched extensively for different applications in fields as diverse as entertainment, medical sciences, robotics, manufacturing, and defense. In many instances, the capabilities of 3-D imaging and display systems have revolutionized the progress of these disciplines, enabling new detection/display abilities that would not have been otherwise possible. As one of the promising methods in the area of 3-D sensing and display, integral imaging offers a passive and relatively inexpensive way to capture 3-D information and to visualize it optically or computationally. The integral imaging technique belongs to the broader class of multiview imaging techniques and is based on a century-old principle which has only been resurrected in the past decade owing to the advancement of optoelectronic image sensors as well as the exponential increase in computing power. In this paper, the historic and physical foundations of integral imaging are overviewed; different optical pickup and display schemes are discussed and system parameters and performance metrics are described. In addition, computational methods for reconstruction and range estimation are presented and several applications including 3-D underwater imaging, near-infrared passive sensing, imaging in photon-starved environments, and 3-D optical microscopy are discussed among others.
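A common computational reconstruction for integral imaging is shift-and-sum back-projection of the elemental images; a minimal sketch is given below. The pitch and depth-dependent shift factor are illustrative parameters, and real systems resample sub-pixel shifts rather than rolling by integer amounts.

```python
import numpy as np

def reconstruct_plane(elemental_images, pitch_px, depth_factor):
    """Shift-and-sum back-projection: each elemental image is shifted in
    proportion to its lenslet position and the chosen depth, then all
    are averaged.  'elemental_images' is (K, K, H, W); 'pitch_px' the
    lenslet pitch in pixels; 'depth_factor' the shift per lenslet."""
    K, _, H, W = elemental_images.shape
    accum = np.zeros((H, W))
    for i in range(K):
        for j in range(K):
            dy = int(round((i - K // 2) * pitch_px * depth_factor))
            dx = int(round((j - K // 2) * pitch_px * depth_factor))
            accum += np.roll(elemental_images[i, j], (dy, dx), axis=(0, 1))
    return accum / (K * K)

stack = np.random.rand(3, 3, 32, 32)
plane = reconstruct_plane(stack, pitch_px=4, depth_factor=0.5)
```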

Patent
09 Nov 2011
TL;DR: In this article, the authors present a control to determine an object present in the first and/or second forward fields of view of a vehicle by processing image data captured by both of the imaging sensors.
Abstract: A vision system for a vehicle includes a first imaging sensor having a first forward field of view and a second imaging sensor spaced from the first imaging sensor and having a second forward field of view, which at least partially overlaps with the first forward field of view. A control processes image data captured by at least one of the first and second imaging sensors to determine an object present in the first and/or second forward fields of view. The control is operable to modulate a headlamp of the vehicle responsive to the processing of image data captured by the at least one of the first and second imaging sensors. The control may process image data captured by both of the imaging sensors to determine a distance between the equipped vehicle and an object present in the overlap of the first and second forward fields of view.
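When image data from both forward imagers is processed jointly, distance can follow from standard stereo triangulation; a one-line sketch of that relation is below, with example numbers that are purely illustrative.

```python
def stereo_distance(focal_px, baseline_m, disparity_px):
    """Distance to an object seen by both forward imagers, from the
    standard stereo relation Z = f * B / d."""
    return focal_px * baseline_m / disparity_px

# e.g. 800 px focal length, 0.3 m baseline, 12 px disparity -> 20 m
print(stereo_distance(800.0, 0.3, 12.0))
```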

Proceedings ArticleDOI
17 May 2011
TL;DR: A novel method to select camera sensors from an arbitrary deployment to form a camera barrier is proposed, and redundancy reduction techniques to effectively reduce the number of cameras used are presented.
Abstract: Barrier coverage has attracted much attention in the past few years. However, most of the previous works focused on traditional scalar sensors. We propose to study barrier coverage in camera sensor networks. One fundamental difference between camera and scalar sensor is that cameras from different positions can form quite different views of the object. As a result, simply combining the sensing range of the cameras across the field does not necessarily form an effective camera barrier since the face image (or the interested aspect) of the object may be missed. To address this problem, we use the angle between the object's facing direction and the camera's viewing direction to measure the quality of sensing. An object is full-view covered if there is always a camera to cover it no matter which direction it faces and the camera's viewing direction is sufficiently close to the object's facing direction. We study the problem of constructing a camera barrier, which is essentially a connected zone across the monitored field such that every point within this zone is full-view covered. We propose a novel method to select camera sensors from an arbitrary deployment to form a camera barrier, and present redundancy reduction techniques to effectively reduce the number of cameras used. We also present techniques to deploy cameras for barrier coverage in a deterministic environment, and analyze and optimize the number of cameras required for this specific deployment under various parameters.
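A simple way to test the full-view condition at a single point is to sweep candidate facing directions and check that some in-range camera views the point from a direction close enough to each one. The sketch below is one reading of that definition in 2-D; the angular sampling, ranges, and example layout are assumptions, and it omits the barrier-construction and redundancy-reduction steps of the paper.

```python
import numpy as np

def full_view_covered(obj, cameras, r, theta, n_dirs=360):
    """Check the full-view condition at point 'obj' (2-D): for every
    candidate facing direction there must be a camera within range r
    whose direction from the object is within angle theta of that
    facing direction, i.e. some camera sees the object's 'front'."""
    def ang_diff(a, b):
        d = abs(a - b) % (2 * np.pi)
        return min(d, 2 * np.pi - d)

    vecs = cameras - obj
    dists = np.linalg.norm(vecs, axis=1)
    cam_dirs = np.arctan2(vecs[:, 1], vecs[:, 0])
    for facing in np.linspace(0.0, 2 * np.pi, n_dirs, endpoint=False):
        if not any(d <= r and ang_diff(c, facing) <= theta
                   for d, c in zip(dists, cam_dirs)):
            return False
    return True

# four cameras around the origin with a 60-degree tolerance -> covered
cams = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])
print(full_view_covered(np.zeros(2), cams, r=2.0, theta=np.pi / 3))
```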

Patent
09 Mar 2011
TL;DR: In this paper, the authors proposed a solid-state image sensor with high sensitivity, which consists of an infrared detecting pixel, a non-sensitive pixel, and a differential amplifier.
Abstract: PROBLEM TO BE SOLVED: To provide a solid-state image sensor having high sensitivity. SOLUTION: A solid-state image sensor in an embodiment comprises an infrared detecting pixel whose output potential varies upon receiving infrared, a non-sensitive pixel whose output potential varies less upon receiving infrared than that of the infrared detecting pixel, a row selection line which applies a drive potential to both the infrared detecting pixel and the non-sensitive pixel, and a differential amplifier which outputs a potential corresponding to the difference between the output potential of the infrared detecting pixel applied to one input terminal and the output potential of the non-sensitive pixel applied to another input terminal. The infrared detecting pixel, the non-sensitive pixel, and the differential amplifier are disposed on the same semiconductor substrate.

Patent
16 Sep 2011
TL;DR: In this article, removable, pluggable and disposable opto-electronic modules for illumination and imaging for endoscopy or borescopy are provided for use with portable display devices.
Abstract: Various embodiments for providing removable, pluggable and disposable opto-electronic modules for illumination and imaging for endoscopy or borescopy are provided for use with portable display devices. Generally, various rigid, flexible or expandable single use medical or industrial devices with an access channel, can include one or more solid state or other compact electro-optic illuminating elements located thereon. Additionally, such opto-electronic modules may include illuminating optics, imaging optics, and/or image capture devices, and airtight means for suction and delivery within the device. The illuminating elements may have different wavelengths and can be time-synchronized with an image sensor to illuminate an object for 2D and 3D imaging, or for certain diagnostic purposes.

Patent
Frank Doepke
17 May 2011
TL;DR: In this paper, the authors describe a method for panoramic photography in handheld personal electronic devices using a positional sensor-assisted technique with motion filtering and geometric corrections on captured image data.
Abstract: This disclosure pertains to devices, methods, and computer readable media for performing positional sensor-assisted panoramic photography techniques in handheld personal electronic devices. Generalized steps that may be used to carry out the panoramic photography techniques described herein include, but are not necessarily limited to: 1) acquiring image data from the electronic device's image sensor; 2) performing "motion filtering" on the acquired image data, e.g., using information returned from positional sensors of the electronic device to inform the processing of the image data; 3) performing image registration between adjacent captured images; 4) performing geometric corrections on captured image data, e.g., due to perspective changes and/or camera rotation about a non-center of perspective (COP) camera point; and 5) "stitching" the captured images together to create the panoramic scene, e.g., blending the image data in the overlap area between adjacent captured images. The resultant stitched panoramic image may be cropped before final storage.
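Of the steps listed above, the positional-sensor-assisted part is the "motion filtering"; a minimal sketch of such a filter, which keeps a frame only after the device has rotated by an assumed step angle, is shown below. The threshold and the use of yaw alone are illustrative simplifications, not details from the disclosure.

```python
def motion_filter(frames, yaw_deg, step_deg=2.5):
    """Positional-sensor-assisted 'motion filtering' sketch: keep a
    captured frame only when the device has rotated far enough (per the
    gyroscope yaw angle) since the last kept frame, so registration and
    stitching work on well-spaced images.  step_deg is illustrative."""
    kept, last = [], None
    for frame, yaw in zip(frames, yaw_deg):
        if last is None or abs(yaw - last) >= step_deg:
            kept.append(frame)
            last = yaw
    return kept

# keeps frames 'a', 'c', 'e'
print(motion_filter(["a", "b", "c", "d", "e"], [0.0, 1.0, 3.0, 4.0, 6.0]))
```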

Journal ArticleDOI
TL;DR: This paper uses the abstract notion of light field representation as a general tool to describe computational camera designs, where each camera can be formulated as a projection of a high-dimensional light field to a 2-D image sensor.
Abstract: A computational camera uses a combination of optics and processing to produce images that cannot be captured with traditional cameras. In the last decade, computational imaging has emerged as a vibrant field of research. A wide variety of computational cameras has been demonstrated to encode more useful visual information in the captured images, as compared with conventional cameras. In this paper, we survey computational cameras from two perspectives. First, we present a taxonomy of computational camera designs according to the coding approaches, including object side coding, pupil plane coding, sensor side coding, illumination coding, camera arrays and clusters, and unconventional imaging systems. Second, we use the abstract notion of light field representation as a general tool to describe computational camera designs, where each camera can be formulated as a projection of a high-dimensional light field to a 2-D image sensor. We show how individual optical devices transform light fields and use these transforms to illustrate how different computational camera designs (collections of optical devices) capture and encode useful visual information.
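The light-field view of a camera can be made concrete with a toy projection: a 4-D light field sampled over aperture and pixel coordinates is reduced to a 2-D image by weighting aperture samples with a pupil-plane code. The sampling sizes and the pupil-mask coding below are illustrative assumptions, not a design from the survey.

```python
import numpy as np

def project_light_field(L, pupil_mask):
    """Project a 4-D light field L[u, v, x, y] (aperture coordinates u, v;
    pixel coordinates x, y) onto a 2-D image by weighting the aperture
    samples with a pupil-plane code.  An all-ones mask corresponds to a
    conventional full-aperture camera."""
    return np.tensordot(pupil_mask, L, axes=([0, 1], [0, 1]))

L = np.random.rand(5, 5, 16, 16)                       # toy light field
image = project_light_field(L, np.ones((5, 5)))        # conventional capture
coded = project_light_field(L, np.random.rand(5, 5))   # pupil-plane coded capture
```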

Proceedings ArticleDOI
10 Apr 2011
TL;DR: With this model, an efficient method for full-view coverage detection in any given camera sensor network is proposed, and a sufficient condition is derived on the sensor density needed for full-view coverage in a random uniform deployment.
Abstract: Camera sensors are different from traditional scalar sensors, as different cameras at different positions can form distinct views of the object. However, the traditional disk sensing model does not consider this intrinsic property of camera sensors. To this end, we propose a novel model called full-view coverage. An object is considered to be full-view covered if for any direction from 0 to 2π (the object's facing direction), there is always a sensor such that the object is within the sensor's range and, more importantly, the sensor's viewing direction is sufficiently close to the object's facing direction. With this model, we propose an efficient method for full-view coverage detection in any given camera sensor network. We also derive a sufficient condition on the sensor density needed for full-view coverage in a random uniform deployment. Finally, we show a necessary and sufficient condition on the sensor density for full-view coverage in a triangular lattice-based deployment.