
Showing papers on "Image sensor published in 2012"


Journal ArticleDOI
TL;DR: A three-dimensional range camera is demonstrated that can look around a corner using diffusely reflected light, achieving sub-millimetre depth precision and centimetre lateral precision over 40 cm × 40 cm × 40 cm of hidden space.
Abstract: The recovery of objects obscured by scattering is an important goal in imaging and has been approached by exploiting, for example, coherence properties, ballistic photons or penetrating wavelengths. Common methods use scattered light transmitted through an occluding material, although these fail if the occluder is opaque. Light is scattered not only by transmission through objects, but also by multiple reflection from diffuse surfaces in a scene. This reflected light contains information about the scene that becomes mixed by the diffuse reflections before reaching the image sensor. This mixing is difficult to decode using traditional cameras. Here we report the combination of a time-of-flight technique and computational reconstruction algorithms to untangle image information mixed by diffuse reflection. We demonstrate a three-dimensional range camera able to look around a corner using diffusely reflected light that achieves sub-millimetre depth precision and centimetre lateral precision over 40 cm × 40 cm × 40 cm of hidden space. An important goal in optics is to image objects hidden by turbid media, although line-of-sight techniques fail when the obscuring medium becomes opaque. Velten et al. use ultrafast imaging techniques to recover three-dimensional shapes of non-line-of-sight objects after reflection from diffuse surfaces.

641 citations
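The reconstruction step in this class of non-line-of-sight imaging amounts to ellipsoidal backprojection: each time bin of each measurement constrains hidden scatterers to an ellipsoid, and summing measurements over a voxel grid makes hidden surfaces stand out. Below is a minimal sketch of that idea only — all names and the setup are illustrative assumptions, not the authors' code, and the path segments from the laser to the wall and from the wall to the camera, as well as the filtering the authors apply, are omitted:

```python
import numpy as np

def backproject(measurements, laser_spots, sensor_spot, voxels, c=3e8, dt=2e-12):
    """Ellipsoidal backprojection sketch for non-line-of-sight imaging.

    measurements : (n_spots, n_bins) time histograms of returned light
    laser_spots  : (n_spots, 3) positions illuminated on the relay wall
    sensor_spot  : (3,) wall point observed by the time-resolved sensor
    voxels       : (n_vox, 3) candidate points in the hidden volume
    """
    heat = np.zeros(len(voxels))
    n_bins = measurements.shape[1]
    for i, spot in enumerate(laser_spots):
        # Path considered here: laser spot -> hidden voxel -> observed wall point.
        d = (np.linalg.norm(voxels - spot, axis=1)
             + np.linalg.norm(voxels - sensor_spot, axis=1))
        bins = np.round(d / (c * dt)).astype(int)
        valid = bins < n_bins
        heat[valid] += measurements[i, bins[valid]]
    return heat  # peaks indicate likely hidden surfaces
```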


Proceedings ArticleDOI
14 May 2012
TL;DR: It is demonstrated that the proposed checkerboard corner detector significantly outperforms the current state of the art and that the proposed camera-to-range registration method is able to discover multiple solutions in the case of ambiguities.
Abstract: As a core robotic and vision problem, camera and range sensor calibration have been researched intensely over the last decades. However, robotic research efforts still often get heavily delayed by the requirement of setting up a calibrated system consisting of multiple cameras and range measurement units. With regard to removing this burden, we present a toolbox with web interface for fully automatic camera-to-camera and camera-to-range calibration. Our system is easy to set up and recovers intrinsic and extrinsic camera parameters as well as the transformation between cameras and range sensors within one minute. In contrast to existing calibration approaches, which often require user intervention, the proposed method is robust to varying imaging conditions, fully automatic, and easy to use since a single image and range scan proves sufficient for most calibration scenarios. Experimentally, we demonstrate that the proposed checkerboard corner detector significantly outperforms the current state of the art. Furthermore, the proposed camera-to-range registration method is able to discover multiple solutions in the case of ambiguities. Experiments using a variety of sensors such as grayscale and color cameras, the Kinect 3D sensor and the Velodyne HDL-64 laser scanner show the robustness of our method in different indoor and outdoor settings and under various lighting conditions.

488 citations
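For context, the baseline that such corner detectors are typically measured against is OpenCV's stock chessboard pipeline. The snippet below shows that standard baseline, not the paper's detector; the file name and pattern size are placeholder assumptions:

```python
import cv2

img = cv2.imread("calib.png", cv2.IMREAD_GRAYSCALE)
pattern = (7, 5)  # inner corners per checkerboard row and column

found, corners = cv2.findChessboardCorners(img, pattern)
if found:
    # Refine detections to sub-pixel accuracy, as calibration pipelines do.
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001)
    corners = cv2.cornerSubPix(img, corners, (11, 11), (-1, -1), criteria)
```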


Proceedings ArticleDOI
TL;DR: A new type of plenoptic camera with an extended depth of field and a maximal effective resolution of up to a quarter of the sensor resolution is introduced.
Abstract: Placing a micro lens array in front of an image sensor transforms a normal camera into a single lens 3D camera, which also allows the user to change the focus and the point of view after a picture has been taken. While the concept of such plenoptic cameras has been known since 1908, only recently have the increased computing power of low-cost hardware and the advances in micro lens array production made the application of plenoptic cameras feasible. This text presents a detailed analysis of plenoptic cameras and introduces a new type of plenoptic camera with an extended depth of field and a maximal effective resolution of up to a quarter of the sensor resolution.

412 citations
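The post-capture refocusing mentioned in the abstract is easiest to see for the classic (unfocused) plenoptic design, where it reduces to shift-and-add over the sub-aperture views; the focused design this paper introduces uses a more involved reconstruction. A hedged sketch of the classic variant follows — the 4D array layout and names are my assumptions:

```python
import numpy as np
from scipy.ndimage import shift

def refocus(lf, alpha):
    """Shift-and-add refocusing of a 4D light field lf[u, v, s, t].

    alpha sets the synthetic focal plane; alpha = 0 reproduces the
    image focused at the original microlens focal plane."""
    U, V = lf.shape[:2]
    out = np.zeros(lf.shape[2:], dtype=float)
    for u in range(U):
        for v in range(V):
            # Translate each sub-aperture view in proportion to its
            # offset from the aperture centre, then average all views.
            du, dv = alpha * (u - U / 2), alpha * (v - V / 2)
            out += shift(lf[u, v].astype(float), (du, dv), order=1)
    return out / (U * V)
```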


Patent
07 May 2012
TL;DR: In this paper, a system for determining the volume and dimensions of a three-dimensional object using a dimensioning system is described; the system can include an image sensor, non-transitory machine-readable storage, and a processor.
Abstract: Systems and methods of determining the volume and dimensions of a three-dimensional object using a dimensioning system are provided. The dimensioning system can include an image sensor, non-transitory machine-readable storage, and a processor. The dimensioning system can select and fit a three-dimensional packaging wireframe model about each three-dimensional object located within a first point of view of the image sensor. Calibration is performed between the image sensors of the dimensioning system and those of the imaging system. Calibration may occur before run time, in a calibration mode or period. Calibration may occur during a routine. Calibration may be automatically triggered on detection of a coupling between the dimensioning and the imaging systems.

342 citations


Patent
26 Jun 2012
TL;DR: In this paper, an optical indicia reading terminal is described that includes a housing, a multiple pixel image sensor disposed within the housing, an imaging lens assembly configured to focus an image of decodable indicia on the image sensor, an optical bandpass filter disposed in an optical path of light incident on the image sensor, and an analog-to-digital (A/D) converter configured to convert an analog signal read out of the image sensor into a digital signal representative of the analog signal.
Abstract: Methods for using an optical indicia reading terminal including a housing, a multiple pixel image sensor disposed within the housing, an imaging lens assembly configured to focus an image of decodable indicia on the image sensor, an optical bandpass filter disposed in an optical path of light incident on the image sensor, an analog-to-digital (A/D) converter configured to convert an analog signal read out of the image sensor into a digital signal representative of the analog signal, and a processor configured to output decoded message data corresponding to the decodable indicia by processing the digital signal.

341 citations


Patent
15 May 2012
TL;DR: In this paper, an actuator is connected to at least one imaging subsystem for moving an angle of the optical axis relative to the terminal to align the object in the second image data with the object in the first image data.
Abstract: A terminal for measuring at least one dimension of an object includes at least one imaging subsystem and an actuator. The at least one imaging subsystem includes an imaging optics assembly operable to focus an image onto an image sensor array. The imaging optics assembly has an optical axis. The actuator is operably connected to the at least one imaging subsystem for moving an angle of the optical axis relative to the terminal. The terminal is adapted to obtain first image data of the object and is operable to determine at least one of a height, a width, and a depth dimension of the object based on effecting the actuator to change the angle of the optical axis relative to the terminal to align the object in second image data with the object in the first image data, the second image data being different from the first image data.

341 citations


Patent
20 Jun 2012
TL;DR: A decodable indicia reading terminal can comprise a housing including a housing window, a multiple pixel image sensor disposed within the housing, an imaging lens configured to focus an image of decodable indicia on the image sensor, an optical bandpass filter disposed in an optical path of light incident on the image sensor, and an analog-to-digital (A/D) converter configured to convert an analog signal read out of the image sensor into a digital signal representative of the analog signal, as discussed by the authors.
Abstract: A decodable indicia reading terminal can comprise a housing including a housing window, a multiple pixel image sensor disposed within the housing, an imaging lens configured to focus an image of decodable indicia on the image sensor, an optical bandpass filter disposed in an optical path of light incident on the image sensor, an analog-to-digital (A/D) converter configured to convert an analog signal read out of the image sensor into a digital signal representative of the analog signal, and a processor configured to output decoded message data corresponding to the decodable indicia by processing the digital signal.

339 citations


Patent
14 Sep 2012
TL;DR: An apparatus having an image sensor is provided in this article; the image sensor can be a two-dimensional image sensor, and a lens assembly can be provided in combination with the image sensor.
Abstract: An apparatus having an image sensor is provided. An image sensor of the apparatus can include a two-dimensional image sensor. A lens assembly can be provided in combination with an image sensor. In one aspect, an apparatus can attempt to decode a decodable symbol representation. In one aspect, an apparatus can output a frame of image data.

331 citations


Patent
Yiyi Guan
27 Jun 2012
TL;DR: In this article, an imaging assembly can define a field of view on a substrate and an illumination assembly can project light within the field of view of the imaging assembly; the imaging apparatus can be configured so that, during an exposure period of the imaging assembly, the illumination assembly emits light that spans multiple visible color wavelength bands.
Abstract: There is set forth herein in one embodiment an imaging apparatus having an imaging assembly and an illumination assembly. The imaging assembly can comprise an imaging lens and an image sensor array. The illumination assembly can include a light source bank having one or more light source. The imaging assembly can define a field of view on a substrate and the illumination assembly can project light within the field of view. The imaging apparatus can be configured so that the illumination assembly during an exposure period of the imaging assembly emits light that spans multiple visible color wavelength bands.

322 citations


Patent
Paul Edward Showering
03 Feb 2012
TL;DR: In this article, a mobile computing device can be configured to periodically display a preview image frame of the target object and to compensate, based on the movement detected by the motion sensor, for movement of the imaging device relative to the target object during the time elapsed between capturing and displaying the preview image frame.
Abstract: A mobile computing device can comprise a microprocessor, a display, at least one motion sensor, and an imaging device including a two-dimensional image sensor and an imaging lens configured to focus an image of a target object on the image sensor. The mobile computing device can be configured to periodically display a preview image frame of the target object. The mobile computing device can be further configured to compensate for movement of the imaging device relative to the target object during the time period elapsed between capturing and displaying the preview image frame, by transforming the preview image frame based on the device movement detected by the motion sensor.

306 citations


Patent
01 Mar 2012
TL;DR: In this paper, a computer system for decoding a signal of decodable indicia is described, which includes a laser scanner configured that outputs a signal and a microprocessor that includes a camera sensor interface that is configured to receive the signal from the laser scanner.
Abstract: A computer system for decoding a signal of decodable indicia. The computer system includes a laser scanner configured that outputs a signal of decodable indicia and a microprocessor that include a camera sensor interface that is configured to receive the signal from the laser scanner.

Patent
Jun Lu, Young Liu, Xi Tao, Feng Chen, Ynjiun Paul Wang
08 May 2012
TL;DR: An encoded information reading (EIR) terminal can comprise a microprocessor communicatively coupled to a system bus, a memory, a communication interface, and a pluggable imaging assembly identified by a type identifier and configured to acquire an image comprising decodable indicia.
Abstract: An encoded information reading (EIR) terminal can comprise a microprocessor communicatively coupled to a system bus, a memory, a communication interface, and a pluggable imaging assembly identified by a type identifier and configured to acquire an image comprising decodable indicia. The imaging assembly can comprise a two-dimensional image sensor configured to output an analog signal representative of the light reflected by an object located within the field of view of the imaging assembly. The EIR terminal can be configured to output, by processing the analog signal, the raw image data derived from the analog signal and/or a decoded message corresponding to the decodable indicia. The imaging assembly can be communicatively coupled to the system bus via an imaging assembly interface comprising a plurality of wires and a multi-pin connector. The imaging assembly interface can comprise one or more wires configured to carry the imaging assembly type identifier. The EIR terminal can be configured, responsive to receiving the type identifier via the one or more wires, to retrieve from the memory one or more imaging assembly configuration information items corresponding to the type identifier and/or to receive via the communication interface one or more imaging assembly configuration information items corresponding to the type identifier. The EIR terminal can be further configured to control the imaging assembly using the imaging assembly configuration information items.

Journal ArticleDOI
TL;DR: With the availability of more channels combined with the powerful digital signal processing (DSP) capabilities of modern computers, the performance of mm-wave imaging systems is advancing rapidly.
Abstract: Due to the enormous advances made in semiconductor technology over the last few years, high integration densities with moderate costs are achievable even in the millimeter-wave (mm-wave) range and beyond, which encourage the development of imaging systems with a high number of channels. The mm-wave range lies between 30 and 300 GHz, with corresponding wavelengths between 10 and 1 mm. While imaging objects with signals of a few millimeters in wavelength, many optically opaque objects appear transparent, making mm-wave imaging attractive for a wide variety of commercial and scientific applications like nondestructive testing (NDT), material characterization, security scanning, and medical screening. The spatial resolution in lateral and range directions as well as the image dynamic range offered by an imaging system are considered the main measures of performance. With the availability of more channels combined with the powerful digital signal processing (DSP) capabilities of modern computers, the performance of mm-wave imaging systems is advancing rapidly.
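The resolution measures named above follow simple rules of thumb: range resolution is set by the signal bandwidth, and cross-range resolution by the wavelength, standoff distance, and aperture size. A worked example with illustrative numbers that are not taken from the paper:

```python
c = 3e8  # speed of light, m/s

# Illustrative values (assumptions, not from the paper): a 77 GHz system
# with 4 GHz of bandwidth and a 0.5 m aperture imaging at 1 m standoff.
f0, B, L, R = 77e9, 4e9, 0.5, 1.0
wavelength = c / f0

range_res = c / (2 * B)                      # = 3.75 cm
cross_range_res = wavelength * R / (2 * L)   # ~3.9 mm

print(f"range resolution: {range_res * 100:.2f} cm")
print(f"cross-range resolution: {cross_range_res * 1000:.1f} mm")
```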

Journal ArticleDOI
04 Jan 2012-Sensors
TL;DR: Recommendations are given for the properties of the imaging sensor and for the collection and processing of UAV image data to ensure accurate point cloud generation.
Abstract: The objective of this investigation was to develop and investigate methods for point cloud generation by image matching using aerial image data collected by quadrocopter type micro unmanned aerial vehicle (UAV) imaging systems. Automatic generation of high-quality, dense point clouds from digital images by image matching is a recent, cutting-edge step forward in digital photogrammetric technology. The major components of the system for point cloud generation are a UAV imaging system, an image data collection process using high image overlaps, and post-processing with image orientation and point cloud generation. Two post-processing approaches were developed: one of the methods is based on BAE Systems' SOCET SET classical commercial photogrammetric software and another is built using Microsoft®'s Photosynth™ service available on the Internet. Empirical testing was carried out in two test areas. Photosynth processing showed that it is possible to orient the images and generate point clouds fully automatically without any a priori orientation information or interactive work. The photogrammetric processing line provided dense and accurate point clouds that followed the theoretical principles of photogrammetry, but also some artifacts were detected. The point clouds from the Photosynth processing were sparser and noisier, which is to a large extent due to the fact that the method is not optimized for dense point cloud generation. Careful photogrammetric processing with self-calibration is required to achieve the highest accuracy. Our results demonstrate the high performance potential of the approach and that with rigorous processing it is possible to reach results that are consistent with theory. We also point out several further research topics. Based on theoretical and empirical results, we give recommendations for the properties of the imaging sensor and for the collection and processing of UAV image data to ensure accurate point cloud generation.
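One sensor property such recommendations revolve around is the ground sample distance (GSD), which follows directly from pixel pitch, focal length, and flying height. A small worked example with assumed values (not figures from the paper):

```python
# Ground sample distance (GSD): the ground footprint of one pixel.
# Illustrative values: 5 µm pixel pitch, 9 mm focal length, 80 m altitude.
pixel_pitch, focal_length, altitude = 5e-6, 9e-3, 80.0

gsd = pixel_pitch * altitude / focal_length
print(f"GSD: {gsd * 100:.1f} cm/pixel")  # ≈ 4.4 cm per pixel
```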

Journal ArticleDOI
TL;DR: In this article, the spatial resolution of digital particle image velocimetry (DPIV) is analyzed as a function of the tracer particles and the imaging and recording system.
Abstract: This work analyzes the spatial resolution that can be achieved by digital particle image velocimetry (DPIV) as a function of the tracer particles and the imaging and recording system. As the in-plane resolution for window-correlation evaluation is determined by the interrogation window size, it was assumed in the past that single-pixel ensemble-correlation increases the spatial resolution up to the pixel limit. However, it is shown that the determining factor limiting the resolution of single-pixel ensemble-correlation is the size of the particle images, which depends on the size of the particles, the magnification, the f-number of the imaging system, and the optical aberrations. Furthermore, since the minimum detectable particle image size is determined by the pixel size of the camera sensor in DPIV, this quantity is also considered in this analysis. It is shown that the optimal magnification that results in the best possible spatial resolution can be estimated from the particle size, the lens properties, and the pixel size of the camera. Thus, the information provided in this paper allows for the optimization of the camera and objective lens choices as well as the working distance for a given setup. Furthermore, the possibility of increasing the spatial resolution by means of particle tracking velocimetry (PTV) is discussed in detail. It is shown that this technique allows the spatial resolution to be increased to the subpixel limit for averaged flow fields. In addition, PTV evaluation methods do not show bias errors that are typical for correlation-based approaches. Therefore, this technique is best suited for the estimation of velocity profiles.
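The particle-image-size argument can be made concrete with the standard textbook estimate, in which the recorded image diameter combines the geometric image of the particle with the diffraction-limited spot. A sketch using that classical formula; the specific numbers below are illustrative, not values from the paper:

```python
import numpy as np

def particle_image_diameter(d_p, M, f_number, wavelength=532e-9):
    """Classical estimate of the particle image diameter on the sensor.

    d_diff = 2.44 * f# * (M + 1) * lambda    (diffraction-limited spot)
    d_tau  = sqrt((M * d_p)^2 + d_diff^2)    (geometric + diffraction)
    """
    d_diff = 2.44 * f_number * (M + 1) * wavelength
    return np.hypot(M * d_p, d_diff)

# Illustrative: 1 µm tracers, magnification 0.5, f/8, green laser light.
print(particle_image_diameter(1e-6, 0.5, 8) * 1e6, "µm on the sensor")
```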

Journal ArticleDOI
TL;DR: The target application for this sensor is time-resolved imaging, in particular fluorescence lifetime imaging microscopy and 3D imaging, and the characterization shows the suitability of the proposed sensor technology for these applications.
Abstract: We report on the design and characterization of a novel time-resolved image sensor fabricated in a 130 nm CMOS process. Each pixel within the 32 × 32 pixel array contains a low-noise single-photon detector and a high-precision time-to-digital converter (TDC). The 10-bit TDC exhibits a timing resolution of 119 ps with a timing uniformity across the entire array of less than 2 LSBs. The differential non-linearity (DNL) and integral non-linearity (INL) were measured at ±0.4 and ±1.2 LSBs, respectively. The pixel array was fabricated with a pitch of 50 μm in both directions and with a total TDC area of less than 2000 μm². The target application for this sensor is time-resolved imaging, in particular fluorescence lifetime imaging microscopy and 3D imaging. The characterization shows the suitability of the proposed sensor technology for these applications.
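From the figures quoted, the TDC's full-scale range follows directly. The sketch below uses the paper's 119 ps LSB; the histogram helper is my own simplification (the mean arrival time equals the lifetime only for an ideal mono-exponential decay with no background, which real FLIM fitting must account for):

```python
import numpy as np

# TDC characteristics quoted in the paper: 10-bit codes, 119 ps LSB.
lsb, bits = 119e-12, 10
full_scale = (2 ** bits) * lsb
print(f"TDC full-scale range: {full_scale * 1e9:.1f} ns")  # ≈ 121.9 ns

def lifetime_from_histogram(counts, t0_bin=0):
    """Crude lifetime estimate from a TCSPC histogram: for an ideal
    mono-exponential decay, the mean arrival time equals tau."""
    t = (np.arange(len(counts)) - t0_bin) * lsb
    return np.sum(t * counts) / np.sum(counts)
```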

Journal ArticleDOI
TL;DR: This paper focuses on the design and characterization of a 256 × 64-pixel image sensor, which also comprises an event-driven readout circuit, an array of 64 row-level high-throughput time-to-digital converters, and a 16 Gbit/s global readout circuit.
Abstract: We introduce an optical time-of-flight image sensor taking advantage of a MEMS-based laser scanning device. Unlike previous approaches, our concept benefits from the high timing resolution and the digital signal flexibility of single-photon pixels in CMOS to allow for a nearly ideal cooperation between the image sensor and the scanning device. This technique enables a high signal-to-background light ratio to be obtained, while simultaneously relaxing the constraint on size of the MEMS mirror. These conditions are critical for devising practical and low-cost depth sensors intended to operate in uncontrolled environments, such as outdoors. A proof-of-concept prototype capable of operating in real time was implemented. This paper focuses on the design and characterization of a 256 × 64-pixel image sensor, which also comprises an event-driven readout circuit, an array of 64 row-level high-throughput time-to-digital converters, and a 16 Gbit/s global readout circuit. Quantitative evaluation of the sensor under 2 klux of background light revealed a repeatability error of 13.5 cm throughout the distance range of 20 meters.

Proceedings ArticleDOI
04 Mar 2012
TL;DR: A method is presented for reducing interference between multiple structured light-based depth sensors operating in the same spectrum with rigidly attached projectors and cameras; the technique should allow inexpensive commodity depth sensors to form the basis of dense large-scale capture systems.
Abstract: We present a method for reducing interference between multiple structured light-based depth sensors operating in the same spectrum with rigidly attached projectors and cameras. A small amount of motion is applied to a subset of the sensors so that each unit sees its own projected pattern sharply, but sees a blurred version of the patterns of other units. If high spatial-frequency patterns are used, each sensor sees its own pattern with higher contrast than the patterns of other units, resulting in simplified pattern disambiguation. An analysis of this method is presented for a group of commodity Microsoft Kinect color-plus-depth sensors with overlapping views. We demonstrate that applying a small vibration with a simple motor to a subset of the Kinect sensors results in reduced interference, which otherwise manifests as holes and noise in the depth maps. Using an array of six Kinects, our system reduced interference-related missing data from 16.6% to 1.4% of the total pixels. Another experiment with three Kinects showed an 82.2% reduction in the measurement error introduced by interference. A side-effect is blurring in the color images of the moving units, which is mitigated with post-processing. We believe our technique will allow inexpensive commodity depth sensors to form the basis of dense large-scale capture systems.

Patent
21 Aug 2012
TL;DR: In this paper, a method is presented for detecting information transmitted by a light source using a complementary metal-oxide-semiconductor (CMOS) image sensor, by detecting the frequency of light pulses produced by the light source.
Abstract: In one aspect, the present disclosure relates to a method of detecting information transmitted by a light source in a complementary metal-oxide-semiconductor (CMOS) image sensor by detecting a frequency of light pulses produced by the light source. In some embodiments, the method includes capturing on the CMOS image sensor with a rolling shutter an image in which different portions of the CMOS image sensor are exposed at different points in time; detecting visible distortions that include alternating stripes in the image; measuring a width of the alternating stripes present in the image; and selecting a symbol based on the width of the alternating stripes present in the image to recover information encoded in the frequency of light pulses produced by the light source captured in the image.
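The decoding chain the claims describe — measure the stripe width in a rolling-shutter frame, then map it to a symbol — can be sketched as follows. The FFT-based period estimator and all names are assumptions of mine, not language from the patent; the key relation is that a light blinking at frequency f produces stripes repeating every row_rate / f rows:

```python
import numpy as np

def stripe_period_rows(image):
    """Estimate the dominant stripe period, in rows, from a rolling-shutter
    frame of a blinking light source."""
    rows = image.mean(axis=1)            # average out the column direction
    rows = rows - rows.mean()
    spectrum = np.abs(np.fft.rfft(rows))
    k = spectrum[1:].argmax() + 1        # skip the DC term
    return len(rows) / k                 # rows per on/off cycle

def decode_symbol(period_rows, row_rate_hz, symbol_table):
    """Map the stripe period back to a transmitter frequency, then pick the
    nearest nominal frequency in symbol_table ({frequency_hz: symbol})."""
    f = row_rate_hz / period_rows
    nominal = min(symbol_table, key=lambda freq: abs(freq - f))
    return symbol_table[nominal]
```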

Patent
28 Aug 2012
TL;DR: In this article, a system and method to calibrate displays using a spectral-based colorimetrically calibrated multicolor camera are described; a set of reference absolute XYZ coordinates of the calibration pattern images is compared with a set of measured XYZ color coordinates captured using the camera.
Abstract: Described are a system and method to calibrate displays using a spectral-based colorimetrically calibrated multicolor camera. Particularly, discussed are systems and methods for displaying a multicolor calibration pattern image on a display unit, capturing the multicolor calibration pattern image with a multicolor camera having a plurality of image sensors, with each image sensor configured to capture a predetermined color of light, comparing a set of reference absolute XYZ coordinates of a set of colors from the multicolor calibration pattern with a set of measured XYZ color coordinates captured using the colorimetrically calibrated camera, and calibrating the display unit based on the comparison between the reference coordinates and the measured coordinates.

Patent
27 Apr 2012
TL;DR: In this article, an image processing apparatus is described that includes an HDR processing unit which inputs images captured while exposure control that changes the exposure time is carried out with a predetermined spatial period and a predetermined temporal period on the pixels that compose an image sensor, and which then carries out image processing.
Abstract: There is provided an image processing apparatus including an HDR (High Dynamic Range) processing unit inputting images picked up while exposure control that changes an exposure time is being carried out with a predetermined spatial period and a predetermined temporal period on pixels that compose an image sensor, and carrying out image processing. The HDR processing unit generates a first combined image by combining pixel values of a plurality of images with different sensitivities generated by an interpolation process using a plurality of consecutively picked-up images, generates a second combined image by combining pixel values of a plurality of images with different sensitivities generated by an interpolation process that uses a single picked-up image, and generates an HDR image by executing a pixel value blending process on the first combined image and the second combined image in accordance with a blending ratio calculated in accordance with movement detection information.
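The core of any such exposure blend is to normalize each image by its exposure time and to weight the long exposure down where it clips. A minimal two-exposure sketch under that assumption — the patent's spatially interleaved readout, interpolation step, and motion-adaptive blending ratio are not reproduced here, and all names are hypothetical:

```python
import numpy as np

def merge_exposures(short_exp, long_exp, t_short, t_long, sat=0.95):
    """Blend two differently exposed images (values in [0, 1]) into one
    radiance estimate, preferring the long exposure except where it
    saturates."""
    radiance_short = short_exp / t_short
    radiance_long = long_exp / t_long
    # Weight falls to 0 as the long exposure approaches saturation.
    w = np.clip((sat - long_exp) / 0.05, 0.0, 1.0)
    return w * radiance_long + (1.0 - w) * radiance_short
```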

Journal ArticleDOI
TL;DR: In this research, a unique freeform microlens array was designed for a compact compound-eye camera to achieve a large field of view, and was fabricated using a combination of ultraprecision diamond broaching and a microinjection molding process.
Abstract: In this research, a unique freeform microlens array was designed and fabricated for a compact compound-eye camera to achieve a large field of view. This microlens array has a field of view of 48°×48°, with a thickness of only 1.6 mm. The freeform microlens array resides on a flat substrate, and thus can be directly mounted to a commercial 2D image sensor. Freeform surfaces were used to design the microlens profiles, thus allowing the microlenses to steer and focus incident rays simultaneously. The profiles of the freeform microlenses were represented using extended polynomials, the coefficients of which were optimized using ZEMAX. To reduce crosstalk among neighboring channels, a micro aperture array was machined using high-speed micromilling. The molded microlens array was assembled with the micro aperture array, an adjustable fixture, and a board-level image sensor to form a compact compound-eye camera system. The imaging tests using the compound-eye camera showed that the unique freeform microlens array was capable of forming proper images, as suggested by design. The measured field of view of ±23.5° also matches the initial design and is considerably larger compared with most similar camera designs using conventional microlens arrays. To achieve low manufacturing cost without sacrificing image quality, the freeform microlens array was fabricated using a combination of ultraprecision diamond broaching and a microinjection molding process.

Patent
Sui Tong Tang
06 Feb 2012
TL;DR: In this article, a camera unit generates a processed digital image by augmenting color image data with infrared image data according to the level of ambient light exposure, and the image data is processed into a digital image, according to a selected mode of operation for the camera unit, using colour image data only when the ambient light is high, but augmenting the color images with infrared images when the level is low to increase the color luminance of the final image.
Abstract: A camera unit generates a processed digital image by augmenting color image data with infrared image data according to the level of ambient light exposure. The camera has an ambient light sensor that detects the level of ambient light in the camera unit and an image sensor that provides image data. One or more quantum dot layers may be included in the image sensor. A camera controller adapts the camera unit for operation in different modes that are selectable based on the levels of detected ambient light. The image data is processed into a digital image, according to the selected mode of operation for the camera unit, using color image data only when the level of ambient light is high, but augmenting the color image data with infrared image data when the level of ambient light is low to increase the color luminance of the final processed digital image.
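The mode logic described amounts to a threshold on the ambient-light reading followed by a luminance lift from the infrared channel. A hedged sketch of that logic; the threshold and blend gain are chosen arbitrarily for illustration and are not values from the patent:

```python
def process_frame(color, ir_luma, ambient_lux, low_light_lux=10.0, gain=0.5):
    """Pick an imaging mode from the ambient light level; in low light,
    lift the colour image's luminance with the infrared channel.

    color: HxWx3 float array in [0, 1]; ir_luma: HxW float array."""
    if ambient_lux >= low_light_lux:
        return color                      # bright scene: colour data only
    # Low light: add a fraction of the IR luminance to every colour channel.
    return (color + gain * ir_luma[..., None]).clip(0.0, 1.0)
```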

Patent
31 May 2012
TL;DR: In this article, an imaging apparatus includes an image sensor configured to capture an image of a subject; an identification information storage unit configured to store a particular subject and a terminal device corresponding to the particular subject; a face detection unit and a face recognition unit configured by a microcomputer.
Abstract: An imaging apparatus includes: an image sensor configured to capture an image of a subject; an identification information storage unit configured to store a particular subject and a terminal device corresponding to the particular subject; a face detection unit and a face recognition unit configured to detect the particular subject stored in the identification information storage unit in the image captured by the image sensor; and a microcomputer configured to notify, when the face detection unit and the face recognition unit detect the particular subject, the terminal device which is stored in the identification information storage unit and corresponds to the detected particular subject that the particular subject is detected.

Proceedings ArticleDOI
16 Jun 2012
TL;DR: Though the depth image is noisy, incomplete, and low resolution, it facilitates both camera motion estimation and frame warping, which together make video stabilization a much better-posed problem.
Abstract: Previous video stabilization methods often employ homographies to model transitions between consecutive frames, or require robust long feature tracks. However, the homography model is invalid for scenes with significant depth variations, and feature point tracking is fragile in videos with textureless objects, severe occlusion or camera rotation. To address these challenging cases, we propose to solve video stabilization with an additional depth sensor such as the Kinect camera. Though the depth image is noisy, incomplete and low resolution, it facilitates both camera motion estimation and frame warping, which together make video stabilization a much better-posed problem. The experiments demonstrate the effectiveness of our algorithm.

Journal ArticleDOI
TL;DR: An approach for geometrically calibrating individual and multiple cameras in both the thermal and visible modalities is presented, and a new geometric mask with high thermal contrast that does not require a flood lamp is presented as an alternative calibration pattern.
Abstract: Accurate and efficient thermal-infrared (IR) camera calibration is important for advancing computer vision research within the thermal modality. This paper presents an approach for geometrically calibrating individual and multiple cameras in both the thermal and visible modalities. The proposed technique can be used to correct for lens distortion and to simultaneously reference both visible and thermal-IR cameras to a single coordinate frame. The most popular existing approach for the geometric calibration of thermal cameras uses a printed chessboard heated by a flood lamp and is comparatively inaccurate and difficult to execute. Additionally, software toolkits provided for calibration either are unsuitable for this task or require substantial manual intervention. A new geometric mask with high thermal contrast that does not require a flood lamp is presented as an alternative calibration pattern. Calibration points on the pattern are then accurately located using a clustering-based algorithm which utilizes the maximally stable extremal region detector. This algorithm is integrated into an automatic end-to-end system for calibrating single or multiple cameras. The evaluation shows that using the proposed mask achieves a mean reprojection error up to 78% lower than that using a heated chessboard. The effectiveness of the approach is further demonstrated by using it to calibrate two multiple-camera multiple-modality setups. Source code and binaries for the developed software are provided on the project Web site.
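The maximally stable extremal region (MSER) detector the paper builds on is available off the shelf in OpenCV. The snippet below shows that building block only; the paper's clustering-based point localization is more involved than simply taking region centroids, and the file name is a placeholder:

```python
import cv2

gray = cv2.imread("thermal_frame.png", cv2.IMREAD_GRAYSCALE)

# Detect maximally stable extremal regions; the paper clusters such
# regions to locate calibration points on its high-contrast mask.
mser = cv2.MSER_create()
regions, bboxes = mser.detectRegions(gray)
centroids = [r.mean(axis=0) for r in regions]  # one candidate point each
```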

Patent
21 May 2012
TL;DR: In this article, an image sensor is described whose imaging pixels each contain multiple photoelectric conversion units; these units photoelectrically convert images formed by split light beams out of the light beam from the photographing optical system and output focus detection signals used to detect a phase difference.
Abstract: An image sensor comprises a first imaging pixel and a second imaging pixel each of which detects an object image formed by a photographing optical system and generates a recording image. Each of the first imaging pixel and the second imaging pixel comprises a plurality of photoelectric conversion units segmented in a first direction, the plurality of photoelectric conversion units have an ability of photoelectrically converting images formed by split light beams out of a light beam from the photographing optical system and outputting focus detection signals to be used to detect a phase difference. A base-line length of photoelectric conversion units to be used to detect the phase difference included in the first imaging pixel is longer than that of photoelectric conversion units to be used to detect the phase difference included in the second imaging pixel.

Journal ArticleDOI
TL;DR: The oversampled binary sensing scheme is formulated as a parameter estimation problem based on quantized Poisson statistics, and the Cramér-Rao lower bound (CRLB) of the estimation variance is shown to approach that of an ideal unquantized sensor, i.e., as if there were no quantization in the sensor measurements.
Abstract: We study a new image sensor that is reminiscent of a traditional photographic film. Each pixel in the sensor has a binary response, giving only a 1-bit quantized measurement of the local light intensity. To analyze its performance, we formulate the oversampled binary sensing scheme as a parameter estimation problem based on quantized Poisson statistics. We show that, with a single-photon quantization threshold and large oversampling factors, the Cramér-Rao lower bound (CRLB) of the estimation variance approaches that of an ideal unquantized sensor, i.e., as if there were no quantization in the sensor measurements. Furthermore, the CRLB is shown to be asymptotically achievable by the maximum-likelihood estimator (MLE). By showing that the log-likelihood function of our problem is concave, we guarantee the global optimality of iterative algorithms in finding the MLE. Numerical results on both synthetic data and images taken by a prototype sensor verify our theoretical analysis and demonstrate the effectiveness of our image reconstruction algorithm. They also suggest the potential application of the oversampled binary sensing scheme in high dynamic range photography.
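For the single-photon threshold the MLE is available in closed form: with K one-bit measurements of a Poisson intensity λ per subpixel, each bit is 1 with probability 1 − e^(−λ), so observing S ones gives λ̂ = −ln(1 − S/K). For higher thresholds the paper's concavity result guarantees that iterative maximization finds the global optimum. A sketch of the closed-form case (a verification script of mine, not the paper's code):

```python
import numpy as np

def binary_mle(bits):
    """Closed-form MLE of the per-subpixel intensity lambda from K one-bit
    measurements with a single-photon threshold (q = 1).

    P(bit = 1) = 1 - exp(-lambda), so lambda_hat = -ln(1 - S/K),
    where S is the number of ones among the K bits."""
    K, S = bits.size, bits.sum()
    if S == K:                        # all ones: the MLE diverges
        return np.inf
    return -np.log1p(-S / K)          # log1p(-p) = ln(1 - p)

rng = np.random.default_rng(0)
lam = 0.3                             # true mean photon count per subpixel
bits = rng.poisson(lam, size=10_000) >= 1
print(binary_mle(bits))               # should be close to 0.3
```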

Journal ArticleDOI
TL;DR: An Event-Driven Convolution Module for computing 2D convolutions on such event streams is presented; it has multi-kernel capability, meaning it selects the convolution kernel depending on the origin of each event.
Abstract: Event-Driven vision sensing is a new way of sensing visual reality in a frame-free manner. That is, the vision sensor (camera) does not capture a sequence of still frames, as conventional video and computer vision systems do. In Event-Driven sensors each pixel autonomously and asynchronously decides when to send its address out. This way, the sensor output is a continuous stream of address events representing reality dynamically and continuously, without being constrained to frames. In this paper we present an Event-Driven Convolution Module for computing 2D convolutions on such event streams. The Convolution Module has been designed so that many of them can be assembled to build modular and hierarchical Convolutional Neural Networks for robust shape- and pose-invariant object recognition. The Convolution Module has multi-kernel capability: it will select the convolution kernel depending on the origin of the event. A proof-of-concept test prototype has been fabricated in a 0.35 μm CMOS process and extensive experimental results are provided. The Convolution Processor has also been combined with an Event-Driven Dynamic Vision Sensor (DVS) for high-speed recognition examples. The chip can discriminate propellers rotating at 2,000 revolutions per second, detect symbols on a 52-card deck when browsing all cards in 410 ms, or detect and follow the center of a phosphor oscilloscope trace rotating at 5 kHz.
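The per-event update at the heart of such a convolution module is simple: each address event stamps a kernel, selected by the event's origin and weighted by its polarity, onto an accumulator array. A software sketch of just that accumulation step — the chip additionally implements leakage and threshold-and-fire event generation, which are omitted here, and all names are mine:

```python
import numpy as np

class EventConvolver:
    """Accumulate a 2D convolution one address event at a time: each
    incoming event adds the polarity-weighted kernel to a state map,
    centred at the event's pixel address."""

    def __init__(self, height, width, kernel):
        self.state = np.zeros((height, width))
        self.kernel = kernel              # assumed square with odd size
        self.r = kernel.shape[0] // 2

    def on_event(self, x, y, polarity):
        r, k = self.r, self.kernel
        h, w = self.state.shape
        # Clip the kernel footprint at the image borders.
        y0, y1 = max(y - r, 0), min(y + r + 1, h)
        x0, x1 = max(x - r, 0), min(x + r + 1, w)
        self.state[y0:y1, x0:x1] += polarity * k[
            y0 - (y - r) : y1 - (y - r), x0 - (x - r) : x1 - (x - r)
        ]
```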

Patent
30 Nov 2012
TL;DR: An image sensing system for a vehicle includes an imager disposed at or proximate to an in-cabin portion of a vehicle windshield and having a forward field of view to the exterior of the vehicle through the vehicle windshield as mentioned in this paper.
Abstract: An image sensing system for a vehicle includes an imager disposed at or proximate to an in-cabin portion of a vehicle windshield and having a forward field of view to the exterior of the vehicle through the vehicle windshield. The photosensor array of the imager is operable to capture image data. The image sensing system identifies objects in the forward field of view of the imager via processing of captured image data by an image processor. The photosensor array may be operable to capture frames of image data and the image sensing system may include an exposure control which determines an accumulation period of time that the photosensor array senses light when capturing a frame of image data. Identification of objects may be based at least in part on at least one of (i) shape, (ii) luminance, (iii) geometry, (iv) spatial location, (v) motion and (vi) spectral characteristic.