
Showing papers on "Image sensor" published in 2014


Journal ArticleDOI
TL;DR: This paper presents a dynamic and active pixel vision sensor (DAVIS) which addresses this deficiency by outputting asynchronous DVS events and synchronous global shutter frames concurrently.
Abstract: Event-based dynamic vision sensors (DVSs) asynchronously report log intensity changes. Their high dynamic range, sub-ms latency and sparse output make them useful in applications such as robotics and real-time tracking. However, they discard absolute intensity information which is useful for object recognition and classification. This paper presents a dynamic and active pixel vision sensor (DAVIS) which addresses this deficiency by outputting asynchronous DVS events and synchronous global shutter frames concurrently. The active pixel sensor (APS) circuits and the DVS circuits within a pixel share a single photodiode. Measurements from a 240×180 sensor array of 18.5 μm² pixels fabricated in a 0.18 μm 6M1P CMOS image sensor (CIS) technology show a dynamic range of 130 dB with 11% contrast detection threshold, minimum 3 μs latency, and 3.5% contrast matching for the DVS pathway; and a 51 dB dynamic range with 0.5% FPN for the APS readout.

735 citations
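
As context for how the DVS pathway above works: each pixel emits an ON or OFF event whenever its log intensity moves by more than a contrast threshold (about 11% for this sensor). Below is a minimal single-pixel sketch in Python; the sampled interface and names are illustrative, not the sensor's actual asynchronous circuit.

```python
def dvs_events(times, log_intensity, theta=0.11):
    """Emit (timestamp, polarity) events whenever the log intensity
    drifts more than the contrast threshold theta from the last
    reference level (~11% contrast for the DAVIS above). The real
    circuit is asynchronous and per-pixel; this is a sampled model."""
    events, ref = [], log_intensity[0]
    for t, li in zip(times, log_intensity):
        while li - ref > theta:       # brightness increased: ON event
            ref += theta
            events.append((t, +1))
        while ref - li > theta:       # brightness decreased: OFF event
            ref -= theta
            events.append((t, -1))
    return events
```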


Journal ArticleDOI
01 Jan 2014
TL;DR: An overview of the current applications of thermal cameras is provided, and the nature of thermal radiation and the technology of thermal cameras are described.
Abstract: Thermal cameras are passive sensors that capture the infrared radiation emitted by all objects with a temperature above absolute zero. This type of camera was originally developed as a surveillance and night vision tool for the military, but recently the price has dropped, significantly opening up a broader field of applications. Deploying this type of sensor in vision systems eliminates the illumination problems of normal greyscale and RGB cameras. This survey provides an overview of the current applications of thermal cameras. Applications include animals, agriculture, buildings, gas detection, industrial, and military applications, as well as detection, tracking, and recognition of humans. Moreover, this survey describes the nature of thermal radiation and the technology of thermal cameras.

546 citations


Patent
Ynjiun Paul Wang
13 Jan 2014
TL;DR: In this paper, an image sensor array is proposed that includes a global shutter shared by first and second pixels, where the global shutter is equipped with a charge storage area and an associated shield for reducing charge build-up on the charge storage area attributable to incident light rays.
Abstract: There is set forth herein in one embodiment an image sensor array including a global shutter shared by first and second pixels. The global shutter can include a charge storage area having an associated shield for reducing charge build up on the charge storage area attributable to incident light rays. There is set forth herein in one embodiment an imaging apparatus having one or more configuration. The one or more configuration can include one or more of a configuration wherein a frame read out from an image sensor array has unbinned pixel values, a configuration wherein a frame read out from an image sensor array has binned pixel values corresponding to an M×N, M>=2, N>=2 arrangement of pixel values, and a configuration wherein a frame read out from an image sensor array has binned pixel values corresponding to a 1×N, N>=2 arrangement of pixel values.

340 citations
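
The binned readout configurations in the claim reduce to block sums over the pixel array. A minimal sketch follows (Python/NumPy); the function name and edge-trimming behaviour are illustrative choices, not taken from the patent.

```python
import numpy as np

def bin_pixels(frame, m=2, n=2):
    """Sum each m x n block of pixel values into one binned value,
    covering both the M x N (M, N >= 2) and the 1 x N (m = 1)
    arrangements described in the claim. Edge rows/columns that do
    not fill a whole block are trimmed."""
    h, w = frame.shape
    h, w = h - h % m, w - w % n
    return frame[:h, :w].reshape(h // m, m, w // n, n).sum(axis=(1, 3))
```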


Journal ArticleDOI
TL;DR: Characteristics unique to image-sensor-based VLC as compared to radio wave technology are identified, and the technology's effectiveness for improving automotive safety is demonstrated through a V2V communication field trial.
Abstract: The present article introduces VLC for automotive applications using an image sensor. In particular, V2I-VLC and V2V-VLC are presented. While previous studies have documented the effectiveness of V2I and V2V communication using radio technology in terms of improving automotive safety, in the present article, we identify characteristics unique to image-sensor-based VLC as compared to radio wave technology. The two primary advantages of a VLC system are its line-of-sight feature and an image sensor that provides not only VLC functions but also the potential vehicle safety applications made possible by image and video processing. Herein, we present two ongoing image-sensor-based V2I-VLC and V2V-VLC projects. In the first, a transmitter using an LED array (which is assumed to be an LED traffic light) and a receiver using a high-frame-rate CMOS image sensor camera are introduced as a potential V2I-VLC system. For this system, real-time transmission of the audio signal has been confirmed through a field trial. In the second project, we introduce a newly developed CMOS image sensor capable of receiving high-speed optical signals and demonstrate its effectiveness through a V2V communication field trial. In experiments, due to the high-speed signal reception capability of the camera receiver using the developed image sensor, a data transmission rate of 10 Mb/s has been achieved, and image (320 × 240, color) reception has been confirmed together with simultaneous reception of various internal vehicle data, such as vehicle ID and speed.

340 citations


Journal ArticleDOI
28 Aug 2014
TL;DR: It is suggested that bioinspired vision systems have the potential to outperform conventional, frame-based vision systems in many application fields and to establish new benchmarks in terms of redundancy suppression and data compression, dynamic range, temporal resolution, and power efficiency.
Abstract: State-of-the-art image sensors suffer from significant limitations imposed by their very principle of operation. These sensors acquire the visual information as a series of “snapshot” images, recorded at discrete points in time. Visual information gets time quantized at a predetermined frame rate which has no relation to the dynamics present in the scene. Furthermore, each recorded frame conveys the information from all pixels, regardless of whether this information, or a part of it, has changed since the last frame had been acquired. This acquisition method limits the temporal resolution, potentially missing important information, and leads to redundancy in the recorded image data, unnecessarily inflating data rate and volume. Biology is leading the way to a more efficient style of image acquisition. Biological vision systems are driven by events happening within the scene in view, and not, like image sensors, by artificially created timing and control signals. Translating the frameless paradigm of biological vision to artificial imaging systems implies that control over the acquisition of visual information is no longer being imposed externally to an array of pixels but the decision making is transferred to the single pixel that handles its own information individually. In this paper, recent developments in bioinspired, neuromorphic optical sensing and artificial vision are presented and discussed. It is suggested that bioinspired vision systems have the potential to outperform conventional, frame-based vision systems in many application fields and to establish new benchmarks in terms of redundancy suppression and data compression, dynamic range, temporal resolution, and power efficiency. Demanding vision tasks such as real-time 3-D mapping, complex multiobject tracking, or fast visual feedback loops for sensory-motor action, tasks that often pose severe, sometimes insurmountable, challenges to conventional artificial vision systems, are in reach using bioinspired vision sensing and processing techniques.

329 citations


Journal ArticleDOI
20 Nov 2014
TL;DR: A compressive technique is introduced that does not require postprocessing, resulting in a predicted frame rate increase by a factor of 8 from a compression ratio of 12.5% with only 28% relative error.
Abstract: Microscopy is an essential tool in a huge range of research areas. Until now, microscopy has been largely restricted to imaging in the visible region of the electromagnetic spectrum. Here we present a microscope system that uses single-pixel imaging techniques to produce images simultaneously in the visible and shortwave infrared. We apply our microscope to the inspection of various objects, including a silicon CMOS sensor, highlighting the complementarity of the visible and shortwave infrared wavebands. The system is capable of producing images with resolutions between 32×32 and 128×128 pixels at corresponding frame rates between 10 and 0.6 Hz. We introduce a compressive technique that does not require postprocessing, resulting in a predicted frame rate increase by a factor of 8 from a compression ratio of 12.5% with only 28% relative error.

301 citations
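
For intuition about single-pixel imaging, the fully sampled case can be sketched in a few lines: display structured patterns, record one detector value per pattern, and invert the orthogonal pattern basis. The `measure` callback below is a stand-in for the physical modulator and detector, and the whole sketch is illustrative; the compressive mode reported above additionally drops low-significance patterns to trade error for speed.

```python
import numpy as np
from scipy.linalg import hadamard

def single_pixel_image(measure, n=32):
    """Reconstruct an n x n image from N = n*n single-pixel readings,
    one per Hadamard pattern, using H @ H.T = N * I. The +/-1 patterns
    are realised differentially on real hardware; measure(pattern)
    stands in for one photodiode reading."""
    N = n * n                          # must be a power of two
    H = hadamard(N)                    # rows are +/-1 patterns
    y = np.array([measure(row.reshape(n, n)) for row in H])
    return (H.T @ y / N).reshape(n, n)
```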


Journal ArticleDOI
TL;DR: An optical vehicle-to-vehicle (V2V) communication system based on an optical wireless communication technology using an LED transmitter and a camera receiver which employs a special CMOS image sensor, i.e, an optical communication image sensor (OCI).
Abstract: This paper introduces an optical vehicle-to-vehicle (V2V) communication system based on an optical wireless communication technology using an LED transmitter and a camera receiver, which employs a special CMOS image sensor, ie, an optical communication image sensor (OCI) The OCI has a “communication pixel (CPx)” that can promptly respond to light intensity variations and an output circuit of a “flag image” in which only high-intensity light sources, such as LEDs, have emerged The OCI that employs these two technologies provides capabilities for a 10-Mb/s optical signal reception and real-time LED detection to the camera receiver The optical V2V communication system consisting of the LED transmitters mounted on a leading vehicle and the camera receiver mounted on a following vehicle is constructed, and various experiments are conducted under real driving and outdoor lighting conditions Due to the LED detection method using the flag image, the camera receiver correctly detects LEDs, in real time, in challenging outdoor conditions Furthermore, between two vehicles, various vehicle internal data (such as speed) and image data (320 × 240, color) are transmitted successfully, and the 130-fps image data reception is achieved while driving outside

241 citations


Patent
08 Jan 2014
TL;DR: In this paper, an illumination subassembly is configured to project onto an object a pattern of monochromatic optical radiation in a given wavelength band; images of the pattern are then processed by a processor to generate and output a depth map of the object in registration with a color image of the object.
Abstract: Imaging apparatus includes an illumination subassembly, which is configured to project onto an object a pattern of monochromatic optical radiation in a given wavelength band. An imaging subassembly includes an image sensor, which is configured both to capture a first, monochromatic image of the pattern on the object by receiving the monochromatic optical radiation reflected from the object and to capture a second, color image of the object by receiving polychromatic optical radiation, and to output first and second image signals responsively to the first and second images, respectively. A processor is configured to process the first and second signals so as to generate and output a depth map of the object in registration with the color image.

221 citations


Journal ArticleDOI
TL;DR: An augmented reality navigation system with automatic marker-free image registration, using 3-D image overlay and stereo tracking, is presented for dental surgery; the overall image overlay error of the proposed system was 0.71 mm.
Abstract: Computer-assisted oral and maxillofacial surgery (OMS) has been rapidly evolving since the last decade. State-of-the-art surgical navigation in OMS still suffers from bulky tracking sensors, troublesome image registration procedures, patient movement, loss of depth perception in visual guidance, and low navigation accuracy. We present an augmented reality navigation system with automatic marker-free image registration using 3-D image overlay and stereo tracking for dental surgery. A customized stereo camera is designed to track both the patient and instrument. Image registration is performed by patient tracking and real-time 3-D contour matching, without requiring any fiducial and reference markers. Real-time autostereoscopic 3-D imaging is implemented with the help of a consumer-level graphics processing unit. The resulting 3-D image of the patient's anatomy is overlaid on the surgical site by a half-silvered mirror using image registration and IP-camera registration to guide the surgeon by exposing hidden critical structures. The 3-D image of the surgical instrument is also overlaid over the real one for an augmented display. The 3-D images present both stereo and motion parallax from which depth perception can be obtained. Experiments were performed to evaluate various aspects of the system; the overall image overlay error of the proposed system was 0.71 mm.

200 citations


Journal ArticleDOI
TL;DR: This work fabricates pixels consisting of vertical silicon nanowires with integrated photodetectors, demonstrates that their spectral sensitivities are governed by nanowire radius, and performs color imaging.
Abstract: The organic dye filters of conventional color image sensors achieve the red/green/blue response needed for color imaging, but have disadvantages related to durability, low absorption coefficient, and fabrication complexity. Here, we report a new paradigm for color imaging based on all-silicon nanowire devices and no filters. We fabricate pixels consisting of vertical silicon nanowires with integrated photodetectors, demonstrate that their spectral sensitivities are governed by nanowire radius, and perform color imaging. Our approach is conceptually different from filter-based methods, as absorbed light is converted to photocurrent, ultimately presenting the opportunity for very high photon efficiency.

200 citations


Proceedings ArticleDOI
15 Apr 2014
TL;DR: This paper presents a communication scheme that enables interior ambient LED lighting systems to send data to mobile devices using either cameras or light sensors, and shows through experiments how a binary frequency shift keying modulation scheme can be used to transmit data from up to 29 unique light sources simultaneously in a single collision domain.
Abstract: The omnipresence of indoor lighting makes it an ideal vehicle for pervasive communication with mobile devices. In this paper, we present a communication scheme that enables interior ambient LED lighting systems to send data to mobile devices using either cameras or light sensors. By exploiting rolling shutter camera sensors that are common on tablets, laptops and smartphones, it is possible to detect high-frequency changes in light intensity reflected off of surfaces and in direct line-of-sight of the camera. We present a demodulation approach that allows smartphones to accurately detect frequencies as high as 8 kHz with 0.2 kHz channel separation. In order to avoid humanly perceivable flicker in the lighting, our system operates at frequencies above 2 kHz and compensates for the non-ideal frequency response of standard LED drivers by adjusting the light's duty-cycle. By modulating the PWM signal commonly used to drive LED lighting systems, we are able to encode data that can be used as localization landmarks. We show through experiments how a binary frequency shift keying modulation scheme can be used to transmit data at 1.25 bytes per second (fast enough to send an ID code) from up to 29 unique light sources simultaneously in a single collision domain. We also show how tags can demodulate the same signals using a light sensor instead of a camera for low-power applications.
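
The rolling-shutter trick above can be sketched compactly: because rows are exposed sequentially, the flicker frequency appears as a spatial ripple across rows, recoverable with a 1-D FFT. A hedged sketch in Python/NumPy; `row_time_s` is a device-specific per-row readout time, and this is an illustration of the principle rather than the paper's demodulator.

```python
import numpy as np

def dominant_flicker_hz(frame, row_time_s):
    """Estimate the LED modulation frequency from one rolling-shutter
    frame: average each row to one sample, remove the DC lighting
    level, and take the FFT peak. With BFSK, classifying this peak
    against the two candidate tones decodes one bit."""
    rows = frame.astype(float).mean(axis=1)    # one sample per row
    rows -= rows.mean()                        # drop the DC component
    spectrum = np.abs(np.fft.rfft(rows))
    freqs = np.fft.rfftfreq(len(rows), d=row_time_s)
    return freqs[np.argmax(spectrum)]
```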

Journal ArticleDOI
TL;DR: Characterization of gamma detection performance with a 3 × 3 × 5 mm³ LYSO scintillator at 20°C is reported, showing a 511-keV gamma energy resolution of 10.9% and a coincidence timing resolution of 399 ps.
Abstract: An 8 × 16 pixel array based on CMOS small-area silicon photomultiplier (mini-SiPM) detectors for PET applications is reported. Each pixel is 570 × 610 μm² in size and contains four digital mini-SiPMs, for a total of 720 SPADs, resulting in a full-chip fill-factor of 35.7%. For each gamma detection, the pixel provides the total detected energy and a timestamp, obtained through two 7-b counters and two 12-b 64-ps TDCs. An adder tree overlaid on top of the pixel array sums the sensor total counts at up to 100 Msamples/s, which are then used for detecting the asynchronous gamma events on-chip, while also being output in real time. Characterization of gamma detection performance with a 3 × 3 × 5 mm³ LYSO scintillator at 20°C is reported, showing a 511-keV gamma energy resolution of 10.9% and a coincidence timing resolution of 399 ps.
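
Downstream of such a chip, PET coincidence processing pairs timestamps from opposing detectors. A toy sketch in Python; the window width is an illustrative choice sized to the reported 399 ps timing resolution, not a value from the paper.

```python
def coincidences(stamps_a, stamps_b, window_ps=500):
    """Pair gamma timestamps (in ps) from two opposing detectors when
    they fall within a coincidence window. Two-pointer sweep over
    sorted timestamp lists; each event pairs at most once."""
    a, b = sorted(stamps_a), sorted(stamps_b)
    pairs, j = [], 0
    for ta in a:
        while j < len(b) and b[j] < ta - window_ps:
            j += 1                    # skip b-events too early to match
        if j < len(b) and abs(b[j] - ta) <= window_ps:
            pairs.append((ta, b[j]))
            j += 1
    return pairs
```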

Proceedings ArticleDOI
23 Jun 2014
TL;DR: This paper formulates the reconstruction task as a linear inverse problem on the transient response of a scene, which they acquire using an affordable setup consisting of a modulated light source and a time-of-flight image sensor, and achieves resolutions on the order of a few centimeters for object shape and albedo.
Abstract: The functional difference between a diffuse wall and a mirror is well understood: one scatters back into all directions, and the other one preserves the directionality of reflected light. The temporal structure of the light, however, is left intact by both: assuming simple surface reflection, photons that arrive first are reflected first. In this paper, we exploit this insight to recover objects outside the line of sight from second-order diffuse reflections, effectively turning walls into mirrors. We formulate the reconstruction task as a linear inverse problem on the transient response of a scene, which we acquire using an affordable setup consisting of a modulated light source and a time-of-flight image sensor. By exploiting sparsity in the reconstruction domain, we achieve resolutions in the order of a few centimeters for object shape (depth and laterally) and albedo. Our method is robust to ambient light and works for large room-sized scenes. It is drastically faster and less expensive than previous approaches using femtosecond lasers and streak cameras, and does not require any moving parts.
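
The linear inverse problem described above, with a sparsity prior, is commonly solved by iterative soft thresholding. Below is a generic ISTA sketch (Python/NumPy) under the assumption that a light-transport matrix `A` has been precomputed; this is a standard solver for this problem class, not the authors' exact one.

```python
import numpy as np

def ista(A, y, lam=0.1, steps=200):
    """Minimise ||A x - y||^2 / 2 + lam * ||x||_1, where x stacks the
    hidden scene's voxel albedos and y the transient measurements.
    Classic proximal gradient with step 1/L."""
    L = np.linalg.norm(A, 2) ** 2      # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        g = x - A.T @ (A @ x - y) / L                           # gradient step
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)   # soft threshold
    return x
```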

Journal ArticleDOI
TL;DR: The adaptation of a smartphone's camera to function as a compact lensless microscope is demonstrated, allowing sub-micron resolution imaging over an ultra-wide field-of-view (FOV) via pixel super-resolution reconstruction.
Abstract: Portable chip-scale microscopy devices can potentially address various imaging needs in mobile healthcare and environmental monitoring. Here, we demonstrate the adaptation of a smartphone's camera to function as a compact lensless microscope. Unlike other chip-scale microscopy schemes, this method uses ambient illumination as its light source and does not require the incorporation of a dedicated light source. The method is based on the shadow imaging technique where the sample is placed on the surface of the image sensor, which captures direct shadow images under illumination. To improve the image resolution beyond the pixel size, we perform pixel super-resolution reconstruction with multiple images at different angles of illumination, which are captured while the user is manually tilting the device around any ambient light source, such as the sun or a lamp. The lensless imaging scheme allows for sub-micron resolution imaging over an ultra-wide field-of-view (FOV). Image acquisition and reconstruction are performed on the device using a custom-built Android application, constructing a stand-alone imaging device for field applications. We discuss the construction of the device using a commercial smartphone and demonstrate the imaging capabilities of our system.
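
The pixel super-resolution step can be illustrated with its crudest variant, shift-and-add: interleave several low-resolution shadow images onto a finer grid according to their sub-pixel shifts. A sketch assuming the shifts are already known from the tilt angles; real pipelines estimate them and typically add deconvolution.

```python
import numpy as np

def shift_and_add(frames, shifts, factor=4):
    """Place low-res frames onto a grid upsampled by `factor` using
    their (dy, dx) shifts in low-res pixels, then average overlapping
    contributions. Rounding to the nearest high-res offset is a
    simplification of proper sub-pixel registration."""
    h, w = frames[0].shape
    hi = np.zeros((h * factor, w * factor))
    weight = np.zeros_like(hi)
    for f, (dy, dx) in zip(frames, shifts):
        ys = int(round(dy * factor)) % factor
        xs = int(round(dx * factor)) % factor
        hi[ys::factor, xs::factor] += f
        weight[ys::factor, xs::factor] += 1
    return hi / np.maximum(weight, 1)
```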

Proceedings ArticleDOI
07 Mar 2014
TL;DR: In this paper, a novel snapshot multispectral imager concept based on optical filters monolithically integrated on top of a standard CMOS image sensor is introduced, which overcomes the problems mentioned for scanning applications by snapshot acquisition, where an entire multi-spectral data cube is sensed at one discrete point in time.
Abstract: The adoption of spectral imaging by industry has so far been limited due to the lack of high speed, low cost and compact spectral cameras. Moreover most state-of-the-art spectral cameras utilize some form of spatial or spectral scanning during acquisition, making them ill-suited for analyzing dynamic scenes containing movement. This paper introduces a novel snapshot multispectral imager concept based on optical filters monolithically integrated on top of a standard CMOS image sensor. It overcomes the problems mentioned for scanning applications by snapshot acquisition, where an entire multispectral data cube is sensed at one discrete point in time. This is enabled by depositing interference filters per pixel directly on a CMOS image sensor, extending the traditional Bayer color imaging concept to multi- or hyperspectral imaging without a need for dedicated fore-optics. The monolithic deposition leads to a high degree of design flexibility. This enables systems ranging from application-specific, high spatial resolution cameras with 1 to 4 spectral filters, to hyperspectral snapshot cameras at medium spatial resolutions and filters laid out in cells of 4×4 to 6×6 or more. Through the use of monolithically integrated optical filters it further retains the qualities of compactness, low cost and high acquisition speed, differentiating it from other snapshot spectral cameras.
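
Reading such a mosaic sensor back into a datacube is a pure rearrangement when no interpolation is applied. A minimal sketch (Python/NumPy); the band ordering is an illustrative convention.

```python
import numpy as np

def mosaic_to_cube(raw, cell=4):
    """Rearrange a snapshot mosaic frame into a multispectral cube:
    each cell x cell tile holds cell**2 different filters, so band
    (i, j) is the sub-image sampled at that pixel offset. Yields
    cell**2 bands at 1/cell the spatial resolution."""
    bands = [raw[i::cell, j::cell] for i in range(cell) for j in range(cell)]
    return np.stack(bands, axis=0)   # shape: (cell**2, H//cell, W//cell)
```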

Journal ArticleDOI
TL;DR: Distributed algorithms are proposed that use 2-D image measurements to estimate the absolute 3-D poses of the nodes in a camera network, with the purpose of enabling higher-level tasks such as tracking and recognition.
Abstract: In this paper we propose distributed algorithms that use 2-D image measurements to estimate the absolute 3-D poses of the nodes in a camera network, with the purpose of enabling higher-level tasks such as tracking and recognition. We assume that pairs of cameras with overlapping fields of view can estimate their relative 3-D pose (rotation and translation direction) using standard computer vision techniques. The solution we propose combines these local, noisy estimates into a single consistent localization. We derive our algorithms from optimization problems on the manifold of poses. We provide theoretical results on the convergence of the algorithms (choice of the step-size, initialization) and on the properties of their solutions (sensitivity, uniqueness). We also provide experiments on synthetic and real data. Interestingly, our algorithm for estimating the rotation part of the poses shows some degree of robustness to outliers.
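
To give a flavour of the rotation part of the problem, here is a toy consensus iteration: each node blends its absolute rotation with what a neighbour's estimate and the measured relative rotation imply, then re-projects onto SO(3). The blending scheme and the composition convention (R_i ≈ R_j · R_ij) are assumptions for illustration, not the paper's manifold-optimization algorithm.

```python
import numpy as np

def project_so3(M):
    """Nearest rotation matrix to M in the Frobenius sense, via SVD."""
    U, _, Vt = np.linalg.svd(M)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])  # fix reflections
    return U @ D @ Vt

def rotation_consensus(R, edges, R_rel, steps=100, eta=0.5):
    """Toy distributed rotation averaging over a camera graph.
    R: list of initial absolute rotations; edges: (i, j) index pairs;
    R_rel: relative rotations measured between those pairs."""
    R = [r.copy() for r in R]
    for _ in range(steps):
        for (i, j), Rij in zip(edges, R_rel):
            R[i] = project_so3((1 - eta) * R[i] + eta * (R[j] @ Rij))
    return R
```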

Journal ArticleDOI
TL;DR: It is demonstrated that two high-speed spatial light modulators, located conjugate to the image and spectral plane, respectively, can code the hyperspectral datacube into a single sensor image such that the high-resolution signal can be recovered in postprocessing.
Abstract: This Letter presents a new snapshot approach to hyperspectral imaging via dual-optical coding and compressive computational reconstruction. We demonstrate that two high-speed spatial light modulators, located conjugate to the image and spectral plane, respectively, can code the hyperspectral datacube into a single sensor image such that the high-resolution signal can be recovered in postprocessing. We show various applications by designing different optical modulation functions, including programmable spatially varying color filtering, multiplexed hyperspectral imaging, and high-resolution compressive hyperspectral imaging.

Journal ArticleDOI
TL;DR: The architecture and three applications of the largest-resolution image sensor based on single-photon avalanche diodes (SPADs) published to date are presented; the sensor is used as a highly sensitive imager with high temporal resolution, for applications ranging from fluorescence lifetime measurements to the generation of true random numbers.
Abstract: We present the architecture and three applications of the largest resolution image sensor based on single-photon avalanche diodes (SPADs) published to date. The sensor, fabricated in a high-voltage CMOS process, has a resolution of 512 × 128 pixels and a pitch of 24 μm. The fill-factor of 5% can be increased to 30% with the use of microlenses. For precise control of the exposure and for time-resolved imaging, we use fast global gating signals to define exposure windows as small as 4 ns. The uniformity of the gate edges location is ∼140 ps (FWHM) over the whole array, while in-pixel digital counting enables frame rates as high as 156 kfps. Currently, our camera is used as a highly sensitive sensor with high temporal resolution, for applications ranging from fluorescence lifetime measurements to fluorescence correlation spectroscopy and generation of true random numbers.

Journal ArticleDOI
TL;DR: Simulation results show that the proposed algorithm improves the performance for detecting and imaging high-speed maneuvering targets, and theoretical analysis confirms that the methodology can precisely focus targets.
Abstract: Weak-target detection and imaging are the challenging problems of airborne or spaceborne early warning radar. The envelope of a high-speed weak target after range compression spreads over range during the long observation period. To finely refocus a high-speed weak maneuvering target, motion parameters should be accurately obtained for compensating the envelope. This letter proposes a new imaging approach for high-speed maneuvering targets without a priori knowledge of their motion parameters. In this method, the azimuth compression function is constructed in a range and azimuth 2-D frequency domain, which can eliminate the coupling effect between range and azimuth. Theoretical analysis confirms that the methodology can precisely focus targets. Simulation results show that the proposed algorithm improves the performance for detecting and imaging high-speed maneuvering targets.

Patent
20 Oct 2014
TL;DR: In this paper, a fingerprint sensing module includes a sensor substrate having a sensing side and a circuit side, an image sensor including conductive traces on the circuit side of the sensor substrate, and a sensor circuit including at least one integrated circuit mounted on the circuit side of the sensor substrate and electrically connected to the image sensor.
Abstract: A fingerprint sensing module includes a sensor substrate having a sensing side and a circuit side, an image sensor including conductive traces on the circuit side of the sensor substrate, and a sensor circuit including at least one integrated circuit mounted on the circuit side of the sensor substrate and electrically connected to the image sensor. The sensor substrate may be a flexible substrate. The module may include a velocity sensor on the sensor substrate or on a separate substrate. The module may further include a rigid substrate, and the sensor substrate may be affixed to the rigid substrate.

Journal ArticleDOI
TL;DR: A new convolutional sparse coding approach for recovering transient (light-in-flight) images from correlation image sensors is presented, together with the derivation of a new physically motivated model for transient images with drastically improved sparsity.
Abstract: Correlation image sensors have recently become popular low-cost devices for time-of-flight, or range, cameras. They usually operate under the assumption of a single light path contributing to each pixel. We show that a more thorough analysis of the sensor data from correlation sensors can be used to analyze the light transport in much more complex environments, including applications for imaging through scattering and turbid media. The key to our method is a new convolutional sparse coding approach for recovering transient (light-in-flight) images from correlation image sensors. This approach is enabled by an analysis of sparsity in complex transient images, and by the derivation of a new physically motivated model for transient images with drastically improved sparsity.

Journal ArticleDOI
TL;DR: The experimental results verify that the proposed method significantly improves output quality and reduces energy consumption compared with other fusion techniques in the DCT domain.

Journal ArticleDOI
TL;DR: This work reports a 3D-printed high-resolution Fourier ptychographic microscope, termed FPscope, which uses a cellphone lens in a reverse manner, and shows that the depth-of-focus of the reported platform is about 0.1 mm, orders of magnitude longer than that of a conventional microscope objective with a similar NA.
Abstract: The large consumer market has made cellphone lens modules available at low cost and in high quality. In a conventional cellphone camera, the lens module is used to demagnify the scene onto the image plane of the camera, where the image sensor is located. In this work, we report a 3D-printed high-resolution Fourier ptychographic microscope, termed FPscope, which uses a cellphone lens in a reverse manner. In our platform, we replace the image sensor with sample specimens, and use the cellphone lens to project the magnified image to the detector. To supersede the diffraction limit of the lens module, we use an LED array to illuminate the sample from different incident angles and synthesize the acquired images using the Fourier ptychographic algorithm. As a demonstration, we use the reported platform to acquire high-resolution images of a resolution target and biological specimens, with a maximum synthetic numerical aperture (NA) of 0.5. We also show that the depth-of-focus of the reported platform is about 0.1 mm, orders of magnitude longer than that of a conventional microscope objective with a similar NA. The reported platform may enable healthcare access in low-resource settings. It can also be used to demonstrate the concept of computational optics for educational purposes.
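
The Fourier ptychographic synthesis mentioned above follows a standard alternating-projection pattern: each LED's low-resolution image constrains one pupil-sized region of the high-resolution spectrum. A bare-bones sketch (Python/NumPy) under simplifying assumptions: binary pupil, even crop size, offsets precomputed so every crop lies inside the spectrum, and a crude initialisation. It is not the authors' implementation.

```python
import numpy as np

def fp_recover(images, offsets, pupil, n_hi, iters=10):
    """images: stack of low-res intensities, one per LED angle.
    offsets: (row, col) centre of each pupil crop in the centred
    high-res spectrum S. pupil: n_lo x n_lo aperture mask."""
    n_lo = pupil.shape[0]
    c = n_hi // 2
    S = np.zeros((n_hi, n_hi), dtype=complex)
    S[c - n_lo // 2:c + n_lo // 2, c - n_lo // 2:c + n_lo // 2] = \
        np.fft.fftshift(np.fft.fft2(np.sqrt(images[0])))  # init from one image
    for _ in range(iters):
        for img, (r, k) in zip(images, offsets):
            sl = (slice(r - n_lo // 2, r + n_lo // 2),
                  slice(k - n_lo // 2, k + n_lo // 2))
            lo = np.fft.ifft2(np.fft.ifftshift(S[sl] * pupil))
            lo = np.sqrt(img) * np.exp(1j * np.angle(lo))  # enforce amplitude
            S[sl] = np.where(pupil != 0,
                             np.fft.fftshift(np.fft.fft2(lo)), S[sl])
    return np.fft.ifft2(np.fft.ifftshift(S))  # high-res complex field
```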

Journal ArticleDOI
TL;DR: Both simulations and experiments show that the proposed techniques for sampling, representing, and reconstructing the space-time volume overcome this trade-off, reconstructing a video from a single coded image while maintaining high spatial resolution.
Abstract: Cameras face a fundamental trade-off between spatial and temporal resolution. Digital still cameras can capture images with high spatial resolution, but most high-speed video cameras have relatively low spatial resolution. It is hard to overcome this trade-off without incurring a significant increase in hardware costs. In this paper, we propose techniques for sampling, representing, and reconstructing the space-time volume to overcome this trade-off. Our approach has two important distinctions compared to previous works: 1) We achieve sparse representation of videos by learning an overcomplete dictionary on video patches, and 2) we adhere to practical hardware constraints on sampling schemes imposed by architectures of current image sensors, which means that our sampling function can be implemented on CMOS image sensors with modified control units in the future. We evaluate components of our approach, sampling function and sparse representation, by comparing them to several existing approaches. We also implement a prototype imaging system with pixel-wise coded exposure control using a liquid crystal on silicon device. System characteristics such as field of view and modulation transfer function are evaluated for our imaging system. Both simulations and experiments on a wide range of scenes show that our method can effectively reconstruct a video from a single coded image while maintaining high spatial resolution.

Patent
29 May 2014
TL;DR: In this paper, an imaging system and method for a trailer backup assist system are provided, including a camera having an image sensor; the camera is mounted on the rear of a vehicle and images a target provided on a trailer.
Abstract: An imaging system and method of a trailer backup assist system is provided and includes a camera having an image sensor. The camera is mounted on the rear of a vehicle and images a target provided on a trailer. A controller is included for adjusting an image capture setting of the camera based on a status input from a vehicle lighting system, image data from the camera, and locational input from a positioning device.

Journal ArticleDOI
TL;DR: This work presents a method of quantitatively acquiring a large complex field, containing not only amplitude information but also phase information, based on single-shot phase imaging with a coded aperture (SPICA).
Abstract: We present a method of quantitatively acquiring a large complex field, containing not only amplitude information but also phase information, based on single-shot phase imaging with a coded aperture (SPICA). In SPICA, the propagating field from an object illuminated by partially coherent visible light is sieved by a coded mask, and the sieved field propagates to an image sensor, where it is captured. The sieved field is recovered from the single captured intensity image via a phase retrieval algorithm with an amplitude support constraint using the mask pattern, and then the object’s complex field is reconstructed from the recovered sieved field by an algorithm employing a sparsity constraint based on compressive sensing. The system model and the theoretical bounds of SPICA are derived. We also verified the concept with numerical demonstrations.
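
The first stage of SPICA is a support-constrained phase retrieval, which in its simplest (error-reduction) form alternates between the measured amplitude at the sensor and the mask support at the aperture. The sketch below idealises free-space propagation as a single FFT, which is an assumption; the paper uses the actual propagation operator.

```python
import numpy as np

def retrieve_sieved_field(meas_amp, support, iters=200, seed=0):
    """Error-reduction loop: enforce the measured sensor amplitude in
    the propagated domain and the coded-mask support in the aperture
    domain. Returns the estimated sieved field at the mask."""
    rng = np.random.default_rng(seed)
    field = support * np.exp(2j * np.pi * rng.random(support.shape))
    for _ in range(iters):
        sensor = np.fft.fft2(field)
        sensor = meas_amp * np.exp(1j * np.angle(sensor))  # amplitude constraint
        field = np.fft.ifft2(sensor) * support             # support constraint
    return field
```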

Patent
07 Mar 2014
Abstract: Systems and methods for high dynamic range imaging using array cameras in accordance with embodiments of the invention are disclosed. In one embodiment of the invention, a method of generating a high dynamic range image using an array camera includes defining at least two subsets of active cameras, determining image capture settings for each subset of active cameras, where the image capture settings include at least two exposure settings, configuring the active cameras using the determined image capture settings for each subset, capturing image data using the active cameras, synthesizing an image for each of the at least two subsets of active cameras using the captured image data, and generating a high dynamic range image using the synthesized images.

Proceedings ArticleDOI
23 Jun 2014
TL;DR: This work proposes to jointly estimate scene depth and remove non-uniform blur caused by camera motion by exploiting their underlying geometric relationships, with only a single blurry image as input, and presents a unified layer-based model for depth-involved deblurring.
Abstract: Camera shake during exposure time often results in spatially variant blur effect of the image. The non-uniform blur effect is not only caused by the camera motion, but also the depth variation of the scene. The objects close to the camera sensors are likely to appear more blurry than those at a distance in such cases. However, recent non-uniform deblurring methods do not explicitly consider the depth factor or assume fronto-parallel scenes with constant depth for simplicity. While single image non-uniform deblurring is a challenging problem, the blurry results in fact contain depth information which can be exploited. We propose to jointly estimate scene depth and remove non-uniform blur caused by camera motion by exploiting their underlying geometric relationships, with only single blurry image as input. To this end, we present a unified layer-based model for depth-involved deblurring. We provide a novel layer-based solution using matting to partition the layers and an expectation-maximization scheme to solve this problem. This approach largely reduces the number of unknowns and makes the problem tractable. Experiments on challenging examples demonstrate that both depth and camera shake removal can be well addressed within the unified framework.

Proceedings ArticleDOI
01 Jun 2014
TL;DR: It is demonstrated that these sensors inherently perform high-speed video compression in each pixel by describing the first decompression algorithm for this data, which performs an online optimization of the event decoding in real time.
Abstract: Dynamic and active pixel vision sensors (DAVISs) are a new type of sensor that combine a frame-based intensity readout with an event-based temporal contrast readout. This paper demonstrates that these sensors inherently perform high-speed video compression in each pixel by describing the first decompression algorithm for this data. The algorithm performs an online optimization of the event decoding in real time. Example scenes were recorded by the 240×180 pixel sensor at sub-Hz frame rates and successfully decompressed, yielding an equivalent frame rate of 2 kHz. A quantitative analysis of the compression quality resulted in an average pixel error of 0.5 DN intensity resolution for non-saturating stimuli. The system exhibits an adaptive compression ratio which depends on the activity in a scene; for stationary scenes it can go up to 1862. The low data rate and power consumption of the proposed video compression system make it suitable for distributed sensor networks.
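
The decompression idea can be made concrete with a naive baseline: hold the latest APS keyframe in log space and integrate DVS events up to each output timestamp. The threshold value and event tuple layout below are assumptions for illustration; the paper's contribution is the online optimisation that refines exactly this kind of decoding.

```python
import numpy as np

def decompress(keyframe, events, theta=0.1, fps=2000, t_end=1.0):
    """Reconstruct frames between APS keyframes: each DVS event nudges
    one pixel's log intensity by one contrast step theta (a nominal
    value here). events: iterable of (t, y, x, polarity) tuples."""
    log_img = np.log1p(keyframe.astype(float))
    frames, k = [], 0
    events = sorted(events)                 # order by timestamp
    for t in np.arange(0.0, t_end, 1.0 / fps):
        while k < len(events) and events[k][0] <= t:
            _, y, x, pol = events[k]
            log_img[y, x] += pol * theta    # apply one event
            k += 1
        frames.append(np.expm1(log_img))    # back to linear intensity
    return frames
```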

Journal ArticleDOI
TL;DR: New simulation software that can be used to simulate microlenses' performance under different conditions and a new non-destructive contact-less method to estimate the height of the microlenses are presented.
Abstract: Single-photon avalanche diode (SPAD) imagers typically have a relatively low fill factor, i.e. a low proportion of the pixel's surface is light sensitive, due to in-pixel circuitry. We present a microlens array fabricated on a 128×128 single-photon avalanche diode (SPAD) imager to enhance its sensitivity. The benefits and limitations of these light concentrators are studied for low-light imaging applications. We present new simulation software that can be used to simulate microlenses' performance under different conditions and a new non-destructive contact-less method to estimate the height of the microlenses. Results of experiments and simulations are in good agreement, indicating that a gain >10 can be achieved for this particular sensor.