
Showing papers on "Image sensor published in 2019"


Journal ArticleDOI
TL;DR: The working principle, advantages, technical considerations and future potential of single-pixel imaging are described; the approach suits a wide variety of detector technologies.
Abstract: Modern digital cameras employ silicon focal plane array (FPA) image sensors featuring millions of pixels. However, it is possible to make a camera that only needs one pixel. In these cameras a spatial light modulator, placed before or after the object to be imaged, applies a time-varying pattern and synchronized intensity measurements are made with a single-pixel detector. The principle of compressed sensing then allows an image to be generated. As the approach suits a wide variety of detector technologies, images can be collected at wavelengths outside the reach of FPA technology, at high frame rates, or in three dimensions. Promising applications include the visualization of hazardous gas leaks and 3D situation awareness for autonomous vehicles. Rather than requiring millions of pixels, it is possible to make a camera that only needs one pixel. This Review details the working principle, advantages, technical considerations and future potential of single-pixel imaging.
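
A minimal sketch of the single-pixel measurement-and-reconstruction loop described above, assuming a sparse 16 × 16 scene, random ±1 modulation patterns, and a basic ISTA solver; these are illustrative choices, not the design of any specific system in the Review:

```python
# Single-pixel imaging via compressed sensing (toy sketch).
import numpy as np

rng = np.random.default_rng(0)
n = 16 * 16                      # image pixels
m = n // 3                       # single-pixel measurements (undersampled)

# Ground-truth scene: a few bright points on a dark background.
x_true = np.zeros(n)
x_true[rng.choice(n, 8, replace=False)] = 1.0

# Each row of A is one modulator pattern; y collects the detector readings.
A = rng.choice([-1.0, 1.0], size=(m, n)) / np.sqrt(m)
y = A @ x_true

# ISTA: gradient step on ||Ax - y||^2, then soft-thresholding (L1 prior).
x = np.zeros(n)
step = 1.0 / np.linalg.norm(A, 2) ** 2
lam = 0.01
for _ in range(500):
    x -= step * (A.T @ (A @ x - y))
    x = np.sign(x) * np.maximum(np.abs(x) - step * lam, 0.0)

print("reconstruction error:", np.linalg.norm(x - x_true))
```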

464 citations


Proceedings ArticleDOI
15 Jun 2019
TL;DR: In this paper, the authors propose a technique to "unprocess" images by inverting each step of an image processing pipeline, thereby allowing them to synthesize realistic raw sensor measurements from commonly available Internet photos.
Abstract: Machine learning techniques work best when the data used for training resembles the data used for evaluation. This holds true for learned single-image denoising algorithms, which are applied to real raw camera sensor readings but, due to practical constraints, are often trained on synthetic image data. Though it is understood that generalizing from synthetic to real images requires careful consideration of the noise properties of camera sensors, the other aspects of an image processing pipeline (such as gain, color correction, and tone mapping) are often overlooked, despite their significant effect on how raw measurements are transformed into finished images. To address this, we present a technique to “unprocess” images by inverting each step of an image processing pipeline, thereby allowing us to synthesize realistic raw sensor measurements from commonly available Internet photos. We additionally model the relevant components of an image processing pipeline when evaluating our loss function, which allows training to be aware of all relevant photometric processing that will occur after denoising. By unprocessing and processing training data and model outputs in this way, we are able to train a simple convolutional neural network that has 14%-38% lower error rates and is 9×-18× faster than the previous state of the art on the Darmstadt Noise Dataset, and generalizes to sensors outside of that dataset as well.
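
The inverse-pipeline idea lends itself to a compact sketch. Below, a toy ISP is inverted step by step (tone curve, sRGB gamma, color correction, gain) before synthetic shot/read noise is added; the matrix, gains, and noise levels are assumptions for illustration, not the paper's learned or calibrated values:

```python
# Toy "unprocessing" of an sRGB image into a synthetic raw measurement.
import numpy as np

def srgb_to_linear(x):
    return np.where(x <= 0.04045, x / 12.92, ((x + 0.055) / 1.055) ** 2.4)

def inverse_smoothstep(y):
    # Invert the tone curve y = 3x^2 - 2x^3 on [0, 1].
    y = np.clip(y, 0.0, 1.0)
    return 0.5 - np.sin(np.arcsin(1.0 - 2.0 * y) / 3.0)

ccm = np.array([[ 1.7, -0.5, -0.2],
                [-0.3,  1.6, -0.3],
                [ 0.0, -0.6,  1.6]])      # hypothetical sRGB<-camera matrix
gains = np.array([2.0, 1.0, 1.7])         # hypothetical WB/digital gains

srgb = np.random.default_rng(1).random((8, 8, 3))   # stand-in for a photo
lin = srgb_to_linear(inverse_smoothstep(srgb))      # undo tone map + gamma
cam = lin @ np.linalg.inv(ccm).T                    # undo color correction
raw = cam / gains                                   # undo per-channel gain

# Re-apply sensor noise: shot noise (signal-dependent) plus read noise.
shot, read = 0.01, 0.0005
raw_noisy = raw + np.random.default_rng(2).normal(
    scale=np.sqrt(np.clip(raw, 0, 1) * shot + read**2))
```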

369 citations


Proceedings ArticleDOI
01 Oct 2019
TL;DR: The proposed CameraRadarFusion Net (CRF-Net) automatically learns at which level the fusion of the sensor data is most beneficial for the detection result, and is able to outperform a state-of-the-art image-only network for two different datasets.
Abstract: Object detection in camera images using deep learning has proven successful in recent years. Rising detection rates and computationally efficient network structures are pushing this technique towards application in production vehicles. Nevertheless, the sensor quality of the camera is limited in severe weather conditions and by increased sensor noise in sparsely lit areas and at night. Our approach enhances current 2D object detection networks by fusing camera data and projected sparse radar data in the network layers. The proposed CameraRadarFusion Net (CRF-Net) automatically learns at which level the fusion of the sensor data is most beneficial for the detection result. Additionally, we introduce BlackIn, a training strategy inspired by Dropout, which focuses the learning on a specific sensor type. We show that the fusion network is able to outperform a state-of-the-art image-only network for two different datasets. The code for this research will be made available to the public at: https://github.com/TUMFTM/CameraRadarFusionNet
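
A BlackIn-style input blanking can be sketched in a few lines: with some probability, the camera tensor of a training sample is zeroed so the network must learn to exploit the radar branch. The probability, blanking granularity, and early-fusion scheme below are assumptions, not the paper's exact configuration:

```python
# Minimal sketch of BlackIn-style sensor dropout before fusion.
import numpy as np

def blackin(camera_batch, p=0.2, rng=np.random.default_rng()):
    """Zero whole camera inputs sample-wise with probability p."""
    keep = rng.random(camera_batch.shape[0]) >= p     # one draw per sample
    return camera_batch * keep[:, None, None, None]

camera = np.ones((4, 3, 360, 640), dtype=np.float32)  # B,C,H,W images
radar = np.ones((4, 2, 360, 640), dtype=np.float32)   # projected radar maps
camera = blackin(camera)                               # applied in training
fused = np.concatenate([camera, radar], axis=1)        # early-fusion input
```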

190 citations


Journal ArticleDOI
TL;DR: The flexible PD arrays with high detectivity, large on/off current ratio, and broad spectral response exhibit excellent electrical stability under large bending angle and superior folding endurance after hundreds of bending cycles, indicating that it has widespread potential in photosensing and imaging for optical communication, digital display, and artificial electronic skin applications.
Abstract: The quest for novel deformable image sensors with outstanding optoelectronic properties and large-scale integration becomes a great impetus to exploit more advanced flexible photodetector (PD) arrays. Here, 10 × 10 flexible PD arrays with a resolution of 63.5 dpi are demonstrated based on as-prepared perovskite arrays for photosensing and imaging. Large-scale, growth-controllable CH₃NH₃PbI₃₋ₓClₓ arrays are synthesized on a poly(ethylene terephthalate) substrate by using a two-step sequential deposition method with the developed Al₂O₃-assisted hydrophilic-hydrophobic surface treatment process. The flexible PD arrays with high detectivity (9.4 × 10¹¹ Jones), large on/off current ratio (up to 1.2 × 10³), and broad spectral response exhibit excellent electrical stability under a large bending angle (θ = 150°) and superior folding endurance after hundreds of bending cycles. In addition, the device can execute the functions of capturing a real-time light trajectory and detecting a multipoint light distribution, indicating that it has widespread potential in photosensing and imaging for optical communication, digital display, and artificial electronic skin applications.

158 citations


Journal ArticleDOI
TL;DR: The SwissSPAD2 as discussed by the authors is an image sensor with 512 × 512 photon-counting pixels, each comprising a single-photon avalanche diode (SPAD), a 1-b memory, and a gating mechanism capable of turning the SPAD on and off with a skew of 250 ps and 344 ps, respectively, for a minimum duration of 5.75 ns.
Abstract: In this paper, we report on SwissSPAD2, an image sensor with 512 × 512 photon-counting pixels, each comprising a single-photon avalanche diode (SPAD), a 1-b memory, and a gating mechanism capable of turning the SPAD on and off, with a skew of 250 and 344 ps, respectively, for a minimum duration of 5.75 ns. The sensor is designed to achieve a frame rate of up to 97 700 binary frames per second and sub-40 ps gate shifts. By synchronizing it with a pulsed laser and using multiple successive overlapping gates, one can reconstruct a molecule's fluorescent response with picosecond temporal resolution. Thanks to the sensor's number of pixels (the largest to date) and the fully integrated gated operation, SwissSPAD2 enables widefield fluorescence lifetime imaging microscopy with an all-solid-state solution and at relatively high frame rates. This was demonstrated with preliminary results on organic dyes and semiconductor quantum dots using both decay fitting and phasor analysis. Furthermore, pixels with an exceptionally low dark count rate and high photon detection probability enable uniform and high-quality imaging of biologically relevant fluorescent samples stained with multiple dyes. While future versions will feature the addition of microlenses and optimize firmware speed, our results open the way for low-cost alternatives to commercially available scientific time-resolved imagers.
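
A toy version of the gate-scanning lifetime measurement described above: a fixed-width gate is stepped across an exponential decay, the gated counts are Poisson-sampled, and the lifetime is recovered from a log-linear fit. The gates are idealized rectangles with no IRF or jitter, and all numbers are illustrative:

```python
# Toy gate-scan fluorescence lifetime estimation.
import numpy as np

tau_true = 3.0                                  # ns, fluorescence lifetime
gate_width = 5.75                               # ns, minimum gate duration
shifts = np.arange(0.0, 20.0, 0.04)             # ns, ~40 ps gate steps

# Counts in a gate [s, s + W] over decay exp(-t/tau):
# tau * (exp(-s/tau) - exp(-(s+W)/tau)), still proportional to exp(-s/tau).
signal = tau_true * (np.exp(-shifts / tau_true)
                     - np.exp(-(shifts + gate_width) / tau_true))
signal *= 1e4                                    # photon scale
noisy = np.random.default_rng(0).poisson(signal)

# Log-linear fit of gated counts vs. shift recovers the lifetime.
mask = noisy > 0
slope, _ = np.polyfit(shifts[mask], np.log(noisy[mask]), 1)
print(f"estimated lifetime: {-1.0 / slope:.2f} ns")
```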

128 citations


Journal ArticleDOI
12 Sep 2019-Sensors
TL;DR: An overview of tactile image sensors employing a camera is provided with a focus on the sensing principle, typical design, and variation in the sensor configuration.
Abstract: A tactile image sensor employing a camera is capable of obtaining rich tactile information through image sequences with high spatial resolution. There have been many studies on tactile image sensors dating back more than 30 years, and recently they have been applied in the field of robotics. Tactile image sensors can be classified into three typical categories according to the method of conversion from physical contact to light signals: light conductive plate-based, marker displacement-based, and reflective membrane-based sensors. Other important elements of the sensor, such as the optical system, image sensor, and post-image analysis algorithm, have been developed. In this work, the literature is surveyed, and an overview of tactile image sensors employing a camera is provided with a focus on the sensing principle, typical design, and variation in the sensor configuration.

105 citations


Journal ArticleDOI
TL;DR: An organic image sensor based on monolithic, vertically stacked two-terminal pixels with an extremely simple architecture exhibits a high pixel photoresponse, demonstrating a weak-light imaging capability even at 1 µW cm⁻².
Abstract: Highly responsive organic image sensors are crucial for medical imaging applications. To enhance the pixelwise photoresponse in an organic image sensor, the integration of an organic photodetector with amplifiers, or the use of a highly responsive organic photodetector without an additional amplifying component, is required. The use of vertically stacked, two-terminal organic photodetectors with photomultiplication is a promising approach for highly responsive organic image sensors owing to their simple two-terminal structure and intrinsically large responsivity. However, there are no demonstrations of an imaging sensor array using organic photomultiplication photodetectors. The main obstacle to a sensor array is the weak-light sensitivity, which is limited by a relatively large dark current. Herein, a highly responsive organic image sensor based on monolithic, vertically stacked two-terminal pixels is presented. This is achieved using pixels of a vertically stacked diode-type organic photodetector with photomultiplication. Furthermore, applying an optimized injection electrode and additionally stacked rectifying layers, this two-terminal device simultaneously demonstrates a high responsivity (>40 A W⁻¹), low dark current, and high rectification under illumination. An organic image sensor based on this device with an extremely simple architecture exhibits a high pixel photoresponse, demonstrating a weak-light imaging capability even at 1 µW cm⁻².

103 citations


Journal ArticleDOI
TL;DR: The ALS of a smartphone can emerge as a potential in-built sensor for the detection and analysis of different types of analytes, opening new opportunities for the development of low-cost sensing devices.
Abstract: In the present review article, the ubiquitous application of the ambient light sensor (ALS) of a smartphone towards the development of lightweight, field-portable and low-cost devices has been summarized. Compared to computational-imaging-based techniques using the phone's camera for detection of a target analyte, utilizing the ALS of a smartphone has certain advantages in terms of low sample volume, ease of device fabrication and simplicity of optical design. Parallel to the CMOS image sensor of the smartphone camera, the ALS of a smartphone can emerge as a potential in-built sensor for the detection and analysis of different types of analytes, opening new opportunities towards the development of low-cost sensing devices.

81 citations


Journal ArticleDOI
TL;DR: A digital calibration scheme integrated into a column of the imager allows off-chip digital process, voltage, and temperature (PVT) compensation of every frame on the fly.
Abstract: A 192 × 128 pixel single photon avalanche diode (SPAD) time-resolved single photon counting (TCSPC) image sensor is implemented in STMicroelectronics 40-nm CMOS technology. The 13% fill factor, 18.4 µm × 9.2 µm pixel contains a 33-ps resolution, 135-ns full scale, 12-bit time-to-digital converter (TDC) with 0.9-LSB differential and 5.64-LSB integral nonlinearity (DNL/INL). The sensor achieves a mean 219-ps full-width half-maximum (FWHM) impulse response function (IRF) and is operable at up to 18.6 kframes/s through 64 parallelized serial outputs. Cylindrical microlenses with a concentration factor of 3.25 increase the fill factor to 42%. The median dark count rate (DCR) is 25 Hz at 1.5-V excess bias. A digital calibration scheme integrated into a column of the imager allows off-chip digital process, voltage, and temperature (PVT) compensation of every frame on the fly. Fluorescence lifetime imaging microscopy (FLIM) results are presented.
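
DNL/INL figures like those quoted above are conventionally measured with a code-density test, which is easy to sketch: uniformly distributed hits should populate every TDC code equally, so per-code count deviations give the DNL and their running sum the INL. The simulated bin-width spread below is illustrative, not the paper's silicon data:

```python
# Code-density test for TDC differential/integral nonlinearity.
import numpy as np

bits, n_hits = 12, 5_000_000
rng = np.random.default_rng(0)

# Simulate a TDC whose bin widths vary slightly (the nonlinearity).
widths = 1.0 + 0.05 * rng.standard_normal(2**bits)
edges = np.concatenate([[0.0], np.cumsum(widths)])
hits = rng.uniform(0, edges[-1], n_hits)
codes = np.searchsorted(edges, hits, side="right") - 1

counts = np.bincount(codes, minlength=2**bits)
dnl = counts / counts.mean() - 1.0          # in LSB
inl = np.cumsum(dnl)                        # in LSB
print(f"DNL max {np.abs(dnl).max():.2f} LSB, INL max {np.abs(inl).max():.2f} LSB")
```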

79 citations


Journal ArticleDOI
22 Feb 2019-Sensors
TL;DR: The vision-based tactile sensor proposed in this article exploits the extremely high resolution of modern image sensors to reconstruct the normal force distribution applied to a soft material, whose deformation is observed on the camera images.
Abstract: Human skin is capable of sensing various types of forces with high resolution and accuracy. The development of an artificial sense of touch needs to address these properties, while retaining scalability to large surfaces with arbitrary shapes. The vision-based tactile sensor proposed in this article exploits the extremely high resolution of modern image sensors to reconstruct the normal force distribution applied to a soft material, whose deformation is observed on the camera images. By embedding a random pattern within the material, the full resolution of the camera can be exploited. The design and the motivation of the proposed approach are discussed with respect to a simplified elasticity model. An artificial deep neural network is trained on experimental data to perform the tactile sensing task with high accuracy for a specific indenter, and with a spatial resolution and a sensing range comparable to the human fingertip.

79 citations


Journal ArticleDOI
TL;DR: This work presents a compact, diffraction-based snapshot hyperspectral imaging method, using only a novel diffractive optical element (DOE) in front of a conventional, bare image sensor, and introduces a novel DOE design that generates an anisotropic shape of the spectrally-varying PSF.
Abstract: Traditional snapshot hyperspectral imaging systems include various optical elements: a dispersive optical element (prism), a coded aperture, several relay lenses, and an imaging lens, resulting in an impractically large form factor. We seek an alternative, minimal form factor of snapshot spectral imaging based on recent advances in diffractive optical technology. We thereupon present a compact, diffraction-based snapshot hyperspectral imaging method, using only a novel diffractive optical element (DOE) in front of a conventional, bare image sensor. Our diffractive imaging method replaces the common optical elements in hyperspectral imaging with a single optical element. To this end, we tackle two main challenges: First, the traditional diffractive lenses are not suitable for color imaging under incoherent illumination due to severe chromatic aberration because the size of the point spread function (PSF) changes depending on the wavelength. By leveraging this wavelength-dependent property alternatively for hyperspectral imaging, we introduce a novel DOE design that generates an anisotropic shape of the spectrally-varying PSF. The PSF size remains virtually unchanged, but instead the PSF shape rotates as the wavelength of light changes. Second, since there is no dispersive element and no coded aperture mask, the ill-posedness of spectral reconstruction increases significantly. Thus, we propose an end-to-end network solution based on the unrolled architecture of an optimization procedure with a spatial-spectral prior, specifically designed for deconvolution-based spectral reconstruction. Finally, we demonstrate hyperspectral imaging with a fabricated DOE attached to a conventional DSLR sensor. Results show that our method compares well with other state-of-the-art hyperspectral imaging methods in terms of spectral accuracy and spatial resolution, while our compact, diffraction-based spectral imaging method uses only a single optical element on a bare image sensor.
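
The wavelength-dependent PSF rotation can be illustrated with a toy forward model: each spectral band is blurred by the same anisotropic kernel rotated by a wavelength-dependent angle, and the results sum on the monochrome sensor. The kernel, angles, and sizes below are assumptions, not the fabricated DOE's response:

```python
# Toy forward model of a spectrally rotating, anisotropic PSF.
import numpy as np
from scipy.ndimage import rotate
from scipy.signal import fftconvolve

rng = np.random.default_rng(0)
n_bands, H, W = 25, 64, 64
cube = rng.random((n_bands, H, W))          # hyperspectral scene x_lambda

psf0 = np.zeros((15, 15))
psf0[7, 3:12] = 1.0                         # anisotropic (line-shaped) PSF
psf0 /= psf0.sum()

sensor = np.zeros((H, W))
for b in range(n_bands):
    angle = 180.0 * b / n_bands             # PSF shape rotates with lambda
    psf_b = rotate(psf0, angle, reshape=False, order=1)
    psf_b = np.clip(psf_b, 0, None)
    psf_b /= psf_b.sum()
    sensor += fftconvolve(cube[b], psf_b, mode="same")
# Spectral reconstruction must invert this many-to-one mapping (ill-posed),
# which the paper handles with an unrolled optimization network.
```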

Proceedings ArticleDOI
01 Oct 2019
TL;DR: This work proposes a robust and real-time CNN architecture for Moving Object Detection (MOD) under low-light conditions by capturing motion information from both camera and LiDAR sensors and demonstrates the impact on the KITTI dataset.
Abstract: Moving object detection is a critical task for autonomous vehicles. As dynamic objects represent a higher collision risk than static ones, our own ego-trajectories have to be planned attending to the future states of the moving elements of the scene. Motion can be perceived using temporal information such as optical flow. Conventional optical flow computation is based on camera sensors only, which makes it prone to failure in conditions with low illumination. On the other hand, LiDAR sensors are independent of illumination, as they measure the time-of-flight of their own emitted lasers. In this work we propose a robust and real-time CNN architecture for Moving Object Detection (MOD) under low-light conditions by capturing motion information from both camera and LiDAR sensors. We demonstrate the impact of our algorithm on the KITTI dataset, where we simulate a low-light environment to create a novel dataset, "Dark-KITTI". We obtain a 10.1% relative improvement on Dark-KITTI and a 4.25% improvement on standard KITTI relative to our baselines. The proposed algorithm runs at 29 fps on a standard desktop GPU using 256×1224 resolution images.

Proceedings ArticleDOI
01 Oct 2019
TL;DR: A new module for event sequence embedding is introduced, which is the first learning-based stereo method for an event-based camera and the only method that produces dense results on the Multi Vehicle Stereo Event Camera Dataset (MVSEC).
Abstract: Today, a frame-based camera is the sensor of choice for machine vision applications. However, these cameras, originally developed for acquisition of static images rather than for sensing of dynamic uncontrolled visual environments, suffer from high power consumption, data rate, latency and low dynamic range. An event-based image sensor addresses these drawbacks by mimicking a biological retina. Instead of measuring the intensity of every pixel in a fixed time-interval, it reports events of significant pixel intensity changes. Every such event is represented by its position, sign of change, and timestamp, accurate to the microsecond. Asynchronous event sequences require special handling, since traditional algorithms work only with synchronous, spatially gridded data. To address this problem we introduce a new module for event sequence embedding, for use in different applications. The module builds a representation of an event sequence by firstly aggregating information locally across time, using a novel fully-connected layer for an irregularly sampled continuous domain, and then across the discrete spatial domain. Based on this module, we design a deep learning-based stereo method for event-based cameras. The proposed method is the first learning-based stereo method for an event-based camera and the only method that produces dense results. We show large performance increases on the Multi Vehicle Stereo Event Camera Dataset (MVSEC), which has become the standard set for benchmarking of event-based stereo methods.
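
A fixed-kernel stand-in for the event-embedding idea: asynchronous (x, y, t, polarity) events are splatted into a dense spatio-temporal grid with linear temporal interpolation. The paper's module learns this temporal aggregation instead; the voxel grid below is a common, simpler baseline:

```python
# Embedding an asynchronous event stream into a dense voxel grid.
import numpy as np

def events_to_voxels(x, y, t, p, H, W, B=5):
    grid = np.zeros((B, H, W))
    tn = (t - t.min()) / max(t.max() - t.min(), 1e-9) * (B - 1)
    lo = np.floor(tn).astype(int)
    for b, w in ((lo, 1.0 - (tn - lo)), (np.minimum(lo + 1, B - 1), tn - lo)):
        np.add.at(grid, (b, y, x), w * p)     # signed, weighted splat
    return grid

rng = np.random.default_rng(0)
n = 10_000
x = rng.integers(0, 64, n)
y = rng.integers(0, 48, n)
t = np.sort(rng.random(n))
p = rng.choice([-1.0, 1.0], n)
vox = events_to_voxels(x, y, t, p, H=48, W=64)
print(vox.shape)   # (5, 48, 64)
```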

Journal ArticleDOI
TL;DR: In this paper, a single-step grayscale lithographic process was proposed to enable fabrication of bespoke MSFAs based on the Fabry-Perot resonances of spatially variant metal-insulator-metal (MIM) cavities, where the exposure dose controls the insulator (cavity) thickness.
Abstract: Conventional cameras, such as in smartphones, capture wideband red, green and blue (RGB) spectral components, replicating human vision. Multispectral imaging (MSI) captures spatial and spectral information beyond our vision but typically requires bulky optical components and is expensive. Snapshot multispectral image sensors have been proposed as a key enabler for a plethora of MSI applications, from diagnostic medical imaging to remote sensing. To achieve low-cost and compact designs, spatially variant multispectral filter arrays (MSFAs) based on thin-film optical components are deposited atop image sensors. Conventional MSFAs achieve spectral filtering through either multi-layer stacks or pigment, requiring: complex mixtures of materials; additional lithographic steps for each additional wavelength; and large thicknesses to achieve high transmission efficiency. By contrast, we show here for the first time a single-step grayscale lithographic process that enables fabrication of bespoke MSFAs based on the Fabry-Perot resonances of spatially variant metal-insulator-metal (MIM) cavities, where the exposure dose controls insulator (cavity) thickness. We demonstrate customizable MSFAs scalable up to N-wavelength bands spanning the visible and near-infrared with high transmission efficiency (~75%) and narrow linewidths (~50 nm). Using this technique, we achieve multispectral imaging of several spectrally distinct targets using our bespoke MIM-MSFAs fitted to a monochrome CMOS image sensor. Our unique framework provides an attractive alternative to conventional MSFA manufacture, by reducing both fabrication complexity and cost of these intricate optical devices, while increasing customizability.
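
The design rule behind such grayscale-written cavities follows from the ideal Fabry-Perot condition λ_m = 2nd/m, so the exposure dose (hence the cavity thickness d) selects each pixel's passband. A first-order estimate, neglecting the metal mirrors' reflection phase and assuming a silica-like insulator index:

```python
# First-order Fabry-Perot cavity thickness for a target passband.
n_ins = 1.46          # assumed silica-like insulator index
m = 1                 # first-order resonance

for lam_nm in (450, 550, 650, 800, 950):
    d = m * lam_nm / (2 * n_ins)          # cavity thickness in nm
    print(f"peak {lam_nm} nm  ->  cavity ~{d:.0f} nm")
```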

Journal ArticleDOI
TL;DR: In this paper, a lensless on-chip microscopy platform based on near-field blind ptychographic modulation is proposed, where a thin diffuser is placed between the object and the image sensor for light wave modulation.
Abstract: We report a novel lensless on-chip microscopy platform based on near-field blind ptychographic modulation. In this platform, we place a thin diffuser in between the object and the image sensor for light wave modulation. By blindly scanning the unknown diffuser to different x-y positions, we acquire a sequence of modulated intensity images for quantitative object recovery. Different from previous ptychographic implementations, we employ a unit magnification configuration with a Fresnel number of ~50,000, which is orders of magnitude higher than previous ptychographic setups. The unit magnification configuration allows us to have the entire sensor area, 6.4 mm by 4.6 mm, as the imaging field of view. The ultra-high Fresnel number enables us to directly recover the positional shift of the diffuser in the phase retrieval process, addressing the positioning accuracy issue that plagues regular ptychographic experiments. In our implementation, we use a low-cost, DIY scanning stage to perform blind diffuser modulation. Precise mechanical scanning that is critical in conventional ptychography experiments is no longer needed in our setup. We further employ an up-sampling phase retrieval scheme to bypass the resolution limit set by the imager pixel size and demonstrate a half-pitch resolution of 0.78 micron. We validate the imaging performance via in vitro cell cultures, transparent and stained tissue sections, and a thick biological sample. We show that the recovered quantitative phase map can be used to perform effective cell segmentation of the dense yeast culture. We also demonstrate 3D digital refocusing of the thick biological sample based on the recovered wavefront. The reported platform provides a cost-effective and turnkey solution for large field-of-view, high-resolution, and quantitative on-chip microscopy.
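
The numerical workhorse in this kind of lensless on-chip recovery is free-space propagation between the object and sensor planes. A compact angular-spectrum propagator is sketched below; the wavelength, pixel size, and distance are illustrative, not the paper's configuration:

```python
# Angular-spectrum free-space propagation of a complex field.
import numpy as np

def angular_spectrum(field, wavelength, dx, z):
    """Propagate a complex field a distance z (all lengths in meters)."""
    n, m = field.shape
    fx = np.fft.fftfreq(m, d=dx)
    fy = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fy)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * z) * (arg > 0)     # evanescent components dropped
    return np.fft.ifft2(np.fft.fft2(field) * H)

field = np.ones((256, 256), dtype=complex)   # plane-wave illumination
field[96:160, 96:160] = 0.0                  # opaque square as the object
at_sensor = angular_spectrum(field, 532e-9, 1.85e-6, 500e-6)
intensity = np.abs(at_sensor) ** 2           # what the sensor records
```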

Journal ArticleDOI
TL;DR: In this paper, a mismatch between the small size of image sensor pixels and the achievable filter spectral resolution has been identified, which has prevented the realisation of real-time image sensor applications.
Abstract: Rapid advances in image sensor technology have generated a mismatch between the small size of image sensor pixels and the achievable filter spectral resolution. This mismatch has prevented the real...

Proceedings ArticleDOI
16 Jun 2019
TL;DR: A new-generation smart image sensor, CeleX-V, integrates several vision functions into one chip, such as full-array-parallel motion detection and on-chip optical flow extraction, and supports both MIPI and parallel interfaces.
Abstract: We demonstrate a new-generation smart image sensor, CeleX-V. With 1280 × 800 pixels and a 9.8 µm pitch, the sensor integrates several vision functions into one chip, such as full-array-parallel motion detection and on-chip optical flow extraction. CeleX-V is also capable of producing high-quality full-frame pictures and thus is compatible with traditional picture-based algorithms. The sensor supports both MIPI and parallel interfaces, with a typical power consumption of 400 mW.

Journal ArticleDOI
09 Jan 2019-PLOS ONE
TL;DR: This work shows that a consumer cellphone is capable of optical super-resolution imaging by (direct) Stochastic Optical Reconstruction Microscopy (dSTORM), achieving optical resolution better than 80 nm.
Abstract: High optical resolution in microscopy usually goes along with costly hardware components, such as lenses, mechanical setups and cameras. Several studies have proven that Single-Molecule Localization Microscopy can be made affordable, relying on off-the-shelf optical components and industry-grade CMOS cameras. Recent technological advances have yielded consumer-grade camera devices with surprisingly good performance. The camera sensors of smartphones have benefited from this development. Combined with computing power, smartphones provide a fantastic opportunity for “imaging on a budget”. Here we show that a consumer cellphone is capable of optical super-resolution imaging by (direct) Stochastic Optical Reconstruction Microscopy (dSTORM), achieving optical resolution better than 80 nm. In addition to the use of standard reconstruction algorithms, we used a trained image-to-image generative adversarial network (GAN) to reconstruct video sequences under conditions where traditional algorithms provide sub-optimal localization performance directly on the smartphone. We believe that “cellSTORM” paves the way to make super-resolution microscopy not only affordable but available due to the ubiquity of cellphone cameras.
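
The core localization step in any STORM-type pipeline can be sketched simply: detect isolated bright spots, then refine each to sub-pixel precision, here with an intensity-weighted centroid. Real dSTORM software uses Gaussian or MLE fitting, and the paper additionally uses a GAN; this is only the simplest baseline:

```python
# Minimal single-molecule localization: peak finding + centroid refinement.
import numpy as np
from scipy.ndimage import maximum_filter

def localize(frame, threshold, win=3):
    peaks = (frame == maximum_filter(frame, size=2 * win + 1)) & (frame > threshold)
    locs = []
    for y, x in zip(*np.nonzero(peaks)):
        patch = frame[y - win:y + win + 1, x - win:x + win + 1]
        if patch.shape != (2 * win + 1, 2 * win + 1):
            continue                          # skip peaks at the border
        yy, xx = np.mgrid[-win:win + 1, -win:win + 1]
        w = patch - patch.min()
        locs.append((y + (w * yy).sum() / w.sum(),
                     x + (w * xx).sum() / w.sum()))
    return np.array(locs)                      # sub-pixel (row, col) positions

frame = np.random.default_rng(0).poisson(2.0, (128, 128)).astype(float)
frame[40, 60] += 80                            # two synthetic emitters
frame[90, 25] += 95
print(localize(frame, threshold=30.0))
```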

Journal ArticleDOI
TL;DR: This paper investigates high-quality system designs for single-sensor RGB-NIR imaging, proposes the best-performing system design, and demonstrates several potential applications using a prototype camera implemented on that design.
Abstract: In recent years, many applications using a set of red-green-blue (RGB) and near-infrared (NIR) images, also called an RGB-NIR image, have been proposed. However, RGB-NIR imaging, i.e., simultaneous acquisition of RGB and NIR images, is still a laborious task because existing acquisition systems typically require two sensors or shots. In contrast, single-sensor RGB-NIR imaging using an RGB-NIR sensor, which is composed of a mosaic of RGB and NIR pixels, provides a practical and low-cost way of one-shot RGB-NIR image acquisition. In this paper, we investigate high-quality system designs for single-sensor RGB-NIR imaging. We first present a system evaluation framework using a new hyperspectral image data set we constructed. Different from existing work, our framework takes both the RGB-NIR sensor characteristics and the RGB-NIR imaging pipeline into account. Based on the evaluation framework, we then design each imaging factor that affects the RGB-NIR imaging quality and propose the best-performing system design. We finally present the configuration of our developed prototype RGB-NIR camera, which was implemented based on the best system design, and demonstrate several potential applications using the prototype.
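
The acquisition side of a single-sensor RGB-NIR camera can be sketched with a mosaic model: here one Bayer green site per 2 × 2 block is replaced by an NIR pixel, followed by naive mask-normalized interpolation. Both the CFA layout and the interpolation are assumptions; the paper's point is precisely that such design factors should be evaluated and optimized jointly:

```python
# Simulating one-shot RGB-NIR mosaic capture and naive demosaicking.
import numpy as np
from scipy.ndimage import uniform_filter

rng = np.random.default_rng(0)
H, W = 64, 64
scene = rng.random((H, W, 4))                 # ideal R, G, B, NIR planes

masks = np.zeros((H, W, 4), dtype=bool)       # assumed 2x2 CFA: [[R,G],[N,B]]
masks[0::2, 0::2, 0] = True                   # R
masks[0::2, 1::2, 1] = True                   # G
masks[1::2, 1::2, 2] = True                   # B
masks[1::2, 0::2, 3] = True                   # NIR

mosaic = (scene * masks).sum(axis=2)          # what the sensor records

recon = np.empty_like(scene)
for c in range(4):                            # mask-normalized box filter
    num = uniform_filter(mosaic * masks[..., c], size=3)
    den = uniform_filter(masks[..., c].astype(float), size=3)
    recon[..., c] = num / np.maximum(den, 1e-9)
```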

Journal ArticleDOI
TL;DR: In this paper, the authors demonstrate the generation of transmissive structural colors based on uniform-height amorphous silicon nanostructures, and report the construction of submicrometer RGB filter arrays for a pixel size down to 0.5 µm.
Abstract: Digital color imaging relies on spectral filters on top of a pixelated sensor, such as a CMOS image sensor. An important parameter of imaging devices is their resolution, which depends on the size of the pixels. For many applications, a high resolution is desirable, consequently requiring small spectral filters. Dielectric nanostructures, due to their resonant behavior and its tunability, offer the possibility to be assembled into flexible and miniature spectral filters, which could potentially replace conventional pigmented and dye-based color filters. In this paper, we demonstrate the generation of transmissive structural colors based on uniform-height amorphous silicon nanostructures. We optimize the structures for the primary RGB colors and report the construction of submicrometer RGB filter arrays for a pixel size down to 0.5 µm.

Proceedings Article
01 Jan 2019
TL;DR: This paper learns a sensor-independent working space that can be used to canonicalize the RGB values of any arbitrary camera sensor and allows unseen camera sensors to be used on a single DNN model trained on this working space.
Abstract: While modern deep neural networks (DNNs) achieve state-of-the-art results for illuminant estimation, it is currently necessary to train a separate DNN for each type of camera sensor. This means when a camera manufacturer uses a new sensor, it is necessary to retrain an existing DNN model with training images captured by the new sensor. This paper addresses this problem by introducing a novel sensor-independent illuminant estimation framework. Our method learns a sensor-independent working space that can be used to canonicalize the RGB values of any arbitrary camera sensor. Our learned space retains the linear property of the original sensor raw-RGB space and allows unseen camera sensors to be used on a single DNN model trained on this working space. We demonstrate the effectiveness of this approach on several different camera sensors and show it provides performance on par with state-of-the-art methods that were trained per sensor.

Journal ArticleDOI
TL;DR: A video-recording-capable, compact, incoherent digital holographic camera system is proposed; real-time holographic recording and digitally reconstructed video playback are demonstrated with the proposed system.
Abstract: A video-recording-capable, compact, incoherent digital holographic camera system is proposed. The system consists of a linear polarizer, a convex lens, a geometric phase lens, and a polarized image sensor. A Fresnel hologram is recorded by this simple configuration in real time. The system parameters are analyzed and evaluated to record a better-quality hologram in a compact form factor. Real-time holographic recording and its digitally reconstructed video playback are demonstrated with the proposed system.
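
Numerical playback of a recorded Fresnel hologram reduces to Fresnel back-propagation, which the single-FFT method implements compactly. The wavelength, pixel pitch, and reconstruction distance below are illustrative values, not the system's parameters:

```python
# Single-FFT Fresnel reconstruction of a recorded hologram.
import numpy as np

def fresnel_reconstruct(hologram, wavelength, dx, z):
    n, m = hologram.shape
    y, x = np.mgrid[-n // 2:n // 2, -m // 2:m // 2]
    # Pre-multiplied quadratic chirp, then one FFT; constant output
    # phase factors are dropped since only intensity is displayed.
    chirp = np.exp(1j * np.pi / (wavelength * z) * ((x * dx)**2 + (y * dx)**2))
    field = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(hologram * chirp)))
    return np.abs(field) ** 2                  # reconstructed intensity

holo = np.random.default_rng(0).random((512, 512))   # stand-in for a capture
img = fresnel_reconstruct(holo, 532e-9, 3.45e-6, 0.05)
```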

Proceedings ArticleDOI
TL;DR: This work describes a testing and evaluation methodology that helps to benchmark novel sensor technologies and compare them to state-of-the-art sensors; it is shown that gated imaging outperforms state-of-the-art standard passive imaging due to time-synchronized active illumination.
Abstract: Adverse weather conditions are very challenging for autonomous driving because most of the state-of-the-art sensors stop working reliably under these conditions. In order to develop robust sensors and algorithms, tests with current sensors in defined weather conditions are crucial for determining the impact of bad weather for each sensor. This work describes a testing and evaluation methodology that helps to benchmark novel sensor technologies and compare them to state-of-the-art sensors. As an example, gated imaging is compared to standard imaging under foggy conditions. It is shown that gated imaging outperforms state-of-the-art standard passive imaging due to time-synchronized active illumination.

Journal ArticleDOI
TL;DR: The architecture of an embedded computer camera controller for monitoring and management of image data, which is applied in various control cases, and particularly in digitally controlled lighting devices, is proposed.
Abstract: Although the advent of LEDs can reduce the energy consumption of lighting in buildings by 50%, there is further potential for energy savings through lighting controls. Moreover, lighting controls can help meet the EU's near-zero-energy requirements for near zero energy buildings (nZEBs). For this reason, more sophisticated lighting controls must be proposed in order to take full advantage of LEDs and their flexibility concerning dimming. This paper proposes the architecture of an embedded computer camera controller for monitoring and management of image data, which is applied in various control cases, and particularly in digitally controlled lighting devices. The proposed system deals with real-time monitoring and management of a GigE camera input. An in-house algorithm developed in MATLAB identifies areas by their luminance values. The embedded microcontroller is part of a complete lighting control system with an imaging sensor that measures and controls the illumination of several working areas of a room. The power consumption of the proposed lighting system was measured and compared with that of a system using a typical photosensor. The functional performance and operation of the proposed camera control system architecture were evaluated on a BeagleBone Black microcontroller board.
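
The control idea reduces to measuring per-region luminance from each frame and nudging the dimming level toward a setpoint. A schematic loop follows, with the regions, gain, and setpoint as placeholder assumptions; the paper's MATLAB algorithm and calibration are not reproduced here:

```python
# Schematic camera-based daylight-responsive dimming loop.
import numpy as np

frame = np.random.default_rng(0).random((480, 640, 3))   # one camera frame
luma = 0.2126 * frame[..., 0] + 0.7152 * frame[..., 1] + 0.0722 * frame[..., 2]

regions = {"desk_1": (slice(100, 220), slice(80, 300)),
           "desk_2": (slice(260, 380), slice(320, 560))}
setpoint, gain = 0.5, 0.8
dim = {name: 0.7 for name in regions}                     # current levels

for name, roi in regions.items():
    measured = luma[roi].mean()                           # region luminance
    dim[name] = float(np.clip(dim[name] + gain * (setpoint - measured), 0, 1))
print(dim)
```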

Journal ArticleDOI
TL;DR: In this paper, a stereo-digital image correlation (stereo-DIC) system was proposed to obtain 3D full-field vibration measurements in a frequency range up to 4 kHz, even with an available frame rate of 178 fps.

Journal ArticleDOI
15 Oct 2019
TL;DR: In this article, the authors designed and fabricated a flat multi-level diffractive lens (MDL) that is achromatic in the SWIR band (875 nm to 1675 nm).
Abstract: We designed and fabricated a flat multi-level diffractive lens (MDL) that is achromatic in the SWIR band (875 nm to 1675 nm). The MDL had a focal length of 25 mm, aperture diameter of 8.93 mm, and thickness of only 2.6 µm. By pairing the MDL with a SWIR image sensor, we also characterized its imaging performance in terms of the point-spread functions, modulation-transfer functions, and still and video imaging.
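
Characterizing a lens by its modulation-transfer function follows directly from the measured PSF: the MTF is the magnitude of the PSF's Fourier transform, normalized to unity at zero frequency. A 1D sketch with a stand-in Gaussian PSF (not the MDL's measured data):

```python
# MTF from a (stand-in) measured PSF slice.
import numpy as np

dx = 5e-6                                     # assumed sensor pixel pitch, m
x = (np.arange(256) - 128) * dx
psf_1d = np.exp(-x**2 / (2 * (12e-6) ** 2))   # illustrative Gaussian PSF

otf = np.fft.rfft(psf_1d)
mtf = np.abs(otf) / np.abs(otf[0])            # normalize at zero frequency
freqs = np.fft.rfftfreq(psf_1d.size, d=dx) / 1e3   # cycles/mm
print(f"MTF at {freqs[10]:.0f} cycles/mm: {mtf[10]:.2f}")
```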

Journal ArticleDOI
TL;DR: Conventional color image sensors employing absorptive color filters exhibit low overall light transmission, resulting in limited signal levels per sensor pixel, and this issue is becoming critical beca...
Abstract: Conventional color image sensors employing absorptive color filters exhibit low overall light transmission, resulting in limited signal levels per sensor pixel. This issue is becoming critical beca...

Journal ArticleDOI
TL;DR: Wang et al. as mentioned in this paper proposed a multi-projection-center (MPC) model with 6 intrinsic parameters to characterize light field cameras based on the traditional two-parallel-plane (TPP) representation.
Abstract: Light field cameras can capture both spatial and angular information of light rays, enabling 3D reconstruction from a single exposure. The geometry of 3D reconstruction is significantly affected by the intrinsic parameters of a light field camera. In this paper, we propose a multi-projection-center (MPC) model with 6 intrinsic parameters to characterize light field cameras, based on the traditional two-parallel-plane (TPP) representation. The MPC model can generally parameterize the light field in different imaging formations, including conventional and focused light field cameras. By the constraints of 4D rays and 3D geometry, a 3D projective transformation is deduced to describe the relationship between geometric structure and the MPC coordinates. Based on the MPC model and projective transformation, we propose a calibration algorithm to verify our light field camera model. Our calibration method includes a closed-form solution and a non-linear optimization by minimizing re-projection errors. Experimental results on both simulated and real scene data have verified the performance of our algorithm.
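
The calibration step has the generic shape of re-projection-error minimization. The sketch below fits the intrinsics of a plain pinhole projection with a non-linear least-squares solver; it stands in for, but does not implement, the paper's MPC light-field model:

```python
# Generic calibration by minimizing re-projection error.
import numpy as np
from scipy.optimize import least_squares

pts3d = np.random.default_rng(0).uniform(-1, 1, (50, 3)) + [0, 0, 5]

def project(params, pts):
    f, cx, cy = params                      # intrinsics to estimate
    return f * pts[:, :2] / pts[:, 2:3] + [cx, cy]

true_params = np.array([800.0, 320.0, 240.0])
observed = project(true_params, pts3d)      # noiseless synthetic detections

def residuals(params):
    return (project(params, pts3d) - observed).ravel()

fit = least_squares(residuals, x0=[500.0, 300.0, 200.0])
print(fit.x)                                # recovers ~[800, 320, 240]
```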

Journal ArticleDOI
TL;DR: In this paper, a low-cost Galvo scanner is used to rapidly scan an unknown laser speckle pattern on the object and recover the positional shifts of the pattern based on the phase correlation of the captured images.
Abstract: We report a compact, cost-effective, and field-portable lensless imaging platform for quantitative microscopy. In this platform, the object is placed on top of an image sensor chip without using a lens. We use a low-cost galvo scanner to rapidly scan an unknown laser speckle pattern on the object. To address the positioning repeatability and accuracy issues, we directly recover the positional shifts of the speckle pattern based on the phase correlation of the captured images. To bypass the resolution limit set by the imager pixel size, we employ a sub-sampled ptychographic phase retrieval process to recover the complex object. We validate our approach using a resolution target, phase target, and biological sample. Our results show that accurate, high-quality complex images can be obtained from a lensless dataset with as few as ∼10 images. We also demonstrate the reported approach to achieve a 6.4-mm by 4.6-mm field of view and a half-pitch resolution of 1 μm. The reported approach may provide a quantitative lensless imaging strategy for addressing point-of-care-, global-health-, and telemedicine-related challenges.
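
The shift-recovery step can be sketched with standard phase correlation: the normalized cross-power spectrum of two captures yields an impulse at their relative displacement. An integer-pixel version is shown; the paper applies the same principle to the captured speckle images:

```python
# Recovering a lateral shift between two images by phase correlation.
import numpy as np

def phase_correlation_shift(a, b):
    F = np.fft.fft2(a) * np.conj(np.fft.fft2(b))
    corr = np.fft.ifft2(F / np.maximum(np.abs(F), 1e-12))
    peak = np.unravel_index(np.argmax(np.abs(corr)), corr.shape)
    # Map peak indices to signed shifts (wrap-around convention).
    return tuple(p - s if p > s // 2 else p for p, s in zip(peak, corr.shape))

rng = np.random.default_rng(0)
img = rng.random((256, 256))
shifted = np.roll(img, shift=(7, -12), axis=(0, 1))
print(phase_correlation_shift(shifted, img))   # ~(7, -12)
```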

Patent
21 Feb 2019
TL;DR: A method for depth mapping of objects in a scene by a system of a moving/movable platform is disclosed, comprising actively illuminating the scene with pulsed light generated by at least one pulsed-light illuminator; receiving, responsive to illuminating the scene, reflections on at least one image sensor that comprises a plurality of pixel elements; and gating at least one of the pixel elements of the image sensor to convert the reflections into pixel values for generating reflection-based images that have at least two depth-of-field (DOF) ranges and an overlapping DOF region.
Abstract: A method for depth mapping of objects in a scene by a system of a moving/movable platform is disclosed, comprising actively illuminating the scene with pulsed light that is generated by at least one pulsed light illuminator; receiving, responsive to illuminating the scene, reflections on at least one image sensor that comprises a plurality of pixel elements; gating at least one of the plurality of pixel elements of the at least one image sensor for converting the reflections into pixel values for generating reflection-based images that have at least two depth-of-field ranges and an overlapping DOF region; and determining, based on at least one first pixel value of a first DOF in the overlapping DOF region, and based on at least one second pixel value of a second DOF in the overlapping DOF region, depth information of one or more objects located in the overlapping DOF region of the scene.