
Showing papers on "Image sensor published in 2016"


Journal ArticleDOI
TL;DR: Popular and state-of-the-art fusion methods at different levels, especially the pixel level, are reviewed, and various approaches and metrics for assessing the fused product are presented.

574 citations


Journal ArticleDOI
TL;DR: Organic photodiodes (OPDs) are beginning to rival their inorganic counterparts in a number of performance criteria including the linear dynamic range, detectivity, and color selectivity.
Abstract: Major growth in the image sensor market is largely as a result of the expansion of digital imaging into cameras, whether stand-alone or integrated within smart cellular phones or automotive vehicles. Applications in biomedicine, education, environmental monitoring, optical communications, pharmaceutics and machine vision are also driving the development of imaging technologies. Organic photodiodes (OPDs) are now being investigated for existing imaging technologies, as their properties make them interesting candidates for these applications. OPDs offer cheaper processing methods, devices that are light, flexible and compatible with large (or small) areas, and the ability to tune the photophysical and optoelectronic properties − both at a material and device level. Although the concept of OPDs has been around for some time, it is only relatively recently that significant progress has been made, with their performance now reaching the point that they are beginning to rival their inorganic counterparts in a number of performance criteria including the linear dynamic range, detectivity, and color selectivity. This review covers the progress made in the OPD field, describing their development as well as the challenges and opportunities.

499 citations


Journal ArticleDOI
TL;DR: A miniature flat camera integrating a monolithic metasurface lens doublet, corrected for monochromatic aberrations, with an image sensor is demonstrated with nearly diffraction-limited image quality, indicating the potential of this technology in the development of optical systems for microscopy, photography, and computer vision.
Abstract: Optical metasurfaces are two-dimensional arrays of nano-scatterers that modify optical wavefronts at subwavelength spatial resolution. They are poised to revolutionize optics by enabling complex low-cost systems where multiple metasurfaces are lithographically stacked and integrated with electronics. For imaging applications, metasurface stacks can perform sophisticated image corrections and can be directly integrated with image sensors. Here we demonstrate this concept with a miniature flat camera integrating a monolithic metasurface lens doublet corrected for monochromatic aberrations, and an image sensor. The doublet lens, which acts as a fisheye photographic objective, has a small f-number of 0.9, an angle-of-view larger than 60° × 60°, and operates at 850 nm wavelength with 70% focusing efficiency. The camera exhibits nearly diffraction-limited image quality, which indicates the potential of this technology in the development of optical systems for microscopy, photography, and computer vision.

495 citations


Journal ArticleDOI
Robert LiKamWa1, Yunhui Hou1, Julian Gao1, Mia Polansky1, Lin Zhong1 
18 Jun 2016
TL;DR: RedEye is designed to mitigate analog design complexity, using a modular column-parallel design to promote physical design reuse and algorithmic cyclic reuse, and programmable mechanisms to admit noise for tunable energy reduction.
Abstract: Continuous mobile vision is limited by the inability to efficiently capture image frames and process vision features. This is largely due to the energy burden of analog readout circuitry, data traffic, and intensive computation. To promote efficiency, we shift early vision processing into the analog domain. This results in RedEye, an analog convolutional image sensor that performs layers of a convolutional neural network in the analog domain before quantization. We design RedEye to mitigate analog design complexity, using a modular column-parallel design to promote physical design reuse and algorithmic cyclic reuse. RedEye uses programmable mechanisms to admit noise for tunable energy reduction. Compared to conventional systems, RedEye reports an 85% reduction in sensor energy, 73% reduction in cloudlet-based system energy, and a 45% reduction in computation-based system energy.
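As a rough illustration of the idea summarized above (convolution performed before quantization, with deliberately admitted noise), here is a minimal sketch; the function name, noise model, and uniform quantizer are illustrative assumptions, not RedEye's actual circuit behavior:

```python
import random

def convolve_and_quantize(image, kernel, noise_sigma=0.0, levels=16):
    """Sketch: apply a convolution in the 'analog' (continuous-valued)
    domain, optionally admitting Gaussian noise to model a tunable
    energy/accuracy trade-off, and only then quantize the result."""
    kh, kw = len(kernel), len(kernel[0])
    h, w = len(image), len(image[0])
    out = []
    for i in range(h - kh + 1):
        row = []
        for j in range(w - kw + 1):
            acc = sum(image[i + di][j + dj] * kernel[di][dj]
                      for di in range(kh) for dj in range(kw))
            acc += random.gauss(0.0, noise_sigma)  # admitted analog noise
            # quantize to `levels` uniform steps over [0, 1]
            q = min(levels - 1, max(0, round(acc * (levels - 1))))
            row.append(q)
        out.append(row)
    return out
```

With `noise_sigma=0` the result is deterministic; raising it mimics trading accuracy for energy, in the spirit of the paper's programmable noise-admission mechanism.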

204 citations


Proceedings ArticleDOI
01 Oct 2016
TL;DR: This work proposes a novel approach that tracks the pose of a monocular camera with respect to a given 3D LiDAR map, employing a visual odometry system based on local bundle adjustment to reconstruct a sparse set of 3D points from image features.
Abstract: Localizing a camera in a given map is essential for vision-based navigation. In contrast to common methods for visual localization that use maps acquired with cameras, we propose a novel approach, which tracks the pose of monocular camera with respect to a given 3D LiDAR map. We employ a visual odometry system based on local bundle adjustment to reconstruct a sparse set of 3D points from image features. These points are continuously matched against the map to track the camera pose in an online fashion. Our approach to visual localization has several advantages. Since it only relies on matching geometry, it is robust to changes in the photometric appearance of the environment. Utilizing panoramic LiDAR maps additionally provides viewpoint invariance. Yet low-cost and lightweight camera sensors are used for tracking. We present real-world experiments demonstrating that our method accurately estimates the 6-DoF camera pose over long trajectories and under varying conditions.

152 citations


Journal ArticleDOI
TL;DR: To reach a transmission performance of 54 Mb/s, standardized as the maximum data rate in IEEE 802.11p for V2X communication, a more advanced OCI-based automotive VLC system is described, which achieves a more than fivefold higher data rate by introducing optical orthogonal frequency-division multiplexing (optical OFDM).
Abstract: As a new technology for next-generation vehicle-to-everything (V2X) communication, visible-light communication (VLC) using light-emitting diode (LED) transmitters and camera receivers has been energetically studied. Toward the future in which vehicles are connected anytime and anywhere by optical signals, a cutting-edge camera receiver employing a special CMOS image sensor, i.e., the optical communication image sensor (OCI), has been prototyped, and an optical V2V communication system applying this OCI-based camera receiver has already demonstrated 10-Mb/s optical signal transmission between real vehicles while driving outdoors. In this paper, to reach a transmission performance of 54 Mb/s, which is standardized as the maximum data rate in IEEE 802.11p for V2X communication, a more advanced OCI-based automotive VLC system is described. By introducing optical orthogonal frequency-division multiplexing (optical OFDM), the new system achieves a more than fivefold higher data rate. Additionally, the frequency response characteristics and circuit noise of the OCI are closely analyzed and taken into account in the signal design. Furthermore, the forward-current limitation of an actual LED is also considered for long operational reliability, i.e., the LED is not operated in overdrive. Bit-error-rate experiments verify a system performance of 45 Mb/s without bit errors, and of 55 Mb/s with residual bit errors.
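The abstract above does not spell out the OFDM details. Purely as a sketch of the general DC-biased optical OFDM idea (illustrative subcarrier count and bias, not the paper's parameters): an LED intensity signal must be real and non-negative, which is obtained by loading subcarriers with Hermitian symmetry so the inverse DFT is real, then adding a DC bias:

```python
import cmath

def ofdm_modulate(symbols, n=16, dc_bias=2.0):
    """Place complex symbols on subcarriers 1..len(symbols) with a
    Hermitian-symmetric mirror so the IDFT is real, then add a DC
    bias so the LED drive signal stays non-negative."""
    spec = [0j] * n
    for k, s in enumerate(symbols, start=1):
        spec[k] = s
        spec[n - k] = s.conjugate()  # Hermitian mirror
    # inverse DFT; the sum is real-valued by construction
    return [sum(spec[k] * cmath.exp(2j * cmath.pi * k * t / n)
                for k in range(n)).real / n + dc_bias
            for t in range(n)]

def ofdm_demodulate(time, n_symbols, n=16, dc_bias=2.0):
    """Recover the subcarrier symbols: remove the bias, forward DFT."""
    x = [v - dc_bias for v in time]
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                for t in range(n))
            for k in range(1, n_symbols + 1)]
```

A round trip (modulate, then demodulate) recovers the transmitted constellation points up to floating-point error; a real receiver would additionally equalize the channel's frequency response, which the paper analyzes for the OCI.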

148 citations


Journal ArticleDOI
Magnus Mager1
TL;DR: A new 10 m² inner tracking system based on seven concentric layers of Monolithic Active Pixel Sensors (MAPS) will be installed in the ALICE experiment during the second long shutdown of the LHC in 2019–2020.
Abstract: A new 10 m² inner tracking system based on seven concentric layers of Monolithic Active Pixel Sensors will be installed in the ALICE experiment during the second long shutdown of the LHC in 2019–2020. The monolithic pixel sensors will be fabricated in the 180 nm CMOS Imaging Sensor process of TowerJazz. The ALPIDE design takes full advantage of a particular process feature, the deep p-well, which allows for full CMOS circuitry within the pixel matrix, while at the same time retaining the full charge collection efficiency. Together with the small feature size and the availability of six metal layers, this allowed a continuously active low-power front end to be placed into each pixel and an in-matrix sparsification circuit to be used that sends only the addresses of hit pixels to the periphery. This approach led to a power consumption of less than 40 mW/cm², a spatial resolution of around 5 μm, and a peaking time of around 2 μs, while being radiation hard to some 10¹³ 1 MeV n_eq/cm², fulfilling or exceeding the ALICE requirements. Over the last years of R&D, several prototype circuits have been used to verify radiation hardness, and to optimize pixel geometry and in-pixel front-end circuitry. The positive results led to a submission of full-scale (3 cm × 1.5 cm) sensor prototypes in 2014. They are being characterized in a comprehensive campaign that also involves several irradiation and beam tests. A summary of the results obtained and prospects towards the final sensor to instrument the ALICE Inner Tracking System are given.

147 citations


Proceedings ArticleDOI
01 Dec 2016
TL;DR: Wang et al. have successfully mass-produced novel stacked back-illuminated CMOS image sensors (BI-CIS), introducing an advanced Cu2Cu hybrid bonding process that they had developed.
Abstract: We have successfully mass-produced novel stacked back-illuminated CMOS image sensors (BI-CIS). In the new CIS, we introduced advanced Cu2Cu hybrid bonding that we had developed. The electrical test results showed that our highly robust Cu2Cu hybrid bonding achieved remarkable connectivity and reliability. The performance of image sensor was also investigated and our novel stacked BI-CIS showed favorable results.

108 citations


Journal ArticleDOI
TL;DR: In this paper, a CMOS single-photon avalanche diode (SPAD)-based quarter video graphics array image sensor with 8-μm pixel pitch and 26.8% fill factor is presented.
Abstract: A CMOS single-photon avalanche diode (SPAD)-based quarter video graphics array image sensor with 8-μm pixel pitch and 26.8% fill factor (FF) is presented. The combination of analog pixel electronics and scalable shared-well SPAD devices facilitates high-resolution, high-FF SPAD imaging arrays exhibiting photon shot-noise-limited statistics. The SPAD has a 47 counts/s dark count rate at 1.5 V excess bias (EB), 39.5% photon detection probability (PDP) at 480 nm, and a minimum of 1.1 ns dead time at 1 V EB. Analog single-photon counting imaging is demonstrated with a maximum 14.2-mV/SPAD event sensitivity and 0.06 e− minimum equivalent read noise. Binary quanta image sensor (QIS) 16-kframes/s real-time oversampling is shown, verifying single-photon QIS theory with 4.6× overexposure latitude and 0.168 e− read noise.

108 citations


Journal ArticleDOI
TL;DR: The pnCCD is a 2D imaging sensor that meets these requirements; it is read out at a rate of 1,150 frames per second with an image area of 264 × 264 pixels.
Abstract: We report on a new camera that is based on a pnCCD sensor for applications in scanning transmission electron microscopy. Emerging new microscopy techniques demand improved detectors with regards to readout rate, sensitivity and radiation hardness, especially in scanning mode. The pnCCD is a 2D imaging sensor that meets these requirements. Its intrinsic radiation hardness permits direct detection of electrons. The pnCCD is read out at a rate of 1,150 frames per second with an image area of 264 × 264 pixels. In binning or windowing modes, the readout rate is increased almost linearly, for example to 4,000 frames per second at 4× binning (264 × 66 pixels). Single electrons with energies from 300 keV down to 5 keV can be distinguished due to the high sensitivity of the detector. Three applications in scanning transmission electron microscopy are highlighted to demonstrate that the pnCCD satisfies experimental requirements, especially fast recording of 2D images. In the first application, 65,536 2D diffraction patterns were recorded in 70 s. STEM images corresponding to intensities of various diffraction peaks were reconstructed. For the second application, the microscope was operated in a Lorentz-like mode. Magnetic domains were imaged in an area of 256 × 256 sample points in less than 37 seconds, for a total of 65,536 images, each with 264 × 132 pixels. Due to the information provided by the two-dimensional images, not only the amplitude but also the direction of the magnetic field could be determined. In the third application, millisecond images of a semiconductor nanostructure were recorded to determine the lattice strain in the sample. A speed-up in measurement time by a factor of 200 was achieved compared to a previously used camera system.

103 citations


Patent
17 Nov 2016
TL;DR: In this article, a system for measuring the 3D shape of an object using a structured light projector and an image sensor such as a scanner or camera is presented, where the projected dot pattern comprising a plurality of dots distributed on the grid such that neighboring dots within a certain sub-window size are unique sub-patterns.
Abstract: The present invention embraces a system for measuring the 3D shape of an object using a structured light projector and an image sensor such as a scanner or camera. The structured light projector projects a pseudo-random dot pattern onto the object, which is positioned on a planar surface. The image sensor captures the 3D image of the object from the reflective surface and determines the dimensions or shape of the object. The surface displays the projected dot pattern and defines a grid based on it. The dot pattern comprises a plurality of dots distributed on the grid such that neighboring dots within a certain sub-window size form unique sub-patterns. The neighboring dots are arranged in a staggered grid format relative to one axis of the grid.
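The key property in this abstract, that every sub-window of the projected dot pattern is a unique sub-pattern (so a camera seeing any local window can localize it in the grid), can be checked mechanically. A minimal sketch, with a binary grid standing in for the dot pattern and all names illustrative:

```python
def all_subwindows_unique(grid, win):
    """Return True if every win x win sub-window of the binary dot
    grid is a distinct pattern, i.e. any observed local window
    identifies a unique position in the projected pattern."""
    h, w = len(grid), len(grid[0])
    seen = set()
    for i in range(h - win + 1):
        for j in range(w - win + 1):
            key = tuple(tuple(grid[i + di][j + dj] for dj in range(win))
                        for di in range(win))
            if key in seen:
                return False  # duplicate sub-pattern: ambiguous position
            seen.add(key)
    return True
```

A pattern generator for such a system would typically draw candidate patterns (e.g. pseudo-randomly, as the patent describes) and keep one that passes this uniqueness check.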

Journal ArticleDOI
TL;DR: A single-photon, time-gated pixel imager is presented for application in fluorescence lifetime imaging microscopy; it is capable of gathering information about photon position, number, and time distribution, enabling cost-effective devices for scientific imaging applications.
Abstract: A single-photon, time-gated, 160 × 120 pixel imager is presented for its application in fluorescence lifetime imaging microscopy. Exploiting single-photon avalanche diodes and an extremely compact pixel circuitry (only seven MOSFETs and one MOSCAP), the imager is capable of gathering information about photon position, number, and time distribution, enabling cost-effective devices for scientific imaging applications. This is achieved thanks to the photon counting and time-gating capabilities implemented in the analog domain, which in turn enable a 15 μm pixel with a 21% fill factor. A reconfigurable column circuitry supports both the conventional analog readout and a self-referenced analog-to-digital conversion, able to cancel out the pixel-to-pixel nonuniformities and speeding up the frame rate to 486 fps. The imager, which also features a delay-locked loop to stabilize the internal waveform generation for reliable timing performance, has been implemented in a standard high-voltage 0.35 μm CMOS technology. Measurements in a fluorescence lifetime setup have been performed, comparing the results with single-point acquisitions made with commercial time-correlated equipment.

Journal ArticleDOI
TL;DR: In this paper, a radio-frequency optically pumped atomic magnetometer operating in magnetic induction tomography modality was used for shape reconstruction and detection of submillimetric cracks and penetration of conductive barriers.
Abstract: We report on a compact, tunable imaging device, scalable to large arrays, based on a radio-frequency optically pumped atomic magnetometer operating in a magnetic induction tomography modality. Imaging of conductive objects is performed at room temperature, in an unshielded environment, and without background subtraction. Conductivity maps of target objects not only exhibit excellent performance in terms of shape reconstruction but also demonstrate detection of sub-millimetric cracks and penetration of conductive barriers. The results presented here demonstrate the potential of a future generation of imaging instruments, which combine magnetic induction tomography and the unmatched performance of atomic magnetometers.

Journal ArticleDOI
KyeoReh Lee1, YongKeun Park1
TL;DR: This work proposes a speckle-correlation scattering matrix approach, which enables access to the impinging light-field information once light transport in the diffusive layer is precisely calibrated, and demonstrates direct holographic measurements of three-dimensional optical fields using a compact device consisting of a regular image sensor and a diffuser.
Abstract: The word 'holography' means a drawing that contains all of the information for light, both amplitude and wavefront. However, because of the insufficient bandwidth of current electronics, the direct measurement of the wavefront of light has not yet been achieved. Though reference-field-assisted interferometric methods have been utilized in numerous applications, introducing a reference field raises several fundamental and practical issues. Here we demonstrate a reference-free holographic image sensor. To achieve this, we propose a speckle-correlation scattering matrix approach; light-field information passing through a thin disordered layer is recorded and retrieved from a single-shot recording of speckle intensity patterns. Self-interference via diffusive scattering enables access to impinging light-field information when light transport in the diffusive layer is precisely calibrated. As a proof of concept, we demonstrate direct holographic measurements of three-dimensional optical fields using a compact device consisting of a regular image sensor and a diffuser.

Proceedings ArticleDOI
03 Dec 2016
TL;DR: The first 3D-stacked backside-illuminated (BSI) single-photon avalanche diode (SPAD) image sensor capable of both single-photon-counting (SPC) intensity imaging and time-resolved imaging is presented.
Abstract: We present the first 3D-stacked backside-illuminated (BSI) single-photon avalanche diode (SPAD) image sensor capable of both single-photon-counting (SPC) intensity and time-resolved imaging. The 128 × 120 prototype has a pixel pitch of 7.83 μm, making it the smallest pixel reported for SPAD image sensors. A low-power, high-density 40 nm bottom tier hosts the quenching front end and processing electronics, while an imaging-specific 65 nm top tier hosts the photo-detectors with a 1-to-1 hybrid bond connection [1]. The SPAD exhibits a median dark count rate (DCR) below 200 cps at room temperature and 1 V excess bias, and has a peak photon detection probability (PDP) of 27.5% at 640 nm and 3 V excess bias.

Patent
06 Oct 2016
TL;DR: In this paper, a method is presented comprising capturing current and past data frames of a vehicle scenery with an automotive imaging sensor, and predicting, by means of a recurrent neural network, the future position of a moving object in the scenery based on the current and past data frames.
Abstract: A method comprising capturing current and past data frames of a vehicle scenery with an automotive imaging sensor, and predicting, by means of a recurrent neural network, the future position of a moving object in the vehicle scenery based on the current and past data frames.

Journal ArticleDOI
Eric R. Fossum1, Jiaju Ma1, Saleh Masoodian1, Leo Anzagira1, Rachel Zizza1 
10 Aug 2016-Sensors
TL;DR: The Quanta Image Sensor (QIS) concept and its imaging characteristics are reviewed; the QIS represents a possible major paradigm shift in image capture.
Abstract: The Quanta Image Sensor (QIS) was conceived when contemplating shrinking pixel sizes and storage capacities, and the steady increase in digital processing power. In the single-bit QIS, the output of each field is a binary bit plane, where each bit represents the presence or absence of at least one photoelectron in a photodetector. A series of bit planes is generated through high-speed readout, and a kernel or “cubicle” of bits (x, y, t) is used to create a single output image pixel. The size of the cubicle can be adjusted post-acquisition to optimize image quality. The specialized sub-diffraction-limit photodetectors in the QIS are referred to as “jots” and a QIS may have a gigajot or more, read out at 1000 fps, for a data rate exceeding 1 Tb/s. Basically, we are trying to count photons as they arrive at the sensor. This paper reviews the QIS concept and its imaging characteristics. Recent progress towards realizing the QIS for commercial and scientific purposes is discussed. This includes implementation of a pump-gate jot device in a 65 nm CIS BSI process yielding read noise as low as 0.22 e− r.m.s. and conversion gain as high as 420 µV/e−, power efficient readout electronics, currently as low as 0.4 pJ/b in the same process, creating high dynamic range images from jot data, and understanding the imaging characteristics of single-bit and multi-bit QIS devices. The QIS represents a possible major paradigm shift in image capture.
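A minimal sketch of the "cubicle" readout described above: binary bit planes from the jots are summed over an (x, y, t) kernel to form one output pixel, and the kernel size can be changed after acquisition. Names and sizes here are illustrative, not from the paper:

```python
def cubicle_sum(bit_planes, x, y, kx, ky, kt):
    """Form one output pixel from a QIS bit-plane stack: sum the
    binary jot values inside an (x, y, t) kernel ('cubicle') of size
    kx x ky x kt, anchored at (x, y) and starting at t = 0.
    bit_planes[t][row][col] is 1 if the jot saw >= 1 photoelectron."""
    return sum(bit_planes[t][y + dy][x + dx]
               for t in range(kt)
               for dy in range(ky)
               for dx in range(kx))
```

Because the bit planes are stored, the same data can be re-binned with a different cubicle size post-acquisition to trade spatial/temporal resolution against signal, which is the image-quality optimization the abstract mentions.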

Journal ArticleDOI
01 Sep 2016-Small
TL;DR: This novel combination of cutting edge photonics research and well‐developed commercial products may not only lead to an important application of nanophotonics but also offer great potential for next generation image sensors beyond Moore's Law expectations.
Abstract: The increasing miniaturization and resolution of image sensors bring challenges to conventional optical elements such as spectral filters and polarizers, the properties of which are determined mainly by the materials used, including dye polymers. Recent developments in spectral filtering and optical manipulating techniques based on nanophotonics have opened up the possibility of an alternative method to control light spectrally and spatially. By integrating these technologies into image sensors, it will become possible to achieve high compactness, improved process compatibility, robust stability and tunable functionality. In this Review, recent representative achievements on nanophotonic image sensors are presented and analyzed including image sensors with nanophotonic color filters and polarizers, metamaterial-based THz image sensors, filter-free nanowire image sensors and nanostructured-based multispectral image sensors. This novel combination of cutting edge photonics research and well-developed commercial products may not only lead to an important application of nanophotonics but also offer great potential for next generation image sensors beyond Moore's Law expectations.

Journal ArticleDOI
TL;DR: This work proposes and demonstrates an RGB VLC transmission using a CMOS image sensor with a multi-input multi-output (MIMO) technique to mitigate the ICI and retrieve the three independent color channels in the rolling shutter pattern.
Abstract: Red, green, blue (RGB) light-emitting-diodes (LEDs) are used to increase the visible light communication (VLC) transmission capacity via wavelength-division-multiplexing (WDM), and the color image sensor in mobile phone is used to separate different color signals via a color filter array. However, due to the wide optical bandwidths of the color filters, there is a high spectral overlap among different channels, and a high inter-channel interference (ICI) happens. Here, we propose and demonstrate an RGB VLC transmission using CMOS image sensor with multi-input multi-output (MIMO) technique to mitigate the ICI and retrieve the three independent color channels in the rolling shutter pattern. Data pattern extinction-ratio (ER) enhancement and thresholding are deployed.
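A toy sketch of the MIMO unmixing idea in this abstract: if the wide color-filter responses mix the three LED channels linearly and the 3 × 3 mixing matrix H is known from calibration, the independent channels can be recovered by solving the linear system. H and its values below are illustrative, not measured:

```python
def unmix_rgb(H, received):
    """Undo inter-channel interference: the color filter array mixes
    the three LED channels as received = H @ sent, so recover `sent`
    by solving the 3x3 system with Gauss-Jordan elimination."""
    a = [row[:] + [r] for row, r in zip(H, received)]  # augmented [H | b]
    n = 3
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(a[r][col]))  # pivot
        a[col], a[piv] = a[piv], a[col]
        for r in range(n):
            if r != col:
                f = a[r][col] / a[col][col]
                for c in range(col, n + 1):
                    a[r][c] -= f * a[col][c]
    return [a[r][n] / a[r][r] for r in range(n)]
```

In the paper's rolling-shutter setting this unmixing would be applied per row-stripe of the image, after the extinction-ratio enhancement and thresholding steps it describes.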

Patent
25 Jan 2016
TL;DR: In this article, a radar sensor device is disposed within the windshield electronics module and a forward facing image sensor is disposed in the interior cabin of a vehicle at and behind the windshield, with both disposed behind or adjacent to an upper region of the windshield.
Abstract: A forward facing sensing system comprises a windshield electronics module disposed in the interior cabin of a vehicle at and behind the windshield. A radar sensor device is disposed within the windshield electronics module and a forward facing image sensor is disposed within the windshield electronics module, and with both disposed behind or adjacent to an upper region of the windshield. A control comprising an image processor analyzes images captured by the forward facing image sensor in order to, at least in part, detect an object present forward of the vehicle in its direction of forward travel. The radar sensor device may utilize beam aiming or beam selection or may utilize digital beam forming or digital beam steering or may comprise an array antenna or a phased array antenna or the forward facing image sensor may comprise a pixelated imaging array sensor. The radar sensor device comprises a silicon germanium radar sensor.

Journal ArticleDOI
28 May 2016-Sensors
TL;DR: A novel design for an indoor positioning system using LEDs, an image sensor (IS), and an accelerometer sensor (AS) from mobile devices is presented, providing high-precision indoor positioning.
Abstract: Recently, it is believed that lighting and communication technologies are being replaced by high power LEDs, which are core parts of the visible light communication (VLC) system. In this paper, by taking advantages of VLC, we propose a novel design for an indoor positioning system using LEDs, an image sensor (IS) and an accelerometer sensor (AS) from mobile devices. The proposed algorithm, which provides a high precision indoor position, consists of four LEDs mounted on the ceiling transmitting their own three-dimensional (3D) world coordinates and an IS at an unknown position receiving and demodulating the signals. Based on the 3D world coordinates and the 2D image coordinate of LEDs, the position of the mobile device is determined. Compared to existing algorithms, the proposed algorithm only requires one IS. In addition, by using an AS, the mobile device is allowed to have arbitrary orientation. Last but not least, a mechanism for reducing the image sensor noise is proposed to further improve the accuracy of the positioning algorithm. A simulation is conducted to verify the performance of the proposed algorithm.
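A hedged sketch of the positioning idea above, under simplifying assumptions that are NOT in the abstract: the camera looks straight up (the accelerometer has already removed any tilt), all LEDs share one ceiling height, and a pinhole model u = f(X − x)/d, v = f(Y − y)/d holds, where (x, y) is the camera position and d its distance to the ceiling. Each LED then contributes two equations linear in (x, y, d), solved by least squares; all names are illustrative:

```python
def locate_camera(leds, pixels, f, ceiling_z):
    """Estimate camera position from >= 2 LEDs with known 3D world
    coordinates `leds` and observed image coordinates `pixels`,
    assuming an upward-looking pinhole camera with focal length f.
    Each LED gives f*x + u*d = f*X and f*y + v*d = f*Y; solve the
    normal equations A^T A p = A^T b for p = (x, y, d)."""
    ata = [[0.0] * 3 for _ in range(3)]
    atb = [0.0] * 3
    for (X, Y, _Z), (u, v) in zip(leds, pixels):
        for row, rhs in (((f, 0.0, u), f * X), ((0.0, f, v), f * Y)):
            for i in range(3):
                atb[i] += row[i] * rhs
                for j in range(3):
                    ata[i][j] += row[i] * row[j]
    # Gauss-Jordan solve of the 3x3 normal equations
    a = [ata[i] + [atb[i]] for i in range(3)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(a[r][col]))
        a[col], a[piv] = a[piv], a[col]
        for r in range(3):
            if r != col:
                fac = a[r][col] / a[col][col]
                for c in range(col, 4):
                    a[r][c] -= fac * a[col][c]
    x, y, d = (a[r][3] / a[r][r] for r in range(3))
    return x, y, ceiling_z - d
```

The paper's actual algorithm handles arbitrary device orientation via the accelerometer and models image-sensor noise; this sketch only shows why four LEDs broadcasting their 3D coordinates overdetermine the position.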

Proceedings ArticleDOI
27 Jun 2016
TL;DR: This paper introduces an algorithm that is able to combine and convert different RGB measurements into a single hyperspectral image for both indoor and outdoor scenes by exploiting the different spectral sensitivities of different camera sensors.
Abstract: Capturing hyperspectral images requires expensive and specialized hardware that is not readily accessible to most users. Digital cameras, on the other hand, are significantly cheaper in comparison and can be easily purchased and used. In this paper, we present a framework for reconstructing hyperspectral images by using multiple consumer-level digital cameras. Our approach works by exploiting the different spectral sensitivities of different camera sensors. In particular, due to the differences in spectral sensitivities of the cameras, different cameras yield different RGB measurements for the same spectral signal. We introduce an algorithm that is able to combine and convert these different RGB measurements into a single hyperspectral image for both indoor and outdoor scenes. This camera-based approach allows hyperspectral imaging at a fraction of the cost of most existing hyperspectral hardware. We validate the accuracy of our reconstruction against ground truth hyperspectral images (using both synthetic and real cases) and show its usage on relighting applications.

Proceedings ArticleDOI
11 Jul 2016
TL;DR: This work evaluates a compression method previously proposed for a different type of plenoptic image (focused, or plenoptic camera 2.0, content) and shows a high compression efficiency compared with JPEG, i.e., up to 6 dB improvement for the tested images.
Abstract: Plenoptic images are one type of light field content produced by using a combination of a conventional camera and an additional optical component in the form of microlens arrays, which are positioned in front of the image sensor surface. This camera setup can capture a sub-sampling of the light field with high spatial fidelity over a small range, and with a more coarsely sampled angle range. The earliest applications that leverage plenoptic image content include image refocusing, non-linear distribution of out-of-focus areas, SNR vs. resolution trade-offs, and 3D-image creation. All functionalities are provided by using post-processing methods. In this work, we evaluate a compression method that we previously proposed for a different type of plenoptic image (focused or plenoptic camera 2.0 content) than the unfocused or plenoptic camera 1.0 content that is used in this Grand Challenge. The method is an extension of the state-of-the-art video compression standard HEVC, where we have brought the capability of bi-directional inter-frame prediction into the spatial prediction. The method is evaluated according to the scheme set out by the Grand Challenge, and the results show a high compression efficiency compared with JPEG, i.e., up to 6 dB improvement for the tested images.

Journal ArticleDOI
TL;DR: This paper proposes a self-calibration single-lens 3D video extensometer for non-contact, non-destructive and high-accuracy strain measurement and an efficient and robust inverse compositional Gauss-Newton algorithm combined with a robust stereo matching stage is employed to achieve high- Accuracy and real-time subset-based stereo matching.
Abstract: The accuracy of strain measurement using a common optical extensometer with two-dimensional (2D) digital image correlation (DIC) is not sufficient for experimental applications due to the effect of out-of-plane motion. Although three-dimensional (3D) DIC can measure all three components of displacement without introducing in-plane displacement errors, 3D-DIC requires the stringent synchronization between two digital cameras and requires complicated system calibration of binocular stereovision, which makes the measurement rather inconvenient. To solve the problems described above, this paper proposes a self-calibration single-lens 3D video extensometer for non-contact, non-destructive and high-accuracy strain measurement. In the established video extensometer, a single-lens 3D imaging system with a prism and two mirrors is constructed to acquire stereo images of the test sample surface, so the problems of synchronization and out-of-plane displacement can be solved easily. Moreover, a speckle-based self-calibration method which calibrates the single-lens stereo system using the reference speckle image of the specimen instead of the calibration targets is proposed, which will make the system more convenient to be used without complicated calibration. Furthermore, an efficient and robust inverse compositional Gauss-Newton algorithm combined with a robust stereo matching stage is employed to achieve high-accuracy and real-time subset-based stereo matching. Tensile tests of an Al-alloy specimen were performed to demonstrate the feasibility and effectiveness of the proposed self-calibration single-lens 3D video extensometer.

Patent
31 Mar 2016
TL;DR: In this article, a collimator filter layer is formed on an image sensor wafer, with a plurality of light-collimating apertures in the collimator filter layer aligned with the light-sensing elements in the wafer.
Abstract: Methods and systems for integrating image sensor structures with collimator filters, including manufacturing methods and associated structures for forming collimator filters at the wafer level for integration with image sensor semiconductor wafers. Methods of making an optical biometric sensor include forming a collimator filter layer on an image sensor wafer, wherein a plurality of light collimating apertures in the collimator filter layer are aligned with a plurality of light sensing elements in the image sensor wafer, and after forming the collimator filter layer on the image sensor wafer, singulating the image sensor wafer into a plurality of individual optical sensors.

Journal ArticleDOI
TL;DR: Novel push-pull D-π-A dyes specially designed for Gaussian-shaped, narrow-band absorption and the high photoelectric conversion are reported, which work both as a color filter and as a source of photocurrents with linear and fast light responses, high sensitivity, and excellent stability.
Abstract: There are growing opportunities and demands for image sensors that produce higher-resolution images, even in low-light conditions. Increasing the light input areas through 3D architecture within the same pixel size can be an effective solution to address this issue. Organic photodiodes (OPDs) that possess wavelength selectivity can allow for advancements in this regard. Here, we report on novel push–pull D–π–A dyes specially designed for Gaussian-shaped, narrow-band absorption and the high photoelectric conversion. These p-type organic dyes work both as a color filter and as a source of photocurrents with linear and fast light responses, high sensitivity, and excellent stability, when combined with C60 to form bulk heterojunctions (BHJs). The effectiveness of the OPD composed of the active color filter was demonstrated by obtaining a full-color image using a camera that contained an organic/Si hybrid complementary metal-oxide-semiconductor (CMOS) color image sensor.

Journal ArticleDOI
23 May 2016-Sensors
TL;DR: This paper reviews the state of the art of single-photon avalanche diode (SPAD) image sensors for time-resolved imaging and focuses on pixel architectures featuring small pixel size and high fill factor as a key enabling technology for the successful implementation of high spatial resolution SPAD-based image sensors.
Abstract: This paper reviews the state of the art of single-photon avalanche diode (SPAD) image sensors for time-resolved imaging. The focus of the paper is on pixel architectures featuring small pixel size and high fill factor (>20%) as a key enabling technology for the successful implementation of high spatial resolution SPAD-based image sensors. A summary of the main CMOS SPAD implementations, their characteristics and integration challenges, is provided from the perspective of targeting large pixel arrays, where one of the key drivers is the spatial uniformity. The main analog techniques aimed at time-gated photon counting and photon timestamping suitable for compact and low-power pixels are critically discussed. The main features of these solutions are the adoption of analog counting techniques and time-to-analog conversion, in NMOS-only pixels. Reliable quantum-limited single-photon counting, self-referenced analog-to-digital conversion, time gating down to 0.75 ns and timestamping with 368 ps jitter are achieved.
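To make the two operations the review discusses concrete, here is a minimal host-side sketch in Python/NumPy of time-gated photon counting and timestamp histogramming (time-correlated single-photon counting, TCSPC). The function names and the software formulation are assumptions for illustration only; in a real SPAD pixel the gating and counting are performed by the on-chip analog circuitry the paper describes. The 368 ps bin width merely mirrors the jitter figure quoted above:

```python
import numpy as np

def time_gated_counts(timestamps_ns, gate_open_ns, gate_width_ns):
    """Count only the photons whose arrival time falls inside the gate
    window, mimicking per-pixel time-gated photon counting."""
    t = np.asarray(timestamps_ns)
    inside = (t >= gate_open_ns) & (t < gate_open_ns + gate_width_ns)
    return int(inside.sum())

def tcspc_histogram(timestamps_ns, bin_ps=368, span_ns=50):
    """Build a TCSPC histogram of photon arrival times, with a bin width
    on the order of the detector's timing jitter."""
    edges = np.arange(0, span_ns * 1000 + bin_ps, bin_ps) / 1000.0  # ns
    hist, _ = np.histogram(timestamps_ns, bins=edges)
    return hist, edges
```

Accumulating such histograms over many laser cycles is what turns single-photon timestamps into a time-resolved intensity profile, e.g. for fluorescence lifetime or time-of-flight imaging.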

Journal ArticleDOI
Liping Yu1, Bing Pan1
TL;DR: Because the established single-camera stereo-DIC system needs only one camera and is strongly robust against variations in ambient light and the thermal radiation of a hot object, it shows great potential for determining transient deformation in non-laboratory or high-temperature environments with the aid of a single high-speed camera.

Patent
Yosuke Kusaka1
09 Sep 2016
TL;DR: In this article, a backside illumination image sensor is presented, comprising a semiconductor substrate with a plurality of photoelectric conversion elements and a read circuit formed on its front surface side, and an on-chip lens formed at a position set apart from the light shielding film by a predetermined distance in correspondence to each photoelectric conversion element.
Abstract: A backside illumination image sensor that includes a semiconductor substrate with a plurality of photoelectric conversion elements and a read circuit formed on a front surface side of the semiconductor substrate, and captures an image by outputting, via the read circuit, electrical signals generated as incident light having reached a back surface side of the semiconductor substrate is received at the photoelectric conversion elements includes: a light shielding film formed on a side where incident light enters the photoelectric conversion elements, with an opening formed therein in correspondence to each photoelectric conversion element; and an on-chip lens formed at a position set apart from the light shielding film by a predetermined distance in correspondence to each photoelectric conversion element. The light shielding film and an exit pupil plane of the image forming optical system achieve a conjugate relation to each other with regard to the on-chip lens.

Journal ArticleDOI
TL;DR: This work demonstrates a modified photometric stereo system with perfect pixel-registration, capable of reconstructing 3D images of scenes exhibiting dynamic behavior in real-time, and can be readily extended to other wavelengths, such as the infrared, where camera technology is expensive.
Abstract: Photometric stereo is an established three-dimensional (3D) imaging technique for estimating surface shape and reflectivity using multiple images of a scene taken from the same viewpoint but subject to different illumination directions. Importantly, this technique requires the scene to remain static during image acquisition otherwise pixel-matching errors can introduce significant errors in the reconstructed image. In this work, we demonstrate a modified photometric stereo system with perfect pixel-registration, capable of reconstructing 3D images of scenes exhibiting dynamic behavior in real-time. Performing high-speed structured illumination of a scene and sensing the reflected light with four spatially-separated, single-pixel detectors, our system reconstructs continuous real-time 3D video at ~8 frames per second for image resolutions of 64 × 64 pixels. Moreover, since this approach does not use a pixelated camera sensor, it can be readily extended to other wavelengths, such as the infrared, where camera technology is expensive.
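The estimation step underlying photometric stereo can be written as a per-pixel least-squares fit of the Lambertian model I = L·(albedo·n), where L stacks the known illumination directions. The following Python/NumPy sketch is the conventional multi-image, pixelated-camera formulation, not the single-pixel-detector variant of this paper; the function name and array shapes are illustrative assumptions:

```python
import numpy as np

def photometric_stereo(intensities, light_dirs):
    """Estimate per-pixel surface normals and albedo from k grayscale
    images taken under k known illumination directions (Lambertian model).

    intensities: (k, h, w) image stack from a fixed viewpoint
    light_dirs:  (k, 3) unit vectors pointing toward each light source
    """
    k, h, w = intensities.shape
    I = intensities.reshape(k, -1)  # (k, h*w), one column per pixel
    # Solve I = L @ g for g = albedo * normal, jointly for all pixels.
    g, *_ = np.linalg.lstsq(light_dirs, I, rcond=None)  # (3, h*w)
    albedo = np.linalg.norm(g, axis=0)
    # Normalize to unit normals; leave zero-albedo (shadowed) pixels at 0.
    normals = np.where(albedo > 1e-8, g / albedo, 0.0)
    return normals.reshape(3, h, w), albedo.reshape(h, w)
```

With k ≥ 3 non-coplanar light directions the per-pixel system is well posed; the recovered normal field is then typically integrated to obtain the 3D surface the paper reconstructs.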