
Showing papers on "Image sensor published in 2022"


Journal ArticleDOI
TL;DR: In this article, a multifunctional infrared image sensor based on an array of black phosphorus programmable phototransistors (bP-PPT) is presented, which can receive optical images transmitted over a broad spectral range in the infrared and perform inference computation to process and recognize the images with 92% accuracy.
Abstract: Image sensors with internal computing capability enable in-sensor computing that can significantly reduce the communication latency and power consumption for machine vision in distributed systems and robotics. Two-dimensional semiconductors have many advantages in realizing such intelligent vision sensors because of their tunable electrical and optical properties and amenability for heterogeneous integration. Here, we report a multifunctional infrared image sensor based on an array of black phosphorus programmable phototransistors (bP-PPT). By controlling the stored charges in the gate dielectric layers electrically and optically, the bP-PPT's electrical conductance and photoresponsivity can be locally or remotely programmed with 5-bit precision to implement an in-sensor convolutional neural network (CNN). The sensor array can receive optical images transmitted over a broad spectral range in the infrared and perform inference computation to process and recognize the images with 92% accuracy. The demonstrated bP image sensor array can be scaled up to build a more complex vision-sensory neural network, which will find many promising applications for distributed and remote multispectral sensing.
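The computational primitive here is simple: each phototransistor multiplies incident optical power by its locally programmed responsivity, and summing the resulting photocurrents performs a multiply-accumulate. A minimal NumPy sketch of that idea, with illustrative array sizes and a hypothetical symmetric 5-bit quantizer standing in for the reported weight precision:

```python
import numpy as np

def quantize_5bit(w, w_max=1.0):
    # Hypothetical symmetric 5-bit scheme (31 levels) matching the reported
    # precision; the paper's actual programming scheme may differ.
    step = w_max / 15.0
    return np.clip(np.round(w / step), -15, 15) * step

def in_sensor_conv(image, kernel):
    # Each kernel tap is a programmed responsivity; summing the photocurrents
    # of the phototransistors under one window yields one output value.
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

rng = np.random.default_rng(0)
ir_image = rng.random((8, 8))                       # stand-in infrared intensity map
edge_kernel = quantize_5bit(np.array([[-1., 0., 1.],
                                      [-2., 0., 2.],
                                      [-1., 0., 1.]]) / 2)
print(in_sensor_conv(ir_image, edge_kernel).shape)  # (6, 6)
```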

32 citations


Journal ArticleDOI
TL;DR: In this paper, an integrated scanning light-field imaging sensor, termed a meta-imaging sensor, was proposed to achieve high-speed aberration-corrected three-dimensional photography for universal applications without additional hardware modifications.
Abstract: Planar digital image sensors facilitate broad applications in a wide range of areas [1-5], and the number of pixels has scaled up rapidly in recent years [2,6]. However, the practical performance of imaging systems is fundamentally limited by spatially nonuniform optical aberrations originating from imperfect lenses or environmental disturbances [7,8]. Here we propose an integrated scanning light-field imaging sensor, termed a meta-imaging sensor, to achieve high-speed aberration-corrected three-dimensional photography for universal applications without additional hardware modifications. Instead of directly detecting a two-dimensional intensity projection, the meta-imaging sensor captures extra-fine four-dimensional light-field distributions through a vibrating coded microlens array, enabling flexible and precise synthesis of complex-field-modulated images in post-processing. Using the sensor, we achieve high-performance photography up to a gigapixel with a single spherical lens without a data prior, leading to orders-of-magnitude reductions in system capacity and costs for optical imaging. Even in the presence of dynamic atmosphere turbulence, the meta-imaging sensor enables multisite aberration correction across 1,000 arcseconds on an 80-centimetre ground-based telescope without reducing the acquisition speed, paving the way for high-resolution synoptic sky surveys. Moreover, high-density accurate depth maps can be retrieved simultaneously, facilitating diverse applications from autonomous driving to industrial inspections.

21 citations



Journal ArticleDOI
TL;DR: In this paper, a network of dual-gate silicon p-i-n photodiodes, which are compatible with complementary metal-oxide-semiconductor fabrication processes, can perform in-sensor image processing by being electrically programmed into convolutional filters.
Abstract: Complementary metal–oxide–semiconductor (CMOS) image sensors allow machines to interact with the visual world. In these sensors, image capture in front-end silicon photodiode arrays is separated from back-end image processing. To reduce the energy cost associated with transferring data between the sensing and computing units, in-sensor computing approaches are being developed where images are processed within the photodiode arrays. However, such methods require electrostatically doped photodiodes where photocurrents can be electrically modulated or programmed, and this is challenging in current CMOS image sensors that use chemically doped silicon photodiodes. Here we report in-sensor computing using electrostatically doped silicon photodiodes. We fabricate thousands of dual-gate silicon p–i–n photodiodes, which can be integrated into CMOS image sensors, at the wafer scale. With a 3 × 3 network of the electrostatically doped photodiodes, we demonstrate in-sensor image processing using seven different convolutional filters electrically programmed into the photodiode network. A network of dual-gate silicon p–i–n photodiodes, which are compatible with complementary metal–oxide–semiconductor fabrication processes, can perform in-sensor image processing by being electrically programmed into convolutional filters.
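Since the abstract pins down the operating principle — per-diode photocurrents that can be electrically programmed and summed — a short sketch can make the "seven convolutional filters" concrete. The kernels below are common textbook examples, not the paper's actual filter set:

```python
import numpy as np

# Hedged sketch: a 3x3 photodiode network whose per-diode photocurrent scales
# with an electrically programmed gain, so one readout of the summed current
# evaluates one kernel position. Seven illustrative kernels (assumed, not the
# paper's exact filters).
KERNELS = {
    "identity":  np.array([[0, 0, 0], [0, 1, 0], [0, 0, 0]], float),
    "blur":      np.ones((3, 3)) / 9.0,
    "sharpen":   np.array([[0, -1, 0], [-1, 5, -1], [0, -1, 0]], float),
    "edge_x":    np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float),
    "edge_y":    np.array([[-1, -2, -1], [0, 0, 0], [1, 2, 1]], float),
    "laplacian": np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]], float),
    "emboss":    np.array([[-2, -1, 0], [-1, 1, 1], [0, 1, 2]], float),
}

def network_response(patch, kernel):
    # Photocurrent of each dual-gate diode = local intensity * programmed
    # gain; the network output is their analog sum (one MAC per exposure).
    return float(np.sum(patch * kernel))

patch = np.random.default_rng(1).random((3, 3))
for name, k in KERNELS.items():
    print(f"{name:10s} -> {network_response(patch, k):+.3f}")
```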

15 citations


Journal ArticleDOI
TL;DR: In this article, the authors focus on the recent progress of both types of quanta image sensors, including impact ionization-gain devices and modified CMOS image sensors with deep sub-electron read noise and low-noise readout signal chains.
Abstract: The quanta image sensor (QIS) is a photon-counting image sensor that has been implemented using different electron devices, including impact ionization-gain devices, such as single-photon avalanche detectors (SPADs), and low-capacitance, high conversion-gain devices, such as modified CMOS image sensors (CIS) with deep sub-electron read noise and/or low-noise readout signal chains. This article primarily focuses on CIS QIS, but recent progress of both types is addressed. Signal processing progress, such as denoising, critical to improving apparent signal-to-noise ratio, is also reviewed as an enabling co-innovation.
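Why deep sub-electron read noise matters can be shown in a few lines: photoelectron counts are Poisson-distributed integers, and counting by rounding the analog readout only works when the Gaussian read noise is well below 0.5 e- rms. A toy simulation under those standard assumptions:

```python
import numpy as np

# Standard QIS model: Poisson photoelectrons plus Gaussian read noise,
# photon-counted by rounding to the nearest integer. Parameters illustrative.
rng = np.random.default_rng(2)
mean_signal = 1.5            # photoelectrons per jot per exposure
n_jots = 100_000

electrons = rng.poisson(mean_signal, n_jots)
for read_noise in (0.15, 0.35, 0.70):    # e- rms; counting needs << 0.5 e-
    analog = electrons + rng.normal(0.0, read_noise, n_jots)
    counts = np.clip(np.round(analog), 0, None)
    error_rate = np.mean(counts != electrons)
    print(f"read noise {read_noise:.2f} e- rms -> miscount rate {error_rate:.3%}")
```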

15 citations


Journal ArticleDOI
TL;DR: In this paper, the performance of short-wave infrared (SWIR)-sensitive PbS colloidal quantum dot (CQD) photodetectors is analyzed.
Abstract: Thin-film-based image sensors feature a thin-film photodiode (PD) monolithically integrated on CMOS readout circuitry. They are getting significant attention as an imaging platform for wavelengths beyond the reach of Si PDs, i.e., for photon energies lower than 1.12 eV. Among the promising candidates for converting low-energy photons to electric charge carriers, lead sulfide (PbS) colloidal quantum dot (CQD) photodetectors are particularly well suited. However, despite the dynamic research activities in the development of these thin-film-based image sensors, no in-depth study has been published on their imaging characteristics. In this work, we present an elaborate analysis of the performance of our short-wave infrared (SWIR) sensitive PbS CQD imagers, which achieve external quantum efficiency (EQE) up to 40% at the wavelength of 1450 nm. Image lag is characterized and compared with the temporal photoresponsivity of the PD. We show that blooming is suppressed because of the restricted pixel-to-pixel movement of the photo-generated charge carriers within the bottom transport layer (BTL) of the PD stack. Finally, we perform statistical analysis of the activation energy for CQD by dark current spectroscopy (DCS), which is an implementation of a well-known methodology in Si-based imagers for defect engineering to a new class of imagers.

15 citations


Journal ArticleDOI
TL;DR: A monolithic vision enhancement chip with light-sensing, memory, digital-to-analog conversion, and processing functions, implemented as a 619-pixel array with 8582 transistors based on wafer-scale two-dimensional monolayer molybdenum disulfide (MoS2).
Abstract: The rapid development of machine vision applications demands hardware that can sense and process visual information in a single monolithic unit to avoid redundant data transfer. Here, we design and demonstrate a monolithic vision enhancement chip with light-sensing, memory, digital-to-analog conversion, and processing functions by implementing a 619-pixel array with 8582 transistors and physical dimensions of 10 mm by 10 mm based on wafer-scale two-dimensional (2D) monolayer molybdenum disulfide (MoS2). The light-sensing function with analog MoS2 transistor circuits offers low noise and high photosensitivity. Furthermore, we adopt a MoS2 analog processing circuit to dynamically adjust the photocurrent of each individual imaging pixel, which yields a high dynamic light-sensing range greater than 90 decibels. The vision chip enables image-processing applications such as contrast enhancement and noise reduction. This large-scale monolithic chip based on 2D semiconductors combines light sensing, memory, and processing for artificial machine vision applications, exhibiting the potential of 2D semiconductors for future electronics.

14 citations


Journal ArticleDOI
TL;DR: Wang et al. proposed a spiking neural network-based machine vision system that combines the speed of the machine and the mechanism of biological vision, achieving high-speed object detection and tracking 1,000x faster than human vision.

14 citations


Journal ArticleDOI
TL;DR: The evaluation demonstrates the viability of a batteryless, remote, visual-sensing platform in a small package that collects and usefully processes acquired data and transmits it over long distances (kms), while being deployed for multiple decades with zero maintenance.
Abstract: Batteryless image sensors present an opportunity for long-life, long-range sensor deployments that require zero maintenance and have low cost. Such deployments are critical for enabling remote sensing applications, e.g., instrumenting national highways, where individual devices are deployed far (kms away) from supporting infrastructure. In this work, we develop and characterize Camaroptera, the first batteryless image-sensing platform to combine energy-harvesting with active, long-range (LoRa) communication. We also equip Camaroptera with a Machine Learning-based processing pipeline to mitigate costly, long-distance communication of image data. This processing pipeline filters out uninteresting images and only transmits the images interesting to the application. We show that compared to running a traditional Sense-and-Send workload, Camaroptera's Local Inference pipeline captures and sends up to 12× more images of interest to an application. By performing Local Inference, Camaroptera also sends up to 6.5× fewer uninteresting images, instead using that energy to capture up to 14.7× more new images, increasing its sensing effectiveness and availability. We fully prototype the Camaroptera hardware platform in a compact, 2 cm × 3 cm × 5 cm volume. Our evaluation demonstrates the viability of a batteryless, remote, visual-sensing platform in a small package that collects and usefully processes acquired data and transmits it over long distances (kms), while being deployed for multiple decades with zero maintenance.
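The Local Inference pipeline reduces to a simple duty cycle: harvest, capture, classify, and transmit only the hits. A hedged pseudocode-style sketch of that loop; the function names are hypothetical stand-ins, not Camaroptera's actual API:

```python
# Sketch of the sense -> local inference -> send loop described in the
# abstract. capture_image, classifier, lora_send, and
# harvested_energy_available are hypothetical stand-ins.
INTERESTING_THRESHOLD = 0.5

def duty_cycle(capture_image, classifier, lora_send, harvested_energy_available):
    while True:
        if not harvested_energy_available():
            continue                      # intermittent power: wait for harvest
        image = capture_image()
        score = classifier(image)         # tiny on-device ML model
        if score >= INTERESTING_THRESHOLD:
            lora_send(image)              # long-range radio only for hits
        # Otherwise the saved energy goes toward capturing more images --
        # the mechanism behind the reported 14.7x capture increase.
```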

13 citations


Journal ArticleDOI
TL;DR: In this paper, a single-photon avalanche diode (SPAD) sensor implemented in a 3D-stacked 65 nm/65 nm CMOS technology is reported for direct time-of-flight (dToF) 3D imaging in mobile devices.
Abstract: A 240 × 160 single-photon avalanche diode (SPAD) sensor fabricated in a 3D-stacked 65 nm/65 nm CMOS technology is reported for direct time-of-flight (dToF) 3D imaging in mobile devices. The top tier is occupied by backside-illuminated SPADs with 16 μm pitch and 49.7% fill factor. The SPAD array consists of multiple 16 × 16 SPAD top groups, in which each 8 × 8 SPAD sub-group shares a 10-bit, 97.65 ps, 100 ns-range time-to-digital converter (TDC) in a quad-partition rolling shutter mode. During the exposure of each rolling stage, a partial histogramming readout (PHR) approach is implemented to compress photon events into in-pixel histograms. Since the fine histograms are incomplete, we propose, for the first time, a histogram distortion correction (HDC) algorithm to solve the linearity discontinuity at the coarse bin edges. With this algorithm, depth measurement up to 9.5 m achieves an accuracy of 1 cm and a precision of 9 mm in office lighting conditions. Outdoor measurement with 10 klux sunlight achieves a maximum distance detection of 4 m at 20 fps, using a VCSEL laser with an average power of 90 mW and a peak power of 15 W.
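The reported TDC parameters fix the depth scale: a 97.65 ps bin corresponds to c·t/2 ≈ 1.46 cm of depth, and a 10-bit TDC spans roughly 15 m. A minimal sketch of turning a photon-arrival histogram into depth (omitting the paper's HDC step):

```python
import numpy as np

C = 299_792_458.0        # speed of light, m/s
TDC_LSB = 97.65e-12      # reported TDC bin width, s
N_BINS = 1024            # 10-bit TDC -> ~100 ns range, ~15 m unambiguous depth

def depth_from_histogram(hist):
    """Peak bin of the arrival histogram -> round-trip time -> depth.
    A real pipeline would also apply the paper's histogram distortion
    correction (HDC) at coarse-bin edges; this sketch skips it."""
    peak_bin = int(np.argmax(hist))
    return C * peak_bin * TDC_LSB / 2.0

# Synthetic histogram: uniform ambient counts plus a laser return at 3.0 m
rng = np.random.default_rng(3)
hist = rng.poisson(2.0, N_BINS)
hist[int(round(3.0 * 2 / C / TDC_LSB))] += 200
print(f"depth per bin: {C * TDC_LSB / 2 * 100:.2f} cm")     # ~1.46 cm
print(f"estimated depth: {depth_from_histogram(hist):.3f} m")
```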

13 citations


Journal ArticleDOI
01 Aug 2022-Sensors
TL;DR: In this article, a review and analysis of state-of-the-art image sensors for detecting, locating, and quantifying partial discharges in insulation systems and, in particular, corona discharges is presented.
Abstract: Today, there are many attempts to introduce the Internet of Things (IoT) in high-voltage systems, where partial discharges are a focus of concern since they degrade the insulation. The idea is to detect such discharges at a very early stage so that corrective actions can be taken before major damage is produced. Electronic image sensors were traditionally based on charge-coupled devices (CCDs) and, later, on complementary metal-oxide-semiconductor (CMOS) devices. This paper reviews and analyzes state-of-the-art image sensors for detecting, locating, and quantifying partial discharges in insulation systems, and corona discharges in particular, since this is an area with significant potential for expansion given the serious consequences of discharges and the complexity of their detection. The paper also discusses the recent progress, as well as the research needs and the challenges to be faced, in applying image sensors in this area. Although many of the cited research works focused on high-voltage applications, partial discharges can also occur in medium- and low-voltage applications. Thus, the applications that could benefit from the introduction of image sensors to detect electrical discharges include power substations, buried power cables, overhead power lines, and automotive applications, among others.

Journal ArticleDOI
TL;DR: Advancements in image sensor fabrication technology, for instance the backside illumination (BSI) process and pixel-level hybrid wafer bonding, have created new trends in HDR technology.
Abstract: Because automotive applications serve various purposes and involve scenes with a high dynamic range (HDR) of brightness, HDR image capture is a primary requirement. In this article, HDR CMOS image sensor (CIS) technology and its automotive applications are discussed, including application requirements, basic HDR approaches and trends in HDR CMOS image sensor technologies, advantages and disadvantages for automotive applications, and future prospects of HDR technology. LED flicker caused by the time-aliasing effect and motion artifacts are the two major issues in the conventional multiple-exposure HDR (MEHDR) approach, and several HDR technologies have been introduced for automotive applications. Advancements in image sensor fabrication technology, for instance the backside illumination (BSI) process and pixel-level hybrid wafer bonding, have created new trends in HDR technology.
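For readers unfamiliar with the baseline, a toy version of multiple-exposure HDR fusion shows both why it extends dynamic range and where its flicker/motion weakness comes from (the two exposures sample different instants). Values and the exposure ratio are illustrative:

```python
import numpy as np

# Minimal MEHDR fusion: keep the long exposure where it is unsaturated,
# substitute the rescaled short exposure elsewhere. Toy parameters.
FULL_WELL = 1000.0
RATIO = 16.0                               # long/short exposure-time ratio

def fuse(long_exp, short_exp, sat=0.95 * FULL_WELL):
    return np.where(long_exp < sat, long_exp, short_exp * RATIO)

scene = np.array([5.0, 80.0, 3000.0, 12000.0])       # radiance, arbitrary units
long_exp = np.clip(scene, None, FULL_WELL)           # bright pixels saturate
short_exp = np.clip(scene / RATIO, None, FULL_WELL)  # dark pixels get noisy
print(fuse(long_exp, short_exp))                     # [5, 80, 3000, 12000]
# Because long_exp and short_exp are captured at different times, a pulsed
# LED or a moving object can differ between them -- the flicker and motion
# artifacts the article discusses.
```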

Journal ArticleDOI
TL;DR: In this paper, a multilayer ONN pre-processor for image sensing is presented, using a commercial image intensifier as a parallel optoelectronic, optical-to-optical nonlinear activation function.
Abstract: Optical imaging is commonly used for both scientific and technological applications across industry and academia. In image sensing, a measurement, such as of an object's position, is performed by computational analysis of a digitized image. An emerging image-sensing paradigm breaks this delineation between data collection and analysis by designing optical components to perform not imaging, but encoding. By optically encoding images into a compressed, low-dimensional latent space suitable for efficient post-analysis, these image sensors can operate with fewer pixels and fewer photons, allowing higher-throughput, lower-latency operation. Optical neural networks (ONNs) offer a platform for processing data in the analog, optical domain. ONN-based sensors, however, have been limited to linear processing; nonlinearity is a prerequisite for depth, and multilayer NNs significantly outperform shallow NNs on many tasks. Here, we realize a multilayer ONN pre-processor for image sensing, using a commercial image intensifier as a parallel optoelectronic, optical-to-optical nonlinear activation function. We demonstrate that the nonlinear ONN pre-processor can achieve compression ratios of up to 800:1 while still enabling high accuracy across several representative computer-vision tasks, including machine-vision benchmarks, flow-cytometry image classification, and identification of objects in real scenes. In all cases we find that the ONN's nonlinearity and depth allowed it to outperform a purely linear ONN encoder. Although our experiments are specialized to ONN sensors for incoherent-light images, alternative ONN platforms should facilitate a range of ONN sensors. These ONN sensors may surpass conventional sensors by pre-processing optical information in spatial, temporal, and/or spectral dimensions, potentially with coherent and quantum qualities, all natively in the optical domain.
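A digital stand-in for the described architecture — linear optical layer, optical-to-optical nonlinearity, second linear layer — illustrates the compression arithmetic. The saturating nonlinearity is only an assumed model of the image intensifier, and the matrix shapes are chosen to give roughly the reported 800:1 ratio:

```python
import numpy as np

# Digital-twin sketch of a two-layer nonlinear ONN encoder. Shapes and the
# tanh-like saturation are assumptions, not the authors' device model.
rng = np.random.default_rng(4)
image = rng.random(100 * 100)             # flattened incoherent intensity image

W1 = rng.normal(size=(256, image.size)) / np.sqrt(image.size)  # 1st optical layer
W2 = rng.normal(size=(12, 256)) / np.sqrt(256)                 # 2nd optical layer

def intensifier(x, sat=1.0):
    # Assumed saturating optical-to-optical response standing in for the
    # image intensifier; intensities are non-negative, hence the abs().
    return sat * np.tanh(np.abs(x))

latent = W2 @ intensifier(W1 @ image)     # 10,000 -> 12 values (~833:1)
print(latent.shape)
```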

Journal ArticleDOI
TL;DR: In this article, the authors review the latest achievements in stacked image sensors with respect to the evolution of image sensor architecture for accelerating performance improvements, extending sensing capabilities, and integrating edge computing with various stacked device technologies.
Abstract: CMOS image sensors and their prospects with advanced imaging technologies are promising candidates for improving the quality of life. With the rapid advent of parallel analog-to-digital converters (ADCs) and back-illuminated (BI) technology, CMOS image sensors currently dominate the market for digital cameras, and stacked CMOS image sensors continue to provide enhanced functionality and user experience in mobile devices. This article reviews the latest achievements in stacked image sensors with respect to the evolution of image sensor architecture for accelerating performance improvements, extending sensing capabilities, and integrating edge computing with various stacked device technologies.

Journal ArticleDOI
TL;DR: In this article, the first on-chip UV optoelectronic integration in 4H-SiC CMOS was demonstrated, which includes an image sensor with 64 active pixels and a total of 1263 transistors on a 100 mm² chip.
Abstract: This work demonstrates the first on-chip UV optoelectronic integration in 4H-SiC CMOS, which includes an image sensor with 64 active pixels and a total of 1263 transistors on a 100 mm² chip. The reported image sensor offers serial digital, analog, and 2-bit ADC outputs and operates at 0.39 Hz with a maximum power consumption of 60 μW, which are significant improvements over previous reports. UV optoelectronics have applications in flame detection, satellites, astronomy, UV photography, and healthcare. The complexity of this optoelectronic system paves the way for new applications such as harsh-environment microcontrollers.

Journal ArticleDOI
TL;DR: In this article, super-pixel calibration of a Sony IMX 250 MZR camera was used to quantify the quality of the polarization measurements, demonstrating that the measurements are generally consistent throughout the sensor.
Abstract: Polarization measurements conducted with a polarization camera using the Sony IMX 250 MZR polarization image sensor are assessed with the super-pixel calibration technique and a simple test setup. We define an error that quantifies the quality of the polarization measurements. Multiple factors influencing the measurement quality of the polarization camera are investigated and discussed. We demonstrate that polarization measurements are generally consistent throughout the sensor if not corrupted by large chief ray angles or large angles of incidence. The central 600 × 400 pixels were analyzed, and it is shown that sufficiently large f-numbers no longer influence measurement quality. We also argue that lens design and focal length have little influence on these central pixels. The findings of this study provide useful guidance for researchers using such a polarization image sensor.
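For context, the standard super-pixel reduction for a four-directional sensor like the IMX 250 MZR computes the linear Stokes parameters from the 0°/45°/90°/135° micro-polarizer intensities:

```python
import numpy as np

# Textbook super-pixel reduction for a 2x2 micro-polarizer array:
# linear Stokes parameters, degree and angle of linear polarization.
def stokes_from_superpixel(i0, i45, i90, i135):
    s0 = 0.5 * (i0 + i45 + i90 + i135)   # total intensity
    s1 = i0 - i90
    s2 = i45 - i135
    dolp = np.sqrt(s1**2 + s2**2) / np.maximum(s0, 1e-12)
    aolp = 0.5 * np.arctan2(s2, s1)      # radians, in (-pi/2, pi/2]
    return s0, s1, s2, dolp, aolp

# Example: fully linearly polarized light at 30 degrees, unit intensity
theta = np.deg2rad(30)
intensities = [np.cos(theta - np.deg2rad(a))**2 for a in (0, 45, 90, 135)]
s0, s1, s2, dolp, aolp = stokes_from_superpixel(*intensities)
print(f"DoLP = {dolp:.3f}, AoLP = {np.rad2deg(aolp):.1f} deg")  # 1.000, 30.0
```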

Journal ArticleDOI
TL;DR: In this paper, a compact and efficient metasurface-based spectral imager for use in the near-infrared range was demonstrated by fabricating dielectric multilayer filters directly on top of the CMOS image sensor.
Abstract: We have demonstrated a compact and efficient metasurface-based spectral imager for use in the near-infrared range. The spectral imager was created by fabricating dielectric multilayer filters directly on top of the CMOS image sensor. The transmission wavelength for each spectral channel was selected by embedding a Si nanopost array of appropriate dimensions within the multilayers on the corresponding pixels, and this greatly simplified the fabrication process by avoiding the variation of the multilayer-film thicknesses. The meta-spectral imager shows high efficiency and excellent spectral resolution up to 2.0 nm in the near-infrared region. Using the spectral imager, we were able to measure the broad spectra of LED emission and obtain hyperspectral images from wavelength-mixed images. This approach provides ease of fabrication, miniaturization, low crosstalk, high spectral resolution, and high transmission. Our findings can potentially be used in integrating a compact spectral imager in smartphones for diverse applications.
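Recovering spectra from such wavelength-mixed channel readings is, in the simplest view, a linear inverse problem: each channel integrates the scene spectrum against its filter transmission. A hedged sketch with made-up Gaussian filter shapes standing in for the fabricated channels, and Tikhonov-regularized inversion standing in for the authors' reconstruction:

```python
import numpy as np

# Each channel measures y_i = sum_j T_ij * s_j (filter transmission x scene
# spectrum). Filter shapes, channel count, and the solver are illustrative.
rng = np.random.default_rng(8)
wl = np.linspace(800, 1000, 101)                  # nm, near-infrared band
centers = np.linspace(810, 990, 16)               # 16 assumed spectral channels
T = np.exp(-0.5 * ((wl[None, :] - centers[:, None]) / 3.0) ** 2)

spectrum = np.exp(-0.5 * ((wl - 905) / 10.0) ** 2)   # scene: an LED-like peak
y = T @ spectrum + rng.normal(0, 1e-3, len(centers))

lam = 1e-3                                        # Tikhonov regularization
s_hat = np.linalg.solve(T.T @ T + lam * np.eye(len(wl)), T.T @ y)
print(f"true peak 905 nm, recovered peak {wl[np.argmax(s_hat)]:.0f} nm")
```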

Journal ArticleDOI
TL;DR: A novel digital pixel sensor (DPS) architecture is proposed: a high-speed global-shutter (GS) CIS with a pixel-wise analog-to-digital converter (ADC), an in-pixel digital memory, and a low-power ADC with near-sub-threshold operation.
Abstract: This article presents a low-random-noise, low-power, high-speed 2-megapixel (Mp) global-shutter (GS) CMOS image sensor (CIS) using an advanced dynamic random access memory (DRAM) technology. A GS CIS is one of the alternatives for solving the image distortion issues caused by conventional rolling-shutter (RS) CIS operation, since 2-D image data can be simultaneously sampled by the in-pixel analog memory. To achieve a high-performance GS CIS, we propose a novel digital pixel sensor (DPS) architecture: a high-speed GS CIS with a pixel-wise analog-to-digital converter (ADC) and an in-pixel digital memory. The major technologies of the proposed DPS can be summarized as follows: 1) two large coupling capacitors built with mature DRAM technology; 2) extremely narrow-pitch Cu-to-Cu (C2C) bonds; and 3) a low-power ADC with near-sub-threshold operation. A perfect auto-zero operation for the ADC is implemented using two DRAM capacitors, and a large number of transistors has to be integrated in each pixel to realize the pixel-level ADC. Thus, each pixel has two fine-pitch C2C interconnections, which makes it possible to realize a wafer-level stacked unit pixel. The proposed DPS with low-power analog circuits has been successfully designed and developed for an extremely fast readout speed of up to 1200 frames per second (fps) and high sensitivity under low-illumination conditions.

Journal ArticleDOI
01 Mar 2022-Sensors
TL;DR: In this paper, an ultra-high-speed computational CMOS image sensor with a burst frame rate of 303 megaframes per second, which is the fastest among solid-state image sensors to our knowledge, is demonstrated.
Abstract: An ultra-high-speed computational CMOS image sensor with a burst frame rate of 303 megaframes per second, the fastest among solid-state image sensors to our knowledge, is demonstrated. This image sensor is compatible with ordinary single-aperture lenses and can operate in two modes, single-event filming or multi-exposure imaging, by reconfiguring the number of exposure cycles. To realize this frame rate, the charge modulator drivers were designed to suppress the peak driving current by taking advantage of the operational constraints of the multi-tap charge modulator. The pixel array is composed of macropixels with 2 × 2 4-tap subpixels. Because temporal compressive sensing is performed in the charge domain without any analog circuit, an ultrafast frame rate, small pixel size, low noise, and low power consumption are achieved. In the experiments, single-event imaging of plasma emission in laser processing was demonstrated, along with multi-exposure transient imaging of light reflections, with a compression ratio of 8×, to extend the depth range and to decompose multiple reflections for time-of-flight (TOF) depth imaging. Time-resolved images similar to those obtained by direct TOF were reproduced in a single shot, while the charge modulator for indirect TOF was utilized.
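A toy model clarifies what "temporal compressive sensing in the charge domain" means: each tap integrates a binary-coded sum of sub-frames, y = Φx, and the frame sequence is recovered computationally. The code matrix and the one-sparse matched search below are illustrative, not the authors' reconstruction algorithm; only the 8× ratio is taken from the paper:

```python
import numpy as np

T, M = 48, 6                               # 8x temporal compression
# Deterministic binary code: column k = bits of (k+1), so all columns differ.
Phi = np.array([[(k + 1) >> b & 1 for k in range(T)] for b in range(M)], float)

x = np.zeros(T)
x[19] = 3.0                                # one transient event (e.g., a plasma flash)
y = Phi @ x                                # coded charges, read out once

# Recover the single event: the column that explains y with zero residual.
residuals = [np.linalg.norm(y - (Phi[:, k] @ y) / (Phi[:, k] @ Phi[:, k]) * Phi[:, k])
             for k in range(T)]
print(f"true frame 19, recovered frame {int(np.argmin(residuals))}")
```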

Journal ArticleDOI
01 Jan 2022
TL;DR: Wang et al. proposed a point-cloud-centric depth completion method called attention bilateral convolutional network for depth completion (ABCD), which uses LiDAR data and camera data to improve the resolution of sparse depth information.
Abstract: We propose a point-cloud-centric depth completion method called attention bilateral convolutional network for depth completion (ABCD). The proposed method uses LiDAR data and camera data to improve the resolution of the sparse depth information. Color images, which have been seen as fundamental to depth completion tasks, are inevitably sensitive to light and weather conditions. We designed an attentive bilateral convolutional layer (ABCL) to build a robust depth completion network under diverse environmental conditions. An ABCL efficiently learns geometric characteristics by directly leveraging a 3D point cloud and enhances the representation capability of sparse depth information by highlighting the core while suppressing clutter. The ABCD, with an ABCL as a building block, stably fills the void in sparse depth images even under unfamiliar conditions with minimum dependency on unstable camera sensors. Therefore, the proposed method is expected to be a solution to depth completion problems caused by changes in the environment in which images are captured. Through comparative experiments with other methods using the KITTI [1] and VirtualKITTI2 [2] datasets, we demonstrated the outstanding performance of the proposed method in diverse driving environments.

Journal ArticleDOI
TL;DR: In this paper, an implantable shank-based neural imager with color-filter-grating-based angle-sensitive pixels is presented, which combines angular-spectral and temporal information to demix and localize multispectral fluorescent targets.
Abstract: Implantable image sensors have the potential to revolutionize neuroscience. However, due to their small form factor requirements, conventional filters and optics cannot be implemented. These limitations obstruct high-resolution imaging of large neural densities. Recent advances in angle-sensitive image sensors and single-photon avalanche diodes have provided a path toward ultrathin lens-less fluorescence imaging, enabling plenoptic sensing by extending sensing capabilities to include photon arrival time and incident angle, thereby providing the opportunity for separability of fluorescence point sources within the context of light-field microscopy (LFM). However, the addition of spectral sensitivity to angle-sensitive LFM reduces imager resolution because each wavelength requires a separate pixel subset. Here, we present a 1024-pixel, 50 µm thick implantable shank-based neural imager with color-filter-grating-based angle-sensitive pixels. This angular-spectral-sensitive front end combines a metal-insulator-metal (MIM) Fabry-Perot color filter and diffractive optics to produce the measurement of orthogonal light-field information from two distinct colors within a single photodetector. The result is the ability to add independent color sensing to LFM while doubling the effective pixel density. The implantable imager combines angular-spectral and temporal information to demix and localize multispectral fluorescent targets. In this initial prototype, this is demonstrated with 45 μm diameter fluorescently labeled beads in scattering medium. Fluorescence lifetime imaging is exploited to further aid source separation, in addition to detecting pH through lifetime changes in fluorescent dyes. While these initial fluorescent targets are considerably brighter than fluorescently labeled neurons, further improvements will allow the application of these techniques to in-vivo multifluorescent structural and functional neural imaging.

Proceedings ArticleDOI
20 Feb 2022
TL;DR: In this paper, an intelligent vision sensor (IVS) with an embedded tiny CNN model and programmable weights is presented to achieve configurable feature extraction and on-chip image classification using a mixed-mode processing-in-sensor (PIS) technique.
Abstract: Vision systems with artificial intelligence (AI) for applications requiring image classification are in growing demand. However, the imager-plus-dedicated-AI-accelerator solution [1] suffers from the power and latency burdens caused by the raw image data traffic between the imager and the companion signal processor with a neural network accelerator, making it unsuitable for real-time inference in low-power edge devices. Recently, imagers with near- or in-sensor processing capability have been developed [2]-[6] to improve system efficiency for specific applications. In [2]-[4], near-sensor Haar-like filtering operations are implemented in imagers to realize face detection (FD). However, unlike convolutional neural networks (CNNs) with programmable weights for different tasks, the implemented features of such prior works are limited and not configurable. In [5], a convolutional CMOS image sensor (CIS) with near-sensor analog multiply-accumulate (MAC) operations was reported for assisting with the first-layer computations of a CNN. However, the convolutional CIS is inadequate for some tasks, due to limits on the number of layers/kernels, and needs a companion digital accelerator for the remaining operations (Rectified Linear Unit: ReLU, Maximum-Pooling: MP, Fully-Connected layer: FC, etc.) of a complete CNN model. In [6], an analog convolutional CIS is reported with a 5-layer network for CNN implementation. However, its analog MAC operations using charge sharing with a capacitor array lead to gain loss, low weight resolution, and limited accuracy. Moreover, its ReLU+MP operation using a static winner-take-all circuit is power hungry. To address these issues, we present an intelligent vision sensor (IVS) with an embedded tiny CNN model and programmable weights to achieve configurable feature extraction and on-chip image classification using a mixed-mode processing-in-sensor (PIS) technique.

Proceedings ArticleDOI
12 Jun 2022
TL;DR: In this paper, a tetra-pixel architecture is presented in which four adjacent pixels shuffle their dual taps to compensate for mismatches and provide in-phase and quadrature-phase information simultaneously, suppressing motion artifacts.
Abstract: A 640×480 indirect time-of-flight (iToF) image sensor with a 7.2-μm tetra-pixel architecture is presented. The proposed tetra-pixel architecture, consisting of four adjacent pixels, shuffles its dual taps to compensate for mismatches and provides in-phase and quadrature-phase information at the same time, suppressing motion artifacts. In addition, a symmetric trident pinned-photodiode (PPD) structure in each pixel is devised to optimize the alignment with the microlens, enhancing the demodulation contrast and sensitivity. The prototype iToF sensor with VGA resolution was fabricated in a 0.11-μm BSI process and fully characterized. Compared with conventional dual-tap iToF sensors, the measured nonlinearity is improved by about 60%, and the motion artifact is greatly reduced.
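The benefit of capturing in-phase and quadrature-phase components simultaneously is easiest to see in the standard four-phase iToF demodulation, where depth follows from an arctangent of tap differences. The 100 MHz modulation frequency below is an assumed example, not taken from the paper:

```python
import numpy as np

C = 299_792_458.0
F_MOD = 100e6                              # assumed modulation frequency
# Unambiguous range = C / (2 * F_MOD) = 1.5 m for this choice.

def itof_depth(q0, q90, q180, q270):
    # Standard four-phase demodulation: phase from tap differences.
    phase = np.arctan2(q90 - q270, q0 - q180) % (2 * np.pi)
    return C * phase / (4 * np.pi * F_MOD)

# Simulate the four correlation taps for a target at 1.2 m
true_phase = 4 * np.pi * F_MOD * 1.2 / C
taps = [1 + 0.5 * np.cos(true_phase - np.deg2rad(a)) for a in (0, 90, 180, 270)]
print(f"recovered depth: {itof_depth(*taps):.3f} m")   # 1.200 m
```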

Journal ArticleDOI
TL;DR: In this paper, a tilted receiver camera correction method was proposed to eliminate the additional positioning errors caused by rotation in indoor visible light positioning (VLP) systems, which can also suppress the positioning errors when part of the LED light is blocked and can therefore enhance the robustness of the VLP system.
Abstract: Indoor visible light positioning (VLP) systems can be directly integrated with existing lighting infrastructure and achieve high-accuracy positioning when using CMOS image sensors as receivers. Due to cameras’ limited field of view (FOV), VLP systems based on multiple light emitting diodes (LEDs) require lamps to be arranged with high density, which is not practical in realistic scenarios. Additionally, smartphones held in the human hand may rotate about the x-, y- or z-axis. This article proposes a tilted receiver camera correction method for indoor VLP systems to eliminate the additional positioning errors caused by rotation. Furthermore, the proposed method can suppress the positioning errors when part of the LED light is blocked, and can therefore enhance the robustness of the VLP system. The experimental results show that the proposed VLP method can achieve average positioning error within 7 cm when more than 40% of the LED image is captured, and can achieve average positioning error of 3.9 cm when more than 90% of the LED image is captured.
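The geometric core of image-sensor VLP is pinhole projection: an LED at a known ceiling position maps to a pixel offset proportional to the receiver's horizontal offset. A minimal upright-camera sketch (the paper's contribution, tilt correction, is deliberately omitted; all numbers are illustrative):

```python
import numpy as np

f = 4e-3                               # focal length, m (illustrative)
h = 2.5                                # LED height above the upward-facing camera, m
led_xy = np.array([1.0, 2.0])          # known LED position on the ceiling plane, m

# Forward model: pixel offset = f * (horizontal offset) / h
cam_xy = np.array([0.7, 1.6])          # receiver position (ground truth)
image_xy = f * (led_xy - cam_xy) / h   # where the LED lands on the sensor

# Positioning: invert the projection to recover the camera position
cam_est = led_xy - image_xy * h / f
print(np.allclose(cam_est, cam_xy))    # True
```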

Journal ArticleDOI
TL;DR: In this paper, a single-photon avalanche diode (SPAD) image sensor with a five-wire interface is designed for time-resolved fluorescence microendoscopy.
Abstract: A miniaturized 1.4 mm × 1.4 mm, 128 × 120 single-photon avalanche diode (SPAD) image sensor with a five-wire interface is designed for time-resolved fluorescence microendoscopy. This is the first endoscopic chip-on-tip sensor capable of fluorescence lifetime imaging microscopy (FLIM). The sensor provides a novel, compact means to extend the photon-counting dynamic range (DR) by partitioning the required bit depth between in-pixel counters and off-pixel noiseless frame summation. The sensor is implemented in STMicroelectronics 40-/90-nm 3-D-stacked backside-illuminated (BSI) CMOS process with 8-μm pixels and 45% fill factor. The sensor capabilities are demonstrated through FLIM examples, including ex vivo human lung tissue, obtained at video rate.
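Downstream of such a sensor, lifetimes are often extracted with the classic two-gate rapid lifetime determination (RLD): with two equal gates of width Δt placed back-to-back on the decay, τ = Δt / ln(N₁/N₂). A toy simulation of that standard estimator, not the paper's specific processing chain:

```python
import numpy as np

rng = np.random.default_rng(7)
tau_true = 2.5e-9                        # fluorescence lifetime, s (illustrative)
dt = 2e-9                                # gate width, s

# Photon arrival times from an exponential decay, binned into two gates
arrivals = rng.exponential(tau_true, 200_000)
n1 = np.sum(arrivals < dt)
n2 = np.sum((arrivals >= dt) & (arrivals < 2 * dt))

# RLD: N1/N2 = exp(dt/tau)  =>  tau = dt / ln(N1/N2)
tau_est = dt / np.log(n1 / n2)
print(f"estimated lifetime: {tau_est * 1e9:.2f} ns (true {tau_true * 1e9:.2f} ns)")
```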

Journal ArticleDOI
TL;DR: In this paper, a fiber-optic infrared imaging system was proposed to capture a flexible wide field of view (FOV) and large depth-of-field infrared image in real time.
Abstract: A key limitation in observing instruments and heart sutures during a procedure is the scattering and absorption that occur during optical imaging in the presence of blood. Therefore, we propose a novel fiber-optic infrared imaging system that simultaneously captures a flexible wide field of view (FOV) and a large depth-of-field infrared image in real time. The assessment criteria for the imaging quality of the objective and coupling lens have been optimized and evaluated. Furthermore, the feasibility of manufacturing and assembly has been demonstrated with tolerance sensitivity and Monte Carlo analysis. The simulated results show that the optical system can achieve a large working distance of 8 to 25 mm and a wide FOV of 120°, and the relative illuminance is over 0.98 across the overall FOV. To achieve high imaging quality in the proposed system, the modulation transfer function is over 0.661 at 16.7 lp/mm for a 320×256 short-wavelength infrared camera sensor with a pixel size of 30 µm.

Journal ArticleDOI
TL;DR: In this article, various fabrication strategies for curved image sensor arrays are summarized, and the applications of curved devices in artificial electronic eyes, as well as the challenges and opportunities of curved image sensor arrays, are also discussed.

Journal ArticleDOI
TL;DR: In this article, an optical system with a two-layer structure, comprising an external polarizer and polarizers on a pixel array, was constructed for detecting changes in polarization with high sensitivity.
Abstract: In this article, we demonstrate an image sensor for detecting changes in polarization with high sensitivity. For this purpose, we constructed an optical system with a two-layer structure, comprising an external polarizer and polarizers on a pixel array. The external polarizer is used to enhance the polarization rotation while reducing the intensity to avoid pixel saturation of the image sensor. Using the two-layer structure, the two polarizers can be arranged under optimal conditions, and the image sensor can achieve high polarization-change detection performance. We fabricated the polarization image sensor using a 0.35-μm CMOS process and, by averaging 50 × 50 pixels and 96 frames, achieved a polarization rotation detection limit of 5.2 × 10⁻⁴ degrees at a wavelength of 625 nm. We also demonstrated the applicability of electric-field distribution imaging using an electro-optic crystal (ZnTe) for weak-polarization-change distribution measurements.
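The role of the slightly uncrossed external polarizer follows from Malus's law: near extinction, I = I₀ sin²(θ_b + δ) is approximately linear in a small rotation δ, with relative modulation ≈ 2δ/tan θ_b. A quick check with an illustrative bias angle and the paper's reported detection limit:

```python
import numpy as np

I0 = 1.0
bias = np.deg2rad(2.0)                  # assumed analyzer offset from extinction
delta = np.deg2rad(5.2e-4)              # rotation at the reported detection limit

I_ref = I0 * np.sin(bias) ** 2
I_sig = I0 * np.sin(bias + delta) ** 2
contrast = (I_sig - I_ref) / I_ref      # relative modulation to be resolved
print(f"relative intensity change: {contrast:.2e}")   # ~ 2*delta/tan(bias)
```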

Journal ArticleDOI
TL;DR: In this paper, the authors demonstrate feedback control of three-dimensional (3D) parallel focusing for automatic compensation of imperfections in an optical system and long-term stability of parallel focusing, realized by a computer-generated hologram (CGH) displayed on a spatial light modulator (SLM), and an iterative calculation based on the observation of the 3D focusing intensities with an image sensor on a programmable linear stage.