
Showing papers on "High dynamic range published in 2020"


Journal ArticleDOI
TL;DR: In this paper, the authors present a new calibration and imaging pipeline that aims at producing high fidelity, high dynamic range images with LOFAR High Band Antenna data, while being computationally efficient and robust against the absorption of unmodeled radio emission.
Abstract: The Low Frequency Array (LOFAR) is an ideal instrument to conduct deep extragalactic surveys. It has a large field of view and is sensitive to large scale and compact emission. It is, however, very challenging to synthesize thermal noise limited maps at full resolution, mainly because of the complexity of the low-frequency sky and the direction dependent effects (phased array beams and ionosphere). In this first paper of a series we present a new calibration and imaging pipeline that aims at producing high fidelity, high dynamic range images with LOFAR High Band Antenna data, while being computationally efficient and robust against the absorption of unmodeled radio emission. We apply this calibration and imaging strategy to synthesize deep images of the Bootes and LH fields at 150 MHz, totaling $\sim80$ and $\sim100$ hours of integration respectively and reaching unprecedented noise levels at these low frequencies of $\lesssim30$ and $\lesssim23$ $\mu$Jy/beam in the inner $\sim3$ deg$^2$. This approach is also being used to reduce the LoTSS-wide data for the second data release.

117 citations


Journal ArticleDOI
TL;DR: DeepTMO as discussed by the authors proposes a conditional generative adversarial network (cGAN) to learn to adapt to vast scenic-content (e.g., outdoor, indoor, human, structures, etc.) and tackles the HDR related scene-specific challenges such as contrast and brightness, while preserving the fine-grained details.
Abstract: A computationally fast tone mapping operator (TMO) that can quickly adapt to a wide spectrum of high dynamic range (HDR) content is quintessential for visualization on varied low dynamic range (LDR) output devices such as movie screens or standard displays. Existing TMOs can successfully tone-map only a limited number of HDR content and require an extensive parameter tuning to yield the best subjective-quality tone-mapped output. In this paper, we address this problem by proposing a fast, parameter-free and scene-adaptable deep tone mapping operator (DeepTMO) that yields a high-resolution and high-subjective quality tone mapped output. Based on conditional generative adversarial network (cGAN), DeepTMO not only learns to adapt to vast scenic-content ( e.g. , outdoor, indoor, human, structures, etc.) but also tackles the HDR related scene-specific challenges such as contrast and brightness, while preserving the fine-grained details. We explore 4 possible combinations of Generator-Discriminator architectural designs to specifically address some prominent issues in HDR related deep-learning frameworks like blurring, tiling patterns and saturation artifacts. By exploring different influences of scales, loss-functions and normalization layers under a cGAN setting, we conclude with adopting a multi-scale model for our task. To further leverage on the large-scale availability of unlabeled HDR data, we train our network by generating targets using an objective HDR quality metric, namely Tone Mapping Image Quality Index (TMQI). We demonstrate results both quantitatively and qualitatively, and showcase that our DeepTMO generates high-resolution, high-quality output images over a large spectrum of real-world scenes. Finally, we evaluate the perceived quality of our results by conducting a pair-wise subjective study which confirms the versatility of our method.

93 citations


Proceedings ArticleDOI
14 Jun 2020
TL;DR: A novel computational imaging system with high resolution and low noise that first bridges the two sensing modalities via a noise-robust motion compensation model, and then performs joint image filtering that can be widely applied to many existing event-based algorithms that are highly dependent on spatial resolution and noise robustness.
Abstract: We present a novel computational imaging system with high resolution and low noise. Our system consists of a traditional video camera which captures high-resolution intensity images, and an event camera which encodes high-speed motion as a stream of asynchronous binary events. To process the hybrid input, we propose a unifying framework that first bridges the two sensing modalities via a noise-robust motion compensation model, and then performs joint image filtering. The filtered output represents the temporal gradient of the captured space-time volume, which can be viewed as motion-compensated event frames with high resolution and low noise. Therefore, the output can be widely applied to many existing event-based algorithms that are highly dependent on spatial resolution and noise robustness. In experimental results performed on both publicly available datasets as well as our contributing RGB-DAVIS dataset, we show systematic performance improvement in applications such as high frame-rate video synthesis, feature/corner detection and tracking, as well as high dynamic range image reconstruction.
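The motion-compensated event frame described above can be illustrated with a toy sketch (Python, with hypothetical event and flow values): each event is warped along an assumed per-event optical flow to a common reference time and accumulated into a frame, so events produced by the same moving edge land on the same pixel. Real systems estimate the flow jointly and robustly to noise.

```python
import numpy as np

def warp_events(events, flow, t_ref, shape=(8, 8)):
    """Warp events (x, y, t, polarity) to reference time t_ref under a
    per-event optical flow (vx, vy), then accumulate into a frame.
    A toy version of motion-compensated event framing."""
    frame = np.zeros(shape)
    for (x, y, t, pol), (vx, vy) in zip(events, flow):
        xw = int(round(x + vx * (t_ref - t)))   # warped column
        yw = int(round(y + vy * (t_ref - t)))   # warped row
        if 0 <= xw < shape[1] and 0 <= yw < shape[0]:
            frame[yw, xw] += pol
    return frame

# Two events from the same edge: the later one drifts with the motion,
# so warping brings both onto pixel (1, 1).
events = [(1, 1, 0.0, 1), (3, 1, 0.5, 1)]
flow = [(0.0, 0.0), (-4.0, 0.0)]               # px/s, assumed known here
frame = warp_events(events, flow, t_ref=1.0)
```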

70 citations


Proceedings ArticleDOI
14 Jun 2020
TL;DR: A hybrid camera system has been built to validate that the proposed method is able to reconstruct quantitatively and qualitatively high-quality high dynamic range images by successfully fusing the images and intensity maps for various real-world scenarios.
Abstract: Reconstruction of high dynamic range image from a single low dynamic range image captured by a frame-based conventional camera, which suffers from over- or under-exposure, is an ill-posed problem. In contrast, recent neuromorphic cameras are able to record high dynamic range scenes in the form of an intensity map, with much lower spatial resolution, and without color. In this paper, we propose a neuromorphic camera guided high dynamic range imaging pipeline, and a network consisting of specially designed modules according to each step in the pipeline, which bridges the domain gaps on resolution, dynamic range, and color representation between two types of sensors and images. A hybrid camera system has been built to validate that the proposed method is able to reconstruct quantitatively and qualitatively high-quality high dynamic range images by successfully fusing the images and intensity maps for various real-world scenarios.

54 citations


Proceedings ArticleDOI
Jonghyun Choi, Kuk-Jin Yoon
14 Jun 2020
TL;DR: In this article, an end-to-end network is proposed to reconstruct high resolution, high dynamic range (HDR) images directly from the event stream, which outperforms the combination of the state-of-the-art event to image algorithms with the state of the art super resolution schemes in many quantitative measures by large margins.
Abstract: An event camera detects per-pixel intensity difference and produces asynchronous event stream with low latency, high dynamic range, and low power consumption. As a trade-off, the event camera has low spatial resolution. We propose an end-to-end network to reconstruct high resolution, high dynamic range (HDR) images directly from the event stream. We evaluate our algorithm on both simulated and real-world sequences and verify that it captures fine details of a scene and outperforms the combination of the state-of-the-art event to image algorithms with the state-of-the-art super resolution schemes in many quantitative measures by large margins. We further extend our method by using the active sensor pixel (APS) frames or reconstructing images iteratively.

51 citations


Journal ArticleDOI
TL;DR: Quanta burst photography as mentioned in this paper is a computational photography technique that leverages single-photon avalanche diodes (SPADs) as passive imaging devices for photography in challenging conditions, including ultra low-light and fast motion.
Abstract: Single-photon avalanche diodes (SPADs) are an emerging sensor technology capable of detecting individual incident photons, and capturing their time-of-arrival with high timing precision. While these sensors were limited to single-pixel or low-resolution devices in the past, recently, large (up to 1 MPixel) SPAD arrays have been developed. These single-photon cameras (SPCs) are capable of capturing high-speed sequences of binary single-photon images with no read noise. We present quanta burst photography, a computational photography technique that leverages SPCs as passive imaging devices for photography in challenging conditions, including ultra low-light and fast motion. Inspired by the recent success of conventional burst photography, we design algorithms that align and merge binary sequences captured by SPCs into intensity images with minimal motion blur and artifacts, high signal-to-noise ratio (SNR), and high dynamic range. We theoretically analyze the SNR and dynamic range of quanta burst photography, and identify the imaging regimes where it provides significant benefits. We demonstrate, via a recently developed SPAD array, that the proposed method is able to generate high-quality images for scenes with challenging lighting, complex geometries, high dynamic range and moving objects. With the ongoing development of SPAD arrays, we envision quanta burst photography finding applications in both consumer and scientific photography.
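The merging step for already-aligned binary frames can be sketched as follows (a minimal model, not the paper's full pipeline): assuming Poisson photon arrivals and no read noise, a pixel fires in a frame with probability 1 − exp(−λ), so the maximum-likelihood flux estimate is λ = −ln(1 − p), where p is the observed firing rate across the burst.

```python
import numpy as np

def merge_binary_frames(frames):
    """Estimate per-pixel photon flux (photons/frame) from a stack of
    aligned binary single-photon frames, assuming Poisson arrivals."""
    frames = np.asarray(frames, dtype=np.float64)
    p = frames.mean(axis=0)                 # fraction of frames with a detection
    p = np.clip(p, 0.0, 1.0 - 1e-12)        # avoid log(0) at saturated pixels
    return -np.log1p(-p)                    # MLE inversion: -ln(1 - p)

# Simulate 1000 binary frames of a pixel with true flux 0.5 photons/frame
rng = np.random.default_rng(0)
true_lam = 0.5
frames = rng.random((1000, 1, 1)) < (1.0 - np.exp(-true_lam))
est = merge_binary_frames(frames)
```

The log inversion is what extends dynamic range: firing rates near 1 map to large flux values instead of clipping.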

43 citations


Journal ArticleDOI
TL;DR: The Variational Hilbert Quantitative Phase Imaging (VHQPI) is proposed: an end-to-end, purely computational add-on module able to improve the performance of a QPI unit without hardware modifications, potentially opening up new possibilities in QPI.
Abstract: Utilizing the refractive index as the endogenous contrast agent to noninvasively study transparent cells is a working principle of emerging quantitative phase imaging (QPI). In this contribution, we propose the Variational Hilbert Quantitative Phase Imaging (VHQPI)—end-to-end purely computational add-on module able to improve performance of a QPI-unit without hardware modifications. The VHQPI, deploying unique merger of tailored variational image decomposition and enhanced Hilbert spiral transform, adaptively provides high quality map of sample-induced phase delay, accepting particularly wide range of input single-shot interferograms (from off-axis to quasi on-axis configurations). It especially promotes high space-bandwidth-product QPI configurations alleviating the spectral overlapping problem. The VHQPI is tailored to deal with cumbersome interference patterns related to detailed locally varying biological objects with possibly high dynamic range of phase and relatively low carrier. In post-processing, the slowly varying phase-term associated with the instrumental optical aberrations is eliminated upon variational analysis to further boost the phase-imaging capabilities. The VHQPI is thoroughly studied employing numerical simulations and successfully validated using static and dynamic cells phase-analysis. It compares favorably with other single-shot phase reconstruction techniques based on the Fourier and Hilbert–Huang transforms, both in terms of visual inspection and quantitative evaluation, potentially opening up new possibilities in QPI.

39 citations


Journal ArticleDOI
TL;DR: This paper addresses the problem of phase information loss in HDR scenes, in order to enable 3D reconstruction from saturated or dark images by deep learning by using a specifically designed convolutional neural network (CNN), which can accurately extract phase information in both the low signal-to-noise ratio (SNR) and saturation situations after proper training.

39 citations


Journal ArticleDOI
TL;DR: An objective quality model for MEF of dynamic scenes is developed that significantly outperforms the state-of-the-art; its promise for parameter tuning of MEF methods is also demonstrated.
Abstract: A common approach to high dynamic range (HDR) imaging is to capture multiple images of different exposures followed by multi-exposure image fusion (MEF) in either radiance or intensity domain. A predominant problem of this approach is the introduction of the ghosting artifacts in dynamic scenes with camera and object motion. While many MEF methods (often referred to as deghosting algorithms) have been proposed for reduced ghosting artifacts and improved visual quality, little work has been dedicated to perceptual evaluation of their deghosting results. Here we first construct a database that contains 20 multi-exposure sequences of dynamic scenes and their corresponding fused images by nine MEF algorithms. We then carry out a subjective experiment to evaluate fused image quality, and find that none of existing objective quality models for MEF provides accurate quality predictions. Motivated by this, we develop an objective quality model for MEF of dynamic scenes. Specifically, we divide the test image into static and dynamic regions, measure structural similarity between the image and the corresponding sequence in the two regions separately, and combine quality measurements of the two regions into an overall quality score. Experimental results show that the proposed method significantly outperforms the state-of-the-art. In addition, we demonstrate the promise of the proposed model in parameter tuning of MEF methods. The subjective database and the MATLAB code of the proposed model are made publicly available at https://github.com/h4nwei/MEF-SSIMd .
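The region-splitting-and-pooling idea can be sketched in a few lines (illustrative only: the static/dynamic segmentation heuristic and the area-weighted pooling below are assumptions, not the paper's exact formulation, which uses structural similarity within each region):

```python
import numpy as np

def static_mask(sequence, threshold=0.01):
    """Label pixels as static where temporal variance across the
    multi-exposure sequence is below a threshold (illustrative heuristic)."""
    seq = np.asarray(sequence, dtype=np.float64)
    return seq.var(axis=0) < threshold

def combine_region_scores(q_static, q_dynamic, static_fraction):
    """Pool per-region quality scores into one overall score by
    area-weighted averaging (a simple stand-in for the paper's pooling)."""
    return static_fraction * q_static + (1.0 - static_fraction) * q_dynamic

seq = np.array([[0.5, 0.1],
                [0.5, 0.9]])        # two exposures of a 2-pixel scene
mask = static_mask(seq)             # pixel 0 is static, pixel 1 moves
overall = combine_region_scores(0.9, 0.5, mask.mean())
```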

31 citations


Journal ArticleDOI
TL;DR: This paper presents a low power, high dynamic range (DR), reconfigurable light-to-digital converter (LDC) for photoplethysmogram (PPG), and near-infrared spectroscopy (NIRS) sensor readouts and utilizes a current integration and a charge counting operation to directly convert the photocurrent to a digital code, reducing the noise contributors in the system.
Abstract: This paper presents a low power, high dynamic range (DR), reconfigurable light-to-digital converter (LDC) for photoplethysmogram (PPG), and near-infrared spectroscopy (NIRS) sensor readouts. The proposed LDC utilizes a current integration and a charge counting operation to directly convert the photocurrent to a digital code, reducing the noise contributors in the system. This LDC consists of a latched comparator, a low-noise current reference, a counter, and a multi-function integrator, which is used in both signal amplification and charge counting based data quantization. Furthermore, a current DAC is used to further increase the DR by canceling the baseline current. The LDC together with LED drivers and auxiliary digital circuitry are implemented in a standard 0.18 μm CMOS process and characterized experimentally. The LDC and LED drivers consume a total power of 196 μW while achieving a maximum 119 dB DR. The charge counting clock, and the pulse repetition frequency of the LED driver can be reconfigured, providing a wide range of power-resolution trade-off. At a minimum power consumption of 87 μW, the LDC still achieves 95 dB DR. The LDC is also validated with on-body PPG and NIRS measurement by using a photodiode (PD) and a silicon photomultiplier (SIPM), respectively.
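The quoted dynamic-range figures follow the usual 20·log10 convention for input-referred signal amplitudes; a tiny sketch makes the conversion concrete (the specific current values below are examples, not measured values from the paper):

```python
import math

def dynamic_range_db(sig_max, sig_min):
    """Dynamic range in dB for input-referred signal amplitudes,
    using the 20*log10 convention."""
    return 20.0 * math.log10(sig_max / sig_min)

# A 119 dB range corresponds to a max/min photocurrent ratio of about
# 9e5; the 95 dB low-power mode to about 5.6e4.
ratio_119 = 10 ** (119 / 20)
ratio_95 = 10 ** (95 / 20)
```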

30 citations


Journal ArticleDOI
TL;DR: In this paper, a high dynamic range mechano-imaging (HDR-MI) polymeric material integrating physical and chemical mechanochromism is designed providing a continuous optical read-out of strain upon mechanical deformation.
Abstract: Cephalopods, such as squid, cuttlefish, and octopuses, use an array of responsive absorptive and photonic dermal structures to achieve rapid and reversible color changes for spectacular camouflage and signaling displays. Challenges remain in designing synthetic soft materials with similar multiple and dynamic responsivity for the development of optical sensors for the sensitive detection of mechanical stresses and strains. Here, a high dynamic range mechano-imaging (HDR-MI) polymeric material integrating physical and chemical mechanochromism is designed providing a continuous optical read-out of strain upon mechanical deformation. By combining a colloidal photonic array with a mechanically responsive dye, the material architecture significantly improves the mechanochromic sensitivity, which is moreover readily tuned, and expands the range of detectable strains and stresses at both microscopic and nanoscopic length scales. This multi-functional material is highlighted by creating detailed HDR mechanographs of membrane deformation and around defects using a low-cost hyperspectral camera, which is found to be in excellent agreement with the results of finite element simulations. This multi-scale approach to mechano-sensing and -imaging provides a platform to develop mechanochromic composites with high sensitivity and high dynamic mechanical range.

Journal ArticleDOI
TL;DR: This paper provides a complete theoretical characterization of the sensor in the context of HDR imaging, by proving the fundamental limits in the dynamic range that QIS can offer and its trade-offs with noise and speed.
Abstract: High dynamic range (HDR) imaging is one of the biggest achievements in modern photography. Traditional solutions to HDR imaging are designed for and applied to CMOS image sensors (CIS). However, the mainstream one-micron CIS cameras today generally have a high read noise and low frame-rate. Consequently, these sensors have limited acquisition speed, making the cameras slow in the HDR mode. In this paper, we propose a new computational photography technique for HDR imaging. Recognizing the limitations of CIS, we use the Quanta Image Sensors (QIS) to trade spatial-temporal resolution with bit-depth. QIS are single-photon image sensors that have comparable pixel pitch to CIS but substantially lower dark current and read noise. We provide a complete theoretical characterization of the sensor in the context of HDR imaging, by proving the fundamental limits in the dynamic range that QIS can offer and its trade-offs with noise and speed. In addition, we derive an optimal reconstruction algorithm for single-bit and multi-bit QIS. Our algorithm is theoretically optimal for all linear reconstruction schemes based on exposure bracketing. Experimental results confirm the validity of the theory and algorithm, based on synthetic and real QIS data.
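A minimal exposure-bracketing merge for a linear, Poisson-limited sensor can be sketched as below. For pure shot noise, summing counts and dividing by total exposure is the maximum-likelihood (and hence optimal linear) estimate; the paper derives the optimal weights for single-bit and multi-bit QIS specifically, which this sketch does not reproduce.

```python
import numpy as np

def merge_exposures(images, exposures):
    """Merge multiple exposures of a linear-response sensor into one
    radiance estimate: total collected signal over total exposure time
    (shot-noise-optimal for Poisson counts)."""
    images = [np.asarray(im, dtype=np.float64) for im in images]
    num = sum(im for im in images)          # total collected signal
    den = float(sum(exposures))             # total exposure time
    return num / den                        # photons per unit time

radiance = 3.0                              # true flux, photons per unit time
imgs = [radiance * t for t in (1.0, 4.0, 16.0)]   # noiseless linear frames
est = merge_exposures(imgs, (1.0, 4.0, 16.0))
```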

Book ChapterDOI
30 Nov 2020
TL;DR: A Pyramidal Alignment and Masked merging network (PAMnet) that learns to synthesize HDR images from input low dynamic range (LDR) images in an end-to-end manner and can produce ghosting-free HDR results in the presence of large disparity and motion is proposed.
Abstract: High dynamic range (HDR) imaging is widely used in consumer photography, computer game rendering, autonomous driving, and surveillance systems. Reconstructing ghosting-free HDR images of dynamic scenes from a set of multi-exposure images is a challenging task, especially with large object motion, disparity, and occlusions, leading to visible artifacts using existing methods. In this paper, we propose a Pyramidal Alignment and Masked merging network (PAMnet) that learns to synthesize HDR images from input low dynamic range (LDR) images in an end-to-end manner. Instead of aligning under/overexposed images to the reference view directly in pixel-domain, we apply deformable convolutions across multiscale features for pyramidal alignment. Aligned features offer more flexibility to refine the inevitable misalignment for subsequent merging network without reconstructing the aligned image explicitly. To make full use of aligned features, we use dilated dense residual blocks with squeeze-and-excitation (SE) attention. Such attention mechanism effectively helps to remove redundant information and suppress misaligned features. Additional mask-based weighting is further employed to refine the HDR reconstruction, which offers better image quality and sharp local details. Experiments demonstrate that PAMnet can produce ghosting-free HDR results in the presence of large disparity and motion. We present extensive comparative studies using several popular datasets to demonstrate superior quality compared to the state-of-the-art algorithms.

Journal ArticleDOI
TL;DR: The ePix10ka2M is a new large area detector specifically developed for X-ray free-electron laser applications; its performance, including detector nonlinearity, is characterized by diffuse scattering measurements at the Linac Coherent Light Source.
Abstract: The ePix10ka2M (ePix10k) is a new large area detector specifically developed for X-ray free-electron laser (XFEL) applications. The hybrid pixel detector was developed at SLAC to provide a hard X-ray area detector with a high dynamic range, running at the 120 Hz repetition rate of the Linac Coherent Light Source (LCLS). The ePix10k consists of 16 modules, each with 352 × 384 pixels of 100 µm × 100 µm distributed on four ASICs, resulting in a 2.16 megapixel detector, with a 16.5 cm × 16.5 cm active area and ∼80% coverage. The high dynamic range is achieved with three distinct gain settings (low, medium, high) as well as two auto-ranging modes (high-to-low and medium-to-low). Here the three fixed gain modes are evaluated. The resulting dynamic range (from single photon counting to 10000 photons pixel−1 pulse−1 at 8 keV) makes it suitable for a large number of different XFEL experiments. The ePix10k replaces the large CSPAD in operation since 2011. The dimensions of the two detectors are similar, making the upgrade from CSPAD to ePix10k straightforward for most setups, with the ePix10k improving on experimental performance. The SLAC-developed ePix cameras all utilize a similar platform, are tailored to target different experimental conditions and are designed to provide an upgrade path for future high-repetition-rate XFELs. Here the first measurements on this new ePix10k detector are presented and the performance under typical XFEL conditions evaluated during an LCLS X-ray diffuse scattering experiment measuring the 9.5 keV X-ray photons scattered from a thin liquid jet.

Journal ArticleDOI
TL;DR: A binocular fringe projection profilometry system that reduces the number of projected patterns via geometry constraints is built to increase the dynamic range of real-time 3D measurements, and a mixed phase unwrapping method is proposed to reduce phase unwrapping errors for dense fringe patterns.
Abstract: Fringe projection profilometry (FPP) is a widely used technique for real-time three-dimensional (3D) shape measurement. However, it tends to compromise when measuring objects that have a large variation range of surface reflectivity. In this paper, we present a FPP method that can increase the dynamic range for real-time 3D measurements. First, binary fringe patterns are projected to generate grayscale sinusoidal patterns with the defocusing technique. Each pattern is then captured twice with different exposure values in one projection period. With image fusion, surfaces under appropriate exposure are retained. To improve the real-time performance of high dynamic range (HDR) 3D shape measurements, we build a binocular fringe projection profilometry system that saves the number of patterns by geometry constraint. Further, to ensure the accuracy and robustness of HDR 3D measurements, we propose a mixed phase unwrapping method that can reduce phase unwrapping errors for dense fringe patterns. Experiment results show that the proposed method can realize accurate and real-time 3D measurement for HDR scenes at 28 frames per second.
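The dual-exposure image fusion step can be sketched per pixel (illustrative: the saturation threshold and keep-the-brighter rule below are assumptions, not the paper's exact values): keep the longer-exposure sample where it is well modulated, and fall back to the shorter exposure where it saturates.

```python
import numpy as np

def fuse_exposures(low, high, saturation=250.0):
    """Per-pixel fusion of two exposures of the same fringe pattern:
    use the brighter (better-modulated) long exposure unless saturated,
    else fall back to the short exposure. 8-bit-style threshold assumed."""
    low = np.asarray(low, dtype=np.float64)
    high = np.asarray(high, dtype=np.float64)
    use_high = high < saturation            # long exposure still valid here
    return np.where(use_high, high, low)

fused = fuse_exposures([[100.0, 40.0]], [[255.0, 200.0]])
```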

Posted Content
TL;DR: Extensive numerical evaluations demonstrate that the proposed two-step convolutional neural network (CNN)-based image reconstruction method can reconstruct images from single PWs with a quality similar to that of gold-standard synthetic aperture imaging, on a dynamic range in excess of 60 dB.
Abstract: Ultrafast ultrasound (US) revolutionized biomedical imaging with its capability of acquiring full-view frames at over 1 kHz, unlocking breakthrough modalities such as shear-wave elastography and functional US neuroimaging. Yet, it suffers from strong diffraction artifacts, mainly caused by grating lobes, side lobes, or edge waves. Multiple acquisitions are typically required to obtain a sufficient image quality, at the cost of a reduced frame rate. To answer the increasing demand for high-quality imaging from single-shot acquisitions, we propose a two-step convolutional neural network (CNN)-based image reconstruction method, compatible with real-time imaging. A low-quality estimate is obtained by means of a backprojection-based operation, akin to conventional delay-and-sum beamforming, from which a high-quality image is restored using a residual CNN with multi-scale and multi-channel filtering properties, trained specifically to remove the diffraction artifacts inherent to ultrafast US imaging. To account for both the high dynamic range and the radio frequency property of US images, we introduce the mean signed logarithmic absolute error (MSLAE) as training loss function. Experiments were conducted with a linear transducer array, in single plane wave (PW) imaging. Trainings were performed on a simulated dataset, crafted to contain a wide diversity of structures and echogenicities. Extensive numerical evaluations demonstrate that the proposed approach can reconstruct images from single PWs with a quality similar to that of gold-standard synthetic aperture imaging, on a dynamic range in excess of 60 dB. In vitro and in vivo experiments show that trainings performed on simulated data translate well to experimental settings.
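The abstract names the MSLAE loss but not its formula; one plausible reading, sketched here as an assumption rather than the paper's definition, is an L1 loss between signed log-compressed amplitudes, which respects both the high dynamic range and the signed radio-frequency nature of the data:

```python
import numpy as np

def mslae(pred, target, eps=1e-6):
    """Mean signed logarithmic absolute error (one plausible form,
    assumed here): L1 distance between signed log-compressed signals."""
    def signed_log(x):
        x = np.asarray(x, dtype=np.float64)
        return np.sign(x) * np.log1p(np.abs(x) / eps)   # keeps the RF sign
    return float(np.mean(np.abs(signed_log(pred) - signed_log(target))))
```

A plain L1 or L2 loss on linear amplitudes would be dominated by the brightest reflectors; log compression spreads the penalty across the >60 dB range.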

Journal ArticleDOI
TL;DR: An HDR-Visual Difference Predictor (VDP)-2-based rate-distortion (R-D) model is proposed to improve coding performance, together with a new model parameter estimation method to further reduce rate control (RC) errors.
Abstract: High dynamic range (HDR) video compression technology, which is capable of delivering a wider range of luminance and a larger colour gamut than standard dynamic range (SDR) technology, has been widely used in recent years in many fields, including industrial image processing, digital entertainment, and machine vision. Rate control (RC) is of paramount importance to HDR compression and transmission; accordingly, an RC scheme for HDR in High Efficiency Video Coding (HEVC) is proposed in this paper. First, considering the HDR characteristics, we propose an HDR-Visual Difference Predictor (VDP)-2-based rate-distortion (R-D) model to improve the coding performance. Second, we directly utilize $\lambda $ rather than the bit rate in the optimization process to obtain the optimal solution. Finally, we propose a new model parameter estimation method to further reduce the RC errors. According to our experimental results, significant bit rate reductions in terms of HDR-VDP-2, the Video Quality Metric (VQM) and the mean peak-signal-to-noise ratio (mPSNR) can be achieved on average compared with the state-of-the-art algorithm used in HM16.19.
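Working directly in λ is natural under the hyperbolic R-λ model used by HEVC rate control, λ = α·bpp^β, which can be inverted to get the bit budget for a chosen λ. The sketch below uses commonly cited illustrative initial values for α and β, not the paper's HDR-VDP-2-tuned parameters:

```python
def bpp_for_lambda(lam, alpha=3.2003, beta=-1.367):
    """Invert the hyperbolic R-lambda model lambda = alpha * bpp**beta
    to obtain the bit budget in bits per pixel for a chosen lambda.
    alpha/beta are illustrative initial values."""
    return (lam / alpha) ** (1.0 / beta)

# Round trip: pick a bpp, compute its lambda, recover the bpp.
lam = 3.2003 * 0.5 ** -1.367
recovered = bpp_for_lambda(lam)
```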

Posted Content
TL;DR: The proposed algorithm includes a frame augmentation pre-processing step that deblurs and temporally interpolates frame data using events and outperforms state-of-the-art methods in both absolute intensity error and image similarity indexes.
Abstract: Event cameras are ideally suited to capture HDR visual information without blur but perform poorly on static or slowly changing scenes. Conversely, conventional image sensors measure absolute intensity of slowly changing scenes effectively but do poorly on high dynamic range or quickly changing scenes. In this paper, we present an event-based video reconstruction pipeline for High Dynamic Range (HDR) scenarios. The proposed algorithm includes a frame augmentation pre-processing step that deblurs and temporally interpolates frame data using events. The augmented frame and event data are then fused using a novel asynchronous Kalman filter under a unifying uncertainty model for both sensors. Our experimental results are evaluated on both publicly available datasets with challenging lighting conditions and fast motions and our new dataset with HDR reference. The proposed algorithm outperforms state-of-the-art methods in both absolute intensity error (48% reduction) and image similarity indexes (average 11% improvement).
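At the heart of such fusion is a per-pixel Kalman measurement update weighing each sensor by its uncertainty; a scalar sketch is below (heavily simplified: the paper's filter is asynchronous and uses a unifying uncertainty model, neither of which this toy captures):

```python
def kalman_fuse(x, p, z, r):
    """One scalar Kalman measurement update: fuse state estimate x
    (variance p) with measurement z (variance r)."""
    k = p / (p + r)              # Kalman gain: trust ratio between sources
    x_new = x + k * (z - x)      # updated intensity estimate
    p_new = (1.0 - k) * p        # reduced posterior uncertainty
    return x_new, p_new

# Fusing a frame-based estimate (x=0.2, var 0.04) with an event-based
# measurement (z=0.3, var 0.04): equal trust gives the midpoint with
# halved variance.
x, p = kalman_fuse(0.2, 0.04, 0.3, 0.04)
```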

Journal ArticleDOI
TL;DR: A low-cost thermally actuated bimorph mirror with 200 mD linear response is described, which meets dynamic range and low aberration requirements for the Laser Interferometer Gravitational-wave Observatory (LIGO) upgrade.
Abstract: Adaptive optics are crucial for overcoming the fabrication limits on mirror curvature in high-precision interferometry. We describe a low-cost thermally actuated bimorph mirror with 200 mD linear response, which meets dynamic range and low aberration requirements for the A+ upgrade of the Laser Interferometer Gravitational-wave Observatory (LIGO). Its deformation and operation limits were measured and verified against finite element simulation.

Journal ArticleDOI
TL;DR: The primary idea is that blending two images in the deep-feature-domain is effective for synthesizing multi-exposure images that are structurally aligned to the reference, resulting in better-aligned images than the pixel-domain blending or geometric transformation methods.
Abstract: This paper presents a deep end-to-end network for high dynamic range (HDR) imaging of dynamic scenes with background and foreground motions. Generating an HDR image from a sequence of multi-exposure images is a challenging process when the images have misalignments by being taken in a dynamic situation. Hence, recent methods first align the multi-exposure images to the reference by using patch matching, optical flow, homography transformation, or attention module before the merging. In this paper, we propose a deep network that synthesizes the aligned images as a result of blending the information from multi-exposure images, because explicitly aligning photos with different exposures is inherently a difficult problem. Specifically, the proposed network generates under/over-exposure images that are structurally aligned to the reference, by blending all the information from the dynamic multi-exposure images. Our primary idea is that blending two images in the deep-feature-domain is effective for synthesizing multi-exposure images that are structurally aligned to the reference, resulting in better-aligned images than the pixel-domain blending or geometric transformation methods. Specifically, our alignment network consists of a two-way encoder for extracting features from two images separately, several convolution layers for blending deep features, and a decoder for constructing the aligned images. The proposed network is shown to generate the aligned images with a wide range of exposure differences very well and thus can be effectively used for the HDR imaging of dynamic scenes. Moreover, by adding a simple merging network after the alignment network and training the overall system end-to-end, we obtain a performance gain compared to the recent state-of-the-art methods.

Journal ArticleDOI
TL;DR: HGCROC-v2, the second prototype of the front-end ASIC, is characterized in terms of signal-to-noise ratio, charge and timing performance, along with results from radiation qualification with total ionizing dose (TID).
Abstract: The High Granularity Calorimeter (HGCAL), presently being designed by the Compact Muon Solenoid collaboration (CMS) to replace the existing endcap calorimeters for the High Luminosity phase of the LHC (HL-LHC), will feature unprecedented transverse and longitudinal readout and triggering segmentation for both electromagnetic and hadronic sections. The requirements for the front-end electronics are extremely challenging, including high dynamic range (0–10 pC), low noise (~200 electrons), high-precision timing information in order to mitigate the pileup effect (25 ps binning) and low power consumption (~ 15 mW/channel). The front-end electronics will face a harsh radiation environment which will reach 200 Mrad at the end of life. It will work at a controlled temperature of 240 K. HGCROC-v2 is the second prototype of the front-end ASIC. It has 72 channels of the full analog chain: low noise and high gain preamplifier and shapers, and a 10-bit 40 MHz SAR-ADC, which provides the charge measurement over the linear range of the preamplifier. In the saturation range of the preamplifier, a discriminator and TDC provide the charge information from TOT (Time Over Threshold) over 200 ns dynamic range using 50 ps binning. A fast discriminator and TDC provide timing information to 25 ps accuracy. Both charge and timing information are kept in a DRAM memory waiting for a Level 1-trigger decision (L1A). At a bunch crossing rate of 40 MHz, compressed charge data are sent out to participate in the generation of the L1-trigger primitives. We report on the performances of the chip in terms of signal-to-noise ratio, charge and timing, as well as results from radiation qualification with total ionizing dose (TID).

Journal ArticleDOI
TL;DR: The proposed tone-mapping algorithm is compared with state-of-the-art algorithms, using well-known metrics that quantify the quality of tone-mapped images, and is found to have the best performance.
Abstract: A new tone-mapping algorithm is presented for visualization of high dynamic range (HDR) images on low dynamic range (LDR) displays. In the first step, the real-world pixel intensities of the HDR image are transformed to a perceptual domain using the perceptual quantizer (PQ). This is followed by construction of the histogram of the luminance channel. The tone-mapping curve is generated from the cumulative histogram. It is known that histogram-based tone-mapping approaches can lead to excessive stretching of contrast in highly populated bins, whereas the pixels in sparse bins can suffer from excessive compression of contrast. We handle these issues by restricting the pixel counts in the histogram to remain below a defined limit, determined by a uniform distribution model. The proposed method is compared with state-of-the-art algorithms, using well-known metrics that quantify the quality of tone-mapped images, and is found to have the best performance.
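A minimal sketch of the histogram-ceiling idea above. The ceiling is modeled here as a multiple of the uniform-distribution bin count; the paper's exact limit may differ:

```python
def tone_curve(lum, n_bins=64, ceiling_factor=2.0):
    """Build a tone curve from a clipped cumulative histogram: bins whose
    pixel count exceeds a uniform-model ceiling are capped, which limits
    contrast stretching in dense bins and over-compression in sparse ones."""
    n = len(lum)
    lo, hi = min(lum), max(lum)
    hist = [0] * n_bins
    for v in lum:
        b = min(int((v - lo) / (hi - lo + 1e-12) * n_bins), n_bins - 1)
        hist[b] += 1
    limit = ceiling_factor * n / n_bins          # uniform-distribution ceiling
    clipped = [min(h, limit) for h in hist]
    total = sum(clipped)
    curve, acc = [], 0.0
    for h in clipped:                            # cumulative histogram
        acc += h
        curve.append(acc / total)
    return curve  # monotone map: bin index -> display luminance in [0, 1]
```

With a 90/10 split of pixels over two bins and a ceiling of 1.5x uniform, the dense bin is capped at 75 counts, so it claims 75/85 of the output range instead of 90/100.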

Journal ArticleDOI
TL;DR: The MPHDR display is the first system to the authors' knowledge to allow for spatially controllable, high dynamic range stimulus generation using multiple primaries; the high luminance, high dynamic range, and wide color gamut output of the MPHDR display is demonstrated.
Abstract: We describe the design, construction, calibration, and characterization of a multi-primary high dynamic range (MPHDR) display system for use in vision research. The MPHDR display is the first system to our knowledge to allow for spatially controllable, high dynamic range stimulus generation using multiple primaries. We demonstrate the high luminance, high dynamic range, and wide color gamut output of the MPHDR display. During characterization, the MPHDR display achieved a maximum luminance of 3200 cd/m2, a maximum contrast range of 3,240,000:1, and an expanded color gamut tailored to dedicated vision research tasks that spans beyond traditional sRGB displays. We discuss how the MPHDR display could be optimized for psychophysical experiments with photoreceptor-isolating stimuli achieved through the method of silent substitution. We present an example case of a range of metameric pairs of melanopsin-isolating stimuli across different luminance levels, from an available melanopsin contrast of 117% at 75 cd/m2 to a melanopsin contrast of 23% at 2000 cd/m2.
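A quick sanity check of the reported specifications, assuming the contrast ratio is simply peak luminance over black level:

```python
import math

peak = 3200.0             # reported maximum luminance, cd/m^2
contrast = 3_240_000.0    # reported maximum contrast range (peak:black)
black = peak / contrast   # implied black level, ~0.001 cd/m^2
orders = math.log10(contrast)  # dynamic range in orders of magnitude, ~6.5
```

An implied black level near a millicandela per square meter and roughly 6.5 log units of range is what makes the display "high dynamic range" in the vision-research sense.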

Journal ArticleDOI
Zhiyong Pan, Mei Yu, Gangyi Jiang, Haiyong Xu, Zongju Peng, Fen Chen
TL;DR: The experimental results show that the proposed method is superior to the existing HDR imaging methods in quantitative and qualitative analysis, and can quickly generate high-quality HDR images.

Journal ArticleDOI
15 Oct 2020
TL;DR: In this article, a newly developed photon-counting detector with a gallium arsenide sensor, which enables imaging with higher quantum efficiency, is characterized and compared with a silicon-based photon-counting detector and a scintillation-based charge-integrating detector.
Abstract: Photon-counting detectors provide several potential advantages in biomedical x-ray imaging, including fast and readout-noise-free data acquisition, a sharp pixel response, and high dynamic range. Grating-based phase-contrast imaging is a biomedical imaging method which delivers high soft-tissue contrast and strongly benefits from photon-counting properties. However, the silicon sensors commonly used in photon-counting detectors have low quantum efficiency at mid to high energies, which limits high-throughput capabilities when combined with grating-based phase-contrast imaging. In this work, we characterize a newly developed photon-counting prototype detector with a gallium arsenide sensor, which enables imaging with higher quantum efficiency, and compare it with a silicon-based photon-counting and a scintillation-based charge-integrating detector. In detail, we calculated the detective quantum efficiency (DQE) of all three detectors based on the experimentally measured modulation transfer function, noise power spectrum, and photon fluence. In addition, the DQEs were determined for two different spectra, namely a 28 kVp and a 50 kVp molybdenum spectrum. Among all tested detectors, the gallium arsenide prototype showed the highest DQE values for both x-ray spectra. Beyond the DQE comparison, we also measured an ex vivo murine sample to assess the benefit of using this detector for grating-based phase-contrast computed tomography. Compared to the scintillation-based detector, the prototype showed higher resolving power at equal signal-to-noise ratio in the grating-based phase-contrast computed tomography experiment.
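The DQE computation described above follows the standard frequency-dependent definition, which can be sketched as (the paper's exact normalization may differ):

```python
def dqe(mtf, nps, mean_signal, fluence):
    """DQE(f) = S^2 * MTF(f)^2 / (q * NPS(f)), with mean signal S, photon
    fluence q, modulation transfer function MTF, and noise power spectrum
    NPS, all sampled at the same spatial frequencies f."""
    return [mean_signal ** 2 * m * m / (fluence * n)
            for m, n in zip(mtf, nps)]
```

An ideal detector (unit MTF, white noise at the quantum limit) gives DQE = 1 at zero frequency, and DQE falls off as MTF drops or NPS rises.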

Journal ArticleDOI
Junyu Zou, En-Lin Hsiang, Tao Zhan, Kun Yin, Ziqian He, Shin-Tson Wu
TL;DR: A full-color high dynamic range head-up display (HUD) based on a polarization-selective optical combiner, a three-layer cholesteric liquid crystal (CLC) film, is demonstrated; simulation results indicate that the dynamic range can be improved by ∼50x (17 dB).
Abstract: We demonstrate a full-color high dynamic range head-up display (HUD) based on a polarization selective optical combiner, which is a three-layer cholesteric liquid crystal (CLC) film. Such a CLC film has three reflection bands corresponding to the three primary colors. A key component in our HUD system is a polarization modulation layer (PML) consisting of a twisted-nematic LC polarization rotator sandwiched by two quarter-wave plates. This spatially switchable PML generates opposite polarization states for the displayed image and its background area. Thus, this optical combiner reflects the displayed image to the observer and transmits the background noise, making the black state darker. Furthermore, by matching the reflection spectra of the optical combiner with the colors of the display panel, the bright state gets brighter. Therefore, both bright state and dark state are improved simultaneously. Our experimental results show that the dark state of the new HUD is lowered by 3x and bright state is boosted by 2.5x. By applying antireflection coating to the optical components and optimizing the degree of polarization, our simulation results indicate that the dynamic range can be improved by ∼50x (17 dB). Potential applications of the proposed HUDs for improving the driver’s safety are foreseeable.
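The reported gains compound as simple arithmetic: lowering the dark state and boosting the bright state multiply into the overall contrast improvement, and the simulated factor converts to decibels as 10·log10:

```python
import math

dark_reduction = 3.0                    # dark state lowered by 3x
bright_gain = 2.5                       # bright state boosted by 2.5x
measured_gain = dark_reduction * bright_gain   # 7.5x measured contrast gain
simulated_gain_db = 10 * math.log10(50)        # ~17 dB for the simulated ~50x
```

This confirms the abstract's internal consistency: ∼50x is indeed ∼17 dB.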

Journal ArticleDOI
TL;DR: The aim of this study was to develop an adaptive inverse tone mapping operator (iTMO) that can convert a single LDR image into a realistic HDR image based on artificial neural networks.
Abstract: In modern digital photographs, most images have low dynamic range (LDR) formats, which means that the range of light intensities from the darkest to the brightest is much lower than the range that can be perceived by the human eye. Therefore, to visualize images as naturally as possible on devices that display them in high dynamic range (HDR) format, the LDR images need to be converted into HDR images. The aim of this study was to develop an adaptive inverse tone mapping operator (iTMO) that can convert a single LDR image into a realistic HDR image based on artificial neural networks. In contrast to conventional iTMO algorithms, our technique was developed by learning the complicated relationship between various LDR–HDR image pairs, which enabled nearly ground-truth HDR images to be generated from various types of LDR images. The novel learning techniques are called cumulative histogram-based learning and color-difference learning. The superior performance of our technique over conventional methods was assessed through objective evaluations of various types of LDR and HDR images.
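An assumed sketch of the kind of cumulative-histogram feature such a network could take as input; the paper's actual feature construction may differ:

```python
def cumulative_histogram_features(pixels, n_bins=16):
    """Global statistic for an 8-bit image: the normalized cumulative
    histogram, a fixed-length monotone feature vector in [0, 1] that
    summarizes the image's tone distribution."""
    hist = [0] * n_bins
    for p in pixels:
        hist[min(p * n_bins // 256, n_bins - 1)] += 1
    feats, acc = [], 0
    for h in hist:
        acc += h
        feats.append(acc / len(pixels))
    return feats
```

Because the vector is normalized and monotone, images of any size map to comparable inputs, which is why cumulative histograms are a common global conditioning signal.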

Journal ArticleDOI
TL;DR: This work introduces a post-acquisition snapshot HDR enhancement scheme that generates a bracketed sequence from a small set of LDR images, and in the extreme case, directly from a single exposure.
Abstract: Bracketed High Dynamic Range (HDR) imaging architectures acquire a sequence of Low Dynamic Range (LDR) images in order to either produce a HDR image or an “optimally” exposed LDR image, achieving impressive results under static camera and scene conditions. However, in real world conditions, ghost-like artifacts and noise effects limit the quality of HDR reconstruction. We address these limitations by introducing a post-acquisition snapshot HDR enhancement scheme that generates a bracketed sequence from a small set of LDR images, and in the extreme case, directly from a single exposure. We achieve this goal via a sparse-based approach where transformations between differently exposed images are encoded through a dictionary learning process, while we learn appropriate features by employing a stacked sparse autoencoder (SSAE) based framework. Via experiments with real images, we demonstrate the improved performance of our method over the state-of-the-art, while our single-shot based HDR formulation provides a novel paradigm for the enhancement of LDR imaging and video sequences.
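A simple baseline for synthesizing a bracketed sequence from one exposure, for contrast with the approach above: a fixed gamma camera response is assumed here, whereas the paper learns the exposure transformations with sparse dictionaries and an SSAE:

```python
def synthesize_bracket(ldr, stops=(-2, 0, 2), gamma=2.2):
    """Re-expose a single LDR image in linear light: undo the (assumed)
    gamma, scale by 2^stops, re-apply gamma, and clip to [0, 1]."""
    stack = []
    for s in stops:
        gain = 2.0 ** s
        stack.append([min((p ** gamma * gain) ** (1.0 / gamma), 1.0)
                      for p in ldr])
    return stack
```

The 0-stop frame reproduces the input, while the -2/+2 frames darken and brighten it; the learned approach exists precisely because this naive model cannot recover detail clipped in the single exposure.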

Proceedings ArticleDOI
06 Jul 2020
TL;DR: This work captures a novel light-field dataset featuring both a high spatial resolution and a high dynamic range (HDR) to enable the community to research and develop efficient reconstruction and tone-mapping algorithms for a hyper-realistic visual experience.
Abstract: Light-field (LF) imaging has various advantages over traditional 2D photography, providing angular information of the real-world scene by separately recording light rays in different directions. Despite the directional light information which enables new capabilities such as depth estimation, post-capture refocusing, and 3D modelling, currently available light-field datasets are very restricted in terms of spatial resolution and dynamic range. In this work, we address this problem by capturing a novel light-field dataset featuring both a high spatial resolution and a high dynamic range (HDR). This dataset should enable the community to research and develop efficient reconstruction and tone-mapping algorithms for a hyper-realistic visual experience. The dataset consists of six static light-fields that are captured by a high-quality digital camera mounted on two precise linear axes using exposure bracketing at each view point. To demonstrate the usefulness of such a dataset, we also performed a thorough analysis on local and global tone-mapping of natural data in the context of novel view-rendering. The rendered results are compared and evaluated both visually and quantitatively. To our knowledge, the recorded dataset is the first attempt to jointly capture high-resolution and HDR light-fields.
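Exposure-bracketed captures like those at each view point are typically merged with a weighted-average radiance estimate (Debevec and Malik style), sketched here under the assumption of linearized pixel values in [0, 1]:

```python
def merge_hdr(stack, exposure_times):
    """Merge a bracketed stack into relative radiance: each pixel is divided
    by its exposure time and averaged with a hat weight that trusts
    well-exposed mid-tones and discounts clipped or noisy extremes."""
    def weight(p):
        return 1.0 - abs(2.0 * p - 1.0)   # 1 at p=0.5, 0 at p=0 or p=1
    merged = []
    for px in zip(*stack):
        num = sum(weight(p) * p / t for p, t in zip(px, exposure_times))
        den = sum(weight(p) for p in px)
        merged.append(num / den if den > 0 else 0.0)
    return merged
```

A saturated pixel (value 1.0) gets zero weight, so its radiance comes entirely from the shorter exposures, which is what makes the bracket worth capturing.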

Journal ArticleDOI
TL;DR: In this article, composite fringes, obtained from sets of measurements with slightly varying interrogation times as in a moiré effect, are introduced, and the performance gain of this approach, along with the trade-offs it entails between sensitivity, dynamic range, and bandwidth, is analyzed analytically.
Abstract: Atom interferometers offer excellent sensitivity to gravitational and inertial signals but have limited dynamic range. We introduce a scheme that improves this trade-off by a factor of 50 using composite fringes, obtained from sets of measurements with slightly varying interrogation times, as in a moiré effect. We analyze analytically the performance gain in this approach and the trade-offs it entails between sensitivity, dynamic range, and bandwidth, and we experimentally validate the analysis over a wide range of parameters. Combining composite-fringe measurements with a particle-filter estimation protocol, we demonstrate continuous tracking of a rapidly varying signal over a span 2 orders of magnitude larger than the dynamic range of a traditional atom interferometer.
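The moiré idea can be illustrated with a toy brute-force search (a stand-in for the paper's particle-filter protocol; the scale factors and grid are illustrative):

```python
import math

def recover(a_true, k1=1.0, k2=1.05, steps=200000):
    """Each fringe only determines the signal modulo 2*pi/k, but two fringes
    with nearby scale factors k1 and k2 are jointly unambiguous out to
    2*pi/|k2 - k1| (the moire beat), a ~20x larger span for these values.
    Grid-search the extended range for the value consistent with both
    wrapped phase readings."""
    p1 = (k1 * a_true) % (2 * math.pi)   # phases known only modulo 2*pi
    p2 = (k2 * a_true) % (2 * math.pi)
    a_max = 2 * math.pi / abs(k2 - k1)   # extended unambiguous range
    best, best_err = 0.0, float("inf")
    for i in range(steps):
        a = a_max * i / steps
        err = (abs(math.remainder(k1 * a - p1, 2 * math.pi)) +
               abs(math.remainder(k2 * a - p2, 2 * math.pi)))
        if err < best_err:
            best, best_err = a, err
    return best
```

A signal of 50, well beyond a single fringe's 2π ≈ 6.28 range, is recovered unambiguously because only the true value satisfies both wrapped readings within the beat period.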