
Showing papers on "High dynamic range published in 2019"


Proceedings ArticleDOI
15 Jun 2019
TL;DR: The proposed AHDRNet is a non-flow-based method that avoids the artifacts produced by optical-flow estimation errors and achieves state-of-the-art quantitative and qualitative results.
Abstract: Ghosting artifacts caused by moving objects or misalignments are a key challenge in high dynamic range (HDR) imaging for dynamic scenes. Previous methods first register the input low dynamic range (LDR) images using optical flow before merging them, a process that is error-prone and causes ghosting in the results. A very recent work tries to bypass optical flow via a deep network with skip-connections, but it still suffers from ghosting artifacts under severe movement. To avoid ghosting at the source, we propose a novel attention-guided end-to-end deep neural network (AHDRNet) to produce high-quality ghost-free HDR images. Unlike previous methods that directly stack the LDR images or features for merging, we use attention modules to guide the merging according to the reference image. The attention modules automatically suppress undesired components caused by misalignment and saturation and enhance desirable fine details in the non-reference images. In addition to the attention model, we use dilated residual dense blocks (DRDBs) to make full use of the hierarchical features and to increase the receptive field for hallucinating the missing details. As a non-flow-based method, the proposed AHDRNet also avoids the artifacts generated by optical-flow estimation errors. Experiments on different datasets show that the proposed AHDRNet achieves state-of-the-art quantitative and qualitative results.
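The attention idea can be illustrated with a toy, non-learned stand-in: weight each non-reference image per pixel by how well it agrees with the reference before averaging, so misaligned or saturated pixels contribute little. This is only a hand-crafted sketch of the principle; the function `attention_merge` and its exponential weighting are illustrative assumptions, not the AHDRNet architecture, whose attention maps are learned.

```python
import numpy as np

def attention_merge(ref, non_refs):
    """Toy attention-guided merge: each non-reference image is weighted
    per pixel by its agreement with the reference, then all images are
    averaged. Pixels that disagree (misalignment, saturation) get
    near-zero weight, which is the ghost-suppression intuition."""
    ref = ref.astype(np.float64)
    acc = ref.copy()
    wsum = np.ones_like(ref)
    for img in non_refs:
        img = img.astype(np.float64)
        attn = np.exp(-np.abs(img - ref))  # suppress mismatched pixels
        acc += attn * img
        wsum += attn
    return acc / wsum
```

A badly mismatched input (e.g. a moving object) barely perturbs the output, while a consistent input is averaged in at full weight.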

190 citations


Posted Content
TL;DR: In this paper, a recurrent network is proposed to reconstruct videos from a stream of events and is trained on a large amount of simulated event data; it is able to produce high dynamic range reconstructions in challenging lighting conditions.
Abstract: Event cameras are novel sensors that report brightness changes in the form of a stream of asynchronous "events" instead of intensity frames. They offer significant advantages with respect to conventional cameras: high temporal resolution, high dynamic range, and no motion blur. While the stream of events encodes in principle the complete visual signal, the reconstruction of an intensity image from a stream of events is an ill-posed problem in practice. Existing reconstruction approaches are based on hand-crafted priors and strong assumptions about the imaging process as well as the statistics of natural images. In this work we propose to learn to reconstruct intensity images from event streams directly from data instead of relying on any hand-crafted priors. We propose a novel recurrent network to reconstruct videos from a stream of events, and train it on a large amount of simulated event data. During training we propose to use a perceptual loss to encourage reconstructions to follow natural image statistics. We further extend our approach to synthesize color images from color event streams. Our network surpasses state-of-the-art reconstruction methods by a large margin in terms of image quality (> 20%), while comfortably running in real-time. We show that the network is able to synthesize high framerate videos (> 5,000 frames per second) of high-speed phenomena (e.g. a bullet hitting an object) and is able to provide high dynamic range reconstructions in challenging lighting conditions. We also demonstrate the effectiveness of our reconstructions as an intermediate representation for event data. We show that off-the-shelf computer vision algorithms can be applied to our reconstructions for tasks such as object classification and visual-inertial odometry and that this strategy consistently outperforms algorithms that were specifically designed for event data.
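The underlying event-generation model (each event signals a log-intensity change of a fixed contrast threshold C) can be inverted naively by per-pixel accumulation; the paper's point is that a learned recurrent network far outperforms such hand-crafted integration. A minimal sketch, with `integrate_events` and the `(x, y, t, polarity)` tuple layout as assumptions of this illustration:

```python
import numpy as np

def integrate_events(events, shape, C=0.2):
    """Naive baseline: accumulate polarity * contrast threshold C into a
    log-intensity map, then exponentiate. This shows only the event
    model the paper inverts with a learned network."""
    log_I = np.zeros(shape)
    for x, y, t, p in events:          # p in {+1, -1}
        log_I[y, x] += p * C
    return np.exp(log_I)               # back to linear intensity
```

Three positive events at one pixel raise its log-intensity by 3C; untouched pixels stay at intensity 1.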

168 citations


Proceedings ArticleDOI
15 Jun 2019
TL;DR: The potential of event-camera-based conditional generative adversarial networks to create images/videos from an adjustable portion of the event data stream is unlocked, and the methods are evaluated by comparing the results with intensity images captured on the same pixel grid as the events.
Abstract: Event cameras have many advantages over traditional cameras, such as low latency, high temporal resolution, and high dynamic range. However, since the outputs of event cameras are sequences of asynchronous events over time rather than actual intensity images, existing algorithms cannot be applied directly. It is therefore desirable to generate intensity images from events for other tasks. In this paper, we unlock the potential of event-camera-based conditional generative adversarial networks to create images/videos from an adjustable portion of the event data stream. Stacks of space-time coordinates of events are used as inputs, and the network is trained to reproduce images based on the spatio-temporal intensity changes. The usefulness of event cameras for generating high dynamic range (HDR) images even in extreme illumination conditions, as well as non-blurred images under rapid motion, is also shown. In addition, the possibility of generating very high frame rate videos is demonstrated, theoretically up to 1 million frames per second (FPS), since the temporal resolution of event cameras is about 1 microsecond. The proposed methods are evaluated by comparing the results with intensity images captured on the same pixel grid as the events, using online available real datasets and synthetic datasets produced by the event camera simulator.

145 citations


Journal ArticleDOI
TL;DR: DeepTMO, a conditional generative adversarial network (cGAN), not only learns to adapt to vast scenic content (e.g., outdoor, indoor, human, structures, etc.) but also tackles HDR-related scene-specific challenges such as contrast and brightness, while preserving fine-grained details.
Abstract: A computationally fast tone mapping operator (TMO) that can quickly adapt to a wide spectrum of high dynamic range (HDR) content is quintessential for visualization on varied low dynamic range (LDR) output devices such as movie screens or standard displays. Existing TMOs can successfully tone-map only a limited range of HDR content and require extensive parameter tuning to yield the best subjective-quality tone-mapped output. In this paper, we address this problem by proposing a fast, parameter-free and scene-adaptable deep tone mapping operator (DeepTMO) that yields a high-resolution and high-subjective-quality tone-mapped output. Based on a conditional generative adversarial network (cGAN), DeepTMO not only learns to adapt to vast scenic content (e.g., outdoor, indoor, human, structures, etc.) but also tackles HDR-related scene-specific challenges such as contrast and brightness, while preserving fine-grained details. We explore 4 possible combinations of Generator-Discriminator architectural designs to specifically address prominent issues in HDR-related deep-learning frameworks, such as blurring, tiling patterns, and saturation artifacts. By exploring different influences of scales, loss functions, and normalization layers under a cGAN setting, we conclude by adopting a multi-scale model for our task. To further leverage the large-scale availability of unlabeled HDR data, we train our network by generating targets using an objective HDR quality metric, namely the Tone Mapping Image Quality Index (TMQI). We demonstrate results both quantitatively and qualitatively, and showcase that our DeepTMO generates high-resolution, high-quality output images over a large spectrum of real-world scenes. Finally, we evaluate the perceived quality of our results by conducting a pair-wise subjective study which confirms the versatility of our method.
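For contrast, the kind of fixed global operator that DeepTMO aims to supersede fits in a few lines. The classic Reinhard global operator below depends on a manually tuned key parameter `a`, illustrating the parameter-tuning burden the abstract mentions; this is a generic textbook sketch, not taken from the paper.

```python
import numpy as np

def reinhard_tmo(hdr, a=0.18, eps=1e-6):
    """Classic global Reinhard operator: scale luminance by the key 'a'
    over the log-average luminance, then compress with L/(1+L).
    'a' must be tuned per scene, which is what scene-adaptive TMOs avoid."""
    L = 0.2126*hdr[..., 0] + 0.7152*hdr[..., 1] + 0.0722*hdr[..., 2]
    L_avg = np.exp(np.mean(np.log(L + eps)))      # log-average luminance
    L_s = a * L / L_avg                           # scaled luminance
    L_d = L_s / (1.0 + L_s)                       # compress to [0, 1)
    return hdr * (L_d / (L + eps))[..., None]
```

A flat white HDR frame maps to a mid-grey whose level is set entirely by the key `a`, showing how the output hinges on that manual parameter.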

39 citations


Journal ArticleDOI
TL;DR: Tristimulus colour calibration procedures for high dynamic range photography are developed to measure circadian lighting, demonstrating that measurements from high dynamic range photographs can correspond to the physical quantity of circadian luminance with reasonable precision and repeatability.
Abstract: The human ocular system functions in a dual manner. While the most well-known function is to facilitate vision, a growing body of research demonstrates its role in resetting the internal body clock...

33 citations


Journal ArticleDOI
TL;DR: In this article, the authors proposed a broadband quantum cascade laser (QCL) with a current density dynamic range (Jdr) of 3.2, significantly larger than the state of the art, over a 1.3 THz bandwidth.
Abstract: We report on the engineering of broadband quantum cascade lasers (QCLs) emitting at Terahertz (THz) frequencies, which exploit a heterogeneous active region scheme and have a current density dynamic range (Jdr) of 3.2, significantly larger than the state of the art, over a 1.3 THz bandwidth. We demonstrate that the devised broadband lasers operate as THz optical frequency comb synthesizers in continuous wave, with a maximum optical output power of 4 mW (0.73 mW in the comb regime). Measurement of the intermode beatnote map reveals a clear dispersion-compensated frequency comb regime extending over a continuous 106 mA current range (current density dynamic range of 1.24), significantly larger than the state of the art reported under similar geometries, with a corresponding emission bandwidth of 1.05 THz and a stable, narrow (4.15 kHz) beatnote detected with a signal-to-noise ratio of 34 dB. Analysis of the electrical and thermal beatnote tuning reveals a current-tuning coefficient ranging between 5 MHz/mA and 2.1 MHz/mA and a temperature-tuning coefficient of -4 MHz/K. The ability to tune the THz QCL combs over their full dynamic range by temperature and current paves the way for their use as a powerful spectroscopy tool that can provide broad frequency coverage combined with high-precision spectral accuracy.

30 citations


Journal ArticleDOI
TL;DR: An original method is presented to meet the measurement requirements for ultra-wide bandwidth, ultra-high resolution, and ultra-large dynamic range simultaneously, based on an asymmetric optical probe signal generator (ASG) and receiver (ASR).
Abstract: Optical vector analysis (OVA) capable of achieving magnitude and phase responses is essential for the fabrication and application of emerging optical devices. Conventional OVA often has to make compromises among resolution, dynamic range, and bandwidth. Here we show an original method to meet the measurement requirements for ultra-wide bandwidth, ultra-high resolution, and ultra-large dynamic range simultaneously, based on an asymmetric optical probe signal generator (ASG) and receiver (ASR). The ASG and ASR remove the measurement errors introduced by the modulation nonlinearity and enable an ultra-large dynamic range. Thanks to the wavelength-independence of the ASG and ASR, the measurement range can increase by 2N times by applying an N-tone optical frequency comb without complicated operation. In an experiment, OVA with a resolution of 334 Hz (2.67 attometers in the 1550-nm band), a dynamic range of > 90 dB, and a measurement range of 1.075 THz is demonstrated. Typical methods for optical vector analysis have tradeoffs among resolution, dynamic range, and bandwidth. The authors use an asymmetric optical probe signal generator and receiver to perform attometer-resolution measurement over a THz of bandwidth while maintaining high dynamic range, aiming to characterize emerging optical devices.
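The quoted frequency and wavelength resolutions are related by the standard conversion Δλ = λ²Δf/c, so the two figures can be cross-checked directly; this is an independent sanity check, not a calculation from the paper.

```python
# Cross-check of the quoted figures: a 334 Hz frequency resolution in
# the 1550 nm band corresponds to a wavelength resolution of
# delta_lambda = lambda^2 * delta_f / c.
c = 299_792_458.0        # speed of light (m/s)
lam = 1550e-9            # carrier wavelength (m)
delta_f = 334.0          # reported frequency resolution (Hz)
delta_lam = lam**2 * delta_f / c   # ~2.68e-18 m, i.e. the quoted ~2.67 am
```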

29 citations


Journal ArticleDOI
TL;DR: The "Automated Imaging Routine for Compact Arrays for the Radio Sun" (AIRCARS), an end-to-end imaging pipeline optimized for solar imaging with compact-core arrays, has the potential to transform the multi-petabyte MWA solar archive from raw visibilities into science-ready images.
Abstract: Solar radio emission, especially at metre wavelengths, is well known to vary over small spectral (≲100 kHz) and temporal (<1 s) spans. It is comparatively recently, with the advent of a new generation of instruments, that it has become possible to capture data with sufficient resolution (temporal, spectral, and angular) that one can begin to characterize the solar morphology simultaneously along the axes of time and frequency. This ability is naturally accompanied by an enormous increase in data volumes and computational burden, a problem which will only become more acute with the next generation of instruments such as the Square Kilometre Array (SKA). The usual approach, which requires manual guidance of the calibration process, is impractical. Here we present the "Automated Imaging Routine for Compact Arrays for the Radio Sun" (AIRCARS), an end-to-end imaging pipeline optimized for solar imaging with arrays with a compact core. We have used AIRCARS so far on data from the Murchison Widefield Array (MWA) Phase-I. The dynamic range of the images is routinely from a few hundred to a few thousand. In the few cases where we have pushed AIRCARS to its limits, the dynamic range can go as high as ~75,000. The images made represent a substantial improvement in the state of the art in terms of imaging fidelity and dynamic range. This has the potential to transform the multi-petabyte MWA solar archive from raw visibilities into science-ready images. AIRCARS can also be tuned to upcoming telescopes like the SKA, making it a very useful tool for the heliophysics community.
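The dynamic range figures above follow the usual radio-imaging convention assumed here: the image peak divided by the rms noise in an off-source region. A one-line sketch (the function name is illustrative, not from the pipeline):

```python
import numpy as np

def image_dynamic_range(img, off_source):
    """Radio-imaging dynamic range (convention assumed here):
    image peak divided by the rms in an off-source region."""
    return img.max() / np.std(off_source)
```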

26 citations


Journal ArticleDOI
TL;DR: A dc-coupled biomedical radar sensor is proposed, incorporating an analog dc offset cancelation circuit with a fast start-up feature that can automatically remove any dc offset in the baseband signal, emulating an ac-coupling system.
Abstract: One challenge of designing a dc-coupled biomedical radar sensor is dealing with the dc offset voltage present in its receiver. The undesired dc offset is mainly caused by clutter reflection and hardware imperfection. It may saturate the baseband amplifier and limit the maximum dynamic range that a biomedical radar sensor can achieve. AC-coupling the signal can eliminate dc offset, but it will also distort the signal, and thus may not be acceptable for high-precision applications. In this paper, a dc-coupled biomedical radar sensor is proposed, incorporating an analog dc offset cancelation circuit with a fast start-up feature. It can automatically remove any dc offset in the baseband signal, emulating an ac-coupling system. It can also be easily reconfigured into a dc-tracking mode when the application requires. When entering this mode, the initial dc offset is removed, while subsequent dc changes are recorded. The proposed solution only uses analog components, without requiring any digital signal processing or software programming. Therefore, compared with existing digitized dc offset calibration techniques, the proposed method has the advantages of low cost, easy implementation, short delay, and high resolution. The experimental results demonstrate that a wide range of dc offsets can be successfully removed from the biomedical radar sensor, and its dynamic range can be maximized. The reconfiguration of the dc-tracking mode has also been tested and verified. Furthermore, the proposed dc offset cancelation circuit has the potential to be easily adopted by other systems that also face the dc offset problem.
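The core feedback idea (a slow integrator estimates the dc level and subtracts it, so the output looks ac-coupled without a series capacitor's settling distortion) can be modeled discretely. This is a sketch of the principle under that assumption, not the paper's analog circuit:

```python
import numpy as np

def cancel_dc(x, k=0.05):
    """Discrete model of an analog dc-cancelation loop: subtract a
    running offset estimate from the input; a slow integrator (gain k)
    drives the estimate toward the residual dc in the output."""
    est = 0.0
    y = np.empty_like(x, dtype=float)
    for n, xn in enumerate(x):
        y[n] = xn - est        # offset-free output
        est += k * y[n]        # integrator slowly tracks residual dc
    return y
```

Fed a constant 2 V offset, the loop lets it through at first and then integrates it away geometrically; the signal band is preserved as long as `k` is small relative to the signal dynamics.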

26 citations


Proceedings ArticleDOI
01 May 2019
TL;DR: In this paper, an analog-digital hybrid null-steering beamformer was proposed to detect and decode the weak AmBC-modulated signal buried in the strong direct path signals and the noise, without requiring the instantaneous channel state information.
Abstract: In bi-static Ambient Backscatter Communications (AmBC) systems, the receiver needs to operate at a large dynamic range because the direct path from the ambient source to the receiver can be several orders of magnitude stronger than the scattered path modulated by the AmBC device. In this paper, we propose a novel analog-digital hybrid null-steering beamformer which allows the backscatter receiver to detect and decode the weak AmBC-modulated signal buried in the strong direct path signals and the noise, without requiring the instantaneous channel state information. The analog cancellation of the strong signal components allows the receiver automatic gain control to adjust to the level of the weak AmBC signals. This hence allows common analog-to-digital converters to be used for sampling the signal. After cancelling the strong components, the ambient source signal appears as zero mean fast fading from the AmBC system point of view. We use the direct path signal component to track the phase of the unknown ambient signal. In order to avoid channel estimation, we propose AmBC to use orthogonal channelization codes. The results show that the design allows the AmBC receiver to detect the backscatter binary phase shift keying signals without decoding the ambient signals and requiring knowledge of the instantaneous channel state information.
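The null-steering step can be sketched digitally: project the array snapshots onto the subspace orthogonal to the direct-path steering vector, so the strong ambient component vanishes before gain control and sampling. This is a purely digital illustration; the paper uses an analog-digital hybrid and avoids instantaneous CSI, whereas the steering vector `a` is assumed known here:

```python
import numpy as np

def null_direct_path(X, a):
    """Project array snapshots X (antennas x samples) onto the subspace
    orthogonal to the direct-path steering vector a, suppressing the
    strong ambient-source component (digital sketch of null-steering)."""
    a = a.reshape(-1, 1) / np.linalg.norm(a)
    P = np.eye(len(a)) - a @ a.conj().T    # projector onto null(a^H)
    return P @ X
```

A snapshot matrix containing only the direct path is driven to zero, which is what lets the AGC adjust to the weak backscatter signal.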

Journal ArticleDOI
TL;DR: This paper formulates the detection of foreground moving objects as a rank minimization problem and, to eliminate the blurring caused by slight background changes across the LDR images, further rectifies the background through irradiance alignment.
Abstract: The irradiance range of a real-world scene is often beyond the capability of digital cameras. Therefore, High Dynamic Range (HDR) images can be generated by fusing differently exposed images of the same scene. However, moving objects pose the most severe problem in HDR imaging, leading to annoying ghost artifacts in the fused image. In this paper, we present a novel HDR technique to address the moving-objects problem. Since the input low dynamic range (LDR) images captured by a camera act as static, linearly related backgrounds with moving objects during each individual exposure, we formulate the detection of foreground moving objects as a rank minimization problem. Meanwhile, to eliminate the blurring caused by slight background changes across the LDR images, we further rectify the background by employing irradiance alignment. Experiments on image sequences show that the proposed algorithm achieves significant gains in synthesized HDR image quality compared to state-of-the-art methods.
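The rank intuition can be shown with a toy decomposition: vectorized, linearly related LDR frames stacked as columns form a near rank-1 matrix, so the residual after a rank-1 SVD fit flags moving objects. The paper solves a proper rank-minimization problem; `detect_moving` below is only an SVD-based illustration of why low rank captures the static background:

```python
import numpy as np

def detect_moving(ldr_stack):
    """Toy version of the low-rank idea: fit a rank-1 background to the
    pixels-by-frames matrix and return the (ideally sparse) residual,
    which is large where objects moved between exposures."""
    D = np.stack([im.ravel() for im in ldr_stack], axis=1)  # pixels x frames
    U, s, Vt = np.linalg.svd(D, full_matrices=False)
    background = s[0] * np.outer(U[:, 0], Vt[0])            # rank-1 part
    return np.abs(D - background)                           # residual
```

A static scene re-exposed at twice the brightness is exactly rank-1, so the residual is numerically zero.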

Proceedings ArticleDOI
09 Jun 2019
TL;DR: A low power, reconfigurable, high dynamic range (DR), light-to-digital converter (LDC) for wearable PPG/NIRS recording, which merges the functionalities of a conventional transimpedance amplifier and ADC, while quantization in time domain significantly improves the DR.
Abstract: This paper presents a low-power, reconfigurable, high-dynamic-range (DR) light-to-digital converter (LDC) for wearable PPG/NIRS recording. The LDC converts light into the time domain with a dual-slope-mode integrator, followed by a counter-based time-to-digital converter. This architecture merges the functionalities of a conventional transimpedance amplifier and an ADC, while quantization in the time domain significantly improves the DR. The inherently low pulse repetition frequency (PRF) of the LDC also reduces the LED power. Furthermore, the DR of the LDC can be easily reconfigured by re-programming the counting step size or the PRF of the LEDs, allowing optimal power consumption for different DR scenarios. The IC achieves a maximum DR of 119 dB while consuming only 196 µW (including the two LEDs). The IC is validated with PPG and NIRS tests, using photodiodes (PDs) and silicon photomultipliers (SiPMs), respectively.
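The dual-slope conversion admits a simple idealized model: the photocurrent charges an integrator for a fixed window, a reference current then discharges it while a counter runs, so the count is proportional to the light level. `dual_slope_counts` is a model of that principle under ideal-component assumptions, not the reported IC:

```python
def dual_slope_counts(i_photo, t1, i_ref, f_clk):
    """Idealized dual-slope light-to-digital conversion: integrate the
    photocurrent for a fixed time t1, then de-integrate with a reference
    current while counting clock cycles at f_clk. The count encodes the
    input in the time domain, which is what extends the DR."""
    t2 = i_photo * t1 / i_ref      # de-integration time back to zero
    return round(t2 * f_clk)       # counter-based time-to-digital value
```

Doubling the photocurrent doubles the count, and re-programming `f_clk` (the counting step) rescales the DR, mirroring the reconfigurability described above.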

Journal ArticleDOI
TL;DR: This paper presents an ultra-low-voltage (ULV) pixel pulsewidth-modulation (PWM) CMOS imager with monitoring and capturing for Internet-of-Things (IoT) and artificial intelligence (AI) applications, fabricated in a standard 0.18-µm CMOS process.
Abstract: This paper presents an ultra-low-voltage (ULV) 300×200-pixel pulsewidth-modulation (PWM) CMOS imager with monitoring and capturing for Internet-of-Things (IoT) and artificial intelligence (AI) applications, fabricated in a standard 0.18-µm CMOS process technology. In always-on monitoring operation, the imager provides high dynamic range (HDR) and energy harvesting (EH) modes for event detection and energy collection, respectively. In low-power image-capturing operation, the imager provides a linear-response (LR) mode for object identification and recording. In the LR mode, the proposed ULV PWM pixel with threshold variation cancellation (TVC) achieves a non-linearity of +0.36/−0.29% and a fixed-pattern noise (FPN) of 0.159%. With the proposed pixel-wise adaptive-multiple-sampling (AMS) scheme and the corresponding n-time multiple sampling using a dual-slope ramping (DSR) reference, the 0.4-V-operated PWM pixel achieves a total noise of 9.42 e− at 4-time AMS operation. The achieved peak signal-to-noise ratio (PSNR) and dynamic range (DR) are 60.1 dB in the LR mode and 141 dB in the HDR mode, respectively, and the harvested power is 15.5 µW at 60 klx in the EH mode.

Journal ArticleDOI
TL;DR: A tunable synthetic fourth-order bandpass filter at microwave frequencies that can maintain the DR of a second-order BPF while achieving a fourth- order frequency selectivity, which is favorable compared to cascading resonators.
Abstract: This paper demonstrates a tunable synthetic fourth-order bandpass filter (BPF) at microwave frequencies. Two parallel second-order Q-enhanced LC BPF responses are added out of phase to synthesize a fourth-order BPF response. The filter is implemented in a 130-nm SiGe BiCMOS technology with a core die area of 0.53 × 0.7 mm². The filter center frequency can be tuned from 4 to 8 GHz (C-band). The filter also achieves a wide 3-dB fractional bandwidth (BW) tuning range of 2%–25%, with a passband ripple of less than 0.5 dB. The corresponding normalized dynamic range (DR) is 151–166 dB·Hz, owing to a switched-varactor control scheme that realizes a large effective tuning range with high linearity. Using the parallel synthesis approach, the filter can maintain the DR of a second-order BPF while achieving fourth-order frequency selectivity, which compares favorably to cascading resonators. On the lower side of the band, the filter achieves more than 65 dB of ultimate rejection. On the upper side, the rejection is more than 52 dB. The filter also employs a variable transconductor for noise-linearity tradeoff flexibility. The power consumption of the filter is 112–125 mW over the above fractional BW tuning range at the target C-band.

Journal ArticleDOI
TL;DR: This paper compares time correlated and uncorrelated imaging of single photon events using an InGaAs single-photon-counting-avalanche-photo-diode (SPAD) sensor with a 32 × 32 focal plane array detector and demonstrates imaging, ranging and photon flux measurements of a moving target from a few samples with a frame rate of 50 kHz.
Abstract: Optical sensing with single photon counting avalanche diode detectors has become a versatile approach for ranging and low light level imaging. In this paper, we compare time correlated and uncorrelated imaging of single photon events using an InGaAs single-photon-counting-avalanche-photo-diode (SPAD) sensor with a 32 × 32 focal plane array detector. We compare ranging, imaging and photon flux measurement capabilities at shortwave infrared wavelengths and determine the minimum number of photon event measurements to perform reliable scene reconstruction. With time-correlated-single-photon-counting (TCSPC), we obtained range images with centimeter resolution and determined the relative intensity. Using uncorrelated single photon counting (USPC), we demonstrated photon flux estimation with a high dynamic range, from φ̂ = 2×10⁴ to 1.3×10⁷ counts per second. Finally, we demonstrate imaging, ranging and photon flux measurements of a moving target from a few samples with a frame rate of 50 kHz.
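One standard way to estimate photon flux from uncorrelated gated counts is Poisson inversion: a gate reports only "at least one photon", so the detection probability p = 1 − exp(−φT) is inverted for φ. This sketch shows that generic estimator only; the paper's exact procedure may differ:

```python
import math

def photon_flux(n_detections, n_gates, gate_s):
    """USPC-style flux estimate: invert the Poisson detection model
    p = 1 - exp(-phi * T) for the flux phi, correcting for the fact
    that a gate cannot distinguish one photon from several."""
    p = n_detections / n_gates
    return -math.log(1.0 - p) / gate_s
```

At high flux the raw count rate saturates while this estimator keeps tracking, which is what extends the usable dynamic range.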

Journal ArticleDOI
TL;DR: This study created a novel scanning pattern for achieving high dynamic range (HDR)-OCTA with a superior scanning efficiency and implemented a bidirectional, interleaved scanning pattern that is sensitive to different flow speeds by adjustable adjacent inter-scan time intervals.
Abstract: The dynamic range of current optical coherence tomography (OCT) angiography (OCTA) images is limited by the fixed scanning intervals. High speed OCT devices introduce the possibility of extending the flow signal dynamic range. In this study, we created a novel scanning pattern for achieving high dynamic range (HDR)-OCTA with a superior scanning efficiency. We implemented a bidirectional, interleaved scanning pattern that is sensitive to different flow speeds by adjustable adjacent inter-scan time intervals. We found that an improved flow dynamic range can be achieved by generating 3 different B-scan time intervals using 3 repetitions.


Proceedings ArticleDOI
16 Jan 2019
TL;DR: ePix10K as discussed by the authors is a hybrid pixel detector developed at SLAC for demanding free-electron laser (FEL) applications, providing an ultrahigh dynamic range (245 eV to 88 MeV) through gain auto-ranging.
Abstract: ePix10K is a hybrid pixel detector developed at SLAC for demanding free-electron laser (FEL) applications, providing an ultrahigh dynamic range (245 eV to 88 MeV) through gain auto-ranging. It has three gain modes (high, medium and low) and two auto-ranging modes (high-to-low and medium-to-low). The first ePix10K cameras are built around modules consisting of a sensor flip-chip bonded to 4 ASICs, resulting in 352 × 384 pixels of 100 µm × 100 µm each. We present results from extensive testing of three ePix10K cameras with FEL beams at LCLS, resulting in a measured noise floor of 245 eV rms, or 67 e− equivalent noise charge (ENC), and a range of 11 000 photons at 8 keV. We demonstrate the linearity of the response in various gain combinations: fixed high, fixed medium, fixed low, auto-ranging high-to-low, and auto-ranging medium-to-low, while maintaining a low noise (well within the counting statistics), a very low cross-talk, perfect saturation response at fluxes up to 900 times the maximum range, and acquisition rates of up to 480 Hz. Finally, we present examples of high dynamic range x-ray imaging spanning more than 4 orders of magnitude dynamic range (from a single photon to 11 000 photons/pixel/pulse at 8 keV). Achieving this high performance with only one auto-ranging switch leads to relatively simple calibration and reconstruction procedures. The low noise levels allow usage with long integration times at non-FEL sources. ePix10K cameras leverage the advantages of hybrid pixel detectors with high production yield and good availability, minimize development complexity through sharing the hardware, software and DAQ development with all other versions of ePix cameras, while providing an upgrade path to 5 kHz, 25 kHz and 100 kHz in three steps over the next few years, matching the LCLS-II requirements.
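The single-switch gain auto-ranging principle is easy to model: stay in high gain for single-photon sensitivity, switch to low gain above a threshold to avoid saturation, and store the gain bit so readout can be rescaled to physical units. The functions and the numbers below are illustrative assumptions, not the ePix10K calibration:

```python
def auto_range(q_signal, switch_level=100.0, g_hi=1.0, g_lo=0.01):
    """One-switch gain auto-ranging: small signals are amplified at high
    gain; above switch_level the pixel switches to low gain and records
    which gain was used (the per-pixel gain bit)."""
    if q_signal < switch_level:
        return q_signal * g_hi, 'high'
    return q_signal * g_lo, 'low'

def reconstruct(adc, mode, g_hi=1.0, g_lo=0.01):
    """Undo the applied gain using the stored gain bit."""
    return adc / (g_hi if mode == 'high' else g_lo)
```

Round-tripping any signal through `auto_range` and `reconstruct` recovers the input, which is why one switch keeps calibration simple while spanning orders of magnitude.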

Proceedings ArticleDOI
10 May 2019
TL;DR: An end-to-end convolutional neural network (CNN) termed HDRNET is presented to directly reconstruct an HDR image from only a single 8-bit LDR image, without requiring any human expertise.
Abstract: As opposed to the low dynamic range (LDR) image, a high dynamic range (HDR) image can represent the greater dynamic range of luminosity that can be perceived by the human visual system. As under-/over-exposure and color quantization cause information loss, inferring an HDR image from a single LDR input is an ill-posed problem. To tackle this, we present an end-to-end convolutional neural network (CNN) termed HDRNET to directly reconstruct an HDR image given only a single 8-bit LDR image, without requiring any human expertise. Because information from different scales should be considered, our architecture is multiscale. To enhance the representational ability of the CNN, we propose a hybrid loss and use a channel attention mechanism to adaptively rescale channel-wise features. To train HDRNET, we build a large dataset of LDR-HDR image pairs. Comparative experiments demonstrate the superiority of the proposed algorithm both qualitatively and quantitatively.
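Channel attention of the squeeze-and-excitation kind mentioned above reduces each channel to a global statistic, passes it through a small gating network, and rescales the channels. A toy numpy version, where `w1` and `w2` stand in for hypothetical learned weights (this is the generic mechanism, not HDRNET's exact module):

```python
import numpy as np

def channel_attention(feat, w1, w2):
    """Squeeze-and-excitation-style channel attention on an HxWxC
    feature map: global-average-pool per channel (squeeze), run a tiny
    ReLU+sigmoid MLP (excite), and rescale each channel by its gate."""
    z = feat.mean(axis=(0, 1))                          # squeeze: (C,)
    s = 1.0 / (1.0 + np.exp(-(np.maximum(z @ w1, 0) @ w2)))  # gates in (0,1)
    return feat * s                                     # rescale channels
```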

Journal ArticleDOI
TL;DR: A real-time hardware implementation of the exponent-based tone mapping algorithm of Horé et al., which uses both local and global image information to improve the contrast and increase the brightness of tone-mapped images.
Abstract: In this paper, we present a real-time hardware implementation of the exponent-based tone mapping algorithm of Horé et al., which uses both local and global image information to improve the contrast and increase the brightness of tone-mapped images. Although there are several tone mapping algorithms available in the literature, most of them require manual tuning of their rendering parameters. In our implementation, however, the algorithm has an embedded automatic key parameter estimation block that controls the brightness of the tone-mapped images. We also present the implementation of a Gaussian-based halo-reducing filter. The hardware implementation is described in Verilog and synthesized for a field programmable gate array device. Experimental results on different wide dynamic range images show that we are able to obtain images of good visual quality with good brightness and contrast. The good performance of our hardware architecture is also confirmed quantitatively by high peak signal-to-noise ratio and structural similarity index values.
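A generic member of the exponent-based operator family, with the key estimated automatically from the global mean, shows the "no manual tuning" idea in miniature. This is a simplified sketch: the actual Horé et al. operator combines local and global information and differs in detail.

```python
import numpy as np

def exp_tone_map(L):
    """Generic exponent-based tone mapping: L_out = 1 - exp(-L/k), with
    the key k estimated automatically from the global mean luminance
    rather than tuned by hand (simplified illustration only)."""
    k = np.mean(L)                      # automatic key estimation
    return 1.0 - np.exp(-L / max(k, 1e-9))
```

The mapping is monotonic and compresses any input range into [0, 1), with the automatically chosen key setting the overall brightness.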

Journal ArticleDOI
TL;DR: A novel HDR video compression algorithm, which uses a perceptually uniform color opponent space, a novel perceptual transfer function to encode the dynamic range of the scene, and a novel error minimization scheme for accurate chroma reproduction is introduced.
Abstract: Recently, there has been significant progress in the research and development of high dynamic range (HDR) video technology, and state-of-the-art video pipelines are able to offer higher bit-depth support to capture, store, encode, and display HDR video content. In this paper, we introduce a novel HDR video compression algorithm, which uses a perceptually uniform color opponent space, a novel perceptual transfer function to encode the dynamic range of the scene, and a novel error minimization scheme for accurate chroma reproduction. The proposed algorithm was objectively and subjectively evaluated against four state-of-the-art algorithms. The objective evaluation was conducted across a set of 39 HDR video sequences, using the latest x265 10-bit video codec along with several perceptual and structural quality assessment metrics at 11 different quality levels. Furthermore, a rating-based subjective evaluation (n = 40) was conducted with six sequences at two different output bitrates. Results suggest that the proposed algorithm exhibits the lowest coding error amongst the five algorithms evaluated. Additionally, the rate-distortion characteristics suggest that the proposed algorithm outperforms the existing state of the art at bitrates ≥ 0.4 bits/pixel.
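For reference, the best-known perceptual transfer function is the standard SMPTE ST 2084 (PQ) curve, which allocates code values according to perceived rather than linear luminance. The paper proposes its own novel transfer function, not PQ; the block below shows only the standard curve as a concrete example of the concept:

```python
def pq_encode(Y_nits):
    """SMPTE ST 2084 (PQ) inverse EOTF: map absolute luminance in nits
    (cd/m^2, up to 10,000) to a perceptually uniform code in [0, 1].
    Constants are the published ST 2084 values."""
    m1, m2 = 2610 / 16384, 2523 / 4096 * 128
    c1, c2, c3 = 3424 / 4096, 2413 / 4096 * 32, 2392 / 4096 * 32
    y = (max(Y_nits, 0.0) / 10000.0) ** m1
    return ((c1 + c2 * y) / (1 + c3 * y)) ** m2
```

The curve spends most of its code range on dark and mid levels, which is what makes a 10-bit encoding perceptually adequate for HDR scenes.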

Journal ArticleDOI
TL;DR: The proposed HDR method takes advantage of two effects to realize HDR 3D fringe acquisition for 3D shape measurements: the different responses of an RGB camera's three channels to a single color light source can be utilized to produce fringe images with different intensity levels.
Abstract: This paper introduces a novel real-time, high-dynamic-range (HDR) three-dimensional (3D) shape measurement method using an RGB camera. The proposed method takes advantage of two effects to realize HDR 3D fringe acquisition for 3D shape measurements: (1) the different responses of an RGB camera's three channels to a single color light source can be utilized to produce fringe images with different intensity levels; (2) the projector's dark time can be utilized to produce bright-versus-dark intensity contrast if bit-wise binary patterns are used. Experiments demonstrate the real-time capabilities of the proposed method through dynamic 3D shape measurements (with a maximum camera frame rate of 166 Hz). Given that a bit-wise defocused binary pattern projection is adopted, the proposed HDR method has the potential for high-speed applications if a high-speed color camera is available.
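A minimal sketch of effect (1), with assumed (uncalibrated) channel sensitivities: under one color light source, the R, G, B responses behave like three exposures of the same fringe, so radiance can be recovered per pixel from the most sensitive channel that is not saturated.

```python
import numpy as np

# Illustrative sketch (not the paper's calibration): assumed channel
# sensitivities to a single color light source, acting as three exposures.
GAINS = np.array([0.1, 0.4, 1.0])   # assumed R, G, B sensitivities
SAT = 255.0

def merge_hdr_fringe(rgb):
    """Recover radiance by taking, per pixel, the most sensitive
    channel that is not saturated, normalized by its gain."""
    rgb = np.asarray(rgb, dtype=float)
    out = rgb[..., 0] / GAINS[0]          # fallback: least sensitive channel
    for ch in np.argsort(GAINS):          # low gain -> high gain
        valid = rgb[..., ch] < SAT        # unsaturated pixels only
        out = np.where(valid, rgb[..., ch] / GAINS[ch], out)
    return out

# Simulate a sinusoidal fringe whose peaks saturate the G and B channels
phase = np.linspace(0, 4 * np.pi, 400)
radiance = 1000.0 * (1.0 + np.cos(phase)) / 2.0
rgb = np.clip(radiance[:, None] * GAINS, 0, SAT)
recovered = merge_hdr_fringe(rgb)
```

In this toy setup the least sensitive channel never clips, so the fringe is recovered exactly even where the other two channels saturate.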

Journal ArticleDOI
TL;DR: This paper summarizes recent work from the group and others on extending conventional fluorescence imaging to high dynamic range, which has many biological applications, such as mapping of neural connections, vascular imaging, bio-distribution studies, and pharmacologic imaging at the single-cell and organ level.
Abstract: Fluorescence acquisition and image display over a high dynamic range is highly desirable. However, the limited dynamic range of current photodetectors and imaging charge-coupled devices imposes a limit on the fluorescence intensities that can be simultaneously captured during a single image acquisition. This is particularly troublesome when imaging biological samples, where protein expression fluctuates considerably. As a result, biological images will often contain regions with signal that is either saturated or hidden within background noise, causing information loss. In this paper, we summarize recent work from our group and others to extend conventional fluorescence imaging to high dynamic range. These strategies have many biological applications, such as mapping of neural connections, vascular imaging, bio-distribution studies, and pharmacologic imaging at the single-cell and organ level.

Journal ArticleDOI
18 Sep 2019-Sensors
TL;DR: An adaptive binocular dynamic fringe projection method is proposed that avoids image saturation by adaptively adjusting the projection intensity and achieves higher accuracy for high dynamic range measurement.
Abstract: Three-dimensional measurement with fringe projection sensors has been widely studied. However, the measurement accuracy and efficiency of most fringe projection sensors are still seriously affected by image saturation and the non-linear effects of the projector. To address this challenge, an adaptive binocular dynamic fringe projection method is proposed that combines the advantages of stereo vision and fringe projection technology. The proposed method avoids image saturation by adaptively adjusting the projection intensity. First, the flowchart of the proposed method is explained. Then, an adaptive optimal projection intensity method based on multi-threshold segmentation is introduced to adjust the projection illumination. Finally, the mapping between binocular saturation points and projector points is established via binocular transformation and left camera-projector mapping. Experiments demonstrate that the proposed method achieves higher accuracy for high dynamic range measurement.
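The multi-threshold idea can be sketched as follows; the thresholds, target gray level, and per-region scaling rule are illustrative assumptions rather than the paper's actual procedure.

```python
import numpy as np

# Illustrative parameters, not the paper's: segment the captured image
# by intensity thresholds, then scale the projector intensity per
# region so that no region saturates.
TARGET = 200.0          # desired gray level, safely below saturation
SAT = 255.0

def adaptive_projection_intensity(captured, proj_intensity,
                                  thresholds=(64, 128, 192, 250)):
    """Return a per-pixel projection intensity map: for each threshold
    band, scale the current projector intensity toward TARGET."""
    captured = np.asarray(captured, dtype=float)
    bands = np.digitize(captured, thresholds)     # region label per pixel
    scale = np.ones_like(captured)
    for b in range(len(thresholds) + 1):
        mask = bands == b
        if mask.any():
            mean_gray = captured[mask].mean()
            scale[mask] = TARGET / max(mean_gray, 1.0)
    return np.clip(proj_intensity * scale, 0, SAT)
```

For example, a saturated region (captured at 255) gets its projection dimmed toward 200/255 of the current intensity, while a dark region is boosted, subject to the projector's own ceiling.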

Journal ArticleDOI
TL;DR: In this paper, a series arrays of closely spaced, planar long Josephson junctions for magnetic field transduction in Earth's field, with a linear response and high dynamic range, was investigated.
Abstract: We investigated series arrays of closely spaced, planar long Josephson junctions for magnetic field transduction in Earth's field, with a linear response and high dynamic range. The devices were fabricated from thin films of the high-temperature superconductor YBa2Cu3O7−δ (YBCO), using focused helium ion beam irradiation to create the Josephson barriers. Four series arrays, each consisting of several hundred long junctions, were fabricated and electrically tested. From fits of the current-voltage characteristics, we estimate the standard deviation in critical current to be around 25%. Voltage-magnetic field measurements exhibit a transfer function of 42 mV/mT and a linear response over a range of 303 μT at 71 K, resulting in a dynamic range of 124 dB.
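Assuming dynamic range is defined as 20·log10(linear range / minimum detectable field), the reported figures imply a field noise floor of roughly 0.2 nT:

```python
import math

linear_range_T = 303e-6          # reported linear response range, 303 uT
dynamic_range_db = 124.0         # reported dynamic range

# Assumed definition: DR = 20*log10(linear range / minimum detectable
# field). The implied field noise floor is then:
noise_floor_T = linear_range_T / 10 ** (dynamic_range_db / 20)

# Corresponding voltage noise, given the 42 mV/mT transfer function:
transfer_V_per_T = 42e-3 / 1e-3              # 42 V/T
noise_voltage_V = transfer_V_per_T * noise_floor_T
```

Under that assumption the arrays resolve fields down to about 0.19 nT, i.e., a voltage noise of roughly 8 nV, within a 303 μT linear window, which is what makes them candidates for unshielded operation in Earth's field.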

Journal ArticleDOI
Xuanyu He1, Wei Zhang1, Haifeng Zhang1, Lin Ma2, Yibin Li1 
TL;DR: This work proposes to embed data into HDR images by exploiting the edge information among the luminance channel and color channels to achieve accurate prediction and high embedding capacity.
Abstract: In this work, a reversible data hiding (RDH) algorithm is proposed for high dynamic range (HDR) images containing an additional luminance channel. Since prediction accuracy is key to RDH, we propose to embed data into HDR images by exploiting the edge information across the luminance and color channels to achieve accurate prediction and high embedding capacity. In addition, a new edge-directed embedding order is presented to reduce the visual loss of the stego image. Various experimental results demonstrate that the proposed algorithm can hide more data in HDR images with less distortion than directly applying traditional RDH methods designed for low dynamic range (LDR) images. Compared to current HDR hiding algorithms, the proposed method is not only reversible but also achieves a trade-off between embedding capacity and distortion.
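The paper's edge-based predictor is not detailed in the abstract; the classic difference-expansion scheme of Tian below illustrates only the reversibility requirement that any RDH method, including this one, must satisfy: both the payload bit and the original pixel values must be exactly recoverable.

```python
def de_embed(x, y, bit):
    """Tian-style difference expansion: hide one bit in a pixel pair.

    Illustrates reversibility only; the paper uses an edge-based
    predictor on the luminance channel instead. Real embedding also
    checks that the expanded pair stays within the valid pixel range.
    """
    l = (x + y) // 2                 # pair average (kept invariant)
    h = x - y                        # difference carries the payload
    h2 = 2 * h + bit                 # expanded difference
    return l + (h2 + 1) // 2, l - h2 // 2

def de_extract(x2, y2):
    """Recover the hidden bit and the original pixel pair exactly."""
    l = (x2 + y2) // 2
    h2 = x2 - y2
    bit, h = h2 & 1, h2 >> 1         # LSB is the payload
    return bit, (l + (h + 1) // 2, l - h // 2)

# Round trip: embed a 1 into the pair (100, 98), then recover it
stego = de_embed(100, 98, 1)
payload, original = de_extract(*stego)
```

The floor divisions are arranged so the average `l` survives the expansion unchanged, which is what makes the decoder's inversion exact.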

Journal ArticleDOI
TL;DR: In this paper, the authors proposed a ghost imaging scheme that exploits Haar wavelets as illuminating patterns with a bi-frequency light projecting system and frequency-selecting single-pixel detectors.
Abstract: Recently, ghost imaging has been attracting attention because its mechanism could enable applications inaccessible to conventional imaging methods. However, high-contrast, high-resolution ghost imaging remains challenging due to its low signal-to-noise ratio (SNR) and the high sampling rates required in detection. To circumvent these challenges, we propose a ghost imaging scheme that exploits Haar wavelets as illuminating patterns with a bi-frequency light projecting system and frequency-selecting single-pixel detectors. This method provides a theoretically 100% image contrast and high detection SNR, which reduces the required dynamic range of the detectors, enabling high-resolution ghost imaging. Moreover, it can greatly reduce the sampling rate (far below the Nyquist limit) for a sparse object by adaptively abandoning unnecessary patterns during the measurement. These characteristics are experimentally verified at a resolution of 512 × 512 with a sampling rate lower than 5%. A high-resolution (1000 × 1000 × 1000) 3D reconstruction of an object is also achieved from multi-angle images.
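A toy 1D sketch of the measurement model: each Haar pattern is split into its non-negative positive and negative parts (standing in here for the paper's two projection frequencies), and subtracting the two detector readings yields the Haar coefficient directly, so the object is recovered by the inverse orthonormal transform.

```python
import numpy as np

def haar_basis(n):
    """Orthonormal 1D Haar basis for n a power of two."""
    basis = [np.ones(n) / np.sqrt(n)]           # DC pattern
    k = 1
    while k < n:
        for i in range(k):                      # k wavelets at this scale
            h = np.zeros(n)
            step = n // k
            h[i * step : i * step + step // 2] = np.sqrt(k / n)
            h[i * step + step // 2 : (i + 1) * step] = -np.sqrt(k / n)
            basis.append(h)
        k *= 2
    return np.array(basis)

rng = np.random.default_rng(0)
scene = rng.uniform(0, 1, 8)                    # toy 1D "object"
patterns = haar_basis(8)

# Single-pixel measurement: project the positive and negative parts of
# each pattern separately, subtract the two bucket readings.
coeffs = np.array([(np.maximum(h, 0) @ scene) - (np.maximum(-h, 0) @ scene)
                   for h in patterns])
reconstruction = patterns.T @ coeffs            # inverse orthonormal transform
```

Because the differential reading subtracts the common background, the coefficient signal rides on a near-zero baseline, which is why the scheme relaxes the detector dynamic-range requirement; dropping small-coefficient patterns for a sparse object gives the sub-Nyquist sampling mentioned above.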

Journal ArticleDOI
Guo Chen1, Li Li1, Weiqi Jin1, Jin Zhu1, Feng Shi 
TL;DR: This study developed a dual-channel camera (DCC) to achieve HDR imaging, which can eliminate image motion blur and registration problems, and proposes a weighted sparse representation multi-scale transform fusion algorithm, which fully preserves the original image information.
Abstract: Most imaging devices lose image information during acquisition due to their low dynamic range (LDR). Existing high dynamic range (HDR) imaging techniques trade off temporal or spatial resolution, resulting in potential motion blur or image misalignment. Current HDR methods based on the fusion of multi-frame LDR images can suffer from blurring of fine details, image aliasing, and image boundary effects. This study developed a dual-channel camera (DCC) to achieve HDR imaging that eliminates motion blur and registration problems. Considering the output characteristics of the camera, we propose a weighted sparse representation multi-scale transform fusion algorithm, which fully preserves the original image information while eliminating aliasing and boundary problems in the fused image, resulting in high-quality HDR imaging.
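This is not the paper's weighted-sparse-representation fusion; the sketch below shows only a simpler well-exposedness-weighted blend (Mertens-style, with an assumed Gaussian weight) to illustrate how two aligned channel outputs can combine without registration:

```python
import numpy as np

SIGMA = 0.2   # assumed width of the "well exposed" Gaussian weight

def fuse_dual_channel(low_exp, high_exp, sigma=SIGMA):
    """Blend two aligned frames of the same scene (values in [0, 1]),
    weighting each pixel by how close it sits to mid-gray."""
    stack = np.stack([np.asarray(low_exp, float),
                      np.asarray(high_exp, float)])
    w = np.exp(-((stack - 0.5) ** 2) / (2 * sigma ** 2)) + 1e-6
    return (w * stack).sum(axis=0) / w.sum(axis=0)
```

Because both channels see the same optical path simultaneously, the two inputs are pixel-aligned by construction, so no optical-flow registration step (and none of its ghosting) is needed before the blend.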

Journal ArticleDOI
TL;DR: In this article, an externally time-gated scheme is proposed to improve the dynamic range of a photon counting optical time-domain reflectometry (PC-OTDR) system; it is realized by using a high-speed optical switch, i.e., a Mach-Zehnder interferometer, to modulate the back-propagating optical signal.
Abstract: A single photon detector (SPD) has a maximum count rate due to its dead time, which causes the dynamic range of photon counting optical time-domain reflectometry (PC-OTDR) to decrease with the length of the monitored fiber. To further improve the dynamic range of PC-OTDR, we propose and demonstrate an externally time-gated scheme. The scheme is realized by using a high-speed optical switch, i.e., a Mach-Zehnder interferometer, to modulate the back-propagating optical signal so that only a certain segment of the fiber under test is monitored by the SPD. The feasibility of the proposed scheme is first examined with theoretical analysis and simulation; we then demonstrate it experimentally with our PC-OTDR testbed operating at an 850 nm wavelength. In our studies, a dynamic range of 30.0 dB is achieved in a 70-meter-long PC-OTDR system with 50 ns external gates, corresponding to an improvement of 11.0 dB in dynamic range compared with no gating. Furthermore, with the improved dynamic range, a 0.37 dB loss event is successfully identified with 30 s of accumulation, which was not possible without gating. Our scheme paves an avenue for developing PC-OTDR systems with high dynamic range.
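The dead-time bottleneck can be quantified with the standard non-paralyzable detector model; the 50 ns value below matches the external gate width above but is otherwise an illustrative dead time, not a measured one.

```python
# Illustrative numbers (not from the paper): a non-paralyzable single
# photon detector with dead time tau saturates at 1/tau counts/s,
# which is what limits the PC-OTDR dynamic range.
TAU = 50e-9                                    # assumed dead time, 50 ns

def measured_rate(true_rate, tau=TAU):
    """Registered count rate of a non-paralyzable detector."""
    return true_rate / (1.0 + true_rate * tau)

max_rate = 1.0 / TAU                           # hard ceiling: 2e7 counts/s

# Near saturation, a tenfold increase in optical power barely changes
# the registered rate -- the strong near-end backscatter is compressed:
r_low, r_high = measured_rate(2e7), measured_rate(2e8)
```

This compression of the strong near-end signal is exactly what external gating sidesteps: by admitting photons from only one fiber segment at a time, the SPD always operates well below its count-rate ceiling.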