
Showing papers on "High dynamic range published in 2021"


Journal ArticleDOI
TL;DR: This work proposes a novel recurrent network to reconstruct videos from a stream of events, and trains it on a large amount of simulated event data, and shows that off-the-shelf computer vision algorithms can be applied to the reconstructions and that this strategy consistently outperforms algorithms that were specifically designed for event data.
Abstract: Event cameras are novel sensors that report brightness changes in the form of a stream of asynchronous “events” instead of intensity frames. They offer significant advantages with respect to conventional cameras: high temporal resolution, high dynamic range, and no motion blur. While the stream of events encodes in principle the complete visual signal, the reconstruction of an intensity image from a stream of events is an ill-posed problem in practice. Existing reconstruction approaches are based on hand-crafted priors and strong assumptions about the imaging process as well as the statistics of natural images. In this work we propose to learn to reconstruct intensity images from event streams directly from data instead of relying on any hand-crafted priors. We propose a novel recurrent network to reconstruct videos from a stream of events, and train it on a large amount of simulated event data. During training we propose to use a perceptual loss to encourage reconstructions to follow natural image statistics. We further extend our approach to synthesize color images from color event streams. Our quantitative experiments show that our network surpasses state-of-the-art reconstruction methods by a large margin in terms of image quality (>20%), while comfortably running in real-time. We show that the network is able to synthesize high framerate videos (>5,000 frames per second) of high-speed phenomena (e.g., a bullet hitting an object) and is able to provide high dynamic range reconstructions in challenging lighting conditions. As an additional contribution, we demonstrate the effectiveness of our reconstructions as an intermediate representation for event data. We show that off-the-shelf computer vision algorithms can be applied to our reconstructions for tasks such as object classification and visual-inertial odometry and that this strategy consistently outperforms algorithms that were specifically designed for event data. We release the reconstruction code, a pre-trained model and the datasets to enable further research.
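
A sketch of a common preprocessing step for this kind of learning-based reconstruction: binning the asynchronous event stream into a spatio-temporal voxel grid with bilinear weighting along the time axis before feeding it to a recurrent network. The (x, y, t, polarity) event layout and bin count are assumptions for illustration, not necessarily the paper's exact representation.

```python
import numpy as np

def events_to_voxel_grid(events, num_bins, height, width):
    """Bin events (columns: x, y, t, polarity) into a (num_bins, H, W) grid,
    splitting each event between its two nearest temporal bins."""
    grid = np.zeros((num_bins, height, width), dtype=np.float32)
    x = events[:, 0].astype(int)
    y = events[:, 1].astype(int)
    t = events[:, 2]
    p = events[:, 3]
    # Normalize timestamps to the bin axis [0, num_bins - 1]
    t_norm = (num_bins - 1) * (t - t.min()) / max(t.max() - t.min(), 1e-9)
    left = np.floor(t_norm).astype(int)
    right = np.clip(left + 1, 0, num_bins - 1)
    w_right = t_norm - left
    np.add.at(grid, (left, y, x), p * (1.0 - w_right))
    np.add.at(grid, (right, y, x), p * w_right)
    return grid

# Example: 1000 synthetic events on a 180x240 sensor, 5 temporal bins
rng = np.random.default_rng(0)
ev = np.column_stack([rng.integers(0, 240, 1000), rng.integers(0, 180, 1000),
                      np.sort(rng.random(1000)), rng.choice([-1.0, 1.0], 1000)])
voxels = events_to_voxel_grid(ev, num_bins=5, height=180, width=240)
```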

164 citations


Journal ArticleDOI
TL;DR: In this paper, the authors present a new calibration and imaging pipeline that aims at producing high fidelity, high dynamic range images with LOFAR High Band Antenna data, while being computationally efficient and robust against the absorption of unmodeled radio emission.
Abstract: The Low Frequency Array (LOFAR) is an ideal instrument to conduct deep extragalactic surveys. It has a large field of view and is sensitive to large-scale and compact emission. It is, however, very challenging to synthesize thermal noise limited maps at full resolution, mainly because of the complexity of the low-frequency sky and the direction dependent effects (phased array beams and ionosphere). In this first paper of a series, we present a new calibration and imaging pipeline that aims at producing high fidelity, high dynamic range images with LOFAR High Band Antenna data, while being computationally efficient and robust against the absorption of unmodeled radio emission. We apply this calibration and imaging strategy to synthesize deep images of the Boötes and Lockman Hole fields at ~150 MHz, totaling ~80 and ~100 h of integration, respectively, and reaching unprecedented noise levels at these low frequencies of ≲30 and ≲23 μJy beam⁻¹ in the inner ~3 deg². This approach is also being used to reduce the LoTSS-wide data for the second data release.
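
For intuition on why ~100 h of integration reaches tens of μJy beam⁻¹, the sketch below evaluates the standard interferometer radiometer equation; the SEFD, station count, and bandwidth are illustrative assumptions (and calibration losses are ignored), not the paper's observing parameters.

```python
import math

def image_noise_jy(sefd_jy, n_stations, bandwidth_hz, t_int_s, n_pol=2):
    """Thermal (point-source) image noise in Jy/beam for a homogeneous array:
    sigma = SEFD / sqrt(n_pol * N * (N - 1) * bandwidth * integration_time)."""
    return sefd_jy / math.sqrt(n_pol * n_stations * (n_stations - 1)
                               * bandwidth_hz * t_int_s)

# Assumed illustrative values: 3 kJy SEFD, 60 stations, 48 MHz, 100 h
print(image_noise_jy(3000, 60, 48e6, 100 * 3600) * 1e6, "uJy/beam")
```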

126 citations


Proceedings ArticleDOI
01 Jun 2021
TL;DR: In this paper, an attention-guided deformable convolutional network is proposed for multi-frame high dynamic range (HDR) imaging, which adopts a spatial attention module to adaptively select the most appropriate regions of various expo-sure LDR images for fusion.
Abstract: In this paper, we present an attention-guided deformable convolutional network for hand-held multi-frame high dynamic range (HDR) imaging, namely ADNet. This problem comprises two intractable challenges of how to handle saturation and noise properly and how to tackle misalignments caused by object motion or camera jittering. To address the former, we adopt a spatial attention module to adaptively select the most appropriate regions of various exposure low dynamic range (LDR) images for fusion. For the latter one, we propose to align the gamma-corrected images in the feature-level with a Pyramid, Cascading and Deformable (PCD) alignment module. The proposed ADNet shows state-of-the-art performance compared with previous methods, achieving a PSNR-l of 39.4471 and a PSNR-μ of 37.6359 in the NTIRE 2021 Multi-Frame HDR Challenge.
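
A minimal PyTorch sketch of the spatial-attention idea the abstract describes: an attention map predicted from a non-reference feature map together with the reference feature map gates the non-reference features before fusion. Layer sizes are assumptions; ADNet's actual module (and its PCD alignment) is more elaborate.

```python
import torch
import torch.nn as nn

class SpatialAttention(nn.Module):
    """Predict a per-pixel gate for non-reference features from their
    concatenation with the reference-exposure features."""
    def __init__(self, channels):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2 * channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.Sigmoid(),
        )

    def forward(self, feat_nonref, feat_ref):
        attn = self.net(torch.cat([feat_nonref, feat_ref], dim=1))
        return feat_nonref * attn  # suppress saturated/misaligned regions

# Gate a short-exposure feature map against the reference exposure
att = SpatialAttention(channels=64)
gated = att(torch.randn(1, 64, 128, 128), torch.randn(1, 64, 128, 128))
```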

57 citations


Journal ArticleDOI
TL;DR: The results show that the HDR panorama images produced by the proposed method are more realistic and appear as if they depict a real panoramic environment.

Abstract: This paper presents a methodology for enhancing panorama images by computing a high dynamic range. A panorama is constructed by merging several photographs captured by traditional cameras at different exposure times. Traditional cameras usually have a much lower dynamic range than the real panoramic environment, so images captured with them contain regions that are too bright or too dark. More detail is visible in bright regions at a lower exposure time, and more detail is visible in dark regions at a higher exposure time. Since detail in both bright and dark regions cannot be preserved in images created with traditional cameras, the proposed system has to compute an HDR image from the images a traditional camera can actually produce. The proposed system first builds an LDR panorama image from multiple LDR images using SIFT feature matching and then converts this LDR panorama into an HDR panorama using inverted local patterns. The results show that the HDR panorama images produced by the proposed method are more realistic and appear as if they depict a real panoramic environment.
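
A hedged sketch of the multi-exposure merge step using OpenCV's Debevec response calibration; the paper's own pipeline instead builds the panorama with SIFT features and converts it to HDR with inverted local patterns, and the file names and exposure times below are hypothetical.

```python
import cv2
import numpy as np

# Bracketed LDR captures of the same view and their exposure times (hypothetical)
filenames = ["pano_short.jpg", "pano_mid.jpg", "pano_long.jpg"]
images = [cv2.imread(f) for f in filenames]
times = np.array([1 / 500, 1 / 60, 1 / 8], dtype=np.float32)

# Recover the camera response curve, then merge into a radiance (HDR) map
response = cv2.createCalibrateDebevec().process(images, times)
hdr = cv2.createMergeDebevec().process(images, times, response)
cv2.imwrite("pano_hdr.hdr", hdr)
```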

55 citations


Proceedings ArticleDOI
Xiangyu Chen, Yihao Liu, Zhengwen Zhang, Yu Qiao, Chao Dong
01 Jun 2021
TL;DR: This work proposes a novel learning-based approach using a spatially dynamic encoder-decoder network, HDRUNet, to learn an end-to-end mapping for single image HDR reconstruction with denoising and dequantization, which achieves state-of-the-art performance in quantitative comparisons and visual quality.
Abstract: Most consumer-grade digital cameras can only capture a limited range of luminance in real-world scenes due to sensor constraints. Besides, noise and quantization errors are often introduced in the imaging process. In order to obtain high dynamic range (HDR) images with excellent visual quality, the most common solution is to combine multiple images with different exposures. However, it is not always feasible to obtain multiple images of the same scene, and most HDR reconstruction methods ignore the noise and quantization loss. In this work, we propose a novel learning-based approach using a spatially dynamic encoder-decoder network, HDRUNet, to learn an end-to-end mapping for single image HDR reconstruction with denoising and dequantization. The network consists of a UNet-style base network to make full use of the hierarchical multi-scale information, a condition network to perform pattern-specific modulation and a weighting network for selectively retaining information. Moreover, we propose a Tanh_L1 loss function to balance the impact of over-exposed values and well-exposed values on the network learning. Our method achieves state-of-the-art performance in quantitative comparisons and visual quality. The proposed HDRUNet model won the second place in the single frame track of the NTIRE 2021 High Dynamic Range Challenge. The code is available at https://github.com/chxy95/HDRUNet.
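
One plausible reading of the Tanh_L1 loss named in the abstract, sketched below: an L1 distance computed on tanh-compressed values, so that errors on over-exposed (large) values do not dominate errors on well-exposed values. The exact formulation is the one defined in the paper.

```python
import torch

def tanh_l1_loss(pred, target):
    """L1 on tanh-compressed values: large (over-exposed) intensities are
    squashed toward 1, balancing their gradient against well-exposed pixels."""
    return torch.mean(torch.abs(torch.tanh(pred) - torch.tanh(target)))

loss = tanh_l1_loss(torch.rand(1, 3, 64, 64) * 4, torch.rand(1, 3, 64, 64) * 4)
```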

50 citations


Journal ArticleDOI
11 Jan 2021
TL;DR: In this article, the authors introduce the system-level requirements and design challenges on mm-wave power amplifiers due to high dynamic range signals and then introduce recent advances in mm-Wave PA technologies and innovations with several design examples.
Abstract: The next-generation 5G and beyond-5G wireless systems have stimulated a substantial growth in research, development, and deployment of mm-Wave electronic systems and antenna arrays at various scales. It is also envisioned that large dynamic range modulation signals with high spectral efficiency will be ubiquitously employed in future communication and sensing systems. As the interface between the antennas and transceiver electronics, power amplifiers (PAs) typically govern the output power, energy efficiency, and reliability of the entire wireless system. However, the wide use of high dynamic range signals at mm-Wave carrier frequencies substantially complicates the design of PAs and demands an ultimate balance of energy efficiency and linearity as well as other PA performance metrics. In this review paper, we will first introduce the system-level requirements and design challenges on mm-Wave PAs due to high dynamic range signals. We will review advanced active load modulation architectures for mm-Wave PAs and power devices. We will then introduce recent advances in mm-Wave PA technologies and innovations with several design examples. Special design considerations on mm-Wave PAs for phased array MIMOs and high mm-Wave frequencies will be outlined. We will also share our vision on future technology trends and innovation opportunities.
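
The "high dynamic range signals" in question are high-PAPR modulations. The toy computation below shows how an OFDM-like waveform with random subcarriers ends up with a peak-to-average power ratio around 10 dB, which is what forces a PA to back off from its most efficient operating point.

```python
import numpy as np

def papr_db(x):
    """Peak-to-average power ratio of a complex baseband signal, in dB."""
    power = np.abs(x) ** 2
    return 10 * np.log10(power.max() / power.mean())

# OFDM-like signal: 2048 random QPSK subcarriers combined by an inverse FFT
rng = np.random.default_rng(1)
symbols = (rng.choice([-1, 1], 2048) + 1j * rng.choice([-1, 1], 2048)) / np.sqrt(2)
print(f"PAPR = {papr_db(np.fft.ifft(symbols)):.1f} dB")
```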

40 citations


Journal ArticleDOI
03 Apr 2021-Leukos
TL;DR: This tutorial presents a comprehensive overview of a step-by-step procedure to generate a 180° luminance map of a daylit scene from a sequence of multiple exposures with semiprofessional equipment and the Radiance suite of programs.
Abstract: In the field of lighting, luminance maps are often used to evaluate point-in-time lighting scenes from the occupant’s vantage point. High Dynamic Range (HDR) photography can be used to generate suc...
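
A sketch of the per-pixel luminance computation behind such luminance maps, using the Radiance convention (179 lm/W luminous efficacy and Radiance's RGB weights); the random array stands in for a fused HDR capture, and the code is not taken from the tutorial itself.

```python
import numpy as np

def luminance_cd_m2(hdr_rgb):
    """Per-pixel luminance (cd/m^2) from a Radiance-convention HDR image."""
    r, g, b = hdr_rgb[..., 0], hdr_rgb[..., 1], hdr_rgb[..., 2]
    return 179.0 * (0.265 * r + 0.670 * g + 0.065 * b)

hdr = np.random.rand(512, 512, 3).astype(np.float32)  # stand-in for a fused HDR capture
lum_map = luminance_cd_m2(hdr)
```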

39 citations


Journal ArticleDOI
TL;DR: An unsupervised learning-based approach is presented for fusing bracketed exposures into high-quality images; it avoids the need for interim conversion to intermediate high dynamic range (HDR) images and maintains the order of variations in the original image brightness.
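
For context, the classical way to fuse brackets without an intermediate HDR image is Mertens-style exposure fusion, sketched below with OpenCV; the paper's unsupervised network replaces such hand-crafted weighting, and the file names are hypothetical.

```python
import cv2

# Bracketed LDR inputs (hypothetical files), fused directly to a display image
images = [cv2.imread(f) for f in ["ev_minus2.jpg", "ev_0.jpg", "ev_plus2.jpg"]]
fused = cv2.createMergeMertens().process(images)  # float32, roughly in [0, 1]
cv2.imwrite("fused.png", (fused.clip(0, 1) * 255).astype("uint8"))
```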

23 citations


Proceedings ArticleDOI
01 Jun 2021
TL;DR: The authors use a network to estimate the camera response function (CRF) from the input image to linearise it, then decompose the linearised image into low-frequency and high-frequency feature maps that are processed separately through two networks for light-effects suppression and noise removal, respectively.
Abstract: Most existing nighttime visibility enhancement methods focus on low light. Night images, however, do not only suffer from low light, but also from man-made light effects such as glow, glare, floodlight, etc. Hence, when the existing nighttime visibility enhancement methods are applied to these images, they intensify the effects, degrading the visibility even further. High dynamic range (HDR) imaging methods can address the low light and over-exposed regions, however they cannot remove the light effects, and thus cannot enhance the visibility in the affected regions. In this paper, given a single nighttime image as input, our goal is to enhance its visibility by increasing the dynamic range of the intensity, boosting the intensity of the low-light regions while simultaneously suppressing the light effects (glow, glare). First, we use a network to estimate the camera response function (CRF) from the input image to linearise the image. Second, we decompose the linearised image into low-frequency (LF) and high-frequency (HF) feature maps that are processed separately through two networks for light effects suppression and noise removal respectively. Third, we use a network to increase the dynamic range of the processed LF feature maps, which are then combined with the processed HF feature maps to generate the final output that has increased dynamic range and suppressed light effects. Our experiments show the effectiveness of our method in comparison with the state-of-the-art nighttime visibility enhancement methods.
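
A minimal stand-in for the decomposition step: the paper learns the split, but a Gaussian low-pass illustrates the idea of separating a low-frequency layer (smooth illumination, glow) from a high-frequency residual (edges, texture, noise) so the two can be processed independently. The sigma value is an assumed parameter.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def decompose(image, sigma=15.0):
    """Split an image into a low-frequency base and a high-frequency residual."""
    low = gaussian_filter(image, sigma=sigma)
    return low, image - low

img = np.random.rand(480, 640).astype(np.float32)  # stand-in for a linearised night image
lf, hf = decompose(img)
assert np.allclose(lf + hf, img, atol=1e-5)  # lossless split by construction
```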

21 citations


Proceedings ArticleDOI
01 Jun 2021
TL;DR: In this article, a neural net-work for exposure selection was proposed for automotive object detection, which is trained jointly with an object detector and an image signal processing (ISP) pipeline.
Abstract: Real-world scenes have a dynamic range of up to 280 dB that todays imaging sensors cannot directly capture. Existing live vision pipelines tackle this fundamental challenge by relying on high dynamic range (HDR) sensors that try to recover HDR images from multiple captures with different exposures. While HDR sensors substantially increase the dynamic range, they are not without disadvantages, including severe artifacts for dynamic scenes, reduced fill-factor, lower resolution, and high sensor cost. At the same time, traditional auto-exposure methods for low-dynamic range sensors have advanced as proprietary methods relying on image statistics separated from downstream vision algorithms. In this work, we revisit auto-exposure control as an alternative to HDR sensors. We propose a neural net-work for exposure selection that is trained jointly, end-to-end with an object detector and an image signal processing (ISP) pipeline. To this end, we use an HDR dataset for automotive object detection and an HDR training procedure. We validate that the proposed neural auto-exposure control, which is tailored to object detection, outperforms conventional auto-exposure methods by more than 6 points in mean average precision (mAP).
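
For reference, a classical image-statistics auto-exposure rule of the kind the learned controller replaces: nudge the exposure so the frame mean approaches a mid-grey target. All constants are illustrative assumptions.

```python
def next_exposure(exposure_s, frame_mean, target_mean=0.18, gain=0.5,
                  lo=1e-5, hi=1e-1):
    """Damped multiplicative update pulling the frame mean toward mid-grey."""
    ratio = target_mean / max(frame_mean, 1e-6)
    return min(max(exposure_s * ratio ** gain, lo), hi)

print(next_exposure(1 / 120, frame_mean=0.05))  # under-exposed -> longer exposure
```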

21 citations


Journal ArticleDOI
TL;DR: A novel, simple yet effective method is proposed for static image exposure fusion based on weight map extraction via linear embeddings and watershed masking; the main advantage lies in the watershed-masking-based adjustment for obtaining accurate weights for image fusion.

Journal ArticleDOI
TL;DR: A Fourier transform profilometry (FTP)-based method that can increase the dynamic range for real-time 3D measurement is proposed, and experiments demonstrate that it can measure dynamic objects with shiny surfaces in a real-time 3D measurement system.
Abstract: The high dynamic range (HDR) fringe projection technique is widely used to measure the three-dimensional (3D) shapes of objects with shiny surfaces; however, it tends to be compromised when measuring dynamic objects. To reduce the number of images and improve the real-time performance, a Fourier transform profilometry (FTP)-based method that can increase the dynamic range for real-time 3D measurement is proposed. First, single-color patterns are projected onto the measured object, and one captured color image can be separated into three monochrome images with different intensities by taking advantage of the different responses of the R, G, and B channels. Second, the HDR fringe pattern is generated by compositing the three monochrome images, and then processed by the background-normalized algorithm to obtain the final fringe pattern. Subsequently, FTP is employed to retrieve the phase map and obtain the 3D shape. The experimental results demonstrate that the proposed method can measure dynamic objects with shiny surfaces in a real-time 3D measurement system.
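
A compact sketch of the core FTP step on a single fringe image: isolate the fundamental carrier lobe in the 2D spectrum, shift it to DC, and take the angle of the inverse transform as the wrapped phase. The carrier frequency and window width are assumptions; the paper additionally composites an HDR fringe from the R, G, B channel responses before this step.

```python
import numpy as np

def ftp_phase(fringe, carrier_frac, bw_frac=0.05):
    """Fourier transform profilometry: band-pass the +1 carrier lobe,
    demodulate it to DC, and return the wrapped phase map."""
    h, w = fringe.shape
    spec = np.fft.fftshift(np.fft.fft2(fringe))
    fx = int(round(w * carrier_frac))           # carrier offset in pixels
    bw = max(int(w * bw_frac), 1)               # half-width of band-pass window
    window = np.zeros_like(spec)
    window[:, w // 2 + fx - bw: w // 2 + fx + bw + 1] = 1.0
    lobe = np.roll(spec * window, -fx, axis=1)  # shift carrier lobe to DC
    return np.angle(np.fft.ifft2(np.fft.ifftshift(lobe)))

# Synthetic fringe with a smooth phase bump
y, x = np.mgrid[0:256, 0:256]
phi = 2 * np.exp(-((x - 128) ** 2 + (y - 128) ** 2) / 4000.0)
f0 = 25 / 256                                   # carrier: 25 cycles per image
fringe = 0.5 + 0.4 * np.cos(2 * np.pi * f0 * x + phi)
wrapped = ftp_phase(fringe, carrier_frac=f0)
```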

Journal ArticleDOI
TL;DR: In this study, a novel image contrast enhancement method, called low dynamic range histogram equalization (LDR-HE), is proposed based on the Quantized Discrete Haar Wavelet Transform (HWT), which provides a scalable and controlled dynamic range reduction in the histograms when the inverse operation is done in the reconstruction phase in order to regulate the excessive contrast enhancement rate.
Abstract: Conventional contrast enhancement methods stretch histogram bins to provide a uniform distribution. However, they also stretch the existing natural noise, which causes abnormal distributions and annoying artifacts. Histogram equalization should mostly be performed in the low dynamic range (LDR), since noise is generally distributed in the high dynamic range (HDR). In this study, a novel image contrast enhancement method, called low dynamic range histogram equalization (LDR-HE), is proposed based on the Quantized Discrete Haar Wavelet Transform (HWT). In the frequency domain, LDR-HE performs a de-boosting operation on the high-pass channel by compressing the high frequencies of the probability mass function toward zero. For this purpose, amplitudes greater than the absolute mean in the high-pass band are divided by a damping parameter alpha. This damping parameter, which regulates the global contrast of the processed image, is the coefficient of variation of the high frequencies, i.e., the standard deviation divided by the mean. This fundamental procedure of LDR-HE provides a scalable and controlled dynamic range reduction in the histograms, regulating the excessive contrast enhancement rate when the inverse operation is performed in the reconstruction phase. In the experimental studies, LDR-HE is compared with the 14 most popular local, global, adaptive, and brightness-preserving histogram equalization methods. Experimental studies qualitatively and quantitatively show promising and encouraging results in terms of different quality measurement metrics such as mean squared error (MSE), peak signal-to-noise ratio (PSNR), Contrast Improvement Index (CII), Universal Image Quality Index (UIQ), Quality-aware Relative Contrast Measure (QRCM), and Absolute Mean Brightness Error (AMBE). These results are not only assessed through qualitative visual observations but are also benchmarked with state-of-the-art quantitative performance metrics.
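
A hedged sketch of the de-boosting operation as the abstract describes it: after a Haar DWT, high-pass coefficients whose magnitude exceeds the band's mean absolute value are divided by the coefficient of variation (std/mean). Quantization and the rest of the LDR-HE pipeline are omitted, so this is one reading of the description rather than a reference implementation.

```python
import numpy as np
import pywt

def damp_highpass(band):
    """Divide above-mean-magnitude coefficients by alpha = std / mean."""
    mag = np.abs(band)
    mean = mag.mean()
    alpha = mag.std() / max(mean, 1e-9)   # coefficient of variation
    out = band.copy()
    out[mag > mean] /= max(alpha, 1.0)    # assumed guard: damp, never amplify
    return out

img = np.random.rand(256, 256).astype(np.float32)      # stand-in input image
cA, (cH, cV, cD) = pywt.dwt2(img, "haar")
rec = pywt.idwt2((cA, tuple(damp_highpass(b) for b in (cH, cV, cD))), "haar")
# Histogram equalization would then operate on the reduced-range 'rec'
```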

Journal ArticleDOI
TL;DR: A stacked convolutional neural network (SCNN) is proposed that predicts high dynamic range (HDR) 360° RMs with varying roughness from a limited field of view, low dynamic range photograph and provides high-fidelity rendering of virtual objects to match with the background photograph.
Abstract: Corresponding lighting and reflectance between real and virtual objects is important for spatial presence in augmented and mixed reality (AR and MR) applications. We present a method to reconstruct real-world environmental lighting, encoded as a reflection map (RM), from a conventional photograph. To achieve this, we propose a stacked convolutional neural network (SCNN) that predicts high dynamic range (HDR) 360° RMs with varying roughness from a limited field of view, low dynamic range photograph. The SCNN is progressively trained from high to low roughness to predict RMs at varying roughness levels, where each roughness level corresponds to a virtual object's roughness (from diffuse to glossy) for rendering. The predicted RM provides high-fidelity rendering of virtual objects to match with the background photograph. We illustrate the use of our method with indoor and outdoor scenes trained on separate indoor/outdoor SCNNs showing plausible rendering and composition of virtual objects in AR/MR. We show that our method has improved quality over previous methods with a comparative user study and error metrics.

Journal ArticleDOI
TL;DR: In this article, a low-stiffness silver-aluminum silicone elastomer conductive material is used for the tag, providing a very high tensile dynamic range; an elongation range as high as 20% and a sensitivity of about 25 MHz per 1% strain are achieved.
Abstract: Microwave split-ring resonators are utilized as sensors in a wide variety of applications due to their remarkable features, such as extremely low cost, high sensitivity, and relatively high quality factor. In this article, another application is enabled according to a recently demonstrated chipless tag-reader structure providing the possibility of simplifying the sensor structure from a “multilayer structure” consisting of a dielectric substrate sandwiched between two metallic layers to a single-layer structure formed from a conductive material. This capability is specifically important for strain sensing applications as it brings the possibility of utilizing low-stiffness conductive materials instead of copper (the primary material used in microwave applications) while keeping the reader structure, with its high-quality microwave application-specified substrates, intact. With this approach, a low-stiffness silver-aluminum silicone elastomer conductive material is used for the tag, providing a very high tensile dynamic range. For the whole sensing system, an elongation range as high as 20% and a sensitivity in the range of 25 MHz per 1% of strain are achieved. Multiple simulations and experimental results support the idea of the novel microwave strain sensor proposed in this work.

Journal ArticleDOI
Zhenmin Zhu, Duoduo You, Fuqiang Zhou, Sheng Wang, Yulin Xie
TL;DR: In this paper, a polarization-enhanced fringe pattern (PEFP) method is proposed with which a high dynamic range image can be obtained within a single exposure by estimating the degree of linear polarization (DOLP).
Abstract: Measurement of high dynamic range objects is an obstacle in structured light 3D measurement: such objects contain both over-exposed and under-exposed pixels in a single exposure. This paper proposes a polarization-enhanced fringe pattern (PEFP) method with which a high dynamic range image can be obtained within a single exposure time. In this method, the degree of linear polarization (DOLP) is calculated using the polarization properties of the reflected light and a linear polarizer at a fixed azimuth. The DOLP is efficiently estimated from the projected polarization-state-encoded (PSE) pattern, without needing to change the state of the polarizer. Experimental results indicate that the DOLP depends on light intensity rather than on the reflectivity of the object surfaces. The proposed method enhances the contrast of the fringe patterns and improves their quality, and denser 3D point clouds and higher-quality shapes can be recovered with it.
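
For reference, the textbook DOLP computation from four polarizer orientations via the linear Stokes parameters. The paper's contribution is estimating DOLP without rotating the polarizer, using projected polarization-state-encoded patterns, so this sketch is the conventional baseline rather than the PEFP method itself.

```python
import numpy as np

def dolp(i0, i45, i90, i135):
    """Degree of linear polarization from intensities behind a linear
    polarizer at 0/45/90/135 degrees (Stokes S0, S1, S2)."""
    s0 = i0 + i90
    s1 = i0 - i90
    s2 = i45 - i135
    return np.sqrt(s1 ** 2 + s2 ** 2) / np.maximum(s0, 1e-9)

# Synthetic check: light that is 40% linearly polarized at 30 degrees
theta, p = np.deg2rad(30), 0.4
intensity = lambda a: 0.5 * (1 + p * np.cos(2 * (a - theta)))
print(dolp(intensity(0), intensity(np.pi / 4),
           intensity(np.pi / 2), intensity(3 * np.pi / 4)))  # ~0.4
```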


Journal ArticleDOI
TL;DR: This work proposes a pair of neural networks that represent mappings between images with exposure levels one unit apart (stop-up/down networks), which can restore the full dynamic range of scenes with only two networks and generate photorealistic images in complex lighting situations.
Abstract: Inverse tone mapping aims at recovering the lost scene radiances from a single exposure image. With the successful use of deep learning in numerous applications, many inverse tone mapping methods use convolutional neural networks in a supervised manner. As these approaches are trained with many pre-fixed high dynamic range (HDR) images, they fail to flexibly expand the dynamic ranges of images. To overcome this limitation, we consider a multiple exposure image synthesis approach for HDR imaging. In particular, we propose a pair of neural networks that represent mappings between images that have exposure levels one unit apart (stop-up/down network). Therefore, it is possible to construct two positive-feedback systems to generate images with greater or lesser exposure. Compared to previous works using the conditional generative adversarial learning framework, the stop-up/down network employs HDR-friendly network structures and several techniques to stabilize the training processes. Experiments on HDR datasets demonstrate the advantages of the proposed method compared to conventional methods. Consequently, we apply our approach to restore the full dynamic range of scenes agilely with only two networks and generate photorealistic images in complex lighting situations.
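
A sketch of the positive-feedback idea: applying the stop-up and stop-down mappings repeatedly grows a synthetic exposure stack from one image, which can then be merged into an HDR result. `stop_up` and `stop_down` stand for the paper's trained networks and are hypothetical placeholders here.

```python
import torch

def synthesize_stack(ldr, stop_up, stop_down, steps=2):
    """Grow a bracket: each call to stop_up/stop_down shifts exposure one stop."""
    stack, up, down = [ldr], ldr, ldr
    with torch.no_grad():
        for _ in range(steps):
            up, down = stop_up(up), stop_down(down)   # positive feedback
            stack = [down] + stack + [up]
    return stack  # darkest to brightest; merge into HDR afterwards

# Identity placeholders in lieu of the trained networks
identity = lambda x: x
stack = synthesize_stack(torch.rand(1, 3, 64, 64), identity, identity, steps=2)
```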

Proceedings ArticleDOI
01 Jun 2021
TL;DR: In this article, a mixed 0th and 1st-order block coordinate descent optimizer is proposed to jointly learn sensor, ISP and detector network weights using RAW image data augmented with emulated SNR transition region artifacts.
Abstract: The real world is a 280 dB High Dynamic Range (HDR) world which imaging sensors cannot record in a single shot. HDR cameras acquire multiple measurements with different exposures, gains and photodiodes, from which an Image Signal Processor (ISP) reconstructs an HDR image. Dynamic scene HDR image recovery is an open challenge because of motion and because stitched captures have different noise characteristics, resulting in artifacts that ISPs must resolve in real time at double-digit megapixel resolutions. Traditionally, ISP settings used by downstream vision modules are chosen by domain experts; such frozen camera designs are then used for training data acquisition and supervised learning of downstream vision modules. We depart from this paradigm and formulate HDR ISP hyperparameter search as an end-to-end optimization problem, proposing a mixed 0th and 1st-order block coordinate descent optimizer that jointly learns sensor, ISP and detector network weights using RAW image data augmented with emulated SNR transition region artifacts. We assess the proposed method for human vision and image understanding. For automotive object detection, the method improves mAP and mAR by 33% over expert-tuning and 22% over state-of-the-art optimization methods, outperforming expert-tuned HDR imaging and vision pipelines in all HDR laboratory rig and field experiments.
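
A toy illustration of mixed 0th/1st-order block coordinate descent on a two-block problem: the "ISP" hyperparameters are updated by gradient-free random perturbation while the "detector" weights take analytic gradient steps. The objective and all dimensions are invented for illustration and bear no relation to the paper's actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

def loss(isp, w, x):
    """Toy stand-in for the detection loss of an (ISP -> detector) chain."""
    processed = np.tanh(isp[0] * x + isp[1])    # 'ISP' block (hyperparameters)
    return np.mean((processed @ w - 1.0) ** 2)  # 'detector' block (weights)

x = rng.normal(size=(256, 8))
isp = np.array([1.0, 0.0])            # treated as non-differentiable: 0th order
w = rng.normal(size=8) * 0.1          # differentiable: 1st order

for _ in range(200):
    # 0th-order block step: accept a random perturbation if it lowers the loss
    cand = isp + rng.normal(scale=0.05, size=2)
    if loss(cand, w, x) < loss(isp, w, x):
        isp = cand
    # 1st-order block step: analytic gradient descent on the detector weights
    processed = np.tanh(isp[0] * x + isp[1])
    grad = 2 * processed.T @ (processed @ w - 1.0) / len(x)
    w -= 0.1 * grad

print(loss(isp, w, x))
```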

Journal ArticleDOI
TL;DR: A comprehensive survey of 50+ tone mapping algorithms that have been implemented on hardware for acceleration and real-time performance, demonstrating the link between hardware cost and image quality and thereby illustrating the underlying trade-off, which will be useful for the research community.
Abstract: The rising demand for high quality display has spurred active research in high dynamic range (HDR) imaging, which has the potential to replace standard dynamic range imaging. This is due to HDR's features like accurate reproducibility of a scene with its entire spectrum of visible lighting and color depth. But this capability comes with expensive capture, display, storage and distribution resource requirements. Also, display of HDR images/video content on an ordinary display device with limited dynamic range requires some form of adaptation. Many adaptation algorithms, widely known as tone mapping (TM) operators, have been studied and proposed in the last few decades. In this paper, we present a comprehensive survey of 60 TM algorithms that have been implemented on hardware for acceleration and real-time performance. In this state-of-the-art survey, we discuss those TM algorithms which have been implemented on GPU [1]–[12], FPGA [13]–[47], and ASIC [48]–[60] in terms of their hardware specifications and performance. Output image quality is an important metric for TM algorithms. From our literature survey we found that various objective quality metrics have been used to demonstrate the quality of those algorithms' hardware implementations. We have compiled the metrics used in this survey [61], [62] and analyzed the relationship between hardware cost, image quality and computational efficiency. Currently, machine learning-based (ML) algorithms have become an important tool for solving many image processing tasks, and this paper concludes with a discussion of future research directions to realize ML-based TM operators on hardware.
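
As a concrete example of a global TM operator of the kind surveyed (and one cheap enough for hardware), the classic Reinhard mapping: scale by the log-average luminance, then compress with L/(1+L).

```python
import numpy as np

def reinhard_global(hdr_rgb, key=0.18, eps=1e-6):
    """Reinhard global operator: log-average scaling plus L/(1+L) compression."""
    lum = 0.2126 * hdr_rgb[..., 0] + 0.7152 * hdr_rgb[..., 1] + 0.0722 * hdr_rgb[..., 2]
    log_avg = np.exp(np.mean(np.log(lum + eps)))
    scaled = key * lum / log_avg
    mapped = scaled / (1.0 + scaled)
    return hdr_rgb * (mapped / np.maximum(lum, eps))[..., None]

ldr = reinhard_global(np.random.rand(256, 256, 3).astype(np.float32) * 100)
```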

Journal ArticleDOI
TL;DR: Off-axis k-space holography is introduced that circumvents the limited dynamic range of current cameras when measuring the scattering signals of nanometric particles, dramatically boosting the achievable dynamic range to up to 110 dB.
Abstract: Optical sensing is one of the key enablers of modern diagnostics. Especially label-free imaging modalities hold great promise as they eliminate labeling procedures prior to analysis. However, scattering signals of nanometric particles scale with their volume square. This unfavorable scaling makes it extremely difficult to quantitatively characterize intrinsically heterogeneous clinical samples, such as extracellular vesicles, as their signal variation easily exceeds the dynamic range of currently available cameras. Here, we introduce off-axis k-space holography that circumvents this limitation. By imaging the back-focal plane of our microscope, we project the scattering signal of all particles onto all camera pixels, thus dramatically boosting the achievable dynamic range to up to 110 dB. We validate our platform by detecting and quantitatively sizing metallic and dielectric particles over a 200 × 200 μm field of view and demonstrate that independently performed signal calibrations allow correctly sizing particles made from different materials. Finally, we present quantitative size distributions of extracellular vesicle samples.

Journal ArticleDOI
TL;DR: In this article, the authors demonstrate a non-adaptive algorithm for quantum sensors to measure AC fields over a large range with negligible loss in sensitivity, explore this algorithm thoroughly by simulation, and discuss the T⁻² scaling that this algorithm approaches in the coherent regime.
Abstract: Quantum sensors are highly sensitive since they capitalise on fragile quantum properties such as coherence, while enabling ultra-high spatial resolution. For sensing, the crux is to minimise the measurement uncertainty in a chosen range within a given time. However, basic quantum sensing protocols cannot simultaneously achieve both a high sensitivity and a large range. Here, we demonstrate a non-adaptive algorithm for increasing this range, in principle without limit, for alternating-current field sensing, while being able to get arbitrarily close to the best possible sensitivity. Therefore, it outperforms the standard measurement concept in both sensitivity and range. Also, we explore this algorithm thoroughly by simulation, and discuss the T⁻² scaling that this algorithm approaches in the coherent regime, as opposed to the T⁻¹/² of the standard measurement. The same algorithm can be applied to any modulo-limited sensor. Usually, quantum sensing protocols impose a trade-off between sensitivity and maximum range. Here, the authors demonstrate a non-adaptive algorithm for quantum sensors to measure AC fields with a large range for which the loss in sensitivity is negligible.

Journal ArticleDOI
TL;DR: iADMIRE restored underlying speckle signal in dark artifact regions while suppressing sidelobes in bright target cases up to 100 dB and demonstrated the best performance for levels of reverberation clutter up to 0-dB signal-to-clutter ratio.
Abstract: Clutter produced using bright acoustic sources can obscure weaker acoustic targets, degrading the quality of the image in scenarios with high dynamic ranges. Many adaptive beamformers seek to improve image quality by reducing these sidelobe artifacts, generating a boost in contrast ratio or contrast-to-noise ratio. However, some of these beamformers inadvertently introduce a dark region artifact in place of the strong clutter, a situation that occurs when both clutter and the underlying signal of interest are removed. We introduce the iterative aperture domain model image reconstruction (iADMIRE) method that is designed to reduce clutter while preserving the underlying signal. We compare the contrast ratio dynamic range (CRDR) of iADMIRE to several other adaptive beamformers plus delay-and-sum (DAS) to quantify the accuracy and reliability of the reported measured contrast for each beamformer over a wide range of contrast levels. We also compare all beamformers in the presence of bright targets ranging from 40 to 120 dB to observe the presence of sidelobes. In cases with no added reverberation clutter, iADMIRE had a CRDR of 75.6 dB when compared with the next best method DAS with 60.8 dB. iADMIRE also demonstrated the best performance for levels of reverberation clutter up to 0-dB signal-to-clutter ratio. Finally, iADMIRE restored underlying speckle signal in dark artifact regions while suppressing sidelobes in bright target cases up to 100 dB.
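
For clarity, the contrast-ratio measurement underlying CRDR-style comparisons can be computed from a beamformed envelope image as sketched below; the masks and synthetic data are illustrative.

```python
import numpy as np

def contrast_ratio_db(envelope, target_mask, background_mask):
    """Contrast ratio (dB) between target and background mean envelope."""
    return 20 * np.log10(envelope[target_mask].mean()
                         / envelope[background_mask].mean())

env = np.abs(np.random.randn(128, 128)) + 1e-3    # synthetic speckle envelope
target = np.zeros_like(env, dtype=bool)
target[40:60, 40:60] = True
env[target] *= 10.0                               # a ~20 dB bright target
print(contrast_ratio_db(env, target, ~target))    # ~20 dB
```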

Journal ArticleDOI
TL;DR: In this paper, an all-fiber-based optical demodulator was developed for the signal interrogation of low-coherence fiber-optic Fabry-Perot interferometric sensors.
Abstract: We developed an all-fiber-based optical demodulator for the signal interrogation of low-coherence fiber-optic Fabry–Perot interferometric sensors. The optical demodulator consists of a Michelson interferometer implemented by using a 3 × 3 fiber coupler and two fiber-coupled Faraday reflectors with tunable fiber delay lines. The demodulator's output contains two optical interference signals with a constant phase shift and the output shows no sensitivity to the polarization variation in the light source or fibers. A digital phase recovery algorithm is used to extract the measurand information from the phase-shifted signals at high accuracy, high dynamic range, and high stability. The optical demodulator has been applied to the measurement of high-frequency vibrations and strains using fiber-optic sensors.
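
A minimal sketch of phase recovery from two fringe signals with a known constant phase shift, as the demodulator's two outputs provide; the 120° shift is typical of a 3 × 3 coupler, and the signals are assumed already normalized to unit-amplitude cosine form (the paper's actual algorithm also handles amplitude and offset normalization).

```python
import numpy as np

def recover_phase(i1, i2, delta):
    """Given i1 = cos(phi) and i2 = cos(phi - delta), recover unwrapped phi."""
    sin_phi = (i2 - i1 * np.cos(delta)) / np.sin(delta)
    return np.unwrap(np.arctan2(sin_phi, i1))

# Synthetic vibration: phi swings over many fringes (high dynamic range)
t = np.linspace(0, 1, 5000)
phi = 30 * np.sin(2 * np.pi * 5 * t)
delta = 2 * np.pi / 3                     # 120 deg, typical of a 3x3 coupler
phi_hat = recover_phase(np.cos(phi), np.cos(phi - delta), delta)
print(np.allclose(phi_hat, phi))          # True
```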

Journal ArticleDOI
TL;DR: In this paper, a camera-aided method of using high dynamic range photogrammetry was developed to calculate luminous flux radiated to the camera lens from an environment or its sub-areas, with per-pixel contributio...
Abstract: A camera-aided method of using high dynamic range photogrammetry was developed to calculate luminous flux radiated to the camera lens from an environment or its subareas, with per-pixel contributio...

Journal ArticleDOI
TL;DR: The CAOS spectral imager can be used both to image full-spectrum stationary line targets and to spectrally map 2D targets using line scanning methods, and it is suited for improved-speed, improved-SNR linear HDR imaging.
Abstract: A CAOS line camera is introduced for spectral imaging of one-dimensional (1D) or line targets. The proposed spectral camera uses both a diffraction grating as well as a cylindrical lens optics system to provide line imaging along the line pixels' direction of the image axis and Fourier transform operations in the orthogonal direction to provide optical spectrum analysis of the line pixels. The imager incorporates a coded access optical sensor (CAOS) structure based on a digital micromirror device (DMD). The design includes a line-by-line scan option to enable 2D spectral imaging. Line style spectral imaging using a 2850 K color temperature white light target illumination source along with visible band color bandpass filters and a moving mechanical pinhole is demonstrated for the first time, to the best of our knowledge, to simulate a line target with individual pixels in 1D that have unique spectral content. A ∼412 to ∼732 nm input target spectrum is measured using a 38×52 CAOS pixel spatial sampling grid providing a test image line of 38 pixels, with each pixel providing a designed spectral resolution of ∼6.2 nm. The spectral image is generated using the robust code division multiple access (CDMA) mode of the camera. Then, for what we believe is the first time, the high dynamic range (HDR) operation of the frequency division multiple access–time division multiple access (FDMA–TDMA) mode of the CAOS camera is demonstrated. The FDMA–TDMA mode also features HDR recovery like the frequency modulation TDMA (FM–TDMA) mode, although at a much faster imaging rate and a higher signal-to-noise ratio (SNR), as more than one CAOS pixel is extracted at a time. Experiments successfully demonstrate 66 dB HDR target recovery using both FDMA–TDMA and FM–TDMA modes, with the FDMA–TDMA mode operating at an encoding speed eight times faster than the FM–TDMA mode, given that eight FM channels are used for the FDMA–TDMA mode. The CAOS spectral imager can be used to image both full spectrum stationary line targets as well as spectrally map 2D targets using line scanning methods. The demonstrated FDMA–TDMA CAOS mode is suited for improved speed and SNR linear HDR imaging.
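
The CDMA mode can be illustrated with idealized bipolar Walsh-Hadamard codes: each pixel is modulated by an orthogonal code, a single point detector records the coded sum per time slot, and correlation recovers every pixel. A physical DMD realizes bipolar codes via paired binary patterns, which this sketch glosses over.

```python
import numpy as np
from scipy.linalg import hadamard

n = 64                                   # pixels; code length = time slots
codes = hadamard(n)                      # orthogonal +/-1 codes, H @ H.T = n*I
pixels = np.random.rand(n)               # unknown pixel irradiances
detector_signal = codes.T @ pixels       # one detector sample per time slot
recovered = codes @ detector_signal / n  # correlate against each code
print(np.allclose(recovered, pixels))    # True
```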

Journal ArticleDOI
TL;DR: A novel dual-attention-guided end-to-end deep neural network, called DAHDRNet, is proposed, which produces high-quality ghost-free HDR images.
Abstract: Ghosting artifacts caused by moving objects and misalignments are a key challenge in constructing high dynamic range (HDR) images. Current methods first register the input low dynamic range (LDR) images using optical flow before merging them. This process is error-prone, and often causes ghosting in the resulting merged image. We propose a novel dual-attention-guided end-to-end deep neural network, called DAHDRNet, which produces high-quality ghost-free HDR images. Unlike previous methods that directly stack the LDR images or features for merging, we use dual-attention modules to guide the merging according to the reference image. DAHDRNet thus exploits both spatial attention and feature channel attention to achieve ghost-free merging. The spatial attention modules automatically suppress undesired components caused by misalignments and saturation, and enhance the fine details in the non-reference images. The channel attention modules adaptively rescale channel-wise features by considering the inter-dependencies between channels. The dual-attention approach is applied recurrently to further improve feature representation, and thus alignment. A dilated residual dense block is devised to make full use of the hierarchical features and increase the receptive field when hallucinating missing details. We employ a hybrid loss function, which consists of a perceptual loss, a total variation loss, and a content loss to recover photo-realistic images. Although DAHDRNet is not flow-based, it can be applied to flow-based registration to reduce artifacts caused by optical-flow estimation errors. Experiments on different datasets show that the proposed DAHDRNet achieves state-of-the-art quantitative and qualitative results.
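
A minimal squeeze-and-excitation-style sketch of the channel-attention idea described: global-pool each channel, model inter-channel dependencies with a small bottleneck MLP, and rescale the channels. DAHDRNet's actual modules are assumed to differ in detail.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Rescale channels by weights learned from their global statistics."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):
        w = self.fc(x.mean(dim=(2, 3)))   # squeeze: B x C channel descriptors
        return x * w[:, :, None, None]    # excite: per-channel rescaling

out = ChannelAttention(64)(torch.randn(2, 64, 32, 32))
```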

Journal ArticleDOI
28 Jun 2021
TL;DR: In this article, the authors proposed a force sensor that combines semiconductor strain gauges and metallic foil strain gauge to present force information of a higher dynamic range to robots, which was confirmed from the SN ratio test that the proposed sensor has a measurement range from 0.005 N to 1000 N.
Abstract: This letter proposes a force sensor that combines semiconductor strain gauges and metallic foil strain gauges to present force information of a higher dynamic range to robots. The strain gauges have different sensitivities, with semiconductor strain gauge sensitivity being approximately 90-fold than that of the metallic foil strain gauges. Using this difference in sensitivity, large forces and small forces are both detected through the two types of strain gauges and high dynamic range force detection is achieved by combining the output signals of these two. It was confirmed from the SN ratio test that the proposed sensor has a measurement range from 0.005 N to 1000 N, the maximum load. The dynamic range is 2 $\times 10^5$ , extending the dynamic range of the 6-axis force sensor with the highest range in previous studies two-fold.
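
The range-extension logic can be sketched as a per-sample selection between the two calibrated channels: trust the roughly 90-fold more sensitive semiconductor gauge until it nears its full scale, then fall back to the metallic foil gauge. The full-scale value and switching threshold below are assumptions, not the paper's.

```python
import numpy as np

def fuse_force(f_semiconductor, f_foil, semi_full_scale=20.0):
    """Use the sensitive channel for small forces, the robust one otherwise."""
    f_semi = np.asarray(f_semiconductor, dtype=float)
    f_foil = np.asarray(f_foil, dtype=float)
    return np.where(np.abs(f_semi) < 0.9 * semi_full_scale, f_semi, f_foil)

print(fuse_force([0.01, 55.0], [0.3, 54.0]))  # -> [0.01, 54.0]
```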

Journal ArticleDOI
01 Apr 2021
TL;DR: In this paper, the authors use the second-order nonlinear optical (NLO) effect in NLO materials to obtain phase matching between the near-infrared pump beam and the generated terahertz (THz) field.
Abstract: Over the past few decades, the generation and detection of broadband terahertz waves in the frequency range 0.1–30 THz have seen tremendous progress, both in the achievable bandwidths and in the generated THz intensities. Most of the progress has relied on rapid advances in ultrafast laser sources, techniques, and novel materials. Pulsed broadband THz sources are widely used in today's THz time-domain spectroscopy (THz-TDS) systems. For such systems, two main principles of THz wave generation are commonly used: the first uses photoconductive antennas that generate THz pulses via transient photocarriers induced by femtosecond pulses, whereas the second, used here, exploits optical rectification as a second-order nonlinear optical (NLO) effect in NLO materials. The Fourier-transform-limited femtosecond pulse duration defines the maximum bandwidth of the THz system. Besides this limit, other factors must be considered. In the case of photoconductive antennas, the main limiting factor is the recombination time of the photoinduced carriers in the photoconductor, typically in the subpicosecond range. Due to these limitations, bandwidths beyond 6 THz are rarely achieved with the most typically used low-temperature-grown gallium arsenide (LT-GaAs) photoconductive antenna generators and detectors. On the other hand, for NLO materials, phase matching between the near-infrared pump beam and the generated THz field is essential in order for the generation process to be efficient. For organic NLO materials, due to the intrinsically lower dielectric dispersion from the lower THz frequency to the optical range, phase matching in a much broader THz range (tens of THz) is in principle possible, in contrast to inorganic alternatives such as ZnTe, GaP, or LiNbO3. [6] Organic NLO crystals, compared with polymers, show excellent long-term stability and a high damage threshold (>20 mJ cm⁻² at 1500 nm), allowing the use of powerful pump lasers resulting in very intense THz fields beyond 8 GV m⁻¹ generated with these materials. Often a higher bandwidth can already be achieved in a combined approach using an organic or inorganic NLO material as a THz generator and a photoconductive antenna as a detector.

Journal ArticleDOI
13 Mar 2021
TL;DR: The existing high dynamic range structured light 3D measurement technologies are classified into multiple measurement fusion (MMF) and single best measurement (SBM) based on the measurement principle, and the future development trends are proposed.
Abstract: The structured light method is one of the best methods for automated 3D measurement in industrial production due to its stability and speed. However, when the surface of industrial parts has high dynamic range (HDR) areas, e.g. rust, oil stains, or shiny surfaces, phase calculation errors may occur due to low modulation and pixel over-saturation in the image, making it difficult to obtain accurate 3D data. This paper classifies and summarizes the existing high dynamic range structured light 3D measurement technologies, compares their advantages, and analyzes future development trends. The existing methods are classified into multiple measurement fusion (MMF) and single best measurement (SBM) based on the measurement principle. Then, the advantages of the various methods in the two categories are discussed in detail, and the applicable scenarios are analyzed. Finally, the development trend of high dynamic range 3D measurement based on structured light is proposed.
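
A minimal sketch of the multiple-measurement-fusion (MMF) idea: from fringe images captured at several exposures, keep for each pixel the brightest value still below saturation, so that both dark (e.g., rusty) and shiny regions retain usable fringe modulation. Thresholds and data are illustrative.

```python
import numpy as np

def fuse_exposures(stack, saturation=250.0):
    """Per-pixel: take the largest value below the saturation threshold."""
    stack = np.asarray(stack, dtype=float)           # (n_exposures, H, W)
    valid = np.where(stack < saturation, stack, -1.0)
    return valid.max(axis=0).clip(min=0.0)

# Three synthetic exposures of one scene: under-, mid-, and over-exposed
base = np.random.rand(64, 64) * 100.0
stack = np.minimum([base * g for g in (0.5, 1.5, 4.0)], 255.0)
fused = fuse_exposures(stack)
```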