
Showing papers on "High-dynamic-range imaging published in 2020"


Journal ArticleDOI
TL;DR: Proposes a feature masking mechanism that reduces the contribution of features from saturated areas and adapts a VGG-based perceptual loss to synthesize plausible textures, reconstructing visually pleasing HDR results from a single LDR image.
Abstract: Digital cameras can only capture a limited range of real-world scenes' luminance, producing images with saturated pixels. Existing single image high dynamic range (HDR) reconstruction methods attempt to expand the range of luminance, but are not able to hallucinate plausible textures, producing results with artifacts in the saturated areas. In this paper, we present a novel learning-based approach to reconstruct an HDR image by recovering the saturated pixels of an input LDR image in a visually pleasing way. Previous deep learning-based methods apply the same convolutional filters on well-exposed and saturated pixels, creating ambiguity during training and leading to checkerboard and halo artifacts. To overcome this problem, we propose a feature masking mechanism that reduces the contribution of the features from the saturated areas. Moreover, we adapt the VGG-based perceptual loss function to our application to be able to synthesize visually pleasing textures. Since the number of HDR images for training is limited, we propose to train our system in two stages. Specifically, we first train our system on a large number of images for the image inpainting task and then fine-tune it on HDR reconstruction. Since most of the HDR examples contain smooth regions that are simple to reconstruct, we propose a sampling strategy to select challenging training patches during the HDR fine-tuning stage. We demonstrate through experimental results that our approach can reconstruct visually pleasing HDR results, better than the current state of the art on a wide range of scenes.

94 citations
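
To make the feature-masking idea concrete, here is a minimal PyTorch sketch, assuming a soft mask derived from per-pixel saturation; the names and the partial-convolution-style weighting are illustrative, not the authors' released code.

```python
# Hypothetical sketch of feature masking: features from saturated pixels are
# down-weighted before convolution, so the filters see mostly valid content.
import torch
import torch.nn as nn

def saturation_mask(ldr, threshold=0.95):
    """Soft mask that is ~1 for well-exposed pixels and ~0 near saturation."""
    # ldr: (B, 3, H, W) in [0, 1]; mask based on the brightest channel per pixel.
    peak = ldr.max(dim=1, keepdim=True).values
    return torch.clamp((threshold - peak) / threshold, 0.0, 1.0)

class FeatureMaskedConv(nn.Module):
    """Convolution whose input features are scaled by a validity mask."""
    def __init__(self, in_ch, out_ch, k=3):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, k, padding=k // 2)

    def forward(self, feat, mask):
        # Reduce the contribution of features in saturated regions,
        # mirroring the feature-masking mechanism described above.
        return self.conv(feat * mask)

ldr = torch.rand(1, 3, 64, 64)
out = FeatureMaskedConv(3, 16)(ldr, saturation_mask(ldr))
```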


Journal ArticleDOI
TL;DR: Shows that, without explicitly performing structural patch decomposition, one arrives at an unnormalized version of SPD-MEF that enjoys a roughly 30× speed-up and is closely related to pixel-level MEF methods as well as the standard two-layer decomposition method for MEF.
Abstract: Exposure bracketing is crucial to high dynamic range imaging, but it is prone to halos for static scenes and ghosting artifacts for dynamic scenes. The recently proposed structural patch decomposition for multi-exposure fusion (SPD-MEF) has achieved reliable performance in deghosting, but suffers from visible halo artifacts and is computationally expensive. In addition, its relationship to other MEF methods is unclear. We show that without explicitly performing structural patch decomposition, we arrive at an unnormalized version of SPD-MEF, which enjoys a roughly 30× speed-up and is closely related to pixel-level MEF methods as well as the standard two-layer decomposition method for MEF. Moreover, we develop a fast multi-scale SPD-MEF method, which can effectively reduce halo artifacts. Experimental results demonstrate the effectiveness of the proposed MEF method in terms of speed and quality.

86 citations
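
For context, the pixel-level MEF methods the paper relates SPD-MEF to can be sketched as a per-pixel weighted average over the exposure stack; the Gaussian well-exposedness weight below is a common choice and an assumption here, not the paper's exact formulation.

```python
# Minimal pixel-level multi-exposure fusion: per-pixel weights favouring
# well-exposed values, normalized across the stack of N exposures.
import numpy as np

def fuse_exposures(stack, sigma=0.2):
    """stack: (N, H, W, 3) float images in [0, 1]."""
    # Gaussian weight centred at mid-grey: favours well-exposed pixels.
    w = np.exp(-((stack - 0.5) ** 2) / (2 * sigma ** 2)).mean(axis=-1)
    w /= w.sum(axis=0, keepdims=True) + 1e-12   # normalize over the N exposures
    return (w[..., None] * stack).sum(axis=0)    # (H, W, 3) fused image

fused = fuse_exposures(np.random.rand(3, 32, 32, 3))
```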


Proceedings ArticleDOI
14 Jun 2020
TL;DR: Jointly trains an optical encoder and electronic decoder, where the encoder is parameterized by the point spread function (PSF) of the lens, the bottleneck is the sensor with a limited dynamic range, and the decoder is a CNN.
Abstract: High-dynamic-range (HDR) imaging is crucial for many applications. Yet, acquiring HDR images with a single shot remains a challenging problem. Whereas modern deep learning approaches are successful at hallucinating plausible HDR content from a single low-dynamic-range (LDR) image, saturated scene details often cannot be faithfully recovered. Inspired by recent deep optical imaging approaches, we interpret this problem as jointly training an optical encoder and electronic decoder where the encoder is parameterized by the point spread function (PSF) of the lens, the bottleneck is the sensor with a limited dynamic range, and the decoder is a convolutional neural network (CNN). The lens surface is then jointly optimized with the CNN in a training phase; we fabricate this optimized optical element and attach it as a hardware add-on to a conventional camera during inference. In extensive simulations and with a physical prototype, we demonstrate that this end-to-end deep optical imaging approach to single-shot HDR imaging outperforms both purely CNN-based approaches and other PSF engineering approaches.

61 citations
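
A hedged sketch of the encoder-bottleneck-decoder idea in PyTorch: a learnable PSF blurs the HDR scene, the sensor clips to a limited range, and a small CNN decodes. The PSF parameterization and toy decoder are placeholders, not the fabricated optic or the paper's network.

```python
# End-to-end deep optics toy model: optical encoding -> clipped sensor -> CNN.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DeepOpticsHDR(nn.Module):
    def __init__(self, psf_size=11):
        super().__init__()
        self.psf_logits = nn.Parameter(torch.zeros(1, 1, psf_size, psf_size))
        self.decoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1))

    def forward(self, hdr):
        # Non-negative, energy-conserving PSF via a softmax over the kernel.
        psf = torch.softmax(self.psf_logits.flatten(), 0).view_as(self.psf_logits)
        psf = psf.expand(3, 1, -1, -1)                    # same PSF per channel
        blurred = F.conv2d(hdr, psf, padding=psf.shape[-1] // 2, groups=3)
        sensor = torch.clamp(blurred, 0.0, 1.0)           # limited dynamic range
        return self.decoder(sensor)                       # CNN recovers highlights

model = DeepOpticsHDR()
recon = model(torch.rand(1, 3, 64, 64) * 4.0)             # HDR input above 1.0
```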


Journal ArticleDOI
TL;DR: This work introduces neural sensors as a methodology to optimize per-pixel shutter functions jointly with a differentiable image processing method, such as a neural network, in an end-to-end fashion and demonstrates how to leverage emerging programmable and re-configurable sensor–processors to implement the optimized exposure functions directly on the sensor.
Abstract: Camera sensors rely on global or rolling shutter functions to expose an image. This fixed function approach severely limits the sensors’ ability to capture high-dynamic-range (HDR) scenes and resolve high-speed dynamics. Spatially varying pixel exposures have been introduced as a powerful computational photography approach to optically encode irradiance on a sensor and computationally recover additional information of a scene, but existing approaches rely on heuristic coding schemes and bulky spatial light modulators to optically implement these exposure functions. Here, we introduce neural sensors as a methodology to optimize per-pixel shutter functions jointly with a differentiable image processing method, such as a neural network, in an end-to-end fashion. Moreover, we demonstrate how to leverage emerging programmable and re-configurable sensor–processors to implement the optimized exposure functions directly on the sensor. Our system takes specific limitations of the sensor into account to optimize physically feasible optical codes and we evaluate its performance for snapshot HDR and high-speed compressive imaging both in simulation and experimentally with real scenes.

61 citations
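
The per-pixel shutter idea can be sketched as a learnable exposure code that modulates frames over time before the sensor integrates and saturates; the sigmoid relaxation and shapes below are assumptions for illustration.

```python
# Toy neural-sensor model: a trainable per-pixel shutter code modulates the
# incoming frames, the sensor integrates over time, and saturation clips.
import torch
import torch.nn as nn

class PixelShutter(nn.Module):
    def __init__(self, t_steps, h, w):
        super().__init__()
        self.code_logits = nn.Parameter(torch.zeros(t_steps, h, w))

    def forward(self, frames):                    # frames: (T, H, W) irradiance
        code = torch.sigmoid(self.code_logits)    # relaxed per-pixel shutter in (0, 1)
        measurement = (code * frames).sum(dim=0)  # sensor integrates over time
        return torch.clamp(measurement, max=1.0)  # saturation = limited dynamic range

shutter = PixelShutter(8, 32, 32)
snapshot = shutter(torch.rand(8, 32, 32))
```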


Proceedings ArticleDOI
14 Jun 2020
TL;DR: A hybrid camera system has been built to validate that the proposed method is able to reconstruct quantitatively and qualitatively high-quality high dynamic range images by successfully fusing the images and intensity maps for various real-world scenarios.
Abstract: Reconstruction of a high dynamic range image from a single low dynamic range image captured by a frame-based conventional camera, which suffers from over- or under-exposure, is an ill-posed problem. In contrast, recent neuromorphic cameras are able to record high dynamic range scenes in the form of an intensity map, with much lower spatial resolution, and without color. In this paper, we propose a neuromorphic camera guided high dynamic range imaging pipeline, and a network consisting of specially designed modules according to each step in the pipeline, which bridges the domain gaps in resolution, dynamic range, and color representation between the two types of sensors and images. A hybrid camera system has been built to validate that the proposed method is able to reconstruct quantitatively and qualitatively high-quality high dynamic range images by successfully fusing the images and intensity maps for various real-world scenarios.

54 citations


Proceedings ArticleDOI
14 Jun 2020
TL;DR: A novel rank-1 parameterization of the proposed DOE that avoids a vast number of trainable parameters while preserving high-frequency encoding, compared with conventional end-to-end design methods, and improves the PSNR by more than 7 dB over state-of-the-art end-to-end designs.
Abstract: High-dynamic range (HDR) imaging is an essential imaging modality for a wide range of applications in uncontrolled environments, including autonomous driving, robotics, and mobile phone cameras. However, existing HDR techniques in commodity devices struggle with dynamic scenes due to multi-shot acquisition and post-processing time, e.g. mobile phone burst photography, making such approaches unsuitable for real-time applications. In this work, we propose a method for snapshot HDR imaging by learning an optical HDR encoding in a single image which maps saturated highlights into neighboring unsaturated areas using a diffractive optical element (DOE). We propose a novel rank-1 parameterization of the proposed DOE which avoids a vast number of trainable parameters and preserves high-frequency encoding compared with conventional end-to-end design methods. We further propose a reconstruction network tailored to this rank-1 parameterization for recovery of clipped information from the encoded measurements. The proposed end-to-end framework is validated through simulation and real-world experiments and improves the PSNR by more than 7 dB over state-of-the-art end-to-end designs.

54 citations
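
The rank-1 parameterization admits a compact sketch: a 2-D height map expressed as the outer product of two 1-D profiles, cutting trainable parameters from H×W to H+W; the sigmoid bounding and height scale are assumed values, not the paper's fabrication constraints.

```python
# Rank-1 DOE height map: outer product of two trainable 1-D profiles.
import torch
import torch.nn as nn

class Rank1DOE(nn.Module):
    def __init__(self, h, w, max_height=1.2e-6):   # height in metres (assumed scale)
        super().__init__()
        self.u = nn.Parameter(torch.randn(h))
        self.v = nn.Parameter(torch.randn(w))
        self.max_height = max_height

    def height_map(self):
        # Rank-1 surface: every row is a scaled copy of the same profile,
        # yet sharp 1-D variations keep high-frequency structure.
        surface = torch.sigmoid(torch.outer(self.u, self.v))
        return surface * self.max_height

doe = Rank1DOE(256, 256)
print(doe.height_map().shape, sum(p.numel() for p in doe.parameters()))  # 512 params
```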


Journal ArticleDOI
13 Mar 2020 - Sensors
TL;DR: Compared with six existing MEF methods, the proposed FPM not only improves the robustness of ghost removal in a dynamic scene, but also performs well in color saturation, image sharpness, and local detail processing.
Abstract: Multi-exposure image fusion (MEF) provides a concise way to generate high-dynamic-range (HDR) images. Although precise fusion can be achieved by existing MEF methods in different static scenes, the corresponding performance of ghost removal varies in different dynamic scenes. This paper proposes a precise MEF method based on feature patches (FPM) to improve the robustness of ghost removal in a dynamic scene. A reference image is first selected by a priori exposure quality and then used in a structure consistency test to solve the image ghosting issues existing in dynamic scene MEF. Source images are decomposed into spatial-domain structures by a guided filter. Both the base and detail layers of the decomposed images are fused to achieve the MEF. The structure decomposition of the image patch and the appropriate exposure evaluation are integrated into the proposed solution. Both global and local exposures are optimized to improve the fusion performance. Compared with six existing MEF methods, the proposed FPM not only improves the robustness of ghost removal in a dynamic scene, but also performs well in color saturation, image sharpness, and local detail processing.

30 citations
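
The guided-filter base/detail split used above can be sketched with a self-guided filter (the image as its own guide); the radius and epsilon are illustrative defaults, not the paper's settings.

```python
# Self-guided filter (He et al.) for a base/detail decomposition.
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(img, radius=8, eps=1e-3):
    """img: (H, W) grayscale in [0, 1]; returns the smoothed 'base' layer."""
    size = 2 * radius + 1
    mean_i = uniform_filter(img, size)
    mean_ii = uniform_filter(img * img, size)
    var_i = mean_ii - mean_i ** 2
    a = var_i / (var_i + eps)          # edge-preserving gain (near 1 at edges)
    b = mean_i - a * mean_i
    return uniform_filter(a, size) * img + uniform_filter(b, size)

gray = np.random.rand(64, 64)
base = guided_filter(gray)
detail = gray - base                   # detail layer, fused separately from base
```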


Journal ArticleDOI
TL;DR: The primary idea is that blending two images in the deep-feature-domain is effective for synthesizing multi-exposure images that are structurally aligned to the reference, resulting in better-aligned images than the pixel-domain blending or geometric transformation methods.
Abstract: This paper presents a deep end-to-end network for high dynamic range (HDR) imaging of dynamic scenes with background and foreground motions. Generating an HDR image from a sequence of multi-exposure images is a challenging process when the images have misalignments by being taken in a dynamic situation. Hence, recent methods first align the multi-exposure images to the reference by using patch matching, optical flow, homography transformation, or attention module before the merging. In this paper, we propose a deep network that synthesizes the aligned images as a result of blending the information from multi-exposure images, because explicitly aligning photos with different exposures is inherently a difficult problem. Specifically, the proposed network generates under/over-exposure images that are structurally aligned to the reference, by blending all the information from the dynamic multi-exposure images. Our primary idea is that blending two images in the deep-feature-domain is effective for synthesizing multi-exposure images that are structurally aligned to the reference, resulting in better-aligned images than the pixel-domain blending or geometric transformation methods. Specifically, our alignment network consists of a two-way encoder for extracting features from two images separately, several convolution layers for blending deep features, and a decoder for constructing the aligned images. The proposed network is shown to generate the aligned images with a wide range of exposure differences very well and thus can be effectively used for the HDR imaging of dynamic scenes. Moreover, by adding a simple merging network after the alignment network and training the overall system end-to-end, we obtain a performance gain compared to the recent state-of-the-art methods.

17 citations
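
The two-way-encoder blending architecture described above can be sketched in a few PyTorch lines; the channel widths and layer counts are placeholders, not the paper's network.

```python
# Deep-feature-domain blending: encode reference and source separately,
# blend the concatenated features with convolutions, decode the aligned image.
import torch
import torch.nn as nn

class AlignByBlending(nn.Module):
    def __init__(self, ch=32):
        super().__init__()
        enc = lambda: nn.Sequential(nn.Conv2d(3, ch, 3, padding=1), nn.ReLU())
        self.enc_ref, self.enc_src = enc(), enc()        # two-way encoder
        self.blend = nn.Sequential(nn.Conv2d(2 * ch, ch, 3, padding=1), nn.ReLU())
        self.dec = nn.Conv2d(ch, 3, 3, padding=1)        # aligned image out

    def forward(self, ref, src):
        f = torch.cat([self.enc_ref(ref), self.enc_src(src)], dim=1)
        return self.dec(self.blend(f))                   # feature-domain blend

aligned = AlignByBlending()(torch.rand(1, 3, 64, 64), torch.rand(1, 3, 64, 64))
```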


Journal ArticleDOI
TL;DR: Proposes a learning-based stereo HDR imaging (SHDRI) method with three convolutional neural network modules that perform specific tasks, namely an exposure calibration CNN (EC-CNN) module, a hole-filling CNN (HF-CNN) module, and an HDR fusion CNN (HDRF-CNN) module, combined with traditional image processing methods to model the SHDRI pipeline.
Abstract: It is possible to generate stereo high dynamic range (HDR) images/videos by using a pair of cameras with different exposure parameters. In this article, a learning-based stereo HDR imaging (SHDRI) method with three modules is proposed. In the proposed method, we construct three convolutional neural network (CNN) modules that perform specific tasks, including exposure calibration CNN (EC-CNN) module, hole-filling CNN (HF-CNN) module and HDR fusion CNN (HDRF-CNN) module, to combine with traditional image processing methods to model SHDRI pipeline. To avoid ambiguity, we assume that the left-view image is under-exposed and the right-view image is over-exposed. Specifically, the EC-CNN module is first constructed to convert stereo multi-exposure images into the same exposure to facilitate subsequent stereo matching. Then, based on the estimated disparity map, the right-view image is forward-warped to generate the initial left-view over-exposure image. After that, extra exposure information is utilized to guide hole-filling. Finally, the HDRF-CNN module is constructed and employed to extract fusion features to fuse the hole-filled left-view over-exposure image with the original left-view under-exposure image into the left-view HDR image. Right-view HDR images can be generated in the same way. In addition, we propose an effective two-phase training strategy to overcome the lack of a sufficient large stereo multi-exposure dataset. The experimental results demonstrate that the proposed method can generate stereo HDR images with high visual quality. Furthermore, the proposed method achieves better performance in comparison with the latest SHDRI method.

14 citations


Journal ArticleDOI
Zhiyong Pan, Mei Yu, Gangyi Jiang, Haiyong Xu, Zongju Peng, Fen Chen
TL;DR: The experimental results show that the proposed method is superior to the existing HDR imaging methods in quantitative and qualitative analysis, and can quickly generate high-quality HDR images.

14 citations


Journal ArticleDOI
TL;DR: This work introduces a post-acquisition snapshot HDR enhancement scheme that generates a bracketed sequence from a small set of LDR images, and in the extreme case, directly from a single exposure.
Abstract: Bracketed High Dynamic Range (HDR) imaging architectures acquire a sequence of Low Dynamic Range (LDR) images in order to either produce an HDR image or an “optimally” exposed LDR image, achieving impressive results under static camera and scene conditions. However, in real world conditions, ghost-like artifacts and noise effects limit the quality of HDR reconstruction. We address these limitations by introducing a post-acquisition snapshot HDR enhancement scheme that generates a bracketed sequence from a small set of LDR images, and in the extreme case, directly from a single exposure. We achieve this goal via a sparse-based approach where transformations between differently exposed images are encoded through a dictionary learning process, while we learn appropriate features by employing a stacked sparse autoencoder (SSAE) based framework. Via experiments with real images, we demonstrate the improved performance of our method over the state-of-the-art, while our single-shot based HDR formulation provides a novel paradigm for the enhancement of LDR imaging and video sequences.

Proceedings Article
01 Jan 2020
TL;DR: A modulo edge-aware model, named UnModNet, is proposed to iteratively estimate the binary rollover masks of the modulo image for unwrapping; it can generate 12-bit HDR images from 8-bit modulo images reliably and runs much faster than the previous MRF-based algorithm thanks to GPU acceleration.
Abstract: A conventional camera often suffers from over- or under-exposure when recording a real-world scene with a very high dynamic range (HDR). In contrast, a modulo camera with a Markov random field (MRF) based unwrapping algorithm can theoretically accomplish unbounded dynamic range, but shows degraded performance when there is modulus-intensity ambiguity, strong local contrast, or color misalignment. In this paper, we reformulate the modulo image unwrapping problem into a series of binary labeling problems and propose a modulo edge-aware model, named UnModNet, to iteratively estimate the binary rollover masks of the modulo image for unwrapping. Experimental results show that our approach can generate 12-bit HDR images from 8-bit modulo images reliably, and runs much faster than the previous MRF-based algorithm thanks to GPU acceleration.
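
The iterative unwrapping formulation can be sketched as repeatedly adding one rollover period wherever a predicted binary mask is set; the toy mask predictor below is a stand-in for the learned UnModNet.

```python
# Iterative modulo unwrapping: each pass adds back one wrap (256 for 8-bit)
# wherever the binary rollover mask says a wrap is still missing.
import numpy as np
from scipy.ndimage import median_filter

def unwrap_modulo(mod_img, predict_mask, n_iters=16, period=256):
    """mod_img: (H, W) uint8 modulo image; predict_mask: callable -> 0/1 mask."""
    unwrapped = mod_img.astype(np.int32)
    for _ in range(n_iters):
        mask = predict_mask(unwrapped)     # 1 where one more rollover remains
        if not mask.any():
            break
        unwrapped += period * mask         # add back one wrap per iteration
    return unwrapped                       # up to 12-bit HDR intensities

def toy_mask(img):
    # Stand-in for UnModNet: flag pixels much darker than their neighbourhood.
    return (median_filter(img, size=5) - img > 128).astype(np.int32)

hdr = unwrap_modulo(np.random.randint(0, 256, (64, 64), np.uint8), toy_mask)
```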

Proceedings ArticleDOI
04 May 2020
TL;DR: Two different approaches to high dynamic range (HDR) imaging are considered – gamma encoding and modulo encoding – and a combination of deep image prior and total variation (TV) regularization for reconstructing low-light images is proposed.
Abstract: Traditionally, dynamic range enhancement for images has involved a combination of contrast improvement (via gamma correction or histogram equalization) and a denoising operation to reduce the effects of photon noise. More recently, modulo-imaging methods have been introduced for high dynamic range photography to significantly expand dynamic range at the sensing stage itself. The transformation function for both of these problems is highly non-linear, and the image reconstruction procedure is typically non-convex and ill-posed. A popular recent approach is to regularize the above inverse problem via a neural network prior (such as a trained autoencoder), but this requires extensive training over a dataset with thousands of paired regular/HDR image data samples. In this paper, we introduce a new approach for HDR image reconstruction using neural priors that require no training data. Specifically, we employ deep image priors, which have been successfully used for imaging problems such as denoising, super-resolution, inpainting and compressive sensing with promising performance gains over conventional regularization techniques. In this paper, we consider two different approaches to high dynamic range (HDR) imaging – gamma encoding and modulo encoding – and propose a combination of deep image prior and total variation (TV) regularization for reconstructing low-light images. We demonstrate the significant improvement achieved by both of these approaches as compared to traditional dynamic range enhancement techniques.
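
A minimal sketch of the training-free idea for the gamma-encoding case: fit a randomly initialized network (the deep image prior) to the single observed measurement through the forward model, regularized by total variation. The tiny network and weights are assumptions, not the paper's architecture.

```python
# Deep image prior + TV regularization, fitted to one gamma-encoded observation.
import torch
import torch.nn as nn

def tv_loss(x):
    # Anisotropic total variation over the spatial dimensions.
    return (x[..., 1:, :] - x[..., :-1, :]).abs().mean() + \
           (x[..., :, 1:] - x[..., :, :-1]).abs().mean()

net = nn.Sequential(nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(64, 3, 3, padding=1), nn.Softplus())
z = torch.randn(1, 32, 64, 64)                    # fixed random input code
y = torch.rand(1, 3, 64, 64)                      # observed LDR measurement
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for _ in range(200):
    opt.zero_grad()
    hdr = net(z)                                  # DIP output (linear radiance)
    ldr = torch.clamp(hdr, 0, 1) ** (1 / 2.2)     # gamma-encoding forward model
    loss = (ldr - y).pow(2).mean() + 1e-4 * tv_loss(hdr)
    loss.backward()
    opt.step()
```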

Journal ArticleDOI
01 Mar 2020 - Optik
TL;DR: A defect detection method based on HDRI is proposed to solve the visual inspection problem of industrial parts with highly reflective surfaces, using adaptive tone mapping based on a gradient-domain color correction model, adaptive threshold segmentation, and a Haar-like feature extraction algorithm.

Journal ArticleDOI
TL;DR: Proposes a completely blind image quality evaluator for tone-mapped images based on a multi-attribute feature extraction scheme, which shows superior performance compared to the competing metrics.
Abstract: High dynamic range (HDR) imaging enables capturing a wide range of luminance levels existing in real-world scenes. While HDR capturing devices become widespread in the market, the display technology is yet limited in representing full luminance ranges and standard low dynamic range (LDR) displays are currently more prevalent. To visualize the HDR content on traditional displays, tone mapping (TM) operators are introduced that convert HDR content into LDR. The dynamic range compression and different processing steps during TM can lead to loss of scene details, as well as luminance and chrominance changes. Such signal deviations will affect image naturalness and consequently disturb the visual quality of experience. Therefore, research into objective methods for quality evaluation of tone-mapped images has received attention in recent years. In this paper, we propose a completely blind image quality evaluator for tone-mapped images based on a multi-attribute feature extraction scheme. Due to the diversity of TM distortions, various image characteristics are taken into account to develop an effective metric. The features are designed by considering spectral and spatial entropy, detection probability of visual information, image exposure, sharpness, and color properties. The quality-relevant features are then fed into a machine-learning regression framework to pool a quality score. The validation tests on two benchmark datasets reveal the superior performance of the proposed approach compared to the competing metrics.
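
To illustrate the multi-attribute idea, here is a toy feature extractor computing entropy, sharpness, and exposure statistics that a regressor could pool into a score; these stand-in features are assumptions, not the paper's exact descriptors.

```python
# Toy quality-relevant features for a tone-mapped image.
import numpy as np
from scipy.ndimage import laplace

def tm_features(gray):
    """gray: (H, W) tone-mapped luminance in [0, 1]."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 1), density=True)
    p = hist / (hist.sum() + 1e-12)
    entropy = -(p[p > 0] * np.log2(p[p > 0])).sum()    # spatial entropy proxy
    sharpness = np.abs(laplace(gray)).mean()           # detail / sharpness
    exposure = np.mean((gray < 0.02) | (gray > 0.98))  # under/over-exposed ratio
    return np.array([entropy, sharpness, exposure])

# Features from a labelled set would then train a regressor (e.g. sklearn's
# SVR) to pool a quality score, in the spirit of the evaluator above.
feats = tm_features(np.random.rand(64, 64))
```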

Proceedings ArticleDOI
01 Feb 2020
TL;DR: Proposes a method that, in addition to reconstructing details, focuses on image generation time, using image merging to reconstruct more details and wavelet coefficients to reduce data.
Abstract: Despite advances in mobile and photography technology, the images taken by such devices do not resemble the scene in terms of brightness, and details that users see in the scene cannot be seen in the images. Standard devices capture images with a limited dynamic range, so these images have over-exposed and under-exposed areas. To tackle this problem, High Dynamic Range (HDR) imaging algorithms are used, which mostly concentrate on detail reconstruction. In this research, a method is proposed that, in addition to reconstructing details, also focuses on image generation time. For this purpose, image merging is used to reconstruct more details, and wavelet coefficients are used to reduce data. Finally, the proposed algorithm is evaluated using the PSNR and SSIM metrics, and the results show that the proposed method performs well.
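
The wavelet-domain merge can be sketched with PyWavelets: average the approximation coefficients and keep the strongest detail coefficients; the Haar wavelet and fusion rules are assumptions, not the paper's exact choices.

```python
# Wavelet-domain fusion of two exposures: smooth base averaged, details by
# max-magnitude selection, then inverse transform back to the image domain.
import numpy as np
import pywt

def wavelet_fuse(img_a, img_b, wavelet="haar"):
    (ca, da), (cb, db) = pywt.dwt2(img_a, wavelet), pywt.dwt2(img_b, wavelet)
    ca_f = 0.5 * (ca + cb)                              # base: average
    d_f = tuple(np.where(np.abs(x) >= np.abs(y), x, y)  # details: keep strongest
                for x, y in zip(da, db))
    return pywt.idwt2((ca_f, d_f), wavelet)

fused = wavelet_fuse(np.random.rand(64, 64), np.random.rand(64, 64))
```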

Proceedings ArticleDOI
11 Oct 2020
TL;DR: A new per-pixel calculation process is developed to retrieve the illuminance of a target scene with selected regions of interest, which can be used to assist human-centric lighting tasks.
Abstract: This study proposes a fast high dynamic range imaging (HDRI) technique for light measurement to shorten the long capturing time of current camera-aided computational photography widely used in lighting practice. In comparison with the conventional meter measurement, HDRI-assisted lighting measurement is a remote, efficient, affordable yet time-consuming method. The fast HDRI technique increases the film speed (ISO) to speed up the process taking a sequence of low dynamic range images. Since increasing camera’s film speed may introduce more image noise, the possible error rate of the proposed method is evaluated by applying Gaussian noise estimation and impulsive noise detection on the image with different film speeds. In addition, a new per-pixel calculation process is developed to retrieve the illuminance of a target scene with selected regions of interest, which can be used to assist human-centric lighting tasks. Extensive comparative experiments are also conducted to verify the accuracy and efficiency of the proposed method.
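
The per-pixel calculation can be sketched as converting linear HDR RGB to luminance and averaging over a region of interest; the Rec. 709 weights and the 179 lm/W luminous-efficacy constant common in Radiance-style HDR photometry are assumptions about the pipeline.

```python
# Per-pixel photometry from a linear HDR image, pooled over a region of interest.
import numpy as np

def roi_luminance(hdr_rgb, roi):
    """hdr_rgb: (H, W, 3) linear radiance; roi: boolean mask of shape (H, W)."""
    lum = 179.0 * (0.2126 * hdr_rgb[..., 0] +
                   0.7152 * hdr_rgb[..., 1] +
                   0.0722 * hdr_rgb[..., 2])   # luminance per pixel (cd/m^2)
    return lum[roi].mean()                     # mean luminance over the ROI

hdr = np.random.rand(32, 32, 3)
mask = np.zeros((32, 32), bool)
mask[8:24, 8:24] = True                        # selected region of interest
print(roi_luminance(hdr, mask))
```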

Book ChapterDOI
01 Jan 2020
TL;DR: This chapter proposes a novel approach to recognize Traffic Light and Vehicle Signal with high dynamic range imaging and deep learning in real-time, and extends the dual-channel approach to vehicle signal recognition.
Abstract: Use of autonomous vehicles aims to eventually reduce the number of motor vehicle fatalities caused by humans. Deep learning plays an important role in making this possible because it can leverage the huge amount of training data that comes from autonomous car sensors. Automatic recognition of traffic light and vehicle signal is a perception module critical to autonomous vehicles because a deadly car accident could happen if a vehicle fails to follow traffic lights or vehicle signals. A practical Traffic Light Recognition (TLR) or Vehicle Signal Recognition (VSR) faces some challenges, including varying illumination conditions, false positives and long computation time. In this chapter, we propose a novel approach to recognize Traffic Light (TL) and Vehicle Signal (VS) with high dynamic range imaging and deep learning in real-time. Different from existing approaches which use only bright images, we use both high exposure/bright and low exposure/dark images provided by a high dynamic range camera. TL candidates can be detected robustly from low exposure/dark frames because they have a clean dark background. The TL candidates on the consecutive high exposure/bright frames are then classified accurately using a convolutional neural network. The dual-channel mechanism can achieve promising results because it uses undistorted color and shape information of low exposure/dark frames as well as rich texture of high exposure/bright frames. Furthermore, the TLR performance is boosted by incorporating a temporal trajectory tracking method. To speed up the process, a region of interest is generated to reduce the search regions for the TL candidates. The experimental results on a large dual-channel database have shown that our dual-channel approach outperforms the state of the art which uses only bright images. Encouraged by the promising performance of the TLR, we extend the dual-channel approach to vehicle signal recognition. The algorithm reported in this chapter has been integrated into our autonomous vehicle via Data Distribution Service (DDS) and works robustly on real roads.

Journal ArticleDOI
TL;DR: Presents a wide-field imager that models the sky on an adaptive-scale basis and handles sky curvature and the effects of non-coplanar observations with the w-projection method.
Abstract: Sky curvature and non-coplanar effects, caused by low frequencies, long baselines, or small apertures in wide field-of-view instruments such as the Square Kilometre Array (SKA), significantly limit the imaging performance of an interferometric array. High dynamic range imaging essentially requires both an excellent sky model and the correction of imaging factors such as non-coplanar effects. New CLEAN deconvolution with adaptive-scale modeling already has the ability to construct significantly better narrow-band sky models. However, the application of wide-field observations based on modern arrays has not yet been jointly explored. We present a new wide-field imager that can model the sky on an adaptive-scale basis, and the sky curvature and the effects of non-coplanar observations with the w-projection method. The degradation caused by the dirty beam due to incomplete spatial frequency sampling is eliminated during sky model construction by our new method, while the w-projection mainly removes distortion of sources far from the image phase center. Applying our imager to simulated SKA data and the real observation data of the Karl G. Jansky Very Large Array (an SKA pathfinder) suggested that our imager can handle the effects of wide-field observations well and can reconstruct more accurate images. This provides a route for high dynamic range imaging of SKA wide-field observations, which is an important step forward in the development of the SKA imaging pipeline.

Journal ArticleDOI
TL;DR: A high dynamic range (HDR)-based FA imaging modality is presented and shown to outperform standard FA imaging microscopy, narrowing the spread of the propagated error and yielding higher-quality images.
Abstract: Significance: Fluorescence polarization (FP) and fluorescence anisotropy (FA) microscopy are powerful imaging techniques that make it possible to translate the common FP assay capabilities into the in vitro and in vivo cellular domain. As a result, they have found potential for mapping drug–protein or protein–protein interactions. Unfortunately, these imaging modalities are ratiometric in nature and as such they suffer from excessive noise even under regular imaging conditions, preventing accurate image-feature analysis of fluorescent molecules' behaviors. Aim: We present a high dynamic range (HDR)-based FA imaging modality for improving image quality in FA microscopy. Approach: The method exploits ad hoc acquisition schemes to extend the dynamic range of individual FP channels, allowing FA images with increased signal-to-noise ratio to be obtained. Results: A direct comparison between FA images obtained with our method and the standard clearly indicates how an HDR-based FA imaging approach allows high-quality images to be obtained, with the ability to correctly resolve image features at different values of FA and over a substantially higher range of fluorescence intensities. Conclusion: The method presented is shown to outperform standard FA imaging microscopy, narrowing the spread of the propagated error and yielding higher-quality images. The method can be effectively and routinely used on any commercial imaging system and could also be translated to other microscopy ratiometric imaging modalities.
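
For reference, the ratiometric computation that HDR acquisition stabilizes is the standard anisotropy formula r = (I_par - G*I_perp) / (I_par + 2*G*I_perp); the simple exposure merge below is a placeholder for the paper's acquisition scheme.

```python
# Fluorescence anisotropy from two polarization channels, each first merged
# from a bracketed exposure stack (toy mid-tone-weighted HDR merge).
import numpy as np

def hdr_merge(stack, times):
    """stack: (N, H, W) raw exposures; times: (N,) exposure times."""
    w = np.clip(1.0 - np.abs(stack / stack.max() - 0.5) * 2, 1e-3, 1)
    return (w * stack / times[:, None, None]).sum(0) / w.sum(0)

def anisotropy(i_par, i_perp, g=1.0):
    # r = (I_par - G*I_perp) / (I_par + 2*G*I_perp), the standard FA ratio.
    return (i_par - g * i_perp) / (i_par + 2 * g * i_perp + 1e-12)

times = np.array([1.0, 4.0, 16.0])
fa = anisotropy(hdr_merge(np.random.rand(3, 64, 64), times),
                hdr_merge(np.random.rand(3, 64, 64), times))
```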

Proceedings ArticleDOI
06 Jul 2020
TL;DR: This paper proposes a new local tone mapping approach that introduces semantic information, obtained with off-the-shelf semantic segmentation tools, into a novel tone mapping pipeline and adjusts pixel values to a semantic-specific target to reproduce the real-world semantic perception.
Abstract: A Tone Mapping Operator (TMO) aims at reproducing the visual perception of a scene with a high dynamic range (HDR) on low dynamic range (LDR) media. TMOs have primarily aimed to preserve global perception by employing a model of the human visual system (HVS), analysing perceptual attributes of each pixel and adjusting exposure at the pixel level. Preserving semantic perception, also an essential step for HDR rendering, has never been in explicit focus. We argue that explicitly introducing semantic information to create a ‘content and semantic’-aware TMO has the potential to further improve existing approaches. In this paper, we therefore propose a new local tone mapping approach by introducing semantic information using off-the-shelf semantic segmentation tools into a novel tone mapping pipeline. More specifically, we adjust pixel values to a semantic-specific target to reproduce the real-world semantic perception.

Journal ArticleDOI
TL;DR: This paper applies the high dynamic range imaging scheme to improve the Stokes estimation from a single CPFA image and shows qualitative and quantitative results on real data.
Abstract: Color-polarization filter array (CPFA) sensors are able to capture linear polarization and color information in a single shot. For a scene that contains a high dynamic range of irradiance and polarization signatures, some pixel values approach the saturation and noise levels of the sensor. The most common CPFA configuration is overdetermined, and contains four different linear polarization analyzers. Assuming that not all pixel responses are equally reliable in CPFA channels, one can therefore apply the high dynamic range imaging scheme to improve the Stokes estimation from a single CPFA image. Here I present this alternative methodology and show qualitative and quantitative results on real data.
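
The overdetermined estimation can be sketched as per-pixel weighted least squares over the four analyzer channels, down-weighting values near saturation and the noise floor; the weighting scheme is an assumption, not the paper's.

```python
# Weighted least-squares Stokes estimation from four linear analyzers
# (0, 45, 90, 135 degrees), with reliability weights per channel.
import numpy as np

def stokes_wls(i0, i45, i90, i135):
    ch = np.stack([i0, i45, i90, i135])                 # (4, H, W), values in [0, 1]
    angles = np.deg2rad([0, 45, 90, 135])
    # Malus-type model: I(theta) = 0.5 * (S0 + S1*cos(2t) + S2*sin(2t)).
    A = 0.5 * np.stack([np.ones(4), np.cos(2 * angles), np.sin(2 * angles)], 1)
    w = np.clip(1 - np.abs(ch - 0.5) * 2, 1e-3, 1)      # low weight near 0 and 1
    h, wd = i0.shape
    s = np.empty((3, h, wd))
    for y in range(h):
        for x in range(wd):
            W = np.diag(w[:, y, x])
            s[:, y, x] = np.linalg.lstsq(W @ A, W @ ch[:, y, x], rcond=None)[0]
    return s                                            # S0, S1, S2 maps

s = stokes_wls(*np.random.rand(4, 16, 16))
```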

Proceedings ArticleDOI
01 Jan 2020
TL;DR: This work uses a Gaussian mixture model clustering algorithm to estimate the dark and bright distributions in the luminance histogram of the input HDR image and generates two LDR images using two locally-adaptive TFs obtained from the components of each distribution.
Abstract: Tone mapping (TM) algorithms convert high dynamic range (HDR) images into low dynamic range (LDR) images for representation on conventional display devices. Most TM methods compress the dynamic range of input HDR images by using a global transformation function (TF), and then improve local detail by applying contrast enhancement techniques. However, these approaches often fail to restore local detail lost in the dynamic range compression. To solve this problem, we propose a novel image fusion-based TM method. We use a Gaussian mixture model clustering algorithm to estimate the dark and bright distributions in the luminance histogram of the input HDR image. Then, we generate two LDR images using two locally-adaptive TFs obtained from the components of each distribution. Finally, the output image is produced by an image fusion technique employing a brightness weight and a local contrast weight. The experimental results show that the proposed algorithm achieves high performance compared to state-of-the-art methods in terms of detail preservation and brightness adjustment.
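
A rough sketch of the pipeline: fit a two-component Gaussian mixture to the log-luminance values, derive one adaptive curve per component, and blend the two renditions with well-exposedness weights; the photographic-style curves and weights are stand-ins for the paper's TFs.

```python
# GMM-driven two-curve tone mapping with weighted image fusion.
import numpy as np
from sklearn.mixture import GaussianMixture

def gmm_tonemap(hdr_lum):
    """hdr_lum: (H, W) linear HDR luminance, > 0."""
    log_l = np.log10(hdr_lum + 1e-6).reshape(-1, 1)
    gmm = GaussianMixture(n_components=2).fit(log_l)
    means = np.sort(gmm.means_.ravel())                 # dark and bright modes
    ldrs = []
    for m in means:                                     # one adaptive TF per mode
        key = 10 ** m
        ldrs.append(hdr_lum / (hdr_lum + key))          # photographic-style curve
    w = [np.exp(-((l - 0.5) ** 2) / 0.08) + 1e-6 for l in ldrs]
    return (w[0] * ldrs[0] + w[1] * ldrs[1]) / (w[0] + w[1])

ldr = gmm_tonemap(np.random.rand(64, 64) * 100 + 0.01)
```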

Proceedings ArticleDOI
Jiong-hui Song, Qi Li, Huajun Feng, Zhihai Xu, Yueting Chen
05 Nov 2020
TL;DR: Detailed qualitative and quantitative comparisons show that the proposed method, based on dual-focal cameras facing the same target to expand the dynamic range of images, produces excellent results in which ghosting and color artifacts are significantly reduced compared to existing general multi-frame high dynamic range methods.
Abstract: In this paper, we propose a method that uses a dual-focal camera pair facing the same target to expand the dynamic range of images. Since the two cameras have different spatial resolutions, down-sampling, up-sampling, and multi-resolution fusion are required during image fusion to obtain an ideal high dynamic range image. Current multi-frame high dynamic range algorithms mainly target images of similar resolution; when two images differ greatly in resolution, ordinary registration algorithms (for example, optical flow) are of limited use, and the registered image may exhibit ghosting and color artifacts. Our method uses a convolutional neural network composed of two subnets: an image fusion subnet and a style transfer subnet. Because there is only one exposure image in the surrounding field of view, the central field of view is processed separately from the surrounding field of view. In the central field of view, a U-Net registers the images layer by layer to increase registration speed and accuracy. After the high dynamic range image of the central field of view is obtained, the style transfer network transfers its color distribution to the surrounding field of view. We performed extensive qualitative and quantitative comparisons to show that our method produces excellent results, with ghosting and color artifacts significantly reduced compared to existing general multi-frame high dynamic range methods, and that it is robust across various inputs.

Journal ArticleDOI
TL;DR: Proposes the Luminance-Chrominance-Gradient High Dynamic Range (LCGHDR) method to obtain proper luminance values of images, together with an exposure fusion technique built on feature values extracted from the differently exposed images, which helps achieve proper imaging.
Abstract: Many natural scenes have high luminance, which causes loss of information and produces dark images. The High Dynamic Range (HDR) technique captures the same object or scene multiple times at different exposures and produces images with proper illumination. The technique is used in various applications, such as medical imaging and skylight observation. HDR imaging techniques usually suffer from lower efficiency because multiple photos must be captured. In this paper, an efficient HDR imaging method is proposed that achieves better performance and lower noise. The Luminance-Chrominance-Gradient High Dynamic Range (LCGHDR) method is proposed to obtain proper luminance values of images. The same scene, captured at different exposures, is processed by the proposed method, and an exposure fusion technique based on the feature values extracted from the different images was developed to help achieve proper imaging. The experiment was evaluated by comparison with other methods, which showed the efficiency of the proposed approach: it needs only 124.594 seconds of computation, while the existing method needs 139.869 seconds for the same number of images.

Journal ArticleDOI
Abstract: A High Dynamic Range (HDR) image produced from a sequence of low dynamic range (LDR) images can contain motion artefacts (ghosting) if the scene contains moving objects. Conventional d ...


01 Jan 2020
TL;DR: This publication describes systems and techniques directed to enhancing High Dynamic Range (HDR) imaging by identifying and understanding the lighting environment of a scene for image capture, capturing the most important region of interest within a good exposure value.
Abstract: This publication describes systems and techniques directed to enhancing High Dynamic Range (HDR) imaging by identifying and understanding the lighting environment of a scene for image capture. Natural light (e.g., outdoor sky, sunlight) is identified and differentiated from artificial light (e.g., light-emitting diode (LED), fluorescent, halogen, incandescent lighting) for advanced metering and optimal exposure control. Exposure is adjusted relative to the differentiated lighting for final image capture. Regions of the scene are differentiated by mapping different weights to the dynamic range detected. This comprehensive understanding of the scene captures the most important region of interest (e.g., from the viewer’s perspective) within a good exposure value.

Patent
15 Sep 2020
TL;DR: A hybrid camera system comprising a neuromorphic camera and a common camera is built; a low-dynamic-range image shot by the common camera is input into the trained neural network to complete the fusion imaging operation.
Abstract: The invention provides a high dynamic range imaging method. A hybrid camera system comprising a neuromorphic camera and a common camera is built; a low-dynamic-range image shot by the common camera in the built hybrid camera system and a high-dynamic-range grayscale image shot and reconstructed by the neuromorphic camera are input into the trained neural network to complete the fusion imaging operation. According to the method, a single LDR image is fused with the output of a neuromorphic camera, so that the quality of high-dynamic-range imaging greatly exceeds the reconstruction effect of the single LDR image. A deep learning method is used, and a network module is independently designed for each aspect in which an LDR image and an HDR grayscale map differ; compared with a non-deep-learning fusion method, the quality of the fused image is effectively improved. The number of input LDR images is reduced, the difficulty of data acquisition is reduced, problems such as blurring and ghosting are avoided, and the application range of the algorithm is expanded. The invention further provides a high dynamic range imaging device.

Patent
27 Oct 2020
TL;DR: Proposes systems and methods for high dynamic range (HDR) image capture and video processing in mobile devices, including a mobile device, such as a smartphone or digital mobile camera, with at least two image sensors fixed in a co-planar arrangement to a substrate and an optical splitting system configured to reflect at least about 90% of incident light received through an aperture of the mobile device onto the co-planar image sensors.
Abstract: The invention relates to systems and methods for high dynamic range (HDR) image capture and video processing in mobile devices. Aspects of the invention include a mobile device, such as a smartphone or digital mobile camera, including at least two image sensors fixed in a co-planar arrangement to a substrate and an optical splitting system configured to reflect at least about 90% of incident light received through an aperture of the mobile device onto the co-planar image sensors, to thereby capture an HDR image. In some embodiments, greater than about 95% of the incident light received through the aperture of the device is reflected onto the image sensors.