
Showing papers on "High-dynamic-range imaging published in 2021"


Proceedings ArticleDOI
01 Jun 2021
TL;DR: The first challenge on high-dynamic range (HDR) imaging was part of the New Trends in Image Restoration and Enhancement (NTIRE) workshop, held in conjunction with CVPR 2021 as mentioned in this paper.
Abstract: This paper reviews the first challenge on high-dynamic range (HDR) imaging that was part of the New Trends in Image Restoration and Enhancement (NTIRE) workshop, held in conjunction with CVPR 2021. This manuscript focuses on the newly introduced dataset, the proposed methods and their results. The challenge aims at estimating an HDR image from one or multiple respective low-dynamic range (LDR) observations, which might suffer from under- or over-exposed regions and different sources of noise. The challenge is composed of two tracks: In Track 1 only a single LDR image is provided as input, whereas in Track 2 three differently-exposed LDR images with inter-frame motion are available. In both tracks, the ultimate goal is to achieve the best objective HDR reconstruction in terms of PSNR with respect to a ground-truth image, evaluated both directly and with a canonical tonemapping operation.

61 citations
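
For context, the tonemapped evaluation is typically done by comparing μ-law compressed images. A minimal sketch of both scores, assuming the commonly used μ-law with μ = 5000 (the challenge's exact operator and normalization may differ):

```python
import numpy as np

MU = 5000.0  # mu-law compression constant commonly used in HDR evaluation

def mu_tonemap(hdr):
    """Canonical mu-law tonemapping of an HDR image normalized to [0, 1]."""
    return np.log1p(MU * hdr) / np.log1p(MU)

def psnr(pred, target, peak=1.0):
    """Peak signal-to-noise ratio in dB."""
    mse = np.mean((pred - target) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

# Linear-domain PSNR and tonemapped PSNR (often reported as PSNR-mu).
pred = np.random.rand(256, 256, 3)   # stand-in for a reconstructed HDR image
gt = np.random.rand(256, 256, 3)     # stand-in for the ground truth
print(psnr(pred, gt), psnr(mu_tonemap(pred), mu_tonemap(gt)))
```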


Proceedings ArticleDOI
01 Jun 2021
TL;DR: In this paper, an attention-guided deformable convolutional network is proposed for multi-frame high dynamic range (HDR) imaging, which adopts a spatial attention module to adaptively select the most appropriate regions of various-exposure LDR images for fusion.
Abstract: In this paper, we present an attention-guided deformable convolutional network for hand-held multi-frame high dynamic range (HDR) imaging, namely ADNet. This problem comprises two intractable challenges of how to handle saturation and noise properly and how to tackle misalignments caused by object motion or camera jittering. To address the former, we adopt a spatial attention module to adaptively select the most appropriate regions of various-exposure low dynamic range (LDR) images for fusion. For the latter, we propose to align the gamma-corrected images at the feature level with a Pyramid, Cascading and Deformable (PCD) alignment module. The proposed ADNet shows state-of-the-art performance compared with previous methods, achieving a PSNR-l of 39.4471 and a PSNR-μ of 37.6359 in the NTIRE 2021 Multi-Frame HDR Challenge.

57 citations
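
A minimal PyTorch sketch of a spatial attention module in the spirit described above; the layer widths and activations are illustrative guesses, not ADNet's actual configuration:

```python
import torch
import torch.nn as nn

class SpatialAttention(nn.Module):
    """Re-weight non-reference features per pixel according to the
    reference features, suppressing saturated or misaligned regions."""
    def __init__(self, channels=64):
        super().__init__()
        self.att = nn.Sequential(
            nn.Conv2d(2 * channels, channels, 3, padding=1),
            nn.LeakyReLU(0.1, inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.Sigmoid(),
        )

    def forward(self, feat_nonref, feat_ref):
        mask = self.att(torch.cat([feat_nonref, feat_ref], dim=1))
        return feat_nonref * mask  # attention-weighted features for fusion

ref = torch.randn(1, 64, 128, 128)
nonref = torch.randn(1, 64, 128, 128)
print(SpatialAttention()(nonref, ref).shape)  # torch.Size([1, 64, 128, 128])
```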


Journal ArticleDOI
TL;DR: A novel multi-scale fusion framework for low-illumination image enhancement, which effectively enhances images taken under various low-light conditions by employing a novel remapping function to generate a sequence of artificial multi-exposure images.

44 citations


Journal ArticleDOI
TL;DR: A novel, simple yet effective method is proposed for static image exposure fusion based on weight-map extraction via linear embeddings and watershed masking; the main advantage lies in the watershed-masking-based adjustment for obtaining accurate weights for image fusion.

17 citations


Journal ArticleDOI
TL;DR: This work proposes a pair of neural networks that represent mappings between images whose exposure levels are one unit apart (the stop-up/down network), which can restore the full dynamic range of scenes with only two networks and generate photorealistic images in complex lighting situations.
Abstract: Inverse tone mapping aims at recovering the lost scene radiances from a single exposure image. With the successful use of deep learning in numerous applications, many inverse tone mapping methods use convolutional neural networks in a supervised manner. As these approaches are trained with many pre-fixed high dynamic range (HDR) images, they fail to flexibly expand the dynamic ranges of images. To overcome this limitation, we consider a multiple exposure image synthesis approach for HDR imaging. In particular, we propose a pair of neural networks that represent mappings between images that have exposure levels one unit apart (stop-up/down network). Therefore, it is possible to construct two positive-feedback systems to generate images with greater or lesser exposure. Compared to previous works using the conditional generative adversarial learning framework, the stop-up/down network employs HDR-friendly network structures and several techniques to stabilize the training process. Experiments on HDR datasets demonstrate the advantages of the proposed method compared to conventional methods. Consequently, we apply our approach to restore the full dynamic range of scenes with only two networks and generate photorealistic images in complex lighting situations.

12 citations
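
The positive-feedback idea lends itself to a toy sketch: two stand-in networks (hypothetical single convolutions here, not the paper's architectures) are applied recursively to synthesize an exposure stack from one image:

```python
import torch
import torch.nn as nn

# Hypothetical stand-ins for the stop-up/down networks; any image-to-image
# network with matching input/output shapes would slot in here.
stop_up = nn.Conv2d(3, 3, 3, padding=1)
stop_down = nn.Conv2d(3, 3, 3, padding=1)

def synthesize_stack(ldr, n_stops=2):
    """Build a multi-exposure stack by recursively feeding each output
    back into the stop-up (or stop-down) network: a positive-feedback loop."""
    ups, downs, cur = [], [], ldr
    for _ in range(n_stops):
        cur = stop_up(cur)
        ups.append(cur)
    cur = ldr
    for _ in range(n_stops):
        cur = stop_down(cur)
        downs.append(cur)
    return downs[::-1] + [ldr] + ups  # ordered from darkest to brightest

stack = synthesize_stack(torch.rand(1, 3, 64, 64))
print(len(stack))  # 5 exposures synthesized from a single input
```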


Journal ArticleDOI
TL;DR: Zhang et al. as discussed by the authors proposed a dual-attention-guided end-to-end deep neural network, called DAHDRNet, which produces high-quality ghost-free HDR images.
Abstract: Ghosting artifacts caused by moving objects and misalignments are a key challenge in constructing high dynamic range (HDR) images. Current methods first register the input low dynamic range (LDR) images using optical flow before merging them. This process is error-prone, and often causes ghosting in the resulting merged image. We propose a novel dual-attention-guided end-to-end deep neural network, called DAHDRNet, which produces high-quality ghost-free HDR images. Unlike previous methods that directly stack the LDR images or features for merging, we use dual-attention modules to guide the merging according to the reference image. DAHDRNet thus exploits both spatial attention and feature channel attention to achieve ghost-free merging. The spatial attention modules automatically suppress undesired components caused by misalignments and saturation, and enhance the fine details in the non-reference images. The channel attention modules adaptively rescale channel-wise features by considering the inter-dependencies between channels. The dual-attention approach is applied recurrently to further improve feature representation, and thus alignment. A dilated residual dense block is devised to make full use of the hierarchical features and increase the receptive field when hallucinating missing details. We employ a hybrid loss function, which consists of a perceptual loss, a total variation loss, and a content loss to recover photo-realistic images. Although DAHDRNet is not flow-based, it can be applied to flow-based registration to reduce artifacts caused by optical-flow estimation errors. Experiments on different datasets show that the proposed DAHDRNet achieves state-of-the-art quantitative and qualitative results.

8 citations
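
The channel attention DAHDRNet describes is, in its generic form, squeeze-and-excitation-style rescaling; a minimal sketch, with the reduction ratio and layer details assumed:

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Rescale channel-wise features by their inter-dependencies
    (squeeze-and-excitation form; the reduction ratio is a guess)."""
    def __init__(self, channels=64, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),                      # squeeze: global context
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),                                 # excitation: per-channel scale
        )

    def forward(self, x):
        return x * self.fc(x)

x = torch.randn(1, 64, 128, 128)
print(ChannelAttention()(x).shape)  # torch.Size([1, 64, 128, 128])
```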


Journal ArticleDOI
03 Apr 2021-Leukos
TL;DR: A set of recommendations was developed for minimizing possible errors in HDRI luminance measurements, along with recommendations for future research using HDRI.
Abstract: Compared to the use of conventional spot luminance meters, high dynamic range imaging (HDRI) offers significant advantages for luminance measurements in lighting research. Consequently, the reporti...

8 citations


Journal ArticleDOI
TL;DR: In this paper, a simple and effective image contrast enhancement method is proposed to achieve high dynamic range imaging, where the illumination of each pixel is estimated by using an induced norm of a patch of the image.
Abstract: Traditional histogram equalization may cause degraded results of over-enhanced images under uneven illuminations. In this paper, a simple and effective image contrast enhancement method is proposed to achieve high dynamic range imaging. First, the illumination of each pixel is estimated by using an induced norm of a patch of the image. Second, a pre-gamma correction is proposed to enhance the contrast of the illumination component appropriately. The parameters of gamma correction are set dynamically based on the local patch of the image. Third, an automatic Contrast-Limited Adaptive Histogram Equalization (CLAHE) whose clip point is automatically set is applied to the processed image for further image contrast enhancement. Fourth, a noise reduction algorithm based on the local patch is developed to reduce image noise and increase image quality. Finally, a post-gamma correction is applied to slightly enhance the dark regions of images without affecting the brighter areas. Experimental results show that the proposed method is superior to several state-of-the-art enhancement techniques in both qualitative and quantitative evaluations.

7 citations
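
A rough OpenCV sketch of the gamma → CLAHE → gamma pipeline; note that the paper sets the gamma parameters and the CLAHE clip point adaptively from local patches and adds a patch-based denoising step, all of which are replaced by fixed values here:

```python
import cv2
import numpy as np

def enhance(img_bgr, pre_gamma=0.8, clip_limit=2.0, post_gamma=0.9):
    """Fixed-parameter sketch of the pipeline described in the abstract."""
    x = img_bgr.astype(np.float32) / 255.0
    x = np.power(x, pre_gamma)                        # pre-gamma correction
    x8 = (x * 255).astype(np.uint8)
    lab = cv2.cvtColor(x8, cv2.COLOR_BGR2LAB)
    clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=(8, 8))
    lab[..., 0] = clahe.apply(lab[..., 0])            # equalize luminance only
    y = cv2.cvtColor(lab, cv2.COLOR_LAB2BGR).astype(np.float32) / 255.0
    y = np.power(y, post_gamma)                       # post-gamma for dark regions
    return (y * 255).astype(np.uint8)

out = enhance(np.random.randint(0, 256, (120, 160, 3), dtype=np.uint8))
print(out.shape, out.dtype)  # (120, 160, 3) uint8
```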


Proceedings ArticleDOI
17 Oct 2021
TL;DR: In this paper, a multi-step feature fusion method is proposed that gradually fuses features in a stack of identically structured blocks, together with the design of a component block that effectively performs two operations essential to the problem, i.e., comparing and selecting appropriate images/regions.
Abstract: This paper considers the problem of generating an HDR image of a scene from its LDR images. Recent studies employ deep learning and solve the problem in an end-to-end fashion, leading to significant performance improvements. However, it is still hard to generate a good quality image from LDR images of a dynamic scene captured by a hand-held camera, e.g., occlusion due to the large motion of foreground objects, causing ghosting artifacts. The key to success relies on how well we can fuse the input images in their feature space, where we wish to remove the factors leading to low-quality image generation while performing the fundamental computations for HDR image generation, e.g., selecting the best-exposed image/region. We propose a novel method that can better fuse the features based on two ideas. One is multi-step feature fusion; our network gradually fuses the features in a stack of blocks having the same structure. The other is the design of the component block that effectively performs two operations essential to the problem, i.e., comparing and selecting appropriate images/regions. Experimental results show that the proposed method outperforms the previous state-of-the-art methods on the standard benchmark tests.

7 citations
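
An illustrative PyTorch sketch of multi-step fusion through identically structured blocks that compare two feature streams and softly select between them; the paper's actual block design is different and richer:

```python
import torch
import torch.nn as nn

class FusionBlock(nn.Module):
    """One step of gradual fusion: compare the per-exposure features and
    softly select between them (a generic reading of the paper's block)."""
    def __init__(self, c=32):
        super().__init__()
        self.compare = nn.Conv2d(2 * c, 2, 3, padding=1)  # one score per input

    def forward(self, f1, f2):
        w = torch.softmax(self.compare(torch.cat([f1, f2], dim=1)), dim=1)
        fused = w[:, 0:1] * f1 + w[:, 1:2] * f2           # soft selection
        return f1 + fused, f2 + fused                     # gradual mixing

blocks = nn.ModuleList(FusionBlock() for _ in range(4))   # identical structure
f1, f2 = torch.randn(1, 32, 64, 64), torch.randn(1, 32, 64, 64)
for blk in blocks:
    f1, f2 = blk(f1, f2)
print(f1.shape)  # torch.Size([1, 32, 64, 64])
```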


Journal ArticleDOI
TL;DR: Zhang et al. as discussed by the authors proposed a deep learning technique for the seamless fusion of multi-exposed low dynamic range (LDR) images using a focus-pixel sensor.
Abstract: Multi-exposure image fusion inevitably causes ghost artifacts owing to inaccurate image registration. In this study, we propose a deep learning technique for the seamless fusion of multi-exposed low dynamic range (LDR) images using a focus-pixel sensor. For auto-focusing in mobile cameras, a focus-pixel sensor originally provides left (L) and right (R) luminance images simultaneously with a full-resolution RGB image. These L/R images are less saturated than the RGB images because they are summed up to be a normal pixel value in the RGB image of the focus-pixel sensor. These two features of the focus-pixel image, namely relatively short exposure and perfect alignment, are utilized in this study to provide fusion cues for high dynamic range (HDR) imaging. To minimize fusion artifacts, luminance and chrominance fusions are performed separately in two sub-nets. In a luminance recovery network, two heterogeneous images, the focus-pixel image and the corresponding overexposed LDR image, are first fused by joint learning to produce an HDR luminance image. Subsequently, a chrominance network fuses the color components of the misaligned underexposed LDR input to obtain a 3-channel HDR image. Existing deep-neural-network-based HDR fusion methods fuse misaligned multi-exposed inputs directly; they suffer from visual artifacts, observed mostly in saturated regions where pixel values are clipped. In contrast, the proposed method first reconstructs the missing luminance from the aligned, unsaturated focus-pixel image, so that the luma-recovered image provides cues for accurate color fusion. The experimental results show that the proposed method not only accurately restores fine details in saturated areas, but also produces ghost-free, high-quality HDR images without pre-alignment.

6 citations


Posted Content
TL;DR: In this article, an efficient multi-exposure fusion (MEF) approach with a simple yet effective weight extraction method relying on principal component analysis, adaptive well-exposedness and saliency maps was proposed.
Abstract: High dynamic range (HDR) imaging makes it possible to immortalize natural scenes similar to the way they are perceived by human observers. With regular low dynamic range (LDR) capture/display devices, significant details may not be preserved in images due to the huge dynamic range of natural scenes. To minimize the information loss and produce high-quality HDR-like images for LDR screens, this study proposes an efficient multi-exposure fusion (MEF) approach with a simple yet effective weight extraction method relying on principal component analysis, adaptive well-exposedness and saliency maps. These weight maps are later refined through a guided filter, and the fusion is carried out by employing a pyramidal decomposition. Experimental comparisons with existing techniques demonstrate that the proposed method produces very strong statistical and visual results.
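
A compact sketch of the pyramidal fusion path, using only a well-exposedness weight; the paper's PCA and saliency terms and its guided-filter refinement are omitted for brevity:

```python
import cv2
import numpy as np

def well_exposedness(img, sigma=0.2):
    """Gaussian weight around mid-gray, one ingredient of the weight maps."""
    return np.exp(-((img.mean(axis=2) - 0.5) ** 2) / (2 * sigma ** 2))

def fuse(images, levels=4):
    """Blend Laplacian pyramids of the inputs with Gaussian pyramids
    of the normalized weight maps, then collapse the result."""
    ws = np.stack([well_exposedness(im) for im in images])
    ws /= ws.sum(axis=0, keepdims=True) + 1e-12           # normalize weights
    out = None
    for im, w in zip(images, ws):
        gp_w, gp_i = [w], [im]
        for _ in range(levels):
            gp_w.append(cv2.pyrDown(gp_w[-1]))
            gp_i.append(cv2.pyrDown(gp_i[-1]))
        lp = [gp_i[i] - cv2.pyrUp(gp_i[i + 1], dstsize=gp_i[i].shape[1::-1])
              for i in range(levels)] + [gp_i[-1]]
        contrib = [l * gw[..., None] for l, gw in zip(lp, gp_w)]
        out = contrib if out is None else [a + b for a, b in zip(out, contrib)]
    img = out[-1]
    for lvl in reversed(out[:-1]):                        # collapse the pyramid
        img = cv2.pyrUp(img, dstsize=lvl.shape[1::-1]) + lvl
    return np.clip(img, 0, 1)

exposures = [np.random.rand(128, 128, 3).astype(np.float32) for _ in range(3)]
print(fuse(exposures).shape)  # (128, 128, 3)
```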

Journal ArticleDOI
TL;DR: Wang et al. as mentioned in this paper proposed a single-shot high dynamic range (HDR) imaging algorithm with row-wise varying exposures in a single raw image based on a deep convolutional neural network (CNN).
Abstract: We propose a single-shot high dynamic range (HDR) imaging algorithm with row-wise varying exposures in a single raw image based on a deep convolutional neural network (CNN). We first convert a raw Bayer input image into a radiance map by calibrating rows with different exposures, and then we design a new CNN model to restore missing information at the under- and over-exposed pixels and reconstruct color information from the raw radiance map. The proposed CNN model consists of three branch networks to obtain multiscale feature maps for an image. To effectively estimate the high-quality HDR images, we develop a robust loss function that considers the human visual system (HVS) model, color perception model, and multiscale contrast. Experimental results on both synthetic and captured real images demonstrate that the proposed algorithm can achieve synthesis results of significantly higher quality than conventional algorithms in terms of structure, color, and visual artifacts.
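
The calibration step can be illustrated simply: each row is divided by its exposure time so all rows land in a common radiance domain before the CNN restores clipped pixels. The two-rows-short/two-rows-long layout below is an assumption about the sensor pattern:

```python
import numpy as np

def rows_to_radiance(raw, exp_short, exp_long):
    """Calibrate a row-wise dual-exposure Bayer frame into a raw radiance
    map by dividing each row by its exposure time (row layout assumed)."""
    rad = raw.astype(np.float32)
    for r in range(rad.shape[0]):
        t = exp_short if (r // 2) % 2 == 0 else exp_long  # rows 0-1 short, 2-3 long, ...
        rad[r] /= t
    return rad  # under/over-exposed pixels are then restored by the CNN

raw = np.random.randint(0, 1024, (8, 8), dtype=np.uint16)  # toy Bayer frame
print(rows_to_radiance(raw, exp_short=1 / 500, exp_long=1 / 60).shape)
```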

Proceedings ArticleDOI
Pengfei Xiong1, Yu Chen1
17 Oct 2021
TL;DR: Xiong et al. as mentioned in this paper decompose HDR imaging into ghost-free image fusion and ghost-based image restoration, and propose a novel practical Hierarchical Fusion Network (HFNet), which contains three sub-networks: Mask Fusion Network, Mask Compensation Network, and Refine Network.
Abstract: Ghosting artifacts and missing content due to over-/under-saturated regions caused by misalignments are generally considered the two key challenges in high dynamic range (HDR) imaging for dynamic scenes. Previous CNN-based methods directly reconstruct the HDR image from the input low dynamic range (LDR) images, with implicit ghost removal and multi-exposure image fusion in an end-to-end network structure. In this paper, we instead decompose HDR imaging into ghost-free image fusion and ghost-based image restoration, and propose a novel practical Hierarchical Fusion Network (HFNet), which contains three sub-networks: Mask Fusion Network, Mask Compensation Network, and Refine Network. Specifically, LDR images are linearly fused in the Mask Fusion Network, ignoring the misaligned regions. The ghost regions of the fused image are then restored with mask compensation. Finally, all these results are refined in the third network. This divide-and-rule strategy makes the proposed method significantly smaller than previous methods. Experiments on different datasets show the superior efficiency of HFNet, with 9x fewer FLOPs, 4x fewer parameters, and 3x faster inference than existing methods while providing comparable accuracy; it achieves state-of-the-art quantitative and qualitative results when applied with similar FLOPs.

Posted Content
TL;DR: APNT-Fusion as discussed by the authors proposes an attention-guided progressive neural texture fusion (APNT)-based HDR restoration model which aims to address content association ambiguities caused by saturation, motion, and various artifacts introduced during multi-exposure fusion such as ghosting, noise and blur.
Abstract: High Dynamic Range (HDR) imaging via multi-exposure fusion is an important task for most modern imaging platforms. In spite of recent developments in both hardware and algorithm innovations, challenges remain over content association ambiguities caused by saturation, motion, and various artifacts introduced during multi-exposure fusion such as ghosting, noise, and blur. In this work, we propose an Attention-guided Progressive Neural Texture Fusion (APNT-Fusion) HDR restoration model which aims to address these issues within one framework. An efficient two-stream structure is proposed which separately focuses on texture feature transfer over saturated regions and multi-exposure tonal and texture feature fusion. A neural feature transfer mechanism is proposed which establishes spatial correspondence between different exposures based on multi-scale VGG features in the masked saturated HDR domain for discriminative contextual clues over the ambiguous image areas. A progressive texture blending module is designed to blend the encoded two-stream features in a multi-scale and progressive manner. In addition, we introduce several novel attention mechanisms, i.e., the motion attention module detects and suppresses the content discrepancies among the reference images; the saturation attention module facilitates differentiating the misalignment caused by saturation from those caused by motion; and the scale attention module ensures texture blending consistency between different coder/decoder scales. We carry out comprehensive qualitative and quantitative evaluations and ablation studies, which validate that these novel modules work coherently under the same framework and outperform state-of-the-art methods.

Proceedings Article
18 May 2021
TL;DR: Zhang et al. as mentioned in this paper proposed a fully differentiable high dynamic range imaging (HDRI) process, which enables a neural network that generates the multiple exposure stack for HDRI to train stably.
Abstract: Recently, high dynamic range (HDR) image reconstruction based on a multiple exposure stack from a given single exposure utilizes a deep learning framework to generate high-quality HDR images. These conventional networks focus on the exposure transfer task to reconstruct the multi-exposure stack; therefore, they often fail to fuse the multi-exposure stack into a perceptually pleasant HDR image, as inversion artifacts occur. We tackle this problem in stack-reconstruction-based methods by proposing a novel framework with a fully differentiable high dynamic range imaging (HDRI) process. By explicitly using a loss that compares the network's output with the ground-truth HDR image, our framework enables a neural network that generates the multiple exposure stack for HDRI to train stably. In other words, our differentiable HDR synthesis layer helps the deep neural network learn to create multi-exposure stacks while reflecting the precise correlations between multi-exposure images in the HDRI process. In addition, our network uses image decomposition and a recursive process to facilitate the exposure transfer task and to adaptively respond to the recursion frequency. The experimental results show that the proposed network outperforms state-of-the-art methods, both quantitatively and qualitatively, on the exposure transfer task and on the whole HDRI process.
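
A minimal differentiable HDR synthesis layer in this spirit, written as a soft Debevec-style weighted merge; the paper's actual layer and weighting scheme may differ:

```python
import torch

def differentiable_hdr_merge(stack, exposures, eps=1e-6):
    """Merge an exposure stack into radiance with a soft hat weighting;
    every op is differentiable, so a loss on the merged HDR image can
    back-propagate into the network that generated the stack."""
    stack = torch.stack(stack)                         # (N, B, C, H, W) in [0, 1]
    t = torch.tensor(exposures).view(-1, 1, 1, 1, 1)   # exposure times
    w = torch.exp(-4 * (stack - 0.5) ** 2 / 0.5 ** 2)  # favor mid-tone pixels
    return (w * stack / t).sum(0) / (w.sum(0) + eps)

ldrs = [torch.rand(1, 3, 64, 64, requires_grad=True) for _ in range(3)]
hdr = differentiable_hdr_merge(ldrs, exposures=[1 / 8, 1 / 2, 2.0])
hdr.mean().backward()                                  # gradients reach the stack
print(ldrs[0].grad is not None)  # True
```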

Journal ArticleDOI
TL;DR: A semi-automated, robust framework is presented for monitoring luminance within the FOV of occupants using an HDRI camera installed in a non-intrusive position; it showed reasonable performance in re-projecting the indoor luminance map as perceived from the occupant's position, except for sunlit projections from unshaded windows.

Journal ArticleDOI
13 Sep 2021-Sensors
TL;DR: In this paper, the authors describe a near-infrared thermal imaging system operating at a wavelength of 940 nm based on a commercial photovoltaic mode high dynamic range camera and analyse its measurement uncertainty.
Abstract: The measurement of a wide temperature range in a scene requires hardware capable of high dynamic range imaging. We describe a novel near-infrared thermal imaging system operating at a wavelength of 940 nm based on a commercial photovoltaic-mode high dynamic range camera and analyse its measurement uncertainty. The system is capable of measuring over an unprecedentedly wide temperature range; however, this comes at the cost of reduced temperature resolution and increased uncertainty compared to a conventional CMOS camera operating in photodetective mode. Despite this, the photovoltaic-mode thermal camera has an acceptable level of uncertainty for most thermal imaging applications, with an NETD of 4–12 °C and a combined measurement uncertainty of approximately 1% K if a low pixel clock is used. We discuss the various sources of uncertainty and how they might be minimised to further improve the performance of the thermal camera. The thermal camera is a good choice for low-frame-rate imaging applications that have a wide inter-scene temperature range.

Journal ArticleDOI
01 May 2021
TL;DR: In this article, a parametric filtering approach based on the Savitzky–Golay filter is proposed to generate the alpha matte coefficients required for fusing the input multiple-exposure set.
Abstract: The problem of compositing multiple exposure images has attracted many researchers over the past years. It all began with the problem of High Dynamic Range (HDR) imaging, for capturing scenes with vast differences in their dynamic range. Fine details in all areas of such scenes cannot be captured with a single exposure setting of the camera aperture. This leads to multiple exposure images, with each image containing an accurate representation of different regions of the scene: dimly lit, well lit and brightly lit. One can make a combined HDR image out of these multiple exposure shots. This combination leads to an image of a higher dynamic range in a different image format, which cannot be represented in the traditional Low Dynamic Range (LDR) formats. Moreover, HDR images cannot be displayed on traditional display devices suitable for LDR. So these images have to undergo a process called tone mapping to convert them into a form suitable for usual LDR displays. An approach based on edge-preserving Savitzky–Golay parametric filtering is proposed, which uses filtered multiple exposure images to generate the alpha matte coefficients required for fusing the input multiple exposure set. The coefficients generated in the proposed approach help in retaining the weak edges and fine textures which are lost as a result of under- and over-exposure. The proposed approach is similar in nature to the bilateral-filter-based compositing approach for multiple exposure images in the literature, but it is novel in exploring the possibility of compositing using a parametric filtering approach. The proposed approach performs the fusion in the LDR domain, and the fused output can be displayed using standard LDR image formats on standard LDR displays. A brief comparison of the results generated by the proposed method and various other approaches, including traditional exposure fusion, tone-mapping-based techniques and the bilateral-filter-based approach, is presented, wherein the proposed method compares well and fares better in the majority of test cases.
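
A small sketch of the idea, assuming a separable 2-D Savitzky–Golay smoother and a simple mid-tone matte formula (both stand-ins; the paper's matte generation is more involved):

```python
import numpy as np
from scipy.signal import savgol_filter

def sg_smooth(img, window=11, order=3):
    """Separable 2-D Savitzky-Golay smoothing: filter rows, then columns."""
    s = savgol_filter(img, window, order, axis=0)
    return savgol_filter(s, window, order, axis=1)

def fuse(exposures):
    """Alpha mattes from SG-filtered luminance; weights favor mid-tones."""
    lum = [e.mean(axis=2) for e in exposures]
    w = np.stack([np.exp(-((sg_smooth(l) - 0.5) ** 2) / 0.08) for l in lum])
    w /= w.sum(axis=0, keepdims=True) + 1e-12          # normalize the mattes
    return sum(wi[..., None] * e for wi, e in zip(w, exposures))

stack = [np.random.rand(64, 64, 3) for _ in range(3)]
print(fuse(stack).shape)  # (64, 64, 3)
```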

Journal ArticleDOI
Xiaomei Guan1, Xinghua Qu1, Bin Niu1, Zhang Yuanjun1, Fumin Zhang1 
TL;DR: An absolute phase mapping method based on the first partial derivatives of phase-shifted composite fringes for high dynamic range imaging by photographing the surface of highlighted object is proposed, which verified the validity and practicability of the method.

Journal ArticleDOI
TL;DR: In this article, a conditional adversarial generative network composed of a U-Net generator and patchGAN discriminator was designed to adaptively convert HDR images into low-dynamic range (LDR) images.
Abstract: Tone mapping is one of the main techniques to convert high-dynamic range (HDR) images into low-dynamic range (LDR) images. We propose to use a variant of generative adversarial networks to adaptively tone map images. We designed a conditional adversarial generative network composed of a U-Net generator and patchGAN discriminator to adaptively convert HDR images into LDR images. We extended previous work to include additional metrics such as tone-mapped image quality index (TMQI), structural similarity index measure, Frechet inception distance, and perceptual path length. In addition, we applied face detection on the Kalantari dataset and showed that our proposed adversarial tone mapping operator generates the best LDR image for the detection of faces. One of our training schemes, trained via 256 × 256 resolution HDR–LDR image pairs, results in a model that can generate high TMQI low-resolution 256 × 256 and high-resolution 1024 × 2048 LDR images. Given 1024 × 2048 resolution HDR images, the TMQI of the generated LDR images reaches a value of 0.90, which outperforms all other contemporary tone mapping operators.

Journal ArticleDOI
TL;DR: In this paper, a real-time high dynamic range (HDR) imaging and display method based on correlated double sampling is proposed for short-wave infrared (SWIR) cameras, in order to effectively improve their range of brightness and contrast and to obtain more image details.
Abstract: A real-time high dynamic range (HDR) imaging and display method based on correlated double sampling is proposed for short-wave infrared (SWIR) cameras in order to effectively improve their range of brightness and contrast, as well as to obtain more image details. The method utilizes the correlated double sampling technique of the SWIR detector to extend the 14-bit raw image into a 16-bit HDR image, achieving a four-fold extension of the dynamic range. Subsequently, a dynamic range compression process, including logarithmic mapping and histogram equalization, is performed so that the 16-bit HDR image can be mapped to an 8-bit display. Finally, the experimental results show that the method can enrich the details of SWIR images while ensuring real-time imaging.
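
The display path reads as logarithmic mapping of the 16-bit HDR frame followed by histogram equalization down to 8 bits; a minimal sketch with assumed mapping constants:

```python
import cv2
import numpy as np

def compress_to_8bit(hdr16):
    """Log mapping followed by histogram equalization, mirroring the
    dynamic range compression step described in the abstract."""
    x = hdr16.astype(np.float32)
    log = np.log1p(x) / np.log1p(65535.0)       # logarithmic mapping to [0, 1]
    img8 = (log * 255).astype(np.uint8)
    return cv2.equalizeHist(img8)               # global histogram equalization

frame = np.random.randint(0, 65536, (256, 320), dtype=np.uint16)
print(compress_to_8bit(frame).dtype)  # uint8
```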

Patent
16 Jun 2021
TL;DR: In this paper, a rolling shutter is used for high dynamic range (HDR) imaging, where the first image data is captured using a first image sensor and a first exposure time, and the second image data are captured using one or more second image sensors with shorter exposure times than the first exposure times.
Abstract: Methods and apparatus, including computer program products, for high dynamic range (HDR) imaging. First image data is captured using a first image sensor and a first exposure time, using a rolling shutter such that different lines within the first image data are captured at different first capture times. Two or more instances of second image data are captured using one or more second image sensors and one or more second exposure times that are shorter than the first exposure time. The two or more instances of second image data are captured using a rolling shutter, and overlap at least in part with the first image data. A line of the first image data has a corresponding line in each instance of second image data, and the corresponding lines in the different instances of second image data are captured at different second capture times. For a line in the first image data, the corresponding line from the instance of second image data whose second capture time is closest to the first capture time, is selected to be merged with the line in the first image data to generate a high dynamic range image.
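
The line-selection logic reads naturally as a per-line nearest-in-time match; a toy sketch, where the 50/50 blend is a placeholder since the abstract leaves the merge function open:

```python
import numpy as np

def merge_lines(long_img, long_times, short_imgs, short_times):
    """For each rolling-shutter line of the long exposure, pick the short
    instance whose capture time for that line is closest, then blend."""
    out = np.empty_like(long_img, dtype=np.float32)
    for r in range(long_img.shape[0]):
        dt = [abs(t[r] - long_times[r]) for t in short_times]
        best = int(np.argmin(dt))                   # temporally closest instance
        out[r] = 0.5 * long_img[r] + 0.5 * short_imgs[best][r]
    return out

h, w = 6, 8
long_img = np.random.rand(h, w)
long_times = np.linspace(0.0, 5.0, h)               # per-line capture times (ms)
shorts = [np.random.rand(h, w) for _ in range(2)]
short_times = [np.linspace(0.2, 5.2, h), np.linspace(2.5, 7.5, h)]
print(merge_lines(long_img, long_times, shorts, short_times).shape)
```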


Journal ArticleDOI
TL;DR: A machine learning solution is proposed that avoids High Dynamic Range Imaging computations (radiance map estimation, tone-mapping algorithms and quality measure calculation) and produces images suitable for real-time road lane detection facing sun glare.
Abstract: There are several studies on road lane detection, but very few address adverse acquisition conditions such as sun glare. Loss of details in underexposed images captured facing a low sun leads to misleading road lane detection. High Dynamic Range Imaging methods are used to acquire most details in such scenes. Unfortunately, these techniques are computationally heavy and therefore unsuitable for real-time road lane detection. In this paper, we propose a machine learning solution that avoids the High Dynamic Range Imaging computations, namely the radiance map estimation, tone-mapping algorithms and quality measure calculation. We train a neural network on a High Dynamic Range Imaging dataset. The resulting model produces suitable images for road lane detection facing sun glare, in real time. Subjective and objective comparisons with the most popular High Dynamic Range Imaging method, the Mertens algorithm, are conducted to prove the effectiveness of the proposed neural network. The delivered images demonstrated an improvement in road lane detection.
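
For reference, the Mertens exposure-fusion baseline the paper compares against is available directly in OpenCV; a minimal usage sketch:

```python
import cv2
import numpy as np

# Stand-ins for a three-image exposure bracket of the same scene.
exposures = [np.random.randint(0, 256, (120, 160, 3), dtype=np.uint8)
             for _ in range(3)]
merge = cv2.createMergeMertens()
fused = merge.process(exposures)                    # float32, roughly in [0, 1]
ldr8 = np.clip(fused * 255, 0, 255).astype(np.uint8)
print(ldr8.shape, ldr8.dtype)  # (120, 160, 3) uint8
```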

Proceedings ArticleDOI
01 Apr 2021
TL;DR: Zhang et al. as discussed by the authors proposed a novel attention-guided neural network (ADeepHDR) to produce high-quality ghost-free HDR images, which uses the attention module to guide the process of image merging.
Abstract: In multi-exposure image fusion (MEF) of natural scenes, high dynamic range (HDR) imaging is often affected by moving objects or misalignments in the scene, resulting in ghosting artifacts in the final results even with the help of optical-flow methods and deep network architectures. To better avoid ghosting artifacts, we propose a novel attention-guided neural network (ADeepHDR) to produce high-quality ghost-free HDR images. Unlike previous methods, we use an attention module to guide the process of image merging. The attention module can detect the large motions and the notable parts of the different input features and enhance details in the results. Based on the attention module, we also try different subnetwork variants to make full use of the hierarchical features and obtain better results. In addition, fractional-order differential convolution is used in the subnetwork variants to extract more detailed features. The proposed ADeepHDR requires no optical flow, and thus better avoids the ghosting artifacts caused by erroneous optical-flow estimation and large motions. We have conducted extensive quantitative and qualitative assessments, and show that the proposed method is superior to most state-of-the-art approaches.

Proceedings ArticleDOI
01 Oct 2021
TL;DR: In this article, a high throughput imaging method combined with structured illumination based on a digital micro-mirror device (DMD) was proposed to achieve a high dynamic range (HDR).
Abstract: In many biological systems, the sample's morphological information cannot be perfectly captured by a single image acquisition due to the limited dynamic range of the detector. Here, we propose a high-throughput imaging method combined with structured illumination based on a digital micro-mirror device (DMD) to achieve a high dynamic range (HDR). Furthermore, we demonstrate that the HDR imaging method enables depth sectioning by applying HiLo microscopy to the same experimental system.
