Open Access Journal Article

Single image HDR reconstruction using a CNN with masked features and perceptual loss

TLDR
This paper proposes a feature masking mechanism that reduces the contribution of features from saturated areas, combined with an adapted VGG-based perceptual loss, enabling the network to synthesize plausible textures and reconstruct visually pleasing HDR results.
Abstract
Digital cameras can only capture a limited range of real-world scenes' luminance, producing images with saturated pixels. Existing single image high dynamic range (HDR) reconstruction methods attempt to expand the range of luminance, but are not able to hallucinate plausible textures, producing results with artifacts in the saturated areas. In this paper, we present a novel learning-based approach to reconstruct an HDR image by recovering the saturated pixels of an input LDR image in a visually pleasing way. Previous deep learning-based methods apply the same convolutional filters on well-exposed and saturated pixels, creating ambiguity during training and leading to checkerboard and halo artifacts. To overcome this problem, we propose a feature masking mechanism that reduces the contribution of the features from the saturated areas. Moreover, we adapt the VGG-based perceptual loss function to our application to be able to synthesize visually pleasing textures. Since the number of HDR images available for training is limited, we propose to train our system in two stages. Specifically, we first train our system on a large number of images for the image inpainting task and then fine-tune it on HDR reconstruction. Since most of the HDR examples contain smooth regions that are simple to reconstruct, we propose a sampling strategy to select challenging training patches during the HDR fine-tuning stage. We demonstrate through experimental results that our approach can reconstruct visually pleasing HDR results, better than the current state of the art, on a wide range of scenes.
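To make the feature masking idea concrete, below is a minimal, hypothetical PyTorch-style sketch: convolution features are multiplied by a validity mask so that saturated pixels contribute less, and the mask is propagated to deeper layers. The class name, saturation threshold, and mask-update rule are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MaskedConv(nn.Module):
    """Illustrative masked convolution: features from saturated pixels are
    down-weighted so the filter response is dominated by well-exposed content.
    This is a sketch of the general idea, not the paper's exact layer."""

    def __init__(self, in_ch, out_ch, ksize=3):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, ksize, padding=ksize // 2)

    def forward(self, feat, mask):
        # mask: (B, 1, H, W), 1 for well-exposed pixels, 0 for saturated ones
        out = self.conv(feat * mask)
        # Propagate a dilated mask so deeper layers still know where
        # information was missing (assumed update rule).
        with torch.no_grad():
            mask = F.max_pool2d(mask, kernel_size=3, stride=1, padding=1)
        return out, mask

if __name__ == "__main__":
    ldr = torch.rand(1, 3, 64, 64)  # LDR input in [0, 1]
    # Assume pixels near the top of the range are saturated (threshold 0.95).
    mask = (ldr.max(dim=1, keepdim=True).values < 0.95).float()
    out, mask = MaskedConv(3, 32)(ldr, mask)
    print(out.shape, mask.shape)
```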


Citations
Proceedings Article

ADNet: Attention-guided Deformable Convolutional Network for High Dynamic Range Imaging

TL;DR: In this paper, an attention-guided deformable convolutional network is proposed for multi-frame high dynamic range (HDR) imaging, which adopts a spatial attention module to adaptively select the most appropriate regions of the various-exposure LDR images for fusion.
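As a rough illustration of the spatial attention idea (not ADNet's actual architecture; the layer sizes and names are assumptions), an attention map computed from reference and non-reference features can re-weight the non-reference features before fusion:

```python
import torch
import torch.nn as nn

class SpatialAttentionFusion(nn.Module):
    """Hypothetical sketch: an attention map, computed jointly from the
    reference-exposure and another-exposure feature maps, suppresses regions
    of the other exposure that are poorly exposed or misaligned."""

    def __init__(self, ch=64):
        super().__init__()
        self.att = nn.Sequential(
            nn.Conv2d(2 * ch, ch, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1),
            nn.Sigmoid(),  # per-pixel, per-channel weights in [0, 1]
        )

    def forward(self, ref_feat, other_feat):
        weights = self.att(torch.cat([ref_feat, other_feat], dim=1))
        return other_feat * weights  # attended non-reference features

if __name__ == "__main__":
    ref, other = torch.rand(1, 64, 32, 32), torch.rand(1, 64, 32, 32)
    print(SpatialAttentionFusion()(ref, other).shape)
```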
Proceedings Article

HDRUNet: Single Image HDR Reconstruction with Denoising and Dequantization

TL;DR: This work proposes a novel learning-based approach using a spatially dynamic encoder-decoder network, HDRUNet, to learn an end-to-end mapping for single image HDR reconstruction with denoising and dequantization, achieving state-of-the-art performance in quantitative comparisons and visual quality.
Proceedings Article

Comparison of single image HDR reconstruction methods — the caveats of quality assessment

TL;DR: This work compared six recent single image HDR reconstruction methods in a subjective image quality experiment on an HDR display and found that only two methods produced results that are, on average, preferred over the unprocessed single-exposure images.
Posted Content

HDR Video Reconstruction: A Coarse-to-fine Network and A Real-world Benchmark Dataset

TL;DR: In this paper, a coarse-to-fine deep learning framework is proposed that first estimates a coarse HDR video and then performs more sophisticated alignment and temporal fusion in the feature space of the coarse video to produce a better reconstruction.
References
Book Chapter

U-Net: Convolutional Networks for Biomedical Image Segmentation

TL;DR: Ronneberger et al. proposed a network and training strategy that relies on the strong use of data augmentation to use the available annotated samples more efficiently; it can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopic stacks.
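For context, here is a toy two-level encoder-decoder with a skip connection in the spirit of U-Net; channel counts and depth are deliberately small and are assumptions, not the original architecture:

```python
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )

class TinyUNet(nn.Module):
    """Two-level toy U-Net: encoder, bottleneck, decoder with a skip
    connection concatenating encoder features into the decoder."""

    def __init__(self, in_ch=1, num_classes=2):
        super().__init__()
        self.enc = conv_block(in_ch, 16)
        self.down = nn.MaxPool2d(2)
        self.bottleneck = conv_block(16, 32)
        self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.dec = conv_block(32, 16)          # 16 upsampled + 16 skipped channels
        self.head = nn.Conv2d(16, num_classes, 1)

    def forward(self, x):
        skip = self.enc(x)                      # full-resolution features
        mid = self.bottleneck(self.down(skip))  # half-resolution features
        up = self.up(mid)                       # back to full resolution
        return self.head(self.dec(torch.cat([up, skip], dim=1)))

if __name__ == "__main__":
    print(TinyUNet()(torch.rand(1, 1, 64, 64)).shape)  # (1, 2, 64, 64)
```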
Journal Article

Image Super-Resolution Using Deep Convolutional Networks

TL;DR: Dong et al. proposed a deep learning method for single image super-resolution (SR) that directly learns an end-to-end mapping between the low- and high-resolution images.
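The end-to-end mapping can be sketched as a three-layer fully convolutional network operating on a bicubically upscaled input (roughly the commonly cited 9-1-5 setting; treat this as an illustration rather than the authors' released model):

```python
import torch
import torch.nn as nn

class SRCNNSketch(nn.Module):
    """Three-stage mapping: patch extraction, non-linear mapping,
    reconstruction. Filter sizes follow the commonly cited 9-1-5 setting."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 64, 9, padding=4), nn.ReLU(inplace=True),  # patch extraction
            nn.Conv2d(64, 32, 1), nn.ReLU(inplace=True),            # non-linear mapping
            nn.Conv2d(32, 1, 5, padding=2),                          # reconstruction
        )

    def forward(self, x):
        # x: bicubically upscaled low-resolution luminance channel
        return self.net(x)

if __name__ == "__main__":
    print(SRCNNSketch()(torch.rand(1, 1, 64, 64)).shape)  # (1, 1, 64, 64)
```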
Proceedings Article

Image Style Transfer Using Convolutional Neural Networks

TL;DR: A Neural Algorithm of Artistic Style is introduced that can separate and recombine the image content and style of natural images, providing new insights into the deep image representations learned by convolutional neural networks and demonstrating their potential for high-level image synthesis and manipulation.
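This line of work underlies the VGG-based perceptual loss mentioned in the abstract above. A minimal sketch of such a loss follows, comparing prediction and target in the feature space of a frozen VGG-16; only the content/feature term is shown, the Gram-matrix style term is omitted, and the chosen layer indices are assumptions:

```python
import torch
import torch.nn as nn
from torchvision import models

class VGGPerceptualLoss(nn.Module):
    """Compare images in VGG-16 feature space instead of pixel space.
    Inputs are assumed to be ImageNet-normalized RGB tensors."""

    def __init__(self, layers=(3, 8, 15)):  # relu1_2, relu2_2, relu3_3
        super().__init__()
        vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1).features.eval()
        for p in vgg.parameters():
            p.requires_grad_(False)
        self.vgg = vgg
        self.layers = set(layers)

    def forward(self, pred, target):
        loss, x, y = 0.0, pred, target
        for i, layer in enumerate(self.vgg):
            x, y = layer(x), layer(y)
            if i in self.layers:
                loss = loss + nn.functional.l1_loss(x, y)
            if i >= max(self.layers):
                break  # no need to run deeper layers
        return loss

if __name__ == "__main__":
    pred, target = torch.rand(1, 3, 64, 64), torch.rand(1, 3, 64, 64)
    print(VGGPerceptualLoss()(pred, target).item())
```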
Proceedings Article

Learning Deep Features for Scene Recognition using Places Database

TL;DR: A new scene-centric database called Places, containing over 7 million labeled pictures of scenes, is introduced along with new methods to compare the density and diversity of image datasets; Places is shown to be as dense as other scene datasets while offering more diversity.
Book Chapter

Colorful Image Colorization

TL;DR: This paper proposes a fully automatic approach to colorization that produces vibrant and realistic colorizations and shows that colorization can be a powerful pretext task for self-supervised feature learning, acting as a cross-channel encoder.