Proceedings ArticleDOI

LLCNN: A convolutional neural network for low-light image enhancement

Li Tao1, Chuang Zhu1, Guoqing Xiang1, Yuan Li1, Huizhu Jia1, Xiaodong Xie1 
01 Dec 2017, pp. 1-4
TL;DR: A CNN-based method for low-light image enhancement, with a special module that utilizes multi-scale feature maps and avoids the vanishing-gradient problem; results demonstrate that the method outperforms other contrast-enhancement methods.
Abstract: In this paper, we propose a CNN-based method for low-light image enhancement. We design a special module that utilizes multi-scale feature maps and also helps avoid the vanishing-gradient problem. To preserve image textures as much as possible, we train our model with an SSIM loss. The contrast of low-light images can be adaptively enhanced by our method. Results demonstrate that our CNN-based method outperforms other contrast-enhancement methods.
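The SSIM loss used for training can be made concrete with a short sketch. The following is a minimal PyTorch version (the framework is an assumption; the paper does not publish its implementation), using a simple average-pooling window rather than the Gaussian window of the standard SSIM definition:

    # Minimal sketch of an SSIM-based loss (assumed PyTorch; simplified
    # mean-pooling window instead of the usual Gaussian window).
    import torch
    import torch.nn.functional as F

    def ssim_loss(x, y, window_size=11, c1=0.01 ** 2, c2=0.03 ** 2):
        """Return 1 - mean SSIM for image batches x, y of shape (N, C, H, W) in [0, 1]."""
        pad = window_size // 2
        mu_x = F.avg_pool2d(x, window_size, stride=1, padding=pad)
        mu_y = F.avg_pool2d(y, window_size, stride=1, padding=pad)
        var_x = F.avg_pool2d(x * x, window_size, stride=1, padding=pad) - mu_x ** 2
        var_y = F.avg_pool2d(y * y, window_size, stride=1, padding=pad) - mu_y ** 2
        cov_xy = F.avg_pool2d(x * y, window_size, stride=1, padding=pad) - mu_x * mu_y
        ssim_map = ((2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)) / (
            (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))
        return 1.0 - ssim_map.mean()

Minimizing 1 - SSIM compares local luminance, contrast, and structure rather than raw per-pixel differences, which is why it tends to preserve textures better than a plain MSE loss.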
Citations
Journal ArticleDOI
TL;DR: A comparative study of deep learning techniques for image denoising, classifying deep convolutional neural networks into those for additive white noisy images, real noisy images, blind denoising, and hybrid noisy images.

518 citations


Cites background or methods from "LLCNN: A convolutional neural network for low-light image enhancement"

  • ...For example, a CNN comprising convolution, ReLU and RL employed different phase features to enhance the expressive ability of the low-light image denoising model [177]....


  • ...(2019) [177]: CNN with ReLU and RL for real noisy image denoising and low-light image enhancement....


Proceedings Article
01 Jan 2018
TL;DR: The proposed multi-branch low-light enhancement network (MBLLEN) outperforms state-of-the-art techniques by a large margin and can be directly extended to handle low-light videos.
Abstract: We present a deep learning based method for low-light image enhancement. This problem is challenging due to the difficulty in handling various factors simultaneously, including brightness, contrast, artifacts and noise. To address this task, we propose the multi-branch low-light enhancement network (MBLLEN). The key idea is to extract rich features up to different levels, so that we can apply enhancement via multiple subnets and finally produce the output image via multi-branch fusion. In this manner, image quality is improved from different aspects. Through extensive experiments, our proposed MBLLEN is found to outperform the state-of-the-art techniques by a large margin. We additionally show that our method can be directly extended to handle low-light videos.
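As a rough illustration of the multi-branch idea, the sketch below (illustrative PyTorch, not the authors' code; the layer counts and channel widths are assumptions) extracts features at several depths, enhances each level with its own subnet, and fuses the branch outputs into the final image:

    import torch
    import torch.nn as nn

    class MultiBranchEnhancer(nn.Module):
        """Toy multi-branch enhancer in the spirit of MBLLEN (not the paper's model)."""
        def __init__(self, depth=4, width=32):
            super().__init__()
            # Feature extraction stages; each deeper stage sees a larger context.
            self.fem = nn.ModuleList(
                [nn.Sequential(nn.Conv2d(3 if i == 0 else width, width, 3, padding=1),
                               nn.ReLU(inplace=True)) for i in range(depth)])
            # One enhancement subnet per depth level, each producing an RGB estimate.
            self.em = nn.ModuleList(
                [nn.Sequential(nn.Conv2d(width, width, 3, padding=1),
                               nn.ReLU(inplace=True),
                               nn.Conv2d(width, 3, 3, padding=1)) for _ in range(depth)])
            self.fusion = nn.Conv2d(3 * depth, 3, 1)  # multi-branch fusion

        def forward(self, x):
            feats, h = [], x
            for stage in self.fem:
                h = stage(h)
                feats.append(h)
            branches = [em(f) for em, f in zip(self.em, feats)]
            return self.fusion(torch.cat(branches, dim=1))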

277 citations


Cites methods from "LLCNN: A convolutional neural network for low-light image enhancement"

  • ...A similar strategy has also been adopted in a recent method LLCNN [35]....


  • ...Other CNN-based methods like LLCNN [35] and [34] do not handle brightness/contrast enhancement and image denoising simultaneously....


Journal ArticleDOI
TL;DR: A new classification of the main techniques of low-light image enhancement developed over the past decades is presented, dividing them into seven categories: gray transformation methods, histogram equalization methods, Retinex methods, frequency-domain methods, image fusion methods, defogging model methods and machine learning methods.
Abstract: Images captured under poor illumination conditions often exhibit characteristics such as low brightness, low contrast, a narrow gray range, and color distortion, as well as considerable noise, which seriously affect the subjective visual effect on human eyes and greatly limit the performance of various machine vision systems. The role of low-light image enhancement is to improve the visual effect of such images for the benefit of subsequent processing. This paper reviews the main techniques of low-light image enhancement developed over the past decades. First, we present a new classification of these algorithms, dividing them into seven categories: gray transformation methods, histogram equalization methods, Retinex methods, frequency-domain methods, image fusion methods, defogging model methods and machine learning methods. Then, all the categories of methods, including subcategories, are introduced in accordance with their principles and characteristics. In addition, various quality evaluation methods for enhanced images are detailed, and comparisons of different algorithms are discussed. Finally, the current research progress is summarized, and future research directions are suggested.
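To make the classical categories concrete, here is a minimal NumPy sketch of two of them, gray transformation (gamma correction) and histogram equalization, for an 8-bit grayscale image; practical systems add color handling and adaptive variants such as CLAHE:

    import numpy as np

    def gamma_transform(img, gamma=0.5):
        """Gray transformation: gamma < 1 brightens a dark uint8 image."""
        return (255.0 * (img / 255.0) ** gamma).astype(np.uint8)

    def histogram_equalize(img):
        """Histogram equalization: stretch the narrow gray range of a dark image."""
        hist, _ = np.histogram(img.ravel(), bins=256, range=(0, 256))
        cdf = hist.cumsum().astype(np.float64)
        cdf = (cdf - cdf.min()) * 255.0 / (cdf.max() - cdf.min())  # normalize the CDF
        return cdf[img].astype(np.uint8)  # map each pixel through the CDF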

138 citations


Additional excerpts

  • ...proposed a low-light CNN (LLCNN) in which multi-scale feature maps were used to generate an enhanced image by learning from low-light images with different convolution kernels [260]....


Posted Content
TL;DR: This paper proposes a novel end-to-end attention-guided method based on a multi-branch convolutional neural network that produces high-fidelity enhancement results for low-light images and outperforms the current state-of-the-art methods both quantitatively and visually.
Abstract: Low-light image enhancement is challenging in that it needs to consider not only brightness recovery but also complex issues like color distortion and noise, which usually hide in the dark. Simply adjusting the brightness of a low-light image will inevitably amplify those artifacts. To address this difficult problem, this paper proposes a novel end-to-end attention-guided method based on a multi-branch convolutional neural network. To this end, we first construct a synthetic dataset with carefully designed low-light simulation strategies. The dataset is much larger and more diverse than existing ones. With the new dataset for training, our method learns two attention maps to guide the brightness enhancement and denoising tasks respectively. The first attention map distinguishes underexposed regions from well-lit regions, and the second attention map distinguishes noise from real textures. With their guidance, the proposed multi-branch decomposition-and-fusion enhancement network works in an input-adaptive way. Moreover, a reinforcement-net further enhances the color and contrast of the output image. Extensive experiments on multiple datasets demonstrate that our method can produce high-fidelity enhancement results for low-light images and outperforms the current state-of-the-art methods by a large margin both quantitatively and visually.
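The attention mechanism can be pictured with a toy module. The sketch below (an illustration of the general idea, assumed PyTorch, not the paper's network) predicts a single per-pixel attention map and blends an enhancement branch with the input accordingly; the paper learns two such maps, one for exposure and one for noise:

    import torch
    import torch.nn as nn

    class AttentionGuidedEnhancer(nn.Module):
        def __init__(self, width=16):
            super().__init__()
            # Predicts a per-pixel map in [0, 1] marking underexposed regions.
            self.attn = nn.Sequential(
                nn.Conv2d(3, width, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(width, 1, 3, padding=1), nn.Sigmoid())
            # Enhancement branch (brightness/denoising in the full method).
            self.enhance = nn.Sequential(
                nn.Conv2d(3, width, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(width, 3, 3, padding=1))

        def forward(self, x):
            a = self.attn(x)  # close to 1 where enhancement is needed
            return a * self.enhance(x) + (1 - a) * x  # leave well-lit pixels alone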

106 citations


Cites methods from "LLCNN: A convolutional neural network for low-light image enhancement"

  • ...LLCNN [72] and [71] rely on some traditional methods and are not end-to-end solutions to handle brightness/contrast enhancement....


Journal ArticleDOI
03 Apr 2020
TL;DR: A two-stage method called the Edge-Enhanced Multi-Exposure Fusion Network (EEMEFN) enhances extremely low-light images and can reconstruct high-quality images with sharp edges while minimizing the pixel-wise loss.
Abstract: This work focuses on extremely low-light image enhancement, which aims to improve image brightness and reveal hidden information in darkened areas. Recently, image enhancement approaches have yielded impressive progress. However, existing methods still suffer from three main problems: (1) low-light images are usually high-contrast, and existing methods may fail to recover image details in extremely dark or bright areas; (2) current methods cannot precisely correct the color of low-light images; (3) when object edges are unclear, the pixel-wise loss may treat pixels of different objects equally and produce blurry images. In this paper, we propose a two-stage method called the Edge-Enhanced Multi-Exposure Fusion Network (EEMEFN) to enhance extremely low-light images. In the first stage, we employ a multi-exposure fusion module to address the high-contrast and color-bias issues. We synthesize a set of images with different exposure times from a single image and construct an accurate normal-light image by combining well-exposed areas under different illumination conditions. Thus, it can produce realistic initial images with correct color from extremely noisy and low-light images. In the second stage, we introduce an edge enhancement module to refine the initial images with the help of edge information. Therefore, our method can reconstruct high-quality images with sharp edges while minimizing the pixel-wise loss. Experiments on the See-in-the-Dark dataset indicate that our EEMEFN approach achieves state-of-the-art performance.
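The first-stage idea, synthesizing an exposure stack from one image and keeping the well-exposed pixels, can be sketched as follows. This NumPy sketch assumes a linear-intensity input in [0, 1], simulates exposure times with fixed gain ratios, and fuses with hand-crafted well-exposedness weights, whereas the paper's fusion module is learned:

    import numpy as np

    def synthesize_exposures(raw, ratios=(1.0, 4.0, 16.0)):
        """Simulate an exposure stack from one linear low-light image in [0, 1]."""
        return [np.clip(raw * r, 0.0, 1.0) for r in ratios]

    def fuse_well_exposed(stack, sigma=0.2):
        """Weight each pixel by its closeness to mid-gray, then blend the stack."""
        weights = [np.exp(-((img - 0.5) ** 2) / (2 * sigma ** 2)) for img in stack]
        total = np.sum(weights, axis=0) + 1e-8  # avoid division by zero
        return sum(w * img for w, img in zip(weights, stack)) / total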

100 citations


Cites methods from "LLCNN: A convolutional neural network for low-light image enhancement"

  • ...LLCNN (Tao et al. 2017) applies a special-designed convolutional module to utilize multi-scale feature maps for image enhancement....


References
Proceedings ArticleDOI
27 Jun 2016
TL;DR: The authors propose a residual learning framework to ease the training of networks substantially deeper than those used previously; an ensemble of these residual nets won 1st place on the ILSVRC 2015 classification task.
Abstract: Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers, 8× deeper than VGG nets [40] but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to the ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.
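The core construct is compact enough to sketch. The PyTorch block below (a minimal version omitting the batch normalization and projection shortcuts of the full design) learns a residual function F(x) and outputs F(x) + x through an identity shortcut, which is what keeps very deep stacks optimizable:

    import torch.nn as nn

    class ResidualBlock(nn.Module):
        def __init__(self, channels):
            super().__init__()
            self.body = nn.Sequential(
                nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(channels, channels, 3, padding=1))
            self.relu = nn.ReLU(inplace=True)

        def forward(self, x):
            return self.relu(self.body(x) + x)  # identity shortcut: F(x) + x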

123,388 citations

Proceedings ArticleDOI
07 Jun 2015
TL;DR: Inception is a deep convolutional neural network architecture that achieves a new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14).
Abstract: We propose a deep convolutional neural network architecture codenamed Inception that achieves the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14). The main hallmark of this architecture is the improved utilization of the computing resources inside the network. By a carefully crafted design, we increased the depth and width of the network while keeping the computational budget constant. To optimize quality, the architectural decisions were based on the Hebbian principle and the intuition of multi-scale processing. One particular incarnation used in our submission for ILSVRC14 is called GoogLeNet, a 22 layers deep network, the quality of which is assessed in the context of classification and detection.
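A minimal PyTorch sketch of an Inception-style module conveys the multi-scale idea: parallel 1x1, 3x3, and 5x5 convolution paths plus a pooling path, concatenated along channels. Channel widths here are illustrative, and the auxiliary classifiers and exact GoogLeNet configuration are omitted:

    import torch
    import torch.nn as nn

    class InceptionModule(nn.Module):
        def __init__(self, in_ch, branch_ch=16):
            super().__init__()
            c = branch_ch
            self.b1 = nn.Conv2d(in_ch, c, 1)
            self.b3 = nn.Sequential(nn.Conv2d(in_ch, c, 1),  # 1x1 cuts compute
                                    nn.Conv2d(c, c, 3, padding=1))
            self.b5 = nn.Sequential(nn.Conv2d(in_ch, c, 1),
                                    nn.Conv2d(c, c, 5, padding=2))
            self.bp = nn.Sequential(nn.MaxPool2d(3, stride=1, padding=1),
                                    nn.Conv2d(in_ch, c, 1))

        def forward(self, x):
            # Concatenate all branch outputs so later layers see multiple scales.
            return torch.cat([self.b1(x), self.b3(x), self.b5(x), self.bp(x)], dim=1)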

40,257 citations

Journal ArticleDOI
TL;DR: Zhang et al. propose feed-forward denoising convolutional neural networks (DnCNNs) to handle Gaussian denoising with unknown noise level (blind denoising).
Abstract: Discriminative model learning for image denoising has recently been attracting considerable attention due to its favorable denoising performance. In this paper, we take one step forward by investigating the construction of feed-forward denoising convolutional neural networks (DnCNNs) to embrace the progress in very deep architecture, learning algorithm, and regularization method into image denoising. Specifically, residual learning and batch normalization are utilized to speed up the training process as well as boost the denoising performance. Different from the existing discriminative denoising models, which usually train a specific model for additive white Gaussian noise at a certain noise level, our DnCNN model is able to handle Gaussian denoising with unknown noise level (i.e., blind Gaussian denoising). With the residual learning strategy, DnCNN implicitly removes the latent clean image in the hidden layers. This property motivates us to train a single DnCNN model to tackle several general image denoising tasks, such as Gaussian denoising, single image super-resolution, and JPEG image deblocking. Our extensive experiments demonstrate that our DnCNN model can not only exhibit high effectiveness in several general image denoising tasks, but also be efficiently implemented by benefiting from GPU computing.
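The residual learning strategy is easy to sketch: the network predicts the noise from the noisy input y, and the restoration is y minus that estimate. The PyTorch sketch below follows the published conv/BN/ReLU layout but with a smaller depth than the 17- to 20-layer models:

    import torch.nn as nn

    class DnCNN(nn.Module):
        def __init__(self, depth=8, width=64):
            super().__init__()
            layers = [nn.Conv2d(1, width, 3, padding=1), nn.ReLU(inplace=True)]
            for _ in range(depth - 2):
                layers += [nn.Conv2d(width, width, 3, padding=1),
                           nn.BatchNorm2d(width), nn.ReLU(inplace=True)]
            layers += [nn.Conv2d(width, 1, 3, padding=1)]  # outputs the noise estimate
            self.net = nn.Sequential(*layers)

        def forward(self, y):
            return y - self.net(y)  # residual learning: subtract predicted noise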

5,902 citations

Proceedings ArticleDOI
27 Jun 2016
TL;DR: A very deep convolutional network inspired by VGG-net is used for single-image super-resolution, achieving state-of-the-art accuracy.
Abstract: We present a highly accurate single-image super-resolution (SR) method. Our method uses a very deep convolutional network inspired by the VGG-net used for ImageNet classification [19]. We find increasing our network depth shows a significant improvement in accuracy. Our final model uses 20 weight layers. By cascading small filters many times in a deep network structure, contextual information over large image regions is exploited in an efficient way. With very deep networks, however, convergence speed becomes a critical issue during training. We propose a simple yet effective training procedure. We learn residuals only and use extremely high learning rates (10^4 times higher than SRCNN [6]) enabled by adjustable gradient clipping. Our proposed method performs better than existing methods in accuracy, and visual improvements in our results are easily noticeable.
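The two training tricks, residual learning and adjustable gradient clipping, fit in one training step. In the sketch below (assumed PyTorch, with the model and tensors as placeholders), gradients are clipped to theta/lr so the permitted update stays roughly constant as the learning rate decays:

    import torch
    import torch.nn as nn

    def train_step(model, optimizer, lr, lowres_up, target, theta=0.01):
        residual = target - lowres_up           # learn residuals only
        loss = nn.functional.mse_loss(model(lowres_up), residual)
        optimizer.zero_grad()
        loss.backward()
        clip = theta / lr                       # adjustable gradient clipping
        torch.nn.utils.clip_grad_value_(model.parameters(), clip)
        optimizer.step()
        return loss.item()

At inference time, the predicted residual is added back to the interpolated input, i.e. sr = lowres_up + model(lowres_up).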

4,136 citations

Posted Content
TL;DR: A highly accurate single-image super-resolution (SR) method using a very deep convolutional network inspired by the VGG-net used for ImageNet classification, trained with extremely high learning rates enabled by adjustable gradient clipping.
Abstract: We present a highly accurate single-image super-resolution (SR) method. Our method uses a very deep convolutional network inspired by VGG-net used for ImageNet classification \cite{simonyan2015very}. We find increasing our network depth shows a significant improvement in accuracy. Our final model uses 20 weight layers. By cascading small filters many times in a deep network structure, contextual information over large image regions is exploited in an efficient way. With very deep networks, however, convergence speed becomes a critical issue during training. We propose a simple yet effective training procedure. We learn residuals only and use extremely high learning rates ($10^4$ times higher than SRCNN \cite{dong2015image}) enabled by adjustable gradient clipping. Our proposed method performs better than existing methods in accuracy and visual improvements in our results are easily noticeable.

3,628 citations


"LLCNN: A convolutional neural netwo..." refers background or methods in this paper

  • ...For super resolution, VDSR [9] utilizes VGG filters and uses twenty convolutional layers to get impressive results....


  • ...We also use the same network structure as VDSR [9] and train it using our training data....


  • ...As to low-level image processing applications, CNN makes several breakthroughs in super resolution [9], image denoising [10], etc....
