Open Access Proceedings ArticleDOI

Toward Convolutional Blind Denoising of Real Photographs

TLDR
CBDNet as discussed by the authors trains a convolutional blind denoising network with a more realistic noise model and real-world noisy-clean image pairs to improve the generalization ability of deep CNN denoisers.
Abstract
While deep convolutional neural networks (CNNs) have achieved impressive success in image denoising with additive white Gaussian noise (AWGN), their performance remains limited on real-world noisy photographs. The main reason is that their learned models easily overfit to the simplified AWGN model, which deviates severely from complicated real-world noise. To improve the generalization ability of deep CNN denoisers, we suggest training a convolutional blind denoising network (CBDNet) with a more realistic noise model and real-world noisy-clean image pairs. On the one hand, both signal-dependent noise and the in-camera signal processing pipeline are considered to synthesize realistic noisy images. On the other hand, real-world noisy photographs and their nearly noise-free counterparts are also included to train our CBDNet. To further provide an interactive strategy for conveniently rectifying the denoising result, a noise estimation subnetwork with asymmetric learning to suppress under-estimation of the noise level is embedded into CBDNet. Extensive experimental results on three datasets of real-world noisy photographs clearly demonstrate the superior performance of CBDNet over state-of-the-art methods in terms of quantitative metrics and visual quality. The code has been made available at https://github.com/GuoShi28/CBDNet.
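The two ideas in the abstract — signal-dependent noise synthesis and asymmetric learning against under-estimation — can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: the values of `sigma_s`, `sigma_c`, and `alpha` are illustrative, and the paper's full synthesis pipeline additionally models demosaicing and camera response functions.

```python
import numpy as np

def synthesize_noisy(clean, sigma_s=0.04, sigma_c=0.01, rng=None):
    """Heteroscedastic Gaussian noise: variance grows with intensity.
    `clean` is assumed to be in [0, 1] (linear irradiance domain)."""
    rng = np.random.default_rng(rng)
    var = clean * sigma_s**2 + sigma_c**2  # signal-dependent + stationary term
    return clean + rng.normal(0.0, 1.0, clean.shape) * np.sqrt(var)

def asymmetric_loss(sigma_hat, sigma, alpha=0.3):
    """Penalize under-estimation (sigma_hat < sigma) more heavily than
    over-estimation: with alpha < 0.5, the weight is (1 - alpha) when the
    estimate is too low and alpha when it is too high."""
    under = (sigma_hat - sigma) < 0
    weight = np.abs(alpha - under.astype(float))
    return np.mean(weight * (sigma_hat - sigma) ** 2)
```

With `alpha=0.3`, an estimate that is 0.1 too low costs more than one that is 0.1 too high, nudging the noise estimator toward conservative (higher) noise levels, which tends to preserve more detail than over-smoothing.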



Citations
Proceedings ArticleDOI

Multi-Stage Progressive Image Restoration

TL;DR: MPRNet as discussed by the authors proposes a multi-stage architecture that progressively learns restoration functions for the degraded inputs, thereby breaking down the overall recovery process into more manageable steps, and introduces a novel per-pixel adaptive design that leverages in-situ supervised attention to reweight the local features.
Proceedings ArticleDOI

Noise2Void - Learning Denoising From Single Noisy Images

TL;DR: Noise2Void is introduced, a training scheme that allows us to train directly on the body of data to be denoised and can therefore be applied when other methods cannot, and compares favorably to training-free denoising methods.
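The blind-spot training scheme behind Noise2Void can be sketched in NumPy as follows. This is a minimal illustration, not the paper's implementation; `n_mask` and `radius` are illustrative parameters, and a real setup would feed `inp` through a CNN.

```python
import numpy as np

def blind_spot_batch(noisy, n_mask=16, radius=2, rng=None):
    """Replace n_mask random pixels with a random neighbor's value; return
    the modified input, the original pixel values, and the masked coords.
    A denoiser trained to predict the originals at these positions never
    sees the pixel it must predict, so it cannot learn the identity map."""
    rng = np.random.default_rng(rng)
    h, w = noisy.shape
    inp = noisy.copy()
    ys = rng.integers(radius, h - radius, n_mask)
    xs = rng.integers(radius, w - radius, n_mask)
    targets = noisy[ys, xs].copy()
    for y, x in zip(ys, xs):
        dy, dx = 0, 0
        while dy == 0 and dx == 0:  # pick a true neighbor, never the pixel itself
            dy, dx = rng.integers(-radius, radius + 1, 2)
        inp[y, x] = noisy[y + dy, x + dx]
    return inp, targets, (ys, xs)

def masked_mse(pred, targets, coords):
    """Loss is computed only at the masked (blind-spot) positions."""
    ys, xs = coords
    return np.mean((pred[ys, xs] - targets) ** 2)
```

Because the loss touches only the masked pixels, the network is rewarded for predicting a pixel from its context alone, which removes zero-mean noise without ever needing a clean target.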
Posted Content

Pre-Trained Image Processing Transformer

TL;DR: To maximally excavate the capability of transformer, the IPT model is presented to utilize the well-known ImageNet benchmark for generating a large amount of corrupted image pairs and the contrastive learning is introduced for well adapting to different image processing tasks.
Proceedings ArticleDOI

Pre-Trained Image Processing Transformer

TL;DR: Chen et al. as discussed by the authors proposed a pre-trained image processing transformer (IPT) model for denoising, super-resolution and deraining tasks, which is trained on corrupted image pairs with multi-heads and multi-tails.
Proceedings ArticleDOI

Unprocessing Images for Learned Raw Denoising

TL;DR: In this paper, the authors propose a technique to "unprocess" images by inverting each step of an image processing pipeline, thereby allowing them to synthesize realistic raw sensor measurements from commonly available Internet photos.
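The unprocessing idea can be illustrated with two of the simpler pipeline steps, gamma and white balance. This is a rough sketch with made-up gains; the paper additionally inverts tone mapping, color correction, and demosaicing, and uses a more accurate sRGB curve than the plain 2.2 power law below.

```python
import numpy as np

def unprocess(srgb, gains=(2.0, 1.0, 1.7)):
    """Roughly invert a camera pipeline: sRGB gamma -> linear intensities,
    then undo per-channel white-balance gains (illustrative values)."""
    linear = np.clip(srgb, 0.0, 1.0) ** 2.2   # approximate inverse gamma
    return linear / np.asarray(gains)[None, None, :]

def process(raw, gains=(2.0, 1.0, 1.7)):
    """Forward direction: apply white-balance gains, then gamma-encode."""
    wb = np.clip(raw * np.asarray(gains)[None, None, :], 0.0, 1.0)
    return wb ** (1.0 / 2.2)
```

Round-tripping an image through `unprocess` and then `process` recovers it, which is the property that lets synthetic raw noise be added in the unprocessed domain and then re-rendered realistically.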
References
Proceedings ArticleDOI

Deep Residual Learning for Image Recognition

TL;DR: In this article, the authors proposed a residual learning framework to ease the training of networks that are substantially deeper than those used previously; an ensemble of these residual nets won 1st place in the ILSVRC 2015 classification task.
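The core idea, y = x + F(x), can be sketched as follows. This is a toy fully-connected version for illustration only; real residual blocks use convolutions and batch normalization.

```python
import numpy as np

def residual_block(x, weight1, weight2,
                   activation=lambda z: np.maximum(z, 0.0)):
    """y = activation(x + F(x)): the layers only need to learn the residual
    F. With zero weights, F(x) = 0 and the block reduces to an (activated)
    identity mapping, which is why very deep stacks remain trainable."""
    out = activation(x @ weight1)  # first layer of the residual branch
    out = out @ weight2            # second layer of the residual branch
    return activation(x + out)     # shortcut connection adds the input back
```

Setting both weight matrices to zero makes the block pass its input straight through (up to the activation), so stacking many blocks never makes the network worse than a shallower one at initialization.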
Proceedings Article

Adam: A Method for Stochastic Optimization

TL;DR: This work introduces Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions, based on adaptive estimates of lower-order moments, and provides a regret bound on the convergence rate that is comparable to the best known results under the online convex optimization framework.
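The update rule can be sketched as a single step function. This is a minimal version of the algorithm; the default hyperparameters follow the commonly cited values.

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=1e-3,
              beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update: exponential moving averages of the gradient (m)
    and squared gradient (v), bias-corrected, giving a per-parameter
    adaptive step size. t is the 1-based step count."""
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad**2
    m_hat = m / (1 - beta1**t)   # correct the zero-initialization bias
    v_hat = v / (1 - beta2**t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v
```

Because the effective step is roughly `lr * sign(grad)` when gradients are consistent, the magnitude of each update is bounded by the learning rate regardless of the gradient scale.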
Proceedings Article

Very Deep Convolutional Networks for Large-Scale Image Recognition

TL;DR: This work investigates the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting using an architecture with very small convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers.
Book ChapterDOI

U-Net: Convolutional Networks for Biomedical Image Segmentation

TL;DR: Ronneberger et al. as discussed by the authors proposed a network and training strategy that relies on strong data augmentation to use the available annotated samples more efficiently; it can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopic stacks.
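The characteristic contracting-expanding structure with skip connections can be sketched as follows. This is a minimal NumPy illustration of one level without learned convolutions; `inner` stands for the deeper levels of the "U".

```python
import numpy as np

def downsample(x):
    """2x2 max pooling (contracting / encoder path)."""
    h, w = x.shape
    return x[:h // 2 * 2, :w // 2 * 2].reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

def upsample(x):
    """Nearest-neighbour 2x upsampling (expanding / decoder path)."""
    return np.repeat(np.repeat(x, 2, axis=0), 2, axis=1)

def unet_level(x, inner):
    """One U-Net level: pool, recurse into `inner`, upsample, then stack the
    encoder feature map (the skip connection) with the decoder output, so
    fine spatial detail lost by pooling is restored for the next layers."""
    skip = x
    up = upsample(inner(downsample(x)))
    return np.stack([skip, up])
```

In the real architecture the stacked maps are concatenated along the channel axis and passed through further convolutions; the skip path is what lets the network localize precisely despite the pooling.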
Proceedings Article

Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift

TL;DR: Applied to a state-of-the-art image classification model, Batch Normalization achieves the same accuracy with 14 times fewer training steps, and beats the original model by a significant margin.
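The transform itself is simple to state. This is a per-feature training-mode sketch: `mu` and `var` come from the current mini-batch, whereas inference uses running averages accumulated during training.

```python
import numpy as np

def batch_norm(x, gamma=1.0, beta=0.0, eps=1e-5):
    """Normalize each feature over the batch dimension to zero mean and
    unit variance, then apply a learned scale (gamma) and shift (beta).
    x has shape (batch, features)."""
    mu = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mu) / np.sqrt(var + eps)
    return gamma * x_hat + beta
```

Keeping the learned `gamma` and `beta` means normalization does not restrict what the layer can represent: the network can recover the identity transform if that is optimal.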