Open Access · Posted Content

ESRGAN: Enhanced Super-Resolution Generative Adversarial Networks

TLDR
This work thoroughly studies three key components of SRGAN – network architecture, adversarial loss, and perceptual loss – and improves each of them to derive an Enhanced SRGAN (ESRGAN), which achieves consistently better visual quality with more realistic and natural textures than SRGAN.
Abstract
The Super-Resolution Generative Adversarial Network (SRGAN) is a seminal work that is capable of generating realistic textures during single image super-resolution. However, the hallucinated details are often accompanied by unpleasant artifacts. To further enhance the visual quality, we thoroughly study three key components of SRGAN - network architecture, adversarial loss, and perceptual loss - and improve each of them to derive an Enhanced SRGAN (ESRGAN). In particular, we introduce the Residual-in-Residual Dense Block (RRDB) without batch normalization as the basic network building unit. Moreover, we borrow the idea from relativistic GAN to let the discriminator predict relative realness instead of the absolute value. Finally, we improve the perceptual loss by using the features before activation, which could provide stronger supervision for brightness consistency and texture recovery. Benefiting from these improvements, the proposed ESRGAN achieves consistently better visual quality with more realistic and natural textures than SRGAN and won the first place in the PIRM2018-SR Challenge. The code is available at this https URL.
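The two architectural ideas named in the abstract can be illustrated with a minimal PyTorch sketch: a Residual-in-Residual Dense Block without batch normalization, and a relativistic average discriminator loss. The layer widths, growth channels, and the 0.2 residual scaling factor below are illustrative assumptions, not the exact published configuration.

```python
# Minimal sketch of two ESRGAN ideas: an RRDB without batch normalization,
# and a relativistic average discriminator loss. Layer widths and the
# residual scaling factor (0.2) are assumptions for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DenseBlock(nn.Module):
    """Five 3x3 convs with dense connections and no batch norm."""
    def __init__(self, nf=64, gc=32):
        super().__init__()
        self.conv1 = nn.Conv2d(nf, gc, 3, 1, 1)
        self.conv2 = nn.Conv2d(nf + gc, gc, 3, 1, 1)
        self.conv3 = nn.Conv2d(nf + 2 * gc, gc, 3, 1, 1)
        self.conv4 = nn.Conv2d(nf + 3 * gc, gc, 3, 1, 1)
        self.conv5 = nn.Conv2d(nf + 4 * gc, nf, 3, 1, 1)
        self.lrelu = nn.LeakyReLU(0.2, inplace=True)

    def forward(self, x):
        c1 = self.lrelu(self.conv1(x))
        c2 = self.lrelu(self.conv2(torch.cat([x, c1], 1)))
        c3 = self.lrelu(self.conv3(torch.cat([x, c1, c2], 1)))
        c4 = self.lrelu(self.conv4(torch.cat([x, c1, c2, c3], 1)))
        c5 = self.conv5(torch.cat([x, c1, c2, c3, c4], 1))
        return x + 0.2 * c5  # residual scaling


class RRDB(nn.Module):
    """Residual-in-Residual Dense Block: three dense blocks plus an outer skip."""
    def __init__(self, nf=64, gc=32):
        super().__init__()
        self.blocks = nn.Sequential(DenseBlock(nf, gc),
                                    DenseBlock(nf, gc),
                                    DenseBlock(nf, gc))

    def forward(self, x):
        return x + 0.2 * self.blocks(x)


def relativistic_d_loss(real_logits, fake_logits):
    """Relativistic average discriminator loss: real images should look
    'more real' than the average fake, and fakes 'less real' than the
    average real image."""
    real_vs_fake = real_logits - fake_logits.mean()
    fake_vs_real = fake_logits - real_logits.mean()
    loss_real = F.binary_cross_entropy_with_logits(
        real_vs_fake, torch.ones_like(real_vs_fake))
    loss_fake = F.binary_cross_entropy_with_logits(
        fake_vs_real, torch.zeros_like(fake_vs_real))
    return loss_real + loss_fake
```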


Citations
Proceedings ArticleDOI

Laplacian Generative Adversarial Networks for Multi-Scale Super-Resolution

TL;DR: The proposed LapSRGAN is an end-to-end image reconstruction network that can perform 2x and 4x high-quality, high-resolution reconstruction of the original image; it is trained with multi-scale discriminators and a perceptual loss computed on feature maps of the image.
Posted Content

Image-to-Image Translation with Low Resolution Conditioning.

TL;DR: In this article, the authors focus on transferring fine details from a high resolution (HR) source image to fit a coarse, low resolution (LR) image representation of the target, and on generating HR images that share features from both the HR and LR inputs.
Journal ArticleDOI

Distributed Learning and Inference with Compressed Images

TL;DR: This paper shows that loss of semantic information and covariate shift do indeed occur, resulting in a drop in performance that depends on the compression rate, and proposes dataset restoration, based on image restoration with generative adversarial networks (GANs), which has the advantage of not adding additional cost to the deployed models.
Proceedings Article

Neural Differential Equations for Single Image Super-Resolution

Teven Le Scao
TL;DR: In this article, the authors compare several forms of neural DEs and backpropagation methods on single image super-resolution, and show that discrete sensitivity analysis has better stability.
Book ChapterDOI

Machine Learning Cancer Diagnosis Based on Medical Image Size and Modalities

TL;DR: In this chapter, medical imaging modalities and histopathology are explained, and the best medical image type and size for the classification and detection of medical diagnoses are discussed.
References
Proceedings ArticleDOI

Deep Residual Learning for Image Recognition

TL;DR: In this article, the authors proposed a residual learning framework to ease the training of networks that are substantially deeper than those used previously, which won the 1st place on the ILSVRC 2015 classification task.
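The residual learning idea summarized above can be sketched in a few lines: the stacked layers learn a residual F(x) and the block outputs F(x) + x through an identity shortcut. The channel count and kernel size below are illustrative assumptions.

```python
# Minimal sketch of a residual learning block: the layers learn a residual
# F(x) and the block returns F(x) + x via an identity skip connection.
# Channel count and kernel size are illustrative assumptions.
import torch.nn as nn


class ResidualBlock(nn.Module):
    def __init__(self, channels=64):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + x)  # identity shortcut eases optimization of deep nets
```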
Proceedings Article

Adam: A Method for Stochastic Optimization

TL;DR: This work introduces Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions, based on adaptive estimates of lower-order moments, and provides a regret bound on the convergence rate that is comparable to the best known results under the online convex optimization framework.
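A single Adam update step as described above can be written out directly from the adaptive moment estimates; the hyperparameter values shown are the commonly used defaults and are assumptions for illustration.

```python
# Minimal sketch of one Adam update step using bias-corrected first and
# second moment estimates. Default hyperparameters are shown as assumptions.
import numpy as np


def adam_step(theta, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """Return updated parameters and moment estimates after one step (t >= 1)."""
    m = beta1 * m + (1 - beta1) * grad        # first moment (mean of gradients)
    v = beta2 * v + (1 - beta2) * grad ** 2   # second moment (uncentered variance)
    m_hat = m / (1 - beta1 ** t)              # bias correction
    v_hat = v / (1 - beta2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v
```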
Proceedings Article

Very Deep Convolutional Networks for Large-Scale Image Recognition

TL;DR: This work investigates the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting using an architecture with very small convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers.
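The design principle summarized above, increasing depth by stacking very small 3x3 convolution filters with occasional pooling, can be sketched as follows; the shortened channel schedule is an illustrative assumption rather than the full 16-19 layer configuration.

```python
# Minimal sketch of the VGG-style design: depth comes from stacking many
# 3x3 convolutions separated by 2x2 max-pooling. The channel schedule below
# is a shortened, illustrative assumption.
import torch.nn as nn


def vgg_style_features(cfg=(64, 64, 'M', 128, 128, 'M', 256, 256, 256, 'M')):
    layers, in_ch = [], 3
    for v in cfg:
        if v == 'M':
            layers.append(nn.MaxPool2d(2, 2))
        else:
            layers += [nn.Conv2d(in_ch, v, 3, padding=1), nn.ReLU(inplace=True)]
            in_ch = v
    return nn.Sequential(*layers)
```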
Journal ArticleDOI

Generative Adversarial Nets

TL;DR: A new framework for estimating generative models via an adversarial process, in which two models are simultaneously trained: a generative model G that captures the data distribution and a discriminative model D that estimates the probability that a sample came from the training data rather than G.
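The adversarial game described above can be sketched in a few lines of PyTorch: D is trained to distinguish real samples from generated ones, while G is trained to fool D. The network shapes and noise dimension are illustrative assumptions.

```python
# Minimal sketch of the GAN objective: D distinguishes real from generated
# samples, G tries to fool D (non-saturating form for the generator).
# Network shapes and noise dimension are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

z_dim, x_dim = 16, 32
G = nn.Sequential(nn.Linear(z_dim, 64), nn.ReLU(), nn.Linear(64, x_dim))
D = nn.Sequential(nn.Linear(x_dim, 64), nn.ReLU(), nn.Linear(64, 1))


def d_loss(real_batch):
    """Discriminator maximizes log D(x) + log(1 - D(G(z)))."""
    z = torch.randn(real_batch.size(0), z_dim)
    fake_batch = G(z).detach()                 # do not backprop into G here
    real_logits, fake_logits = D(real_batch), D(fake_batch)
    return (F.binary_cross_entropy_with_logits(real_logits, torch.ones_like(real_logits))
            + F.binary_cross_entropy_with_logits(fake_logits, torch.zeros_like(fake_logits)))


def g_loss(batch_size):
    """Generator is trained to make D classify its samples as real."""
    z = torch.randn(batch_size, z_dim)
    fake_logits = D(G(z))
    return F.binary_cross_entropy_with_logits(fake_logits, torch.ones_like(fake_logits))
```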