Open Access · Posted Content

ESRGAN: Enhanced Super-Resolution Generative Adversarial Networks

TLDR
This work thoroughly studies three key components of SRGAN – network architecture, adversarial loss, and perceptual loss – and improves each of them to derive an Enhanced SRGAN (ESRGAN), which achieves consistently better visual quality with more realistic and natural textures than SRGAN.
Abstract
The Super-Resolution Generative Adversarial Network (SRGAN) is a seminal work that is capable of generating realistic textures during single image super-resolution. However, the hallucinated details are often accompanied by unpleasant artifacts. To further enhance the visual quality, we thoroughly study three key components of SRGAN - network architecture, adversarial loss and perceptual loss, and improve each of them to derive an Enhanced SRGAN (ESRGAN). In particular, we introduce the Residual-in-Residual Dense Block (RRDB) without batch normalization as the basic network building unit. Moreover, we borrow the idea from relativistic GAN to let the discriminator predict relative realness instead of the absolute value. Finally, we improve the perceptual loss by using the features before activation, which could provide stronger supervision for brightness consistency and texture recovery. Benefiting from these improvements, the proposed ESRGAN achieves consistently better visual quality with more realistic and natural textures than SRGAN and won the first place in the PIRM2018-SR Challenge. The code is available at this https URL.
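
The "relative realness" idea mentioned in the abstract can be made concrete with a short sketch. The following PyTorch-style snippet is a minimal illustration under assumptions (a discriminator that outputs raw logits; the helper names `discriminator_loss` and `generator_loss` are ours), not the authors' released code: it implements a relativistic average GAN objective in which each batch is judged against the average score of the opposite batch rather than against an absolute real/fake threshold.

```python
# Minimal sketch (not the authors' code) of a relativistic average GAN loss:
# the discriminator estimates how much more realistic an image is than the
# average of the opposite class, instead of an absolute probability.
import torch
import torch.nn.functional as F

bce = F.binary_cross_entropy_with_logits  # operates on raw logits

def discriminator_loss(d_real, d_fake):
    # d_real / d_fake: raw discriminator logits for high-resolution and
    # super-resolved batches (detach the generator output before this call).
    ones, zeros = torch.ones_like(d_real), torch.zeros_like(d_fake)
    return (bce(d_real - d_fake.mean(), ones) +
            bce(d_fake - d_real.mean(), zeros)) / 2

def generator_loss(d_real, d_fake):
    # Same relativistic terms with the labels swapped; treat d_real as a
    # constant (detach it) so only the generated images receive gradients.
    ones, zeros = torch.ones_like(d_real), torch.zeros_like(d_fake)
    return (bce(d_real - d_fake.mean(), zeros) +
            bce(d_fake - d_real.mean(), ones)) / 2
```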


Citations
Book Chapter

Effectiveness of State-of-the-Art Super Resolution Algorithms in Surveillance Environment

TL;DR: In this paper, the effectiveness of four conventional yet effective image super-resolution (SR) algorithms and three deep learning-based SR algorithms was inspected, in order to identify the method that performs best in a surveillance environment with limited training data.
Book Chapter

GAN with Pixel and Perceptual Regularizations for Photo-Realistic Joint Deblurring and Super-Resolution

TL;DR: The proposed P2GAN simultaneously integrates a pixel-wise loss at the pixel level with contextual and adversarial losses at the perceptual level, in order to guide joint deblurring and super-resolution reconstruction of raw images that are blurry and low-resolution, which helps produce realistic images.
Proceedings Article

Monte-Carlo Siamese Policy on Actor for Satellite Image Super Resolution

TL;DR: This study explores the use of reinforcement learning (RL) in super-resolution of remote sensing imagery and proposes a theoretical framework that leverages the benefits of both supervised and reinforcement learning.
Journal Article

ID Preserving Face Super-Resolution Generative Adversarial Networks

TL;DR: An ID Preserving Face Super-Resolution Generative Adversarial Network (IP-FSRGAN) is proposed to reconstruct realistic super-resolution face images from low-resolution ones; it demonstrates excellent robustness under different downsampling scale factors and extends to various face verification models.
Journal Article

Robust Prior-Based Single Image Super Resolution Under Multiple Gaussian Degradations

TL;DR: Experiments show that the proposed RPSRMD, which includes RPGen and PResNet as its two core components, is superior to many state-of-the-art SR methods that were designed and trained to handle multiple degradations.
References
Proceedings Article

Deep Residual Learning for Image Recognition

TL;DR: In this article, the authors proposed a residual learning framework to ease the training of networks that are substantially deeper than those used previously; their approach won first place in the ILSVRC 2015 classification task.
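
As a quick illustration of the residual learning idea summarized above, here is a minimal sketch of a residual block (ours, simplified; the batch normalization and projection shortcuts of the original design are omitted): the stacked convolutions learn a residual F(x) and the identity shortcut adds the input back, so the block outputs F(x) + x.

```python
# Simplified residual block sketch (batch norm and projection shortcuts from
# the original ResNet design are omitted here).
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        # The layers fit the residual F(x); the identity shortcut restores x.
        return self.relu(self.body(x) + x)
```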
Proceedings Article

Adam: A Method for Stochastic Optimization

TL;DR: This work introduces Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions, based on adaptive estimates of lower-order moments, and provides a regret bound on the convergence rate that is comparable to the best known results under the online convex optimization framework.
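
For reference, the update the TL;DR alludes to can be written out explicitly; this is the standard form of Adam (notation ours): exponential moving averages of the gradient and its square are tracked, bias-corrected, and used to scale each parameter's step.

```latex
m_t = \beta_1 m_{t-1} + (1-\beta_1)\, g_t, \qquad
v_t = \beta_2 v_{t-1} + (1-\beta_2)\, g_t^2
\hat{m}_t = \frac{m_t}{1-\beta_1^t}, \qquad
\hat{v}_t = \frac{v_t}{1-\beta_2^t}, \qquad
\theta_t = \theta_{t-1} - \alpha\, \frac{\hat{m}_t}{\sqrt{\hat{v}_t} + \epsilon}
```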
Proceedings Article

Very Deep Convolutional Networks for Large-Scale Image Recognition

TL;DR: This work investigates the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting using an architecture with very small convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers.
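
To illustrate the "very small convolution filters" design mentioned above, one stage of such a network can be sketched as repeated 3x3 convolutions followed by pooling; the layer and channel counts below are illustrative, not the paper's exact configurations.

```python
# Illustrative VGG-style stage: depth comes from stacking 3x3 convolutions
# rather than using larger kernels.
import torch.nn as nn

def vgg_style_stage(in_ch: int, out_ch: int, num_convs: int) -> nn.Sequential:
    layers = []
    for i in range(num_convs):
        layers += [
            nn.Conv2d(in_ch if i == 0 else out_ch, out_ch, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        ]
    layers.append(nn.MaxPool2d(kernel_size=2))  # halve spatial size between stages
    return nn.Sequential(*layers)
```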
Journal Article

Generative Adversarial Nets

TL;DR: A new framework for estimating generative models via an adversarial process, in which two models are simultaneously trained: a generative model G that captures the data distribution and a discriminative model D that estimates the probability that a sample came from the training data rather than G.
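
The adversarial process described above corresponds to the two-player minimax objective, as commonly written (generator G, discriminator D, data distribution p_data, noise prior p_z):

```latex
\min_G \max_D \; V(D, G) =
  \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}\big[\log D(x)\big]
  + \mathbb{E}_{z \sim p_z(z)}\big[\log\big(1 - D(G(z))\big)\big]
```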