Open Access · Posted Content

ESRGAN: Enhanced Super-Resolution Generative Adversarial Networks

TL;DR: This work thoroughly studies three key components of SRGAN — network architecture, adversarial loss, and perceptual loss — and improves each of them to derive an Enhanced SRGAN (ESRGAN), which achieves consistently better visual quality with more realistic and natural textures than SRGAN.
Abstract
The Super-Resolution Generative Adversarial Network (SRGAN) is a seminal work that is capable of generating realistic textures during single image super-resolution. However, the hallucinated details are often accompanied by unpleasant artifacts. To further enhance the visual quality, we thoroughly study three key components of SRGAN — network architecture, adversarial loss, and perceptual loss — and improve each of them to derive an Enhanced SRGAN (ESRGAN). In particular, we introduce the Residual-in-Residual Dense Block (RRDB) without batch normalization as the basic network building unit. Moreover, we borrow the idea from relativistic GAN to let the discriminator predict relative realness instead of the absolute value. Finally, we improve the perceptual loss by using the features before activation, which could provide stronger supervision for brightness consistency and texture recovery. Benefiting from these improvements, the proposed ESRGAN achieves consistently better visual quality with more realistic and natural textures than SRGAN and won first place in the PIRM2018-SR Challenge. The code is available at this https URL .
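The relativistic discriminator mentioned in the abstract can be illustrated with a minimal NumPy sketch of the relativistic average GAN loss: the discriminator scores a real sample relative to the average score of fake samples, rather than in absolute terms. This is an illustration of the general technique, not the authors' implementation; the function names and the `eps` stabilizer are ours.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def relativistic_d_loss(real_logits, fake_logits, eps=1e-12):
    # D_Ra(x_r, x_f) = sigmoid(C(x_r) - E[C(x_f)]):
    # "how much more realistic is a real sample than the average fake?"
    d_real = sigmoid(real_logits - fake_logits.mean())
    d_fake = sigmoid(fake_logits - real_logits.mean())
    return -(np.log(d_real + eps).mean() + np.log(1.0 - d_fake + eps).mean())

def relativistic_g_loss(real_logits, fake_logits, eps=1e-12):
    # Symmetric form for the generator: it benefits both from fakes looking
    # more realistic and from reals looking relatively less realistic.
    d_real = sigmoid(real_logits - fake_logits.mean())
    d_fake = sigmoid(fake_logits - real_logits.mean())
    return -(np.log(1.0 - d_real + eps).mean() + np.log(d_fake + eps).mean())
```

When the discriminator separates real from fake logits cleanly, its loss is near zero while the generator loss is large, and vice versa — both losses pull on gradients from real and fake samples, which is the property ESRGAN exploits.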


Citations
Journal Article (DOI)

Face hallucination from low quality images using definition-scalable inference

TL;DR: Experimental results show that SRDSI can effectively recover more structural information as well as SIFT key-points from real low-res faces and achieves better performance than state-of-the-art super-resolution techniques in terms of both visual and objective quality.
Book Chapter (DOI)

Towards Content-Independent Multi-Reference Super-Resolution: Adaptive Pattern Matching and Feature Aggregation.

TL;DR: This supplementary material provides implementation details, additional ablation studies of the LFE and RP designs, and visualizations of feature searching, the elements in the reference pool, and the SR reconstruction results obtained by SRCNN, MDSR, SRGAN, SRNTT, and the proposed CIMR-SR with a ×4 upscaling factor.
Journal Article (DOI)

Fine-grained Attention and Feature-sharing Generative Adversarial Networks for Single Image Super-Resolution

TL;DR: This paper proposes a fine-grained attention generative adversarial network (FASRGAN) to discriminate each pixel of real and fake images, adopting a U-Net-like network as the discriminator with two outputs: an image score and an image score map.
Proceedings Article (DOI)

DehazeFlow: Multi-scale Conditional Flow Network for Single Image Dehazing

TL;DR: This paper proposes DehazeFlow, a single-image dehazing framework based on conditional normalizing flows, which learns the conditional distribution of haze-free images given a hazy image, enabling the model to sample multiple dehazed results.
Proceedings Article (DOI)

Perceptually-inspired super-resolution of compressed videos

TL;DR: A perceptually-inspired super-resolution approach (M-SRGAN) is proposed for spatial up-sampling of compressed video using a modified CNN model, which has been trained using a generative adversarial network (GAN) on compressed content with perceptual loss functions.
References
Proceedings Article (DOI)

Deep Residual Learning for Image Recognition

TL;DR: In this article, the authors proposed a residual learning framework to ease the training of networks that are substantially deeper than those used previously, which won the 1st place on the ILSVRC 2015 classification task.
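The residual learning idea summarized above — learning a residual mapping F(x) and adding the identity shortcut, y = F(x) + x — can be sketched minimally in NumPy. The weights and shapes here are illustrative toy values, not the ResNet architecture itself; ESRGAN's RRDB builds its basic unit on the same skip-connection principle.

```python
import numpy as np

def residual_block(x, w1, w2):
    # The block learns only the residual F(x); the identity shortcut
    # x + F(x) lets very deep stacks start near the identity mapping.
    h = np.maximum(0.0, x @ w1)  # ReLU non-linearity
    return x + h @ w2            # skip connection: y = F(x) + x
```

A useful sanity check of the design: with zero weights, F(x) = 0 and the block reduces exactly to the identity, which is why residual networks remain trainable at depths where plain stacks degrade.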
Proceedings Article

Adam: A Method for Stochastic Optimization

TL;DR: This work introduces Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions, based on adaptive estimates of lower-order moments, and provides a regret bound on the convergence rate that is comparable to the best known results under the online convex optimization framework.
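As a rough illustration of the adaptive estimates of lower-order moments described in this TL;DR, a single Adam update can be sketched in NumPy. The hyperparameter defaults follow the commonly published values; the function name and calling convention are ours, not the paper's reference code.

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    # Exponential moving averages of the gradient (first moment)
    # and the squared gradient (second moment).
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad ** 2
    # Bias correction compensates for the zero initialization of m and v.
    m_hat = m / (1 - b1 ** t)
    v_hat = v / (1 - b2 ** t)
    # Per-parameter step scaled by the second-moment estimate.
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v
```

Iterating this step on a simple convex objective such as f(x) = x² drives the parameter toward the minimum, which is the behavior the regret bound formalizes.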
Proceedings Article

Very Deep Convolutional Networks for Large-Scale Image Recognition

TL;DR: This work investigates the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting using an architecture with very small convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers.
Journal Article (DOI)

Generative Adversarial Nets

TL;DR: A new framework for estimating generative models via an adversarial process, in which two models are simultaneously trained: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than from G.