Open Access · Posted Content
ESRGAN: Enhanced Super-Resolution Generative Adversarial Networks
Xintao Wang, Ke Yu, Shixiang Wu, Jinjin Gu, Yihao Liu, Chao Dong, Chen Change Loy, Yu Qiao, Xiaoou Tang
TL;DR: This work thoroughly studies three key components of SRGAN (network architecture, adversarial loss, and perceptual loss) and improves each of them to derive an Enhanced SRGAN (ESRGAN), which achieves consistently better visual quality with more realistic and natural textures than SRGAN.
Abstract:
The Super-Resolution Generative Adversarial Network (SRGAN) is a seminal work that is capable of generating realistic textures during single image super-resolution. However, the hallucinated details are often accompanied by unpleasant artifacts. To further enhance the visual quality, we thoroughly study three key components of SRGAN (network architecture, adversarial loss, and perceptual loss) and improve each of them to derive an Enhanced SRGAN (ESRGAN). In particular, we introduce the Residual-in-Residual Dense Block (RRDB) without batch normalization as the basic network building unit. Moreover, we borrow the idea from relativistic GAN to let the discriminator predict relative realness instead of the absolute value. Finally, we improve the perceptual loss by using the features before activation, which could provide stronger supervision for brightness consistency and texture recovery. Benefiting from these improvements, the proposed ESRGAN achieves consistently better visual quality with more realistic and natural textures than SRGAN and won the first place in the PIRM2018-SR Challenge. The code is available at this https URL.
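The relativistic discriminator mentioned in the abstract is easy to illustrate in isolation. The sketch below is a plain NumPy toy, not the authors' PyTorch implementation; the function name and the example logits are assumptions. It computes the relativistic average discriminator loss, where the discriminator estimates how much more realistic real data looks than generated data on average, rather than an absolute real/fake probability:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def relativistic_d_loss(real_logits, fake_logits):
    """Relativistic average discriminator loss: judge real samples
    relative to the average fake logit, and vice versa."""
    d_real = sigmoid(real_logits - fake_logits.mean())
    d_fake = sigmoid(fake_logits - real_logits.mean())
    # Binary cross-entropy: real should score higher than the average fake.
    return -(np.log(d_real).mean() + np.log(1.0 - d_fake).mean())

# Toy logits from a discriminator that already separates real from fake.
real = np.array([2.0, 1.5, 2.5])
fake = np.array([-2.0, -1.5, -2.5])
print(relativistic_d_loss(real, fake))  # small when well separated
```

In the relativistic formulation the generator is trained with the symmetric term (real and fake swapped), so gradients from both real and generated images reach the generator.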
Citations
Journal Article
Three-dimensional residual channel attention networks denoise and sharpen fluorescence microscopy image volumes.
Jiji Chen, Hideki Sasaki, Hoyin Lai, Yijun Su, Jiamin Liu, Yicong Wu, Alexander Zhovmer, Christian A. Combs, Ivan Rey-Suarez, Hung-Yu Chang, Chi-Chou Huang, Xuesong Li, Min Guo, Srineil Nizambad, Arpita Upadhyaya, Shih-Jong J. Lee, Luciano A. G. Lucas, Hari Shroff
TL;DR: In this paper, a three-dimensional residual channel attention network (RCAN) is proposed to restore noisy four-dimensional super-resolution data, enabling capture of tens of thousands of images (thousands of volumes) without apparent photobleaching.
Proceedings Article
DIV8K: DIVerse 8K Resolution Image Dataset
TL;DR: The DIVerse 8K resolution image dataset (DIV8K) is introduced, which contains over 1500 images with resolutions up to 8K and is therefore an ideal dataset for training and benchmarking super-resolution approaches, applicable to extreme upscaling factors of 32x and beyond.
Proceedings Article
LAPAR: Linearly-Assembled Pixel-Adaptive Regression Network for Single Image Super-resolution and Beyond
TL;DR: A linearly-assembled pixel-adaptive regression network (LAPAR) is proposed, which casts the direct LR-to-HR mapping learning into a linear coefficient regression task over a dictionary of multiple predefined filter bases; this renders the model highly lightweight and easy to optimize while achieving state-of-the-art results on SISR benchmarks.
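The linear-assembly idea in this summary can be sketched in a few lines. The NumPy toy below is an illustration with assumed shapes and names, not the LAPAR implementation: the network would only predict per-pixel mixing coefficients, and each output pixel is a linear combination of responses from a fixed filter dictionary:

```python
import numpy as np

def linear_assemble(patches, bases, coeffs):
    """Per-pixel linear combination of predefined filter responses.
    patches: (N, k*k) flattened local neighborhood around each pixel
    bases:   (K, k*k) fixed dictionary of K filters
    coeffs:  (N, K)   per-pixel mixing weights (predicted by a network)
    """
    responses = patches @ bases.T            # (N, K) filter responses
    return (responses * coeffs).sum(axis=1)  # (N,) assembled pixels

# Toy 3x3 dictionary: an identity filter and a box-blur filter.
k2 = 9
identity = np.zeros(k2); identity[4] = 1.0
box = np.full(k2, 1.0 / k2)
bases = np.stack([identity, box])

patches = np.ones((5, k2))              # flat patches: both filters return 1.0
coeffs = np.tile([0.3, 0.7], (5, 1))    # weights sum to 1 per pixel
print(linear_assemble(patches, bases, coeffs))  # array of ones
```

Because only the small coefficient tensor is learned per pixel, the heavy lifting stays in fixed, cheap filtering, which is what makes such a model lightweight.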
Journal Article
Fine Perceptive GANs for Brain MR Image Super-Resolution in Wavelet Domain
TL;DR: In this paper, fine perceptive generative adversarial networks (FP-GANs) are proposed to produce super-resolution (SR) MR images from the low-resolution counterparts.
Posted Content
AutoGAN-Distiller: Searching to Compress Generative Adversarial Networks
TL;DR: Inspired by the recent success of AutoML in deep compression, this work introduces AutoML to GAN compression and develops an AutoGAN-Distiller (AGD) framework that yields remarkably lightweight yet competitive compressed models, largely outperforming existing alternatives.
References
Proceedings Article
Deep Residual Learning for Image Recognition
TL;DR: In this article, the authors proposed a residual learning framework to ease the training of networks that are substantially deeper than those used previously, which won the 1st place on the ILSVRC 2015 classification task.
Proceedings Article
Adam: A Method for Stochastic Optimization
Diederik P. Kingma, Jimmy Ba
TL;DR: This work introduces Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions, based on adaptive estimates of lower-order moments, and provides a regret bound on the convergence rate that is comparable to the best known results under the online convex optimization framework.
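The adaptive lower-order moment estimates mentioned in this summary can be written out directly. Below is a minimal single-parameter Adam step in NumPy using the paper's default hyperparameters; it is a sketch for illustration, not a production optimizer:

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update: exponential moving averages of the gradient
    (first moment) and squared gradient (second moment), with bias
    correction for their zero initialization."""
    m = beta1 * m + (1 - beta1) * grad        # first moment estimate
    v = beta2 * v + (1 - beta2) * grad ** 2   # second moment estimate
    m_hat = m / (1 - beta1 ** t)              # bias-corrected moments
    v_hat = v / (1 - beta2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

# Toy run: minimize f(x) = x^2 starting from x = 1.0.
theta, m, v = np.array([1.0]), np.zeros(1), np.zeros(1)
for t in range(1, 5001):
    grad = 2 * theta
    theta, m, v = adam_step(theta, grad, m, v, t)
print(theta)  # converges toward the minimum at 0
```

Note that the effective step size is roughly bounded by `lr` regardless of the gradient scale, which is why Adam needs little learning-rate tuning across problems.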
Proceedings Article
Very Deep Convolutional Networks for Large-Scale Image Recognition
Karen Simonyan, Andrew Zisserman
TL;DR: This work investigates the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting using an architecture with very small convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers.
Journal Article
Generative Adversarial Nets
Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, Yoshua Bengio
TL;DR: A new framework for estimating generative models via an adversarial process, in which two models are simultaneously trained: a generative model G that captures the data distribution and a discriminative model D that estimates the probability that a sample came from the training data rather than G.
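The adversarial process described here boils down to a single value function that D maximizes and G minimizes. A minimal NumPy sketch (the inputs are assumed toy probabilities, not outputs of a trained discriminator):

```python
import numpy as np

def gan_value(d_real_probs, d_fake_probs):
    """Minimax objective V(D, G) = E[log D(x)] + E[log(1 - D(G(z)))].
    The discriminator maximizes V; the generator minimizes it."""
    return np.mean(np.log(d_real_probs)) + np.mean(np.log(1.0 - d_fake_probs))

# At the theoretical optimum the generator matches the data distribution,
# so the discriminator outputs 0.5 everywhere and V = 2 * log(0.5) = -log 4.
half = np.full(4, 0.5)
print(gan_value(half, half))  # -log(4) ≈ -1.386
```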