Proceedings Article (Open Access)
Improved Training of Wasserstein GANs
Ishaan Gulrajani, Faruk Ahmed, Martin Arjovsky, Vincent Dumoulin, Aaron Courville
Advances in Neural Information Processing Systems, Vol. 30, pp. 5769-5779
TL;DR: The authors propose to penalize the norm of the gradient of the critic with respect to its input, which improves the training stability of Wasserstein GANs and enables stable training of a wide variety of GAN architectures with almost no hyperparameter tuning.
Abstract:
Generative Adversarial Networks (GANs) are powerful generative models, but suffer from training instability. The recently proposed Wasserstein GAN (WGAN) makes progress toward stable training of GANs, but can sometimes still generate only poor samples or fail to converge. We find that these problems are often due to the use of weight clipping in WGAN to enforce a Lipschitz constraint on the critic, which can lead to undesired behavior. We propose an alternative to clipping weights: penalize the norm of the gradient of the critic with respect to its input. Our proposed method performs better than standard WGAN and enables stable training of a wide variety of GAN architectures with almost no hyperparameter tuning, including 101-layer ResNets and language models with continuous generators. We also achieve high-quality generations on CIFAR-10 and LSUN bedrooms.
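For concreteness, here is a minimal sketch of the gradient-penalty term the abstract describes, assuming a PyTorch critic over image batches; the names `critic`, `real`, `fake`, and `lambda_gp` (the paper's default penalty coefficient is 10) are illustrative, not the authors' released code.

```python
# Minimal sketch of the gradient penalty, assuming a PyTorch critic and
# 4-D image batches (B, C, H, W). All names are illustrative.
import torch

def gradient_penalty(critic, real, fake, lambda_gp=10.0):
    # Interpolate uniformly between real and generated samples.
    eps = torch.rand(real.size(0), 1, 1, 1, device=real.device)
    x_hat = (eps * real + (1 - eps) * fake).requires_grad_(True)
    scores = critic(x_hat)
    # Gradient of the critic's output with respect to its input.
    grads = torch.autograd.grad(
        outputs=scores, inputs=x_hat,
        grad_outputs=torch.ones_like(scores),
        create_graph=True, retain_graph=True)[0]
    grad_norm = grads.view(grads.size(0), -1).norm(2, dim=1)
    # Penalize deviation of the per-sample gradient norm from 1
    # (the Lipschitz target); this term is added to the critic's loss.
    return lambda_gp * ((grad_norm - 1) ** 2).mean()
```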
Citations
Posted Content
Variational Discriminator Bottleneck: Improving Imitation Learning, Inverse RL, and GANs by Constraining Information Flow
TL;DR: This work proposes a simple and general technique to constrain information flow in the discriminator by means of an information bottleneck, and demonstrates that the proposed variational discriminator bottleneck (VDB) leads to significant improvements across three distinct application areas for adversarial learning algorithms.
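A rough sketch of the bottleneck mechanism this summary describes, under the assumption that the discriminator encodes each input into a Gaussian code and the KL divergence to a unit-Gaussian prior is held near a target capacity by a dual variable; all names and the step size are illustrative.

```python
# Hedged sketch: the discriminator encodes an input into a Gaussian code
# N(mu, exp(logvar)); the KL to a unit-Gaussian prior is held near a
# target capacity i_c via a dual variable beta.
import numpy as np

def kl_to_unit_gaussian(mu, logvar):
    # KL( N(mu, sigma^2) || N(0, I) ), averaged over the batch.
    return 0.5 * np.mean(np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar, axis=1))

def update_beta(beta, kl, i_c, step=1e-5):
    # Dual gradient ascent: tighten the bottleneck when KL exceeds the
    # target capacity i_c, relax it otherwise (clipped at zero).
    return max(0.0, beta + step * (kl - i_c))
```

The discriminator's training loss would then add a beta-weighted KL penalty on top of the usual adversarial term.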
Book Chapter
Learning Gradient Fields for Shape Generation
Ruojin Cai, Guandao Yang, Hadar Averbuch-Elor, Zekun Hao, Serge Belongie, Noah Snavely, Bharath Hariharan
TL;DR: Cai et al. generate point clouds by performing stochastic gradient ascent on an unnormalized probability density, moving sampled points toward high-likelihood regions.
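The sampling idea in this summary can be sketched as Langevin-style updates driven by a learned gradient (score) field; `score_fn`, the step size, step count, and the fixed noise scale below are placeholders (the actual method anneals across noise levels).

```python
# Illustrative sketch: move points uphill on a learned log-density with
# Langevin-style updates. `score_fn` stands in for the gradient field.
import numpy as np

def sample_points(score_fn, n_points=2048, n_steps=100, step=1e-3, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.normal(size=(n_points, 3))   # start from Gaussian noise in R^3
    for _ in range(n_steps):
        noise = rng.normal(size=x.shape)
        # Gradient ascent on the log-density plus exploration noise.
        x = x + step * score_fn(x) + np.sqrt(2.0 * step) * noise
    return x
```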
Journal Article
Deep Audio-visual Learning: A Survey
TL;DR: A comprehensive survey of recent audio-visual learning developments is provided, dividing current audio-visual learning tasks into four subfields: audio-visual separation and localization, audio-visual correspondence learning, audio-visual generation, and audio-visual representation learning.
Posted Content
Improved Techniques for Training Score-Based Generative Models
Yang Song, Stefano Ermon
TL;DR: This work provides a new theoretical analysis of learning and sampling from score models in high dimensional spaces, explaining existing failure modes and motivating new solutions that generalize across datasets.
Posted Content
Image Super-Resolution via Iterative Refinement
TL;DR: SR3 adapts denoising diffusion probabilistic models to conditional image generation and performs super-resolution through a stochastic denoising process, achieving a fool rate close to 50%, which suggests photo-realistic outputs.
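As an illustration of the iterative refinement loop this summary describes, one reverse (denoising) step conditioned on the low-resolution image might look as follows; the noise-schedule arrays, the simple variance choice, and `eps_model` are assumptions, not SR3's exact parameterization.

```python
# Rough sketch of one DDPM-style reverse step, conditioned on the
# low-resolution input x_lowres. Schedules and names are placeholders.
import numpy as np

def reverse_step(eps_model, y_t, x_lowres, t, alphas, alphas_bar, rng):
    eps_hat = eps_model(y_t, x_lowres, t)                 # predicted noise
    coef = (1.0 - alphas[t]) / np.sqrt(1.0 - alphas_bar[t])
    mean = (y_t - coef * eps_hat) / np.sqrt(alphas[t])    # posterior mean
    if t == 0:
        return mean                                       # final, noiseless step
    sigma = np.sqrt(1.0 - alphas[t])                      # simple variance choice
    return mean + sigma * rng.normal(size=y_t.shape)
```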
References
Dissertation
Learning Multiple Layers of Features from Tiny Images
TL;DR: In this paper, the authors describe how to train a multi-layer generative model of natural images using a dataset of millions of tiny colour images.
Journal Article
Simple Statistical Gradient-Following Algorithms for Connectionist Reinforcement Learning
TL;DR: This article presents a general class of associative reinforcement learning algorithms for connectionist networks containing stochastic units. These algorithms are shown to make weight adjustments in a direction that lies along the gradient of expected reinforcement, in both immediate-reinforcement tasks and certain limited forms of delayed-reinforcement tasks, without explicitly computing gradient estimates.
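A minimal sketch of the REINFORCE-style rule this summary describes, for a single Bernoulli-logistic unit: the weight change (r - b) * d/dw log Pr(a | w, x) is an unbiased estimate of the gradient of expected reinforcement. The learning rate and baseline names are illustrative.

```python
# Minimal REINFORCE sketch for one Bernoulli-logistic unit.
import numpy as np

def act(w, x, rng):
    p = 1.0 / (1.0 + np.exp(-np.dot(w, x)))  # firing probability
    a = float(rng.random() < p)              # stochastic action in {0, 1}
    return a, (a - p) * x                    # action and its eligibility

def reinforce_update(w, eligibility, reward, lr=0.01, baseline=0.0):
    # Follow the reinforcement-weighted eligibility; no explicit
    # gradient of the environment's reward is ever computed.
    return w + lr * (reward - baseline) * eligibility
```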
Posted Content
Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks
TL;DR: This work introduces a class of CNNs called deep convolutional generative adversarial networks (DCGANs) that have certain architectural constraints, and demonstrates that they are a strong candidate for unsupervised learning.
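To make the "architectural constraints" concrete: a DCGAN-style generator is all-convolutional, built from fractionally-strided convolutions, batch normalization, and ReLU activations with a tanh output. The PyTorch sketch below uses placeholder channel sizes and a 32x32 output; the input is a latent code of shape (B, 100, 1, 1).

```python
# Illustrative DCGAN-style generator; sizes are placeholders.
import torch.nn as nn

generator = nn.Sequential(
    nn.ConvTranspose2d(100, 256, 4, 1, 0, bias=False),  # 1x1 -> 4x4
    nn.BatchNorm2d(256), nn.ReLU(True),
    nn.ConvTranspose2d(256, 128, 4, 2, 1, bias=False),  # 4x4 -> 8x8
    nn.BatchNorm2d(128), nn.ReLU(True),
    nn.ConvTranspose2d(128, 64, 4, 2, 1, bias=False),   # 8x8 -> 16x16
    nn.BatchNorm2d(64), nn.ReLU(True),
    nn.ConvTranspose2d(64, 3, 4, 2, 1, bias=False),     # 16x16 -> 32x32 RGB
    nn.Tanh(),
)
```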
Posted Content
Improved Techniques for Training GANs
TL;DR: In this article, the authors present a variety of new architectural features and training procedures that apply to the generative adversarial networks (GANs) framework and achieve state-of-the-art results in semi-supervised classification on MNIST, CIFAR-10 and SVHN.
Proceedings Article
Categorical Reparameterization with Gumbel-Softmax
Eric Jang, Shixiang Gu, Ben Poole
TL;DR: Gumbel-Softmax, as presented in this paper, replaces the non-differentiable samples from a categorical distribution with differentiable samples from a novel Gumbel-Softmax distribution, which has the essential property that it can be smoothly annealed into the categorical distribution.
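A small sketch of the relaxation: perturb log-probabilities with Gumbel noise, then apply a temperature-controlled softmax. As the temperature tau approaches 0, the output approaches a one-hot draw from the underlying categorical; 1-D logits are assumed for brevity.

```python
# Gumbel-Softmax relaxation sketch for a single categorical variable.
import numpy as np

def gumbel_softmax_sample(logits, tau=1.0, seed=None):
    rng = np.random.default_rng(seed)
    u = rng.uniform(low=1e-10, high=1.0, size=logits.shape)
    g = -np.log(-np.log(u))            # standard Gumbel(0, 1) noise
    y = (logits + g) / tau             # temperature-scaled perturbed logits
    y = np.exp(y - y.max())            # numerically stable softmax
    return y / y.sum()
```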