Open Access · Proceedings Article

Improved Training of Wasserstein GANs

TLDR
The authors propose penalizing the norm of the gradient of the critic with respect to its input, which improves the training stability of Wasserstein GANs and enables stable training of a wide variety of GAN architectures with almost no hyperparameter tuning.
Abstract
Generative Adversarial Networks (GANs) are powerful generative models, but suffer from training instability. The recently proposed Wasserstein GAN (WGAN) makes progress toward stable training of GANs, but can sometimes still generate only poor samples or fail to converge. We find that these problems are often due to the use of weight clipping in WGAN to enforce a Lipschitz constraint on the critic, which can lead to undesired behavior. We propose an alternative to clipping weights: penalize the norm of the gradient of the critic with respect to its input. Our proposed method performs better than standard WGAN and enables stable training of a wide variety of GAN architectures with almost no hyperparameter tuning, including 101-layer ResNets and language models with continuous generators. We also achieve high-quality generations on CIFAR-10 and LSUN bedrooms.
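The penalty the abstract describes is simple to implement. The following is a minimal PyTorch sketch, not the authors' released code: `critic`, `real`, and `fake` are illustrative names, image tensors are assumed to be NCHW, and λ = 10 is the default penalty coefficient reported in the paper.

```python
import torch

def gradient_penalty(critic, real, fake, lambda_gp=10.0):
    """Penalize deviation of the critic's input-gradient norm from 1,
    evaluated at random interpolates between real and fake samples."""
    batch_size = real.size(0)
    # One interpolation coefficient per sample (NCHW images assumed).
    eps = torch.rand(batch_size, 1, 1, 1, device=real.device)
    interp = (eps * real + (1.0 - eps) * fake).requires_grad_(True)
    scores = critic(interp)
    grads = torch.autograd.grad(
        outputs=scores,
        inputs=interp,
        grad_outputs=torch.ones_like(scores),
        create_graph=True,  # so the penalty itself is differentiable
    )[0]
    grad_norm = grads.view(batch_size, -1).norm(2, dim=1)
    return lambda_gp * ((grad_norm - 1.0) ** 2).mean()
```

The critic is then trained to minimize `fake_scores.mean() - real_scores.mean()` plus this penalty, in place of the weight clipping used by standard WGAN.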



Citations
Posted Content

Adversarial Feature Hallucination Networks for Few-Shot Learning

TL;DR: Adversarial Feature Hallucination Networks (AFHN), built on conditional Wasserstein Generative Adversarial Networks (cWGAN), hallucinate diverse and discriminative features conditioned on the few labeled samples; comparative results substantiate the superiority of AFHN over existing data-augmentation-based few-shot learning (FSL) approaches and other state-of-the-art methods.
Journal Article

Out-of-Domain Detection for Natural Language Understanding in Dialog Systems

TL;DR: A novel model is proposed to generate high-quality pseudo out-of-domain (OOD) samples that are akin to in-domain (IND) input utterances, thereby improving OOD detection; the approach is demonstrated to be effective for natural language understanding (NLU) in dialog systems.
Proceedings Article

GenDICE: Generalized Offline Estimation of Stationary Values

TL;DR: This work proves the consistency of GenDICE under general conditions, provides a detailed error analysis, and demonstrates strong empirical performance on benchmark tasks, including offline PageRank and off-policy policy evaluation.
Journal Article

L1-Norm Batch Normalization for Efficient Training of Deep Neural Networks

TL;DR: A hardware-friendly normalization method that not only surpasses L2BN in speed but also simplifies the design of deep learning accelerators, promising fully quantized training of DNNs and empowering future artificial intelligence applications on mobile devices.
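As a sketch of the underlying idea (assumptions: a 2-D batch of activations, and no learnable scale/shift or running statistics, which the published method includes), the L2 dispersion measure of batch normalization, the standard deviation, is swapped for the mean absolute deviation:

```python
import torch

def l1_batch_norm(x, eps=1e-5):
    """Normalize a (batch, features) tensor using the mean absolute
    deviation, an L1 statistic that avoids squares and square roots."""
    mean = x.mean(dim=0, keepdim=True)
    centered = x - mean
    mad = centered.abs().mean(dim=0, keepdim=True)  # L1 dispersion
    return centered / (mad + eps)
```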
Proceedings Article

Catastrophic forgetting and mode collapse in GANs

TL;DR: It is shown that Generative Adversarial Networks (GANs) suffer from catastrophic forgetting even when trained to approximate a single target distribution, and that this forgetting prevents the discriminator from making real data points local maxima of its output, thus causing non-convergence.
References
Dissertation

Learning Multiple Layers of Features from Tiny Images

TL;DR: In this paper, the authors describe how to train a multi-layer generative model of natural images using a dataset of millions of tiny colour images.
Journal Article

Simple Statistical Gradient-Following Algorithms for Connectionist Reinforcement Learning

TL;DR: This article presents a general class of associative reinforcement learning algorithms for connectionist networks containing stochastic units. The algorithms are shown to make weight adjustments in a direction that lies along the gradient of expected reinforcement, in both immediate-reinforcement tasks and certain limited forms of delayed-reinforcement tasks, without explicitly computing gradient estimates.
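For a softmax policy, the weight-adjustment rule reduces to scaling the log-likelihood gradient of each sampled action by the reinforcement received. A minimal PyTorch sketch of that surrogate loss (names are illustrative; the reinforcement baseline the article also analyzes is omitted):

```python
import torch

def reinforce_loss(logits, actions, returns):
    """Surrogate loss whose gradient is the REINFORCE estimator:
    descending -E[log pi(a) * R] ascends the expected reinforcement."""
    log_probs = torch.log_softmax(logits, dim=-1)
    # Log-probability of each sampled action, weighted by its return.
    chosen = log_probs.gather(1, actions.unsqueeze(1)).squeeze(1)
    return -(chosen * returns).mean()
```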
Posted Content

Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks

TL;DR: This work introduces a class of CNNs called deep convolutional generative adversarial networks (DCGANs) that have certain architectural constraints, and demonstrates that they are a strong candidate for unsupervised learning.
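The architectural constraints are concrete: replace pooling with strided and fractionally strided convolutions, use batch normalization, and remove fully connected hidden layers. A PyTorch sketch of a DCGAN-style generator for 64x64 images (layer widths and output size follow common convention and are not fixed by this summary):

```python
import torch.nn as nn

def dcgan_generator(z_dim=100, channels=3, feat=64):
    """Upsample a (N, z_dim, 1, 1) latent to a 64x64 image with
    fractionally strided convolutions, batch norm, and ReLU,
    ending in tanh as the DCGAN guidelines prescribe."""
    return nn.Sequential(
        nn.ConvTranspose2d(z_dim, feat * 8, 4, 1, 0, bias=False),
        nn.BatchNorm2d(feat * 8), nn.ReLU(True),               # 4x4
        nn.ConvTranspose2d(feat * 8, feat * 4, 4, 2, 1, bias=False),
        nn.BatchNorm2d(feat * 4), nn.ReLU(True),               # 8x8
        nn.ConvTranspose2d(feat * 4, feat * 2, 4, 2, 1, bias=False),
        nn.BatchNorm2d(feat * 2), nn.ReLU(True),               # 16x16
        nn.ConvTranspose2d(feat * 2, feat, 4, 2, 1, bias=False),
        nn.BatchNorm2d(feat), nn.ReLU(True),                   # 32x32
        nn.ConvTranspose2d(feat, channels, 4, 2, 1, bias=False),
        nn.Tanh(),                                             # 64x64
    )
```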
Posted Content

Improved Techniques for Training GANs

TL;DR: In this article, the authors present a variety of new architectural features and training procedures that apply to the generative adversarial networks (GANs) framework and achieve state-of-the-art results in semi-supervised classification on MNIST, CIFAR-10 and SVHN.
Proceedings Article

Categorical Reparameterization with Gumbel-Softmax

TL;DR: This paper replaces non-differentiable samples from a categorical distribution with differentiable samples from a novel Gumbel-Softmax distribution, which has the essential property that it can be smoothly annealed into the categorical distribution.
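The reparameterization is a few lines: perturb the logits with Gumbel(0, 1) noise and take a temperature-controlled softmax. A minimal PyTorch sketch (the temperature-annealing schedule and the straight-through variant from the paper are omitted):

```python
import torch

def gumbel_softmax_sample(logits, tau=1.0):
    """Differentiable 'soft' one-hot sample: as tau -> 0 the output
    approaches a one-hot draw from the categorical distribution."""
    uniform = torch.rand_like(logits)
    # Gumbel(0, 1) noise via inverse transform; epsilons avoid log(0).
    gumbel = -torch.log(-torch.log(uniform + 1e-20) + 1e-20)
    return torch.softmax((logits + gumbel) / tau, dim=-1)
```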