Wasserstein Generative Adversarial Networks
Citations
2,159 citations
Cites methods from "Wasserstein Generative Adversarial ..."
...s function used in pix2pixHD except that we replace the least squared loss term [28] with the hinge loss term [25,30,45]. We test several ResNet-based discriminators used in recent unconditional GANs [1,29,31] but observe similar results at the cost of a higher GPU memory requirement. Adding the SPADE to the discriminator also yields a similar performance. For the loss function, we observe that removing an...
[...]
2,033 citations
Cites background or methods from "Wasserstein Generative Adversarial ..."
...(1) with Wasserstein GAN objective with gradient penalty [1, 4] defined as...
[...]
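The Wasserstein GAN objective with gradient penalty referenced in this excerpt is commonly written as below. This is the standard WGAN-GP form; the symbols are chosen here rather than copied from the citing paper ($D$ is the critic, $\mathbb{P}_r$ the data distribution, $\mathbb{P}_g$ the generator distribution, and $\hat{x}$ is sampled along straight lines between real and generated points):

```latex
L = \mathbb{E}_{\tilde{x}\sim \mathbb{P}_g}\!\left[D(\tilde{x})\right]
  - \mathbb{E}_{x\sim \mathbb{P}_r}\!\left[D(x)\right]
  + \lambda\, \mathbb{E}_{\hat{x}\sim \mathbb{P}_{\hat{x}}}\!\left[\left(\lVert \nabla_{\hat{x}} D(\hat{x}) \rVert_2 - 1\right)^2\right]
```

The critic minimizes this loss; the penalty term (weighted by $\lambda$) softly enforces the 1-Lipschitz constraint that the original WGAN enforced via weight clipping.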
...Generative adversarial networks (GANs) [3] have shown remarkable results in various computer vision tasks such as image generation [1, 6, 23, 31], image translation [7, 8, 32], super-resolution imaging [13], and face image synthesis [9, 15, 25, 30]....
[...]
1,874 citations
References
38,211 citations
"Wasserstein Generative Adversarial ..." refers background or methods in this paper
...GANs offer much more flexibility in the definition of the objective function, including Jensen-Shannon (Goodfellow et al., 2014) and all f-divergences (Nowozin et al., 2016), as well as some exotic combinations (Huszar, 2015)....
[...]
...This is because mode collapse arises from the fact that the optimal generator for a fixed discriminator is a sum of deltas on the points to which the discriminator assigns the highest values, as observed by (Goodfellow et al., 2014) and highlighted in (Metz et al., 2016)....
[...]
...Variational Auto-Encoders (VAEs) (Kingma & Welling, 2013) and Generative Adversarial Networks (GANs) (Goodfellow et al., 2014) are well known examples of this approach....
[...]
...• JS(Pn, P) → 0, with JS the Jensen-Shannon divergence....
[...]
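The Jensen-Shannon divergence named in the excerpt above can be illustrated numerically. This is a minimal sketch for discrete distributions; the function name, the smoothing constant, and the example point masses are my own choices, not taken from the paper:

```python
import numpy as np

def js_divergence(p, q, eps=1e-12):
    """Jensen-Shannon divergence between two discrete distributions.

    JS(P, Q) = 0.5 * KL(P || M) + 0.5 * KL(Q || M), with M = (P + Q) / 2.
    It is bounded in [0, log 2] and is zero exactly when P == Q.
    """
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    m = 0.5 * (p + q)
    # eps guards the logarithm against zero-probability entries.
    kl_pm = np.sum(p * np.log((p + eps) / (m + eps)))
    kl_qm = np.sum(q * np.log((q + eps) / (m + eps)))
    return 0.5 * kl_pm + 0.5 * kl_qm

# Two disjoint point masses: JS saturates at log 2, which is why the
# discriminator loss can reach zero while the generator gets no gradient.
p = np.array([1.0, 0.0])
q = np.array([0.0, 1.0])
print(js_divergence(p, p))  # 0.0
print(js_divergence(p, q))  # ≈ log(2) ≈ 0.693
```

The saturation at log 2 for non-overlapping supports is the behavior the convergence criterion JS(Pn, P) → 0 is meant to rule out.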
...Our baseline comparison is DCGAN (Radford et al., 2015), a GAN with a convolutional architecture trained with the standard GAN procedure using the −log D trick (Goodfellow et al., 2014)....
[...]
6,759 citations
"Wasserstein Generative Adversarial ..." refers methods in this paper
...We keep the convolutional DCGAN architecture for the WGAN critic or the GAN discriminator....
[...]
...Our baseline comparison is DCGAN (Radford et al., 2015), a GAN with a convolutional architecture trained with the standard GAN procedure using the −log D trick (Goodfellow et al., 2014)....
[...]
...In other words, the JS distance saturates, the discriminator has zero loss, and the generated samples are in some cases meaningful (DCGAN generator, top right plot) and in other cases collapse to a single nonsensical image (Goodfellow et al., 2014)....
[...]
...Besides the convolutional DCGAN architecture, we also ran experiments where we replace the generator or both the generator and the critic by 4-layer ReLU-MLP with 512 hidden units....
[...]
...We illustrate this by running experiments on three generator architectures: (1) a convolutional DCGAN generator, (2) a convolutional DCGAN generator without batch normalization and with a constant number of filters (the capacity of the generator is drastically smaller than that of the discriminator), and (3) a 4-layer ReLU-MLP with 512 hidden units....
[...]
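The 4-layer ReLU-MLP generator described in the excerpt above can be sketched with NumPy. Only "four hidden layers of 512 ReLU units" comes from the text; the noise dimension, output size, weight initialization, and tanh output are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions: 128-d noise in, a 64x64 (= 4096-d) image out.
# The four 512-unit hidden layers follow the excerpt; the rest is assumed.
dims = [128, 512, 512, 512, 512, 64 * 64]

# Random weights stand in for trained parameters in this sketch.
weights = [rng.normal(0.0, 0.02, size=(m, n)) for m, n in zip(dims[:-1], dims[1:])]
biases = [np.zeros(n) for n in dims[1:]]

def mlp_generator(z):
    """Forward pass: ReLU on the hidden layers, tanh on the output layer."""
    h = z
    for W, b in zip(weights[:-1], biases[:-1]):
        h = np.maximum(h @ W + b, 0.0)            # ReLU hidden layer
    return np.tanh(h @ weights[-1] + biases[-1])  # squash to image range [-1, 1]

batch = rng.normal(size=(16, dims[0]))  # a batch of 16 noise vectors
images = mlp_generator(batch)
print(images.shape)  # (16, 4096)
```

Swapping this in for a convolutional DCGAN generator, as the excerpt describes, isolates how much of the training behavior depends on the architecture rather than on the objective.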