Wasserstein Divergence for GANs
Citations
451 citations
Cites background from "Wasserstein Divergence for GANs"
...The Wasserstein distance has also recently attracted interest for stabilizing generative modeling [1, 14, 73], learning introspective neural networks [32], and fitting Gaussian mixture models [29], thanks to its geometrically meaningful distance measure even when the supports of the distributions do not overlap....
[...]
...The Wasserstein distance has recently received considerable attention in the design of loss functions, owing to its advantages over other probability metrics [73, 41]....
[...]
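For context on the excerpts above: the property they invoke, meaningfulness under non-overlapping supports, comes from the Wasserstein-1 distance, which is standardly defined (restated here for reference, not quoted from the citing papers) via the optimal-transport primal and its Kantorovich-Rubinstein dual:

```latex
W_1(P_r, P_g)
  = \inf_{\gamma \in \Pi(P_r, P_g)} \mathbb{E}_{(x, y) \sim \gamma}\bigl[\lVert x - y \rVert\bigr]
  = \sup_{\lVert f \rVert_L \le 1} \mathbb{E}_{x \sim P_r}[f(x)] - \mathbb{E}_{x \sim P_g}[f(x)]
```

Unlike the Jensen-Shannon divergence, $W_1$ varies smoothly with the distance between the supports, which is why it remains informative when $P_r$ and $P_g$ do not overlap.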
References
"Wasserstein Divergence for GANs" refers background in this paper
...Over the past few years, we have witnessed the great success of generative adversarial networks (GANs) [1] for a variety of applications....
[...]
...To measure the distance between real and fake data distributions, [1] proposed the objective...
[...]
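The excerpt truncates the objective it attributes to [1]; in this context [1] is the original GAN paper, whose minimax objective is conventionally written as (restated here for reference, not quoted from the excerpt):

```latex
\min_G \max_D \;
  \mathbb{E}_{x \sim p_{\mathrm{data}}}\bigl[\log D(x)\bigr]
  + \mathbb{E}_{z \sim p_z}\bigl[\log\bigl(1 - D(G(z))\bigr)\bigr]
```

Here $D$ scores how likely a sample is to be real and $G$ maps noise $z \sim p_z$ to fake samples; at optimality this objective measures a Jensen-Shannon divergence between the real and generated distributions.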
"Wasserstein Divergence for GANs" refers methods in this paper
...We compute the FID scores for DCGAN, WGAN-GP, RJS-GAN, CTGAN, and WGAN-div....
[...]
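The FID mentioned above (Fréchet Inception Distance) fits a Gaussian to the Inception features of the real and generated images and reports the Fréchet distance between the two Gaussians. A minimal numpy-only sketch of that distance is below; the function names are illustrative, and a real FID implementation would additionally extract the feature means and covariances from an Inception network:

```python
import numpy as np

def _sqrtm_psd(a):
    # Matrix square root of a symmetric PSD matrix via eigendecomposition.
    w, v = np.linalg.eigh(a)
    w = np.clip(w, 0.0, None)  # clip tiny negative eigenvalues from round-off
    return (v * np.sqrt(w)) @ v.T

def frechet_distance(mu1, sigma1, mu2, sigma2):
    # Fréchet distance between N(mu1, sigma1) and N(mu2, sigma2):
    #   ||mu1 - mu2||^2 + Tr(sigma1) + Tr(sigma2)
    #     - 2 Tr((sigma2^{1/2} sigma1 sigma2^{1/2})^{1/2})
    # The symmetrized inner product keeps the argument of the square
    # root symmetric PSD, so eigh-based sqrtm applies.
    diff = mu1 - mu2
    s2_half = _sqrtm_psd(sigma2)
    cross = _sqrtm_psd(s2_half @ sigma1 @ s2_half)
    return float(diff @ diff + np.trace(sigma1) + np.trace(sigma2)
                 - 2.0 * np.trace(cross))
```

Identical Gaussians give a distance of 0; a pure mean shift with shared covariance reduces to the squared Euclidean distance between the means. Lower FID indicates generated samples whose feature statistics are closer to the real data.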
...We train these methods with two standard architectures: the ConvNet used by DCGAN [2] and the ResNet [20] used by WGAN-GP [7]....
[...]
...We compare our WGAN-div to the state-of-the-art DCGAN [2], WGAN-GP [7], RJS-GAN [18], CTGAN [9], SNGAN [8], and PGGAN [14]....
[...]
...Since batch normalization [27] (BN) is considered to be a key ingredient in stabilizing the training process [2], we also evaluate the FID without BN....
[...]
"Wasserstein Divergence for GANs" refers methods in this paper
...We also present the 256 × 256 visual results for CelebA-HQ (Fig....
[...]
...In this section, we evaluate WGAN-div on toy datasets and three widely used image datasets: CIFAR-10, CelebA [22], and LSUN [23]....
[...]
...We report the obtained FID scores on the 64 × 64 CelebA dataset in the bottom row of Fig....
[...]
...While the FID score of WGAN-div only modestly outperforms the state-of-the-art methods on CIFAR-10, it shows clearer improvements on the larger-scale datasets CelebA and LSUN....
[...]
...By cross-validation, we determine the number of discriminator (D) iterations per generator training step to be 4 for CelebA and LSUN, and 5 for CIFAR-10....
[...]
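The critic-iteration schedule quoted above can be sketched as a generic WGAN-style training skeleton. The `d_step` and `g_step` callables are placeholders for the actual discriminator and generator updates, which the excerpt does not show:

```python
def train(d_step, g_step, data_iter, n_critic=4, n_gen_updates=1000):
    """Alternating schedule: n_critic discriminator (D) updates per
    generator update. The excerpt reports n_critic = 4 for CelebA and
    LSUN and n_critic = 5 for CIFAR-10, chosen by cross-validation."""
    for _ in range(n_gen_updates):
        for _ in range(n_critic):
            d_step(next(data_iter))  # one D update on a real minibatch
        g_step()                     # one G update
```

Training D more often than G keeps the critic close to optimal, which tightens its estimate of the divergence that the generator then descends.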