Open Access · Posted Content

Mode matching in GANs through latent space learning and inversion

TLDR
This paper addresses the problem of imposing desired modal properties on the generated space using a latent distribution, engineered in accordance with the modal properties of the true data distribution, by training a latent space inversion network in tandem with the generative network using a divergence loss.
Abstract
Generative adversarial networks (GANs) have shown remarkable success in the generation of unstructured data such as natural images. However, discovery and separation of modes in the generated space, essential for several tasks beyond naive data generation, is still a challenge. In this paper, we address the problem of imposing desired modal properties on the generated space using a latent distribution engineered in accordance with the modal properties of the true data distribution. This is achieved by training a latent space inversion network in tandem with the generative network using a divergence loss. The latent space is made to follow a continuous multimodal distribution generated by reparameterization of a pair of continuous and discrete random variables. In addition, the modal priors of the latent distribution are learned to match the true data distribution using minimal supervision, with a negligible increase in the number of learnable parameters. We validate our method on multiple tasks, such as mode separation, conditional generation, and attribute discovery, on multiple real-world image datasets and demonstrate its efficacy over other state-of-the-art methods.
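As a rough illustration of the mechanism the abstract describes, the sketch below samples a multimodal latent code by reparameterizing a discrete mode variable with continuous Gaussian noise and trains an inversion network alongside the generator. This is a minimal sketch only, assuming a PyTorch-style setup; the mixture-of-Gaussians parameterization, the network shapes, and the use of cross-entropy over mode labels as the divergence term are illustrative assumptions rather than the paper's exact formulation.

# Minimal sketch (illustrative assumptions, not the paper's exact formulation):
# a multimodal latent prior built by reparameterizing a discrete mode variable
# with continuous Gaussian noise, plus an inversion network trained in tandem
# with the generator via a divergence-style loss.
import torch
import torch.nn as nn
import torch.nn.functional as F

K, D = 10, 64                         # assumed number of modes and latent dimension
mode_means = torch.randn(K, D)        # per-mode means (modal priors; learnable in practice)

def sample_latent(batch_size, prior_probs):
    # Pick a discrete mode k, then add continuous noise: z = mu_k + eps.
    k = torch.multinomial(prior_probs, batch_size, replacement=True)
    eps = 0.1 * torch.randn(batch_size, D)
    return mode_means[k] + eps, k

generator = nn.Sequential(nn.Linear(D, 256), nn.ReLU(), nn.Linear(256, 784))
inverter = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, K))

prior_probs = torch.full((K,), 1.0 / K)   # uniform modal priors; the paper matches them to the data with minimal supervision
z, k = sample_latent(32, prior_probs)
x_fake = generator(z)                     # the usual adversarial loss against a discriminator is omitted here
mode_logits = inverter(x_fake)

# Divergence-style term tying generated samples back to their latent mode;
# cross-entropy over mode labels is an illustrative stand-in.
inversion_loss = F.cross_entropy(mode_logits, k)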


Citations
Posted Content

Generative Adversarial Networks (GANs): Challenges, Solutions, and Future Directions

TL;DR: This study performs a comprehensive survey of advancements in GAN design and optimization solutions, proposes a new taxonomy that structures solutions by key research issues, and presents promising research directions in this rapidly growing field.
Posted Content

Imperfect ImaGANation: Implications of GANs Exacerbating Biases on Facial Data Augmentation and Snapchat Selfie Lenses

TL;DR: This study empirically demonstrates that, when given an input dataset of engineering-faculty headshots collected from 47 online university directory webpages in the United States that is biased toward white males, a state-of-the-art (unconditional variant of a) GAN "imagines" faces of synthetic engineering professors with masculine facial features and white skin color.
References
Journal Article · DOI

Generative Adversarial Nets

TL;DR: A new framework for estimating generative models via an adversarial process, in which two models are trained simultaneously: a generative model G that captures the data distribution and a discriminative model D that estimates the probability that a sample came from the training data rather than from G.
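For context, the adversarial process summarized above is the standard two-player minimax game from the cited paper, in which G and D are trained jointly:

\min_G \max_D \; V(D, G) = \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}\big[\log D(x)\big] + \mathbb{E}_{z \sim p_z(z)}\big[\log\big(1 - D(G(z))\big)\big]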
Journal Article · DOI

Reducing the Dimensionality of Data with Neural Networks

TL;DR: This article describes an effective way of initializing the weights that allows deep autoencoder networks to learn low-dimensional codes that work much better than principal components analysis as a tool for reducing the dimensionality of data.
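A minimal sketch of the bottleneck idea this summary refers to, assuming a PyTorch-style setup with illustrative layer sizes: an encoder compresses inputs to a low-dimensional code and a decoder reconstructs them, with the code playing the role of a nonlinear alternative to principal components.

# Minimal autoencoder sketch (layer sizes and loss are illustrative choices):
# the 30-dimensional code is the low-dimensional representation learned in
# place of principal components.
import torch.nn as nn

encoder = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 30))
decoder = nn.Sequential(nn.Linear(30, 256), nn.ReLU(), nn.Linear(256, 784))
reconstruction_loss = nn.MSELoss()   # trained to minimize ||x - decoder(encoder(x))||^2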
Journal Article · DOI

A fast learning algorithm for deep belief nets

TL;DR: A fast, greedy algorithm is derived that can learn deep, directed belief networks one layer at a time, provided the top two layers form an undirected associative memory.
Proceedings Article · DOI

Image-to-Image Translation with Conditional Adversarial Networks

TL;DR: Conditional adversarial networks are investigated as a general-purpose solution to image-to-image translation problems and it is demonstrated that this approach is effective at synthesizing photos from label maps, reconstructing objects from edge maps, and colorizing images, among other tasks.
Journal Article · DOI

Representation Learning: A Review and New Perspectives

TL;DR: Recent work in the area of unsupervised feature learning and deep learning is reviewed, covering advances in probabilistic models, autoencoders, manifold learning, and deep networks.