Open Access Proceedings Article

Latent space oddity: On the curvature of deep generative models

TLDR
This work shows that the nonlinearity of the generator implies that the latent space gives a distorted view of the input space, shows that this distortion can be characterized by a stochastic Riemannian metric, and demonstrates that distances and interpolants are significantly improved under this metric.
Abstract
Deep generative models provide a systematic way to learn nonlinear data distributions, through a set of latent variables and a nonlinear "generator" function that maps latent points into the input space. The nonlinearity of the generator implies that the latent space gives a distorted view of the input space. Under mild conditions, we show that this distortion can be characterized by a stochastic Riemannian metric, and demonstrate that distances and interpolants are significantly improved under this metric. This in turn improves probability distributions, sampling algorithms and clustering in the latent space. Our geometric analysis further reveals that current generators provide poor variance estimates, and we propose a new generator architecture with vastly improved variance estimates. Results are demonstrated on convolutional and fully connected variational autoencoders, but the formalism easily generalizes to other deep generative models.
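The metric the abstract refers to can be sketched in a few lines. The following is a minimal illustration (not the paper's code): it assumes a toy deterministic generator g: R^2 -> R^3, whereas the paper's metric is stochastic because the generator also predicts a variance; only the mean term M(z) = J_g(z)^T J_g(z) and the induced curve length are shown.

```python
# Minimal sketch of the pull-back metric induced by a deterministic generator.
# The toy generator and the curve discretization are assumptions for illustration.
import jax
import jax.numpy as jnp

def generator(z):
    # Toy nonlinear generator standing in for a decoder network (assumption).
    W1 = jnp.array([[1.0, 0.5], [-0.3, 2.0], [0.7, -1.2]])
    return jnp.tanh(W1 @ z)

def pullback_metric(z):
    # M(z) = J(z)^T J(z), where J is the generator Jacobian at z.
    J = jax.jacfwd(generator)(z)          # shape (D, d)
    return J.T @ J                        # shape (d, d)

def curve_length(zs):
    # Riemannian length of a discretized latent curve z_0, ..., z_T:
    # sum_t sqrt(dz_t^T M(z_t) dz_t), i.e. lengths are measured in input space.
    dz = zs[1:] - zs[:-1]
    Ms = jax.vmap(pullback_metric)(zs[:-1])
    return jnp.sum(jnp.sqrt(jnp.einsum('ti,tij,tj->t', dz, Ms, dz)))

# A straight line in latent space; under M its length generally differs from
# the Euclidean one, which is the distortion the abstract describes.
z0, z1 = jnp.array([-1.0, 0.0]), jnp.array([1.0, 1.0])
line = jnp.linspace(0.0, 1.0, 50)[:, None] * (z1 - z0) + z0
print(curve_length(line))
```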



Citations
Proceedings Article

Interpreting the Latent Space of GANs for Semantic Face Editing

TL;DR: This work proposes a novel framework, called InterFaceGAN, for semantic face editing by interpreting the latent semantics learned by GANs, and finds that the latent code of well-trained generative models actually learns a disentangled representation after linear transformations.
Posted Content

Interpreting the Latent Space of GANs for Semantic Face Editing

TL;DR: InterFaceGAN, as discussed by the authors, explores the disentanglement between various semantics and manages to decouple some entangled semantics with subspace projection, leading to more precise control of facial attributes, including gender, age, expression, and the presence of eyeglasses.
Posted Content

InterFaceGAN: Interpreting the Disentangled Face Representation Learned by GANs

TL;DR: A framework called InterFaceGAN is proposed to interpret the disentangled face representation learned by state-of-the-art GAN models and to study the properties of the facial semantics encoded in the latent space; the results suggest that learning to synthesize faces spontaneously brings a disentangled and controllable face representation.
Posted Content

A survey of algorithmic recourse: definitions, formulations, solutions, and prospects.

TL;DR: An extensive literature review is performed, and an overview of prospective research directions for the community is provided, challenging existing assumptions and making explicit connections to other ethical challenges such as security, privacy, and fairness.
Posted Content

Is Generator Conditioning Causally Related to GAN Performance?

TL;DR: In this article, the authors study the distribution of singular values of the Jacobian of the generator in Generative Adversarial Networks (GANs) and find that this Jacobian generally becomes ill-conditioned at the beginning of training.
References
Journal Article

Generative Adversarial Nets

TL;DR: A new framework for estimating generative models via an adversarial process, in which two models are simultaneously trained: a generative model G that captures the data distribution and a discriminative model D that estimates the probability that a sample came from the training data rather than G.
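The adversarial objective summarized above can be sketched as follows. This is an illustrative snippet only, computing the original GAN losses on precomputed discriminator probabilities; real training would backpropagate through neural networks, which is omitted here.

```python
# Sketch of the original GAN objective on toy discriminator outputs (assumptions).
import numpy as np

def discriminator_loss(d_real, d_fake):
    # D maximizes E[log D(x)] + E[log(1 - D(G(z)))]; written as a loss to minimize.
    return -(np.mean(np.log(d_real)) + np.mean(np.log(1.0 - d_fake)))

def generator_loss(d_fake):
    # G minimizes E[log(1 - D(G(z)))] in the minimax game; in practice the
    # non-saturating form -E[log D(G(z))] is often used instead.
    return np.mean(np.log(1.0 - d_fake))

# d_real / d_fake stand for discriminator probabilities on data and generated samples.
d_real = np.array([0.9, 0.8, 0.95])
d_fake = np.array([0.2, 0.1, 0.3])
print(discriminator_loss(d_real, d_fake), generator_loss(d_fake))
```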
Proceedings Article

Auto-Encoding Variational Bayes

TL;DR: A stochastic variational inference and learning algorithm is introduced that scales to large datasets and, under some mild differentiability conditions, even works in the intractable case.
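A key ingredient of this algorithm is the reparameterization trick, sketched below for a diagonal-Gaussian encoder. The encoder outputs and seed values here are assumptions for illustration, not code from the paper.

```python
# Sketch of the reparameterization trick: sample z = mu + sigma * eps with
# eps ~ N(0, I), so Monte Carlo estimates of the variational bound stay
# differentiable with respect to the encoder outputs.
import numpy as np

def sample_latent(mu, log_var, rng):
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def kl_to_standard_normal(mu, log_var):
    # Closed-form KL(q(z|x) || N(0, I)) for a diagonal Gaussian encoder.
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)

rng = np.random.default_rng(0)
mu, log_var = np.array([0.3, -0.1]), np.array([-1.0, 0.2])
z = sample_latent(mu, log_var, rng)
print(z, kl_to_standard_normal(mu, log_var))
```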
Journal Article

Representation Learning: A Review and New Perspectives

TL;DR: Recent work in the area of unsupervised feature learning and deep learning is reviewed, covering advances in probabilistic models, autoencoders, manifold learning, and deep networks.
Posted Content

Stochastic Backpropagation and Approximate Inference in Deep Generative Models

TL;DR: In this article, a recognition model is proposed to represent approximate posterior distributions and act as a stochastic encoder of the data, allowing joint optimisation of the parameters of both the generative model and the recognition model.
Proceedings Article

Contractive Auto-Encoders: Explicit Invariance During Feature Extraction

TL;DR: It is found empirically that this penalty helps to carve a representation that better captures the local directions of variation dictated by the data, corresponding to a lower-dimensional non-linear manifold, while being more invariant to the vast majority of directions orthogonal to the manifold.
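The penalty referred to above is the contractive term, i.e. the squared Frobenius norm of the encoder Jacobian added to the reconstruction loss. The sketch below illustrates it with a toy one-layer encoder; the encoder weights and input are assumptions for illustration.

```python
# Sketch of the contractive penalty ||J_f(x)||_F^2 on a toy encoder f: R^3 -> R^2.
import jax
import jax.numpy as jnp

def encoder(x):
    # Toy one-layer encoder standing in for a real network (assumption).
    W = jnp.array([[0.4, -1.1, 0.2], [0.9, 0.3, -0.5]])
    return jax.nn.sigmoid(W @ x)

def contractive_penalty(x):
    J = jax.jacfwd(encoder)(x)            # encoder Jacobian, shape (2, 3)
    return jnp.sum(J ** 2)                # squared Frobenius norm

print(contractive_penalty(jnp.array([0.5, -0.2, 1.0])))
```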