Open Access Proceedings Article

PixelGAN Autoencoders

Alireza Makhzani, +1 more
Vol. 30, pp. 1975–1985
TLDR
PixelGAN as discussed by the authors is a generative autoencoder with a categorical prior, which can disentangle the style and content information of images in an unsupervised fashion.
Abstract
In this paper, we describe the "PixelGAN autoencoder", a generative autoencoder in which the generative path is a convolutional autoregressive neural network on pixels (PixelCNN) that is conditioned on a latent code, and the recognition path uses a generative adversarial network (GAN) to impose a prior distribution on the latent code. We show that different priors result in different decompositions of information between the latent code and the autoregressive decoder. For example, by imposing a Gaussian distribution as the prior, we can achieve a global vs. local decomposition, or by imposing a categorical distribution as the prior, we can disentangle the style and content information of images in an unsupervised fashion. We further show how the PixelGAN autoencoder with a categorical prior can be directly used in semi-supervised settings and achieve competitive semi-supervised classification results on the MNIST, SVHN and NORB datasets.
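The decomposition described in the abstract relies on the autoregressive PixelCNN decoder, which predicts each pixel from the previously generated pixels (in raster-scan order) together with the latent code, while a GAN discriminator pushes the encoder's output toward the chosen prior. A minimal sketch of the causal convolution mask that enforces the raster-scan ordering, in plain NumPy (the function name and the comments about the surrounding architecture are illustrative, not taken from the paper's code):

```python
import numpy as np

def pixelcnn_mask(kernel_size, mask_type):
    """Causal mask for a PixelCNN convolution kernel (raster-scan order).

    Type 'A' (used in the first layer) also zeroes the centre position,
    so the network never sees the pixel it is currently predicting;
    type 'B' (later layers) keeps the centre.
    """
    k = kernel_size
    mask = np.ones((k, k), dtype=np.float32)
    c = k // 2
    # Zero out everything to the right of the centre in the centre row
    # (type 'A' also zeroes the centre itself) ...
    mask[c, c + (1 if mask_type == "B" else 0):] = 0.0
    # ... and every row below the centre.
    mask[c + 1:, :] = 0.0
    return mask

# In the full PixelGAN autoencoder (sketch, not the authors' code):
#   z = encoder(x)                      # recognition path
#   d = discriminator(z) vs. d(prior)   # GAN imposes p(z): Gaussian or categorical
#   logits = pixelcnn(x * masks, cond=z)  # decoder conditioned on latent code
```

For example, `pixelcnn_mask(3, "A")` yields `[[1, 1, 1], [1, 0, 0], [0, 0, 0]]`: the kernel can only see pixels above, or to the left in the same row.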


Citations
Proceedings Article

Disentangling by Factorising

TL;DR: This article proposes FactorVAE, a method that disentangles representations by encouraging their distribution to be factorial, and hence independent across dimensions, and shows that it improves upon β-VAE by providing a better trade-off between disentanglement and reconstruction quality.
Posted Content

Recent Advances in Autoencoder-Based Representation Learning

TL;DR: An in-depth review of recent advances in representation learning, focusing on autoencoder-based models that make use of meta-priors believed to be useful for downstream tasks, such as disentanglement and hierarchical organization of features.
Journal ArticleDOI

A practical tutorial on autoencoders for nonlinear feature fusion: taxonomy, models, software and guidelines

TL;DR: Autoencoders (AEs) as mentioned in this paper have emerged as an alternative to manifold learning for conducting nonlinear feature fusion, and they can be used to generate reduced feature sets through the fusion of the original ones.
Proceedings Article

Generative probabilistic novelty detection with adversarial autoencoders

TL;DR: In this paper, a probabilistic approach is proposed to estimate the likelihood that a sample was generated by the inlier distribution, and the probability factorizes and can be computed with respect to local coordinates of the manifold tangent space.
References
Journal ArticleDOI

Generative Adversarial Nets

TL;DR: A new framework for estimating generative models via an adversarial process, in which two models are simultaneously trained: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than from G.
Proceedings Article

Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift

TL;DR: Applied to a state-of-the-art image classification model, Batch Normalization achieves the same accuracy with 14 times fewer training steps, and beats the original model by a significant margin.
Proceedings Article

Auto-Encoding Variational Bayes

TL;DR: A stochastic variational inference and learning algorithm that scales to large datasets and, under some mild differentiability conditions, even works in the intractable case is introduced.
Posted Content

Improved Techniques for Training GANs

TL;DR: In this article, the authors present a variety of new architectural features and training procedures that apply to the generative adversarial networks (GANs) framework and achieve state-of-the-art results in semi-supervised classification on MNIST, CIFAR-10 and SVHN.