Book ChapterDOI

Generalized Loss-Sensitive Adversarial Learning with Manifold Margins

TLDR
A pullback operator is defined to map samples back to their data manifold, and a manifold margin, defined as the distance between the pullback representations, is used to distinguish between real and fake samples and to learn the optimal generator.
Abstract
The classic Generative Adversarial Net and its variants can be roughly categorized into two large families: the unregularized versus the regularized GANs. By relaxing the non-parametric assumption on the discriminator in the classic GAN, the regularized GANs have better generalization ability to produce new samples drawn from the real distribution. It is well known that real data such as natural images are not uniformly distributed over the whole data space. Instead, they are often restricted to a low-dimensional manifold of the ambient space. Such a manifold assumption suggests that the distance over the manifold is a better measure to characterize the distinction between real and fake samples. Thus, we define a pullback operator to map samples back to their data manifold, and a manifold margin, defined as the distance between the pullback representations, is used to distinguish between real and fake samples and to learn the optimal generator. We justify the effectiveness of the proposed model both theoretically and empirically.
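As a rough illustration of this idea (the notation below is a sketch and may not match the chapter exactly), let L_theta be the loss function, G_phi the generator, P the pullback operator, and Delta a distance between pullback representations. The loss of a real sample is then required to undercut the loss of a generated sample by the manifold margin:

```latex
% Illustrative sketch of a loss-sensitive objective with a manifold margin
% (notation assumed for exposition, not taken verbatim from the chapter).
\min_{\theta}\;
  \mathbb{E}_{x \sim p_{\mathrm{data}}}\!\left[L_\theta(x)\right]
  + \lambda\,
  \mathbb{E}_{x \sim p_{\mathrm{data}},\, z \sim p_z}
  \Big[\Delta\big(P(x),\, P(G_\phi(z))\big) + L_\theta(x) - L_\theta(G_\phi(z))\Big]_{+},
\qquad
\min_{\phi}\;
  \mathbb{E}_{z \sim p_z}\!\left[L_\theta(G_\phi(z))\right]
```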


Citations
Posted Content

Loss-Sensitive Generative Adversarial Networks on Lipschitz Densities

TL;DR: In this article, a loss-sensitive GAN (LS-GAN) is proposed to distinguish between real and fake samples by designated margins, while learning a generator alternately to produce realistic samples by minimizing their losses.
Journal ArticleDOI

Loss-Sensitive Generative Adversarial Networks on Lipschitz Densities

TL;DR: The Lipschitz regularization theory and algorithms for a novel Loss-Sensitive Generative Adversarial Network (LS-GAN) are presented, yielding a regularized model that can better generalize to produce new data from a reasonable number of training examples than the classic GAN.
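For concreteness, a minimal PyTorch-style sketch of such loss-sensitive objectives is given below; the function and argument names (`loss_fn`, `generator`, `lambda_reg`) are illustrative, and the margin here is a simple L1 distance in data space rather than necessarily the one used by the authors.

```python
import torch
import torch.nn.functional as F

def ls_gan_losses(loss_fn, generator, x_real, z, lambda_reg=1.0):
    """Sketch of loss-sensitive GAN objectives (illustrative, not the authors' code)."""
    x_fake = generator(z)

    # Per-sample losses L_theta(x); loss_fn is assumed to output one scalar per sample.
    l_real = loss_fn(x_real).view(-1)
    l_fake = loss_fn(x_fake.detach()).view(-1)

    # Designated margin between real and fake samples (here: L1 distance in data space).
    margin = (x_real - x_fake.detach()).abs().flatten(1).sum(dim=1)

    # Loss-function (critic) objective: real losses should undercut fake losses by the margin.
    critic_loss = l_real.mean() + lambda_reg * F.relu(margin + l_real - l_fake).mean()

    # Generator objective: produce samples that achieve a small loss under L_theta.
    generator_loss = loss_fn(x_fake).view(-1).mean()
    return critic_loss, generator_loss
```

In practice the two objectives are minimized alternately; in the manifold-margin generalization above, the data-space distance is replaced by the distance between pullback representations.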
Proceedings ArticleDOI

AET vs. AED: Unsupervised Representation Learning by Auto-Encoding Transformations Rather Than Data

TL;DR: The experiments show that AET greatly improves over existing unsupervised approaches, setting new state-of-the-art performances that are substantially closer to the upper bounds set by their fully supervised counterparts on the CIFAR-10, ImageNet and Places datasets.
Book ChapterDOI

An Adversarial Approach to Hard Triplet Generation

TL;DR: This work proposes an adversarial network for Hard Triplet Generation (HTG) to optimize the network's ability to distinguish similar examples of different categories as well as to group varied examples of the same categories.
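The property being optimized is the one enforced by a standard triplet objective; as a point of reference (a general formulation, not one quoted from the paper), for an embedding f, anchor a, positive p, negative n, distance d and margin m:

```latex
% Standard triplet loss that a hard triplet generator works against (general form).
\mathcal{L}_{\mathrm{triplet}}(a, p, n)
  = \max\!\big(0,\; d(f(a), f(p)) - d(f(a), f(n)) + m\big)
```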
Posted Content

AET vs. AED: Unsupervised Representation Learning by Auto-Encoding Transformations rather than Data

TL;DR: In this article, unsupervised representation learning by Auto-Encoding Transformations (AET) is proposed, in which the model learns to predict a given transformation from the encoded features as accurately as possible at the output end.
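A compact way to read this objective (notation assumed for illustration): with an encoder E, a transformation decoder D, and transformations t sampled from a family p_T, AET minimizes the error of the recovered transformation:

```latex
% Sketch of the AET training objective (illustrative notation): the decoder
% predicts the transformation from the encodings of the original and the
% transformed image.
\min_{E,\, D}\;
  \mathbb{E}_{x \sim p_{\mathrm{data}},\; t \sim p_T}\,
  \ell\big(t,\; D\big(E(x),\, E(t(x))\big)\big)
```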
References
Journal ArticleDOI

Generative Adversarial Nets

TL;DR: A new framework for estimating generative models via an adversarial process, in which two models are simultaneously trained: a generative model G that captures the data distribution and a discriminative model D that estimates the probability that a sample came from the training data rather than G.
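The adversarial process referred to here is the familiar two-player minimax game between G and D:

```latex
% The GAN minimax objective of Goodfellow et al.
\min_{G}\max_{D}\;
  \mathbb{E}_{x \sim p_{\mathrm{data}}}\!\left[\log D(x)\right]
  + \mathbb{E}_{z \sim p_z}\!\left[\log\big(1 - D(G(z))\big)\right]
```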
Journal Article

Dropout: a simple way to prevent neural networks from overfitting

TL;DR: It is shown that dropout improves the performance of neural networks on supervised learning tasks in vision, speech recognition, document classification and computational biology, obtaining state-of-the-art results on many benchmark data sets.
Proceedings Article

Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift

TL;DR: Applied to a state-of-the-art image classification model, Batch Normalization achieves the same accuracy with 14 times fewer training steps, and beats the original model by a significant margin.
Journal ArticleDOI

ImageNet Large Scale Visual Recognition Challenge

TL;DR: The ImageNet Large Scale Visual Recognition Challenge (ILSVRC) is a benchmark in object category classification and detection on hundreds of object categories and millions of images; it has been run annually from 2010 to the present, attracting participation from more than fifty institutions.
Proceedings Article

Auto-Encoding Variational Bayes

TL;DR: A stochastic variational inference and learning algorithm is introduced that scales to large datasets and, under some mild differentiability conditions, even works in the intractable case.
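The quantity this algorithm optimizes is the variational lower bound (ELBO) on the data log-likelihood:

```latex
% Variational lower bound maximized by Auto-Encoding Variational Bayes.
\mathcal{L}(\theta, \phi; x)
  = \mathbb{E}_{q_\phi(z \mid x)}\!\left[\log p_\theta(x \mid z)\right]
  - \mathrm{KL}\!\big(q_\phi(z \mid x)\,\big\|\,p_\theta(z)\big)
```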