Proceedings ArticleDOI

Disentangled Representation Learning GAN for Pose-Invariant Face Recognition

TLDR
Quantitative and qualitative evaluations on both controlled and in-the-wild databases demonstrate the superiority of DR-GAN over the state of the art.
Abstract
The large pose discrepancy between two face images is one of the key challenges in face recognition. Conventional approaches for pose-invariant face recognition either perform face frontalization on, or learn a pose-invariant representation from, a non-frontal face image. We argue that it is more desirable to perform both tasks jointly to allow them to leverage each other. To this end, this paper proposes Disentangled Representation learning-Generative Adversarial Network (DR-GAN) with three distinct novelties. First, the encoder-decoder structure of the generator allows DR-GAN to learn a generative and discriminative representation, in addition to image synthesis. Second, this representation is explicitly disentangled from other face variations such as pose, through the pose code provided to the decoder and pose estimation in the discriminator. Third, DR-GAN can take one or multiple images as input, and generate one unified representation along with an arbitrary number of synthetic images. Quantitative and qualitative evaluations on both controlled and in-the-wild databases demonstrate the superiority of DR-GAN over the state of the art.
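As a rough illustration of this structure, the sketch below (assuming PyTorch; it is not the authors' code, and the layer sizes, feature dimension, and numbers of pose and identity classes are placeholders) shows an encoder-decoder generator whose decoder receives the identity feature together with a pose code and a noise vector, and a discriminator with both an identity head (with an extra "fake" class) and a pose-estimation head.

import torch
import torch.nn as nn

class Generator(nn.Module):
    def __init__(self, feat_dim=320, pose_dim=13, noise_dim=50):
        super().__init__()
        # Encoder: face image -> identity representation f(x)
        self.enc = nn.Sequential(
            nn.Conv2d(3, 64, 4, 2, 1), nn.ReLU(),
            nn.Conv2d(64, 128, 4, 2, 1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(128, feat_dim),
        )
        # Decoder: [f(x), pose code c, noise z] -> synthetic face at target pose
        self.dec = nn.Sequential(
            nn.Linear(feat_dim + pose_dim + noise_dim, 128 * 8 * 8), nn.ReLU(),
            nn.Unflatten(1, (128, 8, 8)),
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, 2, 1), nn.Tanh(),
        )

    def forward(self, x, pose_code, noise):
        f = self.enc(x)  # pose-invariant identity representation
        return self.dec(torch.cat([f, pose_code, noise], dim=1)), f

class Discriminator(nn.Module):
    def __init__(self, num_ids=500, num_poses=13):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 64, 4, 2, 1), nn.ReLU(),
            nn.Conv2d(64, 128, 4, 2, 1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.id_head = nn.Linear(128, num_ids + 1)   # +1 for the "fake" class
        self.pose_head = nn.Linear(128, num_poses)   # pose estimation branch

    def forward(self, x):
        h = self.backbone(x)
        return self.id_head(h), self.pose_head(h)

At test time only the encoder is needed: its output f(x) serves as the representation for recognition, and features extracted from multiple images of the same subject can be fused (for example, averaged) into one unified representation.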


Citations
Journal ArticleDOI

A Domain Gap Aware Generative Adversarial Network for Multi-Domain Image Translation

TL;DR: This paper employs a perceptual self-regularization constraint to maintain consistency of global shape and local texture across multiple visual domains, which is more effective at representing shape deformation in challenging mappings with significant variation between domains.
Proceedings ArticleDOI

Knowledge Router: Learning Disentangled Representations for Knowledge Graphs

TL;DR: This paper proposes a new method to learn disentangled representations of knowledge graph (KG) entities, using a graph-level process and a neighborhood mechanism to disentangle the latent properties of each entity.
Book ChapterDOI

OpenGAN: Open Set Generative Adversarial Networks

TL;DR: OpenGAN proposes an open-set GAN architecture that is conditioned per input sample on a feature embedding drawn from a metric space, allowing the generative model to produce samples outside of the training distribution.
Posted Content

Toward a Controllable Disentanglement Network

TL;DR: This work proposes a distance-covariance-based decorrelation regularization to encourage disentanglement, and leverages a soft target representation combined with the latent image code to synthesize novel images with designated properties.
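As a hedged illustration of such a regularizer (the function below computes a generic sample distance covariance, not necessarily the cited paper's exact formulation; the function name and the choice of code groups are assumptions):

import torch

def distance_covariance(x, y):
    """Squared sample distance covariance between batches x: (n, p) and y: (n, q)."""
    a = torch.cdist(x, x)  # pairwise distances within x
    b = torch.cdist(y, y)  # pairwise distances within y
    # Double-center each distance matrix.
    A = a - a.mean(0, keepdim=True) - a.mean(1, keepdim=True) + a.mean()
    B = b - b.mean(0, keepdim=True) - b.mean(1, keepdim=True) + b.mean()
    return (A * B).mean()

# Example use: add lambda * distance_covariance(identity_code, attribute_code)
# to the training loss to push the two code groups toward statistical independence.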
Posted Content

Disentangling Pose from Appearance in Monochrome Hand Images

TL;DR: This work disentangles the representation of pose from a complementary appearance factor in 2D monochrome hand images, and supervises the disentanglement using a network that learns to generate hand images from specified pose and appearance features.
References
Proceedings Article

Adam: A Method for Stochastic Optimization

TL;DR: This work introduces Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions, based on adaptive estimates of lower-order moments, and provides a regret bound on the convergence rate that is comparable to the best known results under the online convex optimization framework.
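For reference, Adam maintains exponential moving averages of the gradient and its elementwise square; with gradient $g_t$, step size $\alpha$, decay rates $\beta_1, \beta_2$, and stability constant $\epsilon$, the bias-corrected update is:

\begin{aligned}
m_t &= \beta_1 m_{t-1} + (1-\beta_1)\, g_t, \qquad v_t = \beta_2 v_{t-1} + (1-\beta_2)\, g_t^2,\\
\hat{m}_t &= \frac{m_t}{1-\beta_1^t}, \qquad \hat{v}_t = \frac{v_t}{1-\beta_2^t},\\
\theta_t &= \theta_{t-1} - \alpha\, \frac{\hat{m}_t}{\sqrt{\hat{v}_t} + \epsilon}.
\end{aligned}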
Journal ArticleDOI

Generative Adversarial Nets

TL;DR: A new framework for estimating generative models via an adversarial process, in which two models are trained simultaneously: a generative model G that captures the data distribution and a discriminative model D that estimates the probability that a sample came from the training data rather than from G.
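Concretely, training corresponds to the two-player minimax game

\min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}[\log D(x)] + \mathbb{E}_{z \sim p_z(z)}[\log (1 - D(G(z)))],

where G maps a noise vector z to a sample and D outputs the probability that its input came from the data rather than from G.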
Journal ArticleDOI

Representation Learning: A Review and New Perspectives

TL;DR: Recent work in the area of unsupervised feature learning and deep learning is reviewed, covering advances in probabilistic models, autoencoders, manifold learning, and deep networks.
Proceedings ArticleDOI

FaceNet: A unified embedding for face recognition and clustering

TL;DR: A system that directly learns a mapping from face images to a compact Euclidean space where distances directly correspond to a measure of face similarity, and achieves state-of-the-art face recognition performance using only 128 bytes per face.
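The embedding f(·) is trained with a triplet loss that keeps an anchor face $x_i^a$ closer to a positive $x_i^p$ (same identity) than to a negative $x_i^n$ (different identity) by a margin $\alpha$:

L = \sum_i \big[\, \lVert f(x_i^a) - f(x_i^p) \rVert_2^2 - \lVert f(x_i^a) - f(x_i^n) \rVert_2^2 + \alpha \,\big]_+.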
Posted Content

Conditional Generative Adversarial Nets

Mehdi Mirza et al., 06 Nov 2014
TL;DR: The conditional version of generative adversarial nets is introduced, which can be constructed by simply feeding the data we wish to condition on, y, to both the generator and the discriminator; the model is shown to generate MNIST digits conditioned on class labels.
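A minimal sketch of that conditioning idea, assuming PyTorch and placeholder layer sizes (this is not the paper's exact MNIST model): a one-hot label y is concatenated with the inputs of both the generator and the discriminator.

import torch
import torch.nn as nn

class CondGenerator(nn.Module):
    def __init__(self, noise_dim=100, num_classes=10, img_dim=784):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(noise_dim + num_classes, 256), nn.ReLU(),
            nn.Linear(256, img_dim), nn.Tanh(),
        )

    def forward(self, z, y_onehot):
        # Condition generation on the class label by concatenation.
        return self.net(torch.cat([z, y_onehot], dim=1))

class CondDiscriminator(nn.Module):
    def __init__(self, num_classes=10, img_dim=784):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(img_dim + num_classes, 256), nn.LeakyReLU(0.2),
            nn.Linear(256, 1), nn.Sigmoid(),
        )

    def forward(self, x, y_onehot):
        # The discriminator also sees the label it should judge against.
        return self.net(torch.cat([x, y_onehot], dim=1))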