Proceedings ArticleDOI

Disentangled Representation Learning GAN for Pose-Invariant Face Recognition

TLDR
Quantitative and qualitative evaluation on both controlled and in-the-wild databases demonstrates the superiority of DR-GAN over the state of the art.
Abstract
The large pose discrepancy between two face images is one of the key challenges in face recognition. Conventional approaches for pose-invariant face recognition either perform face frontalization on, or learn a pose-invariant representation from, a non-frontal face image. We argue that it is more desirable to perform both tasks jointly to allow them to leverage each other. To this end, this paper proposes Disentangled Representation learning-Generative Adversarial Network (DR-GAN) with three distinct novelties. First, the encoder-decoder structure of the generator allows DR-GAN to learn a generative and discriminative representation, in addition to image synthesis. Second, this representation is explicitly disentangled from other face variations such as pose, through the pose code provided to the decoder and pose estimation in the discriminator. Third, DR-GAN can take one or multiple images as the input, and generate one unified representation along with an arbitrary number of synthetic images. Quantitative and qualitative evaluation on both controlled and in-the-wild databases demonstrates the superiority of DR-GAN over the state of the art.
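To make the architecture described in the abstract concrete, the following is a minimal PyTorch-style sketch of the encoder-decoder generator (identity feature plus pose code and noise fed to the decoder) and the multi-task discriminator (identity and pose heads). Layer sizes, the pose-code and noise dimensions, and module names are illustrative assumptions, not the authors' exact DR-GAN configuration.

import torch
import torch.nn as nn

class Encoder(nn.Module):
    """G_enc: maps a face image to an identity representation f(x)."""
    def __init__(self, feat_dim=320):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, 2, 1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 4, 2, 1), nn.ReLU(inplace=True),
            nn.Conv2d(128, 256, 4, 2, 1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(256, feat_dim),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """G_dec: synthesizes a face from [f(x), pose code c, noise z]."""
    def __init__(self, feat_dim=320, pose_dim=13, noise_dim=50):
        super().__init__()
        self.fc = nn.Linear(feat_dim + pose_dim + noise_dim, 256 * 8 * 8)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(64, 3, 4, 2, 1), nn.Tanh(),
        )

    def forward(self, f, pose_code, z):
        h = self.fc(torch.cat([f, pose_code, z], dim=1)).view(-1, 256, 8, 8)
        return self.net(h)

class Discriminator(nn.Module):
    """D: predicts identity (with an extra class for synthetic images) and pose."""
    def __init__(self, num_ids=500, num_poses=13):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 64, 4, 2, 1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(64, 128, 4, 2, 1), nn.LeakyReLU(0.2, inplace=True),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.id_head = nn.Linear(128, num_ids + 1)   # identity classes + "fake"
        self.pose_head = nn.Linear(128, num_poses)   # pose estimation head

    def forward(self, x):
        h = self.backbone(x)
        return self.id_head(h), self.pose_head(h)

Because the pose code is an explicit input to the decoder and the discriminator must predict pose, the encoder's output is pushed to carry identity information while discarding pose, which is the disentanglement the abstract describes.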



Citations
Posted Content

Occlusion-Invariant Rotation-Equivariant Semi-Supervised Depth Based Cross-View Gait Pose Estimation.

TL;DR: In this article, an occlusion-invariant semi-supervised learning framework built upon a novel rotation-equivariant backbone is proposed for cross-view gait pose estimation, and it generalizes well to real-world data from unseen views.
Journal ArticleDOI

Rotation and Translation Invariant Representation Learning with Implicit Neural Representations

TL;DR: In this paper, an implicit neural representation (INR) with a hypernetwork is used to obtain semantic representations disentangled from the orientation of the image, and the method effectively learns disentangled semantic representations on more complex images.
Journal ArticleDOI

A review of disentangled representation learning for visual data processing and analysis

TL;DR: In this paper, the authors review disentangled representation learning methods for visual data processing and analysis.
References
Proceedings Article

Adam: A Method for Stochastic Optimization

TL;DR: This work introduces Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions, based on adaptive estimates of lower-order moments, and provides a regret bound on the convergence rate that is comparable to the best known results under the online convex optimization framework.
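The update rule behind Adam fits in a few lines; the function below is an illustrative NumPy sketch of a single step using the commonly cited default hyperparameters, not the authors' reference implementation.

import numpy as np

def adam_step(theta, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """Return updated (theta, m, v) after one Adam step at iteration t (1-based)."""
    m = beta1 * m + (1 - beta1) * grad          # biased first-moment estimate
    v = beta2 * v + (1 - beta2) * grad ** 2     # biased second-moment estimate
    m_hat = m / (1 - beta1 ** t)                # bias-corrected first moment
    v_hat = v / (1 - beta2 ** t)                # bias-corrected second moment
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v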
Journal ArticleDOI

Generative Adversarial Nets

TL;DR: A new framework for estimating generative models via an adversarial process, in which two models are simultaneously trained: a generative model G that captures the data distribution and a discriminative model D that estimates the probability that a sample came from the training data rather than G.
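The two-player game summarized above can be sketched as one training step. The snippet below is a hedged PyTorch-style example that assumes G maps noise to samples, D outputs a probability, and uses the commonly adopted non-saturating generator loss rather than the literal minimax objective.

import torch
import torch.nn.functional as F

def gan_step(G, D, real, opt_g, opt_d, noise_dim=100):
    z = torch.randn(real.size(0), noise_dim, device=real.device)
    fake = G(z)

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    d_real, d_fake = D(real), D(fake.detach())
    d_loss = F.binary_cross_entropy(d_real, torch.ones_like(d_real)) + \
             F.binary_cross_entropy(d_fake, torch.zeros_like(d_fake))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step (non-saturating variant): push D(G(z)) toward 1.
    d_on_fake = D(fake)
    g_loss = F.binary_cross_entropy(d_on_fake, torch.ones_like(d_on_fake))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()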
Journal ArticleDOI

Representation Learning: A Review and New Perspectives

TL;DR: Recent work in the area of unsupervised feature learning and deep learning is reviewed, covering advances in probabilistic models, autoencoders, manifold learning, and deep networks.
Proceedings ArticleDOI

FaceNet: A unified embedding for face recognition and clustering

TL;DR: A system that directly learns a mapping from face images to a compact Euclidean space where distances directly correspond to a measure of face similarity, and achieves state-of-the-art face recognition performance using only 128 bytes per face.
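A minimal sketch of how such an embedding is typically trained and used: a triplet loss on L2-normalized 128-D embeddings, and a squared-distance threshold for verification. The margin and threshold values below are illustrative assumptions, not FaceNet's tuned settings.

import torch
import torch.nn.functional as F

def triplet_loss(anchor, positive, negative, margin=0.2):
    """anchor/positive/negative: L2-normalized embeddings of shape (B, 128)."""
    d_pos = (anchor - positive).pow(2).sum(dim=1)   # distance to same identity
    d_neg = (anchor - negative).pow(2).sum(dim=1)   # distance to different identity
    return F.relu(d_pos - d_neg + margin).mean()

def same_identity(emb_a, emb_b, threshold=1.1):
    """Verification: two faces match if their squared distance falls below a threshold."""
    return (emb_a - emb_b).pow(2).sum(dim=1) < threshold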
Posted Content

Conditional Generative Adversarial Nets

Mehdi Mirza, +1 more
06 Nov 2014
TL;DR: The conditional version of generative adversarial nets is introduced, which can be constructed by simply feeding the conditioning data, y, to both the generator and the discriminator, and it is shown that this model can generate MNIST digits conditioned on class labels.
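The conditioning mechanism summarized above amounts to concatenating the label with the inputs of both networks. The snippet below is an assumed PyTorch-style illustration with one-hot labels and flattened images, not the authors' code.

import torch
import torch.nn.functional as F

def generator_input(z, y_onehot):
    # G receives the noise and the condition jointly.
    return torch.cat([z, y_onehot], dim=1)

def discriminator_input(x_flat, y_onehot):
    # D judges the sample together with the same condition.
    return torch.cat([x_flat, y_onehot], dim=1)

# Example: MNIST digits conditioned on class labels (hypothetical shapes).
z = torch.randn(16, 100)
y = F.one_hot(torch.randint(0, 10, (16,)), num_classes=10).float()
g_in = generator_input(z, y)   # shape (16, 110), fed to the generator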