Proceedings ArticleDOI

Disentangled Representation Learning GAN for Pose-Invariant Face Recognition

TLDR
Quantitative and qualitative evaluations on both controlled and in-the-wild databases demonstrate the superiority of DR-GAN over the state of the art.
Abstract
The large pose discrepancy between two face images is one of the key challenges in face recognition. Conventional approaches for pose-invariant face recognition either perform face frontalization on, or learn a pose-invariant representation from, a non-frontal face image. We argue that it is more desirable to perform both tasks jointly to allow them to leverage each other. To this end, this paper proposes the Disentangled Representation learning-Generative Adversarial Network (DR-GAN) with three distinct novelties. First, the encoder-decoder structure of the generator allows DR-GAN to learn a generative and discriminative representation, in addition to image synthesis. Second, this representation is explicitly disentangled from other face variations such as pose, through the pose code provided to the decoder and pose estimation in the discriminator. Third, DR-GAN can take one or multiple images as input, and generate one unified representation along with an arbitrary number of synthetic images. Quantitative and qualitative evaluations on both controlled and in-the-wild databases demonstrate the superiority of DR-GAN over the state of the art.
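For readers who prefer code to prose, the following is a minimal PyTorch sketch of the structure the abstract describes: an encoder-decoder generator whose bottleneck serves as the identity representation, a pose code fed to the decoder, and a discriminator that performs identity classification (with an extra class for synthetic images) and pose estimation. The layer sizes, 96x96 resolution, and feature dimensions are illustrative assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn

# Illustrative sizes (assumptions): identity feature, pose code, noise,
# number of training identities, number of discrete poses.
FEAT_DIM, POSE_DIM, NOISE_DIM, N_ID, N_POSE = 320, 13, 50, 500, 13

class Encoder(nn.Module):
    """G_enc: face image -> pose-invariant identity feature f(x)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, 2, 1), nn.ReLU(True),     # 96 -> 48
            nn.Conv2d(64, 128, 4, 2, 1), nn.ReLU(True),   # 48 -> 24
            nn.Conv2d(128, 256, 4, 2, 1), nn.ReLU(True),  # 24 -> 12
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(256, FEAT_DIM),
        )
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """G_dec: [identity feature, pose code, noise] -> synthesized face."""
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(FEAT_DIM + POSE_DIM + NOISE_DIM, 256 * 12 * 12)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.ReLU(True),  # 12 -> 24
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.ReLU(True),   # 24 -> 48
            nn.ConvTranspose2d(64, 3, 4, 2, 1), nn.Tanh(),         # 48 -> 96
        )
    def forward(self, f, c, z):
        h = self.fc(torch.cat([f, c, z], dim=1)).view(-1, 256, 12, 12)
        return self.net(h)

class Discriminator(nn.Module):
    """D: image -> (identity logits with an extra 'synthetic' class, pose logits)."""
    def __init__(self):
        super().__init__()
        self.backbone = Encoder()
        self.id_head = nn.Linear(FEAT_DIM, N_ID + 1)
        self.pose_head = nn.Linear(FEAT_DIM, N_POSE)
    def forward(self, x):
        h = self.backbone(x)
        return self.id_head(h), self.pose_head(h)

# Encode a face, then decode it under a chosen target pose code.
x = torch.randn(4, 3, 96, 96)                               # stand-in input batch
f = Encoder()(x)                                            # identity representation
c = torch.eye(POSE_DIM)[torch.zeros(4, dtype=torch.long)]   # one-hot target pose
z = torch.randn(4, NOISE_DIM)
x_hat = Decoder()(f, c, z)                                  # synthesized face at target pose
id_logits, pose_logits = Discriminator()(x_hat)             # identity + pose predictions
```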


Citations
Posted Content

Auditing AI models for Verified Deployment under Semantic Specifications

TL;DR: In this paper, the authors propose AuditAI, a framework for auditing deep learning models that combines semantically aligned formal verification with scalability, in which a sequence of semantically aligned unit tests is used to verify whether a predefined specification (e.g., accuracy over 95%) is satisfied.
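To make the idea of a specification unit test concrete, here is a loose, hypothetical sketch of an accuracy check on a semantically defined test slice; the function name and the slicing step are illustrative assumptions, not AuditAI's API.

```python
from typing import Callable, Sequence

def accuracy_unit_test(model: Callable, inputs: Sequence, labels: Sequence,
                       threshold: float = 0.95) -> bool:
    """Pass iff the model's accuracy on this test slice meets the specification."""
    correct = sum(1 for x, y in zip(inputs, labels) if model(x) == y)
    return correct / max(len(labels), 1) >= threshold

# An audit would run a sequence of such tests, one per semantic specification
# (e.g., per pose range or lighting condition), before signing off on deployment.
```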
Journal ArticleDOI

Heterogeneous Face Interpretable Disentangled Representation for Joint Face Recognition and Synthesis

TL;DR: Zhang et al. propose the heterogeneous face interpretable disentangled representation (HFIDR), which explicitly interprets the dimensions of the face representation rather than relying on a simple mapping, and which can extract latent identity information for cross-modality recognition and convert the modality factor to synthesize cross-modality faces.
Proceedings ArticleDOI

SSDL: Self-Supervised Domain Learning for Improved Face Recognition

TL;DR: In this article, a self-supervised domain learning (SSDL) scheme is proposed that trains on triplets mined from unlabeled data, following an easy-to-hard schedule of alternating triplet mining and self-learning.
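A loose illustration of alternating triplet mining and self-learning with an easy-to-hard schedule is sketched below; the pseudo-labeling step, the semi-hard mining rule, and the shrinking margin band are assumptions for illustration, not the SSDL method itself.

```python
import numpy as np

def mine_triplets(emb: np.ndarray, pseudo_labels: np.ndarray, band: float):
    """Collect (anchor, positive, negative) triplets whose negative distance lies
    within `band` of the positive distance; narrowing the band keeps harder ones."""
    triplets = []
    for a in range(len(emb)):
        pos = np.where(pseudo_labels == pseudo_labels[a])[0]
        neg = np.where(pseudo_labels != pseudo_labels[a])[0]
        for p in pos[pos != a]:
            d_ap = np.linalg.norm(emb[a] - emb[p])
            for n in neg:
                d_an = np.linalg.norm(emb[a] - emb[n])
                if d_ap < d_an < d_ap + band:            # semi-hard negative
                    triplets.append((a, p, n))
    return triplets

# Alternate rounds: re-embed and re-cluster the unlabeled faces, mine triplets
# with a band that shrinks each round, then fine-tune the embedding on them.
for band in [1.0, 0.5, 0.2]:
    emb = np.random.randn(40, 64)                        # stand-in embeddings
    pseudo_labels = np.random.randint(0, 5, size=40)     # stand-in cluster ids
    triplets = mine_triplets(emb, pseudo_labels, band)
    # ...fine-tune the face embedding network on `triplets` (self-learning step)
```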
Posted Content

Red Carpet to Fight Club: Partially-supervised Domain Transfer for Face Recognition in Violent Videos

TL;DR: The "WildestFaces" dataset is introduced, tailored to study cross-domain recognition under a variety of adverse conditions, and a rigorous evaluation protocol is established for this "clean-to-violent" recognition task.
Posted Content

Gotta Adapt 'Em All: Joint Pixel and Feature-Level Domain Adaptation for Recognition in the Wild

TL;DR: In this article, a classification-aware domain adversarial neural network is proposed to bring target examples into more classifiable regions of the source domain by using 3D geometry and image synthesis to preserve identity across pose transformations.
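For orientation, the sketch below shows a generic gradient-reversal style domain-adversarial setup, i.e. the textbook mechanism for pulling target features toward regions the source classifier handles well; it is not the classification-aware model proposed in the paper.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; flips (and scales) gradients in the backward pass."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)
    @staticmethod
    def backward(ctx, grad_out):
        return -ctx.lam * grad_out, None

feature = nn.Sequential(nn.Linear(128, 64), nn.ReLU())   # shared feature extractor
classifier = nn.Linear(64, 10)                           # source-label head
domain_head = nn.Linear(64, 2)                           # source-vs-target head
ce = nn.CrossEntropyLoss()

def step(x_src, y_src, x_tgt, lam=0.1):
    f_src, f_tgt = feature(x_src), feature(x_tgt)
    cls_loss = ce(classifier(f_src), y_src)              # stay accurate on source labels
    dom_in = torch.cat([GradReverse.apply(f_src, lam),
                        GradReverse.apply(f_tgt, lam)])
    dom_lbl = torch.cat([torch.zeros(len(x_src), dtype=torch.long),
                         torch.ones(len(x_tgt), dtype=torch.long)])
    dom_loss = ce(domain_head(dom_in), dom_lbl)          # confuse the domain classifier
    return cls_loss + dom_loss

loss = step(torch.randn(32, 128), torch.randint(0, 10, (32,)), torch.randn(32, 128))
```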
References
Proceedings Article

Adam: A Method for Stochastic Optimization

TL;DR: This work introduces Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions, based on adaptive estimates of lower-order moments, and provides a regret bound on the convergence rate that is comparable to the best known results under the online convex optimization framework.
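As a quick refresher, here is a minimal NumPy sketch of the Adam update: exponentially decaying estimates of the gradient's first and second moments, bias-corrected, scale each step. The hyperparameter defaults below are the commonly used choices.

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update; returns new parameters and updated moment estimates."""
    m = beta1 * m + (1 - beta1) * grad          # first-moment estimate (mean)
    v = beta2 * v + (1 - beta2) * grad ** 2     # second-moment estimate
    m_hat = m / (1 - beta1 ** t)                # bias correction for step t >= 1
    v_hat = v / (1 - beta2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

# Toy usage on a quadratic objective f(theta) = ||theta - target||^2.
target = np.array([1.0, -2.0, 0.5])
theta, m, v = np.zeros(3), np.zeros(3), np.zeros(3)
for t in range(1, 201):
    grad = 2 * (theta - target)
    theta, m, v = adam_step(theta, grad, m, v, t)
```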
Journal ArticleDOI

Generative Adversarial Nets

TL;DR: A new framework for estimating generative models via an adversarial process, in which two models are simultaneously trained: a generative model G that captures the data distribution and a discriminative model D that estimates the probability that a sample came from the training data rather than G.
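A compact PyTorch sketch of this two-player game is given below: D is trained to separate real samples from G(z), while G is trained (non-saturating variant) to make D label its samples as real. The toy 2-D data and network sizes are placeholders.

```python
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2))   # noise -> sample
D = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))    # sample -> real/fake logit
bce = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)

for step in range(1000):
    real = torch.randn(64, 2) * 0.5 + 3.0       # stand-in "data distribution"
    fake = G(torch.randn(64, 16))

    # Discriminator: push D(real) toward 1 and D(fake) toward 0.
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator (non-saturating): push D(G(z)) toward 1.
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```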
Journal ArticleDOI

Representation Learning: A Review and New Perspectives

TL;DR: Recent work in the area of unsupervised feature learning and deep learning is reviewed, covering advances in probabilistic models, autoencoders, manifold learning, and deep networks.
Proceedings ArticleDOI

FaceNet: A unified embedding for face recognition and clustering

TL;DR: A system that directly learns a mapping from face images to a compact Euclidean space where distances directly correspond to a measure of face similarity, and achieves state-of-the-art face recognition performance using only 128 bytes per face.
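The core idea can be sketched with a triplet loss over L2-normalized embeddings, as below; the tiny convolutional backbone and the 112x112 input size are placeholders, not the FaceNet architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class EmbeddingNet(nn.Module):
    """Map a face crop to a 128-D unit-length embedding (placeholder backbone)."""
    def __init__(self, dim=128):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, 2, 1), nn.ReLU(True),
            nn.Conv2d(32, 64, 3, 2, 1), nn.ReLU(True),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, dim),
        )
    def forward(self, x):
        return F.normalize(self.backbone(x), dim=1)

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Pull anchor-positive closer than anchor-negative by at least `margin`."""
    d_ap = (anchor - positive).pow(2).sum(1)
    d_an = (anchor - negative).pow(2).sum(1)
    return F.relu(d_ap - d_an + margin).mean()

net = EmbeddingNet()
a, p, n = (torch.randn(8, 3, 112, 112) for _ in range(3))   # stand-in triplet batch
loss = triplet_loss(net(a), net(p), net(n))
```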
Posted Content

Conditional Generative Adversarial Nets

Mehdi Mirza, +1 more - 06 Nov 2014
TL;DR: The conditional version of generative adversarial nets is introduced, which can be constructed by simply feeding the data, y, to the generator and discriminator, and it is shown that this model can generate MNIST digits conditioned on class labels.
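The conditioning mechanism is simple enough to sketch directly: one-hot encode the label y and concatenate it with the generator's noise input and with the discriminator's data input. The layer sizes below, matched to flattened 28x28 digits, are illustrative only.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

N_CLASSES, NOISE_DIM, IMG_DIM = 10, 100, 28 * 28

G = nn.Sequential(nn.Linear(NOISE_DIM + N_CLASSES, 256), nn.ReLU(),
                  nn.Linear(256, IMG_DIM), nn.Tanh())
D = nn.Sequential(nn.Linear(IMG_DIM + N_CLASSES, 256), nn.LeakyReLU(0.2),
                  nn.Linear(256, 1))

y = torch.randint(0, N_CLASSES, (32,))              # class labels to condition on
y_onehot = F.one_hot(y, N_CLASSES).float()

z = torch.randn(32, NOISE_DIM)
fake = G(torch.cat([z, y_onehot], dim=1))           # generate samples of class y
score = D(torch.cat([fake, y_onehot], dim=1))       # judge realism given the same y
```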