Open Access · Journal Article · DOI

Large-Scale Bisample Learning on ID Versus Spot Face Recognition

TL;DR
Wang et al. propose a deep-learning-based large-scale bisample learning (LBL) method for ID versus Spot (IvS) face recognition, in which a classification-verification-classification training strategy progressively enhances IvS performance.
Abstract
In real-world face recognition applications, there is a tremendous amount of data with two images for each person: one is an ID photo for face enrollment, and the other is a probe photo captured on spot. Most existing methods are designed for training data with limited breadth (a relatively small number of classes) and sufficient depth (many samples for each class). They meet great challenges on ID versus Spot (IvS) data, including under-represented intra-class variations and an excessive demand on computing devices. In this paper, we propose a deep-learning-based large-scale bisample learning (LBL) method for IvS face recognition. To tackle the bisample problem, with only two samples for each class, a classification-verification-classification training strategy is proposed to progressively enhance IvS performance. In addition, a dominant prototype softmax is incorporated to make deep learning scalable to large-scale classes. We conduct LBL on an IvS face dataset with more than two million identities. Experimental results show that the proposed method outperforms previous ones, validating the effectiveness of LBL on IvS face recognition.
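The key scalability idea is that only a small, "dominant" subset of the two-million-plus class prototypes needs to participate in each softmax update. The following is a minimal sketch of that prototype-selection idea in PyTorch; the function name, the batch-wise top-k selection rule, and the scale factor of 30 are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn.functional as F

def dominant_prototype_softmax_loss(features, labels, prototypes, num_dominant=256):
    """Cross-entropy over a reduced ("dominant") set of class prototypes.

    features   : (B, D) mini-batch embeddings (assumed L2-normalized).
    labels     : (B,)  global class ids.
    prototypes : (C, D) full prototype matrix; C may be in the millions.
    num_dominant : extra hard classes kept besides the batch's own classes.
    """
    # Similarity of every batch feature to every prototype. A real
    # large-scale system would shard or cache this lookup; it is
    # computed exactly here for clarity.
    sims = features @ prototypes.t()                                # (B, C)

    # Classes that must participate: the ground-truth classes of the batch,
    # plus the classes most similar to the batch (hard negatives).
    positive_ids = labels.unique()
    hard_ids = sims.max(dim=0).values.topk(num_dominant).indices
    active_ids = torch.cat([positive_ids, hard_ids]).unique()       # (C',)

    # Remap global labels into the reduced prototype set and classify.
    remap = torch.full((prototypes.size(0),), -1,
                       dtype=torch.long, device=labels.device)
    remap[active_ids] = torch.arange(active_ids.numel(), device=labels.device)
    logits = features @ prototypes[active_ids].t()                  # (B, C')
    return F.cross_entropy(logits * 30.0, remap[labels])            # 30.0: assumed scale
```

Because only C' prototypes enter the matrix multiplication and the cross-entropy, the classifier never has to materialize logits over all classes in a single update.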


Citations
Proceedings Article · DOI

AdaptiveFace: Adaptive Margin and Sampling for Face Recognition

TL;DR: This paper proposes the Adaptive Margin Softmax to adjust the margins for different classes adaptively, and makes the sampling process adaptive in two respects: Hard Prototype Mining, which adaptively selects a small number of hard classes to participate in classification, and Adaptive Data Sampling, which finds valuable samples for training.
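As a rough illustration of the adaptive-margin part, here is a minimal sketch of a CosFace-style loss in which the margin is a learnable per-class parameter, with a regularizer that discourages the margins from collapsing to zero. The class name, the initial margin of 0.35, the scale of 30, and the regularizer weight are assumptions rather than the paper's exact formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdaptiveMarginSoftmax(nn.Module):
    """CosFace-style loss with one learnable margin per class (sketch)."""

    def __init__(self, feat_dim, num_classes, scale=30.0, lam=0.1):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(num_classes, feat_dim))
        self.margins = nn.Parameter(torch.full((num_classes,), 0.35))
        self.scale, self.lam = scale, lam

    def forward(self, features, labels):
        # Cosine similarity between normalized features and class weights.
        cos = F.linear(F.normalize(features), F.normalize(self.weight))   # (B, C)
        cos_m = cos.clone()
        idx = torch.arange(cos.size(0), device=cos.device)
        cos_m[idx, labels] -= self.margins[labels]   # subtract each class's own margin
        loss = F.cross_entropy(self.scale * cos_m, labels)
        # Encourage larger margins; the paper's exact regularizer may differ.
        return loss - self.lam * self.margins.mean()
```

Classes with richer training data can afford, and are pushed toward, larger margins, while sparse classes settle on smaller ones.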
Proceedings Article · DOI

Learning Meta Face Recognition in Unseen Domains

TL;DR: This paper proposes a novel face recognition method via meta-learning, named Meta Face Recognition (MFR), which synthesizes the source/target domain shift with a meta-optimization objective that requires the model to learn effective representations not only on the synthesized source domains but also on the synthesized target domains.
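As a rough illustration of what a meta-optimization objective over a synthesized domain shift looks like, here is a generic first-order meta-learning step in PyTorch. It is not MFR's actual algorithm; the function name, the inner learning rate, and the meta-train/meta-test batch split are assumptions.

```python
import copy
import torch

def meta_step(model, optimizer, meta_train_batch, meta_test_batch,
              loss_fn, inner_lr=0.01):
    """One first-order meta-optimization step over a synthesized domain shift."""
    x_tr, y_tr = meta_train_batch   # drawn from some source domains
    x_te, y_te = meta_test_batch    # drawn from held-out domains

    # Inner update: adapt a temporary copy on the meta-train domains.
    fast = copy.deepcopy(model)
    inner_loss = loss_fn(fast(x_tr), y_tr)
    grads = torch.autograd.grad(inner_loss, list(fast.parameters()), allow_unused=True)
    with torch.no_grad():
        for p, g in zip(fast.parameters(), grads):
            if g is not None:
                p -= inner_lr * g

    # Meta objective: the adapted model must also work on the held-out domain.
    meta_loss = loss_fn(fast(x_te), y_te) + loss_fn(model(x_tr), y_tr)
    optimizer.zero_grad()
    meta_loss.backward()
    # First-order trick: copy the adapted copy's gradients back to the model.
    with torch.no_grad():
        for p, fp in zip(model.parameters(), fast.parameters()):
            if fp.grad is not None:
                p.grad = fp.grad if p.grad is None else p.grad + fp.grad
    optimizer.step()
    return float(meta_loss)
```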
Journal Article · DOI

Occluded Face Recognition in the Wild by Identity-Diversity Inpainting

TL;DR: This paper proposes identity-diversity inpainting to facilitate occluded face recognition by integrating a GAN with an optimized pre-trained CNN recognizer, which serves as a third player that competes with the generator by distinguishing diversity within the same identity class.
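To make the "third player" idea concrete, here is a heavily simplified sketch of a generator objective in which a frozen, pre-trained recognizer supplies an identity-consistency term alongside the usual adversarial and reconstruction losses. The loss weights and function names are assumptions; the paper's actual objective, which distinguishes diversity within an identity class, is more involved.

```python
import torch
import torch.nn.functional as F

def generator_loss(generator, discriminator, recognizer, occluded, ground_truth):
    """Generator objective for identity-aware inpainting (sketch).

    discriminator: standard real/fake critic (second player).
    recognizer   : pre-trained face CNN acting as the third player; its
                   parameters are frozen (requires_grad=False), but its
                   forward pass stays differentiable so gradients reach
                   the generator.
    """
    inpainted = generator(occluded)

    adv = -discriminator(inpainted).mean()             # fool the critic
    rec = F.l1_loss(inpainted, ground_truth)           # pixel reconstruction
    with torch.no_grad():
        target_feat = recognizer(ground_truth)         # identity reference
    id_loss = F.mse_loss(recognizer(inpainted), target_feat)

    # Weights are illustrative, not taken from the paper.
    return adv + 10.0 * rec + 1.0 * id_loss
```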
Proceedings Article · DOI

Domain Balancing: Face Recognition on Long-Tailed Domains

TL;DR: A novel Domain Balancing (DB) mechanism is proposed to handle the long-tailed domain distribution problem, which refers to the fact that a small number of domains appear frequently while other domains appear far less often.
Posted Content

The Elements of End-to-end Deep Face Recognition: A Survey of Recent Advances

TL;DR: This survey article presents a comprehensive review of recent advances in each element of end-to-end deep face recognition, whose capability has been greatly improved by thriving deep learning techniques.
References
Proceedings Article · DOI

Deep Residual Learning for Image Recognition

TL;DR: In this article, the authors propose a residual learning framework to ease the training of networks that are substantially deeper than those used previously; the resulting networks won first place in the ILSVRC 2015 classification task.
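The central idea is the identity shortcut: each block learns a residual F(x) that is added back to its input. Below is a minimal sketch of the basic same-dimension block in PyTorch, omitting the bottleneck variant and projection shortcuts the paper also uses.

```python
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Basic residual block: output = F(x) + x (identity shortcut)."""

    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        # The shortcut lets gradients flow directly past the convolutions,
        # which is what eases the optimization of very deep stacks.
        return self.relu(self.body(x) + x)
```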
Proceedings Article

ImageNet Classification with Deep Convolutional Neural Networks

TL;DR: The authors present a deep convolutional neural network consisting of five convolutional layers, some of which are followed by max-pooling layers, and three fully connected layers with a final 1000-way softmax, achieving state-of-the-art performance on ImageNet classification.
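Since the TL;DR spells out the layer layout, here is a compact sketch of that architecture in PyTorch for 3x227x227 inputs. Dropout, local response normalization, and the original two-GPU split are omitted, so treat it as an illustration rather than a faithful reimplementation.

```python
import torch.nn as nn

# Filter counts follow the original paper: 96-256-384-384-256 convolutions,
# then 4096-4096-1000 fully connected layers.
alexnet_like = nn.Sequential(
    nn.Conv2d(3, 96, kernel_size=11, stride=4), nn.ReLU(inplace=True),
    nn.MaxPool2d(3, stride=2),
    nn.Conv2d(96, 256, kernel_size=5, padding=2), nn.ReLU(inplace=True),
    nn.MaxPool2d(3, stride=2),
    nn.Conv2d(256, 384, kernel_size=3, padding=1), nn.ReLU(inplace=True),
    nn.Conv2d(384, 384, kernel_size=3, padding=1), nn.ReLU(inplace=True),
    nn.Conv2d(384, 256, kernel_size=3, padding=1), nn.ReLU(inplace=True),
    nn.MaxPool2d(3, stride=2),
    nn.Flatten(),
    nn.Linear(256 * 6 * 6, 4096), nn.ReLU(inplace=True),
    nn.Linear(4096, 4096), nn.ReLU(inplace=True),
    nn.Linear(4096, 1000),   # final 1000-way classification (softmax applied at inference)
)
```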
Proceedings Article

Very Deep Convolutional Networks for Large-Scale Image Recognition

TL;DR: This work investigates the effect of convolutional network depth on accuracy in the large-scale image recognition setting, using an architecture with very small (3x3) convolution filters, and shows that a significant improvement over prior-art configurations can be achieved by pushing the depth to 16-19 weight layers.
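The design principle behind the very small filters is that stacks of 3x3 convolutions emulate larger receptive fields with fewer parameters and more non-linearities. A sketch of one such block follows; the helper name and the example channel plan in the comment are illustrative.

```python
import torch.nn as nn

def vgg_block(in_ch, out_ch, num_convs):
    """A VGG-style block: a stack of 3x3 convolutions followed by pooling.
    Two stacked 3x3 convolutions cover a 5x5 receptive field with fewer
    parameters and an extra non-linearity, which is the core design idea.
    """
    layers = []
    for i in range(num_convs):
        layers += [nn.Conv2d(in_ch if i == 0 else out_ch, out_ch, 3, padding=1),
                   nn.ReLU(inplace=True)]
    layers.append(nn.MaxPool2d(2, stride=2))
    return nn.Sequential(*layers)

# e.g. the 16-layer configuration stacks five such blocks before the classifier:
# vgg_block(3, 64, 2), vgg_block(64, 128, 2), vgg_block(128, 256, 3),
# vgg_block(256, 512, 3), vgg_block(512, 512, 3)
```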
Proceedings Article · DOI

Going deeper with convolutions

TL;DR: Inception is a deep convolutional neural network architecture that set a new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14).
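The building block behind the architecture is the Inception module: parallel convolutions of different sizes whose outputs are concatenated along the channel dimension, with 1x1 convolutions used for dimensionality reduction. A minimal sketch, with ReLUs after the final convolutions omitted for brevity:

```python
import torch
import torch.nn as nn

class InceptionModule(nn.Module):
    """Parallel 1x1, 3x3, 5x5 convolutions plus a pooling branch (sketch).
    1x1 convolutions reduce channel counts before the costly branches.
    """

    def __init__(self, in_ch, c1, c3_red, c3, c5_red, c5, pool_proj):
        super().__init__()
        self.b1 = nn.Conv2d(in_ch, c1, 1)
        self.b3 = nn.Sequential(nn.Conv2d(in_ch, c3_red, 1), nn.ReLU(inplace=True),
                                nn.Conv2d(c3_red, c3, 3, padding=1))
        self.b5 = nn.Sequential(nn.Conv2d(in_ch, c5_red, 1), nn.ReLU(inplace=True),
                                nn.Conv2d(c5_red, c5, 5, padding=2))
        self.bp = nn.Sequential(nn.MaxPool2d(3, stride=1, padding=1),
                                nn.Conv2d(in_ch, pool_proj, 1))

    def forward(self, x):
        # All branches keep the spatial size, so their outputs concatenate cleanly.
        return torch.cat([self.b1(x), self.b3(x), self.b5(x), self.bp(x)], dim=1)
```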
Proceedings Article · DOI

Rethinking the Inception Architecture for Computer Vision

TL;DR: In this article, the authors explore ways to scale up networks that aim to use the added computation as efficiently as possible, through suitably factorized convolutions and aggressive regularization.
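One concrete instance of the factorization idea is sketched below with illustrative channel counts: a 5x5 convolution replaced by two stacked 3x3 convolutions, and an n x n convolution replaced by a 1 x n followed by an n x 1 convolution.

```python
import torch.nn as nn

# A 5x5 convolution and its factorized replacement: two stacked 3x3
# convolutions cover the same receptive field at roughly (9 + 9) / 25
# of the multiply-adds, with an extra non-linearity in between.
conv5x5 = nn.Conv2d(64, 64, kernel_size=5, padding=2)

factorized = nn.Sequential(
    nn.Conv2d(64, 64, kernel_size=3, padding=1), nn.ReLU(inplace=True),
    nn.Conv2d(64, 64, kernel_size=3, padding=1), nn.ReLU(inplace=True),
)

# Asymmetric factorization: an n x n convolution split into 1 x n then n x 1.
asym = nn.Sequential(
    nn.Conv2d(64, 64, kernel_size=(1, 7), padding=(0, 3)),
    nn.Conv2d(64, 64, kernel_size=(7, 1), padding=(3, 0)),
)
```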