ArcFace: Additive Angular Margin Loss for Deep Face Recognition
Jiankang Deng, Jia Guo, Niannan Xue, Stefanos Zafeiriou
pp. 4690–4699
TL;DR
This paper presents arguably the most extensive experimental evaluation against all recent state-of-the-art face recognition methods on ten face recognition benchmarks, and shows that ArcFace consistently outperforms the state of the art and can be easily implemented with negligible computational overhead.

Abstract
One of the main challenges in feature learning using Deep Convolutional Neural Networks (DCNNs) for large-scale face recognition is the design of appropriate loss functions that can enhance the discriminative power. Centre loss penalises the distance between deep features and their corresponding class centres in the Euclidean space to achieve intra-class compactness. SphereFace assumes that the linear transformation matrix in the last fully connected layer can be used as a representation of the class centres in the angular space and therefore penalises the angles between deep features and their corresponding weights in a multiplicative way. Recently, a popular line of research is to incorporate margins into well-established loss functions in order to maximise face class separability. In this paper, we propose an Additive Angular Margin Loss (ArcFace) to obtain highly discriminative features for face recognition. The proposed ArcFace has a clear geometric interpretation due to its exact correspondence to geodesic distance on a hypersphere. We present arguably the most extensive experimental evaluation against all recent state-of-the-art face recognition methods on ten face recognition benchmarks, which include a new large-scale image database with trillions of pairs and a large-scale video dataset. We show that ArcFace consistently outperforms the state of the art and can be easily implemented with negligible computational overhead. To facilitate future research, the code has been made available.
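The core operation behind the additive angular margin is simple enough to sketch. Below is a minimal, illustrative PyTorch head that normalises features and class weights, adds a fixed margin m to the angle of the ground-truth class, and re-scales the cosine logits before softmax cross-entropy; the class name ArcMarginHead and the hyperparameter defaults (s=64, m=0.5) are assumptions for illustration, not the authors' released implementation.

import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class ArcMarginHead(nn.Module):
    def __init__(self, feat_dim, num_classes, s=64.0, m=0.5):
        super().__init__()
        # One weight vector per class; after normalisation it acts as the class centre direction.
        self.weight = nn.Parameter(torch.randn(num_classes, feat_dim))
        nn.init.xavier_uniform_(self.weight)
        self.s, self.m = s, m

    def forward(self, features, labels):
        # Normalise features and class weights so the logits are cosines of angles.
        cos_theta = F.linear(F.normalize(features), F.normalize(self.weight))
        theta = torch.acos(cos_theta.clamp(-1 + 1e-7, 1 - 1e-7))
        # Add the angular margin m only to the target-class angle.
        target = F.one_hot(labels, cos_theta.size(1)).bool()
        theta = torch.where(target, theta + self.m, theta)
        # Re-scale and feed into a standard softmax cross-entropy.
        return F.cross_entropy(self.s * torch.cos(theta), labels)

Because only the target-class logit is shifted by a constant angle, the extra cost over a plain softmax classifier is a handful of element-wise operations, which is consistent with the negligible-overhead claim above.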
Citations
Journal Article
DiffFace: Diffusion-based Face Swapping with Facial Guidance
Kihong Kim, Yunho Kim, Seokju Cho, Junyoung Seo, Jisu Nam, Kychul Lee, Seung Wook Kim, Kwanghee Lee
TL;DR: DiffFace proposes, for the first time, a diffusion-based face swapping framework composed of training an ID-conditional DDPM, sampling with facial guidance, and a target-preserving blending strategy.
Proceedings Article
LightEA: A Scalable, Robust, and Interpretable Entity Alignment Framework via Three-view Label Propagation
TL;DR: This paper argues that existing GNN-based EA methods inherit the inborn defects from their neural network lineage: weak scalability and poor interpretability, and proposes a non-neural EA framework — LightEA, consisting of three components: Random Orthogonal Label Generation, Three-view Label Propagation, and Sparse Sinkhorn Iteration.
Journal Article
True Black-Box Explanation in Facial Analysis
TL;DR: This paper presents a saliency map methodology, called MinPlus, that can be used to explain any facial analysis approach with no manipulation inside of the recognition model, because it only needs the input-output function of the black-box ‘fx’.
Proceedings Article
A Unified Framework for Masked and Mask-Free Face Recognition Via Feature Rectification
TL;DR: Experiments show that the unified framework, named Face Feature Rectification Network (FFR-Net), can learn a rectified feature space for recognizing both masked and mask-free faces effectively, achieving state-of-the-art results.
Journal Article
L-Mix: A Latent-Level Instance Mixup Regularization for Robust Self-Supervised Speaker Representation Learning
TL;DR: The i-mix and the proposed l-mix strategies were incorporated into self-supervised angular prototypical and softmax-based objective functions and evaluated on the VoxCeleb dataset; both strategies are observed to bring large gains in training stability and speaker verification performance.
References
Proceedings Article
Deep Residual Learning for Image Recognition
TL;DR: In this article, the authors proposed a residual learning framework to ease the training of networks that are substantially deeper than those used previously, which won the 1st place on the ILSVRC 2015 classification task.
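For context, the residual learning idea reduces to adding an identity shortcut around a small stack of layers; a minimal, hypothetical PyTorch block (names and layer sizes are illustrative, not the paper's exact architecture) might look as follows.

import torch.nn as nn

class ResidualBlock(nn.Module):
    """Minimal illustrative residual block: output = F(x) + x."""
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        # The identity shortcut lets gradients flow directly to earlier layers.
        return self.relu(self.body(x) + x)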
Journal Article
Dropout: a simple way to prevent neural networks from overfitting
TL;DR: It is shown that dropout improves the performance of neural networks on supervised learning tasks in vision, speech recognition, document classification and computational biology, obtaining state-of-the-art results on many benchmark data sets.
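Dropout itself is a one-line idea: randomly zero a fraction p of activations during training and rescale the survivors so the expected activation is unchanged. A minimal sketch, assuming PyTorch tensors and the common "inverted dropout" formulation:

import torch

def dropout(x, p=0.5, training=True):
    """Illustrative inverted dropout: zero units with probability p, rescale the rest."""
    if not training or p == 0.0:
        return x
    mask = (torch.rand_like(x) > p).float()
    return x * mask / (1.0 - p)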
Proceedings Article
Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift
Sergey Ioffe, Christian Szegedy
TL;DR: Applied to a state-of-the-art image classification model, Batch Normalization achieves the same accuracy with 14 times fewer training steps, and beats the original model by a significant margin.
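The normalisation step can be sketched directly from its definition: standardise each feature over the mini-batch, then apply a learnable scale and shift. A minimal training-mode sketch in PyTorch (running statistics for inference are omitted):

import torch

def batch_norm(x, gamma, beta, eps=1e-5):
    """Illustrative batch normalisation of a (batch, features) tensor, training mode."""
    mean = x.mean(dim=0, keepdim=True)
    var = x.var(dim=0, unbiased=False, keepdim=True)
    x_hat = (x - mean) / torch.sqrt(var + eps)   # zero mean, unit variance per feature
    return gamma * x_hat + beta                  # learnable scale and shift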
Automatic differentiation in PyTorch
Adam Paszke, Sam Gross, Soumith Chintala, Gregory Chanan, Edward Z. Yang, Zachary DeVito, Zeming Lin, Alban Desmaison, Luca Antiga, Adam Lerer
TL;DR: The automatic differentiation module of PyTorch, a library designed to enable rapid research on machine learning models, is described; it differentiates purely imperative programs with an emphasis on extensibility and low overhead.
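A small usage example illustrates the imperative style referred to above: operations on tensors that require gradients are recorded as they execute, and calling backward() computes the gradients.

import torch

# d/dx of sum(x**3) is 3*x**2, computed by reverse-mode automatic differentiation.
x = torch.tensor([1.0, 2.0, 3.0], requires_grad=True)
y = (x ** 3).sum()
y.backward()
print(x.grad)  # tensor([ 3., 12., 27.])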
Posted Content
TensorFlow: Large-Scale Machine Learning on Heterogeneous Distributed Systems
Martín Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian Goodfellow, Andrew Harp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Jozefowicz, Lukasz Kaiser, Manjunath Kudlur, Josh Levenberg, Dan Mané, Rajat Monga, Sherry Moore, Derek G. Murray, Chris Olah, Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul A. Tucker, Vincent Vanhoucke, Vijay K. Vasudevan, Fernanda B. Viégas, Oriol Vinyals, Pete Warden, Martin Wattenberg, Martin Wicke, Yuan Yu, Xiaoqiang Zheng
TL;DR: The TensorFlow interface and an implementation of that interface that is built at Google are described, which has been used for conducting research and for deploying machine learning systems into production across more than a dozen areas of computer science and other fields.