ArcFace: Additive Angular Margin Loss for Deep Face Recognition
Jiankang Deng, Jia Guo, Niannan Xue, Stefanos Zafeiriou
pp. 4690-4699
TLDR
This paper presents arguably the most extensive experimental evaluation against all recent state-of-the-art face recognition methods on ten face recognition benchmarks, and shows that ArcFace consistently outperforms the state of the art and can be easily implemented with negligible computational overhead.
Abstract
One of the main challenges in feature learning using Deep Convolutional Neural Networks (DCNNs) for large-scale face recognition is the design of appropriate loss functions that can enhance the discriminative power. Centre loss penalises the distance between deep features and their corresponding class centres in the Euclidean space to achieve intra-class compactness. SphereFace assumes that the linear transformation matrix in the last fully connected layer can be used as a representation of the class centres in the angular space and therefore penalises the angles between deep features and their corresponding weights in a multiplicative way. Recently, a popular line of research is to incorporate margins into well-established loss functions in order to maximise face class separability. In this paper, we propose an Additive Angular Margin Loss (ArcFace) to obtain highly discriminative features for face recognition. The proposed ArcFace has a clear geometric interpretation due to its exact correspondence to geodesic distance on a hypersphere. We present arguably the most extensive experimental evaluation against all recent state-of-the-art face recognition methods on ten face recognition benchmarks, which include a new large-scale image database with trillions of pairs and a large-scale video dataset. We show that ArcFace consistently outperforms the state of the art and can be easily implemented with negligible computational overhead. To facilitate future research, the code has been made available.
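As an illustration of the additive angular margin described in the abstract, the sketch below shows a minimal PyTorch-style ArcFace head: features and class weights are L2-normalised, the margin m is added to the target-class angle, and the scaled logits go through softmax cross-entropy. The class and variable names are illustrative (this is not the authors' released code); scale=64 and margin=0.5 are the values commonly reported for the paper's experiments.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class ArcFaceHead(nn.Module):
        """Additive angular margin on the target-class angle, then softmax cross-entropy."""
        def __init__(self, embedding_dim, num_classes, scale=64.0, margin=0.5):
            super().__init__()
            self.weight = nn.Parameter(torch.randn(num_classes, embedding_dim))
            self.scale, self.margin = scale, margin

        def forward(self, embeddings, labels):
            # Cosine similarity between L2-normalised features and class-centre weights.
            cosine = F.linear(F.normalize(embeddings), F.normalize(self.weight))
            theta = torch.acos(cosine.clamp(-1 + 1e-7, 1 - 1e-7))
            # Add the angular margin m only to the angle of the ground-truth class.
            target = F.one_hot(labels, cosine.size(1)).bool()
            logits = torch.where(target, torch.cos(theta + self.margin), cosine)
            return F.cross_entropy(self.scale * logits, labels)

Because the margin is applied to the angle itself (rather than multiplicatively or to the cosine), it corresponds directly to a geodesic distance on the hypersphere, which is the geometric interpretation the abstract highlights.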
Citations
Journal Article
Rodlike nanoparticle parameter measurement method based on improved Mask R-CNN segmentation
TL;DR: Presents an automated procedure that expedites the parameter measurement of rodlike nanoparticles using a Mask R-CNN network, with the network optimized to improve segmentation accuracy.
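A hedged sketch of the measurement step such a pipeline implies: given a binary instance mask produced by the segmentation network, the rod's length and diameter can be estimated from a minimum-area bounding rectangle. OpenCV is assumed, and the function name and nm_per_pixel scale factor are hypothetical, not taken from the cited paper.

    import cv2
    import numpy as np

    def rod_dimensions(mask, nm_per_pixel=1.0):
        """Estimate rod length/diameter (nm) from a binary instance mask."""
        contours, _ = cv2.findContours(mask.astype(np.uint8),
                                       cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        # Fit a rotated rectangle to the largest contour; its sides give length and width.
        (_, _), (w, h), _ = cv2.minAreaRect(max(contours, key=cv2.contourArea))
        length, diameter = max(w, h), min(w, h)
        return length * nm_per_pixel, diameter * nm_per_pixel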
Proceedings Article
Log-Likelihood-Ratio Cost Function as Objective Loss for Speaker Verification Systems
TL;DR: The CLLR function used as an optimization loss was tested on the RSR2015 Part II database for text-dependent speaker verification, providing competitive results without score normalization and outperforming similar loss functions such as Cross-Entropy combined with Ring Loss, as well as a previous loss based on an approximation of the Detection Cost Function (DCF).
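For context, the log-likelihood-ratio cost (Cllr) this entry refers to is commonly computed from target and non-target trial scores as below; this is a generic sketch of the standard metric under the assumption that scores are calibrated log-likelihood ratios, not the paper's exact training objective.

    import numpy as np

    def cllr(target_scores, nontarget_scores):
        """Log-likelihood-ratio cost; lower is better, 0 is a perfect calibrated system."""
        c_tar = np.mean(np.log2(1.0 + np.exp(-np.asarray(target_scores))))
        c_non = np.mean(np.log2(1.0 + np.exp(np.asarray(nontarget_scores))))
        return 0.5 * (c_tar + c_non)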
Posted Content
Search to Distill: Pearls are Everywhere but not the Eyes
Yu Liu, Xuhui Jia, Mingxing Tan, Raviteja Vemulapalli, Yukun Zhu, Bradley Ray Green, Xiaogang Wang +6 more
TL;DR: In this paper, an architecture-aware knowledge distillation (AKD) approach is proposed to find student models (pearls for the teacher) that are best for distilling the given teacher model.
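The distillation objective such an approach builds on is the standard soft-target term; the sketch below shows that generic loss only, not the paper's architecture-search procedure, and the temperature and weighting values are illustrative.

    import torch.nn.functional as F

    def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
        """Blend hard-label cross-entropy with KL divergence to the teacher's softened outputs."""
        soft = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                        F.softmax(teacher_logits / T, dim=1),
                        reduction="batchmean") * (T * T)
        hard = F.cross_entropy(student_logits, labels)
        return alpha * soft + (1.0 - alpha) * hard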
Journal Article
Training Robust Deep Neural Networks on Noisy Labels Using Adaptive Sample Selection With Disagreement
TL;DR: The authors propose an adaptive sample selection method that trains deep neural networks robustly and prevents noise contamination in the disagreement strategy: the threshold of the small-loss criterion is computed from the loss distribution of the whole batch at each iteration, and the network is then updated only on the disagreement samples whose loss falls below this threshold.
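A minimal sketch of the small-loss selection step described above; the quantile-based threshold and the keep_ratio parameter are illustrative choices, not necessarily the paper's exact rule.

    import torch
    import torch.nn.functional as F

    def small_loss_update(model, optimizer, images, labels, keep_ratio=0.7):
        """Backpropagate only through samples whose loss is below a batch-level threshold."""
        losses = F.cross_entropy(model(images), labels, reduction="none")
        threshold = torch.quantile(losses.detach(), keep_ratio)  # adaptive, per-batch threshold
        selected = losses[losses.detach() <= threshold]          # likely-clean (small-loss) samples
        optimizer.zero_grad()
        selected.mean().backward()
        optimizer.step()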
Posted Content
Multi-label Classification of Common Bengali Handwritten Graphemes: Dataset and Challenge.
Samiul Alam, Tahsin Reasat, Asif Shahriyar Sushmit, Sadi Mohammad Siddiquee, Fuad Rahman, Mahady Hasan, Ahmed Imtiaz Humayun +6 more
TL;DR: This work proposes a labeling scheme based on graphemes (linguistic segments of word formation) that makes segmentation inside alpha-syllabary words linear, and presents the first dataset of Bengali handwritten graphemes that are commonly used in an everyday context.
References
Proceedings Article
Deep Residual Learning for Image Recognition
TL;DR: The authors propose a residual learning framework that eases the training of networks substantially deeper than those used previously; the resulting models won first place on the ILSVRC 2015 classification task.
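The core idea is a skip connection that lets each block learn a residual on top of its input, y = F(x) + x. Below is a minimal PyTorch-style sketch of a basic residual block with an identity shortcut; layer sizes and names are illustrative, not the paper's exact architecture.

    import torch.nn as nn

    class BasicResidualBlock(nn.Module):
        """Two 3x3 convolutions with an identity shortcut: output = ReLU(F(x) + x)."""
        def __init__(self, channels):
            super().__init__()
            self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
            self.bn1 = nn.BatchNorm2d(channels)
            self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
            self.bn2 = nn.BatchNorm2d(channels)
            self.relu = nn.ReLU(inplace=True)

        def forward(self, x):
            out = self.relu(self.bn1(self.conv1(x)))
            out = self.bn2(self.conv2(out))
            return self.relu(out + x)  # residual (skip) connection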
Journal Article
Dropout: a simple way to prevent neural networks from overfitting
TL;DR: It is shown that dropout improves the performance of neural networks on supervised learning tasks in vision, speech recognition, document classification and computational biology, obtaining state-of-the-art results on many benchmark data sets.
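Dropout randomly zeroes a subset of activations during training and rescales the rest so the expected activation is unchanged at test time. A minimal sketch of the (inverted-dropout) mechanism, with illustrative names:

    import torch

    def dropout(x, p=0.5, training=True):
        """Inverted dropout: zero each unit with probability p and rescale by 1/(1-p)."""
        if not training or p == 0.0:
            return x
        mask = (torch.rand_like(x) >= p).float()  # Bernoulli keep-mask
        return x * mask / (1.0 - p)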
Proceedings Article
Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift
Sergey Ioffe, Christian Szegedy
TL;DR: Applied to a state-of-the-art image classification model, Batch Normalization achieves the same accuracy with 14 times fewer training steps, and beats the original model by a significant margin.
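At training time, batch normalization standardizes each feature over the mini-batch and then applies a learned scale and shift. A minimal sketch of the forward computation for a (batch, features) input; running statistics and momentum, used at inference time, are omitted for brevity.

    import torch

    def batch_norm(x, gamma, beta, eps=1e-5):
        """Normalize each feature over the batch dimension, then scale and shift."""
        mean = x.mean(dim=0, keepdim=True)
        var = x.var(dim=0, unbiased=False, keepdim=True)
        x_hat = (x - mean) / torch.sqrt(var + eps)
        return gamma * x_hat + beta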
Automatic differentiation in PyTorch
Adam Paszke, Sam Gross, Soumith Chintala, Gregory Chanan, Edward Z. Yang, Zachary DeVito, Zeming Lin, Alban Desmaison, Luca Antiga, Adam Lerer
TL;DR: Describes the automatic differentiation module of PyTorch, a library designed to enable rapid research on machine learning models; it differentiates purely imperative programs and emphasizes extensibility and low overhead.
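A minimal example of the imperative, define-by-run differentiation this entry describes, using standard torch.autograd usage:

    import torch

    # Gradients are recorded while ordinary Python code runs on tensors.
    x = torch.tensor([2.0, 3.0], requires_grad=True)
    y = (x ** 2 + 3 * x).sum()   # y = x1^2 + 3*x1 + x2^2 + 3*x2
    y.backward()                 # reverse-mode autodiff through the recorded graph
    print(x.grad)                # tensor([7., 9.])  since dy/dx_i = 2*x_i + 3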
Posted Content
TensorFlow: Large-Scale Machine Learning on Heterogeneous Distributed Systems
Martín Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian Goodfellow, Andrew Harp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Jozefowicz, Lukasz Kaiser, Manjunath Kudlur, Josh Levenberg, Dan Mané, Rajat Monga, Sherry Moore, Derek G. Murray, Chris Olah, Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul A. Tucker, Vincent Vanhoucke, Vijay K. Vasudevan, Fernanda B. Viégas, Oriol Vinyals, Pete Warden, Martin Wattenberg, Martin Wicke, Yuan Yu, Xiaoqiang Zheng
TL;DR: Describes the TensorFlow interface and an implementation of that interface built at Google, which has been used for conducting research and for deploying machine learning systems into production across more than a dozen areas of computer science and other fields.
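A minimal example of the traced, differentiable dataflow computation the whitepaper describes, written against the modern TensorFlow 2 API; the original paper targets the graph/session interface, so this sketch only illustrates the idea, and the function and variable names are illustrative.

    import tensorflow as tf

    @tf.function  # traces the Python function into a TensorFlow graph
    def loss_and_grad(w, x, y):
        with tf.GradientTape() as tape:
            tape.watch(w)
            loss = tf.reduce_mean((tf.matmul(x, w) - y) ** 2)  # simple least-squares loss
        return loss, tape.gradient(loss, w)

    w = tf.ones((3, 1))
    x = tf.random.normal((8, 3))
    y = tf.random.normal((8, 1))
    loss, grad = loss_and_grad(w, x, y)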