Journal ArticleDOI

Benchmarking lightweight face architectures on specific face recognition scenarios

TLDR
This paper studies the impact of lightweight face models on real applications and evaluates the performance of five recent lightweight architectures on five face recognition scenarios: image- and video-based face recognition, cross-factor and heterogeneous face recognition, and active authentication on mobile devices.
Abstract
This paper studies the impact of lightweight face models on real applications. Lightweight architectures proposed for face recognition are analyzed and evaluated in different scenarios. In particular, we evaluate the performance of five recent lightweight architectures on five face recognition scenarios: image- and video-based face recognition, cross-factor and heterogeneous face recognition, and active authentication on mobile devices. In addition, we show the shortcomings of using common lightweight models unchanged for specific face recognition tasks, by assessing the performance of the original versions of the lightweight face models considered in our study. We also show that the inference time on different devices and the computational requirements of the lightweight architectures allow their use in real-time applications and on computationally limited platforms. In summary, this paper can serve as a baseline for selecting lightweight face architectures depending on the practical application at hand. It also provides insights into the remaining challenges and possible future research topics.
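
As a rough illustration of the kind of efficiency figures such a benchmark reports, the sketch below times CPU forward passes of a stand-in lightweight backbone and counts its parameters. The MobileNetV2 backbone, the 112x112 face-crop input, and the number of timed runs are assumptions for illustration, not the paper's exact protocol.

```python
# Hedged sketch: measuring average inference time and parameter count of a
# lightweight backbone. Model choice and input size are assumptions.
import time
import torch
import torchvision.models as models

model = models.mobilenet_v2(num_classes=512)  # stand-in lightweight face backbone
model.eval()

x = torch.randn(1, 3, 112, 112)  # typical aligned face-crop size (assumed)

with torch.no_grad():
    for _ in range(10):          # warm-up so one-time costs do not skew timing
        model(x)
    start = time.perf_counter()
    for _ in range(100):
        model(x)
    elapsed = (time.perf_counter() - start) / 100

params = sum(p.numel() for p in model.parameters())
print(f"avg inference: {elapsed * 1000:.1f} ms, params: {params / 1e6:.2f} M")
```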


Citations
Proceedings ArticleDOI

MixFaceNets: Extremely Efficient Face Recognition Networks

TL;DR: MixFaceNets, as discussed by the authors, is a family of extremely efficient and high-throughput models for accurate face verification, inspired by mixed depthwise convolutional kernels.
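
The building block referenced here, a mixed depthwise convolution that applies different kernel sizes to different channel groups, can be sketched as follows. The channel split and kernel sizes are illustrative assumptions, not the authors' exact implementation.

```python
# Hedged sketch of a mixed depthwise convolution (MixConv-style block):
# split the channels into groups, give each group a depthwise convolution
# with a different kernel size, then concatenate the results.
import torch
import torch.nn as nn

class MixDepthwiseConv(nn.Module):
    def __init__(self, channels, kernel_sizes=(3, 5, 7)):
        super().__init__()
        splits = [channels // len(kernel_sizes)] * len(kernel_sizes)
        splits[0] += channels - sum(splits)  # absorb any remainder
        self.splits = splits
        self.convs = nn.ModuleList(
            nn.Conv2d(c, c, k, padding=k // 2, groups=c)  # depthwise per group
            for c, k in zip(splits, kernel_sizes)
        )

    def forward(self, x):
        chunks = torch.split(x, self.splits, dim=1)
        return torch.cat([conv(c) for conv, c in zip(self.convs, chunks)], dim=1)

out = MixDepthwiseConv(64)(torch.randn(1, 64, 56, 56))  # shape preserved: (1, 64, 56, 56)
```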
Posted Content

PocketNet: Extreme Lightweight Face Recognition Network using Neural Architecture Search and Multi-Step Knowledge Distillation

TL;DR: In this article, a new family of face recognition models, PocketNet, is proposed, together with a novel training paradigm based on knowledge distillation that enhances the verification performance of the compact model.
Journal ArticleDOI

PocketNet: Extreme Lightweight Face Recognition Network Using Neural Architecture Search and Multistep Knowledge Distillation

01 Jan 2022
TL;DR: In this article, a novel training paradigm based on knowledge distillation is proposed, where knowledge is distilled from the teacher model to the student model at different stages of training maturity.
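
A minimal sketch of a knowledge-distillation objective of this general kind is given below: the compact student is pulled towards the teacher's embeddings while also being trained on identity labels. The MSE-on-normalized-embeddings term and the weighting are assumptions for illustration, not the paper's exact multi-step formulation.

```python
# Hedged sketch of feature-level knowledge distillation for a compact face model.
import torch
import torch.nn.functional as F

def distillation_loss(student_emb, teacher_emb, logits, labels, alpha=0.5):
    cls = F.cross_entropy(logits, labels)                # identity classification loss
    kd = F.mse_loss(F.normalize(student_emb, dim=1),     # match normalized embeddings
                    F.normalize(teacher_emb, dim=1))
    return (1 - alpha) * cls + alpha * kd
```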
Proceedings ArticleDOI

QuantFace: Towards Lightweight Face Recognition by Synthetic Data Low-bit Quantization

TL;DR: QuantFace reduces the computational cost of existing face recognition models without the need to design a particular architecture or access real training data, and introduces privacy-friendly synthetic face data into the quantization process to mitigate potential privacy concerns and issues related to the accessibility of real training data.
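
The arithmetic at the core of such low-bit quantization can be sketched directly: float weights are mapped to 8-bit integers through a scale and zero-point. This illustrates the mapping only; it is not QuantFace's actual pipeline, and the synthetic-data calibration step is omitted.

```python
# Hedged sketch of asymmetric 8-bit quantization of a weight tensor.
import torch

def quantize_tensor(w, num_bits=8):
    qmin, qmax = 0, 2 ** num_bits - 1
    scale = (w.max() - w.min()) / (qmax - qmin)           # step size of the integer grid
    zero_point = int(round(qmin - (w.min() / scale).item()))
    q = torch.clamp((w / scale).round() + zero_point, qmin, qmax).to(torch.uint8)
    return q, scale, zero_point

def dequantize_tensor(q, scale, zero_point):
    return scale * (q.float() - zero_point)

w = torch.randn(64, 3, 3, 3)                              # example conv weight
q, s, z = quantize_tensor(w)
err = (dequantize_tensor(q, s, z) - w).abs().mean()
print(f"mean absolute quantization error: {err.item():.4f}")
```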
References
Proceedings ArticleDOI

Deep Residual Learning for Image Recognition

TL;DR: In this article, the authors proposed a residual learning framework to ease the training of networks that are substantially deeper than those used previously; the resulting models won first place in the ILSVRC 2015 classification task.
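
The residual learning idea can be sketched as a block that computes F(x) and adds the input back through an identity shortcut, so the block only has to learn the residual. The basic block below is illustrative, not the paper's exact configuration.

```python
# Hedged sketch of a basic residual block with an identity shortcut.
import torch
import torch.nn as nn

class BasicResidualBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + x)  # learn F(x); the block outputs F(x) + x
```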
Proceedings ArticleDOI

Going deeper with convolutions

TL;DR: Inception, as mentioned in this paper, is a deep convolutional neural network architecture that set a new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14).
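
An Inception-style module runs parallel 1x1, 3x3, 5x5 and pooling branches and concatenates their outputs along the channel axis. The branch widths in this sketch are arbitrary assumptions, not GoogLeNet's exact configuration.

```python
# Hedged sketch of an Inception-style multi-branch block.
import torch
import torch.nn as nn

class InceptionBlock(nn.Module):
    def __init__(self, in_ch):
        super().__init__()
        self.b1 = nn.Conv2d(in_ch, 32, 1)                            # 1x1 branch
        self.b3 = nn.Sequential(nn.Conv2d(in_ch, 32, 1),             # 1x1 reduce, then 3x3
                                nn.Conv2d(32, 64, 3, padding=1))
        self.b5 = nn.Sequential(nn.Conv2d(in_ch, 16, 1),             # 1x1 reduce, then 5x5
                                nn.Conv2d(16, 32, 5, padding=2))
        self.bp = nn.Sequential(nn.MaxPool2d(3, stride=1, padding=1),
                                nn.Conv2d(in_ch, 32, 1))              # pooling branch

    def forward(self, x):
        return torch.cat([self.b1(x), self.b3(x), self.b5(x), self.bp(x)], dim=1)
```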
Proceedings Article

Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift

TL;DR: Applied to a state-of-the-art image classification model, Batch Normalization achieves the same accuracy with 14 times fewer training steps, and beats the original model by a significant margin.
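
What batch normalization computes for one mini-batch can be shown directly: each channel is normalized by the batch mean and variance, then rescaled with learnable parameters. The sketch below compares a manual computation against nn.BatchNorm2d purely as an illustration.

```python
# Hedged sketch of per-channel batch normalization on a mini-batch of feature maps.
import torch
import torch.nn as nn

x = torch.randn(16, 32, 28, 28)                      # mini-batch of feature maps

mean = x.mean(dim=(0, 2, 3), keepdim=True)            # per-channel batch mean
var = x.var(dim=(0, 2, 3), unbiased=False, keepdim=True)
x_hat = (x - mean) / torch.sqrt(var + 1e-5)           # normalize
gamma = torch.ones(1, 32, 1, 1)                       # learnable scale (initial value)
beta = torch.zeros(1, 32, 1, 1)                       # learnable shift (initial value)
manual = gamma * x_hat + beta

bn = nn.BatchNorm2d(32).train()                       # training mode uses batch statistics
print(torch.allclose(manual, bn(x), atol=1e-4))       # True up to numerical precision
```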
Proceedings ArticleDOI

Densely Connected Convolutional Networks

TL;DR: DenseNet, as mentioned in this paper, connects each layer to every other layer in a feed-forward fashion, which alleviates the vanishing-gradient problem, strengthens feature propagation, encourages feature reuse, and substantially reduces the number of parameters.
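
The dense connectivity pattern can be sketched as a block in which each layer receives the concatenation of all preceding feature maps. The growth rate and layer count below are illustrative assumptions.

```python
# Hedged sketch of a dense block: every layer sees all earlier feature maps.
import torch
import torch.nn as nn

class DenseBlock(nn.Module):
    def __init__(self, in_ch, growth=32, num_layers=4):
        super().__init__()
        self.layers = nn.ModuleList(
            nn.Sequential(
                nn.BatchNorm2d(in_ch + i * growth),
                nn.ReLU(inplace=True),
                nn.Conv2d(in_ch + i * growth, growth, 3, padding=1, bias=False),
            )
            for i in range(num_layers)
        )

    def forward(self, x):
        features = [x]
        for layer in self.layers:
            features.append(layer(torch.cat(features, dim=1)))  # reuse all earlier maps
        return torch.cat(features, dim=1)
```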
Posted Content

Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift

TL;DR: Batch Normalization, as mentioned in this paper, normalizes layer inputs for each training mini-batch to reduce internal covariate shift in deep neural networks, and achieves state-of-the-art performance on ImageNet.