Open Access · Journal Article · DOI

Deep face recognition: A survey

TLDR
A comprehensive review of recent developments in deep face recognition can be found in this paper, covering algorithm designs, databases, protocols, and application scenarios, as well as technical challenges and several promising directions.
About
This article was published in Neurocomputing on 2021-03-14 and is open access. It has received 353 citations to date. The article focuses on the topics: Deep learning & Feature extraction.


Citations
Posted Content

The 5th Recognizing Families in the Wild Data Challenge: Predicting Kinship from Faces.

TL;DR: The 5th Recognizing Families in the Wild (RFIW) data challenge, as discussed by the authors, was held in conjunction with the 16th IEEE International Conference on Automatic Face and Gesture Recognition (FG).
Book ChapterDOI

An Advanced Framework for Critical Infrastructure Protection Using Computer Vision Technologies

TL;DR: In this article, the authors presented a framework that integrates three main computer vision technologies, namely (i) person detection, (ii) person re-identification, and (iii) face recognition, to enhance the operational security of a critical infrastructure perimeter.
Journal ArticleDOI

Interpretability for reliable, efficient, and self-cognitive DNNs: From theories to applications

TL;DR: In this paper, the authors elaborate on the definition of model interpretability from the three perspectives of model reliability, feature efficiency, and self-cognition, and categorize the interpretability methods involved according to typical application scenarios.
Proceedings ArticleDOI

A long-distance 3D face recognition architecture utilizing MEMS-based region-scanning LiDAR

TL;DR: In this paper, a 3D face recognition system utilizing MEMS-based indirect Time-of-Flight (ToF) region-scanning LiDAR is proposed for long-distance person identification.
Posted Content

Don't Generate Me: Training Differentially Private Generative Models with Sinkhorn Divergence

TL;DR: DP-Sinkhorn, as mentioned in this paper, minimizes the Sinkhorn divergence, a computationally efficient approximation to the exact optimal transport distance, between the model and the data in a differentially private manner, and uses a novel technique for controlling the bias-variance tradeoff of gradient estimates.
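As a rough illustration of the optimal-transport machinery behind the Sinkhorn divergence mentioned above, here is a minimal NumPy sketch of the standard Sinkhorn iterations for entropically regularized optimal transport. It is not the paper's differentially private training procedure; function names and parameters are illustrative.

```python
import numpy as np

def sinkhorn(cost, a, b, eps=0.1, n_iters=200):
    """Entropic-OT Sinkhorn iterations: returns an approximate transport plan
    between histograms a and b for a given cost matrix (illustrative sketch)."""
    K = np.exp(-cost / eps)          # Gibbs kernel derived from the cost matrix
    u = np.ones_like(a)
    for _ in range(n_iters):
        v = b / (K.T @ u)            # rescale so column sums match marginal b
        u = a / (K @ v)              # rescale so row sums match marginal a
    return np.diag(u) @ K @ np.diag(v)

# Toy usage: two uniform histograms over 4 points with random pairwise costs.
rng = np.random.default_rng(0)
C = rng.random((4, 4))
P = sinkhorn(C, np.full(4, 0.25), np.full(4, 0.25))
print(P.sum(axis=0), P.sum(axis=1))  # both approximately equal to the marginals
```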
References
Proceedings ArticleDOI

Deep Residual Learning for Image Recognition

TL;DR: In this article, the authors proposed a residual learning framework to ease the training of networks that are substantially deeper than those used previously, which won 1st place in the ILSVRC 2015 classification task.
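A minimal PyTorch sketch of the residual learning idea summarized above: a basic block computes ReLU(F(x) + x), where F is two stacked 3x3 convolutions. This assumes an identity shortcut (matching input and output shapes) and is not the authors' exact implementation.

```python
import torch
import torch.nn as nn

class BasicBlock(nn.Module):
    """Residual block: output = ReLU(F(x) + x), with F two 3x3 convolutions."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        residual = x                        # identity shortcut
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + residual)    # add the shortcut, then activate

block = BasicBlock(64)
print(block(torch.randn(1, 64, 56, 56)).shape)  # torch.Size([1, 64, 56, 56])
```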
Proceedings Article

ImageNet Classification with Deep Convolutional Neural Networks

TL;DR: As discussed by the authors, state-of-the-art ImageNet classification performance was achieved with a deep convolutional neural network (DCNN) consisting of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax.
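A sketch of the layer stack described above, written in PyTorch: five convolutional layers (some followed by max-pooling) and three fully-connected layers ending in 1000-way class logits. Channel counts follow the published architecture, but details such as local response normalization and dropout are omitted; the 227x227 input size is an assumption of this sketch.

```python
import torch
import torch.nn as nn

# Illustrative AlexNet-like stack; softmax is applied inside the loss function.
alexnet_like = nn.Sequential(
    nn.Conv2d(3, 96, kernel_size=11, stride=4), nn.ReLU(), nn.MaxPool2d(3, stride=2),
    nn.Conv2d(96, 256, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool2d(3, stride=2),
    nn.Conv2d(256, 384, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(384, 384, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(384, 256, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(3, stride=2),
    nn.Flatten(),
    nn.Linear(256 * 6 * 6, 4096), nn.ReLU(),
    nn.Linear(4096, 4096), nn.ReLU(),
    nn.Linear(4096, 1000),  # 1000-way class logits
)

print(alexnet_like(torch.rand(1, 3, 227, 227)).shape)  # torch.Size([1, 1000])
```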
Proceedings Article

Very Deep Convolutional Networks for Large-Scale Image Recognition

TL;DR: This work investigates the effect of convolutional network depth on accuracy in the large-scale image recognition setting using an architecture with very small (3x3) convolution filters, showing that a significant improvement over prior-art configurations can be achieved by pushing the depth to 16-19 weight layers.
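The depth in this design comes from stacking small 3x3 filters in repeated stages. A minimal PyTorch sketch of one such stage, and of the convolutional trunk of the 16-weight-layer configuration (13 conv + 3 FC layers), is shown below; channel widths follow the published configuration, but this is a sketch rather than a full implementation.

```python
import torch
import torch.nn as nn

def vgg_stage(in_ch, out_ch, n_convs):
    """A VGG-style stage: n_convs stacked 3x3 convolutions, then 2x2 max-pooling."""
    layers = []
    for i in range(n_convs):
        layers += [nn.Conv2d(in_ch if i == 0 else out_ch, out_ch, kernel_size=3, padding=1),
                   nn.ReLU(inplace=True)]
    layers.append(nn.MaxPool2d(kernel_size=2, stride=2))
    return nn.Sequential(*layers)

# Convolutional trunk of the 16-layer configuration: 2+2+3+3+3 = 13 conv layers.
features = nn.Sequential(
    vgg_stage(3, 64, 2), vgg_stage(64, 128, 2), vgg_stage(128, 256, 3),
    vgg_stage(256, 512, 3), vgg_stage(512, 512, 3),
)

print(features(torch.rand(1, 3, 224, 224)).shape)  # torch.Size([1, 512, 7, 7])
```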
Proceedings ArticleDOI

Going deeper with convolutions

TL;DR: Inception as mentioned in this paper is a deep convolutional neural network architecture that achieves the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14).
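The core building block of this architecture is the Inception module: parallel 1x1, 3x3, and 5x5 convolutions plus a pooling branch, with 1x1 convolutions used for dimensionality reduction, concatenated along the channel axis. The PyTorch sketch below illustrates the idea; the branch widths are illustrative, not the full published configuration.

```python
import torch
import torch.nn as nn

class InceptionModule(nn.Module):
    """Parallel conv branches whose outputs are concatenated channel-wise."""
    def __init__(self, in_ch):
        super().__init__()
        self.b1 = nn.Conv2d(in_ch, 64, kernel_size=1)                       # 1x1 branch
        self.b2 = nn.Sequential(nn.Conv2d(in_ch, 96, kernel_size=1), nn.ReLU(),
                                nn.Conv2d(96, 128, kernel_size=3, padding=1))  # 1x1 -> 3x3
        self.b3 = nn.Sequential(nn.Conv2d(in_ch, 16, kernel_size=1), nn.ReLU(),
                                nn.Conv2d(16, 32, kernel_size=5, padding=2))   # 1x1 -> 5x5
        self.b4 = nn.Sequential(nn.MaxPool2d(3, stride=1, padding=1),
                                nn.Conv2d(in_ch, 32, kernel_size=1))           # pool -> 1x1

    def forward(self, x):
        return torch.cat([self.b1(x), self.b2(x), self.b3(x), self.b4(x)], dim=1)

print(InceptionModule(192)(torch.rand(1, 192, 28, 28)).shape)  # torch.Size([1, 256, 28, 28])
```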
Journal ArticleDOI

Generative Adversarial Nets

TL;DR: A new framework for estimating generative models via an adversarial process, in which two models are trained simultaneously: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than from G.
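A toy PyTorch sketch of this adversarial process on 1-D data: the discriminator D is pushed to score real samples as 1 and generated samples as 0, while the generator G is pushed to make D score its samples as 1. The networks, data distribution, and hyperparameters here are purely illustrative, not the paper's setup.

```python
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 1))               # generator
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())  # discriminator
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

for step in range(100):
    real = torch.randn(64, 1) * 0.5 + 2.0   # toy "data" distribution
    fake = G(torch.randn(64, 16))           # samples from the generator

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator step: fool D, i.e. push D(G(z)) toward 1.
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```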