Open Access · Journal ArticleDOI

Deep face recognition: A survey

TL;DR: A comprehensive review of recent developments in deep face recognition, covering algorithm designs, databases, protocols, and application scenarios, as well as technical challenges and several promising directions.
About
This article was published in Neurocomputing on 2021-03-14 and is currently open access. It has received 353 citations to date. The article focuses on the topics: Deep learning & Feature extraction.


Citations
Posted Content

Information-Theoretic Bias Assessment Of Learned Representations Of Pretrained Face Recognition.

TL;DR: This paper proposes an information-theoretic, independent bias-assessment metric that identifies the degree of bias against protected demographic attributes in the learned representations of pretrained face recognition systems. This differs from methods that rely on classification accuracy or that examine the differences between ground-truth labels and the labels of protected attributes predicted by shallow networks.
Proceedings ArticleDOI

Gaussian Soft Margin Angular Loss for Face Recognition

TL;DR: This work proposes a loss function that maximizes inter-class distance and intra-class compactness while allowing samples that naturally lie farther from their class center to have a smaller margin.
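The distance-dependent margin described above can be sketched numerically. The function name, the Gaussian weighting of the margin, and all parameter values below are illustrative assumptions, not the paper's actual formulation:

```python
import numpy as np

def soft_margin_angular_logit(cos_theta, dist_to_center,
                              base_margin=0.5, sigma=0.3):
    """Additive angular margin that shrinks for samples far from
    their class center (hypothetical Gaussian weighting)."""
    # Samples near the center get the full margin; distant samples
    # are penalized less, approximating the paper's soft margin.
    margin = base_margin * np.exp(-(dist_to_center ** 2) / (2 * sigma ** 2))
    theta = np.arccos(np.clip(cos_theta, -1.0, 1.0))
    return np.cos(theta + margin)
```

A sample at its class center (`dist_to_center = 0`) receives the full margin penalty, so its adjusted logit is strictly below `cos_theta`; a very distant sample's logit approaches plain `cos_theta`.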
Proceedings ArticleDOI

A Unified Model for Face Matching and Presentation Attack Detection using an Ensemble of Vision Transformer Features

TL;DR: This work proposes a feature-ensemble approach in which an ensemble of local features extracted from the intermediate blocks of a ViT is used for face presentation attack detection (FPAD), while face matching is performed on the ViT class token.
Proceedings ArticleDOI

Peanut Seed Germination Detection from Aerial Images

TL;DR: This work proposes to reduce the time lag in detecting peanut germination failures by combining deep learning-based object detection (OD) with unmanned aerial systems (UAS) to identify early in-field peanut seed germination.
Posted Content

Lagrangian Objective Function Leads to Improved Unforeseen Attack Generalization in Adversarial Training.

TL;DR: This paper proposes a modification to adversarial training that minimizes the perturbation $\ell_p$ norm while maximizing the classification loss, expressed in Lagrangian form.
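The attacker's objective described above can be written as a single Lagrangian score to maximize; the function name, the multiplier value, and the scalar-loss interface below are illustrative assumptions:

```python
import numpy as np

def lagrangian_attack_objective(loss_adv, delta, lam=0.5, p=2):
    """Lagrangian form: maximize classification loss while
    penalizing the perturbation's l_p norm.

    loss_adv : scalar classification loss at x + delta
    delta    : perturbation array
    lam      : hypothetical Lagrange multiplier trading off the two terms
    """
    # J(delta) = loss(x + delta) - lambda * ||delta||_p
    return loss_adv - lam * np.linalg.norm(delta.ravel(), ord=p)
```

Maximizing this score over `delta` favors perturbations that raise the loss without growing large in norm, rather than projecting onto a fixed $\ell_p$ ball.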
References
Proceedings ArticleDOI

Deep Residual Learning for Image Recognition

TL;DR: This paper proposes a residual learning framework that eases the training of networks substantially deeper than those used previously; a model based on it won 1st place on the ILSVRC 2015 classification task.
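The residual idea, learning F(x) = H(x) − x and adding the identity shortcut back, can be sketched with dense layers (the paper uses convolutional layers; the names below are illustrative):

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def residual_block(x, W1, W2):
    """Identity-shortcut block: the weights only need to learn the
    residual F(x) = H(x) - x, not the full mapping H(x)."""
    residual = W2 @ relu(W1 @ x)  # F(x), a small two-layer stack
    return relu(residual + x)     # shortcut adds the input back
```

With all-zero weights the block reduces to the identity on non-negative inputs, which is what makes very deep stacks of such blocks easy to optimize.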
Proceedings Article

ImageNet Classification with Deep Convolutional Neural Networks

TL;DR: This paper introduces a deep convolutional neural network consisting of five convolutional layers, some followed by max-pooling layers, and three fully connected layers with a final 1000-way softmax; it achieved state-of-the-art performance on ImageNet classification.
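The layer stack described above can be listed schematically; the filter and unit counts below are the widely published AlexNet values, included for illustration rather than taken from this summary:

```python
import numpy as np

def softmax(z):
    """Final 1000-way softmax over class scores (numerically stable)."""
    e = np.exp(z - np.max(z))
    return e / e.sum()

# Five conv layers (some followed by max-pooling) and three
# fully connected layers, ending in the softmax classifier.
ALEXNET_LAYERS = [
    ("conv", 96), ("maxpool", None),
    ("conv", 256), ("maxpool", None),
    ("conv", 384), ("conv", 384), ("conv", 256), ("maxpool", None),
    ("fc", 4096), ("fc", 4096), ("fc", 1000),
    ("softmax", 1000),
]
```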
Proceedings Article

Very Deep Convolutional Networks for Large-Scale Image Recognition

TL;DR: This work investigates the effect of convolutional network depth on accuracy in the large-scale image recognition setting, using an architecture with very small convolution filters, and shows that a significant improvement over prior-art configurations can be achieved by pushing the depth to 16-19 weight layers.
Proceedings ArticleDOI

Going deeper with convolutions

TL;DR: This paper introduces Inception, a deep convolutional neural network architecture that sets a new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14).
Journal ArticleDOI

Generative Adversarial Nets

TL;DR: This paper proposes a new framework for estimating generative models via an adversarial process, in which two models are trained simultaneously: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than from G.
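The two-player value function behind that adversarial process can be sketched for a 1-D logistic discriminator; the function names and parameterization below are illustrative:

```python
import numpy as np

def discriminator(x, w, b):
    """D(x): probability that x came from the data rather than G."""
    return 1.0 / (1.0 + np.exp(-(w * x + b)))

def gan_value(x_real, x_fake, w, b):
    """V(D, G) = E_data[log D(x)] + E_G[log(1 - D(G(z)))].
    D ascends this value; G descends it."""
    d_real = discriminator(x_real, w, b)
    d_fake = discriminator(x_fake, w, b)
    return np.mean(np.log(d_real)) + np.mean(np.log(1.0 - d_fake))
```

At the uninformative discriminator `w = b = 0`, `D` outputs 0.5 everywhere and the value is `2 log(1/2)`, the equilibrium value the paper derives when G matches the data distribution.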