Deep face recognition: A survey
Mei Wang, Weihong Deng +1 more
TLDR
A comprehensive review of the recent developments in deep face recognition, covering broad topics on algorithm designs, databases, protocols, and application scenes, as well as the technical challenges and several promising directions.
About
This article is published in Neurocomputing. It was published on 2021-03-14 and is currently open access. It has received 353 citations to date. The article focuses on the topics: Deep learning & Feature extraction.
Citations
Posted Content
Mitigate Bias in Face Recognition using Skewness-Aware Reinforcement Learning
Mei Wang, Weihong Deng +1 more
TL;DR: A reinforcement-learning-based race balance network (RL-RBN) is proposed that mitigates racial bias and learns more balanced performance across demographic groups; two ethnicity-aware training datasets are also provided.
Journal ArticleDOI
How many faces do people know?
TL;DR: It is shown that people know about 5000 faces on average and that individual differences are large, which offers a possible explanation for large variation in identification performance.
Proceedings ArticleDOI
Fair Loss: Margin-Aware Reinforcement Learning for Deep Face Recognition
TL;DR: This paper introduces a new margin-aware reinforcement-learning-based loss function, fair loss, in which each class learns an appropriate adaptive margin via deep Q-learning: an agent is trained to choose a margin-adaptation strategy for each class, making the additive margins across different classes more reasonable.
Journal ArticleDOI
Cross-resolution learning for Face Recognition
TL;DR: In this article, the authors proposed a cross-resolution matching method for low-resolution face recognition, which can be combined with super-resolution preprocessing of the input faces, achieving state-of-the-art performance.
Posted Content
Medical Deep Learning - A systematic Meta-Review.
TL;DR: The aim of this article is to provide the first high-level, systematic meta-review of medical deep learning surveys, which focus, in general, on specific medical scenarios, like the analysis of medical images containing specific pathologies.
References
Proceedings ArticleDOI
Deep Residual Learning for Image Recognition
TL;DR: In this article, the authors proposed a residual learning framework to ease the training of networks that are substantially deeper than those used previously, which won the 1st place on the ILSVRC 2015 classification task.
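As a rough illustration of the residual idea (a sketch, not the paper's implementation): a block outputs F(x) + x, so representing the identity mapping only requires driving the learned residual F toward zero, which eases the training of very deep stacks. The `transform` below is a hypothetical stand-in for the block's convolutional layers.

```python
def residual_block(x, transform):
    """Return F(x) + x: the learned residual plus the identity shortcut."""
    return [xi + fi for xi, fi in zip(x, transform(x))]

# Hypothetical stand-in for the stacked conv layers inside a block:
double = lambda v: [2.0 * vi for vi in v]

out = residual_block([1.0, -2.0, 3.0], double)  # shortcut x plus F(x) = 2x
```

If the transform outputs all zeros, the block reduces exactly to the identity, which is the property that makes deeper networks no harder to optimize than shallower ones.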
Proceedings Article
ImageNet Classification with Deep Convolutional Neural Networks
TL;DR: A deep convolutional neural network consisting of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax, achieving state-of-the-art ImageNet classification performance.
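The spatial dimensions through such a conv/pool stack follow the standard output-size formula; the sketch below traces a 224x224 input through the first stage, with layer parameters assumed from the published AlexNet configuration rather than stated in this summary.

```python
def conv_out(size, kernel, stride=1, pad=0):
    """Spatial output size of a convolution or pooling layer."""
    return (size + 2 * pad - kernel) // stride + 1

s = conv_out(224, kernel=11, stride=4, pad=2)  # first conv layer -> 55
s = conv_out(s, kernel=3, stride=2)            # overlapping max-pool -> 27
```

Repeating the formula layer by layer is how the 1000-way softmax ends up fed by a fixed-size feature map regardless of the depth of the stack.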
Proceedings Article
Very Deep Convolutional Networks for Large-Scale Image Recognition
Karen Simonyan, Andrew Zisserman +1 more
TL;DR: This work investigates the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting using an architecture with very small convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers.
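A sketch of why very small filters suffice: n stacked 3x3 convolutions (stride 1) cover the same receptive field as one larger filter, each layer adding kernel - 1 pixels, while using fewer parameters and more non-linearities.

```python
def stacked_receptive_field(n_layers, kernel=3):
    """Receptive field of n stacked stride-1 convolutions."""
    rf = 1
    for _ in range(n_layers):
        rf += kernel - 1
    return rf

rf = stacked_receptive_field(3)  # three 3x3 convs see a 7x7 region
```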
Proceedings ArticleDOI
Going deeper with convolutions
Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, Andrew Rabinovich +8 more
TL;DR: Inception as mentioned in this paper is a deep convolutional neural network architecture that achieves the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14).
Journal ArticleDOI
Generative Adversarial Nets
Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, Yoshua Bengio +7 more
TL;DR: A new framework for estimating generative models via an adversarial process, in which two models are simultaneously trained: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than from G.
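The adversarial objective can be sketched as the minimax value V(D, G) = E[log D(x)] + E[log(1 - D(G(z)))], which D tries to maximize and G to minimize. The toy code below evaluates it for a hypothetical scalar logistic discriminator; the samples and scores are illustrative, not from the paper.

```python
import math

def value(d, real_samples, fake_samples):
    """Minimax value V(D, G): D maximizes it, G minimizes it."""
    real_term = sum(math.log(d(x)) for x in real_samples) / len(real_samples)
    fake_term = sum(math.log(1.0 - d(x)) for x in fake_samples) / len(fake_samples)
    return real_term + fake_term

# Hypothetical discriminator: logistic function of a scalar score.
d = lambda x: 1.0 / (1.0 + math.exp(-x))

v = value(d, real_samples=[2.0, 3.0], fake_samples=[-2.0, -3.0])
# v approaches 0 when D separates real from fake well; a maximally
# confused D (outputting 0.5 everywhere) gives 2 * log(0.5).
```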