Deep face recognition: A survey
Mei Wang, Weihong Deng
TL;DR
A comprehensive review of the recent developments in deep face recognition can be found in this paper, covering broad topics on algorithm designs, databases, protocols, and application scenes, as well as the technical challenges and several promising directions.
About
This article was published in Neurocomputing on 2021-03-14 and is currently open access. It has received 353 citations to date. The article focuses on the topics: Deep learning & Feature extraction.
Citations
Proceedings Article
Verification of Sitter Identity Across Historical Portrait Paintings by Confidence-aware Face Recognition
TL;DR: Huber et al. propose a specialized, likelihood-based fusion method that enables deep-learning-based face recognition on historic portrait paintings, along with a method to accurately determine the confidence of the resulting decision to assist art historians in their research.
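As a rough illustration of likelihood-based score fusion (a generic sketch, not Huber et al.'s exact method), the snippet below combines per-painting match scores into a single log-likelihood ratio whose magnitude can serve as a confidence value. The Gaussian score-model parameters are hypothetical placeholders that would normally be fit on a labelled calibration set.

```python
import numpy as np

# Hypothetical score-model parameters; in practice these would be estimated
# from genuine and impostor comparison scores on a calibration set.
GENUINE_MU, GENUINE_SIGMA = 0.62, 0.12
IMPOSTOR_MU, IMPOSTOR_SIGMA = 0.18, 0.10

def log_gaussian(x, mu, sigma):
    """Log-density of a univariate Gaussian."""
    return -0.5 * np.log(2 * np.pi * sigma**2) - (x - mu) ** 2 / (2 * sigma**2)

def fuse_scores(scores):
    """Fuse per-painting match scores into one log-likelihood ratio.

    Positive values favour 'same sitter'; the magnitude acts as a
    confidence estimate for the fused decision.
    """
    scores = np.asarray(scores, dtype=float)
    llr = (log_gaussian(scores, GENUINE_MU, GENUINE_SIGMA)
           - log_gaussian(scores, IMPOSTOR_MU, IMPOSTOR_SIGMA))
    return llr.sum()

# Example: three portrait comparisons for one candidate sitter pair.
print(fuse_scores([0.55, 0.48, 0.61]))  # large positive -> likely same sitter
```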
Journal Article
Privacy-Preserving and Verifiable SRC-Based Face Recognition with Cloud/Edge Server Assistance
TL;DR: In this article, a new norm-preserving matrix transformation was used to outsource the heavy ℓ1-minimization problem in SRC-based face recognition.
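A minimal sketch of the outsourcing idea, under the assumption that the masking matrix is orthogonal (a toy stand-in for the paper's transformation): an orthogonal Q preserves ℓ2 residuals, ||QAx − Qy||₂ = ||Ax − y||₂, so a server solving the masked ℓ1 problem recovers the same sparse code as the plain one without seeing the raw face vectors. The ISTA solver and all names here are illustrative, not the paper's protocol.

```python
import numpy as np

rng = np.random.default_rng(0)

def ista_lasso(A, y, lam=0.05, n_iter=500):
    """Basic ISTA solver for min_x 0.5*||A x - y||_2^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = A.T @ (A @ x - y)
        z = x - g / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x

# Toy SRC setup: dictionary A stacks training face vectors, y is a probe.
A = rng.standard_normal((64, 40))
y = A[:, 3] + 0.01 * rng.standard_normal(64)   # probe close to atom 3

# Client masks the problem with a random orthogonal Q before outsourcing;
# the masked problem is equivalent because Q preserves the residual norm.
Q, _ = np.linalg.qr(rng.standard_normal((64, 64)))
x_masked = ista_lasso(Q @ A, Q @ y)
x_plain = ista_lasso(A, y)                     # computed only to verify equivalence
print(np.max(np.abs(x_masked - x_plain)))      # ~0: identical up to float error
```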
Book Chapter
Data Collection and Image Processing Tool for Face Recognition
TL;DR: A data collection and image-analysis tool for face recognition with refined ergonomic and visual parameter settings, capable of collecting face data in various poses while keeping the user interaction intuitive and comfortable.
Dissertation
Optimization for Training Deep Models and Deep Learning Based Point Cloud Analysis and Image Classification
TL;DR: BPGrad, a novel approximation algorithm for optimizing deep models globally via branch and pruning, is proposed based on the assumption of Lipschitz continuity in deep learning; it outperforms conventional DL solvers such as Adagrad, Adadelta, RMSProp, and Adam in object recognition, detection, and segmentation tasks.
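BPGrad itself is more involved, but the core branch-and-prune idea under a Lipschitz assumption can be illustrated in one dimension: a Lipschitz constant L gives the lower bound f(m) − L·(b − a)/2 on any interval [a, b] with midpoint m, so intervals whose bound cannot beat the incumbent minimum are pruned without further evaluation. A hypothetical sketch, not the dissertation's algorithm:

```python
import heapq
import math

def branch_and_prune(f, lo, hi, lip, tol=1e-3):
    """Globally minimise a Lipschitz-continuous f on [lo, hi].

    Each interval carries the lower bound f(mid) - lip*width/2; intervals
    whose bound exceeds the best value found so far are pruned.
    """
    best_x, best_f = lo, f(lo)
    m = 0.5 * (lo + hi)
    heap = [(f(m) - lip * (hi - lo) / 2, lo, hi, m, f(m))]
    while heap:
        bound, a, b, m, fm = heapq.heappop(heap)
        if fm < best_f:
            best_x, best_f = m, fm
        if bound >= best_f - tol:        # prune: cannot beat the incumbent
            continue
        for sa, sb in ((a, m), (m, b)):  # branch into two halves
            sm = 0.5 * (sa + sb)
            fs = f(sm)
            heapq.heappush(heap, (fs - lip * (sb - sa) / 2, sa, sb, sm, fs))
    return best_x, best_f

# Example: multimodal function; 7.0 over-estimates its Lipschitz constant on [-3, 3].
x, fx = branch_and_prune(lambda t: math.sin(3 * t) + 0.1 * t * t, -3.0, 3.0, lip=7.0)
print(x, fx)
```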
References
Proceedings Article
Deep Residual Learning for Image Recognition
TL;DR: In this article, the authors propose a residual learning framework to ease the training of networks substantially deeper than those used previously, which won 1st place in the ILSVRC 2015 classification task.
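A minimal sketch of the core building block: a basic residual block with an identity shortcut, so the stack learns a residual F(x) and outputs F(x) + x. Channel count and layer sizes here are illustrative.

```python
import torch
import torch.nn as nn

class BasicBlock(nn.Module):
    """Minimal residual block: output = ReLU(F(x) + x).

    The identity shortcut lets gradients bypass the conv stack, which is
    what eases the training of very deep networks.
    """
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + x)   # identity shortcut

block = BasicBlock(64)
print(block(torch.randn(1, 64, 56, 56)).shape)  # torch.Size([1, 64, 56, 56])
```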
Proceedings Article
ImageNet Classification with Deep Convolutional Neural Networks
TL;DR: The authors achieve state-of-the-art ImageNet classification with a deep convolutional neural network consisting of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax.
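A compact sketch of the described five-conv/three-FC layout, assuming a 227×227 RGB input; filter and channel sizes follow the published configuration, but this is an illustration, not a full reimplementation (it omits details such as local response normalization).

```python
import torch
import torch.nn as nn

# Five conv layers (some followed by max-pooling), then three
# fully-connected layers ending in a 1000-way classifier.
alexnet_like = nn.Sequential(
    nn.Conv2d(3, 96, 11, stride=4), nn.ReLU(), nn.MaxPool2d(3, 2),
    nn.Conv2d(96, 256, 5, padding=2), nn.ReLU(), nn.MaxPool2d(3, 2),
    nn.Conv2d(256, 384, 3, padding=1), nn.ReLU(),
    nn.Conv2d(384, 384, 3, padding=1), nn.ReLU(),
    nn.Conv2d(384, 256, 3, padding=1), nn.ReLU(), nn.MaxPool2d(3, 2),
    nn.Flatten(),
    nn.Linear(256 * 6 * 6, 4096), nn.ReLU(), nn.Dropout(),
    nn.Linear(4096, 4096), nn.ReLU(), nn.Dropout(),
    nn.Linear(4096, 1000),   # logits for the final 1000-way softmax
)
print(alexnet_like(torch.randn(1, 3, 227, 227)).shape)  # torch.Size([1, 1000])
```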
Proceedings Article
Very Deep Convolutional Networks for Large-Scale Image Recognition
Karen Simonyan, Andrew Zisserman
TL;DR: This work investigates the effect of convolutional network depth on accuracy in the large-scale image recognition setting, using an architecture with very small (3×3) convolution filters, and shows that a significant improvement over prior-art configurations can be achieved by pushing the depth to 16-19 weight layers.
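A sketch of the key design choice, stacks of 3×3 convolutions: the stage builder below assembles a VGG-16-like feature extractor with 13 convolutional layers; adding three fully-connected layers on top would give the 16 weight layers of the smallest configuration. Sizes are illustrative.

```python
import torch
import torch.nn as nn

def vgg_stage(in_ch, out_ch, n_convs):
    """A VGG-style stage: a stack of 3x3 convolutions followed by pooling.

    Stacking small 3x3 filters gives the receptive field of a larger filter
    with fewer parameters and more non-linearities, which is how the
    16-19-layer configurations reach their depth.
    """
    layers = []
    for i in range(n_convs):
        layers += [nn.Conv2d(in_ch if i == 0 else out_ch, out_ch, 3, padding=1),
                   nn.ReLU(inplace=True)]
    layers.append(nn.MaxPool2d(2, 2))
    return nn.Sequential(*layers)

# 2+2+3+3+3 = 13 conv layers, the convolutional part of a VGG-16-like net.
features = nn.Sequential(
    vgg_stage(3, 64, 2), vgg_stage(64, 128, 2), vgg_stage(128, 256, 3),
    vgg_stage(256, 512, 3), vgg_stage(512, 512, 3),
)
print(features(torch.randn(1, 3, 224, 224)).shape)  # torch.Size([1, 512, 7, 7])
```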
Proceedings Article
Going deeper with convolutions
Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, Andrew Rabinovich
TL;DR: Inception is a deep convolutional neural network architecture that set a new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14).
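A minimal sketch of an Inception module: parallel 1×1, 3×3, and 5×5 branches plus a pooling branch, concatenated along the channel dimension, with 1×1 convolutions reducing channels before the expensive filters. The channel split in the example follows GoogLeNet's first module; the class itself is a simplified illustration.

```python
import torch
import torch.nn as nn

class InceptionModule(nn.Module):
    """Parallel 1x1, 3x3, 5x5 and pooling branches, concatenated on channels.

    The 1x1 convolutions (c3r, c5r) reduce channel count before the
    expensive 3x3 and 5x5 filters.
    """
    def __init__(self, in_ch, c1, c3r, c3, c5r, c5, pp):
        super().__init__()
        self.b1 = nn.Sequential(nn.Conv2d(in_ch, c1, 1), nn.ReLU())
        self.b3 = nn.Sequential(nn.Conv2d(in_ch, c3r, 1), nn.ReLU(),
                                nn.Conv2d(c3r, c3, 3, padding=1), nn.ReLU())
        self.b5 = nn.Sequential(nn.Conv2d(in_ch, c5r, 1), nn.ReLU(),
                                nn.Conv2d(c5r, c5, 5, padding=2), nn.ReLU())
        self.bp = nn.Sequential(nn.MaxPool2d(3, 1, padding=1),
                                nn.Conv2d(in_ch, pp, 1), nn.ReLU())

    def forward(self, x):
        return torch.cat([self.b1(x), self.b3(x), self.b5(x), self.bp(x)], dim=1)

# Channel split of GoogLeNet's first Inception module (inception 3a).
m = InceptionModule(192, 64, 96, 128, 16, 32, 32)
print(m(torch.randn(1, 192, 28, 28)).shape)  # torch.Size([1, 256, 28, 28])
```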
Journal Article
Generative Adversarial Nets
Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, Yoshua Bengio
TL;DR: A new framework for estimating generative models via an adversarial process, in which two models are trained simultaneously: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than from G.
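A minimal sketch of the adversarial game on 1-D toy data (network sizes and hyperparameters are illustrative, not the paper's): D is trained to separate real from generated samples, while G is trained, with the non-saturating variant of the generator loss, to fool D.

```python
import torch
import torch.nn as nn

# Tiny generator and discriminator; the target distribution is N(2, 0.5).
G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 2.0   # samples from the data distribution
    fake = G(torch.randn(64, 8))            # G maps noise to samples

    # Discriminator step: push real toward label 1, fake toward label 0.
    d_loss = (bce(D(real), torch.ones(64, 1))
              + bce(D(fake.detach()), torch.zeros(64, 1)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: make D assign label 1 to fakes (non-saturating loss).
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

print(G(torch.randn(1000, 8)).mean().item())  # should drift toward 2.0
```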