Open Access · Posted Content

When Face Recognition Meets with Deep Learning: an Evaluation of Convolutional Neural Networks for Face Recognition

TL;DR
In this article, the authors conduct an extensive evaluation of CNN-based face recognition systems (CNN-FRS) on a common ground to make their work easily reproducible, and propose three CNN architectures, the first reported architectures trained using LFW data.
Abstract: 
Deep learning, in particular the Convolutional Neural Network (CNN), has recently achieved promising results in face recognition. However, it remains an open question why CNNs work well and how to design a 'good' architecture. Existing work tends to report CNN architectures that work well for face recognition rather than investigate the reasons. In this work, we conduct an extensive evaluation of CNN-based face recognition systems (CNN-FRS) on a common ground to make our work easily reproducible. Specifically, we use the public LFW (Labeled Faces in the Wild) database to train CNNs, unlike most existing CNNs, which are trained on private databases. We propose three CNN architectures, the first reported architectures trained using LFW data. This paper quantitatively compares CNN architectures and evaluates the effect of different implementation choices. We identify several useful properties of CNN-FRS. For instance, the dimensionality of the learned features can be significantly reduced without an adverse effect on face recognition accuracy. In addition, a traditional metric learning method exploiting CNN-learned features is evaluated. Experiments show that two crucial factors for good CNN-FRS performance are the fusion of multiple CNNs and metric learning. To make our work reproducible, source code and models will be made publicly available.
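
The abstract's two headline findings, that fused multi-CNN features plus a learned metric drive performance and that feature dimensionality can be cut sharply, can be illustrated with a minimal sketch. All sizes below, and the use of PCA with cosine similarity in place of the paper's exact fusion and metric-learning choices, are illustrative assumptions.

```python
# Minimal sketch of the two ingredients the paper identifies:
# (1) fusing features from multiple CNNs, (2) reducing dimensionality,
# then comparing faces with a (here: cosine) metric. All sizes are
# illustrative assumptions, not the paper's exact configuration.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

# Stand-ins for features extracted by three independently trained CNNs
# for the same N face images (hypothetical 256-d outputs each).
N = 1000
feats = [rng.normal(size=(N, 256)) for _ in range(3)]

# (1) Fusion: concatenate per-network features.
fused = np.concatenate(feats, axis=1)          # (N, 768)

# (2) Dimensionality reduction: the paper reports that learned features
# can be compressed substantially without hurting accuracy.
pca = PCA(n_components=128).fit(fused)
low = pca.transform(fused)                     # (N, 128)

# (3) Verification score for a face pair via cosine similarity
# (a simple stand-in for the learned metric evaluated in the paper).
def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(f"pair similarity: {cosine(low[0], low[1]):.3f}")
```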


Citations
Posted Content

CosFace: Large Margin Cosine Loss for Deep Face Recognition

TL;DR: This paper reformulates the softmax loss as a cosine loss by normalizing both features and weight vectors to remove radial variations, and on this basis introduces a cosine margin term to further maximize the decision margin in the angular space.
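
For reference, the large margin cosine loss sketched in that summary is commonly written as below (reconstructed from the standard CosFace formulation, with scale s and margin m, rather than quoted from the paper):

```latex
% Large margin cosine loss (CosFace), scale s, cosine margin m:
L_{lmc} = \frac{1}{N}\sum_{i}
  -\log \frac{e^{s(\cos\theta_{y_i,i} - m)}}
             {e^{s(\cos\theta_{y_i,i} - m)} + \sum_{j \neq y_i} e^{s\cos\theta_{j,i}}},
\qquad
\cos\theta_{j,i} = \frac{W_j^{\top} x_i}{\lVert W_j \rVert\,\lVert x_i \rVert}
```
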
Journal ArticleDOI

Predicting flood susceptibility using LSTM neural networks

TL;DR: A local spatial sequential long short-term memory neural network (LSS-LSTM) is proposed for flood susceptibility prediction in Shangyou County, China; it captures both the attribute information of flood conditioning factors and the local spatial information of flood data, while retaining the LSTM's powerful sequential modelling capability for handling spatial relationships in flooding.
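
As a rough sketch of the general shape of such a model, the snippet below runs an LSTM over a sequence built from a cell's local spatial neighbourhood of conditioning factors; the neighbourhood ordering, layer sizes, and prediction head are assumptions, not the published LSS-LSTM.

```python
# Rough sketch: an LSTM over a local spatial sequence of flood
# conditioning factors. How LSS-LSTM actually orders neighbours and
# sizes its layers is not reproduced here; these are assumptions.
import torch
import torch.nn as nn

class LocalSpatialLSTM(nn.Module):
    def __init__(self, n_factors=10, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_factors, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):
        # x: (batch, seq_len, n_factors); seq_len = cells in the local
        # neighbourhood, ordered e.g. by distance to the centre cell.
        _, (h, _) = self.lstm(x)
        return torch.sigmoid(self.head(h[-1]))  # susceptibility in [0, 1]

model = LocalSpatialLSTM()
x = torch.randn(8, 9, 10)      # batch of 8 cells, 3x3 neighbourhood
print(model(x).shape)          # torch.Size([8, 1])
```
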
Proceedings ArticleDOI

Rotation equivariant vector field networks

TL;DR: Rotation Equivariant Vector Field Networks (RotEqNet), a convolutional neural network architecture encoding rotation equivariance, invariance, and covariance, is proposed, together with a modified convolution operator that relies on this vector-field representation to build deep architectures.
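
The core mechanism, convolving with rotated copies of a single canonical filter and keeping, per location, the strongest response together with its orientation as a 2D vector, can be illustrated as follows (a toy NumPy/SciPy illustration of the idea, not the published implementation):

```python
# Toy illustration of a rotation-equivariant response: correlate an
# image with R rotated copies of one canonical filter, keep the max
# response per pixel plus its winning orientation, and encode both as
# a vector field. This mirrors the idea in RotEqNet, not its code.
import numpy as np
from scipy.ndimage import rotate
from scipy.signal import correlate2d

def rot_equivariant_response(image, kernel, n_orient=8):
    angles = np.linspace(0.0, 360.0, n_orient, endpoint=False)
    stack = np.stack([
        correlate2d(image, rotate(kernel, a, reshape=False), mode="same")
        for a in angles
    ])                                   # (n_orient, H, W)
    best = stack.argmax(axis=0)          # winning orientation index
    mag = stack.max(axis=0)              # magnitude of the best response
    theta = np.deg2rad(angles[best])
    return mag * np.cos(theta), mag * np.sin(theta)  # (u, v) per pixel

img = np.random.rand(32, 32)
u, v = rot_equivariant_response(img, np.random.rand(5, 5))
print(u.shape, v.shape)                  # (32, 32) (32, 32)
```
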
Journal ArticleDOI

Power Quality Disturbance Monitoring and Classification Based on Improved PCA and Convolution Neural Network for Wind-Grid Distribution Systems

TL;DR: A novel algorithm based on improved principal component analysis and a 1-dimensional convolutional neural network is proposed for the detection and classification of power quality disturbances (PQDs); results show that the proposed method achieves significantly higher classification accuracy.
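
A hedged sketch of that pipeline shape, PCA compression of each signal window followed by a small 1-D CNN classifier, is shown below; the window length, component count, network depth, and class count are all assumptions for illustration.

```python
# Hedged sketch of the pipeline shape: compress each power-quality
# signal window with PCA, then classify with a small 1-D CNN. Sizes
# here are illustrative, not the paper's configuration.
import numpy as np
import torch
import torch.nn as nn
from sklearn.decomposition import PCA

# Fake dataset: 512 signal windows of 640 samples, 8 disturbance classes.
X = np.random.randn(512, 640).astype(np.float32)
X_red = PCA(n_components=128).fit_transform(X)       # (512, 128)

net = nn.Sequential(
    nn.Conv1d(1, 16, kernel_size=5, padding=2), nn.ReLU(),
    nn.MaxPool1d(2),
    nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
    nn.AdaptiveAvgPool1d(1), nn.Flatten(),
    nn.Linear(32, 8),                                # 8 PQD classes
)

x = torch.from_numpy(X_red).float().unsqueeze(1)     # (512, 1, 128)
print(net(x).shape)                                  # torch.Size([512, 8])
```
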
Journal ArticleDOI

Landslide Susceptibility Prediction Modeling Based on Remote Sensing and a Novel Deep Learning Algorithm of a Cascade-Parallel Recurrent Neural Network

TL;DR: A deep-learning-based model combining a long short-term memory (LSTM) recurrent neural network and a conditional random field in cascade-parallel form is proposed for producing landslide susceptibility predictions (LSPs) from remote sensing images and a geographic information system (GIS).
References
Proceedings Article

Very Deep Convolutional Networks for Large-Scale Image Recognition

TL;DR: This work investigates the effect of convolutional network depth on accuracy in the large-scale image recognition setting, using an architecture with very small (3×3) convolution filters, and shows that a significant improvement over prior-art configurations can be achieved by pushing the depth to 16-19 weight layers.
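
The configuration idea, gaining depth by stacking very small 3×3 convolutions before each downsampling step, is easy to sketch; the fragment below is a two-block VGG-style stack, not the full 16- or 19-layer network.

```python
# VGG-style fragment: depth comes from stacking 3x3 convolutions
# (each followed by ReLU) before each 2x2 max-pool. Two blocks only,
# as a sketch of the configuration principle.
import torch
import torch.nn as nn

def vgg_block(in_ch, out_ch, n_convs):
    layers = []
    for i in range(n_convs):
        layers += [nn.Conv2d(in_ch if i == 0 else out_ch, out_ch,
                             kernel_size=3, padding=1),
                   nn.ReLU(inplace=True)]
    layers.append(nn.MaxPool2d(2))
    return nn.Sequential(*layers)

features = nn.Sequential(
    vgg_block(3, 64, 2),      # two 3x3 convs, then downsample
    vgg_block(64, 128, 2),
)
print(features(torch.randn(1, 3, 224, 224)).shape)  # (1, 128, 56, 56)
```
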
Posted Content

Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift

Sergey Ioffe, Christian Szegedy · 11 Feb 2015
TL;DR: Batch Normalization normalizes layer inputs for each training mini-batch to reduce internal covariate shift in deep neural networks, and achieves state-of-the-art performance on ImageNet.
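
The per-mini-batch transform that the summary refers to is short enough to state directly (the training-time Batch Normalizing Transform, with learned scale γ and shift β and a small ε for numerical stability):

```latex
% Batch Normalizing Transform over a mini-batch B = {x_1, ..., x_m}:
\mu_B = \frac{1}{m}\sum_{i=1}^{m} x_i, \qquad
\sigma_B^2 = \frac{1}{m}\sum_{i=1}^{m} (x_i - \mu_B)^2, \qquad
\hat{x}_i = \frac{x_i - \mu_B}{\sqrt{\sigma_B^2 + \epsilon}}, \qquad
y_i = \gamma \hat{x}_i + \beta
```
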
Proceedings ArticleDOI

Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification

TL;DR: This paper proposes the Parametric Rectified Linear Unit (PReLU), which improves model fitting at nearly zero extra computational cost and with little overfitting risk, achieving a 4.94% top-5 test error on the ImageNet 2012 classification dataset.
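
The activation itself is a one-liner: PReLU generalizes ReLU with a learnable slope a_i on the negative side,

```latex
% PReLU activation with learnable coefficient a_i for the negative part;
% a_i = 0 recovers ReLU, a small fixed a_i recovers Leaky ReLU.
f(y_i) = \max(0, y_i) + a_i \min(0, y_i)
```
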
Posted Content

Improving neural networks by preventing co-adaptation of feature detectors

TL;DR: The authors randomly omit half of the feature detectors on each training case to prevent complex co-adaptations in which a feature detector is helpful only in the context of several other specific feature detectors.
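
Mechanically, this is a random binary mask applied to activations at training time. The sketch below uses the paper's dropout rate of one half, with the modern "inverted" scaling so the test-time pass needs no change (the 2012 paper instead halves the outgoing weights at test time):

```python
# Dropout sketch: randomly zero half of the activations during training.
# Inverted scaling (divide by the keep probability) is used here; the
# original paper equivalently halves weights at test time instead.
import numpy as np

def dropout(x, p_drop=0.5, train=True, rng=np.random.default_rng()):
    if not train:
        return x
    mask = rng.random(x.shape) >= p_drop    # keep with prob 1 - p_drop
    return x * mask / (1.0 - p_drop)

h = np.ones((2, 8))
print(dropout(h))   # roughly half the units zeroed, the rest scaled by 2
```
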
Proceedings ArticleDOI

DeepFace: Closing the Gap to Human-Level Performance in Face Verification

TL;DR: This work revisits both the alignment step and the representation step, employing explicit 3D face modeling to apply a piecewise affine transformation, and derives a face representation from a nine-layer deep neural network.