Open Access Proceedings Article

FaceNet2ExpNet: Regularizing a Deep Face Recognition Net for Expression Recognition

TL;DR
FaceNet2ExpNet proposes a new distribution function to model the high-level neurons of the expression network and achieves better results than state-of-the-art methods.
Abstract
Relatively small data sets available for expression recognition research make the training of deep networks very challenging. Although fine-tuning can partially alleviate the issue, the performance is still below acceptable levels, as the deep features probably contain redundant information from the pretrained domain. In this paper, we present FaceNet2ExpNet, a novel idea to train an expression recognition network based on static images. We first propose a new distribution function to model the high-level neurons of the expression network. Based on this, a two-stage training algorithm is carefully designed. In the pre-training stage, we train the convolutional layers of the expression net, regularized by the face net; in the refining stage, we append fully-connected layers to the pre-trained convolutional layers and train the whole network jointly. Visualization results show that the model trained with our method captures improved high-level expression semantics. Evaluations on four public expression databases, CK+, Oulu-CASIA, TFD, and SFEW, demonstrate that our method achieves better results than the state of the art.
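The two-stage procedure described in the abstract can be made concrete with a minimal PyTorch-style sketch. This is an illustrative reading, not the authors' exact implementation: it assumes the face net is frozen, the expression net's last convolutional feature map is pushed toward the face net's feature map with an L2 loss in stage one, and a simple fully-connected head is appended and trained jointly in stage two; the layer sizes and hyperparameters are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Stage 1: train only the convolutional trunk of the expression net.
# Its high-level feature map is regressed onto the (frozen) face net's
# feature map, which acts as the regularizer described in the abstract.
# Assumes both nets produce feature maps of the same shape.
def pretrain_conv_stage(exp_conv, face_net, loader, epochs=10, lr=1e-3):
    face_net.eval()                                  # face net stays fixed
    opt = torch.optim.SGD(exp_conv.parameters(), lr=lr, momentum=0.9)
    for _ in range(epochs):
        for images, _ in loader:
            with torch.no_grad():
                target_feat = face_net(images)       # high-level face features
            feat = exp_conv(images)
            loss = F.mse_loss(feat, target_feat)     # assumed L2 regression loss
            opt.zero_grad(); loss.backward(); opt.step()

# Stage 2: append fully-connected layers and train the whole network
# jointly with the usual cross-entropy expression loss.
def refine_whole_net(exp_conv, num_classes, loader, epochs=10, lr=1e-4):
    head = nn.Sequential(nn.Flatten(),
                         nn.Linear(512 * 7 * 7, 1024),  # assumed feature size
                         nn.ReLU(), nn.Dropout(0.5),
                         nn.Linear(1024, num_classes))
    model = nn.Sequential(exp_conv, head)
    opt = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    for _ in range(epochs):
        for images, labels in loader:
            loss = F.cross_entropy(model(images), labels)
            opt.zero_grad(); loss.backward(); opt.step()
    return model
```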


Citations
Proceedings Article

DPCNet: Dual Path Multi-Excitation Collaborative Network for Facial Expression Representation Learning in Videos

TL;DR: A Dual Path multi-excitation Collaborative Network (DPCNet) is proposed to learn the critical information for facial expression representation from fewer keyframes in videos, together with a multi-frame regularization loss that enforces semantically coherent representations of the multiple frames in the dual view.
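A hedged sketch of what a multi-frame regularization loss of this kind might look like: it penalizes disagreement among the per-frame embeddings of a clip so the representation stays semantically coherent. The mean-pooling and cosine-distance choices here are assumptions for illustration, not DPCNet's actual formulation.

```python
import torch
import torch.nn.functional as F

def multi_frame_regularization(frame_embeddings: torch.Tensor) -> torch.Tensor:
    """frame_embeddings: (num_frames, dim) embeddings of keyframes from one clip.
    Pulls every frame toward the clip's mean embedding (cosine distance),
    one plausible way to enforce semantic coherence across frames."""
    frame_embeddings = F.normalize(frame_embeddings, dim=1)
    clip_mean = F.normalize(frame_embeddings.mean(dim=0, keepdim=True), dim=1)
    return (1.0 - (frame_embeddings * clip_mean).sum(dim=1)).mean()
```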
Journal Article

Self-Difference Convolutional Neural Network for Facial Expression Recognition

TL;DR: Li et al. propose a self-difference convolutional network (SD-CNN) to address the intra-class variation issue in facial expression recognition, achieving state-of-the-art performance with accuracies of 99.7% on CK+ and 91.3% on Oulu-CASIA.
Journal Article

Identity-Aware Facial Expression Recognition Via Deep Metric Learning Based on Synthesized Images

TL;DR: In this article, a novel identity-aware method is proposed to solve the challenging person-dependent facial expression recognition task based on deep metric learning and facial image synthesis techniques, and a StarGAN is incorporated to synthesize facial images depicting different but complete basic emotions for each identity to augment the training data.
Journal Article

The current challenges of automatic recognition of facial expressions: A systematic review

TL;DR: A systematic review of the literature according to the guidelines of the PRISMA method highlights the strengths, limitations and main directions for future research in this field of automated facial expression recognition.
Proceedings Article

Apathy Classification by Exploiting Task Relatedness

TL;DR: In this article, a multi-task learning (MTL) framework for apathy classification based on facial analysis, entailing both emotion and facial movements, is proposed; it leverages information from auxiliary tasks (e.g., clinical scores) that may be closely or distantly related to the main task of apathy diagnosis.
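A generic sketch of the kind of multi-task setup the summary describes: a shared trunk over facial features, with a main apathy-classification head and an auxiliary regression head for a clinical score. The architecture, dimensions, and loss weighting are illustrative assumptions, not the paper's actual design.

```python
import torch.nn as nn

class MultiTaskApathyNet(nn.Module):
    """Shared trunk with a main classification head and an auxiliary
    regression head (e.g., a clinical score); purely illustrative."""
    def __init__(self, in_dim=512, hidden=256):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.apathy_head = nn.Linear(hidden, 2)   # apathy vs. no apathy
        self.aux_head = nn.Linear(hidden, 1)      # auxiliary clinical score

    def forward(self, x):
        h = self.trunk(x)
        return self.apathy_head(h), self.aux_head(h)

# Joint objective (sketch): main cross-entropy plus a down-weighted auxiliary
# MSE term, e.g. loss = F.cross_entropy(logits, y) + 0.3 * F.mse_loss(score, s)
```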
References
Proceedings Article

ImageNet Classification with Deep Convolutional Neural Networks

TL;DR: A deep convolutional neural network consisting of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax achieves state-of-the-art performance on large-scale image classification.
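A compact PyTorch sketch of an AlexNet-style architecture matching that description, five convolutional layers with interleaved max-pooling and three fully-connected layers ending in 1000 output classes; the exact channel counts follow the common torchvision variant and are assumptions here.

```python
import torch.nn as nn

# AlexNet-style network: five conv layers (some followed by max-pooling)
# and three fully-connected layers ending in a 1000-way classifier.
alexnet_like = nn.Sequential(
    nn.Conv2d(3, 64, kernel_size=11, stride=4, padding=2), nn.ReLU(),
    nn.MaxPool2d(3, stride=2),
    nn.Conv2d(64, 192, kernel_size=5, padding=2), nn.ReLU(),
    nn.MaxPool2d(3, stride=2),
    nn.Conv2d(192, 384, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(384, 256, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(256, 256, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(3, stride=2),
    nn.Flatten(),
    nn.Linear(256 * 6 * 6, 4096), nn.ReLU(), nn.Dropout(0.5),
    nn.Linear(4096, 4096), nn.ReLU(), nn.Dropout(0.5),
    nn.Linear(4096, 1000),   # 1000-way output; softmax is applied in the loss
)
```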
Proceedings Article

Very Deep Convolutional Networks for Large-Scale Image Recognition

TL;DR: This work investigates the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting using an architecture with very small convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers.
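A small sketch of the idea behind that result: depth is built by stacking many 3x3 convolutions in repeated blocks, VGG-style. The specific channel progression shown is an assumption, chosen to resemble a 16-weight-layer configuration.

```python
import torch.nn as nn

def vgg_block(in_ch, out_ch, n_convs):
    """Stack n_convs 3x3 convolutions (the 'very small filters') followed by
    2x2 max-pooling; network depth comes from repeating such blocks."""
    layers = []
    for i in range(n_convs):
        layers += [nn.Conv2d(in_ch if i == 0 else out_ch, out_ch, 3, padding=1),
                   nn.ReLU(inplace=True)]
    layers.append(nn.MaxPool2d(2, stride=2))
    return nn.Sequential(*layers)

# e.g. the convolutional trunk of a VGG-16-like configuration (13 conv layers):
trunk = nn.Sequential(vgg_block(3, 64, 2), vgg_block(64, 128, 2),
                      vgg_block(128, 256, 3), vgg_block(256, 512, 3),
                      vgg_block(512, 512, 3))
```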
Proceedings Article

ImageNet: A large-scale hierarchical image database

TL;DR: A new database called “ImageNet” is introduced, a large-scale ontology of images built upon the backbone of the WordNet structure, much larger in scale and diversity and much more accurate than the current image datasets.
Journal Article

Dropout: a simple way to prevent neural networks from overfitting

TL;DR: It is shown that dropout improves the performance of neural networks on supervised learning tasks in vision, speech recognition, document classification and computational biology, obtaining state-of-the-art results on many benchmark data sets.
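For reference, dropout as described above amounts to randomly zeroing units during training and disabling that noise at test time; a minimal PyTorch example, with the layer sizes chosen arbitrarily:

```python
import torch
import torch.nn as nn

# Dropout randomly zeroes each unit with probability p during training
# (rescaling the survivors), which discourages co-adaptation and overfitting.
layer = nn.Sequential(nn.Linear(256, 128), nn.ReLU(), nn.Dropout(p=0.5))
layer.train()                  # dropout active during training
y_train = layer(torch.randn(4, 256))
layer.eval()                   # dropout disabled at test time
y_test = layer(torch.randn(4, 256))
```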
Proceedings Article

Rich Feature Hierarchies for Accurate Object Detection and Semantic Segmentation

TL;DR: R-CNN combines CNNs with bottom-up region proposals to localize and segment objects; when labeled training data is scarce, supervised pre-training for an auxiliary task, followed by domain-specific fine-tuning, yields a significant performance boost.
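A generic sketch of the pre-train-then-fine-tune recipe that summary refers to, using a torchvision ResNet-18 as a stand-in backbone; the choice of backbone, the frozen trunk, the 21-way output, and the learning rate are illustrative assumptions, not R-CNN's actual setup (which also relies on region proposals, not shown here).

```python
import torch
import torch.nn as nn
from torchvision import models

# Start from an ImageNet-pretrained backbone (supervised pre-training on an
# auxiliary task), then swap the classifier and fine-tune on the target domain.
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
for p in model.parameters():
    p.requires_grad = False                    # optionally freeze the pretrained trunk
model.fc = nn.Linear(model.fc.in_features, 21) # e.g. 20 target classes + background
optimizer = torch.optim.SGD(model.fc.parameters(), lr=1e-3, momentum=0.9)
```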