Open Access · Proceedings Article · DOI

FaceNet2ExpNet: Regularizing a Deep Face Recognition Net for Expression Recognition

TLDR
FaceNet2ExpNet proposes a new distribution function to model the high-level neurons of an expression recognition network, regularized by a pretrained face recognition net, and achieves better results than state-of-the-art methods.
Abstract
The relatively small data sets available for expression recognition research make the training of deep networks very challenging. Although fine-tuning can partially alleviate the issue, the performance is still below acceptable levels, as the deep features probably contain redundant information from the pretrained domain. In this paper, we present FaceNet2ExpNet, a novel idea for training an expression recognition network on static images. We first propose a new distribution function to model the high-level neurons of the expression network. Based on this, a two-stage training algorithm is carefully designed. In the pre-training stage, we train the convolutional layers of the expression net, regularized by the face net; in the refining stage, we append fully-connected layers to the pre-trained convolutional layers and train the whole network jointly. Visualization results show that the model trained with our method captures improved high-level expression semantics. Evaluations on four public expression databases, CK+, Oulu-CASIA, TFD, and SFEW, demonstrate that our method achieves better results than the state of the art.
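The two-stage algorithm in the abstract can be sketched as a pair of loss functions. This is a minimal numpy approximation, not the paper's exact formulation: it assumes the face-net regularization in stage one reduces to an L2 feature-matching term against frozen face-net features, and that stage two uses standard cross-entropy on the appended fully-connected layers; the function names are illustrative.

```python
import numpy as np

def softmax(z):
    """Row-wise softmax with the usual max-subtraction for stability."""
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def pretrain_loss(f_exp, f_face):
    """Stage 1 (assumed form): pull the expression net's conv features
    toward the frozen face net's features, i.e. regularization by the
    face net via an L2 feature-matching term."""
    return 0.5 * np.mean(np.sum((f_exp - f_face) ** 2, axis=1))

def refine_loss(logits, labels):
    """Stage 2: cross-entropy over expression classes once the
    fully-connected layers are appended and the net is trained jointly."""
    p = softmax(logits)
    return -np.mean(np.log(p[np.arange(len(labels)), labels] + 1e-12))
```

In this sketch, stage one has no label term at all: the conv layers only learn to reproduce face-net semantics, and discriminative training is deferred entirely to stage two.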


Citations
Journal ArticleDOI

Emotion Recognition for Cognitive Edge Computing Using Deep Learning

TL;DR: Experimental results show that the proposed edge-computing-based emotion recognition system for facial images is energy efficient, has fewer learnable parameters, and achieves good recognition accuracy.
Journal ArticleDOI

Facial Expression Recognition with Neighborhood-Aware Edge Directional Pattern (NEDP)

TL;DR: A novel local descriptor named Neighborhood-Aware Edge Directional Pattern (NEDP) is proposed, which examines gradients at the target (center) pixel as well as at its neighboring pixels, exploring a wider neighborhood so that the feature remains consistent despite subtle distortion and noise in the local region.
Book ChapterDOI

Deep Multi-Task Learning to Recognise Subtle Facial Expressions of Mental States

TL;DR: This work addresses subtle expression recognition with convolutional neural networks (CNNs) by developing multi-task learning (MTL) methods that effectively leverage a side task, facial landmark detection, and achieves very competitive performance on the Oulu-Casia NIR&Vis and CK+ databases via transfer learning.
Proceedings ArticleDOI

Deep Disturbance-Disentangled Learning for Facial Expression Recognition

TL;DR: This paper proposes a novel Deep Disturbance-Disentangled Learning (DDL) method for FER that simultaneously and explicitly disentangles multiple disturbing factors by taking advantage of multi-task learning and adversarial transfer learning.
Journal ArticleDOI

HIC-net: A deep convolutional neural network model for classification of histopathological breast images

TL;DR: An effective pre-processing step is added for whole-slide images (WSIs) to improve the predictability of image parts and speed up training; HIC-net outperforms other state-of-the-art CNN algorithms with an AUC score of 97.7%.
References
Proceedings Article

ImageNet Classification with Deep Convolutional Neural Networks

TL;DR: State-of-the-art image classification performance is achieved by a deep convolutional neural network consisting of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax.
Proceedings Article

Very Deep Convolutional Networks for Large-Scale Image Recognition

TL;DR: This work investigates the effect of convolutional network depth on accuracy in the large-scale image recognition setting, using an architecture with very small convolution filters, and shows that a significant improvement over prior-art configurations can be achieved by pushing the depth to 16-19 weight layers.
Proceedings ArticleDOI

ImageNet: A large-scale hierarchical image database

TL;DR: A new database called “ImageNet” is introduced, a large-scale ontology of images built upon the backbone of the WordNet structure, much larger in scale and diversity and much more accurate than the current image datasets.
Journal Article

Dropout: a simple way to prevent neural networks from overfitting

TL;DR: It is shown that dropout improves the performance of neural networks on supervised learning tasks in vision, speech recognition, document classification and computational biology, obtaining state-of-the-art results on many benchmark data sets.
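The dropout technique summarized above is simple enough to sketch directly. This is the standard "inverted dropout" variant, in which surviving activations are rescaled at training time so that no scaling is needed at test time; the function name and seeding are illustrative choices, not from the cited paper.

```python
import numpy as np

def dropout(x, p=0.5, train=True, rng=None):
    """Inverted dropout: during training, zero each unit with probability p
    and scale survivors by 1/(1-p) so the expected activation is unchanged.
    At test time (train=False) the input passes through untouched."""
    if not train or p == 0.0:
        return x
    rng = rng or np.random.default_rng(0)
    mask = (rng.random(x.shape) >= p) / (1.0 - p)
    return x * mask
```

Because the rescaling happens during training, the same network can be used for inference with no change, which is why the technique adds essentially no test-time cost.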
Proceedings ArticleDOI

Rich Feature Hierarchies for Accurate Object Detection and Semantic Segmentation

TL;DR: R-CNN combines CNNs with bottom-up region proposals to localize and segment objects; when labeled training data is scarce, supervised pre-training on an auxiliary task, followed by domain-specific fine-tuning, yields a significant performance boost.