Journal ArticleDOI
Facial expression recognition from near-infrared videos
TLDR
Novel research on dynamic facial expression recognition using near-infrared (NIR) video sequences and LBP-TOP feature descriptors is presented; component-based facial features are used to combine geometric and appearance information, providing an effective way of representing facial expressions.
About:
This article is published in Image and Vision Computing. The article was published on 2011-08-01. It has received 586 citations till now. The article focuses on the topics: Three-dimensional face recognition & Face hallucination.
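The LBP-TOP descriptor mentioned in the TL;DR extends local binary patterns to video by computing LBP histograms over the XY, XT and YT planes of a spatio-temporal volume and concatenating them. A minimal numpy sketch, simplified to one central slice per plane (the full descriptor accumulates codes over all slices and typically over spatial blocks):

```python
import numpy as np

def lbp_codes(plane):
    """Basic 8-neighbour LBP codes for one 2-D plane (interior pixels only)."""
    c = plane[1:-1, 1:-1]
    # Offsets of the 8 neighbours, ordered clockwise from the top-left.
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offs):
        n = plane[1 + dy:plane.shape[0] - 1 + dy, 1 + dx:plane.shape[1] - 1 + dx]
        codes |= (n >= c).astype(np.uint8) << bit
    return codes

def lbp_top(volume):
    """Concatenated LBP histograms over XY, XT and YT planes of a T x H x W volume."""
    t, h, w = volume.shape
    planes = [volume[t // 2],          # XY plane at the middle frame
              volume[:, h // 2, :],    # XT plane at the middle row
              volume[:, :, w // 2]]    # YT plane at the middle column
    hists = [np.bincount(lbp_codes(p).ravel(), minlength=256) for p in planes]
    return np.concatenate(hists)       # 3 x 256 = 768-dimensional descriptor

rng = np.random.default_rng(0)
clip = rng.integers(0, 256, size=(16, 32, 32))   # synthetic NIR clip
desc = lbp_top(clip)
print(desc.shape)  # (768,)
```

The XT and YT histograms are what add motion information on top of the purely spatial XY texture, which is the core idea behind using LBP-TOP for dynamic expressions.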
Citations
Book ChapterDOI
Deep Learning-Based Assessment of Facial Periodic Affect in Work-Like Settings
Siyang Song,Yi-Xiang Luo,Vincenzo Ronca,Gianluca Borghini,Hesam Sagha,Vera Rick,Alexander Mertens,Hatice Gunes +7 more
TL;DR: In this article, the authors investigated the influence of spatial and temporal facial behaviours on human affect recognition in work-like settings and proposed a deep learning-based framework that leverages both spatio-temporal facial behavioural cues and background information for workers' affect recognition.
Book ChapterDOI
Multichannel CNN for Facial Expression Recognition
TL;DR: A multi-channel CNN architecture is proposed that improves facial expression recognition on frontal face images and is comparable with various state-of-the-art approaches for facial expression recognition.
Proceedings ArticleDOI
Attention Mechanism and Feature Correction Fusion Model for Facial Expression Recognition
Qihua Xu,Changlong Wang,Yi Hou +2 more
TL;DR: In this article, the authors proposed a facial expression recognition method that combines transfer learning, an attention mechanism, and feature fusion technology, achieving state-of-the-art recognition accuracy on the RAF-DB and FER2013+ datasets.
Journal ArticleDOI
AU-Guided Unsupervised Domain-Adaptive Facial Expression Recognition
TL;DR: AdaFER, as discussed by the authors, is an AU-guided unsupervised domain-adaptive FER framework that relieves the annotation bias between different FER datasets by leveraging an advanced model for AU detection on both a source and a target domain.
Book ChapterDOI
Data Augmentation Using GANs for 3D Applications
TL;DR: The subject of this chapter is to review and organize implementations of the generative adversarial networks approach on 3D and 2D imagery, examine the methods that were used, and survey the areas in which they were applied.
References
Journal ArticleDOI
Multiresolution gray-scale and rotation invariant texture classification with local binary patterns
TL;DR: A generalized gray-scale and rotation invariant operator representation is derived that allows for detecting the "uniform" patterns for any quantization of the angular space and for any spatial resolution, and a method is presented for combining multiple operators for multiresolution analysis.
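The "uniform" pattern detection this reference describes is easy to sketch: a circular bit pattern is uniform if it contains at most two 0/1 transitions, and the rotation-invariant riu2 mapping sends each uniform code to its number of set bits while collapsing all non-uniform codes into a single bin. A short sketch for P = 8 neighbours:

```python
def transitions(code, p=8):
    """Number of 0/1 transitions in the circular bit pattern of `code`."""
    bits = [(code >> i) & 1 for i in range(p)]
    return sum(bits[i] != bits[(i + 1) % p] for i in range(p))

def lbp_riu2(code, p=8):
    """Rotation-invariant 'uniform' mapping LBP riu2: uniform codes map to
    their number of set bits (0..p), all non-uniform codes to bin p + 1."""
    if transitions(code, p) <= 2:
        return bin(code).count("1")
    return p + 1

# 0b00011100 has 2 circular transitions -> uniform, 3 set bits.
print(lbp_riu2(0b00011100))   # 3
# 0b01010101 alternates -> 8 transitions -> non-uniform bin 9.
print(lbp_riu2(0b01010101))   # 9
```

This mapping collapses the 256 raw 8-bit codes into only 10 bins, which is what makes the operator both rotation invariant and compact enough for multiresolution histograms.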
Journal ArticleDOI
Robust Face Recognition via Sparse Representation
TL;DR: This work considers the problem of automatically recognizing human faces from frontal views with varying expression and illumination, as well as occlusion and disguise, and proposes a general classification algorithm for (image-based) object recognition based on a sparse representation computed by ℓ1-minimization.
Journal ArticleDOI
On combining classifiers
TL;DR: A common theoretical framework for combining classifiers which use distinct pattern representations is developed and it is shown that many existing schemes can be considered as special cases of compound classification where all the pattern representations are used jointly to make a decision.
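The sum and product rules from this combination framework can be illustrated with a toy sketch (the posterior values below are invented for illustration only):

```python
import numpy as np

# Per-class posterior estimates from two classifiers built on distinct
# pattern representations (hypothetical numbers).
p1 = np.array([0.80, 0.15, 0.05])
p2 = np.array([0.10, 0.60, 0.30])

sum_rule = p1 + p2        # benevolent: averages out estimation errors
product_rule = p1 * p2    # severe: one low estimate can veto a class
print(int(np.argmax(sum_rule)), int(np.argmax(product_rule)))  # 0 1
```

The two rules disagree here: the sum rule favours class 0 on the strength of the first classifier, while the product rule lets the second classifier's low estimate for class 0 veto it, which is exactly the sensitivity difference the framework analyses.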
Journal ArticleDOI
From few to many: illumination cone models for face recognition under variable lighting and pose
TL;DR: A generative appearance-based method for recognizing human faces under variation in lighting and viewpoint that exploits the fact that the set of images of an object in fixed pose, but under all possible illumination conditions, is a convex cone in the space of images.