OpenFace: An open source facial behavior analysis toolkit
Citations
OpenFace 2.0: Facial Behavior Analysis Toolkit
Multimodal Language Analysis in the Wild: CMU-MOSEI Dataset and Interpretable Dynamic Fusion Graph
Tensor Fusion Network for Multimodal Sentiment Analysis
AVEC 2016: Depression, Mood, and Emotion Recognition Workshop and Challenge
Why and how to use virtual reality to study human social interaction: The challenges of exploring a new research landscape.
References
Object Detection with Discriminatively Trained Part-Based Models
The Extended Cohn-Kanade Dataset (CK+): A complete dataset for action unit and emotion-specified expression
Dlib-ml: A Machine Learning Toolkit
One Millisecond Face Alignment with an Ensemble of Regression Trees
A Survey of Affect Recognition Methods: Audio, Visual, and Spontaneous Expressions
Frequently Asked Questions (8)
Q2. What are the future works mentioned in the paper "OpenFace: An open source facial behavior analysis toolkit"?
Development of the tool will continue: it will incorporate the newest and most reliable approaches to the problem at hand while remaining a transparent open source tool and retaining its real-time capability. The authors hope the tool will encourage other researchers in the field to share their code.
Q3. How many blocks of facial expressions are used?
The authors use blocks of 2 × 2 cells, each cell covering 8 × 8 pixels, leading to a 12 × 12 grid of blocks of 31-dimensional histograms (a 4464-dimensional vector describing the face).
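The quoted descriptor size follows directly from the block-grid arithmetic; a minimal check:

```python
# Arithmetic check of the descriptor size stated above: a 12x12 grid of
# blocks, each described by a 31-dimensional histogram, yields the
# 4464-dimensional face descriptor.
blocks_per_side = 12   # 12x12 blocks over the aligned face
hist_dims = 31         # dimensions of each block's histogram

descriptor_dims = blocks_per_side ** 2 * hist_dims
print(descriptor_dims)  # 4464
```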
Q4. What is the way to extract facial appearance features?
To extract facial appearance features, the authors apply a similarity transform that maps the currently detected landmarks onto a frontal representation of landmarks from a neutral expression.
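A similarity transform (scale, rotation, translation) between two landmark sets can be estimated in closed form. The sketch below is a generic least-squares (Umeyama-style) estimator, not OpenFace's own implementation; the resulting 2 × 3 matrix could then be used to warp the face image (e.g. with an affine warp) onto the frontal reference:

```python
import numpy as np

def similarity_transform(src, dst):
    """Estimate a 2-D similarity transform mapping src landmarks (n x 2)
    onto dst landmarks (n x 2) in the least-squares sense.
    Returns a 2x3 matrix M such that dst ~= src @ M[:, :2].T + M[:, 2]."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    src_mean, dst_mean = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - src_mean, dst - dst_mean

    # Cross-covariance between the centred point sets.
    cov = dst_c.T @ src_c / len(src)
    U, S, Vt = np.linalg.svd(cov)

    # Guard against reflections: force det(R) = +1.
    d = np.sign(np.linalg.det(U @ Vt))
    D = np.diag([1.0, d])
    R = U @ D @ Vt

    # Optimal isotropic scale, then translation.
    scale = np.trace(np.diag(S) @ D) / src_c.var(axis=0).sum()
    t = dst_mean - scale * R @ src_mean
    return np.hstack([scale * R, t[:, None]])
```

Because the transform has only four degrees of freedom, it normalizes pose and scale without distorting expression-related shape differences.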
Q5. What are the use cases of saving facial behaviors using OpenFace?
Example use cases of saving facial behaviors with OpenFace include using them as features for emotion prediction, medical condition analysis, and social signal analysis systems.
Q6. How did the authors measure the performance of OpenFace on a head pose estimation task?
To measure OpenFace's performance on head pose estimation, the authors used three publicly available datasets with ground-truth head pose data: BU [15], Biwi [21], and ICT-3DHP [9].
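Head pose benchmarks of this kind are typically scored by the mean absolute error of the predicted Euler angles against the ground truth. The metric below is a generic sketch of that evaluation, not the paper's exact scoring code:

```python
import numpy as np

def head_pose_mae(ground_truth, predicted):
    """Mean absolute error per Euler angle (e.g. yaw, pitch, roll in
    degrees) over a sequence of head-pose estimates.

    Both inputs are (n_frames x 3) arrays; returns one MAE per angle."""
    gt = np.asarray(ground_truth, float)
    pred = np.asarray(predicted, float)
    return np.abs(gt - pred).mean(axis=0)
```

Reporting a separate error per angle is useful because trackers often degrade on yaw (profile views) before pitch or roll.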
Q7. How many dimensions does the PCA model have?
Applying PCA to images (sub-sampled from peak and neutral expressions) and keeping 95% of the explained variability leads to a reduced basis of 1391 dimensions.
Q8. Why is the recognition of certain AUs not as reliable as others?
The recognition of certain AUs is not as reliable as that of others, partly due to a lack of representation in the training data and partly due to the inherent difficulty of the problem.