
Xavier Binefa

Researcher at Pompeu Fabra University

Publications - 12
Citations - 114

Xavier Binefa is an academic researcher from Pompeu Fabra University. The author has contributed to research in the topics of Affective computing and Deep learning. The author has an h-index of 6 and has co-authored 12 publications receiving 77 citations.

Papers
Posted Content

Learning Disentangled Representations with Reference-Based Variational Autoencoders

TL;DR: This paper proposes reference-based variational autoencoders, which learn a representation in which a set of target factors of variation is disentangled from the others; the only supervision comes from an auxiliary "reference set" containing images in which the factors of interest are constant.
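A minimal sketch of the idea, assuming a PyTorch-style implementation (this is not the authors' code; layer sizes and the way reference images are handled are illustrative assumptions): the latent code is split into target factors z_t and common factors z_c, and for reference images, where the target factors are constant, z_t is replaced by a fixed value.

```python
# Illustrative sketch only: reference-based disentanglement in a toy VAE.
import torch
import torch.nn as nn

class RefVAE(nn.Module):
    def __init__(self, x_dim=784, h_dim=256, zt_dim=8, zc_dim=32):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim, h_dim), nn.ReLU())
        self.mu = nn.Linear(h_dim, zt_dim + zc_dim)
        self.logvar = nn.Linear(h_dim, zt_dim + zc_dim)
        self.dec = nn.Sequential(nn.Linear(zt_dim + zc_dim, h_dim), nn.ReLU(),
                                 nn.Linear(h_dim, x_dim), nn.Sigmoid())
        self.zt_dim = zt_dim

    def forward(self, x, is_reference):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()
        if is_reference:
            # Reference images share constant target factors, so z_t is fixed (zeroed here).
            z = torch.cat([torch.zeros_like(z[:, :self.zt_dim]),
                           z[:, self.zt_dim:]], dim=1)
        x_hat = self.dec(z)
        kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=1).mean()
        rec = nn.functional.binary_cross_entropy(x_hat, x, reduction='mean')
        return rec + kl

loss = RefVAE()(torch.rand(4, 784), is_reference=False)  # toy forward pass
```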
Proceedings Article

End-to-end Facial and Physiological Model for Affective Computing and Applications

TL;DR: This paper proposes a multi-modal emotion recognition model, based on deep learning techniques, that combines peripheral physiological signals and facial expressions for anxiety-therapy assessment; the model is trained and evaluated on the AMIGOS dataset, reporting valence, arousal, and emotion-state classification results.
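A hedged sketch of the kind of multi-modal fusion the summary describes (not the paper's exact architecture; input sizes, layer choices, and head definitions are assumptions): a CNN branch for face crops and an MLP branch for peripheral physiological features, concatenated before valence/arousal and emotion-state heads.

```python
# Illustrative sketch only: fuse a facial-expression branch with a physiological branch.
import torch
import torch.nn as nn

class FusionEmotionNet(nn.Module):
    def __init__(self, physio_dim=40, n_classes=4):
        super().__init__()
        # Facial branch: small CNN over 64x64 grayscale face crops (assumed input size).
        self.face = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        # Physiological branch: MLP over pooled peripheral-signal features.
        self.physio = nn.Sequential(nn.Linear(physio_dim, 64), nn.ReLU())
        # Joint heads for valence/arousal regression and discrete emotion state.
        self.valence_arousal = nn.Linear(32 + 64, 2)
        self.emotion = nn.Linear(32 + 64, n_classes)

    def forward(self, face_img, physio_feat):
        fused = torch.cat([self.face(face_img), self.physio(physio_feat)], dim=1)
        return self.valence_arousal(fused), self.emotion(fused)

# Example forward pass with random tensors standing in for AMIGOS-style inputs.
va, emo = FusionEmotionNet()(torch.randn(2, 1, 64, 64), torch.randn(2, 40))
```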
Proceedings Article

Fully End-to-End Composite Recurrent Convolution Network for Deformable Facial Tracking In The Wild

TL;DR: This paper presents a fully end-to-end facial tracking model based on current state-of-the-art deep architectures that can be effectively trained from the available annotated facial landmark datasets, and shows that it produces results comparable to state-of-the-art trackers.
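A rough sketch of a composite recurrent-convolutional tracker in this spirit (illustrative only; the paper's actual layers, landmark count, and temporal module are not reproduced here): a CNN encodes each frame and a GRU carries temporal state across the sequence while regressing landmark coordinates per frame.

```python
# Illustrative sketch only: per-frame CNN features fed to a GRU for landmark regression.
import torch
import torch.nn as nn

class RecurrentLandmarkTracker(nn.Module):
    def __init__(self, n_landmarks=68, hidden=128):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten())        # -> 32 * 4 * 4 = 512 features
        self.gru = nn.GRU(512, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_landmarks * 2)

    def forward(self, frames):                            # frames: (B, T, 3, H, W)
        b, t = frames.shape[:2]
        feats = self.cnn(frames.flatten(0, 1)).view(b, t, -1)
        out, _ = self.gru(feats)
        return self.head(out).view(b, t, -1, 2)           # (B, T, n_landmarks, 2)

preds = RecurrentLandmarkTracker()(torch.randn(1, 5, 3, 112, 112))
```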
Proceedings Article

Heatmap-Guided Balanced Deep Convolution Networks for Family Classification in the Wild

TL;DR: The Deep Family Classifier (DFC), a deep learning model for family classification in the wild, combines two sub-networks: an internal Image Feature Enhancer, which removes image noise and provides an additional facial heatmap layer, and a Family Class Estimator, trained with strong regularizers and a compound loss.
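A sketch of a two-sub-network design in the spirit of that summary (layer sizes, the way the heatmap channel is appended, and the loss weights are all assumptions, not the published model): an enhancer denoises the face image and emits a heatmap channel, and a classifier consumes the enhanced image plus heatmap under a compound loss.

```python
# Illustrative sketch only: enhancer + classifier sub-networks with a compound loss.
import torch
import torch.nn as nn

class ImageFeatureEnhancer(nn.Module):
    """Denoises the face image and predicts one extra facial-heatmap channel."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 4, 3, padding=1))        # 3 denoised RGB channels + 1 heatmap

    def forward(self, x):
        out = self.body(x)
        return out[:, :3], torch.sigmoid(out[:, 3:])

class FamilyClassEstimator(nn.Module):
    """Classifies family identity from the enhanced image plus heatmap layer."""
    def __init__(self, n_families=100):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(4, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Dropout(0.5),                        # strong regularization, per the summary
            nn.Linear(32, n_families))

    def forward(self, img, heatmap):
        return self.net(torch.cat([img, heatmap], dim=1))

def compound_loss(logits, labels, denoised, clean, w=0.5):
    # Weighted sum of classification and image-enhancement terms (weight w is assumed).
    return nn.functional.cross_entropy(logits, labels) + \
           w * nn.functional.mse_loss(denoised, clean)
```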
Posted Content

End-to-end facial and physiological model for Affective Computing and applications

TL;DR: A multi-modal emotion recognition model based on deep learning techniques, combining peripheral physiological signals and facial expressions, is proposed; an improvement to the proposed models is then presented by introducing latent features extracted from the authors' internal Bio Auto-Encoder (BAE).
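A minimal sketch of the latent-feature idea, under assumed names and dimensions (this does not reproduce the authors' BAE): an auto-encoder over physiological signals is trained to reconstruct them, and its bottleneck code is reused as an extra input feature for the emotion model.

```python
# Illustrative sketch only: auto-encoder latent features from physiological signals.
import torch
import torch.nn as nn

class BioAutoEncoder(nn.Module):
    def __init__(self, signal_dim=40, latent_dim=16):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(signal_dim, 32), nn.ReLU(),
                                     nn.Linear(32, latent_dim))
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(),
                                     nn.Linear(32, signal_dim))

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z   # reconstruction for training, z as an extra feature

recon, latent = BioAutoEncoder()(torch.randn(2, 40))  # latent fed to the emotion model
```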