
Aitor Álvarez

Researcher at University of the Basque Country

Publications: 8
Citations: 115

Aitor Álvarez is an academic researcher from the University of the Basque Country. The author has contributed to research in topics: Feature (machine learning) and Gesture recognition. The author has an h-index of 7 and has co-authored 8 publications receiving 108 citations.

Papers
Journal Article (DOI)

Classifier Subset Selection for the Stacked Generalization Method Applied to Emotion Recognition in Speech.

TL;DR: A new supervised classification paradigm, called classifier subset selection for stacked generalization (CSS stacking), is presented for speech emotion recognition: an estimation of distribution algorithm (EDA) is integrated in the first layer to select the optimal subset of the standard base classifiers.
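
The following is a minimal sketch of the CSS stacking idea rather than the paper's implementation: out-of-fold predictions from the first-layer classifiers form the meta-features, and a search over classifier subsets picks the combination that maximises the meta-classifier's cross-validated accuracy. The exhaustive subset enumeration below stands in for the EDA, and the data and base classifiers are placeholders chosen only for the example.

# Hedged sketch of classifier subset selection for stacked generalization
# (CSS stacking). The paper searches the subset space with an estimation of
# distribution algorithm (EDA); here exhaustive enumeration stands in for the
# EDA, and the data and base classifiers are illustrative placeholders.
from itertools import combinations

import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import cross_val_predict, cross_val_score, train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=400, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

base = [LogisticRegression(max_iter=1000), DecisionTreeClassifier(random_state=0),
        GaussianNB(), KNeighborsClassifier()]

# First layer: out-of-fold predictions of each base classifier become the
# meta-features; test-set meta-features come from models refit on all of X_tr.
meta_tr = np.column_stack([cross_val_predict(c, X_tr, y_tr, cv=5) for c in base])
meta_te = np.column_stack([c.fit(X_tr, y_tr).predict(X_te) for c in base])

# Second layer: keep the subset of base classifiers whose meta-features give
# the best cross-validated accuracy for the meta-classifier.
best_subset, best_score = None, -1.0
for k in range(1, len(base) + 1):
    for subset in combinations(range(len(base)), k):
        idx = list(subset)
        score = cross_val_score(LogisticRegression(max_iter=1000),
                                meta_tr[:, idx], y_tr, cv=5).mean()
        if score > best_score:
            best_subset, best_score = idx, score

meta = LogisticRegression(max_iter=1000).fit(meta_tr[:, best_subset], y_tr)
test_acc = accuracy_score(y_te, meta.predict(meta_te[:, best_subset]))
print("selected base classifiers:", best_subset, "test accuracy:", round(test_acc, 3))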
Journal Article

Feature subset selection based on evolutionary algorithms for automatic emotion recognition in spoken Spanish and standard Basque language

TL;DR: A study analyzing the validity of different Machine Learning techniques for automatic speech emotion recognition, using a bilingual affective database and evolutionary-algorithm-based techniques to select speech feature subsets that optimize the automatic emotion recognition success rate.
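
As a rough illustration of this approach (not the paper's exact algorithm), the sketch below runs a toy genetic algorithm over binary feature masks, scoring each mask by the cross-validated accuracy of a k-NN classifier (an instance-based learner) on the selected features; the synthetic data, population size and mutation rate are arbitrary placeholders.

# Hedged sketch of evolutionary feature subset selection. Individuals are
# binary feature masks; fitness is the cross-validated accuracy of a k-NN
# classifier on the selected features. Real inputs would be acoustic features
# extracted from the affective speech database; here the data is synthetic.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=300, n_features=30, n_informative=8,
                           random_state=0)

def fitness(mask):
    # Score a feature mask by k-NN cross-validated accuracy on those features.
    if not mask.any():
        return 0.0
    clf = KNeighborsClassifier(n_neighbors=5)
    return cross_val_score(clf, X[:, mask], y, cv=5).mean()

# Start from a small population of random feature masks.
pop = rng.random((20, X.shape[1])) < 0.5
for generation in range(15):
    scores = np.array([fitness(m) for m in pop])
    order = np.argsort(scores)[::-1]
    parents = pop[order[:10]]                     # keep the best half
    # Uniform crossover plus bit-flip mutation to refill the population.
    children = []
    for _ in range(10):
        a, b = parents[rng.integers(10)], parents[rng.integers(10)]
        child = np.where(rng.random(X.shape[1]) < 0.5, a, b)
        child ^= rng.random(X.shape[1]) < 0.02    # flip roughly 2% of bits
        children.append(child)
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(m) for m in pop])]
print("selected features:", np.flatnonzero(best),
      "cv accuracy:", round(fitness(best), 3))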
Journal Article (DOI)

Feature selection for speech emotion recognition in Spanish and Basque: on the use of machine learning to improve human-computer interaction.

TL;DR: An attempt to select the most significant features for emotion recognition in spoken Basque and Spanish using different feature selection methods; the results show that an instance-based learning algorithm combined with feature subset selection based on evolutionary algorithms is the best Machine Learning paradigm for automatic emotion recognition.
Book Chapter (DOI)

Feature subset selection based on evolutionary algorithms for automatic emotion recognition in spoken Spanish and standard Basque language

TL;DR: In this article, different speech parameters were calculated for each audio recording and several Machine Learning techniques were applied to evaluate their usefulness for speech emotion recognition; in this particular case, techniques based on evolutionary algorithms were used to select speech feature subsets that optimize the automatic emotion recognition success rate.
Proceedings Article

SAVAS: Collecting, Annotating and Sharing Audiovisual Language Resources for Automatic Subtitling

TL;DR: This paper describes the data collection, annotation and sharing activities carried out within the FP7 EU-funded SAVAS project, which aims to collect, share and reuse audiovisual language resources from broadcasters and subtitling companies in order to develop large vocabulary continuous speech recognisers for specific domains and new languages.