Emmanuel Dellandréa
Researcher at École centrale de Lyon
Publications - 106
Citations - 2332
Emmanuel Dellandréa is an academic researcher from École centrale de Lyon. He has contributed to research on topics including object detection and audio signal processing. He has an h-index of 25 and has co-authored 103 publications receiving 1864 citations. Previous affiliations of Emmanuel Dellandréa include University of Lyon and François Rabelais University.
Papers
Proceedings ArticleDOI
What is the best segment duration for music mood analysis?
TL;DR: Four versions of music datasets with clip durations ranging from 4 to 32 seconds are tested in this paper, and better classification rates are obtained with music segments of 8 and 16 seconds.
Journal ArticleDOI
Automatic 2.5-D Facial Landmarking and Emotion Annotation for Social Interaction Assistance
TL;DR: This paper proposes an automatic emotion annotation solution for 2.5-D facial data collected from RGB-D cameras, consisting of a facial landmarking method and a facial expression recognition (FER) method, which has achieved satisfactory results on three publicly accessible facial databases.
Proceedings ArticleDOI
Combining Geometric, Textual and Visual Features for Predicting Prepositions in Image Descriptions
Arnau Ramisa, Josiah Wang, Ying Lu, Emmanuel Dellandréa, Francesc Moreno-Noguer, Robert Gaizauskas +5 more
TL;DR: This work investigates the roles that geometric, textual and visual features play in the task of predicting a preposition that links two visual entities depicted in an image, and finds clear evidence that all three feature types contribute to the prediction task.
The MediaEval 2016 Emotional Impact of Movies Task
TL;DR: The MediaEval 2016 Emotional Impact of Movies Task focused on predicting the emotional impact that video content will have on viewers, in terms of valence, arousal and fear.
Proceedings ArticleDOI
Recognition of emotions in speech by a hierarchical approach
TL;DR: This work proposes new harmonic and Zipf-based features for better speech emotion characterization in the valence dimension, along with a multistage classification scheme driven by a dimensional emotion model for better emotional class discrimination.