A Multimodal Database for Affect Recognition and Implicit Tagging
Frequently Asked Questions (11)
Q2. What are the three modalities used to recognize emotions from participants’ responses?
Three modalities were used to recognize emotions from participants’ responses: peripheral nervous system physiological signals, central nervous system physiological signals, and information captured by an eye gaze tracker.
Q3. What is the way to combine the decisions of classifiers?
If the classifiers provide confidence measures on their decisions, their decisions can be combined using a summation rule.
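The summation rule can be sketched as follows: each classifier outputs a confidence score per class, the scores are summed element-wise across classifiers, and the class with the highest total wins. The function name and the two example classifiers below are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def sum_rule_fusion(confidences):
    """Combine per-classifier confidence vectors by summing them.

    confidences: list of 1-D arrays, one per classifier, where entry j
    is that classifier's confidence for class j.
    Returns the index of the winning class.
    """
    total = np.sum(confidences, axis=0)  # element-wise sum over classifiers
    return int(np.argmax(total))

# Two hypothetical classifiers scoring three emotion classes:
face = np.array([0.2, 0.5, 0.3])   # e.g. a facial-expression classifier
gaze = np.array([0.4, 0.1, 0.5])   # e.g. an eye-gaze classifier
print(sum_rule_fusion([face, gaze]))  # class 2 wins (0.3 + 0.5 = 0.8)
```

The sum rule has the practical advantage that it needs no trained fusion stage: any number of modalities can be added as long as their confidence scores are on comparable scales.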
Q4. What format was used to record the physiological responses of each participant?
The physiological responses to each stimulus were recorded in a separate file in Biosemi data format (BDF), an extension of the European data format (EDF) that is easily readable on different platforms.
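BDF keeps EDF's 256-byte ASCII fixed header; only the version field and the 24-bit sample width differ, which is what makes the files readable across platforms. The following is a minimal sketch of parsing that fixed header, run here on a synthetic header built in-line; the field values (38 signals, 120 one-second records) are made up for illustration and do not describe the database's actual files.

```python
def parse_bdf_fixed_header(hdr: bytes):
    """Parse the 256-byte fixed header shared by EDF and BDF files.

    BDF (Biosemi's 24-bit extension of EDF) keeps EDF's ASCII header
    layout; only the version field and the sample width differ.
    """
    assert len(hdr) >= 256
    return {
        "is_bdf": hdr[0] == 0xFF and hdr[1:8] == b"BIOSEMI",
        "start_date": hdr[168:176].decode("ascii").strip(),
        "start_time": hdr[176:184].decode("ascii").strip(),
        "n_records": int(hdr[236:244]),
        "record_duration_s": float(hdr[244:252]),
        "n_signals": int(hdr[252:256]),
    }

# A synthetic header for demonstration (field widths per the EDF layout):
hdr = (b"\xffBIOSEMI" + b" " * 160                 # version + patient/recording ids
       + b"01.01.10" + b"12.00.00"                 # start date and time
       + b"9984    " + b"24BIT".ljust(44)          # header size + reserved
       + b"120     " + b"1       " + b"38  ")      # records, duration, signals
print(parse_bdf_fixed_header(hdr))
```

In practice one would use an established reader library rather than hand-parsing, but the header layout above is why the same tooling can open both EDF and BDF recordings.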
Q5. What are the disadvantages of label-based representations?
Although the most straightforward way to represent an emotion is to use discrete labels such as fear or joy, label-based representations have some disadvantages.
Q6. What is the main reason for the lack of multimodal databases of recordings dedicated to human emotional experiences?
The need for interdisciplinary knowledge as well as technological solutions to combine measurement data from a diversity of sensor equipment is probably the main reason for the current lack of multimodal databases of recordings dedicated to human emotional experiences.
Q7. What is the first database which has five modalities precisely synchronized?
To the authors’ knowledge, MAHNOB-HCI is the first database with five precisely synchronized modalities, namely, eye gaze data, video, audio, and peripheral and central nervous system physiological signals.
Q8. How could the video be related to the audio?
By recording the external camera trigger pulse signal (“b” in Fig. 3) in a parallel audio track (see the fifth signal in Fig. 4), each recorded video frame could be related to the recorded audio with an uncertainty below 25 μs.
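The frame-to-audio alignment described above amounts to locating the rising edge of each recorded trigger pulse in the audio track: every onset marks the capture time of one video frame, to within one audio sample. The sketch below is an illustrative assumption (the function name and the synthetic 25 fps trigger signal are invented), not the database's actual tooling.

```python
import numpy as np

def pulse_onsets(audio, fs, threshold=0.5):
    """Return the times (in seconds) where the trigger track crosses the
    threshold upward -- one onset per recorded video frame."""
    above = audio > threshold
    onsets = np.flatnonzero(above[1:] & ~above[:-1]) + 1  # rising edges
    return onsets / fs

# Synthetic trigger track: one second of 44.1 kHz audio with a short
# pulse every 1/25 s, mimicking a 25 fps camera trigger.
fs = 44100
t = np.arange(fs)
audio = (((t - 100) % (fs // 25)) < 50).astype(float)  # 50-sample pulses
times = pulse_onsets(audio, fs)
print(len(times))  # 25 frames detected in one second
```

Because an audio sample at 44.1 kHz lasts about 23 μs, locating a pulse to the nearest sample bounds the frame-to-audio uncertainty at roughly that scale.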
Q9. What modalities were used to predict the correctness of displayed tags?
Two modalities were used to predict the correctness of displayed tags, namely, facial expression (captured by a camera) and the eye gaze location on the screen (captured by an eye gaze tracker).
Q10. How many files were containing physiological signals?
All the files containing physiological signals include the signals recorded 30 s before the start and 30 s after the end of the stimulus.
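Since the padding duration is fixed at 30 s, recovering the stimulus-only portion of a recording is a simple slice once the sampling rate is known. The helper below is a minimal sketch under assumed values (the 256 Hz rate and 90 s stimulus are hypothetical, not the database's actual parameters).

```python
import numpy as np

def trim_padding(signal, fs, pad_s=30.0):
    """Drop the padding recorded before and after the stimulus,
    keeping only the samples that coincide with the stimulus itself."""
    pad = int(round(pad_s * fs))
    return signal[pad:len(signal) - pad]

# A hypothetical 256 Hz recording of a 90 s stimulus with 30 s padding:
fs = 256
recording = np.zeros((30 + 90 + 30) * fs)
stimulus = trim_padding(recording, fs)
print(len(stimulus) / fs)  # 90.0 seconds of stimulus-aligned signal
```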
Q11. How was the eye gaze tracker synchronized with the CPU cycle counter of its dedicated capture?
The eye gaze tracker (“g” in Fig. 3) was synchronized with the CPU cycle counter of its dedicated capture PC (“f”) with an accuracy of approximately one millisecond.