
Short-term emotion assessment in a recall paradigm



Article
Reference
Short-term emotion assessment in a recall paradigm
CHANEL, Guillaume, et al.
CHANEL, Guillaume, et al. Short-term emotion assessment in a recall paradigm. International
Journal of Human-Computer Studies, 2009, vol. 67, no. 8, p. 607-627.
DOI : 10.1016/j.ijhcs.2009.03.005
Available at:
http://archive-ouverte.unige.ch/unige:47415
Disclaimer: layout of this document may differ from the published version.

www.elsevier.com/locate/ijhcs
Author’s Accepted Manuscript
Short-term emotion assessment in a recall paradigm
Guillaume Chanel, Joep Kierkels, Mohammad Soleymani,
Thierry Pun
PII: S1071-5819(09)00043-3
DOI: doi:10.1016/j.ijhcs.2009.03.005
Reference: YIJHC1519
To appear in: Int. J. Human–Computer Studies
Received date: 4 July 2008
Revised date: 19 March 2009
Accepted date: 27 March 2009
Cite this article as: Guillaume Chanel, Joep Kierkels, Mohammad Soleymani and Thierry
Pun, Short-term emotion assessment in a recall paradigm, Int. J. Human–Computer Studies
(2009), doi:10.1016/j.ijhcs.2009.03.005
This is a PDF file of an unedited manuscript that has been accepted for publication. As
a service to our customers we are providing this early version of the manuscript. The
manuscript will undergo copyediting, typesetting, and review of the resulting galley proof
before it is published in its final citable form. Please note that during the production process
errors may be discovered which could affect the content, and all legal disclaimers that apply
to the journal pertain.

Accepted manuscript
Short-term emotion assessment in a recall
paradigm
Guillaume Chanel*, Joep Kierkels, Mohammad Soleymani, Thierry Pun
Computer Science Department – University of Geneva
Route de Drize 7
CH - 1227 Carouge, Switzerland
Abstract
The work presented in this paper aims at assessing human emotions using
peripheral as well as electroencephalographic (EEG) physiological signals on short-
time periods. Three specific areas of the valence-arousal emotional space are
defined, corresponding to negatively excited, positively excited, and calm-neutral
states. An acquisition protocol based on the recall of past emotional life episodes has
been designed to acquire data from both peripheral and EEG signals. Pattern
classification is used to distinguish between the three areas of the valence-arousal
space. The performance of several classifiers has been evaluated on ten participants
and different feature sets: peripheral features, EEG time-frequency features, EEG
pairwise mutual information (MI) features. Comparison of results obtained using either
peripheral or EEG signals confirms the interest of using EEGs to assess valence and
arousal in emotion recall conditions. The obtained accuracy for the three emotional
classes is 63% using EEG time-frequency features, which is better than the results
obtained in previous studies using EEG and similar classes. Fusion of the different
feature sets at the decision level using a summation rule was also shown to improve
accuracy to 70%. Furthermore, the rejection of non-confident samples finally led to a
classification accuracy of 80% for the three classes.
Keywords: emotion assessment and classification, affective computing, signal
processing.
1 Introduction
Emotions are part of any natural communication between humans, generally as non-
verbal cues. Until recently, affective communication was not implemented in
human-computer interfaces. Nowadays researchers in human-computer interaction (HCI)
have recognized the importance of emotional aspects and started to include them in
the design of new interfaces using one of two possible approaches, either as
evaluation indicators or as components to be inserted in the human-computer loop.
The first approach consists of using emotion assessment as a tool for evaluating
attractiveness, appreciation and user experience of software (Hazlett and Benedek,
2007). Such an assessment can be done by using different self-report methods
(Isomursu et al., 2007) or by inferring emotional states from other measures such as
physiological signals (Mandryk et al., 2006; Picard and Daily, 2005).
The second approach aims at bringing the machine closer to the human by including
emotional content in the communication and is known as affective computing (Picard,
1997). According to Picard, affective computing “proposes to give computers the
ability to recognize [and] express […] emotions”. Synthetic expression of emotions
* Corresponding author. Tel.: (+41) 22 379 01 83; fax: (+41) 22 379 0250.
E-mail address: guillaume.chanel@unige.ch.

can be achieved by enabling avatars or simpler agents to have facial expressions,
different tones of voice, and empathic behaviours (Brave et al., 2005; Xu et al.,
2006). Detection of human emotions can be realized by monitoring facial expressions
(Cohen et al., 2003; Cowie et al., 2001; Fasel and Luettin, 2003), speech (Cowie,
2000; Ververidis and Kotropoulos, 2006), postures (Coulson, 2004; Kapoor et al.,
2007) and physiological signals (see Section 2). Fusion of these different modalities
to improve the recognition accuracy has also been studied (Kapoor et al., 2007; Kim
et al., 2005; Pantic and Rothkrantz, 2003; Zeng et al., 2008). The present work
focuses on the emotion assessment aspect, especially from different physiological
signals.
Fig. 1 presents a framework describing how emotion assessment could be integrated
in human-computer interfaces. As proposed by Norman (1990), the interaction with a
machine, from the point of view of the user, can be decomposed into
execution/evaluation cycles. After identifying his/her goals, the user starts an execution
stage: it consists of formulating intentions, specifying the necessary sequence of actions
and executing those actions. Next, the computer executes the given commands and
outputs results through the available modalities. The second stage is the evaluation,
which is realized by perceiving computer outputs, interpreting them and evaluating
the outcome (i.e. are the goals satisfied?).
<Figure 1>
According to the cognitive theory of emotions, emotions result from a cognitive
process called appraisal that evaluates a stimulus according to several criteria such
as goal relevance and consequences of the event (Cornelius, 1996; Sander et al.,
2005; Scherer, 2001). For this reason, an emotional evaluation step, corresponding
to the appraisal process, was added in Fig. 1 at the evaluation stage. Elicitation of
emotions is known to be related to changes in several components of the organism
such as physiological, motor and behavioral components (Scherer, 2001). It is thus
possible to consider those changes as emotional cues that can be used to
automatically detect the elicited emotion after being recorded by the adequate
sensors. For this purpose it is necessary to extract features of interest from the
recorded signals and perform classification using models previously trained on data
labelled with the associated emotion. The detected emotion can then be used to adapt
the interaction by modifying command execution. The information presented on the
output modalities can also directly be influenced by the emotional adaptation, for
instance by synthesizing an emotional response on screens and speakers.
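The assessment loop just described — record signals, extract features of interest, and classify with a previously trained model — can be sketched as follows. This is an illustrative stand-in, not the paper's method: the toy power feature and the nearest-centroid rule are our assumptions (the paper evaluates several classifiers, see Section 2).

```python
import numpy as np

def extract_features(signal: np.ndarray) -> np.ndarray:
    """Toy feature vector: mean power per channel (channels x samples)."""
    return (signal ** 2).mean(axis=1)

def classify(features: np.ndarray, centroids: dict) -> str:
    """Assign the class whose (pre-trained) centroid is closest in feature space."""
    return min(centroids, key=lambda c: np.linalg.norm(features - centroids[c]))

# Centroids would normally be learned from labelled recordings.
centroids = {"calm-neutral": np.array([0.1, 0.1]),
             "excited": np.array([1.0, 1.0])}
signal = np.ones((2, 100))  # 2 channels, 100 samples, amplitude 1
print(classify(extract_features(signal), centroids))  # -> excited
```

The detected label would then drive the emotional adaptation step of the loop, e.g. modifying command execution or output content.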
Several applications can be derived from this framework, some of them going beyond
human-computer interfaces to reach human-machine interfaces in general and even
human-human interfaces. In gaming, emotion assessment can be used for better
understanding of the playing conditions that lead to emotional activation (Mandryk
and Atkins, 2007) and for maintaining involvement of a player by adapting game
difficulty or content (Chanel et al., 2008; Chen, 2007; Rani et al., 2005). Learning in a
computer mediated environment can elicit various types of feelings that are currently
not handled. Detection of frustration, for instance, if followed by proper adaptation of
the learning strategy (Choi et al., 2007; Kapoor et al., 2007), will certainly help to
maintain the learner's interest. Identification of critical states, such as stress, panic, and
boredom, can be very useful in situations such as driving or when performing
dangerous operations (Benoit et al., 2006; Healey, 2000). Another possible

application is the use of emotion recognition to help severely disabled persons
express their feelings.
In addition to the cognitive theory, several theories of emotions have been developed
over the past century (Cornelius, 1996). These different views gave rise to different
models of emotions. The most famous models are the basic emotions (Ekman et al.,
1987) and the valence-arousal space (Russell, 1980). Basic emotions are defined as
the emotions that are common across cultures and selected by nature because of
their high survival functions. The valence-arousal space allows for a continuous
representation of emotions on two axes: valence, ranging from unpleasant to
pleasant, and arousal, ranging from calm to excited.
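As an illustration only: the paper defines its three classes as areas of this space, but the exact boundaries below (axes normalized to [-1, 1], thresholds at 0) are our assumptions, not the paper's definition.

```python
# Hypothetical mapping from a valence-arousal point to one of the
# three class labels used in this study; boundaries are illustrative.

def emotion_class(valence: float, arousal: float) -> str:
    """valence and arousal assumed normalized to [-1, 1]."""
    if arousal <= 0.0:
        return "calm-neutral"
    return "positively excited" if valence > 0.0 else "negatively excited"

print(emotion_class(-0.7, 0.8))   # -> negatively excited
print(emotion_class(0.6, 0.9))    # -> positively excited
print(emotion_class(0.1, -0.4))   # -> calm-neutral
```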
As stated before, there is extensive literature regarding emotion assessment from
speech and facial expressions. However these two modalities suffer from several
disadvantages. Firstly, they are not always available: the user may not look at the
camera or speak all the time. Secondly, they do not always reflect the true emotional
state of the user, since facial and voice expressions are often faked because of
social rules or by deliberate choice. Finally, they should be regarded as the output of
the emotional process, meaning that the emotion could have been elicited prior to the
speech or expression.
An alternative is to use physiological signals both from the central and peripheral
system. The Jamesian theory (Cornelius, 1996) emphasizes the importance of
peripheral signals as it suggests there is some specific pattern of physiology for each
particular emotion. Even though this statement is often disputed in the psychology
literature (Stemmler et al., 2001) and the cognitive theory suggests that physiological
signals are also an output of the emotional process, several studies from human-
computer interaction have shown the usefulness of peripheral activity for emotion
assessment in diverse conditions (see Section 2.1). Those studies mainly use signal
durations on the order of one minute to obtain accurate emotion assessment; however,
shorter durations are necessary for real-time applications. The cognitive theory
stresses the importance of the central nervous system i.e. the brain. It has been
shown that correlates of emotions can be observed from brain activity, especially in
the pre-frontal cortex and the amygdala (Adolphs et al., 2003; Damasio et al., 2000;
Davidson, 2003; Davidson et al., 2000; Rolls, 2000). Brain activity can be measured
using several techniques, among which electroencephalography (EEG) is the least
intrusive and comparatively cheap, making it certainly one of the most usable
modalities for real, everyday applications. Brain activity can also be useful for
short-term emotion assessment, since activity related to an emotional stimulus
should occur shortly after the stimulus onset. Despite this, the few studies trying to
use EEG signals for emotion assessment did not obtain convincing results.
The objective of this study is to investigate the use of the EEG modality, the peripheral
modality, and the fusion of these two for emotion assessment on short time periods. This
work will thus focus on the red parts of Fig. 1. Previous studies related to emotion
assessment from physiological signals and novelties of the present study are detailed
in Section 2. A protocol based on the recall of past emotional episodes was used to
reliably induce emotions from three areas of the valence-arousal space. This protocol
was used instead of a real HCI framework because it allows for better control over
when an emotional episode starts (timing) and over the strength and valence of the
recalled emotion (content). The protocol, the signal acquisition system and the
algorithms developed to extract different feature sets from the recorded signals are

Frequently Asked Questions (13)
Q1. What contributions have the authors mentioned in the paper "Short-term emotion assessment in a recall paradigm" ?

The work presented in this paper aims at assessing human emotions using peripheral as well as electroencephalographic ( EEG ) physiological signals on short-time periods. 

Since, following the stimulus onset, emotional processes in brain and peripheral signals are expected to be observable at different times, the exploration of different time resolutions is needed to determine the time scales favorable to emotion assessment from EEG and peripheral activity. 

There are two drawbacks to the use of SVMs as classifiers: they are intrinsically only two-class classifiers, and their output is uncalibrated, so it is not directly usable as a confidence value when one wants to combine outputs of different classifiers or modalities. 
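A hedged sketch of how these two drawbacks are commonly worked around (illustrative code, not the paper's implementation): a Platt-style sigmoid maps a raw SVM decision value into (0, 1), and a summation rule fuses the resulting per-class scores across classifiers or modalities. The functions, parameter values, and score vectors below are all assumptions for illustration.

```python
import math

def platt(decision_value: float, a: float = -1.0, b: float = 0.0) -> float:
    """Sigmoid calibration of an SVM decision value; a and b would
    normally be fitted on held-out data (fixed here for illustration)."""
    return 1.0 / (1.0 + math.exp(a * decision_value + b))

def sum_rule(per_classifier_scores):
    """Decision-level fusion: add per-class scores over classifiers
    and return the index of the winning class."""
    n_classes = len(per_classifier_scores[0])
    totals = [sum(scores[k] for scores in per_classifier_scores)
              for k in range(n_classes)]
    return totals.index(max(totals))

eeg = [0.2, 0.5, 0.3]          # hypothetical calibrated outputs, 3 classes
peripheral = [0.4, 0.3, 0.3]
print(sum_rule([eeg, peripheral]))  # -> 1
```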

Notice that because of the sensitivity of EEG sensors to noise and the fact that they often require gel to be applied on the surface of the skin, some researchers have avoided using them for other HCI applications. 

In (Salahuddin et al., 2007) the authors analyzed the usability of heart rate variability on different time periods and concluded that 50 s of signals are necessary to accurately monitor mental stress in real settings. 

The elicited emotions are considered reliable because (i) thinking of the same episodes ought to produce similar reactions from one trial to another, (ii) emotional episodes 

The best average accuracy for two classes is obtained for the CP classification task, with nearly 80% of trials well classified (random level at 50%), followed by the CE and NP classification tasks with 78% and 74% accuracy, respectively. 

Diverse types of physiological activity measurements from both the peripheral and the central nervous system have been used to assess emotions. 

The first approach consists of using emotion assessment as a tool for evaluating attractiveness, appreciation and user experience of software (Hazlett and Benedek, 2007). 

If the trials with low confidence are those that are misclassified, such rejection should lead to an increase in accuracy. 
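The rejection scheme can be sketched as follows (the 0.6 threshold and the toy score vectors are illustrative assumptions, not the paper's values): trials whose highest class score falls below the threshold are discarded, and accuracy is computed over the confident trials only.

```python
def accuracy_with_rejection(probs, labels, threshold=0.6):
    """Keep only trials whose top class score reaches the threshold;
    return (accuracy over kept trials, number of kept trials)."""
    kept = [(p.index(max(p)), y) for p, y in zip(probs, labels)
            if max(p) >= threshold]
    if not kept:
        return None, 0
    correct = sum(pred == y for pred, y in kept)
    return correct / len(kept), len(kept)

probs = [[0.9, 0.05, 0.05],   # confident and correct
         [0.4, 0.35, 0.25],   # rejected (low confidence)
         [0.7, 0.2, 0.1]]     # confident and correct
print(accuracy_with_rejection(probs, [0, 1, 0]))  # -> (1.0, 2)
```

Note that the misclassified trial here is also the rejected one, which is exactly the situation in which rejection raises accuracy.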

Recalling past episodes and eliciting the corresponding emotions are difficult tasks, and participants might need a few seconds to accomplish them. 

The interest of fusing peripheral and EEG features for emotion assessment was shown in (Chanel et al., 2006) through a simple concatenation of feature sets. 

With the LDA it is sufficient to compute a single covariance matrix from the complete learning set, without distinction between classes.
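Read as the standard pooled estimate of LDA (an assumption on our part: classes enter only through their means, which are subtracted before a single covariance matrix is computed over all samples), this can be sketched as follows. Variable names are illustrative.

```python
import numpy as np

def pooled_covariance(X: np.ndarray, y: np.ndarray) -> np.ndarray:
    """X: (n_samples, n_features); y: class labels.
    Center each sample on its class mean, then compute one
    covariance matrix from all centered samples at once."""
    centered = np.vstack([X[y == c] - X[y == c].mean(axis=0)
                          for c in np.unique(y)])
    return np.cov(centered, rowvar=False)

X = np.array([[0.0, 0.0], [1.0, 1.0], [4.0, 4.0], [5.0, 5.0]])
y = np.array([0, 0, 1, 1])
S = pooled_covariance(X, y)
print(S.shape)  # -> (2, 2)
```

A quadratic discriminant would instead estimate one covariance matrix per class, which needs more training data; sharing a single matrix is what makes LDA cheap here.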