Author

Ellen Douglas-Cowie

Bio: Ellen Douglas-Cowie is an academic researcher from Queen's University Belfast. She has contributed to research in topics including Facial expression and Recurrent neural network, has an h-index of 28, and has co-authored 74 publications receiving 6,013 citations. Previous affiliations of Ellen Douglas-Cowie include the Royal Society of Medicine and Queen's University.


Papers
Journal ArticleDOI
TL;DR: Examines basic issues in signal processing and analysis techniques for recognizing emotion, while consolidating psychological and linguistic analyses of emotion; motivated by the PHYSTA project, which aims to develop a hybrid system capable of using information from faces and voices to recognize people's emotions.
Abstract: Two channels have been distinguished in human interaction: one transmits explicit messages, which may be about anything or nothing; the other transmits implicit messages about the speakers themselves. Both linguistics and technology have invested enormous efforts in understanding the first, explicit channel, but the second is not as well understood. Understanding the other party's emotions is one of the key tasks associated with the second, implicit channel. To tackle that task, signal processing and analysis techniques have to be developed, while, at the same time, consolidating psychological and linguistic analyses of emotion. This article examines basic issues in those areas. It is motivated by the PHYSTA project, in which we aim to develop a hybrid system capable of using information from faces and voices to recognize people's emotions.

2,255 citations

01 Sep 2000
TL;DR: FEELTRACE has resolving power comparable to an emotion vocabulary of 20 non-overlapping words, with the advantage of allowing intermediate ratings, and above all, the ability to track impressions continuously.
Abstract: FEELTRACE is an instrument developed to let observers track the emotional content of a stimulus as they perceive it over time, allowing the emotional dynamics of speech episodes to be examined. It is based on activation-evaluation space, a representation derived from psychology. The activation dimension measures how dynamic the emotional state is; the evaluation dimension is a global measure of the positive or negative feeling associated with the state. Research suggests that the space is naturally circular, i.e. states which are at the limit of emotional intensity define a circle, with alert neutrality at the centre. To turn those ideas into a recording tool, the space was represented by a circle on a computer screen, and observers described perceived emotional state by moving a pointer (in the form of a disc) to the appropriate point in the circle, using a mouse. Prototypes were tested, and in the light of results, refinements were made to ensure that outputs were as consistent and meaningful as possible. They include colour coding the pointer in a way that users readily associate with the relevant emotional state; presenting key emotion words as 'landmarks' at strategic points in the space; and developing an induction procedure to introduce observers to the system. An experiment assessed the reliability of the developed system. Stimuli were 16 clips from TV programmes, two showing relatively strong emotions in each quadrant of activation-evaluation space, each paired with one of the same person in a relatively neutral state. 24 raters took part. Differences between clips chosen to contrast were statistically robust. Results were plotted in activation-evaluation space as ellipses, each with its centre at the mean co-ordinates for the clip and its width proportional to the standard deviation across raters. The size of the ellipses meant that about 25 could be fitted into the space, i.e. FEELTRACE has resolving power comparable to an emotion vocabulary of 20 non-overlapping words, with the advantage of allowing intermediate ratings and, above all, the ability to track impressions continuously.
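The pointer-to-coordinates mapping described in the abstract can be sketched in a few lines. This is a minimal illustrative reconstruction, not the FEELTRACE implementation itself: the function name, the circle parameters (centre and radius), and the [-1, 1] output range are assumptions made for the example.

```python
import math

def to_activation_evaluation(px, py, cx, cy, r):
    """Map a mouse-pointer position inside a circle of centre (cx, cy)
    and radius r to (evaluation, activation) coordinates in [-1, 1].
    Screen y grows downward, so activation is flipped.
    Illustrative sketch only; not the original FEELTRACE code."""
    evaluation = (px - cx) / r   # horizontal axis: negative to positive feeling
    activation = (cy - py) / r   # vertical axis: passive to active
    # Clamp to the unit circle: states beyond full emotional intensity
    # are outside the space and are projected back onto its boundary.
    mag = math.hypot(evaluation, activation)
    if mag > 1.0:
        evaluation /= mag
        activation /= mag
    return evaluation, activation
```

A point on the right edge of the circle maps to maximally positive evaluation with neutral activation; a point above the centre maps to high activation.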

568 citations

Journal ArticleDOI
TL;DR: The paper shows how the challenge of developing appropriate databases is being addressed in three major recent projects -- the Reading-Leeds project, the Belfast project and the CREST-ESP project -- and indicates future directions for the development of emotional speech databases.

483 citations

Book ChapterDOI
12 Sep 2007
TL;DR: The HUMAINE Database provides naturalistic clips which record that kind of material, in multiple modalities, and labelling techniques that are suited to describing it.
Abstract: The HUMAINE project is concerned with developing interfaces that will register and respond to emotion, particularly pervasive emotion (forms of feeling, expression and action that colour most of human life). The HUMAINE Database provides naturalistic clips which record that kind of material, in multiple modalities, and labelling techniques that are suited to describing it.

344 citations

Proceedings ArticleDOI
22 Sep 2008
TL;DR: A novel approach for continuous emotion recognition based on Long Short-Term Memory Recurrent Neural Networks, which include modelling of long-range dependencies between observations and thus outperform techniques like Support Vector Regression.
Abstract: Class-based emotion recognition from speech, as performed in most works up to now, entails many restrictions for practical applications. Human emotion is a continuum, and an automatic emotion recognition system must be able to recognise it as such. We present a novel approach for continuous emotion recognition based on Long Short-Term Memory Recurrent Neural Networks, which include modelling of long-range dependencies between observations and thus outperform techniques like Support Vector Regression. Transferring the innovative concept of additionally modelling emotional history to the classification of discrete levels for the emotional dimensions "valence" and "activation", we also apply Conditional Random Fields, which prevail over the commonly used Support Vector Machines. Experiments conducted on data that was recorded while humans interacted with a Sensitive Artificial Listener prove that for activation the derived classifiers perform as well as human annotators.
Index Terms: Emotion Recognition, Sensitive Artificial Listener, LSTM
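The core idea above — an LSTM carrying emotional history across frames and emitting a continuous estimate at every step — can be sketched with a single NumPy LSTM cell and a linear output head. This is a generic sketch of the architecture family, not the authors' system; all weight shapes, the gate ordering, and the function names are assumptions for illustration.

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def lstm_step(x, h, c, W, U, b):
    """One LSTM step. W: (4H, D) input weights, U: (4H, H) recurrent
    weights, b: (4H,) bias. Gate order assumed here:
    input, forget, cell candidate, output."""
    H = h.shape[0]
    z = W @ x + U @ h + b
    i = sigmoid(z[:H])          # input gate
    f = sigmoid(z[H:2 * H])     # forget gate: keeps long-range history
    g = np.tanh(z[2 * H:3 * H]) # candidate cell state
    o = sigmoid(z[3 * H:])      # output gate
    c_new = f * c + i * g
    h_new = o * np.tanh(c_new)
    return h_new, c_new

def predict_sequence(xs, W, U, b, w_out, b_out):
    """Run the LSTM over frame-level speech features and emit a
    continuous emotion estimate (e.g. activation) at every frame,
    so earlier frames influence later predictions through (h, c)."""
    H = b.shape[0] // 4
    h, c = np.zeros(H), np.zeros(H)
    outputs = []
    for x in xs:
        h, c = lstm_step(x, h, c, W, U, b)
        outputs.append(float(w_out @ h + b_out))
    return outputs
```

In a real system the weights would be trained by backpropagation through time against continuous annotator traces; here they are left as free parameters.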

328 citations


Cited by
Journal ArticleDOI

3,628 citations

Journal ArticleDOI
01 Jun 1959

3,442 citations

Journal ArticleDOI
TL;DR: Notes that defining "emotion" is a notorious problem: without consensual conceptualization and operationalization of exactly what phenomenon is to be studied, progress in theory and research is difficult.
Abstract: Defining “emotion” is a notorious problem. Without consensual conceptualization and operationalization of exactly what phenomenon is to be studied, progress in theory and research is difficult to a...

3,247 citations

Journal ArticleDOI
TL;DR: A multimodal data set for the analysis of human affective states was presented and a novel method for stimuli selection is proposed using retrieval by affective tags from the last.fm website, video highlight detection, and an online assessment tool.
Abstract: We present a multimodal data set for the analysis of human affective states. The electroencephalogram (EEG) and peripheral physiological signals of 32 participants were recorded as each watched 40 one-minute long excerpts of music videos. Participants rated each video in terms of the levels of arousal, valence, like/dislike, dominance, and familiarity. For 22 of the 32 participants, frontal face video was also recorded. A novel method for stimuli selection is proposed using retrieval by affective tags from the last.fm website, video highlight detection, and an online assessment tool. An extensive analysis of the participants' ratings during the experiment is presented. Correlates between the EEG signal frequencies and the participants' ratings are investigated. Methods and results are presented for single-trial classification of arousal, valence, and like/dislike ratings using the modalities of EEG, peripheral physiological signals, and multimedia content analysis. Finally, decision fusion of the classification results from different modalities is performed. The data set is made publicly available and we encourage other researchers to use it for testing their own affective state estimation methods.
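The final step of the abstract — decision fusion of classification results from different modalities — can be illustrated with a weighted average of per-modality class posteriors. This is a generic sketch of decision-level fusion under assumed inputs, not the paper's method; the function name and the uniform default weights are illustrative.

```python
def fuse_decisions(posteriors, weights=None):
    """Decision-level fusion: combine per-modality class posteriors
    (e.g. from EEG, peripheral signals, and content analysis) by a
    weighted average, then pick the highest-scoring class.
    `posteriors` is a list of per-class probability lists, one per
    modality; `weights` defaults to uniform. Illustrative sketch only."""
    n = len(posteriors)
    if weights is None:
        weights = [1.0 / n] * n
    n_classes = len(posteriors[0])
    fused = [sum(w * p[j] for w, p in zip(weights, posteriors))
             for j in range(n_classes)]
    return fused.index(max(fused))  # index of the winning class
```

For a binary low/high-arousal decision, two modalities leaning "high" outvote one leaning "low"; non-uniform weights would let a more reliable modality dominate.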

3,013 citations

Journal ArticleDOI
TL;DR: In this paper, the authors discuss human emotion perception from a psychological perspective, examine available approaches to solving the problem of machine understanding of human affective behavior, and discuss important issues like the collection and availability of training and test data.
Abstract: Automated analysis of human affective behavior has attracted increasing attention from researchers in psychology, computer science, linguistics, neuroscience, and related disciplines. However, the existing methods typically handle only deliberately displayed and exaggerated expressions of prototypical emotions despite the fact that deliberate behaviour differs in visual appearance, audio profile, and timing from spontaneously occurring behaviour. To address this problem, efforts to develop algorithms that can process naturally occurring human affective behaviour have recently emerged. Moreover, an increasing number of efforts are reported toward multimodal fusion for human affect analysis including audiovisual fusion, linguistic and paralinguistic fusion, and multi-cue visual fusion based on facial expressions, head movements, and body gestures. This paper introduces and surveys these recent advances. We first discuss human emotion perception from a psychological perspective. Next we examine available approaches to solving the problem of machine understanding of human affective behavior, and discuss important issues like the collection and availability of training and test data. We finally outline some of the scientific and engineering challenges to advancing human affect sensing technology.

2,503 citations