Book Chapter (DOI)

Collection and Annotation of a Corpus of Human-Human Multimodal Interactions: Emotion and Others Anthropomorphic Characteristics

TLDR
The EmoTaboo protocol is presented for the collection of multimodal emotional behaviours occurring during human-human interactions in a game context and a new annotation methodology based on a hierarchical taxonomy of emotion-related words is introduced.
Abstract
In order to design affective interactive systems, experimental grounding is required for studying expressions of emotion during interaction. In this paper, we present the EmoTaboo protocol for the collection of multimodal emotional behaviours occurring during human-human interactions in a game context. First annotations revealed that the collected data contains various multimodal expressions of emotions and other mental states. In order to reduce the influence of language via a predetermined set of labels and to take into account differences between coders in their capacity to verbalize their perception, we introduce a new annotation methodology based on 1) a hierarchical taxonomy of emotion-related words, and 2) the design of the annotation interface. Future directions include the implementation of such an annotation tool and its evaluation for the annotation of multimodal interactive and emotional behaviours. We will also extend our first annotation scheme to several other characteristics that are interdependent with emotions.


Citations
Journal Article (DOI)

IEMOCAP: interactive emotional dyadic motion capture database

TL;DR: A new corpus named the "interactive emotional dyadic motion capture database" (IEMOCAP), collected by the Speech Analysis and Interpretation Laboratory at the University of Southern California (USC), provides detailed information about the actors' facial expressions and hand movements during scripted and spontaneous spoken communication scenarios.
Journal Article (DOI)

A 3-D Audio-Visual Corpus of Affective Communication

TL;DR: This work presents a new audio-visual corpus for possibly the two most important modalities used by humans to communicate their emotional states, namely speech and facial expression in the form of dense dynamic 3-D face geometries.
Journal Article (DOI)

The MPI emotional body expressions database for narrative scenarios.

TL;DR: A new database consisting of a large set of natural emotional body expressions typical of monologues, featuring the intended emotion expression for each motion sequence from the actor is presented and made available in a searchable MPI Emotional Body Expression Database.
Journal Article (DOI)

Analyses of a Multimodal Spontaneous Facial Expression Database

TL;DR: These analyses of a multimodal spontaneous facial expression database of natural visible and infrared facial expressions demonstrate the effectiveness of the emotion-inducing experimental design, the gender difference in emotional responses, and the coexistence of multiple emotions/expressions.
References
Book

Affective Computing

TL;DR: Key issues in affective computing, "computing that relates to, arises from, or influences emotions", are presented, along with new applications in computer-assisted learning, perceptual information retrieval, arts and entertainment, and human health and interaction.
Journal Article (DOI)

Emotion recognition in human-computer interaction

TL;DR: Basic issues in signal processing and analysis techniques for consolidating psychological and linguistic analyses of emotion are examined, motivated by the PHYSTA project, which aims to develop a hybrid system capable of using information from faces and voices to recognize people's emotions.
Book

Handbook of affective sciences.

TL;DR: In this article, the authors bring together, for the first time, the various strands of inquiry and latest research in the scientific study of the relationship between the mechanisms of the brain and the psychology of the mind.
Book

Embodied conversational agents

TL;DR: Embodied conversational agents as mentioned in this paper are computer-generated cartoonlike characters that demonstrate many of the same properties as humans in face-to-face conversation, including the ability to produce and respond to verbal and nonverbal communication.