Open Access · Journal ArticleDOI

EmoCo: Visual Analysis of Emotion Coherence in Presentation Videos

TLDR
This paper introduces EmoCo, an interactive visual analytics system that facilitates efficient analysis of emotion coherence across facial, text, and audio modalities in presentation videos, and demonstrates the system's effectiveness in gaining insights into emotion coherence in presentations.
Abstract
Emotions play a key role in human communication and public presentations. Human emotions are usually expressed through multiple modalities. Therefore, exploring multimodal emotions and their coherence is of great value for understanding emotional expressions in presentations and improving presentation skills. However, manually watching and studying presentation videos is often tedious and time-consuming, and there is a lack of tool support for conducting an efficient and in-depth multi-level analysis. Thus, in this paper, we introduce EmoCo, an interactive visual analytics system to facilitate efficient analysis of emotion coherence across facial, text, and audio modalities in presentation videos. Our visualization system features a channel coherence view and a sentence clustering view that together enable users to obtain a quick overview of emotion coherence and its temporal evolution. In addition, a detail view and word view enable detailed exploration and comparison at the sentence level and word level, respectively. We thoroughly evaluate the proposed system and visualization techniques through two usage scenarios based on TED Talk videos and interviews with two domain experts. The results demonstrate the effectiveness of our system in gaining insights into emotion coherence in presentations.


Citations
Journal ArticleDOI

A survey of visual analytics techniques for machine learning

TL;DR: A taxonomy of visual analytics techniques is built, which includes three first-level categories: techniques before model building, techniques during model building, and techniques after model building.
Journal ArticleDOI

EmotionCues: Emotion-Oriented Visual Summarization of Classroom Videos

TL;DR: EmotionCues, a visual analytics system that integrates emotion recognition algorithms with visualizations to easily analyze classroom videos from the perspectives of emotion summary and detailed analysis, is proposed.
Proceedings ArticleDOI

A Visual Analytics Approach to Facilitate the Proctoring of Online Exams

TL;DR: In this paper, the authors presented a visual analytics approach to facilitate the proctoring of online exams by analyzing the exam video records and mouse movement data of each student, detecting and visualizing suspected head and mouse movements of students in three levels of detail.
Journal ArticleDOI

Deep-Learning-Based Multimodal Emotion Classification for Music Videos

TL;DR: In this paper, an affective computing system that relies on music, video, and facial expression cues was presented for emotional analysis; it applies audio-video information exchange and boosting methods to regularize the training process and reduces computational costs by using a separable convolution strategy.
References
Book

The Expression of the Emotions in Man and Animals

TL;DR: The Expression of the Emotions in Man and Animals, presented with an introduction to the first edition, discussion, and index by Phillip Prodger and Paul Ekman.
Proceedings ArticleDOI

The eyes have it: a task by data type taxonomy for information visualizations

TL;DR: A task by data type taxonomy with seven data types and seven tasks (overview, zoom, filter, details-on-demand, relate, history, and extract) is offered.
Journal ArticleDOI

The Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS): A dynamic, multimodal set of facial and vocal expressions in North American English

TL;DR: The RAVDESS is a validated multimodal database of emotional speech and song consisting of 24 professional actors, vocalizing lexically-matched statements in a neutral North American accent, which shows high levels of emotional validity and test-retest intrarater reliability.
Journal ArticleDOI

A review of affective computing

TL;DR: This first of its kind, comprehensive literature review of the diverse field of affective computing focuses mainly on the use of audio, visual and text information for multimodal affect analysis, and outlines existing methods for fusing information from different modalities.
Journal ArticleDOI

Body cues, not facial expressions, discriminate between intense positive and negative emotions

TL;DR: Standard models of emotion expression are challenged and the role of the body is highlighted in expressing and perceiving emotions, particularly whether the emotion being expressed is positive or negative.