Open Access Journal Article DOI

A review of affective computing

TL;DR: This first-of-its-kind, comprehensive literature review of the diverse field of affective computing focuses mainly on the use of audio, visual, and text information for multimodal affect analysis, and outlines existing methods for fusing information from different modalities.
About
This article was published in Information Fusion on 2017-09-01 and is currently open access. It has received 969 citations to date. The article focuses on the topics: Affective computing & Modality (human–computer interaction).


Citations
Journal Article DOI

Superior Communication of Positive Emotions Through Nonverbal Vocalisations Compared to Speech Prosody

TL;DR: Listeners were more accurate in recognizing most positive emotions from nonverbal vocalizations than from prosodic expressions, and acoustic classification experiments with machine-learning models demonstrated that positive emotions are expressed with more distinctive acoustic patterns in nonverbal vocalizations than in speech prosody.
Journal Article DOI

MCL: A Contrastive Learning Method for Multimodal Data Fusion in Violence Detection

TL;DR: In this article, a multi-encoder framework is proposed to perform task-driven feature encoding on video and audio, respectively, and a contrastive learning task is introduced to reduce information loss during multimodal fusion.
Journal Article DOI

Multimodal transformer augmented fusion for speech emotion recognition

TL;DR: In this article, a model-fusion module composed of three cross-transformer encoders is proposed to generate a multimodal emotional representation for modal guidance and information fusion.
Book Chapter DOI

Soundtrack Recommendation for UGVs

TL;DR: A fast and effective heuristic ranking approach based on heterogeneous late fusion is proposed that jointly considers three aspects: venue categories, visual scene, and user listening history, in order to recommend appealing soundtracks for UGVs that enhance the viewing experience.
Journal Article DOI

Is trust in artificial intelligence systems related to user personality? Review of empirical evidence and future research directions

René Riedl
23 Nov 2022
TL;DR: In this paper, the authors developed a framework of user personality and trust in AI systems, which distinguishes universal personality traits (e.g., Big Five), specific personality traits, and specific behaviors such as adherence to the recommendation of an AI system in a decision-making context.
References
Proceedings Article

ImageNet Classification with Deep Convolutional Neural Networks

TL;DR: A deep convolutional neural network consisting of five convolutional layers, some of which are followed by max-pooling layers, and three fully connected layers with a final 1000-way softmax achieved state-of-the-art classification performance.
Posted Content

Efficient Estimation of Word Representations in Vector Space

TL;DR: Two novel model architectures for computing continuous vector representations of words from very large data sets are proposed; the quality of these representations is measured in a word similarity task, and the results are compared to the previously best-performing techniques based on different types of neural networks.
Journal Article DOI

A fast learning algorithm for deep belief nets

TL;DR: A fast, greedy algorithm is derived that can learn deep, directed belief networks one layer at a time, provided the top two layers form an undirected associative memory.
Proceedings Article DOI

Convolutional Neural Networks for Sentence Classification

TL;DR: The CNN models discussed herein improve upon the state of the art on 4 out of 7 tasks, which include sentiment analysis and question classification; a simple modification to the architecture is proposed to allow the use of both task-specific and static vectors.
Book

The Expression of the Emotions in Man and Animals

TL;DR: The Expression of the Emotions in Man and Animals, with an introduction to the first edition, discussion, and index by Phillip Prodger and Paul Ekman.
Frequently Asked Questions (9)
Q1. What contributions have the authors mentioned in the paper "A review of affective computing: from unimodal analysis to multimodal fusion" ?

This is the primary motivation behind their first-of-its-kind, comprehensive literature review of the diverse field of affective computing. Furthermore, existing literature surveys lack a detailed discussion of the state of the art in multimodal affect analysis frameworks, which this review aims to address. In this paper, the authors focus mainly on the use of audio, visual, and text information for multimodal affect analysis, since around 90% of the relevant literature appears to cover these three modalities. As part of this review, the authors carry out an extensive study of different categories of state-of-the-art fusion techniques, followed by a critical analysis of potential performance improvements with multimodal analysis compared to unimodal analysis. A comprehensive overview of these two complementary fields aims to form the building blocks for readers, to better understand this challenging and exciting research field.
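The fusion categories studied in the review can be illustrated with a minimal sketch of the two most common strategies, feature-level (early) and decision-level (late) fusion. The feature vectors, scores, and weights below are toy values, not figures from the reviewed systems:

```python
# Minimal illustration of two common multimodal fusion strategies.
# Real systems would use learned audio/visual/text encoders and
# trained classifiers; the inputs here are illustrative placeholders.

def early_fusion(audio_feats, visual_feats, text_feats):
    """Feature-level (early) fusion: concatenate unimodal feature
    vectors into one joint representation before classification."""
    return audio_feats + visual_feats + text_feats

def late_fusion(scores, weights=None):
    """Decision-level (late) fusion: combine per-modality classifier
    scores, here with a (possibly weighted) average."""
    if weights is None:
        weights = [1.0] * len(scores)
    total = sum(w * s for w, s in zip(weights, scores))
    return total / sum(weights)

# Toy example: three unimodal feature vectors, then three unimodal
# "positive sentiment" scores combined at the decision level.
fused_vec = early_fusion([0.1, 0.9], [0.4], [0.7, 0.2])
fused_score = late_fusion([0.8, 0.6, 0.7], weights=[0.5, 0.25, 0.25])
print(fused_vec)    # [0.1, 0.9, 0.4, 0.7, 0.2]
print(fused_score)  # 0.725
```

Early fusion preserves cross-modal feature interactions but yields high-dimensional inputs; late fusion lets each modality use its own classifier at the cost of losing those interactions, which is one trade-off the review's fusion taxonomy covers.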

One important area of future research is to investigate novel approaches for advancing their understanding of the temporal dependency between utterances, i.e., the effect of the utterance at time t on the utterance at time t+1. Progress in text classification research can play a major role in the future of multimodal affect analysis research. Future research should focus on answering this question. The use of deep learning for multimodal fusion can also be an important future work.

The primary advantage of video analysis over textual analysis for detecting emotions and sentiments from opinions is the surplus of behavioral cues.

For acoustic features, low-level acoustic features were extracted at the frame level for each utterance and used to generate a feature representation of the entire dataset, using the OpenSMILE toolkit.
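As a rough sketch of what frame-level low-level feature extraction involves (OpenSMILE itself computes far richer descriptor sets), the toy code below computes two classic low-level descriptors, log-energy and zero-crossing rate, per frame and pools them into an utterance-level representation. The frame length, hop size, pooling functional, and test signal are illustrative assumptions, not the review's configuration:

```python
import math

def frame_level_features(samples, frame_len=400, hop=160):
    """Toy frame-level low-level descriptors (LLDs): log-energy and
    zero-crossing rate per frame, in the spirit of OpenSMILE LLDs."""
    feats = []
    for start in range(0, len(samples) - frame_len + 1, hop):
        frame = samples[start:start + frame_len]
        energy = sum(x * x for x in frame) / frame_len
        zcr = sum(
            1 for a, b in zip(frame, frame[1:]) if a * b < 0
        ) / (frame_len - 1)
        feats.append((math.log(energy + 1e-10), zcr))
    return feats

def utterance_representation(samples):
    """Utterance-level representation: mean of each LLD over all
    frames (a simple 'functional' applied to the LLD contours)."""
    feats = frame_level_features(samples)
    n = len(feats)
    return [sum(f[i] for f in feats) / n for i in range(2)]

# Synthetic test signal: a 440 Hz sine at a 16 kHz sampling rate.
signal = [math.sin(2 * math.pi * 440 * t / 16000) for t in range(8000)]
print(utterance_representation(signal))
```

For this pure tone, the mean log-energy is close to log(0.5) and the zero-crossing rate is close to 2 × 440 / 16000 ≈ 0.055 crossings per sample, which is a quick sanity check on the descriptors.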

While machine-learning methods for supervised training of sentiment analysis systems predominate in the literature, a number of unsupervised methods, such as linguistic patterns, can also be found.
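A minimal sketch of such an unsupervised, linguistic-pattern approach follows, assuming a tiny hypothetical polarity lexicon and a single negation rule; real systems use large lexicons and far richer patterns:

```python
# Unsupervised lexicon/pattern-based sentiment scoring.
# LEXICON and NEGATORS are illustrative placeholders, not a real
# resource; polarity values are arbitrary integer weights.
LEXICON = {"good": 1, "great": 2, "bad": -1, "awful": -2, "boring": -1}
NEGATORS = {"not", "never", "no"}

def sentiment_score(sentence):
    """Sum lexicon polarities over tokens; a simple linguistic
    pattern flips polarity when a negator directly precedes a
    polar word (e.g., 'not good')."""
    tokens = sentence.lower().split()
    score = 0
    for i, tok in enumerate(tokens):
        polarity = LEXICON.get(tok.strip(".,!?"), 0)
        if polarity and i > 0 and tokens[i - 1] in NEGATORS:
            polarity = -polarity
        score += polarity
    return score

print(sentiment_score("the movie was not good"))     # -1
print(sentiment_score("a great film, never boring")) # 3
```

No labeled training data is required, which is exactly what distinguishes these pattern-based methods from the supervised machine-learning approaches that dominate the literature.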

Across the ages of the people involved and the nature of their conversations, facial expressions are the primary channel for forming an impression of the subject's present state of mind.

The results on uncontrolled recordings (i.e., speech downloaded from a video-sharing website) revealed that the feature adaptation scheme significantly improved the unweighted and weighted accuracies of the emotion recognition system. 

In their literature survey, the authors found that more than 90% of studies reported the visual modality as superior to audio and other modalities.

To accommodate research in audio-visual fusion, the audio and video signals were synchronized with an accuracy of 25 microseconds.