Journal ArticleDOI

Human expression recognition from motion using a radial basis function network architecture

TLDR
A radial basis function network architecture is developed that learns the correlation of facial feature motion patterns and human expressions through a hierarchical approach which at the highest level identifies expressions, at the mid level determines motion of facial features, and at the low level recovers motion directions.
Abstract
In this paper, a radial basis function network architecture is developed that learns the correlation of facial feature motion patterns and human expressions. We describe a hierarchical approach which at the highest level identifies expressions, at the mid level determines motion of facial features, and at the low level recovers motion directions. Individual expression networks were trained to recognize the "smile" and "surprise" expressions. Each expression network was trained by viewing a set of sequences of one expression for many subjects. The trained neural network was then tested for retention, extrapolation, and rejection ability. Success rates were 88% for retention, 88% for extrapolation, and 83% for rejection.
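The architecture described above lends itself to a compact sketch: motion-direction features extracted from tracked facial feature windows feed one radial basis function network per expression, and the highest-scoring network labels the sequence. The Python fragment below is only a minimal illustration of that idea under assumed details (random center selection, a shared Gaussian width, and a least-squares output layer); it is not the authors' implementation, and the class name, feature layout, and training procedure are hypothetical.

```python
# Minimal RBF-network sketch (illustrative only; center selection, the shared
# Gaussian width, and the least-squares output fit are assumptions, not the
# paper's exact training procedure).
import numpy as np

class RBFExpressionNet:
    """One expression network (e.g. 'smile'): motion features in, score out."""

    def __init__(self, n_centers=20, width=1.0):
        self.n_centers = n_centers
        self.width = width          # shared Gaussian width (assumption)
        self.centers = None
        self.weights = None

    def _activations(self, X):
        # Gaussian radial basis activations for each (sample, center) pair.
        d2 = ((X[:, None, :] - self.centers[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * self.width ** 2))

    def fit(self, X, y):
        # Pick centers as a random subset of the training patterns and solve
        # the linear output layer by least squares.
        rng = np.random.default_rng(0)
        idx = rng.choice(len(X), size=min(self.n_centers, len(X)), replace=False)
        self.centers = X[idx]
        H = self._activations(X)
        self.weights, *_ = np.linalg.lstsq(H, y, rcond=None)
        return self

    def score(self, X):
        return self._activations(X) @ self.weights


# Hypothetical usage: X holds per-sequence motion-direction features for the
# mouth/eye/brow windows, y is 1 for "smile" sequences and 0 otherwise.
# smile_net = RBFExpressionNet().fit(X_train, y_train)
# is_smile = smile_net.score(X_test) > 0.5
```

Separate "smile" and "surprise" networks of this kind would each be trained on sequences of their own expression across many subjects, mirroring the per-expression training reported in the abstract.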

Citations
Journal ArticleDOI

Emotion recognition in human-computer interaction

TL;DR: Basic issues in signal processing and analysis techniques for consolidating psychological and linguistic analyses of emotion are examined, motivated by the PHYSTA project, which aims to develop a hybrid system capable of using information from faces and voices to recognize people's emotions.
Journal ArticleDOI

Automatic facial expression analysis: a survey

Beat Fasel, +1 more (01 Jan 2003)
TL;DR: This survey introduces the most prominent automatic facial expression analysis methods and systems presented in the literature and discusses issues such as face normalization, facial expression dynamics and facial expression intensity.
Journal ArticleDOI

Gesture Recognition: A Survey

TL;DR: A survey on gesture recognition with particular emphasis on hand gestures and facial expressions is provided, and applications involving hidden Markov models, particle filtering and condensation, finite-state machines, optical flow, skin color, and connectionist models are discussed in detail.
Journal ArticleDOI

Recognizing action units for facial expression analysis

TL;DR: An Automatic Face Analysis (AFA) system is presented that analyzes facial expressions based on both permanent and transient facial features in nearly frontal-view face image sequences; multistate face and facial component models are proposed for tracking and modeling the various facial features.
Journal ArticleDOI

Classifying facial actions

TL;DR: This paper explores and compares techniques for automatically recognizing facial actions in sequences of images and provides converging evidence for the importance of using local filters, high spatial frequencies, and statistical independence for classifying facial actions.
References
Book

Introduction To The Theory Of Neural Computation

TL;DR: This book is a detailed, logically developed treatment that covers the theory and uses of collective computational networks, including associative memory, feedforward networks, and unsupervised learning.
Book

Sensation and Perception

TL;DR: Goldstein's SENSATION AND PERCEPTION is a comprehensive examination of sensation and perception with balanced coverage of all the senses; it offers an integrated account of how the senses work together, shows how seemingly simple experiences arise from extremely complex mechanisms, and examines both the psychophysical and physiological underpinnings of perception.
Proceedings ArticleDOI

Feature extraction from faces using deformable templates

TL;DR: A method for detecting and describing the features of faces using deformable templates is described, demonstrated by showing deformable templates detecting eyes and mouths in real images.
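As a rough illustration of the template-fitting idea, the sketch below matches a parameterized circle to an edge map by maximizing average edge strength along its boundary. This is an assumed simplification: the actual eye and mouth templates use richer parameterizations and energy terms (valley, edge, peak, and intensity potentials), and the function names and grid-search strategy here are hypothetical.

```python
# Toy deformable-template fit (illustrative only): a circle with parameters
# (cx, cy, r) is matched to an edge map by maximizing average edge strength
# sampled along its boundary.
import numpy as np

def circle_energy(edge_map, cx, cy, r, n_samples=64):
    """Mean edge strength along the circle boundary (higher is a better fit)."""
    t = np.linspace(0.0, 2.0 * np.pi, n_samples, endpoint=False)
    xs = np.clip((cx + r * np.cos(t)).astype(int), 0, edge_map.shape[1] - 1)
    ys = np.clip((cy + r * np.sin(t)).astype(int), 0, edge_map.shape[0] - 1)
    return edge_map[ys, xs].mean()

def fit_circle_template(edge_map, radii=range(5, 30), step=2):
    """Coarse grid search over centre and radius; returns the best (cx, cy, r)."""
    h, w = edge_map.shape
    best, best_e = None, -np.inf
    for r in radii:
        for cy in range(r, h - r, step):
            for cx in range(r, w - r, step):
                e = circle_energy(edge_map, cx, cy, r)
                if e > best_e:
                    best, best_e = (cx, cy, r), e
    return best
```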
Journal ArticleDOI

Emotion recognition: the role of facial movement and the relative importance of upper and lower areas of the face.

TL;DR: The results demonstrated that moving displays of happiness, sadness, fear, surprise, anger and disgust were recognized more accurately than static displays of the white spots at the apex of the expressions, indicating that facial motion, in the absence of information about the shape and position of facial features, is informative about these basic emotions.