
Showing papers on "Facial Action Coding System" published in 1993


Journal ArticleDOI
TL;DR: Reports data from two patients with face-processing impairments that contrast the processing of facial identity from static photographs with the processing of facial expression from static and moving images, indicating the separate encoding of expression from moving and static images.

265 citations


Proceedings ArticleDOI
03 Nov 1993
TL;DR: A new emotion model is presented that provides criteria for determining a person's emotional state from a face image; the Facial Action Coding System is adopted as an efficient scheme for describing subtle facial expression and motion.
Abstract: This paper presents a new emotion model that provides criteria for determining a person's emotional state from a face image. Our final goal is to realize a very natural and user-friendly human-machine communication environment by giving a face to a computer terminal or communication system that can also understand the user's emotional state. The emotion model must therefore express the emotional meaning of a parameterized facial expression and its motion quantitatively. Our emotion model is based on a 5-layered neural network, which has generalization and nonlinear mapping capability. The input and output layers have the same number of units, so an identity mapping can be realized and an emotion space can be constructed in the middle (3rd) layer. The mapping from the input layer to the middle layer corresponds to emotion recognition, and the mapping from the middle layer to the output layer corresponds to expression synthesis from an emotion value. Training is performed with 13 typical emotion patterns expressed as expression parameters. A subjective test of the resulting emotion space supports the validity of the model. The Facial Action Coding System is adopted as an efficient scheme for describing subtle facial expression and motion.
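The architecture described here is essentially a bottleneck autoencoder trained on an identity mapping, with the middle layer serving as the emotion space. Below is a minimal sketch under stated assumptions: the layer sizes (N_PARAMS, EMOTION_DIM, the hidden width of 10), the sigmoid units, and the training data are illustrative placeholders, and PyTorch stands in for whatever the authors originally implemented.

```python
# Sketch of a 5-layer identity-mapping network (autoencoder) whose
# middle layer forms an "emotion space". All dimensions and data are
# illustrative assumptions, not values from the paper.
import torch
import torch.nn as nn

N_PARAMS = 17    # assumed number of facial expression parameters
EMOTION_DIM = 3  # assumed size of the middle "emotion space" layer

# Layers 1-3: expression parameters -> emotion space (recognition)
encoder = nn.Sequential(
    nn.Linear(N_PARAMS, 10), nn.Sigmoid(),
    nn.Linear(10, EMOTION_DIM), nn.Sigmoid(),
)
# Layers 3-5: emotion space -> expression parameters (synthesis)
decoder = nn.Sequential(
    nn.Linear(EMOTION_DIM, 10), nn.Sigmoid(),
    nn.Linear(10, N_PARAMS), nn.Sigmoid(),
)
model = nn.Sequential(encoder, decoder)

# 13 typical emotion patterns, each a vector of expression parameters
patterns = torch.rand(13, N_PARAMS)  # placeholder training data

opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()
for _ in range(2000):
    opt.zero_grad()
    loss = loss_fn(model(patterns), patterns)  # identity-mapping target
    loss.backward()
    opt.step()

# After training, encoder(x) maps an expression to emotion-space
# coordinates (recognition); decoder(z) synthesizes expression
# parameters from an emotion value.
```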

41 citations


Journal ArticleDOI
TL;DR: This paper used photographs of human facial expressions as stimuli and had subjects judge the expressions in terms of six basic categories of emotion: happiness, surprise, fear, anger, sadness, and disgust.
Abstract: Using photographs of human facial expressions as stimuli, we had subjects judge the facial expressions in terms of the six basic categories of emotion: happiness, surprise, fear, anger, sadness, and disgust. For each stimulus, we measured the displacements of the facial feature points from a neutral face used as a reference. The relationships between the displacements of the facial feature points and the distribution of categorical judgments were examined by a canonical discriminant analysis. We found three major canonical variables, which were similar to those found by Yamada (1993). Thus, we conclude that humans use structural dimensions of the face, such as the curvedness/openness and slantedness of facial elements, as information for categorizing facial expressions of emotion.
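For readers unfamiliar with the method, a canonical discriminant analysis of this kind can be sketched with scikit-learn's LinearDiscriminantAnalysis, whose transform yields the canonical variates. Everything below (sample counts, feature layout, the random data) is an illustrative placeholder, not the study's data.

```python
# Hedged sketch: feature-point displacements -> canonical variates via
# linear discriminant analysis. Shapes and values are placeholders.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
n_stimuli, n_features = 60, 24   # e.g. 12 feature points x (dx, dy)

# Displacements of facial feature points from the neutral face
X = rng.normal(size=(n_stimuli, n_features))
# Modal emotion category judged for each stimulus (6 basic emotions)
y = rng.integers(0, 6, size=n_stimuli)

# With 6 classes, up to 5 discriminants exist; keep the 3 "major" ones
lda = LinearDiscriminantAnalysis(n_components=3)
canonical = lda.fit_transform(X, y)   # canonical variable scores
print(canonical.shape)                # (60, 3)
```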

32 citations


Journal ArticleDOI
TL;DR: In this paper, trained judges (66 college students) distinguished the facial expressions of subjects in genuine pain (hand in ice water), masked pain, posed pain, or no pain; all judges were more accurate at detecting genuine pain in subjects with low pain tolerance.
Abstract: Trained judges (66 college students) distinguished facial expressions of subjects in genuine pain (hand in ice water), masked pain, posed pain, or no pain. Judges were given facial movement training (based on the Facial Action Coding System) plus limited feedback training, feedback-only training, or no training. All judges, regardless of training, were more accurate in detecting genuine pain in subjects demonstrating low pain tolerance than in subjects with high tolerance. Relative to no training, feedback training enhanced accuracy in identifying posed and genuine pain, whereas facial action training plus feedback enhanced accuracy in identifying posed pain. Results suggest that judges can be given information about facial movements that helps them distinguish between genuine and distorted pain displays.

25 citations


01 Jan 1993
TL;DR: This paper presents interactive facilities for simulating abstract muscle actions using free-form deformations (FFD) and defines the minimum perceptible action (MPA) as an atomic action unit, similar to the action unit (AU) of the Facial Action Coding System (FACS).
Abstract: Computer simulation of human facial expressions requires an interactive ability to create arbitrary faces and to provide a controlled simulation of expressions on these faces. The system described in this paper provides interactive facilities for simulating abstract muscle actions using free-form deformations (FFD). A particular muscle action is simulated as a displacement of the control points of the control unit of an FFD defined on a region of interest. One or several simulated muscle actions constitute a minimum perceptible action (MPA), defined as the atomic action unit, similar to the action unit (AU) of the Facial Action Coding System (FACS), from which expressions are built.
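The core FFD mechanism, displacing the vertices of a face region by moving the control points of a trivariate Bézier volume defined over it, can be sketched as follows. The lattice size, the particular control-point displacement standing in for a "muscle action", and the vertex data are all illustrative assumptions, not the paper's implementation.

```python
# Minimal free-form deformation (FFD) sketch: moving control points of
# a trivariate Bezier volume displaces the mesh vertices inside it,
# mimicking a muscle action. Lattice size and displacement are assumed.
import numpy as np
from math import comb

def bernstein(n, i, t):
    """Bernstein basis polynomial B_{i,n}(t)."""
    return comb(n, i) * t**i * (1 - t)**(n - i)

def ffd(points_local, control):
    """Deform points (in [0,1]^3 local coordinates) by a control
    lattice of shape (l+1, m+1, n+1, 3)."""
    l, m, n = (s - 1 for s in control.shape[:3])
    out = np.zeros_like(points_local)
    for p, (s, t, u) in enumerate(points_local):
        for i in range(l + 1):
            for j in range(m + 1):
                for k in range(n + 1):
                    w = (bernstein(l, i, s) * bernstein(m, j, t)
                         * bernstein(n, k, u))
                    out[p] += w * control[i, j, k]
    return out

# 3x3x3 control lattice spanning the unit cube (undeformed state)
grid = np.stack(
    np.meshgrid(*[np.linspace(0, 1, 3)] * 3, indexing="ij"), axis=-1)
# Simulated "muscle action": pull one control point upward
grid[1, 1, 2] += np.array([0.0, 0.0, 0.25])

verts = np.random.rand(100, 3)   # face-region vertices, local coords
deformed = ffd(verts, grid)      # displaced vertices after the action
```

In this framing, an MPA would bundle one or more such control-point displacements into a named, reusable unit, analogous to a FACS action unit.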

15 citations


Proceedings ArticleDOI
03 Nov 1993
TL;DR: In this paper, the Ekman and Friesen Facial Action Coding System (FACS) was used to determine whether humans exhibit facial expressions while interacting with a computer system.
Abstract: This paper describes research conducted to determine whether humans exhibit facial expressions while interacting with a computer system. Fourteen college-aged subjects were chosen for the experiment: 3 Hispanics and 11 Caucasians; six of the subjects were female. Each subject performed five computer-based tasks chosen to simulate a wide range of typical applications, one of which served as a baseline. The subjects' facial expressions were videotaped and later analyzed using the Ekman and Friesen Facial Action Coding System. The analysis revealed that the subjects did indeed exhibit facial expressions, and an analysis of variance showed a significant difference between task types. In addition, an ethological analysis revealed a surprising number of facial expression maskings.
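The reported analysis corresponds to a standard one-way ANOVA over task types; a hedged sketch using scipy follows, with placeholder Poisson counts standing in for the study's per-task expression frequencies.

```python
# One-way ANOVA sketch: does expression frequency differ across task
# types? The counts below are placeholders, not the study's data.
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(1)
# Expression counts per subject (n=14) for each of five task types
tasks = [rng.poisson(lam, size=14) for lam in (2, 3, 5, 8, 4)]
f_stat, p_value = f_oneway(*tasks)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```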

4 citations



01 Jan 1993
TL;DR: In this thesis, the effect of emotion induction on subsequent nonverbal decoding ability was examined; subjects induced to feel a specific emotion were found to be less likely to misattribute that emotion to a discrepant facial expression.
Abstract: Biases in the Decoding of Others' Facial Expressions. February 1993. Sean Donovan, B.S., Penn State University; M.S., University of Massachusetts. Directed by: Professor Robert S. Feldman. The transfer of emotional states may occur through an emotion contagion process, in which a person mimics the emotional expression of another, or through a cognitive appraisal process, in which emotion-congruent memory nodes become activated. Either of these processes could bias subsequent emotional judgments. Two experiments were conducted to determine the effect of emotion induction on subsequent nonverbal decoding ability. Subjects viewed an emotion-specific film segment and were then asked to decode a series of twenty facial expressions of emotion. Results offer some support for a cognitive bias: subjects induced to feel a specific emotion were less likely to misattribute that emotion to a discrepant facial expression. This effect was statistically significant in two of four conditions and marginally significant in a third. The results, though weaker than expected, provide a basis for future research.