
Showing papers on "Facial Action Coding System published in 1991"


Journal ArticleDOI
01 Aug 1991-Pain
TL;DR: Considerable voluntary control over the facial expression of pain was observed, although the faked expression was more an intensified caricature of the genuine expression, and an attempt to suppress the facial grimace of pain was not entirely successful, as residual facial activity persisted.
Abstract: Facial activity was examined as 60 female and 60 male chronic low back pain patients responded to a painful range-of-motion exercise during a scheduled physical examination. Subsequently, they were asked either to fake the facial response to the movement inducing the most pain, or to attempt to suppress evidence that they were experiencing pain when this same movement was repeated. Facial behavior was measured using the Facial Action Coding System. Self-reports of pain were also provided. The genuine expression was consistent with that observed in previous research, but minor differences indicated that the facial display of pain reflects differences between sources of pain, the social context in which pain is induced, and individual differences among patients. Considerable voluntary control over the facial expression of pain was observed, although the faked expression was more an intensified caricature of the genuine expression, and an attempt to suppress the facial grimace of pain was not entirely successful, as residual facial activity persisted. Self-reports were only moderately correlated with facial behavior.

182 citations


Journal ArticleDOI
TL;DR: Results indicate that decoding of emotions from own facial expression and decoding of the respective emotions from pictures of facial affect correspond to a degree above chance.
Abstract: There is considerable evidence now that recognition of emotion from facial expression occurs far above chance, at least for primary emotions. On the other hand, not much research is available studying the process of emotion recognition. An early theory was proposed by Lipps (1907), postulating that an ‘imitation drive’ accounts for this process. According to this theory, we tend to imitate a facial expression to which we are exposed, via feedback mechanisms we realize that our own imitated facial expression is associated with an emotion, and then we attribute this emotion to the person confronting us. Using Ekman & Friesen's (1976) Pictures of Facial Affect, a study employing 20 subjects was conducted. During the first part subjects had to judge the emotions expressed in the pictures of facial affect. During this task the subjects were videotaped without their knowledge. About two weeks later the same subjects watched the video-recordings of their own expressions during the judgement task and had to judge which emotions they had decoded for the respective slides two weeks previously. Results indicate that decoding of emotions from own facial expression and decoding of the respective emotions from pictures of facial affect correspond to a degree above chance. The results are discussed with respect to the possible impact of imitation on the process of emotion recognition.

131 citations


Book ChapterDOI
01 Jan 1991
TL;DR: The goal is to build a system of 3D animation of facial expressions of emotion correlated with the intonation of the voice, by examining the rules that control these relations (intonation/emotions and facial expressions/emotions) as well as the coordination of these various modes of expression.
Abstract: Our goal is to build a system of 3D animation of facial expressions of emotion correlated with the intonation of the voice. Until now, existing systems have not taken the link between these two features into account. We will look at the rules that control these relations (intonation/emotions and facial expressions/emotions) as well as the coordination of these various modes of expression. Given an utterance, we consider how the messages (what is new/old information in the given context) transmitted through the choice of accents and their placement are conveyed through the face. The facial model integrates the action of each muscle or group of muscles as well as the propagation of the muscles’ movement. Our first step will be to enumerate and to differentiate facial movements linked to emotions as opposed to those linked to conversation. Then, we will examine what rules drive them and how their different functions interact.

86 citations


Journal ArticleDOI
TL;DR: The authors examined how different social contexts determine whether preschoolers' smiles in an achievement-game serve an expressive function indicating success versus failure experiences and/or a communicative function, and found that children smiled more often after failure than after success.
Abstract: Two studies examined how different social contexts determine whether preschoolers' smiles in an achievement game serve an expressive function, indicating success versus failure experiences, and/or a communicative function. Facial behavior was coded with the Facial Action Coding System. Unexpectedly, in Study 1 children (N=19) smiled more often after failure than after success. Study 2 investigated the influence of face-to-face contact with the experimenter on preschoolers' smiles (N=20). Again there were no differences between success and failure, but subjects smiled more with face-to-face contact than without. Features of the social situation that are supposed to determine the predominance of the communicative or expressive function of a smile are discussed.

69 citations


Journal ArticleDOI
TL;DR: The ability to exhibit facial expressions was studied in four patients with severe dementia of the Alzheimer type by means of the Facial Action Coding System (FACS) and physiological responses under pleasant and unpleasant stimulus conditions.
Abstract: The ability to exhibit facial expressions was studied in four patients with severe dementia of the Alzheimer type (SDAT), by means of the Facial Action Coding System (FACS) and physiological responses under pleasant and unpleasant stimulus conditions.

58 citations


Journal ArticleDOI
TL;DR: A hierarchical model of facial description units for the hierarchical description of the facial expressions is proposed and the action unit of the FACS (facial action coding system), known as a systematic method of describing the facial expression, is considered as a layer in the proposed hierarchical model.
Abstract: The automatic synthesis of facial expressions by computer is one of the most basic techniques to be applied in various fields. This paper describes a method to synthesize various facial expressions by modification of a 3-D model of the face. The 3-D facial model is obtained by adjusting a prepared 3-D facial shape model to the frontal view of the subject's neutral face image, and by projecting the gray-level information onto it. This paper first proposes a hierarchical model of facial description units for the hierarchical description of facial expressions. The action unit (AU) of FACS (Facial Action Coding System), which is known as a systematic method of describing facial expressions, is treated as one layer in the proposed hierarchical model. The AUs of FACS are realized on a computer, and the automatic synthesis of expressions is executed. This procedure is based on the muscular structure of the face and allows cancellation of individual differences. At present, 34 kinds of AUs are implemented. The gray-level information for wrinkles and teeth, which is important in the synthesis of expressions, is not contained in the 3-D facial model obtained from the neutral expression. Thus, such information is extracted from other face images and registered as auxiliary information of the 3-D facial model. This leads to the synthesis of natural expressions. Finally, numerous examples of facial expression synthesis are presented to show the usefulness of the method.

7 citations
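
The AU-as-layer idea in the last abstract — driving a 3-D face mesh by activating FACS Action Units — can be pictured as a weighted sum of per-AU vertex displacement fields applied to a neutral mesh. The following toy Python sketch illustrates only that general idea; the AU numbers follow standard FACS labels, but the mesh size, displacement values, and function names are invented for illustration and are not the paper's actual model:

```python
import numpy as np

# Toy "face mesh": 4 vertices at the neutral position (all zeros here).
N_VERTICES = 4
neutral = np.zeros((N_VERTICES, 3))

# Hypothetical displacement field per Action Unit, shape (N_VERTICES, 3).
# AU1 = inner brow raiser, AU12 = lip corner puller (standard FACS labels);
# the numeric displacements below are made up.
au_fields = {
    1:  np.tile([0.0, 0.02, 0.0], (N_VERTICES, 1)),
    12: np.tile([0.01, 0.01, 0.0], (N_VERTICES, 1)),
}

def synthesize(neutral, au_fields, activations):
    """Deform the neutral mesh by a sum of AU fields scaled by intensity in [0, 1]."""
    mesh = neutral.copy()
    for au, intensity in activations.items():
        mesh += intensity * au_fields[au]
    return mesh

# A smile-like expression: full AU12 plus a weaker AU1.
smile = synthesize(neutral, au_fields, {12: 1.0, 1: 0.5})
```

A linear blend like this is the simplest possible reading of "AU as a description layer"; the paper itself models muscle action and the propagation of muscle movement, which a per-vertex lookup table does not capture.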