
Showing papers on "Facial expression" published in 1991


Journal ArticleDOI
TL;DR: Emotion-specific autonomic nervous system activity was studied in 20 elderly people who followed muscle-by-muscle instructions for constructing facial prototypes of emotional expressions and relived past emotional experiences.
Abstract: Emotion-specific autonomic nervous system (ANS) activity was studied in 20 elderly people (age 71-83 years, M = 77) who followed muscle-by-muscle instructions for constructing facial prototypes of emotional expressions and relived past emotional experiences. Results indicated that (a) patterns of emotion-specific ANS activity produced by these tasks closely resembled those found in other studies with younger Ss, (b) the magnitude of change in ANS measures was smaller in older than in younger Ss, (c) patterns of emotion-specific ANS activity showed generality across the 2 modes of elicitation, (d) emotion self-reports and spontaneous production of emotional facial expressions that occurred during relived emotional memories were comparable with those found in younger Ss, (e) elderly men and women did not differ in emotional physiology or facial expression, and (f) elderly women reported experiencing more intense emotions when reliving emotional memories than did elderly men.

527 citations


Journal ArticleDOI
TL;DR: The authors found that smiling, as estimated by facial electromyography, varied monotonically with the sociality of viewing but not with reported emotion, and that smiles are better predicted by social context than by emotion, which is consistent with both contemporary ethology and role and impression-management theories of behavior.
Abstract: Ss viewed a pleasant videotape either: (a) alone, (b) alone but with the belief that a friend nearby was otherwise engaged, (c) alone but with the belief that a friend was viewing the same videotape in another room, or (d) when a friend was present. Ss' smiling, as estimated by facial electromyography, varied monotonically with the sociality of viewing but not with reported emotion. The findings confirm audience effects for human smiles, demonstrate that the effects do not depend upon the presence of the interactant, and indicate that the smiles are better predicted by social context than by emotion. Both naive and expert independent raters given descriptions of the study made predictions that conformed to previous emotion-based accounts of faces but departed from the findings. The results suggest that some solitary faces may be implicitly social, a view consistent with both contemporary ethology and role and impression-management theories of behavior. People make faces when they are alone. This curious fact may have been crucial in shaping the most popular contemporary theories of facial expression. These generally hold that whereas some faces reflect social convention, others are quasi-reflexive released displays of felt emotion (Buck, 1984; Darwin, 1872; Ekman, 1972, 1973, 1977, 1984; Ekman & Friesen, 1969, 1975,

482 citations



Journal ArticleDOI
TL;DR: A model for the development of emotion is presented that involves an initial decision to approach or withdraw, resulting in motor programs, including facial expression, that facilitate either approach or withdrawal.
Abstract: Recent studies suggest that an initial component involving stimulus evaluation may precede subsequent steps in the generation of emotion. This article presents a model for the development of emotion that involves an initial decision of approach or withdrawal, which results in motor programs, including facial expression, that facilitate either approach or withdrawal. With development, more complex emotions arise, as products of these basic initial responses and interaction with the environment. Evidence is presented that suggests that there are brain asymmetries (as measured by scalp recorded EEG activity) localized to the frontal region that are associated with the generation of emotion in infants. Variability in the pattern of EEG asymmetry between infants may be an important marker of differences in temperament.

422 citations


Journal ArticleDOI
01 Jun 1991-Brain
TL;DR: It is found that RHD subjects performed normally in their ability to infer the emotion conveyed by sentences describing situations, but RHD patients were impaired in relation to both LHD and NC in the capacity to judge the emotional content of sentences depicting facial, prosodic, and gestural expressions.
Abstract: Previous research has established that patients with right hemisphere damage (RHD) are impaired in the comprehension of emotional prosody and facial expression. There are several explanations for this impairment. It may reflect defective acoustic and visuospatial analysis, disruption of nonverbal communicative representations, or a disturbance in the comprehension of emotional meaning. In order to examine these hypotheses, we asked RHD patients, left hemisphere damaged patients (LHD) and normal controls (NC) to judge the emotional content of sentences describing nonverbal expressions, and sentences describing emotional situations. We found that RHD subjects performed normally in their ability to infer the emotion conveyed by sentences describing situations. However, RHD patients were impaired in relation to both LHD and NC in the capacity to judge the emotional content of sentences depicting facial, prosodic, and gestural expressions, suggesting a disruption of nonverbal communicative representations.

408 citations


Book
01 Jan 1991
TL;DR: An edited volume surveying biological, sociodevelopmental, affective and cognitive, individual-difference, and interpersonal approaches to nonverbal behaviour, with chapters ranging from the development of facial expressions in infancy to a fundamental approach to nonverbal exchange.
Abstract: Preface Part I. Biological Approaches to Nonverbal Behaviour: 1. Neuropsychology of facial expression William E. Rinn 2. Brain pathology, lateralization, and nonverbal behaviour Pierre Feyereisen Part II. Sociodevelopmental Approaches to Nonverbal Behaviour: 3. The development of facial expressions in infancy Linda A. Camras, Carol Malatesta and Carroll E. Izard 4. Toward an ecology of expressiveness: family socialization in particular and a model in general Amy G. Halberstadt Part III. Affective and Cognitive Processes: 5. Facial expression: methods, means, and moues Paul Ekman and Maureen O'Sullivan 6. Voice and emotion Arvid Kappas, Ursula Hess and Klaus R. Scherer 7. Gesture and speech Bernard Rime and Loris Schiaratura Part IV. Individual Differences and Social Adaptation: 8. Expressiveness as an individual difference Antony S. R. Manstead 9. Social competence and nonverbal behaviour Robert S. Feldman, Pierre Philippot and Robert J. Custrini 10. Nonverbal and self-presentation: a developmental perspective Bella M. DePaulo Part V. Interpersonal Processes: 11. Interpersonal coordination: behaviour matching and interactional synchrony Frank J. Bernieri and Robert Rosenthal 12. Symbolic nonverbal behaviour: talking through gestures Pio Enrico Ricci Bitti and Isabella Poggi 13. A fundamental approach to nonverbal exchange Miles L. Patterson Author index Subject index.

314 citations


Journal ArticleDOI
TL;DR: A behavioral-ecology view of human facial displays is introduced that contrasts with previous views of faces as innate, prototypic, "iconic" expressions of fundamental emotions, clarifying how facial displays relate to reflex, motive and intention, emotion and psychophysiology, and language and paralanguage.

256 citations


Journal ArticleDOI
TL;DR: In this paper, the authors reviewed studies that examined children's ability to recognize emotional information in facial expressions and their knowledge of ways this information is affected by aspects of the situations in which the expressions occur.

212 citations


Journal ArticleDOI
01 Aug 1991-Pain
TL;DR: Considerable voluntary control over the facial expression of pain was observed, although the faked expression was more an intensified caricature of the genuine expression, and an attempt to suppress the facial grimace of pain was not entirely successful, as residual facial activity persisted.
Abstract: Facial activity was examined as 60 female and 60 male chronic low back pain patients responded to a painful range of motion exercise during a scheduled physical examination. Subsequently, they were asked to fake the facial response to the movement inducing the most pain or to attempt to suppress evidence that they were experiencing pain when this same movement was again repeated. Facial behavior was measured using the Facial Action Coding System. Self-reports of pain also were provided. The genuine expression was consistent with that observed in previous research, but minor differences indicated that the facial display of pain reflects differences between sources of pain, social context in which pain is induced and individual differences among patients. Considerable voluntary control over the facial expression of pain was observed, although the faked expression was more an intensified caricature of the genuine expression, and an attempt to suppress the facial grimace of pain was not entirely successful as residual facial activity persisted. Self-reports were only moderately correlated with facial behavior.

182 citations


Journal ArticleDOI
TL;DR: In this paper, the frequency of motor mimicry displays in response to hearing about a close-call experience was examined in four communicative situations, and the results support the proposition that facial displays are mediated by the extent to which individuals can fully interact in communicative situation.
Abstract: A primary function of facial displays is to communicate messages to others. Bavelas and Chovil (1990) proposed an Integrated Message Model of language in which nonverbal acts such as facial displays and gestures that occur in communicative (particularly face-to-face) interactions are viewed as symbolic messages that are used to convey meaning to others. One proposition of this model is that these nonverbal messages will be shaped by the social components of the situation. The present study attempted to delineate more precisely the components of sociality that explicitly affect the use of facial displays in social situations. Frequency of motor mimicry displays in response to hearing about a close-call experience was examined in four communicative situations. In one condition, participants listened to a tape-recording of an individual telling about a close-call event. In two interactive but nonvisual conditions, participants listened to another person over the telephone or in the same room but separated by a partition. In the fourth condition, participants listened to another person in a face-to-face interaction. The frequency of listeners' motor mimicry displays was found to vary monotonically with the sociality of the four conditions. Actual presence and visual availability of the story-teller potentiated listener displays. The results support the proposition that facial displays are mediated by the extent to which individuals can fully interact in communicative situations.

165 citations


Journal ArticleDOI
TL;DR: The authors proposed a theoretical framework by which to understand and predict how and why cultures influence the emotions, combining individualism and power distance with the social distinctions of ingroup-outgroup and status.
Abstract: Research demonstrates that facial expressions of emotion are both universal and culturally specific, but our theoretical understanding of how cultures influence emotions has not advanced since Friesen's (1972) conception of cultural display rules. This article offers a theoretical framework by which to understand and predict how and why cultures influence the emotions. The model combines the cultural dimensions known as individualism and power distance with the social distinctions of ingroup-outgroup and status. Major issues in future theoretical and empirical work are also discussed.

Journal ArticleDOI
TL;DR: In this paper, three different age groups (5-and 10-year-old children, and adults) were asked to link a number of selected excerpts of music to one of the four mood states: "happiness", "sadness", "fear" and "anger".
Abstract: Three different age groups (5- and 10-year-old children, and adults) were asked to link a number of selected excerpts of music to one of the four mood states: "happiness", "sadness", "fear" and "anger" (represented by facial expressions). The consensus of choices was considerable, even among the youngest children, and increased with age. Fear and anger were harder to identify in music than happiness and sadness. In the case of anger, this is probably caused by the phenomenon that the subjects (especially the youngest children) were often inclined to answer not by identifying the character of the stimulus, but in terms of the character of their response: fear. In a preliminary experiment in which a group of adult subjects was asked to judge a large number of mood states for their possible expression in music, we found some indications that music expresses the positive-negative value and the degree of activity of a mood state particularly well. The position of happiness, sadness and anger on those dimensions i...

Journal ArticleDOI
TL;DR: This article found that 5-month-olds can discriminate vocal expressions of emotion when those expressions are presented in conjunction with a face and showed that the presence of a facial expression plays a role in successful discrimination.
Abstract: Infants 5 months of age have been found to discriminate happy and sad vocal expressions. The present experiment showed that they can discriminate happy and angry vocal expressions as well, but that the presence of a facial expression plays a role in successful discrimination. Infants were visually habituated to a vocal expression accompanied by an affectively matching facial expression, an affectively mismatching facial expression, or a checkerboard pattern. At criterion, the vocal expression was or was not changed while the slide remained the same. Infants who received no change or who had been habituated to a vocal expression accompanied by a checkerboard display failed to dishabituate on the posttests. Infants who received a change in vocal expression from happy to angry or sad, or angry to happy or sad, increased their looking time. These results indicate that 5-month-olds can discriminate vocal expressions of emotion when those expressions are presented in conjunction with a face.

Journal ArticleDOI
TL;DR: Results indicate that decoding of emotions from own facial expression and decoding of the respective emotions from pictures of facial affect correspond to a degree above chance.
Abstract: There is considerable evidence now that recognition of emotion from facial expression occurs far above chance, at least for primary emotions. On the other hand, not much research is available studying the process of emotion recognition. An early theory was proposed by Lipps (1907), postulating that an ‘imitation drive’ accounts for this process. According to this theory, we tend to imitate a facial expression to which we are exposed, via feedback mechanisms we realize that our own imitated facial expression is associated with an emotion, and then we attribute this emotion to the person confronting us. Using Ekman & Friesen's (1976) Pictures of Facial Affect, a study employing 20 subjects was conducted. During the first part subjects had to judge the emotions expressed in the pictures of facial affect. During this task the subjects were videotaped without their knowledge. About two weeks later the same subjects watched the video-recordings of their own expressions during the judgement task and had to judge which emotions they had decoded for the respective slides two weeks previously. Results indicate that decoding of emotions from own facial expression and decoding of the respective emotions from pictures of facial affect correspond to a degree above chance. The results are discussed with respect to the possible impact of imitation on the process of emotion recognition.

Journal ArticleDOI
TL;DR: The positive results of this controlled trial demonstrate that feedback training in combination with a structured home rehabilitation program is a clinically efficacious treatment for patients with facial nerve paresis.
Abstract: An efficacious treatment has not been available to patients with aberrant regeneration of the facial nerve as a result of Bell's palsy or after acoustic neuroma excision. This prospective controlled trial examines the efficacy of electromyographic feedback versus mirror feedback as treatment strategies for patients suffering from long-standing (18 months minimum) facial nerve paresis. Twenty-five patients were randomly assigned to electromyography with mirror feedback or mirror feedback alone. Seven rural patients who did not undergo treatment served as controls. At 0, 6, and 12 months, facial motor function was objectively quantified by linear measurement of facial movement, visual assessment of voluntary movement, and electrical measurement of facial nerve response to maximal stimulation. Statistically significant improvements were noted in both electromyography and mirror-feedback groups with respect to symmetry of voluntary movement (P less than .03) and linear measurement of facial expression (P less than .01). The positive results of this controlled trial demonstrate that feedback training in combination with a structured home rehabilitation program is a clinically efficacious treatment for patients with facial nerve paresis.

Journal ArticleDOI
01 Dec 1991-Brain
TL;DR: These findings support the hypothesis that the right hemisphere may contain a 'lexicon' of facial emotions and argue against current views that it is exclusively the left or right hemisphere that mediates visual imagery.
Abstract: Thirty-six patients with unilateral hemispheric lesions of the right hemisphere (RHD), left hemisphere (LHD), or no neurologic disease were evaluated on two tasks of visual imagery: one involved imagery for facial emotions and the other involved imagery for common objects. As a group, the RHD patients were more impaired on the emotional than the object imagery task, whereas the LHD patients showed the opposite pattern. Individual case analyses suggested that the RHD group consisted of different behavioural subtypes. One patient with a right inferior occipito-temporal lesion had a facial emotion imagery generation defect, other RHD patients displayed a facial affect agnosia (being impaired on emotional imagery and emotional perceptual tasks), while other RHD patients had perceptual defects with sparing of imagery performance. A final RHD group was globally impaired across all imagery and perceptual tasks. These findings support the hypothesis that the right hemisphere may contain a 'lexicon' of facial emotions. Furthermore, these findings argue against current views that it is exclusively the left or right hemisphere that mediates visual imagery. Rather, hemispheric asymmetries in imagery performance are to some extent material-representation specific and may arise when (a) the representations of objects/events to be imaged are differentially represented in the hemispheres, and/or (b) when the operations acting on these imaged events are differentially lateralized.

Journal ArticleDOI
TL;DR: An automatic field motion image synthesis scheme (driven by speech) and a real-time image synthesis design are presented to realize an intelligent human-machine interface or intelligent communication system with talking head images.
Abstract: An automatic field motion image synthesis scheme (driven by speech) and a real-time image synthesis design are presented. The purpose of this research is to realize an intelligent human-machine interface or intelligent communication system with talking head images. A human face is reconstructed on the display of a terminal using a 3-D surface model and texture mapping technique. Facial motion images are synthesized naturally by transformation of the lattice points on 3-D wire frames. Two driving motion methods, a text-to-image conversion scheme and a voice-to-image conversion scheme, are proposed. In the first method, the synthesized head image can appear to speak some given words and phrases naturally. In the second case, some mouth and jaw motions can be synthesized in synchronization with voice signals from a speaker. Facial expressions other than mouth shape and jaw position can be added at any moment, so it is easy to make the facial model appear angry, to smile, to appear sad, etc., by special modification rules. These schemes were implemented on a parallel image computer system. A real-time image synthesizer was able to generate facial motion images on the display at a TV image video rate.
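The lattice-point transformation idea can be sketched briefly. The following is a minimal illustration, not the paper's system: mouth and jaw control vertices on a stand-in 3-D wire frame are displaced frame by frame according to an assumed phoneme-to-mouth-shape table. All vertex indices, shape values, and displacement magnitudes below are hypothetical.

```python
# Illustrative sketch (assumed details, not the paper's system): displace mouth
# and jaw lattice points on a 3-D wire-frame face in sync with a phoneme stream.
import numpy as np

# Assumed per-phoneme targets: (mouth opening, lip width), normalized 0..1.
MOUTH_SHAPE = {
    "a": (0.9, 0.5), "i": (0.2, 0.9), "u": (0.3, 0.2),
    "e": (0.5, 0.7), "o": (0.7, 0.3), "sil": (0.0, 0.5),
}

# Hypothetical lattice: indices of control vertices around the mouth and jaw.
mouth_idx = np.array([10, 11, 12, 13])
jaw_idx = np.array([20, 21])

def deform(vertices, phoneme):
    """Return a copy of the base vertex array with mouth/jaw points displaced."""
    opening, width = MOUTH_SHAPE.get(phoneme, MOUTH_SHAPE["sil"])
    out = vertices.copy()
    out[mouth_idx, 1] -= 0.02 * opening               # lower lip drops as mouth opens
    out[mouth_idx, 0] *= 1.0 + 0.05 * (width - 0.5)   # lips widen or purse
    out[jaw_idx, 1] -= 0.03 * opening                 # jaw follows mouth opening
    return out

# Usage: one deformed frame per phoneme, generated at a fixed video rate.
base = np.random.rand(30, 3)                          # stand-in wire-frame vertices
frames = [deform(base, p) for p in ["sil", "o", "a", "i", "sil"]]
print(len(frames), "frames synthesized")
```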


Journal ArticleDOI
01 Jun 1991-Cortex
TL;DR: Negative-aroused and negative-nonaroused facial expressions were recognized with significantly greater accuracy by left hemisphere-damaged patients compared to right hemisphere-damaged patients; the group difference in performance was nonsignificant for positive emotions.

Journal ArticleDOI
TL;DR: Three-way ANOVAs revealed children with learning disabilities to be less accurate interpreters of emotion and to spend more time identifying specific emotions.
Abstract: The time required for children with and without learning disabilities to interpret emotions when restricted to information from facial expressions, and the accuracy of those interpretations, were investigated. Ninety-six children participated; an equal number of males and females were included in both learning categories and age levels. Accuracy and response time on a modified version of Pictures of Facial Affect were recorded for the emotions of fear, sadness, surprise, anger, happiness, and disgust, as well as for the entire task. Three-way ANOVAs revealed children with learning disabilities to (a) be less accurate interpreters of emotion and (b) spend more time identifying specific emotions. Both age and sex influenced response time: Younger subjects required more time to interpret the emotions of fear and anger; males spent more time interpreting happiness. Younger females with learning disabilities displayed difficulty in interpretation, and older children with learning disabilities (par...

Journal ArticleDOI
TL;DR: In this paper, the validity of social perceptions was assessed on the basis of facial or vocal information, and the specific facial and vocal characteristics that mediated these links were also considered, potential mechanisms that may yield the match between self-perceptions and impressions based on nonverbal cues are discussed.
Abstract: The validity of social perceptions was assessed on the basis of facial or vocal information. Specifically, impressions of stimulus persons' power and warmth were obtained on the basis of either a facial photograph or a voice recording. These were compared with the stimulus persons' self-reports along the same dimensions. Face- and voice-based impressions did predict self-view. The specific facial and vocal characteristics that mediated these links were also considered. Potential mechanisms that may yield the match between self-perceptions and impressions based on nonverbal cues are discussed.

Journal ArticleDOI
TL;DR: In this article, the effect of the physical presence of a friend or of a stranger on facial expressiveness was investigated, and the results support the suggestion that the degree to which emotions are expressed depends on the role of an accompanying person.
Abstract: This study investigated the effect of the physical presence of a friend or of a stranger on facial expressiveness. Pairs of friends and pairs of strangers (all women) were unobtrusively videotaped while they viewed together a number of emotional stimulus slides, and rated their individual emotional responses to them. Judges subsequently attempted to identify from the videotapes the emotions reported by each sender subject. Generally, expressions were more readily identified for women videotaped with friends than for those recorded with strangers. These results support the suggestion that the degree to which emotions are expressed depends on the role of an accompanying person. Alternative interpretations of this view are discussed.

Journal ArticleDOI
TL;DR: It is suggested that children can be classified as stutterers on the basis of their nonspeech behaviors and that these behaviors may reflect a variety of cognitive, emotional, linguistic, and physical events associated with childhood stuttering.
Abstract: The purpose of this study was to assess the nonspeech behaviors associated with young stutterers’ stuttering and normally fluent children’s comparable fluent utterances. Subjects were 28 boys and 2...

Journal ArticleDOI
TL;DR: This article found evidence that smile production in 10-month-old infants is affected by the presence or absence of an audience for the facial display, and that the audience effect does not appear to be mediated by emotion.
Abstract: This report presents evidence that smile production in 10-month-old infants is affected by the presence or absence of an audience for the facial display. The audience effect does not appear to be mediated by emotion. The evidence indicates that the production of facial expressions is at least partly independent of emotion and partly dependent on a social-communicative context from a very early age.

Proceedings ArticleDOI
18 Nov 1991
TL;DR: A method of human emotion recognition from facial expressions by a neural network is shown, and network learning and recognition are done by a backpropagation algorithm.
Abstract: An attempt is being made to develop a mind-implemented robot that can carry out intellectual conversation with humans. As the first step for this study, a method for the robot to perceive human emotion is investigated. Specifically, a method of human emotion recognition from facial expressions by a neural network is shown. The authors categorized facial expressions into six groups (surprise, fear, disgust, anger, happiness, and sadness) and obtained 70 CCD (charge coupled device) camera-acquired data of facial feature-points from three components of the face (eyebrows, eyes, and mouth). Then the facial expression information is generated and input into the input units of the neural network; network learning and recognition are done by a backpropagation algorithm.
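For readers unfamiliar with the setup, here is a minimal sketch of such a classifier: a small feed-forward network trained by backpropagation to map facial feature-point coordinates onto the six emotion categories. The feature layout, layer sizes, learning rate, and the randomly generated stand-in data are illustrative assumptions, not the authors' configuration.

```python
# Illustrative sketch (not the authors' implementation): a small feed-forward
# network trained by backpropagation on facial feature-point coordinates.
import numpy as np

rng = np.random.default_rng(0)

EMOTIONS = ["surprise", "fear", "disgust", "anger", "happiness", "sadness"]
N_FEATURES = 18   # assumed: 9 feature points x (x, y) coordinates
N_HIDDEN = 20     # assumed hidden-layer size
N_CLASSES = len(EMOTIONS)

# Randomly generated stand-in data; the paper used 70 camera-acquired samples.
X = rng.normal(size=(70, N_FEATURES))
y = rng.integers(0, N_CLASSES, size=70)
Y = np.eye(N_CLASSES)[y]                        # one-hot targets

W1 = rng.normal(scale=0.1, size=(N_FEATURES, N_HIDDEN)); b1 = np.zeros(N_HIDDEN)
W2 = rng.normal(scale=0.1, size=(N_HIDDEN, N_CLASSES));  b2 = np.zeros(N_CLASSES)

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

lr = 0.1
for epoch in range(500):
    # Forward pass
    h = np.tanh(X @ W1 + b1)
    p = softmax(h @ W2 + b2)
    # Backward pass (cross-entropy loss with softmax output)
    d_out = (p - Y) / len(X)
    dW2 = h.T @ d_out
    db2 = d_out.sum(axis=0)
    d_h = (d_out @ W2.T) * (1.0 - h ** 2)       # tanh derivative
    dW1 = X.T @ d_h
    db1 = d_h.sum(axis=0)
    # Gradient-descent update
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

# Recognition: pick the emotion with the highest output activation.
pred = softmax(np.tanh(X @ W1 + b1) @ W2 + b2).argmax(axis=1)
print("training accuracy:", (pred == y).mean())
```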

Journal ArticleDOI
TL;DR: A meta-analysis of 65 hypothesis tests in 14 published studies of asymmetry in the facial expression of emotion is reported in this article, which examines which side of the face has been found more strongly to express emotion as well as the effects of three moderator variables: (i) the type of expression; (ii) the method of eliciting the expression (posed or spontaneous); and (iii) dimensions of emotional experience.
Abstract: A meta-analysis of 65 hypothesis tests in 14 published studies of asymmetry in the facial expression of emotion is reported. The analysis examines which side of the face has been found more strongly to express emotion as well as the effects of three moderator variables: (i) the type of expression; (ii) the method of eliciting the expression (posed or spontaneous); and (iii) dimensions of emotional experience. The analysis reveals a highly significant but small effect for the left side of the face to be judged more expressive than the right. Additionally it reveals that asymmetry is (a) stronger for emotional than neutral expressions, (b) stronger for posed emotional expressions compared with spontaneous emotional expressions and (c) predicted by some dimensions of emotional experience, notably pleasantness. The results highlight theoretical and methodological issues in asymmetry of facial emotional expression.
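As a reminder of how results like these are pooled, here is a minimal fixed-effect sketch: each study contributes a standardized effect size that is combined with inverse-variance weights. The numbers and the fixed-effect weighting scheme are purely illustrative and are not the reported analysis.

```python
# Illustrative fixed-effect meta-analysis sketch (made-up numbers, not the
# reported data): combine per-study effect sizes with inverse-variance weights.
import math

# (effect size favouring the left hemiface, variance) for hypothetical studies
studies = [(0.30, 0.02), (0.15, 0.05), (0.25, 0.03), (0.05, 0.04)]

weights = [1.0 / v for _, v in studies]
pooled = sum(w * g for (g, _), w in zip(studies, weights)) / sum(weights)
se = math.sqrt(1.0 / sum(weights))
z = pooled / se

print(f"pooled effect = {pooled:.3f}, SE = {se:.3f}, z = {z:.2f}")
```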

Journal ArticleDOI
TL;DR: Four cases of dissociable impairments affecting only one of the face processing tasks are reported: patient JP impaired only on facial expression recognition, patients AB and HI impaired only on familiar face recognition, and patient VS impaired only on unfamiliar face matching.
Abstract: Matched populations of head-injured patients and normal control subjects completed three “forced-choice” face processing tasks designed to test facial expression recognition, familiar face recognition, and unfamiliar face matching. We hypothesised a significant difference in the performance of the patients and controls on the three tasks, and hoped to observe individual differences in the patients' performance across tasks. As predicted the head-injured patients made significantly more errors than the controls on the forced-choice tasks. Four cases of dissociable impairments affecting only one of the face processing tasks are reported; patient JP impaired only on facial expression recognition, patients AB and HI impaired only on familiar face recognition, and patient VS impaired only on unfamiliar face matching. These dissociable impairments provide further evidence for independent cognitive processing of specific face properties.

Book ChapterDOI
Tsuneya Kurihara, Kiyoshi Arai
01 Jan 1991
TL;DR: A 3-D canonical facial model is introduced which is transformed to a facial model that is consistent with photographs of an individual face, and facial expression is modified by transformation of the obtained facial model.
Abstract: This paper describes a new transformation method for modeling and animation of the human face. A 3-D canonical facial model is introduced which is transformed to a facial model that is consistent with photographs of an individual face. Facial expression is modified by transformation of the obtained facial model. By using the displacements of selected control points, the transformation determines the displacements of the remaining points by linear interpolation in a 2-D parameter space. To generate texture-mapped facial images, photographs are first projected onto a 2-D space using cylindrical coordinates and then combined, taking into account their positional certainty.
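The control-point step can be illustrated with a small sketch. It uses inverse-distance weighting in a 2-D parameter space as a stand-in for the paper's linear interpolation; the vertex parameterization, control points, and displacement values are hypothetical.

```python
# Illustrative stand-in (not the paper's exact method): spread displacements
# given at a few control points to all remaining vertices of a face model,
# working in a 2-D (u, v) parameter space.
import numpy as np

def spread_displacements(uv, ctrl_uv, ctrl_disp, eps=1e-8):
    """Inverse-distance-weighted interpolation of control-point displacements.

    uv        : (N, 2) parameter coordinates of all vertices
    ctrl_uv   : (K, 2) parameter coordinates of the control points
    ctrl_disp : (K, 3) 3-D displacements specified at the control points
    returns   : (N, 3) interpolated displacements for every vertex
    """
    d = np.linalg.norm(uv[:, None, :] - ctrl_uv[None, :, :], axis=2)  # (N, K)
    w = 1.0 / (d + eps)
    w /= w.sum(axis=1, keepdims=True)
    return w @ ctrl_disp

# Usage: raise the mouth corners of a canonical model to form a smile.
uv = np.random.rand(500, 2)                        # stand-in vertex parameters
ctrl_uv = np.array([[0.35, 0.3], [0.65, 0.3]])     # left/right mouth corners
ctrl_disp = np.array([[0.0, 0.01, 0.0], [0.0, 0.01, 0.0]])
disp = spread_displacements(uv, ctrl_uv, ctrl_disp)
print(disp.shape)                                  # (500, 3)
```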

Journal Article
TL;DR: Children and adults with mental retardation or borderline intelligence were less proficient at identifying facial expressions of emotion than were children of average intelligence; among individuals with average intelligence, recognition accuracy increased with age.
Abstract: A sample of 511 children and adults with mental retardation or borderline intelligence (1 SD below the mean IQ) and children of average intelligence were tested on their ability to recognize the six basic facial expressions of emotion as they are exemplified in Ekman and Friesen's (1975) normed photographs. Each subject was shown four sets of six photographs, one of each emotion. Subjects were read 24 short stories; after each one they were asked to point to the photograph that depicted the emotion described. Children and adults with mental retardation or borderline intelligence were less proficient at identifying facial expressions of emotion than were children of average intelligence. Among individuals with mental retardation or borderline intelligence, recognition accuracy of facial emotion increased with IQ. Among individuals with average intelligence, recognition accuracy increased with age.

Book ChapterDOI
01 Jan 1991
TL;DR: The goal is to build a system of 3D animation of facial expressions of emotion correlated with the intonation of the voice by examining the rules that control these relations (intonation/emotions and facial expressions/emotions) as well as the coordination of these various modes of expression.
Abstract: Our goal is to build a system of 3D animation of facial expressions of emotion correlated with the intonation of the voice. Until now, existing systems have not taken into account the link between these two features. We will look at the rules that control these relations (intonation/emotions and facial expressions/emotions) as well as the coordination of these various modes of expression. Given an utterance, we consider how the messages (what is new/old information in the given context) transmitted through the choice of accents and their placement are conveyed through the face. The facial model integrates the action of each muscle or group of muscles as well as the propagation of the muscles' movement. Our first step will be to enumerate and to differentiate facial movements linked to emotions as opposed to those linked to conversation. Then, we will examine what the rules are that drive them and how their different functions interact.
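A minimal sketch of how such coordination rules might be scheduled, assuming a baseline expression set by the emotion and brow movements synchronized with pitch-accented (new-information) syllables; all action names, activation values, and timing rules below are illustrative assumptions, not the authors' rule set.

```python
# Illustrative sketch (assumed rules, not the authors' system): schedule facial
# actions so that an emotion sets a baseline expression while pitch accents on
# new information trigger synchronized brow movements.
from dataclasses import dataclass

# Assumed baseline muscle activations per emotion (0..1 per action name).
EMOTION_BASELINE = {
    "anger":   {"brow_lower": 0.8, "lip_press": 0.6},
    "joy":     {"lip_corner_pull": 0.7, "cheek_raise": 0.5},
    "sadness": {"inner_brow_raise": 0.6, "lip_corner_depress": 0.5},
}

@dataclass
class Syllable:
    text: str
    start: float      # seconds
    duration: float
    accented: bool    # carries a pitch accent (new information)

def schedule(emotion, syllables):
    """Return time-stamped facial actions for one utterance."""
    actions = [(0.0, dict(EMOTION_BASELINE.get(emotion, {})))]
    for syl in syllables:
        if syl.accented:
            # Assumed rule: raise the brows for the span of an accented syllable.
            actions.append((syl.start, {"brow_raise": 0.6}))
            actions.append((syl.start + syl.duration, {"brow_raise": 0.0}))
    return actions

# Usage
utterance = [Syllable("my", 0.0, 0.15, False), Syllable("NAME", 0.15, 0.3, True)]
for t, action in schedule("joy", utterance):
    print(f"{t:4.2f}s  {action}")
```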