
Showing papers on "Facial expression" published in 1994


Journal ArticleDOI
15 Dec 1994-Nature
TL;DR: Findings suggest the human amygdala may be indispensable for recognizing fear in facial expressions but is not required for recognizing personal identity from faces; this constrains the broad notion that the amygdala is involved in emotion.
Abstract: Studies in animals have shown that the amygdala receives highly processed visual input, contains neurons that respond selectively to faces, and that it participates in emotion and social behaviour. Although studies in epileptic patients support its role in emotion, determination of the amygdala's function in humans has been hampered by the rarity of patients with selective amygdala lesions. Here, with the help of one such rare patient, we report findings that suggest the human amygdala may be indispensable to: (1) recognize fear in facial expressions; (2) recognize multiple emotions in a single facial expression; but (3) is not required to recognize personal identity from faces. These results suggest that damage restricted to the amygdala causes very specific recognition impairments, and thus constrains the broad notion that the amygdala is involved in emotion.

2,091 citations


Journal ArticleDOI
TL;DR: A review of the methods used in that research raises questions of its ecological, convergent, and internal validity; forced-choice response format, within-subject design, preselected photographs of posed facial expressions, and other features of method are each problematic.
Abstract: Emotions are universally recognized from facial expressions--or so it has been claimed. To support that claim, research has been carried out in various modern cultures and in cultures relatively isolated from Western influence. A review of the methods used in that research raises questions of its ecological, convergent, and internal validity. Forced-choice response format, within-subject design, preselected photographs of posed facial expressions, and other features of method are each problematic. When they are altered, less supportive or nonsupportive results occur. When they are combined, these method factors may help to shape the results. Facial expressions and emotion labels are probably associated, but the association may vary with culture and is loose enough to be consistent with various alternative accounts, 8 of which are discussed.

1,449 citations


Book
26 Aug 1994
TL;DR: In this article, a review of cross-cultural studies of facial expressions of emotion is presented, with a focus on universal recognition of emotion from facial expressions and how to account for both universal and regional variations in facial expressions.
Abstract: Pre-Darwinian Views on Facial Expression. Darwin's Anti-Darwinism in Expression of the Emotions in Man and Animals. Facial Expression and the Methods of Contemporary Evolutionary Research. Mechanisms for the Evolution of Facial Expressions. Facial Hardware: The Nerves and Muscles of the Face. Facial Reflexes and the Ontogeny of Facial Displays. Emotions Versus Behavioural Ecology Views of Facial Expression: Theory and Concepts. Emotions Versus Behavioural Ecology Views of Facial Expression: The State of the Evidence. Introduction: Cross-Cultural Studies of Facial Expressions of Emotion. Is There Universal Recognition of Emotion from Facial Expression? A Review of Cross-Cultural Studies, by James A. Russell. How Do We Account for Both Universal and Regional Variations in Facial Expressions of Emotion? Facial Paralanguage and Gesture. Conclusion: The Study of Facial Displays: Where Do We Go from Here? References. Index.

1,144 citations


Proceedings ArticleDOI
24 Jul 1994
TL;DR: An implemented system which automatically generates and animates conversations between multiple human-like agents with appropriate and synchronized speech, intonation, facial expressions, and hand gestures is described.
Abstract: We describe an implemented system which automatically generates and animates conversations between multiple human-like agents with appropriate and synchronized speech, intonation, facial expressions, and hand gestures. Conversation is created by a dialogue planner that produces the text as well as the intonation of the utterances. The speaker/listener relationship, the text, and the intonation in turn drive facial expressions, lip motions, eye gaze, head motion, and arm gestures generators. Coordinated arm, wrist, and hand motions are invoked to create semantically meaningful gestures. Throughout we will use examples from an actual synthesized, fully animated conversation.

728 citations


Journal ArticleDOI
TL;DR: The 17-item Emotional Expressivity Scale (EES) was designed as a self-report measure of the extent to which people outwardly display their emotions, and reliability studies showed the EES to be an internally consistent and stable individual-difference measure.
Abstract: Although emotional expressivity figures prominently in several theories of psychological and physical functioning, limitations of currently available measurement techniques impede precise and economical testing of these theories. The 17-item Emotional Expressivity Scale (EES) was designed as a self-report measure of the extent to which people outwardly display their emotions. Reliability studies showed the EES to be an internally consistent and stable individual-difference measure. Validational studies established initial convergent and discriminant validities, a moderate relationship between self-rated and other-rated expression, and correspondence between self-report and laboratory-measured expressiveness using both college student and community populations. The potential for the EES to promote and integrate findings across diverse areas of research is discussed. Other people's emotional expressions hold a certain fascination for nearly everyone. News agencies always provide images of politicians' expressions on winning and losing elections. Reports of court cases routinely mention the defendant's emotional expressions during the reading of the verdict. Winning and losing locker-room photographs attempt to capture sports figures' expressive reactions. This level of fascination is probably supported by the belief that something unique and interesting is communicated by emotional expressions, something that words may at times fail to express. As Fritz Perls (1969), the founder of Gestalt therapy, put it: "What we say is mostly either lies or bullshit. But the voice is there, the gesture, the posture, the facial expression" (p. 54). People vary in the extent to which they outwardly exhibit emotions, and these differences have long posed unique and interesting challenges to psychologists. Indeed, emotional expressiveness has captured the attention of researchers interested in areas as diverse as nonverbal communication, psychopathology, personality, social psychology, and health psychology. This article reports on the development of a new self-report measure capturing the general disposition to outwardly express emotion. At the outset, it is worth addressing several crucial questions. Can emotional expressiveness be defined operationally? Is emo

467 citations
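The internal-consistency claim above rests on a standard statistic, Cronbach's alpha. The following minimal Python sketch shows how alpha could be computed for a 17-item scale such as the EES; the response matrix is simulated for illustration and is not the authors' data.

import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) response matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

# Simulated responses: 200 respondents, 17 Likert-type items scored 1-6.
rng = np.random.default_rng(0)
trait = rng.normal(size=(200, 1))                   # latent expressivity
responses = np.clip(np.round(3.5 + trait + rng.normal(scale=1.0, size=(200, 17))), 1, 6)

print(f"alpha = {cronbach_alpha(responses):.2f}")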


Journal ArticleDOI
TL;DR: The results support a theory of disgust that posits its origin as a response to bad tastes and maps its evolution onto a moral emotion.
Abstract: In 3 facial expression identification studies, college students matched a variety of disgust faces to verbally described eliciting situations. The faces depicted specific muscle action movements in accordance with P. Ekman and W. V. Friesen's (1978) Facial Action Coding System. The nose wrinkle is associated with either irritating or offensive smells and, to some extent, bad tastes. Gape and tongue extrusion are associated primarily with what we call core or food-offense disgust and also oral irritation. The broader range of disgust elicitors, including stimuli that remind humans of their animal origins (e.g., body boundary violations, inappropriate sex, poor hygiene, and death), a variety of aversive interpersonal contacts, and certain moral offenses are associated primarily with the raised upper lip. The results support a theory of disgust that posits its origin as a response to bad tastes and maps its evolution onto a moral emotion.

344 citations


Journal ArticleDOI
TL;DR: It is suggested that behaviors and facial expressions are fundamental expressive units flexibly organized into configurations that convey messages about the infant's internal state and intentions that are related to specific interactive contexts.
Abstract: This article evaluates the extent to which infants' expressive modalities of face, gaze, voice, gesture, and posture form coherent affective configurations and whether these configurations are related to specific interactive contexts. 50 6-month-old infants and their mothers were videotaped in Tronick's Face-to-Face Still-Face Paradigm. The infants' gaze, voice, gestures, self-regulatory, and withdrawal behaviors were coded with the Infant Regulatory Scoring System (IRSS). The infants' facial expressions were coded with Izard's AFFEX system. Contingency analyses of IRSS behaviors and AFFEX expressions revealed 4 distinct affective configurations: Social Engagement, Object Engagement, Passive Withdrawal, and Active Protest. These affective configurations were differentially distributed among the different interactive contexts of the Face-to-Face Still-Face Paradigm. It is suggested that behaviors and facial expressions are fundamental expressive units flexibly organized into configurations that convey messages about the infant's internal state and intentions. Furthermore, it is hypothesized that the basic units of the infant's experience are these distinct affective configurations of emotion and behavior.

319 citations


Journal ArticleDOI
TL;DR: In this paper, the authors examined the relationship between facial expression and self-report of emotion at multiple points in time during an affective episode and found a high degree of temporal linkage and categorical agreement between facial expressions and reports of emotion at specific moments in film time.
Abstract: In order to assess the extent of coherence in emotional response systems, we examined the relationship between facial expression and self-report of emotion at multiple points in time during an affective episode. We showed subjects brief films that were selected for their ability to elicit disgust and fear, and we asked them to report on their emotions using a new reporting procedure. This procedure, called cued-review, allows subjects to rate the degree to which they experienced each of several categories of emotion for many locations over the time interval of a stimulus period. When facial expressions and reports of emotion were analysed for specific moments in film time, there was a high degree of temporal linkage and categorical agreement between facial expression and self-report, as predicted. Coherence was even stronger for more intense emotional events. This is the first evidence of linkage between facial expression and self-report of emotion on a momentary basis.

307 citations
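Temporal linkage of the kind reported above is often quantified as the correlation, over film time, between an expression-intensity series and a self-report series. The Python sketch below illustrates that computation on invented time series; it is not the authors' cued-review analysis.

import numpy as np

rng = np.random.default_rng(1)

# Hypothetical series sampled once per second of film time.
t = np.arange(120)
emotional_event = np.exp(-0.5 * ((t - 60) / 8.0) ** 2)          # an emotional peak mid-film
facial_intensity = emotional_event + 0.2 * rng.normal(size=t.size)
self_report = emotional_event + 0.3 * rng.normal(size=t.size)

# Temporal linkage: correlation of the two series over film time.
r = np.corrcoef(facial_intensity, self_report)[0, 1]
print(f"momentary facial/self-report correlation r = {r:.2f}")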


Journal ArticleDOI
TL;DR: In this paper, the authors examined automatic elicitation of conditioned skin conductance responses (SCRs) when a backward masking procedure prevented the subject's conscious awareness of the conditioned stimuli (CSs).
Abstract: This study examined automatic elicitation of conditioned skin conductance responses (SCRs), when a backward masking procedure prevented the subject's conscious awareness of the conditioned stimuli (CSs). The CSs were pictures of emotional facial expressions. A differential conditioning procedure was used. One facial expression (e.g. an angry face) was aversively conditioned by a shock unconditioned stimulus, whereas another facial expression (e.g. a happy face) was never presented with the shock. After conditioning, the CSs were presented backwardly masked by a neutral face. This procedure prevented conscious perception of the CS. Nevertheless, reliable differential SCRs were obtained when the CS had been an angry face. This effect, however, was dependent on the subject's direction of attention. When attention was focused on the mask, no differential responding was observed. Thus it was concluded that, when fear-relevant stimuli (angry faces) served as the CS, elicitation of SCRs was automatic in...

295 citations


Journal ArticleDOI
TL;DR: It is shown that the method of radial basis functions provides a powerful mechanism for processing facial expressions and is applicable to other elastic objects as well.

277 citations
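As a rough illustration of the radial-basis-function idea (interpolating a smooth deformation of an elastic surface from a sparse set of control points), the Python sketch below warps 2D facial landmarks with a Gaussian RBF. The kernel, its width, and the landmark coordinates are assumptions made for the example, not values from the paper.

import numpy as np

def rbf_warp(src_pts, dst_pts, query_pts, sigma=0.3):
    """Interpolate a 2D deformation field from control-point displacements
    using a Gaussian radial basis function."""
    d2 = ((src_pts[:, None, :] - src_pts[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / (2 * sigma ** 2))            # kernel matrix between control points
    W = np.linalg.solve(K, dst_pts - src_pts)     # weights reproducing the displacements
    d2q = ((query_pts[:, None, :] - src_pts[None, :, :]) ** 2).sum(-1)
    Kq = np.exp(-d2q / (2 * sigma ** 2))
    return query_pts + Kq @ W                     # displaced query points

# Invented control points: mouth corners move up, suggesting a smile.
neutral = np.array([[0.3, 0.4], [0.7, 0.4], [0.5, 0.35], [0.5, 0.2]])
smiling = np.array([[0.28, 0.45], [0.72, 0.45], [0.5, 0.37], [0.5, 0.2]])
grid = np.stack(np.meshgrid(np.linspace(0, 1, 5), np.linspace(0, 1, 5)), -1).reshape(-1, 2)

warped = rbf_warp(neutral, smiling, grid)
print(warped.round(3))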


Proceedings ArticleDOI
21 Jun 1994
TL;DR: An approach for analysis and representation of facial dynamics for recognition of facial expressions from image sequences is proposed and a mid-level symbolic representation that is motivated by linguistic and psychological considerations is developed.
Abstract: An approach for analysis and representation of facial dynamics for recognition of facial expressions from image sequences is proposed. The algorithms we develop utilize optical flow computation to identify the direction of rigid and non-rigid motions that are caused by human facial expressions. A mid-level symbolic representation that is motivated by linguistic and psychological considerations is developed. Recognition of six facial expressions, as well as eye blinking, on a large set of image sequences is reported.
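The flow-based analysis described above can be approximated, very loosely, with an off-the-shelf dense optical flow routine. The Python sketch below uses OpenCV's Farneback method and a hand-picked mouth region, neither of which comes from the paper, just to show how motion direction within a facial region might be summarized.

import cv2
import numpy as np

def mouth_motion(prev_gray: np.ndarray, next_gray: np.ndarray, mouth_box):
    """Return the mean vertical flow inside a (hypothetical) mouth region.
    Negative values mean upward motion, e.g. mouth corners rising."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    x0, y0, x1, y1 = mouth_box
    return float(flow[y0:y1, x0:x1, 1].mean())

# Two synthetic frames: a bright blob that shifts upward between frames.
prev = np.zeros((120, 120), np.uint8)
nxt = np.zeros_like(prev)
cv2.circle(prev, (60, 90), 10, 255, -1)
cv2.circle(nxt, (60, 85), 10, 255, -1)

print("mean vertical flow in mouth box:", mouth_motion(prev, nxt, (40, 70, 80, 110)))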

Proceedings ArticleDOI
21 Jun 1994
TL;DR: By interpreting facial motions within a physics-based optimal estimation framework, a new control model of facial movement is developed and the newly extracted action units are both physics and geometry-based, and extend the well-known FACS parameters for facial expressions by adding temporal information and non-local spatial patterning of facial motion.
Abstract: We describe a computer vision system for observing the "action units" of a face using video sequences as input. The visual observation (sensing) is achieved by using an optimal estimation optical flow method coupled with a geometric and a physical (muscle) model describing the facial structure. This modeling results in a time-varying spatial patterning of facial shape and a parametric representation of the independent muscle action groups, responsible for the observed facial motions. These muscle action patterns may then be used for analysis, interpretation, and synthesis. Thus, by interpreting facial motions within a physics-based optimal estimation framework, a new control model of facial movement is developed. The newly extracted action units (which we name "FACS+") are both physics and geometry-based, and extend the well-known FACS parameters for facial expressions by adding temporal information and non-local spatial patterning of facial motion.
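One heavily simplified way to think about recovering muscle activations from observed facial motion, in the spirit of the physics-based framework above, is as a linear inverse problem: each muscle group contributes a known displacement pattern, and the observed motion field is decomposed onto those patterns. The basis, dimensions, and activation values in the Python sketch below are invented for illustration; the paper itself uses an optimal estimation (Kalman-style) formulation rather than plain least squares.

import numpy as np

rng = np.random.default_rng(2)

# Hypothetical linear model: motion = B @ activations + noise,
# where each column of B is the displacement pattern of one muscle group.
n_points, n_muscles = 40, 6
B = rng.normal(size=(2 * n_points, n_muscles))          # per-muscle flow patterns
true_act = np.array([0.8, 0.0, 0.3, 0.0, 0.0, 0.5])     # invented activation levels
observed = B @ true_act + 0.05 * rng.normal(size=2 * n_points)

# Least-squares estimate of the activations from the observed motion field.
est_act, *_ = np.linalg.lstsq(B, observed, rcond=None)
print(np.round(est_act, 2))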

Proceedings Article
01 Jan 1994
TL;DR: In this paper, an approach for analysis and representation of facial dynamics for recognition of facial expressions from image sequences is proposed, which utilizes optical flow computation to identify the direction of rigid and non-rigid motions that are caused by human facial expressions.
Abstract: An approach for analysis and representation of facial dynamics for recognition of facial expressions from image sequences is proposed. The algorithms we develop utilize optical flow computation to identify the direction of rigid and non-rigid motions that are caused by human facial expressions. A mid-level symbolic representation that is motivated by linguistic and psychological considerations is developed. Recognition of six facial expressions, as well as eye blinking, on a large set of image sequences is reported.

Journal ArticleDOI
TL;DR: This paper proposes new methods for analyzing image sequences and updating textures of the three-dimensional (3-D) facial model and presents two methods for updating the texture of the facial model to improve the quality of the synthesized images.
Abstract: This paper proposes new methods for analyzing image sequences and updating textures of the three-dimensional (3-D) facial model. It also describes a method for synthesizing various facial expressions. These three methods are the key technologies for the model-based image coding system. The input image analysis technique directly and robustly estimates the 3-D head motions and the facial expressions without any two-dimensional (2-D) entity correspondences. This technique resolves the 2-D correspondence mismatch errors and provides quality reproduction of the original images by fully incorporating the synthesis rules. To verify the analysis algorithm, the paper performs quantitative and subjective evaluations. It presents two methods for updating the texture of the facial model to improve the quality of the synthesized images. The first method focuses on the facial parts with large change of brightness according to the various facial expressions for reducing the transmission bit rates. The second method focuses on all changes of brightness caused by the 3-D head motions as well as the facial expressions. The transmission bit rates are estimated according to the update methods. For synthesizing the output images, it describes rules that simulate the facial muscular actions because the muscles cause the facial expressions. These rules more easily synthesize the high-quality facial images that represent the various facial expressions.
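The first texture-update strategy mentioned above, refreshing only the facial parts whose brightness changes most, can be illustrated with a small patch-selection sketch in Python. The patch size and threshold are arbitrary assumptions, not values from the paper.

import numpy as np

def patches_to_update(prev_tex, new_tex, patch=16, threshold=8.0):
    """Return (row, col) origins of patches whose mean absolute
    brightness change exceeds the threshold."""
    h, w = prev_tex.shape
    selected = []
    for r in range(0, h - patch + 1, patch):
        for c in range(0, w - patch + 1, patch):
            diff = np.abs(new_tex[r:r+patch, c:c+patch].astype(float)
                          - prev_tex[r:r+patch, c:c+patch].astype(float)).mean()
            if diff > threshold:
                selected.append((r, c))
    return selected

# Synthetic textures: only the mouth area brightens (e.g. an open-mouth expression).
prev = np.full((128, 128), 100, np.uint8)
new = prev.copy()
new[80:112, 48:80] += 40

print("patches to retransmit:", patches_to_update(prev, new))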

Proceedings Article
01 Jan 1994
TL;DR: In this paper, a computer vision system for observing the "action units" of a face using video sequences as input is described; the visual observation is achieved by using an optimal estimation optical flow method coupled with a geometric and a physical (muscle) model describing the facial structure.
Abstract: We describe a computer vision system for observing the "action units" of a face using video sequences as input. The visual observation (sensing) is achieved by using an optimal estimation optical flow method coupled with a geometric and a physical (muscle) model describing the facial structure. This modeling results in a time-varying spatial patterning of facial shape and a parametric representation of the independent muscle action groups, responsible for the observed facial motions. These muscle action patterns may then be used for analysis, interpretation, and synthesis. Thus, by interpreting facial motions within a physics-based optimal estimation framework, a new control model of facial movement is developed. The newly extracted action units (which we name "FACS+") are both physics and geometry-based, and extend the well-known FACS parameters for facial expressions by adding temporal information and non-local spatial patterning of facial motion.

Journal ArticleDOI
TL;DR: Three dynamic face-processing tasks based on the Bruce & Young (1986) functional model of face processing were presented to 10 schizophrenic and 10 depressed inpatients and to 10 non-patient subjects, and there was a differential pattern of group performance on each of the three tasks.
Abstract: Three dynamic face-processing tasks based on the Bruce & Young (1986) functional model of face processing were presented to 10 schizophrenic and 10 depressed inpatients and to 10 non-patient subjects. Familiar face recognition, facial expression recognition and unfamiliar face matching were examined. Schizophrenic patients' performance was significantly poorer than that of depressed patients and non-patient controls. Significantly lower scores were obtained on the facial expression recognition task than on the familiar face recognition task. There was a differential pattern of group performance on each of the three tasks: schizophrenic and depressed patients were as accurate as non-patient controls on the familiar face recognition task, but significantly less accurate than non-patient controls on the unfamiliar face-matching task. Schizophrenic patients were significantly less accurate than depressed patients and non-patient controls on the facial expression recognition task. The results are contrasted with an analogous static face-processing study.

Journal ArticleDOI
01 Aug 1994-Pain
TL;DR: Rosenthal's (1982) model of communication was applied to an analysis of the role of facial expression in the transmission of pain information and indicated that although observers can make coarse distinctions among patients' pain states, they are likely to systematically downgrade the intensity of patients' suffering.
Abstract: The communication of pain requires a sufferer to encode and transmit the experience and an observer to decode and interpret it. Rosenthal's (1982) model of communication was applied to an analysis of the role of facial expression in the transmission of pain information. Videotapes of patients with shoulder pain undergoing a series of movements of the shoulder were shown to a group of 5 judges. Observers and patients provided ratings of the patients' pain on the same verbal descriptor scales. Analyses addressed relationships among patients' pain reports, observers' judgements of patients' pain and measures of patients' facial expressions based on the Facial Action Coding System. The results indicated that although observers can make coarse distinctions among patients' pain states, they (1) are not especially sensitive, and (2) are likely to systematically downgrade the intensity of patients' suffering. Moreover, observers appear to make insufficient use of information that is available in patients' facial expression. Implications of the findings for pain patients and for training of health-care workers are discussed as are directions for future research.

Journal ArticleDOI
TL;DR: It is found that, provided the faces are rated with hair concealed, reasonable correlations can be achieved between their physical deviation and their rated distinctiveness, confirming the theory of Vokey and Read (1992) that the typicality/distinctiveness dimension can be broken down into two orthogonal components: “memorability” and “context-free familiarity”.
Abstract: In this study we examine the relationship between objective aspects of facial appearance and facial “distinctiveness”. Specifically, we examine whether the extent to which a face deviates from “ave...

Journal ArticleDOI
TL;DR: The findings suggest that the elderly have more difficulty processing negative affect, while their ability to process positive affect remains intact, which lends only partial support to the right hemi-aging hypothesis.
Abstract: The hypothesis that the right cerebral hemisphere declines more quickly than the left cerebral hemisphere in the normal aging process was tested using accuracy and intensity measures in a facial recognition test and using response time and response bias measures in a tachistoscopic paradigm. Elderly and younger men and women (N = 60) participated in both experiments. Experiment I required facial affect identification and intensity ratings of 50 standardized photographs of 5 affective categories: Happy, Neutral, Sad, Angry, and Fearful. The elderly were significantly less accurate in identifying facial affective valence. This effect was found using negative and neutral expressions. Results for happy expressions, however, were consistent with the younger group. In Experiment 2, age differences in hemispheric asymmetry were evaluated using presentation of affective faces in each visual field. Following prolonged experience with the affective stimuli during Experiment 1, the elderly showed heightened cerebral...

Journal ArticleDOI
TL;DR: This paper summarizes the readout position, answers Fridlund's criticisms identifying it with the different notion of "spillover," and contends that the expressive readout functions in spontaneous communication.

Journal ArticleDOI
TL;DR: The cluster of facial activity associated with pain in this sample, using either measure, was similar to the cluster of facial activity related to pain in adults and other newborns, providing construct validity for the position that the face encodes painful distress in infants and adults.
Abstract: Facial activity is strikingly visible in infants reacting to noxious events. Two measures that reduce this activity to composite events, the Neonatal Facial Coding System (NFCS) and the Facial Action Coding System (FACS), were used to examine facial expressions of 56 neonates responding to routine heel lancing for blood sampling purposes. The NFCS focuses upon a limited subset of all possible facial actions that had been identified previously as responsive to painful events, whereas the FACS is a comprehensive system that is inclusive of all facial actions. Descriptions of the facial expressions obtained from the two measurement systems were very similar, supporting the convergent validity of the shorter, more readily applied system. As well, the cluster of facial activity associated with pain in this sample, using either measure, was similar to the cluster of facial activity associated with pain in adults and other newborns, both full-term and preterm, providing construct validity for the position that the face encodes painful distress in infants and adults.

Journal ArticleDOI
TL;DR: This paper explored the role of mimicry and self-perception processes in emotional contagion and found that subjects who visibly moved to mimic the behavior of the actor were significantly more likely to be those who were more responsive to self-produced cues.
Abstract: Two experiments explored the role of mimicry and self-perception processes in emotional contagion. In Study 1, 46 subjects watched two brief film clips depicting an episode of startled fear. In a separate procedure, subjects adopted facial expressions of emotion, and reported whether the expressions had caused them to feel corresponding emotions. Those who reported feeling the emotions were identified as more responsive to self-produced cues for feeling. Subjects who visibly moved to mimic the behavior of the actor were significantly more likely to be those who were more responsive to self-produced cues. In Study 2, 57 subjects watched three film clips depicting happy people. During clips when they inhibited the movements of their faces, subjects reported less happiness than during clips when they moved naturally and were able to mimic, or when they exaggerated their movements. This effect occurred only among subjects who, in a separate procedure, had been identified as more responsive to self-produced cues.

Journal ArticleDOI
TL;DR: In this article, display rule behavior and understanding were compared in 72 4- to 6-year-old boys and girls listening to stories in which the protagonist was in a positive or in a negative mood.
Abstract: Display rule behavior and understanding were compared in 72 4- to 6-year-old boys and girls. In Study 1, children listened to stories in which the protagonist was in a positive or in a negative mood. The motivation to hide his or her emotional state was either prosocial or self-centered. Stories with no discrepancy between feeling and expression were included as a control condition. Subjects were asked to identify the protagonist's real feelings and facial expression. Older children were more accurate than younger ones in recognizing that real and apparent emotions did not coincide in the self-centered and prosocial stories. Girls produced more correct answers than boys in the prosocial condition. In Study 2, children were examined in a situation in which they were expected to hide their disappointment about an unattractive gift. They were either observed (social situation) or not observed (nonsocial situation) by the experimenter when receiving the gift. Irrespective of age, preschoolers regulated their nonverbal behavior appropriately in the social situation. The comparison of both data sets revealed that even younger preschoolers follow display rules in their behavior before fully grasping the distinction between real and apparent emotions.

Proceedings ArticleDOI
11 Nov 1994
TL;DR: By integrating real-time 2D image-processing with 3D models, this work obtains a system that is able to quickly track and interpret complex facial motions.
Abstract: We describe a computer system that allows real-time tracking of facial expressions. Sparse, fast visual measurements using 2D templates are used to observe the face of a subject. Rather than track features on the face, the distributed response of a set of templates is used to characterize a given facial region. These measurements are coupled via a linear interpolation method to states in a physically-based model of facial animation, which includes both skin and muscle dynamics. By integrating real-time 2D image-processing with 3D models we obtain a system that is able to quickly track and interpret complex facial motions.
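The coupling step described above, in which template responses are mapped to facial-model states by linear interpolation, can be caricatured as fitting a linear map from example pairs and applying it to each incoming frame. The dimensions and training pairs in the Python sketch below are invented for the example and do not correspond to the system's actual templates or model states.

import numpy as np

rng = np.random.default_rng(3)

# Hypothetical training data: template-response vectors paired with known
# facial-model states (e.g. muscle activations) for a few example expressions.
n_templates, n_states, n_examples = 12, 4, 30
M_true = rng.normal(size=(n_states, n_templates))
responses = rng.normal(size=(n_examples, n_templates))
states = responses @ M_true.T + 0.01 * rng.normal(size=(n_examples, n_states))

# Fit the linear map once (least squares), then apply it per frame.
M_fit, *_ = np.linalg.lstsq(responses, states, rcond=None)   # (n_templates, n_states)

def track_frame(template_response: np.ndarray) -> np.ndarray:
    """Estimate model states for one incoming frame's template responses."""
    return template_response @ M_fit

print(np.round(track_frame(responses[0]), 2))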

Journal ArticleDOI
TL;DR: The authors examined preschool children's decoding and encoding of facial emotions and gestures, interrelationships between these skills, and the relationship between the skills and children's popularity, and found that children performed better on decoding than encoding tasks, suggesting that nonverbal comprehension precedes production.
Abstract: This study examined preschool children's decoding and encoding of facial emotions and gestures, interrelationships between these skills, and the relationship between these skills and children's popularity. Subjects were 34 preschoolers (eighteen 4-year-olds, sixteen 5-year-olds), with an equal number of boys and girls. Children's nonverbal skill was measured on four tasks: decoding emotions, decoding gestures, encoding facial emotions, and encoding gestures. Children's popularity was measured by teacher ratings. Analyses revealed the following major findings: (a) There were no age or gender effects on performance on any of the tasks. (b) Children performed better on decoding than encoding tasks, suggesting that nonverbal comprehension precedes production. Also, children appeared better at facial emotion skills than gesture skills. There were significant correlations between decoding and encoding gestures, and between encoding gestures and encoding emotions. (c) Multiple regression analyses indicated that encoding emotions and decoding gestures were marginally predictive of popularity. In addition, when children's scores on the four tasks were combined via z-score transformations, children's aggregate nonverbal skill correlated significantly with peer popularity.

Journal ArticleDOI
TL;DR: This article found that participants were relatively poor at identifying posed versus spontaneous expressions, and this low discrimination accuracy was a function of consistent use of invalid cues; a measure of the perceived honest demeanour of the stimulus persons, based on their neutral expressions, was found to relate to perceivers' accuracy in discriminating posed and spontaneous expressions.
Abstract: Dynamic facial expressions, either posed or elicited by affectively evocative materials, were objectively scored to determine the movement cues and temporal parameters associated with the two types of expression. Subjects viewed these expressive episodes and rated each of them on a number of scales intended to assess perceived spontaneousness and deliberateness. Subsequent to viewing all stimuli, subjects reported the specific cues that they felt they had used to discriminate spontaneous from deliberate expressions. The results reveal that (a) subjects were able to accurately report the cues they employed in the rating task and that (b) these cues were not always valid discriminators of posed and spontaneous expressions. Subjects were in fact relatively poor at identifying expressions of the two types, and this low discrimination accuracy was found to be a function of the consistent use of these invalid cues. A measure of the level of perceived 'honest demeanour' of the stimulus persons based on their neutral expressions was found to relate to perceivers' accuracy in discriminating posed and spontaneous expressions.

Proceedings Article
05 Oct 1994
TL;DR: A new approach to human-computer interaction, called social interaction, is presented; it realizes a social agent that listens to human-to-human conversation and points out what is causing a misunderstanding.
Abstract: We present a new approach to human-computer interaction, called social interaction. Its main characteristics are summarized by the following three points. First, interactions are realized as multimodal (verbal and nonverbal) conversation using spoken language, facial expressions, and so on. Second, the conversants are a group of humans and social agents that are autonomous and social. Autonomy is an important property that allows agents to decide how to act in an ever-changing environment. Socialness is also an important property that allows agents to behave both cooperatively and collaboratively. Generally, conversation is a joint work and ill-structured. Its participants are required to be social as well as autonomous. Third, conversants often encounter communication mismatches (misunderstanding others' intentions and beliefs) and fail to achieve their joint goals. The social agents, therefore, are always concerned with detecting communication mismatches. We realize a social agent that hears human-to-human conversation and informs what is causing the misunderstanding. It can also interact with humans by voice with facial displays and head (and eye) movement.

Proceedings ArticleDOI
27 Jun 1994
TL;DR: In this article, the authors developed an experimental system that integrates speech dialogue and facial animation, to investigate the effect of introducing communicative facial expressions as a new modality in human-computer conversation.
Abstract: Human face-to-face conversation is an ideal model for human-computer dialogue. One of the major features of face-to-face communication is its multiplicity of communication channels that act on multiple modalities. To realize a natural multimodal dialogue, it is necessary to study how humans perceive information and determine the information to which humans are sensitive. A face is an independent communication channel that conveys emotional and conversational signals, encoded as facial expressions. We have developed an experimental system that integrates speech dialogue and facial animation, to investigate the effect of introducing communicative facial expressions as a new modality in human-computer conversation. Our experiments have shown that facial expressions are helpful, especially upon first contact with the system. We have also discovered that featuring facial expressions at an early stage improves subsequent interaction.

Journal ArticleDOI
TL;DR: Cry would seem to command attention, but facial activity, rather than cry, can account for the major variations in adults' judgments of neonatal pain.
Abstract: Explored the facial and cry characteristics that adults use when judging an infant's pain. Sixteen women viewed videotaped reactions of 36 newborns subjected to noninvasive thigh rubs and vitamin K injections in the course of routine care and rated discomfort. The group mean interrater reliability was high. Detailed descriptions of the infants' facial reactions and cry sounds permitted specification of the determinants of distress judgments. Several facial variables (a brow bulge, eyes squeezed shut, and deepened nasolabial fold constellation, and taut tongue) accounted for 49% of the variance in ratings of affective discomfort after controlling for ratings of discomfort during a noninvasive event. In a separate analysis not including facial activity, several cry variables (formant frequency, latency to cry) also accounted for variance (38%) in ratings. When the facial and cry variables were considered together, cry variables added little to the prediction of ratings in comparison to facial variables. Cry would seem to command attention, but facial activity, rather than cry, can account for the major variations in adults' judgments of neonatal pain.