
Showing papers on "Facial Action Coding System published in 2007"


Journal ArticleDOI
TL;DR: The experiments show that the integration of AU relationships and AU dynamics with AU measurements yields significant improvement of AU recognition, especially for spontaneous facial expressions and under more realistic environments including illumination variation, face pose variation, and occlusion.
Abstract: A system that could automatically analyze facial actions in real time would have applications in a wide range of fields. However, developing such a system is challenging due to the richness, ambiguity, and dynamic nature of facial actions. Although a number of research groups have attempted to recognize facial action units (AUs) by improving either the facial feature extraction techniques or the AU classification techniques, these methods often recognize AUs or certain AU combinations individually and statically, ignoring the semantic relationships among AUs and the dynamics of AUs. Hence, these approaches cannot always recognize AUs reliably, robustly, and consistently. In this paper, we propose a novel approach that systematically accounts for the relationships among AUs and their temporal evolution for AU recognition. Specifically, we use a dynamic Bayesian network (DBN) to model the relationships among different AUs. The DBN provides a coherent and unified hierarchical probabilistic framework to represent probabilistic relationships among various AUs and to account for the temporal changes in facial action development. Within our system, robust computer vision techniques are used to obtain AU measurements. Such AU measurements are then applied as evidence to the DBN for inferring various AUs. The experiments show that the integration of AU relationships and AU dynamics with AU measurements yields significant improvement of AU recognition, especially for spontaneous facial expressions and under more realistic environments including illumination variation, face pose variation, and occlusion.
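The DBN in the paper is learned from data; as a rough, hand-made illustration of the underlying filtering idea only (not the authors' model), the sketch below runs forward inference over a toy two-AU dynamic network in which the belief about AU6 (cheek raiser) at each frame depends on its previous state, on a related AU12 (lip corner puller) track, and on a noisy per-frame detector. All probabilities, AU choices, and detector outputs are invented for illustration.

```python
import numpy as np

# Toy two-AU dynamic model: AU12 (lip corner puller) influences AU6 (cheek raiser).
# States are binary (0 = absent, 1 = present). All numbers are illustrative only.

# P(AU6_t | AU6_{t-1}, AU12_t): indexed as [au6_prev, au12_now, au6_now]
trans_au6 = np.array([
    [[0.95, 0.05],   # AU6 was absent, AU12 absent now
     [0.70, 0.30]],  # AU6 was absent, AU12 present now (smile pulls the cheek raiser along)
    [[0.40, 0.60],   # AU6 was present, AU12 absent now
     [0.10, 0.90]],  # AU6 was present, AU12 present now
])

# P(detector output | AU6): a noisy detector that fires more often when AU6 is present.
emit_au6 = np.array([
    [0.80, 0.20],    # AU6 absent  -> P(detector says 0), P(detector says 1)
    [0.25, 0.75],    # AU6 present -> P(detector says 0), P(detector says 1)
])

def filter_au6(detections_au6, au12_track, prior=np.array([0.9, 0.1])):
    """Forward filtering of P(AU6_t | evidence_1..t) given a hard AU12 track."""
    belief = prior.copy()
    history = []
    for z, au12 in zip(detections_au6, au12_track):
        # Predict: sum over the previous AU6 state, conditioned on the current AU12.
        predicted = belief @ trans_au6[:, au12, :]
        # Update: weight by the detector likelihood and renormalize.
        belief = predicted * emit_au6[:, z]
        belief /= belief.sum()
        history.append(belief[1])
    return history

# Frame-by-frame detector output for AU6 and an (assumed known) AU12 track.
print(filter_au6(detections_au6=[0, 1, 1, 1, 0], au12_track=[0, 1, 1, 1, 1]))
```

In the actual system the network spans many AUs with learned structure and parameters, and the evidence comes from the computer-vision AU measurements described above.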

404 citations


01 Jan 2007
TL;DR: FACS is regarded by many as the standard measure for facial behavior and is used widely in diverse fields; beyond emotion science, these include facial neuromuscular disorders.
Abstract: to name a few. Because of its importance to the study of emotion, a number of observer-based systems of facial expression measurement have been developed. Using FACS and viewing video-recorded facial behavior at frame rate and in slow motion, coders can manually code nearly all possible facial expressions, which are decomposed into action units (AUs). Action units, with some qualifications, are the smallest visually discriminable facial movements. By comparison, other systems are less thorough (Malatesta et al., 1989), fail to differentiate between some anatomically distinct movements (Oster, Hegley, & Nagel, 1992), consider movements that are not anatomically distinct as separable (Oster et al., 1992), and often assume a one-to-one mapping between facial expression and emotion (for a review of these systems, see Cohn & Ekman, in press). Unlike systems that use emotion labels to describe expression, FACS explicitly distinguishes between facial actions and inferences about what they mean. FACS itself is descriptive and includes no emotion-specified descriptors. Hypotheses and inferences about the emotional meaning of facial actions are extrinsic to FACS. If one wishes to make emotion-based inferences from FACS codes, a variety of related resources exist, including the FACS Investigators' Guide. These resources suggest combination rules for defining emotion-specified expressions from FACS action units, but this inferential step remains extrinsic to FACS. Because of its descriptive power, FACS is regarded by many as the standard measure for facial behavior and is used widely in diverse fields. Beyond emotion science, these include facial neuromuscular disorders.
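The emotion-inference step described here as extrinsic to FACS can be pictured as a simple lookup from coded AUs to candidate emotion labels. The sketch below uses a few commonly cited prototype AU combinations as stand-ins; these are simplified textbook examples, not the official FACS Investigators' Guide or EMFACS rules, which also take AU intensity and timing into account.

```python
# Simplified emotion-prototype lookup over FACS action units (AUs).
# The combinations below are commonly cited textbook prototypes, not the
# official FACS/EMFACS rules, and real coding also uses AU intensities.
PROTOTYPES = {
    "happiness": {6, 12},          # cheek raiser + lip corner puller
    "sadness":   {1, 4, 15},       # inner brow raiser + brow lowerer + lip corner depressor
    "surprise":  {1, 2, 5, 26},    # brow raisers + upper lid raiser + jaw drop
    "anger":     {4, 5, 7, 23},    # brow lowerer + lid/eye actions + lip tightener
    "disgust":   {9, 15},          # nose wrinkler + lip corner depressor
}

def emotion_candidates(coded_aus):
    """Return prototype emotions whose AU sets are fully contained in the coded AUs."""
    coded = set(coded_aus)
    return [label for label, aus in PROTOTYPES.items() if aus <= coded]

print(emotion_candidates([6, 12, 25]))   # -> ['happiness']
print(emotion_candidates([1, 2]))        # -> [] (only part of the surprise prototype)
```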

291 citations


Journal ArticleDOI
15 Dec 2007-Pain
TL;DR: The preserved pain typicalness of facial responses to noxious stimulation suggests that pain is reflected as validly in the facial responses of demented patients as it is in healthy individuals.
Abstract: The facial expression of pain has emerged as an important pain indicator in demented patients, who have difficulties in providing self-report ratings. In a few clinical studies an increase of facial responses to pain was observed in demented patients compared to healthy controls. However, it remained to be shown that this increase can be verified using experimental methods, which also allow for testing whether the facial responses in demented patients are still typical for pain. We investigated facial responses in 42 demented patients and 54 age-matched healthy controls to mechanically induced pain of various intensities. The face of the subject was videotaped during pressure stimulation and was later analysed using the Facial Action Coding System. Besides facial responses we also assessed self-report ratings. Comparable to previous findings, we found that facial responses to noxious stimulation were significantly increased in demented patients compared to healthy controls. This increase was mainly due to an increase of pain-indicative facial responses in demented patients. Moreover, facial responses were closely related to the intensity of stimulation, especially in demented patients. Regarding self-report ratings, we found no significant group differences; however, the capacity to provide these self-report ratings was diminished in demented patients. The preserved pain typicalness of facial responses to noxious stimulation suggests that pain is reflected as validly in the facial responses of demented patients as it is in healthy individuals. Therefore, the facial expression of pain has the potential to serve as an alternative pain assessment tool in demented patients, even in patients who are verbally compromised.

285 citations


Book ChapterDOI
01 Jul 2007
TL;DR: The human face is a multi-signal input-output communicative system capable of tremendous flexibility and specificity and is the authors' preeminent means of communicating and understanding somebody’s affective state and intentions on the basis of the shown facial expression.
Abstract: 1. Human Face and Its Expression The human face is the site for major sensory inputs and major communicative outputs. It houses the majority of our sensory apparatus as well as our speech production apparatus. It is used to identify other members of our species, to gather information about age, gender, attractiveness, and personality, and to regulate conversation by gazing or nodding. Moreover, the human face is our preeminent means of communicating and understanding somebody’s affective state and intentions on the basis of the shown facial expression (Keltner & Ekman, 2000). Thus, the human face is a multi-signal input-output communicative system capable of tremendous flexibility and specificity (Ekman & Friesen, 1975). In general, the human face conveys information via four kinds of signals. (a) Static facial signals represent relatively permanent features of the face, such as the bony structure, the soft tissue, and the overall proportions of the face. These signals contribute to an individual’s appearance and are usually exploited for person identification.

262 citations


Book ChapterDOI
20 Oct 2007
TL;DR: The system was able to predict sleep and crash episodes during a driving computer game with 96% accuracy within subjects and above 90% accuracy across subjects, which is the highest prediction rate reported to date for detecting real drowsiness.
Abstract: The advance of computing technology has provided the means for building intelligent vehicle systems. A drowsy driver detection system is one of the potential applications of intelligent vehicle systems. Previous approaches to drowsiness detection primarily make pre-assumptions about the relevant behavior, focusing on blink rate, eye closure, and yawning. Here we employ machine learning to data-mine actual human behavior during drowsiness episodes. Automatic classifiers for 30 facial actions from the Facial Action Coding System were developed using machine learning on a separate database of spontaneous expressions. These facial actions include blinking and yawn motions, as well as a number of other facial movements. In addition, head motion was collected through automatic eye tracking and an accelerometer. These measures were passed to learning-based classifiers such as Adaboost and multinomial ridge regression. The system was able to predict sleep and crash episodes during a driving computer game with 96% accuracy within subjects and above 90% accuracy across subjects. This is the highest prediction rate reported to date for detecting real drowsiness. Moreover, the analysis revealed new information about human behavior during drowsy driving.
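As a hedged sketch of the general pipeline (summaries of per-episode AU detector channels and head motion fed to a learning-based classifier), with synthetic stand-in features rather than the paper's CERT-style AU outputs and driving-simulator labels:

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Stand-in features: per-episode summaries of 30 AU detector channels plus
# head-motion statistics (the real system uses automatically detected AU outputs).
n_episodes, n_features = 200, 32
X = rng.normal(size=(n_episodes, n_features))
y = rng.integers(0, 2, size=n_episodes)   # 1 = drowsy / crash episode, 0 = alert

# A boosted classifier over the AU / head-motion summaries, one plausible
# stand-in for the Adaboost and multinomial ridge regression named in the paper.
clf = AdaBoostClassifier(n_estimators=100, random_state=0)
print(cross_val_score(clf, X, y, cv=5).mean())   # chance-level on this random data
```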

232 citations


Journal ArticleDOI
TL;DR: The results indicate that dynamic facial expressions elicit spontaneous and rapid facial mimicry, which functions both as a form of intra-individual processing and as inter-individual communication.

206 citations


Book ChapterDOI
01 Jul 2007
TL;DR: This work investigates the employment of the Active Appearance Model (AAM) framework in order to derive effective representations for facial action recognition, and investigates how these same representations affect spontaneous facial action unit recognition.
Abstract: The Facial Action Coding System (FACS) [Ekman et al., 2002] is the leading method for measuring facial movement in behavioral science. FACS has been successfully applied to, among other tasks, identifying the differences between simulated and genuine pain, differences between when people are telling the truth versus lying, and differences between suicidal and non-suicidal patients [Ekman and Rosenberg, 2005]. Successfully recognizing facial actions is recognized as one of the "major" hurdles to overcome for successful automated expression recognition. How one should represent the face for effective action unit recognition is the main topic of interest in this chapter. This interest is motivated by the plethora of work in other areas of face analysis, such as face recognition [Zhao et al., 2003], that demonstrates the benefit of representation when performing recognition tasks. It is well understood in the field of statistical pattern recognition [Duda et al., 2001] that, given a fixed classifier and training set, how one represents a pattern can greatly affect recognition performance. The face can be represented in a myriad of ways. Much work in facial action recognition has centered solely on the appearance (i.e., pixel values) of the face given quite a basic alignment (e.g., eyes and nose). In our work we investigate the employment of the Active Appearance Model (AAM) framework [Cootes et al., 2001, Matthews and Baker, 2004] in order to derive effective representations for facial action recognition. Some of the representations we will be employing can be seen in Figure 1. Experiments in this chapter are run across two action unit databases. The Cohn-Kanade FACS-Coded Facial Expression Database [Kanade et al., 2000] is employed to investigate the effect of face representation on posed facial action unit recognition. Posed facial actions are those that have been elicited by asking subjects to deliberately make specific facial actions or expressions. Facial actions are typically recorded under controlled circumstances that include full-face frontal view, good lighting, constrained head movement, and selectivity in terms of the type and magnitude of facial actions. Almost all work in automatic facial expression analysis has used posed image data, and the Cohn-Kanade database may be the database most widely used [Tian et al., 2005]. The RU-FACS Spontaneous Expression Database is employed to investigate how these same representations affect spontaneous facial action unit recognition. Spontaneous facial actions are representative of "real-world" facial
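One of the AAM-derived representations discussed in the chapter is a similarity-normalized shape; the sketch below illustrates the general idea of removing rigid translation, scale, and rotation from tracked landmarks with a Procrustes alignment. It is a minimal stand-in, not the AAM fitting of Cootes et al. or Matthews and Baker, and the landmark data are synthetic.

```python
import numpy as np
from scipy.spatial import procrustes

# Toy landmark sets: a reference face shape and a tracked frame that is the
# same shape rotated, scaled, and translated (plus noise). Real AAM fitting
# would supply these landmarks; here they are synthetic stand-ins.
rng = np.random.default_rng(1)
reference = rng.normal(size=(68, 2))                 # 68 (x, y) landmarks

angle = 0.3
rot = np.array([[np.cos(angle), -np.sin(angle)],
                [np.sin(angle),  np.cos(angle)]])
frame = 1.4 * reference @ rot.T + np.array([5.0, -2.0]) + 0.01 * rng.normal(size=(68, 2))

# Procrustes alignment removes translation, scale, and rotation, leaving a
# similarity-normalized shape that could feed an action-unit classifier.
ref_norm, frame_norm, disparity = procrustes(reference, frame)
shape_feature = (frame_norm - ref_norm).ravel()      # residual shape deformation
print(disparity, shape_feature.shape)
```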

183 citations


Journal ArticleDOI
01 Nov 2007-Emotion
TL;DR: Together, these studies indicate that pride can be reliably assessed from nonverbal behaviors, and suggest that for the most part, authentic and hubristic pride share the same signal.
Abstract: This research provides a systematic analysis of the nonverbal expression of pride. Study 1 manipulated behavioral movements relevant to pride (e.g., expanded posture and head tilt) to identify the most prototypical pride expression and determine the specific components that are necessary and sufficient for reliable recognition. Studies 2 and 3 tested whether the 2 conceptually and empirically distinct facets of pride ("authentic" and "hubristic"; J. L. Tracy & R. W. Robins, 2007a) are associated with distinct nonverbal expressions. Results showed that neither the prototypical pride expression nor several recognizable variants were differentially associated with either facet, suggesting that for the most part, authentic and hubristic pride share the same signal. Together these studies indicate that pride can be reliably assessed from nonverbal behaviors. In the Appendix, the authors provide guidelines for a pride behavioral coding scheme, akin to the Emotion Facial Action Coding System (EMFACS; P. Ekman & E. Rosenberg, 1997) for assessing "basic" emotions from observable nonverbal behaviors.

176 citations


Journal ArticleDOI
TL;DR: The ChimpFACS is described and used to compare the repertoire of facial movement in chimpanzees and humans and demonstrates that FACS can be applied to other species, but it is highlighted that any modifications must be based on both underlying anatomy and detailed observational analysis of movements.
Abstract: A comparative perspective has remained central to the study of human facial expressions since Darwin’s [(1872/1998). The expression of the emotions in man and animals (3rd ed.). New York: Oxford University Press] insightful observations on the presence and significance of cross-species continuities and species-unique phenomena. However, cross-species comparisons are often difficult to draw due to methodological limitations. We report the application of a common methodology, the Facial Action Coding System (FACS) to examine facial movement across two species of hominoids, namely humans and chimpanzees. FACS [Ekman & Friesen (1978). Facial action coding system. CA: Consulting Psychology Press] has been employed to identify the repertoire of human facial movements. We demonstrate that FACS can be applied to other species, but highlight that any modifications must be based on both underlying anatomy and detailed observational analysis of movements. Here we describe the ChimpFACS and use it to compare the repertoire of facial movement in chimpanzees and humans. While the underlying mimetic musculature shows minimal differences, important differences in facial morphology impact upon the identification and detection of related surface appearance changes across these two species.

174 citations


Book ChapterDOI
22 Aug 2007
TL;DR: Facial expressions such as anger, sadness, surprise, joy, disgust, fear and neutral are successfully recognized with an average recognition rate of 91.3%, and the highest recognition rate reaches 98.3% in the recognition of surprise.
Abstract: In this paper, we propose a novel approach for facial expression analysis and recognition. The proposed approach relies on the distance vectors retrieved from the 3D distribution of facial feature points to classify universal facial expressions. A neural network architecture is employed as a classifier to recognize the facial expressions from a distance vector obtained from 3D facial feature locations. Facial expressions such as anger, sadness, surprise, joy, disgust, fear and neutral are successfully recognized with an average recognition rate of 91.3%. The highest recognition rate reaches 98.3% in the recognition of surprise.
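A minimal sketch of the described pipeline, pairwise distances between 3D facial feature points fed to a neural-network classifier, is shown below. The landmark data, feature-point count, and network size are invented; the authors' actual feature set and architecture differ.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from scipy.spatial.distance import pdist

rng = np.random.default_rng(2)
EXPRESSIONS = ["anger", "sadness", "surprise", "joy", "disgust", "fear", "neutral"]

def distance_vector(points_3d):
    """Flatten a set of 3D facial feature points into all pairwise distances."""
    return pdist(points_3d)          # length n*(n-1)/2 for n points

# Synthetic stand-in data: 15 labelled 3D landmark sets per expression.
X, y = [], []
for label in EXPRESSIONS:
    base = rng.normal(size=(20, 3))  # 20 feature points per face
    for _ in range(15):
        X.append(distance_vector(base + 0.05 * rng.normal(size=(20, 3))))
        y.append(label)

clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=2000, random_state=0)
clf.fit(np.array(X), y)
print(clf.score(np.array(X), y))     # training accuracy on the toy data
```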

173 citations


Proceedings ArticleDOI
12 Nov 2007
TL;DR: The automated system was successfully able to differentiate faked from real pain, and the most discriminative facial action in the automated system output was AU 4 (brow lower), which was consistent with findings using human expert FACS codes.
Abstract: We present initial results from the application of an automated facial expression recognition system to spontaneous facial expressions of pain. In this study, 26 participants were videotaped under three experimental conditions: baseline, posed pain, and real pain. In the real pain condition, subjects experienced cold pressor pain by submerging their arm in ice water. Our goal was to automatically determine which experimental condition was shown in a 60 second clip from a previously unseen subject. We chose a machine learning approach, previously used successfully to categorize basic emotional facial expressions in posed datasets as well as to detect individual facial actions of the Facial Action Coding System (FACS) (Littlewort et al., 2006; Bartlett et al., 2006). For this study, we trained 20 Action Unit (AU) classifiers on over 5000 images selected from a combination of posed and spontaneous facial expressions. The output of the system was a real-valued number indicating the distance to the separating hyperplane for each classifier. Applying this system to the pain video data produced a 20-channel output stream, consisting of one real value for each learned AU, for each frame of the video. This data was passed to a second layer of classifiers to predict the difference between baseline and pained faces, and the difference between expressions of real pain and fake pain. Naive human subjects tested on the same videos were at chance for differentiating faked from real pain, obtaining only 52% accuracy. The automated system was successfully able to differentiate faked from real pain. In an analysis of 26 subjects, the system obtained 72% correct for subject-independent discrimination of real versus fake pain on a 2-alternative forced choice. Moreover, the most discriminative facial action in the automated system output was AU 4 (brow lower), which was consistent with findings using human expert FACS codes.
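A much-simplified sketch of the two-layer architecture described above: a bank of per-AU SVMs whose real-valued distances to the separating hyperplane form a 20-channel stream, summarized per clip and passed to a second-layer classifier for real-versus-faked pain. The pixel features, labels, and per-clip summary here are synthetic stand-ins, not the authors' trained system.

```python
import numpy as np
from sklearn.svm import SVC, LinearSVC

rng = np.random.default_rng(3)

# ---- Layer 1: per-AU detectors trained on labelled face images (stand-in data).
n_images, n_pixels, n_aus = 500, 100, 20
images = rng.normal(size=(n_images, n_pixels))
au_labels = rng.integers(0, 2, size=(n_images, n_aus))

au_detectors = [SVC(kernel="linear").fit(images, au_labels[:, k]) for k in range(n_aus)]

def au_channel_outputs(frames):
    """Real-valued distance to the separating hyperplane for each AU detector, per frame."""
    return np.column_stack([d.decision_function(frames) for d in au_detectors])

# ---- Layer 2: classify a clip as real vs. faked pain from per-clip AU statistics.
n_clips, frames_per_clip = 60, 30
clip_features, clip_labels = [], []
for i in range(n_clips):
    frames = rng.normal(size=(frames_per_clip, n_pixels))
    channels = au_channel_outputs(frames)                 # shape (frames, 20)
    clip_features.append(channels.mean(axis=0))           # summarise each AU channel
    clip_labels.append(i % 2)                             # 1 = real pain, 0 = faked

pain_clf = LinearSVC().fit(np.array(clip_features), clip_labels)
print(pain_clf.score(np.array(clip_features), clip_labels))
```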

Journal ArticleDOI
01 Feb 2007-Emotion
TL;DR: The authors provide data on the first application of the ChimpFACS to validate existing categories of chimpanzee facial expressions using discriminant functions analyses and suggest a potential homology between these prototypical chimpanzee expressions and human expressions based on structural similarities.
Abstract: The Chimpanzee Facial Action Coding System (ChimpFACS) is an objective, standardized observational tool for measuring facial movement in chimpanzees based on the well-known human Facial Action Coding System (FACS; P. Ekman & W. V. Friesen, 1978). This tool enables direct structural comparisons of facial expressions between humans and chimpanzees in terms of their common underlying musculature. Here the authors provide data on the first application of the ChimpFACS to validate existing categories of chimpanzee facial expressions using discriminant functions analyses. The ChimpFACS validated most existing expression categories (6 of 9) and, where the predicted group memberships were poor, the authors discuss potential problems with ChimpFACS and/or existing categorizations. The authors also report the prototypical movement configurations associated with these 6 expression categories. For all expressions, unique combinations of muscle movements were identified, and these are illustrated as peak intensity prototypical expression configurations. Finally, the authors suggest a potential homology between these prototypical chimpanzee expressions and human expressions based on structural similarities. These results contribute to our understanding of the evolution of emotional communication by suggesting several structural homologies between the facial expressions of chimpanzees and humans and facilitating future research.
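The validation analysis can be pictured, in much-simplified form, as discriminant functions predicting expression category from AU composition. The sketch below uses scikit-learn's linear discriminant analysis on invented AU occurrence vectors and placeholder category names; it is not the authors' ChimpFACS data or exact statistical procedure.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
CATEGORIES = ["bared-teeth", "play face", "pant-hoot", "pout", "scream", "whimper"]

# Stand-in data: binary AU occurrence vectors (20 AUs) for coded expression clips.
X = rng.integers(0, 2, size=(120, 20)).astype(float)
y = rng.choice(CATEGORIES, size=120)

# Discriminant functions predicting expression category from AU composition;
# cross-validated accuracy plays the role of "predicted group membership".
lda = LinearDiscriminantAnalysis()
print(cross_val_score(lda, X, y, cv=5).mean())
```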

01 Jan 2007
TL;DR: McDaniel et al. as mentioned in this paper used facial features to detect the affective states (or emotions) that accompany deep-level learning of conceptual material, including boredom, confusion, delight, flow, frustration, and surprise.
Abstract: Facial Features for Affective State Detection in Learning Environments. Bethany McDaniel, Sidney D'Mello, Brandon King, Patrick Chipman, Kristy Tapp, and Art Graesser (Departments of Psychology and Computer Science, University of Memphis). This study investigated facial features to detect the affective states (or emotions) that accompany deep-level learning of conceptual material. Videos of the participants' faces were captured while they interacted with AutoTutor on computer literacy topics. After the tutoring session, the affective states (boredom, confusion, delight, flow, frustration, and surprise) of the student were identified by the learner, a peer, and two trained judges. Participants' facial expressions were coded by two independent judges using Ekman's Facial Action Coding System. Correlational analyses indicated that specific facial features could segregate confusion, delight, and frustration from the baseline state of neutral, but boredom was indistinguishable from neutral. We discuss the prospects of automatically detecting these emotions on the basis of facial features that are highly diagnostic.
Keywords: facial features; action units; affective states; emotions; learning; AutoTutor; classifying affect. Introduction (excerpt): It is widely acknowledged that cognition, motivation, and emotions are three fundamental components of learning (Snow, Corno, & Jackson, 1996). Emotion has been viewed as a source of motivational energy (Harter, 1981; Miserandino, 1996; Stipek, 1998), but it can also be viewed as a more complex independent factor that plays an explanatory role in both learning and motivation (Ford, 1992; Meyer & Turner, 2002). The link between emotions and learning has received more attention during the last decade in the fields of psychology, education, and computer science (Craig, Graesser, Sullins, & Gholson, 2004; Graesser, Jackson, & McDaniel, 2007; Kort, Reilly, & Picard, 2001; Picard, 1997; Meyer & Turner, 2002). Ekman and Friesen (1978) highlighted the expressive aspects of emotions with their Facial Action Coding System. This system specified how specific facial behaviors, based on the muscles that produce them, could identify "basic emotions". Each movement in the face is referred to as an action unit (AU); there are approximately 58 action units. These facial patterns were used to identify the emotions of happiness, sadness, surprise, disgust, anger, and fear (Ekman & Friesen, 1978; Elfenbein & Ambady, 2002). Doubts have been raised, however, that these six emotions are frequent and functionally significant in the learning process (D'Mello et al., 2006; Kapoor, Mota, & Picard, 2001). Some have challenged the adequacy of basing a theory of emotions on these "basic" emotions (Rozin & Cohen, 2003). Moreover, Ekman's coding system was tested primarily on static pictures rather than on changing expressions over time. There is some evidence for a different set of emotions that influence learning and cognition, specifically boredom (Csikszentmihalyi, 1990; Miserandino, 1996), confusion (Graesser & Olde, 2003; Kort, Reilly, & Picard, 2001), flow (i.e., engagement; Csikszentmihalyi, 1990), and frustration (Kort, Reilly, & Picard, 2001; Patrick et al., 1993). Curiosity and eureka (i.e., the "a-ha" experience) are also believed to accompany learning. A study was recently conducted to investigate the occurrence of these emotions, as well as Ekman's basic emotions. The study used an emote-aloud procedure (D'Mello et al., 2006), a variant of the think-aloud procedure (Ericsson & Simon, 1993), as an online measure of the learners' affective states during learning. College students were asked to express the affective states they were feeling while working on a task, in this case being tutored in computer literacy with AutoTutor. Using the emote-aloud method allowed for the on-line identification of emotions while working on the learning task. A sample of 215 emote-aloud observations were
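A minimal sketch of the kind of correlational analysis reported (association between coded AUs and each affective state versus neutral), run on invented coding data; the AU count, state labels, and correlation choice (point-biserial) are illustrative assumptions, not the authors' exact analysis.

```python
import numpy as np
from scipy.stats import pointbiserialr

rng = np.random.default_rng(5)

# Stand-in coding: for each observation, which affective state the learner was
# in and which AUs the FACS coders marked as present (binary indicators).
n_obs, n_aus = 300, 12
states = rng.choice(["neutral", "confusion", "delight", "frustration", "boredom"], size=n_obs)
au_present = rng.integers(0, 2, size=(n_obs, n_aus))

def au_correlations(target_state):
    """Correlate each AU with 'target state vs. neutral', mirroring the reported analysis."""
    mask = np.isin(states, [target_state, "neutral"])
    indicator = (states[mask] == target_state).astype(float)
    return [pointbiserialr(au_present[mask, k], indicator).correlation for k in range(n_aus)]

print(np.round(au_correlations("confusion"), 2))
```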

Journal ArticleDOI
01 Aug 2007-Emotion
TL;DR: The authors found that facial identity was better recognized when the faces were initially encountered with a happy rather than an angry expression, even when attention was oriented toward facial features other than expression.
Abstract: Previous studies indicate that the encoding of new facial identities in memory is influenced by the type of expression displayed by the faces. In the current study, the authors investigated whether or not this influence requires attention to be explicitly directed toward the affective meaning of facial expressions. In a first experiment, the authors found that facial identity was better recognized when the faces were initially encountered with a happy rather than an angry expression, even when attention was oriented toward facial features other than expression. Using the Remember/Know/Guess paradigm in a second experiment, the authors found that the influence of facial expressions on the conscious recollection of facial identity was even more pronounced when participants' attention was not directed toward expressions. It is suggested that the affective meaning of facial expressions automatically modulates the encoding of facial identity in memory.

Journal ArticleDOI
TL;DR: This is the first study to discriminate among FACS measures collected during innocuous and graded levels of precisely measured painful stimuli in seniors with (mild) dementia and in healthy control group participants.
Abstract: Objective. Reflexive responses to pain such as facial reactions become increasingly important for pain assessment among patients with Alzheimer's disease (AD) because self-report capabilities diminish as cognitive abilities decline. Our goal was to study facial expressions of pain in patients with and without AD. Design. We employed a quasi-experimental design and used the Facial Action Coding System (FACS) to assess reflexive facial responses to noxious stimuli of varied intensity. Two different modalities of stimulation (mechanical and electrical) were employed. Results. The FACS identified differences in facial expression as a function of level of discomforting stimulation. As expected, there were no significant differences based on disease status (AD vs control group). Conclusions. This is the first study to discriminate among FACS measures collected during innocuous and graded levels of precisely measured painful stimuli in seniors with (mild) dementia and in healthy control group participants. We conclude that, as hypothesized, FACS can be used for the assessment of evoked pain, regardless of the presence of AD.

Journal ArticleDOI
TL;DR: For instance, this paper found that individuals with a history of major depressive disorder (MDD) were more likely than those without current depressive symptomatology to control their initial smiles with negative affect-related expressions.
Abstract: Individuals suffering from depression show diminished facial responses to positive stimuli. Recent cognitive research suggests that depressed individuals may appraise emotional stimuli differently than do nondepressed persons. Prior studies do not indicate whether depressed individuals respond differently when they encounter positive stimuli that are difficult to avoid. The authors investigated dynamic responses of individuals varying in both history of major depressive disorder (MDD) and current depressive symptomatology (N = 116) to robust positive stimuli. The Facial Action Coding System (Ekman & Friesen, 1978) was used to measure affect-related responses to a comedy clip. Participants reporting current depressive symptomatology were more likely to evince affect-related shifts in expression following the clip than were those without current symptomatology. This effect of current symptomatology emerged even when the contrast focused only on individuals with a history of MDD. Specifically, persons with current depressive symptomatology were more likely than those without current symptomatology to control their initial smiles with negative affect-related expressions. These findings suggest that integration of emotion science and social cognition may yield important advances for understanding depression.

Proceedings ArticleDOI
26 Dec 2007
TL;DR: A two-step approach to temporally segment facial behavior using spectral graph techniques to cluster shape and appearance features invariant to some geometric transformations that significantly improves productivity and addresses the need for ground-truth data for facial image analysis.
Abstract: Temporal segmentation of facial gestures in spontaneous facial behavior recorded in real-world settings is an important, unsolved, and relatively unexplored problem in facial image analysis. Several issues contribute to the challenge of this task. These include non-frontal pose, moderate to large out-of-plane head motion, large variability in the temporal scale of facial gestures, and the exponential nature of possible facial action combinations. To address these challenges, we propose a two-step approach to temporally segment facial behavior. The first step uses spectral graph techniques to cluster shape and appearance features invariant to some geometric transformations. The second step groups the clusters into temporally coherent facial gestures. We evaluated this method in facial behavior recorded during face-to-face interactions. The video data were originally collected to answer substantive questions in psychology without concern for algorithm development. The method achieved moderate convergent validity with manual FACS (Facial Action Coding System) annotation. Further, when used to preprocess video for manual FACS annotation, the method significantly improves productivity, thus addressing the need for ground-truth data for facial image analysis. Moreover, we were also able to detect unusual facial behavior.
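A hedged, much-simplified sketch of the two-step idea: spectrally cluster per-frame features, then merge temporally contiguous frames that share a cluster label into segments. The features here are synthetic, and the clustering is plain scikit-learn spectral clustering rather than the authors' geometric-transformation-invariant shape and appearance features.

```python
import numpy as np
from sklearn.cluster import SpectralClustering
from itertools import groupby

rng = np.random.default_rng(6)

# Stand-in per-frame shape/appearance features for a short video.
frames = np.vstack([
    rng.normal(loc=0.0, scale=0.1, size=(40, 6)),   # neutral stretch
    rng.normal(loc=1.0, scale=0.1, size=(30, 6)),   # a facial gesture
    rng.normal(loc=0.0, scale=0.1, size=(25, 6)),   # back to neutral
])

# Step 1: spectral clustering of frames into a small number of facial configurations.
labels = SpectralClustering(n_clusters=2, random_state=0,
                            affinity="nearest_neighbors").fit_predict(frames)

# Step 2: group temporally contiguous frames with the same cluster label into segments.
segments, start = [], 0
for label, run in groupby(labels):
    length = len(list(run))
    segments.append((start, start + length - 1, int(label)))
    start += length
print(segments)   # list of (first_frame, last_frame, cluster) tuples
```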

Journal ArticleDOI
TL;DR: It is demonstrated that facial blends of emotion are more easily and accurately posed on the upper-lower than right-left hemiface, and that upper facial emotions are processed preferentially by the right hemisphere whereas lower facial emotions have to be processed preferentially by the left hemisphere.
Abstract: Most clinical research has focused on intensity differences of facial expressions between the right and left hemiface to explore lateralization of emotions in the brain. Observations by social psychologists, however, suggest that control of facial expression is organized predominantly across the upper-lower facial axis because of the phenomenon of facial blends: simultaneous display of different emotions on the upper and lower face. Facial blends are related to social emotions and the development of display rules that allow individuals to sculpt facial expressions for social and manipulative purposes. We have demonstrated that facial blends of emotion are more easily and accurately posed on the upper-lower than the right-left hemiface, and that upper facial emotions are processed preferentially by the right hemisphere whereas lower facial emotions are processed preferentially by the left hemisphere. Based on these results, recent anatomical studies showing separate cortical areas for motor control of the upper and lower face, and the neurology of posed and spontaneous expressions of emotion, a functional-anatomic model of how the forebrain modulates facial expressions is presented. The unique human ability to produce facial blends of emotion is, most likely, an adaptive modification linked to the evolution of speech and language.

01 Jan 2007
TL;DR: It was shown that children base their judgment on AU intensity of both mouth and eyes, with relatively little distinction between the Duchenne marker (AU6 or "cheek raiser") and a different voluntary muscle that has a similar effect on eye aperture (AU7 or "lid tightener").
Abstract: The authors investigated the differences between 8-year-olds (n = 80) and adults (n = 80) in recognition of felt versus faked enjoyment smiles by using a newly developed picture set that is based on the Facial Action Coding System. The authors tested the effect of different facial action units (AUs) on judgments of smile authenticity. Multiple regression showed that children base their judgment on AU intensity of both mouth and eyes, with relatively little distinction between the Duchenne marker (AU6 or "cheek raiser") and a different voluntary muscle that has a similar effect on eye aperture (AU7 or "lid tightener"). Adults discriminate well between AU6 and AU7 and seem to use eye-mouth discrepancy as a major cue of authenticity. Bared-teeth smiles (involving AU25) are particularly salient to both groups. The authors propose and discuss an initial developmental model of the smile recognition process.

Journal ArticleDOI
TL;DR: This paper investigated the differences between 8-year-olds and adults in recognition of felt versus faked enjoyment smiles by using a newly developed picture set that is based on the Facial Action Coding System.
Abstract: The authors investigated the differences between 8-year-olds (n=80) and adults (n=80) in recognition of felt versus faked enjoyment smiles by using a newly developed picture set that is based on the Facial Action Coding System. The authors tested the effect of different facial action units (AUs) on judgments of smile authenticity. Multiple regression showed that children base their judgment on AU intensity of both mouth and eyes, with relatively little distinction between the Duchenne marker (AU6 or "cheek raiser") and a different voluntary muscle that has a similar effect on eye aperture (AU7 or "lid tightener"). Adults discriminate well between AU6 and AU7 and seem to use eye-mouth discrepancy as a major cue of authenticity. Bared-teeth smiles (involving AU25) are particularly salient to both groups. The authors propose and discuss an initial developmental model of the smile recognition process.
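The reported multiple-regression analysis can be sketched as regressing authenticity judgments on AU intensities (e.g., AU6, AU7, and the smile itself). The numbers below are fabricated stand-ins for the picture set and rating data; only the shape of the analysis is illustrated.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(7)

# Stand-in stimulus set: intensities (0-5) of the Duchenne marker (AU6), the
# lid tightener (AU7), and the smile (AU12) for each picture, plus the mean
# "felt vs. faked" judgment it received. Numbers are fabricated for illustration.
n_pictures = 60
au6 = rng.integers(0, 6, size=n_pictures)
au7 = rng.integers(0, 6, size=n_pictures)
au12 = rng.integers(1, 6, size=n_pictures)
judged_felt = 0.5 * au6 + 0.1 * au7 + 0.3 * au12 + rng.normal(scale=0.5, size=n_pictures)

X = np.column_stack([au6, au7, au12])
model = LinearRegression().fit(X, judged_felt)
# Per the paper, adults weight AU6 over AU7, while children weight eyes and mouth
# with less distinction between the two eye-region AUs.
print(dict(zip(["AU6", "AU7", "AU12"], np.round(model.coef_, 2))))
```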

Book ChapterDOI
06 Jan 2007
TL;DR: This paper explores audio-visual emotion recognition in a realistic human conversation setting, the Adult Attachment Interview (AAI); based on the assumption that facial expression and vocal expression reflect the same coarse affective states, positive and negative emotion sequences are labeled according to the Facial Action Coding System.
Abstract: Automatic multimodal recognition of spontaneous emotional expressions is a largely unexplored and challenging problem. In this paper, we explore audio-visual emotion recognition in a realistic human conversation setting, the Adult Attachment Interview (AAI). Based on the assumption that facial expression and vocal expression reflect the same coarse affective states, positive and negative emotion sequences are labeled according to the Facial Action Coding System. Facial texture in the visual channel and prosody in the audio channel are integrated in the framework of an Adaboost multi-stream hidden Markov model (AdaMHMM), in which the Adaboost learning scheme is used to build the component HMM fusion. Our approach is evaluated in AAI spontaneous emotion recognition experiments.
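A strongly simplified sketch of multi-stream HMM classification: one Gaussian HMM per class and per modality, with the two modality log-likelihoods fused by fixed weights. This replaces the paper's AdaBoost-learned component-HMM fusion (AdaMHMM) with a plain weighted sum, uses the hmmlearn library, and runs on random stand-in feature sequences.

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM

rng = np.random.default_rng(8)

def train_hmm(sequences, n_states=3):
    """Fit one Gaussian HMM on a list of (frames, features) observation sequences."""
    X = np.vstack(sequences)
    lengths = [len(s) for s in sequences]
    return GaussianHMM(n_components=n_states, covariance_type="diag",
                       n_iter=50, random_state=0).fit(X, lengths)

# Stand-in training data: facial-texture and prosody feature sequences for the
# positive and negative classes (random here; real data would come from AAI clips).
def fake_sequences(offset, dim, n=8):
    return [rng.normal(loc=offset, size=(rng.integers(20, 40), dim)) for _ in range(n)]

models = {
    ("positive", "face"):  train_hmm(fake_sequences(0.5, 6)),
    ("negative", "face"):  train_hmm(fake_sequences(-0.5, 6)),
    ("positive", "audio"): train_hmm(fake_sequences(0.5, 4)),
    ("negative", "audio"): train_hmm(fake_sequences(-0.5, 4)),
}

def classify(face_seq, audio_seq, w_face=0.6, w_audio=0.4):
    """Fuse per-modality HMM log-likelihoods with fixed weights (a simplification
    of the paper's AdaBoost-learned component-HMM fusion)."""
    scores = {}
    for label in ("positive", "negative"):
        scores[label] = (w_face * models[(label, "face")].score(face_seq) +
                         w_audio * models[(label, "audio")].score(audio_seq))
    return max(scores, key=scores.get)

print(classify(rng.normal(0.5, size=(25, 6)), rng.normal(0.5, size=(25, 4))))
```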

Journal ArticleDOI
TL;DR: ChimpFACS has been developed, yielding new discoveries regarding chimpanzees' perception and categorization of emotional facial expressions, similarities in the facial anatomy of chimpanzees and humans, and homologous facial movements in the two species.
Abstract: There has been little research over the past few decades focusing on similarities and differences in the form and function of emotional signals in nonhuman primates, or whether these communication systems are homologous with those of humans. This is, in part, due to the fact that detailed and objective measurement tools to answer such questions have not been systematically developed for nonhuman primate research. Despite this, emotion research in humans has benefited for over 30 years from an objective, anatomically based facial-measurement tool: the Facial Action Coding System. In collaboration with other researchers, we have now developed a similar system for chimpanzees (ChimpFACS) and, in the process, have made exciting new discoveries regarding chimpanzees' perception and categorization of emotional facial expressions and similarities in the facial anatomy of chimpanzees and humans, and we have identified homologous facial movements in the two species. Investigating similarities and differences in primate emotional communication systems is essential if we are to understand unique evolutionary specializations among different species.

Journal ArticleDOI
TL;DR: These data provide some of the first known evidence linking specific measures of infant crying with an independent, validated measure of pain.
Abstract: Objective. To determine the relations between Neonatal Facial Coding System (NFCS) scores and measures of infant crying during newborn circumcision. Methods. Video and audio recordings were made of infant facial activity and cry sounds, respectively, during the lysis phase of circumcisions of 44 healthy

01 Jan 2007
TL;DR: This paper describes two experiments, which are the first applications of a system based on machine learning for fully automated detection of 30 actions from the Facial Action Coding System (FACS), and reveals information about facial behavior during these conditions that was previously unknown, including the coupling of movements.
Abstract: The computer vision field has advanced to the point that we are now able to begin to apply automatic facial expression recognition systems to important research questions in behavioral science. The Machine Perception Lab at UC San Diego has developed a system based on machine learning for fully automated detection of 30 actions from the Facial Action Coding System (FACS). The system, called the Computer Expression Recognition Toolbox (CERT), operates in real time and is robust to the video conditions in real applications. This paper describes two experiments which are the first applications of this system to analyzing spontaneous human behavior: automated discrimination of posed from genuine expressions of pain, and automated detection of driver drowsiness. The analysis revealed information about facial behavior during these conditions that was previously unknown, including the coupling of movements. Automated classifiers were able to differentiate real from fake pain significantly better than naive human subjects, and to detect critical drowsiness above 98% accuracy. Issues for application of machine learning systems to facial expression analysis are discussed.

Book ChapterDOI
31 Oct 2007
TL;DR: Facial gestures include various nods and head movements, blinks, eyebrow gestures and gaze as mentioned in this paper, which are all facial displays except explicit verbal and emotional displays (visemes or expressions such as smile).
Abstract: Facial displays are an extremely important communication channel fulfilling a wide variety of functions in discourse and conversation. Humans use them naturally, often subconsciously, and are therefore very sensitive to the application of such displays in their computer-generated correspondents, Embodied Conversational Agents (ECA). In this chapter, we aim to provide an extensive survey of one class of facial displays, the facial gestures. Facial gestures include various nods and head movements, blinks, eyebrow gestures and gaze, i.e., all facial displays except explicit verbal and emotional displays (visemes or expressions such as smile). Consciously or subconsciously, facial gestures play an important role both in discourse and in conversation. They are instrumental in turn taking, emphasizing, providing rhythm, and can be connected to physiological functions. While verbal and emotional displays may be regarded as the explicit, perhaps even obvious, the facial gestures are less tangible - yet they are largely responsible for what we intuitively call natural behavior of the face. In other words, an ECA pronouncing a sentence using perfect coarticulation mechanism for the lips and displaying a carefully modeled expression of surprise will still look unnatural if the facial gestures are not right as well. It is therefore extremely important for an ECA to implement facial gestures well. While there is a large body of knowledge on this topic both from psychology and ECA literature, it is quite scattered. Existing ECA implementations typically concentrate on some aspects of facial gestures but do not cover the complete set. We attempt to provide a complete survey of facial gestures that can be useful as a guideline for their implementation in an ECA. Specifically, we provide a systematically organized repertoire of usual facial gestures. For each gesture class, we provide the information on its typical usage in discourse and conversation, conscious or subconscious causes, any available knowledge on

Proceedings ArticleDOI
25 Jul 2007
TL;DR: The findings show that the animated avatar in general is a useful tool for the investigation of facial expressions, although improvements have to be made to reach a higher recognition accuracy of certain expressions.
Abstract: The human face is capable of producing a large variety of facial expressions that supply important information for communication. As was shown in previous studies using unmanipulated video sequences, movements of single regions like mouth, eyes, and eyebrows as well as rigid head motion play a decisive role in the recognition of conversational facial expressions. Here, flexible but at the same time realistic computer animated faces were used to investigate the spatiotemporal coaction of facial movements systematically. For three psychophysical experiments, spatiotemporal properties were manipulated in a highly controlled manner. First, single regions (mouth, eyes, and eyebrows) of a computer animated face performing seven basic facial expressions were selected. These single regions, as well as combinations of these regions, were animated for each of the seven chosen facial expressions. Participants were then asked to recognize these animated expressions in the experiments. The findings show that the animated avatar in general is a useful tool for the investigation of facial expressions, although improvements have to be made to reach a higher recognition accuracy of certain expressions. Furthermore, the results shed light on the importance and interplay of individual facial regions for recognition. With this knowledge the perceptual quality of computer animations can be improved in order to reach a higher level of realism and effectiveness.


Proceedings ArticleDOI
15 Apr 2007
TL;DR: A novel class of support vector machines (SVM) is introduced to deal with facial expression recognition, and the proposed classifier incorporates statistic information about the classes under examination into the classical SVM.
Abstract: In this paper, a novel class of support vector machines (SVM) is introduced to deal with facial expression recognition. The proposed classifier incorporates statistic information about the classes under examination into the classical SVM. The developed system performs facial expression recognition in facial videos. The grid tracking and deformation algorithm used tracks the Candide grid over time as the facial expression evolves, until the frame that corresponds to the greatest facial expression intensity. The geometrical displacement of Candide nodes is used as an input to the bank of novel SVM classifiers, that are utilized to recognize the six basic facial expressions. The experiments on the Cohn-Kanade database show a recognition accuracy of 98.2%.
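The paper's contribution is an SVM variant that folds class statistics into the optimization; as a plain-vanilla stand-in only, the sketch below classifies Candide-node displacement vectors with a standard multi-class SVM on synthetic data. The node count and feature values are illustrative, not taken from the Candide grid or the Cohn-Kanade data.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(9)
EXPRESSIONS = ["anger", "disgust", "fear", "joy", "sadness", "surprise"]

# Stand-in features: (x, y) displacement of each tracked grid node between the
# neutral first frame and the frame of greatest expression intensity.
n_nodes, n_videos = 104, 180            # node count chosen for illustration only
X = rng.normal(size=(n_videos, 2 * n_nodes))
y = rng.choice(EXPRESSIONS, size=n_videos)

# Standard multi-class SVM over the displacement vectors (the paper instead uses
# a modified SVM that incorporates class statistics into the optimization).
clf = SVC(kernel="rbf", C=10.0)
print(cross_val_score(clf, X, y, cv=5).mean())   # chance-level on this random data
```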

Book ChapterDOI
12 Sep 2007
TL;DR: This work addresses the problem of complex mental states in face-to-face interaction by building animation models for complex emotions based on video clips of professional actors displaying these emotions.
Abstract: A face is capable of producing about twenty thousand different facial expressions [2]. Many researchers on Virtual Characters have selected a limited set of emotional facial expressions and defined them as basic emotions, which are universally recognized facial expressions. These basic emotions have been well studied since 1969 and employed in many applications [3]. However, real life communication usually entails more complicated emotions. For instance, communicative emotions like "convinced", "persuaded" and "bored" are difficult to describe adequately with basic emotions. Our daily face-to-face interaction is already accompanied by more complex mental states, so an empathic animation system should support them. Compared to basic emotions, complex mental states are harder to model because they require knowledge of temporal changes in facial displays and head movements as opposed to a static snapshot of the facial expression. We address this by building animation models for complex emotions based on video clips of professional actors displaying these emotions.

Proceedings ArticleDOI
10 Apr 2007
TL;DR: The use of dimension-based tests is suggested, e.g., semantic differential approaches like the pleasure-arousal-dominance model; by deriving the test from the theory on which the design of an object is based, the validity of the test increases significantly.
Abstract: Common works on emotion expressing robots are theoretically based on a dimensional (continuous) model of emotions. Nevertheless, the performance tests used to evaluate emotion expressing robots are based on categorical (discrete) models of emotions. In this paper the use of dimension-based tests is suggested, e.g., semantic differential approaches like the pleasure-arousal-dominance model. By deriving the test from the theory on which the design of an object is based, the validity of the test increases significantly. Major benefits are explicit guidelines for design improvement and the possible integration of arbitrary actuated expressive features for which no common framework, such as the Facial Action Coding System (FACS), exists. For illustration purposes, a comparative evaluation study of the robot EDDIE is conducted: one test is based on a categorical model and one test is based on a dimensional model of emotion. A third study based on a dimensional model demonstrates the evaluation of the influence of animal-like features on the perceived emotion state.