Showing papers on "Facial expression published in 2004"


Proceedings ArticleDOI
13 Oct 2004
TL;DR: Results reveal that the system based on facial expression gave better performance than the system based on just acoustic information for the emotions considered, and that when these two modalities are fused, the performance and the robustness of the emotion recognition system improve measurably.
Abstract: The interaction between human beings and computers will be more natural if computers are able to perceive and respond to human non-verbal communication such as emotions. Although several approaches have been proposed to recognize human emotions based on facial expressions or speech, relatively limited work has been done to fuse these two, and other, modalities to improve the accuracy and robustness of the emotion recognition system. This paper analyzes the strengths and the limitations of systems based only on facial expressions or acoustic information. It also discusses two approaches used to fuse these two modalities: decision level and feature level integration. Using a database recorded from an actress, four emotions were classified: sadness, anger, happiness, and neutral state. By the use of markers on her face, detailed facial motions were captured with motion capture, in conjunction with simultaneous speech recordings. The results reveal that the system based on facial expression gave better performance than the system based on just acoustic information for the emotions considered. Results also show the complementarity of the two modalities and that when these two modalities are fused, the performance and the robustness of the emotion recognition system improve measurably.

843 citations
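
To make the two fusion strategies concrete, here is a minimal sketch (not the authors' implementation; the features, classifier choice, and toy data are hypothetical) contrasting feature-level fusion, which concatenates facial and acoustic feature vectors before a single classifier, with decision-level fusion, which trains one classifier per modality and combines their class posteriors.

```python
# Toy illustration of feature-level vs. decision-level fusion for audio-visual
# emotion recognition (hypothetical features and labels, not the paper's data).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
emotions = ["sadness", "anger", "happiness", "neutral"]
n = 400
y = rng.integers(0, len(emotions), n)            # emotion label per utterance
face = rng.normal(y[:, None], 1.0, (n, 10))      # stand-in facial-marker features
voice = rng.normal(y[:, None], 1.5, (n, 6))      # stand-in acoustic features

# Feature-level fusion: concatenate the modalities and train a single classifier.
clf_fused = LogisticRegression(max_iter=1000).fit(np.hstack([face, voice]), y)

# Decision-level fusion: train one classifier per modality, then combine posteriors.
clf_face = LogisticRegression(max_iter=1000).fit(face, y)
clf_voice = LogisticRegression(max_iter=1000).fit(voice, y)
posterior = 0.5 * clf_face.predict_proba(face) + 0.5 * clf_voice.predict_proba(voice)

print("feature-level accuracy:", clf_fused.score(np.hstack([face, voice]), y))
print("decision-level accuracy:", (posterior.argmax(axis=1) == y).mean())
```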


Journal ArticleDOI
TL;DR: Antidepressant treatment reduces left limbic, subcortical, and neocortical capacity for activation in depressed subjects and increases the dynamic range of the left prefrontal cortex.
Abstract: Background: Depression is associated with interpersonal difficulties related to abnormalities in affective facial processing. Objectives: To map brain systems activated by sad facial affect processing in patients with depression and to identify brain functional correlates of antidepressant treatment and symptomatic response. Design: Two groups underwent scanning twice using functional magnetic resonance imaging (fMRI) during an 8-week period. The event-related fMRI paradigm entailed incidental affect recognition of facial stimuli morphed to express discriminable intensities of sadness. Setting: Participants were recruited by advertisement from the local population; depressed subjects were treated as outpatients. Patients and Other Participants: We matched 19

789 citations


Journal ArticleDOI
TL;DR: It is shown how psychological theories of emotion shed light on the interaction between emotion and cognition, and thus can inform the design of human-like autonomous agents that must convey these core aspects of human behavior.

759 citations


Journal ArticleDOI
01 Jun 2004-Emotion
TL;DR: Assessing early perceptual stimulus processing, threatening faces elicited an early posterior negativity compared with nonthreatening neutral or friendly expressions, and at later stages of stimulus processing, facial threat also elicited augmented late positive potentials relative to the other facial expressions, indicating the more elaborate perceptual analysis of these stimuli.
Abstract: Threatening, friendly, and neutral faces were presented to test the hypothesis of the facilitated perceptual processing of threatening faces. Dense sensor event-related brain potentials were measured while subjects viewed facial stimuli. Subjects had no explicit task for emotional categorization of the faces. Assessing early perceptual stimulus processing, threatening faces elicited an early posterior negativity compared with nonthreatening neutral or friendly expressions. Moreover, at later stages of stimulus processing, facial threat also elicited augmented late positive potentials relative to the other facial expressions, indicating the more elaborate perceptual analysis of these stimuli. Taken together, these data demonstrate the facilitated perceptual processing of threatening faces. Results are discussed within

741 citations


Journal ArticleDOI
TL;DR: Exaggeration of body movement enhanced recognition accuracy and produced higher emotional-intensity ratings, regardless of lighting condition, for movies but to a lesser extent for stills, indicating that intensity judgments of body gestures rely more on movement (or form-from-movement) than static form information.
Abstract: Research on emotion recognition has been dominated by studies of photographs of facial expressions. A full understanding of emotion perception and its neural substrate will require investigations that employ dynamic displays and means of expression other than the face. Our aims were: (i) to develop a set of dynamic and static whole-body expressions of basic emotions for systematic investigations of clinical populations, and for use in functional-imaging studies; (ii) to assess forced-choice emotion-classification performance with these stimuli relative to the results of previous studies; and (iii) to test the hypotheses that more exaggerated whole-body movements would produce (a) more accurate emotion classification and (b) higher ratings of emotional intensity. Ten actors portrayed 5 emotions (anger, disgust, fear, happiness, and sadness) at 3 levels of exaggeration, with their faces covered. Two identical sets of 150 emotion portrayals (full-light and point-light) were created from the same digital footage, along with corresponding static images of the 'peak' of each emotion portrayal. Recognition tasks confirmed previous findings that basic emotions are readily identifiable from body movements, even when static form information is minimised by use of point-light displays, and that both full-light and point-light static images can convey identifiable emotions, though rather less efficiently than dynamic displays. Recognition success differed for individual emotions, corroborating earlier results about the importance of distinguishing differences in movement characteristics for different emotional expressions. The patterns of misclassifications were in keeping with earlier findings on emotional clustering. Exaggeration of body movement (a) enhanced recognition accuracy, especially for the dynamic point-light displays, but notably not for sadness, and (b) produced higher emotional-intensity ratings, regardless of lighting condition, for movies but to a lesser extent for stills, indicating that intensity judgments of body gestures rely more on movement (or form-from-movement) than static form information.

670 citations


Journal ArticleDOI
01 Apr 2004-Nature
TL;DR: It is shown that adaptation effects are pronounced for natural variations in faces and for natural categorical judgements about faces, suggesting that adaptation may routinely influence face perception in normal viewing and could have an important role in calibrating properties of face perception according to the subset of faces populating an individual's environment.
Abstract: Face perception is fundamentally important for judging the characteristics of individuals, such as identification of their gender, age, ethnicity or expression. We asked how the perception of these characteristics is influenced by the set of faces that observers are exposed to. Previous studies have shown that the appearance of a face can be biased strongly after viewing an altered image of the face, and have suggested that these after-effects reflect response changes in the neural mechanisms underlying object or face perception. Here we show that these adaptation effects are pronounced for natural variations in faces and for natural categorical judgements about faces. This suggests that adaptation may routinely influence face perception in normal viewing, and could have an important role in calibrating properties of face perception according to the subset of faces populating an individual's environment.

569 citations


Journal ArticleDOI
TL;DR: Evidence is found for a common cortical imitation circuit for both face and hand imitation, consisting of Broca's area, bilateral dorsal and ventral premotor areas, right superior temporal gyrus (STG), supplementary motor area, posterior temporo-occipital cortex, and cerebellar areas.

530 citations


Journal ArticleDOI
TL;DR: The authors suggest that, in depressed patients, the inability to accurately identify subtle changes in facial expression displayed by others in social situations may underlie the impaired interpersonal functioning.
Abstract: Impaired facial expression recognition has been associated with features of major depression, which could underlie some of the difficulties in social interactions in these patients. Patients with major depressive disorder and age- and gender-matched healthy volunteers judged the emotion of 100 facial stimuli displaying different intensities of sadness and happiness and neutral expressions presented for short (100 ms) and long (2,000 ms) durations. Compared with healthy volunteers, depressed patients demonstrated subtle impairments in discrimination accuracy and a predominant bias away from the identification as happy of mildly happy expressions. The authors suggest that, in depressed patients, the inability to accurately identify subtle changes in facial expression displayed by others in social situations may underlie the impaired interpersonal functioning.

502 citations


Journal ArticleDOI
TL;DR: The proposed algorithm, compared with the conventional PCA algorithm, achieves an improved recognition rate for face images with large variations in lighting direction and facial expression, and is expected to cope with these variations.

490 citations
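
For background, the conventional PCA ("eigenfaces") baseline that the paper improves upon can be sketched as follows; this is a minimal illustration on synthetic data, not the proposed algorithm.

```python
# Minimal eigenfaces-style PCA baseline (the conventional approach the paper
# improves upon), sketched on synthetic "images"; not the proposed algorithm.
import numpy as np

rng = np.random.default_rng(1)
n_train, h, w = 40, 16, 16
X = rng.normal(size=(n_train, h * w))            # flattened training face images

mean_face = X.mean(axis=0)
Xc = X - mean_face                               # center the data
# Principal components via SVD of the centered data matrix.
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 10
eigenfaces = Vt[:k]                              # top-k principal components

def project(img):
    """Project a flattened face image into the k-dimensional eigenface subspace."""
    return eigenfaces @ (img - mean_face)

gallery_codes = Xc @ eigenfaces.T                # projections of the training faces
probe = X[3] + rng.normal(scale=0.1, size=h * w) # slightly perturbed known face
code = project(probe)
nearest = np.argmin(np.linalg.norm(gallery_codes - code, axis=1))
print("nearest gallery index:", nearest)         # expected: 3
```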


Journal ArticleDOI
TL;DR: Results from behavioural and neuroimaging studies indicate continued development of emotion expression recognition and neural regions important for this process throughout childhood and adolescence, including subcortical and prefrontal cortical structures.
Abstract: Background: Intact emotion processing is critical for normal emotional development. Recent advances in neuroimaging have facilitated the examination of brain development, and have allowed for the exploration of the relationships between the development of emotion processing abilities, and that of associated neural systems. Methods: A literature review was performed of published studies examining the development of emotion expression recognition in normal children and psychiatric populations, and of the development of neural systems important for emotion processing. Results: Few studies have explored the development of emotion expression recognition throughout childhood and adolescence. Behavioural studies suggest continued development throughout childhood and adolescence (reflected by accuracy scores and speed of processing), which varies according to the category of emotion displayed. Factors such as sex, socio-economic status, and verbal ability may also affect this development. Functional neuroimaging studies in adults highlight the role of the amygdala in emotion processing. Results of the few neuroimaging studies in children have focused on the role of the amygdala in the recognition of fearful expressions. Although results are inconsistent, they provide evidence throughout childhood and adolescence for the continued development of and sex differences in amygdalar function in response to fearful expressions. Studies exploring emotion expression recognition in psychiatric populations of children and adolescents suggest deficits that are specific to the type of disorder and to the emotion displayed. Conclusions: Results from behavioural and neuroimaging studies indicate continued development of emotion expression recognition and neural regions important for this process throughout childhood and adolescence. Methodological inconsistencies and disparate findings make any conclusion difficult, however. Further studies are required examining the relationship between the development of emotion expression recognition and that of underlying neural systems, in particular subcortical and prefrontal cortical structures. These will inform understanding of the neural bases of normal and abnormal emotional development, and aid the development of earlier interventions for children and adolescents with psychiatric disorders. Keywords: Emotion expression recognition, facial affect, normal development, amygdala, neural correlates, brain development, emotional development. Emotion identification is crucial for subsequent social interaction and functioning. The ability to decode facial expressions is an important component of social interaction because of the significant role of facial information in the appropriate modification of social behaviours (Philippot & Feldman, 1990; Vicari, Snitzer Reilly, Pasqualetti, Vizzotto, &

479 citations


Journal ArticleDOI
TL;DR: Functional MRI is used to clarify how the brain recognizes happiness or fear expressed by a whole body and indicates that observing fearful body expressions produces increased activity in brain areas narrowly associated with emotional processes.
Abstract: Darwin regarded emotions as predispositions to act adaptively, thereby suggesting that characteristic body movements are associated with each emotional state. To this date, investigations of emotional cognition have predominantly concentrated on processes associated with viewing facial expressions. However, expressive body movements may be just as important for understanding the neurobiology of emotional behavior. Here, we used functional MRI to clarify how the brain recognizes happiness or fear expressed by a whole body. Our results indicate that observing fearful body expressions produces increased activity in brain areas narrowly associated with emotional processes and that this emotion-related activity occurs together with activation of areas linked with representation of action and movement. The mechanism of fear contagion hereby suggested may automatically prepare the brain for action.

Journal ArticleDOI
01 Jun 2004
TL;DR: An automated system is presented that recognizes facial gestures in static, frontal- and/or profile-view color face images using rule-based reasoning, achieving a recognition rate of 86%.
Abstract: Automatic recognition of facial gestures (i.e., facial muscle activity) is rapidly becoming an area of intense interest in the research field of machine vision. In this paper, we present an automated system that we developed to recognize facial gestures in static, frontal- and/or profile-view color face images. A multidetector approach to facial feature localization is utilized to spatially sample the profile contour and the contours of the facial components such as the eyes and the mouth. From the extracted contours of the facial features, we extract ten profile-contour fiducial points and 19 fiducial points of the contours of the facial components. Based on these, 32 individual facial muscle actions (AUs) occurring alone or in combination are recognized using rule-based reasoning. With each scored AU, the utilized algorithm associates a factor denoting the certainty with which the pertinent AU has been scored. A recognition rate of 86% is achieved.
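
As a rough illustration of rule-based AU scoring with an attached certainty factor, the toy rule below fires an AU when the distance between two fiducial points changes relative to a neutral frame; the paper's actual fiducial point set, rules, and certainty model are not reproduced here, and the threshold and certainty mapping are assumptions.

```python
# Hypothetical sketch of rule-based AU scoring from fiducial points
# (not the paper's actual rule set or certainty model).
import math

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def score_au12(neutral_pts, current_pts):
    """Toy rule for AU12 (lip-corner puller): fire if the mouth-corner distance
    grows relative to the neutral frame; certainty grows with the change."""
    d0 = dist(neutral_pts["mouth_left"], neutral_pts["mouth_right"])
    d1 = dist(current_pts["mouth_left"], current_pts["mouth_right"])
    stretch = (d1 - d0) / d0
    if stretch > 0.05:                        # rule threshold (assumed value)
        certainty = min(1.0, stretch / 0.25)  # map the stretch onto [0, 1]
        return {"AU": 12, "certainty": round(certainty, 2)}
    return None

neutral = {"mouth_left": (30.0, 60.0), "mouth_right": (70.0, 60.0)}
smiling = {"mouth_left": (25.0, 58.0), "mouth_right": (76.0, 58.0)}
print(score_au12(neutral, smiling))           # e.g. {'AU': 12, 'certainty': 1.0}
```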

Journal ArticleDOI
01 Jun 2004-Emotion
TL;DR: Women were more accurate than men even under conditions of minimal stimulus information and women's ratings were more variable across scales, and they rated correct target emotions higher than did men.
Abstract: The authors tested gender differences in emotion judgments by utilizing a new judgment task (Studies 1 and 2) and presenting stimuli at the edge of conscious awareness (Study 2). Women were more accurate than men even under conditions of minimal stimulus information. Women's ratings were more variable across scales, and they rated correct target emotions higher than did men.

Journal ArticleDOI
TL;DR: The broad region of the occipital and temporal cortices, especially in the right hemisphere, showed higher activation during viewing of the dynamic facial expressions than it did during the viewing of either control stimulus, common to both expressions.

Proceedings ArticleDOI
27 Jun 2004
TL;DR: The system is based on a multi-level dynamic Bayesian network classifier which models complex mental states as a number of interacting facial and head displays, identified from component-based facial features.
Abstract: This paper presents a system for inferring complex mental states from video of facial expressions and head gestures in real-time. The system is based on a multi-level dynamic Bayesian network classifier which models complex mental states as a number of interacting facial and head displays, identified from component-based facial features. Experimental results for 6 mental states groups- agreement, concentrating, disagreement, interested, thinking and unsure are reported. Real-time performance, unobtrusiveness and lack of preprocessing make our system particularly suitable for user-independent human computer interaction.
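
The paper's classifier is a multi-level dynamic Bayesian network; the single-level sketch below only illustrates the underlying idea of temporal belief updating over the six mental-state groups from observed facial and head displays, with transition and observation probabilities that are assumed for the example.

```python
# Simplified single-level sketch of temporal inference over mental states from
# facial/head displays; the paper's multi-level DBN is considerably richer.
import numpy as np

states = ["agreement", "concentrating", "disagreement", "interested", "thinking", "unsure"]
displays = ["head_nod", "head_shake", "brow_raise", "lip_pull"]

rng = np.random.default_rng(2)
# Assumed transition matrix: mental states tend to persist from frame to frame.
T = np.full((6, 6), 0.04) + np.eye(6) * 0.8
T /= T.sum(axis=1, keepdims=True)
# Assumed observation model P(display | state); rows sum to 1.
O = rng.dirichlet(np.ones(4), size=6)

belief = np.full(6, 1 / 6)                          # uniform prior over mental states
observed = ["head_nod", "head_nod", "brow_raise"]   # hypothetical display sequence
for d in observed:
    belief = T.T @ belief                           # predict: propagate through dynamics
    belief *= O[:, displays.index(d)]               # update: weight by display likelihood
    belief /= belief.sum()                          # renormalise

print(states[int(belief.argmax())], belief.round(3))
```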

01 Jan 2004
TL;DR: The left amygdala was highly activated in response to dynamic fearful expressions relative to both control stimuli, but not in the case of happy expressions, and the right ventral premotor cortex was also activated.
Abstract: Dynamic facial expressions of emotion constitute natural and powerful media of communication between individuals. However, little is known about the neural substrate underlying the processing of dynamic facial expressions of emotion. We depicted the brain areas by using fMRI with 22 right-handed healthy subjects. The facial expressions are dynamically morphed from neutral to fearful or happy expressions. Two types of control stimuli were presented: (i) static facial expressions, which provided sustained fearful or happy expressions, and (ii) dynamic mosaic images, which provided dynamic information with no facial features. Subjects passively viewed these stimuli. The left amygdala was highly activated in response to dynamic facial expressions relative to both control stimuli in the case of fearful expressions, but not in the case of happy expressions. The broad region of the occipital and temporal cortices, especially in the right hemisphere, which included the activation foci of the inferior occipital gyri, middle temporal gyri, and fusiform gyri, showed higher activation during viewing of the dynamic facial expressions than it did during the viewing of either control stimulus, common to both expressions. In the same manner, the right ventral premotor cortex was also activated. These results identify the neural substrate for enhanced emotional, perceptual/cognitive, and motor processing of dynamic facial expressions of emotion.

Journal ArticleDOI
TL;DR: The phenomenon of binocular rivalry is exploited to induce complete suppression of affective face stimuli presented to one eye to suggest that the amygdala has a limited capacity to differentiate between specific facial expressions when it must rely on information received via a subcortical route.
Abstract: The human amygdala plays a crucial role in processing affective information conveyed by sensory stimuli. Facial expressions of fear and anger, which both signal potential threat to an observer, result in significant increases in amygdala activity, even when the faces are unattended or presented briefly and masked. It has been suggested that afferent signals from the retina travel to the amygdala via separate cortical and subcortical pathways, with the subcortical pathway underlying unconscious processing. Here we exploited the phenomenon of binocular rivalry to induce complete suppression of affective face stimuli presented to one eye. Twelve participants viewed brief, rivalrous visual displays in which a fearful, happy, or neutral face was presented to one eye while a house was presented simultaneously to the other. We used functional magnetic resonance imaging to study activation in the amygdala and extrastriate visual areas for consciously perceived versus suppressed face and house stimuli. Activation within the fusiform and parahippocampal gyri increased significantly for perceived versus suppressed faces and houses, respectively. Amygdala activation increased bilaterally in response to fearful versus neutral faces, regardless of whether the face was perceived consciously or suppressed because of binocular rivalry. Amygdala activity also increased significantly for happy versus neutral faces, but only when the face was suppressed. This activation pattern suggests that the amygdala has a limited capacity to differentiate between specific facial expressions when it must rely on information received via a subcortical route. We suggest that this limited capacity reflects a tradeoff between specificity and speed of processing.

Journal ArticleDOI
TL;DR: It is suggested that high-functioning individuals with ASD may be relatively unimpaired in the cognitive assessment of basic emotions, yet still show differences in the automatic processing of facial expressions.
Abstract: Objective: To examine the neural basis of impairments in interpreting facial emotions in children and adolescents with autism spectrum disorders (ASD). Method: Twelve children and adolescents with ASD and 12 typically developing (TD) controls matched faces by emotion and assigned a label to facial expressions while undergoing functional magnetic resonance imaging. Results: Both groups engaged similar neural networks during facial emotion processing, including activity in the fusiform gyrus (FG) and prefrontal cortex. However, between-group analyses in regions of interest revealed that when matching facial expressions, the ASD group showed significantly less activity than the TD group in the FG, but reliably greater activity in the precuneus. During the labeling of facial emotions, no between-group differences were observed at the behavioral or neural level. Furthermore, activity in the amygdala was moderated by task demands in the TD group but not in the ASD group. Conclusions: These findings suggest that children and adolescents with ASD in part recruit different neural networks and rely on different strategies when processing facial emotions. High-functioning individuals with ASD may be relatively unimpaired in the cognitive assessment of basic emotions, yet still show differences in the automatic processing of facial expressions.

Journal ArticleDOI
TL;DR: These findings suggest that by extracting and representing dynamic as well as morphological features, automatic facial expression analysis can begin to discriminate among the message values of morphologically similar expressions.
Abstract: Almost all work in automatic facial expression analysis has focused on recognition of prototypic expressions rather than dynamic changes in appearance over time. To investigate the relative contribution of dynamic features to expression recognition, we used automatic feature tracking to measure the relation between amplitude and duration of smile onsets in spontaneous and deliberate smiles of 81 young adults of Euro- and African-American background. Spontaneous smiles were of smaller amplitude and had a larger and more consistent relation between amplitude and duration than deliberate smiles. A linear discriminant classifier using timing and amplitude measures of smile onsets achieved a 93% recognition rate. Using timing measures alone, recognition rate declined only marginally to 89%. These findings suggest that by extracting and representing dynamic as well as morphological features, automatic facial expression analysis can begin to discriminate among the message values of morphologically similar expressions.
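
A sketch of the kind of two-feature linear discriminant classification described, using smile-onset duration and amplitude; the data below are synthetic (with spontaneous smiles given smaller amplitudes and a tighter amplitude-duration relation, as in the findings) and are not the authors' measurements.

```python
# Sketch of a linear discriminant classifier on smile-onset duration and
# amplitude, in the spirit of the study; all numbers are synthetic.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(3)
n = 200
# Spontaneous smiles: smaller amplitude, amplitude tightly coupled to duration.
dur_s = rng.uniform(0.3, 1.2, n)
amp_s = 0.8 * dur_s + rng.normal(0, 0.05, n)
# Deliberate smiles: larger amplitude, looser amplitude-duration relation.
dur_d = rng.uniform(0.3, 1.2, n)
amp_d = 1.5 * dur_d + rng.normal(0, 0.4, n)

X = np.column_stack([np.r_[dur_s, dur_d], np.r_[amp_s, amp_d]])
y = np.r_[np.zeros(n), np.ones(n)]            # 0 = spontaneous, 1 = deliberate

lda = LinearDiscriminantAnalysis().fit(X, y)
print("training accuracy:", lda.score(X, y))
```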

Journal ArticleDOI
TL;DR: The results suggest that the happy face advantage may reflect a higher-level asymmetry in the recognition and categorization of emotionally positive and negative signals.
Abstract: Three experiments examined the recognition speed advantage for happy faces. The results replicated earlier findings by showing that positive (happy) facial expressions were recognized faster than negative (disgusted or sad) facial expressions (Experiments 1 and 2). In addition, the results showed that this effect was evident even when low-level physical differences between positive and negative faces were controlled by using schematic faces (Experiment 2), and that the effect was not attributable to an artifact arising from facilitated recognition of a single feature in the happy faces (up-turned mouth line, Experiment 3). Together, these results suggest that the happy face advantage may reflect a higher-level asymmetry in the recognition and categorization of emotionally positive and negative signals.

Journal ArticleDOI
TL;DR: In this paper, the ability of psychopathic individuals to process facial emotional expressions was investigated with a set of facial expressions depicting six emotions: happy, surprised, disgusted, angry, sad and fearful.

Journal ArticleDOI
TL;DR: Implicit prejudice (but not explicit prejudice) was related to increased sensitivity to the targets' facial expressions, regardless of whether prejudice was measured after (Study 1) or before (Study 2) the race categorizations were made.
Abstract: Two studies tested the hypothesis that perceivers' prejudice and targets' facial expressions bias race categorization in stereotypic directions. Specifically, we hypothesized that racial prejudice would be more strongly associated with a tendency to categorize hostile (but not happy) racially ambiguous faces as African American. We obtained support for this hypothesis using both a speeded dichotomous categorization task (Studies 1 and 2) and a rating-scale task (Study 2). Implicit prejudice (but not explicit prejudice) was related to increased sensitivity to the targets' facial expressions, regardless of whether prejudice was measured after (Study 1) or before (Study 2) the race categorizations were made.

Journal ArticleDOI
TL;DR: In this paper, participants were trained on a temporal bisection task in which visual stimuli (a pink oval) of 400 ms and 1600 ms served as short and long standards, respectively.
Abstract: Participants were trained on a temporal bisection task in which visual stimuli (a pink oval) of 400 ms and 1600 ms served as short and long standards, respectively. They were then presented with comparison durations between 400 ms and 1600 ms, represented by faces expressing three emotions (anger, happiness, and sadness) and a neutral‐baseline facial expression. Relative to the neutral face, the proportion of long responses was higher, the psychophysical functions shifted to the left, and the bisection point values were lower for faces expressing any of the three emotions. These findings indicate that the duration of emotional faces was systematically overestimated compared to neutral ones. Furthermore, consistent with arousal‐based models of time perception, temporal overestimation for the emotional faces increased with the duration values. It appears, therefore, that emotional faces increased the speed of the pacemaker of the internal clock.
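
The bisection point is the comparison duration judged "long" on half of the trials, so a leftward shift of the psychometric function lowers it. The sketch below shows how such a bisection point can be estimated by fitting a logistic psychometric function; the response proportions are made up for illustration and are not the study's data.

```python
# Sketch of estimating bisection points from proportions of "long" responses
# (hypothetical data; a leftward shift lowers the bisection point, i.e. overestimation).
import numpy as np
from scipy.optimize import curve_fit

def psychometric(d, bp, slope):
    """Logistic P("long") as a function of duration d (ms); bp is the bisection point."""
    return 1.0 / (1.0 + np.exp(-(d - bp) / slope))

durations = np.array([400, 600, 800, 1000, 1200, 1400, 1600])
p_long_neutral = np.array([0.05, 0.15, 0.35, 0.55, 0.75, 0.90, 0.97])
p_long_angry   = np.array([0.08, 0.25, 0.50, 0.70, 0.85, 0.95, 0.99])

(bp_neutral, _), _ = curve_fit(psychometric, durations, p_long_neutral, p0=[1000, 150])
(bp_angry, _), _ = curve_fit(psychometric, durations, p_long_angry, p0=[1000, 150])
print(f"bisection point neutral: {bp_neutral:.0f} ms, angry: {bp_angry:.0f} ms")
# A lower bisection point for the emotional face means its duration is overestimated.
```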

Journal ArticleDOI
TL;DR: Cognitive, psychophysiological, neuropsychological, and neuroimaging evidence is reviewed in support of specialized neural networks subserving the processing of facial displays of threat, suggesting a primary role for the amygdala and prefrontal cortices in interpreting signs of danger from facial expressions and other social stimuli.

Journal ArticleDOI
TL;DR: The results could not be explained by either medication or co-morbid depression, and are consistent with theories emphasising the role of information processing biases in social phobia, and show promise in the application to treatment evaluation in this disorder.
Abstract: Cognitive models of social phobia propose that cognitive biases and fears regarding negative evaluation by others result in preferential attention to interpersonal sources of threat. These fears may account for the hypervigilance and avoidance of eye contact commonly reported by clinicians. This study provides the first objective examination of threat-related processing in social phobia. It was predicted that hyperscanning (hypervigilance) and eye avoidance would be most apparent in social phobia for overt expressions of threat. An infrared corneal reflection technique was used to record visual scanpaths in response to angry, sad, and happy vs. neutral facial expressions. Twenty-two subjects with social phobia were compared with age- and sex-matched normal controls. As predicted, social phobia subjects displayed hyperscanning (increased scanpath length) and avoidance (reduced foveal fixations) of the eyes, particularly evident for angry faces. The results could not be explained by either medication or co-morbid depression. These findings are consistent with theories emphasising the role of information processing biases in social phobia, and show promise in the application to treatment evaluation in this disorder.
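
The two gaze measures reported, scanpath length and foveal fixations on the eyes, can be computed from a sequence of fixation coordinates as sketched below; the coordinates and the eye region of interest are hypothetical, and the paper's corneal-reflection processing pipeline is not reproduced.

```python
# Sketch of the two gaze measures: total scanpath length and the number of
# fixations falling within an eye region of interest (hypothetical coordinates).
import math

# Hypothetical fixation sequence as (x, y) screen coordinates in pixels.
fixations = [(320, 200), (340, 210), (300, 400), (500, 380), (330, 205)]
# Assumed rectangular eye region of interest: (x_min, y_min, x_max, y_max).
eye_roi = (280, 180, 380, 230)

scanpath_length = sum(
    math.dist(fixations[i], fixations[i + 1]) for i in range(len(fixations) - 1)
)
eye_fixations = sum(
    1 for (x, y) in fixations
    if eye_roi[0] <= x <= eye_roi[2] and eye_roi[1] <= y <= eye_roi[3]
)

print(f"scanpath length: {scanpath_length:.1f} px, fixations on eyes: {eye_fixations}")
# Hyperscanning would appear as a longer scanpath; eye avoidance as fewer eye-ROI fixations.
```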

Journal ArticleDOI
TL;DR: Differences between social phobics and control subjects in brain responses to socially threatening faces are most pronounced when facial expression is task-irrelevant.

Journal ArticleDOI
TL;DR: An observed relationship between state anxiety and ventral amygdala response to happy versus neutral faces was explained by response to neutral faces.

Journal ArticleDOI
TL;DR: In this paper, the facial expression of emotion and quality of life were investigated in a cross-sectional study of 24 patients with long-term facial nerve paralysis and 24 significant others.
Abstract: Objective:To investigate the facial expression of emotion and quality of life in patients after long-term facial nerve paralysis.Study Design:Cross-sectional.Setting:Facial nerve paralysis clinic.Patients:Twenty-four patients with facial nerve paralysis and 24 significant others (partner, relative,

Book ChapterDOI
01 Jan 2004
TL;DR: A Mind—Body interface is designed that takes as input a specification of a discourse plan in an XML language (DPML) and enriches this plan with the communicative meanings that have to be attached to it, producing an input to the Body in a new XML language (APML); a language to describe facial expressions is also developed.
Abstract: Developing an embodied conversational agent able to exhibit a humanlike behavior while communicating with other virtual or human agents requires enriching the dialogue of the agent with non-verbal information. Our agent, Greta, is defined as two components: a Mind and a Body. Her mind reflects her personality, her social intelligence, as well as her emotional reaction to events occurring in the environment. Her body corresponds to her physical appearance able to display expressive behaviors. We designed a Mind—Body interface that takes as input a specification of a discourse plan in an XML language (DPML) and enriches this plan with the communicative meanings that have to be attached to it, by producing an input to the Body in a new XML language (APML). Moreover we have developed a language to describe facial expressions. It combines basic facial expressions with operators to create complex facial expressions. The purpose of this chapter is to describe these languages and to illustrate our approach to the generation of behavior of an agent able to act consistently with her goals and with the context of the interaction.

Proceedings ArticleDOI
27 Jun 2004
TL;DR: An end-to-end system that provides facial expression codes at 24 frames per second and animates a computer generated character and applies the system to fully automated facial action coding, the best performance reported so far on these datasets.
Abstract: We present a systematic comparison of machine learning methods applied to the problem of fully automatic recognition of facial expressions, including AdaBoost, support vector machines, and linear discriminant analysis. Each video-frame is first scanned in real-time to detect approximately upright-frontal faces. The faces found are scaled into image patches of equal size, convolved with a bank of Gabor energy filters, and then passed to a recognition engine that codes facial expressions into 7 dimensions in real time: neutral, anger, disgust, fear, joy, sadness, surprise. We report results on a series of experiments comparing spatial frequency ranges, feature selection techniques, and recognition engines. Best results were obtained by selecting a subset of Gabor filters using AdaBoost and then training Support Vector Machines on the outputs of the filters selected by AdaBoost. The generalization performance to new subjects for a 7-way forced choice was 93% or more correct on two publicly available datasets, the best performance reported so far on these datasets. Surprisingly, registration of internal facial features was not necessary, even though the face detector does not provide precisely registered images. The outputs of the classifier change smoothly as a function of time and thus can be used for unobtrusive motion capture. We developed an end-to-end system that provides facial expression codes at 24 frames per second and animates a computer generated character. In real-time this expression mirror operates down to resolutions of 16 pixels from eye to eye. We also applied the system to fully automated facial action coding.