
Showing papers on "Facial expression published in 2007"


Journal ArticleDOI
TL;DR: Hemodynamic and electrical neuroimaging results indicating that activity in the face-selective fusiform cortex may be enhanced by emotional (fearful) expressions, without explicit voluntary control, and presumably through direct feedback connections from the amygdala, are reviewed.

1,075 citations


Journal ArticleDOI
TL;DR: A modulatory role of oxytocin on amygdala responses to facial expressions irrespective of their valence is suggested, which might reflect reduced uncertainty about the predictive value of a social stimulus and thereby facilitates social approach behavior.

737 citations


Journal ArticleDOI
TL;DR: Two novel methods for facial expression recognition in facial image sequences are presented: one recognizes the six basic facial expressions and the other detects a chosen set of Facial Action Units, both classifying Candide grid-node displacements with multiclass SVMs.
Abstract: In this paper, two novel methods for facial expression recognition in facial image sequences are presented. The user has to manually place some of the Candide grid nodes onto face landmarks depicted in the first frame of the image sequence under examination. The grid-tracking and deformation system used, based on deformable models, tracks the grid in consecutive video frames over time, as the facial expression evolves, until the frame that corresponds to the greatest facial expression intensity. The geometrical displacement of certain selected Candide nodes, defined as the difference of the node coordinates between the first and the greatest facial expression intensity frame, is used as an input to a novel multiclass Support Vector Machine (SVM) system of classifiers that are used to recognize either the six basic facial expressions or a set of chosen Facial Action Units (FAUs). The results on the Cohn-Kanade database show a recognition accuracy of 99.7% for facial expression recognition using the proposed multiclass SVMs and 95.1% for facial expression recognition based on FAU detection.
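The following Python sketch illustrates the classification step described above: per-node displacement features fed to a multiclass SVM. It is not the authors' implementation; scikit-learn, the node count, and all data are assumptions chosen for demonstration.

```python
# Illustrative sketch only: classifying expressions from Candide grid-node
# displacements with a multiclass SVM, in the spirit of the paper above.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def displacement_features(first_frame_nodes, apex_frame_nodes):
    """Flatten per-node (dx, dy) displacements into one feature vector.

    Both arguments are (n_nodes, 2) arrays of node coordinates; the node
    selection and tracking are assumed to come from an external grid tracker.
    """
    return (apex_frame_nodes - first_frame_nodes).ravel()

# Hypothetical training data: one displacement vector per sequence, labelled
# with one of the six basic expressions (0..5).
rng = np.random.default_rng(0)
n_nodes = 44                                   # assumed number of selected nodes
X_train = rng.normal(size=(60, 2 * n_nodes))
y_train = rng.integers(0, 6, size=60)

# RBF-kernel SVM; scikit-learn handles the multiclass case internally.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
clf.fit(X_train, y_train)

# Classify a new sequence from its neutral first frame and apex frame.
x_new = displacement_features(rng.normal(size=(n_nodes, 2)),
                              rng.normal(size=(n_nodes, 2)))
print(clf.predict([x_new]))
```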

676 citations


Journal ArticleDOI
TL;DR: Emotional faces were found to trigger an increased ERP positivity relative to neutral faces, and similar emotional expression effects were found for six basic emotions, suggesting that these effects are not primarily generated within neural structures specialised for the automatic detection of specific emotions.

627 citations


Journal ArticleDOI
TL;DR: Whether aspects of face perception are "automatic", in that they are especially rapid, non-conscious, mandatory and capacity-free, and whether limited-capacity selective attention mechanisms are preferentially recruited by faces and facial expressions, are examined.

595 citations


Journal ArticleDOI
TL;DR: Differential engagement of the MPFC, the PCC/precuneus, and temporo-parietal regions in the self-task indicates that these structures act as key players in the evaluation of one's own emotional state during empathic face-to-face interaction.
Abstract: Empathy allows emotional psychological inference about other person's mental states and feelings in social contexts. We aimed at specifying the common and differential neural mechanisms of “self”-and “other”-related attribution of emotional states using event-related functional magnetic resonance imaging. Subjects viewed faces expressing emotions with direct or averted gaze and either focused on their own emotional response to each face (self-task) or evaluated the emotional state expressed by the face (other-task). The common network activated by both tasks included the left lateral orbito-frontal and medial prefrontal cortices (MPFC), bilateral inferior frontal cortices, superior temporal sulci and temporal poles, as well as the right cerebellum. In a subset of these regions, neural activity was significantly correlated with empathic abilities. The self-(relative to the other-) task differentially activated the MPFC, the posterior cingulate cortex (PCC)/precuneus, and the temporo-parietal junction bilaterally. Empathy-related processing of emotional facial expressions recruited brain areas involved in mirror neuron and theory-of-mind (ToM) mechanisms. The differential engagement of the MPFC, the PCC/precuneus, and temporo-parietal regions in the self-task indicates that these structures act as key players in the evaluation of one's own emotional state during empathic face-to-face interaction. Activation of mirror neurons in a task relying on empathic abilities without explicit task-related motor components supports the view that mirror neurons are not only involved in motor cognition but also in emotional interpersonal cognition. An interplay between ToM and mirror neuron mechanisms may hold for the maintenance of a self-other distinction during empathic interpersonal face-to-face interactions.

496 citations


Journal ArticleDOI
TL;DR: This paper presents the computational tools and a hardware prototype for 3D face recognition and presents the results on the largest known, and now publicly available, face recognition grand challenge 3D facial database consisting of several thousand scans.
Abstract: In this paper, we present the computational tools and a hardware prototype for 3D face recognition. Full automation is provided through the use of advanced multistage alignment algorithms, resilience to facial expressions by employing a deformable model framework, and invariance to 3D capture devices through suitable preprocessing steps. In addition, scalability in both time and space is achieved by converting 3D facial scans into compact metadata. We present our results on the largest known, and now publicly available, face recognition grand challenge 3D facial database consisting of several thousand scans. To the best of our knowledge, this is the highest performance reported on the FRGC v2 database for the 3D modality.

496 citations


Journal ArticleDOI
01 Aug 2007-Emotion
TL;DR: The findings illustrate the importance of emotional whole-body expressions in communication either when viewed on their own or, as is often the case in realistic circumstances, in combination with facial expressions and emotional voices.
Abstract: The most familiar emotional signals consist of faces, voices, and whole-body expressions, but so far research on emotions expressed by the whole body is sparse. The authors investigated recognition of whole-body expressions of emotion in three experiments. In the first experiment, participants performed a body expression-matching task. Results indicate good recognition of all emotions, with fear being the hardest to recognize. In the second experiment, two alternative forced choice categorizations of the facial expression of a compound face-body stimulus were strongly influenced by the bodily expression. This effect was a function of the ambiguity of the facial expression. In the third experiment, recognition of emotional tone of voice was similarly influenced by task irrelevant emotional body expressions. Taken together, the findings illustrate the importance of emotional whole-body expressions in communication either when viewed on their own or, as is often the case in realistic circumstances, in combination with facial expressions and emotional voices.

419 citations


Journal ArticleDOI
TL;DR: It is suggested that facial mimicry differentially contributes to recognition of specific facial expressions, thus allowing for more refined predictions from embodied cognition theories.
Abstract: People spontaneously mimic a variety of behaviors, including emotional facial expressions. Embodied cognition theories suggest that mimicry reflects internal simulation of perceived emotion in order to facilitate its understanding. If so, blocking facial mimicry should impair recognition of expressions, especially of emotions that are simulated using facial musculature. The current research tested this hypothesis using four expressions (happy, disgust, fear, and sad) and two mimicry-interfering manipulations (1) biting on a pen and (2) chewing gum, as well as two control conditions. Experiment 1 used electromyography over cheek, mouth, and nose regions. The bite manipulation consistently activated assessed muscles, whereas the chew manipulation activated muscles only intermittently. Further, expressing happiness generated most facial action. Experiment 2 found that the bite manipulation interfered most with recognition of happiness. These findings suggest that facial mimicry differentially contributes to recognition of specific facial expressions, thus allowing for more refined predictions from embodied cognition theories.

411 citations


Journal ArticleDOI
TL;DR: The experiments show that the integration of AU relationships and AU dynamics with AU measurements yields significant improvement of AU recognition, especially for spontaneous facial expressions and in more realistic environments, including illumination variation, face pose variation, and occlusion.
Abstract: A system that could automatically analyze the facial actions in real time has applications in a wide range of different fields. However, developing such a system is always challenging due to the richness, ambiguity, and dynamic nature of facial actions. Although a number of research groups attempt to recognize facial action units (AUs) by improving either the facial feature extraction techniques or the AU classification techniques, these methods often recognize AUs or certain AU combinations individually and statically, ignoring the semantic relationships among AUs and the dynamics of AUs. Hence, these approaches cannot always recognize AUs reliably, robustly, and consistently. In this paper, we propose a novel approach that systematically accounts for the relationships among AUs and their temporal evolutions for AU recognition. Specifically, we use a dynamic Bayesian network (DBN) to model the relationships among different AUs. The DBN provides a coherent and unified hierarchical probabilistic framework to represent probabilistic relationships among various AUs and to account for the temporal changes in facial action development. Within our system, robust computer vision techniques are used to obtain AU measurements. Such AU measurements are then applied as evidence to the DBN for inferring various AUs. The experiments show that the integration of AU relationships and AU dynamics with AU measurements yields significant improvement of AU recognition, especially for spontaneous facial expressions and in more realistic environments, including illumination variation, face pose variation, and occlusion.
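To make the inference idea concrete, here is a minimal sketch of forward filtering in a two-slice dynamic Bayesian network over two related AUs. It is an assumption-laden toy, not the paper's model: the AU pair, transition probabilities, and detector reliabilities are all invented for illustration.

```python
# Toy DBN filtering over two related action units (e.g. AU6 and AU12):
# the joint state captures AU co-occurrence, the transition matrix captures
# temporal dynamics, and noisy per-frame AU detections act as evidence.
import numpy as np

STATES = [(0, 0), (0, 1), (1, 0), (1, 1)]      # joint (AU6, AU12) states

# P(state_t | state_{t-1}); rows sum to 1. Made-up values favouring
# persistence and AU6/AU12 co-activation.
T = np.array([
    [0.80, 0.08, 0.08, 0.04],
    [0.10, 0.60, 0.05, 0.25],
    [0.10, 0.05, 0.60, 0.25],
    [0.02, 0.08, 0.08, 0.82],
])

# Hypothetical detector reliability: P(detected | AU on), P(detected | AU off).
P_HIT, P_FALSE_ALARM = 0.85, 0.15

def likelihood(det_au6, det_au12):
    """P(detector outputs | joint state) for each of the four joint states."""
    lik = np.empty(4)
    for i, (au6, au12) in enumerate(STATES):
        p6 = P_HIT if au6 else P_FALSE_ALARM
        p12 = P_HIT if au12 else P_FALSE_ALARM
        lik[i] = (p6 if det_au6 else 1 - p6) * (p12 if det_au12 else 1 - p12)
    return lik

belief = np.full(4, 0.25)                        # uniform prior
detections = [(0, 0), (1, 0), (1, 1), (1, 1)]    # per-frame detector outputs
for d in detections:
    belief = likelihood(*d) * (T.T @ belief)     # predict, then weight by evidence
    belief /= belief.sum()

for (au6, au12), p in zip(STATES, belief):
    print(f"P(AU6={au6}, AU12={au12} | detections) = {p:.2f}")
```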

404 citations


Journal ArticleDOI
01 Nov 2007-Emotion
TL;DR: Facial dynamics significantly influenced participants' choice of with whom to play the game and decisions to cooperate and it was found that inferences about the other player's trustworthiness mediated these effects of facial dynamics on cooperative behavior.
Abstract: Detecting cooperative partners in situations that have financial stakes is crucial to successful social exchange. The authors tested whether humans are sensitive to subtle facial dynamics of counterparts when deciding whether to trust and cooperate. Participants played a 2-person trust game before which the facial dynamics of the other player were manipulated using brief (< 6 s) but highly realistic facial animations. Results showed that facial dynamics significantly influenced participants' (a) choice of with whom to play the game and (b) decisions to cooperate. It was also found that inferences about the other player's trustworthiness mediated these effects of facial dynamics on cooperative behavior. (C) 2007 by the American Psychological Association

Journal ArticleDOI
TL;DR: Across all three morph types, adults displayed more sensitivity to subtle changes in emotional expression than children and adolescents, and fear morphs and fear-to-anger blends showed a linear developmental trajectory, whereas anger morphs showed a quadratic trend, increasing sharply from adolescents to adults.
Abstract: The ability to interpret emotions in facial expressions is crucial for social functioning across the lifespan. Facial expression recognition develops rapidly during infancy and improves with age during the preschool years. However, the developmental trajectory from late childhood to adulthood is less clear. We tested older children, adolescents and adults on a two-alternative forced-choice discrimination task using morphed faces that varied in emotional content. Actors appeared to pose expressions that changed incrementally along three progressions: neutral-to-fear, neutral-to-anger, and fear-to-anger. Across all three morph types, adults displayed more sensitivity to subtle changes in emotional expression than children and adolescents. Fear morphs and fear-to-anger blends showed a linear developmental trajectory, whereas anger morphs showed a quadratic trend, increasing sharply from adolescents to adults. The results provide evidence for late developmental changes in emotional expression recognition with some specificity in the time course for distinct emotions.

Journal ArticleDOI
TL;DR: An approach to automatic visual recognition of expressive face and upper-body gestures from video sequences, suitable for use in a vision-based affective multi-modal framework, is presented; facial expression and affective body gesture information are fused at the feature level and at the decision level.
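The two fusion strategies named in the summary above can be sketched as follows. This is not the authors' system: the feature dimensions, classifier choice (logistic regression via scikit-learn), and data are placeholders.

```python
# Illustrative sketch of feature-level vs. decision-level fusion of facial
# and body-gesture features for affect classification.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n, n_face, n_body, n_classes = 120, 30, 20, 4    # hypothetical sizes
X_face = rng.normal(size=(n, n_face))
X_body = rng.normal(size=(n, n_body))
y = rng.integers(0, n_classes, size=n)

# (a) Feature-level fusion: one classifier over concatenated features.
feat_clf = LogisticRegression(max_iter=1000).fit(np.hstack([X_face, X_body]), y)

# (b) Decision-level fusion: per-modality classifiers, averaged probabilities.
face_clf = LogisticRegression(max_iter=1000).fit(X_face, y)
body_clf = LogisticRegression(max_iter=1000).fit(X_body, y)

def decision_level_predict(x_face, x_body):
    proba = 0.5 * face_clf.predict_proba(x_face) + 0.5 * body_clf.predict_proba(x_body)
    return proba.argmax(axis=1)

print(feat_clf.predict(np.hstack([X_face[:3], X_body[:3]])))
print(decision_level_predict(X_face[:3], X_body[:3]))
```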

Journal ArticleDOI
TL;DR: The Approach-Avoidance Task (AAT) was employed to indirectly investigate avoidance reactions to stimuli of potential social threat, and a critical discrepancy between direct and indirect measures was observed for smiling faces: highly socially anxious individuals (HSAs) evaluated them positively, but reacted to them with avoidance.

Journal ArticleDOI
TL;DR: The results strongly suggest the existence of a genetic condition leading to a selective deficit of visual recognition in individuals who are otherwise high functioning in everyday life.
Abstract: We report on neuropsychological testing done with a family in which many members reported severe face recognition impairments. Ten of these individuals were high functioning in everyday life and performed normally on tests of low-level vision and high-level cognition. In contrast, they showed clear deficits on tests requiring face memory and judgements of facial similarity. They did not show deficits with all aspects of higher-level visual processing, as all tested performed normally on a challenging facial emotion recognition task and on a global-local letter identification task. On object memory tasks requiring recognition of particular cars and guns, they showed significant deficits, so their recognition impairments were not restricted to facial identity. These results strongly suggest the existence of a genetic condition leading to a selective deficit of visual recognition.

Journal ArticleDOI
TL;DR: The authors investigated the hypothesis that facial cues in different parts of the face are weighted differently when interpreting emotions and found that individuals in cultures where emotional subduction is the norm (such as Japan) would focus more strongly on the eyes than the mouth when interpreting others' emotions.

Journal ArticleDOI
TL;DR: The majority of face-responsive amygdala neurons (64%) responded to both identity and facial expression, suggesting that these parameters are processed jointly in the amygdala, while large fractions of neurons showed pure identity-selective or expression-selective responses.
Abstract: The amygdala is purported to play an important role in face processing, yet the specificity of its activation to face stimuli and the relative contribution of identity and expression to its activation are unknown. In the current study, neural activity in the amygdala was recorded as monkeys passively viewed images of monkey faces, human faces, and objects on a computer monitor. Comparable proportions of neurons responded selectively to images from each category. Neural responses to monkey faces were further examined to determine whether face identity or facial expression drove the face-selective responses. The majority of these neurons (64%) responded both to identity and facial expression, suggesting that these parameters are processed jointly in the amygdala. Large fractions of neurons, however, showed pure identity-selective or expression-selective responses. Neurons were selective for a particular facial expression by either increasing or decreasing their firing rate compared with the firing rates elicited by the other expressions. Responses to appeasing faces were often marked by significant decreases of firing rates, whereas responses to threatening faces were strongly associated with increased firing rate. Thus global activation in the amygdala might be larger to threatening faces than to neutral or appeasing faces.

Journal ArticleDOI
TL;DR: It is suggested that older adults were less accurate at identifying emotions than were young adults, but the pattern differed across emotions and task types, and implications for age-related changes in different types of emotional processing are discussed.
Abstract: Age differences in emotion recognition from lexical stimuli and facial expressions were examined in a cross-sectional sample of adults aged 18 to 85 (N = 357). Emotion-specific response biases differed by age: Older adults were disproportionately more likely to incorrectly label lexical stimuli as happiness, sadness, and surprise and to incorrectly label facial stimuli as disgust and fear. After these biases were controlled, findings suggested that older adults were less accurate at identifying emotions than were young adults, but the pattern differed across emotions and task types. The lexical task showed stronger age differences than the facial task, and for lexical stimuli, age groups differed in accuracy for all emotional states except fear. For facial stimuli, in contrast, age groups differed only in accuracy for anger, disgust, fear, and happiness. Implications for age-related changes in different types of emotional processing are discussed.

Journal ArticleDOI
TL;DR: The ability to recognize facial emotion develops with age, with a developmental course that depends on the emotion to be recognized, and children at all ages and adults exhibited both an inversion effect and a composite effect, suggesting that children rely on configural information to recognize facial emotions.

Journal ArticleDOI
01 Feb 2007-Emotion
TL;DR: Two studies provided direct support for a recently proposed dialect theory of communicating emotion, positing that expressive displays show cultural variations similar to linguistic dialects, thereby decreasing accurate recognition by out-group members.
Abstract: Two studies provided direct support for a recently proposed dialect theory of communicating emotion, positing that expressive displays show cultural variations similar to linguistic dialects, thereby decreasing accurate recognition by out-group members. In Study 1, 60 participants from Quebec and Gabon posed facial expressions. Dialects, in the form of activating different muscles for the same expressions, emerged most clearly for serenity, shame, and contempt and also for anger, sadness, surprise, and happiness, but not for fear, disgust, or embarrassment. In Study 2, Quebecois and Gabonese participants judged these stimuli and stimuli standardized to erase cultural dialects. As predicted, an in-group advantage emerged for nonstandardized expressions only and most strongly for expressions with greater regional dialects, according to Study 1.

01 Jan 2007
TL;DR: FACS is regarded by many as the standard measure for facial behavior and is used widely in diverse fields beyond emotion science, including facial neuromuscular disorders.
Abstract: to name a few. Because of its importance to the study of emotion, a number of observer-based systems of facial expression measurement have been developed. Using FACS and viewing video-recorded facial behavior at frame rate and slow motion, coders can manually code nearly all possible facial expressions, which are decomposed into action units (AUs). Action units, with some qualifications, are the smallest visually discriminable facial movements. By comparison, other systems are less thorough (Malatesta et al., 1989), fail to differentiate between some anatomically distinct movements (Oster, Hegley, & Nagel, 1992), consider movements that are not anatomically distinct as separable (Oster et al., 1992), and often assume a one-to-one mapping between facial expression and emotion (for a review of these systems, see Cohn & Ekman, in press). Unlike systems that use emotion labels to describe expression, FACS explicitly distinguishes between facial actions and inferences about what they mean. FACS itself is descriptive and includes no emotion-specified descriptors. Hypotheses and inferences about the emotional meaning of facial actions are extrinsic to FACS. If one wishes to make emotion-based inferences from FACS codes, a variety of related resources exist. These include the FACS Investigators' Guide. These resources suggest combination rules for defining emotion-specified expressions from FACS action units, but this inferential step remains extrinsic to FACS. Because of its descriptive power, FACS is regarded by many as the standard measure for facial behavior and is used widely in diverse fields. Beyond emotion science, these include facial neuromuscular disorders.
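As a concrete illustration of the kind of extrinsic combination rules mentioned above, the sketch below maps observed AUs to candidate emotion labels. The prototype AU sets are simplified textbook examples, not the official FACS Investigators' Guide tables, and the matching threshold is arbitrary.

```python
# Illustrative AU-combination rules for emotion-specified expressions.
# These prototypes are simplified examples; the inference step is extrinsic
# to FACS itself, which remains purely descriptive.
PROTOTYPES = {
    "happiness": {6, 12},         # cheek raiser + lip corner puller
    "sadness":   {1, 4, 15},      # inner brow raiser + brow lowerer + lip corner depressor
    "surprise":  {1, 2, 5, 26},   # brow raisers + upper lid raiser + jaw drop
    "fear":      {1, 2, 4, 5, 20, 26},
    "anger":     {4, 5, 7, 23},
    "disgust":   {9, 15},         # nose wrinkler + lip corner depressor
}

def emotion_candidates(observed_aus, min_overlap=0.6):
    """Return emotion labels whose prototype AU set is sufficiently covered."""
    observed = set(observed_aus)
    hits = {}
    for emotion, aus in PROTOTYPES.items():
        overlap = len(aus & observed) / len(aus)
        if overlap >= min_overlap:
            hits[emotion] = round(overlap, 2)
    return hits

print(emotion_candidates([6, 12, 25]))   # -> {'happiness': 1.0}
print(emotion_candidates([1, 4, 15]))    # -> {'sadness': 1.0}
```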

Journal ArticleDOI
TL;DR: This paper presents the effort in creating an authentic facial expression database based on spontaneous emotions derived from the environment, and tests and compares a wide range of classifiers from the machine learning literature that can be used for facial expression classification.
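A minimal sketch of such a classifier comparison is shown below, assuming pre-extracted feature vectors and scikit-learn; the classifier set, feature dimensionality, and data are placeholders rather than the paper's actual protocol.

```python
# Compare several off-the-shelf classifiers on facial-expression features
# with 5-fold cross-validation (placeholder data).
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 40))        # placeholder feature vectors
y = rng.integers(0, 6, size=200)      # placeholder expression labels

classifiers = {
    "SVM (RBF)": SVC(kernel="rbf", gamma="scale"),
    "k-NN (k=5)": KNeighborsClassifier(n_neighbors=5),
    "Random forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "Naive Bayes": GaussianNB(),
}
for name, clf in classifiers.items():
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")
```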

Journal ArticleDOI
TL;DR: Impairments in the neural processing of happy facial expressions in depression were evident in the core regions of affective facial processing and were reversed following treatment; these data complement the neural effects observed with negative affective stimuli.
Abstract: Objective: Processing affective facial expressions is an important component of interpersonal relationships. However, depressed patients show impairments in this system. The present study investigated the neural correlates of implicit processing of happy facial expressions in depression and identified regions affected by antidepressant therapy. Method: Two groups of subjects participated in a prospective study with functional magnetic resonance imaging (fMRI). The patients were 19 medication-free subjects (mean age, 43.2 years) with major depression, acute depressive episode, unipolar subtype. The comparison group contained 19 matched healthy volunteers (mean age, 42.8 years). Both groups underwent fMRI scans at baseline (week 0) and at 8 weeks. Following the baseline scan, the patients received treatment with fluoxetine, 20 mg daily. The fMRI task was implicit affect recognition with standard facial stimuli morphed to display varying intensities of happiness. The fMRI data were analyzed to estimate the a...

Journal ArticleDOI
15 Dec 2007-Pain
TL;DR: The preserved pain typicalness of facial responses to noxious stimulation suggests that pain is reflected as validly in the facial responses of demented patients as it is in healthy individuals.
Abstract: The facial expression of pain has emerged as an important pain indicator in demented patients, who have difficulties in providing self-report ratings. In a few clinical studies an increase of facial responses to pain was observed in demented patients compared to healthy controls. However, it had to be shown that this increase can be verified when using experimental methods, which also allows for testing whether the facial responses in demented patients are still typical for pain. We investigated facial responses in 42 demented patients and 54 age-matched healthy controls to mechanically induced pain of various intensities. The face of the subject was videotaped during pressure stimulation and was later analysed using the Facial Action Coding System. Besides facial responses we also assessed self-report ratings. Comparable to previous findings, we found that facial responses to noxious stimulation were significantly increased in demented patients compared to healthy controls. This increase was mainly due to an increase of pain-indicative facial responses in demented patients. Moreover, facial responses were closely related to the intensity of stimulation, especially in demented patients. Regarding self-report ratings, we found no significant group differences; however, the capacity to provide these self-report ratings was diminished in demented patients. The preserved pain typicalness of facial responses to noxious stimulation suggests that pain is reflected as validly in the facial responses of demented patients as it is in healthy individuals. Therefore, the facial expression of pain has the potential to serve as an alternative pain assessment tool in demented patients, even in patients who are verbally compromised.

Journal ArticleDOI
TL;DR: Results demonstrate that even passive viewing of facial expressions activates a wide network of brain regions that were also involved in the execution of similar expressions, including the IFG/insula and the posterior parietal cortex.
Abstract: Facial expressions contain both motor and emotional components. The inferior frontal gyrus (IFG) and posterior parietal cortex have been considered to compose a mirror neuron system (MNS) for the motor components of facial expressions, while the amygdala and insula may represent an "additional" MNS for emotional states. Together, these systems may contribute to our understanding of facial expressions. Here we further examine this possibility. In three separate event-related fMRI experiments, subjects had to (1) observe, (2) discriminate, and (3) imitate facial expressions. Stimuli were dynamic neutral, happy, fearful and disgusted facial expressions, and in Experiments 1 and 2, an additional pattern motion condition. Importantly, during each experiment, subjects were unaware of the nature of the next experiments. Results demonstrate that even passive viewing of facial expressions activates a wide network of brain regions that were also involved in the execution of similar expressions, including the ...

Journal ArticleDOI
TL;DR: It is shown that nonverbal aspects of the physician-patient interaction play an important role, and that physician training could profit from incorporating knowledge about physician and patient nonverbal behavior.

Journal ArticleDOI
TL;DR: Behavioural data revealed that audiovisual presentation of non-verbal emotional information resulted in a significant increase in correctly classified stimuli when compared with visual and auditory stimulation, paralleled by enhanced activation in bilateral posterior superior temporal gyrus and right thalamus, substantiating their role in the emotional integration process.

Journal ArticleDOI
TL;DR: Results show that at least some expressions are discriminated and preferred in newborns only a few days old, and raise the possibility that this preference reflects experience acquired over the first few days of life.
Abstract: The ability of newborns to discriminate and respond to different emotional facial expressions remains controversial. We conducted three experiments in which we tested newborns' preferences, and their ability to discriminate between neutral, fearful, and happy facial expressions, using visual preference and habituation procedures. In the first two experiments, no evidence was found that newborns discriminate, or show a preference between, a fearful and a neutral face. In the third experiment, newborns looked significantly longer at a happy facial expression than a fearful one. We raise the possibility that this preference reflects experience acquired over the first few days of life. These results show that at least some expressions are discriminated and preferred in newborns only a few days old.

Book ChapterDOI
01 Jul 2007
TL;DR: The human face is a multi-signal input-output communicative system capable of tremendous flexibility and specificity and is the authors' preeminent means of communicating and understanding somebody’s affective state and intentions on the basis of the shown facial expression.
Abstract: 1. Human Face and Its Expression The human face is the site for major sensory inputs and major communicative outputs. It houses the majority of our sensory apparatus as well as our speech production apparatus. It is used to identify other members of our species, to gather information about age, gender, attractiveness, and personality, and to regulate conversation by gazing or nodding. Moreover, the human face is our preeminent means of communicating and understanding somebody’s affective state and intentions on the basis of the shown facial expression (Keltner & Ekman, 2000). Thus, the human face is a multi-signal input-output communicative system capable of tremendous flexibility and specificity (Ekman & Friesen, 1975). In general, the human face conveys information via four kinds of signals. (a) Static facial signals represent relatively permanent features of the face, such as the bony structure, the soft tissue, and the overall proportions of the face. These signals contribute to an individual’s appearance and are usually exploited for person identification.

Journal ArticleDOI
TL;DR: Dysfunctions in key components of the human face processing system including the AMY, FFG and posterior STS region are present in individuals with high-functioning autism, and this dysfunction might contribute to the deficits in processing emotional facial expressions.
Abstract: Despite elegant behavioral descriptions of abnormalities for processing emotional facial expressions and biological motion in autism, identification of the neural mechanisms underlying these abnormalities remains a critical and largely unmet challenge. We compared brain activity with dynamic and static facial expressions in participants with and without high-functioning autism using event-related functional magnetic resonance imaging (fMRI) and three classes of face stimuli—emotion morphs (fearful and angry), identity morphs and static images (fearful, angry and neutral). We observed reduced activity in the amygdala (AMY) and fusiform gyrus (FFG) to dynamic emotional expressions in people with autism. There was also a lack of modulation by dynamic compared with static emotional expressions of social brain regions including the AMY, posterior superior temporal sulcus (STS) region and FFG. We observed equivalent emotion and identity morph-evoked activity in participants with and without autism in a region corresponding to the expected location of the more generally motion-sensitive area MT or V5. We conclude that dysfunctions in key components of the human face processing system including the AMY, FFG and posterior STS region are present in individuals with high-functioning autism, and this dysfunction might contribute to the deficits in processing emotional facial expressions.