
Showing papers on "Facial expression published in 1998"


Journal ArticleDOI
TL;DR: This study, using fMRI in conjunction with masked stimulus presentations, represents an initial step toward determining the role of the amygdala in nonconscious processing.
Abstract: Functional magnetic resonance imaging (fMRI) of the human brain was used to study whether the amygdala is activated in response to emotional stimuli, even in the absence of explicit knowledge that such stimuli were presented. Pictures of human faces bearing fearful or happy expressions were presented to 10 normal, healthy subjects by using a backward masking procedure that resulted in 8 of 10 subjects reporting that they had not seen these facial expressions. The backward masking procedure consisted of 33 msec presentations of fearful or happy facial expressions, their offset coincident with the onset of 167 msec presentations of neutral facial expressions. Although subjects reported seeing only neutral faces, blood oxygen level-dependent (BOLD) fMRI signal in the amygdala was significantly higher during viewing of masked fearful faces than during the viewing of masked happy faces. This difference was composed of significant signal increases in the amygdala to masked fearful faces as well as significant signal decreases to masked happy faces, consistent with the notion that the level of amygdala activation is affected differentially by the emotional valence of external stimuli. In addition, these facial expressions activated the sublenticular substantia innominata (SI), where signal increases were observed to both fearful and happy faces—suggesting a spatial dissociation of territories that respond to emotional valence versus salience or arousal value. This study, using fMRI in conjunction with masked stimulus presentations, represents an initial step toward determining the role of the amygdala in nonconscious processing.

2,226 citations
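A note on the masking timing described above: presentations of 33 and 167 msec correspond to whole refresh cycles of a typical display. A minimal sketch of that conversion, assuming a 60 Hz refresh rate (the paper does not state one):

```python
# Hypothetical helper: round a masked-presentation duration to whole
# monitor frames; 33 ms and 167 ms are ~2 and ~10 frames at 60 Hz.
def ms_to_frames(duration_ms: float, refresh_hz: float = 60.0) -> int:
    return round(duration_ms / (1000.0 / refresh_hz))

print(ms_to_frames(33), ms_to_frames(167))  # -> 2 10
```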


Proceedings ArticleDOI
14 Apr 1998
TL;DR: The results show that it is possible to construct a facial expression classifier with Gabor coding of the facial images as the input stage, and that the Gabor representation shows a significant degree of psychological plausibility, a design feature which may be important for human-computer interfaces.
Abstract: A method for extracting information about facial expressions from images is presented. Facial expression images are coded using a multi-orientation multi-resolution set of Gabor filters which are topographically ordered and aligned approximately with the face. The similarity space derived from this representation is compared with one derived from semantic ratings of the images by human observers. The results show that it is possible to construct a facial expression classifier with Gabor coding of the facial images as the input stage. The Gabor representation shows a significant degree of psychological plausibility, a design feature which may be important for human-computer interfaces.

2,100 citations
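To make the Gabor coding stage concrete, here is a minimal sketch of a multi-orientation, multi-resolution Gabor filter bank applied to an image patch. The kernel size, wavelengths, and mean-magnitude pooling are illustrative assumptions, not the parameters used in the paper:

```python
import numpy as np
from scipy.signal import convolve2d

def gabor_kernel(ksize, sigma, theta, wavelength, gamma=0.5):
    """Real part of a 2D Gabor filter at orientation theta (radians)."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + (gamma * yr)**2) / (2 * sigma**2))
    return envelope * np.cos(2 * np.pi * xr / wavelength)

def gabor_features(image, n_orient=8, wavelengths=(4, 8, 16)):
    """Mean response magnitude for each (orientation, scale) filter."""
    feats = []
    for wl in wavelengths:
        for k in range(n_orient):
            kern = gabor_kernel(31, sigma=wl / 2,
                                theta=k * np.pi / n_orient, wavelength=wl)
            feats.append(np.abs(convolve2d(image, kern, mode='same')).mean())
    return np.array(feats)

feats = gabor_features(np.random.rand(64, 64))  # toy 64x64 "face" patch
print(feats.shape)  # (24,) = 8 orientations x 3 scales
```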


Journal ArticleDOI
01 Jan 1998-Brain
TL;DR: Functional neuroimaging confirmed that the amygdala and some of its functionally connected structures mediate specific neural responses to fearful expressions and demonstrated that amygdalar responses predict expression-specific neural activity in extrastriate cortex.
Abstract: Localized amygdalar lesions in humans produce deficits in the recognition of fearful facial expressions. We used functional neuroimaging to test two hypotheses: (i) that the amygdala and some of its functionally connected structures mediate specific neural responses to fearful expressions; (ii) that the early visual processing of emotional faces can be influenced by amygdalar activity. Normal subjects were scanned using PET while they performed a gender discrimination task involving static grey-scale images of faces expressing varying degrees of fear or happiness. In support of the first hypothesis, enhanced activity in the left amygdala, left pulvinar, left anterior insula and bilateral anterior cingulate gyri was observed during the processing of fearful faces. Evidence consistent with the second hypothesis was obtained by a demonstration that amygdalar responses predict expression-specific neural activity in extrastriate cortex.

1,282 citations


Journal ArticleDOI
04 Jun 1998-Nature
TL;DR: This investigation tested the hypothesis that the human amygdala is required for accurate social judgments of other individuals on the basis of their facial appearance by asking three subjects with complete bilateral amygdala damage to judge faces of unfamiliar people with respect to two attributes important in real-life social encounters: approachability and trustworthiness.
Abstract: Studies in animals have implicated the amygdala in emotional [1-3] and social [4-6] behaviours, especially those related to fear and aggression. Although lesion [7-10] and functional imaging [11-14] studies in humans have demonstrated the amygdala's participation in recognizing emotional facial expressions, its role in human social behaviour has remained unclear. We report here our investigation into the hypothesis that the human amygdala is required for accurate social judgments of other individuals on the basis of their facial appearance. We asked three subjects with complete bilateral amygdala damage to judge faces of unfamiliar people with respect to two attributes important in real-life social encounters: approachability and trustworthiness. All three subjects judged unfamiliar individuals to be more approachable and more trustworthy than did control subjects. The impairment was most striking for faces to which normal subjects assign the most negative ratings: unapproachable and untrustworthy looking individuals. Additional investigations revealed that the impairment does not extend to judging verbal descriptions of people. The amygdala appears to be an important component of the neural systems that help retrieve socially relevant knowledge on the basis of facial appearance.

1,110 citations


Proceedings ArticleDOI
24 Jul 1998
TL;DR: This work presents new techniques for creating photorealistic textured 3D facial models from photographs of a human subject, and for creating smooth transitions between different facial expressions by morphing between these different models.
Abstract: We present new techniques for creating photorealistic textured 3D facial models from photographs of a human subject, and for creating smooth transitions between different facial expressions by morphing between these different models. Starting from several uncalibrated views of a human subject, we employ a user-assisted technique to recover the camera poses corresponding to the views as well as the 3D coordinates of a sparse set of chosen locations on the subject's face. A scattered data interpolation technique is then used to deform a generic face mesh to fit the particular geometry of the subject's face. Having recovered the camera poses and the facial geometry, we extract from the input images one or more texture maps for the model. This process is repeated for several facial expressions of a particular subject. To generate transitions between these facial expressions we use 3D shape morphing between the corresponding face models, while at the same time blending the corresponding textures. Using our technique, we have been able to generate highly realistic face models and natural looking animations.

826 citations
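The scattered data interpolation step can be illustrated with a radial basis function (RBF) deformation: displacements known at the sparse control points are interpolated to every vertex of the generic mesh, and recovered expression meshes are then blended by linear morphing. A minimal numpy sketch under those assumptions (the paper's actual interpolant and blending are more elaborate):

```python
import numpy as np

def rbf_deform(generic_pts, src_ctrl, dst_ctrl, eps=1e-8):
    """Deform 3D points by radially interpolating the displacements
    of the control points (r^3 kernel, a thin-plate analogue in 3D)."""
    phi = lambda r: r**3
    d = np.linalg.norm(src_ctrl[:, None] - src_ctrl[None, :], axis=-1)
    A = phi(d) + eps * np.eye(len(src_ctrl))     # regularized RBF system
    w = np.linalg.solve(A, dst_ctrl - src_ctrl)  # (n, 3) weights
    d_new = np.linalg.norm(generic_pts[:, None] - src_ctrl[None, :], axis=-1)
    return generic_pts + phi(d_new) @ w

def morph(verts_a, verts_b, t):
    """Linear 3D shape morph; morph(a, b, 0.5) is the halfway expression."""
    return (1 - t) * verts_a + t * verts_b
```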


Journal ArticleDOI
TL;DR: The findings support the differential localization of the neural substrates of fear and disgust and suggest a possible general role for the superior temporal gyrus in the perception of emotional expressions.
Abstract: Neuropsychological studies report more impaired responses to facial expressions of fear than disgust in people with amygdala lesions, and vice versa in people with Huntington's disease. Experiments using functional magnetic resonance imaging (fMRI) have confirmed the role of the amygdala in the response to fearful faces and have implicated the anterior insula in the response to facial expressions of disgust. We used fMRI to extend these studies to the perception of fear and disgust from both facial and vocal expressions. Consistent with neuropsychological findings, both types of fearful stimuli activated the amygdala. Facial expressions of disgust activated the anterior insula and the caudate-putamen; vocal expressions of disgust did not significantly activate either of these regions. All four types of stimuli activated the superior temporal gyrus. Our findings therefore (i) support the differential localization of the neural substrates of fear and disgust; (ii) confirm the involvement of the amygdala in the emotion of fear, whether evoked by facial or vocal expressions; (iii) confirm the involvement of the anterior insula and the striatum in reactions to facial expressions of disgust; and (iv) suggest a possible general role for the superior temporal gyrus in the perception of emotional expressions.

786 citations


Journal ArticleDOI
TL;DR: The results support the hypotheses derived from neuropsychological findings, that recognition of disgust, fear and anger is based on separate neural systems, and that the output of these systems converges on frontal regions for further information processing.
Abstract: People with Huntington's disease and people suffering from obsessive compulsive disorder show severe deficits in recognizing facial expressions of disgust, whereas people with lesions restricted to the amygdala are especially impaired in recognizing facial expressions of fear. This double dissociation implies that recognition of certain basic emotions may be associated with distinct and non-overlapping neural substrates. Some authors, however, emphasize the general importance of the ventral parts of the frontal cortex in emotion recognition, regardless of the emotion being recognized. In this study, we used functional magnetic resonance imaging to locate neural structures that are critical for recognition of facial expressions of basic emotions by investigating cerebral activation of six healthy adults performing a gender discrimination task on images of faces expressing disgust, fear and anger. Activation in response to these faces was compared with that for faces showing neutral expressions. Disgusted facial expressions activated the right putamen and the left insula cortex, whereas enhanced activity in the posterior part of the right gyrus cinguli and the medial temporal gyrus of the left hemisphere was observed during processing of angry faces. Fearful expressions activated the right fusiform gyrus and the left dorsolateral frontal cortex. For all three emotions investigated, we also found activation of the inferior part of the left frontal cortex (Brodmann area 47). These results support the hypotheses derived from neuropsychological findings, that (i) recognition of disgust, fear and anger is based on separate neural systems, and that (ii) the output of these systems converges on frontal regions for further information processing.

662 citations


Journal ArticleDOI
TL;DR: A review of studies shows that schizophrenia patients, despite a general impairment of perception or expression of facial emotions, are highly sensitive to certain negative emotions of fear and anger.
Abstract: It is generally agreed that schizophrenia patients show a markedly reduced ability to perceive and express facial emotions. Previous studies have shown, however, that such deficits are emotion-specific in schizophrenia and not generalized. Three kinds of studies were examined: decoding studies dealing with schizophrenia patients' ability to perceive universally recognized facial expressions of emotions, encoding studies dealing with schizophrenia patients' ability to express certain facial emotions, and studies of subjective reactions of patients' sensitivity toward universally recognized facial expressions of emotions. A review of these studies shows that schizophrenia patients, despite a general impairment of perception or expression of facial emotions, are highly sensitive to certain negative emotions of fear and anger. These observations are discussed in the light of hemispheric theory, which accounts for a generalized performance deficit, and social-cognitive theory, which accounts for an emotion-specific deficit in schizophrenia.

539 citations


Journal ArticleDOI
TL;DR: In this paper, the authors investigated the time course of attentional biases for emotional facial expressions in high and low trait anxious individuals, and found evidence of an attentional bias favoring threatening facial expressions, but not emotional faces in general, in high trait anxiety.
Abstract: The study investigated the time course of attentional biases for emotional facial expressions in high and low trait anxious individuals. Threat, happy, and neutral face stimuli were presented at two exposure durations, 500 and 1250 msec, in a forced-choice reaction time (RT) version of the dot probe task. There was clear evidence of an attentional bias favouring threatening facial expressions, but not emotional faces in general, in high trait anxiety. Increased dysphoria was associated with a tendency to avoid happy faces. No evidence was found of avoidance following initial vigilance for threat in this nonclinical sample. Methodological and theoretical implications of the results are discussed.

493 citations
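For reference, the attentional bias index in the dot probe task is conventionally scored as the mean RT when the probe replaces the neutral face minus the mean RT when it replaces the threat face, so positive values indicate vigilance for threat. A minimal sketch with made-up RTs:

```python
import numpy as np

def attentional_bias(rt_probe_at_neutral_ms, rt_probe_at_threat_ms):
    """Positive bias = faster responses when the probe replaces threat."""
    return np.mean(rt_probe_at_neutral_ms) - np.mean(rt_probe_at_threat_ms)

# Illustrative RTs (ms), not data from the study
print(attentional_bias([612, 598, 630], [575, 590, 588]))  # ~29 ms, vigilance
```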


Journal ArticleDOI
TL;DR: It is suggested that for individuals with schizophrenia, deficits in facial affect recognition are stable deficits that are related to other impairments in neurocognition that have implications for psychosocial treatments.

470 citations


Journal ArticleDOI
TL;DR: Multilinear techniques are applied to support the claims that facial motion during speech is largely a by-product of producing the speech acoustics, and that the acoustics are better estimated from the 3D motion of the face than from the midsagittal motion of the anterior vocal tract.
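The estimation behind this claim can be sketched as an ordinary least-squares map from motion measurements to acoustic features, comparing predictors by variance explained (R²). Everything below (dimensions, stand-in random data) is illustrative, not the paper's multilinear analysis:

```python
import numpy as np

def linear_fit_r2(X, Y):
    """Fit a least-squares linear map X -> Y; return variance explained."""
    X1 = np.hstack([X, np.ones((len(X), 1))])   # add an intercept column
    W, *_ = np.linalg.lstsq(X1, Y, rcond=None)
    return 1.0 - (Y - X1 @ W).var() / Y.var()

rng = np.random.default_rng(1)
acoustics = rng.random((500, 4))                # e.g. LSP-like features
# Stand-in "3D face motion" carries all 4 acoustic dimensions...
face_3d = acoustics @ rng.random((4, 18)) + 0.1 * rng.random((500, 18))
# ...while the stand-in "midsagittal vocal-tract motion" carries only 2.
tract_2d = acoustics[:, :2] @ rng.random((2, 6)) + 0.3 * rng.random((500, 6))
print(linear_fit_r2(face_3d, acoustics) > linear_fit_r2(tract_2d, acoustics))
```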

Journal ArticleDOI
TL;DR: This study explored how rapidly emotion-specific facial muscle reactions were elicited when subjects were exposed to pictures of angry and happy facial expressions, and found that distinctive facial electromyographic reactions were detectable after only 300-400 ms of exposure.
Abstract: This study explored how rapidly emotion-specific facial muscle reactions were elicited when subjects were exposed to pictures of angry and happy facial expressions. In three separate experiments, it was found that distinctive facial electromyographic reactions, i.e., greater Zygomaticus major muscle activity in response to happy than to angry stimuli and greater Corrugator supercilii muscle activity in response to angry stimuli, were detectable after only 300-400 ms of exposure. These findings demonstrate that facial reactions are quickly elicited, indicating that expressive emotional reactions can be very rapidly manifested and are perhaps controlled by fast-operating facial affect programs.
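A common way to quantify such onset latencies is to flag the first post-baseline sample at which the rectified EMG exceeds the baseline mean plus a few standard deviations. A minimal sketch, with the threshold rule and window lengths as assumptions rather than the study's scoring method:

```python
import numpy as np

def emg_onset_ms(rectified_emg, fs_hz, baseline_ms=200, k=3.0):
    """Latency (ms after the baseline window) of the first sample exceeding
    baseline mean + k * SD; returns None if no onset is detected."""
    n_base = int(baseline_ms * fs_hz / 1000)
    base = rectified_emg[:n_base]
    above = np.nonzero(rectified_emg[n_base:] > base.mean() + k * base.std())[0]
    return None if len(above) == 0 else above[0] * 1000.0 / fs_hz
```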


Journal ArticleDOI
TL;DR: A novel method for the segmentation of faces, extraction of facial features and tracking of the face contour and features over time, using deformable models like snakes is described.
Abstract: The present paper describes a novel method for the segmentation of faces, extraction of facial features and tracking of the face contour and features over time. Robust segmentation of faces out of complex scenes is done based on color and shape information. Additionally, face candidates are verified by searching for facial features in the interior of the face. As interesting facial features we employ eyebrows, eyes, nostrils, mouth and chin. We consider incomplete feature constellations as well. Once a face and its features have been detected reliably, we track the face contour and the features over time. Face contour tracking is done by using deformable models like snakes. Facial feature tracking is performed by block matching. The success of our approach was verified by evaluating 38 different color image sequences, containing features such as beards, glasses and changing facial expressions.
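The block matching used here for feature tracking can be sketched as an exhaustive sum-of-absolute-differences (SAD) search around the feature's previous position; block and search-window sizes below are illustrative:

```python
import numpy as np

def block_match(prev, curr, y, x, block=8, search=7):
    """Find where the block centred at (y, x) in `prev` moved to in
    `curr` by minimising the SAD over a (2*search+1)^2 neighbourhood."""
    h = block // 2
    ref = prev[y - h:y + h, x - h:x + h].astype(float)
    best, best_pos = np.inf, (y, x)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cand = curr[y + dy - h:y + dy + h, x + dx - h:x + dx + h].astype(float)
            if cand.shape != ref.shape:
                continue  # candidate window ran off the image
            sad = np.abs(ref - cand).sum()
            if sad < best:
                best, best_pos = sad, (y + dy, x + dx)
    return best_pos  # new (row, col) of the tracked feature
```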

Journal ArticleDOI
TL;DR: Face processing and facial emotion recognition were investigated in five post-encephalitic people with extensive damage in the region of the amygdala; recognition of fear was impaired following bilateral temporal lobe damage when this damage included the amygdala.

Proceedings ArticleDOI
14 Apr 1998
TL;DR: A computer vision system is developed that automatically recognizes individual action units or action unit combinations in the upper face using hidden Markov models (HMMs) based on the Facial Action Coding System.
Abstract: Automated recognition of facial expression is an important addition to computer vision research because of its relevance to the study of psychological phenomena and the development of human-computer interaction (HCI). We developed a computer vision system that automatically recognizes individual action units or action unit combinations in the upper face using hidden Markov models (HMMs). Our approach to facial expression recognition is based on the Facial Action Coding System (FACS), which separates expressions into upper and lower face action. We use three approaches to extract facial expression information: (1) facial feature point tracking; (2) dense flow tracking with principal component analysis (PCA); and (3) high gradient component detection (i.e. furrow detection). The recognition results of the upper face expressions using feature point tracking, dense flow tracking, and high gradient component detection are 85%, 93% and 85%, respectively.
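A minimal sketch of the HMM classification idea: one model per action unit (or combination) is trained, and a feature-trajectory sequence is labeled with the model of highest likelihood. The forward algorithm below is hand-rolled for self-containment; the states, parameters, and AU labels are toy assumptions, not the system's trained models:

```python
import numpy as np

def log_likelihood(obs, log_pi, log_A, means, var):
    """Forward algorithm in log space for a Gaussian HMM with scalar
    per-state means and a shared variance, on a 1D observation sequence."""
    log_emis = lambda x: -0.5 * ((x - means) ** 2 / var + np.log(2 * np.pi * var))
    alpha = log_pi + log_emis(obs[0])
    for x in obs[1:]:
        alpha = log_emis(x) + np.logaddexp.reduce(alpha[:, None] + log_A, axis=0)
    return np.logaddexp.reduce(alpha)

# Two toy left-to-right 3-state models for hypothetical upper-face AUs
log_pi = np.log([1.0, 1e-6, 1e-6])
log_A = np.log([[0.8, 0.2, 1e-6], [1e-6, 0.8, 0.2], [1e-6, 1e-6, 1.0]])
models = {"AU1+2 (brow raise)": np.array([0.0, 1.0, 2.0]),
          "AU4 (brow lower)": np.array([0.0, -1.0, -2.0])}
seq = np.array([0.1, 0.9, 1.1, 2.2])  # rising brow-height trajectory
scores = {au: log_likelihood(seq, log_pi, log_A, m, 0.25) for au, m in models.items()}
print(max(scores, key=scores.get))  # -> AU1+2 (brow raise)
```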

Journal ArticleDOI
TL;DR: In this paper, the effects of variation in an irrelevant stimulus dimension on judgments of faces with respect to a relevant dimension were investigated, and the results suggest asymmetric dependencies between different components of face perception.
Abstract: Effects of variation in an irrelevant stimulus dimension on judgments of faces with respect to a relevant dimension were investigated. Dimensions were identity, emotional expression, and facial speech. The irrelevant dimension was correlated with, constant, or orthogonal to the relevant one. Reaction times (RTs) were predicted to increase over these conditions to the extent that the relevant dimension could not be processed independently of the irrelevant one. RTs for identity judgments were independent of variation in expression or facial speech, but RTs for expression and facial speech judgments were influenced by identity variation. Facial speech perception was affected by identity even when variation in the mouth region was eliminated. Moreover, observers could judge speech faster for personally familiar faces than for unfamiliar faces. The results suggest asymmetric dependencies between different components of face perception. Identity is perceived independently of, but may exert an influence on, expression and facial speech analysis.

Journal ArticleDOI
TL;DR: Facial and emotional reactions while viewing two different types of smiles, and the relation of emotional empathy to these reactions, were investigated; empathy was correlated with the rated experiences of pleasure and interest after the Duchenne smile block.

Journal ArticleDOI
TL;DR: Results provide evidence for a distinction between the neural correlates of facial recognition memory and perception of facial expression but, whilst highlighting the role of limbic structures in perception of happy facial expressions, do not allow the mapping of a distinct neural substrate for perception of sad facial expressions.
Abstract: We investigated facial recognition memory (for previously unfamiliar faces) and facial expression perception with functional magnetic resonance imaging (fMRI). Eight healthy, right-handed volunteers participated. For the facial recognition task, subjects made a decision as to the familiarity of each of 50 faces (25 previously viewed; 25 novel). We detected signal increase in the right middle temporal gyrus and left prefrontal cortex during presentation of familiar faces, and in several brain regions, including bilateral posterior cingulate gyri, bilateral insulae and right middle occipital cortex during presentation of unfamiliar faces. Standard facial expressions of emotion were used as stimuli in two further tasks of facial expression perception. In the first task, subjects were presented with alternating happy and neutral faces; in the second task, subjects were presented with alternating sad and neutral faces. During presentation of happy facial expressions, we detected a signal increase predominantly in the left anterior cingulate gyrus, bilateral posterior cingulate gyri, medial frontal cortex and right supramarginal gyrus, brain regions previously implicated in visuospatial and emotion processing tasks. No brain regions showed increased signal intensity during presentation of sad facial expressions. These results provide evidence for a distinction between the neural correlates of facial recognition memory and perception of facial expression but, whilst highlighting the role of limbic structures in perception of happy facial expressions, do not allow the mapping of a distinct neural substrate for perception of sad facial expressions.

Journal ArticleDOI
TL;DR: Women smiled more than men overall and showed more Duchenne smiling in the equal-power context, but the sexes did not differ in the high-power or low-power contexts.
Abstract: This experiment tested whether social power and sex affect amount and type of smiling. Participants were assigned to low-, high-, or equal-power positions and interacted in dyads. For high- and equal-power participants, smiling correlated with positive affect, whereas for low-power participants, it did not. Women smiled more than men overall and showed more Duchenne smiling in the equal-power context, but they did not differ in the high-power context or low-power context. Results are interpreted as reflecting the license given to high-power people to smile when they are so inclined and the obligation for low-power people to smile regardless of how positive they feel.

Proceedings ArticleDOI
14 Apr 1998
TL;DR: The two modalities are shown to be complementary: by using both speech and video, it is possible to achieve higher recognition rates than with either modality alone.
Abstract: Recognizing human facial expression and emotion by computer is an interesting and challenging problem. Many have investigated emotional contents in speech alone, or recognition of human facial expressions solely from images. However, relatively little has been done in combining these two modalities for recognizing human emotions. L.C. De Silva et al. (1997) studied human subjects' ability to recognize emotions from viewing video clips of facial expressions and listening to the corresponding emotional speech stimuli. They found that humans recognize some emotions better by audio information, and other emotions better by video. They also proposed an algorithm to integrate both kinds of inputs to mimic the human recognition process. While attempting to implement the algorithm, we encountered difficulties which led us to a different approach. We found these two modalities to be complementary. By using both, we show it is possible to achieve higher recognition rates than with either modality alone.
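One simple way to combine the modalities is decision-level fusion of per-emotion posteriors, for example a weighted log-linear combination; the weights and probabilities below are illustrative, not the authors' integration scheme:

```python
import numpy as np

def fuse(audio_probs, video_probs, w_audio=0.5):
    """Weighted log-linear fusion of the two modalities' posteriors."""
    log_p = w_audio * np.log(audio_probs) + (1 - w_audio) * np.log(video_probs)
    p = np.exp(log_p - log_p.max())   # renormalize in a stable way
    return p / p.sum()

emotions = ["anger", "happiness", "sadness", "surprise"]
audio = np.array([0.50, 0.20, 0.25, 0.05])  # e.g. anger is easier to hear
video = np.array([0.30, 0.15, 0.50, 0.05])  # e.g. sadness is easier to see
print(emotions[int(np.argmax(fuse(audio, video)))])  # -> anger
```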

Proceedings ArticleDOI
14 Apr 1998
TL;DR: An optical flow based approach (feature point tracking) that is sensitive to subtle changes in facial expression is developed and implemented that demonstrated high concurrent validity with human coding using the Facial Action Coding System (FACS).
Abstract: Current approaches to automated analysis have focused on a small set of prototypic expressions (e.g. joy or anger). Prototypic expressions occur infrequently in everyday life, however, and emotion expression is far more varied. To capture the full range of emotion expression, automated discrimination of fine grained changes in facial expression is needed. We developed and implemented an optical flow based approach (feature point tracking) that is sensitive to subtle changes in facial expression. In image sequences from 100 young adults, action units and action unit combinations in the brow and mouth regions were selected for analysis if they occurred a minimum of 25 times in the image database. Selected facial features were automatically tracked using a hierarchical algorithm for estimating optical flow. Image sequences were randomly divided into training and test sets. Feature point tracking demonstrated high concurrent validity with human coding using the Facial Action Coding System (FACS).
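Feature point tracking of this kind typically builds on a Lucas-Kanade step: within a small window, the point's displacement is the least-squares solution relating spatial image gradients to the temporal difference. A single-point, single-level sketch (the paper uses a hierarchical, coarse-to-fine estimator):

```python
import numpy as np

def lucas_kanade_point(f0, f1, y, x, win=7):
    """One Lucas-Kanade step: least-squares flow (dy, dx) at (y, x)."""
    h = win // 2
    gy, gx = np.gradient(f0.astype(float))     # spatial gradients
    sl = (slice(y - h, y + h + 1), slice(x - h, x + h + 1))
    Iy, Ix = gy[sl].ravel(), gx[sl].ravel()
    It = (f1.astype(float) - f0.astype(float))[sl].ravel()
    A = np.stack([Iy, Ix], axis=1)
    v, *_ = np.linalg.lstsq(A, -It, rcond=None)  # solve Iy*dy + Ix*dx = -It
    return v  # (dy, dx) displacement of the feature point
```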

Proceedings Article
18 May 1998
TL;DR: The development of a neural network for facial expression recognition, which aims at recognizing and interpreting facial expressions in terms of signaled emotions and level of expressiveness, is discussed, along with how the network generalizes to new faces.
Abstract: We discuss the development of a neural network for facial expression recognition. It aims at recognizing and interpreting facial expressions in terms of signaled emotions and level of expressiveness. We use the backpropagation algorithm to train the system to differentiate between facial expressions. We show how the network generalizes to new faces and we analyze the results. In our approach, we acknowledge that facial expressions can be very subtle, and propose strategies to deal with the complexity of various levels of expressiveness. Our database includes a variety of different faces, including individuals of different gender, race, and including different features such as glasses, mustache, and beard. Even given the variety of the database, the network learns fairly successfully to distinguish various levels of expressiveness, and generalizes on new faces as well.
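A minimal backpropagation network of the kind described, in plain numpy; the input encoding, layer sizes, and training target are illustrative assumptions, not the authors' architecture:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical sizes: 64-d face encoding -> 16 hidden -> 6 emotion outputs
W1, b1 = rng.normal(0, 0.1, (64, 16)), np.zeros(16)
W2, b2 = rng.normal(0, 0.1, (16, 6)), np.zeros(6)

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

def train_step(x, target, lr=0.1):
    """One backpropagation step on a single example (MSE loss)."""
    global W1, b1, W2, b2
    h = sigmoid(x @ W1 + b1)            # hidden layer
    y = sigmoid(h @ W2 + b2)            # output layer
    dy = (y - target) * y * (1 - y)     # output delta
    dh = (dy @ W2.T) * h * (1 - h)      # hidden delta (backpropagated)
    W2 -= lr * np.outer(h, dy); b2 -= lr * dy
    W1 -= lr * np.outer(x, dh); b1 -= lr * dh
    return ((y - target) ** 2).mean()

x, t = rng.random(64), np.zeros(6); t[2] = 1.0  # toy "happy" example
for _ in range(200):
    loss = train_step(x, t)
print(f"final loss: {loss:.4f}")
```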

Journal ArticleDOI
TL;DR: Experimental results demonstrate that the proposed approach can efficiently detect human facial features and satisfactorily deal with the problems caused by poor lighting conditions, skewed face orientation, and even varying facial expressions.

Journal ArticleDOI
TL;DR: When compared to studies of children in the general population, children with ADHD show deficits in their ability to accurately recognize facial expressions of emotion, findings with important implications for the remediation of social skill deficits commonly seen in children with ADHD.
Abstract: Fifty children and adolescents were tested for their ability to recognize the 6 basic facial expressions of emotion depicted in Ekman and Friesen's normed photographs. Subjects were presented with sets of 6 photographs of faces, each portraying a different basic emotion, and stories portraying those emotions were read to them. After each story, the subject was asked to point to the photograph in the set that depicted the emotion described. Overall, the children correctly identified the emotions on 74% of the presentations. The highest level of accuracy in recognition was for happiness, followed by sadness, with fear being the emotional expression that was mistaken most often. When compared to studies of children in the general population, children with ADHD have deficits in their ability to accurately recognize facial expressions of emotion. These findings have important implications for the remediation of social skill deficits commonly seen in children with ADHD.


Journal ArticleDOI
TL;DR: The judgment of the emotion of sadness was the best predictor of the patients' depression persistence, and the patients judged significantly more sadness in the facial expressions than the control subjects did.
Abstract: In research it has been demonstrated that cognitive and interpersonal processes play significant roles in depression development and persistence. The judgment of emotions displayed in facial expressions by depressed patients allows for a better understanding of these processes. In this study, 48 major depression outpatients and healthy control subjects, matched on the gender of the patients, judged facial expressions as to the emotions the expressions displayed. These judgments were conducted at the patients' outpatient admission (T1). The depression severity of the patients was measured at T1, 13 weeks later (T2) and at a 6-month follow-up (T3). It was found that the judgment of negative emotions in the facial expressions was related to both the depression severity at T1 and depression persistence (T2 and T3), whereas the judgment of positive emotions was not related to the patients' depression. The judgment of the emotion of sadness was the best predictor of the patients' depression persistence. Additionally, it was found that the patients judged significantly more sadness in the facial expressions than the control subjects. These findings are related to previous data of facial expression judgments of depressed patients and future research directions are discussed.

Journal ArticleDOI
TL;DR: This article investigated whether observers' facial reactions to the emotional facial expressions of others represent an affective or a cognitive response to these emotional expressions, and found that facial reactions are due to mimicry as part of an affective empathic reaction.
Abstract: This study investigated whether observers’ facial reactions to the emotional facial expressions of others represent an affective or a cognitive response to these emotional expressions. Three hypotheses were contrasted: (1) facial reactions to emotional facial expressions are due to mimicry as part of an affective empathic reaction; (2) facial reactions to emotional facial expressions

Journal ArticleDOI
TL;DR: The authors investigated automaticity as evidenced by involuntary interference, and found evidence that some characteristics of facial expressions may be automatically processed.
Abstract: Earlier research has indicated that some characteristics of facial expressions may be automatically processed. This study investigated automaticity as evidenced by involuntary interference in a wor ...

Journal ArticleDOI
TL;DR: These data suggest that the morphology of the late positive component of the ERP differs depending on the emotional expression of the stimuli, the gender of the facial stimulus, and the gender of the subject.