
Showing papers on "Facial expression published in 2011"


Journal ArticleDOI
01 Nov 2011
TL;DR: As a typical application of the LBP approach, LBP-based facial image analysis is extensively reviewed, while its successful extensions, which deal with various tasks of facial image analysis, are also highlighted.
Abstract: Local binary pattern (LBP) is a nonparametric descriptor, which efficiently summarizes the local structures of images. In recent years, it has aroused increasing interest in many areas of image processing and computer vision and has shown its effectiveness in a number of applications, in particular for facial image analysis, including tasks as diverse as face detection, face recognition, facial expression analysis, and demographic classification. This paper presents a comprehensive survey of LBP methodology, including several more recent variations. As a typical application of the LBP approach, LBP-based facial image analysis is extensively reviewed, while its successful extensions, which deal with various tasks of facial image analysis, are also highlighted.
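As a rough illustration of the basic operator this survey covers, the sketch below computes the plain 3 × 3 LBP code for each interior pixel of a grayscale image (a minimal NumPy version; the uniform-pattern, multi-scale and other variants reviewed in the paper are not shown).

```python
import numpy as np

def lbp_3x3(gray):
    """Basic 3x3 local binary pattern: each of the 8 neighbours is
    thresholded against the centre pixel and packed into one byte."""
    g = np.asarray(gray, dtype=np.int32)
    c = g[1:-1, 1:-1]                                   # centre pixels
    # neighbour offsets, clockwise starting at the top-left corner
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c)
    for bit, (dy, dx) in enumerate(offsets):
        nb = g[1 + dy:g.shape[0] - 1 + dy, 1 + dx:g.shape[1] - 1 + dx]
        code |= (nb >= c).astype(np.int32) << bit
    return code  # one value in 0..255 per interior pixel

# Typical use for facial image analysis: divide the face into a grid of
# regions, histogram the LBP codes in each region, and concatenate the
# histograms into a single feature vector.
```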

895 citations


Journal ArticleDOI
TL;DR: Novel research on dynamic facial expression recognition using near-infrared (NIR) video sequences and LBP-TOP feature descriptors is presented; component-based facial features are used to combine geometric and appearance information, providing an effective way of representing facial expressions.

586 citations


01 Jan 2011
TL;DR: This chapter reviews fundamental approaches to facial measurement by behavioral scientists and current efforts in automated facial expression recognition, and considers challenges, databases available to the research community, approaches to feature detection, tracking, and representation, and both supervised and unsupervised learning.
Abstract: The face is one of the most powerful channels of nonverbal communication. Facial expression provides cues about emotion, intention, alertness, pain, personality, regulates interpersonal behavior, and communicates psychiatric and biomedical status among other functions. Within the past 15 years, there has been increasing interest in automated facial expression analysis within the computer vision and machine learning communities. This chapter reviews fundamental approaches to facial measurement by behavioral scientists and current efforts in automated facial expression recognition. We consider challenges, review databases available to the research community, and discuss approaches to feature detection, tracking, and representation, as well as both supervised and unsupervised learning. Keywords: facial expression analysis, action unit recognition, Active Appearance Models, temporal clustering.

562 citations


Proceedings ArticleDOI
21 Mar 2011
TL;DR: The Computer Expression Recognition Toolbox (CERT), a software tool for fully automatic real-time facial expression recognition, is presented and officially released for free academic use.
Abstract: We present the Computer Expression Recognition Toolbox (CERT), a software tool for fully automatic real-time facial expression recognition, and officially release it for free academic use. CERT can automatically code the intensity of 19 different facial actions from the Facial Action Coding System (FACS) and 6 different prototypical facial expressions. It also estimates the locations of 10 facial features as well as the 3-D orientation (yaw, pitch, roll) of the head. On a database of posed facial expressions, Extended Cohn-Kanade (CK+ [1]), CERT achieves an average recognition performance (probability of correctness on a two-alternative forced choice (2AFC) task between one positive and one negative example) of 90.1% when analyzing facial actions. On a spontaneous facial expression dataset, CERT achieves an accuracy of nearly 80%. On a standard dual-core laptop, CERT can process 320 × 240 video images in real time at approximately 10 frames per second.
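The 2AFC figure quoted above is the probability that a randomly drawn positive example receives a higher detector output than a randomly drawn negative one, which equals the area under the ROC curve. A minimal sketch of how such a score can be computed from detector outputs (illustrative only, not CERT's own code):

```python
import numpy as np

def two_afc(pos_scores, neg_scores):
    """Probability that a randomly chosen positive example outscores a
    randomly chosen negative one (ties count half); equals the ROC AUC."""
    pos = np.asarray(pos_scores, dtype=float)[:, None]
    neg = np.asarray(neg_scores, dtype=float)[None, :]
    wins = (pos > neg).mean()
    ties = (pos == neg).mean()
    return wins + 0.5 * ties

# Example: two_afc([0.9, 0.7, 0.4], [0.3, 0.5]) -> 5/6 ≈ 0.83
```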

553 citations


Journal ArticleDOI
TL;DR: This paper proposes an output-associative fusion framework that incorporates correlations and covariances between the emotion dimensions and shows that on average, BLSTM-NNs outperform SVR due to their ability to learn past and future context, and the proposed system is well able to reproduce the valence and arousal ground truth obtained from human coders.
Abstract: Past research in analysis of human affect has focused on recognition of prototypic expressions of six basic emotions based on posed data acquired in laboratory settings. Recently, there has been a shift toward subtle, continuous, and context-specific interpretations of affective displays recorded in naturalistic and real-world settings, and toward multimodal analysis and recognition of human affect. Converging with this shift, this paper presents, to the best of our knowledge, the first approach in the literature that: 1) fuses facial expression, shoulder gesture, and audio cues for dimensional and continuous prediction of emotions in valence and arousal space, 2) compares the performance of two state-of-the-art machine learning techniques applied to the target problem, the bidirectional Long Short-Term Memory neural networks (BLSTM-NNs), and Support Vector Machines for Regression (SVR), and 3) proposes an output-associative fusion framework that incorporates correlations and covariances between the emotion dimensions. Evaluation of the proposed approach has been done using the spontaneous SAL data from four subjects and subject-dependent leave-one-sequence-out cross validation. The experimental results obtained show that: 1) on average, BLSTM-NNs outperform SVR due to their ability to learn past and future context, 2) the proposed output-associative fusion framework outperforms feature-level and model-level fusion by modeling and learning correlations and patterns between the valence and arousal dimensions, and 3) the proposed system is well able to reproduce the valence and arousal ground truth obtained from human coders.
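A rough sketch of the output-associative idea as described in the abstract: learn one predictor per emotion dimension, then learn a second stage that refines each dimension from the original features plus the joint initial predictions of valence and arousal, so cross-dimensional correlations can be exploited. The function names and the use of scikit-learn's SVR here are illustrative assumptions, not the authors' implementation (which also evaluates BLSTM-NNs and temporal context):

```python
import numpy as np
from sklearn.svm import SVR

def fit_output_associative(X, valence, arousal):
    """Stage 1: one regressor per emotion dimension.
    Stage 2: regressors that refine each dimension from the original
    features plus both stage-1 outputs, so valence-arousal correlations
    can be exploited."""
    stage1 = {"val": SVR().fit(X, valence), "aro": SVR().fit(X, arousal)}
    Z = np.column_stack([X, stage1["val"].predict(X), stage1["aro"].predict(X)])
    stage2 = {"val": SVR().fit(Z, valence), "aro": SVR().fit(Z, arousal)}
    return stage1, stage2

def predict_output_associative(stage1, stage2, X):
    Z = np.column_stack([X, stage1["val"].predict(X), stage1["aro"].predict(X)])
    return stage2["val"].predict(Z), stage2["aro"].predict(Z)
```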

516 citations


Proceedings ArticleDOI
01 Nov 2011
TL;DR: A person-independent training and testing protocol for expression recognition is proposed as part of the BEFIT workshop, and a new static facial expression database, Static Facial Expressions in the Wild (SFEW), is presented.
Abstract: Quality data recorded in varied realistic environments is vital for effective human face related research. Currently available datasets for human facial expression analysis have been generated in highly controlled lab environments. We present a new static facial expression database, Static Facial Expressions in the Wild (SFEW), extracted from the temporal facial expression database Acted Facial Expressions in the Wild (AFEW) [9], which was itself extracted from movies. In the past, many robust methods have been reported in the literature. However, these methods have been evaluated on different databases or using different protocols within the same databases. The lack of a standard protocol makes it difficult to compare systems and hinders progress in the field. Therefore, we propose a person-independent training and testing protocol for expression recognition as part of the BEFIT workshop. Further, we compare our dataset with the JAFFE and Multi-PIE datasets and provide baseline results.
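The key property of a person-independent protocol is that no subject appears in both the training and the test partition. A minimal sketch of such a split (the record fields are hypothetical, not the actual SFEW metadata):

```python
def subject_independent_split(samples, test_subjects):
    """samples: iterable of (subject_id, image_path, label) tuples.
    Returns train/test lists whose subject identities do not overlap."""
    train, test = [], []
    for subject, path, label in samples:
        (test if subject in test_subjects else train).append((subject, path, label))
    assert not ({s for s, _, _ in train} & {s for s, _, _ in test})
    return train, test

# Example (hypothetical records):
# train, test = subject_independent_split(
#     [("s01", "img1.png", "happy"), ("s02", "img2.png", "angry")],
#     test_subjects={"s02"})
```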

434 citations


Proceedings ArticleDOI
21 Mar 2011
TL;DR: This paper presents the first challenge in automatic recognition of facial expressions, to be held during the IEEE Conference on Face and Gesture Recognition 2011 in Santa Barbara, California, and outlines the evaluation protocol, the data used, and the results of a baseline method for the two sub-challenges.
Abstract: Automatic Facial Expression Recognition and Analysis, in particular FACS Action Unit (AU) detection and discrete emotion detection, has been an active topic in computer science for over two decades. Standardisation and comparability have come some way; for instance, there exist a number of commonly used facial expression databases. However, the lack of a common evaluation protocol and the lack of sufficient details to reproduce the reported individual results make it difficult to compare systems to each other. This in turn hinders the progress of the field. A periodical challenge in Facial Expression Recognition and Analysis would allow this comparison in a fair manner. It would clarify how far the field has come, and would allow us to identify new goals, challenges and targets. In this paper we present the first challenge in automatic recognition of facial expressions, to be held during the IEEE Conference on Face and Gesture Recognition 2011 in Santa Barbara, California. Two sub-challenges are defined: one on AU detection and another on discrete emotion detection. This paper outlines the evaluation protocol, the data used, and the results of a baseline method for the two sub-challenges.

397 citations


Journal ArticleDOI
01 Aug 2011-Emotion
TL;DR: Two studies validating a new standardized set of filmed emotion expressions, the Amsterdam Dynamic Facial Expression Set (ADFES), show that participants more strongly perceived themselves to be the cause of the other's emotion when the model's face turned toward the respondents.
Abstract: We report two studies validating a new standardized set of filmed emotion expressions, the Amsterdam Dynamic Facial Expression Set (ADFES). The ADFES is distinct from existing datasets in that it includes a face-forward version and two different head-turning versions (faces turning toward and away from viewers), North-European as well as Mediterranean models (male and female), and nine discrete emotions (joy, anger, fear, sadness, surprise, disgust, contempt, pride, and embarrassment). Study 1 showed that the ADFES received excellent recognition scores. Recognition was affected by social categorization of the model: displays of North-European models were better recognized by Dutch participants, suggesting an ingroup advantage. Head-turning did not affect recognition accuracy. Study 2 showed that participants more strongly perceived themselves to be the cause of the other's emotion when the model's face turned toward the respondents. The ADFES provides new avenues for research on emotion expression and is available for researchers upon request.

390 citations


Journal ArticleDOI
07 Nov 2011
TL;DR: The main objective was to describe neurobiological differences between depressed patients with major depressive disorder (MDD) and healthy controls (HCs) regarding brain responsiveness to facial expressions and to delineate altered neural activation patterns associated with mood-congruent processing bias and to integrate these data with recent functional connectivity results.
Abstract: Background Cognitive models of depression suggest that major depression is characterized by biased facial emotion processing, making facial stimuli particularly valuable for neuroimaging research on the neurobiological correlates of depression. The present review provides an overview of functional neuroimaging studies on abnormal facial emotion processing in major depression. Our main objective was to describe neurobiological differences between depressed patients with major depressive disorder (MDD) and healthy controls (HCs) regarding brain responsiveness to facial expressions and, furthermore, to delineate altered neural activation patterns associated with mood-congruent processing bias and to integrate these data with recent functional connectivity results. We further discuss methodological aspects potentially explaining the heterogeneity of results.

361 citations


Journal ArticleDOI
TL;DR: A sequential two-stage approach is taken, with pose classification followed by view-dependent facial expression classification, to investigate the effects of yaw variation from frontal to profile views and the influence of pose on different facial expressions.

349 citations


Proceedings ArticleDOI
06 Nov 2011
TL;DR: A temporal interpolation model, together with the first comprehensive spontaneous micro-expression corpus, enables the system to accurately recognise these very short expressions and achieve very promising results that compare favourably with human micro-expression detection accuracy.
Abstract: Facial micro-expressions are rapid involuntary facial expressions which reveal suppressed affect. To the best knowledge of the authors, there is no previous work that successfully recognises spontaneous facial micro-expressions. In this paper we show how a temporal interpolation model together with the first comprehensive spontaneous micro-expression corpus enable us to accurately recognise these very short expressions. We designed an induced emotion suppression experiment to collect the new corpus using a high-speed camera. The system is the first to recognise spontaneous facial micro-expressions and achieves very promising results that compare favourably with the human micro-expression detection accuracy.
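The paper's temporal interpolation model is more elaborate than what is shown here; as an assumption-laden stand-in, the sketch below simply resamples a short clip to a fixed number of frames by linear interpolation along time, which conveys the basic goal of normalising very short expression clips to a common temporal length before feature extraction:

```python
import numpy as np

def resample_clip(frames, target_len):
    """Linearly resample a (T, H, W) grayscale clip along time to
    target_len frames, so very short clips share a common length."""
    frames = np.asarray(frames, dtype=np.float64)
    T = frames.shape[0]
    src = np.linspace(0.0, T - 1, num=target_len)   # fractional source indices
    lo = np.floor(src).astype(int)
    hi = np.minimum(lo + 1, T - 1)
    w = (src - lo)[:, None, None]                   # interpolation weights
    return (1.0 - w) * frames[lo] + w * frames[hi]
```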

Journal ArticleDOI
01 Aug 2011-Emotion
TL;DR: Eye-tracking was used to monitor scanning behavior of healthy participants while looking at different facial expressions, and results confirm the relevance of the eyes and mouth in emotional decoding, but they demonstrate that not all facial expressions with different emotional content are decoded equally.
Abstract: There is evidence that specific regions of the face such as the eyes are particularly relevant for the decoding of emotional expressions, but it has not been examined whether scan paths of observers vary for facial expressions with different emotional content. In this study, eye-tracking was used to monitor scanning behavior of healthy participants while looking at different facial expressions. Locations of fixations and their durations were recorded, and a dominance ratio (i.e., eyes and mouth relative to the rest of the face) was calculated. Across all emotional expressions, initial fixations were most frequently directed to either the eyes or the mouth. Especially in sad facial expressions, participants more frequently issued the initial fixation to the eyes compared with all other expressions. In happy facial expressions, participants fixated the mouth region for a longer time across all trials. For fearful and neutral facial expressions, the dominance ratio indicated that both the eyes and mouth are equally important. However, in sad and angry facial expressions, the eyes received more attention than the mouth. These results confirm the relevance of the eyes and mouth in emotional decoding, but they also demonstrate that not all facial expressions with different emotional content are decoded equally. Our data suggest that people look at regions that are most characteristic for each emotion.
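As a small worked example of the dominance ratio described above (my reading of the measure; the precise definition is given in the paper), the ratio is the fixation time spent on the eyes and mouth divided by the fixation time spent on the rest of the face:

```python
def dominance_ratio(eyes_ms, mouth_ms, rest_of_face_ms):
    """Fixation time on the eyes and mouth relative to the rest of the face."""
    return (eyes_ms + mouth_ms) / rest_of_face_ms

# e.g. 1200 ms on the eyes, 600 ms on the mouth and 900 ms elsewhere
# give a ratio of (1200 + 600) / 900 = 2.0
```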

Journal ArticleDOI
TL;DR: In this paper, a system for performance-based character animation that enables any user to control the facial expressions of a digital avatar in real-time is presented, where the user is recorded in a natural environment.
Abstract: This paper presents a system for performance-based character animation that enables any user to control the facial expressions of a digital avatar in realtime. The user is recorded in a natural environment...

Journal ArticleDOI
TL;DR: Validity data based on ratings by 20 healthy adult raters should give researchers confidence in using the NIMH-ChEFS for affective and social neuroscience research.
Abstract: With the emergence of new technologies, there has been an explosion of basic and clinical research on the affective and cognitive neuroscience of face processing and emotion perception. Adult emotional face stimuli are commonly used in these studies. For developmental research, there is a need for a validated set of child emotional faces. This paper describes the development of the National Institute of Mental Health Child Emotional Faces Picture Set (NIMH-ChEFS), a relatively large stimulus set with high quality, color images of the emotional faces of children. The set includes 482 photographs of fearful, angry, happy, sad and neutral child faces with two gaze conditions: direct and averted gaze. In this paper we describe the development of the NIMH-ChEFS and data on the set's validity based on ratings by 20 healthy adult raters. Agreement between the a priori emotion designation and the raters' labels was high and comparable with values reported for commonly used adult picture sets. Intensity, representativeness, and composite "goodness" ratings are also presented to guide researchers in their choice of specific stimuli for their studies. These data should give researchers confidence in the NIMH-ChEFS's validity for use in affective and social neuroscience research.

Journal ArticleDOI
TL;DR: An automated FACS based on advanced computer science technology was developed, and its quantitative measures of flatness and inappropriateness showed clear differences between patients and controls, highlighting their potential for automatic and objective quantification of symptom severity.

Book ChapterDOI
01 Jan 2011
TL;DR: This chapter describes the problem space for facial expression analysis, which includes multiple dimensions: level of description, individual differences in subjects, transitions among expressions, intensity of facial expression, deliberate versus spontaneous expression, head orientation and scene complexity, image acquisition and resolution, reliability of ground truth, databases, and the relation to other facial behaviors or nonfacial behaviors.
Abstract: This chapter introduces recent advances in facial expression analysis and recognition. The first part discusses the general structure of automatic facial expression analysis (AFEA) systems. The second part describes the problem space for facial expression analysis. This space includes multiple dimensions: level of description, individual differences in subjects, transitions among expressions, intensity of facial expression, deliberate versus spontaneous expression, head orientation and scene complexity, image acquisition and resolution, reliability of ground truth, databases, and the relation to other facial behaviors or nonfacial behaviors. We note that most work to date has been confined to a relatively restricted region of this space. The last part of this chapter is devoted to a description of more specific approaches and the techniques used in recent advances. They include techniques for face acquisition, facial data extraction and representation, facial expression recognition, and multimodal expression analysis. The chapter concludes with a discussion assessing the current status, future possibilities, and open questions about automatic facial expression analysis.

Journal Article
TL;DR: This article reports an extension of the native Chinese Facial Affective Picture System, adding validated images of seven facial expressions for use in emotion research.
Abstract: Objective: To extend the localized Chinese Facial Affective Picture System and provide materials for emotion research. Methods: Convenience sampling was used. One hundred students from the acting and directing departments of two art academies in Beijing served as facial expression performers, and 100 students recruited from two general universities in Beijing served as raters. Pictures of seven facial expressions (anger, disgust, fear, sadness, surprise, happiness, and calm) were collected from the performers; the raters then judged the emotion category of each picture and rated its emotional intensity on a 9-point scale, expanding the number of pictures available for each emotion type. Then, from three general universities in Beijing...

Journal ArticleDOI
TL;DR: Comparison with the state-of-the-art performance confirms that the proposed approach achieves the highest CRR on the JAFFE database and is among the top performers on the Cohn-Kanade (CK) database.
Abstract: Facial expression is an important channel for human communication and can be applied in many real applications. One critical step for facial expression recognition (FER) is to accurately extract emotional features. Current approaches to FER in static images have not fully considered and utilized the features of facial element and muscle movements, which represent static and dynamic, as well as geometric and appearance, characteristics of facial expressions. This paper proposes an approach to address this limitation using "salient" distance features, which are obtained by extracting patch-based 3D Gabor features, selecting the "salient" patches, and performing patch matching operations. The experimental results demonstrate a high correct recognition rate (CRR), significant performance improvements due to the consideration of facial element and muscle movements, promising results under face registration errors, and fast processing time. Comparison with the state-of-the-art performance confirms that the proposed approach achieves the highest CRR on the JAFFE database and is among the top performers on the Cohn-Kanade (CK) database.
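A rough sketch of the patch-based Gabor feature extraction step, using scikit-image's 2-D gabor filter as an assumed stand-in for the 3D Gabor filtering in the paper; the salient-patch selection and patch-matching operations that produce the final distance features are not shown:

```python
import numpy as np
from skimage.filters import gabor

def patch_gabor_features(gray, patches, frequencies=(0.1, 0.2, 0.3), n_orient=4):
    """gray: 2-D face image; patches: list of (row, col, size) windows.
    Returns the mean Gabor magnitude per (frequency, orientation, patch)."""
    feats = []
    for freq in frequencies:
        for k in range(n_orient):
            real, imag = gabor(gray, frequency=freq, theta=k * np.pi / n_orient)
            magnitude = np.hypot(real, imag)
            for r, c, s in patches:
                feats.append(magnitude[r:r + s, c:c + s].mean())
    return np.array(feats)
```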

Journal ArticleDOI
TL;DR: This article investigates whether facial feedback signals, generated when we automatically mimic the expressions displayed on others' faces, support recognition of the emotions other people are feeling, by dampening and amplifying facial feedback and measuring the effect on emotion perception.
Abstract: How do we recognize the emotions other people are feeling? One source of information may be facial feedback signals generated when we automatically mimic the expressions displayed on others' faces. Supporting this “embodied emotion perception,” dampening (Experiment 1) and amplifying (Experiment 2) facial feedback signals, respectively, impaired and improved people’s ability to read others' facial emotions. In Experiment 1, emotion perception was significantly impaired in people who had received a cosmetic procedure that reduces muscular feedback from the face (Botox) compared to a procedure that does not reduce feedback (a dermal filler). Experiment 2 capitalized on the fact that feedback signals are enhanced when muscle contractions meet resistance. Accordingly, when the skin was made resistant to underlying muscle contractions via a restricting gel, emotion perception improved, and did so only for emotion judgments that theoretically could benefit from facial feedback.

BookDOI
28 Jul 2011
TL;DR: This chapter presents an updated model of distributed human neural systems for face perception that has a Core System of visual extrastriate areas for visual analysis of faces and an Extended System that consists of additional neural systems that work in concert with the Core System to extract various types of information from faces.
Abstract: Face perception plays a central role in social communication and is, arguably, one of the most sophisticated visual perceptual skills in humans. Consequently, face perception has been the subject of intensive investigation and theorizing in both visual and social neuroscience. The organization of neural systems for face perception has stimulated intense debate. Much of this debate has focused on models that posit the existence of a module that is specialized for face perception (Kanwisher et al., 1997; Kanwisher and Yovel, 2006) versus models that propose that face perception is mediated by distributed processing (Haxby et al., 2000, 2001; Ishai et al., 2005; Ishai 2008). These two perspectives on the neural systems that underlie face perception are not necessarily incompatible (see Kanwisher and Barton, Chapter 7, this volume). In our work, we have proposed that face perception is mediated by distributed systems, both in terms of the involvement of multiple brain areas and in terms of locally distributed population codes within these areas (Haxby et al., 2000, 2001). Specifically, we proposed a model for the distributed neural system for face perception that has a Core System of visual extrastriate areas for visual analysis of faces and an Extended System that consists of additional neural systems that work in concert with the Core System to extract various types of information from faces (Haxby et al., 2000). We also have shown that in visual extrastriate cortices, information that distinguishes faces from other categories of animate and inanimate objects is not restricted to regions that respond maximally to faces, i.e. the fusiform and occipital face areas (Haxby et al., 2001; Hanson et al., 2004). In this chapter we present an updated model of distributed human neural systems for face perception. We will begin with a discussion of the Core System for visual analysis of faces with an emphasis on the distinction between perception of invariant features for identity recognition and changeable features for recognition of facial gestures such as expression and eye gaze. The bulk of the chapter will be a selective discussion of neural systems in the Extended System for familiar face recognition and for extracting the meaning of facial gestures, in particular facial expression and eye gaze. We will discuss the roles of systems for the representation of emotion, for person knowledge, and for action understanding in face recognition and perception of expression and gaze. The neural systems that are recruited for extracting socially-relevant information from faces are an unbounded set whose membership accrues with further investigations of face perception. Rather than attempt an exhaustive review, our intention is to present systems that we believe are of particular relevance for social communication and that are illustrative of how distributed systems are engaged by face perception. Many of the chapters in this volume provide a more detailed account for areas that we touch on in this chapter. Our review also is biased towards work that we have been directly involved in, with selective references to closely related work by other investigators. We will finish with a discussion of modularity and distributed processing in neural representation.

Journal ArticleDOI
TL;DR: The results indicate that even fully and expertly animated characters are rated as more uncanny than humans and that, in virtual characters, a lack of facial expression in the upper parts of the face during speech exaggerates the uncanny effect by inhibiting effective communication of the perceived emotion.

Journal ArticleDOI
TL;DR: It is estimated that between 13% and 39% of people with moderate to severe TBI may have significant difficulties with facial affect recognition, depending on the cut-off criterion used.
Abstract: Objective: Difficulties in communication and social relationships present a formidable challenge for many people after traumatic brain injury (TBI). These difficulties are likely to be partially attributable to problems with emotion perception. Mounting evidence shows facial affect recognition to be particularly difficult after TBI. However, no attempt has been made to systematically estimate the magnitude of this problem or the frequency with which it occurs. Method: A meta-analysis is presented examining the magnitude of facial affect recognition difficulties after TBI. From this, the frequency of these impairments in the TBI population is estimated. Effect sizes were calculated from 13 studies that compared adults with moderate to severe TBI to matched healthy controls on static measures of facial affect recognition. Results: The studies collectively presented data from 296 adults with TBI and 296 matched controls. The overall weighted mean effect size for the 13 studies was -1.11, indicating people with TBI on average perform about 1.1 SD below healthy peers on measures of facial affect recognition. Based on estimation of the TBI population standard deviation and modeling of likely distribution shape, it is estimated that between 13% and 39% of people with moderate to severe TBI may have significant difficulties with facial affect recognition, depending on the cut-off criterion used. Conclusion: This is clearly an area that warrants attention, particularly examining techniques for the rehabilitation of these deficits.
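As a worked illustration of how an effect size of this magnitude translates into an estimated prevalence of impairment (my own simplified arithmetic, not the authors' modelling, which also estimates the TBI group's standard deviation): if TBI scores sit 1.11 SD below controls and are roughly normally distributed, the proportion falling below a chosen cut-off can be read off the normal CDF.

```python
from scipy.stats import norm

# Assume control scores ~ N(0, 1) and TBI scores ~ N(-1.11, 1) in units of
# the control SD (a simplification: the paper also models the larger
# variability of the TBI group).
effect = -1.11

for cutoff in (-2.0, -1.645):   # below 2 SD / below the 5th percentile of controls
    p = norm.cdf(cutoff, loc=effect, scale=1.0)
    print(f"cut-off {cutoff:+.3f} SD: about {p:.0%} of the TBI group classified impaired")
# roughly 19% and 30% respectively, in the same ballpark as the 13%-39%
# range reported in the abstract.
```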

Journal ArticleDOI
TL;DR: Congenital prosopagnosics show weak holistic coding of expression and identity, and holistic coding of identity is functionally involved in face identification ability.

Journal ArticleDOI
TL;DR: A systematic comparison of the neurofunctional network dedicated to processing facial and bodily expressions found that the amygdala (AMG) was more active for facial than for bodily expressions, whereas the extrastriate body area (EBA) and superior temporal sulcus were more strongly activated by threatening bodies.

Journal ArticleDOI
TL;DR: The BEAST appears as a valuable addition to currently available tools for assessing recognition of affective signals, and can be used in explicit recognition tasks as well as in matching tasks and in implicit tasks.
Abstract: Whole body expressions are among the main visual stimulus categories that are naturally associated with faces, and the neuroscientific investigation of how body expressions are processed has entered the research agenda over the last decade. Here we describe the stimulus set of whole body expressions termed the Bodily Expressive Action Stimulus Test (BEAST), and we provide validation data for use of these materials by the community of emotion researchers. The database is composed of 254 whole body expressions from 46 actors expressing 4 emotions (anger, fear, happiness and sadness). In all pictures the face of the actor was blurred, and participants were asked to categorize the emotions expressed in the stimuli in a four-alternative forced-choice task. The results show that all emotions are well recognized, with sadness being the easiest, followed by fear, whereas happiness was the most difficult. The BEAST appears to be a valuable addition to currently available tools for assessing recognition of affective signals. It can be used in explicit recognition tasks as well as in matching tasks and in implicit tasks, combined either with facial expressions, with affective prosody, or presented with affective pictures as context, in healthy subjects as well as in neurologically atypical populations.

Journal ArticleDOI
TL;DR: The data show that facial expressions affect earlier stages of emotion processing compared to emotional words, but the emotional value of words may have been detected at early stages of emotional processing in the visual cortex, as was indicated by the extrastriate source activity.

Journal ArticleDOI
TL;DR: It is found that the level of neural activity within a distributed network of the perceiver's brain can be successfully predicted from the neural activity in the same network in the sender's brain, depending on the affect that is currently being communicated.

Journal ArticleDOI
TL;DR: Evidence is provided that a single dose of intranasally administered oxytocin enhances detection of briefly presented emotional stimuli, an effect that was more pronounced for the recognition of happy faces.

Journal ArticleDOI
03 Oct 2011-PLOS ONE
TL;DR: The results provide evidence that judgments of facial trustworthiness and attractiveness are at least partially separable, and that beauty has a significant positive effect on judgment of competence, a universal dimension of social cognition, but has a more nuanced effect on the other universal dimension of social warmth.
Abstract: Research on the perception of faces has focused on the size, shape, and configuration of inherited features or the biological phenotype, and largely ignored the effects of adornment, or the extended phenotype. Research on the evolution of signaling has shown that animals frequently alter visual features, including color cues, to attract, intimidate or protect themselves from conspecifics. Humans engage in conscious manipulation of visual signals using cultural tools in real time rather than genetic changes over evolutionary time. Here, we investigate one tool, the use of color cosmetics. In two studies, we asked viewers to rate the same female faces with or without color cosmetics, and we varied the style of makeup from minimal (natural), to moderate (professional), to dramatic (glamorous). Each look provided increasing luminance contrast between the facial features and surrounding skin. Faces were shown for 250 ms or for unlimited inspection time, and subjects rated them for attractiveness, competence, likeability and trustworthiness. At 250 ms, cosmetics had significant positive effects on all outcomes. Length of inspection time did not change the effect for competence or attractiveness. However, with longer inspection time, the effect of cosmetics on likability and trust varied by specific makeup looks, indicating that cosmetics could impact automatic and deliberative judgments differently. The results suggest that cosmetics can create supernormal facial stimuli, and that one way they may do so is by exaggerating cues to sexual dimorphism. Our results provide evidence that judgments of facial trustworthiness and attractiveness are at least partially separable, that beauty has a significant positive effect on judgment of competence, a universal dimension of social cognition, but has a more nuanced effect on the other universal dimension of social warmth, and that the extended phenotype significantly influences perception of biologically important signals at first glance and at longer inspection.

Journal ArticleDOI
TL;DR: It is shown that image averaging stabilizes facial appearance by diluting aspects of the image that vary between snapshots of the same person, developing the proposal that summary statistics can provide more stable face representations.
Abstract: Photographs are often used to establish the identity of an individual or to verify that they are who they claim to be. Yet, recent research shows that it is surprisingly difficult to match a photo to a face. Neither humans nor machines can perform this task reliably. Although human perceivers are good at matching familiar faces, performance with unfamiliar faces is strikingly poor. The situation is no better for automatic face recognition systems. In practical settings, automatic systems have been consistently disappointing. In this review, we suggest that failure to distinguish between familiar and unfamiliar face processing has led to unrealistic expectations about face identification in applied settings. We also argue that a photograph is not necessarily a reliable indicator of facial appearance, and develop our proposal that summary statistics can provide more stable face representations. In particular, we show that image averaging stabilizes facial appearance by diluting aspects of the image that vary between snapshots of the same person. We review evidence that the resulting images can outperform photographs in both behavioural experiments and computer simulations, and outline promising directions for future research.
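The image-averaging idea is straightforward to sketch: align several photographs of the same person, then take the pixelwise mean, so image-specific variation is diluted while stable identity cues are reinforced. A minimal sketch assuming the images are already aligned and equally sized:

```python
import numpy as np

def average_face(aligned_images):
    """Pixelwise mean of pre-aligned, same-sized face images.
    Snapshot-specific variation (lighting, pose, expression) tends to
    average out, leaving a more stable representation of the person."""
    stack = np.stack([np.asarray(im, dtype=np.float64) for im in aligned_images])
    return stack.mean(axis=0)
```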