Author

Xandra Van Montfort

Bio: Xandra Van Montfort is an academic researcher from the University of Glasgow. The author has contributed to research on the topic of face perception, has an h-index of 1, and has co-authored 1 publication receiving 377 citations.

Papers
Journal ArticleDOI
TL;DR: It is shown that photographs are not consistent indicators of facial appearance because they are blind to within-person variability. This has important practical implications and suggests that face photographs are unsuitable as proof of identity.

442 citations


Cited by
Journal ArticleDOI
TL;DR: There is an urgent need for research that examines how people actually move their faces to express emotions and other social information in the variety of contexts that make up everyday life, as well as careful study of the mechanisms by which people perceive instances of emotion in one another.
Abstract: It is commonly assumed that a person’s emotional state can be readily inferred from his or her facial movements, typically called emotional expressions or facial expressions. This assumption influences legal judgments, policy decisions, national security protocols, and educational practices; guides the diagnosis and treatment of psychiatric illness, as well as the development of commercial applications; and pervades everyday social interactions as well as research in other scientific fields such as artificial intelligence, neuroscience, and computer vision. In this article, we survey examples of this widespread assumption, which we refer to as the common view, and we then examine the scientific evidence that tests this view, focusing on the six most popular emotion categories used by consumers of emotion research: anger, disgust, fear, happiness, sadness, and surprise. The available scientific evidence suggests that people do sometimes smile when happy, frown when sad, scowl when angry, and so on, as proposed by the common view, more than what would be expected by chance. Yet how people communicate anger, disgust, fear, happiness, sadness, and surprise varies substantially across cultures, situations, and even across people within a single situation. Furthermore, similar configurations of facial movements variably express instances of more than one emotion category. In fact, a given configuration of facial movements, such as a scowl, often communicates something other than an emotional state. Scientists agree that facial movements convey a range of information and are important for social communication, emotional or otherwise. But our review suggests an urgent need for research that examines how people actually move their faces to express emotions and other social information in the variety of contexts that make up everyday life, as well as careful study of the mechanisms by which people perceive instances of emotion in one another. We make specific research recommendations that will yield a more valid picture of how people move their faces to express emotions and how they infer emotional meaning from facial movements in situations of everyday life. This research is crucial to provide consumers of emotion research with the translational information they require.

772 citations

Journal ArticleDOI
TL;DR: It is argued that the diagnostic validity of social attributions from faces has been greatly overstated in the literature, and an account of the functional significance of these attributions is offered.
Abstract: Since the early twentieth century, psychologists have known that there is consensus in attributing social and personality characteristics from facial appearance. Recent studies have shown that surprisingly little time and effort are needed to arrive at this consensus. Here we review recent research on social attributions from faces. Section I outlines data-driven methods capable of identifying the perceptual basis of consensus in social attributions from faces (e.g., What makes a face look threatening?). Section II describes nonperceptual determinants of social attributions (e.g., person knowledge and incidental associations). Section III discusses evidence that attributions from faces predict important social outcomes in diverse domains (e.g., investment decisions and leader selection). In Section IV, we argue that the diagnostic validity of these attributions has been greatly overstated in the literature. In the final section, we offer an account of the functional significance of these attributions.

636 citations

Journal ArticleDOI
TL;DR: These findings highlight the utility of the original trustworthiness and dominance dimensions, but also underscore the need to utilise varied face stimuli: with a more realistically diverse set of face images, social inferences from faces show a more elaborate underlying structure than hitherto suggested.

279 citations

Journal ArticleDOI
TL;DR: The author argues that face recognition (specifically identification) may only be understood by adopting new techniques that acknowledge statistical patterns in the visual environment, and that, as a consequence, some current methods will need to be abandoned.
Abstract: Despite many years of research, there has been surprisingly little progress in our understanding of how faces are identified. Here I argue that there are two contributory factors: (a) Our methods have obscured a critical aspect of the problem, within-person variability; and (b) research has tended to conflate familiar and unfamiliar face processing. Examples of procedures for studying variability are given, and a case is made for studying real faces, of the type people recognize every day. I argue that face recognition (specifically identification) may only be understood by adopting new techniques that acknowledge statistical patterns in the visual environment. As a consequence, some of our current methods will need to be abandoned.

204 citations

Journal ArticleDOI
TL;DR: A quantitative model is created that can predict first impressions of previously unseen ambient images of faces from a linear combination of facial attributes, explaining 58% of the variance in raters’ impressions despite the considerable variability of the photographs.
Abstract: First impressions of social traits, such as trustworthiness or dominance, are reliably perceived in faces, and despite their questionable validity they can have considerable real-world consequences. We sought to uncover the information driving such judgments, using an attribute-based approach. Attributes (physical facial features) were objectively measured from feature positions and colors in a database of highly variable “ambient” face photographs, and then used as input for a neural network to model factor dimensions (approachability, youthful-attractiveness, and dominance) thought to underlie social attributions. A linear model based on this approach was able to account for 58% of the variance in raters’ impressions of previously unseen faces, and factor-attribute correlations could be used to rank attributes by their importance to each factor. Reversing this process, neural networks were then used to predict facial attributes and corresponding image properties from specific combinations of factor scores. In this way, the factors driving social trait impressions could be visualized as a series of computer-generated cartoon face-like images, depicting how attributes change along each dimension. This study shows that despite enormous variation in ambient images of faces, a substantial proportion of the variance in first impressions can be accounted for through linear changes in objectively defined features.

166 citations
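
The attribute-based approach described in the abstract above lends itself to a brief illustration. The following is a minimal sketch, not the authors' actual pipeline: it regresses objectively measured facial attributes onto a single impression factor (e.g., approachability) and reports variance explained on held-out faces. The attribute counts, data, and train/test split are illustrative assumptions.

```python
# Minimal sketch (not the authors' pipeline): regress facial attributes onto
# an impression factor score and report variance explained on unseen faces.
# All data here are synthetic placeholders.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)

# Hypothetical dataset: one row per face photograph, columns are objectively
# measured attributes (feature positions, colours, etc.).
n_faces, n_attributes = 1000, 65
X = rng.normal(size=(n_faces, n_attributes))                  # measured attributes
weights = rng.normal(size=n_attributes)                       # unknown in practice
y = X @ weights + rng.normal(scale=5.0, size=n_faces)         # e.g. mean "approachability" ratings

# Hold out previously unseen faces and fit a linear model on the rest.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)
model = LinearRegression().fit(X_train, y_train)

# Proportion of variance in impressions of unseen faces explained by the model.
print(f"R^2 on held-out faces: {r2_score(y_test, model.predict(X_test)):.2f}")

# Ranking attributes by coefficient magnitude gives a rough analogue of the
# factor-attribute importance ordering described in the abstract.
importance_order = np.argsort(np.abs(model.coef_))[::-1]
```

In the published study the factor scores were modelled with neural networks and validated against raters' impressions of unseen ambient photographs; the sketch above only illustrates the linear "attributes to impressions" step and the idea of evaluating variance explained out of sample.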