Topic

Face perception

About: Face perception is a research topic. Over the lifetime, 3,597 publications have been published within this topic, receiving 291,983 citations.


Papers
Journal ArticleDOI
TL;DR: A near-real-time computer system that can locate and track a subject's head and then recognize the person by comparing characteristics of the face to those of known individuals; the approach is easy to implement using a neural network architecture.
Abstract: We have developed a near-real-time computer system that can locate and track a subject's head, and then recognize the person by comparing characteristics of the face to those of known individuals. The computational approach taken in this system is motivated by both physiology and information theory, as well as by the practical requirements of near-real-time performance and accuracy. Our approach treats the face recognition problem as an intrinsically two-dimensional (2-D) recognition problem rather than requiring recovery of three-dimensional geometry, taking advantage of the fact that faces are normally upright and thus may be described by a small set of 2-D characteristic views. The system functions by projecting face images onto a feature space that spans the significant variations among known face images. The significant features are known as "eigenfaces," because they are the eigenvectors (principal components) of the set of faces; they do not necessarily correspond to features such as eyes, ears, and noses. The projection operation characterizes an individual face by a weighted sum of the eigenface features, and so to recognize a particular face it is necessary only to compare these weights to those of known individuals. Some particular advantages of our approach are that it provides for the ability to learn and later recognize new faces in an unsupervised manner, and that it is easy to implement using a neural network architecture.
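The projection-and-compare logic described in this abstract can be summarized in a short sketch. The following is a minimal, illustrative implementation of PCA-based face matching in NumPy, not the authors' original system; the array names (`faces`, `probe`, `labels`) and the choice of 20 components are assumptions for the example.

```python
# Minimal eigenfaces sketch, assuming `faces` is a NumPy array of shape
# (n_images, height * width) holding flattened grayscale training faces.
import numpy as np

def train_eigenfaces(faces, n_components=20):
    """Compute the mean face and the top principal components ("eigenfaces")."""
    mean_face = faces.mean(axis=0)
    centered = faces - mean_face
    # SVD of the centered data gives the principal components as rows of Vt.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    eigenfaces = vt[:n_components]        # (n_components, height * width)
    weights = centered @ eigenfaces.T     # each training face as a weight vector
    return mean_face, eigenfaces, weights

def recognize(probe, mean_face, eigenfaces, weights, labels):
    """Project a probe face into face space and return the closest known identity."""
    probe_weights = (probe - mean_face) @ eigenfaces.T
    distances = np.linalg.norm(weights - probe_weights, axis=1)
    return labels[int(np.argmin(distances))]
```

Recognition here reduces to a nearest-neighbour comparison of weight vectors in the low-dimensional "face space", which is the core economy the abstract describes.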

14,562 citations

Journal ArticleDOI
TL;DR: The data allow us to reject alternative accounts of the function of the fusiform face area (area “FF”) that appeal to visual attention, subordinate-level classification, or general processing of any animate or human forms, demonstrating that this region is selectively involved in the perception of faces.
Abstract: Using functional magnetic resonance imaging (fMRI), we found an area in the fusiform gyrus in 12 of the 15 subjects tested that was significantly more active when the subjects viewed faces than when they viewed assorted common objects. This face activation was used to define a specific region of interest individually for each subject, within which several new tests of face specificity were run. In each of five subjects tested, the predefined candidate “face area” also responded significantly more strongly to passive viewing of (1) intact than scrambled two-tone faces, (2) full front-view face photos than front-view photos of houses, and (in a different set of five subjects) (3) three-quarter-view face photos (with hair concealed) than photos of human hands; it also responded more strongly during (4) a consecutive matching task performed on three-quarter-view faces versus hands. Our technique of running multiple tests applied to the same region defined functionally within individual subjects provides a solution to two common problems in functional imaging: (1) the requirement to correct for multiple statistical comparisons and (2) the inevitable ambiguity in the interpretation of any study in which only two or three conditions are compared. Our data allow us to reject alternative accounts of the function of the fusiform face area (area “FF”) that appeal to visual attention, subordinate-level classification, or general processing of any animate or human forms, demonstrating that this region is selectively involved in the perception of faces.
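The analysis strategy described here, defining a region of interest from a localizer contrast and then testing independent conditions within it, can be outlined in a short sketch. This is illustrative only, not the authors' pipeline; the array names, the simple difference map standing in for a voxelwise t-map, and the threshold are assumptions.

```python
# Sketch of functionally-defined ROI analysis, assuming per-subject response
# maps as NumPy arrays: `faces_loc` / `objects_loc` from a localizer run and
# `cond_a` / `cond_b` from an independent test run (e.g. faces vs. hands).
import numpy as np
from scipy import stats

def define_roi(faces_loc, objects_loc, threshold=3.0):
    """Voxels responding more to faces than objects in the localizer run."""
    contrast = faces_loc - objects_loc     # stand-in for a proper voxelwise t-map
    return contrast > threshold            # boolean ROI mask

def roi_means(mask, cond_a, cond_b):
    """Mean response within the predefined ROI for two new conditions."""
    return cond_a[mask].mean(), cond_b[mask].mean()

# Across subjects, the per-subject ROI means can then be compared directly,
# e.g. with a paired t-test:
# a_means, b_means = zip(*[roi_means(m, a, b) for m, a, b in subject_data])
# t, p = stats.ttest_rel(a_means, b_means)
```

Because the ROI is fixed before the new conditions are examined, only one comparison per test is made, which is how the approach sidesteps the multiple-comparisons problem the abstract mentions.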

7,059 citations

Journal ArticleDOI
TL;DR: A model for the organization of the neural system mediating face perception is proposed that emphasizes a distinction between the representation of invariant and changeable aspects of faces; the model is hierarchical insofar as it is divided into a core system and an extended system.

4,430 citations

Journal ArticleDOI
TL;DR: A functional model is proposed in which structural encoding processes provide descriptions suitable for the analysis of facial speech, for analysis of expression and for face recognition units, and it is proposed that the cognitive system plays an active role in deciding whether or not the initial match is sufficiently close to indicate true recognition.
Abstract: The aim of this paper is to develop a theoretical model and a set of terms for understanding and discussing how we recognize familiar faces, and the relationship between recognition and other aspects of face processing. It is suggested that there are seven distinct types of information that we derive from seen faces; these are labelled pictorial, structural, visually derived semantic, identity-specific semantic, name, expression and facial speech codes. A functional model is proposed in which structural encoding processes provide descriptions suitable for the analysis of facial speech, for analysis of expression and for face recognition units. Recognition of familiar faces involves a match between the products of structural encoding and previously stored structural codes describing the appearance of familiar faces, held in face recognition units. Identity-specific semantic codes are then accessed from person identity nodes, and subsequently name codes are retrieved. It is also proposed that the cognitive system plays an active role in deciding whether or not the initial match is sufficiently close to indicate true recognition or merely a ‘resemblance’; several factors are seen as influencing such decisions. This functional model is used to draw together data from diverse sources including laboratory experiments, studies of everyday errors, and studies of patients with different types of cerebral injury. It is also used to clarify similarities and differences between processes responsible for object, word and face recognition.
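The sequence of stages in this functional model (structural encoding, face recognition units, person identity nodes, name codes) can be made concrete with a toy sketch. Everything below, including the data structures, the similarity measure, and the recognition threshold, is an illustrative assumption rather than a specification from the paper.

```python
# Toy sketch of the staged recognition route in the functional model above.
from dataclasses import dataclass

@dataclass
class FaceRecognitionUnit:
    person_id: str
    stored_code: tuple      # stored structural description of a familiar face

def structural_encoding(image) -> tuple:
    # Placeholder: derive a structural description from the seen face.
    return tuple(image)

def recognize_face(image, units, identity_nodes, names, threshold=0.8):
    code = structural_encoding(image)

    def similarity(a, b):
        return sum(x == y for x, y in zip(a, b)) / max(len(a), 1)

    # Match against stored codes; a decision process judges whether the best
    # match is close enough to count as recognition rather than mere resemblance.
    best = max(units, key=lambda u: similarity(code, u.stored_code))
    if similarity(code, best.stored_code) < threshold:
        return None                                   # only a resemblance
    semantics = identity_nodes[best.person_id]        # identity-specific semantics
    name = names[best.person_id]                      # name code retrieved last
    return semantics, name
```

The ordering matters: identity-specific semantic information is accessed before the name, which is the sequence the model uses to explain everyday errors such as knowing who someone is without recalling their name.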

3,604 citations

Journal ArticleDOI
TL;DR: The differential sensitivity of N170 to eyes in isolation suggests that N170 may reflect the activation of an eye-sensitive region of cortex; the voltage distribution of N170 over the scalp is consistent with a neural generator located in the occipitotemporal sulcus, lateral to the fusiform/inferior temporal region that generates N200.
Abstract: Event-related potentials (ERPs) associated with face perception were recorded with scalp electrodes from normal volunteers. Subjects performed a visual target detection task in which they mentally counted the number of occurrences of pictorial stimuli from a designated category such as butterflies. In separate experiments, target stimuli were embedded within a series of other stimuli including unfamiliar human faces and isolated face components, inverted faces, distorted faces, animal faces, and other nonface stimuli. Human faces evoked a negative potential at 172 msec (N170), which was absent from the ERPs elicited by other animate and inanimate nonface stimuli. N170 was largest over the posterior temporal scalp and was larger over the right than the left hemisphere. N170 was delayed when faces were presented upside-down, but its amplitude did not change. When presented in isolation, eyes elicited an N170 that was significantly larger than that elicited by whole faces, while noses and lips elicited small negative ERPs about 50 msec later than N170. Distorted human faces, in which the locations of inner face components were altered, elicited an N170 similar in amplitude to that elicited by normal faces. However, faces of animals, human hands, cars, and items of furniture did not evoke N170. N170 may reflect the operation of a neural mechanism tuned to detect (as opposed to identify) human faces, similar to the “structural encoder” suggested by Bruce and Young (1986). A similar function has been proposed for the face-selective N200 ERP recorded from the middle fusiform and posterior inferior temporal gyri using subdural electrodes in humans (Allison, McCarthy, Nobre, Puce, & Belger, 1994c). However, the differential sensitivity of N170 to eyes in isolation suggests that N170 may reflect the activation of an eye-sensitive region of cortex. The voltage distribution of N170 over the scalp is consistent with a neural generator located in the occipitotemporal sulcus lateral to the fusiform/inferior temporal region that generates N200.
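The way such a component is typically quantified, averaging epochs time-locked to stimulus onset and taking the most negative deflection in a post-stimulus window at a posterior temporal channel, can be sketched as below. The array layout, channel choice, and 140-200 ms window are assumptions for illustration, not the authors' exact parameters.

```python
# Sketch of N170-style peak measurement, assuming epoched EEG in a NumPy array
# `epochs` of shape (n_trials, n_channels, n_samples) and a matching `times`
# vector in seconds, both time-locked to stimulus onset.
import numpy as np

def n170_peak(epochs, times, channel_idx, window=(0.14, 0.20)):
    """Peak negative amplitude and latency in the N170 window at one channel."""
    erp = epochs[:, channel_idx, :].mean(axis=0)         # average across trials
    mask = (times >= window[0]) & (times <= window[1])   # e.g. 140-200 ms
    peak_idx = int(np.argmin(erp[mask]))                 # most negative deflection
    return erp[mask][peak_idx], times[mask][peak_idx]

# Comparing the face-evoked ERP with ERPs to other categories at the same
# channel gives the core contrast reported above: faces (and isolated eyes)
# produce a clear N170, while nonface stimuli do not.
```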

2,859 citations


Network Information
Related Topics (5)
Visual perception: 20.8K papers, 997.2K citations, 91% related
Working memory: 26.5K papers, 1.6M citations, 90% related
Social cognition: 16.1K papers, 1.2M citations, 86% related
Functional magnetic resonance imaging: 15.4K papers, 1.1M citations, 86% related
Visual cortex: 18.8K papers, 1.2M citations, 85% related
Performance Metrics
No. of papers in the topic in previous years

Year    Papers
2023    54
2022    169
2021    165
2020    133
2019    149
2018    168