
NeuroImage 233 (2021) 117896
Perceptual diculty modulates the direction of information ow in
familiar face recognition
Hamid Karimi-Rouzbahani a,b,∗, Farzad Ramezani c, Alexandra Woolgar a,b, Anina Rich b, Masoud Ghodrati d,∗

a Medical Research Council Cognition and Brain Sciences Unit, University of Cambridge, United Kingdom
b Perception in Action Research Centre and Department of Cognitive Science, Macquarie University, Australia
c Department of Computer Science, School of Mathematics, Statistics, and Computer Science, University of Tehran, Iran
d Neuroscience Program, Biomedicine Discovery Institute, Monash University, Australia
Keywords:
Face recognition
Familiar faces
Multivariate pattern analysis (MVPA)
Representational similarity analysis (RSA)
Informational brain connectivity
Abstract

Humans are fast and accurate when they recognize familiar faces. Previous neurophysiological studies have shown enhanced representations for the dichotomy of familiar vs. unfamiliar faces. As familiarity is a spectrum, however, any neural correlate should reflect graded representations for more vs. less familiar faces along the spectrum. By systematically varying familiarity across stimuli, we show a neural familiarity spectrum using electroencephalography. We then evaluated the spatiotemporal dynamics of familiar face recognition across the brain. Specifically, we developed a novel informational connectivity method to test whether peri-frontal brain areas contribute to familiar face recognition. Results showed that feed-forward flow dominates for the most familiar faces and top-down flow was only dominant when sensory evidence was insufficient to support face recognition. These results demonstrate that perceptual difficulty and the level of familiarity influence the neural representation of familiar faces and the degree to which peri-frontal neural networks contribute to familiar face recognition.
Introduction
Faces are crucial for our social interactions, allowing us to extract information about identity, gender, age, familiarity, intent and emotion. Humans categorize familiar faces more quickly and accurately than unfamiliar ones, and this advantage is more pronounced under difficult viewing conditions, where categorizing unfamiliar faces often fails (Ramon and Gobbini, 2018; Young and Burton, 2018). The neural correlates of this behavioral advantage suggest an enhanced representation of familiar over unfamiliar faces in the brain (Dobs et al., 2019; Landi and Freiwald, 2017). Here, we focus on addressing two major questions about familiar face recognition. First, whether there is a "familiarity spectrum" for faces in the brain, with enhanced representations for more vs. less familiar faces along the spectrum. Second, whether higher-order frontal brain areas contribute to familiar face recognition, testing previous suggestions about their role in visual recognition (Bar et al., 2006; Goddard et al., 2016; Karimi-Rouzbahani et al., 2019; Polyn et al., 2005; Summerfield et al., 2006; Todorov et al., 2007), and whether levels of face familiarity and perceptual difficulty (as has been suggested previously; Woolgar et al., 2011, 2015) impact the involvement of frontal cognitive areas in familiar face recognition.
Corresponding authors.
E-mail addresses: hamid.karimi-rouzbahani@mrc-cbu.cam.ac.uk (H. Karimi-Rouzbahani), ghodrati.masoud@gmail.com (M. Ghodrati).
One of the main limitations of previous studies, which hinders our progress in answering our first question, is that they mostly used celebrity faces as the familiar category (Ambrus et al., 2019; Collins et al., 2018; Dobs et al., 2019). As familiar faces can range widely from celebrity faces to highly familiar ones such as family members, relatives, friends, and even one's own face (Ramon and Gobbini, 2018), these results might not reflect the full familiarity spectrum. A better understanding of familiar face recognition requires characterizing the computational steps and representations for sub-categories of familiar faces, including personally familiar, visually familiar, famous, and experimentally learned faces. Such face categories differ not only in terms of how much exposure the individual has had to them, but also in the availability of personal knowledge, relationships, and emotions associated with the identities in question (Leppänen and Nelson, 2009; Ramon and Gobbini, 2018; Kovács, 2020). However, we still expect that potentially enhanced representations for more vs. less familiar faces, as they modulate behavior, can also be detected using neuroimaging analysis. Moreover, these categories may vary in terms of the potential for top-down influences in the process. Importantly, while a few functional magnetic resonance imaging (fMRI) studies have investigated the differences between different levels of familiar faces (Gobbini et al., 2004; Landi and Freiwald, 2017; Leibenluft et al., 2004; Ramon et al., 2015; Sugiura et al., 2015; Taylor et al., 2009), there are no studies
https://doi.org/10.1016/j.neuroimage.2021.117896
Received 22 October 2020; Received in revised form 10 February 2021; Accepted 17 February 2021
Available online 3 March 2021
1053-8119/© 2021 The Author(s). Published by Elsevier Inc. This is an open access article under the CC BY-NC-ND license
( http://creativecommons.org/licenses/by-nc-nd/4.0/ )

that systematically compare the temporal dynamics of information processing across this familiarity spectrum. Specifically, while event-related potential (ERP) analyses have shown amplitude modulation by levels of face familiarity (Henson et al., 2008; Kaufmann et al., 2009; Schweinberger et al., 2002; Huang et al., 2017), they remain silent about whether more familiar faces are represented more distinctly than less familiar faces: amplitude modulation does not necessarily mean that information is being represented. To address this issue, we can use multivariate pattern analysis (MVPA or decoding; Ambrus et al., 2019; Karimi-Rouzbahani et al., 2017a), which provides higher sensitivity (Norman et al., 2006) than univariate (e.g., ERP) analysis, to compare the amount of information in each of the familiarity levels.
In line with our second question, recent human studies have compared the neural dynamics for familiar vs. unfamiliar face processing using the high temporal resolution of electroencephalography (EEG; Ambrus et al., 2019; Collins et al., 2018) and magnetoencephalography (MEG; Dobs et al., 2019). These studies have found that familiarity affects the initial time windows of face processing, which are generally attributed to the feed-forward mechanisms of the brain. In particular, they have explored the possibility that the face familiarity effect occurs because these faces have been seen repeatedly, leading to the development of low-level representations for familiar faces in the occipito-temporal visual system. This in turn facilitates the flow of familiar face information in a bottom-up feed-forward manner from the occipito-temporal to the frontal areas for recognition (di Oleggio Castello and Gobbini, 2015; Ramon et al., 2015; Ellis et al., 1979; Young and Burton, 2018). On the other hand, studies have also shown the role of frontal brain areas in facilitating the processing of visual inputs (Bar et al., 2006; Goddard et al., 2016; Karimi-Rouzbahani et al., 2019; Kveraga et al., 2007), such as faces (Kramer et al., 2018; Summerfield et al., 2006), by feeding back signals to the face-selective areas in the occipito-temporal visual areas, particularly when the visual input is ambiguous (Summerfield et al., 2006) or during face imagery (Mechelli et al., 2004; Johnson et al., 2007). These top-down mechanisms, which were localized in medial prefrontal cortex (MPFC), have been suggested (but not quantitatively supported) to reflect feedback of (pre-existing) face templates, against which the input faces are compared for correct recognition (Polyn et al., 2005; Summerfield et al., 2006; Todorov et al., 2007) in a recollection procedure (Brown and Banks, 2015). A more recent fMRI study showed that there is significant face selectivity in the inferior frontal gyrus (IFG) over the frontal cortex and that the same area is strongly connected to the well-established face-selective superior temporal sulcus (STS) over the temporal cortex (Davies-Thompson and Andrews, 2012), which was consistent with a previous diffusion tensor imaging study (Ethofer et al., 2011). Despite the large face recognition literature supporting the roles of both the peri-occipital (e.g., fusiform face area, STS) and peri-frontal¹ (e.g., IFG, MPFC and posterior cingulate cortex; Ramon et al., 2015) brain areas (i.e., feed-forward and feedback mechanisms), their potential interactions in familiar face recognition have remained ambiguous (see for reviews Ramon and Gobbini, 2018; Duchaine and Yovel, 2015). We develop novel connectivity methods to track the flow of information along the feed-forward and feedback mechanisms and assess the role of these mechanisms in familiar face recognition.
One critical aspect of the studies that successfully detected top-down peri-frontal to peri-occipital feedback signals (Bar et al., 2006; Summerfield et al., 2006; Goddard et al., 2016) has been the active involvement of the participant in a task. In recent E/MEG studies reporting support for a feed-forward explanation of the face familiarity effect, participants were asked to detect target faces (Ambrus et al., 2019) or find a match between faces in a series of consecutively presented faces (Dobs et al., 2019). This makes familiarity irrelevant to the task of the participant. Such indirect tasks may reduce the involvement of top-down familiarity-related feedback mechanisms, as was demonstrated by a recent study (Kay and Yeatman, 2017), which found reduced feedback signals (from intraparietal to ventro-temporal cortex) when comparing fixation vs. an active task in an fMRI study. Therefore, to answer our research questions and fully test the contribution of feedback to the familiarity effect, we need active tasks that are affected by familiarity.

¹ Here we use the terms "peri-occipital" and "peri-frontal" to refer broadly to groups of electrodes selected from posterior and anterior parts of the EEG cap, respectively (as indicated in Fig. 5).
Timing information is also crucial in evaluating the flows of feed-forward and feedback information, as these processes often differ in their temporal dynamics (Kietzmann et al., 2019). With the advent of informational connectivity analyses, we now have the potential to examine the interaction of information between feed-forward and feedback mechanisms to characterize their potential spatiotemporal contribution to familiar face recognition (Goddard et al., 2016, 2019; Anzellotti and Coutanche, 2018; Basti et al., 2020; Karimi-Rouzbahani et al., 2020a). However, this requires novel methods to track the flow of familiarity information from a given brain area to a destination area and link this flow to the behavioral task goals to confirm its biological relevance. Such analyses can provide valuable insights for understanding the neural mechanisms underlying familiar face recognition in humans.
In our study, participants performed a familiar vs. unfamiliar face categorization task on sequences of images selected from four face categories (i.e., unfamiliar, famous, personally familiar and their own faces), with dynamically updating noise patterns, while their EEG data were recorded. It was crucial to use dynamic noise in this study. If stimuli were presented statically for more than ~200 ms, this would result in a dominant feed-forward flow of information simply due to the incoming information (Goddard et al., 2016; Karimi-Rouzbahani et al., 2019; Lamme and Roelfsema, 2000). On the other hand, if we present stimuli for very brief durations (e.g., < 50 ms), there may be insufficient time to evoke familiarity processing. By varying the signal-to-noise ratio of each image sequence using perceptual coherence, we were able to investigate how information for the different familiar categories gradually builds up in the electrical activity recordable by scalp electrodes, and how this relates to the amount of sensory evidence available in the stimulus (perceptual difficulty). The manipulation of sensory evidence also allowed us to investigate when, and how, feedback information flow affects familiar face recognition. Using univariate and multivariate pattern analyses, representational similarity analysis (RSA) and a novel informational connectivity analysis method, we reveal the temporal dynamics of neural representations for different levels of face familiarity.
Our results show that self and personally familiar faces lead to higher perceptual categorization accuracy and enhanced representation in the brain even when sensory information is limited, while famous (visually familiar) and unfamiliar face categorization is only possible in high-coherence conditions. Importantly, our novel information flow analysis suggests that in high-coherence conditions the feed-forward sweep of face category information processing is dominant, while at lower coherence levels the exchange of face category information is consistent with feedback flow of information. The change in dominance of feedback vs. feed-forward effects as a function of coherence level is consistent with a dynamic exchange of information between higher-order (frontal) cognitive and visual areas depending on the amount of sensory evidence.
Results
We designed a paradigm to study how the stimulus- and decision-related activations for different levels of face familiarity build up during stimulus presentation and how these built-up activations relate to the amount of sensory evidence about each category. We recorded EEG data from human participants (n = 18) while they categorized face images as familiar or unfamiliar. We varied the amount of sensory evidence by manipulating the phase coherence of images on different trials (Fig. 1A). In each 1.2 s (max) sequence of image presentation (trial), the pattern of noise changed in each frame (16.7 ms) while the face image
Fig. 1. Experimental design and behavioral results for familiar vs. unfamiliar face categorization. (A) Upper row shows a sample face image (from the famous category) at the four different phase coherence levels (22, 30, 45, and 55%) used in this experiment, in addition to the original image (not used). Lower row shows a schematic representation of the experimental paradigm. In each trial, a black fixation cross was presented for 300–600 ms (randomly selected). Then, a noisy and rapidly updating (every 16.7 ms) stimulus of a face image (unfamiliar, famous, personally familiar, or self), at one of the four possible phase coherence levels, was presented until response, for a maximum of 1.2 s. Participants had to categorize the stimulus as familiar or unfamiliar by pressing one of two buttons (button mappings swapped across the two sessions, counterbalanced across participants). There was then a variable inter-trial interval (ITI) lasting between 1 and 1.2 s (chosen from a uniform random distribution; see a demo of the task here https://osf.io/n7b8f/). (B) Mean behavioral accuracy for face categorization across all stimuli, as a function of coherence level; (C) Median reaction times for correctly categorized face trials across all conditions, as a function of coherence level. (D) and (E) show the results for different familiar face sub-categories. Error bars in all panels are the standard error of the mean across participants (smaller for panels B and C).
and the overall coherence level remained the same. Familiar face images (n = 120) were selected equally from celebrity faces, photos of the participants' own face, and personally familiar faces (e.g., friends, family members, relatives of the participant), while unfamiliar face images (n = 120) were completely unknown to participants before the experiment. Within each block of trials, familiar and unfamiliar face images with different coherence levels were presented in random order.
Levels of face familiarity are reflected in behavioral performance
We quantified our behavioral results using accuracy and reaction times on correct trials. Specifically, accuracy was the percentage of images correctly categorized as either familiar or unfamiliar. All participants performed with high accuracy (> 92%) at the highest phase coherence (55%), and their accuracy was significantly lower (~62%) at
the lowest coherence (22%; F(3,272) = 75.839, p < 0.001; Fig. 1B). The correct reaction times show that participants were significantly faster to categorize the face at high phase coherence levels than at lower ones (F(3,272) = 65.797, p < 0.001, main effect; Fig. 1C). We also calculated the accuracy and reaction times for the sub-categories of the familiar category separately (i.e., famous, personally familiar and self). The calculated accuracy here is the percentage of correct responses within each of these familiar sub-categories. The results show a gradual increase in accuracy as a function of phase coherence and familiarity (Fig. 1D; two-way ANOVA, factors: coherence level and face category; face category main effect: F(2,408) = 188.708, p < 0.001, coherence main effect: F(3,408) = 115.977, p < 0.001, and interaction: F(6,408) = 12.979, p < 0.001), with the highest accuracy when participants categorized their own (self), then personally familiar, and finally famous (or visually familiar) faces. The reaction time analysis showed a similar pattern, where participants were fastest to categorize self faces, then personally familiar and famous faces (Fig. 1E; two-way ANOVA, factors: coherence level and face category; face category main effect: F(2,404) = 174.063, p < 0.001, coherence main effect: F(3,404) = 104.861, p < 0.001). We did not evaluate any potential interaction between coherence levels and familiarity levels as it does not address any hypothesis in this study. All reported p-values were corrected for multiple comparisons at p < 0.05 using Bonferroni correction.
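As a minimal, synthetic-data illustration of this style of analysis (the paper reports two-way ANOVAs with face category and coherence as factors; only a one-way coherence effect is sketched here, with SciPy, and all numbers below are made up for demonstration):

```python
import numpy as np
from scipy.stats import f_oneway

# Illustrative sketch, not the authors' code or data.
# Rows = participants, columns = the four coherence levels
# (22, 30, 45, 55%); accuracy values are synthetic.
rng = np.random.default_rng(1)
n_subjects = 18
means = [62, 75, 88, 94]  # accuracy climbing with coherence
acc = np.column_stack([
    np.clip(rng.normal(m, 5, n_subjects), 0, 100) for m in means
])

# One-way ANOVA over the four coherence levels
f_stat, p_val = f_oneway(*acc.T)
# Bonferroni: multiply p by the number of planned comparisons
p_bonf = min(p_val * 4, 1.0)
```

With real data, each row would hold one participant's accuracy at the four coherence levels, and a two-way design would add face category as a second factor.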
Is there a "familiarity spectrum" for faces in the brain?
Our behavioral results showed that there is a graded increase in participants' performance as a function of familiarity level: participants achieve higher performance if the faces are more familiar to them. In this section we address the first question of this study, about whether we can find a familiarity spectrum in neural activations, using both traditional univariate and novel multivariate analyses of EEG.
Event-related potentials reflect behavioral familiarity effects
As an initial, more traditional, pass at the data, we explored how the neural responses were modulated by different levels of familiarity and coherence by averaging event-related potentials (ERPs) across participants for different familiarity levels and phase coherences (Fig. 2B). This is important as recent work failed to capture familiar face identity information from single electrodes (Ambrus et al., 2019). At high coherence, the averaged ERPs, obtained from a representative centroparietal electrode (CP2), where previous studies have found differential activity for different familiarity levels (Henson et al., 2008; Kaufmann et al., 2009; Huang et al., 2017), demonstrated an early evoked response, followed by an increase in amplitude proportional to familiarity level. Self faces elicited the highest ERP amplitude, followed by personally familiar, famous, and unfamiliar faces (Fig. 2B for 55% phase coherence). This differentiation between familiarity levels at later time points seems to support evidence accumulation over time, which is more pronounced at higher coherence levels where the brain had access to reliable information. This replicates previous findings showing differential activity for different levels of face familiarity after 200 ms in the post-stimulus onset window (Caharel et al., 2002; Wiese et al., 2019).
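The condition-averaged ERPs described above reduce to a baseline-corrected trial average at one electrode; a minimal sketch (function and argument names are ours, not the authors'):

```python
import numpy as np

def condition_erp(epochs, labels, condition, baseline):
    """Baseline-correct and average single-trial epochs for one
    condition at a single electrode (illustrative sketch only).

    epochs    : (n_trials, n_times) voltage at one electrode (e.g., CP2)
    labels    : (n_trials,) condition label per trial
    condition : condition to average (e.g., 'self', 'famous')
    baseline  : slice of pre-stimulus samples used for correction
    """
    trials = epochs[labels == condition]
    # Subtract each trial's mean pre-stimulus voltage
    trials = trials - trials[:, baseline].mean(axis=1, keepdims=True)
    return trials.mean(axis=0)
```

Averaging such traces per familiarity level and coherence, across participants, yields the grand-average ERPs compared in Fig. 2.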
We also observed a similar pattern between the ERPs of different familiarity levels at the time of decision (just before the response was made). Such systematic differentiation across familiarity levels was lacking at the lowest coherence level, where the amount of sensory evidence, and behavioral performance, were low (cf. Fig. 2A for 22% phase coherence; shaded areas). We observed a gradual increase in separability between the four face categories when moving from low to high coherence levels (Supplementary Figure 1). The topographic ERP maps (Supplementary Figure 2) show that the effects are not localized to the CP2 electrode, but rather distributed across the head. There are electrodes which seem to show even more familiarity information than the CP2 electrode. These results reveal the neural correlates of perceptual differences in categorizing different familiar face categories under perceptually difficult conditions.
Dynamics of neural representation and evidence accumulation for different face familiarity levels
Our results so far are consistent with previous event-related studies showing that the amplitude of ERPs is modulated by the familiarity of the face (Henson et al., 2008; Kaufmann et al., 2009; Schweinberger et al., 2002; Huang et al., 2017). However, greater modulation of ERP amplitude does not necessarily mean enhanced representation. Moreover, we observed that the familiarity effects were distributed across the head rather than localized to the individual CP2 electrode (Supplementary Figure 2). Therefore, looking at individual electrodes might overlook the true temporal dynamics of familiarity information, which may involve widespread brain networks (Ramon and Gobbini, 2018; Duchaine and Yovel, 2015). Here we used multivariate pattern and representational similarity analyses on these EEG data to quantify the time course of familiar vs. unfamiliar face processing. Compared to traditional single-channel (univariate) ERP analysis, MVPA allows us to capture whole-brain, widespread and potentially subtle differences between the activation dynamics of different familiarity levels (Ambrus et al., 2019; Dobs et al., 2019). Specifically, we asked: (1) how the representational dynamics of stimulus- and response-related activations change depending on the level of face familiarity; and (2) how manipulation of sensory evidence (phase coherence) affects neural representation and coding of different familiarity levels.
To obtain the temporal evolution of familiarity information across time, at each time point we trained a classifier to discriminate between familiar and unfamiliar faces. Note that the mapping between responses and fingers was swapped from the first session to the next (counterbalanced across participants) and the data were collapsed across the two sessions for all analyses, which ensures the motor response cannot drive the classifier. We trained the classifier using 90% of the trials and tested it on the left-out 10% of trials using a standard 10-fold cross-validation procedure (see Methods). This analysis used only correct trials. Our decoding analysis shows that, up until ~200 ms after stimulus onset, decoding accuracy is near chance for all coherence levels (Fig. 3A). The decoding accuracy then gradually increases over time and peaks around 500 ms post-stimulus for the highest coherence level (55%) but remains around chance for the lowest coherence level (22%; Fig. 3A). The accuracy for intermediate coherence levels (30% and 45%) falls between these two bounds but only reaches significance above chance for the 45% coherence level. This ramping-up temporal profile suggests an accumulation of sensory evidence in the brain across the time course of stimulus presentation, with a processing time that depends on the strength of the sensory evidence (Hanks and Summerfield, 2017; Philiastides et al., 2006).
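The time-resolved decoding procedure, training and testing a classifier independently at every time point with 10-fold cross-validation, can be sketched as below. The choice of LDA and of scikit-learn utilities is our assumption for illustration; the paper's Methods specify the actual classifier and preprocessing.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import StratifiedKFold, cross_val_score

def time_resolved_decoding(X, y, n_folds=10):
    """Decode familiar vs. unfamiliar at each time point with
    10-fold cross-validation (illustrative sketch; LDA assumed).

    X : (n_trials, n_channels, n_times) EEG epochs
    y : (n_trials,) binary labels (familiar = 1, unfamiliar = 0)
    Returns (n_times,) mean cross-validated accuracy per time point.
    """
    n_times = X.shape[2]
    cv = StratifiedKFold(n_splits=n_folds, shuffle=True, random_state=0)
    acc = np.empty(n_times)
    for t in range(n_times):
        clf = LinearDiscriminantAnalysis()
        # Channels are the features; one classifier per time point
        acc[t] = cross_val_score(clf, X[:, :, t], y, cv=cv).mean()
    return acc
```

On synthetic data where class information emerges partway through the epoch, accuracy stays at chance early and rises once the signal appears, mirroring the ramping profile described above.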
After verifying that we could decode the main effect of familiarity, we turned to the first main question of the study. To examine whether neural decoding could reveal the spectrum of familiarity which we observed in behavior and ERPs, we separately calculated the decoding accuracy for each of the sub-categories of familiar faces (Fig. 3B): famous, personally familiar and self faces (at the 55% coherence level, which showed the highest decoding in Fig. 3A). The decoding accuracy was highest for self faces, both for stimulus- and response-aligned analyses, followed by personally familiar, famous and unfamiliar faces. Accuracy for the response-aligned analysis shows that decoding gradually increased to a peak ~100 ms before the response was given by participants. This temporal evolution of decoding accuracy begins after early visual perception and rises in proportion to the level of face familiarity.
To rule out the possibility that an unbalanced number of trials in the sub-categories of familiar faces could lead to the difference in decoding accuracies between the sub-categories, we also repeated the decoding
Fig. 2. The effect of familiarity and sensory evidence on event-related potentials (ERPs). Averaged ERPs for 22% (A) and 55% (B) phase coherence levels and four face categories across all participants for an electrode at a centroparietal site (CP2). Note that the left panels show stimulus-aligned ERPs while the right panels show response-aligned ERPs. Shaded areas show the time windows when the difference in ERPs between unfamiliar and the average of the three familiar face categories (i.e., unfamiliar vs. the average of the familiar categories) was significantly (p < 0.05) higher in the 55% vs. 22% coherence level. Significance was evaluated using a one-tailed independent t-test with correction for multiple comparisons across time at p < 0.05. The differences were significant at later stages of stimulus processing, around 400 ms post-stimulus onset and < 100 ms before the response was given by the participant, in the stimulus- and response-aligned analyses, respectively.
analysis by classifying each familiar sub-category from the unfamiliar category (after equalizing the number of trials across the familiar and unfamiliar categories and also across the three familiar sub-categories), which provided similar results. We also repeated the same analysis for lower coherence levels: only the two high-coherence conditions (i.e., 45% and 55%), but not the low-coherence conditions (i.e., 22% and 30%), showed significantly above-chance decoding for all familiarity conditions (Supplementary Figure 3).
Low-level stimulus differences between conditions could potentially drive the differences between categories observed in both the ERP and decoding analyses (e.g., familiar faces being more frontal than unfamiliar faces, leading to images with brighter centers and, therefore, separability of familiar from unfamiliar faces using the central luminance of images; Dobs et al., 2019; Ambrus et al., 2019). To address such potential differences, we carried out a supplementary analysis using RSA (Supplementary Text and Supplementary Figures 4 and 5), which showed that such differences between images do not play a major role in the differentiation between categories.
To determine whether the dynamics of decoding during stimulus presentation are associated with the perceptual task, as captured by our participants' behavioral performance, we calculated the correlation between decoding accuracy and perceptual performance. For this, we calculated the correlation between 16 data points from decoding accuracy (4 face categories × 4 phase coherence levels) and their corresponding behavioral accuracy rates, averaged over participants. The correlation peaked ~500 ms post-stimulus (Fig. 3C), which was just before the response was given. This is consistent with an evidence accumulation mechanism determining whether to press the button for 'familiar' or 'unfamiliar', which took another ~100 ms to turn into action (finger movement).
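The brain-behavior analysis above correlates 16 condition means (4 face categories × 4 coherence levels) at every time point; a minimal sketch (the function name is ours, and Pearson correlation is our assumption for illustration):

```python
import numpy as np
from scipy.stats import pearsonr

def brain_behavior_correlation(decoding, behavior):
    """Correlate decoding accuracy with behavioral accuracy across
    the 16 conditions at every time point (illustrative sketch).

    decoding : (16, n_times) decoding accuracy per condition and time
    behavior : (16,) behavioral accuracy per condition
    Returns (n_times,) correlation time course.
    """
    return np.array([pearsonr(decoding[:, t], behavior)[0]
                     for t in range(decoding.shape[1])])
```

The time point where this correlation peaks indicates when the neural code is most tightly coupled to behavioral performance, ~500 ms post-stimulus in the results described above.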
Do higher-order peri-frontal brain areas contribute to familiar face
recognition?
In this section we address the second question of this study: whether peri-frontal brain areas contribute to the recognition of familiar faces in the human brain, using a novel informational connectivity analysis on EEG.
Task difficulty and familiarity level affect information flow across the brain
We investigated how the dynamics of feed-forward and feedback information flow change during the accumulation of sensory evidence and the evolution, over a trial, of neural representations of face images. We developed a novel connectivity method based on RSA to quantify the relationships between the evolution of information at peri-occipital EEG electrodes and that at peri-frontal electrodes. As an advantage over previous Granger causality methods (Goddard et al., 2016, 2019; Karimi-Rouzbahani et al., 2019; Kietzmann et al., 2019), the connectivity method developed here allowed us to check whether the transferred signals contained specific aspects of stimulus information. Alternatively, the transferred signals might carry highly abstract but irrelevant information between the source and destination areas, which can be incorrectly interpreted as connectivity (Anzellotti and Coutanche, 2018; Basti et al., 2020). Briefly, feed-forward information flow is quantified as the degree to which the information at peri-occipital electrodes contributes to the information that appears later at peri-frontal electrodes.
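A minimal sketch of this kind of RDM-based directed connectivity follows. It assumes that feed-forward flow at time t is indexed by the similarity of the current peri-frontal RDM to an earlier peri-occipital RDM, and feedback by the reverse pairing; the function name, the fixed lag, and the use of a simple (rather than partial) Spearman correlation are assumptions, not the paper's exact procedure.

```python
import numpy as np
from scipy.stats import spearmanr

def rdm_flow(occ_rdms, front_rdms, delay):
    """Sketch of RSA-based informational connectivity.

    occ_rdms, front_rdms : (n_times, n, n) stacks of neural RDMs built from
        peri-occipital and peri-frontal electrodes, respectively.
    delay : lag in samples between "source" and "destination" RDMs.
    Returns (feed_forward, feedback) time courses of Spearman correlations.
    """
    n = occ_rdms.shape[1]
    iu = np.triu_indices(n, k=1)   # use off-diagonal RDM entries only
    ff, fb = [], []
    for t in range(delay, occ_rdms.shape[0]):
        # feed-forward: earlier occipital RDM vs. current frontal RDM
        ff.append(spearmanr(occ_rdms[t - delay][iu], front_rdms[t][iu])[0])
        # feedback: earlier frontal RDM vs. current occipital RDM
        fb.append(spearmanr(front_rdms[t - delay][iu], occ_rdms[t][iu])[0])
    return np.array(ff), np.array(fb)
```

Comparing the two returned time courses indicates which direction of flow dominates at each moment, which is the contrast the paper draws between the most familiar faces and the perceptually difficult conditions.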
Frequently Asked Questions (10)
Q1. What are the contributions mentioned in the paper "Perceptual difficulty modulates the direction of information flow in familiar face recognition" ?

By systematically varying familiarity across stimuli, the authors show a neural familiarity spectrum using electroencephalography. 

Image sequences were presented in rapid serial visual presentation (RSVP) fashion at a frame rate of 60 Hz (i.e., 16.67 ms per frame, without gaps). 

The authors used random bootstrapping testing to evaluate the significance of the decoding accuracies at every time point for the group of participants. 

After the correction, the true correlation values with p < 0.05 were considered significantly above chance (i.e., 0). Representational similarity analysis is used here for three purposes. 

For every time point, the p-value of the true group-averaged decoding accuracy was obtained as [1 - p(10,000 randomly generated decoding accuracies which were surpassed by the corresponding true group-averaged decoding value)]. 
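The bootstrap p-value defined above can be written compactly. This is a hypothetical helper illustrating the formula, not the authors' code; generating the 10,000 null accuracies (e.g., by shuffling labels) is left outside the sketch.

```python
import numpy as np

def bootstrap_pvalue(true_acc, null_accs):
    """p-value of a group-averaged decoding accuracy against a null
    distribution: 1 minus the proportion of the randomly generated
    accuracies that the true value surpasses."""
    null_accs = np.asarray(null_accs)
    return 1.0 - np.mean(true_acc > null_accs)
```

Applied at every time point, accuracies with p < 0.05 (after multiple-comparison correction) are the ones reported as significantly above chance.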

The authors had a total of 240 trials (i.e., 30 trials per perceptual category, familiar and unfamiliar, each at four phase coherence levels) during the experiment. Participants were naïve about the number and proportion of the face stimuli in categories. 

The authors constructed neural representational dissimilarity matrices (RDMs) by calculating the (Spearman's rank) correlation between every possible pair of representations obtained from every single presented image which resulted in a correct response (leading to a 240 by 240 RDM if all images were categorized correctly, which was never the case for any participant). 
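RDM construction of this kind can be sketched as below, assuming single-trial response patterns are rows of an array (the function name and shapes are illustrative):

```python
import numpy as np
from scipy.stats import spearmanr

def build_rdm(patterns):
    """Build a representational dissimilarity matrix from single-trial
    response patterns (n_trials x n_features): each entry is
    1 - Spearman correlation between a pair of trial patterns."""
    rho, _ = spearmanr(patterns, axis=1)  # rows as variables -> trial x trial matrix
    return 1.0 - rho
```

With all 240 correct trials retained, `build_rdm` would return the 240 × 240 matrix described above; trials with incorrect or missing responses are simply dropped from `patterns` first.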

If participants failed to respond within the 1.2 s period, the trial was marked as a no-choice trial and was excluded from further analysis. 

The authors repeated this procedure iteratively 10 times until all trials from the two categories were used in the testing of the classifier once (no trial was included both in the training and testing sets in a single run), hence 10-fold cross-validation, and averaged the classification accuracy across the 10 validation runs for each participant. 
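The 10-fold scheme described above can be sketched generically. The function below is a hypothetical skeleton (names and the `train_and_score` callback are assumptions); it only encodes the constraint that every trial is tested exactly once and never appears in both training and testing sets of a single run.

```python
import numpy as np

def cross_validated_accuracy(X, y, train_and_score, n_folds=10, seed=0):
    """10-fold cross-validation: split trials into 10 disjoint folds, hold
    each fold out for testing once, train on the rest, and average the
    accuracy over the 10 validation runs.

    train_and_score(X_tr, y_tr, X_te, y_te) -> accuracy on the test fold.
    """
    rng = np.random.default_rng(seed)
    order = rng.permutation(len(y))          # shuffle trials before splitting
    folds = np.array_split(order, n_folds)
    scores = []
    for k in range(n_folds):
        test_idx = folds[k]
        train_idx = np.concatenate([folds[j] for j in range(n_folds) if j != k])
        scores.append(train_and_score(X[train_idx], y[train_idx],
                                      X[test_idx], y[test_idx]))
    return float(np.mean(scores))
```

In practice one would balance the folds across the familiar/unfamiliar categories (stratification), a detail omitted here for brevity.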

For every time point, the p-value of the true group-averaged correlation was obtained as [1 - p(10,000 randomly generated correlations which were surpassed by the corresponding true group-averaged correlation)].