Non-conscious recognition of affect in the absence of striate cortex

B. de Gelder, +3 more
16 Dec 1999, Vol. 10, Iss. 18, pp. 3759-3763
Tilburg University
Non-conscious recognition of affect in the absence of striate cortex
de Gelder, B.; Vroomen, J.; Pourtois, G.R.C.; Weiskrantz, L.
Published in: Neuroreport
Publication date: 1999

Citation for published version (APA):
de Gelder, B., Vroomen, J., Pourtois, G. R. C., & Weiskrantz, L. (1999). Non-conscious recognition of affect in the absence of striate cortex. Neuroreport, 10(18), 3759-3763.

Vision, Central NeuroReport
0959-4965 © Lippincott Williams & Wilkins

Non-conscious recognition of affect in the absence of striate cortex

Béatrice de Gelder,1,2,CA Jean Vroomen,1 Gilles Pourtois1,2 and Lawrence Weiskrantz3

1 Cognitive Neuroscience Laboratory, Tilburg University, PO Box 90153, 5000 LE Tilburg, The Netherlands; 2 Neurophysiology Laboratory, Faculty of Medicine, Louvain University, Belgium; 3 Department of Psychology, Oxford University, UK

CA Corresponding Author
FUNCTIONAL neuroimaging experiments have shown that recognition of emotional expressions does not depend on awareness of visual stimuli and that unseen fear stimuli can activate the amygdala via a colliculo-pulvinar pathway. Perception of emotional expressions in the absence of awareness in normal subjects has some similarities with the unconscious recognition of visual stimuli which is well documented in patients with striate cortex lesions (blindsight). Presumably in these patients residual vision engages alternative extra-striate routes such as the superior colliculus and pulvinar. Against this background, we conjectured that a blindsight subject (GY) might recognize facial expressions presented in his blind field. The present study now provides direct evidence for this claim. NeuroReport 10:3759-3763 © 1999 Lippincott Williams & Wilkins.

Key words: Awareness; Blindsight; ERPs; Facial expression; P1
Introduction
Evidence about the absence of conscious awareness
in processing emotional information has emerged
recently from a number of areas. Neuroimaging
studies have shown amygdala activation to emo-
tional stimuli, most notably to fearful faces [1,2].
Subcortical reactions to emotional stimuli have also
been registered when stimulus awareness was pre-
vented by backward visual masking of the emotional
stimuli [3], including in a fear conditioning paradigm
[4]. A prosopagnosic patient unable to recognize
facial expressions as a consequence of focal brain
damage in the occipito-temporal areas nevertheless
showed a sizable impact of facial expressions on
recognition of voice affect [5]. Such studies share a
similarity with reports of processing of elementary
visual stimuli in the absence of awareness in patients
with striate cortex lesions (blindsight). These patients can make accurate guesses about the attributes of stimuli presented to their blind field of which they have no awareness.
The pathways of retinal origin that are most likely
to be engaged by visual processing in the absence of
striate cortex are the superior colliculus and the
pulvinar. Neuroimaging studies [4] have provided
evidence for selective involvement of these struc-
tures in conscious vs non-conscious recognition of
facial expressions. Thus far, studies of residual visual abilities in patients with blindsight have mostly investigated covert perception of elementary visual information such as presence of a spatial frequency grating, discrimination of simple shape (such as O vs X) [6], detection of orientation or of direction of movement [7] or of colour [8-11]. Recently blindsight has been reported for some high-level visual stimuli such as words [12]. Given the existence of alternative visual pathways that remain after loss of the pathway to striate cortex, together with data from studies showing non-conscious processing of emotional information, we conjectured that there might exist non-conscious recognition of facial expressions in such a case.
Here we report the first study of recognition of unseen emotional stimuli in a well-studied 43-year-old blindsight subject, GY (see [13] for a recent list of studies with GY and details about the lesion), who has a right half-field of blindness as a result of damage to his left occipital lobe at the age of 8. Behavioural methods were used to test whether he could discriminate different facial expressions and, if so, whether his good performance reflected covert recognition of the facial expressions rather than discrimination of two patterns of movement, and whether the actual conscious content of the alternative response labels he was given was important for his performance. As a follow-up we provide evidence for visual processing in the blind field obtained with event-related potentials (ERPs).

NeuroReport 10, 3759-3763 (1999)
Vol 10 No 18 16 December 1999
3759

Materials and Methods

Stimuli and tasks: Stimuli consisted of four video fragments showing a female face pronouncing the same sentence with four different facial expressions (happy, sad, angry, fearful). These materials were subsequently used in different presentation conditions. Presentation was either random between left/right visual fields or blocked, the image size could be either small (10.2 × 8.2°) or large (12.5 × 10.7°), and depending on the experiment the forced-choice alternatives were either happy vs sad, or angry vs fearful. Mean luminance of the screen in between stimulus presentations was 1.5 cd/m². Mean luminance of the face was 20 cd/m² and for the grey frame around the image was 21 cd/m². Horizontal separation between the fixation point and the outer edge of the face was 3.6°, for the eye it was 5.1°, and for the center of the face it was 6.4°. Stimulus duration was 1.63 s. All of the responses were made verbally.
In the first experiment a total of 8 blocks were run using different stimulus pairs (happy/sad, angry/sad, angry/fearful), different stimulus sizes (small or big), and different presentation conditions (randomized over left (LVF) or right (RVF) visual fields, or in blocks of trials to either field). In the second experiment, four different video fragments (happy/sad/angry/fearful) were presented in a four-alternative forced-choice design and shown in the RVF. They were presented randomly in two blocks of 72 trials each (18 × 4 categories). Instructions specified to label the videos as happy, sad, angry or fearful. The duration of each was 1.63 s. In the third experiment, all stimuli were the small-size happy/fear faces, with presentation blocked or randomized. In the fourth experiment, the videos were the same small-sized moving videos as described before, with a 6.4° horizontal separation between the fixation point and the centre of the face. All videos were presented in the right visual field with the sound off, in blocks of 60 trials, 30 for each of the two categories being used (happy/sad or angry/fearful). The categories were presented in random order. The two blocks with congruent labels were presented first (first happy/sad, then angry/fearful), and they were followed by the two blocks with non-congruent labels (first angry/fearful videos with happy/sad labels, then happy/sad videos with angry/fearful labels). This series of four blocks was presented twice, so the whole test consisted of eight blocks. Instructions were identical to those of the previous experiments. GY was not informed about the non-congruence between the stimuli and the labels he was instructed to use.
ERP recording and processing: Visual event-related brain potentials were recorded on two separate occasions using a Neuroscan with 64 channels. GY was tested in a dimly lit, electrically shielded room with the head restrained by a chin rest 60 cm from the screen, fixating a central cross. Four blocks of 240 stimuli were presented. Stimuli consisted of complex gray-scale and coloured static normal front faces taken from the Ekman series [14]. Three types of facial expressions appearing randomly either in the good visual field or in the blind visual field were presented (neutral, happy and fearful), for a total of 48 experimental conditions (2 visual hemi-fields × 2 colours × 3 emotions × 2 genders × 2 identities), each repeated 20 times. Stimulus duration was 1250 ms and the inter-trial interval was randomized between 1000 and 1500 ms. Stimuli were presented with the internal edge of the stimulus at 4.7° from the fixation cross in the center of the screen. Size of stimulus was 6 × 10 cm. Mean luminance was < 1 cd/m² for the room, 25 cd/m² for the face and < 1 cd/m² for the screen in between stimulus presentations. When stimuli were presented in his blind or good visual fields, GY was instructed to discriminate (or guess in the blind field) the gender of the faces by pressing one of two keys.
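For reference, the reported 6 × 10 cm stimulus at the 60 cm chin-rest distance corresponds to roughly 5.7° × 9.5° of visual angle. A minimal sketch of that standard conversion (the function name is ours, not from the paper):

```python
import math

def visual_angle_deg(size_cm: float, distance_cm: float) -> float:
    """Full visual angle subtended by a stimulus of a given size
    viewed at a given distance: 2 * atan(size / (2 * distance))."""
    return 2.0 * math.degrees(math.atan(size_cm / (2.0 * distance_cm)))

# ERP stimuli: 6 x 10 cm faces viewed from 60 cm
width_deg = visual_angle_deg(6.0, 60.0)    # ~5.7 degrees
height_deg = visual_angle_deg(10.0, 60.0)  # ~9.5 degrees
print(round(width_deg, 1), round(height_deg, 1))
```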
Horizontal EOG and vertical EOG were monitored using facial bipolar electrodes. EEG was recorded with a left-ear reference, amplified with a gain of 30 K and bandpass filtered at 0.01-100 Hz. Impedance was kept below 5 kΩ. EEG and EOG were continuously acquired at a rate of 500 Hz. Epoching was performed from 100 ms prior to stimulus onset and continued for 924 ms after stimulus presentation. Data were re-referenced off-line to a common average reference and low-pass filtered at 30 Hz. Amplitudes and latencies of visual components were measured relative to a 100 ms pre-stimulus baseline.
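The epoching and baseline-correction steps described above can be sketched with plain NumPy (array names and shapes are illustrative; this is not the authors' Neuroscan pipeline):

```python
import numpy as np

FS = 500                      # sampling rate, Hz
PRE_MS, POST_MS = 100, 924    # epoch window relative to stimulus onset

def epoch_and_baseline(eeg: np.ndarray, onsets: list) -> np.ndarray:
    """Cut continuous (channels, samples) EEG into epochs spanning
    -100 ms to +924 ms around each onset sample, then subtract each
    channel's mean over the 100 ms pre-stimulus baseline."""
    pre = PRE_MS * FS // 1000      # 50 samples before onset
    post = POST_MS * FS // 1000    # 462 samples after onset
    epochs = np.stack([eeg[:, t - pre:t + post] for t in onsets])
    baseline = epochs[:, :, :pre].mean(axis=2, keepdims=True)
    return epochs - baseline       # (n_epochs, n_channels, pre + post)

# toy check: a constant DC offset is removed by baseline correction
eeg = np.ones((64, 5000)) * 7.0
out = epoch_and_baseline(eeg, [1000, 2000])
print(out.shape)  # (2, 64, 512)
```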
Results

Experiment 1: Our first study used a total of 8 blocks consisting of different stimulus pairs (happy/sad, angry/sad, angry/fearful). The task was a 2AFC and GY was instructed to guess the facial expression shown to his blind field. GY was always flawless with stimuli presented to his intact left hemifield (LVF). When asked to report verbally what he saw in his damaged right hemifield (RVF), GY frequently reported detecting the offset and onset of a white flash, but he never consciously perceived a face or even a moving stimulus. Overall, 333 trials were presented in his right (blind) visual field (Table 1), and he was correct on 220 of them (66%; Z = 5.86, p < 0.005).
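The reported Z score is consistent with the standard normal approximation to the binomial against a 0.5 chance level; a small sketch of that reconstruction (the paper does not spell out its formula):

```python
import math

def binomial_z(correct: int, n: int, p_chance: float = 0.5) -> float:
    """Normal approximation to the binomial: Z = (k - np) / sqrt(np(1 - p))."""
    mean = n * p_chance
    sd = math.sqrt(n * p_chance * (1.0 - p_chance))
    return (correct - mean) / sd

# Experiment 1: 220 correct out of 333 blind-field trials
print(round(binomial_z(220, 333), 2))  # 5.86
```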

Experiment 2: The second experiment used four different video fragments (happy/sad/angry/fearful) presented in a four-alternative forced-choice design and shown in the RVF. GY correctly labeled the videos as happy, sad, angry, or fearful on 38 of 72 trials in the first block (52%, with the chance level at 25%; Z = 5.30, p < 0.005) and on 41 of 72 trials in the second block (57%; Z = 6.12, p < 0.005; Table 2). The happy and sad videos were recognized, as before, better than the angry and fearful videos. The overall performance was far above chance (Z = 8.17, p < 0.005). GY thus also performed well in a complex design that required more than a simple binary decision.
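For the four-alternative data, the reported values closely match a binomial approximation with a continuity correction at a 0.25 chance level. This is our reconstruction, not a formula stated in the paper:

```python
import math

def binomial_z_cc(correct: int, n: int, p_chance: float) -> float:
    """Continuity-corrected normal approximation to the binomial."""
    mean = n * p_chance
    sd = math.sqrt(n * p_chance * (1.0 - p_chance))
    return (correct - 0.5 - mean) / sd

print(round(binomial_z_cc(38, 72, 0.25), 2))   # 5.31 (reported: 5.30)
print(round(binomial_z_cc(41, 72, 0.25), 2))   # 6.12
print(round(binomial_z_cc(79, 144, 0.25), 2))  # 8.18 (reported: 8.17)
```

The small rounding differences suggest the authors may have used a slightly different correction or intermediate rounding.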
Experiment 3: The third experiment was carried out to assess whether movement was an important parameter for GY's performance or whether he could recognize stationary facial expressions (stills). We used a 2AFC task and GY was instructed to guess the facial expression shown to his blind field. Performance with the video fragments was compared with that for still shots and for upside-down presentation. Table 3 shows that performance was better with moving stimuli than with still ones. Movement therefore seems to play an important role for GY in distinguishing facial expressions. This issue is further examined in the next experiment using congruent vs non-congruent labels, where movement was present throughout all presentations.
Experiment 4: To test whether performance was critically dependent on the veridical response labels being used, GY was tested a few months after Experiments 1-3. In the congruent blocks, GY had to identify the happy/sad videos with the labels happy/sad, or to identify the angry/fearful videos with the labels angry/fearful. In the non-congruent blocks, he was given the response labels angry or fearful while, unknown to him, the happy/sad videos were presented, or conversely, he was given the labels happy/sad while the angry/fearful videos were shown.

GY did not report experiencing anything strange or different between congruent and non-congruent blocks. As before, he reported detecting a white flash with an onset and an offset, but nothing more than that. However, performance was better with congruent labels.

In the first block, with congruent happy/sad videos and labels, GY was correct on 46 of 56 trials (four trials discarded for the presence of eye movements): 21 of 28 happy faces were recognized as
Table 1. Covert recognition of facial expressions

Stimulus pair   Image size   L/R presentation   Correct   p
Happy/fearful   Small        Randomized         22/27     <0.001
Happy/fearful   Large        Randomized         18/28     NS
Happy/fearful   Small        Blocked            37/58     <0.05
Happy/fearful   Large        Blocked            37/58     <0.05
Angry/sad       Small        Randomized         15/27     NS
Angry/sad       Small        Blocked            39/54     <0.01
Angry/fearful   Small        Randomized         15/27     NS
Angry/fearful   Small        Blocked            37/56     <0.05
Table 2. Confusion matrix of GY's responses to happy, sad, angry, or fearful videos

Video     Response
          Happy   Sad   Angry   Fearful
Happy     27      2     6       1
Sad       1       24    5       6
Angry     3       11    13      9
Fearful   2       12    6       15
Table 3. Perceiving facial expressions or discriminating movement

Stimulus   Orientation   Presentation   Correct   p
Dynamic    Upright       Randomized     20/28     <0.05
Still      Upright       Randomized     19/27     <0.05
Dynamic    Inverted      Randomized     18/28     NS
Still      Inverted      Randomized     16/28     NS
Dynamic    Upright       Blocked        51/56     <0.001
Still      Upright       Blocked        26/53     NS
Dynamic    Inverted      Blocked        26/56     NS
Still      Inverted      Blocked        27/54     NS

happy, and 25 of 28 sad faces were recognized as sad (χ²(1) = 23.62, p < 0.001). On second testing, he was correct on 47 of 60 trials: 24 of 30 happy faces were recognized as happy, and 23 of 30 sad faces were recognized as sad (χ²(1) = 19.28, p < 0.001).

On the first test with congruent angry/fearful videos and labels, GY was correct on only 26 of 60 trials: 15 of 30 angry faces were recognized as angry, and 11 of 30 fearful faces were recognized as fearful (χ²(1) = 1.08, NS). On the second test, however, he improved considerably, and was correct on 40 of 60 trials: 21 of 30 angry faces were recognized as angry, and 19 of 30 fearful faces were recognized as fearful (χ²(1) = 6.69, p < 0.01). It thus appeared that the angry/fearful videos were more difficult than the happy/sad videos, but his performance improved on second testing. Overall, GY was correct on 159/236 trials (67%; χ²(1) = 28.51, p < 0.001).
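The χ² values for the congruent happy/sad blocks can be reproduced with the standard Pearson χ² statistic on a 2 × 2 response table (our reconstruction of the computation):

```python
def chi2_2x2(a: int, b: int, c: int, d: int) -> float:
    """Pearson chi-square for a 2x2 table [[a, b], [c, d]], 1 df:
    n * (ad - bc)^2 / ((a+b)(c+d)(a+c)(b+d))."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# First congruent happy/sad block: 21/28 happy -> happy, 25/28 sad -> sad
print(round(chi2_2x2(21, 7, 3, 25), 2))   # 23.62
# Second testing: 24/30 happy -> happy, 23/30 sad -> sad
print(round(chi2_2x2(24, 6, 7, 23), 2))   # 19.29 (reported: 19.28)
```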
When presented with non-congruent angry/fearful videos and happy/sad labels (Table 4, top half) there was a clear majority of sad responses, but without any relation to the video that was shown (χ²(1) = 0.00, NS). The majority of sad responses presumably comes from an association of the negative emotion in both the angry and fearful videos with the sad label. Using the non-congruent happy/sad videos and angry/fearful labels (Table 4, bottom half), the relative frequencies of the two response labels show very little relation to the presented videos (χ²(1) = 1.11, NS). With non-congruent labels, there was thus no systematic link between choice of response labels and the presented stimuli.
Event-related brain potentials to facial expressions: The subject gave 92.8% correct responses in the good visual field and 51.4% in the blind visual field when discriminating the gender of static faces. This latter result is compatible with his difficulty in discriminating static faces in his blind field (see above).

Figure 1 shows grand-average visual ERPs for happy and fearful faces together at the Oz site for left visual field and right visual field presentations. Visual stimulation in the normal visual hemifield gave a first positive deflection peaking at 148.62 ms (amplitude 4.83 μV), followed by a subsequent negative visual component (latency 240.02 ms; amplitude -4.50 μV). Visual stimulation in the blind visual hemifield yielded a similar occipital positive component, delayed in time (164.04 ms) and slightly reduced in amplitude (4.44 μV). Moreover, a subsequent negative component was also seen (latency 276 ms; amplitude -1.50 μV) when GY was stimulated with faces in the blind visual hemifield.
The present electrophysiological data clearly show that early visually evoked activity can be found in ventro-lateral extrastriate cortex when stimuli are presented to the blind hemifield of a hemianopic subject. The first positive activity is entirely compatible (by latency and topography) with the P1 component generated in lateral extrastriate areas, near the border of Brodmann areas 18 and 19 [15-17]. The second negative activity is compatible with an N1 component generated in the occipito-parietal and occipito-temporal cortex [16]. It has been suggested that the P1 component reflects processing in
Table 4. GY's labeling of the videos with congruent and incongruent labels

Video     Response
Angry/fearful videos
          Happy   Sad
Angry     24      36
Fearful   24      36

Happy/sad videos
          Fear    Angry
Happy     33      27
Sad       32      28
[Figure 1: two ERP waveform panels; x-axis -100 to 924 ms, y-axis -8.00 to 8.00 μV.]
FIG. 1. Grand-average visual event-related potentials (VERPs) at the Oz site for happy and fearful faces together. The upper part of the figure shows VERPs for presentations in the good visual hemifield, the lower part for presentations in the blind visual hemifield.

References
- A subcortical pathway to the right amygdala mediating "unseen" fear
- Measuring Facial Movement
- Blindsight: A Case Study and Implications
- Neural mechanisms of visual selective attention
- Identification of early visual evoked potential generators by retinotopic and topographic analyses
Without further research it cannot be concluded just which specific features of the facial stimuli were critical for generating the pattern of ERPs recorded here, but the results show that such stimuli presented in the blind visual hemifield of a hemianopic subject activate the ventral visual pathway via anatomical routes that bypass V1.