
Showing papers by "Claude Alain" published in 2007


Journal ArticleDOI
TL;DR: A neural model of face processing is proposed in which face- and eye-selective neurons situated in the superior temporal sulcus region of the human brain respond differently to the face configuration and to the eyes depending on the face context.
Abstract: Unlike most other objects that are processed analytically, faces are processed configurally. This configural processing is reflected early in visual processing following face inversion and contrast reversal, as an increase in the N170 amplitude, a scalp-recorded event-related potential. Here, we show that these face-specific effects are mediated by the eye region. That is, they occurred when the eyes were present but not when the eyes were removed from the face. The N170 recorded to inverted and negative faces likely reflects the processing of the eyes. We propose a neural model of face processing in which face- and eye-selective neurons situated in the superior temporal sulcus region of the human brain respond differently to the face configuration and to the eyes depending on the face context. This dynamic response modulation accounts for the N170 variations reported in the literature. The eyes may be central to what makes faces so special.

249 citations


Journal ArticleDOI
TL;DR: This article selectively reviews psychophysical and computational studies of streaming and comprehensively reviews more recent neurophysiological studies that have provided important insights into the mechanisms of streaming.
Abstract: Auditory stream segregation (or streaming) is a phenomenon in which 2 or more repeating sounds differing in at least 1 acoustic attribute are perceived as 2 or more separate sound sources (i.e., streams). This article selectively reviews psychophysical and computational studies of streaming and comprehensively reviews more recent neurophysiological studies that have provided important insights into the mechanisms of streaming. On the basis of these studies, segregation of sounds is likely to occur beginning in the auditory periphery and continuing at least to primary auditory cortex for simple cues such as pure-tone frequency but at stages as high as secondary auditory cortex for more complex cues such as periodicity pitch. Attention-dependent and perception-dependent processes are likely to take place in primary or secondary auditory cortex and may also involve higher level areas outside of auditory cortex. Topographic maps of acoustic attributes, stimulus-specific suppression, and competition between representations are among the neurophysiological mechanisms that likely contribute to streaming. A framework for future research is proposed.

212 citations


Journal ArticleDOI
TL;DR: Together, these studies suggest that the primary auditory cortex and the planum temporale play an important role in concurrent sound perception, and reveal a link between thalamo-cortical activation and the successful separation and identification of speech sounds presented simultaneously.

121 citations


Journal ArticleDOI
TL;DR: The results reveal that inharmonicity is rapidly and automatically registered in all three age groups but that the perception of concurrent sounds declines with age; these age differences in auditory cortical activity were associated with a reduced likelihood of hearing two sounds as a function of mistuning.
Abstract: Deficits in parsing concurrent auditory events are believed to contribute to older adults' difficulties in understanding speech in adverse listening conditions (e.g., cocktail party). To explore the level at which aging impairs sound segregation, we measured auditory evoked fields (AEFs) using magnetoencephalography while young, middle-aged, and older adults were presented with complex sounds that either had all of their harmonics in tune or had the third harmonic mistuned by 4 or 16% of its original value. During the recording, participants were asked to ignore the stimuli and watch a muted subtitled movie of their choice. For each participant, the AEFs were modeled with a pair of dipoles in the superior temporal plane, and the effects of age and mistuning were examined on the amplitude and latency of the resulting source waveforms. Mistuned stimuli generated an early positivity (60–100 ms), an object-related negativity (ORN) (140–180 ms) that overlapped the N1 and P2 waves, and a positive displacement that peaked at ∼230 ms (P230) after sound onset. The early mistuning-related enhancement was similar in all three age groups, whereas the subsequent modulations (ORN and P230) were reduced in older adults. These age differences in auditory cortical activity were associated with a reduced likelihood of hearing two sounds as a function of mistuning. The results reveal that inharmonicity is rapidly and automatically registered in all three age groups but that the perception of concurrent sounds declines with age.
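
The mistuned-harmonic stimuli described above are straightforward to synthesize. The sketch below builds a complex tone and shifts its third harmonic by a given percentage; the fundamental frequency, duration, and number of harmonics are illustrative assumptions, since the abstract specifies only the mistuning percentages (4% and 16%).

```python
import numpy as np

def harmonic_complex(f0, n_harmonics, mistune_pct, fs=44100, dur=0.5):
    """Complex tone whose 3rd harmonic is shifted by mistune_pct of its value."""
    t = np.arange(int(fs * dur)) / fs
    signal = np.zeros_like(t)
    for k in range(1, n_harmonics + 1):
        f = k * f0
        if k == 3:
            f *= 1 + mistune_pct / 100.0  # 4% or 16% shift, as in the study
        signal += np.sin(2 * np.pi * f * t)
    return signal / n_harmonics  # scale down to avoid clipping

# Hypothetical parameters: a 220 Hz fundamental with 12 harmonics.
in_tune  = harmonic_complex(220, 12, 0)   # all harmonics in tune
mistuned = harmonic_complex(220, 12, 16)  # 3rd harmonic at 660 * 1.16 = 765.6 Hz
```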

73 citations


Journal ArticleDOI
TL;DR: It is suggested that head orientation and gaze direction discrimination occur regardless of task demands, interact at the decision-making level, and may reflect the outcome of gaze processing.

60 citations


Journal ArticleDOI
TL;DR: The P2 radial source amplitude is expressed preferentially in EEG, highlighting the importance of combining EEG with MEG where complex source configurations are suspected.
Abstract: Acoustic complexity of a stimulus has been shown to modulate the electromagnetic N1 (latency ∼110 ms) and P2 (latency ∼190 ms) auditory evoked responses. We compared the relative sensitivity of electroencephalography (EEG) and magnetoencephalography (MEG) to these neural correlates of sensation. Simultaneous EEG and MEG were recorded while participants listened to three variants of a piano tone. The piano stimuli differed in their number of harmonics: the fundamental frequency (f0) only, or f0 plus the first two or eight harmonics. The root mean square (RMS) of the amplitude of P2, but not N1, increased with the spectral complexity of the piano tones in EEG and MEG. The RMS increase for P2 was more prominent in EEG than MEG, suggesting that radial sources contribute importantly to the P2 in EEG but not MEG. Source analysis separating contributions from radial and tangential sources was conducted to test this hypothesis. Source waveforms revealed a significant increase in the P2 radial source amplitude in EEG with increased spectral complexity of the piano tones. The P2 of the tangential source waveforms also increased in amplitude with increased spectral complexity in EEG and MEG. The P2 auditory evoked response is thus represented by both tangential (sulci) and radial (gyri) activities. The radial contribution is expressed preferentially in EEG, highlighting the importance of combining EEG with MEG where complex source configurations are suspected.
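
The RMS measure used here is a reference-free summary of response strength: the root mean square of the evoked amplitude across all sensors, evaluated within a latency window around the component of interest. A minimal sketch follows, assuming a (channels × samples) averaged response; the array names and window bounds are illustrative, not taken from the paper.

```python
import numpy as np

def rms_amplitude(evoked, times, t_min, t_max):
    """Peak RMS-across-channels amplitude within [t_min, t_max] seconds.

    evoked: (n_channels, n_samples) averaged evoked response
    times:  (n_samples,) sample times in seconds
    """
    window = (times >= t_min) & (times <= t_max)
    rms = np.sqrt(np.mean(evoked[:, window] ** 2, axis=0))  # RMS over channels
    return rms.max()

# Hypothetical usage: a P2 window of roughly 150-250 ms after sound onset.
# p2_rms = rms_amplitude(eeg_evoked, times, 0.15, 0.25)
```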

59 citations


Journal ArticleDOI
TL;DR: This review examines the effects of hearing loss on the neural representation of sound and how cognitive factors and learning can help compensate for perceptual difficulties.
Abstract: The perception of complex acoustic signals such as speech and music depends on the interaction between peripheral and central auditory processing. As information travels from the cochlea to primary and associative auditory cortices, the incoming sound is subjected to increasingly detailed and refined analysis. These various levels of analysis are thought to include low-level automatic processes that detect, discriminate, and group sounds that are similar in physical attributes such as frequency, intensity, and location, as well as higher-level schema-driven processes that reflect listeners' experience and knowledge of the auditory environment. In this review, we describe studies that have used event-related brain potentials to investigate the processing of complex acoustic signals (e.g., speech, music). In particular, we examine the effects of hearing loss on the neural representation of sound and how cognitive factors and learning can help compensate for perceptual difficulties. The notion of auditory scene analysis is used as a conceptual framework for interpreting and studying the perception of sound.

50 citations


01 Jan 2007
TL;DR: It is concluded that age-related difficulties in separating competing speakers are unlikely to arise from deficits in streaming and might instead reflect less efficient concurrent sound segregation.
Abstract: Normal aging is accompanied by speech perception difficulties, especially in adverse listening situations such as a cocktail party. To assess whether such difficulties might be related to impairments in sequential auditory scene analysis, event-related brain potentials were recorded from normal-hearing young, middle-aged, and older adults during presentation of low (A) tones, high (B) tones, and silences (—) in repeating 3-tone triplets (ABA—). The likelihood of reporting hearing 2 streams increased as a function of the frequency difference between A and B tones (Δf) to the same extent for all 3 age groups and was paralleled by enhanced sensory-evoked responses over the frontocentral scalp regions. In all 3 age groups, there was also a progressive buildup in brain activity from the beginning to the end of the sequence of triplets, which was characterized by an enhanced positivity that peaked at about 200 ms after the onset of each ABA— triplet. Similar Δf- and buildup-related activity also occurred over the right temporal cortex, but only for young adults. We conclude that age-related difficulties in separating competing speakers are unlikely to arise from deficits in streaming and might instead reflect less efficient concurrent sound segregation.
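
The ABA— paradigm itself is easy to reproduce. The sketch below generates a repeating triplet sequence in which the B tone sits Δf semitones above the A tone; the tone frequency, durations, and gaps are illustrative assumptions, since the abstract specifies only the triplet structure and the Δf manipulation.

```python
import numpy as np

def aba_sequence(f_a, delta_semitones, n_triplets, fs=44100,
                 tone_dur=0.05, gap_dur=0.05):
    """Repeating ABA- triplets; '-' is a silent slot replacing a fourth tone."""
    f_b = f_a * 2 ** (delta_semitones / 12.0)  # B tone is delta-f above A
    t = np.arange(int(fs * tone_dur)) / fs
    tone = lambda f: np.sin(2 * np.pi * f * t)
    gap = np.zeros(int(fs * gap_dur))
    triplet = np.concatenate([
        tone(f_a), gap, tone(f_b), gap, tone(f_a), gap,
        np.zeros(int(fs * (tone_dur + gap_dur))),  # the silent '-' slot
    ])
    return np.tile(triplet, n_triplets)

# A small delta-f tends to be heard as one stream, a large delta-f as two.
seq = aba_sequence(500.0, 7, 20)  # 7-semitone delta-f favors hearing two streams
```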