
Showing papers by "Claude Alain published in 2011"


Journal ArticleDOI
TL;DR: The possibility is considered that some of the auditory spatial processing activity observed in the dorsal pathway may actually be understood as a form of action processing, in which the visual system is guided to a particular location of interest.

74 citations


Journal ArticleDOI
TL;DR: The results suggest that multisensory facilitation is associated with posterior parietal activity as early as 100 ms after stimulus onset; as participants are required to evaluate cross-modal stimuli based on their semantic category or their degree of congruence, multisensory processes extend into cingulate, temporal, and prefrontal cortices.
Abstract: Perceptual objects often comprise a visual and auditory signature that arrives simultaneously through distinct sensory channels, and cross-modal features are linked by virtue of being attributed to...

64 citations


Journal ArticleDOI
TL;DR: Comparing the modulations of the N170 ERP component to faces, eyes, and eyeless faces of humans, apes, cats, and dogs, presented upright and inverted, the data support the intuitive idea that eyes are what make animal head fronts look face-like, and that proficiency for the human species involves visual expertise for the human eyes.

62 citations


Journal ArticleDOI
TL;DR: In this paper, the authors measured fMRI signals during an n-back working memory task for sound identity or location, where stimuli selected randomly from three semantic categories (human, animal, and music) were presented at three possible virtual locations.

57 citations


Journal ArticleDOI
TL;DR: Performance improvement during an hour of auditory perceptual training is accompanied by rapid physiological changes associated with learning, distinct from changes related to task repetition.
Abstract: Performance improvement during an hour of auditory perceptual training is accompanied by rapid physiological changes. These changes may reflect learning or simply task repetition independent of learning. We assessed the contribution of learning and task repetition to changes in auditory evoked potentials during a difficult speech identification task and an easy tone identification task. We posited that only task repetition effects would occur in the tone task but that task repetition and learning would interact in the speech task. Speech identification improved with practice (increased sensitivity d' with a constant response bias β). This behavioral improvement coincided with a decrease in the amplitude of sensory evoked responses (N1, P2) and a decrease in the amplitude of a slow wave (peak=320 ms after onset) over the left frontal and parietal sites. Results show rapid physiological changes associated with learning, distinct from changes related to task repetition.
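The behavioral measures reported above (sensitivity d′ and response bias β) come from standard equal-variance signal detection theory. A minimal sketch of how they are computed from hit and false-alarm rates follows; the example rates are hypothetical, not taken from the study:

```python
from math import exp
from statistics import NormalDist

def dprime_beta(hit_rate: float, fa_rate: float) -> tuple[float, float]:
    """Compute sensitivity d' and response bias beta from hit and
    false-alarm rates under the equal-variance Gaussian model."""
    z = NormalDist().inv_cdf          # inverse of the standard normal CDF
    z_h, z_f = z(hit_rate), z(fa_rate)
    d_prime = z_h - z_f
    # beta is the likelihood ratio of the two Gaussians at the criterion.
    beta = exp((z_f ** 2 - z_h ** 2) / 2)
    return d_prime, beta

# Hypothetical rates: 80% hits, 20% false alarms (symmetric criterion).
d, b = dprime_beta(0.80, 0.20)
print(round(d, 3), round(b, 3))  # d' ≈ 1.683, beta = 1.0 (unbiased)
```

In the study's terms, practice increased d′ (better discrimination of the speech sounds) while β stayed constant (no shift in the listeners' willingness to respond).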

44 citations


Journal ArticleDOI
TL;DR: It is proposed that during auditory scene analysis, acoustic differences among the various sources are combined linearly to increase the perceptual distance between the co-occurring sound objects.
Abstract: In noisy social gatherings, listeners perceptually integrate sounds originating from one person’s voice (e.g., fundamental frequency (f0) and harmonics) at a particular location and segregate these from concurrent sounds of other talkers. Though increasing the spectral or the spatial distance between talkers promotes speech segregation, synergetic effects of spatial and spectral distances are less well understood. We studied how spectral and/or spatial distances between 2 simultaneously presented steady-state vowels contribute to perception and activation in auditory cortex using magnetoencephalography. Participants were more accurate in identifying both vowels when they differed in f0 and location than when they differed in a single cue only or when they shared the same f0 and location. The combined effect of f0 and location differences closely matched the sum of single effects. The improvement in concurrent vowel identification coincided with an object-related negativity that peaked at about 140 ms after vowel onset. The combined effect of f0 and location closely matched the sum of the single effects even though vowels with different f0, location, or both generated different time courses of neuromagnetic activity. We propose that during auditory scene analysis, acoustic differences among the various sources are combined linearly to increase the perceptual distance between the co-occurring sound objects.
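The additivity claim above (the combined effect of f0 and location matching the sum of the single-cue effects) can be illustrated with a small sketch; the accuracy values are purely hypothetical, not figures from the paper:

```python
# Hypothetical vowel-identification accuracies (proportions correct).
baseline = 0.55   # vowels share the same f0 and location
f0_only = 0.70    # vowels differ in f0 only
loc_only = 0.65   # vowels differ in location only
both = 0.80       # vowels differ in both f0 and location

f0_benefit = f0_only - baseline
loc_benefit = loc_only - baseline
combined_benefit = both - baseline

# Linear (additive) cue combination predicts that the combined benefit
# equals the sum of the single-cue benefits.
predicted = f0_benefit + loc_benefit
print(predicted, combined_benefit)  # 0.25 vs 0.25: consistent with additivity
```

A super- or sub-additive pattern (combined benefit larger or smaller than the sum) would instead suggest an interaction between the spectral and spatial cues.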

40 citations


Journal ArticleDOI
TL;DR: A beamformer spatial filter is applied to magnetoencephalography data recorded during an auditory paradigm that used inharmonicity to promote the formation of multiple auditory objects; the results suggest that these neural populations are distinct from those generating the long-latency evoked responses reflecting the detection of sound onset.

26 citations


Journal ArticleDOI
TL;DR: The results indicate that temporal attention can be deployed to a particular time, which facilitates short-term consolidation of the probe.
Abstract: Attentional blink (AB) refers to a phenomenon where the correct identification of a first target (i.e., target) impairs the processing of a second target (i.e., probe) nearby in time. In the present study, we investigate the influence of temporal attention on auditory AB by means of scalp-recorded event-related potentials. Participants were instructed to focus their attention on a particular time interval following the target (i.e., short, middle, or long temporal position) in order to detect the occurrence of the probe in a rapid series of distractor sounds. We found a large probe processing deficit when the probe occurred immediately after the target. This AB decreased as the time interval between the target and the probe increased and coincided with the generation of a positive wave at parietal sites (i.e., P3b). The P3b elicited by the probe peaked earlier when the probe occurred at the designated time than when it occurred at another position in time. The results indicate that temporal attention can be deployed to a particular time, which facilitates short-term consolidation of the probe.

24 citations


Journal ArticleDOI
TL;DR: In this article, the authors explored age differences in auditory perception by measuring fMRI adaptation of brain activity to repetitions of sound identity (what) and location (where), using meaningful environmental sounds.
Abstract: We explored age differences in auditory perception by measuring fMRI adaptation of brain activity to repetitions of sound identity (what) and location (where), using meaningful environmental sounds. In one condition, both sound identity and location were repeated allowing us to assess non-specific adaptation. In other conditions, only one feature was repeated (identity or location) to assess domain-specific adaptation. Both young and older adults showed comparable non-specific adaptation (identity and location) in bilateral temporal lobes, medial parietal cortex, and subcortical regions. However, older adults showed reduced domain-specific adaptation to location repetitions in a distributed set of regions, including frontal and parietal areas, and to identity repetition in anterior temporal cortex. We also re-analyzed data from a previously published 1-back fMRI study, in which participants responded to infrequent repetition of the identity or location of meaningful sounds. This analysis revealed age differences in domain-specific adaptation in a set of brain regions that overlapped substantially with those identified in the adaptation experiment. This converging evidence of reductions in the degree of auditory fMRI adaptation in older adults suggests that the processing of specific auditory “what” and “where” information is altered with age, which may influence cognitive functions that depend on this processing.

21 citations


Journal ArticleDOI
TL;DR: The study supports the notion of a division of labor in the auditory and visual pathways following both auditory and visual cues that signal identity or location response preparation to upcoming auditory or visual targets.
Abstract: The present study examined the modality specificity and spatio-temporal dynamics of "what" and "where" preparatory processes in anticipation of auditory and visual targets using ERPs and a cue-target paradigm. Participants were presented with an auditory (Experiment 1) or a visual (Experiment 2) cue that signaled them to attend to the identity or location of an upcoming auditory or visual target. In both experiments, participants responded faster to the location compared to the identity conditions. Multivariate spatio-temporal partial least square (ST-PLS) analysis of the scalp-recorded data revealed supramodal "where" preparatory processes between 300-600 msec and 600-1200 msec at central and posterior parietal electrode sites in anticipation of both auditory and visual targets. Furthermore, preparation for pitch processing was captured at modality-specific temporal regions between 300 and 700 msec, and preparation for shape processing was detected at occipital electrode sites between 700 and 1150 msec. The spatio-temporal patterns noted above were replicated when a visual cue signaled the upcoming response (Experiment 2). Pitch or shape preparation exhibited modality-dependent spatio-temporal patterns, whereas preparation for target localization was associated with larger amplitude deflections at multimodal, centro-parietal sites preceding both auditory and visual targets. Using a novel paradigm, the study supports the notion of a division of labor in the auditory and visual pathways following both auditory and visual cues that signal identity or location response preparation to upcoming auditory or visual targets.

12 citations


Journal ArticleDOI
TL;DR: Signal detection is examined in situations that promote either the fusion of tonal elements into a single sound object or the segregation of a mistuned element that "popped out" as a separate, individuated auditory object and yielded the perception of concurrent sound objects.
Abstract: Since the introduction of the concept of auditory scene analysis, there has been a paucity of work focusing on the theoretical explanation of how attention is allocated within a complex auditory scene. Here we examined signal detection in situations that promote either the fusion of tonal elements into a single sound object or the segregation of a mistuned element (i.e., harmonic) that "popped out" as a separate individuated auditory object and yielded the perception of concurrent sound objects. On each trial, participants indicated whether the incoming complex sound contained a brief gap or not. The gap (i.e., signal) was always inserted in the middle of one of the tonal elements. Our findings were consistent with an object-based account in which perception of two simultaneous auditory objects interfered with signal detection. This effect was observed for a wide range of gap durations and was greater when the mistuned harmonic was perceived as a separate object. These results suggest that attention may be initially shared among concurrent sound objects thereby reducing listeners' ability to process acoustic details belonging to a particular sound object. These findings provide new theoretical insight for our understanding of auditory attention and auditory scene analysis.