
Showing papers by Claude Alain published in 2015


Journal ArticleDOI
TL;DR: It is shown that musical training offsets declines in auditory brain processing that accompany normal aging in humans, preserving robust speech recognition late into life; this implies that the robust neuroplasticity conferred by musical training is not restricted by age and may serve as an effective means to bolster speech listening skills that decline across the lifespan.
Abstract: Musicianship in early life is associated with pervasive changes in brain function and enhanced speech-language skills. Whether these neuroplastic benefits extend to older individuals more susceptible to cognitive decline, and for whom plasticity is weaker, has yet to be established. Here, we show that musical training offsets declines in auditory brain processing that accompany normal aging in humans, preserving robust speech recognition late into life. We recorded both brainstem and cortical neuroelectric responses in older adults with and without modest musical training as they classified speech sounds along an acoustic–phonetic continuum. Results reveal higher temporal precision in speech-evoked responses at multiple levels of the auditory system in older musicians who were also better at differentiating phonetic categories. Older musicians also showed a closer correspondence between neural activity and perceptual performance. This suggests that musicianship strengthens brain-behavior coupling in the aging auditory system. Last, “neurometric” functions derived from unsupervised classification of neural activity established that early cortical responses could accurately predict listeners' psychometric speech identification and, more critically, that neurometric profiles were organized more categorically in older musicians. We propose that musicianship offsets age-related declines in speech listening by refining the hierarchical interplay between subcortical/cortical auditory brain representations, allowing more behaviorally relevant information to be carried within the neural code, and supplying more faithful templates to the brain mechanisms subserving phonetic computations. Our findings imply that the robust neuroplasticity conferred by musical training is not restricted by age and may serve as an effective means to bolster speech listening skills that decline across the lifespan.
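
As a rough illustration of the “neurometric” idea described in this abstract, the sketch below derives an identification function from unsupervised classification of simulated single-trial responses along a phonetic continuum. This is a minimal sketch under assumed data shapes and a two-cluster k-means step, not the authors' pipeline.

```python
# Hypothetical illustration of deriving a "neurometric" identification
# function from unsupervised classification of neural responses.
# Data shapes, continuum length, and the clustering step are assumptions.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Assumed data: trials x features (e.g., single-trial ERP amplitudes),
# recorded at each of 5 steps along a vowel continuum.
n_steps, n_trials, n_features = 5, 60, 8
neural = [rng.normal(loc=step / (n_steps - 1), scale=0.8,
                     size=(n_trials, n_features))
          for step in range(n_steps)]

# Unsupervised classification: cluster all trials into two putative
# phonetic categories without using the stimulus labels.
X = np.vstack(neural)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# Neurometric function: proportion of trials at each continuum step
# assigned to cluster 1 (relabelled so the function is increasing).
labels = labels.reshape(n_steps, n_trials)
neurometric = labels.mean(axis=1)
if neurometric[0] > neurometric[-1]:
    neurometric = 1.0 - neurometric

print("Neurometric identification function:", np.round(neurometric, 2))
# A more step-like profile would indicate sharper neural separation of
# phonetic categories, as reported here for older musicians.
```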

184 citations


Journal ArticleDOI
TL;DR: Three healthy, high-functioning adults are reported with the reverse pattern: lifelong severely deficient autobiographical memory (SDAM) despite otherwise preserved cognitive function. These individuals function normally in day-to-day life, even though their past is experienced in the absence of recollection.

95 citations


Journal ArticleDOI
TL;DR: Electroencephalography recorded from humans confirmed that task performance during attentional orienting was correlated with alpha/low-beta desynchronization (i.e., power suppression), and elucidated feature-general and feature-specific neural correlates of auditory attention to STM.
Abstract: Sounds are ephemeral. Thus, coherent auditory perception depends on “hearing” back in time: retrospectively attending that which was lost externally but preserved in short-term memory (STM). Current theories of auditory attention assume that sound features are integrated into a perceptual object, that multiple objects can coexist in STM, and that attention can be deployed to an object in STM. Recording electroencephalography from humans, we tested these assumptions, elucidating feature-general and feature-specific neural correlates of auditory attention to STM. Alpha/beta oscillations and frontal and posterior event-related potentials indexed feature-general top-down attentional control to one of several coexisting auditory representations in STM. Particularly, task performance during attentional orienting was correlated with alpha/low-beta desynchronization (i.e., power suppression). However, attention to one feature could occur without simultaneous processing of the second feature of the representation. Therefore, auditory attention to memory relies on both feature-specific and feature-general neural dynamics.
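
The sketch below illustrates how alpha/low-beta desynchronization (baseline-normalized power suppression) of the kind described above might be computed from epoched EEG. It is a minimal sketch under assumed parameters (an 8–18 Hz band, 250 Hz sampling rate, simulated epochs, a notional post-cue window), not the analysis used in the study.

```python
# Hypothetical sketch of quantifying alpha/low-beta desynchronization
# (baseline-normalized power suppression) in epoched EEG.
# Band limits, sampling rate, data shape, and windows are assumptions.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 250                                          # sampling rate (Hz), assumed
epochs = np.random.randn(40, 64, int(2.5 * fs))   # trials x channels x samples
t = np.arange(epochs.shape[-1]) / fs - 0.5        # epoch spans -0.5 to 2.0 s

def band_power(data, low, high, fs):
    """Instantaneous band power via band-pass filter + Hilbert envelope."""
    b, a = butter(4, [low / (fs / 2), high / (fs / 2)], btype="band")
    analytic = hilbert(filtfilt(b, a, data, axis=-1), axis=-1)
    return np.abs(analytic) ** 2

power = band_power(epochs, 8, 18, fs)             # alpha/low-beta band (assumed)

# Event-related desynchronization: percent power change from baseline.
baseline = power[..., t < 0].mean(axis=-1, keepdims=True)
erd = 100 * (power - baseline) / baseline         # negative values = suppression

# Average suppression in an assumed post-cue window; this is the kind of
# measure that was correlated with task performance.
cue_window = (t >= 0.5) & (t <= 1.5)
print("Mean ERD (%):", erd[..., cue_window].mean())
```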

38 citations


Journal ArticleDOI
TL;DR: In this paper, the amplitudes and latencies of the resulting source waveforms were examined as a function of sleep and the passage of time, showing that auditory learning involves a consolidation phase that occurs during the wake state and is followed by a sleep-dependent consolidation stage indexed by the P2m amplitude.

25 citations


Journal ArticleDOI
TL;DR: It is demonstrated that a primary cue for sound segregation, i.e., harmonicity, is encoded at the auditory nerve level within tens of milliseconds after the onset of sound and is maintained, largely untransformed, in phase-locked activity of the rostral brainstem.

24 citations


Journal ArticleDOI
TL;DR: In this paper, feature-specific gains in performance were demonstrated for groups of participants briefly trained to use either a spectral or a spatial difference between two vowels presented simultaneously in a vowel identification task.
Abstract: Behavioral improvement within the first hour of training is commonly explained as procedural learning (i.e., strategy changes resulting from task familiarization). However, it may additionally reflect a rapid adjustment of the perceptual and/or attentional system in a goal-directed task. In support of this latter hypothesis, we show feature-specific gains in performance for groups of participants briefly trained to use either a spectral or spatial difference between 2 vowels presented simultaneously during a vowel identification task. In both groups, the neuromagnetic activity measured during the vowel identification task following training revealed source activity in auditory cortices, prefrontal, inferior parietal, and motor areas. More importantly, the contrast between the 2 groups revealed a striking double dissociation in which listeners trained on spectral or spatial cues showed higher source activity in ventral (“what”) and dorsal (“where”) brain areas, respectively. These feature-specific effects indicate that brief training can implicitly bias top-down processing to a trained acoustic cue and induce a rapid recalibration of the ventral and dorsal auditory streams during speech segregation and identification.

14 citations


Journal ArticleDOI
TL;DR: Two of the most commonly used algorithms for BCG artifact removal (OBS and AAS) are compared based on the estimated signal-to-noise ratio (SNR) of auditory and visual evoked responses recorded during fMRI acquisition; the results suggest that the performance of the OBS algorithm can be significantly improved by choosing the optimum number of principal components.
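
For orientation, the sketch below illustrates the core idea behind OBS-style BCG artifact removal (subtracting the projection onto the first k principal components of heartbeat-locked artifact epochs) together with a crude evoked-response SNR of the sort one might use to compare cleaning settings. Epoch length, the value of k, and the SNR definition are assumptions, not the paper's implementation.

```python
# Hypothetical sketch of OBS-style BCG artifact removal for one EEG channel,
# plus a simple evoked-response SNR. The number of principal components k is
# the parameter whose choice the paper evaluates.
import numpy as np

def obs_clean(channel, r_peaks, half_win, k=3):
    """Subtract the heartbeat-locked artifact reconstructed from the first k
    principal components of the artifact epochs (the "optimal basis set")."""
    idx = np.arange(-half_win, half_win)
    valid = [p for p in r_peaks if half_win <= p < len(channel) - half_win]
    epochs = np.stack([channel[p + idx] for p in valid])
    epochs = epochs - epochs.mean(axis=1, keepdims=True)    # demean each epoch

    # PCA over artifact epochs -> optimal basis set (top k components).
    _, _, vt = np.linalg.svd(epochs - epochs.mean(axis=0), full_matrices=False)
    basis = vt[:k].T                                         # samples x k

    cleaned = channel.astype(float).copy()
    for p, ep in zip(valid, epochs):
        coeffs, *_ = np.linalg.lstsq(basis, ep, rcond=None)
        cleaned[p + idx] -= basis @ coeffs                   # subtract fitted artifact
    return cleaned

def evoked_snr(trials, t, signal_win=(0.08, 0.20), baseline_win=(-0.1, 0.0)):
    """Crude evoked-response SNR: RMS of the trial average in a post-stimulus
    window divided by RMS of the average in the pre-stimulus baseline."""
    avg = trials.mean(axis=0)
    sig = avg[(t >= signal_win[0]) & (t <= signal_win[1])]
    noise = avg[(t >= baseline_win[0]) & (t <= baseline_win[1])]
    return np.sqrt(np.mean(sig ** 2)) / np.sqrt(np.mean(noise ** 2))
```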

14 citations


Journal ArticleDOI
TL;DR: Albert Bregman's book Auditory Scene Analysis: The Perceptual Organization of Sound has had a tremendous impact on research in auditory neuroscience, a field with far-reaching societal implications for health and quality of life.
Abstract: Albert Bregman’s (1990) book Auditory Scene Analysis: The Perceptual Organization of Sound has had a tremendous impact on research in auditory neuroscience. Here, we outline some of the accomplishments. This review is not meant to be exhaustive, but rather aims to highlight milestones in the brief history of auditory neuroscience. The steady increase in neuroscience research following the book’s pivotal publication has advanced knowledge about how the brain forms representations of auditory objects. This research has far-reaching societal implications for health and quality of life. For instance, it helped us understand why some people experience difficulties understanding speech in noise, which in turn has led to the development of therapeutic interventions. Importantly, the book has acted as a catalyst, providing scientists with a common conceptual framework for research in such diverse fields as speech perception, music perception, neurophysiology, and computational neuroscience. This interdisciplinary approach to research in audition is one of this book’s legacies.

11 citations


Journal ArticleDOI
TL;DR: The results suggest that the limitation in detecting the gap is related to attentional processing, possibly divided attention induced by the concurrent sound objects, rather than deficits in preattentional sensory encoding.
Abstract: Detecting a brief silent interval (i.e., a gap) is more difficult when listeners perceive two concurrent sounds rather than one in a sound containing a mistuned harmonic in otherwise in-tune harmonics. This impairment in gap detection may reflect the interaction of low-level encoding or the division of attention between two sound objects, both of which could interfere with signal detection. To distinguish between these two alternatives, we compared ERPs during active and passive listening with complex harmonic tones that could include a gap, a mistuned harmonic, both features, or neither. During active listening, participants indicated whether they heard a gap irrespective of mistuning. During passive listening, participants watched a subtitled muted movie of their choice while the same sounds were presented. Gap detection was impaired when the complex sounds included a mistuned harmonic that popped out as a separate object. The ERP analysis revealed an early gap-related activity that was little affected by mistuning during the active or passive listening condition. However, during active listening, there was a marked decrease in the late positive wave that was thought to index attention and response-related processes. These results suggest that the limitation in detecting the gap is related to attentional processing, possibly divided attention induced by the concurrent sound objects, rather than deficits in preattentional sensory encoding.
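
A minimal sketch of the kind of ERP mean-amplitude measure that could quantify the late positive wave across conditions is given below; the time window, channel set, sampling rate, and condition labels are assumptions for illustration, not the study's exact analysis.

```python
# Hypothetical sketch: mean amplitude of the trial-averaged ERP in a late
# time window, compared across conditions. All parameters are assumed.
import numpy as np

fs = 500
t = np.arange(int(-0.2 * fs), int(0.8 * fs)) / fs     # epoch from -200 to 800 ms

def late_positive_amplitude(epochs, t, window=(0.4, 0.6)):
    """Mean amplitude of the trial-averaged ERP in a late time window."""
    erp = epochs.mean(axis=0)                         # trials -> channels x time
    mask = (t >= window[0]) & (t <= window[1])
    return erp[:, mask].mean()                        # mean over channels and window

# Simulated epochs (trials x channels x time) standing in for two
# active-listening conditions; real values would come from the EEG recordings.
conditions = {"gap_in_tune": np.random.randn(80, 32, len(t)),
              "gap_mistuned": np.random.randn(80, 32, len(t))}
amps = {name: late_positive_amplitude(ep, t) for name, ep in conditions.items()}
print(amps)
```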

6 citations


Journal ArticleDOI
TL;DR: Absolute pitch (AP) is the rare ability to identify or produce a specific pitch without a reference pitch and appears to be more prevalent in tone-language speakers than in non-tone-language speakers; here, the authors examine how AP and tone-language experience contribute to the processing and encoding of pitch.
Abstract: Absolute pitch (AP) is the rare ability to identify or produce a specific pitch without a reference pitch, which appears to be more prevalent in tone-language speakers than non-tone-language speakers. Numerous studies support a close relationship between AP, music, and language. Despite this relationship, the extent to which these factors contribute to the processing and encoding of pitch has not yet been investigated. Addressing this research question would provide insights into the relationship between music and language, as well as the mechanisms of AP. To this aim, we recruited AP musicians and non-AP musicians who were either tone-language (Mandarin and Cantonese) or non-tone language speakers. Participants completed a zero- and one-back working memory task using music and non-music (control) stimuli. In general, AP participants had better accuracy and faster reaction times than participants without AP. This effect remained even after controlling for the age at which participants began formal music lessons. We did not observe a performance advantage afforded by speaking a tone language, nor a cumulative advantage afforded by having AP and being a tone-language speaker.

5 citations


Journal ArticleDOI
TL;DR: The evidence suggesting that a speaker's voice constitutes an integral context cue in auditory memory is reviewed; voice effects can be found in implicit memory paradigms, and their presence appears to depend greatly on the task employed.

Journal ArticleDOI
TL;DR: This research revealed a negative correlation between brain activation and perceptual accuracy in speech motor regions (i.e., left ventral premotor cortex (PMv) and Broca’s area), suggesting a compensatory recruitment of the motor system during speech perception in noise.
Abstract: Background noise is detrimental to speech comprehension. The decline-compensation hypothesis posits that deficits in sensory processing regions caused by background noise can be counteracted by compensatory recruitment of more general cognitive areas. Here, we explored the role of the speech motor system as a compensatory mechanism when sensory representations in the auditory cortices are impoverished. Prior studies using functional magnetic resonance imaging (fMRI) have revealed an increase in prefrontal activity when peripheral and central auditory systems cannot effectively process speech sounds. Our research using event-related fMRI revealed a negative correlation between brain activation and perceptual accuracy in speech motor regions (i.e., left ventral premotor cortex (PMv) and Broca’s area), suggesting a compensatory recruitment of the motor system during speech perception in noise. Moreover, multi-voxel pattern analysis revealed effective phoneme categorization in the PMv and Broca’s area, even in adverse listening conditions. This is in sharp contrast with phoneme discriminability in auditory cortices and the left posterior superior temporal gyrus, which showed reliable phoneme classification only when the noise was extremely weak. Better discriminative activity in the speech motor system may compensate for the loss of specificity in the auditory system by forward sensorimotor mapping during adverse listening conditions.
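
The sketch below illustrates the general multi-voxel pattern analysis (MVPA) approach mentioned above: cross-validated classification of phoneme category from voxel patterns within a region of interest. The simulated data, the classifier, and the ROI label are assumptions rather than the authors' pipeline.

```python
# Hypothetical MVPA sketch: cross-validated decoding of phoneme category
# from trial-wise voxel patterns in an assumed ROI.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(1)

# Assumed data: one activation pattern per trial within an ROI
# (e.g., left PMv): trials x voxels, with phoneme labels (0/1 for two categories).
n_trials, n_voxels = 120, 300
X = rng.normal(size=(n_trials, n_voxels))
y = rng.integers(0, 2, size=n_trials)

clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
scores = cross_val_score(clf, X, y, cv=5)
print(f"ROI decoding accuracy: {scores.mean():.2f} (chance = 0.50)")
# Above-chance accuracy in motor ROIs under noisy listening conditions is the
# kind of evidence the abstract cites for the decline-compensation view.
```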

Journal ArticleDOI
TL;DR: This work examines the consequence of harmonic enhancement on listeners’ ability to detect a brief amplitude notch embedded in one of the harmonics after the period of mistuning, and suggests that attention was drawn to the enhanced harmonic, thereby easing the processing of sound features within that object.
Abstract: When the frequency of one harmonic, in a sound composed of many harmonics, is briefly mistuned and then returned to the ‘in-tune’ frequency and phase, observers report hearing this harmonic as a separate tone long after the brief period of mistuning – a phenomenon called harmonic enhancement. Here, we examined the consequence of harmonic enhancement on listeners’ ability to detect a brief amplitude notch embedded in one of the harmonics after the period of mistuning. When present, the notch was either on the enhanced harmonic or on a different harmonic. Detection was better on the enhanced harmonic than on a non-enhanced harmonic. This finding suggests that attention was drawn to the enhanced harmonic (which constituted a new sound object) thereby easing the processing of sound features (i.e., a notch) within that object. This is the first evidence of a functional consequence of the after-effect of transient mistuning on auditory perception. Moreover, the findings provide support for an attention-based explanation of the enhancement phenomenon.