
Showing papers by "Claude Alain published in 2017"


Journal ArticleDOI
TL;DR: The findings suggest that MCI is associated with poorer encoding and transfer of speech signals between functional levels of the auditory system and advance the pathophysiological understanding of cognitive aging by identifying subcortical deficits in auditory sensory processing mere milliseconds after sound onset and before the emergence of perceptual speech deficits.
Abstract: Mild cognitive impairment (MCI) is recognized as a transitional phase in the progression toward more severe forms of dementia and is an early precursor to Alzheimer's disease. Previous neuroimaging studies reveal that MCI is associated with aberrant sensory-perceptual processing in cortical brain regions subserving auditory and language function. However, whether the pathophysiology of MCI extends to speech processing before conscious awareness (brainstem) is unknown. Using a novel electrophysiological approach, we recorded both brainstem and cortical speech-evoked event-related potentials (ERPs) in older, hearing-matched human listeners who did and did not present with subtle cognitive impairment revealed through behavioral neuropsychological testing. We found that MCI was associated with changes in neural speech processing characterized by hypersensitivity (larger brainstem and cortical speech encoding) in MCI compared with controls, in the absence of any perceptual speech deficits. Group differences also interacted with age differentially across the auditory pathway; brainstem responses became larger and cortical ERPs smaller with advancing age. Multivariate classification revealed that dual brainstem-cortical speech activity correctly identified MCI listeners with 80% accuracy, suggesting its application as a biomarker of early cognitive decline. Brainstem responses were also a more robust predictor of individuals' MCI severity than cortical activity. Our findings suggest that MCI is associated with poorer encoding and transfer of speech signals between functional levels of the auditory system and advance the pathophysiological understanding of cognitive aging by identifying subcortical deficits in auditory sensory processing mere milliseconds (<10 ms) after sound onset and before the emergence of perceptual speech deficits.

SIGNIFICANCE STATEMENT: Mild cognitive impairment (MCI) is a precursor to dementia marked by declines in communication skills. Whether MCI pathophysiology extends below the cerebral cortex to affect speech processing before conscious awareness (brainstem) is unknown. By recording neuroelectric brain activity to speech from brainstem and cortex, we show that MCI hypersensitizes the normal encoding of speech information across the hearing brain. Deficient neural responses to speech (particularly those generated from the brainstem) predicted the presence of MCI with high accuracy and before behavioral deficits. Our findings advance the neurological understanding of MCI by identifying a subcortical biomarker in auditory-sensory processing before conscious awareness, which may be a precursor to declines in speech understanding.

74 citations


Book ChapterDOI
01 Jan 2017
TL;DR: This chapter found that older adults with hearing loss are at greater risk of developing cognitive impairments than peers with better hearing, and that older adults exhibit enhanced cognitive compensation, with performance on auditory tasks facilitated by top-down use of context and knowledge.
Abstract: Successful communication and navigation in cocktail party situations depend on complex interactions among an individual’s sensory, cognitive, and social abilities. Older adults may function well in relatively ideal communication situations, but they are notorious for their difficulties understanding speech in noisy situations such as cocktail parties. However, as healthy adults age, declines in auditory and cognitive processing may be offset by compensatory gains in the ability to use context and knowledge. From a practical perspective, it is important to consider the aging auditory system in multitalker situations because these are among the most challenging situations for older adults. From a theoretical perspective, studying age-related changes in auditory processing provides a special window into the relative contributions of, and interactions among, sensory, cognitive, and social abilities. In the acoustical wild, younger listeners typically function better than older listeners. Experimental evidence indicates that age-related differences in simple measures such as word recognition in quiet or noise are largely due to the bottom-up effects of age-related auditory declines. These differences can often be eliminated when auditory input is adjusted to equate the performance levels of listeners on baseline measures in quiet or noise. Notably, older adults exhibit enhanced cognitive compensation, with performance on auditory tasks facilitated by top-down use of context and knowledge. Nevertheless, age-related differences can persist when tasks are more cognitively demanding and involve discourse comprehension, memory, and attention. At an extreme, older adults with hearing loss are at greater risk of developing cognitive impairments than peers with better hearing.

58 citations


Journal ArticleDOI
TL;DR: It is demonstrated that the proximity of formants between adjacent vowels is an important factor in the perceptual organization of speech, and a widely distributed neural network supporting perceptual grouping of speech sounds is revealed.
Abstract: The neural substrates by which speech sounds are perceptually segregated into distinct streams are poorly understood. Here, we recorded high-density scalp event-related potentials (ERPs) while participants were presented with a cyclic pattern of three vowel sounds (/ee/-/ae/-/ee/). Each trial consisted of an adaptation sequence, which could have either a small, intermediate, or large difference in first formant (Δf1), as well as a test sequence, in which Δf1 was always intermediate. For the adaptation sequence, participants tended to hear two streams (“streaming”) when Δf1 was intermediate or large compared to when it was small. For the test sequence, in which Δf1 was always intermediate, the pattern was usually reversed: participants more often heard a single stream as the Δf1 of the preceding adaptation sequence increased. During the adaptation sequence, Δf1-related brain activity was found between 100 and 250 ms after the /ae/ vowel over fronto-central and left temporal areas, consistent with generation in auditory cortex. For the test sequence, the prior stimulus modulated ERP amplitude between 20 and 150 ms over the left fronto-central scalp region. Our results demonstrate that the proximity of formants between adjacent vowels is an important factor in the perceptual organization of speech, and reveal a widely distributed neural network supporting perceptual grouping of speech sounds.

26 citations


Journal ArticleDOI
TL;DR: This paper discusses the importance of steady-state brain oscillations for brain connectivity and cognition, and argues that specific sound frequencies, by virtue of their vibratory nature, can serve as a means of brain stimulation through auditory and vibrotactile channels and thus contribute to the regulation of oscillatory activity.
Abstract: This paper addresses the importance of steady-state brain oscillations for brain connectivity and cognition. Given that a healthy brain maintains particular levels of oscillatory activity, it argues that disturbances or dysrhythmias of this oscillatory activity can be implicated in common health conditions including Alzheimer’s disease, Parkinson’s disease, pain, and depression. Literature is reviewed showing that electric stimulation of the brain can contribute to the regulation of neural oscillatory activity and the alleviation of related health conditions. It is then argued that specific sound frequencies, by virtue of their vibratory nature, can serve as a means of brain stimulation through auditory and vibrotactile channels and as such can contribute to the regulation of oscillatory activity. The frequencies employed and found effective in electric stimulation are reviewed with the intent of guiding the selection of sound frequencies for vibroacoustic stimulation in the treatment of Alzheimer’s disease, Parkinson’s disease, pain, and depression.

20 citations


Journal ArticleDOI
TL;DR: Findings indicate that memory for audio clips is acquired quickly and is surprisingly robust; both implicit and explicit LTM for the location of a faint target tone modulated auditory spatial attention.
Abstract: Long-term memory (LTM) has been shown to bias attention to a previously learned visual target location. Here, we examined whether memory-predicted spatial location can facilitate the detection of a faint pure-tone target embedded in real-world audio clips (e.g., the soundtrack of a restaurant). During an initial familiarization task, participants heard audio clips, some of which included a lateralized target (p = 50%). On each trial, participants indicated whether the target was presented from the left, from the right, or was absent. Following a 1 hr retention interval, participants were presented with the same audio clips, which now all included a target. In Experiment 1, participants showed memory-based gains in response time and d'. Experiment 2 showed that temporal expectations modulate attention, with greater memory-guided attention effects on performance when the temporal context was reinstated from learning (i.e., when the timing of the target within audio clips was not changed from the initially learned timing). Experiment 3 showed that while conscious recall of target locations was modulated by exposure to target-context associations during learning (i.e., better recall with a higher number of learning blocks), the influence of LTM associations on spatial attention was not reduced (i.e., the number of learning blocks did not affect memory-guided attention). Both Experiments 2 and 3 showed gains in performance related to target-context associations, even for associations that were not explicitly remembered. Together, these findings indicate that memory for audio clips is acquired quickly and is surprisingly robust; both implicit and explicit LTM for the location of a faint target tone modulated auditory spatial attention.

13 citations


Journal ArticleDOI
TL;DR: The timing and distribution of neuroelectric activity are consistent with two distinct neural processes, with suppression of task-irrelevant information occurring before conflict resolution; this new paradigm may prove useful in clinical populations to assess impairments in filtering out task-irrelevant information and/or resolving conflicting information.
Abstract: In everyday situations, auditory selective attention requires listeners to suppress task-irrelevant stimuli and to resolve conflicting information in order to make appropriate goal-directed decisions. Traditionally, these two processes (i.e., distractor suppression and conflict resolution) have been studied separately. In the present study, we measured neuroelectric activity while participants performed a new paradigm in which both processes are quantified. In separate blocks of trials, participants indicated whether two sequential tones shared the same pitch or location, depending on the block's instruction. For the distraction measure, a positive component peaking at ~250 ms was found (a "distraction positivity"). Brain electrical source analysis of this component suggested different generators when listeners attended to frequency versus location, with the distraction by location more posterior than the distraction by frequency, providing support for the dual-pathway theory. For the conflict resolution measure, a negative frontocentral component (270-450 ms) was found, which showed similarities with those of prior studies on auditory and visual conflict resolution tasks. The timing and distribution are consistent with two distinct neural processes, with suppression of task-irrelevant information occurring before conflict resolution. This new paradigm may prove useful in clinical populations to assess impairments in filtering out task-irrelevant information and/or resolving conflicting information.

10 citations


Journal ArticleDOI
TL;DR: Older adults often experience some form of reduced sensory input in addition to other age-related changes in perceptual and cognitive function, raising the question of what happens to auditory function when our cognitive systems begin to fade.
Abstract: The Hearing Journal, May 2017. “I can hear you, but I can’t understand you.” This is one of the most common reasons for older adults to consult an audiologist. Speech comprehension problems, especially in noisy environments, become increasingly common as we get older. Unfortunately, the cause(s) of reduced communication skills are not always easily determined. Contributing factors may include difficulty registering sound by the ear itself, as well as reduced information as an auditory signal is transmitted from the ear to brain areas that process and decode speech sounds. Understanding how aging affects listening skills is difficult because older adults often experience some form of reduced sensory input (i.e., presbycusis) in addition to other age-related changes in perceptual and cognitive functions. But independent of hearing loss, what happens to auditory function when our cognitive systems begin to fade?

1 citation