
Showing papers in "Ear and Hearing in 2016"


Journal ArticleDOI
TL;DR: This work adapted Kahneman's seminal (1973) Capacity Model of Attention to listening and proposed a heuristically useful Framework for Understanding Effortful Listening (FUEL), which incorporates the well-known relationship between cognitive demand and the supply of cognitive capacity that is the foundation of cognitive theories of attention.
Abstract: The Fifth Eriksholm Workshop on “Hearing Impairment and Cognitive Energy” was convened to develop a consensus among interdisciplinary experts about what is known on the topic, gaps in knowledge, the use of terminology, priorities for future research, and implications for practice. The general term cognitive energy was chosen to facilitate the broadest possible discussion of the topic. It goes back to Titchener (1908) who described the effects of attention on perception; he used the term psychic energy for the notion that limited mental resources can be flexibly allocated among perceptual and mental activities. The workshop focused on three main areas: (1) theories, models, concepts, definitions, and frameworks; (2) methods and measures; and (3) knowledge translation. We defined effort as the deliberate allocation of mental resources to overcome obstacles in goal pursuit when carrying out a task, with listening effort applying more specifically when tasks involve listening. We adapted Kahneman’s seminal (1973) Capacity Model of Attention to listening and proposed a heuristically useful Framework for Understanding Effortful Listening (FUEL). Our FUEL incorporates the well-known relationship between cognitive demand and the supply of cognitive capacity that is the foundation of cognitive theories of attention. Our FUEL also incorporates a motivation dimension based on complementary theories of motivational intensity, adaptive gain control, and optimal performance, fatigue, and pleasure. Using a three-dimensional illustration, we highlight how listening effort depends not only on hearing difficulties and task demands but also on the listener’s motivation to expend mental effort in the challenging situations of everyday life.

686 citations


Journal ArticleDOI
TL;DR: The data suggest that listeners with normal hearing and low working memory capacity are less able to adapt to distortion of speech signals caused by background noise, which requires the allocation of more processing resources to earlier processing stages.
Abstract: Objectives This study aimed to determine if younger and older listeners with normal hearing who differ on working memory span perform differently on speech recognition tests in noise. Older adults typically exhibit poorer speech recognition scores in noise than younger adults, which is attributed primarily to poorer hearing sensitivity and more limited working memory capacity in older than younger adults. Previous studies typically tested older listeners with poorer hearing sensitivity and shorter working memory spans than younger listeners, making it difficult to discern the importance of working memory capacity on speech recognition. This investigation controlled for hearing sensitivity and compared speech recognition performance in noise by younger and older listeners who were subdivided into high and low working memory groups. Performance patterns were compared for different speech materials to assess whether or not the effect of working memory capacity varies with the demands of the specific speech test. The authors hypothesized that (1) normal-hearing listeners with low working memory span would exhibit poorer speech recognition performance in noise than those with high working memory span; (2) older listeners with normal hearing would show poorer speech recognition scores than younger listeners with normal hearing, when the two age groups were matched for working memory span; and (3) an interaction between age and working memory would be observed for speech materials that provide contextual cues. Design Twenty-eight older (61 to 75 years) and 25 younger (18 to 25 years) normal-hearing listeners were assigned to groups based on age and working memory status. Northwestern University Auditory Test No. 6 words and Institute of Electrical and Electronics Engineers sentences were presented in noise using an adaptive procedure to measure the signal-to-noise ratio corresponding to 50% correct performance. 
Cognitive ability was evaluated with two tests of working memory (Listening Span Test and Reading Span Test) and two tests of processing speed (Paced Auditory Serial Addition Test and The Letter Digit Substitution Test). Results Significant effects of age and working memory capacity were observed on the speech recognition measures in noise, but these effects were mediated somewhat by the speech signal. Specifically, main effects of age and working memory were revealed for both words and sentences, but the interaction between the two was significant for sentences only. For these materials, effects of age were observed for listeners in the low working memory groups only. Although all cognitive measures were significantly correlated with speech recognition in noise, working memory span was the most important variable accounting for speech recognition performance. Conclusions The results indicate that older adults with high working memory capacity are able to capitalize on contextual cues and perform as well as young listeners with high working memory capacity for sentence recognition. The data also suggest that listeners with normal hearing and low working memory capacity are less able to adapt to distortion of speech signals caused by background noise, which requires the allocation of more processing resources to earlier processing stages. These results indicate that both younger and older adults with low working memory capacity and normal hearing are at a disadvantage for recognizing speech in noise.

109 citations


Journal ArticleDOI
TL;DR: In this paper, a review of the literature on hearing loss-related fatigue is presented, with a focus on core constructs, consequences, and methods for assessing fatigue and related constructs.
Abstract: Fatigue is common in individuals with a variety of chronic health conditions and can have significant negative effects on quality of life. Although limited in scope, recent work suggests persons with hearing loss may be at increased risk for fatigue, in part due to effortful listening that is exacerbated by their hearing impairment. However, the mechanisms responsible for hearing loss-related fatigue, and the efficacy of audiologic interventions for reducing fatigue, remain unclear. To improve our understanding of hearing loss-related fatigue, it is important as a field to develop a common conceptual understanding of this construct. In this article, the broader fatigue literature is reviewed to identify and describe core constructs, consequences, and methods for assessing fatigue and related constructs. Finally, the current knowledge linking hearing loss and fatigue is described and may be summarized as follows: Hearing impairment may increase the risk of subjective fatigue and vigor deficits; adults with hearing loss require more time to recover from fatigue after work and have more work absences; sustained, effortful listening can be fatiguing; optimal methods for eliciting and measuring fatigue in persons with hearing loss remain unclear and may vary with listening condition; and amplification may minimize decrements in cognitive processing speed during sustained effortful listening. Future research is needed to develop reliable measurement methods to quantify hearing loss-related fatigue, explore factors responsible for modulating fatigue in people with hearing loss, and identify and evaluate potential interventions for reducing hearing loss-related fatigue.

108 citations


Journal ArticleDOI
TL;DR: The RLOs were shown to be beneficial to first-time hearing aid users across a range of quantitative and qualitative measures and could be used to supplement clinical rehabilitation practice.
Abstract: Objectives: The aims of this study were to (1) develop a series of short interactive videos (or reusable learning objects [RLOs]) covering a broad range of practical and psychosocial issues relevant to auditory rehabilitation for first-time hearing aid users; (2) establish the accessibility, take-up, acceptability and adherence of the RLOs; and (3) assess the benefits and cost-effectiveness of the RLOs. Design: The study was a single-center, prospective, randomized controlled trial with two arms. The intervention group (RLO+, n = 103) received the RLOs plus standard clinical service including hearing aid(s) and counseling, and the waitlist control group (RLO−, n = 100) received standard clinical service only. The effectiveness of the RLOs was assessed 6 weeks after hearing aid fitting. Seven RLOs (total duration 1 hr) were developed using a participatory, community of practice approach involving hearing aid users and audiologists. RLOs included video clips, illustrations, animations, photos, sounds and testimonials, and all were subtitled. RLOs were delivered through DVD for TV (50.6%) and PC (15.2%), or via the internet (32.9%). Results: RLO take-up was 78%. Adherence overall was at least 67%, and 97% in those who attended the 6-week follow-up. Half the participants watched the RLOs two or more times, suggesting self-management of their hearing loss, hearing aids, and communication. The RLOs were rated as highly useful and the majority of participants agreed the RLOs were enjoyable, improved their confidence and were preferable to written information. Postfitting, there was no significant between-group difference in the primary outcome measure, overall hearing aid use. However, there was significantly greater hearing aid use in the RLO+ group for suboptimal users. Furthermore, the RLO+ group had significantly better knowledge of practical and psychosocial issues, and significantly better practical hearing aid skills than the RLO− group.
Conclusions: The RLOs were shown to be beneficial to first-time hearing aid users across a range of quantitative and qualitative measures. This study provides evidence to suggest that the RLOs may provide valuable learning and educational support for first-time hearing aid users and could be used to supplement clinical rehabilitation practice.

107 citations


Journal ArticleDOI
TL;DR: The authors propose that the behavioral economics or neuroeconomics of listening can provide a conceptual and experimental framework for understanding effort and fatigue that may have clinical significance.
Abstract: This review examines findings from functional neuroimaging studies of speech recognition in noise to provide a neural systems level explanation for the effort and fatigue that can be experienced during speech recognition in challenging listening conditions. Neuroimaging studies of speech recognition consistently demonstrate that challenging listening conditions engage neural systems that are used to monitor and optimize performance across a wide range of tasks. These systems appear to improve speech recognition in younger and older adults, but sustained engagement of these systems also appears to produce an experience of effort and fatigue that may affect the value of communication. When considered in the broader context of the neuroimaging and decision making literature, the speech recognition findings from functional imaging studies indicate that the expected value, or expected level of speech recognition given the difficulty of listening conditions, should be considered when measuring effort and fatigue. The authors propose that the behavioral economics or neuroeconomics of listening can provide a conceptual and experimental framework for understanding effort and fatigue that may have clinical significance.

104 citations


Journal ArticleDOI
TL;DR: A review of studies that used measures of electrodermal activity (skin conductance) and heart rate variability (HRV) to index sympathetic and parasympathetic activity during auditory tasks indicates that both measures are sensitive to increased task demand.
Abstract: Cognitive and emotional challenges may elicit a physiological stress response that can include arousal of the sympathetic nervous system (fight or flight response) and withdrawal of the parasympathetic nervous system (responsible for recovery and rest). This article reviews studies that have used measures of electrodermal activity (skin conductance) and heart rate variability (HRV) to index sympathetic and parasympathetic activity during auditory tasks. In addition, the authors present results from a new study with normal-hearing listeners examining the effects of speaking rate on changes in skin conductance and high-frequency HRV (HF-HRV). Sentence repetition accuracy for normal and fast speaking rates was measured in noise using signal to noise ratios that were adjusted to approximate 80% accuracy (+3 dB fast rate; 0 dB normal rate) while monitoring skin conductance and HF-HRV activity. A significant increase in skin conductance level (reflecting sympathetic nervous system arousal) and a decrease in HF-HRV (reflecting parasympathetic nervous system withdrawal) were observed with an increase in speaking rate, indicating sensitivity of both measures to increased task demand. Changes in psychophysiological reactivity with increased auditory task demand may reflect differences in listening effort, but other person-related factors such as motivation and stress may also play a role. Further research is needed to understand how psychophysiological activity during listening tasks is influenced by the acoustic characteristics of stimuli, task demands, and by the characteristics and emotional responses of the individual.

92 citations


Journal ArticleDOI
TL;DR: Smartphone hearing screening using the hearScreen™ application is accurate and time efficient, with smartphone screening demonstrating equivalent sensitivity and specificity to conventional screening audiometry.
Abstract: Objectives The study aimed to determine the validity of a smartphone hearing screening technology (hearScreen™) compared with conventional screening audiometry in terms of (1) sensitivity and specificity, (2) referral rate, and (3) test time. Design One thousand and seventy school-age children in grades 1 to 3 (mean age 8 ± 1.1 years) were recruited from five public schools. Children were screened twice, once using conventional audiometry and once with the smartphone hearing screening. Screening was conducted in a counterbalanced sequence, alternating the initial screen between conventional and smartphone hearing screening. Results No statistically significant difference in performance between techniques was noted, with smartphone screening demonstrating equivalent sensitivity (75.0%) and specificity (98.5%) to conventional screening audiometry. Although referral rates were lower with smartphone screening (3.2 vs. 4.6%), the difference was not statistically significant (p > 0.05). Smartphone screening (hearScreen™) was 12.3% faster than conventional screening. Conclusion Smartphone hearing screening using the hearScreen™ application is accurate and time efficient.

83 citations
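The sensitivity, specificity, and referral-rate figures above come from a standard 2×2 comparison of screening outcomes against the reference test. A minimal Python sketch of the computation; the counts below are hypothetical placeholders, not the study's raw data:

```python
def screening_metrics(tp, fn, tn, fp):
    """Accuracy metrics for a hearing screen versus a reference test.

    tp/fn: children with hearing loss who did / did not refer on the screen.
    tn/fp: children without hearing loss who passed / referred on the screen.
    """
    sensitivity = tp / (tp + fn)                      # detected among true losses
    specificity = tn / (tn + fp)                      # passed among true normals
    referral_rate = (tp + fp) / (tp + fn + tn + fp)   # fraction of all children referred
    return sensitivity, specificity, referral_rate


# Hypothetical counts chosen only to illustrate the arithmetic.
sens, spec, refer = screening_metrics(tp=3, fn=1, tn=1000, fp=15)
```

Note that with the low prevalence typical of school screening, specificity dominates the referral rate, which is why a small specificity difference can matter clinically.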


Journal ArticleDOI
TL;DR: In this paper, the authors characterized the psychometric functions that describe task performance in dual-task listening effort measures as a function of signal to noise ratio (SNR), and the results indicated that the RT curves had a peak shape.
Abstract: OBJECTIVES The purpose of the study was to characterize the psychometric functions that describe task performance in dual-task listening effort measures as a function of signal to noise ratio (SNR). DESIGN Younger adults with normal hearing (YNH, n = 24; experiment 1) and older adults with hearing impairment (n = 24; experiment 2) were recruited. Dual-task paradigms wherein the participants performed a primary speech recognition task simultaneously with a secondary task were conducted at a wide range of SNRs. Two different secondary tasks were used: an easy task (i.e., a simple visual reaction-time task) and a hard task (i.e., the incongruent Stroop test). The reaction time (RT) quantified the performance of the secondary task. RESULTS For both participant groups and for both easy and hard secondary tasks, the curves that described the RT as a function of SNR were peak shaped. The RT increased as SNR changed from favorable to intermediate SNRs, and then decreased as SNRs moved from intermediate to unfavorable SNRs. The RT reached its peak (longest time) at the SNRs at which the participants could understand 30 to 50% of the speech. In experiments 1 and 2, the dual-task trials that had the same SNR were conducted in one block. To determine if the peak shape of the RT curves was specific to the blocked SNR presentation order used in these experiments, YNH participants were recruited (n = 25; experiment 3) and dual-task measures, wherein the SNR was varied from trial to trial (i.e., nonblocked), were conducted. The results indicated that, similar to the first two experiments, the RT curves had a peak shape. CONCLUSIONS Secondary task performance was poorer at the intermediate SNRs than at the favorable and unfavorable SNRs. This pattern was observed for both the YNH participants and the older adults with hearing impairment and was not affected by either task type (easy or hard secondary task) or SNR presentation order (blocked or nonblocked).
The shorter RT at the unfavorable SNRs (speech intelligibility < 30%) possibly reflects that the participants experienced cognitive overload and/or disengaged themselves from the listening task. The implication of using the dual-task paradigm as a listening effort measure is discussed.

81 citations


Journal ArticleDOI
TL;DR: The goal of this article is to trace the evolution of models of working memory and cognitive resources from the early 20th century to today.
Abstract: The goal of this article is to trace the evolution of models of working memory and cognitive resources from the early 20th century to today. Linear flow models of information processing common in the 1960s and 1970s centered on the transfer of verbal information from a limited-capacity short-term memory store to long-term memory through rehearsal. Current conceptions see working memory as a dynamic system that includes both maintaining and manipulating information through a series of interactive components that include executive control and attentional resources. These models also reflect the evolution from an almost exclusive concentration on working memory for verbal materials to inclusion of a visual working memory component. Although differing in postulated mechanisms and emphasis, these evolving viewpoints all share the recognition that human information processing is a limited-capacity system with limits on the amount of information that can be attended to, remain activated in memory, and utilized at one time. These limitations take on special importance in spoken language comprehension, especially when the stimuli have complex linguistic structures or listening effort is increased by poor acoustic quality or reduced hearing acuity.

77 citations


Journal ArticleDOI
TL;DR: Devices that re-route sounds from an ear with a severe-to-profound hearing loss to an ear with minimal hearing loss may improve speech perception in noise when signals of interest are located toward the impaired ear; however, the same devices may also degrade speech perception because all signals, including noise, are re-routed indiscriminately.
Abstract: Objectives: A systematic review of the literature and meta-analysis was conducted to assess the nature and quality of the evidence for the use of hearing instruments in adults with a unilateral severe-to-profound sensorineural hearing loss. Design: The PubMed, EMBASE, MEDLINE, Cochrane, CINAHL and DARE databases were searched with no restrictions on language. The search included articles from the start of each database until 11th February 2015. Studies were included that: (a) assessed the impact of any form of hearing instrument, including devices that re-route signals between the ears or restore aspects of hearing to a deaf ear, in adults with a sensorineural severe-to-profound loss in one ear and normal or near-normal hearing in the other ear; (b) compared different devices or compared a device to placebo or the unaided condition; (c) measured outcomes in terms of speech perception, spatial listening, or quality of life; (d) were prospective controlled or observational studies. Studies that met prospectively-defined criteria were subjected to random-effects meta-analyses. Results: Twenty-seven studies reported in thirty articles were included. The evidence was graded as low-to-moderate quality having been obtained primarily from observational before-after comparisons. The meta-analysis identified statistically-significant benefits to speech perception in noise for devices that re-routed the speech signals of interest from the worse ear to the better ear using either air or bone conduction (mean benefit 2.5 dB). However, these devices also degraded speech understanding significantly and to a similar extent (mean deficit 3.1 dB) when noise was re-routed to the better ear. Data on the effects of cochlear implantation on speech perception could not be pooled as the prospectively-defined criteria for meta-analysis were not met. Inconsistency in the assessment of outcomes relating to sound localisation also precluded the synthesis of evidence across studies. 
Evidence for the relative efficacy of different devices was sparse but a statistically significant advantage was observed for re-routing speech signals using abutment-mounted bone conduction devices when compared to outcomes after pre-operative trials of air-conduction devices when speech and noise were co-located (mean benefit 1.5 dB). Patients reported significant improvements in hearing-related quality of life with both re-routing devices and following cochlear implantation. Only two studies measured health-related quality of life and findings were inconclusive. Conclusions: Devices that re-route sounds from an ear with a severe-to-profound hearing loss to an ear with minimal hearing loss may improve speech perception in noise when signals of interest are located towards the impaired ear. However, the same device may also degrade speech perception as all signals are re-routed indiscriminately, including noise. While the restoration of functional hearing in both ears through cochlear implantation could be expected to provide benefits to speech perception, the inability to synthesise evidence across existing studies means that such a conclusion cannot yet be made. For the same reason, it remains unclear whether cochlear implantation can improve the ability to localise sounds despite restoring bilateral input. Prospective controlled studies that measure outcomes consistently and control for selection and observation biases are required to improve the quality of the evidence for the provision of hearing instruments to patients with unilateral deafness and to support any future recommendations for the clinical management of these patients.

75 citations
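Pooled estimates such as the 2.5 dB benefit reported above are typically obtained by inverse-variance random-effects pooling. A minimal sketch of the DerSimonian-Laird estimator commonly used for such meta-analyses; the effect sizes and variances below are illustrative values, not data from this review:

```python
def dersimonian_laird(effects, variances):
    """Pooled effect under a DerSimonian-Laird random-effects model.

    effects: per-study effect sizes (e.g., dB benefit); variances: their
    within-study sampling variances.
    """
    w = [1.0 / v for v in variances]                              # fixed-effect weights
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sum(w)     # fixed-effect mean
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))   # Cochran's Q
    df = len(effects) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                                 # between-study variance
    w_re = [1.0 / (v + tau2) for v in variances]                  # random-effects weights
    return sum(wi * e for wi, e in zip(w_re, effects)) / sum(w_re)


# Illustrative three-study pooling (values invented for the example).
pooled = dersimonian_laird([2.0, 3.0, 2.6], [0.4, 0.5, 0.3])
```

When the estimated between-study variance is zero, the estimator reduces to the fixed-effect inverse-variance mean, which is why homogeneous study sets give identical pooled values under both models.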


Journal ArticleDOI
TL;DR: Younger children require a more advantageous signal-to-noise ratio than older children and adults to achieve 50% correct word recognition in both masker conditions, however, children’s ability to recognize words appears to take longer to mature and follows a different developmental trajectory for the two-talker speech masker than the speech-shaped noise masker.
Abstract: Objective The goal of this study was to establish the developmental trajectories for children's open-set recognition of monosyllabic words in each of two maskers: two-talker speech and speech-shaped noise. Design Listeners were 56 children (5 to 16 years) and 16 adults, all with normal hearing. Thresholds for 50% correct recognition of monosyllabic words were measured in a two-talker speech or a speech-shaped noise masker in the sound field using an open-set task. Target words were presented at a fixed level of 65 dB SPL throughout testing, while the masker level was adapted. A repeated-measures design was used to compare the performance of three age groups of children (5 to 7 years, 8 to 12 years, and 13 to 16 years) and a group of adults. The pattern of age-related changes during childhood was also compared between the two masker conditions. Results Listeners in all four age groups performed more poorly in the two-talker speech than the speech-shaped noise masker, but the developmental trajectories differed for the two masker conditions. For the speech-shaped noise masker, children's performance improved with age until about 10 years of age, with little systematic child-adult differences thereafter. In contrast, for the two-talker speech masker, children's thresholds gradually improved between 5 and 13 years of age, followed by an abrupt improvement in performance to adult-like levels. Children's thresholds in the two masker conditions were uncorrelated. Conclusions Younger children require a more advantageous signal-to-noise ratio than older children and adults to achieve 50% correct word recognition in both masker conditions. However, children's ability to recognize words appears to take longer to mature and follows a different developmental trajectory for the two-talker speech masker than the speech-shaped noise masker. 
These findings highlight the importance of considering both age and masker type when evaluating children's masked speech perception abilities.
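Several of the studies above estimate the SNR for 50% correct with an adaptive tracking procedure in which the target level is fixed (here, 65 dB SPL) and the masker level moves with performance. A minimal one-up/one-down sketch in Python; the starting level, step size, and reversal-averaging rule are illustrative assumptions, not any study's exact parameters:

```python
def track_snr(responses, target_level=65.0, masker_start=60.0, step=2.0):
    """Estimate the 50%-correct SNR from a one-up/one-down masker track.

    responses: per-trial correctness (True/False). The masker is raised after
    a correct response (making the task harder) and lowered after an error;
    the SNR estimate is the target level minus the mean masker level at the
    track's reversal points.
    """
    masker = masker_start
    levels_at_reversals = []
    prev_direction = None
    for correct in responses:
        direction = "up" if correct else "down"   # which way the masker moves
        if prev_direction and direction != prev_direction:
            levels_at_reversals.append(masker)    # record level at each reversal
        masker += step if correct else -step
        prev_direction = direction
    if not levels_at_reversals:
        raise ValueError("track produced no reversals")
    mean_masker = sum(levels_at_reversals) / len(levels_at_reversals)
    return target_level - mean_masker
```

Because one-up/one-down tracking converges on the level where up and down moves are equally likely, it targets the 50% point of the psychometric function, matching the 50%-correct criterion used in these studies.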

Journal ArticleDOI
TL;DR: This article uses listening effort as an umbrella term for two different types of effort that can arise during listening, one of these types is processing effort, which is used to denote the utilization of “extra” mental processing resources in listening conditions that are adverse for an individual.
Abstract: Listening effort has been recognized as an important dimension of everyday listening, especially with regard to the comprehension of spoken language. At constant levels of comprehension performance, the level of effort exerted and perceived during listening can differ considerably across listeners and situations. In this article, listening effort is used as an umbrella term for two different types of effort that can arise during listening. One of these types is processing effort, which is used to denote the utilization of "extra" mental processing resources in listening conditions that are adverse for an individual. A conceptual description is introduced how processing effort could be defined in terms of situational influences, the listener's auditory and cognitive resources, and the listener's personal state. Also, the proposed relationship between processing effort and subjectively perceived listening effort is discussed. Notably, previous research has shown that the availability of mental resources, as well as the ability to use them efficiently, changes over the course of adult aging. These common age-related changes in cognitive abilities and their neurocognitive organization are discussed in the context of the presented concept, especially regarding situations in which listening effort may be increased for older people.

Journal ArticleDOI
TL;DR: A second auditory input via a CI can facilitate the perceptual separation of competing talkers in situations where monaural cues are insufficient to do so, thus partially restoring a key advantage of having two ears that was previously thought to be inaccessible to CI users.
Abstract: Objectives Listening to speech with multiple competing talkers requires the perceptual separation of the target voice from the interfering background. Normal-hearing listeners are able to take advantage of perceived differences in the spatial locations of competing sound sources to facilitate this process. Previous research suggests that bilateral (BI) cochlear-implant (CI) listeners cannot do so, and it is unknown whether single-sided deaf (SSD) CI users (one acoustic and one CI ear) have this ability. This study investigated whether providing a second ear via cochlear implantation can facilitate the perceptual separation of targets and interferers in a listening situation involving multiple competing talkers. Design BI-CI and SSD-CI listeners were required to identify speech from a target talker mixed with one or two interfering talkers. In the baseline monaural condition, the target speech and the interferers were presented to one of the CIs (for the BI-CI listeners) or to the acoustic ear (for the SSD-CI listeners). In the bilateral condition, the target was still presented to the first ear but the interferers were presented to both the target ear and the listener's second ear (always a CI), thereby testing whether CI listeners could use information about the interferer obtained from a second ear to facilitate perceptual separation of the target and interferer. Results Presenting a copy of the interfering signals to the second ear improved performance, up to 4 to 5 dB (12 to 18 percentage points), but the amount of improvement depended on the type of interferer. For BI-CI listeners, the improvement occurred mainly in conditions involving one interfering talker, regardless of gender. For SSD-CI listeners, the improvement occurred in conditions involving one or two interfering talkers of the same gender as the target. 
This interaction is consistent with the idea that the SSD-CI listeners had access to pitch cues in their normal-hearing ear to separate the opposite-gender target and interferers, while the BI-CI listeners did not. Conclusions These results suggest that a second auditory input via a CI can facilitate the perceptual separation of competing talkers in situations where monaural cues are insufficient to do so, thus partially restoring a key advantage of having two ears that was previously thought to be inaccessible to CI users.

Journal ArticleDOI
TL;DR: Adults seeking help for hearing difficulties are more likely to experience severe fatigue and vigor problems; surprisingly, this increased risk appears unrelated to degree of hearing loss.
Abstract: Objectives Anecdotal reports and qualitative research suggest that fatigue is a common, but often overlooked, accompaniment of hearing loss that negatively affects quality of life. However, systematic research examining the relationship between hearing loss and fatigue is limited. In this study, the authors examined relationships between hearing loss and various domains of fatigue and vigor using standardized and validated measures. Relationships between subjective ratings of multidimensional fatigue and vigor and the social and emotional consequences of hearing loss were also explored. Design Subjective ratings of fatigue and vigor were assessed using the profile of mood states and the multidimensional fatigue symptom inventory-short form. To assess the social and emotional impact of hearing loss, participants also completed, depending on their age, the hearing handicap inventory for the elderly or adults. Responses were obtained from 149 adults (mean age = 66.1 years, range 22 to 94 years), who had scheduled a hearing test and/or a hearing aid selection at the Vanderbilt Bill Wilkerson Center Audiology clinic. These data were used to explore relationships between audiometric and demographic (i.e., age and gender) factors, fatigue, and hearing handicap scores. Results Compared with normative data, adults seeking help for their hearing difficulties in this study reported significantly less vigor and more fatigue. Reports of severe vigor/fatigue problems (ratings exceeding normative means by ±1.5 standard deviations) were also more frequent in the study sample than in normative data. Regression analyses, with adjustments for age and gender, revealed that the subjective percepts of fatigue, regardless of domain, and vigor were not strongly associated with degree of hearing loss.
However, similar analyses controlling for age, gender, and degree of hearing loss showed a strong association between measures of fatigue and vigor (multidimensional fatigue symptom inventory-short form scores) and the social and emotional consequences of hearing loss (hearing handicap inventory for the elderly/adults scores). Conclusions Adults seeking help for hearing difficulties are more likely to experience severe fatigue and vigor problems; surprisingly, this increased risk appears unrelated to degree of hearing loss. However, the negative psychosocial consequences of hearing loss are strongly associated with subjective ratings of fatigue, across all domains, and vigor. Additional research is needed to define the pathogenesis of hearing loss-related fatigue and to identify factors that may modulate and mediate (e.g., hearing aid or cochlear implant use) its impact.

Journal ArticleDOI
TL;DR: Results for pre-ejection period reactivity supported the hypothesis that the relationship between listening demand and listening effort is moderated by other variables and suggested that a broader perspective on the determinants of listening effort was warranted.
Abstract: A common element of the psychophysiological research on listening effort is the focus on listening demand as a determinant of effort. The article discusses preceding studies and theorizing on effort to show that the link between listening demand and listening effort is moderated by various variables. Moreover, I will present a recent study that examined the joint effect of listening demand and success importance on effort-related cardiovascular reactivity in an auditory discrimination task. Results for pre-ejection period reactivity, an indicator of sympathetic activity, supported the hypothesis that the relationship between listening demand and listening effort is moderated by other variables: pre-ejection period reactivity was higher in the high-demand-high-success-importance condition than in the other three conditions. This new finding, as well as the findings of previous research on effort, suggests that a broader perspective on the determinants of listening effort is warranted.

Journal ArticleDOI
TL;DR: In this article, the authors used functional near-infrared spectroscopy to measure responses within the lateral temporal lobe and the superior temporal gyrus to speech stimuli of varying intelligibility.
Abstract: Author(s): Olds, Cristen; Pollonini, Luca; Abaya, Homer; Larky, Jannine; Loy, Megan; Bortfeld, Heather; Beauchamp, Michael S; Oghalai, John S | Abstract: Objectives: Cochlear implants are a standard therapy for deafness, yet the ability of implanted patients to understand speech varies widely. To better understand this variability in outcomes, the authors used functional near-infrared spectroscopy to image activity within regions of the auditory cortex and compare the results to behavioral measures of speech perception. Design: The authors studied 32 deaf adults hearing through cochlear implants and 35 normal-hearing controls. The authors used functional near-infrared spectroscopy to measure responses within the lateral temporal lobe and the superior temporal gyrus to speech stimuli of varying intelligibility. The speech stimuli included normal speech, channelized speech (vocoded into 20 frequency bands), and scrambled speech (the 20 frequency bands were shuffled in random order). The authors also used environmental sounds as a control stimulus. Behavioral measures consisted of the speech reception threshold, consonant-nucleus-consonant words, and AzBio sentence tests measured in quiet. Results: Both control and implanted participants with good speech perception exhibited greater cortical activations to natural speech than to unintelligible speech. In contrast, implanted participants with poor speech perception had large, indistinguishable cortical activations to all stimuli. The ratio of cortical activation to normal speech to that of scrambled speech directly correlated with the consonant-nucleus-consonant words and AzBio sentences scores. This pattern of cortical activation was not correlated with auditory threshold, age, side of implantation, or time after implantation.
Turning off the implant reduced the cortical activations in all implanted participants. Conclusions: Together, these data indicate that the responses the authors measured within the lateral temporal lobe and the superior temporal gyrus correlate with behavioral measures of speech perception, demonstrating a neural basis for the variability in speech understanding outcomes after cochlear implantation.
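The "channelized" and "scrambled" stimuli described in this abstract can be approximated with a standard noise vocoder, in which per-band envelopes are either kept in their own bands or reassigned to randomly permuted bands. The following is a minimal Python sketch of that idea, not the authors' stimulus-generation code; the filter order, band spacing, and permutation scheme are illustrative assumptions.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def vocode(signal, fs, n_bands=20, f_lo=100.0, f_hi=8000.0,
           scramble=False, seed=0):
    """Noise-vocode `signal` into n_bands log-spaced channels.

    With scramble=False this mimics 'channelized' speech; with scramble=True
    channel envelopes are reassigned to randomly permuted bands, a rough
    analogue of the 'scrambled' stimulus (illustrative, not the study's code).
    """
    edges = np.geomspace(f_lo, f_hi, n_bands + 1)
    filters = [butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
               for lo, hi in zip(edges[:-1], edges[1:])]
    # Per-band temporal envelopes via the analytic signal
    envelopes = [np.abs(hilbert(sosfiltfilt(s, signal))) for s in filters]
    order = (np.random.default_rng(seed).permutation(n_bands)
             if scramble else np.arange(n_bands))
    noise = np.random.default_rng(seed + 1).standard_normal(len(signal))
    out = np.zeros(len(signal))
    for band in range(n_bands):
        # Modulate band-limited noise with the (possibly reassigned) envelope
        out += envelopes[order[band]] * sosfiltfilt(filters[band], noise)
    return out
```

Shuffling only the envelope-to-band assignment preserves the long-term spectrum and overall level while destroying intelligibility, which is what makes such stimuli useful as unintelligible controls.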

Journal ArticleDOI
TL;DR: A relatively greater degradation in the neural representation of TFS compared with periodicity envelope in individuals with SNHL is suggested, which may reflect a disruption in the temporal pattern of phase-locked neural activity arising from altered tonotopic maps and/or wider filters causing poor frequency selectivity in these listeners.
Abstract: Objective Listeners with sensorineural hearing loss (SNHL) typically experience reduced speech perception, which is not completely restored with amplification. This likely occurs because cochlear damage, in addition to elevating audiometric thresholds, alters the neural representation of speech transmitted to higher centers along the auditory neuroaxis. While the deleterious effects of SNHL on speech perception in humans have been well-documented using behavioral paradigms, our understanding of the neural correlates underlying these perceptual deficits remains limited. Using the scalp-recorded frequency following response (FFR), the authors examine the effects of SNHL and aging on subcortical neural representation of acoustic features important for pitch and speech perception, namely the periodicity envelope (F0) and temporal fine structure (TFS; formant structure), as reflected in the phase-locked neural activity generating the FFR. Design FFRs were obtained from 10 listeners with normal hearing (NH) and 9 listeners with mild-moderate SNHL in response to a steady-state English back vowel /u/ presented at multiple intensity levels. Use of multiple presentation levels facilitated comparisons at equal sound pressure level (SPL) and equal sensation level. In a second follow-up experiment to address the effect of age on envelope and TFS representation, FFRs were obtained from 25 NH and 19 listeners with mild to moderately severe SNHL to the same vowel stimulus presented at 80 dB SPL. Temporal waveforms, Fast Fourier Transform and spectrograms were used to evaluate the magnitude of the phase-locked activity at F0 (periodicity envelope) and F1 (TFS). Results Neural representation of both envelope (F0) and TFS (F1) at equal SPLs was stronger in NH listeners compared with listeners with SNHL. 
Also, comparison of neural representation of F0 and F1 across stimulus levels expressed in SPL and sensation level (accounting for audibility) revealed that level-related changes in F0 and F1 magnitude were different for listeners with SNHL compared with listeners with NH. Furthermore, the degradation in subcortical neural representation was observed to persist in listeners with SNHL even when the effects of age were controlled for. Conclusions Overall, our results suggest a relatively greater degradation in the neural representation of TFS compared with periodicity envelope in individuals with SNHL. This degraded neural representation of TFS in SNHL, as reflected in the brainstem FFR, may reflect a disruption in the temporal pattern of phase-locked neural activity arising from altered tonotopic maps and/or wider filters causing poor frequency selectivity in these listeners. Finally, while preliminary results indicate that the deleterious effects of SNHL may be greater than age-related degradation in subcortical neural representation, the lack of a balanced age-matched control group in this study does not permit us to completely rule out the effects of age on subcortical neural representation.
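The F0 and F1 magnitude measures described above amount to reading the FFT magnitude of the FFR waveform in narrow windows around the periodicity-envelope and formant frequencies. A minimal sketch follows; the windowing and bandwidth choices are my assumptions, not the authors' analysis code.

```python
import numpy as np

def component_magnitude(ffr, fs, freq_hz, half_bw_hz=5.0):
    """Peak FFT magnitude of `ffr` within +/- half_bw_hz of `freq_hz`.

    Applied at F0 (periodicity envelope) and F1 (temporal fine structure),
    this yields the kind of phase-locking magnitudes compared across groups;
    the Hann window and 5 Hz half-bandwidth are illustrative assumptions.
    """
    windowed = ffr * np.hanning(len(ffr))
    spectrum = np.abs(np.fft.rfft(windowed)) / len(ffr)
    freqs = np.fft.rfftfreq(len(ffr), 1.0 / fs)
    band = (freqs >= freq_hz - half_bw_hz) & (freqs <= freq_hz + half_bw_hz)
    return spectrum[band].max()
```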

Journal ArticleDOI
TL;DR: Evidence continues to accumulate supporting a link between decline in sensory function and cognitive decline in older adults, particularly when more than one sensory domain is measured.
Abstract: The objective of this study was to review research regarding sensory and cognitive interactions in older adults published since 2009, the approximate date of the most recent reviews on this topic. After an electronic database search of articles published in English since 2009 on measures of hearing and cognition or vision and cognition in older adults, a total of 437 articles were identified. Screening by title and abstract for appropriateness of topic and for articles presenting original research in peer-reviewed journals reduced the final number of articles reviewed to 34. These articles were qualitatively evaluated and synthesized with the existing knowledge base. Additional evidence has been obtained since 2009 associating declines in vision, hearing, or both with declines in cognition among older adults. The observed sensory-cognitive associations are generally stronger when more than one sensory domain is measured and when the sensory measures involve more than simple threshold sensitivity. Evidence continues to accumulate supporting a link between decline in sensory function and cognitive decline in older adults.

Journal ArticleDOI
TL;DR: The finding that reverberation did not affect listening effort, even when word recognition performance was degraded, is inconsistent with current models of listening effort.
Abstract: OBJECTIVES The purpose of this study was to investigate the effects of background noise and reverberation on listening effort. Four specific research questions were addressed related to listening effort: (A) With comparable word recognition performance across levels of reverberation, what are the effects of noise and reverberation on listening effort? (B) What is the effect of background noise when reverberation time is constant? (C) What is the effect of increasing reverberation from low to moderate when signal to noise ratio is constant? (D) What is the effect of increasing reverberation from moderate to high when signal to noise ratio is constant? DESIGN Eighteen young adults (mean age 24.8 years) with normal hearing participated. A dual-task paradigm was used to simultaneously assess word recognition and listening effort. The primary task was monosyllable word recognition, and the secondary task was word categorization (press a button if the word heard was judged to be a noun). Participants were tested in quiet and in background noise in three levels of reverberation (T30 < 100 ms, T30 = 475 ms, and T30 = 834 ms). Signal to noise ratios used were chosen individually for each participant and varied by reverberation to address the specific research questions. RESULTS As expected, word recognition performance was negatively affected by both background noise and by increases in reverberation. Furthermore, analysis of mean response times revealed that background noise increased listening effort, regardless of degree of reverberation. Conversely, reverberation did not affect listening effort, regardless of whether word recognition performance was comparable or signal to noise ratio was constant. CONCLUSIONS The finding that reverberation did not affect listening effort, even when word recognition performance was degraded, is inconsistent with current models of listening effort. 
The reasons for this surprising finding are unclear and warrant further investigation. However, the results of this study are limited in generalizability to young listeners with normal hearing and to the signal to noise ratios, loudspeaker to listener distance, and reverberation times evaluated. Other populations, like children, older listeners, and listeners with hearing loss, have been previously shown to be more sensitive to reverberation. Therefore, the effects of reverberation for these vulnerable populations also warrant further investigation.

Journal ArticleDOI
TL;DR: Extending the FUEL using social-cognitive psychological theories may provide valuable insights into how effortful listening could be reduced by adopting health-promoting approaches to rehabilitation.
Abstract: The framework for understanding effortful listening (FUEL) draws on psychological theories of cognition and motivation. In the present article, theories of social-cognitive psychology are related to the FUEL. Listening effort is defined in our consensus as the deliberate allocation of mental resources to overcome obstacles in goal pursuit when carrying out a task that involves listening. Listening effort depends not only on hearing difficulties and task demands but also on the listener's motivation to expend mental effort in challenging situations. Listeners' cost/benefit evaluations involve appraisals of listening demands, their own capacity, and the importance of listening goals. Social psychological factors can affect a listener's actual and self-perceived auditory and cognitive abilities, especially when those abilities may be insufficient to readily meet listening demands. Whether or not listeners experience stress depends not only on how demanding a situation is relative to their actual abilities but also on how they appraise their capacity to meet those demands. The self-perception or appraisal of one's abilities can be lowered by poor self-efficacy or negative stereotypes. Stress may affect performance in a given situation and chronic stress can have deleterious effects on many aspects of health, including auditory and cognitive functioning. Social support can offset demands and mitigate stress; however, the burden of providing support may stress the significant other. Some listeners cope by avoiding challenging situations and withdrawing from social participation. Extending the FUEL using social-cognitive psychological theories may provide valuable insights into how effortful listening could be reduced by adopting health-promoting approaches to rehabilitation.

Journal ArticleDOI
TL;DR: Findings from this randomized controlled trial show that Lace training does not result in improved outcomes over standard-of-care hearing aid intervention alone, and audiologists may want to temper the expectations of their patients who embark on LACE training.
Abstract: To examine the effectiveness of the Listening and Communication Enhancement (LACE) program as a supplement to standard-of-care hearing aid intervention in a Veteran population. A multisite randomized controlled trial was conducted to compare outcomes following standard-of-care hearing aid intervention supplemented with (1) LACE training using the 10-session DVD format, (2) LACE training using the 20-session computer-based format, (3) placebo auditory training (AT) consisting of actively listening to 10 hr of digitized books on a computer, and (4) educational counseling (the control group). The study involved 3 VA sites and enrolled 279 veterans. Both new and experienced hearing aid users participated to determine if outcomes differed as a function of hearing aid user status. Data for five behavioral and two self-report measures were collected during three research visits: baseline, immediately following the intervention period, and at 6 months postintervention. The five behavioral measures were selected to determine whether the perceptual and cognitive skills targeted in LACE training generalized to untrained tasks that required similar underlying skills. The two self-report measures were completed to determine whether the training resulted in a lessening of activity limitations and participation restrictions. Outcomes were obtained from 263 participants immediately following the intervention period and from 243 participants 6 months postintervention.
Analyses of covariance comparing performance on each outcome measure separately were conducted using intervention and hearing aid user status as between-subject factors, visit as a within-subject factor, and baseline performance as a covariate. No statistically significant main effects or interactions were found for the use of LACE on any outcome measure. Findings from this randomized controlled trial show that LACE training does not result in improved outcomes over standard-of-care hearing aid intervention alone. Potential benefits of AT may be different than those assessed by the performance and self-report measures utilized here. Individual differences not assessed in this study should be examined to evaluate whether AT with LACE has any benefits for particular individuals. Clinically, these findings suggest that audiologists may want to temper the expectations of their patients who embark on LACE training.

Journal ArticleDOI
TL;DR: Findings indicate that spoken word recall can be used to identify benefits from hearing aid signal processing at ecologically valid, positive SNRs where SRTs are insensitive.
Abstract: In adaptive Speech Reception Threshold (SRT) tests used in the audiological clinic, speech is presented at signal to noise ratios (SNRs) that are lower than those generally encountered in real-life communication situations. At higher, ecologically valid SNRs, however, SRTs are insensitive to changes in hearing aid signal processing that may be of benefit to listeners who are hard of hearing. Previous studies conducted in Swedish using the Sentence-final Word Identification and Recall test (SWIR) have indicated that at such SNRs, the ability to recall spoken words may be a more informative measure. In the present study, a Danish version of SWIR, known as the Sentence-final Word Identification and Recall Test in a New Language (SWIRL) was introduced and evaluated in two experiments. The objective of experiment 1 was to determine if the Swedish results demonstrating benefit from noise reduction signal processing for hearing aid wearers could be replicated in 25 Danish participants with mild to moderate symmetrical sensorineural hearing loss. The objective of experiment 2 was to compare direct-drive and skin-drive transmission in 16 Danish users of bone-anchored hearing aids with conductive hearing loss or mixed sensorineural and conductive hearing loss. In experiment 1, performance on SWIRL improved when hearing aid noise reduction was used, replicating the Swedish results and generalizing them across languages. In experiment 2, performance on SWIRL was better for direct-drive compared with skin-drive transmission conditions. These findings indicate that spoken word recall can be used to identify benefits from hearing aid signal processing at ecologically valid, positive SNRs where SRTs are insensitive.

Journal ArticleDOI
TL;DR: Unacknowledged or unaddressed hearing loss was associated with a significantly increased risk of social isolation among 60- to 69-year-olds but not those 70 years or older, and PTA was not associated significantly with falls, hospitalizations, burden of physical or mental health, or depression in these samples.
Abstract: OBJECTIVES: Hearing screening programs may benefit adults with unacknowledged or unaddressed hearing loss, but there is limited evidence regarding whether such programs are effective at improving health outcomes. The objective was to determine if poorer audiometric hearing thresholds are associated with poorer cognition, social isolation, burden of physical or mental health, inactivity due to poor physical or mental health, depression, and overnight hospitalizations among older American adults with unacknowledged or unaddressed hearing loss. DESIGN: The authors performed a cross-sectional population-based analysis of older American adults with normal hearing or unacknowledged or unaddressed hearing loss. Data were obtained from the 1999 to 2010 cycles of the National Health and Nutrition Examination Survey. Participants with a pure-tone average (PTA; the average of the better hearing ear's thresholds at 0.5, 1, 2, and 4 kHz) > 25 dB HL who self-reported their hearing ability to be "good" or "excellent" were categorized as having "unacknowledged" hearing loss. Those who had a PTA > 25 dB HL and who self-reported hearing problems but had never had a hearing test or worn a hearing aid were categorized as having "unaddressed" hearing loss. Multivariate regression was performed to account for confounding due to demographic and health variables. RESULTS: A 10 dB increase in PTA was associated with a 52% increased odds of social isolation among 60- to 69-year-olds in multivariate analyses (p = 0.001). The average Digit Symbol Substitution Test score dropped by 2.14 points per 10 dB increase in PTA (p = 0.03), a magnitude equivalent to the drop expected for 3.9 years of chronological aging. PTA was not associated significantly with falls, hospitalizations, burden of physical or mental health, or depression; nor was it associated with social isolation among those aged 70 years or older in these samples.
CONCLUSION: Unacknowledged or unaddressed hearing loss was associated with a significantly increased risk of social isolation among 60- to 69-year-olds but not those 70 years or older. It was also associated with lower cognitive scores on the Digit Symbol Substitution Test among 60- to 69-year-olds. This study differs from prior studies by focusing specifically on older adults who have unacknowledged or unaddressed hearing loss because they are the most likely to benefit from pure-tone hearing screening. The finding of associations between hearing loss and measures of social isolation and cognition in these specific samples extends previous findings on unrestricted samples of older adults, including those who had already acknowledged hearing problems. Future randomized controlled trials measuring the effectiveness of adult hearing screening programs should measure whether interventions have an effect on these measures in those who have unacknowledged or unaddressed pure-tone hearing loss.
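The study's grouping rule can be stated concretely: a four-frequency, better-ear pure-tone average above 25 dB HL, split by self-reported hearing and intervention history. The sketch below follows those definitions as given in the abstract; the function and argument names are mine, not from the paper.

```python
def pure_tone_average(thresholds_db_hl):
    """Better-ear PTA over 0.5, 1, 2, and 4 kHz, as defined in the study.

    `thresholds_db_hl` maps frequency in Hz to threshold in dB HL.
    """
    freqs = (500, 1000, 2000, 4000)
    return sum(thresholds_db_hl[f] for f in freqs) / len(freqs)

def hearing_loss_category(thresholds_db_hl, self_reports_good_hearing,
                          ever_tested_or_aided):
    """Apply the study's categories (helper and label names are illustrative)."""
    if pure_tone_average(thresholds_db_hl) <= 25:
        return "normal"
    if self_reports_good_hearing:
        return "unacknowledged"
    if not ever_tested_or_aided:
        return "unaddressed"
    return "acknowledged"
```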

Journal ArticleDOI
TL;DR: When evaluating people with cochlear implants, controlled speech-based stimuli are feasible to use in tests of auditory cue categorization, to complement traditional measures of auditory discrimination.
Abstract: Objectives This study was conducted to measure auditory perception by cochlear implant users in the spectral and temporal domains, using tests of either categorization (using speech-based cues) or discrimination (using conventional psychoacoustic tests). The authors hypothesized that traditional nonlinguistic tests assessing spectral and temporal auditory resolution would correspond to speech-based measures assessing specific aspects of phonetic categorization assumed to depend on spectral and temporal auditory resolution. The authors further hypothesized that speech-based categorization performance would ultimately be a superior predictor of speech recognition performance, because of the fundamental nature of speech recognition as categorization. Design Nineteen cochlear implant listeners and 10 listeners with normal hearing participated in a suite of tasks that included spectral ripple discrimination, temporal modulation detection, and syllable categorization, which was split into a spectral cue-based task (targeting the /ba/-/da/ contrast) and a timing cue-based task (targeting the /b/-/p/ and /d/-/t/ contrasts). Speech sounds were manipulated to contain specific spectral or temporal modulations (formant transitions or voice onset time, respectively) that could be categorized. Categorization responses were quantified using logistic regression to assess perceptual sensitivity to acoustic phonetic cues. Word recognition testing was also conducted for cochlear implant listeners. Results Cochlear implant users were generally less successful at utilizing both spectral and temporal cues for categorization compared with listeners with normal hearing. For the cochlear implant listener group, spectral ripple discrimination was significantly correlated with the categorization of formant transitions; both were correlated with better word recognition. 
Temporal modulation detection using 100- and 10-Hz-modulated noise was not correlated either with the cochlear implant subjects' categorization of voice onset time or with word recognition. Word recognition was correlated more closely with categorization of the controlled speech cues than with performance on the psychophysical discrimination tasks. Conclusions When evaluating people with cochlear implants, controlled speech-based stimuli are feasible to use in tests of auditory cue categorization, to complement traditional measures of auditory discrimination. Stimuli based on specific speech cues correspond to counterpart nonlinguistic measures of discrimination, but potentially show better correspondence with speech perception more generally. The ubiquity of the spectral (formant transition) and temporal (voice onset time) stimulus dimensions across languages highlights the potential to use this testing approach even in cases where English is not the native language.
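Quantifying categorization responses with logistic regression, as described in this abstract, amounts to fitting a psychometric function whose slope indexes sensitivity to the manipulated acoustic cue (formant transition or voice onset time). A sketch under assumed choices follows; the two-parameter form and the least-squares fit are my assumptions, not the authors' exact procedure.

```python
import numpy as np
from scipy.optimize import curve_fit

def psychometric(x, midpoint, slope):
    """Two-parameter logistic: proportion of one category response vs. cue value."""
    return 1.0 / (1.0 + np.exp(-slope * (x - midpoint)))

def fit_cue_sensitivity(cue_values, prop_category):
    """Fit the logistic to categorization proportions.

    The fitted slope serves as a perceptual-sensitivity index for the cue;
    a shallow slope indicates poor use of the cue for categorization.
    """
    p0 = [float(np.mean(cue_values)), 1.0]
    (midpoint, slope), _ = curve_fit(psychometric, cue_values, prop_category,
                                     p0=p0, maxfev=10000)
    return midpoint, slope
```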

Journal ArticleDOI
TL;DR: The MOC strategy significantly improved the intelligibility of speech presented in competition with a spatially separated noise source, both in unilateral and bilateral listening conditions; produced significant spatial release from masking in bilateral listening conditions, something that did not occur with fixed compression; and enhanced spatial release from masking in unilateral listening conditions.
Abstract: Objectives In natural hearing, cochlear mechanical compression is dynamically adjusted via the efferent medial olivocochlear reflex (MOCR). These adjustments probably help understanding speech in noisy environments and are not available to the users of current cochlear implants (CIs). The aims of the present study are to: (1) present a binaural CI sound processing strategy inspired by the control of cochlear compression provided by the contralateral MOCR in natural hearing; and (2) assess the benefits of the new strategy for understanding speech presented in competition with steady noise with a speech-like spectrum in various spatial configurations of the speech and noise sources. Design Pairs of CI sound processors (one per ear) were constructed to mimic or not mimic the effects of the contralateral MOCR on compression. For the nonmimicking condition (standard strategy or STD), the two processors in a pair functioned similarly to standard clinical processors (i.e., with fixed back-end compression and independently of each other). When configured to mimic the effects of the MOCR (MOC strategy), the two processors communicated with each other and the amount of back-end compression in a given frequency channel of each processor in the pair decreased/increased dynamically (so that output levels dropped/increased) with increases/decreases in the output energy from the corresponding frequency channel in the contralateral processor. Speech reception thresholds in speech-shaped noise were measured for 3 bilateral CI users and 2 single-sided deaf unilateral CI users. Thresholds were compared for the STD and MOC strategies in unilateral and bilateral listening conditions and for three spatial configurations of the speech and noise sources in simulated free-field conditions: speech and noise sources colocated in front of the listener, speech on the left ear with noise in front of the listener, and speech on the left ear with noise on the right ear. 
In both bilateral and unilateral listening, the electrical stimulus delivered to the test ear(s) was always calculated as if the listeners were wearing bilateral processors. Results In both unilateral and bilateral listening conditions, mean speech reception thresholds were comparable with the two strategies for colocated speech and noise sources, but were at least 2 dB lower (better) with the MOC than with the STD strategy for spatially separated speech and noise sources. In unilateral listening conditions, mean thresholds improved as the spatial separation between the speech and noise sources increased, regardless of the strategy, but the improvement was significantly greater with the MOC strategy. In bilateral listening conditions, thresholds improved significantly with increasing speech-noise spatial separation only with the MOC strategy. Conclusions The MOC strategy (1) significantly improved the intelligibility of speech presented in competition with a spatially separated noise source, both in unilateral and bilateral listening conditions; (2) produced significant spatial release from masking in bilateral listening conditions, something that did not occur with fixed compression; and (3) enhanced spatial release from masking in unilateral listening conditions. The MOC strategy as implemented here, or a modified version of it, may be usefully applied in CIs and in hearing aids.
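The contralateral linking described in the Design can be caricatured as a per-channel compressive exponent that moves toward linear (so output levels drop) as contralateral channel energy grows. This is a deliberately simplified sketch; the exponent mapping and all parameter values are assumptions, not the published MOC implementation.

```python
import numpy as np

def moc_linked_compression(env_left, env_right,
                           c_min=0.2, c_max=1.0, theta=0.5):
    """MOC-inspired bilateral back-end compression on channel envelopes.

    env_left/env_right: arrays of nonnegative channel envelopes in (0, 1].
    As contralateral energy rises, the compressive exponent moves from c_min
    (strongly compressive) toward c_max = 1 (linear), lowering output levels,
    analogous to contralateral MOC inhibition. Constants are illustrative.
    """
    def exponent(contra):
        # More contralateral energy -> less compression -> lower output
        return c_min + (c_max - c_min) * np.clip(contra / theta, 0.0, 1.0)
    out_left = env_left ** exponent(env_right)
    out_right = env_right ** exponent(env_left)
    return out_left, out_right
```

The spatial benefit reported above follows the same intuition: a noise source dominating one ear drives down the output of the corresponding channels in the opposite processor, improving the effective signal-to-noise ratio at the ear nearer the target.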

Journal ArticleDOI
TL;DR: For bimodal listening, the AGC-matched HA outperformed the standard HA in speech understanding in noise tasks using a single competing talker and it was favored in questionnaires and in a subjective preference test.
Abstract: OBJECTIVES: The purpose of this study was to improve bimodal benefit in listeners using a cochlear implant (CI) and a hearing aid (HA) in contralateral ears, by matching the time constants and the number of compression channels of the automatic gain control (AGC) of the HA to the CI. Equivalent AGC was hypothesized to support a balanced loudness for dynamically changing signals like speech and improve bimodal benefit for speech understanding in quiet and with noise presented from the side(s) at 90 degrees. DESIGN: Fifteen subjects participated in the study, all using the same Advanced Bionics Harmony CI processor and HA (Phonak Naida S IX UP). In a 3-visit crossover design with 4 weeks between sessions, performance was measured using a HA with a standard AGC (syllabic multichannel compression with 1 ms attack time and 50 ms release time) or an AGC that was adjusted to match that of the CI processor (dual AGC broadband compression, 3 and 240 ms attack time, 80 and 1500 ms release time). In all devices, the AGC was activated above the threshold of 63 dB SPL. The authors balanced loudness across the devices for soft and loud input sounds in 3 frequency bands (0 to 548, 548 to 1000, and >1000 Hz). Speech understanding was tested in free field in quiet and in noise for three spatial speaker configurations, with target speech always presented from the front. Single-talker noise was either presented from the CI side or the HA side, or uncorrelated stationary speech-weighted noise or single-talker noise was presented from both sides. Questionnaires were administered to assess differences in perception between the two bimodal fittings. RESULTS: Significant bimodal benefit over the CI alone was only found for the AGC-matched HA for the speech tests with single-talker noise. Compared with the standard HA, matched AGC characteristics significantly improved speech understanding in single-talker noise by 1.9 dB when noise was presented from the HA side.
AGC matching increased bimodal benefit non-significantly, by 0.6 dB when noise was presented from the CI-implanted side, and by 0.8 dB (single-talker noise) and 1.1 dB (stationary noise) in the more complex configurations with two simultaneous maskers from both sides. In questionnaires, subjects rated the AGC-matched HA higher than the standard HA for understanding of one person in quiet and in noise, and for the quality of sounds. Listening to a slightly raised voice, subjects indicated increased listening comfort with matched AGCs. At the end of the study, 9 of 15 subjects preferred to take home the AGC-matched HA, 1 preferred the standard HA, and 5 subjects had no preference. CONCLUSION: For bimodal listening, the AGC-matched HA outperformed the standard HA in speech understanding in noise tasks using a single competing talker, and it was favored in questionnaires and in a subjective preference test. When noise was presented from the HA side, AGC matching resulted in a 1.9 dB SNR additional benefit, even though the HA was on the least favorable SNR side in this speaker configuration. Our results may indicate better binaural processing with matched AGCs.
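The attack and release values being matched in this study are the time constants of the AGC's level detector. A one-pole attack/release envelope follower shows what those numbers control; this is a generic textbook detector, not the firmware of either device.

```python
import numpy as np

def agc_level_detector(x, fs, attack_ms, release_ms):
    """One-pole attack/release level detector, the core of an AGC.

    Fast constants (e.g., 1/50 ms, like the standard HA's syllabic
    compressor) track syllable-rate fluctuations; slow ones (e.g., 240/1500
    ms, like one stage of the CI's dual AGC) track overall level. Generic
    sketch with illustrative values, not either device's implementation.
    """
    a_att = np.exp(-1.0 / (fs * attack_ms / 1000.0))
    a_rel = np.exp(-1.0 / (fs * release_ms / 1000.0))
    env = np.empty(len(x))
    level = 0.0
    for i, v in enumerate(np.abs(np.asarray(x, dtype=float))):
        coeff = a_att if v > level else a_rel  # attack when rising, else release
        level = coeff * level + (1.0 - coeff) * v
        env[i] = level
    return env
```

When the two devices' detectors respond at different rates, the same speech waveform receives momentarily different gains at the two ears, which is the loudness-balance problem AGC matching was intended to remove.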

Journal ArticleDOI
TL;DR: In this article, the authors present results of three tests of cognitive spare capacity: (1) the Sentence-final Word Identification and Recall (SWIR) test, (2) the Cognitive Spare Capacity Test (CSCT), and (3) the Auditory Inference Span Test (AIST).
Abstract: Everyday listening may be experienced as effortful, especially by individuals with hearing loss. This may be due to internal factors, such as cognitive load, and external factors, such as noise. Even when speech is audible, internal and external factors may combine to reduce cognitive spare capacity, or the ability to engage in cognitive processing of spoken information. A better understanding of cognitive spare capacity and how it can be optimally allocated may guide new approaches to rehabilitation and ultimately improve outcomes. This article presents results of three tests of cognitive spare capacity: (1) the Sentence-final Word Identification and Recall (SWIR) test, (2) the Cognitive Spare Capacity Test (CSCT), and (3) the Auditory Inference Span Test (AIST). Results show that noise reduces cognitive spare capacity even when speech intelligibility is retained. In addition, SWIR results show that hearing aid signal processing can increase cognitive spare capacity, and CSCT and AIST results show that increasing load reduces cognitive spare capacity. Correlational evidence suggests that while the effect of noise on cognitive spare capacity is related to working memory capacity, the effect of load is related to executive function. Future studies should continue to investigate how hearing aid signal processing can mitigate the effect of load on cognitive spare capacity, and whether such effects can be enhanced by developing executive skills through training. The mechanisms modulating cognitive spare capacity should be investigated by studying their neural correlates, and tests of cognitive spare capacity should be developed for clinical use in conjunction with developing new approaches to rehabilitation.

Journal ArticleDOI
TL;DR: It appears that MOC shifts, as analyzed in the present study, may be too variable for clinical use, at least in some individuals.
Abstract: OBJECTIVES Measurement of changes in transient-evoked otoacoustic emissions (TEOAEs) caused by activation of the medial olivocochlear reflex (MOCR) may have clinical applications, but the clinical utility is dependent in part on the amount of variability across repeated measurements. The purpose of this study was to investigate the within- and across-subject variability of these measurements in a research setting as a step toward determining the potential clinical feasibility of TEOAE-based MOCR measurements. DESIGN In 24 normal-hearing young adults, TEOAEs were elicited with 35 dB SL clicks and the MOCR was activated by 35 dB SL broadband noise presented contralaterally. Across a 5-week span, changes in both TEOAE amplitude and phase evoked by MOCR activation (MOC shifts) were measured at four sessions, each consisting of four independent measurements. Efforts were undertaken to reduce the effect of potential confounds, including slow drifts in TEOAE amplitude across time, activation of the middle-ear muscle reflex, and changes in subjects' attentional states. MOC shifts were analyzed in seven 1/6-octave bands from 1 to 2 kHz. The variability of MOC shifts was analyzed at the frequency band yielding the largest and most stable MOC shift at the first session. Within-subject variability was quantified by the size of the standard deviations across all 16 measurements. Across-subject variability was quantified as the range of MOC shift values across subjects and was also described qualitatively through visual analyses of the data. RESULTS A large majority of MOC shifts in subjects were statistically significant. Most subjects showed stable MOC shifts across time, as evidenced by small standard deviations and by visual clustering of their data. However, some subjects showed within- and across-session variability that could not be explained by changes in hearing status, middle ear status, or attentional state. 
Simulations indicated that four baseline measurements were sufficient to predict the expected variability of subsequent measurements. However, the measured variability of subsequent MOC shifts in subjects was often larger than expected (based on the variability present at baseline), indicating the presence of additional variability at subsequent sessions. CONCLUSIONS Results indicated that a wide range of within- and across-subject variability of MOC shifts was present in a group of young normal-hearing individuals. In some cases, very large changes in MOC shifts (e.g., 1.5 to 2 dB) would need to occur before one could attribute the change to either an intervention or pathology, rather than to measurement variability. It appears that MOC shifts, as analyzed in the present study, may be too variable for clinical use, at least in some individuals. Further study is needed to determine the extent to which changes in MOC shifts can be reliably measured across time for clinical purposes.

Journal ArticleDOI
TL;DR: The authors discuss the possible anatomical and physiological bases of the BIC and the effects of electrode placement and stimulus characteristics on the binaurally evoked ABR, and review how interaural time and intensity differences affect the BIC.
Abstract: The auditory brainstem response (ABR) is a sound-evoked, noninvasively measured electrical potential representing the sum of neuronal activity in the auditory brainstem and midbrain. ABR peak amplitudes and latencies are widely used in human and animal auditory research and for clinical screening. The binaural interaction component (BIC) of the ABR is the difference between the sum of the monaural ABRs and the ABR obtained with binaural stimulation. The BIC comprises a series of distinct waves, the largest of which (DN1) has been used for evaluating binaural hearing in both normal-hearing and hearing-impaired listeners. Based on data from animal and human studies, the authors discuss the possible anatomical and physiological bases of the BIC (DN1 in particular). The effects of electrode placement and stimulus characteristics on the binaurally evoked ABR are evaluated. The authors review how interaural time and intensity differences affect the BIC and, analyzing these dependencies, draw conclusions about the mechanism underlying the generation of the BIC. Finally, the utility of the BIC for clinical diagnosis is summarized.
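The BIC derivation described in the abstract is a simple waveform subtraction, with DN1 identified as the largest negative deflection of the result. A minimal sketch following that definition (the DN1 search window below is an illustrative assumption, not taken from the article, and sign conventions vary across laboratories):

```python
import numpy as np

def binaural_interaction_component(abr_left, abr_right, abr_binaural):
    """BIC = sum of the monaural ABRs minus the binaurally evoked ABR,
    per the definition in the abstract. All inputs are time-locked
    averaged waveforms sampled at the same rate."""
    return (np.asarray(abr_left) + np.asarray(abr_right)
            - np.asarray(abr_binaural))

def dn1_amplitude(bic, fs, window=(0.004, 0.008)):
    """Amplitude and latency of DN1, taken here as the largest negative
    deflection of the BIC within a post-stimulus search window (seconds).
    The 4-8 ms window is a hypothetical placeholder."""
    i0, i1 = int(window[0] * fs), int(window[1] * fs)
    seg = bic[i0:i1]
    k = int(np.argmin(seg))
    return seg[k], (i0 + k) / fs
```

Because binaural stimulation evokes less activity than the monaural sum predicts, the subtraction leaves a residual wave whose negative peak (DN1) indexes binaural interaction.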

Journal ArticleDOI
TL;DR: The kurtosis-adjusted CNE may be a reasonable candidate for use in NIHL risk assessment across a wide variety of noise environments and provided a single metric for dose–response effects across different types of noise.
Abstract: Objective To test a kurtosis-adjusted cumulative noise exposure (CNE) metric for use in evaluating the risk of hearing loss among workers exposed to industrial noises. Specifically, to evaluate whether the kurtosis-adjusted CNE (1) provides a better association with observed industrial noise-induced hearing loss, and (2) provides a single metric applicable to both complex (non-Gaussian [non-G]) and continuous or steady state (Gaussian [G]) noise exposures for predicting noise-induced hearing loss (dose-response curves). Design Audiometric and noise exposure data were acquired on a population of screened workers (N = 341) from two steel manufacturing plants located in Zhejiang province and a textile manufacturing plant located in Henan province, China. All the subjects from the two steel manufacturing plants (N = 178) were exposed to complex noise, whereas the subjects from the textile manufacturing plant (N = 163) were exposed to a G continuous noise. Each subject was given an otologic examination to determine their pure-tone hearing threshold levels (HTLs) and had their personal 8-hr equivalent A-weighted noise exposure (LAeq) and full-shift noise kurtosis statistic (which is sensitive to the peaks and temporal characteristics of noise exposures) measured. For each subject, an unadjusted and a kurtosis-adjusted CNE index for the years worked was created. Multiple linear regression analysis controlling for age was used to determine the relationship between CNE (unadjusted and kurtosis adjusted) and the mean HTL at 3, 4, and 6 kHz (HTL3,4,6) among the complex noise-exposed group. In addition, each subject's HTLs from 0.5 to 8.0 kHz were age and sex adjusted using Annex A (ISO-1999) to determine whether they had adjusted high-frequency noise-induced hearing loss (AHFNIHL), defined as an adjusted HTL shift of 30 dB or greater at 3.0, 4.0, or 6.0 kHz in either ear.
Dose-response curves for AHFNIHL were developed separately for workers exposed to G and non-G noise using both unadjusted and adjusted CNE as the exposure metric. Results Multiple linear regression analysis among complex-noise-exposed workers demonstrated that the correlation between HTL3,4,6 and CNE controlling for age was improved when using the kurtosis-adjusted CNE compared with the unadjusted CNE (R = 0.386 versus 0.350) and that noise accounted for a greater proportion of hearing loss. In addition, although the dose-response curves for AHFNIHL were distinctly different when using unadjusted CNE, they overlapped when using the kurtosis-adjusted CNE. Conclusions For the same exposure level, the prevalence of NIHL is greater in workers exposed to complex noise environments than in workers exposed to a continuous noise. Kurtosis adjustment of CNE improved the correlation with NIHL and provided a single metric for dose-response effects across different types of noise. The kurtosis-adjusted CNE may be a reasonable candidate for use in NIHL risk assessment across a wide variety of noise environments.
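The abstract does not give the authors' adjustment formula, but the kurtosis statistic itself is standard: the fourth central moment normalized by the squared variance. Gaussian (steady state) noise yields a kurtosis near 3, while impulsive "complex" noise yields much larger values, which is what makes it a useful marker of peaked, non-G exposures. A minimal sketch of the statistic alone:

```python
import numpy as np

def kurtosis(x):
    """Sample kurtosis: E[(x - mu)^4] / sigma^4 (Pearson definition).
    Gaussian noise gives values near 3; impulsive noise gives larger values."""
    x = np.asarray(x, dtype=float)
    mu = x.mean()
    sigma2 = x.var()
    return np.mean((x - mu) ** 4) / sigma2 ** 2
```

In practice the statistic would be computed over full-shift sound pressure recordings; sparse high-amplitude impacts, typical of steel manufacturing noise, drive the fourth moment and hence the kurtosis far above the Gaussian value of 3.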