
Showing papers in "Journal of The American Academy of Audiology in 2009"


Journal ArticleDOI
TL;DR: A well-fit digital hearing aid worn in conjunction with a cochlear implant is beneficial to speech recognition and localization, and the dynamic test procedures used in this study illustrate the importance of bilateral hearing for locating, identifying, and switching attention between multiple speakers.
Abstract: Background The use of bilateral amplification is now common clinical practice for hearing aid users but not for cochlear implant recipients. In the past, most cochlear implant recipients were implanted in one ear and wore only a monaural cochlear implant processor. There has been recent interest in benefits arising from bilateral stimulation that may be present for cochlear implant recipients. One option for bilateral stimulation is the use of a cochlear implant in one ear and a hearing aid in the opposite nonimplanted ear (bimodal hearing).

140 citations


Journal ArticleDOI
TL;DR: The absence of ear differences suggests that temporal resolution as measured by the GIN is an auditory process that develops relatively early and symmetrically (i.e., no laterality or ear dominance effects).
Abstract: Background: The recently developed Gaps-In-Noise (GIN) test has provided a new diagnostic tool for the detection of temporal resolution deficits. Previous reports indicate that the GIN is a relatively sensitive tool for the diagnosis of central auditory processing disorder ([C]APD) in adult populations. Purpose: The purpose of the present study was to determine the feasibility of the GIN test in the pediatric population. Research Design: This was a prospective pseudorandomized investigation.

115 citations


Journal ArticleDOI
TL;DR: It is demonstrated that minor differences in VEMP responses occur with age, and frequency tuning curves may not be a good diagnostic indicator for individuals over the age of 60.
Abstract: Background Vestibular Evoked Myogenic Potential (VEMP) testing has gained increased interest in the diagnosis of a variety of vestibular etiologies. Comparisons of P13 / N23 latency, amplitude and threshold response curves have been used to compare pathologic groups to normal controls. Appropriate characterization of these etiologies requires normative data across the frequency spectrum and age range.

99 citations


Journal ArticleDOI
TL;DR: The results suggest that some middle-aged women with little or no pure-tone hearing loss experience listening difficulty in complex environments and suggest a strong relationship between temporal processing and speech understanding in certain competing speech situations.
Abstract: It is not unusual to see middle-aged women self-referring for hearing assessment. The primary complaint of many of these individuals is difficulty understanding speech in adverse listening conditions. Despite these subjective problems, routine audiometric examination usually reveals clinically normal hearing sensitivity and no difficulty understanding words in quiet. This is not an unexpected finding, since population-based studies of pure-tone hearing suggest that age-related threshold elevation typically does not occur in middle age (e.g., Morrell et al, 1996; Cruickshanks et al, 1998). Difficulty understanding speech in adverse listening conditions has been demonstrated previously in older individuals with essentially normal peripheral hearing sensitivity (e.g., Helfer and Wilber, 1990; Dubno et al, 2002; Wingfield et al, 2006) as well as in younger people with discrete auditory nervous system dysfunction (e.g., Musiek et al, 2005). This raises the question of whether subtle, age-related changes in auditory processing occur in middle age. Although research studies rarely have focused on hearing in middle age, a number of investigations of aging effects in general have included one or more groups of middle-aged subjects. Data from many of these studies suggest that certain auditory abilities begin to decline in middle age. For example, middle-aged subjects have been shown to perform more poorly than younger listeners (but better than older individuals) on tasks such as perception of dichotically presented speech (Barr and Giambra, 1990; Martin and Cranford, 1991; Jerger et al, 1994), speech perception in noise (Ewertsen and Birk-Nielsen, 1971; Plomp and Mimpen, 1979; Era et al, 1986; Gelfand et al, 1986; but see Kim et al, 2006) or in reverberation (Nabelek and Robinson, 1982), perception of interrupted speech (Bergman, 1971, 1980; Era et al, 1986) and time-compressed speech (Vaughan and Letowski, 1997), localization or lateralization of sound (Abel and Hay, 1996; Abel et al, 2000; Babkoff et al, 2002), and duration discrimination (Abel et al, 1990). It is tempting to attribute these subtle deficits to central causes, since the majority of subjects in the studies cited above had normal or near-normal peripheral hearing sensitivity. However, many of the participants had minor threshold elevation in relation to younger subjects. This raises the issue of whether slight threshold elevation and/or the underlying peripheral auditory dysfunction indicated by elevated thresholds contributed to reduced performance on these tasks. A number of studies cited above have noted that even mild high-frequency hearing loss can affect test performance (e.g., Gelfand et al, 1986) or that, once pure-tone thresholds were controlled statistically, differences between younger and middle-aged subjects were no longer significant (Takahashi and Bacon, 1992, for modulation detection and masking). Moreover, DPOAE amplitudes are smaller in normal-hearing middle-aged adults than in normal-hearing younger adults (e.g., Dorn et al, 1998), suggesting that subtle peripheral auditory changes may begin in this age range. There is evidence, however, that the changes in auditory perception in middle age may at least in part be due to factors beyond the cochlea. Subtle differences between younger and middle-aged subjects are noted in event-related potentials (ERPs) (Geal-Dor et al, 2006) and on the mismatch negativity potential (Alain et al, 2004). 
A recent investigation (Ross et al, 2007) measured the processing of interaural phase differences both in behavioral and physiological tasks, demonstrating that age-dependent changes in binaural functioning occur in middle age. Furthermore, lipreading ability also begins to show age-related declines in middle age (Farrimond, 1959; Pelson and Prather, 1974; Dancer et al, 1994), which could indicate age-related changes in speech understanding on a more central level. As mentioned above, there is a paucity of research specifically designed to examine hearing in middle-aged adults. One recent study, however, did focus specifically on auditory functioning in this age group. Grose et al (2006) measured two aspects of temporal hearing (gap duration detection and gap duration leading to perception of a stop consonant within a word [e.g., say vs. stay]) in younger and middle-aged (40–55 years) adults with normal hearing. Results of both of these experiments suggest that age-related temporal processing changes can be demonstrated in middle age and that they may lead to subtle problems processing speech stimuli. The data summarized above suggest that certain aspects of hearing begin to decline in middle age. The purpose of the present study was to compare the performance of younger and middle-aged adults on a measure of speech understanding that taps into the problems commonly reported by middle-aged listeners and to test whether any measured changes are related to temporal processing ability. Temporal processing was assessed because this ability has been shown to be particularly prone to the negative effects of aging (e.g., Gordon-Salant and Fitzgibbons, 1999; Gordon-Salant et al, 2006; Pichora-Fuller et al, 2006). A clinically feasible measure of temporal processing was used, as we were interested in determining whether currently available tests may be useful in identifying individuals who have difficulty in real-life listening situations. Of particular interest was the assessment of performance of middle-aged adults who did not have specific auditory complaints; hence, our subjects were drawn from a nonclinical population. In addition to objective measures of speech understanding, we also obtained participants' subjective view of how they function under realistic listening conditions. Finally, because of potential differences in age-related speech recognition abilities between men and women (e.g., Dubno et al, 2008), data were collected exclusively from female listeners.

98 citations


Journal ArticleDOI
TL;DR: Age-limited working memory resources are impacted both by the resource demands required for comprehension of syntactically complex sentences and by effortful listening attendant to hearing loss.
Abstract: Older adults with good hearing and an age-matched group with mild to moderate hearing loss heard monosyllabic words in isolation and nine-word sentences that varied in their syntactic complexity. Each of these stimulus types was presented initially below the level of audibility and then increased in loudness in 2 dB increments until the single-word stimuli and all nine words of the sentence stimuli could be correctly reported. A group of young adults with age-normal hearing was also tested for comparison. Results confirmed the common findings of better report accuracy for meaningful sentences than for words heard in isolation without a sentence context, but for the older adults there was also a significant effect of syntactic complexity of the sentence stimuli. This effect was further exaggerated by hearing loss. Results are interpreted in terms of age-limited working memory resources impacted both by the resource demands required for comprehension of syntactically complex sentences and by effortful listening attendant to hearing loss.

86 citations


Journal ArticleDOI
TL;DR: Dynamic FM should be considered for use with persons with CIs to improve speech recognition in noise, and at default CI settings, FM performance is better for Advanced Bionics recipients when compared to Cochlear Corporation recipients, but use of Autosensitivity by Cochlear Corporation users results in equivalent group performance.
Abstract: Background: Use of personal frequency-modulated (FM) systems significantly improves speech recognition in noise for users of cochlear implants (CIs). Previous studies have shown that the most appropriate gain setting on the FM receiver may vary based on the listening situation and the manufacturer of the CI system. Unlike traditional FM systems with fixed-gain settings, Dynamic FM automatically varies the gain of the FM receiver with changes in the ambient noise level. There are no published reports describing the benefits of Dynamic FM use for CI recipients or how Dynamic FM performance varies as a function of CI manufacturer. Data Collection and Analysis: In Experiments 1 and 2, speech recognition was evaluated with a traditional, fixed-gain FM system and a Dynamic FM system using the Hearing in Noise Test sentences in quiet and in classroom noise. A repeated-measures analysis of variance (ANOVA) was used to evaluate effects of CI manufacturer (Advanced Bionics and Cochlear Corporation), type of FM system (traditional and dynamic), noise level, and use of Autosensitivity for users of Cochlear Corporation implants. Experiment 3 determined the effects of Autosensitivity on speech recognition of Cochlear Corporation implant recipients when listening through the speech processor microphone with the FM system muted. A repeated-measures ANOVA was used to examine the effects of signal-to-noise ratio and Autosensitivity. Results: In Experiment 1, use of Dynamic FM resulted in better speech recognition in noise for Advanced Bionics recipients relative to traditional FM at noise levels of 65, 70, and 75 dB SPL. Advanced Bionics recipients obtained better speech recognition in noise with FM use when compared to Cochlear Corporation recipients. When Autosensitivity was enabled in Experiment 2, the performance of Cochlear Corporation recipients was equivalent to that of Advanced Bionics recipients, and Dynamic FM was significantly better than traditional FM. Results of Experiment 3 indicate that use of ...

81 citations


Journal ArticleDOI
TL;DR: Most college students should not be at great risk of hearing loss from their iPods when the devices are used conscientiously, but most iPod users could be at risk for hearing loss given a combination of common practices.
Abstract: Background: The popularity of personal listening devices (PLDs) including iPods has increased dramatically over the past decade. PLDs allow users to listen to music uninterrupted for prolonged periods and at levels that may pose a risk for hearing loss in some listeners, particularly those using earbud earphones that fail to attenuate high ambient noise levels and necessitate increasing volume for acoustic enjoyment. Earlier studies have documented PLD use by teenagers and adults, but omitted college students, who represent a large segment of individuals who use these devices. Purpose: This study surveyed college students' knowledge about, experiences with, attitudes toward, and practices and preferences for hearing health and use of iPods and/or other PLDs. The study was designed to help determine the need, content, and preferred format for educational outreach campaigns regarding safe iPod use to college students. Research Design: An 83-item questionnaire was designed and used to survey college students' knowledge about, experiences with, attitudes toward, and practices/preferences for hearing health and PLD use. The questionnaire assessed Demographics and Knowledge of Hearing Health, iPod Users' Practices and Preferences, Attitudes toward iPod Use, and Reasons for iPod Use. Results: Generally, most college students were knowledgeable about hearing health but could use information about the signs of hearing loss and how to prevent it. Two-thirds of these students used iPods, but not at levels or for durations that should pose excessive risks for hearing loss when listening in quiet environments. However, most iPod users could be at risk for hearing loss given a combination of common practices. Conclusions: Most of these college students should not be at great risk of hearing loss from their iPods when used conscientiously. Some concern is warranted for a small segment of these students who seemed to be most at risk because they listened to their iPods at high volume levels for long durations using earbuds, and reported that they may already have hearing loss due to their iPods.

79 citations


Journal ArticleDOI
TL;DR: Examination of the effect of linear frequency transposition on consonant identification in quiet and in noise at three intervals found a decrease in the number of confusions and an increase in the number of correct identifications over time, and linear frequency transposition improved fricative identification over time.
Abstract: Background: Frequency transposition has gained renewed interest in recent years. This type of processing takes sounds in the unaidable high-frequency region and moves them to the lower frequency region. One concern is that the transposed sounds mask or distort the original low-frequency sounds and lead to a poorer performance. On the other hand, experience with transposition may allow the listeners to relearn the new auditory percepts and benefit from transposition. Purpose: The current study was designed to examine the effect of linear frequency transposition on consonant identification in quiet (50 dB SPL and 68 dB SPL) and in noise at three intervals: the initial fit, after one month of use of transposition (along with auditory training), and after a further month of use (without directed training). Research Design: A single-blind, factorial repeated-measures design was used to study the effect of test conditions (three) and hearing aid setting/time interval (four) on consonant identification. Study Sample: Eight adults with a severe-to-profound high-frequency sensorineural hearing loss participated. Intervention: Participants were fit with the Widex m4-m behind-the-ear hearing aids binaurally in the frequency transposition mode, and their speech scores were measured initially. They wore the hearing aids home for one month and were instructed to complete a self-paced “bottom-up” training regimen. They returned after the training, and their speech performance was measured. They wore the hearing aids home for another month, but they were not instructed to complete any auditory training. Their speech performance was again measured at the end of the two-month trial. Data Collection and Analysis: Consonant performance was measured with a nonsense syllable test (ORCA-NST) that was developed at this facility (Office of Research in Clinical Amplification [Widex]). The test conditions included testing in quiet at 50 dB SPL and 68 dB SPL, and at 68 dB SPL in noise (signal-to-noise ratio [SNR] = +5 dB). The hearing aid conditions included no transposition at initial fit (V1), transposition at initial fit (V2), transposition at one month post-fit (V3), and transposition at two months post-fit (V4). Identification scores were analyzed for each individual phoneme and phonemic class. Repeated-measures ANOVAs were conducted using SPSS software to examine significant differences. Results: For all test conditions (50 dB SPL in quiet, 68 dB SPL in quiet, and 68 dB SPL in noise), a statistically significant difference was found across hearing aid conditions. Conclusions: Linear frequency transposition improved fricative identification over time. Proper candidate selection with appropriate training is necessary to fully realize the potential benefit of this type of processing.

75 citations


Journal ArticleDOI
TL;DR: Tinnitus can be a significant problem for cochlear implant (CI) recipients, but the experienced distress is often moderate; a quarter of CI recipients do demonstrate moderate/severe tinnitus handicap and thus are candidates for tinnitus-specific therapy.
Abstract: Background: While several studies have investigated the presence and annoyance of tinnitus in cochlear implant (CI) recipients, few studies have probed the handicap experienced in association with tinnitus in this population. Purpose: The aim of this study was to use validated self-report measures in a consecutive sample of CI patients who reported tinnitus in order to determine the extent of tinnitus handicap. Research Design: In a retrospective design, a total of 151 patients (80% response rate) responded to a postal questionnaire, and of these, 111 (74%) reported that they currently experienced tinnitus and were asked to complete the full questionnaire. Sampling was performed at a point of a mean 2.9 years postsurgery (SD = 1.8 years). Three established self-report questionnaires were included, measuring tinnitus handicap (Tinnitus Handicap Inventory [THI]), hearing problems (Gothenburg Profile), and finally, a measure of anxiety and depression (Hospital Anxiety and Depression Scale). We analyzed the data by means of Pearson product moment correlations, t-tests, ANOVAs, and chi-square. Results: Data from the validated questionnaires showed relatively low levels of tinnitus distress, moderate levels of hearing problems, and low scores on the anxiety and depression scales. Using the criteria proposed for the THI (which was completed by 107 patients), 35% (N = 38) had a score indicating "no handicap," 30% (N = 32) "mild handicap," 18% (N = 19) "moderate handicap," and 17% (N = 18) "severe handicap." Thus 37 individuals from the total series of 151 reported moderate to severe tinnitus handicap (24.5%). Tinnitus distress was associated with increased hearing problems, anxiety, and depression. Conclusion: Tinnitus can be a significant problem following CI, but the experienced distress is often moderate. However, a quarter of CI recipients do demonstrate moderate/severe tinnitus handicap, and thus are candidates for tinnitus-specific therapy. The level of tinnitus handicap is associated with hearing problems and psychological distress.

73 citations


Journal ArticleDOI
TL;DR: The findings suggest that the FST with and without head shake component is not a reliable screening tool for peripheral vestibular asymmetry in chronic dizzy patients; however, future research may hold promise for the FST as a tool for patients with acute unilateral disorders.
Abstract: BACKGROUND A vestibulospinal test known as the Fukuda stepping test (FST) has been suggested to be a measure of asymmetrical labyrinthine function. However, an extensive review of the performance of this test to identify a peripheral vestibular lesion has not been reported. PURPOSE The purpose of this study was to evaluate the sensitivity and specificity of the standard FST and a head shaking variation for identification of a peripheral vestibular system lesion. RESEARCH DESIGN In this retrospective review, we compared performance on the FST with and without a head shaking component to electronystagmography (ENG) caloric irrigation unilateral weakness results. STUDY SAMPLE We studied these factors in 736 chronic dizzy patients. RESULTS Receiver operating characteristic (ROC) analysis and area under the curve (AUC) indicated no significant benefit to performance from the head shaking variation compared to the standard FST in identifying labyrinthine weakness as classified by caloric unilateral weakness results. CONCLUSIONS These findings suggest that the FST with and without head shake component is not a reliable screening tool for peripheral vestibular asymmetry in chronic dizzy patients; however, future research may hold promise for the FST as a tool for patients with acute unilateral disorders.
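The ROC/AUC analysis reported above can be made concrete with a short sketch. This is a hedged illustration, not the authors' analysis: the rotation-angle variable, the 30-degree cutoff, and all data below are invented for demonstration.

```python
# Hypothetical sketch (not the authors' code): evaluating a screening measure
# (here, an invented FST rotation angle) against a dichotomous gold standard
# (caloric unilateral weakness). All data are simulated for illustration.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 200
has_lesion = rng.integers(0, 2, size=n)                       # 1 = caloric unilateral weakness
rotation = 20 + 15 * has_lesion + rng.normal(0, 25, size=n)   # FST angle, degrees

auc = roc_auc_score(has_lesion, rotation)   # area under the ROC curve
cutoff = 30                                 # illustrative abnormality cutoff
sensitivity = np.mean(rotation[has_lesion == 1] >= cutoff)
specificity = np.mean(rotation[has_lesion == 0] < cutoff)
print(f"AUC = {auc:.2f}, sensitivity = {sensitivity:.2f}, specificity = {specificity:.2f}")
```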

73 citations


Journal ArticleDOI
TL;DR: The age at achievement of benchmarks such as diagnosis, fitting of amplification, and enrollment in early intervention in children who were screened for hearing loss is on target with stated goals provided by the American Academy of Pediatrics and the Joint Committee on Infant Hearing.
Abstract: Background: Newborn Hearing Screening (NHS) programs aim to reduce the age of identification and intervention of infants with hearing loss. It is generally accepted that NHS programs achieve that outcome, but few studies have compared children who were screened to those not screened in the same study and during the same time period. This study takes advantage of the emerging screening programs in California to compare children, based on screening status, on age at intervention milestones. Purpose: The purpose of this study was to compare the outcomes of cohorts of children with hearing loss, some screened for hearing loss at birth and others not screened. Specifically, the measures compared are the benchmarks suggested by the Joint Committee on Infant Hearing for determining the quality of screening programs. Study Sample: Records from 64 children with bilateral permanent hearing loss who were enrolled in a study of communication outcomes served as data for this study. Of these children, 47 were screened, with 39 failing and 8 passing, and 17 were not screened. Intervention: This study was observational and involved no planned intervention. Data Collection and Analysis: Outcome benchmarks included age at diagnosis of hearing loss, age at fitting of amplification, and age at enrollment in early intervention. Delays between diagnosis and fitting or enrollment were also calculated. Hearing screening status of the children included screened with fail outcome, screened with pass outcome, and not screened. Analysis included simple descriptive statistics, and t-tests were used to compare outcomes by groups: screened/not screened, screened pass/screened fail, and passed/not screened. Results: Children with hearing loss who had been screened as newborns were diagnosed with hearing loss 24.62 months earlier, fitted with hearing aids 23.51 months earlier, and enrolled in early intervention 19.98 months earlier than those infants who were not screened. Screening status did not influence delays in fitting of amplification or enrollment in intervention following diagnosis. Eight of the infants with hearing loss (12.5%) passed the NHS, and the ages at benchmarks of those children were slightly but not significantly earlier than those of infants who had not been screened. Conclusions: The age at achievement of benchmarks such as diagnosis, fitting of amplification, and enrollment in early intervention in children who were screened for hearing loss is on target with stated goals provided by the American Academy of Pediatrics and the Joint Committee on Infant Hearing. In addition, children who are not screened for hearing loss continue to show dramatic delays in achievement of benchmarks, by as much as 24 months. Evaluating achievement of benchmarks during the start-up period of NHS programs allowed a direct evaluation of the ability of these screening programs to meet stated goals. This demonstrates, unequivocally, that the NHS process itself is responsible for improvements in age at diagnosis, hearing aid fitting, and enrollment in intervention.

Journal ArticleDOI
TL;DR: Measurements are presented showing that the long-term signal-to-noise ratio (SNR) at the output of an amplification system that includes amplitude compression may be higher or lower than the long-term SNR at the input, dependent on interactions among the actual long-term input SNR, the modulation characteristics of the signal and noise being mixed, and the amplitude compression characteristics of the system under test.
Abstract: We present measurements showing that the long-term signal-to-noise ratio (SNR) at the output of an amplification system that includes amplitude compression may be higher or lower than the long-term SNR at the input, dependent on interactions among the actual long-term input SNR, the modulation characteristics of the signal and noise being mixed, and the amplitude compression characteristics of the system under test. The effects demonstrated with the measurements shown here have implications for choices of test methods when comparing alternative hearing aid systems. The results of speech-recognition tests intended to compare alternative systems may be misleading or misinterpreted if the above interactions are not considered.
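The article reports measurements rather than code, but the idea of a long-term output SNR for a compressive system can be sketched with the phase-inversion technique of Hagerman and Olofsson (2004), a common way to separate processed signal and noise. The toy compressor and test signals below are assumptions for illustration, not the system the authors measured.

```python
# Minimal sketch of phase-inversion output-SNR measurement. The compressor here
# is a toy instantaneous compressor, NOT the article's system; the technique is
# Hagerman and Olofsson's (2004) signal/noise separation.
import numpy as np

def compress(x, threshold=0.1, ratio=3.0):
    """Toy instantaneous compressor: levels above threshold are compressed."""
    mag = np.maximum(np.abs(x), 1e-12)
    gain = np.where(mag > threshold, (mag / threshold) ** (1.0 / ratio - 1.0), 1.0)
    return x * gain

def long_term_snr_db(signal, noise):
    """Long-term SNR from total signal and noise energies."""
    return 10 * np.log10(np.sum(signal ** 2) / np.sum(noise ** 2))

rng = np.random.default_rng(1)
fs = 16000
t = np.arange(fs) / fs
# Crude amplitude-modulated tone standing in for speech; steady noise masker.
speech = 0.2 * np.sin(2 * np.pi * 220 * t) * (0.5 + 0.5 * np.sign(np.sin(2 * np.pi * 4 * t)))
noise = 0.05 * rng.normal(size=fs)

out_plus = compress(speech + noise)     # S + N through the system
out_minus = compress(speech - noise)    # S - N through the same system
sig_est = (out_plus + out_minus) / 2    # noise approximately cancels
noise_est = (out_plus - out_minus) / 2  # signal approximately cancels

print(f"input SNR  = {long_term_snr_db(speech, noise):5.1f} dB")
print(f"output SNR = {long_term_snr_db(sig_est, noise_est):5.1f} dB")
```

Because a compressor applies level-dependent gain, signal-dominated and noise-dominated epochs receive different gains, which is why the long-term output SNR can rise or fall relative to the input, as the abstract describes.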

Journal ArticleDOI
TL;DR: Results suggest that LFT is a potentially useful hearing aid feature for school-age children with a precipitous HF sensorineural hearing loss.
Abstract: Purpose: To investigate the clinical efficacy of linear frequency transposition (LFT) for a group of school-age children. Research Design: A nonrandomized, within-subject design was implemented to investigate vowel and consonant recognition and fricative articulation of school-age children utilizing this feature. Study Sample: Ten children, aged 6 years, 3 months, to 13 years, 6 months, from a special education school district participated in this study. Individual hearing thresholds ranged from normal to moderate in the low frequencies and from severe to profound in the high frequencies. Average language age of children was within 2.2 years of chronological age. Data Collection and Analysis: Phoneme recognition and fricative articulation performance were compared for three conditions: (1) with the children’s own hearing aids, (2) with an advanced hearing instrument utilizing LFT, and (3) with the same instrument without LFT. Nonsense syllable materials were administered at 30 and 50 dB HL input levels. Fricative articulation was measured by analyzing speech samples of conversational speech and oral reading passages. A repeated-measures general linear model was utilized to determine the significance of any noted effects. Results: Results indicated significant improvements in vowel and consonant recognition with LFT for the 30 dB HL input level. Significant improvement in the accuracy of production of high-frequency (HF) fricatives after six weeks of use of LFT was also observed. Conclusions: These results suggest that LFT is a potentially useful hearing aid feature for school-age children with a precipitous HF sensorineural hearing loss.

Journal ArticleDOI
TL;DR: Prefitting hearing aid counseling can be advantageous to hearing aid outcome; the authors recommend the addition of prefitting counseling to address expectations associated with quality of life and self-image.
Abstract: Background: Data suggest that having high expectations about hearing aids results in better overall outcome. However, some have postulated that excessively high expectations will result in disappointment and thus poor outcome. It has been suggested that counseling patients with unrealistic expectations about hearing aids prior to fitting may be beneficial. Data, however, are mixed as to the effectiveness of such counseling, in terms of both changes in expectations and final outcome. Purpose: The primary purpose of this study was to determine whether supplementing prefitting counseling with demonstration of real-world listening can (1) alter expectations of new hearing aid users and (2) increase satisfaction over verbal-only counseling. Secondary goals of the study were to examine (1) the relationship between prefitting expectations and postfitting outcome, and (2) the effect of hearing aid fine-tuning on hearing aid outcome. Research Design: Sixty new hearing aid users were fitted binaurally with Beltone Oria behind-the-ear digital hearing aids. Forty participants received prefitting counseling and demonstration of listening situations with the Beltone AVE™ (Audio Verification Environment) system; 20 received prefitting counseling without a demonstration of listening situations. Hearing aid expectations were measured at initial contact and following prefitting counseling. Reported hearing aid outcome was measured after eight to ten weeks of hearing aid use. Study Sample: Sixty new hearing aid users aged between 55 and 81 years with symmetrical sensorineural hearing loss. Intervention: Participants were randomly assigned to one of three experimental groups, between which the prefitting counseling and follow-up differed: Group 1 received prefitting counseling in combination with demonstration of listening situations. Additionally, if the participant had complaints about sound quality at the follow-up visit, the hearing aids were fine-tuned using the Beltone AVE system. Group 2 received prefitting counseling in combination with demonstration of listening situations with the Beltone AVE system, but no fine-tuning was provided at follow-up. Group 3 received prefitting hearing aid counseling that did not include demonstration of listening, and the hearing aids were not fine-tuned at the follow-up appointment. Results: The results showed that prefitting hearing aid counseling had small but significant effects on expectations. The two forms of counseling did not differ in their effectiveness at changing expectations; however, anecdotally, we learned from many participants that they enjoyed listening to the auditory demonstrations and that they found them to be an interesting listening exercise. The data also show that positive expectations result in more positive outcome and that hearing aid fine-tuning is beneficial to the user. Conclusions: We conclude that prefitting counseling can be advantageous to hearing aid outcome and recommend the addition of prefitting counseling to address expectations associated with quality of life and self-image. The data emphasize the need to address unrealistic expectations cautiously prior to fitting hearing aids, so as not to decrease expectations to the extent of discouraging and demotivating the patient. Data also show that positive expectations regarding the impact hearing aids will have on psychosocial well-being are important for successful hearing aid outcome.

Journal ArticleDOI
TL;DR: Attention to either the ipsilateral, evoking stimulus or the contralateral suppressor causes a top-down, cortically mediated release from inhibition at the level of the cochlea that is measurable with common audiologic protocols and instrumentation.
Abstract: Purpose: To determine cortical influence on the efferent medial olivocochlear bundle system. Research Design: The effects of attention on contralateral suppression (CS) of click-evoked otoacoustic emissions were measured. Study Sample: Fifteen normal-hearing listeners. Results: CS was greatest in the nonattending condition and decreased significantly when attending to the click or broadband noise suppressor. The effects of attention on CS were not frequency dependent or due to changes in recording noise measures. Conclusions: Attention to either the ipsilateral, evoking stimulus or the contralateral suppressor causes a top-down, cortically mediated release from inhibition at the level of the cochlea that is measurable with common audiologic protocols and instrumentation. Future studies assessing the effects of attention on CS of click-evoked otoacoustic emissions in normal controls and individuals with various auditory or attentional deficits may provide valuable information about the capabilities of the cortex to affect peripheral processing in a normal and/or pathological system.

Journal ArticleDOI
TL;DR: The sentence equivalence study showed that post-adjustment, sentence intelligibility increased by 18.7 percent for each 1 dB increase in signal-to-noise ratio; the NA LiSN-S is a potentially valuable tool for assessing auditory stream segregation skills in children.
Abstract: Background The Listening in Spatialized Noise-Sentences test (LiSN-S) was originally developed in Australia to assess auditory stream segregation skills in children with suspected central auditory processing disorder (CAPD). The software produces a three-dimensional auditory environment under headphones. A simple repetition-response protocol is utilized to determine speech reception thresholds (SRTs) for sentences presented from 0 degrees azimuth in competing speech. The competing speech (looped children's stories) is manipulated with respect to its location (0 degrees vs. +90 degrees and -90 degrees azimuth) and the vocal quality of the speaker(s) (same as, or different to, the speaker of the target stimulus). Performance is measured as two SRT and three advantage measures. The advantage measures represent the benefit in dB gained when either talker, spatial, or both talker and spatial cues combined are incorporated in the maskers. Purpose The objective of this research was to develop a version of the LiSN-S suitable for use in the United States and Canada. The original sentences and children's stories were reviewed for unfamiliar semantic items and rerecorded by native North American speakers. Research design In a descriptive design, a sentence equivalence study was conducted to determine the relative intelligibility of the rerecorded sentences and adjust the amplitude of the sentences for equal intelligibility. Normative data and test-retest reliability data were then collected. Study sample Twenty-four children with normal hearing aged 8 years, 3 months, to 10 years, 0 months, took part in the sentence equivalence study. Seventy-two normal-hearing children aged 6 years, 2 months, to 11 years, 10 months, took part in the normative data study. Thirty-six children returned between two and three months after the initial assessment for retesting. Participants were recruited from sites in Cincinnati, Dallas, and Calgary. Results The sentence equivalence study showed that post-adjustment, sentence intelligibility increased by 18.7 percent for each 1 dB increase in signal-to-noise ratio. Analysis of the normative data revealed no significant differences on any performance measure as a consequence of data collection site or gender. Inter- and intra-participant variation was minimal. A trend of improved performance as a function of increasing age was found across performance measures, and cutoff scores, calculated as two standard deviations below the mean, were adjusted for age. Test-retest differences were not significant on any measure of the North American (NA) LiSN-S (p ranging from .080 to .862). Mean test-retest differences on the various NA LiSN-S performance measures ranged from 0.1 dB to 0.6 dB. One-sided critical difference scores calculated from the retest data ranged from 3 to 3.9 dB. These scores, which take into account mean practice effects and day-to-day fluctuations in performance, can be used to determine whether a child has improved on the NA LiSN-S on retest. Conclusions The NA LiSN-S is a potentially valuable tool for assessing auditory stream segregation skills in children. The availability of one-sided critical difference scores makes the NA LiSN-S useful for monitoring listening performance over time and determining the effects of maturation, compensation (such as an assistive listening device), or remediation.
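As a rough sketch of the normative machinery described above (cutoffs two standard deviations below the mean, and one-sided critical differences that fold in mean practice effects and test-retest variability), the following uses invented numbers, not the NA LiSN-S norms:

```python
# Illustrative only: a normative cutoff (mean - 2 SD) and a one-sided 95%
# critical difference from retest data. Values are invented, not NA LiSN-S norms.
import numpy as np

# Hypothetical normative advantage scores (dB) for one age band
norm_scores = np.array([12.1, 10.8, 13.0, 11.5, 9.9, 12.4, 10.2, 11.8])
cutoff = norm_scores.mean() - 2 * norm_scores.std(ddof=1)

# Hypothetical retest-minus-test differences (dB) across children
diffs = np.array([0.4, -0.2, 0.9, 0.1, 0.6, -0.3, 0.5, 0.2])
practice_effect = diffs.mean()
# One-sided 95% critical difference: mean practice effect + 1.645 * SD(differences)
critical_difference = practice_effect + 1.645 * diffs.std(ddof=1)

print(f"cutoff = {cutoff:.1f} dB, critical difference = {critical_difference:.1f} dB")
```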

Journal ArticleDOI
TL;DR: This test may be useful as part of the clinical battery for identifying binaural integration weaknesses and referring individuals for auditory rehabilitation for interaural asymmetry (ARIA).
Abstract: Purpose: To establish normative data for children and to characterize developmental differences in performance with the free recall version of the Randomized Dichotic Digits Test. Research Design: Group comparison of behavioral data derived from administration of the Randomized Dichotic Digits Test. Study Sample: Children from 10 to 18 years of age (167) and young adults from 19 to 28 years of age (50). Results: Performance improved with age across all types of digit pairs, especially in the left ear, leading to smaller interaural asymmetries among older participants. A left-ear advantage was produced by 39 subjects (18%), only two of whom were left-handed. Normative data are reported for right and left ear scores and for interaural asymmetry (percent correct difference between the two ears) under one-, two-, and three-pair conditions of the test and for interaural asymmetry across the entire test. A unilateral deficit was identified in children (15.5%) and young adults (12%) for the left ear and in children (11.3%) and young adults (6%) for the right ear. A bilateral deficit was also identified in children (6.5%) and young adults (6%). Conclusions: This test may be useful as part of the clinical battery for identifying binaural integration weaknesses and referring individuals for auditory rehabilitation for interaural asymmetry (ARIA).

Journal ArticleDOI
TL;DR: The results suggest that the directional microphone and the SII-based noise reduction algorithm may improve the SNR of the listening environments, and both the HINT and the ANL may be used to study their benefits.
Abstract: Purpose: To measure the subjective and objective improvement of speech intelligibility in noise offered by a commercial hearing aid that uses a fully adaptive directional microphone and a noise reduction algorithm that optimizes the Speech Intelligibility Index (SII). Research Design: Comparison of results on the Hearing in Noise Test (HINT) and the Acceptable Noise Level task (ANL). Study Sample: Eighteen participants with varying configurations of sensorineural hearing loss. Results: Both the directional microphone and the noise reduction algorithm improved the speech-in-noise performance of the participants. The benefits reported were higher for the directional microphone than for the noise reduction algorithm. A moderate correlation was noted between the benefits measured on the HINT and the ANL for the directional microphone condition, the noise reduction condition, and the directional microphone plus noise reduction condition. Conclusions: These results suggest that the directional microphone and the SII-based noise reduction algorithm may improve the SNR of the listening environments, and both the HINT and the ANL may be used to study their benefits.

Journal ArticleDOI
TL;DR: Using the multiple-stimulus ASSR, infants with normal hearing referred for diagnostic electrophysiological threshold assessment can now be quickly confirmed as having normal thresholds for four frequencies in both ears.
Abstract: Background and Purpose: Multiple auditory steady-state responses (ASSRs) to stimuli modulated at approximately 80 Hz are a promising technique for threshold estimation in infants, but additional data are required. Results: ASSR thresholds, estimated from the 50 percent point of cumulative percent-present distributions, were 36, 30, 24, and 15 dB HL at 500, 1000, 2000, and 4000 Hz, respectively. Most (≥90%) of the infants showed present ASSRs at 49, 45, 36, and 32 dB HL at 500, 1000, 2000, and 4000 Hz, respectively, with no differences in the results of younger versus older infants. When responses were present for all stimuli for both ears, most infants showed all eight responses within five minutes. Compared to ipsilateral responses, ASSRs in the contralateral EEG (electroencephalogram) channel were smaller and often absent. Conclusions: Based upon these data and the literature, normal AC ASSR "screening" levels would be 50, 45, 40, and 40 dB HL at 500, 1000, 2000, and 4000 Hz, respectively. Using the multiple-stimulus ASSR, infants with normal hearing referred for diagnostic electrophysiological threshold assessment can now be quickly confirmed as having normal thresholds for four frequencies in both ears.
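A minimal sketch of the threshold-estimation logic described above, assuming thresholds are read off a cumulative percent-present distribution by linear interpolation; the levels and percentages are hypothetical, not the study's data:

```python
# Assumed procedure (not the authors' code): read the group ASSR "threshold" as
# the level at which 50% of infants show a present response, and a "screening"
# level near the point where 90% show one. Levels/percentages are hypothetical.
import numpy as np

levels_dB_HL = np.array([10, 20, 30, 40, 50])        # hypothetical test levels
percent_present = np.array([5, 30, 65, 92, 100.0])   # hypothetical cumulative %

threshold_50 = np.interp(50, percent_present, levels_dB_HL)
level_90 = np.interp(90, percent_present, levels_dB_HL)
print(f"50% threshold ~ {threshold_50:.0f} dB HL; 90%-present level ~ {level_90:.0f} dB HL")
```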

Journal ArticleDOI
TL;DR: The results indicate that different types of training are differentially effective with regard to improving recognition of musical instruments presented through a degraded signal, which has practical implications for the auditory rehabilitation of persons who use cochlear implants.
Abstract: Cochlear implants (CI) are assistive hearing devices that provide significant benefit in perception of speech for individuals with pre- and postlingual deafness. Longitudinal studies reveal that most recipients of CIs can achieve significant improvement in speech perception within three months after implantation as a result of everyday use; some CI users reach maximum benefit after 36 months postimplantation (Tyler et al, 1997; Ruffin et al, 2007). This typical pattern of improved speech reception is possible in part because the implant is well-suited for transmitting the most salient features of speech, especially in quiet. Furthermore, CI recipients have ready access to a number of visual cues (e.g., speech reading, closed captioning) that can support the development of speech reception in everyday life. Unfortunately, current limitations in implant technology result in less effective transmission of salient features needed for accurate music perception and enjoyment. In particular, recipients of CIs are significantly less accurate than listeners with normal hearing in music perception tasks such as pitch perception (Gfeller et al, 2005, 2007; Gfeller, Turner, et al, 2002; Kong et al, 2004; McDermott, 2004), melody recognition (Gfeller et al, 2005, 2007; Gfeller, Turner, et al, 2002; Kong et al, 2004; McDermott, 2004; Olszewski et al, 2005), and recognition of musical instruments (timbre recognition) (Dorman et al, 1991; Gfeller and Lansing, 1991; Gfeller et al, 1997, 1998; Gfeller, Witt, Adamek, et al, 2002; Gfeller, Witt, Woodworth, et al, 2002; McDermott and McKay, 1997; Pijl, 1997; Fujita and Ito, 1999; Leal et al, 2003; Schon et al, 2004; Pressnitzer et al, 2005; Laneau et al, 2006). A few individuals, however, commonly referred to as "star users," are able to recognize melodies through pitch perception. Furthermore, while CI recipients demonstrate significantly improved speech reception as a result of everyday experience over time, most CI recipients do not enjoy the same level of improvement for music perception and enjoyment as a result of incidental exposure over time (Gfeller et al, 2001, 2008). Some type of direct effort or rehabilitation is required for many CI recipients to improve music perception.

Journal ArticleDOI
TL;DR: Evidence is provided of the need to institute job-specific AFFD protocols, move beyond the pure-tone audiogram, and establish the validity of test protocols.
Abstract: Background: Auditory fitness for duty (AFFD) refers to the possession of hearing abilities sufficient for safe and effective job performance. In jobs such as law enforcement and piloting, where the ability to hear is critical to job performance and safety, hearing loss can decrease performance, even to the point of being hazardous to self and others. Tests of AFFD should provide an employer with a valid assessment of an employee's ability to perform the job safely, without discriminating against the employee purely on the basis of hearing loss. Purpose: The purpose of this review is to provide a basic description of the functional hearing abilities required in hearing-critical occupations, and a summary of current practices in AFFD evaluation. In addition, we suggest directions for research and standardization to ensure best practices in the evaluation of AFFD in the future. Research Design: We conducted a systematic review of the English-language peer-reviewed literature in AFFD. "Popular" search engines were consulted for governmental regulations and trade journal articles. We also contacted professionals with expertise in AFFD regarding research projects, unpublished material, and current standards. Results: The literature review provided information regarding the functional hearing abilities required to perform hearing-critical tasks, the development of and characteristics of AFFD protocols, and the current implementation of AFFD protocols. Conclusions: This review paper provides evidence of the need to institute job-specific AFFD protocols, move beyond the pure-tone audiogram, and establish the validity of test protocols. These needs are arguably greater now than in times past.

Journal ArticleDOI
TL;DR: This study evaluated the following question for its potential usefulness as a determinant of patient readiness for amplification: "On a scale from 1 to 10, 1 being the worst and 10 being the best, how would you rate your overall hearing ability?"
Abstract: Background: Hearing threshold data are not particularly predictive of self-perceived hearing handicap or readiness to pursue amplification. Poor correlations between these measures have been reported repeatedly. When a patient is evaluated for hearing loss, it is common to collect both threshold data and the individual's self-perception of hearing ability. This is done to help the patient make an appropriate choice related to the pursuit of amplification or other communication strategies. It would be valuable, though, for the audiologist to be able to predict which patients are ready for amplification, which patients require more extensive counseling before pursuing amplification, and which patients simply are not ready for amplification regardless of the audiometric data. Purpose: The purpose of this study was to evaluate the following question for its potential usefulness as a determinant of patient readiness for amplification: “On a scale from 1 to 10, 1 being the worst and 10 being the best, how would you rate your overall hearing ability?” Research Design: The test–retest reliability and the predictive value of the question, based on final hearing aid purchase, were evaluated in a private practice setting. Study Sample: Eight hundred forty hearing-impaired adults in the age range from 18 to 95 years. Data Collection and Analysis: Data were collected retrospectively from patient files. Results and Conclusion: Results were repeatable and supported the use of this question in similar clinical settings.

Journal ArticleDOI
TL;DR: Although ASSR is not a suitable method to estimate auditory thresholds in this group of patients, perhaps it can be utilized as an adjunct technique for the differential diagnosis of this disorder.
Abstract: Background: The relation between the auditory steady-state response (ASSR) and behavioral audiometric thresholds requires further clarification in the case of adults with auditory neuropathy/auditory dys-synchrony (AN/AD). Purpose: The aim of this study was to compare pure-tone audiometric threshold (PTAT) and ASSR in adults with AN/AD. Study Sample: Sixteen adult participants (32 ears) with AN/AD, ranging in age from 14 to 34 years. Data Collection and Analysis: PTAT and ASSR with high-rate stimulus modulation were measured at four octave frequencies, 500, 1000, 2000, and 4000 Hz, in each ear. The behavioral auditory thresholds were compared with ASSR estimated thresholds at each frequency. Analyses included comparison of group means and coefficients of correlation. Results: The average pure-tone thresholds revealed a moderate hearing loss in the AN/AD patients with a focus on the low frequencies. Low-frequency loss audiograms were observed in almost two-thirds of the participants. The estimated auditory thresholds measured by ASSR at all frequencies were substantially higher than the PTAT measures. There were no significant correlations between the PTAT and ASSR measurements at the 1000, 2000, and 4000 Hz frequencies (p > .05); the correlation between the two measures at 500 Hz (p = .029, r = 0.39) was weak but significant. Conclusion: There was no significant correlation between the PTAT and ASSR results at the majority of the frequencies usually tested in adults with AN/AD. Although ASSR is not a suitable method to estimate auditory thresholds in this group of patients, perhaps it can be utilized as an adjunct technique for the differential diagnosis of this disorder.

Journal ArticleDOI
TL;DR: Gender and hearing aid experience did not influence these patients' responses on the IOI-HA, and all respondents were satisfied with their hearing aids and the practice that dispensed them, suggesting that the advanced hearing aid technology used here had a positive effect on patients' ratings and that the IOI-HA norms should be updated periodically to reflect changes in technology.
Abstract: PURPOSE To use the International Outcome Inventory for Hearing Aids (IOI-HA) with patients having advanced hearing aid technology to assess their satisfaction and benefit focusing on gender and experience effects, compare to norms, and use the IOI-HA and a practice-specific questionnaire to monitor the quality of the services provided by a dispensing practice. RESEARCH DESIGN A study of 160 potential participants who had worn their newly purchased multichannel digital hearing aids having directional microphones for at least three months, completed a trial period, and should have had time to acclimatize to them. English-speaking, private or insurance paying, competent, adult patients from a private practice were mailed a 12-item practice-specific questionnaire and the seven-item IOI-HA. RESULTS Of the 160 questionnaires mailed, 73 were returned for a 46% return rate. Of those, 64 were useable. Participants included male (34) and female (30), new (30) and previous (34) hearing aid users, who self-selected their participation by returning the questionnaires. The practice-specific questionnaire assessed patients' demographics and the quality of services received. The IOI-HA was analyzed according to an overall score and on two different factor scores. A power analysis revealed that 19 respondents per group were needed for the IOI-HA results to have a statistical power of .80 and probability of a Type II error of .20 for detecting a significant difference at the p < 0.05 level. Similar to earlier studies, no significant differences were observed either for any of the main effects or interactions for gender or user experience for the two IOI-HA factors and overall scores. A significant, but weak, positive correlation (r = .34; df = 63; p < .05) was observed between patients' overall satisfaction as indicated from the IOI-HA and the practice-specific quality assurance satisfaction question. T-tests on IOI-HA items 4 (satisfaction) and 7 (quality of life) revealed that the present participants' responses were significantly higher than for those in the normative study. CONCLUSIONS Gender and hearing aid experience did not influence these patients' responses on the IOI-HA, and all respondents were satisfied with their hearing aids and the practice that dispensed them. No major differences were found between these patients' IOI-HA results and normative data suggesting that both sets of respondents were satisfied with their hearing aids. However, limited statistical comparisons for the satisfaction and quality of life items revealed significant differences in favor of these participants' scores over those in the normative study. This suggested that the advanced hearing aid technology used here had a positive effect on patients' ratings and that the IOI-HA norms should be updated periodically to reflect changes in technology.
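The stated power analysis (19 respondents per group for power of .80 at p < .05) can be reconstructed in outline with standard tools; since the abstract does not state the assumed effect size, the sketch below solves for the effect size those numbers imply for an independent-samples t-test:

```python
# Hedged reconstruction of the stated power analysis (not the study's code):
# with 19 respondents per group, alpha = .05, and power = .80, what effect size
# (Cohen's d) can an independent-samples t-test detect?
from statsmodels.stats.power import TTestIndPower

d = TTestIndPower().solve_power(effect_size=None, nobs1=19, alpha=0.05, power=0.80)
print(f"detectable effect size: d = {d:.2f}")  # roughly d = 0.9, a large effect
```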

Journal ArticleDOI
TL;DR: For participants with mild to moderate gradually sloping hearing loss and for those with steeply sloping losses, the UCL-5 dB and the 2-kHz SL methods resulted in the highest scores without exceeding listeners' UCLs.
Abstract: Speech recognition measures are widely recognized as an important component of the audiological test battery. Although there are professional guidelines for administering speech recognition threshold tests (American Speech-Language-Hearing Association, 1988), procedures related to suprathreshold speech recognition testing have never been standardized. It has been argued that the best approach to speech recognition testing is to present stimuli over a range of levels (Boothroyd, 1968; Ullrich and Grimm, 1976; Beattie and Warren, 1982; Beattie and Raffin, 1985; Beattie and Zipp, 1990; Boothroyd, 2008). This argument is supported by evidence that the level corresponding to maximum word recognition scores varies considerably across individuals. Beattie and Raffin (1985), for example, reported that levels corresponding to maximum recognition scores can vary from 20 to 60 dB SL. While these studies provide evidence that testing speech recognition in quiet at multiple levels is the best practice, the use of multiple presentation levels is not common in clinical settings (Martin and Forbis, 1978; Martin and Morris, 1989; DeBow and Green, 2000). In a study on the practice patterns of Canadian audiologists, 89% of the respondents reported that they did not test at multiple presentation levels (DeBow and Green, 2000). In another study on practice patterns of audiologists, 74% of the respondents reported using a single SL; however, the level used was not specified (Martin and Morris, 1989). Martin and Morris (1989) also reported that “several” respondents reported using more than one level in speech recognition testing, with the most common levels being most comfortable level (MCL) and 90 dB HL. When a single presentation level is used to assess suprathreshold word recognition abilities during routine diagnostic testing, a common objective is to test at a level that will result in maximum performance. With this objective in mind, a presentation level that results in maximum performance may be considered the “optimal presentation level.” Methods for determining the presentation level for suprathreshold speech testing fall into three broad categories: (1) methods based on SL, a fixed level above a reference threshold; (2) methods based on a fixed sound pressure level (SPL); and (3) methods based on loudness measures (e.g., most comfortable loudness level). The most popular approach is to set the presentation level at a particular SL above the speech recognition threshold (SRT). Martin and Morris (1989) noted that over half of the 74% of audiologists who used a single presentation level chose a level of 40 dB SL re SRT, while 30% used a level of 30 dB SL re SRT. In a later survey, Martin et al (1998) also noted that 67% of audiologists used a level referenced to the SRT, but a specific SL was not reported. Similar to their 1989 survey, Martin et al (1994) reported that 75% of audiologists tested at a specified SL, typically 40 dB SL. It is important to note, however, that the 40 dB SL presentation level re SRT is likely to reach uncomfortable loudness levels for the majority of people with an average hearing loss greater than 50 dB HL (Kamm et al, 1978). Kamm and her colleagues suggested using a fixed level of 95 dB SPL (75 dB HL) as an alternative to 40 dB SL (Kamm et al, 1983). 
For listeners with mild to moderate sensorineural hearing loss, Kamm et al reported that maximum word recognition scores were obtained for only 60% of the participants when the 40 dB SL re SRT method was used, but maximum word recognition scores were obtained for 76% of the participants when a presentation level of 95 dB SPL was used. As noted earlier, some clinicians also test at both the most comfortable loudness level (MCL) and a higher level approaching the uncomfortable loudness level (UCL). Testing at MCL seems logical given that “comfortable loudness” is a rationale underlying several hearing aid fitting prescriptions. There is little evidence, however, to support the MCL approach for testing word recognition in individuals with hearing loss, if finding the maximum speech recognition score is the goal. While maximum scores are generally obtained at MCL for listeners with normal hearing, higher levels are often needed for individuals with hearing loss (Ullrich and Grimm, 1976; Beattie and Warren, 1982; Beattie and Raffin, 1985; Beattie and Zipp, 1990). There is inconsistent support for the use of speech UCL as a presentation level. In two studies, Beattie and his colleagues reported that the level corresponding to maximum recognition scores was the same as the UCL for 79–90% of the cases (Beattie and Warren, 1982; Beattie and Zipp, 1990). Dirks et al (1981) reported, however, that recognition scores for words presented below the listeners' UCLs were equal to or better than scores presented at UCL. Most of the work evaluating different methods for obtaining the maximum word recognition score has been conducted using patients with mild to moderate hearing losses. The present study extends the work of previous investigators described above by evaluating a wider range of hearing losses and by examining additional methods for determining the optimal presentation level for suprathreshold speech recognition testing. For the purposes of this study, the “optimal presentation level” was defined as the level that produced the maximum speech recognition score without exceeding the participant's UCL. Listener groups consisted of people with gradually sloping mild, moderate, and moderately severe/severe losses. In addition, a group of individuals with steeply sloping losses was included to examine the impact of hearing loss configuration. Five different presentation levels were evaluated:
1) A fixed level of 95 dB SPL, as recommended by Kamm et al. (1983)
2) The individually-determined MCL
3) 5 dB below the individually-determined UCL (UCL-5 dB)
4) A sensation level referenced to the SRT
5) A sensation level referenced to the 2-kHz threshold
The choice of sensation levels varied with the degree of hearing loss and was determined from several criteria including UCL (described below). A sensation level referenced to 2-kHz was evaluated as an alternative to the sensation level re: SRT due to the importance of 2-kHz for consonant recognition (French and Steinberg, 1947). A sensation level referenced to the 2-kHz threshold rather than the SRT may result in better audibility in the high-frequency regions, particularly for individuals with steeply sloping losses. The UCL-5 dB level was evaluated because it should maximize audibility while avoiding the problem of loudness discomfort. To the extent that maximum audibility corresponds to maximum intelligibility, the UCL-5 dB level may serve as the “gold standard” against which the other methods can be compared.
The general approach used in the present study was to measure phoneme recognition over a range of levels. Recognition scores were then extracted for the five presentation-level methods of interest. Based on the importance of the 2-kHz region for speech intelligibility, it was predicted that scores obtained at a sensation level referenced to the 2-kHz threshold would be equivalent to those obtained at UCL-5 dB. Based on previous research, it was also expected that scores at MCL would be lower than scores at UCL-5 dB for one or more participant groups. Specific predictions were not formulated for the other presentation levels.
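To make the five methods concrete, the sketch below computes each candidate presentation level from a listener's basic audiometric measures. This is a minimal illustration, not the study's protocol: the function name, the sensation-level defaults, and the SPL-to-HL conversion are assumptions (the study chose sensation levels individually from several criteria, including UCL).

```python
# Hypothetical sketch (not the study's procedure): deriving the five candidate
# presentation levels compared in this paper. All values are in dB; inputs and
# defaults are illustrative assumptions.

def candidate_presentation_levels(srt_dB_HL, thr_2k_dB_HL, mcl_dB_HL, ucl_dB_HL,
                                  sl_re_srt=40, sl_re_2k=25):
    """Return the five presentation-level candidates described in the abstract.

    srt_dB_HL    -- speech recognition threshold (dB HL)
    thr_2k_dB_HL -- pure-tone threshold at 2 kHz (dB HL)
    mcl_dB_HL    -- most comfortable loudness level (dB HL)
    ucl_dB_HL    -- uncomfortable loudness level (dB HL)
    sl_re_srt    -- sensation level above the SRT (40 dB SL is the common
                    clinical choice; the study varied this with degree of loss)
    sl_re_2k     -- sensation level above the 2-kHz threshold (assumed value)
    """
    # 95 dB SPL corresponds to roughly 75 dB HL for speech (Kamm et al, 1983).
    fixed_95_spl = 75.0

    levels = {
        "fixed_95_dB_SPL": fixed_95_spl,
        "MCL": mcl_dB_HL,
        "UCL_minus_5": ucl_dB_HL - 5,
        "SL_re_SRT": srt_dB_HL + sl_re_srt,
        "SL_re_2kHz": thr_2k_dB_HL + sl_re_2k,
    }
    # Cap every candidate at the listener's UCL, following the study's
    # definition of an optimal level as one that never exceeds discomfort.
    return {name: min(level, ucl_dB_HL) for name, level in levels.items()}

# Example: a listener with a moderate, gradually sloping loss.
print(candidate_presentation_levels(srt_dB_HL=45, thr_2k_dB_HL=55,
                                    mcl_dB_HL=75, ucl_dB_HL=100))
```

Note how the cap at UCL builds the study's constraint directly into the comparison: any method that nominally exceeds loudness discomfort is pulled back to the UCL ceiling.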

Journal ArticleDOI
TL;DR: The latency reduction over the time course of the ALR may reflect the fact that neurons with shorter latencies recover from adaptation and/or refractoriness faster than those with longer latencies.
Abstract: Background: This study provides a detailed description of the time course of amplitude and latency in the auditory late response (ALR) elicited by repeated tone bursts. Research Design: Tone bursts (50 and 80 dB SPL) were presented via insert earphones in trains of ten, with interstimulus intervals (ISIs) of 0.7 and 2 sec and an intertrain interval of 15 sec. Averages were derived independently for each tone burst within the train across the total number of train presentations. Study Sample: Participants were 14 normal-hearing young adults. Data Collection and Analysis: Data were analyzed in terms of the amplitudes and latencies of the N1 and P2 waves of the ALR, as well as the N1-P2 amplitude. Results: The N1-P2 amplitude was a more stable measure than the amplitudes of the individual N1 and P2 peaks. The N1-P2 amplitude was maximal for the first tone burst and decreased in a nonmonotonic pattern over the remainder of the tone bursts within a stimulus train. The amplitude decrement depended on stimulus intensity and ISI. The latencies of N1 and P2 were maximal for the first tone burst and decreased by approximately 20% for the rest of the stimuli in a train. The time course of N1 and P2 latencies did not depend on stimulus intensity or ISI. Conclusions: The latency reduction over the time course of the ALR may reflect the fact that neurons with shorter latencies recover from adaptation and/or refractoriness faster than those with longer latencies. This finding is meaningful in the context of future research aimed at restoring normal adaptation in populations with abnormal hearing, such as cochlear implant patients.
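The core measurement here, per-position peak picking within the train, is straightforward to express in code. The sketch below is an illustration under stated assumptions (epoch length, sampling rate, and N1/P2 search windows are typical ALR choices, not values from the study), operating on averages that have already been derived for each tone-burst position.

```python
import numpy as np

# Illustrative sketch (not the study's analysis code): estimate N1/P2 latency
# and the N1-P2 amplitude for each of the ten tone-burst positions in a train.

fs = 1000                          # sampling rate in Hz (assumed)
t = np.arange(0, 0.400, 1 / fs)    # 0-400 ms post-stimulus epoch (assumed)
avg = np.random.randn(10, t.size)  # placeholder data: (train position, samples)

n1_win = (t >= 0.080) & (t <= 0.150)  # N1 search window, 80-150 ms (assumed)
p2_win = (t >= 0.150) & (t <= 0.250)  # P2 search window, 150-250 ms (assumed)

for pos in range(avg.shape[0]):
    wave = avg[pos]
    # N1 is the most negative point in its window; P2 the most positive in its.
    n1_idx = np.where(n1_win)[0][np.argmin(wave[n1_win])]
    p2_idx = np.where(p2_win)[0][np.argmax(wave[p2_win])]
    n1_p2_amp = wave[p2_idx] - wave[n1_idx]  # peak-to-peak amplitude
    print(f"position {pos + 1}: N1 {t[n1_idx] * 1e3:.0f} ms, "
          f"P2 {t[p2_idx] * 1e3:.0f} ms, N1-P2 {n1_p2_amp:.2f} uV")
```

Computing the peak-to-peak N1-P2 amplitude, rather than each peak against baseline, is consistent with the abstract's finding that the difference measure is the more stable of the two.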

Journal ArticleDOI
TL;DR: The psychometric properties of the IOI-HA questionnaire are strong and are essentially the same for the veteran sample and the original private-pay sample, and the norms established here should replace the original norms for use in veterans with current hearing aid technology.
Abstract: Background: The International Outcome Inventory for Hearing Aids (IOI-HA) was developed as a global hearing aid outcome measure targeting seven outcome domains. The published norms were based on a private-pay sample fitted with analog hearing aids. Purpose: The purpose of this study was to evaluate the psychometric properties of the IOI-HA and to establish normative data in a veteran sample. Results: A factor analysis showed that the IOI-HA in the veteran sample had the identical subscale structure reported for the original sample. For the total scale, the internal consistency was good (Cronbach's α = 0.83), and the test-retest reliability was high (λ = 0.94). Group and individual norms were developed for both hearing difficulty categories in the veteran sample. For each IOI-HA item, the critical difference score was <1.0. This finding suggests that for any item on the IOI-HA, there is a 95 percent chance that an observed change of one response unit between two test sessions reflects a true change in outcome for a given domain. Conclusions: The results of this study confirmed that the psychometric properties of the IOI-HA questionnaire are strong and are essentially the same for the veteran sample and the original private-pay sample. The veteran norms, however, produced higher outcomes than those established originally, possibly because of differences in the population samples and/or hearing aid technology. Clinical and research applications of the current findings are presented. Based on the results of the current study, the norms established here should replace the original norms for use with veterans fitted with current hearing aid technology.
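The claim that a one-unit change is detectable at the 95% level follows from the standard critical-difference computation. The sketch below shows that arithmetic; the standard deviation and reliability values are placeholders, not the paper's per-item statistics.

```python
import math

# Illustrative sketch of the conventional 95% critical-difference formula that
# underlies statements like the one above. Input values are assumptions.

def critical_difference_95(sd, reliability):
    """CD95 = 1.96 * sqrt(2) * SEM, where SEM = SD * sqrt(1 - r)."""
    sem = sd * math.sqrt(1 - reliability)  # standard error of measurement
    return 1.96 * math.sqrt(2) * sem       # sqrt(2) accounts for two sessions

# With an assumed item SD of 1.0 response unit and test-retest reliability of
# 0.9, the critical difference falls below one response unit, consistent with
# the abstract's interpretation of a one-unit change as a true change.
print(critical_difference_95(sd=1.0, reliability=0.9))  # ~0.88
```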

Journal ArticleDOI
TL;DR: This study serves as an introduction to the problem of using traditional behavioral testing for hearing assessment of nursing home residents and highlights the need for adapted assessment procedures that can lead to more effective audiological assessments for this population.
Abstract: Background It is currently estimated that the population of individuals over the age of 65 living in nursing homes will double by 2020. Nearly one-third of all nursing home residents have difficulty seeing or hearing, 46% have some form of dementia, and 30-84% of those with dementia in nursing homes show some form of agitation. Nursing home residents who do not receive appropriate audiological services may experience social isolation, cognitive decline, and decreased mobility. Purpose To examine the effectiveness of standard audiological testing procedures for nursing home residents and to discuss adaptations of assessment procedures that can lead to more effective audiological assessments for this population. Research design A retrospective chart analysis. A 33-item coding form was used to complete a descriptive analysis of original audiological and demographic data for 307 nursing home residents who had participated in a study examining the effects of auditory stimulation (audiotaped environmental sounds or a soothing voice) on dementia-related behavior problems. Results Although 77% (n = 235) of the 307 residents were considered compliant for the testing process and 94% (n = 288) tolerated putting on headphones, audiological assessment using air-conduction testing could be completed in both ears for only 32% (n = 100) of the residents. In fact, only 5% (n = 16) of the 307 residents were able to complete a full traditional audiometric assessment protocol. Conclusions Proper identification of hearing impairment through effective and appropriate audiological assessment is crucial for preserving and enhancing quality of life in nursing home residents. This study serves as an introduction to the problem of using traditional behavioral testing for hearing assessment of nursing home residents. Much work needs to be done to establish best practices for audiometric assessment in this population.

Journal ArticleDOI
TL;DR: Findings can be used to support the need for counseling patients and their families about the potential advantages of using average speech rates, or rates that are slightly slowed, while conversing in the presence of background noise.
Abstract: PURPOSE To study the effect of noise on speech rate judgment and on the signal-to-noise ratio threshold (SNR50) at different speech rates (slow, preferred, and fast). RESEARCH DESIGN Speech rate judgment and SNR50 tasks were completed in a normal-hearing condition and a simulated hearing-loss condition. STUDY SAMPLE Twenty-four female and six male young, normal-hearing participants. RESULTS Speech rate judgment was not affected by background noise, regardless of hearing condition. Results of the SNR50 task indicated that, as speech rate increased, performance decreased in both hearing conditions. There was a moderate correlation between speech rate judgment and SNR50 across the various speech rates, such that as the judgment of speech rate increased from too slow to too fast, performance deteriorated. CONCLUSIONS These findings can be used to support the need for counseling patients and their families about the potential advantages of using average speech rates, or rates that are slightly slowed, while conversing in the presence of background noise.
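An SNR50 is simply the point on a listener's psychometric function where performance crosses 50% correct. As a hedged illustration of how such a threshold can be estimated (the data points and the logistic-fit approach below are assumptions, not the study's procedure), consider:

```python
import numpy as np

# Hypothetical sketch: estimate SNR50 by fitting a logistic psychometric
# function to percent-correct scores measured at several fixed SNRs.

snr = np.array([-12, -9, -6, -3, 0, 3])           # presented SNRs in dB (made up)
pct = np.array([5, 15, 35, 60, 85, 95]) / 100.0   # proportion correct (made up)

# Model p(snr) = 1 / (1 + exp(-k * (snr - snr50))) by linear regression on the
# logit of the proportions: logit(p) = k * snr + b, so snr50 is where the
# logit crosses zero (i.e., p = 0.5).
logit = np.log(pct / (1 - pct))
k, b = np.polyfit(snr, logit, 1)
snr50 = -b / k
print(f"estimated SNR50 = {snr50:.1f} dB")
```

Under this framing, the study's result that faster speech rates worsen SNR50 corresponds to the whole psychometric function shifting toward higher (less favorable) SNRs as rate increases.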

Journal ArticleDOI
TL;DR: It is essential that the CI audiologist not only be aware of the disorder but also be well versed in its implications for the cochlear implant process, including expected trends in performance postimplantation.
Abstract: BACKGROUND Considered a rare disorder, superficial siderosis of the central nervous system (SSCN) has become more frequently diagnosed in recent years. As it is characterized by progressive sensorineural hearing loss, patients' needs may surpass the capability of hearing aid technology. Despite the retrocochlear nature of the disorder, patients have undergone cochlear implantation (CI) with varying success. PURPOSE To summarize the issues surrounding cochlear implant candidates with SSCN and to highlight trends in performance postimplantation. RESEARCH DESIGN Retrospective case reports of seven cochlear implant candidates detail the symptoms, typical audiologic presentation, and array of clinical issues for patients with this progressive and potentially fatal disease. RESULTS Despite the retrocochlear component of a hearing loss caused by SSCN, cochlear implantation may be a viable option. CONCLUSIONS It is essential that the CI audiologist not only be aware of the disorder but also be well versed in the resulting implications for the cochlear implant process. A more thorough case history, an expanded candidacy test battery, and knowledge of the typical presentation of SSCN are critical. The diagnosis of SSCN will impact expectations for success with the cochlear implant, and counseling should be adjusted accordingly.