
Showing papers in "Scandinavian audiology. Supplementum in 1993"


Journal Article
TL;DR: It appeared that although the noise reduction algorithms reduced the noise level, they did not improve the measured speech intelligibility, either for normal-hearing or for hearing-impaired listeners.
Abstract: Speech was mixed with different noise signals and then processed according to the well-known noise reduction method of 'spectral subtraction'. Three different algorithms were examined. The speech signals were subjected to a four alternative forced choice (4AFC) test. Both the processed and unprocessed signals were evaluated psycho-acoustically and objectively. Speech intelligibility was measured with the 4AFC test by presenting the signals via headphones to a group of normal-hearing and to a group of hearing-impaired listeners. The intelligibility scores were compared with the intelligibility scores predicted from a modified version of the Speech Transmission Index (STI). It appeared that although the noise reduction algorithms reduced the noise level, they did not improve the measured speech intelligibility, either for normal-hearing or for hearing-impaired listeners. This, however, was inconsistent with the scores estimated from STI, which erroneously predicted a significant improvement in intelligibility due to the noise reduction processing.
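The abstract names the method but not the processing details. As orientation only, here is a minimal Python sketch of classic magnitude spectral subtraction (frame-based, no overlap-add; the frame length, over-subtraction factor, and spectral floor are illustrative values, not the paper's three algorithms):

```python
import numpy as np

def spectral_subtract(noisy, noise_ref, frame_len=256, alpha=1.0, floor=0.05):
    """Frame-wise magnitude spectral subtraction (no overlap-add, for clarity).

    noisy     -- noisy speech samples
    noise_ref -- noise-only samples used to estimate the noise spectrum
    alpha     -- over-subtraction factor (illustrative)
    floor     -- minimum magnitude kept, as a fraction of the noisy magnitude
    """
    # Average magnitude spectrum over the noise-only reference frames
    n_ref = len(noise_ref) // frame_len
    noise_mag = np.mean(
        [np.abs(np.fft.rfft(noise_ref[i * frame_len:(i + 1) * frame_len]))
         for i in range(n_ref)], axis=0)

    n_frames = len(noisy) // frame_len
    out = np.zeros(n_frames * frame_len)
    for i in range(n_frames):
        spec = np.fft.rfft(noisy[i * frame_len:(i + 1) * frame_len])
        mag, phase = np.abs(spec), np.angle(spec)
        # Subtract the estimated noise magnitude; keep the noisy phase
        clean = np.maximum(mag - alpha * noise_mag, floor * mag)
        out[i * frame_len:(i + 1) * frame_len] = np.fft.irfft(clean * np.exp(1j * phase))
    return out
```

Such processing lowers the measured noise level, which is exactly why the mismatch with intelligibility reported above is notable: the subtraction also distorts the speech and leaves residual "musical" noise.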

72 citations


Journal Article
TL;DR: A binaural noise-reduction algorithm for suppressing lateral noise sources was implemented in real-time and tested with normal-hearing and hearing-impaired subjects; the results suggest combining it with a dereverberation algorithm and stress the importance of binaural listening and binaural hearing aids for the rehabilitation of impaired listeners in noisy environments.
Abstract: A binaural noise-reduction algorithm for suppressing lateral noise sources was implemented in real-time and tested with normal and hearing-impaired subjects. With this algorithm, frequency bands with interaural time and amplitude differences characteristic for the "target" direction of the desired speech sound reach both ears without any attenuation. However, frequency bands with an interaural time and amplitude difference deviating from these desired values are attenuated. Speech quality assessments as well as speech intelligibility tests demonstrate a significant noise reduction of up to about 5 dB in signal-to-noise ratio for a dummy-head recorded "cocktail-party situation" in an anechoic chamber. For two additional acoustical situations, however, the subjective speech quality from the algorithm degrades with increasing reverberation, while the output from a real-time "dereverberation" algorithm was ranked higher with increasing reverberation. The results with hearing-impaired listeners suggest a combination of both algorithms, while stressing the importance of binaural listening and binaural hearing aids for the rehabilitation of impaired listeners in noisy environments.
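The per-band gating described above can be caricatured as follows, assuming a frontal target (interaural differences near zero pass; deviating bands are attenuated). This single-frame Python sketch invents the tolerances and attenuation factor; the paper's real-time system is more elaborate:

```python
import numpy as np

def binaural_directional_filter(left, right, frame_len=256,
                                ild_tol_db=3.0, ipd_tol=0.5, atten=0.2):
    """Attenuate frequency bands whose interaural level difference (dB) or
    phase difference deviates from the frontal target (single frame, for clarity)."""
    specL = np.fft.rfft(left[:frame_len])
    specR = np.fft.rfft(right[:frame_len])
    eps = 1e-12
    ild = 20 * np.log10((np.abs(specL) + eps) / (np.abs(specR) + eps))
    ipd = np.angle(specL * np.conj(specR))   # interaural phase difference per bin
    # Bands consistent with the frontal direction keep unity gain
    gain = np.where((np.abs(ild) <= ild_tol_db) & (np.abs(ipd) <= ipd_tol),
                    1.0, atten)
    return np.fft.irfft(specL * gain), np.fft.irfft(specR * gain)
```

A frontal source (identical ear signals) passes unchanged, while a delayed lateral copy is suppressed, which is the mechanism behind the roughly 5 dB signal-to-noise benefit reported for the anechoic cocktail-party recording.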

44 citations


Journal Article
TL;DR: The preliminary results on three hearing-impaired subjects in repeated tests of speech recognition in noise showed an improvement in the order of 2 dB in speech-to-noise ratio for 50% correct recognition as compared to the complete broadband signal presented diotically.
Abstract: A digital 8-channel signal processing system has been implemented using a TMS320C25 signal processor. Tests with hearing-impaired subjects showed that the system allows a better fit to a specified frequency response than when using conventional aids with analog filtering. The filter bank has also been tested in dichotic listening experiments, where odd-numbered channels were fed to one ear and even-numbered channels to the other. The preliminary results on three hearing-impaired subjects in repeated tests of speech recognition in noise showed an improvement on the order of 2 dB in speech-to-noise ratio for 50% correct recognition as compared to the complete broadband signal presented diotically. Temporal splitting of the signal by periodically switching the odd and even bands between the left and right ears did not show any improvement.
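The dichotic band-splitting can be sketched as below, using equal-width FFT bands for simplicity (the abstract does not specify the actual filter-bank design):

```python
import numpy as np

def dichotic_split(signal, n_channels=8):
    """Split a signal into n_channels contiguous frequency bands and route
    odd-numbered bands to one ear, even-numbered bands to the other.
    Equal-width FFT-mask bands; a sketch, not the paper's filter bank."""
    spec = np.fft.rfft(signal)
    edges = np.linspace(0, len(spec), n_channels + 1).astype(int)
    left = np.zeros_like(spec)
    right = np.zeros_like(spec)
    for ch in range(n_channels):
        target = left if ch % 2 == 0 else right
        target[edges[ch]:edges[ch + 1]] = spec[edges[ch]:edges[ch + 1]]
    n = len(signal)
    return np.fft.irfft(left, n=n), np.fft.irfft(right, n=n)
```

Because the bands partition the spectrum, summing the two ear signals reconstructs the original broadband signal, which makes the diotic broadband condition a natural baseline for the dichotic comparison.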

39 citations


Journal Article
TL;DR: Toluene exposure accelerated age-related hearing loss in C57BL/6J mice and caused characteristic mid-frequency auditory damage; the outer hair cells (OHCs) were particularly susceptible to toluene exposure.
Abstract: Toluene is a widely used organic solvent causing loss of auditory sensitivity in rats and presumably in humans. Also, the hearing loss in humans occupationally exposed to noise has been reported to be aggravated by simultaneous exposure to solvents. The aim of the present investigation was to study the effects of toluene, alone or in combination with other factors, on auditory sensitivity. The influence of acoustic trauma and of acetyl salicylic acid (ASA) on toluene induced auditory sensitivity loss in the rat, as well as the influence on age-related auditory sensitivity loss in two genotypes of mice were studied. An attempt was also made to identify cochlear structures damaged by toluene. Rats or mice were exposed to toluene by inhalation during 1 or 2 weeks. Rats were exposed to noise before or after the toluene exposure, or exposed to ASA by gavage simultaneously with toluene. The effect on auditory sensitivity was measured by frequency specific auditory brainstem response (ABR) between 1.6 and 20 kHz. In the investigations concerning the site of the toluene induced impairment, distortion product otoacoustic emissions (DPOEs) and morphological techniques were used. A permanent loss of auditory sensitivity, as measured by ABR, with the maximal threshold shifts between 6.3 and 12.5 kHz, was found in rats after exposure to toluene. In addition, lowered amplitudes of the DPOEs were recorded, implying that the outer hair cells (OHCs) were affected by toluene. A morphological study confirmed that toluene mainly affects the OHCs. A potentiated loss of auditory sensitivity was seen when toluene had preceded the noise exposure. When the exposure sequence was reversed, i.e. noise preceded toluene, an additive effect was observed. ASA, itself known to cause a reversible auditory impairment, was found to enhance the loss of auditory sensitivity caused by simultaneous toluene exposure. 
Finally, toluene exposure accelerated the age-related hearing loss in C57BL/6J mice. In conclusion, toluene causes a characteristic mid-frequency auditory damage. The OHCs are particularly susceptible to toluene exposure. The auditory sensitivity loss induced by toluene can be enhanced by ASA and by ensuing noise exposure. Also, toluene exposure can influence the rate of progress of age-related hereditary loss of auditory sensitivity.

37 citations


Journal Article
TL;DR: Of four noise reduction methods evaluated for sensory aids for hearing impairment, the transformed spectrum-subtraction technique failed to improve performance as the front-end of a hearing aid but yielded improvements as a preprocessor for the Nucleus Cochlear Implant.
Abstract: Four noise reduction methods for use in sensory aids for hearing impairment were evaluated. These include a two-microphone adaptive noise canceller, short-term Wiener filtering, a transformed spectrum subtraction technique, and sinusoidal modelling. The largest improvements in speech recognition were obtained with the two-microphone adaptive noise canceller in a moderately reverberant room. Significant improvements were also obtained for short-term Wiener filtering for some hearing-impaired subjects. The transformed spectrum-subtraction technique failed to improve performance as the front-end of a hearing aid, but yielded improvements in performance as a preprocessor for the Nucleus Cochlear Implant. Sinusoidal modelling resulted in significant improvements in signal-to-noise ratio, but without a corresponding improvement in speech intelligibility.

34 citations


Journal Article
TL;DR: Three noise reduction algorithms based on amplitude subtraction were designed and used to process speech mixed with babble noise at two signal-to-noise ratios; none of the algorithms improved speech intelligibility for any group of listeners, and no change in the overall pattern of confusion was observed.
Abstract: Three noise reduction algorithms based on amplitude subtraction were designed and used to process speech mixed with babble noise in two signal-to-noise ratios. The estimation of the noise-magnitude spectrum was performed with a novel synchro method, which exploits specific characteristics of the speech signal. The unprocessed and processed signals were evaluated psychoacoustically by means of a four-alternative-forced choice test with monosyllabic words (minimal pairs) in carrier phrases. The testing was carried out on groups of normally hearing and hearing-impaired subjects and the long-term power spectra of the processed signals were shaped to be essentially identical with those from the corresponding unprocessed signals. For the hearing-impaired subjects all signals were spectrally shaped according to the POGO-fitting rule. None of the algorithms improved speech intelligibility for any group of listeners and no change in the overall pattern of confusion was observed.

25 citations


Journal Article
TL;DR: The results show that moderate syllabic compression may raise speech intelligibility, as long as overshoots are minimized and relatively short time constants are used.
Abstract: Syllabic compression has not been shown unequivocally to improve speech intelligibility in hearing-impaired listeners. This paper attempts to explain the poor results by introducing the concept of minimum overshoots. The concept was tested with a digital signal processor on hearing-impaired subjects. The results show that moderate syllabic compression may raise speech intelligibility, as long as overshoots are minimized and relatively short time constants are used. Frequency equalization also contributes to speech intelligibility.

21 citations


Journal Article
TL;DR: In this article, two portable directional microphone-array prototypes were developed and tested using a KEMAR manikin; the results showed that the two prototypes gave an improvement of the signal-to-noise ratio of 7 dB in a fully diffuse sound field.
Abstract: The hearing impaired often have great difficulty understanding speech in surroundings with background noise or reverberation. A directional hearing aid might be beneficial in reducing background noise in relation to the desired speech signal. To this end microphone systems were developed with strongly directional characteristics, using array techniques. Considerable attention was paid to optimization and stability. Free-field simulations of several robust models showed that a directivity index of 9 dB can be obtained. Simulations were verified with a laboratory model. Based on simulations and measurements, two portable prototypes were developed and tested using a KEMAR-manikin. The KEMAR-measurements showed that the two prototypes gave an improvement of the signal to noise ratio of 7 dB in a fully diffuse sound field. The benefit of these microphone arrays for the hearing impaired was tested in a sound insulated room. One loudspeaker was placed in front of the listener simulating the partner in a discussion, and a diffuse background noise was produced by eight loudspeakers placed on the corners of a cube. The hearing impaired subject was seated in the centre of the cube. The speech-reception threshold in noise for simple Dutch sentences was determined with a normal single omni-directional microphone and with one of the prototypes. The results of the listening tests with 45 hearing impaired subjects showed an average improvement of the S/N-ratio of 7.0 dB for monaural fitting.
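To illustrate how array techniques produce the directivity described above, here is a sketch of the far-field response of an endfire delay-and-sum line array. The geometry (four microphones, 2 cm spacing) and the test frequency are arbitrary illustrations, not the prototypes' design:

```python
import numpy as np

def array_response(angles_rad, freq, n_mics=4, spacing=0.02, c=343.0):
    """Far-field magnitude response of an endfire delay-and-sum line array
    steered to 0 rad (along the array axis). Normalized so that the steered
    direction has unity gain."""
    k = 2 * np.pi * freq / c                 # wavenumber
    mic_pos = np.arange(n_mics) * spacing    # microphone positions on the axis
    steer = np.exp(1j * k * mic_pos)         # delays aligning the endfire wave
    out = []
    for th in np.atleast_1d(angles_rad):
        arrivals = np.exp(-1j * k * mic_pos * np.cos(th))
        out.append(abs(np.sum(steer * arrivals)) / n_mics)
    return np.array(out)
```

Sounds arriving from the steered direction add coherently (gain 1), while off-axis sounds partially cancel; integrating this response over a diffuse field is what yields a directivity index of several dB.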

10 citations


Journal Article
Counter SA
TL;DR: A prominent side effect of EMS was found to be the high-intensity, high-frequency impulse noise generated by the coil, which caused severe cochlear damage and permanent sensorineural hearing loss in experimental animals.
Abstract: Extracranial electromagnetic stimulation (EMS) is a recently developed clinical technique which may be used in place of conventional transcutaneous electrical stimulation to activate the central and peripheral nervous systems. This technique is widely used in neurology and otolaryngology for non-invasive stimulation of the brain and facial nerve. EMS uses electromagnetic field pulses which pass unimpeded through the cranium and soft tissues to activate excitable membranes of volume conductors. In this series of studies, the effects and side-effects of electromagnetic stimulation on the auditory system of humans and experimental animals were investigated. In the first study, 18 profoundly hard-of-hearing and deaf patients who were candidates for cochlear implants were examined by non-invasive EMS in an effort to determine whether EMS could stimulate residual neurons in the cochlea, 8th nerve proper, or higher auditory brain centers, and evoke auditory sensations. The patients were stimulated with a magnetic coil positioned at the (1) auricle, (2) mastoid process, and (3) the temporal lobe area. EMS elicited auditory sensations in 26 ears (of 14 patients/subjects). The lowest threshold of auditory sensation (TAS) at each stimulus position was found to be at the 20% EMS level, with a range of 20-50% of the maximum level (2.0 Tesla), and with equal sensitivity in each coil position. There was no correlation between the EMS/TAS and the immediate postoperative psychoacoustic tests in ten patients receiving cochlear implants. A prominent side effect of EMS was found to be the high intensity, high frequency impulse noise generated by the coil which causes severe cochlear damage and permanent sensorineural hearing loss in experimental animals. Measurements of the sound pressure level (SPL) of the magnetic coil acoustic artifact (MCAA) at the tympanic membrane of the rabbit ear showed levels of up to 160 dB for maximum EMS. 
Measurements of the spectral content and SPL of the MCAA in the ear canal of life size models of the human cranium with the stimulating coil placed at standard clinical positions indicated that the major acoustic energy of the pulse is concentrated in the 2-5 kHz range, and that the SPL of the pulse at some positions may place persons at risk for hearing loss. Studies on computer simulated impulse noises showed that the peak sound pressure rather than the rise time (in the range 0.1-1.0 ms) determined the permanent threshold shift (PTS). The MCAA was more harmful than a 128 dB SPL continuous noise with 100 times more energy.(ABSTRACT TRUNCATED AT 400 WORDS)

8 citations


Journal Article
TL;DR: A method for adaptively identifying and equalizing the feedback path of a hearing aid in order to stabilize the system is described and an additional 10 to 15 dB of stable gain margin is demonstrated.
Abstract: A method is described for adaptively identifying and equalizing the feedback path of a hearing aid in order to stabilize the system. The algorithm utilizes an LMS adaptive filter and is implemented in digital form. An additional 10 to 15 dB of stable gain margin for a BTE hearing aid configuration has been demonstrated in laboratory experiments with hearing-impaired subjects using an interim custom VLSI circuit implementation.
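The approach can be sketched as LMS identification of the feedback path. This is a simulation sketch, not the paper's VLSI implementation; the filter length, step size, and the assumed one-sample processing delay (the regressor starts at the previous receiver sample) are all illustrative:

```python
import numpy as np

def lms_feedback_canceller(mic, receiver, n_taps=16, mu=0.01):
    """LMS identification of a (simulated) acoustic feedback path.

    mic      -- microphone signal containing fed-back receiver output
    receiver -- hearing-aid receiver (loudspeaker) signal
    Returns the error signal (feedback-cancelled mic) and the final filter.
    Assumes at least one sample of delay in the feedback path."""
    w = np.zeros(n_taps)
    err = np.zeros(len(mic))
    for n in range(n_taps, len(mic)):
        x = receiver[n - n_taps:n][::-1]   # most recent receiver samples first
        y_hat = w @ x                      # estimated feedback component
        err[n] = mic[n] - y_hat            # cancel it from the mic signal
        w += 2 * mu * err[n] * x           # LMS weight update
    return err, w
```

Once the adaptive filter matches the feedback path, the loop gain at the critical frequencies drops, which is the mechanism behind the extra 10 to 15 dB of stable gain reported for the BTE configuration.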

7 citations


Journal Article
TL;DR: The two experiments described here were aimed at determining the optimum value of the recovery time of the AGC in the high-frequency channel; there was a clear trend for SRTs to increase with increasing recovery time, with a recovery time of about 20 ms appearing optimal.
Abstract: This paper describes two experiments in a series evaluating and optimising a hearing aid incorporating two forms of automatic gain control (AGC). The first form is a front-end AGC which is normally slow acting and which compensates for variations in the overall level of speech from one situation to another. The second form of AGC follows the front-end AGC. The signal is split into two frequency bands, and fast-acting AGC is applied in the upper band only. The bands are then recombined. The two experiments described here were aimed at determining the optimum value of the recovery time of the AGC in the high-frequency channel. Speech reception thresholds (SRTs) were measured in the presence of speech-shaped noise as a function of the recovery time, for subjects with mild-to-moderate sensorineural loss. In the first experiment the recovery time was varied over the range 10-80 ms. SRTs tended to be lowest (best) at the shorter recovery times but the effects were small. In the second experiment, the recovery time was varied over the range 5-320 ms. In this case, there was a clear trend for SRTs to increase with increasing recovery time. A recovery time of about 20 ms appears to be optimal.
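A fast-acting AGC with separate attack and recovery time constants, in the spirit of the high-frequency channel above, might look like the following single-channel sketch (the time constants, target level, and envelope follower are illustrative choices, not the paper's design):

```python
import numpy as np

def fast_agc(signal, fs, attack_ms=1.0, recovery_ms=20.0, target=0.1):
    """Fast-acting AGC: an envelope follower with separate attack and recovery
    time constants; the gain is target / envelope (single wide-band channel)."""
    a_att = np.exp(-1.0 / (fs * attack_ms / 1000.0))
    a_rec = np.exp(-1.0 / (fs * recovery_ms / 1000.0))
    env = np.zeros(len(signal))
    e = 1e-4
    for n, x in enumerate(np.abs(signal)):
        coef = a_att if x > e else a_rec   # rise fast, fall at the recovery rate
        e = coef * e + (1 - coef) * x
        env[n] = e
    gain = target / np.maximum(env, 1e-4)  # clamp to bound the gain
    return signal * gain
```

The recovery time is the parameter varied in the experiments: a short recovery lets the gain rise quickly after intense segments, restoring audibility of weak consonants, while a long recovery leaves soft speech under-amplified, consistent with the observed increase in SRT.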

Journal Article
TL;DR: A commercially available programmable BTE hearing aid with 3-channel AGC (Siemens, Triton 3000) was compared to the subjects' own single-channel compression aids in a group of experienced hearing aid wearers to compare the performance of the hearing instruments and subjective judgement of sound quality and speech intelligibility.
Abstract: A commercially available programmable BTE hearing aid with 3-channel AGC (Siemens, Triton 3000) was compared to the subjects' own single-channel compression aids in a group of experienced hearing aid wearers. Two cross-over frequencies, three AGC onset levels, and three channel amplifications were programmable. The maximum output level was controlled by conventional peak clipping. The frequency response of the multichannel system was matched to the individuals' own aids which had been fitted in the two years prior to this study. Frequency response shaping was accomplished by real ear measurement monitoring. The performance of the hearing instruments was measured by (1) speech audiometry in noise and (2) subjective judgement of sound quality and speech intelligibility. Speech recognition was tested using rhyme test material in noise in a group of 10 subjects with sloping high-frequency hearing loss as well as in a group of 16 individuals with nearly flat audiograms. In both subgroups speech recognition scores (S/N: -5 to 15 dB) were 7 to 20% higher for the 3-channel AGC device compared to the single-channel AGC instruments. This finding is equivalent to an improvement of the S/N ratio of about 7 dB for the user. The results of the subjective judgements were similar for both subgroups. Sound quality and speech intelligibility were mostly rated as "very good" or "good" on a 5-point-scale. The subjects were also asked to compare the performance of their own hearing aids with the 3-channel AGC instrument; the latter generally turned out to be preferable.(ABSTRACT TRUNCATED AT 250 WORDS)

Journal Article
TL;DR: A multichannel signal-processing hearing aid in which the gain is controlled by the level of the minima in the sound envelope was evaluated with hearing-impaired listeners, and frequency-selective attenuation of the signal in the octave band with the 20-dB increase of noise is more beneficial than wide-band gain control.
Abstract: A multichannel signal-processing hearing aid in which the gain is controlled by the level of the minima in the sound envelope [outlined by Festen et al., 1990] was evaluated with hearing-impaired listeners. This evaluation is an extension of the work reported by van Dijkhuizen et al. (1990). A first experiment focused on the speech-reception threshold (SRT), i.e. the S/N ratio for 50% intelligibility. The greatest benefit in terms of the SRT from frequency-dependent control of the amplification is expected in conditions where the spectrum of noise strongly exceeds that of the speech in a limited frequency region. In these conditions frequency-dependent amplification may reduce upward spread of masking. We investigated the upper limit of this benefit in conditions of intense frequency-limited interfering noise. Speech and noise were both spectrally shaped according to the line bisecting the listener's dynamic range; however, the level of the noise in one octave band (0.25-0.5 or 0.5-1 kHz) was increased by 20 dB. The results show that frequency-selective attenuation of the signal in the octave band with the 20-dB increase of noise is more beneficial than wide-band gain control, and gives a decrease in SRT of up to 4 dB relative to a condition without gain control. In a subsequent experiment we investigated, for several very common interfering sounds, the effect of controlling the gain by the minima in the signal envelope on both the SRT and the perceived noisiness. Results show that the condition with gain control does not affect the SRT for sentences in the presence of everyday interfering sounds having spectra that are roughly comparable to that of the speech signal; however, it substantially reduces the perceived noisiness. In line with our expectations, the effect of the gain control on the signal was very small for a single voice, and it was greatest in the case of sounds with a more or less continuous character (e.g. stationary noise, music).
For these last sounds it was found that the growth in perceived noisiness with the increase of input level is equivalent to the growth produced by only about one-fifth of the increase in input level (in decibels) in a condition without gain control.
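A wide-band caricature of gain control by envelope minima: the sliding minimum of the smoothed envelope serves as a noise-floor estimate, and the gain is reduced as that floor rises above a reference. The multichannel aid applies this per frequency band; every constant below is invented for illustration:

```python
import numpy as np

def minima_controlled_gain(signal, fs, win_ms=400, max_atten_db=15, ref=0.01):
    """Reduce gain when the minima of the smoothed envelope (a noise-floor
    estimate) exceed a reference level. Wide-band sketch; the hearing aid
    described above does this per channel."""
    # Smoothed envelope: first-order lowpass of |x| (10 ms time constant)
    a = np.exp(-1.0 / (fs * 0.01))
    env = np.zeros(len(signal))
    e = 0.0
    for n, x in enumerate(np.abs(signal)):
        e = a * e + (1 - a) * x
        env[n] = e
    win = int(fs * win_ms / 1000)
    out = np.zeros_like(signal)
    for n in range(len(signal)):
        floor = env[max(0, n - win):n + 1].min()       # sliding envelope minimum
        excess_db = max(0.0, 20 * np.log10(max(floor, 1e-6) / ref))
        out[n] = signal[n] * 10 ** (-min(excess_db, max_atten_db) / 20)
    return out
```

Speech-like, strongly modulated signals have envelope minima near zero, so they pass unchanged, while continuous sounds (stationary noise, music) keep the floor high and are attenuated; this matches the reported pattern of reduced noisiness with unchanged SRT. A real-time implementation would replace the brute-force sliding minimum with an efficient running-minimum structure.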

Journal Article
TL;DR: Two studies investigated which perceptual dimensions play a role when profoundly hearing-impaired subjects process auditory patterns, and which Dutch consonants and vowels the average lipreader can identify; the results suggest how the auditory speech signal might be coded to provide supplementary information for speechreading.
Abstract: Successful rehabilitation of the profoundly hearing impaired by means of a speech-processing hearing aid requires integration of auditory and visual speech information. In two studies we investigated (1) which perceptual dimensions play a role in processing various auditory patterns by profoundly hearing-impaired subjects, and (2) which Dutch consonants and vowels can be identified by the average lipreader. One of the important cues in auditory pattern discrimination seems to be the presence of temporal fluctuations (beats) in the signal, resulting from two closely placed frequency components. However, this feature is confounded with the perception of loudness. A second cue used by some subjects is the presence of high-frequency peaks. In lipreading, at least three groups of consonants and three vowel groups may be distinguished; phonemes within a group cannot be discriminated from each other. Important features for both consonants and vowels are degree of lip opening and lip activity (movement or rounding). These results suggest how the auditory speech signal might be coded so as to provide supplementary information to speechreading.

Journal Article
TL;DR: Speechreading of short sentences could be enhanced by presenting the acoustic signal with real-time extracted supplementary speech features, namely second formant frequency and voiceless frication, which increased speechreading performance.
Abstract: The aim of this study was to investigate how speechreading of short sentences could be enhanced by presenting the acoustic signal with real-time extracted supplementary speech features. The features were second formant frequency and voiceless frication. A group of 10 normal-hearing subjects participated in a simulation experiment. Speechreading performance increased from 22.2% for speechreading only (pretraining), to 61.5% for speechreading plus supplementary auditory information (post-training). The gain from the acoustic information turned out to be larger for subjects who also performed better in the speechreading-only condition. No significant effect was found for the training volume (i.e. 30 minutes versus 5 hours).

Journal Article
TL;DR: Evaluation experiments with 7 experienced cochlear implant users showed significantly better performance in consonant identification tests with the new processing strategies than with the subjects' own wearable speech processors whereas improvements in vowel identification tasks were rarely observed.
Abstract: The following processing strategies have been implemented on an experimental laboratory system of a cochlear implant digital speech processor (CIDSP) for the Nucleus 22-channel cochlear prosthesis. The first approach (PES, Pitch Excited Sampler) is based on the classical channel vocoder concept whereby the time-averaged spectral energy of a number of logarithmically spaced frequency bands is transformed into appropriate electrical stimulation parameters for up to 22 electrodes. The pulse rate at any electrode is controlled by the voice pitch of the input speech signal. The pitch extraction algorithm calculates the autocorrelation function of a lowpass-filtered segment of the speech signal and searches for a peak within a specified time window. A random pulse rate of about 150 to 250 Hz is used for unvoiced speech portions. The second approach (CIS, Continuous Interleaved Sampler) uses a stimulation pulse rate which is independent of the input signal. The algorithm scans continuously all specified frequency bands (typically between 4 and 22) and samples their energy levels. Evaluation experiments with 7 experienced cochlear implant users showed significantly better performance in consonant identification tests with the new processing strategies than with the subjects' own wearable speech processors whereas improvements in vowel identification tasks were rarely observed. Modifications of the basic PES- and CIS-strategies resulted in large variations of identification scores. Information transmission analysis of confusion matrices revealed a rather complex pattern across conditions and speech features. No final conclusions can yet be drawn. Optimization and fine-tuning of processing parameters for these coding strategies require more data both from speech identification and discrimination as well as psychophysical experiments.
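A toy version of the CIS idea: band energies are sampled every frame and emitted as temporally interleaved pulses, one channel at a time. The band edges, compressive mapping, and pulse timing below are illustrative inventions, not the CIDSP's parameters:

```python
import numpy as np

def cis_stimulation(signal, fs, n_channels=6, frame_len=128):
    """Continuous Interleaved Sampler sketch: per frame, sample the energy of
    logarithmically spaced bands and emit one pulse per channel, interleaved
    in time so no two channels stimulate simultaneously.
    Returns a list of (sample_index, channel, amplitude) pulses."""
    edges = np.geomspace(200.0, 0.9 * fs / 2, n_channels + 1)  # band edges (Hz)
    freqs = np.fft.rfftfreq(frame_len, 1.0 / fs)
    pulses = []
    for i in range(len(signal) // frame_len):
        mag = np.abs(np.fft.rfft(signal[i * frame_len:(i + 1) * frame_len]))
        for ch in range(n_channels):
            band = (freqs >= edges[ch]) & (freqs < edges[ch + 1])
            energy = np.sqrt(np.mean(mag[band] ** 2)) if band.any() else 0.0
            amp = np.log1p(10 * energy)            # compressive amplitude map
            t = i * frame_len + ch * frame_len // n_channels  # interleaved slot
            pulses.append((t, ch, amp))
    return pulses
```

Unlike the pitch-excited PES strategy, the pulse timing here is fixed by the frame clock and independent of the input, which is the defining property of CIS.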

Journal Article
TL;DR: The MLP-based pattern element aid gave significantly better performance in the reception of consonantal voicing contrasts from speech in pink noise than that achieved with conventional amplification and consequently, it also gave better overall performance in audio-visual consonant identification.
Abstract: Two new developments in speech pattern processing hearing aids will be described. The first development is the use of compound speech pattern coding. Speech information which is invisible to the lipreader was encoded in terms of three acoustic speech factors; the voice fundamental frequency pattern, coded as a sinusoid, the presence of aperiodic excitation, coded as a low-frequency noise, and the wide-band amplitude envelope, coded by amplitude modulation of the sinusoid and noise signals. Each element of the compound stimulus was individually matched in frequency and intensity to the listener's receptive range. Audio-visual speech receptive assessments in five profoundly hearing-impaired listeners were performed to examine the contributions of adding voiceless and amplitude information to the voice fundamental frequency pattern, and to compare these codings to amplified speech. In both consonant recognition and connected discourse tracking (CDT), all five subjects showed an advantage from the addition of amplitude information to the fundamental frequency pattern. In consonant identification, all five subjects showed further improvements in performance when voiceless speech excitation was additionally encoded together with amplitude information, but this effect was not found in CDT. The addition of voiceless information to voice fundamental frequency information did not improve performance in the absence of amplitude information. Three of the subjects performed significantly better in at least one of the compound speech pattern conditions than with amplified speech, while the other two performed similarly with amplified speech and the best compound speech pattern condition. The three speech pattern elements encoded here may represent a near-optimal basis for an acoustic aid to lipreading for this group of listeners. 
The second development is the use of a trained multi-layer-perceptron (MLP) pattern classification algorithm as the basis for a robust real-time voice fundamental frequency extractor. This algorithm runs on a low-power digital signal processor which can be incorporated in a wearable hearing aid. Aided lipreading for speech in noise was assessed in the same five profoundly hearing-impaired listeners to compare the benefits of conventional hearing aids with those of an aid which provided MLP-based fundamental frequency information together with speech+noise amplitude information. The MLP-based pattern element aid gave significantly better performance in the reception of consonantal voicing contrasts from speech in pink noise than that achieved with conventional amplification and consequently, it also gave better overall performance in audio-visual consonant identification.(ABSTRACT TRUNCATED AT 400 WORDS)