Open Access · Journal Article

The Hearing-Aid Speech Quality Index (HASQI)

James M. Kates, +1 more
08 Jun 2010 · Vol. 58, Iss. 5, pp. 363-381
TLDR
In this paper, an index for predicting the effects of noise, nonlinear distortion, and linear filtering on speech quality is developed for both normal-hearing and hearing-impaired listeners.
Abstract
Signal modifications in audio devices such as hearing aids include both nonlinear and linear processing. An index is developed for predicting the effects of noise, nonlinear distortion, and linear filtering on speech quality. The index is designed for both normal-hearing and hearing-impaired listeners. It starts with a representation of the auditory periphery that incorporates aspects of impaired hearing. The cochlear model is followed by the extraction of signal features related to the quality judgments. One set of features measures the effects of noise and nonlinear distortion on speech quality, whereas a second set of features measures the effects of linear filtering. The hearing-aid speech quality index (HASQI) is the product of the subindices computed for each of the two sets of features. The models are evaluated by comparing the model predictions with quality judgments made by normal-hearing and hearing-impaired listeners for speech stimuli containing noise, nonlinear distortion, linear processing, and combinations of these signal degradations.
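The abstract describes a two-part structure: an auditory front end feeds two feature sets, one sensitive to noise and nonlinear distortion and one sensitive to linear filtering, and the final index is the product of the two subindices. The sketch below illustrates only that structure; the front end and both feature measures are simplified stand-ins (short-time band envelopes, envelope correlation, and long-term spectral difference), not the published HASQI features, and the hearing-loss modelling of the real index is omitted.

```python
import numpy as np

def band_envelopes(x, fs, n_bands=32, frame_ms=16):
    """Stand-in auditory front end: short-time log spectra pooled into bands."""
    frame = int(fs * frame_ms / 1000)
    n_frames = len(x) // frame
    spec = np.abs(np.fft.rfft(x[:n_frames * frame].reshape(n_frames, frame), axis=1))
    edges = np.linspace(0, spec.shape[1], n_bands + 1, dtype=int)
    bands = np.stack([spec[:, a:b].mean(axis=1) for a, b in zip(edges[:-1], edges[1:])], axis=1)
    return 20 * np.log10(bands + 1e-12)  # (frames, bands) in dB

def nonlinear_subindex(env_ref, env_test):
    """Stand-in for the noise/nonlinear-distortion feature set: mean per-band
    correlation between reference and processed envelope trajectories."""
    corrs = []
    for b in range(env_ref.shape[1]):
        r, t = env_ref[:, b] - env_ref[:, b].mean(), env_test[:, b] - env_test[:, b].mean()
        denom = np.sqrt((r ** 2).sum() * (t ** 2).sum()) + 1e-12
        corrs.append((r * t).sum() / denom)
    return float(np.clip(np.mean(corrs), 0.0, 1.0))

def linear_subindex(env_ref, env_test):
    """Stand-in for the linear-filtering feature set: penalty on differences in
    long-term spectral shape (means removed so overall gain is ignored)."""
    ltas_ref = env_ref.mean(axis=0) - env_ref.mean()
    ltas_test = env_test.mean(axis=0) - env_test.mean()
    rms_diff = np.sqrt(np.mean((ltas_ref - ltas_test) ** 2))
    return float(np.clip(1.0 - rms_diff / 20.0, 0.0, 1.0))  # 20 dB difference -> 0

def hasqi_like(reference, processed, fs):
    """Overall index = product of the nonlinear and linear subindices."""
    env_ref, env_test = band_envelopes(reference, fs), band_envelopes(processed, fs)
    n = min(len(env_ref), len(env_test))
    env_ref, env_test = env_ref[:n], env_test[:n]
    return nonlinear_subindex(env_ref, env_test) * linear_subindex(env_ref, env_test)
```

An unprocessed signal compared with itself scores 1.0, and either added distortion (lowering the envelope correlation) or spectral tilt from linear filtering (raising the long-term spectral difference) pulls its own subindex, and hence the product, toward 0.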


Citations
Journal Article · DOI

Audio-Visual Speech Enhancement Using Multimodal Deep Convolutional Neural Networks

TL;DR: The proposed AVDCNN model is structured as an audio–visual encoder–decoder network, in which audio and visual data are first processed using individual CNNs, and then fused into a joint network to generate enhanced speech and reconstructed images at the output layer.
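The summary above describes two CNN branches whose outputs are fused into a joint network with two outputs. A toy PyTorch sketch of that topology is shown below; all layer sizes, input shapes, and the specific fusion choice (feature concatenation) are illustrative assumptions, not the published AVDCNN configuration.

```python
import torch
import torch.nn as nn

class AVFusionNet(nn.Module):
    """Toy audio-visual encoder-decoder in the spirit of the AVDCNN summary:
    separate CNN branches for audio and visual inputs, fused into a joint
    network with two heads (enhanced speech and reconstructed image)."""

    def __init__(self, n_freq=257, img_size=64):
        super().__init__()
        # Audio branch: 1-D convolutions over spectrogram frames.
        self.audio_enc = nn.Sequential(
            nn.Conv1d(n_freq, 256, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(256, 128, kernel_size=5, padding=2), nn.ReLU(),
        )
        # Visual branch: 2-D convolutions over mouth-region frames.
        self.visual_enc = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),  # -> (batch, 32)
        )
        # Joint network operating on the fused (concatenated) features.
        self.joint = nn.Sequential(nn.Linear(128 + 32, 256), nn.ReLU())
        # Two output heads: enhanced-speech spectrum and reconstructed image.
        self.speech_head = nn.Linear(256, n_freq)
        self.image_head = nn.Linear(256, img_size * img_size)

    def forward(self, noisy_spec, mouth_img):
        # noisy_spec: (batch, n_freq, frames); mouth_img: (batch, 1, H, W)
        a = self.audio_enc(noisy_spec).mean(dim=2)   # (batch, 128)
        v = self.visual_enc(mouth_img)               # (batch, 32)
        h = self.joint(torch.cat([a, v], dim=1))     # fusion of the two branches
        return self.speech_head(h), self.image_head(h)
```

Training such a network against both clean-speech and clean-image targets is what lets the visual branch contribute when the audio is heavily corrupted.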
Journal Article · DOI

The Hearing-Aid Speech Perception Index (HASPI)

TL;DR: HASPI is found to give accurate intelligibility predictions for a wide range of signal degradations including speech degraded by noise and nonlinear distortion, speech processed using frequency compression, noisy speech processed through a noise-suppression algorithm, and speech where the high frequencies are replaced by the output of a noise vocoder.
Journal Article · DOI

Working memory, age, and hearing loss: susceptibility to hearing aid distortion.

TL;DR: It is suggested that older listeners with hearing loss and poor working memory are more susceptible to distortions caused by at least some types of hearing aid signal-processing algorithms and by noise, and that this increased susceptibility should be considered in the hearing aid fitting process.
Journal Article · DOI

Objective Quality and Intelligibility Prediction for Users of Assistive Listening Devices: Advantages and limitations of existing tools

TL;DR: An overview of 12 existing objective speech quality and intelligibility prediction tools is presented and recommendations are given for suggested uses of the different tools under specific environmental and processing conditions.
Posted Content

An Overview of Deep-Learning-Based Audio-Visual Speech Enhancement and Separation

TL;DR: This paper provides a systematic survey of this research topic, focusing on the main elements that characterise the systems in the literature: acoustic features; visual features; deep learning methods; fusion techniques; training targets; and objective functions.