Author

Marjorie R. Leek

Bio: Marjorie R. Leek is an academic researcher at the Portland VA Medical Center. Her research focuses on hearing loss and speech perception. She has an h-index of 25 and has co-authored 86 publications receiving 2,604 citations. Previous affiliations of Marjorie R. Leek include the Walter Reed Army Institute of Research and Oregon Health & Science University.


Papers
Journal ArticleDOI
TL;DR: The general development of adaptive procedures is described and three commonly used methods are reviewed; these methods typically yield a threshold value and, in some cases, estimates of other characteristics of the psychometric function underlying perceptual performance, such as its slope.
Abstract: As research on sensation and perception has grown more sophisticated during the last century, new adaptive methodologies have been developed to increase efficiency and reliability of measurement. An experimental procedure is said to be adaptive if the physical characteristics of the stimuli on each trial are determined by the stimuli and responses that occurred in the previous trial or sequence of trials. In this paper, the general development of adaptive procedures is described, and three commonly used methods are reviewed. Typically, a threshold value is measured using these methods, and, in some cases, other characteristics of the psychometric function underlying perceptual performance, such as slope, may be estimated. Results of simulations and experiments with human subjects are reviewed to evaluate the utility of these adaptive procedures and the special circumstances under which one might be superior to another.
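As a concrete illustration of the kind of adaptive procedure this review covers, below is a minimal sketch of a 2-down/1-up transformed up-down staircase (one commonly used method of this class). The simulated observer, starting level, and step size are hypothetical, not taken from the paper:

```python
import math
import random

def two_down_one_up(p_correct, start=40.0, step=2.0, n_reversals=12, seed=1):
    """2-down/1-up transformed up-down staircase: the track converges on the
    stimulus level yielding ~70.7% correct. p_correct(level) simulates the
    observer. Returns the mean reversal level, discarding the first four."""
    rng = random.Random(seed)
    level = start
    run = 0          # consecutive correct responses since the last step
    direction = 0    # -1 after a step down, +1 after a step up
    reversals = []
    while len(reversals) < n_reversals:
        if rng.random() < p_correct(level):
            run += 1
            if run == 2:                  # two correct in a row: make it harder
                run = 0
                if direction == +1:       # track turned around: record reversal
                    reversals.append(level)
                direction = -1
                level -= step
        else:                             # any miss: make it easier
            run = 0
            if direction == -1:
                reversals.append(level)
            direction = +1
            level += step
    kept = reversals[4:]                  # discard early reversals
    return sum(kept) / len(kept)

# Hypothetical observer: logistic psychometric function, threshold near 30
observer = lambda lvl: 0.5 + 0.5 / (1.0 + math.exp(-(lvl - 30.0)))
est = two_down_one_up(observer)
```

The adaptive property is visible in the update rule: each trial's stimulus level depends only on the preceding responses, so trials concentrate near the threshold rather than being spread over a fixed range.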

735 citations

Journal ArticleDOI
TL;DR: There was evidence that the ability to benefit from F0 differences between competing signals decreases with age, and normal-hearing listeners and hearing-impaired listeners with small F0-discrimination thresholds showed improvements in vowel labeling when there were differences in F0 between vowels on the concurrent-vowel task.
Abstract: Normal-hearing and hearing-impaired listeners were tested to determine F0 difference limens for synthetic tokens of 5 steady-state vowels. The same stimuli were then used in a concurrent-vowel labeling task. …

123 citations

Journal ArticleDOI
TL;DR: It appears from these data that birds can hear the fine temporal structure in complex waveforms over very short periods, and that at least part of the mechanisms underlying this high temporal resolving power resides at the peripheral level of the avian auditory system.
Abstract: The ability of three species of birds to discriminate among selected harmonic complexes with fundamental frequencies varying from 50 to 1000 Hz was examined in behavioral experiments. The stimuli were synthetic harmonic complexes with waveform shapes altered by component phase selection, holding spectral and intensive information constant. Birds were able to discriminate between waveforms with randomly selected component phases and those with all components in cosine phase, as well as between positive and negative Schroeder-phase waveforms with harmonic periods as short as 1-2 ms. By contrast, human listeners are unable to make these discriminations at periods less than about 3-4 ms. Electrophysiological measures, including cochlear microphonic and compound action potential measurements to the same stimuli used in behavioral tests, showed differences between birds and gerbils paralleling, but not completely accounting for, the psychophysical differences observed between birds and humans. It appears from these data that birds can hear the fine temporal structure in complex waveforms over very short periods. These data show birds are capable of more precise temporal resolution for complex sounds than is observed in humans and perhaps other mammals. Physiological data further show that at least part of the mechanisms underlying this high temporal resolving power resides at the peripheral level of the avian auditory system.
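The Schroeder-phase stimuli described above can be sketched as follows. The component phases use one common form of Schroeder's recipe, θn = ±πn(n+1)/N; the fundamental, duration, and sampling rate below are illustrative, not the study's actual values:

```python
import math

def schroeder_complex(f0=100.0, n_harmonics=20, sign=+1, fs=8000, dur=0.05):
    """Harmonic complex with Schroeder-phase components, theta_n = sign*pi*n*(n+1)/N.
    All components have equal amplitude, so positive and negative versions have
    identical power spectra; only the temporal fine structure differs."""
    N = n_harmonics
    samples = int(fs * dur)
    wave = []
    for i in range(samples):
        t = i / fs
        s = 0.0
        for n in range(1, N + 1):
            phase = sign * math.pi * n * (n + 1) / N
            s += math.sin(2 * math.pi * n * f0 * t + phase)
        wave.append(s / N)
    return wave

pos = schroeder_complex(sign=+1)   # positive Schroeder phase
neg = schroeder_complex(sign=-1)   # negative Schroeder phase
```

Because spectral and intensive information are held constant, any discrimination between the two waveforms must rest on their differing temporal fine structure, which is the point of the behavioral comparison in the abstract.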

102 citations

Journal ArticleDOI
TL;DR: In this paper, the authors summed the first 30 harmonics of a 100-Hz tone to determine the minimum amplitude difference between spectral peaks and troughs required for vowel identification by normal-hearing and hearing-impaired listeners.
Abstract: To determine the minimum difference in amplitude between spectral peaks and troughs sufficient for vowel identification by normal‐hearing and hearing‐impaired listeners, four vowel‐like complex sounds were created by summing the first 30 harmonics of a 100‐Hz tone. The amplitudes of all harmonics were equal, except for two consecutive harmonics located at each of three ‘‘formant’’ locations. The amplitudes of these harmonics were equal and ranged from 1–8 dB more than the remaining components. Normal‐hearing listeners achieved greater than 75% accuracy when peak‐to‐trough differences were 1–2 dB. Normal‐hearing listeners who were tested in a noise background sufficient to raise their thresholds to the level of a flat, moderate hearing loss needed a 4‐dB difference for identification. Listeners with a moderate, flat hearing loss required a 6‐ to 7‐dB difference for identification. The results suggest, for normal‐hearing listeners, that the peak‐to‐trough amplitude difference required for identification of this set of vowels is very near the threshold for detection of a change in the amplitude spectrum of a complex signal. Hearing‐impaired listeners may have difficulty using closely spaced formants for vowel identification due to abnormal smoothing of the internal representation of the spectrum by broadened auditory filters.
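A minimal sketch of how such vowel-like stimuli can be constructed: the first 30 harmonics of a 100-Hz tone at equal amplitude, with two consecutive harmonics raised at each "formant" location. The formant harmonic numbers, peak height, and sampling parameters here are illustrative, not the paper's actual values:

```python
import math

def vowel_like(formant_harmonics, peak_db, f0=100.0, n_harm=30, fs=8000, dur=0.05):
    """Sum the first n_harm harmonics of f0 at equal amplitude, raising each
    pair of consecutive harmonics listed in formant_harmonics by peak_db dB."""
    boosted = set()
    for h in formant_harmonics:
        boosted.update((h, h + 1))        # two consecutive harmonics per "formant"
    gain = 10 ** (peak_db / 20.0)         # dB -> linear amplitude ratio
    samples = int(fs * dur)
    out = []
    for i in range(samples):
        t = i / fs
        s = 0.0
        for n in range(1, n_harm + 1):
            amp = gain if n in boosted else 1.0
            s += amp * math.sin(2 * math.pi * n * f0 * t)
        out.append(s)
    return out

# Hypothetical "formant" placements (harmonic numbers) and a 2-dB peak height
stim = vowel_like(formant_harmonics=[3, 10, 22], peak_db=2.0)
```

Varying peak_db from 1 to 8 dB reproduces the manipulation described in the abstract: the smaller the peak-to-trough difference a listener can use, the finer their internal representation of the spectrum must be.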

101 citations

Journal ArticleDOI
TL;DR: The results suggest that, for some patients, blast exposure may lead to difficulties with hearing in complex auditory environments, even when peripheral hearing sensitivity is near normal limits.
Abstract:

INTRODUCTION

The recent conflicts in Afghanistan and Iraq (Operation Iraqi Freedom/Operation Enduring Freedom/Operation New Dawn [OIF/OEF/OND]) have resulted in unprecedented rates of exposure to high-intensity blasts, often resulting in traumatic brain injury (TBI) among members of the U.S. military. The Department of Veterans Affairs (VA) 2011 TBI Comprehensive Evaluation Summary [1] estimated the prevalence of TBI in the OIF/OEF/OND Veteran population at 7.8 percent. While the typical focus of auditory evaluation is on damage to the peripheral auditory system, the prevalence of brain injury among those exposed to high-intensity blasts suggests that damage to the central auditory system is an equally important concern for blast-exposed persons.

Discussions with clinical audiologists and OIF/OEF/OND Veterans Service Office personnel suggest that a common complaint voiced by blast-exposed Veterans is an inability to understand speech in noisy environments, even when peripheral hearing is within normal or near-normal limits. Such complaints are consistent with damage to neural networks responsible for higher-order auditory processing [2]. The auditory structures most vulnerable to axonal injury are the lower- and mid-brainstem nuclei, the thalamus, and the corpus callosum. Damage may include swelling, stretching, and shearing of neural connections, as well as inflammatory changes in response to tissue injury [3]. There may also be a loss of synaptic structures connecting nuclei in the central auditory system, resulting in distorted or missing information transmitted to cortical centers [4-5]. The interhemispheric pathways connecting auditory areas of the two cerebral hemispheres run through the posterior half of the corpus callosum [6]. The corpus callosum may be particularly vulnerable, as it has been shown to be damaged even in non-blast-related head injury [7-8]. Axonal damage to this part of the corpus callosum would be expected to interfere with auditory and speech processing, as well as with other bilaterally represented auditory cortical functions.

Furthermore, recent modeling work has revealed that the blast wave itself can exert stress and strain forces on the brain that are likely to cause widespread axonal and blood vessel damage [9]. Such impacts would not necessarily create changes visible on a medical image, but could still impair function by reducing neural transduction time, efficiency, or precision of connectivity. This wide diversity of potential damage and sites of injury also suggests that the profile of central auditory damage is likely to vary considerably among patients. For this reason, the first step in the diagnosis and treatment of blast-related dysfunction is the identification of which brain functions have been impaired.

TESTS OF CENTRAL AUDITORY FUNCTION

Behavioral tests are mainstays of central auditory test batteries, and many have been shown to be both sensitive and specific to particular brain injuries. It may also be important, however, to include evoked potential (EP) measures (electrophysiological tests) of neural function [10] to complement the behavioral tests. The Auditory Brainstem Response (ABR) is a commonly used test that evaluates the integrity of the auditory nerve and brainstem structures, whereas measures from the auditory evoked late response reflect cortical processing [11]. Long latency responses (LLRs), which are sensitive to impaired neuronal firing and desynchronization of auditory information, are useful tools for the assessment of cognitive capability. Prolonged latencies in LLRs would suggest interruptions in neural transmission within or between cortical networks, which could be due to reduced cortical neuron availability or diminished neural firing intensity. In addition, longer neural refractory periods can result in reduced amplitudes of event-related potentials.
The purpose of the current study was to determine whether performance on a battery of behavioral and electrophysiological tests of central auditory function differs between individuals who have recently experienced a high-explosive blast and those who have not. …

97 citations


Cited by
Journal ArticleDOI
TL;DR: An integrated approach to fitting psychometric functions, assessing the goodness of fit, and providing confidence intervals for the function’s parameters and other estimates derived from them, for the purposes of hypothesis testing is described.
Abstract: The psychometric function relates an observer’s performance to an independent variable, usually some physical quantity of a stimulus in a psychophysical task. This paper, together with its companion paper (Wichmann & Hill, 2001), describes an integrated approach to (1) fitting psychometric functions, (2) assessing the goodness of fit, and (3) providing confidence intervals for the function’s parameters and other estimates derived from them, for the purposes of hypothesis testing. The present paper deals with the first two topics, describing a constrained maximum-likelihood method of parameter estimation and developing several goodness-of-fit tests. Using Monte Carlo simulations, we deal with two specific difficulties that arise when fitting functions to psychophysical data. First, we note that human observers are prone to stimulus-independent errors (or lapses). We show that failure to account for this can lead to serious biases in estimates of the psychometric function’s parameters and illustrate how the problem may be overcome. Second, we note that psychophysical data sets are usually rather small by the standards required by most of the commonly applied statistical tests. We demonstrate the potential errors of applying traditional χ2 methods to psychophysical data and advocate use of Monte Carlo resampling techniques that do not rely on asymptotic theory. We have made available the software to implement our methods.
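To make the lapse-rate idea concrete, here is a toy maximum-likelihood fit of a logistic psychometric function with a free lapse parameter. This grid-search sketch is not the authors' software; the data, parameter grids, and function form are invented for illustration:

```python
import math

def psychometric(x, alpha, beta, gamma=0.5, lam=0.0):
    """2AFC psychometric function: guess rate gamma, lapse rate lam,
    logistic core with threshold alpha and slope beta."""
    return gamma + (1.0 - gamma - lam) / (1.0 + math.exp(-beta * (x - alpha)))

def neg_log_lik(data, alpha, beta, lam):
    """Binomial negative log-likelihood over (level, n_correct, n_total) triples."""
    nll = 0.0
    for x, n_correct, n_total in data:
        p = min(max(psychometric(x, alpha, beta, lam=lam), 1e-9), 1 - 1e-9)
        nll -= n_correct * math.log(p) + (n_total - n_correct) * math.log(1 - p)
    return nll

def fit(data, lam_grid):
    """Constrained ML by brute-force grid search over threshold, slope, lapse."""
    best = None
    for lam in lam_grid:
        for alpha in [a / 10.0 for a in range(0, 101)]:    # 0.0 .. 10.0
            for beta in [b / 10.0 for b in range(5, 51)]:  # 0.5 .. 5.0
                nll = neg_log_lik(data, alpha, beta, lam)
                if best is None or nll < best[0]:
                    best = (nll, alpha, beta, lam)
    return best[1:]

# Invented 2AFC data (level, correct, trials) from a hypothetical observer
data = [(2, 27, 50), (4, 33, 50), (5, 38, 50), (6, 44, 50), (8, 48, 50)]
alpha, beta, lam = fit(data, lam_grid=[0.0, 0.02, 0.04])
```

Constraining lam to a small grid mirrors the paper's point: allowing a nonzero lapse rate keeps occasional stimulus-independent errors at high levels from dragging the estimated slope and threshold away from their true values.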

2,263 citations

Journal ArticleDOI
TL;DR: A comprehensive overview of deep learning-based supervised speech separation can be found in this paper, where three main components of supervised separation are discussed: learning machines, training targets, and acoustic features.
Abstract: Speech separation is the task of separating target speech from background interference. Traditionally, speech separation is studied as a signal processing problem. A more recent approach formulates speech separation as a supervised learning problem, where the discriminative patterns of speech, speakers, and background noise are learned from training data. Over the past decade, many supervised separation algorithms have been put forward. In particular, the recent introduction of deep learning to supervised speech separation has dramatically accelerated progress and boosted separation performance. This paper provides a comprehensive overview of the research on deep learning based supervised speech separation in the last several years. We first introduce the background of speech separation and the formulation of supervised separation. Then, we discuss three main components of supervised separation: learning machines, training targets, and acoustic features. Much of the overview is on separation algorithms where we review monaural methods, including speech enhancement (speech-nonspeech separation), speaker separation (multitalker separation), and speech dereverberation, as well as multimicrophone techniques. The important issue of generalization, unique to supervised learning, is discussed. This overview provides a historical perspective on how advances are made. In addition, we discuss a number of conceptual issues, including what constitutes the target source.
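One of the training targets this literature discusses, the ideal ratio mask, can be sketched in a few lines. The time-frequency power values below are toy numbers, not real spectrogram data:

```python
def ideal_ratio_mask(speech_power, noise_power):
    """Ideal ratio mask, a common supervised-separation training target:
    IRM = sqrt(S / (S + N)) per time-frequency unit, in [0, 1]."""
    return [[(s / (s + n)) ** 0.5 if s + n > 0 else 0.0
             for s, n in zip(srow, nrow)]
            for srow, nrow in zip(speech_power, noise_power)]

speech = [[4.0, 1.0], [0.0, 9.0]]   # toy T-F power values (rows = time frames)
noise  = [[4.0, 3.0], [1.0, 0.0]]
mask = ideal_ratio_mask(speech, noise)
# mask[0][0] = sqrt(4 / 8) ~= 0.707: equal speech and noise power
```

At test time a learned estimate of such a mask is multiplied with the noisy spectrogram to suppress time-frequency units dominated by interference, which is the basic mechanism behind the monaural enhancement methods the overview surveys.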

1,009 citations

Proceedings ArticleDOI
03 Oct 2010
TL;DR: The proposed technology is based on the electrovibration principle, uses no moving parts, and provides a wide range of tactile feedback sensations to fingers moving across a touch surface, enabling the design of a wide variety of interfaces that allow the user to feel virtual elements through touch.
Abstract: We present a new technology for enhancing touch interfaces with tactile feedback. The proposed technology is based on the electrovibration principle, does not use any moving parts and provides a wide range of tactile feedback sensations to fingers moving across a touch surface. When combined with an interactive display and touch input, it enables the design of a wide variety of interfaces that allow the user to feel virtual elements through touch. We present the principles of operation and an implementation of the technology. We also report the results of three controlled psychophysical experiments and a subjective user evaluation that describe and characterize users' perception of this technology. We conclude with an exploration of the design space of tactile touch screens using two comparable setups, one based on electrovibration and another on mechanical vibrotactile actuation.

740 citations


Journal ArticleDOI
TL;DR: This article presents evidence that favors the independent horse-race model, but also some evidence that challenges it, and discusses recent models that elaborate the role of a stop process in inhibiting a response.

667 citations