Journal ArticleDOI

Transformation of sound pressure level from the free field to the eardrum in the horizontal plane.

01 Dec 1974-Journal of the Acoustical Society of America (Acoustical Society of America)-Vol. 56, Iss: 6, pp 1848-1861
TL;DR: Measurements of pressure transformation, azimuthal dependence, interaural level difference, and ear canal pressure distribution from 12 studies are brought together in a common framework, leading to the construction of self‐consistent families of curves best fitting the data.
Abstract: Measurements of pressure transformation, azimuthal dependence, interaural level difference, and ear canal pressure distribution from 12 studies are brought together in a common framework. The pool of data covers 100 subjects, the majority male, measured in five countries over a 40‐yr period. Logical procedures are developed to identify the surfaces which best fit these essentially three‐dimensional distributions of data, making allowance for the many disparities between studies. Sheets of data are presented showing transformation to the eardrum, azimuthal dependence, and interaural difference as functions of frequency from 0.2 to 12 kHz at 45° intervals in azimuth. Other sheets show azimuthal dependence and interaural difference as functions of azimuth at 24 discrete frequencies. The logical procedures, data presentations, and review of disparities lead to the construction of self‐consistent families of curves best fitting the data and showing the average sound pressure transformation from the free field to the human eardrum as a function of frequency at 15° intervals in azimuth. Possible explanations of differences between studies are suggested.
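The end product described above is a tabulated family of curves: average free-field-to-eardrum gain as a function of frequency at 15° azimuth intervals. As a loose illustration of how such a frequency-by-azimuth table might be queried at intermediate points, here is a minimal interpolation sketch in Python; the grid values are hypothetical placeholders (zeros), not Shaw's tabulated data, and the function name is invented for this example.

```python
# Minimal sketch: querying a free-field-to-eardrum transformation surface
# T(frequency, azimuth) by bilinear interpolation in log-frequency and azimuth.
# The gain values below are hypothetical placeholders, NOT Shaw's data.
import numpy as np
from scipy.interpolate import RegularGridInterpolator

freqs_hz = np.array([200.0, 500.0, 1000.0, 2000.0, 4000.0, 8000.0, 12000.0])
azimuths_deg = np.array([0.0, 45.0, 90.0, 135.0, 180.0])
gain_db = np.zeros((freqs_hz.size, azimuths_deg.size))  # replace with real tabulated gains

interp = RegularGridInterpolator(
    (np.log10(freqs_hz), azimuths_deg), gain_db,
    bounds_error=False, fill_value=None)  # linear interpolation, extrapolate at edges

def eardrum_gain_db(f_hz, az_deg):
    """Estimated free-field-to-eardrum gain (dB) at frequency f_hz and azimuth az_deg."""
    return float(interp([[np.log10(f_hz), az_deg]])[0])

print(eardrum_gain_db(3000.0, 60.0))
```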
Citations
Book
01 Jan 1994
TL;DR: In this book, technology and applications for the rendering of virtual acoustic spaces are reviewed, including applications to computer workstations, communication systems, aeronautics and space, and sonic arts.
Abstract: Technology and applications for the rendering of virtual acoustic spaces are reviewed. Chapter 1 deals with acoustics and psychoacoustics. Chapters 2 and 3 cover cues to spatial hearing and review psychoacoustic literature. Chapter 4 covers signal processing and systems overviews of 3-D sound systems. Chapter 5 covers applications to computer workstations, communication systems, aeronautics and space, and sonic arts. Chapter 6 lists resources. This TM is a reprint of the 1994 book from Academic Press.

960 citations

Journal ArticleDOI
TL;DR: Techniques used to synthesize headphone-presented stimuli that simulate the ear-canal waveforms produced by free-field sources are described, showing that the simulations duplicate free-field waveforms within a few dB of magnitude and a few degrees of phase at frequencies up to 14 kHz.
Abstract: This article describes techniques used to synthesize headphone-presented stimuli that simulate the ear-canal waveforms produced by free-field sources. The stimulus synthesis techniques involve measurement of each subject's free-field-to-eardrum transfer functions for sources at a large number of locations in free field, and measurement of headphone-to-eardrum transfer functions with the subject wearing headphones. Digital filters are then constructed from the transfer function measurements, and stimuli are passed through these digital filters. Transfer function data from ten subjects and 144 source positions are described in this article, along with estimates of the various sources of error in the measurements. The free-field-to-eardrum transfer function data are consistent with comparable data reported elsewhere in the literature. A comparison of ear-canal waveforms produced by free-field sources with ear-canal waveforms produced by headphone-presented simulations shows that the simulations duplicate free-field waveforms within a few dB of magnitude and a few degrees of phase at frequencies up to 14 kHz.
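The synthesis technique summarized above amounts to inverse-filtering the headphone path and imposing the free-field path. A minimal frequency-domain sketch, assuming the free-field-to-eardrum and headphone-to-eardrum transfer functions have already been measured on a common FFT grid (an illustrative reconstruction, not the authors' code):

```python
# Minimal sketch (assumed frequency-domain implementation, not the authors' code):
# the simulation filter is the free-field-to-eardrum transfer function divided by
# the headphone-to-eardrum transfer function, applied to the stimulus via the FFT.
import numpy as np

def simulate_free_field(stimulus, hrtf, headphone_tf, eps=1e-12):
    """Return a headphone signal approximating the eardrum waveform that a
    free-field source would produce.

    hrtf, headphone_tf: complex responses sampled on the rfft grid of len(stimulus).
    """
    n = len(stimulus)
    X = np.fft.rfft(stimulus, n)
    Y = X * hrtf / (headphone_tf + eps)  # invert the headphone path, impose the free-field path
    return np.fft.irfft(Y, n)
```

In practice the headphone inversion would be regularized and restricted to the band where the measured headphone response is well behaved.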

724 citations

Journal ArticleDOI
TL;DR: This study investigated whether the distinct and separate localization of speech and interference provides any perceptual advantage that, due to the precedence effect, is not degraded by reflections.
Abstract: Spatial separation of speech and noise in an anechoic space creates a release from masking that often improves speech intelligibility. However, the masking release is severely reduced in reverberant spaces. This study investigated whether the distinct and separate localization of speech and interference provides any perceptual advantage that, due to the precedence effect, is not degraded by reflections. Listeners’ identification of nonsense sentences spoken by a female talker was measured in the presence of either speech-spectrum noise or other sentences spoken by a second female talker. Target and interference stimuli were presented in an anechoic chamber from loudspeakers directly in front and 60 degrees to the right in single-source and precedence-effect (lead-lag) conditions. For speech-spectrum noise, the spatial separation advantage for speech recognition (8 dB) was predictable from articulation index computations based on measured release from masking for narrow-band stimuli. The spatial separation advantage was only 1 dB in the lead-lag condition, despite the fact that a large perceptual separation was produced by the precedence effect. For the female talker interference, a much larger advantage occurred, apparently because informational masking was reduced by differences in perceived locations of target and interference.
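The articulation-index prediction mentioned above combines per-band signal-to-noise ratios with band-importance weights. A rough sketch of one common textbook form of that computation (not necessarily the exact procedure used in the study):

```python
# Rough sketch of an articulation-index-style computation (a common textbook form,
# not necessarily the study's exact procedure): per-band SNRs are mapped onto a
# 30-dB range and combined with band-importance weights summing to 1.
import numpy as np

def articulation_index(band_snr_db, band_weights):
    """band_snr_db: per-band speech-to-noise ratios in dB.
    band_weights: band-importance weights summing to 1. Returns a value in [0, 1]."""
    audibility = np.clip((np.asarray(band_snr_db) + 15.0) / 30.0, 0.0, 1.0)
    return float(np.sum(np.asarray(band_weights) * audibility))
```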

485 citations

Journal Article
TL;DR: In this paper, a new version of an earlier model for calculating the loudness of steady sounds is presented; it takes a waveform as its input and begins with a finite impulse response filter representing transfer through the outer and middle ear.
Abstract: Previously we described a model for calculating the loudness of steady sounds from their spectrum. Here a new version of the model is presented, which uses a waveform as its input. The stages of the model are as follows. (a) A finite impulse response filter representing transfer through the outer and middle ear. (b) Calculation of the short-term spectrum using the fast Fourier transform (FFT). To give adequate spectral resolution at low frequencies, combined with adequate temporal resolution at high frequencies, six FFTs are calculated in parallel, using longer signal segments for low frequencies and shorter segments for higher frequencies. (c) Calculation of an excitation pattern from the physical spectrum. (d) Transformation of the excitation pattern to a specific loudness pattern. (e) Determination of the area under the specific loudness pattern. This gives a value for the instantaneous loudness. The short-term perceived loudness is calculated from the instantaneous loudness using an averaging mechanism similar to an automatic gain control system, with attack and release times. Finally the overall loudness impression is calculated from the short-term loudness using a similar averaging mechanism, but with longer attack and release times. The new model gives very similar predictions to our earlier model for steady sounds. In addition, it can predict the loudness of brief sounds as a function of duration and the overall loudness of sounds that are amplitude modulated at various rates.
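The averaging in the final stages behaves like a one-pole automatic gain control with separate attack and release behavior. A minimal sketch of that smoothing step, with illustrative coefficients rather than the model's published constants:

```python
# Minimal sketch of the AGC-like averaging described above: a one-pole smoother
# with separate attack and release coefficients. The coefficient values are
# illustrative, not the model's published constants.
import numpy as np

def smooth_loudness(instantaneous, alpha_attack=0.045, alpha_release=0.02):
    """instantaneous: one instantaneous-loudness value per analysis frame."""
    out = np.empty(len(instantaneous))
    prev = 0.0
    for i, x in enumerate(instantaneous):
        alpha = alpha_attack if x > prev else alpha_release  # rise faster than decay
        prev = alpha * x + (1.0 - alpha) * prev
        out[i] = prev
    return out

# The overall loudness impression would be obtained by applying the same kind of
# smoother again, with longer attack and release time constants.
```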

420 citations


Cites background from "Transformation of sound pressure le..."

  • ...The transfer of sound through the outer and middle ear can be modeled using fixed filters, although the filtering produced by the outer ear depends on the direction of incidence of the sound relative to the head [24]....


Journal ArticleDOI
TL;DR: The results suggest that binaural cues play an important role in auditory distance perception for nearby sources and that the interaural level difference increases substantially for lateral sources as distance decreases below 1 m, even at low frequencies where the ILD is small for distant sources.
Abstract: Although researchers have long recognized the unique properties of the head-related transfer function (HRTF) for nearby sources (within 1 m of the listener’s head), virtually all of the HRTF measurements described in the literature have focused on source locations 1 m or farther from the listener. In this study, HRTFs for sources at distances from 0.12 to 1 m were calculated using a rigid-sphere model of the head and measured using a Knowles Electronic Manikin for Acoustic Research (KEMAR) and an acoustic point source. Both the calculations and the measurements indicate that the interaural level difference (ILD) increases substantially for lateral sources as distance decreases below 1 m, even at low frequencies where the ILD is small for distant sources. In contrast, the interaural time delay (ITD) is roughly independent of distance even when the source is close. The KEMAR measurements indicate that the direction of the source relative to the outer ear plays an important role in determining the high-frequency response of the HRTF in the horizontal plane. However, the elevation-dependent characteristics of the HRTFs are not strongly dependent on distance, and the contribution of the pinna to the HRTF is independent of distance beyond a few centimeters from the ear. Overall, the results suggest that binaural cues play an important role in auditory distance perception for nearby sources.
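For readers who want to reproduce the basic binaural analysis, the two cues discussed above can be estimated directly from a measured left/right pair of head-related impulse responses. A minimal sketch, assuming equal-length impulse responses and a known sample rate (not the authors' analysis code):

```python
# Minimal sketch (assumed analysis, not the authors' code): estimating ILD and ITD
# from an equal-length left/right pair of head-related impulse responses.
import numpy as np

def ild_db(hrir_left, hrir_right, fs, f_lo=200.0, f_hi=16000.0):
    """Broadband interaural level difference (dB, left re right) within [f_lo, f_hi]."""
    HL = np.fft.rfft(hrir_left)
    HR = np.fft.rfft(hrir_right)
    f = np.fft.rfftfreq(len(hrir_left), 1.0 / fs)
    band = (f >= f_lo) & (f <= f_hi)
    return 10.0 * np.log10(np.sum(np.abs(HL[band]) ** 2) / np.sum(np.abs(HR[band]) ** 2))

def itd_seconds(hrir_left, hrir_right, fs):
    """Interaural time delay from the lag of the peak cross-correlation
    (positive: the left-ear response arrives later than the right)."""
    xcorr = np.correlate(hrir_left, hrir_right, mode="full")
    lag = int(np.argmax(xcorr)) - (len(hrir_right) - 1)
    return lag / fs
```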

381 citations