
Showing papers in "Journal of the Acoustical Society of America in 1977"


Journal ArticleDOI
TL;DR: A test of everyday speech reception is described, in which a listener’s utilization of the linguistic‐situational information of speech is assessed, and is compared with the utilization of acoustic‐phonetic information.
Abstract: This paper describes a test of everyday speech reception, in which a listener’s utilization of the linguistic‐situational information of speech is assessed, and is compared with the utilization of acoustic‐phonetic information. The test items are sentences which are presented in babble‐type noise, and the listener response is the final word in the sentence (the key word) which is always a monosyllabic noun. Two types of sentences are used: high‐predictability items for which the key word is somewhat predictable from the context, and low‐predictability items for which the final word cannot be predicted from the context. Both types are included in several 50‐item forms of the test, which are balanced for intelligibility, key‐word familiarity and predictability, phonetic content, and length. Performance of normally hearing listeners for various signal‐to‐noise ratios shows significantly different functions for low‐ and high‐predictability items. The potential applications of this test, particularly in the assessment of speech reception in the hearing impaired, are discussed.

1,076 citations


Journal ArticleDOI
TL;DR: Two experiments were performed to evaluate the perceptual relationships between 16 music instrument tones, and a three‐dimensional scaling solution was found to be interpretable in terms of the spectral energy distribution.
Abstract: Two experiments were performed to evaluate the perceptual relationships between 16 music instrument tones. The stimuli were computer synthesized based upon an analysis of actual instrument tones, and they were perceptually equalized for loudness, pitch, and duration. Experiment 1 evaluated the tones with respect to perceptual similarities, and the results were treated with multidimensional scaling techniques and hierarchic clustering analysis. A three‐dimensional scaling solution, well matching the clustering analysis, was found to be interpretable in terms of (1) the spectral energy distribution; (2) the presence of synchronicity in the transients of the higher harmonics, along with the closely related amount of spectral fluctuation within the tone through time; and (3) the presence of low‐amplitude, high‐frequency energy in the initial attack segment; an alternate interpretation of the latter two dimensions viewed the cylindrical distribution of clusters of stimulus points about the spectral energy distribution, grouping on the basis of musical instrument family (with two exceptions). Experiment 2 was a learning task of a set of labels for the 16 tones. Confusions were examined in light of the similarity structure for the tones from experiment 1, and one of the family‐grouping exceptions was found to be reflected in the difficulty of learning the labels.

734 citations


Journal ArticleDOI
TL;DR: The aim of this article is to promote a better understanding of hearing impairment as a communicative handicap, primarily in noisy environments, and to explain by means of a quantitative model the essentially limited applicability of hearing aids.
Abstract: The aim of this article is to promote a better understanding of hearing impairment as a communicative handicap, primarily in noisy environments, and to explain by means of a quantitative model the essentially limited applicability of hearing aids. After data on the prevalence of hearing impairment and of auditory handicap have been reviewed, it is explained that every hearing loss for speech can be interpreted as the sum of a loss class A (attenuation), characterized by a reduction of the levels of both speech signal and noise, and a loss D (distortion), comparable with a decrease in speech‐to‐noise ratio. On the average, the hearing loss of class D (hearing loss in noise) appears to be about one‐third (in decibels) of the total hearing loss (A+D, hearing loss in quiet). A hearing aid can compensate for class‐A hearing losses, giving difficulties primarily in quiet, but not for class‐D hearing losses, giving difficulties primarily in noise. The latter class represents the first stage of auditory handicap, beginning at an average hearing loss of about 24 dB.

464 citations


Journal ArticleDOI
TL;DR: In this paper, an objective study of the steady-state interaural time difference (ITD) was performed on a manikin comprised of a head and torso, and data were taken for both a bare and clothed torso.
Abstract: An objective study of the steady‐state interaural time difference (ITD) was performed on a manikin comprised of a head and torso. Data were taken for both a bare and clothed torso. The measured ITD’s correspond reasonably accurately at the low and the high frequencies to the computed theoretical values for a rigid sphere of an effective radius a. The theoretical ratio of the low‐frequency (<500 Hz) ITD to the high‐frequency (≳2000 Hz) ITD is 3/2. The measured ITD is a minimum between 1.4 and 1.6 kHz for angles of incidence, ϑinc, of sound between 15° and 60°. At both the low and the high frequencies the data can be expressed by universal curves when the ITD is normalized by (a/c0) sinϑinc, where c0 is the speed of sound in air and ϑinc is the angle of incidence. Both the steady‐state ITD and the interaural sound‐pressure‐level difference (ILD) show differences between measurements made with the bare torso and those with a clothed torso. These objective results support the subjective measurements of past...
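The low‐ and high‐frequency rigid‐sphere ITD limits behind the universal curves above can be sketched numerically: ITD ≈ 3(a/c0)·sinϑ below about 500 Hz and ITD ≈ 2(a/c0)·sinϑ above about 2 kHz, giving the stated 3/2 ratio. The effective head radius and angle below are illustrative assumptions, not values from the paper.

```python
import math

def itd_low(a, theta_deg, c0=343.0):
    """Low-frequency (<~500 Hz) rigid-sphere ITD: 3*(a/c0)*sin(theta)."""
    return 3.0 * (a / c0) * math.sin(math.radians(theta_deg))

def itd_high(a, theta_deg, c0=343.0):
    """High-frequency (>~2 kHz) rigid-sphere ITD: 2*(a/c0)*sin(theta)."""
    return 2.0 * (a / c0) * math.sin(math.radians(theta_deg))

a = 0.0875      # assumed effective head radius in m (illustrative)
theta = 45.0    # angle of incidence in degrees (illustrative)
lo, hi = itd_low(a, theta), itd_high(a, theta)
print(f"low-frequency ITD  = {lo * 1e6:.0f} us")   # 541 us
print(f"high-frequency ITD = {hi * 1e6:.0f} us")   # 361 us
print(f"ratio = {lo / hi:.2f}")                    # 1.50
```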

433 citations


Journal ArticleDOI
TL;DR: In this paper, the speed of sound in standard seawater was measured relative to pure water with a Nusonics single‐transducer sound velocimeter as a function of salinity (5–40‰), temperature (0°–40°C), and applied pressure (0–1000 bars), and the pressure effect on the relative sound speed, (U^P − U^P_H2O) − (U^0 − U^0_H2O), was fitted to an equation in salinity.
Abstract: The speed of sound in standard seawater (diluted with pure water and evaporated) has been measured relative to pure water with a Nusonics single‐transducer sound velocimeter as a function of salinity (5–40‰), temperature (0°–40°C), and applied pressure (0–1000 bars). The effect of pressure on the relative speeds of sound has been fitted to an equation of the form (with a standard deviation of 0.19 m s−1) (U^P − U^P_H2O) − (U^0 − U^0_H2O) = A S + B S^(3/2) + C S^2, where U and U_H2O are the speeds of sound in seawater and pure water, respectively; superscripts P and 0 denote applied pressure P and 0 (1 atm); A, B, and C are temperature‐ and pressure‐dependent parameters; and S is the salinity in parts per thousand (‰). This equation has been combined with the refitted high‐pressure pure‐water sound‐speed equation of Wilson [Naval Ordnance Lab. Rep. (1959)], Chen and Millero [J. Acoust. Soc. Am. 60, 1270–1273 (1976)], and the 1‐atm seawater sound‐speed data of Millero and Kubinski [J. Acoust. Soc. Am. 57, 312–319 (1975)] to calculate the speeds of sound for seawater at various salinities, temperatures, and pressures. Our results agree with the work of Wilson on the average to 0.36 m s−1 over the range of 5–40‰ salinity, 0°–30°C, and 0–1000 bars. Over the oceanic range our results agree on the average with the work of Wilson to 0.3 m s−1 (maximum deviation 0.6 m s−1), and with the work of Del Grosso to 0.5 m s−1 (maximum deviation 0.9 m s−1). The better agreement of our results with those of Wilson may be fortuitous, since our measurements were made relative to his pure‐water data.
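The salinity dependence in the fitted equation above has a simple polynomial-in-√S structure that can be sketched directly. The coefficients A, B, and C are temperature‐ and pressure‐dependent values fitted in the paper and are not reproduced here; the numbers below are hypothetical placeholders for illustration only.

```python
def relative_sound_speed_increment(S, A, B, C):
    """Pressure effect on relative sound speed vs salinity S (in ppt):
    (U^P - U^P_H2O) - (U^0 - U^0_H2O) = A*S + B*S**1.5 + C*S**2."""
    return A * S + B * S**1.5 + C * S**2

# A, B, C depend on temperature and pressure; the paper fits them to data.
# These values are hypothetical placeholders, not the paper's coefficients.
A, B, C = 1.2e-2, -3.0e-4, 1.0e-6
print(relative_sound_speed_increment(35.0, A, B, C))  # increment in m/s
```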

414 citations


Journal ArticleDOI
TL;DR: Frequency discrimination was measured as a function of frequency and sensation level using pulsed sinusoids as stimuli in an adaptive two‐interval forced‐choice psychophysical procedure, and the data are used to evaluate the predictions of current theoretical models.
Abstract: Frequency discrimination was measured for frequencies from 200 to 8000 Hz and for sensation levels from 5 to 80 dB using pulsed sinusoids as stimuli in an adaptive two‐interval forced‐choice psychophysical procedure. An analysis of variance indicated significant effects of frequency and sensation level, and of the interaction between frequency and sensation level. The effect of sensation level is greatest at low frequencies and decreases at high frequencies, being quite small at 8000 Hz. The data are used to evaluate the predictions of current theoretical models.

399 citations


Journal ArticleDOI
TL;DR: In this paper, a linearized theory of the forced radial oscillations of a gas bubble in a liquid is presented, with particular attention devoted to the thermal effects of the bubble.
Abstract: A linearized theory of the forced radial oscillations of a gas bubble in a liquid is presented. Particular attention is devoted to the thermal effects. It is shown that both the effective polytropic exponent and the thermal damping constant are strongly dependent on the driving frequency. This dependence is illustrated with the aid of graphs and numerical tables which are applicable to any noncondensing gas–liquid combination. The particular case of an air bubble in water is also considered in detail.

368 citations


Journal ArticleDOI
TL;DR: An analysis of the actual values of ΔI/I reported in the other studies indicates a range larger than would be predicted on the basis of individual differences among observers in this study.
Abstract: Intensity discrimination was measured for pulsed sinusoids of various frequencies (200–8000 Hz) and sensation levels (5–80 dB). The data for all frequencies were fitted by a single function, ΔI/I = 0.463 (I/I0)^(−0.072), where I0 is the intensity at threshold, I is the intensity of the tone, and ΔI is the increment needed to obtain 71% correct in a two‐interval forced‐choice adaptive procedure. The form of this function is in good agreement with data reported in comparable studies but differs markedly from the data reported by Riesz [Phys. Rev. 31, 867–875 (1928)]. An analysis of the actual values of ΔI/I reported in the other studies indicates a range larger than would be predicted on the basis of individual differences among observers in this study. The data are also discussed in terms of the predictions of current theoretical models.
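The fitted function can be evaluated directly: since I/I0 expressed in dB is the sensation level, I/I0 = 10^(SL/10), so ΔI/I = 0.463 at threshold and falls slowly with level (the "near miss" to Weber's law). A minimal sketch:

```python
def weber_fraction(sl_db):
    """Paper's fit: dI/I = 0.463 * (I/I0)**(-0.072), where the sensation
    level in dB is 10*log10(I/I0), so I/I0 = 10**(SL/10)."""
    return 0.463 * (10.0 ** (sl_db / 10.0)) ** (-0.072)

for sl in (5, 40, 80):
    print(sl, round(weber_fraction(sl), 3))
# 5  -> 0.426
# 40 -> 0.239
# 80 -> 0.123  (dI/I shrinks with level: the "near miss" to Weber's law)
```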

346 citations


Journal ArticleDOI
TL;DR: Four experiments were carried out to investigate a possible underlying basis of the results of voiced–voiceless distinction in stop consonants, which showed strong evidence for categorical perception.
Abstract: Experiments on the voiced–voiceless distinction in stop consonants have shown sharp and consistent labeling functions and categorical‐like discrimination functions for synthetically produced speech stimuli differing in voice‐onset time (VOT). Other research has found somewhat comparable results for young infants and chinchillas as well as cross‐language differences in the perception of these same synthetic stimuli. In the present paper, four experiments were carried out to investigate a possible underlying basis of these seemingly diverse results. All of the experiments employed a set of nonspeech tonal stimuli that differed in the relative onset time of their components. In the first experiment identification and discrimination functions were obtained with these signals which showed strong evidence for categorical perception: the labeling functions were sharp and consistent, and the discrimination functions showed peaks and troughs which were correlated with the labeling probabilities. Other experiments provided evidence for the presence of three distinct categories along this nonspeech stimulus continuum which were separated by narrow regions of high discriminability. Based on these findings a general account of voicing perception for stops in initial position is proposed in terms of the discriminability of differences in the temporal order of the component events at onset.

337 citations


Journal ArticleDOI
TL;DR: In this paper, a set of empirical equations was developed for use in determining the target strength or acoustic cross section of an individual fish at any insonified aspect as a function of fish size and insonifying frequency in the range 1 ≲ L/λ ≲ 100, where L is fish length and λ is acoustic wavelength.
Abstract: A set of empirical equations has been developed for use in determining the target strength or acoustic cross section of an individual fish at any insonified aspect as a function of fish size and insonifying frequency in the range 1 ≲ L/λ ≲ 100, where L is fish length and λ is acoustic wavelength. The equations were developed by interpolating experimental data obtained by insonifying individual fish as they were rotated about one of their principal axes. It was found that acoustic cross section σ is proportional to slightly less than L2 for each aspect, indicating that σ is approximately proportional to insonified area. Since σ is almost proportional to L2, a modified set of empirical equations was developed with σ exactly proportional to L2, thus eliminating the dependence of σ on frequency. The resulting errors are relatively minor and in some situations the modified equations lead to considerable simplifications which make their use quite convenient.

330 citations


Journal ArticleDOI
TL;DR: In this paper, an experimental technique for the determination of normal acoustic properties in a tube, including the effect of mean flow, was presented, where two stationary, wall-mounted microphones measure the sound pressure at arbitrary but known positions in the tube.
Abstract: An experimental technique is presented for the determination of normal acoustic properties in a tube, including the effect of mean flow. An acoustic source is driven by Gaussian white noise to produce a randomly fluctuating sound field in a tube terminated by the system under investigation. Two stationary, wall‐mounted microphones measure the sound pressure at arbitrary but known positions in the tube. Theory is developed, including the effect of mean flow, showing that the incident‐ and reflected‐wave spectra, and the phase angle between the incident and reflected waves, can be determined from measurement of the auto‐ and cross‐spectra of the two microphone signals. Expressions for the normal specific acoustic impedance and the reflection coefficient of the tube termination are developed for a random sound field in the tube. Three no‐flow test cases are evaluated using the two‐microphone random‐excitation technique: a closed tube of specified length, an open, unbaffled tube of specified length, and a pro...

Journal ArticleDOI
TL;DR: In this article, an equation for sound absorption in Lyman and Fleming sea water (S= 35‰, pH=8) is presented as a function of frequency, which represents the contributions to absorption due to boric acid, magnesium sulfate, and water.
Abstract: An equation for sound absorption in Lyman and Fleming sea water (S=35‰, pH=8) is presented as a function of frequency, temperature, and pressure. It represents the contributions to absorption due to boric acid, magnesium sulfate, and water. The pressure effect on sound absorption by magnesium sulfate is treated in more detail than Schulkin and Marsh did. The equation is based on the laboratory work at atmospheric pressure by Simmons for MgSO4 and boric acid, on the pressure work by Fisher in 0.5m MgSO4 solutions, and on pressure work by Litovitz and Carnevale in pure water. For frequencies from 10 to 400 kHz up to pressures of 500 atm the absorption is substantially lower than that calculated from the Schulkin and Marsh equation. The pressure results are in good agreement with field data obtained by Bezdek. [Work supported by Office of Naval Research.]

Journal ArticleDOI
TL;DR: In this article, the phase modulation of an optical beam in a submerged optical fiber coil by sound waves propagating in a fluid is used to produce a sensitive acoustic detector, and the results indicate that the sensitivity of this technique compares well with that of the best available hydrophone.
Abstract: We have demonstrated the feasibility of directly employing acousto‐optic interactions in an optical fiber to produce a sensitive acoustic detector. Our technique utilizes the phase modulation of an optical beam in a submerged optical fiber coil by sound waves propagating in a fluid. Analysis of our results indicates that the sensitivity of this technique compares well with that of the best available hydrophone.

Journal ArticleDOI
TL;DR: A multimicrophone digital processing scheme for removing much of the degrading distortion in acoustic recordings produced in untreated rooms by dividing microphone signals into frequency bands whose corresponding outputs are cophased and added.
Abstract: It is well known that room reverberation can significantly impair one’s perception of sounds recorded by a microphone in that room. Acoustic recordings produced in untreated rooms are characterized by a hollow echolike quality resulting from not locating the microphone close to the source. In this paper we discuss a multimicrophone digital processing scheme for removing much of the degrading distortion. To accomplish this the individual microphone signals are divided into frequency bands whose corresponding outputs are cophased (delay differences are compensated) and added. Then the gain of each resulting band is set based on the cross correlation between corresponding microphone signals in that band. The reconstructed broadband speech is perceived with considerably reduced reverberation.
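The cophasing step described above can be illustrated with a toy two‐microphone sketch: estimate the inter‐microphone delay from the cross‐correlation peak, then compensate it before summing. This is a simplified broadband, time‐domain version of the paper's per‐band scheme, using made‐up signals rather than real room recordings.

```python
import random

def estimate_delay(x, y, max_lag):
    """Delay d (in samples) such that y[n] ~ x[n - d], found as the lag
    maximizing the cross-correlation sum_n x[n] * y[n + lag]."""
    def corr(lag):
        return sum(x[n] * y[n + lag] for n in range(len(x) - max_lag))
    return max(range(max_lag + 1), key=corr)

random.seed(0)
src = [random.uniform(-1.0, 1.0) for _ in range(500)]   # broadband "source"
d_true = 7
mic1 = src[50:450]                     # reference microphone
mic2 = src[50 - d_true:450 - d_true]   # second microphone: same signal, 7 samples late

d = estimate_delay(mic1, mic2, max_lag=20)
# Cophase: advance mic2 by the estimated delay, then add.
cophased = [mic1[n] + mic2[n + d] for n in range(len(mic1) - d)]
print("estimated delay:", d)
```

In the paper this alignment is done per frequency band, and each band's gain is then set from the cross correlation between the microphone signals in that band.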

Journal ArticleDOI
TL;DR: A general review is presented of most areas of sound‐propagation outdoors that are of interest for the control of community noise and suggestions made concerning research activities, applications of existing research, and practical problems which arise in the prediction of noise levels.
Abstract: A general review is presented of most areas of sound propagation outdoors that are of interest for the control of community noise. These areas are geometrical spreading, atmospheric absorption, ground effect (near‐horizontal propagation in a homogeneous atmosphere close to flat ground), refraction, the effect of atmospheric turbulence, and the effect of topography (elevation, hillsides, foliage, etc.). The current state of knowledge in each area is presented and suggestions are made concerning research activities, applications of existing research, and practical problems which arise in the prediction of noise levels.

Journal ArticleDOI
TL;DR: With an impulse‐response technique, the transfer functions from the free sound field to the ear‐canal entrance were measured on 20 subjects for various directions of sound incidence; the eardrum impedance was computed from the ear‐canal transfer function, extending the sparse knowledge of eardrum impedance in the frequency range from 2 to 15 kHz.
Abstract: With an impulse‐response technique, the transfer functions from the free sound field to the ear‐canal entrance were measured on 20 subjects for sound incidence from ten directions in the symmetry plane and 20 directions in the horizontal plane. For each direction, the amplitude and phase of these transfer functions were averaged separately, using a technique that yields mean values still containing the fine structure of the single measurements. Additionally, the transfer function of the ear canal was measured on three subjects. The eardrum impedance was then computed from this transfer function, extending the sparse knowledge of eardrum impedance in the frequency range from 2 to 15 kHz.

Journal ArticleDOI
TL;DR: In this paper, the temporal behavior of all measurable consonants, detailed in all possible conditions, in an extensive reading by one speaker, was discussed and a strong parallelism in duration distributions among similar kinds of consonants was found.
Abstract: The paper discusses the temporal behavior of all measurable consonants, detailed in all possible conditions, in an extensive reading by one speaker. The data indicate a strong parallelism in duration distributions among similar kinds of consonants, and interesting similarities and differences between different kinds of consonants in terms of phoneme‐sequential constraints and higher‐level linguistic factors.

Journal ArticleDOI
TL;DR: Using counterexamples, it is shown that vocabulary size and static and dynamic branching factors are all inadequate as measures of speech recognition complexity of finite state grammars and that perplexity is a more appropriate measure of equivalent choice.
Abstract: Using counterexamples, we show that vocabulary size and static and dynamic branching factors are all inadequate as measures of speech recognition complexity of finite state grammars. Information theoretic arguments show that perplexity (the logarithm of which is the familiar entropy) is a more appropriate measure of equivalent choice. It too has certain weaknesses which we discuss. We show that perplexity can also be applied to languages having no obvious statistical description, since an entropy‐maximizing probability assignment can be found for any finite‐state grammar. Table I shows perplexity values for some well‐known speech recognition tasks:

Task          Phone perplexity   Word perplexity   Vocabulary size   Dynamic branching factor
IBM-Lasers    2.14               21.11             1000              1000
IBM-Raleigh   1.69               7.74              250               7.32
CMU-AIX05     1.52               6.41              1011              35
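The relation stated above ("the logarithm of which is the familiar entropy") can be illustrated with a toy distribution: for a uniform distribution over N alternatives the perplexity is exactly N, which is why it behaves like an equivalent branching factor. A minimal sketch, not the paper's finite‐state‐grammar computation:

```python
import math

def perplexity(probs):
    """2**H for entropy H = -sum p*log2(p); equals N for a uniform
    distribution over N alternatives ('equivalent branching factor')."""
    h = -sum(p * math.log2(p) for p in probs if p > 0)
    return 2.0 ** h

print(perplexity([0.25] * 4))            # 4.0: uniform over 4 choices
print(perplexity([0.7, 0.1, 0.1, 0.1]))  # ~2.56: same support, easier task
```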

PatentDOI
TL;DR: A voice sound transmitting and receiving apparatus suitable for use under noisy circumstances, designed to be inserted into the external auditory canal of a user so that voice sound transmission and reception are achieved via the vibrations of the external auditory canal wall.
Abstract: A voice sound transmitting and receiving apparatus suitable for the use under noisy circumstances and designed to be inserted into the external auditory canal of a user for attaining the voice sound transmitting and receiving operation using the vibrations of the external auditory canal wall.

Journal ArticleDOI
TL;DR: In this article, a technique for evaluating the acoustic intensity vector in an arbitrary environment by measuring the imaginary part of the cross-spectral density between the output signals from two closely spaced, pressure microphones is presented.
Abstract: A technique is presented for evaluating the acoustic‐intensity vector in an arbitrary environment by measuring the imaginary part of the cross‐spectral density between the output signals from two closely spaced, pressure microphones.
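For a single‐frequency plane wave, the two‐microphone estimate can be sketched as follows: with complex pressure amplitudes P1 and P2 at two closely spaced points, the axial intensity is approximately Im{P1·P2*}/(2ρωΔx) (peak‐amplitude convention). The parameters below are illustrative; the finite‐difference estimate differs from the exact plane‐wave intensity by roughly a sin(kΔx)/(kΔx) factor.

```python
import cmath
import math

def tone_phasor(x, f, fs):
    """Complex amplitude (peak convention) of the f-Hz component of x,
    via a single-bin DFT; exact when x spans an integer number of periods."""
    n = len(x)
    return (2.0 / n) * sum(x[k] * cmath.exp(-2j * math.pi * f * k / fs)
                           for k in range(n))

rho, c, f, fs = 1.21, 343.0, 1000.0, 48000.0
dx, amp = 0.02, 1.0                 # mic spacing (m), pressure amplitude (Pa)
k = 2 * math.pi * f / c
n = 4800                            # exactly 100 periods of the tone

# Plane wave travelling from mic 1 toward mic 2: mic 2 lags by k*dx radians.
p1 = [amp * math.cos(2 * math.pi * f * t / fs) for t in range(n)]
p2 = [amp * math.cos(2 * math.pi * f * t / fs - k * dx) for t in range(n)]

P1, P2 = tone_phasor(p1, f, fs), tone_phasor(p2, f, fs)
I_est = (P1 * P2.conjugate()).imag / (2 * rho * 2 * math.pi * f * dx)
I_ref = amp**2 / (2 * rho * c)      # exact plane-wave intensity, W/m^2
print(I_est, I_ref)
```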

Journal ArticleDOI
TL;DR: In this paper, a single-mode optical fiber was used to pass a laser beam through a tank in which an approximately plane acoustic wave was produced, and the response of an optical fiber in water to low-frequency acoustic waves was investigated experimentally and compared with analytical results.
Abstract: The response of an optical fiber in water to low‐frequency acoustic waves was investigated experimentally and compared with analytical results. A single‐mode optical fiber was used to pass a laser beam through a tank in which an approximately plane acoustic wave was produced. A change in the optical index of refraction of the fiber creates an effective path‐length change for the optical beam which results in a phase shift of the optical beam with respect to a reference beam unaltered by the acoustic field. By mixing the phase‐shifted beam with a reference beam of constant phase in a photodetector the phase variation at the acoustic frequency is detected. Theoretically, the sensitivity of a fiber‐optic interferometer should be independent of frequency. Experimental results confirm this.

Journal ArticleDOI
TL;DR: Subjects showed excellent within-category discrimination in all three tasks after a moderate amount of training in a same-different task with a fixed standard and with feedback, and discrimination performance continuously improved with increasing stimulus difference for both intra- and intercategory comparisons.
Abstract: The discriminability of bilabial stop consonants differing in VOT (the Abramson–Lisker bilabial series) was measured in a same–different task, an oddity task, and a dual response, discrimination–identification task. Subjects showed excellent within‐category discrimination in all three tasks after a moderate amount of training in a same–different task with a fixed standard and with feedback. In addition, discrimination performance continuously improved with increasing stimulus difference for both intra‐ and intercategory comparisons. Also, subjects were able to alter their identification responses so that well‐defined category boundaries fell at arbitrary values determined by the experimenters. These results are not compatible with a strict interpretation of the categorical perception of stop consonants.

Journal ArticleDOI
TL;DR: Listening experiments with spliced speech showed that cues for the perception of word juncture occurred always at word onset, at word offset only for /l/ and /r/, and never word medially.
Abstract: Listening experiments with spliced speech showed that cues for the perception of word juncture occurred always at word onset, at word offset only for /l/ and /r/, and never word medially. Spectrograms showed the cues to be bursts, aspiration, glottal stops, laryngealization, and distinct syllable‐initial allophones of /l/ and /r/.

PatentDOI
TL;DR: A vibration isolation system for a passenger carrying helicopter with which the crew seats in the cockpit area and the floor in the passenger area are decoupled from the airframe is presented in this paper.
Abstract: A vibration isolation system for a passenger carrying helicopter with which the crew seats in the cockpit area and the floor in the passenger area are decoupled from the airframe thereby isolating the seats and floor from the airframe vibrations. In addition, the fuel tanks of the helicopter are isolated from the airframe so that a force feedback from the fuel tank to the airframe resulting from the changing fuel quantity is effectively eliminated. The system employs nodal isolators which both isolate (decouple) and support the particular structural mass in question.

Journal ArticleDOI
TL;DR: Sperm whale codas serve as a means of individual acoustic identification, and acoustic locations calculated from the four‐hydrophone array data showed that changes in underwater movement coincided with changes in the coda sequences.
Abstract: Short series of 3 to 40 or more clicks are produced by sperm whales, Physeter catodon, in stereotyped repetitive sequences or codas. The temporal click patterns in codas appear to be unique to individual whales over at least a few hours. It is suggested that sperm whale codas serve as a means of individual acoustic identification. An apparent exchange of codas between two animals was analyzed, and acoustic locations calculated from the four‐hydrophone array data showed that changes in underwater movement coincided with changes in the coda sequences.

Journal ArticleDOI
TL;DR: In this article, a binaural interaction model is described in which the peripheral transduction from acoustical waveforms to firing patterns on the auditory nerves is included explicitly, and the model describes the parametric dependence of masking level differences in almost all available data.
Abstract: A binaural interaction model is described in which the peripheral transduction from acoustical waveforms to firing patterns on the auditory nerves is included explicitly. Quantitative predictions are compared with available data on the binaural detection of low‐frequency tones masked by Gaussian noise. The model describes the parametric dependence of masking‐level differences in almost all available data.

Journal ArticleDOI
TL;DR: A series of experiments carried out to further elucidate the role of spectral cues in locating sounds in the median sagittal plane (MSP) revealed a notch in the frequency response curves which migrated toward the lower frequencies as the sound source was moved from above to below the aural axis.
Abstract: A series of experiments was carried out to further elucidate the role of spectral cues in locating sounds in the median sagittal plane (MSP). Broadband noise bursts, generated at ±30°, ±15°, and 0° re the aural axis, were recorded via microphones placed in the external ear canals of 8 Ss. When these recorded sounds were played back dichotically through headphones, they were perceived as originating from the loudspeakers, not the headphones. In fact, Ss could identify that loudspeaker which originally generated the sound nearly as accurately as they could when listening under free‐field conditions. Analysis of the spectra of these recorded sounds revealed a notch in the frequency response curves which migrated toward the lower frequencies as the sound source was moved from above to below the aural axis. This feature of the spectrum may well be important for accuracy in locating sounds emanating from the frontal segment of the MSP. Four Ss were given additional tests to find out whether they could locate sounds...

Journal ArticleDOI
TL;DR: Results confirm Lisker’s suggestion that the major effect of F1 in initial voicing contrasts is determined by its perceived frequency at the onset of voicing and show that a periodically excited F1 transition is not, per se, a positive cue to voicing.
Abstract: It has been claimed that a rising first‐formant (F1) transition is an important cue to the voiced–voiceless distinction for syllable‐initial, prestressed stop consonants in English. Lisker [J. Acoust. Soc. Am. 57, 1547–1551 (L) (1975)] has pointed out that the acoustic manipulations suggesting a role for F1 have involved covariation of the onset frequency of F1 with the duration, and hence the frequency extent, of the F1 transition; he has argued that effects hitherto ascribed to the transition are more properly attributed to its onset. Two experiments are reported in which F1 onset frequency and F1 transition duration/extent were manipulated independently. The results confirm Lisker’s suggestion that the major effect of F1 in initial voicing contrasts is determined by its perceived frequency at the onset of voicing and show that a periodically excited F1 transition is not, per se, a positive cue to voicing. In further experiments, the relative levels and the frequencies at the onset of voicing of both F1 and F2 were manipulated. The influences on the perception of stop‐consonant voicing that resulted were determined specifically by the frequency of F1 and not by its absolute or relative level or by the overall distribution of energy in the spectrum. The results demonstrate a complementary relationship between perceptual cue sensitivity and production constraints: In production, the VOT characterizing a particular stop consonant varies inversely with the degree of vocal‐tract constriction, and hence with the frequency of F1, required by the phoneme following the stop; in perception, the lower the frequency of F1 at the onset of voicing, the longer the VOT that is required to cue voicelessness. 
In this way, the inclusion of F1 onset frequency in the cue repertoire for voicing, and the establishment of the cue trading relationship, reduce the problem of contextual variation that would be met were VOT alone, or some other amalgam of cues, the only basis of the voicing distinction.

Journal ArticleDOI
TL;DR: Temporal effects in loudness with respect to four parameters are summarized: (a) phase effects of complex tones and the influence of physiological noise, both in the low‐frequency range, (b) effects of amplitude modulation, (c) frequency modulation, and (d) bandwidth in the high‐frequency range.
Abstract: Data on loudness comparisons are complemented by additional measurements. Thus, temporal effects in loudness with respect to four parameters can be summarized: (a) phase effects of complex tones and the influence of physiological noise, both in the low‐frequency range, (b) effects of amplitude modulation, (c) frequency modulation, and (d) bandwidth in the high‐frequency range. Additionally, transient masking patterns and the corresponding specific loudness patterns produced by strongly time‐varying sounds are discussed. A model for the loudness development of such sounds is designed and realized electronically as a loudness measuring device. The usefulness of this equipment is demonstrated for measuring loudness of the following six types of sounds that vary both temporally and spectrally: (a) single tone bursts as a function of duration, (b) sinusoidally amplitude modulated tones as a function of repetition rate, (c) bandpass noise with constant rms value as a function of bandwidth within the critical b...

Journal ArticleDOI
TL;DR: In this article, the authors developed theoretical predictions of the way that the rate of energy transmission from strings to bridge, as a function of time, depends on the various parameters mentioned; then compare these predictions with experimental measurements of beats and aftersound under various conditions.
Abstract: The fact that a piano typically has not one, but two or three strings tuned to a given pitch, and that these sets of strings cross the bridge at (almost) the same point, leads to significant dynamical coupling among their vibrations. Since the dominant dissipation mechanism is the non‐rigidity of the bridge, the rate of energy loss by one string is radically affected by the way that its partners are vibrating; for example, an “antisymmetric” vibration of a pair of strings is much longer‐lived than a “symmetric” one. This fundamental phenomenon is complicated by a number of factors, including (a) slight differences in the natural frequencies of the individual strings; (b) a bridge admittance which has a reactive as well as a resistive part; (c) two possible polarizations of the string vibration; and (d) hammer irregularities which cause nonidentical initial excitations of the strings. In this paper we develop theoretical predictions of the way that the rate of energy transmission from strings to bridge, as a function of time, depends on the various parameters mentioned; we then compare these predictions with experimental measurements of beats and “aftersound” under various conditions. Our data shows the time‐dependence of each polarization of string vibration amplitude, as well as the resulting sound pressure level, covering a dynamic range of 70 dB from the moment of hammer impact until the signal is lost in the noise. The agreement with theory is excellent. On the basis of this understanding we also explain the function of the una corda pedal in controlling the aftersound, point out the stylistic possibilities of a split damper, and explore the way in which an excellent tuner can use the fine tuning of the unisons to make the aftersound more uniform from note to note.