
Showing papers in "Journal of the Acoustical Society of America in 1973"


Journal ArticleDOI
TL;DR: A theory is formulated for the central formation of the pitch of complex tones, i.e., periodicity pitch; the theory is a logical deduction from statistical estimation theory of the optimal estimate for fundamental frequency.
Abstract: A comprehensive theory is formulated for the central formation of the pitch of complex tones, i.e., periodicity pitch [Schouten, Ritsma, and Cardozo, J. Acoust. Soc. Amer. 34, 1418–1424 (1962)]. This theory is a logical deduction from statistical estimation theory of the optimal estimate for fundamental frequency, when this estimate is constrained in ways inferred from empirical phenomena. The basic constraints are (i) the estimator receives noisy information on the frequencies, but not amplitudes and phases, of aurally resolvable simple tones from the stimulus and its aural combination tones, and (ii) the estimator presumes all stimuli are periodic with spectra comprising successive harmonics. The stochastic signals representing the frequencies of resolved tones are characterized by independent Gaussian distributions with mean equal to the frequency represented and a variance that serves as free parameter. The theory is applicable whether frequency is coded by place or time. Optimum estimates of fundamental frequency and harmonic numbers are calculated upon each stimulus presentation. Multimodal probability distributions for the estimated fundamental are predicted in consequence of variability in the estimated harmonic numbers. Quantification of the variance parameter from musical intelligibility data in Houtsma and Goldstein [J. Acoust. Soc. Amer. 51, 520–529 (1972)] shows it to be dependent upon the frequency represented and not upon other stimulus frequencies. The quantified optimum processor theory consolidates known data on pitch of complex tones.

542 citations
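As a rough illustration of the estimation principle described above, here is a minimal Python sketch (not the paper's actual processor; the grid search, noise level, and component values are all hypothetical). It finds the maximum-likelihood fundamental for noisy resolved-component frequencies, modeling each as Gaussian about a harmonic of the candidate f0 and, per the theory's constraint, requiring the estimated harmonic numbers to be successive integers:

```python
import numpy as np

def estimate_f0(freqs, sigma, f0_grid):
    """ML fundamental from noisy resolved-component frequencies.
    Each observation is modeled as Gaussian about a harmonic of the
    candidate f0, and the estimated harmonic numbers must form a run
    of successive integers (the theory's periodicity constraint)."""
    best_f0, best_ll = None, -np.inf
    for f0 in f0_grid:
        n = np.round(freqs / f0)            # estimated harmonic numbers
        if n[0] < 1 or np.any(np.diff(n) != 1):
            continue                        # not successive harmonics
        ll = -np.sum((freqs - n * f0) ** 2 / (2 * sigma ** 2))
        if ll > best_ll:
            best_f0, best_ll = f0, ll
    return best_f0

# Harmonics 3-5 of 200 Hz, perturbed: a "missing fundamental" stimulus.
rng = np.random.default_rng(0)
components = np.sort(np.array([600.0, 800.0, 1000.0]) + rng.normal(0, 3.0, 3))
print(estimate_f0(components, sigma=3.0, f0_grid=np.arange(50.0, 400.0, 0.25)))
```

Occasional wrong harmonic-number assignments in such a search are what produce the multimodal distributions of estimated fundamental that the theory predicts.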


Journal ArticleDOI
TL;DR: In this paper, the effects of mismatch on a conventional beamformer and two optimum beamformers are compared, and conditions for the resolution of closely spaced sources by an optimum beamformer are presented.
Abstract: Mismatch in a beamformer occurs when the knowledge of the signal directional properties is imprecise. The effects of mismatch on a conventional beamformer and two optimum beamformers are compared. One optimum beamformer is based on inversion of the noise cross‐spectral matrix while the other is based on inversion of the signal‐plus‐noise cross‐spectral matrix. When there is mismatch, the inclusion of the signal in the matrix inversion process can lead to dramatic reductions in the output signal‐to‐noise ratio when the output signal‐to‐noise ratio of a perfectly matched beamformer would be greater than unity. However, the corresponding effect on the total beamformer output is less dramatic since an increase in the noise response partially offsets the decrease in signal response. The question of suppressing mismatched signals is closely related to the question of resolving closely spaced sources. Exact conditions are presented for resolution of closely spaced sources by an optimum beamformer. These results are applied to a line array and compared with the resolution capability of a conventional beamformer. It is found, for example, that an output signal‐to‐noise ratio of about 47 dB is required to achieve a resolving power with an optimum processor which is ten times that given by the classical Rayleigh limit. Conditions are also presented for the resolution of two sources of unequal strength.

528 citations
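A minimal numerical sketch of the comparison (array size, input SNR, and mismatch angle are invented for illustration; the weights follow the standard minimum-variance form w ∝ R⁻¹d):

```python
import numpy as np

def steering(n, theta, spacing=0.5):
    """Plane-wave steering vector for an n-element line array with
    half-wavelength spacing; theta in radians from broadside."""
    return np.exp(1j * 2 * np.pi * spacing * np.sin(theta) * np.arange(n))

n, snr = 10, 10.0
v_true = steering(n, np.deg2rad(2.0))     # actual signal direction
v_look = steering(n, np.deg2rad(0.0))     # mismatched look direction
Rn = np.eye(n)                            # white-noise cross-spectral matrix
Rs = snr * np.outer(v_true, v_true.conj())

for label, R in [("noise-only inversion", Rn),
                 ("signal-plus-noise inversion", Rn + Rs)]:
    w = np.linalg.solve(R, v_look)
    w /= v_look.conj() @ w                # unit gain in the look direction
    out_snr = np.abs(w.conj() @ Rs @ w) / np.abs(w.conj() @ Rn @ w)
    print(label, "-> output SNR:", round(10 * np.log10(out_snr), 1), "dB")
```

Running this shows the effect described above: with the signal included in the inverted matrix, a small pointing error causes the processor to treat the signal as interference and suppress it, costing tens of dB of output SNR relative to the noise-only inversion.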


Journal ArticleDOI
TL;DR: Experimentation demonstrated with English nonsense words that (1) final‐syllable and initial‐consonant lengthening occur in utterances with various intonational patterns (imperative, declarative, interrogative); (2) final‐syllable lengthening occurs in word‐final and phrase‐final positions as well as in utterance‐final...
Abstract: The duration of speech segments as a function of position in utterances (initial, medial, final) was studied. In the first experiment seven English speakers read nonsense utterances of the form “say a [bab], say a [babab], say a [bababab],” etc. Spectrograms were used to determine the duration of speech segments in the readings. Final syllables were found to be longer than nonfinal syllables. Final‐syllable vowel increments were approximately 100 msec. Final‐syllable consonant increments were less than vowel increments; for instance, absolute final consonant increments were about 20 msec. Also word‐initial consonants were found to be lengthened by 20–30 msec over medial consonants. Subsequent experimentation demonstrated with English nonsense words that (1) final‐syllable and initial‐consonant lengthening occur in utterances with various intonational patterns (imperative, declarative, interrogative); (2) final‐syllable lengthening occurs in word‐final and phrase‐final positions as well as in utterance‐final position; and (3) final‐syllable and initial‐consonant lengthening occur in various kinds of syllables, including syllables with diphthongs, with fricative consonants, with voiceless stops, with consonant cluster, and with no final consonants (i.e., CV syllables). These studies report durational increments of particularly great magnitude for absolute final fricative consonants. Explanations of the lengthening effects are discussed. One theory suggests that lengthening in certain utterance positions is a learned aspect of language which cues listeners concerning the location of boundaries of words, phrases, or sentences. Explanations based on hypothesized properties of the speech production process are also discussed.

404 citations


Journal ArticleDOI
TL;DR: Study of midsagittal x‐ray tracings reveals that the vocal‐tract outline can be accurately represented by means of variables specifying the positions of the jaw, tongue body, tongue tip, lips, velum, and hyoid.
Abstract: Study of midsagittal x‐ray tracings reveals that the vocal‐tract outline can be accurately represented by means of variables specifying the positions of the jaw, tongue body, tongue tip, lips, velum, and hyoid. As the articulators move, they modify the vocal‐tract cross‐sectional area and the vocal‐tract transfer function computed therefrom. The speech signal may be synthesized by concatenating the responses to repeated excitation of the quasistatic vocal tract. Vowels are specified in terms of variables denoting the positions of the jaw, tongue body, lips, and velum. Consonants are implemented as transformations on the underlying vowel‐derived articulatory states that satisfy given constraints on the position of an articulator relative to the fixed structures. The set of states which satisfy the given constraint corresponds to the allowed productions of the consonant. Coarticulation effects control the selection of the underlying state and thus determine the particular consonant produced. Vowel‐consonant‐vowel sequences generated with the aid of rules for articulator movement and the articulator‐position to vocal‐tract cross‐sectional‐area transformation are intelligible and exhibit coarticulation in agreement with acoustic measurements.

371 citations
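A minimal sketch of the area-function-to-transfer-function step (a lossless cylindrical-section chain with an idealized pressure-release termination at the lips; the section count, dimensions, and cgs constants are illustrative, not the paper's articulatory model):

```python
import numpy as np

def tract_response(areas, length, freqs, c=35000.0, rho=0.00114):
    """Volume-velocity transfer |U_lips / U_glottis| for a chain of
    lossless tube sections (cgs units) with p = 0 at the lips."""
    l = length / len(areas)                 # section length
    H = np.zeros(len(freqs))
    for i, f in enumerate(freqs):
        k = 2 * np.pi * f / c
        M = np.eye(2, dtype=complex)
        for A in areas:                     # cascade the chain matrices
            Z0 = rho * c / A                # section characteristic impedance
            M = M @ np.array([[np.cos(k * l), 1j * Z0 * np.sin(k * l)],
                              [1j * np.sin(k * l) / Z0, np.cos(k * l)]])
        H[i] = 1.0 / abs(M[1, 1])           # with p_lips = 0, U_g = M[1,1] U_lips
    return H

freqs = np.arange(50, 5000, 10.0)
H = tract_response(np.full(20, 5.0), 17.5, freqs)   # uniform 17.5-cm tube
peaks = (H[1:-1] > H[:-2]) & (H[1:-1] > H[2:])
print(freqs[1:-1][peaks][:3])   # ~500, 1500, 2500 Hz: quarter-wave formants
```

A nonuniform area vector in place of `np.full(20, 5.0)` shifts the peaks, which is the mechanism by which articulator positions determine vowel formants.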


Journal ArticleDOI
TL;DR: It was found that at short durations the product of Δf and d was about one order of magnitude smaller than the minimum predicted from the place model, except for frequencies above 5 kHz.
Abstract: Models which attempt to account for our ability to discriminate the pitch of pure tones are discussed. It is concluded that models based on a place (spectral) analysis should be subject to a limitation of the type Δf⋅d ⩾ constant, where Δf is the frequency difference limen (DL) for a tone pulse of duration d. The value of this constant will depend on the ability of the system to resolve small intensity differences. If a resolution of 1 dB is assumed, the value of the constant is about 0.24. In principle, a mechanism based on the measurement of time intervals could do considerably better than this. Frequency DLs were measured over a wide range of frequencies and durations. It was found that at short durations the product of Δf and d was about one order of magnitude smaller than the minimum predicted from the place model, except for frequencies above 5 kHz. A “kink” in the obtained functions was also observed at about 5 kHz. It is concluded that the evidence is consistent with the operation of a time‐measuring mechanism for frequencies below 5 kHz, and with a spectral or place mechanism for frequencies above this.

366 citations
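As a quick worked example of the place-model limit (the 1-dB intensity-resolution assumption gives the constant of about 0.24 quoted above):

```python
# Place-model bound: df * d >= 0.24, with df the frequency DL in Hz
# and d the tone duration in seconds.
for d in (0.005, 0.01, 0.05, 0.1):
    print(f"d = {1000 * d:5.1f} msec -> smallest place-model DL: {0.24 / d:6.1f} Hz")
```

At short durations the measured DLs below 5 kHz fall roughly an order of magnitude under these bounds, which is the basis for the paper's conclusion that a time-measuring mechanism operates there.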


Journal ArticleDOI
TL;DR: Results of the analyses indicate that for most confusion matrices several feature systems can be shown to account equally well for transmitted information, and that across syllable sets and listening conditions, there is little consistency in the identification of perceptually important features.
Abstract: Consonant confusion matrices were obtained for four sets of CV and VC nonsense syllables presented both in quiet and in the presence of a masking noise. A sequential method of partitioning transmitted information for confusion matrices was developed and used to test the hypothesis that when the internal redundancy of feature systems is taken into account, certain articulatory and phonological features of consonants consistently account for transmitted information better than other, closely related, features. Results of the analyses indicate that for most confusion matrices several feature systems can be shown to account equally well for transmitted information, and that across syllable sets and listening conditions, there is little consistency in the identification of perceptually important features. The implication of these findings with respect to the existence of natural perceptual features for consonants is discussed.

365 citations
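For readers unfamiliar with the information measure involved, here is a small self-contained sketch (the toy matrix is invented for illustration) computing the transmitted information of a confusion matrix:

```python
import numpy as np

def transmitted_information(confusions):
    """Transmitted information (bits) of a stimulus-response confusion
    matrix: the mutual information between stimulus and response."""
    p = confusions / confusions.sum()
    px = p.sum(axis=1, keepdims=True)      # stimulus marginals
    py = p.sum(axis=0, keepdims=True)      # response marginals
    nz = p > 0
    return float(np.sum(p[nz] * np.log2((p / (px * py))[nz])))

# Toy 4-consonant matrix: rows = spoken, columns = heard; two highly
# confusable pairs.
C = np.array([[20,  5,  0,  0],
              [ 4, 21,  0,  0],
              [ 0,  0, 18,  7],
              [ 0,  0,  6, 19]])
print(transmitted_information(C))   # ~1.3 of the 2 bits available
```

The paper's sequential partitioning amounts to computing such quantities feature by feature while accounting for the redundancy among features.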


Journal ArticleDOI
TL;DR: Production data indicate that VOT measures can separate voicing contrasts for speakers of Canadian English, but not for speakers of Canadian French, and language switching in bilinguals is w...
Abstract: Cross‐language studies have shown that Voice Onset Time (VOT) is a sufficient cue to separate initial stop consonants into phonemic categories. The present study used VOT as a linguistic cue in examining the perception and production of stop consonants in three groups of subjects: unilingual Canadian French, unilingual Canadian English, and bilingual French‐English speakers. Perception was studied by having subjects label synthetically produced stop‐vowel syllables while production was assessed through spectrographic measurements of VOT in word‐initial stops. Six stop consonants, common to both languages, were used for these tasks. On the perception task, the two groups of unilingual subjects showed different perceptual crossovers with those of the bilinguals occupying an intermediate position. The production data indicate that VOT measures can separate voicing contrasts for speakers of Canadian English, but not for speakers of Canadian French. The data also show that language switching in bilinguals is well controlled for production but poorly controlled for perception at the phonological level.

357 citations


Journal ArticleDOI
TL;DR: Comparisons between discrimination performance and autocorrelation functions of echolocation sounds used in the discriminations suggest that these bats possess some neural equivalent of a matched‐filter, ideal sonar receiver which functionally cross‐correlates a replica of the outgoing signal with the returning echo to detect the echo and determine its arrival time.
Abstract: Using simultaneous discrimination procedures, the acuity of resolution of differences in target range has been determined for four species of echolocating bats (Eptesicus fuscus, Phyllostomus hastatus, Pteronotus suapurensis, and Rhinolophus ferrumequinum). All can discriminate range differences as small as 1 to 3 cm and, for the first three species, the acuity of range resolution appears to be independent of absolute range, at least at short distances. In Eptesicus, range discrimination is mediated by the arrival times of echoes returning from different targets. Comparisons between discrimination performance and autocorrelation functions of echolocation sounds used in the discriminations suggest that these bats possess some neural equivalent of a matched‐filter, ideal sonar receiver which functionally cross‐correlates a replica of the outgoing signal with the returning echo to detect the echo and determine its arrival time. Eptesicus and Phyllostomus both use the entire FM signal for target ranging. Pteronotus uses its entire compound, short‐duration CF/FM signal for ranging, while Rhinolophus separates the FM component from its compound, long‐duration CF/FM sound and uses the FM part for target ranging. The results indicate different functions for the CF and FM components of bat sonar cries, and they suggest that the matched‐filter or cross‐correlation approach to echolocation is appropriate.

357 citations
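A toy version of the matched-filter receiver suggested here (the sweep shape, sample rate, noise level, and target range are all invented for illustration):

```python
import numpy as np

c = 343.0                      # sound speed in air, m/s
fs = 250_000                   # sample rate, Hz
t = np.arange(0, 0.003, 1 / fs)
# A downward FM sweep (60 -> 15 kHz) standing in for the emitted cry.
cry = np.sin(2 * np.pi * (60_000 * t - 0.5 * 15e6 * t ** 2))

true_range = 1.50              # target range, m
delay = int(round(2 * true_range / c * fs))
echo = np.zeros(len(t) + delay)
echo[delay:] = 0.1 * cry       # attenuated, delayed return
echo += np.random.default_rng(1).normal(0, 0.05, len(echo))

# Matched-filter receiver: cross-correlate a replica of the cry with
# the returning echo and take the lag of the correlation peak.
xc = np.correlate(echo, cry, mode="full")
lag = np.argmax(xc) - (len(cry) - 1)
print("estimated range:", lag / fs * c / 2, "m")
```

The sharpness of the correlation peak, set by the signal's bandwidth, is what the autocorrelation comparisons in the study exploit.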



Journal ArticleDOI
TL;DR: Tuning curves for auditory‐nerve fibers in anesthetized cats also show significant low‐frequency tails, and fibers with high characteristic frequencies (CF) respond to tones throughout a wide range of low and middle frequencies.
Abstract: Bekesy's interest in improving telephone communication led him eventually to make observations on the mechanical properties of the basilar membrane. For apical parts of the membrane he obtained resonance curves with steep high‐frequency slopes and gradual low‐frequency slopes. Tuning curves for auditory‐nerve fibers in anesthetized cats also show significant low‐frequency tails. Even fibers with high characteristic frequencies (CF) respond to tones throughout a wide range of low and middle frequencies. For low‐frequency tones at moderate and high stimulus levels the responses of high‐CF fibers tend to be time‐locked either to the same phase of the stimulus or to a phase that differs by a half cycle. For complex stimuli such as speech, which contains many low‐frequency components, these fibers behave as broadly tuned elements, discharging with time patterns that are principally determined by the temporal characteristics of the acoustic waveform. Results of experiments using low‐ and high‐frequency masking stimuli reinforce the view that responses of high‐CF fibers to low‐frequency stimuli should not be ignored in theories that seek to describe the role of the auditory nerve in speech communication.

333 citations


Journal ArticleDOI
TL;DR: In this paper, the authors derived the nonlinear differential equations and boundary conditions in small-field variables, for small fields superposed on large static biasing states, from general rotationally invariant nonlinear electroelastic equations derived previously.
Abstract: The nonlinear differential equations and boundary conditions in small‐field variables, for small fields superposed on large static biasing states, are obtained from general rotationally invariant nonlinear electroelastic equations derived previously. The small‐field equations are directly applicable in the consistent description of parametric effects in high‐coupling piezoelectric materials in terms of the fundamental material parameters. The application of the equations to homogeneously polarized ferroelectrics reveals that in the linear limit the electroelastic equations are identical with the equations of linear piezoelectricity for the symmetry of the polarized state. The influence of a thickness directed homogeneous biasing electric field on the thickness vibrations of a piezoelectric plate, to second order in the biasing field, has been determined. To first order in the biasing field the results indicate that the effective fifth‐rank tensor assumed in earlier quasilinear work on the subject did not have correct symmetry properties because the influence of the homogeneous static deformation under the biasing field was ignored.

Journal ArticleDOI
TL;DR: A new approach to pitch perception is outlined in this paper, based on what might be called auditory pattern recognition and formalized in a mathematical model, the so‐called “pattern‐transformation model.”
Abstract: A new approach to pitch perception is outlined in this paper. It is based on what might be called auditory pattern recognition. The general approach is formalized in a mathematical model, the so‐called “pattern‐transformation model.” In this model an acoustic stimulus is first transformed by the sense organ into a pattern of peripheral neural activity. This peripheral pattern is assumed roughly to represent the power‐spectrum of the stimulus. Thus, the temporal fine structure of the stimulus is virtually ignored; the model is phase insensitive. The peripheral pattern is then assumed to be Fourier transformed into another pattern of activity. This second pattern roughly represents the autocorrelation function of the stimulus. Pitch is derived from the positions of maximal activity in this pattern. From preliminary tests it appears that the model can successfully predict the pitch of many types of complex stimuli. In addition, the model provides estimates of pitch “strength” or “clarity.” These estimates also agree, at least qualitatively, with available data.
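A compact sketch of the pattern-transformation chain (the sampling rate, stimulus, and peak-picking rule are illustrative choices, not the model's calibrated details):

```python
import numpy as np

def pattern_transformation_pitch(x, fs):
    """Discard phase via the power spectrum, Fourier transform it to an
    autocorrelation-like pattern (Wiener-Khinchin), and read pitch from
    the earliest prominent non-zero-lag maximum."""
    spectrum = np.abs(np.fft.rfft(x)) ** 2      # phase-insensitive pattern
    pattern = np.fft.irfft(spectrum)            # second (transformed) pattern
    lo = int(fs / 1000)                         # ignore lags under 1 msec
    seg = pattern[lo:len(pattern) // 2]
    # earliest lag within 5% of the maximum (robust to tied peaks):
    peak = lo + np.argmax(seg >= 0.95 * seg.max())
    return fs / peak

fs = 16000
t = np.arange(0, 0.1, 1 / fs)
# "Missing fundamental": harmonics 3-5 of 200 Hz with arbitrary phases.
x = sum(np.sin(2 * np.pi * h * 200 * t + p)
        for h, p in zip((3, 4, 5), (0.3, 1.7, 2.9)))
print(pattern_transformation_pitch(x, fs))      # ~200 Hz
```

The height of the selected maximum could serve as the model's estimate of pitch "strength" or "clarity."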

Journal ArticleDOI
TL;DR: In this paper, given a known solution for the impulse response of a piston of a given geometry, the steady‐state field is computed by evaluating the driving‐frequency component of the Fourier transform of that impulse response.
Abstract: A method is presented whereby the pressure variations at any point in the field of a baffled piston may be efficiently calculated. If a solution for the impulse response of a piston of a given geometry is known, then for harmonic excitation the steady‐state field may be computed by evaluating the driving‐frequency component of the Fourier transform of the impulse response. This method involves a single integration, whereas the direct numerical solution requires a double numerical integration. An exact, closed‐form solution for the impulse response of a rectangular piston is derived. With this solution and the known solution for the impulse response of a circular piston the steady‐state solutions for these two geometries are obtained. Three‐dimensional and contour plots of data obtained for a circular piston and for a plane of symmetry of a rectangular piston field are presented. The plots for the circular piston compare favorably with previously published plots of data calculated by a double integration.
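A minimal check of the method for the on-axis field of a circular piston, whose impulse response is a simple boxcar in time (the dimensions and frequency below are arbitrary; the closed-form on-axis result is used only for comparison):

```python
import numpy as np

a, z, c = 0.01, 0.05, 1500.0            # piston radius, axial distance, speed (m, m/s)
f = 1.0e6                               # driving frequency, Hz
t0, t1 = z / c, np.hypot(z, a) / c      # on-axis impulse response: h(t) = c on [t0, t1]

# Driving-frequency component of the Fourier transform of h(t), via a
# single numerical integration (a plain Riemann sum suffices here).
t = np.linspace(t0, t1, 200001)
H = np.sum(c * np.exp(-2j * np.pi * f * t)) * (t[1] - t[0])
p_num = 2 * np.pi * f * np.abs(H) / c   # |p| normalized by rho * c * v0

# Exact on-axis magnitude, same normalization, for comparison:
k = 2 * np.pi * f / c
p_exact = 2 * abs(np.sin(k * (np.hypot(z, a) - z) / 2))
print(p_num, p_exact)                   # the two agree closely
```

The saving advertised in the abstract is exactly this: one time integral per field point instead of a double integral over the piston face.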

Journal ArticleDOI
TL;DR: It is shown that localization ability decreases with increasing occlusion, that it is better for signals in the anterior than in the posterior sector of the median plane, and that high‐frequency signal content is more important than the low.
Abstract: Localization of sound sources outside the median plane is influenced primarily by differences in head shadow and arrival time of the signal at the two ears of the observer. For sources located within this plane, localization is influenced primarily by the irregularities of the pinna. By progressively occluding these cavities, it is shown that localization ability decreases with increasing occlusion, that it is better for signals in the anterior than in the posterior sector of the median plane, and that high‐frequency signal content is more important than the low. A number of hypotheses regarding localization in the median plane are noted.

Journal ArticleDOI
TL;DR: It is argued that the elastic properties of soft tissues are largely responsible for their echographic visualizability and that these are determined, for the most part, by structural collagen‐containing components.
Abstract: It is argued that the elastic properties of soft tissues are largely responsible for their echographic visualizability and that these are determined, for the most part, by structural collagen‐containing components.

Journal ArticleDOI
TL;DR: In this article, the occurrence of the wolfnote in the cello is shown by direct tonal analysis to be caused by the beating of two equally forced oscillations, and the fundamental sinusoidal component of vibration of the string is shown to split at the wolf note into a pair of oscillations separated by a frequency interval equal to that of the stuttering frequency.
Abstract: The occurrence of the wolfnote in the cello is shown by direct tonal analysis to be caused by the beating of two equally forced oscillations. The fundamental sinusoidal component of vibration of the string is shown to split at the wolfnote into a pair of oscillations separated by a frequency interval equal to that of the stuttering frequency of the wolfnote. In one of the two cellos investigated, higher partials up to the third are also split into pairs, their separation being the multiple of the harmonic number and the separation of the fundamental pair. Further evidence is obtained to support the interpretation of the wolf as beating between two equal forced vibrations from a frequency analysis of the bowed‐string vibration for different stopped string lengths. The frequencies of vibration of the string bear a close resemblance to those obtained in coupled electrical resonant circuits.
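The beat interpretation rests on the elementary identity for two equal-amplitude oscillations (a standard result, stated here for reference):

```latex
\[
  \cos 2\pi f_1 t + \cos 2\pi f_2 t
  = 2\cos\!\Bigl(2\pi\,\tfrac{f_1 - f_2}{2}\,t\Bigr)
     \cos\!\Bigl(2\pi\,\tfrac{f_1 + f_2}{2}\,t\Bigr),
\]
```

so the envelope waxes and wanes at the difference frequency f1 − f2, which is why the split of the fundamental into a pair separated by the stuttering frequency accounts for the wolf.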

Journal ArticleDOI
TL;DR: Initial research on a model of binaural hearing in which the peripheral transduction from acoustical waveforms to firing patterns on the auditory nerves is explicitly described finds that performance at least as good as experimentally observed performance can be achieved with central processing that is substantially restricted.
Abstract: We discuss initial research on a model of binaural hearing in which the peripheral transduction from acoustical waveforms to firing patterns on the auditory nerves is explicitly described. In most of this initial research, attention is focused on interaural time discrimination, and the processing that follows the peripheral transduction (the central processing) is assumed to be ideal. By imposing limitations on the central processing and computing the consequences for performance, we find that performance at least as good as experimentally observed performance can be achieved with central processing that is substantially restricted.

Journal ArticleDOI
TL;DR: In a preliminary experiment it is established that recruitment in normal subjects, induced by masking or simulated by expansion of the signal, reduces the intelligibility of amplified speech severely, and that this intelligibility can be largely restored by signal processing.
Abstract: A deaf person with recruitment perceives sound as though listening through a volume expander followed by an attenuator, the expansion ratio and attenuation being typically frequency dependent. (Other perceptive aberrations may also be present, of course.) The subject is often prevented from using enough hearing‐aid gain to bring weak consonants into the useful dynamic range of his hearing, because this amount of gain would make lower‐frequency, high‐amplitude vowels intolerably loud. Such subjects commonly find amplified speech to have poor intelligibility. In a preliminary experiment it is established that recruitment in normal subjects, induced by masking or simulated by expansion of the signal, reduces the intelligibility of amplified speech severely, and that this intelligibility can be largely restored by signal processing. The implication is that recruitment in deaf subjects is a sufficient cause for loss of intelligibility, whether or not there are other causes. In the present experiments, speech is processed by a two‐channel amplitude compressor whose frequency‐dependent compression ratio is adjusted to compensate the recruitment of the individual subject, and the compressed speech is subjected to frequency‐selective amplification similarly adapted to the subject. The aim is to amplify each acoustical element of speech, at each frequency‐amplitude coordinate of the speech band, to a relative loudness for the deaf subject corresponding to the relative loudness of that speech element perceived by normals. This processing improved speech recognition, both in quiet and in the presence of competing speech introduced before processing, for six perceptively deaf subjects. Subjects showed an improvement in either initial‐ or terminal‐consonant recognition of at least 22% and as much as 160% at optimum levels in quiet, and from 10% to 229% with speech interference 10 dB below the pre‐processed signal.
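A bare-bones sketch of frequency-dependent amplitude compression of this general kind (one split frequency, two fixed ratios, FFT band-splitting; all parameter values are invented, and the paper's actual two-channel compressor was fitted to each subject's recruitment):

```python
import numpy as np

def two_band_compressor(x, fs, split_hz=1000.0, ratios=(1.5, 3.0), win=0.005):
    """Split the signal at split_hz, compress each band's envelope with
    its own ratio (a higher ratio where recruitment is greater), recombine."""
    X = np.fft.rfft(x)
    f = np.fft.rfftfreq(len(x), 1 / fs)
    bands = [np.fft.irfft(np.where(f < split_hz, X, 0), len(x)),
             np.fft.irfft(np.where(f >= split_hz, X, 0), len(x))]
    out = np.zeros_like(x)
    n = int(win * fs)
    kernel = np.ones(n) / n
    for band, ratio in zip(bands, ratios):
        env = np.sqrt(np.convolve(band ** 2, kernel, mode="same")) + 1e-9
        gain = (env / env.max()) ** (1.0 / ratio - 1.0)   # static compression law
        out += band * gain
    return out

fs = 16000
t = np.arange(0, 0.5, 1 / fs)
# Strong low-frequency "vowel" plus a weak high-frequency "consonant":
speechlike = np.sin(2 * np.pi * 300 * t) + 0.05 * np.sin(2 * np.pi * 3000 * t)
y = two_band_compressor(speechlike, fs)   # weak band is boosted toward the strong one
```

The design intent matches the abstract: bring weak high-frequency consonant energy into the usable dynamic range without making intense low-frequency vowels intolerably loud.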

Journal ArticleDOI
TL;DR: In this article, the authors developed a practical approximation for the backscattering of periodic bursts of sine waves by a volume of randomly distributed scatterers, which is applied to the measurement of a volumetric backscatter cross section, using a substitution method in which the rms value of the gated backscattered signal is compared with the value of a wave reflected from a target of known coefficient of reflection.
Abstract: This paper develops a practical approximation for the backscattering of periodic bursts of sine waves by a volume of randomly distributed scatterers. The approximation is applied to the measurement of a “volumetric backscattering cross section,” using a substitution method in which the rms value of the gated backscattered signal is compared with the rms value of a wave reflected from a target of known coefficient of reflection. It is shown that the signal backscattered from the ensemble depends on the attenuation of the wave in the volume and upon the burst and gate lengths. An equation to obtain the volumetric backscattering cross section from experimental data is derived.

Journal ArticleDOI
TL;DR: It is suggested that vowels become strongly incompressible beyond a certain amount of shortening and that vowel duration modification rules should have the form Do = k(Di − Dmin) + Dmin, where Di is the input duration to the rule, Do is the output duration of the rule, and the scale factor k is greater than zero and depends on the particular rule.
Abstract: It is well known that stressed vowels are shorter before voiceless consonants than before voiced, and that stressed vowels are shorter before an unstressed syllable in a bisyllabic word than in a monosyllabic word. A set of test materials was designed to study the interaction between these two rules. Results suggest that vowels become strongly incompressible beyond a certain amount of shortening and that vowel duration modification rules should have the form Do = k(Di − Dmin) + Dmin, where Di is the input duration to the rule, Do is the output duration of the rule, Dmin is the minimum duration for the vowel, and the scale factor k is greater than zero and depends on the particular rule. The data indicate that Dmin is about 45% of the inherent duration of a given vowel.
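The rule as stated, in executable form (the inherent duration and the value of k below are hypothetical; only the 45% figure for Dmin is from the paper):

```python
def vowel_duration(d_in, k, d_min):
    """Do = k * (Di - Dmin) + Dmin: shortening rules move durations
    toward, but never below, the incompressible minimum Dmin."""
    return k * (d_in - d_min) + d_min

inherent = 160.0                 # hypothetical inherent duration, msec
d_min = 0.45 * inherent          # paper's finding: Dmin ~ 45% of inherent
print(vowel_duration(inherent, k=0.7, d_min=d_min))   # 133.6 msec
```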

Journal ArticleDOI
TL;DR: In this article, the authors developed explicit expressions for the cross-spectral density between pairs of sensors and the wave-number-frequency spectrum projected onto the line joining these sensors, when they are placed in arbitrary positions in noise fields which are described by an arbitrary directional distribution of uncorrelated plane waves.
Abstract: Explicit expressions are developed for the cross‐spectral density between pairs of sensors and the wave‐number‐frequency spectrum projected onto the line joining these sensors, when they are placed in arbitrary positions in noise fields which are described by an arbitrary directional distribution of uncorrelated plane waves. The approach is to expand the directional distribution in spatial harmonics. Each harmonic leads to a corresponding term, with the same coefficient, in series representations for the cross‐spectral density function and the wave‐number‐frequency spectrum. The approach is particularly attractive when the field can be adequately represented by a relatively small number of harmonics. Both two‐ and three‐dimensional fields are considered. The approach is applied to the vertical directionality of ambient sea noise and is related to some existing models of ambient sea noise. Some new models are presented. Results are compared with reported experimental data and found to be in good agreement over a wide frequency range.
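The underlying relation is easy to evaluate directly. This sketch (a two-dimensional field with the sensors on the x axis; the wavenumber and separations are arbitrary) integrates a directional density numerically and recovers the classical J0(kd) result in the isotropic case:

```python
import numpy as np

def cross_spectral_density(P, k, d, n_theta=4096):
    """Cross-spectral density between two sensors separated by d in a
    2-D field of uncorrelated plane waves with directional density
    P(theta), theta measured from the line joining the sensors:
    C = integral of P(theta) * exp(i k d cos(theta)) d(theta)."""
    theta = np.linspace(0, 2 * np.pi, n_theta, endpoint=False)
    return np.mean(P(theta) * np.exp(1j * k * d * np.cos(theta))) * 2 * np.pi

k = 2 * np.pi / 1.0                                  # 1-m wavelength
iso = lambda th: np.full_like(th, 1 / (2 * np.pi))   # isotropic distribution
for d in (0.25, 0.5, 1.0):
    print(d, cross_spectral_density(iso, k, d).real)  # matches J0(k d)
```

A directional P(theta) expanded in harmonics cos(n·theta) contributes Bessel-function terms J_n(kd) with the same coefficients, which is the series structure the paper develops.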

Journal ArticleDOI
TL;DR: Statistical analysis of these formant variables confirmed that F1 and F2 are the most appropriate two distinctive parameters for describing the spectral differences among the vowel sounds.
Abstract: The frequencies and levels of the first three formants of 12 Dutch vowels were measured. The vowels were spoken by 50 male speakers in an h (vowel) t context. Statistical analysis of these formant variables confirmed that F1 and F2 are the most appropriate two distinctive parameters for describing the spectral differences among the vowel sounds. Maximum likelihood regions were computed and used to classify the vowels, and a score of 71.3% correct classification in the logF1 ‐logF2 plane was obtained (87.3% if three related pairs are grouped together). These scores rose to 78.3% and 95.2%, respectively, when a simple speaker‐dependent correction was applied. The scores are comparable with those obtained in an earlier study in which a principal‐components analysis was applied to the 1/3‐oct filter levels of the same vowel sounds [Klein, Plomp, and Pols, J. Acoust. Soc. Amer. 48, 999–1009 (1970)]. From the latter data a two‐dimensional representation (“optimal plane”) equivalent to the logF1 ‐logF2 plane cou...
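A schematic version of maximum-likelihood classification in the log F1–log F2 plane (the two-vowel training data are synthetic; the study itself used 12 Dutch vowels from 50 speakers):

```python
import numpy as np

def fit_gaussians(X, y):
    """Per-vowel Gaussian model (mean, covariance) in (log F1, log F2)."""
    return {c: (X[y == c].mean(0), np.cov(X[y == c].T)) for c in np.unique(y)}

def classify(x, models):
    def loglik(mu, S):
        r = x - mu
        return -0.5 * (r @ np.linalg.solve(S, r) + np.log(np.linalg.det(S)))
    return max(models, key=lambda c: loglik(*models[c]))

# Hypothetical training tokens for an /i/-like and an /a/-like vowel:
rng = np.random.default_rng(0)
X = np.vstack([rng.normal([np.log(300), np.log(2200)], 0.05, (50, 2)),
               rng.normal([np.log(700), np.log(1100)], 0.05, (50, 2))])
y = np.array(["i"] * 50 + ["a"] * 50)
models = fit_gaussians(X, y)
print(classify(np.log([320, 2100]), models))   # -> "i"
```

The paper's speaker-dependent correction amounts to shifting each speaker's tokens in this plane before classification, which raised the scores reported above.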

Journal ArticleDOI
TL;DR: Based on the data reviewed in the paper, it is tentatively concluded that the teleost auditory system is well adapted as a temporal analyzer.
Abstract: The purpose of this critical review is to reevaluate the current experimental literature on fish audition in light of structural, physiological, and behavioral studies. The specific emphasis of the paper will be to (1) review the recent literature on the psychophysiology of hearing in fishes; (2) look at the subject of fish hearing from the standpoint of auditory mechanisms and their relationship to what is known about hearing in terrestrial vertebrates; and (3) emphasize some questions and areas of research which we feel require more investigation. Based on the data reviewed in the paper, we have tentatively concluded that the teleost auditory system is well adapted as a temporal analyzer.

Journal ArticleDOI
TL;DR: A brief summary is given of the state of knowledge of echolocation by small toothed whales.
Abstract: A brief summary is presented of the state of knowledge of echolocation by small toothed whales.

Journal ArticleDOI
TL;DR: The results showed that as the number of frequency or duration transitions increased from zero to two, ear superiority shifted from left to right, suggesting that perception of temporal patterns might be one of the underlying mechanisms in speech perception.
Abstract: The assumption that the direction of ear superiority in dichotic listening to sounds varies as a function of the number of stimulus transitions within sound sequences was tested in two experiments on 36 right‐handed students. In each sequence, three sounds which varied in terms of either frequency or duration were employed. The subjects' task was to report the sequences by ear. The results showed that as the number of frequency or duration transitions increased from zero to two, ear superiority shifted from left to right. The shift from left to right ear superiority as a function of the increase in the complexity of temporal patterning suggests that perception of temporal patterns might be one of the underlying mechanisms in speech perception.

Journal ArticleDOI
TL;DR: A narrow‐band pulse of 0.1‐msec duration, with the main energy between 110 and 150 kHz and a source level of 40 dB re 1 μbar at 1 m, has been found; its properties can explain how wires below 1‐mm diam can be detected by echolocation.
Abstract: Besides the already known low‐frequency components of the Phocoena echolocation pulse, a narrow‐band pulse of 0.1‐msec duration, with the main energy between 110 and 150 kHz and a source level of 40 dB re 1 μbar at 1 m has been found. The properties of this pulse can explain how wires below 1‐mm diam can be detected by echolocation. The results are consistent with the hypothesis that the emission is restricted to a narrow beam.

Journal ArticleDOI
TL;DR: In this paper, the relationship between bowing parameters, i.e., force applied, bow position, and velocity, was derived in terms of load impedance presented by bridge to string, characteristic impedance, and frictional coefficients.
Abstract: Relations between bowing parameters, i.e., force applied, bow position, and velocity, are derived in terms of the load impedance presented by the bridge to the string, the characteristic impedance, and frictional coefficients. The range between the least force needed to keep the bow coupled to the string during sticking and the greatest force that still permits release after sticking provides the generous tolerance, variable but typically in the ratio of 1 to 10, that makes the bowed string so flexible in performance. Domains of string behavior recognized by players are related graphically to the bowing parameters. An electromagnetic method for observing particle velocity in the string reveals small but significant ripples, caused by forces at the bow, that the idealized explanation ignores. In one very flexible string, the force exerted on the bridge varied approximately inversely with frequency out to the 15th harmonic, whereas for a string equivalent to a violin gut G the force became zero near the 7th. Elastic effects are considered, and it is suggested that in strings, either solid or wound, an inharmonicity of 0.1 cent per square of mode number will not perceptibly degrade bowed‐string performance.
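The bow-force limits commonly quoted from this analysis take the forms sketched below (treat this as a hedged reconstruction, and the symbol values as illustrative rather than measured):

```python
# Z: string characteristic impedance (N*s/m); R: equivalent bridge
# resistance; v: bow speed (m/s); beta: relative bow-bridge distance;
# mu_s, mu_d: static and dynamic friction coefficients.
def f_max(Z, v, beta, mu_s, mu_d):
    """Greatest force that still permits release after sticking."""
    return 2 * Z * v / (beta * (mu_s - mu_d))

def f_min(Z, R, v, beta, mu_s, mu_d):
    """Least force that keeps the bow coupled during sticking."""
    return Z ** 2 * v / (2 * beta ** 2 * R * (mu_s - mu_d))

Z, R, v, beta, mu_s, mu_d = 0.2, 5.0, 0.3, 0.08, 0.8, 0.3   # illustrative
print(f_min(Z, R, v, beta, mu_s, mu_d), f_max(Z, v, beta, mu_s, mu_d))
# ~0.4 N vs ~3 N: roughly the 1-to-10 tolerance cited above
```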

Journal ArticleDOI
TL;DR: It was found that when one of the CV's trailed the other by 30–60 msec, the trailing CV became more intelligible than when it was given simultaneously; the leading syllable's intelligibility dropped from its “simultaneous” level when leading by 15 and 30 msec.
Abstract: In two experiments on normals we presented CV nonsense syllables both dichotically and monotically, with onsets of the syllables separated by 0, 15, 30, 60, and 90 msec (first experiment) and 0, 90, 180, 250, and 500 msec (second experiment). We found that when one of the CV's trailed the other by 30–60 msec, the trailing CV became more intelligible than when it was given simultaneously; the leading syllable's intelligibility dropped from its “simultaneous” level when leading by 15 and 30 msec. The leading message was more intelligible between 15 and 250 msec when the two channels were mixed monotically. In the dichotic simultaneous condition, voiceless consonants were more intelligible than voiced, especially in voiced‐voiceless pairs. When the voiced CV trailed the voiceless CV, the former became almost as intelligible as its voiceless counterpart. A left hemisphere “speech processor” was postulated, with suppression of information from ipsilateral sources during contralateral stimulation. The postulated “speech processor” may be involved in acoustic‐signal‐vocal‐tract control functions.

Journal ArticleDOI
TL;DR: It is concluded that backward masking is mainly due to interactions at the level of the filter outputs, and that in forward masking, in addition to a short‐term component, a long-term component is distinguishable.
Abstract: The frequency selectivity of the peripheral ear (e.g., at the VIIIth nerve level) is so acute that onset and offset transients in responses to short signals produce a nonnegligible extension of the signal duration. Thus, peripheral excitation patterns produced by signals which were separated in time can overlap and thereby mask each other. We refer to this type of masking as transient masking. Published data on nonsimultaneous masking and the results of two new experiments are compared with the masking that may be expected from filter transients. It is concluded that backward masking is mainly due to interactions at the level of the filter outputs, and that in forward masking, in addition to a short‐term component, a long‐term component is distinguishable. The latter has an exponential decay with a time constant of approximately 75 msec, and is probably related to physiological adaptation effects.
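The long-term forward-masking component described above decays exponentially; a tiny numeric illustration (the initial threshold shift is invented, and only the roughly 75-msec time constant is from the paper):

```python
import numpy as np

tau = 75.0                                  # decay time constant, msec
shift0 = 30.0                               # hypothetical shift at dt = 0, dB
for dt in (0, 25, 50, 75, 150):             # masker-to-probe delay, msec
    print(f"dt = {dt:3d} msec -> masked threshold shift ~ "
          f"{shift0 * np.exp(-dt / tau):4.1f} dB")
```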

Journal ArticleDOI
TL;DR: In this paper, the dispersion relations for acoustic waves in plates of arbitrary anisotropy are presented, and dispersion curves for propagation in a (001)cut cubic plate are compared to the uncoupled SV and P modes which, in turn, are related to the slowness curves for bulk waves.
Abstract: The mathematical formalism for obtaining dispersion relations for acoustic waves in plates of arbitrary anisotropy is outlined, and dispersion curves for propagation in a (001)‐cut cubic plate are presented. These results are compared to the uncoupled SV and P modes which, in turn, are related to the slowness curves for bulk waves. This approach provides an explanation for the behavior of the computed dispersion curves, and it also provides a means of approximating plate wave dispersion curves from the behavior of the slowness curves. The relationship of plate waves to surface waves is also explored for directions in which pseudosurface waves are known to propagate.
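The paper's arbitrary-anisotropy formalism does not reduce to a short snippet, but the flavor of plate-mode dispersion can be shown with the closed-form isotropic shear-horizontal (SH) modes (the material constants below are illustrative):

```python
import numpy as np

# SH plate modes in an isotropic plate of thickness d satisfy
# k^2 = (w / cT)^2 - (n * pi / d)^2, n = 0, 1, 2, ...
cT = 3200.0          # shear wave speed, m/s (illustrative)
d = 1.0e-3           # plate thickness, m
f = np.linspace(0.1e6, 10e6, 5)             # frequencies, Hz
for n in range(3):
    k2 = (2 * np.pi * f / cT) ** 2 - (n * np.pi / d) ** 2
    c_phase = np.where(k2 > 0, 2 * np.pi * f / np.sqrt(np.abs(k2)), np.nan)
    print(f"SH{n} phase velocity (m/s):", np.round(c_phase))
# SH0 is nondispersive at cT; higher modes are cut off below n*cT/(2d)
# and approach cT from above, the simplest analog of the curves computed
# in the paper from the slowness surfaces.
```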