
Showing papers in "Journal of the Acoustical Society of America in 1996"


PatentDOI
TL;DR: In this paper, a hearing aid system is proposed in which an earpiece hidden in the ear canal communicates with a remote processor unit (RPU) over a two-way microwave wireless link; radar-style operation, with an RPU interrogator and earpiece transponder, removes the need for a microwave oscillator in the earpiece.
Abstract: A hearing aid or audio communication system includes an earpiece (10) that can be hidden in the ear canal, and which communicates wirelessly with a remote processor unit, or RPU (16), that enhances audio signals and can be concealed under clothing. Sounds from the environment are picked up by a microphone (12) in the earpiece and sent with other information over a two-way wireless link (17) to the RPU (16). The wireless link (17) uses microwaves for component miniaturization. Furthermore, use of radar technology to implement the wireless link (17), with an RPU (16) interrogator and earpiece (10) transponder, reduces earpiece size and power, as no microwave oscillator is needed in the earpiece (10). Optional secondary wireless link circuitry (19) can be used between the RPU (16) and a cellular telephone system or other sources of information. Electronic voice recognition and response can control system operation.

651 citations


Journal ArticleDOI
TL;DR: In this paper, the authors used time-resolved photography to measure the position of the shock front and the bubble wall as a function of time; the photographs were used to determine the shock front and bubble wall velocities as well as the shock wave pressure.
Abstract: Shock wave emission and cavitation bubble expansion after optical breakdown in water with Nd:YAG laser pulses of 30‐ps and 6‐ns duration is investigated for energies between 50 μJ and 10 mJ which are often used for intraocular laser surgery. Time‐resolved photography is applied to measure the position of the shock front and the bubble wall as a function of time. The photographs are used to determine the shock front and bubble wall velocity as well as the shock wave pressure as a function of time or position. Calculations of the bubble formation and shock wave emission are performed using the Gilmore model of cavitation bubble dynamics and the Kirkwood–Bethe hypothesis. The calculations are based on the laser pulse duration, the size of the plasma, and the maximally expanded cavitation bubble, i.e., on easily measurable parameters. They yield the dynamics of the bubble wall, the pressure evolution inside the bubble, and pressure profiles in the surrounding liquid at fixed times after the start of the laser...

636 citations


Journal ArticleDOI
TL;DR: It is shown that the intensity differences in the higher regions are caused by an increase in physiological effort rather than by shifting formant frequencies due to stress, and duration proved the most reliable correlate of stress.
Abstract: Although intensity has been reported as a reliable acoustical correlate of stress, it is generally considered a weak cue in the perception of linguistic stress. In natural speech stressed syllables are produced with more vocal effort. It is known that, if a speaker produces more vocal effort, higher frequencies increase more than lower frequencies. In this study, the effects of lexical stress on intensity are examined in the abstraction from the confounding accent variation. A production study was carried out in which ten speakers produced Dutch lexical and reiterant disyllabic minimal stress pairs spoken with and without an accent in a fixed carrier sentence. Duration, overall intensity, formant frequencies, and spectral levels in four contiguous frequency bands were measured. Results revealed that intensity differences as a function of stress are mainly located above 0.5 kHz, i.e., a change in spectral balance emphasizing higher frequencies for stressed vowels. Furthermore, we showed that the intensity differences in the higher regions are caused by an increase in physiological effort rather than by shifting formant frequencies due to stress. The potential of each acoustic correlate of stress to differentiate between initial‐ and final‐stressed words was examined by linear discriminant analysis. Duration proved the most reliable correlate of stress. Overall intensity and vowel quality are the poorest cues. Spectral balance, however, turned out to be a reliable cue, close in strength to duration.

563 citations


Journal ArticleDOI
TL;DR: A quantitative model for signal processing in the auditory system that combines a series of preprocessing stages with an optimal detector as the decision device allows one to estimate thresholds with the same signals and psychophysical procedures as those used in actual experiments.
Abstract: This paper describes a quantitative model for signal processing in the auditory system. The model combines a series of preprocessing stages with an optimal detector as the decision device. The present paper gives a description of the various preprocessing stages and of the implementation of the optimal detector. The output of the preprocessing stages is a time‐varying activity pattern to which ‘‘internal noise’’ is added. In the decision process, a stored temporal representation of the signal to be detected (template) is compared with the actual activity pattern. The comparison amounts to calculating the correlation between the two temporal patterns and is comparable to a ‘‘matched filtering’’ process. The detector itself derives the template at the beginning of each simulated threshold measurement from a suprathreshold value of the stimulus. The model allows one to estimate thresholds with the same signals and psychophysical procedures as those used in actual experiments. In the accompanying paper [Dau et al., J. Acoust. Soc. Am. 99, •••–••• (1996)] data obtained for human observers are compared with the optimal‐detector model for various masking conditions.
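The decision stage described above — correlating the noisy internal activity pattern with a stored template and comparing the result to a criterion — can be sketched in a few lines. This is only an illustration of the "matched filtering" idea, not the authors' implementation: the sinusoidal template, the noise level, and the criterion value are arbitrary choices for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def detect(activity, template, criterion):
    """Correlate the internal activity pattern with the stored template
    (a 'matched filtering' step) and compare to a decision criterion."""
    a = activity - activity.mean()
    t = template - template.mean()
    corr = np.dot(a, t) / (np.linalg.norm(a) * np.linalg.norm(t))
    return corr > criterion

time = np.linspace(0, 1, 200)
template = np.sin(2 * np.pi * 5 * time)                   # stored signal representation
signal_trial = template + 0.3 * rng.standard_normal(200)  # signal interval + internal noise
noise_trial = 0.3 * rng.standard_normal(200)              # noise-only interval

print(detect(signal_trial, template, 0.5))   # True
print(detect(noise_trial, template, 0.5))    # False
```

Lowering the signal level in `signal_trial` until `detect` starts failing mimics the model's threshold-estimation loop, which runs the same procedure as a psychophysical experiment.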

499 citations


Journal ArticleDOI
TL;DR: It is reported here that short-term laboratory experience with speech is sufficient to influence infants' speech production, and a hypothesis is advanced extending Kuhl's native language magnet (NLM) model to encompass infants' speech production.
Abstract: Infants’ development of speech begins with a language‐universal pattern of production that eventually becomes language specific. One mechanism contributing to this change is vocal imitation. The present study was undertaken to examine developmental change in infants’ vocalizations in response to adults’ vowels at 12, 16, and 20 weeks of age and test for vocal imitation. Two methodological aspects of the experiment are noteworthy: (a) three different vowel stimuli (/a/, /i/, and /u/) were videotaped and presented to infants by machine so that the adult model could not artifactually influence infant utterances, and (b) infants’ vocalizations were analyzed both physically, using computerized spectrographic techniques, and perceptually by trained phoneticians who transcribed the utterances. The spectrographic analyses revealed a developmental change in the production of vowels. Infants’ vowel categories become more separated in vowel space from 12 to 20 weeks of age. Moreover, vocal imitation was documented. Infants listening to a particular vowel produced vocalizations resembling that vowel. A hypothesis is advanced extending Kuhl’s native language magnet (NLM) model to encompass infants’ speech production. It is hypothesized that infants listening to ambient language store perceptually derived representations of the speech sounds they hear which in turn serve as targets for the production of speech utterances. NLM unifies previous findings on the effects of ambient language experience on infants’ speech perception and the findings reported here that short‐term laboratory experience with speech is sufficient to influence infants’ speech production.

467 citations


PatentDOI
TL;DR: In this article, a portable ultrasound imaging system includes a scan head coupled by a cable to a portable battery-powered data processor and display unit; the scan head houses an array of ultrasonic transducers and the circuitry associated therewith.
Abstract: A portable ultrasound imaging system includes a scan head coupled by a cable to a portable battery-powered data processor and display unit. The scan head enclosure houses an array of ultrasonic transducers and the circuitry associated therewith, including pulse synchronizer circuitry used in the transmit mode for transmission of ultrasonic pulses and beam forming circuitry used in the receive mode to dynamically focus reflected ultrasonic signals returning from the region of interest being imaged.

449 citations


Journal ArticleDOI
TL;DR: Comparisons of formant locations extracted from the natural (recorded) speech of the imaged subject and from simulations using the newly acquired area functions show reasonable similarity but suggest that the imaged vocal tract shapes may be somewhat centralized.
Abstract: There have been considerable research efforts in the area of vocal tract modeling, but there is still a small body of information regarding direct 3-D measurements of the vocal tract shape. The purpose of this study was to acquire, using magnetic resonance imaging (MRI), an inventory of speaker-specific, three-dimensional, vocal tract air space shapes that correspond to a particular set of vowels and consonants. A set of 18 shapes was obtained for one male subject who vocalized while being scanned for 12 vowels, 3 nasals, and 3 plosives. The 3-D shapes were analyzed to find the cross-sectional areas evaluated within planes always chosen to be perpendicular to the centerline extending from the glottis to the mouth to produce an "area function." This paper provides a speaker-specific catalogue of area functions for 18 vocal tract shapes. Comparisons of formant locations extracted from the natural (recorded) speech of the imaged subject and from simulations using the newly acquired area functions show reasonable similarity but suggest that the imaged vocal tract shapes may be somewhat centralized. Additionally, the area functions reported in this study are compared with those from four previous studies; the comparison demonstrates general similarities in shape but also obvious differences that can be attributed to differences in imaging techniques, image processing methods, and anatomical differences of the imaged subjects.

438 citations


Journal ArticleDOI
TL;DR: In this paper, the authors presented a complete analysis of the DORT method in the case of two scatterers and showed that each eigenvector of the time reversal operator provides a phase law that can be applied to the transducers in order to focus on one of the scatterers.
Abstract: The decomposition of the time reversal operator (DORT) method is a selective detection and focusing technique using an array of transmit–receive transducers. It relies on the theory of iterative time reversal mirrors which was presented by Prada et al. [C. Prada, J. L. Thomas, and M. Fink, J. Acoust. Soc. Am. 97, 62–71 (1995)]. The time reversal operator was defined as K*(ω)K(ω), where ω is the frequency, * means complex conjugate, and K(ω) is the transfer matrix of the array of L transducers insonifying a time invariant scattering medium. It was shown that this time reversal operator can be diagonalized and that for ideally resolved scatterers of different reflectivities, each of its eigenvectors of nonzero eigenvalue provides the phase law to be applied to the transducers in order to focus on one of the scatterers. The DORT method consists in determining these eigenvectors and using them for the selective focusing. This paper presents a complete analysis of this method in the case of two scatterers. The...
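The core of the method — diagonalizing K*(ω)K(ω) and reading the focusing phase laws off its eigenvectors — can be illustrated numerically. Everything below is invented for the sketch: the array geometry, wavelength, and reflectivities are arbitrary, free-space Green's functions and the Born (single-scattering) approximation stand in for a measured transfer matrix.

```python
import numpy as np

wavelength = 1.5e-3                     # assumed (roughly 1 MHz in water)
k = 2 * np.pi / wavelength
xt = np.stack([np.linspace(-16e-3, 16e-3, 64), np.zeros(64)], axis=1)  # 64-element line array
scat = np.array([[-6e-3, 40e-3], [7e-3, 43e-3]])   # two well-resolved scatterers (m)
refl = np.array([1.0, 0.5])                        # different reflectivities

# Free-space Green's functions between each transducer and each scatterer
r = np.linalg.norm(xt[:, None, :] - scat[None, :, :], axis=2)
G = np.exp(1j * k * r) / r                         # shape (64, 2)

# Transfer matrix of the array under single (Born) scattering: K = G diag(refl) G^T
K = G @ np.diag(refl) @ G.T

# Time reversal operator K*(w)K(w); K is symmetric, so this equals K^H K (Hermitian)
eigvals, eigvecs = np.linalg.eigh(K.conj().T @ K)
lead = eigvecs[:, -1]                              # eigenvector of the largest eigenvalue

# Transmitting the leading eigenvector focuses on the stronger scatterer
field = np.abs(G.T @ lead)                         # field amplitude at the two scatterers
print(field.argmax())                              # 0 -> the reflectivity-1.0 scatterer
```

The second eigenvector of nonzero eigenvalue would, symmetrically, focus on the weaker scatterer, which is the selective-focusing property the abstract describes.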

435 citations


Journal ArticleDOI
TL;DR: Evidence suggests that a number of adverse effects of noise in general arise from exposure to low-frequency noise: Loudness judgments and annoyance reactions are sometimes reported to be greater for low-frequency noise than other noises for equal sound-pressure level.
Abstract: The sources of human exposure to low-frequency noise and its effects are reviewed. Low-frequency noise is common as background noise in urban environments, and as an emission from many artificial sources: road vehicles, aircraft, industrial machinery, artillery and mining explosions, and air movement machinery including wind turbines, compressors, and ventilation or air-conditioning units. The effects of low-frequency noise are of particular concern because of its pervasiveness due to numerous sources, efficient propagation, and reduced efficacy of many structures (dwellings, walls, and hearing protection) in attenuating low-frequency noise compared with other noise. Intense low-frequency noise appears to produce clear symptoms including respiratory impairment and aural pain. Although the effects of lower intensities of low-frequency noise are difficult to establish for methodological reasons, evidence suggests that a number of adverse effects of noise in general arise from exposure to low-frequency noise: Loudness judgments and annoyance reactions are sometimes reported to be greater for low-frequency noise than other noises for equal sound-pressure level; annoyance is exacerbated by rattle or vibration induced by low-frequency noise; speech intelligibility may be reduced more by low-frequency noise than other noises except those in the frequency range of speech itself, because of the upward spread of masking. On the other hand, it is also possible that low-frequency noise provides some protection against the effects of simultaneous higher frequency noise on hearing. Research needs and policy decisions, based on what is currently known, are considered.

410 citations


Journal ArticleDOI
TL;DR: In this article, a wavelet-based treatment of self-similar signals is presented, covering statistically self-similar signals, detection and estimation with 1/f processes, deterministically self-similar signals, fractal modulation, and linear self-similar signals.
Abstract: Wavelet transformations; statistically self-similar signals; detection and estimation with 1/f processes; deterministically self-similar signals; fractal modulation; linear self-similar signals.

369 citations


Journal ArticleDOI
TL;DR: In this paper, the governing equations controlling the coupled electromagnetic-seismic (or "electroseismic") wave propagation are presented for a general anisotropic and heterogeneous porous material.
Abstract: In a porous material saturated by a fluid electrolyte, mechanical and electromagnetic disturbances are coupled. The coupling is due to an excess of electrolyte ions that exist in a fluid layer near the grain surfaces within the material; i.e., the coupling is electrokinetic in nature. The governing equations controlling the coupled electromagnetic‐seismic (or ‘‘electroseismic’’) wave propagation are presented for a general anisotropic and heterogeneous porous material. Uniqueness is derived as well as the statements of energy conservation and reciprocity. Representation integrals for the various wave fields are derived that require, in general, nine different Green’s tensors. For the special case of an isotropic and homogeneous wholespace, both the plane‐wave and the point‐source responses are obtained. Finally, the boundary conditions that hold at interfaces in the porous material are derived.

Journal ArticleDOI
TL;DR: In this paper, a method is presented for applying the perfectly matched layer (PML) absorbing boundary condition (ABC) to the P-SV velocity-stress finite-difference method, and its performance is compared with the second-order elastic ABC of Peng and Toksoz.
Abstract: A method is presented for application of the perfectly matched layer (PML) absorbing boundary condition (ABC) to the P‐SV velocity–stress finite‐difference method. The PML consists of a nonphysical material, containing both passive loss and dependent sources, that provides ‘‘active’’ absorption of fields. It has been used in electromagnetic applications where it has provided excellent results for a wide range of angles and frequencies. In this work, numerical simulations are used to compare the PML and an ‘‘optimal’’ second‐order elastic ABC [Peng and Toksoz, J. Acoust. Soc. Am. 95, 733–745 (1994)]. Reflection factors are used to compare angular performance for continuous wave illumination; snapshots of potentials are used to compare performance for broadband illumination. These comparisons clearly demonstrate the superiority of the PML formulation. Within the PML there is a 60% increase in the number of unknowns per grid cell relative to the velocity–stress formulation. However, the high quality of the PML ABC allows the use of a smaller grid, which can result in a lower overall computational cost.
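A 1-D analogue conveys the idea behind the PML: in one dimension the layer reduces to a matched damping term added to both the velocity and pressure updates of a staggered-grid scheme, so a pulse entering the layer decays with essentially no reflection back into the interior. The grid size, damping profile, and pulse below are arbitrary choices for the sketch, not the paper's elastic P-SV setup.

```python
import numpy as np

# 1-D staggered acoustic grid: pressure p at integer nodes, velocity v at half nodes
nx, dx = 400, 0.01
c, rho = 1.0, 1.0
dt = 0.5 * dx / c                              # CFL number 0.5
npml = 60                                      # absorbing-layer thickness in cells

# Quadratic damping profile, nonzero only inside the right-hand layer
d = np.zeros(nx)
d[-npml:] = 100.0 * (np.arange(npml) / npml) ** 2
dv = 0.5 * (d[:-1] + d[1:])                    # damping sampled at the v nodes

x = np.arange(nx) * dx
p = np.exp(-((x - 1.0) / 0.1) ** 2)            # Gaussian pulse in the interior
v = np.zeros(nx - 1)
peak0 = np.abs(p).max()

for _ in range(1200):
    # matched damping in BOTH updates is what makes the layer reflectionless
    dpdx = (p[1:] - p[:-1]) / dx
    v = ((1 - 0.5 * dt * dv) * v + dt * dpdx / rho) / (1 + 0.5 * dt * dv)
    dvdx = np.diff(v, prepend=0.0, append=0.0) / dx
    p = ((1 - 0.5 * dt * d) * p + dt * rho * c**2 * dvdx) / (1 + 0.5 * dt * d)

print(np.abs(p[:nx - npml]).max() / peak0)     # interior residual after absorption
```

After 1200 steps both halves of the pulse (one directly, one after reflecting off the rigid left edge) have entered the layer and been absorbed, leaving only a small numerical residual in the interior.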

PatentDOI
TL;DR: In this article, the authors present an ultrasound system and method for performing relatively non-invasive cardiac ablation on a patient, which includes a plurality of ultrasound transducers forming a phased array that is to be located externally of the patient.
Abstract: An ultrasound system and method for performing relatively non-invasive cardiac ablation on a patient. The system of the present invention includes a plurality of ultrasound transducers forming a phased array that is to be located externally of the patient. The array produces a focused beam of sufficient energy to ablate a predetermined cardiac tissue volume. The system is capable of refocusing the beam so that acoustical aberrations encountered by the beam, as it is transmitted through inhomogeneous body tissues between the array and the treatment volume, are taken into account and will not impede operation of the system. To refocus the beam, the system includes a sensor which senses the phase distribution caused by the aberrations, allowing a controller to calculate a compensating driving phase distribution and accordingly drive the array. The system also allows for real-time correction of the beam's position, enabling the beam to follow a moving myocardial target volume.

Journal ArticleDOI
TL;DR: This tutorial paper emphasizes field measurements and simple physical interpretations of outdoor sound propagation, in which measured levels owe as much to near-surface weather and to ground shape and impedance as to acoustical factors such as source and receiver heights and their separation.
Abstract: Concerns about noise in the community date back to the dawn of recorded history. Then, after centuries of relatively little activity, scientific interest grew during the 17th century and social concerns were again voiced during the 19th century. Many of the wave‐propagation mechanisms relevant outdoors were understood at least qualitatively by the late 1800s. Today, knowledge of sound propagation phenomena is of great economic and social importance because of environmental and other concerns. Reality is more complicated than geometrical spreading above flat ground. Some grounds are acoustically hard like concrete, and others soft as snow. Corresponding reflection coefficients are complex and vary with angle. Grounds may not be flat, leading to shadow zones or alternatively multiple reflections at the ground. Gradients of wind or temperature refract waves either upwards (upwind or in a temperature lapse) or downwards (downwind or in a temperature inversion). Atmospheric turbulence causes fluctuations in the acoustical effects. Many of these features mutually interact. Measured sound pressure levels owe as much to near‐surface weather and to ground shape and impedance as to acoustical factors such as source and receiver heights and their separation. This tutorial paper emphasizes field measurements and simple physical interpretations.

PatentDOI
TL;DR: In this article, a method and apparatus for simultaneous measurement of multiple distances by means of networked piezoelectric transducers through the use of high frequency digital counters, the propagation delay between the activation of an ultrasonic transducer and the reception by similar transducers is quickly and accurately defined.
Abstract: A method and apparatus for simultaneous measurement of multiple distances by means of networked piezoelectric transducers. Through the use of high frequency digital counters, the propagation delay between the activation of an ultrasonic transducer and the reception by similar transducers is quickly and accurately defined. By alternating the duty cycle between transmit and receive modes, the system can track and triangulate the three-dimensional positions for each transducer.
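The underlying arithmetic — converting counter ticks to a distance and recovering a 3-D position from several such distances — can be sketched as follows. The clock rate, sound speed, and transducer layout are invented for the example, the ranges are idealized (noiseless), and the least-squares trilateration is one standard way to realize the "triangulate" step, not necessarily the patent's.

```python
import numpy as np

C_SOUND = 343.0       # speed of sound in air, m/s (assumed)
F_CLOCK = 10e6        # counter clock rate, 10 MHz (assumed)

def distance_from_ticks(ticks):
    """Propagation delay counted in clock ticks -> distance in metres."""
    return C_SOUND * ticks / F_CLOCK

def trilaterate(anchors, dists):
    """Least-squares 3-D position from distances to >= 4 known transducers.
    Subtracting the first range equation from the rest linearizes the system."""
    a0, d0 = anchors[0], dists[0]
    A = 2 * (anchors[1:] - a0)
    b = d0**2 - dists[1:]**2 + np.sum(anchors[1:]**2, axis=1) - np.sum(a0**2)
    return np.linalg.lstsq(A, b, rcond=None)[0]

anchors = np.array([[0.0, 0, 0], [1.0, 0, 0], [0, 1.0, 0], [0, 0, 1.0]])
target = np.array([0.3, 0.4, 0.2])
ticks = np.linalg.norm(anchors - target, axis=1) / C_SOUND * F_CLOCK  # ideal counts
pos = trilaterate(anchors, distance_from_ticks(ticks))
print(np.round(pos, 3))    # recovers [0.3 0.4 0.2]
```

With a 10 MHz counter one tick corresponds to about 34 μm of path in air, which is why high-frequency counting gives the "quick and accurate" delay measurement the abstract claims.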

PatentDOI
TL;DR: A method and apparatus for storing and retrieving information to and from the memory of a hand-held audio database device, in which graphical icons represent the categories of a hierarchical memory structure.
Abstract: A method and apparatus for storing and retrieving information to and from a memory of a hand-held audio database device. The audio database device includes a graphics display provided on a hand-held housing for displaying graphical information. A microphone and a speaker are provided on the housing to receive and broadcast audio information from and to a user, respectively. The audio database device includes a memory configured to store graphical icons and to support a hierarchical memory structure comprising categories, wherein the icons graphically represent the categories. A user-actuated navigation control is provided on the housing and permits a user to navigate the categories in the hierarchical memory structure and to select a desired category. A processor is coupled to the memory, the display, and the navigation control and effects displaying of one of the icons on the display when the user is navigating a corresponding one of the categories, and storing of the audio information in the desired category of the memory.

PatentDOI
TL;DR: A voice dialog system for automatic, real-time control of devices is described, applying methods of voice input, voice signal processing and voice recognition, syntactical-grammatical postediting, as well as dialog, executive sequencing, and interface control.
Abstract: The invention pertains to a voice dialog system wherein a process for automatic control of devices by voice dialog is used applying methods of voice input, voice signal processing and voice recognition, syntactical-grammatical postediting as well as dialog, executive sequencing and interface control, and which is characterized in that syntax and command structures are set during real-time dialog operation; preprocessing, recognition and dialog control are designed for operation in a noise-encumbered environment; no user training is required for recognition of general commands; training of individual users is necessary for recognition of special commands; the input of commands is done in linked form, the number of words used to form a command for voice input being variable; a real-time processing and execution of the voice dialog is established; the voice input and output is done in the hands-free mode.

Patent
TL;DR: In this method, signals are accepted corresponding to interspersed speech elements, including text elements corresponding to text to be recognized and command elements corresponding to commands to be executed; the recognized elements are acted on in a manner which depends on whether they represent text or commands.
Abstract: In a method for use in recognizing continuous speech, signals are accepted corresponding to interspersed speech elements including text elements corresponding to text to be recognized and command elements corresponding to commands to be executed. The elements are recognized. The recognized elements are acted on in a manner which depends on whether they represent text or commands.

Journal ArticleDOI
TL;DR: The model embodies two principal hypotheses supported by considerable experimental and theoretical research from the neuroscience literature: (1) sensory experience guides language-specific development of an auditory neural map, and (2) a population vector can predict psychological phenomena based on map cell activities.
Abstract: The perceptual magnet effect is one of the earliest known language-specific phenomena arising in infant speech development. The effect is characterized by a warping of perceptual space near phonemic category centers. Previous explanations have been formulated within the theoretical framework of cognitive psychology. The model proposed in this paper builds on research from both psychology and neuroscience in working toward a more complete account of the effect. The model embodies two principal hypotheses supported by considerable experimental and theoretical research from the neuroscience literature: (1) sensory experience guides language-specific development of an auditory neural map, and (2) a population vector can predict psychological phenomena based on map cell activities. These hypotheses are realized in a self-organizing neural network model. The magnet effect arises in the model from language-specific nonuniformities in the distribution of map cell firing preferences. Numerical simulations verify that the model captures the known general characteristics of the magnet effect and provides accurate fits to specific psychophysical data.

Journal ArticleDOI
TL;DR: In this paper, the authors extended the boundary element method to study the mode conversion phenomena of Lamb waves from a free edge and formulated the elastodynamic interior boundary value problem as a hybrid boundary integral equation in conjunction with the normal mode expansion technique based on the Lamb wave dispersion equation.
Abstract: The boundary element method, well known for bulk wave scattering, is extended to study the mode conversion phenomena of Lamb waves from a free edge. The elastodynamic interior boundary value problem is formulated as a hybrid boundary integral equation in conjunction with the normal mode expansion technique based on the Lamb wave dispersion equation. The present approach has the potential of easily handling the geometrical complexity of general guided wave scattering with improved computational efficiency due to the advantage of the boundary‐type integral method. To check the accuracy of the boundary element program, vertical shear wave diffraction, due to a circular hole, is solved and compared with previous analytical solutions. Edge reflection factors for the multibackscattered modes in a steel plate are satisfied quite well with the principle of energy conservation. In the cases of A0, A1, and S0 incidence, the variations of the multireflection factors show similar tendencies to the existing results fo...

PatentDOI
TL;DR: In this paper, an image with increased sensitivity to non-linear responses, particularly second harmonic responses, is achieved by measuring the ultrasound response under multiple excitation levels, gain-correcting the responses, and subtracting them.
Abstract: An image with increased sensitivity to non-linear responses, particularly second harmonic responses, can be achieved by measuring the ultrasound response under multiple excitation levels. The responses gathered from the multiple excitation levels are gain corrected in an amount corresponding to the difference in excitation levels, then subtracted. Because of this subtraction, most of the linear response will be removed, and what remains corresponds to the non-linear response.
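The subtraction scheme can be illustrated with a toy quadratic tissue response; the pulse frequency, nonlinearity coefficient, and sampling below are arbitrary choices for the sketch, not values from the patent.

```python
import numpy as np

def echo(tx, t):
    """Toy tissue response: a linear echo at the transmit frequency f0 plus a
    weaker second-harmonic component growing with the square of the drive tx."""
    f0 = 2e6
    return tx * np.sin(2 * np.pi * f0 * t) + 0.05 * tx**2 * np.sin(2 * np.pi * 2 * f0 * t)

t = np.arange(0, 20e-6, 10e-9)     # 20 us of echo sampled at 100 MHz
full = echo(1.0, t)                # full-amplitude transmit
half = echo(0.5, t)                # half-amplitude (-6 dB) transmit

# Gain-correct the -6 dB echo by a factor of 2, then subtract: the linear
# terms cancel exactly, leaving only the nonlinear (harmonic) residue.
residue = 2.0 * half - full
print(np.max(np.abs(residue)))     # ~0.025, purely second-harmonic content
```

Because the linear component scales with the drive while the quadratic component scales with its square, the gain-corrected difference removes the linear response and keeps a known fraction of the harmonic one, which is exactly the sensitivity gain the abstract describes.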

Journal ArticleDOI
TL;DR: In this paper, the authors describe a normal mode method for propagation modeling in acousto-elastic ocean waveguides, in which the downward- and upward-looking plane wave reflection coefficients R1 and R2 are computed analytically at a reference depth in the fluid and the complex k plane is searched for points where the product R1R2=1.
Abstract: A normal mode method for propagation modeling in acousto‐elastic ocean waveguides is described. The compressional (p‐) and shear (s‐) wave propagation speeds in the multilayer environment may be constant or have a gradient (1/c2 linear) in each layer. Mode eigenvalues are found by analytically computing the downward‐ and upward‐looking plane wave reflection coefficients R1 and R2 at a reference depth in the fluid and searching the complex k plane for points where the product R1R2=1. The complex k‐plane search is greatly simplified by following the path along which |R1R2|=1. Modes are found as points on the path where the phase of R1R2 is a multiple of 2π. The direction of the path is found by computing the derivatives d(R1R2)/dk analytically. Leaky modes are found, allowing the mode solution to be accurate at short ranges. Seismic interface modes such as the Scholte and Stoneley modes are also found. Multiple ducts in the sound speed profile are handled by employing multiple reference depths. Use of Airy function solutions to the wave equation in each layer when computing R1 and R2 results in computation times that increase only linearly with frequency.

PatentDOI
TL;DR: A three-dimensional ultrasound imaging system includes an ultrasound probe to direct ultrasound waves to and to receive reflected ultrasound waves from a target volume of a subject under examination and a user interface allows a user to manipulate the displayed image.
Abstract: A three-dimensional ultrasound imaging system includes an ultrasound probe to direct ultrasound waves to and to receive reflected ultrasound waves from a target volume of a subject under examination. The ultrasound probe is swept over the target volume along a linear scanning path and the reflected ultrasound waves are conveyed to a computer wherein successive two-dimensional images of the target volume are digitized. The digitized two-dimensional images can be used to generate a three-dimensional image with virtually no delay. A user interface allows a user to manipulate the displayed image. Specifically, the entire displayed image may be rotated about an arbitrary axis, a surface of the displayed image may be translated to provide different cross-sectional views of the image and a selected surface of the displayed image may be rotated about an arbitrary axis. All of these manipulations can be achieved via a single graphical input device such as a mouse connected to the computer.

Journal ArticleDOI
TL;DR: Two global-level properties were identified that appear likely to be linked to the improvements in intelligibility provided by clear speech produced at normal rates: increased energy in the 1000-3000-Hz range of long-term spectra and increased modulation depth of low-frequency modulations of the intensity envelope.
Abstract: In adverse listening conditions, talkers can increase their intelligibility by speaking clearly. While producing clear speech, however, talkers often significantly reduce their speaking rate. A recent study [J. C. Krause and L. D. Braida, J. Acoust. Soc. Am. 98, 2982(A) (1995)] showed that talkers can be trained to produce a form of clear speech at normal conversational rates. This finding suggests that acoustical factors other than reduced speaking rate are responsible for the high intelligibility of clear speech. To gain insight into these factors, the acoustical properties of conversational and clear speech were analyzed to determine phonological and phonetic differences between the two speaking modes. These differences were interpreted in terms of error patterns made by normal‐hearing listeners identifying key words in the presence of wideband noise. Additional intelligibility tests investigated other degradations in order to explore the robustness of the high intelligibility of clear speech produced at conversational rates. Native and non‐native listeners were employed for degradations consisting of additive noise, high‐pass and low‐pass filtering, and reverberation. [Work supported by NIH.]

Journal ArticleDOI
TL;DR: Results show that, for four bands, the frequency alignment of the analysis bands and carrier bands is critical for good performance, while the exact frequency divisions and overlap in carrier bands are not as critical.
Abstract: Recognition of consonants, vowels, and sentences was measured in conditions of reduced spectral resolution and distorted spectral distribution of temporal envelope cues. Speech materials were processed through four bandpass filters (analysis bands), half-wave rectified, and low-pass filtered to extract the temporal envelope from each band. The envelope from each speech band modulated a band-limited noise (carrier bands). Analysis and carrier bands were manipulated independently to alter the spectral distribution of envelope cues. Experiment I demonstrated that the location of the cutoff frequencies defining the bands was not a critical parameter for speech recognition, as long as the analysis and carrier bands were matched in frequency extent. Experiment II demonstrated a dramatic decrease in performance when the analysis and carrier bands did not match in frequency extent, which resulted in a warping of the spectral distribution of envelope cues. Experiment III demonstrated a large decrease in performance when the carrier bands were shifted in frequency, mimicking the basal position of electrodes in a cochlear implant. And experiment IV showed a relatively minor effect of the overlap in the noise carrier bands, simulating the overlap in neural populations responding to adjacent electrodes in a cochlear implant. Overall, these results show that, for four bands, the frequency alignment of the analysis bands and carrier bands is critical for good performance, while the exact frequency divisions and overlap in carrier bands are not as critical.
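The processing chain described above — band-pass analysis, half-wave rectification, low-pass envelope extraction, and modulation of band-limited noise carriers — can be sketched with brick-wall FFT filters. The band edges, the 160 Hz envelope cutoff, and the toy input are assumptions for the illustration, not the study's exact parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 16000                                     # sample rate (assumed)

def bandpass(x, lo, hi):
    """Brick-wall FFT band-pass filter (a crude stand-in for analysis filters)."""
    X = np.fft.rfft(x)
    f = np.fft.rfftfreq(len(x), 1 / fs)
    X[(f < lo) | (f >= hi)] = 0
    return np.fft.irfft(X, len(x))

def vocode(speech, edges):
    """Extract the temporal envelope of each analysis band (half-wave
    rectification + low-pass) and use it to modulate band-limited noise
    in the matching carrier band."""
    out = np.zeros_like(speech)
    for lo, hi in zip(edges[:-1], edges[1:]):
        band = bandpass(speech, lo, hi)
        env = bandpass(np.maximum(band, 0), 0, 160)   # keep only the slow envelope
        carrier = bandpass(rng.standard_normal(len(speech)), lo, hi)
        out += env * carrier
    return out

t = np.arange(0, 0.5, 1 / fs)
speech = np.sin(2 * np.pi * 300 * t) * (1 + np.sin(2 * np.pi * 4 * t))  # toy input
edges = [50, 500, 1000, 2000, 4000]            # four matched analysis/carrier bands
out = vocode(speech, edges)
print(out.shape)
```

Shifting the carrier bands relative to the analysis bands (e.g. passing different edge lists to the analysis and carrier filters) reproduces the frequency-misalignment manipulation of experiments II and III, which the study found to be the damaging one.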

PatentDOI
TL;DR: A hybrid BTE and CIC hearing aid has a BTE component worn behind the patient's ear and a CIC component which is worn in the bony portion of the ear canal as discussed by the authors.
Abstract: A hybrid BTE and CIC hearing aid has a BTE component which is worn behind the patient's ear and a CIC component which is worn in the bony portion of the patient's ear canal. The BTE and CIC components are connected together with a wire cable. Electroacoustic feedback is reduced or eliminated, allowing gain to be increased. The patient is not disturbed by the occlusion effect.

Journal ArticleDOI
TL;DR: The conclusion was that the tongue actually has a limited repertoire of shapes and positions them against the palate in different ways for consonants versus vowels to create narrow channels, divert airflow, and produce sound.
Abstract: This paper presents three-dimensional tongue surfaces reconstructed from multiple coronal cross-sectional slices of the tongue. Surfaces were reconstructed for sustained vocalizations of the American English sounds [symbol: see text]. Electropalatography (EPG) data were also collected for the sounds to compare tongue surface shapes with tongue-palate contact patterns. The study also examined whether 3-D surface shapes of the tongue differed for consonants and vowels; previous research and speculation had suggested differences in production, acoustics, and linguistic usage between the two groups. The present study found that four classes of tongue shape were adequate to categorize all the sounds measured. These classes were front raising, complete groove, back raising, and two-point displacement. The first and third classes have been documented before in the midsagittal plane [cf. R. Harshman, P. Ladefoged, and L. Goldstein, J. Acoust. Soc. Am. 62, 693-707 (1976)]. The first three classes contained both vowels and consonants, the last only consonants. Electropalatographic patterns of the sounds indicated three categories of tongue-palate contact: bilateral, cross-sectional, and a combination of the two. Vowels used only the first pattern; consonants used all three. The EPG data thus provided an observable distinction in contact pattern between consonants and vowels, whereas the ultrasound tongue surface data did not. The conclusion was that the tongue has a limited repertoire of shapes and positions them against the palate in different ways for consonants versus vowels to create narrow channels, divert airflow, and produce sound.

Journal ArticleDOI
TL;DR: A finite element study of the generation of Lamb waves in plates from a finite air coupled transducer, the interaction of these waves with defects, and their detection using an air coupled receiver is described in this article, where the use of an ideal collimated beam in the model, instead of using the real pressure field generated by the transducers, is demonstrated to have negligible effect on the predicted Lamb waves.
Abstract: Air‐coupled nondestructive testing has become feasible following recent improvements in air‐coupled transducer design. However, the large acoustic impedance mismatch between air and solid materials does not allow normal incidence pulse‐echo inspection. Nevertheless, air‐coupled transducers can be used for the generation and detection of Lamb waves, the receiver being outside the field of the specular reflection. A finite element study of the generation of Lamb waves in plates from a finite air‐coupled transducer, the interaction of these waves with defects, and their detection using an air‐coupled receiver is described. These predictions are compared with experimental results obtained on a variety of specimens using a pair of 1‐3 composite, air‐coupled transducers. The use of an ideal collimated beam in the model, instead of using the real pressure field generated by the transducers, is demonstrated to have a negligible effect on the predicted Lamb waves. It is shown both theoretically and experimentally ...

Journal ArticleDOI
TL;DR: The paper first distinguishes the two perceptual theories, the motor theory and the theory of direct perception, that nearly agree in the claim that listeners to speech perceive vocal tract gestures, and justifies the claim of the direct realist theory that listeners perceive gestures.
Abstract: The paper first distinguishes the two perceptual theories, the motor theory and the theory of direct perception, that nearly agree in the claim that listeners to speech perceive vocal tract gestures. Next it justifies the claim of the direct realist theory that listeners perceive gestures and considers some experimental evidence in its favor. Finally it addresses evidence and arguments judged by Ohala to disconfirm the theory. The argument is made that most of the evidence put forward by Ohala is irrelevant to a distinction between theories that we perceive acoustic signals and theories that we perceive gestures, and that the remaining arguments are inaccurate or highly selective in the data upon which they draw.

Journal ArticleDOI
TL;DR: The experiments reported in this article attempted to determine the characteristics of signals appearing in the ear canals that are responsible for the perception of externalization, and found that externalization depends on the interaural phases of low-frequency components but not high-frequency components, as defined by a boundary near 1 kHz.
Abstract: Listeners perceive the sounds of the real world to be externalized. The sound images are compact and correctly located in space. The experiments reported in this article attempted to determine the characteristics of signals appearing in the ear canals that are responsible for the perception of externalization. The experiments used headphones to gain experimental control, and they employed a psychophysical method whereby the measurement of externalization was reduced to discrimination. When the headphone signals were synthesized to best resemble real‐world signals (the baseline synthesis) listeners could not distinguish between the virtual image created by the headphones and the real source. Externalization was then studied, using both discrimination and listener rating, by systematically modifying the baseline synthesis. It was found that externalization depends on the interaural phases of low‐frequency components but not high‐frequency components, as defined by a boundary near 1 kHz. By contrast, interaural level differences in all frequency ranges appear to be about equally important. Other experiments showed that externalization requires realistic spectral profiles in both ears; maintaining only the interaural difference spectrum is inadequate. It was also found that externalization does not depend on dispersion around the head; an optimum interaural time difference proved to be an adequate phase relationship.
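The finding that interaural phase matters only below about 1 kHz suggests a manipulation of the kind the study applied to its baseline synthesis: alter the interaural time (phase) relationship in the low band while leaving the high band untouched. The following sketch applies an ITD only below a split frequency; the 1-kHz split and 400-microsecond delay are illustrative assumptions, not the study's stimulus parameters:

```python
import numpy as np
from scipy.signal import butter, sosfilt

def itd_below_split(sig, fs, itd_s=0.0004, split_hz=1000):
    """Return one ear's signal with an interaural time difference applied
    only to the band below split_hz, sketching a low-frequency-only
    interaural phase manipulation. Values are illustrative."""
    sos_lo = butter(4, split_hz, btype="low", fs=fs, output="sos")
    sos_hi = butter(4, split_hz, btype="high", fs=fs, output="sos")
    low = sosfilt(sos_lo, sig)
    high = sosfilt(sos_hi, sig)
    shift = int(round(itd_s * fs))              # ITD in whole samples
    low_delayed = np.concatenate([np.zeros(shift), low[:len(low) - shift]])
    return low_delayed + high                   # delayed-ear channel
```

Presenting `sig` to one ear and `itd_below_split(sig, fs)` to the other changes low-frequency interaural phase while high-frequency phase stays matched, the regime in which the study found externalization to be affected.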