
Showing papers on "Acoustic source localization published in 2000"


Journal ArticleDOI
Jacob Benesty1
TL;DR: A new approach based on eigenvalue decomposition is proposed that performs well and is very accurate for time delay estimation in acoustic source localization.
Abstract: To find the position of an acoustic source in a room, the relative delay between two (or more) microphone signals for the direct sound must be determined. The generalized cross-correlation method is the most popular technique to do so and is well explained in a landmark paper by Knapp and Carter. In this paper, a new approach is proposed that is based on eigenvalue decomposition. Indeed, the eigenvector corresponding to the minimum eigenvalue of the covariance matrix of the microphone signals contains the impulse responses between the source and the microphones (and therefore all the information we need for time delay estimation). In experiments, the proposed algorithm performs well and is very accurate.
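As a rough illustration of the generalized cross-correlation baseline that this paper improves on (not the authors' eigenvalue method), the delay between two microphone signals can be estimated from the peak of the whitened cross-correlation. This is a minimal sketch; the function name, PHAT weighting, circular formulation, and signal lengths are assumptions for the example.

```python
import numpy as np

def gcc_phat_tdoa(x1, x2, fs, max_lag=64):
    """Relative delay of x2 with respect to x1 via generalized
    cross-correlation with PHAT weighting (circular version)."""
    X1, X2 = np.fft.rfft(x1), np.fft.rfft(x2)
    cps = X2 * np.conj(X1)              # cross-power spectrum
    cps /= np.abs(cps) + 1e-12          # PHAT: keep only the phase
    cc = np.fft.irfft(cps, n=len(x1))
    # Restrict the search to lags in [-max_lag, +max_lag].
    cc = np.concatenate((cc[-max_lag:], cc[:max_lag + 1]))
    return (np.argmax(cc) - max_lag) / fs

# Example: white noise reaching the second microphone 5 samples later.
fs = 8000
rng = np.random.default_rng(0)
s = rng.standard_normal(1024)
x1, x2 = s, np.roll(s, 5)
print(gcc_phat_tdoa(x1, x2, fs) * fs)   # delay in samples: 5.0
```

The PHAT weighting discards magnitude information so that the peak depends only on phase, which sharpens the correlation peak for broadband signals; it is exactly this kind of second-order statistic that the eigenvalue approach above replaces with an estimate of the impulse responses themselves.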

395 citations


Proceedings ArticleDOI
05 Jun 2000
TL;DR: It is shown that the OSLS algorithm is mathematically equivalent to the so-called spherical interpolation (SI) method but with less computational complexity.
Abstract: A multi-input one-step least-squares (OSLS) algorithm for passive source localization is proposed. It is shown that the OSLS algorithm is mathematically equivalent to the so-called spherical interpolation (SI) method but with less computational complexity. The OSLS/SI method uses spherical equations (instead of hyperbolic equations) and solves them in a least-squares sense. Based on the adaptive eigenvalue decomposition time delay estimation method previously proposed by the same authors and the OSLS source localization algorithm, a real-time passive source localization system for video camera steering is presented. The system demonstrates many desirable features such as accuracy, portability, and robustness.
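The spherical-equation least-squares step can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the function name is hypothetical, the reference microphone is assumed to sit at the origin, and exact range differences stand in for measured TDOAs.

```python
import numpy as np

def spherical_ls_locate(mics, range_diffs):
    """Least-squares source location from range differences.

    mics        : (N, 3) sensor positions, with the reference sensor
                  assumed to sit at the origin (not included here).
    range_diffs : (N,) range differences d_i = c * TDOA_i between each
                  sensor and the reference sensor.

    Solves the linearized 'spherical' equations
        m_i . s + d_i * r0 = (|m_i|^2 - d_i^2) / 2
    jointly for the source position s and its distance r0 to the
    reference sensor.
    """
    A = np.hstack([mics, range_diffs[:, None]])
    b = 0.5 * (np.sum(mics**2, axis=1) - range_diffs**2)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    return sol[:3]          # estimated source position

# Synthetic check: 5 microphones, known source.
rng = np.random.default_rng(1)
mics = rng.uniform(-1.0, 1.0, size=(5, 3))
src = np.array([2.0, 1.0, 0.5])
r0 = np.linalg.norm(src)                       # reference mic at origin
d = np.linalg.norm(mics - src, axis=1) - r0    # exact range differences
print(spherical_ls_locate(mics, d))            # ~ [2.0, 1.0, 0.5]
```

Because the spherical equations are linear in the unknowns, one least-squares solve replaces the iterative search that hyperbolic formulations typically require, which is the computational advantage the abstract refers to.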

141 citations


Journal ArticleDOI
TL;DR: Overall, the results suggest that, from the viewpoint of urban noise reduction, it is better to design the street boundaries as diffusely reflective rather than acoustically smooth.
Abstract: This paper systematically compares the sound fields in street canyons with diffusely and geometrically reflecting boundaries. For diffuse boundaries, a radiosity-based theoretical/computer model has been developed. For geometrical boundaries, the image source method has been used. Computations using the models show that there are considerable differences between the sound fields resulting from the two kinds of boundaries. By replacing diffuse boundaries with geometrical boundaries, the sound attenuation along the length becomes significantly less; the RT30 is considerably longer; and the extra attenuation caused by air or vegetation absorption is reduced. There are also some similarities between the sound fields under the two boundary conditions. For example, in both cases the sound attenuation along the length with a given amount of absorption is the highest if the absorbers are arranged on one boundary and the lowest if they are evenly distributed on all boundaries. Overall, the results suggest that, from the viewpoint of urban noise reduction, it is better to design the street boundaries as diffusely reflective rather than acoustically smooth.
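For the geometrically reflecting case, the image source method used above can be sketched in a minimal 2-D form: a street canyon is modeled as two parallel facades, and the field at a receiver is an incoherent energy sum over image sources. Everything here (function name, energy summation, reflection coefficient) is an illustrative assumption, not the paper's model.

```python
import numpy as np

def image_source_level(src, rec, width, refl=0.9, n_img=50):
    """Relative sound level at `rec` in a street canyon of given width,
    flanked by two geometrically (specularly) reflecting facades at
    y = 0 and y = width, summed incoherently over image sources.

    src, rec : (x, y) positions between the facades
    refl     : energy reflection coefficient of each facade
    """
    xs, ys = src
    xr, yr = rec
    total = 0.0
    for m in range(-n_img, n_img + 1):
        for flip in (1, -1):
            yi = 2 * m * width + flip * ys          # image y-coordinate
            # Number of facade reflections producing this image.
            n_refl = abs(2 * m) if flip == 1 else abs(2 * m - 1)
            r2 = (xr - xs) ** 2 + (yr - yi) ** 2
            total += refl ** n_refl / r2            # point-source energy ~ 1/r^2
    return 10 * np.log10(total)

# Attenuation along the length of the street.
src, width = (0.0, 5.0), 10.0
near = image_source_level(src, (10.0, 2.0), width)
far = image_source_level(src, (50.0, 2.0), width)
print(near - far)   # positive: level falls off along the street
```

A radiosity model for diffuse facades would replace the discrete image sum with an energy-exchange integral over the boundary patches, which is why the two boundary types produce such different decay behavior along the street.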

136 citations


Proceedings ArticleDOI
05 Jun 2000
TL;DR: This paper proposes a new method which suppresses the undesired cross-correlation by synchronous addition of CSP coefficients derived from multiple microphone pairs and shows that the proposed method improves the localization accuracy as the number of synchronous additions increases.

Abstract: Accurate localization of multiple sound sources is indispensable for microphone array-based high quality sound capture. For single sound source localization, the CSP (cross-power spectrum phase analysis) method has been proposed. The CSP method localizes a sound source as a crossing point of sound directions estimated using different microphone pairs. However, when localizing multiple sound sources, the localization accuracy of the CSP method is degraded by cross-correlation among the different sound sources. To solve this problem, this paper proposes a new method which suppresses the undesired cross-correlation by synchronous addition of CSP coefficients derived from multiple microphone pairs. Experimental results in a real room showed that the proposed method improves the localization accuracy as the number of synchronous additions increases.
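The synchronous-addition idea can be sketched for a far-field linear array: each pair's CSP profile is resampled onto a common direction grid and the profiles are summed, so peaks that agree across pairs reinforce while pair-dependent cross-terms do not. The function names, the array geometry, and the nearest-lag mapping are assumptions for the example; the paper's exact procedure may differ.

```python
import numpy as np

def csp(x1, x2):
    """CSP (cross-power spectrum phase) coefficients for one pair:
    the inverse FFT of the phase of the cross-power spectrum."""
    X1, X2 = np.fft.rfft(x1), np.fft.rfft(x2)
    cps = X2 * np.conj(X1)
    return np.fft.irfft(cps / (np.abs(cps) + 1e-12), n=len(x1))

def summed_csp_direction(signals, spacings, fs, c=343.0):
    """Synchronous addition of CSP profiles from several microphone
    pairs on a common direction grid (linear array, far field).

    signals  : list of (x_ref, x_i) time-signal pairs
    spacings : distance of each microphone from the reference mic (m)
    """
    angles = np.linspace(0.0, np.pi, 181)      # candidate directions
    total = np.zeros_like(angles)
    for (x1, x2), dist in zip(signals, spacings):
        cc = csp(x1, x2)
        n = len(cc)
        # Expected delay (in samples) for each candidate direction.
        lags = np.rint(fs * dist * np.cos(angles) / c).astype(int) % n
        total += cc[lags]                      # synchronous addition
    return np.degrees(angles[np.argmax(total)])

# Far-field source at 60 degrees from endfire of a linear array.
fs, c = 34300.0, 343.0        # sample rate chosen so delays are integer
rng = np.random.default_rng(2)
s = rng.standard_normal(1024)
spacings = [0.1, 0.2, 0.4]    # mic distances from the reference mic
delays = [5, 10, 20]          # samples: fs * d * cos(60 deg) / c
signals = [(s, np.roll(s, k)) for k in delays]
print(summed_csp_direction(signals, spacings, fs, c))  # ~ 60 degrees
```

With a single source all three CSP profiles peak at the same direction; with multiple sources, spurious cross-correlation peaks occur at different directions for different pairs and are averaged down by the addition.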

105 citations


Journal ArticleDOI
TL;DR: It is shown that coherence functions determined with appropriate spectral resolution contain the same information as the corresponding correlation functions, and that measuring such coherence functions is a far more efficient way of obtaining this information.
Abstract: A new method of measuring spatial correlation functions in reverberant sound fields is presented. It is shown that coherence functions determined with appropriate spectral resolution contain the same information as the corresponding correlation functions, and that measuring such coherence functions is a far more efficient way of obtaining this information. The technique is then used to verify theoretical predictions of the spatial correlation between various components of the particle velocity in a diffuse sound field. Other possible applications of the technique are discussed and illustrated with experimental results obtained in an ordinary room.

101 citations


PatentDOI
TL;DR: In this article, an adaptive eigenvalue decomposition algorithm (AEDA) is employed to estimate the channel impulse response from the sound source to each of a pair of microphones; these estimated impulse responses are then used to determine the time delay of arrival (TDOA) between the two microphones by measuring the distance between their first peaks (i.e., the first significant taps of the corresponding transfer functions).
Abstract: A real-time passive acoustic source localization system for video camera steering advantageously determines the relative delay between the direct paths of two estimated channel impulse responses. The illustrative system employs an approach referred to herein as the “adaptive eigenvalue decomposition algorithm” (AEDA) to make such a determination, and then advantageously employs a “one-step least-squares algorithm” (OSLS) for purposes of acoustic source localization, providing the desired features of robustness, portability, and accuracy in a reverberant environment. The AEDA technique directly estimates the (direct path) impulse response from the sound source to each of a pair of microphones, and then uses these estimated impulse responses to determine the time delay of arrival (TDOA) between the two microphones by measuring the distance between the first peaks thereof (i.e., the first significant taps of the corresponding transfer functions). In one embodiment, the system minimizes an error function (i.e., a difference) which is computed with the use of two adaptive filters, each such filter being applied to a corresponding one of the two signals received from the given pair of microphones. The filtered signals are then subtracted from one another to produce the error signal, which is minimized by a conventional adaptive filtering algorithm such as, for example, an LMS (Least Mean Squared) technique. Then, the TDOA is estimated by measuring the “distance” (i.e., the time) between the first significant taps of the two resultant adaptive filter transfer functions.
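The adaptive-filter error minimization described in the patent can be sketched as a cross-relation NLMS iteration: two filters are adapted so that their outputs cancel, under a unit-norm constraint that rules out the all-zero solution. This is a simplified illustration on synthetic pure-delay channels, not the patented implementation; the function name, step size, and filter length are assumptions.

```python
import numpy as np

def aeda_tdoa(x1, x2, L, mu=0.5, seed=0):
    """Sketch of the adaptive cross-relation idea behind AEDA.

    Two length-L adaptive filters u1 and u2 are driven so that the
    error (u1 * x1 - u2 * x2) -> 0, under a unit-norm constraint that
    excludes the trivial all-zero solution.  At convergence u1 and u2
    approximate the channels h2 and h1 (up to a common factor when L
    exceeds the true channel order), so the TDOA is read off as the
    spacing between the dominant taps (direct paths) of the filters.
    """
    rng = np.random.default_rng(seed)
    u = rng.standard_normal(2 * L)
    u /= np.linalg.norm(u)
    for n in range(L, len(x1)):
        # Stacked data vector [x1 window, -x2 window] (newest first).
        f = np.concatenate((x1[n - L:n][::-1], -x2[n - L:n][::-1]))
        e = u @ f                                  # cross-relation error
        u -= mu * e * f / (f @ f + 1e-12)          # NLMS step
        u /= np.linalg.norm(u)                     # unit-norm constraint
    u1, u2 = u[:L], u[L:]
    # Delay between the dominant taps (direct paths) of the two filters.
    return np.argmax(np.abs(u1)) - np.argmax(np.abs(u2))

# Synthetic pure-delay channels: direct paths at 1 and 4 samples.
rng = np.random.default_rng(3)
s = rng.standard_normal(8000)
x1, x2 = np.roll(s, 1), np.roll(s, 4)
print(aeda_tdoa(x1, x2, L=8))   # relative delay: about 3 samples
```

Reading the TDOA from the first significant taps, rather than from a cross-correlation peak, is what gives the approach its robustness to reverberation: the later taps model the reflections instead of letting them bias the delay estimate.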

81 citations


Book ChapterDOI
01 Jan 2000
TL;DR: In this paper, sound production and propagation in whales and dolphins are modeled based on physics and mathematics; such models can estimate the limits of intensity and frequency that are physically possible given the anatomy of a species.
Abstract: Acoustic models based on physics and mathematics may yield significant advances in the understanding of sound production, propagation, and interaction associated with whales and dolphins. Models can be used to estimate the limits of intensity and frequency that are physically possible given the anatomy of a species. Models can also tell us what kind of anatomical structures would be necessary in order to produce sound having specific characteristics. Models can be used to clarify what type of measurements should be performed to answer specific questions. Many areas of bioacoustics stand to benefit from simulation of sound propagation through biological tissues and the media surrounding them. However, accurate modeling of biological subjects with complex anatomical features is extremely challenging, and few modern studies exist of sound production and propagation in whales and dolphins.

79 citations


Journal ArticleDOI
TL;DR: A new method has been developed whereby the shape of reflected beams is governed by the shape of reflecting surfaces so as to produce a geometrically perfect description of the sound propagation for halls with occluding surfaces.
Abstract: The most popular models to predict sound propagation in architectural spaces involve the tracing of rays, images, or beams. Most current beam-tracing methods use conical or triangular beams that may produce overlaps and holes in the predicted sound field. Hence a new method has been developed whereby the shape of reflected beams is governed by the shape of reflecting surfaces so as to produce a geometrically perfect description of the sound propagation for halls with occluding surfaces. The method also facilitates the calculation of diffuse sound propagation by managing the energy transfer from a specular model to a diffuse model. This adaptive beam-tracing method compares well with other methods in terms of speed and accuracy.

58 citations


Book ChapterDOI
01 Mar 2000
TL;DR: This chapter considers the problem of passively estimating the acoustic source location by using microphone arrays for video camera steering in real reverberant environments and develops different algorithms for time delay estimation and source localization.
Abstract: In this chapter, we consider the problem of passively estimating the acoustic source location by using microphone arrays for video camera steering in real reverberant environments. Within a two-stage framework for this problem, different algorithms for time delay estimation and source localization are developed. Their performance as well as computational complexity are analyzed and discussed. A successful real-time system is also presented.

57 citations


PatentDOI
TL;DR: In this article, a sound system and method for modeling a sound field generated by a sound source and creating a sound event based on the modeled sound field is described; the reproduced sound field may itself be modeled and compared to the original sound field model.
Abstract: A sound system and method for modeling a sound field generated by a sound source and creating a sound event based on the modeled sound field is disclosed. The system and method captures a sound field over an enclosing surface, models the sound field and enables reproduction of the modeled sound field. Explosion type acoustical radiation may be used. Further, the reproduced sound field may be modeled and compared to the original sound field model.

Journal ArticleDOI
TL;DR: It is shown that a direct and robust estimate of the critical angle, and therefore the sediment sound speed, at the lower frequencies can be achieved by analyzing the grazing angle dependence of the phase delays observed on a buried array.
Abstract: Understanding the basic physics of sound penetration into ocean sediments is essential for the design of sonar systems that can detect, localize, classify, and identify buried objects. In this regard the sound speed of the sediment is a crucial parameter, as the ratio of sound speeds at the water-sediment interface determines the critical angle. Sediment sound speed is typically measured from core samples using high-frequency (100s of kHz) pulsed travel time measurements. Earlier experimental work on subcritical penetration into sandy sediments has suggested that the effective sound speed in the 2–20 kHz range is significantly lower than the core measurement results. Simulations using Biot theory for propagation in porous media confirmed that sandy sediments may be highly dispersive in the range 1–100 kHz for the type of sand in which the experiments were performed. Here it is shown that a direct and robust estimate of the critical angle, and therefore the sediment sound speed, at the lower frequencies can be achieved by analyzing the grazing angle dependence of the phase delays observed on a buried array. A parametric source with secondary frequencies in the 2–16 kHz range was directed toward a sandy bottom similar to the one investigated in the earlier study. An array of 14 hydrophones was used to measure the penetrated field. The critical angle was estimated by analyzing the variations of signal arrival times versus frequency, burial depth, and grazing angle. Matching the results with classical transmission theory yielded a sound speed estimate in the sand of 1626 m/s in the frequency range 2–5 kHz, again significantly lower than the 1720 m/s estimated from the cores at 200 kHz. However, as described here, this dispersion is consistent with the predictions of the Biot theory for this type of sand.

PatentDOI
Koichiro Mizushima1
TL;DR: In this paper, a method and apparatus enabling information including respective angular directions to be obtained for one or more sound sources includes a sound source direction estimation section for frequency-domain and time-domain processing of sets of output signals from a microphone array to derive successive estimated angular directions of each of the sound sources.
Abstract: A method and apparatus enabling information including respective angular directions to be obtained for one or more sound sources includes a sound source direction estimation section for frequency-domain and time-domain processing of sets of output signals from a microphone array to derive successive estimated angular directions of each of the sound sources. The estimated directions can be utilized by a passage detection section to detect when a sound source is currently moving past the microphone array and to record the direction of the sound source at the time point when such passage is detected; a motion velocity detection section, triggered by such passage detection, calculates the velocity of the passing sound source using successively obtained estimated directions. In addition, it becomes possible to orient the directivity of the microphone array along the direction of a sound source which is moving past the array, enabling accurate monitoring of sound levels of respective sound sources.

Journal ArticleDOI
TL;DR: A space- and time-dependent internal wave model was developed for a shallow water area on the New Jersey continental shelf and combined with a propagation algorithm to perform numerical simulations of acoustic field variability and results are presented as examples of propagation through this evolving environment.
Abstract: A space- and time-dependent internal wave model was developed for a shallow water area on the New Jersey continental shelf and combined with a propagation algorithm to perform numerical simulations of acoustic field variability. This data-constrained environmental model links the oceanographic field, dominated by internal waves, to the random sound speed distribution that drives acoustic field fluctuations in this region. Working with a suite of environmental measurements along a 42-km track, a parameter set was developed that characterized the influence of the internal wave field on sound speed perturbations in the water column. The acoustic propagation environment was reconstructed from this set in conjunction with bottom parameters extracted by use of acoustic inversion techniques. The resulting space- and time-varying sound speed field was synthesized from an internal wave field composed of both a spatially diffuse (linear) contribution and a spatially localized (nonlinear) component, the latter consisting of solitary waves propagating with the internal tide. Acoustic simulation results at 224 and 400 Hz were obtained from a solution to an elastic parabolic equation and are presented as examples of propagation through this evolving environment. Modal decomposition of the acoustic field received at a vertical line array was used to clarify the effects of both internal wave contributions to the complex structure of the received signals.

Proceedings ArticleDOI
30 Jul 2000
TL;DR: An automatic video-conferencing system is proposed which employs acoustic source localization, video face tracking and pose estimation, and multi-channel speech enhancement.
Abstract: An automatic video-conferencing system is proposed which employs acoustic source localization, video face tracking and pose estimation, and multi-channel speech enhancement. The video portion of the system tracks talkers by utilizing source motion, contour geometry, color data and simple facial features. Decisions involving which camera to use are based on an estimate of the head's gazing angle. This head pose estimation is achieved using a very general head model which employs hairline features and a learned network classification procedure. Finally, a wavelet microphone array technique is used to create an enhanced speech waveform to accompany the recorded video signal. The system presented in this paper is robust to both visual clutter (e.g. ovals in the scene of interest which are not faces) and audible noise (e.g. reverberations and background noise).

Patent
08 Dec 2000
TL;DR: In this article, the average sound velocity is determined using a regression method based on the travel time measurements of the ultrasound pulses, and the floor profile that forms the basis of the measured travel times is compared with a floor profile model composed of partial functions modeled in a specific manner.
Abstract: For mapping and exploring bodies of water, fan depth finders are used that emit ultrasound pulses and receive echo pulses in a number of tightly-bundled receiving sectors. Because the predominant number of receiving directions is oriented diagonally downward instead of straight down, these ultrasound pulses propagate on bent paths due to sound refraction. Sound refraction is caused by different sound velocity layers, the precise knowledge of which is necessary for determining an average sound velocity. The method of the present invention does not require a separate measuring probe to measure the sound velocity at different depths; rather, the average sound velocity is determined using a regression method based on the travel time measurements of the ultrasound pulses. In this method, first the floor profile that forms the basis of the measured travel times is determined with an assumed average sound velocity and compared with a floor profile model composed of partial functions modeled in a specific manner. Because a correction value for the average sound velocity can be determined from the partial functions, improved depth values of the floor profile can be determined iteratively with the measured travel times and the corrected sound velocity. A method of this type can advantageously be implemented on research and survey vessels for attaining the high precision required in surveying technology.

Journal ArticleDOI
TL;DR: An efficient set of algorithms for calculating approximations to internal-wave effects on temporal and spatial coherences, coherent bandwidths, and regimes of acoustic fluctuation behavior are compiled.
Abstract: Variability in the ocean sound-speed field on time scales of a few hours and horizontal spatial scales of a few kilometers is often dominated by the random, anisotropic fluctuations caused by the internal-wave field. Results have been compiled from analytical approaches and from numerical simulations using the parabolic approximation into an efficient set of algorithms for calculating approximations to internal-wave effects on temporal and spatial coherences, coherent bandwidths, and regimes of acoustic fluctuation behavior. These approximate formulas account for the background, deterministic, sound-speed profile and the anisotropy of the internal-wave field, and they also allow for the incorporation of experimentally determined profiles of sound speed, buoyancy frequency, and sound-speed variance. The algorithms start from the geometrical-acoustics approximation, in which the field transmitted from a source can be described completely in terms of rays whose characteristics are determined by the sound speed as a function of position. Ordinary integrals along these rays provide approximations to acoustic-fluctuation quantities due to the statistical effects of internal waves, including diffraction. The results from the algorithms are compared with numerical simulations and with experimental results for long-range propagation in the deep ocean.

Journal ArticleDOI
TL;DR: In this paper, sound from a turbulent flow is computed using DNS, and the DNS results are compared with acoustic-analogy predictions for mutual validation, where the source is a three-dimensional region of forced turbulence which has limited extent in one coordinate direction and is periodic in the other two directions.
Abstract: Predicting sound radiated by turbulence is of interest in aeroacoustics, hydroacoustics, and combustion noise. Significant improvements in computer technology have renewed interest in applying numerical techniques to predict sound from turbulent flows. One such technique is a hybrid approach in which the turbulence is computed using a method such as direct numerical simulation (DNS) or large eddy simulation (LES), and the sound is calculated using an acoustic analogy. In this study, sound from a turbulent flow is computed using DNS, and the DNS results are compared with acoustic-analogy predictions for mutual validation. The source considered is a three-dimensional region of forced turbulence which has limited extent in one coordinate direction and is periodic in the other two directions. Sound propagates statistically as a plane wave from the turbulence to the far field. The cases considered here have a small turbulent Mach number so that the source is spatially compact; that is, the turbulence integral ...

Journal ArticleDOI
TL;DR: The new approach of regularized matched-mode processing (RMMP) carries out an independent modal decomposition prior to comparison with the replica excitations for each grid point, using the replica itself as the a priori estimate in a regularized inversion.
Abstract: This paper develops a new approach to matched-mode processing (MMP) for ocean acoustic source localization. MMP consists of decomposing far-field acoustic data measured at an array of sensors to obtain the excitations of the propagating modes, then matching these with modeled replica excitations computed for a grid of possible source locations. However, modal decomposition can be ill-posed and unstable if the sensor array does not provide an adequate spatial sampling of the acoustic field (i.e., the problem is underdetermined). For such cases, standard decomposition methods yield minimum-norm solutions that are biased towards zero. Although these methods provide a mathematical solution (i.e., a stable solution that fits the data), they may not represent the most physically meaningful one. The new approach of regularized matched-mode processing (RMMP) carries out an independent modal decomposition prior to comparison with the replica excitations for each grid point, using the replica itself as the a priori estimate in a regularized inversion. For grid points at or near the source location, this should provide a more physically meaningful decomposition; at other points, the procedure still provides a stable inversion. In this paper, RMMP is compared to standard MMP and matched-field processing for a series of realistic synthetic test cases, including a variety of noise levels and sensor array configurations, as well as the effects of environmental mismatch.
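The regularized inversion with the replica as the a priori estimate can be sketched as a Tikhonov problem. This is an illustrative reconstruction under stated assumptions (real-valued modal matrix, hypothetical function name, arbitrary regularization weight), not the paper's processor.

```python
import numpy as np

def regularized_mode_decomp(E, d, a_prior, lam):
    """Tikhonov-regularized modal decomposition with an a priori
    estimate, in the spirit of RMMP: minimize
        ||E a - d||^2 + lam * ||a - a_prior||^2
    where E (sensors x modes) maps modal excitations a to the field d
    sampled at the array.  When the array undersamples the field
    (underdetermined E), the solution is pulled toward a_prior instead
    of toward zero as in a minimum-norm inversion.
    """
    A = E.conj().T @ E + lam * np.eye(E.shape[1])
    b = E.conj().T @ (d - E @ a_prior)
    return a_prior + np.linalg.solve(A, b)

# Underdetermined example: 3 sensors, 5 propagating modes.
rng = np.random.default_rng(4)
E = rng.standard_normal((3, 5))
a_true = rng.standard_normal(5)
d = E @ a_true
# Minimum-norm solution is biased toward zero ...
a_mn = np.linalg.pinv(E) @ d
# ... while regularizing toward the correct replica recovers it.
a_rmmp = regularized_mode_decomp(E, d, a_true, lam=1e-3)
print(np.linalg.norm(a_mn - a_true), np.linalg.norm(a_rmmp - a_true))
```

At the true grid point the replica equals the true excitations and the regularized solution reproduces them exactly; at wrong grid points the prior disagrees with the data and the fit degrades, which is what makes the match a useful localization statistic.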

Patent
25 Aug 2000
TL;DR: In this paper, an acoustic emitter directs acoustic waves to the discontinuity in the material, and an acoustic receiver receives the acoustic waves generated by the acoustic emitters after the acoustic wave have interacted with the material (10), and the acoustic receiver also generates a signal representative of the acoustic signals received by the emitter.
Abstract: An apparatus for detecting a discontinuity in a material includes a source of electromagnetic radiation (110) having a wavelength and an intensity sufficient to induce an enhancement in contrast between a manifestation of an acoustic property in the material (10) and of the acoustic property in the discontinuity (20), as compared to when the material is not irradiated by the electromagnetic radiation. An acoustic emitter directs acoustic waves to the discontinuity in the material. The acoustic waves have a sensitivity to the acoustic property. An acoustic receiver (120) receives the acoustic waves generated by the acoustic emitter after the acoustic waves have interacted with the material (10) and the discontinuity (20). The acoustic receiver also generates a signal representative of the acoustic waves received by the acoustic receiver. A processor (130), in communication with the acoustic receiver and responsive to the signal generated by the acoustic receiver, is programmed to generate informational output about the discontinuity (134) based on the signal generated by the acoustic receiver.

Journal ArticleDOI
TL;DR: An approach for avoiding the problem of environmental uncertainty, acoustic data basing, is tested using data from the TESPEX experiments; it is an alternative to the difficult task of characterizing the environment by performing direct measurements and solving inverse problems.
Abstract: An approach for avoiding the problem of environmental uncertainty is tested using data from the TESPEX experiments. Acoustic data basing is an alternative to the difficult task of characterizing the environment by performing direct measurements and solving inverse problems. A source is towed throughout the region of interest to obtain a database of the acoustic field on an array of receivers. With this approach, there is no need to determine environmental parameters or solve the wave equation. Replica fields from an acoustic database are used to perform environmental source tracking [J. Acoust. Soc. Am. 94, 3335–3341 (1993)], which exploits environmental complexity and source motion.

Journal ArticleDOI
TL;DR: In this paper, the effects of spatial filtering on the sound generated from a subsonic axisymmetric jet were investigated by filtering near-field flow variables obtained from a direct numerical simulation.
Abstract: The effects of spatial filtering on the sound generated from a subsonic axisymmetric jet were investigated by filtering near-field flow variables obtained from a direct numerical simulation. This is useful to assess the accuracy of the large-eddy simulation (LES) technique for predicting aerodynamically generated sound. Lighthill's acoustic analogy in the frequency domain was employed to predict the far-field sound. The direct numerical simulation results were in excellent agreement with recently published results for the same jet (Mitchell, B. E., Lele, S. K., and Moin, P., Direct Computation of the Sound Generated by Vortex Pairing in an Axisymmetric Jet, Journal of Fluid Mechanics, Vol. 383, 1999, pp. 113-142). To handle the effects of domain truncation errors on the Lighthill source term, a windowing function was employed. Predictions of the far-field sound using Lighthill's acoustic analogy were in good agreement with the simulation results at low frequencies, even for shallow angles from the jet axis. Significant discrepancies were observed at high frequencies. It was found that low-frequency sound was dominant and the effects of filtering on the low-frequency sound were negligible. In addition, the sound levels computed from both the filtered and unfiltered source terms were in good agreement with the directly computed results. Filtering reduced the small-scale fluctuations in the near-field and, as expected, decreased the magnitude of the source term for the high-frequency sound. A model was developed and tested to predict the subgrid contribution to the Lighthill tensor in cases where, as in LES, only relatively large-scale flow structures are resolved.

PatentDOI
TL;DR: In this paper, an impulse response when a sound radiated from a real sound source (S) is emitted from one of the sound source element regions (S1 to Sn), passes through the sound field (10), and then reaches the sound receiving point (R1 to Rm) is obtained for each of combinations of the S1-sn and R1-rm regions.
Abstract: A space surrounding a sound source (S) which is set in a sound field (10) to be reproduced is divided into sound source element regions (S1 to Sn), and a space surrounding a sound receiving point (R) is divided into sound receiving element regions (R1 to Rm). An impulse response when a sound radiated from the sound source (S) is emitted from one of the sound source element regions (S1 to Sn), passes through the sound field (10), enters one of the sound receiving element regions (R1 to Rm), and then reaches the sound receiving point (R) is obtained for each of combinations of the sound source element regions (S1 to Sn) and the sound receiving element regions (R1 to Rm). A sound emitted from a real sound source (Sr) in an arbitrary real space (26) is picked up by microphones (MC1 to MCn) placed correspondingly with the sound source element regions (S1 to Sn). In an FIR matrix circuit (42), the pickup signals are respectively subjected to a convolution operation with impulse responses which are obtained for each of the sound source element regions (S1 to Sn) in corresponding directions.

Journal ArticleDOI
TL;DR: It is concluded that speech intelligibility may not have been evaluated correctly by RASTI, and reflected waves were the primary contributors to sound fields out of sight of the source.
Abstract: The characteristics of sound propagation and speech transmission along a tunnel with a “T” intersection were investigated. At receivers within sight of the sound source, low frequencies were attenuated more around the intersection than high frequencies. At receivers out of sight of the source, high frequencies were extensively attenuated. The overall pattern of sound attenuation along the different sections of tunnel, which was calculated by the conical beam method, agreed well with the measurements in this study. Numerical calculations of reflected and diffracted waves with minimum transmission paths in a two-dimensional plane showed that reflected waves were the primary contributors to sound fields out of sight of the source. The articulation scores measured at receivers within sight of the source were high, and most of the confusion concerned syllables that could easily be misheard, even at a high signal-to-noise ratio. The types of syllable confusions observed at the receivers out of sight of the source appeared to have been caused by the greater deterioration in speech signals along this part of the tunnel, especially at high frequencies. The evaluation by rapid speech transmission indices (RASTI) appeared to be overestimated at the receivers out of sight of the source. Taking into account the early decay times of impulsive sound and the calculation procedures used in RASTI, it is concluded that speech intelligibility may not have been evaluated correctly by RASTI.

Journal ArticleDOI
TL;DR: In this article, the experimental results of heat transfer in 10 and 20 kHz acoustic resonant standing wave fields for a small cylinder by means of hot-film anemometry were reported.

Proceedings ArticleDOI
24 Apr 2000
TL;DR: An experimental mobile robot with acoustic source localization capabilities for surveillance and transportation tasks in indoor environments is described, along with its hardware and software architectures, which are built on a distributed design with TCP/IP message passing.
Abstract: This paper describes an experimental mobile robot with acoustic source localization capabilities for surveillance and transportation tasks in indoor environments. The location of a speaking operator is detected by a microphone-array-based algorithm; the localization information is passed to a navigation module, which sets up a navigation mission using knowledge of the environment map. The system has been developed using a distributed architecture with TCP/IP message passing. We describe the hardware and software architectures, as well as the algorithms. Experimental results describing the system performance in localization tasks are reported.
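Microphone-array localization of this kind is typically built on time-delay estimation between microphone pairs. A common building block is the PHAT-weighted generalized cross-correlation; the sketch below is that standard method, not necessarily the exact algorithm of this paper:

```python
import numpy as np

def gcc_phat_delay(x, y, fs):
    """Estimate the delay (in seconds) of signal y relative to signal x
    using PHAT-weighted generalized cross-correlation."""
    n = len(x) + len(y)
    X = np.fft.rfft(x, n)
    Y = np.fft.rfft(y, n)
    cross = np.conj(X) * Y
    cross /= np.abs(cross) + 1e-12            # PHAT: keep phase, drop magnitude
    cc = np.fft.irfft(cross, n)
    # Rearrange so index n//2 corresponds to zero lag
    cc = np.concatenate((cc[-(n // 2):], cc[:n // 2 + 1]))
    shift = int(np.argmax(np.abs(cc))) - n // 2
    return shift / fs
```

Delays estimated this way for several microphone pairs can then be intersected geometrically (e.g., by a least-squares method such as the OSLS/SI approach mentioned above) to obtain the source position.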

PatentDOI
TL;DR: In this paper, the authors proposed a method for remotely sensing sound waves in an optically transparent or semitransparent medium through detecting changes in the optical properties of the medium, which are caused by sound waves.
Abstract: Methods for remotely sensing sound waves in an optically transparent or semitransparent medium by detecting the changes in the optical properties of the medium that are caused by the sound waves; these can be used, for example, to implement a microphone that senses sound at a distance from the sound source. The variations in the attenuation or the phase of a beam of light received after passing through the sound waves are sensed and converted to an electrical or other signal. For the attenuation method, the wavelength of the sensed light beam is selected to be one that is strongly attenuated by a constituent of the medium, so that the changing instantaneous pressure due to the sound waves can be detected through the changing attenuation that accompanies the changing density of the air along the light path. For the phase-shift method, the velocity of light, and therefore its phase, is changed by the changing density of the air due to the sound waves, and this can be detected by interferometric means.

Patent
06 Sep 2000
TL;DR: In this patent, the sound source direction is inferred using directional microphones with mutually different directivities, namely two bidirectional microphones and one omnidirectional microphone, and pickup zones are switched without physically moving any microphone.
Abstract: PROBLEM TO BE SOLVED: To separate and extract individual sound sources by inferring the sound source direction with two bidirectional microphones and one omnidirectional microphone, switching pickup zones without physically moving the microphones. SOLUTION: A sound pickup and sound source separating device comprises a plurality of directional microphones 1, 2 with mutually different directivities, one omnidirectional microphone 3, a directivity-varying means 4 that forms weighted sums of the microphone pickup signals using coefficient sets corresponding to different directivity directions, a sound source direction inferring means 7 that infers the sound source direction from the weighted-sum signals, and a directivity-selecting means 6 that selects the weighted-sum coefficients corresponding to the inferred direction.
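The weighted-summing idea can be illustrated with the textbook first-order technique of combining an omnidirectional signal with two orthogonal bidirectional (figure-eight) signals to synthesize a cardioid aimed at any azimuth, then scanning azimuths for peak energy. The coefficients and the energy scan below are assumptions for illustration, not the patent's exact weighting:

```python
import numpy as np

def steer_cardioid(omni, fig8_x, fig8_y, azimuth):
    """Weighted sum forming a cardioid pattern aimed at `azimuth` (radians),
    with no physical movement of any microphone."""
    return 0.5 * (omni + np.cos(azimuth) * fig8_x + np.sin(azimuth) * fig8_y)

def infer_direction(omni, fig8_x, fig8_y, n_angles=72):
    """Infer the source azimuth by scanning steered beams for maximum energy."""
    angles = np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False)
    energies = [np.sum(steer_cardioid(omni, fig8_x, fig8_y, a) ** 2)
                for a in angles]
    return angles[int(np.argmax(energies))]
```

Once the direction is inferred, the corresponding coefficient set can simply be kept selected, which is the role of the directivity-selecting means in the abstract.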

Journal ArticleDOI
TL;DR: In this article, the authors discuss bias errors that arise in energy density measurements in one-dimensional sound fields when the two-microphone technique is used to estimate the particle velocity, including inherent, phase-mismatch, sensitivity-mismatch, and spatial errors.
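For context, the two-microphone technique estimates pressure as the average of the two signals and particle velocity by integrating the finite-difference pressure gradient via Euler's equation. A minimal time-domain sketch of the resulting energy density estimate (the function name and rectangular-rule integration are assumptions, and the sketch deliberately ignores the bias errors the paper analyzes):

```python
import numpy as np

def energy_density_two_mic(p1, p2, d, fs, rho=1.21, c=343.0):
    """Two-microphone estimate of instantaneous acoustic energy density.

    p1, p2 : pressure samples from the two microphones (1-D arrays)
    d      : microphone spacing in meters
    fs     : sample rate in Hz
    """
    p = 0.5 * (p1 + p2)                # pressure at the midpoint
    grad = (p2 - p1) / d               # finite-difference pressure gradient
    u = -np.cumsum(grad) / (rho * fs)  # Euler: rho * du/dt = -grad(p)
    return p ** 2 / (2 * rho * c ** 2) + 0.5 * rho * u ** 2
```

Each approximation in this chain (finite spacing, gain and phase mismatch between the channels, numerical integration) contributes one of the bias-error families the article examines.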

Journal ArticleDOI
TL;DR: The influence of all the sound field parameters (amplitude, frequency difference, and time) on the transmission function is studied, so that better-performing collinear spectrometers can be designed.