
Showing papers on "Microphone array published in 2008"


Journal ArticleDOI
TL;DR: A unified maximum likelihood framework for sound source localization (SSL) and beamforming is presented, and it is demonstrated how such a framework can be adapted to create efficient SSL and beamforming algorithms for reverberant rooms and unknown directional patterns of microphones.
Abstract: In distributed meeting applications, microphone arrays have been widely used to capture superior speech sound and perform speaker localization through sound source localization (SSL) and beamforming. This paper presents a unified maximum likelihood framework of these two techniques, and demonstrates how such a framework can be adapted to create efficient SSL and beamforming algorithms for reverberant rooms and unknown directional patterns of microphones. The proposed method is closely related to steered response power-based algorithms, which are known to work extremely well in real-world environments. We demonstrate the effectiveness of the proposed method on challenging synthetic and real-world datasets, including over six hours of recorded meetings.

199 citations
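The steered response power (SRP) idea this entry relates to can be sketched as: delay-and-sum toward each candidate source location and report the location whose aligned sum carries the most energy. The sketch below is a minimal frequency-domain version under free-field assumptions, not the paper's maximum likelihood formulation; all names and the grid-search structure are illustrative.

```python
import numpy as np

def srp_delay_and_sum(signals, mic_x, candidates, fs, c=343.0):
    """Steered response power over a grid of candidate source positions.

    signals:    (n_mics, n_samples) time-aligned recordings
    mic_x:      (n_mics, dim) microphone coordinates in meters
    candidates: (n_cand, dim) candidate source positions
    Returns the candidate with the highest delay-and-sum output power.
    """
    n_samples = signals.shape[1]
    freqs = np.fft.rfftfreq(n_samples, d=1.0 / fs)
    spectra = np.fft.rfft(signals, axis=1)
    best_pos, best_power = None, -np.inf
    for cand in candidates:
        # Propagation delay to each microphone, referenced to microphone 0.
        dists = np.linalg.norm(mic_x - cand, axis=1)
        taus = (dists - dists[0]) / c
        # Undo the delays with a phase shift, then sum coherently.
        steering = np.exp(2j * np.pi * freqs[None, :] * taus[:, None])
        power = np.sum(np.abs((spectra * steering).sum(axis=0)) ** 2)
        if power > best_power:
            best_pos, best_power = cand, power
    return best_pos
```

At the true location the channels add coherently, so the summed power peaks there; reverberation and noise broaden but rarely move that peak, which is why SRP-style methods are robust in practice.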


Proceedings ArticleDOI
12 May 2008
TL;DR: A novel approach that directly recovers the locations of both microphones and sound sources from time-difference-of-arrival measurements alone, requiring only the solution of linear equations and a matrix factorization.
Abstract: In this paper we present a novel approach to directly recover the location of both microphones and sound sources from time-difference-of-arrival measurements only. No approximation solution is required for initialization and in the absence of noise our approach is guaranteed to always recover the exact solution. Our approach only requires solving linear equations and matrix factorization. We demonstrate the feasibility of our approach with synthetic data.

109 citations
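For contrast with the paper's joint recovery of microphones and sources, the classic simpler setting — microphone positions known — shows how TDOA localization reduces to linear equations. The sketch below assumes the reference microphone sits at the origin; all names are illustrative.

```python
import numpy as np

def tdoa_localize(mic_x, tdoas, c=343.0):
    """Source localization from TDOAs by linear least squares.

    mic_x: (n, 3) microphone positions; microphone 0 is the reference and
           is assumed to sit at the origin.
    tdoas: (n-1,) delays tau_i = (d_i - d_0) / c for microphones 1..n-1.

    Squaring ||x - r_i||^2 = (d_0 + c tau_i)^2 and substituting
    ||x||^2 = d_0^2 leaves equations linear in the unknowns (x, d_0):
        2 r_i . x + 2 c tau_i d_0 = ||r_i||^2 - (c tau_i)^2
    """
    r = mic_x[1:]                                   # non-reference mics
    ct = c * np.asarray(tdoas)
    A = np.hstack([2.0 * r, (2.0 * ct)[:, None]])   # unknowns: x (3), d_0
    b = np.sum(r ** 2, axis=1) - ct ** 2
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    return sol[:3]
```

With at least five microphones the system is (generically) determined, and in the noise-free case the least-squares solution is exact — the same "linear algebra only" flavor the paper extends to unknown microphone positions.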


Journal ArticleDOI
TL;DR: This correspondence presents a microphone array shape calibration procedure for diffuse noise environments: intermicrophone distances are estimated by fitting the measured noise coherence to its theoretical model, and the array geometry is then recovered using classical multidimensional scaling.
Abstract: This correspondence presents a microphone array shape calibration procedure for diffuse noise environments. The procedure estimates intermicrophone distances by fitting the measured noise coherence with its theoretical model and then estimates the array geometry using classical multidimensional scaling. The technique is validated on noise recordings from two office environments.

96 citations
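The two stages can be sketched under the standard diffuse-field model, in which the (real) coherence between microphones a distance d apart is sinc(2πfd/c). Below is a minimal grid-search fit plus classical MDS — a sketch of the general recipe, not the authors' implementation; names and the grid are illustrative.

```python
import numpy as np

C = 343.0  # speed of sound in m/s

def fit_pair_distance(freqs, coherence, d_grid):
    """Estimate one inter-microphone distance by grid search: pick the d
    whose diffuse-field model sinc(2*pi*f*d/c) best fits the measured real
    coherence.  Note np.sinc(x) = sin(pi x)/(pi x), hence the 2*f*d/C."""
    model = np.sinc(2.0 * freqs[None, :] * d_grid[:, None] / C)
    errs = np.sum((model - coherence[None, :]) ** 2, axis=1)
    return d_grid[np.argmin(errs)]

def classical_mds(D, dim=2):
    """Classical multidimensional scaling: embed n points from an n x n
    matrix of pairwise distances D into `dim` dimensions (recovered up to
    rotation, reflection, and translation)."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n     # centering matrix
    B = -0.5 * J @ (D ** 2) @ J             # double-centered Gram matrix
    w, v = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:dim]         # keep the largest eigenvalues
    return v[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))
```

Fitting every pair gives the distance matrix D; classical MDS then returns coordinates whose pairwise distances reproduce D, which is all "array shape" means here.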


Proceedings ArticleDOI
01 Sep 2008
TL;DR: The iterative decoding algorithm from communication systems, based on the theory of turbo codes and factor graphs, is modified and adapted to perform probabilistic inference for tracking humans in an indoor space using multiple cameras and microphone arrays.
Abstract: Tracking humans in an indoor environment is an essential part of surveillance systems. Vision based and microphone array based trackers have been extensively researched in the past. Audio-visual tracking frameworks have also been developed. In this paper we consider human tracking to be a specific instance of a more general problem of information fusion in multimodal systems. Dynamic Bayesian networks have been the modeling technique of choice to build such information fusion schemes. The complexity and non-Gaussianity of distributions of the dynamic Bayesian networks for such multimodal systems have led to the use of particle filters as an approximate inference technique. In this paper we present an alternative approach to the information fusion problem. The iterative decoding algorithm is based on the theory of turbo codes and factor graphs used in communication systems. We modify and adapt the iterative decoding algorithm to do probabilistic inference for the problem of tracking humans in an indoor space, using multiple cameras and microphone arrays.

95 citations


Patent
16 Jul 2008
TL;DR: A first plurality of microphone signals is beamformed by a first beamformer to obtain a first beamformed signal, and a second plurality by a second beamformer with the same weights; the weights are adjusted so that the echo and/or noise power in the microphone signals is substantially reduced.
Abstract: Embodiments of the present invention relate to methods, systems, and computer program products for signal processing. A first plurality of microphone signals is obtained by a first microphone array. A second plurality of microphone signals is obtained by a second microphone array different from the first microphone array. The first plurality of microphone signals is beamformed by a first beamformer comprising beamforming weights to obtain a first beamformed signal. The second plurality of microphone signals is beamformed by a second beamformer comprising the same beamforming weights as the first beamformer to obtain a second beamformed signal. The beamforming weights are adjusted such that the power density of echo components and/or noise components present in the first and second plurality of microphone signals is substantially reduced.

87 citations


Proceedings ArticleDOI
20 Oct 2008
TL;DR: A realtime system for analyzing group meetings that uses a novel omnidirectional camera-microphone system to automatically discover the visual focus of attention (VFOA) and new 3-D visualization schemes for meeting scenes and the results of an analysis are presented.
Abstract: This paper presents a realtime system for analyzing group meetings that uses a novel omnidirectional camera-microphone system. The goal is to automatically discover the visual focus of attention (VFOA), i.e. "who is looking at whom", in addition to speaker diarization, i.e. "who is speaking and when". First, a novel tabletop sensing device for round-table meetings is presented; it consists of two cameras with two fisheye lenses and a triangular microphone array. Second, from high-resolution omnidirectional images captured with the cameras, the position and pose of people's faces are estimated by STCTracker (Sparse Template Condensation Tracker); it realizes realtime robust tracking of multiple faces by utilizing GPUs (Graphics Processing Units). The face position/pose data output by the face tracker is used to estimate the focus of attention in the group. Using the microphone array, robust speaker diarization is carried out by a VAD (Voice Activity Detection) and a DOA (Direction of Arrival) estimation followed by sound source clustering. This paper also presents new 3-D visualization schemes for meeting scenes and the results of an analysis. Using two PCs, one for vision and one for audio processing, the system runs at about 20 frames per second for 5-person meetings.

84 citations


Patent
18 Jan 2008
TL;DR: A sound zoom method, medium, and apparatus that generate a signal in which the target sound is removed from the sound signals input to a microphone array, by adjusting a null width that restricts the directivity sensitivity of the array, and then extract a signal corresponding to the target sound from the input signals using the generated signal.
Abstract: A sound zoom method, medium, and apparatus generating a signal in which a target sound is removed from sound signals input to a microphone-array by adjusting a null width that restricts a directivity sensitivity of the microphone array, and extracting a signal corresponding to the target sound from the sound signals by using the generated signal. Thus, a sound located at a predetermined position away from the microphone array can be selectively obtained so that a target sound is efficiently obtained.

75 citations


Patent
08 Apr 2008
TL;DR: In this article, an apparatus for virtual navigation and voice processing is described, which includes a computer readable storage medium having computer instructions for processing voice signals captured from a microphone array and detecting a location of an object in a touchless sensory field of the microphone array.
Abstract: An apparatus for virtual navigation and voice processing is provided. A system that incorporates teachings of the present disclosure may include, for example, a computer readable storage medium having computer instructions for processing voice signals captured from a microphone array, detecting a location of an object in a touchless sensory field of the microphone array, and receiving information from a user interface in accordance with the location and voice signals.

74 citations


Journal ArticleDOI
TL;DR: It is shown that high robustness can be achieved without increasing the number of microphones by arranging the microphones in the volume of a spherical shell, and another simpler configuration employs a single sphere and an additional microphone at the sphere center, showing improved robustness at the low-frequency range.
Abstract: Spherical microphone arrays have been recently studied for a wide range of applications. In particular, microphones arranged around an open or virtual sphere are useful in scanning microphone arrays for sound field analysis. However, open-sphere spherical arrays have been shown to have poor robustness at frequencies related to the zeros of the spherical Bessel functions. This paper presents a framework for the analysis of array robustness using the condition number of a given matrix, and then proposes several robust array configurations. In particular, a dual-sphere configuration previously presented which uses twice as many microphones compared to a single-sphere configuration is analyzed. This paper then shows that high robustness can be achieved without increasing the number of microphones by arranging the microphones in the volume of a spherical shell. Another simpler configuration employs a single sphere and an additional microphone at the sphere center, showing improved robustness at the low-frequency range. Finally, the white-noise gain of the arrays is investigated verifying that improved white-noise gain is associated with lower matrix condition number.

72 citations
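The robustness analysis rests on the condition number of a mode-strength matrix, which for an open sphere involves the spherical Bessel functions j_n(kr). A toy sketch — assuming a diagonal mode-strength matrix, with a small recurrence standing in for a Bessel library — shows the conditioning collapsing near a Bessel zero such as kr = π; everything here is illustrative, not the paper's framework.

```python
import numpy as np

def spherical_bessel_seq(max_order, x):
    """j_0..j_N at x via the upward recurrence
    j_{n+1}(x) = ((2n+1)/x) j_n(x) - j_{n-1}(x) (adequate for small orders)."""
    j = [np.sin(x) / x, np.sin(x) / x**2 - np.cos(x) / x]
    for n in range(1, max_order):
        j.append((2 * n + 1) / x * j[n] - j[n - 1])
    return np.array(j[: max_order + 1])

def open_sphere_condition(kr, max_order=3):
    """Toy robustness metric for an open-sphere array: condition number of
    a diagonal mode-strength matrix diag(j_0(kr), ..., j_N(kr)).  Near a
    zero of a spherical Bessel function the matrix is ill-conditioned,
    mirroring the poor robustness the paper associates with those
    frequencies."""
    return np.linalg.cond(np.diag(spherical_bessel_seq(max_order, kr)))
```

At kr = π the j_0 entry vanishes and the condition number explodes; filling the volume of a shell, or adding a center microphone, keeps every mode excited at every frequency, which is the paper's remedy.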


Patent
25 Nov 2008
TL;DR: In this article, an orientation sensor detects a change in the orientation of the microphone array and provides an orientation signal to the signal processor for adjusting the aim of the beamforming to maintain the selected direction.
Abstract: A device includes a microphone array fixed to the device. A signal processor produces an audio output using audio beamforming with input from the microphone array. The signal processor aims the beamforming in a selected direction. An orientation sensor—such as a compass, an accelerometer, or an inertial sensor—is coupled to the signal processor. The orientation sensor detects a change in the orientation of the microphone array and provides an orientation signal to the signal processor for adjusting the aim of the beamforming to maintain the selected direction. The device may include a camera that captures an image. An image processor may identify an audio source in the image and provide a signal adjusting the selected direction to follow the audio source. The image processor may receive the orientation signal and adjust the image for changes in the orientation of the camera before tracking movement of the audio source.

65 citations


Journal ArticleDOI
TL;DR: The beamforming correction is applied to the identification of realistic aeolian-tone dipoles and shows an improvement of array performance on estimating dipole source powers.
Abstract: In this paper, a beamforming correction for identifying dipole sources by means of phased microphone array measurements is presented and implemented numerically and experimentally. Conventional beamforming techniques, which are developed for monopole sources, can lead to significant errors when applied to reconstruct dipole sources. A previous correction technique to microphone signals is extended to account for both source location and source power for two-dimensional microphone arrays. The new dipole-beamforming algorithm is developed by modifying the basic source definition used for beamforming. This technique improves the previous signal correction method and yields a beamformer applicable to sources which are suspected to be dipole in nature. Numerical simulations are performed, which validate the capability of this beamformer to recover ideal dipole sources. The beamforming correction is applied to the identification of realistic aeolian-tone dipoles and shows an improvement of array performance on estimating dipole source powers.

Journal ArticleDOI
TL;DR: This letter presents the theory and a simulation example for steering general beam patterns in spherical microphone arrays by weighting the beam pattern coefficients in the spherical harmonics domain with Wigner-D functions that hold the rotation angles as parameters.
Abstract: Spherical microphone arrays, which have been recently developed and proposed for various applications, typically employ beam patterns that are rotationally symmetric about the look direction, providing efficient beam steering in the spherical harmonics domain. However, in some situations, a more general beam pattern may be desired. This letter presents the theory and a simulation example for steering general beam patterns in spherical microphone arrays. Beam steering, formulated as a rotation of the beam pattern, is achieved by weighting the beam pattern coefficients in the spherical harmonics domain with Wigner-D functions that hold the rotation angles as parameters. A matrix formulation is provided, with successive rotations formulated as matrix products.

PatentDOI
TL;DR: A method for canceling background noise from sound sources other than the target-direction sound source, in order to realize highly accurate speech recognition, and a system using the same.
Abstract: Provided is a method for canceling background noise from sound sources other than a target-direction sound source, in order to realize highly accurate speech recognition, and a system using the same. Exploiting the directional characteristics of a microphone array, the power distribution over angle for any possible sound source direction can be approximated by a weighted sum of per-angle base-form power distributions of the target source, measured beforehand for each base-form angle using a base-form sound, together with the base-form power distribution of non-directional background sound; a noise suppression part uses this approximation to extract only the component in the target source direction. When the target source direction is unknown, a sound source localization part estimates it by selecting, from the base-form angle power distributions of the various source directions, the one that minimizes the approximation residual. Finally, maximum likelihood estimation is performed using the voice data of the extracted source-direction component together with a voice model obtained by predetermined modeling of such data, and speech recognition is carried out based on the resulting estimate.

Proceedings ArticleDOI
12 May 2008
TL;DR: A new method using acoustic maps to deal with the case of two simultaneous speakers, based on a two step analysis of a coherence map, which allows one to localize both speakers.
Abstract: An interface for distant-talking control of home devices requires the possibility of identifying the positions of multiple users. Acoustic maps, based either on global coherence field (GCF) or oriented global coherence field (OGCF), have already been exploited successfully to determine position and head orientation of a single speaker. This paper proposes a new method using acoustic maps to deal with the case of two simultaneous speakers. The method is based on a two step analysis of a coherence map: first the dominant speaker is localized; then the map is modified by compensating for the effects due to the first speaker and the position of the second speaker is detected. Simulations were carried out to show how an appropriate analysis of OGCF and GCF maps allows one to localize both speakers. Experiments proved the effectiveness of the proposed solution in a linear microphone array set up.

Patent
27 May 2008
TL;DR: An error detector detects whether the first and second microphones are functioning, based on the first and second audio signals, and generates a status signal; a digital signal processor (DSP) uses the status signal to switch to a single-microphone mode in which only the remaining working microphone is enabled.
Abstract: An audio device is provided, employing a defect detection method to detect defectiveness within a microphone array. The microphone array comprises a first microphone and a second microphone, which respectively generate a first audio signal and a second audio signal from ambient sound. An error detector detects whether the first and second microphones are functioning, based on the first and second audio signals, and generates a status signal. A digital signal processor (DSP) processes the first and second audio signals based on the status signal. If the status signal indicates that only the first microphone or the second microphone is defective, the DSP switches to a single-microphone mode in which only the remaining working microphone is enabled. If the status signal indicates that both microphones are defective, the DSP generates an error indication signal and stops processing the audio signals.

Patent
Sudhir Raman Ahuja, Jingdong Chen, Yiteng Arden Huang, Dong Liu, Qiru Zhou
03 Mar 2008
TL;DR: In this paper, a method and apparatus for active speaker selection in teleconferencing applications illustratively comprises a microphone array module, a speaker recognition system, a user interface, and a speech signal selection module.
Abstract: A method and apparatus for performing active speaker selection in teleconferencing applications illustratively comprises a microphone array module, a speaker recognition system, a user interface, and a speech signal selection module. The microphone array module separates the speech signal from each active speaker from those of other active speakers, providing a plurality of individual speaker's speech signals. The speaker recognition system identifies each currently active speaker using conventional speaker recognition/identification techniques. These identities are then transmitted to a remote teleconferencing location for display to remote participants via a user interface. The remote participants may then select one of the identified speakers, and the speech signal selection module then selects for transmission the speech signal associated with the selected identified speaker, thereby enabling the participants at the remote location to listen to the selected speaker and neglect the speech from other active speakers.

Book ChapterDOI
01 Jan 2008
TL;DR: Focusing on a two-stage framework for speech source localization, the state-of-the-art time delay estimation (TDE) and source localization algorithms are surveyed and analyzed.
Abstract: A fundamental requirement of microphone arrays is the capability of instantaneously locating and continuously tracking a speech sound source. The problem is challenging in practice due to the fact that speech is a nonstationary random process with a wideband spectrum, and because of the simultaneous presence of noise, room reverberation, and other interfering speech sources. This Chapter presents an overview of the research and development on this technology in the last three decades. Focusing on a two-stage framework for speech source localization, we survey and analyze the state-of-the-art time delay estimation (TDE) and source localization algorithms.
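A staple of the first stage of this two-stage framework is GCC-PHAT time delay estimation: the cross-power spectrum is whitened to unit magnitude so that only phase (i.e. delay) information drives the correlation peak, which makes it resilient to reverberation. A minimal sketch with illustrative names:

```python
import numpy as np

def gcc_phat(sig, ref, fs, max_tau=None):
    """Time-delay estimate between two microphone signals via GCC-PHAT."""
    n = sig.shape[0] + ref.shape[0]            # zero-pad to avoid wrap-around
    X = np.fft.rfft(sig, n=n) * np.conj(np.fft.rfft(ref, n=n))
    X /= np.maximum(np.abs(X), 1e-12)          # PHAT weighting: keep phase only
    cc = np.fft.irfft(X, n=n)
    max_shift = n // 2 if max_tau is None else int(fs * max_tau)
    # Rearrange so negative lags precede positive ones, zero lag centered.
    cc = np.concatenate((cc[-max_shift:], cc[: max_shift + 1]))
    shift = np.argmax(np.abs(cc)) - max_shift
    return shift / fs                          # delay of `sig` relative to `ref`
```

The delay, converted to a path-length difference, feeds the geometric second stage (hyperbolic intersection, linear least squares, or SRP search).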

Patent
Ross Cutler
27 Jun 2008
TL;DR: Infrared detection of satellite microphones, estimation of the distance between satellite microphones and the base unit from captured audio, and estimation of satellite microphone orientation from captured audio are combined to enhance sound source localization and active speaker detection accuracy.
Abstract: Speakers are identified based on sound origination detection through use of infrared detection of satellite microphones, estimation of distance between satellite microphones and base unit utilizing captured audio, and/or estimation of satellite microphone orientation utilizing captured audio. Multiple sound source localization results are combined to enhance sound source localization and/or active speaker detection accuracy.

Patent
27 Jun 2008
TL;DR: In this article, the authors describe a position and vent microphones array, which includes at least two physical microphones to receive acoustic signals, and make use of a common rear vent (actual or virtual) that samples a common pressure source.
Abstract: Microphone arrays (MAs) are described that position and vent microphones so that performance of a noise suppression system coupled to the microphone array is enhanced. The MA includes at least two physical microphones to receive acoustic signals. The physical microphones make use of a common rear vent (actual or virtual) that samples a common pressure source. The MA includes a physical directional microphone configuration and a virtual directional microphone configuration. By making the input to the rear vents of the microphones (actual or virtual) as similar as possible, the real-world filter to be modeled becomes much simpler to model using an adaptive filter.

Journal ArticleDOI
TL;DR: A technique is described to image the vector intensity in the near field of a spherical array of microphones flush mounted in a rigid sphere, where the spatially measured pressure is decomposed into Fourier harmonics in order to reconstruct the volumetric vector intensity outside the sphere.
Abstract: An approach is presented that provides a prediction of the vector intensity field throughout a volume exterior to a rigid spherical measurement array consisting of 31 flush mounted microphones. The theory is based on spherical harmonic expansions of the measured field with the radial variation of the near-field pressure obtained using the Green's function with vanishing normal derivative at the rigid sphere surface. Experimental results with rigid spherical arrays of differing radii are presented using multiple incoherent sources. Successful intensity reconstructions are obtained over a volume three times the sphere radius up to a frequency of 1.5 kHz that clearly reveal the locations and levels of the two sources. This volumetric intensity probe is very similar mathematically to one described recently by the author (EGW) that used 50 microphones in an open array. The latter was used successfully inside an aircraft cabin in flight to uncover sources of noise. This work was supported by the US Office of Naval Research and Nittobo Acoustic Engineering Co. Ltd.

Journal ArticleDOI
TL;DR: This paper introduces a measurement technique aimed at reducing or possibly eliminating the spatial aliasing problem in beamforming; the approach is theoretically and numerically investigated by means of simple sound propagation models, proving its efficiency in reducing spatial aliasing.
Abstract: This paper introduces a measurement technique aimed at reducing or possibly eliminating the spatial aliasing problem in the beamforming technique. Beamforming's main disadvantages are poor spatial resolution at low frequency and spatial aliasing at higher frequency, the latter leading to the identification of false sources. The idea is to move the microphone array during the measurement operation. In this paper, the proposed approach is theoretically and numerically investigated by means of simple sound propagation models, proving its efficiency in reducing the spatial aliasing. A number of different array configurations are numerically investigated together with the most important parameters governing this measurement technique. A set of numerical results concerning the case of a planar rotating array is shown, together with a first experimental validation of the method.
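The aliasing being attacked here sets in once the inter-microphone spacing exceeds half a wavelength, so the classic static-array limit is easy to compute (plain helpers; names illustrative):

```python
def spatial_alias_limit(spacing_m, c=343.0):
    """Highest frequency a uniformly spaced microphone array can image
    without spatial aliasing: the half-wavelength rule f_max = c / (2 d)."""
    return c / (2.0 * spacing_m)

def max_spacing(f_max_hz, c=343.0):
    """Largest inter-microphone spacing that avoids aliasing up to f_max."""
    return c / (2.0 * f_max_hz)
```

For example, 10 cm spacing aliases above roughly 1.7 kHz; moving the array during acquisition, as proposed here, effectively synthesizes a denser sampling than the physical spacing allows.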

Patent
04 Jun 2008
TL;DR: A method and system for eliminating noise in a far-field microphone array, in which the signals collected from two microphones are beamformed to obtain target-speech-enhanced and target-speech-attenuated signals that drive an adaptive filter.
Abstract: The invention discloses a noise elimination method and system for a far-field microphone array. The signals collected from two microphones are first beamformed, yielding a target-speech-enhanced signal and a target-speech-attenuated signal. Whether target speech is present in the two microphone signals is then detected, and the update of the adaptive filter coefficients is controlled based on the detection result. Finally, the enhanced and attenuated signals are processed by the adaptive filter with the controlled coefficients. The invention greatly improves noise cancellation performance without degrading the quality of the target speech, even when the number of microphones is not increased.
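The adaptive-filter stage described here is, generically, a normalized LMS canceller whose coefficient updates can be gated by a speech detector. This is a minimal sketch of that generic structure, not the patented algorithm; the gating mask and all names are illustrative.

```python
import numpy as np

def nlms_cancel(primary, reference, order=16, mu=0.5, eps=1e-8, adapt=None):
    """Normalized LMS noise canceller (sketch).

    primary:   beamformer output containing speech plus residual noise
    reference: noise reference (speech-attenuated beam)
    adapt:     optional boolean mask per sample; False freezes the filter
               coefficients (e.g. when a detector flags target speech).
    Returns the error signal: primary with the filtered reference removed.
    """
    w = np.zeros(order)
    buf = np.zeros(order)
    out = np.empty_like(primary)
    for i in range(len(primary)):
        buf = np.roll(buf, 1)
        buf[0] = reference[i]
        e = primary[i] - w @ buf                   # subtract noise estimate
        out[i] = e
        if adapt is None or adapt[i]:
            w += mu * e * buf / (buf @ buf + eps)  # NLMS coefficient update
    return out
```

Freezing adaptation during speech is what keeps the filter from learning to cancel the target signal itself, which is the role of the detection step in the abstract.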

Journal ArticleDOI
TL;DR: The results indicate that SHB almost entirely restored the loudness (or annoyance) of the target sounds to unmasked levels, even when presented with background noise, and thus may be a useful tool to psychoacoustically analyze composite sources.
Abstract: The potential of spherical-harmonics beamforming (SHB) techniques for the auralization of target sound sources in a background noise was investigated and contrasted with traditional head-related transfer function (HRTF)-based binaural synthesis. A scaling of SHB was theoretically derived to estimate the free-field pressure at the center of a spherical microphone array and verified by comparing simulated frequency response functions with directly measured ones. The results show that there is good agreement in the frequency range of interest. A listening experiment was conducted to evaluate the auralization method subjectively. A set of ten environmental and product sounds were processed for headphone presentation in three different ways: (1) binaural synthesis using dummy head measurements, (2) the same with background noise, and (3) SHB of the noisy condition in combination with binaural synthesis. Two levels of background noise (62, 72dB SPL) were used and two independent groups of subjects (N=14) evalua...

Proceedings ArticleDOI
12 May 2008
TL;DR: An augmented circular microphone array that gives the designer some control of the vertical spatial response of the array; a second-order system was built and measured.
Abstract: With the proliferation of inexpensive digital signal processors and high-quality audio codecs, microphone arrays and associated signal processing algorithms are becoming more attractive as a solution to improve audio communication quality. For room audio conferencing, a circular array using modal beamforming is potentially attractive since it allows a single or multiple beams to be steered to any angle in the plane of the array while maintaining a desired beampattern. One potential problem with circular arrays is that they do not allow the designer to have control of the spatial response of the array in directions that are normal to the array. In this paper we propose an augmented circular microphone array that allows one to have some control of the vertical spatial response of the array. A second-order system was built and measured.

Proceedings ArticleDOI
06 May 2008
TL;DR: This work shows that a straightforward direction estimator is biased, proposes an unbiased estimator, and derives the theoretical limits for unique direction estimation, illustrating the results by means of simulations and measurements.
Abstract: Modern home entertainment systems offer surround sound audio playback. This progress over known mono and stereo devices is also intended for high quality hands-free telephony to enhance intelligibility of speech in group conversation. Directional Audio Coding (DirAC) provides an efficient and well-established way to record and encode spatial sound and to render it at an arbitrary loudspeaker setup. On the recording site, DirAC is based on B-format microphone signals. These signals can be obtained by one omnidirectional and three figure-of-eight microphones pointing along the axes of a three-dimensional Cartesian coordinate system. However, a grid of omnidirectional microphones is more appropriate for consumer applications due to economic reasons. Arrays can provide the required figure-of-eight directionality only for a certain frequency range. However, in this contribution we show that a straightforward direction estimator is biased. After formulating the bias analytically we propose an unbiased estimator and derive the theoretical limits for unique direction estimation. The results are illustrated by means of simulations and measurements.
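The "straightforward direction estimator" at issue is, in DirAC, the azimuth of the time-averaged active intensity computed from the B-format channels. A sketch of that ideal estimator follows (the paper's point is that approximating the figure-of-eight channels with a grid of omnidirectional microphones biases it); names are illustrative.

```python
import numpy as np

def bformat_azimuth(w, x, y):
    """Azimuth estimate from B-format signals: the direction of the
    time-averaged active intensity.  w is the omnidirectional channel;
    x and y are the figure-of-eight channels along the Cartesian axes."""
    ix = np.mean(w * x)          # x-component of time-averaged intensity
    iy = np.mean(w * y)          # y-component
    return np.arctan2(iy, ix)    # direction of arrival, radians
```

For an ideal plane wave from azimuth θ the figure-of-eight channels are cos(θ) and sin(θ) copies of the omni channel, and the estimator returns θ exactly; the bias appears only once the array-derived channels deviate from true dipole responses.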

Patent
08 Apr 2008
TL;DR: A method for extracting a target sound from mixed sound: a mixed signal is received through a microphone array; a first signal whose directivity is emphasized toward the target sound source and a second signal whose directivity toward the target source is suppressed are generated; and the target sound signal is extracted from the first signal by masking the interference components it contains, based on the ratio of the first signal to the second.
Abstract: A method, medium, and apparatus for extracting a target sound from mixed sound. The method includes receiving a mixed signal through a microphone array, generating a first signal whose directivity is emphasized toward a target sound source and a second signal whose directivity toward the target sound source is suppressed based on the mixed signal, and extracting a target sound signal from the first signal by masking an interference sound signal, which is contained in the first signal, based on a ratio of the first signal to the second signal. Therefore, a target sound signal can be clearly separated from a mixed sound signal which contains a plurality of sound signals and is input to a microphone array.
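The extraction step can be sketched as a ratio mask in the spectrogram domain: keep time-frequency bins where the target-emphasized channel dominates the target-suppressed one. The binary threshold and all names below are illustrative assumptions, not the patent's parameters.

```python
import numpy as np

def ratio_mask(target_emph, target_supp, floor=0.0, threshold=1.0):
    """Mask a target-emphasized spectrogram by its per-bin magnitude ratio
    to a target-suppressed one: bins where the emphasized channel dominates
    (ratio > threshold) are kept, the rest are scaled down to `floor`."""
    ratio = np.abs(target_emph) / (np.abs(target_supp) + 1e-12)
    mask = np.where(ratio > threshold, 1.0, floor)
    return mask * target_emph
```

In bins dominated by the target, the first (emphasized) beam is strong and the second (suppressed) beam is weak, so the ratio cleanly separates target bins from interference bins.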

Proceedings ArticleDOI
05 May 2008
TL;DR: In this paper, a rotating near-field microphone array is used to measure multi-point pressure statistics on a conical surface surrounding the jet plume, just outside the turbulent shear layer.
Abstract: A novel rotating near-field microphone array is used to measure multi-point pressure statistics on a conical surface surrounding the jet plume, just outside the turbulent shear layer. The microphone array extends axially to 10 jet diameters, with a maximum radial distance of 2 diameters from the jet centerline. A Green's function based method is used to project the near-field pressure to the acoustic field. The diagnostic method is an extension of the approach previously described by Reba et al., relying on measurement of equivalent noise sources described by second order statistics of a scalar quantity (pressure), rather than fourth-order statistics of a vector quantity over a volume as required by approaches based on the Lighthill Acoustic Analogy. Although the source description adopted here is less fundamental than that of Lighthill, it can be measured experimentally with relative ease. The diagnostic method is applied to an Mj = 1.5 ideally expanded jet over a range of temperature ratios, with acoustic Mach numbers from 1 to 2. It is shown that the near-field pressure statistics are well-represented by a Gaussian wave-packet model. Model parameters include convection speed, spatial source extent, and streamwise correlation scale. The acoustic field re-constructed from the near-field data is compared to direct far-field measurements and shown to give very good agreement.

Proceedings ArticleDOI
12 May 2008
TL;DR: It is shown that given a source in the nearfield, significant attenuation of farfield interference is achieved and nearfield sources may be attenuated relative to farfield sources.
Abstract: A nearfield spherical microphone array is presented. The nearfield criterion of the spherical array is defined in terms of array order, frequency and location. It is shown that given a source in the nearfield, significant attenuation of farfield interference is achieved. Also, nearfield sources may be attenuated relative to farfield sources. Dereverberation of a nearfield source in a reverberant enclosure is demonstrated using the nearfield microphone array.
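The physical basis of such a nearfield criterion can be illustrated with spherical Hankel functions: the strength with which a point source at distance r excites spherical-harmonic order n scales with |h_n(kr)|, which grows steeply as kr falls below n, so a nearfield source excites high orders far more strongly than a farfield one. This is a generic spherical-array property, not the paper's algorithm; the recurrence-based h_n below is used to avoid external dependencies.

```python
import numpy as np

def spherical_hankel1(n, x):
    """Spherical Hankel function of the first kind, h_n^(1)(x), via the
    upward recurrence h_{m+1} = (2m+1)/x * h_m - h_{m-1} (stable for h)."""
    h0 = -1j * np.exp(1j * x) / x                 # h_0^(1)(x)
    if n == 0:
        return h0
    h1 = -(x + 1j) * np.exp(1j * x) / x ** 2      # h_1^(1)(x)
    for m in range(1, n):
        h0, h1 = h1, (2 * m + 1) / x * h1 - h0
    return h1

def mode_strength_ratio(n, k, r):
    """|h_n(kr)| / |h_0(kr)|: excitation of order n relative to order 0
    by a point source at distance r from the array center (wavenumber k)."""
    x = k * r
    return abs(spherical_hankel1(n, x)) / abs(spherical_hankel1(0, x))
```

For small kr, |h_n(kr)| behaves like (2n-1)!!/(kr)^(n+1), so the ratio is large in the nearfield and tends to 1 in the farfield, which is what lets a nearfield-focused beamformer separate near from far sources.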

Patent
13 Mar 2008
TL;DR: In this article, a method and an apparatus for acquiring multi-channel sound by using a microphone array are presented; the method estimates the positions of sound sources corresponding to sound source signals, which are mixed together, from the signals input via a microphone array, and generates a multi-channel sound signal by compensating the sound source signals based on differences between the estimated source positions and the position of a virtual microphone array substituting for the real microphone array.
Abstract: Provided are a method and an apparatus for acquiring multi-channel sound by using a microphone array. The method estimates the positions of sound sources corresponding to sound source signals, which are mixed together, from the signals input via a microphone array, and generates a multi-channel sound signal by compensating the sound source signals based on differences between the estimated positions of the sound sources and the position of a virtual microphone array substituting for the real microphone array. In this way, multi-channel sound with a stereoscopic effect can be acquired from a plurality of distant sound sources whose signals are input via the microphone array of a portable sound acquisition device.
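The position-compensation step can be sketched under a free-field assumption: each channel is re-rendered from its real microphone position to the corresponding virtual position by applying the propagation-delay and 1/r amplitude differences. This is an illustrative interpretation, not the patent's exact procedure; the geometry inputs and the frequency-domain fractional delay are assumptions.

```python
import numpy as np

C = 343.0  # assumed speed of sound, m/s

def compensate_to_virtual_array(signals, fs, src_pos, real_mics, virtual_mics):
    """Re-render signals captured at `real_mics` as if captured at
    `virtual_mics`, given one estimated source position (free field).

    signals: array of shape (channels, samples); positions are 3-vectors.
    """
    n = signals.shape[1]
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    out = []
    for sig, rm, vm in zip(signals, real_mics, virtual_mics):
        r_real = np.linalg.norm(src_pos - rm)
        r_virt = np.linalg.norm(src_pos - vm)
        extra_delay = (r_virt - r_real) / C        # extra travel time [s]
        gain = r_real / r_virt                     # 1/r spherical spreading
        # Fractional delay as a linear phase in the frequency domain
        S = np.fft.rfft(sig) * gain * np.exp(-2j * np.pi * freqs * extra_delay)
        out.append(np.fft.irfft(S, n=n))
    return np.array(out)
```

With multiple sources, the patent's method would first need to separate or localize them; this sketch only shows the per-channel geometric compensation for a single estimated source.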

Proceedings ArticleDOI
05 May 2008
TL;DR: In this paper, an actuator ring with 30 loudspeakers in the bypass duct was used to synthesize tonal in-duct sound fields with distinctive modal content, which were detected with 60 microphones in a rotatable bypass duct section.
Abstract: Model scale tests to investigate the sound propagation through the engine nozzle system and the jet shear layers were carried out. The present paper focuses on fan noise radiation from the bypass nozzle into the far-field. For the synthesis of representative tonal in-duct sound fields with distinctive modal content, an actuator ring carrying 30 loudspeakers was operated in the bypass duct. Radial mode analysis was performed to assess the quality of the mode synthesis and to deliver data for the validation of numerical methods. 60 microphones were employed in a rotatable bypass duct section to detect the in-duct sound field at 2400 positions. Tests were conducted without and with flow, in both the clean nozzle configuration and the configuration with an installed pylon. The azimuthal and polar structure of the radiated far-field was examined with the help of a large circular microphone array equipped with 80 microphones. Polar radiation angles between 25° and 115° could be covered by traversing the array along the jet axis in the anechoic chamber. Azimuthal mode analysis was performed on the far-field ring to explore in more detail the impact of the mean flow on the sound propagation and to assess installation effects. The influence of signal-to-noise ratio, microphone mal-positioning, and microphone failure on the quality of the far-field mode decomposition is discussed. Further covered in this paper are experiments with loudspeakers placed parallel to the jet axis outside the nozzle, carried out to investigate the acoustic shielding of sound waves by the coaxial core and bypass jets, which is of interest with regard to reflection effects at the lower wing surface in engine under-wing installations.
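Azimuthal mode analysis on a ring of equally spaced microphones, such as the 80-microphone far-field array described above, reduces to a spatial DFT over the microphone angles. A minimal sketch (complex pressure amplitudes at one frequency are assumed as input; this is the generic decomposition, not the paper's full processing chain):

```python
import numpy as np

def azimuthal_modes(pressures):
    """Decompose complex pressure amplitudes measured on a ring of M
    equally spaced microphones (angles phi_k = 2*pi*k/M) into azimuthal
    Fourier modes a_m with p(phi) = sum_m a_m * exp(1j*m*phi).

    Modes are resolvable for |m| < M/2 (spatial Nyquist limit).
    Returns (mode_orders, coefficients) in FFT ordering.
    """
    M = len(pressures)
    # The DFT picks out exp(+1j*m*phi) components; normalize by M
    a = np.fft.fft(pressures) / M
    modes = np.fft.fftfreq(M, d=1.0 / M).astype(int)  # 0, 1, ..., -1
    return modes, a
```

With M = 80 microphones, modes up to |m| = 39 are unambiguous; the abstract's concerns (microphone mal-positioning and failure) matter because they break the uniform sampling this DFT relies on.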