
Showing papers on "Microphone array published in 2001"


Book
01 Jan 2001
TL;DR: This book collects chapters on microphone array signal processing, covering speech enhancement (beamforming, post-filtering, multi-microphone optimal filtering), source localization, and applications such as hearing aids, acoustic echo cancellation, speech input in automobiles, speech recognition, and blind source separation, and closes with open problems and future directions.
Abstract: I. Speech Enhancement.- 1 Constant Directivity Beamforming.- 2 Superdirective Microphone Arrays.- 3 Post-Filtering Techniques.- 4 Spatial Coherence Functions for Differential Microphones in Isotropic Noise Fields.- 5 Robust Adaptive Beamforming.- 6 GSVD-Based Optimal Filtering for Multi-Microphone Speech Enhancement.- 7 Explicit Speech Modeling for Microphone Array Speech Acquisition.- II. Source Localization.- 8 Robust Localization in Reverberant Rooms.- 9 Multi-Source Localization Strategies.- 10 Joint Audio-Video Signal Processing for Object Localization and Tracking.- III. Applications.- 11 Microphone-Array Hearing Aids.- 12 Small Microphone Arrays with Postfilters for Noise and Acoustic Echo Reduction.- 13 Acoustic Echo Cancellation for Beamforming Microphone Arrays.- 14 Optimal and Adaptive Microphone Arrays for Speech Input in Automobiles.- 15 Speech Recognition with Microphone Arrays.- 16 Blind Separation of Acoustic Signals.- IV. Open Problems and Future Directions.- 17 Future Directions for Microphone Arrays.- 18 Future Directions in Microphone Array Processing.

1,309 citations


Book ChapterDOI
01 Jan 2001
TL;DR: A theoretical analysis shows that Wiener post-filtering of the output of an optimum distortionless beamformer provides a minimum mean squared error solution.
Abstract: In the context of microphone arrays, the term post-filtering denotes the post-processing of the array output by a single-channel noise suppression filter. A theoretical analysis shows that Wiener post-filtering of the output of an optimum distortionless beamformer provides a minimum mean squared error solution. We examine published methods for post-filter estimation and develop a new algorithm. A simulation system is presented to compare the performance of the discussed algorithms.
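As a rough illustration of the post-filtering idea (not the chapter's specific estimator), the sketch below applies a per-bin Wiener gain to a beamformer output STFT; the function name and the assumption that speech and noise power spectra are already available are mine.

```python
import numpy as np

def wiener_postfilter(beam_stft, phi_ss, phi_nn, gain_floor=1e-2):
    """Apply a per-bin Wiener gain to a beamformer output STFT.

    beam_stft      : (frames, bins) complex STFT of the beamformer output
    phi_ss, phi_nn : (frames, bins) estimated speech and noise power spectra
    gain_floor     : lower limit on the gain to reduce musical noise
    """
    gain = phi_ss / (phi_ss + phi_nn + 1e-12)   # H = S / (S + N)
    return np.maximum(gain, gain_floor) * beam_stft

# toy usage with random spectra, just to show the shapes involved
rng = np.random.default_rng(0)
frames, bins = 50, 257
y = rng.standard_normal((frames, bins)) + 1j * rng.standard_normal((frames, bins))
phi_ss = rng.random((frames, bins))
phi_nn = rng.random((frames, bins))
enhanced = wiener_postfilter(y, phi_ss, phi_nn)
```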

237 citations


Book ChapterDOI
01 Jan 2001
TL;DR: In this chapter, the authors present an overview of superdirective beamformers, which can be derived by applying the minimum variance distortionless response (MVDR) principle to theoretically well-defined noise fields, such as the diffuse noise field.
Abstract: This chapter gives an overview of so-called superdirective beamformers, which can be derived by applying the minimum variance distortionless response (MVDR) principle to theoretically well-defined noise fields, as for example the diffuse noise field. We show that all relevant performance measures for beamformer designs are functions of the coherence matrix of the noise field. Additionally, we present unconstrained and constrained MVDR-solutions using modified coherence functions. Solutions for different choices of the optimization criterion are given including a new solution to optimize the front-to-back ratio. Finally, we present a comparison of superdirective beamformers to gradient microphones and an alternative generalized sidelobe canceler (GSC) implementation of the superdirective beamformer.
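A minimal sketch of the superdirective (MVDR against diffuse noise) design, assuming far-field propagation and using diagonal loading as the robustness constraint; the function name and parameter values are illustrative, not taken from the chapter.

```python
import numpy as np

def superdirective_weights(mic_pos, look_dir, freq, c=343.0, loading=1e-2):
    """MVDR weights against a spherically diffuse noise field (superdirective design).

    mic_pos  : (M, 3) microphone positions in metres
    look_dir : (3,) unit vector pointing toward the far-field source
    loading  : diagonal loading; larger values trade directivity for white noise gain
    """
    k = 2.0 * np.pi * freq / c
    d = np.exp(-1j * k * (mic_pos @ look_dir))               # far-field steering vector
    dist = np.linalg.norm(mic_pos[:, None, :] - mic_pos[None, :, :], axis=-1)
    gamma = np.sinc(2.0 * freq * dist / c)                   # diffuse-field coherence matrix
    w = np.linalg.solve(gamma + loading * np.eye(len(mic_pos)), d)
    return w / (d.conj() @ w)                                # distortionless toward look_dir

# 4-element line array, 4 cm spacing, endfire look direction, design frequency 1 kHz
mics = np.column_stack([0.04 * np.arange(4), np.zeros(4), np.zeros(4)])
w = superdirective_weights(mics, np.array([1.0, 0.0, 0.0]), freq=1000.0)
```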

200 citations


PatentDOI
Naoshi Matsuo
TL;DR: In this article, a filter coefficient calculator is used to calculate the filter coefficients of the filters in accordance with an evaluation function based on the residual signal, which is obtained by subtracting filtered output signals of the microphones other than the reference from a filtered output signal of the reference microphone.
Abstract: A microphone array apparatus includes a microphone array including microphones, one of the microphones being a reference microphone, filters receiving output signals of the microphones, and a filter coefficient calculator which receives the output signals of the microphones, a noise signal, and a residual signal obtained by subtracting filtered output signals of the microphones other than the reference microphone from a filtered output signal of the reference microphone, and which obtains filter coefficients of the filters in accordance with an evaluation function based on the residual signal.

156 citations


Journal ArticleDOI
TL;DR: In this paper, the concept of phase modes is discussed to generate a desired beam pattern for a circular microphone array mounted around a rigid sphere, where the sound diffraction caused by the sphere is taken into account.
Abstract: This paper will discuss the concept of phase modes to generate a desired beam pattern for a circular microphone array mounted around a rigid sphere. The method will be described for arrays consisting of omnidirectional and dipole sensors. The sound diffraction caused by the sphere is taken into account. It will be seen that the method allows, with some restrictions, the design of a wide variety of broadband beam patterns for a given elevation which usually will be the plane of the array. The directivity index is used to characterize the three-dimensional behavior of the array. Simulations show the realization of different beam patterns, based on a 16-element circular array located at the equator of a sphere with radius 0.085 m. The frequency range of this array is from 300 Hz to 5 kHz. Especially at low frequencies, a very good combination of the directivity index and the white noise gain is achieved which cannot be realized with “conventional” beamforming for an array of similar dimensions. The simulations are verified by means of a measurement.
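A simplified sketch of phase-mode beamforming for an open circular array in the free field; the rigid-sphere diffraction terms used in the paper are omitted and replaced by the free-field mode strengths J_n(kr), so this is only indicative of the approach.

```python
import numpy as np
from scipy.special import jv   # Bessel function of the first kind

def phase_mode_weights(n_mics, radius, freq, steer_az, max_order, c=343.0):
    """Phase-mode beamforming weights for an open circular array in the free field.

    steer_az is the steering azimuth in radians. The beamformer output is
    y = sum_m w[m] * p[m], where p[m] is the signal of the m-th microphone.
    """
    k = 2.0 * np.pi * freq / c
    phi = 2.0 * np.pi * np.arange(n_mics) / n_mics          # microphone angles on the circle
    w = np.zeros(n_mics, dtype=complex)
    for n in range(-max_order, max_order + 1):
        b_n = (1j ** n) * jv(n, k * radius)                 # free-field mode strength
        if abs(b_n) < 1e-6:
            continue                                        # skip modes near a Bessel zero
        w += np.exp(1j * n * (phi - steer_az)) / (n_mics * b_n)
    return w

# 16-element circle of radius 0.085 m, beam steered to 0 rad azimuth, at 1 kHz
w = phase_mode_weights(16, 0.085, 1000.0, steer_az=0.0, max_order=4)
```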

140 citations


Patent
09 Jan 2001
TL;DR: A telephone system includes two or more cardioid microphones held together and directed outwardly from a central point; control circuitry combines and analyzes the microphone signals and selects either one microphone signal or one of several predetermined combinations of microphone signals in order to track a speaker as the speaker moves about a room, or as various speakers around the room speak and then fall silent.
Abstract: A telephone system includes two or more cardioid microphones held together and directed outwardly from a central point. Mixing circuitry and control circuitry combine and analyze signals from the microphones and select either the signal from one of the microphones or one of several predetermined combinations of microphone signals in order to track a speaker as the speaker moves about a room, or as various speakers situated about the room speak and then fall silent. Visual indicators, in the form of light emitting diodes (LEDs), are evenly spaced around the perimeter of a circle concentric with the microphone array. Mixing circuitry produces ten combination signals, A+B, A+C, B+C, A+B+C, A-B, B-C, A-C, A-0.5(B+C), B-0.5(A+C), and C-0.5(B+A), with the "listening beam" formed by combinations that involve the subtraction of signals, such as A-0.5(B+C), generally being more narrowly directed than beams formed by combinations that involve only the addition of signals, such as A+B. An omnidirectional combination A+B+C is employed when active speakers are widely scattered throughout the room. Weighting factors are employed in a known manner to provide unity gain output. Control circuitry selects the signal from the microphone or from one of the predetermined microphone combinations, based generally on the energy level of the signal, and employs the selected signal as the output signal. The control circuitry also operates to limit dithering between microphones and, by analyzing the beam selection pattern, may switch to a broader coverage pattern rather than switching between two narrower beams that each cover one of the speakers.
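A toy sketch of the combination-and-select idea, forming the ten combination signals named in the abstract and picking the one with the highest short-term energy; the unity-gain weighting factors and the dithering control described in the patent are omitted.

```python
import numpy as np

def select_beam(a, b, c):
    """Form the ten combination signals and pick the one with the highest energy.

    a, b, c are time-aligned sample blocks from the three cardioid capsules.
    """
    beams = {
        "A+B": a + b, "A+C": a + c, "B+C": b + c, "A+B+C": a + b + c,
        "A-B": a - b, "B-C": b - c, "A-C": a - c,
        "A-0.5(B+C)": a - 0.5 * (b + c),
        "B-0.5(A+C)": b - 0.5 * (a + c),
        "C-0.5(B+A)": c - 0.5 * (b + a),
    }
    name, signal = max(beams.items(), key=lambda kv: np.mean(kv[1] ** 2))
    return name, signal

rng = np.random.default_rng(1)
a, b, c = rng.standard_normal((3, 1024))
print(select_beam(a, b, c)[0])
```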

104 citations


Patent
26 Jan 2001
TL;DR: A sound-receiving-signal estimation section treats the sound wave from a source reaching two microphones as a plane wave, expresses the estimated signal at a position on the straight line joining the two microphones by a wave equation, estimates a coefficient b·cosθ that depends on the incoming direction of the sound wave by assuming the mean powers of the waves arriving at the two microphones are equal, and then estimates the sound receiving signal at an arbitrary position on that axis from the two received signals.
Abstract: PROBLEM TO BE SOLVED: To estimate the sound receiving signal at an arbitrary position on the axis defined by two microphones, using only the signals received at each microphone. SOLUTION: A sound-receiving-signal estimation section assumes that the sound wave from the source reaching the two microphones is a plane wave and expresses the estimated sound receiving signal at a position on the straight line joining the two microphones by the wave equation given in the Expression. It estimates the coefficient b·cosθ, which depends on the incoming direction of the sound wave, by assuming that the mean powers of the sound waves arriving at the two microphones are equal, and then estimates the sound receiving signal at an arbitrary position on the microphone axis from the signals received at the two microphones. In the Expression, x, y, and z are the spatial axes, t is time, v is the particle velocity of the air, p is the sound pressure, a and b are coefficients, and θ indicates the direction of the sound source. Thus the sound receiving signal at an arbitrary position on the axis can be estimated.

91 citations


Journal ArticleDOI
TL;DR: Test results show that the integrated vision and sound localization (IVSL) system localizes sound sources in low signal-to-noise situations significantly more accurately than the stand-alone microphone-array-based sound localization system.

81 citations


PatentDOI
TL;DR: In this paper, a method and apparatus for providing a differential microphone with a desired frequency response is described, which is provided by operation of a filter, having an adjustable frequency response coupled to the microphone.
Abstract: A method and apparatus for providing a differential microphone with a desired frequency response are disclosed. The desired frequency response is provided by operation of a filter, having an adjustable frequency response, coupled to the microphone. The frequency response of the filter is set by operation of a controller, also coupled to the microphone, based on signals received from the microphone. The desired frequency response may be determined based upon the orientation angle and the distance between the microphone and a source of sound. The frequency response of the filter may comprise the substantial inverse of the frequency response of the microphone to provide a flat response. In a preferred embodiment, the gain of the differential microphone is adjusted so that the output level is effectively independent of microphone position relative to the source. In particular embodiments, the controller may determine, based on the distance from the sound source, whether to operate the differential microphone in a nearfield mode of operation or a farfield mode of operation.

67 citations


Proceedings ArticleDOI
07 May 2001
TL;DR: A new method of blind source separation on a microphone array is described that combines subband independent component analysis (ICA) and beamforming and resolves the slow convergence of the ICA optimization.
Abstract: We describe a new method of blind source separation (BSS) on a microphone array combining subband independent component analysis (ICA) and beamforming. The proposed array system consists of the following three sections: (1) a subband-ICA-based BSS section with direction-of-arrival (DOA) estimation; (2) a null beamforming section based on the estimated DOA information; and (3) an integration of (1) and (2) based on algorithm diversity. Using this technique, we can resolve the slow convergence of the optimization in ICA. The results of the signal separation experiments reveal that a noise reduction rate (NRR) of about 18 dB is obtained under the nonreverberant condition, and NRRs of 8 dB and 6 dB are obtained when the reverberation times are 150 ms and 300 ms, respectively. These performances are superior to those of both simple ICA-based BSS and the simple beamforming method.

65 citations


PatentDOI
John McCaskill
TL;DR: An acoustic tracking system uses an array of microphones to determine the location of an acoustical source: beamforming parameters are determined for each potential source location, beams are formed from the received sound, and the most likely location is determined by comparing the data from each beam.
Abstract: An acoustic tracking system that uses an array of microphones to determine the location of an acoustical source. Several points in space are determined to be potential locations of an acoustical source. Beamforming parameters for the array of microphones are determined at each potential location. The beamforming parameters are applied to the sound received by the microphone array for all of the potential locations and data is gathered for each beam. The most likely location is then determined by comparing the data from each beam. Once the location is determined, a camera is directed toward this source.
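A minimal sketch of the candidate-point beamforming search: delay-and-sum beams are steered toward each potential location and the steered powers are compared. The integer-sample delays and the function name are simplifications of mine, not the patent's exact processing.

```python
import numpy as np

def steered_powers(frames, mic_pos, candidates, fs, c=343.0):
    """Delay-and-sum toward each candidate point and return the beam powers.

    frames     : (M, N) one block of samples per microphone
    mic_pos    : (M, 3) microphone positions in metres
    candidates : (P, 3) candidate source positions
    Integer-sample circular shifts are used for brevity; a real system would
    use fractional-delay filtering.
    """
    M, N = frames.shape
    powers = np.zeros(len(candidates))
    for p, src in enumerate(candidates):
        delays = np.linalg.norm(mic_pos - src, axis=1) / c            # propagation times
        shifts = np.round((delays - delays.min()) * fs).astype(int)   # relative delays in samples
        beam = np.zeros(N)
        for m in range(M):
            beam += np.roll(frames[m], -shifts[m])                    # advance the later arrivals
        powers[p] = np.mean(beam ** 2)
    return powers   # the candidate with the largest power is the location estimate
```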

Book ChapterDOI
01 Jan 2001
TL;DR: This chapter investigates the spatial correlation functions for Nth-order differential microphones in both spherically and cylindrically isotropic noise fields and determines signal-to-noise ratio gains from arbitrarily positioned differential microphone elements in microphone array applications.
Abstract: The spatial correlation function between directional microphones is useful in the design and analysis of the performance of these microphones in actual acoustic noise fields. These correlation functions are well known for omnidirectional receivers, but not well known for directional receivers. This chapter investigates the spatial correlation functions for Nth-order differential microphones in both spherically and cylindrically isotropic noise fields. The results are used to calculate the amount of achievable cancellation from an adaptive noise cancellation application using combinations of differential microphones to remove unwanted noise from a desired signal. The results are useful in determining signal-to-noise ratio gains from arbitrarily positioned differential microphone elements in microphone array applications.

Journal ArticleDOI
TL;DR: In this article, the authors proposed a directional acoustic receiving system, which consists of two or more microphones mounted on a housing supported on the chest of a user by a conducting loop encircling the user's neck.
Abstract: A directional acoustic receiving system takes the form of a necklace including an array of two or more microphones mounted on a housing supported on the chest of a user by a conducting loop encircling the user's neck. Signal processing electronics contained in the same housing receive and combine the microphone signals in such a manner as to provide an amplified output signal which emphasizes sounds of interest arriving in a direction forward of the user. The amplified output signal drives the supporting conducting loop to produce a representative magnetic field. An electroacoustic transducer including a magnetic field pickup coil for receiving the magnetic field is mounted in or on the user's ear and generates an acoustic signal representative of the sounds of interest. The microphone output signals are weighted (scaled) and combined to achieve desired spatial directivity responses. The weighting coefficients are determined by an optimization process. By bandpass filtering the weighted microphone signals, with a set of filters covering the audio frequency range, and summing the filtered signals, a receiving microphone array with a small aperture size is caused to have a directivity pattern that is essentially uniform over frequency in two or three dimensions. This method enables the design of highly-directive hearing instruments which are comfortable, inconspicuous, and convenient to use. The array provides the user with a dramatic improvement in speech perception over existing hearing aid designs, particularly in the presence of background noise, reverberation, and feedback.

01 Jan 2001
TL;DR: A tutorial of fundamental array processing and beamforming theory relevant to microphone array speech processing is presented; a microphone array is described as a set of multiple microphones placed at different spatial locations whose outputs can be combined to enhance or attenuate signals emanating from particular directions.
Abstract: This report presents a tutorial of fundamental array processing and beamforming theory relevant to microphone array speech processing. A microphone array consists of multiple microphones placed at different spatial locations. Built upon a knowledge of sound propagation principles, the multiple inputs can be manipulated to enhance or attenuate signals emanating from particular directions. In this way, microphone arrays provide a means of enhancing a desired signal in the presence of corrupting noise sources. Moreover, this enhancement is based purely on knowledge of the source location, and so microphone array techniques are applicable to a wide variety of noise types. Microphone arrays have great potential in practical applications of speech processing, due to their ability to provide both noise robustness and hands-free signal acquisition. This report has been extracted from my PhD thesis, and can be referenced as: I. A. McCowan, "Robust Speech Recognition using Microphone Arrays," PhD Thesis, Queensland University of Technology, Australia, 2001. For a more in-depth discussion of key microphone array processing techniques, the interested reader is referred to M. Brandstein and D. Ward (Eds.), "Microphone Arrays," Springer, 2001.
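To make the delay-and-sum idea concrete, here is a small free-standing sketch that evaluates the far-field beampattern of a uniformly weighted line array; the function name and parameter values are illustrative, not taken from the report.

```python
import numpy as np

def delay_and_sum_pattern(mic_x, freq, steer_deg, c=343.0):
    """Far-field beampattern of a uniformly weighted delay-and-sum line array.

    mic_x : (M,) microphone positions along the x-axis in metres
    Returns arrival angles (degrees) and the magnitude response toward each angle.
    """
    k = 2.0 * np.pi * freq / c
    w = np.exp(1j * k * mic_x * np.cos(np.radians(steer_deg))) / len(mic_x)
    angles = np.linspace(0.0, 180.0, 361)
    d = np.exp(1j * k * np.outer(np.cos(np.radians(angles)), mic_x))  # steering vectors
    return angles, np.abs(d @ w.conj())

# 8 microphones, 5 cm spacing, steered broadside (90 degrees), evaluated at 2 kHz
angles, pattern = delay_and_sum_pattern(0.05 * np.arange(8), freq=2000.0, steer_deg=90.0)
```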

Journal ArticleDOI
TL;DR: A new traffic sensing technique is described that uses a microphone array to detect the sound waves generated by road vehicles; the detected signals are digitized and processed by an on-site computer using a correlation-based algorithm.
Abstract: A new traffic sensing technique is described that utilizes a microphone array to detect the sound waves generated by the road vehicles. The detected signals are then digitized and processed by an on-site computer using a correlation based algorithm, which extracts key data reflecting the road traffic conditions, e.g., the speed and density of vehicles on the road, automatically on site. In comparison with existing traffic sensors, the proposed system offers lower installation and maintenance costs and is less intrusive to the surrounding built environment. The results of theoretical analysis, computer simulation, and preliminary experimental results are presented.
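A toy single-vehicle sketch of the correlation idea, assuming two microphones spaced along the road and estimating speed from the lag of the cross-correlation peak; the paper's full algorithm (multi-vehicle density estimation, array processing) is not reproduced.

```python
import numpy as np

def estimate_speed(sig_a, sig_b, mic_spacing, fs):
    """Estimate vehicle speed from two microphones spaced along the road.

    The lag that maximises the cross-correlation of the two signals approximates
    the time the vehicle needs to travel the microphone spacing.
    """
    a = sig_a - sig_a.mean()
    b = sig_b - sig_b.mean()
    corr = np.correlate(b, a, mode="full")
    lag = np.argmax(corr) - (len(a) - 1)        # positive lag: b hears the vehicle later
    transit_time = lag / fs
    return mic_spacing / transit_time if transit_time > 0 else np.nan

# toy example: the same noise burst reaches the second microphone 0.5 s later
fs, spacing = 8000, 10.0
rng = np.random.default_rng(2)
burst = rng.standard_normal(2000)
a = np.zeros(4 * fs); b = np.zeros(4 * fs)
a[fs:fs + 2000] += burst
b[int(1.5 * fs):int(1.5 * fs) + 2000] += burst
print(estimate_speed(a, b, spacing, fs))        # approximately 20 m/s
```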

Book ChapterDOI
01 Jan 2001
TL;DR: This chapter presents robust adaptive beamforming techniques designed specifically for microphone array applications; as better solutions than conventional robust beamformers, Griffiths-Jim beamformers (GJBFs) with an adaptive blocking matrix are presented.
Abstract: This chapter presents robust adaptive beamforming techniques designed specifically for microphone array applications. The basics of adaptive beamformers are first reviewed with the Griffiths-Jim beamformer (GJBF). Its robustness problems caused by steering vector errors are then discussed with some conventionally proposed robust beamformers. As better solutions to the conventional robust beamformers, GJBFs with an adaptive blocking matrix are presented in the form of a microphone array. Simulation results and real-time evaluation data show that a new robust adaptive microphone array achieves improved robustness against steering vector errors. Good sound quality of the output signal is also confirmed by a subjective evaluation.
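A minimal two-microphone Griffiths-Jim sketch, assuming a broadside target so that no steering delays are needed; the adaptive blocking matrix that provides the chapter's robustness is not included, and the function name and step size are illustrative.

```python
import numpy as np

def griffiths_jim(x, mu=0.1, eps=1e-6):
    """Minimal two-microphone Griffiths-Jim beamformer for a broadside target.

    x : (2, N) time-aligned microphone signals.
    The fixed beamformer is a simple average, the blocking matrix a difference,
    and the noise canceller a one-tap NLMS filter.
    """
    fbf = 0.5 * (x[0] + x[1])          # fixed beamformer: passes the target
    blocked = x[0] - x[1]              # blocking matrix output: target is cancelled
    w = 0.0
    out = np.zeros_like(fbf)
    for n in range(len(fbf)):
        out[n] = fbf[n] - w * blocked[n]                          # subtract noise estimate
        w += mu * out[n] * blocked[n] / (blocked[n] ** 2 + eps)   # NLMS update
    return out

rng = np.random.default_rng(3)
noise = rng.standard_normal(4000)
mics = np.stack([noise, 0.8 * noise + 0.1 * rng.standard_normal(4000)])
y = griffiths_jim(mics)
```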

Book ChapterDOI
01 Jan 2001
TL;DR: This chapter reviews constant directivity beamformers, which aim to produce a constant spatial response over a broad frequency range, and discusses implementation issues related to their use in microphone arrays.
Abstract: Beamforming, or spatial filtering, is one of the simplest methods for discriminating between different signals based on the physical location of the sources. Because speech is a very wideband signal, covering some four octaves, traditional narrowband beamforming techniques are inappropriate for hands-free speech acquisition. One class of broadband beamformers, called constant directivity beamformers, aims to produce a constant spatial response over a broad frequency range. In this chapter we review such beamformers and discuss implementation issues related to their use in microphone arrays.

Proceedings ArticleDOI
07 May 2001
TL;DR: A method for estimating the direction to a sound source, using a compact array of microphones, is presented, applicable to arbitrary microphone configurations, handles more than two microphone pairs, and has no blind spots.
Abstract: A method for estimating the direction to a sound source, using a compact array of microphones, is presented. For each pair of microphones, the signals are prefiltered and correlated. Rather than taking the peak of the correlation vectors as estimates for the time delay between the microphones, all the correlation vectors are accumulated in a common coordinate system, namely a unit hemisphere centered on the microphone array. The maximum cell in the hemisphere then indicates the azimuthal and elevation angles to the source. Unlike previous techniques, this algorithm is applicable to arbitrary microphone configurations, handles more than two microphone pairs, and has no blind spots. Experiments demonstrate significantly increased robustness to noise, compared with previous techniques.
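A rough sketch of the accumulation idea: each pair's cross-correlation is sampled at the delay implied by every cell of a hemisphere grid and summed, and the maximum cell gives the direction estimate. The prefiltering step mentioned in the abstract is omitted, and the grid resolution is an arbitrary choice of mine.

```python
import numpy as np

def hemisphere_doa(signals, mic_pos, fs, n_az=72, n_el=19, c=343.0):
    """Accumulate pairwise cross-correlations over a grid on the unit hemisphere.

    signals : (M, N) microphone signals, mic_pos : (M, 3) positions in metres.
    Returns the (azimuth, elevation) of the maximum cell in degrees.
    """
    M, N = signals.shape
    az = np.radians(np.linspace(0.0, 360.0, n_az, endpoint=False))
    el = np.radians(np.linspace(0.0, 90.0, n_el))
    AZ, EL = np.meshgrid(az, el, indexing="ij")
    dirs = np.stack([np.cos(EL) * np.cos(AZ),
                     np.cos(EL) * np.sin(AZ),
                     np.sin(EL)], axis=-1)                 # unit vectors toward candidate cells
    acc = np.zeros((n_az, n_el))
    for i in range(M):
        for j in range(i + 1, M):
            corr = np.correlate(signals[i], signals[j], mode="full")
            tdoa = dirs @ (mic_pos[j] - mic_pos[i]) / c    # expected delay per cell
            idx = np.clip(np.round(tdoa * fs).astype(int) + N - 1, 0, 2 * N - 2)
            acc += corr[idx]
    i_az, i_el = np.unravel_index(np.argmax(acc), acc.shape)
    return np.degrees(az[i_az]), np.degrees(el[i_el])
```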

PatentDOI
TL;DR: A directional microphone system is disclosed comprising circuitry for low-pass filtering a first-order signal, circuitry for high-pass filtering a second-order signal, and circuitry for summing the two filtered signals; a method of determining whether a plurality of microphones have sufficiently matched frequency response characteristics to be used in a multi-order directional microphone array is also disclosed.
Abstract: A directional microphone system is disclosed, which comprises circuitry for low pass filtering a first order signal, and circuitry for high pass filtering a second order signal. The system further comprises circuitry for summing the low pass filtered first order signal and the high pass filtered second order signal. A method of determining whether a plurality of microphones have sufficiently matched frequency response characteristics to be used in a multi-order directional microphone array is also disclosed. For a microphone array having at least three microphones, wherein one of the microphones is disposed between the other of the microphones, a method of determining the arrangement of the microphones in the array is also disclosed.
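A minimal sketch of the disclosed crossover structure, assuming the first-order and second-order differential signals have already been formed; the crossover frequency and Butterworth filters are illustrative choices of mine, not values from the patent.

```python
import numpy as np
from scipy.signal import butter, lfilter

def combine_orders(first_order, second_order, fs, crossover_hz=1500.0):
    """Low-pass the first-order signal, high-pass the second-order signal, and sum.

    first_order, second_order : already-formed differential signals (1-D arrays)
    crossover_hz              : illustrative crossover frequency
    """
    b_lp, a_lp = butter(2, crossover_hz, btype="low", fs=fs)
    b_hp, a_hp = butter(2, crossover_hz, btype="high", fs=fs)
    return lfilter(b_lp, a_lp, first_order) + lfilter(b_hp, a_hp, second_order)

rng = np.random.default_rng(4)
fs = 16000
y = combine_orders(rng.standard_normal(fs), rng.standard_normal(fs), fs)
```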

Proceedings ArticleDOI
21 Oct 2001
TL;DR: In this paper, the cross-correlation functions derived from various microphone pairs are simultaneously maximized over a set of potential delay combinations consistent with candidate source locations, which is a procedure that combines the advantages offered by the phase transform (PHAT) weighting and a more robust localization procedure without dramatically increasing computational load.
Abstract: This paper presents an alternative approach to acoustic source localization which modifies the traditional two-step localization procedure to not require explicit time-delay estimates. Instead, the cross-correlation functions derived from various microphone pairs are simultaneously maximized over a set of potential delay combinations consistent with candidate source locations. The result is a procedure that combines the advantages offered by the phase transform (PHAT) weighting (or any reasonable cross-correlation-type function) and a more robust localization procedure without dramatically increasing computational load. Simulations are performed across a range of reverberation conditions to illustrate the utility of the proposed method relative to conventional generalized cross-correlation (GCC) filtering approaches and a more modern eigenvalue-based technique.

Book ChapterDOI
01 Jan 2001
TL;DR: Two techniques, each successful on its own, are combined here to jointly achieve maximum echo cancellation in real environments: acoustic echo cancellation (AEC), which has matured for single-microphone signal acquisition, and beamforming microphone arrays, which aim at dereverberation of desired local signals and suppression of local interferers, including acoustic echoes.
Abstract: Acoustic feedback from loudspeakers to microphones constitutes a major challenge for digital signal processing in interfaces for natural, full-duplex human—machine speech interaction. Two techniques, each one successful on its own, are combined here to jointly achieve maximum echo cancellation in real environments: For one, acoustic echo cancellation (AEC), which has matured for single-microphone signal acquisition, and, secondly, beamforming microphone arrays, which aim at dereverberation of desired local signals and suppression of local interferers, including acoustic echoes. Structural analysis shows that straightforward combinations of the two techniques either multiply the considerable computational cost of AEC by the number of array microphones or sacrifice algorithmic performance if the beamforming is time-varying. Striving for increased computational efficiency without performance loss, the integration of AEC into time-varying beamforming is examined for two broad classes of beamforming structures. Finally, the combination of AEC and beamforming is discussed for multi-channel recording and multi-channel reproduction schemes.

Proceedings ArticleDOI
21 Oct 2001
TL;DR: This contribution examines the possibility of tracking the desired signal source by estimating its distance and orientation angle and applies appropriate correction filters which equalize unwanted frequency response and level deviations within a reasonable range of operation without significantly degrading the noise canceling properties of differential arrays.
Abstract: Close-talking differential microphone arrays (CTMAs) are useful in situations where the background noise level is very high because they inherently suppress farfield noise while emphasizing desired nearfield signals. One problem, however, is that the array has to be placed as close to the desired source (talker's mouth) as possible since the frequency response and level of a differential nearfield array depend heavily on its position and orientation relative to the source signal. In order to be able to utilize the advantages of CTMAs for an extended range of microphone positions, this contribution examines the possibility of tracking the desired signal source by estimating its distance and orientation angle. Having this information, appropriate correction filters can be applied adaptively which equalize unwanted frequency response and level deviations within a reasonable range of operation without significantly degrading the noise canceling properties of differential arrays. A PC-based real-time implementation with transducer calibration capabilities of this adaptive CTMA is presented.


Proceedings ArticleDOI
07 May 2001
TL;DR: Simulations show that the proposed subband beamformer outperforms the fullband beamformer when the input signals to the microphone array are coloured, and also performs better than its fullband counterpart in reverberant environments.
Abstract: A new adaptive beamformer which combines the idea of subband processing with the generalized sidelobe canceller structure is presented. The proposed subband beamformer has a blocking matrix that uses coefficient-constrained subband adaptive filters to limit target cancellation within an allowable range of direction of arrival. Simulations comparing the fullband adaptive beamformer and the subband adaptive beamformer show that the subband beamformer has better performance when the input signals to the microphone array are coloured. In reverberant environments, too, the proposed subband beamformer performs better than its fullband counterpart.

Journal Article
TL;DR: This paper proposes methods that apply 3-D microphone arrays, directional analysis of measured room responses, and visualization of data, yielding useful information about the time-frequency-direction properties of the responses.
Abstract: Room impulse responses are inherently multidimensional, including components in three coordinate directions, each one further being described as a time-frequency representation. Such 5-dimensional data is difficult to visualize and interpret. We propose methods that apply 3-D microphone arrays, directional analysis of measured room responses, and visualization of data, yielding useful information about the time-frequency-direction properties of the responses. The applicability of the methods is demonstrated with three different cases of real measurements.

INTRODUCTION
A room impulse response, measured from a source to a receiver position, is inherently multidimensional. Traditionally, the evolution of an omnidirectional sound pressure response at a single point has been studied as a function of time and frequency. However, dividing the response further into directional components can reveal much more information about the actual propagation of sound in the room, as well as about its perceptual aspects. In this paper we propose methods that are based on 3-D microphone arrays, directional analysis of the measured responses, and visualization of such data in a way that yields maximal information about the time-frequency-direction properties of the response. The measurement of directional room responses is made with a special 3-D microphone probe which basically consists of two intensity probes in each of the x-, y-, and z-coordinate directions and is constructed of small electret capsules. The responses are analyzed with either a uniform or an auditorily motivated time-frequency resolution. The analysis results in a significant amount of 5-dimensional data that is hard to visualize and interpret. Based on measured x/y/z-intensity components, intensity vectors (magnitude and direction) can be plotted in a spectrogram-like map, one vector for each time-frequency bin, illustrating the directional evolution of the field in time and frequency. Additionally, a pressure-related time-frequency spectrogram can be overlaid with the vectors, in gray levels or colors, illustrating for example a perceptually motivated spectrogram with no directional information. One such map can be used to illustrate the horizontal information and another can be added for the elevation information. This technique is part of a Matlab visualization toolbox for directional room responses developed by the authors, and it includes several other possibilities to analyze and represent room acoustical data. Traditional parameters and presentations are also available, some of them in 3-D versions, such as energy-time plots in desired directions. The paper starts with a discussion of measurements of directional room responses and sound intensity. This is followed by descriptions of the visualization method and the auditorily motivated time-frequency analysis. Finally, the applicability of the methods is demonstrated with three different cases of real measurements.

DIRECTIONAL SOUND PRESSURE COMPONENTS
Existing literature on room acoustics discusses mainly omnidirectional measurements, with the exception of some special directional parameters. Directional room responses can be measured with either directional microphones or arrays of microphones. However, an array of omnidirectional microphones has some distinct advantages compared to directional microphones. Omnidirectional capsules can be made smaller and they usually behave more like ideal transducers. Further, if the omnidirectional signals are stored at the measurement time, it is possible to afterwards create varying directivity patterns based on a single measurement. Typical directivity patterns can be formed with an array of two or more closely spaced omnidirectional microphones and some equalization to compensate for the resulting non-flat magnitude response. For example, the difference of two microphone signals gives a dipole pattern, and adding an appropriate delay to one of the signals changes the pattern to a cardioid. Okubo et al. [1] have also proposed a method that uses a product of cardioid and dipole signals to achieve a directivity pattern more suitable for some directional room acoustics measurements. Various directional sound pressure responses can be used to plot traditional impulse responses, energy-time curves, or spectrograms that give information about the directional properties of the room responses. With larger microphone arrays it is also possible to form directivity patterns with very narrow beams and thus good spatial resolution. However, groups of similar plots for several different directions are not very visual or easy to interpret. Sound intensity as a vector quantity can solve some of the visualization problems in the method we are proposing in this paper.

SOUND INTENSITY
Sound intensity [2] describes the propagation of energy in a sound field. The instantaneous intensity vector is defined as the product of the instantaneous sound pressure p(t) and particle velocity u(t): I(t) = p(t) u(t) (1). Based on the linearized fluid momentum equation, the particle velocity in the direction n can be obtained by integrating the pressure gradient along n over time and scaling by -1/ρ, where ρ is the density of air.
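A minimal numerical sketch of this p-p (two-microphone) intensity estimate along one axis; the microphone spacing, sampling rate, and nominal air density are illustrative values of mine, not the paper's.

```python
import numpy as np

def intensity_component(p1, p2, spacing, fs, rho=1.2):
    """Instantaneous intensity along one axis from a closely spaced pressure pair.

    Pressure is approximated by the mean of the two signals; particle velocity by
    the time-integrated pressure gradient divided by -rho (air density).
    """
    p = 0.5 * (p1 + p2)
    grad = (p2 - p1) / spacing
    u = -np.cumsum(grad) / (rho * fs)      # u(t) = -(1/rho) * integral of dp/dn dt
    return p * u                           # I(t) = p(t) * u(t)

rng = np.random.default_rng(5)
p1 = rng.standard_normal(4800)
p2 = np.roll(p1, 1)                        # crude delayed copy as a stand-in for a real field
I_x = intensity_component(p1, p2, spacing=0.012, fs=48000)
```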

Book ChapterDOI
01 Jan 2001
TL;DR: This chapter first discusses implementation issues and performance metrics specific to the hearing-aid application; a review of previous work on microphone-array hearing aids includes systems with directional microphones, fixed beamformers, adaptive beamformers, physiologically-motivated processing, and binaural outputs.
Abstract: Microphone-array hearing aids provide a promising solution to the problems encountered by hearing-impaired persons when listening to speech in the presence of background noise. This chapter first discusses implementation issues and performance metrics specific to the hearing-aid application. A review of previous work on microphone-array hearing aids includes systems with directional microphones, fixed beamformers, adaptive beamformers, physiologically-motivated processing, and binaural outputs. Recent simulation results of one promising adaptive beamforming system are presented. The performance of microphone-array hearing aids depends heavily on the acoustic environments in which they are used. Additional information about the level of reverberation, number of interferers, and relative levels of interferers encountered by hearing-aid users in everyday situations is required to quantify the benefit of microphone-array hearing aids and to select the optimal processing strategy.

Proceedings ArticleDOI
28 May 2001
TL;DR: In this paper, a nonintrusive technique is proposed that uses a circular microphone array outside the engine to measure the complex noise spectrum on an arc of a circle and detect the radiated modes.
Abstract: The bypass duct of an aircraft engine is a low-pass filter allowing some spinning modes to radiate outside the duct. The knowledge of the radiated modes can help in noise reduction, as well as the diagnosis of noise generation mechanisms inside the duct. We propose a nonintrusive technique using a circular microphone array outside the engine measuring the complex noise spectrum on an arc of a circle. The array is placed at various axial distances from the inlet or the exhaust of the engine. Using a model of noise radiation from the duct, an overdetermined system of linear equations is constructed for the complex amplitudes of the radial modes for a fixed circumferential mode. This system of linear equations is generally singular, indicating that the problem is ill-posed. Tikhonov regularization is employed to solve this system of equations for the unknown amplitudes of the radiated modes. An application of our mode detection technique using measured acoustic data from a circular microphone array is presented. We show that this technique can reliably detect radiated modes with the possible exception of modes very close to cut-off.
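A minimal sketch of the Tikhonov-regularized solve at the heart of such a method, assuming the radiation-model matrix and the measured arc pressures are given; the function name and the plain-constant regularization parameter are illustrative.

```python
import numpy as np

def tikhonov_solve(A, b, lam=1e-2):
    """Solve min ||A x - b||^2 + lam ||x||^2 for the radial-mode amplitudes x.

    A   : (n_mics, n_modes) complex radiation-model matrix
    b   : (n_mics,) measured complex pressures on the arc
    lam : regularization parameter; choosing it well is the delicate part in practice
    """
    n = A.shape[1]
    return np.linalg.solve(A.conj().T @ A + lam * np.eye(n), A.conj().T @ b)

rng = np.random.default_rng(6)
A = rng.standard_normal((24, 6)) + 1j * rng.standard_normal((24, 6))
b = rng.standard_normal(24) + 1j * rng.standard_normal(24)
x = tikhonov_solve(A, b)
```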

Patent
27 Jun 2001
TL;DR: In this paper, a system for discerning an audible command from ambient noise in a vehicular cabin is described, which consists of a microphone array and a signal processing system, and it can be used to detect the presence of an intruder.
Abstract: A system for discerning an audible command from ambient noise in a vehicular cabin is disclosed. The system comprises a microphone array and a signal processing system.

Proceedings ArticleDOI
07 May 2001
TL;DR: This paper describes a robust speech detection algorithm that operates reliably in a microphone array teleconferencing system and prevents the microphone-array-based speaker tracking system from being misguided by noises commonly present in a conference room.
Abstract: This paper describes a robust speech detection algorithm that can operate reliably in a microphone array teleconferencing system. High performance in a nonstationary noisy environment is achieved by combining the following techniques: (1) noise suppression by spectral subtraction, (2) silence detection by adaptive noise threshold and (3) non-stationary noise detection based on the availability of pitch signal. This algorithm can prevent the microphone-array-based speaker tracking system from being misguided by noises commonly present in a conference room. Real world experiments show that this algorithm performs very well and has the potential for practical applications.
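A rough per-frame sketch combining spectral subtraction with an adaptive noise estimate, in the spirit of techniques (1) and (2); the pitch-based non-stationary-noise check is not reproduced, and the threshold and smoothing constants are illustrative values of mine.

```python
import numpy as np

def detect_speech(frame, noise_mag, threshold_db=6.0, alpha=0.98):
    """Frame-wise speech/silence decision after spectral subtraction.

    frame     : one block of time-domain samples
    noise_mag : running noise magnitude spectrum (same length as the rfft of a frame);
                it is updated only during silence, giving an adaptive threshold.
    Returns (is_speech, updated_noise_mag).
    """
    spec = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    clean = np.maximum(spec - noise_mag, 0.0)                     # spectral subtraction
    snr_db = 10.0 * np.log10(np.sum(clean ** 2) / (np.sum(noise_mag ** 2) + 1e-12))
    is_speech = snr_db > threshold_db
    if not is_speech:
        noise_mag = alpha * noise_mag + (1.0 - alpha) * spec      # adapt noise estimate
    return is_speech, noise_mag
```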

Journal ArticleDOI
TL;DR: For single frequency feedforward control problems, the principal component algorithm is shown to be useful for reducing the computational burden and simplifying the implementation of control effort penalties in high channel count control systems.
Abstract: An in-flight evaluation of a principal component algorithm for feedforward active noise control is discussed. Cabin noise at the first three harmonics of the blade passage frequency (103 Hz) of a Raytheon-Beech 1900D twin turboprop aircraft was controlled using 21 pairs of inertial force actuators bolted to the ring frames of the aircraft; 32 microphones provided error feedback inside the aircraft cabin. In a single frequency noise control test, the blade passage frequency was reduced by 15 dB averaged across the microphone array. When controlling the first three harmonics simultaneously, reductions of 11 dB at 103 Hz, 1.5 dB at 206 Hz, and 2.8 dB at 309 Hz were obtained. For single frequency feedforward control problems, the principal component algorithm is shown to be useful for reducing the computational burden and simplifying the implementation of control effort penalties in high channel count control systems. Good agreement was found between the in-flight behavior of the controller and the predicted optimal control solution.
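A minimal single-frequency sketch of control in principal (SVD) coordinates with an effort penalty, assuming the secondary-path matrix and disturbance vector are known; the variable names, sizes, and component count are illustrative, not taken from the flight test.

```python
import numpy as np

def principal_component_control(G, d, n_components, effort_weight=1e-2):
    """Single-frequency feedforward control computed in principal (SVD) coordinates.

    G : (n_mics, n_actuators) complex secondary-path matrix at the tone frequency
    d : (n_mics,) complex primary disturbance at the error microphones
    Only the n_components strongest singular-value channels are controlled, and the
    effort weight penalizes large actuator drive.
    """
    U, s, Vh = np.linalg.svd(G, full_matrices=False)
    s = s[:n_components]
    d_pc = U[:, :n_components].conj().T @ d               # disturbance in principal coordinates
    v = -s * d_pc / (s ** 2 + effort_weight)              # independent scalar control per component
    u = Vh[:n_components].conj().T @ v                    # back to physical actuator drives
    return u, d + G @ u                                   # (drive signals, residual field)

rng = np.random.default_rng(7)
G = rng.standard_normal((32, 21)) + 1j * rng.standard_normal((32, 21))
d = rng.standard_normal(32) + 1j * rng.standard_normal(32)
u, residual = principal_component_control(G, d, n_components=8)
```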