
Showing papers on "Microphone array published in 1997"


Proceedings ArticleDOI
21 Apr 1997
TL;DR: In this paper, a first-order differential microphone array with an infinitely steerable and variable beampattern is described, which consists of 6 small pressure microphones flush-mounted on the surface of a 3/4" diameter rigid nylon sphere.
Abstract: A new first-order differential microphone array with an infinitely steerable and variable beampattern is described. The microphone consists of 6 small pressure microphones flush-mounted on the surface of a 3/4" diameter rigid nylon sphere. The microphones are located on the surface at the points where the vertices of an inscribed octahedron contact the spherical surface. By appropriately combining the three Cartesian orthogonal pairs with simple scalar weightings, a general first-order differential microphone beam (or beams) can be realized and directed to any angle in 4π steradian space. A working real-time version has been created and measured results from this microphone are shown. This microphone should be useful for surround sound recording/playback applications and for virtual reality audio applications.
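The steering described in this abstract can be illustrated with a short sketch: an omnidirectional term plus three orthogonal dipole terms weighted by the components of the look-direction unit vector gives the general first-order pattern alpha + (1 - alpha)cos(theta). The code below is an illustrative far-field beampattern calculation under that assumption, not the paper's real-time implementation.

```python
# Illustrative sketch: steering a first-order differential beam to an arbitrary
# look direction by weighting an omni component and three orthogonal dipoles.
import numpy as np

def first_order_beampattern(look_az, look_el, alpha, az, el):
    """Far-field response of a first-order beam steered to (look_az, look_el).

    alpha in [0, 1] trades omnidirectional vs. dipole behaviour
    (alpha = 0.5 gives a cardioid, alpha = 0 a pure dipole).
    az, el: arrays of probed source azimuth/elevation angles in radians.
    """
    # Unit vector of the look direction = scalar weights for the x/y/z dipoles.
    u = np.array([np.cos(look_el) * np.cos(look_az),
                  np.cos(look_el) * np.sin(look_az),
                  np.sin(look_el)])
    # Unit vectors of the probed source directions.
    s = np.stack([np.cos(el) * np.cos(az),
                  np.cos(el) * np.sin(az),
                  np.sin(el)], axis=-1)
    # Omni term plus weighted combination of the three orthogonal dipole terms.
    return alpha + (1.0 - alpha) * s @ u

az = np.linspace(-np.pi, np.pi, 361)
pattern = first_order_beampattern(np.deg2rad(60), 0.0, 0.5, az, np.zeros_like(az))
print("peak response at", np.rad2deg(az[np.argmax(pattern)]), "deg")  # ~60 deg
```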

309 citations


Journal ArticleDOI
TL;DR: The article reports on the use of crosspower-spectrum phase (CSP) analysis as an accurate time delay estimation (TDE) technique used in a microphone array system for the location of acoustic events in noisy and reverberant environments.
Abstract: The article reports on the use of crosspower-spectrum phase (CSP) analysis as an accurate time delay estimation (TDE) technique. It is used in a microphone array system for the location of acoustic events in noisy and reverberant environments. A corresponding coherence measure (CM) and its graphical representation are introduced to show the TDE accuracy. Using a two-microphone pair array, real experiments show less than a 10 cm average location error in a 6 m × 6 m area.
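For readers unfamiliar with CSP analysis, the following is a minimal sketch of the estimator: the cross-power spectrum of the two channels is normalized to unit magnitude (phase transform) so that its inverse transform peaks at the relative delay. The function name and the synthetic test are assumptions for illustration, not the authors' code.

```python
# Minimal crosspower-spectrum phase (CSP / GCC-PHAT) time delay estimation,
# assuming a single dominant source and two channels.
import numpy as np

def csp_tde(x1, x2, fs, max_delay=None):
    """Estimate the delay of x2 relative to x1 in seconds (positive: x2 arrives later)."""
    n = 2 * max(len(x1), len(x2))                 # zero-pad to avoid circular wrap
    X1, X2 = np.fft.rfft(x1, n), np.fft.rfft(x2, n)
    cross = X2 * np.conj(X1)
    cross /= np.abs(cross) + 1e-12                # phase transform: keep phase only
    cc = np.fft.irfft(cross, n)
    cc = np.concatenate((cc[-(n // 2):], cc[:n // 2]))   # reorder so lag 0 is centered
    lags = np.arange(-n // 2, n // 2)
    if max_delay is not None:                     # restrict to physically possible lags
        keep = np.abs(lags) <= int(max_delay * fs)
        lags, cc = lags[keep], cc[keep]
    return lags[np.argmax(cc)] / fs, cc.max()     # delay estimate and coherence peak

# Tiny usage example with a known 20-sample (1.25 ms) delay.
fs = 16000
s = np.random.default_rng(0).standard_normal(fs)
x1, x2 = s[20:], s[:-20]                          # x2 arrives 20 samples after x1
print(csp_tde(x1, x2, fs, max_delay=0.005)[0])    # ~ +1.25e-3 s
```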

250 citations


Proceedings ArticleDOI
Hong Wang1, P.L. Chu1
21 Apr 1997
TL;DR: This paper describes the voice source localization algorithm used in the PictureTel automatic camera pointing system (LimeLight™, dynamic speech locating technology), which uses an array, 46 cm wide and 30 cm high, that contains 4 microphones and is mounted on top of the monitor.
Abstract: This paper describes the voice source localization algorithm used in the PictureTel automatic camera pointing system (LimeLight™, dynamic speech locating technology). The system uses an array, 46 cm wide and 30 cm high, that contains 4 microphones and is mounted on top of the monitor. The three dimensional position of a sound source is calculated from the time delays of 4 pairs of microphones. In time delay estimation, the averaging of signal onsets in each frequency band is combined with phase correlation to reduce the influence of noise and reverberation. With this approach, it is possible to provide reliable three dimensional voice source localization with a small microphone array. Post processing based on a priori knowledge is also introduced to eliminate the influence of reflections from furniture such as tables. Results of speech source localization under real conference room conditions are given. Some system related issues are also discussed.

249 citations


Proceedings ArticleDOI
21 Apr 1997
TL;DR: The use of a microphone array for hands-free continuous speech recognition in a noisy and reverberant environment is investigated, and a phone HMM adaptation, based on a small set of phonetically rich sentences, further improved the recognition rate obtained by beamforming alone.
Abstract: The use of a microphone array for hands-free continuous speech recognition in a noisy and reverberant environment is investigated. An array of eight omnidirectional microphones was placed at different angles and distances from the talker. A time delay compensation module was used to provide a beamformed signal as input to a hidden Markov model (HMM) based recognizer. A phone HMM adaptation, based on a small set of phonetically rich sentences, further improved the recognition rate obtained by applying beamforming alone. These results were confirmed both by experiments conducted in a noisy and reverberant environment and by simulations. In the latter case, different conditions were recreated by using the image method to produce synthetic versions of the array microphone signals.
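A minimal sketch of the time-delay-compensation (delay-and-sum) front end described above, assuming the per-channel delays have already been estimated (e.g. with a CSP-style estimator). Integer-sample alignment is used here for simplicity; a practical system would use fractional delays.

```python
# Simple delay-and-sum beamforming of time-aligned microphone channels.
import numpy as np

def delay_and_sum(signals, delays_s, fs):
    """Align each channel by its estimated delay (seconds) and average.

    signals: (num_mics, num_samples) array.
    delays_s: per-channel arrival delays relative to a reference microphone.
    """
    num_mics, n = signals.shape
    out = np.zeros(n)
    for m in range(num_mics):
        shift = int(round(delays_s[m] * fs))   # advance channel m by its delay
        # np.roll wraps at the edges; the wrapped samples are negligible for
        # utterances much longer than the maximum delay.
        out += np.roll(signals[m], -shift)
    return out / num_mics
```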

180 citations


Journal ArticleDOI
TL;DR: Speech quality is indeed enhanced at the output by the suppression of reflections and reverberation, and efficient adaptive beamforming for noise reduction is applied without risk of signal cancellation.
Abstract: This paper presents a method of adaptive microphone array beamforming using matched filters with signal subspace tracking. Our objective is to enhance near-field speech signals by reducing multipath and reverberation. In real applications such as speech acquisition in acoustic environments, sources do not propagate along known and direct paths. Particularly in hands-free telephony, we have to deal with undesired propagation phenomena such as reflections and reverberation. Prior methods developed adaptive microphone arrays for noise reduction after a time delay compensation of the direct path. This simple synchronization is insufficient to produce an acceptable speech quality, and makes adaptive beamforming unsuitable. We prove the identification of source-to-array impulse responses to be possible by subspace tracking. We consequently show the advantage of treating synchronization as a matched filtering step. Speech quality is indeed enhanced at the output by the suppression of reflections and reverberation (i.e., dereverberation), and efficient adaptive beamforming for noise reduction is applied without risk of signal cancellation. Evaluations confirm the performance achieved by the proposed algorithm under real conditions.
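A compact sketch of the matched-filtering idea described above: once estimates of the source-to-microphone impulse responses are available (the paper obtains them by subspace tracking, which is not reproduced here), each channel is filtered with its time-reversed impulse-response estimate before summation, generalizing simple delay compensation to multipath propagation.

```python
# Matched-filter-and-sum sketch; impulse-response estimates are assumed to be
# given as an (M, L) array of equal-length per-channel estimates.
import numpy as np

def matched_filter_sum(signals, impulse_responses):
    """signals: (M, N) array; impulse_responses: (M, L) array of estimates h_m."""
    out = np.zeros(signals.shape[1] + impulse_responses.shape[1] - 1)
    for x, h in zip(signals, impulse_responses):
        out += np.convolve(x, h[::-1])        # matched filter = time-reversed h_m
    return out / len(signals)
```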

157 citations


Journal ArticleDOI
TL;DR: Results show that both single- and dual-array systems provided target-intelligibility enhancements (2-4 dB improvements in speech reception threshold) relative to binaural cardioid microphones, and the binaural-output systems provided cues that assist in sound localization, with resulting performance depending directly upon the cue fidelity.
Abstract: This work is aimed at developing a design for the use of a microphone array with binaural hearing aids. The goal of such a hearing aid is to provide both the spatial-filtering benefits of the array and the natural benefits to sound localization and speech intelligibility that accrue from binaural listening. The present study examines two types of designs for fixed-processing systems: one in which independent arrays provide outputs to the two ears, and another in which the binaural outputs are derived from a single array. For the latter, various methods are used to merge array processing with binaural listening. In one approach, filters are designed to satisfy a frequency-dependent trade between directionality and binaural cue fidelity. In another, the microphone signals are filtered into low- and high-frequency components with the lowpass signals providing binaural cues and the highpass signal being the single output of the array processor. Acoustic and behavioral measurements were made in an anechoic chamber and in a moderately reverberant room to evaluate example systems. Theoretical performance was calculated for model arrays mounted on an idealized spherical head. Results show that both single- and dual-array systems provided target-intelligibility enhancements (2-4 dB improvements in speech reception threshold) relative to binaural cardioid microphones. In addition, the binaural-output systems provided cues that assist in sound localization, with resulting performance depending directly upon the cue fidelity. Finally, the sphere-based calculations accurately reflected the major features of the actual head-mounted array results, both in terms of directional sensitivity and output binaural cues.

142 citations


PatentDOI
TL;DR: In this article, a directional acoustic receiving system is constructed in the form of a necklace including an array of two or more microphones mounted on a housing supported on the chest of a user by a conducting loop encircling the user's neck.
Abstract: A directional acoustic receiving system is constructed in the form of a necklace including an array of two or more microphones mounted on a housing supported on the chest of a user by a conducting loop encircling the user's neck. Signal processing electronics contained in the same housing receives and combines the microphone signals in such a manner as to provide an amplified output signal which emphasizes sounds of interest arriving in a direction forward of the user. The amplified output signal drives the supporting conducting loop to produce a representative magnetic field. An electroacoustic transducer including a magnetic field pick-up coil for receiving the magnetic field is mounted in or on the user's ear and generates an acoustic signal representative of the sounds of interest. The microphone output signals are weighted (scaled) and combined to achieve desired spatial directivity responses. The weighting coefficients are determined by an optimization process. By bandpass filtering the weighted microphone signals with a set of filters covering the audio frequency range and summing the filtered signals, a receiving microphone array with a small aperture size is caused to have a directivity pattern that is essentially uniform over frequency in two or three dimensions. This method enables the design of highly-directive hearing instruments which are comfortable, inconspicuous, and convenient to use. The invention provides the user with a dramatic improvement in speech perception over existing hearing aid designs, particularly in the presence of background noise and reverberation.

137 citations


Journal ArticleDOI
TL;DR: A design in which two ear-level omnidirectional microphones constitute the array showed improvements in speech reception in noise in a mildly reverberant room of approximately 3 dB over simple binaural amplification and 5 dB over monaural amplification.
Abstract: For pt.I see ibid., vol.5, no.6, p.529-42, 1997. This work is aimed at developing a design for the use of a microphone array with binaural hearing aids. The goal of such a hearing aid is to provide both the spatial-filtering benefits of the array and the natural benefits to sound localization ability and speech intelligibility that result from binaural listening. The present study examines a design in which two ear-level omnidirectional microphones constitute the array. The merging of array processing with binaural listening is accomplished by dividing the frequency spectrum, devoting the lowpass part to binaural processing and the highpass part to adaptive array processing. Acoustic and behavioral measurements were made in an anechoic chamber and in a moderately reverberant room to assess the trade-off between sound localization and speech reception as the cutoff frequency was varied. A lowpass/highpass cutoff frequency of 500 Hz provided an improvement of 40 percentage points in sentence intelligibility over unaided listening for normal-hearing listeners, while still allowing adequate localization performance. Comparison of this binaural adaptive system to traditional amplification configurations with normal-hearing listeners showed improvements in speech reception in noise in a mildly reverberant room of approximately 3 dB over simple binaural amplification and 5 dB over monaural amplification.
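A minimal sketch of the lowpass/highpass split described above, assuming a 500 Hz crossover: each ear keeps its own lowpass band (preserving binaural cues) while the highpass band is replaced by the single array output. The Butterworth filters and function names are illustrative assumptions, not the authors' design.

```python
# Crossover-based merging of binaural low frequencies with a single array output.
import numpy as np
from scipy.signal import butter, lfilter

def split_and_merge(left, right, array_out, fs, fc=500.0, order=4):
    """left/right: ear-level microphone signals; array_out: adaptive array output."""
    b_lo, a_lo = butter(order, fc / (fs / 2), "low")
    b_hi, a_hi = butter(order, fc / (fs / 2), "high")
    hp = lfilter(b_hi, a_hi, array_out)            # directional highpass band
    out_l = lfilter(b_lo, a_lo, left) + hp         # left ear: own lows + array highs
    out_r = lfilter(b_lo, a_lo, right) + hp        # right ear: own lows + array highs
    return out_l, out_r
```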

128 citations


Proceedings ArticleDOI
21 Apr 1997
TL;DR: New concepts for efficient combination of acoustic echo cancellation and adaptive beamforming microphone arrays (ABMA) are presented and methods for controlling the interaction of ABMA and AEC are outlined.
Abstract: New concepts for efficient combination of acoustic echo cancellation (AEC) and adaptive beamforming microphone arrays (ABMA) are presented. By decomposing common beamforming methods into a time-invariant part, which the AEC can integrate, and a separate time-variant part, the number of echo cancellers is minimized without rendering the system identification problem more difficult. Methods for controlling the interaction of ABMA and AEC are outlined and implementations for typical microphone array applications are discussed.
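The decomposition idea can be sketched as follows: the echo canceller is placed after the time-invariant (fixed) beamforming part, so a single canceller suffices instead of one per microphone, and the time-variant steering then operates on the echo-free signal. The NLMS canceller and its parameters below are illustrative assumptions, not the paper's control methods.

```python
# One echo canceller on the fixed-beamformer output instead of one per microphone.
import numpy as np

def nlms_aec(mic, far_end, taps=256, mu=0.3):
    """Cancel the far-end echo contained in `mic` with an NLMS adaptive filter."""
    w = np.zeros(taps)
    buf = np.zeros(taps)
    out = np.zeros(len(mic))
    for n in range(len(mic)):
        buf = np.roll(buf, 1); buf[0] = far_end[n]
        out[n] = mic[n] - w @ buf                  # echo-free beamformer output
        w += mu * out[n] * buf / (buf @ buf + 1e-8)
    return out

def fixed_beamform_then_aec(mics, far_end):
    fixed = mics.mean(axis=0)                      # time-invariant beamforming part
    return nlms_aec(fixed, far_end)                # single AEC on its output
```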

111 citations


Proceedings ArticleDOI
21 Apr 1997
TL;DR: A microphone array can be used to locate a dominant acoustic source in a given environment; an "optimal" source location is obtained from the interchannel delay estimates and a geometrical description of the sensor arrangement.
Abstract: A microphone array can be used to locate a dominant acoustic source in a given environment. This capability is successfully employed to locate an active talker in teleconferencing or other multi-speaker applications. In this work the source location is obtained in two steps: (1) a time difference of arrival (TDOA) computation between the signals of the array; (2) an "optimal" source location based on the interchannel delay estimates and on a geometrical description of the sensor arrangement. The crosspower spectrum phase technique was used for TDOA estimation, while a maximum likelihood approach was followed to derive the source coordinates. Source location experiments in a three-dimensional space were performed by means of an array of 8 microphones. For this purpose both a loudspeaker and a real talker were used to collect data in a large noisy and reverberant room.
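A sketch of the second step (geometric inference from the interchannel delays): under independent Gaussian TDOA errors, maximum-likelihood localization reduces to nonlinear least squares on the range differences. The solver choice, names, and synthetic geometry below are assumptions, not the paper's exact formulation.

```python
# Least-squares source localization from pairwise time differences of arrival.
import numpy as np
from scipy.optimize import least_squares

C = 343.0  # speed of sound, m/s

def locate_source(mic_pos, pairs, tdoas, x0):
    """mic_pos: (M, 3) microphone coordinates; pairs: list of (i, j) index pairs;
    tdoas: measured delays tau_ij = (|x - m_i| - |x - m_j|) / C for each pair."""
    def residuals(x):
        d = np.linalg.norm(mic_pos - x, axis=1)
        return np.array([(d[i] - d[j]) / C - t for (i, j), t in zip(pairs, tdoas)])
    return least_squares(residuals, x0).x

# Synthetic check: 8 microphones on a box, source at (1.0, 2.0, 1.5).
mics = np.array([[x, y, z] for x in (0, 3) for y in (0, 4) for z in (0, 2)], float)
src = np.array([1.0, 2.0, 1.5])
pairs = [(0, j) for j in range(1, 8)]
taus = [(np.linalg.norm(src - mics[i]) - np.linalg.norm(src - mics[j])) / C
        for i, j in pairs]
print(locate_source(mics, pairs, taus, x0=np.array([1.5, 2.0, 1.0])))  # ~ src
```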

110 citations


Patent
Andre John Van Schyndel1
22 Dec 1997
TL;DR: In this article, a system that selects and/or steers a directional steerable microphone system based on input from an optical transducer is described, where a video camera provides video input to a processor that controls a steerable directional microphone (such as a microphone array).
Abstract: A system that selects and/or steers a directional steerable microphone system based on input from an optical transducer is described. An optical transducer, such as a video camera, provides video input to a processor that controls a steerable directional microphone (such as a microphone array) in the direction of audience members that exhibit physical cues commonly expressed by persons who are speaking or are about to speak.

Proceedings ArticleDOI
21 Apr 1997
TL;DR: A method for tracking the positional estimates of multiple talkers in the operating region of an acoustic microphone array using a time-delay-based localization algorithm and a Kalman filter derived from a set of potential source motion models.
Abstract: A method for tracking the positional estimates of multiple talkers in the operating region of an acoustic microphone array is presented. Initial talker location estimates are provided by a time-delay-based localization algorithm. These raw estimates are spatially smoothed by a Kalman filter derived from a set of potential source motion models. Data association techniques based on the estimate clusterings and source trajectories are incorporated to match location observations with individual talkers. Experimental results are presented for array recorded data using multiple talkers in a variety of scenarios.
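As one plausible instance of the source motion models mentioned above, the sketch below smooths raw position estimates with a constant-velocity Kalman filter; the state-space matrices and noise levels are illustrative assumptions rather than the paper's tuned values.

```python
# Constant-velocity Kalman smoothing of raw talker position estimates.
import numpy as np

def make_cv_kalman(dt, q=0.5, r=0.05, dim=3):
    """Return (F, H, Q, R) for a constant-velocity model in `dim` dimensions."""
    I = np.eye(dim)
    F = np.block([[I, dt * I], [np.zeros((dim, dim)), I]])   # state: [position, velocity]
    H = np.hstack([I, np.zeros((dim, dim))])                  # only position is observed
    return F, H, q * np.eye(2 * dim), r * np.eye(dim)

def kalman_track(measurements, dt):
    """measurements: (T, dim) array of raw localizer outputs for one talker."""
    dim = measurements.shape[1]
    F, H, Q, R = make_cv_kalman(dt, dim=dim)
    n = F.shape[0]
    x, P = np.zeros(n), np.eye(n)
    smoothed = []
    for z in measurements:
        x, P = F @ x, F @ P @ F.T + Q                         # predict
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)          # Kalman gain
        x = x + K @ (z - H @ x)                               # update with raw estimate
        P = (np.eye(n) - K @ H) @ P
        smoothed.append(x[:dim].copy())
    return np.array(smoothed)
```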

Proceedings ArticleDOI
12 May 1997
TL;DR: In this article, a new phased array processing method was developed to achieve accurate two-dimensional localization of acoustic sources, which is designed for sparse arrays, and uses many fewer microphones.
Abstract: A new phased-array processing method has been developed to achieve accurate two-dimensional localization of acoustic sources. While conventional processing requires a full 2D array with an excessive number of microphones, the present method is designed for sparse arrays, and uses many fewer microphones. A cross-shaped array of 39 microphones was built for wind tunnel testing. The near real-time processing is based on the above method. This is applied to an airframe noise source study on an Airbus aircraft model in the French CEPRA-19 anechoic wind tunnel. The localization maps give an idea of the array's performance.

Journal ArticleDOI
TL;DR: This article presents and evaluates a new cepstral prefiltering technique which can be applied on the received signals before the actual TDE in order to obtain a more accurate estimate of the delay in a typical reverberant environment.

Patent
Yoshifumi Nagata1
14 Mar 1997
TL;DR: In this article, a microphone array input type speech recognition scheme is presented that is capable of realizing high-precision sound source position or direction estimation with a small amount of computation, and thereby high-precision speech recognition.
Abstract: A microphone array input type speech recognition scheme capable of realizing high-precision sound source position or direction estimation with a small amount of computation, and thereby high-precision speech recognition. A band-pass waveform, which is the waveform in each frequency band, is obtained from the input signals of the microphone array, and the band-pass power of the sound source is obtained directly from the band-pass waveform. The obtained band-pass power is then used as the speech parameter. It is also possible to realize the sound source estimation and the band-pass power estimation at high precision while further reducing the amount of computation, by utilizing a sound source position search in which a low-resolution position estimate and a high-resolution position estimate are combined.

Proceedings ArticleDOI
21 Apr 1997
TL;DR: This paper presents a method to simultaneously perform 20 dB acoustic echo cancellation and 15-20 dB speech enhancement using an adaptive microphone array combined with spectral subtraction.
Abstract: This paper presents a method to simultaneously perform 20 dB acoustic echo cancellation and 15-20 dB speech enhancement using an adaptive microphone array combined with spectral subtraction. Primarily intended for handsfree telephones in automobiles, the microphone array system simultaneously emphasizes the near-end talker and suppresses the handsfree loudspeaker and the broadband car noise. The array system is based on a fast and efficient on-site calibration and can be used in other situations such as conventional speaker phones.
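A minimal sketch of the spectral-subtraction stage, assuming a precomputed noise power spectrum (e.g. from the on-site calibration or from noise-only frames); frame length, overlap, and flooring are illustrative assumptions.

```python
# Magnitude spectral subtraction with overlap-add, applied to the array output.
import numpy as np

def spectral_subtract(noisy, noise_psd, frame=512, hop=256, floor=0.05):
    """noise_psd: mean |rfft|^2 of noise-only frames (length frame // 2 + 1)."""
    win = np.hanning(frame)
    out = np.zeros(len(noisy))
    norm = np.zeros(len(noisy))
    for start in range(0, len(noisy) - frame, hop):
        seg = noisy[start:start + frame] * win
        spec = np.fft.rfft(seg)
        # Subtract the noise power, but never go below a small spectral floor.
        power = np.maximum(np.abs(spec) ** 2 - noise_psd, floor * np.abs(spec) ** 2)
        clean = np.sqrt(power) * np.exp(1j * np.angle(spec))   # keep the noisy phase
        out[start:start + frame] += np.fft.irfft(clean, frame) * win
        norm[start:start + frame] += win ** 2
    return out / np.maximum(norm, 1e-8)
```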

Proceedings ArticleDOI
21 Apr 1997
TL;DR: This paper presents the design of the Huge Microphone Array and its justifications; performance data for a few important algorithms are discussed with respect to use of processing capability, response latency, and difficulty of programming.
Abstract: The Huge Microphone Array (HMA) project started in February 1994 to design, construct, and test a real-time 512-microphone array system and to develop algorithms for use on it. Analysis of known algorithms showed that signal-processing performance of over 6 Gigaflops would be required; at the same time, there was a need for "portability", i.e., fitting into a small van. These tradeoffs and many others have led to a unique design in both hardware and software. This paper presents the design and its justifications. Performance data for a few important algorithms relative to usage of processing-capability, response latency, and difficulty of programming are discussed.

Proceedings ArticleDOI
21 Apr 1997
TL;DR: The optimum near-field beamformer provides increased array gain over that obtained from a uniformly weighted delay-and-sum beamformer.
Abstract: This paper describes the application of array optimization techniques to improving the near-field response of an arbitrary microphone array. The optimization exploits the differences in wavefront curvature between near-field and far-field sound sources and is suitable for reverberation reduction in small rooms. The optimum near-field beamformer provides increased array gain over that obtained from a uniformly weighted delay-and-sum beamformer.
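The optimization itself is not reproduced here, but the wavefront-curvature term it exploits can be sketched: a near-field (point-source) steering vector contains per-microphone spherical delays and 1/r amplitude decay, and applying it as a focused weighting already differs from far-field delay-and-sum. Names and geometry below are assumptions.

```python
# Near-field focused weighting built from the spherical-wavefront steering vector.
import numpy as np

C = 343.0  # speed of sound, m/s

def nearfield_steering(mic_pos, focus, freq):
    """Frequency-domain steering vector focused on a near-field point."""
    r = np.linalg.norm(mic_pos - focus, axis=1)
    return (1.0 / r) * np.exp(-2j * np.pi * freq * r / C)

def focused_weights(mic_pos, focus, freq):
    d = nearfield_steering(mic_pos, focus, freq)
    return d / (np.conj(d) @ d)        # unity (distortionless) response at the focus

# 5-element linear array, focused 0.5 m broadside, checked at 1 kHz.
mics = np.stack([np.linspace(-0.2, 0.2, 5), np.zeros(5), np.zeros(5)], axis=1)
focus = np.array([0.0, 0.5, 0.0])
w = focused_weights(mics, focus, 1000.0)
print(np.abs(np.conj(w) @ nearfield_steering(mics, focus, 1000.0)))   # 1.0 at the focus
```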

Proceedings ArticleDOI
21 Apr 1997
TL;DR: This microphone array system features time delay estimation using prewhitening signal processing, an optimally weighted delay-and-sum array, and speech period detection based on the level difference between signals before and after array processing.
Abstract: This paper proposes a microphone array system which realizes the following important functions for speech recognition: (i) SNR improvement, (ii) flat spectrum response for an arbitrary speaker position, and (iii) speech period detection in noisy speech. This microphone array system features time delay estimation using prewhitening signal processing, an optimally weighted delay-and-sum array, and speech period detection (called MLD) based on the level difference between signals before and after array processing. Word recognition experiments performed in the presence of crowd noise demonstrate that the proposed system has greater robustness against noise than does a system with a conventional directional microphone and a conventional speech period detection method.

Proceedings ArticleDOI
09 Nov 1997
TL;DR: A traffic sensing technique is described which utilizes a microphone array to detect the sound waves generated by the road vehicles, then digitized and processed by an on-site computer using a correlation based algorithm, which extracts key data reflecting the road traffic conditions, e.g. the speed and density of vehicles on the road, automatically on- site.
Abstract: A traffic sensing technique is described which utilizes a microphone array to detect the sound waves generated by the road vehicles. The detected sounds are then digitized and processed by an on-site computer using a correlation based algorithm, which extracts key data reflecting the road traffic conditions, e.g. the speed and density of vehicles on the road, automatically on-site. In comparison with existing traffic sensors, the proposed system offers lower installation and maintenance costs and is less intrusive to the surrounding environment. The results of theoretical analysis, computer simulation and some preliminary experiments are presented.

Proceedings ArticleDOI
D.B. Ward1, G.W. Elko
19 Oct 1997
TL;DR: Using the spherical solution to the wave equation, a beamforming technique is presented in this paper to simultaneously approximate a desired nearfield beampattern and a desired farfield beAMPattern.
Abstract: In designing a microphone array for speech acquisition in a reverberant room, one is often faced with a mixed nearfield/farfield design problem, i.e., design a beamformer which can focus on a nearfield source, but which simultaneously can cancel room reverberation (which is typically modeled as isotropic farfield interference). This paper presents a new technique to solve such a problem. Using the spherical solution to the wave equation, a beamforming technique is presented to simultaneously approximate a desired nearfield beampattern and a desired farfield beampattern.

Proceedings ArticleDOI
P.L. Chu1
21 Apr 1997
TL;DR: A three-microphone superdirective array is described which meets constraints of small size, high performance, and low cost; the microphone signals are linearly combined so as to maximize the signal-to-noise ratio.
Abstract: In set-top videoconferencing, the complete videoconferencing system fits unobtrusively on top of the television. The microphone sound pickup system is one of the most important functional blocks with constraints of small size, high performance, and low cost. Persons speaking several feet away from the system must be picked up satisfactorily while noise generated internally in the system by the cooling fan and hard drive, and noise generated externally from air conditioning and nearby computers must be attenuated. In this paper, a three microphone superdirective array is described which meets these constraints. An analog highpass and lowpass filter are used to merge two of the microphone signals to form a single channel, so that a single stereo A/D converter is required to process the three microphone signals. The microphone signals are then linearly combined so as to maximize the signal-to-noise ratio, resulting in nulls steered toward nearby objectionable noise sources.
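A hedged sketch of the linear combining step: with an estimate of the noise covariance R_n and a steering vector d toward the talker, weights proportional to R_n^{-1} d maximize the output SNR and steer nulls toward strong coherent noise sources. The geometry, diagonal loading, and names below are assumptions, and the analog highpass/lowpass merging of two microphones is not modeled.

```python
# Maximum-SNR (superdirective) combining weights for a small microphone array.
import numpy as np

def max_snr_weights(R_noise, d, loading=1e-3):
    """w proportional to R_n^{-1} d, normalized for unity response toward d."""
    R = R_noise + loading * np.trace(R_noise) / len(d) * np.eye(len(d))
    Rinv_d = np.linalg.solve(R, d)
    return Rinv_d / (np.conj(d) @ Rinv_d)

# Toy example: 3 mics, 10 cm spacing, look broadside, one noise source at 60 deg.
c, f, spacing = 343.0, 1000.0, 0.1
pos = np.arange(3) * spacing
steer = lambda theta: np.exp(-2j * np.pi * f * pos * np.sin(theta) / c)
d, v = steer(0.0), steer(np.deg2rad(60))
R = np.outer(v, np.conj(v)) + 0.01 * np.eye(3)     # coherent noise + sensor noise
w = max_snr_weights(R, d)
print(abs(np.conj(w) @ d), abs(np.conj(w) @ v))    # ~1.0 toward talker, ~0 toward noise
```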

Journal ArticleDOI
TL;DR: The Huge Microphone Array is a collaborative effort between Brown University and Rutgers University that started in February 1994 to design, construct, debug, and test a real‐time 512‐microphone array system and to develop algorithms for use on it.
Abstract: The Huge Microphone Array (HMA) is a collaborative effort between Brown University and Rutgers University that started in February 1994 to design, construct, debug, and test a real‐time 512‐microphone array system and to develop algorithms for use on it. Analysis of known algorithms made it clear that signal‐processing performance of over 6 Gigaflops would be required; at the same time, there was a need for ‘‘portability,’’ i.e., fitting into a small van, that also set an upper limit to the power required. It was essential that the array be able to be used in both large and small acoustic environments. These tradeoffs and many others have led to a unique design in both hardware and software. The hardware uses 128 fast, floating‐point DSP microprocessors and is designed so that data flow is independent of data processing. This leads to an unusually simple software environment. This paper presents the full design and its justifications. Performance for a few important algorithms relative to usage of processing‐capability, response latency, and difficulty of programming is discussed. [Work supported by NSF Grant MIP‐9314625.]

Proceedings ArticleDOI
19 Oct 1997
TL;DR: In this paper, the authors use fractional lower order statistics in the frequency domain of two-sensor measurements to accurately locate the source in impulsive noise, and demonstrate a significant improvement in detection via simulation experiments of a sound source in α-stable noise.
Abstract: This paper addresses the problem of robust localization of a sound source in a wide range of operating environments. We use fractional lower order statistics in the frequency domain of two-sensor measurements to accurately locate the source in impulsive noise. We demonstrate a significant improvement in detection via simulation experiments of a sound source in α-stable noise. Applications of this technique include the efficient steering of a microphone array in teleconference applications.


Journal ArticleDOI
Osamu Hoshuyama1, Akihiko Sugiyama1
TL;DR: In this paper, a robust generalized sidelobe canceller structure for adaptive microphone arrays is proposed, which can pick up a target signal with little distortion even when the error between the look direction and the actual target direction is large.
Abstract: A new robust generalized sidelobe canceller structure suited to adaptive microphone arrays is proposed. In the proposed structure, the blocking matrix incorporates leaky adaptive filters whose input signal is the output of the fixed beamformer. The leaky adaptive filters alleviate the influence of phase errors, which results in robustness. Undesirable target-signal cancellation is avoided in the presence of array imperfections such as target-direction error and microphone-position error. The proposed structure can pick up a target signal with little distortion even when the error between the look direction and the target direction is large. It can be implemented with a small number of microphones. Simulations demonstrate that the proposed structure, designed to allow 20° of directional error, reduces interference by more than 18 dB. © 1997 Scripta Technica, Inc. Electron Comm Jpn Pt 3, 80(8): 56–65, 1997
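A compact sketch of the structure described above: the blocking-matrix filters take the fixed-beamformer output as their input and adapt with a leak, and a multiple-input canceller then removes the remaining interference from the beamformer output. Step sizes, leak factor, filter lengths, and the absence of adaptation-mode control are simplifying assumptions, not the authors' tuned design.

```python
# Simplified robust GSC with leaky adaptive blocking-matrix filters.
import numpy as np

def leaky_nlms(x_buf, d, w, mu=0.1, leak=1e-3):
    """One leaky NLMS step: predict d from x_buf, return the error and new weights."""
    e = d - w @ x_buf
    w = (1.0 - leak) * w + mu * e * x_buf / (x_buf @ x_buf + 1e-8)
    return e, w

def robust_gsc(mics, taps=32):
    """mics: (M, N) time-aligned microphone signals (target assumed at broadside)."""
    M, N = mics.shape
    w_bm = np.zeros((M, taps))           # blocking-matrix filters (leaky)
    w_ic = np.zeros((M, taps))           # interference-canceller filters
    out = np.zeros(N)
    fbf_buf = np.zeros(taps)
    blocked_bufs = np.zeros((M, taps))
    for n in range(N):
        fbf = mics[:, n].mean()          # fixed beamformer (delay-and-sum)
        fbf_buf = np.roll(fbf_buf, 1); fbf_buf[0] = fbf
        interference = 0.0
        for m in range(M):
            # Blocking matrix: remove the target (predictable from fbf) from mic m.
            blocked, w_bm[m] = leaky_nlms(fbf_buf, mics[m, n], w_bm[m])
            blocked_bufs[m] = np.roll(blocked_bufs[m], 1); blocked_bufs[m, 0] = blocked
            interference += w_ic[m] @ blocked_bufs[m]
        out[n] = fbf - interference
        # Multiple-input canceller: adapt to minimize the residual output power.
        for m in range(M):
            norm = blocked_bufs[m] @ blocked_bufs[m] + 1e-8
            w_ic[m] += 0.1 * out[n] * blocked_bufs[m] / norm
    return out
```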

Proceedings ArticleDOI
21 Apr 1997
TL;DR: It is shown by means of simulations using measured data from a real room that the minimax algorithm leads to a more uniform final noise field than the existing algorithms.
Abstract: This paper deals with multiple input multiple output systems for active control of acoustic signals. These systems are used when the acoustic field is complex and therefore a number of sensors are necessary to estimate the sound field and a number of sources to create the cancelling field. A steepest descent iterative algorithm is applied to minimise the p-norm of a vector composed of the output signals of a microphone array. The existing algorithms deal with the 2-norm of this vector. This paper describes a general framework that covers the existing systems and then focuses on the ∞-norm minimisation algorithm. The minimax algorithm based on the ∞-norm minimises the output signal which has the greatest power. It is shown by means of simulations using measured data from a real room that the minimax algorithm leads to a more uniform final noise field than the existing algorithms.

Patent
18 Jul 1997
TL;DR: In this article, a speaker position is detected with high accuracy by combining sound source position information obtained from a microphone array 1 with human body position information obtained from a sensor 2 for image input, under the control of a control part 3.
Abstract: PROBLEM TO BE SOLVED: To detect a speaker position with high accuracy by jointly considering two kinds of information: the detected sound source position and the detected human body position. SOLUTION: A sound source position detection part 3-1 of a control part 3 prepares a sound source position map from the sound source information input from a microphone array 1. Based on the image information input by a sensor 2 for image input, a human body position detection part 3-2 prepares a human body position map. Each map partitions the range of space detectable by the microphone array 1 and the sensor 2 into regions and gives, for each region, the probability that the sound source or the human body is present there. A speaker position discrimination part 3-3 calculates the product of the probabilities in corresponding regions of the sound source and human body position maps and declares the region having the largest product to be the speaker position. The sensor 2 may be an ultrasonic sensor, an infrared sensor, or a television camera.
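The fusion step reduces to an elementwise product of the two probability maps over a common spatial grid followed by an argmax; the sketch below assumes both maps are already defined on the same grid (names are illustrative).

```python
# Fuse a sound-source probability map and a human-body probability map.
import numpy as np

def fuse_speaker_position(sound_map, body_map, grid_xy):
    """sound_map, body_map: per-cell probabilities on the same (H, W) grid;
    grid_xy: (H, W, 2) array of the coordinates of each grid cell."""
    joint = sound_map * body_map                      # product of the two maps
    idx = np.unravel_index(np.argmax(joint), joint.shape)
    return grid_xy[idx]                               # coordinates of the best cell
```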

29 Apr 1997
TL;DR: In this article, acoustic data acquired for the XV-15 tiltrotor aircraft performing a variety of terminal area operating procedures are used to measure the noise footprint produced during realistic approach and departure procedures.
Abstract: Acoustic data have been acquired for the XV-15 tiltrotor aircraft performing a variety of terminal area operating procedures. This joint NASA/Bell/Army test program was conducted in two phases. During Phase 1 the XV-15 was flown over a linear array of microphones, deployed perpendicular to the flight path, at a number of fixed operating conditions. This documented the relative noise differences between the various conditions. During Phase 2 the microphone array was deployed over a large area to directly measure the noise footprint produced during realistic approach and departure procedures. The XV-15 flew approach profiles that culminated in IGE hover over a landing pad, then takeoffs from the hover condition back out over the microphone array. Results from Phase 1 identify noise differences between selected operating conditions, while those from Phase 2 identify differences in noise footprints between takeoff and approach conditions and changes in noise footprint due to variation in approach procedures.