
Showing papers on "Microphone published in 2013"


Patent
14 Mar 2013
TL;DR: In this article, a microprocessor or other application-specific integrated circuit provides a mechanism for comparing the transit times of a user's voice to a primary speech microphone and to a secondary compliance microphone, in order to determine whether the speech microphone is placed in an appropriate proximity to the user's mouth.
Abstract: Apparatus and method that improves speech recognition accuracy, by monitoring the position of a user's headset-mounted speech microphone, and prompting the user to reconfigure the speech microphone's orientation if required. A microprocessor or other application specific integrated circuit provides a mechanism for comparing the relative transit times between a user's voice, a primary speech microphone, and a secondary compliance microphone. The difference in transit times may be used to determine if the speech microphone is placed in an appropriate proximity to the user's mouth. If required, the user is automatically prompted to reposition the speech microphone.

308 citations
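The transit-time comparison described above can be sketched as a cross-correlation delay estimate between the two microphone signals. This is a minimal illustration, not the patent's implementation; the function names and the 1 ms threshold are assumptions.

```python
import numpy as np

def estimate_transit_delay(primary, secondary, fs):
    """Estimate the delay (in seconds) of `secondary` relative to `primary`
    via the peak of their full cross-correlation."""
    corr = np.correlate(secondary, primary, mode="full")
    lag = np.argmax(corr) - (len(primary) - 1)   # lag in samples
    return lag / fs

def mic_in_position(primary, secondary, fs, max_delay_s=1e-3):
    """Flag the speech microphone as correctly positioned when the voice
    reaches the two microphones within `max_delay_s` of each other
    (threshold is an illustrative assumption)."""
    return abs(estimate_transit_delay(primary, secondary, fs)) <= max_delay_s
```

A larger inter-microphone delay than expected would trigger the repositioning prompt.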


Proceedings ArticleDOI
01 Dec 2013
TL;DR: The accuracy of a network recognising speech from a single distant microphone can approach that of a multi-microphone setup by training with data from other microphones.
Abstract: We investigate the application of deep neural network (DNN)-hidden Markov model (HMM) hybrid acoustic models for far-field speech recognition of meetings recorded using microphone arrays. We show that the hybrid models achieve significantly better accuracy than conventional systems based on Gaussian mixture models (GMMs). We observe up to 8% absolute word error rate (WER) reduction from a discriminatively trained GMM baseline when using a single distant microphone, and between 4-6% absolute WER reduction when using beamforming on various combinations of array channels. By training the networks on audio from multiple channels, we find the networks can recover a significant part of the accuracy difference between the single-distant-microphone and beamformed configurations. Finally, we show that the accuracy of a network recognising speech from a single distant microphone can approach that of a multi-microphone setup by training with data from other microphones.

127 citations


Patent
22 Feb 2013
TL;DR: In this article, an Angle and Distance Processing (ADP) module employed on a mobile device provides runtime angle and distance information to an adaptive beamformer for canceling noise signals, provides a means for building a table of filter coefficients for adaptive filters used in echo cancellation, provides faster and more accurate Automatic Gain Control (AGC), provides delay information for a classifier in a Voice Activity Detector (VAD), and assists in separating echo path changes from double talk.
Abstract: The disclosed system and method for a mobile device combines information derived from onboard sensors with conventional signal-processing information derived from a speech or audio signal to assist in noise and echo cancellation. In some implementations, an Angle and Distance Processing (ADP) module is employed on a mobile device and configured to provide runtime angle and distance information to an adaptive beamformer for canceling noise signals; to provide a means for building a table of filter coefficients for adaptive filters used in echo cancellation; to provide faster and more accurate Automatic Gain Control (AGC); to provide delay information for a classifier in a Voice Activity Detector (VAD); to provide a means for automatic switching between the speakerphone and handset modes of the mobile device, or between the primary and reference microphones; and to assist in separating echo path changes from double talk.

122 citations
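The angle information such a module supplies maps to the delay information mentioned in the abstract via the standard far-field TDOA relation τ = d·cos θ / c. A minimal sketch (helper names are illustrative, not from the patent):

```python
import math

def expected_tdoa_s(angle_deg, mic_spacing_m, speed_of_sound=343.0):
    """Far-field time difference of arrival between a primary and a
    reference microphone for a source at angle_deg from the array axis."""
    return mic_spacing_m * math.cos(math.radians(angle_deg)) / speed_of_sound

def tdoa_to_samples(tdoa_s, fs):
    """Round the delay to the nearest whole sample, e.g. for a filter
    table lookup or a VAD classifier input."""
    return round(tdoa_s * fs)
```

A broadside source (90°) gives zero delay; an endfire source (0°) gives the maximum delay of one spacing divided by the speed of sound.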


Patent
04 Nov 2013
TL;DR: In this paper, the authors propose a self-calibration mechanism for a loudspeaker with a connection to a microphone located at a listening area in a room, where the microphone picks up a test signal generated by the loudspeaker and the loudspeaker uses the test signal to determine its frequency response.
Abstract: Systems and methods for calibrating a loudspeaker with a connection to a microphone located at a listening area in a room. The loudspeaker includes self-calibration functions to adjust speaker characteristics according to effects generated by operating the loudspeaker in the room. In one example, the microphone picks up a test signal generated by the loudspeaker and the loudspeaker uses the test signal to determine the loudspeaker frequency response. The frequency response is analyzed below a selected low-frequency value for room modes. The loudspeaker generates parameters for a digital filter to compensate for the room modes. In another example, the loudspeaker may be networked with other speakers to perform calibration functions on all of the loudspeakers in the network.

118 citations


Journal ArticleDOI
TL;DR: A method for localizing an acoustic source with distributed microphone networks that turns out to exhibit a significantly lower computational cost than state-of-the-art techniques, while retaining excellent localization accuracy in fairly reverberant conditions.
Abstract: We propose a method for localizing an acoustic source with distributed microphone networks. Time Differences of Arrival (TDOAs) of signals pertaining to the same sensor are estimated through Generalized Cross-Correlation. After a TDOA filtering stage that discards measurements that are potentially unreliable, source localization is performed by minimizing a fourth-order polynomial that combines hyperbolic constraints from multiple sensors. The algorithm turns out to exhibit a significantly lower computational cost than state-of-the-art techniques, while retaining excellent localization accuracy in fairly reverberant conditions.

96 citations
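The first step of the pipeline, TDOA estimation through Generalized Cross-Correlation, is commonly implemented with the PHAT weighting. A minimal sketch (not the paper's exact estimator):

```python
import numpy as np

def gcc_phat(sig, ref, fs, max_tau=None):
    """Estimate the TDOA of `sig` relative to `ref` using Generalized
    Cross-Correlation with the Phase Transform (GCC-PHAT) weighting."""
    n = len(sig) + len(ref)
    SIG = np.fft.rfft(sig, n)
    REF = np.fft.rfft(ref, n)
    cross = SIG * np.conj(REF)
    cross /= np.abs(cross) + 1e-12            # PHAT: keep only the phase
    cc = np.fft.irfft(cross, n)
    max_shift = n // 2 if max_tau is None else min(int(fs * max_tau), n // 2)
    # Reorder so negative lags precede positive lags
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    return (np.argmax(np.abs(cc)) - max_shift) / fs
```

The PHAT whitening makes the correlation peak sharp, which is what gives GCC-PHAT its robustness to moderate reverberation.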


Patent
08 Mar 2013
TL;DR: In this article, a system can include a plurality of microphones operative to receive signals, a microphone condition detector, and a plurality of microphone condition determination sources; the detector can determine a condition for each of the microphones by using the received signals and accessing at least one condition determination source.
Abstract: Systems and methods for determining the operating condition of multiple microphones of an electronic device are disclosed. A system can include a plurality of microphones operative to receive signals, a microphone condition detector, and a plurality of microphone condition determination sources. The microphone condition detector can determine a condition for each of the plurality of microphones by using the received signals and accessing at least one microphone condition determination source.

94 citations


Patent
13 Aug 2013
TL;DR: In this paper, an adaptive filter algorithm was proposed to move the point at which acoustic cancellation occurs away from the error microphone and closer to the user's eardrum, in order to improve the performance of a personal listening system.
Abstract: A personal listening system has an active noise control (ANC) controller that produces an anti-noise signal. A head worn audio device for a user has a speaker to convert the anti-noise signal into anti-noise, an error microphone, and a reference microphone. The controller uses signals from the error and reference microphones to produce the anti-noise signal in accordance with an adaptive filter algorithm that has an adjustable parameter which changes so as to move the point at which acoustic cancellation occurs away from the error microphone and closer to the user's eardrum. Other embodiments are also described and claimed.

89 citations
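The adaptive filter at the core of such an ANC controller is typically a filtered-x LMS (FxLMS) loop. The sketch below is a textbook single-channel FxLMS with a known secondary-path model; it omits the patent's adjustable parameter for moving the cancellation point toward the eardrum:

```python
import numpy as np

def fxlms(ref, d, s_hat, n_taps=32, mu=0.005):
    """Textbook filtered-x LMS: adapt FIR weights w so that the anti-noise,
    after passing through the secondary path s_hat, cancels the noise d
    observed at the error microphone. Returns the error signal and weights."""
    xf = np.convolve(ref, s_hat)[:len(ref)]   # reference filtered by s_hat
    w = np.zeros(n_taps)
    x_buf = np.zeros(n_taps)                  # reference history
    xf_buf = np.zeros(n_taps)                 # filtered-reference history
    s_buf = np.zeros(len(s_hat))              # secondary-path input history
    e = np.empty(len(ref))
    for n in range(len(ref)):
        x_buf = np.roll(x_buf, 1); x_buf[0] = ref[n]
        xf_buf = np.roll(xf_buf, 1); xf_buf[0] = xf[n]
        s_buf = np.roll(s_buf, 1); s_buf[0] = w @ x_buf   # anti-noise sample
        e[n] = d[n] + s_hat @ s_buf           # residual at the error mic
        w -= mu * e[n] * xf_buf               # LMS step toward cancellation
    return e, w
```

With an accurate secondary-path model the residual at the error microphone decays toward zero.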


Patent
Michael Sleator1
22 May 2013
TL;DR: In this article, a system can receive a gesture from a user and, based on the received gesture, configure a microphone system to be more sensitive in the direction of the user relative to the device.
Abstract: A system can receive a gesture from a user and configure a microphone system based on the received gesture to be more sensitive in the direction of the user relative to the device. The gesture can be detected by a sensor and can be a touch input, input from a camera or a depth sensor, and the like. The microphone system can include a microphone that can be electronically or mechanically steerable, or both. Acoustic signals received from the direction of the user and from other directions can be used in conjunction with an automatic speech recognition system to detect and process a command from the user.

86 citations


Proceedings ArticleDOI
26 May 2013
TL;DR: The main contribution of the proposed method is an algorithm that identifies and corrects for internal device delays and acoustic event onset times in the measured TOAs.
Abstract: We present a method for automatic microphone localization in adhoc microphone arrays. The localization is based on time-of-arrival (TOA) measurements obtained from spatially distributed acoustic events. In practice, measured TOAs are an incomplete representation of the true TOAs due to unknown onset times of the acoustic events and internal delays in the capturing devices and make the localization problem insoluble if not addressed appropriately. The main contribution of the proposed method is an algorithm that identifies and corrects for such internal delays and acoustic event onset times in the measured TOAs. Experimental results using both simulated and real-world data demonstrate the performance of the method and highlight the significance of correct estimation of the internal delays and onset times.

84 citations


Patent
24 Sep 2013
TL;DR: In this paper, multiple adaptive W filters and associated adaptive filter controllers are provided that use multiple reference microphone signals to produce multiple, component anti-noise signals, which are gain weighted and summed to produce a single antinoise signal, which drives an earpiece speaker.
Abstract: In one aspect, multiple adaptive W filters and associated adaptive filter controllers are provided that use multiple reference microphone signals to produce multiple, “component” anti-noise signals. These are gain weighted and summed to produce a single anti-noise signal, which drives an earpiece speaker. The weighting changes based on computed measures of the coherence between content in each reference signal and content in an error signal. Other embodiments are also described and claimed.

80 citations
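The coherence-based weighting can be illustrated with Welch-averaged magnitude-squared coherence between each reference signal and the error signal. A sketch under the assumption that the gains are simply the normalized mean coherences (the patent does not specify this exact mapping):

```python
import numpy as np

def coherence_weights(refs, err, nfft=256):
    """Weight each reference by its mean magnitude-squared coherence (MSC)
    with the error signal, then normalize the weights to sum to one."""
    def msc(x, y):
        # Welch-style averaged cross/auto spectra, 50%-overlapping Hann frames
        hop = nfft // 2
        win = np.hanning(nfft)
        Sxy = Sxx = Syy = 0
        for i in range(0, len(x) - nfft + 1, hop):
            X = np.fft.rfft(win * x[i:i + nfft])
            Y = np.fft.rfft(win * y[i:i + nfft])
            Sxy = Sxy + X * np.conj(Y)
            Sxx = Sxx + np.abs(X)**2
            Syy = Syy + np.abs(Y)**2
        return np.mean(np.abs(Sxy)**2 / (Sxx * Syy + 1e-12))
    g = np.array([msc(r, err) for r in refs])
    return g / g.sum()
```

A reference microphone whose content closely tracks the error signal receives most of the weight; an uncorrelated one is suppressed.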


Book
30 Apr 2013
TL;DR: In the comprehensive treatment of microphone arrays, the topics covered include an introduction to the theory, far-field and near-field array signal processing algorithms, practical implementations, and common applications: vehicles, computing and communications equipment, compressors, fans, and household appliances, and hands-free speech.
Abstract: Presents a unified framework of far-field and near-field array techniques for noise source identification and sound field visualization, from theory to application. Acoustic Array Systems: Theory, Implementation, and Application provides an overview of microphone array technology with applications in noise source identification and sound field visualization. In the comprehensive treatment of microphone arrays, the topics covered include an introduction to the theory, far-field and near-field array signal processing algorithms, practical implementations, and common applications: vehicles, computing and communications equipment, compressors, fans and household appliances, and hands-free speech. The author concludes with other emerging techniques and innovative algorithms. The book encompasses theoretical background, implementation considerations, and application know-how; shows how to tackle broader problems in signal processing, control, and transducers; covers both far-field and near-field techniques in a balanced way; and introduces innovative algorithms, including nearfield equivalent source imaging (NESI) and high-resolution nearfield arrays. Selected code examples are available for download for readers to practice on their own, and presentation slides are available for instructor use. A valuable resource for postgraduates and researchers in acoustics, noise control engineering, audio engineering, and signal processing.

Journal ArticleDOI
TL;DR: In this paper, the authors describe how different sound types can be explored using the microphone of a smartphone and a suitable app, and demonstrate experiments that enable learners to explore and understand these differences.
Abstract: This paper describes how different sound types can be explored using the microphone of a smartphone and a suitable app. Vibrating bodies, such as strings, membranes, or bars, generate air pressure fluctuations in their immediate vicinity, which propagate through the room in the form of sound waves. Depending on the triggering mechanism, it is possible to differentiate between four types of sound waves: tone, sound, noise, and bang. In everyday language, non-experts use the terms “tone” and “sound” synonymously; however, from a physics perspective there are very clear differences between the two terms. This paper presents experiments that enable learners to explore and understand these differences. Tuning forks and musical instruments (e.g., recorders and guitars) can be used as equipment for the experiments. The data are captured using a smartphone equipped with the appropriate app (in this paper we describe the app Audio Kit for iOS systems). The values captured by the smartphone are displayed in a screenshot and then viewed directly on the smartphone or exported to a computer graphics program for printing.
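One standard way to separate tone-like from noise-like sounds in such experiments is spectral flatness, the ratio of the geometric to the arithmetic mean of the power spectrum. This is a generic measure, not the method used by the Audio Kit app, and the 0.1 threshold is an assumption:

```python
import numpy as np

def spectral_flatness(signal):
    """Spectral flatness (Wiener entropy): geometric mean / arithmetic mean
    of the power spectrum. Near 1 for broadband noise, near 0 for a tone."""
    psd = np.abs(np.fft.rfft(signal * np.hanning(len(signal))))**2 + 1e-12
    return np.exp(np.mean(np.log(psd))) / np.mean(psd)

def classify(signal, threshold=0.1):
    """Very rough split between tonal and noise-like sounds."""
    return "tone-like" if spectral_flatness(signal) < threshold else "noise-like"
```

A tuning fork recording concentrates its energy in a few bins (low flatness), while a hiss spreads it across the spectrum (high flatness).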

Patent
08 Jul 2013
TL;DR: In this paper, a mobile communications device contains at least two microphones, one microphone is located away from the handset receiver and serves to pick up voice of a near end user of the device for transmission to the other party during a call.
Abstract: A mobile communications device contains at least two microphones. One microphone is located away from the handset receiver and serves to pick up voice of a near end user of the device for transmission to the other party during a call. Another microphone is located near the handset receiver and serves to pick up acoustic output of the handset receiver (a far end signal). A signal processor measures the frequency response of the receiver. The signal processor performs spectral analysis of the receiver frequency response to determine whether or not the device is being held at the ear of the user. On that basis, the device automatically changes its operating mode, e.g., turns on or off a touch sensitive display screen during the call. Other embodiments are also described.

Proceedings ArticleDOI
27 Jun 2013
TL;DR: A hybrid analog-digital backscatter platform that uses digital backscatter for addressability and control but switches into analog backscatter mode for high-data-rate transmission of sensor data is presented.
Abstract: After comparing the properties of analog backscatter and digital backscatter, we propose that a combination of the two can provide a solution for high data rate battery free wireless sensing that is superior to either approach on its own. We present a hybrid analog-digital backscatter platform that uses digital backscatter for addressability and control but switches into analog backscatter mode for high data rate transmission of sensor data. Using hybrid backscatter, we report the first digitally addressable real-time battery free wireless microphone. We develop the hybrid backscatter platform by integrating an electret microphone and RF switch with a digital RFID platform (WISP). The hybrid WISP operates by default in digital mode, transmitting and receiving digital data using the EPC Gen 2 RFID protocol but switching into analog mode to backscatter audio sensor data when activated by a Gen 2 READ command. The data is recovered using a USRP-based Software Defined RFID reader. We report an operating range of 7.4 meters for the analog backscatter microphone and 2.7 meters for the hybrid microphone with a USRP-based RFID reader at 26.7 dBm output power.

Journal ArticleDOI
TL;DR: The integration of the ManyEars Library with Willow Garage’s Robot Operating System is presented and the customized microphone board and sound card distributed as an open hardware solution for implementation of robotic audition systems are introduced.
Abstract: ManyEars is an open framework for microphone array-based audio processing. It consists of a sound source localization, tracking and separation system that can provide an enhanced speaker signal for improved speech and sound recognition in real-world settings. The ManyEars software framework is composed of a portable and modular C library, along with a graphical user interface for tuning the parameters and for real-time monitoring. This paper presents the integration of the ManyEars Library with Willow Garage's Robot Operating System. To facilitate the use of ManyEars on various robotic platforms, the paper also introduces the customized microphone board and sound card distributed as an open hardware solution for implementation of robotic audition systems.

Patent
10 Jan 2013
TL;DR: In this paper, a sound processing system suitable for use in a vehicle having multiple acoustic zones includes a plurality of microphone In-Car Communication (Mic-ICC) instances and a plurality of loudspeaker In-Car Communication (Ls-ICC) instances.
Abstract: An In-Car Communication (ICC) system supports the communication paths within a car by receiving the speech signals of a speaking passenger and playing them back for one or more listening passengers. Signal processing tasks are split into a microphone-related part and a loudspeaker-related part. A sound processing system suitable for use in a vehicle having multiple acoustic zones includes a plurality of microphone In-Car Communication (Mic-ICC) instances and a plurality of loudspeaker In-Car Communication (Ls-ICC) instances. The system further includes a dynamic audio routing matrix with a controller, coupled to the Mic-ICC instances, a mixer coupled to the plurality of Mic-ICC instances, and a distributor coupled to the Ls-ICC instances.

Patent
Lae-Hoon Kim1, Erik Visser1
15 Mar 2013
TL;DR: In this paper, a spatially directive filter is applied to a multichannel audio signal to produce an output signal, based on angles of arrival of source components relative to the axes of different microphone pairs.
Abstract: Systems, methods, and apparatus are described for applying, based on angles of arrival of source components relative to the axes of different microphone pairs, a spatially directive filter to a multichannel audio signal to produce an output signal.

Patent
18 Apr 2013
TL;DR: In this article, an error microphone is provided proximate the speaker to measure the output of the transducer in order to control the adaptation of the anti-noise signal and to estimate an electro-acoustical path from the noise canceling circuit through the transducer.
Abstract: A personal audio device, such as a wireless telephone, includes noise canceling circuit that adaptively generates an anti-noise signal from a reference microphone signal and injects the anti-noise signal into the speaker or other transducer output to cause cancellation of ambient audio sounds. An error microphone may also be provided proximate the speaker to measure the output of the transducer in order to control the adaptation of the anti-noise signal and to estimate an electro-acoustical path from the noise canceling circuit through the transducer. A processing circuit that performs the adaptive noise canceling (ANC) function also detects frequency-dependent characteristics in and/or direction of the ambient sounds and alters adaptation of the noise canceling circuit in response to the detection.

Patent
Yeri Lee1
12 Nov 2013
TL;DR: A mobile terminal may include an audio output module to output sound, a microphone to receive a user's voice input, and a detecting device to detect a voice control command as discussed by the authors.
Abstract: A mobile terminal may include an audio output module to output sound, a microphone to receive a user's voice input, a detecting device to detect a voice control command, and a controller to control the audio output module to adjust the volume of the sound output by the audio output module based on the voice control command.

Proceedings ArticleDOI
26 May 2013
TL;DR: In this paper, the authors present a complete characterization of and solution to the microphone position self-calibration problem for time-of-arrival (TOA) measurements, which is the problem of determining the positions of receivers and transmitters given all receiver-transmitter distances.
Abstract: This paper presents a complete characterization of and solution to the microphone position self-calibration problem for time-of-arrival (TOA) measurements. This is the problem of determining the positions of receivers and transmitters given all receiver-transmitter distances. Such calibration problems arise in applications such as calibration of radio antenna networks, audio or ultrasound arrays, and WiFi transmitter arrays. We show for which cases such calibration problems are well-defined and derive efficient and numerically stable algorithms for the minimal TOA-based self-calibration problems. The proposed algorithms are non-iterative and require no assumptions on the sensor positions. Experiments on synthetic data show that the minimal solvers are numerically stable and perform well on noisy data. The solvers are also tested on two real datasets with good results.
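The well-posedness of TOA self-calibration rests on a low-rank structure: after double compensation, the matrix of squared receiver-transmitter distances has rank equal to the spatial dimension. A small numerical sketch of this standard algebraic fact (not the paper's solver):

```python
import numpy as np

rng = np.random.default_rng(0)
R = rng.standard_normal((8, 3))    # 8 receiver positions in 3-D
S = rng.standard_normal((6, 3))    # 6 transmitter positions

# Squared receiver-transmitter distances:
# d_ij^2 = |r_i|^2 - 2 r_i . s_j + |s_j|^2
D2 = np.sum(R**2, 1)[:, None] - 2 * R @ S.T + np.sum(S**2, 1)[None, :]

# Subtracting the first row and column cancels the |r|^2 and |s|^2 terms,
# leaving -2 (r_i - r_0) . (s_j - s_0): a rank-3 matrix in 3-D.
C = D2[1:, 1:] - D2[1:, :1] - D2[:1, 1:] + D2[0, 0]
print(np.linalg.matrix_rank(C))    # prints 3 for generic positions
```

Minimal solvers exploit exactly this factorization to recover the positions up to a rigid transform.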

Journal ArticleDOI
TL;DR: An algorithm for jaw movement identification is presented that is designed to be as general as possible; it requires no calibration and identifies jaw movements according to key features in the time domain that are defined in relative terms.

Journal ArticleDOI
TL;DR: The GSC-form implementation, by separating the constraints and the minimization, enables adaptation of the beamformer during speech-absent time segments, and relaxes the requirement of other distributed LCMV-based algorithms to re-estimate the sources' RTFs after each iteration.
Abstract: This paper proposes a distributed multiple constraints generalized sidelobe canceler (GSC) for speech enhancement in an N-node fully connected wireless acoustic sensor network (WASN) comprising M microphones. Our algorithm is designed to operate in reverberant environments with constrained speakers (including both desired and competing speakers). Rather than broadcasting M microphone signals, a significant communication bandwidth reduction is obtained by performing local beamforming at the nodes and utilizing only N + P transmission channels. Each node processes its own microphone signals together with the N + P transmitted signals. The GSC-form implementation, by separating the constraints and the minimization, enables adaptation of the beamformer (BF) during speech-absent time segments, and relaxes the requirement of other distributed LCMV-based algorithms to re-estimate the sources' RTFs after each iteration. We provide a full proof of convergence of the proposed structure to the centralized GSC-BF. An extensive experimental study of both narrowband and (wideband) speech signals verifies the theoretical analysis.

Journal ArticleDOI
TL;DR: Experimental results show that the proposed method improves AEI performance compared with the direct method (i.e., feature vector is extracted from the audio recording directly), and the proposed scheme is robust to MP3 compression attack.
Abstract: An audio recording is subject to a number of possible distortions and artifacts. Consider, for example, artifacts due to acoustic reverberation and background noise. The acoustic reverberation depends on the shape and the composition of the room, and it causes temporal and spectral smearing of the recorded sound. The background noise, on the other hand, depends on the secondary audio source activities present in the evidentiary recording. Extraction of acoustic cues from an audio recording is an important but challenging task. Temporal changes in the estimated reverberation and background noise can be used for dynamic acoustic environment identification (AEI), audio forensics, and ballistic settings. We describe a statistical technique based on spectral subtraction to estimate the amount of reverberation and nonlinear filtering based on particle filtering to estimate the background noise. The effectiveness of the proposed method is tested using a data set consisting of speech recordings of two human speakers (one male and one female) made in eight acoustic environments using four commercial grade microphones. Performance of the proposed method is evaluated for various experimental settings such as microphone independent, semi- and full-blind AEI, and robustness to MP3 compression. Performance of the proposed framework is also evaluated using Temporal Derivative-based Spectrum and Mel-Cepstrum (TDSM)-based features. Experimental results show that the proposed method improves AEI performance compared with the direct method (i.e., feature vector is extracted from the audio recording directly). In addition, experimental results also show that the proposed scheme is robust to MP3 compression attack.
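The spectral-subtraction idea underlying the noise estimate can be sketched with classic magnitude spectral subtraction, a simplification of the paper's statistical technique; here the noise magnitude spectrum is assumed known from a noise-only segment:

```python
import numpy as np

def spectral_subtract(noisy, noise_mag, nfft=512):
    """Frame-wise magnitude spectral subtraction with half-wave rectification
    and 50%-overlap-add resynthesis. `noise_mag` is the average noise
    magnitude spectrum (rfft bins) estimated from a noise-only segment."""
    hop = nfft // 2
    win = np.hanning(nfft)
    out = np.zeros(len(noisy))
    for i in range(0, len(noisy) - nfft + 1, hop):
        X = np.fft.rfft(win * noisy[i:i + nfft])
        mag = np.maximum(np.abs(X) - noise_mag, 0.0)   # subtract noise floor
        out[i:i + nfft] += np.fft.irfft(mag * np.exp(1j * np.angle(X)), nfft)
    return out
```

Tracking `noise_mag` over time, as the paper does with particle filtering, is what turns this into a dynamic acoustic environment signature.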

Patent
21 Feb 2013
TL;DR: In this article, a portable device such as a smartphone or tablet can be used to calibrate speakers by initiating playback of a test signal, detecting playback of the test signal with the portable device's microphone, and repeating this process for a number of speakers and/or device positions (e.g., next to each of the user's ears).
Abstract: Systems and methods are disclosed for facilitating efficient calibration of filters for correcting room and/or speaker-based distortion and/or binaural imbalances in audio reproduction, and/or for producing three-dimensional sound in stereo system environments. According to some embodiments, using a portable device such as a smartphone or tablet, a user can calibrate speakers by initiating playback of a test signal, detecting playback of the test signal with the portable device's microphone, and repeating this process for a number of speakers and/or device positions (e.g., next to each of the user's ears). A comparison can be made between the test signal and the detected signal, and this can be used to more precisely calibrate rendering of future signals by the speakers.
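The comparison between the test signal and the detected signal can be sketched as a frequency-domain estimate of the playback-chain response followed by a regularized inversion. A single-channel illustration (the patent's procedure is per-speaker and per-position; the names and regularization constant are assumptions):

```python
import numpy as np

def calibration_eq(test, recorded, eps=1e-3):
    """Estimate the playback-chain response H = REC/TEST from one test
    signal, and return a regularized inverse EQ that flattens it."""
    T = np.fft.rfft(test)
    R = np.fft.rfft(recorded)
    H = R * np.conj(T) / (np.abs(T)**2 + eps)
    eq = np.conj(H) / (np.abs(H)**2 + eps)   # regularized inverse of H
    return H, eq

def apply_eq(signal, eq):
    """Apply the correction filter in the frequency domain (circular)."""
    return np.fft.irfft(np.fft.rfft(signal) * eq, len(signal))
```

The regularization keeps the inverse bounded at frequencies where the measured response is weak, at the cost of incomplete correction there.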

Journal ArticleDOI
TL;DR: A two-stage beamforming approach for dereverberation and noise reduction is presented and different signal-dependent beamformers can be used depending on the desired operating point in terms of noise reduction and speech distortion.
Abstract: In general, the signal-to-noise ratio as well as the signal-to-reverberation ratio of speech received by a microphone decrease when the distance between the talker and the microphone increases. Dereverberation and noise reduction algorithms are essential for many applications, such as videoconferencing, hearing aids, and automatic speech recognition, to improve the quality and intelligibility of the received desired speech that is corrupted by reverberation and noise. In the last decade, researchers have aimed at estimating the reverberant desired speech signal as received by one of the microphones. Although this approach has led to practical noise reduction algorithms, the spatial diversity of the received desired signal is not exploited to dereverberate the speech signal. In this paper, a two-stage beamforming approach is presented for dereverberation and noise reduction. In the first stage, a signal-independent beamformer is used to generate a reference signal which contains a dereverberated version of the desired speech signal as received at the microphones and residual noise. In the second stage, the filtered microphone signals and the noisy reference signal are used to obtain an estimate of the dereverberated desired speech signal. In this stage, different signal-dependent beamformers can be used depending on the desired operating point in terms of noise reduction and speech distortion. The presented performance evaluation demonstrates the effectiveness of the proposed two-stage approach.
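The first-stage signal-independent beamformer can be as simple as delay-and-sum, which preserves the coherently aligned desired signal while averaging down uncorrelated noise. A minimal integer-delay sketch (circular shifts for brevity; not the paper's beamformer):

```python
import numpy as np

def delay_and_sum(mics, delays):
    """Advance each microphone signal by its integer steering delay
    (circular shift, for brevity) and average: the desired source adds
    coherently while uncorrelated noise power drops by ~1/M."""
    out = np.zeros(len(mics[0]))
    for m, d in zip(mics, delays):
        out += np.roll(m, -d)
    return out / len(mics)
```

With M microphones and spatially white noise, the residual noise power at the output is roughly 1/M of a single channel's.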

Patent
01 Apr 2013
TL;DR: In this paper, an adaptive noise cancellation (ANC) circuit adaptively generates an anti-noise signal for each earspeaker from at least one microphone signal that measures the ambient audio, and combines the anti-noise signals with source audio to provide outputs for the earspeakers.
Abstract: A personal audio device including earspeakers, includes an adaptive noise canceling (ANC) circuit that adaptively generates an anti-noise signal for each earspeaker from at least one microphone signal that measures the ambient audio, and the anti-noise signals are combined with source audio to provide outputs for the earspeakers. The anti-noise signals cause cancellation of ambient audio sounds at the respective earspeakers. A processing circuit uses the microphone signal(s) to generate the anti-noise signals, which can be generated by adaptive filters. The processing circuit controls adaptation of the adaptive filters such that when an event requiring action on the adaptation of one of the adaptive filters is detected, action is taken on the other one of the adaptive filters. Another feature of the ANC system uses microphone signals provided at both of the earspeakers to perform processing on a voice microphone signal that receives speech of the user.

Journal ArticleDOI
TL;DR: An experimental study of spatial sound perception with the use of a spherical microphone array for sound recording and headphone-based binaural sound synthesis shows that a source will be perceived as more spatially sharp and more externalized when represented by a binaural stimulus reconstructed with a higher spherical harmonics order.
Abstract: The area of sound field synthesis has significantly advanced in the past decade, facilitated by the development of high-quality sound-field capturing and re-synthesis systems. Spherical microphone arrays are among the most recently developed systems for sound field capturing, enabling processing and analysis of three-dimensional sound fields in the spherical harmonics domain. In spite of these developments, a clear relation between sound fields recorded by spherical microphone arrays and their perception with a re-synthesis system has not yet been established, although some relation to scalar measures of spatial perception was recently presented. This paper presents an experimental study of spatial sound perception with the use of a spherical microphone array for sound recording and headphone-based binaural sound synthesis. Sound field analysis and processing is performed in the spherical harmonics domain with the use of head-related transfer functions and simulated enclosed sound fields. The effect of several factors, such as spherical harmonics order, frequency bandwidth, and spatial sampling, are investigated by applying the repertory grid technique to the results of the experiment, forming a clearer relation between sound-field capture with a spherical microphone array and its perception using binaural synthesis regarding space, frequency, and additional artifacts. The experimental study clearly shows that a source will be perceived as more spatially sharp and more externalized when represented by a binaural stimulus reconstructed with a higher spherical harmonics order. This effect is apparent from low spherical harmonics orders. Spatial aliasing, as a result of sound field capturing with a finite number of microphones, introduces unpleasant artifacts that increase with the degree of aliasing error.

Patent
15 Apr 2013
TL;DR: In this paper, a secondary path estimating adaptive filter is used to estimate the electro-acoustical path from the noise canceling circuit through the transducer so that source audio can be removed from the error signal.
Abstract: A personal audio device, such as a wireless telephone, generates an anti-noise signal from an error microphone signal and injects the anti-noise signal into the speaker or other transducer output to cause cancellation of ambient audio sounds. The error microphone is also provided proximate the speaker to provide an error signal indicative of the effectiveness of the noise cancellation. A secondary path estimating adaptive filter is used to estimate the electro-acoustical path from the noise canceling circuit through the transducer so that source audio can be removed from the error signal. Noise bursts are injected intermittently and the adaptation of the secondary path estimating adaptive filter controlled, so that the secondary path estimate can be maintained irrespective of the presence and amplitude of the source audio.
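The secondary-path estimating adaptive filter amounts to LMS system identification driven by a probe signal. The sketch below adapts continuously; the patent's scheme gates adaptation on intermittently injected noise bursts, which is omitted here:

```python
import numpy as np

def identify_secondary_path(probe, measured, n_taps=8, mu=0.02):
    """LMS system identification: adapt the FIR estimate s_hat so that
    s_hat * probe tracks the response measured at the error microphone."""
    s_hat = np.zeros(n_taps)
    buf = np.zeros(n_taps)                 # probe history, newest first
    for n in range(len(probe)):
        buf = np.roll(buf, 1); buf[0] = probe[n]
        e = measured[n] - s_hat @ buf      # prediction error
        s_hat += mu * e * buf              # LMS weight update
    return s_hat
```

Once `s_hat` tracks the true speaker-to-microphone path, filtering the source audio through it and subtracting removes the source audio from the error signal.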

Patent
18 Apr 2013
TL;DR: In this article, an adaptive filter is used to estimate the electro-acoustical path from the noise-canceling circuit through the transducer to the at least one microphone so that source audio can be removed from the microphone signal.
Abstract: A personal audio device, such as a wireless telephone, generates an anti-noise signal from a microphone signal and injects the anti-noise signal into the speaker or other transducer output to cause cancellation of ambient audio sounds. The microphone measures the ambient environment, but also contains a component due to the transducer acoustic output. An adaptive filter is used to estimate the electro-acoustical path from the noise-canceling circuit through the transducer to the at least one microphone so that source audio can be removed from the microphone signal. A determination of the relative amount of the ambient sounds present in the microphone signal versus the amount of the transducer output of the source audio present in the microphone signal is made to determine whether to update the adaptive response.

Patent
24 Apr 2013
TL;DR: In this article, a secondary path estimating adaptive filter estimates the electro-acoustical path from the noise canceling circuit through the transducer so that source audio can be removed from the error signal.
Abstract: An adaptive noise canceling (ANC) circuit adaptively generates an anti-noise signal from a reference microphone signal that is injected into the speaker or other transducer output to cause cancellation of ambient audio sounds. An error microphone proximate the speaker provides an error signal. A secondary path estimating adaptive filter estimates the electro-acoustical path from the noise canceling circuit through the transducer so that source audio can be removed from the error signal. Tones in the source audio, such as remote ringtones, present in downlink audio during initiation of a telephone call, are detected by a tone detector using accumulated tone persistence and non-silence hangover counting, and adaptation of the secondary path estimating adaptive filter is halted to prevent adapting to the tones. Adaptation of the adaptive filters is then sequenced so any disruption of the secondary path adaptive filter response is removed before allowing the anti-noise generating filter to adapt.