
Showing papers on "Noise published in 2013"


Book
31 Oct 2013
TL;DR: Book on noise effects on man covering audiometry, aural reflex, hearing damage risk, physiological responses, motor performance and speech communication.
Abstract: Book on noise effects on man covering audiometry, aural reflex, hearing damage risk, physiological responses, motor performance and speech communication

411 citations


Patent
12 Mar 2013
TL;DR: In this article, audio frames are classified as either speech, non-transient background noise, or transient noise events, and other metrics may be calculated to indicate confidence in classification.
Abstract: Audio frames are classified as either speech, non-transient background noise, or transient noise events. Probabilities of speech or transient noise event, or other metrics may be calculated to indicate confidence in classification. Frames classified as speech or noise events are not used in updating models (e.g., spectral subtraction noise estimates, silence model, background energy estimates, signal-to-noise ratio) of non-transient background noise. Frame classification affects acceptance/rejection of recognition hypothesis. Classifications and other audio related information may be determined by circuitry in a headset, and sent (e.g., wirelessly) to a separate processor-based recognition device.

265 citations


Journal ArticleDOI
TL;DR: The AEP-technique was frequently used to study the effects of high sound/noise levels on hearing in particular by measuring the temporary threshold shifts after exposure to various noise types (white noise, pure tones and anthropogenic noises) and was successfully utilized to study acoustic communication.
Abstract: A recent survey lists more than 100 papers utilizing the auditory evoked potential (AEP) recording technique for studying hearing in fishes. More than 95% of these AEP studies were published after Kenyon et al. introduced a non-invasive electrophysiological approach in 1998, allowing rapid evaluation of hearing and repeated testing of animals. First, our review compares AEP hearing thresholds to behaviorally gained thresholds. Second, baseline hearing abilities are described and compared in 111 fish species out of 51 families. Following this, studies investigating the functional significance of various accessory hearing structures (Weberian ossicles, swim bladder, otic bladders) by eliminating these morphological structures in various ways are dealt with. Furthermore, studies on the ontogenetic development of hearing are summarized. The AEP technique was frequently used to study the effects of high sound/noise levels on hearing, in particular by measuring the temporary threshold shifts after exposure to various noise types (white noise, pure tones and anthropogenic noises). In addition, the hearing thresholds were determined in the presence of noise (white, ambient, ship noise) in several studies, a phenomenon termed masking. Various ecological (e.g., temperature, cave dwelling), genetic (e.g., albinism), methodical (e.g., ototoxic drugs, threshold criteria, speaker choice) and behavioral (e.g., dominance, reproductive status) factors potentially influencing hearing were investigated. Finally, the technique was successfully utilized to study acoustic communication by comparing hearing curves with sound spectra either under quiet conditions or in the presence of noise, by analyzing the temporal resolution ability of the auditory system and the detection of temporal, spectral and amplitude characteristics of conspecific vocalizations.

193 citations


Journal ArticleDOI
TL;DR: A statistical technique to model and estimate the amount of reverberation and background noise variance in an audio recording is described, and an energy-based voice activity detection method is proposed for automatic decaying-tail selection from an audio recording.
Abstract: An audio recording is subject to a number of possible distortions and artifacts. Consider, for example, artifacts due to acoustic reverberation and background noise. The acoustic reverberation depends on the shape and the composition of a room, and it causes temporal and spectral smearing of the recorded sound. The background noise, on the other hand, depends on the secondary audio source activities present in the evidentiary recording. Extraction of acoustic cues from an audio recording is an important but challenging task. Temporal changes in the estimated reverberation and background noise can be used for dynamic acoustic environment identification (AEI), audio forensics, and ballistic settings. We describe a statistical technique to model and estimate the amount of reverberation and background noise variance in an audio recording. An energy-based voice activity detection method is proposed for automatic decaying-tail-selection from an audio recording. Effectiveness of the proposed method is tested using a data set consisting of speech recordings. The performance of the proposed method is also evaluated for both speaker-dependent and speaker-independent scenarios.
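The energy-based voice activity detection step can be sketched as follows. This is an illustrative toy implementation, not the authors' exact method: frames whose energy falls a fixed number of decibels below the loudest frame are treated as silence, and the `frame_len` and `threshold_db` values are arbitrary assumptions.

```python
import numpy as np

def energy_vad(signal, frame_len=256, threshold_db=-30.0):
    """Flag frames whose energy is within threshold_db of the loudest frame.

    Returns a boolean array, one entry per frame (True = voice activity).
    """
    n_frames = len(signal) // frame_len
    frames = signal[:n_frames * frame_len].reshape(n_frames, frame_len)
    energy = np.sum(frames ** 2, axis=1)
    # Energy relative to the loudest frame, in dB (epsilon avoids log of zero)
    energy_db = 10.0 * np.log10(energy / energy.max() + 1e-12)
    return energy_db > threshold_db

# Toy example: silence, then a burst, then silence
sig = np.concatenate([np.zeros(512), np.sin(np.linspace(0, 100, 512)), np.zeros(512)])
print(energy_vad(sig))  # → [False False  True  True False False]
```

A real system would add hangover smoothing and an adaptive noise floor; the fixed relative threshold here is only for illustration.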

90 citations


Journal ArticleDOI
TL;DR: By using the Immersive Virtual Reality technique, some visual and acoustical aspects of the impact of a wind farm on a sample of subjects were assessed and analyzed and showed that, regarding the number of wind turbines, the visual component has a weak effect on individual reactions, while the colour influences both visual and auditory individual reaction, although in a different way.
Abstract: Preserving the soundscape and geographic extension of quiet areas is a great challenge against the wide-spreading of environmental noise. The E.U. Environmental Noise Directive underlines the need to preserve quiet areas as a new aim for the management of noise in European countries. At the same time, due to their low population density, rural areas characterized by suitable wind are considered appropriate locations for installing wind farms. However, despite the fact that wind farms are represented as environmentally friendly projects, these plants are often viewed as visual and audible intruders, that spoil the landscape and generate noise. Even though the correlations are still unclear, it is obvious that visual impacts of wind farms could increase due to their size and coherence with respect to the rural/quiet environment. In this paper, by using the Immersive Virtual Reality technique, some visual and acoustical aspects of the impact of a wind farm on a sample of subjects were assessed and analyzed. The subjects were immersed in a virtual scenario that represented a situation of a typical rural outdoor scenario that they experienced at different distances from the wind turbines. The influence of the number and the colour of wind turbines on global, visual and auditory judgment were investigated. The main results showed that, regarding the number of wind turbines, the visual component has a weak effect on individual reactions, while the colour influences both visual and auditory individual reactions, although in a different way.

76 citations


Journal ArticleDOI
TL;DR: In this paper, the authors examined longitudinal associations of aircraft noise exposure at primary school with children's reading comprehension, noise annoyance, and psychological health at secondary school, using a six-year follow-up of 461 children.

75 citations


Journal ArticleDOI
TL;DR: Experimental results show that the proposed method improves AEI performance compared with the direct method (i.e., feature vector is extracted from the audio recording directly), and the proposed scheme is robust to MP3 compression attack.
Abstract: An audio recording is subject to a number of possible distortions and artifacts. Consider, for example, artifacts due to acoustic reverberation and background noise. The acoustic reverberation depends on the shape and the composition of the room, and it causes temporal and spectral smearing of the recorded sound. The background noise, on the other hand, depends on the secondary audio source activities present in the evidentiary recording. Extraction of acoustic cues from an audio recording is an important but challenging task. Temporal changes in the estimated reverberation and background noise can be used for dynamic acoustic environment identification (AEI), audio forensics, and ballistic settings. We describe a statistical technique based on spectral subtraction to estimate the amount of reverberation and nonlinear filtering based on particle filtering to estimate the background noise. The effectiveness of the proposed method is tested using a data set consisting of speech recordings of two human speakers (one male and one female) made in eight acoustic environments using four commercial grade microphones. Performance of the proposed method is evaluated for various experimental settings such as microphone independent, semi- and full-blind AEI, and robustness to MP3 compression. Performance of the proposed framework is also evaluated using Temporal Derivative-based Spectrum and Mel-Cepstrum (TDSM)-based features. Experimental results show that the proposed method improves AEI performance compared with the direct method (i.e., feature vector is extracted from the audio recording directly). In addition, experimental results also show that the proposed scheme is robust to MP3 compression attack.
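A minimal sketch of the spectral-subtraction idea underlying the background-noise estimator. The frame length, the spectral floor factor, and the use of the true noise spectrum as the "estimate" are illustrative simplifications, not the paper's actual procedure:

```python
import numpy as np

def spectral_subtract(noisy, noise_est, floor=0.05):
    """Subtract a noise magnitude-spectrum estimate from a noisy frame's spectrum."""
    spec = np.fft.rfft(noisy)
    mag, phase = np.abs(spec), np.angle(spec)
    # Spectral floor keeps magnitudes non-negative and limits musical noise
    clean_mag = np.maximum(mag - noise_est, floor * mag)
    return np.fft.irfft(clean_mag * np.exp(1j * phase), n=len(noisy))

rng = np.random.default_rng(0)
n = 1024
tone = np.sin(2 * np.pi * 50 * np.arange(n) / n)   # stand-in for the clean signal
noise = 0.3 * rng.standard_normal(n)
noisy = tone + noise

# In practice the noise spectrum is estimated from speech-free frames;
# here it is taken from the noise itself purely for illustration
noise_est = np.abs(np.fft.rfft(noise))
enhanced = spectral_subtract(noisy, noise_est)

err_before = np.mean((noisy - tone) ** 2)
err_after = np.mean((enhanced - tone) ** 2)
print(err_after < err_before)  # → True
```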

64 citations


Patent
15 Apr 2013
TL;DR: In this paper, a secondary path estimating adaptive filter is used to estimate the electro-acoustical path from the noise canceling circuit through the transducer so that source audio can be removed from the error signal.
Abstract: A personal audio device, such as a wireless telephone, generates an anti-noise signal from an error microphone signal and injects the anti-noise signal into the speaker or other transducer output to cause cancellation of ambient audio sounds. The error microphone is also provided proximate the speaker to provide an error signal indicative of the effectiveness of the noise cancellation. A secondary path estimating adaptive filter is used to estimate the electro-acoustical path from the noise canceling circuit through the transducer so that source audio can be removed from the error signal. Noise bursts are injected intermittently and the adaptation of the secondary path estimating adaptive filter controlled, so that the secondary path estimate can be maintained irrespective of the presence and amplitude of the source audio.
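The secondary-path estimation described here is, in essence, system identification with an adaptive filter driven by the injected probe noise. Below is a toy sketch under simplified assumptions (a known three-tap "true" path, continuous rather than intermittent probe noise, and no source audio), not the patent's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical secondary path (speaker-to-error-microphone response) to identify
true_path = np.array([0.6, 0.3, 0.1])

# Continuous probe noise (the patent injects intermittent bursts)
probe = rng.standard_normal(5000)
# Error-microphone signal: probe filtered through the unknown path
mic = np.convolve(probe, true_path)[:len(probe)]

# LMS identification of the secondary-path estimate
w = np.zeros(3)
mu = 0.01
for n in range(3, len(probe)):
    x = probe[n - 2:n + 1][::-1]   # most recent probe samples first
    y = w @ x                      # adaptive filter output
    e = mic[n] - y                 # estimation error
    w += mu * e * x                # LMS coefficient update

print(np.round(w, 2))
```

After convergence the filter taps approximate the path response, which is what allows the source audio to be removed from the error signal in the full system.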

59 citations


Journal ArticleDOI
TL;DR: In this paper, the authors found that males responded to masking tones by shifting song frequencies after an average of 66.4 seconds from tone onset, whereas frequency shifts in the presence of non-masking tones occurred only after an average of 95.8 seconds.

51 citations


Journal Article
Christine Erbe1
TL;DR: The effects of underwater noise and the ranges over which they happen depend on the acoustic characteristics of the noise (level, spectral distribution, duration, duty cycle etc.), the sound propagation environment, and the characteristics of an acoustic receptor (the animal).
Abstract: Introduction: The ocean is not a quiet place. It is naturally noisy with sounds from physical (wind, waves, rain, ice) and biological sources (whales, dolphins, fish, crustaceans etc.). Anthropogenic contribution to underwater noise has increased rapidly in the past century. In some parts of the world, low-frequency ambient noise increased by 3.3 dB between 1950 and 2007, which was attributed to commercial shipping [1]. As ocean water conducts light very poorly but sound very well, many marine animals have evolved to rely primarily on their auditory system for orientation, communication, foraging and sensing their environment. For example, humpback whales (Megaptera novaeangliae) sing songs for hours to days. Killer whale (Orcinus orca) pods sharing the same geographic habitat have different dialects, and can be told apart from their calls. Odontocetes (toothed whales) use echolocation (active sonar) to navigate and forage. Fish and shrimp sing evening choruses. Coral larvae tune in to reef sounds for homing purposes. Underwater noise can interfere with all of these functions on an individual and ultimately a population level. The effects of noise and the ranges over which they happen depend on the acoustic characteristics of the noise (level, spectral distribution, duration, duty cycle etc.), the sound propagation environment, and the characteristics of the acoustic receptor (the animal). Figure 1 shows a sketch of the potential zones of impact. These types of impact have been demonstrated in species of marine mammals and fish. As sound spreads through the ocean away from its source, the sound level decreases. At the longest ranges, a sound might barely be detectable. For behavioural responses to occur, a sound would mostly have to be significantly above ambient levels and the animal's audiogram. However, avoidance at tens of km, estimated to be at the limit of audibility, has been reported in beluga whales (Delphinapterus leucas) [2].

44 citations


Patent
15 Apr 2013
TL;DR: In this paper, a secondary path estimating adaptive filter is used to estimate the electro-acoustical path from the noise canceling circuit through the transducer so that source audio can be removed from the error signal.
Abstract: A personal audio device, such as a wireless telephone, includes an adaptive noise canceling (ANC) circuit that adaptively generates an anti-noise signal from a reference microphone signal and injects the anti-noise signal into the speaker or other transducer output to cause cancellation of ambient audio sounds. An error microphone is also provided proximate to the speaker to provide an error signal indicative of the effectiveness of the noise cancellation. A secondary path estimating adaptive filter is used to estimate the electro-acoustical path from the noise canceling circuit through the transducer so that source audio can be removed from the error signal. Adaptation of adaptive filters is sequenced so that update of their coefficients does not cause instability or error in the update. A level of the source audio with respect to the ambient audio can be determined to determine whether the system may generate erroneous anti-noise and/or become unstable.

Journal ArticleDOI
TL;DR: This letter presents a voice activity detection (VAD) approach using non-negative sparse coding to improve the detection performance in low signal-to-noise ratio (SNR) conditions and demonstrates that the VAD approach has a good performance in low SNR conditions.
Abstract: This letter presents a voice activity detection (VAD) approach using non-negative sparse coding to improve the detection performance in low signal-to-noise ratio (SNR) conditions. The basic idea is to use features extracted from a noise-reduced representation of original audio signals. We decompose the magnitude spectrum of an audio signal on a speech dictionary learned from clean speech and a noise dictionary learned from noise samples. Only coefficients corresponding to the speech dictionary are considered and used as the noise-reduced representation of the signal for feature extraction. A conditional random field (CRF) is used to model the correlation between feature sequences and voice activity labels along audio signals. Then, we assign the voice activity labels for a given audio by decoding the CRF. Experimental results demonstrate that our VAD approach has a good performance in low SNR conditions.
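The core decomposition step can be illustrated with a toy example. The dictionaries below are hand-made stand-ins for the learned speech and noise dictionaries, and the multiplicative update is one standard way to obtain non-negative coefficients; the paper's exact sparse-coding formulation may differ:

```python
import numpy as np

def nn_decompose(v, D, n_iter=200):
    """Non-negative coefficients h approximately minimizing ||v - D h||
    via multiplicative updates (an NMF-style rule with a fixed dictionary)."""
    h = np.ones(D.shape[1])
    for _ in range(n_iter):
        h *= (D.T @ v) / (D.T @ (D @ h) + 1e-12)
    return h

# Toy dictionaries (columns are atoms); in practice these are learned
# from clean speech and from noise samples respectively
D_speech = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]])
D_noise = np.array([[0.0], [0.0], [1.0]])
D = np.hstack([D_speech, D_noise])

spectrum = np.array([0.8, 0.2, 0.5])          # one mixture "magnitude spectrum"
h = nn_decompose(spectrum, D)
speech_energy = h[:D_speech.shape[1]].sum()   # noise-reduced representation
print(round(speech_energy, 2))  # → 1.0
```

Only the speech-dictionary coefficients are kept as the noise-reduced feature; the paper then feeds such features into a CRF to label voice activity.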

Journal ArticleDOI
TL;DR: NC headphones show some promise as possible replacements for conventional earphones in school hearing screening programs, with lower referral rates at 500 Hz, particularly at the 25 dB HL criterion level.
Abstract: Background Excessive ambient noise in school settings is a major concern for school hearing screening as it typically masks pure tone test stimuli (particularly 500 Hz and below). This results in false positive findings and subsequent unnecessary follow-up. With advances in technology, noise-cancelling headphones have been developed that reduce low frequency noise by superimposing an anti-phase signal onto the primary noise. This research study examined the utility of noise-cancelling headphone technology in a school hearing screening environment.

Journal ArticleDOI
TL;DR: In this article, the authors examined the effect of noise cancellation technology (e.g., headphones) on concurrent task performance within an aviation environment, namely the cabin of commercial operations.

Journal ArticleDOI
TL;DR: It appears that traffic noise does not negatively affect mate attraction in these three species of anurans, and the previously documented negative effects of roads on anuran populations are likely caused mainly by road mortality.
Abstract: We previously found that males of two anuran species – Hyla versicolor and Rana clamitans – alter their mating calls in response to traffic noise. To test whether these alterations compensate for an effect of traffic noise on mate attraction, we (1) recorded a male calling at a quiet site; (2) played traffic noise at the same male and recorded its altered call; (3) used these recordings to attract females to a trap at sites either with or without broadcast traffic noise. The calls produced without traffic noise attracted fewer females when they were played at sites with traffic noise than when they were played at sites without noise. However, the calls of the same individuals produced with traffic noise attracted as many females at sites with noise as at sites without noise, and they attracted as many females as did the call of the same male made without noise and played at sites without noise (the ‘natural’ situation). Therefore, for these species, traffic noise does not affect mate attraction; males change their calls to compensate for a potential effect of traffic noise on mate attraction. A third species – Bufo americanus – does not alter its call in response to traffic noise, and its call made in the absence or presence of traffic noise was equally able to attract females in the absence or presence of traffic noise, indicating that traffic noise does not negatively affect mate attraction. Therefore, it appears that traffic noise does not negatively affect mate attraction in these three species of anurans. We suggest that, if our results apply to anurans in general, the previously documented negative effects of roads on anuran populations are likely caused mainly by road mortality. If this is true, road mitigation for anurans should focus mainly on reducing this mortality.

Patent
Vasu Iyengar1, Sorin V. Dusan1
06 Jun 2013
TL;DR: In this article, a noise suppression system uses two types of noise estimators, a more aggressive one and a less aggressive one, and decisions are made on how to select or combine their outputs into a usable noise estimate in different speech and noise conditions.
Abstract: Digital signal processing techniques for automatically reducing audible noise from a sound recording that contains speech. A noise suppression system uses two types of noise estimators, a more aggressive one and a less aggressive one. Decisions are made on how to select or combine their outputs into a usable noise estimate in different speech and noise conditions. A 2-channel noise estimator is described. Other embodiments are also described and claimed.

Journal ArticleDOI
TL;DR: This application-oriented paper works out a set of local, case-dependent fusion rules that can be used to combine forward and backward detection alarms, yielding noticeable performance improvements, compared to the traditional methods, based on unidirectional processing.
Abstract: In this application-oriented paper we consider the problem of elimination of impulsive disturbances, such as clicks, pops and record scratches, from archive audio recordings. The proposed approach is based on bidirectional processing-noise pulses are localized by combining the results of forward-time and backward-time signal analysis. Based on the results of specially designed empirical tests (rather than on the results of theoretical analysis), incorporating real audio files corrupted by real impulsive disturbances, we work out a set of local, case-dependent fusion rules that can be used to combine forward and backward detection alarms. This allows us to localize noise pulses more accurately and more reliably, yielding noticeable performance improvements, compared to the traditional methods, based on unidirectional processing. The proposed approach is carefully validated using both artificially corrupted audio files and real archive gramophone recordings.
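The bidirectional idea can be sketched with a deliberately crude detector: run the same outlier test on the signal and on its time reversal, then fuse the two alarm sequences. The first-difference residual and the simple "OR" fusion below are illustrative placeholders for the paper's model-based detectors and case-dependent fusion rules:

```python
import numpy as np

def detect_clicks(x, k=4.0):
    """Flag samples whose prediction residual is an outlier (toy forward detector)."""
    resid = np.diff(x, prepend=x[0])              # crude residual: first difference
    sigma = np.median(np.abs(resid)) / 0.6745 + 1e-12  # robust scale estimate
    return np.abs(resid) > k * sigma

# Clean tone with one injected impulsive disturbance
t = np.arange(400)
x = np.sin(0.05 * t)
x[200] += 5.0

fwd = detect_clicks(x)                 # forward-time analysis
bwd = detect_clicks(x[::-1])[::-1]     # backward-time analysis, mapped back
fused = fwd | bwd                      # a simple "OR" fusion rule
print(np.where(fused)[0])
```

Fusing both directions localizes the pulse from both sides, which is the effect the paper exploits with far more refined detectors and fusion logic.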

Journal ArticleDOI
07 Feb 2013-PLOS ONE
TL;DR: An impaired ability to efficiently process envelope and fine structure cues of the speech signal may be the cause of the extreme difficulties faced during speech perception in noise by listeners with Auditory Neuropathy.
Abstract: Aim The present study evaluated the relation between speech perception in the presence of background noise and temporal processing ability in listeners with Auditory Neuropathy (AN). Method The study included two experiments. In the first experiment, temporal resolution of listeners with normal hearing and those with AN was evaluated using measures of temporal modulation transfer function and frequency modulation detection at modulation rates of 2 and 10 Hz. In the second experiment, speech perception in quiet and noise was evaluated at three signal to noise ratios (SNR) (0, 5, and 10 dB). Results Results demonstrated that listeners with AN performed significantly poorer than normal hearing listeners in both amplitude modulation and frequency modulation detection, indicating significant impairment in extracting envelope as well as fine structure cues from the signal. Furthermore, there was significant correlation seen between measures of temporal resolution and speech perception in noise. Conclusion Results suggested that an impaired ability to efficiently process envelope and fine structure cues of the speech signal may be the cause of the extreme difficulties faced during speech perception in noise by listeners with AN.

Journal ArticleDOI
TL;DR: This paper presents a new lossless audio steganography approach based on Integer-to-Integer Lifting Wavelet Transform (Int2Int LWT) and Least Significant Bits (LSBs) substitution that has excellent transparency and robustness tests show immunity of the method against additive noise and perceptual statistical analysis.
Abstract: This paper presents a new lossless audio steganography approach based on Integer-to-Integer Lifting Wavelet Transform (Int2Int LWT) and Least Significant Bits (LSBs) substitution. In order to increase the security level, a simple encryption with an adaptive key has been proposed. The experimental results show that this approach has excellent transparency (above 45 dB of signal-to-noise ratio) with a fixed high embedding capacity (25% of the audio cover signal size) and full recovery of the hidden secret message. Furthermore, robustness tests show immunity of the method against additive noise and perceptual statistical analysis. The proposed hiding and recovery procedures are simple and symmetric; therefore the method can easily be used for real-time covert communication.
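The LSB-substitution half of the scheme is easy to illustrate on raw samples. Note the paper embeds in Int2Int LWT coefficients and adds encryption; this sketch skips both and writes bits directly into time-domain int16 samples:

```python
import numpy as np

def lsb_embed(samples, bits):
    """Hide a bit string in the least significant bits of int16 audio samples."""
    out = samples.copy()
    out[:len(bits)] = (out[:len(bits)] & ~1) | bits   # clear LSB, then set it
    return out

def lsb_extract(samples, n_bits):
    return samples[:n_bits] & 1

cover = np.array([1000, -2000, 3000, 4001, -5002], dtype=np.int16)
secret = np.array([1, 0, 1, 1], dtype=np.int16)

stego = lsb_embed(cover, secret)
recovered = lsb_extract(stego, len(secret))
print(recovered)                       # → [1 0 1 1]
print(np.max(np.abs(stego - cover)))   # → 1 (each sample changes by at most one LSB)
```

Embedding in transform coefficients rather than raw samples, as the paper does, spreads the distortion and improves robustness.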

Journal ArticleDOI
TL;DR: The differential effects of noise on P1, N1, and P2 suggest differences in auditory processes underlying these peaks, and the combination of level and signal-to-noise ratio should be considered when using cortical auditory evoked potentials as an electrophysiological indicator of degraded speech processing.
Abstract: Young adults with no history of hearing concerns were tested to investigate their /da/-evoked cortical auditory evoked potentials (P1-N1-P2) recorded from 32 scalp electrodes in the presence and absence of noise at three different loudness levels (soft, comfortable, and loud), at a fixed signal-to-noise ratio (+3 dB). P1 peak latency significantly increased at soft and loud levels, and N1 and P2 latencies increased at all three levels in the presence of noise, compared with the quiet condition. P1 amplitude was significantly larger in quiet than in noise conditions at the loudest level. N1 amplitude was larger in quiet than in noise for the soft level only. P2 amplitude was reduced in the presence of noise to a similar degree at all loudness levels. The differential effects of noise on P1, N1, and P2 suggest differences in auditory processes underlying these peaks. The combination of level and signal-to-noise ratio should be considered when using cortical auditory evoked potentials as an electrophysiological indicator of degraded speech processing.

Journal ArticleDOI
TL;DR: In this article, a laboratory simulation was employed to differentiate between three types of aircraft noise common to national parks: overflight noise, helicopter noise, and propeller plane noise, with jet airplanes being the least negative when noise was present.

Patent
12 Jul 2013
TL;DR: In this article, the authors proposed a method to adjust the loudness control system to minimize undesirable effects during the transition between loudness levels, where shifts of content from a high overall loudness level to a lower overall loudness level are detected.
Abstract: Loudness control systems or methods may normalize audio signals to a predetermined loudness level. If the audio signal includes moderate background noise, then the background noise may also be normalized to the target loudness level. Noise signals may be detected using content-versus-noise classification, and a loudness control system or method may be adjusted based on the detection of noise. Noise signals may be detected by signal analysis in the frequency domain or in the time domain. Loudness control systems may also produce undesirable audio effects when content shifts from a high overall loudness level to a lower overall loudness level. Such loudness drops may be detected, and the loudness control system may be adjusted to minimize the undesirable effects during the transition between loudness levels.
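Normalization to a predetermined loudness level can be sketched with a simple RMS-based gain computation. Real systems use perceptual loudness measures (such as those in broadcast loudness standards) rather than plain RMS, and the -23 dBFS target here is an illustrative assumption:

```python
import numpy as np

def normalize_loudness(x, target_dbfs=-23.0):
    """Scale a signal so its RMS level matches a target level in dBFS
    (a crude stand-in for a perceptual loudness measure)."""
    rms = np.sqrt(np.mean(x ** 2))
    current_dbfs = 20.0 * np.log10(rms + 1e-12)
    gain_db = target_dbfs - current_dbfs
    return x * 10.0 ** (gain_db / 20.0)

sig = 0.5 * np.sin(np.linspace(0, 100, 48000))
out = normalize_loudness(sig, target_dbfs=-23.0)
out_dbfs = 20.0 * np.log10(np.sqrt(np.mean(out ** 2)))
print(round(out_dbfs, 1))  # → -23.0
```

The abstract's point is that such a gain, applied blindly, amplifies background noise toward the target as well, which is why the system gates the control on a content-versus-noise classification.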

Patent
23 Jul 2013
TL;DR: In this article, a beamformer module was used to employ a first noise cancellation algorithm to the audio signal and then a second noise reduction algorithm was applied in proportion to the difference between the measured direction-of-arrival and the target direction of arrival.
Abstract: Systems and methods of improved noise reduction using direction of arrival information include: receiving an audio signal from two or more acoustic sensors; applying a beamformer module to employ a first noise cancellation algorithm to the audio signal; applying a noise reduction post-filter module to the audio signal, the application of which includes: estimating a current noise spectrum of the received audio signal after the application of the first noise cancellation algorithm; using spatial information derived from the audio signal received from the two or more acoustic sensors to determine a measured direction-of-arrival by estimating the current time-delay between the acoustic sensor inputs; comparing the measured direction-of-arrival to a target direction-of-arrival; applying a second noise reduction algorithm to the audio signal in proportion to the difference between the measured direction-of-arrival and the target direction-of-arrival; and outputting a single audio stream with reduced background noise.
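The direction-of-arrival measurement rests on estimating the time delay between the sensor signals, which can be sketched with a cross-correlation peak search. The three-sample delay and white-noise source below are contrived; a real system would map the delay to an angle using the microphone spacing and the speed of sound:

```python
import numpy as np

def estimate_delay(a, b):
    """Time delay (in samples) of a relative to b via the cross-correlation peak."""
    corr = np.correlate(a, b, mode='full')
    return int(np.argmax(corr)) - (len(b) - 1)

rng = np.random.default_rng(1)
src = rng.standard_normal(1000)
mic1 = src
mic2 = np.roll(src, 3)     # second sensor hears the source 3 samples later

delay = estimate_delay(mic2, mic1)
print(delay)  # → 3
```

The post-filter described above would then compare this measured delay (hence direction) against the target direction and scale its noise reduction by the mismatch.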

Journal ArticleDOI
TL;DR: An efficient content-based audio classification approach to classify audio signals into broad genres using a fuzzy c-means (FCM) algorithm that outperforms the existing state-of-the-art audio classification systems by more than 11% in classification performance.
Abstract: Content-based audio signal classification into broad categories such as speech, music, or speech with noise is the first step before any further processing such as speech recognition, content-based indexing, or surveillance systems. In this paper, we propose an efficient content-based audio classification approach to classify audio signals into broad genres using a fuzzy c-means (FCM) algorithm. We analyze different characteristic features of audio signals in time, frequency, and coefficient domains and select the optimal feature vector by employing a novel analytical scoring method for each feature. We utilize an FCM-based classification scheme and apply it on the extracted normalized optimal feature vector to achieve an efficient classification result. Experimental results demonstrate that the proposed approach outperforms the existing state-of-the-art audio classification systems by more than 11% in classification performance.
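A compact sketch of the fuzzy c-means iteration used for classification, shown here on contrived one-dimensional "features"; the paper's feature vectors, fuzzifier value, and stopping criterion may differ:

```python
import numpy as np

def fuzzy_cmeans(X, c=2, m=2.0, n_iter=100, seed=0):
    """Basic fuzzy c-means: returns (centers, membership matrix U of shape (c, N))."""
    rng = np.random.default_rng(seed)
    N = len(X)
    U = rng.random((c, N))
    U /= U.sum(axis=0)                        # memberships sum to 1 per point
    for _ in range(n_iter):
        Um = U ** m                           # fuzzified memberships
        centers = (Um @ X) / Um.sum(axis=1)   # membership-weighted means
        d = np.abs(X[None, :] - centers[:, None]) + 1e-12
        U = 1.0 / (d ** (2 / (m - 1)))        # standard FCM membership update
        U /= U.sum(axis=0)
    return centers, U

# Two well-separated one-dimensional "feature" clusters
X = np.array([0.1, 0.2, 0.15, 5.0, 5.1, 4.9])
centers, U = fuzzy_cmeans(X)
labels = U.argmax(axis=0)
print(np.sort(centers))
```

Unlike hard k-means, each signal gets a graded membership in every class, which is what allows borderline material (e.g., speech with noise) to be handled gracefully.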

Book
26 Jul 2013
TL;DR: A textbook on the science of sound and music related to audio, with chapters spanning vibrations, sound waves, Fourier analysis, the auditory system and loudness perception, audio transducers and systems, digital audio, speech, and musical instruments.
Abstract: Sound, Music, and Science.- Vibrations 1.- Vibrations 2.- Instrumentation.- Sound Waves.- Wave Properties.- Standing Waves.- Standing Waves in Pipes.- Fourier Analysis and Synthesis.- Sound Intensity.- The Auditory System.- Loudness Perception.- Pitch.- Localization of Sound.- Sound Environments.- Audio Transducers.- Distortion and Noise.- Audio Systems.- Loudspeakers.- Digital Audio.- Broadcasting.- Speech.- Brass Musical Instruments.- Woodwind Instruments.- String Instruments.- Percussion Instruments.- Electronic Music.

Journal ArticleDOI
TL;DR: The proposed scheme can effectively authenticate the veracity and integrity of audio content and greatly expands the applicability of the audio watermarking scheme.
Abstract: In this paper, a novel semi-fragile watermarking scheme for authenticating an audio signal based on the dual-tree complex wavelet transform (DT-CWT) and discrete cosine transform (DCT) is proposed. Specifically, the watermark data are efficiently inserted into the coefficients of the low-frequency sub-band of the DT-CWT, taking advantage of both DCT and quantization index modulation (QIM). First, the original digital audio signal is segmented and then transformed with the DT-CWT. Second, based on the energy compression property, the low-frequency sub-band coefficients of the DT-CWT domain are transformed with the DCT, and the DC component is utilized to embed one distorted watermark bit by the QIM technique. Finally, the inverse DCT and DT-CWT are implemented in order on the watermarked coefficients of each audio segment to obtain a watermarked audio signal. Simulation results show that the hybrid embedding domain constructed by DT-CWT and DCT is effective, and the proposed watermarking scheme is not only inaudible, but also robust against content-persistent non-malicious audio signal processing operations, such as MP3 compression, noise addition, re-sampling, re-quantization, etc. Furthermore, the proposed scheme can effectively authenticate the veracity and integrity of audio content and greatly expands the applicability of the audio watermarking scheme.
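The QIM embedding of one watermark bit can be sketched as choosing between two interleaved quantization lattices. The step size `delta` and the scalar coefficient are illustrative assumptions; the paper applies this to the DC component of DCT-transformed DT-CWT sub-band coefficients:

```python
import numpy as np

def qim_embed(coeff, bit, delta=0.1):
    """Quantize the coefficient onto one of two offset lattices, chosen by the bit."""
    offset = delta / 2 if bit else 0.0
    return np.round((coeff - offset) / delta) * delta + offset

def qim_extract(coeff, delta=0.1):
    """Decode the bit whose lattice point lies closer to the received coefficient."""
    d0 = abs(coeff - np.round(coeff / delta) * delta)
    d1 = abs(coeff - (np.round((coeff - delta / 2) / delta) * delta + delta / 2))
    return 0 if d0 <= d1 else 1

c = 0.4237  # a hypothetical DC coefficient
w0 = qim_embed(c, 0)
w1 = qim_embed(c, 1)
print(qim_extract(w0), qim_extract(w1))  # → 0 1
```

Because decoding only asks which lattice is nearer, the bit survives perturbations smaller than `delta / 4`, which is the source of the scheme's robustness to mild processing.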

Patent
12 Mar 2013
TL;DR: In this paper, a system for removing noise from an audio signal is described, where content playing in the background during a voice command or phone call is removed from the audio signal representing the voice command and phone call.
Abstract: A system for removing noise from an audio signal is described. For example, noise caused by content playing in the background during a voice command or phone call may be removed from the audio signal representing the voice command or phone call. By removing noise, the signal to noise ratio of the audio signal may be improved.

Journal ArticleDOI
TL;DR: Using the discrete wavelet transform to extract trends and seasonal signals, and the Allan variance to characterise the residual noise, which allows evaluation of the positioning stability of stations, nonlinear trends and annual and semi-annual signals contained in the studied time series are revealed.

Proceedings ArticleDOI
22 Nov 2013
TL;DR: A method for reducing noise from audio or speech signals using LMS adaptive filtering algorithm is proposed, where the signal is filtered in the time domain, while the filter coefficients are calculated adaptively by steepest-descent algorithm.
Abstract: Noise reduction of audio signals is a key problem in speech enhancement, speech recognition and speech communication applications. It has attracted a considerable amount of research attention over the past several decades. The most widely used method is optimal linear filtering, which obtains a clean audio estimate by passing the noisy observation through an optimal linear filter or transformation. Representative algorithms include Wiener filtering, Kalman filtering, spectral restoration and subspace methods. Much theoretical analysis and many experiments have shown that the optimal filtering technique can reduce the level of noise present in audio signals and improve the corresponding signal-to-noise ratio (SNR). However, one of the main problems of the optimal filtering method is the complexity of the algorithm, which is based on SVD or QR decompositions and is difficult to implement in most real signal applications. In this paper, a method for reducing noise in audio or speech signals using the LMS adaptive filtering algorithm is proposed. The signal is filtered in the time domain, while the filter coefficients are calculated adaptively by a steepest-descent algorithm. The simulation results exhibit a higher quality of the processed signal than the unprocessed signal in noisy conditions.
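The LMS approach described can be sketched as a classic two-input adaptive noise canceller: a primary input containing signal plus noise, and a reference input correlated with the noise only. All signals, the noise path, and the step size below are contrived for illustration and are not the paper's experimental setup:

```python
import numpy as np

rng = np.random.default_rng(0)
N, taps, mu = 20000, 8, 0.005

speech = np.sin(2 * np.pi * 0.01 * np.arange(N))       # stand-in for clean speech
noise_src = rng.standard_normal(N)
noise = np.convolve(noise_src, [0.5, 0.3, 0.2])[:N]    # noise path to primary mic
primary = speech + noise                               # noisy observation
reference = noise_src                                  # noise-correlated reference

w = np.zeros(taps)
out = np.zeros(N)
for n in range(taps, N):
    x = reference[n - taps + 1:n + 1][::-1]   # most recent reference samples first
    y = w @ x                                 # filter's estimate of the noise
    e = primary[n] - y                        # error = enhanced output sample
    w += mu * e * x                           # steepest-descent (LMS) update
    out[n] = e

# SNR before and after, measured over the converged second half
tail = slice(N // 2, N)
snr_in = 10 * np.log10(np.mean(speech[tail] ** 2) / np.mean(noise[tail] ** 2))
snr_out = 10 * np.log10(np.mean(speech[tail] ** 2) / np.mean((out[tail] - speech[tail]) ** 2))
print(round(snr_in, 1), round(snr_out, 1))
```

The filter converges toward the noise path, so the error signal retains the speech while the correlated noise is cancelled, avoiding the SVD/QR cost the abstract criticizes.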

Proceedings ArticleDOI
01 Sep 2013
TL;DR: A new musical audio denoising technique is proposed, when the noise is modeled by an α-stable distribution, based on sparse linear regression with structured priors and Markov Chain Monte Carlo inference.
Abstract: A new musical audio denoising technique is proposed, when the noise is modeled by an α-stable distribution. The proposed technique is based on sparse linear regression with structured priors and uses Markov Chain Monte Carlo inference to estimate the clean signal model parameters and the α-stable noise model parameters. Experiments on noisy Greek folk music excerpts demonstrate better denoising for the α-stable noise assumption than the Gaussian white noise one.