Topic

Noise

About: Noise is a research topic. Over the lifetime, 5111 publications have been published within this topic, receiving 69407 citations.


Papers
Journal ArticleDOI
TL;DR: An audio inpainting framework is proposed that recovers portions of audio data distorted by impairments such as impulsive noise, clipping, and packet loss; for audio declipping, the approach is shown to outperform state-of-the-art and commercially available methods in terms of signal-to-noise ratio.
Abstract: We propose the audio inpainting framework that recovers portions of audio data distorted due to impairments such as impulsive noise, clipping, and packet loss. In this framework, the distorted data are treated as missing and their location is assumed to be known. The signal is decomposed into overlapping time-domain frames and the restoration problem is then formulated as an inverse problem per audio frame. Sparse representation modeling is employed per frame, and each inverse problem is solved using the Orthogonal Matching Pursuit algorithm together with a discrete cosine or a Gabor dictionary. The Signal-to-Noise Ratio performance of this algorithm is shown to be comparable or better than state-of-the-art methods when blocks of samples of variable durations are missing. We also demonstrate that the size of the block of missing samples, rather than the overall number of missing samples, is a crucial parameter for high quality signal restoration. We further introduce a constrained Matching Pursuit approach for the special case of audio declipping that exploits the sign pattern of clipped audio samples and their maximal absolute value, as well as allowing the user to specify the maximum amplitude of the signal. This approach is shown to outperform state-of-the-art and commercially available methods for audio declipping in terms of Signal-to-Noise Ratio.
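The per-frame recovery step described above — restrict a dictionary to the reliable samples, run Orthogonal Matching Pursuit, then synthesize the missing samples from the selected atoms — can be sketched as follows. This is a minimal illustration with a DCT dictionary only; the frame/overlap handling, the Gabor dictionary, and the clipping-specific constraints from the paper are omitted, and all function names are mine.

```python
import numpy as np
from scipy.fft import idct

def dct_dictionary(n):
    # Columns are orthonormal time-domain cosine atoms.
    return idct(np.eye(n), axis=0, norm="ortho")

def omp_inpaint(frame, known, D, k, tol=1e-10):
    """Fill the missing samples of one frame via Orthogonal Matching Pursuit.

    frame : length-n array (values where ~known are ignored)
    known : boolean mask marking the reliable samples
    D     : n x n dictionary, k : sparsity budget
    """
    Dk = D[known]                  # dictionary restricted to known rows
    y = frame[known]
    residual, support = y.copy(), []
    coef = np.zeros(0)
    for _ in range(k):
        if np.linalg.norm(residual) < tol:
            break                  # observed samples already explained
        j = int(np.argmax(np.abs(Dk.T @ residual)))   # best-matching atom
        if j not in support:
            support.append(j)
        # re-fit all selected atoms jointly (the "orthogonal" in OMP)
        coef, *_ = np.linalg.lstsq(Dk[:, support], y, rcond=None)
        residual = y - Dk[:, support] @ coef
    out = frame.copy()
    out[~known] = (D[:, support] @ coef)[~known]       # restore missing part
    return out
```

On a frame that is exactly sparse in the dictionary, a short missing block is recovered almost perfectly, which matches the paper's observation that the length of the missing block, not the total number of missing samples, governs restoration quality.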

229 citations

Proceedings ArticleDOI
05 Jun 2000
TL;DR: This paper restricts consideration to the case where only a single-microphone recording of the noisy signal is available and proposes a noise-estimation method based on temporal quantiles in the power spectral domain, which is compared with pause detection and recursive averaging.
Abstract: Elimination of additive noise from a speech signal is a fundamental problem in audio signal processing. In this paper we restrict our considerations to the case where only a single microphone recording of the noisy signal is available. The algorithms which we investigate proceed in two steps. First, the noise power spectrum is estimated. A method based on temporal quantiles in the power spectral domain is proposed and compared with pause detection and recursive averaging. The second step is to eliminate the estimated noise from the observed signal by spectral subtraction or Wiener filtering. The database used in the experiments comprises 6034 utterances of German digits and digit strings by 770 speakers in 10 different cars. Without noise reduction, we obtain an error rate of 11.7%. Quantile based noise estimation and Wiener filtering reduce the error rate to 8.6%. Similar improvements are achieved in an experiment with artificial, non-stationary noise.
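The two-step scheme above — estimate the noise power spectrum from temporal quantiles, then suppress it with a Wiener-style gain — can be sketched on a power spectrogram. The quantile value and the gain floor here are illustrative choices of mine, not the paper's exact parameters, and the STFT analysis/synthesis around this core is omitted.

```python
import numpy as np

def quantile_noise_estimate(power_spec, q=0.5):
    # power_spec: (frames, bins). Speech is sparse in time, so a middle
    # temporal quantile per frequency bin tracks the noise floor without
    # needing explicit pause detection.
    return np.quantile(power_spec, q, axis=0)

def wiener_gain(power_spec, noise_psd, floor=0.05):
    # Per-bin Wiener-style gain 1 - N/X, floored to limit musical noise.
    gain = 1.0 - noise_psd / np.maximum(power_spec, 1e-12)
    return np.maximum(gain, floor)
```

On a synthetic spectrogram with a stationary noise floor and intermittent "speech" energy, the quantile estimate recovers the floor even in bins that carry speech part of the time, and the gain passes speech-dominated cells while attenuating noise-only ones.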

226 citations

Journal ArticleDOI
TL;DR: The authors found that nightingales do not maximize song amplitude but regulate vocal intensity depending on the level of masking noise, which may serve to maintain a specific signal-to-noise ratio favorable for signal production.

223 citations

Journal ArticleDOI
TL;DR: Tests of frequency resolution were found to form a cluster that is approximately independent of audiometric loss; hearing loss for speech in noise is closely allied to frequency resolution, whereas hearing loss for speech in quiet is governed by audiometric loss.
Abstract: Relations between auditory functions, as expressed by coefficients of correlation, were studied for a group of 22 sensorineurally hearing‐impaired subjects with moderate losses. In addition to the audiogram, we measured frequency resolution, temporal resolution, and speech reception in quiet and in noise. Frequency resolution was derived from masking with comb‐filtered noise and from the psychophysical tuning curve, for both paradigms in simultaneous and in nonsimultaneous masking. The critical ratio was also determined. Temporal resolution was determined with intensity‐modulated noise and from backward and forward masking. All tests were performed at 1000 Hz. Correlations among tests were gathered in a matrix and subjected to a principal‐components analysis. It turned out that tests on frequency resolution form a cluster, and are approximately independent of audiometric loss. Furthermore, hearing loss for speech in noise is closely allied to frequency resolution, whereas hearing loss for speech in quiet is governed by audiometric loss.
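The analysis pipeline above — gather inter-test correlations into a matrix and run a principal-components analysis to find clusters of related auditory functions — can be sketched in a few lines. The correlation matrix below is synthetic, built only to show how two clusters of tests surface in the leading components; it is not the paper's data.

```python
import numpy as np

def principal_components(R):
    # Eigendecomposition of a correlation matrix: eigenvalues give the
    # variance explained per component, eigenvectors give the loadings.
    vals, vecs = np.linalg.eigh(R)
    order = np.argsort(vals)[::-1]       # sort by descending variance
    return vals[order], vecs[:, order]

# Hypothetical 6-test correlation matrix: two clusters of three tests,
# strongly correlated within a cluster (0.8), weakly across (0.1).
R = np.full((6, 6), 0.1)
R[:3, :3] = 0.8
R[3:, 3:] = 0.8
np.fill_diagonal(R, 1.0)
vals, vecs = principal_components(R)
```

The first two components absorb most of the variance, and the second component's loadings change sign between the two clusters — the same kind of structure the paper reports for frequency-resolution tests versus audiometric loss.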

210 citations

Journal ArticleDOI
TL;DR: It is demonstrated that more channels are needed in noise than in quiet to reach a high level of sentence understanding and that, as the S/N becomes poorer, more channels are needed to achieve a given level of performance.
Abstract: Sentences were processed through simulations of cochlear-implant signal processors with 6, 8, 12, 16, and 20 channels and were presented to normal-hearing listeners at +2 dB S/N and at −2 dB S/N. The signal-processing operations included bandpass filtering, rectification, and smoothing of the signal in each band, estimation of the rms energy of the signal in each band (computed every 4 ms), and generation of sinusoids with frequencies equal to the center frequencies of the bands and amplitudes equal to the rms levels in each band. The sinusoids were summed and presented to listeners for identification. At issue was the number of channels necessary to reach maximum performance on tests of sentence understanding. At +2 dB S/N, the performance maximum was reached with 12 channels of stimulation. At −2 dB S/N, the performance maximum was reached with 20 channels of stimulation. These results, in combination with the outcome that in quiet, asymptotic performance is reached with five channels of stimulation, demonstrate that more channels are needed in noise than in quiet to reach a high level of sentence understanding and that, as the S/N becomes poorer, more channels are needed to achieve a given level of performance.
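The processing chain described in the abstract — bandpass filter bank, rectification, envelope smoothing, and sinusoidal carriers at the band center frequencies — is the classic tone-vocoder simulation. A minimal sketch follows; the band edges, filter orders, and smoothing cutoff are my own assumptions (the paper's exact analysis parameters are not given here), and the continuous smoothed envelope stands in for the paper's 4-ms rms computation.

```python
import numpy as np
from scipy.signal import butter, sosfilt

def sine_vocoder(x, fs, n_channels=8, f_lo=100.0, f_hi=4000.0):
    """Tone-vocoder simulation of implant processing (illustrative sketch)."""
    edges = np.geomspace(f_lo, f_hi, n_channels + 1)    # log-spaced bands
    smooth = butter(2, 160.0 / (fs / 2), output="sos")  # envelope lowpass
    t = np.arange(len(x)) / fs
    out = np.zeros(len(x))
    for f1, f2 in zip(edges[:-1], edges[1:]):
        band_sos = butter(4, [f1 / (fs / 2), f2 / (fs / 2)],
                          btype="band", output="sos")
        band = sosfilt(band_sos, x)                 # bandpass filtering
        env = sosfilt(smooth, np.abs(band))         # rectify + smooth
        fc = np.sqrt(f1 * f2)                       # geometric band centre
        out += env * np.sin(2.0 * np.pi * fc * t)   # sinusoidal carrier
    return out
```

Feeding a pure tone through the vocoder yields output energy concentrated at the carrier of the band containing that tone, which is exactly the information reduction the listening experiments probe.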

209 citations


Network Information
Related Topics (5)
Speech processing
24.2K papers, 637K citations
73% related
Noise
110.4K papers, 1.3M citations
72% related
Signal processing
73.4K papers, 983.5K citations
69% related
Piston
176.1K papers, 825.4K citations
69% related
Hidden Markov model
28.3K papers, 725.3K citations
67% related
Performance Metrics
No. of papers in the topic in previous years

Year    Papers
2022    1
2021    125
2020    217
2019    224
2018    243
2017    214