Topic

Noise measurement

About: Noise measurement is a research topic. Over its lifetime, 19,776 publications have been published within this topic, receiving 308,180 citations.


Papers
01 Jan 2002
TL;DR: It is shown that in non-stationary noise environments and under low SNR conditions the IMCRA approach is very effective: compared to a competitive method it obtains a lower estimation error and, when integrated into a speech enhancement system, achieves improved speech quality and lower residual noise.
Abstract: Noise spectrum estimation is a fundamental component of speech enhancement and speech recognition systems. In this paper, we present an Improved Minima Controlled Recursive Averaging (IMCRA) approach for noise estimation in adverse environments involving non-stationary noise, weak speech components, and low input signal-to-noise ratio (SNR). The noise estimate is obtained by averaging past spectral power values, using a time-varying frequency-dependent smoothing parameter that is adjusted by the signal presence probability. The speech presence probability is controlled by the minima values of a smoothed periodogram. The proposed procedure comprises two iterations of smoothing and minimum tracking. The first iteration provides a rough voice activity detection in each frequency band. Then, smoothing in the second iteration excludes relatively strong speech components, which makes the minimum tracking during speech activity robust. We show that in non-stationary noise environments and under low SNR conditions, the IMCRA approach is very effective. In particular, compared to a competitive method, it obtains a lower estimation error, and when integrated into a speech enhancement system achieves improved speech quality and lower residual noise.

834 citations
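The core of the approach, recursive averaging whose smoothing parameter is pushed toward 1 when speech is likely to be present, can be illustrated with a short sketch. This is a simplified, hypothetical stand-in rather than the full IMCRA algorithm (which uses two iterations of smoothing and minimum tracking and a refined speech-presence probability model); the window length, thresholds, and smoothing constants below are illustrative assumptions.

```python
import numpy as np

def estimate_noise_psd(power_spec, alpha_d=0.85, win=100, ratio_thresh=5.0):
    """power_spec: (frames, bins) short-time power spectrogram.
    Returns a per-frame noise PSD estimate of the same shape."""
    n_frames, _ = power_spec.shape
    smoothed = power_spec[0].copy()        # smoothed periodogram
    noise = power_spec[0].copy()           # running noise estimate
    noise_track = np.empty_like(power_spec)

    for l in range(n_frames):
        # First-order recursive smoothing of the periodogram.
        smoothed = 0.7 * smoothed + 0.3 * power_spec[l]

        # Minimum of the periodogram over a sliding window (a crude stand-in
        # for IMCRA's paired minimum-tracking buffers).
        s_min = power_spec[max(0, l - win):l + 1].min(axis=0)

        # Rough per-bin speech-presence decision: smoothed power far above
        # the tracked minimum suggests speech activity.
        speech = (smoothed > ratio_thresh * np.maximum(s_min, 1e-12)).astype(float)

        # Time-varying smoothing parameter: close to 1 where speech is present,
        # so the noise estimate is barely updated during speech activity.
        alpha = alpha_d + (1.0 - alpha_d) * speech
        noise = alpha * noise + (1.0 - alpha) * power_spec[l]
        noise_track[l] = noise

    return noise_track
```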

Proceedings ArticleDOI
19 Mar 2008
TL;DR: This paper reformulates the problem by treating the 1-bit measurements as sign constraints and further constraining the optimization to recover a signal on the unit sphere, and demonstrates that this approach performs significantly better compared to the classical compressive sensing reconstruction methods, even as the signal becomes less sparse and as the number of measurements increases.
Abstract: Compressive sensing is a new signal acquisition technology with the potential to reduce the number of measurements required to acquire signals that are sparse or compressible in some basis. Rather than uniformly sampling the signal, compressive sensing computes inner products with a randomized dictionary of test functions. The signal is then recovered by a convex optimization that ensures the recovered signal is both consistent with the measurements and sparse. Compressive sensing reconstruction has been shown to be robust to multi-level quantization of the measurements, in which the reconstruction algorithm is modified to recover a sparse signal consistent with the quantized measurements. In this paper we consider the limiting case of 1-bit measurements, which preserve only the sign information of the random measurements. Although it is possible to reconstruct using the classical compressive sensing approach by treating the 1-bit measurements as ±1 measurement values, in this paper we reformulate the problem by treating the 1-bit measurements as sign constraints and further constraining the optimization to recover a signal on the unit sphere. Thus the sparse signal is recovered to within a scaling factor. We demonstrate that this approach performs significantly better compared to the classical compressive sensing reconstruction methods, even as the signal becomes less sparse and as the number of measurements increases.

793 citations
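The two ingredients of the reformulation, consistency with the measurement signs and a unit-norm sparse solution, can be illustrated with a short sketch. The iteration below follows the later binary iterative hard thresholding (BIHT) style rather than the fixed-point algorithm of the paper itself; the matrix sizes, step size, and toy example are illustrative assumptions.

```python
import numpy as np

def one_bit_cs_recover(Phi, y_sign, sparsity, n_iter=200, step=1.0):
    """Phi: (m, n) random measurement matrix, y_sign: (m,) in {-1, +1}.
    Returns a unit-norm, `sparsity`-sparse estimate of the signal direction."""
    m, n = Phi.shape
    x = np.zeros(n)

    for _ in range(n_iter):
        # Gradient-like step pushing sign(Phi @ x) toward the observed signs.
        residual = y_sign - np.sign(Phi @ x)
        x = x + (step / m) * Phi.T @ residual

        # Hard-threshold: keep only the largest-magnitude coefficients.
        drop = np.argsort(np.abs(x))[:-sparsity]
        x[drop] = 0.0

        # Project onto the unit sphere (the scale is unobservable from signs).
        norm = np.linalg.norm(x)
        if norm > 0:
            x /= norm

    return x

# Toy usage: recover the direction of a sparse vector from sign-only measurements.
rng = np.random.default_rng(0)
n, m, k = 256, 512, 8
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
x_true /= np.linalg.norm(x_true)
Phi = rng.standard_normal((m, n))
x_hat = one_bit_cs_recover(Phi, np.sign(Phi @ x_true), sparsity=k)
print("correlation with truth:", float(x_hat @ x_true))
```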

Journal ArticleDOI
TL;DR: A signal-dependent noise model, which gives the pointwise standard deviation of the noise as a function of the expectation of the pixel raw-data output, is composed of a Poissonian part, modeling the photon sensing, and a Gaussian part, for the remaining stationary disturbances in the output data.
Abstract: We present a simple and usable noise model for the raw-data of digital imaging sensors. This signal-dependent noise model, which gives the pointwise standard deviation of the noise as a function of the expectation of the pixel raw-data output, is composed of a Poissonian part, modeling the photon sensing, and a Gaussian part, for the remaining stationary disturbances in the output data. We further explicitly take into account the clipping of the data (over- and under-exposure), faithfully reproducing the nonlinear response of the sensor. We propose an algorithm for the fully automatic estimation of the model parameters given a single noisy image. Experiments with synthetic images and with real raw-data from various sensors prove the practical applicability of the method and the accuracy of the proposed model.

789 citations
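The model itself is easy to state in code: the variance of the raw-data noise is an affine function of the expected pixel value, var(y) = a·E[y] + b, and the observation is clipped to the sensor's output range. The sketch below only simulates data from the model with illustrative parameter values; it does not reproduce the paper's automatic single-image parameter estimation algorithm.

```python
import numpy as np

def simulate_raw_noise(clean, a=0.01, b=1e-4, rng=None):
    """clean: expected (noise-free) raw-data image, normalized to [0, 1].
    Returns a noisy observation following the clipped Poissonian-Gaussian model."""
    rng = np.random.default_rng() if rng is None else rng

    # Poissonian part: photon shot noise. Dividing by `a` converts the
    # normalized intensity into an expected photon-electron count, and the
    # rescaling by `a` gives variance a * clean.
    poisson_part = a * rng.poisson(np.clip(clean, 0, None) / a)

    # Gaussian part: signal-independent readout/thermal disturbances.
    gaussian_part = rng.normal(0.0, np.sqrt(b), size=clean.shape)

    # Clipping models over- and under-exposure of the sensor output.
    return np.clip(poisson_part + gaussian_part, 0.0, 1.0)

# Pointwise standard deviation of the (unclipped) noise as a function of the
# expected value E[y]: sigma(E[y]) = sqrt(a*E[y] + b).
expected = np.linspace(0.0, 1.0, 5)
print(np.sqrt(0.01 * expected + 1e-4))
```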

Journal ArticleDOI
H.T. Friis1
01 Jul 1944
TL;DR: In this article, a rigorous definition of the noise figure of radio receivers is given, which can be applied to four-terminal networks in general and is not limited to high-gain receivers.
Abstract: A rigorous definition of the noise figure of radio receivers is given in this paper. The definition is not limited to high-gain receivers, but can be applied to four-terminal networks in general. An analysis is made of the relationship between the noise figure of the receiver as a whole and the noise figures of its components. Mismatch relations between the components of the receiver and methods of measurements of noise figures are discussed briefly.

789 citations
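The cascade relationship analyzed in the paper is what is now usually quoted as the Friis formula: for stages with noise factors F1, F2, ... and available gains G1, G2, ..., the overall noise factor is F = F1 + (F2 - 1)/G1 + (F3 - 1)/(G1·G2) + .... A small helper evaluating it from per-stage values in dB is sketched below; the example stage values are arbitrary.

```python
import math

def cascaded_noise_figure_db(nf_db, gain_db):
    """nf_db: per-stage noise figures in dB; gain_db: per-stage available gains in dB.
    Returns the overall noise figure of the cascade in dB (Friis formula)."""
    f_lin = [10 ** (nf / 10) for nf in nf_db]   # noise factors, linear scale
    g_lin = [10 ** (g / 10) for g in gain_db]   # gains, linear scale

    total = f_lin[0]
    gain_product = 1.0
    for f, g_prev in zip(f_lin[1:], g_lin[:-1]):
        gain_product *= g_prev                  # G1, G1*G2, ...
        total += (f - 1.0) / gain_product
    return 10 * math.log10(total)

# Example (arbitrary values): a low-noise amplifier (NF 0.9 dB, gain 15 dB)
# followed by a mixer (NF 7 dB, gain -6 dB) and an IF amplifier (NF 3 dB,
# gain 20 dB). The first stage contributes most of the overall noise figure.
print(cascaded_noise_figure_db([0.9, 7.0, 3.0], [15.0, -6.0, 20.0]))
```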

Journal ArticleDOI
TL;DR: A systematic evaluation of the effect of noise in machine learning separates noise into two categories, class noise and attribute noise, and investigates the relationship between attribute noise and classification accuracy, the impact of noise at different attributes, and possible solutions in handling attribute noise.
Abstract: Real-world data is never perfect and can often suffer from corruptions (noise) that may impact interpretations of the data, models created from the data and decisions made based on the data. Noise can reduce system performance in terms of classification accuracy, time in building a classifier and the size of the classifier. Accordingly, most existing learning algorithms have integrated various approaches to enhance their learning abilities from noisy environments, but the existence of noise can still introduce serious negative impacts. A more reasonable solution might be to employ some preprocessing mechanisms to handle noisy instances before a learner is formed. Unfortunately, little research has been conducted to systematically explore the impact of noise, especially from the noise handling point of view. This has made various noise processing techniques less significant, specifically when dealing with noise that is introduced in attributes. In this paper, we present a systematic evaluation of the effect of noise in machine learning. Instead of taking any unified theory of noise to evaluate the noise impacts, we differentiate noise into two categories: class noise and attribute noise, and analyze their impacts on the system performance separately. Because class noise has been widely addressed in existing research efforts, we concentrate on attribute noise. We investigate the relationship between attribute noise and classification accuracy, the impact of noise at different attributes, and possible solutions in handling attribute noise. Our conclusions can be used to guide interested readers to enhance data quality by designing various noise handling mechanisms.

786 citations
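As a toy illustration of what attribute noise does in practice (not the paper's experimental protocol), the sketch below perturbs the training features of a standard dataset with increasing amounts of Gaussian noise and reports the test accuracy of a decision tree; the dataset, classifier, and noise model are arbitrary choices for the example.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

rng = np.random.default_rng(0)
feature_std = X_train.std(axis=0)

for noise_level in [0.0, 0.25, 0.5, 1.0, 2.0]:
    # Attribute noise: perturb the training features, leave labels untouched
    # (label corruption would be "class noise" in the paper's terminology).
    X_noisy = X_train + rng.normal(0, noise_level * feature_std, X_train.shape)
    clf = DecisionTreeClassifier(random_state=0).fit(X_noisy, y_train)
    print(f"noise level {noise_level:.2f}: "
          f"test accuracy {clf.score(X_test, y_test):.3f}")
```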


Network Information
Related Topics (5)
Feature extraction: 111.8K papers, 2.1M citations (88% related)
Convolutional neural network: 74.7K papers, 2M citations (84% related)
Deep learning: 79.8K papers, 2.1M citations (84% related)
Artificial neural network: 207K papers, 4.5M citations (83% related)
Wireless: 133.4K papers, 1.9M citations (83% related)
Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2023    77
2022    162
2021    495
2020    525
2019    489
2018    755