
Noise reduction

About: Noise reduction is a research topic. Over its lifetime, 25,121 publications have been published within this topic, receiving 300,815 citations. The topic is also known as: denoising & noise removal.


Papers
Journal ArticleDOI
TL;DR: This paper presents a two-stage deep convolutional neural network that models the noise and the medical image simultaneously, and introduces short-term and long-term connections that promote efficient information propagation between layers.
Abstract: Most existing medical image denoising methods estimate either the clean image or the residual noise, and they are usually designed for one specific noise type under a strong assumption about its distribution. However, not only random independent Gaussian or speckle noise but also structurally correlated ring or stripe noise is ubiquitous across medical imaging instruments. Explicitly modeling the distributions of these complex noises is extremely hard; they cannot be accurately captured by a Gaussian or mixture-of-Gaussians model. To overcome these two drawbacks, we propose to treat the image and noise components equally, naturally converting the denoising task into an image decomposition problem. More precisely, we present a two-stage deep convolutional neural network (CNN) that models the noise and the medical image simultaneously. On the one hand, we use both the image and the noise to separate them better; on the other hand, the noise subnetwork serves as a noise estimator that guides the image subnetwork with sufficient information about the noise, so different noise distributions and noise levels are handled easily. To cope with the vanishing-gradient problem in this very deep network, we introduce both short-term and long-term connections, which promote efficient information propagation between layers. Extensive experiments on several kinds of noisy medical images, such as computed tomography and ultrasound images, show that the proposed method consistently outperforms state-of-the-art denoising methods.

61 citations

Journal ArticleDOI
TL;DR: A novel nonlinear adaptive spatial filter (median-modified Wiener filter, MMWF) is compared with five well-established denoising techniques to suggest, by means of fuzzy-set evaluation, the best denoising approach to use in practice.
Abstract: Denoising is a fundamental early stage in 2-DE image analysis that strongly influences spot detection and pixel-based methods. A novel nonlinear adaptive spatial filter, the median-modified Wiener filter (MMWF), is here compared with five well-established denoising techniques (median, Wiener, Gaussian, and polynomial Savitzky-Golay filters; wavelet denoising) to suggest, by means of fuzzy-set evaluation, the best denoising approach to use in practice. Although the median filter and wavelet denoising achieved the best performance on spike and Gaussian noise respectively, they are unsuitable for the simultaneous removal of different types of noise because their best settings are noise-dependent. Conversely, MMWF, which ranked second in each single denoising category, was evaluated as the best filter for global denoising, since its best setting is invariant to the type of noise. In addition, the median filter eroded the edges of isolated spots and filled the space between close-set spots, whereas MMWF, thanks to a novel filter effect (the drop-off effect), does not suffer from this erosion problem, preserves the morphology of close-set spots, and avoids the spot and spike fuzzification encountered with the Wiener filter. In our tests, MMWF was assessed as the best choice when the goal is to minimize spot-edge aberrations while removing spike and Gaussian noise.
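As a rough illustration of the MMWF idea (not the authors' implementation), the sketch below substitutes the local median for the local mean in the classic adaptive Wiener update; the window size, reflect padding, and noise-variance handling are assumptions:

```python
import numpy as np

def mmwf(img, k=3, noise_var=None):
    """Median-modified Wiener filter sketch: like the adaptive Wiener
    filter, but the local mean is replaced by the local median."""
    pad = k // 2
    padded = np.pad(np.asarray(img, dtype=float), pad, mode="reflect")
    med = np.empty(img.shape, dtype=float)
    var = np.empty(img.shape, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            win = padded[i:i + k, j:j + k]
            med[i, j] = np.median(win)
            var[i, j] = win.var()
    if noise_var is None:
        # crude estimate: mean local variance over the whole image
        noise_var = var.mean()
    # Wiener gain: shrink toward the local median where variance is
    # dominated by noise, keep detail where local variance is high.
    gain = np.maximum(var - noise_var, 0.0) / (np.maximum(var, noise_var) + 1e-12)
    return med + gain * (img - med)
```

Because the local statistic is a median rather than a mean, an isolated spike does not drag the baseline estimate upward, which is one way to read the "drop-off effect" described above.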

61 citations

Posted Content
TL;DR: In this paper, a network architecture for learning discriminative image models is proposed to efficiently tackle grayscale and color image denoising. Two variants of the network are introduced that handle a wide range of noise levels with a single set of learned parameters and remain robust when the noise degrading the latent image does not match the statistics of the noise used during training.
Abstract: We design a novel network architecture for learning discriminative image models that are employed to efficiently tackle the problem of grayscale and color image denoising. Based on the proposed architecture, we introduce two different variants. The first network uses convolutional layers as its core component, while the second relies on non-local filtering layers and is thus able to exploit the inherent non-local self-similarity of natural images. As opposed to most existing deep network approaches, which require training a specific model for each considered noise level, the proposed models handle a wide range of noise levels using a single set of learned parameters, and they are very robust when the noise degrading the latent image does not match the statistics of the noise used during training. The latter claim is supported by results that we report on publicly available images corrupted by unknown noise, which we compare against solutions obtained by competing methods. At the same time, the introduced networks achieve excellent results under additive white Gaussian noise (AWGN), comparable to those of the current state-of-the-art network, while relying on a shallower architecture with an order of magnitude fewer trained parameters. These properties make the proposed networks ideal candidates to serve as sub-solvers in restoration methods for general inverse imaging problems such as deblurring, demosaicking, super-resolution, etc.

61 citations

Patent
04 Jun 2003
TL;DR: In this paper, an adaptive noise canceller (ANC) is used to adjust the phase and gain of the near field noise reference signal in response to the magnitude of the error.
Abstract: A receiver with reduced near field noise is described. The receiver has a far range receiving section configured to sense a desired signal that includes near field noise. The receiver further includes a near range receiving section configured to sense a near field noise reference signal. An adaptive noise canceller (ANC) of the receiver is configured to detect the magnitude of an error vector from the far range receiving section. The ANC is configured to adjust the phase and gain of the near field noise reference signal in response to the magnitude of the error. Accordingly, the ANC can generate a corrected near field noise reference signal that is added to the desired signal with an adder. The near field noise is canceled by the addition of the corrected near field noise signal. The ANC uses a least mean square technique to determine the amount of correction needed.
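The cancellation loop the patent describes follows the classic least-mean-square (LMS) adaptive noise canceller. A minimal single-weight, real-valued sketch (the actual receiver adjusts both phase and gain of a complex reference) could look like:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 4000

# Far-range channel: desired signal plus near-field noise.
n = rng.normal(size=N)                 # near-field noise at the far-range antenna
s = np.sin(0.05 * np.arange(N))        # desired signal
d = s + n
# Near-range reference: correlated with the noise, not with the signal.
x = 0.8 * n + 0.1 * rng.normal(size=N)

w, mu = 0.0, 0.01                      # adaptive gain and LMS step size
out = np.empty(N)
for k in range(N):
    y = w * x[k]                       # gain-adjusted noise estimate
    e = d[k] - y                       # error = signal + residual noise
    w += mu * e * x[k]                 # LMS weight update
    out[k] = e
```

Since the reference is uncorrelated with the desired signal, minimizing the error power cancels only the noise component, and `out` converges toward `s`.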

60 citations

Journal ArticleDOI
TL;DR: It is demonstrated that such a partitioning of the feature space for a multiclassifier system yields superior noise performance for classification tasks, and validation studies with experimental hyperspectral data show that the proposed system significantly outperforms conventional denoising and classification approaches.
Abstract: Hyperspectral imagery comprises high-dimensional reflectance vectors representing the spectral response over a wide range of wavelengths per pixel in the image. The resulting high-dimensional feature spaces often result in statistically ill-conditioned class-conditional distributions. Conventional methods for alleviating this problem typically employ dimensionality reduction such as linear discriminant analysis along with single-classifier systems, yet these methods are suboptimal and lack noise robustness. In contrast, a divide-and-conquer approach is proposed to address the high dimensionality of hyperspectral data for effective and noise-robust classification. Central to the proposed framework is a redundant wavelet transform for representing the data in a feature space amenable to noise-robust multiscale analysis as well as a multiclassifier and decision-fusion system for classification and target recognition in high-dimensional spaces under small-sample-size conditions. The proposed partitioning of this feature space assigns a collection of all coefficients across all scales at a particular spectral wavelength to a dedicated classifier. It is demonstrated that such a partitioning of the feature space for a multiclassifier system yields superior noise performance for classification tasks. Additionally, validation studies with experimental hyperspectral data show that the proposed system significantly outperforms conventional denoising and classification approaches.
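The decision-fusion step can be illustrated with a toy stand-in: one deliberately simple nearest-class-mean classifier per spectral band (each band acting as a feature partition), fused by majority vote. The paper's system uses redundant wavelet features and more capable classifiers; everything below, including the synthetic data, is an assumption for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
n_bands = 6

def make(n, label):
    # Synthetic "hyperspectral" pixels: class 0 centered at 0,
    # class 1 at 1.5, independent noise in every band.
    return rng.normal(loc=1.5 * label, scale=0.8, size=(n, n_bands))

Xtr = np.vstack([make(40, 0), make(40, 1)])
ytr = np.array([0] * 40 + [1] * 40)
Xte = np.vstack([make(25, 0), make(25, 1)])
yte = np.array([0] * 25 + [1] * 25)

# One nearest-class-mean classifier per band (feature partition).
band_means = np.array([[Xtr[ytr == c, b].mean() for c in (0, 1)]
                       for b in range(n_bands)])          # (bands, classes)
votes = np.abs(Xte[:, :, None] - band_means[None]).argmin(axis=2)

# Decision fusion: simple majority vote across the band classifiers.
fused = (votes.mean(axis=1) > 0.5).astype(int)
acc = (fused == yte).mean()
```

Even when individual band classifiers are noisy, the fused decision is usually far more accurate, which is the intuition behind partitioning the feature space across a multiclassifier system.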

60 citations


Network Information
Related Topics (5)
- Image processing: 229.9K papers, 3.5M citations (90% related)
- Feature extraction: 111.8K papers, 2.1M citations (89% related)
- Image segmentation: 79.6K papers, 1.8M citations (88% related)
- Convolutional neural network: 74.7K papers, 2M citations (88% related)
- Support vector machine: 73.6K papers, 1.7M citations (88% related)
Performance
Metrics
No. of papers in the topic in previous years:
Year: Papers
2023: 1,511
2022: 2,974
2021: 1,123
2020: 1,488
2019: 1,702
2018: 1,631