
Showing papers on "Noise reduction published in 2017"


Journal ArticleDOI
TL;DR: Zhang et al. as mentioned in this paper proposed feed-forward denoising convolutional neural networks (DnCNNs) to handle Gaussian denoising with unknown noise levels.
Abstract: The discriminative model learning for image denoising has recently been attracting considerable attention due to its favorable denoising performance. In this paper, we take one step forward by investigating the construction of feed-forward denoising convolutional neural networks (DnCNNs) to embrace the progress in very deep architecture, learning algorithm, and regularization method into image denoising. Specifically, residual learning and batch normalization are utilized to speed up the training process as well as boost the denoising performance. Different from the existing discriminative denoising models which usually train a specific model for additive white Gaussian noise at a certain noise level, our DnCNN model is able to handle Gaussian denoising with unknown noise level (i.e., blind Gaussian denoising). With the residual learning strategy, DnCNN implicitly removes the latent clean image in the hidden layers. This property motivates us to train a single DnCNN model to tackle several general image denoising tasks, such as Gaussian denoising, single image super-resolution, and JPEG image deblocking. Our extensive experiments demonstrate that our DnCNN model can not only exhibit high effectiveness in several general image denoising tasks, but also be efficiently implemented by benefiting from GPU computing.

5,902 citations
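As a rough illustration of the residual learning idea described above, here is a minimal PyTorch sketch: the network predicts the noise, which is then subtracted from the input. Depth, channel width, and the grayscale input are simplified stand-ins, not the authors' exact configuration.

```python
import torch
import torch.nn as nn

class DnCNNSketch(nn.Module):
    """Minimal DnCNN-style residual denoiser (simplified sketch)."""
    def __init__(self, depth=7, channels=64):
        super().__init__()
        layers = [nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(channels, channels, 3, padding=1, bias=False),
                       nn.BatchNorm2d(channels),   # batch normalization, as in the abstract
                       nn.ReLU(inplace=True)]
        layers.append(nn.Conv2d(channels, 1, 3, padding=1))
        self.body = nn.Sequential(*layers)

    def forward(self, noisy):
        residual = self.body(noisy)   # the network estimates the noise...
        return noisy - residual       # ...and residual learning removes it

# usage: denoised = DnCNNSketch()(torch.randn(1, 1, 64, 64))
```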


Journal ArticleDOI
TL;DR: FFDNet as mentioned in this paper proposes a fast and flexible denoising convolutional neural network with a tunable noise level map as the input, which can handle a wide range of noise levels effectively with a single network.
Abstract: Due to the fast inference and good performance, discriminative learning methods have been widely studied in image denoising. However, these methods mostly learn a specific model for each noise level, and require multiple models for denoising images with different noise levels. They also lack flexibility to deal with spatially variant noise, limiting their applications in practical denoising. To address these issues, we present a fast and flexible denoising convolutional neural network, namely FFDNet, with a tunable noise level map as the input. The proposed FFDNet works on downsampled sub-images, achieving a good trade-off between inference speed and denoising performance. In contrast to the existing discriminative denoisers, FFDNet enjoys several desirable properties, including (i) the ability to handle a wide range of noise levels (i.e., [0, 75]) effectively with a single network, (ii) the ability to remove spatially variant noise by specifying a non-uniform noise level map, and (iii) faster speed than benchmark BM3D even on CPU without sacrificing denoising performance. Extensive experiments on synthetic and real noisy images are conducted to evaluate FFDNet in comparison with state-of-the-art denoisers. The results show that FFDNet is effective and efficient, making it highly attractive for practical denoising applications.

602 citations
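A minimal sketch of the two ingredients the abstract highlights, assuming a grayscale input: pixel-unshuffle downsampling into sub-images and a tunable noise level map concatenated as an extra channel. Layer counts and widths are illustrative, not the published architecture (PixelUnshuffle requires a reasonably recent PyTorch).

```python
import torch
import torch.nn as nn

class FFDNetSketch(nn.Module):
    """FFDNet-style sketch: operate on downsampled sub-images plus a noise map."""
    def __init__(self, channels=64, depth=6):
        super().__init__()
        # 4 sub-images from 2x2 unshuffling of a grayscale input + 1 noise map
        layers = [nn.Conv2d(4 + 1, channels, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True)]
        layers.append(nn.Conv2d(channels, 4, 3, padding=1))
        self.body = nn.Sequential(*layers)
        self.down = nn.PixelUnshuffle(2)
        self.up = nn.PixelShuffle(2)

    def forward(self, noisy, sigma):
        sub = self.down(noisy)                          # (N, 4, H/2, W/2)
        noise_map = torch.full_like(sub[:, :1], sigma)  # tunable noise level map
        out = self.body(torch.cat([sub, noise_map], dim=1))
        return self.up(out)

# usage: clean = FFDNetSketch()(torch.randn(1, 1, 64, 64), sigma=25 / 255)
```

A spatially varying map could be passed instead of the constant `sigma`, which is what gives the method its flexibility for non-uniform noise.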


Proceedings ArticleDOI
01 Jul 2017
TL;DR: In this paper, the authors proposed a methodology for benchmarking denoising techniques on real photographs by capturing pairs of images with different ISO values and appropriately adjusted exposure times, where the nearly noise-free low-ISO image serves as reference.
Abstract: Lacking realistic ground truth data, image denoising techniques are traditionally evaluated on images corrupted by synthesized i.i.d. Gaussian noise. We aim to obviate this unrealistic setting by developing a methodology for benchmarking denoising techniques on real photographs. We capture pairs of images with different ISO values and appropriately adjusted exposure times, where the nearly noise-free low-ISO image serves as reference. To derive the ground truth, careful post-processing is needed. We correct spatial misalignment, cope with inaccuracies in the exposure parameters through a linear intensity transform based on a novel heteroscedastic Tobit regression model, and remove residual low-frequency bias that stems, e.g., from minor illumination changes. We then capture a novel benchmark dataset, the Darmstadt Noise Dataset (DND), with consumer cameras of differing sensor sizes. One interesting finding is that various recent techniques that perform well on synthetic noise are clearly outperformed by BM3D on photographs with real noise. Our benchmark delineates realistic evaluation scenarios that deviate strongly from those commonly used in the scientific literature.

540 citations


Journal ArticleDOI
TL;DR: A practical "how-to" guide to help determine whether single-subject fMRI independent components (ICs) characterise structured noise or not, and to show how data quality, data type and preprocessing can influence the characteristics of ICs.

396 citations


Proceedings ArticleDOI
01 Oct 2017
TL;DR: In this paper, the proximal operator of the regularization term used in many convex energy minimization algorithms is replaced by a fixed denoising neural network, which then serves as an implicit natural image prior.
Abstract: While variational methods have been among the most powerful tools for solving linear inverse problems in imaging, deep (convolutional) neural networks have recently taken the lead in many challenging benchmarks. A remaining drawback of deep learning approaches is their requirement for an expensive retraining whenever the specific problem, the noise level, noise type, or desired measure of fidelity changes. On the contrary, variational methods have a plug-and-play nature as they usually consist of separate data fidelity and regularization terms. In this paper we study the possibility of replacing the proximal operator of the regularization used in many convex energy minimization algorithms by a denoising neural network. The latter therefore serves as an implicit natural image prior, while the data term can still be chosen independently. Using a fixed denoising neural network in exemplary problems of image deconvolution with different blur kernels and image demosaicking, we obtain state-of-the-art reconstruction results. These indicate the high generalizability of our approach and a reduction of the need for problem-specific training. Additionally, we discuss novel results on the analysis of possible optimization algorithms to incorporate the network into, as well as the choices of algorithm parameters and their relation to the noise level the neural network is trained on.

323 citations
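The following sketch shows the general plug-and-play pattern the paper studies, using half-quadratic splitting for non-blind deconvolution; a simple Gaussian filter stands in for the fixed denoising network, and the splitting weight rho is an arbitrary illustrative choice.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def pnp_deconv(y, kernel, denoise, rho=0.5, iters=30):
    """Plug-and-play half-quadratic splitting for non-blind deconvolution:
    the quadratic data term has a closed-form Fourier solution, while the
    proximal step of the regularizer is replaced by an arbitrary denoiser."""
    K = np.fft.fft2(kernel, s=y.shape)
    Y = np.fft.fft2(y)
    x = y.copy()
    for _ in range(iters):
        z = denoise(x)                         # prox of the prior -> denoiser
        X = (np.conj(K) * Y + rho * np.fft.fft2(z)) / (np.abs(K) ** 2 + rho)
        x = np.real(np.fft.ifft2(X))           # data-fidelity update
    return x

img = np.random.rand(64, 64)
kernel = np.ones((5, 5)) / 25.0                # box blur
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(kernel, s=img.shape)))
# a Gaussian filter stands in for the fixed denoising *network* of the paper
restored = pnp_deconv(blurred, kernel, lambda v: gaussian_filter(v, 1.0))
```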


Proceedings ArticleDOI
01 Oct 2017
TL;DR: This paper proposes a multi-channel (MC) optimization model for real color image denoising under the weighted nuclear norm minimization (WNNM) framework; it concatenates the RGB patches to make use of the channel redundancy, and introduces a weight matrix to balance the data fidelity of the three channels in consideration of their different noise statistics.
Abstract: Most of the existing denoising algorithms are developed for grayscale images. It is not trivial to extend them for color image denoising since the noise statistics in R, G, and B channels can be very different for real noisy images. In this paper, we propose a multi-channel (MC) optimization model for real color image denoising under the weighted nuclear norm minimization (WNNM) framework. We concatenate the RGB patches to make use of the channel redundancy, and introduce a weight matrix to balance the data fidelity of the three channels in consideration of their different noise statistics. The proposed MC-WNNM model does not have an analytical solution. We reformulate it into a linear equality-constrained problem and solve it via alternating direction method of multipliers. Each alternative updating step has a closed-form solution and the convergence can be guaranteed. Experiments on both synthetic and real noisy image datasets demonstrate the superiority of the proposed MC-WNNM over state-of-the-art denoising methods.

226 citations
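The full MC-WNNM model has no analytical solution and is solved via ADMM, as the abstract notes; the sketch below only illustrates its key ingredient, weighted singular value shrinkage of the concatenated RGB patch matrix with per-channel noise weights. The shrinkage constant c and the toy sizes are illustrative assumptions.

```python
import numpy as np

def weighted_nuclear_shrink(patch_matrix, noise_sigmas, c=2.8):
    """One MC-WNNM-flavoured step (simplified): weight each RGB channel by its
    noise level, then shrink singular values of the concatenated patch matrix."""
    n = patch_matrix.shape[0] // 3
    # weight vector: 1/sigma per channel, repeated over the pixels of a patch
    w = np.repeat(1.0 / np.asarray(noise_sigmas, dtype=float), n)
    U, s, Vt = np.linalg.svd(w[:, None] * patch_matrix, full_matrices=False)
    weights = c / (s + 1e-8)                  # smaller singular values shrink more
    s_shrunk = np.maximum(s - weights, 0.0)
    X = (U * s_shrunk) @ Vt
    return X / w[:, None]                     # undo the channel weighting

# 3 channels x 16 pixels per patch, 20 similar patches stacked as columns
patches = np.random.randn(3 * 16, 20)
denoised = weighted_nuclear_shrink(patches, noise_sigmas=(10, 5, 8))
```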


Journal ArticleDOI
TL;DR: A K-sparse autoencoder was used for unsupervised feature learning; a manifold was learned from normal-dose images, and the distance between the reconstructed image and the manifold was minimized along with data fidelity during reconstruction.
Abstract: Dose reduction in computed tomography (CT) is essential for decreasing radiation risk in clinical applications. Iterative reconstruction algorithms are among the most promising ways to compensate for the increased noise due to the reduction of photon flux. Most iterative reconstruction algorithms incorporate manually designed prior functions of the reconstructed image to suppress noise while maintaining the structures of the image. These priors basically rely on smoothness constraints and cannot exploit more complex features of the image. The recent development of artificial neural networks and machine learning has enabled learning of more complex features of images, which has the potential to improve reconstruction quality. In this letter, a K-sparse autoencoder was used for unsupervised feature learning. A manifold was learned from normal-dose images, and the distance between the reconstructed image and the manifold was minimized along with data fidelity during reconstruction. Experiments on the 2016 Low-dose CT Grand Challenge data were used for method verification, and the results demonstrated the noise reduction and detail preservation abilities of the proposed method.

206 citations
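A minimal PyTorch sketch of the k-sparse autoencoder idea, keeping only the k largest activations in the hidden layer; layer sizes and k are arbitrary assumptions, and the manifold-distance term used during reconstruction is omitted.

```python
import torch
import torch.nn as nn

class KSparseAE(nn.Module):
    """K-sparse autoencoder sketch: after encoding, only the k largest
    activations per sample are kept, yielding the sparse features used to
    build an image manifold prior."""
    def __init__(self, dim=256, hidden=512, k=40):
        super().__init__()
        self.k = k
        self.enc = nn.Linear(dim, hidden)
        self.dec = nn.Linear(hidden, dim)

    def forward(self, x):
        h = self.enc(x)
        # zero all but the k largest activations in each row
        kth = torch.topk(h, self.k, dim=1).values[:, -1:]
        h = h * (h >= kth).float()
        return self.dec(h)

model = KSparseAE()
patches = torch.rand(32, 256)          # toy stand-in for normal-dose patches
loss = nn.functional.mse_loss(model(patches), patches)
loss.backward()
```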


Journal ArticleDOI
TL;DR: An efficient scheme for denoising electrocardiogram (ECG) signals is proposed based on a wavelet-based threshold mechanism with an opposition-based self-adaptive learning particle swarm optimisation (OSLPSO) in a dual-tree complex wavelet packet scheme, in which the OSLPSO is utilised for threshold optimisation.
Abstract: The electrocardiogram (ECG) signal is significant for diagnosing cardiac arrhythmia among various biological signals. The accurate analysis of noisy ECG signals is a very motivating challenge. For automated analysis, the noise present in the ECG signal needs to be removed for perfect diagnosis. Numerous investigators have reported different techniques for denoising the ECG signal in recent years. In this paper, an efficient scheme for denoising ECG signals is proposed based on a wavelet-based threshold mechanism. The scheme uses an opposition-based self-adaptive learning particle swarm optimisation (OSLPSO) in a dual-tree complex wavelet packet framework, in which the OSLPSO is utilised for threshold optimisation. Different abnormal and normal ECG signals from the MIT-BIH arrhythmia database are tested to evaluate this approach, with white Gaussian noise artificially added at 5 dB, 10 dB and 15 dB. Simulation results illustrate that the proposed system performs well at various noise levels and obtains better visual quality compared with other methods.

192 citations
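A reduced sketch of the wavelet threshold mechanism, using a plain discrete wavelet transform from PyWavelets instead of the dual-tree complex wavelet packets, and a simple grid search standing in for OSLPSO; the search is possible here because, as in the paper's experiments, noise is added artificially to a known clean signal.

```python
import numpy as np
import pywt

def wavelet_denoise(noisy, threshold, wavelet="db4", level=4):
    coeffs = pywt.wavedec(noisy, wavelet, level=level)
    # soft-threshold the detail coefficients only; keep the approximation
    coeffs[1:] = [pywt.threshold(c, threshold, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(noisy)]

# synthetic experiment: corrupt a clean signal, then optimise the threshold
t = np.linspace(0, 1, 1024)
clean = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 12 * t)
noisy = clean + 0.2 * np.random.randn(t.size)
best = min(np.linspace(0.05, 1.0, 40),
           key=lambda th: np.mean((wavelet_denoise(noisy, th) - clean) ** 2))
denoised = wavelet_denoise(noisy, best)
```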


Journal ArticleDOI
TL;DR: Experimental results show that the proposed LRTR method outperforms other denoising algorithms on real corrupted hyperspectral data and can preserve the global structure of HSIs and simultaneously remove Gaussian noise and sparse noise.
Abstract: This paper studies the hyperspectral image (HSI) denoising problem under the assumption that the signal is low in rank. In this paper, a mixture of Gaussian noise and sparse noise is considered. The sparse noise includes stripes, impulse noise, and dead pixels. The denoising task is formulated as a low-rank tensor recovery (LRTR) problem from Gaussian noise and sparse noise. Traditional low-rank tensor decomposition methods are generally NP-hard to compute. Besides, these tensor-decomposition-based methods are sensitive to sparse noise. In contrast, the proposed LRTR method can preserve the global structure of HSIs and simultaneously remove Gaussian noise and sparse noise. The proposed method is based on a new tensor singular value decomposition and tensor nuclear norm. The NP-hard tensor recovery task is well accomplished by polynomial time algorithms. The convergence of the algorithm and the parameter settings are also described in detail. Preliminary numerical experiments have demonstrated that the proposed method is effective for low-rank tensor recovery from Gaussian noise and sparse noise. Experimental results also show that the proposed LRTR method outperforms other denoising algorithms on real corrupted hyperspectral data.

151 citations
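As a much-simplified illustration, the sketch below unfolds the HSI cube into a pixels-by-bands matrix and applies singular value thresholding, a plain matrix surrogate for the tensor-SVD and tensor nuclear norm machinery of the paper; the sparse-noise term is omitted and the threshold tau is arbitrary.

```python
import numpy as np

def svt_denoise(hsi, tau):
    """Simplified low-rank recovery: unfold the hyperspectral cube along the
    spectral axis and shrink the singular values of the unfolded matrix."""
    h, w, bands = hsi.shape
    M = hsi.reshape(-1, bands)                 # pixels x bands unfolding
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s = np.maximum(s - tau, 0.0)               # drop small singular values
    return ((U * s) @ Vt).reshape(h, w, bands)

cube = np.random.rand(32, 32, 31)              # toy hyperspectral cube
low_rank = svt_denoise(cube, tau=5.0)
```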


Journal ArticleDOI
Licheng Liu, Long Chen, C. L. Philip Chen, Yuan Yan Tang, Chi-Man Pun
TL;DR: This paper proposes a weighted JSR (WJSR) model to simultaneously encode a set of data samples that are drawn from the same subspace but corrupted with noise and outliers and introduces a greedy algorithm called weighted simultaneous orthogonal matching pursuit to efficiently approximate the global optimal solution.
Abstract: Joint sparse representation (JSR) has shown great potential in various image processing and computer vision tasks. Nevertheless, the conventional JSR is fragile to outliers. In this paper, we propose a weighted JSR (WJSR) model to simultaneously encode a set of data samples that are drawn from the same subspace but corrupted with noise and outliers. Our model is designed to exploit the common information shared by these data samples while reducing the influence of outliers. To solve the WJSR model, we further introduce a greedy algorithm called weighted simultaneous orthogonal matching pursuit to efficiently approximate the global optimal solution. Then, we apply the WJSR for mixed noise removal by jointly coding the grouped nonlocal similar image patches. The denoising performance is further improved by incorporating it, together with a global prior and the sparse errors, into a unified framework. Experimental results show that our denoising method is superior to several state-of-the-art mixed noise removal methods.
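A greedy sketch of weighted simultaneous orthogonal matching pursuit under simplifying assumptions (unit-norm random dictionary, fixed sparsity k): all grouped patches share one support, and per-patch weights down-weight outlier columns when scoring atoms.

```python
import numpy as np

def weighted_somp(D, Y, w, k):
    """Weighted simultaneous OMP (sketch): the columns of Y are jointly coded
    over dictionary D with one shared support; weights w reduce the influence
    of outlier columns when selecting atoms."""
    support, R = [], Y.copy()
    for _ in range(k):
        # score each atom by its weighted correlation with all residuals
        scores = np.abs(D.T @ R) @ w
        support.append(int(np.argmax(scores)))
        Ds = D[:, support]
        coef, *_ = np.linalg.lstsq(Ds, Y, rcond=None)  # joint refit
        R = Y - Ds @ coef
    return support, coef

D = np.random.randn(64, 128)
D /= np.linalg.norm(D, axis=0)           # unit-norm atoms
Y = np.random.randn(64, 10)              # grouped nonlocal similar patches
support, coef = weighted_somp(D, Y, w=np.ones(10), k=5)
```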

Journal ArticleDOI
Abstract: Recorded seismic signals are often corrupted by noise. We have developed an automatic noise-attenuation method for single-channel seismic data, based upon high-resolution time-frequency analysis. Synchrosqueezing is a time-frequency reassignment method aimed at sharpening a time-frequency picture. Noise can be distinguished from the signal and attenuated more easily in this reassigned domain. The threshold level is estimated using a general cross-validation approach that does not rely on any prior knowledge about the noise level. The efficiency of the thresholding has been improved by adding a preprocessing step based on kurtosis measurement and a postprocessing step based on adaptive hard thresholding. The proposed algorithm can either attenuate the noise (either white or colored) and keep the signal or remove the signal and keep the noise. Hence, it can be used in either normal denoising applications or preprocessing in ambient noise studies. We tested the performance of the proposed method on s...
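A simplified sketch of the time-frequency thresholding idea, with a plain STFT standing in for the sharper synchrosqueezing transform and a median-based noise estimate standing in for the paper's general cross-validation threshold; the kurtosis preprocessing and adaptive postprocessing steps are omitted.

```python
import numpy as np
from scipy.signal import stft, istft

def tf_threshold_denoise(trace, fs, k=3.0, nperseg=128):
    """Hard thresholding in a time-frequency domain: coefficients below a
    noise-derived level are zeroed, then the trace is resynthesized."""
    f, t, Z = stft(trace, fs, nperseg=nperseg)
    # robust noise floor estimate from the median absolute coefficient
    sigma = np.median(np.abs(Z)) / 0.6745
    Z_thr = np.where(np.abs(Z) > k * sigma, Z, 0.0)   # keep signal, drop noise
    _, rec = istft(Z_thr, fs, nperseg=nperseg)
    return rec[: len(trace)]

fs = 500.0
time = np.arange(0, 2, 1 / fs)
trace = np.sin(2 * np.pi * 30 * time) + 0.5 * np.random.randn(time.size)
denoised = tf_threshold_denoise(trace, fs)
```

Zeroing the complement instead (keeping only sub-threshold coefficients) would give the "remove the signal and keep the noise" mode mentioned for ambient noise studies.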

Posted Content
TL;DR: The proposed architecture dramatically improves the accuracy of a classification network in low light and other challenging conditions, outperforming alternative approaches such as retraining the network on noisy and blurry images and preprocessing raw sensor inputs with conventional denoising and deblurring algorithms.
Abstract: Real-world sensors suffer from noise, blur, and other imperfections that make high-level computer vision tasks like scene segmentation, tracking, and scene understanding difficult. Making high-level computer vision networks robust is imperative for real-world applications like autonomous driving, robotics, and surveillance. We propose a novel end-to-end differentiable architecture for joint denoising, deblurring, and classification that makes classification robust to realistic noise and blur. The proposed architecture dramatically improves the accuracy of a classification network in low light and other challenging conditions, outperforming alternative approaches such as retraining the network on noisy and blurry images and preprocessing raw sensor inputs with conventional denoising and deblurring algorithms. The architecture learns denoising and deblurring pipelines optimized for classification whose outputs differ markedly from those of state-of-the-art denoising and deblurring methods, preserving fine detail at the cost of more noise and artifacts. Our results suggest that the best low-level image processing for computer vision is different from existing algorithms designed to produce visually pleasing images. The principles used to design the proposed architecture easily extend to other high-level computer vision tasks and image formation models, providing a general framework for integrating low-level and high-level image processing.
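A toy PyTorch sketch of the end-to-end principle described above: a small denoiser feeds a classifier and only the classification loss is backpropagated, so the denoiser is free to learn preprocessing that helps recognition rather than visual quality. The architectures and sizes are placeholders, not the paper's pipeline.

```python
import torch
import torch.nn as nn

class JointDenoiseClassify(nn.Module):
    """End-to-end sketch: denoiser -> classifier, trained jointly."""
    def __init__(self, n_classes=10):
        super().__init__()
        self.denoiser = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1))
        self.classifier = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, n_classes))

    def forward(self, noisy):
        return self.classifier(self.denoiser(noisy))

model = JointDenoiseClassify()
logits = model(torch.randn(8, 1, 32, 32))
loss = nn.CrossEntropyLoss()(logits, torch.randint(0, 10, (8,)))
loss.backward()   # gradients flow through the denoiser as well
```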

Journal ArticleDOI
TL;DR: The proposed speech enhancement algorithm has the potential to improve the intelligibility of speech in noise for CI users while meeting the requirements of low computational complexity and processing delay for application in CI devices.

Journal ArticleDOI
TL;DR: A probabilistic prior distribution for a spatial correlation matrix (a CGMM parameter), which enables more stable steering vector estimation in the presence of interfering speakers, is introduced in this paper.
Abstract: This paper considers acoustic beamforming for noise robust automatic speech recognition. A beamformer attenuates background noise by enhancing sound components coming from a direction specified by a steering vector. Hence, accurate steering vector estimation is paramount for successful noise reduction. Recently, time-frequency masking has been proposed to estimate the steering vectors that are used for a beamformer. In particular, we have developed a new form of this approach, which uses a speech spectral model based on a complex Gaussian mixture model (CGMM) to estimate the time-frequency masks needed for steering vector estimation, and extended the CGMM-based beamformer to an online speech enhancement scenario. Our previous experiments showed that the proposed CGMM-based approach outperforms a recently proposed mask estimator based on a Watson mixture model and the baseline speech enhancement system of the CHiME-3 challenge. This paper provides additional experimental results for our online processing, which achieves performance comparable to that of batch processing with a suitable block-batch size. This online version reduces the CHiME-3 word error rate (WER) on the evaluation set from 8.37% to 8.06%. Moreover, in this paper, we introduce a probabilistic prior distribution for a spatial correlation matrix (a CGMM parameter), which enables more stable steering vector estimation in the presence of interfering speakers. In practice, the performance of the proposed online beamformer degrades with observations that contain only noise and/or interference because of the failure of the CGMM parameter estimation. The introduced spatial prior enables the target speaker's parameter to avoid overfitting to noise and/or interference. Experimental results show that the spatial prior reduces the WER from 38.4% to 29.2% in a conversation recognition task compared with the CGMM-based approach without the prior, and outperforms a conventional online speech enhancement approach.
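A sketch of the steering vector estimation step for one frequency bin, assuming the time-frequency mask (which the paper obtains from a CGMM) is already given: mask-weighted spatial correlation matrices yield the steering vector as a principal eigenvector, which then drives an MVDR beamformer. The random mask below merely exercises the code.

```python
import numpy as np

def mvdr_from_mask(frames, mask):
    """frames: (T, M) complex STFT of one frequency bin over M microphones;
    mask: (T,) speech presence probabilities (e.g. from a CGMM)."""
    outer = np.einsum("tm,tn->tmn", frames, frames.conj())
    R_speech = (mask[:, None, None] * outer).mean(0)        # speech covariance
    R_noise = ((1 - mask)[:, None, None] * outer).mean(0)   # noise covariance
    # steering vector = principal eigenvector of the speech covariance
    _, vecs = np.linalg.eigh(R_speech)
    d = vecs[:, -1]
    # MVDR weights: w = R_n^{-1} d / (d^H R_n^{-1} d)
    Rn_inv_d = np.linalg.solve(R_noise + 1e-6 * np.eye(len(d)), d)
    w = Rn_inv_d / (d.conj() @ Rn_inv_d)
    return np.array([w.conj() @ x for x in frames])          # beamformed bin

frames = np.random.randn(200, 6) + 1j * np.random.randn(200, 6)
enhanced = mvdr_from_mask(frames, mask=np.random.rand(200))
```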

Journal ArticleDOI
TL;DR: A CM noise reduction technique with a balance bridge at a large impedance ratio is proposed based on the developed model; the technique can be easily implemented at low cost, and the CM noise spectrum can be well predicted.
Abstract: This paper develops a common-mode (CM) electromagnetic interference noise model for a three-level neutral point clamped topology. Compared with existing modeling techniques with only one CM noise source, two extra important CM noise sources and their characteristics are identified and derived for an accurate CM noise model. The impedances of CM noise path are also extracted. Based on the developed CM noise model, the CM noise spectrum can be well predicted. The effect of CM noise paths on CM noise is discussed based on two different LCL filters. A CM noise reduction technique with a balance bridge at a large impedance ratio is proposed based on the developed model. The technique can be easily implemented at low cost. Both simulations and experiments validate the developed theory and technique.

Journal ArticleDOI
TL;DR: An effective variational model for multimodality medical image fusion and denoising is proposed, which performs well with both noisy and normal medical images, outperforming conventional methods in terms of fusion quality and noise reduction.
Abstract: Medical image fusion aims at integrating information from multimodality medical images to obtain a more complete and accurate description of the same object, which provides easy access for image-guided medical diagnosis and treatment. Unfortunately, medical images are often corrupted by noise in acquisition or transmission, and the noise signal is easily mistaken for a useful characterization of the image, making the fusion effect drop significantly. Thus, the existence of noise presents a great challenge for most traditional image fusion methods. To address this problem, an effective variational model for multimodality medical image fusion and denoising is proposed. First, a multiscale alternating sequential filter is exploited to extract the useful characterizations (e.g., details and edges) from noisy input medical images. Then, a recursive filtering-based weight map is constructed to guide the fusion of the main features of the input images. Additionally, a total variation (TV) constraint is developed by constructing an adaptive fractional order $p$ based on the local contrast of the fused image, further effectively suppressing noise while avoiding the staircase effect of the TV. The experimental results indicate that the proposed method performs well with both noisy and normal medical images, outperforming conventional methods in terms of fusion quality and noise reduction.

Journal ArticleDOI
TL;DR: In this article, a general scheme, called MuLoG (MUlti-channel LOgarithm with Gaussian denoising), is proposed to include Gaussian denoisers within a multi-channel SAR speckle reduction technique.
Abstract: Speckle reduction is a longstanding topic in synthetic aperture radar (SAR) imaging. Since most current and planned SAR imaging satellites operate in polarimetric, interferometric, or tomographic modes, SAR images are multi-channel and speckle reduction techniques must jointly process all channels to recover polarimetric and interferometric information. The distinctive nature of SAR signal (complex-valued, corrupted by multiplicative fluctuations) calls for the development of specialized methods for speckle reduction. Image denoising is a very active topic in image processing with a wide variety of approaches and many denoising algorithms available, almost always designed for additive Gaussian noise suppression. This paper proposes a general scheme, called MuLoG (MUlti-channel LOgarithm with Gaussian denoising), to include such Gaussian denoisers within a multi-channel SAR speckle reduction technique. A new family of speckle reduction algorithms can thus be obtained, benefiting from the ongoing progress in Gaussian denoising, and offering several speckle reduction results often displaying method-specific artifacts that can be dismissed by comparison between results.
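For the single-channel intensity case, the core homomorphic idea can be sketched as below: a log transform makes the multiplicative speckle approximately additive, any Gaussian denoiser is plugged in, and the result is mapped back. MuLoG's matrix-log handling of multi-channel data and its debiasing step are omitted here, and a Gaussian filter stands in for the plugged-in denoiser.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def speckle_reduce_log(intensity, denoiser=lambda v: gaussian_filter(v, 1.5)):
    """Homomorphic sketch for single-channel intensity SAR: log transform
    turns multiplicative speckle into roughly additive noise, so a Gaussian
    denoiser can be applied before mapping back."""
    log_img = np.log(np.maximum(intensity, 1e-10))
    denoised_log = denoiser(log_img)     # plug in any Gaussian denoiser here
    return np.exp(denoised_log)

speckled = np.random.gamma(1.0, 1.0, (64, 64))   # toy single-look speckle
filtered = speckle_reduce_log(speckled)
```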

Journal ArticleDOI
TL;DR: The nonlocal means (NLM) algorithm was introduced as a non‐iterative edge‐preserving filter to denoise natural images corrupted by additive Gaussian noise, and showed superior performance.
Abstract: Low-dose X-ray computed tomography (LDCT) imaging is highly recommended for use in the clinic because of growing concerns over excessive radiation exposure. However, the CT images reconstructed by the conventional filtered back-projection (FBP) method from low-dose acquisitions may be severely degraded with noise and streak artifacts due to excessive X-ray quantum noise, or with view-aliasing artifacts due to insufficient angular sampling. In 2005, the nonlocal means (NLM) algorithm was introduced as a non-iterative edge-preserving filter to denoise natural images corrupted by additive Gaussian noise, and showed superior performance. It has since been adapted and applied to many other image types and various inverse problems. This paper specifically reviews the applications of the NLM algorithm in LDCT image processing and reconstruction, and explicitly demonstrates its improving effects on the reconstructed CT image quality from low-dose acquisitions. The effectiveness of these applications on LDCT and their relative performance are described in detail.
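A minimal example of the NLM filter the review discusses, using scikit-image on a toy phantom rather than real low-dose CT data; the filter parameters follow common defaults, not any setting from the reviewed works.

```python
import numpy as np
from skimage.restoration import denoise_nl_means, estimate_sigma

# toy "low-dose" image: a simple phantom plus additive Gaussian noise
rng = np.random.default_rng(0)
clean = np.zeros((128, 128))
clean[32:96, 32:96] = 1.0
noisy = clean + 0.15 * rng.standard_normal(clean.shape)

sigma = float(np.mean(estimate_sigma(noisy)))     # rough noise level estimate
denoised = denoise_nl_means(noisy, h=1.15 * sigma, sigma=sigma,
                            patch_size=7, patch_distance=11, fast_mode=True)
```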

Proceedings Article
13 Feb 2017
TL;DR: In this paper, a modified variational lower bound is proposed for the denoising variational autoencoder: since the standard VAE lower bound becomes intractable when the input is corrupted, the authors propose a modified training criterion which corresponds to a tractable bound in this setting.
Abstract: Denoising autoencoders (DAE) are trained to reconstruct their clean inputs with noise injected at the input level, while variational autoencoders (VAE) are trained with noise injected in their stochastic hidden layer, with a regularizer that encourages this noise injection. In this paper, we show that injecting noise both in input and in the stochastic hidden layer can be advantageous and we propose a modified variational lower bound as an improved objective function in this setup. When input is corrupted, then the standard VAE lower bound involves marginalizing the encoder conditional distribution over the input noise, which makes the training criterion intractable. Instead, we propose a modified training criterion which corresponds to a tractable bound when input is corrupted. Experimentally, we find that the proposed denoising variational autoencoder (DVAE) yields better average log-likelihood than the VAE and the importance weighted autoencoder on the MNIST and Frey Face datasets.
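A compact PyTorch sketch of the DVAE objective under simplifying assumptions: Gaussian input corruption, a single Monte Carlo sample of the latent, and a Gaussian (MSE) reconstruction term in place of the Bernoulli likelihood typically used on MNIST. The key point is that the decoder targets the clean input while noise enters both the input and the stochastic latent layer.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DVAESketch(nn.Module):
    """Denoising VAE sketch: corrupt the input, sample the latent, and
    reconstruct the CLEAN input."""
    def __init__(self, dim=784, latent=20, hidden=200):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU())
        self.mu = nn.Linear(hidden, latent)
        self.logvar = nn.Linear(hidden, latent)
        self.dec = nn.Sequential(nn.Linear(latent, hidden), nn.ReLU(),
                                 nn.Linear(hidden, dim))

    def loss(self, x, input_noise=0.3):
        x_tilde = x + input_noise * torch.randn_like(x)    # corrupt the input
        h = self.enc(x_tilde)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # latent noise
        recon = F.mse_loss(self.dec(z), x, reduction="sum")      # clean target
        kl = -0.5 * torch.sum(1 + logvar - mu ** 2 - logvar.exp())
        return recon + kl

# usage: DVAESketch().loss(torch.rand(16, 784)).backward()
```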

Journal ArticleDOI
TL;DR: A novel approach for removing multiple-reflection noise based on an adaptive randomized-order empirical mode decomposition (EMD) framework; the EMD-based smoothing method helps preserve the flattened signals, without the need for exact flattening, and preserves the amplitude variation much better.
Abstract: We propose a novel approach for removing noise from multiple reflections based on an adaptive randomized-order empirical mode decomposition (EMD) framework. We first flatten the primary reflections in common midpoint gather using the automatically picked normal moveout velocities that correspond to the primary reflections and then randomly permutate all the traces. Next, we remove the spatially distributed random spikes that correspond to the multiple reflections using the EMD-based smoothing approach that is implemented in the $f-x$ domain. The trace randomization approach can make the spatially coherent multiple reflections random along the space direction and can decrease the coherency of near-offset multiple reflections. The EMD-based smoothing method is superior to median filter and prediction error filter in that it can help preserve the flattened signals better, without the need of exact flattening, and can preserve the amplitude variation much better. In addition, EMD is a fully adaptive algorithm and the parameterization for EMD-based smoothing can be very convenient.

Journal ArticleDOI
TL;DR: An adaptive image restoration algorithm based on 2-D bilateral filtering is proposed to enhance the signal-to-noise ratio (SNR) of the intrusion location for a phase-sensitive optical time-domain reflectometry (Φ-OTDR) system; it has the potential to precisely extract the intrusion location from a harsh environment with strong background noise.
Abstract: An adaptive image restoration algorithm based on 2-D bilateral filtering has been proposed to enhance the signal-to-noise ratio (SNR) of the intrusion location for a phase-sensitive optical time-domain reflectometry (Φ-OTDR) system. By converting the spatial and time information of the Φ-OTDR traces into a 2-D image, the proposed 2-D bilateral filtering algorithm can smooth the noise and preserve the useful signal efficiently. To simplify the algorithm, a Lorentz spatial function is adopted to replace the original Gaussian function, which has higher practicability. Furthermore, an adaptive parameter setting method is developed according to the relation between the optimal gray-level standard deviation and the noise standard deviation, which is much faster and more robust for different types of signals. In the experiment, the SNR of the location information has been improved by over 14 dB without spatial resolution loss for a signal with an original SNR of 6.43 dB in 27.6 km of sensing fiber. The proposed method has the potential to precisely extract the intrusion location from a harsh environment with strong background noise.
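A direct (loop-based, unoptimized) sketch of the filter described above, with the Lorentzian spatial kernel replacing the usual Gaussian; the paper's adaptive parameter-setting rule is not reproduced, so gamma and sigma_r are fixed illustrative values.

```python
import numpy as np

def bilateral_lorentz(img, half=3, gamma=2.0, sigma_r=0.1):
    """2-D bilateral filter with a Lorentzian spatial kernel and a Gaussian
    range kernel; smooths noise while preserving signal edges."""
    ys, xs = np.mgrid[-half:half + 1, -half:half + 1]
    spatial = 1.0 / (1.0 + (xs ** 2 + ys ** 2) / gamma ** 2)  # Lorentzian
    pad = np.pad(img, half, mode="reflect")
    out = np.empty_like(img)
    H, W = img.shape
    for i in range(H):
        for j in range(W):
            win = pad[i:i + 2 * half + 1, j:j + 2 * half + 1]
            rng_w = np.exp(-((win - img[i, j]) ** 2) / (2 * sigma_r ** 2))
            w = spatial * rng_w
            out[i, j] = np.sum(w * win) / np.sum(w)
    return out

trace_image = np.random.rand(40, 40)   # stand-in for the 2-D trace image
smoothed = bilateral_lorentz(trace_image)
```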

Journal ArticleDOI
TL;DR: An adaptive denoising method based on thresholding and data-driven signal mode decomposition, in which random noise is separated into the thresholded modes and the reconstruction residual; it shows excellent performance on both synthetic and field data applications.
Abstract: Noise reduction is important for signal analysis. In this paper, we propose a hybrid denoising method based on thresholding and data-driven signal decomposition. The principle of this method is to reconstruct the signal from previously thresholded intrinsic mode functions (IMFs). Empirical mode decomposition (EMD) based methods decompose a signal into a sum of oscillatory components, while variational mode decomposition (VMD) generates an ensemble of modes with their respective centre frequencies, which enables VMD to further reduce redundant modes and keep less residual noise in the modes. To illustrate its superiority, we compare VMD with EMD as well as its derivations, such as ensemble EMD (EEMD), complete EEMD (CEEMD) and improved CEEMD (ICEEMD), using synthetic signals and field seismic traces. Compared with EMD and its derivations, VMD has a solid mathematical foundation and is less sensitive to noise, both of which make it more suitable for non-stationary seismic signal decomposition. The determination of the mode number is key for successful denoising. We develop an empirical equation, based on detrended fluctuation analysis (DFA), to adaptively determine the number of IMFs for signal reconstruction. A scaling exponent obtained by DFA is then used as a threshold to distinguish random noise from signal among the IMFs and the reconstruction residual. The proposed thresholded VMD denoising method shows excellent performance on both synthetic and field data applications.
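A sketch of the thresholded-VMD idea, assuming the third-party `vmdpy` package for the decomposition (its positional call signature is an assumption worth verifying); a small DFA routine estimates each mode's scaling exponent, and modes that look like uncorrelated noise (exponent near 0.5) are discarded before reconstruction. The 0.6 cutoff is illustrative, not the paper's empirical equation.

```python
import numpy as np
from vmdpy import VMD   # assumed third-party VMD implementation

def dfa_exponent(x, scales=(8, 16, 32, 64)):
    """Detrended fluctuation analysis scaling exponent; ~0.5 suggests
    uncorrelated noise, larger values suggest structured signal."""
    y = np.cumsum(x - np.mean(x))
    flucts = []
    for s in scales:
        n = len(y) // s
        segs = y[: n * s].reshape(n, s)
        t = np.arange(s)
        detr = [seg - np.polyval(np.polyfit(t, seg, 1), t) for seg in segs]
        flucts.append(np.sqrt(np.mean(np.square(detr))))
    return np.polyfit(np.log(scales), np.log(flucts), 1)[0]

t = np.linspace(0, 1, 1000)
signal = np.sin(2 * np.pi * 10 * t) + 0.4 * np.random.randn(t.size)
# VMD(f, alpha, tau, K, DC, init, tol) -- assumed vmdpy ordering
modes, _, _ = VMD(signal, 2000.0, 0.0, 5, 0, 1, 1e-7)
kept = [m for m in modes if dfa_exponent(m) > 0.6]   # keep structured modes
denoised = np.sum(kept, axis=0) if kept else np.zeros_like(signal)
```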

Journal ArticleDOI
TL;DR: A model-based Bayesian filtering framework called the “marginalized particle-extended Kalman filter (MP-EKF) algorithm” is proposed for electrocardiogram (ECG) denoising and shows that in the presence of Gaussian white noise, the proposed framework outperforms the EKF and EKS algorithms in lower input SNRs where the measurements and state model are not reliable.
Abstract: In this paper, a model-based Bayesian filtering framework called the “marginalized particle-extended Kalman filter (MP-EKF) algorithm” is proposed for electrocardiogram (ECG) denoising. This algorithm does not have the extended Kalman filter (EKF) shortcoming in handling non-Gaussian nonstationary situations because of its nonlinear framework. In addition, it has less computational complexity compared with the particle filter. This filter improves ECG denoising performance by implementing a marginalized particle filter framework while reducing its computational complexity using the EKF framework. An automatic particle weighting strategy is also proposed here that controls the reliance of our framework on the acquired measurements. We evaluated the proposed filter on several normal ECGs selected from the MIT-BIH normal sinus rhythm database. To do so, artificial white Gaussian and colored noises as well as nonstationary real muscle artifact (MA) noise over a range of low SNRs from 10 to −5 dB were added to these normal ECG segments. The benchmark methods were the EKF and extended Kalman smoother (EKS) algorithms, which are the first model-based Bayesian algorithms introduced in the field of ECG denoising. From the SNR viewpoint, the experiments showed that in the presence of Gaussian white noise, the proposed framework outperforms the EKF and EKS algorithms at lower input SNRs where the measurements and state model are not reliable. Owing to its nonlinear framework and particle weighting strategy, the proposed algorithm attained better results at all input SNRs in non-Gaussian nonstationary situations (such as the presence of pink noise, brown noise, and real MA). In addition, the impact of the proposed filtering method on the distortion of diagnostic features of the ECG was investigated and compared with the EKF/EKS methods using an ECG diagnostic distortion measure called the “Multi-Scale Entropy Based Weighted Distortion Measure” or MSEWPRD. The results revealed that our proposed algorithm had the lowest MSEWPRD for all noise types at low input SNRs. Therefore, the morphology and diagnostic information of ECG signals were much better conserved compared with the EKF/EKS frameworks, especially in non-Gaussian nonstationary situations.

Journal ArticleDOI
TL;DR: A new approach is used to filter baseline wander and power line interference from the ECG signal using empirical wavelet transform (EWT), which is a new method used to compute the building modes of a given signal.
Abstract: This paper presents new methods for baseline wander correction and powerline interference reduction in electrocardiogram (ECG) signals using the empirical wavelet transform (EWT). During data acquisition of the ECG signal, various noise sources such as powerline interference, baseline wander and muscle artifacts contaminate the information-bearing ECG signal. For better analysis and interpretation, the ECG signal must be free of noise. In the present work, a new approach is used to filter baseline wander and powerline interference from the ECG signal. The technique utilized is the empirical wavelet transform, a new method used to compute the building modes of a given signal. Its performance as a filter is compared to standard linear filters and empirical mode decomposition. The results show that EWT delivers better performance.

Posted Content
TL;DR: In this article, a ten-layer convolutional neural network combined with residual learning and a multi-channel strategy is proposed for denoising Rician noise in MR images.
Abstract: The denoising of magnetic resonance (MR) images is a task of great importance for improving the acquired image quality. Many methods have been proposed in the literature to retrieve noise-free images with good performance. However, the state-of-the-art denoising methods all need a time-consuming optimization process, and their performance strongly depends on the estimated noise level parameter. Within this manuscript we propose the idea of denoising MRI Rician noise using a convolutional neural network. The advantage of the proposed methodology is that the learned model can be used directly in the denoising process without optimization and even without the noise level parameter. Specifically, a ten-convolutional-layer neural network combined with residual learning and a multi-channel strategy is proposed. Two training regimes, training on a specific noise level and training on a general range of levels, were conducted to demonstrate the capability of our method. Experimental results over synthetic and real 3D MR data demonstrate that our proposed network can achieve superior performance compared with other methods in terms of both peak signal-to-noise ratio and global structural similarity index. Without a noise level parameter, our general noise-applicable model is also better than the other compared methods on two datasets. Furthermore, our trained model shows good general applicability.

Posted Content
TL;DR: In this paper, the authors design a network architecture for learning discriminative image models that efficiently tackle grayscale and color image denoising, and introduce two variants of the proposed network that handle a wide range of noise levels using a single set of learned parameters while remaining robust when the noise degrading the latent image does not match the statistics of the noise used during training.
Abstract: We design a novel network architecture for learning discriminative image models that are employed to efficiently tackle the problem of grayscale and color image denoising. Based on the proposed architecture, we introduce two different variants. The first network involves convolutional layers as a core component, while the second one relies instead on non-local filtering layers and thus it is able to exploit the inherent non-local self-similarity property of natural images. As opposed to most of the existing deep network approaches, which require the training of a specific model for each considered noise level, the proposed models are able to handle a wide range of noise levels using a single set of learned parameters, while they are very robust when the noise degrading the latent image does not match the statistics of the noise used during training. The latter argument is supported by results that we report on publicly available images corrupted by unknown noise and which we compare against solutions obtained by competing methods. At the same time the introduced networks achieve excellent results under additive white Gaussian noise (AWGN), which are comparable to those of the current state-of-the-art network, while they depend on a more shallow architecture with the number of trained parameters being one order of magnitude smaller. These properties make the proposed networks ideal candidates to serve as sub-solvers on restoration methods that deal with general inverse imaging problems such as deblurring, demosaicking, super-resolution, etc.

Journal ArticleDOI
TL;DR: The proposed technique effectively combines the power of both NLM and DWT, and is found to be superior to the existing state-of-the-art techniques when tested on the MIT-BIH arrhythmia database.

Journal ArticleDOI
TL;DR: The use of state-of-the-art patch-based denoising methods for additive noise reduction is investigated; fast patch similarity measurements produce fast patch-based image denoising methods.
Abstract: Digital images are captured using sensors during the data acquisition phase, where they are often contaminated by noise (an undesired random signal). Such noise can also be produced during transmission or by poor-quality lossy image compression. Reducing the noise and enhancing the images are considered central to all other digital image processing tasks. Improving the performance of image denoising methods would contribute greatly to the results of other image processing techniques. Patch-based denoising methods have recently emerged as the state-of-the-art denoising approach for various additive noise levels. In this work, the use of state-of-the-art patch-based denoising methods for additive noise reduction is investigated. Various types of image datasets are addressed to conduct this study. We first explain the types of noise in digital images and discuss various image denoising approaches, with a focus on patch-based denoising methods. Then, we experimentally evaluate the patch-based denoising methods both quantitatively and qualitatively. The patch-based image denoising methods are analyzed in terms of quality and computational time. Despite differences in sophistication, patch-based image denoising methods generally outperform other approaches, and fast patch similarity measurements produce fast patch-based image denoising methods. Patch-based image denoising approaches can effectively reduce noise and enhance images; patch-based denoising is the state-of-the-art image denoising approach.