
Showing papers on "Noise reduction published in 2011"


Journal ArticleDOI
TL;DR: In this article, a rank reduction algorithm for simultaneous reconstruction and random noise attenuation of seismic records is proposed, which is based on multichannel singular spectrum analysis (MSSA).
Abstract: We present a rank reduction algorithm that permits simultaneous reconstruction and random noise attenuation of seismic records. We based our technique on multichannel singular spectrum analysis (MSSA). The technique entails organizing spatial data at a given temporal frequency into a block Hankel matrix that in ideal conditions is a matrix of rank k, where k is the number of plane waves in the window of analysis. Additive noise and missing samples will increase the rank of the block Hankel matrix of the data. Consequently, rank reduction is proposed as a means to attenuate noise and recover missing traces. We present an iterative algorithm that resembles seismic data reconstruction with the method of projection onto convex sets. In addition, we propose to adopt a randomized singular value decomposition to accelerate the rank reduction stage of the algorithm. We apply MSSA reconstruction to synthetic examples and a field data set. Synthetic examples were used to assess the performance of the method in two...
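The core rank-reduction step can be sketched in a simplified single-channel form: a plain Hankel matrix of one signal rather than the paper's block Hankel matrix of spatial data, and a full SVD rather than the randomized one. All sizes and the test signal are illustrative assumptions.

```python
import numpy as np

def rank_reduce_hankel(x, k):
    """Embed a 1-D signal in a Hankel matrix, truncate its SVD to
    rank k, then average anti-diagonals back to a signal."""
    n = len(x)
    m = n // 2 + 1
    H = np.array([x[i:i + n - m + 1] for i in range(m)])
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    Hk = (U[:, :k] * s[:k]) @ Vt[:k]       # rank-k approximation
    out = np.zeros(n)
    counts = np.zeros(n)
    for i in range(Hk.shape[0]):           # anti-diagonal averaging
        for j in range(Hk.shape[1]):
            out[i + j] += Hk[i, j]
            counts[i + j] += 1
    return out / counts

# a single noise-free sinusoid yields a Hankel matrix of rank 2
t = np.arange(64)
clean = np.cos(2 * np.pi * 0.05 * t)
noisy = clean + 0.3 * np.random.default_rng(0).standard_normal(64)
denoised = rank_reduce_hankel(noisy, k=2)
```

Additive noise spreads energy over all singular components, so keeping only the k largest removes most of it while preserving the plane-wave structure.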

598 citations


Journal ArticleDOI
TL;DR: The denoising process is expressed as a linear expansion of thresholds (LET) that is optimized by relying on a purely data-adaptive unbiased estimate of the mean-squared error (MSE) derived in a non-Bayesian framework (PURE: Poisson-Gaussian unbiased risk estimate).
Abstract: We propose a general methodology (PURE-LET) to design and optimize a wide class of transform-domain thresholding algorithms for denoising images corrupted by mixed Poisson-Gaussian noise. We express the denoising process as a linear expansion of thresholds (LET) that we optimize by relying on a purely data-adaptive unbiased estimate of the mean-squared error (MSE), derived in a non-Bayesian framework (PURE: Poisson-Gaussian unbiased risk estimate). We provide a practical approximation of this theoretical MSE estimate for the tractable optimization of arbitrary transform-domain thresholding. We then propose a pointwise estimator for undecimated filterbank transforms, which consists of subband-adaptive thresholding functions with signal-dependent thresholds that are globally optimized in the image domain. We finally demonstrate the potential of the proposed approach through extensive comparisons with state-of-the-art techniques that are specifically tailored to the estimation of Poisson intensities. We also present denoising results obtained on real images of low-count fluorescence microscopy.

434 citations


Journal ArticleDOI
TL;DR: A new denoising method is proposed for hyperspectral data cubes that already have a reasonably good signal-to-noise ratio (SNR) (such as 600 : 1), using principal component analysis (PCA) and removing the noise in the low-energy PCA output channels.
Abstract: In this paper, a new denoising method is proposed for hyperspectral data cubes that already have a reasonably good signal-to-noise ratio (SNR) (such as 600 : 1). Given this level of SNR, the noise level of the data cubes is relatively low. Conventional image denoising methods are likely to remove the fine features of the data cubes during the denoising process. We propose to decorrelate the image information of hyperspectral data cubes from the noise by using principal component analysis (PCA) and removing the noise in the low-energy PCA output channels. The first PCA output channels contain a majority of the total energy of a data cube, and the remaining PCA output channels contain a small amount of energy. It is believed that the low-energy channels also contain a large amount of noise. Removing noise in the low-energy PCA output channels will not harm the fine features of the data cubes. A 2-D bivariate wavelet thresholding method is used to remove the noise in the low-energy PCA channels, and a 1-D dual-tree complex wavelet transform denoising method is used to remove the noise of the spectrum of each pixel of the data cube. Experimental results demonstrate that the proposed denoising method produces better denoising results than other denoising methods published in the literature.
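The pipeline structure can be sketched as follows. This is a simplified illustration: a 3x3 mean filter stands in for the paper's bivariate wavelet thresholding on the low-energy channels, the per-pixel spectral denoising step is omitted, and the cube dimensions are made up.

```python
import numpy as np

def pca_denoise_cube(cube, k):
    """Decorrelate the spectral bands with PCA, leave the k
    high-energy output channels untouched, and smooth only the
    low-energy channels (here with a simple 3x3 mean filter)."""
    H, W, B = cube.shape
    X = cube.reshape(-1, B)
    mean = X.mean(axis=0)
    Xc = X - mean
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)  # PCA basis
    scores = (Xc @ Vt.T).reshape(H, W, B)
    for b in range(k, B):                  # denoise low-energy channels only
        ch = np.pad(scores[:, :, b], 1, mode='edge')
        scores[:, :, b] = sum(ch[i:i + H, j:j + W]
                              for i in range(3) for j in range(3)) / 9.0
    return (scores.reshape(-1, B) @ Vt + mean).reshape(H, W, B)

rng = np.random.default_rng(5)
endmembers = rng.standard_normal((2, 8))   # two spectral signatures
abundance = rng.random((16, 16, 2))        # per-pixel mixing weights
clean = abundance @ endmembers             # 16 x 16 x 8 synthetic cube
noisy = clean + 0.05 * rng.standard_normal(clean.shape)
denoised = pca_denoise_cube(noisy, k=3)
```

Because the signal concentrates in the leading principal components, smoothing only the trailing ones removes noise without touching the fine features carried by the high-energy channels.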

374 citations


Journal ArticleDOI
TL;DR: This work introduces optimal inverses for the Anscombe transformation, in particular the exact unbiased inverse, a maximum likelihood (ML) inverse, and a more sophisticated minimum mean square error (MMSE) inverse.
Abstract: The removal of Poisson noise is often performed through the following three-step procedure. First, the noise variance is stabilized by applying the Anscombe root transformation to the data, producing a signal in which the noise can be treated as additive Gaussian with unitary variance. Second, the noise is removed using a conventional denoising algorithm for additive white Gaussian noise. Third, an inverse transformation is applied to the denoised signal, obtaining the estimate of the signal of interest. The choice of the proper inverse transformation is crucial in order to minimize the bias error which arises when the nonlinear forward transformation is applied. We introduce optimal inverses for the Anscombe transformation, in particular the exact unbiased inverse, a maximum likelihood (ML) inverse, and a more sophisticated minimum mean square error (MMSE) inverse. We then present an experimental analysis using a few state-of-the-art denoising algorithms and show that the estimation can be consistently improved by applying the exact unbiased inverse, particularly at the low-count regime. This results in a very efficient filtering solution that is competitive with some of the best existing methods for Poisson image denoising.
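The forward transformation and the two closed-form inverses are simple to sketch. The exact unbiased inverse discussed in the paper requires a tabulated Poisson expectation, so only the algebraic (biased) and asymptotically unbiased inverses appear here.

```python
import numpy as np

def anscombe(x):
    """Forward Anscombe root transformation: Poisson counts become
    approximately Gaussian with unit variance."""
    return 2.0 * np.sqrt(x + 3.0 / 8.0)

def inverse_algebraic(y):
    return (y / 2.0) ** 2 - 3.0 / 8.0     # direct inversion (biased)

def inverse_asymptotic(y):
    return (y / 2.0) ** 2 - 1.0 / 8.0     # asymptotically unbiased

# the stabilized variance stays close to 1 across Poisson intensities
rng = np.random.default_rng(1)
variances = [np.var(anscombe(rng.poisson(lam, 200_000)))
             for lam in (5.0, 20.0, 80.0)]
```

At low counts the bias of the closed-form inverses grows, which is exactly the regime where the paper's exact unbiased inverse pays off.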

341 citations


Journal ArticleDOI
TL;DR: A novel feature extraction method for sound event classification, based on the visual signature extracted from the sound's time-frequency representation, which shows a significant improvement over other methods in mismatched conditions, without the need for noise reduction.
Abstract: In this letter, we present a novel feature extraction method for sound event classification, based on the visual signature extracted from the sound's time-frequency representation. The motivation stems from the fact that spectrograms form recognisable images that can be identified by a human reader, with perception enhanced by pseudo-coloration of the image. The signal processing in our method is as follows. 1) The spectrogram is normalised into greyscale with a fixed range. 2) The dynamic range is quantized into regions, each of which is then mapped to form a monochrome image. 3) The monochrome images are partitioned into blocks, and the distribution statistics in each block are extracted to form the feature. The robustness of the proposed method comes from the fact that the noise is normally more diffuse than the signal and therefore the effect of the noise is limited to a particular quantization region, leaving the other regions less changed. The method is tested on a database of 60 sound classes containing a mixture of collision, action and characteristic sounds and shows a significant improvement over other methods in mismatched conditions, without the need for noise reduction.
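Steps 1-3 can be sketched as below; the region count, block grid, and the choice of mean/std as the distribution statistics are illustrative assumptions, not the paper's exact settings.

```python
import numpy as np

def spectrogram_block_features(spec, n_regions=3, blocks=(4, 4)):
    """1) normalise the spectrogram to greyscale in [0, 1];
    2) quantise the dynamic range into monochrome region images;
    3) per block, collect distribution statistics (mean and std)."""
    g = (spec - spec.min()) / (spec.max() - spec.min() + 1e-12)
    edges = np.linspace(0.0, 1.0, n_regions + 1)
    labels = np.digitize(g, edges[1:-1])   # region index 0..n_regions-1
    H, W = g.shape
    bh, bw = H // blocks[0], W // blocks[1]
    feats = []
    for r in range(n_regions):
        mono = (labels == r).astype(float)  # monochrome image for region r
        for i in range(blocks[0]):
            for j in range(blocks[1]):
                blk = mono[i * bh:(i + 1) * bh, j * bw:(j + 1) * bw]
                feats.extend([blk.mean(), blk.std()])
    return np.array(feats)

spec = np.abs(np.random.default_rng(2).standard_normal((32, 32)))
f = spectrogram_block_features(spec)   # 3 regions x 16 blocks x 2 stats
```

Quantising before extracting statistics is what confines diffuse noise to its own region image, which is the robustness argument made above.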

196 citations


Journal ArticleDOI
TL;DR: Zhang et al. as discussed by the authors proposed an l1-l0 minimization approach, where the l1 term is used for impulse denoising and the l0 term is used for sparse representation over a dictionary of image patches.

182 citations


Journal ArticleDOI
TL;DR: Experimental results demonstrate that the proposed modification achieves better results in terms of both peak signal-to-noise ratio and subjective visual quality than the original method for strong noise.
Abstract: In order to resolve the problem that the denoising performance drops sharply when the noise standard deviation reaches 40, it was proposed to replace the wavelet transform with the DCT. In this comment, we argue that this replacement is unnecessary, and that the problem can be solved by adjusting some numerical parameters. We also present this parameter modification approach here. Experimental results demonstrate that the proposed modification achieves better results in terms of both peak signal-to-noise ratio and subjective visual quality than the original method for strong noise.

152 citations


Journal ArticleDOI
TL;DR: This work combines the multichannel speech presence probability (MC-SPP) that was proposed in an earlier contribution with an alternative formulation of the minima-controlled recursive averaging (MCRA) technique, generalized from the single-channel to the multichannel case.
Abstract: Noise statistics estimation is a paramount issue in the design of reliable noise-reduction algorithms. Although significant efforts have been devoted to this problem in the literature, most developed methods so far have focused on the single-channel case. When multiple microphones are used, it is important that the data from all the sensors are optimally combined to achieve judicious updates of the noise statistics and the noise-reduction filter. This contribution is devoted to the development of a practical approach to multichannel noise tracking and reduction. We combine the multichannel speech presence probability (MC-SPP) that we proposed in an earlier contribution with an alternative formulation of the minima-controlled recursive averaging (MCRA) technique that we generalize from the single-channel to the multichannel case. To demonstrate the effectiveness of the proposed MC-SPP and multichannel noise estimator, we integrate them into three variants of the multichannel noise reduction Wiener filter. Experimental results show the advantages of the proposed solution.

152 citations


Proceedings ArticleDOI
09 Oct 2011
TL;DR: The experimental results show that stochastic implementations tolerate more noise and consume less hardware than their conventional counterparts, and the validity of the presented stochastic computational elements is demonstrated through four basic digital image processing algorithms.
Abstract: As device scaling continues to nanoscale dimensions, circuit reliability will continue to become an ever greater problem. Stochastic computing, which performs computing with random bits (stochastic bit streams), can be used to enable reliable computation using those unreliable devices. However, one of the major issues of stochastic computing is that applications implemented with this technique are limited by the available computational elements. In this paper, first we will introduce and prove a stochastic absolute value function. Second, we will demonstrate a mathematical analysis of a stochastic tanh function, which is a key component used in a stochastic comparator. Third, we will present a quantitative analysis of a one-parameter linear gain function, and propose a new two-parameter version. The validity of the presented stochastic computational elements is demonstrated through four basic digital image processing algorithms: edge detection, frame difference based image segmentation, median filter based noise reduction, and image contrast stretching. Our experimental results show that stochastic implementations tolerate more noise and consume less hardware than their conventional counterparts.
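The flavour of computing on stochastic bit streams is easy to illustrate in simulation. This shows generic stream arithmetic (multiplication as AND, scaled addition as a multiplexer), not the paper's absolute-value, tanh, or gain elements; stream length and input values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(4)

def to_stream(p, n=20_000):
    """Encode a probability p as a unipolar stochastic bit stream:
    each bit is 1 with probability p."""
    return rng.random(n) < p

# multiplication of probabilities is a bitwise AND of independent streams
a, b = 0.6, 0.3
product = np.mean(to_stream(a) & to_stream(b))        # approx. a * b

# scaled addition: a multiplexer selecting each input with probability 0.5
sel = to_stream(0.5)
scaled_sum = np.mean(np.where(sel, to_stream(a), to_stream(b)))  # approx. (a + b) / 2
```

Because each bit is cheap and errors average out over the stream, single bit flips barely perturb the encoded value, which is the noise-tolerance argument the paper quantifies.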

150 citations


Proceedings ArticleDOI
09 Jun 2011
TL;DR: This work develops optimal forward and inverse variance-stabilizing transformations for the Rice distribution in order to approach the problem of magnetic resonance (MR) image filtering by means of standard denoising algorithms designed for homoskedastic observations.
Abstract: We develop optimal forward and inverse variance-stabilizing transformations for the Rice distribution, in order to approach the problem of magnetic resonance (MR) image filtering by means of standard denoising algorithms designed for homoskedastic observations.

147 citations


Journal ArticleDOI
01 Aug 2011
TL;DR: Experimental results showed that EEMD had better noise-filtering performance than EMD and the FIR Wiener filter, owing to the reduction of mode mixing between near IMF scales.
Abstract: Empirical mode decomposition (EMD) is a powerful algorithm that decomposes signals into a set of intrinsic mode functions (IMFs) based on the signal complexity. In this study, partial reconstruction of IMFs, acting as a filter, was used for noise reduction in ECG. An improved algorithm, ensemble EMD (EEMD), was used for the first time to improve the noise-filtering performance, based on the reduction of mode mixing between near IMF scales. Both standard ECG templates derived from a simulator and records from an arrhythmia ECG database were used as ECG signals, while Gaussian white noise was used as the noise source. The mean square error (MSE) between the reconstructed ECG and the original ECG was used as the filter performance indicator. An FIR Wiener filter was also used to compare the filtering performance with EEMD. Experimental results showed that EEMD had better noise-filtering performance than EMD and the FIR Wiener filter. The average MSE ratios of EEMD to EMD and to the FIR Wiener filter were 0.71 and 0.61, respectively. Thus, this study established an ECG noise-filtering procedure based on EEMD. The optimal added-noise power and trial number for EEMD were also examined.

Journal ArticleDOI
06 Sep 2011-PLOS ONE
TL;DR: An adaptive algorithm is presented that offers a new formulation of fractal and multifractal analysis that is better than existing methods when a biosignal contains a strong oscillatory component and is demonstrated by offering new important insights into brainwave dynamics and the very high accuracy in automatically detecting epileptic seizures from EEG signals.
Abstract: Chaos and random fractal theories are among the most important for fully characterizing nonlinear dynamics of complicated multiscale biosignals. Chaos analysis requires that signals be relatively noise-free and stationary, while fractal analysis demands signals to be non-rhythmic and scale-free. To facilitate joint chaos and fractal analysis of biosignals, we report an adaptive multiscale decomposition algorithm, which: (1) can readily remove nonstationarities from the signal; (2) can more effectively reduce noise in the signals than linear filters, wavelet denoising, and chaos-based noise reduction schemes; (3) can readily decompose a multiscale biosignal into a series of intrinsically bandlimited functions; (4) offers a new formulation of fractal and multifractal analysis that is better than the popular detrended fluctuation analysis when a biosignal contains a strong oscillatory component. The effectiveness of the approach is demonstrated by applying it to classify EEGs for the purpose of detecting epileptic seizures.

Journal ArticleDOI
TL;DR: The efficacy of SVD denoising method in electronic nose data analysis is demonstrated by analyzing five data sets available in public domain which are based on surface acoustic wave sensors, conducting composite polymer sensors and the tin-oxide sensors arrays.
Abstract: This paper analyzes the role of singular value decomposition (SVD) in denoising sensor array data of electronic nose systems. It is argued that the SVD decomposition of the raw data matrix distributes additive noise over orthogonal singular directions representing both the sensor and the odor variables. The noise removal is done by truncating the SVD matrices up to a few largest singular value components, and then reconstructing a denoised data matrix by using the remaining singular vectors. In electronic nose systems this method seems to be very effective in reducing noise components arising from both the odor sampling and delivery system and the sensor electronics. The feature extraction by principal component analysis based on the SVD-denoised data matrix is seen to reduce separation between samples of the same class and increase separation between samples of different classes. This is beneficial for improving classification efficiency of electronic noses by reducing overlap between classes in feature space. The efficacy of the SVD denoising method in electronic nose data analysis is demonstrated by analyzing five data sets available in the public domain which are based on surface acoustic wave (SAW) sensors, conducting composite polymer sensors, and tin-oxide sensor arrays.
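The truncation-and-reconstruction step can be sketched on synthetic sensor-array data; the matrix sizes, rank, and noise level below are illustrative assumptions.

```python
import numpy as np

def svd_denoise(X, r):
    """Keep only the r largest singular components of a
    (samples x sensors) data matrix and reconstruct."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r]

rng = np.random.default_rng(3)
# synthetic data: 40 odor samples x 8 sensors with a rank-2 response
response = rng.standard_normal((40, 2)) @ rng.standard_normal((2, 8))
measured = response + 0.1 * rng.standard_normal(response.shape)
denoised = svd_denoise(measured, r=2)
```

Since additive noise spreads roughly evenly over all singular directions while the odor response concentrates in the first few, truncation discards mostly noise.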

Journal ArticleDOI
TL;DR: The main aim is to modify the wavelet coefficients in the new basis so that the noise can be removed from the data; the signal-to-noise ratio was preferred as the measure of denoising quality.
Abstract: This paper proposes different approaches to wavelet-based image denoising. The search for efficient image denoising methods is still a valid challenge at the crossing of functional analysis and statistics. In spite of the sophistication of recently proposed methods, most algorithms have not yet attained a desirable level of applicability. Wavelet algorithms are useful tools for signal processing tasks such as image compression and denoising. Multiwavelets can be considered an extension of scalar wavelets. The main aim is to modify the wavelet coefficients in the new basis so that the noise can be removed from the data. In this paper, we extend the existing technique and provide a comprehensive evaluation of the proposed method. Results are presented for different noise types, such as Gaussian, Poisson, salt-and-pepper, and speckle noise. The signal-to-noise ratio was preferred as the measure of denoising quality.

Journal ArticleDOI
TL;DR: Several simple and efficient sign-based normalized adaptive filters, which are computationally superior owing to multiplier-free weight-update loops, are used for cancellation of noise in electrocardiographic signals.

Proceedings ArticleDOI
18 Nov 2011
TL;DR: This work proposes to replace the hard decision of the VAD by a soft speech presence probability (SPP) and shows that by doing so, the proposed estimator does not require a bias correction and safety-net as is required by the MMSE estimator presented.
Abstract: In this paper, we analyze the minimum mean square error (MMSE) based spectral noise power estimator [1] and present an improvement. We will show that the MMSE based spectral noise power estimate is only updated when the a posteriori signal-to-noise ratio (SNR) is lower than one. This threshold on the a posteriori SNR can be interpreted as a voice activity detector (VAD). We propose in this work to replace the hard decision of the VAD by a soft speech presence probability (SPP). We show that by doing so, the proposed estimator does not require a bias correction and safety-net as is required by the MMSE estimator presented in [1]. At the same time, the proposed estimator maintains the quick noise tracking capability which is characteristic for the MMSE noise tracker, results in less noise power overestimation and is computationally less expensive.
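A minimal sketch in the spirit of this estimator: the hard a posteriori SNR threshold is replaced by a soft speech presence probability that weights the noise update. Equal speech presence/absence priors, a fixed a priori SNR, the smoothing constant, and the initialization from a short noise-only segment are all simplifying assumptions, and the bias-correction discussion is omitted.

```python
import numpy as np

def spp_noise_tracker(frames, alpha=0.9, prior_snr_db=15.0):
    """Track the noise PSD per frequency bin using a soft speech
    presence probability (SPP) instead of a hard VAD: bins with high
    a posteriori SNR are treated as probable speech and largely
    excluded from the noise update."""
    xi = 10.0 ** (prior_snr_db / 10.0)      # fixed a priori SNR under H1
    noise = frames[:10].mean(axis=0)        # init from a short segment
    for y in frames[10:]:
        post = y / np.maximum(noise, 1e-12)  # a posteriori SNR
        p = 1.0 / (1.0 + (1.0 + xi) * np.exp(-post * xi / (1.0 + xi)))
        est = (1.0 - p) * y + p * noise      # soft-decision estimate
        noise = alpha * noise + (1.0 - alpha) * est
    return noise

# noise-only periodograms (exponential, unit mean): the tracker
# should settle near the true noise PSD of 1
frames = np.random.default_rng(6).exponential(1.0, size=(300, 64))
tracked = spp_noise_tracker(frames)
```

The soft weight `p` plays the role the paper assigns to the SPP: it removes the hard update threshold while keeping the quick tracking behaviour of the MMSE-based estimator.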

Journal ArticleDOI
Gang Xu1, Mengdao Xing1, Lei Zhang1, Yabo Liu1, Yachao Li1 
TL;DR: A novel algorithm of inverse synthetic aperture radar (ISAR) imaging based on Bayesian estimation is proposed, wherein the ISAR imaging joint with phase adjustment is mathematically transferred into signal reconstruction via maximum a posteriori estimation.
Abstract: In this letter, a novel algorithm for inverse synthetic aperture radar (ISAR) imaging based on Bayesian estimation is proposed, wherein ISAR imaging, jointly with phase adjustment, is mathematically transferred into signal reconstruction via maximum a posteriori estimation. In the scheme, phase errors are treated as model errors and are overcome in the sparsity-driven optimization regardless of their format, while data-driven estimation of the statistical parameters for both noise and target is developed, which guarantees high precision of image generation. Meanwhile, the fast Fourier transform is utilized to implement the solution to image formation, substantially improving efficiency. Due to the high denoising capability of the proposed algorithm, a high-quality image can be achieved even under strong noise. Experimental results using simulated and measured data confirm the validity of the approach.

Proceedings ArticleDOI
06 Jun 2011
TL;DR: The requirements, constraints, and design of NASA's next generation Aircraft NOise Prediction Program (ANOPP2) are introduced in this article, which is designed to facilitate the combination of acoustic approaches of varying fidelity for the analysis of noise from conventional and unconventional aircraft.
Abstract: The requirements, constraints, and design of NASA's next generation Aircraft NOise Prediction Program (ANOPP2) are introduced. Similar to its predecessor (ANOPP), ANOPP2 provides the U.S. Government with an independent aircraft system noise prediction capability that can be used as a stand-alone program or within larger trade studies that include performance, emissions, and fuel burn. The ANOPP2 framework is designed to facilitate the combination of acoustic approaches of varying fidelity for the analysis of noise from conventional and unconventional aircraft. ANOPP2 integrates noise prediction and propagation methods, including those found in ANOPP, into a unified system that is compatible for use within general aircraft analysis software. The design of the system is described in terms of its functionality and capability to perform predictions accounting for distributed sources, installation effects, and propagation through a non-uniform atmosphere including refraction and the influence of terrain. The philosophy of mixed fidelity noise prediction through the use of nested Ffowcs Williams and Hawkings surfaces is presented and specific issues associated with its implementation are identified. Demonstrations for a conventional twin-aisle and an unconventional hybrid wing body aircraft configuration are presented to show the feasibility and capabilities of the system. Isolated model-scale jet noise predictions are also presented using high-fidelity and reduced order models, further demonstrating ANOPP2's ability to provide predictions for model-scale test configurations.

Journal ArticleDOI
TL;DR: A new algorithm for striping noise reduction in hyperspectral images is proposed that exploits the orthogonal subspace approach to estimate the striping component and to remove it from the image, preserving the useful signal.
Abstract: In this paper, a new algorithm for striping noise reduction in hyperspectral images is proposed. The new algorithm exploits the orthogonal subspace approach to estimate the striping component and to remove it from the image, preserving the useful signal. The algorithm does not introduce artifacts in the data and also takes into account the dependence on the signal intensity of the striping component. The effectiveness of the algorithm in reducing striping noise is experimentally demonstrated on real data acquired both by airborne and satellite hyperspectral sensors.

Proceedings ArticleDOI
10 Jul 2011
TL;DR: A channel pattern noise based approach to guard speaker recognition systems against playback attacks; the experimental results indicate that, with the designed playback detector, the equal error rate of the speaker recognition system is reduced by 30%.
Abstract: This paper proposes a channel pattern noise based approach to guard speaker recognition systems against playback attacks. For each recording under investigation, the channel pattern noise serves as a unique channel identification fingerprint. A denoising filter and statistical frames are applied to extract the channel pattern noise, and 6 Legendre coefficients and 6 statistical features are extracted. An SVM is used to train the channel noise model to judge whether the input speech is an authentic or a playback recording. The experimental results indicate that, with the designed playback detector, the equal error rate of the speaker recognition system is reduced by 30%.

Book
16 Sep 2011
TL;DR: This work addresses the problem of multichannel noise reduction in the STFT domain with and without interframe correlation and proposes different optimization cost functions from which the optimal filters are derived.
Abstract: This work addresses the noise reduction problem in the short-time Fourier transform (STFT) domain. We divide the general problem into five basic categories depending on the number of microphones being used and whether the interframe or interband correlation is considered. The first category deals with the single-channel problem, where STFT coefficients at different frames and frequency bands are assumed to be independent. In this case, the noise reduction filter in each frequency band is basically a real gain. Since a gain does not improve the signal-to-noise ratio (SNR) for any given subband and frame, the noise reduction is basically achieved by lifting up the subbands and frames that are less noisy while weighing down those that are more noisy. The second category also concerns the single-channel problem. The difference is that now the interframe correlation is taken into account and a filter is applied in each subband instead of just a gain. The advantage of using the interframe correlation is that we can improve not only the long-time fullband SNR, but the frame-wise subband SNR as well. The third and fourth categories discuss the problem of multichannel noise reduction in the STFT domain with and without interframe correlation, respectively. In the last category, we consider the interband correlation in the design of the noise reduction filters. We illustrate the basic principle for the single-channel case as an example, while this concept can be generalized to other scenarios. In all categories, we propose different optimization cost functions from which we derive the optimal filters, and we also define the performance measures that help in analyzing them.

Journal ArticleDOI
Huaqing Li1, Xiaofeng Liao1, Chuandong Li1, Hongyu Huang1, Chaojie Li1 
TL;DR: Based on the Lyapunov stability theorem, a criterion for global asymptotic stability of the unique equilibrium of the noise reduction CNN is derived, and an approach to training edge-detection templates is proposed that can detect edges precisely and efficiently, i.e., in only one iteration.

Journal ArticleDOI
TL;DR: A theoretical study and experimental validation on a binaural hearing aid setup of this standard SDW-MWF implementation, where the effect of estimation errors in the second-order statistics is analyzed and two recently introduced alternative filters are studied.
Abstract: The speech distortion weighted multichannel Wiener filter (SDW-MWF) is a promising multi-microphone noise reduction technique, in particular for hearing aid applications. Its benefit over other single- and multi-microphone techniques has been shown in several previous contributions, theoretically as well as experimentally. In theoretical studies, it is usually assumed that there is a single target speech source. The filter can then be decomposed into a conceptually interesting structure, i.e., into a spatial filter (related to other known techniques) and a single-channel postfilter, which then also allows for a performance analysis. Unfortunately, it is not straightforward to make a robust practical implementation based on this decomposition. Instead, a general SDW-MWF implementation, which only requires a (relatively easy) estimation of speech and noise correlation matrices, is mostly used in practice. This paper features a theoretical study and experimental validation on a binaural hearing aid setup of this standard SDW-MWF implementation, where the effect of estimation errors in the second-order statistics is analyzed. In this case, and for a single target speech source, the standard SDW-MWF implementation is found not to behave as predicted theoretically. Second, two recently introduced alternative filters, namely the rank-one SDW-MWF and the spatial prediction SDW-MWF, are also studied in the presence of estimation errors in the second-order statistics. These filters implicitly assume a single target speech source, but still only rely on the speech and noise correlation matrices. It is proven theoretically and illustrated through experiments that these alternative SDW-MWF implementations behave close to the theoretical optimum, and hence outperform the standard SDW-MWF implementation.
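The standard SDW-MWF implementation discussed above can be sketched directly from speech and noise correlation matrices. The rank-one speech matrix, unit PSDs, and equal-gain steering vector below are illustrative assumptions used to make the SNR gain easy to verify by hand.

```python
import numpy as np

def sdw_mwf(Rx, Rv, mu=1.0, ref=0):
    """Speech-distortion-weighted multichannel Wiener filter computed
    from the speech (Rx) and noise (Rv) correlation matrices; mu > 1
    favours noise reduction, mu < 1 favours low speech distortion;
    ref selects the reference microphone."""
    return np.linalg.solve(Rx + mu * Rv, Rx[:, ref])

# single target source: rank-one speech correlation matrix
a = np.ones(4)           # steering vector (equal gains at 4 mics)
Rx = np.outer(a, a)      # speech PSD = 1
Rv = np.eye(4)           # spatially white noise, PSD = 1 per mic
w = sdw_mwf(Rx, Rv)

snr_in = 1.0                              # at the reference microphone
snr_out = (w @ a) ** 2 / (w @ w)          # speech power / noise power
```

With these numbers w = a/5, so the output SNR is 4: the array gain of four equal-gain microphones against spatially white noise, which is the theoretical optimum for this toy setup.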

Journal ArticleDOI
TL;DR: The noise reduction algorithm was successful in improving sentence perception in speech-weighted noise, as well as in more dynamic types of background noise, which is currently being trialed in a behind-the-ear processor for take-home use.
Abstract: Objective: The aim of this study was to investigate whether a real-time noise reduction algorithm provided speech perception benefit for Cochlear™ Nucleus® cochlear implant recipients in the laboratory. Design: The noise reduction algorithm attenuated masker-dominated channels. It estimated the signal-to-noise ratio of each channel on a short-term basis from a single microphone input, using a recursive minimum statistics method. In this clinical evaluation, the algorithm was implemented in two programs (noise reduction programs 1 [NR1] and 2 [NR2]), which differed in their level of noise reduction. These programs used advanced combination encoder (ACE™) channel selection and were compared with ACE without noise reduction in 13 experienced cochlear implant subjects. An adaptive speech reception threshold (SRT) test provided the signal-to-noise ratio for 50% sentence intelligibility in three different types of noises: speech-weighted, cocktail party, and street-side city noise. Results: In all three noise types, mean SRTs for both NR programs were significantly better than those for ACE. The greatest improvement occurred for speech-weighted noise; the SRT benefit over ACE was 1.77 dB for NR1 and 2.14 dB for NR2. There were no significant differences in speech perception scores between the two NR programs. Subjects reported no degradation in sound quality with the experimental programs. Conclusions: The noise reduction algorithm was successful in improving sentence perception in speech-weighted noise, as well as in more dynamic types of background noise. The algorithm is currently being trialed in a behind-the-ear processor for take-home use.

Journal ArticleDOI
TL;DR: An algorithm based on minimizing the squared logarithmic transformation of the error signal is proposed in this correspondence and is more robust for impulsive noise control and does not need the parameter selection and thresholds estimation according to the noise characteristics.
Abstract: To overcome the limitations of the existing algorithms for active impulsive noise control, an algorithm based on minimizing the squared logarithmic transformation of the error signal is proposed in this correspondence. The proposed algorithm is more robust for impulsive noise control and does not need the parameter selection and thresholds estimation according to the noise characteristics. These are verified by theoretical analysis and numerical simulations.

Journal Article
TL;DR: This paper proposes filtering techniques for the removal of speckle noise from digital images; quality is measured quantitatively by the signal-to-noise ratio, and the noise level by the standard deviation.
Abstract: Reducing noise in medical images, satellite images, etc. is a challenge for researchers in digital image processing. Several approaches exist for noise reduction. Speckle noise is commonly found in synthetic aperture radar images, satellite images, and medical images. This paper proposes filtering techniques for the removal of speckle noise from digital images. Quantitative measures are computed using the signal-to-noise ratio, and the noise level is measured by the standard deviation.

Journal ArticleDOI
TL;DR: Results show that the modified non local-based (MNL) filter is capable of effectively reducing the speckle noise while well preserving tissue boundaries for ultrasonic images.

Journal ArticleDOI
TL;DR: Both hard and soft thresholding are evaluated; output SNR and MSE are calculated and compared for both thresholding methods, and soft thresholding performs better than hard thresholding at all input SNR levels.
Abstract: In this paper, a discrete wavelet transform (DWT) based algorithm is used for speech signal denoising. Both hard and soft thresholding are used for denoising. Analysis is done on noisy speech corrupted by babble noise at 0 dB, 5 dB, 10 dB, and 15 dB SNR levels. Simulations are performed in MATLAB 7.10.0 (R2010a). Output SNR (signal-to-noise ratio) and MSE (mean square error) are calculated and compared for both thresholding methods. Soft thresholding performs better than hard thresholding at all input SNR levels. Hard thresholding shows a maximum of 21.79 dB improvement, whereas soft thresholding shows a maximum of 35.16 dB improvement in output SNR.
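Hard and soft thresholding are easy to sketch with a one-level Haar transform. The paper's wavelet, decomposition depth, and threshold rule are not specified here; a known noise level and the universal threshold are assumptions of this sketch.

```python
import numpy as np

def haar_dwt(x):
    a = (x[0::2] + x[1::2]) / np.sqrt(2)   # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2)   # detail coefficients
    return a, d

def haar_idwt(a, d):
    out = np.empty(2 * a.size)
    out[0::2] = (a + d) / np.sqrt(2)
    out[1::2] = (a - d) / np.sqrt(2)
    return out

def soft(c, thr):
    return np.sign(c) * np.maximum(np.abs(c) - thr, 0.0)

def hard(c, thr):
    return np.where(np.abs(c) > thr, c, 0.0)

rng = np.random.default_rng(7)
t = np.linspace(0.0, 1.0, 1024)
clean = np.sin(2 * np.pi * 5 * t)
sigma = 0.4
noisy = clean + sigma * rng.standard_normal(t.size)

a, d = haar_dwt(noisy)
thr = sigma * np.sqrt(2.0 * np.log(noisy.size))   # universal threshold
den_soft = haar_idwt(a, soft(d, thr))
den_hard = haar_idwt(a, hard(d, thr))
```

Soft thresholding shrinks surviving coefficients toward zero and thus avoids the isolated spikes hard thresholding leaves behind, which is consistent with the abstract's finding that it performs better at every input SNR.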

Journal ArticleDOI
TL;DR: A modified version of the NL-means method is presented that incorporates an ultrasound dedicated noise model, as well as a GPU implementation of the algorithm that demonstrates that the proposed method is very efficient in terms of denoising quality and is real-time.
Abstract: Image denoising is the process of removing the noise that perturbs image analysis methods. In some applications like segmentation or registration, denoising is intended to smooth homogeneous areas while preserving the contours. In many applications like video analysis, visual servoing or image-guided surgical interventions, real-time denoising is required. This paper presents a method for real-time denoising of ultrasound images: a modified version of the NL-means method is presented that incorporates an ultrasound dedicated noise model, as well as a GPU implementation of the algorithm. Results demonstrate that the proposed method is very efficient in terms of denoising quality and is real-time.
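The NL-means principle (though not the paper's ultrasound-specific noise model or GPU implementation) can be sketched in 1-D: each sample is averaged with neighbours whose surrounding patches look similar, so flat regions are smoothed while edges, whose patches differ, are preserved. Patch size, search window, and filtering parameter are illustrative.

```python
import numpy as np

def nl_means_1d(x, patch=5, search=20, h=0.4):
    """Minimal NL-means on a 1-D signal: weight each neighbour by the
    similarity of its surrounding patch to the current patch."""
    n = x.size
    pad = patch // 2
    xp = np.pad(x, pad, mode='reflect')
    patches = np.array([xp[i:i + patch] for i in range(n)])
    out = np.empty(n)
    for i in range(n):
        lo, hi = max(0, i - search), min(n, i + search + 1)
        d2 = np.mean((patches[lo:hi] - patches[i]) ** 2, axis=1)
        w = np.exp(-d2 / (h * h))          # similar patches get weight ~1
        out[i] = np.sum(w * x[lo:hi]) / np.sum(w)
    return out

rng = np.random.default_rng(8)
clean = np.where(np.arange(256) < 128, 0.0, 1.0)   # step edge
noisy = clean + 0.2 * rng.standard_normal(256)
denoised = nl_means_1d(noisy)
```

Patches straddling the step are dissimilar to patches on either side, so they receive negligible weight and the edge stays sharp, which is the contour-preserving behaviour the abstract describes.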

Journal ArticleDOI
Feng Pan1, Wen Xiao1, Shuo Liu1, Fanjing Wang1, Lu Rong1, Rui Li1 
TL;DR: By a proper averaging procedure, the coherent noise of phase contrast image is reduced significantly.
Abstract: A method to reduce coherent noise in digital holographic phase contrast microscopy is proposed. By slightly shifting the specimen, a series of digital holograms with different coherent noise patterns is recorded. Each hologram is reconstructed individually, while the different phase tilts of the reconstructed complex amplitudes due to the specimen shifts are corrected in the hologram plane by using the numerical parametric lens method. Afterward, the lateral displacements of the phase maps from different holograms are compensated in the image plane by using a digital image registration method. Thus, all phase images have the same distribution but uncorrelated coherent noise patterns. By a proper averaging procedure, the coherent noise of the phase contrast image is reduced significantly. Experimental results are given to confirm the proposed method.