
Showing papers on "Noise reduction published in 2014"


Posted Content
TL;DR: Denoising-based approximate message passing (D-AMP) extends the approximate message passing framework to integrate a wide class of denoisers within its iterations, improving the performance of compressed sensing (CS) reconstruction.
Abstract: A denoising algorithm seeks to remove noise, errors, or perturbations from a signal. Extensive research has been devoted to this arena over the last several decades, and as a result, today's denoisers can effectively remove large amounts of additive white Gaussian noise. A compressed sensing (CS) reconstruction algorithm seeks to recover a structured signal acquired using a small number of randomized measurements. Typical CS reconstruction algorithms can be cast as iteratively estimating a signal from a perturbed observation. This paper answers a natural question: How can one effectively employ a generic denoiser in a CS reconstruction algorithm? In response, we develop an extension of the approximate message passing (AMP) framework, called Denoising-based AMP (D-AMP), that can integrate a wide class of denoisers within its iterations. We demonstrate that, when used with a high performance denoiser for natural images, D-AMP offers state-of-the-art CS recovery performance while operating tens of times faster than competing methods. We explain the exceptional performance of D-AMP by analyzing some of its theoretical features. A key element in D-AMP is the use of an appropriate Onsager correction term in its iterations, which coerces the signal perturbation at each iteration to be very close to the white Gaussian noise that denoisers are typically designed to remove.
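To make the iteration concrete, here is a minimal NumPy sketch of the D-AMP loop described above. A Gaussian blur stands in for the high-performance denoiser (the paper uses denoisers such as BM3D), and the divergence needed for the Onsager term is estimated with a Monte Carlo probe; matrix sizes, the denoiser, and iteration counts are illustrative placeholders, not the authors' configuration.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def denoise(v, sigma):
    # Placeholder denoiser; D-AMP would plug in BM3D or another
    # high-performance denoiser parameterized by the noise level sigma.
    return gaussian_filter(v, sigma=max(sigma, 1e-3))

def damp(y, A, shape, n_iter=30, eps=1e-3):
    m, n = A.shape
    x = np.zeros(n)
    z = y.copy()
    for _ in range(n_iter):
        sigma = np.linalg.norm(z) / np.sqrt(m)       # effective noise level
        v = x + A.T @ z                              # pseudo-data to denoise
        x_new = denoise(v.reshape(shape), sigma).ravel()
        # Monte Carlo estimate of the denoiser divergence for the Onsager term
        b = np.random.randn(n)
        div = b @ (denoise((v + eps * b).reshape(shape), sigma).ravel() - x_new) / eps
        z = y - A @ x_new + z * div / m              # Onsager-corrected residual
        x = x_new
    return x.reshape(shape)

# e.g. x_hat = damp(y, A, (32, 32)) for a 32x32 image and y = A @ x + noise
```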

337 citations


Journal ArticleDOI
TL;DR: A novel denoising algorithm for photon-limited images combines elements of dictionary learning and sparse patch-based representations of images; the results reveal that, despite its conceptual simplicity, Poisson PCA-based denoising is highly competitive in very low light regimes.
Abstract: Photon-limited imaging arises when the number of photons collected by a sensor array is small relative to the number of detector elements. Photon limitations are an important concern for many applications such as spectral imaging, night vision, nuclear medicine, and astronomy. Typically a Poisson distribution is used to model these observations, and the inherent heteroscedasticity of the data combined with standard noise removal methods yields significant artifacts. This paper introduces a novel denoising algorithm for photon-limited images which combines elements of dictionary learning and sparse patch-based representations of images. The method employs both an adaptation of Principal Component Analysis (PCA) for Poisson noise and recently developed sparsity-regularized convex optimization algorithms for photon-limited images. A comprehensive empirical evaluation of the proposed method helps characterize the performance of this approach relative to other state-of-the-art denoising methods. The results reveal that, despite its conceptual simplicity, Poisson PCA-based denoising appears to be highly competitive in very low light regimes.
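As a rough illustration of patch-based PCA denoising in this setting, the sketch below Gaussianizes the Poisson data with an Anscombe transform and keeps only the leading principal components of the patch matrix. This is a simplified stand-in, not the paper's Poisson-adapted PCA, which works with the Poisson likelihood directly; patch size, component count, and the crude inverse transform are all illustrative.

```python
import numpy as np

def patch_pca_denoise(img, patch=8, keep=8):
    a = 2.0 * np.sqrt(img + 3.0 / 8.0)            # Anscombe transform
    H, W = a.shape
    ps, idx = [], []
    for i in range(0, H - patch + 1, patch):      # non-overlapping patches
        for j in range(0, W - patch + 1, patch):
            ps.append(a[i:i+patch, j:j+patch].ravel())
            idx.append((i, j))
    P = np.array(ps)
    mu = P.mean(axis=0)
    U, s, Vt = np.linalg.svd(P - mu, full_matrices=False)
    P_hat = (U[:, :keep] * s[:keep]) @ Vt[:keep] + mu   # keep leading PCs
    out = np.zeros_like(a)
    for (i, j), p in zip(idx, P_hat):
        out[i:i+patch, j:j+patch] = p.reshape(patch, patch)
    return (out / 2.0) ** 2 - 3.0 / 8.0           # crude algebraic inverse
```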

289 citations


Journal ArticleDOI
TL;DR: A survey of the published literature on denoising methods for MR images is presented; the popular approaches are classified into different groups, and an overview of the various methods is provided.

238 citations


Journal ArticleDOI
TL;DR: In this paper, the authors develop a novel denoising method termed f-x empirical-mode decomposition (EMD) predictive filtering, which solves the problem that makes f-x EMD ineffective with complex seismic data.
Abstract: Random noise attenuation has always played an important role in seismic data processing. One of the most widely used methods for suppressing random noise is f-x predictive filtering. When the subsurface structure becomes complex, this method suffers from higher prediction errors owing to the large number of different dip components that need to be predicted. We develop a novel denoising method termed f-x empirical-mode decomposition (EMD) predictive filtering. This new scheme solves the problem that makes f-x EMD ineffective with complex seismic data. Also, by making the prediction more precise, the new scheme removes the limitation of conventional f-x predictive filtering when dealing with multidip seismic profiles. In this new method, we first apply EMD to each frequency slice in the f-x domain and obtain several intrinsic mode functions (IMFs). Then, an autoregressive model is applied to the sum of the first few IMFs, which contain the high-dip-angle components, to predict the useful ste...
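A minimal sketch of the basic f-x EMD step (drop the first, most oscillatory IMF of every frequency slice) is shown below, assuming the PyEMD package; the paper's predictive-filtering extension additionally fits an autoregressive model to the removed high-dip IMFs rather than discarding them.

```python
import numpy as np
from PyEMD import EMD  # assumes the PyEMD package (pip install EMD-signal)

def fx_emd_denoise(section):
    # section: 2D seismic array, time samples x traces
    spec = np.fft.rfft(section, axis=0)             # to the f-x domain
    emd = EMD()
    for k in range(spec.shape[0]):                  # each frequency slice
        for part in (spec[k].real, spec[k].imag):   # EMD on real/imag separately
            imfs = emd.emd(np.ascontiguousarray(part))
            if imfs.shape[0] > 1:
                part -= imfs[0]                     # drop the most oscillatory IMF
    return np.fft.irfft(spec, n=section.shape[0], axis=0)
```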

217 citations


Journal ArticleDOI
TL;DR: In WESNR, soft impulse pixel detection via weighted encoding deals with IN and AWGN simultaneously, while the image sparsity prior and nonlocal self-similarity prior are integrated into a regularization term and introduced into the variational encoding framework.
Abstract: Mixed noise removal from natural images is a challenging task, since the noise distribution usually does not have a parametric model and has a heavy tail. One typical kind of mixed noise is additive white Gaussian noise (AWGN) coupled with impulse noise (IN). Many mixed noise removal methods are detection-based: they first detect the locations of IN pixels and then remove the mixed noise. However, such methods tend to generate many artifacts when the mixed noise is strong. In this paper, we propose a simple yet effective method, namely weighted encoding with sparse nonlocal regularization (WESNR), for mixed noise removal. In WESNR, there is no explicit step of impulse pixel detection; instead, soft impulse pixel detection via weighted encoding is used to deal with IN and AWGN simultaneously. Meanwhile, the image sparsity prior and nonlocal self-similarity prior are integrated into a regularization term and introduced into the variational encoding framework. Experimental results show that the proposed WESNR method achieves leading mixed noise removal performance in terms of both quantitative measures and visual quality.
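The core of weighted encoding can be illustrated with a small iteratively reweighted least-squares loop: residuals that look impulse-like receive small weights, so the code is fitted mostly to the "good" entries. The dictionary, weight rule, and ridge penalty below are illustrative placeholders; WESNR itself adds the sparse and nonlocal self-similarity regularization described above.

```python
import numpy as np

def weighted_encode(y, D, n_iter=10, lam=0.1):
    # y: observed patch (dim,); D: dictionary (dim, n_atoms)
    n_atoms = D.shape[1]
    alpha = np.zeros(n_atoms)
    w = np.ones_like(y)
    for _ in range(n_iter):
        Dw = D * w[:, None]                        # row-weighted dictionary W @ D
        A = Dw.T @ D + lam * np.eye(n_atoms)       # D^T W D + ridge
        alpha = np.linalg.solve(A, Dw.T @ y)       # weighted ridge-regularized code
        r = y - D @ alpha
        w = np.exp(-(r / (2.0 * r.std() + 1e-8)) ** 2)  # soft impulse detection
    return D @ alpha, w                            # reconstruction and weights
```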

155 citations


Journal ArticleDOI
TL;DR: A simple and effective unsupervised approach based on the combined difference image and k-means clustering is proposed for the synthetic aperture radar (SAR) image change detection task, and local consistency and edge information of the difference image are considered.
Abstract: In this letter, a simple and effective unsupervised approach based on the combined difference image and k-means clustering is proposed for the synthetic aperture radar (SAR) image change detection task. First, we use one of the most popular denoising methods, the probabilistic-patch-based algorithm, for speckle noise reduction of the two multitemporal SAR images, and the subtraction operator and the log-ratio operator are applied to generate two kinds of simple change maps. Then, the mean filter and the median filter are applied to the two change maps, respectively, where the mean filter focuses on making the change map smooth and the local area consistent, and the median filter is used to preserve the edge information. Next, a simple combination framework which uses the maps obtained by the mean filter and the median filter is proposed to generate a better change map. Finally, the k-means clustering algorithm with k = 2 is used to cluster it into two classes, changed area and unchanged area. Local consistency and edge information of the difference image are considered in this method. Experimental results obtained on four real SAR image data sets confirm the effectiveness of the proposed approach.
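The pipeline is simple enough to sketch end to end; in the version below a plain median prefilter stands in for the probabilistic-patch-based despeckling step, and all filter and window sizes are illustrative.

```python
import numpy as np
from scipy.ndimage import median_filter, uniform_filter
from sklearn.cluster import KMeans

def sar_change_map(im1, im2, eps=1e-6):
    im1 = median_filter(im1, 3).astype(float)   # stand-in for PPB despeckling
    im2 = median_filter(im2, 3).astype(float)
    d_sub = np.abs(im1 - im2)                   # subtraction difference map
    d_log = np.abs(np.log((im2 + eps) / (im1 + eps)))  # log-ratio map
    smooth = uniform_filter(d_sub, 3)           # mean filter: local consistency
    edges = median_filter(d_log, 3)             # median filter: edge keeping
    combined = 0.5 * (smooth / (smooth.max() + eps)
                      + edges / (edges.max() + eps))
    labels = KMeans(n_clusters=2, n_init=10).fit_predict(combined.reshape(-1, 1))
    # call the higher-mean cluster "changed"
    means = [combined.reshape(-1)[labels == k].mean() for k in (0, 1)]
    return (labels == int(np.argmax(means))).reshape(combined.shape)
```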

148 citations


Journal ArticleDOI
TL;DR: A variational approach is introduced that corrects the over-smoothing and reduces the residual noise of the NL-means by adaptively regularizing nonlocal methods with the total variation, i.e., by minimizing an adaptive total variation with a nonlocal data fidelity term.
Abstract: Image denoising is a central problem in image processing and it is often a necessary step prior to higher level analysis such as segmentation, reconstruction, or super-resolution. The nonlocal means (NL-means) perform denoising by exploiting the natural redundancy of patterns inside an image; they perform a weighted average of pixels whose neighborhoods (patches) are close to each other. This significantly reduces the noise while preserving most of the image content. While it performs well on flat areas and textures, it suffers from two opposite drawbacks: it might over-smooth low-contrasted areas or leave a residual noise around edges and singular structures. Denoising can also be performed by total variation minimization, the Rudin, Osher and Fatemi model, which restores regular images but is prone to over-smoothing textures, staircasing effects, and contrast losses. We introduce in this paper a variational approach that corrects the over-smoothing and reduces the residual noise of the NL-means by adaptively regularizing nonlocal methods with the total variation. The proposed regularized NL-means algorithm combines these methods and reduces both of their respective drawbacks by minimizing an adaptive total variation with a nonlocal data fidelity term. Moreover, this model adapts to different noise statistics, and a fast solution can be obtained in the general case of the exponential family. We develop this model for image denoising and we adapt it to video denoising with 3D patches.
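A loose two-stage approximation of this idea can be assembled from scikit-image primitives: denoise with NL-means, then lean on a TV-denoised version exactly where NL-means left a large residual (edges and singular structures). The paper instead minimizes a single adaptive-TV energy with a nonlocal data fidelity term, so this blend is only a rough illustration; all parameter values are placeholders.

```python
import numpy as np
from skimage.restoration import denoise_nl_means, denoise_tv_chambolle

def nlm_tv(img, sigma=0.1):
    nlm = denoise_nl_means(img, h=0.8 * sigma, patch_size=5, patch_distance=6)
    tv = denoise_tv_chambolle(nlm, weight=0.1)
    # trust the TV step more where NL-means left a large residual
    resid = np.abs(img - nlm)
    w = np.clip(resid / (resid.max() + 1e-8), 0.0, 1.0)
    return (1.0 - w) * nlm + w * tv
```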

142 citations


Journal ArticleDOI
TL;DR: It is shown that, in comparison with the reference method, Wiener filtering with a decision-directed approach for SNR estimation, the WDA-based speech enhancement methods achieve better objective speech quality, whether or not the noise conditions are included in the training set.

139 citations


Proceedings ArticleDOI
04 May 2014
TL;DR: The proposed Long Short-Term Memory recurrent neural networks are trained to predict clean speech as well as noise features from noisy speech features, and a magnitude domain soft mask is constructed from these features, which outperforms unsupervised magnitude domain spectral subtraction by a large margin in terms of source-distortion ratio.
Abstract: In this paper we propose the use of Long Short-Term Memory recurrent neural networks for speech enhancement. Networks are trained to predict clean speech as well as noise features from noisy speech features, and a magnitude domain soft mask is constructed from these features. Extensive tests are run on 73k noisy and reverberated utterances from the Audio-Visual Interest Corpus of spontaneous, emotionally colored speech, degraded by several hours of real noise recordings comprising stationary and non-stationary sources and convolutive noise from the Aachen Room Impulse Response database. As a result, the proposed method is shown to provide superior noise reduction at low signal-to-noise ratios while introducing very few artifacts at higher signal-to-noise ratios, thereby outperforming unsupervised magnitude domain spectral subtraction by a large margin in terms of source-distortion ratio.
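A minimal PyTorch sketch of the mask construction is given below: an LSTM predicts speech and noise magnitudes from noisy magnitude features, and the soft mask is their ratio. Layer sizes, feature dimensions, and the suggested training loss are placeholders, not the paper's configuration.

```python
import torch
import torch.nn as nn

class MaskLSTM(nn.Module):
    def __init__(self, n_bins=257, hidden=256):
        super().__init__()
        self.lstm = nn.LSTM(n_bins, hidden, num_layers=2, batch_first=True)
        self.speech = nn.Linear(hidden, n_bins)
        self.noise = nn.Linear(hidden, n_bins)

    def forward(self, noisy_mag):                 # (batch, frames, bins)
        h, _ = self.lstm(noisy_mag)
        s = torch.relu(self.speech(h))            # predicted speech magnitude
        n = torch.relu(self.noise(h))             # predicted noise magnitude
        mask = s / (s + n + 1e-8)                 # magnitude-domain soft mask
        return mask * noisy_mag, s, n

# training would regress s and n against clean-speech / noise targets, e.g.
# loss = mse(s, clean_mag) + mse(n, noise_mag)
```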

135 citations


Journal ArticleDOI
TL;DR: This paper addresses the problem of filtering noisy data for the particular case where the underlying signal comprises a low-frequency component and a sparse or sparse-derivative component and shows that a particular choice of discrete-time filter, namely zero-phase noncausal recursive filters for finite-length data formulated in terms of banded matrices, makes the algorithms effective and computationally efficient.
Abstract: This paper seeks to combine linear time-invariant (LTI) filtering and sparsity-based denoising in a principled way in order to effectively filter (denoise) a wider class of signals. LTI filtering is most suitable for signals restricted to a known frequency band, while sparsity-based denoising is suitable for signals admitting a sparse representation with respect to a known transform. However, some signals cannot be accurately categorized as either band-limited or sparse. This paper addresses the problem of filtering noisy data for the particular case where the underlying signal comprises a low-frequency component and a sparse or sparse-derivative component. A convex optimization approach is presented and two algorithms derived: one based on majorization-minimization (MM), and the other based on the alternating direction method of multipliers (ADMM). It is shown that a particular choice of discrete-time filter, namely zero-phase noncausal recursive filters for finite-length data formulated in terms of banded matrices, makes the algorithms effective and computationally efficient. The efficiency stems from the use of fast algorithms for solving banded systems of linear equations. The method is illustrated using data from a physiological-measurement technique (i.e., near infrared spectroscopic time series imaging) that in many cases yields data that is well-approximated as the sum of low-frequency, sparse or sparse-derivative, and noise components.
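The following ADMM sketch solves a simplified instance of this model: the sparse-derivative component x is estimated from y by minimizing 0.5*||H(y - x)||^2 + lam*||Dx||_1, with a zero-phase "identity minus moving average" high-pass H standing in for the paper's banded recursive filters, and D a first-order difference. All matrices here are banded, which is what makes the per-iteration solve cheap.

```python
import numpy as np
from scipy.sparse import eye, diags
from scipy.sparse.linalg import spsolve

def lpf_sparse_admm(y, lam=1.0, rho=1.0, width=21, n_iter=100):
    n = len(y)
    offs = list(range(-(width // 2), width // 2 + 1))
    L = diags([1.0 / width] * width, offs, shape=(n, n))  # moving-average LPF
    H = eye(n) - L                                        # zero-phase high-pass
    D = diags([-1.0, 1.0], [0, 1], shape=(n - 1, n))      # first difference
    HtH, DtD = (H.T @ H).tocsc(), (D.T @ D).tocsc()
    M = (HtH + rho * DtD + 1e-8 * eye(n)).tocsc()  # ridge removes shared DC nullspace
    x, z, u = np.zeros(n), np.zeros(n - 1), np.zeros(n - 1)
    b0 = HtH @ y
    for _ in range(n_iter):
        x = spsolve(M, b0 + rho * D.T @ (z - u))          # banded quadratic solve
        Dx = D @ x
        z = np.sign(Dx + u) * np.maximum(np.abs(Dx + u) - lam / rho, 0.0)
        u += Dx - z                                       # dual update
    return x, L @ (y - x)    # sparse-derivative part, smoothed low-pass baseline
```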

130 citations


Journal ArticleDOI
TL;DR: An iterative image-domain decomposition method for noise suppression in DECT is proposed, using the full variance-covariance matrix of the decomposed images; it shows superior performance on noise suppression with high image spatial resolution and low-contrast detectability.
Abstract: Purpose: Dual energy CT (DECT) imaging plays an important role in advanced imaging applications due to its capability of material decomposition. Direct decomposition via matrix inversion suffers from significant degradation of image signal-to-noise ratios, which reduces clinical values of DECT. Existing denoising algorithms achieve suboptimal performance since they suppress image noise either before or after the decomposition and do not fully explore the noise statistical properties of the decomposition process. In this work, the authors propose an iterative image-domain decomposition method for noise suppression in DECT, using the full variance-covariance matrix of the decomposed images. Methods: The proposed algorithm is formulated in the form of least-square estimation with smoothness regularization. Based on the design principles of a best linear unbiased estimator, the authors include the inverse of the estimated variance-covariance matrix of the decomposed images as the penalty weight in the least-square term. The regularization term enforces the image smoothness by calculating the square sum of neighboring pixel value differences. To retain the boundary sharpness of the decomposed images, the authors detect the edges in the CT images before decomposition. These edge pixels have small weights in the calculation of the regularization term. Distinct from the existing denoising algorithms applied on the images before or after decomposition, the method has an iterative process for noise suppression, with decomposition performed in each iteration. The authors implement the proposed algorithm using a standard conjugate gradient algorithm. The method performance is evaluated using an evaluation phantom (Catphan©600) and an anthropomorphic head phantom. The results are compared with those generated using direct matrix inversion with no noise suppression, a denoising method applied on the decomposed images, and an existing algorithm with similar formulation as the proposed method but with an edge-preserving regularization term. Results: On the Catphan phantom, the method maintains the same spatial resolution on the decomposed images as that of the CT images before decomposition (8 pairs/cm) while significantly reducing their noise standard deviation. Compared to that obtained by the direct matrix inversion, the noise standard deviation in the images decomposed by the proposed algorithm is reduced by over 98%. Without considering the noise correlation properties in the formulation, the denoising scheme degrades the spatial resolution to 6 pairs/cm for the same level of noise suppression. Compared to the edge-preserving algorithm, the method achieves better low-contrast detectability. A quantitative study is performed on the contrast-rod slice of Catphan phantom. The proposed method achieves lower electron density measurement error as compared to that by the direct matrix inversion, and significantly reduces the error variation by over 97%. On the head phantom, the method reduces the noise standard deviation of decomposed images by over 97% without blurring the sinus structures. Conclusions: The authors propose an iterative image-domain decomposition method for DECT. The method combines noise suppression and material decomposition into an iterative process and achieves both goals simultaneously. 
By exploring the full variance-covariance properties of the decomposed images and utilizing the edge predetection, the proposed algorithm shows superior performance on noise suppression with high image spatial resolution and low-contrast detectability.
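A toy version of the penalized weighted least-squares decomposition can be written in a few lines: per-pixel direct inversion gives a noisy estimate, the inverse variance-covariance of that estimate weights the fidelity term, and a quadratic Laplacian smoothness penalty (the paper's edge-weighting is omitted for brevity) is minimized by conjugate gradients. The material matrix, noise levels, and penalty weight below are illustrative placeholders.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

def decompose_dect(high, low, A=np.array([[0.8, 0.3], [0.4, 0.9]]),
                   beta=0.1, sigma=(1.0, 1.0)):
    # high, low: CT images at the two energies; A: 2x2 material matrix (toy values)
    Hh, Ww = high.shape
    n = high.size
    y = np.stack([high.ravel(), low.ravel()])
    Ainv = np.linalg.inv(A)
    x0 = Ainv @ y                                    # direct (noisy) decomposition
    cov = Ainv @ np.diag(np.square(sigma)) @ Ainv.T  # per-pixel covariance of x0
    Winv = np.linalg.inv(cov)                        # BLUE-style penalty weight

    def lap(img):                                    # 4-neighbour Laplacian
        out = 4.0 * img
        out[1:] -= img[:-1]; out[:-1] -= img[1:]
        out[:, 1:] -= img[:, :-1]; out[:, :-1] -= img[:, 1:]
        return out

    def matvec(v):                                   # (Winv + beta * Laplacian) x
        x = v.reshape(2, n)
        smooth = np.stack([lap(x[k].reshape(Hh, Ww)).ravel() for k in (0, 1)])
        return (Winv @ x + beta * smooth).ravel()

    op = LinearOperator((2 * n, 2 * n), matvec=matvec, dtype=float)
    sol, _ = cg(op, (Winv @ x0).ravel(), maxiter=200)
    return sol.reshape(2, Hh, Ww)                    # two material images
```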

Journal ArticleDOI
TL;DR: Five common and important denoising methods, including the discrete wavelet transform, the LMS and RLS adaptive filters, and Savitzky-Golay filtering, are presented and applied to real ECG signals contaminated with different levels of noise.

Journal ArticleDOI
TL;DR: Simulation results show that the VMD-DWT approach outperforms the conventional EMD-DWT approach, and a non-local means approach used as a reference technique provides better results than the VMD-DWT approach.
Abstract: Hybrid denoising models based on combining empirical mode decomposition (EMD) and discrete wavelet transform (DWT) were found to be effective in removing additive Gaussian noise from electrocardiogram (ECG) signals. Recently, variational mode decomposition (VMD) has been proposed as a multiresolution technique that overcomes some of the limits of the EMD. Two ECG denoising approaches are compared. The first is based on denoising in the EMD domain by DWT thresholding, whereas the second is based on noise reduction in the VMD domain by DWT thresholding. Using signal-to-noise ratio and mean of squared errors as performance measures, simulation results show that the VMD-DWT approach outperforms the conventional EMD-DWT. In addition, a non-local means approach used as a reference technique provides better results than the VMD-DWT approach.
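A compact sketch of the EMD-DWT variant follows, assuming the PyEMD and PyWavelets packages: wavelet-threshold the first (noise-dominated) modes and recombine. Swapping VMD modes (e.g., from a VMD implementation such as vmdpy) for the EMD modes gives the VMD-DWT version; the wavelet, decomposition level, and universal threshold below are common defaults, not the authors' exact settings.

```python
import numpy as np
import pywt
from PyEMD import EMD

def emd_dwt_denoise(ecg, n_noisy=3, wavelet="sym8"):
    imfs = EMD().emd(ecg)
    out = np.zeros_like(ecg)
    for k, imf in enumerate(imfs):
        if k < n_noisy:                           # high-frequency modes only
            coeffs = pywt.wavedec(imf, wavelet, level=4)
            sigma = np.median(np.abs(coeffs[-1])) / 0.6745   # robust noise estimate
            thr = sigma * np.sqrt(2.0 * np.log(len(imf)))    # universal threshold
            coeffs = [coeffs[0]] + [pywt.threshold(c, thr, "soft")
                                    for c in coeffs[1:]]
            imf = pywt.waverec(coeffs, wavelet)[: len(ecg)]
        out += imf
    return out
```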

Journal ArticleDOI
TL;DR: An overview of existing noise reduction strategies for low-dose abdominopelvic CT, including analytic reconstruction, image and projection space denoising, and iterative reconstruction is provided; qualitative and quantitative tools for evaluating these strategies are reviewed; and the strengths and limitations of individual noise reduction methods are discussed.
Abstract: Most noise reduction methods involve nonlinear processes, and objective evaluation of image quality can be challenging, since image noise cannot be fully characterized on the sole basis of the noise level at computed tomography (CT). Noise spatial correlation (or noise texture) is closely related to the detection and characterization of low-contrast objects and may be quantified by analyzing the noise power spectrum. High-contrast spatial resolution can be measured using the modulation transfer function and section sensitivity profile and is generally unaffected by noise reduction. Detectability of low-contrast lesions can be evaluated subjectively at varying dose levels using phantoms containing low-contrast objects. Clinical applications with inherent high-contrast abnormalities (eg, CT for renal calculi, CT enterography) permit larger dose reductions with denoising techniques. In low-contrast tasks such as detection of metastases in solid organs, dose reduction is substantially more limited by loss of lesion conspicuity due to loss of low-contrast spatial resolution and coarsening of noise texture. Existing noise reduction strategies for dose reduction have a substantial impact on lowering the radiation dose at CT. To preserve the diagnostic benefit of CT examination, thoughtful utilization of these strategies must be based on the inherent lesion-to-background contrast and the anatomy of interest. The authors provide an overview of existing noise reduction strategies for low-dose abdominopelvic CT, including analytic reconstruction, image and projection space denoising, and iterative reconstruction; review qualitative and quantitative tools for evaluating these strategies; and discuss the strengths and limitations of individual noise reduction methods.

Journal ArticleDOI
TL;DR: The RSCFCM algorithm is proposed, utilizing the negative log-posterior as the dissimilarity function, introducing a novel factor and integrating the bias field estimation model into the fuzzy objective function, which successfully overcomes the drawbacks of existing FCM-type clustering schemes and EM-type mixture models.

Journal ArticleDOI
TL;DR: Following a Bayesian modeling approach, a generalized total variation-based MRI denoising model is proposed based on global hyper-Laplacian prior and Rician noise assumption and has the properties of backward diffusion in local normal directions and forward diffusion in local tangent directions.

Journal ArticleDOI
TL;DR: The scheme uses a greedy pursuit with a bootstrapping-based stopping condition and dictionary learning within the denoising process; the reconstruction performance is competitive with leading methods at high SNR and achieves state-of-the-art results at low SNR.
Abstract: The problem of Poisson denoising appears in various imaging applications, such as low-light photography, medical imaging, and microscopy. In cases of high SNR, several transformations exist that convert the Poisson noise into additive, independent and identically distributed Gaussian noise, for which many effective algorithms are available. However, in a low-SNR regime, these transformations are significantly less accurate, and a strategy that relies directly on the true noise statistics is required. Salmon et al. took this route, proposing a patch-based exponential image representation model based on a Gaussian mixture model, leading to state-of-the-art results. In this paper, we propose to harness sparse-representation modeling of the image patches, adopting the same exponential idea. Our scheme uses a greedy pursuit with a bootstrapping-based stopping condition and dictionary learning within the denoising process. The reconstruction performance of the proposed scheme is competitive with leading methods at high SNR and achieves state-of-the-art results at low SNR.
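For intuition about the pursuit itself, here is a standard greedy pursuit (orthogonal matching pursuit) with an error-based stopping rule; this is the Gaussian-noise analogue of the paper's scheme, which works in the exponential representation and stops via a bootstrapping criterion.

```python
import numpy as np

def omp(y, D, err_tol):
    # Greedy pursuit: pick the atom most correlated with the residual,
    # refit the selected atoms by least squares, stop on residual energy.
    r, support = y.copy(), []
    coef = np.zeros(0)
    while np.linalg.norm(r) > err_tol and len(support) < D.shape[1]:
        support.append(int(np.argmax(np.abs(D.T @ r))))
        sub = D[:, support]
        coef, *_ = np.linalg.lstsq(sub, y, rcond=None)
        r = y - sub @ coef
    return support, coef
```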

Journal ArticleDOI
TL;DR: A novel adaptive iterative fuzzy filter for denoising images corrupted by impulse noise that operates in two stages: detection of noisy pixels with an adaptive fuzzy detector, followed by denoising using a weighted mean filter on the "good" pixels in the filter window.
Abstract: Suppression of impulse noise in images is an important problem in image processing. In this paper, we propose a novel adaptive iterative fuzzy filter for denoising images corrupted by impulse noise. It operates in two stages: detection of noisy pixels with an adaptive fuzzy detector, followed by denoising using a weighted mean filter on the "good" pixels in the filter window. Experimental results demonstrate the algorithm to be superior to state-of-the-art filters. The filter is also shown to be robust to very high levels of noise, retrieving meaningful detail at noise levels as high as 97%.
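A simplified two-stage sketch in the same spirit: a fuzzy membership grades how impulse-like each pixel is from its deviation to the local median, and graded-noisy pixels are replaced by a mean of neighbours weighted by their own "goodness". The membership breakpoints (tuned for 8-bit intensities) and window size are illustrative, not the paper's adaptive detector.

```python
import numpy as np
from scipy.ndimage import median_filter

def fuzzy_impulse_filter(img, lo=10.0, hi=60.0, n_pass=3):
    out = img.astype(float)
    for _ in range(n_pass):                        # iterate for heavy corruption
        med = median_filter(out, 3)
        dev = np.abs(out - med)
        noisy = np.clip((dev - lo) / (hi - lo), 0.0, 1.0)  # fuzzy membership
        good = 1.0 - noisy
        num = np.zeros_like(out)
        den = np.zeros_like(out)
        for di in (-1, 0, 1):                      # weighted mean over 3x3 window
            for dj in (-1, 0, 1):
                num += np.roll(np.roll(good * out, di, 0), dj, 1)
                den += np.roll(np.roll(good, di, 0), dj, 1)
        repl = num / np.maximum(den, 1e-8)
        out = good * out + noisy * repl            # update only noisy pixels
    return out
```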

Journal ArticleDOI
TL;DR: Theoretical analysis and experimental results show that the proposed approach for decoupling noise and features on 3D shapes can reliably and robustly remove noise and extract sharp features.
Abstract: Many geometry processing applications are sensitive to noise and sharp features. Although there are a number of works on detecting noise and sharp features in the literature, they are heuristic. On one hand, traditional denoising methods use filtering operators to remove noise; however, they may blur sharp features and shrink the object. On the other hand, noise makes detection of features, which relies on computation of differential properties, unreliable and unstable. Therefore, detecting noise and features on discrete surfaces still remains challenging. In this article, we present an approach for decoupling noise and features on 3D shapes. Our approach consists of two phases. In the first phase, a base mesh is estimated from the input noisy data by a global Laplacian regularization denoising scheme. The estimated base mesh is guaranteed to asymptotically converge to the true underlying surface with probability one as the sample size goes to infinity. In the second phase, an ℓ1-analysis compressed sensing optimization is proposed to recover sharp features from the residual between the base mesh and the input mesh. This is based on our discovery that sharp features can be sparsely represented in some coherent dictionary which is constructed by the pseudo-inverse matrix of the Laplacian of the shape. The features are recovered from the residual in a progressive way. Theoretical analysis and experimental results show that our approach can reliably and robustly remove noise and extract sharp features on 3D shapes.

Journal ArticleDOI
TL;DR: In this article, the authors investigate the intersensory perceptions of noise barrier performance in terms of the spectral characteristics of noise reduction combined with visual impressions of five different barrier types: aluminum, timber, translucent acrylic, concrete, and vegetated barriers.

Journal ArticleDOI
TL;DR: In this article, a time-frequency analysis method that combines the Bark-wavelet analysis and Hilbert-Huang transform is presented for underwater noise targets classification, which is inspired by human auditory perception.

Journal ArticleDOI
TL;DR: The proposed DFA threshold and denoising by DFA-EMD are tested on different synthetic and real signals at various signal-to-noise ratios (SNRs), and the results are promising, especially at 0 dB when the signal is corrupted by white Gaussian noise.

Journal ArticleDOI
TL;DR: This paper presents an approach to the design of linear DMAs that first transforms the microphone array signals into the short-time Fourier transform (STFT) domain and then converts the DMA beamforming design to simple linear systems to solve.
Abstract: Differential microphone array (DMA), a particular kind of sensor array that is responsive to the differential sound pressure field, has a broad range of applications in sound recording, noise reduction, signal separation, dereverberation, etc. Traditionally, an Nth-order DMA is formed by combining, in a linear manner, the outputs of a number of DMAs up to (including) the order of N − 1. This method, though simple and easy to implement, suffers from a number of drawbacks and practical limitations. This paper presents an approach to the design of linear DMAs. The proposed technique first transforms the microphone array signals into the short-time Fourier transform (STFT) domain and then converts the DMA beamforming design to simple linear systems to solve. It is shown that this approach is much more flexible as compared to the traditional methods in the design of different directivity patterns. Methods are also presented to deal with the white noise amplification problem that is considered to be the biggest hurdle for DMAs, particularly higher-order implementations.
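The linear-system view is easy to demonstrate for a first-order, two-microphone array: at each STFT frequency, impose a distortionless response at the front and a null at the rear and solve the resulting 2x2 system, which yields a cardioid pattern. Microphone spacing and speed of sound below are placeholders.

```python
import numpy as np

def cardioid_weights(freqs, spacing=0.01, c=343.0):
    # freqs: STFT bin frequencies in Hz; returns complex weights (n_freqs, 2).
    # Beamformer output per bin: np.conj(weights[i]) @ x_bin.
    weights = np.zeros((len(freqs), 2), dtype=complex)
    tau = spacing / c                                 # inter-microphone delay
    for i, f in enumerate(freqs):
        if f == 0.0:
            continue                                  # DC is left at zero
        d_front = np.array([1.0, np.exp(-2j * np.pi * f * tau)])  # theta = 0
        d_rear = np.array([1.0, np.exp(+2j * np.pi * f * tau)])   # theta = pi
        V = np.vstack([d_front.conj(), d_rear.conj()])
        weights[i] = np.linalg.solve(V, np.array([1.0, 0.0]))  # unity front, null rear
    return weights
```

Note that as the frequency approaches zero the two steering vectors coincide and the system becomes ill-conditioned; the resulting gain blow-up is precisely the white noise amplification problem the paper addresses for higher-order designs.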

Journal ArticleDOI
TL;DR: Adaptation of a wavelet denoising algorithm is shown for filtering real PCG signal disturbances from signals recorded by mobile devices in a noisy environment.

Journal ArticleDOI
TL;DR: In this article, a discrete curvelet transform (DCT) based approach is proposed to eliminate different types of noise, such as coherent and incoherent noise and multiples, although optimal random noise attenuation remains difficult.

Journal ArticleDOI
TL;DR: This review focuses on internal noise, the analysis of the noise contributions and a summary of noise reduction strategies, and concludes with an outlook on future possibilities and scientific challenges in the field of ME magnetic sensors.
Abstract: Since the turn of the millennium, multi-phase magnetoelectric (ME) composites have been subject to attention and development, and giant ME effects have been found in laminate composites of piezoelectric and magnetostrictive layers. From an application perspective, the practical usefulness of a magnetic sensor is determined not only by the output signal of the sensor in response to an incident magnetic field, but also by the equivalent magnetic noise generated in the absence of such an incident field. Here, a short review of developments in equivalent magnetic noise reduction for ME sensors is presented. This review focuses on internal noise, the analysis of the noise contributions and a summary of noise reduction strategies. Furthermore, external vibration noise is also discussed. The review concludes with an outlook on future possibilities and scientific challenges in the field of ME magnetic sensors.

Journal ArticleDOI
TL;DR: The proposed algorithm is built on an additive signal-dependent noise model and derives a PCA-based LMMSE denoising model for multiplicative noise, achieving better performance than the referenced state-of-the-art methods in terms of both noise reduction and image detail preservation.
Abstract: The combination of nonlocal grouping and transformed domain filtering has led to the state-of-the-art denoising techniques. In this paper, we extend this line of study to the denoising of synthetic aperture radar (SAR) images based on clustering the noisy image into disjoint local regions with similar spatial structure and denoising each region by the linear minimum mean-square error (LMMSE) filtering in principal component analysis (PCA) domain. Both clustering and denoising are performed on image patches. For clustering, to reduce dimensionality and resist the influence of noise, several leading principal components identified by the minimum description length criterion are used to feed the K-means clustering algorithm. For denoising, to avoid the limitations of the homomorphic approach, we build our denoising scheme on additive signal-dependent noise model and derive a PCA-based LMMSE denoising model for multiplicative noise. Denoised patches of all clusters are finally used to reconstruct the noise-free image. The experiments demonstrate that the proposed algorithm achieved better performance than the referenced state-of-the-art methods in terms of both noise reduction and image detail preservation.
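The cluster-then-shrink structure can be sketched as K-means on leading patch principal components followed by per-cluster LMMSE (Wiener) shrinkage of PCA coefficients. For brevity this toy version assumes a fixed additive noise variance and a fixed number of leading components, whereas the paper derives a signal-dependent noise model for speckle and selects the component count by minimum description length.

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_pca_lmmse(img, patch=8, k=6, lead=6, noise_var=0.01):
    H, W = img.shape
    ps, idx = [], []
    for i in range(0, H - patch + 1, patch):       # non-overlapping patches
        for j in range(0, W - patch + 1, patch):
            ps.append(img[i:i+patch, j:j+patch].ravel())
            idx.append((i, j))
    P = np.array(ps)
    mu = P.mean(0)
    Pc = P - mu
    _, _, Vt = np.linalg.svd(Pc, full_matrices=False)
    feats = Pc @ Vt[:lead].T                       # leading PCs for clustering
    labels = KMeans(n_clusters=k, n_init=10).fit_predict(feats)
    out = np.zeros_like(P)
    for c in range(k):                             # per-cluster LMMSE in PCA domain
        m = labels == c
        Xc = Pc[m] - Pc[m].mean(0)
        _, _, Vc = np.linalg.svd(Xc, full_matrices=False)
        coeff = Xc @ Vc.T
        sig_var = np.maximum(coeff.var(0) - noise_var, 0.0)
        shrink = sig_var / (sig_var + noise_var)   # Wiener gain per component
        out[m] = (coeff * shrink) @ Vc + Pc[m].mean(0) + mu
    rec = np.zeros_like(img, dtype=float)
    for (i, j), p in zip(idx, out):
        rec[i:i+patch, j:j+patch] = p.reshape(patch, patch)
    return rec
```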

Journal ArticleDOI
TL;DR: An iterative approach that combines the reconstruction and the signal decomposition procedures to minimize the DECT image noise without noticeable loss of resolution is proposed and achieves a superior performance on DECT imaging with respect to decomposition accuracy, noise reduction, and spatial resolution.
Abstract: Purpose: Dual-energy CT (DECT) is being increasingly used for its capability of material decomposition and energy-selective imaging. A generic problem of DECT, however, is that the decomposition process is unstable, in the sense that the relative magnitude of decomposed signals is reduced due to signal cancellation while the image noise accumulates from the two CT images of independent scans. Direct image decomposition, therefore, leads to severe degradation of signal-to-noise ratio on the resultant images. Existing noise suppression techniques are typically implemented in DECT with the procedures of reconstruction and decomposition performed independently, which do not explore the statistical properties of decomposed images during the reconstruction for noise reduction. In this work, the authors propose an iterative approach that combines the reconstruction and the signal decomposition procedures to minimize the DECT image noise without noticeable loss of resolution. Methods: The proposed algorithm is formulated as an optimization problem, which balances the data fidelity and total variation of decomposed images in one framework, and the decomposition step is carried out iteratively together with reconstruction. The noise in the CT images from the proposed algorithm becomes well correlated even though the noise of the raw projections is independent between the two CT scans. Due to this feature, the proposed algorithm avoids noise accumulation during the decomposition process. The authors evaluate the method performance on noise suppression and spatial resolution using phantom studies and compare the algorithm with conventional denoising approaches as well as combined iterative reconstruction methods with different forms of regularization. Results: On the Catphan©600 phantom, the proposed method outperforms the existing denoising methods on preserving spatial resolution at the same level of noise suppression, i.e., a reduction of noise standard deviation by one order of magnitude. This improvement is mainly attributed to the high noise correlation in the CT images reconstructed by the proposed algorithm. Iterative reconstruction using different regularization, including quadratic or q-generalized Gaussian Markov random field regularization, achieves similar noise suppression from high noise correlation. However, the proposed TV regularization obtains better edge-preserving performance. Studies of electron density measurement also show that our method reduces the average estimation error from 9.5% to 7.1%. On the anthropomorphic head phantom, the proposed method suppresses the noise standard deviation of the decomposed images by a factor of ∼14 without blurring the fine structures in the sinus area. Conclusions: The authors propose a practical method for DECT imaging reconstruction, which combines the image reconstruction and material decomposition into one optimization framework. Compared to the existing approaches, our method achieves a superior performance on DECT imaging with respect to decomposition accuracy, noise reduction, and spatial resolution.

Journal ArticleDOI
TL;DR: This paper proposes a new method for detail enhancement and noise reduction of high dynamic range infrared images; it is significantly better than methods based on histogram equalization (HE) and also gives better visual results than bilateral filter-based methods.

Journal ArticleDOI
TL;DR: This work proposes a framework for the denoising of videos jointly corrupted by spatially correlated random noise and fixed-pattern noise based on motion-compensated 3D spatiotemporal volumes, i.e., a sequence of 2D square patches extracted along the motion trajectories of the noisy video.
Abstract: We propose a framework for the denoising of videos jointly corrupted by spatially correlated (i.e., nonwhite) random noise and spatially correlated fixed-pattern noise. Our approach is based on motion-compensated 3D spatiotemporal volumes, i.e., a sequence of 2D square patches extracted along the motion trajectories of the noisy video. First, the spatial and temporal correlations within each volume are leveraged to sparsify the data in 3D spatiotemporal transform domain, and then the coefficients of the 3D volume spectrum are shrunk using an adaptive 3D threshold array. Such array depends on the particular motion trajectory of the volume, the individual power spectral densities of the random and fixed-pattern noise, and also the noise variances which are adaptively estimated in transform domain. Experimental results on both synthetically corrupted data and real infrared videos demonstrate a superior suppression of the random and fixed-pattern noise from both an objective and a subjective point of view.
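Stripped of motion compensation and noise-PSD adaptation, the core transform-domain shrinkage can be sketched as follows: stack co-located patches from consecutive frames into 3D volumes, hard-threshold the 3D DCT spectrum, and invert. The paper instead extracts volumes along motion trajectories and shrinks with an adaptive 3D threshold array matched to the random and fixed-pattern noise spectra; patch size, depth, and threshold below are illustrative.

```python
import numpy as np
from scipy.fft import dctn, idctn

def volume_dct_denoise(video, patch=8, depth=8, thr=0.1):
    # video: 3D array, frames x height x width
    T, H, W = video.shape
    out = np.zeros(video.shape, dtype=float)
    cnt = np.zeros_like(out)
    for t in range(0, T - depth + 1, depth):
        for i in range(0, H - patch + 1, patch):
            for j in range(0, W - patch + 1, patch):
                vol = video[t:t+depth, i:i+patch, j:j+patch].astype(float)
                spec = dctn(vol, norm="ortho")     # 3D spatiotemporal transform
                spec[np.abs(spec) < thr] = 0.0     # hard shrinkage
                out[t:t+depth, i:i+patch, j:j+patch] += idctn(spec, norm="ortho")
                cnt[t:t+depth, i:i+patch, j:j+patch] += 1.0
    return out / np.maximum(cnt, 1.0)
```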