
Showing papers on "Noise reduction published in 2006"


Journal ArticleDOI
TL;DR: A new method is proposed for the problem of digital camera identification from its images based on the sensor's pattern noise: the noise extracted from multiple images with a denoising filter is averaged into a reference pattern that serves as a unique identification fingerprint for each camera under investigation.
Abstract: In this paper, we propose a new method for the problem of digital camera identification from its images based on the sensor's pattern noise. For each camera under investigation, we first determine its reference pattern noise, which serves as a unique identification fingerprint. This is achieved by averaging the noise obtained from multiple images using a denoising filter. To identify the camera from a given image, we consider the reference pattern noise as a spread-spectrum watermark, whose presence in the image is established by using a correlation detector. Experiments on approximately 320 images taken with nine consumer digital cameras are used to estimate false alarm rates and false rejection rates. Additionally, we study how the error rates change with common image processing, such as JPEG compression or gamma correction.

1,195 citations
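The fingerprint pipeline described above (denoise, subtract, average, correlate) can be sketched in a few lines. This is a toy sketch, not the paper's method: a box mean filter stands in for the wavelet denoiser, and the "cameras" are simulated by a fixed additive pattern.

```python
import numpy as np

def box_denoise(img, k=3):
    """Crude denoiser (local mean); the paper uses a wavelet filter."""
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    h, w = img.shape
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + h, dx:dx + w]
    return out / (k * k)

def noise_residual(img):
    return img - box_denoise(img)

def reference_pattern(images):
    """Camera fingerprint: average of the noise residuals."""
    return np.mean([noise_residual(im) for im in images], axis=0)

def correlation(a, b):
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

# Simulated example: two cameras distinguished by their pattern noise.
rng = np.random.default_rng(0)
fpn = rng.normal(0.0, 2.0, (32, 32))                 # pattern noise of camera A
shots = [rng.normal(0.0, 3.0, (32, 32)) + fpn for _ in range(20)]
fingerprint = reference_pattern(shots)

query_same = rng.normal(0.0, 3.0, (32, 32)) + fpn    # taken with camera A
query_other = rng.normal(0.0, 3.0, (32, 32))         # some other camera
c_same = correlation(noise_residual(query_same), fingerprint)
c_other = correlation(noise_residual(query_other), fingerprint)
```

A detector would declare a match when the correlation exceeds a threshold chosen for a target false alarm rate; in this simulation `c_same` comes out well above `c_other`.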


Journal ArticleDOI
TL;DR: The proposed noise-estimation algorithm, when integrated in speech enhancement, was preferred over other noise-estimation algorithms, indicating that the local minimum estimation algorithm adapts very quickly to highly non-stationary noise environments.

448 citations


Proceedings ArticleDOI
17 Jun 2006
TL;DR: This work addresses the image denoising problem, where zero-mean white and homogeneous Gaussian additive noise should be removed from a given image, by defining a global image prior that forces sparsity over patches in every location in the image.
Abstract: We address the image denoising problem, where zero-mean white and homogeneous Gaussian additive noise should be removed from a given image. The approach taken is based on sparse and redundant representations over a trained dictionary. The proposed algorithm denoises the image while simultaneously training a dictionary on its (corrupted) content using the K-SVD algorithm. As the dictionary training algorithm is limited to handling small image patches, we extend its deployment to arbitrary image sizes by defining a global image prior that forces sparsity over patches in every location in the image. We show how such Bayesian treatment leads to a simple and effective denoising algorithm, with state-of-the-art performance, equivalent to and sometimes surpassing recently published leading alternative denoising methods.

425 citations


Journal ArticleDOI
TL;DR: This brief reviews existing solutions to minimize the kickback noise and proposes two new ones; HSPICE simulations of comparators implemented in a 0.18-μm technology demonstrate their effectiveness.
Abstract: The latched comparator is a building block of virtually all analog-to-digital converter architectures. It uses a positive feedback mechanism to regenerate the analog input signal into a full-scale digital level. The large voltage variations in the internal nodes are coupled to the input, disturbing the input voltage; this is usually called kickback noise. This brief reviews existing solutions to minimize the kickback noise and proposes two new ones. HSPICE simulations of comparators implemented in a 0.18-μm technology demonstrate their effectiveness.

324 citations


Journal ArticleDOI
TL;DR: A new noise reduction algorithm is introduced and applied to the problem of denoising hyperspectral imagery, and provides signal-to-noise-ratio improvement up to 84.44% and 98.35% in the first and the second datacubes, respectively.
Abstract: In this paper, a new noise reduction algorithm is introduced and applied to the problem of denoising hyperspectral imagery. This algorithm resorts to the spectral derivative domain, where the noise level is elevated, and benefits from the dissimilarity of the signal regularity in the spatial and the spectral dimensions of hyperspectral images. The performance of the new algorithm is tested on two different hyperspectral datacubes: an Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) datacube that is acquired in a vegetation-dominated site and a simulated AVIRIS datacube that simulates a geological site. The new algorithm provides signal-to-noise-ratio improvement up to 84.44% and 98.35% in the first and the second datacubes, respectively.

310 citations


Journal ArticleDOI
TL;DR: A two-step noise reduction (TSNR) technique is proposed that removes the bias of the decision-directed approach while maintaining its benefits, and a harmonic regeneration noise reduction (HRNR) method brings a significant further improvement over TSNR thanks to the preservation of harmonics.
Abstract: This paper addresses the problem of single-microphone speech enhancement in noisy environments. State-of-the-art short-time noise reduction techniques are most often expressed as a spectral gain depending on the signal-to-noise ratio (SNR). The well-known decision-directed (DD) approach drastically limits the level of musical noise, but the estimated a priori SNR is biased since it depends on the speech spectrum estimation in the previous frame. Therefore, the gain function matches the previous frame rather than the current one, which degrades the noise reduction performance. The consequence of this bias is an annoying reverberation effect. We propose a method called the two-step noise reduction (TSNR) technique, which solves this problem while maintaining the benefits of the decision-directed approach. The estimation of the a priori SNR is refined by a second step to remove the bias of the DD approach, thus removing the reverberation effect. However, classic short-time noise reduction techniques, including TSNR, introduce harmonic distortion in enhanced speech because of the unreliability of estimators for small signal-to-noise ratios. This is mainly due to the difficult task of noise power spectrum density (PSD) estimation in single-microphone schemes. To overcome this problem, we propose a method called harmonic regeneration noise reduction (HRNR). A nonlinearity is used to regenerate the degraded harmonics of the distorted signal in an efficient way. The resulting artificial signal is produced in order to refine the a priori SNR used to compute a spectral gain able to preserve the speech harmonics. These methods are analyzed, and objective and formal subjective test results between the HRNR and TSNR techniques are provided. A significant improvement is brought by HRNR compared to TSNR thanks to the preservation of harmonics.

286 citations
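The one-frame bias that TSNR removes can be seen in a toy, per-bin sketch: the decision-directed (DD) estimate lags one frame behind, and a second gain pass re-anchors the a priori SNR to the current frame. This follows the general TSNR idea with a plain Wiener gain; the constants and the gain rule are illustrative, not the paper's exact ones.

```python
import numpy as np

def wiener_gain(snr):
    return snr / (1.0 + snr)

def dd_and_tsnr(post_snr, alpha=0.98):
    """Track one frequency bin over frames.
    post_snr[t] is the a posteriori SNR |X(t)|^2 / noise PSD."""
    n = len(post_snr)
    snr_dd = np.zeros(n)
    snr_tsnr = np.zeros(n)
    prev = 0.0  # previous-frame estimate |S_hat|^2 / noise PSD
    for t in range(n):
        # Step 1: decision-directed estimate (biased toward frame t-1).
        snr_dd[t] = alpha * prev + (1.0 - alpha) * max(post_snr[t] - 1.0, 0.0)
        g1 = wiener_gain(snr_dd[t])
        # Step 2 (TSNR): recompute the a priori SNR from the step-1 gain
        # applied to the *current* frame, removing the one-frame lag.
        snr_tsnr[t] = g1 ** 2 * post_snr[t]
        g2 = wiener_gain(snr_tsnr[t])
        prev = g2 ** 2 * post_snr[t]
    return snr_dd, snr_tsnr

# Noise-only frames, then a sudden speech onset at frame 3.
post = np.array([1.0, 1.0, 1.0, 100.0, 100.0])
snr_dd, snr_tsnr = dd_and_tsnr(post)
```

At the onset frame the DD estimate is still near zero (about 2 here) while the two-step estimate already reflects the current frame (about 44); this lag in the gain is what produces the reverberation effect the paper describes.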


Journal ArticleDOI
TL;DR: In this article, the authors describe the framework in which noise is studied and propose that Wnt signalling acts as a noise filter; the reproducibility of developmental outcomes indicates that molecular mechanisms exist that filter out noise.
Abstract: Even complicated networks such as those involved in eukaryotic development lead to reproducible outcomes, which indicates that there are molecular mechanisms that filter out noise. The authors describe the framework in which noise is studied and propose that Wnt signalling is a noise filter.

262 citations


Journal ArticleDOI
TL;DR: Computer simulations show that the proposed method for online secondary path modeling in active noise control systems gives better performance than the existing methods, but at the cost of a slightly increased computational complexity.
Abstract: This paper proposes a new method for online secondary path modeling in active noise control systems. The existing methods for active noise control systems with online secondary path modeling consist of three adaptive filters. The main feature of the proposed method is that it uses only two adaptive filters. In the proposed method, the modified-FxLMS (MFxLMS) algorithm is used in adapting the noise control filter and a new variable step size (VSS) least mean square (LMS) algorithm is proposed for adaptation of the secondary path modeling filter. This VSS LMS algorithm is different from the normalized-LMS (NLMS) algorithm, where the step size is varied in accordance with the power of the reference signal. Here, on the other hand, the step size is varied in accordance with the power of the disturbance signal in the desired response of the modeling filter. The basic idea of the proposed VSS algorithm stems from the fact that the disturbance signal in the desired response of the modeling filter is decreasing in nature, (ideally) converging to zero. Hence, a small step size is used initially and later its value is increased accordingly. The disturbance signal, however, is not available directly, and we propose an indirect method to track its variations. Computer simulations show that the proposed method gives better performance than the existing methods. This improved performance is achieved at the cost of a slightly increased computational complexity.

200 citations
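The variable-step-size idea can be sketched minimally: use a small LMS step while the disturbance in the modeling error is strong, and grow it as the disturbance decays. The step-size rule below is a simplified stand-in for the paper's (which tracks the disturbance power indirectly), and the 3-tap secondary path and decay constant are arbitrary example values.

```python
import numpy as np

def vss_lms(x, d, n_taps, mu_min=0.001, mu_max=0.05, beta=0.95):
    """Identify an FIR path from input x and noisy desired response d,
    growing the LMS step size as the smoothed error power decays."""
    w = np.zeros(n_taps)
    p_err = None   # smoothed error power
    p0 = None      # initial error power (reference level)
    for t in range(n_taps, len(x)):
        u = x[t - n_taps + 1:t + 1][::-1]       # regressor, newest first
        e = d[t] - w @ u
        p_err = e * e if p_err is None else beta * p_err + (1 - beta) * e * e
        if p0 is None:
            p0 = max(p_err, 1e-12)
        # Small step while the disturbance dominates, larger as it decays.
        mu = mu_min + (mu_max - mu_min) * max(0.0, 1.0 - p_err / p0)
        w += mu * e * u
    return w

rng = np.random.default_rng(1)
s = np.array([1.0, 0.5, -0.3])                  # "secondary path" to model
n = 6000
x = rng.normal(size=n)
clean = np.convolve(x, s)[:n]
disturbance = 3.0 * np.exp(-np.arange(n) / 800.0) * rng.normal(size=n)
d = clean + disturbance                          # desired response + decaying disturbance
w = vss_lms(x, d, n_taps=3)
```

After the decaying disturbance dies out, the estimated taps `w` settle close to the true path `s`.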


Book ChapterDOI
01 Oct 2006
TL;DR: The results show that the NL-means approach outperforms other classical denoising methods, such as Anisotropic Diffusion Filter and Total Variation.
Abstract: One critical issue in the context of image restoration is the problem of noise removal while keeping the integrity of relevant image information. Denoising is a crucial step to increase image conspicuity and to improve the performance of all the processing needed for quantitative imaging analysis. The method proposed in this paper is based on an optimized version of the Non-Local (NL) Means algorithm. This approach uses the natural redundancy of information in the image to remove the noise. Tests were carried out on synthetic datasets and on real 3T MR images. The results show that the NL-means approach outperforms other classical denoising methods, such as the Anisotropic Diffusion Filter and Total Variation.

194 citations
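The core of NL-means fits in a short, naive sketch: each pixel becomes a weighted average of pixels whose surrounding patches look similar. This is the unoptimized textbook form, not the optimized version of the paper, and the patch/search/h parameters are example values.

```python
import numpy as np

def nl_means(img, patch=3, search=7, h=0.6):
    """Naive NL-means: weights decay with the mean squared distance
    between the patches around the two pixels."""
    pr, sr = patch // 2, search // 2
    p = np.pad(img, pr, mode="reflect")
    H, W = img.shape
    out = np.zeros((H, W))
    for i in range(H):
        for j in range(W):
            ref = p[i:i + patch, j:j + patch]
            acc, wsum = 0.0, 0.0
            for y in range(max(0, i - sr), min(H, i + sr + 1)):
                for x in range(max(0, j - sr), min(W, j + sr + 1)):
                    cand = p[y:y + patch, x:x + patch]
                    w = np.exp(-np.mean((ref - cand) ** 2) / (h * h))
                    acc += w * img[y, x]
                    wsum += w
            out[i, j] = acc / wsum
    return out

rng = np.random.default_rng(2)
clean = np.zeros((16, 16))
clean[:, 8:] = 1.0                       # a step edge
noisy = clean + 0.3 * rng.normal(size=clean.shape)
denoised = nl_means(noisy)
mse_noisy = float(np.mean((noisy - clean) ** 2))
mse_out = float(np.mean((denoised - clean) ** 2))
```

Because patches across the step edge get tiny weights, the edge survives while the flat regions are averaged, so the MSE against the clean image drops.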


Journal ArticleDOI
TL;DR: An irregularly spaced sampling raster formed from a sequence of low-resolution frames is the input to an image sequence superresolution algorithm whose output is the set of image intensity values at the desired high-resolution image grid.
Abstract: An irregularly spaced sampling raster formed from a sequence of low-resolution frames is the input to an image sequence superresolution algorithm whose output is the set of image intensity values at the desired high-resolution image grid. The method of moving least squares (MLS) in polynomial space has proved to be useful in filtering the noise and approximating scattered data by minimizing a weighted mean-square error norm, but it introduces blur in the process. Starting with the continuous version of the MLS, an explicit expression for the filter bandwidth is obtained as a function of the polynomial order of approximation and the standard deviation (scale) of the Gaussian weight function. A discrete implementation of the MLS is performed on images, and the effect of the choice of the two dependent parameters, scale and order, on noise filtering and reduction of the blur introduced during the MLS process is studied.

170 citations
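A 1-D sketch of the MLS estimator makes the two parameters concrete: at each evaluation point, fit a local polynomial by Gaussian-weighted least squares and take its value there. The data and parameter values are illustrative; the paper works in 2-D on image rasters.

```python
import numpy as np

def mls_fit(x, y, x_eval, order=1, scale=0.5):
    """Moving least squares: `scale` is the std of the Gaussian weight;
    together with `order` it sets the filter bandwidth (and the blur)."""
    out = np.empty(len(x_eval))
    for k, x0 in enumerate(x_eval):
        w = np.exp(-((x - x0) ** 2) / (2.0 * scale ** 2))
        sw = np.sqrt(w)
        # Local polynomial in (x - x0); coef[0] is its value at x0.
        A = np.vander(x - x0, order + 1, increasing=True)
        coef, *_ = np.linalg.lstsq(A * sw[:, None], sw * y, rcond=None)
        out[k] = coef[0]
    return out

rng = np.random.default_rng(3)
x = np.sort(rng.uniform(0.0, 5.0, 200))       # irregularly spaced samples
y = 2.0 * x + 1.0 + 0.2 * rng.normal(size=x.size)
x_eval = np.linspace(0.5, 4.5, 9)
y_hat = mls_fit(x, y, x_eval)
err = float(np.max(np.abs(y_hat - (2.0 * x_eval + 1.0))))
```

Raising `scale` or lowering `order` smooths the noise more but blurs structure more, which is exactly the trade-off the paper quantifies.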


Journal ArticleDOI
TL;DR: In this paper, a four-receiver acoustic Doppler velocimeter (ADV) with redundant information for all velocity components is proposed to achieve noise-free turbulence measurements.
Abstract: Although three-receiver acoustic Doppler velocimeters (ADV) can accurately measure the three-dimensional mean flowfield, their turbulence measurements suffer from parasitical noise contributions. By adding a fourth receiver and optimizing the transducer configuration, the turbulence results can be considerably improved. Redundant information is obtained for all velocity components, which theoretically allows noise-free turbulence measurements to be achieved. Experiments show that the parasitical noise contribution is not completely eliminated but reduced by an order of magnitude. At the same time, the useful low-noise frequency range is extended by one order of magnitude. Furthermore, the noise levels of the different components can be directly estimated from the redundant information, which makes it possible to (i) check the quality of the measurements and the system; (ii) estimate the accuracy of the turbulence measurements; and (iii) optimally choose the measuring frequency. Good turbulence results with a four-receiv...

Journal ArticleDOI
TL;DR: Multiresolution speckle filters are applied to improve the automatic processing steps in the clinical research of non-cystic periventricular leukomalacia, in particular to ultrasound neonatal brain images.
Abstract: There is a growing interest in using multiresolution noise filters in a variety of medical imaging applications. We review recent wavelet denoising techniques for medical ultrasound and for magnetic resonance images and discuss some of their potential applications in the clinical investigations of the brain. Our goal is to present and evaluate noise suppression methods based on both image processing and clinical expertise. We analyze two types of filters for magnetic resonance images (MRI): noise suppression in magnitude MRI images and denoising of the blood oxygen level-dependent (BOLD) response in functional MRI images (fMRI). The noise distribution in magnitude MRI images is Rician, while the noise distribution in BOLD images has recently been shown to follow a Gaussian model well. We evaluate different methods based on signal-to-noise ratio improvement and on the preservation of the shape of the activated regions in fMRI. A critical view on the problem of speckle filtering in ultrasound images is given, where we discuss some of the issues that are overlooked in many speckle filters, such as the relevance of the "speckled texture", expert-defined features of interest and the reliability of the common speckle models. We analyze the use of multiresolution speckle filters to improve the automatic processing steps in the clinical research of non-cystic periventricular leukomalacia. In particular, we apply speckle filters to ultrasound neonatal brain images and evaluate the influence of the filtering on the effectiveness of the subsequent classification and segmentation of flares of affected tissue in comparison with the manual delineation of clinicians.

Proceedings ArticleDOI
18 Jun 2006
TL;DR: Theoretical analysis, simulation, and experiment prove that the proposed balance technique is effective in reducing common-mode noise.
Abstract: In this paper, the boost converter model for electromagnetic interference noise analysis is first investigated. Based on this model, a general balance concept is proposed to cancel the common-mode noise. Theoretical analysis, simulation, and experiment prove that the proposed balance technique is effective in reducing common-mode noise.

Proceedings ArticleDOI
01 Jan 2006
TL;DR: A new ECG denoising method based on the recently developed Empirical Mode Decomposition (EMD) is proposed, able to remove high frequency noise with minimum signal distortion.
Abstract: The electrocardiogram (ECG) has been widely used for diagnosis of heart diseases. Good-quality ECGs are utilized by physicians for interpretation and identification of physiological and pathological phenomena. However, in real situations, ECG recordings are often corrupted by artifacts. One prominent artifact is the high-frequency noise caused by electromyogram-induced noise, power line interference, or mechanical forces acting on the electrodes. Noise severely limits the utility of the recorded ECG and thus needs to be removed for better clinical evaluation. Several methods have been developed for ECG denoising. In this paper, we propose a new ECG denoising method based on the recently developed Empirical Mode Decomposition (EMD). The proposed EMD-based method is able to remove high-frequency noise with minimum signal distortion. The method is validated through experiments on the MIT-BIH database. Both quantitative and qualitative results are given. The results show that the proposed method provides very good results for denoising.
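The reconstruction step of EMD denoising is simple once the IMFs exist; the sifting process that produces them is the hard part and is omitted here. In this sketch the "IMFs" are pre-made components that sum to the signal, standing in for the output of an EMD routine.

```python
import numpy as np

def emd_denoise(imfs, drop=2):
    """Partial reconstruction: discard the first `drop` IMFs, which
    carry the highest-frequency (noise-dominated) oscillations."""
    return np.sum(imfs[drop:], axis=0)

t = np.linspace(0.0, 1.0, 500)
clean = np.sin(2 * np.pi * 2 * t)              # slow "ECG-like" component
rng = np.random.default_rng(4)
hf1 = 0.3 * rng.normal(size=t.size)            # stand-ins for the first
hf2 = 0.2 * np.sin(2 * np.pi * 80 * t)         # two (high-frequency) IMFs
imfs = np.stack([hf1, hf2, clean])             # components sum to the signal
signal = imfs.sum(axis=0)

denoised = emd_denoise(imfs, drop=2)
mse_before = float(np.mean((signal - clean) ** 2))
mse_after = float(np.mean((denoised - clean) ** 2))
```

In practice the number of IMFs to drop (or how to threshold them) is the design decision; dropping too many distorts the QRS complexes, which is what the paper's method is built to avoid.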

Journal ArticleDOI
TL;DR: This letter describes a data acquisition setup for recording and processing running speech from a person in a magnetic resonance imaging (MRI) scanner, with the main focus on ensuring synchronicity between image and audio acquisition and on obtaining a good signal-to-noise ratio.
Abstract: This letter describes a data acquisition setup for recording and processing running speech from a person in a magnetic resonance imaging (MRI) scanner. The main focus is on ensuring synchronicity between image and audio acquisition, and on obtaining a good signal-to-noise ratio to facilitate further speech analysis and modeling. A field-programmable gate array based hardware design for synchronizing the scanner image acquisition to other external data such as audio is described. The audio setup itself features two fiber-optical microphones and a noise-canceling filter. Two noise cancellation methods are described, including a novel approach using a pulse-sequence-specific model of the gradient noise of the MRI scanner. The setup is useful for scientific speech production studies. Sample results of speech and singing data acquired and processed using the proposed method are given.

Journal ArticleDOI
TL;DR: Experiments show that the proposed filter can be used for efficient removal of impulse noise from color images without distorting the useful information in the image.
Abstract: A new framework for reducing impulse noise from digital color images is presented, in which a fuzzy detection phase is followed by an iterative fuzzy filtering technique. We call this filter the fuzzy two-step color filter. The fuzzy detection method is mainly based on the calculation of fuzzy gradient values and on fuzzy reasoning. This phase determines three separate membership functions that are passed to the filtering step. These membership functions will be used as a representation of the fuzzy set impulse noise (one function for each color component). Our proposed new fuzzy method is especially developed for reducing impulse noise from color images while preserving details and texture. Experiments show that the proposed filter can be used for efficient removal of impulse noise from color images without distorting the useful information in the image.

Proceedings Article
01 Mar 2006
TL;DR: A signal denoising scheme based on a multiresolution approach referred to as Empirical Mode Decomposition (EMD) is presented, and the results are compared to Wavelet, Averaging and Median methods.
Abstract: In this paper a signal denoising scheme based on a multiresolution approach referred to as Empirical Mode Decomposition (EMD) [1] is presented. The denoising method is a fully data-driven approach. The noisy signal is decomposed adaptively into intrinsic oscillatory components called Intrinsic Mode Functions (IMFs) using a decomposition algorithm called the sifting process. The basic principle of the method is to reconstruct the signal from IMFs that have first been filtered or thresholded. The denoising method is applied to one real signal and to four simulated signals with different noise levels, and the results are compared to Wavelet, Averaging and Median methods. The effect of the noise level on the performance of the proposed denoising is analyzed. The study is limited to signals corrupted by additive white Gaussian noise.

Journal ArticleDOI
TL;DR: This paper focuses on either finding the proper weight of the fidelity term in the energy minimization formulation or on determining the optimal stopping time of a nonlinear diffusion process, and provides two practical alternatives for estimating this condition, based on the covariance of the noise and the residual part.
Abstract: This paper is concerned with finding the best partial differential equation-based denoising process out of a set of possible ones. We focus on either finding the proper weight of the fidelity term in the energy minimization formulation or on determining the optimal stopping time of a nonlinear diffusion process. A necessary condition for achieving maximal SNR is stated, based on the covariance of the noise and the residual part. We provide two practical alternatives for estimating this condition by observing that the filtering of the image and the noise can be approximated by a decoupling technique, with respect to the weight or time parameters. Our automatic algorithm obtains quite accurate results on a variety of synthetic and natural images, including piecewise smooth and textured ones. We assume that the statistics of the noise were previously estimated. No a priori knowledge regarding the characteristics of the clean image is required. A theoretical analysis is carried out, where several SNR performance bounds are established for the optimal strategy and for a widely used method, wherein the variance of the residual part equals the variance of the noise.
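The "widely used method" the paper benchmarks, stopping when the residual variance reaches the noise variance, takes only a few lines around any diffusion step. This sketch uses plain linear diffusion for simplicity; the paper's analysis covers nonlinear diffusion and the fidelity-weight formulation, and its proposed criterion uses the noise/residual covariance instead.

```python
import numpy as np

def diffuse_step(u, tau=0.2):
    """One explicit step of linear (heat) diffusion, Neumann borders."""
    p = np.pad(u, 1, mode="edge")
    lap = p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:] - 4.0 * u
    return u + tau * lap

def denoise_stop_at_sigma(f, noise_var, max_steps=500):
    """Diffuse until var(f - u) reaches the (assumed known) noise variance."""
    u = f.copy()
    for _ in range(max_steps):
        if np.var(f - u) >= noise_var:
            break
        u = diffuse_step(u)
    return u

rng = np.random.default_rng(5)
xx, yy = np.meshgrid(np.linspace(0, 1, 32), np.linspace(0, 1, 32))
clean = 0.5 * np.sin(2 * np.pi * xx) + yy          # smooth test image
noise_var = 0.04
f = clean + np.sqrt(noise_var) * rng.normal(size=clean.shape)
u = denoise_stop_at_sigma(f, noise_var)
mse_f = float(np.mean((f - clean) ** 2))
mse_u = float(np.mean((u - clean) ** 2))
```

For a smooth image the criterion stops after most of the high-frequency noise is gone but before much signal is lost, so the MSE against the clean image drops well below that of the noisy input.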

01 Jan 2006
TL;DR: It is shown that the Mumford-Shah regularizer can be viewed as an extended line process that reflects spatial organization properties of the image edges which do not appear in the common line process or anisotropic diffusion; this makes it possible to distinguish outliers from edges and leads to superior experimental results.
Abstract: Consider the problem of image deblurring in the presence of impulsive noise. Standard image deconvolution methods rely on the Gaussian noise model and do not perform well with impulsive noise. The main challenge is to deblur the image, recover its discontinuities and at the same time remove the impulse noise. Median-based approaches are inadequate, because at high noise levels they induce nonlinear distortion that hampers the deblurring process. Distinguishing outliers from edge elements is difficult in current gradient-based edge-preserving restoration methods. The suggested approach integrates and extends the robust statistics, line process (half quadratic) and anisotropic diffusion points of view. We present a unified variational approach to image deblurring and impulse noise removal. The objective functional consists of a fidelity term and a regularizer. Data fidelity is quantified using the robust modified L1 norm, and elements from the Mumford-Shah functional are used for regularization. We show that the Mumford-Shah regularizer can be viewed as an extended line process. It reflects spatial organization properties of the image edges that do not appear in the common line process or anisotropic diffusion. This makes it possible to distinguish outliers from edges and leads to superior experimental results.

Journal ArticleDOI
TL;DR: A circuit design to implement tail current-shaping is presented that does not dissipate any extra power, does not use additional (noisy) active devices, and occupies a small area; it is extensively analyzed and compared to an ideal pulse-biased technique.
Abstract: This paper introduces a tail current-shaping technique in LC-VCOs to increase the amplitude and to reduce the phase noise while keeping the power dissipation constant. The tail current is made large when the oscillator output voltage reaches its maximum or minimum value and when the sensitivity of the output phase to injected noise is the smallest; the tail current is made small during the zero crossings of the output voltage, when the phase noise sensitivity is large. The phase noise contributions of the active devices are decreased, and the VCO has a larger oscillation amplitude and thus better DC-to-RF conversion compared to a standard VCO with equal power dissipation. A circuit design to implement tail current-shaping is presented that does not dissipate any extra power, does not use additional (noisy) active devices and occupies a small area. The operation and performance of the presented circuit are extensively analyzed and compared to an ideal pulse-biased technique. The presented analysis is confirmed by measurement results of two 2-GHz differential nMOS VCOs fabricated in a 0.25-μm BiCMOS process.


Patent
11 Oct 2006
TL;DR: In this article, a noise reducer is used to remove noise of a random nature from the image signal before correcting the primary black level, and thereafter the black level is corrected.
Abstract: In an image pickup apparatus for preventing linearity defects when photographing in a high-sensitivity mode, a signal processor increases the clamp level for clamping the image signal when processing an image signal produced by a solid-state image pickup device under a predetermined condition, such as photographing in a super-high-sensitivity mode, at a high temperature, or with a long exposure. A noise reducer then executes noise reduction to remove noise of a random nature from the image signal before the primary black level is corrected, and thereafter the black level is corrected.

Journal ArticleDOI
TL;DR: This paper presents approximations that allow efficient computation and compensation of the bias in moving-average and first-order recursive smoothed power spectral density (PSD) estimates, and discusses factors that influence the bias.

Journal ArticleDOI
TL;DR: A new parametric model of OFDM signals is proposed in this paper which shows that, in the presence of phase noise, each received frequency-domain subcarrier signal can be expressed as a sum of all subcarrier signals weighted by a vector parameter.
Abstract: OFDM suffers from severe performance degradation in the presence of phase noise. In particular, phase noise leads to common phase error (CPE) as well as intercarrier interference (ICI) in the frequency domain. Some approaches in the literature mitigate phase noise by directly evaluating and then compensating for CPE or ICI, while others choose to correct phase noise in the time domain. A new parametric model of OFDM signals is proposed in this paper which shows that, in the presence of phase noise, each received frequency-domain subcarrier signal can be expressed as a sum of all subcarrier signals weighted by a vector parameter. Then, two reduced-complexity techniques are presented to estimate this weighting vector. The first is a maximum likelihood (ML) method, whereas the second one is a linear minimum mean square error (LMMSE) technique. Using the obtained estimates, we also propose two approaches, i.e., a decorrelator and an interference canceler, to mitigate phase noise. It is shown that most conventional methods can be readily obtained from our approaches with some approximation or orthogonal transform. Theoretical analysis and numerical results are provided to elaborate the proposed schemes. We show that the performance of both approaches is superior to that of conventional methods. Furthermore, LMMSE gives the best performance, while ML provides a much simpler yet effective way to mitigate phase noise.
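The simplest of the conventional methods the paper generalizes, pilot-based CPE correction, is easy to sketch: estimate the common rotation from known pilot subcarriers and de-rotate the whole symbol. ICI is left untouched here; the pilot positions and noise level are illustrative.

```python
import numpy as np

def correct_cpe(rx, pilot_idx, pilot_sym):
    """Estimate the common phase error from pilots, de-rotate all bins."""
    cpe = np.angle(np.sum(rx[pilot_idx] * np.conj(pilot_sym)))
    return rx * np.exp(-1j * cpe)

rng = np.random.default_rng(6)
n = 64
qpsk = (rng.choice([-1, 1], n) + 1j * rng.choice([-1, 1], n)) / np.sqrt(2)
pilot_idx = np.array([0, 16, 32, 48])
theta = 0.3                                  # common phase rotation (rad)
noise = 0.02 * (rng.normal(size=n) + 1j * rng.normal(size=n))
rx = qpsk * np.exp(1j * theta) + noise
fixed = correct_cpe(rx, pilot_idx, qpsk[pilot_idx])
err_before = float(np.max(np.abs(rx - qpsk)))
err_after = float(np.max(np.abs(fixed - qpsk)))
```

After de-rotation the residual error is dominated by the additive noise (and, in a real system, by the ICI this simple method cannot remove, which is the gap the paper's decorrelator and interference canceler target).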

Journal ArticleDOI
TL;DR: A combined spatial- and temporal-domain wavelet shrinkage algorithm for video denoising is presented in this paper, which is robust to various levels of noise corruption and various levels of motion.
Abstract: A combined spatial- and temporal-domain wavelet shrinkage algorithm for video denoising is presented in this paper. The spatial-domain denoising technique is a selective wavelet shrinkage method which uses a two-threshold criteria to exploit the geometry of the wavelet subbands of each video frame, and each frame of the image sequence is spatially denoised independently of one another. The temporal-domain denoising technique is a selective wavelet shrinkage method which estimates the level of noise corruption as well as the amount of motion in the image sequence. The amount of noise is estimated to determine how much filtering is needed in the temporal-domain, and the amount of motion is taken into consideration to determine the degree of similarity between consecutive frames. The similarity affects how much noise removal is possible using temporal-domain processing. Using motion and noise level estimates, a video denoising technique is established which is robust to various levels of noise corruption and various levels of motion.

Journal ArticleDOI
TL;DR: This paper presents a simple, linear zero-forcing crosstalk canceler, which has low complexity, no latency, and no error propagation; due to the well-conditioned structure of the VDSL channel matrix, the ZF design causes negligible noise enhancement.
Abstract: Crosstalk is the major source of performance degradation in VDSL. Several crosstalk cancelers have been proposed to address this. Unfortunately, they suffer from error propagation, high complexity, and long latency. This paper presents a simple, linear zero-forcing (ZF) crosstalk canceler. This design has a low complexity and no latency and does not suffer from error propagation. Furthermore, due to the well-conditioned structure of the VDSL channel matrix, the ZF design causes negligible noise enhancement. A lower bound on the performance of the linear ZF canceler is derived. This allows performance to be predicted without explicit knowledge of the crosstalk channels, which simplifies service provisioning considerably. This bound shows that the linear ZF canceler operates close to the single-user bound. Therefore, the linear ZF canceler is a low-complexity, low-latency design with predictable near-optimal performance. The combination of spectral optimization and crosstalk cancellation is also considered. Spectra optimization in a multiaccess channel generally involves a complex optimization problem. Since the linear ZF canceler decouples transmission on each line, the spectrum on each modem can be optimized independently, leading to a significant reduction in complexity
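The per-tone operation of a linear zero-forcing canceler is just a matrix inversion; the claim about negligible noise enhancement is a claim about the condition number of the channel matrix. The sketch below uses an arbitrary diagonally dominant matrix as a stand-in for a VDSL tone's channel.

```python
import numpy as np

def zf_cancel(H, y):
    """Linear zero-forcing canceler: invert the per-tone channel matrix."""
    return np.linalg.solve(H, y)

rng = np.random.default_rng(7)
n_lines = 8
# Diagonally dominant channel: unit direct paths, weak crosstalk.
H = np.eye(n_lines) + 0.05 * rng.normal(size=(n_lines, n_lines))
x = rng.choice([-1.0, 1.0], n_lines)          # transmitted symbols
y = H @ x + 0.01 * rng.normal(size=n_lines)   # received, with thermal noise
x_hat = zf_cancel(H, y)
cond = float(np.linalg.cond(H))
err = float(np.max(np.abs(x_hat - x)))
```

Because `H` is close to the identity, `cond` stays near 1, so the inversion barely amplifies the additive noise; with an ill-conditioned channel the same ZF step would blow the noise up, which is why the well-conditioned VDSL structure matters.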

Journal ArticleDOI
TL;DR: Thermally excited transverse phonons in glass fibers generate guided acoustic wave Brillouin scattering (GAWBS), which afflicts the propagating light with phase and polarization noise, so it is important to reduce the harmful effect of GAWBS.
Abstract: Guided acoustic wave Brillouin scattering (GAWBS) generates phase and polarization noise of light propagating in glass fibers. This excess noise affects the performance of various experiments operating at the quantum noise limit. We experimentally demonstrate the reduction of GAWBS noise in a photonic crystal fiber in a broad frequency range by tailoring the acoustic modes, using the photonic crystal also as a phononic crystal. We compare the noise spectrum to that of a standard fiber and observe a tenfold noise reduction in the frequency range up to 200 MHz. Based on our measurement results as well as on numerical simulations, we establish a model for the reduction of GAWBS noise in photonic crystal fibers.

Journal ArticleDOI
TL;DR: The proposed operator is a hybrid filter obtained by appropriately combining a median filter, an edge detector, and a neuro-fuzzy network that offers excellent line, edge, detail, and texture preservation performance while, at the same time, effectively removing noise from the input image.
Abstract: A new operator for restoring digital images corrupted by impulse noise is presented. The proposed operator is a hybrid filter obtained by appropriately combining a median filter, an edge detector, and a neuro-fuzzy network. The internal parameters of the neuro-fuzzy network are adaptively optimized by training. The training is easily accomplished by using simple artificial images that can be generated in a computer. The most distinctive feature of the proposed operator over most other operators is that it offers excellent line, edge, detail, and texture preservation performance while, at the same time, effectively removing noise from the input image. Extensive simulation experiments show that the proposed operator may be used for efficient restoration of digital images corrupted by impulse noise without distorting the useful information in the image.
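The detect-then-filter structure can be sketched with a much cruder detector than the paper's neuro-fuzzy network: replace a pixel by its 3x3 median only where it deviates strongly from that median, leaving everything else untouched. The threshold value is an arbitrary example.

```python
import numpy as np

def remove_impulses(img, thresh=60.0):
    """Selective median filter: a crude stand-in for the paper's
    edge-aware neuro-fuzzy impulse detector."""
    p = np.pad(img, 1, mode="edge")
    H, W = img.shape
    # Stack the 9 shifted copies and take the median along the new axis.
    stack = np.stack([p[dy:dy + H, dx:dx + W]
                      for dy in range(3) for dx in range(3)])
    med = np.median(stack, axis=0)
    is_impulse = np.abs(img - med) > thresh
    out = img.copy()
    out[is_impulse] = med[is_impulse]
    return out

# A smooth intensity ramp with two salt-and-pepper impulses.
clean = np.tile(np.arange(16) * 10.0, (16, 1))
img = clean.copy()
img[5, 5] = 255.0
img[10, 12] = 0.0
out = remove_impulses(img)
```

On this ramp the detector fires only at the two corrupted pixels, so edges and gradients elsewhere pass through unchanged, which is the detail-preservation property the paper emphasizes.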

Proceedings ArticleDOI
TL;DR: A recursive filter for IR is introduced, which conserves the statistical properties of the measured data while pre-processing attenuation measurements, and is shown to successfully eliminate streaking artifacts in photon-starved situations.
Abstract: Computed Tomography (CT) screening and pediatric imaging, among other applications, demand the development of more efficient reconstruction techniques to diminish radiation dose to the patient. While many methods are proposed to limit or modulate patient exposure to x-ray at scan time, the resulting data is excessively noisy, and generates image artifacts unless properly corrected. Statistical iterative reconstruction (IR) techniques have recently been introduced for reconstruction of low-dose CT data, and rely on the accurate modeling of the distribution of noise in the acquired data. After conversion from detector counts to attenuation measurements, however, noisy data usually deviate from simple Gaussian or Poisson representation, which limits the ability of IR to generate artifact-free images. This paper introduces a recursive filter for IR, which conserves the statistical properties of the measured data while pre-processing attenuation measurements. A basic framework for inclusion of detector electronic noise into the statistical model for IR is also presented. The results are shown to successfully eliminate streaking artifacts in photon-starved situations.

Journal ArticleDOI
TL;DR: The noise variance (NOVA) filter is presented: a general framework for (iterative) nonlinear filtering that uses an estimate of the spatially dependent noise variance in an image and is fast enough for routine use in clinical practice.
Abstract: Computed tomography (CT) has become the new reference standard for quantification of emphysema. The most popular measure of emphysema derived from CT is the pixel index (PI), which expresses the fraction of the lung volume with abnormally low intensity values. As PI is calculated from a single, fixed threshold on intensity, this measure is strongly influenced by noise. This effect shows up clearly when comparing the PI score of a high-dose scan to the PI score of a low-dose (i.e., noisy) scan of the same subject. In this paper, the noise variance (NOVA) filter is presented: a general framework for (iterative) nonlinear filtering, which uses an estimate of the spatially dependent noise variance in an image. The NOVA filter iteratively estimates the local image noise and filters the image. For the specific purpose of emphysema quantification of low-dose CT images, a dedicated, noniterative NOVA filter is constructed by using prior knowledge of the data to obtain a good estimate of the spatially dependent noise in an image. The performance of the NOVA filter is assessed by comparing characteristics of pairs of high-dose and low-dose scans. The compared characteristics are the PI scores for different thresholds and the size distributions of emphysema bullae. After filtering, the PI scores of high-dose and low-dose images agree to within 2-3 percentage points. The reproducibility of the high-dose bullae size distribution is also strongly improved. NOVA filtering of a CT image of typically 400×512×512 voxels takes only a couple of minutes, which makes it suitable for routine use in clinical practice.
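The noise sensitivity of the PI measure, and how variance-driven filtering fixes it, can be illustrated with a toy Lee/Wiener-style local filter. This is a stand-in sketch, not the NOVA filter: the noise variance is assumed known and spatially constant here, whereas NOVA estimates it spatially.

```python
import numpy as np

def variance_adaptive_filter(img, noise_var, k=5):
    """Shrink each pixel toward its local mean by a factor based on how
    much of the local variance the noise explains (Lee/Wiener rule)."""
    pad = k // 2
    p = np.pad(img, pad, mode="reflect")
    H, W = img.shape
    win = np.stack([p[dy:dy + H, dx:dx + W]
                    for dy in range(k) for dx in range(k)])
    mean = win.mean(axis=0)
    var = win.var(axis=0)
    gain = np.clip(1.0 - noise_var / np.maximum(var, 1e-12), 0.0, 1.0)
    return mean + gain * (img - mean)

rng = np.random.default_rng(8)
clean = np.full((64, 64), -930.0)          # flat lung-like region, in HU
noise_var = 400.0                          # low-dose noise, std 20 HU
noisy = clean + np.sqrt(noise_var) * rng.normal(size=clean.shape)
filtered = variance_adaptive_filter(noisy, noise_var)

threshold = -950.0                         # emphysema pixel-index threshold
pi_noisy = float(np.mean(noisy < threshold))
pi_filtered = float(np.mean(filtered < threshold))
```

With tissue at -930 HU and a -950 HU threshold, noise alone pushes roughly 16% of the pixels below the threshold; after filtering the spurious PI score collapses toward zero, mirroring the high-dose/low-dose agreement the paper reports.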