Showing papers on "Noise reduction published in 2012"


Journal ArticleDOI
TL;DR: A novel despeckling algorithm for synthetic aperture radar (SAR) images based on the concepts of nonlocal filtering and wavelet-domain shrinkage, which compares favorably w.r.t. several state-of-the-art reference techniques, with better results both in terms of signal-to-noise ratio and of perceived image quality.
Abstract: We propose a novel despeckling algorithm for synthetic aperture radar (SAR) images based on the concepts of nonlocal filtering and wavelet-domain shrinkage. It follows the structure of the block-matching 3-D algorithm, recently proposed for additive white Gaussian noise denoising, but modifies its major processing steps in order to take into account the peculiarities of SAR images. A probabilistic similarity measure is used for the block-matching step, while the wavelet shrinkage is developed using an additive signal-dependent noise model and looking for the optimum local linear minimum-mean-square-error estimator in the wavelet domain. The proposed technique compares favorably w.r.t. several state-of-the-art reference techniques, with better results both in terms of signal-to-noise ratio (on simulated speckled images) and of perceived image quality.
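As a rough illustration of the wavelet-shrinkage ingredient only, the sketch below applies a local linear MMSE (Wiener-like) shrinkage to wavelet detail coefficients under a simplified additive white Gaussian noise model with known variance; the probabilistic block matching, 3-D grouping, and signal-dependent speckle model of the actual method are not reproduced, and the wavelet, level, and window size are arbitrary choices.

```python
# Minimal sketch: local LLMMSE (Wiener-like) shrinkage of wavelet detail
# coefficients, assuming additive white Gaussian noise of known variance.
import numpy as np
import pywt
from scipy.ndimage import uniform_filter

def llmmse_wavelet_denoise(img, sigma_n, wavelet="db4", level=3, win=7):
    coeffs = pywt.wavedec2(img.astype(float), wavelet, level=level)
    out = [coeffs[0]]                      # keep the approximation band
    for detail_level in coeffs[1:]:
        shrunk = []
        for w in detail_level:             # (horizontal, vertical, diagonal)
            # Local estimate of the noisy coefficient power E[w^2].
            local_pow = uniform_filter(w * w, size=win)
            # Signal variance estimate: subtract the noise floor, clip at zero
            # (for an orthogonal wavelet, white noise keeps variance sigma_n^2).
            sig_var = np.maximum(local_pow - sigma_n**2, 0.0)
            # LLMMSE (Wiener-like) shrinkage factor per coefficient.
            shrunk.append(w * sig_var / (sig_var + sigma_n**2))
        out.append(tuple(shrunk))
    return pywt.waverec2(out, wavelet)
```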

601 citations


Journal ArticleDOI
TL;DR: A hyperspectral image denoising algorithm employing a spectral-spatial adaptive total variation (TV) model, in which the spectral noise differences and spatial information differences are both considered in the process of noise reduction.
Abstract: The amount of noise included in a hyperspectral image limits its application and has a negative impact on hyperspectral image classification, unmixing, target detection, and so on. In hyperspectral images, because the noise intensity in different bands is different, to better suppress the noise in the high-noise-intensity bands and preserve the detailed information in the low-noise-intensity bands, the denoising strength should be adaptively adjusted with the noise intensity in the different bands. Meanwhile, in the same band, there exist different spatial property regions, such as homogeneous regions and edge or texture regions; to better reduce the noise in the homogeneous regions and preserve the edge and texture information, the denoising strength applied to pixels in different spatial property regions should also be different. Therefore, in this paper, we propose a hyperspectral image denoising algorithm employing a spectral-spatial adaptive total variation (TV) model, in which the spectral noise differences and spatial information differences are both considered in the process of noise reduction. To reduce the computational load in the denoising process, the split Bregman iteration algorithm is employed to optimize the spectral-spatial hyperspectral TV model and accelerate the speed of hyperspectral image denoising. A number of experiments illustrate that the proposed approach can satisfactorily realize the spectral-spatial adaptive mechanism in the denoising process, and superior denoising results are produced.
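As a rough illustration of the band-adaptive idea only (not the paper's spectral-spatial TV model or its split Bregman solver), the sketch below estimates the noise level of each band and scales an off-the-shelf TV regularization weight accordingly; the scaling constant k is an assumption.

```python
# Minimal sketch: per-band noise estimation drives a per-band TV weight,
# so noisier bands are smoothed more strongly.
import numpy as np
from skimage.restoration import denoise_tv_chambolle, estimate_sigma

def band_adaptive_tv(cube, k=1.5):
    """cube: hyperspectral image of shape (rows, cols, bands)."""
    out = np.empty_like(cube, dtype=float)
    for b in range(cube.shape[-1]):
        band = cube[..., b].astype(float)
        sigma = estimate_sigma(band)            # per-band noise estimate
        out[..., b] = denoise_tv_chambolle(band, weight=k * sigma)
    return out
```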

520 citations


Journal ArticleDOI
TL;DR: The proposed method performs windowing in the EMD domain to reduce the noise from the initial IMFs instead of discarding them completely, thus preserving the QRS complex and yielding a relatively cleaner ECG signal.

362 citations


Proceedings Article
01 Jan 2012
TL;DR: This work introduces a model which uses a deep recurrent autoencoder neural network to denoise input features for robust ASR, and demonstrates that the model is competitive with existing feature denoising approaches on the Aurora2 task and outperforms a tandem approach where deep networks are used to predict phoneme posteriors directly.
Abstract: Recent work on deep neural networks as acoustic models for automatic speech recognition (ASR) has demonstrated substantial performance improvements. We introduce a model which uses a deep recurrent autoencoder neural network to denoise input features for robust ASR. The model is trained on stereo (noisy and clean) audio features to predict clean features given noisy input. The model makes no assumptions about how noise affects the signal, nor about the existence of distinct noise environments. Instead, the model can learn to model any type of distortion or additive noise given sufficient training data. We demonstrate that the model is competitive with existing feature denoising approaches on the Aurora2 task, and outperforms a tandem approach where deep networks are used to predict phoneme posteriors directly.
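The sketch below shows, in broad strokes, how a recurrent autoencoder can be trained on stereo (noisy, clean) feature pairs to predict clean features; the GRU layers, sizes, and 39-dimensional features are illustrative assumptions and do not reproduce the paper's architecture or the Aurora2 setup.

```python
# Minimal sketch of a recurrent denoising autoencoder for acoustic features.
import torch
import torch.nn as nn

class RecurrentDenoiser(nn.Module):
    def __init__(self, n_feats=39, hidden=256):
        super().__init__()
        self.encoder = nn.GRU(n_feats, hidden, batch_first=True)
        self.decoder = nn.GRU(hidden, hidden, batch_first=True)
        self.project = nn.Linear(hidden, n_feats)

    def forward(self, noisy):                  # noisy: (batch, time, n_feats)
        h, _ = self.encoder(noisy)
        h, _ = self.decoder(h)
        return self.project(h)                 # predicted clean features

# One training step on paired noisy/clean utterances (random stand-in data).
model = RecurrentDenoiser()
optim = torch.optim.Adam(model.parameters(), lr=1e-3)
noisy = torch.randn(8, 100, 39)                # stand-in for noisy feature frames
clean = torch.randn(8, 100, 39)                # stand-in for clean feature frames
loss = nn.functional.mse_loss(model(noisy), clean)
optim.zero_grad(); loss.backward(); optim.step()
```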

346 citations


Journal ArticleDOI
TL;DR: This paper proposes a patch-based Wiener filter that exploits patch redundancy for image denoising and is on par with or exceeds the current state of the art, both visually and quantitatively.
Abstract: In this paper, we propose a denoising method motivated by our previous analysis of the performance bounds for image denoising. Insights from that study are used here to derive a high-performance practical denoising algorithm. We propose a patch-based Wiener filter that exploits patch redundancy for image denoising. Our framework uses both geometrically and photometrically similar patches to estimate the different filter parameters. We describe how these parameters can be accurately estimated directly from the input noisy image. Our denoising approach, designed for near-optimal performance (in the mean-squared error sense), has a sound statistical foundation that is analyzed in detail. The performance of our approach is experimentally verified on a variety of images and noise levels. The results presented here demonstrate that our proposed method is on par with or exceeds the current state of the art, both visually and quantitatively.
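The core patch-group Wiener step can be sketched as follows: given a group of patches assumed to share the same statistics, estimate their mean and covariance and apply an LMMSE (Wiener) shrinkage. This omits the geometric/photometric patch selection and the aggregation of overlapping estimates described in the paper; the patch size, group, and toy data are assumptions.

```python
# Minimal sketch of LMMSE (Wiener) filtering of a group of similar patches.
import numpy as np

def wiener_filter_group(group, sigma):
    """group: (n_patches, patch_dim) noisy patches assumed to share statistics."""
    mu = group.mean(axis=0)
    centered = group - mu
    c_noisy = centered.T @ centered / max(len(group) - 1, 1)
    # Clean-signal covariance: subtract the noise floor, clip negative eigenvalues.
    evals, evecs = np.linalg.eigh(c_noisy)
    c_clean = (evecs * np.maximum(evals - sigma**2, 0.0)) @ evecs.T
    # LMMSE estimate: x_hat = mu + C_x (C_x + sigma^2 I)^-1 (y - mu)
    gain = c_clean @ np.linalg.inv(c_clean + sigma**2 * np.eye(len(mu)))
    return mu + centered @ gain.T

# Toy usage: 50 noisy copies of the same 8x8 patch.
rng = np.random.default_rng(0)
clean_patch = rng.random(64)
noisy_group = clean_patch + 0.1 * rng.standard_normal((50, 64))
denoised_group = wiener_filter_group(noisy_group, sigma=0.1)
```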

320 citations


Journal ArticleDOI
TL;DR: This work shows that denoising requires both a noise model and an image model, that a noise model is relatively easy to obtain, and that it can even be estimated from a single noisy image.
Abstract: Digital images are matrices of equally spaced pixels, each containing a photon count. This photon count is a stochastic process due to the quantum nature of light. It follows that all images are noisy. Ever since digital images have existed, numerical methods have been proposed to improve the signal-to-noise ratio. Such ‘denoising’ methods require a noise model and an image model. It is relatively easy to obtain a noise model. As will be explained in the present paper, it is even possible to estimate it from a single noisy image.
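One classical single-image noise estimator in this line of work is the median-absolute-deviation (MAD) estimator computed on the finest diagonal wavelet subband, valid for additive white Gaussian noise; the sketch below shows it, though the paper also discusses more general (e.g., signal-dependent) noise models not covered here.

```python
# Minimal sketch: robust MAD noise estimate from the finest diagonal subband.
import numpy as np
import pywt

def estimate_noise_std(img, wavelet="db1"):
    _, (_, _, diag) = pywt.dwt2(img.astype(float), wavelet)
    return np.median(np.abs(diag)) / 0.6745   # MAD -> std for Gaussian noise

# Example: the estimate should be close to the true sigma of 5.
rng = np.random.default_rng(0)
noisy = 100.0 + 5.0 * rng.standard_normal((256, 256))
print(estimate_noise_std(noisy))
```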

194 citations


Journal ArticleDOI
TL;DR: A novel speckle noise reduction algorithm for OCT images that uses wavelet decompositions of the single frames for local noise and structure estimation, and shows only a minor sharpness decrease at a signal-to-noise gain of about 100%.
Abstract: We introduce a novel speckle noise reduction algorithm for OCT images. Contrary to present approaches, the algorithm does not rely on simple averaging of multiple image frames or denoising on the final averaged image. Instead it uses wavelet decompositions of the single frames for a local noise and structure estimation. Based on this analysis, the wavelet detail coefficients are weighted, averaged and reconstructed. At a signal-to-noise gain at about 100% we observe only a minor sharpness decrease, as measured by a full-width-half-maximum reduction of 10.5%. While a similar signal-to-noise gain would require averaging of 29 frames, we achieve this result using only 8 frames as input to the algorithm. A possible application of the proposed algorithm is preprocessing in retinal structure segmentation algorithms, to allow a better differentiation between real tissue information and unwanted speckle noise.
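A rough sketch of the frame-wise wavelet fusion idea: detail coefficients that are consistent across registered frames are kept, while coefficients dominated by frame-to-frame variation (speckle) are attenuated before averaging and reconstruction. The specific consistency weight used below is an illustrative assumption, not the paper's local noise and structure estimator.

```python
# Minimal sketch: weighted averaging of wavelet coefficients across frames.
import numpy as np
import pywt

def wavelet_frame_average(frames, wavelet="db4", level=4, eps=1e-12):
    """frames: list of co-registered 2-D frames of identical shape."""
    decomps = [pywt.wavedec2(f.astype(float), wavelet, level=level) for f in frames]
    fused = [np.mean([d[0] for d in decomps], axis=0)]   # average approximation
    for lev in range(1, level + 1):
        bands = []
        for b in range(3):                                # H, V, D details
            stack = np.stack([d[lev][b] for d in decomps])
            mean = stack.mean(axis=0)
            var = stack.var(axis=0)
            weight = mean**2 / (mean**2 + var + eps)      # consistency weight
            bands.append(weight * mean)
        fused.append(tuple(bands))
    return pywt.waverec2(fused, wavelet)
```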

170 citations


Journal ArticleDOI
TL;DR: An improved median filtering algorithm is proposed that reduces noise while retaining the details of the image; its complexity is decreased to O(N), and the noise-reduction performance is effectively improved.
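A minimal sketch of the general detection-plus-median idea (not the specific algorithm of the paper): only pixels flagged as likely impulses are replaced by the local median, so uncorrupted detail is left untouched; the extreme-value detector and window size are simple illustrative choices.

```python
# Minimal sketch: switching median filter that only modifies suspected impulses.
import numpy as np
from scipy.ndimage import median_filter

def detect_and_median(img, win=3):
    img = img.astype(float)
    med = median_filter(img, size=win)
    noisy_mask = (img == img.min()) | (img == img.max())  # suspected impulses
    return np.where(noisy_mask, med, img)
```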

167 citations


Journal ArticleDOI
TL;DR: In this paper, the nonlocal means algorithm, a noise attenuation filter originally developed for image denoising, is adopted to attenuate random noise in seismic data.
Abstract: The nonlocal means algorithm is a noise attenuation filter that was originally developed for the purposes of image denoising. This algorithm denoises each sample or pixel within an image by utilizing other similar samples or pixels regardless of their spatial proximity, making the process nonlocal. Such a technique places no assumptions on the data except that structures within the data contain a degree of redundancy. Because this is generally true for reflection seismic data, we propose to adopt the nonlocal means algorithm to attenuate random noise in seismic data. Tests with synthetic and real data sets demonstrate that the nonlocal means algorithm does not smear seismic energy across sharp discontinuities or curved events when compared to seismic denoising methods such as f-x deconvolution.
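A rough sketch of applying an off-the-shelf nonlocal means filter to a 2-D section, in the spirit of the proposed adoption for seismic data; the synthetic "section" and the filter parameters (patch size, search window, h) are illustrative assumptions.

```python
# Minimal sketch: nonlocal means on a synthetic 2-D section.
import numpy as np
from skimage.restoration import denoise_nl_means, estimate_sigma

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 256)
section = np.sin(2 * np.pi * 15 * t)[:, None] * np.ones((1, 128))  # flat "events"
noisy = section + 0.3 * rng.standard_normal(section.shape)

sigma = estimate_sigma(noisy)
denoised = denoise_nl_means(noisy, patch_size=5, patch_distance=6,
                            h=0.8 * sigma, sigma=sigma, fast_mode=True)
```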

151 citations


Journal ArticleDOI
TL;DR: The proposed criterion based on the generalized likelihood ratio is shown to be both easy to derive and powerful in these diverse applications: patch discrimination, image denoising, stereo-matching and motion-tracking under gamma and Poisson noises.
Abstract: Many tasks in computer vision require matching image parts. While higher-level methods consider image features such as edges or robust descriptors, low-level approaches (so-called image-based) compare groups of pixels (patches) and provide dense matching. Patch similarity is a key ingredient to many techniques for image registration, stereo-vision, change detection or denoising. Recent progress in natural image modeling also makes intensive use of patch comparison. A fundamental difficulty when comparing two patches from "real" data is to decide whether the differences should be ascribed to noise or intrinsic dissimilarity. The Gaussian noise assumption leads to the classical definition of patch similarity based on the squared differences of intensities. For the case where noise departs from the Gaussian distribution, several similarity criteria have been proposed in the literature of image processing, detection theory and machine learning. By expressing patch (dis)similarity as a detection test under a given noise model, we review these criteria, introduce a new one, and discuss their properties. We then assess their performance for different tasks: patch discrimination, image denoising, stereo-matching and motion-tracking under gamma and Poisson noises. The proposed criterion based on the generalized likelihood ratio is shown to be both easy to derive and powerful in these diverse applications.
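As an illustration, the sketch below computes a patch dissimilarity from a standard generalized likelihood ratio test under Poisson noise: for each pixel pair it compares the likelihood of a shared underlying rate against individually fitted rates and sums the contributions over the patch. It is offered as a plausible instance of the GLR idea rather than as the paper's exact criterion, and the example patches are synthetic.

```python
# Minimal sketch: Poisson GLR patch dissimilarity (smaller = more similar).
import numpy as np

def _xlogx(k):
    k = np.asarray(k, dtype=float)
    safe = np.where(k > 0, k, 1.0)
    return k * np.log(safe)              # with the convention 0*log(0) = 0

def poisson_glr_dissimilarity(patch1, patch2):
    """-log GLR for 'same rate' vs 'different rates', summed over pixels."""
    k1 = patch1.astype(float)
    k2 = patch2.astype(float)
    s = k1 + k2
    return np.sum(_xlogx(k1) + _xlogx(k2) - _xlogx(s) + s * np.log(2.0))

# Example: a patch vs. a noisy copy of itself, and vs. a brighter patch.
rng = np.random.default_rng(0)
rates = rng.uniform(2, 20, size=(8, 8))
a, b = rng.poisson(rates), rng.poisson(rates)
c = rng.poisson(3 * rates)
print(poisson_glr_dissimilarity(a, b), poisson_glr_dissimilarity(a, c))
```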

150 citations


Patent
22 Nov 2012
TL;DR: A device is proposed that inexpensively performs noise removal and sensitization processing of a video image when connected to a common video camera; pixel values for the latest and past multiple frames of the input image are stored in a first ring buffer and computed by a first computation means.
Abstract: [Problem] To provide a device that inexpensively performs noise removal and sensitization processing of a video image when connected to a common video camera. [Solution] Noise reduction processing, through which brightness is averaged by adding multiple frames at a ratio in accordance with a geometric series, is applied to still pixels; and the noise reduction processing, through which brightness is averaged by adding multiple frames at a ratio in accordance with a geometric series, and sensitization processing involving a sensitization multiplication factor equal to or greater than one are applied to dark pixels. Upon determining whether pixels to be processed are moving pixels or still pixels, pixels to which only the sensitization processing has been applied are selected with respect to the moving pixels, and pixels to which the noise reduction processing and the sensitization processing have been applied are selected with respect to the still pixels. After pixel values equivalent to the latest and past multiple frames for the input image are stored into a first ring buffer and computed by a first computation means, the resulting pixel values are stored into a second ring buffer. The absolute value of the difference between the total value of the oldest multiple frames stored in the second ring buffer and the total value of the latest frames computed by the first computation means is computed by the second computation means, and the pixels are determined to be moving if the absolute value is greater than a predetermined value.
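A rough sketch of the two ingredients described above: recursive frame averaging whose weights follow a geometric series (an exponential moving average), applied only to pixels classified as still by a frame-difference test, combined with a sensitization gain. The threshold, averaging ratio, and gain are illustrative assumptions, not the patent's ring-buffer implementation.

```python
# Minimal sketch: geometric-series temporal averaging with motion detection.
import numpy as np

def temporal_denoise_step(prev_avg, frame, alpha=0.8, motion_thresh=12.0, gain=2.0):
    """prev_avg, frame: 2-D arrays (running average, newest frame)."""
    frame = frame.astype(float)
    averaged = alpha * prev_avg + (1.0 - alpha) * frame   # geometric-series weights
    moving = np.abs(frame - prev_avg) > motion_thresh      # crude motion detector
    # Moving pixels: sensitization only. Still pixels: averaging + sensitization.
    out = np.where(moving, gain * frame, gain * averaged)
    new_avg = np.where(moving, frame, averaged)            # reset average on motion
    return np.clip(out, 0, 255), new_avg
```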

Journal ArticleDOI
TL;DR: This work considers analysis of noisy and incomplete hyperspectral imagery, with the objective of removing the noise and inferring the missing data, and addresses dictionary learning from a Bayesian perspective, considering two distinct means of imposing sparse dictionary usage.
Abstract: We consider analysis of noisy and incomplete hyperspectral imagery, with the objective of removing the noise and inferring the missing data. The noise statistics may be wavelength dependent, and the fraction of data missing (at random) may be substantial, including potentially entire bands, offering the potential to significantly reduce the quantity of data that need be measured. To achieve this objective, the imagery is divided into contiguous three-dimensional (3D) spatio-spectral blocks of spatial dimension much less than the image dimension. It is assumed that each such 3D block may be represented as a linear combination of dictionary elements of the same dimension, plus noise, and the dictionary elements are learned in situ based on the observed data (no a priori training). The number of dictionary elements needed for representation of any particular block is typically small relative to the block dimensions, and all the image blocks are processed jointly (“collaboratively") to infer the underlying dictionary. We address dictionary learning from a Bayesian perspective, considering two distinct means of imposing sparse dictionary usage. These models allow inference of the number of dictionary elements needed as well as the underlying wavelength-dependent noise statistics. It is demonstrated that drawing the dictionary elements from a Gaussian process prior, imposing structure on the wavelength dependence of the dictionary elements, yields significant advantages, relative to the more conventional approach of using an independent and identically distributed Gaussian prior for the dictionary elements; this advantage is particularly evident in the presence of noise. The framework is demonstrated by processing hyperspectral imagery with a significant number of voxels missing uniformly at random, with imagery at specific wavelengths missing entirely, and in the presence of substantial additive noise.

Journal ArticleDOI
TL;DR: A framework and an algorithm are presented in order to remove stationary noise from images using different modalities: scanning electron microscope, FIB-nanotomography, and an emerging fluorescence microscopy technique called selective plane illumination microscopy.
Abstract: A framework and an algorithm are presented in order to remove stationary noise from images. This algorithm is called variational stationary noise remover. It can be interpreted both as a restoration method in a Bayesian framework and as a cartoon+texture decomposition method. In numerous denoising applications, the white noise assumption fails. For example, structured patterns such as stripes appear in the images. The model described here addresses these cases. Applications are presented with images acquired using different modalities: scanning electron microscope, FIB-nanotomography, and an emerging fluorescence microscopy technique called selective plane illumination microscopy.

Journal ArticleDOI
10 Aug 2012 - Sensors
TL;DR: Results reveal that the proposed method offers superior performance to the traditional methods, whether the embedded noise is heavy or light.
Abstract: In structural vibration tests, one of the main factors which disturb the reliability and accuracy of the results is the noise encountered in the measured signals. To overcome this deficiency, this paper presents a discrete wavelet transform (DWT) approach to denoise the measured signals. The denoising performance of DWT is discussed with respect to several processing parameters, including the type of wavelet, decomposition level, thresholding method, and threshold selection rules. To overcome the disadvantages of the traditional hard- and soft-thresholding methods, an improved thresholding technique called the sigmoid function-based thresholding scheme is presented. The procedure is validated by using four benchmark signals with three degrees of degradation, as well as a real measured signal obtained from a three-story reinforced concrete scale model shaking table experiment. The performance of the proposed method is evaluated by computing the signal-to-noise ratio (SNR) and the root-mean-square error (RMSE) after denoising. Results reveal that the proposed method offers superior performance to the traditional methods, whether the embedded noise is heavy or light.
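A rough sketch of DWT denoising with a smooth, sigmoid-shaped shrinkage curve that behaves like hard thresholding for large coefficients while suppressing small ones, avoiding the discontinuity of hard thresholding and the constant bias of soft thresholding. The exact sigmoid function of the paper is not reproduced; the form below, its steepness k, and the universal threshold are assumptions.

```python
# Minimal sketch: DWT denoising with a sigmoid-shaped shrinkage curve.
import numpy as np
import pywt

def sigmoid_threshold(w, thresh, k=10.0):
    # ~1 for |w| >> thresh (keep), ~0 for |w| << thresh (suppress).
    return w / (1.0 + np.exp(-k * (np.abs(w) - thresh) / (thresh + 1e-12)))

def dwt_denoise(signal, wavelet="db4", level=4):
    coeffs = pywt.wavedec(np.asarray(signal, dtype=float), wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745          # noise estimate (MAD)
    thresh = sigma * np.sqrt(2.0 * np.log(len(signal)))     # universal threshold
    denoised = [coeffs[0]] + [sigmoid_threshold(c, thresh) for c in coeffs[1:]]
    return pywt.waverec(denoised, wavelet)
```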

Journal ArticleDOI
TL;DR: Discrete Wavelet Transform based denoising with different thresholding techniques is used to remove three major sources of noise from the acquired ECG signals, namely power line interference, baseline wandering, and high frequency noise; the experimental results show that the "coif5" wavelet and the rigrsure thresholding rule are optimal for unknown Signal to Noise Ratio (SNR) in real-time ECG signals.
Abstract: In recent years, the Electrocardiogram (ECG) has played an imperative role in heart disease diagnostics, Human Computer Interface (HCI), stress and emotional state assessment, etc. In general, ECG signals are affected by noise such as baseline wandering, power line interference, electromagnetic interference, and high frequency noise during data acquisition. In order to retain the ECG signal morphology, several researchers have adopted different preprocessing methods. In this work, Stroop colour word test based mental stress inducement was performed, and ECG signals were acquired from 10 female subjects in the age range of 20 to 25 years. We used Discrete Wavelet Transform (DWT) based denoising with different thresholding techniques to remove three major sources of noise from the acquired ECG signals, namely power line interference, baseline wandering, and high frequency noise. Three wavelet functions ("db4", "coif5" and "sym7") and four different thresholding methods were used to denoise the ECG signals. The experimental results show a significant reduction of the noise considered above while retaining the ECG signal morphology effectively. Four different performance measures were considered to select the appropriate wavelet function and thresholding rule for efficient noise removal: Signal to Interference Ratio (SIR), noise power, Percentage Root Mean Square Difference (PRD), and the periodogram of the Power Spectral Density (PSD). The experimental results show that the "coif5" wavelet and the rigrsure thresholding rule are optimal for unknown Signal to Noise Ratio (SNR) in real-time ECG signals.

Journal ArticleDOI
01 Sep 2012
TL;DR: In this paper, the effect of Reynolds number, surface roughness, freestream turbulence, proximity and wake interference on the radiated noise was studied on single and multiple rod configurations.
Abstract: Acoustic measurements were performed on single and multiple rod configurations to study the effect of Reynolds number, surface roughness, freestream turbulence, proximity and wake interference on the radiated noise. The Reynolds number ranged from 3.8 × 10³ to 10⁵. Directivity measurements were performed to determine how well the dipole assumption for the radiation of vortex shedding noise holds for the different model configurations tested. The dependence of the peak Sound Pressure Level on velocity was also examined. Several concepts for the reduction of the noise radiating from cylindrical rods were tested. It was shown that wire wraps and collar distributions could be used to significantly reduce the noise radiating from rods in tandem configurations.

Journal ArticleDOI
TL;DR: Experimental results show that the proposed approach, which efficiently removes background noise by detecting and modifying noisy pixels in an image, can not only suppress high-density impulse noise but also preserve the detailed information of the image.

Proceedings Article
Xiangdong Zhang, Peiyi Shen, Luo Lingli, Liang Zhang, Juan Song
01 Nov 2012
TL;DR: A general method for image contrast enhancement and noise reduction is proposed, developed especially for enhancing images acquired under very low light conditions where the features of images are nearly invisible and the noise is serious.
Abstract: A general method for image contrast enhancement and noise reduction is proposed in this paper. The method is developed especially for enhancing images acquired under very low light conditions, where the features of images are nearly invisible and the noise is severe. By applying an improved and effective image de-haze algorithm to the inverted input image, the intensity can be amplified so that the dark areas become bright and the contrast gets enhanced. Then, the joint-bilateral filter, with the original green component as the edge image, is introduced to suppress the noise. Experimental results validate the performance of the proposed approach.

Journal ArticleDOI
TL;DR: The proposed coherence-based algorithm was found to yield substantially higher intelligibility than that obtained by the beamforming algorithm, particularly when multiple noise sources or competing talker(s) were present.
Abstract: A novel dual-microphone speech enhancement technique is proposed in the present paper. The technique utilizes the coherence between the target and noise signals as a criterion for noise reduction and can be generally applied to arrays with closely spaced microphones, where noise captured by the sensors is highly correlated. The proposed algorithm is simple to implement and requires no estimation of noise statistics. In addition, it offers the capability of coping with multiple interfering sources that might be located at different azimuths. The proposed algorithm was evaluated with normal hearing listeners using intelligibility listening tests and compared against a well-established beamforming algorithm. Results indicated large gains in speech intelligibility relative to the baseline (front microphone) algorithm in both single and multiple-noise source scenarios. The proposed algorithm was found to yield substantially higher intelligibility than that obtained by the beamforming algorithm, particularly when multiple noise sources or competing talker(s) were present. Objective quality evaluation of the proposed algorithm also indicated significant quality improvement over that obtained by the beamforming algorithm. The intelligibility and quality benefits observed with the proposed coherence-based algorithm make it a viable candidate for hearing aid and cochlear implant devices.
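A rough sketch of a coherence-driven gain for two closely spaced microphones: the complex inter-channel coherence is estimated by recursive smoothing of auto- and cross-spectra, and time-frequency bins are attenuated when the coherence deviates from what a broadside target would produce (real-valued and close to one). This crude gain and its parameters are illustrative assumptions, not the weighting rule of the paper.

```python
# Minimal sketch: coherence-based gain applied to the primary channel's STFT.
import numpy as np
from scipy.signal import stft, istft

def coherence_gain_enhance(x1, x2, fs, alpha=0.7, nperseg=512, floor=0.1):
    f, t, X1 = stft(x1, fs=fs, nperseg=nperseg)
    _, _, X2 = stft(x2, fs=fs, nperseg=nperseg)
    p11 = np.zeros(X1.shape[0]); p22 = np.zeros(X1.shape[0])
    p12 = np.zeros(X1.shape[0], dtype=complex)
    out = np.empty_like(X1)
    for i in range(X1.shape[1]):                       # frame-by-frame smoothing
        p11 = alpha * p11 + (1 - alpha) * np.abs(X1[:, i]) ** 2
        p22 = alpha * p22 + (1 - alpha) * np.abs(X2[:, i]) ** 2
        p12 = alpha * p12 + (1 - alpha) * X1[:, i] * np.conj(X2[:, i])
        coh = p12 / np.sqrt(p11 * p22 + 1e-12)         # complex coherence
        gain = np.clip(np.real(coh), floor, 1.0)       # favor coherent, in-phase bins
        out[:, i] = gain * X1[:, i]
    _, y = istft(out, fs=fs, nperseg=nperseg)
    return y
```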

Journal ArticleDOI
TL;DR: A new ALE based on singular spectrum analysis (SSA), in which the full eigen-spectrum of the embedding matrix is exploited and the eigentriples are adaptively selected using the delayed version of the data.
Abstract: The original adaptive line enhancer (ALE) is used for denoising periodic signals corrupted by white noise. The ALE, however, relies mainly on second-order similarity between the signal and its delayed version and is more effective when the signal is narrowband. A new ALE based on singular spectrum analysis (SSA) is proposed here. In this approach, in the reconstruction stage of SSA, the eigentriples are adaptively selected (filtered) using the delayed version of the data. Unlike the conventional ALE, where only second-order statistics are taken into account, here the full eigen-spectrum of the embedding matrix is exploited. Consequently, the system works for non-Gaussian noise and wideband periodic signals. By performing experiments on synthetic signals, it is demonstrated that the proposed system is very effective for the separation of biomedical data, which often have some periodic or quasi-periodic components, such as EMG affected by ECG artefacts. These data are examined here.
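A rough sketch of SSA used as an adaptive line enhancer: the signal is embedded into a Hankel trajectory matrix, decomposed by SVD, and only the eigentriples whose reconstructed components correlate with a delayed copy of the data (the periodic part) are retained. The window length, delay, and correlation threshold below are illustrative assumptions.

```python
# Minimal sketch: SSA decomposition with delay-correlation component selection.
import numpy as np

def ssa_components(x, L):
    N = len(x)
    K = N - L + 1
    traj = np.column_stack([x[i:i + L] for i in range(K)])    # L x K Hankel matrix
    U, s, Vt = np.linalg.svd(traj, full_matrices=False)
    comps = []
    for i in range(len(s)):
        Xi = s[i] * np.outer(U[:, i], Vt[i])                   # rank-1 piece
        rec = np.zeros(N); cnt = np.zeros(N)
        for k in range(K):                                     # diagonal averaging
            rec[k:k + L] += Xi[:, k]
            cnt[k:k + L] += 1
        comps.append(rec / cnt)
    return comps

def ssa_ale(x, L=40, delay=1, corr_thresh=0.3):
    x = np.asarray(x, dtype=float)
    delayed = np.roll(x, delay)
    keep = []
    for c in ssa_components(x, L):
        rho = np.corrcoef(c[delay:], delayed[delay:])[0, 1]
        if abs(rho) > corr_thresh:                             # periodic-looking part
            keep.append(c)
    return np.sum(keep, axis=0) if keep else np.zeros_like(x)
```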

Journal ArticleDOI
TL;DR: A new noise-robust edge detector is proposed, which combines a small-scale isotropic Gaussian kernel and large-scale anisotropic Gaussian kernels to obtain edge maps of images, achieving noise reduction while maintaining high edge resolution.

Journal ArticleDOI
TL;DR: Numerical results show that the proposed adaptive parameter selection method can not only remove noise and eliminate the staircase effect efficiently in the non-textured region, but also preserve the small details such as textures well in the textured region.
Abstract: The total variation model proposed by Rudin, Osher, and Fatemi performs very well for removing noise while preserving edges. However, it favors a piecewise constant solution in BV space which often leads to the staircase effect, and small details such as textures are often filtered out with noise in the process of denoising. In this paper, we propose a fractional-order multi-scale variational model which can better preserve the textural information and eliminate the staircase effect. This is accomplished by replacing the first-order derivative with the fractional-order derivative in the regularization term, and substituting a kind of multi-scale norm in negative Sobolev space for the L2 norm in the fidelity term of the ROF model. To improve the results, we propose an adaptive parameter selection method for the proposed model by using the local variance measures and the wavelet based estimation of the singularity. Using the operator splitting technique, we develop a simple alternating projection algorithm to solve the new model. Numerical results show that our method can not only remove noise and eliminate the staircase effect efficiently in the non-textured region, but also preserve the small details such as textures well in the textured region. It is for this reason that our adaptive method can improve the result both visually and in terms of the peak signal to noise ratio efficiently.

Proceedings ArticleDOI
25 Mar 2012
TL;DR: It is demonstrated that for the task of image denoising, nearly state-of-the-art results can be achieved using small dictionaries only, provided that they are learned directly from the noisy image.
Abstract: Photon limitations arise in spectral imaging, nuclear medicine, astronomy and night vision. The Poisson distribution used to model this noise has variance equal to its mean, so blind application of standard noise removal methods yields significant artifacts. Recently, overcomplete dictionaries combined with sparse learning techniques have become extremely popular in image reconstruction. The aim of the present work is to demonstrate that for the task of image denoising, nearly state-of-the-art results can be achieved using small dictionaries only, provided that they are learned directly from the noisy image. To this end, we introduce patch-based denoising algorithms which perform an adaptation of PCA (Principal Component Analysis) for Poisson noise. We carry out a comprehensive empirical evaluation of the performance of our algorithms in terms of accuracy when the photon count is really low. The results reveal that, despite its simplicity, PCA-flavored denoising appears to be competitive with other state-of-the-art denoising algorithms.
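A rough sketch of small-dictionary patch denoising for photon-limited data: here the Poisson adaptation is approximated by the Anscombe variance-stabilizing transform followed by ordinary patch PCA with a small number of components, whereas the paper adapts PCA to Poisson statistics directly; patch size, component count, and the toy image are assumptions.

```python
# Minimal sketch: Anscombe transform + truncated patch PCA for low-count images.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.feature_extraction.image import extract_patches_2d, reconstruct_from_patches_2d

def anscombe(x):        return 2.0 * np.sqrt(x + 3.0 / 8.0)
def inv_anscombe(y):    return (y / 2.0) ** 2 - 3.0 / 8.0     # simple algebraic inverse

def poisson_pca_denoise(img, patch=(8, 8), n_components=12):
    stab = anscombe(img.astype(float))
    patches = extract_patches_2d(stab, patch)
    flat = patches.reshape(len(patches), -1)
    pca = PCA(n_components=n_components).fit(flat)             # small "dictionary"
    approx = pca.inverse_transform(pca.transform(flat))        # keep top components
    rec = reconstruct_from_patches_2d(approx.reshape(patches.shape), stab.shape)
    return inv_anscombe(rec)

# Toy usage on a low-count synthetic image.
rng = np.random.default_rng(0)
truth = 4.0 + 3.0 * (np.indices((64, 64)).sum(axis=0) > 64)
noisy = rng.poisson(truth).astype(float)
denoised = poisson_pca_denoise(noisy)
```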

Journal ArticleDOI
TL;DR: In this paper, an adaptive multiresolution version of the blockwise non-local (NL)-means filter is presented for three-dimensional (3D) magnetic resonance (MR) images.
Abstract: In this study, an adaptive multiresolution version of the blockwise non-local (NL)-means filter is presented for three-dimensional (3D) magnetic resonance (MR) images. On the basis of an adaptive soft wavelet coefficient mixing, the proposed filter implicitly adapts the amount of denoising according to the spatial and frequency information contained in the image. Two versions of the filter are described for Gaussian and Rician noise. Quantitative validation was carried out on BrainWeb datasets by using several quality metrics. The results show that the proposed multiresolution filter obtained competitive performance compared with recently proposed Rician NL-means filters. Finally, qualitative experiments on anatomical and diffusion-weighted MR images show that the proposed filter efficiently removes noise while preserving fine structures in classical and very noisy cases. The impact of the proposed denoising method on fibre tracking is also presented on a HARDI dataset.

Journal ArticleDOI
TL;DR: A structure adaptive sinogram (SAS) filter is introduced that incorporates the specific properties of the CT measurement process; it preserves edge information and high-frequency components of organ textures well and shows a homogeneous noise reduction behavior throughout the whole frequency range.
Abstract: The patient dose in computed tomography (CT) imaging is linked to measurement noise. Various noise-reduction techniques have been developed that adapt structure-preserving filters like anisotropic diffusion or bilateral filters to CT noise properties. We introduce a structure adaptive sinogram (SAS) filter that incorporates the specific properties of the CT measurement process. It uses a point-based forward projector to generate a local structure representation called the ray contribution mask (RCM). The similarities between neighboring RCMs are used in an enhanced variant of the bilateral filtering concept, where the photometric similarity is replaced with the structural similarity. We evaluate the performance in four different scenarios: The robustness against reconstruction artifacts is demonstrated by a scan of a high-resolution phantom. Without changing the modulation transfer function (MTF) or introducing artifacts, the SAS filter reduces the noise level by 13.6%. The image sharpness and noise reduction capabilities are visually assessed on in vivo patient scans and quantitatively evaluated on a simulated phantom. Unlike a standard bilateral filter, the SAS filter preserves edge information and high-frequency components of organ textures well. It shows a homogeneous noise reduction behavior throughout the whole frequency range. The last scenario uses a simulated edge phantom to estimate the filter MTF for various contrasts: the noise reduction for the simple edge phantom exceeds 80%. For low contrasts at 55 Hounsfield units (HU), the mid-frequency range is slightly attenuated; at higher contrasts of approximately 100 HU and above, the MTF is fully preserved.

Journal ArticleDOI
TL;DR: This paper proposes to learn a dictionary from the logarithmically transformed image and then to use it in a variational model built for noise removal; in terms of visual quality, peak signal-to-noise ratio, and mean absolute deviation error, the proposed algorithm outperforms state-of-the-art methods.
Abstract: Multiplicative noise removal is a challenging image processing problem, and most existing methods are based on the maximum a posteriori formulation and the logarithmic transformation of multiplicative denoising problems into additive denoising problems. Sparse representations of images have been shown to be efficient approaches for image recovery. Following this idea, in this paper, we propose to learn a dictionary from the logarithmically transformed image, and then to use it in a variational model built for noise removal. Extensive experimental results suggest that in terms of visual quality, peak signal-to-noise ratio, and mean absolute deviation error, the proposed algorithm outperforms state-of-the-art methods.
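A rough sketch of the log-transform idea: take the logarithm so multiplicative noise becomes approximately additive, learn a patch dictionary from the log image, reconstruct patches from their sparse codes, and exponentiate back. The variational model the paper couples with the learned dictionary is omitted, and the patch size, dictionary size, and sparsity settings are assumptions.

```python
# Minimal sketch: dictionary learning on log-transformed patches.
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.feature_extraction.image import extract_patches_2d, reconstruct_from_patches_2d

def multiplicative_denoise(img, patch=(8, 8), n_atoms=64, eps=1e-6):
    log_img = np.log(img.astype(float) + eps)                 # multiplicative -> additive
    patches = extract_patches_2d(log_img, patch)
    flat = patches.reshape(len(patches), -1)
    mean = flat.mean(axis=1, keepdims=True)
    learner = MiniBatchDictionaryLearning(n_components=n_atoms,
                                          transform_n_nonzero_coefs=5)
    codes = learner.fit(flat - mean).transform(flat - mean)
    approx = codes @ learner.components_ + mean               # sparse reconstruction
    rec = reconstruct_from_patches_2d(approx.reshape(patches.shape), log_img.shape)
    return np.exp(rec)
```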

Journal ArticleDOI
TL;DR: Multimicrophone directionality was effective in improving speech understanding in spatially separated noisy conditions, and the single-channel noise reduction algorithm further enhanced speech intelligibility in speech-weighted noise for cochlear implant users while maintaining equivalent performance in quiet situations and when listening to music.
Abstract: OBJECTIVES This study tested a combination of algorithms designed to improve cochlear implant performance in noise. A noise reduction (NR) algorithm, based on signal to noise ratio estimation was evaluated in combination with several directional microphone algorithms available in the Cochlear CP810 sound processor. DESIGN Fourteen adult unilateral cochlear implant users participated in the study. Evaluation was conducted using word recognition in quiet, sentence recognition in noise, and subjective feedback via questionnaire after a period of take-home use. Music appreciation was also evaluated in a controlled listening task. The sentence recognition task measured speech reception threshold for 50% morphemes correct. The interfering maskers were speech-weighted noise and competing talkers, which were spatially separated from the target speech. In addition, the locations of the noise maskers changed during the test in an effort to replicate relevant real-world listening conditions. SmartSound directionality settings Standard, Zoom, and Beam (used in the SmartSound programs Everyday, Noise, and Focus, respectively) were all evaluated with and without NR. RESULTS Microphone directionality demonstrated a consistent benefit in sentence recognition in all noise conditions tested. The group average speech reception threshold benefit over the Standard setting was 3.7 dB for Zoom and 5.3 dB for Beam. Addition of the NR algorithm further improved sentence recognition by 1.3 dB when the noise maskers were speech-weighted noise. There was an overall group preference for the NR algorithm in noisy environments. Group mean word recognition in quiet, preference in quiet conditions, and music appreciation were all unaffected by the NR algorithm. CONCLUSIONS Multimicrophone directionality was effective in improving speech understanding in spatially separated noisy conditions. The single-channel NR algorithm further enhanced speech intelligibility in speech-weighted noise for cochlear implant users while maintaining equivalent performance in quiet situations and when listening to music.

Journal ArticleDOI
TL;DR: In this article, the authors developed a package on graphics processing unit (GPU), called gDRR, for the accurate and efficient computations of x-ray projection images in CBCT under clinically realistic conditions.
Abstract: Purpose: Simulation of x-ray projection images plays an important role in cone beam CT (CBCT) related research projects, such as the design of reconstruction algorithms or scanners. A projection image contains primary signal, scatter signal, and noise. It is computationally demanding to perform accurate and realistic computations for all of these components. In this work, the authors develop a package on graphics processing unit (GPU), called gDRR, for the accurate and efficient computations of x-ray projection images in CBCT under clinically realistic conditions. Methods: The primary signal is computed by a trilinear ray-tracing algorithm. A Monte Carlo (MC) simulation is then performed, yielding the primary signal and the scatter signal, both with noise. A denoising process specifically designed for Poisson noise removal is applied to obtain a smooth scatter signal. The noise component is then obtained by combining the difference between the MC primary and the ray-tracing primary signals, and the difference between the MC simulated scatter and the denoised scatter signals. Finally, a calibration step converts the calculated noise signal into a realistic one by scaling its amplitude according to a specified mAs level. The computations of gDRR include a number of realistic features, e.g., a bowtie filter, a polyenergetic spectrum, and detector response. The implementation is fine-tuned for a GPU platform to yield high computational efficiency. Results: For a typical CBCT projection with a polyenergetic spectrum, the calculation time for the primary signal using the ray-tracing algorithms is 1.2–2.3 s, while the MC simulations take 28.1–95.3 s, depending on the voxel size. Computation time for all other steps is negligible. The ray-tracing primary signal matches well with the primary part of the MC simulation result. The MC simulated scatter signal using gDRR is in agreement with EGSnrc results with a relative difference of 3.8%. A noise calibration process is conducted to calibrate gDRR against a real CBCT scanner. The calculated projections are accurate and realistic, such that beam-hardening artifacts and scatter artifacts can be reproduced using the simulated projections. The noise amplitudes in the CBCT images reconstructed from the simulated projections also agree with those in the measured images at corresponding mAs levels. Conclusions: A GPU computational tool, gDRR, has been developed for the accurate and efficient simulations of x-ray projections of CBCT with realistic configurations.

Patent
24 Aug 2012
TL;DR: A method is proposed for processing M subband communication signals and N target-cancelled signals in each subband with a set of beamformer coefficients to obtain an inverse target-cancelled covariance matrix of order N in each band.
Abstract: A method comprises processing M subband communication signals and N target-cancelled signals in each subband with a set of beamformer coefficients to obtain an inverse target-cancelled covariance matrix of order N in each band; using a target absence signal to obtain an initial estimate of the noise power in a beamformer output signal averaged over recent frames with target absence in each subband; multiplying the initial noise estimate with a noise correction factor to obtain a refined estimate of the power of the beamformer output noise signal component in each subband; processing the refined estimate with the magnitude of the beamformer output to obtain a postfilter gain value in each subband; processing the beamformer output signal with the postfilter gain value to obtain a postfilter output signal in each subband; and processing the postfilter output subband signals to obtain an enhanced beamformed output signal.

Proceedings ArticleDOI
25 Mar 2012
TL;DR: A novel dual-channel noise reduction algorithm is proposed whose key components are a noise PSD estimator and an improved spectral weighting rule, both of which explicitly exploit the Power Level Differences of the desired speech signal between the microphones.
Abstract: This paper discusses the application of noise reduction algorithms for dual-microphone mobile phones. An analysis of the acoustical environment based on recordings with a dual-microphone mock-up phone mounted on a dummy head is given. Motivated by the recordings, a novel dual-channel noise reduction algorithm is proposed. The key components are a noise PSD estimator and an improved spectral weighting rule which both explicitly exploit the Power Level Differences (PLD) of the desired speech signal between the microphones. Experiments with recorded data show that this low complexity system has a good performance and is beneficial for an integration into future mobile communication devices.
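A rough sketch of the power-level-difference (PLD) idea for a dual-microphone phone: speech is much louder at the primary (bottom) microphone while far-field noise has similar power at both, so bins with a small normalized PLD are treated as noise-dominated and used to update a noise PSD estimate that drives a Wiener-like gain on the primary channel. The thresholds, smoothing constants, and gain floor are illustrative assumptions, not the paper's estimator and weighting rule.

```python
# Minimal sketch: PLD-guided noise PSD estimation and spectral weighting.
import numpy as np
from scipy.signal import stft, istft

def pld_noise_reduction(x1, x2, fs, nperseg=512, alpha=0.9, pld_thresh=0.2, floor=0.1):
    _, _, X1 = stft(x1, fs=fs, nperseg=nperseg)    # primary (close-talk) channel
    _, _, X2 = stft(x2, fs=fs, nperseg=nperseg)    # secondary channel
    noise_psd = np.full(X1.shape[0], 1e-8)
    out = np.empty_like(X1)
    for i in range(X1.shape[1]):
        p1 = np.abs(X1[:, i]) ** 2
        p2 = np.abs(X2[:, i]) ** 2
        pld = (p1 - p2) / (p1 + p2 + 1e-12)        # normalized power level difference
        noise_bins = pld < pld_thresh              # speech likely absent where PLD is small
        noise_psd = np.where(noise_bins,
                             alpha * noise_psd + (1 - alpha) * p1,
                             noise_psd)
        gain = np.maximum(1.0 - noise_psd / (p1 + 1e-12), floor)   # Wiener-like gain
        out[:, i] = gain * X1[:, i]
    _, y = istft(out, fs=fs, nperseg=nperseg)
    return y
```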