
Showing papers on "Noise reduction published in 2004"


Journal ArticleDOI
TL;DR: This paper proposes an alternate approach using L1 norm minimization and robust regularization based on a bilateral prior to deal with different data and noise models and demonstrates its superiority to other super-resolution methods.
Abstract: Super-resolution reconstruction produces one or a set of high-resolution images from a set of low-resolution images. In the last two decades, a variety of super-resolution methods have been proposed. These methods are usually very sensitive to their assumed model of data and noise, which limits their utility. This paper reviews some of these methods and addresses their shortcomings. We propose an alternate approach using L1 norm minimization and robust regularization based on a bilateral prior to deal with different data and noise models. This computationally inexpensive method is robust to errors in motion and blur estimation and results in images with sharp edges. Simulation results confirm the effectiveness of our method and demonstrate its superiority to other super-resolution methods.
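The robust-estimation idea can be sketched in a few lines. This is a toy single-image version, not the authors' multi-frame pipeline: an L1 data term plus a bilateral total-variation prior, minimized by sign-based steepest descent. All parameter values are illustrative.

```python
import numpy as np

def bilateral_tv_grad(x, p=2, alpha=0.6):
    # gradient of the bilateral TV prior:
    # sum over shifts (l, m) of alpha^(|l|+|m|) * ||x - shift(x, l, m)||_1
    g = np.zeros_like(x)
    for l in range(-p, p + 1):
        for m in range(-p, p + 1):
            if l == 0 and m == 0:
                continue
            shifted = np.roll(np.roll(x, l, axis=0), m, axis=1)
            g += alpha ** (abs(l) + abs(m)) * np.sign(x - shifted)
    return g

def l1_denoise(y, lam=1.0, step=0.01, iters=150):
    # steepest descent on ||x - y||_1 + lam * BTV(x); every term uses an
    # L1 norm, so gradients are sign() terms and outliers have bounded pull
    x = y.copy()
    for _ in range(iters):
        x -= step * (np.sign(x - y) + lam * bilateral_tv_grad(x))
    return x

rng = np.random.default_rng(0)
clean = np.ones((32, 32))
noisy = clean + rng.normal(0.0, 0.3, clean.shape)
out = l1_denoise(noisy)
```

Because every gradient is a bounded sign term, single gross errors (from misregistration or blur misestimation, in the full method) cannot dominate the update, which is the source of the robustness claimed above.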

2,175 citations


Journal ArticleDOI
TL;DR: An exact analysis of orthogonal frequency-division multiplexing (OFDM) performance in the presence of phase noise and a general phase-noise suppression scheme which, by analytical and numerical results, proves to be quite effective in practice.
Abstract: We provide an exact analysis of orthogonal frequency-division multiplexing (OFDM) performance in the presence of phase noise. Unlike most methods, which assume small phase noise, we examine the general case for any phase noise level. After deriving a closed-form expression for the signal-to-noise-plus-interference ratio (SINR), we exhibit the effects of phase noise by precisely expressing the OFDM system performance as a function of its critical parameters. This helps in understanding the meaning of small phase noise and how it reflects on the proper parameter selection of a specific OFDM system. To combat phase noise, we also provide in this paper a general phase-noise suppression scheme which, by analytical and numerical results, proves to be quite effective in practice.

355 citations


01 Jan 2004
TL;DR: This paper presents a review of some significant work in the area of image denoising and some popular approaches are classified into different groups and an overview of various algorithms and analysis is provided.
Abstract: Removing noise from the original signal is still a challenging problem for researchers. There have been several published algorithms and each approach has its assumptions, advantages, and limitations. This paper presents a review of some significant work in the area of image denoising. After a brief introduction, some popular approaches are classified into different groups and an overview of various algorithms and analysis is provided. Insights and potential future trends in the area of denoising are also discussed.

307 citations


01 Jan 2004
TL;DR: A general mathematical and experimental methodology to compare and classify classical image denoising algorithms is defined, and an algorithm (Non Local Means) addressing the preservation of structure in a digital image is proposed.
Abstract: The search for efficient image denoising methods is still a valid challenge, at the crossing of functional analysis and statistics. In spite of the sophistication of the recently proposed methods, most algorithms have not yet attained a desirable level of applicability. All show an outstanding performance when the image model corresponds to the algorithm assumptions, but fail in general and create artifacts or remove image fine structures. The main focus of this paper is, first, to define a general mathematical and experimental methodology to compare and classify classical image denoising algorithms, second, to propose an algorithm (Non Local Means) addressing the preservation of structure in a digital image. The mathematical analysis is based on the analysis of the "method noise", defined as the difference between a digital image and its denoised version. The NL-means algorithm is also proven to be asymptotically optimal under a generic statistical image model. The denoising performance of all considered methods is compared in four ways; mathematical: asymptotic order of magnitude of the method noise under regularity assumptions; perceptual-mathematical: the algorithms' artifacts and their explanation as a violation of the image model; quantitative experimental: by tables of L2 distances of the denoised version to the original image. The most powerful evaluation method seems, however, to be the visualization of the method noise on natural images. The more this method noise looks like real white noise, the better the method.
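A minimal NL-means sketch, assuming a per-pixel-normalized patch distance and illustrative parameter values; the "method noise" discussed above is simply the image minus its denoised version:

```python
import numpy as np

def nl_means(img, patch=3, search=7, h=0.3):
    # each pixel becomes a weighted average of pixels whose surrounding
    # patches look alike; weights decay with the mean squared patch distance
    pad = patch // 2
    p = np.pad(img, pad, mode="reflect")
    H, W = img.shape
    out = np.zeros_like(img)
    sr = search // 2
    for i in range(H):
        for j in range(W):
            ref = p[i:i + patch, j:j + patch]
            wsum = vsum = 0.0
            for di in range(-sr, sr + 1):
                for dj in range(-sr, sr + 1):
                    ii, jj = i + di, j + dj
                    if not (0 <= ii < H and 0 <= jj < W):
                        continue
                    cand = p[ii:ii + patch, jj:jj + patch]
                    w = np.exp(-np.mean((ref - cand) ** 2) / h ** 2)
                    wsum += w
                    vsum += w * img[ii, jj]
            out[i, j] = vsum / wsum
    return out

rng = np.random.default_rng(1)
clean = np.zeros((24, 24))
clean[:, 12:] = 1.0                      # step edge
noisy = clean + rng.normal(0.0, 0.2, clean.shape)
den = nl_means(noisy)
method_noise = noisy - den               # the paper's "method noise"
```

Patches on opposite sides of the edge get near-zero weight, so the edge survives averaging; on a good denoiser, `method_noise` should look like white noise rather than contain image structure.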

297 citations


Journal ArticleDOI
TL;DR: A spatially adaptive two-dimensional wavelet filter is used to reduce speckle noise in time-domain and Fourier-domain optical coherence tomography (OCT) images.
Abstract: A spatially adaptive two-dimensional wavelet filter is used to reduce speckle noise in time-domain and Fourier-domain optical coherence tomography (OCT) images. Edges can be separated from discontinuities that are due to noise, and noise power can be attenuated in the wavelet domain without significantly compromising image sharpness. A single parameter controls the degree of noise reduction. When this filter is applied to ophthalmic OCT images, signal-to-noise ratio improvements of >7 dB are attained, with a sharpness reduction of <3%.
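The wavelet-domain idea (attenuate noise power without significantly compromising sharpness) can be sketched with a one-level Haar transform and soft thresholding. This is a simplification of the paper's spatially adaptive filter: a single illustrative threshold `t` plays the role of its one noise-reduction parameter.

```python
import numpy as np

def haar2(x):
    # one-level 2D Haar transform (even-sized input assumed)
    a = (x[0::2] + x[1::2]) / 2.0
    d = (x[0::2] - x[1::2]) / 2.0
    return ((a[:, 0::2] + a[:, 1::2]) / 2.0, (a[:, 0::2] - a[:, 1::2]) / 2.0,
            (d[:, 0::2] + d[:, 1::2]) / 2.0, (d[:, 0::2] - d[:, 1::2]) / 2.0)

def ihaar2(LL, LH, HL, HH):
    a = np.empty((LL.shape[0], LL.shape[1] * 2))
    d = np.empty_like(a)
    a[:, 0::2], a[:, 1::2] = LL + LH, LL - LH
    d[:, 0::2], d[:, 1::2] = HL + HH, HL - HH
    x = np.empty((a.shape[0] * 2, a.shape[1]))
    x[0::2], x[1::2] = a + d, a - d
    return x

def soft(c, t):
    return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

def wavelet_denoise(img, t=0.1):
    # shrink only the detail subbands; the approximation is kept intact
    LL, LH, HL, HH = haar2(img)
    return ihaar2(LL, soft(LH, t), soft(HL, t), soft(HH, t))

rng = np.random.default_rng(2)
clean = np.full((32, 32), 0.5)
noisy = clean + rng.normal(0.0, 0.05, clean.shape)
den = wavelet_denoise(noisy, t=0.1)
```

With `t = 0` the transform is perfectly inverted, so all of the noise attenuation comes from the thresholding step, mirroring the single-parameter control described above.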

289 citations


BookDOI
01 Apr 2004
TL;DR: The authors explain the development of the Multichannel Frequency-domain Adaptive Algorithm and its applications in Speech Acquisition and Enhancement and Real-Time Hands-Free Stereo Communication.
Abstract: Preface. Contributing Authors.
1: Introduction (Yiteng (Arden) Huang, J. Benesty): 1. Multimedia Communications. 2. Challenges and Opportunities. 3. Organization of the Book.
I: Speech Acquisition and Enhancement.
2: Differential Microphone Arrays (G.W. Elko): 1. Introduction. 2. Differential Microphone Arrays. 3. Array Directional Gain. 4. Optimal Arrays for Isotropic Fields. 5. Design Examples. 6. Sensitivity to Microphone Mismatch and Noise. 7. Conclusions.
3: Spherical Microphone Arrays for 3D Sound Recording (J. Meyer, G.W. Elko): 1. Introduction. 2. Fundamental Concept. 3. The Eigenbeamformer. 4. Modal-Beamformer. 5. Robustness Measure. 6. Beampattern Design. 7. Measurements. 8. Summary. 9. Appendix A.
4: Subband Noise Reduction Methods for Speech Enhancement (E.J. Diethorn): 1. Introduction. 2. Wiener Filtering. 3. Speech Enhancement by Short-Time Spectral Modification. 4. Averaging Techniques for Envelope Estimation. 5. Example Implementation. 6. Conclusion.
II: Acoustic Echo Cancellation.
5: Adaptive Algorithms for MIMO Acoustic Echo Cancellation (J. Benesty, T. Gansler, Yiteng (Arden) Huang, M. Rupp): 1. Introduction. 2. Normal Equations and Identification of a MIMO System. 3. The Classical and Factorized Multichannel RLS. 4. The Multichannel Fast RLS. 5. The Multichannel LMS Algorithm. 6. The Multichannel APA. 7. The Multichannel Exponentiated Gradient Algorithm. 8. The Multichannel Frequency-domain Adaptive Algorithm. 9. Conclusions.
6: Double-talk Detectors for Acoustic Echo Cancellers (T. Gansler, J. Benesty): 1. Introduction. 2. Basics of AEC and DTD. 3. Double-talk Detection Algorithms. 4. Comparison of DTDs by Means of the ROC. 5. Discussion.
7: The WinEC: A Real-Time Hands-Free Stereo Communication System (T. Gansler, V. Fischer, E.J. Diethorn, J. Benesty): 1. Introduction. 2. System Description. 3. Algorithms of the Echo Canceller Module. 4. Residual Echo and Noise Suppression. 5. Simulations. 6. Real-Time Tests with Different Modes of Operation. 7. Discussion.
III: Sound Source Tracking and Separation.
8: Time Delay Estimation (Jingdong Chen, Yiteng (Arden) Huang, J. Benesty): 1. Introduction. 2. Signal Models. 3. Generalized Cross-Correlation Method. 4. The Multichannel Cross-Correlation Algorithm. 5. Adaptive Eigenvalue Decomposition Algorithm. 6. Adaptive Multichannel Time Delay Estimation. 7. Experiments. 8. Conclusions.
9: Source Localization (Yiteng (Arden) Huang, J. Benesty, G.W. Elko): 1. Introduction. 2. Source Localization Problem. 3. Measurement Model and Cramer-Rao Lower Bound for Source Localization. 4. Maximum Likelihood Estimator. 5. Least Squares Estimate. 6. Example

284 citations


Journal ArticleDOI
TL;DR: The noise reduction procedure, comprising an ICA separation phase, automatic selection of artifactual ICs, and a 'discrepancy' control cycle, performed well on both simulated and real MEG data and appears able to separate different cerebral activity sources, even those with very similar frequency content.

269 citations


Journal ArticleDOI
TL;DR: A novel filtered-s least mean square (FSLMS) algorithm based ANC structure, which functions as a nonlinear controller, is proposed in this paper; its fast implementation substantially reduces the number of operations compared to both the standard FSLMS and the VFXLMS algorithm.
Abstract: In many practical applications the acoustic noise generated from dynamical systems is nonlinear and deterministic or stochastic, colored, and non-Gaussian. It has been reported that the linear techniques used to control such noise exhibit degradation in performance. In addition, the actuators of an active noise control (ANC) system very often have nonminimum-phase response. A linear controller under such situations can not model the inverse of the actuator, and hence yields poor performance. A novel filtered-s least mean square (FSLMS) algorithm based ANC structure, which functions as a nonlinear controller, is proposed in this paper. A fast implementation scheme of the FSLMS algorithm is also presented. Computer simulations have been carried out to demonstrate that the proposed algorithm outperforms the standard filtered-x least mean square (FXLMS) algorithm and even performs better than the recently proposed Volterra filtered-x least mean square (VFXLMS) algorithm, in terms of mean square error (MSE), for active control of nonlinear noise processes. An evaluation of the computational requirements shows that the FSLMS algorithm offers a computational advantage over VFXLMS when the secondary path estimate is of length less than 6. However, the fast implementation of the FSLMS algorithm substantially reduces the number of operations compared to that of FSLMS as well as VFXLMS algorithm.
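A scaled-down, single-channel sketch of the filtered-s idea, assuming a trigonometric functional-link expansion and an exactly known secondary path. Both are simplifications, and the paper's fast implementation is not reproduced here; all signals and parameters are illustrative.

```python
import numpy as np

def expand(x_buf):
    # functional-link (trigonometric) expansion: the "s" in filtered-s LMS
    return np.concatenate([x_buf, np.sin(np.pi * x_buf), np.cos(np.pi * x_buf)])

rng = np.random.default_rng(2)
N, L = 4000, 8
s_path = np.array([1.0, 0.5])                  # secondary path (estimate assumed exact)
x = rng.uniform(-0.5, 0.5, N)                  # reference noise
d = np.roll(x, 2) + 0.3 * np.roll(x, 2) ** 2   # nonlinear primary disturbance

w = np.zeros(3 * L)
mu = 0.01
y_hist = np.zeros(N)
fs_hist = np.zeros((N, 3 * L))
e = np.zeros(N)
for n in range(L, N):
    s_vec = expand(x[n - L + 1:n + 1][::-1])
    y_hist[n] = w @ s_vec                      # controller (anti-noise) output
    # anti-noise reaches the error microphone through the secondary path
    anti = s_path[0] * y_hist[n] + s_path[1] * y_hist[n - 1]
    e[n] = d[n] - anti
    # "filtered-s": the expanded reference passed through the path estimate
    fs_hist[n] = s_vec
    fs = s_path[0] * fs_hist[n] + s_path[1] * fs_hist[n - 1]
    w += mu * e[n] * fs
```

The sin/cos terms let a linear-in-the-weights LMS update cancel part of the quadratic component of the disturbance, which a plain FXLMS controller cannot model.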

229 citations


Journal ArticleDOI
TL;DR: This work uses partial differential equation techniques to remove noise from digital images: a total-variation filter smooths the normal vectors of the level curves of a noisy image, and finite difference schemes are used to solve the resulting equations.
Abstract: In this work, we use partial differential equation techniques to remove noise from digital images. The removal is done in two steps. We first use a total-variation filter to smooth the normal vectors of the level curves of a noisy image. After this, we try to find a surface to fit the smoothed normal vectors. For each of these two stages, the problem is reduced to a nonlinear partial differential equation. Finite difference schemes are used to solve these equations. A broad range of numerical examples are given in the paper.
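The kind of total-variation filter used in the first step can be sketched on a scalar image (the paper applies it to the normal field): an explicit finite-difference descent on the TV energy plus a fidelity term, with illustrative parameters.

```python
import numpy as np

def tv_smooth(u0, lam=0.5, step=0.01, eps=1e-2, iters=500):
    # explicit descent on  TV(u) + (lam/2) * ||u - u0||^2 ,
    # with |grad u| regularized by eps to avoid division by zero
    u = u0.copy()
    for _ in range(iters):
        ux = np.roll(u, -1, axis=1) - u            # forward differences
        uy = np.roll(u, -1, axis=0) - u
        mag = np.sqrt(ux ** 2 + uy ** 2 + eps)
        px, py = ux / mag, uy / mag                # normalized gradient field
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        u += step * (div - lam * (u - u0))
    return u

rng = np.random.default_rng(3)
clean = np.zeros((32, 32))
clean[:, 16:] = 1.0
noisy = clean + rng.normal(0.0, 0.3, clean.shape)
den = tv_smooth(noisy)
```

The 1/|grad u| weighting makes diffusion weak across large jumps, which is why TV smoothing flattens noise while keeping edges, here applied to intensities rather than normal vectors.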

217 citations


Journal ArticleDOI
King Chung1
TL;DR: This review discusses the challenges in hearing aid design and fitting and the recent developments in advanced signal processing technologies to meet these challenges and discusses the basic concepts and the building blocks of digital signal processing algorithms.
Abstract: This review discusses the challenges in hearing aid design and fitting and the recent developments in advanced signal processing technologies to meet these challenges. The first part of the review discusses the basic concepts and the building blocks of digital signal processing algorithms, namely, the signal detection and analysis unit, the decision rules, and the time constants involved in the execution of the decision. In addition, mechanisms and the differences in the implementation of various strategies used to reduce the negative effects of noise are discussed. These technologies include the microphone technologies that take advantage of the spatial differences between speech and noise and the noise reduction algorithms that take advantage of the spectral difference and temporal separation between speech and noise. The specific technologies discussed in this paper include first-order directional microphones, adaptive directional microphones, second-order directional microphones, microphone matching algorithms, array microphones, multichannel adaptive noise reduction algorithms, and synchrony detection noise reduction algorithms. Verification data for these technologies, if available, are also summarized.

181 citations


Journal ArticleDOI
TL;DR: A generalized noise reduction scheme, called the Spatially Pre-processed Speech Distortion Weighted Multi-channel Wiener filter (SP-SDW-MWF), that encompasses the Generalized Sidelobe Canceller and a recently developed Multi-Channel Wiener Filtering technique as extreme cases and allows for in-between solutions.

Journal ArticleDOI
TL;DR: This work proposes three postfiltering methods for improving the performance of microphone arrays based on single-channel speech enhancers and making use of recently proposed algorithms concatenated to the beamformer output, and a multichannel speech enhancer which exploits noise-only components constructed within the TF-GSC structure.
Abstract: In speech enhancement applications microphone array postfiltering allows additional reduction of noise components at a beamformer output. Among microphone array structures the recently proposed general transfer function generalized sidelobe canceller (TF-GSC) has shown impressive noise reduction abilities in a directional noise field, while still maintaining low speech distortion. However, in a diffused noise field less significant noise reduction is obtainable. The performance is even further degraded when the noise signal is nonstationary. In this contribution we propose three postfiltering methods for improving the performance of microphone arrays. Two of these are based on single-channel speech enhancers that make use of recently proposed algorithms concatenated to the beamformer output. The third is a multichannel speech enhancer which exploits noise-only components constructed within the TF-GSC structure. This work concentrates on the assessment of the proposed postfiltering structures. An extensive experimental study, which consists of both objective and subjective evaluation in various noise fields, demonstrates the advantage of the multichannel postfiltering compared to the single-channel techniques.

Journal ArticleDOI
TL;DR: Two anti-vignetting methods that effectively estimate the distribution of correction factors are proposed: one utilizes wavelet-based denoising for efficient suppression of input image noise, the other a single correction function in the form of a 2-D hypercosine function.
Abstract: Vignetting is a position-dependent light intensity falloff commonly found for commercial digital cameras. An anti-vignetting method is concerned with the estimation of a correction factor at each pixel position. In this paper, we propose two anti-vignetting methods which effectively estimate the distribution of correction factors. The first method utilizes wavelet-based denoising for efficient suppression of input image noise. Decimation of the smoothed distribution of correction factors is then carried out. The second method is more concerned with the appropriateness for embedded digital imaging applications. We approximate the distribution of correction factors by a single correction function, which is in the form of a 2-D hypercosine function. Only five parameters are needed to describe an underlying input intensity distribution and are estimated by nonlinear model fitting against measured input illumination data. We show the performance of the proposed methods by experimental results using synthetic and real images.

01 Apr 2004
TL;DR: In this article, the authors investigate the lower bound of the noise generated by an aircraft modified with a virtual retrofit capable of eliminating all noise associated with the high lift system and landing gear.
Abstract: The NASA goal of reducing external aircraft noise by 10 dB in the near-term presents the acoustics community with an enormous challenge. This report identifies technologies with the greatest potential to reduce airframe noise. Acoustic and aerodynamic effects will be discussed, along with the likelihood of industry accepting and implementing the different technologies. We investigate the lower bound, defined as noise generated by an aircraft modified with a virtual retrofit capable of eliminating all noise associated with the high lift system and landing gear. However, the airframe noise of an aircraft in this 'clean' configuration would only be about 8 dB quieter on approach than current civil transports. To achieve the NASA goal of 10 dB noise reduction will require that additional noise sources be addressed. Research shows that energy in the turbulent boundary layer of a wing is scattered as it crosses the trailing edge. Noise generated by scattering is the dominant noise mechanism on an aircraft flying in the clean configuration. Eliminating scattering would require changes to much of the aircraft, and practical reduction devices have yet to receive serious attention. Evidence suggests that to meet NASA goals in civil aviation noise reduction, we need to employ emerging technologies and improve landing procedures; modified landing patterns and zoning restrictions could help alleviate aircraft noise in communities close to airports.
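The dB bookkeeping behind the "clean configuration only buys about 8 dB" argument follows from incoherent power addition: once the gear and high-lift sources are removed, the remaining trailing-edge source caps the total reduction. The source levels below are hypothetical, chosen only to illustrate the arithmetic.

```python
import math

def db_sum(levels_db):
    # incoherent sum of source levels in dB (powers add, not dB values)
    return 10 * math.log10(sum(10 ** (l / 10) for l in levels_db))

# hypothetical component levels (illustrative, not from the report)
gear, high_lift, trailing_edge = 95.0, 94.0, 92.0
airframe = db_sum([gear, high_lift, trailing_edge])   # current configuration
clean = db_sum([trailing_edge])                       # gear + high-lift removed
reduction = airframe - clean                          # capped by remaining source
```

However loud the removed sources were, the total can never drop below the level of the loudest remaining source, which is why trailing-edge scattering must be addressed to reach the 10 dB goal.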

Journal ArticleDOI
TL;DR: The resolution of a comparator is determined by the dc input offset and the ac noise; this article proposes an offset compensation technique that can simultaneously minimize input-referred supply noise.
Abstract: The resolution of a comparator is determined by the dc input offset and the ac noise. For mixed-mode applications with significant digital switching, input-referred supply noise can be a significant source of error. This paper proposes an offset compensation technique that can simultaneously minimize input-referred supply noise. Demonstrated with digital offset compensation, this scheme reduces input-referred supply noise to a small fraction (13%) of one least significant bit (LSB) digital offset. In addition, the same analysis can be applied to analog offset compensation.

Journal ArticleDOI
TL;DR: This paper proposes a new method of lidar signal acquisition based on the discrete wavelet transform (DWT), which can significantly improve the SNR so that the effective measurement range of the lidar is increased.

Patent
09 Jan 2004
TL;DR: In this article, an audio-visual speech activity recognition system (200b/c) of a video-enabled telecommunication device which runs a real-time lip tracking application that can advantageously be used for a near-speaker detection algorithm in an environment where a speaker's voice is interfered with by a statistically distributed background noise (n'(t)) including both environmental noise and surrounding persons' voices.
Abstract: The present invention generally relates to the field of noise reduction systems which are equipped with an audio-visual user interface, in particular to an audio-visual speech activity recognition system (200b/c) of a video-enabled telecommunication device which runs a real-time lip tracking application that can advantageously be used for a near-speaker detection algorithm in an environment where a speaker's voice is interfered with by a statistically distributed background noise (n'(t)) including both environmental noise (n(t)) and surrounding persons' voices.

Proceedings ArticleDOI
15 Apr 2004
TL;DR: A novel filtering technique, namely the trilateral filter, is proposed, which can achieve edge-preserving smoothing with a narrow spatial window in only a few iterations, provides greater noise reduction than bilateral filtering, and smooths biomedical images without over-smoothing ridges or shifting the edge locations.
Abstract: Filtering is a core operation in low level computer vision. It is a preliminary process in many biomedical image processing applications. Bilateral filtering has been applied to smooth biomedical images while preserving the edges. However, to avoid oversmoothing structures of sizes comparable to the image resolutions, a narrow spatial window has to be used. This leads to the necessity of performing more iterations in the filtering process. In this paper, we propose a novel filtering technique, namely the trilateral filter, which can achieve edge-preserving smoothing with a narrow spatial window in only a few iterations. The experimental results have shown that our novel method provides greater noise reduction than bilateral filtering and smooths biomedical images without over-smoothing ridges and shifting the edge locations, as compared to other noise reduction methods.
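For reference, the bilateral baseline that the trilateral filter extends can be sketched as follows; the trilateral extension itself is not reproduced, and all parameters are illustrative.

```python
import numpy as np

def bilateral(img, sigma_s=1.0, sigma_r=0.15, radius=2):
    # weights combine spatial closeness and intensity similarity, so flat
    # regions are averaged while pixels across an edge get near-zero weight
    H, W = img.shape
    out = np.zeros_like(img)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs ** 2 + ys ** 2) / (2 * sigma_s ** 2))
    p = np.pad(img, radius, mode="reflect")
    for i in range(H):
        for j in range(W):
            win = p[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            rng_w = np.exp(-(win - img[i, j]) ** 2 / (2 * sigma_r ** 2))
            w = spatial * rng_w
            out[i, j] = np.sum(w * win) / np.sum(w)
    return out

rng = np.random.default_rng(4)
clean = np.zeros((24, 24))
clean[:, 12:] = 1.0
noisy = clean + rng.normal(0.0, 0.05, clean.shape)
den = bilateral(noisy)
```

The narrow-window/many-iterations trade-off discussed in the abstract is visible here: with `radius=2` a single pass smooths only locally, so repeated application would be needed for stronger noise, which is exactly what the trilateral filter is designed to avoid.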

Proceedings ArticleDOI
17 May 2004
TL;DR: A new method, called the two-step noise reduction (TSNR) technique, is proposed, which solves the problem of single microphone speech enhancement in noisy environments while maintaining the benefits of the decision-directed approach.
Abstract: The paper addresses the problem of single microphone speech enhancement in noisy environments. Common short-time noise reduction techniques proposed in the art are expressed as a spectral gain depending on the a priori SNR. In the well-known decision-directed approach, the a priori SNR depends on the speech spectrum estimate in the previous frame. As a consequence, the gain function matches the previous frame rather than the current one, which degrades the noise reduction performance. We propose a new method, called the two-step noise reduction (TSNR) technique, which solves this problem while maintaining the benefits of the decision-directed approach. This method is analyzed, and results in voice communication and speech recognition contexts are given.
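The decision-directed lag and the two-step fix can be sketched on a single frequency bin, assuming Wiener gains and illustrative constants (the paper's exact gain rule may differ):

```python
import numpy as np

def dd_gains(Y2, noise_psd, alpha=0.98, floor=1e-3):
    # step 1: decision-directed a priori SNR (lags one frame);
    # step 2 (TSNR): re-estimate the SNR with the step-1 gain applied to
    # the CURRENT frame's spectrum, removing the one-frame lag
    G1 = np.ones(len(Y2))
    G2 = np.ones(len(Y2))
    prev_S2 = 0.0                          # previous frame's speech power estimate
    for k in range(len(Y2)):
        gamma = Y2[k] / noise_psd          # a posteriori SNR
        prio = alpha * prev_S2 / noise_psd + (1 - alpha) * max(gamma - 1.0, 0.0)
        prio = max(prio, floor)
        G1[k] = prio / (1.0 + prio)        # Wiener gain, step 1
        prio2 = max((G1[k] ** 2) * Y2[k] / noise_psd, floor)
        G2[k] = prio2 / (1.0 + prio2)      # Wiener gain, step 2
        prev_S2 = (G1[k] ** 2) * Y2[k]
    return G1, G2

# single bin: 20 noise-only frames, then a sudden speech onset
Y2 = np.array([1.0] * 20 + [100.0] * 5)
G1, G2 = dd_gains(Y2, noise_psd=1.0)
```

At the onset frame the step-1 gain is still dragged down by the previous (noise-only) frame, while the step-2 gain already tracks the current frame, which is the behavior the TSNR technique exploits.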

Journal ArticleDOI
TL;DR: Three novel and alternative methods for estimating the noise standard deviation are proposed in this work and compared with the MAD method, which assumes specific characteristics of the noise-contaminated image component.
Abstract: The estimation of the standard deviation of noise contaminating an image is a fundamental step in wavelet-based noise reduction techniques. The method widely used is based on the mean absolute deviation (MAD). This model-based method assumes specific characteristics of the noise-contaminated image component. Three novel and alternative methods for estimating the noise standard deviation are proposed in this work and compared with the MAD method. Two of these methods rely on a preliminary training stage in order to extract parameters which are then used in the application stage. The sets used for training and testing, 13 and 5 images, respectively, are fully disjoint. The third method assumes specific statistical distributions for image and noise components. Results showed the prevalence of the training-based methods for the images and the range of noise levels considered.
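The MAD baseline that the three new methods are compared against is compact enough to state directly. A one-level Haar HH subband is used here; the factor of 2 compensates for this particular transform's normalization.

```python
import numpy as np

def mad_sigma(img):
    # finest-scale diagonal (HH) Haar coefficients of pure noise have
    # std sigma/2 with this normalization, hence the factor of 2;
    # 0.6745 converts a Gaussian MAD into a standard deviation
    d = (img[0::2] - img[1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return 2.0 * np.median(np.abs(hh)) / 0.6745

rng = np.random.default_rng(5)
sigma_true = 0.2
flat = rng.normal(0.0, sigma_true, (128, 128))
step = np.zeros((128, 128))
step[:, 64:] = 1.0                      # straight edges barely touch HH
sigma_flat = mad_sigma(flat)
sigma_step = mad_sigma(step + rng.normal(0.0, sigma_true, (128, 128)))
```

The median makes the estimate robust to the few large coefficients produced by real image structure, which is why the MAD method works even on non-flat images; the paper's point is that its model assumptions can still be bettered by training-based estimators.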

Proceedings ArticleDOI
07 Jun 2004
TL;DR: This paper presents a scalable soft spot analysis methodology and provides guidelines to reduction of severe nano-meter noise effects caused by aggressive design in the pre-manufacturing phase, and guidelines to selective insertion of on-line protection schemes to achieve higher robustness.
Abstract: Circuits using nano-meter technologies are becoming increasingly vulnerable to signal interference from multiple noise sources as well as radiation-induced soft errors. One way to ensure reliable functioning of chips is to be able to analyze and identify the spots in the circuit which are susceptible to such effects (called "soft spots" in this paper), and to make sure such soft spots are "hardened" so as to resist multiple noise effects and soft errors. In this paper, we present a scalable soft spot analysis methodology to study the vulnerability of digital ICs exposed to nano-meter noise and transient soft errors. First, we define "softness" as an important characteristic to gauge system vulnerability. Then several key factors affecting softness are examined. Finally an efficient Automatic Soft Spot Analyzer (ASSA) is developed to obtain the softness distribution which reflects the unbalanced noise-tolerant capability of different regions in a design. The proposed methodology provides guidelines to reduction of severe nano-meter noise effects caused by aggressive design in the pre-manufacturing phase, and guidelines to selective insertion of on-line protection schemes to achieve higher robustness. The quality of the proposed soft-spot analysis technique is validated by HSPICE simulation, and its scalability is demonstrated on a commercial embedded processor.

Journal ArticleDOI
TL;DR: In this paper, principal component analysis (PCA) was applied to reduce the random noise present in the hyperspectral infrared observations, and the results obtained depend on the variability of selected sets of observations and on specific instrument characteristics such as spectral resolution and noise statistics.
Abstract: This paper describes the application of principal component analysis to reduce the random noise present in the hyperspectral infrared observations. Within a set of spectral observations the number of components needed to characterize the atmosphere is far less than the number of wavelengths observed, typically by a factor between 50 and 70. The higher-order components, which mainly serve to characterize noise, can be eliminated along with the noise that they characterize. The results obtained depend on the variability of the selected sets of observations and on specific instrument characteristics such as spectral resolution and noise statistics. For a set of 10,000 Fourier transform spectrometer (FTS) simulated spectra, whose standard deviation is about 10% of the mean, we were able to obtain noise reduction factors between 5 and 8. Results obtained from real FTS, with standard deviation of about 10% of the mean, indicated practical noise reduction factors between 5 and 6. To avoid loss of information in the presence of highly deviant observations, it is necessary to use a conservative number of principal components higher than the optimum for maximum noise reduction. However, even then, noise reduction factors of 4 are still achievable.
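The projection-truncation step can be sketched with an SVD on synthetic spectra; the smooth sine "modes" below are stand-ins for real atmospheric structure, and retaining more components than strictly needed mirrors the conservative choice recommended above.

```python
import numpy as np

def pca_denoise(X, k):
    # project onto the leading k principal components and reconstruct;
    # the discarded higher-order components mostly carry random noise
    mu = X.mean(axis=0)
    Xc = X - mu
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    return mu + (Xc @ Vt[:k].T) @ Vt[:k]

rng = np.random.default_rng(6)
n_obs, n_chan, k_true = 500, 200, 5
grid = np.linspace(0.0, np.pi, n_chan)
basis = np.array([np.sin((m + 1) * grid) for m in range(k_true)])
clean = rng.normal(0.0, 1.0, (n_obs, k_true)) @ basis   # low-rank "atmosphere"
noisy = clean + rng.normal(0.0, 0.5, clean.shape)
den = pca_denoise(noisy, k=10)          # conservative: more PCs than k_true
```

Keeping k of n_chan dimensions retains roughly k/n_chan of the white-noise power, which is the mechanism behind the noise reduction factors quoted in the abstract.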

Journal ArticleDOI
TL;DR: In this article, high-performance passivated AlGaN/GaN high electron-mobility transistors (HEMTs) with 0.25-μm gate length for low noise applications were reported.
Abstract: This letter reports high-performance passivated AlGaN/GaN high electron-mobility transistors (HEMTs) with 0.25-μm gate length for low noise applications. The devices exhibited a minimum noise figure (NFmin) of 0.98 dB and an associated gain (Ga) of 8.97 dB at 18 GHz. The noise resistance (Rn), the measure of noise sensitivity to source mismatch, is 31 Ω at 18 GHz, which is relatively low and suitable for broad-band low noise amplifiers. The noise modeling analysis shows that the minimum noise figure of the GaN HEMT can be reduced further by reducing noise contributions from parasitics. These results demonstrate the viability of AlGaN/GaN HEMTs for low-noise as well as high power amplifiers.

Journal ArticleDOI
TL;DR: The capability and real-time processing features of a robust filter for the removal of impulsive noise in image processing applications and extensive simulation results have demonstrated that the proposed filter consistently outperforms other filters by balancing the tradeoff between noise suppression and fine detail preservation.
Abstract: This paper presents the capability and real-time processing features of a robust filter for the removal of impulsive noise in image processing applications. The real-time implementation of image filtering was realized on the DSP TMS320C6701. Extensive simulation results with different images have demonstrated that the proposed filter consistently outperforms other filters by balancing the tradeoff between noise suppression and fine detail preservation. We simulated impulse-corrupted video sequences to demonstrate that the proposed method could potentially provide a real-time solution for quality video transmission.

Journal ArticleDOI
TL;DR: The Huber penalty function gives accurate and low noise images, but it may be difficult to determine the parameters.
Abstract: Iterative image reconstruction algorithms have the potential to produce low noise images. Early stopping of the iteration process is problematic because some features of the image may converge slowly. On the other hand, there may be noise build-up with increased number of iterations. Therefore, we examined the stabilizing effect of using two different prior functions as well as image representation by blobs so that the number of iterations could be increased without noise build-up. Reconstruction was performed of simulated phantoms and of real data acquired by positron emission tomography. Image quality measures were calculated for images reconstructed with or without priors. Both priors stabilized the iteration process. The first prior based on the Huber function reduced the noise without significant loss of contrast recovery of small spots, but the drawback of the method was the difficulty in finding optimal values of two free parameters. The second method based on a median root prior has only one Bayesian parameter which was easy to set, but it should be taken into account that the image resolution while using that prior has to be chosen sufficiently high not to cause the complete removal of small spots. In conclusion, the Huber penalty function gives accurate and low noise images, but it may be difficult to determine the parameters. The median root prior method is not quite as accurate but may be used if image resolution is increased.
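The Huber penalty at the heart of the first prior is quadratic near zero and linear in the tails, which is why it smooths noise without over-penalizing true edges; the transition point delta is one of the two free parameters the authors found difficult to tune.

```python
import numpy as np

def huber(t, delta):
    # quadratic for |t| <= delta (smooths small neighbor differences),
    # linear beyond (penalizes large differences, i.e. edges, less than
    # a quadratic/Gaussian prior would)
    a = np.abs(t)
    return np.where(a <= delta, 0.5 * t ** 2, delta * a - 0.5 * delta ** 2)
```

The two branches meet with matching value and slope at |t| = delta, so the penalty (and hence the MAP objective it regularizes) stays smooth; a median root prior, by contrast, needs only its single Bayesian weight.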

Journal ArticleDOI
TL;DR: A comparison among methods based on simultaneous representations of the ultrasonic traces in the time and frequency/scale domains: a discrete wavelet processor with decomposition-level-dependent threshold selection, and a method that combines the Wigner-Ville transform with filtering in the time-frequency domain.

Proceedings ArticleDOI
27 Dec 2004
TL;DR: Two combinations of time-invariant wavelet and curvelet transforms will be used for denoising of SAR images: the combined filtering algorithm (CFA) and the adaptive combined method (ACM), which uses the wavelet transform to denoise homogeneous areas and the curvelet transform to denoise areas with edges.
Abstract: Synthetic aperture radar (SAR) images are corrupted by speckle noise due to random interference of electromagnetic waves. The speckle degrades the quality of the images and makes interpretations, analysis and classifications of SAR images harder. Therefore, some speckle reduction is necessary prior to the processing of SAR images. The speckle noise can be modeled as multiplicative i.i.d. Rayleigh noise. Logarithmic transformation of SAR images converts the multiplicative noise model to additive noise. In this paper, two combinations of time-invariant wavelet and curvelet transforms will be used for denoising of SAR images. The first one is called the combined filtering algorithm (CFA). This method is based on a constrained optimization problem, both in the wavelet and curvelet domains. The second method is called the adaptive combined method (ACM), which uses the wavelet transform to denoise homogeneous areas and the curvelet transform to denoise areas with edges.

Journal ArticleDOI
TL;DR: A comparison of different methods of noise reduction is performed in order to find out a method best suited for reducing noise in gel images, using the BayesThresh method of threshold value determination.
Abstract: Proteomics produces a huge amount of two-dimensional gel electrophoresis images. Their analysis can yield a wealth of information concerning proteins responsible for different diseases or new, unidentified proteins. However, an automatic analysis of such images requires an efficient tool for reducing noise in images. This allows proper detection of the spots' borders, which is important in protein quantification (as the spots' areas are used to determine the amounts of protein present in an analyzed mixture). Also, in the feature-based matching methods, the detected features (spots) can be described by additional attributes, such as area or shape. In our study, a comparison of different noise reduction methods is performed in order to find the method best suited for reducing noise in gel images. The compared methods include classical linear filters (mean and Gaussian filtering), a nonlinear filter (median filtering), and methods better suited to nonstationary signals, such as spatially adaptive linear filtering and filtering in the wavelet domain. The best results are obtained by filtering gel images in the wavelet domain, using the BayesThresh method of threshold value determination.
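Bayesian wavelet thresholding of the kind the abstract favours typically estimates the noise level from a detail subband and sets a subband-adaptive threshold of the form sigma_n^2 / sigma_x. A generic sketch of that idea (a BayesShrink-style rule with a MAD noise estimate, not necessarily the exact BayesThresh estimator used in the paper):

```python
import numpy as np

def bayes_shrink_threshold(detail_coeffs):
    """Subband-adaptive threshold T = sigma_n^2 / sigma_x, with
    the noise level sigma_n estimated by the median-absolute-
    deviation rule on the detail coefficients. A sketch of the
    idea behind BayesThresh-type methods, not the paper's
    exact estimator."""
    d = np.asarray(detail_coeffs, dtype=float).ravel()
    sigma_n = np.median(np.abs(d)) / 0.6745          # MAD noise estimate
    sigma_x2 = max(np.mean(d**2) - sigma_n**2, 0.0)  # signal variance
    if sigma_x2 == 0.0:
        return np.abs(d).max()   # noise dominates: kill the subband
    return sigma_n**2 / np.sqrt(sigma_x2)

def soft_threshold(coeffs, t):
    """Soft shrinkage: pull coefficient magnitudes toward zero by t."""
    c = np.asarray(coeffs, dtype=float)
    return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)
```

In practice the threshold is computed per detail subband of a 2-D wavelet decomposition and applied with `soft_threshold` before inverting the transform.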

Journal ArticleDOI
TL;DR: Through the creation of a frequency-domain linearized parametric model for phase noise, an ICI reduction scheme to deal with phase noise is generated and it is shown that the algorithm can significantly reduce the symbol-error rate (SER) floor, caused by the residual phase noise after CPE correction, while sacrificing an acceptable transmission bandwidth.
Abstract: In Orthogonal Frequency Division Multiplexing (OFDM) communication systems, local oscillator phase noise introduces two effects: common phase error (CPE) and inter-carrier interference (ICI). Correcting the CPE alone is not always sufficient, especially at high data rates. Through a frequency-domain linearized parametric model for phase noise, we derive an ICI reduction scheme. The effects of the transmitter high-power-amplifier (HPA) nonlinearity on the phase noise compensation are also investigated. The algorithm's performance over an AWGN channel is presented for the DVB-T 2k and 8k modes. Simulation results show that the algorithm can significantly reduce the symbol-error rate (SER) floor caused by the residual phase noise after CPE correction, while sacrificing an acceptable amount of transmission bandwidth.
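The CPE correction that the paper's ICI scheme builds on is simple to state: phase noise rotates every subcarrier of one OFDM symbol by the same angle, so that angle can be estimated from known pilot subcarriers and removed. A minimal sketch (pilot layout and names are illustrative, not the DVB-T specification, and the residual ICI is untouched):

```python
import numpy as np

def correct_cpe(rx_symbols, pilot_idx, pilot_ref):
    """Estimate and remove the common phase error in one OFDM
    symbol. Since the CPE rotates all subcarriers equally, it is
    estimated as the angle of sum_k Y_k * conj(P_k) over the
    pilot positions; ICI remains as residual noise."""
    y = np.asarray(rx_symbols, dtype=complex)
    phi = np.angle(np.sum(y[pilot_idx] * np.conj(pilot_ref)))
    return y * np.exp(-1j * phi), phi
```

A quick check: rotate a symbol vector by a known angle, then verify the estimator recovers it and the data is restored.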

Proceedings ArticleDOI
27 Feb 2004
TL;DR: In this article, a sequential wavelet domain and temporal filtering scheme with jointly optimized parameters is proposed, which results in high-quality video denoising over a large range of noise levels.
Abstract: We develop a sequential wavelet-domain and temporal filtering scheme, with jointly optimized parameters, which results in high-quality video denoising over a large range of noise levels. In this scheme, spatial filtering is performed by a spatially adaptive Bayesian wavelet shrinkage in a redundant wavelet representation. In the next filtering stage, a motion detector controls selective, recursive averaging of pixel intensities over time. The results demonstrate that the proposed filter outperforms recent single-resolution methods as well as some recent motion-compensated wavelet-based video filters. We also analyze important practical issues for possible industrial applications. In particular, we investigate the performance degradations that result from making the wavelet-domain filtering part less complex, by removing the redundancy of the representation and/or by replacing a sophisticated spatially adaptive shrinkage method with soft-thresholding.
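The temporal stage described above, a motion detector gating recursive averaging, can be sketched in a few lines. The threshold and blend weight below (`motion_thresh`, `alpha`) are hypothetical parameters, and the spatial wavelet-shrinkage stage is assumed to have already run on each frame:

```python
import numpy as np

def temporal_filter(frames, motion_thresh=10.0, alpha=0.5):
    """Selective recursive temporal averaging: each pixel blends
    the current frame with the running average unless a simple
    frame-difference motion detector fires, in which case the
    average is reset to the current frame to avoid ghosting."""
    frames = [np.asarray(f, dtype=float) for f in frames]
    avg = frames[0].copy()
    out = [avg.copy()]
    for f in frames[1:]:
        moving = np.abs(f - avg) > motion_thresh     # crude motion detector
        blended = alpha * avg + (1.0 - alpha) * f    # recursive average
        avg = np.where(moving, f, blended)           # reset where motion
        out.append(avg.copy())
    return out
```

Static regions accumulate averaging (and thus noise reduction) over time, while pixels flagged as moving pass through unaveraged, which is the trade-off the motion detector controls.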