
Showing papers on "Noise measurement published in 2010"


Journal ArticleDOI
TL;DR: The analysis established the relationship between the attenuation rates of the movement artifact and the sEMG signal as a function of the filter band pass, and a Butterworth filter with a corner frequency of 20 Hz and a slope of 12 dB/oct is recommended for general use.

937 citations


Proceedings ArticleDOI
13 Jun 2010
TL;DR: The robustness and effectiveness of the proposed denoising algorithm on removing mixed noise, e.g. heavy Gaussian noise mixed with impulsive noise, is validated in experiments, and the proposed approach compares favorably against some existing video denoising algorithms.
Abstract: Most existing video denoising algorithms assume a single statistical model of image noise, e.g. additive Gaussian white noise, an assumption that is often violated in practice. In this paper, we present a new patch-based video denoising algorithm capable of removing serious mixed noise from video data. By grouping similar patches in both the spatial and temporal domains, we formulate the problem of removing mixed noise as a low-rank matrix completion problem, which leads to a denoising scheme without strong assumptions on the statistical properties of the noise. The resulting nuclear-norm-related minimization problem can be efficiently solved by many recently developed methods. The robustness and effectiveness of our proposed denoising algorithm on removing mixed noise, e.g. heavy Gaussian noise mixed with impulsive noise, is validated in the experiments, and our proposed approach compares favorably against some existing video denoising algorithms.
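To make the key step concrete, the sketch below applies singular value thresholding, the proximal operator of the nuclear norm, to a matrix whose rows are similar patches. This is only a hedged illustration of the low-rank idea the abstract describes, not the authors' algorithm; the patch grouping and the paper's actual completion solver are omitted, and all names (`svt_denoise`, `tau`) are hypothetical.

```python
import numpy as np

def svt_denoise(patch_matrix, tau):
    """Soft-threshold the singular values of a stack of similar patches.

    Grouped similar patches form an approximately low-rank matrix, so
    shrinking the singular values (the proximal step of nuclear-norm
    minimization) suppresses both Gaussian and impulsive perturbations.
    """
    U, s, Vt = np.linalg.svd(patch_matrix, full_matrices=False)
    s_shrunk = np.maximum(s - tau, 0.0)       # soft thresholding
    return (U * s_shrunk) @ Vt

# Toy check: a rank-1 "clean" patch group corrupted by mixed noise.
rng = np.random.default_rng(0)
clean = np.outer(rng.standard_normal(20), rng.standard_normal(64))
noisy = clean + 0.3 * rng.standard_normal(clean.shape)   # Gaussian part
impulses = rng.random(clean.shape) < 0.05                # impulsive part
noisy[impulses] = 3.0 * rng.standard_normal(impulses.sum())
denoised = svt_denoise(noisy, tau=5.0)
print(np.linalg.norm(denoised - clean) <= np.linalg.norm(noisy - clean))
```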

516 citations


Proceedings ArticleDOI
13 Jun 2010
TL;DR: In this article, a convex program, named Principal Component Pursuit (PCP), is proposed to recover the low-rank matrix from a high-dimensional data matrix despite both small entry-wise noise and gross sparse errors.
Abstract: In this paper, we study the problem of recovering a low-rank matrix (the principal components) from a high-dimensional data matrix despite both small entry-wise noise and gross sparse errors. Recently, it has been shown that a convex program, named Principal Component Pursuit (PCP), can recover the low-rank matrix when the data matrix is corrupted by gross sparse errors. We further prove that the solution to a related convex program (a relaxed PCP) gives an estimate of the low-rank matrix that is simultaneously stable to small entry-wise noise and robust to gross sparse errors. More precisely, our result shows that the proposed convex program recovers the low-rank matrix even though a positive fraction of its entries are arbitrarily corrupted, with an error bound proportional to the noise level. We present simulation results to support our result and demonstrate that the new convex program accurately recovers the principal components (the low-rank matrix) under quite broad conditions. To our knowledge, this is the first result that shows the classical Principal Component Analysis (PCA), optimal for small i.i.d. noise, can be made robust to gross sparse errors; or the first that shows the newly proposed PCP can be made stable to small entry-wise perturbations.
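A common way to solve PCP-type programs in practice, sketched below under the usual formulation (minimize ||L||_* + λ||S||_1 subject to L + S = M), is an augmented-Lagrangian iteration that alternates singular value thresholding for the low-rank part with entrywise soft thresholding for the sparse part. This is a generic textbook scheme, not the authors' implementation; the default λ and μ below are standard heuristics.

```python
import numpy as np

def shrink(X, tau):
    """Entrywise soft thresholding: the proximal operator of the l1 norm."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def svt(X, tau):
    """Singular value thresholding: the proximal operator of the nuclear norm."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def pcp(M, lam=None, mu=None, n_iter=200):
    """Principal Component Pursuit by a simple ALM/ADMM scheme:
    minimize ||L||_* + lam * ||S||_1  subject to  L + S = M."""
    if lam is None:
        lam = 1.0 / np.sqrt(max(M.shape))        # common default weight
    if mu is None:
        mu = 0.25 * M.size / np.abs(M).sum()     # common step heuristic
    L = np.zeros_like(M); S = np.zeros_like(M); Y = np.zeros_like(M)
    for _ in range(n_iter):
        L = svt(M - S + Y / mu, 1.0 / mu)        # low-rank update
        S = shrink(M - L + Y / mu, lam / mu)     # sparse-error update
        Y = Y + mu * (M - L - S)                 # dual (multiplier) ascent
    return L, S
```

With small entrywise noise added to M, the constraint is enforced only approximately, which mirrors the relaxed PCP the abstract analyzes.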

454 citations


Journal ArticleDOI
TL;DR: A no-reference metric Q is proposed which is based upon the singular value decomposition of the local image gradient matrix and provides a quantitative measure of true image content in the presence of noise and other disturbances; it is used to automatically and effectively set the parameters of two leading image denoising algorithms.
Abstract: Across the field of inverse problems in image and video processing, nearly all algorithms have various parameters which need to be set in order to yield good results. In practice, the choice of such parameters is usually made empirically, by trial and error, if no "ground-truth" reference is available. Some analytical methods such as cross-validation and Stein's unbiased risk estimate (SURE) have been successfully used to set such parameters. However, these methods tend to be strongly reliant on restrictive assumptions about the noise, and are also computationally heavy. In this paper, we propose a no-reference metric Q which is based upon the singular value decomposition of the local image gradient matrix and provides a quantitative measure of true image content (i.e., sharpness and contrast as manifested in visually salient geometric features such as edges) in the presence of noise and other disturbances. This measure 1) is easy to compute, 2) reacts reasonably to both blur and random noise, and 3) works well even when the noise is not Gaussian. The proposed measure is used to automatically and effectively set the parameters of two leading image denoising algorithms. Ample simulated and real data experiments support our claims. Furthermore, tests using the TID2008 database show that this measure correlates well with subjective quality evaluations for both blur and noise distortions.
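The abstract's core quantity can be sketched as follows: form the N-by-2 matrix of patch gradients, take its singular values s1 >= s2, and combine the dominant gradient strength s1 with the local coherence (s1 - s2)/(s1 + s2). This is a hedged reading of the construction, not the authors' exact definition of Q; `patch_q` is a hypothetical name.

```python
import numpy as np

def patch_q(patch):
    """Content measure from the SVD of the local gradient matrix.

    Sharp, clean edges give one dominant singular value (high coherence);
    noise spreads energy over both singular values and lowers the score.
    """
    gy, gx = np.gradient(patch.astype(float))
    G = np.column_stack([gx.ravel(), gy.ravel()])       # N x 2 gradient matrix
    s = np.linalg.svd(G, compute_uv=False)              # s[0] >= s[1] >= 0
    coherence = (s[0] - s[1]) / (s[0] + s[1] + 1e-12)   # anisotropy in [0, 1]
    return s[0] * coherence
```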

388 citations


Journal ArticleDOI
TL;DR: A novel two-stage noise adaptive fuzzy switching median (NAFSM) filter for salt-and-pepper noise detection and removal that employs fuzzy reasoning to handle uncertainty present in the extracted local information as introduced by noise.
Abstract: This letter presents a novel two-stage noise adaptive fuzzy switching median (NAFSM) filter for salt-and-pepper noise detection and removal. Initially, the detection stage utilizes the histogram of the corrupted image to identify noise pixels. These detected "noise pixels" are then subjected to the second-stage filtering action, while "noise-free pixels" are retained and left unprocessed. The NAFSM filtering mechanism then employs fuzzy reasoning to handle the uncertainty present in the extracted local information as introduced by noise. Simulation results indicate that the NAFSM is able to outperform some of the salt-and-pepper noise filters existing in the literature.
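The first (detection) stage can be sketched as below, under the common assumption that salt and pepper occupy the extreme intensities of the histogram. The actual NAFSM detector and its fuzzy second stage are more elaborate, so treat this as illustrative only; the function name is hypothetical.

```python
import numpy as np

def detect_salt_pepper(img):
    """Stage 1 of a NAFSM-style filter (sketch): read the salt and pepper
    intensities off the histogram of the corrupted image, then flag only
    pixels taking those values as noise candidates. img: uint8 array."""
    hist = np.bincount(img.ravel(), minlength=256)
    occupied = np.flatnonzero(hist)
    pepper, salt = occupied[0], occupied[-1]   # lowest/highest occupied bins
    noise_mask = (img == pepper) | (img == salt)
    return noise_mask                          # noise-free pixels stay untouched
```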

385 citations


Journal ArticleDOI
TL;DR: It is found that, using these methods, compressed sensing can be carried out even when the quantization is very coarse, e.g., 1 or 2 bits per measurement.
Abstract: We consider the problem of estimating a sparse signal from a set of quantized, Gaussian-noise-corrupted measurements, where each measurement corresponds to an interval of values. We give two methods for (approximately) solving this problem, each based on minimizing a differentiable convex function plus an ℓ1 regularization term. Using a first-order method developed by Hale et al., we demonstrate the performance of the methods through numerical simulation. We find that, using these methods, compressed sensing can be carried out even when the quantization is very coarse, e.g., 1 or 2 bits per measurement.
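The recipe the abstract describes, a differentiable convex data term plus ℓ1 regularization solved by a first-order method, can be sketched with plain ISTA (proximal gradient). The quadratic interval-violation loss below is an assumed stand-in for the paper's exact convex function, so this is illustrative scaffolding, not the authors' method.

```python
import numpy as np

def ista_quantized(A, lo, hi, lam=0.1, n_iter=500):
    """Recover a sparse x when each measurement (Ax)_i is only known to
    lie in an interval [lo_i, hi_i]: minimize a smooth interval-violation
    penalty plus lam * ||x||_1 by proximal gradient (ISTA)."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2    # 1 / Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        r = A @ x
        # Gradient of 0.5*||max(r-hi,0)||^2 + 0.5*||min(r-lo,0)||^2 w.r.t. r:
        grad_r = np.maximum(r - hi, 0.0) + np.minimum(r - lo, 0.0)
        x = x - step * (A.T @ grad_r)                               # gradient step
        x = np.sign(x) * np.maximum(np.abs(x) - step * lam, 0.0)    # l1 prox
    return x
```

For 1-bit measurements the intervals are half-lines, e.g. [0, +inf) or (-inf, 0], which this loss handles by setting `hi` or `lo` to infinity.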

345 citations


Journal ArticleDOI
TL;DR: This work formally shows that the minimum variance distortionless response (MVDR) filter is a particular case of the PMWF by properly formulating the constrained optimization problem of noise reduction, and proposes new simplified expressions for the PMWF, the MVDR, and the generalized sidelobe canceller that depend on the signals' statistics only.
Abstract: Several contributions have been made so far to develop optimal multichannel linear filtering approaches and show their ability to reduce the acoustic noise. However, there has not been a clear unifying theoretical analysis of their performance in terms of both noise reduction and speech distortion. To fill this gap, we analyze the frequency-domain (non-causal) multichannel linear filtering for noise reduction in this paper. For completeness, we consider the noise reduction constrained optimization problem that leads to the parameterized multichannel non-causal Wiener filter (PMWF). Our contribution is fivefold. First, we formally show that the minimum variance distortionless response (MVDR) filter is a particular case of the PMWF by properly formulating the constrained optimization problem of noise reduction. Second, we propose new simplified expressions for the PMWF, the MVDR, and the generalized sidelobe canceller (GSC) that depend on the signals' statistics only. In contrast to earlier works, these expressions are explicitly independent of the channel transfer function ratios. Third, we quantify the theoretical gains and losses in terms of speech distortion and noise reduction when using the PMWF by establishing new simplified closed-form expressions for three performance measures, namely, the signal distortion index, the noise reduction factor (originally proposed in the paper titled "New insights into the noise reduction Wiener filter" by J. Chen (IEEE Transactions on Audio, Speech, and Language Processing, vol. 15, no. 4, pp. 1218-1234, Jul. 2006) to analyze the single-channel time-domain Wiener filter), and the output signal-to-noise ratio (SNR). Fourth, we analyze the effects of coherent and incoherent noise in addition to the benefits of utilizing multiple microphones. Fifth, we propose a new proof for the a posteriori SNR improvement achieved by the PMWF. Finally, we provide some simulation results to corroborate the findings of this work.
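One statistics-only form of the kind the abstract refers to can be sketched per frequency bin as h = (Φ_vv⁻¹ Φ_xx) u_ref / (β + tr(Φ_vv⁻¹ Φ_xx)), where β = 0 yields the MVDR filter and β = 1 a standard multichannel Wiener filter. The code below is a hedged rendering of that form under the usual covariance definitions; the function name and interface are hypothetical.

```python
import numpy as np

def pmwf(phi_xx, phi_vv, beta=1.0, ref=0):
    """Parameterized multichannel Wiener filter at one frequency bin.

    phi_xx: clean-speech spatial covariance matrix (M x M, complex).
    phi_vv: noise spatial covariance matrix (M x M, complex).
    beta:   trade-off; 0 -> MVDR (distortionless), larger -> more noise
            reduction at the cost of speech distortion.
    ref:    index of the reference microphone.
    """
    B = np.linalg.solve(phi_vv, phi_xx)          # Phi_vv^{-1} Phi_xx
    u = np.zeros(phi_xx.shape[0]); u[ref] = 1.0  # reference-channel selector
    return (B @ u) / (beta + np.trace(B).real)   # filter weights h (M,)
```

Note that only the signal statistics (the two covariance matrices) enter, with no explicit channel transfer function ratios, which is the point the abstract emphasizes.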

317 citations


Proceedings ArticleDOI
14 Mar 2010
TL;DR: This work presents a low-complexity method for noise PSD estimation based on a minimum mean-squared error estimator of the noise magnitude-squared DFT coefficients, which improves segmental SNR and PESQ for non-stationary noise sources.
Abstract: Most speech enhancement algorithms depend heavily on the noise power spectral density (PSD). Because this quantity is unknown in practice, estimation from the noisy data is necessary. We present a low-complexity method for noise PSD estimation. The algorithm is based on a minimum mean-squared error estimator of the noise magnitude-squared DFT coefficients. Compared to minimum-statistics-based noise tracking, segmental SNR and PESQ are improved for non-stationary noise sources by 1 dB and 0.25 MOS points, respectively. Compared to recently published algorithms, similarly good noise tracking performance is obtained, but at a computational complexity that is on the order of a factor of 40 lower.
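Under a complex Gaussian model for the speech and noise DFT coefficients, the MMSE estimate of the noise magnitude-squared coefficient given the noisy one has a closed form, and smoothing it recursively gives a simple tracker. The sketch below is only in the spirit of the abstract: the paper's actual low-complexity algorithm, its bias compensation, and its parameter values are not reproduced, and the speech PSD `sigma_s2` is assumed to come from some separate estimator (e.g. decision-directed).

```python
import numpy as np

def update_noise_psd(noisy_psd_frame, sigma_n2, sigma_s2, alpha=0.9):
    """One recursive update of a noise PSD tracker (hedged sketch).

    noisy_psd_frame: |Y(k)|^2 per DFT bin for the current frame.
    sigma_n2, sigma_s2: current noise and speech PSD estimates per bin.
    """
    sigma_y2 = sigma_n2 + sigma_s2 + 1e-12
    # MMSE estimate of the noise magnitude-squared DFT coefficient,
    # E[|N|^2 | Y] for independent complex Gaussian speech and noise:
    mmse_n2 = (sigma_n2 / sigma_y2) ** 2 * noisy_psd_frame \
              + sigma_n2 * sigma_s2 / sigma_y2
    return alpha * sigma_n2 + (1.0 - alpha) * mmse_n2   # recursive smoothing
```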

269 citations


Proceedings ArticleDOI
13 Nov 2010
TL;DR: An in-depth analysis of the impact of system noise on large-scale parallel application performance in realistic settings shows that not only collective operations but also point-to-point communications influence the application's sensitivity to noise.
Abstract: This paper presents an in-depth analysis of the impact of system noise on large-scale parallel application performance in realistic settings. Our analytical model shows that not only collective operations but also point-to-point communications influence the application's sensitivity to noise. We present a simulation toolchain that injects noise delays from traces gathered on common large-scale architectures into a LogGPS simulation and allows new insights into the scaling of applications in noisy environments. We investigate collective operations with up to 1 million processes and three applications (Sweep3D, AMG, and POP) with up to 32,000 processes. We show that the scale at which noise becomes a bottleneck is system-specific and depends on the structure of the noise. Simulations with different network speeds show that a 10x faster network does not improve application scalability. We quantify noise and conclude that our tools can be utilized to tune the noise signatures of a specific system.

236 citations


Journal ArticleDOI
TL;DR: The main advantage of this object-based method is its robustness to background artefacts such as ghosting, and in the validation on real data the proposed method obtained very competitive results compared to the methods under study.

229 citations


Journal ArticleDOI
TL;DR: Using chaotic Lorenz data and calculating root-mean-square error, the Lyapunov exponent, and the correlation dimension, it is shown that the adaptive algorithm reduces noise in the chaotic Lorenz system more effectively than wavelet denoising with three different thresholding choices.
Abstract: Time series measured in the real world are often nonlinear, even chaotic. To effectively extract desired information from measured time series, it is important to preprocess data to reduce noise. In this Letter, we propose an adaptive denoising algorithm. Using chaotic Lorenz data and calculating root-mean-square error, the Lyapunov exponent, and the correlation dimension, we show that our adaptive algorithm reduces noise in the chaotic Lorenz system more effectively than wavelet denoising with three different thresholding choices. We further analyze an electroencephalogram (EEG) signal recorded during sleep apnea and show that the adaptive algorithm again reduces the electrocardiogram (ECG) and other types of noise contaminating the EEG more effectively than wavelet approaches.

Journal ArticleDOI
TL;DR: This paper derives information theoretic performance bounds to sensing and reconstruction of sparse phenomena from noisy projections by developing novel extensions to Fano's inequality to handle continuous domains and arbitrary distortions and shows that with constant SNR the number of measurements scales linearly with the rate-distortion function of the sparse phenomena.
Abstract: In this paper, we derive information theoretic performance bounds to sensing and reconstruction of sparse phenomena from noisy projections. We consider two settings: output noise models where the noise enters after the projection and input noise models where the noise enters before the projection. We consider two types of distortion for reconstruction: support errors and mean-squared errors. Our goal is to relate the number of measurements, m, and SNR, to signal sparsity, k, distortion level, d, and signal dimension, n. We consider support errors in a worst-case setting. We employ different variations of Fano's inequality to derive necessary conditions on the number of measurements and SNR required for exact reconstruction. To derive sufficient conditions, we develop new insights on max-likelihood analysis based on a novel superposition property. In particular, this property implies that small support errors are the dominant error events. Consequently, our ML analysis does not suffer the conservatism of the union bound and leads to a tighter analysis of max-likelihood. These results provide order-wise tight bounds. For output noise models, we show that asymptotically an SNR of Θ(log(n)) together with Θ(k log(n/k)) measurements is necessary and sufficient for exact support recovery. Furthermore, if a small fraction of support errors can be tolerated, a constant SNR turns out to be sufficient in the linear sparsity regime. In contrast, for input noise models, we show that support recovery fails if the number of measurements scales as o(n log(n)/SNR), implying poor compression performance for such cases. Motivated by the fact that the worst-case setup requires significantly high SNR and a substantial number of measurements for input and output noise models, we consider a Bayesian setup. To derive necessary conditions, we develop novel extensions to Fano's inequality to handle continuous domains and arbitrary distortions. We then develop a new max-likelihood analysis over the set of rate distortion quantization points to characterize tradeoffs between mean-squared distortion and the number of measurements using rate-distortion theory. We show that with constant SNR the number of measurements scales linearly with the rate-distortion function of the sparse phenomena.

Journal ArticleDOI
TL;DR: Four types of noise (Gaussian noise, salt & pepper noise, speckle noise, and Poisson noise) are used, and image de-noising is performed for each noise type with the mean filter, the median filter, and the Wiener filter.
Abstract: Image processing is basically the use of computer algorithms to perform processing on digital images. Digital image processing is a part of digital signal processing, and it has many significant advantages over analog image processing: it allows a much wider range of algorithms to be applied to the input data and can avoid problems such as the build-up of noise and signal distortion during processing. Wavelet transforms have become a very powerful tool for de-noising an image; the Wiener filter is one of the most popular methods. In this work, four types of noise (Gaussian noise, salt & pepper noise, speckle noise, and Poisson noise) are used, and image de-noising is performed for each noise type with the mean filter, the median filter, and the Wiener filter. The results are then compared across all noise types.
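A self-contained version of this kind of experiment is easy to set up with NumPy/SciPy; the sketch below generates the four noise types on a stand-in image and compares the three filters by MSE. Filter sizes and noise levels are arbitrary assumptions, not values from the paper.

```python
import numpy as np
from scipy import ndimage, signal

rng = np.random.default_rng(1)
img = rng.random((64, 64))                       # stand-in "clean" image in [0, 1)

# The four corruption models from the abstract:
gauss = img + 0.1 * rng.standard_normal(img.shape)
sp = img.copy()
flips = rng.random(img.shape)
sp[flips < 0.05] = 0.0                           # pepper
sp[flips > 0.95] = 1.0                           # salt
speckle = img * (1 + 0.2 * rng.standard_normal(img.shape))
poisson = rng.poisson(img * 50) / 50.0           # photon-count noise

for name, noisy in [("gaussian", gauss), ("salt&pepper", sp),
                    ("speckle", speckle), ("poisson", poisson)]:
    candidates = [("mean", ndimage.uniform_filter(noisy, 3)),
                  ("median", ndimage.median_filter(noisy, 3)),
                  ("wiener", signal.wiener(noisy, 3))]
    best = min(candidates, key=lambda t: np.mean((t[1] - img) ** 2))
    print(f"{name}: best filter by MSE = {best[0]}")
```

As one would expect, the median filter tends to win on salt & pepper noise while the Wiener filter does well on the additive Gaussian case.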

Journal ArticleDOI
TL;DR: It is shown that, as the overall intensity of the underlying signal increases, an upper bound on the reconstruction error decays at an appropriate rate, but that for a fixed signal intensity, the error bound actually grows with the number of measurements or sensors.
Abstract: This paper describes performance bounds for compressed sensing (CS) where the underlying sparse or compressible (sparsely approximable) signal is a vector of nonnegative intensities whose measurements are corrupted by Poisson noise. In this setting, standard CS techniques cannot be applied directly for several reasons. First, the usual signal-independent and/or bounded noise models do not apply to Poisson noise, which is nonadditive and signal-dependent. Second, the CS matrices typically considered are not feasible in real optical systems because they do not adhere to important constraints, such as nonnegativity and photon flux preservation. Third, the typical ℓ2-ℓ1 minimization leads to overfitting in the high-intensity regions and oversmoothing in the low-intensity areas. In this paper, we describe how a feasible positivity- and flux-preserving sensing matrix can be constructed, and then analyze the performance of a CS reconstruction approach for Poisson data that minimizes an objective function consisting of a negative Poisson log likelihood term and a penalty term which measures signal sparsity. We show that, as the overall intensity of the underlying signal increases, an upper bound on the reconstruction error decays at an appropriate rate (depending on the compressibility of the signal), but that for a fixed signal intensity, the error bound actually grows with the number of measurements or sensors. This surprising fact is both proved theoretically and justified based on physical intuition.
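The objective the abstract analyzes pairs a negative Poisson log-likelihood with a sparsity penalty. Below is a minimal sketch of such an objective, with an ℓ1 term standing in for the paper's penalty (the authors' actual penalty and their specific sensing-matrix construction are more elaborate):

```python
import numpy as np

def poisson_cs_objective(x, A, y, tau):
    """Penalized negative Poisson log-likelihood (sketch):
    sum_i [ (Ax)_i - y_i * log (Ax)_i ] + tau * ||x||_1.

    A is assumed nonnegative and flux-preserving (column sums <= 1),
    so that rate = A @ x is a valid vector of Poisson intensities.
    """
    rate = A @ x
    nll = np.sum(rate - y * np.log(rate + 1e-12))   # Poisson data-fit term
    return nll + tau * np.abs(x).sum()              # sparsity penalty
```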

Journal ArticleDOI
TL;DR: Under minimal assumptions on the connectivity and triangulation of each sensor in the network, DILAND converges almost surely (a.s.) to the exact sensor locations.
Abstract: We present an algorithm for distributed sensor localization with noisy distance measurements (DILAND) that extends and makes the DLRE more robust. DLRE is a distributed sensor localization algorithm in R^m (m ≥ 1) introduced in our previous work (IEEE Trans. Signal Process., vol. 57, no. 5, pp. 2000-2016, May 2009). DILAND operates when: 1) the communication among the sensors is noisy; 2) the communication links in the network may fail with a nonzero probability; and 3) the measurements performed to compute distances among the sensors are corrupted with noise. The sensors (which do not know their locations) lie in the convex hull of at least m + 1 anchors (nodes that know their own locations). Under minimal assumptions on the connectivity and triangulation of each sensor in the network, we show that, under the broad random phenomena described above, DILAND converges almost surely (a.s.) to the exact sensor locations.

Journal ArticleDOI
TL;DR: A mutual incoherence condition which was previously used for exact recovery in the noiseless case is shown to be sufficient for stable recovery in the noisy case, and an oracle inequality is derived under the mutual incoherence condition in the case of Gaussian noise.
Abstract: This article considers sparse signal recovery in the presence of noise. A mutual incoherence condition which was previously used for exact recovery in the noiseless case is shown to be sufficient for stable recovery in the noisy case. Furthermore, the condition is proved to be sharp. A specific counterexample is given. In addition, an oracle inequality is derived under the mutual incoherence condition in the case of Gaussian noise.

Journal ArticleDOI
TL;DR: By utilizing the time difference of arrival (TDOA) of a signal received at spatially separated sensors, a novel algorithm for source location is proposed that gives sufficient accuracy with lower computational cost, and is more robust to large measurement noise than the compared algorithms.
Abstract: Determining the location of a source from its emissions has gained considerable interest over the past few years. In this paper, by utilizing the time difference of arrival (TDOA) of a signal received at spatially separated sensors, a novel algorithm for source location is proposed. The algorithm is based on the constrained total least-squares (CTLS) technique, and an iterative scheme based on Newton's method is utilized to obtain a numerical solution. By using a perturbation analysis, the bias and covariance of the proposed CTLS algorithm are also derived. Simulation results show that the proposed CTLS algorithm gives sufficient accuracy at lower computational cost, and more importantly, it is more robust to large measurement noise than the compared algorithms.
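For orientation, a generic Newton-type (Gauss-Newton) iteration for TDOA localization looks like the sketch below. The paper's CTLS algorithm replaces the plain least-squares step with a constrained total least-squares formulation and adds a perturbation analysis, so this is only the scaffolding, with hypothetical names throughout.

```python
import numpy as np

def tdoa_gauss_newton(sensors, tdoa, c=343.0, x0=None, n_iter=20):
    """Iteratively refine a source position from TDOA measurements.

    sensors: (M, d) sensor positions, M >= d + 1.
    tdoa:    (M-1,) arrival-time differences relative to sensor 0.
    c:       propagation speed (m/s; 343 assumes sound in air).
    """
    if x0 is None:
        x0 = sensors.mean(axis=0) + 1.0     # crude, off-center initial guess
    x = x0.astype(float)
    for _ in range(n_iter):
        d = np.linalg.norm(sensors - x, axis=1)          # ranges to all sensors
        r = (d[1:] - d[0]) - c * tdoa                    # range-difference residuals
        # Jacobian of the range differences w.r.t. the source position:
        J = (x - sensors[1:]) / d[1:, None] - (x - sensors[0]) / d[0]
        x -= np.linalg.lstsq(J, r, rcond=None)[0]        # Newton-type update
    return x
```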

Journal ArticleDOI
TL;DR: This paper proposes a switching bilateral filter with a texture and noise detector for universal noise removal that achieves high peak signal-to-noise ratio and great image quality by efficiently removing both types of mixed noise, salt-and-pepper with uniform noise and salt-and-pepper with Gaussian noise.
Abstract: In this paper, we propose a switching bilateral filter (SBF) with a texture and noise detector for universal noise removal. Operation was carried out in two stages: detection followed by filtering. For detection, we propose the sorted quadrant median vector (SQMV) scheme, which includes important features such as edge or texture information. This information is utilized to allocate a reference median from SQMV, which is in turn compared with a current pixel to classify it as impulse noise, Gaussian noise, or noise-free. The SBF removes both Gaussian and impulse noise without adding another weighting function. The range filter inside the bilateral filter switches between the Gaussian and impulse modes depending upon the noise classification result. Simulation results show that our noise detector has a high noise detection rate as well as a high classification rate for salt-and-pepper, uniform impulse noise and mixed impulse noise. Unlike most other impulse noise filters, the proposed SBF achieves high peak signal-to-noise ratio and great image quality by efficiently removing both types of mixed noise, salt-and-pepper with uniform noise and salt-and-pepper with Gaussian noise. In addition, the computational complexity of SBF is significantly less than that of other mixed noise filters.

Journal ArticleDOI
TL;DR: In this article, a detailed analysis of the main indoor power-line noise terms, with a clear distinction between their constituent components, is presented, and a classification of the narrowband interferences according to their power spectral density and their statistical behavior is also given.
Abstract: Indoor broadband power-line noise is composed of three main terms: impulsive components, narrowband interferences, and background noise. Most impulsive components have a cyclostationary behavior. However, while some of them consist of impulses of considerable amplitude, width, and repetition rates of 50/100 Hz (in Europe), others have lower amplitude and shorter width but repetition rates of up to hundreds of kilohertz. Classical studies compute statistics of the impulse characteristics without taking into account these significant differences. This paper presents a detailed analysis of these noise terms with a clear distinction between their constituent terms. A classification of the narrowband interferences according to their power spectral density and their statistical behavior is also given. Finally, the instantaneous power spectral density of the background noise and its probability distribution are investigated. Some of the results presented in this paper are available for download from the web site http://www.plc.uma.es/index_eng.htm.

Patent
16 Feb 2010
TL;DR: In this article, an operational noise measurement is obtained by measuring a noise value outside of the bandwidth of a first device but within the bandwidth of a second, subsequent device, or alternatively by tuning an input band of the element to shift the input band partially or completely outside of the first device's bandwidth to create an open band.
Abstract: A method of monitoring an element in wireless communication system is provided. An operational noise measurement is obtained by measuring a noise value outside of a bandwidth of a first device, but within a bandwidth of a second, subsequent device. The operational noise measurement is alternatively obtained by tuning an input band of the element to shift the input band partially or completely outside of a bandwidth of a first device to create an open band or by suppressing an input of the antenna and measuring noise within the open bandwidth of the element of the wireless communication network. A stored parameter is retrieved and compared to the measured operational noise. Alternatively, a leakage signal of the element may be received at a signal receiver and compared to a reference. The reference is a function of components of the wireless communication system in a leakage path of the leakage signal.

Journal ArticleDOI
TL;DR: Two extensions of the binaural SDW-MWF are proposed to improve binaural cue preservation; they are able to preserve binaural cues for both the speech and noise sources, while still achieving significant noise reduction performance.
Abstract: Binaural hearing aids use microphone signals from both left and right hearing aid to generate an output signal for each ear. The microphone signals can be processed by a procedure based on speech distortion weighted multichannel Wiener filtering (SDW-MWF) to achieve significant noise reduction in a speech + noise scenario. In binaural procedures, it is also desirable to preserve binaural cues, in particular the interaural time difference (ITD) and interaural level difference (ILD), which are used to localize sounds. It has been shown in previous work that the binaural SDW-MWF procedure only preserves these binaural cues for the desired speech source, but distorts the noise binaural cues. Two extensions of the binaural SDW-MWF have therefore been proposed to improve the binaural cue preservation, namely the MWF with partial noise estimation (MWF-η) and MWF with interaural transfer function extension (MWF-ITF). In this paper, the binaural cue preservation of these extensions is analyzed theoretically and tested based on objective performance measures. Both extensions are able to preserve binaural cues for the speech and noise sources, while still achieving significant noise reduction performance.

Journal ArticleDOI
TL;DR: This paper proposes a method to estimate the reconstruction error directly from the samples themselves, for every candidate in this sequence of candidate reconstructions, which provides a way to obtain run-time guarantees for recovery methods that otherwise lack a priori performance bounds.
Abstract: Compressed sensing allows perfect recovery of sparse signals (or signals sparse in some basis) using only a small number of random measurements. Existing results in compressed sensing literature have focused on characterizing the achievable performance by bounding the number of samples required for a given level of signal sparsity. However, using these bounds to minimize the number of samples requires a priori knowledge of the sparsity of the unknown signal, or the decay structure for near-sparse signals. Furthermore, there are some popular recovery methods for which no such bounds are known. In this paper, we investigate an alternative scenario where observations are available in sequence. For any recovery method, this means that there is now a sequence of candidate reconstructions. We propose a method to estimate the reconstruction error directly from the samples themselves, for every candidate in this sequence. This estimate is universal in the sense that it is based only on the measurement ensemble, and not on the recovery method or any assumed level of sparsity of the unknown signal. With these estimates, one can now stop observations as soon as there is reasonable certainty of either exact or sufficiently accurate reconstruction. They also provide a way to obtain "run-time" guarantees for recovery methods that otherwise lack a priori performance bounds. We investigate both continuous (e.g., Gaussian) and discrete (e.g., Bernoulli) random measurement ensembles, both for exactly sparse and general near-sparse signals, and with both noisy and noiseless measurements.

Journal ArticleDOI
TL;DR: This paper proposes a robust nonlinear measurement operator based on the weighted myriad estimator, employs a Lorentzian norm constraint on the residual error to recover sparse signals from noisy measurements, and demonstrates that the proposed methods significantly outperform commonly employed compressed sensing sampling and reconstruction techniques in impulsive environments.
Abstract: Recent results in compressed sensing show that a sparse or compressible signal can be reconstructed from a few incoherent measurements. Since noise is always present in practical data acquisition systems, sensing and reconstruction methods are developed assuming a Gaussian (light-tailed) model for the corrupting noise. However, when the underlying signal and/or the measurements are corrupted by impulsive noise, commonly employed linear sampling operators, coupled with current reconstruction algorithms, fail to recover a close approximation of the signal. In this paper, we propose robust methods for sampling and reconstructing sparse signals in the presence of impulsive noise. To address impulsive noise embedded in the underlying signal prior to the measurement process, we propose a robust nonlinear measurement operator based on the weighted myriad estimator. In addition, we introduce a geometric optimization problem based on L1 minimization employing a Lorentzian norm constraint on the residual error to recover sparse signals from noisy measurements. Analysis of the proposed methods shows that in impulsive environments, when the noise possesses infinite variance, we have a finite reconstruction error, and furthermore these methods yield successful reconstruction of the desired signal. Simulations demonstrate that the proposed methods significantly outperform commonly employed compressed sensing sampling and reconstruction techniques in impulsive environments, while providing comparable performance in less demanding, light-tailed environments.
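The Lorentzian norm that constrains the residual can be written down directly; unlike the ℓ2 norm it grows only logarithmically in the residual, which is what confers robustness to impulsive measurement noise. A minimal sketch (`gamma` is the scale parameter):

```python
import numpy as np

def lorentzian_norm(r, gamma=1.0):
    """Lorentzian (LL2) norm of a residual vector r:
    sum_i log(1 + (r_i / gamma)^2).

    A single huge residual adds only O(log) to the cost, so impulsive
    errors cannot dominate the fit the way they do under an l2 norm.
    """
    return np.sum(np.log1p((r / gamma) ** 2))
```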

Journal ArticleDOI
TL;DR: In this paper, a wideband common-gate (CG) LNA architecture was proposed to achieve broadband impedance matching, low noise, large gain, enhanced linearity, and wide bandwidth concurrently by employing an efficient and reliable dual negative feedback.
Abstract: This paper presents a wideband common-gate (CG) LNA architecture that overcomes the fundamental tradeoff between power and noise match without compromising its stability. The proposed architecture can achieve the minimum noise figure (NF) over the previously reported feedback amplifiers in a CG configuration. The proposed architecture achieves broadband impedance matching, low noise, large gain, enhanced linearity, and wide bandwidth concurrently by employing an efficient and reliable dual negative feedback. An amplifier prototype was realized in 0.18-μm CMOS, operates from 1.05 to 3.05 GHz, and dissipates 12.6 mW from a 1.8-V supply while occupying a 0.073-mm² active area. The LNA provides 16.9-dB maximum voltage gain, 2.57-dB minimum NF, better than -10-dB input matching, and a -0.7-dBm minimum IIP3 across the entire bandwidth.

Proceedings ArticleDOI
03 Dec 2010
TL;DR: An automatic setting is proposed to select the filtering parameters by minimizing the estimated risk (mean square error), using an estimator of the MSE for NL means with Poisson noise and Newton's method to find the optimal parameters in a few iterations.
Abstract: An extension of the non-local (NL) means is proposed for images damaged by Poisson noise. The proposed method is guided by the noisy image and a pre-filtered image and is adapted to the statistics of Poisson noise. The influence of both images can be tuned using two filtering parameters. We propose an automatic setting to select these parameters based on the minimization of the estimated risk (mean square error). This selection uses an estimator of the MSE for NL means with Poisson noise and Newton's method to find the optimal parameters in a few iterations.

Journal ArticleDOI
TL;DR: An accurate and closed-form solution for the position and velocity of a moving target based on the optimization of a cost function related to the scalar product matrix in the classical MDS framework is presented and achieves better performance than the spherical-interpolation method and the two-step weighted least squares approach.
Abstract: A new framework for positioning a moving target is introduced by utilizing time differences of arrival (TDOA) and frequency differences of arrival (FDOA) measurements collected using an array of passive sensors. It exploits multidimensional scaling (MDS) analysis, which has been developed for data analysis in fields such as physics, geography, and biology. In particular, we present an accurate and closed-form solution for the position and velocity of a moving target. Unlike most passive target localization methods, which focus on minimizing a loss function with respect to the measurement vector, the proposed method is based on the optimization of a cost function related to the scalar product matrix in the classical MDS framework. It is robust to large measurement noise. The bias and variance of the proposed estimator are also derived. Simulation results show that the proposed estimator achieves better performance than the spherical-interpolation (SI) method and the two-step weighted least squares (WLS) approach, and it attains the Cramer-Rao lower bound at a sufficiently high noise level before the threshold effect occurs. Moreover, for the proposed estimator the threshold effect, which is a result of the nonlinear nature of the localization problem, occurs markedly later as the measurement noise increases for a near-field target.

Journal ArticleDOI
TL;DR: This study provides an alternative performance measure, one that is natural and important in practice, for signal recovery in Compressive Sensing and other application areas exploiting signal sparsity, and offers surprising insights into sparse signal recovery.
Abstract: The performance of estimating the common support for jointly sparse signals based on their projections onto lower-dimensional space is analyzed. Support recovery is formulated as a multiple-hypothesis testing problem. Both upper and lower bounds on the probability of error are derived for general measurement matrices, by using the Chernoff bound and Fano's inequality, respectively. The upper bound shows that the performance is determined by a quantity measuring the measurement matrix incoherence, while the lower bound reveals the importance of the total measurement gain. The lower bound is applied to derive the minimal number of samples needed for accurate direction-of-arrival (DOA) estimation for a sparse representation based algorithm. When applied to Gaussian measurement ensembles, these bounds give necessary and sufficient conditions for a vanishing probability of error for majority realizations of the measurement matrix. Our results offer surprising insights into sparse signal recovery. For example, as far as support recovery is concerned, the well-known bound in Compressive Sensing with the Gaussian measurement matrix is generally not sufficient unless the noise level is low. Our study provides an alternative performance measure, one that is natural and important in practice, for signal recovery in Compressive Sensing and other application areas exploiting signal sparsity.

Proceedings ArticleDOI
13 Jun 2010
TL;DR: It is shown how the descriptors can be matched using recently developed, more advanced techniques to obtain better matching performance, and how, by combining the two descriptors, one obtains much better results than either of them considered separately.
Abstract: Feature-based methods have found increasing use in many applications such as object recognition, 3D reconstruction and mosaicing. In this paper, we focus on the problem of matching such features. While histogram-of-gradients type methods such as SIFT, GLOH, and Shape Context are currently popular, several papers have suggested using orders of pixels rather than raw intensities and shown improved results for some applications. The papers suggest two different techniques for doing so: (1) a histogram of relative orders in the patch and (2) a histogram of LBP codes. While these methods have shown good performance, they neglect the fact that the orders can be quite noisy in the presence of Gaussian noise. In this paper, we propose changes to these approaches to make them robust to Gaussian noise. We also show how the descriptors can be matched using recently developed, more advanced techniques to obtain better matching performance. Finally, we show that the two methods have complementary strengths and that by combining the two descriptors, one obtains much better results than either of them considered separately. The results are shown on the standard 2D Oxford and the 3D Caltech datasets.
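A plain histogram-of-LBP-codes descriptor, the second of the two order-based techniques mentioned, can be sketched as below; it also makes the noted fragility visible, since each bit is an order comparison that a small Gaussian perturbation can flip. This is the standard 8-neighbor LBP, not the paper's noise-robust modification, and `lbp_histogram` is a hypothetical name.

```python
import numpy as np

def lbp_histogram(patch):
    """Histogram of 8-neighbor local binary pattern (LBP) codes.

    Each interior pixel is encoded by thresholding its 8 neighbors
    against it, giving an 8-bit code; the descriptor is the histogram
    of codes over the patch.
    """
    p = patch.astype(float)
    center = p[1:-1, 1:-1]
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]   # 8 neighbors, clockwise
    code = np.zeros_like(center, dtype=np.int32)
    for bit, (dy, dx) in enumerate(shifts):
        neigh = p[1 + dy:p.shape[0] - 1 + dy, 1 + dx:p.shape[1] - 1 + dx]
        code |= (neigh >= center).astype(np.int32) << bit  # order comparison
    return np.bincount(code.ravel(), minlength=256)
```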

Patent
22 Apr 2010
TL;DR: In this patent, active noise cancellation is combined with spectrum modification of a reproduced audio signal to enhance intelligibility.
Abstract: Active noise cancellation is combined with spectrum modification of a reproduced audio signal to enhance intelligibility.

Journal ArticleDOI
TL;DR: An algorithm is developed for recognizing OFDM versus SCLD signals that obviates the need for commonly required signal preprocessing tasks, such as signal and noise power estimation and the recovery of symbol timing and carrier information.
Abstract: Previous studies on the cyclostationarity of orthogonal frequency division multiplexing (OFDM) and single carrier linearly digitally modulated (SCLD) signals assumed simplified signal and channel models or considered only second-order cyclostationarity. This paper presents new results concerning the cyclostationarity of these signals under more general conditions, including time dispersive channels, additive Gaussian noise, and carrier phase, frequency, and timing offsets. Analytical closed-form expressions are derived for time- and frequency-domain parameters of the cyclostationarity of OFDM and SCLD signals. In addition, a condition to eliminate aliasing in the cycle and spectral frequency domains is derived. Based on these results, an algorithm is developed for recognizing OFDM versus SCLD signals. This algorithm obviates the need for commonly required signal preprocessing tasks, such as signal and noise power estimation and the recovery of symbol timing and carrier information.
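The second-order statistic underlying such recognition is the cyclic autocorrelation; a minimal estimator is sketched below. For a cyclic-prefix OFDM signal it peaks at a delay equal to the useful symbol duration, at cycle frequencies tied to the symbol rate, which is the kind of signature the derived closed-form expressions characterize. This is the generic definition only, not the paper's algorithm, and the names are hypothetical.

```python
import numpy as np

def cyclic_autocorrelation(x, alpha, tau):
    """Estimate the cyclic autocorrelation R_x(alpha, tau) of a discrete
    signal x: the time average of x[t] * conj(x[t - tau]) * e^{-j2*pi*alpha*t}.

    alpha: cycle frequency in cycles/sample; tau: positive integer lag.
    A nonzero value at some (alpha, tau) reveals cyclostationarity.
    """
    t = np.arange(tau, len(x))
    prod = x[tau:] * np.conj(x[:len(x) - tau])
    return np.mean(prod * np.exp(-2j * np.pi * alpha * t))
```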