
Showing papers on "White noise published in 2008"


Journal ArticleDOI
TL;DR: In this article, the authors consider estimating a covariance matrix of p variables from n observations by either banding the sample covariance matrix or estimating a banded version of the inverse of the covariance.
Abstract: This paper considers estimating a covariance matrix of p variables from n observations by either banding the sample covariance matrix or estimating a banded version of the inverse of the covariance. We show that these estimates are consistent in the operator norm as long as (log p)²/n → 0, and obtain explicit rates. The results are uniform over some fairly natural well-conditioned families of covariance matrices. We also introduce an analogue of the Gaussian white noise model and show that if the population covariance is embeddable in that model and well-conditioned then the banded approximations produce consistent estimates of the eigenvalues and associated eigenvectors of the covariance matrix. The results can be extended to smooth versions of banding and to non-Gaussian distributions with sufficiently short tails. A resampling approach is proposed for choosing the banding parameter in practice. This approach is illustrated numerically on both simulated and real data.
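A minimal sketch of the banding idea under the assumptions above, with a random-split risk estimate standing in for the paper's resampling rule; the AR(1)-type covariance, split sizes, and candidate grid are illustrative choices, not the authors' exact procedure.

```python
import numpy as np

def band(M, k):
    """Zero out all entries more than k positions off the diagonal."""
    p = M.shape[0]
    mask = np.abs(np.subtract.outer(np.arange(p), np.arange(p))) <= k
    return M * mask

def choose_k_by_resampling(X, ks, n_splits=20, rng=None):
    """Pick the banding parameter minimizing an operator-norm risk
    estimate over random splits of the sample (an illustrative stand-in
    for the paper's resampling scheme)."""
    rng = np.random.default_rng(rng)
    n = X.shape[0]
    risks = np.zeros(len(ks))
    for _ in range(n_splits):
        idx = rng.permutation(n)
        a, b = idx[: n // 2], idx[n // 2 :]
        Sa = np.cov(X[a], rowvar=False)
        Sb = np.cov(X[b], rowvar=False)
        for j, k in enumerate(ks):
            risks[j] += np.linalg.norm(band(Sa, k) - Sb, ord=2)
    return ks[int(np.argmin(risks))]

rng = np.random.default_rng(0)
p, n = 50, 200
# AR(1)-type covariance: entries decay off the diagonal, so banding helps.
Sigma = 0.6 ** np.abs(np.subtract.outer(np.arange(p), np.arange(p)))
X = rng.multivariate_normal(np.zeros(p), Sigma, size=n)
S = np.cov(X, rowvar=False)
k = choose_k_by_resampling(X, ks=list(range(0, 15)), rng=1)
print("chosen k:", k)
print("operator-norm error, sample :", np.linalg.norm(S - Sigma, 2))
print("operator-norm error, banded :", np.linalg.norm(band(S, k) - Sigma, 2))
```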

1,143 citations


Journal ArticleDOI
TL;DR: A unified theory of neighborhood filters and reliable criteria to compare them to other filter classes are presented, and it is demonstrated that computing trajectories and restricting the neighborhood to them is harmful for denoising purposes, and that space-time NL-means preserves more movie details.
Abstract: Neighborhood filters are nonlocal image and movie filters which reduce the noise by averaging similar pixels. The first object of the paper is to present a unified theory of these filters and reliable criteria to compare them to other filter classes. A CCD noise model will be presented justifying the involvement of neighborhood filters. A classification of neighborhood filters will be proposed, including classical image and movie denoising methods and discussing further a recently introduced neighborhood filter, NL-means. In order to compare denoising methods three principles will be discussed. The first principle, "method noise", specifies that only noise must be removed from an image. A second principle will be introduced, "noise to noise", according to which a denoising method must transform a white noise into a white noise. Contrary to "method noise", this principle, which characterizes artifact-free methods, eliminates any subjectivity and can be checked by mathematical arguments and Fourier analysis. "Noise to noise" will be proven to rule out most denoising methods, with the exception of neighborhood filters. This is why a third and new comparison principle, the "statistical optimality", is needed and will be introduced to compare the performance of all neighborhood filters. The three principles will be applied to compare ten different image and movie denoising methods. It will be first shown that only wavelet thresholding methods and NL-means give an acceptable method noise. Second, that neighborhood filters are the only ones to satisfy the "noise to noise" principle. Third, that among them NL-means is closest to statistical optimality. Particular attention will be paid to the application of the statistical optimality criterion for movie denoising methods. It will be pointed out that current movie denoising methods are motion compensated neighborhood filters. This amounts to saying that they are neighborhood filters and that the ideal neighborhood of a pixel is its trajectory. Unfortunately the aperture problem makes it impossible to estimate ground-truth trajectories. It will be demonstrated that computing trajectories and restricting the neighborhood to them is harmful for denoising purposes and that space-time NL-means preserves more movie details.
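Since several entries in this list build on NL-means, here is a minimal sketch of the filter itself: every pixel is replaced by a weighted mean of pixels whose surrounding patches look similar. The patch size, search window, and filtering parameter h (tied here to the noise level) are illustrative assumptions, not the paper's tuned settings.

```python
import numpy as np

def nl_means(img, patch=3, search=7, h=0.3):
    """Minimal NL-means: average pixels weighted by patch similarity;
    h controls how fast weights decay with patch distance."""
    pad = patch // 2
    padded = np.pad(img, pad, mode="reflect")
    H, W = img.shape
    out = np.zeros_like(img)
    s = search // 2
    for i in range(H):
        for j in range(W):
            ref = padded[i:i + patch, j:j + patch]
            wsum, acc = 0.0, 0.0
            for di in range(max(0, i - s), min(H, i + s + 1)):
                for dj in range(max(0, j - s), min(W, j + s + 1)):
                    cand = padded[di:di + patch, dj:dj + patch]
                    d2 = np.mean((ref - cand) ** 2)
                    w = np.exp(-d2 / (h * h))
                    wsum += w
                    acc += w * img[di, dj]
            out[i, j] = acc / wsum
    return out

rng = np.random.default_rng(0)
clean = np.zeros((32, 32)); clean[8:24, 8:24] = 1.0   # a simple square
noisy = clean + 0.15 * rng.standard_normal(clean.shape)
denoised = nl_means(noisy)
print("noisy MSE   :", np.mean((noisy - clean) ** 2))
print("denoised MSE:", np.mean((denoised - clean) ** 2))
```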

763 citations


Journal ArticleDOI
TL;DR: A novel Monte-Carlo technique is presented which enables the user to calculate SURE for an arbitrary denoising algorithm characterized by some specific parameter setting and it is demonstrated numerically that SURE computed using the new approach accurately predicts the true MSE for all the considered algorithms.
Abstract: We consider the problem of optimizing the parameters of a given denoising algorithm for restoration of a signal corrupted by white Gaussian noise. To achieve this, we propose to minimize Stein's unbiased risk estimate (SURE) which provides a means of assessing the true mean-squared error (MSE) purely from the measured data without need for any knowledge about the noise-free signal. Specifically, we present a novel Monte-Carlo technique which enables the user to calculate SURE for an arbitrary denoising algorithm characterized by some specific parameter setting. Our method is a black-box approach which solely uses the response of the denoising operator to additional input noise and does not ask for any information about its functional form. This, therefore, permits the use of SURE for optimization of a wide variety of denoising algorithms. We justify our claims by presenting experimental results for SURE-based optimization of a series of popular image-denoising algorithms such as total-variation denoising, wavelet soft-thresholding, and Wiener filtering/smoothing splines. In the process, we also compare the performance of these methods. We demonstrate numerically that SURE computed using the new approach accurately predicts the true MSE for all the considered algorithms. We also show that SURE uncovers the optimal values of the parameters in all cases.
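The Monte-Carlo divergence trick described above is easy to state in code. The sketch below estimates SURE for a black-box denoiser from a single random probe; the soft-thresholding denoiser and the sparse test signal are stand-in assumptions, not the paper's experiments.

```python
import numpy as np

def mc_sure(denoise, y, sigma, eps=1e-3, rng=None):
    """Monte-Carlo SURE: unbiased MSE estimate for a black-box denoiser;
    the divergence term is probed with one random perturbation."""
    rng = np.random.default_rng(rng)
    n = y.size
    b = rng.standard_normal(y.shape)
    div = np.sum(b * (denoise(y + eps * b) - denoise(y))) / eps
    return np.mean((y - denoise(y)) ** 2) - sigma**2 + 2 * sigma**2 * div / n

# Toy denoiser: soft-thresholding (stands in for any black-box method).
soft = lambda t: (lambda y: np.sign(y) * np.maximum(np.abs(y) - t, 0.0))

rng = np.random.default_rng(0)
x = np.zeros(10_000); x[:300] = 5.0        # sparse clean signal
sigma = 1.0
y = x + sigma * rng.standard_normal(x.shape)

for t in [0.5, 1.0, 2.0, 3.0]:
    sure = mc_sure(soft(t), y, sigma, rng=1)
    mse = np.mean((soft(t)(y) - x) ** 2)
    print(f"t={t:3.1f}  SURE={sure:7.4f}  true MSE={mse:7.4f}")
```

Scanning the threshold and picking the SURE minimizer is exactly the kind of parameter optimization the paper targets, without ever touching the clean signal.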

365 citations


Journal ArticleDOI
TL;DR: A fundamental asymptotic limit of sample-eigenvalue-based detection of weak or closely spaced high-dimensional signals from a limited sample size is highlighted; this motivates the heuristic definition of the effective number of identifiable signals, which is equal to the number of "signal" eigenvalues of the population covariance matrix.
Abstract: The detection and estimation of signals in noisy, limited data is a problem of interest to many scientific and engineering communities. We present a mathematically justifiable, computationally simple, sample-eigenvalue-based procedure for estimating the number of high-dimensional signals in white noise using relatively few samples. The main motivation for considering a sample-eigenvalue-based scheme is the computational simplicity and the robustness to eigenvector modelling errors which can adversely impact the performance of estimators that exploit information in the sample eigenvectors. There is, however, a price we pay by discarding the information in the sample eigenvectors; we highlight a fundamental asymptotic limit of sample-eigenvalue-based detection of weak or closely spaced high-dimensional signals from a limited sample size. This motivates our heuristic definition of the effective number of identifiable signals, which is equal to the number of "signal" eigenvalues of the population covariance matrix which exceed the noise variance by a factor strictly greater than a critical threshold set by the ratio of the dimension to the sample size. The fundamental asymptotic limit brings into sharp focus why, when there are too few samples available so that the effective number of signals is less than the actual number of signals, underestimation of the model order is unavoidable (in an asymptotic sense) when using any sample-eigenvalue-based detection scheme, including the one proposed herein. The analysis reveals why adding more sensors can only exacerbate the situation. Numerical simulations are used to demonstrate that the proposed estimator, like Wax and Kailath's MDL-based estimator, consistently estimates the true number of signals in the fixed-dimension, large-sample-size limit and, unlike Wax and Kailath's MDL-based estimator, the effective number of identifiable signals in the large-dimension, (relatively) large-sample-size limit.
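A small sketch of sample-eigenvalue-based detection in the spirit of the abstract. The threshold used below, the Marchenko-Pastur bulk edge noise_var*(1 + sqrt(p/n))², is the standard random-matrix phase-transition value and is adopted here as an assumption; the two-spike test scenario is likewise illustrative.

```python
import numpy as np

def effective_num_signals(sample_cov, n, noise_var):
    """Count sample eigenvalues above the Marchenko-Pastur bulk edge
    noise_var * (1 + sqrt(p/n))^2 (assumed detection threshold)."""
    p = sample_cov.shape[0]
    eig = np.sort(np.linalg.eigvalsh(sample_cov))[::-1]
    thresh = noise_var * (1.0 + np.sqrt(p / n)) ** 2
    return int(np.sum(eig > thresh)), eig[:5]

rng = np.random.default_rng(0)
p, n, noise_var = 100, 400, 1.0
# Two strong signals embedded in unit-variance white noise.
A = rng.standard_normal((p, 2))
A *= np.sqrt([8.0, 4.0]) / np.linalg.norm(A, axis=0)
X = A @ rng.standard_normal((2, n)) + rng.standard_normal((p, n))
S = X @ X.T / n
k, top = effective_num_signals(S, n, noise_var)
print("estimated number of signals:", k)
print("top sample eigenvalues:", np.round(top, 2))
```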

291 citations


Book
21 Apr 2008
TL;DR: This book presents a discrete-time approach to digital communications, reviewing signals and systems and probability theory, and developing linear modulation, carrier phase and symbol timing synchronization, and the components and architectures of discrete-time transceiver design.
Abstract: Contents
1 Introduction: 1.1 A Brief History of Communications 1.2 Basics of Wireless Communications 1.3 Digital Communications 1.4 Why Discrete-Time Processing is so Popular 1.5 Organization of the Text 1.6 Notes and References
2 Signals and Systems 1: A Review of the Basics: 2.1 Introduction 2.2 Signals 2.2.1 Continuous-Time Signals 2.2.2 Discrete-Time Signals 2.3 Systems 2.3.1 Continuous-Time Systems 2.3.2 Discrete-Time Systems 2.4 Frequency Domain Characterization 2.4.1 Laplace Transform 2.4.2 Continuous-Time Fourier Transform 2.4.3 Z Transform 2.4.4 Discrete-Time Fourier Transform 2.5 The Discrete Fourier Transform 2.6 The Relationship Between Discrete-Time and Continuous-Time Systems 2.6.1 The Sampling Theorem 2.6.2 Discrete-Time Processing of Continuous-Time Signals 2.7 Discrete-Time Processing of Bandpass Signals 2.8 Notes and References 2.9 Exercises
3 Signals and Systems 2: Some Useful Discrete-Time Techniques for Digital Communications: 3.1 Introduction 3.2 Multirate 3.2.1 Impulse Train Sampling 3.2.2 Downsampling 3.2.3 Upsampling 3.2.4 The Noble Identities 3.2.5 Polyphase Filterbanks 3.3 Discrete-Time Filter Design Methods 3.3.1 IIR Filter Design 3.3.2 FIR Filter Design 3.3.3 Two Important Filters: The Differentiator and the Integrator 3.4 Notes and References 3.5 Exercises
4 A Review of Probability Theory: 4.1 Basic Definitions 4.2 Gaussian Random Variables 4.2.1 Density and Distribution Functions 4.2.2 Product Moments 4.2.3 Bivariate Gaussian Distribution 4.2.4 Functions of Random Variables 4.3 Multivariate Gaussian Random Variables 4.4 Random Sequences 4.4.1 Power Spectral Density 4.4.2 Random Sequences and Discrete-Time LTI Systems 4.5 Additive White Gaussian Noise 4.5.1 Continuous-Time Random Processes 4.5.2 The White Gaussian Random Process: A Good Model for Noise 4.5.3 White Gaussian Noise in a Sampled-Data System 4.6 Notes and References 4.7 Exercises
5 Linear Modulation 1: Modulation, Demodulation, and Detection: 5.1 Signal Spaces 5.1.1 Definitions 5.1.2 The Synthesis Equation and Linear Modulation 5.1.3 The Analysis Equation and Detection 5.1.4 The Matched Filter 5.2 M-ary Baseband Pulse Amplitude Modulation (PAM) 5.2.1 Continuous-Time Realization 5.2.2 Discrete-Time Realization 5.3 M-ary Quadrature Amplitude Modulation (MQAM) 5.3.1 Continuous-Time Realization 5.3.2 Discrete-Time Realization 5.4 Offset QPSK 5.5 Multicarrier 5.6 Maximum Likelihood Detection 5.6.1 Introduction 5.6.2 Preliminaries 5.6.3 Maximum Likelihood Decision Rule 5.7 Notes and References 5.8 Exercises
6 Linear Modulation 2: Performance: 6.1 Performance of PAM 6.1.1 Bandwidth 6.1.2 Probability of Error 6.2 Performance of QAM 6.2.1 Bandwidth 6.2.2 Probability of Error 6.3 Comparisons 6.4 Link Budgets 6.4.1 Received Power and the Friis Equation 6.4.2 Equivalent Noise Temperature and Noise Figure 6.4.3 The Link Budget Equation 6.5 Projection of White Noise onto an Orthonormal Basis Set 6.6 Notes and References 6.7 Exercises
7 Carrier Phase Synchronization: 7.1 Basic Problem Formulation 7.2 Carrier Phase Synchronization for QPSK 7.2.1 A Heuristic Phase Error Detector 7.2.2 The Maximum Likelihood Phase Error Detector 7.2.3 Examples 7.3 Carrier Phase Synchronization for BPSK 7.4 Carrier Phase Synchronization for MQAM 7.5 Carrier Phase Synchronization for Offset QPSK 7.6 Carrier Phase Synchronization for BPSK and QPSK Using Continuous-Time Techniques 7.7 Phase Ambiguity Resolution 7.7.1 Unique Word 7.7.2 Differential Encoding 7.8 Maximum Likelihood Phase Estimation 7.8.1 Preliminaries 7.8.2 Carrier Phase Estimation 7.9 Notes and References 7.10 Exercises
8 Symbol Timing Synchronization: 8.1 Basic Problem Formulation 8.2 Continuous-Time Techniques for M-ary PAM 8.3 Continuous-Time Techniques for MQAM 8.4 Discrete-Time Techniques for M-ary PAM 8.4.1 Timing Error Detectors 8.4.2 Interpolation 8.4.3 Interpolation Control 8.4.4 Examples 8.5 Discrete-Time Techniques for MQAM 8.6 Discrete-Time Techniques for Offset QPSK 8.7 Dealing with Transition Density: A Practical Consideration 8.8 Maximum Likelihood Estimation 8.8.1 Preliminaries 8.8.2 Symbol Timing Estimation 8.9 Notes and References 8.10 Exercises
9 System Components: 9.1 The Continuous-Time Discrete-Time Interface 9.1.1 Analog-to-Digital Converter 9.1.2 Digital-to-Analog Converter 9.2 Discrete-Time Oscillators 9.2.1 Discrete Oscillators Based on LTI Systems 9.2.2 Direct Digital Synthesizer 9.3 Resampling Filters 9.3.1 CIC and Hogenauer Filters 9.3.2 Half-Band Filters 9.3.3 Arbitrary Resampling Using Polyphase Filterbanks 9.4 CORDIC: Coordinate Rotation Digital Computer 9.4.1 Rotations: Moving on a Circle 9.4.2 Moving Along Other Shapes 9.5 Automatic Gain Control 9.6 Notes and References 9.7 Exercises
10 System Design: 10.1 Advanced Discrete-Time Architectures 10.1.1 Discrete-Time Architectures for QAM Modulators 10.1.2 Discrete-Time Architectures for QAM Demodulators 10.1.3 Putting It All Together 10.2 Channelization 10.2.1 Continuous-Time Techniques: The Superheterodyne Receiver 10.2.2 Discrete-Time Techniques Using Multirate Processing 10.3 Notes and References 10.4 Exercises

218 citations


Proceedings ArticleDOI
22 Sep 2008
TL;DR: A new algorithm for estimating the signal-to-noise ratio (SNR) of speech signals, called WADA-SNR (Waveform Amplitude Distribution Analysis) is introduced, which shows significantly less bias and less variability with respect to the type of noise compared to the standard NIST STNR algorithm.
Abstract: In this paper, we introduce a new algorithm for estimating the signal-to-noise ratio (SNR) of speech signals, called WADA-SNR (Waveform Amplitude Distribution Analysis). In this algorithm we assume that the amplitude distribution of clean speech can be approximated by the Gamma distribution with a shaping parameter of 0.4, and that an additive noise signal is Gaussian. Based on this assumption, we can estimate the SNR by examining the amplitude distribution of the noise-corrupted speech. We evaluate the performance of the WADA-SNR algorithm on databases corrupted by white noise, background music, and interfering speech. The WADA-SNR algorithm shows significantly less bias and less variability with respect to the type of noise compared to the standard NIST STNR algorithm. In addition, the algorithm is quite computationally efficient. Index Terms: SNR estimation, Gamma distribution, Gaussian distribution
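A toy illustration of the idea under the stated model: the statistic below (log of the mean absolute amplitude minus the mean log absolute amplitude) varies monotonically with SNR when clean speech is Gamma-distributed with shape 0.4 and the noise is Gaussian, so a simulated lookup table can invert it. The statistic, SNR grid, and table construction are a hedged reconstruction, not the published algorithm or its tables.

```python
import numpy as np

rng = np.random.default_rng(0)

def gz(z):
    """Amplitude-distribution statistic: log of the mean absolute value
    minus the mean log absolute value (sensitive to the SNR)."""
    a = np.abs(z) + 1e-12
    return np.log(np.mean(a)) - np.mean(np.log(a))

def gamma_speech(n):
    """Surrogate 'clean speech': symmetric samples whose magnitudes
    follow a Gamma distribution with shape 0.4 (the model assumption)."""
    mag = rng.gamma(shape=0.4, scale=1.0, size=n)
    return mag * rng.choice([-1.0, 1.0], size=n)

# Build a lookup table G(SNR) by simulation, then invert it on test data.
snr_grid = np.arange(-10, 31, 1.0)
table = []
for snr_db in snr_grid:
    s = gamma_speech(200_000)
    noise_pow = np.mean(s**2) / 10 ** (snr_db / 10)
    table.append(gz(s + np.sqrt(noise_pow) * rng.standard_normal(s.size)))
table = np.array(table)

for true_snr in [0.0, 10.0, 20.0]:
    s = gamma_speech(100_000)
    noise_pow = np.mean(s**2) / 10 ** (true_snr / 10)
    z = s + np.sqrt(noise_pow) * rng.standard_normal(s.size)
    est = snr_grid[np.argmin(np.abs(table - gz(z)))]
    print(f"true SNR {true_snr:5.1f} dB -> estimated {est:5.1f} dB")
```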

155 citations


Journal ArticleDOI
TL;DR: A modification of the MLE equations is presented that allows the number of computations within the algorithm to be reduced from a cubic to a quadratic function of the number of observations when there are no data gaps.
Abstract: It has been generally accepted that the noise in continuous GPS observations can be well described by a power-law plus white noise model. Using maximum likelihood estimation (MLE) the numerical values of the noise model can be estimated. Current methods require calculating the data covariance matrix and inverting it, which is a significant computational burden. Analysing 10 years of daily GPS solutions of a single station can take around 2 h on a regular computer such as a PC with an AMD Athlon™ 64 X2 dual-core processor. When one analyses large networks with hundreds of stations or when one analyses hourly instead of daily solutions, the long computation times become a problem. In case the signal only contains power-law noise, the MLE computations can be simplified to an O(N²) process, where N is the number of observations. For the general case of power-law plus white noise, we present a modification of the MLE equations that allows us to reduce the number of computations within the algorithm from a cubic to a quadratic function of the number of observations when there are no data gaps. For time-series of three and eight years, this means in practice a reduction factor of around 35 and 84 in computation time without loss of accuracy. In addition, this modification removes the implicit assumption that there is no environment noise before the first observation. Finally, we present an analytical expression for the uncertainty of the estimated trend if the data only contains power-law noise.
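For concreteness, here is the O(N³) baseline the paper speeds up: a Gaussian log-likelihood whose covariance is a power-law term plus a white noise term, evaluated with a full Cholesky factorization. The fractional-integration recursion for the power-law covariance follows the standard GPS noise literature; the spectral index, amplitudes, and crude grid search are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import cholesky, solve_triangular

def powerlaw_cov(N, kappa):
    """Covariance of unit power-law noise with spectral index kappa,
    built from fractional-integration filter coefficients (C = T T^T)."""
    h = np.ones(N)
    for i in range(1, N):
        h[i] = h[i - 1] * (i - 1 - kappa / 2) / i
    T = np.zeros((N, N))
    for i in range(N):
        T[i:, i] = h[: N - i]
    return T @ T.T

def neg_log_likelihood(x, sig_pl, sig_w, kappa):
    """Gaussian negative log-likelihood with C = sig_pl^2 P + sig_w^2 I;
    the full Cholesky makes this the O(N^3) baseline."""
    N = len(x)
    C = sig_pl**2 * powerlaw_cov(N, kappa) + sig_w**2 * np.eye(N)
    L = cholesky(C, lower=True)
    u = solve_triangular(L, x, lower=True)
    return 0.5 * (2 * np.sum(np.log(np.diag(L))) + u @ u
                  + N * np.log(2 * np.pi))

rng = np.random.default_rng(0)
N, kappa = 500, -1.0                     # flicker plus white noise
L = cholesky(0.5 * powerlaw_cov(N, kappa) + 1.0 * np.eye(N), lower=True)
x = L @ rng.standard_normal(N)
grid = [(a, b) for a in [0.3, 0.7, 1.0] for b in [0.5, 1.0, 1.5]]
best = min(grid, key=lambda g: neg_log_likelihood(x, g[0], g[1], kappa))
print("best (sig_pl, sig_w) on grid:", best)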

154 citations


Journal ArticleDOI
TL;DR: In this article, a number of known and novel reconstruction schemes, which differ in terms of the underlying signal prior, data likelihood and numerical inverse extra-regularization schemes, are derived and classified.
Abstract: We address the inverse problem of cosmic large-scale structure reconstruction from a Bayesian perspective. For a linear data model, a number of known and novel reconstruction schemes, which differ in terms of the underlying signal prior, data likelihood and numerical inverse extra-regularization schemes, are derived and classified. The Bayesian methodology presented in this paper tries to unify and extend the following methods: Wiener filtering, Tikhonov regularization, ridge regression, maximum entropy and inverse regularization techniques. The inverse techniques considered here are the asymptotic regularization, the Jacobi, Steepest Descent, Newton–Raphson, Landweber–Fridman and both linear and non-linear Krylov methods based on Fletcher–Reeves, Polak–Ribière and Hestenes–Stiefel conjugate gradients. The structures of the highest-performing algorithms to date are presented, based on an operator scheme, which permits one to exploit the power of fast Fourier transforms. Using such an implementation of the generalized Wiener filter in the novel argo software package, the different numerical schemes are benchmarked with one-, two- and three-dimensional problems including structured white and Poissonian noise, data windowing and blurring effects. A novel numerical Krylov scheme is shown to be superior in terms of performance and fidelity. These fast inverse methods ultimately will enable the application of sampling techniques to explore complex joint posterior distributions. We outline how the space of the dark matter density field, the peculiar velocity field and the power spectrum can jointly be investigated by a Gibbs-sampling process. Such a method can be applied for the redshift distortions correction of the observed galaxies and for time-reversal reconstructions of the initial density field.
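A minimal sketch of the operator-based generalized Wiener filter idea: the signal covariance is applied with FFTs, the noise covariance is diagonal, and a Krylov (conjugate-gradient) solver inverts (S⁻¹ + N⁻¹) without ever forming a matrix. The 1-D setup, the assumed power spectrum, and the identity response are toy simplifications of the paper's 3-D machinery.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

rng = np.random.default_rng(0)
n = 1024
k = np.abs(np.fft.fftfreq(n) * n)
Pk = 25.0 / (1.0 + (k / 10.0) ** 2)      # assumed signal power spectrum
sigma_n = 0.5

# Draw a signal with covariance S = F^-1 diag(Pk) F, then add white noise.
s = np.fft.ifft(np.sqrt(Pk) * np.fft.fft(rng.standard_normal(n))).real
d = s + sigma_n * rng.standard_normal(n)

def apply_A(x):
    """(S^-1 + N^-1) x, with S^-1 applied via FFTs and N = sigma_n^2 I."""
    Sinv_x = np.fft.ifft(np.fft.fft(x) / Pk).real
    return Sinv_x + x / sigma_n**2

A = LinearOperator((n, n), matvec=apply_A)
s_wf, info = cg(A, d / sigma_n**2)       # Krylov solve of the Wiener system
assert info == 0
print("MSE of raw data     :", np.mean((d - s) ** 2))
print("MSE of Wiener filter:", np.mean((s_wf - s) ** 2))
```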

141 citations


Journal ArticleDOI
TL;DR: In this paper, a simple but realistic estimate of the frequency uncertainty in time series analyses is presented, where the error is defined as a function of the relative level of noise, signal and frequency difference.
Abstract: Context: Several approaches to estimating frequency, phase and amplitude errors in time series analyses have been reported in the literature, but they are either time-consuming to compute, grossly overestimate the error, or are based on empirically determined criteria. Aims: A simple, but realistic estimate of the frequency uncertainty in time series analyses. Methods: Synthetic data sets with mono- and multi-periodic harmonic signals and with randomly distributed amplitude, frequency and phase were generated and white noise added. We tried to recover the input parameters with classical Fourier techniques and investigated the error as a function of the relative level of noise, signal and frequency difference. Results: We present simple formulas for the upper limit of the amplitude, frequency and phase uncertainties in time-series analyses. We also demonstrate the possibility of detecting frequencies that are separated by less than the classical frequency resolution, and that the realistic frequency error is at least 4 times smaller than the classical frequency resolution.
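The experiment the authors describe is easy to reproduce in miniature: generate a noisy sinusoid, locate the periodogram peak on a fine frequency grid, and compare the scatter of the recovered frequency with the classical resolution 1/T. All parameter values below are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
T, n = 100.0, 1000                       # time span and number of samples
t = np.linspace(0.0, T, n)
f_true, amp = 0.2137, 2.0                # input frequency and amplitude

freqs = np.linspace(0.1, 0.4, 4000)      # fine search grid
E = np.exp(-2j * np.pi * np.outer(freqs, t))

errors = []
for _ in range(100):
    y = amp * np.sin(2 * np.pi * f_true * t) + rng.standard_normal(n)
    power = np.abs(E @ y)                # periodogram on the fine grid
    errors.append(abs(freqs[np.argmax(power)] - f_true))

print("classical resolution 1/T:", 1.0 / T)
print("median frequency error  :", np.median(errors))
```

At this signal-to-noise level the median recovered error sits well below 1/T, which is the effect the abstract quantifies.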

137 citations


01 Jan 2008
TL;DR: It is shown that the NLMeans algorithm is basically the first iteration of the Jacobi optimization algorithm for robustly estimating the noise-free image; an extension to the reduction of coloured (correlated) noise is also presented.
Abstract: Recently, the NLMeans filter has been proposed by Buades et al. for the suppression of white Gaussian noise. This filter exploits the repetitive character of structures in an image, unlike conventional denoising algorithms, which typically operate in a local neighbourhood. Even though the method is quite intuitive and potentially very powerful, the PSNR and visual results are somewhat inferior to other recent state-of-the-art non-local algorithms, like KSVD and BM-3D. In this paper, we show that the NLMeans algorithm is basically the first iteration of the Jacobi optimization algorithm for robustly estimating the noise-free image. Based on this insight, we present additional improvements to the NLMeans algorithm and also an extension to noise reduction of coloured (correlated) noise. For white noise, PSNR results show that the proposed method is very competitive with the BM-3D method, while the visual quality of our method is better due to the lower presence of artifacts. For correlated noise on the other hand, we obtain a significant improvement in denoising performance compared to recent wavelet-based techniques.

134 citations


Journal ArticleDOI
TL;DR: Two new theorems and the Itô calculus show that white Lévy noise will benefit subthreshold neuronal signal detection if the noise process's scaled drift velocity falls inside an interval that depends on the threshold values.
Abstract: Lévy noise can help neurons detect faint or subthreshold signals. Lévy noise extends standard Brownian noise to many types of impulsive jump-noise processes found in real and model neurons as well as in models of finance and other random phenomena. Two new theorems and the Itô calculus show that white Lévy noise will benefit subthreshold neuronal signal detection if the noise process's scaled drift velocity falls inside an interval that depends on the threshold values. These results generalize earlier “forbidden interval” theorems of neuronal “stochastic resonance” (SR) or noise-injection benefits. Global and local Lipschitz conditions imply that additive white Lévy noise can increase the mutual information or bit count of several feedback neuron models that obey a general stochastic differential equation (SDE). Simulation results show that the same noise benefits still occur for some infinite-variance stable Lévy noise processes even though the theorems themselves apply only to finite-variance Lévy noise. The Appendix proves the two Itô-theoretic lemmas that underlie the new Lévy noise-benefit theorems.
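A crude stochastic-resonance demonstration in the spirit of the theorems above: a leaky integrate-and-fire unit driven by a subthreshold square wave plus additive alpha-stable (Lévy) noise, with a simple spike/signal correlation standing in for mutual information. The model, alpha = 1.5, and all scales are illustrative assumptions, not the paper's SDEs; scanning the noise scale shows the characteristic rise and fall of the correlation.

```python
import numpy as np
from scipy.stats import levy_stable

rng = np.random.default_rng(0)
dt, steps, tau = 1e-3, 20_000, 0.05
t = np.arange(steps) * dt
signal = 0.8 * np.sign(np.sin(2 * np.pi * 1.0 * t))  # subthreshold square wave
theta = 1.0                                           # firing threshold

def spike_train(noise_scale, alpha=1.5):
    """Leaky integrate-and-fire neuron driven by the subthreshold signal
    plus additive alpha-stable (Levy) jump noise."""
    jumps = levy_stable.rvs(alpha, 0.0, scale=noise_scale * dt ** (1 / alpha),
                            size=steps, random_state=1)
    v, spikes = 0.0, np.zeros(steps)
    for i in range(steps):
        v += (signal[i] - v) * dt / tau + jumps[i]
        if v >= theta:
            spikes[i], v = 1.0, 0.0        # fire and reset
    return spikes

for scale in [0.2, 1.0, 2.0, 5.0, 20.0]:
    s = spike_train(scale)
    corr = np.corrcoef(s, signal > 0)[0, 1] if s.any() else 0.0
    print(f"noise scale {scale:5.1f}: rate={s.mean()/dt:7.1f} Hz, "
          f"corr={corr:.3f}")
```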

Journal ArticleDOI
TL;DR: In this paper, a method for estimating common factors of multiple time series is proposed, where the unobservable, nonstationary factors are identified by expanding the white noise space step by step, thereby solving a high-dimensional optimization problem by several low-dimensional subproblems.
Abstract: We propose a new method for estimating common factors of multiple time series. One distinctive feature of the new approach is that it is applicable to some nonstationary time series. The unobservable, nonstationary factors are identified by expanding the white noise space step by step, thereby solving a high-dimensional optimization problem by several low-dimensional sub-problems. Asymptotic properties of the estimation are investigated. The proposed methodology is illustrated with both simulated and real datasets.

Journal ArticleDOI
TL;DR: A new stochastic ML DOA estimator is derived based on an iterative procedure which concentrates the log-likelihood function with respect to the signal and noise nuisance parameters in a stepwise fashion and a modified inverse iteration algorithm is presented for the estimation of the noise parameters.
Abstract: This correspondence investigates the direction-of-arrival (DOA) estimation of multiple narrowband sources in the presence of nonuniform white noise with an arbitrary diagonal covariance matrix. While both the deterministic and stochastic Cramer-Rao bound (CRB) and the deterministic maximum-likelihood (ML) DOA estimator under this model have been derived by Pesavento and Gershman, the stochastic ML DOA estimator under the same setting is still not available in the literature. In this correspondence, a new stochastic ML DOA estimator is derived. Its implementation is based on an iterative procedure which concentrates the log-likelihood function with respect to the signal and noise nuisance parameters in a stepwise fashion. A modified inverse iteration algorithm is also presented for the estimation of the noise parameters. Simulation results have shown that the proposed algorithm is able to provide significant performance improvement over the conventional uniform ML estimator in nonuniform noise environments and requires only a few iterations to converge to the nonuniform stochastic CRB.

Journal ArticleDOI
01 Sep 2008-EPL
TL;DR: This paper deals with the distinction between white noise and deterministic chaos in multivariate noisy time series by counting the number of the so-called ordinal patterns in independent samples of length L from the data sequence.
Abstract: This paper deals with the distinction between white noise and deterministic chaos in multivariate noisy time series. Our method is combinatorial in the sense that it is based on the properties of topological permutation entropy, and it becomes especially interesting when the noise is so high that the standard denoising techniques fail, so a detection of determinism is the most one can hope for. It proceeds by i) counting the number of the so-called ordinal patterns in independent samples of length L from the data sequence and ii) performing a χ² test based on the results of i), the null hypothesis being that the data are white noise. If the null hypothesis holds, all possible ordinal patterns of a given length should be visible and evenly distributed over sufficiently many samples, contrary to what happens in the case of noisy deterministic data. We present numerical evidence in two dimensions for the efficiency of this method. A brief comparison with two common tests for independence, namely, the calculation of the autocorrelation function and the BDS algorithm, is also performed.
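A compact sketch of the test: count ordinal patterns of length L in non-overlapping windows and run a χ² test against the uniform distribution expected under the white noise null. The window length, series length, and the noisy logistic map used as the deterministic example are illustrative assumptions.

```python
import numpy as np
from itertools import permutations
from scipy.stats import chisquare

def ordinal_pattern_test(x, L=4):
    """Chi-square test of the null 'x is white noise' based on counts of
    ordinal patterns of length L in non-overlapping windows."""
    pats = {p: 0 for p in permutations(range(L))}
    for i in range(0, len(x) - L + 1, L):          # independent samples
        pats[tuple(np.argsort(x[i:i + L]))] += 1
    counts = np.array(list(pats.values()))
    # Under the null, every one of the L! patterns is equally likely.
    return chisquare(counts)

rng = np.random.default_rng(0)
white = rng.standard_normal(12_000)

x = np.empty(12_000); x[0] = 0.4                   # logistic map orbit
for i in range(1, len(x)):
    x[i] = 4.0 * x[i - 1] * (1.0 - x[i - 1])
chaotic = x + 0.25 * rng.standard_normal(len(x))   # heavily contaminated

for name, series in [("white noise", white), ("noisy chaos", chaotic)]:
    stat, p = ordinal_pattern_test(series)
    print(f"{name:12s}: chi2={stat:9.1f}, p-value={p:.3g}")
```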

Journal ArticleDOI
TL;DR: In this article, the transient dynamics of the Verhulst model perturbed by arbitrary non-Gaussian white noise is investigated based on the infinitely divisible distribution of the Lévy process.
Abstract: The transient dynamics of the Verhulst model perturbed by arbitrary non-Gaussian white noise is investigated. Based on the infinitely divisible distribution of the Lévy process we study the nonlinear relaxation of the population density for three cases of white non-Gaussian noise: (i) shot noise; (ii) noise with a probability density of increments expressed in terms of the Gamma function; and (iii) Cauchy stable noise. We obtain exact results for the probability distribution of the population density in all cases, and for Cauchy stable noise the exact expression of the nonlinear relaxation time is derived. Moreover, starting from an initial delta function distribution, we find a transition induced by the multiplicative Lévy noise, from a trimodal probability distribution to a bimodal probability distribution asymptotically. Finally, we find a nonmonotonic behavior of the nonlinear relaxation time as a function of the Cauchy stable noise intensity.
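A rough Euler-scheme toy for the Cauchy case above, tracking an ensemble of trajectories. The clipping at zero, the step size, intensity values, and ensemble size are ad hoc choices for illustration, and Euler discretization of heavy-tailed jumps is only qualitative, not a substitute for the paper's exact results.

```python
import numpy as np

rng = np.random.default_rng(0)

def verhulst_cauchy(x0, D, dt=1e-3, steps=3_000, n_paths=10_000):
    """Euler scheme for dx = x(1 - x) dt + x dL_t with L_t a Cauchy
    (alpha = 1) Levy process of intensity D (increment scale D*dt)."""
    x = np.full(n_paths, x0)
    for _ in range(steps):
        x = x + x * (1.0 - x) * dt + x * D * dt * rng.standard_cauchy(n_paths)
        x = np.maximum(x, 0.0)       # keep the population density nonnegative
    return x

for D in [0.1, 1.0]:
    x = verhulst_cauchy(x0=0.5, D=D)
    print(f"D={D}: P(x<0.1)={np.mean(x < 0.1):.2f}, "
          f"P(0.5<x<1.5)={np.mean((x > 0.5) & (x < 1.5)):.2f}")
```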

Journal ArticleDOI
TL;DR: Simulation results verify the theoretical derivations and demonstrate the potential applications, such as detection and parameter estimation of chirp signals, fractional power spectral estimation and system identification in the fractional Fourier domain.
Abstract: In this paper, by investigating the definitions of the fractional power spectrum and the fractional correlation for the deterministic process, we consider the case associated with the random process in an explicit manner. The fractional power spectral relations for the fractional Fourier domain filter are derived, and the expression for the fractional power spectrum in terms of the fractional correlation is obtained. In addition, the definitions and the properties of the fractional white noise and the chirp-stationary process are presented. Simulation results verify the theoretical derivations and demonstrate the potential applications, such as detection and parameter estimation of chirp signals, fractional power spectral estimation and system identification in the fractional Fourier domain.

Journal ArticleDOI
TL;DR: In this paper, the authors survey regularization and parameter selection from a linear algebra and statistics viewpoint and compare the statistical distributions of regularized estimates of the solution and the residual, and evaluate a method for choosing the regularization parameter that makes the residuals as close as possible to white noise.
Abstract: Consider an ill-posed problem transformed if necessary so that the errors in the data are independent identically normally distributed with mean zero and variance 1. We survey regularization and parameter selection from a linear algebra and statistics viewpoint and compare the statistical distributions of regularized estimates of the solution and the residual. We discuss methods for choosing a regularization parameter in order to ensure that the residual for the model is statistically plausible. Ideally, as proposed by Rust (1998 Tech. Rep. NISTIR 6131; 2000 Comput. Sci. Stat. 32 333–47), the results of candidate parameter choices should be evaluated by plotting the resulting residual along with its periodogram and its cumulative periodogram, but sometimes an automated choice is needed. We evaluate a method for choosing the regularization parameter that makes the residuals as close as possible to white noise, using a diagnostic test based on the periodogram. We compare this method with standard techniques such as the discrepancy principle, the L-curve and generalized cross validation, showing that it performs better on two new test problems as well as a variety of standard problems.
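A small sketch of the selection rule described above: for each candidate regularization parameter, compute the residual's cumulative periodogram and pick the parameter whose residual looks most like white noise (smallest maximum deviation from the straight line). The Gaussian blur test problem and the λ grid are illustrative assumptions, not the paper's test problems.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
# Ill-posed toy problem: Gaussian blur plus unit-variance white noise.
t = np.arange(n)
A = np.exp(-0.5 * ((t[:, None] - t[None, :]) / 3.0) ** 2)
A /= A.sum(axis=1, keepdims=True)
x_true = np.sin(2 * np.pi * t / 50) + (t > 120)
d = A @ x_true + rng.standard_normal(n)

def whiteness_distance(r):
    """Max deviation of the residual's cumulative periodogram from the
    straight line expected for white noise (a KS-type statistic)."""
    p = np.abs(np.fft.rfft(r)[1:]) ** 2
    c = np.cumsum(p) / np.sum(p)
    line = np.arange(1, len(p) + 1) / len(p)
    return np.max(np.abs(c - line))

best = None
for lam in np.logspace(-4, 2, 25):       # Tikhonov regularization sweep
    x_lam = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ d)
    dist = whiteness_distance(d - A @ x_lam)
    if best is None or dist < best[0]:
        best = (dist, lam, x_lam)
print(f"chosen lambda = {best[1]:.4g}")
print("reconstruction MSE:", np.mean((best[2] - x_true) ** 2))
```

Too little regularization leaves a high-frequency-heavy residual and too much leaves signal in the residual; both are penalized by the whiteness statistic, which is the intuition behind the diagnostic.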

Journal ArticleDOI
TL;DR: In this article, the Ornstein–Uhlenbeck process is presented with its main mathematical properties, with original results on the first crossing times in the case of two threshold barriers, and with an improved simulation scheme for the evaluation of first passage times between two barriers.
Abstract: The Ornstein–Uhlenbeck process is presented with its main mathematical properties and with original results on the first crossing times in the case of two threshold barriers. The interpretation of filtered white noise, its stationary spectrum and Allan variance are also presented for ease of use in the time and frequency metrology field. An improved simulation scheme for the evaluation of first passage times between two barriers is also introduced.
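A baseline Monte-Carlo sketch for first-passage times of the OU process out of a two-barrier corridor using a plain Euler-Maruyama step; note that naive discretization misses sub-step excursions, which is the kind of bias an improved scheme such as the paper's addresses. All parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def ou_first_passage(x0, a, b, theta=1.0, mu=0.0, sigma=1.0,
                     dt=1e-3, n_paths=5_000, t_max=20.0):
    """Euler-Maruyama first-passage times of dx = theta*(mu - x) dt
    + sigma dW out of the corridor (a, b)."""
    x = np.full(n_paths, x0)
    fpt = np.full(n_paths, np.nan)
    alive = np.ones(n_paths, dtype=bool)
    for step in range(int(t_max / dt)):
        x[alive] += theta * (mu - x[alive]) * dt \
                    + sigma * np.sqrt(dt) * rng.standard_normal(alive.sum())
        crossed = alive & ((x <= a) | (x >= b))
        fpt[crossed] = (step + 1) * dt
        alive &= ~crossed
        if not alive.any():
            break
    return fpt

fpt = ou_first_passage(x0=0.0, a=-1.5, b=1.0)
done = np.isfinite(fpt)
print(f"exited within t_max: {done.mean():.1%}")
print(f"mean first-passage time: {fpt[done].mean():.3f}")
```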

Posted Content
TL;DR: In this paper, the authors investigate the outage probability of the free-space optical channel under the assumption of orthogonal pulse-position modulation, and study the mitigation of scintillation through the use of multiple lasers and multiple apertures, thereby creating a multiple-input multiple-output (MIMO) channel.
Abstract: The free-space optical channel has the potential to facilitate inexpensive, wireless communication with fiber-like bandwidth under short deployment timelines. However, atmospheric effects can significantly degrade the reliability of a free-space optical link. In particular, atmospheric turbulence causes random fluctuations in the irradiance of the received laser beam, commonly referred to as scintillation. The scintillation process is slow compared to the large data rates typical of optical transmission. As such, we adopt a quasi-static block fading model and study the outage probability of the channel under the assumption of orthogonal pulse-position modulation. We investigate the mitigation of scintillation through the use of multiple lasers and multiple apertures, thereby creating a multiple-input multiple-output (MIMO) channel. Non-ideal photodetection is also assumed such that the combined shot noise and thermal noise are considered as signal-independent additive white Gaussian noise. Assuming perfect receiver channel state information (CSI), we compute the signal-to-noise ratio exponents for the cases when the scintillation is lognormal, exponential and gamma-gamma distributed, which cover a wide range of atmospheric turbulence conditions. Furthermore, we illustrate very large gains, in some cases larger than 15 dB, when transmitter CSI is also available by adapting the transmitted electrical power.
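A toy Monte-Carlo estimate of outage probability under one of the three fading laws above (lognormal), with intensity averaging over M x N laser-aperture paths standing in for the full PPM/MIMO detector model; the SNR model, normalization, and thresholds are simplifying assumptions, not the paper's exact setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def outage_prob(M, N, snr_db, thresh_db, sigma_x2=0.3, trials=200_000):
    """Monte-Carlo outage probability of an M-laser, N-aperture FSO link
    with lognormal scintillation and equal-gain intensity combining."""
    snr0 = 10 ** (snr_db / 10)
    thresh = 10 ** (thresh_db / 10)
    # Lognormal irradiance, normalized so E[I] = 1 on each of M*N paths.
    I = np.exp(rng.normal(-sigma_x2 / 2, np.sqrt(sigma_x2),
                          size=(trials, M * N)))
    snr = snr0 * I.mean(axis=1) ** 2      # electrical SNR ~ intensity^2
    return np.mean(snr < thresh)

for M, N in [(1, 1), (2, 2), (4, 4)]:
    p = outage_prob(M, N, snr_db=10.0, thresh_db=7.0)
    print(f"{M}x{N} paths: outage = {p:.4f}")
```

The steady drop in outage as paths are added illustrates the aperture-averaging effect the paper quantifies through SNR exponents.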

Journal ArticleDOI
TL;DR: The results of experiments using noise to try to better understand the losses in amblyopia show that the amblyopes' reduced efficiency for detecting signals in noise is explained in part by reduced template efficiency but to a greater extent by increased random internal noise.
Abstract: Amblyopia results in a loss of visual acuity, contrast sensitivity, and position acuity. However, the nature of the neural losses is not yet fully understood. Here we report the results of experiments using noise to try to better understand the losses in amblyopia. Specifically, in one experiment we compared the performance of normal, amblyopic, and ideal observers for detecting a localized signal (a discrete frequency pattern or DFP) in fixed contrast white noise. In a second experiment, we used visibility-scaled noise and varied both the visibility of the noise (from 2 to 20 times the noise detection threshold) and the spatial frequency of the signal. Our results show a loss of efficiency for detection of known signals in noise that increases with the spatial frequency of the signal in observers with amblyopia. To determine whether the loss of efficiency was a consequence of a mismatched template, we derived classification images. We found that although the amblyopic observers' template was shifted to lower spatial frequencies, the shift was insufficient to account for their threshold elevation. Reduced efficiency in the amblyopic visual system may reflect a high level of internal noise, a poorly matched position template, or both. To analyze the type of internal noise we used an "N-pass" technique, in which observers performed the identical experiment N times (where N = 3 or 4). The amount of disagreement between the repeated trials enables us to parse the internal noise into random noise and consistent noise beyond that due to the poorly matched template. Our results show that the amblyopes' reduced efficiency for detecting signals in noise is explained in part by reduced template efficiency but to a greater extent by increased random internal noise. This loss is more or less independent of external noise contrast over a log unit range of external noise.

Journal ArticleDOI
TL;DR: In this article, a robust optimal design criterion for a single tuned mass damper (TMD) device is proposed, in which the protected main structure's covariance displacement (made dimensionless by dividing by the unprotected one) is adopted as the deterministic objective function.

Journal ArticleDOI
TL;DR: The expected firing probability of a stochastic neuron is approximated by a function of the expected subthreshold membrane potential, for the case of colored noise, in order to extend the recently proposed white noise model to conductance-based neurons.
Abstract: The expected firing probability of a stochastic neuron is approximated by a function of the expected subthreshold membrane potential, for the case of colored noise. We propose this approximation in order to extend the recently proposed white noise model [A. V. Chizhov and L. J. Graham, Phys. Rev. E 75, 011924 (2007)] to the case of colored noise, applying a refractory density approach to conductance-based neurons. The uncoupled neurons of a single population receive a common input and are dispersed by the noise. Within the framework of the model the effect of noise is expressed by the so-called hazard function, which is the probability density for a single neuron to fire given the average membrane potential in the presence of a noise term. To derive the hazard function we solve the Kolmogorov-Fokker-Planck equation for a mean voltage-driven neuron fluctuating due to colored noisy current. We show that a sum of both a self-similar solution for the case of slow changing mean voltage and a frozen stationary solution for fast changing mean voltage gives a satisfactory approximation for the hazard function in the arbitrary case. We demonstrate the quantitative effect of a temporal correlation of noisy input on the neuron dynamics in the case of leaky integrate-and-fire and detailed conductance-based neurons in response to an injected current step.

Journal ArticleDOI
TL;DR: In this paper, a path integral solution for non-linear systems under Poisson white noise is presented, which may be considered a step-by-step solution technique in terms of the probability density function.

Journal ArticleDOI
TL;DR: The author examines the relationship between an individual's attempts to avoid bias and 1/f noise in implicit measures of stereotyping and prejudice, leading to the prediction that increasing effort will reduce 1/f noise.
Abstract: Phenomena that vary over time can often be represented as a complex waveform. Fourier analysis decomposes this complex wave into a set of sinusoidal component waves. In some phenomena, the amplitude of these waves varies in inverse relation to frequency. This pattern has been called 1/f noise and, unlike white noise, it reflects nonrandom variation. Latencies in simple computer tasks typically reveal 1/f noise, but the magnitude of the noise decreases as tasks become more challenging. The current work hypothesizes a correspondence between 1/f noise and effort, leading to the prediction that increasing effort will reduce 1/f noise. In 2 studies, the author examined the relationship between an individual's attempts to avoid bias (measured in Study 1, manipulated in Study 2) and 1/f noise in implicit measures of stereotyping and prejudice. In each study, participants who made an effort to modulate the use of racial information showed less 1/f noise than did participants who made less effort. The potential value of this analytic approach to social psychology is discussed.
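The distinction drawn above between white and 1/f noise reduces to the slope of the power spectrum on log-log axes: about 0 for white noise and about -1 for 1/f noise. The sketch below synthesizes both kinds of series and estimates the slope by regressing log power on log frequency; the synthesis recipe and series length are illustrative assumptions.

```python
import numpy as np

def spectral_slope(x):
    """Slope of log power versus log frequency; ~0 for white noise,
    ~-1 for 1/f noise."""
    p = np.abs(np.fft.rfft(x - x.mean())[1:]) ** 2
    f = np.fft.rfftfreq(len(x))[1:]
    return np.polyfit(np.log(f), np.log(p), 1)[0]

rng = np.random.default_rng(0)
n = 8192
white = rng.standard_normal(n)
# Synthesize 1/f noise by shaping white noise in the Fourier domain.
spec = np.fft.rfft(rng.standard_normal(n))
spec[1:] /= np.sqrt(np.fft.rfftfreq(n)[1:])
pink = np.fft.irfft(spec, n)

print("white noise slope:", round(spectral_slope(white), 2))
print("1/f noise slope  :", round(spectral_slope(pink), 2))
```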

Journal ArticleDOI
TL;DR: In this article, it was shown that nonparametric regression is asymptotically equivalent to a sequence of Gaussian white noise experiments as the number of observations tends to infinity.
Abstract: We show that nonparametric regression is asymptotically equivalent, in Le Cam’s sense, to a sequence of Gaussian white noise experiments as the number of observations tends to infinity. We propose a general constructive framework, based on approximation spaces, which allows asymptotic equivalence to be achieved, even in the cases of multivariate and random design.

Journal ArticleDOI
TL;DR: In this article, the authors developed significance tests for wavelet cross spectrum and linear coherence between wind speed and wave elevation series measured from a NOAA buoy on Lake Michigan using simulated signals.
Abstract: . This work attempts to develop significance tests for the wavelet cross spectrum and the wavelet linear coherence as a follow-up study on Ge (2007). Conventional approaches that are used by Torrence and Compo (1998) based on stationary background noise time series were used here in estimating the sampling distributions of the wavelet cross spectrum and the wavelet linear coherence. The sampling distributions are then used for establishing significance levels for these two wavelet-based quantities. In addition to these two wavelet quantities, properties of the phase angle of the wavelet cross spectrum of, or the phase difference between, two Gaussian white noise series are discussed. It is found that the tangent of the principal part of the phase angle approximately has a standard Cauchy distribution and the phase angle is uniformly distributed, which makes it impossible to establish significance levels for the phase angle. The simulated signals clearly show that, when there is no linear relation between the two analysed signals, the phase angle disperses into the entire range of [−π,π] with fairly high probabilities for values close to ±π to occur. Conversely, when linear relations are present, the phase angle of the wavelet cross spectrum settles around an associated value with considerably reduced fluctuations. When two signals are linearly coupled, their wavelet linear coherence will attain values close to one. The significance test of the wavelet linear coherence can therefore be used to complement the inspection of the phase angle of the wavelet cross spectrum. The developed significance tests are also applied to actual data sets, simultaneously recorded wind speed and wave elevation series measured from a NOAA buoy on Lake Michigan. Significance levels of the wavelet cross spectrum and the wavelet linear coherence between the winds and the waves reasonably separated meaningful peaks from those generated by randomness in the data set. As with simulated signals, nearly constant phase angles of the wavelet cross spectrum are found to coincide with large values in the wavelet linear coherence between the winds and the waves. Not limited to geophysics, the significance tests developed in the present work can also be applied to many other quantitative studies using the continuous wavelet transform.

Journal ArticleDOI
TL;DR: The proposed method is an extension of the homomorphic deconvolution, which is used here only to compute the initial estimate of the point-spread function, and gives stable results of clearly higher spatial resolution and better defined tissue structures than in the input images and than the results of the Homomorphic deconVolution alone.
Abstract: A new approach to 2-D blind deconvolution of ultrasonic images in a Bayesian framework is presented. The radio-frequency image data are modeled as a convolution of the point-spread function and the tissue function, with additive white noise. The deconvolution algorithm is derived from statistical assumptions about the tissue function, the point-spread function, and the noise. It is solved as an iterative optimization problem. In each iteration, additional constraints are applied as a projection operator to further stabilize the process. The proposed method is an extension of the homomorphic deconvolution, which is used here only to compute the initial estimate of the point-spread function. Homomorphic deconvolution is based on the assumption that the point-spread function and the tissue function lie in different bands of the cepstrum domain, which is not completely true. This limiting constraint is relaxed in the subsequent iterative deconvolution. The deconvolution is applied globally to the complete radiofrequency image data. Thus, only the global part of the point-spread function is considered. This approach, together with the need for only a few iterations, makes the deconvolution potentially useful for real-time applications. Tests on phantom and clinical images have shown that the deconvolution gives stable results of clearly higher spatial resolution and better defined tissue structures than in the input images and than the results of the homomorphic deconvolution alone.

Journal ArticleDOI
TL;DR: The proposed detector outperforms the energy detector in the presence of noise variance mismatch above 2.3 dB, and some of the trade-offs involved in spectrum sensing with the proposed detector are discussed.
Abstract: The spectrum sensing of a wideband frequency range is studied by dividing it into multiple subbands. It is assumed that in each subband either a primary user (PU) is active or absent in an additive white Gaussian noise environment with an unknown variance. It is also assumed that at least a minimum given number of subbands are vacant of PUs. In this multiple interrelated hypothesis testing problem, the noise variance is estimated and a generalised likelihood ratio detector is proposed to identify possible spectrum holes at a secondary user (SU). Provided that it is known that a specific PU can occupy a subset of subbands simultaneously, a grouping algorithm which allows faster spectrum sensing is proposed. The collaboration of multiple SUs can also be considered in order to enhance the detection performance. The collaborative algorithms are compared in terms of the required exchange of information among SUs in some collaboration methods. The simulation results show that the proposed detector outperforms the energy detector in the presence of noise variance mismatch above 2.3 dB. Some involved trade-offs in the spectrum sensing using the proposed detector are discussed.
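For reference, here is the energy-detector baseline the proposed GLRT is compared against, including the effect of a small noise-variance mismatch on the threshold. The Gaussian threshold approximation, the SNR, and the 0.5 dB mismatch are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def energy_detector(y, noise_var, pfa=0.01):
    """Classic energy detector: declare a primary user present when the
    measured energy exceeds a threshold set for false-alarm rate pfa,
    using a Gaussian approximation to the chi-square statistic."""
    N = len(y)
    thresh = noise_var * (N + np.sqrt(2 * N) * norm.ppf(1 - pfa))
    return np.sum(y**2) > thresh

N, noise_var = 1000, 1.0
# Vacant subbands: noise only. Occupied subbands: PU signal at -7 dB SNR.
vacant = np.sqrt(noise_var) * rng.standard_normal((2000, N))
signal = np.sqrt(noise_var * 10 ** (-7 / 10)) * rng.standard_normal((2000, N))
occupied = vacant + signal

pfa_hat = np.mean([energy_detector(y, noise_var) for y in vacant])
pd_hat = np.mean([energy_detector(y, noise_var) for y in occupied])
print(f"matched variance : pfa={pfa_hat:.3f}, pd={pd_hat:.3f}")

# A 0.5 dB error in the assumed noise variance shifts the threshold and
# degrades the operating point, which motivates a GLRT with an
# estimated noise variance.
mismatched = noise_var * 10 ** (0.5 / 10)
pfa_mm = np.mean([energy_detector(y, mismatched) for y in vacant])
pd_mm = np.mean([energy_detector(y, mismatched) for y in occupied])
print(f"0.5 dB mismatch  : pfa={pfa_mm:.3f}, pd={pd_mm:.3f}")
```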

MonographDOI
01 Jul 2008
TL;DR: White noise analysis has an aspect of infinite dimensional harmonic analysis arising from the infinite dimensional rotation group; with the help of this group, it has explored new areas of mathematics and extended its fields of application.
Abstract: White noise analysis is an advanced stochastic calculus that has developed extensively over the past three decades. It has two main characteristics. One is the notion of generalized white noise functionals, whose introduction is oriented by the line of advanced analysis, and which have contributed enormously to many fields of science. The other characteristic is that white noise analysis has an aspect of infinite dimensional harmonic analysis arising from the infinite dimensional rotation group. With the help of this rotation group, white noise analysis has explored new areas of mathematics and has extended its fields of application.