
Showing papers on "White noise published in 2011"


Proceedings ArticleDOI
22 May 2011
TL;DR: The results show that, compared with EEMD, the method presented here provides better spectral separation of the modes and requires fewer sifting iterations, reducing the computational cost.
Abstract: In this paper an algorithm based on the ensemble empirical mode decomposition (EEMD) is presented. The key idea of EEMD is to average the modes obtained by applying EMD to several realizations of Gaussian white noise added to the original signal. The resulting decomposition solves the EMD mode-mixing problem, but it introduces new problems of its own. In the method proposed here, a particular noise is added at each stage of the decomposition and a unique residue is computed to obtain each mode. The resulting decomposition is complete, with numerically negligible error. Two examples are presented: a discrete Dirac delta function and an electrocardiogram signal. The results show that, compared with EEMD, the new method also provides better spectral separation of the modes and requires fewer sifting iterations, reducing the computational cost.
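
The method described here is commonly known as CEEMDAN. Below is a minimal sketch using the third-party PyEMD package (`pip install EMD-signal`); the class name and parameters are assumptions based on that package's documented API, and the test signal is illustrative:

```python
import numpy as np
from PyEMD import CEEMDAN  # community implementation of this decomposition

t = np.linspace(0, 1, 1000)
s = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 40 * t)

# Noise is added at each stage and a unique residue is kept per mode,
# so the modes plus the final residue reconstruct the input signal.
ceemdan = CEEMDAN(trials=100, epsilon=0.05)  # ensemble size, noise scale
modes = ceemdan(s)

residue = s - modes.sum(axis=0)  # whatever is left after all modes
print("modes:", modes.shape[0], "residue span:", residue.min(), residue.max())
```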

1,517 citations


Journal ArticleDOI
TL;DR: In this paper, an adaptive convolution of Gaussian white noise with a real-space transfer function kernel together with an adaptive multi-grid Poisson solver is used to generate displacements and velocities following first- or second-order Lagrangian perturbation theory (2LPT).
Abstract: We discuss a new algorithm to generate multi-scale initial conditions with multiple levels of refinements for cosmological 'zoom-in' simulations. The method uses an adaptive convolution of Gaussian white noise with a real-space transfer function kernel together with an adaptive multi-grid Poisson solver to generate displacements and velocities following first- (1LPT) or second-order Lagrangian perturbation theory (2LPT). The new algorithm achieves rms relative errors of the order of 10⁻⁴ for displacements and velocities in the refinement region and thus improves in terms of errors by about two orders of magnitude over previous approaches. In addition, errors are localized at coarse-fine boundaries and do not suffer from Fourier-space-induced interference ringing. An optional hybrid multi-grid and Fast Fourier Transform (FFT) based scheme is introduced which has identical Fourier-space behaviour as traditional approaches. Using a suite of re-simulations of a galaxy cluster halo our real-space-based approach is found to reproduce correlation functions, density profiles, key halo properties and subhalo abundances with per cent level accuracy. Finally, we generalize our approach for two-component baryon and dark-matter simulations and demonstrate that the power spectrum evolution is in excellent agreement with linear perturbation theory. For initial baryon density fields, it is suggested to use the local Lagrangian approximation in order to generate a density field for mesh-based codes that is consistent with the Lagrangian perturbation theory instead of the current practice of using the Eulerian linearly scaled densities.

564 citations


Journal ArticleDOI
TL;DR: In this paper, the authors derived exact expressions for the asymptotic MSE of x1,λ, and evaluated its worst-case noise sensitivity over all types of k-sparse signals.
Abstract: Consider the noisy underdetermined system of linear equations: y = Ax₀ + z, with A an n × N measurement matrix, n < N, and z ~ N(0, σ²I) Gaussian white noise. Both y and A are known, both x₀ and z are unknown, and we seek an approximation to x₀. When x₀ has few nonzeros, useful approximations are often obtained by l1-penalized l2 minimization, in which the reconstruction x1,λ solves min{||y − Ax||₂²/2 + λ||x||₁}. Consider the reconstruction mean-squared error MSE = E||x1,λ − x₀||₂²/N, and define the ratio MSE/σ² as the noise sensitivity. Consider matrices A with i.i.d. Gaussian entries and a large-system limit in which n, N → ∞ with n/N → δ and k/n → ρ. We develop exact expressions for the asymptotic MSE of x1,λ, and evaluate its worst-case noise sensitivity over all types of k-sparse signals. The phase space 0 ≤ δ, ρ ≤ 1 is partitioned by the curve ρ = ρMSE(δ) into two regions. Formal noise sensitivity is bounded throughout the region ρ < ρMSE(δ) and is unbounded throughout the region ρ > ρMSE(δ). The phase boundary ρ = ρMSE(δ) is identical to the previously known phase transition curve for equivalence of l1 − l0 minimization in the k-sparse noiseless case. Hence, a single phase boundary describes the fundamental phase transitions for both the noiseless and noisy cases. Extensive computational experiments validate these predictions, including the underlying game-theoretic structures (saddlepoints in the payoff, least-favorable signals, and maximin penalization). Underlying our formalism is an approximate message passing (AMP) soft-thresholding algorithm introduced earlier by the authors. Other papers by the authors detail expressions for the formal MSE of AMP and its close connection to l1-penalized reconstruction. The focus of the present paper is on computing the minimax formal MSE within the class of sparse signals x₀.
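
The approximate message passing (AMP) algorithm referred to above is, at its core, iterative soft thresholding plus an Onsager correction term. A minimal sketch under assumed tuning (threshold set as a multiple α of the estimated noise level, fixed iteration count), not the paper's exact minimax policy:

```python
import numpy as np

def soft(u, t):
    """Soft thresholding, the scalar denoiser used by AMP."""
    return np.sign(u) * np.maximum(np.abs(u) - t, 0.0)

def amp(y, A, alpha=1.5, iters=30):
    """Generic AMP sketch for min ||y - Ax||_2^2/2 + lam*||x||_1."""
    n, N = A.shape
    x, z = np.zeros(N), y.copy()
    for _ in range(iters):
        sigma = np.linalg.norm(z) / np.sqrt(n)   # effective noise estimate
        x = soft(x + A.T @ z, alpha * sigma)     # threshold the pseudo-data
        onsager = z * (np.count_nonzero(x) / n)  # (1/delta) * <eta'> term
        z = y - A @ x + onsager
    return x

rng = np.random.default_rng(0)
n, N, k = 250, 500, 25
A = rng.normal(0, 1 / np.sqrt(n), (n, N))
x0 = np.zeros(N); x0[:k] = rng.normal(0, 1, k)
y = A @ x0 + 0.05 * rng.normal(size=n)
print(np.mean((amp(y, A) - x0) ** 2))  # empirical MSE
```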

341 citations


Journal ArticleDOI
06 Oct 2011-Nature
TL;DR: In this paper, the authors measured the spectrum of thermal noise by confining the Brownian fluctuations of a microsphere in a strong optical trap, and showed that hydrodynamic correlations result in a resonant peak in the power spectral density of the sphere's positional fluctuations, in strong contrast to overdamped systems.
Abstract: In Brownian motion, a particle's movement is driven by rapid collisions with the surrounding solvent molecules; this thermal force is assumed to be random and characterized by a Gaussian white noise spectrum. Friction between the particle and the viscous solvent damps its motion. However, the displaced fluid acts back on the particle, giving rise to a hydrodynamic 'memory' and thermal forces with a coloured noise spectrum. Direct experimental observation of a coloured spectrum has proved difficult. Sylvia Jeney and colleagues now report clear evidence for it in measurements of the Brownian fluctuations of a microsphere in a strong optical trap. They anticipate that such details in thermal noise could be exploited for the development of new types of sensors and particle-based assays in lab-on-a-chip applications. Observation of the Brownian motion of a small probe interacting with its environment provides one of the main strategies for characterizing soft matter1,2,3,4. Essentially, two counteracting forces govern the motion of the Brownian particle. First, the particle is driven by rapid collisions with the surrounding solvent molecules, referred to as thermal noise. Second, the friction between the particle and the viscous solvent damps its motion. Conventionally, the thermal force is assumed to be random and characterized by a Gaussian white noise spectrum. The friction is assumed to be given by the Stokes drag, suggesting that motion is overdamped at long times in particle tracking experiments, when inertia becomes negligible. However, as the particle receives momentum from the fluctuating fluid molecules, it also displaces the fluid in its immediate vicinity. The entrained fluid acts back on the particle and gives rise to long-range correlations5,6. This hydrodynamic ‘memory’ translates to thermal forces, which have a coloured, that is, non-white, noise spectrum. One hundred years after Perrin’s pioneering experiments on Brownian motion7,8,9, direct experimental observation of this colour is still elusive10. Here we measure the spectrum of thermal noise by confining the Brownian fluctuations of a microsphere in a strong optical trap. We show that hydrodynamic correlations result in a resonant peak in the power spectral density of the sphere’s positional fluctuations, in strong contrast to overdamped systems. Furthermore, we demonstrate different strategies to achieve peak amplification. By analogy with microcantilever-based sensors11,12, our results reveal that the particle–fluid–trap system can be considered a nanomechanical resonator in which the intrinsic hydrodynamic backflow enhances resonance. Therefore, instead of being treated as a disturbance, details in thermal noise could be exploited for the development of new types of sensor and particle-based assay in lab-on-a-chip applications13,14.

299 citations


Journal ArticleDOI
TL;DR: In this article, the response of an inductive power generator with a bistable symmetric potential to stationary random environmental excitations is investigated, and it is shown that the expected value of the generator's output power is independent of the potential shape.

238 citations


Journal ArticleDOI
TL;DR: The estimator approaches Jacobsen's estimator for large N and includes a bias correction which is especially important for small and medium values of N.
Abstract: The parameter estimation of a complex exponential waveform observed under white noise is typically tackled in two stages. In the first stage, a coarse frequency estimate is found by applying an N-point DFT to the input of length N. In the second stage, a fine search is conducted around the peak determined in the first stage. The method proposed in this paper presents a simpler alternative. It uses a nonlinear relation involving three DFT samples already calculated in the first stage to produce a real-valued, fine-resolution frequency estimate. The estimator approaches Jacobsen's estimator for large N and includes a bias correction which is especially important for small and medium values of N.
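
A sketch of this kind of three-sample estimator: Jacobsen's ratio computed from the DFT peak and its two neighbours, scaled by the bias-correction factor tan(π/N)/(π/N) associated with this method (the exact correction form is an assumption here); the test tone is illustrative:

```python
import numpy as np

def three_sample_freq(x):
    """Fine frequency estimate (cycles/sample) from three DFT samples."""
    N = len(x)
    X = np.fft.fft(x)
    k = int(np.argmax(np.abs(X)))               # coarse stage: DFT peak
    num = X[(k - 1) % N] - X[(k + 1) % N]
    den = 2 * X[k] - X[(k - 1) % N] - X[(k + 1) % N]
    delta = np.real(num / den)                  # Jacobsen's estimator
    delta *= np.tan(np.pi / N) / (np.pi / N)    # bias correction for small N
    return (k + delta) / N

rng = np.random.default_rng(0)
N, f_true = 64, 0.2031
n = np.arange(N)
x = np.exp(2j * np.pi * f_true * n) \
    + 0.1 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
print(three_sample_freq(x), "vs true", f_true)
```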

219 citations


Journal ArticleDOI
TL;DR: In this paper, the effects of time correlation in weekly GPS position time series on velocity estimates were analyzed in terms of noise content and velocity uncertainty assessment; several noise models were tested, including power-law and Gauss-Markov processes.
Abstract: This study focuses on the effects of time correlation in weekly GPS position time series on velocity estimates. Time series 2.5 to 13 years long from a homogeneously reprocessed solution of 275 globally distributed stations are analyzed in terms of noise content and velocity uncertainty assessment. Several noise models were tested, including power-law and Gauss-Markov processes. The best noise model describing our global data set was a combination of variable white noise and power-law noise models, with mean amplitudes of ∼2 mm and ∼6 mm, respectively, for the sites considered. This noise model provided a mean vertical velocity uncertainty of ∼0.3 mm/yr, 4–5 times larger than under the assumption of uncorrelated data. We demonstrated that the correlated noise content of homogeneously reprocessed data depends on time series length and, especially, on the data time period. Time series of 2–3 years of the oldest data contain noise amplitude similar to that found for time series of 12 years. The data time period should be taken into account when estimating correlated noise content, when comparing different noise estimations, or when applying an external noise estimation to assess velocity uncertainty. We showed that the data period dependency cannot be explained by the growing tracking network or the ambiguity fixation rate, but is probably related to the amount and quality of the recorded data.

206 citations


Journal ArticleDOI
TL;DR: In this article, it was shown that for the vast majority of measurement schemes employed in compressed sensing, the two models are equivalent with the important difference that the signal-to-noise ratio is divided by a factor proportional to p/n, where p is the dimension of the signal and n is the number of observations.
Abstract: The literature on compressed sensing has focused almost entirely on settings where the signal is noiseless and the measurements are contaminated by noise. In practice, however, the signal itself is often subject to random noise prior to measurement. We briefly study this setting and show that, for the vast majority of measurement schemes employed in compressed sensing, the two models are equivalent, with the important difference that the signal-to-noise ratio (SNR) is divided by a factor proportional to p/n, where p is the dimension of the signal and n is the number of observations. Since p/n is often large, this leads to noise folding, which can have a severe impact on the SNR.
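
The p/n factor is easy to check numerically: with i.i.d. N(0, 1/n) entries (unit expected column norm), signal-domain noise w of variance σ_w² folds into measurement noise of variance about (p/n)·σ_w². A toy check:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, sigma_w = 100, 2000, 1.0
A = rng.normal(0.0, 1.0 / np.sqrt(n), (n, p))  # unit expected column norms
w = rng.normal(0.0, sigma_w, p)                # noise on the signal itself

folded = A @ w                 # shows up as extra measurement noise
print(np.var(folded))          # ~ (p/n) * sigma_w**2
print(p / n * sigma_w**2)      # = 20.0
```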

169 citations


Journal ArticleDOI
Hongwei Guo1
TL;DR: Gaussian functions are suitable for describing many processes in mathematics, science, and engineering, making them very useful in the fields of signal and image processing.
Abstract: Gaussian functions are suitable for describing many processes in mathematics, science, and engineering, making them very useful in the fields of signal and image processing. For example, the random noise in a signal, induced by complicated physical factors, can often be modeled simply with the Gaussian distribution, according to the central limit theorem of probability theory.

163 citations


Journal ArticleDOI
01 Aug 2011
TL;DR: Experimental results showed that EEMD had better noise-filtering performance than EMD and the FIR Wiener filter, owing to reduced mode mixing between nearby IMF scales.
Abstract: Empirical mode decomposition (EMD) is a powerful algorithm that decomposes a signal into a set of intrinsic mode functions (IMFs) according to the signal's complexity. In this study, partial reconstruction of the IMFs, acting as a filter, was used for noise reduction in ECG. An improved algorithm, ensemble EMD (EEMD), was used for the first time to improve the noise-filtering performance by reducing mode mixing between nearby IMF scales. Both standard ECG templates derived from a simulator and the Arrhythmia ECG database were used as ECG signals, while Gaussian white noise was used as the noise source. The mean square error (MSE) between the reconstructed ECG and the original ECG was used as the filter performance indicator. An FIR Wiener filter was also used to compare the filtering performance with EEMD. Experimental results showed that EEMD had better noise-filtering performance than EMD and the FIR Wiener filter. The average MSE ratios of EEMD to EMD and to the FIR Wiener filter were 0.71 and 0.61, respectively. Thus, this study establishes an ECG noise-filtering procedure based on EEMD. The optimal added-noise power and trial number for EEMD were also examined.
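
The partial-reconstruction filter itself is simple once the IMFs are available from any EMD/EEMD implementation. A sketch mirroring the procedure described, with the cutoff index k0 and the variable names hypothetical:

```python
import numpy as np

def partial_reconstruction(imfs, k0):
    """Denoise by discarding the first k0 (noise-dominated) IMFs."""
    return imfs[k0:].sum(axis=0)

def mse(a, b):
    """Filter performance indicator used in the study."""
    return np.mean((a - b) ** 2)

# imfs: array of shape (num_modes, len(ecg)) from an EMD/EEMD routine
# clean_ecg: reference template (e.g., a simulator-derived ECG)
# denoised = partial_reconstruction(imfs, k0=2)
# print(mse(denoised, clean_ecg))
```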

131 citations


Proceedings ArticleDOI
22 May 2011
TL;DR: This work describes a modified feature-extraction procedure in which the time-difference operation is performed in the spectral domain, rather than in the cepstral domain as is currently standard, and finds that delta-spectral features improve the effective SNR for background music and white noise and improve recognition accuracy in reverberant environments.
Abstract: Almost all current automatic speech recognition (ASR) systems conventionally append delta and double-delta cepstral features to static cepstral features. In this work we describe a modified feature-extraction procedure in which the time-difference operation is performed in the spectral domain, rather than in the cepstral domain as is currently standard. We argue that this approach based on “delta-spectral” features is needed because, even though delta-cepstral features capture dynamic speech information and generally greatly improve ASR recognition accuracy, they are not robust to noise and reverberation. We support the validity of the delta-spectral approach both with observations about the modulation spectrum of speech and noise, and with objective experiments that document the benefit that the delta-spectral approach brings to a variety of currently popular feature extraction algorithms. We found that the use of delta-spectral features, rather than the more traditional delta-cepstral features, improves the effective SNR by between 5 and 8 dB for background music and white noise, and that recognition accuracy in reverberant environments is improved as well.
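
A sketch of the core idea, assuming log-mel spectrogram input and the standard regression-based delta computation; the actual paper operates on power spectra with additional processing, so this is an interpretation rather than its exact pipeline:

```python
import numpy as np
from scipy.fft import dct

def regression_delta(F, M=2):
    """Standard delta features over +/- M frames; F has shape (frames, dims)."""
    pad = np.pad(F, ((M, M), (0, 0)), mode='edge')
    T = len(F)
    denom = 2.0 * sum(m * m for m in range(1, M + 1))
    return sum(m * (pad[M + m:M + m + T] - pad[M - m:M - m + T])
               for m in range(1, M + 1)) / denom

def delta_cepstral(log_mel):
    # conventional ordering: DCT first, then time-difference on the cepstra
    return regression_delta(dct(log_mel, type=2, axis=1, norm='ortho'))

def delta_spectral(log_mel):
    # proposed ordering: time-difference in the spectral domain, then DCT
    return dct(regression_delta(log_mel), type=2, axis=1, norm='ortho')
```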

Journal ArticleDOI
TL;DR: A Gaussian sum filter adapted to the two-body problem in space surveillance is proposed and demonstrated to achieve uncertainty consistency and the impact of correct uncertainty representation in the problems of data association (correlation) and anomaly detection is illustrated.
Abstract: While standard Kalman-based filters, Gaussian assumptions, and covariance-weighted metrics are very effective in data-rich tracking environments, their use in the data-sparse environment of space surveillance is more limited. To properly characterize non-Gaussian density functions arising in the problem of long-term propagation of state uncertainties, a Gaussian sum filter adapted to the two-body problem in space surveillance is proposed and demonstrated to achieve uncertainty consistency. The proposed filter is made efficient by using only a one-dimensional Gaussian sum in equinoctial orbital elements, thereby avoiding the expensive representation of a full six-dimensional mixture and hence the “curse of dimensionality.” Additionally, an alternate set of equinoctial elements is proposed and is shown to provide enhanced uncertainty consistency over the traditional element set. Simulation studies illustrate the improvements of the Gaussian sum approach over the traditional unscented Kalman filter and the impact of correct uncertainty representation in the problems of data association (correlation) and anomaly (maneuver) detection.

Journal ArticleDOI
TL;DR: The proposed method showed promising results and high noise robustness across a wide range of heart sounds; however, more tests are needed to address any bias that may have been introduced by the different sources of heart sounds in the current training set, and to concretely validate the method.
Abstract: A new framework for heart sound analysis is proposed. One of the most difficult processes in heart sound analysis is segmentation, due to interference from murmurs. Equal numbers of cardiac cycles were extracted from heart sounds with different heart rates using information from the envelopes of autocorrelation functions, without the need to label individual fundamental heart sounds (FHS). The complete method consists of envelope detection, calculation of cardiac cycle lengths using autocorrelation of envelope signals, feature extraction using the discrete wavelet transform, principal component analysis, and classification using neural network bagging predictors. The proposed method was tested on a set of heart sounds obtained from several online databases and recorded with an electronic stethoscope. The geometric mean was used as the performance index. Average classification performance using ten-fold cross-validation was 0.92 for the noise-free case, 0.90 under white noise with 10 dB signal-to-noise ratio (SNR), and 0.90 under impulse noise of up to 0.3 s duration. The proposed method showed promising results and high noise robustness across a wide range of heart sounds. However, more tests are needed to address any bias that may have been introduced by the different sources of heart sounds in the current training set, and to concretely validate the method. Further work includes building a new training set recorded from actual patients, and then further evaluating the method on this new training set.
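
The cycle-length step described (cardiac cycle length from the autocorrelation of the envelope, without labeling individual FHS) can be sketched as below; the envelope extraction and the physiological heart-rate bounds are assumptions:

```python
import numpy as np
from scipy.signal import hilbert

def cardiac_cycle_length(heart_sound, fs, min_bpm=40, max_bpm=200):
    """Estimate cycle length (s) as the autocorrelation peak of the envelope."""
    env = np.abs(hilbert(heart_sound))          # one possible envelope choice
    env = env - env.mean()
    ac = np.correlate(env, env, mode='full')[len(env) - 1:]
    lo = int(fs * 60.0 / max_bpm)               # shortest plausible cycle
    hi = int(fs * 60.0 / min_bpm)               # longest plausible cycle
    lag = lo + int(np.argmax(ac[lo:hi]))
    return lag / fs
```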

Journal ArticleDOI
17 Mar 2011-PLOS ONE
TL;DR: This study examines how noise, sampling frequency and time series length influence various measures of entropy applied to human center of pressure (CoP) data and to synthetic signals with known properties, and suggests that long-range correlations should be removed from CoP data prior to calculating entropy.
Abstract: BACKGROUND: Over the last two decades, various measures of entropy have been used to examine the complexity of human postural control. In general, entropy measures provide information regarding the health, stability and adaptability of the postural system that is not captured when using more traditional analytical techniques. The purpose of this study was to examine how noise, sampling frequency and time series length influence various measures of entropy when applied to human center of pressure (CoP) data, as well as in synthetic signals with known properties. Such a comparison is necessary to interpret data between and within studies that use different entropy measures, equipment, sampling frequencies or data collection durations. METHODS AND FINDINGS: The complexity of synthetic signals with known properties and standing CoP data was calculated using Approximate Entropy (ApEn), Sample Entropy (SampEn) and Recurrence Quantification Analysis Entropy (RQAEn). All signals were examined at varying sampling frequencies and with varying amounts of added noise. Additionally, an increment time series of the original CoP data was examined to remove long-range correlations. Of the three measures examined, ApEn was the least robust to sampling frequency and noise manipulations. Additionally, increased noise led to an increase in SampEn, but a decrease in RQAEn. Thus, noise can yield inconsistent results between the various entropy measures. Finally, the differences between the entropy measures were minimized in the increment CoP data, suggesting that long-range correlations should be removed from CoP data prior to calculating entropy. CONCLUSIONS: The various algorithms typically used to quantify the complexity (entropy) of CoP may yield very different results, particularly when sampling frequency and noise are different. The results of this study are discussed within the context of the neural noise and loss of complexity hypotheses.
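
For reference, a compact implementation of one of the measures compared (Sample Entropy) under the usual conventions (m = 2, tolerance r as a fraction of the standard deviation, Chebyshev distance); it follows the standard definition, not this paper's specific code, and uses O(N²) memory:

```python
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    """SampEn(m, r) with tolerance r given as a fraction of the SD."""
    x = np.asarray(x, dtype=float)
    tol = r * np.std(x)

    def match_count(mm):
        # all templates of length mm, pairwise Chebyshev distances
        T = np.lib.stride_tricks.sliding_window_view(x, mm)
        d = np.max(np.abs(T[:, None, :] - T[None, :, :]), axis=2)
        return (np.sum(d <= tol) - len(T)) / 2.0   # exclude self-matches

    B = match_count(m)
    A = match_count(m + 1)
    return -np.log(A / B)
```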

Journal ArticleDOI
TL;DR: The proposed CFCC features consistently perform better than the baseline MFCC features under all three mismatched testing conditions and compare favorably to perceptual linear predictive (PLP) and RASTA-PLP features.
Abstract: An auditory-based feature extraction algorithm is presented. We name the new features cochlear filter cepstral coefficients (CFCCs), which are defined based on a recently developed auditory transform (AT) plus a set of modules to emulate the signal processing functions in the cochlea. The CFCC features are applied to a speaker identification task to address the acoustic mismatch problem between training and testing environments. Usually, the performance of acoustic models trained in clean speech drops significantly when tested in noisy speech. The CFCC features have shown strong robustness in this kind of situation. In our experiments, the CFCC features consistently perform better than the baseline MFCC features under all three mismatched testing conditions: white noise, car noise, and babble noise. For example, in clean conditions, both MFCC and CFCC features perform similarly, over 96%, but when the signal-to-noise ratio (SNR) of the input signal is 6 dB, the accuracy of the MFCC features drops to 41.2%, while the CFCC features still achieve an accuracy of 88.3%. The proposed CFCC features also compare favorably to perceptual linear predictive (PLP) and RASTA-PLP features. The CFCC features consistently perform much better than PLP. Under white noise, the CFCC features are significantly better than RASTA-PLP, while under car and babble noise, the CFCC features provide similar performance to RASTA-PLP.

Journal ArticleDOI
TL;DR: This work analyzes numerically the role played by the asymmetry of a piecewise linear potential, in the presence of both a Gaussian white noise and a dichotomous noise, on the resonant activation phenomenon.
Abstract: This work analyzes numerically the role played by the asymmetry of a piecewise linear potential, in the presence of both a Gaussian white noise and a dichotomous noise, on the resonant activation phenomenon. The features of the asymmetry of the potential barrier arise by investigating the stochastic transitions far behind the potential maximum, from the initial well to the bottom of the adjacent potential well. Because of the asymmetry of the potential profile together with the random external force uniform in space, we find, for the different asymmetries: (1) an inversion of the curves of the mean first passage time in the resonant region of the correlation time τ of the dichotomous noise, for low thermal noise intensities; (2) a maximum of the mean velocity of the Brownian particle as a function of τ; and (3) an inversion of the curves of the mean velocity and a very weak current reversal in the miniratchet system obtained with the asymmetrical potential profiles investigated. An inversion of the mean first passage time curves is also observed by varying the amplitude of the dichotomous noise, behavior confirmed by recent experiments.
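
A minimal Euler–Maruyama sketch of the kind of simulation involved: a Brownian particle in a piecewise-linear potential driven by Gaussian white noise plus dichotomous noise, with the mean first passage time (MFPT) averaged over trajectories. The potential shape, noise amplitudes, and boundaries are illustrative assumptions, not the paper's parameters:

```python
import numpy as np

def mean_first_passage_time(tau, D=0.1, amp=1.5, L=1.0, x_end=1.2,
                            dt=1e-3, n_traj=100, seed=0):
    """MFPT from x=0 over a piecewise-linear barrier peaking at x=L,
    under white noise (intensity D) and dichotomous noise +/-amp with
    correlation time tau (switching rate 1/(2*tau))."""
    rng = np.random.default_rng(seed)
    p_flip = dt / (2.0 * tau)
    times = []
    for _ in range(n_traj):
        x, t = 0.0, 0.0
        eta = amp * rng.choice([-1.0, 1.0])
        while x < x_end:
            if rng.random() < p_flip:
                eta = -eta                  # dichotomous noise flips state
            if x < 0.0:
                force = 1.0                 # left wall of the starting well
            elif x < L:
                force = -1.0                # uphill side of the barrier
            else:
                force = 1.0                 # downhill past the maximum
            x += (force + eta) * dt + np.sqrt(2.0 * D * dt) * rng.standard_normal()
            t += dt
        times.append(t)
    return np.mean(times)

# resonant activation: scan the correlation time of the dichotomous noise
# for tau in (0.1, 1.0, 10.0): print(tau, mean_first_passage_time(tau))
```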

Journal ArticleDOI
TL;DR: In this article, the authors prove pathwise uniqueness for solutions of parabolic stochastic PDEs with multiplicative white noise if the coefficient is Hölder continuous of index γ > 3/4.
Abstract: We prove pathwise uniqueness for solutions of parabolic stochastic PDEs with multiplicative white noise if the coefficient is Hölder continuous of index γ > 3/4. The method of proof is an infinite-dimensional version of the Yamada–Watanabe argument for ordinary stochastic differential equations.

Journal ArticleDOI
TL;DR: Closed-form formulas for the transfer function of the optimal filter and for the mean-square phase error are derived for the case where the phase noise is modelled as a random phase walk, and a suboptimal filter is proposed.
Abstract: The paper deals with carrier recovery based on pilot symbols in single-carrier systems. The system model considered in the paper includes the channel's additive white noise and the phase noise that affects the local oscillators used for up/down-conversion. Wiener's method is used to determine the optimal filter for estimating the phase noise, assuming that a sequence of equally spaced pilot symbols is available. Our analysis captures the cyclostationary performance of the estimate, a phenomenon that is not considered in the previous literature. In the paper, closed-form formulas for the transfer function of the optimal filter and for the mean-square phase error are derived for the case where the phase noise is modelled as a random phase walk. For this case, a suboptimal filter is proposed. Numerical results are presented to substantiate the analysis.

Journal ArticleDOI
TL;DR: In this article, the velocity uncertainties from GPS position time series that are affected by time-correlated noise are derived based on the Allan variance, which is widely used in the estimation of oscillator stability and requires neither spectral analysis nor maximum likelihood estimation.
Abstract: [1] We present a method to derive velocity uncertainties from GPS position time series that are affected by time-correlated noise. This method is based on the Allan variance, which is widely used in the estimation of oscillator stability and requires neither spectral analysis nor maximum likelihood estimation (MLE). The Allan variance of the rate (AVR) is calculated in the time domain and hence is not too sensitive to gaps in the time series. We derived analytical expressions of the AVR for different kinds of noises like power law noise, white noise, flicker noise, and random walk and found an expression for the variance produced by an annual signal. These functional relations form the basis of error models that have to be fitted to the AVR in order to estimate the velocity uncertainty. Finally, we applied the method to the South Africa GPS network TrigNet. Most time series show noise characteristics that can be modeled by a power law noise plus an annual signal. The method is computationally very cheap, and the results are in good agreement with the ones obtained by methods based on MLE.

Journal ArticleDOI
TL;DR: It is proved that the generalized Poisson processes have a sparse representation in a wavelet-like basis subject to a mild matching condition, and a limit example of a sparse process is presented whose MAP signal estimator is equivalent to the popular TV-denoising algorithm.
Abstract: We introduce an extended family of continuous-domain stochastic models for sparse, piecewise-smooth signals. These are specified as solutions of stochastic differential equations, or, equivalently, in terms of a suitable innovation model; the latter is analogous conceptually to the classical interpretation of a Gaussian stationary process as filtered white noise. The two specific features of our approach are 1) signal generation is driven by a random stream of Dirac impulses (Poisson noise) instead of Gaussian white noise, and 2) the class of admissible whitening operators is considerably larger than what is allowed in the conventional theory of stationary processes. We provide a complete characterization of these finite-rate-of-innovation signals within Gelfand's framework of generalized stochastic processes. We then focus on the class of scale-invariant whitening operators which correspond to unstable systems. We show that these can be solved by introducing proper boundary conditions, which leads to the specification of random, spline-type signals that are piecewise-smooth. These processes are the Poisson counterpart of fractional Brownian motion; they are nonstationary and have the same 1/ω-type spectral signature. We prove that the generalized Poisson processes have a sparse representation in a wavelet-like basis subject to some mild matching condition. We also present a limit example of sparse process that yields a MAP signal estimator that is equivalent to the popular TV-denoising algorithm.
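
To make the innovation model concrete in its simplest case: with whitening operator L = d/dt and Gaussian impulse weights, integrating a random stream of Dirac impulses gives a sparse, piecewise-constant signal (a compound Poisson process), the Poisson counterpart of Brownian motion. A sketch with illustrative parameters:

```python
import numpy as np

def compound_poisson(T, rate, n_grid=1000, seed=0):
    """Integrate a random stream of Dirac impulses (impulsive Poisson noise):
    the result is a sparse, piecewise-constant signal."""
    rng = np.random.default_rng(seed)
    n_imp = rng.poisson(rate * T)                # number of impulses
    times = np.sort(rng.uniform(0.0, T, n_imp))  # impulse locations
    amps = rng.normal(0.0, 1.0, n_imp)           # impulse weights
    t = np.linspace(0.0, T, n_grid)
    x = np.array([amps[times <= ti].sum() for ti in t])
    return t, x

t, x = compound_poisson(T=10.0, rate=2.0)  # ~20 jumps, piecewise constant
```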

Journal ArticleDOI
TL;DR: The conversion factor turns out to be simply the sampling rate in the full-resolution case, and with this conversion HSA and Fourier spectral analysis results can be compared quantitatively.
Abstract: As the original definition of the Hilbert spectrum was given in terms of total energy and amplitude, there is a mismatch between the Hilbert spectrum and the traditional Fourier spectrum, which is defined in terms of energy density. Rigorous definitions of the Hilbert energy and amplitude spectra are given in terms of energy and amplitude density in the time-frequency space. Unlike Fourier spectral analysis, where the resolution is fixed once the data length and sampling rate are given, the time-frequency resolution can be arbitrarily assigned in Hilbert spectral analysis (HSA). Furthermore, HSA can also provide zooming ability for detailed examination of the data in a specific frequency range with full resolution power. These complications have made the conversion between Hilbert and Fourier spectral results difficult, and the conversion formula has been elusive until now. We derive a simple relationship between them in this paper. The conversion factor turns out to be simply the sampling rate in the full-resolution case. In the case of zooming, there is an additional multiplicative factor. The conversion factors have been tested in various cases including white noise, a delta function, and signals from natural phenomena. With the introduction of this conversion, HSA and Fourier spectral analysis results can be compared quantitatively.

Journal ArticleDOI
TL;DR: In this article, a hierarchical entropy (HE) method is developed to quantify the complexity of a time series based on hierarchical decomposition and entropy analysis, and is applied to Gaussian white noise and 1/f noise.

Journal ArticleDOI
TL;DR: In this article, the authors study the BvM phenomenon for the infinite-dimensional Gaussian white noise model governed by Gaussian prior with diagonal-covariance structure and show that positive results regarding frequentist probability coverage of credible sets can only be obtained if the prior assigns null mass to the parameter space.
Abstract: We study the Bernstein-von Mises (BvM) phenomenon, i.e., Bayesian credible sets and frequentist confidence regions for the estimation error coincide asymptotically, for the infinite-dimensional Gaussian white noise model governed by Gaussian prior with diagonal-covariance structure. While in parametric statistics this fact is a consequence of (a particular form of) the BvM Theorem, in the nonparametric setup, however, the BvM Theorem is known to fail even in some, apparently, elementary cases. In the present paper we show that BvM-like statements hold for this model, provided that the parameter space is suitably embedded into the support of the prior. The overall conclusion is that, unlike in the parametric setup, positive results regarding frequentist probability coverage of credible sets can only be obtained if the prior assigns null mass to the parameter space.

Journal ArticleDOI
TL;DR: A modified STFT is proposed such that all coefficients coming from white Gaussian noise are circular, together with a time-frequency segmentation algorithm based on successive iterations of noise variance estimation and time-frequency coefficient detection.
Abstract: This paper investigates the circularity of short-time Fourier transform (STFT) coefficients in the noise-only case, and proposes a modified STFT such that all coefficients coming from white Gaussian noise are circular. In order to use the spectral kurtosis (SK) as a Gaussianity test to check whether signal points are present in a set of STFT points, we consider the SK of complex circular random variables and its link with the kurtosis of the real and imaginary parts. We show that the variance of the SK is smaller than the variance of the kurtosis estimated from both the real and imaginary parts. The effect of the noncircularity of Gaussian variables upon the spectral kurtosis of STFT coefficients is studied, as well as the effect of signal presence. Finally, a time-frequency segmentation algorithm based on successive iterations of noise variance estimation and time-frequency coefficient detection is proposed. The iterations are stopped when the spectral kurtosis on nondetected points reaches zero. Examples of segmented time-frequency space are presented for a dolphin whistle and for a simulated signal in nonwhite and nonstationary Gaussian noise.
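
The SK statistic used as a Gaussianity test here reduces, for circular complex coefficients, to a fourth-over-squared-second moment ratio that vanishes for white Gaussian noise. A sketch; the STFT front end and the detection threshold are placeholders:

```python
import numpy as np
from scipy.signal import stft

def spectral_kurtosis(X):
    """SK per frequency bin; X is a complex STFT array (bins, frames).
    For circularly symmetric Gaussian noise E|X|^4 / (E|X|^2)^2 = 2,
    so SK = 0; signal-bearing bins push SK away from zero."""
    m2 = np.mean(np.abs(X) ** 2, axis=1)
    m4 = np.mean(np.abs(X) ** 4, axis=1)
    return m4 / m2 ** 2 - 2.0

# f, t, X = stft(x, fs=fs, nperseg=256)   # placeholder front end
# candidate_bins = np.where(spectral_kurtosis(X) > threshold)[0]
```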

Journal ArticleDOI
TL;DR: In this paper, the authors show that the asymptotic null distribution of the Box-Pierce test statistic with general weights still holds under unknown weak dependence as long as the lag truncation number grows at an appropriate rate with increasing sample size.
Abstract: Testing for white noise has been well studied in the literature of econometrics and statistics. For most of the proposed test statistics, such as the well-known Box–Pierce test statistic with fixed lag truncation number, the asymptotic null distributions are obtained under independent and identically distributed assumptions and may not be valid for dependent white noise. Because of recent popularity of conditional heteroskedastic models (e.g., generalized autoregressive conditional heteroskedastic [GARCH] models), which imply nonlinear dependence with zero autocorrelation, there is a need to understand the asymptotic properties of the existing test statistics under unknown dependence. In this paper, we show that the asymptotic null distribution of the Box–Pierce test statistic with general weights still holds under unknown weak dependence as long as the lag truncation number grows at an appropriate rate with increasing sample size. Further applications to diagnostic checking of the autoregressive moving average (ARMA) and fractional autoregressive integrated moving average (FARIMA) models with dependent white noise errors are also addressed. Our results go beyond earlier ones by allowing non-Gaussian and conditional heteroskedastic errors in the ARMA and FARIMA models and provide theoretical support for some empirical findings reported in the literature.
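
A sketch of the weighted Box–Pierce statistic with a lag truncation K that grows with the sample size; uniform weights give the classical statistic, and the chi-square benchmark in the comment is the fixed-K i.i.d. calibration rather than the paper's corrected asymptotics:

```python
import numpy as np

def box_pierce(x, K, weights=None):
    """Q = n * sum_k w_k * rho_k^2 over lags k = 1..K."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    n = len(x)
    acov = np.correlate(x, x, mode='full')[n - 1:] / n
    rho = acov[1:K + 1] / acov[0]            # sample autocorrelations
    w = np.ones(K) if weights is None else np.asarray(weights)
    return n * np.sum(w * rho ** 2)

rng = np.random.default_rng(0)
x = rng.standard_normal(1000)                # i.i.d. white noise
K = int(len(x) ** 0.4)                       # truncation growing with n (illustrative rate)
print(box_pierce(x, K))                      # compare to a chi-square_K quantile
```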

Journal ArticleDOI
TL;DR: A fast and low-complexity multiobjective Gauss-Newton algorithm is presented for estimating the fundamental phasor and frequency of the power signal instantly and accurately; it can be extended to include time-varying harmonics and interharmonics mixed with noise of low signal-to-noise ratio with a great degree of accuracy.
Abstract: This paper presents an adaptive method for tracking the amplitude, phase, and frequency of a time-varying sinusoid in white noise. Although the conventional techniques like adaptive linear elements and discrete or fast Fourier transforms are still widely used in many applications, their accuracy and convergence speed pose serious limitations under sudden supply frequency drift, fundamental amplitude, or phase variations. This paper, therefore, proposes a fast and low-complexity multiobjective Gauss-Newton algorithm for estimating the fundamental phasor and frequency of the power signal instantly and accurately. Further, the learning parameters in the proposed algorithm are tuned iteratively to provide faster convergence and better accuracy. The proposed method can also be extended to include time-varying harmonics and interharmonics mixed with noise of low signal-to-noise ratio with a great degree of accuracy. Numerical and experimental results are presented in support of the effectiveness of the new approach.
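
A bare-bones Gauss–Newton sketch for the core subproblem, fitting amplitude, phase, and frequency of a single sinusoid in white noise; the paper's multiobjective formulation, adaptive tuning, and harmonic extensions are not reproduced here:

```python
import numpy as np

def gauss_newton_sinusoid(x, fs, f0, iters=10):
    """Fit x[n] ~ a*sin(w t) + b*cos(w t) by Gauss-Newton from initial f0 (Hz)."""
    t = np.arange(len(x)) / fs
    w = 2 * np.pi * f0
    a, b = 0.0, 0.0
    for _ in range(iters):
        s, c = np.sin(w * t), np.cos(w * t)
        J = np.column_stack([s, c, t * (a * c - b * s)])  # Jacobian wrt (a, b, w)
        r = x - (a * s + b * c)                           # current residual
        da, db, dw = np.linalg.lstsq(J, r, rcond=None)[0]
        a, b, w = a + da, b + db, w + dw
    return np.hypot(a, b), np.arctan2(b, a), w / (2 * np.pi)  # amp, phase, Hz

rng = np.random.default_rng(0)
fs = 5000.0
n = np.arange(400)
x = 1.2 * np.sin(2 * np.pi * 50.3 * n / fs + 0.7) + 0.05 * rng.standard_normal(400)
print(gauss_newton_sinusoid(x, fs, f0=50.0))   # ~ (1.2, 0.7, 50.3)
```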

Journal ArticleDOI
TL;DR: In this paper, the response, in terms of the probability density function, of nonlinear systems under combined normal and Poisson white noise is obtained via a Path Integral Solution (PIS), which may be regarded as a step-by-step solution technique.

Posted Content
TL;DR: In this paper, the authors introduce a general distributional framework that results in a unifying description and characterization of a rich variety of continuous-time stochastic processes, including CARMA processes.
Abstract: We introduce a general distributional framework that results in a unifying description and characterization of a rich variety of continuous-time stochastic processes. The cornerstone of our approach is an innovation model that is driven by some generalized white noise process, which may be Gaussian or not (e.g., Laplace, impulsive Poisson or alpha-stable). This allows for a conceptual decoupling between the correlation properties of the process, which are imposed by the whitening operator L, and its sparsity pattern, which is determined by the type of noise excitation. The latter is fully specified by a Lévy measure. We show that the range of admissible innovation behavior varies between the purely Gaussian and super-sparse extremes. We prove that the corresponding generalized stochastic processes are well-defined mathematically provided that the (adjoint) inverse of the whitening operator satisfies some Lp bound for p ≥ 1. We present a novel operator-based method that yields an explicit characterization of all Lévy-driven processes that are solutions of constant-coefficient stochastic differential equations. When the underlying system is stable, we recover the family of stationary CARMA processes, including the Gaussian ones. The approach remains valid when the system is unstable and leads to the identification of potentially useful generalizations of the Lévy processes, which are sparse and non-stationary. Finally, we show how we can apply finite difference operators to obtain a stationary characterization of these processes that is maximally decoupled and stable, irrespective of the location of the poles in the complex plane.

Book ChapterDOI
01 Jan 2011
TL;DR: This work presents the framework of statistical inverse problems where the data are corrupted by some stochastic error, and explains some basic issues regarding nonparametric statistics applied to inverse problems.
Abstract: There exist many fields where inverse problems appear. Some examples are: astronomy (blurred images of the Hubble satellite), econometrics (instrumental variables), financial mathematics (model calibration of the volatility), medical image processing (X-ray tomography), and quantum physics (quantum homodyne tomography). These are problems where we have indirect observations of an object (a function) that we want to reconstruct, through a linear operator A. Due to its indirect nature, solving an inverse problem is usually rather difficult. For this reason, one needs regularization methods in order to get a stable and accurate reconstruction. We present the framework of statistical inverse problems where the data are corrupted by some stochastic error. This white noise model may be discretized in the spectral domain using the Singular Value Decomposition (SVD), when the operator A is compact. Several examples of inverse problems where the SVD is known are presented (circular deconvolution, heat equation, tomography). We explain some basic issues regarding nonparametric statistics applied to inverse problems. Standard regularization methods and their counterparts as estimation procedures based on the SVD are discussed (projection, Landweber, Tikhonov, ...). Several classical statistical approaches, like minimax risk and optimal rates of convergence, are presented. This notion of optimality leads to an optimal choice of the tuning parameter. However, these optimal parameters are unachievable, since they depend on the unknown smoothness of the function. This leads to more recent concepts like adaptive estimation and oracle inequalities. Several data-driven procedures for selecting the regularization parameter are discussed in detail, among them: model selection methods, Stein's unbiased risk estimation, and the recent risk hull method.
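
To make the SVD-domain estimators concrete: in the sequence-space model y_k = b_k θ_k + ε ξ_k obtained from the SVD of A, projection and Tikhonov regularization act coefficientwise. A sketch with illustrative singular values and tuning parameters:

```python
import numpy as np

def projection_estimator(y, b, m):
    """Keep the first m SVD coefficients, invert b there, zero the rest."""
    theta = np.zeros_like(y)
    theta[:m] = y[:m] / b[:m]
    return theta

def tikhonov_estimator(y, b, alpha):
    """Coefficientwise Tikhonov: theta_k = b_k * y_k / (b_k^2 + alpha)."""
    return b * y / (b ** 2 + alpha)

rng = np.random.default_rng(0)
K = 200
b = 1.0 / (1.0 + np.arange(K)) ** 2          # polynomially decaying singular values
theta = 1.0 / (1.0 + np.arange(K)) ** 1.5    # smooth "true" coefficients
y = b * theta + 1e-3 * rng.standard_normal(K)

for est in (projection_estimator(y, b, m=20), tikhonov_estimator(y, b, alpha=1e-4)):
    print(np.sum((est - theta) ** 2))        # squared-error comparison
```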