
Showing papers on "White noise published in 2007"


Journal ArticleDOI
TL;DR: In this paper, the authors derived minimum mean-square error estimators of speech DFT coefficient magnitudes as well as of complex-valued DFT coefficients based on two classes of generalized gamma distributions, under an additive Gaussian noise assumption.
Abstract: This paper considers techniques for single-channel speech enhancement based on the discrete Fourier transform (DFT). Specifically, we derive minimum mean-square error (MMSE) estimators of speech DFT coefficient magnitudes as well as of complex-valued DFT coefficients based on two classes of generalized gamma distributions, under an additive Gaussian noise assumption. The resulting generalized DFT magnitude estimator has as a special case the existing scheme based on a Rayleigh speech prior, while the complex DFT estimators generalize existing schemes based on Gaussian, Laplacian, and Gamma speech priors. Extensive simulation experiments with speech signals degraded by various additive noise sources verify that significant improvements are possible with the more recent estimators based on super-Gaussian priors. The increase in perceptual evaluation of speech quality (PESQ) over the noisy signals is about 0.5 points for street noise and about 1 point for white noise, nearly independent of input signal-to-noise ratio (SNR). The assumptions made for deriving the complex DFT estimators are less accurate than those for the magnitude estimators, leading to a higher maximum achievable speech quality with the magnitude estimators.
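As a quick illustration of the Gaussian-prior special case that the paper's generalized-gamma estimators subsume, the sketch below applies the classical per-bin Wiener gain. The function name and toy magnitudes are hypothetical, and the maximum-likelihood a priori SNR estimate is a deliberate simplification of the paper's MMSE machinery.

```python
def wiener_gains(noisy_mags, noise_psd):
    """Per-bin suppression gains: the Gaussian-prior (Wiener) special case."""
    gains = []
    for m, n in zip(noisy_mags, noise_psd):
        gamma = (m * m) / n            # a posteriori SNR for this bin
        xi = max(gamma - 1.0, 0.0)     # ML a priori SNR estimate (simplified)
        gains.append(xi / (1.0 + xi))  # Wiener gain
    return gains

# enhance by scaling the noisy DFT magnitudes bin by bin (toy values)
mags = [2.0, 0.5, 3.0]   # noisy DFT magnitudes
npsd = [1.0, 1.0, 1.0]   # estimated noise PSD per bin
enhanced = [g * m for g, m in zip(wiener_gains(mags, npsd), mags)]
```

Under a super-Gaussian (Laplacian or gamma) speech prior the gain curve changes shape, which is where the reported PESQ improvements come from.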

293 citations


Journal ArticleDOI
TL;DR: In this article, the authors consider the asymptotic behavior of posterior distributions and Bayes estimators based on observations which are not required to be independent or identically distributed, and give general results on the rate of convergence of the posterior measure relative to distances derived from a testing criterion.
Abstract: We consider the asymptotic behavior of posterior distributions and Bayes estimators based on observations which are not required to be independent or identically distributed. We give general results on the rate of convergence of the posterior measure relative to distances derived from a testing criterion. We then specialize our results to independent, nonidentically distributed observations, Markov processes, stationary Gaussian time series and the white noise model. We apply our general results to several examples of infinite-dimensional statistical models including nonparametric regression with normal errors, binary regression, Poisson regression, an interval censoring model, Whittle estimation of the spectral density of a time series and a nonlinear autoregressive model.

263 citations


Proceedings ArticleDOI
01 Apr 2007
TL;DR: This paper proposes two detection methods based on the transformed sample covariance matrix of the received signal: covariance absolute value (CAV) detection and covariance Frobenius norm (CFN) detection.
Abstract: Sensing (signal detection) is a fundamental problem in cognitive radio. The statistical covariances of signal and noise are usually different. In this paper, this property is used to differentiate signal from noise. The sample covariance matrix of the received signal is computed and transformed based on the receiving filter. Then two detection methods are proposed based on the transformed sample covariance matrix. One is the covariance absolute value (CAV) detection and the other is the covariance Frobenius norm (CFN) detection. Theoretical analysis and threshold setting for the algorithms are discussed. Neither method needs any a priori information about the signal, the channel, or the noise power. Simulations based on captured ATSC DTV signals are presented to verify the methods.
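The CAV test statistic can be sketched in a few lines from the abstract's description alone; the normalization and the default lag window `L` below are assumptions, not the paper's exact algorithm (which also includes the receive-filter transformation and principled threshold setting).

```python
import math
import random

def cav_ratio(x, L=8):
    # sample autocorrelations lam[l] = (1/Ns) * sum_n x[n] * x[n - l]
    Ns = len(x) - L
    lam = [sum(x[n] * x[n - l] for n in range(L, len(x))) / Ns
           for l in range(L)]
    # t1: mean absolute entry of the L x L Toeplitz sample covariance
    # matrix (per row); t2: mean absolute diagonal entry
    t1 = sum(abs(lam[abs(i - j)]) for i in range(L) for j in range(L)) / L
    t2 = abs(lam[0])
    return t1 / t2  # near 1 for white noise, noticeably > 1 otherwise

rng = random.Random(0)
white = [rng.gauss(0.0, 1.0) for _ in range(500)]   # noise only
tone = [math.sin(0.2 * n) for n in range(500)]      # correlated signal
```

For white noise the off-diagonal autocorrelations vanish on average, so the ratio hovers near 1; any correlated signal pushes it above 1, which is what the detector thresholds on.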

260 citations


Journal ArticleDOI
TL;DR: Cognitive performance in noisy environments in relation to a neurocomputational model of attention deficit hyperactivity disorder (ADHD) and dopamine is investigated, indicating that ADHD subjects need more noise than controls for optimal cognitive performance.
Abstract: Background: Noise is typically conceived of as being detrimental to cognitive performance. However, given the mechanism of stochastic resonance, a certain amount of noise can benefit performance. We investigate cognitive performance in noisy environments in relation to a neurocomputational model of attention deficit hyperactivity disorder (ADHD) and dopamine. The Moderate Brain Arousal model (MBA; Sikstrom & Soderlund, 2007) suggests that dopamine levels modulate how much noise is required for optimal cognitive performance. We experimentally examine how ADHD and control children respond to different encoding conditions, providing different levels of environmental stimulation. Methods: Participants carried out self-performed mini tasks (SPT), as a high memory performance task, and a verbal task (VT), as a low memory task. These tasks were performed in the presence, or absence, of auditory white noise. Results: Noise exerted a positive effect on cognitive performance for the ADHD group but worsened performance for the control group, indicating that ADHD subjects need more noise than controls for optimal cognitive performance. Conclusions: The positive effect of white noise is explained by the phenomenon of stochastic resonance (SR), i.e., the phenomenon that moderate noise facilitates cognitive performance. The MBA model suggests that noise in the environment introduces internal noise into the neural system through the perceptual system. This internal noise induces SR in the neurotransmitter systems, making it beneficial for cognitive performance. In particular, the peak of the SR curve depends on the dopamine level, so that participants with low dopamine levels (ADHD) require more noise for optimal cognitive performance compared to controls.
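The inverted-U logic of stochastic resonance can be demonstrated with a toy threshold detector: a subthreshold stimulus is never detected without noise, moderate noise lifts it over threshold on signal trials without flooding blank trials, and strong noise destroys the discrimination. All numbers below are illustrative and unrelated to the study's tasks.

```python
import random

def accuracy(noise_sd, trials=4000, seed=7):
    # yes/no detection of a subthreshold stimulus (0.8) against a fixed
    # threshold (1.0): noise is the only way the stimulus can cross it
    rng = random.Random(seed)
    correct = 0
    for _ in range(trials):
        # signal trial: correct if stimulus + noise crosses the threshold
        if 0.8 + rng.gauss(0.0, noise_sd) > 1.0:
            correct += 1
        # blank trial: correct if noise alone stays below the threshold
        if rng.gauss(0.0, noise_sd) <= 1.0:
            correct += 1
    return correct / (2 * trials)
```

Accuracy sits at chance for near-zero noise, peaks at a moderate noise level, and falls back toward chance as noise grows, the SR signature the study reports behaviorally.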

215 citations


Journal ArticleDOI
P. Reegen1
TL;DR: SigSpec as mentioned in this paper is based on an analytical solution of the probability that a DFT peak of a given amplitude does not arise from white noise in a non-equally spaced data set.
Abstract: Context. Identifying frequencies with low signal-to-noise ratios in time series of stellar photometry and spectroscopy, and measuring their amplitude ratios and peak widths accurately, are critical goals for asteroseismology. These are also challenges for time series with gaps or whose data are not sampled at a constant rate, even with modern Discrete Fourier Transform (DFT) software. Moreover, the False-Alarm Probability introduced by Lomb and Scargle is an approximation that becomes less reliable in time series with longer data gaps. Aims. A rigorous statistical treatment of how to determine the significance of a peak in a DFT, called SigSpec, is presented here. SigSpec is based on an analytical solution of the probability that a DFT peak of a given amplitude does not arise from white noise in a non-equally spaced data set. Methods. The underlying Probability Density Function (PDF) of the amplitude spectrum generated by white noise can be derived explicitly if both frequency and phase are incorporated into the solution. In this paper, I define and evaluate an unbiased statistical estimator, the "spectral significance", which depends on frequency, amplitude, and phase in the DFT, and which takes into account the time-domain sampling. Results. I also compare this estimator to results from other well-established techniques and assess the advantages of SigSpec, through comparison of its analytical solutions to the results of extensive numerical calculations. According to those tests, SigSpec obtains as accurate frequency values as a least-squares fit of sinusoids to data, and is less susceptible to aliasing than the Lomb-Scargle Periodogram, other DFTs, and Phase Dispersion Minimization (PDM). I demonstrate the effectiveness of SigSpec with a few examples of ground- and space-based photometric data, illustrating how SigSpec deals with the effects of noise and time-domain sampling in determining significant frequencies.
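SigSpec's closed-form significance also accounts for phase and uneven sampling; the simpler textbook building block is that the normalized DFT power of Gaussian white noise at a fixed bin is exponentially distributed, so the single-bin false-alarm probability is exp(-z). A Monte Carlo check of that fact (function name and parameters are illustrative):

```python
import math
import random

def dft_power_tail(N=32, k=5, z0=1.0, trials=3000, seed=3):
    # Monte Carlo estimate of P(normalized DFT power at bin k > z0) for
    # unit-variance Gaussian white noise; theory predicts exp(-z0)
    rng = random.Random(seed)
    exceed = 0
    for _ in range(trials):
        x = [rng.gauss(0.0, 1.0) for _ in range(N)]
        re = sum(v * math.cos(2.0 * math.pi * k * n / N)
                 for n, v in enumerate(x))
        im = sum(v * math.sin(2.0 * math.pi * k * n / N)
                 for n, v in enumerate(x))
        if (re * re + im * im) / N > z0:
            exceed += 1
    return exceed / trials
```

With the defaults the returned fraction should sit near exp(-1) ≈ 0.37; uneven sampling breaks the simple exponential law, which is the gap SigSpec's analytical treatment closes.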

201 citations


Journal ArticleDOI
TL;DR: In this paper, the smoothness index is defined as the ratio of the geometric mean to the arithmetic mean of the wavelet coefficient moduli of the vibration signal, and it has been successfully used to de-noise both simulated and experimental signals.
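Based only on the definition in the TL;DR, the index can be written directly (helper name is hypothetical; it assumes all coefficient moduli are nonzero). By the AM-GM inequality the ratio lies in (0, 1], dropping toward 0 when a few coefficients dominate, which is what makes it useful for picking impulsive wavelet bands.

```python
import math

def smoothness_index(coeffs):
    # geometric mean over arithmetic mean of the coefficient moduli;
    # assumes every modulus is strictly positive
    mods = [abs(c) for c in coeffs]
    geo = math.exp(sum(math.log(m) for m in mods) / len(mods))
    ari = sum(mods) / len(mods)
    return geo / ari

flat = smoothness_index([1.0, 1.0, 1.0, 1.0])    # noise-like, flat band
spiky = smoothness_index([9.0, 0.1, 0.1, 0.1])   # impulsive band
```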

201 citations


Journal ArticleDOI
TL;DR: In this paper, a methodology to assess the noise characteristics in time series of position estimates for permanent Global Positioning System (GPS) stations is proposed; unmodelled periodic effects are captured by a set of harmonic functions using the least-squares harmonic estimation (LS-HE) method and parameter significance testing developed in the same framework as LS-VCE.
Abstract: We propose a methodology to assess the noise characteristics in time series of position estimates for permanent Global Positioning System (GPS) stations. Least squares variance component estimation (LS-VCE) is adopted to cope with any type of noise in the data. LS-VCE inherently provides the precision of (co)variance estimators. One can also apply statistical hypothesis testing in conjunction with LS-VCE. Using the w-test statistic, a combination of white noise and flicker noise turns out in general to best characterize the noise in all three position components. An interpretation for the colored noise of the series is given. Unmodelled periodic effects in the data will be captured by a set of harmonic functions for which we rely on the least squares harmonic estimation (LS-HE) method and parameter significance testing developed in the same framework as LS-VCE. Having included harmonic functions into the model, practically only white noise can be shown to remain in the data. Remaining time correlation, present only at very high frequencies (spanning a few days only), is expressed as a first-order autoregressive noise process. It can be caused by common and well-known sources of errors like atmospheric effects as well as satellite orbit errors. The autoregressive noise should be included in the stochastic model to avoid the overestimation (upward bias) of power law noise. The results confirm the presence of annual and semiannual signals in the series. We also observed significant periodic patterns with periods of 350 days and its fractions 350/n, n = 2, …, 8 that resemble the repeat time of the GPS constellation. Neglecting these harmonic signals in the functional model can seriously overestimate the rate uncertainty.

181 citations


Journal ArticleDOI
TL;DR: In this article, a rigorous derivation of a previously known formula for simulation of one-dimensional, univariate, nonstationary stochastic processes integrating Priestley's evolutionary spectral representation theory is presented.
Abstract: This paper presents a rigorous derivation of a previously known formula for simulation of one-dimensional, univariate, nonstationary stochastic processes integrating Priestley's evolutionary spectral representation theory. Applying this formula, sample functions can be generated with great computational efficiency. The simulated stochastic process is asymptotically Gaussian as the number of terms tends to infinity. This paper shows that (1) these sample functions accurately reflect the prescribed probabilistic characteristics of the stochastic process when the number of terms in the cosine series is large, i.e., the ensemble averaged evolutionary power spectral density function (PSDF) or autocorrelation function approaches the corresponding target function as the sample size increases, and (2) the simulation formula, under certain conditions, can be reduced to that for the nonstationary white noise process or Shinozuka's spectral representation of stationary processes. In addition to the derivation of the simulation formula, three methods are developed in this paper to estimate the evolutionary PSDF of given time-history data by means of the short-time Fourier transform (STFT), the wavelet transform (WT), and the Hilbert-Huang transform (HHT). A comparison of the PSDF of the well-known El Centro earthquake record estimated by these methods shows that the STFT and the WT give similar results, whereas the HHT gives more concentrated energy at certain frequencies. The effectiveness of the proposed simulation formula for nonstationary sample functions is demonstrated by simulating time histories from the estimated evolutionary PSDFs. The mean acceleration spectrum, obtained by averaging the spectra of generated time histories, is then presented and compared with the target spectrum to demonstrate the usefulness of this method.
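A minimal sketch of the spectral-representation idea, for the separable special case where the evolutionary PSD factors as a deterministic envelope squared times a stationary one-sided PSD; all names and the toy band-limited target below are illustrative, not the paper's general formula.

```python
import math
import random

def simulate_nonstationary(n, dt, omegas, domega, G, envelope, seed=0):
    # spectral-representation sample: a sum of cosines with random phases
    # and amplitudes sqrt(2 * G(w) * dw), modulated by a deterministic
    # envelope -- the separable case G(w, t) = envelope(t)**2 * G(w)
    rng = random.Random(seed)
    phases = [rng.uniform(0.0, 2.0 * math.pi) for _ in omegas]
    amps = [math.sqrt(2.0 * G(w) * domega) for w in omegas]
    return [envelope(k * dt) * sum(a * math.cos(w * k * dt + p)
                                   for a, w, p in zip(amps, omegas, phases))
            for k in range(n)]

G = lambda w: 1.0 if w <= 4.0 else 0.0        # band-limited target PSD
env = lambda t: t * math.exp(1.0 - t)          # builds up, then decays
omegas = [0.05 + 0.1 * m for m in range(40)]   # frequency-band midpoints
x = simulate_nonstationary(400, 0.02, omegas, 0.1, G, env, seed=1)
```

Averaging the squared envelope of many such sample functions recovers the target evolutionary PSDF, which is the consistency property the paper proves.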

170 citations


Journal ArticleDOI
01 Sep 2007-EPL
TL;DR: In this paper, the authors discuss some properties of order patterns both in deterministic and random orbit generation and show that forbidden patterns are robust against noise and disintegrate with a rate that depends on the noise level.
Abstract: In this letter we discuss some properties of order patterns both in deterministic and random orbit generation. As it turns out, the orbits of one-dimensional maps always have forbidden patterns, i.e., order patterns that cannot occur, in contrast with random time series, in which any order pattern appears with probability one. However, finite random sequences may exhibit "false" forbidden patterns with non-vanishing probability. In this case, false forbidden patterns decay with the sequence length, thus unveiling the random nature of the sequence. Last but not least, true forbidden patterns are robust against noise and disintegrate with a rate that depends on the noise level. These properties can be embodied in a simple method to distinguish deterministic, finite time series with very high levels of observational noise, from random ones. We present numerical evidence for white noise.
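The forbidden-pattern test is easy to reproduce. The sketch below collects length-3 order patterns; for the logistic map x -> 4x(1-x) the strictly decreasing pattern (2, 1, 0) is forbidden (two consecutive decreases would require the orbit to stay above 3/4 twice in a row, which the map does not allow), while seeded uniform noise realizes all six patterns.

```python
import random

def ordinal_patterns(x, m=3):
    # distinct order patterns among all windows of m consecutive values:
    # each pattern lists the indices of the window sorted by value
    seen = set()
    for i in range(len(x) - m + 1):
        w = x[i:i + m]
        seen.add(tuple(sorted(range(m), key=lambda k: w[k])))
    return seen

orbit = [0.65]                       # chaotic logistic-map orbit
for _ in range(2000):
    orbit.append(4.0 * orbit[-1] * (1.0 - orbit[-1]))

rng = random.Random(0)
noise = [rng.random() for _ in range(2000)]  # white noise: no forbidden patterns
```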

166 citations


Journal ArticleDOI
TL;DR: A blind calibration method for timing mismatches in a four-channel time-interleaved analog-to-digital converter (ADC) and an adaptive null steering algorithm for estimating the ADC timing offsets are described.
Abstract: In this paper, we describe a blind calibration method for timing mismatches in a four-channel time-interleaved analog-to-digital converter (ADC). The proposed method requires the input signal to be slightly oversampled. This ensures that there exists a frequency band around the zero frequency where the Fourier transforms of the four ADC subchannels contain only three alias components, instead of four. Then the matrix power spectral density (PSD) of the ADC subchannels is rank deficient over this frequency band. Accordingly, when the timing offsets are known, we can construct a filter bank that nulls the vector signal at the ADC outputs. We employ a parametrization of this filter bank to develop an adaptive null steering algorithm for estimating the ADC timing offsets. The null steering filter bank employs seven fixed finite-impulse response filters and three unknown timing offset parameters which are estimated by using an adaptive stochastic gradient technique. A convergence analysis is presented for the blind calibration method. Numerical simulations for a bandlimited white noise input and for inputs containing several sinusoidal components demonstrate the effectiveness of the proposed technique.

145 citations


Journal ArticleDOI
P. Reegen1
TL;DR: SigSpec as discussed by the authors is based on an analytical solution of the probability that a DFT peak of a given amplitude does not arise from white noise in a non-equally spaced data set.
Abstract: Identifying frequencies with low signal-to-noise ratios in time series of stellar photometry and spectroscopy, and measuring their amplitude ratios and peak widths accurately, are critical goals for asteroseismology. These are also challenges for time series with gaps or whose data are not sampled at a constant rate, even with modern Discrete Fourier Transform (DFT) software. Moreover, the False-Alarm Probability introduced by Lomb and Scargle is an approximation that becomes less reliable in time series with longer data gaps. A rigorous statistical treatment of how to determine the significance of a peak in a DFT, called SigSpec, is presented here. SigSpec is based on an analytical solution of the probability that a DFT peak of a given amplitude does not arise from white noise in a non-equally spaced data set. The underlying Probability Density Function (PDF) of the amplitude spectrum generated by white noise can be derived explicitly if both frequency and phase are incorporated into the solution. In this paper, I define and evaluate an unbiased statistical estimator, the "spectral significance", which depends on frequency, amplitude, and phase in the DFT, and which takes into account the time-domain sampling. I also compare this estimator to results from other well-established techniques and demonstrate the effectiveness of SigSpec with a few examples of ground- and space-based photometric data, illustrating how SigSpec deals with the effects of noise and time-domain sampling in determining significant frequencies.

Proceedings Article
03 Dec 2007
TL;DR: This work proposes a new approach for dealing with the estimation of the location of change-points in one-dimensional piecewise constant signals observed in white noise by combining the LAR algorithm and a reduced version of the dynamic programming algorithm and applies it to synthetic and real data.
Abstract: We propose a new approach for dealing with the estimation of the location of change-points in one-dimensional piecewise constant signals observed in white noise. Our approach consists in reframing this task in a variable selection context. We use a penalized least-squares criterion with an ℓ1-type penalty for this purpose. We prove some theoretical results on the estimated change-points and on the underlying piecewise constant estimated function. Then, we explain how to implement this method in practice by combining the LAR algorithm and a reduced version of the dynamic programming algorithm and we apply it to synthetic and real data.
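The LAR-based variable-selection stage is the paper's contribution, but the dynamic-programming half is easy to sketch on its own: exact least-squares segmentation of a noisy piecewise-constant signal into K constant pieces. The implementation below is the standard (unreduced) DP, shown for intuition only.

```python
def seg_cost(y, i, j):
    # squared error of fitting y[i:j] by its mean
    mu = sum(y[i:j]) / (j - i)
    return sum((v - mu) ** 2 for v in y[i:j])

def best_changepoints(y, K):
    # C[k][j]: minimal cost of fitting y[:j] with k constant segments
    n = len(y)
    INF = float("inf")
    C = [[INF] * (n + 1) for _ in range(K + 1)]
    back = [[0] * (n + 1) for _ in range(K + 1)]
    C[0][0] = 0.0
    for k in range(1, K + 1):
        for j in range(k, n + 1):
            for i in range(k - 1, j):
                c = C[k - 1][i] + seg_cost(y, i, j)
                if c < C[k][j]:
                    C[k][j], back[k][j] = c, i
    # backtrack the K - 1 interior change-point locations
    cps, j = [], n
    for k in range(K, 0, -1):
        j = back[k][j]
        cps.append(j)
    return sorted(cps[:-1])
```

The paper's point is that the LAR preselection shrinks the candidate set so this cubic-time recursion only has to run over a reduced grid.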

Journal ArticleDOI
TL;DR: The present study provides psychophysical evidence, in a yes-no paradigm, for the existence of a stochastic resonance-like phenomenon in auditory-visual interactions and shows that the detection of a weak visual signal was an inverted U-like function of the intensity of auditory noise.

Journal ArticleDOI
TL;DR: In this paper, the Kumaresan-Tufts and matrix pencil methods were compared with nonlinear least-squares fitting methods to estimate ringdown parameters from ringdown signals after a binary black hole merger.
Abstract: The ringdown phase following a binary black hole merger is usually assumed to be well described by a linear superposition of complex exponentials (quasinormal modes). In the strong-field conditions typical of a binary black hole merger, nonlinear effects may produce mode coupling. Artificial mode coupling can also be induced by the black hole's rotation, if the radiation field is expanded in terms of spin-weighted spherical harmonics (rather than spin-weighted spheroidal harmonics). Observing deviations from the predictions of linear black hole perturbation theory requires optimal fitting techniques to extract ringdown parameters from numerical waveforms, which are inevitably affected by numerical error. So far, nonlinear least-squares fitting methods have been used as the standard workhorse to extract frequencies from ringdown waveforms. These methods are known not to be optimal for estimating parameters of complex exponentials. Furthermore, different fitting methods have different performance in the presence of noise. The main purpose of this paper is to introduce the gravitational wave community to modern variations of a linear parameter estimation technique first devised in 1795 by Prony: the Kumaresan-Tufts and matrix pencil methods. Using "test" damped sinusoidal signals in Gaussian white noise we illustrate the advantages of these methods, showing that they have variance and bias at least comparable to standard nonlinear least-squares techniques. Then we compare the performance of different methods on unequal-mass binary black hole merger waveforms. The methods we discuss should be useful both theoretically (to monitor errors and search for nonlinearities in numerical relativity simulations) and experimentally (for parameter estimation from ringdown signals after a gravitational wave detection).
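For a single noiseless damped sinusoid, Prony's linear-prediction idea reduces to solving a 2x2 system and a quadratic, which makes the contrast with nonlinear least squares easy to see. The function below is a toy order-2 Prony step (Cramer's rule instead of the least-squares and matrix-pencil machinery the paper advocates for noisy data).

```python
import cmath
import math

def prony2(x):
    # order-2 linear prediction x[n] = a1*x[n-1] + a2*x[n-2], solved
    # exactly from four samples by Cramer's rule (noiseless toy case;
    # assumes the 2x2 system is nonsingular)
    det = x[1] * x[1] - x[0] * x[2]
    a1 = (x[2] * x[1] - x[3] * x[0]) / det
    a2 = (x[1] * x[3] - x[2] * x[2]) / det
    # characteristic root of z**2 - a1*z - a2 = 0 (positive-phase branch)
    z = (a1 + cmath.sqrt(a1 * a1 + 4.0 * a2)) / 2.0
    return math.log(abs(z)), abs(cmath.phase(z))

# damped sinusoid exp(-0.1 n) * cos(0.5 n): four samples suffice
x = [math.exp(-0.1 * n) * math.cos(0.5 * n) for n in range(4)]
damping, frequency = prony2(x)
```

The recovered per-sample damping and angular frequency are exactly the quasinormal-mode-style parameters (-0.1 and 0.5) that a nonlinear fit would have to iterate for.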

Journal ArticleDOI
TL;DR: In this article, the pseudorange noise behavior is characterized in order to improve the understanding of the origin of the large day-boundary discontinuities in the geodetic time transfer results.
Abstract: When neglecting calibration issues, the accuracy of GPS-based time and frequency transfer using a combined analysis of code and carrier phase measurements highly depends on the noise of the GPS codes. In particular, the pseudorange noise is responsible for day-boundary discontinuities which can reach more than 1 ns in the time transfer results obtained from geodetic analysis. These discontinuities are caused by the fact that the data are analyzed in daily data batches where the absolute clock offset is determined by the mean code value during the daily data batch. This pseudorange noise is not a white noise, in particular due to multipath and variations of instrumental delays. In this paper, the pseudorange noise behavior is characterized in order to improve the understanding of the origin of the large day-boundary discontinuities in the geodetic time transfer results. In a first step, the effect of short-term noise and multipath is estimated, and shown to be responsible for only a maximum of 150 ps (picoseconds) of the day-boundary jumps, with only one exception at NRC1 where the correction provides a jump reduction of 300 ps. In a second step, a combination of time transfer results obtained with pseudoranges only and geodetic time transfer results is used to characterize the long-term evolution of pseudorange errors. It demonstrates that the day-boundary jumps, especially those of large amplitude, can be explained by an instrumental effect imposing a common behavior on all the satellite pseudoranges. Using known influences as temperature variations at ALGO or cable damages at HOB2, it is shown that the approach developed in this study can be used to look for the origin of the day-boundary discontinuities in other stations.

Journal ArticleDOI
TL;DR: A new, simple, and effective low-level edge detection algorithm based on the law of universal gravity is proposed; it can be tuned to work at any desired scale and has been tested and compared with conventional methods using several standard images.

Journal ArticleDOI
TL;DR: This letter explicitly formulates multichannel and single-channel blind image deconvolution as a PCA problem and shows that the PCA-based blind image deconvolution runs faster and is more robust to noise.
Abstract: Our earlier work revealed a connection between blind image deconvolution and principal components analysis (PCA). In this letter, we explicitly formulate multichannel and single-channel blind image deconvolution as a PCA problem. Although PCA is derived from blur models that do not contain additive noise, it can be justified on both theoretical and experimental grounds that the PCA-based restoration algorithm is actually robust to the presence of white noise. The algorithm is applied to the restoration of atmospheric turbulence-degraded imagery and compared to an adaptive Lucy-Richardson maximum-likelihood algorithm on both real and simulated atmospheric turbulence blurred images. It is shown that the PCA-based blind image deconvolution runs faster and is more robust to noise.

Journal ArticleDOI
TL;DR: This paper proposes two filtering algorithms that generalize the extended and unscented Kalman filters to the case in which the arrival of measurements can be one-step delayed and, hence, the measurement available to estimate the state may not be up-to-date.

Journal ArticleDOI
TL;DR: In this article, the authors investigated the behavior of escaping trajectories from one well to another by pointing to the special character that underpins the noise-induced discontinuity which is caused by the generalized Brownian paths that jump beyond the barrier location without actually hitting it.
Abstract: We explore the archetypal problem of escape dynamics in a symmetric double well potential when the Brownian particle is driven by white Lévy noise, in a dynamical regime where inertial effects can safely be neglected. The behavior of escaping trajectories from one well to another is investigated by pointing to the special character that underpins the noise-induced discontinuity, which is caused by the generalized Brownian paths that jump beyond the barrier location without actually hitting it. This fact implies that the boundary conditions for the mean first passage time (MFPT) are no longer determined by the well-known local boundary conditions that characterize the case with normal diffusion. By properly implementing these boundary conditions numerically, we investigate the survival probability and the average escape time as a function of the corresponding Lévy white noise parameters. Depending on the value of the skewness beta of the Lévy noise, the escape can be either enhanced or suppressed: a negative asymmetry parameter beta typically yields a decrease of the escape rate, while the rate itself exhibits non-monotonic behavior as a function of the stability index alpha that characterizes the jump length distribution of the Lévy noise, with a marked discontinuity at alpha=1. We find that the typical factor of 2 that, for normal diffusion, characterizes the ratio between the MFPT for well-bottom-to-well-bottom and well-bottom-to-barrier-top transitions no longer holds true. For sufficiently high barriers the survival probabilities assume an exponential behavior versus time. Distinct non-exponential deviations occur, however, for low barrier heights.
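White Lévy noise is straightforward to generate with the Chambers-Mallows-Stuck formula, and a clamped Euler scheme then yields escape times in the double well. This is a sketch under assumptions the paper does not make (symmetric noise, beta = 0, and an ad hoc clamp for numerical stability), with hypothetical parameter choices.

```python
import math
import random

def alpha_stable(alpha, rng):
    # Chambers-Mallows-Stuck sampler, symmetric (beta = 0) case
    V = (rng.random() - 0.5) * math.pi
    W = rng.expovariate(1.0)
    if alpha == 1.0:
        return math.tan(V)  # Cauchy case
    s = math.sin(alpha * V) / math.cos(V) ** (1.0 / alpha)
    return s * (math.cos((1.0 - alpha) * V) / W) ** ((1.0 - alpha) / alpha)

def escape_time(alpha=1.5, scale=0.5, dt=1e-3, seed=11):
    # Euler scheme for dx = (x - x**3) dt + scale * dL_alpha in the
    # double well U(x) = x**4/4 - x**2/2, started at the left minimum;
    # "escape" is the first time x exceeds the right minimum at +1
    rng = random.Random(seed)
    x, t = -1.0, 0.0
    while x < 1.0:
        x += (x - x ** 3) * dt + scale * dt ** (1.0 / alpha) * alpha_stable(alpha, rng)
        x = max(x, -3.0)  # clamp extreme negative jumps to keep Euler stable
        t += dt
    return t
```

Note the Lévy-flight signature: the trajectory typically leaves the well in one jump that overshoots the barrier top rather than diffusing over it, which is exactly why the local MFPT boundary conditions fail.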

Journal ArticleDOI
TL;DR: The Kuramoto model of globally coupled phase oscillators subject to Ornstein-Uhlenbeck and non-Gaussian colored noise is considered and the dependence of the threshold as well as the maximum degree of synchronization on the correlation time and the strength of the noise is studied.
Abstract: We consider the Kuramoto model of globally coupled phase oscillators subject to Ornstein-Uhlenbeck and non-Gaussian colored noise and investigate the influence of noise on the order parameter of the synchronization process. We use numerical methods to study the dependence of the threshold as well as the maximum degree of synchronization on the correlation time and the strength of the noise, and find that the threshold of synchronization strongly depends on the nature of the noise. It is found to be lower for both the Ornstein-Uhlenbeck and non-Gaussian processes compared to the case of white noise. A finite correlation time also favors the achievement of full synchronization of the system, in contrast to the white noise process, which does not allow that. Finally, we discuss possible applications of the stochastic Kuramoto model to oscillations taking place in biochemical systems.
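For the white-noise baseline the mean-field Kuramoto dynamics fit in a few lines; the coupling strength, noise intensity, and Gaussian frequency spread below are arbitrary illustrative choices, and the colored (OU or non-Gaussian) variants studied in the paper would replace the independent `rng.gauss` increments with a correlated process.

```python
import cmath
import math
import random

def kuramoto_r(K, D, N=100, dt=0.01, steps=2000, seed=1):
    # Euler-Maruyama for d(theta_i) = [w_i + K r sin(psi - theta_i)] dt
    #                                 + sqrt(2 D) dW_i   (white noise)
    rng = random.Random(seed)
    theta = [rng.uniform(-math.pi, math.pi) for _ in range(N)]
    omega = [rng.gauss(0.0, 0.5) for _ in range(N)]
    s = math.sqrt(2.0 * D * dt)
    for _ in range(steps):
        z = sum(cmath.exp(1j * t) for t in theta) / N   # mean field
        r, psi = abs(z), cmath.phase(z)
        theta = [t + dt * (w + K * r * math.sin(psi - t)) + s * rng.gauss(0.0, 1.0)
                 for t, w in zip(theta, omega)]
    return abs(sum(cmath.exp(1j * t) for t in theta) / N)
```

The final order parameter r stays near zero below the synchronization threshold and approaches 1 well above it, which is the quantity whose noise dependence the paper maps out.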

Journal ArticleDOI
TL;DR: In this article, a wavelet transform-based method of interference rejection was applied to the problem of onsite testing, using both laboratory tests and on-site tests, with use of transient pulse-like noise, discrete spectral interference (DSI) and white noise.
Abstract: For pt.I see ibid., p.3-14, (2007). Insulation assessment of HV cables requires continuous partial discharge (PD) monitoring to identify the nature of insulation defects and to determine any degradation trends. However, to recover PD signals with sufficient sensitivity to determine such insulation degradation in substations with high levels of electromagnetic interference is a major challenge. This paper is the second of two papers addressing this challenge for on-line PD measurements in a noisy environment. The first paper described a wavelet transform-based method of interference rejection. This paper applies that method to the problem of on-site testing, using both laboratory tests and on-site tests. The laboratory tests were used to simulate the noisy on-site testing environment, with use of transient pulse-like noise, discrete spectral interference (DSI) and white noise. These noise types have been successfully rejected by the method proposed in the first paper. In addition, on-site tests have been undertaken and have been able to detect PD signals in an old 11 kV substation multi-cable installation.

Journal ArticleDOI
TL;DR: The results show that there always exists an appropriate white noise such that any recurrent NN with mixed time-varying delays and Markovian-switching parameters can be exponentially stabilized by noise if the delays are sufficiently small, indicating that ADHD subjects need more noise than controls for optimal cognitive performance.
Abstract: The stabilization of recurrent neural networks with mixed time-varying delays and Markovian-switching parameters by noise is discussed. First, a new result is given for the existence of unique states of recurrent neural networks (NNs) with mixed time-varying delays and Markovian-switching parameters in the presence of noise, without the need to satisfy the linear growth conditions required by general stochastic Markovian-switching systems. Next, a delay-dependent condition for stabilization of the concerned recurrent NNs is derived by applying the Itô formula, the Gronwall inequality, the law of large numbers, and the ergodic property of the Markov chain. The results show that there always exists an appropriate white noise such that any recurrent NN with mixed time-varying delays and Markovian-switching parameters can be exponentially stabilized by noise if the delays are sufficiently small.
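The headline phenomenon, multiplicative noise stabilizing an unstable system, can already be seen in the scalar equation dx = x dt + sigma x dW, whose exact solution gives the sample Lyapunov exponent 1 - sigma**2/2. The simulation below samples that exact solution (a toy analogue, not the paper's delayed Markovian-switching NN setting; names and parameters are illustrative).

```python
import math
import random

def mean_log_growth(sigma, T=10.0, paths=200, seed=5):
    # dx = x dt + sigma x dW has the exact solution
    # log x(T) = (1 - sigma**2 / 2) * T + sigma * W(T), so the sample
    # Lyapunov exponent 1 - sigma**2 / 2 turns negative for sigma > sqrt(2)
    rng = random.Random(seed)
    drift = (1.0 - 0.5 * sigma * sigma) * T
    return sum(drift + sigma * rng.gauss(0.0, math.sqrt(T))
               for _ in range(paths)) / paths
```

With sigma = 0 the state grows exponentially; with sigma = 2 the average log-amplitude decreases, i.e., sufficiently strong white noise exponentially stabilizes the system.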

Journal ArticleDOI
TL;DR: This paper proposes a method for noncoherent sources, which continues to work under such conditions, while maintaining low computational complexity, and allows the probability of false alarm to be controlled and predefined, which is a crucial point for systems such as RADARs.
Abstract: High-resolution methods for estimating signal processing parameters such as bearing angles in array processing or frequencies in spectral analysis may be hampered by a poorly selected model order. As classical model order selection methods fail when the number of snapshots available is small, this paper proposes a method for noncoherent sources which continues to work under such conditions, while maintaining low computational complexity. For white Gaussian noise and short data we show that the profile of the ordered noise eigenvalues approximately fits an exponential law. This fact is used to provide a recursive algorithm which detects a mismatch between the observed eigenvalue profile and the theoretical noise-only eigenvalue profile, as such a mismatch indicates the presence of a source. Moreover, the proposed method allows the probability of false alarm to be controlled and predefined, which is a crucial point for systems such as RADARs. Results of simulations are provided in order to show the capabilities of the algorithm.

Journal ArticleDOI
TL;DR: In this article, the effect of wind correlation on aircraft conflict probability estimation is examined and the conclusion of the study is that wind correlation may have a significant effect under particular encounter geometries.
Abstract: A study which examines the effect of wind correlation on aircraft conflict probability estimation is presented. We describe the correlation structure of the difference between the actual wind and the meteorological wind forecasts and discuss how it can be implemented in simulation. For several encounters we then examine the aircraft conflict probability estimation errors if the correlation structure is ignored and the wind is instead assumed to be modeled as white noise. The conclusion of the study is that wind correlation may have a significant effect under particular encounter geometries.

Journal ArticleDOI
TL;DR: This paper investigates two classes of particle filtering techniques, distributed resampling with non-proportional allocation (DRNA) and local selection (LS), and analyzes their effect on the sample variance of the importance weights; on the distortion, due to the resampling step, of the discrete probability measure given by the particle filter; and on the variance of estimators after resampling.
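For context on the quantities analyzed here, the sketch below shows plain multinomial resampling and the effective-sample-size diagnostic derived from the importance-weight variance. This is the generic textbook step, not the DRNA or LS schemes themselves.

```python
import numpy as np

rng = np.random.default_rng(2)

N = 1000
# Unnormalised log importance weights with high variance (weight degeneracy)
logw = rng.normal(0.0, 3.0, N)
w = np.exp(logw - logw.max())
w /= w.sum()

# Effective sample size: N when weights are uniform, ~1 when one weight dominates
ess_before = 1.0 / np.sum(w**2)

# Multinomial resampling: draw particle indices with probability w,
# then reset all weights to 1/N
idx = rng.choice(N, size=N, p=w)
w_resampled = np.full(N, 1.0 / N)
ess_after = 1.0 / np.sum(w_resampled**2)

print(f"ESS before: {ess_before:.1f}, after resampling: {ess_after:.1f}")
```

Resampling restores a uniform weight profile at the cost of duplicating particles, which is exactly the distortion of the discrete probability measure that the paper quantifies.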

Proceedings ArticleDOI
15 Apr 2007
TL;DR: The Monte Carlo method is applied to compute the expected posterior Cramer-Rao lower bound (CRLB) in a nonlinear, possibly non-Gaussian, dynamic system and the joint recursive one-step-ahead CRLB on the state vector is introduced as the criterion for sensor selection.
Abstract: The objective in sensor collaboration for target tracking is to dynamically select a subset of sensors over time to optimize tracking performance in terms of mean square error (MSE). In this paper, we apply the Monte Carlo method to compute the expected posterior Cramer-Rao lower bound (CRLB) in a nonlinear, possibly non-Gaussian, dynamic system. The joint recursive one-step-ahead CRLB on the state vector is introduced as the criterion for sensor selection. The proposed approach is validated by simulation results. In the experiments, a particle filter is used to track a single target moving according to a white noise acceleration model through a two-dimensional field where bearing-only sensors are randomly distributed. Simulation results demonstrate the improved tracking performance of the proposed method compared to other existing methods in terms of tracking accuracy.
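The white noise acceleration model used in the simulations is the standard nearly-constant-velocity model. A self-contained sketch of its discrete-time form (the sampling interval and noise intensity are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(3)

dt = 1.0    # sampling interval (illustrative)
q = 0.01    # power spectral density of the white acceleration noise (illustrative)

# State [x, vx, y, vy]; per-axis transition and process-noise blocks
F1 = np.array([[1.0, dt],
               [0.0, 1.0]])
Q1 = q * np.array([[dt**3 / 3, dt**2 / 2],
                   [dt**2 / 2, dt]])

F = np.kron(np.eye(2), F1)     # block-diagonal over the x- and y-axes
Q = np.kron(np.eye(2), Q1)

# Simulate one target trajectory through the 2-D field
L = np.linalg.cholesky(Q + 1e-12 * np.eye(4))
x = np.array([0.0, 1.0, 0.0, 0.5])
traj = [x]
for _ in range(50):
    x = F @ x + L @ rng.standard_normal(4)
    traj.append(x)
traj = np.array(traj)
```

The CRLB recursion itself then propagates Fisher information through F and Q together with the bearing-measurement Jacobians of the selected sensors; that part is omitted here.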

01 Jan 2007
TL;DR: This thesis introduces sequential annealed importance sampling as a method for calculating model evidence in an on-line fashion as new data arrives and describes how Gaussian processes can be used to efficiently estimate gradients of noisy functions, and numerically estimate integrals.
Abstract: Gaussian processes have proved to be useful and powerful constructs for the purposes of regression. The classical method proceeds by parameterising a covariance function, and then infers the parameters given the training data. In this thesis, the classical approach is augmented by interpreting Gaussian processes as the outputs of linear filters excited by white noise. This enables a straightforward definition of dependent Gaussian processes as the outputs of a multiple output linear filter excited by multiple noise sources. We show how dependent Gaussian processes defined in this way can also be used for the purposes of system identification. One well-known problem with Gaussian process regression is that the computational complexity scales poorly with the amount of training data. We review one approximate solution that alleviates this problem, namely reduced rank Gaussian processes. We then show how the reduced rank approximation can be applied to allow for the efficient computation of dependent Gaussian processes. We then examine the application of Gaussian processes to the solution of other machine learning problems. To do so, we review methods for the parameterisation of full covariance matrices. Furthermore, we discuss how improvements can be made by marginalising over alternative models, and introduce methods to perform these computations efficiently. In particular, we introduce sequential annealed importance sampling as a method for calculating model evidence in an on-line fashion as new data arrives. Gaussian process regression can also be applied to optimisation. An algorithm is described that uses model comparison between multiple models to find the optimum of a function while taking as few samples as possible. This algorithm shows impressive performance on the standard control problem of double pole balancing.
Finally, we describe how Gaussian processes can be used to efficiently estimate gradients of noisy functions, and numerically estimate integrals.
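The filtered-white-noise interpretation is easy to demonstrate on a grid: convolving discretised white noise with a Gaussian impulse response yields a sample from a Gaussian process with squared-exponential covariance. A minimal sketch (grid spacing and length-scale are illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)

n = 1000
dx = 0.01
h = 0.1                                   # filter length-scale (illustrative)

# Discretised white noise: variance 1/dx per sample so that the
# continuous-time white-noise intensity is 1
w = rng.standard_normal(n) / np.sqrt(dx)

# Gaussian impulse response, normalised for unit output variance
t = np.arange(-5 * h, 5 * h + dx, dx)
kernel = np.exp(-t**2 / (2 * h**2))
kernel /= np.sqrt(np.sum(kernel**2) * dx)

# Filter output: a sample path of a GP whose covariance is the
# squared-exponential kernel with length-scale h * sqrt(2)
f = np.convolve(w, kernel, mode="same") * dx
print(np.std(f[100:-100]))
```

Dependent Gaussian processes follow the same recipe with several filters sharing (some of) the same noise sources, which is what induces the cross-covariances between outputs.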

Journal ArticleDOI
TL;DR: In this article, the authors developed a theory of ergodicity for a class of random dynamical systems where the driving noise is not white, using the strong Feller property and topological irreducibility.
Abstract: We develop a theory of ergodicity for a class of random dynamical systems where the driving noise is not white. The two main tools of our analysis are the strong Feller property and topological irreducibility, introduced in this work for a class of non-Markovian systems. They allow us to obtain a criterion for ergodicity which is similar in nature to the Doob-Khas'minskii theorem. The second part of this article shows how it is possible to apply these results to the case of stochastic differential equations driven by fractional Brownian motion. It follows that under a nondegeneracy condition on the noise, such equations admit a unique adapted stationary solution.
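Fractional Brownian motion is the canonical example of such non-white driving noise: for Hurst parameter H ≠ 1/2 its increments are correlated. A sketch of exact sampling on a grid via the Cholesky factor of the fBm covariance (the grid size and H value are illustrative):

```python
import numpy as np

rng = np.random.default_rng(5)

H = 0.7                     # Hurst parameter; H != 1/2 means non-white increments
n = 100
t = np.arange(1, n + 1) / n

# Covariance of fractional Brownian motion:
# Cov(B_s, B_t) = 0.5 * (s^{2H} + t^{2H} - |t - s|^{2H})
s, u = np.meshgrid(t, t, indexing="ij")
C = 0.5 * (s**(2 * H) + u**(2 * H) - np.abs(s - u)**(2 * H))

# One exact sample path on the grid (small jitter for numerical stability)
L = np.linalg.cholesky(C + 1e-12 * np.eye(n))
B = L @ rng.standard_normal(n)
```

Note Var(B_t) = t^{2H}, so the process is self-similar but, unlike Brownian motion, not Markovian, which is why the ergodicity theory above cannot rely on standard Markov-semigroup arguments.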

Journal ArticleDOI
TL;DR: In this article, the authors proposed a technique for estimating the entropy and relative dimensionality of image patches based on a function called the proximity distribution (a nearest-neighbor technique).
Abstract: Natural scenes, like almost all natural data sets, show considerable redundancy. Although many forms of redundancy have been investigated (e.g., pixel distributions, power spectra, contour relationships, etc.), estimates of the true entropy of natural scenes have been largely considered intractable. We describe a technique for estimating the entropy and relative dimensionality of image patches based on a function we call the proximity distribution (a nearest-neighbor technique). The advantage of this function over simple statistics such as the power spectrum is that the proximity distribution is dependent on all forms of redundancy. We demonstrate that this function can be used to estimate the entropy (redundancy) of 3×3 patches of known entropy as well as 8×8 patches of Gaussian white noise, natural scenes, and noise with the same power spectrum as natural scenes. The techniques are based on assumptions regarding the intrinsic dimensionality of the data, and although the estimates depend on an extrapolation model for images larger than 3×3, we argue that this approach provides the best current estimates of the entropy and compressibility of natural-scene patches and that it provides insights into the efficiency of any coding strategy that aims to reduce redundancy. We show that the sample of 8×8 patches of natural scenes used in this study has less than half the entropy of 8×8 white noise and less than 60% of the entropy of noise with the same power spectrum. In addition, given a finite number of samples (<2^20) drawn randomly from the space of 8×8 patches, the subspace of 8×8 natural-scene patches shows a dimensionality that depends on the sampling density and that for low densities is significantly lower dimensional than the space of 8×8 patches of white noise and noise with the same power spectrum.
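Nearest-neighbor entropy estimation of this kind can be illustrated with the standard Kozachenko-Leonenko estimator, shown below on a distribution of known entropy. This is a generic sketch in the same spirit as the proximity-distribution approach, not the authors' exact function, and the sample size is illustrative.

```python
import math
import numpy as np

rng = np.random.default_rng(6)

def nn_entropy(X):
    """Kozachenko-Leonenko nearest-neighbour entropy estimate, in nats."""
    n, d = X.shape
    # Brute-force pairwise distances (fine for a few thousand points)
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(D, np.inf)
    r = D.min(axis=1)                               # nearest-neighbour distances
    log_Vd = (d / 2) * math.log(math.pi) - math.lgamma(d / 2 + 1)
    return d * np.mean(np.log(r)) + log_Vd + np.euler_gamma + math.log(n - 1)

# Sanity check on a distribution of known entropy: a 2-D standard Gaussian,
# whose true differential entropy is log(2*pi*e) ~ 2.84 nats
X = rng.standard_normal((2000, 2))
print(nn_entropy(X))
```

The key property exploited in the paper is that redundancy of any form pulls samples closer together, shrinking nearest-neighbor distances and hence the entropy estimate, regardless of whether the redundancy shows up in the power spectrum.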

Journal ArticleDOI
TL;DR: In this paper, the authors proposed a new approach for the location of seismic sources using a technique inspired by Gaussian-beam migration of three-component data, which requires only the preliminary picking of time intervals around a detected event and is much less sensitive to the picking precision than standard location procedures.
Abstract: We propose a new approach for the location of seismic sources using a technique inspired by Gaussian-beam migration of three-component data. This approach requires only the preliminary picking of time intervals around a detected event and is much less sensitive to the picking precision than standard location procedures. Furthermore, this approach is characterized by a high degree of automation. The polarization information of three-component data is estimated and used to perform initial-value ray tracing. By weighting the energy of the signal using Gaussian beams around these rays, the stacking is restricted to physically relevant regions only. Event locations correspond to regions of maximum energy in the resulting image. We have successfully applied the method to synthetic data examples with 20%–30% white noise and to real data of a hydraulic-fracturing experiment, where events with comparatively small magnitudes (<0) were recorded.
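The idea of locating events by stacking energy along predicted travel-time curves can be sketched in a strongly simplified form: a plain diffraction stack in a homogeneous 2-D medium, with a grid search over candidate locations. All parameters are illustrative, and this omits the three-component polarization analysis and Gaussian-beam weighting that the method actually relies on.

```python
import numpy as np

rng = np.random.default_rng(7)

v = 3000.0                 # homogeneous velocity [m/s] (simplification)
dt = 0.002
nt = 500
receivers = np.array([[x, 0.0] for x in np.linspace(0, 2000, 11)])
true_src = np.array([900.0, 700.0])
t0 = 0.1                   # origin time

# Synthetic traces: a short pulse at each receiver's arrival time, plus noise
times = np.arange(nt) * dt
traces = 0.3 * rng.standard_normal((len(receivers), nt))
for i, rec in enumerate(receivers):
    ta = t0 + np.linalg.norm(true_src - rec) / v
    traces[i] += np.exp(-((times - ta) / 0.01) ** 2)

# Diffraction stack: for each candidate location, sum trace energy along the
# predicted moveout and keep the maximum over origin time
xs = np.linspace(0, 2000, 41)
zs = np.linspace(100, 1500, 29)
energy = traces**2
best, best_xz = -np.inf, None
for x in xs:
    for z in zs:
        shifts = (np.linalg.norm(np.array([x, z]) - receivers, axis=1) / v / dt).astype(int)
        n0 = nt - shifts.max()          # common origin-time axis length
        stack = sum(energy[i, s:s + n0] for i, s in enumerate(shifts))
        val = stack.max()
        if val > best:
            best, best_xz = val, (x, z)
print(best_xz)
```

The imaged maximum sits at the grid node whose moveout best aligns the pulses; Gaussian-beam weighting plays the role of restricting this stack to rays consistent with the observed polarization, which suppresses stacking noise from physically irrelevant directions.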