
Showing papers on "Gaussian published in 1999"


Journal ArticleDOI
TL;DR: In this article, the authors studied a random growth model in two dimensions closely related to the one-dimensional totally asymmetric exclusion process and showed that the shape fluctuations, appropriately scaled, converge in distribution to the Tracy-Widom largest eigenvalue distribution for the Gaussian Unitary Ensemble.
Abstract: We study a certain random growth model in two dimensions closely related to the one-dimensional totally asymmetric exclusion process. The results show that the shape fluctuations, appropriately scaled, converge in distribution to the Tracy-Widom largest eigenvalue distribution for the Gaussian Unitary Ensemble.
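A common concrete formulation of this growth model is last-passage percolation on the square lattice with i.i.d. geometric weights; the Python sketch below is an illustration under that assumption, not code from the paper, and computes the last-passage time whose scaled fluctuations the result concerns.

```python
import numpy as np

def last_passage_time(n, q=0.5, rng=None):
    """Last-passage percolation with i.i.d. geometric weights: G(n, n) is the
    maximum, over up/right lattice paths, of the summed weights (sketch)."""
    rng = np.random.default_rng() if rng is None else rng
    # rng.geometric(1 - q) - 1 gives P(w = k) = (1 - q) q^k for k = 0, 1, 2, ...
    w = rng.geometric(1 - q, size=(n, n)) - 1
    G = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            G[i, j] = w[i, j] + max(G[i - 1, j] if i else 0.0,
                                    G[i, j - 1] if j else 0.0)
    return G[-1, -1]

# A few realizations; their fluctuations about the limit shape are the object
# that converges (after n^(1/3) scaling) to the Tracy-Widom distribution.
print([last_passage_time(200) for _ in range(3)])
```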

1,031 citations


Journal ArticleDOI
Shmuel Rippa1
TL;DR: It is shown, numerically, that the value of the optimal c (the value of c that minimizes the interpolation error) depends on the number and distribution of data points, on the data vector, and on the precision of the computation.
Abstract: The accuracy of many schemes for interpolating scattered data with radial basis functions depends on a shape parameter c of the radial basis function. In this paper we study the effect of c on the quality of fit of the multiquadric, inverse multiquadric and Gaussian interpolants. We show, numerically, that the value of the optimal c (the value of c that minimizes the interpolation error) depends on the number and distribution of data points, on the data vector, and on the precision of the computation. We present an algorithm for selecting a good value for c that implicitly takes all the above considerations into account. The algorithm selects c by minimizing a cost function that imitates the error between the radial interpolant and the (unknown) function from which the data vector was sampled. The cost function is defined by taking some norm of the error vector E = (E_1, ..., E_N)^T, where E_k = f_k - S_k(x_k) and S_k is the interpolant to a reduced data set obtained by removing the point x_k and the corresponding data value f_k from the original data set. The cost function can be defined for any radial basis function and any dimension. We present the results of many numerical experiments involving interpolation of two-dimensional data sets by the multiquadric, inverse multiquadric and Gaussian interpolants and we show that our algorithm consistently produces good values for the parameter c.
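The leave-one-out cost function described above can be sketched directly. The snippet below is illustrative only (Rippa's paper also gives a cheaper formula that avoids refitting for each removed point); it evaluates the error vector E for a multiquadric interpolant and scans a few candidate values of c. The function names and the use of the Euclidean norm of E are assumptions.

```python
import numpy as np

def multiquadric(r, c):
    # Multiquadric radial basis function phi(r) = sqrt(r^2 + c^2).
    return np.sqrt(r**2 + c**2)

def loocv_cost(points, values, c):
    """Leave-one-out cost ||E|| with E_k = f_k - S_k(x_k), where S_k
    interpolates the data with point k removed (illustrative sketch)."""
    n = len(points)
    r = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    A = multiquadric(r, c)                      # full interpolation matrix
    E = np.empty(n)
    for k in range(n):
        keep = np.arange(n) != k
        w = np.linalg.solve(A[np.ix_(keep, keep)], values[keep])
        E[k] = values[k] - A[k, keep] @ w       # error at the removed point
    return np.linalg.norm(E)

# Example: scan a few candidate shape parameters on random 2-D data.
rng = np.random.default_rng(0)
x = rng.random((40, 2))
f = np.sin(4 * x[:, 0]) * np.cos(3 * x[:, 1])
best_c = min((0.05, 0.1, 0.2, 0.5, 1.0), key=lambda c: loocv_cost(x, f, c))
print(best_c)
```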

872 citations


Journal ArticleDOI
TL;DR: The algorithms used to generate three-dimensional grids of the electron localization function ELF, to assign the data points to basins and to perform the integration of the one-electron density and of the pair functions over the basins are described.

774 citations


Journal ArticleDOI
TL;DR: In this article, the first-order probability density functions (PDFs) of the Class A and Class B noise models were derived and the authors showed that these PDFs can be approximated by a symmetric Gaussian α-stable model in the case of narrowband reception, or when the PDF ω_1(α) of the amplitude is symmetric.
Abstract: The subject here is generalized (i.e., non-Gaussian) noise models, and specifically their first-order probability density functions (PDFs). Attention is focused primarily on the author's canonical statistical-physical Class A and Class B models. In particular, Class A noise describes the type of electromagnetic interference (EMI) often encountered in telecommunication applications, where this ambient noise is largely due to other, "intelligent" telecommunication operations. On the other hand, ambient Class B noise usually represents man-made or natural "nonintelligent" (i.e., non-message-bearing) noise and is highly impulsive. Class A noise is not an α-stable process, nor is it reducible to such, except in the limiting Gaussian cases of high-density noise (by the central limit theorem). Class B noise is also asymptotically normal (before model approximation). Under rather broad conditions, principally governed by the source propagation and distribution scenarios, the PDF of Class B noise alone (no Gaussian component) can usually be approximated by (1) a symmetric Gaussian α-stable (SαS) model in the case of narrowband reception, or when the PDF ω_1(α) of the amplitude is symmetric; and (2) a nonsymmetric α-stable (NSαS) model (no Gaussian component) can be constructed in broadband regimes. New results here include: (i) counting functional methods for constructing the general qth-order characteristic functions (CFs) of Class A and Class B noise, from which (all) moments and (in principle) the PDFs follow; (ii) the first-order CFs, PDFs, and cumulative probabilities (APDs) of nonsymmetric broadband Class B noise, extended to include additive Gauss noise (AGN); (iii) proof of the existence of all moments in the basic Class A and Class B models; (iv) the key physical role of AGN and the fact that AGN removes α-stability; (v) the explicit roles of the propagation and distribution scenarios; and (vi) extension to noise fields. Although telecommunication applications are emphasized, Class A and Class B noise models apply selectively, but equally well, to other physical regimes, e.g., underwater acoustics and EM (radar, optics, etc.). Supportive empirical data are included.
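For readers who want to experiment with impulsive noise of this type, the commonly cited Poisson-mixture-of-Gaussians form of the Class A model can be sampled as below. The parameter names A (impulsive index) and Gamma (Gaussian-to-impulsive power ratio) and the per-sample variance formula follow the standard textbook form of the model; they are assumptions here and not taken from this paper.

```python
import numpy as np

def class_a_noise(n, A=0.1, Gamma=0.01, sigma2=1.0, rng=None):
    """Draw n samples from a Class A-style noise model: a Poisson-weighted
    mixture of zero-mean Gaussians (commonly cited form; sketch only)."""
    rng = np.random.default_rng() if rng is None else rng
    m = rng.poisson(A, size=n)                      # number of active sources
    var = sigma2 * (m / A + Gamma) / (1.0 + Gamma)  # per-sample variance
    return rng.normal(0.0, np.sqrt(var))

x = class_a_noise(100_000, A=0.1, Gamma=0.01)
print("kurtosis:", ((x - x.mean())**4).mean() / x.var()**2)  # heavy-tailed, >> 3
```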

650 citations


Journal ArticleDOI
TL;DR: A methodology is developed to derive algorithms for optimal basis selection by minimizing diversity measures proposed by Wickerhauser (1994) and Donoho (1994), which include the p-norm-like (ℓ_(p≤1)) diversity measures and the Gaussian and Shannon entropies.
Abstract: A methodology is developed to derive algorithms for optimal basis selection by minimizing diversity measures proposed by Wickerhauser (1994) and Donoho (1994). These measures include the p-norm-like (ℓ_(p≤1)) diversity measures and the Gaussian and Shannon entropies. The algorithm development methodology uses a factored representation for the gradient and involves successive relaxation of the Lagrangian necessary condition. This yields algorithms that are intimately related to the affine scaling transformation (AST) based methods commonly employed by the interior point approach to nonlinear optimization. The algorithms minimizing the ℓ_(p≤1) diversity measures are equivalent to a previously developed class of algorithms called focal underdetermined system solver (FOCUSS). The general nature of the methodology provides a systematic approach for deriving this class of algorithms and a natural mechanism for extending them. It also facilitates a better understanding of the convergence behavior and a strengthening of the convergence results. The Gaussian entropy minimization algorithm is shown to be equivalent to a well-behaved p=0 norm-like optimization algorithm. Computer experiments demonstrate that the p-norm-like and the Gaussian entropy algorithms perform well, converging to sparse solutions. The Shannon entropy algorithm produces solutions that are concentrated but are shown to not converge to a fully sparse solution.
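The FOCUSS-style iteration referred to above is essentially a reweighted minimum-norm update. A minimal sketch, assuming an underdetermined system Ax = b and a p ≤ 1 diversity measure (not the authors' exact implementation or stopping rule):

```python
import numpy as np

def focuss(A, b, p=0.5, iters=50, eps=1e-8):
    """Reweighted minimum-norm (FOCUSS-style) iteration for sparse solutions
    of Ax = b, minimizing an l_p (p <= 1) diversity measure (sketch)."""
    x = np.linalg.pinv(A) @ b                           # minimum-norm start
    for _ in range(iters):
        W = np.diag(np.abs(x) ** (1 - p / 2) + eps)     # affine scaling weights
        q = np.linalg.pinv(A @ W) @ b
        x = W @ q                                       # x_{k+1} = W_k (A W_k)^+ b
    return x

rng = np.random.default_rng(1)
A = rng.normal(size=(20, 50))
x_true = np.zeros(50); x_true[[3, 17, 31]] = [1.0, -2.0, 0.5]
x_hat = focuss(A, A @ x_true)   # converges to a sparse solution
```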

554 citations


Journal ArticleDOI
TL;DR: This paper investigates various connections between shrinkage methods and maximum a posteriori (MAP) estimation using such priors, and introduces a new family of complexity priors based upon Rissanen's universal prior on integers.
Abstract: Research on universal and minimax wavelet shrinkage and thresholding methods has demonstrated near-ideal estimation performance in various asymptotic frameworks. However, image processing practice has shown that universal thresholding methods are outperformed by simple Bayesian estimators assuming independent wavelet coefficients and heavy-tailed priors such as generalized Gaussian distributions (GGDs). In this paper, we investigate various connections between shrinkage methods and maximum a posteriori (MAP) estimation using such priors. In particular, we state a simple condition under which MAP estimates are sparse. We also introduce a new family of complexity priors based upon Rissanen's universal prior on integers. One particular estimator in this class outperforms conventional estimators based on earlier applications of the minimum description length (MDL) principle. We develop analytical expressions for the shrinkage rules implied by GGD and complexity priors. This allows us to show the equivalence between universal hard thresholding, MAP estimation using a very heavy-tailed GGD, and MDL estimation using one of the new complexity priors. Theoretical analysis supported by numerous practical experiments shows the robustness of some of these estimates against mis-specifications of the prior, a basic concern in image processing applications.
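For the Laplacian member of the GGD family the MAP rule reduces to soft thresholding, while universal hard thresholding corresponds to the very heavy-tailed limit mentioned above. A minimal sketch of the two rules, assuming i.i.d. Gaussian noise of known level sigma (illustrative, not the paper's full estimator family):

```python
import numpy as np

def soft_threshold(w, t):
    # MAP shrinkage for a Laplacian (GGD, shape 1) prior on wavelet coefficients.
    return np.sign(w) * np.maximum(np.abs(w) - t, 0.0)

def hard_threshold(w, t):
    # Universal hard thresholding: keep coefficients whose magnitude exceeds t.
    return np.where(np.abs(w) > t, w, 0.0)

w = np.array([-3.1, -0.4, 0.2, 1.7, 5.0])      # noisy wavelet coefficients
sigma, n = 1.0, 1_000_000
t_universal = sigma * np.sqrt(2 * np.log(n))   # universal threshold
print(soft_threshold(w, 1.0))
print(hard_threshold(w, t_universal))
```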

537 citations


Journal ArticleDOI
TL;DR: In this paper, the Kohn-Sham orbitals are expanded in Gaussian-type functions and an augmented-plane-wave-type approach is used to represent the electronic density: the total density is described by a smooth extended part, represented in plane waves as in the authors' previous work, and by parts localised close to the nuclei, which are expanded in Gaussians.
Abstract: A new algorithm for density-functional-theory-based ab initio molecular dynamics simulations is presented. The Kohn–Sham orbitals are expanded in Gaussian-type functions and an augmented-plane-wave-type approach is used to represent the electronic density. This extends previous work of ours where the density was expanded only in plane waves. We describe the total density in a smooth extended part which we represent in plane waves as in our previous work and parts localised close to the nuclei which are expanded in Gaussians. Using this representation of the charge we show how the localised and extended part can be treated separately, achieving a computational cost for the calculation of the Kohn–Sham matrix that scales with the system size N as O(NlogN). Furthermore, we are able to reduce drastically the size of the plane-wave basis. In addition, we introduce a multiple-cutoff method that improves considerably the performance of this approach. Finally, we demonstrate with a series of numerical examples the accuracy and efficiency of the new algorithm, both for electronic structure calculations and for ab initio molecular dynamics simulations.

527 citations


Journal ArticleDOI
TL;DR: In this article, the density matrix renormalization group was used for quantum chemical calculations for molecules, as an alternative to traditional methods, such as configuration interaction or coupled cluster approaches.
Abstract: In this paper we describe how the density matrix renormalization group can be used for quantum chemical calculations for molecules, as an alternative to traditional methods, such as configuration interaction or coupled cluster approaches. As a demonstration of the potential of this approach, we present results for the H2O molecule in a standard Gaussian basis. Results for the total energy of the system compare favorably with the best traditional quantum chemical methods.

489 citations


Journal ArticleDOI
TL;DR: The robustness of this communication scheme with respect to errors in the estimation of the fading process is studied, and the degradation in performance that results from such estimation errors is quantified.
Abstract: The analysis of flat-fading channels is often performed under the assumption that the additive noise is white and Gaussian, and that the receiver has precise knowledge of the realization of the fading process. These assumptions imply the optimality of Gaussian codebooks and of scaled nearest-neighbor decoding. Here we study the robustness of this communication scheme with respect to errors in the estimation of the fading process. We quantify the degradation in performance that results from such estimation errors, and demonstrate the lack of robustness of this scheme. For some situations we suggest the rule of thumb that, in order to avoid degradation, the estimation error should be negligible compared to the reciprocal of the signal-to-noise ratio (SNR).

468 citations


Journal ArticleDOI
TL;DR: It is shown that SsfPack can be easily used for implementing, fitting and analysing Gaussian models relevant to many areas of econometrics and statistics.
Abstract: This paper discusses and documents the algorithms of SsfPack 2.2. SsfPack is a suite of C routines for carrying out computations involving the statistical analysis of univariate and multivariate models in state space form. The emphasis is on documenting the link we have made to the Ox computing environment. SsfPack allows for a full range of different state space forms: from a simple time-invariant model to a complicated time-varying model. Functions can be used which put standard models such as ARMA and cubic spline models in state space form. Basic functions are available for filtering, moment smoothing and simulation smoothing. Ready-to-use functions are provided for standard tasks such as likelihood evaluation, forecasting and signal extraction. We show that SsfPack can be easily used for implementing, fitting and analysing Gaussian models relevant to many areas of econometrics and statistics. Some Gaussian illustrations are given.
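SsfPack itself is a C library linked to the Ox environment, so its actual function names are not reproduced here; the generic Python sketch below only illustrates the core operation it documents, namely Kalman filtering of a linear Gaussian state space model and evaluation of the prediction-error log-likelihood. All names and the local level example are illustrative assumptions.

```python
import numpy as np

def kalman_loglik(y, T, Z, Q, H, a0, P0):
    """Prediction-error log-likelihood of the linear Gaussian state space model
    y_t = Z a_t + eps_t,  a_{t+1} = T a_t + eta_t  (generic sketch)."""
    a, P, ll = a0, P0, 0.0
    for yt in y:
        v = yt - Z @ a                       # one-step-ahead prediction error
        F = Z @ P @ Z.T + H                  # prediction-error variance
        K = T @ P @ Z.T @ np.linalg.inv(F)   # Kalman gain
        ll += -0.5 * (np.log(np.linalg.det(2 * np.pi * F)) + v @ np.linalg.solve(F, v))
        a = T @ a + K @ v                    # predicted state for next period
        P = T @ P @ T.T + Q - K @ F @ K.T
    return ll

# Example: local level model (random walk plus noise).
rng = np.random.default_rng(0)
alpha = np.cumsum(rng.normal(0, 0.5, 200))
y = (alpha + rng.normal(0, 1.0, 200)).reshape(-1, 1)
ll = kalman_loglik(y, T=np.eye(1), Z=np.eye(1), Q=np.eye(1) * 0.25,
                   H=np.eye(1), a0=np.zeros(1), P0=np.eye(1) * 1e6)
print(ll)
```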

456 citations


Journal ArticleDOI
TL;DR: In this paper, the authors discuss the recent theoretical developments that have led to these advances and demonstrate in a series of benchmark calculations the present capabilities of state-of-the-art computational quantum chemistry programs for the prediction of molecular structure and properties.
Abstract: Recent advances in linear scaling algorithms that circumvent the computational bottlenecks of large-scale electronic structure simulations make it possible to carry out density functional calculations with Gaussian orbitals on molecules containing more than 1000 atoms and 15 000 basis functions using current workstations and personal computers. This paper discusses the recent theoretical developments that have led to these advances and demonstrates in a series of benchmark calculations the present capabilities of state-of-the-art computational quantum chemistry programs for the prediction of molecular structure and properties.

Journal ArticleDOI
TL;DR: In this paper, the memory of non-stationary processes is estimated using a Gaussian semiparametric estimate of long-range dependence, which is consistent for d ∈ (−½, 1) and asymptotically normal for d ∈ (−½, ¾) under a similar set of assumptions to those in Robinson's paper.
Abstract: Generalizing the definition of the memory parameter d in terms of the differentiated series, we showed in Velasco (Non-stationary log-periodogram regression, Forthcoming J. Economet., 1997) that it is possible to estimate consistently the memory of non-stationary processes using methods designed for stationary long-range-dependent time series. In this paper we consider the Gaussian semiparametric estimate analysed by Robinson (Gaussian semiparametric estimation of long range dependence. Ann. Stat. 23 (1995), 1630–61) for stationary processes. Without a priori knowledge about the possible non-stationarity of the observed process, we obtain that this estimate is consistent for d ∈ (−½, 1) and asymptotically normal for d ∈ (−½, ¾) under a similar set of assumptions to those in Robinson's paper. Tapering the observations, we can estimate any degree of non-stationarity, even in the presence of deterministic polynomial trends of time. The semiparametric efficiency of this estimate for stationary sequences also extends to the non-stationary framework.
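The Gaussian semiparametric (local Whittle) estimate discussed here minimizes a simple objective over the first m periodogram ordinates. A minimal untapered sketch, with the bandwidth choice m = n^0.65 and the optimization bounds as illustrative assumptions:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def local_whittle_d(x, m=None):
    """Gaussian semiparametric (local Whittle) estimate of the memory
    parameter d from the first m periodogram ordinates (sketch)."""
    x = np.asarray(x, float)
    n = len(x)
    m = int(n ** 0.65) if m is None else m             # illustrative bandwidth
    lam = 2 * np.pi * np.arange(1, m + 1) / n          # Fourier frequencies
    I = np.abs(np.fft.fft(x - x.mean())[1:m + 1]) ** 2 / (2 * np.pi * n)
    def R(d):
        # R(d) = log( mean_j lam_j^{2d} I_j ) - 2d * mean_j log lam_j
        return np.log(np.mean(lam ** (2 * d) * I)) - 2 * d * np.mean(np.log(lam))
    return minimize_scalar(R, bounds=(-0.49, 0.99), method="bounded").x

# Example: white noise should give an estimate of d close to 0.
print(local_whittle_d(np.random.default_rng(0).normal(size=4096)))
```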

Journal ArticleDOI
TL;DR: It is shown that the log-periodogram semiparametric estimate of the memory parameter d remains consistent and asymptotically normal for non-stationary time series over suitable ranges of d, and that the estimates are invariant to the presence of certain deterministic trends, without any need of estimation.

Journal ArticleDOI
TL;DR: In this paper, a unified state-space formulation for parameter estimation of exponential-affine term structure models is proposed, which only requires specifying the conditional mean and variance of the system in an approximate sense.
Abstract: This paper proposes a unified state-space formulation for parameter estimation of exponential-affine term structure models. The proposed method uses an approximate linear Kalman filter which only requires specifying the conditional mean and variance of the system in an approximate sense. The method allows for measurement errors in the observed yields to maturity, and can simultaneously deal with many yields on bonds with different maturities. An empirical analysis of two special cases of this general class of model is carried out: the Gaussian case (Vasicek 1977) and the non-Gaussian case (Cox, Ingersoll, and Ross 1985 and Chen and Scott 1992). Our test results indicate a strong rejection of these two cases. A Monte Carlo study indicates that the procedure is reliable for moderate sample sizes.

Journal ArticleDOI
TL;DR: The coherent vortex simulation (CVS) method as discussed by the authors decomposes turbulent flows into a coherent, inhomogeneous, non-Gaussian component and an incoherent, homogeneous, Gaussian component.
Abstract: We decompose turbulent flows into two orthogonal parts: a coherent, inhomogeneous, non-Gaussian component and an incoherent, homogeneous, Gaussian component. The two components have different probability distributions and different correlations, hence different scaling laws. This separation into coherent vortices and incoherent background flow is done for each flow realization before averaging the results and calculating the next time step. To perform this decomposition we have developed a nonlinear scheme based on an objective threshold defined in terms of the wavelet coefficients of the vorticity. Results illustrate the efficiency of this coherent vortex extraction algorithm. As an example we show that in a 256^2 computation 0.7% of the modes correspond to the coherent vortices responsible for 99.2% of the energy and 94% of the enstrophy. We also present a detailed analysis of the nonlinear term, split into coherent and incoherent components, and compare it with the classical separation, e.g., used for large eddy simulation, into large scale and small scale components. We then propose a new method, called coherent vortex simulation (CVS), designed to compute and model two-dimensional turbulent flows using the previous wavelet decomposition at each time step. This method combines both deterministic and statistical approaches: (i) Since the coherent vortices are out of statistical equilibrium, they are computed deterministically in a wavelet basis which is remapped at each time step in order to follow their nonlinear motions. (ii) Since the incoherent background flow is homogeneous and in statistical equilibrium, the classical theory of homogeneous turbulence is valid there and we model statistically the effect of the incoherent background on the coherent vortices. To illustrate the CVS method we apply it to compute a two-dimensional turbulent mixing layer. From the introduction: we introduce a new approach for computing turbulence which is based on the observation that turbulent flows contain both an organized part (the coherent vortices) and a random part (the incoherent background flow). The direct computation of fully developed turbulent flows involves such a large number of degrees of freedom that it is out of reach for the present and near future. Therefore some statistical modeling is needed to drastically reduce the computational cost. The problem is difficult because the statistical structure of turbulence is not Gaussian, although most statistical models assume simple Gaussian statistics. The approach we propose is to split the problem in two: (i) the deterministic computation of the non-Gaussian components of the flow and (ii) the statistical modeling of the Gaussian components (which can be done easily since they are completely characterized by their mean and variance).
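A rough illustration of the coherent/incoherent splitting idea, using PyWavelets and a Donoho-style universal threshold on the wavelet coefficients of a 2-D field; the wavelet, the threshold constant, and the use of the field's own variance are assumptions made for this sketch, not the objective threshold defined in the paper.

```python
import numpy as np
import pywt

def split_coherent(vorticity, wavelet="coif2", level=4):
    """Split a 2-D field into a coherent part (above-threshold wavelet
    coefficients) and an incoherent remainder; threshold is illustrative."""
    coeffs = pywt.wavedec2(vorticity, wavelet, level=level)
    arr, slices = pywt.coeffs_to_array(coeffs)          # all coefficients, incl. approximation
    thresh = np.sqrt(2 * vorticity.var() * np.log(vorticity.size))
    coherent_arr = np.where(np.abs(arr) > thresh, arr, 0.0)
    coherent = pywt.waverec2(
        pywt.array_to_coeffs(coherent_arr, slices, output_format="wavedec2"),
        wavelet)
    return coherent, vorticity - coherent

omega = np.random.default_rng(0).normal(size=(256, 256))  # stand-in vorticity field
coh, inc = split_coherent(omega)
print(coh.var(), inc.var())
```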

Proceedings ArticleDOI
15 Mar 1999
TL;DR: The model used here, a simplified version of the one proposed by LoPresto, Ramchandran and Orchard, is that of a mixture process of independent component fields having a zero-mean Gaussian distribution with unknown variances that are slowly spatially-varying with the wavelet coefficient location s.
Abstract: This paper deals with the application to denoising of a very simple but effective "local" spatially adaptive statistical model for the wavelet image representation that was previously introduced successfully in a compression context. Motivated by the intimate connection between compression and denoising, this paper explores the significant role of the underlying statistical wavelet image model. The model used here, a simplified version of the one proposed by LoPresto, Ramchandran and Orchard (see Proc. IEEE Data Compression Conf., 1997), is that of a mixture process of independent component fields having a zero-mean Gaussian distribution with unknown variances σ_s^2 that are slowly spatially-varying with the wavelet coefficient location s. We propose to use this model for image denoising by initially estimating the underlying variance field using a maximum likelihood (ML) rule and then applying the minimum mean squared error (MMSE) estimation procedure. In the process of variance estimation, we assume that the variance field is "locally" smooth to allow its reliable estimation, and use an adaptive window-based estimation procedure to capture the effect of edges. Despite the simplicity of our method, our denoising results compare favorably with the best reported results in the denoising literature.
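The ML-then-MMSE step described above can be sketched per wavelet subband: estimate a slowly varying variance field from the squared coefficients in a sliding window, then apply Wiener-like shrinkage. The window size and the known noise level below are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def locally_adaptive_shrink(w, noise_var, win=5):
    """MMSE (Wiener-like) shrinkage of a wavelet subband w with a locally
    ML-estimated, slowly varying signal variance field (sketch)."""
    # ML estimate of sigma_s^2 under the local Gaussian model:
    # local mean of w^2 minus the noise variance, clipped at zero.
    local_energy = uniform_filter(w * w, size=win)
    sig_var = np.maximum(local_energy - noise_var, 0.0)
    return w * sig_var / (sig_var + noise_var)

# Example usage on one synthetic noisy subband.
rng = np.random.default_rng(0)
clean = rng.normal(0, 2.0, (128, 128)) * (rng.random((128, 128)) > 0.8)
noisy = clean + rng.normal(0, 1.0, clean.shape)
denoised = locally_adaptive_shrink(noisy, noise_var=1.0)
```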


Proceedings Article
29 Nov 1999
TL;DR: For mixture density estimation, it is shown that a k-component mixture estimated by maximum likelihood (or by an iterative likelihood improvement that is introduced) achieves log-likelihood within order 1/k of the log- likelihood achievable by any convex combination.
Abstract: Gaussian mixtures (or so-called radial basis function networks) for density estimation provide a natural counterpart to sigmoidal neural networks for function fitting and approximation. In both cases, it is possible to give simple expressions for the iterative improvement of performance as components of the network are introduced one at a time. In particular, for mixture density estimation we show that a k-component mixture estimated by maximum likelihood (or by an iterative likelihood improvement that we introduce) achieves log-likelihood within order 1/k of the log-likelihood achievable by any convex combination. Consequences for approximation and estimation using Kullback-Leibler risk are also given. A Minimum Description Length principle selects the optimal number of components k that minimizes the risk bound.
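The kind of iterative likelihood improvement analysed here adds one component at a time. A toy sketch of such a greedy fit with 1-D Gaussian components of fixed width; the candidate means taken from the data and the weights alpha = 2/(j+1) are assumptions for illustration, not the paper's exact procedure.

```python
import numpy as np
from scipy.stats import norm

def greedy_mixture(x, k, sigma=0.5):
    """Build a k-component Gaussian mixture one component at a time,
    each step greedily improving the log-likelihood (illustrative sketch)."""
    dens = np.full(len(x), 1e-300)          # current mixture density at the data
    means = []
    for j in range(1, k + 1):
        alpha = 2.0 / (j + 1)
        best = None
        for mu in x[:: max(1, len(x) // 200)]:          # candidate centers
            cand = (1 - alpha) * dens + alpha * norm.pdf(x, mu, sigma)
            ll = np.sum(np.log(cand))
            if best is None or ll > best[0]:
                best = (ll, mu, cand)
        means.append(best[1])
        dens = best[2]
    return means, np.sum(np.log(dens))

rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(-2, 0.5, 300), rng.normal(2, 0.5, 300)])
means, ll = greedy_mixture(data, k=4)
print(means, ll)
```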

Journal ArticleDOI
TL;DR: In this article, the number of primitive Gaussians used to define the basis functions is not fixed but adjusted, based on a total energy criterion, and all basis functions share the same set of exponents.
Abstract: We introduce a scheme for the optimization of Gaussian basis sets for use in density-functional calculations. It is applicable to both all-electron and pseudopotential methodologies. In contrast to earlier approaches, the number of primitive Gaussians (exponents) used to define the basis functions is not fixed but adjusted, based on a total-energy criterion. Furthermore, all basis functions share the same set of exponents. The numerical results for the scaling of the shortest-range Gaussian exponent as a function of the nuclear charge are explained by analytical derivations. We have generated all-electron basis sets for H, B through F, Al, Si, Mn, and Cu. Our results show that they efficiently and accurately reproduce structural properties and binding energies for a variety of clusters and molecules for both local and gradient-corrected density functionals.

Journal ArticleDOI
TL;DR: A novel approach for the problem of estimating the data model of independent component analysis (or blind source separation) in the presence of Gaussian noise is introduced and a modification of the fixed-point (FastICA) algorithm is introduced.
Abstract: A novel approach for the problem of estimating the data model of independent component analysis (or blind source separation) in the presence of Gaussian noise is introduced. We define the Gaussian moments of a random variable as the expectations of the Gaussian function (and some related functions) with different scale parameters, and show how the Gaussian moments of a random variable can be estimated from noisy observations. This enables us to use Gaussian moments as one-unit contrast functions that have no asymptotic bias even in the presence of noise, and that are robust against outliers. To implement the maximization of the contrast functions based on Gaussian moments, a modification of the fixed-point (FastICA) algorithm is introduced.
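The fixed-point update with a Gaussian-function contrast can be sketched as follows for whitened, noise-free data; this is the ordinary one-unit FastICA iteration with the Gaussian nonlinearity g(u) = u·exp(−u²/2). The paper's bias-corrected Gaussian moments for noisy observations are not implemented here.

```python
import numpy as np

def fastica_gauss(Z, iters=200, tol=1e-8, rng=None):
    """One-unit FastICA on whitened data Z (dim x samples) using the Gaussian
    nonlinearity (noise-free sketch; no bias correction for noisy data)."""
    rng = np.random.default_rng() if rng is None else rng
    w = rng.normal(size=Z.shape[0])
    w /= np.linalg.norm(w)
    for _ in range(iters):
        u = w @ Z
        g = u * np.exp(-u**2 / 2)            # derivative of the Gaussian contrast
        dg = (1 - u**2) * np.exp(-u**2 / 2)
        w_new = (Z * g).mean(axis=1) - dg.mean() * w
        w_new /= np.linalg.norm(w_new)
        if 1 - abs(w_new @ w) < tol:
            return w_new
        w = w_new
    return w
```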

Journal ArticleDOI
TL;DR: In this article, the power-spectrum covariance matrix in nonlinear perturbation theory (weakly nonlinear regime), in the hierarchical model, and from numerical simulations in real and redshift space is calculated for galaxy and weak-lensing surveys.
Abstract: Gravitational clustering is an intrinsically nonlinear process that generates significant non-Gaussian signatures in the density field. We consider how these affect power spectrum determinations from galaxy and weak-lensing surveys. Non-Gaussian effects not only increase the individual error bars compared to the Gaussian case but, most importantly, lead to nontrivial cross-correlations between different band powers, correlating small-scale band powers both among themselves and with those at large scales. We calculate the power-spectrum covariance matrix in nonlinear perturbation theory (weakly nonlinear regime), in the hierarchical model (strongly nonlinear regime), and from numerical simulations in real and redshift space. In particular, we show that the hierarchical Ansatz cannot be strictly valid for the configurations of the trispectrum involved in the calculation of the power-spectrum covariance matrix. We discuss the impact of these results on parameter estimation from power-spectrum measurements and their dependence on the size of the survey and the choice of band powers. We show that the non-Gaussian terms in the covariance matrix become dominant for scales smaller than the nonlinear scale k_nl ~ 0.2 h Mpc^-1, depending somewhat on power normalization. Furthermore, we find that cross-correlations mostly deteriorate the determination of the amplitude of a rescaled power spectrum, whereas its shape is less affected. In weak lensing surveys the projection tends to reduce the importance of non-Gaussian effects. Even so, for background galaxies at redshift z ~ 1, the non-Gaussian contribution rises significantly around l ~ 1000 and could become comparable to the Gaussian terms depending upon the power spectrum normalization and cosmology. The projection has another interesting effect: the ratio between non-Gaussian and Gaussian contributions saturates and can even decrease at small enough angular scales if the power spectrum of the three-dimensional field falls faster than k^-2.

Journal ArticleDOI
TL;DR: It is observed that the chirplet decomposition and the related TFD provide more compact and precise representation of signal inner structures compared with the commonly used time-frequency representations.
Abstract: A new four-parameter atomic decomposition of chirplets is developed for compact and precise representation of signals with chirp components. The four-parameter chirplet atom is obtained from the unit Gaussian function by successive applications of scaling, fractional Fourier transform (FRFT), and time-shift and frequency-shift operators. The application of the FRFT operator results in a rotation of the Wigner distribution of the Gaussian in the time-frequency plane by a specified angle. The decomposition is realized by using the matching pursuit algorithm. For this purpose, the four-parameter space is discretized to obtain a small but complete subset in the Hilbert space. A time-frequency distribution (TFD) is developed for clear and readable visualization of the signal components. It is observed that the chirplet decomposition and the related TFD provide more compact and precise representation of signal inner structures compared with the commonly used time-frequency representations.
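A minimal sketch of a single matching-pursuit step over a small dictionary of Gaussian chirplet atoms; here the atom is parameterized directly by a chirp rate rather than by the fractional-Fourier rotation angle used in the paper, and the parameter grid is an illustrative assumption.

```python
import numpy as np

def chirplet(n, t0, f0, s, c):
    """Unit-norm Gaussian chirplet of length n: scale s, chirp rate c,
    centred at sample t0 and normalized frequency f0 (illustrative form)."""
    t = np.arange(n) - t0
    g = np.exp(-0.5 * (t / s) ** 2) * np.exp(1j * 2 * np.pi * (f0 * t + 0.5 * c * t**2))
    return g / np.linalg.norm(g)

def best_atom(x, params):
    """One matching-pursuit step: pick the atom with the largest |<x, g>|."""
    return max(params, key=lambda p: abs(np.vdot(chirplet(len(x), *p), x)))

n = 512
sig = chirplet(n, 256, 0.1, 40, 2e-4).real + 0.1 * np.random.default_rng(0).normal(size=n)
grid = [(256, f0, 40, c) for f0 in (0.05, 0.1, 0.2) for c in (0.0, 2e-4, 5e-4)]
print(best_atom(sig, grid))   # expected to recover (256, 0.1, 40, 2e-4)
```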

Journal ArticleDOI
TL;DR: DWT multiscale products are analyzed for detection and estimation of steps, and a new general closed-form expression for the Cramer-Rao bound (CRB) for discrete-time step-change location estimation is employed.
Abstract: We analyze discrete wavelet transform (DWT) multiscale products for detection and estimation of steps. Here the DWT is an overcomplete approximation to smoothed gradient estimation, with smoothing varied over dyadic scale, as developed by Mallat and Zhong (1992). The multiscale product approach was first proposed by Rosenfeld (1970) for edge detection. We develop statistics of the multiscale products, and characterize the resulting non-Gaussian heavy tailed densities. The results may be applied to edge detection with a false-alarm constraint. The response to impulses, steps, and pulses is also characterized. To facilitate the analysis, we employ a new general closed-form expression for the Cramer-Rao bound (CRB) for discrete-time step-change location estimation. The CRB can incorporate any underlying continuous and differentiable edge model, including an arbitrary number of steps. The CRB analysis also includes sampling phase offset effects and is valid in both additive correlated Gaussian and independent and identically distributed (i.i.d.) non-Gaussian noise. We consider location estimation using multiscale products, and compare results to the appropriate CRB.
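The multiscale-product idea can be sketched in one dimension: smoothed gradient estimates at a few dyadic scales are multiplied pointwise, which reinforces responses at true steps while the noise response becomes heavy-tailed and sparse. The Gaussian-derivative smoothing below stands in for the Mallat-Zhong wavelet and is an assumption of this sketch.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def multiscale_product(x, scales=(1, 2, 4)):
    """Pointwise product of smoothed gradient estimates over dyadic scales
    (Gaussian-derivative smoothing stands in for the Mallat-Zhong DWT)."""
    prod = np.ones_like(x, dtype=float)
    for s in scales:
        prod *= gaussian_filter1d(x, sigma=s, order=1)   # smoothed derivative
    return prod

rng = np.random.default_rng(0)
signal = np.concatenate([np.zeros(200), np.ones(200)]) + 0.2 * rng.normal(size=400)
p = multiscale_product(signal)
print("detected step near sample:", int(np.argmax(np.abs(p))))   # ~200
```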

Journal ArticleDOI
TL;DR: In this paper, the authors proposed to extend the traditional analysis by introducing intensity correlation functions g^(n) of higher order, which allow both to detect non-Gaussian scattering processes and to extract information not available in g^(2) alone.
Abstract: Dynamic light-scattering techniques provide noninvasive probes of diverse media, such as colloidal suspensions, granular materials, or foams. In homodyne photon correlation spectroscopy, the dynamical properties of the medium are extracted from the intensity autocorrelation g^(2)(τ) of the scattered light by means of the Siegert relation g^(2)(τ) = 1 + |⟨E(0)E*(τ)⟩|^2 / ⟨EE*⟩^2. This approach is unfortunately limited to systems where the electric field is a Gaussian random variable and thus breaks down when the scattering sites are few or correlated. We propose to extend the traditional analysis by introducing intensity correlation functions g^(n) of higher order, which allow us both to detect non-Gaussian scattering processes and to extract information not available in g^(2) alone. The g^(n) are experimentally measured by a combination of a commercial correlator and a custom digital delay line. Experimental results for g^(3) and g^(4) are presented for both Gaussian and non-Gaussian light-scattering processes and compared with theoretical predictions.
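Higher-order intensity correlations can be estimated from a sampled intensity trace by simple time averages. The sketch below uses equally spaced delays within each g^(n), which is one possible convention and an assumption here, and checks the Gaussian-light values g^(2)(0) = 2 and g^(3)(0) = 6.

```python
import numpy as np

def g_n(I, tau, n):
    """Time-averaged n-th order intensity correlation
    g^(n)(tau) = <I(t) I(t+tau) ... I(t+(n-1)tau)> / <I>^n
    (sketch, with equally spaced delays as an illustrative convention)."""
    L = len(I) - (n - 1) * tau
    prod = np.ones(L)
    for k in range(n):
        prod *= I[k * tau: k * tau + L]
    return prod.mean() / I.mean() ** n

# Gaussian (thermal-like) light: exponentially distributed intensity.
rng = np.random.default_rng(0)
E = rng.normal(size=100_000) + 1j * rng.normal(size=100_000)
I = np.abs(E) ** 2
print(g_n(I, tau=0, n=2), g_n(I, tau=0, n=3))   # ~2 and ~6 for Gaussian light
```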

Journal ArticleDOI
TL;DR: In this article, the quadratic configuration interaction (QCISD) energy calculation is replaced by a coupled cluster (CCSD(T)) energy calculation, which results in little change in the accuracy of the methods as assessed on the G2/97 test set.

Journal ArticleDOI
TL;DR: A new representation of audio noise signals is proposed, based on symmetric α-stable (SαS) distributions in order to better model the outliers that exist in real signals.
Abstract: A new representation of audio noise signals is proposed, based on symmetric α-stable (SαS) distributions in order to better model the outliers that exist in real signals. This representation addresses a shortcoming of the Gaussian model, namely, the fact that it is not well suited for describing signals with impulsive behavior. The α-stable and Gaussian methods are used to model measured noise signals. It is demonstrated that the α-stable distribution, which has heavier tails than the Gaussian distribution, gives a much better approximation to real-world audio signals. The significance of these results is shown by considering the time delay estimation (TDE) problem for source localization in teleimmersion applications. In order to achieve robust sound source localization, a novel time delay estimation approach is proposed. It is based on fractional lower order statistics (FLOS), which mitigate the effects of heavy-tailed noise. An improvement in TDE performance is demonstrated using FLOS that is up to a factor of four better than what can be achieved with second-order statistics.
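One common FLOS-based delay estimator replaces the cross-correlation by a fractional lower-order cross-moment and scans candidate lags. The specific moment and the order p = 1.2 below are illustrative assumptions, not necessarily the estimator used in the paper.

```python
import numpy as np

def flos_tde(x, y, max_lag, p=1.2):
    """Time-delay estimate from a fractional lower-order cross-moment
    R(d) = E[ x(t) |y(t+d)|^(p-1) sign(y(t+d)) ]  (illustrative FLOS variant)."""
    yp = np.sign(y) * np.abs(y) ** (p - 1)
    def R(d):
        if d >= 0:
            return np.mean(x[: len(x) - d] * yp[d:])
        return np.mean(x[-d:] * yp[: len(x) + d])
    return max(range(-max_lag, max_lag + 1), key=lambda d: abs(R(d)))

# Example: y is x delayed by 7 samples plus heavy-tailed (Cauchy-like) noise.
rng = np.random.default_rng(0)
x = rng.normal(size=5000)
y = np.roll(x, 7) + 0.5 * rng.standard_cauchy(5000)
print(flos_tde(x, y, max_lag=20))   # expected to peak near 7
```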

Journal ArticleDOI
TL;DR: This work relates the small ball behavior of a Gaussian measure μ on a Banach space E with the metric entropy behavior of K_μ, the unit ball of the reproducing kernel Hilbert space of μ in E, to enable the application of tools and results from functional analysis to small ball problems.
Abstract: A precise link proved by Kuelbs and Li relates the small ball behavior of a Gaussian measure μ on a Banach space E with the metric entropy behavior of K_μ, the unit ball of the reproducing kernel Hilbert space of μ in E. We remove the main regularity assumption imposed on the unknown function in the link. This enables the application of tools and results from functional analysis to small ball problems and leads to small ball estimates of general algebraic type as well as to new estimates for concrete Gaussian processes. Moreover, we show that the small ball behavior of a Gaussian process is also tightly connected with the speed of approximation by "finite rank" processes.

01 Jan 1999
TL;DR: In this paper, the authors show that sampling the apparent diffusion coefficient at higher angular resolutions provides evidence for non-Gaussian diffusion in human brain white matter regions containing heterogeneous fiber orientations.
Abstract: Introduction: The use of a tensor to describe diffusion in anisotropic tissue such as white matter or cardiac muscle is predicated on the assumption of Gaussian diffusion (1, 2). The diffusion may, however, exhibit non-Gaussian behavior if the diffusion is restricted (3), or if there is slow exchange between partial volume components containing Gaussian diffusion. The 6 gradient direction sampling typical of standard tensor imaging experiments cannot resolve such spatially non-Gaussian diffusion, and thus higher angular resolution sampling is required. Here, we show that sampling the apparent diffusion coefficient at higher angular resolutions provides evidence for non-Gaussian diffusion in human brain white matter regions containing heterogeneous fiber orientations (4).

Journal ArticleDOI
TL;DR: In this article, the power spectrum covariance matrix in non-linear perturbation theory (weakly nonlinear regime), in the hierarchical model and from numerical simulations in real and redshift space is calculated for galaxy and weak-lensing surveys.
Abstract: Gravitational clustering is an intrinsically non-linear process that generates significant non-Gaussian signatures in the density field. We consider how these affect power spectrum determinations from galaxy and weak-lensing surveys. Non-Gaussian effects not only increase the individual error bars compared to the Gaussian case but, most importantly, lead to non-trivial cross-correlations between different band-powers. We calculate the power-spectrum covariance matrix in non-linear perturbation theory (weakly non-linear regime), in the hierarchical model (strongly non-linear regime), and from numerical simulations in real and redshift space. We discuss the impact of these results on parameter estimation from power spectrum measurements and their dependence on the size of the survey and the choice of band-powers. We show that the non-Gaussian terms in the covariance matrix become dominant for scales smaller than the non-linear scale, depending somewhat on power normalization. Furthermore, we find that cross-correlations mostly deteriorate the determination of the amplitude of a rescaled power spectrum, whereas its shape is less affected. In weak lensing surveys the projection tends to reduce the importance of non-Gaussian effects. Even so, for background galaxies at redshift z=1, the non-Gaussian contribution rises significantly around l=1000, and could become comparable to the Gaussian terms depending upon the power spectrum normalization and cosmology. The projection has another interesting effect: the ratio between non-Gaussian and Gaussian contributions saturates and can even decrease at small enough angular scales if the power spectrum of the 3D field falls faster than 1/k^2.

Journal ArticleDOI
TL;DR: In this paper, a conceptual framework is presented for a unified treatment of issues arising in a variety of predictability studies; the predictive power (PP), a measure based on information-theoretical principles, lies at the center of this framework.
Abstract: A conceptual framework is presented for a unified treatment of issues arising in a variety of predictability studies. The predictive power (PP), a predictability measure based on information‐theoretical principles, lies at the center of this framework. The PP is invariant under linear coordinate transformations and applies to multivariate predictions irrespective of assumptions about the probability distribution of prediction errors. For univariate Gaussian predictions, the PP reduces to conventional predictability measures that are based upon the ratio of the rms error of a model prediction over the rms error of the climatological mean prediction. Since climatic variability on intraseasonal to interdecadal timescales follows an approximately Gaussian distribution, the emphasis of this paper is on multivariate Gaussian random variables. Predictable and unpredictable components of multivariate Gaussian systems can be distinguished by predictable component analysis, a procedure derived from discriminant analysis: seeking components with large PP leads to an eigenvalue problem, whose solution yields uncorrelated components that are ordered by PP from largest to smallest. In a discussion of the application of the PP and the predictable component analysis in different types of predictability studies, studies are considered that use either ensemble integrations of numerical models or autoregressive models fitted to observed or simulated data. An investigation of simulated multidecadal variability of the North Atlantic illustrates the proposed methodology. Reanalyzing an ensemble of integrations of the Geophysical Fluid Dynamics Laboratory coupled general circulation model confirms and refines earlier findings. With an autoregressive model fitted to a single integration of the same model, it is demonstrated that similar conclusions can be reached without resorting to computationally costly ensemble integrations.
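The predictable component analysis described above amounts to a generalized eigenvalue problem between the prediction-error covariance and the climatological covariance. A minimal sketch, with the univariate-Gaussian-style score 1 − sqrt(variance ratio) used as the PP-like ordering (an assumption for illustration, not the paper's exact normalization):

```python
import numpy as np
from scipy.linalg import eigh

def predictable_components(err_cov, clim_cov):
    """Predictable component analysis as the generalized eigenproblem
    err_cov v = lambda clim_cov v.  Small lambda means small prediction-error
    variance relative to climatology, i.e., high predictive power (sketch)."""
    lam, V = eigh(err_cov, clim_cov)                 # eigenvalues ascending
    pp = 1.0 - np.sqrt(np.clip(lam, 0.0, None))     # univariate-Gaussian-style PP
    return pp, V                                     # components ordered by PP (descending)

# Example with synthetic 3-variable covariances.
clim = np.diag([4.0, 2.0, 1.0])
err = np.diag([0.5, 1.5, 0.9])
pp, V = predictable_components(err, clim)
print(pp)   # first component has the largest predictive power
```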