
Showing papers on "Gaussian" published in 1997


Journal ArticleDOI
TL;DR: In this article, a density functional theory-based algorithm for periodic and non-periodic ab initio calculations is presented, which uses pseudopotentials in order to integrate out the core electrons from the problem.
Abstract: A density functional theory-based algorithm for periodic and non-periodic ab initio calculations is presented. This scheme uses pseudopotentials in order to integrate out the core electrons from the problem. The valence pseudo-wavefunctions are expanded in Gaussian-type orbitals and the density is represented in a plane wave auxiliary basis. The Gaussian basis functions make it possible to use the efficient analytical integration schemes and screening algorithms of quantum chemistry. Novel recursion relations are developed for the calculation of the matrix elements of the density-dependent Kohn-Sham self-consistent potential. At the same time the use of a plane wave basis for the electron density permits efficient calculation of the Hartree energy using fast Fourier transforms, thus circumventing one of the major bottlenecks of standard Gaussian based calculations. Furthermore, this algorithm avoids the fitting procedures that go along with intermediate basis sets for the charge density. The performance a...

1,150 citations
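The plane-wave representation of the density makes the Hartree term a simple reciprocal-space sum. A minimal, stand-alone sketch of that step (not the paper's GPW implementation; the box size and Gaussian test density are arbitrary choices):

```python
import numpy as np

# Minimal sketch (not the paper's GPW code): Hartree energy of a density on a
# uniform grid, evaluated in reciprocal space via FFT as
#   E_H = (V/2) * sum_{G != 0} 4*pi*|rho_G|^2 / G^2,
# where rho_G are the Fourier coefficients in rho(r) = sum_G rho_G exp(iG.r).
L, n = 10.0, 64                      # box length (bohr) and grid points per axis
dx = L / n
x = np.arange(n) * dx
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")

# arbitrary Gaussian test density centred in the box, normalised to one electron
r2 = (X - L / 2) ** 2 + (Y - L / 2) ** 2 + (Z - L / 2) ** 2
rho = np.exp(-r2 / 0.5)
rho /= rho.sum() * dx ** 3

rho_G = np.fft.fftn(rho) / n ** 3    # Fourier coefficients rho_G
g = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)
GX, GY, GZ = np.meshgrid(g, g, g, indexing="ij")
G2 = GX ** 2 + GY ** 2 + GZ ** 2
G2[0, 0, 0] = np.inf                 # drop the G = 0 term (neutralising background)

E_H = 0.5 * L ** 3 * np.sum(4.0 * np.pi * np.abs(rho_G) ** 2 / G2)
print(E_H)                           # Hartree energy in atomic units
```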


Journal ArticleDOI
TL;DR: It is proved in this two-user case that the probability of error of the MMSE detector is better than that of the decorrelating linear detector for all values of normalized crosscorrelations not greater than (1/2)√(2+√3) ≈ 0.9659.
Abstract: The performance analysis of the minimum-mean-square-error (MMSE) linear multiuser detector is considered in an environment of nonorthogonal signaling and additive white Gaussian noise. In particular, the behavior of the multiple-access interference (MAI) at the output of the MMSE detector is examined under various asymptotic conditions, including: large signal-to-noise ratio; large near-far ratios; and large numbers of users. These results suggest that the MAI-plus-noise contending with the demodulation of a desired user is approximately Gaussian in many cases of interest. For the particular case of two users, it is shown that the maximum divergence between the output MAI-plus-noise and a Gaussian distribution having the same mean and variance is quite small in most cases of interest. It is further proved in this two-user case that the probability of error of the MMSE detector is better than that of the decorrelating linear detector for all values of normalized crosscorrelations not greater than (1/2)√(2+√3) ≈ 0.9659.

890 citations
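For the two-user setting analysed above, both detectors are simple linear transformations of the matched-filter outputs. A hedged sketch (unit-energy users and an assumed noise level, not the paper's exact normalisation):

```python
import numpy as np

# Two-user synchronous CDMA in AWGN: matched-filter outputs y = R*A*b + n,
# with R the signature crosscorrelation matrix. Sketch of the decorrelating
# and linear MMSE detectors for unit-amplitude users.
rho = 0.5                                      # normalized crosscorrelation
sigma2 = 0.1                                   # assumed noise variance
R = np.array([[1.0, rho], [rho, 1.0]])

decorrelator = np.linalg.inv(R)                # zero-forcing: removes MAI, enhances noise
mmse = np.linalg.inv(R + sigma2 * np.eye(2))   # linear MMSE detector

# Crosscorrelation threshold below which the paper proves the MMSE detector's
# error probability beats the decorrelator's in the two-user case:
print(0.5 * np.sqrt(2.0 + np.sqrt(3.0)))       # ~0.9659
```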


Journal ArticleDOI
TL;DR: Methods for estimating non-Gaussian time series models rely on Markov chain Monte Carlo to carry out simulation smoothing and Bayesian posterior analysis of parameters, and on importance sampling to estimate the likelihood function for classical inference.
Abstract: SUMMARY In this paper we provide methods for estimating non-Gaussian time series models. These techniques rely on Markov chain Monte Carlo to carry out simulation smoothing and Bayesian posterior analysis of parameters, and on importance sampling to estimate the likelihood function for classical inference. The time series structure of the models is used to ensure that our simulation algorithms are efficient.

732 citations


Book
13 Jul 1997
TL;DR: The book develops the theory of Gaussian Hilbert spaces, Wiener chaos, Wick products, and tensor and Fock spaces, with applications ranging from limit theorems for generalized U-statistics to stochastic integration and Malliavin calculus.
Abstract: 1. Gaussian Hilbert spaces 2. Wiener chaos 3. Wick products 4. Tensor products and Fock spaces 5. Hypercontractivity 6. Distributions of variables with finite chaos expansions 7. Stochastic integration 8. Gaussian stochastic processes 9. Conditioning 10. Limit theorems for generalized U-statistics 11. Applications to operator theory 12. Some operators from quantum physics 13. The Cameron-Martin shift 14. Malliavin calculus 15. Transforms Appendices.

704 citations


Proceedings ArticleDOI
29 Jun 1997
TL;DR: It is shown that capacity can be achieved by optimal power allocation over the channels, and an explicit characterization of the optimal power allocations and the resulting capacity region is obtained.
Abstract: We analyze the problem of communication over a set of parallel Gaussian broadcast channels, each with a different set of noise powers for the users. We show that capacity can be achieved by optimal power allocation over the channels, and obtain an explicit characterization of the optimal power allocations and the resulting capacity region.

581 citations
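In the single-user degenerate case the optimal allocation reduces to classical water-filling over the parallel channels. As a simpler, related illustration (assumed noise powers and power budget; this is not the broadcast capacity region computation itself):

```python
import numpy as np

# Water-filling over parallel point-to-point Gaussian channels: a single-user
# relative of the power allocation problem in the paper.
def waterfill(noise, P_total, iters=100):
    noise = np.asarray(noise, dtype=float)
    lo, hi = noise.min(), noise.max() + P_total
    for _ in range(iters):                        # bisect on the water level mu
        mu = 0.5 * (lo + hi)
        p = np.maximum(mu - noise, 0.0)
        lo, hi = (mu, hi) if p.sum() < P_total else (lo, mu)
    return np.maximum(mu - noise, 0.0)

noise = [0.5, 1.0, 2.0, 4.0]                      # assumed per-channel noise powers
p = waterfill(noise, P_total=3.0)
rates = 0.5 * np.log2(1.0 + p / np.array(noise))  # bits per channel use
print(p, rates)
```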


Journal ArticleDOI
TL;DR: In this paper, Monte Carlo simulation is used to obtain an approximation to the loglikelihood for observations with non-Gaussian distributions, where the observations have a Poisson distribution and the observation errors have a t-distribution.
Abstract: State space models are considered for observations which have non-Gaussian distributions. We obtain accurate approximations to the loglikelihood for such models by Monte Carlo simulation. Devices are introduced which improve the accuracy of the approximations and which increase computational efficiency. The loglikelihood function is maximised numerically to obtain estimates of the unknown hyperparameters. Standard errors of the estimates due to simulation are calculated. Details are given for the important special cases where the observations come from an exponential family distribution and where the observation equation is linear but the observation errors are non-Gaussian. The techniques are illustrated with a series for which the observations have a Poisson distribution and a series for which the observation errors have a t-distribution.

462 citations


Posted Content
TL;DR: Software is now available that implements Gaussian process methods using covariance functions with hierarchical parameterizations, which can discover high-level properties of the data, such as which inputs are relevant to predicting the response.
Abstract: Gaussian processes are a natural way of defining prior distributions over functions of one or more input variables. In a simple nonparametric regression problem, where such a function gives the mean of a Gaussian distribution for an observed response, a Gaussian process model can easily be implemented using matrix computations that are feasible for datasets of up to about a thousand cases. Hyperparameters that define the covariance function of the Gaussian process can be sampled using Markov chain methods. Regression models where the noise has a t distribution and logistic or probit models for classification applications can be implemented by sampling as well for latent values underlying the observations. Software is now available that implements these methods using covariance functions with hierarchical parameterizations. Models defined in this way can discover high-level properties of the data, such as which inputs are relevant to predicting the response.

451 citations
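The "matrix computations" the abstract refers to amount to a Cholesky solve with the covariance matrix. A minimal sketch with a squared-exponential covariance and hand-fixed hyperparameters (the paper would instead sample them by Markov chain methods):

```python
import numpy as np

# Gaussian process regression with a squared-exponential covariance.
# Hyperparameters are fixed here; the paper samples them with Markov chain methods.
def sq_exp(xa, xb, length=0.5, signal=1.0):
    d2 = (xa[:, None] - xb[None, :]) ** 2
    return signal ** 2 * np.exp(-0.5 * d2 / length ** 2)

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 5, 30))
y = np.sin(x) + 0.1 * rng.standard_normal(30)        # noisy observations
xs = np.linspace(0, 5, 200)                          # test inputs

noise = 0.1 ** 2
K = sq_exp(x, x) + noise * np.eye(len(x))
L = np.linalg.cholesky(K)
alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))

Ks = sq_exp(xs, x)
mean = Ks @ alpha                                    # predictive mean
v = np.linalg.solve(L, Ks.T)
var = sq_exp(xs, xs).diagonal() - np.sum(v ** 2, axis=0)  # predictive variance
print(mean[:5], var[:5])
```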


Journal ArticleDOI
TL;DR: Exact computable rates of convergence for Gaussian target distributions are obtained and different random and non‐random updating strategies and blocking combinations are compared using the rates.
Abstract: In this paper many convergence issues concerning the implementation of the Gibbs sampler are investigated. Exact computable rates of convergence for Gaussian target distributions are obtained. Different random and non-random updating strategies and blocking combinations are compared using the rates. The effects of dimensionality and correlation structure on the convergence rates are studied. Some examples are considered to demonstrate the results. For a Gaussian image analysis problem several updating strategies are described and compared. For problems in Bayesian linear models several possible parameterizations are analysed in terms of their convergence rates, characterizing the optimal choice.

448 citations
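For a bivariate Gaussian target the convergence rate of the deterministic-scan two-component sampler is known exactly (ρ²), which is the kind of computable rate studied above. A toy sketch with an assumed correlation:

```python
import numpy as np

# Deterministic-scan Gibbs sampler for a standard bivariate Gaussian with
# correlation rho; the convergence rate for this target is rho**2.
rho, n_iter = 0.9, 5000
rng = np.random.default_rng(1)
x = y = 0.0
samples = np.empty((n_iter, 2))
for t in range(n_iter):
    # full conditionals of a standard bivariate normal
    x = rng.normal(rho * y, np.sqrt(1.0 - rho ** 2))
    y = rng.normal(rho * x, np.sqrt(1.0 - rho ** 2))
    samples[t] = x, y

print("theoretical rate:", rho ** 2)
# the lag-1 autocorrelation of the x-chain approaches rho**2
xs = samples[500:, 0]
print("empirical lag-1 autocorrelation:", np.corrcoef(xs[:-1], xs[1:])[0, 1])
```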


Journal ArticleDOI
TL;DR: In this paper, the authors studied Gaussian random fields indexed by ℝ^d whose covariance is defined in all generality as the parametrix of an elliptic pseudo-differential operator with minimal regularity assumptions on the symbol.
Abstract: We study the Gaussian random fields indexed by ℝ^d whose covariance is defined in all generality as the parametrix of an elliptic pseudo-differential operator with minimal regularity assumption on the symbol. We construct new wavelet bases adapted to these operators; the decomposition of the field in the corresponding basis yields its iterated logarithm law and its uniform modulus of continuity. We also characterize the local scalings of the fields in terms of the properties of the principal symbol of the pseudo-differential operator. Similar results are obtained for the multifractional Brownian motion.

428 citations


Journal ArticleDOI
TL;DR: In this article, the equations of error propagation for two-dimensional elliptical Gaussian fits in the presence of Gaussian noise plus a new method that simplifies the use of a priori size constraints to reduce amplitude errors are presented.
Abstract: Elliptical Gaussian fits are used in astronomy for accurate measurements of fundamental source parameters such as central position, peak flux density, and angular size. The full value of a noise-limited image can be realized only if the effects of noise on the fitted parameters are estimated accurately. This paper presents the equations of error propagation for two-dimensional elliptical Gaussian fits in the presence of Gaussian noise plus a new method that simplifies the use of a priori size constraints to reduce amplitude errors.

385 citations
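In practice, the fitted-parameter uncertainties that the error-propagation formulas describe can be read off the covariance matrix of a least-squares fit. A hedged sketch using scipy's generic curve_fit on a synthetic noisy image (test image, noise level and starting guess are assumptions):

```python
import numpy as np
from scipy.optimize import curve_fit

# Fit a 2-D elliptical Gaussian to a noisy image and read off parameter
# uncertainties from the fit covariance; the paper derives these errors analytically.
def elliptical_gaussian(coords, amp, x0, y0, sx, sy, theta):
    x, y = coords
    ct, st = np.cos(theta), np.sin(theta)
    xr = (x - x0) * ct + (y - y0) * st
    yr = -(x - x0) * st + (y - y0) * ct
    return amp * np.exp(-0.5 * (xr ** 2 / sx ** 2 + yr ** 2 / sy ** 2))

ny, nx = 64, 64
y, x = np.mgrid[0:ny, 0:nx]
truth = (10.0, 31.0, 33.0, 5.0, 3.0, 0.4)        # amp, x0, y0, sigma_x, sigma_y, angle
rng = np.random.default_rng(2)
img = elliptical_gaussian((x, y), *truth) + rng.normal(0, 0.5, (ny, nx))

p0 = (8.0, 30.0, 30.0, 4.0, 4.0, 0.0)            # rough initial guess
popt, pcov = curve_fit(elliptical_gaussian, (x.ravel(), y.ravel()),
                       img.ravel(), p0=p0)
perr = np.sqrt(np.diag(pcov))                     # 1-sigma errors on fitted parameters
print(popt, perr)
```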


Journal ArticleDOI
TL;DR: In this paper, the angular power spectrum of the CMB maps is estimated using truncated spherical harmonics, which is shown to be computationally faster than the nonlinear maximum-likelihood technique.
Abstract: A new method for estimating the angular power spectrum $C_l$ from cosmic microwave background (CMB) maps is presented, which has the following desirable properties. (1) It is unbeatable in the sense that no other method can measure $C_l$ with smaller error bars. (2) It is quadratic, which makes the statistical properties of the measurements easy to compute and use for estimation of cosmological parameters. (3) It is computationally faster than rival high-precision methods such as the nonlinear maximum-likelihood technique, with the crucial steps scaling as $n^2$ rather than $n^3$, where $n$ is the number of map pixels. (4) It is applicable to any survey geometry whatsoever, with arbitrary regions masked out and arbitrary noise behavior. (5) It is not a "black-box" method, but quite simple to understand intuitively: it corresponds to a high-pass filtering and edge softening of the original map followed by a straight expansion in truncated spherical harmonics. It is argued that this method is computationally feasible even for future high-resolution CMB experiments with $n \sim 10^6$–$10^7$. It is shown that $C_l$ computed with this method is useful not merely for graphical presentation purposes, but also as an intermediate (and arguably necessary) step in the data analysis pipeline, reducing the data set to a more manageable size before the final step of constraining Gaussian cosmological models and parameters, while retaining all the cosmological information that was present in the original map.

Journal ArticleDOI
TL;DR: In this article, the authors show that there is a q-analogue of the Gaussian functor of second quantization behind these processes and that this structure can be used to translate questions on q-Gaussian processes into corresponding (and much simpler) questions in the underlying Hilbert space.
Abstract: We examine, for −1

Journal ArticleDOI
TL;DR: This work provides an analytical approximation of the Bragg curve in closed form using a simple combination of Gaussians and parabolic cylinder functions and can be fitted to measurements within the measurement error.
Abstract: The knowledge of proton depth-dose curves, or “Bragg curves,” is a fundamental prerequisite for dose calculations in radiotherapy planning, among other applications. In various cases it is desirable to have an analytical representation of the Bragg curve, rather than using measured or numerically calculated data. This work provides an analytical approximation of the Bragg curve in closed form. The underlying model is valid for proton energies between about 10 and 200 MeV. Its main four constituents are: (i) a power-law relationship describing the range-energy dependency; (ii) a linear model for the fluence reduction due to nonelastic nuclear interactions, assuming local deposition of a fraction of the released energy; (iii) a Gaussian approximation of the range straggling distribution; and (iv) a representation of the energy spectrum of poly-energetic beams by a Gaussian with a linear “tail.” Based on these assumptions the Bragg curve can be described in closed form using a simple combination of Gaussians and parabolic cylinder functions. The resulting expression can be fitted to measurements within the measurement error. Very good agreement is also found with numerically calculated Bragg curves.
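The special functions in the closed-form expression are available in standard libraries; the sketch below only evaluates the building blocks (Gaussians and parabolic cylinder functions D_ν) with illustrative orders, not the paper's fitted depth-dose formula or its coefficients:

```python
import numpy as np
from scipy.special import pbdv

# Building blocks of a closed-form Bragg-curve approximation: Gaussians and
# parabolic cylinder functions D_nu(z). Orders and weights below are
# illustrative only; the actual depth-dose coefficients are in the paper.
z = np.linspace(-5.0, 5.0, 201)
D_a, _ = pbdv(-0.4, z)               # pbdv returns (D_nu(z), dD_nu/dz)
D_b, _ = pbdv(-1.4, z)
gauss = np.exp(-z ** 2 / 4.0)

# the depth-dose model combines terms of the form gauss * D_nu(-z)
profile_like = gauss * (D_a + 0.5 * D_b)
print(profile_like[:5])
```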

Journal ArticleDOI
01 Jun 1997
TL;DR: In this paper, a detailed analysis of experimental data, collected at Osborne Head Gunnery Range with McMaster University IPIX radar, to test theoretical models developed in the literature is devoted.
Abstract: The paper is devoted to a detailed analysis of experimental data, collected at Osborne Head Gunnery Range with the McMaster University IPIX radar, to test theoretical models developed in the literature. The validity of the compound model has been proven for VV polarisation, both for amplitude and correlation properties. Cross-polarised data also exhibit a compound behaviour but require an additional Gaussian component due to thermal noise. HH data deviate from the K model and seem to better approach a log-normal distribution. These results have been obtained by a correlation test that allows separation of the short and long correlation terms, a modified Kolmogorov–Smirnov test to verify the fitting, and a cumulant-domain analysis to quantify the Gaussian component. The interest of the work lies in its application to successful radar design.

Journal ArticleDOI
TL;DR: The history of heat kernel Gaussian estimates started with the works of Nash and Aronson; of interest here is Aronson's upper bound for the case of time-independent coefficients.
Abstract: The history of heat kernel Gaussian estimates started with the works of Nash [25] and Aronson [2], where double-sided Gaussian estimates were obtained for the heat kernel of a uniformly parabolic equation in ℝ^n in divergence form (see also [15] for an improvement of Nash's original argument and [26] for a consistent account of Aronson's results and related topics). In particular, Aronson's upper bound for the case of time-independent coefficients, which is the one of interest for us, reads as follows:
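The bound referred to, quoted here in its standard textbook form (the paper's constants and normalisation may differ), is the Gaussian upper estimate

```latex
% Aronson's Gaussian upper bound for the heat kernel p_t(x,y) of a uniformly
% parabolic divergence-form operator with time-independent coefficients
% (standard textbook form; C depends only on the ellipticity constants and n):
p_t(x,y) \;\le\; \frac{C}{t^{n/2}} \exp\!\left(-\frac{|x-y|^{2}}{C\,t}\right),
\qquad x,y \in \mathbb{R}^{n},\; t>0 .
```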

Journal ArticleDOI
TL;DR: The Bayesian transformed Gaussian model (BTG) provides an alternative to trans-Gaussian kriging taking into account the major sources of uncertainty, including uncertainty about the “normalizing transformation” itself, in the computation of the predictive density function.
Abstract: A model for prediction in some types of non-Gaussian random fields is presented. It extends the work of Handcock and Stein to prediction in transformed Gaussian random fields, where the transformation is known to belong to a parametric family of monotone transformations. The Bayesian transformed Gaussian model (BTG) provides an alternative to trans-Gaussian kriging taking into account the major sources of uncertainty, including uncertainty about the “normalizing transformation” itself, in the computation of the predictive density function. Unlike trans-Gaussian kriging, this approach mitigates the consequences of a misspecified transformation, giving in this sense a more robust predictive inference. Because the mean of the predictive distribution does not exist for some commonly used families of transformations, the median is used as the optimal predictor. The BTG model is applied in the spatial prediction of weekly rainfall amounts. Cross-validation shows the predicting performance of the BTG mo...

Journal ArticleDOI
TL;DR: The genus statistics of isodensity contours has become a well-established tool in cosmology as discussed by the authors, and the Minkowski functionals are applied here for the first time to isodensity contours of a continuous random field.
Abstract: The genus statistics of isodensity contours has become a well-established tool in cosmology. In this Letter we place the genus in the wider framework of a complete family of morphological descriptors. These are known as the Minkowski functionals, and we here apply them for the first time to isodensity contours of a continuous random field. By taking two equivalent approaches, one through differential geometry, the other through integral geometry, we derive two complementary formulae suitable for numerically calculating the Minkowski functionals. As an example, we apply them to simulated Gaussian random fields and compare the outcome to the analytically known results, demonstrating that both are indeed well suited for numerical evaluation. The code used for calculating all Minkowski functionals is available from the authors.

Journal ArticleDOI
TL;DR: In this paper, a hierarchical multipole method was developed for fast computation of the Coulomb matrix, and a linear scaling algorithm for calculation of the Fock matrix was demonstrated for a sequence of water clusters at the restricted Hartree-Fock/3-21G level of theory.
Abstract: Computation of the Fock matrix is currently the limiting factor in the application of Hartree-Fock and hybrid Hartree-Fock/density functional theories to larger systems. Computation of the Fock matrix is dominated by calculation of the Coulomb and exchange matrices. With conventional Gaussian-based methods, computation of the Fock matrix typically scales as ∼N^2.7, where N is the number of basis functions. A hierarchical multipole method is developed for fast computation of the Coulomb matrix. This method, together with a recently described approach to computing the Hartree-Fock exchange matrix of insulators [J. Chem. Phys. 105, 2726 (1996)], leads to a linear scaling algorithm for calculation of the Fock matrix. Linear scaling computation of the Fock matrix is demonstrated for a sequence of water clusters at the restricted Hartree-Fock/3-21G level of theory, and corresponding accuracies in converged total energies are shown to be comparable with those obtained from standard quantum chemistry programs. Restri...

BookDOI
01 Jan 1997
TL;DR: This chapter discusses applications of Scale-Space Theory in the context of non-Linear Extensions, as well as specific cases such as multi-Scale Watershed Segmentation, and local Morse Theory for Gaussian Blurred Functions.
Abstract: Preface. Scale in Perspective J.J. Koenderink. I: Applications. 1. Applications of Scale-Space Theory B. ter Haar Romeny. 2. Enhancement of Fingerprint Images Using Shape-Adapted Scale-Space Operators A. Almansa, T. Lindeberg. 3. Optic Flow and Stereo W.J. Niessen, R. Maas. II: The Foundation. 4. On The History of Gaussian Scale-Space Axiomatics J. Weickert, et al. 5. Scale-Space and Measurement Duality L. Florack. 6. On The Axiomatic Foundations of Linear Scale-Space T. Lindeberg. 7. Scale-Space Generators and Functionals M. Nielsen. 8. Invariance Theory A. Salden. 9. Stochastic Analysis of Image Acquisition and Scale-Space Smoothing K. Astrom, A. Heyden. III: The Structure. 10. Local Analysis of Image Scale Space P. Johansen. 11. Local Morse Theory for Gaussian Blurred Functions J. Damon. 12. Critical Point Events in Affine Scale-Space L. Griffin. 13. Topological Numbers and Singularities S. Kalitzin. 14. Multi-Scale Watershed Segmentation O.F. Olsen. IV: Non-Linear Extensions. 15. The Morphological Equivalent of Gaussian Scale-Space R. van den Boomgaard, L. Dorst. 16. Nonlinear Diffusion Scale-Spaces J. Weickert. Bibliography. Index.

Proceedings ArticleDOI
21 Apr 1997
TL;DR: An approximate maximum likelihood method for blind source separation and deconvolution of noisy signals is proposed, which is able to capture some salient features of the input signal distribution and generally performs much better than third-order or fourth-order cumulant based techniques.
Abstract: An approximate maximum likelihood method for blind source separation and deconvolution of noisy signals is proposed. This technique relies upon a data augmentation scheme, where the (unobserved) inputs are viewed as the missing data. In the technique described, the input signal distribution is modeled by a mixture of Gaussian distributions, enabling the use of explicit formulas for computing the posterior density and conditional expectation and thus avoiding Monte Carlo integrations. Because this technique is able to capture some salient features of the input signal distribution, it generally performs much better than third-order or fourth-order cumulant based techniques.
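The closed-form posterior exists because a Gaussian-mixture prior observed through additive Gaussian noise is again a Gaussian mixture. A scalar illustration with assumed mixture parameters and noise level (the paper works with the full convolutive model):

```python
import numpy as np

# Scalar illustration: source s ~ sum_k pi_k N(mu_k, tau_k^2), observation
# y = s + n with n ~ N(0, sigma^2). The posterior p(s|y) is again a Gaussian
# mixture, so E[s|y] is available in closed form -- no Monte Carlo needed.
pi_k   = np.array([0.5, 0.5])        # mixture weights (assumed)
mu_k   = np.array([-1.0, 1.0])       # component means
tau2   = np.array([0.2, 0.2])        # component variances
sigma2 = 0.3                         # observation noise variance

def posterior_mean(y):
    # responsibilities: p(k|y) proportional to pi_k * N(y; mu_k, tau_k^2 + sigma^2)
    s2 = tau2 + sigma2
    w = pi_k * np.exp(-0.5 * (y - mu_k) ** 2 / s2) / np.sqrt(2 * np.pi * s2)
    w /= w.sum()
    # per-component posterior means of s given y and k (Gaussian conditioning)
    m = (tau2 * y + sigma2 * mu_k) / (tau2 + sigma2)
    return np.sum(w * m)

print(posterior_mean(0.8))
```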

Journal ArticleDOI
01 Feb 1997
TL;DR: In this article, two suboptimum procedures for coherent detection of a radar target signal, in the presence of a mixture of K-distributed and Gaussian distributed clutter, are presented.
Abstract: The author introduces two suboptimum procedures for the coherent detection of a radar target signal in the presence of a mixture of K-distributed and Gaussian distributed clutter. As a comparison, the optimum Neyman-Pearson and the whitening matched filter strategies to detect coherent pulse trains against the above mentioned disturbance are also presented. The optimum detection scheme is difficult to implement: it involves a numerical integration with respect to the texture variable of the K distribution. It strongly depends on the parameters of the clutter distribution, thus no predetermined threshold can be assigned to achieve a given probability of false alarm if such parameters are unknown. The preferred suboptimum approach is based on the estimation of the texture variable, which is then used to determine the likelihood ratio. Applying the maximum likelihood estimate, the resulting detection strategy is a linear-quadratic functional of the observed vector and is clutter-distribution free. The performance of the proposed detector is close to optimal and much better than that of the whitening matched filter detector; moreover, it guarantees approximately constant false alarm rate behaviour, regardless of the clutter distribution.

Journal ArticleDOI
TL;DR: This paper proposes a (Gaussian) prefiltering to reduce the noise sensitivity of the Richardson–Lucy algorithm and shows an example of how restoration methods can improve quantitative analysis: the total amount of fluorescence inside a closed object is measured in the vicinity of another object before and after restoration.
Abstract: In this paper, we compare the performance of three iterative methods for image restoration: the Richardson–Lucy algorithm, the iterative constrained Tikhonov–Miller algorithm (ICTM) and the Carrington algorithm. Both the ICTM and the Carrington algorithm are based on an additive Gaussian noise model, but differ in the way they incorporate the non-negativity constraint. Under low light-level conditions, this additive (Gaussian) noise model is a poor description of the actual photon-limited image recording, compared with that of a Poisson process. The Richardson–Lucy algorithm is a maximum likelihood estimator for the intensity of a Poisson process. We have studied various methods for determining the regularization parameter of the ICTM and the Carrington algorithm and propose a (Gaussian) prefiltering to reduce the noise sensitivity of the Richardson–Lucy algorithm. The results of these algorithms are compared on spheres convolved with a point spread function and distorted by Poisson noise. Our simulations show that the Richardson–Lucy algorithm, with Gaussian prefiltering, produces the best result in most of the tests. Finally, we show an example of how restoration methods can improve quantitative analysis: the total amount of fluorescence inside a closed object is measured in the vicinity of another object before and after restoration.
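A minimal sketch of the Richardson–Lucy iteration with the Gaussian prefiltering step applied to the data first (PSF, test object, noise model and prefilter width below are assumptions, not the paper's exact setup):

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.signal import fftconvolve

# Richardson-Lucy deconvolution with Gaussian prefiltering of the observed image
# (the noise-reduction step proposed in the paper).
def richardson_lucy(image, psf, n_iter=50, eps=1e-12):
    estimate = np.full_like(image, image.mean())
    psf_flip = psf[::-1, ::-1]
    for _ in range(n_iter):
        blurred = fftconvolve(estimate, psf, mode="same")
        ratio = image / np.maximum(blurred, eps)
        estimate *= fftconvolve(ratio, psf_flip, mode="same")
    return estimate

rng = np.random.default_rng(3)
obj = np.zeros((64, 64))
obj[20:30, 25:40] = 100.0                                  # assumed test object
y, x = np.mgrid[-7:8, -7:8]
psf = np.exp(-(x ** 2 + y ** 2) / (2 * 2.0 ** 2))
psf /= psf.sum()
data = rng.poisson(fftconvolve(obj, psf, mode="same").clip(0)).astype(float)

data_pref = gaussian_filter(data, sigma=1.0)               # Gaussian prefilter
restored = richardson_lucy(data_pref, psf)
```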

Journal ArticleDOI
TL;DR: In this paper, the authors use wavelets to decompose the volatility of intraday return data across scales and show that when investigating two-points correlation functions of the volatility logarithms across different time scales, one reveals the existence of a causal information cascade from large scales (i.e. small frequencies, hence to vocable infrared) to fine scales (ultraviolet).
Abstract: Modelling accurately financial price variations is an essential step underlying portfolio allocation optimization, derivative pricing and hedging, fund management and trading. The observed complex price fluctuations guide and constrain our theoretical understanding of agent interactions and of the organization of the market. The Gaussian paradigm of independent normally distributed price increments has long been known to be incorrect, with many attempts to improve it. Econometric nonlinear autoregressive models with conditional heteroskedasticity (ARCH) and their generalizations capture only imperfectly the volatility correlations and the fat tails of the probability distribution function (pdf) of price variations. Moreover, as far as changes in time scales are concerned, the so-called "aggregation" properties of these models are not easy to control. More recently, the leptokurticity of the full pdf was described by a truncated "additive" Levy flight model (TLF). Alternatively, Ghashghaie et al. proposed an analogy between price dynamics and hydrodynamic turbulence. In this letter, we use wavelets to decompose the volatility of intraday (S&P500) return data across scales. We show that when investigating two-point correlation functions of the volatility logarithms across different time scales, one reveals the existence of a causal information cascade from large scales (i.e. small frequencies, hence the vocable "infrared") to fine scales ("ultraviolet"). We quantify and visualize the information flux across scales. We provide a possible interpretation of our findings in terms of market dynamics.

Book ChapterDOI
TL;DR: This paper compares the on-line extrema tracking performance of an evolutionary program without self-adaptation against an evolutionary programs using a self- Adaptive Gaussian update rule over a number of dynamics applied to a simple static function.
Abstract: Typical applications of evolutionary optimization involve the off-line approximation of extrema of static multi-modal functions. Methods which use a variety of techniques to self-adapt mutation parameters have been shown to be more successful than methods which do not use self-adaptation. For dynamic functions, the interest is not to obtain the extrema but to follow them as closely as possible. This paper compares the on-line extrema tracking performance of an evolutionary program without self-adaptation against an evolutionary program using a self-adaptive Gaussian update rule over a number of dynamics applied to a simple static function. The experiments demonstrate that for some dynamic functions, self-adaptation is effective while for others it is detrimental.
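The self-adaptive Gaussian update rule compared here carries a per-individual step size that is itself perturbed (lognormally) before the object variable is mutated. A toy sketch on a drifting optimum (population size, drift speed and learning rate are assumptions):

```python
import numpy as np

# Toy evolutionary program tracking the moving optimum of f(x) = -(x - c_t)^2,
# with self-adaptive Gaussian mutation: each individual carries its own step
# size sigma, perturbed lognormally before mutating x.
rng = np.random.default_rng(4)
mu = 20                                    # population size (assumed)
tau = 1.0 / np.sqrt(2.0)                   # learning rate for one variable
x = rng.normal(0, 1, mu)
sigma = np.full(mu, 1.0)

for t in range(200):
    c = 0.05 * t                           # optimum drifts to the right
    # self-adaptive mutation: sigma first, then x
    child_sigma = sigma * np.exp(tau * rng.standard_normal(mu))
    child_x = x + child_sigma * rng.standard_normal(mu)
    # (mu + mu) selection on fitness
    all_x = np.concatenate([x, child_x])
    all_s = np.concatenate([sigma, child_sigma])
    fit = -(all_x - c) ** 2
    best = np.argsort(fit)[-mu:]
    x, sigma = all_x[best], all_s[best]

print(x.mean(), c)                         # population should track the optimum
```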

Journal ArticleDOI
Yimin Xiao1
TL;DR: In this article, the Hölder conditions in the set variable for the local time of X(t) are established and the exact Hausdorff measure of the level set X^{-1}(x) is evaluated.
Abstract: Let Y(t) (t ∈ ℝ^N) be a real-valued, strongly locally nondeterministic Gaussian random field with stationary increments and Y(0) = 0. Consider the (N, d) Gaussian random field defined by X(t) = (X_1(t), …, X_d(t)), t ∈ ℝ^N, where X_1, …, X_d are independent copies of Y. The local and global Hölder conditions in the set variable for the local time of X(t) are established and the exact Hausdorff measure of the level set X^{-1}(x) is evaluated.

Book ChapterDOI
Jean Jacod1
01 Jan 1997
TL;DR: The chapter is distributed through the Séminaire de Probabilités (Strasbourg) archives, whose general terms of use (http://www.numdam.org/legal.php) prohibit commercial use or systematic printing and require that any copy of the file retain the copyright notice.
Abstract: © Springer-Verlag, Berlin Heidelberg New York, 1997, all rights reserved. Access to the archives of the Séminaire de Probabilités (Strasbourg) (http://portail.mathdoc.fr/SemProba/) implies agreement with the general terms of use (http://www.numdam.org/legal.php). Any commercial use or systematic printing constitutes a criminal offence. Any copy or printout of this file must contain this copyright notice.

Journal ArticleDOI
TL;DR: In this paper, a pseudospectral method for performing fully coupled six-dimensional bound state dynamics calculations is presented, including overall rotational effects, using a Lanczos based iterative diagonalization scheme.
Abstract: A novel and efficient pseudospectral method for performing fully coupled six-dimensional bound state dynamics calculations is presented, including overall rotational effects. A Lanczos based iterative diagonalization scheme produces the energy levels in increasing energies. This scheme, which requires repetitively acting the Hamiltonian operator on a vector, circumvents the problem of constructing the full matrix. This permits the use of ultralarge molecular basis sets (up to over one million states for a given symmetry) in order to fully converge the calculations. The Lanczos scheme was conducted in a symmetry adapted spectral representation, containing Wigner functions attached to each monomer. The Hamiltonian operator has been split into different terms, each corresponding to an associated diagonal or nearly diagonal representation. The potential term is evaluated by a pseudospectral scheme of Gaussian accuracy, which guarantees the variational principle. Spectroscopic properties are computed with this...
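The key computational point is that the Lanczos scheme only needs the action of the Hamiltonian on a vector, never the full matrix. A matrix-free sketch on a toy one-dimensional Hamiltonian (far from the paper's six-dimensional problem; grid and potential are assumptions):

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, eigsh

# Matrix-free Lanczos diagonalization: only the action H @ v is supplied,
# never the full matrix -- the same pattern the paper exploits for ultralarge
# bases. Toy 1-D harmonic oscillator on a grid.
n = 2000
dx = 20.0 / n
xgrid = (np.arange(n) - n / 2) * dx
v_pot = 0.5 * xgrid ** 2                     # harmonic potential

def apply_H(v):
    # kinetic term -1/2 d^2/dx^2 by finite differences, plus diagonal potential
    kin = np.zeros_like(v)
    kin[1:-1] = -(v[2:] - 2 * v[1:-1] + v[:-2]) / (2 * dx ** 2)
    kin[0]  = -(v[1] - 2 * v[0]) / (2 * dx ** 2)
    kin[-1] = -(v[-2] - 2 * v[-1]) / (2 * dx ** 2)
    return kin + v_pot * v

H = LinearOperator((n, n), matvec=apply_H, dtype=float)
evals, evecs = eigsh(H, k=5, which="SA")     # five lowest levels via Lanczos
print(evals)                                 # approx 0.5, 1.5, 2.5, 3.5, 4.5
```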

Journal ArticleDOI
TL;DR: It is shown that, whilst both methods are capable of determining cluster validity for data sets in which clusters tend towards a multivariate Gaussian distribution, the parametric method inevitably fails for clusters which have a non-Gaussian structure whilst the scale-space method is more robust.

Journal ArticleDOI
TL;DR: A large number of papers written over the last ten years have concerned the spectral theory of Laplace-Beltrami operators on complete Riemannian manifolds, and of other self-adjoint second order elliptic operators as discussed by the authors.
Abstract: A large number of papers written over the last ten years have concerned the spectral theory of Laplace–Beltrami operators on complete Riemannian manifolds, and of other self-adjoint second order elliptic operators. Much of the interest has centred on the relationship between various types of Sobolev inequality, parabolic Harnack inequalities and the Liouville property on the one hand, and Gaussian heat kernel bounds on the other. For manifolds of bounded geometry there is an important connection between this problem and a corresponding one for discrete Laplacians on graphs. Standard references are [9, 37] and more recent literature can be traced via [5, 16, 32].

Journal ArticleDOI
TL;DR: In this article, the role of non-Gaussian fluctuations in primordial black hole (PBH) formation is explored and shown that the standard Gaussian assumption, used in all PBH formation papers to date, is not justified.
Abstract: We explore the role of non-Gaussian fluctuations in primordial black hole (PBH) formation and show that the standard Gaussian assumption, used in all PBH formation papers to date, is not justified. Since large spikes in power are usually associated with flat regions of the inflaton potential, quantum fluctuations become more important in the field dynamics, leading to mode-mode coupling and non-Gaussian statistics. Moreover, PBH production requires several-σ (rare) fluctuations in order to prevent premature matter dominance of the universe, so we are necessarily concerned with distribution tails, where any intrinsic skewness will be especially important. We quantify this argument by using the stochastic slow-roll equation and a relatively simple analytic method to obtain the final distribution of fluctuations. We work out several examples with toy models that produce PBHs, and test the results with numerical simulations. Our examples show that the naive Gaussian assumption can result in errors of many orders of magnitude. For models with spikes in power, our calculations give sharp cutoffs in the probability of large positive fluctuations, meaning that Gaussian distributions would vastly overproduce PBHs. The standard results that link inflation-produced power spectra and PBH number densities must then be reconsidered, since they rely quite heavily on the Gaussian assumption. We point out that since the probability distributions depend strongly on the nature of the potential, it is impossible to obtain results for general models. However, calculating the distribution of fluctuations for any specific model seems to be relatively straightforward, at least in the single inflaton case.