Topic

Gaussian process

About: Gaussian process is a research topic. Over the lifetime, 18,944 publications have been published within this topic, receiving 486,645 citations. The topic is also known as: Gaussian stochastic process.


Papers
Proceedings ArticleDOI
Kai-ching Chu
01 Dec 1972
TL;DR: It is shown in this paper that the class of elliptical densities can be expressed as integrals of a set of Gaussian densities, and it is proved that the conditional expectation is linear, with exactly the same form as in the Gaussian case.
Abstract: A random variable is said to have elliptical distribution if its probability density is a function of a quadratic form. This class includes the Gaussian and many other useful densities in statistics. It is shown in this paper that this class of densities can be expressed as integrals of a set of Gaussian densities. This property is not changed under linear transformation of the random variables. It is also proved in this paper that the conditional expectation is linear with exactly the same form as the Gaussian case. Many estimation results of the Gaussian case can be readily extended. Problems of computing optimal estimation, filtering, stochastic control, and team decisions in various linear systems become tractable for this class of random processes.
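The two results can be stated compactly; the notation below is introduced here for illustration and is not taken from the paper. If X has an elliptical density with location μ and scale matrix Σ, then (under suitable conditions) the density admits a representation as a scale mixture of Gaussians, and conditional expectations are linear:

\[
p(x) = \int_0^\infty \mathcal{N}(x \mid \mu,\, t\Sigma)\, w(t)\, dt,
\qquad
\mathbb{E}[X_1 \mid X_2 = x_2] = \mu_1 + \Sigma_{12}\Sigma_{22}^{-1}(x_2 - \mu_2),
\]

where w is a mixing weight over the scale t and (μ, Σ) are partitioned conformably with X = (X_1, X_2). The second identity has exactly the Gaussian form, which is why Gaussian estimation, filtering, and control results carry over to this class.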

140 citations

Journal ArticleDOI
TL;DR: In this paper, the authors derived an averaged equation for a class of stochastic partial differential equations without any Lipschitz assumption on the slow modes and derived the rate of convergence in probability as a byproduct.
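As an illustrative sketch (the notation is assumed here, not taken from the paper), such averaging results concern slow-fast systems of the form

\[
\begin{aligned}
du^{\epsilon} &= \big[\Delta u^{\epsilon} + f(u^{\epsilon}, v^{\epsilon})\big]\,dt + dW_1(t),\\
dv^{\epsilon} &= \tfrac{1}{\epsilon}\big[\Delta v^{\epsilon} + g(u^{\epsilon}, v^{\epsilon})\big]\,dt + \tfrac{1}{\sqrt{\epsilon}}\,dW_2(t),
\end{aligned}
\]

where u^ε are the slow modes and v^ε the fast modes. The averaged equation retains only the slow modes,

\[
d\bar{u} = \big[\Delta \bar{u} + \bar{f}(\bar{u})\big]\,dt + dW_1(t),
\qquad
\bar{f}(u) = \int f(u, v)\, \mu^{u}(dv),
\]

with μ^u the invariant measure of the fast equation when the slow variable is frozen at u. The cited result establishes convergence of u^ε to ū in probability, with an explicit rate, without requiring the slow-mode coefficient to be Lipschitz.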

140 citations

Journal ArticleDOI
TL;DR: An approximate EM algorithm, the EM-EP algorithm, is presented, which is found to converge in practice and provides an efficient Bayesian framework for learning hyperparameters of the kernel.
Abstract: Gaussian process classifiers (GPCs) are Bayesian probabilistic kernel classifiers. In GPCs, the probability of belonging to a certain class at an input location is monotonically related to the value of some latent function at that location. Starting from a Gaussian process prior over this latent function, data are used to infer both the posterior over the latent function and the values of hyperparameters that determine various aspects of the function. Recently, the expectation propagation (EP) approach has been proposed to infer the posterior over the latent function. Based on this work, we present an approximate EM algorithm, the EM-EP algorithm, to learn both the latent function and the hyperparameters. This algorithm is found to converge in practice and provides an efficient Bayesian framework for learning hyperparameters of the kernel. A multiclass extension of the EM-EP algorithm for GPCs is also derived. In the experimental results, the EM-EP algorithms are as good as or better than other methods for GPCs or support vector machines (SVMs) with cross-validation.
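The EM-EP algorithm itself is not part of mainstream libraries, but the workflow it supports, learning kernel hyperparameters of a GP classifier from data by maximizing an approximate marginal likelihood, can be sketched with scikit-learn. Note that scikit-learn's GaussianProcessClassifier uses a Laplace approximation to the posterior over the latent function rather than EP, so this is an analogous sketch, not the authors' method; the toy data set below is a placeholder.

```python
# Sketch: GP classification with kernel hyperparameters learned from data.
# scikit-learn approximates the posterior over the latent function with a
# Laplace approximation (not EP as in the EM-EP algorithm) and fits kernel
# hyperparameters by maximizing the approximate marginal likelihood.
from sklearn.datasets import make_moons
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

# Placeholder binary classification data.
X_train, y_train = make_moons(n_samples=200, noise=0.2, random_state=0)

# GP prior over the latent function: the amplitude and length scale below are
# only initial values; both are optimized during fitting.
kernel = ConstantKernel(1.0) * RBF(length_scale=1.0)

gpc = GaussianProcessClassifier(kernel=kernel, n_restarts_optimizer=5,
                                random_state=0)
gpc.fit(X_train, y_train)

print("learned kernel:", gpc.kernel_)          # optimized hyperparameters
print("log marginal likelihood:", gpc.log_marginal_likelihood_value_)
print("class probabilities:", gpc.predict_proba(X_train[:3]))
```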

140 citations

Journal ArticleDOI
TL;DR: In this paper, a power covariance with a range parameter is proposed for the spatial linear model, and a convenient profile likelihood is introduced and studied in view of potential multimodal likelihoods for small samples.
Abstract: A popular covariance scheme used for the spatial linear model in geostatistics has spherical form. However, the likelihood is not twice differentiable with respect to the range parameter, and this raises some questions regarding the unimodality of the likelihood. We compare the likelihoods of the spatial linear model for small samples under this scheme and the doubly geometric scheme. A power covariance with a range parameter is also proposed. In view of potential multimodal likelihoods for small samples under this model, a convenient profile likelihood is introduced and studied.
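For reference, the spherical covariance referred to above has the standard form (h is the distance between two sites, σ² the sill, and φ the range parameter; the notation is assumed here):

\[
C(h) =
\begin{cases}
\sigma^{2}\!\left(1 - \dfrac{3h}{2\phi} + \dfrac{h^{3}}{2\phi^{3}}\right), & 0 \le h \le \phi,\\
0, & h > \phi,
\end{cases}
\]

which is continuous but not smooth as a function of φ at h = φ; this is the source of the non-twice-differentiability of the likelihood mentioned in the abstract.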

140 citations

Journal ArticleDOI
TL;DR: A bound on the rate of convergence in Hellinger distance for density estimation is established using the Gaussian mixture sieve, assuming that the true density is itself a mixture of Gaussians; the underlying mixing measure of the true density is not necessarily assumed to have finite support.
Abstract: Gaussian mixtures provide a convenient method of density estimation that lies somewhere between parametric models and kernel density estimators. When the number of components of the mixture is allowed to increase as the sample size increases, the model is called a mixture sieve. We establish a bound on the rate of convergence in Hellinger distance for density estimation using the Gaussian mixture sieve, assuming that the true density is itself a mixture of Gaussians; the underlying mixing measure of the true density is not necessarily assumed to have finite support. Computing the rate involves some delicate calculations, since the size of the sieve, as measured by bracketing entropy, and the saturation rate cannot be found using standard methods. When the mixing measure has compact support, using k_n ∼ n^{2/3}/(log n)^{1/3} components in the mixture yields a rate of order (log n)^{(1+η)/6}/n^{1/6} for every η > 0. The rates depend heavily on the tail behavior of the true density. The sensitivity to the tail behavior is diminished by using a robust sieve which includes a long-tailed component in the mixture.
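As a rough illustration of the sieve idea, the number of mixture components can be allowed to grow with the sample size at the rate discussed above, with the mixture fitted by maximum likelihood. The sketch below uses scikit-learn's GaussianMixture on synthetic data and is only meant to show the k_n schedule, not to reproduce the paper's estimator or its rate analysis.

```python
# Sketch: density estimation with a Gaussian mixture sieve in which the number
# of components grows as k_n ~ n^{2/3} / (log n)^{1/3}, the schedule discussed
# in the abstract. Fitting is plain maximum likelihood via EM (scikit-learn),
# not the bracketing-entropy analysis of the paper.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Toy data: the "true" density is itself a two-component Gaussian mixture.
n = 2000
data = np.concatenate([rng.normal(-2.0, 1.0, n // 2),
                       rng.normal(3.0, 0.5, n - n // 2)]).reshape(-1, 1)

# Sieve size for this sample size.
k_n = max(1, int(n ** (2 / 3) / np.log(n) ** (1 / 3)))

gm = GaussianMixture(n_components=k_n, covariance_type="full",
                     random_state=0).fit(data)

# Average log-likelihood of the fitted sieve density on the training sample.
print(f"n = {n}, k_n = {k_n}, mean log-likelihood = {gm.score(data):.3f}")
```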

139 citations


Network Information
Related Topics (5)
Estimator: 97.3K papers, 2.6M citations, 87% related
Optimization problem: 96.4K papers, 2.1M citations, 85% related
Artificial neural network: 207K papers, 4.5M citations, 84% related
Support vector machine: 73.6K papers, 1.7M citations, 82% related
Deep learning: 79.8K papers, 2.1M citations, 82% related
Performance Metrics
No. of papers in the topic in previous years:

Year    Papers
2023    502
2022    1,181
2021    1,132
2020    1,220
2019    1,119
2018    978