Topic

Gaussian process

About: Gaussian process is a research topic. Over its lifetime, 18,944 publications have been published within this topic, receiving 486,645 citations. The topic is also known as: Gaussian stochastic process.


Papers
Proceedings Article
03 Dec 2012
TL;DR: A flexible yet simple Bayesian nonparametric model for exchangeable arrays is obtained by placing a Gaussian process prior on the random function that constitutes the natural model parameter in a Bayesian model.
Abstract: A fundamental problem in the analysis of structured relational data like graphs, networks, databases, and matrices is to extract a summary of the common structure underlying relations between individual entities. Relational data are typically encoded in the form of arrays; invariance to the ordering of rows and columns corresponds to exchangeable arrays. Results in probability theory due to Aldous, Hoover and Kallenberg show that exchangeable arrays can be represented in terms of a random measurable function which constitutes the natural model parameter in a Bayesian model. We obtain a flexible yet simple Bayesian nonparametric model by placing a Gaussian process prior on the parameter function. Efficient inference utilises elliptical slice sampling combined with a random sparse approximation to the Gaussian process. We demonstrate applications of the model to network data and clarify its relation to models in the literature, several of which emerge as special cases.
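The inference step named in this abstract, elliptical slice sampling (Murray, Adams & MacKay, 2010), is a generic MCMC update for latent vectors with a Gaussian prior. Below is a minimal NumPy sketch of a single update; it illustrates the general technique, not the paper's implementation, and omits the random sparse approximation to the Gaussian process (all function and variable names are ours).

```python
import numpy as np

def elliptical_slice(f, prior_chol, log_lik, rng):
    """One elliptical slice sampling update (illustrative sketch).

    f          -- current latent vector, modelled as a draw from N(0, K)
    prior_chol -- lower-triangular Cholesky factor of the GP prior K
    log_lik    -- function mapping a latent vector to its log-likelihood
    rng        -- numpy random Generator
    """
    n = f.shape[0]
    # Auxiliary prior draw; together with f it defines an ellipse of states.
    nu = prior_chol @ rng.standard_normal(n)
    # Log-likelihood threshold defining the slice.
    log_y = log_lik(f) + np.log(rng.uniform())
    # Initial angle and bracket.
    theta = rng.uniform(0.0, 2.0 * np.pi)
    lo, hi = theta - 2.0 * np.pi, theta
    while True:
        f_new = f * np.cos(theta) + nu * np.sin(theta)
        if log_lik(f_new) > log_y:
            return f_new
        # Shrink the bracket toward the current state and retry.
        if theta < 0.0:
            lo = theta
        else:
            hi = theta
        theta = rng.uniform(lo, hi)
```

Because every proposal stays on an ellipse through the current state and a fresh prior draw, the update needs no step-size tuning, which is part of its appeal for GP models like this one.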

123 citations

Journal Article
TL;DR: In this article, the authors combine the variational approach to sparse approximation and the spectral representation of Gaussian processes to obtain an approximation with the representational power and computational scalability of spectral representations.
Abstract: This work brings together two powerful concepts in Gaussian processes: the variational approach to sparse approximation and the spectral representation of Gaussian processes. This gives rise to an approximation that inherits the benefits of the variational approach but with the representational power and computational scalability of spectral representations. The work hinges on a key result that there exist spectral features related to a finite domain of the Gaussian process which exhibit almost-independent covariances. We derive these expressions for Matérn kernels in one dimension, and generalize to more dimensions using kernels with specific structures. Under the assumption of additive Gaussian noise, our method requires only a single pass through the data set, making for very fast and accurate computation. We fit a model to 4 million training points in just a few minutes on a standard laptop. With non-conjugate likelihoods, our MCMC scheme reduces the cost of computation from $O(NM^2)$ (for a sparse Gaussian process) to $O(NM)$ per iteration, where N is the number of data points and M is the number of features.
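The spectral side of this construction can be illustrated with random Fourier features (Rahimi & Recht, 2007), a simpler relative of the variational spectral features the paper derives. The sketch below approximates a squared-exponential kernel with a finite feature map; it is not the paper's Matérn-specific construction, and all names are ours.

```python
import numpy as np

def rff_features(X, n_features, lengthscale, rng):
    """Random Fourier features for the RBF kernel (illustrative sketch).

    Returns Phi such that Phi @ Phi.T approximates
    k(x, x') = exp(-||x - x'||^2 / (2 * lengthscale^2)).
    """
    d = X.shape[1]
    # Frequencies drawn from the kernel's spectral density N(0, I / lengthscale^2).
    W = rng.standard_normal((d, n_features)) / lengthscale
    # Random phases make the cosine features an unbiased kernel estimate.
    b = rng.uniform(0.0, 2.0 * np.pi, n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 3))
Phi = rff_features(X, n_features=200, lengthscale=1.0, rng=rng)
K_approx = Phi @ Phi.T  # finite-rank spectral approximation of the kernel matrix
```

The point of any such finite spectral representation is the same one the abstract exploits: once the kernel is expressed through M features, inference costs scale in M rather than in the number of data points.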

123 citations

Posted Content
TL;DR: A scalable approach for exact GPs is developed that leverages multi-GPU parallelization and methods such as linear conjugate gradients, accessing the kernel matrix only through matrix multiplication; the approach is generally applicable, with no constraints to grid data or specific kernel classes.
Abstract: Gaussian processes (GPs) are flexible non-parametric models, with a capacity that grows with the available data. However, computational constraints with standard inference procedures have limited exact GPs to problems with fewer than about ten thousand training points, necessitating approximations for larger datasets. In this paper, we develop a scalable approach for exact GPs that leverages multi-GPU parallelization and methods like linear conjugate gradients, accessing the kernel matrix only through matrix multiplication. By partitioning and distributing kernel matrix multiplies, we demonstrate that an exact GP can be trained on over a million points, a task previously thought to be impossible with current computing hardware, in less than 2 hours. Moreover, our approach is generally applicable, without constraints to grid data or specific kernel classes. Enabled by this scalability, we perform the first-ever comparison of exact GPs against scalable GP approximations on datasets with $10^4 \!-\! 10^6$ data points, showing dramatic performance improvements.
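The access pattern the abstract describes, touching the kernel matrix only through matrix-vector products, is exactly what a conjugate gradients solver needs. Below is a minimal single-node NumPy sketch of solving $(K + \sigma^2 I)v = y$ this way; the paper's multi-GPU partitioning, preconditioning, and log-determinant estimation are omitted, and all names are ours.

```python
import numpy as np

def gp_solve_cg(kernel_matvec, y, noise_var, tol=1e-6, max_iter=1000):
    """Solve (K + noise_var * I) v = y by conjugate gradients.

    kernel_matvec -- function computing K @ x without materialising K;
                     this black-box access is what lets the kernel
                     multiply be partitioned across devices.
    """
    def matvec(x):
        return kernel_matvec(x) + noise_var * x

    v = np.zeros_like(y)
    r = y - matvec(v)          # initial residual
    p = r.copy()               # initial search direction
    rs = r @ r
    for _ in range(max_iter):
        Ap = matvec(p)
        alpha = rs / (p @ Ap)  # step length along p
        v += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p  # conjugate direction update
        rs = rs_new
    return v

# Usage with an explicit kernel matrix K (illustration only):
# v = gp_solve_cg(lambda x: K @ x, y, noise_var=0.1)
```

Because `kernel_matvec` is a black box, the same solver runs unchanged whether K @ x is computed densely, in distributed blocks, or with structured kernels, which is the generality the abstract claims.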

123 citations

Journal Article
TL;DR: In this article, the authors use the Malliavin calculus to obtain a new exact formula for the density of any random variable that is measurable and differentiable with respect to a given isonormal Gaussian process.
Abstract: We show how to use the Malliavin calculus to obtain a new exact formula for the density $\rho$ of the law of any random variable $Z$ which is measurable and differentiable with respect to a given isonormal Gaussian process. The main advantage of this formula is that it does not refer to the divergence operator $\delta$ (dual of the Malliavin derivative $D$). The formula is based on an auxiliary random variable $G := \langle DZ, -DL^{-1}Z \rangle_H$, where $L$ is the generator of the so-called Ornstein-Uhlenbeck semigroup. The use of $G$ was first discovered by Nourdin and Peccati (PTRF 145, 75-118, 2009, MR-2520122), in the context of rates of convergence in law. Here, thanks to $G$, density lower bounds can be obtained in some instances. Among several examples, we provide an application to the (centered) maximum of a general Gaussian process. We also explain how to derive concentration inequalities for $Z$ in our framework.
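For reference, the shape of the resulting density formula in this line of work (Nourdin-Viens) is sketched below; it is stated from memory rather than quoted from the paper, so the precise regularity conditions should be checked against the source.

```latex
% Density formula (sketch): Z centered, G = <DZ, -DL^{-1}Z>_H,
% and g(z) = E[G | Z = z] assumed positive on the support of Z.
\[
  \rho(z) \;=\; \frac{\mathbb{E}\,|Z|}{2\,g(z)}
    \exp\!\left( -\int_0^{z} \frac{x}{g(x)}\,\mathrm{d}x \right).
\]
% Lower bounds on g translate directly into density lower bounds
% and concentration inequalities for Z, as the abstract indicates.
```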

123 citations

Journal Article
TL;DR: The new family of Bessel K form (BKF) densities is shown to fit the observed histograms very well, with a high degree of match between observed and estimated prior densities under the BKF model.
Abstract: A novel Bayesian nonparametric estimator in the wavelet domain is presented. In this approach, a prior model is imposed on the wavelet coefficients, designed to capture the sparseness of the wavelet expansion. Seeking probability models for the marginal densities of the wavelet coefficients, the new family of Bessel K form (BKF) densities is shown to fit the observed histograms very well. Exploiting this prior, we design a Bayesian nonlinear denoiser and derive a closed form for its expression. We then compare it to other priors that have been introduced in the literature, such as the generalized Gaussian density (GGD) or the $\alpha$-stable models, where no analytical form is available for the corresponding Bayesian denoisers. Specifically, the BKF model turns out to be a good compromise between these two extreme cases (hyperbolic tails for the $\alpha$-stable and exponential tails for the GGD). Moreover, we demonstrate a high degree of match between observed and estimated prior densities using the BKF model. Finally, a comparative study is carried out to show the effectiveness of our denoiser, which clearly outperforms the classical shrinkage or thresholding wavelet-based techniques.
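For contrast, the classical thresholding baseline that the abstract says the BKF denoiser outperforms can be sketched in a few lines with PyWavelets. This is the standard soft-threshold denoiser with a universal threshold, not the paper's closed-form BKF shrinkage rule, and the function name is ours.

```python
import numpy as np
import pywt  # PyWavelets

def wavelet_soft_denoise(signal, wavelet="db4", level=4):
    """Classical soft-threshold wavelet denoiser (baseline sketch).

    The BKF Bayesian denoiser in the paper replaces this fixed
    threshold rule with a closed-form posterior shrinkage.
    """
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    # Estimate the noise level from the finest detail coefficients
    # via the median absolute deviation.
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    # Universal threshold of Donoho and Johnstone.
    thresh = sigma * np.sqrt(2.0 * np.log(signal.size))
    denoised = [coeffs[0]] + [
        pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]
    ]
    return pywt.waverec(denoised, wavelet)[: signal.size]
```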

123 citations


Network Information
Related Topics (5)
Estimator: 97.3K papers, 2.6M citations (87% related)
Optimization problem: 96.4K papers, 2.1M citations (85% related)
Artificial neural network: 207K papers, 4.5M citations (84% related)
Support vector machine: 73.6K papers, 1.7M citations (82% related)
Deep learning: 79.8K papers, 2.1M citations (82% related)
Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2023    502
2022    1,181
2021    1,132
2020    1,220
2019    1,119
2018    978