Topic

Gaussian process

About: Gaussian process is a research topic. Over its lifetime, 18,944 publications have been published on this topic, receiving 486,645 citations. The topic is also known as: Gaussian stochastic process.


Papers
03 Jul 1997
TL;DR: The problem of regression under Gaussian assumptions is treated in this paper; the relationship between Bayesian prediction, regularization, and smoothing is elucidated, and the ideal regression is shown to be the posterior mean, whose computation scales as O(n^3).
Abstract: The problem of regression under Gaussian assumptions is treated generally. The relationship between Bayesian prediction, regularization and smoothing is elucidated. The ideal regression is the posterior mean and its computation scales as O(n^3), where n is the sample size. We show that the optimal m-dimensional linear model under a given prior is spanned by the first m eigenfunctions of a covariance operator, which is a trace-class operator. This is an infinite-dimensional analogue of principal component analysis. The importance of Hilbert space methods to practical statistics is also discussed.

114 citations
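
The abstract above centers on the posterior mean as the ideal regression, with O(n^3) cost. A minimal sketch of that computation, assuming an RBF kernel and synthetic data (both are illustrative choices, not taken from the paper):

```python
import numpy as np

def rbf_kernel(X1, X2, lengthscale=1.0, variance=1.0):
    # squared-exponential covariance k(x, x') = s^2 exp(-|x - x'|^2 / (2 l^2))
    sqdist = np.sum(X1**2, 1)[:, None] + np.sum(X2**2, 1)[None, :] - 2 * X1 @ X2.T
    return variance * np.exp(-0.5 * sqdist / lengthscale**2)

def gp_posterior_mean(X_train, y_train, X_test, noise_var=0.1):
    # The Cholesky factorization of the n x n system dominates the cost: O(n^3).
    K = rbf_kernel(X_train, X_train) + noise_var * np.eye(len(X_train))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    return rbf_kernel(X_test, X_train) @ alpha

# Toy usage on synthetic data
X = np.linspace(0, 5, 50)[:, None]
y = np.sin(X).ravel() + 0.1 * np.random.default_rng(0).standard_normal(50)
mean = gp_posterior_mean(X, y, np.linspace(0, 5, 100)[:, None])
```

The cubic scaling quoted in the abstract comes from factorizing the n x n kernel matrix.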

Proceedings ArticleDOI
07 Nov 2009
TL;DR: This work models the analysis coefficients of typical images obtained using typical pyramidal frames as a strictly sparse vector plus a Gaussian correction term, which allows for an elegant iterated marginal optimization and provides state-of-the-art performance in standard deconvolution tests.
Abstract: Sparse optimization in overcomplete frames has been widely applied in recent years to ill-conditioned inverse problems. In particular, analysis-based sparse optimization consists of achieving a certain trade-off between fidelity to the observation and sparsity in a given linear representation, typically measured by some l_p quasi-norm. Whereas the most popular choice for p is 1 (the convex optimization case), there is increasing evidence of both the computational feasibility and the higher performance potential of non-convex approaches (0 ≤ p < 1). The extreme p = 0 case is special, because the analysis coefficients of typical images obtained using typical pyramidal frames are not strictly sparse, but rather compressible. Here we model the analysis coefficients as a strictly sparse vector plus a Gaussian correction term. This statistical formulation allows for an elegant iterated marginal optimization. We also show that it provides state-of-the-art performance, in a least-squares error sense, in standard deconvolution tests.

113 citations
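
As an illustration of the "strictly sparse vector plus Gaussian correction" decomposition described above, here is a hedged sketch; the thresholding rule and the parameters tau and n_iters are assumptions for demonstration, not the paper's iterated marginal optimization:

```python
import numpy as np

def sparse_plus_gaussian(c, n_iters=20, tau=3.0):
    """Split coefficients c into a strictly sparse part s and a Gaussian
    residual g = c - s, thresholding at tau times the Gaussian std, which
    is re-estimated from the entries currently assigned to the residual."""
    s = np.zeros_like(c)
    for _ in range(n_iters):
        g = c - s
        small = g[s == 0]                 # entries currently deemed Gaussian
        sigma = small.std() if small.size else g.std()
        mask = np.abs(c) > tau * sigma    # too large to be Gaussian noise
        s = np.where(mask, c, 0.0)        # strictly sparse component
    return s, c - s

# Heavy-tailed stand-in for compressible analysis coefficients
coeffs = np.random.default_rng(0).laplace(size=1000)
s, g = sparse_plus_gaussian(coeffs)
```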

Journal ArticleDOI
TL;DR: A nonparametric Bayesian dynamic model is proposed that reduces dimensionality by characterizing the binary matrix through a lower-dimensional latent space representation, with the latent coordinates evolving in continuous time via Gaussian processes, yielding a flexible and computationally tractable formulation.
Abstract: Symmetric binary matrices representing relations are collected in many areas. Our focus is on dynamically evolving binary relational matrices, with interest in inference on the relationship structure and in prediction. We propose a nonparametric Bayesian dynamic model, which reduces dimensionality in characterizing the binary matrix through a lower-dimensional latent space representation, with the latent coordinates evolving in continuous time via Gaussian processes. By using a logistic mapping function from the link probability matrix space to the latent relational space, we obtain a flexible and computationally tractable formulation. Employing Pólya-gamma data augmentation, an efficient Gibbs sampler is developed for posterior computation, with the dimension of the latent space automatically inferred. We provide theoretical results on the flexibility of the model, and illustrate its performance via simulation experiments. We also consider an application to co-movements in world financial markets.

113 citations
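
To make the construction concrete, the following sketch samples latent node coordinates as Gaussian-process paths in continuous time and applies a logistic mapping to latent inner products to obtain link probabilities; the kernel, lengthscale, and latent dimension are illustrative assumptions, and the paper's Pólya-gamma Gibbs sampler is not reproduced here:

```python
import numpy as np

def gp_sample_paths(times, n_paths, lengthscale=1.0, seed=0):
    # Independent GP draws over a common time grid (RBF kernel assumed).
    rng = np.random.default_rng(seed)
    d = np.subtract.outer(times, times)
    K = np.exp(-0.5 * (d / lengthscale) ** 2) + 1e-8 * np.eye(len(times))
    return np.linalg.cholesky(K) @ rng.standard_normal((len(times), n_paths))

def link_probabilities(times, n_nodes, latent_dim=2):
    # One GP path per (node, latent dimension); coordinates evolve smoothly.
    X = gp_sample_paths(times, n_nodes * latent_dim)
    X = X.reshape(len(times), n_nodes, latent_dim)
    logits = np.einsum('tid,tjd->tij', X, X)   # latent inner products
    return 1.0 / (1.0 + np.exp(-logits))       # logistic mapping

P = link_probabilities(np.linspace(0, 1, 10), n_nodes=5)  # (time, i, j)
```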

Journal ArticleDOI
TL;DR: In this paper, a critical examination of the analytical solution presented in the classic paper of Greenwood and Williamson (1966) (GW) on the statistical modeling of nominally flat contacting rough surfaces is undertaken, and it is found that using the GW simple exponential distribution to approximate the usually Gaussian height distribution of the asperities is inadequate for most practical cases.
Abstract: A critical examination of the analytical solution presented in the classic paper of Greenwood and Williamson (1966) (GW) on the statistical modeling of nominally flat contacting rough surfaces is undertaken in this study. It is found that using the GW simple exponential distribution to approximate the usually Gaussian height distribution of the asperities is inadequate for most practical cases. Some other exponential-type approximations are suggested, which approximate the Gaussian distribution more accurately and still enable closed-form solutions for the real area of contact, the contact load, and the number of contacting asperities. The best modified exponential approximation is then used in the elastic-plastic contact model of Chang et al. (1987) (the CEB model) to obtain closed-form solutions, which compare favorably with the numerical results using the Gaussian distribution.

113 citations
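
The closed forms mentioned above arise because exponential height densities make the GW contact integrals analytic. A small sketch comparing the standard Gaussian density with a generic exponential-type approximation phi(z) ≈ c·exp(-lam·z) for the integral F_n(h) = ∫_h^∞ (z - h)^n phi(z) dz; the constants c and lam are illustrative, not the paper's fitted values:

```python
import math
import numpy as np
from scipy import integrate

def F_n_gaussian(n, h):
    # Numerical quadrature against the standard Gaussian height density.
    phi = lambda z: np.exp(-z**2 / 2) / np.sqrt(2 * np.pi)
    val, _ = integrate.quad(lambda z: (z - h)**n * phi(z), h, np.inf)
    return val

def F_n_exponential(n, h, c=0.3, lam=2.0):
    # Closed form: int_h^inf (z-h)^n c e^{-lam z} dz = c e^{-lam h} n! / lam^(n+1)
    return c * math.exp(-lam * h) * math.factorial(n) / lam ** (n + 1)

for h in (0.5, 1.0, 1.5):   # dimensionless separations
    print(h, F_n_gaussian(1, h), F_n_exponential(1, h))
```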

Posted Content
TL;DR: In this paper, an approximate Expectation Propagation procedure and a novel, efficient extension of the probabilistic backpropagation algorithm are proposed for learning, enabling deep Gaussian processes to be applied to medium- to large-scale regression problems.
Abstract: Deep Gaussian processes (DGPs) are multi-layer hierarchical generalisations of Gaussian processes (GPs) and are formally equivalent to neural networks with multiple, infinitely wide hidden layers. DGPs are nonparametric probabilistic models and as such are arguably more flexible, have a greater capacity to generalise, and provide better calibrated uncertainty estimates than alternative deep models. This paper develops a new approximate Bayesian learning scheme that enables DGPs to be applied to a range of medium to large scale regression problems for the first time. The new method uses an approximate Expectation Propagation procedure and a novel and efficient extension of the probabilistic backpropagation algorithm for learning. We evaluate the new method for non-linear regression on eleven real-world datasets, showing that it always outperforms GP regression and is almost always better than state-of-the-art deterministic and sampling-based approximate inference methods for Bayesian neural networks. As a by-product, this work provides a comprehensive analysis of six approximate Bayesian methods for training neural networks.

113 citations
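
For intuition about what a DGP is (a composition of GP layers), the following sketch draws a sample from a two-layer DGP prior; the kernels and lengthscales are arbitrary choices, and the paper's EP-based inference scheme is far more involved than this prior simulation:

```python
import numpy as np

def rbf(X1, X2, ls):
    sq = np.sum(X1**2, 1)[:, None] + np.sum(X2**2, 1)[None, :] - 2 * X1 @ X2.T
    return np.exp(-0.5 * sq / ls**2)

def gp_draw(X, ls, rng):
    # Sample a GP path at the inputs X via a Cholesky factor of the kernel.
    K = rbf(X, X, ls) + 1e-8 * np.eye(len(X))
    return np.linalg.cholesky(K) @ rng.standard_normal(len(X))

rng = np.random.default_rng(0)
X = np.linspace(-3, 3, 200)[:, None]
h = gp_draw(X, ls=1.0, rng=rng)[:, None]   # hidden layer: a GP of the inputs
f = gp_draw(h, ls=0.5, rng=rng)            # output layer: a GP of the hidden values
```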


Network Information
Related Topics (5)
Estimator: 97.3K papers, 2.6M citations, 87% related
Optimization problem: 96.4K papers, 2.1M citations, 85% related
Artificial neural network: 207K papers, 4.5M citations, 84% related
Support vector machine: 73.6K papers, 1.7M citations, 82% related
Deep learning: 79.8K papers, 2.1M citations, 82% related
Performance Metrics
No. of papers in the topic in previous years:

Year    Papers
2023    502
2022    1,181
2021    1,132
2020    1,220
2019    1,119
2018    978