Topic

Gaussian process

About: Gaussian process is a research topic. Over the lifetime, 18,944 publications have been published within this topic, receiving 486,645 citations. The topic is also known as: Gaussian stochastic process.
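Concretely, a Gaussian process is a collection of random variables any finite subset of which is jointly Gaussian, so it is fully specified by a mean function and a covariance (kernel) function. A minimal NumPy sketch of drawing sample paths (the squared-exponential kernel and its hyperparameters are illustrative choices, not tied to any paper below):

```python
import numpy as np

def rbf_kernel(x1, x2, lengthscale=1.0, variance=1.0):
    # Squared-exponential covariance: k(a, b) = v * exp(-(a - b)^2 / (2 l^2)).
    d = x1[:, None] - x2[None, :]
    return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

rng = np.random.default_rng(0)
x = np.linspace(0.0, 5.0, 50)
K = rbf_kernel(x, x) + 1e-8 * np.eye(len(x))   # jitter for numerical stability
samples = rng.multivariate_normal(np.zeros(len(x)), K, size=3)  # 3 sample paths
```

Any finite grid of inputs works the same way; only the kernel (and its hyperparameters) changes the character of the sampled functions.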


Papers
Journal Article
TL;DR: The GPstuff toolbox is a versatile collection of Gaussian process models and computational tools required for Bayesian inference, including various inference methods, sparse approximations and model assessment methods.
Abstract: The GPstuff toolbox is a versatile collection of Gaussian process models and computational tools required for Bayesian inference. The tools include, among others, various inference methods, sparse approximations and model assessment methods.

266 citations
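GPstuff itself is a MATLAB/Octave toolbox; as a language-neutral illustration of the exact Gaussian-process regression posterior that such toolboxes compute, here is a minimal NumPy sketch (the kernel and hyperparameters are assumptions for illustration, not GPstuff's API or defaults):

```python
import numpy as np

def gp_posterior(x_train, y_train, x_test, lengthscale=1.0, variance=1.0, noise=1e-2):
    """Exact GP regression posterior mean and covariance for 1-D inputs."""
    def k(a, b):
        d = a[:, None] - b[None, :]
        return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

    K = k(x_train, x_train) + noise * np.eye(len(x_train))
    L = np.linalg.cholesky(K)                  # stable solve via Cholesky
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    Ks = k(x_test, x_train)
    mean = Ks @ alpha                          # posterior mean
    v = np.linalg.solve(L, Ks.T)
    cov = k(x_test, x_test) - v.T @ v          # posterior covariance
    return mean, cov
```

The sparse approximations mentioned in the abstract replace the full n-by-n Cholesky step above with cheaper low-rank surrogates when n is large.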

Journal Article (DOI)
TL;DR: It is shown that, among discrete mixed autoregressive moving-average processes, time-reversibility is a property unique to Gaussian processes.
Abstract: Time-reversibility is defined for a process X(t) as the property that {X(t_1), ..., X(t_n)} and {X(-t_1), ..., X(-t_n)} have the same joint probability distribution. It is shown that, for discrete mixed autoregressive moving-average processes, this is a unique property of Gaussian processes. Keywords: time-reversibility; shot noise; characterisations of the normal distribution; time series; stochastic processes.

265 citations
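The Gaussian side of the result above can be illustrated numerically: a stationary Gaussian AR(1) process has a Toeplitz covariance depending only on |s - t|, so reversing time leaves its finite-dimensional distributions unchanged (a sketch with illustrative parameter values):

```python
import numpy as np

# Stationary Gaussian AR(1): Cov(X_s, X_t) = sigma2 * phi**|s - t| / (1 - phi**2).
phi, sigma2, n = 0.7, 1.0, 5                   # illustrative parameters
idx = np.arange(n)
cov = sigma2 * phi ** np.abs(idx[:, None] - idx[None, :]) / (1 - phi ** 2)

# Reversing time maps index i to n-1-i; a zero-mean Gaussian vector is fully
# determined by its covariance, and this permutation leaves it unchanged:
print(np.allclose(cov, cov[::-1, ::-1]))       # True: the process is reversible
```

The paper's deeper claim is the converse: within this model class, non-Gaussian innovations break this symmetry.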

Proceedings Article
09 Dec 2003
TL;DR: It is speculated that the intrinsic ability of GP models to characterise distributions of functions would allow the method to capture entire distributions over future values instead of merely their expectation, which has traditionally been the focus of much of reinforcement learning.
Abstract: We exploit some useful properties of Gaussian process (GP) regression models for reinforcement learning in continuous state spaces and discrete time. We demonstrate how the GP model allows evaluation of the value function in closed form. The resulting policy iteration algorithm is demonstrated on a simple problem with a two dimensional state space. Further, we speculate that the intrinsic ability of GP models to characterise distributions of functions would allow the method to capture entire distributions over future values instead of merely their expectation, which has traditionally been the focus of much of reinforcement learning.

264 citations
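One ingredient behind closed-form value evaluation with GP models is that expectations of a squared-exponential kernel under a Gaussian transition density are analytic. A 1-D sanity check of that identity against Monte Carlo (all parameter values are illustrative assumptions):

```python
import numpy as np

# E_{x' ~ N(m, s2)}[k(x', xi)] for k(a, b) = exp(-(a - b)^2 / (2 l^2)) is analytic:
l, xi = 0.5, 1.0            # kernel lengthscale and a representer point
m, s2 = 0.3, 0.2            # Gaussian next-state distribution N(m, s2)

closed = np.sqrt(l**2 / (l**2 + s2)) * np.exp(-(m - xi) ** 2 / (2 * (l**2 + s2)))

# Monte Carlo check of the same expectation:
rng = np.random.default_rng(0)
x_next = rng.normal(m, np.sqrt(s2), size=200_000)
mc = np.exp(-(x_next - xi) ** 2 / (2 * l**2)).mean()
```

Because such integrals are available in closed form, Bellman backups of a GP-represented value function need no numerical quadrature over the transition model.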

Proceedings Article (DOI)
09 May 1995
TL;DR: In this paper, a robust variable step size LMS-type algorithm with the attractive property of achieving a small final misadjustment while providing fast convergence at early stages of adaptation is presented.
Abstract: The paper presents a robust variable step size LMS-type algorithm with the attractive property of achieving a small final misadjustment while providing fast convergence at early stages of adaptation. The performance of the algorithm is not affected by the presence of noise. Approximate analysis of convergence and steady state performance for zero-mean stationary Gaussian inputs and a nonstationary optimal weight vector is provided. Simulation results clearly indicate its superior performance for stationary cases. For the nonstationary environment, the algorithm provides performance equivalent to that of the regular LMS algorithm.

264 citations
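As an illustration of the general idea (not the paper's exact update rule), a variable step-size LMS can grow the step with a smoothed estimate of the error autocorrelation and shrink it near convergence; all parameter values below are illustrative assumptions:

```python
import numpy as np

def vss_lms(x, d, order=4, mu0=0.05, alpha=0.99, gamma=0.01, beta=0.95,
            mu_min=1e-3, mu_max=0.2):
    # Variable step-size LMS sketch: p tracks a smoothed error autocorrelation,
    # so mu stays large while the filter is still converging and small after.
    w = np.zeros(order)
    mu, p, e_prev = mu0, 0.0, 0.0
    errors = []
    for n in range(order - 1, len(x)):
        u = x[n - order + 1:n + 1][::-1]       # most recent input samples first
        e = d[n] - w @ u                       # a priori estimation error
        p = beta * p + (1 - beta) * e * e_prev
        mu = float(np.clip(alpha * mu + gamma * p * p, mu_min, mu_max))
        w = w + mu * e * u                     # LMS weight update
        e_prev = e
        errors.append(e)
    return w, np.array(errors)
```

Basing the step size on the error autocorrelation rather than the instantaneous error power is what makes this family of rules robust to uncorrelated measurement noise.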

Proceedings Article (DOI)
14 Jun 2009
TL;DR: This paper develops a non-linear probabilistic matrix factorization using Gaussian process latent variable models and uses stochastic gradient descent (SGD) to optimize the model.
Abstract: A popular approach to collaborative filtering is matrix factorization. In this paper we develop a non-linear probabilistic matrix factorization using Gaussian process latent variable models. We use stochastic gradient descent (SGD) to optimize the model. SGD allows us to apply Gaussian processes to data sets with millions of observations without approximate methods. We apply our approach to benchmark movie recommender data sets. The results show better than previous state-of-the-art performance.

264 citations
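As a simplified stand-in for the paper's non-linear GP-LVM model, the per-rating SGD update it relies on can be sketched with plain linear matrix factorization (hyperparameters and the toy data are illustrative assumptions):

```python
import numpy as np

def sgd_mf(ratings, n_users, n_items, rank=2, lr=0.05, reg=0.01, epochs=500, seed=0):
    # Per-rating SGD on squared error with L2 regularisation.
    rng = np.random.default_rng(seed)
    U = 0.1 * rng.standard_normal((n_users, rank))   # user latent factors
    V = 0.1 * rng.standard_normal((n_items, rank))   # item latent factors
    for _ in range(epochs):
        for u, i, r in ratings:                      # one (user, item, rating) at a time
            err = r - U[u] @ V[i]
            U[u] += lr * (err * V[i] - reg * U[u])
            V[i] += lr * (err * U[u] - reg * V[i])
    return U, V
```

Because each update touches only one rating's factors, the cost per step is independent of the total number of observations, which is what lets SGD scale to millions of ratings.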


Network Information
Related Topics (5)
- Estimator: 97.3K papers, 2.6M citations, 87% related
- Optimization problem: 96.4K papers, 2.1M citations, 85% related
- Artificial neural network: 207K papers, 4.5M citations, 84% related
- Support vector machine: 73.6K papers, 1.7M citations, 82% related
- Deep learning: 79.8K papers, 2.1M citations, 82% related
Performance Metrics
No. of papers in the topic in previous years:
Year    Papers
2023    502
2022    1,181
2021    1,132
2020    1,220
2019    1,119
2018    978