Topic
Gaussian process
About: Gaussian process is a research topic. Over its lifetime, 18,944 publications have been published within this topic, receiving 486,645 citations. The topic is also known as: Gaussian stochastic process.
Papers
TL;DR: The GPstuff toolbox is a versatile collection of Gaussian process models and computational tools required for Bayesian inference, including various inference methods, sparse approximations and model assessment methods.
Abstract: The GPstuff toolbox is a versatile collection of Gaussian process models and computational tools required for Bayesian inference. The tools include, among others, various inference methods, sparse approximations and model assessment methods.
266 citations
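GPstuff itself is a MATLAB/Octave toolbox, so its API is not reproduced here; as a language-neutral illustration of the core computation such toolboxes wrap, the following is a minimal NumPy sketch of exact GP regression with a squared-exponential kernel (all function names and hyperparameter values are illustrative, not GPstuff's interface):

```python
import numpy as np

def rbf_kernel(a, b, lengthscale=1.0, variance=1.0):
    """Squared-exponential covariance between two sets of 1-D inputs."""
    d = a[:, None] - b[None, :]
    return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

def gp_posterior(x_train, y_train, x_test, noise=1e-2):
    """Posterior mean and variance of a zero-mean GP at x_test."""
    K = rbf_kernel(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = rbf_kernel(x_train, x_test)
    Kss = rbf_kernel(x_test, x_test)
    L = np.linalg.cholesky(K)            # numerically stable solve via Cholesky
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mean = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    cov = Kss - v.T @ v
    return mean, np.diag(cov)

x = np.linspace(0, 2 * np.pi, 10)
y = np.sin(x)
mu, var = gp_posterior(x, y, np.array([np.pi / 2]))
# posterior mean at pi/2 should be close to sin(pi/2) = 1
```

Sparse approximations, as mentioned in the abstract, replace the exact Cholesky solve above (which costs O(n^3)) with low-rank surrogates so that much larger datasets become tractable.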
TL;DR: For discrete mixed autoregressive moving-average processes, it is shown that time-reversibility is a property unique to Gaussian processes.
Abstract: Time-reversibility is defined for a process X(t) as the property that {X(t_1), ..., X(t_n)} and {X(-t_1), ..., X(-t_n)} have the same joint probability distribution. It is shown that, for discrete mixed autoregressive moving-average processes, this is a unique property of Gaussian processes.
Keywords: time-reversibility; shot noise; characterisations of the normal distribution; time series; stochastic processes
265 citations
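The Gaussian direction of this result is immediate: the joint law of any finite sample of a stationary Gaussian process is fixed by its autocovariance, which depends only on time differences, so {X(t_1), ..., X(t_n)} and {X(-t_1), ..., X(-t_n)} share a covariance matrix. A minimal numeric check for a stationary Gaussian AR(1) process (illustrative only; the paper's actual contribution, that Gaussianity is necessary among mixed ARMA processes, is not shown by this sketch):

```python
import numpy as np

# Stationary Gaussian AR(1) with unit innovation variance:
# cov(X_s, X_t) = rho**|s - t| / (1 - rho**2)
rho = 0.6
t = np.array([1, 2, 5, 9])     # arbitrary sample times t_1, ..., t_n

def cov_matrix(times):
    lags = np.abs(times[:, None] - times[None, :])
    return rho ** lags / (1 - rho ** 2)

forward = cov_matrix(t)        # covariance of (X(t_1), ..., X(t_n))
backward = cov_matrix(-t)      # covariance of (X(-t_1), ..., X(-t_n))
# identical covariance + Gaussianity => identical joint law => time-reversible
assert np.allclose(forward, backward)
```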
09 Dec 2003
TL;DR: It is speculated that the intrinsic ability of GP models to characterise distributions of functions would allow the method to capture entire distributions over future values instead of merely their expectation, which has traditionally been the focus of much of reinforcement learning.
Abstract: We exploit some useful properties of Gaussian process (GP) regression models for reinforcement learning in continuous state spaces and discrete time. We demonstrate how the GP model allows evaluation of the value function in closed form. The resulting policy iteration algorithm is demonstrated on a simple problem with a two dimensional state space. Further, we speculate that the intrinsic ability of GP models to characterise distributions of functions would allow the method to capture entire distributions over future values instead of merely their expectation, which has traditionally been the focus of much of reinforcement learning.
264 citations
09 May 1995
TL;DR: In this paper, a robust variable step size LMS-type algorithm with the attractive property of achieving a small final misadjustment while providing fast convergence at early stages of adaptation is presented.
Abstract: The paper presents a robust variable step size LMS-type algorithm with the attractive property of achieving a small final misadjustment while providing fast convergence at early stages of adaptation. The performance of the algorithm is not affected by the presence of noise. Approximate analysis of convergence and steady state performance for zero-mean stationary Gaussian inputs and a nonstationary optimal weight vector is provided. Simulation results clearly indicate its superior performance for stationary cases. For the nonstationary environment, the algorithm provides performance equivalent to that of the regular LMS algorithm.
264 citations
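The paper's exact update rule is not reproduced here; a well-known variable step-size scheme in the same spirit (a Kwong-Johnston-style rule in which the step size is driven by the squared error, so it is large during initial convergence and small near steady state) can be sketched on a toy system-identification task. All constants below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# System identification: estimate an unknown FIR filter from noisy data.
n_taps, n_samples = 4, 5000
w_true = np.array([0.5, -0.3, 0.2, 0.1])
x = rng.standard_normal(n_samples)
d = np.convolve(x, w_true)[:n_samples] + 0.01 * rng.standard_normal(n_samples)

w = np.zeros(n_taps)
mu, mu_min, mu_max = 0.05, 1e-4, 0.1
alpha, gamma = 0.97, 0.01        # forgetting factor and squared-error gain

for k in range(n_taps, n_samples):
    u = x[k - n_taps + 1:k + 1][::-1]     # most recent input vector
    e = d[k] - w @ u                      # a priori estimation error
    # error-driven step size: large when e^2 is large, clipped to [mu_min, mu_max]
    mu = np.clip(alpha * mu + gamma * e ** 2, mu_min, mu_max)
    w = w + mu * e * u                    # LMS weight update

# w should now be close to w_true
```

Because the step size decays once the error settles, the final misadjustment approaches that of a small fixed-step LMS while convergence early on is as fast as a large-step one, which is the trade-off the abstract describes.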
14 Jun 2009
TL;DR: This paper develops a non-linear probabilistic matrix factorization using Gaussian process latent variable models and uses stochastic gradient descent (SGD) to optimize the model.
Abstract: A popular approach to collaborative filtering is matrix factorization. In this paper we develop a non-linear probabilistic matrix factorization using Gaussian process latent variable models. We use stochastic gradient descent (SGD) to optimize the model. SGD allows us to apply Gaussian processes to data sets with millions of observations without approximate methods. We apply our approach to benchmark movie recommender data sets. The results show better than previous state-of-the-art performance.
264 citations
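The non-linear GP-LVM factorization requires kernel machinery, so it is not sketched here; the linear special case that the paper generalizes, probabilistic matrix factorization trained entry-wise by SGD, can be illustrated as follows (toy data, rank, and hyperparameters are all assumptions for demonstration):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy dense ratings matrix generated from a rank-2 model plus noise.
n_users, n_items, rank = 30, 20, 2
U_true = rng.standard_normal((n_users, rank))
V_true = rng.standard_normal((n_items, rank))
R = U_true @ V_true.T + 0.05 * rng.standard_normal((n_users, n_items))

U = 0.1 * rng.standard_normal((n_users, rank))   # user latent factors
V = 0.1 * rng.standard_normal((n_items, rank))   # item latent factors
lr, reg = 0.02, 0.01                             # learning rate, L2 penalty

for epoch in range(200):
    for i in range(n_users):
        for j in range(n_items):
            err = R[i, j] - U[i] @ V[j]
            U[i] += lr * (err * V[j] - reg * U[i])   # SGD step on user factor
            V[j] += lr * (err * U[i] - reg * V[j])   # SGD step on item factor

rmse = np.sqrt(np.mean((R - U @ V.T) ** 2))
# rmse should approach the injected noise level
```

Because each SGD step touches only one observed entry, the cost per update is independent of the matrix size, which is what lets this family of methods scale to millions of observations, as the abstract notes.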