Open Access Proceedings Article

MCMC for variationally sparse Gaussian processes

TLDR
A Hybrid Monte-Carlo sampling scheme which allows for a non-Gaussian approximation over the function values and covariance parameters simultaneously, with efficient computations based on inducing-point sparse GPs.
Abstract
Gaussian process (GP) models form a core part of probabilistic machine learning. Considerable research effort has been made into attacking three issues with GP models: how to compute efficiently when the number of data is large; how to approximate the posterior when the likelihood is not Gaussian; and how to estimate covariance function parameter posteriors. This paper simultaneously addresses these, using a variational approximation to the posterior which is sparse in support of the function but otherwise free-form. The result is a Hybrid Monte-Carlo sampling scheme which allows for a non-Gaussian approximation over the function values and covariance parameters simultaneously, with efficient computations based on inducing-point sparse GPs. Code to replicate each experiment in this paper is available at github.com/sparseMCMC.
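The mechanics are simple to sketch: rather than sampling the full latent function, one samples the M whitened inducing values together with the covariance-function hyperparameters in a single Hybrid Monte-Carlo chain, so each gradient evaluation costs O(NM^2). Below is a minimal, illustrative sketch of that idea in Python/JAX. It is not the authors' github.com/sparseMCMC code; it assumes a unit-variance RBF kernel, a Gaussian likelihood and a single lengthscale, and every function and variable name is our own choice.

import jax
import jax.numpy as jnp

def rbf(X1, X2, log_lengthscale):
    # Unit-variance squared-exponential kernel (an assumption of this sketch).
    sqdist = jnp.sum((X1[:, None, :] - X2[None, :, :]) ** 2, axis=-1)
    return jnp.exp(-0.5 * sqdist / jnp.exp(2.0 * log_lengthscale))

def log_density(theta, X, y, Z, noise=0.1):
    # theta packs the whitened inducing values v (M of them) and one log-lengthscale.
    M = Z.shape[0]
    v, log_ls = theta[:M], theta[M]
    Kmm = rbf(Z, Z, log_ls) + 1e-6 * jnp.eye(M)
    Knm = rbf(X, Z, log_ls)
    L = jnp.linalg.cholesky(Kmm)
    A = jax.scipy.linalg.solve_triangular(L, Knm.T, lower=True)  # L^{-1} K_mn
    mean = A.T @ v                        # K_nm K_mm^{-1} u, with u = L v
    var = 1.0 - jnp.sum(A ** 2, axis=0)   # diag(K_nn - Q_nn), since k(x, x) = 1
    # Closed-form E_{p(f_n|u)}[log N(y_n | f_n, noise^2)] for a Gaussian likelihood.
    exp_loglik = jnp.sum(-0.5 * jnp.log(2.0 * jnp.pi * noise ** 2)
                         - 0.5 * ((y - mean) ** 2 + var) / noise ** 2)
    # Whitened prior v ~ N(0, I) and a standard-normal prior on the log-lengthscale.
    log_prior = -0.5 * jnp.sum(v ** 2) - 0.5 * log_ls ** 2
    return exp_loglik + log_prior

def hmc_step(key, theta, logp, step_size=0.01, n_leapfrog=20):
    # One Hamiltonian (hybrid) Monte Carlo transition over (v, log-lengthscale) jointly.
    key_mom, key_acc = jax.random.split(key)
    grad = jax.grad(logp)
    p0 = jax.random.normal(key_mom, theta.shape)
    q, p = theta, p0 + 0.5 * step_size * grad(theta)
    for _ in range(n_leapfrog):
        q = q + step_size * p
        p = p + step_size * grad(q)
    p = p - 0.5 * step_size * grad(q)     # undo half of the last momentum update
    h0 = -logp(theta) + 0.5 * jnp.sum(p0 ** 2)
    h1 = -logp(q) + 0.5 * jnp.sum(p ** 2)
    accept = jnp.log(jax.random.uniform(key_acc)) < h0 - h1
    return jnp.where(accept, q, theta)

# Toy usage: inducing values and the lengthscale move in the same HMC transition.
key, data_key = jax.random.split(jax.random.PRNGKey(0))
X = jnp.linspace(-3.0, 3.0, 50)[:, None]
y = jnp.sin(X[:, 0]) + 0.1 * jax.random.normal(data_key, (50,))
Z = jnp.linspace(-3.0, 3.0, 10)[:, None]   # M = 10 inducing inputs
theta = jnp.zeros(Z.shape[0] + 1)          # [v (10,), log-lengthscale]
target = lambda t: log_density(t, X, y, Z)
for _ in range(200):
    key, subkey = jax.random.split(key)
    theta = hmc_step(subkey, theta, target)

The toy run at the end is only meant to show the structural point: the inducing values and the covariance parameter are updated jointly by the same transition, at cost dominated by the M x M Cholesky and the N x M solves.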

Citations
Journal Article

When Gaussian Process Meets Big Data: A Review of Scalable GPs

TL;DR: In this article, a review of state-of-the-art scalable Gaussian process regression (GPR) models is presented, covering global approximations that distill the entire data set and local approximations that divide the data for subspace learning.
Journal Article

GPflow: a Gaussian process library using tensorflow

TL;DR: GPflow is a Gaussian process library that uses TensorFlow for its core computations and Python for its front end. A distinguishing feature of GPflow is its use of variational inference as the primary approximation method.
Proceedings Article

Doubly Stochastic Variational Inference for Deep Gaussian Processes

TL;DR: In this paper, a doubly stochastic variational inference algorithm for deep Gaussian processes (DGPs) is proposed; it does not force independence between layers and can be used for both classification and regression.
Posted Content

When Gaussian Process Meets Big Data: A Review of Scalable GPs

TL;DR: This article is devoted to reviewing state-of-the-art scalable GPs, covering two main categories: global approximations that distill the entire data set and local approximations that divide the data for subspace learning.
Posted Content

Adversarial Examples, Uncertainty, and Transfer Testing Robustness in Gaussian Process Hybrid Deep Networks

TL;DR: This paper shows that GP hybrid deep networks (GPDNNs, i.e. GPs on top of DNNs trained end-to-end) inherit the nice properties of both GPs and DNNs and are much more robust to adversarial examples.
References
Journal Article

A Unifying View of Sparse Approximate Gaussian Process Regression

TL;DR: A new unifying view that includes all existing proper probabilistic sparse approximations for Gaussian process regression; it relies on expressing the effective prior that each method uses, and highlights the relationships between existing methods.
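As a pointer to what the "effective prior" is (standard sparse-GP notation, summarized by us rather than quoted from this page): the framework writes every approximation as replacing the exact joint prior over training and test function values f, f_* with

q(f, f_*) = \int q(f \mid u)\, q(f_* \mid u)\, p(u)\, du, \qquad p(u) = \mathcal{N}(0, K_{mm}),

where u are the function values at m inducing inputs; the individual methods differ only in how the conditionals q(f | u) and q(f_* | u) approximate the exact GP conditionals.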
Proceedings Article

Sparse Gaussian Processes using Pseudo-inputs

TL;DR: It is shown that this new Gaussian process (GP) regression model can match full GP performance with a small number M of pseudo-inputs, i.e. very sparse solutions, and that it significantly outperforms other approaches in this regime.
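For reference, the pseudo-input (SPGP/FITC) regression model can be written, in standard sparse-GP notation (ours, not quoted from this page), as the likelihood

p(y \mid u) = \mathcal{N}\big(y \mid K_{nm} K_{mm}^{-1} u,\ \mathrm{diag}(K_{nn} - Q_{nn}) + \sigma^2 I\big), \qquad Q_{nn} = K_{nm} K_{mm}^{-1} K_{mn},

with a GP prior u ~ N(0, K_{mm}) on the M latent values at the pseudo-inputs; both the pseudo-input locations and the kernel hyperparameters are learned by maximizing the resulting marginal likelihood.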

Bayesian Filtering and Smoothing

Simo Särkkä
TL;DR: This compact, informal introduction for graduate students and advanced undergraduates presents the current state-of-the-art filtering and smoothing methods in a unified Bayesian framework; readers learn what non-linear Kalman filters and particle filters are, how they are related, and their relative advantages and disadvantages.
Journal Article

Soft Margins for AdaBoost

TL;DR: It is found that AdaBoost asymptotically achieves a hard margin distribution, i.e. the algorithm concentrates its resources on a few hard-to-learn patterns that are, interestingly, very similar to support vectors.
Proceedings Article

Variational Learning of Inducing Variables in Sparse Gaussian Processes

TL;DR: A variational formulation for sparse approximations that jointly infers the inducing inputs and the kernel hyperparameters by maximizing a lower bound of the true log marginal likelihood.
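For context, the lower bound referred to here is the collapsed variational bound (standard sparse-GP notation, ours rather than quoted from this page)

\log p(y) \ge \log \mathcal{N}\big(y \mid 0,\ Q_{nn} + \sigma^2 I\big) - \frac{1}{2\sigma^2}\,\mathrm{tr}\big(K_{nn} - Q_{nn}\big), \qquad Q_{nn} = K_{nm} K_{mm}^{-1} K_{mn},

which is maximized jointly over the inducing inputs and the kernel hyperparameters; the trace term penalizes inducing inputs that summarize the training inputs poorly.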