Matthew M. Graham

Researcher at University College London

Publications - 22
Citations - 2336

Matthew M. Graham is an academic researcher from University College London. He has contributed to research on the topics of Markov chain Monte Carlo and Hybrid Monte Carlo, has an h-index of 7, and has co-authored 18 publications receiving 2226 citations. His previous affiliations include the National University of Singapore.

Papers
Posted Content

Theano: A Python framework for fast computation of mathematical expressions

Rami Al-Rfou, +111 more
TL;DR: The performance of Theano is compared against Torch7 and TensorFlow on several machine learning models, and recently introduced functionalities and improvements are discussed.
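To give a flavour of the workflow the abstract refers to, the sketch below defines a symbolic expression, differentiates it, and compiles it into a callable function using the standard Theano API. It is a minimal illustrative example, not code from the paper.

```python
# Minimal sketch: build a symbolic expression in Theano, take its gradient,
# and compile the graph into a callable function.
import theano
import theano.tensor as T

x = T.dvector('x')            # symbolic double-precision vector
y = T.sum(x ** 2)             # symbolic expression built from x
grad_y = T.grad(y, x)         # symbolic gradient of y with respect to x

# Compile the expression graph into a function that evaluates both outputs
f = theano.function(inputs=[x], outputs=[y, grad_y])

value, gradient = f([1.0, 2.0, 3.0])   # numerical evaluation: 14.0, [2, 4, 6]
```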
Proceedings Article

Pseudo-Marginal Slice Sampling

TL;DR: In this paper, the authors describe a general way to clamp and update the random numbers used in a pseudo-marginal method's unbiased estimator, obtaining more robust Markov chains that often mix more quickly.
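To make the "clamp and update" idea concrete, here is a hedged sketch (not the authors' code) in which the auxiliary Gaussian random numbers driving an unbiased likelihood estimator are updated with an elliptical slice sampling move, one example of the kind of slice-based update the abstract describes. The function `log_lik_est` is a hypothetical interface returning the log of the unbiased estimate computed from those random numbers.

```python
# Sketch: update the auxiliary random numbers u behind an unbiased likelihood
# estimator with an elliptical slice sampling move, keeping theta fixed.
# Assumes u ~ N(0, I) a priori and rng is a numpy.random.Generator.
import numpy as np

def update_aux_randomness(theta, u, log_lik_est, rng):
    """One elliptical slice sampling update of u given theta."""
    nu = rng.standard_normal(u.shape)                       # auxiliary Gaussian draw
    log_y = log_lik_est(theta, u) + np.log(rng.uniform())   # slice height under current u
    angle = rng.uniform(0.0, 2.0 * np.pi)                   # initial proposal angle
    angle_min, angle_max = angle - 2.0 * np.pi, angle
    while True:
        u_prop = u * np.cos(angle) + nu * np.sin(angle)     # point on the ellipse
        if log_lik_est(theta, u_prop) > log_y:
            return u_prop                                   # accepted point on the slice
        # shrink the angle bracket towards the current state and retry
        if angle < 0.0:
            angle_min = angle
        else:
            angle_max = angle
        angle = rng.uniform(angle_min, angle_max)
```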
Journal Article

Asymptotically exact inference in differentiable generative models

TL;DR: A method is presented for performing efficient MCMC inference in differentiable generative models when conditioning on observations of the model output, using a constrained variant of Hamiltonian Monte Carlo that leverages the smooth geometry of the manifold of inputs consistent with the observations to move coherently between them.
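For intuition, the sketch below shows the kind of constraint-manifold machinery such a method relies on: a placeholder generator and its Jacobian define the set of inputs that reproduce the observations exactly, and a Gauss-Newton style projection returns a perturbed point to that set. This is an illustrative sketch under assumed interfaces, not the paper's implementation.

```python
# Sketch: the manifold is the zero set of c(u) = generator(u) - y_obs.
# `generator` and `jacobian` are hypothetical callables supplied by the user.
import numpy as np

def constraint(u, generator, y_obs):
    """c(u) = generator(u) - y_obs; the manifold is {u : c(u) = 0}."""
    return generator(u) - y_obs

def project_onto_manifold(u, generator, jacobian, y_obs, tol=1e-9, max_iter=50):
    """Gauss-Newton projection of u onto {u : generator(u) = y_obs}."""
    for _ in range(max_iter):
        c = constraint(u, generator, y_obs)
        if np.max(np.abs(c)) < tol:
            return u
        J = jacobian(u)                       # d generator / d u, shape (n_obs, n_inputs)
        # minimum-norm correction du solving J du = -c
        du = J.T @ np.linalg.solve(J @ J.T, -c)
        u = u + du
    return u
```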
Posted Content

Asymptotically exact inference in differentiable generative models

TL;DR: In this article, a constrained variant of Hamiltonian Monte Carlo (HMC) is used to integrate a density across the manifold corresponding to the set of inputs consistent with the observed outputs.
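A complementary sketch, under the same assumed interfaces as above, of the kind of density such a method works with on that manifold: the input density corrected by a Gram-matrix term built from the generator Jacobian (a standard co-area-style correction; the exact formulation in the paper may differ).

```python
# Sketch: log density on the manifold, equal to the log input density minus
# half the log determinant of the Jacobian Gram matrix, up to a constant.
import numpy as np

def log_density_on_manifold(u, log_input_density, jacobian):
    """log rho(u) - 0.5 * log det( J(u) J(u)^T ), up to an additive constant."""
    J = jacobian(u)
    sign, logdet = np.linalg.slogdet(J @ J.T)
    return log_input_density(u) - 0.5 * logdet
```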
Posted Content

Continuously tempered Hamiltonian Monte Carlo

TL;DR: In this paper, the authors propose augmenting the Hamiltonian system with an extra continuous temperature-control variable that allows the dynamics to bridge between sampling a complex target distribution and a simpler unimodal base distribution. This improves mixing in multimodal targets and allows the normalisation constant of the target distribution to be estimated.
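To illustrate the bridging idea, the sketch below writes down one possible augmented potential energy in which a smooth function of an unconstrained control variable interpolates between a simple base distribution and the complex target. The sigmoid mapping and the plain geometric-style interpolation are illustrative assumptions rather than the paper's exact formulation.

```python
# Sketch: potential energy of the augmented state (x, s), where beta(s) in (0, 1)
# is a smooth map of the unconstrained temperature-control variable s.
import numpy as np

def augmented_potential(x, s, neg_log_target, neg_log_base):
    """Interpolates between base (beta -> 0) and target (beta -> 1) potentials."""
    beta = 1.0 / (1.0 + np.exp(-s))     # smooth bridge coordinate in (0, 1)
    return beta * neg_log_target(x) + (1.0 - beta) * neg_log_base(x)
```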