
Matthew D. Hoffman

Researcher at Google

Publications: 117
Citations: 18,593

Matthew D. Hoffman is an academic researcher at Google. His research focuses on topics including inference and Markov chain Monte Carlo. He has an h-index of 38 and has co-authored 112 publications receiving 14,724 citations. Previous affiliations of Matthew D. Hoffman include Adobe Systems and Princeton University.

Papers
Journal Article

Stan: A Probabilistic Programming Language

TL;DR: Stan is a probabilistic programming language for specifying statistical models: a Stan program imperatively defines a log probability function over parameters conditioned on specified data and constants, which can also be used in alternative algorithms such as variational Bayes, expectation propagation, and marginal inference using approximate integration.

Stan: A Probabilistic Programming Language.

TL;DR: Stan is a probabilistic programming language for specifying statistical models that provides full Bayesian inference for continuous-variable models through Markov chain Monte Carlo methods such as the No-U-Turn sampler and an adaptive form of Hamiltonian Monte Carlo sampling.
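The core idea above, that a model is specified as a log probability function over parameters given data, and that an MCMC method then draws samples from the resulting posterior, can be sketched in plain Python. This is not Stan itself, and it uses simple random-walk Metropolis rather than the No-U-Turn sampler; the model (a Gaussian mean with a Gaussian prior) and all parameter values are illustrative assumptions:

```python
import math
import random

def log_posterior(mu, data, prior_sd=10.0, noise_sd=1.0):
    """Unnormalized log probability of mu given data: an N(0, prior_sd)
    prior on mu plus N(x | mu, noise_sd) likelihood terms."""
    lp = -0.5 * (mu / prior_sd) ** 2               # Gaussian prior on mu
    for x in data:
        lp += -0.5 * ((x - mu) / noise_sd) ** 2    # Gaussian likelihood
    return lp

def metropolis(data, n_samples=5000, step=0.5, seed=0):
    """Random-walk Metropolis sampling from the posterior over mu."""
    rng = random.Random(seed)
    mu, lp = 0.0, log_posterior(0.0, data)
    samples = []
    for _ in range(n_samples):
        prop = mu + rng.gauss(0.0, step)           # symmetric proposal
        lp_prop = log_posterior(prop, data)
        if math.log(rng.random()) < lp_prop - lp:  # accept/reject
            mu, lp = prop, lp_prop
        samples.append(mu)
    return samples

data = [1.8, 2.1, 2.4, 1.9, 2.2]
samples = metropolis(data)
post_mean = sum(samples[1000:]) / len(samples[1000:])  # drop burn-in
```

With a weak prior, the posterior mean should land near the sample mean of the data; the same log-density interface is what lets one sampler serve many models.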
Journal Article

Stochastic variational inference

TL;DR: Stochastic variational inference lets us apply complex Bayesian models to massive data sets, and it is shown that the Bayesian nonparametric topic model outperforms its parametric counterpart.
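The key mechanism in stochastic variational inference is a noisy natural-gradient step: form an estimate of the optimal global variational parameters from a minibatch, then blend it in with a decaying step size. A minimal sketch for a toy conjugate model (the mean mu of N(mu, 1) data under an N(0, 1) prior, where the exact posterior is known); the model choice and all hyperparameters are assumptions for illustration:

```python
import random

def svi_gaussian_mean(data, batch_size=10, n_steps=500,
                      tau0=1.0, kappa=0.7, seed=0):
    """Stochastic variational inference for the mean mu of N(mu, 1) data
    with a conjugate N(0, 1) prior. The Gaussian variational posterior is
    tracked via natural-parameter-like quantities: eta1 (precision-weighted
    mean) and eta2 (precision)."""
    rng = random.Random(seed)
    N = len(data)
    eta1, eta2 = 0.0, 1.0   # initialize at the prior: mean 0, precision 1
    for t in range(n_steps):
        batch = [rng.choice(data) for _ in range(batch_size)]
        # noisy estimate of the full-data optimum, scaling the minibatch
        # sufficient statistics up by N / batch_size
        eta1_hat = 0.0 + (N / batch_size) * sum(batch)
        eta2_hat = 1.0 + N
        rho = (tau0 + t) ** (-kappa)      # Robbins-Monro step size
        eta1 = (1 - rho) * eta1 + rho * eta1_hat
        eta2 = (1 - rho) * eta2 + rho * eta2_hat
    return eta1 / eta2, eta2              # posterior mean and precision

rng = random.Random(1)
data = [rng.gauss(3.0, 1.0) for _ in range(200)]
mean, prec = svi_gaussian_mean(data)
```

Because the model is conjugate, the exact posterior mean is sum(data) / (N + 1), so the stochastic estimate can be checked against it; the same update pattern is what scales to massive data sets, where each step touches only a minibatch.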
Proceedings Article

Online Learning for Latent Dirichlet Allocation

TL;DR: An online variational Bayes (VB) algorithm for Latent Dirichlet Allocation (LDA) is developed, based on online stochastic optimization with a natural gradient step, and is shown to converge to a local optimum of the VB objective function.
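Each step of online VB for LDA runs a per-document variational E-step: iterate between word-level topic responsibilities phi and the document's Dirichlet parameter gamma. A simplified sketch of that inner loop; as a stated simplification, it replaces the exp(digamma) expectation weights of the real algorithm with plain normalized gamma, and the vocabulary, topics, and alpha below are made-up illustrative values:

```python
def lda_e_step(doc_words, beta, alpha=0.1, n_iters=50):
    """Simplified variational E-step for one document under LDA.
    doc_words: list of word ids; beta[k][w]: topic-word probabilities.
    Returns (gamma, phi): gamma[k] is the variational Dirichlet parameter
    for the document's topic proportions, phi[n][k] the responsibility of
    topic k for the n-th word. Point-estimate weights gamma_k / sum(gamma)
    stand in for the exp(digamma(...)) terms of the full algorithm."""
    K = len(beta)
    gamma = [alpha + len(doc_words) / K] * K      # uniform initialization
    phi = [[1.0 / K] * K for _ in doc_words]
    for _ in range(n_iters):
        g_sum = sum(gamma)
        theta = [g / g_sum for g in gamma]        # current topic weights
        for n, w in enumerate(doc_words):
            weights = [theta[k] * beta[k][w] for k in range(K)]
            z = sum(weights)
            phi[n] = [wt / z for wt in weights]   # normalize over topics
        gamma = [alpha + sum(phi[n][k] for n in range(len(doc_words)))
                 for k in range(K)]
    return gamma, phi

beta = [[0.5, 0.4, 0.05, 0.05],   # topic 0 favors words 0 and 1
        [0.05, 0.05, 0.4, 0.5]]   # topic 1 favors words 2 and 3
gamma, phi = lda_e_step([0, 1, 0, 1, 2], beta)
```

In the online algorithm, the gamma and phi from each minibatch of documents feed a natural-gradient update of the global topic parameters with a decaying step size, just as in stochastic variational inference.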
Posted Content

The No-U-Turn Sampler: Adaptively Setting Path Lengths in Hamiltonian Monte Carlo

TL;DR: The No-U-Turn Sampler (NUTS) is an extension of HMC that eliminates the need to set the number of leapfrog steps L. NUTS uses a recursive algorithm to build a set of likely candidate points that spans a wide swath of the target distribution, stopping automatically when it starts to double back and retrace its steps.
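The "double back" test underlying NUTS can be illustrated directly: simulate Hamiltonian dynamics with the leapfrog integrator and stop when the trajectory starts moving back toward its starting point, i.e. when (theta - theta0) . r < 0. A minimal sketch for a standard 2-D Gaussian target (this shows only the stopping criterion, not NUTS's recursive doubling or its slice-sampling machinery):

```python
def grad_neg_log_density(theta):
    """Gradient of the negative log density of a standard 2-D Gaussian,
    i.e. the gradient of 0.5 * theta . theta, which is just theta."""
    return list(theta)

def leapfrog(theta, r, eps):
    """One leapfrog step for position theta and momentum r."""
    r = [ri - 0.5 * eps * g
         for ri, g in zip(r, grad_neg_log_density(theta))]
    theta = [ti + eps * ri for ti, ri in zip(theta, r)]
    r = [ri - 0.5 * eps * g
         for ri, g in zip(r, grad_neg_log_density(theta))]
    return theta, r

def steps_until_u_turn(theta0, r0, eps=0.1, max_steps=1000):
    """Run leapfrog until the U-turn criterion fires: the trajectory
    begins to retrace its steps once (theta - theta0) . r < 0."""
    theta, r = theta0[:], r0[:]
    for step in range(1, max_steps + 1):
        theta, r = leapfrog(theta, r, eps)
        d = [t - t0 for t, t0 in zip(theta, theta0)]
        if sum(di * ri for di, ri in zip(d, r)) < 0:
            return step
    return max_steps
```

For this Gaussian the dynamics are a harmonic oscillator, so starting from theta0 = [1, 0] with momentum r0 = [0, 1] the criterion should fire roughly half an orbit in, near pi / eps steps, with no hand-tuned L.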