Open Access · Posted Content

A Unified Stochastic Gradient Approach to Designing Bayesian-Optimal Experiments

TLDR
This work introduces a fully stochastic gradient-based approach to Bayesian optimal experimental design (BOED) that utilizes variational lower bounds on the expected information gain (EIG) of an experiment, optimized simultaneously with respect to both the variational and design parameters.
Abstract
We introduce a fully stochastic gradient-based approach to Bayesian optimal experimental design (BOED). Our approach utilizes variational lower bounds on the expected information gain (EIG) of an experiment that can be simultaneously optimized with respect to both the variational and design parameters. This allows the design process to be carried out through a single unified stochastic gradient ascent procedure, in contrast to existing approaches that typically construct a pointwise EIG estimator, before passing this estimator to a separate optimizer. We provide a number of different variational objectives including the novel adaptive contrastive estimation (ACE) bound. Finally, we show that our gradient-based approaches are able to provide effective design optimization in substantially higher dimensional settings than existing approaches.
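The core idea, running a single stochastic gradient ascent loop over the design while a contrastive lower bound on the EIG is estimated by Monte Carlo, can be illustrated with a toy sketch. The snippet below is not the paper's ACE bound but the closely related prior contrastive estimation (PCE) style bound, applied to a hypothetical one-dimensional linear-Gaussian model; the names `simulate` and `pce_bound` and the finite-difference gradient are illustrative assumptions (the paper differentiates the bound directly via reparameterisation).

```python
import math
import random

random.seed(0)

def simulate(theta, d):
    # Hypothetical 1-D model: observe y ~ N(d * theta, 1), with prior theta ~ N(0, 1).
    return d * theta + random.gauss(0.0, 1.0)

def log_lik(y, theta, d):
    return -0.5 * (y - d * theta) ** 2 - 0.5 * math.log(2 * math.pi)

def pce_bound(d, n_outer=200, n_contrast=10):
    # Monte Carlo estimate of a PCE-style lower bound on the EIG:
    #   E[ log p(y | theta_0, d) - log mean_l p(y | theta_l, d) ],
    # where the contrast set includes theta_0 itself, which makes this a
    # valid lower bound that saturates at log(n_contrast + 1).
    total = 0.0
    for _ in range(n_outer):
        theta0 = random.gauss(0.0, 1.0)
        y = simulate(theta0, d)
        contrasts = [theta0] + [random.gauss(0.0, 1.0) for _ in range(n_contrast)]
        log_terms = [log_lik(y, t, d) for t in contrasts]
        m = max(log_terms)
        log_mean = m + math.log(sum(math.exp(t - m) for t in log_terms)) \
                   - math.log(len(contrasts))
        total += log_lik(y, theta0, d) - log_mean
    return total / n_outer

# A single stochastic-gradient-ascent loop over the design d.  For this
# stdlib-only sketch the gradient is a common-random-number finite
# difference; the paper instead takes exact gradients of the bound.
d, lr, eps = 0.1, 0.5, 1e-2
for _ in range(50):
    seed = random.randrange(10 ** 9)
    random.seed(seed)
    up = pce_bound(d + eps)
    random.seed(seed)
    down = pce_bound(d - eps)
    d += lr * (up - down) / (2 * eps)
# EIG grows with |d| in this model, so the loop pushes d away from 0.
```

For this toy model the true EIG is 0.5 log(1 + d^2), so larger |d| is always better; the contrastive bound itself caps at log 11 when 10 contrastive samples are used.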


Citations
Proceedings Article

Interventions, Where and How? Experimental Design for Causal Models at Scale

TL;DR: This work incorporates recent advances in Bayesian causal discovery into the Bayesian optimal experimental design framework, allowing for active causal discovery of large, nonlinear SCMs while selecting both the intervention target and its value.
Proceedings Article

Optimizing Sequential Experimental Design with Deep Reinforcement Learning

TL;DR: The problem of optimizing sequential experimental design policies is reduced to solving a Markov decision process (MDP) with modern deep reinforcement learning techniques; the resulting method exhibits state-of-the-art performance on both continuous and discrete design spaces, even when the probabilistic model is a black box.
Journal Article

GFlowNets for AI-Driven Scientific Discovery

TL;DR: This article argues that the most pressing problems for humanity, such as the climate crisis and the threat of global pandemics, require accelerating the pace of scientific discovery beyond what traditional methods allow, and surveys the role of GFlowNets in AI-driven scientific discovery.
Journal Article

Modern Bayesian Experimental Design

TL;DR: This article outlines how recent advances have transformed our ability to overcome computational challenges and thus utilize BED effectively, before discussing some key areas for future development in the field.
Journal Article

Statistical applications of contrastive learning

TL;DR: This article shows how contrastive learning can be used to derive methods for diverse statistical problems, namely parameter estimation for energy-based models, Bayesian inference for simulator-based models, and experimental design.
References
Posted Content

Adam: A Method for Stochastic Optimization

TL;DR: This article introduces Adam, a method for first-order gradient-based optimization of stochastic objective functions, based on adaptive estimates of lower-order moments.
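The "adaptive estimates of lower-order moments" in the summary are exponential moving averages of the gradient and its square, with bias correction for their zero initialisation. A minimal single-parameter sketch follows; the default hyperparameters match the paper's recommendations, while the quadratic objective, the raised learning rate, and the step count are illustrative choices:

```python
import math

def adam_step(theta, grad, m, v, t, lr=0.001, b1=0.9, b2=0.999, eps=1e-8):
    # Exponential moving averages of the first and second moments.
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad * grad
    # Bias correction compensates for initialising m and v at zero.
    m_hat = m / (1 - b1 ** t)
    v_hat = v / (1 - b2 ** t)
    theta = theta - lr * m_hat / (math.sqrt(v_hat) + eps)
    return theta, m, v

# Minimise f(theta) = (theta - 3)^2; its gradient is 2 * (theta - 3).
theta, m, v = 0.0, 0.0, 0.0
for t in range(1, 501):
    theta, m, v = adam_step(theta, 2.0 * (theta - 3.0), m, v, t, lr=0.1)
```

Because the update divides by the root of the second-moment estimate, the effective step size is bounded by roughly `lr` regardless of the gradient's scale.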
Journal Article

A Stochastic Approximation Method

TL;DR: In this article, a method for making successive experiments at levels x1, x2, ··· in such a way that xn will tend to θ in probability is presented.
Proceedings Article

Practical Bayesian Optimization of Machine Learning Algorithms

TL;DR: This work describes new algorithms that account for the variable cost of learning-algorithm experiments and can leverage multiple cores for parallel experimentation, showing that they improve on previous automatic procedures and can reach or surpass human expert-level optimization for many algorithms.
Book Chapter

Large-Scale Machine Learning with Stochastic Gradient Descent

Léon Bottou
TL;DR: A more precise analysis uncovers qualitatively different tradeoffs for the case of small-scale and large-scale learning problems.
Posted Content

Representation Learning with Contrastive Predictive Coding

TL;DR: This work proposes a universal unsupervised learning approach to extract useful representations from high-dimensional data, which it calls Contrastive Predictive Coding, and demonstrates that the approach is able to learn useful representations achieving strong performance on four distinct domains: speech, images, text and reinforcement learning in 3D environments.
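The contrastive objective underlying CPC is the InfoNCE loss: a classifier must identify the one positive sample among a set of negatives, and log N minus the resulting cross-entropy lower-bounds the mutual information between the representation and the target. A minimal score-level sketch, with an illustrative function name and scores:

```python
import math

def info_nce_loss(pos_score, neg_scores):
    # Cross-entropy of picking the positive out of N candidate scores;
    # log(N) minus this loss lower-bounds the mutual information.
    scores = [pos_score] + list(neg_scores)
    m = max(scores)  # subtract the max for numerical stability
    log_z = m + math.log(sum(math.exp(s - m) for s in scores))
    return log_z - pos_score

loss = info_nce_loss(3.0, [0.0, 0.5, -1.0])
```

With all scores equal the loss is exactly log N; raising the positive score relative to the negatives drives it toward zero, which is what forces the learned representations to be predictive.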