Open Access · Posted Content

Sequential Bayesian optimal experimental design via approximate dynamic programming

Xun Huan, +1 more
28 Apr 2016
TLDR
This paper rigorously formulates the general sequential optimal experimental design (sOED) problem as a dynamic program, adopts a Bayesian formulation with an information-theoretic design objective, and develops new numerical approaches for nonlinear design with continuous parameter, design, and observation spaces.
Abstract
The design of multiple experiments is commonly undertaken via suboptimal strategies, such as batch (open-loop) design that omits feedback or greedy (myopic) design that does not account for future effects. This paper introduces new strategies for the optimal design of sequential experiments. First, we rigorously formulate the general sequential optimal experimental design (sOED) problem as a dynamic program. Batch and greedy designs are shown to result from special cases of this formulation. We then focus on sOED for parameter inference, adopting a Bayesian formulation with an information theoretic design objective. To make the problem tractable, we develop new numerical approaches for nonlinear design with continuous parameter, design, and observation spaces. We approximate the optimal policy by using backward induction with regression to construct and refine value function approximations in the dynamic program. The proposed algorithm iteratively generates trajectories via exploration and exploitation to improve approximation accuracy in frequently visited regions of the state space. Numerical results are verified against analytical solutions in a linear-Gaussian setting. Advantages over batch and greedy design are then demonstrated on a nonlinear source inversion problem where we seek an optimal policy for sequential sensing.
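The abstract describes approximating the optimal policy by backward induction with regression-fitted value functions. The Python sketch below illustrates that general pattern under strong simplifying assumptions: a one-dimensional linear-Gaussian belief state summarized by a posterior mean and variance, a fixed design grid, quadratic features, and single-sample Bellman targets. The function names (features, simulate_step, policy) and all numerical choices are illustrative placeholders, not the authors' implementation.

```python
# Minimal sketch of regression-based backward induction (approximate dynamic
# programming) for sequential design. All modeling choices here are assumed
# for illustration only.
import numpy as np

rng = np.random.default_rng(0)

N_STAGES = 3                                # number of experiments in the horizon
N_TRAJ = 500                                # simulated states per backward sweep
DESIGN_GRID = np.linspace(0.0, 1.0, 11)     # candidate designs (assumed scalar)

def features(x):
    """Quadratic features of the belief state x = (posterior mean, posterior variance)."""
    mu, var = x
    return np.array([1.0, mu, var, mu**2, var**2, mu * var])

def simulate_step(x, d):
    """Sample an observation, update the Gaussian belief state, and return
    (next_state, stage_reward). The reward is the 1-D Gaussian information gain."""
    mu, var = x
    noise_var = 0.1 + (d - 0.5) ** 2        # assumed design-dependent noise level
    y = mu + rng.normal(0.0, np.sqrt(var + noise_var))
    gain = var / (var + noise_var)          # Kalman-type update (linear-Gaussian assumption)
    mu_new = mu + gain * (y - mu)
    var_new = (1.0 - gain) * var
    reward = 0.5 * np.log(var / var_new)    # information gain in nats
    return (mu_new, var_new), reward

# Backward induction: fit J_k(x) ~ w_k . phi(x) by regressing Bellman targets
# on features of simulated belief states; J_N is identically zero.
weights = [np.zeros(6) for _ in range(N_STAGES + 1)]
for k in reversed(range(N_STAGES)):
    X, targets = [], []
    for _ in range(N_TRAJ):
        x = (rng.normal(), 1.0)             # exploration: sample a belief state
        best = -np.inf
        for d in DESIGN_GRID:               # one-sample lookahead over the design grid
            x_next, r = simulate_step(x, d)
            best = max(best, r + features(x_next) @ weights[k + 1])
        X.append(features(x))
        targets.append(best)
    weights[k], *_ = np.linalg.lstsq(np.array(X), np.array(targets), rcond=None)

def policy(k, x):
    """Greedy design at stage k under the fitted value-to-go approximation."""
    best_d, best_val = None, -np.inf
    for d in DESIGN_GRID:
        x_next, r = simulate_step(x, d)
        val = r + features(x_next) @ weights[k + 1]
        if val > best_val:
            best_d, best_val = d, val
    return best_d

print("stage-0 design at prior state (0, 1):", policy(0, (0.0, 1.0)))
```

In the paper's setting the stage reward is an expected information gain between consecutive posteriors and the training states come from exploration and exploitation trajectories rather than independent draws; this sketch only conveys the shape of the regression-based backward sweep.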


Citations
Book

Submodular functions and optimization

Satoru Fujishige
TL;DR: In this paper, the Lovász extensions of submodular functions are extended to include nonlinear weight functions and linear weight functions with continuous variables, and a decomposition algorithm is proposed.
Journal ArticleDOI

Optimum experimental designs

W. Näther
01 Dec 1994
Journal ArticleDOI

Replication or exploration? Sequential design for stochastic simulation experiments

TL;DR: The authors investigate the merits of replication and provide methods for optimal design (including replicates), with the goal of globally accurate emulation of noisy computer simulation experiments; they show that replication can be beneficial from both design and computational perspectives in the context of Gaussian process surrogate modeling.
Journal ArticleDOI

Sensor placement for calibration of spatially varying model parameters

TL;DR: An approximation for the posterior distribution is employed within the optimization problem to facilitate the identification of the optimal sensor locations using the simulated annealing algorithm.
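Since that summary names simulated annealing as the optimizer over sensor locations, the following is a minimal, generic simulated-annealing loop for a discrete sensor-placement problem; the utility function, candidate set, and cooling schedule are placeholder assumptions, not the cited paper's method.

```python
# Generic simulated annealing over subsets of candidate sensor locations.
# The design criterion and problem sizes are illustrative placeholders.
import math
import random

random.seed(0)

CANDIDATES = list(range(20))   # indices of candidate sensor locations (assumed)
N_SENSORS = 4                  # number of sensors to place (assumed)

def expected_utility(placement):
    """Placeholder criterion; in practice this would score the (approximate)
    posterior obtained with sensors at `placement`."""
    return -sum((i - 9.5) ** 2 for i in placement)   # toy objective: favor central sites

def neighbor(placement):
    """Swap one selected location for an unselected one."""
    out = list(placement)
    i = random.randrange(len(out))
    out[i] = random.choice([c for c in CANDIDATES if c not in out])
    return tuple(sorted(out))

current = tuple(sorted(random.sample(CANDIDATES, N_SENSORS)))
best, best_val = current, expected_utility(current)
temperature = 1.0
for step in range(2000):
    cand = neighbor(current)
    delta = expected_utility(cand) - expected_utility(current)
    # Always accept improvements; accept worse moves with Boltzmann probability.
    if delta >= 0 or random.random() < math.exp(delta / temperature):
        current = cand
        if expected_utility(current) > best_val:
            best, best_val = current, expected_utility(current)
    temperature *= 0.999           # geometric cooling schedule

print("best placement:", best, "utility:", best_val)
```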
Journal ArticleDOI

Digital Twin Concepts with Uncertainty for Nuclear Power Applications

Brendan Kochunas, +1 more
14 Jul 2021
TL;DR: For nuclear power applications, DT development should rely first on mechanistic model-based methods to leverage the extensive experience and understanding of these systems; model-free techniques can then be adopted to selectively, and correctively, augment limitations in the model-based approaches.
References
Book

Elements of information theory

TL;DR: The authors examine the role of entropy, inequality, and randomness in the design and construction of codes.
Book

Reinforcement Learning: An Introduction

TL;DR: This book provides a clear and simple account of the key ideas and algorithms of reinforcement learning, which ranges from the history of the field's intellectual foundations to the most recent developments and applications.
Book

Markov Decision Processes: Discrete Stochastic Dynamic Programming

TL;DR: Puterman provides a uniquely up-to-date, unified, and rigorous treatment of the theoretical, computational, and applied research on Markov decision process models, focusing primarily on infinite-horizon discrete-time models and models with discrete state spaces, while also examining models with arbitrary state spaces, finite-horizon models, and continuous-time discrete-state models.
Book

Dynamic Programming and Optimal Control

TL;DR: The leading and most up-to-date textbook on the far-ranging algorithmic methodology of dynamic programming, which can be used for optimal control, Markovian decision problems, planning and sequential decision making under uncertainty, and discrete/combinatorial optimization.
Book

Information Theory, Inference and Learning Algorithms

TL;DR: A fun and exciting textbook on the mathematics underpinning the most dynamic areas of modern science and engineering.