Author

Tomohiko Hironaka

Bio: Tomohiko Hironaka is an academic researcher from the University of Tokyo. The author has contributed to research in the topics of Monte Carlo methods and Bayesian probability. The author has an h-index of 4 and has co-authored 6 publications receiving 41 citations.

Papers
Journal ArticleDOI
TL;DR: The expected information gain, an important quality criterion of Bayesian experimental designs, measures how much the information entropy about an uncertain quantity of interest $\theta$ is reduced on average by collecting relevant data, as discussed by the authors.
Abstract: The expected information gain is an important quality criterion of Bayesian experimental designs, which measures how much the information entropy about an uncertain quantity of interest $\theta$ is reduced on average by collecting relevant data $Y$.

22 citations
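
For orientation, the expected information gain referred to in this and the related papers below is the following nested expectation (the standard definition, stated in generic notation rather than quoted from the abstract):

```latex
% Expected information gain: expected reduction in entropy of theta after observing Y,
% written as a nested expectation; the inner expectation defines the evidence p(Y).
\mathrm{EIG}
  \;=\; \mathbb{E}_{Y}\bigl[\,\mathrm{H}[p(\theta)] - \mathrm{H}[p(\theta \mid Y)]\,\bigr]
  \;=\; \mathbb{E}_{\theta, Y}\!\left[\log\frac{p(Y \mid \theta)}{p(Y)}\right],
\qquad p(Y) \;=\; \mathbb{E}_{\theta'}\!\left[p(Y \mid \theta')\right].
```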

Journal ArticleDOI
TL;DR: It is shown, under a set of assumptions on decision and information models, that successive approximation levels are tightly coupled, which directly proves that the proposed MLMC estimator improves the necessary computational cost to optimal $O(\varepsilon^{-2})$.
Abstract: We study Monte Carlo estimation of the expected value of sample information (EVSI), which measures the expected benefit of gaining additional information for decision making under uncertainty. EVSI is defined as a nested expectation in which an outer expectation is taken with respect to one random variable $Y$ and an inner conditional expectation with respect to the other random variable $\theta$.

17 citations

Journal ArticleDOI
TL;DR: An efficient algorithm to estimate the expected information gain by applying a multilevel Monte Carlo (MLMC) method is developed; an antithetic MLMC estimator is introduced, together with a sufficient condition on the data model under which the antithetic property of the MLMC estimator is well exploited, so that the optimal complexity of $O(\varepsilon^{-2})$ is achieved.
Abstract: The expected information gain is an important quality criterion of Bayesian experimental designs, which measures how much the information entropy about uncertain quantity of interest $\theta$ is reduced on average by collecting relevant data $Y$. However, estimating the expected information gain has been considered computationally challenging since it is defined as a nested expectation with an outer expectation with respect to $Y$ and an inner expectation with respect to $\theta$. In fact, the standard, nested Monte Carlo method requires a total computational cost of $O(\varepsilon^{-3})$ to achieve a root-mean-square accuracy of $\varepsilon$. In this paper we develop an efficient algorithm to estimate the expected information gain by applying a multilevel Monte Carlo (MLMC) method. To be precise, we introduce an antithetic MLMC estimator for the expected information gain and provide a sufficient condition on the data model under which the antithetic property of the MLMC estimator is well exploited such that optimal complexity of $O(\varepsilon^{-2})$ is achieved. Furthermore, we discuss how to incorporate importance sampling techniques within the MLMC estimator to avoid arithmetic underflow. Numerical experiments show the considerable computational cost savings compared to the nested Monte Carlo method for a simple test case and a more realistic pharmacokinetic model.

16 citations
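
The contrast between the standard nested Monte Carlo estimator and the antithetic MLMC level coupling described in the abstract above can be sketched as follows. This is a minimal illustration on a toy Gaussian model, not the authors' code; the model, noise level, and level schedule are assumptions.

```python
# Toy model (assumption for illustration): theta ~ N(0, 1), Y | theta ~ N(theta, sigma^2).
import numpy as np
from scipy.special import logsumexp

rng = np.random.default_rng(0)
sigma = 0.5  # observation noise of the toy model

def log_lik(y, theta):
    return -0.5 * np.log(2 * np.pi * sigma**2) - 0.5 * (y - theta) ** 2 / sigma**2

def eig_nested(N, M):
    """Nested MC: RMSE eps needs N ~ eps^-2 outer and M ~ eps^-1 inner samples, O(eps^-3) cost."""
    theta = rng.standard_normal(N)
    y = theta + sigma * rng.standard_normal(N)
    log_ev = logsumexp(log_lik(y[:, None], rng.standard_normal((N, M))), axis=1) - np.log(M)
    return np.mean(log_lik(y, theta) - log_ev)

def mlmc_level_difference(N, level):
    """Antithetic level-l correction: full inner estimate minus the mean of the two halves."""
    M = 2 ** level
    theta = rng.standard_normal(N)
    y = theta + sigma * rng.standard_normal(N)
    ll = log_lik(y[:, None], rng.standard_normal((N, M)))        # inner log-likelihoods
    full = logsumexp(ll, axis=1) - np.log(M)
    if level == 0:
        return np.mean(log_lik(y, theta) - full)
    half_a = logsumexp(ll[:, : M // 2], axis=1) - np.log(M // 2)
    half_b = logsumexp(ll[:, M // 2:], axis=1) - np.log(M // 2)
    return np.mean(0.5 * (half_a + half_b) - full)               # antithetic coupling

# EIG ~= sum of independent level estimates, with fewer outer samples on finer levels.
print(eig_nested(2_000, 100))
print(sum(mlmc_level_difference(20_000 // 2 ** l, l) for l in range(7)))
```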

Posted Content
TL;DR: An unbiased Monte Carlo estimator is introduced for the gradient of the expected information gain with finite expected squared $\ell_2$-norm and finite expected computational cost per sample.
Abstract: In this paper we propose an efficient stochastic optimization algorithm to search for Bayesian experimental designs such that the expected information gain is maximized. The gradient of the expected information gain with respect to experimental design parameters is given by a nested expectation, for which the standard Monte Carlo method using a fixed number of inner samples yields a biased estimator. In this paper, applying the idea of randomized multilevel Monte Carlo (MLMC) methods, we introduce an unbiased Monte Carlo estimator for the gradient of the expected information gain with finite expected squared $\ell_2$-norm and finite expected computational cost per sample. Our unbiased estimator can be combined well with stochastic gradient descent algorithms, which results in our proposal of an optimization algorithm to search for an optimal Bayesian experimental design. Numerical experiments confirm that our proposed algorithm works well not only for a simple test problem but also for a more realistic pharmacokinetic problem.

14 citations
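
The randomized ("single-term") MLMC construction behind the unbiased estimator in the abstract above can be sketched as follows. For brevity the sketch estimates the expected information gain itself rather than its gradient, reuses the level-difference routine from the sketch further up, and the level distribution $2^{-3l/2}$ and the step-size rule are assumptions, not values quoted from the paper.

```python
# Single-term randomized MLMC (illustrative sketch): draw a random level L with
# P(L = l) ~ 2^(-3l/2), evaluate one level-L difference, and divide by P(L = l);
# the result is an unbiased estimate of the full sum over levels.  The same
# construction applied to the gradient of the expected information gain yields
# unbiased gradient samples that can be fed to stochastic gradient ascent.
import numpy as np

rng = np.random.default_rng(1)
probs = 2.0 ** (-1.5 * np.arange(16))
probs /= probs.sum()

def unbiased_sample(delta_at_level):
    """One unbiased sample of sum_l E[Delta_l], given a routine that samples Delta_l."""
    l = rng.choice(len(probs), p=probs)
    return delta_at_level(l) / probs[l]

# e.g. unbiased_sample(lambda l: mlmc_level_difference(1, l)), with the routine sketched above.

def sga(grad_sample, xi0, steps=1_000, lr=1e-2):
    """Stochastic gradient ascent on a design parameter xi using unbiased gradient samples."""
    xi = xi0
    for t in range(steps):
        xi = xi + lr / np.sqrt(t + 1.0) * grad_sample(xi)
    return xi
```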

Journal ArticleDOI
TL;DR: In this paper, a multilevel Monte Carlo (MLMC) estimator is proposed to estimate the expected value of sample information (EVSI), which measures the expected benefit of gaining additional information for decision making under uncertainty.
Abstract: We study Monte Carlo estimation of the expected value of sample information (EVSI) which measures the expected benefit of gaining additional information for decision making under uncertainty. EVSI is defined as a nested expectation in which an outer expectation is taken with respect to one random variable $Y$ and an inner conditional expectation with respect to the other random variable $\theta$. Although the nested (Markov chain) Monte Carlo estimator has been often used in this context, a root-mean-square accuracy of $\varepsilon$ is achieved notoriously at a cost of $O(\varepsilon^{-2-1/\alpha})$, where $\alpha$ denotes the order of convergence of the bias and is typically between $1/2$ and $1$. In this article we propose a novel efficient Monte Carlo estimator of EVSI by applying a multilevel Monte Carlo (MLMC) method. Instead of fixing the number of inner samples for $\theta$ as done in the nested Monte Carlo estimator, we consider a geometric progression on the number of inner samples, which yields a hierarchy of estimators on the inner conditional expectation with increasing approximation levels. Based on an elementary telescoping sum, our MLMC estimator is given by a sum of the Monte Carlo estimates of the differences between successive approximation levels on the inner conditional expectation. We show, under a set of assumptions on decision and information models, that successive approximation levels are tightly coupled, which directly proves that our MLMC estimator improves the necessary computational cost to optimal $O(\varepsilon^{-2})$. Numerical experiments confirm the considerable computational savings as compared to the nested Monte Carlo estimator.

3 citations
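
A rough sketch of the telescoping-sum construction described above, on a toy conjugate Gaussian decision problem; the model, the two decisions, and the half-sample coupling are illustrative assumptions, not details taken from the abstract.

```python
# Toy model: theta ~ N(0, 1), Y | theta ~ N(theta, sigma^2), two decisions with net
# benefits f_0(theta) = 0 and f_1(theta) = theta; the posterior is available in closed form.
import numpy as np

rng = np.random.default_rng(2)
sigma = 1.0

def posterior(y):
    """Posterior mean and std of theta given one observation y (conjugate Gaussian)."""
    var = 1.0 / (1.0 + 1.0 / sigma**2)
    return var * y / sigma**2, np.sqrt(var)

def evsi_level_difference(N, level):
    """MC estimate of E[P_l - P_{l-1}], coupling level l-1 as the mean of the two halves."""
    M = 2 ** level
    diffs = np.empty(N)
    for n in range(N):
        y = np.sqrt(1.0 + sigma**2) * rng.standard_normal()       # prior predictive draw
        mu, sd = posterior(y)
        theta = mu + sd * rng.standard_normal(M)                   # inner posterior samples
        p_l = max(0.0, theta.mean())                               # value of the best decision
        if level == 0:
            diffs[n] = p_l
        else:
            p_a = max(0.0, theta[: M // 2].mean())
            p_b = max(0.0, theta[M // 2:].mean())
            diffs[n] = p_l - 0.5 * (p_a + p_b)
    return diffs.mean()

# EVSI ~= sum_l E[Delta_l] - max_d E[f_d(theta)]  (here max(0, E[theta]) = 0).
print(sum(evsi_level_difference(20_000 // 2 ** l, l) for l in range(6)))
```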


Cited by
Journal Article
TL;DR: A simulation-based approach that can be used to solve optimal design problems in which one is interested in finding a large number of (near) optimal design points for a small number of design variables is presented.
Abstract: The use of Bayesian methodologies for solving optimal experimental design problems has increased. Many of these methods have been found to be computationally intensive for design problems that require a large number of design points. A simulation-based approach that can be used to solve optimal design problems in which one is interested in finding a large number of (near) optimal design points for a small number of design variables is presented. The approach involves the use of lower dimensional parameterisations that consist of a few design variables, which generate multiple design points. Using this approach, one simply has to search over a few design variables, rather than searching over a large number of optimal design points, thus providing substantial computational savings. The methodologies are demonstrated on four applications, including the selection of sampling times for pharmacokinetic and heat transfer studies, and involve nonlinear models. Several Bayesian design criteria are also compared and contrasted, as well as several different lower dimensional parameterisation schemes for generating the many design points.

47 citations
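
As an illustration of the lower-dimensional parameterisation idea in the abstract above: instead of optimizing, say, 10 individual sampling times, one can generate them from a two-parameter schedule and search only over those two design variables. The geometric schedule and the placeholder utility below are assumptions, not taken from the paper.

```python
import numpy as np

def sampling_times(t0, r, k=10):
    """Generate k sampling times from a 2-parameter geometric schedule."""
    return t0 * r ** np.arange(k)

def grid_search(utility, t0_grid, r_grid):
    """Brute-force search over the two design variables instead of the 10 design points."""
    best = None
    for t0 in t0_grid:
        for r in r_grid:
            u = utility(sampling_times(t0, r))
            if best is None or u > best[0]:
                best = (u, t0, r)
    return best

# Placeholder utility that merely rewards the spread of the schedule, for demonstration only.
best = grid_search(lambda t: np.log(t[-1] - t[0] + 1.0),
                   t0_grid=np.linspace(0.1, 1.0, 10),
                   r_grid=np.linspace(1.1, 2.0, 10))
print(best)
```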

01 Jan 2015
TL;DR: In this paper, the authors explore the use of Laplace approximations in the design setting to overcome the drawback that importance sampling will tend to break down if there is a reasonable number of experimental observations and/or the model parameter is high dimensional.
Abstract: Utility functions in Bayesian experimental design are usually based on the posterior distribution. When the posterior is found by simulation, it must be sampled from for each future data set drawn from the prior predictive distribution. Many thousands of posterior distributions are often required. A popular technique in the Bayesian experimental design literature to rapidly obtain samples from the posterior is importance sampling, using the prior as the importance distribution. However, importance sampling will tend to break down if there is a reasonable number of experimental observations and/or the model parameter is high dimensional. In this paper we explore the use of Laplace approximations in the design setting to overcome this drawback. Furthermore, we consider using the Laplace approximation to form the importance distribution to obtain a more efficient importance distribution than the prior. The methodology is motivated by a pharmacokinetic study which investigates the effect of extracorporeal membrane oxygenation on the pharmacokinetics of antibiotics in sheep. The design problem is to find 10 near optimal plasma sampling times which produce precise estimates of pharmacokinetic model parameters/measures of interest. We consider several different utility functions of interest in these studies, which involve the posterior distribution of parameter functions.

34 citations
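
A minimal sketch of using a Laplace approximation as the importance distribution for posterior sampling, as discussed in the abstract above. The helper names, the BFGS-based Hessian, and the toy model in the usage example are assumptions, not the paper's implementation.

```python
import numpy as np
from scipy import optimize, stats

def laplace_importance_sampling(log_prior, log_lik, y, theta0, n_samples=2000, seed=0):
    """Samples and normalized importance weights targeting p(theta | y)."""
    rng = np.random.default_rng(seed)
    neg_log_post = lambda th: -(log_prior(th) + log_lik(y, th))
    res = optimize.minimize(neg_log_post, np.atleast_1d(theta0))   # MAP estimate (BFGS)
    cov = np.atleast_2d(res.hess_inv)                              # Laplace covariance
    q = stats.multivariate_normal(mean=res.x, cov=cov)             # importance distribution
    theta = np.atleast_2d(q.rvs(size=n_samples, random_state=rng)).reshape(n_samples, -1)
    log_w = np.array([log_prior(t) + log_lik(y, t) for t in theta]) - q.logpdf(theta)
    w = np.exp(log_w - log_w.max())
    return theta, w / w.sum()

# Toy usage: theta ~ N(0, 1) prior, y | theta ~ N(theta, 0.3^2), observed y = 1.2.
theta, w = laplace_importance_sampling(
    log_prior=lambda t: float(-0.5 * t @ t),
    log_lik=lambda y, t: float(-0.5 * ((y - t[0]) / 0.3) ** 2),
    y=1.2, theta0=0.0)
print(np.sum(w * theta[:, 0]))   # posterior mean estimate of any g(theta): sum_i w_i g(theta_i)
```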

Journal ArticleDOI
TL;DR: In this article, a Multilevel Double Loop Monte Carlo (MLDLMC) method is proposed to estimate the expected information gain for Bayesian inference of the fiber orientation in composite laminate materials from an electrical impedance tomography experiment.
Abstract: An optimal experimental set-up maximizes the value of data for statistical inferences and predictions. The efficiency of strategies for finding optimal experimental set-ups is particularly important for experiments that are time-consuming or expensive to perform. For instance, in the situation when the experiments are modeled by Partial Differential Equations (PDEs), multilevel methods have been proven to dramatically reduce the computational complexity of their single-level counterparts when estimating expected values. For a setting where PDEs can model experiments, we propose two multilevel methods for estimating a popular design criterion known as the expected information gain in simulation-based Bayesian optimal experimental design. The expected information gain criterion is of a nested expectation form, and only a handful of multilevel methods have been proposed for problems of such form. We propose a Multilevel Double Loop Monte Carlo (MLDLMC), which is a multilevel strategy with Double Loop Monte Carlo (DLMC), and a Multilevel Double Loop Stochastic Collocation (MLDLSC), which performs a high-dimensional integration by deterministic quadrature on sparse grids. For both methods, the Laplace approximation is used for importance sampling that significantly reduces the computational work of estimating inner expectations. The optimal values of the method parameters are determined by minimizing the average computational work, subject to satisfying the desired error tolerance. The computational efficiencies of the methods are demonstrated by estimating the expected information gain for Bayesian inference of the fiber orientation in composite laminate materials from an electrical impedance tomography experiment. MLDLSC performs better than MLDLMC when the regularity of the quantity of interest, with respect to the additive noise and the unknown parameters, can be exploited.

24 citations
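
In generic notation (not quoted from the paper), the double-loop estimator with a Laplace-based importance density $q(\theta \mid Y)$ for the inner evidence takes the following form; the Laplace approximation concentrates the inner samples near the posterior mode, which is what reduces the inner-loop work.

```latex
% Double-loop Monte Carlo estimator of the expected information gain, with the inner
% evidence p(Y_n) estimated by importance sampling from a Laplace approximation q(. | Y_n):
\widehat{\mathrm{EIG}}
  = \frac{1}{N}\sum_{n=1}^{N}\left[
      \log p(Y_n \mid \theta_n)
      - \log\!\left(\frac{1}{M}\sum_{m=1}^{M}
          \frac{p(Y_n \mid \theta_{n,m})\, p(\theta_{n,m})}{q(\theta_{n,m} \mid Y_n)}\right)
    \right],
\qquad \theta_{n,m} \sim q(\cdot \mid Y_n).
```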

Journal ArticleDOI
TL;DR: It is shown, under a set of assumptions on decision and information models, that successive approximation levels are tightly coupled, which directly proves that the proposed MLMC estimator improves the necessary computational cost to optimal $O(\varepsilon^{-2})$.
Abstract: We study Monte Carlo estimation of the expected value of sample information (EVSI), which measures the expected benefit of gaining additional information for decision making under uncertainty. EVSI is defined as a nested expectation in which an outer expectation is taken with respect to one random variable $Y$ and an inner conditional expectation with respect to the other random variable $\theta$.

17 citations