SIAM/ASA J. Uncertainty Quantification, Vol. 8, No. 3, pp. 1236–1259
© 2020 Society for Industrial and Applied Mathematics and American Statistical Association

Multilevel Monte Carlo Estimation of the Expected Value of Sample Information

Tomohiko Hironaka, Michael B. Giles, Takashi Goda, and Howard Thom

Received by the editors September 3, 2019; accepted for publication (in revised form) July 15, 2020; published electronically September 30, 2020. https://doi.org/10.1137/19M1284981
Funding: The work of the fourth author was supported by Medical Research Council grant MR/S036709/1.
School of Engineering, University of Tokyo, Tokyo, Japan (hironaka-tomohiko@g.ecc.u-tokyo.ac.jp, goda@frcer.t.u-tokyo.ac.jp).
Mathematical Institute, University of Oxford, Oxford, UK (mike.giles@maths.ox.ac.uk).
Bristol Medical School, University of Bristol, Bristol, UK (howard.thom@bristol.ac.uk).
Abstract. We study Monte Carlo estimation of the expected value of sample information (EVSI), which measures the expected benefit of gaining additional information for decision making under uncertainty. EVSI is defined as a nested expectation in which an outer expectation is taken with respect to one random variable Y and an inner conditional expectation with respect to the other random variable θ. Although the nested (Markov chain) Monte Carlo estimator has been often used in this context, a root-mean-square accuracy of ε is achieved notoriously at a cost of O(ε^{-2-1/α}), where α denotes the order of convergence of the bias and is typically between 1/2 and 1. In this article we propose a novel efficient Monte Carlo estimator of EVSI by applying a multilevel Monte Carlo (MLMC) method. Instead of fixing the number of inner samples for θ as done in the nested Monte Carlo estimator, we consider a geometric progression on the number of inner samples, which yields a hierarchy of estimators on the inner conditional expectation with increasing approximation levels. Based on an elementary telescoping sum, our MLMC estimator is given by a sum of the Monte Carlo estimates of the differences between successive approximation levels on the inner conditional expectation. We show, under a set of assumptions on decision and information models, that successive approximation levels are tightly coupled, which directly proves that our MLMC estimator improves the necessary computational cost to the optimal O(ε^{-2}). Numerical experiments confirm the considerable computational savings as compared to the nested Monte Carlo estimator.

Key words. expected value of sample information, multilevel Monte Carlo, nested expectations, decision-making under uncertainty

AMS subject classifications. 65C05, 62P10, 90B50

DOI. 10.1137/19M1284981
1. Introduction. Motivated by applications to medical decision making under uncertainty [27], we study Monte Carlo estimation of the expected value of sample information (EVSI). Let θ be a vector of random variables representing the uncertainty in the effectiveness of different medical treatments. Let D be a finite set of possible medical treatments, and for each treatment d ∈ D, f_d denotes a function of θ representing some measure of the patient outcome with "the larger the better," where quality-adjusted life years (QALY) is typically employed in the context of medical decision making [1, 2, 25, 14]. Without any knowledge about θ, the best treatment is the one which maximizes the expectation of f_d, giving the average outcome
    max_{d∈D} E_θ[f_d(θ)],    (1.1)
where E_θ[·] denotes the expectation taken with respect to the prior probability density function of θ. On the other hand, if perfect information on θ is available, the best treatment after knowing the value of θ is simply the one which maximizes f_d(θ), so that, on average, the outcome will be

    E_θ[ max_{d∈D} f_d(θ) ].
The difference between these two values is called the expected value of perfect information (EVPI):

    EVPI := E_θ[ max_{d∈D} f_d(θ) ] − max_{d∈D} E_θ[f_d(θ)].
However, it will be rare that we have access to perfect information on θ. In practice, what we
obtain, for instance, through carrying out some new medical research is either partial perfect
information or sample information on θ.
Partial perfect information is nothing but perfect information on only a subset of random variables θ_1 for a partition θ = (θ_1, θ_2), where θ_1 and θ_2 are assumed independent. After knowing the value of θ_1, the best treatment is the one which maximizes the partial expectation E_{θ_2}[f_d(θ_1, θ_2)]. Therefore, the average outcome with partial perfect information on θ_1 will be

    E_{θ_1}[ max_{d∈D} E_{θ_2}[f_d(θ_1, θ_2)] ],

and the increment from (1.1) is called the expected value of partial perfect information (EVPPI):

    EVPPI := E_{θ_1}[ max_{d∈D} E_{θ_2}[f_d(θ_1, θ_2)] ] − max_{d∈D} E_θ[f_d(θ)].
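For concreteness (and to set up the later contrast with EVSI), the following sketch, which is not taken from the paper, shows how EVPPI can be estimated by a plain nested Monte Carlo scheme when θ = (θ_1, θ_2) are assumed to be independent standard normals and the outcome functions f_d are user-supplied placeholders. Because θ_1 and θ_2 are independent, inner samples of θ_2 are drawn directly from their own distribution.

```python
import numpy as np

rng = np.random.default_rng(2)

def nested_mc_evppi(f_list, n_outer=2_000, n_inner=2_000):
    """Nested Monte Carlo sketch of EVPPI for a toy partition theta = (theta1, theta2)
    of independent standard normals; f_list holds hypothetical outcome functions
    f_d(theta1, theta2)."""
    theta1 = rng.normal(size=n_outer)
    theta2 = rng.normal(size=(n_outer, n_inner))
    # Inner conditional expectations E_{theta2}[f_d(theta1, theta2)], one value per outer sample.
    inner_means = np.stack([f(theta1[:, None], theta2).mean(axis=1) for f in f_list])  # (|D|, n_outer)
    first_term = inner_means.max(axis=0).mean()        # E_{theta1}[ max_d E_{theta2}[f_d] ]
    # Second term: max_d E_theta[f_d(theta)], estimated with fresh prior samples.
    t1 = rng.normal(size=100_000)
    t2 = rng.normal(size=100_000)
    second_term = max(float(f(t1, t2).mean()) for f in f_list)
    return first_term - second_term
```

For instance, f_list = [lambda t1, t2: np.zeros_like(t1), lambda t1, t2: t1 + 0.5 * t2 - 0.1] would represent two hypothetical treatments.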
Sample information on θ, which is of our interest in this article, is a single realization drawn from some probability distribution. To be more precise, we consider that the information Y is stochastically generated according to the forward information model

    Y = h(θ) + ε,    (1.2)

where h is a known deterministic function of θ, possibly with multiple outputs, and ε is a zero-mean random variable with density ρ. Note that h and ε are called the observation operator and the observation noise, respectively [26, section 2]. It is widely known that Bayes' theorem provides an update of the probability density of θ after observing Y:

    π_Y(θ) = ρ(Y | θ) π_0(θ) / E_θ[ρ(Y | θ)],    (1.3)

where π_0(θ) denotes the prior probability density of θ, and ρ(Y | θ) denotes the conditional probability density of Y given θ. Here ρ(Y | θ) is also called the likelihood of the information Y, and it follows from the model (1.2) that ρ(Y | θ) := ρ(Y − h(θ)).
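As a minimal illustration of the forward model (1.2) and the likelihood ρ(Y | θ) = ρ(Y − h(θ)), the sketch below assumes a scalar θ with a standard normal prior, an identity observation operator h, and Gaussian observation noise; all of these choices are placeholders rather than anything prescribed by the paper.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
NOISE_SD = 0.5  # standard deviation of the observation noise (placeholder value)

def h(theta):
    """Placeholder observation operator; here the study observes theta directly."""
    return theta

def sample_prior(n):
    """n i.i.d. draws of theta from a placeholder standard normal prior pi_0."""
    return rng.normal(size=n)

def sample_information(theta):
    """Generate Y according to the forward model (1.2): Y = h(theta) + eps."""
    return h(theta) + rng.normal(scale=NOISE_SD, size=np.shape(theta))

def likelihood(y, theta):
    """rho(Y | theta) = rho(Y - h(theta)) for zero-mean Gaussian noise with sd NOISE_SD."""
    return norm.pdf(y - h(theta), scale=NOISE_SD)
```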
Now, if such sample information Y is available, by choosing the best treatment which maximizes the conditional expectation E_{θ|Y}[f_d(θ)] depending on Y, where E_{θ|Y}[·] denotes the expectation taken with respect to the conditional probability density π_Y(θ), the overall average outcome becomes

    E_Y[ max_{d∈D} E_{θ|Y}[f_d(θ)] ].

Then EVSI represents the expected benefit of gaining the information Y and is defined by the difference

    EVSI := E_Y[ max_{d∈D} E_{θ|Y}[f_d(θ)] ] − max_{d∈D} E_θ[f_d(θ)].
In this article we are concerned with Monte Carlo estimation of EVSI. Given that EVPI can be estimated with root-mean-square accuracy ε by using N = O(ε^{-2}) independent and identically distributed (i.i.d.) samples of θ, denoted by θ^{(1)}, ..., θ^{(N)}, as

    (1/N) ∑_{n=1}^{N} max_{d∈D} f_d(θ^{(n)}) − max_{d∈D} (1/N) ∑_{n=1}^{N} f_d(θ^{(n)}),

it suffices to efficiently estimate the difference between EVPI and EVSI:

    EVPI − EVSI = E_θ[ max_{d∈D} f_d(θ) ] − E_Y[ max_{d∈D} E_{θ|Y}[f_d(θ)] ].    (1.4)

Because of the noncommutativity between the operators E and max_{d∈D}, this estimation is inherently a nested expectation problem, and it is far from trivial whether we can construct a good Monte Carlo estimator which achieves a root-mean-square accuracy ε at a cost of O(ε^{-2}).
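A direct Monte Carlo estimate of EVPI along the lines of the displayed estimator might look as follows; the prior sampler and the outcome functions f_d are assumed to be supplied by the user (for example, the toy ones sketched above).

```python
import numpy as np

def estimate_evpi(sample_prior, outcome_fns, n_samples=100_000):
    """Plain Monte Carlo estimate of
        EVPI = E_theta[max_d f_d(theta)] - max_d E_theta[f_d(theta)]
    from N i.i.d. prior samples, as in the displayed estimator above."""
    theta = sample_prior(n_samples)
    outcomes = np.stack([f(theta) for f in outcome_fns])   # shape (|D|, N)
    return outcomes.max(axis=0).mean() - outcomes.mean(axis=1).max()
```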
Classically the most standard approach is to apply nested (Markov chain) Monte Carlo methods. For M, N ∈ Z_{>0}, let Y^{(1)}, ..., Y^{(N)} be N outer i.i.d. samples of Y, and for each 1 ≤ n ≤ N, let θ^{(n,1)}, ..., θ^{(n,M)} be M inner i.i.d. samples of θ conditional on Y^{(n)}. Then the nested Monte Carlo estimator of EVPI − EVSI is given by

    (1/N) ∑_{n=1}^{N} [ (1/M) ∑_{m=1}^{M} max_{d∈D} f_d(θ^{(n,m)}) − max_{d∈D} (1/M) ∑_{m=1}^{M} f_d(θ^{(n,m)}) ].    (1.5)
Here it is often hard to generate inner i.i.d. samples of θ conditional on some value of Y
directly (although, conversely, it is quite easy to generate i.i.d. samples of Y conditional on
some value of θ according to (1.2)). This is a major difference from estimating EVPPI.
To work around this difficulty, although the resulting samples are no longer i.i.d., one relies
on Markov chain Monte Carlo (MCMC) sampling techniques such as Metropolis–Hastings
sampling and Gibbs sampling; see [18, 20]. Under certain conditions, it follows from [16, 17]
that one can establish a nonasymptotic error bound of O(M^{-1/2}) for MCMC estimation of the inner conditional expectation. Still, as inferred from a recent work of Giles and Goda [11] on EVPPI estimation, we need N = O(ε^{-2}) and M = O(ε^{-1/α}) samples for the outer and inner expectations, respectively, to estimate EVPI − EVSI with root-mean-square accuracy ε. Here
α denotes the order of convergence of the bias and is typically between 1/2 and 1. This way the necessary total computational cost is of O(ε^{-2-1/α}).
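For intuition, here is a sketch of the nested estimator (1.5) in a deliberately simple conjugate setting (standard normal prior, Y = θ + ε with Gaussian noise), where the posterior of θ given Y is available in closed form, so exact inner conditional sampling stands in for the MCMC sampling that a realistic model would require. The two outcome functions are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
NOISE_SD = 0.5
OUTCOME_FNS = [lambda t: np.zeros_like(t),   # hypothetical treatment 1: fixed baseline outcome
               lambda t: t - 0.1]            # hypothetical treatment 2: outcome depends on theta

def nested_mc_evpi_minus_evsi(n_outer=1_000, n_inner=1_000):
    """Nested Monte Carlo estimator (1.5) of EVPI - EVSI for the conjugate toy model
    theta ~ N(0, 1), Y = theta + eps, eps ~ N(0, NOISE_SD**2), whose posterior is Gaussian."""
    post_var = 1.0 / (1.0 + 1.0 / NOISE_SD**2)
    total = 0.0
    for _ in range(n_outer):
        y = rng.normal() + rng.normal(scale=NOISE_SD)                   # outer sample of Y
        post_mean = post_var * y / NOISE_SD**2
        theta = rng.normal(post_mean, np.sqrt(post_var), size=n_inner)  # exact inner conditional samples
        outcomes = np.stack([f(theta) for f in OUTCOME_FNS])            # (|D|, M)
        total += outcomes.max(axis=0).mean() - outcomes.mean(axis=1).max()
    return total / n_outer
```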
In this article, building upon the earlier work by Giles and Goda [11], we develop a novel efficient Monte Carlo estimator of EVPI − EVSI by using a multilevel Monte Carlo (MLMC) method [8, 9]. Although there has been extensive recent research on efficient approximations of EVSI in the medical decision making context [25, 14, 19, 15], our proposal avoids function approximations on the inner conditional expectation and any reliance on assumptions of multilinearity of f_d or weak correlation between random variables in θ. Recently MLMC estimators have been studied intensively for nested expectations of different forms, for instance, by [4, 12, 13]. We also refer the reader to [10] for a review of recent developments of MLMC applied to nested expectation problems. Importantly, our approach developed in this article does not require MCMC sampling for generating inner conditional samples of θ and can achieve a root-mean-square accuracy ε at a cost of the optimal O(ε^{-2}). Moreover, it is straightforward to incorporate importance sampling techniques within our estimator, which may sometimes reduce the variance of the estimator significantly.
2. Multilevel Monte Carlo.

2.1. Basic theory. Before introducing our estimator of EVPI − EVSI, we give an overview of the MLMC method. Let P be a real-valued random variable which cannot be sampled exactly, and let P_0, P_1, ... be a sequence of real-valued random variables which approximate P with increasing accuracy but also with increasing cost. In order to estimate E[P], we first approximate E[P] by E[P_L] for some L ∈ Z_{≥0}, and then the standard Monte Carlo method estimates E[P_L] by using i.i.d. samples P_L^{(1)}, P_L^{(2)}, ... of P_L as

    E[P] ≈ E[P_L] ≈ (1/N) ∑_{n=1}^{N} P_L^{(n)}.
On the other hand, the MLMC method exploits the following telescoping sum representation:

    E[P_L] = E[P_0] + ∑_{ℓ=1}^{L} E[P_ℓ − P_{ℓ−1}].

More generally, given a sequence of random variables ΔP_0, ΔP_1, ... which satisfy

    E[ΔP_0] = E[P_0]  and  E[ΔP_ℓ] = E[P_ℓ − P_{ℓ−1}] for ℓ ≥ 1,

we have

    E[P_L] = ∑_{ℓ=0}^{L} E[ΔP_ℓ].
Then the MLMC estimator is given by a sum of independent Monte Carlo estimates of E[ΔP_0], E[ΔP_1], ..., i.e.,

    Z_MLMC = ∑_{ℓ=0}^{L} (1/N_ℓ) ∑_{n=1}^{N_ℓ} ΔP_ℓ^{(n)}.    (2.1)
Since P_0, P_1, ... approximate P with increasing accuracy, through a tight coupling of P_{ℓ−1} and P_ℓ, the variance of the correction variable ΔP_ℓ is expected to get smaller as the level ℓ increases. This implies that the numbers of samples N_0, N_1, ... can also get smaller as the level ℓ increases so as to estimate each quantity E[ΔP_ℓ] accurately. If this is the case, the total computational cost can be reduced significantly as compared to the standard Monte Carlo method.
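The following sketch implements the estimator (2.1) for a user-supplied routine returning i.i.d. samples of the level-ℓ correction ΔP_ℓ; the geometric decay of the per-level sample sizes is a simple stand-in for the cost-optimal allocation behind Theorem 2.1 below, not the paper's adaptive algorithm.

```python
import numpy as np

def mlmc_estimate(sample_correction, L, N0=20_000, decay=2.0, seed=0):
    """Minimal MLMC estimator (2.1): a sum over levels of independent sample
    averages of the correction variables Delta P_ell.

    sample_correction(ell, n, rng) must return n i.i.d. samples of Delta P_ell
    (with Delta P_0 = P_0).  N_ell = N0 * decay**(-ell) is a simple geometric
    allocation standing in for the optimal choice implied by Theorem 2.1."""
    rng = np.random.default_rng(seed)
    estimate = 0.0
    for ell in range(L + 1):
        n_ell = max(int(N0 * decay ** (-ell)), 2)
        estimate += float(np.mean(sample_correction(ell, n_ell, rng)))
    return estimate
```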
The following basic theorem from [8, 5, 9] makes the above observation explicit.
Theorem 2.1. Let P be a random variable, and for ℓ ∈ Z_{≥0}, let P_ℓ be the level ℓ approximation of P. Assume that there exist independent correction random variables ΔP_ℓ with expected cost C_ℓ and variance V_ℓ, and positive constants α, β, γ, c_1, c_2, c_3 such that α ≥ min(β, γ)/2 and
1. E[ΔP_ℓ] = E[P_0] for ℓ = 0, and E[ΔP_ℓ] = E[P_ℓ − P_{ℓ−1}] for ℓ ≥ 1,
2. |E[P_ℓ − P]| ≤ c_1 2^{−αℓ},
3. V_ℓ ≤ c_2 2^{−βℓ},
4. C_ℓ ≤ c_3 2^{γℓ}.
Then there exists a positive constant c_4 such that, for any root-mean-square accuracy ε < e^{−1}, there are L and N_0, ..., N_L for which the MLMC estimator (2.1) achieves a mean-square error less than ε^2, i.e.,

    E[(Z_MLMC − E[P])^2] ≤ ε^2,

with a computational cost C bounded above by

    E[C] ≤ c_4 ε^{−2}                  if β > γ,
    E[C] ≤ c_4 ε^{−2} (log ε^{−1})^2   if β = γ,
    E[C] ≤ c_4 ε^{−2−(γ−β)/α}          if β < γ.
Remark 2.2. As discussed in [5, section 2.1] and [11, section 2.1], under assumptions similar to those in Theorem 2.1, the standard Monte Carlo estimator achieves a root-mean-square accuracy ε at a cost of O(ε^{−2−γ/α}). Therefore, regardless of the values of β > 0 and γ, the MLMC estimator always has an asymptotically lower complexity bound than the standard Monte Carlo estimator.
2.2. MLMC estimator. Here we construct an MLMC estimator of the difference EVPI − EVSI. Our starting point is to insert (1.3) into (1.4), which results in

    EVPI − EVSI = E_Y[ E_{θ|Y}[ max_{d∈D} f_d(θ) ] ] − E_Y[ max_{d∈D} E_{θ|Y}[f_d(θ)] ]
                = E_Y[ E_θ[ max_{d∈D} f_d(θ) ρ(Y | θ) ] / E_θ[ρ(Y | θ)] − max_{d∈D} E_θ[ f_d(θ) ρ(Y | θ) ] / E_θ[ρ(Y | θ)] ].
This has converted the posterior expectation with respect to θ given Y into the ratio of
two prior expectations with respect to θ, which avoids the need for MCMC sampling. This
idea has been used not only in the current context [19] but also in other areas related to
Bayesian computations; see [22, 21, 6, 7], among many others. On a technical level, this gives
a decisive difference from estimating EVPPI as considered in [11], for which we do not need such a reformulation.
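To illustrate how this ratio representation removes the need for MCMC, the sketch below evaluates the bracketed quantity for a single outer sample Y using M_0 · 2^ℓ prior samples of θ and likelihood weights, in the toy Gaussian setting used in the earlier sketches (standard normal prior, h(θ) = θ, Gaussian noise). The coupling of successive levels that defines the actual MLMC correction terms is developed in the remainder of section 2.2 and is not shown here.

```python
import numpy as np

def inner_estimate(y, level, outcome_fns, M0=8, noise_sd=0.5, rng=None):
    """Level-`level` approximation of the bracketed quantity in the last display,

        E_theta[max_d f_d(theta) rho(y|theta)] / E_theta[rho(y|theta)]
          - max_d E_theta[f_d(theta) rho(y|theta)] / E_theta[rho(y|theta)],

    using M0 * 2**level i.i.d. prior samples of theta and likelihood weights
    (standard normal prior, h(theta) = theta, Gaussian noise: all toy assumptions)."""
    if rng is None:
        rng = np.random.default_rng()
    theta = rng.normal(size=M0 * 2 ** level)                 # prior samples of theta
    w = np.exp(-0.5 * ((y - theta) / noise_sd) ** 2)         # rho(y|theta) up to a constant factor
    outcomes = np.stack([f(theta) for f in outcome_fns])     # (|D|, M)
    term1 = (outcomes.max(axis=0) * w).mean() / w.mean()     # self-normalized weighted averages
    term2 = ((outcomes * w).mean(axis=1) / w.mean()).max()
    return term1 - term2
```

The Gaussian normalizing constant cancels in each ratio, so the unnormalized weights suffice.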