
174 IEEE TRANSACTIONS ON SIGNAL PROCESSING, VOL. 50, NO. 2, FEBRUARY 2002
A Tutorial on Particle Filters for Online
Nonlinear/Non-Gaussian Bayesian Tracking
M. Sanjeev Arulampalam, Simon Maskell, Neil Gordon, and Tim Clapp
Abstract—Increasingly, for many application areas, it is
becoming important to include elements of nonlinearity and
non-Gaussianity in order to model accurately the underlying
dynamics of a physical system. Moreover, it is typically crucial
to process data on-line as it arrives, both from the point of view
of storage costs as well as for rapid adaptation to changing
signal characteristics. In this paper, we review both optimal and
suboptimal Bayesian algorithms for nonlinear/non-Gaussian
tracking problems, with a focus on particle filters. Particle filters
are sequential Monte Carlo methods based on point mass (or
“particle”) representations of probability densities, which can
be applied to any state-space model and which generalize the
traditional Kalman filtering methods. Several variants of the
particle filter such as SIR, ASIR, and RPF are introduced within
a generic framework of the sequential importance sampling (SIS)
algorithm. These are discussed and compared with the standard
EKF through an illustrative example.
Index Terms—Bayesian, nonlinear/non-Gaussian, particle
filters, sequential Monte Carlo, tracking.
I. INTRODUCTION
MANY problems in science require estimation of the state
of a system that changes over time using a sequence of
noisy measurements made on the system. In this paper, we will
concentrate on the state-space approach to modeling dynamic
systems, and the focus will be on the discrete-time formulation
of the problem. Thus, difference equations are used to model
the evolution of the system with time, and measurements are
assumed to be available at discrete times. For dynamic state es-
timation, the discrete-time approach is widespread and conve-
nient.
The state-space approach to time-series modeling focuses at-
tention on the state vector of a system. The state vector con-
tains all relevant information required to describe the system
under investigation. For example, in tracking problems, this in-
formation could be related to the kinematic characteristics of
the target. Alternatively, in an econometrics problem, it could be
Manuscript received February 8, 2001; revised October 15, 2001. S.
Arulampalam was supported by the Royal Academy of Engineering with
an Anglo–Australian Post-Doctoral Research Fellowship. S. Maskell was
supported by the Royal Commission for the Exhibition of 1851 with an
Industrial Fellowship. The associate editor coordinating the review of this
paper and approving it for publication was Dr. Petar M. Djurić.
M. S. Arulampalam is with the Defence Science and Technology Organisa-
tion, Adelaide, Australia (e-mail: sanjeev.arulampalam@dsto.defence.gov.au).
S. Maskell and N. Gordon are with the Pattern and Information Processing
Group, QinetiQ, Ltd., Malvern, U.K., and Cambridge University Engineering
Department, Cambridge, U.K. (e-mail: s.maskell@signal.qinetiq.com;
n.gordon@signal.qinetiq.com).
T. Clapp is with Astrium Ltd., Stevenage, U.K. (e-mail: t.clapp@iee.org).
Publisher Item Identifier S 1053-587X(02)00569-X.
related to monetary flow, interest rates, inflation, etc. The mea-
surement vector represents (noisy) observations that are related
to the state vector. The measurement vector is generally (but not
necessarily) of lower dimension than the state vector. The state-
space approach is convenient for handling multivariate data and
nonlinear/non-Gaussian processes, and it provides a significant
advantage over traditional time-series techniques for these prob-
lems. A full description is provided in [41]. In addition, many
varied examples illustrating the application of nonlinear/non-
Gaussian state space models are given in [26].
In order to analyze and make inference about a dynamic
system, at least two models are required: First, a model de-
scribing the evolution of the state with time (the system model)
and, second, a model relating the noisy measurements to the
state (the measurement model). We will assume that these
models are available in a probabilistic form. The probabilistic
state-space formulation and the requirement for the updating of
information on receipt of new measurements are ideally suited
for the Bayesian approach. This provides a rigorous general
framework for dynamic state estimation problems.
In the Bayesian approach to dynamic state estimation, one
attempts to construct the posterior probability density function
(pdf) of the state based on all available information, including
the set of received measurements. Since this pdf embodies all
available statistical information, it may be said to be the com-
plete solution to the estimation problem. In principle, an optimal
(with respect to any criterion) estimate of the state may be ob-
tained from the pdf. A measure of the accuracy of the estimate
may also be obtained. For many problems, an estimate is re-
quired every time that a measurement is received. In this case, a
recursive filter is a convenient solution. A recursive filtering ap-
proach means that received data can be processed sequentially
rather than as a batch so that it is not necessary to store the
complete data set nor to reprocess existing data if a new
measurement becomes available.^1 Such a filter consists of essentially
two stages: prediction and update. The prediction stage uses the
system model to predict the state pdf forward from one mea-
surement time to the next. Since the state is usually subject to
unknown disturbances (modeled as random noise), prediction
generally translates, deforms, and spreads the state pdf. The up-
date operation uses the latest measurement to modify the pre-
diction pdf. This is achieved using Bayes theorem, which is the
mechanism for updating knowledge about the target state in the
light of extra information from new data.
^1 In this paper, we assume no out-of-sequence measurements; in the presence
of out-of-sequence measurements, the order of times to which the measurements
relate can differ from the order in which the measurements are processed. For a
particle filter solution to the problem of relaxing this assumption, see [32].

We begin in Section II with a description of the nonlinear
tracking problem and its optimal Bayesian solution. When
certain constraints hold, this optimal solution is tractable.
The Kalman filter and grid-based filters, which are described
in Section III, are two such solutions. Often, the optimal
solution is intractable. The methods outlined in Section IV
take several different approximation strategies to the optimal
solution. These approaches include the extended Kalman filter,
approximate grid-based filters, and particle filters. Finally, in
Section VI, we use a simple scalar example to illustrate some
points about the approaches discussed up to this point and
then draw conclusions in Section VII. This paper is a tutorial;
therefore, to facilitate easy implementation, the “pseudo-code”
for algorithms has been included at relevant points.
II. NONLINEAR BAYESIAN TRACKING
To define the problem of tracking, consider the evolution of
the state sequence {x_k, k ∈ ℕ} of a target given by

    x_k = f_k(x_{k-1}, v_{k-1})                                  (1)

where f_k : ℝ^{n_x} × ℝ^{n_v} → ℝ^{n_x} is a possibly nonlinear function
of the state x_{k-1}, {v_{k-1}, k ∈ ℕ} is an i.i.d. process noise
sequence, n_x, n_v are dimensions of the state and process noise
vectors, respectively, and ℕ is the set of natural numbers. The
objective of tracking is to recursively estimate x_k from measurements

    z_k = h_k(x_k, n_k)                                          (2)

where h_k : ℝ^{n_x} × ℝ^{n_n} → ℝ^{n_z} is a possibly nonlinear
function, {n_k, k ∈ ℕ} is an i.i.d. measurement noise sequence,
and n_z, n_n are dimensions of the measurement and measurement
noise vectors, respectively. In particular, we seek filtered
estimates of x_k based on the set of all available measurements
z_{1:k} = {z_i, i = 1, …, k} up to time k.
From a Bayesian perspective, the tracking problem is to
recursively calculate some degree of belief in the state x_k at time
k, taking different values, given the data z_{1:k} up to time k. Thus,
it is required to construct the pdf p(x_k | z_{1:k}). It is assumed that
the initial pdf p(x_0 | z_0) ≡ p(x_0) of the state vector, which is also
known as the prior, is available (z_0 being the set of no measurements).
Then, in principle, the pdf p(x_k | z_{1:k}) may be obtained,
recursively, in two stages: prediction and update.
Suppose that the required pdf p(x_{k-1} | z_{1:k-1}) at time k − 1
is available. The prediction stage involves using the system
model (1) to obtain the prior pdf of the state at time k via the
Chapman–Kolmogorov equation

    p(x_k | z_{1:k-1}) = ∫ p(x_k | x_{k-1}) p(x_{k-1} | z_{1:k-1}) dx_{k-1}    (3)

Note that in (3), use has been made of the fact that
p(x_k | x_{k-1}, z_{1:k-1}) = p(x_k | x_{k-1}),
as (1) describes a Markov process
of order one. The probabilistic model of the state evolution
p(x_k | x_{k-1}) is defined by the system equation (1) and the
known statistics of v_{k-1}.
At time step k, a measurement z_k becomes available, and this
may be used to update the prior (update stage) via Bayes' rule

    p(x_k | z_{1:k}) = p(z_k | x_k) p(x_k | z_{1:k-1}) / p(z_k | z_{1:k-1})    (4)

where the normalizing constant

    p(z_k | z_{1:k-1}) = ∫ p(z_k | x_k) p(x_k | z_{1:k-1}) dx_k                (5)

depends on the likelihood function p(z_k | x_k) defined by the
measurement model (2) and the known statistics of n_k. In the
update stage (4), the measurement z_k is used to modify the
prior density to obtain the required posterior density of the
current state.
The recurrence relations (3) and (4) form the basis for the
optimal Bayesian solution.^2 This recursive propagation of the
posterior density is only a conceptual solution in that in general,
it cannot be determined analytically. Solutions do exist in a re-
strictive set of cases, including the Kalman filter and grid-based
filters described in the next section. We also describe how, when
the analytic solution is intractable, extended Kalman filters, ap-
proximate grid-based filters, and particle filters approximate the
optimal Bayesian solution.
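As a small illustration (not from the paper), a generic model of the form (1) and (2) can be simulated directly; the particular functions f and h below are made up for this sketch.

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative instances of the system function f_k in (1) and the
# measurement function h_k in (2); both are assumptions for this sketch.
def f(x, v):
    return 0.9 * x + v          # mildly damped random walk

def h(x, n):
    return 0.5 * x**2 + n       # nonlinear measurement

T = 50
x = 0.0
xs, zs = [], []
for _ in range(T):
    x = f(x, rng.normal(0.0, 1.0))         # propagate with process noise v_{k-1}
    xs.append(x)
    zs.append(h(x, rng.normal(0.0, 0.5)))  # observe with measurement noise n_k
```

Any of the filters discussed below can then be run against the simulated measurement sequence zs.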
III. OPTIMAL ALGORITHMS
A. Kalman Filter
The Kalman filter assumes that the posterior density at every
time step is Gaussian and, hence, parameterized by a mean and
covariance.
If p(x_{k-1} | z_{1:k-1}) is Gaussian, it can be proved that
p(x_k | z_{1:k}) is also Gaussian, provided that certain assumptions
hold [21]:
- v_{k-1} and n_k are drawn from Gaussian distributions of known parameters;
- f_k(x_{k-1}, v_{k-1}) is known and is a linear function of x_{k-1} and v_{k-1};
- h_k(x_k, n_k) is a known linear function of x_k and n_k.
That is, (1) and (2) can be rewritten as

    x_k = F_k x_{k-1} + v_{k-1}                                  (6)
    z_k = H_k x_k + n_k                                          (7)

F_k and H_k are known matrices defining the linear functions.
The covariances of v_{k-1} and n_k are, respectively, Q_{k-1} and
R_k. Here, we consider the case when v_{k-1} and n_k have zero
mean and are statistically independent. Note that the system and
measurement matrices F_k and H_k, as well as noise parameters
Q_{k-1} and R_k, are allowed to be time variant.
The Kalman filter algorithm, which was derived using (3) and
(4), can then be viewed as the following recursive relationship:

    p(x_{k-1} | z_{1:k-1}) = N(x_{k-1}; m_{k-1|k-1}, P_{k-1|k-1})    (8)
    p(x_k | z_{1:k-1}) = N(x_k; m_{k|k-1}, P_{k|k-1})                (9)
    p(x_k | z_{1:k}) = N(x_k; m_{k|k}, P_{k|k})                      (10)
2
For clarity, the optimal Bayesian solution solves the problem of recursively
calculating the exact posterior density. An optimal algorithm is a method for
deducing this solution.

where

    m_{k|k-1} = F_k m_{k-1|k-1}                                  (11)
    P_{k|k-1} = Q_{k-1} + F_k P_{k-1|k-1} F_k^T                  (12)
    m_{k|k} = m_{k|k-1} + K_k (z_k − H_k m_{k|k-1})              (13)
    P_{k|k} = P_{k|k-1} − K_k H_k P_{k|k-1}                      (14)

and where N(x; m, P) is a Gaussian density with argument x,
mean m, and covariance P, and

    S_k = H_k P_{k|k-1} H_k^T + R_k                              (15)
    K_k = P_{k|k-1} H_k^T S_k^{−1}                               (16)

are the covariance of the innovation term z_k − H_k m_{k|k-1}, and
the Kalman gain, respectively. In the above equations, the transpose
of a matrix M is denoted by M^T.
This is the optimal solution to the tracking problem—if the
(highly restrictive) assumptions hold. The implication is that no
algorithm can ever do better than a Kalman filter in this linear
Gaussian environment. It should be noted that it is possible to
derive the same results using a least squares (LS) argument [22].
All the distributions are then described by their means and co-
variances, and the algorithm remains unaltered, but are not con-
strained to be Gaussian. Assuming the means and covariances
to be unbiased and consistent, the filter then optimally derives
the mean and covariance of the posterior. However, this poste-
rior is not necessarily Gaussian, and therefore, if optimality is
the ability of an algorithm to calculate the posterior, the filter is
then not certain to be optimal.
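For concreteness, the recursion (11)–(16) can be sketched in a few lines of NumPy; this is a minimal illustration written for this summary, not code from the paper, and the variable names are only a convention.

```python
import numpy as np

def kalman_step(m, P, z, F, H, Q, R):
    """One predict/update cycle of the Kalman filter, following (11)-(16)."""
    # Prediction: (11) and (12)
    m_pred = F @ m
    P_pred = Q + F @ P @ F.T
    # Innovation covariance (15) and Kalman gain (16)
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    # Update: (13) and (14)
    m_new = m_pred + K @ (z - H @ m_pred)
    P_new = P_pred - K @ H @ P_pred
    return m_new, P_new
```

Iterating this function over a measurement sequence propagates the Gaussian posterior (10) exactly, under the linear-Gaussian assumptions above.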
Similarly, if smoothed estimates of the states are required,
that is, estimates of p(x_{k−ℓ} | z_{1:k}), where ℓ > 0,^3 then the
Kalman smoother is the optimal estimator of p(x_{k−ℓ} | z_{1:k}).
This holds if ℓ is fixed (fixed-lag smoothing), if a batch of data
is considered and smoothed estimates are required for all times
in the batch (fixed-interval smoothing), or if the state at a
particular time is of interest, so that k − ℓ is fixed (fixed-point
smoothing). The problem of calculating smoothed densities is
of interest because the densities at time k − ℓ are then conditional
not only on measurements up to and including time index k − ℓ but
also on future measurements. Since there is more information
on which to base the estimation, these smoothed densities are
typically tighter than the filtered densities.
Although this is true, there is an algorithmic issue that should
be highlighted here. It is possible to formulate a backward-time
Kalman filter that recurses through the data sequence from the
final data to the first and then combines the estimates from the
forward and backward passes to obtain overall smoothed es-
timates [20]. A different formulation implicitly calculates the
backward-time state estimates and covariances, recursively esti-
mating the smoothed quantities [38]. Both techniques are prone
to having to calculate matrix inverses that do not necessarily
exist. Instead, it is preferable to propagate different quantities
using an information filter when carrying out the backward-time
recursion [4].
^3 If ℓ = 0, then the problem reduces to the estimation of p(x_k | z_{1:k})
considered up to this point.
B. Grid-Based Methods
Grid-based methods provide the optimal recursion of the
filtered density p(x_k | z_{1:k}) if the state space is discrete and
consists of a finite number of states. Suppose the state space at
time k − 1 consists of discrete states x^i_{k-1}, i = 1, …, N_s. For each
state x^i_{k-1}, let the conditional probability of that state, given
measurements up to time k − 1, be denoted by w^i_{k-1|k-1}, that is,
Pr(x_{k-1} = x^i_{k-1} | z_{1:k-1}) = w^i_{k-1|k-1}. Then, the posterior
pdf at k − 1 can be written as

    p(x_{k-1} | z_{1:k-1}) = Σ_{i=1}^{N_s} w^i_{k-1|k-1} δ(x_{k-1} − x^i_{k-1})    (17)

where δ(·) is the Dirac delta measure. Substitution of (17) into
(3) and (4) yields the prediction and update equations, respectively

    p(x_k | z_{1:k-1}) = Σ_{i=1}^{N_s} w^i_{k|k-1} δ(x_k − x^i_k)                  (18)
    p(x_k | z_{1:k}) = Σ_{i=1}^{N_s} w^i_{k|k} δ(x_k − x^i_k)                      (19)

where

    w^i_{k|k-1} = Σ_{j=1}^{N_s} w^j_{k-1|k-1} p(x^i_k | x^j_{k-1})                 (20)
    w^i_{k|k} = w^i_{k|k-1} p(z_k | x^i_k) / Σ_{j=1}^{N_s} w^j_{k|k-1} p(z_k | x^j_k)    (21)

The above assumes that p(x^i_k | x^j_{k-1}) and p(z_k | x^i_k) are known
but does not constrain the particular form of these discrete densities.
Again, this is the optimal solution if the assumptions made
hold.
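As a sketch of the recursion (20) and (21) (illustrative code, not from the paper), one step of the grid-based filter is a matrix-vector product followed by a pointwise likelihood weighting and normalization:

```python
import numpy as np

def grid_filter_step(w, trans, lik):
    """One prediction/update cycle of the grid-based filter.

    w     : (Ns,) probabilities w^j_{k-1|k-1} over the discrete states
    trans : (Ns, Ns) matrix with trans[i, j] = p(x_k = x^i | x_{k-1} = x^j)
    lik   : (Ns,) likelihoods p(z_k | x_k = x^i)
    """
    w_pred = trans @ w            # prediction step, (20)
    w_post = lik * w_pred         # numerator of the update, (21)
    return w_post / w_post.sum()  # normalization in (21)
```

For a truly discrete state space this recursion is exact, at O(N_s^2) cost per measurement.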
IV. SUBOPTIMAL ALGORITHMS
In many situations of interest, the assumptions made above
do not hold. The Kalman filter and grid-based methods cannot,
therefore, be used as described—approximations are necessary.
In this section, we consider three approximate nonlinear
Bayesian filters:
a) extended Kalman filter (EKF);
b) approximate grid-based methods;
c) particle filters.
A. Extended Kalman Filter
If (1) and (2) cannot be rewritten in the form of (6) and (7)
because the functions are nonlinear, then a local linearization of
the equations may be a sufficient description of the nonlinearity.
The EKF is based on this approximation. p(x_k | z_{1:k}) is
approximated by a Gaussian

    p(x_{k-1} | z_{1:k-1}) ≈ N(x_{k-1}; m_{k-1|k-1}, P_{k-1|k-1})    (22)
    p(x_k | z_{1:k-1}) ≈ N(x_k; m_{k|k-1}, P_{k|k-1})                (23)
    p(x_k | z_{1:k}) ≈ N(x_k; m_{k|k}, P_{k|k})                      (24)

where

    m_{k|k-1} = f_k(m_{k-1|k-1})                                     (25)
    P_{k|k-1} = Q_{k-1} + F̂_k P_{k-1|k-1} F̂_k^T                      (26)
    m_{k|k} = m_{k|k-1} + K_k (z_k − h_k(m_{k|k-1}))                 (27)
    P_{k|k} = P_{k|k-1} − K_k Ĥ_k P_{k|k-1}                          (28)

and where now, f_k(·) and h_k(·) are nonlinear functions, and F̂_k
and Ĥ_k are local linearizations of these nonlinear functions (i.e.,
matrices)

    F̂_k = [d f_k(x) / dx] evaluated at x = m_{k-1|k-1}               (29)
    Ĥ_k = [d h_k(x) / dx] evaluated at x = m_{k|k-1}                 (30)
    S_k = Ĥ_k P_{k|k-1} Ĥ_k^T + R_k                                  (31)
    K_k = P_{k|k-1} Ĥ_k^T S_k^{−1}                                   (32)
The EKF as described above utilizes the first term in a Taylor
expansion of the nonlinear function. A higher order EKF that
retains further terms in the Taylor expansion exists, but the ad-
ditional complexity has prohibited its widespread use.
Recently, the unscented transform has been used in an EKF
framework [23], [42], [43]. The resulting filter, which is known
as the “unscented Kalman filter,” considers a set of points that
are deterministically selected from the Gaussian approximation
to
. These points are all propagated through the true
nonlinearity, and the parameters of the Gaussian approximation
are then re-estimated. For some problems, this filter has been
shown to give better performance than a standard EKF since
it better approximates the nonlinearity; the parameters of the
Gaussian approximation are improved.
However, the EKF always approximates p(x_k | z_{1:k}) to
be Gaussian. If the true density is non-Gaussian (e.g., if it
is bimodal or heavily skewed), then a Gaussian can never
describe it well. In such cases, approximate grid-based filters
and particle filters will yield an improvement in performance
in comparison to that of an EKF [1].
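A minimal sketch of the EKF cycle (25)-(32) follows; again this is illustrative code, not the paper's, and the user-supplied functions and Jacobians are assumptions of the example.

```python
import numpy as np

def ekf_step(m, P, z, f, h, F_jac, H_jac, Q, R):
    """One EKF predict/update cycle following (25)-(32).
    f, h are the nonlinear system and measurement functions;
    F_jac, H_jac return their Jacobian matrices at a given point.
    """
    # Prediction, (25)-(26), linearizing f at the previous mean
    F = F_jac(m)
    m_pred = f(m)
    P_pred = Q + F @ P @ F.T
    # Update, (27)-(32), linearizing h at the predicted mean
    H = H_jac(m_pred)
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    m_new = m_pred + K @ (z - h(m_pred))
    P_new = P_pred - K @ H @ P_pred
    return m_new, P_new
```

With linear f and h and exact Jacobians, this reduces to the Kalman filter of Section III-A; the approximation error enters only through the linearization.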
B. Approximate Grid-Based Methods
If the state space is continuous but can be decomposed into N_s
"cells," {x^i_k : i = 1, …, N_s}, then a grid-based method
can be used to approximate the posterior density. Specifically,
suppose the approximation to the posterior pdf at k − 1 is given
by

    p(x_{k-1} | z_{1:k-1}) ≈ Σ_{i=1}^{N_s} w^i_{k-1|k-1} δ(x_{k-1} − x^i_{k-1})    (33)

Then, the prediction and update equations can be written as

    p(x_k | z_{1:k-1}) ≈ Σ_{i=1}^{N_s} w^i_{k|k-1} δ(x_k − x^i_k)                  (34)
    p(x_k | z_{1:k}) ≈ Σ_{i=1}^{N_s} w^i_{k|k} δ(x_k − x^i_k)                      (35)

where

    w^i_{k|k-1} = Σ_{j=1}^{N_s} w^j_{k-1|k-1} ∫_{x ∈ x^i_k} p(x | x̄^j_{k-1}) dx    (36)
    w^i_{k|k} = w^i_{k|k-1} ∫_{x ∈ x^i_k} p(z_k | x) dx
                / Σ_{j=1}^{N_s} w^j_{k|k-1} ∫_{x ∈ x^j_k} p(z_k | x) dx            (37)

Here, x̄^i_k denotes the center of the ith cell at time index k.
The integrals in (36) and (37) arise due to the fact that the grid
points x^i_k, i = 1, …, N_s, represent regions of continuous state
space, and thus, the probabilities must be integrated over these
regions. In practice, to simplify computation, a further approximation
is made in the evaluation of the weights. Specifically, these
weights are computed at the center of the "cell" corresponding
to x̄^i_k:

    w^i_{k|k-1} ≈ Σ_{j=1}^{N_s} w^j_{k-1|k-1} p(x̄^i_k | x̄^j_{k-1})                 (38)
    w^i_{k|k} ≈ w^i_{k|k-1} p(z_k | x̄^i_k) / Σ_{j=1}^{N_s} w^j_{k|k-1} p(z_k | x̄^j_k)    (39)
The grid must be sufficiently dense to get a good approxi-
mation to the continuous state space. As the dimensionality of
the state space increases, the computational cost of the approach
therefore increases dramatically. If the state space is not finite in
extent, then using a grid-based approach necessitates some trun-
cation of the state space. Another disadvantage of grid-based
methods is that the state space must be predefined and, there-
fore, cannot be partitioned unevenly to give greater resolution
in high probability density regions, unless prior knowledge is
used.
Hidden Markov model (HMM) filters [30], [35], [36], [39]
are an application of such approximate grid-based methods in
a fixed-interval smoothing context and have been used exten-
sively in speech processing. In HMM-based tracking, a common
approach is to use the Viterbi algorithm [18] to calculate the
maximum a posteriori estimate of the path through the trellis,
that is, the sequence of discrete states that maximizes the prob-
ability of the state sequence given the data. Another approach,
due to Baum–Welch [35], is to calculate the probability of each
discrete state at each time epoch given the entire data sequence.^4

^4 The Viterbi and Baum–Welch algorithms are frequently applied when the
state space is approximated to be discrete. The algorithms are optimal if and
only if the underlying state space is truly discrete in nature.

V. PARTICLE FILTERING METHODS

A. Sequential Importance Sampling (SIS) Algorithm

The sequential importance sampling (SIS) algorithm is
a Monte Carlo (MC) method that forms the basis for most
sequential MC filters developed over the past decades; see [13],
[14]. This sequential MC (SMC) approach is known variously
as bootstrap filtering [17], the condensation algorithm [29],
particle filtering [6], interacting particle approximations [10],
[11], and survival of the fittest [24]. It is a technique for
implementing a recursive Bayesian filter by MC simulations. The key
idea is to represent the required posterior density function by a
set of random samples with associated weights and to compute
estimates based on these samples and weights. As the number
of samples becomes very large, this MC characterization
becomes an equivalent representation to the usual functional
description of the posterior pdf, and the SIS filter approaches
the optimal Bayesian estimate.
In order to develop the details of the algorithm, let
{x^i_{0:k}, w^i_k}_{i=1}^{N_s} denote a random measure that characterizes the
posterior pdf p(x_{0:k} | z_{1:k}), where {x^i_{0:k}, i = 1, …, N_s} is a set
of support points with associated weights {w^i_k, i = 1, …, N_s},
and x_{0:k} = {x_j, j = 0, …, k} is the set of all states up to time
k. The weights are normalized such that Σ_i w^i_k = 1. Then, the
posterior density at k can be approximated as

    p(x_{0:k} | z_{1:k}) ≈ Σ_{i=1}^{N_s} w^i_k δ(x_{0:k} − x^i_{0:k})    (40)

We therefore have a discrete weighted approximation to the
true posterior, p(x_{0:k} | z_{1:k}). The weights are chosen using the
principle of importance sampling [3], [12]. This principle relies
on the following. Suppose p(x) ∝ π(x) is a probability density
from which it is difficult to draw samples but for which π(x) can
be evaluated [as well as p(x) up to proportionality]. In addition,
let x^i ~ q(x), i = 1, …, N_s, be samples that are easily generated
from a proposal q(·) called an importance density. Then, a
weighted approximation to the density p(·) is given by

    p(x) ≈ Σ_{i=1}^{N_s} w^i δ(x − x^i)                              (41)

where

    w^i ∝ π(x^i) / q(x^i)                                            (42)

is the normalized weight of the ith particle.
Therefore, if the samples x^i_{0:k} were drawn from an importance
density q(x_{0:k} | z_{1:k}), then the weights in (40) are defined
by (42) to be

    w^i_k ∝ p(x^i_{0:k} | z_{1:k}) / q(x^i_{0:k} | z_{1:k})          (43)
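The importance-sampling principle can be demonstrated numerically; in this made-up sketch the target is a standard normal evaluated only up to proportionality, and the importance density is a uniform that is easy to sample from.

```python
import numpy as np

rng = np.random.default_rng(0)

# Target: standard normal, evaluated only up to proportionality.
# Proposal q: uniform on [-5, 5], easy to sample from.
Ns = 200_000
x = rng.uniform(-5.0, 5.0, size=Ns)   # samples x^i ~ q(x)
pi = np.exp(-0.5 * x**2)              # unnormalized target pi(x^i)
q = np.full(Ns, 1.0 / 10.0)           # q(x^i) for the uniform proposal
w = pi / q                            # weights proportional to pi/q
w /= w.sum()                          # normalize so the weights sum to 1

# Weighted approximation used to estimate moments of the target
mean_est = np.sum(w * x)              # should be near 0
var_est = np.sum(w * x**2)            # should be near 1
```

The normalization step is what allows the target to be known only up to a constant, exactly the situation exploited by the weight updates that follow.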
Returning to the sequential case, at each iteration, one
could have samples constituting an approximation to
p(x_{0:k-1} | z_{1:k-1}) and want to approximate p(x_{0:k} | z_{1:k})
with a new set of samples. If the importance density is chosen
to factorize such that

    q(x_{0:k} | z_{1:k}) = q(x_k | x_{0:k-1}, z_{1:k}) q(x_{0:k-1} | z_{1:k-1})    (44)

then one can obtain samples x^i_{0:k} ~ q(x_{0:k} | z_{1:k}) by augmenting
each of the existing samples x^i_{0:k-1} ~ q(x_{0:k-1} | z_{1:k-1}) with
the new state x^i_k ~ q(x_k | x_{0:k-1}, z_{1:k}). To derive the weight
update equation, p(x_{0:k} | z_{1:k}) is first expressed in terms of
p(x_{0:k-1} | z_{1:k-1}), p(z_k | x_k), and p(x_k | x_{k-1}). Note that (4) can
be derived by integrating (45)

    p(x_{0:k} | z_{1:k}) = [p(z_k | x_k) p(x_k | x_{k-1}) / p(z_k | z_{1:k-1})] p(x_{0:k-1} | z_{1:k-1})    (45)
                         ∝ p(z_k | x_k) p(x_k | x_{k-1}) p(x_{0:k-1} | z_{1:k-1})                          (46)

By substituting (44) and (46) into (43), the weight update
equation can then be shown to be

    w^i_k ∝ w^i_{k-1} p(z_k | x^i_k) p(x^i_k | x^i_{k-1}) / q(x^i_k | x^i_{0:k-1}, z_{1:k})    (47)

Furthermore, if q(x_k | x_{0:k-1}, z_{1:k}) = q(x_k | x_{k-1}, z_k), then
the importance density becomes only dependent on x_{k-1} and
z_k. This is particularly useful in the common case when only
a filtered estimate of p(x_k | z_{1:k}) is required at each time step.
From this point on, we will assume such a case, except when
explicitly stated otherwise. In such scenarios, only x^i_k need be
stored; therefore, one can discard the path x^i_{0:k-1} and history of
observations z_{1:k-1}. The modified weight is then

    w^i_k ∝ w^i_{k-1} p(z_k | x^i_k) p(x^i_k | x^i_{k-1}) / q(x^i_k | x^i_{k-1}, z_k)    (48)

and the posterior filtered density p(x_k | z_{1:k}) can be approximated
as

    p(x_k | z_{1:k}) ≈ Σ_{i=1}^{N_s} w^i_k δ(x_k − x^i_k)                                (49)

where the weights are defined in (48). It can be shown that as
N_s → ∞, the approximation (49) approaches the true posterior
density p(x_k | z_{1:k}).
The SIS algorithm thus consists of recursive propagation of
the weights and support points as each measurement is received
sequentially. A pseudo-code description of this algorithm is
given by algorithm 1.
Algorithm 1: SIS Particle Filter
[{x^i_k, w^i_k}_{i=1}^{N_s}] = SIS[{x^i_{k-1}, w^i_{k-1}}_{i=1}^{N_s}, z_k]
  FOR i = 1 : N_s
    Draw x^i_k ~ q(x_k | x^i_{k-1}, z_k)
    Assign the particle a weight, w^i_k, according to (48)
  END FOR
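A minimal concrete version of Algorithm 1 follows (an illustrative sketch, not the authors' code). It takes the importance density q(x_k | x^i_{k-1}, z_k) to be the transition prior p(x_k | x^i_{k-1}), so that the weight update (48) reduces to w^i_k ∝ w^i_{k-1} p(z_k | x^i_k); the scalar random-walk model is an assumption of the example.

```python
import numpy as np

rng = np.random.default_rng(1)

def sis_step(particles, weights, z, propagate, likelihood):
    """One SIS iteration (Algorithm 1) with the transition prior as importance
    density, so the weight update (48) reduces to w_k ∝ w_{k-1} p(z_k | x_k)."""
    particles = propagate(particles)              # draw x^i_k ~ p(x_k | x^i_{k-1})
    weights = weights * likelihood(z, particles)  # weight update, (48)
    return particles, weights / weights.sum()

# Illustrative scalar model: random-walk state, Gaussian measurement noise.
def propagate(x):
    return x + rng.normal(0.0, 1.0, size=x.shape)

def likelihood(z, x):
    return np.exp(-0.5 * (z - x) ** 2)

particles = rng.normal(0.0, 1.0, size=5000)
weights = np.full(5000, 1.0 / 5000)
particles, weights = sis_step(particles, weights, 0.5, propagate, likelihood)
estimate = np.sum(weights * particles)   # weighted mean, cf. (49)
```

Repeating sis_step over many measurements exhibits exactly the degeneracy phenomenon discussed next: the weight variance grows until a few particles dominate.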
1) Degeneracy Problem: A common problem with the SIS
particle filter is the degeneracy phenomenon, where after a few
iterations, all but one particle will have negligible weight. It has
