Journal ArticleDOI

A path integral method for data assimilation

01 Jan 2008 · Physica D: Nonlinear Phenomena (North-Holland) · Vol. 237, Iss. 1, pp. 14-27
TL;DR: In this paper, a path integral, sampling-based approach for data assimilation of sequential data and evolutionary models is described. Since it makes no assumptions of linearity in the dynamics or of Gaussianity in the statistics, it permits consideration of very general estimation problems and can be used for such tasks as computing a smoother solution, parameter estimation, and data/model initialization.
About: This article was published in Physica D: Nonlinear Phenomena on 2008-01-01 and is currently open access. It has received 34 citations to date. The article focuses on the topics: Sampling (statistics) & Estimator.

Summary (4 min read)

1 Introduction

  • Data assimilation refers to finding an estimator that derives its statistical nature from a model as well as data; the model or the data may have a stochastic component, which are referred to as model error and measurement error, respectively.
  • The state vector for which an estimator is sought can consist of dynamic quantities (e.g., physical variables) as well as parameters.
  • Another sampling strategy is the generalized Hybrid Monte Carlo (gHMC) approach (see Ferreira and Toral (1993)).
  • It leads to very small time steps in the assimilation: beyond the obvious issue of computational expense for a given accuracy tolerance, the new dimension of this problem is that, on finite-precision machines, it may affect how the relative variances of the model and the data are balanced.

2 Problem Statement

  • Considered here is the determination of the statistics of the state variable x(t), whose dimension is Nx, given incomplete and possibly imprecise observations of that system.
  • The distribution of the stochastic component of the model is assumed known.
  • The diffusion matrix is D, and the vector-valued dW(t) represents an incremental standard Wiener process (a schematic form is given below).
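
As a point of reference, the kind of model described above can be written schematically as an Itô SDE together with its simplest discretization. This is an illustrative form consistent with the bullets above; the paper's own equation (referred to later as (6)) may differ in notation or in the normalization of D.

```latex
% Schematic model equation and its Euler--Maruyama discretization.
% D is the diffusion matrix, dW a standard Wiener increment, \delta t the time step.
\begin{align}
  d\mathbf{x}(t)   &= \mathbf{f}(\mathbf{x}(t),t)\,dt + \mathbf{D}^{1/2}\,d\mathbf{W}(t),\\
  \mathbf{x}_{n+1} &= \mathbf{x}_n + \delta t\,\mathbf{f}(\mathbf{x}_n,t_n)
                     + \sqrt{\delta t}\,\mathbf{D}^{1/2}\,\boldsymbol{\xi}_n,
  \qquad \boldsymbol{\xi}_n \sim \mathcal{N}(\mathbf{0},\mathbf{I}).
\end{align}
```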

3 The Path Integral Method

  • The Euler-Maruyama discretization is the simplest and makes the presentation of the method clearest.
  • More importantly, it is a convenient scheme with regard to adapting legacy code in the assimilation strategy.
  • Without loss of generality it will be assumed that the discrete time steps are equally spaced, and further, that the measurement times are commensurate with δt, the time step interval.
  • The total action or cost functional associated with the distribution of the state variable, conditioned on measurements, is then U = U_dynamics + U_obs (the paper's equation (10)). Rather than trying to minimize the cost functional (10), PIMC proposes to sample exp(−U), which is proportional to the conditional probability distribution; a schematic form of the two terms is sketched after this list.
  • A slight modification of this sampling strategy, which is referred to here as gMC, would differ from the above in only one respect: the new proposal consists of a new configuration for all components of the state vector at every time slice.
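
Under the Euler-Maruyama discretization, and assuming Gaussian model and measurement errors, the two contributions to the action take a familiar least-squares-like form. The sketch below is schematic: the measurement operator H and the observation-error covariance R are generic placeholders, and the paper's own expressions leading to its equation (10) may differ in notation.

```latex
% Schematic action for Gaussian model and measurement errors
% (H and R are generic placeholders; additive constants are dropped).
\begin{align}
  U_{\mathrm{dynamics}} &= \frac{1}{2\,\delta t}\sum_{n}
     \bigl[\mathbf{x}_{n+1}-\mathbf{x}_n-\delta t\,\mathbf{f}(\mathbf{x}_n,t_n)\bigr]^{\top}
     \mathbf{D}^{-1}
     \bigl[\mathbf{x}_{n+1}-\mathbf{x}_n-\delta t\,\mathbf{f}(\mathbf{x}_n,t_n)\bigr],\\
  U_{\mathrm{obs}} &= \frac{1}{2}\sum_{m}
     \bigl[\mathbf{y}_m-\mathbf{H}\,\mathbf{x}(t_m)\bigr]^{\top}
     \mathbf{R}^{-1}
     \bigl[\mathbf{y}_m-\mathbf{H}\,\mathbf{x}(t_m)\bigr],
  \qquad U = U_{\mathrm{dynamics}}+U_{\mathrm{obs}}.
\end{align}
```

Sampling exp(−U) then samples the conditional distribution of the discretized path, rather than locating only its mode.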

3.1 The Accelerated Sampler: Using Molecular Dynamics

  • The two hybrid samplers considered next take the correspondence between the estimation problem and the mechanics of a multi-particle system further.
  • Equally important, however, is to choose a matrix that does not increase the computational cost unreasonably: at worst (12) can increase the cost by O(T^2) due to the matrix/vector multiplies.
  • The gHMC proposes a dynamic over fictitious time that may no longer be Hamiltonian, with the purpose of overcoming the torpor of standard Monte Carlo sampling; a schematic Hamiltonian form is sketched after this list.
  • The Metropolis step corrects for τ-time discretization errors.
  • In the manner presented here the intermediate values, in fictitious time, for the conjugate position, are not saved.
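
Schematically, the hybrid samplers append conjugate momenta p to the path variables x and evolve the pair in fictitious time τ; the matrix discussed above (the paper's (12)) presumably plays the role of the preconditioning "mass" matrix M below. This is a generic sketch of the standard HMC construction, which gHMC then generalizes.

```latex
% Schematic fictitious-time dynamics and Metropolis correction for HMC.
\begin{align}
  H(\mathbf{x},\mathbf{p}) &= U(\mathbf{x}) + \tfrac{1}{2}\,\mathbf{p}^{\top}\mathbf{M}^{-1}\mathbf{p},\\
  \frac{d\mathbf{x}}{d\tau} &= \mathbf{M}^{-1}\mathbf{p},
  \qquad
  \frac{d\mathbf{p}}{d\tau} = -\nabla_{\mathbf{x}} U(\mathbf{x}),\\
  P_{\mathrm{accept}} &= \min\bigl(1,\,e^{-\Delta \hat{H}}\bigr),
\end{align}
```

where ΔĤ is the change in H over the discretized τ-trajectory, so that the Metropolis step exactly compensates for the τ-discretization error.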

4 Higher Order SDE Integrators

  • Nevertheless, described in what follows is how to modify the PIMC assimilation methodology to handle integrators other than Euler-Maruyama (note that the point-vortex/drifter system is in fact one for which a higher-order method would perhaps be more suitable for integrating the stochastic differential equations).
  • A concrete example illustrates this: Euler-Maruyama results from retaining the first three terms of the expansion discussed there (a stochastic Taylor expansion).
  • The modifications required are in the definition of the cost function, relevant to the accept/reject stage, and in the Hamiltonian of the molecular dynamics stage (see Section 3.1); a generic form, under a Gaussian one-step noise assumption, is sketched after this list.
  • Assume, for illustration, that the noise is additive and that D is a constant; then the probability distribution associated with the predictor step is, by definition, capable of generating samples independently of the distribution for the corrector, which changes (8).
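
A hedged way to state the required modification: if the chosen one-step integrator advances the state by a deterministic map G with Gaussian one-step noise of covariance Σ_n, the dynamics part of the cost function is rebuilt from the corresponding transition density. The form below is a generic sketch under that Gaussian assumption, not the paper's specific expressions.

```latex
% Generic one-step scheme and its contribution to the action
% (Gaussian one-step noise assumed; the log-determinant matters if \Sigma_n varies).
\begin{align}
  \mathbf{x}_{n+1} &= \mathbf{G}(\mathbf{x}_n) + \boldsymbol{\eta}_n,
  \qquad \boldsymbol{\eta}_n \sim \mathcal{N}(\mathbf{0},\boldsymbol{\Sigma}_n),\\
  U_{\mathrm{dynamics}} &= \sum_n \Bigl\{
     \tfrac{1}{2}\bigl[\mathbf{x}_{n+1}-\mathbf{G}(\mathbf{x}_n)\bigr]^{\top}
     \boldsymbol{\Sigma}_n^{-1}
     \bigl[\mathbf{x}_{n+1}-\mathbf{G}(\mathbf{x}_n)\bigr]
     + \tfrac{1}{2}\ln\det\boldsymbol{\Sigma}_n \Bigr\}.
\end{align}
```

Euler-Maruyama is recovered with G(x_n) = x_n + δt f(x_n, t_n) and Σ_n = δt D.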

5 Implementation Details

  • Presuming that the model is already in the form of (6), what is required is a code that implements the MCMC trials (consisting of a scheme to generate proposals, calculate the cost function, and accept/reject the proposal within a Markov Chain Monte Carlo context).
  • Proposals are generated via random walks of single degrees of freedom in MC and of sets of state vectors and time steps in gMC, or via molecular dynamics runs in HMC and gHMC.
  • Within the MCMC trials loop care must be taken when computing ∆Ĥ (see (14)): If Nx×T is very large it is possible to degrade the computation due to loss-of-precision errors in subtracting large numbers from one another.
  • In solving the Hamiltonian system of HMC there is an increase in storage, compared with MC, say, required by the Verlet integrator.
  • If the data being assimilated was produced by (6), a good starting guess can be produced by solving the SDE itself with no noise, as sketched below.
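
To make the last point concrete, below is a minimal Python sketch of a forward Euler-Maruyama integrator of the kind implied by (6); the same routine can generate synthetic data (noise on) or a noise-free run to serve as a starting guess for the chain. The function name, arguments, and interface are illustrative assumptions, not the paper's code.

```python
import numpy as np

def euler_maruyama_path(f, x0, dt, n_steps, sqrtD=None, rng=None):
    """Integrate x_{n+1} = x_n + dt*f(x_n, t_n) + sqrt(dt)*sqrtD @ xi_n.

    With sqrtD=None the noise is switched off, which gives the kind of
    deterministic run suggested above as a starting guess for the MCMC chain.
    (Illustrative sketch; names and interface are assumptions.)
    """
    rng = np.random.default_rng() if rng is None else rng
    nx = np.size(x0)
    x = np.empty((n_steps + 1, nx))
    x[0] = x0
    for n in range(n_steps):
        drift = dt * np.asarray(f(x[n], n * dt))
        noise = 0.0 if sqrtD is None else np.sqrt(dt) * sqrtD @ rng.standard_normal(nx)
        x[n + 1] = x[n] + drift + noise
    return x
```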

5.1 The MCMC Trial Loop

  • The MCMC trial section of the code, in its simplest guise, might look as in Algorithm 1.
  • Far more elegant and compact versions of this algorithm are possible, but the one presented here is easy to follow; a schematic stand-in is sketched after this list.
  • The while loop turns out to be a convenient construct from the point of view of monitoring progress in the MCMC loop.
  • The while loop can be implemented with added diagnostic tools that can examine the statistics of the cost function itself.
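
A minimal Python stand-in for such a trial loop is sketched below; it is not the paper's Algorithm 1 verbatim, and the proposal routine, the cost function U, and the diagnostic hook are placeholders for the components described above.

```python
import numpy as np

def mcmc_trials(propose, U, x, n_trials, rng=None, diagnostic=None):
    """Generic Metropolis trial loop: propose, evaluate the cost, accept or reject.

    propose(x, rng) returns a candidate path; U(x) returns its cost (the negative
    log of the conditional density, up to a constant).  A while loop is used, as
    noted above, so that progress and cost statistics can be monitored as the
    chain runs.  (Illustrative sketch; interfaces are assumptions.)
    """
    rng = np.random.default_rng() if rng is None else rng
    cost = U(x)
    accepted = 0
    trial = 0
    while trial < n_trials:
        x_new = propose(x, rng)
        cost_new = U(x_new)
        if np.log(rng.random()) < cost - cost_new:   # Metropolis accept/reject
            x, cost = x_new, cost_new
            accepted += 1
        if diagnostic is not None:
            diagnostic(trial, cost)                  # e.g. accumulate cost statistics
        trial += 1
    return x, accepted / n_trials
```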

5.2 Acceleration by Fictitious-Time Molecular Dynamics

  • The parameters dτ and J are used to change the ratio of accepted trials to total trials; a schematic proposal routine is sketched after this list.
  • gMC results from not using the molecular dynamics routine.
  • When the model is simple the subroutine representing Fk can be built by hand.
  • Here J is the number of fictitious time steps and N{·} is the number of MCMC trials required by each of the sampling methods.
  • This defines a “correlation length” for the method, which, incidentally, will pin down the value of N{·}: statistical stability of the results was obtained with N{·} on the order of 8 to 10 times the correlation length.
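
The sketch below shows the kind of fictitious-time proposal the bullets above refer to: J leapfrog (Verlet) steps of size dτ on the Hamiltonian H = U(x) + ½ pᵀM⁻¹p, followed by a Metropolis accept/reject on ΔĤ. The mass matrix, the gradient routine gradU, and the interface are assumptions; the paper's gHMC variant generalizes this dynamics, and gMC, as noted above, skips this routine entirely.

```python
import numpy as np

def hmc_trial(x, U, gradU, dtau, J, M_inv, rng):
    """One hybrid Monte Carlo trial: leapfrog in fictitious time, then Metropolis.

    H(x, p) = U(x) + 0.5 * p^T M^{-1} p.  (Illustrative sketch; the paper's gHMC
    uses a more general, possibly non-Hamiltonian, fictitious-time dynamics.)
    """
    M = np.linalg.inv(M_inv)                      # mass matrix (precompute in practice)
    p = rng.multivariate_normal(np.zeros(x.size), M)
    H_old = U(x) + 0.5 * p @ M_inv @ p
    x_new, p_new = x.copy(), p.copy()
    # J leapfrog (Verlet) steps of size dtau
    p_new -= 0.5 * dtau * gradU(x_new)
    for _ in range(J):
        x_new += dtau * M_inv @ p_new
        p_new -= dtau * gradU(x_new)
    p_new += 0.5 * dtau * gradU(x_new)            # convert the last full kick to a half kick
    H_new = U(x_new) + 0.5 * p_new @ M_inv @ p_new
    # The Metropolis step corrects for the dtau-discretization error in H
    if np.log(rng.random()) < H_old - H_new:
        return x_new, True
    return x, False
```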

5.3 Testing the Outcomes

  • Debugging the code is greatly facilitated by the relatively simple structure of the algorithm: the gMC strategy can be checked by turning off the molecular dynamics routine.
  • Finally, gHMC can be tested by checking for agreement with gMC and HMC; a simple form of such an agreement check is sketched after this list.
  • Monitoring the spectrum of the cost function is also very useful.
  • Another test requires generating the data y(tm) by solving the SDE.
  • For on the order of 10 million MCMC trials, the mean relative error was found to be very small for all of the methods that are used in the next section to evaluate the samplers.
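
One simple way to automate the cross-sampler agreement checks mentioned above is to compare the sample means of two chains against their combined standard errors; the sketch below is only an illustrative stand-in for the tests described in the paper, and it ignores the effective-sample-size correction that correlated MCMC output would require.

```python
import numpy as np

def means_agree(samples_a, samples_b, tol=3.0):
    """Check that two samplers agree on the mean of every degree of freedom.

    samples_* have shape (n_samples, n_dof); tol is measured in combined
    standard errors.  (Illustrative sketch; no autocorrelation correction.)
    """
    mean_a, mean_b = samples_a.mean(axis=0), samples_b.mean(axis=0)
    se_a = samples_a.std(axis=0, ddof=1) / np.sqrt(samples_a.shape[0])
    se_b = samples_b.std(axis=0, ddof=1) / np.sqrt(samples_b.shape[0])
    return bool(np.all(np.abs(mean_a - mean_b) <= tol * np.sqrt(se_a**2 + se_b**2)))
```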

6 Example Calculations

  • The example featured here consists of producing the conditional mean history and the uncertainty of a system comprised of Np passive drifters and Nv point vortices (see Friedrichs (1966)); the standard form of the vortex/drifter equations is sketched after this list.
  • They attribute failure in the estimation, mainly, to repeated crossings of saddle points by the drifter.
  • The estimated histories, shown in the following figures, were computed via HMC with J = 1.
  • The predicted standard deviation for the drifter position is plotted as a function of time in Figure 2b.
  • In Figure 2c the estimated vortical paths appear as connected dots; the connected circles are the vortical paths that were not used in the assimilation process but were produced by the numerical run that generated the drifter data.
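
For readers unfamiliar with the testbed, the standard planar point-vortex equations, and the advection of a passive drifter by the velocity field the vortices induce, take the form sketched below. The circulations Γ_k, the number of vortices, and the stochastic forcing used in the paper are not reproduced here, so this is background on the deterministic skeleton of the model rather than the paper's exact system.

```latex
% Standard planar point-vortex system with a passive drifter at (x_d, y_d);
% \Gamma_k are the circulations and r_{jk}^2 = (x_j-x_k)^2 + (y_j-y_k)^2.
\begin{align}
  \frac{dx_j}{dt} &= -\frac{1}{2\pi}\sum_{k\neq j}\Gamma_k\,\frac{y_j-y_k}{r_{jk}^{2}},
  &
  \frac{dy_j}{dt} &= \frac{1}{2\pi}\sum_{k\neq j}\Gamma_k\,\frac{x_j-x_k}{r_{jk}^{2}},\\
  \frac{dx_d}{dt} &= -\frac{1}{2\pi}\sum_{k}\Gamma_k\,\frac{y_d-y_k}{r_{dk}^{2}},
  &
  \frac{dy_d}{dt} &= \frac{1}{2\pi}\sum_{k}\Gamma_k\,\frac{x_d-x_k}{r_{dk}^{2}}.
\end{align}
```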

6.1 Comparison of the Different Samplers

  • Table 1 summarizes the computational efficiency of the methods in obtaining the results portrayed in Figure 2. HMCJ refers to HMC using J fictitious time steps.
  • The time of each run is quoted in seconds, all of the cases were run using 10 million MCMC trials.
  • The correlation time, in seconds, for gMC is only about 1.5 times better than that of MC and higher than a couple of the HMC entries; however, this strategy does not require a gradient.
  • In order for this method to have been competitive on this specific problem, the correlation length would have to be approximately 353, a correlation length that seemed impossible to achieve, though trial and error was the only strategy available to test this conjecture computationally.
  • It is also possible that the circulant matrix was well matched to the double-well problem but not as well matched to the Lagrangian problem, which would explain why, by the correlation-length metric, gHMC produced much more impressive results on the double-well problem than on the Lagrangian problem.

6.2 Assimilating Discretized-SDE Data

  • PIMC should deliver highly accurate estimates when the discretized SDE that was used in deriving the action is used to produce the data and the uncertainty in the data is small.
  • This should still hold in the hidden-variable case.
  • To illustrate this case, drifter data was generated by actually solving the SDE via (6), with all parameters equal to those used in generating the results in Figure 2.
  • Figure 3a shows the drifter path data as well as its estimated mean, and Figure 3b shows the assimilated vortical paths.
  • Granted, the data was considerably smoother and less noisy than that in Figure 2a, since the noise enters the creation of the data in a different way.

6.3 Uncertainties, Data Insertion Frequency

  • Considered here are two more examples that further illustrate how the method deals with changes in data parameters.
  • The data was the same as that used in generating Figure 2.
  • The first case corresponds to dropping the data insertion rate from every 4 time steps to every 15 time steps.
  • Figure 4 shows how the results are modified by a significantly lower contribution in the cost function from observations.
  • The insertion interval of every 15 steps was chosen because it is not commensurate with the inherent frequencies of motion of the dynamical system (a commensurate interval would either follow the twists and turns of the drifter data more faithfully or suppress the higher-frequency motions completely).

7 Conclusions

  • PIMC is a data assimilation scheme which makes use of the discretized model in the formulation of the cost function.
  • The preferred outcomes of the assimilation process are moments of histories conditioned on data, and thus the structure of the action is also influenced by a Bayesian interrelation between model and data.
  • In order to exploit the computational savings that the hybrid Monte Carlo methods afford, a code for the gradient of the action is needed.
  • gHMC had a higher computational overhead than the other methods explored here.
  • When linear and linearized methods fail, the matter of efficiency becomes a moot point, and the feasibility of getting an estimate, particularly one that is optimal or near-optimal, makes the methods developed for nonlinear/non-Gaussian problems viable.


A Path Integral Method for Data Assimilation

Juan M. Restrepo
Department of Mathematics and Department of Physics, University of Arizona, Tucson, AZ 85721 U.S.A.
Email address: restrepo@math.arizona.edu
Abstract

Described here is a path integral, sampling-based approach for data assimilation, of sequential data and evolutionary models. Since it makes no assumptions on linearity in the dynamics, or on Gaussianity in the statistics, it permits consideration of very general estimation problems. The method can be used for such tasks as computing a smoother solution, parameter estimation, and data/model initialization.

Speedup in the Monte Carlo sampling process is essential if the path integral method has any chance of being a viable estimator on moderately large problems. Here a variety of strategies are proposed and compared for their relative ability to improve the sampling efficiency of the resulting estimator. Provided as well are details useful in its implementation and testing.

The method is applied to a problem in which standard methods are known to fail, an idealized flow/drifter problem, which has been used as a testbed for assimilation strategies involving Lagrangian data. It is in this kind of context that the method may prove to be a useful assimilation tool in oceanic studies.

Key words: data assimilation, Lagrangian data assimilation, sampling, Markov Chain Monte Carlo, hybrid Monte Carlo.

PACS: 94.05sx, 92.60.Ry, 92.60.Wc, 93.65.+e, 92.70.-j, 02.50.-r

Preprint submitted to Elsevier Science 11 July 2007
1 Introduction

Data assimilation refers to finding an estimator that derives its statistical nature from a model as well as data; the model or the data may have a stochastic component, which are referred to as model error and measurement error, respectively. The model error may be a stochastic parametrization of unresolved physics, for example. The stochastic component of both the model and the measurements are assumed to be known. The state vector for which an estimator is sought can consist of dynamic quantities (e.g., physical variables) as well as parameters. Hence the dimension of the state variable and the dynamic variable may be different. Furthermore, the dimension of the measurement vector may be different from that of the state vector; if the former is smaller the assimilation is sometimes called a “hidden variable” state estimation problem.

A straightforward way to combine the influence of model and data in the estimation is to make use of Bayesian ideas. In the method to be presented the model and data are used to propose the likelihood and prior. In what follows we will focus on the time-dependent problem and thus the estimators are conditional moments of the history of the state variables. Typically the mean and uncertainty of the history is sought. However, the method presented here is sample-based and thus it is straightforward to compute sample moments of histories, without requiring added storage.
The conditional mean is the best estimate of the state, and the conditional covariance matrix provides a measure of its uncertainty. Of all estimators, the conditional mean is distinguished as the one which minimizes the trace of the conditional covariance matrix, i.e., the variance-minimizing estimator, or “smoother” estimate (a corresponding set of statistics using only the currently available set of measurements from prior times is called the “filter” estimate). For linear dynamics and Gaussian error statistics an optimal smoother of the history and its uncertainty is provided by variance-minimizing least-squares, the variational data assimilation approach or the Kalman filter/smoother (see Wunsch (1996) for details on these). Two commonly used techniques in nonlinear and possibly non-Gaussian contexts are the extended Kalman filter/smoother, and the ensemble Kalman filter/smoother which uses a linear forecast but makes use of sampling techniques to update the uncertainty (see Evensen (1997) and references contained therein). Ensemble Kalman approaches and the variational approach (4DVar) are presently being evaluated in operational forecast models for weather and climate. The physics underlying these types of problems is generally nonlinear, and to a certain degree, non-Gaussian. If one ignores the issue of statistical convergence of the estimates, the remarkable thing is that these estimation methods work better than one would think is possible, at least in controlled numerical experiments and under the stipulation that only the mean and the variance are to be examined. (See Gilmour et al. (2001), Lawless et al. (2005) for relevant discussions). Notwithstanding, it is not surprising that linear methods are expected to produce poor estimates (for example, not be capable of minimizing the variance) or outright failing in getting the first moment. The propensity to failure can be significantly exacerbated when there are very few observations and/or when the confidence in the quality of the data is low.
This paper, along with the one by Alexander et al. (2005), presents a sampling-based strategy to data assimilation we call the path integral Monte Carlo approach (PIMC). By construction PIMC will yield the optimal estimate for the discretized nonlinear/non-Gaussian problem. As will be seen, whether PIMC can track the estimate is not the main issue standing in the way of its application, but rather, whether the computational expense is justified for a given problem. Obviously, the computational cost becomes less of an issue on problems that are amenable to this method and impossible to more conventional assimilation techniques. Hence, the method would hardly be of interest wherever a conventional least-squares based method will be suitable. There are a number of data assimilation strategies which specifically target problems in which nonlinearity and/or non-Gaussianity pose major challenges. Among them are: the optimal approach of Kushner (1962) (see as well Kushner (1967a), Kushner (1967b). Also, see Stratonovich (1960) and Pardoux (1982)); the mean-field variational strategy of Eyink et al. (2004) (see also Eyink and Restrepo (2000), and Eyink et al. (2002)); particle methods, such as is described in Leeuwen (2003) and Kim et al. (2003); direct Monte Carlo sampling (see Pham (2001)); and the Langevin approaches, such as that of Hairer et al. (2005). The method of Kushner (1962) yields an optimal filter/smoother and can be used as a benchmark for other methods (see Eyink et al. (2004) for details of the methodology, its computational aspects, and how it was used as a benchmark in a simple problem). A common trait of the methods specifically developed for nonlinear/non-Gaussian problems is that they are computationally intensive and thus impractical for problems with a sufficiently large dimension, e.g. weather forecasting models. This dimensional/computational constraint, however, should not be construed as a practical failure; not every time dependent estimation problem of interest has dimensions comparable to those of the weather forecasting problem. Furthermore, these methods can be used to benchmark the results of operational methods for which optimality bounds are unavailable; seldom do tests involve checking statistical convergence beyond the first two moments.

PIMC is considerably easier to implement than many of the nonlinear non-Gaussian methods alluded to above. Its efficiency is crucially tied to applying fast sampling methods and thus this study reports on preliminary efforts to test some of these fast samplers.
The data assimilation problem and a description of PIMC appear in Sections 2 and 3 – see Alexander et al. (2005) for more details and background. For time dependent problems PIMC can be briefly described as follows: the evolution equation for the state vector x(t) in its discretized form is used to construct a function proportional to the likelihood distribution of {x(t_i)}, i = 0, ..., f, where t_0 = 0 < t_1 < ... < t_f. The specific form of the action U_dynamics depends on the statistical distribution of the model error. Another functional U_obs, associated with the discrete measurements y_1, y_2, ..., y_M, where the subscript labels the measurement times, is used to construct a function proportional to

Citations
Journal ArticleDOI
TL;DR: A general model that subsumes many parametric models for continuous data is presented; its special cases can all be inverted using exactly the same scheme, namely dynamic expectation maximization, and this inversion can be formulated as a simple neural network that may provide a useful metaphor for inference and learning in the brain.
Abstract: This paper describes a general model that subsumes many parametric models for continuous data. The model comprises hidden layers of state-space or dynamic causal models, arranged so that the output of one provides input to another. The ensuing hierarchy furnishes a model for many types of data, of arbitrary complexity. Special cases range from the general linear model for static data to generalised convolution models, with system noise, for nonlinear time-series analysis. Crucially, all of these models can be inverted using exactly the same scheme, namely, dynamic expectation maximization. This means that a single model and optimisation scheme can be used to invert a wide range of models. We present the model and a brief review of its inversion to disclose the relationships among, apparently, diverse generative models of empirical data. We then show that this inversion can be formulated as a simple neural network and may provide a useful metaphor for inference and learning in the brain.

771 citations


Cites methods from "A path integral method for data ass..."

  • ...Finally, [82] proposes a path integral approach to particle filtering for data assimilation....


Journal ArticleDOI
TL;DR: The authors use the setting of singular perturbations, which allows them to study both weak and strong interactions among the states of the chain and give the asymptotic behavior of many controlled stochastic dynamic systems when the perturbation parameter tends to 0.
Abstract: This is an important contribution to a modern area of applied probability that deals with nonstationary Markov chains in continuous time. This area is becoming increasingly useful in engineering, economics, communication theory, active networking, and so forth, where the Markov-chain system is subject to frequent fluctuations with clusters of states such that the chain fluctuates very rapidly among different states of a cluster but changes less rapidly from one cluster to another. The authors use the setting of singular perturbations, which allows them to study both weak and strong interactions among the states of the chain. This leads to simplifications through the averaging principle, aggregation, and decomposition. The main results include asymptotic expansions of the corresponding probability distributions, occupation measures, limiting normality, and exponential rates. These results give the asymptotic behavior of many controlled stochastic dynamic systems when the perturbation parameter tends to 0. The classical analytical method employs the asymptotic expansions of one-dimensional distributions of the Markov chain as solutions to a system of singularly perturbed ordinary differential equations. Indeed, the asymptotic behavior of solutions of such equations is well studied and understood. A more probabilistic approach also used by the authors is based on the tightness of the family of probability measures generated by the singularly perturbed Markov chain with the corresponding weak convergence properties. Both of these methods are illustrated by practical dynamic optimization problems, in particular by hierarchical production planning in a manufacturing system. An important contribution is the last chapter, Chapter 10, which describes numerical methods to solve various control and optimization problems involving Markov chains. Altogether the monograph consists of three parts, with Part I containing necessary, technically rather demanding facts about Markov processes (which in the nonstationary case are defined through martingales). Part II derives the mentioned asymptotic expansions, and Part III deals with several applications, including Markov decision processes and optimal control of stochastic dynamic systems. This technically demanding book may be out of reach of many readers of Technometrics. However, the use of Markov processes has become common for numerous real-life complex stochastic systems. To understand the behavior of these systems, the sophisticated mathematical methods described in this book may be indispensable.

475 citations

Journal ArticleDOI
TL;DR: This work uses a variational approximation method to analyze a population of real neurons recorded in a slice preparation of the zebra finch forebrain nucleus HVC, demonstrating that using only 1,500 ms of voltage recorded while injecting a complex current waveform, the values of 12 state variables and 72 parameters in a dynamical model can be estimated.
Abstract: Recent results demonstrate techniques for fully quantitative, statistical inference of the dynamics of individual neurons under the Hodgkin-Huxley framework of voltage-gated conductances. Using a variational approximation, this approach has been successfully applied to simulated data from model neurons. Here, we use this method to analyze a population of real neurons recorded in a slice preparation of the zebra finch forebrain nucleus HVC. Our results demonstrate that using only 1,500 ms of voltage recorded while injecting a complex current waveform, we can estimate the values of 12 state variables and 72 parameters in a dynamical model, such that the model accurately predicts the responses of the neuron to novel injected currents. A less complex model produced consistently worse predictions, indicating that the additional currents contribute significantly to the dynamics of these neurons. Preliminary results indicate some differences in the channel complement of the models for different classes of HVC neurons, which accords with expectations from the biology. Whereas the model for each cell is incomplete (representing only the somatic compartment, and likely to be missing classes of channels that the real neurons possess), our approach opens the possibility to investigate in modeling the plausibility of additional classes of channels the cell might possess, thus improving the models over time. These results provide an important foundational basis for building biologically realistic network models, such as the one in HVC that contributes to the process of song production and developmental vocal learning in songbirds.

67 citations


Cites methods from "A path integral method for data ass..."

  • ...Statistical properties of estimated quantities such as the expected state during a learning window, or in a prediction window following the learning period, along with the estimated errors of these expected values, are given by the evaluation of a path integral through the state space of the model [60–63]....


01 Jul 1992
TL;DR: Experimental results show that ADIFOR can handle real-life codes and that ADIFOR-generated codes are competitive with divided-difference approximations of derivatives, and studies suggest that the source-transformation approach to automatic differentiation may improve the time required to compute derivatives by orders of magnitude.
Abstract: The numerical methods employed in the solution of many scientific computing problems require the computation of derivatives of a function f: R^n → R^m. ADIFOR (Automatic Differentiation in FORtran) is a source transformation tool that accepts Fortran 77 code for the computation of a function and writes portable Fortran 77 code for the computation of the derivatives. In contrast to previous approaches, ADIFOR views automatic differentiation as a source transformation problem and employs the data analysis capabilities of the ParaScope Fortran programming environment. Experimental results show that ADIFOR can handle real-life codes and that ADIFOR-generated codes are competitive with divided-difference approximations of derivatives. In addition, studies suggest that the source-transformation approach to automatic differentiation may improve the time required to compute derivatives by orders of magnitude.

33 citations

Journal ArticleDOI
TL;DR: In this article, the process of transferring information from observations of a dynamical system to estimate the fixed parameters and unobserved states of a system model can be formulated as the evaluation of a discrete-time path integral in model state space.
Abstract: The process of transferring information from observations of a dynamical system to estimate the fixed parameters and unobserved states of a system model can be formulated as the evaluation of a discrete-time path integral in model state space. The observations serve as a guiding ‘potential’ working with the dynamical rules of the model to direct system orbits in state space. The path-integral representation permits direct numerical evaluation of the conditional mean path through the state space as well as conditional moments about this mean. Using a Monte Carlo method for selecting paths through state space, we show how these moments can be evaluated and demonstrate in an interesting model system the explicit influence of the role of transfer of information from the observations. We address the question of how many observations are required to estimate the unobserved state variables, and we examine the assumptions of Gaussianity of the underlying conditional probability. Copyright © 2010 Royal Meteorological Society

31 citations

References
Journal ArticleDOI
TL;DR: The algorithm can be used as a building block for solving other distributed graph problems, and can be slightly modified to run on a strongly-connected digraph to generate an Euler trail or to report that no Euler trail exists.

13,828 citations

BookDOI
01 Jan 1983

7,182 citations


Additional excerpts

  • ...See Gardiner (2004)....


BookDOI
01 Jan 2001
TL;DR: This book presents the first comprehensive treatment of Monte Carlo techniques, including convergence results and applications to tracking, guidance, automated target recognition, aircraft navigation, robot navigation, econometrics, financial modeling, neural networks, optimal control, optimal filtering, communications, reinforcement learning, signal enhancement, model averaging and selection.
Abstract: Monte Carlo methods are revolutionizing the on-line analysis of data in fields as diverse as financial modeling, target tracking and computer vision. These methods, appearing under the names of bootstrap filters, condensation, optimal Monte Carlo filters, particle filters and survival of the fittest, have made it possible to solve numerically many complex, non-standard problems that were previously intractable. This book presents the first comprehensive treatment of these techniques, including convergence results and applications to tracking, guidance, automated target recognition, aircraft navigation, robot navigation, econometrics, financial modeling, neural networks, optimal control, optimal filtering, communications, reinforcement learning, signal enhancement, model averaging and selection, computer vision, semiconductor design, population biology, dynamic Bayesian networks, and time series analysis. This will be of great value to students, researchers and practitioners, who have some basic knowledge of probability. Arnaud Doucet received the Ph. D. degree from the University of Paris-XI Orsay in 1997. From 1998 to 2000, he conducted research at the Signal Processing Group of Cambridge University, UK. He is currently an assistant professor at the Department of Electrical Engineering of Melbourne University, Australia. His research interests include Bayesian statistics, dynamic models and Monte Carlo methods. Nando de Freitas obtained a Ph.D. degree in information engineering from Cambridge University in 1999. He is presently a research associate with the artificial intelligence group of the University of California at Berkeley. His main research interests are in Bayesian statistics and the application of on-line and batch Monte Carlo methods to machine learning. Neil Gordon obtained a Ph.D. in Statistics from Imperial College, University of London in 1993. He is with the Pattern and Information Processing group at the Defence Evaluation and Research Agency in the United Kingdom. His research interests are in time series, statistical data analysis, and pattern recognition with a particular emphasis on target tracking and missile guidance.

6,574 citations

Book
01 Jun 1992
TL;DR: A monograph on modelling with stochastic differential equations and their numerical solution, covering stochastic Taylor expansions and strong and weak time-discrete approximations.
Abstract: 1 Probability and Statistics- 2 Probability and Stochastic Processes- 3 Ito Stochastic Calculus- 4 Stochastic Differential Equations- 5 Stochastic Taylor Expansions- 6 Modelling with Stochastic Differential Equations- 7 Applications of Stochastic Differential Equations- 8 Time Discrete Approximation of Deterministic Differential Equations- 9 Introduction to Stochastic Time Discrete Approximation- 10 Strong Taylor Approximations- 11 Explicit Strong Approximations- 12 Implicit Strong Approximations- 13 Selected Applications of Strong Approximations- 14 Weak Taylor Approximations- 15 Explicit and Implicit Weak Approximations- 16 Variance Reduction Methods- 17 Selected Applications of Weak Approximations- Solutions of Exercises- Bibliographical Notes

6,284 citations

Journal ArticleDOI
24 Jan 2005
TL;DR: It is shown that such an approach can yield an implementation of the discrete Fourier transform that is competitive with hand-optimized libraries, and the software structure that makes the current FFTW3 version flexible and adaptive is described.
Abstract: FFTW is an implementation of the discrete Fourier transform (DFT) that adapts to the hardware in order to maximize performance. This paper shows that such an approach can yield an implementation that is competitive with hand-optimized libraries, and describes the software structure that makes our current FFTW3 version flexible and adaptive. We further discuss a new algorithm for real-data DFTs of prime size, a new way of implementing DFTs by means of machine-specific single-instruction, multiple-data (SIMD) instructions, and how a special-purpose compiler can derive optimized implementations of the discrete cosine and sine transforms automatically from a DFT algorithm.

5,172 citations


"A path integral method for data ass..." refers methods in this paper

  • ...However, it should be noted that in the gHMC calculations FFTW was used (see Frigo and Johnson (2005)) and thus matrix-vector multiplies are performed in 4T(1 + log T) operations per conjugate position/momentum pair using an already optimized FFT code....


Frequently Asked Questions (1)
Q1. What are the contributions in "A path integral method for data assimilation" ?

In this paper, a path integral, sampling-based approach for data assimilation of sequential data and evolutionary models is described; it makes no assumptions of linearity in the dynamics or Gaussianity in the statistics, and can be used for such tasks as computing a smoother solution, parameter estimation, and data/model initialization.