Showing papers on "Monte Carlo method published in 1999"


Book
01 Jan 1999
TL;DR: This new edition contains five completely new chapters covering new developments; the first edition (1999) sold 4300 copies worldwide.
Abstract: We have sold 4300 copies worldwide of the first edition (1999). This new edition contains five completely new chapters covering new developments.

6,884 citations


Journal ArticleDOI
TL;DR: In this paper, the bias of the least squares dummy variable (LSDV) estimator for dynamic panel data models is shown to be sizeable, even when T = 20, and a corrected LSDV estimator is the best choice overall.

2,168 citations


Proceedings ArticleDOI
10 May 1999
TL;DR: The Monte Carlo localization method is introduced, where the probability density is represented by maintaining a set of samples that are randomly drawn from it, and it is shown that the resulting method is able to efficiently localize a mobile robot without knowledge of its starting location.
Abstract: To navigate reliably in indoor environments, a mobile robot must know where it is. Thus, reliable position estimation is a key problem in mobile robotics. We believe that probabilistic approaches are among the most promising candidates for providing a comprehensive and real-time solution to the robot localization problem. However, current methods still face considerable hurdles. In particular, the problems encountered are closely related to the type of representation used to represent probability densities over the robot's state space. Earlier work on Bayesian filtering with particle-based density representations opened up a new approach for mobile robot localization based on these principles. We introduce the Monte Carlo localization method, where we represent the probability density involved by maintaining a set of samples that are randomly drawn from it. By using a sampling-based representation we obtain a localization method that can represent arbitrary distributions. We show experimentally that the resulting method is able to efficiently localize a mobile robot without knowledge of its starting location. It is faster, more accurate and less memory-intensive than earlier grid-based methods.
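The sampling-based representation is compact enough to sketch. Below is a minimal Monte Carlo localization step in Python, assuming a user-supplied measurement likelihood; `beacon_likelihood` and all parameter values are illustrative stand-ins, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def mcl_step(particles, control, measurement, motion_noise, meas_model):
    """One Monte Carlo localization step on an (N, 3) array of pose
    samples [x, y, heading]: predict, weight, resample."""
    # Prediction: propagate every sample through the noisy motion model.
    moved = particles + control + rng.normal(0.0, motion_noise, particles.shape)
    # Correction: weight each sample by the measurement likelihood.
    weights = np.array([meas_model(measurement, pose) for pose in moved])
    weights /= weights.sum()
    # Resampling: draw an equally weighted sample set from the posterior.
    idx = rng.choice(len(moved), size=len(moved), p=weights)
    return moved[idx]

# Example: global localization against a single range beacon at the origin
# (hypothetical sensor model; starting location unknown).
def beacon_likelihood(z, pose):
    return np.exp(-0.5 * ((np.hypot(pose[0], pose[1]) - z) / 0.2) ** 2)

particles = rng.uniform(-5.0, 5.0, size=(1000, 3))
particles = mcl_step(particles, np.array([0.1, 0.0, 0.0]), 3.0, 0.05, beacon_likelihood)
```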

1,629 citations


Journal ArticleDOI
TL;DR: In this paper, a Monte Carlo simulation study was conducted to investigate the effects on structural equation modeling (SEM) fit indexes of sample size, estimation method, and model specification, and two primary conclusions were suggested: (a) some fit indexes appear to be noncomparable in terms of the information they provide about model fit for misspecified models and (b) estimation method strongly influenced almost all the fit indexes examined.
Abstract: A Monte Carlo simulation study was conducted to investigate the effects on structural equation modeling (SEM) fit indexes of sample size, estimation method, and model specification. Based on a balanced experimental design, samples were generated from a prespecified population covariance matrix and fitted to structural equation models with different degrees of model misspecification. Ten SEM fit indexes were studied. Two primary conclusions were suggested: (a) some fit indexes appear to be noncomparable in terms of the information they provide about model fit for misspecified models and (b) estimation method strongly influenced almost all the fit indexes examined, especially for misspecified models. These 2 issues do not seem to have drawn enough attention from SEM practitioners. Future research should study not only different models vis‐a‐vis model complexity, but a wider range of model specification conditions, including correctly specified models and models specified incorrectly to varying degrees.

1,516 citations


Journal ArticleDOI
TL;DR: In this article, the authors derived similar unit root tests for first-order autoregressive panel data models, assuming that the time dimension of the panel is fixed, and showed that the limiting distributions of the test statistics are normal.

1,138 citations


Journal ArticleDOI
TL;DR: In this paper, a nonlinear filtering theory is applied to unify the data assimilation and ensemble generation problem and to produce superior estimates of the probability distribution of the initial state of the atmosphere (or ocean) on regional or global scales.
Abstract: Knowledge of the probability distribution of initial conditions is central to almost all practical studies of predictability and to improvements in stochastic prediction of the atmosphere. Traditionally, data assimilation for atmospheric predictability or prediction experiments has attempted to find a single “best” estimate of the initial state. Additional information about the initial condition probability distribution is then obtained primarily through heuristic techniques that attempt to generate representative perturbations around the best estimate. However, a classical theory for generating an estimate of the complete probability distribution of an initial state given a set of observations exists. This nonlinear filtering theory can be applied to unify the data assimilation and ensemble generation problem and to produce superior estimates of the probability distribution of the initial state of the atmosphere (or ocean) on regional or global scales. A Monte Carlo implementation of the fully n...
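As a rough illustration of the Monte Carlo idea (not the paper's specific fully nonlinear filter), the sketch below implements a perturbed-observation ensemble analysis step for a single scalar observation; the linear observation operator `H` and the variances are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def ensemble_analysis(ensemble, obs, H, obs_var):
    """Perturbed-observation Monte Carlo update of an (N, d) state
    ensemble by one scalar observation obs with error variance obs_var.
    H is a (d,) linear observation operator."""
    n = len(ensemble)
    anomalies = ensemble - ensemble.mean(axis=0)
    hx = ensemble @ H                                      # predicted observations, (N,)
    innov_var = hx.var(ddof=1) + obs_var                   # sampled innovation variance
    cross_cov = anomalies.T @ (hx - hx.mean()) / (n - 1)   # state-observation covariance
    gain = cross_cov / innov_var                           # Monte Carlo Kalman gain
    perturbed_obs = obs + rng.normal(0.0, np.sqrt(obs_var), n)
    return ensemble + np.outer(perturbed_obs - hx, gain)
```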

967 citations


Journal ArticleDOI
TL;DR: In this article, a simple Monte Carlo approach is provided for approximating Bayesian credible and highest probability density (HPD) intervals when a sample of the relevant parameters can be generated from their marginal posterior distribution by a Markov chain Monte Carlo (MCMC) sampling algorithm, and a further Monte Carlo method computes HPD intervals from a sample drawn from an importance sampling distribution.
Abstract: This article considers how to estimate Bayesian credible and highest probability density (HPD) intervals for parameters of interest and provides a simple Monte Carlo approach to approximate these Bayesian intervals when a sample of the relevant parameters can be generated from their respective marginal posterior distribution using a Markov chain Monte Carlo (MCMC) sampling algorithm. We also develop a Monte Carlo method to compute HPD intervals for the parameters of interest from the desired posterior distribution using a sample from an importance sampling distribution. We apply our methodology to a Bayesian hierarchical model that has a posterior density containing analytically intractable integrals that depend on the (hyper) parameters. We further show that our methods are useful not only for calculating the HPD intervals for the parameters of interest but also for computing the HPD intervals for functions of the parameters. Necessary theory is developed and illustrative examples—including a si...
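For a unimodal marginal posterior, the Monte Carlo HPD estimate reduces to a computation on the sorted draws; the sketch below follows that standard construction and is a simplification of the article's method, which also covers importance-weighted samples.

```python
import numpy as np

def hpd_interval(draws, alpha=0.05):
    """Estimate a 100(1 - alpha)% HPD interval from MCMC draws of a
    unimodal marginal posterior: among all intervals containing a
    fraction (1 - alpha) of the sorted draws, the shortest one
    approximates the HPD region."""
    x = np.sort(np.asarray(draws))
    n = len(x)
    m = int(np.floor((1 - alpha) * n))   # draws each candidate interval must span
    widths = x[m:] - x[:n - m]           # widths of all candidate intervals
    j = int(np.argmin(widths))
    return x[j], x[j + m]

# Example: 95% HPD interval of a skewed Gamma(2, 1) posterior sample.
rng = np.random.default_rng(2)
lo, hi = hpd_interval(rng.gamma(2.0, 1.0, 50000))
```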

844 citations


Journal ArticleDOI
TL;DR: The spectrum of glueballs below 4 GeV in the SU(3) pure-gauge theory was investigated using Monte Carlo simulations of gluons on several anisotropic lattices with spatial grid separations ranging from 0.1 to 0.4 fm as mentioned in this paper.
Abstract: The spectrum of glueballs below 4 GeV in the SU(3) pure-gauge theory is investigated using Monte Carlo simulations of gluons on several anisotropic lattices with spatial grid separations ranging from 0.1 to 0.4 fm. Systematic errors from discretization and finite volume are studied, and the continuum spin quantum numbers are identified. Care is taken to distinguish single glueball states from two-glueball and torelon-pair states. Our determination of the spectrum significantly improves upon previous Wilson action calculations.

832 citations


Journal ArticleDOI
TL;DR: In this paper, the information in an ensemble previously generated by any Monte Carlo direct search method is used to guide a resampling of the parameter space, which yields measures of resolution and trade-off in the model parameters without further solving of the forward problem.
Abstract: SUMMARY Monte Carlo direct search methods, such as genetic algorithms, simulated annealing etc., are often used to explore a finite dimensional parameter space. They require the solving of the forward problem many times, that is, making predictions of observables from an earth model. The resulting ensemble of earth models represents all 'information' collected in the search process. Search techniques have been the subject of much study in geophysics; less attention is given to the appraisal of the ensemble. Often inferences are based on only a small subset of the ensemble, and sometimes a single member. This paper presents a new approach to the appraisal problem. To our knowledge this is the first time the general case has been addressed, that is, how to infer information from a complete ensemble, previously generated by any search method. The essence of the new approach is to use the information in the available ensemble to guide a resampling of the parameter space. This requires no further solving of the forward problem, but from the new 'resampled' ensemble we are able to obtain measures of resolution and trade-off in the model parameters, or any combinations of them. The new ensemble inference algorithm is illustrated on a highly non-linear waveform inversion problem. It is shown how the computation time and memory requirements scale with the dimension of the parameter space and size of the ensemble. The method is highly parallel, and may easily be distributed across several computers. Since little is assumed about the initial ensemble of earth models, the technique is applicable to a wide variety of situations. For example, it may be applied to perform 'error analysis' using the ensemble generated by a genetic algorithm, or any other direct search method. Key words: numerical techniques, receiver functions, waveform inversion.
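A crude stand-in for the appraisal step can be sketched in a few lines of Python: reweight the previously generated ensemble by its data misfit and resample it, then read off means and parameter trade-offs from the resampled set, with no new forward solves. The exponential misfit weighting here is an illustrative assumption, not the paper's neighbourhood-based resampling.

```python
import numpy as np

rng = np.random.default_rng(3)

def appraise_ensemble(models, misfits, n_draws=10000):
    """Reweight an existing (N, d) ensemble of earth models by
    exp(-misfit), resample it, and return posterior-style means and the
    parameter covariance (whose off-diagonals expose trade-offs).
    No further forward problems are solved."""
    misfits = np.asarray(misfits, dtype=float)
    w = np.exp(-(misfits - misfits.min()))   # stabilized importance weights
    w /= w.sum()
    idx = rng.choice(len(models), size=n_draws, p=w)
    resampled = np.asarray(models)[idx]
    return resampled.mean(axis=0), np.cov(resampled, rowvar=False)
```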

817 citations


Journal ArticleDOI
TL;DR: In this paper, the TraPPE-UA force field is extended to include Lennard-Jones interaction parameters for methine and quaternary carbon groups by fitting to critical temperatures and saturated liquid densities of branched alkanes, and vapor-liquid coexistence curves are compared against the PRF force field [Poncela et al., Mol. Phys. 1997, 91, 189].
Abstract: A new generalization of the configurational-bias Monte Carlo method is presented which avoids the problems inherent in a Boltzmann rejection scheme for sequentially generating bond bending and torsional angles. The TraPPE-UA (transferable potentials for phase equilibria united-atom) force field is extended to include Lennard-Jones interaction parameters for methine and quaternary carbon groups by fitting to critical temperatures and saturated liquid densities of branched alkanes. Configurational-bias Monte Carlo simulations in the Gibbs ensemble were carried out to determine the vapor−liquid coexistence curves (VLCC) for six alkane isomers with four to eight carbons. Results are presented for two united-atom alkane force fields: PRF [Poncela, et al. Mol. Phys. 1997, 91, 189] and TraPPE-UA. Standard-state specific densities for the TraPPE-UA model were studied by simulations in the isobaric−isothermal ensemble. It is found that a single set of methyl, methylene, methine, and quaternary carbon parameters g...

803 citations


Journal ArticleDOI
TL;DR: It is shown that, in expectation, z*_n is a lower bound on z* and that this bound monotonically improves as n increases, and confidence intervals are constructed on the optimality gap for any candidate solution x̂ to the stochastic program (SP).

Journal ArticleDOI
TL;DR: In this paper, two new implementations of the EM algorithm are proposed for maximum likelihood fitting of generalized linear mixed models using random sampling to construct Monte Carlo approximations at the E-step.
Abstract: Summary. Two new implementations of the EM algorithm are proposed for maximum likelihood fitting of generalized linear mixed models. Both methods use random (independent and identically distributed) sampling to construct Monte Carlo approximations at the E-step. One approach involves generating random samples from the exact conditional distribution of the random effects (given the data) by rejection sampling, using the marginal distribution as a candidate. The second method uses a multivariate t importance sampling approximation. In many applications the two methods are complementary. Rejection sampling is more efficient when sample sizes are small, whereas importance sampling is better with larger sample sizes. Monte Carlo approximation using random samples allows the Monte Carlo error at each iteration to be assessed by using standard central limit theory combined with Taylor series methods. Specifically, we construct a sandwich variance estimate for the maximizer at each approximate E-step. This suggests a rule for automatically increasing the Monte Carlo sample size after iterations in which the true EM step is swamped by Monte Carlo error. In contrast, techniques for assessing Monte Carlo error have not been developed for use with alternative implementations of Monte Carlo EM algorithms utilizing Markov chain Monte Carlo E-step approximations. Three different data sets, including the infamous salamander data of McCullagh and Nelder, are used to illustrate the techniques and to compare them with the alternatives. The results show that the methods proposed can be considerably more efficient than those based on Markov chain Monte Carlo algorithms. However, the methods proposed may break down when the intractable integrals in the likelihood function are of high dimension.
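The flavor of a Monte Carlo E-step is easiest to see on a toy random-effects model in which the conditional distribution of the random effects is available in closed form (standing in for the paper's rejection and importance samplers); the model and every name below are illustrative assumptions, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(4)

def mcem_variance(y, sigma2=1.0, iters=50, m=2000):
    """MCEM for the toy model y_i | u_i ~ N(u_i, 1), u_i ~ N(0, sigma2).
    E-step: draw i.i.d. samples of each random effect u_i from its exact
    conditional given y_i and estimate E[u_i^2 | y_i] by Monte Carlo.
    M-step: the complete-data maximizer of sigma2 is the average of
    those second moments."""
    y = np.asarray(y, dtype=float)
    for _ in range(iters):
        post_var = 1.0 / (1.0 + 1.0 / sigma2)   # conditional variance of u_i
        post_mean = post_var * y                # conditional mean of u_i
        u = rng.normal(post_mean, np.sqrt(post_var), size=(m, len(y)))
        sigma2 = (u ** 2).mean()                # M-step update
    return sigma2
```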

Journal ArticleDOI
TL;DR: In this paper, a consistent two-step estimation procedure is proposed for a system of equations with limited dependent variables, and Monte Carlo simulation results suggest the procedure outperforms an existing two-stage method.
Abstract: A consistent two-step estimation procedure is proposed for a system of equations with limited dependent variables. Monte Carlo simulation results suggest the procedure outperforms an existing two-step method.

01 Jan 1999
TL;DR: This thesis phrases the application of terrain navigation in the Bayesian framework, and develops a numerical approximation to the optimal but intractable recursive solution, and derives explicit expressions for the Cramer-Rao bound of general nonlinear filtering, smoothing and prediction problems.
Abstract: Recursive estimation deals with the problem of extracting information about parameters, or states, of a dynamical system in real time, given noisy measurements of the system output. Recursive estimation plays a central role in many applications of signal processing, system identification and automatic control. In this thesis we study nonlinear and non-Gaussian recursive estimation problems in discrete time. Our interest in these problems stems from the airborne applications of target tracking, and autonomous aircraft navigation using terrain information. In the Bayesian framework of recursive estimation, both the sought parameters and the observations are considered as stochastic processes. The conceptual solution to the estimation problem is found as a recursive expression for the posterior probability density function of the parameters conditioned on the observed measurements. This optimal solution to nonlinear recursive estimation is usually impossible to compute in practice, since it involves several integrals that lack analytical solutions. We phrase the application of terrain navigation in the Bayesian framework, and develop a numerical approximation to the optimal but intractable recursive solution. The designed point-mass filter computes a discretized version of the posterior filter density in a uniform mesh over the interesting region of the parameter space. Both the uniform mesh resolution and the grid point locations are automatically adjusted at each iteration of the algorithm. This Bayesian point-mass solution is shown to yield high navigation performance in a simulated realistic environment. Even though the optimal Bayesian solution is intractable to implement, the performance of the optimal solution is assessable and can be used for comparative evaluation of suboptimal implementations. We derive explicit expressions for the Cramer-Rao bound of general nonlinear filtering, smoothing and prediction problems. We consider both the cases of random and nonrandom modeling of the parameters. The bounds are recursively expressed and are connected to linear recursive estimation. The newly developed Cramer-Rao bounds are applied to the terrain navigation problem, and the point-mass filter is verified to reach the bound in exhaustive simulations. The uniform mesh of the point-mass filter limits it to estimation problems of low dimension. Monte Carlo methods offer an alternative approach to recursive estimation and promise tractable solutions to general high dimensional estimation problems. We provide a review over the active field of statistical Monte Carlo methods. In particular, we study the particle filters for recursive estimation. Three different particle filters are applied to terrain navigation, and evaluated against the Cramer-Rao bound and the point-mass filter. The particle filters utilize an adaptive grid representation of the filter density and are shown to yield a performance equal to the point-mass method. A Markov Chain Monte Carlo (MCMC) method is developed for a highly complex data association problem in target tracking. This algorithm is compared to previously proposed methods and is shown to yield competitive results in a simulation study.
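A fixed-grid, one-dimensional version of the point-mass recursion (without the thesis's adaptive mesh) can be sketched as follows; the Gaussian process-noise kernel is an illustrative assumption.

```python
import numpy as np

def point_mass_step(grid, prior, likelihood, process_std):
    """One point-mass filter iteration on a uniform 1-D grid.
    Measurement update: pointwise Bayes rule on the grid.
    Time update: discrete convolution with the process-noise kernel."""
    dx = grid[1] - grid[0]
    posterior = prior * likelihood                 # Bayes rule, gridpoint by gridpoint
    posterior /= posterior.sum() * dx              # renormalize the discretized density
    offsets = grid - grid[len(grid) // 2]          # kernel support centred on zero
    kernel = np.exp(-0.5 * (offsets / process_std) ** 2)
    kernel /= kernel.sum()
    predicted = np.convolve(posterior, kernel, mode="same")
    return predicted / (predicted.sum() * dx)
```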

Journal ArticleDOI
TL;DR: In this paper, the authors examined the asymptotic and finite-sample properties of tests for equal forecast accuracy and encompassing applied to 1-step ahead forecasts from nested parametric models.
Abstract: We examine the asymptotic and finite-sample properties of tests for equal forecast accuracy and encompassing applied to 1-step ahead forecasts from nested parametric models. We first derive the asymptotic distributions of two standard tests and one new test of encompassing. Tables of asymptotically valid critical values are provided. Monte Carlo methods are then used to evaluate the size and power of the tests of equal forecast accuracy and encompassing. The simulations indicate that post-sample tests can be reasonably well sized. Of the post-sample tests considered, the encompassing test proposed in this paper is the most powerful. We conclude with an empirical application regarding the predictive content of unemployment for inflation.
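For orientation, the sketch below computes a generic Diebold-Mariano-style equal-accuracy statistic on squared-error loss; the paper's central point is that for nested models such statistics have nonstandard limiting distributions, so its tabulated critical values, not N(0,1), apply in that case.

```python
import numpy as np

def equal_accuracy_tstat(errors_1, errors_2):
    """t-statistic on the loss differential d_t = e1_t^2 - e2_t^2 of two
    competing 1-step-ahead forecast error series. For nested models the
    limiting distribution is nonstandard, so compare against tabulated
    critical values rather than the standard normal."""
    d = np.asarray(errors_1) ** 2 - np.asarray(errors_2) ** 2
    return d.mean() / np.sqrt(d.var(ddof=1) / len(d))
```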

Journal ArticleDOI
TL;DR: This article studies the small sample behavior of several test statistics that are based on the maximum likelihood estimator, but are designed to perform better with nonnormal data.
Abstract: Structural equation modeling is a well-known technique for studying relationships among multivariate data. In practice, high dimensional nonnormal data with small to medium sample sizes are very common, and large sample theory, on which almost all modeling statistics are based, cannot be invoked for model evaluation with test statistics. The most natural method for nonnormal data, the asymptotically distribution free procedure, is not defined when the sample size is less than the number of nonduplicated elements in the sample covariance. Since normal theory maximum likelihood estimation remains defined for intermediate to small sample size, it may be invoked but with the probable consequence of distorted performance in model evaluation. This article studies the small sample behavior of several test statistics that are based on the maximum likelihood estimator, but are designed to perform better with nonnormal data. We aim to identify statistics that work reasonably well for a range of small sample sizes and distribution conditions. Monte Carlo results indicate that Yuan and Bentler's recently proposed F-statistic performs satisfactorily.

Journal ArticleDOI
TL;DR: In this article, a fast and accurate computer simulation program for electron drift and diffusion in gases under the influence of electric and magnetic fields is described and some calculated results are compared to precise experimental results in carbon tetrafluoride and methane mixtures.
Abstract: A fast and accurate computer simulation program for electron drift and diffusion in gases under the influence of electric and magnetic fields is described and some calculated results are compared to precise experimental results in carbon tetrafluoride and methane mixtures. The calculated Lorentz angles are shown to be typically within 1° of the measured experimental values. The program allows the electric and magnetic fields to be at any angle to each other.

Journal ArticleDOI
TL;DR: Dose calculation methods for photon beams are reviewed in the context of radiation therapy treatment planning and state-of-the-art methods based on point or pencil kernels, which are derived through Monte Carlo simulations, to characterize secondary particle transport are presented in some detail.
Abstract: Dose calculation methods for photon beams are reviewed in the context of radiation therapy treatment planning. Following introductory summaries on photon beam characteristics and clinical requirements on dose calculations, calculation methods are described in order of increasing explicitness of particle transport. The simplest are dose ratio factorizations limited to point dose estimates useful for checking other more general, but also more complex, approaches. Some methods incorporate detailed modelling of scatter dose through differentiation of measured data combined with various integration techniques. State-of-the-art methods based on point or pencil kernels, which are derived through Monte Carlo simulations, to characterize secondary particle transport are presented in some detail. Explicit particle transport methods, such as Monte Carlo, are briefly summarized. The extensive literature on beam characterization and handling of treatment head scatter is reviewed in the context of providing phase space data for kernel based and/or direct Monte Carlo dose calculations. Finally, a brief overview of inverse methods for optimization and dose reconstruction is provided.
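The core of a point-kernel method is a convolution of the energy released in the medium (TERMA) with a Monte Carlo-derived deposition kernel. The sketch below is a minimal homogeneous-medium version with a spatially invariant kernel; real algorithms handle heterogeneities and kernel tilting, and the Gaussian kernel stand-in here is purely illustrative.

```python
import numpy as np
from scipy.signal import fftconvolve

def point_kernel_dose(terma, kernel):
    """Homogeneous-medium point-kernel dose: convolve the 3-D TERMA grid
    (total energy released per unit mass) with a spatially invariant,
    Monte Carlo-derived energy-deposition kernel."""
    return fftconvolve(terma, kernel, mode="same")

# Example: a pencil-beam-like TERMA column blurred by a Gaussian kernel stand-in.
z, y, x = np.mgrid[-8:9, -8:9, -8:9]
kernel = np.exp(-0.5 * (x**2 + y**2 + z**2) / 2.0**2)
kernel /= kernel.sum()
terma = np.zeros((64, 64, 64))
terma[:, 32, 32] = np.exp(-0.05 * np.arange(64))   # exponential attenuation with depth
dose = point_kernel_dose(terma, kernel)
```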

Journal ArticleDOI
TL;DR: A general analytical framework quantifying the spectral efficiency of cellular systems with variable-rate transmission is introduced, and Monte Carlo simulations are developed to estimate the value of this efficiency for average interference conditions.
Abstract: A general analytical framework quantifying the spectral efficiency of cellular systems with variable-rate transmission is introduced. This efficiency, the area spectral efficiency, defines the sum of the maximum average data rates per unit bandwidth per unit area supported by a cell's base station. Expressions for this efficiency as a function of the reuse distance for the worst and best case interference configurations are derived. Moreover, Monte Carlo simulations are developed to estimate the value of this efficiency for average interference conditions. Both fully loaded and partially loaded cellular systems are investigated. The effect of random user location is taken into account, and the impact of lognormal shadowing and Nakagami (1960) multipath fading is also studied.
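A minimal Monte Carlo estimate in this spirit might look as follows; every parameter value, the lognormal-shadowing-only channel, and the per-cell area normalization are illustrative assumptions rather than the paper's exact model (which also includes Nakagami fading and best/worst-case configurations).

```python
import numpy as np

rng = np.random.default_rng(5)

def ase_monte_carlo(reuse_distance, cell_radius=1.0, path_loss=4.0,
                    shadow_db=8.0, n_interferers=6, trials=20000):
    """Average-interference sketch: drop the desired user uniformly in
    the cell, place co-channel interferers at the reuse distance with
    lognormal shadowing, and average the Shannon rate per unit
    bandwidth, normalized by the area claimed by one cell."""
    rates = np.empty(trials)
    for t in range(trials):
        # Uniform over the cell disc, kept off the base station itself.
        r = cell_radius * np.sqrt(rng.uniform(0.01, 1.0))
        shadow = 10.0 ** (rng.normal(0.0, shadow_db, 1 + n_interferers) / 10.0)
        signal = shadow[0] * r ** (-path_loss)
        interference = np.sum(shadow[1:]) * reuse_distance ** (-path_loss)
        rates[t] = np.log2(1.0 + signal / interference)
    area = np.pi * (reuse_distance / 2.0) ** 2
    return rates.mean() / area
```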

Journal ArticleDOI
TL;DR: In this paper, a cluster update (the "operator loop") is developed within the framework of a numerically exact quantum Monte Carlo method based on the power series expansion of exp(−βH) (stochastic series expansion), which is applicable to a wide class of lattice Hamiltonians.
Abstract: A cluster update (the "operator loop") is developed within the framework of a numerically exact quantum Monte Carlo method based on the power series expansion of exp(−βH) (stochastic series expansion). The method is generally applicable to a wide class of lattice Hamiltonians for which the expansion is positive definite. For some important models the operator-loop algorithm is more efficient than loop updates previously developed for "worldline" simulations. The method is here tested on a two-dimensional anisotropic Heisenberg antiferromagnet in a magnetic field.

Journal ArticleDOI
01 Jan 1999-Langmuir
TL;DR: Gelb et al. as mentioned in this paper used the Barrett−Joyner−Halenda (BJH) method to yield pore size distributions, which are tested against exact pore size distributions directly measured from the pore structures.
Abstract: We have prepared a series of molecular models of porous glass using a recently developed procedure (Gelb, L. D.; Gubbins, K. E. Langmuir 1998, 14, 2097) that mimics the experimental processes that produce Vycor and controlled-pore glasses. We calculate nitrogen adsorption isotherms in these precisely characterized model glasses using Monte Carlo simulations. These isotherms are analyzed using the Barrett−Joyner−Halenda (BJH) method to yield pore size distributions, which are tested against exact pore size distributions directly measured from the pore structures. The BJH method yields overly sharp distributions that are systematically shifted (by about 1 nm) to lower pore sizes than those from our geometric method.

Journal ArticleDOI
TL;DR: The aim of the paper is to provide an alternative sampling algorithm to rejection‐based methods and other sampling approaches such as the Metropolis–Hastings algorithm.
Abstract: Summary. We demonstrate the use of auxiliary (or latent) variables for sampling non-standard densities which arise in the context of the Bayesian analysis of non-conjugate and hierarchical models by using a Gibbs sampler. Their strategic use can result in a Gibbs sampler having easily sampled full conditionals. We propose such a procedure to simplify or speed up the Markov chain Monte Carlo algorithm. The strength of this approach lies in its generality and its ease of implementation. The aim of the paper, therefore, is to provide an alternative sampling algorithm to rejection-based methods and other sampling approaches such as the Metropolis-Hastings algorithm.
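A classic auxiliary-variable construction of this kind is the slice sampler: introducing u ~ Uniform(0, f(x)) makes both full conditionals uniform draws. The sketch below uses a stepping-out/shrinkage scheme and is illustrative rather than the paper's model-specific constructions.

```python
import numpy as np

rng = np.random.default_rng(6)

def slice_sample(logf, x0, width=1.0, n=5000):
    """Auxiliary-variable (slice) sampler for a 1-D target with log
    density logf: alternate a uniform draw of the auxiliary level u
    under the density with a uniform draw of x from the slice
    {x : f(x) > u}, located by stepping out and shrinkage."""
    x, out = x0, np.empty(n)
    for i in range(n):
        logu = logf(x) + np.log(rng.uniform())   # auxiliary level below f(x)
        lo = x - width * rng.uniform()           # randomly positioned interval
        hi = lo + width
        while logf(lo) > logu:                   # step out to the left
            lo -= width
        while logf(hi) > logu:                   # step out to the right
            hi += width
        while True:                              # shrinkage sampling on the slice
            x_new = rng.uniform(lo, hi)
            if logf(x_new) > logu:
                x = x_new
                break
            if x_new < x:
                lo = x_new
            else:
                hi = x_new
        out[i] = x
    return out

# Example: draws from a standard normal target.
draws = slice_sample(lambda t: -0.5 * t**2, 0.0)
```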

Journal ArticleDOI
TL;DR: In this article, the authors investigate several approaches for constructing Monte Carlo realizations of the merging history of virialized dark matter haloes ('merger trees') using the extended Press-Schechter formalism.
Abstract: We investigate several approaches for constructing Monte Carlo realizations of the merging history of virialized dark matter haloes ('merger trees') using the extended Press-Schechter formalism. We describe several unsuccessful methods in order to illustrate some of the difficult aspects of this problem. We develop a practical method that leads to the reconstruction of the mean quantities that can be derived from the Press-Schechter model. This method is convenient, computationally efficient, and works for any power spectrum or background cosmology. In addition, we investigate statistics that describe the distribution of the number of progenitors and their masses as a function of redshift.

Journal ArticleDOI
TL;DR: This new version of VMC (now called XVMC) is more efficient than EGS4/PRESTA photon dose calculation by a factor of 15-20, so a standard treatment plan for photons can be calculated by Monte Carlo in about 20 min. on a "normal" personal computer.
Abstract: A new Monte Carlo algorithm for 3D photon dose calculation in radiation therapy is presented, which is based on the previously developed Voxel Monte Carlo (VMC) for electron beams. The main result is that this new version of VMC (now called XVMC) is more efficient than EGS4/PRESTA photon dose calculation by a factor of 15–20. Therefore, a standard treatment plan for photons can be calculated by Monte Carlo in about 20 min. on a "normal" personal computer. The improvement is caused mainly by the fast electron transport algorithm and ray tracing technique, and an initial ray tracing method to calculate the number of electrons created in each voxel by the primary photon beam. The model was tested in comparison to calculations by EGS4 using several fictive phantoms. In most cases a good coincidence has been found between both codes. Only within lung substitute have dose differences been observed.

Journal ArticleDOI
TL;DR: In this article, a general Markov chain Monte Carlo (MCMC) strategy based on Metropolis-Hastings sampling is described for Bayesian inference in complex item response theory (IRT) settings.
Abstract: Patz and Junker (1999) describe a general Markov chain Monte Carlo (MCMC) strategy, based on Metropolis-Hastings sampling, for Bayesian inference in complex item response theory (IRT) settings. The...

Journal ArticleDOI
TL;DR: This paper presents a time sequential Monte Carlo simulation technique which can be used in complex distribution system evaluation, and describes a computer program developed to implement this technique.
Abstract: Analytical techniques for distribution system reliability assessment can be effectively used to evaluate the mean values of a wide range of system reliability indices. This approach is usually used when teaching the basic concepts of distribution system reliability evaluation. The mean or expected value, however, does not provide any information on the inherent variability of an index. Appreciation of this inherent variability is an important parameter in comprehending the actual reliability experienced by a customer and should be recognized when teaching distribution system reliability evaluation. This paper presents a time sequential Monte Carlo simulation technique which can be used in complex distribution system evaluation, and describes a computer program developed to implement this technique. General distribution system elements, operating models and radial configurations are considered in the program. The results obtained using both analytical and simulation methods are compared. The mean values and the probability distributions for both load point and system indices are illustrated using a practical test system.
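A bare-bones version of the time sequential idea for a single load point is sketched below: alternate exponentially distributed up and down times over many simulated years and collect the distribution of annual outage duration, not just its mean. The failure and repair parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)

def simulate_load_point(failures_per_year, repair_hours, years=1000):
    """Time-sequential Monte Carlo for one load point: draw exponential
    up-times and repair times on a running clock and accumulate outage
    hours per simulated year. The returned array gives the probability
    distribution of the annual index, not only its expected value.
    (Outages are attributed wholly to the year in which they start.)"""
    hours_per_year = 8760.0
    annual_downtime = np.zeros(years)
    t, year = 0.0, 0
    while year < years:
        t += rng.exponential(hours_per_year / failures_per_year)  # time to failure
        year = int(t // hours_per_year)
        if year >= years:
            break
        restore = rng.exponential(repair_hours)                   # restoration time
        annual_downtime[year] += restore
        t += restore
    return annual_downtime

# Example: mean gives the expected annual outage; a histogram shows its variability.
downtime = simulate_load_point(failures_per_year=0.5, repair_hours=4.0)
```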

Journal ArticleDOI
TL;DR: McStas as mentioned in this paper is a freeware program package for neutron simulations based on a special meta-language designed for Monte Carlo ray-tracing calculations and includes a library of spectrometer components and visualization tools.
Abstract: We present version 1.0 of the McStas freeware program package for neutron simulations. The package is based upon a special meta-language designed for Monte Carlo ray-tracing calculations and includes a library of spectrometer components and visualization tools. A detailed simulation of the Risø triple-axis spectrometer TAS1 shows good agreement with experiments. We invite neutron researchers to use this package and to contribute to the development of the component library.

Journal ArticleDOI
TL;DR: In this paper, the authors presented improved formulas for the calculation of transition rate constants in the transition path ensemble, where transition paths between stable states are generated by sampling the distribution of paths with a Monte Carlo procedure.
Abstract: We present improved formulas for the calculation of transition rate constants in the transition path ensemble. In this method transition paths between stable states are generated by sampling the distribution of paths with a Monte Carlo procedure. With the new expressions the computational cost for the calculation of transition rate constants can be reduced considerably compared to our original formulation. We demonstrate the method by studying the isomerization of a diatomic molecule immersed in a Weeks–Chandler–Andersen fluid. The paper is concluded by an efficiency analysis of the path sampling algorithm.

Journal ArticleDOI
TL;DR: In this paper, a unified state-space formulation for parameter estimation of exponential-affine term structure models is proposed, which only requires specifying the conditional mean and variance of the system in an approximate sense.
Abstract: This paper proposes a unified state-space formulation for parameter estimation of exponential-affine term structure models. The proposed method uses an approximate linear Kalman filter which only requires specifying the conditional mean and variance of the system in an approximate sense. The method allows for measurement errors in the observed yields to maturity, and can simultaneously deal with many yields on bonds with different maturities. An empirical analysis of two special cases of this general class of model is carried out: the Gaussian case (Vasicek 1977) and the non-Gaussian case (Cox, Ingersoll, and Ross 1985; Chen and Scott 1992). Our test results indicate a strong rejection of these two cases. A Monte Carlo study indicates that the procedure is reliable for moderate sample sizes.
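One predict/update step of such an approximate linear Kalman filter, in a generic discretized state-space form, is sketched below; the matrix names are generic placeholders, and for the square-root (CIR) factor the state-dependent Q is exactly where the approximation enters.

```python
import numpy as np

def kalman_step(m, P, y, A, c, Q, H, b, R):
    """One predict/update step of the linear Kalman filter for
        x' = A x + c + w,  w ~ N(0, Q)   (latent factor dynamics)
        y  = H x + b + v,  v ~ N(0, R)   (observed yields, with error)
    given the current filtered mean m and covariance P."""
    # Predict the state forward one period.
    m_pred = A @ m + c
    P_pred = A @ P @ A.T + Q
    # Update on the observed yield vector y.
    S = H @ P_pred @ H.T + R              # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)   # Kalman gain
    innovation = y - (H @ m_pred + b)
    return m_pred + K @ innovation, P_pred - K @ S @ K.T
```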