Showing papers on "Monte Carlo method published in 2007"


Journal ArticleDOI
TL;DR: In this article, the authors show that standard asymptotics based on the number of groups going to infinity provide a poor approximation to the finite sample distribution and propose simple two-step estimators for these cases.
Abstract: We examine inference in panel data when the number of groups is small, as is typically the case for difference-in-differences estimation and when some variables are fixed within groups. In this case, standard asymptotics based on the number of groups going to infinity provide a poor approximation to the finite sample distribution. We show that in some cases the t-statistic is distributed as t and propose simple two-step estimators for these cases. We apply our analysis to two well-known papers. We confirm our theoretical analysis with Monte Carlo simulations.
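
To make the size distortion concrete, here is a minimal simulation sketch (not the authors' code; all parameter values are illustrative): with six groups and a purely group-level regressor, pooled OLS with i.i.d. standard errors over-rejects badly, while a two-step regression on group means with t(G-2) critical values has roughly correct size.

```python
# Hypothetical sketch, not the authors' code: test size for a group-level
# treatment effect when the number of groups G is small.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
G, n, reps = 6, 50, 2000                        # few groups, many individuals
treat = (np.arange(G) < G // 2).astype(float)   # group-level regressor
crit_naive = stats.t.ppf(0.975, G * n - 2)
crit_two = stats.t.ppf(0.975, G - 2)            # two-step dof: G - 2
rej_naive = rej_two = 0

for _ in range(reps):
    c = rng.normal(0.0, 1.0, G)                     # group random effects
    y = c[:, None] + rng.normal(0.0, 1.0, (G, n))   # true treatment effect = 0

    # Naive: pooled OLS with i.i.d. standard errors (ignores group effects).
    X = np.column_stack([np.ones(G * n), np.repeat(treat, n)])
    b = np.linalg.lstsq(X, y.ravel(), rcond=None)[0]
    e = y.ravel() - X @ b
    se = np.sqrt(e @ e / (G * n - 2) * np.linalg.inv(X.T @ X)[1, 1])
    rej_naive += abs(b[1]) / se > crit_naive

    # Two-step: collapse to group means, then t inference with G - 2 dof.
    Xg = np.column_stack([np.ones(G), treat])
    bg = np.linalg.lstsq(Xg, y.mean(axis=1), rcond=None)[0]
    eg = y.mean(axis=1) - Xg @ bg
    seg = np.sqrt(eg @ eg / (G - 2) * np.linalg.inv(Xg.T @ Xg)[1, 1])
    rej_two += abs(bg[1]) / seg > crit_two

print("naive rejection rate:", rej_naive / reps)    # far above nominal 0.05
print("two-step rejection rate:", rej_two / reps)   # close to 0.05
```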

1,283 citations


Journal ArticleDOI
TL;DR: An approach in which Markov chain Monte Carlo simulations are used to integrate over the space of genealogies, whereas other parameters are integrated out analytically, resulting in an approximation to the full joint posterior density of the model parameters.
Abstract: In 1988, Felsenstein described a framework for assessing the likelihood of a genetic data set in which all of the possible genealogical histories of the data are considered, each in proportion to their probability. Although not analytically solvable, several approaches, including Markov chain Monte Carlo methods, have been developed to find approximate solutions. Here, we describe an approach in which Markov chain Monte Carlo simulations are used to integrate over the space of genealogies, whereas other parameters are integrated out analytically. The result is an approximation to the full joint posterior density of the model parameters. For many purposes, this function can be treated as a likelihood, thereby permitting likelihood-based analyses, including likelihood ratio tests of nested models. Several examples, including an application to the divergence of chimpanzee subspecies, are provided.

995 citations


Journal ArticleDOI
TL;DR: This work proposes a sequential Monte Carlo sampler that convincingly overcomes inefficiencies of existing methods and demonstrates its implementation through an epidemiological study of the transmission rate of tuberculosis.
Abstract: Recent new methods in Bayesian simulation have provided ways of evaluating posterior distributions in the presence of analytically or computationally intractable likelihood functions. Despite representing a substantial methodological advance, existing methods based on rejection sampling or Markov chain Monte Carlo can be highly inefficient and accordingly require far more iterations than may be practical to implement. Here we propose a sequential Monte Carlo sampler that convincingly overcomes these inefficiencies. We demonstrate its implementation through an epidemiological study of the transmission rate of tuberculosis.
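
The following is a minimal, hypothetical sketch of an ABC-SMC sampler in this spirit, for the toy problem of inferring a Gaussian mean; the tolerance schedule, perturbation kernel, and importance weights follow a standard population Monte Carlo recipe rather than the paper's exact algorithm.

```python
# Hypothetical ABC-SMC sketch for inferring a Gaussian mean; all tuning
# choices here are illustrative, not the paper's algorithm.
import numpy as np

rng = np.random.default_rng(1)
y_obs = rng.normal(2.0, 1.0, 50)            # "observed" data
s_obs = y_obs.mean()                        # summary statistic

def simulate(theta):                        # model: y ~ N(theta, 1)
    return rng.normal(theta, 1.0, 50).mean()

N = 500
eps_schedule = [1.0, 0.5, 0.2, 0.1, 0.05]   # decreasing tolerances

# Population 0: plain ABC rejection from the prior theta ~ N(0, 10).
theta, w = np.empty(N), np.full(N, 1.0 / N)
for i in range(N):
    while True:
        t = rng.normal(0.0, 10.0)
        if abs(simulate(t) - s_obs) < eps_schedule[0]:
            theta[i] = t
            break

for eps in eps_schedule[1:]:
    tau = 2.0 * theta.std()                 # perturbation kernel scale
    new_theta, new_w = np.empty(N), np.empty(N)
    for i in range(N):
        while True:                         # move a weighted particle, keep if close
            t = rng.normal(theta[rng.choice(N, p=w)], tau)
            if abs(simulate(t) - s_obs) < eps:
                break
        new_theta[i] = t
        prior = np.exp(-0.5 * (t / 10.0) ** 2)                      # prior density (unnormalized)
        kern = np.sum(w * np.exp(-0.5 * ((t - theta) / tau) ** 2))  # mixture density (unnormalized)
        new_w[i] = prior / kern             # importance weight
    theta, w = new_theta, new_w / new_w.sum()

print("approximate posterior mean:", np.sum(w * theta))   # close to 2
```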

906 citations


Journal ArticleDOI
TL;DR: In this paper, a next-to-leading order calculation of heavy flavour production in hadronic collisions that can be interfaced to shower Monte Carlo programs is performed in the context of the POWHEG method.
Abstract: We present a next-to-leading order calculation of heavy flavour production in hadronic collisions that can be interfaced to shower Monte Carlo programs. The calculation is performed in the context of the POWHEG method [1]. It is suitable for the computation of charm, bottom and top hadroproduction. In the case of top production, spin correlations in the decay products are taken into account.

807 citations


Journal ArticleDOI
TL;DR: In this article, four of Hong's point estimate schemes are presented and tested on the probabilistic power flow problem, and the results are compared against those obtained from Monte Carlo simulation, showing that the 2m+1 scheme provides the best performance when a high number of random variables, both continuous and discrete, are considered.
Abstract: This paper analyzes the behavior of Hong's point estimate methods to account for uncertainties in the probabilistic power flow problem. This uncertainty may arise from different sources, such as load demand or generation unit outages. Point estimate methods constitute a remarkable tool for handling stochastic power system problems because good results can be achieved by using the same routines as those corresponding to deterministic problems, while keeping the computational burden low. In previous works related to power systems, only the two-point estimate method has been considered. In this paper, four different point estimate schemes due to Hong are presented and tested on the probabilistic power flow problem. Binomial and normal distributions are used to model input random variables. Results for two different case studies, based on the IEEE 14-bus and IEEE 118-bus test systems, respectively, are presented and compared against those obtained from Monte Carlo simulation. In particular, this paper shows that the use of the 2m+1 scheme provides the best performance when a high number of random variables, both continuous and discrete, are considered.
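
As an illustration of the point estimate idea, here is a sketch of Hong's 2m scheme in its zero-skewness form, where each input contributes two evaluations at mu_k ± sqrt(m)·sigma_k with weight 1/(2m); the function g below is a toy stand-in for a power flow solver, not one of the paper's case studies.

```python
# Illustrative 2m point estimate scheme (zero-skewness case) versus Monte Carlo.
import numpy as np

rng = np.random.default_rng(2)
mu = np.array([1.0, 2.0, 0.5])       # input means (illustrative)
sigma = np.array([0.2, 0.5, 0.1])    # input standard deviations
m = len(mu)

def g(x):                            # toy nonlinear mapping, stand-in for a power flow
    return x[..., 0] * x[..., 1] ** 2 + np.sin(x[..., 2])

# Two evaluations per variable at mu_k +/- sqrt(m)*sigma_k, weight 1/(2m),
# with the other variables held at their means.
E1 = E2 = 0.0
for k in range(m):
    for s in (+1.0, -1.0):
        x = mu.copy()
        x[k] += s * np.sqrt(m) * sigma[k]
        y = g(x)
        E1 += y / (2 * m)
        E2 += y ** 2 / (2 * m)
print("2m scheme:   mean =", E1, " std =", np.sqrt(E2 - E1 ** 2))

# Monte Carlo reference: 6 function evaluations above versus 200,000 here.
X = rng.normal(mu, sigma, size=(200_000, m))
Y = g(X)
print("Monte Carlo: mean =", Y.mean(), " std =", Y.std())
```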

632 citations


Journal ArticleDOI
TL;DR: This paper presents a newly developed simulation-based approach for Bayesian model updating, model class selection, and model averaging called the transitional Markov chain Monte Carlo (TMCMC) approach, motivated by the adaptive Metropolis–Hastings method.
Abstract: This paper presents a newly developed simulation-based approach for Bayesian model updating, model class selection, and model averaging called the transitional Markov chain Monte Carlo (TMCMC) approach. The idea behind TMCMC is to avoid the problem of sampling from difficult target probability density functions (PDFs) by instead sampling from a series of intermediate PDFs that converge to the target PDF and are easier to sample from. The TMCMC approach is motivated by the adaptive Metropolis–Hastings method developed by Beck and Au in 2002 and is based on Markov chain Monte Carlo. It is shown that TMCMC is able to draw samples from some difficult PDFs (e.g., multimodal PDFs, very peaked PDFs, and PDFs with a flat manifold). The TMCMC approach can also estimate the evidence of the chosen probabilistic model class conditional on the measured data, a key component for Bayesian model class selection and model averaging. Three examples are used to demonstrate the effectiveness of the TMCMC approach in Bayesian model updating, model class selection, and model averaging.

616 citations


Book ChapterDOI
01 Jan 2007
TL;DR: The purpose of this chapter is to provide an introduction to the kinetic Monte Carlo (KMC) method, taking the reader through the basic concepts underpinning KMC and how it is typically implemented, assuming no prior knowledge of these kinds of simulations.
Abstract: Monte Carlo refers to a broad class of algorithms that solve problems through the use of random numbers. They first emerged in the late 1940s and 1950s as electronic computers came into use [1], and the name means just what it sounds like, whimsically referring to the random nature of the gambling at Monte Carlo, Monaco. The most famous of the Monte Carlo methods is the Metropolis algorithm [2], invented just over 50 years ago at Los Alamos National Laboratory. Metropolis Monte Carlo (which is not the subject of this chapter) offers an elegant and powerful way to generate a sampling of geometries appropriate for a desired physical ensemble, such as a thermal ensemble. This is accomplished through surprisingly simple rules, involving almost nothing more than moving one atom at a time by a small random displacement. The Metropolis algorithm and the numerous methods built on it are at the heart of many, if not most, of the simulation studies of equilibrium properties of physical systems. In the 1960s researchers began to develop a different kind of Monte Carlo algorithm for evolving systems dynamically from state to state. The earliest application of this approach for an atomistic system may have been Beeler’s 1966 simulation of radiation damage annealing [3]. Over the next 20 years, there were developments and applications in this area (e.g., see [3, 4, 5, 6, 7]), as well as in surface adsorption, diffusion and growth (e.g., see [8, 9, 10, 11, 12, 13, 14, 15, 16, 17]), in statistical physics (e.g., see [18, 19, 20]), and likely other areas, too. In the 1990s the terminology for this approach settled in as kinetic Monte Carlo, though the early papers typically don’t use this term [21]. The popularity and range of applications of kinetic Monte Carlo (KMC) has continued to grow and KMC is now a common tool for studying materials subject to irradiation, the topic of this book. The purpose of this chapter is to provide an introduction to the KMC method, by taking the reader through the basic concepts underpinning KMC and how it is typically implemented, assuming no prior knowledge of these kinds of simulations. An appealing property of KMC is that it can, in principle, give the exact dynamical evolution of a system. Although this ideal is virtually never achieved, and usually not even attempted, the KMC method is presented here from this point of view because it makes a good framework for ...
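
A minimal sketch of one rejection-free KMC (residence-time/BKL) step, for a toy two-state system A ⇌ B: pick an event with probability proportional to its rate, then advance the clock by an exponential waiting time governed by the total rate. The rates and species here are illustrative.

```python
# Minimal rejection-free (residence-time) KMC for A <-> B; rates illustrative.
import numpy as np

rng = np.random.default_rng(3)
nA, nB = 100, 0
k1, k2 = 1.0, 0.5            # per-particle rates for A->B and B->A
t, t_end = 0.0, 10.0

while t < t_end:
    rates = np.array([k1 * nA, k2 * nB])    # catalog of event rates
    R = rates.sum()
    if R == 0.0:
        break
    t += -np.log(rng.random()) / R          # exponential waiting time, mean 1/R
    if rng.random() * R < rates[0]:         # choose an event with prob rate/R
        nA, nB = nA - 1, nB + 1
    else:
        nA, nB = nA + 1, nB - 1

print(f"t = {t:.2f}: nA = {nA}, nB = {nB}")  # nA fluctuates around 100*k2/(k1+k2) ~ 33
```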

549 citations


Journal ArticleDOI
TL;DR: Empirical evidence suggests that local Monte Carlo Markov chain strategies are effective up to the clustering phase transition and belief propagation up to the condensation point, while refined message passing techniques (such as survey propagation) may beat this threshold.
Abstract: An instance of a random constraint satisfaction problem defines a random subset S (the set of solutions) of a large product space χ^N (the set of assignments). We consider two prototypical problem ensembles (random k-satisfiability and q-coloring of random regular graphs) and study the uniform measure with support on S. As the number of constraints per variable increases, this measure first decomposes into an exponential number of pure states ("clusters") and subsequently condenses onto the largest such states. Above the condensation point, the mass carried by the n largest states follows a Poisson-Dirichlet process. For typical large instances, the two transitions are sharp. We determine their precise location. Further, we provide a formal definition of each phase transition in terms of different notions of correlation between distinct variables in the problem. The degree of correlation naturally affects the performance of many search/sampling algorithms. Empirical evidence suggests that local Monte Carlo Markov chain strategies are effective up to the clustering phase transition and belief propagation up to the condensation point. Finally, refined message passing techniques (such as survey propagation) may also beat this threshold.

533 citations


Journal ArticleDOI
TL;DR: In this paper, the authors review the practice of experimental design for choice experiments in environmental economics, compare it with recent advances in experimental design research, and evaluate the statistical efficiency of four different designs by means of Monte Carlo experiments.

506 citations


Journal ArticleDOI
TL;DR: This work presents a reformulation of the Bayesian approach to inverse problems that seeks to accelerate Bayesian inference by using polynomial chaos expansions to represent random variables, and evaluates the utility of this technique on a transient diffusion problem arising in contaminant source inversion.

Journal ArticleDOI
TL;DR: This work considers basic ergodicity properties of adaptive Markov chain Monte Carlo algorithms under minimal assumptions, using coupling constructions and proves convergence in distribution and a weak law of large numbers.
Abstract: We consider basic ergodicity properties of adaptive Markov chain Monte Carlo algorithms under minimal assumptions, using coupling constructions. We prove convergence in distribution and a weak law of large numbers. We also give counterexamples to demonstrate that the assumptions we make are not redundant.
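
For concreteness, here is a sketch of one common adaptive MCMC construction to which such ergodicity results apply: a random-walk Metropolis sampler whose proposal scale is tuned toward a target acceptance rate with a decaying (diminishing-adaptation) step size. The target density and tuning constants are illustrative, not from the paper.

```python
# Illustrative adaptive random-walk Metropolis with diminishing adaptation.
import numpy as np

rng = np.random.default_rng(4)

def logpi(x):                  # target: standard normal (illustrative)
    return -0.5 * x * x

x, log_s = 0.0, 0.0
samples = []
for n in range(1, 50_001):
    prop = x + np.exp(log_s) * rng.normal()
    accepted = np.log(rng.random()) < logpi(prop) - logpi(x)
    if accepted:
        x = prop
    # Robbins-Monro step toward ~44% acceptance; the n^-0.6 step size makes
    # the adaptation vanish asymptotically, a usual sufficient condition.
    log_s += n ** -0.6 * (float(accepted) - 0.44)
    samples.append(x)

s = np.array(samples[10_000:])
print("mean:", s.mean(), " var:", s.var(), " final scale:", np.exp(log_s))
```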

Journal ArticleDOI
TL;DR: Various spatial and temporal multiscale KMC methods, namely the coarse-grained Monte Carlo (CGMC), stochastic singular perturbation approximation, and τ-leap methods, are reviewed; these were introduced recently to overcome the disparity of length and time scales and the one-at-a-time execution of events.
Abstract: The microscopic spatial kinetic Monte Carlo (KMC) method has been employed extensively in materials modeling. In this review paper, we focus on different traditional and multiscale KMC algorithms, challenges associated with their implementation, and methods developed to overcome these challenges. In the first part of the paper, we compare the implementation and computational cost of the null-event and rejection-free microscopic KMC algorithms. A firmer and more general foundation of the null-event KMC algorithm is presented. Statistical equivalence between the null-event and rejection-free KMC algorithms is also demonstrated. Implementation and efficiency of various search and update algorithms, which are at the heart of all spatial KMC simulations, are outlined and compared via numerical examples. In the second half of the paper, we review various spatial and temporal multiscale KMC methods, namely, the coarse-grained Monte Carlo (CGMC), the stochastic singular perturbation approximation, and the τ-leap methods, introduced recently to overcome the disparity of length and time scales and the one-at-a-time execution of events. The concepts of the CGMC and the τ-leap methods, stochastic closures, multigrid methods, error associated with coarse-graining, a posteriori error estimates for generating spatially adaptive coarse-grained lattices, and computational speed-up upon coarse-graining are illustrated through simple examples from crystal growth, defect dynamics, adsorption–desorption, surface diffusion, and phase transitions.
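
A minimal sketch of the τ-leap idea for a toy isomerization A ⇌ B: instead of executing events one at a time, each reaction channel fires a Poisson number of times over a macro-step τ. The fixed τ and the crude negativity guard are simplifications; practical implementations select τ adaptively.

```python
# Illustrative fixed-step tau-leap for A <-> B (practical codes pick tau adaptively).
import numpy as np

rng = np.random.default_rng(5)
nA, nB = 1000, 0
k1, k2 = 1.0, 0.5
tau, t, t_end = 0.01, 0.0, 5.0

while t < t_end:
    a1, a2 = k1 * nA, k2 * nB           # channel propensities
    f1 = rng.poisson(a1 * tau)          # Poisson number of A->B firings in the leap
    f2 = rng.poisson(a2 * tau)          # Poisson number of B->A firings
    f1, f2 = min(f1, nA), min(f2, nB)   # crude guard against negative counts
    nA += f2 - f1
    nB += f1 - f2
    t += tau

print(f"nA = {nA} (deterministic limit ~ {1000 * k2 / (k1 + k2):.0f})")
```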

Journal ArticleDOI
TL;DR: In this article, the authors present a set of methods available in the literature on selection bias correction, when selection is specified as a multinomial logit model and compare the underlying assumptions made by the different methods.
Abstract: This survey presents the set of methods available in the literature on selection bias correction, when selection is specified as a multinomial logit model. It contrasts the underlying assumptions made by the different methods and shows results from a set of Monte Carlo experiments. We find that, in many cases, the approach initiated by Dubin and MacFadden (1984) as well as the semi-parametric alternative recently proposed by Dahl (2002) are to be preferred to the most commonly used Lee (1983) method. We also find that a restriction imposed in the original Dubin and MacFadden paper can be waived to achieve more robust estimators. Monte Carlo experiments also show that selection bias correction based on the multinomial logit model can provide fairly good correction for the outcome equation, even when the IIA hypothesis is violated.


Journal ArticleDOI
TL;DR: The authors present scalar-relativistic energy-consistent Hartree-Fock pseudopotentials for the main-group elements that are suitable for quantum Monte Carlo (QMC) calculations and demonstrate their transferability through extensive benchmark calculations of atomic excitation spectra as well as molecular properties.
Abstract: The authors present scalar-relativistic energy-consistent Hartree-Fock pseudopotentials for the main-group elements. The pseudopotentials do not exhibit a singularity at the nucleus and are therefore suitable for quantum Monte Carlo (QMC) calculations. They demonstrate their transferability through extensive benchmark calculations of atomic excitation spectra as well as molecular properties. In particular, they compute the vibrational frequencies and binding energies of 26 first- and second-row diatomic molecules using post-Hartree-Fock methods, finding excellent agreement with the corresponding all-electron values. They also show that their pseudopotentials give better accuracy than other existing pseudopotentials constructed specifically for QMC. Finally, valence basis sets of different sizes (VnZ with n=D,T,Q,5 for the first and second rows, and n=D,T for the third to fifth rows) optimized for these pseudopotentials are also presented.


Journal ArticleDOI
TL;DR: In this paper, a local linear approach is developed to estimate the time trend and coefficient functions, and the asymptotic properties of the proposed estimators, together with comparisons with other methods, are established under α-mixing conditions and without specifying the error distribution.

Book
01 Jan 2007
TL;DR: A book-length survey of methods for calculating free energy differences, ranging from perturbation theory, histogram methods, and thermodynamic integration to nonequilibrium approaches and specialized methods for improving ergodic sampling using molecular dynamics and Monte Carlo simulations.
Abstract: Contents:
- Calculating Free Energy Differences Using Perturbation Theory
- Methods Based on Probability Distributions and Histograms
- Thermodynamic Integration Using Constrained and Unconstrained Dynamics
- Nonequilibrium Methods for Equilibrium Free Energy Calculations
- Understanding and Improving Free Energy Calculations in Molecular Simulations: Error Analysis and Reduction Methods
- Transition Path Sampling and the Calculation of Free Energies
- Specialized Methods for Improving Ergodic Sampling Using Molecular Dynamics and Monte Carlo Simulations
- Potential Distribution Methods and Free Energy Models of Molecular Solutions
- Methods for Examining Phase Equilibria
- Quantum Contributions to Free Energy Changes in Fluids
- Free Energy Calculations: Approximate Methods for Biological Macromolecules
- Applications of Free Energy Calculations to Chemistry and Biology
- Summary and Outlook
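
As a worked example of the first topic, free energy perturbation rests on the identity ΔF = -kT ln ⟨exp(-ΔU/kT)⟩_0, where the average is over configurations of the reference state. The sketch below (illustrative, not from the book) checks it for two harmonic wells, where sampling is exact and the answer is analytic.

```python
# Illustrative free energy perturbation check for two harmonic wells.
import numpy as np

rng = np.random.default_rng(6)
beta = 1.0
k0, k1 = 1.0, 2.0             # spring constants of reference and target states

# Exact sampling from state 0: Boltzmann density proportional to exp(-beta*k0*x^2/2).
x = rng.normal(0.0, 1.0 / np.sqrt(beta * k0), size=1_000_000)
dU = 0.5 * (k1 - k0) * x ** 2                 # U1 - U0 per configuration

dF_fep = -np.log(np.mean(np.exp(-beta * dU))) / beta
dF_exact = 0.5 * np.log(k1 / k0) / beta       # analytic result for harmonic wells
print("FEP estimate:", dF_fep, " exact:", dF_exact)   # both ~ 0.3466
```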

Journal ArticleDOI
TL;DR: In this article, a hierarchical framework for modeling speed and accuracy on test items is presented as an alternative to these models, allowing a "plug-and-play" approach with alternative choices of models for the response and response-time distributions as well as the distributions of their parameters.
Abstract: Current modeling of response times on test items has been strongly influenced by the paradigm of experimental reaction-time research in psychology. For instance, some of the models have a parameter structure that was chosen to represent a speed-accuracy tradeoff, while others equate speed directly with response time. Also, several response-time models seem to be unclear as to the level of parametrization they represent. A hierarchical framework for modeling speed and accuracy on test items is presented as an alternative to these models. The framework allows a "plug-and-play approach" with alternative choices of models for the response and response-time distributions as well as the distributions of their parameters. Bayesian treatment of the framework with Markov chain Monte Carlo (MCMC) computation facilitates the approach. Use of the framework is illustrated for the choice of a normal-ogive response model, a lognormal model for the response times, and multivariate normal models for their parameters with Gibbs sampling from the joint posterior distribution.

Journal ArticleDOI
TL;DR: A new Rao-Blackwellized particle filtering algorithm is proposed for tracking an unknown number of targets, based on formulating stochastic process models for target states, data associations, and birth and death processes.
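
The paper's filter Rao-Blackwellizes the linear-Gaussian part of the model; the hypothetical sketch below shows only the underlying bootstrap particle filter cycle (predict, weight, resample) for a single 1-D random-walk target, without Rao-Blackwellization or data association.

```python
# Hypothetical single-target bootstrap particle filter (no Rao-Blackwellization).
import numpy as np

rng = np.random.default_rng(7)
T, N = 50, 1000
q, r = 0.1, 0.5                               # process / measurement noise std

x_true = np.cumsum(rng.normal(0.0, q, T))     # 1-D random-walk target
y = x_true + rng.normal(0.0, r, T)            # noisy position measurements

particles = rng.normal(0.0, 1.0, N)
est = []
for t in range(T):
    particles += rng.normal(0.0, q, N)                 # predict
    logw = -0.5 * ((y[t] - particles) / r) ** 2        # weight by likelihood
    w = np.exp(logw - logw.max())
    w /= w.sum()
    est.append(np.sum(w * particles))                  # posterior mean estimate
    particles = particles[rng.choice(N, size=N, p=w)]  # multinomial resampling

print("tracking RMSE:", np.sqrt(np.mean((np.array(est) - x_true) ** 2)))
```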

Journal ArticleDOI
TL;DR: This article proposes an adaptive algorithm for estimating rare event probabilities that is asymptotically consistent, costs only slightly more than classical multilevel splitting, and has the same efficiency in terms of asymptotic variance.
Abstract: The estimation of rare event probability is a crucial issue in areas such as reliability, telecommunications, and aircraft management. In complex systems, analytical study is out of the question and one has to use Monte Carlo methods. When rare is really rare, which means a probability less than 10⁻⁹, naive Monte Carlo becomes unreasonable. A widespread technique consists in multilevel splitting, but this method requires enough knowledge about the system to decide where to put the levels. This, unfortunately, is not always possible. In this article, we propose an adaptive algorithm to cope with this problem: the estimation is asymptotically consistent, costs just a little more than classical multilevel splitting, and has the same efficiency in terms of asymptotic variance. In the one-dimensional case, we rigorously prove the a.s. convergence and the asymptotic normality of our estimator, with the same variance as with other algorithms that use fixed crossing levels. In our proofs we mainly us...
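
A sketch of the adaptive splitting idea for the toy event {X > 4} with X ~ N(0,1): repeatedly discard the lowest-scoring particle, set the level there, and regenerate the particle by cloning a survivor and moving it with a Metropolis kernel restricted above the level. The estimator is (1 - 1/N)^J after J discards; the kernel and step counts are illustrative, not the article's recommendations.

```python
# Illustrative adaptive multilevel splitting for P(X > 4), X ~ N(0, 1).
import numpy as np

rng = np.random.default_rng(8)
N, a = 200, 4.0                     # particles, rare-event threshold

x = rng.normal(size=N)
J = 0                               # number of discarded particles
while x.min() < a:
    i = np.argmin(x)
    L = x[i]                        # adaptive level = current lowest score
    j = rng.integers(N)             # clone a random survivor...
    while j == i:
        j = rng.integers(N)
    z = x[j]
    for _ in range(20):             # ...and move it, targeting N(0,1) above L
        prop = z + 0.5 * rng.normal()
        if prop > L and np.log(rng.random()) < 0.5 * (z * z - prop * prop):
            z = prop
    x[i] = z
    J += 1

print("AMS estimate:", (1.0 - 1.0 / N) ** J)   # true P(X > 4) ~ 3.17e-5
```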

Journal ArticleDOI
TL;DR: Five notions of data depth are considered; they are mostly designed for functional data but can also be adapted to the standard multivariate case.
Abstract: Five notions of data depth are considered. They are mostly designed for functional data, but they can also be adapted to the standard multivariate case. The performance of these depth notions, when used as auxiliary tools in estimation and classification, is checked through a Monte Carlo study.
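
For illustration, here is one standard functional depth, the modified band depth of Lopez-Pintado and Romo (not necessarily among the paper's five notions): a curve's depth is the average fraction of time points at which it lies inside the pointwise envelope of a pair of sample curves.

```python
# Illustrative modified band depth for a sample of curves with one outlier.
import numpy as np

rng = np.random.default_rng(9)
n, T = 20, 50
t = np.linspace(0.0, 1.0, T)
X = np.sin(2 * np.pi * t) + rng.normal(0.0, 0.3, (n, T))
X[0] += 2.0                                    # one clearly outlying curve

def modified_band_depth(X):
    """Average, over curve pairs, of the fraction of time points at which
    each curve lies inside the pair's pointwise envelope."""
    n = len(X)
    depth = np.zeros(n)
    pairs = 0
    for j in range(n):
        for k in range(j + 1, n):
            lo = np.minimum(X[j], X[k])
            hi = np.maximum(X[j], X[k])
            depth += ((X >= lo) & (X <= hi)).mean(axis=1)
            pairs += 1
    return depth / pairs

d = modified_band_depth(X)
print("outlier depth:", round(d[0], 3), " deepest curve:", round(d.max(), 3))
```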

Journal ArticleDOI
TL;DR: In this paper, the authors describe a simple, generic, and highly accurate efficient importance sampling (EIS) Monte Carlo (MC) procedure for the evaluation of high-dimensional numerical integrals.
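
EIS constructs its importance density through auxiliary least-squares steps; the sketch below shows only the plain importance sampling estimator it builds on, with a fixed (unoptimized) Gaussian proposal for a one-dimensional integral whose true value is about 1.786.

```python
# Plain importance sampling with a fixed proposal (EIS would optimize it).
import numpy as np

rng = np.random.default_rng(10)

def g(x):                              # integrand: integral of exp(-|x|^3) dx ~ 1.786
    return np.exp(-np.abs(x) ** 3)

x = rng.normal(size=1_000_000)         # proposal q = N(0, 1)
q = np.exp(-0.5 * x ** 2) / np.sqrt(2 * np.pi)
w = g(x) / q                           # importance weights
print("IS estimate:", w.mean(), "+/-", w.std() / np.sqrt(len(w)))
```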

Journal ArticleDOI
TL;DR: In this paper, an efficient scheme for one-dimensional extensive air shower simulation and its implementation in the program conex are presented, where explicit Monte Carlo simulation of the high-energy part of hadronic and electromagnetic cascades in the atmosphere is combined with a numeric solution of cascade equations for lower-energy sub-showers to obtain accurate shower predictions.

Journal ArticleDOI
TL;DR: It is observed for the C2 molecule studied here, and for other systems the authors have studied, that as more parameters in the trial wave functions are optimized, the diffusion Monte Carlo total energy improves monotonically, implying that the nodal hypersurface also improves monotonically.
Abstract: We study three wave function optimization methods based on energy minimization in a variational Monte Carlo framework: the Newton, linear, and perturbative methods. In the Newton method, the parameter variations are calculated from the energy gradient and Hessian, using a reduced variance statistical estimator for the latter. In the linear method, the parameter variations are found by diagonalizing a nonsymmetric estimator of the Hamiltonian matrix in the space spanned by the wave function and its derivatives with respect to the parameters, making use of a strong zero-variance principle. In the less computationally expensive perturbative method, the parameter variations are calculated by approximately solving the generalized eigenvalue equation of the linear method by a nonorthogonal perturbation theory. These general methods are illustrated here by the optimization of wave functions consisting of a Jastrow factor multiplied by an expansion in configuration state functions (CSFs) for the C2 molecule, including both valence and core electrons in the calculation. The Newton and linear methods are very efficient for the optimization of the Jastrow, CSF, and orbital parameters. The perturbative method is a good alternative for the optimization of just the CSF and orbital parameters. Although the optimization is performed at the variational Monte Carlo level, we observe for the C2 molecule studied here, and for other systems we have studied, that as more parameters in the trial wave functions are optimized, the diffusion Monte Carlo total energy improves monotonically, implying that the nodal hypersurface also improves monotonically.
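
A stripped-down variational Monte Carlo example in this spirit (illustrative; the paper optimizes far richer Jastrow-Slater wave functions with Newton-type updates): for the 1-D harmonic oscillator with trial psi_alpha(x) = exp(-alpha x^2), Metropolis sampling of |psi|^2 and averaging the local energy exhibits the variational minimum, with vanishing variance, at alpha = 1/2.

```python
# Illustrative VMC for the 1-D harmonic oscillator, trial psi = exp(-alpha x^2).
import numpy as np

rng = np.random.default_rng(11)

def vmc_energy(alpha, n_steps=100_000):
    """Metropolis sampling of |psi|^2 for H = -1/2 d^2/dx^2 + 1/2 x^2.
    Local energy: E_L(x) = alpha + x^2 (1/2 - 2 alpha^2)."""
    x = 0.0
    e_sum = e2_sum = 0.0
    for _ in range(n_steps):
        prop = x + rng.normal(0.0, 0.5)
        # |psi(prop)|^2 / |psi(x)|^2 = exp(-2 alpha (prop^2 - x^2))
        if np.log(rng.random()) < -2.0 * alpha * (prop ** 2 - x ** 2):
            x = prop
        e = alpha + x ** 2 * (0.5 - 2.0 * alpha ** 2)
        e_sum += e
        e2_sum += e * e
    mean = e_sum / n_steps
    return mean, e2_sum / n_steps - mean ** 2

for alpha in (0.3, 0.4, 0.5, 0.6):
    E, var = vmc_energy(alpha)
    print(f"alpha = {alpha:.1f}:  E = {E:.4f}  var = {var:.4f}")
# Minimum (and zero variance) at alpha = 0.5, the exact ground state E = 0.5.
```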


Journal ArticleDOI
TL;DR: In this paper, a modified effective medium formulation for composites where the characteristic length of the inclusion is on the order of or smaller than the phonon mean free path is introduced.
Abstract: This letter introduces a modified effective medium formulation for composites where the characteristic length of the inclusion is on the order of or smaller than the phonon mean free path. The formulation takes into account the increased interface scattering in the different phases of the nanocomposite and the thermal boundary resistance between the phases. The interface density of inclusions is introduced and is found to be a primary factor in determining the thermal conductivity. The predictions are in good agreement with results from Monte Carlo simulations and solutions to the Boltzmann equation.

Journal ArticleDOI
TL;DR: In this paper, the analysis of structural breaks in the context of fractionally integrated models is dealt with, assuming that the break dates are unknown and that the different sub-samples possess different intercepts, slope coefficients and fractional orders of integration.
Abstract: This article deals with the analysis of structural breaks in the context of fractionally integrated models. We assume that the break dates are unknown and that the different sub-samples possess different intercepts, slope coefficients and fractional orders of integration. The procedure is based on linear regression models using a grid of values for the fractional differencing parameters and least squares estimation. Several Monte Carlo experiments conducted across the study show that the procedure performs well if the sample size is large enough. Two empirical applications are described at the end of the article.
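
The core building block is the fractional differencing filter (1 - L)^d. The sketch below (single regime, no breaks; grid and sample size illustrative) generates an I(0.4) series and recovers d by the article's type of grid search: filter the data and the deterministic regressors by (1 - L)^d, run least squares, and keep the d with the smallest residual sum of squares.

```python
# Illustrative grid-search estimation of the fractional order d (one regime).
import numpy as np

rng = np.random.default_rng(12)

def frac_diff(y, d):
    """Apply (1 - L)^d via expanding binomial weights:
    pi_0 = 1, pi_j = pi_{j-1} * (j - 1 - d) / j."""
    n = len(y)
    w = np.empty(n)
    w[0] = 1.0
    for j in range(1, n):
        w[j] = w[j - 1] * (j - 1 - d) / j
    return np.array([np.dot(w[: t + 1], y[t::-1]) for t in range(n)])

# Simulate y_t = mu + I(0.4) noise, using (1 - L)^{-d} eps for the noise.
n, d0, mu = 400, 0.4, 1.0
y = mu + frac_diff(rng.normal(size=n), -d0)

# For each candidate d: filter y and the intercept, run OLS, track the RSS.
best_rss, best_d = np.inf, None
for d in np.arange(0.0, 1.01, 0.05):
    yd = frac_diff(y, d)
    xd = frac_diff(np.ones(n), d)          # filtered deterministic regressor
    b = (xd @ yd) / (xd @ xd)
    rss = np.sum((yd - b * xd) ** 2)
    if rss < best_rss:
        best_rss, best_d = rss, d
print("estimated d:", round(best_d, 2), "(true value 0.4)")
```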