scispace - formally typeset

Showing papers on "Monte Carlo method published in 2005"


Journal ArticleDOI
TL;DR: The authors showed that the extra variation due to the presence of these estimated parameters in the weight matrix accounts for much of the difference between the finite sample and the usual asymptotic variance of the two-step generalized method of moments estimator, when the moment conditions used are linear in the parameters.

3,967 citations


Book
19 Sep 2005
TL;DR: A comprehensive treatment of Monte Carlo methods of computer simulation, including a brief review of other simulation methods and an introduction to Monte Carlo studies of biological molecules.
Abstract: Preface 1. Introduction 2. Some necessary background 3. Simple sampling Monte Carlo methods 4. Importance sampling Monte Carlo methods 5. More on importance sampling Monte Carlo methods of lattice systems 6. Off-lattice models 7. Reweighting methods 8. Quantum Monte Carlo methods 9. Monte Carlo renormalization group methods 10. Non-equilibrium and irreversible processes 11. Lattice gauge models: a brief introduction 12. A brief review of other methods of computer simulation 13. Monte Carlo simulations at the periphery of physics and beyond 14. Monte Carlo studies of biological molecules 15. Outlook Appendix Index.
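Chapters 3 and 4 cover simple sampling and importance sampling. As a hedged one-dimensional illustration (the integrand, the exponential proposal, and all names below are invented for this sketch, not taken from the book), the contrast between the two is:

```python
import math
import random

random.seed(0)

def simple_sampling(n):
    # Simple sampling: draw x uniformly on [0, 1] and average f(x) = exp(-10x).
    return sum(math.exp(-10 * random.random()) for _ in range(n)) / n

def importance_sampling(n, lam=10.0):
    # Importance sampling: draw x from p(x) = lam*exp(-lam*x)/(1 - exp(-lam))
    # on [0, 1], which concentrates samples where f is large, and average f/p.
    norm = (1 - math.exp(-lam)) / lam          # normalisation of exp(-lam*x)
    total = 0.0
    for _ in range(n):
        u = random.random()
        x = -math.log(1 - u * (1 - math.exp(-lam))) / lam   # inverse CDF
        total += math.exp(-10 * x) * norm / math.exp(-lam * x)
    return total / n

exact = (1 - math.exp(-10)) / 10
```

With lam matched exactly to the integrand the weights are constant and the estimator's variance vanishes; in realistic lattice problems the proposal only approximates the Boltzmann weight, but the variance reduction over simple sampling is the same idea.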

2,055 citations


Book
01 Jun 2005
TL;DR: A reference text on Monte Carlo statistical methods.
Abstract: Monte Carlo Statistical Methods, a volume in the Springer Texts in Statistics series.

1,310 citations


Journal ArticleDOI
TL;DR: In this paper, a sequential Monte Carlo (SMC) multitarget filter is proposed and demonstrated on a number of simulated scenarios, which is suitable for problems involving nonlinear non-Gaussian dynamics.
Abstract: Random finite sets (RFSs) are natural representations of multitarget states and observations that allow multisensor multitarget filtering to fit in the unifying random set framework for data fusion. Although the foundation has been established in the form of finite set statistics (FISST), its relationship to conventional probability is not clear. Furthermore, optimal Bayesian multitarget filtering is not yet practical due to the inherent computational hurdle. Even the probability hypothesis density (PHD) filter, which propagates only the first moment (or PHD) instead of the full multitarget posterior, still involves multiple integrals with no closed forms in general. This article establishes the relationship between FISST and conventional probability that leads to the development of a sequential Monte Carlo (SMC) multitarget filter. In addition, an SMC implementation of the PHD filter is proposed and demonstrated on a number of simulated scenarios. Both of the proposed filters are suitable for problems involving nonlinear non-Gaussian dynamics. Convergence results for these filters are also established.
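The SMC machinery behind these filters is easiest to see in the single-target case. The sketch below is a generic bootstrap particle filter on an invented scalar model (not the authors' PHD filter), showing the propagate-weight-resample cycle that their multitarget filters build on:

```python
import numpy as np

rng = np.random.default_rng(1)

def bootstrap_filter(ys, n_particles=500):
    """Bootstrap particle filter for the invented scalar model
       x_t = 0.5*x_{t-1} + 25*x_{t-1}/(1 + x_{t-1}**2) + v_t,  v_t ~ N(0, 1)
       y_t = x_t + w_t,                                        w_t ~ N(0, 1)."""
    x = rng.normal(0.0, 1.0, n_particles)           # initial particle cloud
    means = []
    for y in ys:
        # 1. Propagate every particle through the nonlinear dynamics.
        x = 0.5 * x + 25 * x / (1 + x**2) + rng.normal(0, 1, n_particles)
        # 2. Weight by the measurement likelihood and normalise.
        w = np.exp(-0.5 * (y - x) ** 2)
        w /= w.sum()
        means.append(float(w @ x))
        # 3. Resample to avoid weight degeneracy.
        x = rng.choice(x, size=n_particles, p=w)
    return means

# Simulate a trajectory and filter it.
true_x, xs, ys = 0.0, [], []
for _ in range(50):
    true_x = 0.5 * true_x + 25 * true_x / (1 + true_x**2) + rng.normal()
    xs.append(true_x)
    ys.append(true_x + rng.normal())
est = bootstrap_filter(ys)
```

The PHD filter replaces the single-target posterior with an intensity function over the multitarget state space, but the particle approximation and resampling steps are directly analogous.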

1,248 citations


Book ChapterDOI
22 Jun 2005
TL;DR: In this paper, the authors describe the Monte Carlo method for the simulation of grain growth and recrystallization, and present a small subset of the broader use of Monte Carlo methods, for which an excellent overview can be found in the book by Landau and Binder (2000).
Abstract: This chapter is aimed at describing the Monte Carlo method for the simulation of grain growth and recrystallization. It has also been extended to phase transformations, and hybrid versions of the model (Monte Carlo coupled with Cellular Automaton) can accommodate diffusion. If reading this chapter inspires you to program your own version of the algorithm and try to solve some problems, then we will have succeeded! The method is simple to implement and it is fairly straightforward to apply variable material properties such as anisotropic grain boundary energy and mobility. There are, however, some important limitations of the method that must be kept in mind. These limitations include an inherent lattice anisotropy that manifests itself in various ways. For many purposes, however, if you pay attention to what has been found in previous work, the model is robust and highly efficient from a computational perspective. In many circumstances, it is best to use the model to gain insight into a physical system and then obtain a new theoretical understanding, in preference to interpreting the results as being directly representative of a particular material. Please also keep in mind that the “Monte Carlo Method” described herein is a small subset of the broader use of Monte Carlo methods for which an excellent overview can be found in the book by Landau and Binder (2000).
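The grain-growth algorithm described here is, at its core, a Metropolis update of a Q-state Potts model in which each lattice site holds a grain orientation and energy counts unlike-neighbour bonds. A minimal sketch (lattice size, Q, and the zero-temperature acceptance rule are illustrative choices, not the chapter's code):

```python
import math
import random

random.seed(2)

def bond_energy(lat, i, j, spin):
    # Grain-boundary energy: one unit per unlike nearest-neighbour pair.
    n = len(lat)
    return sum(spin != lat[(i + di) % n][(j + dj) % n]
               for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)))

def sweep(lat, q, temp=0.0):
    # One Monte Carlo sweep: one attempted reorientation per site on average.
    n = len(lat)
    for _ in range(n * n):
        i, j = random.randrange(n), random.randrange(n)
        new = random.randrange(q)
        de = bond_energy(lat, i, j, new) - bond_energy(lat, i, j, lat[i][j])
        # At temp == 0 only energy-lowering or neutral flips are accepted,
        # which drives boundary motion by curvature (grain growth).
        if de <= 0 or (temp > 0 and random.random() < math.exp(-de / temp)):
            lat[i][j] = new

def total_energy(lat):
    # Count unlike right and down neighbours once per bond.
    n = len(lat)
    return sum(lat[i][j] != lat[(i + 1) % n][j] for i in range(n) for j in range(n)) \
         + sum(lat[i][j] != lat[i][(j + 1) % n] for i in range(n) for j in range(n))

lat = [[random.randrange(8) for _ in range(16)] for _ in range(16)]
e0 = total_energy(lat)
for _ in range(5):
    sweep(lat, 8)
```

At zero temperature the total boundary energy is non-increasing, so repeated sweeps coarsen the grain structure; the lattice anisotropy the chapter warns about shows up in the preferred boundary orientations of exactly this kind of update.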

1,115 citations


Journal ArticleDOI
TL;DR: It is proved that the sign problem is nondeterministic polynomial (NP) hard, implying that a generic solution of the sign problem would also solve all problems in the complexity class NP in polynomial time.
Abstract: Quantum Monte Carlo simulations, while being efficient for bosons, suffer from the "negative sign problem" when applied to fermions--causing an exponential increase of the computing time with the number of particles. A polynomial time solution to the sign problem is highly desired since it would provide an unbiased and numerically exact method to simulate correlated quantum systems. Here we show that such a solution is almost certainly unattainable by proving that the sign problem is nondeterministic polynomial (NP) hard, implying that a generic solution of the sign problem would also solve all problems in the complexity class NP in polynomial time.
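The practical symptom of the sign problem is easy to reproduce: one samples from |w|, carries the sign s = w/|w|, and computes observables as ⟨O·s⟩/⟨s⟩. The denominator (the "average sign") decays exponentially with system size, so the statistical error of the ratio explodes. A toy sketch (the decay rate and the observable are invented for illustration):

```python
import math
import random

random.seed(3)

def reweighted_estimate(n_samples, beta_n):
    """Sample configurations from |w|; a configuration's sign is negative
    with probability chosen so the average sign equals exp(-beta_n),
    mimicking its exponential decay with beta * system size."""
    p_minus = 0.5 * (1 - math.exp(-beta_n))
    num = den = 0.0
    for _ in range(n_samples):
        s = -1.0 if random.random() < p_minus else 1.0
        obs = random.gauss(1.0, 0.1)      # observable measured on the sample
        num += s * obs                    # accumulate <O * s>
        den += s                          # accumulate <s>
    return num / den, den / n_samples    # estimate of <O>, average sign
```

Keeping the relative error of the ratio fixed requires a sample size growing like exp(2·beta_n), which is the exponential cost the abstract refers to.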

1,025 citations


Journal ArticleDOI
TL;DR: In this paper, the authors examine tests for jumps based on recent asymptotic results; they interpret the tests as Hausman-type tests and find that microstructure noise biases the tests against detecting jumps, and that a simple lagging strategy corrects the bias.
Abstract: We examine tests for jumps based on recent asymptotic results; we interpret the tests as Hausman-type tests. Monte Carlo evidence suggests that the daily ratio z-statistic has appropriate size, good power, and good jump detection capabilities revealed by the confusion matrix comprised of jump classification probabilities. We identify a pitfall in applying the asymptotic approximation over an entire sample. Theoretical and Monte Carlo analysis indicates that microstructure noise biases the tests against detecting jumps, and that a simple lagging strategy corrects the bias. Empirical work documents evidence for jumps that account for seven percent of stock market price variance.
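The ratio z-statistic compares realized variance with jump-robust bipower variation. The following is a sketch of a Huang–Tauchen-style ratio statistic (the constants come from the asymptotic theory; the simulated-data check, grid size, and jump size are my own choices, not the paper's):

```python
import math
import random

random.seed(4)

def ratio_jump_stat(returns):
    """Ratio z-statistic: (RV - BV)/RV studentised with tripower
    quarticity; large positive values flag a jump in the day's prices."""
    n = len(returns)
    mu1 = math.sqrt(2 / math.pi)                                  # E|Z|
    mu43 = 2 ** (2 / 3) * math.gamma(7 / 6) / math.gamma(0.5)     # E|Z|^(4/3)
    rv = sum(r * r for r in returns)
    bv = sum(abs(a) * abs(b) for a, b in zip(returns[1:], returns[:-1])) / mu1**2
    tq = n * sum((abs(a) * abs(b) * abs(c)) ** (4 / 3)
                 for a, b, c in zip(returns, returns[1:], returns[2:])) / mu43**3
    theta = (math.pi / 2) ** 2 + math.pi - 5
    return ((rv - bv) / rv) / math.sqrt(theta / n * max(1.0, tq / bv**2))

# One 'day' of 390 one-minute returns, with and without a single jump.
sigma = 0.001
smooth = [random.gauss(0, sigma) for _ in range(390)]
jumpy = smooth[:]
jumpy[200] += 20 * sigma
```

Bipower variation is robust to a single jump because each jump return enters it only through products with adjacent small returns, while realized variance picks up the squared jump in full.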

782 citations




Journal ArticleDOI
TL;DR: A new probabilistic load-flow solution algorithm based on an efficient point estimate method; it can be used directly with any existing deterministic load-flow program, and its results compare well with those obtained from the Monte Carlo simulation technique and a combined simulation and analytical method.
Abstract: A new probabilistic load-flow solution algorithm based on an efficient point estimate method is proposed in this paper. It is assumed that the uncertainties of bus injections and line parameters can be estimated or measured. This paper shows how to estimate the corresponding uncertainty in the load-flow solution. The proposed method can be used directly with any existing deterministic load-flow program. For a system with m uncertain parameters, it uses 2m calculations of load flow to calculate the statistical moments of load-flow solution distributions by weighting the value of the solution evaluated at 2m locations. The moments are then used in the probability distribution fitting. Performance of the proposed method is verified and compared with results obtained from the Monte Carlo simulation technique and a combined simulation and analytical method using several IEEE test systems.
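The 2m-evaluation idea can be illustrated with a basic symmetric two-point estimate: perturb one uncertain parameter at a time to mu_k ± sqrt(m)·sigma_k and weight the 2m deterministic solutions equally. (A toy response function stands in for the load-flow solver here; the locations and weights are the simple symmetric variant, not necessarily the paper's exact concentrations.)

```python
import math

def two_point_estimate(f, means, stds):
    """Symmetric 2m point-estimate method: for m uncertain inputs,
    evaluate f at the 2m points mu_k +/- sqrt(m)*sigma_k (one input
    perturbed at a time), weight each evaluation by 1/(2m), and form
    the first two moments of the output."""
    m = len(means)
    e1 = e2 = 0.0
    for k in range(m):
        for sign in (1.0, -1.0):
            x = list(means)
            x[k] += sign * math.sqrt(m) * stds[k]
            y = f(x)                 # one deterministic 'load-flow' solve
            e1 += y / (2 * m)
            e2 += y * y / (2 * m)
    return e1, math.sqrt(max(e2 - e1 * e1, 0.0))

# Stand-in for a deterministic solver producing one output quantity.
f = lambda x: 2.0 * x[0] + 3.0 * x[1] + 1.0
mean, std = two_point_estimate(f, [1.0, 2.0], [0.1, 0.2])
```

For a linear response with independent inputs this recovers the exact mean and standard deviation; for a nonlinear solver it approximates them at the cost of only 2m deterministic runs, versus thousands for Monte Carlo.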

502 citations


Journal ArticleDOI
TL;DR: In this article, the numerical resolution of backward stochastic differential equations is studied and a new numerical scheme based on iterative regressions on function bases is proposed, whose coefficients are evaluated using Monte Carlo simulations.
Abstract: We are concerned with the numerical resolution of backward stochastic differential equations. We propose a new numerical scheme based on iterative regressions on function bases, whose coefficients are evaluated using Monte Carlo simulations. A full convergence analysis is derived. Numerical experiments from finance are included, in particular concerning option pricing with differential interest rates.
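The core of such regression-based schemes is approximating a conditional expectation by least-squares projection onto a finite function basis, estimated from simulated paths. A hedged sketch with a polynomial basis (toy data of my own; the paper's scheme iterates this step backward in time through the BSDE):

```python
import numpy as np

rng = np.random.default_rng(5)

def regress_conditional_expectation(x, y, degree=3):
    """Approximate E[Y | X = x] by least-squares projection of y onto a
    polynomial basis in x -- the building block of regression-based
    Monte Carlo schemes for BSDEs and American option pricing."""
    basis = np.vander(x, degree + 1)            # columns: x^degree ... x, 1
    coef, *_ = np.linalg.lstsq(basis, y, rcond=None)
    return basis @ coef

# Toy check: Y = X^2 + noise, so E[Y | X] = X^2.
x = rng.uniform(-1, 1, 5000)
y = x**2 + rng.normal(0, 0.1, 5000)
fit = regress_conditional_expectation(x, y)
```

The choice of basis and the number of simulated paths control the two error sources the convergence analysis has to balance: projection bias and Monte Carlo variance of the regression coefficients.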

489 citations


Journal ArticleDOI
TL;DR: In this article, a numerically exact continuous-time quantum Monte Carlo algorithm for fermions with a general interaction nonlocal in space-time is presented, which is based on a stochastic series expansion for the partition function in the interaction representation.
Abstract: We present a numerically exact continuous-time quantum Monte Carlo algorithm for fermions with a general interaction nonlocal in space-time. The new determinantal grand-canonical scheme is based on a stochastic series expansion for the partition function in the interaction representation. The method is particularly applicable for multiband, time-dependent correlations since it does not invoke the Hubbard-Stratonovich transformation. The test calculations for exactly solvable models, as well as results for the Green function and for the time-dependent susceptibility of the multiband supersymmetric model with a spin-flip interaction, are discussed.

Journal ArticleDOI
TL;DR: The methods are applicable to general nonlinear and non-Gaussian models for the target dynamics and measurement likelihood, and provide efficient solutions to two very pertinent problems: the data association problem that arises due to unlabelled measurements in the presence of clutter, and the curse of dimensionality that arises due to the increased size of the state-space associated with multiple targets.
Abstract: We present Monte Carlo methods for multi-target tracking and data association. The methods are applicable to general nonlinear and non-Gaussian models for the target dynamics and measurement likelihood. We provide efficient solutions to two very pertinent problems: the data association problem that arises due to unlabelled measurements in the presence of clutter, and the curse of dimensionality that arises due to the increased size of the state-space associated with multiple targets. We develop a number of algorithms to achieve this. The first, which we refer to as the Monte Carlo joint probabilistic data association filter (MC-JPDAF), is a generalisation of the strategy proposed by Schulz et al. (2001) and Schulz et al. (2003). As is the case for the JPDAF, the distributions of interest are the marginal filtering distributions for each of the targets, but these are approximated with particles rather than Gaussians. We also develop two extensions to the standard particle filtering methodology for tracking multiple targets. The first, which we refer to as the sequential sampling particle filter (SSPF), samples the individual targets sequentially by utilising a factorisation of the importance weights. The second, which we refer to as the independent partition particle filter (IPPF), assumes the associations to be independent over the individual targets, leading to an efficient component-wise sampling strategy to construct new particles. We evaluate and compare the proposed methods on a challenging synthetic tracking problem.

Journal ArticleDOI
Jesper Lindé1
TL;DR: In this paper, Monte Carlo simulations with a simple New-Keynesian sticky price model are used to argue that single equations methods, e.g., GMM, are likely to produce imprecise and biased estimates.

Journal ArticleDOI
TL;DR: An electronic structure theory is developed that combines a new local correlation energy (based on Monte Carlo calculations applied to the homogeneous electron gas) and a combination of local and explicit long-ranged exchange.
Abstract: We derive an exact representation of the exchange-correlation energy within density functional theory (DFT) which spawns a class of approximations leading to correct long-range asymptotic behavior. Using a simple approximation, we develop an electronic structure theory that combines a new local correlation energy (based on Monte Carlo calculations applied to the homogeneous electron gas) and a combination of local and explicit long-ranged exchange. The theory is applied to several first-row atoms and diatomic molecules where encouraging results are obtained: good description of the chemical bond at the same time allowing for bound anions, reasonably accurate affinity energies, and correct polarizability of an elongated hydrogen chain. Further stringent tests of DFT are passed, concerning ionization potential and charge distribution under large bias.

Journal ArticleDOI
TL;DR: This article discusses the use of a symmetric multiplicative interaction effect to capture certain types of third-order dependence patterns often present in social networks and other dyadic datasets.
Abstract: This article discusses the use of a symmetric multiplicative interaction effect to capture certain types of third-order dependence patterns often present in social networks and other dyadic datasets. Such an effect, along with standard linear fixed and random effects, is incorporated into a generalized linear model, and a Markov chain Monte Carlo algorithm is provided for Bayesian estimation and inference. In an example analysis of international relations data, accounting for such patterns improves model fit and predictive performance.

Journal ArticleDOI
TL;DR: The IQE is found to be strongly sensitive to the scale of phase separation in the morphology, with a peak at approximately 20 nm for the PFB/F8BT system studied.
Abstract: We present a dynamical Monte Carlo study of the dependence of the internal quantum efficiency (IQE) of an organic bulk heterojunction solar cell on the device morphology. The IQE is found to be strongly sensitive to the scale of phase separation in the morphology, with a peak at approximately 20 nm for the PFB/F8BT system studied. An ordered, checkered morphology exhibits a peak IQE 1.5 times higher than a disordered blend.


Posted ContentDOI
TL;DR: The LHAPDF library as described in this paper is a functional replacement for PDFLIB; via LHAGLUE, an add-on PDFLIB look-alike interface, it can be used with existing Monte Carlo generators such as PYTHIA and HERWIG.
Abstract: We describe the development of the LHAPDF library from its initial implementation following the Les Houches meeting in 2001 to its present state as a functional replacement for PDFLIB. Brief details are given of how to install and use the library together with the PDF sets available. We also describe LHAGLUE, an add-on PDFLIB look-a-like interface to LHAPDF, which facilitates using LHAPDF with existing Monte Carlo generators such as PYTHIA and HERWIG.

Journal ArticleDOI
TL;DR: A method to analyze biased molecular-dynamics and Monte Carlo simulations, also known as umbrella sampling, which employs only quantities with easily controllable equilibration and greatly reduces the statistical errors compared to the standard weighted histogram analysis method.
Abstract: We present a method to analyze biased molecular-dynamics and Monte Carlo simulations, also known as umbrella sampling. In the limiting case of a strong bias, this method is equivalent to thermodynamic integration. It employs only quantities with easily controllable equilibration and greatly reduces the statistical errors compared to the standard weighted histogram analysis method. We show the success of our approach for two examples, one analytic function, and one biological system.
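The essence of the biased (umbrella) sampling being analyzed is easy to demonstrate: add a stiff harmonic bias to confine sampling to a window of the reaction coordinate, then analyze the biased distribution, which in the strong-bias limit is nearly Gaussian around the window centre. The sketch below uses a toy double-well potential and parameters of my own choosing, not the paper's analysis formulas:

```python
import math
import random

random.seed(8)

def biased_metropolis(u, bias, n_steps, x0, step=0.5, temp=1.0):
    """Metropolis sampling of exp(-(u(x) + bias(x)) / temp); the
    umbrella potential `bias` confines the walk to one window."""
    x, samples = x0, []
    for _ in range(n_steps):
        xn = x + random.uniform(-step, step)
        du = (u(xn) + bias(xn)) - (u(x) + bias(x))
        if du <= 0 or random.random() < math.exp(-du / temp):
            x = xn
        samples.append(x)
    return samples

u = lambda x: (x * x - 1) ** 2                   # double-well potential
k, centre = 50.0, 1.0
bias = lambda x: 0.5 * k * (x - centre) ** 2     # harmonic umbrella

xs = biased_metropolis(u, bias, 50_000, x0=centre)
mean = sum(xs) / len(xs)
var = sum((x - mean) ** 2 for x in xs) / len(xs)
```

In the stiff-bias limit the window samples are approximately Gaussian with variance temp/(k + u''(centre)), which is the kind of easily controllable, quickly equilibrating quantity the abstract exploits.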


Journal ArticleDOI
TL;DR: In this paper, a semi-empirical model is developed that describes the hadronic portion of air showers in a manner analogous to the well-known Heitler splitting approximation of electromagnetic cascades.

Journal ArticleDOI
TL;DR: In this article, a modified approach is developed that allows possible atomic distribution functions that are consistent with the measured data to be explored; any solution to the inversion process must be derivable from a distribution of nonoverlapping atoms or molecules, as in the physical system under investigation.
Abstract: Neutron and x-ray diffraction are widely used to measure the structure of liquids and disordered solids. Using techniques such as isotope substitution or anomalous dispersion or combining neutron and x-ray data, it is sometimes possible to invert the total diffraction patterns from these materials into a set of partial structure factors, which describe the correlations between specific atom types in the material. However, even in situations where the matrix for performing this inversion appears well determined, there are significant uncertainties in the process and it is rarely possible to achieve a unique set of partial structure factors in practice. Based on the much earlier method of F. G. Edwards and J. E. Enderby [J. Phys. C 8, 3483 (1975)] and extending the reverse Monte Carlo method of McGreevy [J. Phys.: Condens. Matter 13, R877 (2001)] and others, a modified approach is developed here that allows possible atomic distribution functions that are consistent with the measured data to be explored. The basis of the present approach is that any solution to the inversion process must be derivable from a distribution of nonoverlapping atoms or molecules as in the physical system under investigation. Solutions to the problem of inverting the measured differential cross sections to partial structure factors are then extracted assuming different levels of confidence in the data, confidence being represented by a feedback factor on a scale of 0-1. These different solutions serve to identify where ambiguities exist in the derived partial structure factors, particularly when a particular partial structure factor contributes only weakly to the total diffraction pattern. The method is illustrated using some old diffraction data on molten zinc chloride that have significant uncertainties associated with them, but that have been used extensively as the basis for a number of computer simulations of this material.
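Reverse Monte Carlo in miniature: move a particle at random and accept the move according to whether the fit (chi-squared) between a simulated histogram and the "measured" one improves. This toy (1-D particles, invented parameters) is only the acceptance skeleton shared by RMC variants, not the paper's constrained inversion:

```python
import math
import random

random.seed(6)

BOX, NBINS, SIGMA = 1.0, 10, 0.01

def pair_histogram(cfg):
    # Histogram of periodic pair distances: the 'diffraction-like' data.
    h = [0] * NBINS
    for i in range(len(cfg)):
        for j in range(i + 1, len(cfg)):
            d = abs(cfg[i] - cfg[j])
            d = min(d, BOX - d)                         # periodic boundary
            h[min(int(d / (BOX / 2) * NBINS), NBINS - 1)] += 1
    return h

def chi2(cfg, target):
    return sum((a - b) ** 2 for a, b in zip(pair_histogram(cfg), target)) / SIGMA

def rmc_step(cfg, target):
    # Displace one particle; accept if chi^2 improves, otherwise with
    # probability exp(-delta_chi2 / 2) -- the RMC acceptance rule.
    i = random.randrange(len(cfg))
    trial = cfg[:]
    trial[i] = (trial[i] + random.uniform(-0.1, 0.1)) % BOX
    d = chi2(trial, target) - chi2(cfg, target)
    if d <= 0 or random.random() < math.exp(-d / 2):
        cfg[:] = trial

# 'Measure' a histogram from a reference structure, then fit a fresh
# random configuration against it.
reference = [random.random() for _ in range(20)]
target = pair_histogram(reference)
cfg = [random.random() for _ in range(20)]
start = chi2(cfg, target)
for _ in range(300):
    rmc_step(cfg, target)
end = chi2(cfg, target)
```

The paper's feedback factor plays the role of SIGMA here: a smaller value expresses more confidence in the data and forces tighter agreement, at the risk of fitting its noise.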

Journal ArticleDOI
TL;DR: Three different ways to build a Monte Carlo program for light propagation with polarization are given; comparisons between the Monte Carlo runs and an adding-doubling program yielded less than 1% error.
Abstract: Three Monte Carlo programs were developed which keep track of the status of polarization of light traveling through mono-disperse solutions of micro-spheres. These programs were described in detail in our previous article [1]. This paper illustrates a series of Monte Carlo simulations that model common experiments of light transmission and reflection of scattering media. Furthermore, the codes were expanded to model light propagating through poly-disperse solutions of micro-spheres of different radii distributions.

Journal ArticleDOI
08 Jul 2005-Science
TL;DR: This work used Bayesian inference to derive a probability distribution that represents the unknown structure and its precision, and implemented this approach by using Markov chain Monte Carlo techniques, providing an objective figure of merit and improving structural quality.
Abstract: Macromolecular structures calculated from nuclear magnetic resonance data are not fully determined by experimental data but depend on subjective choices in data treatment and parameter settings. This makes it difficult to objectively judge the precision of the structures. We used Bayesian inference to derive a probability distribution that represents the unknown structure and its precision. This probability distribution also determines additional unknowns, such as theory parameters, that previously had to be chosen empirically. We implemented this approach by using Markov chain Monte Carlo techniques. Our method provides an objective figure of merit and improves structural quality.

Book
01 Jun 2005
TL;DR: In this article, two vectorized Monte Carlo codes MVP and GMVP have been developed at JAERI to realize fast and accurate Monte Carlo simulation of neutron and photon transport problems.
Abstract: To realize fast and accurate Monte Carlo simulation of neutron and photon transport problems, two vectorized Monte Carlo codes, MVP and GMVP, have been developed at JAERI. MVP is based on the continuous energy model and GMVP on the multigroup model. Compared with conventional scalar codes, these codes achieve higher computation speed by a factor of 10 or more on vector supercomputers. Both codes have sufficient functions for production use by adopting an accurate physics model, geometry description capability and variance reduction techniques. The first version of the codes was released in 1994. They have been extensively improved and new functions have been implemented. The major improvements and new functions are (1) capability to treat the scattering model expressed with File 6 of the ENDF-6 format, (2) time-dependent tallies, (3) reaction rate calculation with the pointwise response function, (4) flexible source specification, etc. This report describes the physical model and geometry description method used in the codes, the new functions, and how to use them.

Journal ArticleDOI
TL;DR: Different estimation procedures have been used to estimate the unknown parameter(s), and their performances are compared using Monte Carlo simulations; it is observed that this particular skewed distribution can be used quite effectively in analyzing lifetime data.

Journal ArticleDOI
Piotr Golonka1, Zbigniew Was1
TL;DR: In this article, the authors present a discussion of the precision for the PHOTOS Monte Carlo algorithm, with improved implementation of QED interference and multiple-photon radiation; they found that the precision of the current version of PHOTOS is 0.1% in the case of Z and W decays.
Abstract: We present a discussion of the precision for the PHOTOS Monte Carlo algorithm, with improved implementation of QED interference and multiple-photon radiation. The main application of PHOTOS is the generation of QED radiative corrections in decays of any resonances, simulated by a "host" Monte Carlo generator. By careful comparisons automated with the help of the MC-TESTER tool specially tailored for that purpose, we found that the precision of the current version of PHOTOS is 0.1% in the case of Z and W decays. In the general case, the precision of PHOTOS was also improved, but this will not be quantified here.

Journal ArticleDOI
TL;DR: This work presents a novel partial-update near-neighbor list (NNL) algorithm that is superior to previous algorithms at high densities, without compromising the correctness of the algorithm.

Journal ArticleDOI
TL;DR: An efficient statistical timing analysis algorithm that predicts the probability distribution of the circuit delay considering both inter-die and intra- die variations, while accounting for the effects of spatial correlations of intra-die parameter variations, is presented.
Abstract: Process variations are of increasing concern in today's technologies, and they can significantly affect circuit performance. An efficient statistical timing analysis algorithm that predicts the probability distribution of the circuit delay considering both inter-die and intra-die variations, while accounting for the effects of spatial correlations of intra-die parameter variations, is presented. The procedure uses a first-order Taylor series expansion to approximate the gate and interconnect delays. Next, principal component analysis (PCA) techniques are employed to transform the set of correlated parameters into an uncorrelated set. The statistical timing computation is then easily performed with a program evaluation and review technique (PERT)-like circuit graph traversal. The run time of this algorithm is linear in the number of gates and interconnects, as well as the number of varying parameters and grid partitions that are used to model spatial correlations. The accuracy of the method is verified with Monte Carlo (MC) simulation. On average, for the 100 nm technology, the errors of mean and standard deviation (SD) values computed by the proposed method are 1.06% and -4.34%, respectively, and the errors of predicting the 99% and 1% confidence points are -2.46% and -0.99%, respectively. A testcase with about 17,800 gates was solved in about 500 s, with high accuracy as compared to an MC simulation that required more than 15 h.
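The PCA step can be checked against Monte Carlo on a toy first-order delay model: rotate the correlated parameters into independent principal components, in which the delay's variance becomes a simple sum of squared sensitivities. (The delay coefficients and covariance matrix below are invented for illustration, not from the paper.)

```python
import numpy as np

rng = np.random.default_rng(7)

# First-order (Taylor) delay model: delay = d0 + a . dp, where dp are
# the correlated process-parameter variations.
d0 = 10.0
a = np.array([0.5, 0.3, 0.2])
cov = np.array([[1.0, 0.6, 0.2],
                [0.6, 1.0, 0.4],
                [0.2, 0.4, 1.0]])

# PCA: eigendecomposition of the covariance rotates dp into
# uncorrelated, unit-variance principal components.
eigval, eigvec = np.linalg.eigh(cov)
b = (a @ eigvec) * np.sqrt(eigval)       # sensitivities to the PCs

mean_pca = d0                            # linear model leaves the mean unchanged
std_pca = float(np.sqrt(b @ b))          # delay variance = sum of b_k^2

# Monte Carlo reference for comparison.
samples = d0 + rng.multivariate_normal(np.zeros(3), cov, 200_000) @ a
```

Because the components are independent, variances add analytically along the PERT-like traversal; the Monte Carlo run is only needed, as in the paper, to verify the accuracy of the analytic propagation.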

Journal ArticleDOI
TL;DR: This work describes a sequential importance sampling procedure for analyzing two-way zero–one or contingency tables with fixed marginal sums, and produces Monte Carlo samples that are remarkably close to the uniform distribution, enabling one to approximate closely the null distributions of various test statistics about these tables.
Abstract: We describe a sequential importance sampling (SIS) procedure for analyzing two-way zero–one or contingency tables with fixed marginal sums. An essential feature of the new method is that it samples the columns of the table progressively according to certain special distributions. Our method produces Monte Carlo samples that are remarkably close to the uniform distribution, enabling one to approximate closely the null distributions of various test statistics about these tables. Our method compares favorably with other existing Monte Carlo-based algorithms, and sometimes is a few orders of magnitude more efficient. In particular, compared with Markov chain Monte Carlo (MCMC)-based approaches, our importance sampling method not only is more efficient in terms of absolute running time and frees one from pondering over the mixing issue, but also provides an easy and accurate estimate of the total number of tables with fixed marginal sums, which is far more difficult for an MCMC method to achieve.