
Showing papers on "Monte Carlo molecular modeling" published in 2011


Journal ArticleDOI
TL;DR: In this article, the authors proposed Metropolis adjusted Langevin and Hamiltonian Monte Carlo sampling methods on the Riemann manifold to resolve the shortcomings of existing Monte Carlo algorithms when sampling from target densities that may be high dimensional and exhibit strong correlations.
Abstract: The paper proposes Metropolis adjusted Langevin and Hamiltonian Monte Carlo sampling methods defined on the Riemann manifold to resolve the shortcomings of existing Monte Carlo algorithms when sampling from target densities that may be high dimensional and exhibit strong correlations. The methods provide fully automated adaptation mechanisms that circumvent the costly pilot runs that are required to tune proposal densities for Metropolis–Hastings or indeed Hamiltonian Monte Carlo and Metropolis adjusted Langevin algorithms. This allows for highly efficient sampling even in very high dimensions where different scalings may be required for the transient and stationary phases of the Markov chain. The methodology proposed exploits the Riemann geometry of the parameter space of statistical models and thus automatically adapts to the local structure when simulating paths across this manifold, providing highly efficient convergence and exploration of the target density. The performance of these Riemann manifold Monte Carlo methods is rigorously assessed by performing inference on logistic regression models, log-Gaussian Cox point processes, stochastic volatility models and Bayesian estimation of dynamic systems described by non-linear differential equations. Substantial improvements in the time-normalized effective sample size are reported when compared with alternative sampling approaches. MATLAB code that is available from http://www.ucl.ac.uk/statistics/research/rmhmc allows replication of all the results reported.
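
For intuition, here is a minimal sketch of a Metropolis-adjusted Langevin step preconditioned by a metric, the building block these methods generalize. It uses a fixed metric G for simplicity (the paper's methods use a position-dependent Riemann metric with additional curvature terms); all function and variable names are illustrative, not taken from the paper's MATLAB code.

```python
import numpy as np

def precond_mala_step(theta, log_post, grad_log_post, G_inv, L_inv, eps, rng):
    """One Langevin proposal preconditioned by a constant metric G.

    G_inv: inverse metric (preconditions the drift).
    L_inv: matrix square root of G_inv (e.g. a Cholesky factor), for the noise.
    """
    drift = 0.5 * eps**2 * G_inv @ grad_log_post(theta)
    prop = theta + drift + eps * L_inv @ rng.standard_normal(theta.size)

    def log_q(a, b):
        # log density (up to a constant) of proposing a from b
        diff = a - (b + 0.5 * eps**2 * G_inv @ grad_log_post(b))
        return -0.5 * diff @ np.linalg.solve(G_inv, diff) / eps**2

    log_alpha = (log_post(prop) + log_q(theta, prop)
                 - log_post(theta) - log_q(prop, theta))
    return prop if np.log(rng.uniform()) < log_alpha else theta
```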

1,279 citations


Posted Content
TL;DR: The No-U-Turn Sampler (NUTS) as discussed by the authors is an extension to HMC that eliminates the need to set a number of steps L. NUTS uses a recursive algorithm to build a set of likely candidate points that spans a wide swath of the target distribution, stopping automatically when it starts to double back and retrace its steps.
Abstract: Hamiltonian Monte Carlo (HMC) is a Markov chain Monte Carlo (MCMC) algorithm that avoids the random walk behavior and sensitivity to correlated parameters that plague many MCMC methods by taking a series of steps informed by first-order gradient information. These features allow it to converge to high-dimensional target distributions much more quickly than simpler methods such as random walk Metropolis or Gibbs sampling. However, HMC's performance is highly sensitive to two user-specified parameters: a step size ε and a desired number of steps L. In particular, if L is too small then the algorithm exhibits undesirable random walk behavior, while if L is too large the algorithm wastes computation. We introduce the No-U-Turn Sampler (NUTS), an extension to HMC that eliminates the need to set a number of steps L. NUTS uses a recursive algorithm to build a set of likely candidate points that spans a wide swath of the target distribution, stopping automatically when it starts to double back and retrace its steps. Empirically, NUTS performs at least as efficiently as and sometimes more efficiently than a well tuned standard HMC method, without requiring user intervention or costly tuning runs. We also derive a method for adapting the step size parameter ε on the fly based on primal-dual averaging. NUTS can thus be used with no hand-tuning at all. NUTS is also suitable for applications such as BUGS-style automatic inference engines that require efficient "turnkey" sampling algorithms.
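
To make the two tuning parameters concrete, here is a minimal sketch of the standard HMC step whose ε and L NUTS removes the need to hand-tune. This is plain HMC, not the NUTS recursion; names are illustrative.

```python
import numpy as np

def hmc_step(q, log_p, grad_log_p, eps, L, rng):
    p = rng.standard_normal(q.size)             # resample momentum
    h0 = -log_p(q) + 0.5 * p @ p                # H = U(q) + K(p)
    q_new, p_new = q.copy(), p.copy()
    p_new += 0.5 * eps * grad_log_p(q_new)      # leapfrog half step
    for _ in range(L):                          # L full leapfrog steps
        q_new += eps * p_new
        p_new += eps * grad_log_p(q_new)
    p_new -= 0.5 * eps * grad_log_p(q_new)      # trim the extra half step
    h1 = -log_p(q_new) + 0.5 * p_new @ p_new
    return q_new if np.log(rng.uniform()) < h0 - h1 else q
```

If L is too small the chain diffuses like a random walk; if L is too large, computation is wasted retracing the trajectory. This is exactly the trade-off NUTS automates.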

1,050 citations


Journal ArticleDOI
TL;DR: A novel variance reduction technique for the standard Monte Carlo method, called the multilevel Monte Carlo method, is described, and its superiority is demonstrated numerically.
Abstract: We consider the numerical solution of elliptic partial differential equations with random coefficients. Such problems arise, for example, in uncertainty quantification for groundwater flow. We describe a novel variance reduction technique for the standard Monte Carlo method, called the multilevel Monte Carlo method, and demonstrate numerically its superiority. The asymptotic cost of solving the stochastic problem with the multilevel method is always significantly lower than that of the standard method and grows only proportionally to the cost of solving the deterministic problem in certain circumstances. Numerical calculations demonstrating the effectiveness of the method for one- and two-dimensional model problems arising in groundwater flow are presented.
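
The essence of the multilevel trick is a telescoping sum: many cheap coarse-grid samples estimate the bulk of the expectation, and progressively fewer fine-grid samples estimate the corrections. A minimal sketch, assuming a hypothetical solver solve(omega, h) that returns the quantity of interest for a random input omega on a mesh of width h:

```python
import numpy as np

def mlmc_estimate(solve, draw_omega, levels, n_samples, rng):
    """levels: mesh widths h_0 > h_1 > ... (coarse to fine).
    n_samples: number of samples per level, typically decreasing."""
    total = 0.0
    for ell, (h, n) in enumerate(zip(levels, n_samples)):
        acc = 0.0
        for _ in range(n):
            omega = draw_omega(rng)
            fine = solve(omega, h)
            # telescoping correction: coarse solve on the SAME random input
            coarse = solve(omega, levels[ell - 1]) if ell > 0 else 0.0
            acc += fine - coarse
        total += acc / n
    return total
```

Because the fine and coarse solves share the same omega, the level corrections have small variance, so only a few expensive fine-level samples are needed.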

571 citations


Journal ArticleDOI
TL;DR: The overall complexity of computing mean fields as well as k-point correlations of the random solution is proved to be of log-linear complexity in the number of unknowns of a single Multi-level solve of the deterministic elliptic problem.
Abstract: In Monte Carlo methods, quadrupling the sample size halves the error. In simulations of stochastic partial differential equations (SPDEs), the total work is the sample size times the solution cost of an instance of the partial differential equation. A Multi-level Monte Carlo method is introduced which allows one, in certain cases, to reduce the overall work to that of the discretization of one instance of the deterministic PDE. The model problem is an elliptic equation with stochastic coefficients. Multi-level Monte Carlo errors and work estimates are given both for the mean of the solutions and for higher moments. The overall complexity of computing mean fields as well as k-point correlations of the random solution is proved to be of log-linear complexity in the number of unknowns of a single Multi-level solve of the deterministic elliptic problem. Numerical examples complete the theoretical analysis.

346 citations


Journal ArticleDOI
TL;DR: In this article, a Monte Carlo method for obtaining solutions of the Boltzmann equation to describe phonon transport in micro-and nanoscale devices is presented, which can resolve arbitrarily small signals at small constant cost and thus represents a considerable improvement compared to traditional Monte Carlo methods, whose cost increases quadratically with decreasing signal.
Abstract: We present a Monte Carlo method for obtaining solutions of the Boltzmann equation to describe phonon transport in micro- and nanoscale devices. The proposed method can resolve arbitrarily small signals (e.g., temperature differences) at small constant cost and thus represents a considerable improvement compared to traditional Monte Carlo methods, whose cost increases quadratically with decreasing signal. This is achieved via a control-variate variance-reduction formulation in which the stochastic particle description solves only for the deviation from a nearby equilibrium, while the latter is described analytically. We also show that simulation of an energy-based Boltzmann equation results in an algorithm that lends itself naturally to exact energy conservation, thereby considerably improving the simulation fidelity. Simulations using the proposed method are used to investigate the effect of porosity on the effective thermal conductivity of silicon. We also present simulations of a recently developed thermal conductivity spectroscopy process. The latter simulations demonstrate how the computational gains introduced by the proposed method enable the simulation of otherwise intractable multiscale phenomena.
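
The control-variate formulation amounts to estimating only the deviation from a known equilibrium and adding the equilibrium contribution back analytically. A generic sketch of the idea (not the Boltzmann-equation algorithm itself; names are illustrative):

```python
import numpy as np

def control_variate_mean(f, f_eq, eq_mean, sample, n, rng):
    """Estimate E[f(X)] as the analytic equilibrium mean of f_eq
    plus a Monte Carlo mean of the deviation f - f_eq."""
    dev = np.fromiter((f(x) - f_eq(x) for x in (sample(rng) for _ in range(n))),
                      dtype=float, count=n)
    return eq_mean + dev.mean()
```

When f is close to f_eq, the deviation has tiny variance, which is why arbitrarily small signals can be resolved at small constant cost.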

268 citations


Book ChapterDOI
10 May 2011
TL;DR: The Metropolis algorithm was widely used by chemists and physicists, but it did not become widely known among statisticians until after 1990, when Gelfand and Smith (1990) made the wider Bayesian community aware of the Gibbs sampler, which up to that time had been known only in the spatial statistics community.
Abstract: Despite a few notable uses of simulation of random processes in the pre-computer era (Hammersley and Handscomb, 1964, Section 1.2; Stigler, 2002, Chapter 7), practical widespread use of simulation had to await the invention of computers. Almost as soon as computers were invented, they were used for simulation (Hammersley and Handscomb, 1964, Section 1.2). The name "Monte Carlo" started as cuteness: gambling was then (around 1950) illegal in most places, and the casino at Monte Carlo was the most famous in the world, but the name soon became a colorless technical term for simulation of random processes. Markov chain Monte Carlo (MCMC) was invented soon after ordinary Monte Carlo at Los Alamos, one of the few places where computers were available at the time. Metropolis et al. (1953) simulated a liquid in equilibrium with its gas phase. The obvious way to find out about the thermodynamic equilibrium is to simulate the dynamics of the system, and let it run until it reaches equilibrium. The tour de force was their realization that they did not need to simulate the exact dynamics; they only needed to simulate some Markov chain having the same equilibrium distribution. Simulations following the scheme of Metropolis et al. (1953) are said to use the Metropolis algorithm. As computers became more widely available, the Metropolis algorithm was widely used by chemists and physicists, but it did not become widely known among statisticians until after 1990. Hastings (1970) generalized the Metropolis algorithm, and simulations following his scheme are said to use the Metropolis-Hastings algorithm. A special case of the Metropolis-Hastings algorithm was introduced by Geman and Geman (1984), apparently without knowledge of earlier work. Simulations following their scheme are said to use the Gibbs sampler. Much of Geman and Geman (1984) discusses optimization to find the posterior mode rather than simulation, and it took some time for it to be understood in the spatial statistics community that the Gibbs sampler simulated the posterior distribution, thus enabling full Bayesian inference of all kinds. A methodology that was later seen to be very similar to the Gibbs sampler was introduced by Tanner and Wong (1987), again apparently without knowledge of earlier work. To this day, some refer to the Gibbs sampler as "data augmentation" following these authors. Gelfand and Smith (1990) made the wider Bayesian community aware of the Gibbs sampler, which up to that time had been known only in the spatial statistics community. Then it took off; as of this writing, a search for Gelfand and Smith (1990) on Google Scholar yields 4003 links to other works. It was rapidly realized that most Bayesian inference could be done by MCMC. It took a while for researchers to properly understand the theory of MCMC (Geyer, 1992; Tierney, 1994) and that all of the aforementioned work was a special case of the notion of MCMC. Green (1995) generalized the Metropolis-Hastings algorithm as much as it can be generalized. Although this terminology is not widely used, we say that simulations following his scheme use the Metropolis-Hastings-Green algorithm. MCMC is not used only for Bayesian inference. Likelihood inference in cases where the likelihood cannot be calculated explicitly due to missing data or complex dependence can also use MCMC (Geyer, 1994, 1999; Geyer and Thompson, 1992, 1995, and references cited therein).
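
The Metropolis insight described above is compactly captured in a few lines: propose a random perturbation and accept it with probability min(1, π(x')/π(x)), which leaves the target distribution invariant. A minimal illustrative sketch:

```python
import numpy as np

def metropolis(log_pi, x0, n_steps, step=0.5, seed=0):
    rng = np.random.default_rng(seed)
    x, samples = x0, []
    for _ in range(n_steps):
        x_prop = x + step * rng.standard_normal()
        # accept with probability min(1, pi(x_prop) / pi(x))
        if np.log(rng.uniform()) < log_pi(x_prop) - log_pi(x):
            x = x_prop
        samples.append(x)
    return np.array(samples)

# e.g., sampling a standard normal: metropolis(lambda x: -0.5 * x * x, 0.0, 10_000)
```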

261 citations


Journal ArticleDOI
TL;DR: In this article, the authors trace the history and development of Markov chain Monte Carlo (MCMC) from its early inception in the late 1940s through its use today, and see how the earlier stages of MC, not MCMC, research have led to the algorithms currently in use.
Abstract: We attempt to trace the history and development of Markov chain Monte Carlo (MCMC) from its early inception in the late 1940s through its use today. We see how the earlier stages of Monte Carlo (MC, not MCMC) research have led to the algorithms currently in use. More importantly, we see how the development of this methodology has not only changed our solutions to problems, but has changed the way we think about problems.

246 citations


Book
26 Sep 2011
TL;DR: Corrections to Conditional Monte Carlo: Gradient Estimation and Optimization Applications by Michael C. Fu and Jian-Qiang Hu can be found on the Internet.
Abstract: Preface. Selected Notation. 1. Introduction. 2. Three Extended Examples. 3. Conditional Monte Carlo Gradient Estimation. 4. Links to Other Settings. 5. Synopsis and Preview. 6. Queueing Systems. 7. (s,S) Inventory Systems. 8. Other Applications. References. Index. Corrections to Conditional Monte Carlo: Gradient Estimation and Optimization Applications (Kluwer International Series in Engineering and Computer Science, 392) by Michael C. Fu and Jian-Qiang Hu can be found on the Internet.

234 citations


Journal ArticleDOI
TL;DR: It is shown that the relative entropy offers tight control over the errors due to coarse-graining in arbitrary microscopic properties, and a systematic approach to reducing them is suggested.
Abstract: The ability to generate accurate coarse-grained models from reference fully atomic (or otherwise "first-principles") ones has become an important component in modeling the behavior of complex molecular systems with large length and time scales. We recently proposed a novel coarse-graining approach based upon variational minimization of a configuration-space functional called the relative entropy, S(rel), that measures the information lost upon coarse-graining. Here, we develop a broad theoretical framework for this methodology and numerical strategies for its use in practical coarse-graining settings. In particular, we show that the relative entropy offers tight control over the errors due to coarse-graining in arbitrary microscopic properties, and suggests a systematic approach to reducing them. We also describe fundamental connections between this optimization methodology and other coarse-graining strategies like inverse Monte Carlo, force matching, energy matching, and variational mean-field theory. We suggest several new numerical approaches to its minimization that provide new coarse-graining strategies. Finally, we demonstrate the application of these theoretical considerations and algorithms to a simple, instructive system and characterize convergence and errors within the relative entropy framework.
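
For reference, the relative entropy functional minimized by this approach can be written, following Shell's earlier formulation (notation ours: p_AA and p_CG are the all-atom and coarse-grained configurational distributions, and M maps atomistic to coarse-grained configurations):

```latex
S_{\mathrm{rel}} \;=\; \sum_{i} p_{\mathrm{AA}}(i)\,
  \ln\!\left[\frac{p_{\mathrm{AA}}(i)}{p_{\mathrm{CG}}\bigl(M(i)\bigr)}\right]
  \;+\; \bigl\langle S_{\mathrm{map}} \bigr\rangle_{\mathrm{AA}}
```

Here the mapping entropy term S_map accounts for the number of atomistic configurations that map to the same coarse-grained one, and S_rel >= 0 measures the information lost upon coarse-graining.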

230 citations


Book
07 Jun 2011
TL;DR: This book provides the basic detail necessary to learn how to apply Monte Carlo methods and thus should be useful as a text book for undergraduate or graduate courses in numerical methods.
Abstract: Exploring Monte Carlo Methods is a basic text that describes the numerical methods that have come to be known as "Monte Carlo." The book treats the subject generically through the first eight chapters and, thus, should be of use to anyone who wants to learn to use Monte Carlo. The next two chapters focus on applications in nuclear engineering, which are illustrative of uses in other fields. Five appendices are included, which provide useful information on probability distributions, general-purpose Monte Carlo codes for radiation transport, and other matters. The famous "Buffon's needle problem" provides a unifying theme as it is repeatedly used to illustrate many features of Monte Carlo methods. This book provides the basic detail necessary to learn how to apply Monte Carlo methods and thus should be useful as a text book for undergraduate or graduate courses in numerical methods. It is written so that interested readers with only an understanding of calculus and differential equations can learn Monte Carlo on their own. Coverage of topics such as variance reduction, pseudo-random number generation, Markov chain Monte Carlo, inverse Monte Carlo, and linear operator equations will make the book useful even to experienced Monte Carlo practitioners. The book provides a concise treatment of generic Monte Carlo methods and proofs for each chapter; its appendices cover certain mathematical functions, including the Bose-Einstein, Fermi-Dirac, and Watson functions.
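
As a taste of the book's unifying example, here is a minimal Buffon's needle estimate of π: a needle of length l dropped on a floor ruled with lines spaced d >= l apart crosses a line with probability 2l/(πd), so counting crossings estimates π.

```python
import numpy as np

def buffon_pi(n, l=1.0, d=1.0, seed=0):
    rng = np.random.default_rng(seed)
    y = rng.uniform(0, d / 2, n)           # center's distance to nearest line
    theta = rng.uniform(0, np.pi / 2, n)   # acute angle with the lines
    hits = np.count_nonzero(y <= (l / 2) * np.sin(theta))
    return 2 * l * n / (d * hits)          # invert P(hit) = 2l / (pi d)

print(buffon_pi(1_000_000))  # ~3.14
```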

199 citations


Journal ArticleDOI
TL;DR: Numerical experiments are reported, showing that the quasi-Monte Carlo method consistently outperforms the Monte Carlo method, with a smaller error and a noticeably better than O(N^-1/2) convergence rate.
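
A minimal sketch of the comparison, using SciPy's scrambled Sobol generator (scipy >= 1.7) on a smooth toy integrand whose exact integral over the unit cube is 1:

```python
import numpy as np
from scipy.stats import qmc

f = lambda x: np.prod(1 + 0.5 * (x - 0.5), axis=1)  # exact mean over [0,1]^d is 1

n, d = 2**12, 5
mc_est = f(np.random.default_rng(0).random((n, d))).mean()           # plain MC: O(N^-1/2)
qmc_est = f(qmc.Sobol(d=d, scramble=True, seed=0).random(n)).mean()  # QMC: near O(N^-1)
print(mc_est, qmc_est)  # the QMC estimate is typically much closer to 1.0
```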

Journal ArticleDOI
TL;DR: In this article, a Monte Carlo atmospheric radiative transfer model is presented to support the interpretation of UV/vis/near-IR spectroscopic measurements of scattered sunlight in the atmosphere.
Abstract: A new Monte Carlo atmospheric radiative transfer model is presented which is designed to support the interpretation of UV/vis/near-IR spectroscopic measurements of scattered sunlight in the atmosphere. The integro-differential equation describing the underlying transport process and its formal solution are discussed. A stochastic approach to solving the differential equation, the Monte Carlo method, is deduced and its application to the formal solution is demonstrated. It is shown how model photon trajectories of the resulting ray tracing algorithm are used to estimate functionals of the radiation field such as radiances, actinic fluxes and light path integrals. In addition, Jacobians of the former quantities with respect to optical parameters of the atmosphere are analyzed. Model output quantities are validated against measurements, by self-consistency tests and through intercomparisons with other radiative transfer models.
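
The heart of such a ray tracing algorithm is sampling photon free paths from the Beer-Lambert law and deciding between scattering and absorption at each event. A toy sketch for a homogeneous medium (illustrative only, not the model's actual implementation):

```python
import numpy as np

def trace_photon(beta_ext, ssa, rng, max_events=1000):
    """beta_ext: extinction coefficient [1/m]; ssa: single-scattering albedo."""
    path = 0.0
    for _ in range(max_events):
        path += -np.log(rng.uniform()) / beta_ext  # exponential free path
        if rng.uniform() > ssa:                    # absorbed with prob 1 - ssa
            return path, "absorbed"
        # otherwise scattered: a real model redraws the direction from the
        # phase function and continues the trajectory
    return path, "alive"
```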

Journal ArticleDOI
TL;DR: Novel likelihood-free approaches to model comparison are presented, based upon the independent estimation of the evidence of each model under study; these approaches allow the exploitation of MCMC or SMC algorithms for exploring the parameter space and do not require a sampler able to mix between models.
Abstract: Statistical methods of inference typically require the likelihood function to be computable in a reasonable amount of time. The class of "likelihood-free" methods termed Approximate Bayesian Computation (ABC) is able to eliminate this requirement, replacing the evaluation of the likelihood with simulation from it. Likelihood-free methods have gained in efficiency and popularity in the past few years, following their integration with Markov Chain Monte Carlo (MCMC) and Sequential Monte Carlo (SMC) in order to better explore the parameter space. They have been applied primarily to estimating the parameters of a given model, but can also be used to compare models. Here we present novel likelihood-free approaches to model comparison, based upon the independent estimation of the evidence of each model under study. Key advantages of these approaches over previous techniques are that they allow the exploitation of MCMC or SMC algorithms for exploring the parameter space, and that they do not require a sampler able to mix between models. We validate the proposed methods using a simple exponential family problem before providing a realistic problem from human population genetics: the comparison of different demographic models based upon genetic data from the Y chromosome.
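
The simplest member of the ABC family, rejection sampling on summary statistics, shows the likelihood-free idea in a few lines; the paper's MCMC/SMC variants build on this scheme. A minimal sketch with illustrative names:

```python
import numpy as np

def abc_rejection(simulate, summary, obs, prior_draw, eps, n_keep, rng):
    """Keep prior draws whose simulated summaries land within eps of the data."""
    kept, s_obs = [], np.asarray(summary(obs))
    while len(kept) < n_keep:
        theta = prior_draw(rng)
        s_sim = np.asarray(summary(simulate(theta, rng)))
        if np.linalg.norm(s_sim - s_obs) < eps:
            kept.append(theta)
    return np.array(kept)
```

The acceptance rate of each model's simulations also yields an estimate of its evidence, which is the per-model quantity the model-comparison approaches above estimate independently.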

Journal ArticleDOI
TL;DR: The bimodal shape of the density of states, and hence the critical point itself, is a purely liquid-state phenomenon that is distinct from the crystal-liquid transition.
Abstract: We perform successive umbrella sampling grand canonical Monte Carlo computer simulations of the original ST2 model of water in the vicinity of the proposed liquid–liquid critical point, at temperatures above and below the critical temperature. Our results support the previous work of Y. Liu, A. Z. Panagiotopoulos and P. G. Debenedetti [J. Chem. Phys., 2009, 131, 104508], who provided evidence for the existence and location of the critical point for ST2 using the Ewald method to evaluate the long-range forces. Our results therefore demonstrate the robustness of the evidence for critical behavior with respect to the treatment of the electrostatic interactions. In addition, we verify that the liquid is equilibrated at all densities on the Monte Carlo time scale of our simulations, and also that there is no indication of crystal formation during our runs. These findings demonstrate that the processes of liquid-state relaxation and crystal nucleation are well separated in time. Therefore, the bimodal shape of the density of states, and hence the critical point itself, is a purely liquid-state phenomenon that is distinct from the crystal–liquid transition.

Journal ArticleDOI
TL;DR: In this article, a Monte Carlo method is developed that performs adjoint-weighted tallies in continuous-energy k-eigenvalue calculations, where each contribution to a tally score is weighted by an estimate of its relative importance (the adjoint flux).
Abstract: A Monte Carlo method is developed that performs adjoint-weighted tallies in continuous-energy k-eigenvalue calculations. Each contribution to a tally score is weighted by an estimate of the relativ...

Journal ArticleDOI
TL;DR: In this article, an iterative strategy is used to determine a response surface that is able to fit the limit state function in the neighbourhood of the design point, where the locations of the sample points used to evaluate the free parameters of the response surface are chosen according to the importance sensitivity of each random variable.

Journal ArticleDOI
TL;DR: In this paper, the authors applied the gradient theory of fluid interfaces and Monte Carlo molecular simulations for the description of the interfacial behavior of the methane/water mixture, and the results obtained are compared with Monte Carlo simulations, where the fluid interface is explicitly considered in biphasic simulation boxes at both constant pressure and volume (NPT and NVT ensembles), using reliable united atom molecular models.
Abstract: This work is dedicated to the simultaneous application of the gradient theory of fluid interfaces and Monte Carlo molecular simulations for the description of the interfacial behavior of the methane/water mixture. Macroscopic (interfacial tension, adsorption) and microscopic (density profiles, interfacial thickness) properties are investigated. The gradient theory is coupled in this work with the SAFT-VR Mie equation of state. The results obtained are compared with Monte Carlo simulations, where the fluid interface is explicitly considered in biphasic simulation boxes at both constant pressure and volume (NPT and NVT ensembles), using reliable united atom molecular models. On one hand, both methods provide very good estimations of the interfacial tension of this mixture over a broad range of thermodynamic conditions. On the other hand, microscopic properties computed with both gradient theory and MC simulations are in very good agreement with each other, which confirms the consistency of both approaches. Interfacial tension minima at high pressure and prewetting transitions in the vicinity of saturation conditions are also investigated.

Journal ArticleDOI
TL;DR: The experimental results in preclinical settings demonstrate the feasibility of using both the aMC and pMC methods for time-resolved whole-body studies in small animals within a few hours, whereas the mMC method is a computationally prohibitive technique that is not well suited for time-domain fluorescence tomography applications.
Abstract: Purpose: The Monte Carlo method is an accurate model for time-resolved quantitative fluorescence tomography. However, this method suffers from low computational efficiency due to the large number of photons required for reliable statistics. This paper presents a comparison study on the computational efficiency of three Monte Carlo-based methods for time-domain fluorescence molecular tomography. Methods: The methods investigated to generate time-gated Jacobians were the perturbation Monte Carlo (pMC) method, the adjoint Monte Carlo (aMC) method, and the mid-way Monte Carlo (mMC) method. The effects of the different parameters that affect the computation time and statistics reliability were evaluated. Also, the methods were applied to a set of experimental data for tomographic application. Results: In silico results establish that the investigated parameters affect the computational time of the three methods differently (linearly, quadratically, or not significantly). Moreover, the noise level of the Jacobian varies when these parameters change. The experimental results in preclinical settings demonstrate the feasibility of using both the aMC and pMC methods for time-resolved whole-body studies in small animals within a few hours. Conclusions: Among the three Monte Carlo methods, the mMC method is a computationally prohibitive technique that is not well suited for time-domain fluorescence tomography applications. The pMC method is advantageous over the aMC method when early gates are employed and a large number of detectors is present. Alternatively, the aMC method is the method of choice when a small number of source-detector pairs are used.

Journal ArticleDOI
TL;DR: In this article, five variance reduction techniques applicable to Monte Carlo simulations of radiative transfer in the atmosphere are presented: detector directional importance sampling, n-tuple local estimate, prediction-based splitting and Russian roulette, and circum-solar virtual importance sampling.
Abstract: We present five new variance reduction techniques applicable to Monte Carlo simulations of radiative transfer in the atmosphere: detector directional importance sampling, n-tuple local estimate, prediction-based splitting and Russian roulette, and circum-solar virtual importance sampling. With this set of methods it is possible to simulate remote sensing instruments accurately and quickly. In contrast to all other known techniques used to accelerate Monte Carlo simulations in cloudy atmospheres – except for two methods limited to narrow angle lidars – the presented methods do not make any approximations, and hence do not bias the result. Nevertheless, these methods converge as quickly as any of the biasing acceleration techniques, and the probability distribution of the simulation results is almost perfectly normal. The presented variance reduction techniques have been implemented into the Monte Carlo code MYSTIC (“Monte Carlo code for the physically correct tracing of photons in cloudy atmospheres”) in order to validate the techniques.
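
Two classic unbiased tools in this family, splitting and Russian roulette, illustrate how photon weights can be manipulated without biasing the result: both operations leave the expected weight unchanged. A minimal sketch (the prediction-based versions mentioned above choose the thresholds adaptively):

```python
import numpy as np

def roulette_or_split(weight, w_low, w_high, rng):
    """Return the list of surviving particle weights (possibly empty)."""
    if weight < w_low:                        # Russian roulette
        return [w_low] if rng.uniform() < weight / w_low else []
    if weight > w_high:                       # splitting
        n = int(np.ceil(weight / w_high))
        return [weight / n] * n
    return [weight]
```

Unbiasedness check: under roulette the expected surviving weight is (weight/w_low) * w_low = weight, and splitting conserves the total weight exactly.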

Journal ArticleDOI
TL;DR: This work designs an irreversible version of Metropolis–Hastings (IMH) and tests it on the example of a spin cluster, constructing an irreversible deformation of a given reversible algorithm that is capable of dramatically improving sampling from a known distribution.

Journal ArticleDOI
TL;DR: A simple and easily implemented Monte Carlo algorithm is described which enables configurational-bias sampling of molecules containing branch points and rings with endocyclic and exocyclic atoms and can be used to sample conformational space for molecules of arbitrary complexity in both open and closed statistical mechanical ensembles.
Abstract: A simple and easily implemented Monte Carlo algorithm is described which enables configurational-bias sampling of molecules containing branch points and rings with endocyclic and exocyclic atoms. The method overcomes well-known problems associated with sequential configurational-bias sampling methods. A “reservoir” or “library” of fragments is generated with known probability distributions dependent on stiff intramolecular degrees of freedom. Configurational-bias moves assemble the fragments into whole molecules using the energy associated with the remaining degrees of freedom. The methods for generating the fragments are validated on models of propane, isobutane, neopentane, cyclohexane, and methylcyclohexane. It is shown how the sampling method is implemented in the Gibbs ensemble, and validation studies are performed in which the liquid coexistence curves of propane, isobutane, and 2,2-dimethylhexane are computed and shown to agree with accepted values. The method is general and can be used to sample conformational space for molecules of arbitrary complexity in both open and closed statistical mechanical ensembles.
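
The core selection step of configurational-bias sampling picks each trial position with probability proportional to its Boltzmann weight and records the Rosenbluth factor used later in the acceptance rule. A minimal sketch of that step (illustrative, not the paper's fragment-library machinery):

```python
import numpy as np

def select_trial(energies, beta, rng):
    """energies: trial energies U_j; returns (chosen index, Rosenbluth weight)."""
    w = np.exp(-beta * np.asarray(energies, dtype=float))
    W = w.sum()                       # Rosenbluth weight of this growth step
    j = rng.choice(len(w), p=w / W)   # bias toward low-energy trial positions
    return j, W
```

The product of the W factors over all growth steps, computed for both the new and the old configuration, enters a Metropolis-style acceptance probability that removes the bias.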

Journal ArticleDOI
TL;DR: In this paper, a new class of Monte Carlo moves based on nonequilibrium dynamics is introduced, where candidate configurations are generated through a finite-time process in which a system is actively driven out of equilibrium, and accepted with criteria that preserve the equilibrium distribution.
Abstract: Metropolis Monte Carlo simulation is a powerful tool for studying the equilibrium properties of matter. In complex condensed-phase systems, however, it is difficult to design Monte Carlo moves with high acceptance probabilities that also rapidly sample uncorrelated configurations. Here, we introduce a new class of moves based on nonequilibrium dynamics: Candidate configurations are generated through a finite-time process in which a system is actively driven out of equilibrium, and accepted with criteria that preserve the equilibrium distribution. The acceptance rule is similar to the Metropolis acceptance probability, but related to the nonequilibrium work rather than the instantaneous energy difference. Our method is applicable to sampling from both a single thermodynamic state or a mixture of thermodynamic states, and allows both coordinates and thermodynamic parameters to be driven in nonequilibrium proposals. Whereas generating finite-time switching trajectories incurs an additional cost, driving some degrees of freedom while allowing others to evolve naturally can lead to large enhancements in acceptance probabilities, greatly reducing structural correlation times. Using nonequilibrium driven processes vastly expands the repertoire of useful Monte Carlo proposals in simulations of dense solvated systems.
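
The key change relative to ordinary Metropolis is that the acceptance test uses the accumulated nonequilibrium protocol work W rather than the instantaneous energy difference; a sketch of just that rule:

```python
import numpy as np

def accept_nonequilibrium_move(work, beta, rng):
    """Metropolis-like test with protocol work in place of the energy change."""
    return np.log(rng.uniform()) < -beta * work
```

Preserving the equilibrium distribution additionally requires the driven proposal to be generated by a protocol paired with its time reverse, which is where the method's careful bookkeeping lies.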

Journal ArticleDOI
TL;DR: In this article, an efficient particle simulation method for the Boltzmann transport equation based on the low-variance deviational simulation Monte Carlo approach to the variable-hard-sphere gas was proposed.
Abstract: We present an efficient particle simulation method for the Boltzmann transport equation based on the low-variance deviational simulation Monte Carlo approach to the variable-hard-sphere gas. The proposed method exhibits drastically reduced statistical uncertainty for low-signal problems compared to standard particle methods such as the direct simulation Monte Carlo method. We show that by enforcing mass conservation, accurate simulations can be performed in the transition regime requiring as few as ten particles per cell, enabling efficient simulation of multidimensional problems at arbitrarily small deviation from equilibrium.

Journal ArticleDOI
TL;DR: In this paper, Monte Carlo sampling is used to extract the expectation value of projected entangled pair states with a large virtual bond dimension, which can be used to obtain the tensors describing the ground state wave function of the antiferromagnetic Heisenberg model.
Abstract: We demonstrate that Monte Carlo sampling can be used to efficiently extract the expectation value of projected entangled pair states with a large virtual bond dimension. We use the simple update rule introduced by H. C. Jiang et al. [Phys. Rev. Lett. 101, 090603 (2008)] to obtain the tensors describing the ground state wave function of the antiferromagnetic Heisenberg model and evaluate the finite size energy and staggered magnetization for square lattices with periodic boundary conditions of linear sizes up to $L=16$ and virtual bond dimensions up to $D=16$. The finite size magnetization errors are $0.003(2)$ and $0.013(2)$ at $D=16$ for a system of size $L=8,16$, respectively. Finite $D$ extrapolation provides exact finite size magnetization for $L=8$, and reduces the magnetization error to $0.005(3)$ for $L=16$, significantly improving the previous state-of-the-art results.

Proceedings Article
12 Dec 2011
TL;DR: This paper proposes MCMC samplers that make use of quasi-Newton approximations, which approximate the Hessian of the target distribution from previous samples and gradients generated by the sampler at a fraction of the cost of MCMC methods that require higher-order derivatives.
Abstract: The performance of Markov chain Monte Carlo methods is often sensitive to the scaling and correlations between the random variables of interest. An important source of information about the local correlation and scale is given by the Hessian matrix of the target distribution, but this is often either computationally expensive or infeasible. In this paper we propose MCMC samplers that make use of quasi-Newton approximations, which approximate the Hessian of the target distribution from previous samples and gradients generated by the sampler. A key issue is that MCMC samplers that depend on the history of previous states are in general not valid. We address this problem by using limited memory quasi-Newton methods, which depend only on a fixed window of previous samples. On several real world datasets, we show that the quasi-Newton sampler is more effective than standard Hamiltonian Monte Carlo at a fraction of the cost of MCMC methods that require higher-order derivatives.

Posted Content
01 Jan 2011
TL;DR: The history and development of Markov chain Monte Carlo (MCMC) is traced from its early inception in the late 1940s through its use today, showing how the development of this methodology has changed the way the authors think about problems.
Abstract: In this note we attempt to trace the history and development of Markov chain Monte Carlo (MCMC) from its early inception in the late 1940s through its use today. We see how the earlier stages of Monte Carlo (MC, not MCMC) research have led to the algorithms currently in use. More importantly, we see how the development of this methodology has not only changed our solutions to problems, but has changed the way we think about problems.

Journal ArticleDOI
TL;DR: PhyloSim is an extensible framework for the Monte Carlo simulation of sequence evolution, written in R, which uses the Gillespie algorithm to integrate the actions of many concurrent processes such as substitutions, insertions and deletions, and allows for the incorporation of selective constraints on indel events.
Abstract: Background: The Monte Carlo simulation of sequence evolution is routinely used to assess the performance of phylogenetic inference methods and sequence alignment algorithms. Progress in the field of molecular evolution fuels the need for more realistic and hence more complex simulations, adapted to particular situations, yet current software makes unreasonable assumptions such as homogeneous substitution dynamics or a uniform distribution of indels across the simulated sequences. This calls for an extensible simulation framework written in a high-level functional language, offering new functionality and making it easy to incorporate further complexity.
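
The Gillespie step that such a framework repeats is short: draw the waiting time from the total event rate, then choose which process fires in proportion to its rate. A minimal sketch assuming constant rates (real sequence simulators update the rates after each event):

```python
import numpy as np

def gillespie(rates, t_end, rng):
    """rates: dict mapping event name -> rate; returns (time, event) pairs."""
    names = list(rates)
    r = np.array([rates[k] for k in names], dtype=float)
    total, t, events = r.sum(), 0.0, []
    while True:
        t += rng.exponential(1.0 / total)          # waiting time to next event
        if t > t_end:
            return events
        events.append((t, rng.choice(names, p=r / total)))
```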

Journal ArticleDOI
TL;DR: In this article, a Monte Carlo sampling of diagrammatic corrections to the noncrossing approximation is shown to provide numerically exact estimates of the long-time dynamics and steady-state properties of nonequilibrium quantum impurity models.
Abstract: A Monte Carlo sampling of diagrammatic corrections to the noncrossing approximation is shown to provide numerically exact estimates of the long-time dynamics and steady-state properties of nonequilibrium quantum impurity models. This ``bold'' expansion converges uniformly in time and significantly ameliorates the sign problem that has heretofore limited the power of real-time Monte Carlo approaches to strongly interacting real-time quantum problems. The approach enables the study of previously intractable problems ranging from generic long-time nonequilibrium transport characteristics in systems with large on-site repulsion to the direct description of spectral functions on the real frequency axis in dynamical mean field theory.

Journal ArticleDOI
TL;DR: In this article, a novel algorithm (JEA) is proposed to simulate exactly from a class of one-dimensional jump-diffusion processes with state-dependent intensity, which allows unbiased Monte Carlo simulation of a wide class of functionals of the process trajectory.
Abstract: We introduce a novel algorithm (JEA) to simulate exactly from a class of one-dimensional jump-diffusion processes with state-dependent intensity. The simulation of the continuous component builds on the recent Exact Algorithm (Beskos et al., Bernoulli 12(6):1077–1098, 2006a). The simulation of the jump component instead employs a thinning algorithm with stochastic acceptance probabilities in the spirit of Glasserman and Merener (Proc R Soc Lond Ser A Math Phys Eng Sci 460(2041):111–127, 2004). In turn JEA allows unbiased Monte Carlo simulation of a wide class of functionals of the process’ trajectory, including discrete averages, max/min, crossing events, hitting times. Our numerical experiments show that the method outperforms Monte Carlo methods based on the Euler discretization.
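
The thinning step for the jump component can be sketched compactly: propose candidate jump times from a dominating constant intensity and accept each with the ratio of the true state-dependent intensity to the bound (illustrative names; the actual JEA couples this with the exact simulation of the diffusion component):

```python
import numpy as np

def thinned_jump_times(intensity, lam_max, x_at, t_end, rng):
    """intensity(x) <= lam_max for all states; x_at(t) gives the state at t."""
    times, t = [], 0.0
    while True:
        t += rng.exponential(1.0 / lam_max)     # candidate from Poisson(lam_max)
        if t > t_end:
            return times
        if rng.uniform() < intensity(x_at(t)) / lam_max:
            times.append(t)                     # accepted as a true jump
```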

Journal ArticleDOI
TL;DR: A technique to maximize the efficiency of the linear extrapolation of diffusion Monte Carlo results to zero time step is proposed, finding that a relative time-step ratio of 1:4 is optimal.
Abstract: We describe a number of strategies for minimizing and calculating accurately the statistical uncertainty in quantum Monte Carlo calculations. We investigate the impact of the sampling algorithm on the efficiency of the variational Monte Carlo method. We then propose a technique to maximize the efficiency of the linear extrapolation of diffusion Monte Carlo results to zero time step, finding that a relative time-step ratio of 1:4 is optimal. Finally, we discuss the removal of serial correlation from data sets by reblocking, setting out criteria for the choice of block length and quantifying the effects of the uncertainty in the estimated correlation length.
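
Reblocking, the serial-correlation remedy discussed above, repeatedly averages neighboring data points and watches the estimated error of the mean grow to a plateau; the plateau value is the honest error bar. A minimal sketch:

```python
import numpy as np

def reblock_errors(data):
    """Return (block length, estimated standard error of the mean) pairs."""
    x, out, length = np.asarray(data, dtype=float), [], 1
    while x.size >= 2:
        out.append((length, x.std(ddof=1) / np.sqrt(x.size)))
        m = x.size // 2 * 2                 # drop a trailing odd point
        x = 0.5 * (x[:m:2] + x[1:m:2])      # average neighboring pairs
        length *= 2
    return out
```

For serially correlated data the naive (block length 1) error is an underestimate; the plateau of this curve gives the corrected statistical uncertainty.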