
Showing papers on "Monte Carlo method published in 2015"


Journal ArticleDOI
TL;DR: In this paper, a review of quantum Monte Carlo methods for building the atomic nucleus from the ground up is presented, including the structure of light nuclei, the electroweak response of nuclei relevant in electron and neutrino scattering, and the properties of dense nucleonic matter.
Abstract: Quantum Monte Carlo techniques aim at providing a description of complex quantum systems such as nuclei and nucleonic matter from first principles, i.e., realistic nuclear interactions and currents. The methods are similar to those used for many-electron systems in quantum chemistry and condensed matter physics, but are extended to include spin-isospin, tensor, spin-orbit, and three-body interactions. This review shows how to build the atomic nucleus from the ground up. Examples include the structure of light nuclei, electroweak response of nuclei relevant in electron and neutrino scattering, and the properties of dense nucleonic matter.

602 citations


Journal ArticleDOI
TL;DR: A review of the progress in multilevel Monte Carlo path simulation can be found in this article, where the author highlights the simplicity, flexibility and generality of the multilevel Monte Carlo approach.
Abstract: The author’s presentation of multilevel Monte Carlo path simulation at the MCQMC 2006 conference stimulated a lot of research into multilevel Monte Carlo methods. This paper reviews the progress since then, emphasising the simplicity, flexibility and generality of the multilevel Monte Carlo approach. It also offers a few original ideas and suggests areas for future research.
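The core telescoping-sum idea of multilevel Monte Carlo can be illustrated with a minimal sketch (this is not Giles's own code; the geometric-Brownian-motion payoff, parameters, and per-level sample schedule below are all illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(42)

def mlmc_level(level, n, mu=0.05, sigma=0.2, s0=1.0, T=1.0):
    """Estimate E[P_l - P_{l-1}] for the toy payoff P = S_T of a geometric
    Brownian motion, discretised by Euler-Maruyama with 2**level fine steps.
    The coarse path reuses the fine path's Brownian increments, which is
    what keeps the variance of the level differences small."""
    m = 2 ** level
    dt = T / m
    dW = rng.normal(0.0, np.sqrt(dt), size=(n, m))
    s_fine = np.full(n, s0)
    for k in range(m):
        s_fine = s_fine * (1.0 + mu * dt + sigma * dW[:, k])
    if level == 0:
        return s_fine.mean()
    dW_coarse = dW[:, 0::2] + dW[:, 1::2]   # pairwise-summed increments
    s_coarse = np.full(n, s0)
    for k in range(m // 2):
        s_coarse = s_coarse * (1.0 + mu * 2 * dt + sigma * dW_coarse[:, k])
    return (s_fine - s_coarse).mean()

# Telescoping sum E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}], with fewer
# samples on the expensive fine levels (an illustrative schedule).
estimate = sum(mlmc_level(l, n=20000 // 2 ** l + 100) for l in range(5))
```

The exact value E[S_T] = s0·exp(mu·T) ≈ 1.051 provides a sanity check; in practice the per-level sample sizes are chosen from estimated level variances and costs, as the review discusses.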

590 citations


Journal ArticleDOI
TL;DR: In this article, a new mathematical framework for the analysis of millimeter wave cellular networks is introduced, which considers realistic path-loss and blockage models derived from recently reported experimental data.
Abstract: In this paper, a new mathematical framework for the analysis of millimeter wave cellular networks is introduced. Its peculiarity lies in considering realistic path-loss and blockage models, which are derived from recently reported experimental data. The path-loss model accounts for different distributions of line-of-sight and non-line-of-sight propagation conditions and the blockage model includes an outage state that provides a better representation of the outage possibilities of millimeter wave communications. By modeling the locations of the base stations as points of a Poisson point process and by relying on a noise-limited approximation for typical millimeter wave network deployments, simple and exact integral as well as approximated and closed-form formulas for computing the coverage probability and the average rate are obtained. With the aid of Monte Carlo simulations, the noise-limited approximation is shown to be sufficiently accurate for typical network densities. The noise-limited approximation, however, may not be sufficiently accurate for ultra-dense network deployments and for sub-gigahertz transmission bandwidths. For these case studies, the analytical approach is generalized to take the other-cell interference into account at the cost of increasing its computational complexity. The proposed mathematical framework is applicable to cell association criteria based on the smallest path-loss and on the highest received power. It accounts for beamforming alignment errors and for multi-tier cellular network deployments. Numerical results confirm that sufficiently dense millimeter wave cellular networks are capable of outperforming microwave cellular networks, in terms of coverage probability and average rate.

370 citations


Journal ArticleDOI
TL;DR: This paper presents the most recent review of the Geant4-DNA extension, as available to Geant4 users since June 2015 (release 10.2 Beta), and includes the description of new physical models for electron elastic and inelastic interactions in liquid water, as well as new examples dedicated to the simulation of the physicochemical and chemical stages of water radiolysis.

370 citations


Journal ArticleDOI
TL;DR: In this article, numerical results for ground state and excited state properties (energies, double occupancies, and Matsubara-axis self energies) of the single-orbital Hubbard model on a two-dimensional square lattice are presented, in order to provide an assessment of our ability to compute accurate results in the thermodynamic limit.
Abstract: Numerical results for ground state and excited state properties (energies, double occupancies, and Matsubara-axis self energies) of the single-orbital Hubbard model on a two-dimensional square lattice are presented, in order to provide an assessment of our ability to compute accurate results in the thermodynamic limit. Many methods are employed, including auxiliary field quantum Monte Carlo, bare and bold-line diagrammatic Monte Carlo, method of dual fermions, density matrix embedding theory, density matrix renormalization group, dynamical cluster approximation, diffusion Monte Carlo within a fixed node approximation, unrestricted coupled cluster theory, and multi-reference projected Hartree-Fock. Comparison of results obtained by different methods allows for the identification of uncertainties and systematic errors. The importance of extrapolation to converged thermodynamic limit values is emphasized. Cases where agreement between different methods is obtained establish benchmark results that may be useful in the validation of new approaches and the improvement of existing methods.

343 citations



Journal ArticleDOI
TL;DR: SuperMC, as presented in this paper, is a CAD-based Monte Carlo program for integrated simulation of nuclear systems that makes use of hybrid MC and deterministic methods and advanced computer technologies; it can perform neutron, photon, and coupled neutron-photon transport calculations, geometry and physics modeling, and results and process visualization.

305 citations


Proceedings Article
06 Jul 2015
TL;DR: A new synthesis of variational inference and Monte Carlo methods where one or more steps of MCMC are incorporated into the authors' variational approximation, resulting in a rich class of inference algorithms bridging the gap between variational methods and MCMC.
Abstract: Recent advances in stochastic gradient variational inference have made it possible to perform variational Bayesian inference with posterior approximations containing auxiliary random variables. This enables us to explore a new synthesis of variational inference and Monte Carlo methods where we incorporate one or more steps of MCMC into our variational approximation. By doing so we obtain a rich class of inference algorithms bridging the gap between variational methods and MCMC, and offering the best of both worlds: fast posterior approximation through the maximization of an explicit objective, with the option of trading off additional computation for additional accuracy. We describe the theoretical foundations that make this possible and show some promising first results.

297 citations


Journal ArticleDOI
TL;DR: In this article, a new technique for the analysis of transverse momentum dependent parton distribution functions, based on the Bessel weighting formalism, is presented. The procedure is applied to studies of the double longitudinal spin asymmetry in semi-inclusive deep inelastic scattering.
Abstract: In this paper we present a new technique for the analysis of transverse momentum dependent parton distribution functions, based on the Bessel weighting formalism. The procedure is applied to studies of the double longitudinal spin asymmetry in semi-inclusive deep inelastic scattering using a new dedicated Monte Carlo generator which includes quark intrinsic transverse momentum within the generalized parton model. Using a fully differential cross section for the process, the effect of four-momentum conservation is analyzed using various input models for transverse momentum distributions and fragmentation functions. We observe a few percent systematic offset of the Bessel-weighted asymmetry obtained from Monte Carlo extraction compared to input model calculations, which is due to the limitations imposed by energy and momentum conservation at the given energy/Q^2. We find that the Bessel weighting technique provides a powerful and reliable tool to study the Fourier transform of TMDs with controlled systematics due to experimental acceptances and resolutions with different TMD model inputs.

263 citations


Journal ArticleDOI
TL;DR: This work introduces a simple randomization idea for creating unbiased estimators in settings where the random object of interest can only be approximated, based on a sequence of approximations for computing expectations of path functionals associated with stochastic differential equations (SDEs).
Abstract: In many settings in which Monte Carlo methods are applied, there may be no known algorithm for exactly generating the random object for which an expectation is to be computed. Frequently, however, one can generate arbitrarily close approximations to the random object. We introduce a simple randomization idea for creating unbiased estimators in such a setting based on a sequence of approximations. Applying this idea to computing expectations of path functionals associated with stochastic differential equations (SDEs), we construct finite variance unbiased estimators with a “square root convergence rate” for a general class of multidimensional SDEs. We then identify the optimal randomization distribution. Numerical experiments with various path functionals of continuous-time processes that often arise in finance illustrate the effectiveness of our new approach.
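The randomization idea can be sketched in a few lines with a toy problem of my own choosing (truncated Taylor approximations rather than the paper's SDE discretizations, and a geometric randomization distribution assumed for illustration): draw a random truncation level N and reweight the increment by its probability, so the expectation telescopes to the exact limit.

```python
import numpy as np
from math import factorial

rng = np.random.default_rng(1)

def single_term_estimator(n_reps, r=0.6):
    """Unbiased estimate of E[exp(X)], X ~ N(0,1), from the approximations
    Y_n = sum_{k<=n} X**k / k!.  Draw N with P(N=n) = (1-r) * r**n and
    return (Y_N - Y_{N-1}) / P(N); the expectation telescopes to
    lim_n E[Y_n] = E[exp(X)] = exp(1/2)."""
    levels = rng.geometric(1 - r, size=n_reps) - 1   # support {0, 1, 2, ...}
    x = rng.normal(size=n_reps)
    # Increment Delta_n = Y_n - Y_{n-1} = X**n / n!
    delta = np.array([x[i] ** n / factorial(n) for i, n in enumerate(levels)])
    prob = (1 - r) * r ** levels.astype(float)
    return float(np.mean(delta / prob))

est = single_term_estimator(200_000)   # exact value is exp(0.5) ~ 1.6487
```

Here the Taylor increments decay fast enough that the estimator has finite variance for this choice of r; the paper's contribution includes identifying the optimal randomization distribution in the SDE setting.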

257 citations


Journal ArticleDOI
TL;DR: In this paper, a Monte Carlo simulation (MCS) based approach for efficient evaluation of the system failure probability P_f,s of slope stability in spatially variable soils is presented.
Abstract: Monte Carlo simulation (MCS) provides a conceptually simple and robust method to evaluate the system reliability of slope stability, particularly in spatially variable soils. However, it suffers from a lack of efficiency at small probability levels, which are of great interest in geotechnical design practice. To address this problem, this paper develops a MCS-based approach for efficient evaluation of the system failure probability P_f,s of slope stability in spatially variable soils. The proposed approach allows explicit modeling of the inherent spatial variability of soil properties in a system reliability analysis of slope stability. It facilitates the slope system reliability analysis using representative slip surfaces (i.e., dominating slope failure modes) and multiple stochastic response surfaces. Based on the stochastic response surfaces, the values of P_f,s are efficiently calculated using MCS with negligible computational effort. For illustration, the proposed MCS-based system reliab...
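The abstract's starting point, that plain MCS is simple but inefficient at small failure probabilities, can be seen in a generic sketch (the scalar limit-state function below is hypothetical, not the paper's slope-stability model):

```python
import numpy as np

rng = np.random.default_rng(5)

def g(x):
    """Hypothetical limit state: 'failure' occurs when g(x) < 0, i.e. X > 3."""
    return 3.0 - x

n = 1_000_000
x = rng.normal(size=n)            # one standard-normal random input
p_f = np.mean(g(x) < 0.0)         # direct MCS estimate of the failure probability
# Coefficient of variation of the estimator: it blows up as p_f shrinks,
# which is exactly the inefficiency the paper's response surfaces address.
cov = np.sqrt((1.0 - p_f) / (p_f * n))
```

The exact answer here is P(X > 3) ≈ 1.35e-3; by the CoV formula, holding the CoV at 10% for p_f = 1e-4 would already require on the order of 1e6 samples per analysis, motivating the paper's use of stochastic response surfaces in place of repeated slope-stability evaluations.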

Journal ArticleDOI
TL;DR: In this paper, the authors describe a quantum algorithm which can accelerate Monte Carlo methods in a very general setting, which estimates the expected output value of an arbitrary randomized or quantum subroutine with bounded variance, achieving a near-quadratic speedup over the best possible classical algorithm.
Abstract: Monte Carlo methods use random sampling to estimate numerical quantities which are hard to compute deterministically. One important example is the use in statistical physics of rapidly mixing Markov chains to approximately compute partition functions. In this work, we describe a quantum algorithm which can accelerate Monte Carlo methods in a very general setting. The algorithm estimates the expected output value of an arbitrary randomized or quantum subroutine with bounded variance, achieving a near-quadratic speedup over the best possible classical algorithm. Combining the algorithm with the use of quantum walks gives a quantum speedup of the fastest known classical algorithms with rigorous performance bounds for computing partition functions, which use multiple-stage Markov chain Monte Carlo techniques. The quantum algorithm can also be used to estimate the total variation distance between probability distributions efficiently.

Journal ArticleDOI
TL;DR: An overview of TRIPOLI-4®, the fourth generation of the 3D continuous-energy Monte Carlo code developed by the Service d’Etudes des Reacteurs et de Mathematiques Appliquees at CEA Saclay is presented.

Journal ArticleDOI
TL;DR: A coherence-optimal sampling scheme is proposed: a Markov chain Monte Carlo sampling that directly uses the basis functions under consideration to achieve statistical optimality among all sampling schemes with identical support.

Proceedings Article
06 Jul 2015
TL;DR: It is shown that under standard assumptions, getting one sample from a posterior distribution is differentially private "for free", and that this sample, as a statistical estimator, is often consistent, near optimal, and computationally tractable; these observations lead to an "anytime" algorithm for Bayesian learning under privacy constraints.
Abstract: We consider the problem of Bayesian learning on sensitive datasets and present two simple but somewhat surprising results that connect Bayesian learning to "differential privacy", a cryptographic approach to protect individual-level privacy while permitting database-level utility. Specifically, we show that under standard assumptions, getting one sample from a posterior distribution is differentially private "for free", and that this sample as a statistical estimator is often consistent, near optimal, and computationally tractable. Similarly but separately, we show that a recent line of work that uses stochastic gradients for Hybrid Monte Carlo (HMC) sampling also preserves differential privacy with minor or no modifications of the algorithmic procedure. These observations lead to an "anytime" algorithm for Bayesian learning under privacy constraints. We demonstrate that it performs much better than the state-of-the-art differentially private methods on synthetic and real datasets.

Journal ArticleDOI
TL;DR: Validation results of criticality calculation, burnup calculation, source convergence acceleration, tallies performance and parallel performance shown in this paper prove the capability of RMC to deal with reactor analysis problems with good performance.

Journal ArticleDOI
TL;DR: A method is presented for estimating the probabilities of outcomes of a quantum circuit using Monte Carlo sampling techniques applied to a quasiprobability representation; the estimate converges to the true quantum probability at a rate determined by the total negativity in the circuit, using a measure of negativity based on the 1-norm of the quasiprobability.
Abstract: We present a method for estimating the probabilities of outcomes of a quantum circuit using Monte Carlo sampling techniques applied to a quasiprobability representation. Our estimate converges to the true quantum probability at a rate determined by the total negativity in the circuit, using a measure of negativity based on the 1-norm of the quasiprobability. If the negativity grows at most polynomially in the size of the circuit, our estimator converges efficiently. These results highlight the role of negativity as a measure of nonclassical resources in quantum computation.
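The role of the 1-norm can be seen in a tiny classical sketch (the three-outcome quasiprobability below is invented for illustration, not taken from the paper): sample from |q| / ||q||_1, reweight by the sign, and the estimator's scale, hence its variance, grows with the negativity ||q||_1.

```python
import numpy as np

rng = np.random.default_rng(3)

q = np.array([0.7, 0.5, -0.2])   # toy quasiprobability: sums to 1, one negative entry
f = np.array([1.0, 2.0, 3.0])    # observable value attached to each outcome
exact = float(q @ f)             # 0.7 + 1.0 - 0.6 = 1.1

norm1 = np.abs(q).sum()          # ||q||_1 = 1.4, the negativity measure
p = np.abs(q) / norm1            # a genuine probability distribution
idx = rng.choice(len(q), size=200_000, p=p)
# Each sample contributes norm1 * sign(q) * f, so the spread of the
# estimator, and hence the number of samples needed, scales with norm1.
est = norm1 * np.mean(np.sign(q)[idx] * f[idx])
```

When q has no negative entries, norm1 = 1 and this reduces to ordinary Monte Carlo; growing negativity inflates the sample cost, matching the paper's convergence claim.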

Journal ArticleDOI
TL;DR: Computer simulation results show that the proposed system reliability analysis method can accurately give the system failure probability with a relatively small number of deterministic slope stability analyses.

Posted Content
TL;DR: In this paper, a generalized Pareto distribution fit to the upper tail of the distribution of the simulated importance ratios is used to stabilize importance sampling estimates; the method also provides stabilized effective sample size estimates, Monte Carlo error estimates, and convergence diagnostics.
Abstract: Importance weighting is a general way to adjust Monte Carlo integration to account for draws from the wrong distribution, but the resulting estimate can be noisy when the importance ratios have a heavy right tail. This routinely occurs when there are aspects of the target distribution that are not well captured by the approximating distribution, in which case more stable estimates can be obtained by modifying extreme importance ratios. We present a new method for stabilizing importance weights using a generalized Pareto distribution fit to the upper tail of the distribution of the simulated importance ratios. The method, which empirically performs better than existing methods for stabilizing importance sampling estimates, includes stabilized effective sample size estimates, Monte Carlo error estimates and convergence diagnostics.
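The generalized-Pareto tail fit takes some machinery, so as a minimal illustration of why modifying extreme ratios helps, the sketch below uses the simpler rule of capping weights at sqrt(S) times their mean (an earlier truncation approach of the kind the paper improves on, not Pareto smoothing itself), with a target N(0,1) and a deliberately too-narrow proposal:

```python
import numpy as np

rng = np.random.default_rng(7)

n = 50_000
x = rng.normal(0.0, 0.8, size=n)   # draws from the narrow proposal N(0, 0.8^2)
# Log importance ratio: target N(0,1) density over proposal density
# (additive constants cancel in the self-normalised estimate).
log_w = -0.5 * x**2 - (-0.5 * (x / 0.8) ** 2 - np.log(0.8))
w = np.exp(log_w - log_w.max())                   # numerically safe exponentiation
w_trunc = np.minimum(w, np.sqrt(n) * w.mean())    # cap the extreme ratios
# Self-normalised estimate of E[X^2] under the target (exact value: 1).
est = float(np.sum(w_trunc * x**2) / np.sum(w_trunc))
```

Pareto smoothing replaces the hard cap with order statistics of a generalized Pareto fit to the largest ratios, which the paper shows gives lower bias and a diagnostic (the fitted shape parameter) for when importance sampling is unreliable.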

Journal ArticleDOI
TL;DR: In this article, a Bayesian fusion technique for remotely sensed multi-band images is presented, where the observed images are related to the high spectral and high spatial resolution image to be recovered through physical degradations, e.g., spatial and spectral blurring and/or subsampling defined by the sensor characteristics.
Abstract: This paper presents a Bayesian fusion technique for remotely sensed multi-band images. The observed images are related to the high spectral and high spatial resolution image to be recovered through physical degradations, e.g., spatial and spectral blurring and/or subsampling defined by the sensor characteristics. The fusion problem is formulated within a Bayesian estimation framework. An appropriate prior distribution exploiting geometrical considerations is introduced. To compute the Bayesian estimator of the scene of interest from its posterior distribution, a Markov chain Monte Carlo algorithm is designed to generate samples asymptotically distributed according to the target distribution. To efficiently sample from this high-dimension distribution, a Hamiltonian Monte Carlo step is introduced within a Gibbs sampling strategy. The efficiency of the proposed fusion method is evaluated with respect to several state-of-the-art fusion techniques.

ReportDOI
TL;DR: In this article, Monte Carlo simulations are used to demonstrate that regression-discontinuity designs arrive at biased estimates when attributes related to outcomes predict heaping in the running variable.
Abstract: This study uses Monte Carlo simulations to demonstrate that regression-discontinuity designs arrive at biased estimates when attributes related to outcomes predict heaping in the running variable. After showing that our usual diagnostics may not be well suited to identifying this type of problem, we provide alternatives, and then discuss the usefulness of different approaches to addressing the bias. We then consider these issues in multiple non-simulated environments. (JEL C21, C14, I12)

Journal ArticleDOI
TL;DR: An abstract, problem-dependent theorem on the cost of the new multilevel estimator is given, based on a set of simple, verifiable assumptions; for a typical model problem in subsurface flow, the method shows significant gains over the standard Metropolis--Hastings estimator.
Abstract: In this paper we address the problem of the prohibitively large computational cost of existing Markov chain Monte Carlo methods for large-scale applications with high-dimensional parameter spaces, e.g., in uncertainty quantification in porous media flow. We propose a new multilevel Metropolis--Hastings algorithm and give an abstract, problem-dependent theorem on the cost of the new multilevel estimator based on a set of simple, verifiable assumptions. For a typical model problem in subsurface flow, we then provide a detailed analysis of these assumptions and show significant gains over the standard Metropolis--Hastings estimator. Numerical experiments confirm the analysis and demonstrate the effectiveness of the method with consistent reductions of more than an order of magnitude in the cost of the multilevel estimator over the standard Metropolis--Hastings algorithm for tolerances $\varepsilon < 10^{-2}$.

Journal ArticleDOI
TL;DR: In this paper, an updated implementation of the Monte Carlo based Glauber Model calculation is presented, which originally was used by the PHOBOS collaboration, and the main improvement w.r.t. the earlier version (v1) (Alver et al. 2008) is the inclusion of Tritium, Helium-3, and Uranium, as well as the treatment of deformed nuclei and GlauBER-Gribov fluctuations of the proton in p + A collisions.

Journal ArticleDOI
TL;DR: In this article, the authors proposed a new method based on Majorana representation of complex fermions, which they dubbed Majorana Quantum Monte Carlo (MQMC) and found a class of SU(N) fermionic models which are sign-free in MQMC.
Abstract: Much attention has been devoted recently to identify possible ways to overcome the notorious sign problem encountered in quantum Monte Carlo simulations. The authors of this paper propose a new method based on Majorana representation of complex fermions, which they dub Majorana Quantum Monte Carlo (MQMC). They find a class of SU(N) fermionic models which are sign-free in MQMC but cannot be solved with other available methods.

Journal ArticleDOI
TL;DR: In this paper, a numerical strategy for the efficient estimation of set-valued failure probabilities, coupling Monte Carlo with optimization methods, is presented in order to both speed up the reliability analysis, and provide a better estimate for the lower and upper bounds of the failure probability.

Proceedings Article
07 Dec 2015
TL;DR: This work describes a method for "distilling" a Monte Carlo approximation to the posterior predictive density into a more compact form, namely a single deep neural network.
Abstract: We consider the problem of Bayesian parameter estimation for deep neural networks, which is important in problem settings where we may have little data, and/or where we need accurate posterior predictive densities p(y|x, D), e.g., for applications involving bandits or active learning. One simple approach to this is to use online Monte Carlo methods, such as SGLD (stochastic gradient Langevin dynamics). Unfortunately, such a method needs to store many copies of the parameters (which wastes memory), and needs to make predictions using many versions of the model (which wastes time). We describe a method for "distilling" a Monte Carlo approximation to the posterior predictive density into a more compact form, namely a single deep neural network. We compare to two very recent approaches to Bayesian neural networks, namely an approach based on expectation propagation [HLA15] and an approach based on variational Bayes [BCKW15]. Our method performs better than both of these, is much simpler to implement, and uses less computation at test time.

Journal ArticleDOI
TL;DR: Quasi-Monte Carlo (QMC) as mentioned in this paper is an alternative to Monte Carlo, where random points are replaced with low-discrepancy sequences; the advantage is that QMC estimates usually converge faster than their Monte Carlo counterparts.
Abstract: So far, the algorithms we have discussed rely on Monte Carlo, that is, on averages of random variables. QMC (quasi-Monte Carlo) is an alternative to Monte Carlo where random points are replaced with low-discrepancy sequences. The advantage is that QMC estimates usually converge faster than their Monte Carlo counterparts.
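The comparison can be made concrete with a self-contained sketch (the base-2 van der Corput sequence stands in for the chapter's low-discrepancy constructions, and the integrand is an arbitrary choice):

```python
import numpy as np

def van_der_corput(n, base=2):
    """First n points of the base-b van der Corput sequence: the digits of
    i+1 in the given base, mirrored about the radix point."""
    points = np.empty(n)
    for i in range(n):
        q, denom, x = i + 1, 1.0, 0.0
        while q > 0:
            q, r = divmod(q, base)
            denom *= base
            x += r / denom
        points[i] = x
    return points

# Integrate f(x) = x^2 over [0, 1] (exact value 1/3) both ways.
n = 4096
rng = np.random.default_rng(0)
mc_est = np.mean(rng.random(n) ** 2)        # plain MC: O(n^{-1/2}) error
qmc_est = np.mean(van_der_corput(n) ** 2)   # QMC: roughly O(log(n)/n) error
```

With n = 4096 the QMC estimate is accurate to about 1e-4, while the Monte Carlo error is typically two orders of magnitude larger, illustrating the faster convergence the chapter describes (for smooth, low-dimensional integrands).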

Journal ArticleDOI
03 Dec 2015-Nature
TL;DR: An ab initio calculation of alpha–alpha scattering that uses lattice Monte Carlo simulations and lattice effective field theory to describe the low-energy interactions of protons and neutrons and a technique called the ‘adiabatic projection method’ to reduce the eight-body system to a two-cluster system is described.
Abstract: Processes such as the scattering of alpha particles (4He), the triple-alpha reaction, and alpha capture play a major role in stellar nucleosynthesis. In particular, alpha capture on carbon determines the ratio of carbon to oxygen during helium burning, and affects subsequent carbon, neon, oxygen, and silicon burning stages. It also substantially affects models of thermonuclear type Ia supernovae, owing to carbon detonation in accreting carbon-oxygen white-dwarf stars. In these reactions, the accurate calculation of the elastic scattering of alpha particles and alpha-like nuclei--nuclei with even and equal numbers of protons and neutrons--is important for understanding background and resonant scattering contributions. First-principles calculations of processes involving alpha particles and alpha-like nuclei have so far been impractical, owing to the exponential growth of the number of computational operations with the number of particles. Here we describe an ab initio calculation of alpha-alpha scattering that uses lattice Monte Carlo simulations. We use lattice effective field theory to describe the low-energy interactions of protons and neutrons, and apply a technique called the 'adiabatic projection method' to reduce the eight-body system to a two-cluster system. We take advantage of the computational efficiency and the more favourable scaling with system size of auxiliary-field Monte Carlo simulations to compute an ab initio effective Hamiltonian for the two clusters. We find promising agreement between lattice results and experimental phase shifts for s-wave and d-wave scattering. The approximately quadratic scaling of computational operations with particle number suggests that it should be possible to compute alpha scattering and capture on carbon and oxygen in the near future.
The methods described here can be applied to ultracold atomic few-body systems as well as to hadronic systems using lattice quantum chromodynamics to describe the interactions of quarks and gluons.

Journal ArticleDOI
TL;DR: A user-friendly open-source Monte Carlo regression package (McSAS) is presented, which aids in the analysis of scattering patterns from uncorrelated, shape-similar scatterers.
Abstract: A user-friendly open-source Monte Carlo regression package (McSAS) is presented, which structures the analysis of small-angle scattering (SAS) using uncorrelated shape-similar particles (or scattering contributions). The underdetermined problem is solvable, provided that sufficient external information is available. Based on this, the user picks a scatterer contribution model (or `shape') from a comprehensive library and defines variation intervals of its model parameters. A multitude of scattering contribution models are included, including prolate and oblate nanoparticles, core–shell objects, several polymer models, and a model for densely packed spheres. Most importantly, the form-free Monte Carlo nature of McSAS means it is not necessary to provide further restrictions on the mathematical form of the parameter distribution; without prior knowledge, McSAS is able to extract complex multimodal or odd-shaped parameter distributions from SAS data. When provided with data on an absolute scale with reasonable uncertainty estimates, the software outputs model parameter distributions in absolute volume fraction, and provides the modes of the distribution (e.g. mean, variance etc.). In addition to facilitating the evaluation of (series of) SAS curves, McSAS also helps in assessing the significance of the results through the addition of uncertainty estimates to the result. The McSAS software can be integrated as part of an automated reduction and analysis procedure in laboratory instruments or at synchrotron beamlines.

Journal ArticleDOI
TL;DR: In this article, an advanced Kriging method is proposed to balance accuracy and efficiency in reliability analysis, and it is used to evaluate the structural reliability of finite element models.