
Showing papers on "Monte Carlo method" published in 2015


Journal ArticleDOI
TL;DR: This paper introduces a vital extension of PLS: consistent PLS (PLSc), which provides a correction for estimates when PLS is applied to reflective constructs: the path coefficients, inter-construct correlations, and indicator loadings become consistent.
Abstract: This paper resumes the discussion in information systems research on the use of partial least squares (PLS) path modeling and shows that the inconsistency of PLS path coefficient estimates in the case of reflective measurement can have adverse consequences for hypothesis testing. To remedy this, the study introduces a vital extension of PLS: consistent PLS (PLSc). PLSc provides a correction for estimates when PLS is applied to reflective constructs: The path coefficients, inter-construct correlations, and indicator loadings become consistent. The outcome of a Monte Carlo simulation reveals that the bias of PLSc parameter estimates is comparable to that of covariance-based structural equation modeling. Moreover, the outcome shows that PLSc has advantages when using non-normally distributed data. We discuss the implications for IS research and provide guidelines for choosing among structural equation modeling techniques.

1,306 citations


Journal ArticleDOI
TL;DR: In this article, the authors provide an updated recommendation for the usage of sets of parton distribution functions (PDFs) and the assessment of PDF and PDF+$\alpha_s$ uncertainties suitable for applications at the LHC Run II.
Abstract: We provide an updated recommendation for the usage of sets of parton distribution functions (PDFs) and the assessment of PDF and PDF+$\alpha_s$ uncertainties suitable for applications at the LHC Run II. We review developments since the previous PDF4LHC recommendation, and discuss and compare the new generation of PDFs, which include substantial information from experimental data from the Run I of the LHC. We then propose a new prescription for the combination of a suitable subset of the available PDF sets, which is presented in terms of a single combined PDF set. We finally discuss tools which allow for the delivery of this combined set in terms of optimized sets of Hessian eigenvectors or Monte Carlo replicas, and their usage, and provide some examples of their application to LHC phenomenology.

683 citations


Journal ArticleDOI
TL;DR: A vital extension to partial least squares path modeling is introduced: consistent PLS, under which path coefficients, parameters of simultaneous equations, construct correlations, and indicator loadings are all estimated consistently.

682 citations


Journal ArticleDOI
Khachatryan, Albert M. Sirunyan, Armen Tumasyan, Wolfgang Adam, +2118 more
TL;DR: In this article, the performance and strategies used in electron reconstruction and selection with the CMS detector at the CERN LHC are presented, based on data corresponding to an integrated luminosity of 19.7 inverse femtobarns collected in proton-proton collisions at sqrt(s) = 8 TeV.
Abstract: The performance and strategies used in electron reconstruction and selection at CMS are presented based on data corresponding to an integrated luminosity of 19.7 inverse femtobarns, collected in proton-proton collisions at sqrt(s) = 8 TeV at the CERN LHC. The paper focuses on prompt isolated electrons with transverse momenta ranging from about 5 to a few 100 GeV. A detailed description is given of the algorithms used to cluster energy in the electromagnetic calorimeter and to reconstruct electron trajectories in the tracker. The electron momentum is estimated by combining the energy measurement in the calorimeter with the momentum measurement in the tracker. Benchmark selection criteria are presented, and their performances assessed using Z, Upsilon, and J/psi decays into electron-positron pairs. The spectra of the observables relevant to electron reconstruction and selection as well as their global efficiencies are well reproduced by Monte Carlo simulations. The momentum scale is calibrated with an uncertainty smaller than 0.3%. The momentum resolution for electrons produced in Z boson decays ranges from 1.7 to 4.5%, depending on electron pseudorapidity and energy loss through bremsstrahlung in the detector material.

633 citations


Journal ArticleDOI
TL;DR: In this paper, quantum Monte Carlo techniques for building the atomic nucleus from the ground up are reviewed, with examples including the structure of light nuclei, the electroweak response of nuclei relevant in electron and neutrino scattering, and the properties of dense nucleonic matter.
Abstract: Quantum Monte Carlo techniques aim at providing a description of complex quantum systems such as nuclei and nucleonic matter from first principles, i.e., realistic nuclear interactions and currents. The methods are similar to those used for many-electron systems in quantum chemistry and condensed matter physics, but are extended to include spin-isospin, tensor, spin-orbit, and three-body interactions. This review shows how to build the atomic nucleus from the ground up. Examples include the structure of light nuclei, electroweak response of nuclei relevant in electron and neutrino scattering, and the properties of dense nucleonic matter.

602 citations


Journal ArticleDOI
TL;DR: A review of the progress in multilevel Monte Carlo path simulation can be found in this article, in which the author highlights the simplicity, flexibility and generality of the multilevel Monte Carlo approach.
Abstract: The author’s presentation of multilevel Monte Carlo path simulation at the MCQMC 2006 conference stimulated a lot of research into multilevel Monte Carlo methods. This paper reviews the progress since then, emphasising the simplicity, flexibility and generality of the multilevel Monte Carlo approach. It also offers a few original ideas and suggests areas for future research.

590 citations
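The core MLMC estimator is compact enough to sketch. Below is a minimal, hypothetical Python illustration, assuming a geometric Brownian motion payoff E[S_T] discretized by Euler-Maruyama; the SDE, payoff, and all parameters are illustrative choices, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def level_sample(l, n, T=1.0, s0=1.0, r=0.05, sig=0.2):
    """Samples of the level-l correction Y_l = P_l - P_{l-1} for payoff P = S_T.

    The fine path uses 2**l Euler steps; the coarse path (absent at l = 0)
    uses half as many steps, driven by pairwise sums of the same Brownian
    increments so the two paths are tightly coupled.
    """
    nf = 2 ** l
    hf = T / nf
    dw = rng.normal(0.0, np.sqrt(hf), size=(n, nf))
    sf = np.full(n, s0)
    for i in range(nf):
        sf = sf + r * sf * hf + sig * sf * dw[:, i]
    if l == 0:
        return sf                        # coarsest level: Y_0 = P_0
    sc = np.full(n, s0)
    dwc = dw[:, 0::2] + dw[:, 1::2]      # coarse increments from fine ones
    for i in range(nf // 2):
        sc = sc + r * sc * (2 * hf) + sig * sc * dwc[:, i]
    return sf - sc

# The telescoping sum E[P_L] = sum_l E[Y_l] lets most samples be spent on
# cheap coarse levels; here the sample count simply halves per level.
L = 5
est = sum(np.mean(level_sample(l, 100_000 // 2 ** l)) for l in range(L + 1))
print(est, "vs exact E[S_T] =", np.exp(0.05))
```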


Journal ArticleDOI
TL;DR: An overview is given of OpenMC, an open source Monte Carlo particle transport code recently developed at the Massachusetts Institute of Technology, which uses continuous-energy cross sections and a constructive solid geometry representation, enabling high-fidelity modeling of nuclear reactors and other systems.

390 citations


Journal ArticleDOI
TL;DR: In this article, a new mathematical framework to the analysis of millimeter wave cellular networks is introduced, which considers realistic path-loss and blockage models derived from recently reported experimental data.
Abstract: In this paper, a new mathematical framework for the analysis of millimeter wave cellular networks is introduced. Its peculiarity lies in considering realistic path-loss and blockage models, which are derived from recently reported experimental data. The path-loss model accounts for different distributions of line-of-sight and non-line-of-sight propagation conditions and the blockage model includes an outage state that provides a better representation of the outage possibilities of millimeter wave communications. By modeling the locations of the base stations as points of a Poisson point process and by relying on a noise-limited approximation for typical millimeter wave network deployments, simple and exact integral as well as approximated and closed-form formulas for computing the coverage probability and the average rate are obtained. With the aid of Monte Carlo simulations, the noise-limited approximation is shown to be sufficiently accurate for typical network densities. The noise-limited approximation, however, may not be sufficiently accurate for ultra-dense network deployments and for sub-gigahertz transmission bandwidths. For these case studies, the analytical approach is generalized to take the other-cell interference into account at the cost of increasing its computational complexity. The proposed mathematical framework is applicable to cell association criteria based on the smallest path-loss and on the highest received power. It accounts for beamforming alignment errors and for multi-tier cellular network deployments. Numerical results confirm that sufficiently dense millimeter wave cellular networks are capable of outperforming microwave cellular networks, in terms of coverage probability and average rate.

370 citations
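The Monte Carlo cross-check described above is straightforward to reproduce in spirit. A hypothetical sketch for the noise-limited case, with Poisson-distributed base stations in a disk and independently drawn LOS/NLOS path-loss exponents; all densities, exponents, and power values below are illustrative placeholders, not the paper's measurement-derived models.

```python
import numpy as np

rng = np.random.default_rng(1)

def coverage_probability(lam, radius, n_trials, snr_th_db=0.0, p_los=0.2,
                         a_los=2.0, a_nlos=4.0, tx=1.0, noise=1e-9):
    """Monte Carlo coverage probability for a noise-limited downlink.

    Base stations form a Poisson point process of density lam (per m^2)
    inside a disk of the given radius; the typical user at the origin
    associates with the smallest path loss, and interference is ignored.
    """
    snr_th = 10.0 ** (snr_th_db / 10.0)
    covered = 0
    for _ in range(n_trials):
        n_bs = rng.poisson(lam * np.pi * radius ** 2)
        if n_bs == 0:
            continue                                      # outage: no base station
        dist = radius * np.sqrt(rng.uniform(size=n_bs))   # uniform in the disk
        alpha = np.where(rng.uniform(size=n_bs) < p_los, a_los, a_nlos)
        snr = tx / (dist ** alpha).min() / noise          # serve via min path loss
        covered += snr >= snr_th
    return covered / n_trials

print(coverage_probability(lam=1e-4, radius=500.0, n_trials=20_000))
```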


Journal ArticleDOI
TL;DR: This paper presents the most recent review of the Geant4-DNA extension, as available to Geant4 users since June 2015 (release 10.2 Beta), and includes the description of new physical models for electron elastic and inelastic interactions in liquid water, as well as new examples dedicated to the simulation of the physicochemical and chemical stages of water radiolysis.

370 citations


Journal ArticleDOI
TL;DR: In this article, numerical results for ground state and excited state properties (energies, double occupancies, and Matsubara-axis self energies) of the single-orbital Hubbard model on a two-dimensional square lattice are presented, in order to provide an assessment of our ability to compute accurate results in the thermodynamic limit.
Abstract: Numerical results for ground state and excited state properties (energies, double occupancies, and Matsubara-axis self energies) of the single-orbital Hubbard model on a two-dimensional square lattice are presented, in order to provide an assessment of our ability to compute accurate results in the thermodynamic limit. Many methods are employed, including auxiliary field quantum Monte Carlo, bare and bold-line diagrammatic Monte Carlo, method of dual fermions, density matrix embedding theory, density matrix renormalization group, dynamical cluster approximation, diffusion Monte Carlo within a fixed node approximation, unrestricted coupled cluster theory, and multi-reference projected Hartree-Fock. Comparison of results obtained by different methods allows for the identification of uncertainties and systematic errors. The importance of extrapolation to converged thermodynamic limit values is emphasized. Cases where agreement between different methods is obtained establish benchmark results that may be useful in the validation of new approaches and the improvement of existing methods.

343 citations


Journal ArticleDOI
TL;DR: A general approach for refining protein structure models on the basis of cryo-electron microscopy maps with near-atomic resolution that integrates Monte Carlo sampling with local density-guided optimization, Rosetta all-atom refinement and real-space B-factor fitting is described.
Abstract: New detector technology has improved the resolution of cryo-electron microscopy (cryo-EM), but tools for structure determination from high-resolution maps have lagged behind. DiMaio et al. report structure determination from high-resolution cryo-EM maps using a homologous structure as a starting model. Also in this issue, Wang et al. describe a de novo approach for structure determination that does not require a starting model.

Journal ArticleDOI
TL;DR: SuperMC is a CAD-based Monte Carlo program for the integrated simulation of nuclear systems that makes use of hybrid MC and deterministic methods and advanced computer technologies; it can perform neutron, photon, and coupled neutron-photon transport calculations, geometry and physics modeling, and visualization of results and processes.

Proceedings Article
06 Jul 2015
TL;DR: A new synthesis of variational inference and Monte Carlo methods is presented in which one or more MCMC steps are incorporated into the variational approximation, resulting in a rich class of inference algorithms bridging the gap between variational methods and MCMC.
Abstract: Recent advances in stochastic gradient variational inference have made it possible to perform variational Bayesian inference with posterior approximations containing auxiliary random variables. This enables us to explore a new synthesis of variational inference and Monte Carlo methods where we incorporate one or more steps of MCMC into our variational approximation. By doing so we obtain a rich class of inference algorithms bridging the gap between variational methods and MCMC, and offering the best of both worlds: fast posterior approximation through the maximization of an explicit objective, with the option of trading off additional computation for additional accuracy. We describe the theoretical foundations that make this possible and show some promising first results.
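A stripped-down sketch of the sampling side of this idea, assuming a 1-D standard normal target and unadjusted Langevin updates standing in for full MCMC; the paper's key contribution, a tractable variational bound for the resulting marginal built from auxiliary reverse kernels, is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(2)

def log_post_grad(z):
    """Gradient of a toy unnormalized log-posterior, here log N(0, 1)."""
    return -z

def enriched_variational_sample(n, steps=50, eps=0.1, mu0=2.0, sig0=1.5):
    """Sample from a variational family enriched with MCMC transitions.

    Draw z from a deliberately crude Gaussian q0 = N(mu0, sig0^2), then
    apply a few unadjusted Langevin steps targeting the posterior; the
    marginal of the final z is a richer approximation than q0 itself.
    """
    z = rng.normal(mu0, sig0, size=n)
    for _ in range(steps):
        z = z + 0.5 * eps ** 2 * log_post_grad(z) + eps * rng.normal(size=n)
    return z

z = enriched_variational_sample(50_000)
print(z.mean(), z.std())  # drifts from (2.0, 1.5) toward the posterior's (0, 1)
```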

Journal ArticleDOI
TL;DR: In this article, a new technique for the analysis of transverse momentum dependent parton distribution functions, based on the Bessel weighting formalism, is presented and applied to studies of the double longitudinal spin asymmetry in semi-inclusive deep inelastic scattering.
Abstract: In this paper we present a new technique for analysis of transverse momentum dependent parton distribution functions, based on the Bessel weighting formalism. The procedure is applied to studies of the double longitudinal spin asymmetry in semi-inclusive deep inelastic scattering using a new dedicated Monte Carlo generator which includes quark intrinsic transverse momentum within the generalized parton model. Using a fully differential cross section for the process, the effect of four-momentum conservation is analyzed using various input models for transverse momentum distributions and fragmentation functions. We observe a few percent systematic offset of the Bessel-weighted asymmetry obtained from Monte Carlo extraction compared to input model calculations, which is due to the limitations imposed by energy and momentum conservation at the given energy and $Q^2$. We find that the Bessel weighting technique provides a powerful and reliable tool to study the Fourier transform of TMDs with controlled systematics due to experimental acceptances and resolutions with different TMD model inputs.

Journal ArticleDOI
TL;DR: This work introduces a simple randomization idea for creating unbiased estimators in such a setting based on a sequence of approximations for computing expectations of path functionals associated with stochastic differential equations (SDEs).
Abstract: In many settings in which Monte Carlo methods are applied, there may be no known algorithm for exactly generating the random object for which an expectation is to be computed. Frequently, however, one can generate arbitrarily close approximations to the random object. We introduce a simple randomization idea for creating unbiased estimators in such a setting based on a sequence of approximations. Applying this idea to computing expectations of path functionals associated with stochastic differential equations (SDEs), we construct finite variance unbiased estimators with a “square root convergence rate” for a general class of multidimensional SDEs. We then identify the optimal randomization distribution. Numerical experiments with various path functionals of continuous-time processes that often arise in finance illustrate the effectiveness of our new approach.
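The randomization idea itself fits in a few lines. Below is a hypothetical single-term sketch with a toy target (E[exp(Z)] for Z ~ N(0,1), approximated by truncated Taylor sums) rather than an SDE; the geometric level distribution is an arbitrary choice standing in for the paper's optimized one.

```python
import math
import numpy as np

rng = np.random.default_rng(3)

def single_term_estimate(delta, p=0.6):
    """One unbiased sample of E[Y] from approximations Y_0, Y_1, ... -> Y.

    delta(n) must return a sample of D_n = Y_n - Y_{n-1} (with Y_{-1} = 0).
    Drawing a random level N with P(N = n) = (1 - p) * p**n and returning
    D_N / P(N = n) is unbiased, since the expectations telescope to E[Y].
    """
    n = int(rng.geometric(1 - p)) - 1        # N supported on {0, 1, 2, ...}
    return delta(n) / ((1 - p) * p ** n)

def delta(n):
    """D_n for Y_n = sum_{k <= n} Z**k / k!, which converges to exp(Z)."""
    z = rng.normal()
    return z ** n / math.factorial(n)

est = np.mean([single_term_estimate(delta) for _ in range(200_000)])
print(est, "vs exact E[exp(Z)] =", math.exp(0.5))
```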

Journal ArticleDOI
TL;DR: A comparative study on the most popular machine learning methods applied to the challenging problem of customer churn prediction in the telecommunications industry demonstrates clear superiority of the boosted versions of the models against the plain (non-boosted) versions.

Journal ArticleDOI
TL;DR: In this paper, a Monte Carlo simulation (MCS) based approach for efficient evaluation of the system failure probability $P_{f,s}$ of slope stability in spatially variable soils is presented.
Abstract: Monte Carlo simulation (MCS) provides a conceptually simple and robust method to evaluate the system reliability of slope stability, particularly in spatially variable soils. However, it suffers from a lack of efficiency at small probability levels, which are of great interest in geotechnical design practice. To address this problem, this paper develops a MCS-based approach for efficient evaluation of the system failure probability $P_{f,s}$ of slope stability in spatially variable soils. The proposed approach allows explicit modeling of the inherent spatial variability of soil properties in a system reliability analysis of slope stability. It facilitates the slope system reliability analysis using representative slip surfaces (i.e., dominating slope failure modes) and multiple stochastic response surfaces. Based on the stochastic response surfaces, the values of $P_{f,s}$ are efficiently calculated using MCS with negligible computational effort. For illustration, the proposed MCS-based system reliab...
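The small-probability inefficiency that motivates the paper is easy to demonstrate with direct MCS. In the hypothetical sketch below, a lognormal factor of safety stands in for a full slope-stability model on spatially variable soil; the mean, coefficient of variation, and limit state are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

def direct_mcs_failure_probability(n_samples, mean_fs=1.3, cov_fs=0.25):
    """Direct MCS estimate of P_f,s = P(FS < 1) for a toy limit state.

    Each 'analysis' is just a lognormal draw here; in practice it would be
    a deterministic slope stability run on one realization of the soil field.
    """
    sig_ln = np.sqrt(np.log(1.0 + cov_fs ** 2))
    mu_ln = np.log(mean_fs) - 0.5 * sig_ln ** 2
    fs = rng.lognormal(mu_ln, sig_ln, size=n_samples)
    p_f = np.mean(fs < 1.0)
    se = np.sqrt(p_f * (1.0 - p_f) / n_samples)  # why small P_f needs huge n
    return p_f, se

print(direct_mcs_failure_probability(1_000_000))
```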

Journal ArticleDOI
TL;DR: In this paper, the authors describe a quantum algorithm which can accelerate Monte Carlo methods in a very general setting, which estimates the expected output value of an arbitrary randomized or quantum subroutine with bounded variance, achieving a near-quadratic speedup over the best possible classical algorithm.
Abstract: Monte Carlo methods use random sampling to estimate numerical quantities which are hard to compute deterministically. One important example is the use in statistical physics of rapidly mixing Markov chains to approximately compute partition functions. In this work, we describe a quantum algorithm which can accelerate Monte Carlo methods in a very general setting. The algorithm estimates the expected output value of an arbitrary randomized or quantum subroutine with bounded variance, achieving a near-quadratic speedup over the best possible classical algorithm. Combining the algorithm with the use of quantum walks gives a quantum speedup of the fastest known classical algorithms with rigorous performance bounds for computing partition functions, which use multiple-stage Markov chain Monte Carlo techniques. The quantum algorithm can also be used to estimate the total variation distance between probability distributions efficiently.

Journal ArticleDOI
TL;DR: In this paper, the authors define a jet image using calorimeter towers as the elements of the image and establish jet-image preprocessing methods, and develop a discriminant for classifying the jet-images derived using Fisher discriminant analysis.
Abstract: We introduce a novel approach to jet tagging and classification through the use of techniques inspired by computer vision. Drawing parallels to the problem of facial recognition in images, we define a jet-image using calorimeter towers as the elements of the image and establish jet-image preprocessing methods. For the jet-image processing step, we develop a discriminant, derived using Fisher discriminant analysis, for classifying the jet-images. The effectiveness of the technique is shown within the context of identifying boosted hadronic W boson decays with respect to a background of quark- and gluon-initiated jets. Using Monte Carlo simulation, we demonstrate that the performance of this technique introduces additional discriminating power over other substructure approaches, and gives significant insight into the internal structure of jets.
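The classification step reduces to a standard Fisher linear discriminant on flattened pixel vectors. A generic sketch follows, with random arrays standing in for actual preprocessed jet-images; the paper's image definition and preprocessing are not reproduced.

```python
import numpy as np

def fisher_discriminant(x_sig, x_bkg, reg=1e-6):
    """Fisher linear discriminant for two classes of flattened images.

    Returns the weight vector w maximizing between-class over within-class
    scatter; the score of an image x is then simply w @ x.
    """
    mu_s, mu_b = x_sig.mean(axis=0), x_bkg.mean(axis=0)
    s_w = np.cov(x_sig, rowvar=False) + np.cov(x_bkg, rowvar=False)
    s_w += reg * np.eye(s_w.shape[0])     # regularize nearly-empty pixels
    return np.linalg.solve(s_w, mu_s - mu_b)

# Toy usage with random stand-ins for 4x4-pixel signal/background images.
rng = np.random.default_rng(5)
x_sig = rng.normal(0.3, 1.0, size=(1000, 16))
x_bkg = rng.normal(0.0, 1.0, size=(1000, 16))
w = fisher_discriminant(x_sig, x_bkg)
print((x_sig @ w).mean(), ">", (x_bkg @ w).mean())  # signal scores higher
```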

Journal ArticleDOI
TL;DR: An overview of TRIPOLI-4®, the fourth generation of the 3D continuous-energy Monte Carlo code developed by the Service d'Études des Réacteurs et de Mathématiques Appliquées at CEA Saclay, is presented.

Journal ArticleDOI
TL;DR: In this article, the authors present model specification and estimation of the spatial autoregressive (SAR) model with an endogenous spatial weight matrix, and provide three estimation methods: two-stage instrumental variable (2SIV) method, quasi-maximum likelihood estimation (QMLE) approach, and generalized method of moments (GMM).

Journal ArticleDOI
TL;DR: A coherence-optimal sampling scheme is proposed: a Markov chain Monte Carlo sampling scheme that directly uses the basis functions under consideration to achieve statistical optimality among all sampling schemes with identical support.

Proceedings Article
06 Jul 2015
TL;DR: It is shown that under standard assumptions, getting one sample from a posterior distribution is differentially private "for free", and that this sample as a statistical estimator is often consistent, near optimal, and computationally tractable; these observations lead to an "anytime" algorithm for Bayesian learning under a privacy constraint.
Abstract: We consider the problem of Bayesian learning on sensitive datasets and present two simple but somewhat surprising results that connect Bayesian learning to "differential privacy", a cryptographic approach to protect individual-level privacy while permitting database-level utility. Specifically, we show that under standard assumptions, getting one sample from a posterior distribution is differentially private "for free", and that this sample as a statistical estimator is often consistent, near optimal, and computationally tractable. Similarly but separately, we show that a recent line of work that uses stochastic gradients for Hybrid Monte Carlo (HMC) sampling also preserves differential privacy with minor or no modifications of the algorithmic procedure; these observations lead to an "anytime" algorithm for Bayesian learning under a privacy constraint. We demonstrate that it performs much better than the state-of-the-art differentially private methods on synthetic and real datasets.

Journal ArticleDOI
TL;DR: In this paper, the Nanoporous Materials Genome, a database of about 670 000 porous material structures, is characterized, screened, and analyzed for candidate adsorbents for separating an industrially relevant gaseous mixture of xenon and krypton.
Abstract: Accelerating progress in the discovery and deployment of advanced nanoporous materials relies on chemical insight and structure–property relationships for rational design. Because of the complexity of this problem, trial-and-error is heavily involved in the laboratory today. A cost-effective route to aid experimental materials discovery is to construct structure models of nanoporous materials in silico and use molecular simulations to rapidly test them and elucidate data-driven guidelines for rational design. For example, highly tunable nanoporous materials have shown promise as adsorbents for separating an industrially relevant gaseous mixture of xenon and krypton. In this work, we characterize, screen, and analyze the Nanoporous Materials Genome, a database of about 670 000 porous material structures, for candidate adsorbents for xenon/krypton separations. For over half a million structures, the computational resources required for a brute-force screening using grand-canonical Monte Carlo simulations of...

Journal ArticleDOI
TL;DR: Validation results for criticality calculations, burnup calculations, source convergence acceleration, tally performance, and parallel performance shown in this paper demonstrate the capability of RMC to deal with reactor analysis problems with good performance.

Journal ArticleDOI
TL;DR: A method is presented for estimating the probabilities of outcomes of a quantum circuit using Monte Carlo sampling techniques applied to a quasiprobability representation; the estimate converges to the true quantum probability at a rate determined by the total negativity in the circuit, using a measure of negativity based on the 1-norm of the quasiprobability.
Abstract: We present a method for estimating the probabilities of outcomes of a quantum circuit using Monte Carlo sampling techniques applied to a quasiprobability representation. Our estimate converges to the true quantum probability at a rate determined by the total negativity in the circuit, using a measure of negativity based on the 1-norm of the quasiprobability. If the negativity grows at most polynomially in the size of the circuit, our estimator converges efficiently. These results highlight the role of negativity as a measure of nonclassical resources in quantum computation.
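The sampling construction admits a compact classical illustration: draw outcomes from the normalized absolute quasiprobability and reweight by sign, so the estimator's spread, and hence its sample cost, grows with the 1-norm (the negativity). The three-point quasidistribution below is a made-up toy, not a representation of any actual circuit.

```python
import numpy as np

rng = np.random.default_rng(6)

def quasiprob_estimate(q, f, n_samples):
    """Estimate sum_x q(x) * f(x) when the quasiprobability q may be negative.

    Sample x with probability |q(x)| / ||q||_1 and average
    sign(q(x)) * ||q||_1 * f(x); the variance of these terms scales with
    ||q||_1, so negativity directly sets the convergence rate.
    """
    q = np.asarray(q, dtype=float)
    norm1 = np.abs(q).sum()
    idx = rng.choice(len(q), size=n_samples, p=np.abs(q) / norm1)
    vals = np.sign(q[idx]) * norm1 * f[idx]
    return vals.mean(), vals.std(ddof=1) / np.sqrt(n_samples)

q = np.array([0.7, 0.5, -0.2])   # sums to 1, but ||q||_1 = 1.4 > 1
f = np.array([1.0, 0.0, 1.0])    # indicator of the outcomes of interest
print(quasiprob_estimate(q, f, 100_000), "vs exact", 0.7 - 0.2)
```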

Journal ArticleDOI
TL;DR: In this paper, the authors propose the three-pass regression filter (3PRF), a new estimator for forecasting a single time series using many predictor variables, which is calculated in closed form and conveniently represented as a set of ordinary least squares regressions.

Journal ArticleDOI
TL;DR: It is argued that many scientific codes, like SKIRT, can benefit from careful object-oriented design and from a friendly user interface, even if it is not a graphical user interface.

Journal ArticleDOI
TL;DR: Computer simulation results show that the proposed system reliability analysis method can accurately evaluate the system failure probability with a relatively small number of deterministic slope stability analyses.