
Showing papers on "Monte Carlo method published in 2012"


Journal ArticleDOI
TL;DR: In this paper, a simple test of Granger (1969) non-causality for heterogeneous panel data models is proposed, based on the individual Wald statistics of Granger non-causality averaged across the cross-section units.
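As a rough illustration of the averaged-Wald idea (not the authors' implementation; the data-generating process, lag length, and the large-T standardization shown at the end are assumptions made for the sketch), the snippet below computes a Wald statistic for each cross-section unit and averages them:

```python
import numpy as np

def unit_wald_stat(y, x, lag=1):
    """Wald statistic for H0: lagged x does not help predict y in one unit
    (single lag, with intercept)."""
    Y = y[lag:]
    X = np.column_stack([np.ones(len(Y)), y[:-lag], x[:-lag]])
    beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
    resid = Y - X @ beta
    sigma2 = resid @ resid / (len(Y) - X.shape[1])
    cov = sigma2 * np.linalg.inv(X.T @ X)
    return beta[2] ** 2 / cov[2, 2]     # single-restriction Wald statistic

rng = np.random.default_rng(0)
N, T, K = 50, 60, 1                     # units, time periods, tested lags
wald = []
for _ in range(N):
    x = rng.normal(size=T)
    y = np.zeros(T)
    for t in range(1, T):               # y depends only on its own lag (H0 true)
        y[t] = 0.5 * y[t - 1] + rng.normal()
    wald.append(unit_wald_stat(y, x, lag=K))

w_bar = np.mean(wald)                              # averaged Wald statistic
z_bar = np.sqrt(N / (2 * K)) * (w_bar - K)         # approx. N(0, 1) under H0 for large T
print(f"W-bar = {w_bar:.3f}, Z-bar = {z_bar:.3f}")
```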

2,741 citations


Journal ArticleDOI
TL;DR: A survey of the Monte Carlo tree search literature to date, intended to provide a snapshot of the state of the art after the first five years of MCTS research; it outlines the core algorithm's derivation, imparts some structure on the many variations and enhancements that have been proposed, and summarizes the results from the key game and nongame domains.
Abstract: Monte Carlo tree search (MCTS) is a recently proposed search method that combines the precision of tree search with the generality of random sampling. It has received considerable interest due to its spectacular success in the difficult problem of computer Go, but has also proved beneficial in a range of other domains. This paper is a survey of the literature to date, intended to provide a snapshot of the state of the art after the first five years of MCTS research. We outline the core algorithm's derivation, impart some structure on the many variations and enhancements that have been proposed, and summarize the results from the key game and nongame domains to which MCTS methods have been applied. A number of open research questions indicate that the field is ripe for future work.
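As a concrete illustration of the selection, expansion, simulation, and backpropagation phases, here is a minimal UCT sketch for a toy subtraction game (players alternately remove 1-3 counters; whoever takes the last counter wins). The game, constants, and names are illustrative assumptions, not material from the survey.

```python
import math
import random

random.seed(1)
TAKE = (1, 2, 3)   # legal removals in the toy subtraction game

class Node:
    def __init__(self, counters, player, parent=None, move=None):
        self.counters, self.player = counters, player      # `player` is to move here
        self.parent, self.move = parent, move
        self.children, self.visits, self.wins = [], 0, 0.0
        self.untried = [m for m in TAKE if m <= counters]

def uct_select(node, c=1.4):
    # UCB1: balance average reward against exploration of rarely tried children
    return max(node.children, key=lambda ch: ch.wins / ch.visits
               + c * math.sqrt(math.log(node.visits) / ch.visits))

def rollout(counters, player):
    # random playout; returns whoever takes the last counter
    while True:
        counters -= random.choice([m for m in TAKE if m <= counters])
        if counters == 0:
            return player
        player = 1 - player

def mcts(counters, player, iterations=3000):
    root = Node(counters, player)
    for _ in range(iterations):
        node = root
        # 1. selection: descend while fully expanded and non-terminal
        while not node.untried and node.children:
            node = uct_select(node)
        # 2. expansion
        if node.untried:
            m = node.untried.pop()
            child = Node(node.counters - m, 1 - node.player, parent=node, move=m)
            node.children.append(child)
            node = child
        # 3. simulation (if the pile is empty, the player who just moved has won)
        if node.counters == 0:
            winner = node.parent.player
        else:
            winner = rollout(node.counters, node.player)
        # 4. backpropagation: credit a node when the player who moved into it won
        while node is not None:
            node.visits += 1
            if winner != node.player:
                node.wins += 1.0
            node = node.parent
    return max(root.children, key=lambda ch: ch.visits).move

print("Best move from 10 counters:", mcts(10, player=0))   # optimal play removes 2
```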

2,682 citations


Journal ArticleDOI
TL;DR: MMPBSA.py is a program written in Python for streamlining end-state free energy calculations using ensembles derived from molecular dynamics or Monte Carlo simulations, with several implicit solvation models available, including the Poisson-Boltzmann model.
Abstract: MM-PBSA is a post-processing end-state method to calculate free energies of molecules in solution. MMPBSA.py is a program written in Python for streamlining end-state free energy calculations using ensembles derived from molecular dynamics (MD) or Monte Carlo (MC) simulations. Several implicit solvation models are available with MMPBSA.py, including the Poisson–Boltzmann Model, the Generalized Born Model, and the Reference Interaction Site Model. Vibrational frequencies may be calculated using normal mode or quasi-harmonic analysis to approximate the solute entropy. Specific interactions can also be dissected using free energy decomposition or alanine scanning. A parallel implementation significantly speeds up the calculation by dividing frames evenly across available processors. MMPBSA.py is an efficient, user-friendly program with the flexibility to accommodate the needs of users performing end-state free energy calculations. The source code can be downloaded at http://ambermd.org/ with AmberTools, rele...
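The end-state bookkeeping behind MM-PBSA can be summarized in a few lines: the binding free energy is approximated as the ensemble-averaged difference <G_complex> - <G_receptor> - <G_ligand>, with each frame's G built from molecular-mechanics and solvation terms. The sketch below uses made-up per-frame energies purely to show that arithmetic; it is not MMPBSA.py's code, input format, or error model.

```python
import numpy as np

rng = np.random.default_rng(1)
n_frames = 500

def per_frame_G(e_mm_mean, g_solv_mean, noise=2.0):
    """Fake per-frame free energies (kcal/mol): G = E_MM + G_solv."""
    return (rng.normal(e_mm_mean, noise, n_frames) +
            rng.normal(g_solv_mean, noise, n_frames))

G_complex  = per_frame_G(-1200.0, -350.0)
G_receptor = per_frame_G(-900.0,  -280.0)
G_ligand   = per_frame_G(-250.0,   -60.0)

# end-state estimate: <G_complex> - <G_receptor> - <G_ligand>
dG = G_complex.mean() - G_receptor.mean() - G_ligand.mean()
# naive standard error assuming independent frames (real trajectories are correlated)
sem = np.sqrt(G_complex.var(ddof=1) / n_frames +
              G_receptor.var(ddof=1) / n_frames +
              G_ligand.var(ddof=1) / n_frames)
print(f"Estimated binding free energy: {dG:.1f} +/- {sem:.1f} kcal/mol")
```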

2,528 citations


Book
05 Dec 2012
TL;DR: This book presents the fundamentals of the Monte Carlo method, covering the estimation of volume and count, the generation of samples, techniques for increasing efficiency, random tours, the design and analysis of sample paths, and the generation of pseudorandom numbers.
Abstract: Contents: Introduction - Estimating Volume and Count - Generating Samples - Increasing Efficiency - Random Tours - Designing and Analyzing Sample Paths - Generating Pseudorandom Numbers.
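The opening topics (estimating volume and count) come down to the classic hit-or-miss estimator. The snippet below, a generic textbook example rather than anything taken from the book, estimates the volume of the unit ball in d dimensions by sampling uniformly in the enclosing cube:

```python
import math
import numpy as np

def mc_ball_volume(d, n=200_000, seed=0):
    """Hit-or-miss Monte Carlo estimate of the volume of the unit d-ball,
    sampling uniformly in the enclosing cube [-1, 1]^d."""
    rng = np.random.default_rng(seed)
    pts = rng.uniform(-1.0, 1.0, size=(n, d))
    p_hat = np.mean(np.sum(pts**2, axis=1) <= 1.0)   # fraction of hits
    vol_cube = 2.0 ** d
    est = p_hat * vol_cube
    # binomial standard error propagated through the cube volume
    se = vol_cube * math.sqrt(p_hat * (1 - p_hat) / n)
    return est, se

for d in (2, 3, 5):
    exact = math.pi ** (d / 2) / math.gamma(d / 2 + 1)
    est, se = mc_ball_volume(d)
    print(f"d={d}: estimate {est:.4f} +/- {se:.4f}  (exact {exact:.4f})")
```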

2,215 citations


Journal ArticleDOI
TL;DR: This study discusses Monte Carlo confidence intervals for indirect effects, reports the results of a simulation study comparing their performance to that of competing methods, demonstrates the method in applied examples, and discusses several software options for implementation in applied settings.
Abstract: Monte Carlo simulation is a useful but underutilized method of constructing confidence intervals for indirect effects in mediation analysis. The Monte Carlo confidence interval method has several distinct advantages over rival methods. Its performance is comparable to other widely accepted methods of interval construction, it can be used when only summary data are available, it can be used in situations where rival methods (e.g., bootstrapping and distribution of the product methods) are difficult or impossible, and it is not as computer-intensive as some other methods. In this study we discuss Monte Carlo confidence intervals for indirect effects, report the results of a simulation study comparing their performance to that of competing methods, demonstrate the method in applied examples, and discuss several software options for implementation in applied settings.
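The method needs only the point estimates and standard errors of the two paths: draw the a and b coefficients from normal distributions, form their products, and take percentiles of the simulated indirect effects. A minimal sketch with illustrative summary statistics (the numbers below are made up):

```python
import numpy as np

# summary statistics from a mediation analysis (illustrative values)
a_hat, se_a = 0.40, 0.10   # X -> M path
b_hat, se_b = 0.35, 0.12   # M -> Y path (controlling for X)

rng = np.random.default_rng(42)
n_draws = 100_000
a = rng.normal(a_hat, se_a, n_draws)
b = rng.normal(b_hat, se_b, n_draws)
ab = a * b                               # simulated indirect effects

lo, hi = np.percentile(ab, [2.5, 97.5])  # 95% Monte Carlo confidence interval
print(f"indirect effect = {a_hat * b_hat:.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```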

1,165 citations


Journal ArticleDOI
TL;DR: A significant impact of Monte Carlo dose calculation can be expected in complex geometries, where local range uncertainties due to multiple Coulomb scattering will reduce the accuracy of analytical algorithms; in these cases, Monte Carlo techniques might reduce the range uncertainty by several mm.
Abstract: The main advantages of proton therapy are the reduced total energy deposited in the patient as compared to photon techniques and the finite range of the proton beam. The latter adds an additional degree of freedom to treatment planning. The range in tissue is associated with considerable uncertainties caused by imaging, patient setup, beam delivery and dose calculation. Reducing the uncertainties would allow a reduction of the treatment volume and thus allow a better utilization of the advantages of protons. This paper summarizes the role of Monte Carlo simulations when aiming at a reduction of range uncertainties in proton therapy. Differences in dose calculation when comparing Monte Carlo with analytical algorithms are analyzed as well as range uncertainties due to material constants and CT conversion. Range uncertainties due to biological effects and the role of Monte Carlo for in vivo range verification are discussed. Furthermore, the current range uncertainty recipes used at several proton therapy facilities are revisited. We conclude that a significant impact of Monte Carlo dose calculation can be expected in complex geometries where local range uncertainties due to multiple Coulomb scattering will reduce the accuracy of analytical algorithms. In these cases Monte Carlo techniques might reduce the range uncertainty by several mm.

1,027 citations


Journal ArticleDOI
TL;DR: High confidence in the MCNP6 code is based on its performance with the verification and validation test suites, comparisons to its predecessor codes, the regression test suite, its code development process, and the underlying high-quality nuclear and atomic databases.
Abstract: MCNP6 is simply and accurately described as the merger of MCNP5 and MCNPX capabilities, but it is much more than the sum of those two computer codes. MCNP6 is the result of five years of effort by ...

977 citations


Journal ArticleDOI
01 Jul 2012-Proteins
TL;DR: A novel program, QUARK, is developed for template-free protein structure prediction; it can successfully construct 3D models of correct folds in one-third of cases for short proteins up to 100 residues and outperformed the second and third best servers based on the cumulative Z-score of global distance test-total scores in the FM category.
Abstract: Ab initio protein folding is one of the major unsolved problems in computational biology due to the difficulties in force field design and conformational search. We developed a novel program, QUARK, for template-free protein structure prediction. Query sequences are first broken into fragments of 1–20 residues where multiple fragment structures are retrieved at each position from unrelated experimental structures. Full-length structure models are then assembled from fragments using replica-exchange Monte Carlo simulations, which are guided by a composite knowledge-based force field. A number of novel energy terms and Monte Carlo movements are introduced, and their particular contributions to enhancing the efficiency of both the force field and the search engine are analyzed in detail. The QUARK prediction procedure is depicted and tested on the structure modeling of 145 non-homologous proteins. Although no global templates are used and all fragments from experimental structures with template modeling score (TM-score) >0.5 are excluded, QUARK can successfully construct 3D models of correct folds in one-third of cases for short proteins up to 100 residues. In the ninth community-wide Critical Assessment of protein Structure Prediction (CASP9) experiment, the QUARK server outperformed the second and third best servers by 18% and 47% based on the cumulative Z-score of global distance test-total (GDT-TS) scores in the free modeling (FM) category. Although ab initio protein folding remains a significant challenge, these data demonstrate new progress towards the solution of the most important problem in the field.
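QUARK's conformational search uses replica-exchange Monte Carlo. The sketch below shows the generic algorithm (parallel Metropolis chains at different temperatures with occasional configuration swaps) on a toy one-dimensional double-well energy; the energy function, temperature ladder, and step size are illustrative assumptions, not the QUARK force field or move set.

```python
import numpy as np

def energy(x):
    """Toy double-well landscape standing in for a protein force field."""
    return (x**2 - 1.0) ** 2

def replica_exchange(temps, n_sweeps=20_000, step=0.5, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(-2.0, 2.0, size=len(temps))     # one walker per temperature
    cold_trace = []                                  # samples from the coldest replica
    for _ in range(n_sweeps):
        # Metropolis move within each replica
        for i, T in enumerate(temps):
            prop = x[i] + rng.normal(0.0, step)
            dE = energy(prop) - energy(x[i])
            if dE <= 0 or rng.random() < np.exp(-dE / T):
                x[i] = prop
        # attempt to swap a random pair of neighbouring replicas
        i = rng.integers(len(temps) - 1)
        beta_i, beta_j = 1.0 / temps[i], 1.0 / temps[i + 1]
        delta = (beta_i - beta_j) * (energy(x[i]) - energy(x[i + 1]))
        if delta >= 0 or rng.random() < np.exp(delta):
            x[i], x[i + 1] = x[i + 1], x[i]
        cold_trace.append(x[0])
    return np.array(cold_trace)

cold = replica_exchange(temps=[0.05, 0.1, 0.2, 0.5, 1.0])[5000:]   # drop burn-in
print("cold-replica occupation of the two wells:",
      round(float(np.mean(cold < 0)), 3), round(float(np.mean(cold > 0)), 3))
```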

844 citations


Journal ArticleDOI
TL;DR: An innovative proton Monte Carlo platform is developed, with a custom-designed TOPAS parameter control system placed at the heart of the code to meet requirements for ease of use, reliability, and repeatability without sacrificing flexibility.
Abstract: Purpose: While Monte Carlo particle transport has proven useful in many areas (treatment head design, dose calculation, shielding design, and imaging studies) and has been particularly important for proton therapy (due to the conformal dose distributions and a finite beam range in the patient), the available general purpose Monte Carlo codes in proton therapy have been overly complex for most clinical medical physicists. The learning process has large costs not only in time but also in reliability. To address this issue, we developed an innovative proton Monte Carlo platform and tested the tool in a variety of proton therapy applications. Methods: Our approach was to take one of the already-established general purpose Monte Carlo codes and wrap and extend it to create a specialized user-friendly tool for proton therapy. The resulting tool, TOol for PArticle Simulation (TOPAS), should make Monte Carlo simulation more readily available for research and clinical physicists. TOPAS can model a passive scattering or scanning beam treatment head, model a patient geometry based on computed tomography (CT) images, score dose, fluence, etc., save and restart a phase space, provide advanced graphics, and is fully four-dimensional (4D) to handle variations in beam delivery and patient geometry during treatment. A custom-designed TOPAS parameter control system was placed at the heart of the code to meet requirements for ease of use, reliability, and repeatability without sacrificing flexibility. Results: We built and tested the TOPAS code. We have shown that the TOPAS parameter system provides easy yet flexible control over all key simulation areas such as geometry setup, particle source setup, scoring setup, etc. Through design consistency, we have ensured that user experience gained in configuring one component, scorer or filter applies equally well to configuring any other component, scorer or filter. We have incorporated key lessons from safety management, proactively removing possible sources of user error such as line-ordering mistakes. We have modeled proton therapy treatment examples including the UCSF eye treatment head, the MGH stereotactic alignment in radiosurgery treatment head and the MGH gantry treatment heads in passive scattering and scanning modes, and we have demonstrated dose calculation based on patient-specific CT data. Initial validation results show agreement with measured data and demonstrate the capabilities of TOPAS in simulating beam delivery in 3D and 4D. Conclusions: We have demonstrated TOPAS accuracy and usability in a variety of proton therapy setups. As we are preparing to make this tool freely available for researchers in medical physics, we anticipate widespread use of this tool in the growing proton therapy community.

693 citations


Journal ArticleDOI
TL;DR: An adaptive SMC algorithm is proposed which admits a computational complexity that is linear in the number of samples and adaptively determines the simulation parameters.
Abstract: Approximate Bayesian computation (ABC) is a popular approach to address inference problems where the likelihood function is intractable, or expensive to calculate. To improve over Markov chain Monte Carlo (MCMC) implementations of ABC, the use of sequential Monte Carlo (SMC) methods has recently been suggested. Most effective SMC algorithms that are currently available for ABC have a computational complexity that is quadratic in the number of Monte Carlo samples (Beaumont et al., Biometrika 86:983-990, 2009; Peters et al., Technical report, 2008; Toni et al., J Roy Soc Interface 6:187-202, 2009) and require the careful choice of simulation parameters. In this article an adaptive SMC algorithm is proposed which admits a computational complexity that is linear in the number of samples and adaptively determines the simulation parameters. We demonstrate our algorithm on a toy example and on a birth-death-mutation model arising in epidemiology.
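For readers unfamiliar with ABC itself, the following sketch shows the basic rejection version that MCMC-ABC and SMC-ABC (including the adaptive algorithm proposed here) improve upon: simulate data from sampled parameters and keep those whose summary statistic falls within a tolerance of the observed one. The Poisson model, prior, and tolerance are illustrative choices, not the paper's birth-death-mutation example.

```python
import numpy as np

rng = np.random.default_rng(7)

# "observed" data: 100 draws from a Poisson with unknown rate (truth = 4.0)
observed = rng.poisson(4.0, size=100)
s_obs = observed.mean()                 # summary statistic

def simulate(rate, n=100):
    return rng.poisson(rate, size=n)

def abc_rejection(n_prop=50_000, tol=0.1):
    """Plain rejection ABC: keep prior draws whose simulated summary
    statistic lies within `tol` of the observed one."""
    accepted = []
    for _ in range(n_prop):
        rate = rng.uniform(0.0, 10.0)   # flat prior on the rate
        if abs(simulate(rate).mean() - s_obs) <= tol:
            accepted.append(rate)
    return np.array(accepted)

post = abc_rejection()
print(f"accepted {len(post)} of 50000 proposals")
print(f"approximate posterior mean {post.mean():.2f}, "
      f"95% interval [{np.percentile(post, 2.5):.2f}, {np.percentile(post, 97.5):.2f}]")
```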

530 citations


Journal ArticleDOI
Daniele S. M. Alves1, Nima Arkani-Hamed, S. Arora2, Yang Bai1, Matthew Baumgart3, Joshua Berger4, Matthew R. Buckley5, Bart Butler1, Spencer Chang6, Spencer Chang7, Hsin-Chia Cheng7, Clifford Cheung8, R. Sekhar Chivukula9, Won Sang Cho10, R. Cotta1, Mariarosaria D'Alfonso11, Sonia El Hedri1, Rouven Essig12, Jared A. Evans7, Liam Fitzpatrick13, Patrick J. Fox5, Roberto Franceschini14, Ayres Freitas15, James S. Gainer16, James S. Gainer17, Yuri Gershtein2, R. N.C. Gray2, Thomas Gregoire18, Ben Gripaios19, J.F. Gunion7, Tao Han20, Andy Haas1, P. Hansson1, JoAnne L. Hewett1, Dmitry Hits2, Jay Hubisz21, Eder Izaguirre1, Jared Kaplan1, Emanuel Katz13, Can Kilic2, Hyung Do Kim22, Ryuichiro Kitano23, Sue Ann Koay11, Pyungwon Ko24, David Krohn25, Eric Kuflik26, Ian M. Lewis20, Mariangela Lisanti27, Tao Liu11, Zhen Liu20, Ran Lu26, Markus A. Luty7, Patrick Meade12, David E. Morrissey28, Stephen Mrenna5, Mihoko M. Nojiri, Takemichi Okui29, Sanjay Padhi30, Michele Papucci31, Michael Park2, Myeonghun Park32, Maxim Perelstein4, Michael E. Peskin1, Daniel J. Phalen7, Keith Rehermann33, Vikram Rentala34, Vikram Rentala35, Tuhin S. Roy36, Joshua T. Ruderman27, Veronica Sanz37, Martin Schmaltz13, S. Schnetzer2, Philip Schuster38, Pedro Schwaller39, Pedro Schwaller17, Pedro Schwaller40, Matthew D. Schwartz25, Ariel Schwartzman1, Jing Shao21, J. Shelton41, David Shih2, Jing Shu10, Daniel Silverstein1, Elizabeth H. Simmons9, Sunil Somalwar2, Michael Spannowsky6, Christian Spethmann13, Matthew J. Strassler2, Shufang Su34, Shufang Su35, Tim M. P. Tait35, Brooks Thomas42, Scott Thomas2, Natalia Toro38, Tomer Volansky8, Jay G. Wacker1, Wolfgang Waltenberger43, Itay Yavin44, Felix Yu35, Yue Zhao2, Kathryn M. Zurek26 
TL;DR: A collection of simplified models relevant to the design of new-physics searches at the Large Hadron Collider (LHC) and the characterization of their results is presented in this paper.
Abstract: This document proposes a collection of simplified models relevant to the design of new-physics searches at the Large Hadron Collider (LHC) and the characterization of their results. Both ATLAS and CMS have already presented some results in terms of simplified models, and we encourage them to continue and expand this effort, which supplements both signature-based results and benchmark model interpretations. A simplified model is defined by an effective Lagrangian describing the interactions of a small number of new particles. Simplified models can equally well be described by a small number of masses and cross-sections. These parameters are directly related to collider physics observables, making simplified models a particularly effective framework for evaluating searches and a useful starting point for characterizing positive signals of new physics. This document serves as an official summary of the results from the 'Topologies for Early LHC Searches' workshop, held at SLAC in September of 2010, the purpose of which was to develop a set of representative models that can be used to cover all relevant phase space in experimental searches. Particular emphasis is placed on searches relevant for the first ~50-500 pb^-1 of data and those motivated by supersymmetric models. This note largely summarizes material posted at http://lhcnewphysics.org/, which includes simplified model definitions, Monte Carlo material, and supporting contacts within the theory community. We also comment on future developments that may be useful as more data is gathered and analyzed by the experiments.

Journal ArticleDOI
TL;DR: Monte Carlo simulations of the nanoheat engine are performed that demonstrate its feasibility and its ability to operate at a maximum efficiency of 30% under realistic conditions.
Abstract: We propose an experimental scheme to realize a nanoheat engine with a single ion. An Otto cycle may be implemented by confining the ion in a linear Paul trap with tapered geometry and coupling it to engineered laser reservoirs. The quantum efficiency at maximum power is analytically determined in various regimes. Moreover, Monte Carlo simulations of the engine are performed that demonstrate its feasibility and its ability to operate at a maximum efficiency of 30% under realistic conditions.

Journal ArticleDOI
TL;DR: The proposed methodology establishes a single-PHEV charging demand model and then employs queuing theory to describe the behavior of multiple PHEVs; probabilistic power flow calculations are compared with Monte Carlo simulations on a modified IEEE 30-bus system integrated with the two proposed demand models.
Abstract: Millions of electric vehicles (EVs), especially plug-in hybrid EVs (PHEVs), will be integrated into the power grid in the near future. Due to their large quantity and complex charging behavior, the impact of substantial PHEVs charging on the power grid needs to be investigated. Since the charging behavior of PHEVs in a certain regional transmission network or a local distribution network is determined by different uncertain factors, their overall charging demand tends to be uncertain and in this situation probabilistic power flow (PPF) can be applied to analyze the impact of PHEVs charging on the power grid. However, currently there is no suitable model of the overall charging demand of PHEVs available for PPF calculations. In this paper, a methodology of modeling the overall charging demand of PHEVs is proposed. The proposed methodology establishes a single PHEV charging demand model, and then employs queuing theory to describe the behavior of multiple PHEVs. Moreover, two applications are given, i.e., modeling the overall charging demand of PHEVs at an EV charging station and in a local residential community, respectively. Comparisons between PPF calculations and Monte Carlo simulations are made on a modified IEEE 30-bus system integrated with the two proposed demand models.
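To illustrate how a queuing description of a charging station can be paired with Monte Carlo simulation, here is a small event-driven sketch of an M/M/c charging station (Poisson arrivals, exponential charging times, c chargers) that records waiting statistics. The arrival rate, charging duration, and station size are illustrative assumptions, not parameters from the paper.

```python
import heapq
import numpy as np

def simulate_station(arrival_rate, mean_charge_h, n_chargers, horizon_h, seed=0):
    """Event-driven M/M/c simulation of a charging station.
    Returns the mean wait (hours) and the fraction of PHEVs that had to queue."""
    rng = np.random.default_rng(seed)
    free_at = [0.0] * n_chargers           # min-heap of times each charger frees up
    heapq.heapify(free_at)
    t, waits = 0.0, []
    while True:
        t += rng.exponential(1.0 / arrival_rate)    # next PHEV arrival
        if t > horizon_h:
            break
        start = max(t, free_at[0])                  # earliest available charger
        waits.append(start - t)
        heapq.heapreplace(free_at, start + rng.exponential(mean_charge_h))
    waits = np.array(waits)
    return waits.mean(), np.mean(waits > 0)

mean_wait, frac_queued = simulate_station(arrival_rate=10.0,    # PHEVs per hour
                                          mean_charge_h=0.5,    # 30 min per charge
                                          n_chargers=6,
                                          horizon_h=1000.0)
print(f"mean wait {60 * mean_wait:.1f} min, fraction queued {frac_queued:.2%}")
```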


Journal ArticleDOI
TL;DR: This paper considers the implementation of both maximum likelihood and generalized moments estimators in the context of fixed as well as random effects spatial panel data models and performs comparisons with other available software.
Abstract: splm is an R package for the estimation and testing of various spatial panel data specifications. We consider the implementation of both maximum likelihood and generalized moments estimators in the context of fixed as well as random effects spatial panel data models. This paper is a general description of splm and all functionalities are illustrated using a well-known example taken from Munnell (1990) with productivity data on 48 US states observed over 17 years. We perform comparisons with other available software; and, when this is not possible, Monte Carlo results support our original implementation.

Book
Enrico Zio1
02 Nov 2012
TL;DR: The Monte Carlo simulation method is comprehensively illustrated and a sound understanding of the fundamentals of Monte Carlo sampling and simulation and its application for realistic system modeling is given.
Abstract: Monte Carlo simulation is one of the best tools for performing realistic analysis of complex systems as it allows most of the limiting assumptions on system behavior to be relaxed. The Monte Carlo Simulation Method for System Reliability and Risk Analysis comprehensively illustrates the Monte Carlo simulation method and its application to reliability and system engineering. Readers are given a sound understanding of the fundamentals of Monte Carlo sampling and simulation and its application for realistic system modeling. Whilst many of the topics rely on a high-level understanding of calculus, probability and statistics, simple academic examples will be provided in support of the explanation of the theoretical foundations to facilitate comprehension of the subject matter. Case studies will be introduced to provide the practical value of the most advanced techniques. This detailed approach makes The Monte Carlo Simulation Method for System Reliability and Risk Analysis a key reference for senior undergraduate and graduate students as well as researchers and practitioners. It provides a powerful tool for all those involved in system analysis for reliability, maintenance and risk evaluations.

Journal ArticleDOI
TL;DR: This work presents a novel method for joint inversion of receiver functions and surface wave dispersion data, using a transdimensional Bayesian formulation and shows that the Hierarchical Bayes procedure is a powerful tool in this situation, able to evaluate the level of information brought by different data types in the misfit, thus removing the arbitrary choice of weighting factors.
Abstract: We present a novel method for joint inversion of receiver functions and surface wave dispersion data, using a transdimensional Bayesian formulation. This class of algorithms treats the number of model parameters (e.g. number of layers) as an unknown in the problem. The dimension of the model space is variable and a Markov chain Monte Carlo (McMC) scheme is used to provide a parsimonious solution that fully quantifies the degree of knowledge one has about seismic structure (i.e. constraints on the model, resolution, and trade-offs). The level of data noise (i.e. the covariance matrix of data errors) effectively controls the information recoverable from the data and here it naturally determines the complexity of the model (i.e. the number of model parameters). However, it is often difficult to quantify the data noise appropriately, particularly in the case of seismic waveform inversion where data errors are correlated. Here we address the issue of noise estimation using an extended Hierarchical Bayesian formulation, which allows both the variance and covariance of data noise to be treated as unknowns in the inversion. In this way it is possible to let the data infer the appropriate level of data fit. In the context of joint inversions, assessment of uncertainty for different data types becomes crucial in the evaluation of the misfit function. We show that the Hierarchical Bayes procedure is a powerful tool in this situation, because it is able to evaluate the level of information brought by different data types in the misfit, thus removing the arbitrary choice of weighting factors. After illustrating the method with synthetic tests, a real data application is shown where teleseismic receiver functions and ambient noise surface wave dispersion measurements from the WOMBAT array (South-East Australia) are jointly inverted to provide a probabilistic 1D model of shear-wave velocity beneath a given station.
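The hierarchical idea (letting the data determine their own noise level) can be shown on a much smaller problem than a joint seismic inversion. The sketch below runs a Metropolis sampler for a straight-line fit in which the noise standard deviation is itself a sampled parameter rather than a fixed weight; it is a toy analogue of the paper's Hierarchical Bayes treatment, not its transdimensional algorithm.

```python
import numpy as np

rng = np.random.default_rng(3)

# synthetic data: straight line with a noise level unknown to the sampler
x = np.linspace(0, 10, 50)
y = 2.0 * x + 1.0 + rng.normal(0, 1.5, size=x.size)

def log_posterior(theta):
    a, b, log_sigma = theta
    sigma = np.exp(log_sigma)
    resid = y - (a * x + b)
    # Gaussian likelihood with the noise level treated as an unknown
    # (flat priors on a, b, log_sigma for simplicity)
    return -0.5 * np.sum(resid**2) / sigma**2 - y.size * log_sigma

def metropolis(n_steps=50_000):
    step = np.array([0.05, 0.2, 0.05])         # proposal widths
    theta = np.array([1.0, 0.0, 0.0])          # start: a=1, b=0, sigma=1
    logp = log_posterior(theta)
    chain = np.empty((n_steps, 3))
    for i in range(n_steps):
        prop = theta + step * rng.normal(size=3)
        logp_prop = log_posterior(prop)
        if np.log(rng.random()) < logp_prop - logp:
            theta, logp = prop, logp_prop
        chain[i] = theta
    return chain[n_steps // 2:]                # discard burn-in

chain = metropolis()
a_post, b_post, sigma_post = chain[:, 0], chain[:, 1], np.exp(chain[:, 2])
print(f"slope {a_post.mean():.2f}, intercept {b_post.mean():.2f}, "
      f"inferred noise sigma {sigma_post.mean():.2f} (true 1.5)")
```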

Journal ArticleDOI
TL;DR: In this paper, a probabilistic power flow (PPF) algorithm was applied to evaluate the influence of photovoltaic (PV) generation uncertainty on transmission system performance, and three types of approximation expansions based on cumulants, namely the Gram-Charlier expansion, the Edgeworth expansion, and the Cornish-Fisher expansion, were compared.
Abstract: This paper applies a probabilistic power flow (PPF) algorithm to evaluate the influence of photovoltaic (PV) generation uncertainty on transmission system performance. PV generation has the potential to cause a significant impact on power system reliability in the near future. A cumulant-based PPF algorithm suitable for large systems is used to avoid convolution calculations. Correlation among input random variables is considered. Specifically, correlations between adjacent PV resources are considered. Three types of approximation expansions based on cumulants, namely the Gram-Charlier expansion, the Edgeworth expansion, and the Cornish-Fisher expansion, are compared, and their properties, advantages, and deficiencies are discussed. Additionally, a novel probabilistic model of PV generation is developed to obtain the probability density function (PDF) of the PV generation production based on the environmental conditions. The proposed approaches with the three expansions are compared with Monte Carlo simulations (MCS) with results for a 2497-bus representation of the Arizona area of the Western Electricity Coordinating Council (WECC) system.
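The cumulant-plus-expansion step can be sketched in isolation: estimate the first four cumulants of an output quantity, approximate its density with a Gram-Charlier (Type A) expansion, and compare against a Monte Carlo histogram. The output variable below is a generic skewed sum rather than a power-flow quantity, and truncating at the fourth cumulant is an illustrative choice.

```python
import numpy as np

rng = np.random.default_rng(11)

# toy "output" combining a Gaussian load term and a skewed PV-like term
samples = rng.normal(0.0, 1.0, 200_000) + 0.6 * rng.gamma(2.0, 1.0, 200_000)

# first four cumulants estimated from the samples
mu = samples.mean()
c2 = samples.var()
c3 = ((samples - mu) ** 3).mean()
c4 = ((samples - mu) ** 4).mean() - 3 * c2**2
sigma = np.sqrt(c2)
skew, exkurt = c3 / sigma**3, c4 / sigma**4

def gram_charlier_pdf(x):
    """Gram-Charlier Type A density truncated at the fourth cumulant."""
    z = (x - mu) / sigma
    he3 = z**3 - 3 * z
    he4 = z**4 - 6 * z**2 + 3
    base = np.exp(-0.5 * z**2) / (sigma * np.sqrt(2 * np.pi))
    return base * (1 + skew / 6 * he3 + exkurt / 24 * he4)

# compare the expansion with a Monte Carlo histogram on a coarse grid
hist, edges = np.histogram(samples, bins=np.linspace(samples.min(), samples.max(), 9),
                           density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
for c, h in zip(centers, hist):
    print(f"x={c:6.2f}  MC density {h:.4f}  Gram-Charlier {gram_charlier_pdf(c):.4f}")
```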

Journal ArticleDOI
TL;DR: SUR (stepwise uncertainty reduction) strategies are derived from a Bayesian formulation of the problem of estimating a probability of failure of a function f using a Gaussian process model of f, and aim at performing evaluations of f as efficiently as possible to infer the value of the probability of failure.
Abstract: This paper deals with the problem of estimating the volume of the excursion set of a function f: ℝ^d → ℝ above a given threshold, under a probability measure on ℝ^d that is assumed to be known. In the industrial world, this corresponds to the problem of estimating a probability of failure of a system. When only an expensive-to-simulate model of the system is available, the budget for simulations is usually severely limited and therefore classical Monte Carlo methods ought to be avoided. One of the main contributions of this article is to derive SUR (stepwise uncertainty reduction) strategies from a Bayesian formulation of the problem of estimating a probability of failure. These sequential strategies use a Gaussian process model of f and aim at performing evaluations of f as efficiently as possible to infer the value of the probability of failure. We compare these strategies to other strategies also based on a Gaussian process model for estimating a probability of failure.
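A heavily simplified sequential strategy in the same spirit (not the SUR criterion derived in the paper) can be sketched with a Gaussian process surrogate: at each step, evaluate the expensive function where the surrogate is most likely to misclassify the exceedance, then estimate the failure probability from the surrogate mean. The sketch assumes scikit-learn and SciPy are available and uses a toy one-dimensional f.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def f(x):                                 # toy stand-in for an expensive simulator
    return np.sin(3 * x) + 0.5 * x

threshold = 1.2                           # "failure" means f(x) > threshold
candidates = np.linspace(-2, 2, 400).reshape(-1, 1)   # uniform measure on [-2, 2]

# small initial design, then sequential enrichment of the surrogate
X = np.linspace(-2, 2, 5).reshape(-1, 1)
y = f(X).ravel()
gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), alpha=1e-8)

for _ in range(15):
    gp.fit(X, y)
    mean, std = gp.predict(candidates, return_std=True)
    # evaluate f where the surrogate is most likely to misclassify the exceedance
    p_misclass = norm.cdf(-np.abs(mean - threshold) / np.maximum(std, 1e-12))
    x_new = candidates[np.argmax(p_misclass)]
    X = np.vstack([X, x_new])
    y = np.append(y, f(x_new))

gp.fit(X, y)
mean, _ = gp.predict(candidates, return_std=True)
p_fail = np.mean(mean > threshold)        # plug-in estimate under the uniform measure
p_ref = np.mean(f(candidates).ravel() > threshold)
print(f"estimated failure probability {p_fail:.3f} "
      f"(grid reference {p_ref:.3f}) after {len(X)} evaluations of f")
```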

Journal ArticleDOI
TL;DR: In this article, a general method that allows one to decay narrow resonances in Les Houches Monte Carlo events in an efficient and accurate way is presented, which preserves both spin correlation and finite width effects to a very good accuracy.
Abstract: We present a general method that allows one to decay narrow resonances in Les Houches Monte Carlo events in an efficient and accurate way. The procedure preserves both spin correlation and finite width effects to a very good accuracy, and is therefore particularly suited for the decay of resonances in production events generated at next-to-leading-order accuracy. The method is implemented as a generic tool in the MadGraph framework, giving access to a very large set of possible applications. We illustrate the validity of the method and the code by applying it to the case of single top and top quark pair production, and show its capabilities on the case of top quark pair production in association with a Higgs boson.

Journal ArticleDOI
TL;DR: A novel approach for estimating variance-based sensitivity indices for models with dependent variables is presented; both the first-order and total sensitivity indices are derived as generalizations of Sobol' sensitivity indices.
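For reference, the standard independent-input estimators that the paper generalizes can be written very compactly. The sketch below computes first-order and total Sobol' indices for the Ishigami test function using the pick-freeze (Saltelli/Jansen) scheme, assuming independent uniform inputs; the dependent-variable case treated in the paper requires the generalized definitions.

```python
import numpy as np

def ishigami(x, a=7.0, b=0.1):
    return np.sin(x[:, 0]) + a * np.sin(x[:, 1])**2 + b * x[:, 2]**4 * np.sin(x[:, 0])

def sobol_indices(func, d, n=100_000, seed=0):
    """First-order and total Sobol' indices via the Saltelli/Jansen
    pick-freeze estimators, assuming independent U(-pi, pi) inputs."""
    rng = np.random.default_rng(seed)
    A = rng.uniform(-np.pi, np.pi, size=(n, d))
    B = rng.uniform(-np.pi, np.pi, size=(n, d))
    fA, fB = func(A), func(B)
    var = np.var(np.concatenate([fA, fB]))
    S, ST = np.empty(d), np.empty(d)
    for i in range(d):
        AB = A.copy()
        AB[:, i] = B[:, i]            # "pick-freeze": replace column i only
        fAB = func(AB)
        S[i] = np.mean(fB * (fAB - fA)) / var          # first-order index
        ST[i] = np.mean((fA - fAB) ** 2) / (2 * var)   # total index
    return S, ST

S, ST = sobol_indices(ishigami, d=3)
print("first-order:", S.round(3))   # analytic values approx. [0.314, 0.442, 0.000]
print("total:      ", ST.round(3))  # analytic values approx. [0.557, 0.442, 0.244]
```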

Journal ArticleDOI
TL;DR: Constrained Poisson-disk sampling is proposed, a new Poisson-disk sampling scheme for polygonal meshes which can be easily tweaked in order to generate customized sets of points, such as importance sampling or distributions with generic geometric constraints.
Abstract: This paper deals with the problem of taking random samples over the surface of a 3D mesh, describing and evaluating efficient algorithms for generating different distributions. We discuss first the problem of generating a Monte Carlo distribution in an efficient and practical way avoiding common pitfalls. Then, we propose Constrained Poisson-disk sampling, a new Poisson-disk sampling scheme for polygonal meshes which can be easily tweaked in order to generate customized sets of points such as importance sampling or distributions with generic geometric constraints. In particular, two algorithms based on this approach are presented. An in-depth analysis of the frequency characterization and performance of the proposed algorithms is also presented and discussed.
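On a flat 2D domain the basic Poisson-disk idea reduces to dart throwing: accept a random candidate only if it is at least a minimum radius from every accepted point. The sketch below is this naive planar version, not the constrained surface-sampling scheme proposed for meshes.

```python
import numpy as np

def dart_throwing(radius, n_attempts=20_000, seed=0):
    """Naive Poisson-disk sampling in the unit square by dart throwing."""
    rng = np.random.default_rng(seed)
    points = []
    for _ in range(n_attempts):
        cand = rng.uniform(0.0, 1.0, size=2)
        # keep the candidate only if it respects the minimum-distance constraint
        if all(np.sum((cand - p) ** 2) >= radius**2 for p in points):
            points.append(cand)
    return np.array(points)

pts = dart_throwing(radius=0.05)
# verify the minimum-distance property
d = np.sqrt(((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1))
np.fill_diagonal(d, np.inf)
print(f"{len(pts)} samples, minimum pairwise distance {d.min():.4f} (radius 0.05)")
```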

Journal ArticleDOI
TL;DR: The current status of limits on hidden photons from past electron beam dump experiments is summarized, including two new limits from experiments at the High Energy Accelerator Research Organization in Japan (KEK) and the Laboratoire de l'Accélérateur Linéaire (LAL, Orsay) that have so far not been considered.
Abstract: Hidden sectors with light extra U(1) gauge bosons, so-called hidden photons, have recently attracted some attention because they are a common feature of physics beyond the Standard Model like string theory and supersymmetry and additionally are phenomenologically of great interest regarding recent astrophysical observations. The hidden photon is already constrained by various laboratory experiments and presently searched for in running as well as upcoming experiments. We summarize the current status of limits on hidden photons from past electron beam dump experiments including two new limits from such experiments at the High Energy Accelerator Research Organization in Japan (KEK) and the Laboratoire de l'Accélérateur Linéaire (LAL, Orsay) that have so far not been considered. All our limits take into account the experimental acceptances obtained from Monte Carlo simulations.

Journal ArticleDOI
Daiki Maki1
TL;DR: In this article, the authors introduce cointegration tests allowing for an unknown number of breaks, assuming that the unspecified number of breaks is smaller than or equal to the maximum number of breaks set a priori.


Journal ArticleDOI
TL;DR: A sequence-dependent parametrization is introduced for the coarse-grained DNA model of Ouldridge et al., with sequence-dependent stacking and base-pairing interaction strengths chosen to reproduce the melting temperatures of short duplexes.
Abstract: We introduce a sequence-dependent parametrization for a coarse-grained DNA model [T. E. Ouldridge, A. A. Louis, and J. P. K. Doye, J. Chem. Phys. 134, 085101 (2011)] originally designed to reproduce the properties of DNA molecules with average sequences. The new parametrization introduces sequence-dependent stacking and base-pairing interaction strengths chosen to reproduce the melting temperatures of short duplexes. By developing a histogram reweighting technique, we are able to fit our parameters to the melting temperatures of thousands of sequences. To demonstrate the flexibility of the model, we study the effects of sequence on: (a) the heterogeneous stacking transition of single strands, (b) the tendency of a duplex to fray at its melting point, (c) the effects of stacking strength in the loop on the melting temperature of hairpins, (d) the force-extension properties of single strands, and (e) the structure of a kissing-loop complex. Where possible, we compare our results with experimental data and find a good agreement. A simulation code called oxDNA, implementing our model, is available as a free software.

Journal ArticleDOI
TL;DR: The variance-constrained semi-grand-canonical (VC-SGC) ensemble as mentioned in this paper allows for transmutation Monte Carlo simulations of multicomponent systems in multiphase regions of the phase diagram and lends itself to scalable simulations on massively parallel platforms.
Abstract: We present an extension of the semi-grand-canonical (SGC) ensemble that we refer to as the variance-constrained semi-grand-canonical (VC-SGC) ensemble. It allows for transmutation Monte Carlo simulations of multicomponent systems in multiphase regions of the phase diagram and lends itself to scalable simulations on massively parallel platforms. By combining transmutation moves with molecular dynamics steps, structural relaxations and thermal vibrations in realistic alloys can be taken into account. In this way, we construct a robust and efficient simulation technique that is ideally suited for large-scale simulations of precipitation in multicomponent systems in the presence of structural disorder. To illustrate the algorithm introduced in this work, we study the precipitation of Cu in nanocrystalline Fe.
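The transmutation move at the heart of semi-grand-canonical sampling can be illustrated on a toy two-dimensional lattice alloy: a Metropolis step flips the chemical identity of a site, with the acceptance rule including both the change in interaction energy and a chemical-potential term. This plain SGC sketch (with arbitrary parameters) omits the variance constraint and the molecular dynamics coupling that the paper adds.

```python
import numpy as np

rng = np.random.default_rng(5)

L, J, kT, dmu = 32, 0.1, 0.3, 0.05   # lattice size, like-pair attraction, temperature,
                                     # chemical potential favouring species A (arb. units)
spins = rng.choice([-1, 1], size=(L, L))   # +1 = species A, -1 = species B

def site_energy(s, i, j):
    """Interaction of site (i, j) with its four periodic neighbours."""
    nn = s[(i + 1) % L, j] + s[(i - 1) % L, j] + s[i, (j + 1) % L] + s[i, (j - 1) % L]
    return -J * s[i, j] * nn

def sgc_sweep(s):
    """One sweep of transmutation moves: flip a site's chemical identity and
    accept with the Metropolis rule for Phi = E - dmu * N_A."""
    for _ in range(L * L):
        i, j = rng.integers(L, size=2)
        d_phi = -2.0 * site_energy(s, i, j)   # interaction-energy change of the flip
        d_phi += dmu * s[i, j]                # change of -dmu*N_A when s[i, j] flips
        if d_phi <= 0 or rng.random() < np.exp(-d_phi / kT):
            s[i, j] *= -1

for _ in range(300):
    sgc_sweep(spins)
print(f"equilibrium concentration of species A: {np.mean(spins == 1):.3f}")
```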

Journal ArticleDOI
TL;DR: This report includes considerations in the application of the TG-43U1 formalism to high-energy photon-emitting sources with particular attention to phantom size effects, interpolation accuracy dependence on dose calculation grid size, and dosimetry parameter dependence on source active length.
Abstract: Purpose: Recommendations of the American Association of Physicists in Medicine (AAPM) and the European Society for Radiotherapy and Oncology (ESTRO) on dose calculations for high-energy (average energy higher than 50 keV) photon-emitting brachytherapy sources are presented, including the physical characteristics of specific 192Ir, 137Cs, and 60Co source models. Methods: This report has been prepared by the High Energy Brachytherapy Source Dosimetry (HEBD) Working Group. This report includes considerations in the application of the TG-43U1 formalism to high-energy photon-emitting sources with particular attention to phantom size effects, interpolation accuracy dependence on dose calculation grid size, and dosimetry parameter dependence on source active length. Results: Consensus datasets for commercially available high-energy photon sources are provided, along with recommended methods for evaluating these datasets. Recommendations on dosimetry characterization methods, mainly using experimental procedures and Monte Carlo, are established and discussed. Also included are methodological recommendations on detector choice, detector energy response characterization and phantom materials, and measurement specification methodology. Uncertainty analyses are discussed and recommendations for high-energy sources without consensus datasets are given. Conclusions: Recommended consensus datasets for high-energy sources have been derived for sources that were commercially available as of January 2010. Data are presented according to the AAPM TG-43U1 formalism, with modified interpolation and extrapolation techniques of the AAPM TG-43U1S1 report for the 2D anisotropy function and radial dose function.

Journal ArticleDOI
TL;DR: In this article, the authors describe the implementation details of the colour reconnection model in the event generator Herwig++ and study the impact on final-state observables in detail and confirm the model idea from colour preconfinement on the basis of studies within the cluster hadronization model.
Abstract: We describe the implementation details of the colour reconnection model in the event generator Herwig++. We study the impact on final-state observables in detail and confirm the model idea from colour preconfinement on the basis of studies within the cluster hadronization model. Moreover, we show that the description of minimum bias and underlying event data at the LHC is improved with this model and present results of a tune to available data.

Journal ArticleDOI
TL;DR: The main results presented herein include a synthesis of the properties of the measures defined so far in the scientific literature; the proposed generalizations naturally extend the body of previously known measures and lead to the definition of numerous new measures.