
Showing papers on "Monte Carlo method published in 2008"


Book
04 Feb 2008
TL;DR: In this book, the authors present how to set up uncertainty and sensitivity analyses, covering Monte Carlo and linear regression, variance-based sensitivity indices, the elementary effects method, and Monte Carlo Filtering (MCF), illustrated with experimental designs, worked examples, and exercises.
Abstract: Preface. 1. Introduction to Sensitivity Analysis. 1.1 Models and Sensitivity Analysis. 1.1.1 Definition. 1.1.2 Models. 1.1.3 Models and Uncertainty. 1.1.4 How to Set Up Uncertainty and Sensitivity Analyses. 1.1.5 Implications for Model Quality. 1.2 Methods and Settings for Sensitivity Analysis - An Introduction. 1.2.1 Local versus Global. 1.2.2 A Test Model. 1.2.3 Scatterplots versus Derivatives. 1.2.4 Sigma-normalized Derivatives. 1.2.5 Monte Carlo and Linear Regression. 1.2.6 Conditional Variances - First Path. 1.2.7 Conditional Variances - Second Path. 1.2.8 Application to Model (1.3). 1.2.9 A First Setting: 'Factor Prioritization'. 1.2.10 Nonadditive Models. 1.2.11 Higher-order Sensitivity Indices. 1.2.12 Total Effects. 1.2.13 A Second Setting: 'Factor Fixing'. 1.2.14 Rationale for Sensitivity Analysis. 1.2.15 Treating Sets. 1.2.16 Further Methods. 1.2.17 Elementary Effect Test. 1.2.18 Monte Carlo Filtering. 1.3 Nonindependent Input Factors. 1.4 Possible Pitfalls for a Sensitivity Analysis. 1.5 Concluding Remarks. 1.6 Exercises. 1.7 Answers. 1.8 Additional Exercises. 1.9 Solutions to Additional Exercises. 2. Experimental Designs. 2.1 Introduction. 2.2 Dependency on a Single Parameter. 2.3 Sensitivity Analysis of a Single Parameter. 2.3.1 Random Values. 2.3.2 Stratified Sampling. 2.3.3 Mean and Variance Estimates for Stratified Sampling. 2.4 Sensitivity Analysis of Multiple Parameters. 2.4.1 Linear Models. 2.4.2 One-at-a-time (OAT) Sampling. 2.4.3 Limits on the Number of Influential Parameters. 2.4.4 Fractional Factorial Sampling. 2.4.5 Latin Hypercube Sampling. 2.4.6 Multivariate Stratified Sampling. 2.4.7 Quasi-random Sampling with Low-discrepancy Sequences. 2.5 Group Sampling. 2.6 Exercises. 2.7 Exercise Solutions. 3. Elementary Effects Method. 3.1 Introduction. 3.2 The Elementary Effects Method. 3.3 The Sampling Strategy and its Optimization. 3.4 The Computation of the Sensitivity Measures. 3.5 Working with Groups. 3.6 The EE Method Step by Step. 3.7 Conclusions. 3.8 Exercises. 3.9 Solutions. 4. Variance-based Methods. 4.1 Different Tests for Different Settings. 4.2 Why Variance? 4.3 Variance-based Methods. A Brief History. 4.4 Interaction Effects. 4.5 Total Effects. 4.6 How to Compute the Sensitivity Indices. 4.7 FAST and Random Balance Designs. 4.8 Putting the Method to Work: the Infection Dynamics Model. 4.9 Caveats. 4.10 Exercises. 5. Factor Mapping and Metamodelling. 5.1 Introduction. 5.2 Monte Carlo Filtering (MCF). 5.2.1 Implementation of Monte Carlo Filtering. 5.2.2 Pros and Cons. 5.2.3 Exercises. 5.2.4 Solutions. 5.2.5 Examples. 5.3 Metamodelling and the High-Dimensional Model Representation. 5.3.1 Estimating HDMRs and Metamodels. 5.3.2 A Simple Example. 5.3.3 Another Simple Example. 5.3.4 Exercises. 5.3.5 Solutions to Exercises. 5.4 Conclusions. 6. Sensitivity Analysis: from Theory to Practice. 6.1 Example 1: a Composite Indicator. 6.1.1 Setting the Problem. 6.1.2 A Composite Indicator Measuring Countries' Performance in Environmental Sustainability. 6.1.3 Selecting the Sensitivity Analysis Method. 6.1.4 The Sensitivity Analysis Experiment and its Results. 6.1.5 Conclusions. 6.2 Example 2: Importance of Jumps in Pricing Options. 6.2.1 Setting the Problem. 6.2.2 The Heston Stochastic Volatility Model with Jumps. 6.2.3 Selecting a Suitable Sensitivity Analysis Method. 6.2.4 The Sensitivity Analysis Experiment. 6.2.5 Conclusions. 6.3 Example 3: a Chemical Reactor. 6.3.1 Setting the Problem. 6.3.2 Thermal Runaway Analysis of a Batch Reactor.
6.3.3 Selecting the Sensitivity Analysis Method. 6.3.4 The Sensitivity Analysis Experiment and its Results. 6.3.5 Conclusions. 6.4 Example 4: a Mixed Uncertainty-Sensitivity Plot. 6.4.1 In Brief. 6.5 When to use What? Afterword. Bibliography. Index.
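
The variance-based setting the book describes can be illustrated with a short Monte Carlo computation of first-order and total Sobol' indices. The sketch below assumes the standard Ishigami test function and the common pick-freeze estimators; it is a minimal illustration, not code or an exercise from the book.

```python
import numpy as np

def ishigami(x, a=7.0, b=0.1):
    # Classic three-input test function with known Sobol' indices.
    return np.sin(x[:, 0]) + a * np.sin(x[:, 1]) ** 2 + b * x[:, 2] ** 4 * np.sin(x[:, 0])

def sobol_indices(model, dim, n=100_000, rng=None):
    """Pick-freeze Monte Carlo estimates of first-order (S1) and total (ST) Sobol' indices."""
    rng = np.random.default_rng(rng)
    A = rng.uniform(-np.pi, np.pi, size=(n, dim))
    B = rng.uniform(-np.pi, np.pi, size=(n, dim))
    yA, yB = model(A), model(B)
    var = np.var(np.concatenate([yA, yB]))
    S1, ST = np.empty(dim), np.empty(dim)
    for i in range(dim):
        ABi = A.copy()
        ABi[:, i] = B[:, i]                            # resample factor i only
        yABi = model(ABi)
        S1[i] = np.mean(yB * (yABi - yA)) / var        # first-order effect of factor i
        ST[i] = 0.5 * np.mean((yA - yABi) ** 2) / var  # total effect of factor i
    return S1, ST

S1, ST = sobol_indices(ishigami, dim=3, n=100_000, rng=1)
print("S1 =", np.round(S1, 3), " ST =", np.round(ST, 3))
```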

4,306 citations


Journal ArticleDOI
TL;DR: In this article, generalized polynomial chaos expansions (PCE) are used to build surrogate models that allow one to compute the Sobol' indices analytically as a post-processing of the PCE coefficients.
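
A minimal sketch of the post-processing idea, assuming an orthonormal PCE whose terms are indexed by multi-indices; the toy coefficients and multi-indices below are made up for illustration and are not taken from the article.

```python
import numpy as np

def sobol_from_pce(multi_indices, coeffs):
    """First-order and total Sobol' indices from the coefficients of an
    orthonormal PCE  y = sum_a c_a Psi_a(x), with a a multi-index over the inputs."""
    multi_indices = np.asarray(multi_indices)
    coeffs = np.asarray(coeffs, dtype=float)
    nonconst = multi_indices.any(axis=1)                 # exclude the constant term
    var = np.sum(coeffs[nonconst] ** 2)                  # total model variance
    dim = multi_indices.shape[1]
    S1, ST = np.zeros(dim), np.zeros(dim)
    for i in range(dim):
        others = np.delete(multi_indices, i, axis=1).sum(axis=1) == 0
        S1[i] = np.sum(coeffs[nonconst & others] ** 2) / var        # terms in x_i alone
        ST[i] = np.sum(coeffs[multi_indices[:, i] > 0] ** 2) / var  # all terms involving x_i
    return S1, ST

# Toy expansion in two inputs: y = 1 + 2*Psi_(1,0) + 0.5*Psi_(0,1) + 0.3*Psi_(1,1)
idx = [(0, 0), (1, 0), (0, 1), (1, 1)]
c = [1.0, 2.0, 0.5, 0.3]
print(sobol_from_pce(idx, c))
```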

1,934 citations


Journal ArticleDOI
TL;DR: In this article, the authors propose a standardized version of Swamy's test of slope homogeneity for panel data models in which the cross-section dimension (N) may be large relative to the time-series dimension (T).

1,819 citations


Journal ArticleDOI
TL;DR: It is shown that multigrid ideas can be used to reduce the computational complexity of estimating an expected value arising from a stochastic differential equation using Monte Carlo path simulations.
Abstract: We show that multigrid ideas can be used to reduce the computational complexity of estimating an expected value arising from a stochastic differential equation using Monte Carlo path simulations. In the simplest case of a Lipschitz payoff and a Euler discretisation, the computational cost to achieve an accuracy of $O(\epsilon)$ is reduced from $O(\epsilon^{-3})$ to $O(\epsilon^{-2}(\log \epsilon)^2)$. The analysis is supported by numerical results showing significant computational savings.
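
A minimal multilevel Monte Carlo sketch in the spirit of this telescoping-sum idea, assuming geometric Brownian motion, a European call payoff, and a crude fixed per-level sample schedule; the paper's optimal sample allocation across levels is not reproduced here.

```python
import numpy as np

def mlmc_gbm_call(M=4, L=5, N0=200_000, T=1.0, S0=1.0, K=1.0, r=0.05, sigma=0.2, rng=None):
    """Multilevel Monte Carlo estimate of a discounted European call price under GBM,
    using Euler steps of size T/M**l on level l and coupled fine/coarse paths."""
    rng = np.random.default_rng(rng)
    payoff = lambda S: np.exp(-r * T) * np.maximum(S - K, 0.0)
    estimate = 0.0
    for l in range(L + 1):
        N = max(N0 // 2 ** l, 1000)         # crude per-level sample schedule
        nf = M ** l                          # number of fine-grid steps
        dWf = rng.normal(0.0, np.sqrt(T / nf), size=(N, nf))
        Sf = np.full(N, S0)
        for n in range(nf):                  # Euler scheme on the fine grid
            Sf = Sf + r * Sf * (T / nf) + sigma * Sf * dWf[:, n]
        if l == 0:
            estimate += payoff(Sf).mean()
        else:
            nc = M ** (l - 1)                # coarse grid reuses the same Brownian increments
            dWc = dWf.reshape(N, nc, M).sum(axis=2)
            Sc = np.full(N, S0)
            for n in range(nc):
                Sc = Sc + r * Sc * (T / nc) + sigma * Sc * dWc[:, n]
            estimate += (payoff(Sf) - payoff(Sc)).mean()   # level-l correction term
    return estimate

print(mlmc_gbm_call(rng=0))
```

In the full method the number of samples per level is chosen from estimated level variances so as to hit a target accuracy at minimal cost; the fixed halving schedule above is only a placeholder.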

1,619 citations


Journal ArticleDOI
TL;DR: Herwig++ as mentioned in this paper is a general-purpose Monte Carlo event generator for the simulation of hard lepton-lepton, lepton-hadron and hadron-hadron collisions, with a number of important hard scattering processes available.
Abstract: In this paper we describe Herwig++ version 2.3, a general-purpose Monte Carlo event generator for the simulation of hard lepton-lepton, lepton-hadron and hadron-hadron collisions. A number of important hard scattering processes are available, together with an interface via the Les Houches Accord to specialized matrix element generators for additional processes. The simulation of Beyond the Standard Model (BSM) physics includes a range of models and allows new models to be added by encoding the Feynman rules of the model. The parton-shower approach is used to simulate initial- and final-state QCD radiation, including colour coherence effects, with special emphasis on the correct description of radiation from heavy particles. The underlying event is simulated using an eikonal multiple parton-parton scattering model. The formation of hadrons from the quarks and gluons produced in the parton shower is described using the cluster hadronization model. Hadron decays are simulated using matrix elements, where possible including spin correlations and off-shell effects.

1,519 citations


Journal ArticleDOI
TL;DR: In this article, the authors build on the work of Shaw et al. and present three new methods for sampling and evidence evaluation from distributions that may contain multiple modes and significant degeneracies in very high dimensions.
Abstract: In performing a Bayesian analysis of astronomical data, two difficult problems often emerge. First, in estimating the parameters of some model for the data, the resulting posterior distribution may be multimodal or exhibit pronounced (curving) degeneracies, which can cause problems for traditional Markov Chain Monte Carlo (MCMC) sampling methods. Secondly, in selecting between a set of competing models, calculation of the Bayesian evidence for each model is computationally expensive using existing methods such as thermodynamic integration. The nested sampling method introduced by Skilling has greatly reduced the computational expense of calculating evidence and also produces posterior inferences as a by-product. This method has been applied successfully in cosmological applications by Mukherjee, Parkinson & Liddle, but their implementation was efficient only for unimodal distributions without pronounced degeneracies. Shaw, Bridges & Hobson recently introduced a clustered nested sampling method which is significantly more efficient in sampling from multimodal posteriors and also determines the expectation and variance of the final evidence from a single run of the algorithm, hence providing a further increase in efficiency. In this paper, we build on the work of Shaw et al. and present three new methods for sampling and evidence evaluation from distributions that may contain multiple modes and significant degeneracies in very high dimensions; we also present an even more efficient technique for estimating the uncertainty on the evaluated evidence. These methods lead to a further substantial improvement in sampling efficiency and robustness, and are applied to two toy problems to demonstrate the accuracy and economy of the evidence calculation and parameter estimation. Finally, we discuss the use of these methods in performing Bayesian object detection in astronomical data sets, and show that they significantly outperform existing MCMC techniques. An implementation of our methods will be publicly released shortly.
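
A minimal sketch of plain nested sampling (the Skilling algorithm that this paper builds on), with new live points drawn by simple rejection from the prior; the paper's contribution, ellipsoidal/clustered sampling of the likelihood-constrained prior, replaces exactly that rejection step and is not implemented here. The toy problem and all parameter values are illustrative.

```python
import numpy as np

def nested_sampling_logZ(loglike, prior_sample, nlive=100, n_iter=700, rng=None):
    """Basic nested sampling estimate of the log-evidence log Z = log ∫ L(θ)π(θ)dθ.
    Replacement points are drawn by rejection from the prior, which is only
    workable for low-dimensional toy problems."""
    rng = np.random.default_rng(rng)
    live = prior_sample(nlive, rng)
    live_logL = np.array([loglike(t) for t in live])
    logZ = -np.inf
    logwidth = np.log(1.0 - np.exp(-1.0 / nlive))     # first prior-volume shell
    for _ in range(n_iter):
        worst = np.argmin(live_logL)
        logZ = np.logaddexp(logZ, logwidth + live_logL[worst])
        Lmin = live_logL[worst]
        while True:                                    # replace the worst live point
            cand = prior_sample(1, rng)[0]
            cand_logL = loglike(cand)
            if cand_logL > Lmin:
                break
        live[worst], live_logL[worst] = cand, cand_logL
        logwidth -= 1.0 / nlive                        # geometric shrinkage of prior volume
    log_remaining = -n_iter / nlive - np.log(nlive)    # spread leftover volume over live points
    for logL in live_logL:
        logZ = np.logaddexp(logZ, log_remaining + logL)
    return logZ

# Toy problem: 2D unit Gaussian likelihood, uniform prior on [-5, 5]^2 (true log Z ≈ log(1/100)).
loglike = lambda t: -0.5 * np.sum(t ** 2) - np.log(2 * np.pi)
prior_sample = lambda n, rng: rng.uniform(-5.0, 5.0, size=(n, 2))
print(nested_sampling_logZ(loglike, prior_sample, rng=1), np.log(1 / 100))
```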

1,396 citations


Journal ArticleDOI
TL;DR: This work demonstrates algebraic convergence with respect to the total number of collocation points and quantifies the effect of the dimension of the problem (number of input random variables) in the final estimates, indicating for which problems the sparse grid stochastic collocation method is more efficient than Monte Carlo.
Abstract: This work proposes and analyzes a Smolyak-type sparse grid stochastic collocation method for the approximation of statistical quantities related to the solution of partial differential equations with random coefficients and forcing terms (input data of the model). To compute solution statistics, the sparse grid stochastic collocation method uses approximate solutions, produced here by finite elements, corresponding to a deterministic set of points in the random input space. This naturally requires solving uncoupled deterministic problems as in the Monte Carlo method. If the number of random variables needed to describe the input data is moderately large, full tensor product spaces are computationally expensive to use due to the curse of dimensionality. In this case the sparse grid approach is still expected to be competitive with the classical Monte Carlo method. Therefore, it is of major practical relevance to understand in which situations the sparse grid stochastic collocation method is more efficient than Monte Carlo. This work provides error estimates for the fully discrete solution using $L^q$ norms and analyzes the computational efficiency of the proposed method. In particular, it demonstrates algebraic convergence with respect to the total number of collocation points and quantifies the effect of the dimension of the problem (number of input random variables) in the final estimates. The derived estimates are then used to compare the method with Monte Carlo, indicating for which problems the former is more efficient than the latter. Computational evidence complements the present theory and shows the effectiveness of the sparse grid stochastic collocation method compared to full tensor and Monte Carlo approaches.
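
The one-dimensional version of the collocation idea can be sketched as follows: the random input is replaced by a small deterministic set of quadrature nodes at which the (here trivial) model is evaluated. The model Q and the node count are illustrative; the Smolyak sparse grids analyzed in the paper are what make this tractable when the number of random inputs is large.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegauss

# "Model" with a single N(0,1) random input; the exact mean of Q(Y) is exp(0.125).
Q = lambda y: np.exp(0.5 * y)

# Stochastic collocation: evaluate the model at a handful of Gauss-Hermite nodes.
nodes, weights = hermegauss(5)                       # weight function exp(-y^2/2)
mean_colloc = np.sum(weights * Q(nodes)) / np.sqrt(2 * np.pi)

# Plain Monte Carlo needs many more model evaluations for comparable accuracy.
rng = np.random.default_rng(0)
mean_mc = Q(rng.standard_normal(100_000)).mean()

print(mean_colloc, mean_mc, np.exp(0.125))
```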

1,257 citations


Journal ArticleDOI
TL;DR: The current edition of the handbook is intended to provide practitioners with a comprehensive resource for the use of the software package Stata, which provides almost all commonly used methods of data analysis.

955 citations


Book
02 Sep 2008
TL;DR: In this book, the authors review probability theory, random processes, random fields, and simulation methods, and apply them through Monte Carlo and random finite-element analyses to geotechnical risk problems such as groundwater flow, foundation settlement, bearing capacity, slope stability, earth pressure, mine pillar capacity, and liquefaction.
Abstract: Preface. Acknowledgements. PART 1: THEORY. Chapter 1: Review of Probability Theory. 1.1 Introduction. 1.2 Basic Set Theory. 1.3 Probability. 1.4 Conditional Probability. 1.5 Random Variables and Probability Distributions. 1.6 Measures of Central Tendency, Variability, and Association. 1.7 Linear Combinations of Random Variables. 1.8 Functions of Random Variables. 1.9 Common Discrete Probability Distributions. 1.10 Common Continuous Probability Distributions. 1.11 Extreme-Value Distributions. Chapter 2: Discrete Random Processes. 2.1 Introduction. 2.2 Discrete-Time, Discrete-State Markov Chains. 2.3 Continuous-Time Markov Chains. 2.4 Queueing Models. Chapter 3: Random Fields. 3.1 Introduction. 3.2 Covariance Function. 3.3 Spectral Density Function. 3.4 Variance Function. 3.5 Correlation Length. 3.6 Some Common Models. 3.7 Random Fields in Higher Dimensions. Chapter 4: Best Estimates, Excursions, and Averages. 4.1 Best Linear Unbiased Estimation. 4.2 Threshold Excursions in One Dimension. 4.3 Threshold Excursions in Two Dimensions. 4.4 Averages. Chapter 5: Estimation. 5.1 Introduction. 5.2 Choosing a Distribution. 5.3 Estimation in Presence of Correlation. 5.4 Advanced Estimation Techniques. Chapter 6: Simulation. 6.1 Introduction. 6.2 Random-Number Generators. 6.3 Generating Nonuniform Random Variables. 6.4 Generating Random Fields. 6.5 Conditional Simulation of Random Fields. 6.6 Monte Carlo Simulation. Chapter 7: Reliability-Based Design. 7.1 Acceptable Risk. 7.2 Assessing Risk. 7.3 Background to Design Methodologies. 7.4 Load and Resistance Factor Design. 7.5 Going Beyond Calibration. 7.6 Risk-Based Decision Making. PART 2: PRACTICE. Chapter 8: Groundwater Modeling. 8.1 Introduction. 8.2 Finite-Element Model. 8.3 One-Dimensional Flow. 8.4 Simple Two-Dimensional Flow. 8.5 Two-Dimensional Flow Beneath Water-Retaining Structures. 8.6 Three-Dimensional Flow. 8.7 Three-Dimensional Exit Gradient Analysis. Chapter 9: Flow Through Earth Dams. 9.1 Statistics of Flow Through Earth Dams. 9.2 Extreme Hydraulic Gradient Statistics. Chapter 10: Settlement of Shallow Foundations. 10.1 Introduction. 10.2 Two-Dimensional Probabilistic Foundation Settlement. 10.3 Three-Dimensional Probabilistic Foundation Settlement. 10.4 Strip Footing Risk Assessment. 10.5 Resistance Factors for Shallow-Foundation Settlement Design. Chapter 11: Bearing Capacity. 11.1 Strip Footings on c-φ Soils. 11.2 Load and Resistance Factor Design of Shallow Foundations. 11.3 Summary. Chapter 12: Deep Foundations. 12.1 Introduction. 12.2 Random Finite-Element Method. 12.3 Monte Carlo Estimation of Pile Capacity. 12.4 Summary. Chapter 13: Slope Stability. 13.1 Introduction. 13.2 Probabilistic Slope Stability Analysis. 13.3 Slope Stability Reliability Model. Chapter 14: Earth Pressure. 14.1 Introduction. 14.2 Passive Earth Pressures. 14.3 Active Earth Pressures: Retaining Wall Reliability. Chapter 15: Mine Pillar Capacity. 15.1 Introduction. 15.2 Literature. 15.3 Parametric Studies. 15.4 Probabilistic Interpretation. 15.5 Summary. Chapter 16: Liquefaction. 16.1 Introduction. 16.2 Model Size: Soil Liquefaction. 16.3 Monte Carlo Analysis and Results. 16.4 Summary. PART 3: APPENDIXES. APPENDIX A: PROBABILITY TABLES. A.1 Normal Distribution. A.2 Inverse Student t-Distribution. A.3 Inverse Chi-Square Distribution. APPENDIX B: NUMERICAL INTEGRATION. B.1 Gaussian Quadrature. APPENDIX C: COMPUTING VARIANCES AND COVARIANCES OF LOCAL AVERAGES. C.1 One-Dimensional Case. C.2 Two-Dimensional Case. C.3 Three-Dimensional Case. Index.
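
In the spirit of the book's settlement and reliability chapters, a minimal Monte Carlo probability-of-failure sketch with a lognormal soil modulus; all parameter values (footing load, modulus statistics, settlement limit) are illustrative and are not taken from the book.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 1_000_000

# Illustrative immediate-settlement model: delta = q*B/E_s with a lognormal soil modulus.
q, B = 100.0, 2.0                         # footing pressure (kPa) and width (m)
mean_E, cov_E = 20_000.0, 0.3             # mean modulus (kPa) and coefficient of variation
sigma_ln = np.sqrt(np.log(1.0 + cov_E ** 2))
mu_ln = np.log(mean_E) - 0.5 * sigma_ln ** 2
E_s = rng.lognormal(mu_ln, sigma_ln, n)   # simulated soil modulus realizations

settlement_mm = q * B / E_s * 1000.0
limit_mm = 25.0                           # serviceability limit
pf = np.mean(settlement_mm > limit_mm)    # Monte Carlo estimate of failure probability
se = np.sqrt(pf * (1.0 - pf) / n)         # its Monte Carlo standard error
print(f"P_f ≈ {pf:.4f} ± {se:.4f}")
```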

751 citations


BookDOI
01 Jan 2008
TL;DR: In this book, the authors survey computational many-particle methods, including molecular dynamics, classical and quantum Monte Carlo, the Monte Carlo method for particle transport problems, the particle-in-cell method, density-matrix renormalization group techniques, and concepts of high performance computing.
Abstract: Molecular Dynamics.- Introduction to Molecular Dynamics.- Wigner Function Quantum Molecular Dynamics.- Classical Monte Carlo.- The Monte Carlo Method, an Introduction.- Monte Carlo Methods in Classical Statistical Physics.- The Monte Carlo Method for Particle Transport Problems.- Kinetic Modelling.- The Particle-in-Cell Method.- Gyrokinetic and Gyrofluid Theory and Simulation of Magnetized Plasmas.- Semiclassical Approaches.- Boltzmann Transport in Condensed Matter.- Semiclassical Description of Quantum Many-Particle Dynamics in Strong Laser Fields.- Quantum Monte Carlo.- World-line and Determinantal Quantum Monte Carlo Methods for Spins, Phonons and Electrons.- Autocorrelations in Quantum Monte Carlo Simulations of Electron-Phonon Models.- Diagrammatic Monte Carlo and Stochastic Optimization Methods for Complex Composite Objects in Macroscopic Baths.- Path Integral Monte Carlo Simulation of Charged Particles in Traps.- Ab-Initio Methods in Physics and Chemistry.- Ab-Initio Approach to the Many-Electron Problem.- Ab-Initio Methods Applied to Structure Optimization and Microscopic Modelling.- Effective Field Approaches.- Dynamical Mean-Field Approximation and Cluster Methods for Correlated Electron Systems.- Local Distribution Approach.- Iterative Methods for Sparse Eigenvalue Problems.- Exact Diagonalization Techniques.- Chebyshev Expansion Techniques.- The Density Matrix Renormalisation Group: Concepts and Applications.- The Conceptual Background of Density-Matrix Renormalization.- Density-Matrix Renormalization Group Algorithms.- Dynamical Density-Matrix Renormalization Group.- Studying Time-Dependent Quantum Phenomena with the Density-Matrix Renormalization Group.- Applications of Quantum Information in the Density-Matrix Renormalization Group.- Density-Matrix Renormalization Group for Transfer Matrices: Static and Dynamical Properties of 1D Quantum Systems at Finite Temperature.- Concepts of High Performance Computing.- Architecture and Performance Characteristics of Modern High Performance Computers.- Optimization Techniques for Modern High Performance Computers.
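
As a minimal example of the classical statistical-physics Monte Carlo covered here, a single-spin-flip Metropolis sampler for the 2D Ising model; this is not code from the book, and the lattice size, temperatures, and sweep counts are illustrative.

```python
import numpy as np

def ising_metropolis(L=16, T=2.0, sweeps=400, rng=None):
    """Single-spin-flip Metropolis sampling of the 2D Ising model (J = 1, k_B = 1).
    Returns the mean absolute magnetization per spin over the second half of the run."""
    rng = np.random.default_rng(rng)
    spins = rng.choice([-1, 1], size=(L, L))
    mags = []
    for sweep in range(sweeps):
        for _ in range(L * L):
            i, j = rng.integers(0, L, size=2)
            # energy change of flipping spin (i, j), with periodic boundary conditions
            nb = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
                  + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
            dE = 2.0 * spins[i, j] * nb
            if dE <= 0 or rng.random() < np.exp(-dE / T):
                spins[i, j] *= -1
        if sweep >= sweeps // 2:
            mags.append(abs(spins.mean()))
    return np.mean(mags)

# Below and above the critical temperature T_c ≈ 2.27 (ordered vs. disordered phase).
print(ising_metropolis(T=1.5, rng=0), ising_metropolis(T=3.5, rng=0))
```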

720 citations


Proceedings ArticleDOI
07 Dec 2008
TL;DR: This paper will briefly describe the nature and relevance of Monte Carlo simulation, the way to perform these simulations and analyze results, and the underlying mathematical techniques required for performing these simulations.
Abstract: This is an introductory tutorial on Monte Carlo simulation, a type of simulation that relies on repeated random sampling and statistical analysis to compute the results. In this paper, we will briefly describe the nature and relevance of Monte Carlo simulation, the way to perform these simulations and analyze results, and the underlying mathematical techniques required for performing these simulations. We will present a few examples from various areas where Monte Carlo simulation is used, and also touch on the current state of software in this area.
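
A minimal example in the spirit of such a tutorial: estimating π by repeated random sampling, together with the statistical error estimate that the analysis step relies on (sample size is illustrative).

```python
import numpy as np

# Monte Carlo estimate of pi: the fraction of random points in the unit square
# that fall inside the quarter circle converges to pi/4.
rng = np.random.default_rng(0)
n = 1_000_000
x, y = rng.random(n), rng.random(n)
inside = (x ** 2 + y ** 2) <= 1.0
pi_hat = 4.0 * inside.mean()
std_err = 4.0 * inside.std() / np.sqrt(n)     # statistical error of the estimate
print(f"pi ≈ {pi_hat:.4f} ± {std_err:.4f}")
```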

Posted Content
Gary King
TL;DR: In this article, the authors present analytical, Monte Carlo, and empirical evidence on models for event count data and show that the exponential Poisson regression (EPR) model provides analytically, in large samples, and empirically in small, finite samples, a far superior model and optimal estimator.
Abstract: This paper presents analytical, Monte Carlo, and empirical evidence on models for event count data. Event counts are dependent variables that measure the number of times some event occurs. Counts of international events are probably the most common, but numerous examples exist in every empirical field of the discipline. The results of the analysis below strongly suggest that the way event counts have been analyzed in hundreds of important political science studies has produced statistically and substantively unreliable results. Misspecification, inefficiency, bias, inconsistency, insufficiency, and other problems result from the unknowing application of two common methods that are without theoretical justification or empirical utility in this type of data. I show that the exponential Poisson regression (EPR) model provides analytically, in large samples, and empirically, in small, finite samples, a far superior model and optimal estimator. I also demonstrate the advantage of this methodology in an application to nineteenth-century party switching in the U.S. Congress. Its use by political scientists is strongly encouraged.
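
A small simulation sketch in the spirit of the paper's Monte Carlo evidence, assuming a simple exponential Poisson regression fitted by Newton-Raphson; the design and parameter values are made up for illustration and are not the paper's experiments.

```python
import numpy as np

def poisson_mle(X, y, iters=25):
    """Exponential Poisson regression y_i ~ Poisson(exp(x_i'beta)), fitted by Newton-Raphson."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        mu = np.exp(X @ beta)
        grad = X.T @ (y - mu)               # score of the Poisson log-likelihood
        hess = X.T @ (X * mu[:, None])      # observed information
        beta = beta + np.linalg.solve(hess, grad)
    return beta

rng = np.random.default_rng(0)
true_beta = np.array([0.5, 0.8])
estimates = []
for _ in range(500):                        # small Monte Carlo experiment over replications
    x = rng.normal(size=200)
    X = np.column_stack([np.ones_like(x), x])
    y = rng.poisson(np.exp(X @ true_beta))
    estimates.append(poisson_mle(X, y))
print("mean estimate:", np.mean(estimates, axis=0), " truth:", true_beta)
```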

Book ChapterDOI
TL;DR: It is demonstrated that the maximum of the weights associated with the sample ensemble converges to one as both the sample size and the system dimension tend to infinity, and the weight singularity is established in models with more general multivariate likelihoods, e.g. Gaussian and Cauchy.
Abstract: It has been widely realized that Monte Carlo methods (approximation via a sample ensemble) may fail in large scale systems. This work offers some theoretical insight into this phenomenon in the context of the particle filter. We demonstrate that the maximum of the weights associated with the sample ensemble converges to one as both the sample size and the system dimension tend to infinity. Specifically, under fairly weak assumptions, if the ensemble size grows sub-exponentially in the cube root of the system dimension, the convergence holds for a single update step in state-space models with independent and identically distributed kernels. Further, in an important special case, more refined arguments show (and our simulations suggest) that the convergence to unity occurs unless the ensemble grows super-exponentially in the system dimension. The weight singularity is also established in models with more general multivariate likelihoods, e.g. Gaussian and Cauchy. Although presented in the context of atmospheric data assimilation for numerical weather prediction, our results are generally valid for high-dimensional particle filters.
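
A minimal simulation sketch of the weight-collapse phenomenon, assuming a single bootstrap-filter update with i.i.d. Gaussian state components and Gaussian observation noise; the dimensions and particle counts are illustrative.

```python
import numpy as np

def max_weight(dim, n_particles, rng):
    """One bootstrap-filter update with i.i.d. N(0,1) state components and unit-variance
    Gaussian observation noise; returns the largest normalized particle weight."""
    particles = rng.standard_normal((n_particles, dim))
    observation = rng.standard_normal(dim) * np.sqrt(2.0)    # y = x + noise
    logw = -0.5 * np.sum((observation - particles) ** 2, axis=1)
    logw -= logw.max()                                        # stabilize before exponentiating
    w = np.exp(logw)
    return (w / w.sum()).max()

rng = np.random.default_rng(0)
for dim in (5, 20, 80, 320):
    m = np.mean([max_weight(dim, n_particles=1000, rng=rng) for _ in range(20)])
    print(f"dim={dim:4d}  mean max weight ≈ {m:.3f}")   # approaches 1 as the dimension grows
```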

Book ChapterDOI
TL;DR: The theoretical basis for calculating equilibrium properties of biological molecules by the Monte Carlo method is presented and a discussion of the estimation of errors in properties calculated by Monte Carlo is given.
Abstract: A description of Monte Carlo methods for simulation of proteins is given. Advantages and disadvantages of the Monte Carlo approach are presented. The theoretical basis for calculating equilibrium properties of biological molecules by the Monte Carlo method is presented. Some of the standard and some of the more recent ways of performing Monte Carlo on proteins are presented. A discussion of the estimation of errors in properties calculated by Monte Carlo is given.

Posted Content
TL;DR: In this paper, a multivariate realised kernel is proposed to estimate the ex-post covariation of log-prices, which is guaranteed to be positive semi-definite and robust to measurement noise of certain types.
Abstract: We propose a multivariate realised kernel to estimate the ex-post covariation of log-prices. We show this new consistent estimator is guaranteed to be positive semi-definite and is robust to measurement noise of certain types and can also handle non-synchronous trading. It is the first estimator which has these three properties which are all essential for empirical work in this area. We derive the large sample asymptotics of this estimator and assess its accuracy using a Monte Carlo study. We implement the estimator on some US equity data, comparing our results to previous work which has used returns measured over 5 or 10 minute intervals. We show the new estimator is substantially more precise.

Journal ArticleDOI
TL;DR: In a standard simulation of time-resolved photon migration in a semi-infinite geometry, the proposed methodology executed on a low-cost graphics processing unit (GPU) is a factor of 1000 faster than simulation performed on a single standard processor.
Abstract: General-purpose computing on graphics processing units (GPGPU) is shown to dramatically increase the speed of Monte Carlo simulations of photon migration. In a standard simulation of time-resolved photon migration in a semi-infinite geometry, the proposed methodology executed on a low-cost graphics processing unit (GPU) is a factor of 1000 faster than simulation performed on a single standard processor. In addition, we address important technical aspects of GPU-based simulations of photon migration. The technique is expected to become a standard method in Monte Carlo simulations of photon migration.
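
For orientation, a minimal CPU Monte Carlo sketch of photon migration in a semi-infinite medium, assuming isotropic scattering at a reduced scattering coefficient and a simple weight cut-off instead of Russian roulette; the optical properties are illustrative and this is not the paper's GPU implementation.

```python
import numpy as np

def photon_escape_times(n_photons=5000, mu_a=0.01, mu_s=1.0, n_medium=1.4, rng=None):
    """Minimal Monte Carlo of photon migration in a semi-infinite medium (z > 0),
    with isotropic scattering and absorption handled as partial weight loss.
    Returns escape times (ps) and residual weights of photons re-crossing z = 0."""
    rng = np.random.default_rng(rng)
    c = 0.299792 / n_medium            # speed of light in the medium, mm/ps
    mu_t = mu_a + mu_s                 # total interaction coefficient, 1/mm
    albedo = mu_s / mu_t
    times, weights = [], []
    for _ in range(n_photons):
        pos = np.zeros(3)
        direction = np.array([0.0, 0.0, 1.0])     # launched straight into the medium
        w, path = 1.0, 0.0
        while w > 1e-3:                            # crude cut-off instead of Russian roulette
            step = -np.log(rng.random()) / mu_t
            pos = pos + step * direction
            path += step
            if pos[2] < 0.0:                       # photon escaped through the surface
                times.append(path / c)
                weights.append(w)
                break
            w *= albedo                            # absorption as partial weight loss
            cos_t = 2.0 * rng.random() - 1.0       # isotropic scattering direction
            phi = 2.0 * np.pi * rng.random()
            sin_t = np.sqrt(1.0 - cos_t ** 2)
            direction = np.array([sin_t * np.cos(phi), sin_t * np.sin(phi), cos_t])
    return np.array(times), np.array(weights)

t, w = photon_escape_times(rng=1)
print(f"{len(t)} of 5000 photons escaped; weighted mean escape time ≈ {np.average(t, weights=w):.1f} ps")
```

The GPU speed-up discussed in the paper comes from running many such independent photon histories in parallel, which the loop above makes explicit.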

Journal ArticleDOI
TL;DR: An empirical approach to account for lag-1 autocorrelation in detecting mean shifts in time series of white or red (first-order autoregressive) Gaussian noise using the penalized maximal t test or the penalized maximal F test is embedded in a stepwise testing algorithm.
Abstract: This study proposes an empirical approach to account for lag-1 autocorrelation in detecting mean shifts in time series of white or red (first-order autoregressive) Gaussian noise using the penalized maximal t test or the penalized maximal F test. This empirical approach is embedded in a stepwise testing algorithm, so that the new algorithms can be used to detect single or multiple changepoints in a time series. The detection power of the new algorithms is analyzed through Monte Carlo simulations. It has been shown that the new algorithms work very well and fast in detecting single or multiple changepoints. Examples of their application to real climate data series (surface pressure and wind speed) are presented. An open-source software package (in R and FORTRAN) for implementing the algorithms, along with a user manual, has been developed and made available online free of charge.

Journal ArticleDOI
TL;DR: Monte Carlo experiments for the mixed logit model indicate the superior performance of the proposed Gaussian quadrature extension over simulation techniques.

Journal ArticleDOI
TL;DR: Some Bayesian hierarchical Gaussian process models are proposed that tend to produce predictions closer to those from the high-accuracy experiment.
Abstract: Standard practice when analyzing data from different types of experiments is to treat data from each type separately. By borrowing strength across multiple sources, an integrated analysis can produce better results. Careful adjustments must be made to incorporate the systematic differences among various experiments. Toward this end, some Bayesian hierarchical Gaussian process models are proposed. The heterogeneity among different sources is accounted for by performing flexible location and scale adjustments. The approach tends to produce prediction closer to that from the high-accuracy experiment. The Bayesian computations are aided by the use of Markov chain Monte Carlo and sample average approximation algorithms. The proposed method is illustrated with two examples, one with detailed and approximate finite elements simulations for mechanical material design and the other with physical and computer experiments for modeling a food processor.

Journal ArticleDOI
TL;DR: A heuristic optimization that relies on a Monte Carlo search, a conjugate-gradients minimization, and simulated annealing molecular dynamics is applied to a series of subdivisions of the structure into progressively smaller rigid bodies.

Journal ArticleDOI
TL;DR: In this article, conditional logistic regression and generalized linear mixed-effect analysis are used to estimate model parameters, and Monte Carlo simulations demonstrate that the latter is superior when effect size varies over subjects.

Journal ArticleDOI
TL;DR: Distance correlation as mentioned in this paper is a new measure of dependence between random vectors, which is analogous to product-moment covariance and correlation, but unlike the classical definition of correlation, distance correlation is zero only if the random vectors are independent.
Abstract: Distance correlation is a new measure of dependence between random vectors. Distance covariance and distance correlation are analogous to product-moment covariance and correlation, but unlike the classical definition of correlation, distance correlation is zero only if the random vectors are independent. The empirical distance dependence measures are based on certain Euclidean distances between sample elements rather than sample moments, yet have a compact representation analogous to the classical covariance and correlation. Asymptotic properties and applications in testing independence are discussed. Implementation of the test and Monte Carlo results are also presented.
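
A minimal sketch of the empirical distance correlation computed directly from its definition (double-centred pairwise distance matrices), shown on a dependent but uncorrelated pair; the sample size is illustrative.

```python
import numpy as np

def _centered_distances(z):
    z = np.asarray(z, dtype=float)
    if z.ndim == 1:
        z = z[:, None]
    d = np.sqrt(((z[:, None, :] - z[None, :, :]) ** 2).sum(-1))   # pairwise Euclidean distances
    return d - d.mean(0) - d.mean(1)[:, None] + d.mean()          # double centering

def distance_correlation(x, y):
    """Empirical distance correlation: zero only when x and y are independent
    (for distributions with finite first moments), unlike Pearson correlation."""
    A, B = _centered_distances(x), _centered_distances(y)
    dcov2 = (A * B).mean()
    dvar_x, dvar_y = (A * A).mean(), (B * B).mean()
    return np.sqrt(dcov2 / np.sqrt(dvar_x * dvar_y)) if dvar_x * dvar_y > 0 else 0.0

rng = np.random.default_rng(0)
x = rng.normal(size=500)
print(distance_correlation(x, x ** 2),        # clearly positive: x and x^2 are dependent
      np.corrcoef(x, x ** 2)[0, 1])           # Pearson correlation is near zero
```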

Journal ArticleDOI
TL;DR: In this paper, the eigenvalue-eigenfunction decomposition of an integral operator associated with specific joint probability densities is used to identify a large class of nonclassical nonlinear errors-in-variables models with continuously distributed variables.
Abstract: While the literature on nonclassical measurement error traditionally relies on the availability of an auxiliary data set containing correctly measured observations, we establish that the availability of instruments enables the identification of a large class of nonclassical nonlinear errors-in-variables models with continuously distributed variables. Our main identifying assumption is that, conditional on the value of the true regressors, some “measure of location” of the distribution of the measurement error (e.g., its mean, mode, or median) is equal to zero. The proposed approach relies on the eigenvalue–eigenfunction decomposition of an integral operator associated with specific joint probability densities. The main identifying assumption is used to “index” the eigenfunctions so that the decomposition is unique. We propose a convenient sieve-based estimator, derive its asymptotic properties, and investigate its finite-sample behavior through Monte Carlo simulations.

Journal ArticleDOI
TL;DR: In this article, a Markov chain Monte Carlo (MCMC) method was used for the direct generation of synthetic time series of wind power output, which leads to a reduced number of states and a lower order of the Markov chain at equal power data resolution.
Abstract: This paper contributes a Markov chain Monte Carlo (MCMC) method for the direct generation of synthetic time series of wind power output. It is shown that obtaining a stochastic model directly in the wind power domain leads to a reduced number of states and a lower order of the Markov chain at equal power data resolution. The estimation quality of the stochastic model is positively influenced since, in the power domain, a lower number of independent parameters is estimated from a given amount of recorded data. The simulation results prove that this method offers an excellent fit for both the probability density function and the autocorrelation function of the generated wind power time series. The method is a first step toward simple stochastic black-box models for wind generation.
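
A minimal sketch of the direct power-domain idea: bin a recorded power series into states, estimate a transition matrix, and sample a synthetic series from it. The toy "recorded" series, the number of states, and the first-order chain are illustrative simplifications of the paper's higher-order model.

```python
import numpy as np

def fit_transition_matrix(series, n_states):
    """Fit a first-order Markov chain directly in the power domain by binning the
    recorded power values and counting state-to-state transitions."""
    edges = np.linspace(series.min(), series.max(), n_states + 1)
    states = np.clip(np.digitize(series, edges) - 1, 0, n_states - 1)
    P = np.zeros((n_states, n_states))
    for a, b in zip(states[:-1], states[1:]):
        P[a, b] += 1.0
    row_sums = P.sum(axis=1, keepdims=True)
    P = np.where(row_sums > 0, P / np.maximum(row_sums, 1.0), 1.0 / n_states)
    centers = 0.5 * (edges[:-1] + edges[1:])
    return P, centers

def simulate_chain(P, centers, length, rng=None):
    """Sample a synthetic power series from the fitted chain (bin centres as output)."""
    rng = np.random.default_rng(rng)
    s, out = rng.integers(len(centers)), np.empty(length)
    for t in range(length):
        s = rng.choice(len(centers), p=P[s])
        out[t] = centers[s]
    return out

# Toy autocorrelated 'recorded' series (per-unit power) standing in for measured data.
rng = np.random.default_rng(0)
recorded = np.empty(5000)
recorded[0] = 0.3
for t in range(1, 5000):
    recorded[t] = np.clip(0.95 * recorded[t - 1] + 0.02 + rng.normal(0.0, 0.05), 0.0, 1.0)

P, centers = fit_transition_matrix(recorded, n_states=20)
synthetic = simulate_chain(P, centers, length=5000, rng=1)
lag1 = lambda x: np.corrcoef(x[:-1], x[1:])[0, 1]
print(f"mean {recorded.mean():.3f} vs {synthetic.mean():.3f},  lag-1 acf {lag1(recorded):.3f} vs {lag1(synthetic):.3f}")
```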

Journal ArticleDOI
TL;DR: In this paper, a Monte Carlo simulation method is used for the optimal placement and arrangement of wind turbines in a wind park, based on the mean of maximum energy production and minimum cost installation criteria.

Journal ArticleDOI
TL;DR: In this article, the problem of nonparametric modeling of center-specific outcome distributions is addressed, borrowing information across centers while also allowing centers to be clustered, and an efficient Markov chain Monte Carlo algorithm is developed for computation.
Abstract: In multicenter studies, subjects in different centers may have different outcome distributions. This article is motivated by the problem of nonparametric modeling of these distributions, borrowing information across centers while also allowing centers to be clustered. Starting with a stick-breaking representation of the Dirichlet process (DP), we replace the random atoms with random probability measures drawn from a DP. This results in a nested DP prior, which can be placed on the collection of distributions for the different centers, with centers drawn from the same DP component automatically clustered together. Theoretical properties are discussed, and an efficient Markov chain Monte Carlo algorithm is developed for computation. The methods are illustrated using a simulation study and an application to quality of care in U.S. hospitals.
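
A minimal prior-simulation sketch of the nested construction, assuming truncated stick-breaking and a normal base measure; the truncation level and hyperparameters are illustrative, and the paper's MCMC algorithm for posterior inference is not shown.

```python
import numpy as np

def stick_breaking_weights(alpha, K, rng):
    """Truncated stick-breaking weights of a Dirichlet process DP(alpha, G0)."""
    betas = rng.beta(1.0, alpha, size=K)
    remaining = np.concatenate(([1.0], np.cumprod(1.0 - betas)[:-1]))
    return betas * remaining

def nested_dp_centers(n_centers, alpha, beta, K=50, rng=None):
    """Sketch of the nested DP prior: the outer DP's atoms are themselves random
    distributions (here DP draws with normal atoms); centers that pick the same
    outer atom share a distribution and are therefore clustered together."""
    rng = np.random.default_rng(rng)
    outer_w = stick_breaking_weights(alpha, K, rng)
    inner = [(stick_breaking_weights(beta, K, rng), rng.normal(0.0, 3.0, size=K))
             for _ in range(K)]                        # each outer atom: an inner DP draw
    assignment = rng.choice(K, size=n_centers, p=outer_w / outer_w.sum())
    return assignment, inner

assignment, inner = nested_dp_centers(n_centers=8, alpha=1.0, beta=1.0, rng=0)
print("cluster labels for the 8 centers:", assignment)   # equal labels => clustered centers
```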

Journal ArticleDOI
TL;DR: This article discusses why Monte Carlo standard errors are important, how they can be easily calculated in Markov chain Monte Carlo, and how they can be used to decide when to stop the simulation.
Abstract: Current reporting of results based on Markov chain Monte Carlo computations could be improved. In particular, a measure of the accuracy of the resulting estimates is rarely reported. Thus we have little ability to objectively assess the quality of the reported estimates. We address this issue in that we discuss why Monte Carlo standard errors are important, how they can be easily calculated in Markov chain Monte Carlo and how they can be used to decide when to stop the simulation. We compare their use to a popular alternative in the context of two examples.
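
A minimal sketch of one common way to compute such a Monte Carlo standard error, the method of non-overlapping batch means, applied to a strongly autocorrelated AR(1) "chain"; the chain, its length, and the batch count are illustrative.

```python
import numpy as np

def mcse_batch_means(chain, n_batches=30):
    """Monte Carlo standard error of the sample mean via non-overlapping batch means."""
    chain = np.asarray(chain, dtype=float)
    b = len(chain) // n_batches                            # batch length
    means = chain[: n_batches * b].reshape(n_batches, b).mean(axis=1)
    return np.sqrt(means.var(ddof=1) / n_batches)

# Example: an autocorrelated AR(1) chain with stationary mean 0.
rng = np.random.default_rng(0)
rho, n = 0.95, 100_000
noise = rng.standard_normal(n)
chain = np.empty(n)
chain[0] = 0.0
for t in range(1, n):
    chain[t] = rho * chain[t - 1] + noise[t]
naive_se = chain.std(ddof=1) / np.sqrt(n)                  # ignores autocorrelation: too optimistic
print(f"mean {chain.mean():.3f},  naive SE {naive_se:.4f},  batch-means MCSE {mcse_batch_means(chain):.4f}")
```

Reporting the estimate together with such an MCSE, and stopping only once the MCSE is acceptably small, is the practice the article advocates.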

Journal ArticleDOI
TL;DR: It is shown how spatially realistic Monte Carlo simulations of biological systems can be far more cost-effective than often is assumed, and provide a level of accuracy and insight beyond that of continuum methods.
Abstract: Many important physiological processes operate at time and space scales far beyond those accessible to atom-realistic simulations, and yet discrete stochastic rather than continuum methods may best represent finite numbers of molecules interacting in complex cellular spaces. We describe and validate new tools and algorithms developed for a new version of the MCell simulation program (MCell3), which supports generalized Monte Carlo modeling of diffusion and chemical reaction in solution, on surfaces representing membranes, and combinations thereof. A new syntax for describing the spatial directionality of surface reactions is introduced, along with optimizations and algorithms that can substantially reduce computational costs (e.g., event scheduling, variable time and space steps). Examples for simple reactions in simple spaces are validated by comparison to analytic solutions. Thus we show how spatially realistic Monte Carlo simulations of biological systems can be far more cost-effective than often is assumed, and provide a level of accuracy and insight beyond that of continuum methods.

Journal ArticleDOI
TL;DR: The capability of the SARRP to deliver highly focal beams to multiple animal model systems provides new research opportunities that more realistically bridge laboratory research and clinical translation.
Abstract: Purpose: To demonstrate the computed tomography, conformal irradiation, and treatment planning capabilities of a small animal radiation research platform (SARRP). Methods and Materials: The SARRP uses a dual-focal spot, constant voltage X-ray source mounted on a gantry with a source-to-isocenter distance of 35 cm. Gantry rotation is limited to 120° from vertical. X-rays of 80–100 kVp from the smaller 0.4-mm focal spot are used for imaging. Both 0.4-mm and 3.0-mm focal spots operate at 225 kVp for irradiation. Robotic translate/rotate stages are used to position the animal. Cone-beam computed tomography is achieved by rotating the horizontal animal between the stationary X-ray source and a flat-panel detector. The radiation beams range from 0.5 mm in diameter to 60 × 60 mm². Dosimetry is measured with radiochromic films. Monte Carlo dose calculations are used for treatment planning. The combination of gantry and robotic stage motions facilitates conformal irradiation. Results: The SARRP spans 3 ft × 4 ft × 6 ft (width × length × height). Depending on the filtration, the isocenter dose outputs at a 1-cm depth in water were 22–375 cGy/min from the smallest to the largest radiation fields. The 20–80% dose falloff spanned 0.16 mm. Cone-beam computed tomography with 0.6 × 0.6 × 0.6 mm³ voxel resolution was acquired with a dose of … Conclusion: The capability of the SARRP to deliver highly focal beams to multiple animal model systems provides new research opportunities that more realistically bridge laboratory research and clinical translation.

Journal ArticleDOI
TL;DR: Results obtained for a model of inelastic tunneling spectroscopy reveal the applicability of the approach to a wide range of physically important regimes, including high (classical) and low (quantum) temperatures, and weak (perturbative) and strong electron-phonon couplings.
Abstract: A real-time path-integral Monte Carlo approach is developed to study the dynamics in a many-body quantum system coupled to a phonon background until reaching a nonequilibrium stationary state. The approach is based on augmenting an exact reduced equation for the evolution of the system in the interaction picture which is amenable to an efficient path integral (worldline) Monte Carlo approach. Results obtained for a model of inelastic tunneling spectroscopy reveal the applicability of the approach to a wide range of physically important regimes, including high (classical) and low (quantum) temperatures, and weak (perturbative) and strong electron-phonon couplings.