
Showing papers on "Monte Carlo method published in 1995"


01 Jan 1995
TL;DR: A Monte Carlo model of steady-state light transport in multi-layered tissues (MCML) has been coded in ANSI Standard C; therefore, the program can be used on various computers and has been in the public domain since 1992.
Abstract: A Monte Carlo model of steady-state light transport in multi-layered tissues (MCML) has been coded in ANSI Standard C; therefore, the program can be used on various computers. Dynamic data allocation is used for MCML, hence the number of tissue layers and grid elements of the grid system can be varied by users at run time. The coordinates of the simulated data for each grid element in the radial and angular directions are optimized. Some of the MCML computational results have been verified with those of other theories or other investigators. The program, including the source code, has been in the public domain since 1992.
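The core sampling step that photon-transport codes of this kind share can be sketched as follows (a minimal illustration with a hypothetical attenuation coefficient, not the actual MCML source):

```python
import math
import random

def sample_step(mu_t, rng):
    """Sample a photon free-path length (cm) from the exponential
    distribution with total attenuation coefficient mu_t (1/cm)."""
    return -math.log(1.0 - rng.random()) / mu_t

def roulette(weight, rng, threshold=1e-4, chance=0.1):
    """Russian roulette: terminate low-weight photons without biasing
    the expected weight."""
    if weight >= threshold:
        return weight
    return weight / chance if rng.random() < chance else 0.0

rng = random.Random(42)
mu_t = 10.0  # hypothetical mu_a + mu_s of one tissue layer
steps = [sample_step(mu_t, rng) for _ in range(200_000)]
mean_free_path = sum(steps) / len(steps)  # should approach 1/mu_t
```

The empirical mean free path converging to 1/mu_t is a quick sanity check of the sampler, in the same spirit as the verification against other theories mentioned above.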

2,889 citations


Posted Content
TL;DR: A nonparametric method for automatically selecting the number of autocovariances to use in computing a heteroskedasticity and autocorrelation consistent covariance matrix is proposed and proved to be asymptotically equivalent to one that is optimal under a mean squared error loss function.
Abstract: We propose a nonparametric method for automatically selecting the number of autocovariances to use in computing a heteroskedasticity and autocorrelation consistent covariance matrix. For a given kernel for weighting the autocovariances, we prove that our procedure is asymptotically equivalent to one that is optimal under a mean squared error loss function. Monte Carlo simulations suggest that our procedure performs tolerably well, although it does result in size distortions.
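For a fixed (rather than automatically selected) number of autocovariances, the Bartlett-kernel HAC variance the paper builds on can be sketched for a scalar series; the paper's contribution is choosing `bandwidth` from the data, which this sketch does not attempt:

```python
import numpy as np

def hac_variance(u, bandwidth):
    """Bartlett-kernel (Newey-West style) long-run variance of a
    mean-zero series: sum over |j| <= bandwidth of
    (1 - j/(bandwidth+1)) * gamma_j."""
    u = np.asarray(u, dtype=float)
    n = len(u)
    gamma = lambda j: float((u[j:] * u[:n - j]).sum()) / n
    v = gamma(0)
    for j in range(1, bandwidth + 1):
        v += 2.0 * (1.0 - j / (bandwidth + 1)) * gamma(j)
    return v

# AR(1) errors: true long-run variance = 1 / (1 - rho)^2 = 4
rng = np.random.default_rng(0)
rho, n = 0.5, 100_000
e = rng.standard_normal(n)
u = np.empty(n)
u[0] = e[0]
for t in range(1, n):
    u[t] = rho * u[t - 1] + e[t]
lrv = hac_variance(u - u.mean(), bandwidth=50)
```

With bandwidth 0 the estimator reduces to the sample variance; the Bartlett weights guarantee a nonnegative estimate for any bandwidth.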

2,798 citations


Journal ArticleDOI
TL;DR: A Monte Carlo model of steady-state light transport in multi-layered tissues (MCML) has been coded in ANSI Standard C; therefore, the program can be used on various computers as mentioned in this paper.

2,678 citations


Journal ArticleDOI
TL;DR: In this paper, the authors introduce a picture of a boson superfluid and show how superfluidity and Bose condensation manifest themselves, showing the excellent agreement between simulations and experimental measurements on liquid and solid helium for such quantities as pair correlations, the superfluid density, the energy, and the momentum distribution.
Abstract: One of Feynman's early applications of path integrals was to superfluid $^{4}\mathrm{He}$. He showed that the thermodynamic properties of Bose systems are exactly equivalent to those of a peculiar type of interacting classical "ring polymer." Using this mapping, one can generalize Monte Carlo simulation techniques commonly used for classical systems to simulate boson systems. In this review, the author introduces this picture of a boson superfluid and shows how superfluidity and Bose condensation manifest themselves. He shows the excellent agreement between simulations and experimental measurements on liquid and solid helium for such quantities as pair correlations, the superfluid density, the energy, and the momentum distribution. Major aspects of computational techniques developed for a boson superfluid are discussed: the construction of more accurate approximate density matrices to reduce the number of points on the path integral, sampling techniques to move through the space of exchanges and paths quickly, and the construction of estimators for various properties such as the energy, the momentum distribution, the superfluid density, and the exchange frequency in a quantum crystal. Finally the path-integral Monte Carlo method is compared to other quantum Monte Carlo methods.

1,908 citations


Journal ArticleDOI
TL;DR: BEAM, a general purpose Monte Carlo code to simulate the radiation beams from radiotherapy units including high-energy electron and photon beams, 60Co beams and orthovoltage units, is described.
Abstract: This paper describes BEAM, a general purpose Monte Carlo code to simulate the radiation beams from radiotherapy units including high-energy electron and photon beams, 60Co beams and orthovoltage units. The code handles a variety of elementary geometric entities which the user puts together as needed (jaws, applicators, stacked cones, mirrors, etc.), thus allowing simulation of a wide variety of accelerators. The code is not restricted to cylindrical symmetry. It incorporates a variety of powerful variance reduction techniques such as range rejection, bremsstrahlung splitting and forcing photon interactions. The code allows direct calculation of charge in the monitor ion chamber. It has the capability of keeping track of each particle's history and using this information to score separate dose components (e.g., to determine the dose from electrons scattering off the applicator). The paper presents a variety of calculated results to demonstrate the code's capabilities. The calculated dose distributions in a water phantom irradiated by electron beams from the NRC 35 MeV research accelerator, a Varian Clinac 2100C, a Philips SL75-20, an AECL Therac 20 and a Scanditronix MM50 are all shown to be in good agreement with measurements at the 2 to 3% level. Eighteen electron spectra from four different commercial accelerators are presented and various aspects of the electron beams from a Clinac 2100C are discussed. Timing requirements and selection of parameters for the Monte Carlo calculations are discussed.

1,444 citations


Journal ArticleDOI
TL;DR: In inverse problems, obtaining a maximum likelihood model is usually not sufficient, as the theory linking data with model parameters is nonlinear and the a posteriori probability in the model space may not be easy to describe.
Abstract: Probabilistic formulation of inverse problems leads to the definition of a probability distribution in the model space. This probability distribution combines a priori information with new information obtained by measuring some observable parameters (data). As, in the general case, the theory linking data with model parameters is nonlinear, the a posteriori probability in the model space may not be easy to describe (it may be multimodal, some moments may not be defined, etc.). When analyzing an inverse problem, obtaining a maximum likelihood model is usually not sufficient, as we normally also wish to have infor...

1,124 citations


Journal ArticleDOI
TL;DR: This paper presents a framework for Bayesian model choice, along with an MCMC algorithm that does not suffer from convergence difficulties, and applies equally well to problems where only one model is contemplated but its proper size is not known at the outset.
Abstract: SUMMARY Markov chain Monte Carlo (MCMC) integration methods enable the fitting of models of virtually unlimited complexity, and as such have revolutionized the practice of Bayesian data analysis. However, comparison across models may not proceed in a completely analogous fashion, owing to violations of the conditions sufficient to ensure convergence of the Markov chain. In this paper we present a framework for Bayesian model choice, along with an MCMC algorithm that does not suffer from convergence difficulties. Our algorithm applies equally well to problems where only one model is contemplated but its proper size is not known at the outset, such as problems involving integer-valued parameters, multiple changepoints or finite mixture distributions. We illustrate our approach with two published examples.

985 citations


Journal ArticleDOI
TL;DR: This paper demonstrates practical approaches for determining relative parameter sensitivity with respect to a model's optimal objective function value, decision variables, and other analytic functions of a solution.
Abstract: In applications of operations research models, decision makers must assess the sensitivity of outputs to imprecise values for some of the model's parameters. Existing analytic approaches for classic optimization models rely heavily on duality properties for assessing the impact of local parameter variations, parametric programming for examining systematic variations in model coefficients, or stochastic programming for ascertaining a robust solution. This paper accommodates extensive simultaneous variations in any of an operations research model's parameters. For constrained optimization models, the paper demonstrates practical approaches for determining relative parameter sensitivity with respect to a model's optimal objective function value, decision variables, and other analytic functions of a solution. Relative sensitivity is assessed by assigning a portion of variation in an output value to each parameter that is imprecisely specified. The computing steps encompass optimization, Monte Carlo sampling, ...
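A toy version of such variance attribution can be written with squared-correlation shares; the paper's machinery (optimization plus structured sampling) is considerably richer, and the model and parameter names below are purely illustrative:

```python
import random

def sensitivity_shares(model, samplers, n=4000, seed=0):
    """Monte Carlo sensitivity sketch: jointly sample the uncertain
    parameters, run the model, and attribute output variation to each
    parameter via its normalized squared correlation with the output."""
    rng = random.Random(seed)
    draws = [[s(rng) for s in samplers] for _ in range(n)]
    ys = [model(*d) for d in draws]

    def corr(xs):
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        vx = sum((x - mx) ** 2 for x in xs)
        vy = sum((y - my) ** 2 for y in ys)
        return cov / (vx * vy) ** 0.5

    r2 = [corr([d[j] for d in draws]) ** 2 for j in range(len(samplers))]
    total = sum(r2)
    return [r / total for r in r2]

# toy model whose output is dominated by parameter a
shares = sensitivity_shares(lambda a, b: 3.0 * a + b,
                            [lambda r: r.random(), lambda r: r.random()])
```

For this linear toy model the shares split roughly 90/10 between the two parameters, matching the 9:1 ratio of their variance contributions.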

958 citations


Journal ArticleDOI
TL;DR: A Monte Carlo method is implemented for four ‘chi‐squared’ test statistics, three of which involved combination of alleles, and evaluated their performance on a real data set.
Abstract: Summary In an association analysis comparing cases and controls with respect to allele frequencies at a highly polymorphic locus, a potential problem is that the conventional chi-squared test may not be valid for a large, sparse contingency table. However, reliance on statistics with known asymptotic distribution is now unnecessary, as Monte Carlo simulations can be performed to estimate the significance level of any test statistic. We have implemented a Monte Carlo method for four ‘chi-squared’ test statistics, three of which involved combination of alleles, and evaluated their performance on a real data set. Combining rare alleles to avoid small expected cell counts, and considering each allele in turn against the rest, reduced the power to detect a genuine association when the number of alleles was very large. We should either not combine alleles at all, or combine them in such a way that preserves the evidence for an association.
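The Monte Carlo significance idea can be sketched with a case/control label-permutation test (an illustrative stand-in; the paper evaluates four specific statistics on real data):

```python
import random

def chi2_stat(cases, controls):
    """Pearson chi-squared statistic for a 2 x k allele-count table."""
    n_case, n_ctrl = sum(cases), sum(controls)
    n = n_case + n_ctrl
    stat = 0.0
    for j in range(len(cases)):
        col = cases[j] + controls[j]
        for obs, row_total in ((cases[j], n_case), (controls[j], n_ctrl)):
            expected = row_total * col / n
            if expected > 0:
                stat += (obs - expected) ** 2 / expected
    return stat

def mc_pvalue(cases, controls, n_sim=2000, seed=0):
    """Estimate the significance level by permuting case/control labels;
    the +1 terms give a valid (slightly conservative) p-value."""
    rng = random.Random(seed)
    pool = [j for j, c in enumerate(cases) for _ in range(c)]
    pool += [j for j, c in enumerate(controls) for _ in range(c)]
    n_case = sum(cases)
    observed = chi2_stat(cases, controls)
    hits = 0
    for _ in range(n_sim):
        rng.shuffle(pool)
        perm_cases = [0] * len(cases)
        for allele in pool[:n_case]:
            perm_cases[allele] += 1
        perm_controls = [cases[j] + controls[j] - perm_cases[j]
                         for j in range(len(cases))]
        if chi2_stat(perm_cases, perm_controls) >= observed:
            hits += 1
    return (hits + 1) / (n_sim + 1)
```

Because the null distribution is simulated rather than taken from the asymptotic chi-squared, sparse tables with rare alleles need no cell-count corrections.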

954 citations


Journal ArticleDOI
TL;DR: In this article, a Monte Carlo collision (MCC) package including the null collision method has been developed as an addition to the usual PIC charged-particle scheme, which is discussed here.

881 citations


Journal ArticleDOI
TL;DR: This work proposes MCMC methods distantly related to simulated annealing, which simulate realizations from a sequence of distributions, allowing the distribution being simulated to vary randomly over time.
Abstract: Markov chain Monte Carlo (MCMC; the Metropolis-Hastings algorithm) has been used for many statistical problems, including Bayesian inference, likelihood inference, and tests of significance. Though the method generally works well, doubts about convergence often remain. Here we propose MCMC methods distantly related to simulated annealing. Our samplers mix rapidly enough to be usable for problems in which other methods would require eons of computing time. They simulate realizations from a sequence of distributions, allowing the distribution being simulated to vary randomly over time. If the sequence of distributions is well chosen, then the sampler will mix well and produce accurate answers for all the distributions. Even when there is only one distribution of interest, these annealing-like samplers may be the only known way to get a rapidly mixing sampler. These methods are essential for attacking very hard problems, which arise in areas such as statistical genetics. We illustrate the methods wi...

Book
01 Jan 1995
TL;DR: In this article, Monte Carlo Methods for the Self-Avoiding Walk and Monte Carlo Simulation of Neutral and Charged Polymer Solutions: Effects of Long-range Interactions are presented.
Abstract: 1. Introduction. General Aspects of Computer Simulation Techniques and Their Applications in Polymer Physics 2. Monte Carlo Methods for the Self-Avoiding Walk 3. Structure and Dynamics of Neutral and Charged Polymer Solutions: Effects of Long-Range Interactions 4. Entanglement Effects in Polymer Melts 5. Molecular Dynamics of Glassy Polymers 6. Monte Carlo Simulations of the Glass Transition of Polymers 7. Monte Carlo Studies of Polymer Blends and Block Copolymer Thermodynamics 9. Computer Simulations of Tethered Chains

Journal ArticleDOI
TL;DR: In this article, an approach is presented to solve the reverse problem of statistical mechanics: reconstruction of interaction potentials from radial distribution functions, consisting of the iterative adjustment of the interaction potential to known radial distribution function using a Monte Carlo simulation technique and statistical-mechanics relations to connect deviations of canonical averages with Hamiltonian parameters.
Abstract: An approach is presented to solve the reverse problem of statistical mechanics: reconstruction of interaction potentials from radial distribution functions. The method consists of the iterative adjustment of the interaction potential to known radial distribution functions using a Monte Carlo simulation technique and statistical-mechanics relations to connect deviations of canonical averages with Hamiltonian parameters. The method is applied to calculate the effective interaction potentials between the ions in aqueous NaCl solutions at two different concentrations. The reference ion-ion radial distribution functions, calculated in separate molecular dynamics simulations with water molecules, are reproduced in Monte Carlo simulations, using the effective interaction potentials for the hydrated ions. Application of the present method should provide an effective and economical way to simulate equilibrium properties for very large molecular systems (e.g., polyelectrolytes) in the presence of hydrated ions, as well as to offer an approach to reduce a complexity in studies of various associated and aggregated systems in solution.
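The iterative adjustment described is in the same spirit as the Boltzmann-inversion update below (a toy one-dimensional fixed-point demo with made-up RDF values, not the authors' NaCl calculation):

```python
import math

def refine_potential(V, g_model, g_target, kT=1.0, damping=1.0):
    """One refinement step: V_new(r) = V(r) + damping*kT*ln(g_model/g_target).
    When the model reproduces the target RDF, the potential stops moving."""
    return [v + damping * kT * math.log(gm / gt)
            for v, gm, gt in zip(V, g_model, g_target)]

def toy_rdf(V, kT=1.0):
    """Stand-in 'simulation': dilute-limit RDF g(r) = exp(-V(r)/kT).
    A real implementation would run a Monte Carlo simulation here."""
    return [math.exp(-v / kT) for v in V]

g_target = [0.2, 0.8, 1.3, 1.0]   # hypothetical target RDF values
V = [0.0] * 4
for _ in range(10):               # alternate simulation and correction
    V = refine_potential(V, toy_rdf(V), g_target)
V_exact = [-math.log(g) for g in g_target]
```

In the dilute-limit toy the iteration converges to the exact potential of mean force; with a real simulation in the loop, the damping factor controls stability.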

Journal ArticleDOI
TL;DR: The recently formulated WHAM method is an extension of Ferrenberg and Swendsen's multiple histogram technique for free‐energy and potential of mean force calculations and provides an analysis of the statistical accuracy of the potential ofmean force as well as a guide to the most efficient use of additional simulations to minimize errors.
Abstract: The recently formulated weighted histogram analysis method (WHAM)1 is an extension of Ferrenberg and Swendsen's multiple histogram technique for free-energy and potential of mean force calculations. As an illustration of the method, we have calculated the two-dimensional potential of mean force surface of the dihedrals gamma and chi in deoxyadenosine with Monte Carlo simulations using the all-atom and united-atom representation of the AMBER force fields. This also demonstrates one of the major advantages of WHAM over umbrella sampling techniques. The method also provides an analysis of the statistical accuracy of the potential of mean force as well as a guide to the most efficient use of additional simulations to minimize errors. © 1995 John Wiley & Sons, Inc.
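A bare-bones version of the self-consistent WHAM equations for histograms on a shared set of bins looks like this (illustrative only; real implementations add convergence checks and the error analysis the paper emphasizes):

```python
import math

def wham(histograms, biases, kT=1.0, n_iter=200):
    """Self-consistent WHAM iteration. histograms[k][i]: counts of
    window k in bin i; biases[k][i]: bias energy w_k at bin i."""
    K, M = len(histograms), len(histograms[0])
    N = [sum(h) for h in histograms]
    f = [0.0] * K          # per-window free-energy shifts
    p = [0.0] * M
    for _ in range(n_iter):
        # unbiased probability estimate for each bin
        p = []
        for i in range(M):
            num = sum(histograms[k][i] for k in range(K))
            den = sum(N[k] * math.exp((f[k] - biases[k][i]) / kT)
                      for k in range(K))
            p.append(num / den)
        z = sum(p)
        p = [x / z for x in p]
        # refresh the window free energies from the new estimate
        f = [-kT * math.log(sum(p[i] * math.exp(-biases[k][i] / kT)
                                for i in range(M)))
             for k in range(K)]
    return p, f

# two windows with identical data; the second carries a constant bias,
# which WHAM should absorb entirely into its free-energy shift
hist = [10, 30, 60]
p, f = wham([hist, hist], [[0.0] * 3, [2.0] * 3])
```

A constant bias changes nothing about the recovered distribution, only the window's free-energy offset, which is a convenient correctness check.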

Journal ArticleDOI
TL;DR: In this paper, a mixed algorithm for Monte Carlo simulation of relativistic electron and positron transport in matter is described, where cross sections for the different interaction mechanisms are approximated by expressions that permit the generation of random tracks by using purely analytical methods.
Abstract: A mixed algorithm for Monte Carlo simulation of relativistic electron and positron transport in matter is described. Cross sections for the different interaction mechanisms are approximated by expressions that permit the generation of random tracks by using purely analytical methods. Hard elastic collisions, with scattering angle greater than a preselected cutoff value, and hard inelastic collisions and radiative events, with energy loss larger than given cutoff values, are simulated in detail. Soft interactions, with scattering angle or energy loss less than the corresponding cutoffs, are simulated by means of multiple scattering approaches. This algorithm handles lateral displacements correctly and completely avoids difficulties related with interface crossing. The simulation is shown to be stable under variations of the adopted cutoffs; these can be made quite large, thus speeding up the simulation considerably, without altering the results. The reliability of the algorithm is demonstrated through a comparison of simulation results with experimental data. Good agreement is found for electrons and positrons with kinetic energies down to a few keV.

Book
07 Aug 1995
TL;DR: In this paper, the authors present a mathematical model of a GA multimodal fitness function, genetic drift, GA with sharing, and repeat (parallel) GA uncertainty estimates evolutionary programming -a variant of GA.
Abstract: Part 1 Preliminary statistics: random variables random numbers probability probability distribution, distribution function and density function joint and marginal probability distributions mathematical expectation, moments, variances and covariances conditional probability Monte Carlo integration importance sampling stochastic processes Markov chains homogeneous, inhomogeneous, irreducible and aperiodic Markov chains the limiting probability. Part 2 Direct, linear and iterative-linear inverse methods: direct inversion methods model based inversion methods linear/linearized inverse methods iterative linear methods for quasi-linear problems Bayesian formulation solution using probabilistic formulation. Part 3 Monte Carlo methods: enumerative or grid search techniques Monte Carlo inversion hybrid Monte Carlo-linear inversion directed Monte Carlo methods. Part 4 Simulated annealing methods: metropolis algorithm heat bath algorithm simulated annealing without rejected moves fast simulated annealing very fast simulated reannealing mean field annealing using SA in geophysical inversion. Part 5 Genetic algorithms: a classical GA schemata and the fundamental theorem of genetic algorithms problems combining elements of SA into a new GA a mathematical model of a GA multimodal fitness functions, genetic drift, GA with sharing, and repeat (parallel) GA uncertainty estimates evolutionary programming - a variant of GA. Part 6 Geophysical applications of SA and GA: 1-D seismic waveform inversion pre-stack migration velocity estimation inversion of resistivity sounding data for 1-D earth models inversion of resistivity profiling data for 2-D earth models inversion of magnetotelluric sounding data for 1-D earth models stochastic reservoir modelling seismic deconvolution by mean field annealing and Hopfield network.
Part 7 Uncertainty estimation: methods of numerical integration simulated annealing - the Gibbs' sampler genetic algorithm - the parallel Gibbs' sampler numerical examples.

Journal ArticleDOI
TL;DR: Kernel density estimators have been used to estimate home range size but little is known of their statistical properties, so four hypothetical models of home range suggested by Boulanger and White (1990) were used to evaluate bias and precision of these estimators.
Abstract: Kernel density estimators have been used to estimate home range size but little is known of their statistical properties. I applied kernel-based estimators of home range size, calculated from 95% probability contours of nonparametric density estimators, to computer-simulated radiolocation data. Four hypothetical models of home range suggested by Boulanger and White (1990) were used to evaluate bias and precision of these estimators in estimating known home range sizes. Kernel methods compared well with the best methods that are available for home range size estimation provided the appropriate level of smoothing was selected. I used brush rabbit (Sylvilagus bachmani) telemetry data to illustrate how Monte Carlo methods may also be used to assess estimator performance from field radiolocation data. A kernel estimator is preferred to a harmonic mean estimator in this example because it is less biased (i.e., the harmonic mean method has an inherent problem).

Journal ArticleDOI
TL;DR: It is demonstrated that a variety of boundary conditions stipulated on the Radiative Transfer Equation can be implemented in a FEM approach, as well as the specification of a light source by a Neumann condition rather than an isotropic point source.
Abstract: This paper extends our work on applying the Finite Element Method (FEM) to the propagation of light in tissue. We address herein the topics of boundary conditions and source specification for this method. We demonstrate that a variety of boundary conditions stipulated on the Radiative Transfer Equation can be implemented in a FEM approach, as well as the specification of a light source by a Neumann condition rather than an isotropic point source. We compare results for a number of different combinations of boundary and source conditions under FEM, as well as the corresponding cases in a Monte Carlo model.

Proceedings ArticleDOI
15 Sep 1995
TL;DR: This work presents a powerful alternative for constructing robust Monte Carlo estimators, by combining samples from several distributions in a way that is provably good, and can reduce variance significantly at little additional cost.
Abstract: Monte Carlo integration is a powerful technique for the evaluation of difficult integrals. Applications in rendering include distribution ray tracing, Monte Carlo path tracing, and form-factor computation for radiosity methods. In these cases variance can often be significantly reduced by drawing samples from several distributions, each designed to sample well some difficult aspect of the integrand. Normally this is done by explicitly partitioning the integration domain into regions that are sampled differently. We present a powerful alternative for constructing robust Monte Carlo estimators, by combining samples from several distributions in a way that is provably good. These estimators are unbiased, and can reduce variance significantly at little additional cost. We present experiments and measurements from several areas in rendering: calculation of glossy highlights from area light sources, the “final gather” pass of some radiosity algorithms, and direct solution of the rendering equation using bidirectional path tracing. CR Categories: I.3.7 [Computer Graphics]: Three-Dimensional Graphics and Realism; I.3.3 [Computer Graphics]: Picture/Image Generation; G.1.9 [Numerical Analysis]: Integral Equations—Fredholm equations.
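The combination strategy described (the balance heuristic) can be sketched on a one-dimensional integral; the integrand and the two sampling densities below are made-up stand-ins for, say, BRDF and light-source sampling:

```python
import math
import random

def mis_estimate(f, pdfs, samplers, n_per, rng):
    """Multiple importance sampling with the balance heuristic
    w_i(x) = n_i p_i(x) / sum_j n_j p_j(x); each sample's weighted
    contribution simplifies to f(x) / sum_j n_j p_j(x)."""
    total = 0.0
    for sample in samplers:
        for _ in range(n_per):
            x = sample(rng)
            total += f(x) / sum(n_per * p(x) for p in pdfs)
    return total

rng = random.Random(7)
f = lambda x: x * x                        # true integral on [0,1]: 1/3
pdfs = [lambda x: 1.0, lambda x: 2.0 * x]  # uniform and linear densities
samplers = [lambda r: r.random(),              # uniform sampler
            lambda r: math.sqrt(r.random())]   # inverse CDF of p(x)=2x
estimate = mis_estimate(f, pdfs, samplers, 20_000, rng)
```

The estimator stays unbiased for any choice of densities, because the balance-heuristic weights sum to one at every point.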


Book
01 Feb 1995
TL;DR: In this paper, the mathematical foundations of Bayesian image analysis and its algorithms are discussed, and the necessary background from imaging is sketched and illustrated by a number of concrete applications like restoration, texture segmentation and motion analysis.
Abstract: The book is mainly concerned with the mathematical foundations of Bayesian image analysis and its algorithms. This amounts to the study of Markov random fields and dynamic Monte Carlo algorithms like sampling, simulated annealing and stochastic gradient algorithms. The approach is introductory and elementary: given basic concepts from linear algebra and real analysis it is self-contained. No previous knowledge from image analysis is required. Knowledge of elementary probability theory and statistics is certainly beneficial but not absolutely necessary. The necessary background from imaging is sketched and illustrated by a number of concrete applications like restoration, texture segmentation and motion analysis.

Journal ArticleDOI
TL;DR: In this paper, Monte Carlo experimentation is used to investigate the finite sample properties of the maximum likelihood and corrected ordinary least squares (COLS) estimators of the half-normal stochastic frontier production function.
Abstract: This paper uses Monte Carlo experimentation to investigate the finite sample properties of the maximum likelihood (ML) and corrected ordinary least squares (COLS) estimators of the half-normal stochastic frontier production function. Results indicate substantial bias in both ML and COLS when the percentage contribution of inefficiency in the composed error (denoted by γ*) is small, and also that ML should be used in preference to COLS because of large mean square error advantages when γ* is greater than 50%. The performance of a number of tests of the existence of technical inefficiency is also investigated. The Wald and likelihood ratio (LR) tests are shown to have incorrect size. A one-sided LR test and a test of the significance of the third moment of the OLS residuals are suggested as alternatives, and are shown to have correct size, with the one-sided LR test having the better power of the two.

Journal ArticleDOI
TL;DR: It is described how a full Bayesian analysis can deal with unresolved issues, such as the choice between fixed- and random-effects models, the choice of population distribution in a random- effects analysis, the treatment of small studies and extreme results, and incorporation of study-specific covariates.
Abstract: Current methods for meta-analysis still leave a number of unresolved issues, such as the choice between fixed- and random-effects models, the choice of population distribution in a random-effects analysis, the treatment of small studies and extreme results, and incorporation of study-specific covariates. We describe how a full Bayesian analysis can deal with these and other issues in a natural way, illustrated by a recent published example that displays a number of problems. Such analyses are now generally available using the BUGS implementation of Markov chain Monte Carlo numerical integration techniques. Appropriate proper prior distributions are derived, and sensitivity analysis to a variety of prior assumptions carried out. Current methods are briefly summarized and compared to the full Bayes analysis.

Journal ArticleDOI
TL;DR: Based on QCD-inspired models for multiple jets production, the authors developed a Monte Carlo program to study jet and associated particle production in high energy $pp$, $pA$ and $AA$ collisions.
Abstract: Based on QCD-inspired models for multiple jets production, we developed a Monte Carlo program to study jet and the associated particle production in high energy $pp$, $pA$ and $AA$ collisions. The physics behind the program which includes multiple minijet production, soft excitation, nuclear shadowing of parton distribution functions and jet interaction in dense matter is briefly discussed. A detailed description of the program and instructions on how to use it are given.

Journal ArticleDOI
TL;DR: In this paper, the authors compared the performance of quasi-random and random Monte Carlo methods for multidimensional integrals with respect to variance, variation, smoothness, and dimension.

BookDOI
01 Jan 1995
TL;DR: The book is mainly concerned with the mathematical foundations of Bayesian image analysis and its algorithms, which amounts to the study of Markov random fields and dynamic Monte Carlo algorithms like sampling, simulated annealing and stochastic gradient algorithms.

Book
13 Apr 1995
TL;DR: In this paper, the authors introduce Monte Carlo Methods and their application in the field of X-ray production and micro-analysis, including charge collection microscopy and Cathodoluminescence.
Abstract: Preface 1. An Introduction to Monte Carlo Methods 2. Constructing a Simulation 3. The Single Scattering Model 4. The Plural Scattering Model 5. Practical Applications of Monte Carlo Models 6. Backscattered Electrons 7. Charge Collection Microscopy and Cathodoluminescence 8. Secondary Electrons and Imaging 9. X-ray Production and Micro-Analysis 10. What Next in Monte Carlo Simulations?

Book ChapterDOI
Art B. Owen1
01 Jan 1995
TL;DR: In this article, a hybrid of Monte Carlo and quasi-Monte Carlo methods is presented, in which certain low discrepancy point sets and sequences due to Faure, Niederreiter and Sobol' are obtained and their digits are randomly permuted.
Abstract: This article presents a hybrid of Monte Carlo and Quasi-Monte Carlo methods. In this hybrid, certain low discrepancy point sets and sequences due to Faure, Niederreiter and Sobol’ are obtained and their digits are randomly permuted. Since this randomization preserves the equidistribution properties of the points it also preserves the proven bounds on their quadrature errors. The accuracy of an estimated integral can be assessed by replication, consisting of independent re-randomizations.
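Owen's digit scrambling is more elaborate than what fits here; the simpler Cranley-Patterson random shift below illustrates the same replicate-and-estimate-error idea on a one-dimensional low-discrepancy sequence:

```python
import random

def van_der_corput(n, base=2):
    """First n points of the base-b van der Corput sequence."""
    pts = []
    for k in range(n):
        x, denom = 0.0, 1.0
        while k:
            denom *= base
            k, rem = divmod(k, base)
            x += rem / denom
        pts.append(x)
    return pts

def randomized_qmc(f, n, n_rep, seed=0):
    """Cranley-Patterson rotation: shift the whole low-discrepancy point
    set by a random offset mod 1; independent re-randomizations give an
    unbiased estimate plus a standard error."""
    rng = random.Random(seed)
    pts = van_der_corput(n)
    reps = []
    for _ in range(n_rep):
        shift = rng.random()
        reps.append(sum(f((x + shift) % 1.0) for x in pts) / n)
    mean = sum(reps) / n_rep
    se = (sum((r - mean) ** 2 for r in reps) / (n_rep * (n_rep - 1))) ** 0.5
    return mean, se

mean, se = randomized_qmc(lambda x: x * x, 4096, 8)  # true value: 1/3
```

Each randomized replicate keeps the low-discrepancy error bound, so the spread across replicates is typically far smaller than a plain Monte Carlo standard error at the same cost.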

Proceedings ArticleDOI
R.W. Kelsall1
03 Apr 1995
TL;DR: If the authority ascribed to Monte Carlo models of devices at 1 μm feature size is to be maintained, modelling of the fundamental physics must be further improved, and the device model must be made more realistic.
Abstract: There can be little doubt that the Monte Carlo method for semiconductor device simulation has enormous power as a research tool. It represents a detailed physical model of the semiconductor material(s), and provides a high degree of insight into the microscopic transport processes. However, if the authority ascribed to Monte Carlo models of devices at 1 μm feature size is to be maintained for devices below 0.1 μm, modelling of the fundamental physics must be further improved. And if the Monte Carlo method is to be successful as a semiconductor device design tool, the device model must be made more realistic. Success in the industrial sector depends on this, but also on achieving fast run-times through optimisation, which is where the scope and need for ingenuity are now greatest.

Journal ArticleDOI
Moshe Buchinsky1
TL;DR: In this paper, a Monte Carlo study examines several estimation procedures of the asymptotic covariance matrix in quantile and censored quantile regression models: design matrix bootstrap, error bootstrapping, order statistic, sigma bootstrap and heteroskedastic kernel.