
Showing papers on "Monte Carlo method published in 1985"


Journal ArticleDOI
TL;DR: In this article, the authors proposed a joint probability density function (pdf) of the three components of velocity and of the composition variables (species mass fractions and enthalpy) to calculate the properties of turbulent reactive flow fields.

2,578 citations


Journal ArticleDOI
TL;DR: In this paper, the authors examined the impact of correlated error among the dependent and independent variables to explore whether artificial interaction terms can be generated, and the results are clear-cut: artifactual interaction cannot be created; true interactions can be attenuated.
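The core claim (correlated measurement error cannot manufacture an interaction, only attenuate a true one) is easy to probe with a small simulation. The sketch below is illustrative and not the authors' design: a hypothetical model y = 0.5x + 0.5z + 0.3xz plus noise, with a predictor reliability of roughly 0.7 assumed for the error-laden condition and independent (rather than correlated) measurement error.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Hypothetical true model containing a genuine interaction term.
x = rng.normal(size=n)
z = rng.normal(size=n)
y = 0.5 * x + 0.5 * z + 0.3 * x * z + rng.normal(size=n)

def interaction_coefficient(xo, zo, yo):
    """OLS estimate of the coefficient on the x*z product term."""
    X = np.column_stack([np.ones_like(xo), xo, zo, xo * zo])
    beta, *_ = np.linalg.lstsq(X, yo, rcond=None)
    return beta[3]

# Error-free predictors recover the true interaction (about 0.3).
print("without measurement error:", round(interaction_coefficient(x, z, y), 3))

# Adding measurement error (reliability ~0.7, an assumed value) attenuates
# the interaction estimate rather than inflating it.
err_sd = np.sqrt(1 / 0.7 - 1)
x_obs = x + err_sd * rng.normal(size=n)
z_obs = z + err_sd * rng.normal(size=n)
print("with measurement error   :", round(interaction_coefficient(x_obs, z_obs, y), 3))
```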

1,486 citations


Proceedings ArticleDOI
01 Dec 1985
TL;DR: Powerful and general techniques for converting Monte Carlo algorithms into deterministic algorithms are used to convert the Monte Carlo algorithm for the MIS problem into a simple deterministic algorithm with the same parallel running time.
Abstract: Simple parallel algorithms for the maximal independent set (MIS) problem are presented. The first algorithm is a Monte Carlo algorithm with a very local property. The local property of this algorithm may make it a useful protocol design tool in distributed computing environments and artificial intelligence. One of the main contributions of this paper is the development of powerful and general techniques for converting Monte Carlo algorithms into deterministic algorithms. These techniques are used to convert the Monte Carlo algorithm for the MIS problem into a simple deterministic algorithm with the same parallel running time.
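A serialized sketch of one plausible reading of the randomized MIS round: each live vertex marks itself with probability 1/(2 deg), conflicts along edges are resolved in favour of the higher-degree endpoint, and surviving marked vertices plus their neighbours are removed. This is a sequential toy, not the parallel algorithm of the paper, and the example graph is ad hoc.

```python
import random

def luby_mis(adj, seed=0):
    """Monte Carlo maximal independent set in the spirit of Luby-style rounds.

    adj: dict mapping vertex -> set of neighbours (undirected graph).
    """
    rng = random.Random(seed)
    live = {v: set(nbrs) for v, nbrs in adj.items()}
    mis = set()
    while live:
        # Each live vertex marks itself with probability 1 / (2 deg(v)).
        marked = {v for v, nbrs in live.items()
                  if rng.random() < 1.0 / (2 * max(len(nbrs), 1))}
        # Resolve conflicts: if both endpoints of an edge are marked,
        # keep the higher-degree endpoint (ties broken by vertex id).
        for v in list(marked):
            for u in live[v]:
                if u in marked and (len(live[v]), v) < (len(live[u]), u):
                    marked.discard(v)
                    break
        mis |= marked
        # Remove the surviving marked vertices and all of their neighbours.
        dead = set(marked)
        for v in marked:
            dead |= live[v]
        live = {v: nbrs - dead for v, nbrs in live.items() if v not in dead}
    return mis

# Toy example: a 5-cycle.
graph = {0: {1, 4}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3, 0}}
print(luby_mis(graph))
```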

1,004 citations


Journal ArticleDOI
TL;DR: This paper describes a practical method for probabilistic sensitivity analysis, in which uncertainties in all values are considered simultaneously, and a parametric model that permits each distribution to be specified by two values: the baseline estimate and a bound of the 95 percent confidence interval.
Abstract: The data for medical decision analyses are often unreliable. Traditional sensitivity analysis--varying one or more probability or utility estimates from baseline values to see if the optimal strategy changes--is cumbersome if more than two values are allowed to vary concurrently. This paper describes a practical method for probabilistic sensitivity analysis, in which uncertainties in all values are considered simultaneously. The uncertainty in each probability and utility is assumed to possess a probability distribution. For ease of application we have used a parametric model that permits each distribution to be specified by two values: the baseline estimate and a bound (upper or lower) of the 95 percent confidence interval. Following multiple simulations of the decision tree in which each probability and utility is randomly assigned a value within its distribution, the following results are recorded: (a) the mean and standard deviation of the expected utility of each strategy; (b) the frequency with which each strategy is optimal; (c) the frequency with which each strategy "buys" or "costs" a specified amount of utility relative to the remaining strategies. As illustrated by an application to a previously published decision analysis, this technique is easy to use and can be a valuable addition to the armamentarium of the decision analyst.
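A minimal sketch of the simulation loop the abstract describes, applied to a hypothetical two-strategy decision tree ("treat" vs. "wait"). Beta distributions are used purely as a stand-in for the paper's two-parameter (baseline plus 95 percent bound) distributions, and all numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(42)
n_sims = 10_000

# Hypothetical two-strategy tree: "treat" vs. "wait".  Each probability and
# utility gets a distribution instead of a point value; Beta distributions
# here stand in for the paper's two-parameter model.
p_cure_treat = rng.beta(80, 20, n_sims)    # baseline ~0.8
p_cure_wait  = rng.beta(60, 40, n_sims)    # baseline ~0.6
u_cure       = rng.beta(95, 5, n_sims)     # utility if cured
u_no_cure    = rng.beta(30, 70, n_sims)    # utility if not cured
u_tox        = rng.beta(5, 95, n_sims)     # disutility of treatment toxicity

eu_treat = p_cure_treat * u_cure + (1 - p_cure_treat) * u_no_cure - u_tox
eu_wait  = p_cure_wait  * u_cure + (1 - p_cure_wait)  * u_no_cure

# (a) mean and standard deviation of expected utility for each strategy
print("treat:", eu_treat.mean().round(3), eu_treat.std().round(3))
print("wait :", eu_wait.mean().round(3), eu_wait.std().round(3))
# (b) frequency with which each strategy is optimal
print("P(treat optimal) =", (eu_treat > eu_wait).mean())
# (c) frequency with which 'treat' "buys" at least 0.05 utility over 'wait'
print("P(treat buys >= 0.05) =", (eu_treat - eu_wait >= 0.05).mean())
```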

771 citations


Journal ArticleDOI
TL;DR: In this article, a series of Monte Carlo simulations has been carried out to characterize the temperature and size dependence of the results for liquid water using the TIP4P potential function.
Abstract: A series of Monte Carlo simulations has been carried out to characterize the temperature and size dependence of the results for liquid water using the TIP4P potential function. Five temperatures from -25 to 100°C and four system sizes from 64 to 512 molecules have been studied. Comparisons are made with experimental thermodynamic and structural data as well as results of prior simulations.

728 citations


Journal ArticleDOI
TL;DR: In this paper, the relative free energies of hydration of methanol and ethane in dilute solution were calculated using Monte Carlo simulations with double-wide sampling, and it was shown that only two or three such simulations are necessary to obtain results with high precision.
Abstract: Perturbation theory has been applied to calculate the relative free energies of hydration of methanol and ethane in dilute solution. It is demonstrated that only two or three Monte Carlo simulations using double‐wide sampling are necessary to obtain results with high precision. The small statistical uncertainty in the computed change in free energy of hydration and the good accord with experimental thermodynamic data are most encouraging for application of the procedure to a wide range of problems. Structural effects accompanying the mutation of methanol to ethane in water are also discussed; hydrogen bonding to the solute is essentially eliminated by only a 25% reduction in the atomic charges of methanol.
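The double-wide idea can be illustrated on a toy system where the exact answer is known: one Metropolis run at the midpoint coupling λ = 0.5 yields the free-energy perturbation both forward (to λ = 1) and backward (to λ = 0) via Zwanzig's formula ΔA = −kT ln⟨exp(−ΔU/kT)⟩. The harmonic "mutation" below is an invented example, not the methanol-to-ethane system of the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
kT = 1.0

def u(x, lam, k0=1.0, k1=4.0):
    """Toy potential: harmonic well whose stiffness interpolates with lambda."""
    return 0.5 * (k0 + lam * (k1 - k0)) * x**2

def metropolis(lam, n_steps=200_000, step=0.5):
    """Plain Metropolis sampling of x under u(x, lam)."""
    x, samples = 0.0, np.empty(n_steps)
    for i in range(n_steps):
        xt = x + rng.uniform(-step, step)
        if rng.random() < np.exp(-(u(xt, lam) - u(x, lam)) / kT):
            x = xt
        samples[i] = x
    return samples

# Double-wide sampling: a single run at lambda = 0.5 gives the perturbation
# both "forward" (to 1.0) and "backward" (to 0.0).
xs = metropolis(0.5)
dA_fwd = -kT * np.log(np.mean(np.exp(-(u(xs, 1.0) - u(xs, 0.5)) / kT)))
dA_bwd = -kT * np.log(np.mean(np.exp(-(u(xs, 0.0) - u(xs, 0.5)) / kT)))
dA_total = dA_fwd - dA_bwd

print("estimated  dA(0 -> 1):", round(dA_total, 3))
print("analytical dA(0 -> 1):", round(0.5 * kT * np.log(4.0 / 1.0), 3))
```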

644 citations


Journal ArticleDOI
TL;DR: The Monte Carlo method gives good agreement for the 15-MV x-ray dose in electronic disequilibrium situations, such as the buildup region, near beam boundaries, and near low-density heterogeneities.
Abstract: Arrays were generated using the Monte Carlo method representing the energy absorbed throughout waterlike phantoms from charged particles and scatter radiation set in motion by primary interactions at one location. The resulting “dose spread arrays” were normalized to the collision fraction of the kinetic energy released by the primary photons. These arrays are convolved with the relative primary fluence interacting in a phantom to obtain three‐dimensional dose distributions. The method gives good agreement for the 15‐MV x‐ray dose in electronic disequilibrium situations, such as the buildup region, near beam boundaries, and near low‐density heterogeneities.
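The convolution step itself is straightforward to sketch. Below, a 2-D toy: a made-up Gaussian dose-spread kernel (the paper's kernels are Monte Carlo generated and three-dimensional) is convolved with a uniform primary-interaction map for a 5-cm-wide beam, producing the smooth penumbra expected near beam boundaries.

```python
import numpy as np
from scipy.signal import fftconvolve

# 2-D stand-in for the superposition step:
# dose = (primary interaction map) convolved with (dose-spread kernel).
n, dx = 101, 0.25                                   # grid points, spacing in cm
coord = (np.arange(n) - n // 2) * dx
X, Y = np.meshgrid(coord, coord, indexing="ij")

# Relative primary fluence interacting in the phantom: a 5-cm-wide beam.
terma = np.where(np.abs(X) <= 2.5, 1.0, 0.0)

# Illustrative Gaussian dose-spread kernel, normalized to unit integral.
kernel = np.exp(-(X**2 + Y**2) / (2 * 0.5**2))
kernel /= kernel.sum()

dose = fftconvolve(terma, kernel, mode="same")

# Profile across one beam edge: a smooth penumbra instead of a sharp step.
edge_profile = dose[n // 2 - 20 : n // 2 + 1 : 2, n // 2]
print(np.round(edge_profile, 3))
```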

558 citations


Journal ArticleDOI
TL;DR: This work used the Monte Carlo code EGS to compute photon spectra for a number of different linear accelerators and finds the mean photon energy to be lower than the generally perceived value of one-third the maximum energy.
Abstract: For accurate three‐dimensional treatment planning, new models of dose calculations are being developed which require the knowledge of the energy spectra and angular distributions of the photons incident on the surface of the patient. Knowledge of the spectra is also useful in other applications, including the design of filters and beam modifying devices and determination of factors to convert ionization chamber measurements to dose. We have used a Monte Carlo code (EGS) to compute photon spectra for a number of different linear accelerators. Both the target and the flattening filter have been accurately modeled. We find the mean photon energy to have a value lower than the generally perceived value of one‐third the maximum energy. As expected, the spectra become softer as the distance from the central axis increases. Verification of the spectra is performed by computing dose distributions and half‐value layers in water using the calculated spectra and comparing the results with measured data. We also examined the angular distributions of photons incident on the surface of the phantom. In currently used models of dose computations, it is assumed that the angular deviation of photons from the fan lines emanating from the source is negligible. Although the angular spread of photons with respect to the incident direction has been found to be small, its contribution to the diffuseness of the beam boundaries is significant.
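One of the verification steps mentioned, computing half-value layers from a spectrum, reduces to a one-dimensional root find. The sketch below uses an invented spectrum and approximate narrow-beam attenuation coefficients for water, and weights by fluence only, which is a simplification of the dosimetric weighting actually required.

```python
import numpy as np
from scipy.optimize import brentq

# Hypothetical high-energy spectrum (MeV, relative fluence) and approximate
# narrow-beam attenuation coefficients of water (1/cm); all values invented
# for illustration, not taken from the paper.
energy   = np.array([0.5, 1.0, 2.0, 4.0, 6.0, 10.0, 15.0])
fluence  = np.array([0.05, 0.20, 0.30, 0.25, 0.12, 0.06, 0.02])
mu_water = np.array([0.097, 0.071, 0.049, 0.034, 0.028, 0.022, 0.019])

mean_e = np.sum(energy * fluence) / np.sum(fluence)
print(f"fluence-weighted mean energy ≈ {mean_e:.2f} MeV")

def transmission(t_cm):
    """Fraction of the fluence-weighted beam transmitted through t_cm of water."""
    return np.sum(fluence * np.exp(-mu_water * t_cm)) / np.sum(fluence)

# Half-value layer: thickness at which transmission drops to 0.5.
hvl = brentq(lambda t: transmission(t) - 0.5, 0.0, 100.0)
print(f"HVL ≈ {hvl:.1f} cm of water")
```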

520 citations


Journal ArticleDOI
TL;DR: In this paper, the multivariate asymptotic distribution of sequential Chi-square test statistics is investigated; it is shown that sequential Chi-square statistics calculated for nested models on the same data have an asymptotic intercorrelation which may be expressed in closed form and which is, in many cases, quite high, whereas sequential Chi-square difference tests are asymptotically independent.
Abstract: The multivariate asymptotic distribution of sequential Chi-square test statistics is investigated. It is shown that: (a) when sequential Chi-square statistics are calculated for nested models on the same data, the statistics have an asymptotic intercorrelation which may be expressed in closed form, and which is, in many cases, quite high; and (b) sequential Chi-square difference tests are asymptotically independent. Some Monte Carlo evidence on the applicability of the theory is provided.

491 citations


Book
01 Jan 1985
TL;DR: In this paper, the authors address considerations on solving problems with multiple scales, problems with different time scales, nonlinear normal mode initialization of numerical weather prediction models, diffusion synthetic acceleration of transport iterations with application to a radiation hydrodynamics problem, implicit methods in combustion and chemical kinetics modeling, implicit adaptive-grid radiation hydrodynamics, and multiple time-scale methods in Tokamak magnetohydrodynamics.
Abstract: Various topics concerning multiple time scales are discussed. The subjects addressed include: considerations on solving problems with multiple scales, problems with different time scales, nonlinear normal mode initialization of numerical weather prediction models, diffusion synthetic acceleration of transport iterations with application to a radiation hydrodynamics problem, implicit methods in combustion and chemical kinetics modeling, implicit adaptive-grid radiation hydrodynamics, and multiple time-scale methods in Tokamak magnetohydrodynamics. Also covered are: hybrid and collisional implicit plasma simulation models, simulation of low-frequency electromagnetic phenomena in plasmas, orbit averaging and subcycling in particle simulation of plasmas, direct implicit plasma simulation, direct methods in N-body simulations, and molecular dynamics and Monte Carlo simulation of rare events.

437 citations



Journal ArticleDOI
TL;DR: In this paper, a differential equation model to describe the pinching, degrading response of hysteretic elements is presented, where the model consists of a nonpinching element, in series with a "slip-lock" element.
Abstract: A differential equation model to describe pinching, degrading response of hysteretic elements is presented. The model consists of a nonpinching hysteretic element, in series with a “slip-lock” element. Zero mean response statistics for a single degree of freedom oscillator whose stiffness is described by the series model, computed by equivalent linearization and by Monte Carlo simulation, are compared. The model response statistics are seen to be reasonably estimated by equivalent linearization.
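A Monte Carlo estimate of zero-mean response statistics for a hysteretic single-degree-of-freedom oscillator can be sketched as below. A standard Bouc-Wen element is used only as a stand-in for the paper's pinching slip-lock series model, with white-noise excitation and invented parameter values.

```python
import numpy as np

rng = np.random.default_rng(7)

# SDOF oscillator (unit mass, unit natural frequency) with a Bouc-Wen
# hysteretic restoring force under white-noise excitation.  This is a generic
# stand-in, not the paper's pinching "slip-lock" series model.
def simulate(T=40.0, dt=0.005, S0=0.05, zeta=0.05, alpha=0.5,
             A=1.0, beta=0.5, gamma=0.5, n=1):
    steps = int(T / dt)
    x = v = z = 0.0
    xs = np.empty(steps)
    sigma_f = np.sqrt(2 * np.pi * S0 / dt)      # discrete white-noise amplitude
    for i in range(steps):
        f = sigma_f * rng.normal()
        acc = f - 2 * zeta * v - (alpha * x + (1 - alpha) * z)
        # Bouc-Wen evolution of the hysteretic displacement z.
        zdot = A * v - beta * abs(v) * abs(z) ** (n - 1) * z - gamma * v * abs(z) ** n
        x += v * dt
        v += acc * dt
        z += zdot * dt
        xs[i] = x
    return xs

# Monte Carlo estimate of the stationary RMS displacement over realizations.
rms = [np.sqrt(np.mean(simulate()[-3000:] ** 2)) for _ in range(40)]
print("RMS displacement ≈", round(float(np.mean(rms)), 3),
      "±", round(float(np.std(rms)), 3))
```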

Journal ArticleDOI
TL;DR: In this article, the authors propose and discuss a new type of robust estimator for covariance/correlation matrices and principal components via projection-pursuit techniques, which possesses both rotational equivariance and a high breakdown point.
Abstract: This article proposes and discusses a new type of robust estimator for covariance/correlation matrices and principal components via projection-pursuit techniques. The most attractive advantage of the new procedures is that they combine rotational equivariance with a high breakdown point. Besides, they are qualitatively robust and consistent at elliptic underlying distributions. The Monte Carlo study shows that the best of the new estimators compare favorably with other robust methods: they provide performance as good as M-estimators and somewhat better empirical breakdown properties.
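The projection-pursuit idea for the first robust principal direction (choose the direction that maximizes a robust scale of the projected data) can be sketched with random candidate directions and the MAD as the scale; this is a simplification of the estimators actually studied, not their implementation.

```python
import numpy as np

rng = np.random.default_rng(3)

def first_robust_pc(X, n_dirs=2000):
    """Projection-pursuit sketch: pick the direction maximizing a robust
    scale (MAD) of the projected, median-centred data.  Random candidate
    directions stand in for a more careful search."""
    Xc = X - np.median(X, axis=0)
    dirs = rng.normal(size=(n_dirs, X.shape[1]))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    proj = Xc @ dirs.T                               # (n_samples, n_dirs)
    mad = np.median(np.abs(proj - np.median(proj, axis=0)), axis=0)
    return dirs[np.argmax(mad)], 1.4826 * mad.max()  # direction, robust scale

# Clean bivariate data with a dominant direction, plus 5% gross outliers.
n = 500
X = rng.normal(size=(n, 2)) @ np.array([[3.0, 0.0], [0.0, 1.0]])
X[:25] = rng.normal(loc=50.0, size=(25, 2))

direction, scale = first_robust_pc(X)
print("robust first PC direction:", np.round(direction, 2))
print("classical first PC       :",
      np.round(np.linalg.eigh(np.cov(X.T))[1][:, -1], 2))
```

The classical eigenvector is dragged toward the outlier cluster, while the projection-pursuit direction stays close to the dominant axis of the clean data.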

Journal ArticleDOI
TL;DR: In this paper, a statistical formulation of the multifragmentation of finite nuclei is given, which considers the generalization of the liquid-drop model for hot nuclei and allows one to calculate thermodynamic quantities characterizing the nuclear ensemble at the disassembly stage.

Journal ArticleDOI
TL;DR: The fully nonlinear approach presented is rooted in statistical mechanics and formulates inversion as a problem of Bayesian estimation, in which the prior probability distribution is the Gibbs distribution of statistical mechanics.
Abstract: Nonlinear inverse problems are usually solved with linearized techniques that depend strongly on the accuracy of initial estimates of the model parameters. With linearization, objective functions can be minimized efficiently, but the risk of local rather than global optimization can be severe. I address the problem confronted in nonlinear inversion when no good initial guess of the model parameters can be made. The fully nonlinear approach presented is rooted in statistical mechanics. Although a large nonlinear problem might appear computationally intractable without linearization, reformulation of the same problem into smaller, interdependent parts can lead to tractable computation while preserving nonlinearities. I formulate inversion as a problem of Bayesian estimation, in which the prior probability distribution is the Gibbs distribution of statistical mechanics. Solutions are then obtained by maximizing the posterior probability of the model parameters. Optimization is performed with a Monte Carlo te...
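The basic recipe, a Gibbs-style posterior energy (data misfit plus prior) explored by a Metropolis sampler with a slowly lowered temperature, can be sketched on a toy problem. The piecewise-constant recovery below is an invented stand-in, not the paper's geophysical application.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy inverse problem: recover a piecewise constant model m (values from a
# small discrete set) from noisy observations.
true_m = np.repeat([1.0, 3.0, 2.0], 20)
data = true_m + 0.5 * rng.normal(size=true_m.size)
levels = np.array([0.0, 1.0, 2.0, 3.0])

def energy(m, smooth=2.0):
    """'Posterior energy': data misfit plus a Gibbs (smoothness) prior."""
    misfit = np.sum((data - m) ** 2) / (2 * 0.5**2)
    prior = smooth * np.sum(m[1:] != m[:-1])        # penalize jumps
    return misfit + prior

# Metropolis sampling of exp(-energy/T) with a slowly lowered temperature
# (simulated annealing) to approach the maximum a posteriori model.
m = rng.choice(levels, size=true_m.size)
E = energy(m)
for T in np.geomspace(5.0, 0.05, 200):
    for _ in range(500):
        i = rng.integers(m.size)
        old = m[i]
        m[i] = rng.choice(levels)
        dE = energy(m) - E
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            E += dE
        else:
            m[i] = old

print("misclassified points:", int(np.sum(m != true_m)))
```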

Journal ArticleDOI
TL;DR: Fourier techniques that greatly accelerate simulations on large lattices and a new technique for including quark vacuum-polarization corrections that admits any number of flavors, odd or even, without the need for nested Monte Carlo calculations are introduced.
Abstract: We present a new analysis of Langevin simulation techniques for lattice field theories, including a general discussion of errors and algorithm speed. We introduce Fourier techniques that greatly accelerate simulations on large lattices. We also introduce a new technique for including quark vacuum-polarization corrections that admits any number of flavors, odd or even, without the need for nested Monte Carlo calculations. Our analysis is supported by a variety of numerical experiments.
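The Fourier-acceleration idea, giving each momentum mode its own Langevin step size so that slow long-wavelength modes no longer dominate the relaxation, can be shown on a free scalar field in one dimension, where the exact answer is known. This toy omits everything specific to the paper (gauge fields, quark vacuum polarization).

```python
import numpy as np

rng = np.random.default_rng(11)

# Fourier-accelerated Langevin evolution of a free scalar field on a 1-D
# periodic lattice; illustrative only.
N, mass, eps = 64, 0.1, 0.05
k = 2 * np.pi * np.fft.fftfreq(N)
khat2 = 4 * np.sin(k / 2) ** 2                 # lattice momentum squared
eps_k = eps / (mass**2 + khat2)                # mode-dependent step: all modes
                                               # relax at the same rate per step

phi = np.zeros(N)
samples = []
for step in range(20_000):
    drift = (np.roll(phi, 1) - 2 * phi + np.roll(phi, -1)) - mass**2 * phi
    noise = rng.normal(size=N)
    # Apply the momentum-dependent step size (and matching noise) in k-space.
    dphi = np.fft.ifft(eps_k * np.fft.fft(drift)
                       + np.sqrt(2 * eps_k) * np.fft.fft(noise)).real
    phi += dphi
    if step > 5_000:
        samples.append(np.mean(phi**2))

# Exact lattice result for the free field: <phi^2> = (1/N) sum 1/(khat2 + m^2).
# The measured value carries O(eps) step-size bias plus Monte Carlo noise.
print("measured <phi^2>:", round(float(np.mean(samples)), 2))
print("exact    <phi^2>:", round(float(np.mean(1 / (khat2 + mass**2))), 2))
```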

Journal ArticleDOI
TL;DR: In this paper, the authors evaluated, by Monte Carlo methods, naive density estimates and density estimates corrected for edge effect using the mean maximum distance moved (MMDM) for small mammal populations, and found that the MMDM-corrected estimates had less than 22% bias.
Abstract: We evaluated naive density estimates and density estimates corrected for “edge effect” using mean maximum distance moved (MMDM) for small mammal populations by Monte Carlo methods. Two densities, 25 and 100/ha, were generated in random or slightly clumped spatial patterns within a 4-ha area and populations had average capture probabilities of either 0.16 or 0.24 allowing variation in time, behavior, and heterogeneity. Animals were assumed to have a bivariate normal utilization distribution of either 0.25, 0.5, or 0.75 ha. An 18 by 18 trapping grid with 7 m trap spacing was simulated with trapping over 6 or 8 occasions. Evaluation of 1,393 repetitions divided among 8 different cases revealed a large positive bias (69–89%) for the naive density estimates, and density estimates by using the MMDM had less than 22% bias. A robustness to home range size was demonstrated by the MMDM. Difficulties with both methods are indicated.
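The MMDM correction itself is simple arithmetic: the naive estimate divides the population estimate by the grid area, while the corrected estimate adds a boundary strip of width MMDM/2 around the grid. The numbers below are hypothetical, not taken from the simulations.

```python
# Edge-effect correction with the mean maximum distance moved (MMDM):
# effective trapping area = grid area plus a boundary strip of width MMDM/2.
# All numbers below are hypothetical.

grid_side_m = 17 * 7            # 18 x 18 grid with 7-m spacing -> 119 m side
n_hat = 95                      # population estimate from capture-recapture
mmdm_m = 40.0                   # mean maximum distance moved between captures

naive_area_ha = grid_side_m**2 / 10_000
strip = mmdm_m / 2
corrected_area_ha = (grid_side_m + 2 * strip) ** 2 / 10_000

print("naive density    :", round(n_hat / naive_area_ha, 1), "per ha")
print("corrected density:", round(n_hat / corrected_area_ha, 1), "per ha")
```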

Journal ArticleDOI
TL;DR: In this paper, a square-lattice model of an amphiphile-oil-water system is developed in which oil and water molecules occupy single sites and amphiphiles occupy chains of sites.
Abstract: A square‐lattice model of amphiphile‐oil–water systems is developed in which oil and water molecules occupy single sites and amphiphiles occupy chains of sites. Energies and free energies estimated by Monte Carlo sampling of configuration space show that when the head, or water‐loving portion, of the amphiphile has no tendency to hydrate or surround itself with water, as opposed to surrounding itself with other heads, the capability of even long amphiphiles to solubilize repellant oil and water into a single phase is weak. Although the Monte Carlo free energies deviate markedly from those given by quasichemical theory, the deviation of the phase behavior is modest. Computer drawings of typical equilibrium configurations show highly irregular interfaces, apparently caused by capillary waves which are pronounced in two dimensions.

Journal ArticleDOI
TL;DR: In this paper, the fragmentation of a hot nuclear system is studied in a statistical model and the partitions of the system are calculated by means of a Monte Carlo technique, and the resulting mass spectra, multiplicity distributions, average values of temperature, entropy, heat capacity and break-up density are presented and discussed.

Journal ArticleDOI
TL;DR: In this article, a computer simulation study was designed to investigate the extent to which the interrater reliability of a clinical scale is affected by the number of categories or scale points used.
Abstract: A computer simulation study was designed to investigate the extent to which the interrater reliability of a clinical scale is affected by the number of categories or scale points (2, 3, 4, ..., 100). Results indicate that reliability increases steadily up to 7 scale points, beyond which no substantial increases occur, even when the number of scale points is increased to as many as 100. These findings hold under the following conditions: (1) the research investigator has insufficient a priori knowledge to use as a reliable guideline for deciding on an appropriate number of scale points to employ, and (2) the dichotomous and ordinal categories being considered all have an underlying metric or continuous scale format.
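A stripped-down version of such a simulation: two raters share a common latent trait, add independent rating error (a true reliability of 0.8 is assumed here), and their continuous scores are cut into k categories; the interrater correlation is then tracked as k grows. The design details differ from the study and are assumptions.

```python
import numpy as np

rng = np.random.default_rng(9)
n_subjects, true_r = 2000, 0.8

def interrater_r(k):
    """Correlation between two raters after discretizing onto k scale points."""
    trait = rng.normal(size=n_subjects)
    err = np.sqrt(1 / true_r - 1)
    r1 = trait + err * rng.normal(size=n_subjects)
    r2 = trait + err * rng.normal(size=n_subjects)
    # Cut the underlying continuum into k equal-probability categories.
    cuts = np.quantile(np.concatenate([r1, r2]), np.linspace(0, 1, k + 1)[1:-1])
    return np.corrcoef(np.digitize(r1, cuts), np.digitize(r2, cuts))[0, 1]

for k in (2, 3, 5, 7, 10, 20, 100):
    print(f"{k:>3} scale points: r = {interrater_r(k):.3f}")
```

The printed correlations rise steeply at first and then flatten, mirroring the plateau beyond roughly 7 scale points reported in the abstract.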

Journal ArticleDOI
TL;DR: In this article, an algorithm to include the Pauli exclusion principle in the Ensemble Monte Carlo method is presented; the results indicate that significant changes in the transport properties of GaAs are to be expected when degenerate conditions are reached.
Abstract: An algorithm to include the Pauli exclusion principle in the Ensemble Monte Carlo method is presented. The results indicate that significant changes in the transport properties of GaAs have to be expected when degenerate conditions are reached. Important repercussions should be found in the modeling of microwave devices, where one often deals with highly doped regions.
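The usual way to build Pauli exclusion into an Ensemble Monte Carlo loop is a rejection step: a proposed final state is accepted with probability 1 − f(k_final), where f is the tracked occupancy of the destination cell in k-space. The toy below (a 1-D k grid, Boltzmann-weighted scattering proposals, invented parameters) only illustrates that blocking keeps occupancies from exceeding unity; it is not the paper's GaAs model.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy ensemble of electrons on a 1-D grid of k cells.  A scattering event
# proposes a final state with a Boltzmann weight (relaxation toward the band
# minimum); the Pauli step accepts it with probability 1 - f(k_final).
n_cells, states_per_cell, n_electrons = 41, 40, 600
energy = 0.02 * (np.arange(n_cells) - n_cells // 2) ** 2       # in units of kT
weights = np.exp(-energy)
cum = np.cumsum(weights / weights.sum())

def relax(pauli, sweeps=300):
    k = rng.integers(0, n_cells, size=n_electrons)
    f = np.bincount(k, minlength=n_cells) / states_per_cell
    for _ in range(sweeps * n_electrons):
        e = rng.integers(n_electrons)
        new = int(np.searchsorted(cum, rng.random()))           # proposed state
        if (not pauli) or rng.random() < max(1.0 - f[new], 0.0):
            f[k[e]] -= 1.0 / states_per_cell
            f[new] += 1.0 / states_per_cell
            k[e] = new
    return f

print("peak occupancy without exclusion:", round(relax(False).max(), 2))
print("peak occupancy with exclusion   :", round(relax(True).max(), 2))
```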

Journal ArticleDOI
TL;DR: First results of massive Monte Carlo simulations of the d=3 Ising spin-glass with ±J bond distribution, performed on a fast special purpose computer, show a qualitative change in the behavior of the system, and best fits for the spin-glass correlation length and relaxation time favor an equilibrium phase transition at Tc/J ≈ 1.2.
Abstract: First results of massive Monte Carlo simulations of the d=3 Ising spin-glass with ±J bond distribution, performed on a fast special purpose computer, are presented. A qualitative change in the behavior of the system and best fits for the spin-glass correlation length and relaxation time favor an equilibrium phase transition at Tc/J ≈ 1.2.
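For reference, the elementary building block of such simulations, a single-spin-flip Metropolis sweep of the 3-D ±J Edwards-Anderson model with quenched random bonds, looks as follows. The special-purpose hardware, system sizes and finite-size analysis of the paper are far beyond this sketch; lattice size and temperature below are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(17)

# Metropolis dynamics for the 3-D +/-J Ising spin glass (Edwards-Anderson).
L, T = 8, 1.2                      # lattice size and temperature (units of J)
spins = rng.choice([-1, 1], size=(L, L, L))
# One quenched random +/-J bond per positive lattice direction.
bonds = rng.choice([-1, 1], size=(3, L, L, L))

def local_field(s, J):
    """Sum of J_ij * s_j over the six neighbours of every site."""
    h = np.zeros_like(s, dtype=float)
    for axis in range(3):
        h += J[axis] * np.roll(s, -1, axis) + np.roll(J[axis] * s, 1, axis)
    return h

def sweep(s, J):
    """One Metropolis sweep using a checkerboard (bipartite) update."""
    x, y, z = np.indices(s.shape)
    for parity in (0, 1):
        mask = (x + y + z) % 2 == parity
        dE = 2 * s * local_field(s, J)
        # Accept unconditionally when dE <= 0, else with probability exp(-dE/T).
        flip = mask & (rng.random(s.shape) < np.exp(-np.clip(dE, 0, None) / T))
        s[flip] *= -1
    return s

for _ in range(500):
    spins = sweep(spins, bonds)

energy_per_spin = -float(np.sum(spins * local_field(spins, bonds))) / (2 * L**3)
print("energy per spin:", round(energy_per_spin, 3))
```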

Book ChapterDOI
TL;DR: MCNP is a very general Monte Carlo neutron-photon transport code with approximately 250 person years of Group X-6 code development invested and has as its data base the best cross-section evaluations available.
Abstract: MCNP is a very general Monte Carlo neutron-photon transport code with approximately 250 person years of Group X-6 code development invested. It is highly portable, user-oriented, and a true production code as it is used about 60 Cray hours per month by about 150 Los Alamos users. It has as its data base the best cross-section evaluations available. MCNP contains state-of-the-art traditional and adaptive Monte Carlo techniques to be applied to the solution of an ever-increasing number of problems. Excellent user-oriented documentation is available for all facets of the MCNP code. Many useful and important variants of MCNP exist for special applications.

Posted Content
TL;DR: The authors examined the small sample properties of tests of rational expectations models and showed that the asymptotic distribution of test statistics can be extremely misleading when the time series examined are highly autoregressive.
Abstract: We examine the small sample properties of tests of rational expectations models. We show using Monte Carlo experiments that the asymptotic distribution of test statistics can be extremely misleading when the time series examined are highly autoregressive. In particular, a practitioner relying on the asymptotic distribution will reject true models too frequently. We also show that this problem is especially severe with detrended data. We present correct small sample critical values for our canonical problem.


Journal ArticleDOI
TL;DR: In this article, a contact function for two arbitrary ellipsoids is derived; the numerical value of the contact function is less than 1 if they overlap, and greater than 1 if they do not.
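One widely used formulation of such a contact function (the Perram-Wertheim form, stated here as an assumption about which function is meant) reduces the overlap test to maximizing a scalar function of λ on [0, 1]; the maximum is below 1 exactly when the ellipsoids overlap. The sketch checks the sphere-sphere limit, where the exact value is d²/(r₁+r₂)².

```python
import numpy as np
from scipy.optimize import minimize_scalar

def ellipsoid_matrix(semi_axes, R):
    """Quadratic-form matrix A of an ellipsoid: (x - r)^T A (x - r) <= 1."""
    a = np.asarray(semi_axes, dtype=float)
    return R @ np.diag(1.0 / a**2) @ R.T

def contact_function(rA, A, rB, B):
    """Perram-Wertheim-style contact function (one common formulation):
    value < 1 if the ellipsoids overlap, = 1 if tangent, > 1 if separate."""
    r = np.asarray(rB, float) - np.asarray(rA, float)
    Ainv, Binv = np.linalg.inv(A), np.linalg.inv(B)

    def S(lam):
        G = (1 - lam) * Ainv + lam * Binv
        return lam * (1 - lam) * r @ np.linalg.solve(G, r)

    res = minimize_scalar(lambda lam: -S(lam), bounds=(0.0, 1.0), method="bounded")
    return -res.fun

I = np.eye(3)
# Spheres of radii 1 and 2 with centres 4 apart: exact value 16/9 (no overlap).
A = ellipsoid_matrix([1, 1, 1], I)
B = ellipsoid_matrix([2, 2, 2], I)
print(round(contact_function([0, 0, 0], A, [4, 0, 0], B), 4))   # ~1.7778

# Prolate ellipsoid (semi-axis 3 along x) against the same sphere, overlapping.
C = ellipsoid_matrix([3, 1, 1], I)
print(round(contact_function([0, 0, 0], C, [2.5, 0, 0], B), 4))  # value < 1
```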

Journal ArticleDOI
TL;DR: In this paper, an improved method for generating configurations according to the SU(2) heatbath distribution, which is also a central component of the SU(3) “quasi-heatbath” method of Cabibbo and Marinari, is presented.
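For context, the distribution in question is P(a₀) ∝ √(1−a₀²) exp(βk a₀) on [−1, 1]. The sketch below is the older, naive rejection sampler (inverse transform for the exponential factor, acceptance with the square-root factor), i.e. the baseline whose acceptance rate degrades at large βk, not the improved method of the paper. It is checked against the exact mean I₀(x)/I₁(x) − 2/x.

```python
import numpy as np
from scipy.special import iv

rng = np.random.default_rng(8)

def heatbath_a0(beta_k, n):
    """Naive rejection sampler for the SU(2) heat-bath weight
    P(a0) ∝ sqrt(1 - a0^2) exp(beta_k * a0) on [-1, 1]."""
    out = []
    while len(out) < n:
        u = rng.random()
        # Inverse transform for the density proportional to exp(beta_k * a0).
        a0 = np.log(np.exp(-beta_k) + u * (np.exp(beta_k) - np.exp(-beta_k))) / beta_k
        if rng.random() < np.sqrt(1.0 - a0 * a0):   # accept with the sqrt factor
            out.append(a0)
    return np.array(out)

bk = 2.5
samples = heatbath_a0(bk, 50_000)
print("sample mean of a0:", round(samples.mean(), 4))
print("exact mean of a0 :", round(iv(0, bk) / iv(1, bk) - 2 / bk, 4))
```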

Journal ArticleDOI
TL;DR: The steady-state scattering function for driven diffusive systems with a single conserved density is investigated and it is found that d = 2 is the borderline dimension with marginally nondiffusive behavior; for d larger than 2 the spread is diffusive with anisotropic long-time-tail corrections.
Abstract: The steady-state scattering function for driven diffusive systems with a single conserved density is investigated, motivated by the observation that density fluctuations in stationary driven diffusive systems at low dimensionality spread intrinsically faster than an ordinary diffusion law predicts, with a consequent divergence of the excess noise at small frequencies. It is found that d = 2 is the borderline dimension with marginally nondiffusive behavior; for d larger than 2 the spread is diffusive with anisotropic long-time-tail corrections. The derivations presented are confirmed by Monte Carlo simulation results for a driven hard-core lattice gas.
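A minimal driven hard-core lattice gas of the kind used for the quoted Monte Carlo confirmation: particles on a periodic square lattice hop to empty neighbouring sites, with hops along the field direction biased by exp(±E). The sketch only records the steady current; the steady-state scattering function itself is not computed, and all parameters are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(4)

# Driven hard-core lattice gas on a periodic L x L lattice at half filling
# (infinite-temperature dynamics of the Katz-Lebowitz-Spohn type).
L, E, sweeps = 32, 1.0, 300
lattice = (rng.random((L, L)) < 0.5).astype(int)
moves = [(1, 0, min(1.0, np.exp(+E))), (-1, 0, min(1.0, np.exp(-E))),
         (0, 1, 1.0), (0, -1, 1.0)]
net_hops = 0

for _ in range(sweeps * L * L):
    x, y = rng.integers(L, size=2)
    if not lattice[x, y]:
        continue                                     # empty site, nothing to move
    dx, dy, rate = moves[rng.integers(4)]
    nx_, ny_ = (x + dx) % L, (y + dy) % L
    # Hard-core constraint: hop only onto empty sites, with the biased rate.
    if lattice[nx_, ny_] == 0 and rng.random() < rate:
        lattice[x, y], lattice[nx_, ny_] = 0, 1
        net_hops += dx

print("density:", lattice.mean(),
      " net hops along the field per sweep:", round(net_hops / sweeps, 1))
```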

Journal ArticleDOI
TL;DR: In this paper, a method to analyze reliability of soil slopes using the response surface method is described, where the soil slope is modelled and analyzed by a finite element code as in prevalent deterministic studies, and the simulation is repeated a limited number of times to give point estimates of the response corresponding to uncertainties in the model parameters.
Abstract: A method to analyze reliability of soil slopes using the response surface method is described. The soil slope is modelled and analyzed by a finite element code as in prevalent deterministic studies. The simulation, which is usually expensive, is repeated a limited number of times to give point estimates of the response corresponding to uncertainties in the model parameters. A graduating function is then fit to these point estimates so that the response given by the finite element code can be reasonably approximated by the graduating function within the region of interest. The approximating function, called the response surface, is used to replace the code in subsequent repetitive computations required in a statistical reliability analysis. The procedure is applied to a sample problem in slope stability involving uncertain soil properties. It is shown that the slope stability statistics from the response surface are within 1–9% of the statistics based on a direct Monte Carlo simulation using the finite eleme...
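The workflow can be sketched end to end with a cheap analytic stand-in for the finite element code: a handful of "code" runs at design points, a quadratic response surface fitted to them, and a large Monte Carlo sample evaluated on the surface. The safety-margin function, parameter distributions and design below are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(6)

# Stand-in for the expensive finite element analysis: a hypothetical safety
# margin g(c, phi) > 0 means the slope is stable.  In the real procedure each
# call would be one finite element run.
def expensive_code(c, phi):
    return 0.05 * c + 0.9 * np.tan(np.radians(phi)) - 0.75

# Uncertain soil parameters (hypothetical): cohesion c and friction angle phi.
mu = np.array([10.0, 30.0])         # means of [c (kPa), phi (deg)]
sd = np.array([2.0, 3.0])

# 1. A small design of "code" runs around the mean point.
offsets = np.array([[0, 0], [1, 0], [-1, 0], [0, 1], [0, -1],
                    [1, 1], [1, -1], [-1, 1], [-1, -1]], dtype=float)
pts = mu + 1.5 * sd * offsets
g = np.array([expensive_code(c, p) for c, p in pts])

# 2. Fit a quadratic response surface (6 coefficients in two dimensions).
c_, p_ = pts.T
design = np.column_stack([np.ones_like(c_), c_, p_, c_**2, p_**2, c_ * p_])
coef, *_ = np.linalg.lstsq(design, g, rcond=None)

def surface(c, p):
    return coef @ np.array([np.ones_like(c), c, p, c**2, p**2, c * p])

# 3. Cheap Monte Carlo on the response surface instead of the code.
n = 200_000
c_mc, p_mc = (rng.normal(mu[i], sd[i], n) for i in range(2))
pf_surface = np.mean(surface(c_mc, p_mc) < 0)
pf_direct = np.mean(expensive_code(c_mc, p_mc) < 0)   # possible only for this toy
print(f"P(failure) surface: {pf_surface:.4f}   direct: {pf_direct:.4f}")
```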

Journal ArticleDOI
TL;DR: In this article, the results of Monte Carlo simulations on a system of hard ellipsoids of revolution with length-to-breadth ratios a/b = 3, 2.75, 2, 1.25 and b/a = 3, 2.75, 2, 1.25 are presented.
Abstract: We present the results of Monte Carlo simulations on a system of hard ellipsoids of revolution with length-to-breadth ratios a/b = 3, 2.75, 2, 1.25 and b/a = 3, 2.75, 2, 1.25. We identify four distinct phases, viz. isotropic fluid, nematic fluid, ordered solid and plastic solid. The coexistence points of all first order phase transitions are located by performing absolute free energy computations for all coexisting phases. We find nematic phases only for a/b ≥ 2.75 and a/b ≤ 1/2.75. A plastic solid is only observed for 1.25 ≥ a/b ≥ 0.8. It is found that the phase diagram is surprisingly symmetric under interchange of the major and minor axes of the ellipsoids.