Showing papers on "Monte Carlo method published in 1992"


Book
01 Jan 1992
TL;DR: This book covers Monte Carlo and quasi-Monte Carlo methods for numerical integration and optimization, together with the generation of random numbers, pseudorandom numbers, and pseudorandom vectors.
Abstract: Preface 1. Monte Carlo methods and Quasi-Monte Carlo methods 2. Quasi-Monte Carlo methods for numerical integration 3. Low-discrepancy point sets and sequences 4. Nets and (t,s)-sequences 5. Lattice rules for numerical integration 6. Quasi-Monte Carlo methods for optimization 7. Random numbers and pseudorandom numbers 8. Nonlinear congruential pseudorandom numbers 9. Shift-register pseudorandom numbers 10. Pseudorandom vector generation Appendix A. Finite fields and linear recurring sequences Appendix B. Continued fractions Bibliography Index.

3,815 citations


Journal ArticleDOI
TL;DR: The case is made for basing all inference on one long run of the Markov chain and estimating the Monte Carlo error by standard nonparametric methods well-known in the time-series and operations research literature.
Abstract: Markov chain Monte Carlo using the Metropolis-Hastings algorithm is a general method for the simulation of stochastic processes having probability densities known up to a constant of proportionality. Despite recent advances in its theory, the practice has remained controversial. This article makes the case for basing all inference on one long run of the Markov chain and estimating the Monte Carlo error by standard nonparametric methods well-known in the time-series and operations research literature. In passing it touches on the Kipnis-Varadhan central limit theorem for reversible Markov chains, on some new variance estimators, on judging the relative efficiency of competing Monte Carlo schemes, on methods for constructing more rapidly mixing Markov chains and on diagnostics for Markov chain Monte Carlo.
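
A minimal sketch of this single-long-run strategy, with the Monte Carlo error estimated by the batch-means method from the time-series literature (the toy target density, step size, and batch count are illustrative choices, not taken from the article):

```python
import numpy as np

def metropolis_chain(logpi, x0, n, step=1.0, rng=None):
    """One long Metropolis run targeting a density known up to a constant."""
    rng = rng or np.random.default_rng(0)
    x, lp = x0, logpi(x0)
    out = np.empty(n)
    for i in range(n):
        y = x + step * rng.standard_normal()
        lq = logpi(y)
        if np.log(rng.random()) < lq - lp:   # accept with prob min(1, pi(y)/pi(x))
            x, lp = y, lq
        out[i] = x
    return out

def batch_means_se(samples, n_batches=30):
    """Nonparametric Monte Carlo standard error from non-overlapping batch means."""
    m = len(samples) // n_batches
    means = samples[: m * n_batches].reshape(n_batches, m).mean(axis=1)
    return means.std(ddof=1) / np.sqrt(n_batches)

chain = metropolis_chain(lambda x: -0.5 * x**2, x0=0.0, n=100_000)  # toy N(0, 1) target
print(chain.mean(), "+/-", batch_means_se(chain))
```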

1,912 citations


Journal ArticleDOI
15 Jul 1992-EPL
TL;DR: In this article, the authors proposed a new global optimization method (Simulated Tempering) for simulating effectively a system with a rough free-energy landscape (i.e., many coexisting states) at finite nonzero temperature.
Abstract: We propose a new global optimization method (Simulated Tempering) for simulating effectively a system with a rough free-energy landscape (i.e., many coexisting states) at finite nonzero temperature. This method is related to simulated annealing, but here the temperature becomes a dynamic variable, and the system is always kept at equilibrium. We analyse the method on the Random Field Ising Model, and we find a dramatic improvement over conventional Metropolis and cluster methods. We analyse and discuss the conditions under which the method has optimal performance.
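
A sketch of the temperature move that makes temperature a dynamic variable, assuming an inverse-temperature ladder `betas` and tunable weights `g` (all names are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)

def tempering_move(E, k, betas, g):
    """Propose moving the current inverse temperature betas[k] to a neighboring
    level; the configuration (with energy E) is unchanged. The joint weight is
    exp(-betas[k] * E + g[k]), so temperature is a dynamical variable and the
    system stays at equilibrium at every level."""
    k_new = k + int(rng.choice([-1, 1]))
    if not 0 <= k_new < len(betas):
        return k
    log_acc = -(betas[k_new] - betas[k]) * E + g[k_new] - g[k]
    return k_new if np.log(rng.random()) < log_acc else k
```

Between such temperature moves the configuration itself is updated by ordinary Metropolis steps at the current temperature; the weights g[k] have to be tuned (roughly to the free energies of the sub-ensembles) so that all temperatures are actually visited.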

1,723 citations


Journal ArticleDOI
TL;DR: A model based upon steady-state diffusion theory which describes the radial dependence of diffuse reflectance of light from tissues is developed and the optical properties derived for the phantoms are within 5%-10% of those determined by other established techniques.
Abstract: A model based upon steady-state diffusion theory which describes the radial dependence of diffuse reflectance of light from tissues is developed. This model incorporates a photon dipole source in order to satisfy the tissue boundary conditions and is suitable for either refractive index matched or mismatched surfaces. The predictions of the model were compared with Monte Carlo simulations as well as experimental measurements made with tissue simulating phantoms. The model describes the reflectance data accurately to radial distances as small as 0.5 mm when compared to Monte Carlo simulations and agrees with experimental measurements to distances as small as 1 mm. A nonlinear least-squares fitting procedure has been used to determine the tissue optical properties from the radial reflectance data in both phantoms and tissues in vivo. The optical properties derived for the phantoms are within 5%-10% of those determined by other established techniques. The in vivo values are also consistent with those reported by other investigators.
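
For concreteness, a commonly used closed form for this kind of dipole-source diffusion model is sketched below; exact boundary treatments and prefactors vary between formulations, so treat this as illustrative rather than as the paper's precise expression:

```python
import numpy as np

def diffuse_reflectance(rho, mua, mus_p, A=1.0):
    """Radial diffuse reflectance R(rho) from a dipole (point source plus image)
    in a semi-infinite medium, in the spirit of the steady-state diffusion model
    above. mua (absorption) and mus_p (reduced scattering) in 1/mm, rho in mm;
    A encodes the refractive index mismatch. Prefactors are illustrative."""
    mut_p = mua + mus_p                 # reduced attenuation coefficient
    z0 = 1.0 / mut_p                    # depth of the isotropic point source
    D = 1.0 / (3.0 * mut_p)             # diffusion constant
    zb = 2.0 * A * D                    # extrapolated-boundary offset
    mueff = np.sqrt(3.0 * mua * mut_p)  # effective attenuation coefficient
    r1 = np.sqrt(z0**2 + rho**2)               # distance to the source
    r2 = np.sqrt((z0 + 2.0 * zb)**2 + rho**2)  # distance to the image dipole
    a_p = mus_p / mut_p                 # transport albedo
    return (a_p / (4.0 * np.pi)) * (
        z0 * (mueff + 1.0 / r1) * np.exp(-mueff * r1) / r1**2
        + (z0 + 2.0 * zb) * (mueff + 1.0 / r2) * np.exp(-mueff * r2) / r2**2
    )
```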

1,541 citations


Journal ArticleDOI
TL;DR: This paper reviews goodness-of-fit indices for structural equation models and the Monte Carlo studies that have empirically assessed their distributional properties; a more complete understanding of their properties and suitability requires further research.
Abstract: This article reviews proposed goodness-of-fit indices for structural equation models and the Monte Carlo studies that have empirically assessed their distributional properties. The cumulative contributions of the studies are summarized, and the variables under which the indices are studied are noted. A primary finding is that many of the indices used until the late 1980s, including Joreskog and Sorbom's (1981) GFI and Bentler and Bonett's (1980) NFI, indicated better fit when sample size increased. More recently developed indices based on the chi-square noncentrality parameter are discussed and the relevant Monte Carlo studies reviewed. Although a more complete understanding of their properties and suitability requires further research, the recommended fit indices are the McDonald (1989) noncentrality index, the Bentler (1990)-McDonald and Marsh (1990) RNI (or the bounded counterpart CFI), and Bollen's (1989) DELTA2.

1,068 citations


Journal ArticleDOI
TL;DR: A new type of Monte Carlo move makes it possible to carry out large-scale conformational changes of a chain molecule in a single trial move, in a novel approach that allows efficient numerical simulation of systems consisting of flexible chain molecules.
Abstract: We propose a novel approach that allows efficient numerical simulation of systems consisting of flexible chain molecules. The method is especially suitable for the numerical simulation of dense chain systems and monolayers. A new type of Monte Carlo move is introduced that makes it possible to carry out large-scale conformational changes of the chain molecule in a single trial move. Our scheme is based on the self-avoiding random walk algorithm of Rosenbluth and Rosenbluth. As an illustration, we compare the results of a calculation of mean-square end-to-end lengths for single chains on a two-dimensional square lattice with corresponding data from other simulations.
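
A minimal sketch of the underlying Rosenbluth-Rosenbluth growth step on a two-dimensional square lattice, reweighting each chain by the number of open directions at every step (chain length and sample count are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
STEPS = [(1, 0), (-1, 0), (0, 1), (0, -1)]  # 2D square lattice

def grow_rosenbluth_chain(n):
    """Grow one self-avoiding walk of n steps by the Rosenbluth-Rosenbluth rule:
    choose uniformly among the unoccupied neighbors at each step and accumulate
    the weight prod(k_i / 4), where k_i counts the open directions."""
    pos, occupied, w = (0, 0), {(0, 0)}, 1.0
    for _ in range(n):
        free = [(pos[0] + dx, pos[1] + dy) for dx, dy in STEPS
                if (pos[0] + dx, pos[1] + dy) not in occupied]
        if not free:
            return None, 0.0            # dead end: zero-weight chain
        w *= len(free) / 4.0
        pos = free[rng.integers(len(free))]
        occupied.add(pos)
    return pos, w

# Weight-averaged estimate of the mean-square end-to-end length <R^2>
num = den = 0.0
for _ in range(20_000):
    end, w = grow_rosenbluth_chain(20)
    if w > 0.0:
        num += w * (end[0]**2 + end[1]**2)
        den += w
print("<R^2> for N = 20:", num / den)
```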

1,017 citations


Journal ArticleDOI
TL;DR: This article describes a transformation that simplifies the problem and places it into a form that allows efficient calculation using standard numerical multiple integration algorithms.
Abstract: The numerical computation of a multivariate normal probability is often a difficult problem. This article describes a transformation that simplifies the problem and places it into a form that allows efficient calculation using standard numerical multiple integration algorithms. Test results are presented that compare implementations of two algorithms that use the transformation with currently available software.
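
A sketch of the sequential-conditioning idea: a Cholesky factorization turns the multivariate normal probability into an integral over the unit cube, which plain Monte Carlo (or any standard multiple-integration routine) can then handle. The implementation details here are illustrative, not the article's code:

```python
import numpy as np
from scipy.stats import norm

def mvn_prob(a, b, cov, n=20_000, rng=None):
    """Monte Carlo estimate of P(a < X < b) for X ~ N(0, cov) after the
    sequential-conditioning (Cholesky) transformation to the unit cube."""
    rng = rng or np.random.default_rng(3)
    C = np.linalg.cholesky(cov)
    m = len(a)
    total = 0.0
    for _ in range(n):
        w = rng.random(m)
        y = np.zeros(m)
        f = 1.0
        for i in range(m):
            shift = C[i, :i] @ y[:i]
            d = norm.cdf((a[i] - shift) / C[i, i])
            e = norm.cdf((b[i] - shift) / C[i, i])
            f *= e - d
            # clip to keep the inverse CDF finite at the cube's edges
            y[i] = norm.ppf(np.clip(d + w[i] * (e - d), 1e-12, 1 - 1e-12))
        total += f
    return total / n

cov = np.array([[1.0, 0.5], [0.5, 1.0]])
print(mvn_prob(np.array([-1.0, -1.0]), np.array([1.0, 1.0]), cov))
```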

1,012 citations


Journal ArticleDOI
TL;DR: In this paper, the authors propose a new effective Monte Carlo (MC) procedure for direct calculation of the free energy in a single MC run, in which the partition function of an expanded ensemble is introduced as a sum of canonical partition functions with a set of temperatures and additive factors.
Abstract: We propose a new effective Monte Carlo (MC) procedure for direct calculation of the free energy in a single MC run. The partition function of the expanded ensemble is introduced, including a sum of canonical partition functions with a set of temperatures and additive factors (modification). A random walk in the space of both particle coordinates and temperatures provides calculation of the free energy over a wide range of T. The method was applied to a primitive model of electrolyte, including the region of low temperatures. In a similar way, other variants of expanded ensembles can be constructed (e.g., over the number of particles N or the volume V). Applications in quantum statistics (path-integral Monte Carlo) and some other areas are also discussed.
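
Once such a run has been made, free-energy differences between the sub-ensembles follow directly from how often each one is visited; a minimal sketch (the counts and biases below are made-up numbers):

```python
import numpy as np

def free_energy_differences(visits, eta):
    """Recover ln(Z_k / Z_0) from an expanded-ensemble run: the walk visits
    sub-ensemble k with probability proportional to Z_k * exp(eta_k), so
    ln Z_k - ln Z_0 = ln(N_k / N_0) - (eta_k - eta_0). The free energy of
    sub-ensemble k then follows from beta_k * F_k = -ln Z_k."""
    visits = np.asarray(visits, dtype=float)
    eta = np.asarray(eta, dtype=float)
    return np.log(visits / visits[0]) - (eta - eta[0])

# e.g. counts from a well-mixed run over four temperature sub-ensembles
print(free_energy_differences([2512, 2488, 2530, 2470], [0.0, 1.2, 2.1, 2.9]))
```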

1,006 citations


Journal ArticleDOI
TL;DR: In this paper, a Markov chain Monte Carlo method is used to approximate the whole likelihood function in autologistic models and other exponential family models for dependent data, and the parameter value (if any) maximizing this function approximates the MLE.
Abstract: Maximum likelihood estimates (MLEs) in autologistic models and other exponential family models for dependent data can be calculated with Markov chain Monte Carlo methods (the Metropolis algorithm or the Gibbs sampler), which simulate ergodic Markov chains having equilibrium distributions in the model. From one realization of such a Markov chain, a Monte Carlo approximant to the whole likelihood function can be constructed. The parameter value (if any) maximizing this function approximates the MLE.
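
A sketch of the construction for a toy one-parameter exponential family (here N(theta, 1), so the sufficient statistic is x itself; i.i.d. draws stand in for the single Markov chain realization at the reference value psi):

```python
import numpy as np
from scipy.optimize import minimize_scalar

def mc_loglik(theta, psi, t_obs, t_sims):
    """Monte Carlo log-likelihood ratio l(theta) - l(psi) for an exponential
    family with natural parameter theta and sufficient statistic t, using
    simulated values t_sims = t(X_1), ..., t(X_n) drawn at psi. The unknown
    normalizing-constant ratio is estimated by importance sampling."""
    shifted = (theta - psi) * t_sims
    log_z_ratio = np.log(np.mean(np.exp(shifted - shifted.max()))) + shifted.max()
    return (theta - psi) * t_obs - log_z_ratio

rng = np.random.default_rng(4)
psi, t_obs = 0.0, 1.3
t_sims = rng.normal(psi, 1.0, 50_000)   # stand-in for one long MCMC run at psi
res = minimize_scalar(lambda th: -mc_loglik(th, psi, t_obs, t_sims),
                      bounds=(-3.0, 3.0), method="bounded")
print("Monte Carlo MLE:", res.x)   # close to t_obs for this toy N(theta, 1) family
```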

869 citations


Book
01 Jan 1992
TL;DR: This book introduces statistical mechanics and thermodynamics and covers models, mean-field theories, the transfer matrix, series expansions, Monte Carlo simulations, and the renormalization group.
Abstract: Introduction. Statistical mechanics and thermodynamics. Models. Mean-field theories. The transfer matrix. Series expansions. Monte Carlo simulations. The renormalization group. Implementations of the renormalization group.

788 citations


Journal ArticleDOI
09 Apr 1992-Nature
TL;DR: Application of the 'dead-end elimination' theorem effectively controls the computational explosion of the rotamer combinatorial problem, thereby allowing the determination of the global minimum energy conformation of a large collection of side chains.
Abstract: The prediction of a protein's tertiary structure is still a considerable problem because the huge size of the possible conformational space¹ makes it computationally difficult. With regard to side-chain modelling, a solution has been attempted by grouping side-chain conformations into representative sets of rotamers²⁻⁵. Nonetheless, an exhaustive combinatorial search is still limited to carefully identified packing units⁵,⁶ containing a limited number of residues. For larger systems other strategies had to be developed, such as the Monte Carlo procedure⁶,⁷ and the genetic algorithm and clustering approach⁸. Here we present a theorem, referred to as the 'dead-end elimination' theorem, which imposes a suitable condition to identify rotamers that cannot be members of the global minimum energy conformation. Application of this theorem effectively controls the computational explosion of the rotamer combinatorial problem, thereby allowing the determination of the global minimum energy conformation of a large collection of side chains.
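
A sketch of one elimination sweep under the stated criterion; the array shapes, names, and toy energies are illustrative (positions x rotamers, with pairwise energies precomputed):

```python
import numpy as np

def dee_pass(E_self, E_pair, alive):
    """One sweep of the dead-end elimination criterion: rotamer r at position i
    is removed if some competitor t at the same position satisfies
        E(i_r) + sum_{j != i} min_s E(i_r, j_s)
            > E(i_t) + sum_{j != i} max_s E(i_t, j_s),
    i.e. r's best possible environment is still worse than t's worst one, so
    r cannot belong to the global minimum energy conformation.
    Shapes: E_self (n, m); E_pair (n, n, m, m); alive is an (n, m) bool mask."""
    n, m = E_self.shape
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for r in np.flatnonzero(alive[i]):
            best_r = E_self[i, r] + sum(E_pair[i, j, r, alive[j]].min() for j in others)
            for t in np.flatnonzero(alive[i]):
                if t != r:
                    worst_t = E_self[i, t] + sum(E_pair[i, j, t, alive[j]].max() for j in others)
                    if best_r > worst_t:
                        alive[i, r] = False
                        break
    return alive

# Toy problem: iterate dee_pass until no rotamer is removed; the survivors are
# then few enough for exhaustive enumeration of the minimum energy conformation.
rng = np.random.default_rng(10)
n, m = 6, 8
E_self = rng.normal(size=(n, m))
E_pair = rng.normal(size=(n, n, m, m))
E_pair = (E_pair + E_pair.transpose(1, 0, 3, 2)) / 2   # symmetric pair energies
alive = dee_pass(E_self, E_pair, np.ones((n, m), dtype=bool))
print("rotamers remaining per position:", alive.sum(axis=1))
```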

Journal ArticleDOI
TL;DR: In this paper, single field principal component analysis (PCA), direct singular value decomposition (SVD), canonical correlation analysis (CCA), and combined PCA of two fields are applied to a 39-winter dataset consisting of normalized seasonal mean sea surface temperature anomalies over the North Pacific and concurrent 500-mb height anomalies over the same region.
Abstract: Single field principal component analysis (PCA), direct singular value decomposition (SVD), canonical correlation analysis (CCA), and combined principal component analysis (CPCA) of two fields are applied to a 39-winter dataset consisting of normalized seasonal mean sea surface temperature anomalies over the North Pacific and concurrent 500-mb height anomalies over the same region. The CCA solutions are obtained by linear transformations of the SVD solutions. Spatial patterns and various measures of the variances and covariances explained by the modes derived from the different types of expansions are compared, with emphasis on the relative merits of SVD versus CCA. Results for two different analysis domains (i.e., the Pacific sector versus a full hemispheric domain for the 500-mb height field) are also compared in order to assess the domain dependence of the two techniques. The SVD solution is also compared with the results of 28 Monte Carlo simulations in which the temporal order of the SST gri...

Journal ArticleDOI
TL;DR: In this paper, the Gibbs sampler is proposed as a mechanism for implementing a conceptually and computationally simple solution in multivariate state-space modeling, forecasting, and smoothing, allowing for the possibilities of nonnormal errors and nonlinear functionals in the state equation, the observational equation, or both.
Abstract: A solution to multivariate state-space modeling, forecasting, and smoothing is discussed. We allow for the possibilities of nonnormal errors and nonlinear functionals in the state equation, the observational equation, or both. An adaptive Monte Carlo integration technique known as the Gibbs sampler is proposed as a mechanism for implementing a conceptually and computationally simple solution in such a framework. The methodology is a general strategy for obtaining marginal posterior densities of coefficients in the model or of any of the unknown elements of the state space. Missing data problems (including the k-step ahead prediction problem) also are easily incorporated into this framework. We illustrate the broad applicability of our approach with two examples: a problem involving nonnormal error distributions in a linear model setting and a one-step ahead prediction problem in a situation where both the state and observational equations are nonlinear and involve unknown parameters.
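
As a concrete illustration of the Gibbs sampler itself (a toy bivariate-normal target, not the paper's state-space model):

```python
import numpy as np

def gibbs_bivariate_normal(rho, n, rng=None):
    """The Gibbs sampler in its simplest setting: draw from a bivariate normal
    with correlation rho by alternating the two full conditionals
    x | y ~ N(rho*y, 1 - rho^2) and y | x ~ N(rho*x, 1 - rho^2).
    State-space applications iterate the same idea over model coefficients and
    unobserved states; this toy target is purely illustrative."""
    rng = rng or np.random.default_rng(8)
    x = y = 0.0
    out = np.empty((n, 2))
    for i in range(n):
        x = rho * y + np.sqrt(1 - rho**2) * rng.standard_normal()
        y = rho * x + np.sqrt(1 - rho**2) * rng.standard_normal()
        out[i] = x, y
    return out

draws = gibbs_bivariate_normal(0.8, 50_000)
print(np.corrcoef(draws.T)[0, 1])   # close to 0.8
```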

Journal ArticleDOI
TL;DR: For folding on a simple two-dimensional lattice it is found that the genetic algorithm is dramatically superior to conventional Monte Carlo methods.

Journal ArticleDOI
23 Oct 1992-Science
TL;DR: The performance of the hybrid AM1-TIP3P model was further validated by consideration of bimolecular complexes with water and by computation of the free energies of solvation of organic molecules using statistical perturbation theory.
Abstract: A Monte Carlo quantum mechanical-molecular mechanical (QM-MM) simulation method was used to determine the contributions of the solvent polarization effect to the total interaction energies between solute and solvent for amino acid side chains and nucleotide bases in aqueous solution. In the present AM1-TIP3P approach, the solute molecule is characterized by valence electrons and nucleus cores with Hartree-Fock theory incorporating explicit solvent effects into the total Hamiltonian, while the solvent is approximated by the three-point charge TIP3P model. The polarization energy contributes 10 to 20 percent of the total electrostatic energy in these systems. The performance of the hybrid AM1-TIP3P model was further validated by consideration of bimolecular complexes with water and by computation of the free energies of solvation of organic molecules using statistical perturbation theory. Excellent agreement with ab initio 6-31G(d) results and experimental solvation free energies was obtained.


Journal ArticleDOI
TL;DR: In this article, the limit of the random empirical measures associated with the Bird algorithm is shown to be a deterministic measure-valued function satisfying an equation close (in a certain sense) to the Boltzmann equation.
Abstract: Bird's direct simulation Monte Carlo method for the Boltzmann equation is considered. The limit (as the number of particles tends to infinity) of the random empirical measures associated with the Bird algorithm is shown to be a deterministic measure-valued function satisfying an equation close (in a certain sense) to the Boltzmann equation. A Markov jump process is introduced, which is related to Bird's collision simulation procedure via a random time transformation. Convergence is established for the Markov process and the random time transformation. These results, together with some general properties concerning the convergence of random measures, make it possible to characterize the limiting behavior of the Bird algorithm.

Journal ArticleDOI
TL;DR: In this article, meta-analytic methods were used to integrate the findings of a sample of Monte Carlo studies of the robustness of the F test in the one-and two-factor fixed effects ANOVA models.
Abstract: Meta-analytic methods were used to integrate the findings of a sample of Monte Carlo studies of the robustness of the F test in the one- and two-factor fixed effects ANOVA models. Monte Carlo results for the Welch (1947) and Kruskal-Wallis (Kruskal & Wallis, 1952) tests were also analyzed. The meta-analytic results provided strong support for the robustness of the Type I error rate of the F test when certain assumptions were violated. The F test also showed excellent power properties. However, the Type I error rate of the F test was sensitive to unequal variances, even when sample sizes were equal. The error rate of the Welch test was insensitive to unequal variances when the population distribution was normal, but nonnormal distributions tended to inflate its error rate and to depress its power. Meta-analytic and exact statistical theory results were used to summarize the effects of assumption violations for the tests.
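
A small experiment in the same spirit, estimating the Type I error rate of the one-way F test under equal and unequal group variances (group sizes and variance ratios are illustrative):

```python
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(5)

def type1_error_rate(sds, n=15, reps=20_000, alpha=0.05):
    """Monte Carlo Type I error rate of the one-way ANOVA F test when all
    population means are equal but group standard deviations may differ."""
    hits = 0
    for _ in range(reps):
        samples = [rng.normal(0.0, sd, n) for sd in sds]
        if f_oneway(*samples).pvalue < alpha:
            hits += 1
    return hits / reps

print("equal variances:  ", type1_error_rate([1.0, 1.0, 1.0]))
print("unequal variances:", type1_error_rate([1.0, 2.0, 4.0]))  # inflated even with equal n
```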

Journal ArticleDOI
TL;DR: For a piecewise linear barrier switching between two values as a Markov process, exact and Monte Carlo results reveal a novel resonantlike phenomenon as a function of the barrier fluctuation rate.
Abstract: We consider the problem of thermally activated potential barrier crossing in the presence of fluctuations of the barrier itself. For a piecewise linear barrier switching between two values as a Markov process, exact and Monte Carlo results reveal a novel resonantlike phenomenon as a function of the barrier fluctuation rate. For very slow variations the average crossing time is the average of the times required to diffuse over each of the barriers separately; for very fast variations the mean crossing time is that required to cross the average barrier. At intermediate rates the crossing is strongly correlated with the potential variation and the escape rate exhibits a local maximum at a "resonant" fluctuation rate.
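
A crude simulation of this setup, using an Euler-discretized overdamped Langevin walker and a dichotomously switching linear barrier (all parameters are illustrative; a minimum of the mean crossing time at an intermediate switching rate is the signature of the effect):

```python
import numpy as np

rng = np.random.default_rng(6)

def mean_crossing_time(rate, n_traj=200, dt=1e-3, kT=1.0, slopes=(6.0, 0.0)):
    """Mean first-passage time from x = 0 to x = 1 over a fluctuating linear
    barrier V(x) = slope * x, where the slope switches between two values as a
    dichotomous Markov process with the given flip rate. Overdamped Langevin
    dynamics, Euler scheme; parameters chosen only for illustration."""
    times = []
    for _ in range(n_traj):
        x, s, t = 0.0, int(rng.integers(2)), 0.0
        while x < 1.0:
            if rng.random() < rate * dt:               # barrier flip
                s = 1 - s
            x += -slopes[s] * dt + np.sqrt(2.0 * kT * dt) * rng.standard_normal()
            x = abs(x)                                 # reflecting wall at x = 0
            t += dt
        times.append(t)
    return np.mean(times)

# Slow flips average the two separate crossing times; fast flips see the
# average barrier; an intermediate "resonant" rate crosses fastest.
for rate in (0.01, 2.0, 100.0):
    print(f"rate {rate:7.2f}: mean crossing time {mean_crossing_time(rate):.2f}")
```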

Journal ArticleDOI
TL;DR: In this paper, a configurational-bias Monte Carlo (CBMC) algorithm was proposed to calculate the chemical potential of arbitrary chain molecules in a computer simulation, based on a generalization of Siepmann's method.
Abstract: The authors present a method for calculating the chemical potential of arbitrary chain molecules in a computer simulation. The method is based on a generalization of Siepmann's method for calculating the chemical potential of chain molecules with a finite number of conformations. Next, the authors show that it is also possible to extend the configurational-bias Monte Carlo scheme developed recently by Siepmann and Frenkel (1992) to continuously deformable molecules. The utility of their technique for computing the chemical potential of chain molecules is demonstrated by computing the chemical potential of a fully flexible chain consisting of 10-20 segments in a moderately dense atomic fluid. Under these conditions the conventional particle-insertion schemes fail completely. In addition, they show that their novel configurational-bias Monte Carlo scheme compares favourably with conventional Monte Carlo procedures for chain molecules.
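
The link between Rosenbluth-style trial insertions and the chemical potential can be stated in a few lines; a hedged sketch (mu_ex = -kT ln(<W>/W_ideal), with the ideal-chain reference weight taken as given, and not the authors' exact scheme):

```python
import numpy as np

def excess_chemical_potential(weights, w_ideal, kT=1.0):
    """Excess chemical potential of a chain from trial-insertion Rosenbluth
    weights: mu_ex = -kT * ln(<W> / W_ideal), where <W> averages the weights of
    chains grown into the fluid and W_ideal is the reference weight of an
    isolated (ideal) chain grown by the same rule."""
    return -kT * np.log(np.mean(weights) / w_ideal)

# e.g. weights collected from many trial growths of a short chain (dummy values)
print(excess_chemical_potential(np.array([0.8, 1.3, 0.02, 0.6]), w_ideal=1.0))
```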

Journal ArticleDOI
TL;DR: An essentially exact solution of the infinite-dimensional Hubbard model is made possible by a new self-consistent Monte Carlo procedure; near half filling, antiferromagnetism and a pseudogap appear in the single-particle density of states.
Abstract: An essentially exact solution of the infinite-dimensional Hubbard model is made possible by a new self-consistent Monte Carlo procedure. Near half filling, antiferromagnetism and a pseudogap in the single-particle density of states are found for sufficiently large values of the intrasite Coulomb interaction. At half filling the antiferromagnetic transition temperature attains its largest value when the intrasite Coulomb interaction U ≈ 3.

Journal ArticleDOI
TL;DR: In this paper, Monte Carlo methods are used to study the size and power of serial-correlation-corrected versions of the Dickey-Fuller (1979, 1981) unit root tests appropriate when the time series has unknown mean.
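
A minimal version of such a size/power experiment using the augmented Dickey-Fuller test from statsmodels (the data-generating process, sample size, and lag settings are illustrative, not those of the paper):

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(9)

def rejection_rate(phi, T=100, reps=1000, alpha=0.05):
    """Monte Carlo size (phi = 1) or power (phi < 1) of the augmented
    Dickey-Fuller test with an intercept, for an AR(1) y_t = phi*y_{t-1} + e_t."""
    rejections = 0
    for _ in range(reps):
        e = rng.standard_normal(T)
        y = np.zeros(T)
        for t in range(1, T):
            y[t] = phi * y[t - 1] + e[t]
        if adfuller(y, regression="c", maxlag=4, autolag=None)[1] < alpha:
            rejections += 1
    return rejections / reps

print("size (phi = 1.0): ", rejection_rate(1.0))   # should be near 0.05
print("power (phi = 0.9):", rejection_rate(0.9))
```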

01 Jan 1992
TL;DR: This book covers uncertainties in measurements, probability distributions, error analysis, Monte Carlo techniques, and least-squares fitting to polynomials and arbitrary functions, including the fitting of composite peaks and direct application of the maximum likelihood method.
Abstract: Uncertainties in measurements. Probability distributions. Error analysis. Estimates of means and errors. Monte Carlo techniques. Dependent and independent variables. Least-squares fit to a polynomial. Least-squares fit to an arbitrary function. Fitting composite peaks. Direct application of the maximum likelihood. Appendices: numerical methods, matrices, graphs and tables, histograms and graphs, computer routines in Pascal.

Journal ArticleDOI
TL;DR: The Type I and II error properties of the t test were evaluated by means of a Monte Carlo study that sampled 8 real distribution shapes identified by Micceri (1986, 1989) as being representative of types encountered in psychology and education research.
Abstract: The Type I and II error properties of the t test were evaluated by means of a Monte Carlo study that sampled 8 real distribution shapes identified by Micceri (1986, 1989) as being representative of types encountered in psychology and education research.

Journal ArticleDOI
TL;DR: In this paper, the authors considered how ARCH effects may be handled in time series models formulated in terms of unobserved components, including a random walk plus noise model with both disturbances subject to ARCH and an ARCH-M model with a time-varying parameter.

Journal ArticleDOI
TL;DR: In this paper, a Monte Carlo simulation of polyethylene at equilibrium is performed in an isobaric-isothermal statistical-mechanical ensemble, which permits calculation of the density of the polymer matrix at specified conditions of pressure and temperature.
Abstract: Polyethylene at equilibrium is studied by computer simulation. Configuration space is sampled efficiently by a novel Monte Carlo simulation scheme developed for the study of long molecules at high densities. Simulations are carried out in an isobaric‐isothermal statistical‐mechanical ensemble which permits calculation of the density of the polymer matrix at specified conditions of pressure and temperature. A systematic study of the polymer at different temperatures indicates a phase transition; in agreement with experiment, at low temperatures, the polyethylene model studied here crystallizes spontaneously. At temperatures above the melting point, the simulated melt is described accurately by the model.

Journal ArticleDOI
TL;DR: It is shown that photon therapy beams can be characterized with great accuracy from a combination of precalculated Monte Carlo energy deposition kernels and dose distributions measured in a water phantom.
Abstract: A method for photon dose calculation in radiotherapy planning using pencil beam energy deposition kernels is presented. It is designed to meet the requirements of an algorithm for 3-D treatment planning that is general enough to handle irregularly shaped radiation fields incident on a heterogeneous patient. It is point-oriented and thus faster than a full 3-D convolution algorithm and uses the same physical data base to characterize a clinical beam as a full 3-D convolution algorithm. It is shown that photon therapy beams can be characterized with great accuracy from a combination of precalculated Monte Carlo energy deposition kernels and dose distributions measured in a water phantom. The data are used to derive analytical pencil beam kernels that are approximately partitioned into the dose from (i) primary released electrons and positrons, (ii) scattered, bremsstrahlung, and annihilation photons, (iii) contaminating photons, and (iv) charged particles from the collimator head. A semianalytical integration method, based on triangulation of the field, is developed for dose calculation using the analytical kernels. Dose is calculated in units normalized to the incident energy fluence, which facilitates output factor calculation. For application in heterogeneous media, a scatter correction factor is derived using monodirectional convolution along the ray path. In homogeneous media results are compared with measurements, and in heterogeneous media with Monte Carlo calculations and the Batho method.

Journal ArticleDOI
TL;DR: This work shows how the Wolff algorithm, now accepted as the best cluster-flipping Monte Carlo algorithm for beating "critical slowing down," can yield incorrect answers due to subtle correlations in "high quality" random number generators.
Abstract: The Wolff algorithm is now accepted as the best cluster-flipping Monte Carlo algorithm for beating "critical slowing down." We show how this method can yield incorrect answers due to subtle correlations in "high quality" random number generators.
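
For reference, a minimal Wolff update for the 2D Ising model; every add-or-reject decision consumes a random number, which is where generator correlations can enter (numpy's default generator here is just a stand-in):

```python
import numpy as np

rng = np.random.default_rng(7)

def wolff_update(spins, beta):
    """One Wolff cluster flip for the 2D Ising model (J = 1): grow a cluster
    from a random seed, adding each aligned neighbor with probability
    1 - exp(-2*beta), then flip the whole cluster."""
    L = spins.shape[0]
    p_add = 1.0 - np.exp(-2.0 * beta)
    seed = (rng.integers(L), rng.integers(L))
    target = spins[seed]
    cluster, stack = {seed}, [seed]
    while stack:
        i, j = stack.pop()
        for ni, nj in ((i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)):
            ni, nj = ni % L, nj % L                  # periodic boundaries
            if spins[ni, nj] == target and (ni, nj) not in cluster \
                    and rng.random() < p_add:
                cluster.add((ni, nj))
                stack.append((ni, nj))
    for i, j in cluster:
        spins[i, j] = -target
    return len(cluster)

L, beta = 32, 0.4406868  # near the critical coupling of the square lattice
spins = rng.choice([-1, 1], size=(L, L))
for _ in range(1000):
    wolff_update(spins, beta)
print("|m| =", abs(spins.sum()) / L**2)
```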

Journal ArticleDOI
TL;DR: This volume surveys Monte Carlo simulation in condensed-matter physics, from the vectorisation and parallelisation of programs for lattice models on supercomputers to applications including random growth processes, classical and quantum fluids, macromolecules, percolation, and spin glasses.
Abstract: Vectorisation of Monte Carlo programs for lattice models using supercomputers.- Parallel algorithms for statistical physics problems.- New Monte Carlo methods for improved efficiency of computer simulations in statistical mechanics.- Simulation of random growth processes.- Recent progress in the simulation of classical fluids.- Monte Carlo techniques for quantum fluids, solids and droplets.- Quantum lattice problems.- Simulations of macromolecules.- Percolation, critical phenomena in dilute magnets, cellular automata and related problems.- Interfaces, wetting phenomena, incommensurate phases.- Spin glasses, orientational glasses and random field systems.- Recent developments in the Monte Carlo simulation of condensed matter.

Journal ArticleDOI
TL;DR: Gibbs Ensemble Monte Carlo (GEMC) as mentioned in this paper is a widely used Monte Carlo method for direct determination of phase coexistence in fluids, which requires only a single simulation per coexistence point.
Abstract: This paper provides an extensive review of the literature on the Gibbs ensemble Monte Carlo method for direct determination of phase coexistence in fluids. The Gibbs ensemble technique is based on performing a simulation in two distinct regions in a way that ensures that the conditions of phase coexistence are satisfied in a statistical sense. Contrary to most other available techniques for this purpose, such as thermodynamic integration, grand canonical Monte Carlo or Widom test particle insertions, the Gibbs ensemble technique involves only a single simulation per coexistence point. A significant body of literature now exists on the method, its theoretical foundations, and proposed modifications for efficient determination of equilibria involving dense fluids and complex intermolecular potentials. Some practical aspects of Gibbs ensemble simulation are also discussed in this review. Applications of the technique to date range from studies of simple model potentials (for example Lennard–Jones, s...