
Showing papers on "Monte Carlo method published in 2001"


BookDOI
01 Jan 2001
TL;DR: This book presents the first comprehensive treatment of Monte Carlo techniques, including convergence results and applications to tracking, guidance, automated target recognition, aircraft navigation, robot navigation, econometrics, financial modeling, neural networks, optimal control, optimal filtering, communications, reinforcement learning, signal enhancement, model averaging and selection.
Abstract: Monte Carlo methods are revolutionizing the on-line analysis of data in fields as diverse as financial modeling, target tracking and computer vision. These methods, appearing under the names of bootstrap filters, condensation, optimal Monte Carlo filters, particle filters and survival of the fittest, have made it possible to solve numerically many complex, non-standard problems that were previously intractable. This book presents the first comprehensive treatment of these techniques, including convergence results and applications to tracking, guidance, automated target recognition, aircraft navigation, robot navigation, econometrics, financial modeling, neural networks, optimal control, optimal filtering, communications, reinforcement learning, signal enhancement, model averaging and selection, computer vision, semiconductor design, population biology, dynamic Bayesian networks, and time series analysis. This will be of great value to students, researchers and practitioners, who have some basic knowledge of probability. Arnaud Doucet received the Ph.D. degree from the University of Paris-XI Orsay in 1997. From 1998 to 2000, he conducted research at the Signal Processing Group of Cambridge University, UK. He is currently an assistant professor at the Department of Electrical Engineering of Melbourne University, Australia. His research interests include Bayesian statistics, dynamic models and Monte Carlo methods. Nando de Freitas obtained a Ph.D. degree in information engineering from Cambridge University in 1999. He is presently a research associate with the artificial intelligence group of the University of California at Berkeley. His main research interests are in Bayesian statistics and the application of on-line and batch Monte Carlo methods to machine learning. Neil Gordon obtained a Ph.D. in Statistics from Imperial College, University of London in 1993.
He is with the Pattern and Information Processing group at the Defence Evaluation and Research Agency in the United Kingdom. His research interests are in time series, statistical data analysis, and pattern recognition with a particular emphasis on target tracking and missile guidance.

6,574 citations
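The propagate-weight-resample cycle shared by the variants named above (bootstrap filters, condensation, particle filters, survival of the fittest) can be sketched in a few lines. The toy linear-Gaussian state-space model below, and all its constants, are illustrative assumptions, not an example taken from the book:

```python
import math
import random

def bootstrap_particle_filter(observations, n_particles=500, seed=0):
    """Bootstrap particle filter for the toy state-space model
    x_t = 0.9 x_{t-1} + N(0, 0.5),  y_t = x_t + N(0, 0.5)."""
    rng = random.Random(seed)
    particles = [rng.gauss(0.0, 1.0) for _ in range(n_particles)]
    estimates = []
    for y in observations:
        # 1. Propagate each particle through the state transition.
        particles = [0.9 * x + rng.gauss(0.0, 0.5) for x in particles]
        # 2. Weight each particle by the observation likelihood p(y | x).
        weights = [math.exp(-0.5 * ((y - x) / 0.5) ** 2) for x in particles]
        total = sum(weights)
        weights = [w / total for w in weights]
        # 3. Report the posterior-mean estimate, then resample
        #    ("survival of the fittest") to avoid weight degeneracy.
        estimates.append(sum(w * x for w, x in zip(weights, particles)))
        particles = rng.choices(particles, weights=weights, k=n_particles)
    return estimates
```

The resampling step is what distinguishes these filters from plain sequential importance sampling: it discards low-weight particles before their weights collapse to zero.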


Journal ArticleDOI
TL;DR: In this article, global sensitivity indices for rather complex mathematical models can be efficiently computed by Monte Carlo (or quasi-Monte Carlo) methods, which are used for estimating the influence of individual variables or groups of variables on the model output.

3,921 citations
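The Monte Carlo estimation of global sensitivity indices can be illustrated with a pick-freeze sketch on the unit hypercube. The estimator layout and the linear test function below are standard textbook choices, not taken from the paper:

```python
import random

def sobol_first_order(f, dim, n=20000, seed=1):
    """Pick-freeze Monte Carlo estimate of the first-order Sobol indices
    of f on the unit hypercube [0, 1]^dim."""
    rng = random.Random(seed)
    A = [[rng.random() for _ in range(dim)] for _ in range(n)]
    B = [[rng.random() for _ in range(dim)] for _ in range(n)]
    fA = [f(a) for a in A]
    mean = sum(fA) / n
    var = sum((v - mean) ** 2 for v in fA) / n
    indices = []
    for i in range(dim):
        # Evaluate f on B with coordinate i "frozen" to the value from A,
        # so the pair (f(A), f(B_i)) shares only the i-th input.
        fBi = [f(b[:i] + [a[i]] + b[i + 1:]) for a, b in zip(A, B)]
        meanB = sum(fBi) / n
        cov = sum(x * y for x, y in zip(fA, fBi)) / n - mean * meanB
        indices.append(cov / var)   # S_i = Var(E[f | x_i]) / Var(f)
    return indices
```

For f(x) = x1 + 2*x2 with independent uniform inputs the analytic indices are 0.2 and 0.8, which the estimator recovers to Monte Carlo accuracy.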


Journal ArticleDOI
TL;DR: In this paper, the authors describe variational and fixed-node diffusion quantum Monte Carlo methods and how they may be used to calculate the properties of many-electron systems and describe a selection of applications to ground and excited states of solids and clusters.
Abstract: This article describes the variational and fixed-node diffusion quantum Monte Carlo methods and how they may be used to calculate the properties of many-electron systems. These stochastic wave-function-based approaches provide a very direct treatment of quantum many-body effects and serve as benchmarks against which other techniques may be compared. They complement the less demanding density-functional approach by providing more accurate results and a deeper understanding of the physics of electronic correlation in real materials. The algorithms are intrinsically parallel, and currently available high-performance computers allow applications to systems containing a thousand or more electrons. With these tools one can study complicated problems such as the properties of surfaces and defects, while including electron correlation effects with high precision. The authors provide a pedagogical overview of the techniques and describe a selection of applications to ground and excited states of solids and clusters.

1,957 citations
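The variational Monte Carlo half of the method can be illustrated on the 1D harmonic oscillator, where the optimal trial wavefunction is known exactly. This is a didactic sketch (the units, trial form, and step size are assumptions), far from the many-electron setting of the review:

```python
import math
import random

def vmc_energy(alpha, n_steps=20000, seed=2):
    """Variational Monte Carlo energy of the 1D harmonic oscillator
    (hbar = m = omega = 1) with trial wavefunction psi = exp(-alpha x^2).
    Metropolis sampling of |psi|^2, averaging the local energy
    E_L(x) = alpha + x^2 (1/2 - 2 alpha^2)."""
    rng = random.Random(seed)
    x, total = 0.0, 0.0
    for _ in range(n_steps):
        x_new = x + rng.uniform(-1.0, 1.0)
        # Accept with probability |psi(x_new)|^2 / |psi(x)|^2.
        if rng.random() < math.exp(-2.0 * alpha * (x_new ** 2 - x ** 2)):
            x = x_new
        total += alpha + x * x * (0.5 - 2.0 * alpha * alpha)
    return total / n_steps
```

At alpha = 1/2 the trial function is the exact ground state, the local energy is constant, and the estimate has zero variance; any other alpha gives a higher energy, which is the variational principle the method exploits.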


Journal ArticleDOI
TL;DR: A more robust Monte Carlo Localization algorithm called Mixture-MCL is developed, which integrates two complementary ways of generating samples in the estimation, and is applied to mobile robots equipped with range finders.

1,945 citations


Journal ArticleDOI
TL;DR: In this article, a set simulation approach is proposed to compute small failure probabilities encountered in reliability analysis of engineering systems, which can be expressed as a product of larger conditional failure probabilities by introducing intermediate failure events.

1,890 citations
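The identity behind the approach, writing a rare failure probability as a product of larger conditional probabilities, can be checked numerically on a toy problem. The sketch below draws the conditional samples by simple rejection, whereas practical subset simulation generates them with MCMC; the standard-normal example and level placement are illustrative assumptions:

```python
import random

def staged_failure_prob(threshold, levels, n=20000, seed=3):
    """Estimate P(X > threshold) for a standard normal X as a product of
    larger conditional probabilities over intermediate levels,
    P(X > b1) * P(X > b2 | X > b1) * ...  (the subset-simulation identity).
    Conditional samples are drawn here by rejection; real subset
    simulation would generate them with MCMC."""
    rng = random.Random(seed)
    prob, lower = 1.0, None
    for b in levels + [threshold]:
        samples = []
        while len(samples) < n:
            x = rng.gauss(0.0, 1.0)
            if lower is None or x > lower:
                samples.append(x)
        # Each factor is a comfortably large conditional probability.
        prob *= sum(1 for x in samples if x > b) / n
        lower = b
    return prob
```

With one intermediate level at 1.5, each factor is of order 0.07 and 0.02, yet their product recovers P(X > 3) = 0.00135 with far fewer samples than a direct estimate of the rare event would need.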


Journal ArticleDOI
01 Jun 2001-Genetics
TL;DR: The results show that even a single nonrecombining genetic locus can provide substantial power to test the hypothesis of no ongoing migration and/or to test models of symmetric migration between the two populations.
Abstract: A Markov chain Monte Carlo method for estimating the relative effects of migration and isolation on genetic diversity in a pair of populations from DNA sequence data is developed and tested using simulations. The two populations are assumed to be descended from a panmictic ancestral population at some time in the past and may (or may not) after that be connected by migration. The use of a Markov chain Monte Carlo method allows the joint estimation of multiple demographic parameters in either a Bayesian or a likelihood framework. The parameters estimated include the migration rate for each population, the time since the two populations diverged from a common ancestral population, and the relative size of each of the two current populations and of the common ancestral population. The results show that even a single nonrecombining genetic locus can provide substantial power to test the hypothesis of no ongoing migration and/or to test models of symmetric migration between the two populations. The use of the method is illustrated in an application to mitochondrial DNA sequence data from a fish species: the threespine stickleback (Gasterosteus aculeatus).

1,338 citations


Book ChapterDOI
01 Jan 2001
TL;DR: Many real-world data analysis tasks involve estimating unknown quantities from some given observations, and all inference on the unknown quantities is based on the posterior distribution obtained from Bayes’ theorem.
Abstract: Many real-world data analysis tasks involve estimating unknown quantities from some given observations. In most of these applications, prior knowledge about the phenomenon being modelled is available. This knowledge allows us to formulate Bayesian models, that is prior distributions for the unknown quantities and likelihood functions relating these quantities to the observations. Within this setting, all inference on the unknown quantities is based on the posterior distribution obtained from Bayes’ theorem. Often, the observations arrive sequentially in time and one is interested in performing inference on-line. It is therefore necessary to update the posterior distribution as data become available. Examples include tracking an aircraft using radar measurements, estimating a digital communications signal using noisy measurements, or estimating the volatility of financial instruments using stock market data. Computational simplicity in the form of not having to store all the data might also be an additional motivating factor for sequential methods.

1,232 citations
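Sequential updating of the posterior via Bayes' theorem can be shown exactly on a discrete state grid, before any Monte Carlo approximation is needed. The two-step predict/update cycle below is the recursion that particle filters approximate; the transition matrix and likelihood in the usage example are illustrative assumptions:

```python
def grid_bayes_filter(observations, prior, transition, likelihood):
    """Exact recursive Bayesian updating on a discrete state grid:
    predict with the transition kernel, then multiply by the likelihood
    of the new observation and renormalize (Bayes' theorem)."""
    belief = list(prior)
    k = len(belief)
    for y in observations:
        # Predict: belief'[i] = sum_j P(i | j) * belief[j].
        belief = [sum(transition[j][i] * belief[j] for j in range(k))
                  for i in range(k)]
        # Update: multiply by p(y | i) and renormalize.
        belief = [likelihood(y, i) * b for i, b in enumerate(belief)]
        total = sum(belief)
        belief = [b / total for b in belief]
    return belief
```

On a two-state chain with sticky transitions and an 80%-accurate sensor, three consistent observations drive the belief in the observed state above 0.9.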


Journal ArticleDOI
TL;DR: A hierarchical regression model for meta-analysis of studies reporting estimates of test sensitivity and specificity is described, which allows more between- and within-study variability than fixed-effect approaches, by allowing both test stringency and test accuracy to vary across studies.
Abstract: An important quality of meta-analytic models for research synthesis is their ability to account for both within- and between-study variability. Currently available meta-analytic approaches for studies of diagnostic test accuracy work primarily within a fixed-effects framework. In this paper we describe a hierarchical regression model for meta-analysis of studies reporting estimates of test sensitivity and specificity. The model allows more between- and within-study variability than fixed-effect approaches, by allowing both test stringency and test accuracy to vary across studies. It is also possible to examine the effects of study specific covariates. Estimates are computed using Markov Chain Monte Carlo simulation with publicly available software (BUGS). This estimation method allows flexibility in the choice of summary statistics. We demonstrate the advantages of this modelling approach using a recently published meta-analysis comparing three tests used to detect nodal metastasis of cervical cancer.

1,232 citations



Journal ArticleDOI
TL;DR: The authors fit nonlinearly mean-reverting models to real dollar exchange rates over the post-Bretton Woods period, consistent with a theoretical literature on transactions costs in international arbitrage.
Abstract: We fit nonlinearly mean-reverting models to real dollar exchange rates over the post-Bretton Woods period, consistent with a theoretical literature on transactions costs in international arbitrage. The half lives of real exchange rate shocks, calculated through Monte Carlo integration, imply faster adjustment speeds than hitherto recorded. Monte Carlo simulations reconcile our results with the large empirical literature on unit roots in real exchange rates by showing that when the real exchange rate is nonlinearly mean reverting, standard univariate unit root tests have low power, while multivariate tests have much higher power to reject a false null hypothesis.

1,122 citations


Journal ArticleDOI
TL;DR: An efficient Monte Carlo algorithm is described that uses a random walk in energy space to obtain a very accurate estimate of the density of states for classical statistical models and overcomes the tunneling barrier between coexisting phases at first-order phase transitions.
Abstract: We describe an efficient Monte Carlo algorithm using a random walk in energy space to obtain a very accurate estimate of the density of states for classical statistical models. The density of states is modified at each step when the energy level is visited to produce a flat histogram. By carefully controlling the modification factor, we allow the density of states to converge to the true value very quickly, even for large systems. From the density of states at the end of the random walk, we can estimate thermodynamic quantities such as internal energy and specific heat capacity by calculating canonical averages at any temperature. Using this method, we not only can avoid repeating simulations at multiple temperatures, but we can also estimate the free energy and entropy, quantities that are not directly accessible by conventional Monte Carlo simulations. This algorithm is especially useful for complex systems with a rough landscape since all possible energy levels are visited with the same probability. As with the multicanonical Monte Carlo technique, our method overcomes the tunneling barrier between coexisting phases at first-order phase transitions. In this paper, we apply our algorithm to both first- and second-order phase transitions to demonstrate its efficiency and accuracy. We obtained direct simulational estimates for the density of states for two-dimensional ten-state Potts models on lattices up to 200 x 200 and Ising models on lattices up to 256 x 256. Our simulational results are compared to both exact solutions and existing numerical data obtained using other methods. Applying this approach to a three-dimensional +/-J spin-glass model, we estimate the internal energy and entropy at zero temperature; and, using a two-dimensional random walk in energy and order-parameter space, we obtain the (rough) canonical distribution and energy landscape in order-parameter space. 
Preliminary data suggest that the glass transition temperature is about 1.2 and that better estimates can be obtained with more extensive application of the method. This simulational method is not restricted to energy space and can be used to calculate the density of states for any parameter by a random walk in the corresponding space.
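The flat-histogram random walk is easy to demonstrate on a toy system with a known density of states. The sketch below applies the same modification-factor schedule (halving f when the histogram is flat) to the sum of two dice, an illustrative assumption rather than one of the paper's spin models:

```python
import math
import random

def wang_landau_two_dice(f_final=1e-6, flatness=0.9, seed=4):
    """Wang-Landau random walk in 'energy' space for a toy system whose
    energy is the sum of two dice. Builds log g(E); the true density of
    states is 1,2,3,4,5,6,5,4,3,2,1 for E = 2..12 (up to a constant)."""
    rng = random.Random(seed)
    energies = range(2, 13)
    log_g = {e: 0.0 for e in energies}
    hist = {e: 0 for e in energies}
    dice = [1, 1]
    log_f = 1.0
    while log_f > f_final:
        for _ in range(1000):
            i = rng.randrange(2)
            old = dice[i]
            dice[i] = rng.randint(1, 6)
            e_new = sum(dice)
            e_old = e_new - dice[i] + old
            # Accept with probability min(1, g(E_old) / g(E_new)),
            # which drives the walk toward rarely visited energies.
            if rng.random() >= math.exp(log_g[e_old] - log_g[e_new]):
                dice[i] = old
            e = sum(dice)
            log_g[e] += log_f           # modify g at the visited energy
            hist[e] += 1
        if min(hist.values()) > flatness * sum(hist.values()) / len(hist):
            log_f /= 2.0                # refine the modification factor
            hist = {e: 0 for e in hist}
    return log_g
```

The converged log g reproduces the known ratios, e.g. g(7)/g(2) near 6, and with g in hand any canonical average over the dice "system" follows by reweighting, just as in the paper.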

Book
01 Jan 2001
TL;DR: Fisheries and Modelling Fish Population Dynamics The Objectives of Stock Assessment Characteristics of Mathematical Models Types of Model Structure Simple Population Models Introduction Assumptions-Explicit and Implicit Density-Independent Growth Density-Dependent Models Responses to Fishing Pressure The Logistic Model in Fisheries Age-Structured Models Simple Yield-per-Recruit Model Parameter Estimation Models and Data Least Squared Residuals Nonlinear Estimation Likelihood Bayes' Theorem
Abstract: Fisheries and Modelling Fish Population Dynamics The Objectives of Stock Assessment Characteristics of Mathematical Models Types of Model Structure Simple Population Models Introduction Assumptions-Explicit and Implicit Density-Independent Growth Density-Dependent Models Responses to Fishing Pressure The Logistic Model in Fisheries Age-Structured Models Simple Yield-per-Recruit Model Parameter Estimation Models and Data Least Squared Residuals Nonlinear Estimation Likelihood Bayes' Theorem Concluding Remarks Computer-Intensive Methods Introduction Resampling Randomization Tests Jackknife Methods Bootstrapping Methods Monte Carlo Methods Bayesian Methods Relationships between Methods Computer Programming Randomization Tests Introduction Hypothesis Testing Randomization of Structured Data Statistical Bootstrap Methods The Jackknife and Pseudo Values The Bootstrap Bootstrap Statistics Bootstrap Confidence Intervals Concluding Remarks Monte Carlo Modelling Monte Carlo Models Practical Requirements A Simple Population Model A Non-Equilibrium Catch Curve Concluding Remarks Characterization of Uncertainty Introduction Asymptotic Standard Errors Percentile Confidence Intervals Using Likelihoods Likelihood Profile Confidence Intervals Percentile Likelihood Profiles for Model Outputs Markov Chain Monte Carlo (MCMC) Conclusion Growth of Individuals Growth in Size von Bertalanffy Growth Model Alternatives to von Bertalanffy Comparing Growth Curves Concluding Remarks Stock Recruitment Relationships Recruitment and Fisheries Stock Recruitment Biology Beverton-Holt Recruitment Model Ricker Model Deriso's Generalized Model Residual Error Structure The Impact of Measurement Errors Environmental Influences Recruitment in Age-Structured Models Concluding Remarks Surplus Production Models Introduction Equilibrium Methods Surplus Production Models Observation Error Estimates Beyond Simple Models Uncertainty of Parameter Estimates Risk Assessment Projections Practical 
Considerations Conclusions Age-Structured Models Types of Models Cohort Analysis Statistical Catch-at-Age Concluding Remarks Size-Based Models Introduction The Model Structure Conclusion Appendix: The Use of Excel in Fisheries Bibliography Index

Journal ArticleDOI
TL;DR: This work proposes a new simulation technique for tracking moving target distributions that, unlike existing particle filters, does not suffer from progressive degeneration as the target sequence evolves.
Abstract: Markov chain Monte Carlo (MCMC) sampling is a numerically intensive simulation technique which has greatly improved the practicality of Bayesian inference and prediction. However, MCMC sampling is too slow to be of practical use in problems involving a large number of posterior (target) distributions, as in dynamic modelling and predictive model selection. Alternative simulation techniques for tracking moving target distributions, known as particle filters, which combine importance sampling, importance resampling and MCMC sampling, tend to suffer from a progressive degeneration as the target sequence evolves. We propose a new technique, based on these same simulation methodologies, which does not suffer from this progressive degeneration.

Journal ArticleDOI
TL;DR: This paper presents efficient simulation-based algorithms called particle filters to solve the optimal filtering problem as well as the optimal fixed-lag smoothing problem for jump Markov linear systems.
Abstract: Jump Markov linear systems (JMLS) are linear systems whose parameters evolve with time according to a finite state Markov chain. In this paper, our aim is to recursively compute optimal state estimates for this class of systems. We present efficient simulation-based algorithms called particle filters to solve the optimal filtering problem as well as the optimal fixed-lag smoothing problem. Our algorithms combine sequential importance sampling, a selection scheme, and Markov chain Monte Carlo methods. They use several variance reduction methods to make the most of the statistical structure of JMLS. Computer simulations are carried out to evaluate the performance of the proposed algorithms. The problems of on-line deconvolution of impulsive processes and of tracking a maneuvering target are considered. It is shown that our algorithms outperform the current methods.

Journal ArticleDOI
TL;DR: The Monte Carlo cross-validation developed in this paper is an asymptotically consistent method for determining the number of components in a calibration model; it can avoid an unnecessarily large model and therefore decreases the risk of over-fitting the calibration model.

Journal ArticleDOI
TL;DR: Three new generalized-ensemble algorithms that combine the merits of the multicanonical algorithm, simulated tempering, and replica-exchange method are presented, which are tested with short peptide systems.
Abstract: In complex systems with many degrees of freedom such as peptides and proteins, there exists a huge number of local-minimum-energy states. Conventional simulations in the canonical ensemble are of little use, because they tend to get trapped in states of these energy local minima. A simulation in generalized ensemble performs a random walk in potential energy space and can overcome this difficulty. From only one simulation run, one can obtain canonical-ensemble averages of physical quantities as functions of temperature by the single-histogram and/or multiple-histogram reweighting techniques. In this article we review uses of the generalized-ensemble algorithms in biomolecular systems. Three well-known methods, namely, multicanonical algorithm, simulated tempering, and replica-exchange method, are described first. Both Monte Carlo and molecular dynamics versions of the algorithms are given. We then present three new generalized-ensemble algorithms that combine the merits of the above methods. The effectiveness of the methods for molecular simulations in the protein folding problem is tested with short peptide systems. © 2001 John Wiley & Sons, Inc. Biopolymers (Pept Sci) 60: 96–123, 2001
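The replica-exchange step can be sketched on a one-dimensional double well, where a single low-temperature walker would stay trapped in one minimum. The potential, temperature ladder, and step sizes below are illustrative assumptions, not the peptide systems of the review:

```python
import math
import random

def parallel_tempering(n_sweeps=5000, seed=5):
    """Replica exchange on the double well U(x) = (x^2 - 1)^2: one
    Metropolis walker per temperature, with swap attempts between
    neighbouring temperatures. Returns the fraction of sweeps the
    coldest replica spends at x < 0 and its mean x^2."""
    rng = random.Random(seed)
    temps = [0.05, 0.2, 0.8]
    xs = [1.0] * len(temps)
    U = lambda x: (x * x - 1.0) ** 2
    frac_neg = mean_x2 = 0.0
    for _ in range(n_sweeps):
        for i, T in enumerate(temps):
            prop = xs[i] + rng.uniform(-0.5, 0.5)
            if rng.random() < math.exp(min(0.0, -(U(prop) - U(xs[i])) / T)):
                xs[i] = prop
        # Swap attempt between neighbouring temperatures, accepted with
        # probability min(1, exp((beta_j - beta_k)(U_j - U_k))).
        j = rng.randrange(len(temps) - 1)
        delta = (1 / temps[j] - 1 / temps[j + 1]) * (U(xs[j]) - U(xs[j + 1]))
        if rng.random() < math.exp(min(0.0, delta)):
            xs[j], xs[j + 1] = xs[j + 1], xs[j]
        frac_neg += xs[0] < 0.0
        mean_x2 += xs[0] ** 2
    return frac_neg / n_sweeps, mean_x2 / n_sweeps
```

At T = 0.05 alone the barrier crossing probability per step is of order exp(-20), so an isolated cold walker essentially never leaves its starting well; with the exchange moves the cold replica samples both wells.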

Journal ArticleDOI
TL;DR: Two Monte Carlo simulations are presented that compare the efficacy of the Hedges and colleagues, Rosenthal-Rubin, and Hunter-Schmidt methods for combining correlation coefficients for cases in which population effect sizes were both fixed and variable.
Abstract: The efficacy of the Hedges and colleagues, Rosenthal-Rubin, and Hunter-Schmidt methods for combining correlation coefficients was tested for cases in which population effect sizes were both fixed and variable. After a brief tutorial on these meta-analytic methods, the author presents two Monte Carlo simulations that compare these methods for cases in which the number of studies in the meta-analysis and the average sample size of studies were varied. In the fixed case the methods produced comparable estimates of the average effect size; however, the Hunter-Schmidt method failed to control the Type I error rate for the associated significance tests. In the variable case, for both the Hedges and colleagues and Hunter-Schmidt methods, Type I error rates were not controlled for meta-analyses including 15 or fewer studies and the probability of detecting small effects was less than .3. Some practical recommendations are made about the use of meta-analysis.
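For reference, the fixed-effect combination of correlations via Fisher's z transform (the approach underlying the Hedges-and-colleagues method) takes only a few lines. The implementation below is a standard textbook sketch, not the simulation code of the study:

```python
import math

def combine_correlations(rs, ns):
    """Fixed-effect synthesis of correlation coefficients via Fisher's z:
    transform each r to z = atanh(r), take the inverse-variance weighted
    mean (Var(z) ~ 1/(n - 3)), and transform back, with a 95% CI."""
    zs = [math.atanh(r) for r in rs]
    ws = [n - 3 for n in ns]                 # inverse-variance weights
    z_bar = sum(w * z for w, z in zip(ws, zs)) / sum(ws)
    se = 1.0 / math.sqrt(sum(ws))
    ci = (math.tanh(z_bar - 1.96 * se), math.tanh(z_bar + 1.96 * se))
    return math.tanh(z_bar), ci
```

Because the weighting happens on the z scale, studies with larger samples dominate the combined estimate, which is the behaviour the Monte Carlo comparisons in the paper probe.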

Book ChapterDOI
06 Jun 2001
TL;DR: Applications to stochastic solution of integral equations are given for the case where an approximation of the full solution function or a family of functionals of the solution depending on a parameter of a certain dimension is sought.
Abstract: We study Monte Carlo approximations to high dimensional parameter dependent integrals. We survey the multilevel variance reduction technique introduced by the author in [4] and present extensions and new developments of it. The tools needed for the convergence analysis of vector-valued Monte Carlo methods are discussed, as well. Applications to stochastic solution of integral equations are given for the case where an approximation of the full solution function or a family of functionals of the solution depending on a parameter of a certain dimension is sought.

Book
18 May 2001
TL;DR: In this book, Monte Carlo simulations of layered media, rough surfaces, and two- and three-dimensional dense media are used to solve electromagnetic scattering problems, including the detection of buried objects.
Abstract: Preface. Monte Carlo Simulations of Layered Media. Integral Equation Formulations and Basic Numerical Methods. Scattering and Emission By a Periodic Rough Surface. Random Rough Surface Simulations. Fast Computational Methods for Solving Rough Surface Scattering Problems. Three-Dimensional Wave Scattering from Two-Dimensional Rough Surfaces. Volume Scattering Simulations. Particle Positions for Dense Media Characterizations and Simulations. Simulations of Two-Dimensional Dense Media. Dense Media Models and Three-Dimensional Simulations. Angular Correlation Function and Detection of Buried Object. Multiple Scattering by Cylinders in the Presence of Boundaries. Electromagnetic Waves Scattering By Vegetation. Index.

Journal ArticleDOI
TL;DR: In this paper, a systematic expansion of induced gluon radiation associated with jet production in a dense QCD plasma is derived using a reaction operator formalism, which leads to a simple algebraic proof of the color triviality of single inclusive distributions and a solvable set of recursion relations.

Journal ArticleDOI
TL;DR: This paper returns to the formulas developed in [1] concerning the “greeks” used in European options, and answers the question of optimal weight functional in the sense of minimal variance.
Abstract: This paper presents an original probabilistic method for the numerical computations of Greeks (i.e. price sensitivities) in finance. Our approach is based on the integration-by-parts formula, which lies at the core of the theory of variational stochastic calculus, as developed in the Malliavin calculus. The Greeks formulae, both with respect to initial conditions and for smooth perturbations of the local volatility, are provided for general discontinuous path-dependent payoff functionals of multidimensional diffusion processes. We illustrate the results by applying the formula to exotic European options in the framework of the Black and Scholes model. Our method is compared to the Monte Carlo finite difference approach and turns out to be very efficient in the case of discontinuous payoff functionals.
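As a point of comparison, a Monte Carlo Greek can be computed with the pathwise estimator when the payoff is smooth enough; the Malliavin weights of the paper are designed precisely for the discontinuous payoffs where such estimators break down. The model parameters below are illustrative assumptions:

```python
import math
import random

def bs_delta_pathwise(S0=100.0, K=100.0, r=0.05, sigma=0.2, T=1.0,
                      n_paths=200000, seed=7):
    """Monte Carlo delta of a European call under Black-Scholes using the
    pathwise estimator  e^{-rT} 1{S_T > K} S_T / S0,  obtained by
    differentiating the discounted payoff inside the expectation."""
    rng = random.Random(seed)
    drift = (r - 0.5 * sigma ** 2) * T
    vol = sigma * math.sqrt(T)
    acc = 0.0
    for _ in range(n_paths):
        # Exact simulation of geometric Brownian motion at maturity.
        ST = S0 * math.exp(drift + vol * rng.gauss(0.0, 1.0))
        if ST > K:
            acc += math.exp(-r * T) * ST / S0
    return acc / n_paths
```

The estimate can be checked against the closed-form Black-Scholes delta N(d1); for the at-the-money parameters above, d1 = 0.35.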

Journal ArticleDOI
01 Dec 2001-Genetics
TL;DR: Methods of estimating two-locus sample probabilities under a neutral model are extended in several ways and properties of a maximum-likelihood estimator of the recombination parameter based on independent linked pairs of sites are obtained.
Abstract: Methods of estimating two-locus sample probabilities under a neutral model are extended in several ways. Estimation of sample probabilities is described when the ancestral or derived status of each allele is specified. In addition, probabilities for two-locus diploid samples are provided. A method for using these two-locus probabilities to test whether an observed level of linkage disequilibrium is unusually large or small is described. In addition, properties of a maximum-likelihood estimator of the recombination parameter based on independent linked pairs of sites are obtained. A composite-likelihood estimator, for more than two linked sites, is also examined and found to work as well, or better, than other available ad hoc estimators. Linkage disequilibrium in the Xq28 and Xq25 region of humans is analyzed in a sample of Europeans (CEPH). The estimated recombination parameter is about five times smaller than one would expect under an equilibrium neutral model.

Book
01 Feb 2001
TL;DR: The theoretical foundations of advanced mean field methods are covered, the relation between the different approaches is explored, the quality of the approximation obtained is examined, and their application to various areas of probabilistic modeling is demonstrated.
Abstract: A major problem in modern probabilistic modeling is the huge computational complexity involved in typical calculations with multivariate probability distributions when the number of random variables is large. Because exact computations are infeasible in such cases and Monte Carlo sampling techniques may reach their limits, there is a need for methods that allow for efficient approximate computations. One of the simplest approximations is based on the mean field method, which has a long history in statistical physics. The method is widely used, particularly in the growing field of graphical models. Researchers from disciplines such as statistical physics, computer science, and mathematical statistics are studying ways to improve this and related methods and are exploring novel application areas. Leading approaches include the variational approach, which goes beyond factorizable distributions to achieve systematic improvements; the TAP (Thouless-Anderson-Palmer) approach, which incorporates correlations by including effective reaction terms in the mean field theory; and the more general methods of graphical models. Bringing together ideas and techniques from these diverse disciplines, this book covers the theoretical foundations of advanced mean field methods, explores the relation between the different approaches, examines the quality of the approximation obtained, and demonstrates their application to various areas of probabilistic modeling.

Journal ArticleDOI
TL;DR: In this article, a method for combining QCD matrix elements and parton showers in Monte Carlo simulations of hadronic final states in $e^+e^-$ annihilation is proposed, which provides a leading-order description of hard multi-jet configurations together with jet fragmentation.
Abstract: We propose a method for combining QCD matrix elements and parton showers in Monte Carlo simulations of hadronic final states in $e^+e^-$ annihilation. The matrix element and parton shower domains are separated at some value $y_{ini}$ of the jet resolution, defined according to the $k_T$-clustering algorithm. The matrix elements are modified by Sudakov form factors and the parton showers are subjected to a veto procedure to cancel dependence on $y_{ini}$ to next-to-leading logarithmic accuracy. The method provides a leading-order description of hard multi-jet configurations together with jet fragmentation, while avoiding the most serious problems of double counting. We present first results of an approximate implementation using the event generator APACIC++.

Journal ArticleDOI
TL;DR: The authors use the setting of singular perturbations, which allows them to study both weak and strong interactions among the states of the chain and give the asymptotic behavior of many controlled stochastic dynamic systems when the perturbation parameter tends to 0.
Abstract: This is an important contribution to a modern area of applied probability that deals with nonstationary Markov chains in continuous time. This area is becoming increasingly useful in engineering, economics, communication theory, active networking, and so forth, where the Markov-chain system is subject to frequent fluctuations with clusters of states such that the chain fluctuates very rapidly among different states of a cluster but changes less rapidly from one cluster to another. The authors use the setting of singular perturbations, which allows them to study both weak and strong interactions among the states of the chain. This leads to simplifications through the averaging principle, aggregation, and decomposition. The main results include asymptotic expansions of the corresponding probability distributions, occupation measures, limiting normality, and exponential rates. These results give the asymptotic behavior of many controlled stochastic dynamic systems when the perturbation parameter tends to 0. The classical analytical method employs the asymptotic expansions of one-dimensional distributions of the Markov chain as solutions to a system of singularly perturbed ordinary differential equations. Indeed, the asymptotic behavior of solutions of such equations is well studied and understood. A more probabilistic approach also used by the authors is based on the tightness of the family of probability measures generated by the singularly perturbed Markov chain with the corresponding weak convergence properties. Both of these methods are illustrated by practical dynamic optimization problems, in particular by hierarchical production planning in a manufacturing system. An important contribution is the last chapter, Chapter 10, which describes numerical methods to solve various control and optimization problems involving Markov chains.
Altogether the monograph consists of three parts, with Part I containing necessary, technically rather demanding facts about Markov processes (which in the nonstationary case are defined through martingales). Part II derives the mentioned asymptotic expansions, and Part III deals with several applications, including Markov decision processes and optimal control of stochastic dynamic systems. This technically demanding book may be out of reach of many readers of Technometrics. However, the use of Markov processes has become common for numerous real-life complex stochastic systems. To understand the behavior of these systems, the sophisticated mathematical methods described in this book may be indispensable.

Journal ArticleDOI
TL;DR: Joint Bayesian estimation of all latent variables, model parameters, and parameters that determine the probability law of the latent process is carried out by a new MCMC method called permutation sampling.
Abstract: Bayesian estimation of a very general model class, where the distribution of the observations depends on a latent process taking values in a discrete state space, is discussed in this article. This model class covers finite mixture modeling, Markov switching autoregressive modeling, and dynamic linear models with switching. The consequences the unidentifiability of this type of model has on Markov chain Monte Carlo (MCMC) estimation are explicitly dealt with. Joint Bayesian estimation of all latent variables, model parameters, and parameters that determine the probability law of the latent process is carried out by a new MCMC method called permutation sampling. The permutation sampler first samples from the unconstrained posterior, which often can be done in a convenient multimove manner, and then applies a permutation of the current labeling of the states of the latent process. In a first run, the random permutation sampler selects the permutation randomly. The MCMC output of the random permutation s...

Journal ArticleDOI
TL;DR: An efficient algorithm is described that can measure an observable quantity in a percolation system for all values of the site or bond occupation probability from zero to one in an amount of time that scales linearly with the size of the system.
Abstract: We describe in detail an efficient algorithm for studying site or bond percolation on any lattice. The algorithm can measure an observable quantity in a percolation system for all values of the site or bond occupation probability from zero to one in an amount of time that scales linearly with the size of the system. We demonstrate our algorithm by using it to investigate a number of issues in percolation theory, including the position of the percolation transition for site percolation on the square lattice, the stretched exponential behavior of spanning probabilities away from the critical point, and the size of the giant component for site percolation on random graphs.
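The key ingredients of this algorithm (the Newman-Ziff method) are a random occupation order plus a weighted union-find structure, which makes each cluster merge nearly constant time. The sketch below is a simplified illustration rather than the authors' code: it records the largest cluster size after each site is occupied on an L x L open-boundary square lattice. Convolving such microcanonical results with a binomial distribution then yields observables at any occupation probability p.

```python
import random

def newman_ziff_site(L, seed=0):
    """Newman-Ziff sweep (sketch): occupy the L*L sites of a square
    lattice one at a time in random order, merging clusters with
    weighted union-find, and return the largest cluster size after
    each addition (microcanonical results for n = 1..L*L sites)."""
    rng = random.Random(seed)
    N = L * L
    order = list(range(N))
    rng.shuffle(order)
    parent = [-1] * N        # -1 marks unoccupied; occupied roots self-point
    size = [0] * N           # cluster size, valid at roots
    occupied = [False] * N

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    largest = 0
    out = []
    for s in order:
        occupied[s] = True
        parent[s] = s
        size[s] = 1
        x, y = s % L, s // L
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nx < L and 0 <= ny < L:
                n = ny * L + nx
                if occupied[n]:
                    ra, rb = find(s), find(n)
                    if ra != rb:             # union by size
                        if size[ra] < size[rb]:
                            ra, rb = rb, ra
                        parent[rb] = ra
                        size[ra] += size[rb]
        largest = max(largest, size[find(s)])
        out.append(largest)
    return out
```

The linear scaling claimed in the abstract comes from the fact that the whole sweep over n = 1..N performs only O(N) amortized union-find operations.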

Journal ArticleDOI
TL;DR: In this article, a comprehensive Monte Carlo solution of the Boltzmann Transport Equation (BTE) for phonons is presented. Unlike past approaches, the method does not neglect dispersion or the interactions between the longitudinal and transverse polarizations of phonon propagation.
Abstract: The Boltzmann Transport Equation (BTE) for phonons best describes the heat flow in solid nonmetallic thin films. The BTE, in its most general form, however, is difficult to solve analytically or even numerically using deterministic approaches. Past research has enabled its solution by neglecting important effects such as dispersion and interactions between the longitudinal and transverse polarizations of phonon propagation. In this article, a comprehensive Monte Carlo solution technique of the BTE is presented. The method accounts for dual polarizations of phonon propagation, and non-linear dispersion relationships. Scattering by various mechanisms is treated individually. Transition between the two polarization branches, and creation and destruction of phonons due to scattering is taken into account. The code has been verified and evaluated by close examination of its ability or failure to capture various regimes of phonon transport ranging from diffusive to the ballistic limit. Validation results show close agreement with experimental data for silicon thin films with and without doping. Simulation results show that above 100 K, transverse acoustic phonons are the primary carriers of energy in silicon.
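The drift-and-scatter core of a phonon Monte Carlo code can be illustrated in miniature. The sketch below is deliberately simplified (one dimension, a single polarization, a constant relaxation time `tau`, no dispersion) and is not the paper's model: a particle flies ballistically between scattering events, and at each time step it scatters with probability 1 - exp(-dt/tau), which randomizes its direction.

```python
import math
import random

def mc_free_flight(tau, dt, n_steps, v, seed=0):
    """Toy 1-D drift-and-scatter Monte Carlo loop (illustrative only):
    a phonon moves with speed v; each step it scatters with
    probability 1 - exp(-dt/tau), resetting its direction at random.
    Returns the final displacement."""
    rng = random.Random(seed)
    p_scatter = 1.0 - math.exp(-dt / tau)
    x, direction = 0.0, 1.0
    for _ in range(n_steps):
        x += direction * v * dt          # ballistic free flight
        if rng.random() < p_scatter:     # relaxation-time scattering
            direction = rng.choice((-1.0, 1.0))
    return x
```

With `tau` much larger than the total simulated time the transport is ballistic; with `tau` much smaller it becomes diffusive, mirroring the two limiting regimes against which the paper's code is evaluated.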

Journal ArticleDOI
TL;DR: In this paper, a comprehensive study of the transport dynamics of electrons in the ternary compounds Al/sub x/Ga/sub 1-x/N and In/sub x/Ga/sub 1-x/N is presented, which includes all of the major scattering mechanisms.
Abstract: We present a comprehensive study of the transport dynamics of electrons in the ternary compounds, Al/sub x/Ga/sub 1-x/N and In/sub x/Ga/sub 1-x/N. Calculations are made using a nonparabolic effective mass energy band model and a Monte Carlo simulation that includes all of the major scattering mechanisms. The band parameters used in the simulation are extracted from optimized pseudopotential band calculations to ensure excellent agreement with experimental information and ab initio band models. The effects of alloy scattering on the electron transport physics are examined. The steady state velocity field curves and low field mobilities are calculated for representative compositions of these alloys at different temperatures and ionized impurity concentrations. A field dependent mobility model is provided for both ternary compounds AlGaN and InGaN. The parameters for the low and high field mobility models for these ternary compounds are extracted and presented. The mobility models can be employed in simulations of devices that incorporate the ternary III-nitrides.
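The abstract does not state the functional form of the field-dependent mobility model, so the snippet below uses the Caughey-Thomas expression common in device simulation as an assumed stand-in; `mu0`, `vsat`, and `beta` are hypothetical fitting parameters, not values from the paper.

```python
def field_dependent_mobility(mu0, E, vsat, beta=2.0):
    """Caughey-Thomas-style field-dependent mobility (an assumed form,
    not necessarily the paper's parameterization):
        mu(E) = mu0 / (1 + (mu0 * E / vsat)**beta)**(1/beta)
    mu0:  low-field mobility [m^2/Vs], E: field [V/m],
    vsat: saturation velocity [m/s]."""
    return mu0 / (1.0 + (mu0 * E / vsat) ** beta) ** (1.0 / beta)
```

This form guarantees mu(0) = mu0 (the extracted low-field mobility) and a drift velocity mu(E)*E that saturates at vsat in the high-field limit, which is the qualitative behavior such models are fitted to reproduce.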

Journal ArticleDOI
TL;DR: In this article, the authors present a method for carrying out long time scale dynamics simulations within the harmonic transition state theory approximation, where saddle point searches are carried out using random initial directions.
Abstract: We present a method for carrying out long time scale dynamics simulations within the harmonic transition state theory approximation. For each state of the system, characterized by a local minimum on the potential energy surface, multiple searches for saddle points are carried out using random initial directions. The dimer method is used for the saddle point searches and the rate for each transition mechanism is estimated using harmonic transition state theory. Transitions are selected and the clock advanced according to the kinetic Monte Carlo algorithm. Unlike traditional applications of kinetic Monte Carlo, the atoms are not assumed to sit on lattice sites and a list of all possible transitions need not be specified beforehand. Rather, the relevant transitions are found on the fly during the simulation. A multiple time scale simulation of Al(100) crystal growth is presented where the deposition event, occurring on the time scale of picoseconds, is simulated by ordinary classical dynamics, but the time i...
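The kinetic Monte Carlo selection and clock-advance rule mentioned above can be sketched generically (a residence-time step over a table of rates; the rates here are hypothetical placeholders, whereas the paper computes them on the fly from harmonic transition state theory).

```python
import math
import random

def kmc_step(rates, t, rng):
    """One kinetic Monte Carlo step (sketch): choose a transition with
    probability proportional to its rate, and advance the clock by an
    exponentially distributed waiting time with mean 1/sum(rates)."""
    R = sum(rates)
    u = rng.random() * R
    acc = 0.0
    for i, r in enumerate(rates):
        acc += r
        if u < acc:
            chosen = i
            break
    else:                       # guard against floating-point round-off
        chosen = len(rates) - 1
    dt = -math.log(rng.random()) / R   # exponential waiting time
    return chosen, t + dt
```

In the adaptive scheme described in the abstract, the rate table is rebuilt for each new state from the saddle points found by the dimer searches, rather than read from a precomputed event list.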