
Showing papers on "Monte Carlo method published in 2004"


Journal ArticleDOI
TL;DR: In this article, the authors show that with simple extensions of the shower algorithms in Monte Carlo programs, one can implement NLO corrections to the hardest emission that overcome the problems of negative weighted events found in previous implementations.
Abstract: I show that with simple extensions of the shower algorithms in Monte Carlo programs, one can implement NLO corrections to the hardest emission that overcome the problems of negative weighted events found in previous implementations. Simple variants of the same method can be used for an improved treatment of matrix element corrections in Shower Monte Carlo programs.

1,766 citations




01 Jan 2004
TL;DR: In this paper, the authors note that the Monte Carlo method is not compelling for one-dimensional integration but becomes so in higher dimensions: for a d-dimensional integral evaluated with M points, the Monte Carlo error goes down as 1/√M (and is smaller if the variance σ²f of f is smaller), beating a uniform mesh for d > 4.
Abstract: so that the error in I goes down as 1/√M and is smaller if the variance σ²f of f is smaller. For a one-dimensional integration the Monte Carlo method is not compelling. However, consider a d-dimensional integral evaluated with M points. For a uniform mesh, each dimension of the integral gets M^(1/d) points, so that the separation is h = M^(−1/d). The error in the integration over one h-cube is of order h^(d+2), since we are approximating the surface by a linear interpolation (a plane) with an O(h²) error. The total error in the integral is M·h^(d+2) = M^(−2/d). The error in the Monte Carlo method remains M^(−1/2), so that this method wins for d > 4. We can reduce the error in I by reducing the effective σf. This is done by concentrating the sampling where f(x) is large, using a weight function w(x) (i.e. w(x) > 0, ∫₀¹ w(x) dx = 1): I = ∫₀¹ …

1,642 citations
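The error-scaling argument in the abstract above can be checked numerically. The following is a minimal sketch (my own illustration, not from the paper): it estimates ∫₀¹ 3x² dx = 1 by plain Monte Carlo and by importance sampling with the weight w(x) = 2x, which tracks the shape of the integrand and shrinks the variance.

```python
import math
import random

random.seed(0)

def mc_integrate(f, m, sampler=None, weight=None):
    """Monte Carlo estimate of the integral of f over [0, 1].

    With no sampler, points are uniform and the error scales as 1/sqrt(m).
    With a sampler drawing x ~ w and the matching weight function w, the
    estimator averages f(x)/w(x), which has lower variance when w tracks f.
    """
    total = 0.0
    for _ in range(m):
        if sampler is None:
            total += f(random.random())
        else:
            x = sampler()
            total += f(x) / weight(x)
    return total / m

f = lambda x: 3.0 * x * x                      # exact integral over [0, 1] is 1
plain = mc_integrate(f, 100_000)

# Importance sampling with w(x) = 2x; inverse-CDF sampling gives x = sqrt(u).
weighted = mc_integrate(f, 100_000,
                        sampler=lambda: math.sqrt(random.random()),
                        weight=lambda x: 2.0 * x)

print(plain, weighted)                         # both should be close to 1
```

Increasing m by a factor of 100 should shrink the plain-sampling error by about a factor of 10, the 1/√M scaling quoted in the abstract.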


Journal ArticleDOI
TL;DR: Simulated data sets were used to test the power and accuracy of Monte Carlo resampling methods in generating statistical thresholds for identifying F0 immigrants in populations with ongoing gene flow, and hence for providing direct, real‐time estimates of migration rates.
Abstract: Genetic assignment methods use genotype likelihoods to draw inference about where individuals were or were not born, potentially allowing direct, real-time estimates of dispersal. We used simulated data sets to test the power and accuracy of Monte Carlo resampling methods in generating statistical thresholds for identifying F0 immigrants in populations with ongoing gene flow, and hence for providing direct, real-time estimates of migration rates. The identification of accurate critical values required that resampling methods preserved the linkage disequilibrium deriving from recent generations of immigrants and reflected the sampling variance present in the data set being analysed. A novel Monte Carlo resampling method taking into account these aspects was proposed and its efficiency was evaluated. Power and error were relatively insensitive to the frequency assumed for missing alleles. Power to identify F0 immigrants was improved by using large sample size (up to about 50 individuals) and by sampling all populations from which migrants may have originated. A combination of plotting genotype likelihoods and calculating mean genotype likelihood ratios (DLR) appeared to be an effective way to predict whether F0 immigrants could be identified for a particular pair of populations using a given set of markers.

1,481 citations



Journal ArticleDOI
TL;DR: A priori error estimates for the computation of the expected value of the solution are given and a comparison of the computational work required by each numerical approximation is included to suggest intuitive conditions for an optimal selection of the numerical approximation.
Abstract: We describe and analyze two numerical methods for a linear elliptic problem with stochastic coefficients and homogeneous Dirichlet boundary conditions. Here the aim of the computations is to approximate statistical moments of the solution, and, in particular, we give a priori error estimates for the computation of the expected value of the solution. The first method generates independent identically distributed approximations of the solution by sampling the coefficients of the equation and using a standard Galerkin finite element variational formulation. The Monte Carlo method then uses these approximations to compute corresponding sample averages. The second method is based on a finite dimensional approximation of the stochastic coefficients, turning the original stochastic problem into a deterministic parametric elliptic problem. A Galerkin finite element method, of either the h- or p-version, then approximates the corresponding deterministic solution, yielding approximations of the desired statistics. We present a priori error estimates and include a comparison of the computational work required by each numerical approximation to achieve a given accuracy. This comparison suggests intuitive conditions for an optimal selection of the numerical approximation.

899 citations


Journal ArticleDOI
TL;DR: The results challenge the recently proposed notion that a set of six icosahedrally-arranged orientations is optimal for DT-MRI and show that at least 20 unique sampling orientations are necessary for a robust estimation of anisotropy, whereas at least 30 unique sampling orientations are required for a robust estimation of tensor-orientation and mean diffusivity.
Abstract: There are conflicting opinions in the literature as to whether it is more beneficial to use a large number of gradient sampling orientations in diffusion tensor MRI (DT-MRI) experiments than to use a smaller number of carefully chosen orientations. In this study, Monte Carlo simulations were used to study the effect of using different gradient sampling schemes on estimates of tensor-derived quantities assuming a b-value of 1000 s mm⁻². The study focused in particular on the effect that the number of unique gradient orientations has on uncertainty in estimates of tensor-orientation, and on estimates of the trace and anisotropy of the diffusion tensor. The results challenge the recently proposed notion that a set of six icosahedrally-arranged orientations is optimal for DT-MRI. It is shown that at least 20 unique sampling orientations are necessary for a robust estimation of anisotropy, whereas at least 30 unique sampling orientations are required for a robust estimation of tensor-orientation and mean diffusivity. Finally, the performance of sampling schemes that use low numbers of sampling orientations, but make efficient use of available gradient power, are compared to less efficient schemes with larger numbers of sampling orientations, and the relevant scenarios in which each type of scheme should be used are discussed. Magn Reson Med 51:807–815, 2004. Published 2004 Wiley-Liss, Inc.†

824 citations


Journal ArticleDOI
TL;DR: In this paper, a discrete-time approximation for decoupled forward-backward stochastic differential equations is proposed, and the L^p norm of the error is shown to be of the order of the time step.

615 citations


Journal ArticleDOI
TL;DR: In this article, a particle representation of the filtering distributions, and their evolution through time using sequential importance sampling and resampling ideas are developed for performing smoothing computations in general state-space models.
Abstract: We develop methods for performing smoothing computations in general state-space models. The methods rely on a particle representation of the filtering distributions, and their evolution through time using sequential importance sampling and resampling ideas. In particular, novel techniques are presented for generation of sample realizations of historical state sequences. This is carried out in a forward-filtering backward-smoothing procedure that can be viewed as the nonlinear, non-Gaussian counterpart of standard Kalman filter-based simulation smoothers in the linear Gaussian case. Convergence in the mean squared error sense of the smoothed trajectories is proved, showing the validity of our proposed method. The methods are tested in a substantial application for the processing of speech signals represented by a time-varying autoregression and parameterized in terms of time-varying partial correlation coefficients, comparing the results of our algorithm with those from a simple smoother based on the filter...

588 citations
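The forward-filtering backward-smoothing idea above can be illustrated on a toy linear-Gaussian state-space model (the model, parameters, and particle counts here are my own choices, not the authors'): a bootstrap particle filter runs forward with importance sampling and resampling, and one state trajectory is then drawn backwards from the stored particles.

```python
import math
import random

random.seed(1)

# Toy linear-Gaussian model (illustrative choice, not from the paper):
#   x_t = 0.9 * x_{t-1} + N(0, 1),   y_t = x_t + N(0, 0.25)
A, Q, R, T, N = 0.9, 1.0, 0.25, 30, 500

def log_norm(x, mean, var):
    return -0.5 * (math.log(2 * math.pi * var) + (x - mean) ** 2 / var)

# Simulate a hidden trajectory and its noisy observations.
truth, ys = [], []
x = random.gauss(0, 1)
for _ in range(T):
    x = A * x + random.gauss(0, math.sqrt(Q))
    truth.append(x)
    ys.append(x + random.gauss(0, math.sqrt(R)))

# Forward pass: sequential importance sampling with multinomial resampling.
particles, weights = [], []            # kept per step for the backward pass
p = [random.gauss(0, 1) for _ in range(N)]
for y in ys:
    p = [A * xi + random.gauss(0, math.sqrt(Q)) for xi in p]   # propagate
    logw = [log_norm(y, xi, R) for xi in p]                    # likelihood weights
    mx = max(logw)
    w = [math.exp(lw - mx) for lw in logw]
    s = sum(w)
    w = [wi / s for wi in w]
    particles.append(p)
    weights.append(w)
    p = random.choices(p, weights=w, k=N)                      # resample

# Backward pass: sample one trajectory from the joint smoothing distribution;
# particle i at time t is reweighted by w_t^i * p(x_{t+1} | x_t^i).
traj = random.choices(particles[-1], weights=weights[-1], k=1)
for t in range(T - 2, -1, -1):
    bw = [weights[t][i] * math.exp(log_norm(traj[0], A * particles[t][i], Q))
          for i in range(N)]
    traj.insert(0, random.choices(particles[t], weights=bw, k=1)[0])

rmse = math.sqrt(sum((a - b) ** 2 for a, b in zip(traj, truth)) / T)
print(len(traj), rmse)
```

The backward reweighting is what distinguishes this from simply tracing resampled ancestries, which degenerates for long series.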


Journal ArticleDOI
TL;DR: Using Monte Carlo simulations, it is shown that estimation algorithms can come close to attaining the limit given in the expression, and explicit quantitative results are provided to show how the limit of the localization accuracy is reduced by factors such as pixelation of the detector and noise sources in the detection system.

587 citations


Journal ArticleDOI
TL;DR: A central limit theorem for the Monte Carlo estimates produced by these computational methods is established in this paper, and applies in a general framework which encompasses most of the sequential Monte Carlo methods that have been considered in the literature, including the resample-move algorithm of Gilks and Berzuini [J. R. Stat. Soc. Ser. B Stat. Methodol. 63 (2001) 127–146] and the residual resampling scheme.
Abstract: The term “sequential Monte Carlo methods” or, equivalently, “particle filters,” refers to a general class of iterative algorithms that performs Monte Carlo approximations of a given sequence of distributions of interest (πt). We establish in this paper a central limit theorem for the Monte Carlo estimates produced by these computational methods. This result holds under minimal assumptions on the distributions πt, and applies in a general framework which encompasses most of the sequential Monte Carlo methods that have been considered in the literature, including the resample-move algorithm of Gilks and Berzuini [J. R. Stat. Soc. Ser. B Stat. Methodol. 63 (2001) 127–146] and the residual resampling scheme. The corresponding asymptotic variances provide a convenient measurement of the precision of a given particle filter. We study, in particular, in some typical examples of Bayesian applications, whether and at which rate these asymptotic variances diverge in time, in order to assess the long term reliability of the considered algorithm.

Journal ArticleDOI
TL;DR: In this article, the authors examined the universality of interstellar turbulence from observed structure functions of 27 giant molecular clouds and Monte Carlo modeling, and quantified the degree of turbulence universality by Monte Carlo simulations that reproduce the mean squared velocity residuals of the observed cloud-to-cloud relationship.
Abstract: The universality of interstellar turbulence is examined from observed structure functions of 27 giant molecular clouds and Monte Carlo modeling. We show that the structure functions, δv = v₀ℓ^γ, derived from wide-field imaging of 12CO J=1-0 emission from individual clouds are described by a narrow range in the scaling exponent, γ, and the scaling coefficient, v₀. The similarity of turbulent structure functions emphasizes the universality of turbulence in the molecular interstellar medium and accounts for the cloud-to-cloud size/line width relationship initially identified by Larson. The degree of turbulence universality is quantified by Monte Carlo simulations that reproduce the mean squared velocity residuals of the observed cloud-to-cloud relationship. Upper limits to the variation of the scaling amplitudes and exponents for molecular clouds are ~10%-20%. The measured invariance of turbulence for molecular clouds with vastly different sizes, environments, and star formation activity suggests a common formation mechanism such as converging turbulent flows within the diffuse interstellar medium and a limited contribution of energy from sources within the cloud with respect to large-scale driving mechanisms.

Journal Article
TL;DR: This work presents an algorithm that computes the exact posterior probability of a subnetwork, e.g., a directed edge, and shows that also in domains with a large number of variables, exact computation is feasible, given suitable a priori restrictions on the structures.
Abstract: Learning a Bayesian network structure from data is a well-motivated but computationally hard task. We present an algorithm that computes the exact posterior probability of a subnetwork, e.g., a directed edge; a modified version of the algorithm finds one of the most probable network structures. This algorithm runs in time O(n·2^n + n^(k+1)·C(m)), where n is the number of network variables, k is a constant maximum in-degree, and C(m) is the cost of computing a single local marginal conditional likelihood for m data instances. This is the first algorithm with less than super-exponential complexity with respect to n. Exact computation allows us to tackle complex cases where existing Monte Carlo methods and local search procedures potentially fail. We show that also in domains with a large number of variables, exact computation is feasible, given suitable a priori restrictions on the structures; combining exact and inexact methods is also possible. We demonstrate the applicability of the presented algorithm on four synthetic data sets with 17, 22, 37, and 100 variables.

Journal ArticleDOI
TL;DR: The proposed algorithm can handle virtually any type of process dynamics, factor structure, and payout specification, and gives valid confidence intervals for the true value of the Bermudan option price.
Abstract: This paper describes a practical algorithm based on Monte Carlo simulation for the pricing of multidimensional American (i.e., continuously exercisable) and Bermudan (i.e., discretely exercisable) ...
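The flavor of simulation-based Bermudan pricing can be sketched with a generic least-squares Monte Carlo (Longstaff-Schwartz-style) lower-bound estimator. This is not the paper's specific algorithm or its duality-based confidence intervals; the put parameters below are classic illustrative choices of my own.

```python
import math
import random

random.seed(4)

# Illustrative Bermudan put (classic test parameters; not the paper's examples):
S0, K, r, sigma, T, steps, paths = 36.0, 40.0, 0.06, 0.2, 1.0, 25, 10000
dt = T / steps
disc = math.exp(-r * dt)

# Simulate geometric Brownian motion paths under the risk-neutral measure.
S = []
for _ in range(paths):
    p = [S0]
    for _ in range(steps):
        z = random.gauss(0, 1)
        p.append(p[-1] * math.exp((r - 0.5 * sigma ** 2) * dt
                                  + sigma * math.sqrt(dt) * z))
    S.append(p)

def solve3(A, b):
    """Solve a 3x3 linear system by Gaussian elimination with pivoting."""
    A = [row[:] for row in A]
    b = b[:]
    for c in range(3):
        piv = max(range(c, 3), key=lambda rr: abs(A[rr][c]))
        A[c], A[piv] = A[piv], A[c]
        b[c], b[piv] = b[piv], b[c]
        for rr in range(c + 1, 3):
            f = A[rr][c] / A[c][c]
            for cc in range(c, 3):
                A[rr][cc] -= f * A[c][cc]
            b[rr] -= f * b[c]
    x = [0.0] * 3
    for rr in range(2, -1, -1):
        x[rr] = (b[rr] - sum(A[rr][cc] * x[cc] for cc in range(rr + 1, 3))) / A[rr][rr]
    return x

# Backward induction: regress continuation values on (1, S, S^2) over
# in-the-money paths, exercising where intrinsic value beats continuation.
cash = [max(K - p[steps], 0.0) for p in S]
for t in range(steps - 1, 0, -1):
    cash = [c * disc for c in cash]
    itm = [i for i in range(paths) if K > S[i][t]]
    if len(itm) < 3:
        continue
    X = [[1.0, S[i][t], S[i][t] ** 2] for i in itm]
    y = [cash[i] for i in itm]
    n = len(itm)
    A = [[sum(X[k][a] * X[k][b2] for k in range(n)) for b2 in range(3)] for a in range(3)]
    rhs = [sum(X[k][a] * y[k] for k in range(n)) for a in range(3)]
    beta = solve3(A, rhs)
    for idx, i in enumerate(itm):
        cont = beta[0] + beta[1] * S[i][t] + beta[2] * S[i][t] ** 2
        if K - S[i][t] > cont:
            cash[i] = K - S[i][t]

price = disc * sum(cash) / paths   # lower-bound estimate of the Bermudan value
print(price)
```

Because the exercise policy is only approximate, the estimate is biased low; the paper's contribution is precisely a complementary upper bound that brackets the true price.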

Journal ArticleDOI
TL;DR: A critical appraisal of reliability procedures for high dimensions is presented and it is observed that some types of Monte Carlo based simulation procedures in fact are capable of treating high dimensional problems.

Journal ArticleDOI
TL;DR: In this article, the authors consider forecasting using a combination, when no model coincides with a non-constant data generation process (DGP), and show that combining forecasts adds value, and can even dominate the best individual device.
Abstract: We consider forecasting using a combination, when no model coincides with a non-constant data generation process (DGP). Practical experience suggests that combining forecasts adds value, and can even dominate the best individual device. We show why this can occur when forecasting models are differentially mis-specified, and is likely to occur when the DGP is subject to location shifts. Moreover, averaging may then dominate over estimated weights in the combination. Finally, it cannot be proved that only non-encompassed devices should be retained in the combination. Empirical and Monte Carlo illustrations confirm the analysis.
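A stylized simulation (my own construction, not the authors' framework) shows the core phenomenon: when each forecasting model is differentially mis-specified, their simple average can beat both individual devices.

```python
import random
from statistics import fmean

random.seed(5)

# DGP: y = x1 + x2 + noise. Model A only observes x1, model B only x2,
# so each is mis-specified in a different direction.
n = 20000
x1 = [random.gauss(0, 1) for _ in range(n)]
x2 = [random.gauss(0, 1) for _ in range(n)]
y = [a + b + random.gauss(0, 0.5) for a, b in zip(x1, x2)]

# Each model forecasts with the component it can see (the population slope
# is 1, since the omitted regressor is independent); the combination is a
# simple unweighted average of the two forecasts.
fa, fb = x1, x2
comb = [(a + b) / 2 for a, b in zip(fa, fb)]

def mse(fc):
    return fmean((yi - fi) ** 2 for yi, fi in zip(y, fc))

mse_a, mse_b, mse_c = mse(fa), mse(fb), mse(comb)
print(mse_a, mse_b, mse_c)   # roughly 1.25, 1.25, 0.75
```

Each individual forecast carries the full variance of the regressor it omits, while averaging halves both omitted components, so the combination's MSE (0.5 + noise variance) is below either model's (1 + noise variance).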

Journal ArticleDOI
TL;DR: The equation of state of a two-component Fermi gas with attractive short-range interspecies interactions is calculated using the fixed-node diffusion Monte Carlo method, and the results show a molecular regime with repulsive interactions well described by the dimer-dimer scattering length.
Abstract: We calculate the equation of state of a two-component Fermi gas with attractive short-range interspecies interactions using the fixed-node diffusion Monte Carlo method. The interaction strength is varied over a wide range by tuning the value $a$ of the $s$-wave scattering length of the two-body potential. For $a>0$ and $a$ smaller than the inverse Fermi wave vector our results show a molecular regime with repulsive interactions well described by the dimer-dimer scattering length ${a}_{m}=0.6a$. The pair correlation functions of parallel and opposite spins are also discussed as a function of the interaction strength.

Journal ArticleDOI
TL;DR: In this article, the authors propose a method to estimate and sum the relevant autocorrelation functions, which is argued to produce more certain error estimates than binning techniques and hence to help toward a better exploitation of expensive simulations.


Journal ArticleDOI
TL;DR: In this article, a higher-order solution for the mean and variance of hydraulic head for saturated flow in randomly heterogeneous porous media was obtained by combining Karhunen-Loeve decomposition, polynomial expansion, and perturbation methods.

Journal ArticleDOI
TL;DR: In this article, a series of first principles molecular dynamics and Monte Carlo simulations were carried out for liquid water to investigate the reproducibility of different sampling approaches, including Car−Parrinello and Born−Oppenheimer simulations.
Abstract: A series of first principles molecular dynamics and Monte Carlo simulations were carried out for liquid water to investigate the reproducibility of different sampling approaches. These simulations include Car−Parrinello molecular dynamics simulations using the program cpmd with different values of the fictitious electron mass in the microcanonical and canonical ensembles, Born−Oppenheimer molecular dynamics using the programs cpmd and cp2k in the microcanonical ensemble, and Metropolis Monte Carlo using cp2k in the canonical ensemble. With the exception of one simulation for 128 water molecules, all other simulations were carried out for systems consisting of 64 molecules. Although the simulations yield somewhat fortuitous agreement in structural properties, analysis of other properties demonstrate that one should exercise caution when assuming the reproducibility of Car−Parrinello and Born−Oppenheimer molecular dynamics simulations for small system sizes in the microcanonical ensemble. In contrast, the m...

Journal ArticleDOI
TL;DR: In this article, the authors review generalized-ensemble algorithms for complex systems with many degrees of freedom, such as spin glass and biomolecular systems, and present five new generalized-ensemble algorithms which are extensions of the reviewed methods.
Abstract: In complex systems with many degrees of freedom such as spin glass and biomolecular systems, conventional simulations in canonical ensemble suffer from the quasi-ergodicity problem. A simulation in generalized ensemble performs a random walk in potential energy space and overcomes this difficulty. From only one simulation run, one can obtain canonical ensemble averages of physical quantities as functions of temperature by the single-histogram and/or multiple-histogram reweighting techniques. In this article we review the generalized ensemble algorithms. Three well-known methods, namely, multicanonical algorithm (MUCA), simulated tempering (ST), and replica-exchange method (REM), are described first. Both Monte Carlo (MC) and molecular dynamics (MD) versions of the algorithms are given. We then present five new generalized-ensemble algorithms which are extensions of the above methods.
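The replica-exchange idea described above can be sketched in a few lines. Below is a toy example (the potential, temperature ladder, and schedule are my own choices, not from the review): replicas of a particle in a double-well potential run Metropolis sampling at different temperatures and periodically attempt to swap configurations, so the coldest replica visits both wells while an isolated cold chain stays trapped, the quasi-ergodicity problem the review describes.

```python
import math
import random

random.seed(2)

U = lambda x: 8.0 * (x * x - 1.0) ** 2        # double well, barrier height 8 at x = 0

temps = [0.05, 0.2, 0.8, 3.2]                 # geometric temperature ladder
betas = [1.0 / t for t in temps]

def metropolis(x, beta, steps=20, width=0.3):
    """A few Metropolis updates at inverse temperature beta."""
    for _ in range(steps):
        xn = x + random.uniform(-width, width)
        d = -(U(xn) - U(x)) * beta
        if d >= 0 or random.random() < math.exp(d):
            x = xn
    return x

xs = [1.0] * len(temps)                       # all replicas start in the right well
sweeps, cold_left = 4000, 0
for _ in range(sweeps):
    xs = [metropolis(x, b) for x, b in zip(xs, betas)]
    i = random.randrange(len(temps) - 1)      # attempt one adjacent swap
    # standard replica-exchange acceptance: min(1, exp((b_i - b_j)(U_i - U_j)))
    d = (betas[i] - betas[i + 1]) * (U(xs[i]) - U(xs[i + 1]))
    if d >= 0 or random.random() < math.exp(d):
        xs[i], xs[i + 1] = xs[i + 1], xs[i]
    if xs[0] < 0:
        cold_left += 1

# For contrast: an isolated chain at the coldest temperature cannot cross.
x, single_left = 1.0, 0
for _ in range(sweeps):
    x = metropolis(x, betas[0])
    if x < 0:
        single_left += 1

print(cold_left / sweeps, single_left)        # roughly half vs. zero
```

The swap acceptance rule preserves detailed balance in the product ensemble, which is why canonical averages at each temperature remain exact.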


Proceedings ArticleDOI
01 Jan 2004
TL;DR: An efficient real-time algorithm is proposed that solves the data association problem and is capable of initiating and terminating a varying number of tracks; it shows remarkable performance compared to the greedy algorithm and the multiple hypothesis tracker under extreme conditions.
Abstract: In this paper, we consider the general multiple-target tracking problem in which an unknown number of targets appears and disappears at random times and the goal is to find the tracks of targets from noisy observations. We propose an efficient real-time algorithm that solves the data association problem and is capable of initiating and terminating a varying number of tracks. We take the data-oriented, combinatorial optimization approach to the data association problem but avoid the enumeration of tracks by applying a sampling method called Markov chain Monte Carlo (MCMC). The MCMC data association algorithm can be viewed as a "deferred logic" method since its decision about forming a track is based on both current and past observations. At the same time, it can be viewed as an approximation to the optimal Bayesian filter. The algorithm shows remarkable performance compared to the greedy algorithm and the multiple hypothesis tracker (MHT) under extreme conditions, such as a large number of targets in a dense environment, low detection probabilities, and high false alarm rates.

Journal ArticleDOI
TL;DR: The extended Bouc-Wen differential model is one of the most widely accepted phenomenological models of hysteresis in mechanics and it is routinely used in the characterization of nonlinear damping and in system identification.
Abstract: The extended Bouc-Wen differential model is one of the most widely accepted phenomenological models of hysteresis in mechanics. It is routinely used in the characterization of nonlinear damping and in system identification. In this paper, the differential model of hysteresis is carefully re-examined and two significant issues are uncovered. First, it is found that the unspecified parameters of the model are functionally redundant. One of the parameters can be eliminated through suitable transformations in the parameter space. Second, local and global sensitivity analyses are conducted to assess the relative sensitivity of each model parameter. Through extensive Monte Carlo simulations, it is found that some parameters of the hysteretic model are rather insensitive. If the values of these insensitive parameters are fixed, a greatly simplified model is obtained.

Journal ArticleDOI
TL;DR: In this article, the authors compare 3D and 1D calculations using the same physics input in order to evaluate the conditions under which the 3D calculation is required and when the considerably simpler 1D calculation was adequate.
Abstract: A Monte Carlo calculation of the atmospheric neutrino fluxes [Barr et al., Phys. Rev. D 39, 3532 (1989); Agrawal et al., ibid. 53, 1314 (1996)] has been extended to take account of the three-dimensional (3D) nature of the problem, including the bending of secondary particles in the geomagnetic field. Emphasis has been placed on minimizing the approximations when introducing the 3D considerations. In this paper, we describe the techniques used and quantify the effects of the small approximations which remain. We compare 3D and 1D calculations using the same physics input in order to evaluate the conditions under which the 3D calculation is required and when the considerably simpler 1D calculation is adequate. We find that the 1D and 3D results are essentially identical for $E_{\nu} > 5$ GeV except for small effects in the azimuthal distributions due to bending of the secondary muons by the geomagnetic field during their propagation in the atmosphere.

Journal ArticleDOI
TL;DR: In this article, the authors compared the performance of Response Surface (RS) and Artificial Neural Network (ANN) techniques with the First Order Reliability Method (FORM) and with Monte Carlo simulation using the Adaptive Importance Sampling technique, for both approximated and exact limit state functions.

Journal ArticleDOI
TL;DR: In this article, an efficient and exact method that enables global Bayesian analysis of cosmic microwave background (CMB) data is described. The method does not hinge on special assumptions about the survey geometry or noise properties; it is based on a Monte Carlo approach and hence parallelizes trivially.
Abstract: We describe an efficient and exact method that enables global Bayesian analysis of cosmic microwave background (CMB) data. The method reveals the joint posterior density (or likelihood for flat priors) of the power spectrum ${C}_{\ensuremath{\ell}}$ and the CMB signal. Foregrounds and instrumental parameters can be simultaneously inferred from the data. The method allows the specification of a wide range of foreground priors. We explicitly show how to propagate the non-Gaussian dependency structure of the ${C}_{\ensuremath{\ell}}$ posterior through to the posterior density of the parameters. If desired, the analysis can be coupled to theoretical (cosmological) priors and can yield the posterior density of cosmological parameter estimates directly from the time-ordered data. The method does not hinge on special assumptions about the survey geometry or noise properties, etc. It is based on a Monte Carlo approach and hence parallelizes trivially. No trace or determinant evaluations are necessary. The feasibility of this approach rests on the ability to solve the systems of linear equations which arise. These are of the same size and computational complexity as the map-making equations. We describe a preconditioned conjugate gradient technique that solves this problem and demonstrate in a numerical example that the computational time required for each Monte Carlo sample scales as ${n}_{p}^{3/2}$ with the number of pixels ${n}_{p}$. We use our method to analyze the data from the Differential Microwave Radiometer on the Cosmic Background Explorer and explore the non-Gaussian joint posterior density of the ${C}_{\ensuremath{\ell}}$ in several projections.

Journal ArticleDOI
TL;DR: In this paper, a Monte Carlo algorithm for doing simulations in classical statistical physics in a different way is described, where instead of sampling the probability distribution at a fixed temperature, a random walk is performed in energy space to extract an estimate for the density of states.
Abstract: We describe a Monte Carlo algorithm for doing simulations in classical statistical physics in a different way. Instead of sampling the probability distribution at a fixed temperature, a random walk is performed in energy space to extract an estimate for the density of states. The probability can be computed at any temperature by weighting the density of states by the appropriate Boltzmann factor. Thermodynamic properties can be determined from suitable derivatives of the partition function and, unlike “standard” methods, the free energy and entropy can also be computed directly. To demonstrate the simplicity and power of the algorithm, we apply it to models exhibiting first-order or second-order phase transitions.
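The flat-histogram random walk described above is easy to demonstrate on a toy model (my own example, not one from the paper): N non-interacting spins whose "energy" is the number of up spins, so the exact density of states is the binomial coefficient C(N, E) and the estimate can be checked directly.

```python
import math
import random

random.seed(3)

# Toy model: N independent spins, "energy" E = number of up spins, so the
# exact density of states is g(E) = C(N, E).
N = 10
spins = [0] * N
E = 0                          # current energy
lng = [0.0] * (N + 1)          # running estimate of ln g(E)
hist = [0] * (N + 1)           # visit histogram for the flatness check
lnf = 1.0                      # modification factor, halved when flat

while lnf > 1e-4:
    for _ in range(20000):
        i = random.randrange(N)
        En = E + (1 - 2 * spins[i])                  # energy after flipping spin i
        # accept with prob min(1, g(E)/g(En)): walks toward rarely visited energies
        if lng[E] - lng[En] >= 0 or random.random() < math.exp(lng[E] - lng[En]):
            spins[i] ^= 1
            E = En
        lng[E] += lnf
        hist[E] += 1
    # crude flatness criterion; on success reset the histogram and halve ln f
    if min(hist) > 0.8 * sum(hist) / len(hist):
        hist = [0] * (N + 1)
        lnf /= 2.0

est = [lg - lng[0] for lg in lng]                    # fix the ln g(0) = 0 gauge
exact = [math.log(math.comb(N, e)) for e in range(N + 1)]
print(max(abs(a - b) for a, b in zip(est, exact)))   # should be well below 1
```

Once ln g(E) is known, the partition function at any temperature follows from a single reweighting sum over E, which is the "compute any temperature from one run" property claimed in the abstract.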

Journal ArticleDOI
TL;DR: The authors present molecular dynamics simulations of water and biomolecules using a Monte Carlo constant-pressure algorithm.