
Showing papers on "Monte Carlo method published in 2010"


Journal ArticleDOI
TL;DR: The POWHEG BOX, a general computer code framework for implementing NLO calculations in shower Monte Carlo programs according to the POWHEG method, is presented, together with the theoretical ingredients it requires, an overview of how the code is organized and a description of what a user must provide in order to use it.
Abstract: In this work we illustrate the POWHEG BOX, a general computer code framework for implementing NLO calculations in shower Monte Carlo programs according to the POWHEG method. The aim of this work is to provide an illustration of the needed theoretical ingredients, a view of how the code is organized and a description of what a user should provide in order to use it.

2,560 citations


Journal ArticleDOI
TL;DR: It is shown how efficient high-dimensional proposal distributions can be built by using sequential Monte Carlo methods, which makes it possible not only to improve over standard Markov chain Monte Carlo schemes but also to make Bayesian inference feasible for a large class of statistical models where this was not previously so.
Abstract: Markov chain Monte Carlo and sequential Monte Carlo methods have emerged as the two main tools to sample from high dimensional probability distributions. Although asymptotic convergence of Markov chain Monte Carlo algorithms is ensured under weak assumptions, the performance of these algorithms is unreliable when the proposal distributions that are used to explore the space are poorly chosen and/or if highly correlated variables are updated independently. We show here how it is possible to build efficient high dimensional proposal distributions by using sequential Monte Carlo methods. This allows us not only to improve over standard Markov chain Monte Carlo schemes but also to make Bayesian inference feasible for a large class of statistical models where this was not previously so. We demonstrate these algorithms on a non-linear state space model and a Lévy-driven stochastic volatility model.
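A minimal sketch of the particle-MCMC idea behind this kind of work: a bootstrap particle filter supplies an unbiased likelihood estimate that is plugged into a Metropolis-Hastings update for a static parameter. The toy linear-Gaussian state-space model, the flat prior on phi and all tuning constants below are illustrative assumptions, not the examples from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear-Gaussian state-space model (an illustrative assumption)
T, phi_true, sx, sy = 100, 0.8, 1.0, 1.0
x = np.zeros(T)
for t in range(1, T):
    x[t] = phi_true * x[t - 1] + sx * rng.normal()
y = x + sy * rng.normal(size=T)

def pf_log_marginal(phi, n_particles=200):
    """Bootstrap particle filter estimate of log p(y_{1:T} | phi)."""
    particles = rng.normal(0.0, 1.0, n_particles)
    log_z = 0.0
    for t in range(T):
        particles = phi * particles + sx * rng.normal(size=n_particles)   # propagate
        logw = -0.5 * ((y[t] - particles) / sy) ** 2 - np.log(sy * np.sqrt(2 * np.pi))
        m = logw.max()
        w = np.exp(logw - m)
        log_z += m + np.log(w.mean())                                      # likelihood increment
        particles = particles[rng.choice(n_particles, n_particles, p=w / w.sum())]  # resample
    return log_z

# Particle marginal Metropolis-Hastings over phi, with a flat prior on (-1, 1)
phi, log_z = 0.0, pf_log_marginal(0.0)
chain = []
for _ in range(2000):
    prop = phi + 0.1 * rng.normal()
    if -1.0 < prop < 1.0:
        log_z_prop = pf_log_marginal(prop)
        if np.log(rng.uniform()) < log_z_prop - log_z:   # accept/reject with the noisy PF estimate
            phi, log_z = prop, log_z_prop
    chain.append(phi)

print("posterior mean of phi:", np.mean(chain[500:]))    # should land near phi_true = 0.8
```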

1,869 citations


Journal ArticleDOI
TL;DR: The R package FME, a modeling package for confronting a mathematical model with data that includes algorithms for sensitivity and Monte Carlo analysis, parameter identifiability and model fitting, as well as a Markov-chain based method to estimate parameter confidence intervals, is presented and applied to a model describing the dynamics of the HIV virus.
Abstract: Mathematical simulation models are commonly applied to analyze experimental or environmental data and eventually to acquire predictive capabilities. Typically these models depend on poorly defined, unmeasurable parameters that need to be given a value. Fitting a model to data, so-called inverse modelling, is often the sole way of finding reasonable values for these parameters. There are many challenges involved in inverse model applications, e.g., the existence of non-identifiable parameters, the estimation of parameter uncertainties and the quantification of the implications of these uncertainties on model predictions. The R package FME is a modeling package designed to confront a mathematical model with data. It includes algorithms for sensitivity and Monte Carlo analysis, parameter identifiability, model fitting and provides a Markov-chain based method to estimate parameter confidence intervals. Although its main focus is on mathematical systems that consist of differential equations, FME can deal with other types of models. In this paper, FME is applied to a model describing the dynamics of the HIV virus.

865 citations


Journal ArticleDOI
TL;DR: GENIE, a new neutrino event generator for the experimental neutrino physics community, is presented: a large-scale software system consisting of ∼120 000 lines of C++ code, featuring a modern object-oriented design and extensively validated physics content, which supports the full life-cycle of simulation and generator-related analysis tasks.
Abstract: GENIE [1] is a new neutrino event generator for the experimental neutrino physics community. The goal of the project is to develop a ‘canonical’ neutrino interaction physics Monte Carlo whose validity extends to all nuclear targets and neutrino flavors from MeV to PeV energy scales. Currently, emphasis is on the few-GeV energy range, the challenging boundary between the non-perturbative and perturbative regimes, which is relevant for the current and near future long-baseline precision neutrino experiments using accelerator-made beams. The design of the package addresses many challenges unique to neutrino simulations and supports the full life-cycle of simulation and generator-related analysis tasks. GENIE is a large-scale software system, consisting of ∼120 000 lines of C++ code, featuring a modern object-oriented design and extensively validated physics content. The first official physics release of GENIE was made available in August 2007, and at the time of the writing of this article, the latest available version was v2.4.4.

859 citations


MonographDOI
18 Oct 2010
TL;DR: This comprehensive treatment of contemporary quasi-Monte Carlo methods, digital nets and sequences, and discrepancy theory starts from scratch with detailed explanations of the basic concepts and then advances to current methods used in research.
Abstract: Indispensable for students, invaluable for researchers, this comprehensive treatment of contemporary quasi-Monte Carlo methods, digital nets and sequences, and discrepancy theory starts from scratch with detailed explanations of the basic concepts and then advances to current methods used in research. As deterministic versions of the Monte Carlo method, quasi-Monte Carlo rules have increased in popularity, with many fruitful applications in mathematical practice. These rules require nodes with good uniform distribution properties, and digital nets and sequences in the sense of Niederreiter are known to be excellent candidates. Besides the classical theory, the book contains chapters on reproducing kernel Hilbert spaces and weighted integration, duality theory for digital nets, polynomial lattice rules, the newest constructions by Niederreiter and Xing and many more. The authors present an accessible introduction to the subject based mainly on material taught in undergraduate courses with numerous examples, exercises and illustrations.
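To make the Monte Carlo versus quasi-Monte Carlo contrast concrete, here is a small sketch that integrates a smooth function over [0,1]^4 with pseudo-random points and with a Halton sequence. The Halton construction is a classical low-discrepancy sequence used only for illustration, not one of the digital nets and sequences treated in the book; the integrand and sample size are arbitrary choices.

```python
import numpy as np

def van_der_corput(n, base):
    """Radical-inverse (van der Corput) sequence in the given base."""
    seq = np.zeros(n)
    for i in range(n):
        f, x, k = 1.0, 0.0, i + 1
        while k > 0:
            f /= base
            x += f * (k % base)
            k //= base
        seq[i] = x
    return seq

def halton(n, dim):
    """First n points of the Halton sequence in [0,1)^dim (small dim only)."""
    primes = [2, 3, 5, 7, 11, 13][:dim]
    return np.column_stack([van_der_corput(n, p) for p in primes])

# Integrand on [0,1]^d with known integral 1: prod_j (pi/2) sin(pi x_j)
def f(u):
    return np.prod(0.5 * np.pi * np.sin(np.pi * u), axis=1)

d, n = 4, 4096
rng = np.random.default_rng(1)
mc_est  = f(rng.random((n, d))).mean()   # plain Monte Carlo
qmc_est = f(halton(n, d)).mean()         # quasi-Monte Carlo (Halton nodes)
print(f"MC error  : {abs(mc_est  - 1):.2e}")
print(f"QMC error : {abs(qmc_est - 1):.2e}")
```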

765 citations


Journal ArticleDOI
01 May 2010-Genetics
TL;DR: The approximation of marginal likelihood using thermodynamic integration in MIGRATE allows the evaluation of complex population genetic models, not only of whether sampling locations belong to a single panmictic population, but also of competing complex structured population models.
Abstract: For many biological investigations, groups of individuals are genetically sampled from several geographic locations. These sampling locations often do not reflect the genetic population structure. We describe a framework using marginal likelihoods to compare and order structured population models, such as testing whether the sampling locations belong to the same randomly mating population or comparing unidirectional and multidirectional gene flow models. In the context of inferences employing Markov chain Monte Carlo methods, the accuracy of the marginal likelihoods depends heavily on the approximation method used to calculate the marginal likelihood. Two methods, modified thermodynamic integration and a stabilized harmonic mean estimator, are compared. With finite Markov chain Monte Carlo run lengths, the harmonic mean estimator may not be consistent. Thermodynamic integration, in contrast, delivers considerably better estimates of the marginal likelihood. The choice of prior distributions does not influence the order and choice of the better models when the marginal likelihood is estimated using thermodynamic integration, whereas with the harmonic mean estimator the influence of the prior is pronounced and the order of the models changes. The approximation of marginal likelihood using thermodynamic integration in MIGRATE allows the evaluation of complex population genetic models, not only of whether sampling locations belong to a single panmictic population, but also of competing complex structured population models.
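The contrast between thermodynamic integration and the harmonic mean estimator can be illustrated on a toy conjugate-normal model whose marginal likelihood is known in closed form. This sketch samples the power posteriors directly (they are Gaussian here) instead of running MIGRATE or any MCMC, and every model choice below is an assumption made purely for the illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n, tau2 = 50, 9.0
y = rng.normal(1.0, 1.0, n)                  # data: N(theta_true = 1, sigma = 1)
S, SS = y.sum(), (y ** 2).sum()

def loglik(theta):
    theta = np.atleast_1d(theta)[:, None]
    return -0.5 * ((y - theta) ** 2).sum(axis=1) - 0.5 * n * np.log(2 * np.pi)

# Exact log marginal likelihood of the conjugate normal model, for reference
A = n + 1.0 / tau2
log_Z_exact = (-0.5 * n * np.log(2 * np.pi) - 0.5 * np.log(tau2 * A)
               - 0.5 * SS + S ** 2 / (2 * A))

# --- thermodynamic integration over power posteriors p_beta ~ L^beta * prior ---
betas = np.linspace(0.0, 1.0, 33)
means = []
for b in betas:
    Ab = b * n + 1.0 / tau2                   # each power posterior is Gaussian here,
    theta = rng.normal(b * S / Ab, 1.0 / np.sqrt(Ab), 5000)  # so sample it directly
    means.append(loglik(theta).mean())
means = np.array(means)
log_Z_ti = np.sum(np.diff(betas) * (means[1:] + means[:-1]) / 2.0)   # trapezoidal rule

# --- harmonic mean estimator from posterior samples (beta = 1) ---
theta_post = rng.normal(S / A, 1.0 / np.sqrt(A), 5000)
ll = loglik(theta_post)
log_Z_hm = ll.min() - np.log(np.mean(np.exp(-(ll - ll.min()))))

print(f"exact: {log_Z_exact:.3f}  TI: {log_Z_ti:.3f}  harmonic mean: {log_Z_hm:.3f}")
```

Running this typically shows the thermodynamic-integration estimate close to the exact value while the harmonic mean estimate drifts, mirroring the comparison discussed above.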

621 citations


Journal ArticleDOI
TL;DR: In this paper, a new augmented Dickey-Fuller-type test for unit roots which accounts for two structural breaks was proposed, where the breaks whose time of occurrence is assumed to be unknown are modeled as innovational outliers and thus take effect gradually.
Abstract: In this paper, we propose a new augmented Dickey–Fuller-type test for unit roots which accounts for two structural breaks We consider two different specifications: (a) two breaks in the level of a trending data series and (b) two breaks in the level and slope of a trending data series The breaks whose time of occurrence is assumed to be unknown are modeled as innovational outliers and thus take effect gradually Using Monte Carlo simulations, we show that our proposed test has correct size, stable power, and identifies the structural breaks accurately

571 citations


Journal ArticleDOI
TL;DR: In this article, the authors present a novel study on the problem of constructing mass models for the Milky Way, concentrating on features regarding the dark matter halo component; the main result of the analysis is a novel determination of the local dark matter halo density which, assuming spherical symmetry and either an Einasto or an NFW density profile, is found to be around 0.39 GeV cm−3 with a 1-σ uncertainty.
Abstract: We present a novel study on the problem of constructing mass models for the Milky Way, concentrating on features regarding the dark matter halo component. We have considered a variegated sample of dynamical observables for the Galaxy, including several results which have appeared recently, and studied a 7- or 8-dimensional parameter space - defining the Galaxy model - by implementing a Bayesian approach to the parameter estimation based on a Markov Chain Monte Carlo method. The main result of this analysis is a novel determination of the local dark matter halo density which, assuming spherical symmetry and either an Einasto or an NFW density profile, is found to be around 0.39 GeV cm−3 with a 1-σ uncertainty.

540 citations


Journal ArticleDOI
TL;DR: In this paper, an improved prescription for the merging of matrix elements with parton showers, extending the CKKW approach, is proposed; explicit colour information from matrix elements obtained through colour sampling is incorporated in the merging.
Abstract: We derive an improved prescription for the merging of matrix elements with parton showers, extending the CKKW approach. A flavour-dependent phase space separation criterion is proposed. We show that this new method preserves the logarithmic accuracy of the shower, and that the original proposal can be derived from it. One of the main requirements for the method is a truncated shower algorithm. We outline the corresponding Monte Carlo procedures and apply the new prescription to QCD jet production in e+e- collisions and Drell-Yan lepton pair production. Explicit colour information from matrix elements obtained through colour sampling is incorporated in the merging and the influence of different prescriptions to assign colours in the large N_C limit is studied. We assess the systematic uncertainties of the new method.

521 citations


Journal ArticleDOI
TL;DR: The results show that microdosimetric measurements in liquid water are necessary to assess quantitatively the validity of the software implementation for the liquid water phase, and represent a first step in the extension of the GEANT4 Monte Carlo toolkit to the simulation of biological effects of ionizing radiation.
Abstract: Purpose: The GEANT4 general-purpose Monte Carlo simulation toolkit is able to simulate physical interaction processes of electrons, hydrogen and helium atoms with charge states (H0, H+) and (He0, He+, He2+), respectively, in liquid water, the main component of biological systems, down to the electron volt regime and the submicrometer scale, providing GEANT4 users with the so-called “GEANT4-DNA” physics models suitable for microdosimetry simulation applications. The corresponding software has been recently re-engineered in order to provide GEANT4 users with a coherent and unique approach to the simulation of electromagnetic interactions within the GEANT4 toolkit framework (since GEANT4 version 9.3 beta). This work presents a quantitative comparison of these physics models with a collection of experimental data in water collected from the literature. Methods: An evaluation of the closeness between the total and differential cross section models available in the GEANT4 toolkit for microdosimetry and experimental reference data is performed using a dedicated statistical toolkit that includes the Kolmogorov–Smirnov statistical test. The authors used experimental data acquired in water vapor as direct measurements in the liquid phase are not yet available in the literature. Comparisons with several recommendations are also presented. Results: The authors have assessed the compatibility of experimental data with GEANT4 microdosimetry models by means of quantitative methods. The results show that microdosimetric measurements in liquid water are necessary to assess quantitatively the validity of the software implementation for the liquid water phase. Nevertheless, a comparison with existing experimental data in water vapor provides a qualitative appreciation of the plausibility of the simulation models. The existing reference data themselves should undergo a critical interpretation and selection, as some of the series exhibit significant deviations from each other. Conclusions: The GEANT4-DNA physics models available in the GEANT4 toolkit have been compared in this article to available experimental data in the water vapor phase as well as to several published recommendations on the mass stopping power. These models represent a first step in the extension of the GEANT4 Monte Carlo toolkit to the simulation of biological effects of ionizing radiation.
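The statistical comparison described above relies on goodness-of-fit tests such as Kolmogorov–Smirnov. The hedged sketch below only shows the two-sample KS machinery on made-up samples; it is not the GEANT4-DNA validation toolkit, and the distributions and sample sizes are arbitrary.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Hypothetical samples standing in for a simulated and a reference distribution;
# the paper compares model and experimental cross sections, here we only
# illustrate the two-sample Kolmogorov-Smirnov test itself.
simulated = rng.gamma(shape=2.0, scale=10.0, size=2000)   # made-up values
reference = rng.gamma(shape=2.1, scale=9.5,  size=500)    # made-up values

d_stat, p_value = stats.ks_2samp(simulated, reference)
print(f"KS distance D = {d_stat:.3f}, p-value = {p_value:.3f}")
# A small p-value would flag a statistically significant disagreement between
# the two distributions at the chosen confidence level.
```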

410 citations


Journal ArticleDOI
TL;DR: In this article, a general subtraction scheme, STRIPPER (SecToR Improved Phase sPacE for real Radiation), is derived for the evaluation of next-to-next-to-leading order (NNLO) QCD contributions from double-real radiation to processes with at least two particles in the final state at leading order.


Journal ArticleDOI
Qianqian Fang1
TL;DR: A fast mesh-based Monte Carlo (MC) photon migration algorithm for static and time-resolved imaging in 3D complex media is described, and an efficient ray-tracing technique using Plücker coordinates is implemented.
Abstract: We describe a fast mesh-based Monte Carlo (MC) photon migration algorithm for static and time-resolved imaging in 3D complex media. Compared with previous works using voxel-based media discretization, a mesh-based approach can be more accurate in modeling targets with curved boundaries or locally refined structures. We implement an efficient ray-tracing technique using Plücker coordinates. The barycentric coordinates computed from Plücker-formed ray-tracing enable us to use linear Lagrange basis functions to model both media properties and fluence distribution, leading to further improvement in accuracy. The Plücker-coordinate ray-polygon intersection test can be extended to hexahedral or high-order elements. Excellent agreement is found when comparing mesh-based MC with the analytical diffusion model and 3D voxel-based MC code in both homogeneous and heterogeneous cases. Realistic time-resolved imaging results are observed for a complex human brain anatomy using mesh-based MC. We also include multi-threading support in the software and will port it to a graphics processing unit platform in the near future.
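A small sketch of the Plücker-coordinate ray-triangle test that underlies this kind of mesh-based ray tracing: a ray crosses a triangle when the permuted inner products against the three directed edges share the same sign. The geometry below is a toy example and the code is not taken from the published software.

```python
import numpy as np

def plucker(p, q):
    """Plücker coordinates (direction, moment) of the directed line p -> q."""
    return q - p, np.cross(p, q)

def side(l1, l2):
    """Permuted inner product; its sign tells on which side line l2 passes l1."""
    d1, m1 = l1
    d2, m2 = l2
    return np.dot(d1, m2) + np.dot(d2, m1)

def ray_hits_triangle(orig, direc, a, b, c, eps=1e-12):
    """True if the ray (orig, direc) passes through triangle (a, b, c)."""
    ray = plucker(orig, orig + direc)
    s = [side(ray, plucker(a, b)),
         side(ray, plucker(b, c)),
         side(ray, plucker(c, a))]
    return all(v > eps for v in s) or all(v < -eps for v in s)

# Toy check: rays along +z against the unit triangle in the z = 1 plane
a, b, c = np.array([0., 0., 1.]), np.array([1., 0., 1.]), np.array([0., 1., 1.])
print(ray_hits_triangle(np.array([0.2, 0.2, 0.0]), np.array([0., 0., 1.]), a, b, c))  # True
print(ray_hits_triangle(np.array([2.0, 2.0, 0.0]), np.array([0., 0., 1.]), a, b, c))  # False
```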

Journal ArticleDOI
TL;DR: It is suggested that GPUs have the potential to facilitate the growth of statistical modeling into complex data-rich domains through the availability of cheap and accessible many-core computation.
Abstract: We present a case-study on the utility of graphics cards to perform massively parallel simulation of advanced Monte Carlo methods. Graphics cards, containing multiple Graphics Processing Units (GPUs), are self-contained parallel computational devices that can be housed in conventional desktop and laptop computers and can be thought of as prototypes of the next generation of many-core processors. For certain classes of population-based Monte Carlo algorithms they offer massively parallel simulation, with the added advantage over conventional distributed multi-core processors that they are cheap, easily accessible, easy to maintain, easy to code, dedicated local devices with low power consumption. On a canonical set of stochastic simulation examples including population-based Markov chain Monte Carlo methods and Sequential Monte Carlo methods, we find speedups from 35- to 500-fold over conventional single-threaded computer code. Our findings suggest that GPUs have the potential to facilitate the growth of statistical modelling into complex data-rich domains through the availability of cheap and accessible many-core computation. We believe the speedup we observe should motivate wider use of parallelizable simulation methods and greater methodological attention to their design.
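The parallelism exploited here is of the "many independent or weakly interacting simulations at once" kind. The sketch below mimics that pattern on a CPU by vectorising thousands of random-walk Metropolis chains with NumPy; on a GPU each lane would typically map to one thread. It is a conceptual illustration only, not the authors' CUDA implementation, and the target and tuning constants are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

def parallel_rw_metropolis(log_target, n_chains=10_000, n_steps=2_000, step=0.5):
    """Run many independent random-walk Metropolis chains as one vectorised update."""
    x = rng.normal(size=n_chains)                     # one state per chain
    lp = log_target(x)
    for _ in range(n_steps):
        prop = x + step * rng.normal(size=n_chains)   # propose in all chains at once
        lp_prop = log_target(prop)
        accept = np.log(rng.random(n_chains)) < lp_prop - lp
        x = np.where(accept, prop, x)
        lp = np.where(accept, lp_prop, lp)
    return x

# Toy target: a two-component Gaussian mixture (unnormalised)
log_target = lambda x: np.logaddexp(-0.5 * (x + 2) ** 2, -0.5 * (x - 2) ** 2)
samples = parallel_rw_metropolis(log_target)
print("mean, var of pooled final states:", samples.mean(), samples.var())
```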

Journal ArticleDOI
TL;DR: Sparse polynomial chaos (PC) expansions are introduced in order to compute sensitivity indices and a bootstrap technique is developed which eventually yields confidence intervals on the results.

Journal ArticleDOI
TL;DR: In this paper, the authors describe the methodology of continuum variational and diffusion quantum Monte Carlo calculations, which are based on many-body wavefunctions and are capable of achieving very high accuracy.
Abstract: This topical review describes the methodology of continuum variational and diffusion quantum Monte Carlo calculations. These stochastic methods are based on many-body wavefunctions and are capable of achieving very high accuracy. The algorithms are intrinsically parallel and well suited to implementation on petascale computers, and the computational cost scales as a polynomial in the number of particles. A guide to the systems and topics which have been investigated using these methods is given. The bulk of the article is devoted to an overview of the basic quantum Monte Carlo methods, the forms and optimization of wavefunctions, performing calculations under periodic boundary conditions, using pseudopotentials, excited-state calculations, sources of calculational inaccuracy, and calculating energy differences and forces.
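As a concrete, if drastically simplified, illustration of the variational Monte Carlo half of the methodology, the sketch below estimates the energy of a one-dimensional harmonic oscillator for a Gaussian trial wavefunction by Metropolis sampling of |psi|^2 and averaging the local energy. The trial form, units and run lengths are assumptions for the toy example, and the quoted error bar ignores autocorrelation.

```python
import numpy as np

rng = np.random.default_rng(0)

def vmc_energy(alpha, n_steps=200_000, step=1.0):
    """VMC energy of the 1D harmonic oscillator (hbar = m = omega = 1)
    for the trial wavefunction psi_alpha(x) = exp(-alpha * x**2)."""
    def log_psi2(x):                      # log |psi|^2, sampled with Metropolis
        return -2.0 * alpha * x * x

    def local_energy(x):                  # E_L = -(1/2) psi''/psi + x^2/2
        return alpha + (0.5 - 2.0 * alpha ** 2) * x * x

    x, energies, accepted = 0.0, [], 0
    for i in range(n_steps):
        prop = x + step * rng.normal()
        if np.log(rng.random()) < log_psi2(prop) - log_psi2(x):
            x, accepted = prop, accepted + 1
        if i > n_steps // 10:             # discard a short equilibration phase
            energies.append(local_energy(x))
    err = np.std(energies) / np.sqrt(len(energies))   # naive error, ignores autocorrelation
    return np.mean(energies), err, accepted / n_steps

for alpha in (0.3, 0.5, 0.7):             # the exact ground state has alpha = 0.5, E = 0.5
    e, err, acc = vmc_energy(alpha)
    print(f"alpha={alpha:.1f}  E={e:.4f} +/- {err:.4f}  acceptance={acc:.2f}")
```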

Journal ArticleDOI
TL;DR: This paper develops a set of methods enabling an information-theoretic distributed control architecture to facilitate search by a mobile sensor network that captures effects in more general scenarios that are not possible with linearized methods.
Abstract: This paper develops a set of methods enabling an information-theoretic distributed control architecture to facilitate search by a mobile sensor network. Given a particular configuration of sensors, this technique exploits the structure of the probability distributions of the target state and of the sensor measurements to control the mobile sensors such that future observations minimize the expected future uncertainty of the target state. The mutual information between the sensors and the target state is computed using a particle filter representation of the posterior probability distribution, making it possible to directly use nonlinear and non-Gaussian target state and sensor models. To make the approach scalable to increasing network sizes, single-node and pairwise-node approximations to the mutual information are derived for general probability density models, with analytically bounded error. The pairwise-node approximation is proven to be a more accurate objective function than the single-node approximation. The mobile sensors are cooperatively controlled using a distributed optimization, yielding coordinated motion of the network. These methods are explored for various sensing modalities, including bearings-only sensing, range-only sensing, and magnetic field sensing, all with potential for search and rescue applications. For each sensing modality, the behavior of this non-parametric method is compared and contrasted with the results of linearized methods, and simulations are performed of a target search using the dynamics of actual vehicles. Monte Carlo results demonstrate that as network size increases, the sensors more quickly localize the target, and the pairwise-node approximation provides superior performance to the single-node approximation. The proposed methods are shown to produce similar results to linearized methods in particular scenarios, yet they capture effects in more general scenarios that are not possible with linearized methods.

Journal ArticleDOI
TL;DR: In this article, the authors conduct a Monte Carlo simulation to evaluate the consequences of omitting or misspecifying the unobserved heterogeneity distribution in single-spell discrete-duration models.

Journal ArticleDOI
TL;DR: A new grid-based Boltzmann equation solver, Acuros, was developed specifically for performing accurate and rapid radiotherapy dose calculations and benchmarked its performance against Monte Carlo for 6 and 18 MV photon beams in heterogeneous media.
Abstract: A new grid-based Boltzmann equation solver, Acuros, was developed specifically for performing accurate and rapid radiotherapy dose calculations. In this study we benchmarked its performance against Monte Carlo for 6 and 18 MV photon beams in heterogeneous media. Acuros solves the coupled Boltzmann transport equations for neutral and charged particles on a locally adaptive Cartesian grid. The Acuros solver is an optimized rewrite of the general purpose Attila software, and for comparable accuracy levels, it is roughly an order of magnitude faster than Attila. Comparisons were made between Monte Carlo (EGSnrc) and Acuros for 6 and 18 MV photon beams impinging on a slab phantom comprising tissue, bone and lung materials. To provide an accurate reference solution, Monte Carlo simulations were run to a tight statistical uncertainty (sigma approximately 0.1%) and fine resolution (1-2 mm). Acuros results were output on a 2 mm cubic voxel grid encompassing the entire phantom. Comparisons were also made for a breast treatment plan on an anthropomorphic phantom. For the slab phantom in regions where the dose exceeded 10% of the maximum dose, agreement between Acuros and Monte Carlo was within 2% of the local dose or 1 mm distance to agreement. For the breast case, agreement was within 2% of local dose or 2 mm distance to agreement in 99.9% of voxels where the dose exceeded 10% of the prescription dose. Elsewhere, in low dose regions, agreement for all cases was within 1% of the maximum dose. Since all Acuros calculations required less than 5 min on a dual-core two-processor workstation, it is efficient enough for routine clinical use. Additionally, since Acuros calculation times are only weakly dependent on the number of beams, Acuros may ideally be suited to arc therapies, where current clinical algorithms may incur long calculation times.

Journal ArticleDOI
Sung Eun Cho1
TL;DR: In this paper, a probabilistic slope stability analysis is presented, where two-dimensional random fields are generated based on a Karhunen-Loève expansion in a fashion consistent with a specified marginal distribution function and an autocorrelation function, and a Monte Carlo simulation is used to determine the statistical response based on the generated random fields.
Abstract: In this paper, a numerical procedure for probabilistic slope stability analysis is presented. This procedure extends the traditional limit equilibrium method of slices to a probabilistic approach that accounts for the uncertainties and spatial variation of the soil strength parameters. In this study, two-dimensional random fields were generated based on a Karhunen-Loève expansion in a fashion consistent with a specified marginal distribution function and an autocorrelation function. A Monte Carlo simulation was then used to determine the statistical response based on the generated random fields. This approach makes no assumption about the critical failure surface. Rather, the critical failure surface corresponding to the input random fields of soil properties is searched during the process of analysis. A series of analyses was performed to verify the application potential of the proposed method and to study the effects of uncertainty due to the spatial heterogeneity on the stability of slope. The results show that the proposed method can efficiently consider the various failure mechanisms caused by the spatial variability of soil property in the probabilistic slope stability assessment.
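A hedged one-dimensional sketch of the ingredients described above: a correlated lognormal strength field generated from a truncated Karhunen-Loève (eigen-)expansion of an exponential autocorrelation matrix, propagated through Monte Carlo simulation of a toy limit state. The correlation length, strength statistics and the simple threshold failure criterion are illustrative assumptions and stand in for the paper's limit-equilibrium search for the critical surface.

```python
import numpy as np

rng = np.random.default_rng(0)

# Discretise a 1D slip line and build an exponential autocorrelation matrix
n_pts, length, theta = 100, 20.0, 4.0          # metres; theta = assumed correlation length
z = np.linspace(0.0, length, n_pts)
corr = np.exp(-np.abs(z[:, None] - z[None, :]) / theta)

# Truncated Karhunen-Loeve expansion = leading eigenpairs of the correlation matrix
eigval, eigvec = np.linalg.eigh(corr)
order = np.argsort(eigval)[::-1][:20]           # keep the 20 dominant modes
lam, modes = eigval[order], eigvec[:, order]

def sample_field(mean=40.0, cov=0.3):
    """One realisation of a lognormal shear-strength field (kPa) via KL expansion."""
    sigma_ln = np.sqrt(np.log(1.0 + cov ** 2))
    mu_ln = np.log(mean) - 0.5 * sigma_ln ** 2
    xi = rng.normal(size=lam.size)               # independent standard-normal KL coefficients
    gauss = modes @ (np.sqrt(lam) * xi)          # zero-mean, unit-variance correlated field
    return np.exp(mu_ln + sigma_ln * gauss)

# Toy limit state: "failure" when the average strength along the line drops below
# a hypothetical driving stress of 34 kPa.  Monte Carlo estimate of P_f:
n_sim = 20_000
failures = sum(sample_field().mean() < 34.0 for _ in range(n_sim))
print(f"estimated probability of failure: {failures / n_sim:.4f}")
```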

Journal ArticleDOI
TL;DR: In this paper, the asymptotic properties of the NLS estimators of such regression models were derived and compared with the traditional model that involves aggregating or equally weighting data to estimate a model at the same sampling frequency.

Journal ArticleDOI
TL;DR: A probabilistic method to compute distributions of PECs by means of a stochastic stationary substance/material flow modeling that is applicable to any substance with a distinct lack of data concerning environmental fate, exposure, emission and transmission characteristics.
Abstract: An elementary step towards a quantitative assessment of the risks of new compounds or pollutants (chemicals, materials) to the environment is to estimate their environmental concentrations. Thus, the calculation of predicted environmental concentrations (PECs) builds the basis of a first exposure assessment. This paper presents a probabilistic method to compute distributions of PECs by means of a stochastic stationary substance/material flow modeling. The evolved model is basically applicable to any substance with a distinct lack of data concerning environmental fate, exposure, emission and transmission characteristics. The model input parameters and variables consider production, application quantities and fate of the compounds in natural and technical environments. To cope with uncertainties concerning the estimation of the model parameters (e.g. transfer and partitioning coefficients, emission factors) as well as uncertainties about the exposure causal mechanisms (e.g. level of compound production and application) themselves, we utilized and combined sensitivity and uncertainty analysis, Monte Carlo simulation and Markov Chain Monte Carlo modeling. The combination of these methods is appropriate to calculate realistic PECs when facing a lack of data. The proposed model is programmed and carried out with the computational tool R and implemented and validated with data for an exemplary case study of flows of the engineered nanoparticle nano-TiO2 in Switzerland.
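A minimal sketch of the probabilistic PEC idea: uncertain production, release and fate parameters are drawn from assumed distributions and pushed through a stationary mass-flow calculation to give a distribution of concentrations. Every input distribution and number below is a made-up placeholder, not the data of the nano-TiO2 case study, and the bookkeeping is far simpler than the paper's full flow model.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Illustrative (assumed) input distributions
production   = rng.triangular(100, 250, 500, n)        # t/year produced and used
f_to_water   = rng.uniform(0.01, 0.10, n)              # fraction released to surface water
f_removed    = rng.beta(8, 2, n)                        # fraction removed in waste-water treatment
water_volume = rng.lognormal(np.log(5e10), 0.3, n)      # m^3/year of receiving water

# Stationary mass-flow bookkeeping: mass reaching water divided by diluting volume
mass_to_water_ug = production * 1e12 * f_to_water * (1 - f_removed)   # 1 t = 1e12 ug
pec_ug_per_l = mass_to_water_ug / (water_volume * 1e3)                # 1 m^3 = 1e3 L

print("PEC quantiles (ug/L):",
      np.round(np.percentile(pec_ug_per_l, [5, 50, 95]), 4))
```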

Journal ArticleDOI
TL;DR: In this article, the authors study the spin-1/2 Ising model and the Blume-Capel model at various values of the parameter $D$ on the simple cubic lattice, using a hybrid of the local Metropolis, the single cluster and the wall cluster algorithm.
Abstract: We study the spin-1/2 Ising model and the Blume-Capel model at various values of the parameter D on the simple cubic lattice. To this end we perform Monte Carlo simulations using a hybrid of the local Metropolis, the single cluster and the wall cluster algorithm. Using finite size scaling we determine the value D* = 0.656(20) of the parameter D, where leading corrections to scaling vanish. We find ω = 0.832(6) for the exponent of leading corrections to scaling. In order to compute accurate estimates of critical exponents, we construct improved observables that have a small amplitude of the leading correction for any model. Analyzing data obtained for D = 0.641 and 0.655 on lattices of a linear size up to L = 360 we obtain ν = 0.63002(10) and η = 0.03627(10). We compare our results with those obtained from previous Monte Carlo simulations and high-temperature series expansions of lattice models, by using field-theoretic methods and experiments.
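For readers unfamiliar with the observables involved, here is a much smaller sketch: a plain Metropolis simulation of the 2D Ising model that measures the Binder cumulant used in finite-size scaling analyses like the one above. The paper itself studies 3D lattices with a hybrid of Metropolis and cluster updates; the lattice sizes and sweep counts below are kept deliberately tiny.

```python
import numpy as np

rng = np.random.default_rng(0)

def ising_binder(L, T, n_sweeps=3000, n_term=1000):
    """Metropolis simulation of the 2D Ising model on an L x L periodic lattice;
    returns the Binder cumulant U = 1 - <m^4> / (3 <m^2>^2)."""
    spins = rng.choice([-1, 1], size=(L, L))
    m2 = m4 = 0.0
    n_meas = 0
    for sweep in range(n_sweeps):
        for _ in range(L * L):                       # one sweep = L*L single-spin attempts
            i, j = rng.integers(L), rng.integers(L)
            nb = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
                  + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
            dE = 2.0 * spins[i, j] * nb
            if dE <= 0 or rng.random() < np.exp(-dE / T):
                spins[i, j] *= -1
        if sweep >= n_term:
            m = spins.mean()
            m2, m4, n_meas = m2 + m * m, m4 + m ** 4, n_meas + 1
    m2, m4 = m2 / n_meas, m4 / n_meas
    return 1.0 - m4 / (3.0 * m2 * m2)

# Near criticality the cumulants of different L cross; for the 2D model Tc ~ 2.269
for L in (8, 16):
    print(L, [round(ising_binder(L, T), 3) for T in (2.1, 2.269, 2.5)])
```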

Journal ArticleDOI
TL;DR: An overview of importance sampling—a popular sampling tool used for Monte Carlo computing and its mathematical foundation and properties that determine its accuracy in Monte Carlo approximations are discussed.
Abstract: We provide a short overview of importance sampling—a popular sampling tool used for Monte Carlo computing. We discuss its mathematical foundation and properties that determine its accuracy in Monte Carlo approximations. We review the fundamental developments in designing efficient importance sampling (IS) for practical use. This includes parametric approximation with optimization-based adaptation, sequential sampling with dynamic adaptation through resampling and population-based approaches that make use of Markov chain sampling.
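A minimal importance sampling sketch in the spirit of the overview: a Gaussian tail probability is estimated with a proposal shifted into the region of interest and likelihood-ratio weights, and compared with plain Monte Carlo. The target, proposal and sample size are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
c = 4.0                                    # estimate P(X > 4) for X ~ N(0, 1)

# Plain Monte Carlo: almost no samples land in the tail
x = rng.normal(size=n)
plain = (x > c).mean()

# Importance sampling: draw from a proposal shifted into the tail, N(c, 1),
# and reweight by the likelihood ratio target / proposal
y = rng.normal(loc=c, size=n)
weights = np.exp(-0.5 * y ** 2 + 0.5 * (y - c) ** 2)   # phi(y) / phi(y - c)
is_est = ((y > c) * weights).mean()

exact = 3.167e-5                           # Phi(-4), for reference
print(f"plain MC: {plain:.2e}   importance sampling: {is_est:.2e}   exact: {exact:.2e}")
```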

Proceedings ArticleDOI
15 Nov 2010
TL;DR: In this paper, the authors introduce quantum spin systems and several computational methods for studying their ground-state and finite-temperature properties, including symmetry-breaking and critical phenomena, in the simpler setting of Monte Carlo studies of classical spin systems.
Abstract: These lecture notes introduce quantum spin systems and several computational methods for studying their ground‐state and finite‐temperature properties. Symmetry‐breaking and critical phenomena are first discussed in the simpler setting of Monte Carlo studies of classical spin systems, to illustrate finite‐size scaling at continuous and first‐order phase transitions. Exact diagonalization and quantum Monte Carlo (stochastic series expansion) algorithms and their computer implementations are then discussed in detail. Applications of the methods are illustrated by results for some of the most essential models in quantum magnetism, such as the S = 1/2 Heisenberg antiferromagnet in one and two dimensions, as well as extended models useful for studying quantum phase transitions between antiferromagnetic and magnetically disordered states.
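Of the methods mentioned, exact diagonalization is the easiest to sketch: the code below builds the S = 1/2 Heisenberg antiferromagnetic chain Hamiltonian in the full 2^N basis and diagonalises it (the stochastic series expansion QMC algorithm is well beyond a short example). System sizes are deliberately tiny and the chain, boundary conditions and normalisation are the standard textbook choices, not code from the lecture notes.

```python
import numpy as np

def heisenberg_ground_energy(n_sites):
    """Exact ground-state energy of the S=1/2 Heisenberg antiferromagnet
    H = sum_i S_i . S_{i+1} on a periodic chain, by full diagonalisation."""
    dim = 2 ** n_sites
    H = np.zeros((dim, dim))
    for state in range(dim):
        bits = [(state >> i) & 1 for i in range(n_sites)]     # 0 = down, 1 = up
        for i in range(n_sites):
            j = (i + 1) % n_sites                              # periodic boundary
            if bits[i] == bits[j]:
                H[state, state] += 0.25                        # S^z_i S^z_j, parallel spins
            else:
                H[state, state] -= 0.25                        # antiparallel spins
                flipped = state ^ ((1 << i) | (1 << j))        # spin-exchange (flip) term
                H[flipped, state] += 0.5
    return np.linalg.eigvalsh(H)[0]

for n in (4, 8, 10):
    print(f"N = {n:2d}   E0/N = {heisenberg_ground_energy(n) / n:.5f}")
# In the thermodynamic limit the Bethe-ansatz result is E0/N = 1/4 - ln 2 ~ -0.4431.
```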

Journal ArticleDOI
TL;DR: The geometrical model that is being introduced in this paper takes into account registered vessel traffic data and generalised vessel dynamics and uses advanced statistical and optimisation methods (Monte Carlo and genetic algorithms).

Journal ArticleDOI
TL;DR: In this paper, the authors present the achievements of the last years of the experimental and theoretical groups working on hadronic cross section measurements at the low energy e+e- colliders in Beijing, Frascati, Ithaca, Novosibirsk, Stanford and Tsukuba and sketch the prospects in these fields for the years to come.
Abstract: We present the achievements of the last years of the experimental and theoretical groups working on hadronic cross section measurements at the low energy e+e- colliders in Beijing, Frascati, Ithaca, Novosibirsk, Stanford and Tsukuba and on tau decays. We sketch the prospects in these fields for the years to come. We emphasise the status and the precision of the Monte Carlo generators used to analyse the hadronic cross section measurements obtained both with energy scans and with radiative return, to determine luminosities and tau decays. The radiative corrections fully or approximately implemented in the various codes and the contribution of the vacuum polarisation are discussed.

Posted Content
TL;DR: The authors provide an overview of the existing literature on panel data models with error cross-sectional dependence, distinguish between spatial dependence and factor structure dependence, and analyse the implications of weak and strong cross-sectional dependence on the properties of the estimators.
Abstract: This paper provides an overview of the existing literature on panel data models with error cross-sectional dependence. We distinguish between spatial dependence and factor structure dependence and we analyse the implications of weak and strong cross-sectional dependence on the properties of the estimators. We consider estimation under strong and weak exogeneity of the regressors for both T fixed and T large cases. Available tests for error cross-sectional dependence and methods for determining the number of factors are discussed in detail. The finite-sample properties of some estimators and statistics are investigated using Monte Carlo experiments.

Book
23 Aug 2010
TL;DR: This book presents Bayesian inference together with a broad range of Markov chain Monte Carlo methods, from the Gibbs sampler and the Metropolis-Hastings algorithm to auxiliary-variable, population-based, dynamic-weighting, stochastic-approximation and adaptive-proposal samplers.
Abstract: Contents (chapters and their main sections): Preface; Acknowledgments; Publisher's Acknowledgments.
1 Bayesian Inference and Markov Chain Monte Carlo: Bayes; Bayes Output; Monte Carlo Integration; Random Variable Generation; Markov Chain Monte Carlo.
2 The Gibbs Sampler: The Gibbs Sampler; Data Augmentation; Implementation Strategies and Acceleration Methods; Applications (the Student-t model, robit regression, linear regression with interval-censored responses).
3 The Metropolis-Hastings Algorithm: The Metropolis-Hastings Algorithm; Variants (hit-and-run, Langevin, multiple-try MH); Reversible Jump MCMC for Bayesian Model Selection Problems; Metropolis-Within-Gibbs Sampler for ChIP-chip Data Analysis.
4 Auxiliary Variable MCMC Methods: Simulated Annealing; Simulated Tempering; The Slice Sampler; The Swendsen-Wang Algorithm; The Wolff Algorithm; The Møller Algorithm; The Exchange Algorithm; The Double MH Sampler; Monte Carlo MH Sampler; Applications (autonormal models, social networks).
5 Population-Based MCMC Methods: Adaptive Direction Sampling; Conjugate Gradient Monte Carlo; Sample Metropolis-Hastings Algorithm; Parallel Tempering; Evolutionary Monte Carlo; Sequential Parallel Tempering for Simulation of High-Dimensional Systems; Equi-Energy Sampler; Applications (Bayesian curve fitting, protein folding with the 2D HP model, Bayesian neural networks for nonlinear time series forecasting).
6 Dynamic Weighting: Dynamic Weighting; Dynamically Weighted Importance Sampling; Monte Carlo Dynamically Weighted Importance Sampling; Sequentially Dynamically Weighted Importance Sampling.
7 Stochastic Approximation Monte Carlo: Multicanonical Monte Carlo; 1/k-Ensemble Sampling; The Wang-Landau Algorithm; Stochastic Approximation Monte Carlo (SAMC); Applications (p-value evaluation for resampling-based tests, Bayesian phylogeny inference, Bayesian network learning); Variants of SAMC; Theory of SAMC; Trajectory Averaging.
8 Markov Chain Monte Carlo with Adaptive Proposals: Stochastic Approximation-Based Adaptive Algorithms; Adaptive Independent Metropolis-Hastings Algorithms; Regeneration-Based Adaptive Algorithms; Population-Based Adaptive Algorithms.
Each chapter closes with exercises; the book ends with references and an index.
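As a concrete taste of the material in Chapter 2, here is a minimal Gibbs sampler for a zero-mean bivariate normal target, where both full conditionals are univariate normal; the correlation value and iteration count are arbitrary illustrative choices, not an example taken from the book.

```python
import numpy as np

rng = np.random.default_rng(0)

def gibbs_bivariate_normal(rho=0.9, n_iter=10_000):
    """Gibbs sampler for a zero-mean bivariate normal with correlation rho:
    each full conditional x | y and y | x is itself univariate normal."""
    x = y = 0.0
    out = np.empty((n_iter, 2))
    sd = np.sqrt(1.0 - rho ** 2)              # conditional standard deviation
    for t in range(n_iter):
        x = rng.normal(rho * y, sd)           # draw x | y
        y = rng.normal(rho * x, sd)           # draw y | x
        out[t] = x, y
    return out

draws = gibbs_bivariate_normal()
print("sample correlation:", np.corrcoef(draws[2000:, 0], draws[2000:, 1])[0, 1])
```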

Journal ArticleDOI
TL;DR: In this article, a Bayesian approach to the problem of inference for gravitational wave observations using a network (containing an arbitrary number) of instruments, for the computation of the Bayes factor between two hypotheses and the evaluation of the marginalized posterior density functions of the unknown model parameters is presented.
Abstract: The present operation of the ground-based network of gravitational-wave laser interferometers in enhanced configuration and the beginning of the construction of second-generation (or advanced) interferometers with planned observation runs beginning by 2015 bring the search for gravitational waves into a regime where detection is highly plausible. The development of techniques that allow us to discriminate a signal of astrophysical origin from instrumental artefacts in the interferometer data and to extract the full range of information are therefore some of the primary goals of the current work. Here we report the details of a Bayesian approach to the problem of inference for gravitational wave observations using a network (containing an arbitrary number) of instruments, for the computation of the Bayes factor between two hypotheses and the evaluation of the marginalized posterior density functions of the unknown model parameters. The numerical algorithm to tackle the notoriously difficult problem of the evaluation of large multidimensional integrals is based on a technique known as nested sampling, which provides an attractive (and possibly superior) alternative to more traditional Markov-chain Monte Carlo methods. We discuss the details of the implementation of this algorithm and its performance against a Gaussian model of the background noise, considering the specific case of the signal produced by the in-spiral of binary systems of black holes and/or neutron stars, although the method is completely general and can be applied to other classes of sources. We also demonstrate the utility of this approach by introducing a new coherence test to distinguish between the presence of a coherent signal of astrophysical origin in the data of multiple instruments and the presence of incoherent accidental artefacts, and the effects on the estimation of the source parameters as a function of the number of instruments in the network.
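The nested-sampling evidence computation at the heart of the method can be sketched on a toy two-parameter problem with a uniform prior and a Gaussian likelihood, for which the evidence is known analytically. The constrained-prior draws are done by naive rejection sampling here, which real implementations replace with something far more efficient, and nothing below reflects the gravitational-wave signal model or the coherent network analysis of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy problem: uniform prior on [-5, 5]^2 and a narrow Gaussian likelihood,
# so the evidence Z = integral(L * prior) is known analytically.
sigma = 0.5
def loglike(theta):
    return -0.5 * np.sum(theta ** 2, axis=-1) / sigma ** 2

def sample_prior(n=1):
    return rng.uniform(-5, 5, size=(n, 2))

n_live, n_iter = 200, 1200
live = sample_prior(n_live)
live_ll = loglike(live)
log_Z, log_X_prev = -np.inf, 0.0

for i in range(1, n_iter + 1):
    worst = np.argmin(live_ll)                           # lowest-likelihood live point
    log_X = -i / n_live                                  # expected log prior volume remaining
    log_w = np.log(np.exp(log_X_prev) - np.exp(log_X))   # shell width X_{i-1} - X_i
    log_Z = np.logaddexp(log_Z, live_ll[worst] + log_w)  # accumulate evidence
    log_X_prev = log_X
    while True:                                          # replace it by a prior draw with L > L_worst
        cand = sample_prior()[0]                         # naive rejection; real samplers do better
        if loglike(cand) > live_ll[worst]:
            live[worst], live_ll[worst] = cand, loglike(cand)
            break

# Add the contribution of the remaining live points
log_Z = np.logaddexp(log_Z, np.log(np.mean(np.exp(live_ll - live_ll.max())))
                     + live_ll.max() + log_X_prev)

exact = np.log(2 * np.pi * sigma ** 2 / 100.0)           # Gaussian mass / prior area
print(f"nested sampling log Z = {log_Z:.3f},  exact = {exact:.3f}")
```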