
Showing papers on "Monte Carlo method published in 2013"


Journal ArticleDOI
TL;DR: In this article, the authors used Monte Carlo simulations to determine the critical sample size beyond which the magnitude of a correlation can be expected to be stable; this size depends on the effect size, the width of the corridor of stability, and the requested confidence that the trajectory does not leave this corridor again.
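
To make the idea concrete, here is a minimal sketch (not the authors' code; the true correlation, corridor half-width and confidence level are illustrative) of the kind of simulation described: cumulative correlations are tracked as the sample grows, and the critical sample size is the point from which the requested fraction of trajectories stays inside the corridor rho ± w.

```python
import numpy as np

def critical_n(rho=0.4, w=0.1, n_min=20, n_max=1000, n_sim=1000,
               confidence=0.80, seed=0):
    """Estimate the sample size from which the cumulative correlation stays
    inside the corridor rho +/- w in a `confidence` fraction of trajectories."""
    rng = np.random.default_rng(seed)
    cov = [[1.0, rho], [rho, 1.0]]
    ns = np.arange(1, n_max + 1)
    pos = np.empty(n_sim, dtype=int)
    for s in range(n_sim):
        x, y = rng.multivariate_normal([0.0, 0.0], cov, size=n_max).T
        sx, sy = np.cumsum(x), np.cumsum(y)
        sxx, syy, sxy = np.cumsum(x * x), np.cumsum(y * y), np.cumsum(x * y)
        k = slice(n_min - 1, None)
        # cumulative Pearson correlation r_n for n = n_min .. n_max
        r = (ns[k] * sxy[k] - sx[k] * sy[k]) / (
            np.sqrt(ns[k] * sxx[k] - sx[k] ** 2) *
            np.sqrt(ns[k] * syy[k] - sy[k] ** 2))
        outside = np.nonzero(np.abs(r - rho) >= w)[0]
        # point of stability of this trajectory: first n after the last excursion
        pos[s] = n_min if outside.size == 0 else n_min + outside[-1] + 1
    return int(np.quantile(pos, confidence))

print("estimated critical sample size:", critical_n())
```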

1,302 citations


Journal ArticleDOI
TL;DR: The SRIM (formerly TRIM) Monte Carlo simulation code is widely used to compute a number of parameters relevant to ion beam implantation and ion beam processing of materials as discussed by the authors.
Abstract: The SRIM (formerly TRIM) Monte Carlo simulation code is widely used to compute a number of parameters relevant to ion beam implantation and ion beam processing of materials. It also has the capability to compute a common radiation damage exposure unit known as atomic displacements per atom (dpa). Since dpa is a standard measure of primary radiation damage production, most researchers who employ ion beams as a tool for inducing radiation damage in materials use SRIM to determine the dpa associated with their irradiations. The use of SRIM for this purpose has been evaluated and comparisons have been made with an internationally-recognized standard definition of dpa, as well as more detailed atomistic simulations of atomic displacement cascades. Differences between the standard and SRIM-based dpa are discussed and recommendations for future usage of SRIM in radiation damage studies are made. In particular, it is recommended that when direct comparisons between ion and neutron data are intended, the Kinchin–Pease option of SRIM should be selected.
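
The standard definition referred to above is commonly the NRT ("modified Kinchin–Pease") displacement model; as a point of reference, a minimal sketch of that formula follows. The threshold displacement energy of 40 eV is an illustrative choice, not a value taken from the paper.

```python
def nrt_displacements(damage_energy_eV, E_d_eV=40.0):
    """NRT / 'modified Kinchin-Pease' estimate of the number of stable atomic
    displacements produced by a cascade with the given damage energy.
    E_d = 40 eV is an illustrative threshold displacement energy."""
    if damage_energy_eV < E_d_eV:
        return 0.0
    if damage_energy_eV < 2.0 * E_d_eV / 0.8:
        return 1.0
    return 0.8 * damage_energy_eV / (2.0 * E_d_eV)

# A 5 keV damage-energy cascade with E_d = 40 eV yields 50 NRT displacements;
# dpa is obtained by summing such counts over all cascades and dividing by the
# number of atoms in the irradiated volume.
print(nrt_displacements(5000.0))
```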

1,097 citations


Journal ArticleDOI
01 Jul 2013
TL;DR: The basic principles and the most common Monte Carlo algorithms are reviewed, among them rejection sampling, importance sampling and Monte Carlo Markov chain (MCMC) methods.
Abstract: Bayesian inference often requires integrating some function with respect to a posterior distribution. Monte Carlo methods are sampling algorithms that allow one to compute these integrals numerically when they are not analytically tractable. We review here the basic principles and the most common Monte Carlo algorithms, among which rejection sampling, importance sampling and Monte Carlo Markov chain (MCMC) methods. We give intuition on the theoretical justification of the algorithms as well as practical advice, trying to relate both. We discuss the application of Monte Carlo in experimental physics, and point to landmarks in the literature for the curious reader.
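
As a concrete illustration of one of the reviewed techniques, here is a minimal self-normalized importance-sampling sketch (toy one-parameter model, not from the paper) that estimates a posterior expectation when the posterior is known only up to its normalizing constant.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy model: one observation x ~ N(theta, 1), prior theta ~ N(0, 5^2).
x_obs = 1.3
def log_unnorm_post(theta):
    return -0.5 * (x_obs - theta) ** 2 - 0.5 * (theta / 5.0) ** 2

# Self-normalized importance sampling with a deliberately broad Gaussian proposal.
theta = rng.normal(0.0, 10.0, size=100_000)              # proposal q = N(0, 10^2)
log_q = -0.5 * (theta / 10.0) ** 2 - np.log(10.0 * np.sqrt(2.0 * np.pi))
log_w = log_unnorm_post(theta) - log_q
w = np.exp(log_w - log_w.max())                          # stabilize, then normalize
w /= w.sum()
print("posterior mean ~", np.sum(w * theta))             # analytic: 1.3 * 25/26 = 1.25
```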

1,067 citations


Book
09 Jun 2013
TL;DR: This book covers basic probability theory, the application of the binomial distribution, network modelling and evaluation of simple and complex systems, probability distributions in reliability evaluation, system reliability evaluation using probability distributions, and Monte Carlo simulation.
Abstract: Introduction. Basic Probability Theory. Application of the Binomial Distribution. Network Modelling and Evaluation of Simple Systems. Network Modelling and Evaluation of Complex Systems. Probability Distributions in Reliability Evaluation. System Reliability Evaluation Using Probability Distributions. Monte Carlo Simulation. Epilogue.

1,062 citations


Journal ArticleDOI
TL;DR: In this paper, the authors extend the Common Correlated Effects (CCE) approach to heterogeneous panel data models with lagged dependent variables and/or weakly exogenous regressors.

974 citations


Journal ArticleDOI
TL;DR: In this article, an alternative summation of the MultiNest draws, called importance nested sampling (INS), is presented, which can calculate the Bayesian evidence at up to an order of magnitude higher accuracy than vanilla NS with no change in the way MultiNest explores the parameter space.
Abstract: Bayesian inference involves two main computational challenges. First, in estimating the parameters of some model for the data, the posterior distribution may well be highly multi-modal: a regime in which the convergence to stationarity of traditional Markov Chain Monte Carlo (MCMC) techniques becomes incredibly slow. Second, in selecting between a set of competing models the necessary estimation of the Bayesian evidence for each is, by definition, a (possibly high-dimensional) integration over the entire parameter space; again this can be a daunting computational task, although new Monte Carlo (MC) integration algorithms offer solutions of ever increasing efficiency. Nested sampling (NS) is one such contemporary MC strategy targeted at calculation of the Bayesian evidence, but which also enables posterior inference as a by-product, thereby allowing simultaneous parameter estimation and model selection. The widely-used MultiNest algorithm presents a particularly efficient implementation of the NS technique for multi-modal posteriors. In this paper we discuss importance nested sampling (INS), an alternative summation of the MultiNest draws, which can calculate the Bayesian evidence at up to an order of magnitude higher accuracy than `vanilla' NS with no change in the way MultiNest explores the parameter space. This is accomplished by treating as a (pseudo-)importance sample the totality of points collected by MultiNest, including those previously discarded under the constrained likelihood sampling of the NS algorithm. We apply this technique to several challenging test problems and compare the accuracy of Bayesian evidences obtained with INS against those from vanilla NS.
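
A toy illustration of plain ("vanilla") nested sampling may help fix ideas. The sketch below uses naive rejection from the prior to satisfy the hard likelihood constraint, which is exactly the step MultiNest replaces with its ellipsoidal sampling; the problem and settings are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy problem: uniform prior on [-5, 5]^2, standard Gaussian likelihood.
D, N_LIVE, N_ITER = 2, 100, 800

def loglike(theta):
    return -0.5 * np.sum(theta ** 2) - 0.5 * D * np.log(2 * np.pi)

def prior_sample():
    return rng.uniform(-5.0, 5.0, size=D)

live = np.array([prior_sample() for _ in range(N_LIVE)])
live_logL = np.array([loglike(t) for t in live])

logZ, log_X_prev = -np.inf, 0.0              # running evidence, log prior volume
for i in range(1, N_ITER + 1):
    worst = np.argmin(live_logL)
    logL_star = live_logL[worst]
    log_X = -i / N_LIVE                      # expected shrinkage of the prior volume
    logZ = np.logaddexp(logZ, np.log(np.exp(log_X_prev) - np.exp(log_X)) + logL_star)
    log_X_prev = log_X
    # Replace the dead point by a prior draw obeying L > L*.  Naive rejection is
    # fine for this toy but becomes very slow as the constraint tightens -- this
    # is the step MultiNest replaces with ellipsoidal sampling.
    while True:
        cand = prior_sample()
        if loglike(cand) > logL_star:
            live[worst], live_logL[worst] = cand, loglike(cand)
            break

# Contribution of the remaining live points
logZ = np.logaddexp(logZ, log_X_prev - np.log(N_LIVE) + np.logaddexp.reduce(live_logL))
print("log-evidence:", round(logZ, 3), " (analytic: about", round(-np.log(100.0), 3), ")")
```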

674 citations


01 Jan 2013
TL;DR: Markov chain Monte Carlo (MCMC) sampling methods are used to produce samples from the posterior distribution, which combines the likelihood of observing the data (in this case choices and RTs) given each parameter value with the prior probability of the parameters.
Abstract: By Bayes' rule, the posterior is P(θ|x) = P(x|θ)P(θ)/P(x), where P(x|θ) is the likelihood of observing the data (in this case choices and RTs) given each parameter value and P(θ) is the prior probability of the parameters. In most cases the computation of the denominator is quite complicated and requires computing an analytically intractable integral. Sampling methods like Markov chain Monte Carlo (MCMC) (Gamerman and Lopes, 2006) circumvent this problem by providing a way to produce samples from the posterior distribution. These methods have been used with great success in many different scenarios (Gelman et al., 2003) and will be discussed in more detail below.
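
A minimal random-walk Metropolis sketch (toy beta–binomial model, not the choice/RT model discussed here) shows why MCMC sidesteps the intractable denominator: only ratios of the unnormalized posterior are needed.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy posterior: success probability theta after 14 heads in 20 flips, flat prior.
heads, flips = 14, 20
def log_post(theta):
    if not 0.0 < theta < 1.0:
        return -np.inf
    return heads * np.log(theta) + (flips - heads) * np.log(1.0 - theta)

# Random-walk Metropolis: only ratios of the unnormalized posterior are needed,
# so the normalizing denominator P(x) never has to be evaluated.
theta, chain = 0.5, []
for _ in range(20_000):
    proposal = theta + 0.1 * rng.standard_normal()
    if np.log(rng.uniform()) < log_post(proposal) - log_post(theta):
        theta = proposal
    chain.append(theta)

samples = np.array(chain[5_000:])                 # discard burn-in
print("posterior mean ~", samples.mean())         # analytic (Beta(15, 7)): 15/22 ~ 0.682
```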

533 citations


Journal ArticleDOI
TL;DR: In this paper, a general method that allows one to decay narrow resonances in Les Houches Monte Carlo events in an efficient and accurate way is presented, which preserves both spin correlation and finite width effects to a very good accuracy.
Abstract: We present a general method that allows one to decay narrow resonances in Les Houches Monte Carlo events in an efficient and accurate way. The procedure preserves both spin correlation and finite width effects to a very good accuracy, and is therefore particularly suited for the decay of resonances in production events generated at next-to-leading-order accuracy. The method is implemented as a generic tool in the MadGraph5 framework, giving access to a very large set of possible applications. We illustrate the validity of the method and the code by applying it to the case of single top and top quark pair production, and show its capabilities on the case of top quark pair production in association with a Higgs boson.

489 citations


Journal ArticleDOI
TL;DR: An original and easily implementable method called AK-IS, for active learning and Kriging-based importance sampling, builds on the AK-MCS algorithm and enables the correction or validation of the FORM approximation with only very few mechanical model computations.

458 citations


Journal ArticleDOI
TL;DR: This work proposes a general mathematical framework and an algorithmic approach for optimal experimental design with nonlinear simulation-based models, and focuses on finding sets of experiments that provide the most information about targeted sets of parameters.

372 citations


Journal ArticleDOI
TL;DR: The Monte Carlo approach is used to simulate the trajectories of emitted photons propagating in water from the transmitter towards the receiver, and it is shown that, except for highly turbid waters, the channel time dispersion can be neglected when working over moderate distances.
Abstract: We consider channel characterization for underwater wireless optical communication (UWOC) systems. We focus on the channel impulse response and, in particular, quantify the channel time dispersion for different water types, link distances, and transmitter/receiver characteristics, taking into account realistic parameters. We use the Monte Carlo approach to simulate the trajectories of emitted photons propagating in water from the transmitter towards the receiver. During their propagation, photons are absorbed or scattered as a result of their interaction with different particles present in water. To model angle scattering, we use the two-term Henyey-Greenstein model in our channel simulator. We show that this model is more accurate than the commonly used Henyey-Greenstein model, especially in pure sea waters. Through the numerical results that we present, we show that, except for highly turbid waters, the channel time dispersion can be neglected when working over moderate distances. In other words, under such conditions, we do not suffer from any inter-symbol interference in the received signal. Lastly, we study the performance of a typical UWOC system in terms of bit-error-rate using the simple on-off-keying modulation. The presented results give insight into the design of UWOC systems.
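
For illustration, sampling a scattering angle from the two-term Henyey–Greenstein model reduces to choosing one of the two lobes and inverting the single-term HG cumulative distribution. The lobe weight and asymmetry parameters below are illustrative, not the fitted values used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_hg(g, u):
    """Inverse-CDF sampling of cos(theta) from a Henyey-Greenstein phase function."""
    if abs(g) < 1e-6:
        return 2.0 * u - 1.0
    s = (1.0 - g * g) / (1.0 - g + 2.0 * g * u)
    return (1.0 + g * g - s * s) / (2.0 * g)

def sample_tthg(alpha=0.98, g1=0.92, g2=-0.5):
    """Two-term HG: forward-peaked lobe with weight alpha plus a backscatter lobe
    (alpha, g1, g2 are illustrative placeholders)."""
    u = rng.uniform()
    if rng.uniform() < alpha:
        return sample_hg(g1, u)
    return sample_hg(g2, u)

cos_thetas = np.array([sample_tthg() for _ in range(100_000)])
# mean cosine should approach alpha*g1 + (1-alpha)*g2 = 0.98*0.92 - 0.02*0.5 ~ 0.892
print("mean cosine of scattering angle ~", round(cos_thetas.mean(), 3))
```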

Journal ArticleDOI
TL;DR: In this paper, the authors compare three spectral retrieval methods: optimal estimation, differential evolution Markov chain Monte Carlo, and bootstrap Monte Carlo on a synthetic water-dominated hot Jupiter, and find that the three approaches agree for high spectral resolution, high signal-to-noise data expected to come from potential future spaceborne missions, but disagree for low-resolution, low signal-to-noise spectra representative of current observations.
Abstract: Exoplanet atmosphere spectroscopy enables us to improve our understanding of exoplanets just as remote sensing in our own solar system has increased our understanding of the solar system bodies. The challenge is to quantitatively determine the range of temperatures and molecular abundances allowed by the data, which is often difficult given the low information content of most exoplanet spectra that commonly leads to degeneracies in the interpretation. A variety of spectral retrieval approaches have been applied to exoplanet spectra, but no previous investigations have sought to compare these approaches. We compare three different retrieval methods: optimal estimation, differential evolution Markov chain Monte Carlo, and bootstrap Monte Carlo on a synthetic water-dominated hot Jupiter. We discuss expectations of uncertainties in abundances and temperatures given current and potential future observations. In general, we find that the three approaches agree for high spectral resolution, high signal-to-noise data expected to come from potential future spaceborne missions, but disagree for low-resolution, low signal-to-noise spectra representative of current observations. We also compare the results from a parameterized temperature profile versus a full classical Level-by-Level approach and discriminate in which situations each of these approaches is applicable. Furthermore, we discuss the implications of our models for the inferred C-to-O ratios of exoplanetary atmospheres. Specifically, we show that in the observational limit of a few photometric points, the retrieved C/O is biased toward values near solar and near one simply due to the assumption of uninformative priors.

Journal ArticleDOI
TL;DR: State-of-the-art Monte Carlo techniques for computing fluid coexistence properties (Gibbs simulations) and adsorption simulations in nanoporous materials such as zeolites and metal–organic frameworks are reviewed.
Abstract: We review state-of-the-art Monte Carlo (MC) techniques for computing fluid coexistence properties (Gibbs simulations) and adsorption simulations in nanoporous materials such as zeolites and metal–organic frameworks…

Journal ArticleDOI
TL;DR: This approach provides the framework for the integration of DNA sequence and shape analyses in genome-wide studies and requires only nucleotide sequence as input and instantly predicts multiple structural features of DNA.
Abstract: We present a method and web server for predicting DNA structural features in a high-throughput (HT) manner for massive sequence data. This approach provides the framework for the integration of DNA sequence and shape analyses in genome-wide studies. The HT methodology uses a sliding-window approach to mine DNA structural information obtained from Monte Carlo simulations. It requires only nucleotide sequence as input and instantly predicts multiple structural features of DNA (minor groove width, roll, propeller twist and helix twist). The results of rigorous validations of the HT predictions based on DNA structures solved by X-ray crystallography and NMR spectroscopy, hydroxyl radical cleavage data, statistical analysis and cross-validation, and molecular dynamics simulations provide strong confidence in this approach. The DNAshape web server is freely available at http://rohslab.cmb.usc.edu/DNAshape/.

Journal ArticleDOI
TL;DR: Under the assumption of the existence of a uniform additive model error term, ABC algorithms give exact results when sufficient summaries are used, which allows the approximation made in many previous application papers to be understood, and should guide the choice of metric and tolerance in future work.
Abstract: Approximate Bayesian computation (ABC) or likelihood-free inference algorithms are used to find approximations to posterior distributions without making explicit use of the likelihood function, depending instead on simulation of sample data sets from the model. In this paper we show that under the assumption of the existence of a uniform additive model error term, ABC algorithms give exact results when sufficient summaries are used. This interpretation allows the approximation made in many previous application papers to be understood, and should guide the choice of metric and tolerance in future work. ABC algorithms can be generalized by replacing the 0–1 cut-off with an acceptance probability that varies with the distance of the simulated data from the observed data. The acceptance density gives the distribution of the error term, enabling the uniform error usually used to be replaced by a general distribution. This generalization can also be applied to approximate Markov chain Monte Carlo algorithms. In light of this work, ABC algorithms can be seen as calibration techniques for implicit stochastic models, inferring parameter values in light of the computer model, data, prior beliefs about the parameter values, and any measurement or model errors.
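
A minimal ABC rejection sketch (toy Gaussian-mean model, sample mean as summary statistic, arbitrary tolerance) illustrates the basic 0–1 accept/reject mechanism that the paper interprets and generalizes to a smooth acceptance probability.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Observed" data: 50 draws from a Gaussian with unknown mean; we pretend the
# likelihood is unavailable and only forward simulation is possible.
y_obs = rng.normal(2.0, 1.0, size=50)
s_obs = y_obs.mean()                        # summary statistic

def simulate_summary(mu):
    return rng.normal(mu, 1.0, size=50).mean()

# ABC rejection with a hard 0-1 cut-off: keep prior draws whose simulated
# summary lands within tolerance eps of the observed one.
eps, accepted = 0.1, []
while len(accepted) < 1_000:
    mu = rng.normal(0.0, 5.0)               # prior N(0, 5^2)
    if abs(simulate_summary(mu) - s_obs) <= eps:
        accepted.append(mu)

post = np.array(accepted)
print(f"ABC posterior: mean ~ {post.mean():.3f}, sd ~ {post.std():.3f}")
```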

Journal ArticleDOI
TL;DR: A general survey is provided on the capability of Monte Carlo (MC) modeling in tissue optics while paying special attention to the recent progress in the development of methods for speeding up MC simulations.
Abstract: A general survey is provided on the capability of Monte Carlo (MC) modeling in tissue optics while paying special attention to the recent progress in the development of methods for speeding up MC simulations. The principles of MC modeling for the simulation of light transport in tissues, which includes the general procedure of tracking an individual photon packet, common light-tissue interactions that can be simulated, frequently used tissue models, common contact/noncontact illumination and detection setups, and the treatment of time-resolved and frequency-domain optical measurements, are briefly described to help interested readers achieve a quick start. Following that, a variety of methods for speeding up MC simulations, which includes scaling methods, perturbation methods, hybrid methods, variance reduction techniques, parallel computation, and special methods for fluorescence simulations, as well as their respective advantages and disadvantages are discussed. Then the applications of MC methods in tissue optics, laser Doppler flowmetry, photodynamic therapy, optical coherence tomography, and diffuse optical tomography are briefly surveyed. Finally, the potential directions for the future development of the MC method in tissue optics are discussed. © The Authors. Published by SPIE under a Creative Commons Attribution 3.0 Unported License. Distribution or reproduction of this work in whole or in part requires full attribution of the original publication, including its DOI. (DOI: 10.1117/1.JBO.18.5.050902)
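
The core photon-packet bookkeeping described above (exponential step sampling, weight attenuation by the single-scattering albedo, Russian roulette) can be sketched in a few lines. The slab geometry, optical coefficients and isotropic phase function below are illustrative simplifications, not any of the accelerated codes surveyed.

```python
import numpy as np

rng = np.random.default_rng(0)

# Slab of tissue-like medium: absorption and scattering coefficients in 1/mm,
# thickness in mm; isotropic scattering and matched refractive indices.
mu_a, mu_s, thickness = 0.1, 10.0, 1.0
mu_t, albedo = mu_a + mu_s, mu_s / (mu_a + mu_s)

def run_packet():
    z, uz, w = 0.0, 1.0, 1.0                         # depth, z-direction cosine, weight
    while True:
        z += uz * (-np.log(rng.uniform()) / mu_t)    # exponentially distributed free path
        if z < 0.0:
            return "reflected", w
        if z > thickness:
            return "transmitted", w
        w *= albedo                                  # deposit the absorbed fraction
        uz = 2.0 * rng.uniform() - 1.0               # isotropic scattering (cosine uniform)
        if w < 1e-4:                                 # Russian roulette termination
            if rng.uniform() > 0.1:
                return "absorbed", 0.0
            w /= 0.1

n = 5_000
tallies = [run_packet() for _ in range(n)]
T = sum(w for tag, w in tallies if tag == "transmitted") / n
R = sum(w for tag, w in tallies if tag == "reflected") / n
print(f"diffuse transmittance ~ {T:.3f}, diffuse reflectance ~ {R:.3f}")
```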

Journal ArticleDOI
TL;DR: In this paper, the authors investigated sampling efficiency and convergence for Monte Carlo analysis and showed that noncollapsing space-filling sampling strategies, illustrated here with the maximin and uniform Latin hypercube designs, greatly enhance the sampling efficiency and render a desired level of accuracy of the outcomes attainable with far fewer runs.
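
A rough sketch of the sampling strategies named in the summary: a basic Latin hypercube generator plus a crude maximin selection over random candidate designs. The paper's designs are constructed more carefully; this is only illustrative.

```python
import numpy as np

def latin_hypercube(n, d, rng):
    """Basic LHS on [0, 1]^d: every dimension has exactly one point per stratum."""
    x = np.empty((n, d))
    for j in range(d):
        x[:, j] = (rng.permutation(n) + rng.uniform(size=n)) / n
    return x

def maximin_lhs(n, d, n_candidates=200, seed=0):
    """Crude maximin selection: generate random LHS designs and keep the one
    with the largest minimum pairwise distance."""
    rng = np.random.default_rng(seed)
    best, best_score = None, -np.inf
    iu = np.triu_indices(n, k=1)
    for _ in range(n_candidates):
        x = latin_hypercube(n, d, rng)
        score = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)[iu].min()
        if score > best_score:
            best, best_score = x, score
    return best, best_score

design, min_dist = maximin_lhs(n=50, d=3)
print("chosen design: 50 points in 3-D, minimum pairwise distance =", round(min_dist, 3))
```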

Book
07 Aug 2013
TL;DR: Two vertical weighting functions for photon transport in plane-parallel clouds are investigated: one based on the average number of scatterings encountered by reflected and transmitted photons in any given layer, and a simpler one based on the maximum penetration of reflected photons, which proves useful for solar reflectance measurements.
Abstract: Photon transport in plane-parallel, vertically inhomogeneous clouds is investigated and applied to cloud remote sensing techniques that use solar reflectance or transmittance measurements for retrieving droplet effective radius. Transport is couched in terms of weighting functions which approximate the relative contribution of individual layers to the overall retrieval. Two vertical weightings are investigated, including one based on the average number of scatterings encountered by reflected and transmitted photons in any given layer. A simpler vertical weighting, based on the maximum penetration of reflected photons, proves useful for solar reflectance measurements. These weighting functions are highly dependent on droplet absorption and solar/viewing geometry. A superposition technique, using adding/doubling radiative transfer procedures, is used to accurately determine both weightings, avoiding time-consuming Monte Carlo methods. Effective radius retrievals from modeled vertically structured liquid water clouds are then made using the standard near-infrared bands and compared with size estimates based on the proposed weighting functions. Agreement between the two methods is generally within several tenths of a micrometer, much better than expected retrieval accuracy. Though the emphasis is on photon transport in clouds, the derived weightings can be applied to any multiple-scattering plane-parallel radiative transfer problem, including arbitrary combinations of cloud, aerosol, and gas layers.

Journal ArticleDOI
TL;DR: In this article, a Monte Carlo model for the structure of low-mass (total mass < 25 M⊕) planetary systems that form by the in situ gravitational assembly of planetary embryos into final planets is presented.
Abstract: We present a Monte Carlo model for the structure of low-mass (total mass < 25 M⊕) planetary systems that form by the in situ gravitational assembly of planetary embryos into final planets. Our model includes distributions of mass, eccentricity, inclination, and period spacing that are based on the simulation of a disk of 20 M⊕, forming planets around a solar-mass star, and assuming a power-law surface density distribution that drops with distance a as a^-1.5. The output of the Monte Carlo model is then subjected to the selection effects that mimic the observations of a transiting planet search such as that performed by the Kepler satellite. The resulting comparison of the output to the properties of the observed sample yields an encouraging agreement in terms of the relative frequencies of multiple-planet systems and the distribution of the mutual inclinations when moderate tidal circularization is taken into account. The broad features of the period distribution and radius distribution can also be matched within this framework, although the model underpredicts the distribution of small period ratios. This likely indicates that some dissipation is still required in the formation process. The most striking deviation between the model and observations is in the ratio of single to multiple systems in that there are roughly 50% more single-planet candidates observed than are produced in any model population. This suggests that some systems must suffer additional attrition to reduce the number of planets or increase the range of inclinations.

Journal ArticleDOI
TL;DR: The SMC^2 algorithm proposed in this paper is a sequential Monte Carlo algorithm, defined in the theta-dimension, which propagates and resamples many particle filters in the x-dimension.
Abstract: We consider the generic problem of performing sequential Bayesian inference in a state-space model with observation process y, state process x and fixed parameter theta. An idealized approach would be to apply the iterated batch importance sampling (IBIS) algorithm of Chopin (2002). This is a sequential Monte Carlo algorithm in the theta-dimension, that samples values of theta, reweights iteratively these values using the likelihood increments p(y_t|y_1:t-1, theta), and rejuvenates the theta-particles through a resampling step and a MCMC update step. In state-space models these likelihood increments are intractable in most cases, but they may be unbiasedly estimated by a particle filter in the x-dimension, for any fixed theta. This motivates the SMC^2 algorithm proposed in this article: a sequential Monte Carlo algorithm, defined in the theta-dimension, which propagates and resamples many particle filters in the x-dimension. The filters in the x-dimension are an example of the random weight particle filter as in Fearnhead et al. (2010). On the other hand, the particle Markov chain Monte Carlo (PMCMC) framework developed in Andrieu et al. (2010) allows us to design appropriate MCMC rejuvenation steps. Thus, the theta-particles target the correct posterior distribution at each iteration t, despite the intractability of the likelihood increments. We explore the applicability of our algorithm in both sequential and non-sequential applications and consider various degrees of freedom, as for example increasing dynamically the number of x-particles. We contrast our approach to various competing methods, both conceptually and empirically through a detailed simulation study, included here and in a supplement, and based on particularly challenging examples.
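
The building block described above, an unbiased particle-filter estimate of the likelihood for a fixed theta, can be sketched as follows (toy linear-Gaussian state-space model, bootstrap proposal, multinomial resampling). SMC^2 itself additionally propagates and reweights the theta-particles, which is not shown.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy state-space model: x_t = phi * x_{t-1} + sigma * v_t,  y_t = x_t + e_t.
phi, sigma, obs_sd, T = 0.9, 0.5, 1.0, 200
x = np.empty(T)
x[0] = sigma / np.sqrt(1 - phi ** 2) * rng.standard_normal()
for t in range(1, T):
    x[t] = phi * x[t - 1] + sigma * rng.standard_normal()
y = x + obs_sd * rng.standard_normal(T)

def pf_loglik(phi_cand, n_particles=500):
    """Bootstrap particle filter in the x-dimension: returns an estimate of
    log p(y_1:T | phi_cand); the likelihood estimate itself (its exponential)
    is unbiased, which is what SMC^2 relies on for every theta-particle."""
    xp = sigma / np.sqrt(1 - phi_cand ** 2) * rng.standard_normal(n_particles)
    loglik = 0.0
    for t in range(T):
        xp = phi_cand * xp + sigma * rng.standard_normal(n_particles)   # propagate
        logw = -0.5 * ((y[t] - xp) / obs_sd) ** 2 - np.log(obs_sd * np.sqrt(2 * np.pi))
        m = logw.max()
        w = np.exp(logw - m)
        loglik += m + np.log(w.mean())        # log of p_hat(y_t | y_1:t-1, phi_cand)
        xp = rng.choice(xp, size=n_particles, p=w / w.sum())            # resample
    return loglik

for cand in (0.5, 0.8, 0.9, 0.95):
    print(f"phi = {cand:4.2f}   estimated log-likelihood = {pf_loglik(cand):8.1f}")
```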

Journal ArticleDOI
TL;DR: The history of the Monte Carlo for Complex Chemical Systems (MCCCS) Towhee open source Monte Carlo molecular simulation tool is discussed in this article, and a proof is given that the Widom insertion method for computing the chemical potential is correct.
Abstract: The history of the Monte Carlo for complex chemical systems Towhee open source Monte Carlo molecular simulation tool is discussed. A proof is given that the Widom insertion method for computing the...
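
For context (the abstract above is cut off), the Widom test-particle method estimates the excess chemical potential as mu_ex = -kT ln⟨exp(-ΔU/kT)⟩ over random trial insertions. The sketch below checks this on non-interacting particles in an external field, where the exact value is available; it is a toy check, not a Towhee calculation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Widom insertion, mu_ex = -kT * ln < exp(-dU / kT) >, checked on the simplest
# case with a known answer: non-interacting particles in an external potential
# phi(x) on a box of length L (so dU depends only on the insertion point).
kT, L = 1.0, 10.0
phi = lambda x: 2.0 * np.sin(2.0 * np.pi * x / L) ** 2     # arbitrary external field

x_test = rng.uniform(0.0, L, size=200_000)                 # random test insertions
mu_ex_widom = -kT * np.log(np.mean(np.exp(-phi(x_test) / kT)))

xs = np.linspace(0.0, L, 200_001)                          # dense grid as a quadrature check
mu_ex_exact = -kT * np.log(np.mean(np.exp(-phi(xs) / kT)))

print(f"Widom estimate: {mu_ex_widom:.4f}   quadrature: {mu_ex_exact:.4f}")
```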

Journal ArticleDOI
01 May 2013-Genetics
TL;DR: Modifications are introduced to the rjMCMC algorithms that remove the constraint on the new species divergence time when splitting and alter the gene trees to remove incompatibilities, and are found to improve mixing of the Markov chain for both simulated and empirical data sets.
Abstract: Several computational methods have recently been proposed for delimiting species using multilocus sequence data. Among them, the Bayesian method of Yang and Rannala uses the multispecies coalescent model in the likelihood framework to calculate the posterior probabilities for the different species-delimitation models. It has a sound statistical basis and is found to have nice statistical properties in simulation studies, such as low error rates of undersplitting and oversplitting. However, the method suffers from poor mixing of the reversible-jump Markov chain Monte Carlo (rjMCMC) algorithms. Here, we describe several modifications to the algorithms. We propose a flexible prior that allows the user to specify the probability that each node on the guide tree represents a true speciation event. We also introduce modifications to the rjMCMC algorithms that remove the constraint on the new species divergence time when splitting and alter the gene trees to remove incompatibilities. The new algorithms are found to improve mixing of the Markov chain for both simulated and empirical data sets.

Journal ArticleDOI
TL;DR: Analytical and numerical evidence for self-correcting behavior in the quantum spin lattice model known as the 3D cubic code is reported and it is proved that its memory time is at least L^(cβ), where L is the lattice size, β is the inverse temperature of the bath, and c>0 is a constant coefficient.
Abstract: A big open question in the quantum information theory concerns the feasibility of a self-correcting quantum memory. A quantum state recorded in such memory can be stored reliably for a macroscopic time without need for active error correction, if the memory is in contact with a cold enough thermal bath. Here we report analytic and numerical evidence for self-correcting behavior in the quantum spin lattice model known as the 3D cubic code. We prove that its memory time is at least L^(cβ), where L is the lattice size, β is the inverse temperature of the bath, and c>0 is a constant coefficient. However, this bound applies only if the lattice size L does not exceed a critical value which grows exponentially with β. In that sense, the model can be called a partially self-correcting memory. We also report a Monte Carlo simulation indicating that our analytic bounds on the memory time are tight up to constant coefficients. To model the readout step we introduce a new decoding algorithm, which can be implemented efficiently for any topological stabilizer code. A longer version of this work can be found in Bravyi and Haah, arXiv:1112.3252.

Proceedings Article
05 Dec 2013
TL;DR: A new method, Stochastic gradient Riemannian Langevin dynamics, which is simple to implement and can be applied to large scale data is proposed and achieves substantial performance improvements over the state of the art online variational Bayesian methods.
Abstract: In this paper we investigate the use of Langevin Monte Carlo methods on the probability simplex and propose a new method, Stochastic gradient Riemannian Langevin dynamics, which is simple to implement and can be applied to large scale data. We apply this method to latent Dirichlet allocation in an online mini-batch setting, and demonstrate that it achieves substantial performance improvements over the state of the art online variational Bayesian methods.
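
For orientation, the plain (non-Riemannian) stochastic gradient Langevin dynamics update on which the proposed method builds looks as follows. The toy Gaussian-mean model, step size and batch size are illustrative, and the simplex-specific Riemannian preconditioning of the paper is not shown.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy problem: posterior over the mean of a Gaussian given N observations.
N, obs_sd = 10_000, 2.0
data = rng.normal(1.5, obs_sd, size=N)

theta, eps, batch = 0.0, 1e-4, 100
samples = []
for _ in range(5_000):
    idx = rng.integers(0, N, size=batch)                       # mini-batch
    grad_log_prior = -theta / 10.0 ** 2                        # prior N(0, 10^2)
    grad_log_lik = (N / batch) * np.sum(data[idx] - theta) / obs_sd ** 2
    # SGLD: half a step of (stochastic) gradient ascent on the log-posterior
    # plus injected Gaussian noise of matching scale.
    theta += 0.5 * eps * (grad_log_prior + grad_log_lik) + np.sqrt(eps) * rng.standard_normal()
    samples.append(theta)

print("SGLD posterior mean ~", np.mean(samples[1_000:]))       # close to the sample mean
```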

Book
20 May 2013
TL;DR: In the last three decades, there has been a dramatic increase in the use of interacting particle methods as a powerful tool in real-world applications of Monte Carlo simulation in computational physics, population biology, computer sciences, and statistical machine learning.
Abstract: In the last three decades, there has been a dramatic increase in the use of interacting particle methods as a powerful tool in real-world applications of Monte Carlo simulation in computational physics, population biology, computer sciences, and statistical machine learning. Ideally suited to parallel and distributed computation, these advanced particle algorithms include nonlinear interacting jump diffusions; quantum, diffusion, and resampled Monte Carlo methods; Feynman-Kac particle models; genetic and evolutionary algorithms; sequential Monte Carlo methods; adaptive and interacting Markov chain Monte Carlo models; bootstrapping methods; ensemble Kalman filters; and interacting particle filters.


Journal ArticleDOI
TL;DR: In this article, an efficient approach for probabilistic transmission expansion planning (TEP) that considers load and wind power generation uncertainties is proposed, where an upper bound on total load shedding is introduced in order to obtain network solutions that have an acceptable probability of load curtailment.
Abstract: This paper proposes an efficient approach for probabilistic transmission expansion planning (TEP) that considers load and wind power generation uncertainties. The Benders decomposition algorithm in conjunction with Monte Carlo simulation is used to tackle the proposed probabilistic TEP. An upper bound on total load shedding is introduced in order to obtain network solutions that have an acceptable probability of load curtailment. The proposed approach is applied on Garver six-bus test system and on IEEE 24-bus reliability test system. The effect of contingency analysis, load and mainly wind production uncertainties on network expansion configurations and costs is investigated. It is shown that the method presented can be used effectively to study the effect of increasing wind power integration on TEP of systems with high wind generation uncertainties.

Journal ArticleDOI
TL;DR: In this article, the use of the Monte Carlo method within the Materials Studio application is surveyed, which integrates a large number of modules for molecular simulation. Several of these modules work by generating...
Abstract: We survey the use of the Monte Carlo method within the Materials Studio application, which integrates a large number of modules for molecular simulation. Several of these modules work by generating...

Journal ArticleDOI
TL;DR: A novel a posteriori method is presented to quantify the uncertainty of PIV data: the unknown actual error of the measured velocity field is estimated using the velocity field itself as input along with the original images.
Abstract: A novel method is presented to quantify the uncertainty of PIV data. The approach is a posteriori, i.e. the unknown actual error of the measured velocity field is estimated using the velocity field itself as input along with the original images. The principle of the method relies on the concept of super-resolution: the image pair is matched according to the cross-correlation analysis and the residual distance between matched particle image pairs (particle disparity vector) due to incomplete match between the two exposures is measured. The ensemble of disparity vectors within the interrogation window is analyzed statistically. The dispersion of the disparity vector returns the estimate of the random error, whereas the mean value of the disparity indicates the occurrence of a systematic error. The validity of the working principle is first demonstrated via Monte Carlo simulations. Two different interrogation algorithms are considered, namely the cross-correlation with discrete window offset and the multi-pass with window deformation. In the simulated recordings, the effects of particle image displacement, its gradient, out-of-plane motion, seeding density and particle image diameter are considered. In all cases good agreement is retrieved, indicating that the error estimator is able to follow the trend of the actual error with satisfactory precision. Experiments where time-resolved PIV data are available are used to prove the concept under realistic measurement conditions. In this case the ‘exact’ velocity field is unknown; however a high accuracy estimate is obtained with an advanced interrogation algorithm that exploits the redundant information of highly temporally oversampled data (pyramid correlation, Sciacchitano et al (2012 Exp. Fluids 53 1087–105)). The image-matching estimator returns the instantaneous distribution of the estimated velocity measurement error. The spatial distribution compares very well with that of the actual error with maxima in the highly sheared regions and in the 3D turbulent regions. The high level of correlation between the estimated error and the actual error indicates that this new approach can be utilized to directly infer the measurement uncertainty from PIV data. A procedure is shown where the results of the error estimation are employed to minimize the measurement uncertainty by selecting the optimal interrogation window size.
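
A toy sketch of the disparity-ensemble statistics described above: the mean disparity flags a systematic error, while its dispersion yields a random-error estimate. Dividing by sqrt(N) here assumes the window's velocity estimate averages over the matched pairs; combining the two statistics into the paper's final uncertainty bound is not reproduced.

```python
import numpy as np

def disparity_statistics(disparity):
    """Given the ensemble of particle-image disparity vectors (shape: n_pairs x 2)
    inside one interrogation window, return the mean disparity (indicator of a
    systematic error) and a dispersion-based random-error estimate."""
    mean_disp = disparity.mean(axis=0)                             # systematic component
    random_err = disparity.std(axis=0, ddof=1) / np.sqrt(len(disparity))
    return mean_disp, random_err

# Synthetic example: 40 matched particle pairs with a 0.05 px bias and 0.1 px scatter.
rng = np.random.default_rng(0)
d = np.array([0.05, 0.0]) + 0.1 * rng.standard_normal((40, 2))
bias, rand = disparity_statistics(d)
print("mean disparity (px):", bias.round(3), "  random-error estimate (px):", rand.round(3))
```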

Journal ArticleDOI
TL;DR: Convergence of the multilevel Monte Carlo algorithm is proved for estimating any bounded, linear functional and any continuously Fréchet differentiable non-linear functional of the solution.
Abstract: We consider the application of multilevel Monte Carlo methods to elliptic PDEs with random coefficients. We focus on models of the random coefficient that lack uniform ellipticity and boundedness with respect to the random parameter, and that only have limited spatial regularity. We extend the finite element error analysis for this type of equation, carried out in Charrier et al. (SIAM J Numer Anal, 2013), to more difficult problems, posed on non-smooth domains and with discontinuities in the coefficient. For this wider class of model problem, we prove convergence of the multilevel Monte Carlo algorithm for estimating any bounded, linear functional and any continuously Frechet differentiable non-linear functional of the solution. We further improve the performance of the multilevel estimator by introducing level dependent truncations of the Karhunen---Loeve expansion of the random coefficient. Numerical results complete the paper.
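
A compact illustration of the multilevel idea, using an SDE toy problem rather than the random-coefficient PDEs analysed in the paper: corrections between consecutive discretization levels are estimated with coupled samples and summed telescopically.

```python
import numpy as np

rng = np.random.default_rng(0)

# MLMC toy: estimate E[S_T] for geometric Brownian motion, Euler-Maruyama with
# step size T / 2^l at level l.  Exact answer: s0 * exp(mu * T).
s0, mu, sig, T = 1.0, 0.05, 0.2, 1.0

def level_estimator(l, n_samples):
    """Monte Carlo average of Y_l = P_l - P_{l-1}, with the SAME Brownian
    increments driving the fine and coarse paths (the coupling that keeps the
    correction variance small)."""
    nf = 2 ** l
    dt = T / nf
    dW = np.sqrt(dt) * rng.standard_normal((n_samples, nf))
    fine = np.full(n_samples, s0)
    for k in range(nf):
        fine += mu * fine * dt + sig * fine * dW[:, k]
    if l == 0:
        return fine.mean()
    coarse = np.full(n_samples, s0)
    dWc = dW[:, 0::2] + dW[:, 1::2]               # coarse increments from fine ones
    for k in range(nf // 2):
        coarse += mu * coarse * (2 * dt) + sig * coarse * dWc[:, k]
    return (fine - coarse).mean()

L = 6
n_per_level = [200_000 // 4 ** l + 100 for l in range(L + 1)]   # crude allocation
estimate = sum(level_estimator(l, n) for l, n in zip(range(L + 1), n_per_level))
print("MLMC estimate:", round(estimate, 4), "  exact:", round(s0 * np.exp(mu * T), 4))
```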