Topic
Markov chain Monte Carlo
About: Markov chain Monte Carlo is a research topic. Over its lifetime, 20,187 publications have been published within this topic, receiving 746,551 citations. The topic is also known as: MCMC and Markov chain Monte Carlo methods.
Papers published on a yearly basis
Papers
01 Jan 1994
TL;DR: Theoretical and experimental results demonstrate the superiority of the Monte Carlo estimation approach, and the suitability of mathematical morphology in studying important properties of a particular binary random image model, widely known as a Markov random field model.
Abstract: Morphological granulometries are frequently used as descriptors of granularity, or texture, within a binary image. In this paper, we study the problem of estimating the (discrete) size distribution and size density of a random binary image by means of empirical as well as Monte Carlo estimators. Theoretical and experimental results demonstrate the superiority of the Monte Carlo estimation approach, and the suitability of mathematical morphology in studying important properties of a particular binary random image model, widely known as a Markov random field model.
8 citations
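The Monte Carlo estimator described above can be sketched as follows: average the granulometric curve (area surviving morphological openings of increasing size) over independent realisations of a random binary image. This is only an illustrative sketch; the i.i.d.-pixel image below is a stand-in assumption for the Markov random field model studied in the paper, and the square structuring elements and image size are likewise choices made here, not taken from the paper.

```python
import numpy as np
from scipy.ndimage import binary_opening

rng = np.random.default_rng(0)

def granulometry(img, max_size):
    # area (pixel count) surviving an opening by a k-by-k square
    # structuring element, for k = 1 .. max_size
    return np.array([binary_opening(img, structure=np.ones((k, k))).sum()
                     for k in range(1, max_size + 1)])

# Monte Carlo estimate of the mean size distribution: average the
# granulometric curve over independent realisations of the random image
n_draws, shape = 50, (64, 64)
curves = [granulometry(rng.random(shape) < 0.6, 5) for _ in range(n_draws)]
size_dist = np.mean(curves, axis=0)   # mean surviving area at each scale
```

Because opening by a larger square removes at least as much as opening by a smaller one, the estimated curve is non-increasing in the structuring-element size, as a size distribution should be.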
TL;DR: In this paper, the authors analyzed the computational efficiency of approximate Bayesian computation (ABC), which approximates a likelihood function by drawing pseudo-samples from the associated model, and showed that it is unnecessary to tune the number of pseudo-samples used in ABC-MCMC.
Abstract: We analyze the computational efficiency of approximate Bayesian computation (ABC), which approximates a likelihood function by drawing pseudo-samples from the associated model. For the rejection sampling version of ABC, it is known that multiple pseudo-samples cannot substantially increase (and can substantially decrease) the efficiency of the algorithm as compared to employing a high-variance estimate based on a single pseudo-sample. We show that this conclusion also holds for a Markov chain Monte Carlo version of ABC, implying that it is unnecessary to tune the number of pseudo-samples used in ABC-MCMC. This conclusion is in contrast to particle MCMC methods, for which increasing the number of particles can provide large gains in computational efficiency.
8 citations
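The ABC-MCMC scheme the paper analyzes can be sketched with a single pseudo-sample per proposal, consistent with its conclusion that tuning the pseudo-sample count is unnecessary. Everything concrete below (the N(theta, 1) toy model, the standard normal prior, the random-walk step size, and the tolerance) is an assumption of this sketch, not taken from the paper.

```python
import math
import random

def abc_mcmc(y_obs, eps, n_iter, theta0=0.0, seed=1):
    """ABC-MCMC for theta in a toy N(theta, 1) model, observed summary y_obs.

    Uses a single pseudo-sample per iteration, per the result that extra
    pseudo-samples cannot substantially improve efficiency.
    """
    rng = random.Random(seed)
    log_prior = lambda t: -0.5 * t * t        # standard normal prior, up to a constant
    theta = theta0
    chain = []
    for _ in range(n_iter):
        prop = theta + rng.gauss(0.0, 0.5)    # symmetric random-walk proposal
        pseudo = rng.gauss(prop, 1.0)         # one pseudo-sample from the model
        # accept only when the pseudo-data lands within eps of the observation,
        # with the usual Metropolis correction on the prior ratio
        accept = math.exp(min(0.0, log_prior(prop) - log_prior(theta)))
        if abs(pseudo - y_obs) <= eps and rng.random() < accept:
            theta = prop
        chain.append(theta)
    return chain

chain = abc_mcmc(y_obs=1.5, eps=0.5, n_iter=20000)
```

As eps shrinks, the chain's stationary distribution approaches the exact posterior; here, with a conjugate normal prior and unit-variance model, the posterior mean lies between the prior mean 0 and the observation 1.5.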
01 Jan 2001
TL;DR: A Metropolis-Hastings-style algorithm for simulating the Candy model and a parameter-estimation method based on Monte Carlo approximation of the likelihood function are presented.
Abstract: This paper presents a parameter estimation method for the Candy model based on Monte Carlo approximation of the likelihood function. In order to produce such an approximation a Metropolis-Hastings style algorithm for simulating the Candy model is introduced.
8 citations
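The Candy model itself is a marked point process, so the paper's sampler has model-specific birth/death moves. What makes a Metropolis-Hastings-style scheme usable there is generic, though: the target density only needs to be known up to a normalising constant. A minimal sketch of that generic scheme, with a simple Gaussian target chosen here purely for illustration:

```python
import math
import random

def metropolis_hastings(log_density, theta0, n_iter, step=1.0, seed=2):
    """Random-walk Metropolis-Hastings sampler.

    log_density need only be known up to an additive constant, which is
    what makes the scheme usable for models with intractable normalisers.
    """
    rng = random.Random(seed)
    theta, logp = theta0, log_density(theta0)
    chain = []
    for _ in range(n_iter):
        prop = theta + rng.gauss(0.0, step)           # symmetric proposal
        logp_prop = log_density(prop)
        # accept with probability min(1, pi(prop) / pi(theta))
        if rng.random() < math.exp(min(0.0, logp_prop - logp)):
            theta, logp = prop, logp_prop
        chain.append(theta)
    return chain

# usage: sample from an unnormalised N(2, 1) target
samples = metropolis_hastings(lambda t: -0.5 * (t - 2.0) ** 2, 0.0, 20000)
```

Runs of such a sampler at different parameter values are exactly what the paper's Monte Carlo approximation of the likelihood function is built from.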
10 Jul 1999
TL;DR: This paper shows how a Bayesian treatment using the Markov chain Monte Carlo method can allow for a full covariance matrix with a multilayer perceptron neural network.
Abstract: In a multivariate regression problem it is often assumed that the residuals of the outputs are independent of each other. In many applications a more realistic model would allow dependencies between the outputs. In this paper we show how a Bayesian treatment using the Markov chain Monte Carlo method can allow for a full covariance matrix with a multilayer perceptron neural network.
8 citations
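The paper's MCMC runs over the MLP weights and the covariance jointly. One ingredient of such a scheme can be sketched in isolation: sampling a full residual covariance matrix. The inverse-Wishart prior below is a standard conjugate assumption of this sketch (the abstract does not state which prior the paper uses), and the synthetic correlated residuals stand in for residuals an MLP would produce.

```python
import numpy as np
from scipy.stats import invwishart

rng = np.random.default_rng(0)

# toy multivariate-regression residuals with correlated outputs
true_cov = np.array([[1.0, 0.6], [0.6, 2.0]])
resid = rng.multivariate_normal(np.zeros(2), true_cov, size=500)

# conjugate Gibbs step for the residual covariance: with an
# inverse-Wishart IW(nu0, Psi0) prior, the full conditional given the
# residual matrix R is IW(nu0 + n, Psi0 + R'R)
nu0, psi0 = 4, np.eye(2)
n = resid.shape[0]
draws = invwishart.rvs(df=nu0 + n, scale=psi0 + resid.T @ resid,
                       size=1000, random_state=0)
post_mean = draws.mean(axis=0)   # posterior-mean estimate of the covariance
```

The posterior mean recovers the off-diagonal dependence between the outputs, which is exactly what the independent-residuals assumption would miss.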
TL;DR: This work discusses an automatic proposal distribution useful for ABC-MCMC algorithms. The proposal is inspired by the theory of quasi-likelihood (QL) functions and is obtained by modelling the distribution of the summary statistics as a function of the parameters.
Abstract: Approximate Bayesian Computation (ABC) is a useful class of methods for Bayesian inference when the likelihood function is computationally intractable. In practice, the basic ABC algorithm may be inefficient in the presence of discrepancy between prior and posterior. Therefore, more elaborate methods, such as ABC with the Markov chain Monte Carlo algorithm (ABC-MCMC), should be used. However, the elaboration of a proposal density for MCMC is a sensitive issue and very difficult in the ABC setting, where the likelihood is intractable. We discuss an automatic proposal distribution useful for ABC-MCMC algorithms. This proposal is inspired by the theory of quasi-likelihood (QL) functions and is obtained by modelling the distribution of the summary statistics as a function of the parameters. Essentially, given a real-valued vector of summary statistics, we reparametrize the model by means of a regression function of the statistics on parameters, obtained by sampling from the original model in a pilot-run simulation study. The QL theory is well established for a scalar parameter, and it is shown that when the conditional variance of the summary statistic is assumed constant, the QL has a closed-form normal density. This idea of constructing proposal distributions is extended to non-constant variance and to real-valued parameter vectors. The method is illustrated by several examples and by an application to a real problem in population genetics.
8 citations
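The constant-variance scalar case described above can be sketched as: run a pilot simulation, regress the summary statistic on the parameter, and centre a normal proposal where the fitted regression matches the observed statistic. This is one plausible reading of the construction, not the paper's exact algorithm, and the Gaussian toy model below is an assumption chosen so the sketch is self-contained.

```python
import numpy as np

rng = np.random.default_rng(3)

# hypothetical toy model: scalar summary statistic s ~ N(theta, 1)
simulate_summary = lambda theta: rng.normal(theta, 1.0)

# pilot-run simulation: learn how the summary statistic moves with the parameter
pilot_theta = rng.uniform(-5.0, 5.0, size=2000)
pilot_s = np.array([simulate_summary(t) for t in pilot_theta])

# regression of the statistic on the parameter (constant-variance case)
b, a = np.polyfit(pilot_theta, pilot_s, 1)              # s ≈ a + b * theta
resid_sd = np.std(pilot_s - (a + b * pilot_theta))      # assumed constant

def ql_proposal(s_obs, size=1):
    # normal proposal centred where the fitted regression matches s_obs,
    # with the regression residual scale mapped back to parameter space
    return rng.normal((s_obs - a) / b, resid_sd / abs(b), size=size)

draws = ql_proposal(2.0, size=5000)
```

Because the proposal is built from the pilot run alone, it requires no likelihood evaluations at MCMC time, which is the point of an "automatic" proposal in the ABC setting.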