
Showing papers on "Monte Carlo method published in 2006"


Book
01 Jan 2006
TL;DR: In this paper, the authors present an analysis of the IEEE Reliability Test System (RTS) and evaluate the reliability worth of the test system using Monte Carlo simulation and third-order equations for overlapping events.
Abstract: Introduction. Generating Capacity-Basic Probability Methods. Generating Capacity-Frequency and Duration Method. Interconnected Systems. Operating Reserve. Composite Generation and Transmission Systems. Distribution Systems-Basic Techniques and Radial Networks. Distribution Systems-Parallel and Meshed Networks. Distribution Systems-Extended Techniques. Substations and Switching Stations. Plant and Station Availability. Applications of Monte Carlo Simulation. Evaluation of Reliability Worth. Epilogue. Appendix 1: Definitions. Appendix 2: Analysis of the IEEE Reliability Test System. Appendix 3: Third-order Equations for Overlapping Events. Solutions to Problems. Index.

3,712 citations


Journal ArticleDOI
TL;DR: In this paper, the authors propose a methodology to sample sequentially from a sequence of probability distributions that are defined on a common space, each distribution being known up to a normalizing constant.
Abstract: We propose a methodology to sample sequentially from a sequence of probability distributions that are defined on a common space, each distribution being known up to a normalizing constant. These probability distributions are approximated by a cloud of weighted random samples which are propagated over time by using sequential Monte Carlo methods. This methodology allows us to derive simple algorithms to make parallel Markov chain Monte Carlo algorithms interact to perform global optimization and sequential Bayesian estimation and to compute ratios of normalizing constants. We illustrate these algorithms for various integration tasks arising in the context of Bayesian inference.

1,684 citations
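The core loop described in the abstract (reweight between successive distributions, resample, then move the particles with an MCMC kernel) is compact enough to sketch. Below is a minimal, hypothetical Python illustration for a tempered sequence bridging a broad Gaussian to a toy bimodal target; the target, temperature schedule, and move kernel are all illustrative assumptions, not the paper's experiments.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_pi(x, beta):
    # Tempered density: (1 - beta) * log N(0, 3^2) + beta * log(bimodal target).
    log_prior = -0.5 * (x / 3.0) ** 2
    log_post = np.logaddexp(-0.5 * (x - 2.0) ** 2, -0.5 * (x + 2.0) ** 2)
    return (1.0 - beta) * log_prior + beta * log_post

N = 1000
betas = np.linspace(0.0, 1.0, 21)
x = rng.normal(0.0, 3.0, N)                        # particles from pi_0
for b_prev, b in zip(betas[:-1], betas[1:]):
    logw = log_pi(x, b) - log_pi(x, b_prev)        # incremental importance weights
    w = np.exp(logw - logw.max()); w /= w.sum()
    x = x[rng.choice(N, N, p=w)]                   # resample
    prop = x + rng.normal(0.0, 0.5, N)             # MH move targeting pi_b
    accept = np.log(rng.random(N)) < log_pi(prop, b) - log_pi(x, b)
    x = np.where(accept, prop, x)

print("estimated mean under the final target:", x.mean())
```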


Journal ArticleDOI
Piotr Golonka1, Zbigniew Was1
TL;DR: In this article, the authors present a discussion of the precision of the PHOTOS Monte Carlo algorithm, with improved implementation of QED interference and multiple-photon radiation, and find that the precision of the current version of PHOTOS is 0.1% in the case of Z and W decays.
Abstract: We present a discussion of the precision for the PHOTOS Monte Carlo algorithm, with improved implementation of QED interference and multiple-photon radiation. The main application of PHOTOS is the generation of QED radiative corrections in decays of any resonances, simulated by a "host" Monte Carlo generator. By careful comparisons automated with the help of the MC-TESTER tool specially tailored for that purpose, we found that the precision of the current version of PHOTOS is 0.1% in the case of Z and W decays. In the general case, the precision of PHOTOS was also improved, but this will not be quantified here.

1,063 citations


Posted Content
TL;DR: This article proposes a new variance estimator for OLS as well as for nonlinear estimators such as logit, probit and GMM, that provides cluster-robust inference when there is two-way or multi-way clustering that is non-nested.
Abstract: In this paper we propose a new variance estimator for OLS as well as for nonlinear estimators such as logit, probit and GMM, that provides cluster-robust inference when there is two-way or multi-way clustering that is non-nested. The variance estimator extends the standard cluster-robust variance estimator or sandwich estimator for one-way clustering (e.g. Liang and Zeger (1986), Arellano (1987)) and relies on similar relatively weak distributional assumptions. Our method is easily implemented in statistical packages, such as Stata and SAS, that already offer cluster-robust standard errors when there is one-way clustering. The method is demonstrated by a Monte Carlo analysis for a two-way random effects model; a Monte Carlo analysis of a placebo law that extends the state-year effects example of Bertrand et al. (2004) to two dimensions; and by application to two studies in the empirical public/labor literature where two-way clustering is present.

923 citations
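The estimator described amounts to three one-way sandwich calculations: add the variance matrices clustered on each dimension and subtract the one clustered on their intersection. Below is a minimal OLS sketch under that formula; function and variable names are my own, and real applications would use the cluster-robust options in Stata or SAS that the authors mention.

```python
import numpy as np

def cluster_meat(X, u, groups):
    # Sum over clusters of (X_c' u_c)(X_c' u_c)': the sandwich "meat".
    k = X.shape[1]
    meat = np.zeros((k, k))
    for g in np.unique(groups):
        s = X[groups == g].T @ u[groups == g]
        meat += np.outer(s, s)
    return meat

def twoway_cluster_vcov(X, y, g1, g2):
    # V = V(g1) + V(g2) - V(g1 intersect g2). Note the result is not
    # guaranteed positive semi-definite in finite samples.
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    u = y - X @ beta
    bread = np.linalg.inv(X.T @ X)
    g12 = np.array([f"{a}|{b}" for a, b in zip(g1, g2)])  # intersection clusters
    meat = (cluster_meat(X, u, g1) + cluster_meat(X, u, g2)
            - cluster_meat(X, u, g12))
    return beta, bread @ meat @ bread
```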


Journal ArticleDOI
TL;DR: In this article, basic properties and recent developments of Chebyshev expansion-based algorithms and the kernel polynomial method are reviewed, and an illustration of how the kernel polynomial method is successfully embedded into other numerical techniques, such as cluster perturbation theory or Monte Carlo simulation, is provided.
Abstract: Efficient and stable algorithms for the calculation of spectral quantities and correlation functions are some of the key tools in computational condensed-matter physics. In this paper basic properties and recent developments of Chebyshev expansion based algorithms and the kernel polynomial method are reviewed. Characterized by a resource consumption that scales linearly with the problem dimension these methods enjoyed growing popularity over the last decade and found broad application not only in physics. Representative examples from the fields of disordered systems, strongly correlated electrons, electron-phonon interaction, and quantum spin systems are discussed in detail. In addition, an illustration on how the kernel polynomial method is successfully embedded into other numerical techniques, such as cluster perturbation theory or Monte Carlo simulation, is provided.

786 citations
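The computational heart of the kernel polynomial method is the Chebyshev recurrence for the moments of the spectral density, evaluated stochastically with random vectors so that the cost scales linearly with the problem dimension, as the abstract notes. A minimal sketch (dense numpy for clarity; in practice one would use sparse matrix-vector products and Jackson-kernel damping):

```python
import numpy as np

def kpm_moments(H, n_moments=64, n_random=10, seed=1):
    # Stochastic estimate of mu_n = Tr[T_n(H)] / dim via random-phase vectors.
    # H must already be rescaled so its spectrum lies inside (-1, 1).
    rng = np.random.default_rng(seed)
    dim = H.shape[0]
    mu = np.zeros(n_moments)
    for _ in range(n_random):
        r = rng.choice([-1.0, 1.0], dim)
        v_prev, v = r.copy(), H @ r               # T_0(H) r and T_1(H) r
        mu[0] += r @ r
        mu[1] += r @ v
        for n in range(2, n_moments):
            v_prev, v = v, 2.0 * (H @ v) - v_prev  # Chebyshev recurrence
            mu[n] += r @ v
    # The density of states is then reconstructed as
    # rho(E) ~ (mu_0 + 2 * sum_n g_n mu_n T_n(E)) / (pi * sqrt(1 - E^2)),
    # with kernel coefficients g_n suppressing Gibbs oscillations.
    return mu / (n_random * dim)
```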


Journal ArticleDOI
TL;DR: A new continuous-time solver for quantum impurity models, such as those relevant to dynamical mean field theory, based on a stochastic sampling of a perturbation expansion in the impurity-bath hybridization parameter, is presented; it allows very efficient simulations even at low temperatures and for strong interactions.
Abstract: We present a new continuous-time solver for quantum impurity models such as those relevant to dynamical mean field theory. It is based on a stochastic sampling of a perturbation expansion in the impurity-bath hybridization parameter. Comparisons with Monte Carlo and exact diagonalization calculations confirm the accuracy of the new approach, which allows very efficient simulations even at low temperatures and for strong interactions. As examples of the power of the method we present results for the temperature dependence of the kinetic energy and the free energy, enabling an accurate location of the temperature-driven metal-insulator transition.

771 citations


Journal ArticleDOI
TL;DR: A surface-based version of the cluster-size exclusion method for multiple-comparisons correction is implemented, along with a new method for generating regions of interest on the cortical surface using a sliding cluster-exclusion threshold followed by cluster growth.

703 citations


Journal ArticleDOI
TL;DR: In this article, a simple test for causality in the frequency domain, which can also be applied to cointegrated systems, is proposed and used to investigate the predictive content of the yield spread for future output growth.

647 citations


Journal ArticleDOI
07 Dec 2006-Scanning
TL;DR: The CASINO program as discussed by the authors is a single-scattering Monte CArlo SImulation of electroN trajectory in sOlids, specially designed for low-energy beam interaction in bulk and thin foil specimens.
Abstract: This paper is a guide to the ANSI standard C code of the CASINO program, which is a single-scattering Monte CArlo SImulation of electroN trajectory in sOlids specially designed for low-energy beam interaction in bulk and thin foil specimens. CASINO can be used either on a DOS-based PC or on a UNIX-based workstation. This program uses tabulated Mott elastic cross sections and experimentally determined stopping powers. Function pointers are used for the most essential routines so that different physical models can easily be implemented. CASINO can be used to generate all of the recorded signals (x-rays, secondary, and backscattered electrons) in a scanning electron microscope, either as a point analysis, as a linescan, or in an image format, for all accelerating voltages (0.1–30 kV). As an example of application, it was found that a 20 nm Guinier–Preston Mg2Si precipitate in a light aluminum matrix can, theoretically, be imaged with a microchannel backscattered detector at 5 keV with a beam spot size of 5 nm.

597 citations
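The single-scattering logic CASINO implements can be illustrated in a few lines: sample an exponential free path from the elastic mean free path, then sample a polar deflection from the elastic cross section. The sketch below uses the screened-Rutherford angular distribution as a stand-in for CASINO's tabulated Mott cross sections, with the common textbook screening parametrization; both choices are illustrative assumptions, not the program's actual physics tables.

```python
import numpy as np

rng = np.random.default_rng(2)

def free_path(mfp_nm):
    # Distance to the next elastic event: exponential with mean free path mfp_nm.
    return -mfp_nm * np.log(rng.random())

def polar_angle(E_keV, Z):
    # Screened-Rutherford scattering angle (stand-in for tabulated Mott data).
    alpha = 3.4e-3 * Z ** (2.0 / 3.0) / E_keV       # screening parameter
    R = rng.random()
    cos_theta = 1.0 - 2.0 * alpha * R / (1.0 + alpha - R)
    return np.arccos(np.clip(cos_theta, -1.0, 1.0))

# Each trajectory alternates free_path() steps and polar_angle() deflections
# (plus a uniform azimuth in [0, 2*pi)), with the energy decremented by a
# stopping-power law until the electron exits the solid or is absorbed.
```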


Journal ArticleDOI
TL;DR: In this paper, three-dimensional geometrical models for concrete are generated taking the random structure of aggregates at the mesoscopic level into consideration, where the aggregate particles are generated from a certain aggregate size distribution and then placed into the concrete specimen in such a way that there is no intersection between the particles.

594 citations


Journal ArticleDOI
TL;DR: This paper gives a brief general introduction to the nature of Monte Carlo methods, which can be skipped by readers acquainted with them, then deals more specifically with the application of these methods to multivariable problems and indicates certain relatively unexplored areas of this field where further research might be profitable.
Abstract: This paper opens with a brief general introduction on the nature of Monte Carlo methods that can be skipped by readers acquainted with them. I then deal more specifically with the application of these methods to multivariable problems, and I indicate certain relatively unexplored areas of this field where further research might be profitable. As I believe is appropriate, some of my material is exploratory, speculative, and controversial, and accordingly I hope it will stimulate discussion.

Proceedings ArticleDOI
01 Sep 2006
TL;DR: In this paper a comparison is made between four frequently encountered resampling algorithms for particle filters, and a theoretical framework is introduced to understand and explain the differences between the resampling algorithms.
Abstract: In this paper a comparison is made between four frequently encountered resampling algorithms for particle filters. A theoretical framework is introduced to be able to understand and explain the differences between the resampling algorithms. This facilitates a comparison of the algorithms with respect to their resampling quality and computational complexity. Using extensive Monte Carlo simulations the theoretical results are verified. It is found that systematic resampling is favourable, both in terms of resampling quality and computational complexity.
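For reference, the scheme the comparison favours is easy to state: a single uniform draw generates N evenly spaced pointers into the cumulative weight distribution, so every particle with weight above 1/N is kept at least once. A minimal sketch:

```python
import numpy as np

def systematic_resample(weights, rng=None):
    # One uniform draw, N evenly spaced pointers into the weight CDF.
    rng = rng or np.random.default_rng()
    N = len(weights)
    positions = (rng.random() + np.arange(N)) / N
    cdf = np.cumsum(weights)
    cdf[-1] = 1.0                            # guard against round-off
    return np.searchsorted(cdf, positions)   # ancestor indices
```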

Journal ArticleDOI
TL;DR: Monte Carlo methods are proposed, which build on recent advances on the exact simulation of diffusions, for performing maximum likelihood and Bayesian estimation for discretely observed diffusions.
Abstract: The objective of the paper is to present a novel methodology for likelihood-based inference for discretely observed diffusions. We propose Monte Carlo methods, which build on recent advances on the exact simulation of diffusions, for performing maximum likelihood and Bayesian estimation.

Journal ArticleDOI
TL;DR: In this paper, a new continuum 3D radiative transfer code, MCFOST, based on a Monte-Carlo method, is presented to calculate monochromatic images in scattered light and/or thermal emission.
Abstract: Aims. We present a new continuum 3D radiative transfer code, MCFOST, based on a Monte Carlo method. MCFOST can be used to calculate (i) monochromatic images in scattered light and/or thermal emission; (ii) polarisation maps; (iii) interferometric visibilities; (iv) spectral energy distributions; and (v) dust temperature distributions of protoplanetary disks. Methods. Several improvements to the standard Monte Carlo method are implemented in MCFOST to increase efficiency and reduce convergence time, including wavelength distribution adjustments, mean intensity calculations, and an adaptive sampling of the radiation field. The reliability and efficiency of the code are tested against a previously defined benchmark, using a 2D disk configuration. No significant difference (no more than 10% and usually much less) is found between the temperatures and SEDs calculated by MCFOST and by other codes included in the benchmark. Results. A study of the lowest disk mass detectable by Spitzer, around young stars, is presented, and the colours of "representative" parametric disks are compared to recent IRAC and MIPS Spitzer colours of solar-like young stars located in nearby star-forming regions.

Journal ArticleDOI
TL;DR: This article reviews recent progress in the development of particle track simulation for electrons and low-energy light ions, and finally the recent model development for low-energy electron cross sections in liquid water.

Journal ArticleDOI
TL;DR: The history of the correction for wall attenuation and scatter in an ion chamber is presented as it demonstrates the interplay between a specific problem and the development of tools to solve the problem which in turn leads to applications in other areas.
Abstract: Monte Carlo techniques have become ubiquitous in medical physics over the last 50 years with a doubling of papers on the subject every 5 years between the first PMB paper in 1967 and 2000 when the numbers levelled off. While recognizing the many other roles that Monte Carlo techniques have played in medical physics, this review emphasizes techniques for electron-photon transport simulations. The broad range of codes available is mentioned but there is special emphasis on the EGS4/EGSnrc code system which the author has helped develop for 25 years. The importance of the 1987 Erice Summer School on Monte Carlo techniques is highlighted. As an illustrative example of the role Monte Carlo techniques have played, the history of the correction for wall attenuation and scatter in an ion chamber is presented as it demonstrates the interplay between a specific problem and the development of tools to solve the problem which in turn leads to applications in other areas.

Book
01 Jan 2006
TL;DR: This book covers Monte Carlo methods for hard disks and spheres, density matrices and path integrals, the Bose gas, order and disorder in spin systems, entropic forces, and dynamic Monte Carlo methods.
Abstract: 1. Monte Carlo Methods. 2. Hard Disks and Spheres. 3. Density Matrices and Path Integrals. 4. The Bose Gas. 5. Order and Disorder in Spin Systems. 6. Entropic Forces. 7. Dynamic Monte Carlo Methods.

Proceedings ArticleDOI
24 Jul 2006
TL;DR: A novel methodology for statistical SRAM design and analysis is proposed, based on an efficient form of importance sampling, mixture importance sampling; the method is comprehensive, computationally efficient, and in excellent agreement with standard Monte Carlo techniques.
Abstract: In this paper, we propose a novel methodology for statistical SRAM design and analysis. It relies on an efficient form of importance sampling, mixture importance sampling. The method is comprehensive, computationally efficient and the results are in excellent agreement with those obtained via standard Monte Carlo techniques. All this comes at significant gains in speed and accuracy, with speedup of more than 100× compared to regular Monte Carlo. To the best of our knowledge, this is the first time such a methodology is applied to the analysis of SRAM designs.
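The idea scales down to a very small estimator: draw from a mixture that keeps some mass on the nominal distribution (so the importance weights stay bounded) while shifting the rest toward the failure region. A toy sketch for a Gaussian tail probability follows; the threshold, mixture weight, and shift are arbitrary illustrative choices, not the SRAM setup of the paper.

```python
import numpy as np

rng = np.random.default_rng(4)

def norm_pdf(x, mu=0.0):
    return np.exp(-0.5 * (x - mu) ** 2) / np.sqrt(2.0 * np.pi)

def tail_prob_mis(t=4.5, n=20000, lam=0.5, shift=4.5):
    # Estimate P(X > t) for X ~ N(0,1) using the mixture proposal
    # g = lam * N(0,1) + (1-lam) * N(shift,1). Plain Monte Carlo would
    # need on the order of 1/P(X > t) samples to see any failures at all.
    comp = rng.random(n) < lam
    x = np.where(comp, rng.normal(0.0, 1.0, n), rng.normal(shift, 1.0, n))
    w = norm_pdf(x) / (lam * norm_pdf(x) + (1.0 - lam) * norm_pdf(x, shift))
    return np.mean(w * (x > t))

print(tail_prob_mis())   # roughly 3.4e-6 for t = 4.5
```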

Journal ArticleDOI
TL;DR: In this article, the authors consider a method that stops the simulation when the width of a confidence interval based on an ergodic average is less than a user-specified value.
Abstract: Markov chain Monte Carlo is a method of producing a correlated sample to estimate features of a target distribution through ergodic averages. A fundamental question is when sampling should stop; that is, at what point the ergodic averages are good estimates of the desired quantities. We consider a method that stops the simulation when the width of a confidence interval based on an ergodic average is less than a user-specified value. Hence calculating a Monte Carlo standard error is a critical step in assessing the simulation output. We consider the regenerative simulation and batch means methods of estimating the variance of the asymptotic normal distribution. We give sufficient conditions for the strong consistency of both methods and investigate their finite-sample properties in various examples.
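A minimal version of the batch means estimator and the fixed-width stopping check described above is easy to write down. Batch size b = floor(sqrt(n)) is a common default, an assumption here; the paper's contribution is the conditions under which such estimators are strongly consistent.

```python
import numpy as np

def batch_means_se(chain):
    # Monte Carlo standard error of the ergodic average via
    # non-overlapping batch means.
    n = len(chain)
    b = int(np.sqrt(n))                   # batch size
    a = n // b                            # number of batches
    means = np.asarray(chain[: a * b]).reshape(a, b).mean(axis=1)
    var_hat = b * np.sum((means - means.mean()) ** 2) / (a - 1)
    return np.sqrt(var_hat / n)

def should_stop(chain, eps, z=1.96):
    # Fixed-width rule: stop when the CI half-width falls below eps.
    return z * batch_means_se(chain) < eps
```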

Journal ArticleDOI
TL;DR: The transfer code SEDONA as mentioned in this paper has been developed to calculate the light curves, spectra, and polarization of aspherical supernova models from the onset of free expansion in the supernova ejecta.
Abstract: We discuss Monte Carlo techniques for addressing the three-dimensional time-dependent radiative transfer problem in rapidly expanding supernova atmospheres. The transfer code SEDONA has been developed to calculate the light curves, spectra, and polarization of aspherical supernova models. From the onset of free expansion in the supernova ejecta, SEDONA solves the radiative transfer problem self-consistently, including a detailed treatment of gamma-ray transfer from radioactive decay and with a radiative equilibrium solution of the temperature structure. Line fluorescence processes can also be treated directly. No free parameters need be adjusted in the radiative transfer calculation, providing a direct link between multidimensional hydrodynamic explosion models and observations. We describe the computational techniques applied in SEDONA and verify the code by comparison to existing calculations. We find that convergence of the Monte Carlo method is rapid and stable even for complicated multidimensional configurations. We also investigate the accuracy of a few commonly applied approximations in supernova transfer, namely, the stationarity approximation and the two-level atom expansion opacity formalism.

Journal ArticleDOI
TL;DR: The results verify at both theoretical levels that SiCNTs seem to be more suitable materials for hydrogen storage than pure CNTs.
Abstract: A multiscale theoretical approach is used for the investigation of hydrogen storage in silicon-carbon nanotubes (SiCNTs). First, ab initio calculations at the density functional level of theory (DFT) showed an increase of 20% in the binding energy of H2 in SiCNTs compared with pure carbon nanotubes (CNTs). This is explained by the alternating charges that exist in the SiCNT walls. Second, classical Monte Carlo simulation of nanotube bundles showed an even larger increase of the storage capacity in SiCNTs, especially at low-temperature and high-pressure conditions. Our results verify at both theoretical levels that SiCNTs seem to be more suitable materials for hydrogen storage than pure CNTs.

Journal ArticleDOI
TL;DR: Improvements in the calculated dose have been shown when models consider volume scatter and changes in electron transport, especially when the extension of the irradiated volume was limited and when low densities were present in or adjacent to the fields.
Abstract: A study of the performance of five commercial radiotherapy treatment planning systems (TPSs) for common treatment sites regarding their ability to model heterogeneities and scattered photons has been performed. The comparison was based on CT information for prostate, head and neck, breast and lung cancer cases. The TPSs were installed locally at different institutions and commissioned for clinical use based on local procedures. For the evaluation, beam qualities as identical as possible were used: low energy (6 MV) and high energy (15 or 18 MV) x-rays. All relevant anatomical structures were outlined and simple treatment plans were set up. Images, structures and plans were exported, anonymized and distributed to the participating institutions using the DICOM protocol. The plans were then re-calculated locally and exported back for evaluation. The TPSs cover dose calculation techniques from correction-based equivalent path length algorithms to model-based algorithms. These were divided into two groups based on how changes in electron transport are accounted for ((a) not considered and (b) considered). Increasing the complexity from the relatively homogeneous pelvic region to the very inhomogeneous lung region resulted in less accurate dose distributions. Improvements in the calculated dose have been shown when models consider volume scatter and changes in electron transport, especially when the extension of the irradiated volume was limited and when low densities were present in or adjacent to the fields. A Monte Carlo calculated algorithm input data set and a benchmark set for a virtual linear accelerator have been produced which have facilitated the analysis and interpretation of the results. The more sophisticated models in the type b group exhibit changes in both absorbed dose and its distribution which are congruent with the simulations performed by Monte Carlo-based virtual accelerator.

Journal ArticleDOI
TL;DR: In this paper, the authors present three algorithms for calculating rate constants and sampling transition paths for rare events in simulations with stochastic dynamics, which do not require a priori knowledge of the phase-space density and are suitable for equilibrium or nonequilibrium systems in stationary state.
Abstract: We present three algorithms for calculating rate constants and sampling transition paths for rare events in simulations with stochastic dynamics. The methods do not require a priori knowledge of the phase-space density and are suitable for equilibrium or nonequilibrium systems in stationary state. All the methods use a series of interfaces in phase space, between the initial and final states, to generate transition paths as chains of connected partial paths, in a ratchetlike manner. No assumptions are made about the distribution of paths at the interfaces. The three methods differ in the way that the transition path ensemble is generated. We apply the algorithms to kinetic Monte Carlo simulations of a genetic switch and to Langevin dynamics simulations of intermittently driven polymer translocation through a pore. We find that the three methods are all of comparable efficiency, and that all the methods are much more efficient than brute-force simulation.
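One stage of such an interface-based scheme reduces to: fire trial runs from configurations stored at interface i, count the fraction that reach interface i+1 before falling back into the initial basin, and keep the successful endpoints to seed the next stage. A toy sketch with overdamped Langevin dynamics in a double well; the potential, time step, and interface placement are illustrative assumptions, and the three algorithms of the paper differ precisely in how the path ensemble is generated around this building block.

```python
import numpy as np

rng = np.random.default_rng(5)

def langevin_step(x, dt=1e-3):
    # Overdamped Langevin dynamics in the double well V(x) = (x^2 - 1)^2.
    force = -4.0 * x * (x * x - 1.0)
    return x + force * dt + np.sqrt(2.0 * dt) * rng.normal()

def interface_stage(starts, lam_next, lam_A, n_trials=200):
    # Fraction of trial paths from the current interface that reach
    # lam_next before returning to the A-basin boundary lam_A, plus
    # the stored endpoints that seed the next stage.
    hits = []
    for _ in range(n_trials):
        x = starts[rng.integers(len(starts))]
        while lam_A < x < lam_next:
            x = langevin_step(x)
        if x >= lam_next:
            hits.append(x)
    return len(hits) / n_trials, hits

# The rate constant is then the initial flux through the first interface
# times the product of the stage probabilities.
```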

Journal ArticleDOI
TL;DR: A flexible and fast Monte Carlo-based model of diffuse reflectance has been developed for the extraction of the absorption and scattering properties of turbid media, such as human tissues, and it was found that optical properties could be extracted from the experimentally measured diffuse reflectance spectra with an average error of 3% or less.
Abstract: A flexible and fast Monte Carlo-based model of diffuse reflectance has been developed for the extraction of the absorption and scattering properties of turbid media, such as human tissues. This method is valid for a wide range of optical properties and is easily adaptable to existing probe geometries, provided a single phantom calibration measurement is made. A condensed Monte Carlo method was used to speed up the forward simulations. This model was validated by use of two sets of liquid-tissue phantoms containing Nigrosin or hemoglobin as absorbers and polystyrene spheres as scatterers. The phantoms had a wide range of absorption (0-20 cm^-1) and reduced scattering coefficients (7-33 cm^-1). Mie theory and a spectrophotometer were used to determine the absorption and reduced scattering coefficients of the phantoms. The diffuse reflectance spectra of the phantoms were measured over a wavelength range of 350-850 nm. It was found that optical properties could be extracted from the experimentally measured diffuse reflectance spectra with an average error of 3% or less for phantoms containing hemoglobin and 12% or less for phantoms containing Nigrosin.

01 Jan 2006
TL;DR: MoGo as mentioned in this paper is the first computer Go program using UCT, an extension of the UCB1 multi-armed bandit algorithm to tree search; together with intelligent random simulation with patterns, which has significantly improved its performance, MoGo is now a top-level Go program on 9×9 and 13×13 Go boards.
Abstract: Algorithm UCB1 for the multi-armed bandit problem has already been extended to Algorithm UCT (Upper bound Confidence for Tree), which works for minimax tree search. We have developed a Monte-Carlo Go program, MoGo, which is the first computer Go program using UCT. We explain our modification of UCT for the Go application and also the intelligent random simulation with patterns which has significantly improved the performance of MoGo. UCT combined with pruning techniques for the large Go board is discussed, as well as parallelization of UCT. MoGo is now a top-level Go program on 9×9 and 13×13 Go boards.
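The UCB1 rule at the heart of UCT picks, at each tree node, the child maximizing an optimistic value estimate. A minimal sketch follows; the dictionary-based node representation and the exploration constant sqrt(2) are illustrative assumptions, and MoGo's contribution is precisely in tuning this trade-off and biasing the random playouts with patterns.

```python
import math

def ucb1_select(children):
    # children: list of dicts with "wins" and "visits" counts.
    total = sum(c["visits"] for c in children)
    def score(c):
        if c["visits"] == 0:
            return float("inf")            # expand unvisited moves first
        exploit = c["wins"] / c["visits"]
        explore = math.sqrt(2.0 * math.log(total) / c["visits"])
        return exploit + explore
    return max(children, key=score)
```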

Journal ArticleDOI
TL;DR: A detailed description is provided of a new worm algorithm, enabling the accurate computation of thermodynamic properties of quantum many-body systems in continuous space, at finite temperature, within the general path integral Monte Carlo scheme.
Abstract: A detailed description is provided of a new worm algorithm, enabling the accurate computation of thermodynamic properties of quantum many-body systems in continuous space, at finite temperature. The algorithm is formulated within the general path integral Monte Carlo (PIMC) scheme, but also allows one to perform quantum simulations in the grand canonical ensemble, as well as to compute off-diagonal imaginary-time correlation functions, such as the Matsubara Green function, simultaneously with diagonal observables. Another important innovation consists of the expansion of the attractive part of the pairwise potential energy into elementary (diagrammatic) contributions, which are then statistically sampled. This affords a complete microscopic account of the long-range part of the potential energy, while keeping the computational complexity of all updates independent of the size of the simulated system. The computational scheme allows for efficient calculations of the superfluid fraction and off-diagonal correlations in space-time, for system sizes which are orders of magnitude larger than those accessible to conventional PIMC. We present illustrative results for the superfluid transition in bulk liquid ^4He in two and three dimensions, as well as the calculation of the chemical potential of hcp ^4He.


Journal ArticleDOI
TL;DR: In this article, a nonparametric bootstrap is proposed for estimating location parameters and the corresponding variances, and an estimate of bias and a measure of variance of the point estimate are computed using the Monte Carlo method.
Abstract: Purposive sampling is described as a random selection of sampling units within the segment of the population with the most information on the characteristic of interest. A nonparametric bootstrap is proposed for estimating location parameters and the corresponding variances. An estimate of bias and a measure of variance of the point estimate are computed using the Monte Carlo method. The bootstrap estimator of the population mean is efficient and consistent in the homogeneous, heterogeneous, and two-segment populations simulated. The design-unbiased approximation of the standard error estimate differs substantially from the bootstrap estimate in severely heterogeneous and positively skewed populations.
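The estimator described is straightforward to sketch: resample the purposive sample with replacement, recompute the location statistic, and read off Monte Carlo estimates of bias and standard error from the replicates. Names and defaults below are illustrative choices, not the paper's exact settings.

```python
import numpy as np

def bootstrap_location(sample, stat=np.median, B=2000, seed=6):
    # Nonparametric bootstrap: B resamples with replacement give Monte
    # Carlo estimates of the bias and standard error of stat(sample).
    rng = np.random.default_rng(seed)
    sample = np.asarray(sample)
    theta_hat = stat(sample)
    reps = np.array([stat(rng.choice(sample, sample.size, replace=True))
                     for _ in range(B)])
    return theta_hat, reps.mean() - theta_hat, reps.std(ddof=1)
```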

Book ChapterDOI
15 Aug 2006
TL;DR: The bootstrap particle filter is stated with explicit ancestor indices, which keep track of exactly what happens in each resampling step through bookkeeping added to the propagation step.
Abstract: Algorithm 1 Bootstrap particle filter (for i = 1, ..., N).
1. Initialization (t = 0):
(a) Sample x_0^i ~ p(x_0).
(b) Set initial weights: w_0^i = 1/N.
2. For t = 1 to T do:
(a) Resample: sample ancestor indices a_t^i ~ C({w_{t-1}^j}_{j=1}^N).
(b) Propagate: sample x_t^i ~ p(x_t | x_{t-1}^{a_t^i}) and set x_{0:t}^i = {x_{0:t-1}^{a_t^i}, x_t^i}.
(c) Weight: compute w̃_t^i = p(y_t | x_t^i) and normalize w_t^i = w̃_t^i / sum_{j=1}^N w̃_t^j.
The ancestor indices {a_t^i}_{i=1}^N allow us to keep track of exactly what happens in each resampling step. Note the bookkeeping added to the propagation step 2(b).
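Translated into code, the algorithm above is only a few lines per time step. A runnable sketch for a toy linear-Gaussian state-space model follows; the model and its parameters are illustrative assumptions, while the algorithm itself is model-agnostic.

```python
import numpy as np

def bootstrap_pf(y, N=500, phi=0.9, q=1.0, r=1.0, seed=7):
    # Bootstrap particle filter for x_t = phi * x_{t-1} + N(0, q),
    # y_t = x_t + N(0, r). Returns the filtering means E[x_t | y_1:t].
    rng = np.random.default_rng(seed)
    x = rng.normal(0.0, 1.0, N)                  # step 1a: x_0^i ~ p(x_0)
    w = np.full(N, 1.0 / N)                      # step 1b: w_0^i = 1/N
    means = np.empty(len(y))
    for t in range(len(y)):
        a = rng.choice(N, N, p=w)                # step 2a: resample ancestors
        x = phi * x[a] + rng.normal(0.0, np.sqrt(q), N)   # step 2b: propagate
        logw = -0.5 * (y[t] - x) ** 2 / r        # step 2c: weight by p(y_t | x_t)
        w = np.exp(logw - logw.max())
        w /= w.sum()
        means[t] = np.sum(w * x)
    return means
```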

Journal ArticleDOI
TL;DR: In this paper, a particular type of Markov chain Monte Carlo (MCMC) sampling algorithm is highlighted which allows probabilistic sampling in variable dimension spaces, and it is shown that once evidence calculations are performed, the results of complex variable dimension sampling algorithms can be replicated with simple and more familiar fixed dimensional MCMC sampling techniques.
Abstract: In most geophysical inverse problems the properties of interest are parametrized using a fixed number of unknowns. In some cases arguments can be used to bound the maximum number of parameters that need to be considered. In others the number of unknowns is set at some arbitrary value and regularization is used to encourage simple, non-extravagant models. In recent times variable or self-adaptive parametrizations have gained in popularity. Rarely, however, is the number of unknowns itself directly treated as an unknown. This situation leads to a trans-dimensional inverse problem, that is, one where the dimension of the parameter space is a variable to be solved for. This paper discusses trans-dimensional inverse problems from the Bayesian viewpoint. A particular type of Markov chain Monte Carlo (MCMC) sampling algorithm is highlighted which allows probabilistic sampling in variable dimension spaces. A quantity termed the evidence or marginal likelihood plays a key role in this type of problem. It is shown that once evidence calculations are performed, the results of complex variable dimension sampling algorithms can be replicated with simple and more familiar fixed dimensional MCMC sampling techniques. Numerical examples are used to illustrate the main points. The evidence can be difficult to calculate, especially in high-dimensional non-linear inverse problems. Nevertheless some general strategies are discussed and analytical expressions given for certain linear problems.