
Showing papers in "Statistics and Computing in 2006"


Journal ArticleDOI
TL;DR: The ergodicity of the resulting non-Markovian sampler is proved, and the combination of adaptive Metropolis samplers and delayed rejection is shown to outperform the original methods in efficiency.
Abstract: We propose to combine two quite powerful ideas that have recently appeared in the Markov chain Monte Carlo literature: adaptive Metropolis samplers and delayed rejection. The ergodicity of the resulting non-Markovian sampler is proved, and the efficiency of the combination is demonstrated with various examples. We present situations where the combination outperforms the original methods: adaptation clearly enhances efficiency of the delayed rejection algorithm in cases where good proposal distributions are not available. Similarly, delayed rejection provides a systematic remedy when the adaptation process has a slow start.

1,394 citations
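
The two ingredients combine naturally in code. Below is a minimal Python sketch of the idea, not the authors' exact DRAM algorithm: the proposal covariance is adapted from the chain history (with the usual 2.4^2/d scaling), and a rejected first-stage move triggers a narrower second-stage proposal accepted with the two-stage delayed-rejection ratio. All function names and tuning constants are illustrative.

```python
import numpy as np

def gauss_logpdf(y, mean, cov):
    d = len(mean)
    diff = y - mean
    _, logdet = np.linalg.slogdet(cov)
    return -0.5 * (d * np.log(2 * np.pi) + logdet + diff @ np.linalg.solve(cov, diff))

def dram(log_target, x0, n_iter=5000, sd0=1.0, adapt_start=500, eps=1e-6, shrink=0.2, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, float)
    d = len(x)
    lp = log_target(x)
    chain = np.empty((n_iter, d))
    cov = sd0 ** 2 * np.eye(d)
    scale = 2.4 ** 2 / d                      # adaptive-Metropolis scaling
    for t in range(n_iter):
        L = np.linalg.cholesky(cov)
        y1 = x + L @ rng.standard_normal(d)   # first-stage proposal
        lp1 = log_target(y1)
        a1 = np.exp(min(0.0, lp1 - lp))
        if rng.random() < a1:
            x, lp = y1, lp1
        else:                                 # delayed rejection: narrower second stage
            y2 = x + shrink * (L @ rng.standard_normal(d))
            lp2 = log_target(y2)
            a1_rev = np.exp(min(0.0, lp1 - lp2))
            # two-stage ratio; the symmetric second-stage kernel cancels
            num = lp2 + gauss_logpdf(y1, y2, cov) + np.log(max(1.0 - a1_rev, 1e-300))
            den = lp + gauss_logpdf(y1, x, cov) + np.log(max(1.0 - a1, 1e-300))
            if rng.random() < np.exp(min(0.0, num - den)):
                x, lp = y2, lp2
        chain[t] = x
        if t >= adapt_start:                  # adapt proposal covariance to chain history
            cov = scale * (np.cov(chain[:t + 1].T) + eps * np.eye(d))
    return chain
```

Recomputing the empirical covariance from scratch each iteration keeps the sketch short; a running update is the practical choice.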


Journal ArticleDOI
TL;DR: The essential ideas of DE and MCMC are integrated, resulting in Differential Evolution Markov Chain (DE-MC), a population MCMC algorithm, in which multiple chains are run in parallel, showing simplicity, speed of calculation and convergence, even for nearly collinear parameters and multimodal densities.
Abstract: Differential Evolution (DE) is a simple genetic algorithm for numerical optimization in real parameter spaces. In a statistical context one would not just want the optimum but also its uncertainty. The uncertainty distribution can be obtained by a Bayesian analysis (after specifying prior and likelihood) using Markov Chain Monte Carlo (MCMC) simulation. This paper integrates the essential ideas of DE and MCMC, resulting in Differential Evolution Markov Chain (DE-MC). DE-MC is a population MCMC algorithm, in which multiple chains are run in parallel. DE-MC solves an important problem in MCMC, namely that of choosing an appropriate scale and orientation for the jumping distribution. In DE-MC the jumps are simply a fixed multiple of the difference of two random parameter vectors that are currently in the population. The selection process of DE-MC works via the usual Metropolis ratio which defines the probability with which a proposal is accepted. In tests with known uncertainty distributions, the efficiency of DE-MC with respect to random walk Metropolis with optimal multivariate Normal jumps ranged from 68% for small population sizes to 100% for large population sizes and even to 500% for the 97.5% point of a variable from a 50-dimensional Student distribution. Two Bayesian examples illustrate the potential of DE-MC in practice. DE-MC is shown to facilitate multidimensional updates in a multi-chain "Metropolis-within-Gibbs" sampling approach. The advantages of DE-MC over conventional MCMC are simplicity, speed of calculation and convergence, even for nearly collinear parameters and multimodal densities.

839 citations
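
The jumping rule is simple enough to state in a few lines. A minimal sketch with hypothetical function names; gamma = 2.38/sqrt(2d) is the scaling suggested in the paper:

```python
import numpy as np

def de_mc(log_target, init_pop, n_gen=2000, gamma=None, eps=1e-4, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    pop = np.array(init_pop, float)        # shape (N, d): N parallel chains
    N, d = pop.shape
    if gamma is None:
        gamma = 2.38 / np.sqrt(2 * d)
    lp = np.array([log_target(x) for x in pop])
    samples = []
    for _ in range(n_gen):
        for i in range(N):
            r1, r2 = rng.choice([j for j in range(N) if j != i], size=2, replace=False)
            # jump = fixed multiple of the difference of two other chains, plus jitter
            prop = pop[i] + gamma * (pop[r1] - pop[r2]) + eps * rng.standard_normal(d)
            lp_prop = log_target(prop)
            if np.log(rng.random()) < lp_prop - lp[i]:   # usual Metropolis ratio
                pop[i], lp[i] = prop, lp_prop
        samples.append(pop.copy())
    return np.array(samples)               # shape (n_gen, N, d)
```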


Journal ArticleDOI
TL;DR: The method can cope with a range of models, exact simulation from the posterior distribution is possible in a matter of minutes, and the approach can be useful within an MCMC algorithm even when the independence assumptions do not hold.
Abstract: We demonstrate how to perform direct simulation from the posterior distribution of a class of multiple changepoint models where the number of changepoints is unknown. The class of models assumes independence between the posterior distribution of the parameters associated with segments of data between successive changepoints. This approach is based on the use of recursions, and is related to work on product partition models. The computational complexity of the approach is quadratic in the number of observations, but an approximate version, which introduces negligible error, and whose computational cost is roughly linear in the number of observations, is also possible. Our approach can be useful, for example within an MCMC algorithm, even when the independence assumptions do not hold. We demonstrate our approach on coal-mining disaster data and on well-log data. Our method can cope with a range of models, and exact simulation from the posterior distribution is possible in a matter of minutes.

457 citations
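
The recursions lend themselves to a compact dynamic-programming implementation. The sketch below assumes a toy conjugate segment model (iid N(mu, 1) data with a N(0, 1) prior on each segment mean) and a geometric prior on segment lengths; it illustrates the recursion-plus-direct-simulation idea, not the paper's exact algorithm, and runs in the quadratic time the abstract mentions.

```python
import numpy as np

def seg_loglik(cs1, cs2, t, s):
    # marginal likelihood of y[t..s] under y_i ~ N(mu, 1), mu ~ N(0, 1)
    k = s - t + 1
    S1 = cs1[s + 1] - cs1[t]
    S2 = cs2[s + 1] - cs2[t]
    return -0.5 * (k * np.log(2 * np.pi) + np.log(1 + k) + S2 - S1 ** 2 / (1 + k))

def sample_changepoints(y, p=0.05, n_draws=5, rng=None):
    """Backward recursion Q[t] = P(y[t:] | changepoint just before t), then
    direct forward simulation of changepoint positions given the data."""
    rng = np.random.default_rng() if rng is None else rng
    y = np.asarray(y, float)
    n = len(y)
    cs1 = np.concatenate([[0.0], np.cumsum(y)])
    cs2 = np.concatenate([[0.0], np.cumsum(y ** 2)])
    logQ = np.zeros(n + 1)
    for t in range(n - 1, -1, -1):
        terms = [seg_loglik(cs1, cs2, t, s) + np.log(p) + (s - t) * np.log(1 - p)
                 + logQ[s + 1] for s in range(t, n - 1)]
        terms.append(seg_loglik(cs1, cs2, t, n - 1) + (n - 1 - t) * np.log(1 - p))
        logQ[t] = np.logaddexp.reduce(np.array(terms))
    draws = []
    for _ in range(n_draws):
        cps, t = [], 0
        while True:
            logw = [seg_loglik(cs1, cs2, t, s) + np.log(p) + (s - t) * np.log(1 - p)
                    + logQ[s + 1] for s in range(t, n - 1)]
            logw.append(seg_loglik(cs1, cs2, t, n - 1) + (n - 1 - t) * np.log(1 - p))
            w = np.exp(np.array(logw) - logQ[t])
            s = t + rng.choice(len(w), p=w / w.sum())
            if s == n - 1:
                break                        # final segment runs to the end
            cps.append(s + 1)                # next segment starts at s + 1
            t = s + 1
        draws.append(cps)
    return draws
```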


Journal ArticleDOI
TL;DR: This paper adapts recently developed simulation-based sequential algorithms to the problem concerning the Bayesian analysis of discretely observed diffusion processes and applies the method to the estimation of parameters in a simple stochastic volatility model of the U.S. short-term interest rate.
Abstract: In this paper, we adapt recently developed simulation-based sequential algorithms to the problem concerning the Bayesian analysis of discretely observed diffusion processes. The estimation framework involves the introduction of m−1 latent data points between every pair of observations. Sequential MCMC methods are then used to sample the posterior distribution of the latent data and the model parameters on-line. The method is applied to the estimation of parameters in a simple stochastic volatility model (SV) of the U.S. short-term interest rate. We also provide a simulation study to validate our method, using synthetic data generated by the SV model with parameters calibrated to match weekly observations of the U.S. short-term interest rate.

149 citations
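
Two building blocks of such augmentation schemes are easy to sketch: the Euler-scheme log-density of the finely discretised path, and a Brownian-bridge style proposal for the m−1 latent points between two observations. The sketch below is generic (hypothetical names, scalar diffusion), not the authors' sequential algorithm:

```python
import numpy as np

def euler_path_loglik(path, dt, mu, sigma, theta):
    """Euler-scheme log-density of a finely discretised scalar diffusion
    dX = mu(X, theta) dt + sigma(X, theta) dW; `path` holds the observations
    plus the m-1 imputed points per interval, `dt` is the fine-grid spacing."""
    x0, x1 = path[:-1], path[1:]
    drift = mu(x0, theta)
    var = sigma(x0, theta) ** 2 * dt
    return np.sum(-0.5 * (np.log(2 * np.pi * var) + (x1 - x0 - drift * dt) ** 2 / var))

def propose_bridge(xa, xb, m, dt, sig, rng):
    """Sequentially sampled Brownian bridge for the m-1 latent points between
    consecutive observations xa and xb (a common proposal in this setting)."""
    path = np.empty(m - 1)
    x = xa
    for k in range(1, m):
        rem = m - k                          # fine steps still to go
        x = x + (xb - x) / (rem + 1) \
            + np.sqrt(sig ** 2 * dt * rem / (rem + 1)) * rng.standard_normal()
        path[k - 1] = x
    return path
```

A Metropolis-Hastings step would propose latent blocks with `propose_bridge` and score them with `euler_path_loglik`, correcting for the proposal density.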


Journal ArticleDOI
TL;DR: This work has developed an adaptive nonparametric method for constructing smooth estimates of G0 that is inspired by an existing characterization of its maximum-likelihood estimator and yields a flexible empirical Bayes treatment of Dirichlet process mixtures.
Abstract: The Dirichlet process prior allows flexible nonparametric mixture modeling. The number of mixture components is not specified in advance and can grow as new data arrive. However, analyses based on the Dirichlet process prior are sensitive to the choice of the parameters, including an infinite-dimensional distributional parameter G0. Most previous applications have either fixed G0 as a member of a parametric family or treated G0 in a Bayesian fashion, using parametric prior specifications. In contrast, we have developed an adaptive nonparametric method for constructing smooth estimates of G0. We combine this method with a technique for estimating α, the other Dirichlet process parameter, that is inspired by an existing characterization of its maximum-likelihood estimator. Together, these estimation procedures yield a flexible empirical Bayes treatment of Dirichlet process mixtures. Such a treatment is useful in situations where smooth point estimates of G0 are of intrinsic interest, or where the structure of G0 cannot be conveniently modeled with the usual parametric prior families. Analysis of simulated and real-world datasets illustrates the robustness of this approach.

134 citations


Journal ArticleDOI
TL;DR: Full Bayesian analysis of finite mixtures of multivariate normals with unknown number of components and split and merge moves that produce good mixing of the Markov chains are presented.
Abstract: We present full Bayesian analysis of finite mixtures of multivariate normals with unknown number of components. We adopt reversible jump Markov chain Monte Carlo and we construct, in a manner similar to that of Richardson and Green (1997), split and merge moves that produce good mixing of the Markov chains. The split moves are constructed on the space of eigenvectors and eigenvalues of the current covariance matrix so that the proposed covariance matrices are positive definite. Our proposed methodology has applications in classification and discrimination as well as heterogeneity modelling. We test our algorithm with real and simulated data.

127 citations
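
The positive-definiteness device is the part that translates most directly into code: if split proposals are made on the eigenvalues and eigenvectors rather than on the matrix entries, the proposed covariances are positive definite by construction. A hypothetical illustration (a real reversible-jump split move would also propose weights and means and carry a Jacobian term):

```python
import numpy as np

def split_covariance(cov, u):
    """Split one covariance into two, operating on the eigenvalues so that
    both proposals are positive definite by construction. `u` is a vector of
    values in (0, 1), one per eigenvalue (illustrative scheme only)."""
    vals, vecs = np.linalg.eigh(cov)          # vals > 0 for a valid covariance
    c1 = vecs @ np.diag(vals * u) @ vecs.T
    c2 = vecs @ np.diag(vals * (1.0 - u)) @ vecs.T
    return c1, c2

# e.g. c1, c2 = split_covariance(np.array([[2.0, 0.5], [0.5, 1.0]]),
#                                np.array([0.3, 0.7]))
```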


Journal ArticleDOI
TL;DR: An approximation method to evaluate the posterior distribution and Bayes estimators by Gibbs sampling, relying on the missing data structure of the mixture model, is presented.
Abstract: This paper deals with a Bayesian analysis of a finite Beta mixture model. We present an approximation method to evaluate the posterior distribution and Bayes estimators by Gibbs sampling, relying on the missing data structure of the mixture model. Experimental results concern contextual and non-contextual evaluations. The non-contextual evaluation is based on synthetic histograms, while the contextual one models the class-conditional densities of pattern-recognition data sets. The Beta mixture is also applied to estimate the parameters of SAR image histograms.

111 citations


Journal ArticleDOI
TL;DR: MTE potentials that approximate standard PDFs are presented, along with applications of these potentials that extend the types of inference problems that can be modelled with Bayesian networks, as demonstrated using three examples.
Abstract: Mixtures of truncated exponentials (MTE) potentials are an alternative to discretization and Monte Carlo methods for solving hybrid Bayesian networks. Any probability density function (PDF) can be approximated by an MTE potential, which can always be marginalized in closed form. This allows propagation to be done exactly using the Shenoy-Shafer architecture for computing marginals, with no restrictions on the construction of a join tree. This paper presents MTE potentials that approximate standard PDFs and applications of these potentials for solving inference problems in hybrid Bayesian networks. These approximations will extend the types of inference problems that can be modelled with Bayesian networks, as demonstrated using three examples.

78 citations


Journal ArticleDOI
TL;DR: A novel Markov chain Monte Carlo algorithm for estimation of posterior probabilities over discrete model spaces, applicable to families of models for which the marginal likelihood can be analytically calculated, either exactly or approximately, given any fixed structure is introduced.
Abstract: We introduce a novel Markov chain Monte Carlo algorithm for estimation of posterior probabilities over discrete model spaces. Our learning approach is applicable to families of models for which the marginal likelihood can be analytically calculated, either exactly or approximately, given any fixed structure. It is argued that for certain model neighborhood structures, the ordinary reversible Metropolis-Hastings algorithm does not yield an appropriate solution to the estimation problem. Therefore, we develop an alternative, non-reversible algorithm which can avoid the scaling effect of the neighborhood. To efficiently explore a model space, a finite number of interacting parallel stochastic processes is utilized. Our interaction scheme enables exploration of several local neighborhoods of a model space simultaneously, while it prevents the absorption of any particular process to a relatively inferior state. We illustrate the advantages of our method by an application to a classification model. In particular, we use an extensive bacterial database and compare our results with results obtained by different methods for the same data.

63 citations


Journal ArticleDOI
TL;DR: It is demonstrated that the circulant embedding method can be used to generate simulations from stationary processes whose spectral density functions are dictated by a number of popular nonparametric estimators, including all direct spectral estimators (a special case being the periodogram), certain lag window spectral estimators and all basic multitaper spectral estimators.
Abstract: The circulant embedding method for generating statistically exact simulations of time series from certain Gaussian distributed stationary processes is attractive because of its advantage in computational speed over a competitive method based upon the modified Cholesky decomposition. We demonstrate that the circulant embedding method can be used to generate simulations from stationary processes whose spectral density functions are dictated by a number of popular nonparametric estimators, including all direct spectral estimators (a special case being the periodogram), certain lag window spectral estimators, all forms of Welch's overlapped segment averaging spectral estimator and all basic multitaper spectral estimators. One application for this technique is to generate time series for bootstrapping various statistics. When used with bootstrapping, our proposed technique avoids some, but not all, of the pitfalls of previously proposed frequency domain methods for simulating time series.

54 citations
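
The method itself is a few lines of FFT work. A sketch, assuming the autocovariance sequence r has already been obtained (in the paper's setting, by inverse-transforming a nonparametric spectral estimate) and that the plain embedding is nonnegative definite:

```python
import numpy as np

def circulant_embedding(r, rng=None):
    """Exact simulation of a stationary Gaussian series with autocovariance
    sequence r[0..n-1]. Returns two independent length-n samples."""
    rng = np.random.default_rng() if rng is None else rng
    n = len(r)
    c = np.concatenate([r, r[-2:0:-1]])      # circulant first row, length 2(n-1)
    lam = np.fft.fft(c).real                 # eigenvalues of the embedding matrix
    if lam.min() < -1e-10 * lam.max():
        raise ValueError("embedding not nonnegative definite; enlarge it")
    lam = np.clip(lam, 0.0, None)
    m = len(c)
    z = rng.standard_normal(m) + 1j * rng.standard_normal(m)
    w = np.fft.fft(np.sqrt(lam / m) * z)
    return w.real[:n], w.imag[:n]            # two independent N(0, Toeplitz(r)) draws
```

Each call returns two independent series; in the bootstrap application these serve as the simulated resamples.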


Journal ArticleDOI
TL;DR: Bayesian methods for fitting the model to observations of disease spread through space and time in replicate populations are developed and they confirm the findings of earlier non-spatial analyses regarding the dynamics of disease transmission and yield new evidence of environmental heterogeneity in the replicate experiments.
Abstract: Statistical methods are formulated for fitting and testing percolation-based, spatio-temporal models that are generally applicable to biological or physical processes that evolve in spatially distributed populations. The approach is developed and illustrated in the context of the spread of Rhizoctonia solani, a fungal pathogen, in radish but is readily generalized to other scenarios. The particular model considered represents processes of primary and secondary infection between nearest-neighbour hosts in a lattice, and time-varying susceptibility of the hosts. Bayesian methods for fitting the model to observations of disease spread through space and time in replicate populations are developed. These use Markov chain Monte Carlo methods to overcome the problems associated with partial observation of the process. We also consider how model testing can be achieved by embedding classical methods within the Bayesian analysis. In particular we show how a residual process, with known sampling distribution, can be defined. Model fit is then examined by generating samples from the posterior distribution of the residual process, to which a classical test for consistency with the known distribution is applied, enabling the posterior distribution of the P-value of the test to be estimated. For the Rhizoctonia-radish system the methods confirm the findings of earlier non-spatial analyses regarding the dynamics of disease transmission and yield new evidence of environmental heterogeneity in the replicate experiments.

Journal ArticleDOI
TL;DR: Comparison with existing methods showed that the technique suggested in the paper does not oversmooth the function and is superior in terms of the mean squared error, and it was demonstrated that under additional assumptions on design points the method achieves asymptotic optimality in a wide range of Besov spaces.
Abstract: The paper considers regression problems with univariate design points. The design points are irregular and no assumptions on their distribution are imposed. The regression function is retrieved by a wavelet based reproducing kernel Hilbert space (RKHS) technique with the penalty equal to the sum of blockwise RKHS norms. In order to simplify numerical optimization, the problem is replaced by an equivalent quadratic minimization problem with an additional penalty term. The computational algorithm is described in detail and is implemented on both simulated and real data sets. Comparison with existing methods showed that the technique suggested in the paper does not oversmooth the function and is superior in terms of the mean squared error. It is also demonstrated that under additional assumptions on design points the method achieves asymptotic optimality in a wide range of Besov spaces.

Journal ArticleDOI
TL;DR: This paper shows how a simple modification of the proposal mechanism results in faster convergence of the chain and helps to circumvent the problems of Markov Chain Monte Carlo, and demonstrates that these new proposal distributions can greatly outperform the traditional local proposals when it comes to exploring complex heterogeneous spaces and multi-modal distributions.
Abstract: As the number of applications for Markov Chain Monte Carlo (MCMC) grows, the power of these methods as well as their shortcomings become more apparent. While MCMC yields an almost automatic way to sample a space according to some distribution, its implementations often fall short of this task as they may lead to chains which converge too slowly or get trapped within one mode of a multi-modal space. Moreover, it may be difficult to determine if a chain is only sampling a certain area of the space or if it has indeed reached stationarity. In this paper, we show how a simple modification of the proposal mechanism results in faster convergence of the chain and helps to circumvent the problems described above. This mechanism, which is based on an idea from the field of "small-world" networks, amounts to adding occasional "wild" proposals to any local proposal scheme. We demonstrate through both theory and extensive simulations, that these new proposal distributions can greatly outperform the traditional local proposals when it comes to exploring complex heterogeneous spaces and multi-modal distributions. Our method can easily be applied to most, if not all, problems involving MCMC and unlike many other remedies which improve the performance of MCMC it preserves the simplicity of the underlying algorithm.
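
Because a mixture of a local and a long-range symmetric kernel is itself symmetric, the modification costs one line in a standard Metropolis sampler. A one-dimensional sketch with made-up tuning constants:

```python
import numpy as np

def small_world_mh(log_target, x0, n_iter=10000,
                   local_sd=0.25, wild_sd=25.0, p_wild=0.05, rng=None):
    """Metropolis sampler whose proposal mixes a local move with an
    occasional 'wild' long-range move; both components are symmetric,
    so the acceptance ratio is just the target ratio."""
    rng = np.random.default_rng() if rng is None else rng
    x = float(x0)
    lp = log_target(x)
    out = np.empty(n_iter)
    for t in range(n_iter):
        sd = wild_sd if rng.random() < p_wild else local_sd
        y = x + sd * rng.standard_normal()
        lpy = log_target(y)
        if np.log(rng.random()) < lpy - lp:
            x, lp = y, lpy
        out[t] = x
    return out

# e.g. a well-separated bimodal target:
# log_target = lambda x: np.logaddexp(-0.5 * (x + 10) ** 2, -0.5 * (x - 10) ** 2)
```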

Journal ArticleDOI
TL;DR: Modified trimmed mean filters based on the repeated median are constructed, offering better shift preservation, and are compared w.r.t. fundamental analytical properties and in basic data situations.
Abstract: We discuss moving window techniques for fast extraction of a signal composed of monotonic trends and abrupt shifts from a noisy time series with irrelevant spikes. Running medians remove spikes and preserve shifts, but they deteriorate in trend periods. Modified trimmed mean filters use a robust scale estimate such as the median absolute deviation about the median (MAD) to select an adaptive amount of trimming. Application of robust regression, particularly of the repeated median, has been suggested for improving upon the median in trend periods. We combine these ideas and construct modified filters based on the repeated median offering better shift preservation. All these filters are compared w.r.t. fundamental analytical properties and in basic data situations. An algorithm for the update of the MAD running in time O(log n) for window width n is presented as well.
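
A naive version of the window filter is short; the sketch below computes the repeated-median level in each window (O(width^2) per window, whereas the paper's contribution includes an O(log n) update for the running MAD used in the trimming variants):

```python
import numpy as np

def repeated_median_filter(y, width):
    """Moving-window signal extraction via repeated-median regression:
    slope = med_i med_{j!=i} (y_i - y_j)/(t_i - t_j), level = med(y - slope*t)."""
    assert width % 2 == 1, "use an odd window width"
    n, h = len(y), width // 2
    t = np.arange(-h, h + 1)
    out = np.full(n, np.nan)
    for c in range(h, n - h):
        w = y[c - h:c + h + 1]
        slopes = [np.median([(w[i] - w[j]) / (t[i] - t[j])
                             for j in range(width) if j != i])
                  for i in range(width)]
        beta = np.median(slopes)
        out[c] = np.median(w - beta * t)     # level at the window centre
    return out
```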

Journal ArticleDOI
TL;DR: The focus of the present paper is on assessing the evidence for the presence of a discontinuity within a regression function through examination of the standardised differences of ‘left’ and ‘right’ estimators at a variety of covariate values.
Abstract: The existence of a discontinuity in a regression function can be inferred by comparing regression estimates based on the data lying on different sides of a point of interest. This idea has been used in earlier research by Hall and Titterington (1992), Muller (1992) and later authors. The use of nonparametric regression allows this to be done without assuming linear or other parametric forms for the continuous part of the underlying regression function. The focus of the present paper is on assessing the evidence for the presence of a discontinuity within a regression function through examination of the standardised differences of 'left' and 'right' estimators at a variety of covariate values. The calculations for the test are carried out through distributional results on quadratic forms. A graphical method in the form of a reference band to highlight the sources of the evidence for discontinuities is proposed. The methods are also developed for the two covariate case where there are additional issues associated with the presence of a jump location curve. Methods for estimating this curve are also developed. All the techniques, for the one and two covariate situations, are illustrated through applications.
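
The standardised-difference idea can be sketched with local means standing in for the paper's nonparametric smoothers (the paper's actual test relies on distribution theory for quadratic forms rather than this naive normalisation):

```python
import numpy as np

def standardized_jump(x, y, x0, h):
    """Standardised difference of 'left' and 'right' local-mean estimates
    of the regression function at x0, with bandwidth h."""
    left = (x >= x0 - h) & (x < x0)
    right = (x > x0) & (x <= x0 + h)
    nl, nr = left.sum(), right.sum()
    if nl < 2 or nr < 2:
        return np.nan
    diff = y[right].mean() - y[left].mean()
    # pooled residual variance from the two one-sided windows
    s2 = (y[left].var(ddof=1) * (nl - 1) + y[right].var(ddof=1) * (nr - 1)) / (nl + nr - 2)
    return diff / np.sqrt(s2 * (1 / nl + 1 / nr))
```

Scanning x0 across the covariate range and plotting the statistic yields the kind of picture that the paper's reference bands formalise.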

Journal ArticleDOI
TL;DR: This article exploits the flexibility of lifting by adaptively choosing the kind of prediction according to a criterion and exhibits the benefits of the adaptive lifting on the real inductance plethysmography and motorcycle data.
Abstract: Many wavelet shrinkage methods assume that the data are observed on an equally spaced grid of length 2^J for some J. These methods require serious modification or preprocessed data to cope with irregularly spaced data. The lifting scheme is a recent mathematical innovation that obtains a multiscale analysis for irregularly spaced data. A key lifting component is the "predict" step where a prediction of a data point is made. The residual from the prediction is stored and can be thought of as a wavelet coefficient. This article exploits the flexibility of lifting by adaptively choosing the kind of prediction according to a criterion. In this way the smoothness of the underlying 'wavelet' can be adapted to the local properties of the function. Multiple observations at a point can readily be handled by lifting through a suitable choice of prediction. We adapt existing shrinkage rules to work with our adaptive lifting methods. We use simulation to demonstrate the improved sparsity of our techniques and improved regression performance when compared to both wavelet and non-wavelet methods suitable for irregular data. We also exhibit the benefits of our adaptive lifting on the real inductance plethysmography and motorcycle data.

Journal ArticleDOI
TL;DR: This paper assesses the accuracy of two approximations that are frequently used in practice, namely an approximation for the probability of an epidemic occurring, and a Gaussian approximation to the final number infected in the event of an outbreak.
Abstract: This paper is concerned with methods for the numerical calculation of the final outcome distribution for a well-known stochastic epidemic model in a closed population. The model is of the SIR (Susceptible-Infected-Removed) type, and the infectious period can have any specified distribution. The final outcome distribution is specified by the solution of a triangular system of linear equations, but the form of the distribution leads to inherent numerical problems in the solution. Here we employ multiple precision arithmetic to surmount these problems. As applications of our methodology, we assess the accuracy of two approximations that are frequently used in practice, namely an approximation for the probability of an epidemic occurring, and a Gaussian approximation to the final number infected in the event of an outbreak. We also present an example of Bayesian inference for the epidemic threshold parameter.
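
For a concrete instance of the triangular system and the precision issue, the classical final-size equations (Ball's form, assuming n initial susceptibles, a initial infectives and per-pair infection rate lam/n) can be solved row by row in multiple precision with mpmath; the alternating sums are numerically fragile in double precision, which is exactly why high precision helps. An exponential infectious period is used for illustration:

```python
from mpmath import mp, mpf, binomial

mp.dps = 80                                  # work with 80 significant digits

def final_size_dist(n, a, lam, phi=lambda th: 1 / (1 + th)):
    """P(k of the n initial susceptibles are ultimately infected), for a
    initial infectives and per-pair infection rate lam/n; phi is the Laplace
    transform of the infectious period (exponential with mean 1 by default)."""
    P = []
    for l in range(n + 1):
        f = phi(mpf(lam) * (n - l) / n)
        total = binomial(n, l)
        for k in range(l):                   # triangular system, solved row by row
            total -= binomial(n - k, l - k) * P[k] / f ** (k + a)
        P.append(total * f ** (l + a))
    return P

# e.g. probs = final_size_dist(120, 1, 1.5); print(sum(probs))  # should be 1
```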

Journal ArticleDOI
TL;DR: A novel deterministic approximate inference technique for conditionally Gaussian state space models where the latent state consists of both multinomial and Gaussian distributed variables that improves upon previously proposed smoothing passes by not making more approximations than implied by the projection onto the chosen parametric form, the assumed density.
Abstract: We describe a novel deterministic approximate inference technique for conditionally Gaussian state space models, i.e. state space models where the latent state consists of both multinomial and Gaussian distributed variables. The method can be interpreted as a smoothing pass and iteration scheme symmetric to an assumed density filter. It improves upon previously proposed smoothing passes by not making more approximations than implied by the projection onto the chosen parametric form, the assumed density. Experimental results show that the novel scheme outperforms these alternative deterministic smoothing passes. Comparisons with sampling methods suggest that the performance does not degrade with longer sequences.

Journal ArticleDOI
TL;DR: This paper describes a method for sampling from a non-standard distribution which is important in both population genetics and directional statistics and uses a Gibbs sampler which seems necessary in practical situations of high dimensions.
Abstract: This paper describes a method for sampling from a non-standard distribution which is important in both population genetics and directional statistics. Current approaches rely on complicated procedures which do not work well, if at all, in high dimensions and for typical parameter set-ups. We use a Gibbs sampler, which seems necessary in practical situations of high dimensions.

Journal ArticleDOI
TL;DR: The behavior of a class of evolutionary algorithms, known as cellular EAs (cEAs), is analyzed and compared against a tailored neural network model and against a canonical genetic algorithm for optimization of the p-median problem.
Abstract: This paper develops a study on different modern optimization techniques to solve the p-median problem. We analyze the behavior of a class of evolutionary algorithms (EAs) known as cellular EAs (cEAs), and compare it against a tailored neural network model and against a canonical genetic algorithm for optimization of the p-median problem. We also compare against existing approaches including variable neighborhood search and parallel scatter search, and show their relative performances on a large set of problem instances. Our conclusions state the advantages of using a cEA: wide applicability, low implementation effort and high accuracy. In addition, the neural network model emerges as the more accurate tool, at the price of narrower applicability and a larger customization effort.

Journal ArticleDOI
TL;DR: It is shown how the EM algorithm for nonparametric maximum likelihood (NPML) can be extended to deal with dependence of repeated measures on baseline counts, and a computationally feasible approach is proposed to overcome this problem.
Abstract: Random effect models have often been used in longitudinal data analysis since they allow for association among repeated measurements due to unobserved heterogeneity. Various approaches have been proposed to extend mixed models for repeated count data to include dependence on baseline counts. Dependence between baseline counts and individual-specific random effects results in a complex form of the (conditional) likelihood. An approximate solution can be achieved by ignoring this dependence, but this approach could result in biased parameter estimates and in wrong inferences. We propose a computationally feasible approach to overcome this problem, leaving the random effect distribution unspecified. In this context, we show how the EM algorithm for nonparametric maximum likelihood (NPML) can be extended to deal with dependence of repeated measures on baseline counts.

Journal ArticleDOI
TL;DR: The proposed algorithms generalize previous directional updating schemes since they allow the distribution of the auxiliary variable to depend on properties of the target at the current state and identify proposal mechanisms that give unit acceptance rate.
Abstract: New Metropolis-Hastings algorithms using directional updates are introduced in this paper. Each iteration of a directional Metropolis-Hastings algorithm consists of three steps: (i) generate a line by sampling an auxiliary variable, (ii) propose a new state along the line, and (iii) accept/reject according to the Metropolis-Hastings acceptance probability. We consider two classes of directional updates. The first uses a point in R^n as the auxiliary variable, the second an auxiliary direction vector. The proposed algorithms generalize previous directional updating schemes since we allow the distribution of the auxiliary variable to depend on properties of the target at the current state. By letting the proposal distribution along the line depend on the density of the auxiliary variable, we identify proposal mechanisms that give unit acceptance rate. When we use a direction vector as the auxiliary variable, we get the advantageous effect of large moves in the Markov chain, and hence the autocorrelation length of the samples is small. We apply the directional Metropolis-Hastings algorithms to a Gaussian example, a mixture of Gaussian densities, and a Bayesian model for seismic data.
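
A plain random-direction Metropolis sampler shows the three-step structure; the paper's contribution is to let the auxiliary distribution depend on the target at the current state, which this sketch does not do:

```python
import numpy as np

def random_direction_mh(log_target, x0, n_iter=5000, step=1.0, rng=None):
    """(i) sample a direction uniformly on the sphere, (ii) propose a
    symmetric move along that line, (iii) Metropolis accept/reject."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, float)
    lp = log_target(x)
    d = len(x)
    out = np.empty((n_iter, d))
    for i in range(n_iter):
        u = rng.standard_normal(d)
        u /= np.linalg.norm(u)               # uniform direction on S^{d-1}
        t = step * rng.standard_normal()     # symmetric step along the line
        y = x + t * u
        lpy = log_target(y)
        if np.log(rng.random()) < lpy - lp:
            x, lp = y, lpy
        out[i] = x
    return out
```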

Journal ArticleDOI
TL;DR: The Minimum Description Length principle for online sequence estimation/prediction in a proper learning setup is studied and a new upper bound on the prediction error for countable Bernoulli classes is derived.
Abstract: The Minimum Description Length principle for online sequence estimation/prediction in a proper learning setup is studied. If the underlying model class is discrete, then the total expected square loss is a particularly interesting performance measure: (a) this quantity is finitely bounded, implying convergence with probability one, and (b) it additionally specifies the convergence speed. For MDL, in general one can only have loss bounds which are finite but exponentially larger than those for Bayes mixtures. We show that this is even the case if the model class contains only Bernoulli distributions. We derive a new upper bound on the prediction error for countable Bernoulli classes. This implies a small bound (comparable to the one for Bayes mixtures) for certain important model classes. We discuss the application to Machine Learning tasks such as classification and hypothesis testing, and generalization to countable classes of i.i.d. models.

Journal ArticleDOI
TL;DR: Three multiple-try blocking schemes for Bayesian analysis of nonlinear and non-Gaussian state space models are developed and are able to generate the desired posterior sample, whereas existing methods fail to do so.
Abstract: We develop in this paper three multiple-try blocking schemes for Bayesian analysis of nonlinear and non-Gaussian state space models. To reduce the correlations between successive iterates and to avoid getting trapped in a local maximum, we construct Markov chains by drawing state variables in blocks with multiple trial points. The first and second methods adopt autoregressive and independent kernels to produce the trial points, while the third method uses samples along suitable directions. Using the time series structure of the state space models, the three sampling schemes can be implemented efficiently. In our multimodal examples, the three multiple-try samplers are able to generate the desired posterior sample, whereas existing methods fail to do so.

Journal ArticleDOI
TL;DR: It is illustrated how a likelihood method for fitting models with independent random effects can be applied to seemingly very different models with correlated random effects.
Abstract: When there are two alternative random-effect models leading to the same marginal model, inferences from one model can be used for the other model. We illustrate how a likelihood method for fitting models with independent random effects can be applied to seemingly very different models with correlated random effects. We also discuss some merits of using these alternative models.

Journal ArticleDOI
TL;DR: A new heuristic for an automated, multistage implementation of simulated maximum likelihood which, by adaptively updating the importance sampler, approximates the (locally) optimal importance sampling distribution.
Abstract: Simulated maximum likelihood estimates an analytically intractable likelihood function with an empirical average based on data simulated from a suitable importance sampling distribution. In order to use simulated maximum likelihood in an efficient way, the choice of the importance sampling distribution as well as the mechanism to generate the simulated data are crucial. In this paper we develop a new heuristic for an automated, multistage implementation of simulated maximum likelihood which, by adaptively updating the importance sampler, approximates the (locally) optimal importance sampling distribution. The proposed approach also allows for a convenient incorporation of quasi-Monte Carlo methods. Quasi-Monte Carlo methods produce simulated data which can significantly increase the accuracy of the likelihood-estimate over regular Monte Carlo methods. Several examples provide evidence for the potential efficiency gain of this new method. We apply the method to a computationally challenging geostatistical model of online retailing.

Journal ArticleDOI
TL;DR: A bootstrap-based method to construct 1−α simultaneous confidence intervals for relative effects in the one-way layout that takes the stochastic correlation between the test statistics into account and results in narrower simultaneous confidence intervals than the application of the Bonferroni correction.
Abstract: A bootstrap-based method to construct 1−α simultaneous confidence intervals for relative effects in the one-way layout is presented. This procedure takes the stochastic correlation between the test statistics into account and results in narrower simultaneous confidence intervals than the application of the Bonferroni correction. Instead of using the bootstrap distribution of a maximum statistic, the coverage of the confidence intervals for the individual comparisons is adjusted iteratively until the overall confidence level is reached. Empirical coverage and power estimates of the introduced procedure for many-to-one comparisons are presented and compared with asymptotic procedures based on the multivariate normal distribution.
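
The iterative calibration can be sketched as a search over a common per-comparison level: widen or narrow all intervals together until the simultaneous coverage, estimated over the bootstrap replicates, hits 1−α. Percentile intervals are used below for simplicity; this illustrates the calibration idea, not the paper's exact procedure for relative effects:

```python
import numpy as np

def simultaneous_percentile_cis(boot, alpha=0.05, tol=1e-4):
    """boot: (B, m) bootstrap replicates of m effect estimates. A common
    per-comparison level beta is calibrated by bisection so that the
    estimated simultaneous coverage equals 1 - alpha."""
    B, m = boot.shape
    lo_beta, hi_beta = alpha / m, alpha      # between Bonferroni and unadjusted
    while hi_beta - lo_beta > tol:
        beta = 0.5 * (lo_beta + hi_beta)
        lower = np.quantile(boot, beta / 2, axis=0)
        upper = np.quantile(boot, 1 - beta / 2, axis=0)
        cover = np.mean(np.all((boot >= lower) & (boot <= upper), axis=1))
        if cover > 1 - alpha:
            lo_beta = beta                   # intervals too wide: allow narrower
        else:
            hi_beta = beta
    lower = np.quantile(boot, lo_beta / 2, axis=0)
    upper = np.quantile(boot, 1 - lo_beta / 2, axis=0)
    return lower, upper
```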

Journal ArticleDOI
TL;DR: The efficiency of the boundary-element method coupled with the flexibility of the Markov chain Monte Carlo technique gives a promising new approach to object identification in electrical tomography.
Abstract: In electrical tomography, multiple measurements of voltage are taken between electrodes on the boundary of a region with the aim of investigating the electrical conductivity distribution within the region. The relationship between conductivity and voltage is governed by an elliptic partial differential equation derived from Maxwell's equations. Recent statistical approaches, combining Bayesian methods with Markov chain Monte Carlo (MCMC) algorithms, allow greater flexibility than classical inverse solution approaches and require only the calculation of voltages from a conductivity distribution. However, solution of this forward problem still requires the use of the Finite Difference Method (FDM) or the Finite Element Method (FEM), and many thousands of forward solutions are needed, which strains practical feasibility. Many tomographic applications involve locating the perimeter of some homogeneous conductivity objects embedded in a homogeneous background. It is possible to exploit this type of structure using the Boundary Element Method (BEM) to provide a computationally efficient alternative forward solution technique. A geometric model is then used to define the region boundary, with priors on boundary smoothness and on the range of feasible conductivity values. This paper investigates the use of a BEM/MCMC approach for electrical resistance tomography (ERT) data. The efficiency of the boundary-element method coupled with the flexibility of the MCMC technique gives a promising new approach to object identification in electrical tomography. Simulated ERT data are used to illustrate the procedures.

Journal ArticleDOI
TL;DR: Viewing bootstrap iteration as a Markov process, a computational algorithm for bias correction based on arbitrarily many bootstrap iterations is developed, which is computationally more efficient and stable than conventional simulation-based bootstrap iterations.
Abstract: Practical computation of the minimum variance unbiased estimator (MVUE) is often a difficult, if not impossible, task, even though general theory assures its existence under regularity conditions. We propose a new approach based on iterative bootstrap bias correction of the maximum likelihood estimator to accurately approximate the MVUE. Viewing bootstrap iteration as a Markov process, we develop a computational algorithm for bias correction based on arbitrarily many bootstrap iterations. The algorithm, when applied parametrically to finite sample spaces, does not involve Monte Carlo simulation. For infinite sample spaces, a nonparametric version of the algorithm is combined with a preliminary round of Monte Carlo simulation to yield an approximate MVUE. Both algorithms are computationally more efficient and stable than conventional simulation-based bootstrap iterations. Examples are given of both finite and infinite sample spaces to illustrate the effectiveness of our new approach.
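
The simulation-based iteration that the paper's Markov-process algorithm improves upon can be sketched as follows: repeatedly shift the parameter value until the expected value of the estimator under it matches the observed estimate. A hedged sketch with hypothetical names:

```python
import numpy as np

def iterated_bias_correct(sample, estimator, simulate, rounds=3, B=2000, rng=None):
    """Iterated parametric-bootstrap bias correction: move t so that the
    bootstrap expectation of the estimator at t matches the observed
    estimate (a plain simulation-based analogue of the paper's algorithm)."""
    rng = np.random.default_rng() if rng is None else rng
    n = len(sample)
    est0 = estimator(sample)
    t = est0
    for _ in range(rounds):
        boot_mean = np.mean([estimator(simulate(t, n, rng)) for _ in range(B)])
        t = t + (est0 - boot_mean)           # one bias-correction step
    return t

# e.g. correcting the n-divisor MLE of a normal variance:
# estimator = lambda s: np.mean((s - s.mean()) ** 2)
# simulate  = lambda v, n, rng: rng.normal(0.0, np.sqrt(max(v, 1e-12)), n)
```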