
Showing papers on "Gibbs algorithm published in 2011"


Journal ArticleDOI
TL;DR: In this article, a Monte Carlo simulation study was conducted exploring the estimation of a two-parameter noncompensatory item response theory (IRT) model; the estimation method was a Metropolis-Hastings within Gibbs algorithm that accepted or rejected new parameters in a bivariate fashion.
Abstract: Relatively little research has been conducted with the noncompensatory class of multidimensional item response theory (MIRT) models. A Monte Carlo simulation study was conducted exploring the estimation of a two-parameter noncompensatory item response theory (IRT) model. The estimation method used was a Metropolis-Hastings within Gibbs algorithm that accepted or rejected new parameters in a bivariate fashion. Results showed that acceptable estimation of the noncompensatory model required a sample size of 4,000 people, six unidimensional items per dimension, and latent traits that are not highly correlated. Although the data requirements to estimate this model are a bit daunting, future advances in methodology could make this model valuable for modeling multidimensional data where the latent traits are not expected to be highly correlated.

23 citations
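The "bivariate fashion" described in the abstract, proposing and accepting a pair of parameters as a block rather than one coordinate at a time, can be sketched roughly as follows. This is a minimal illustration, not the paper's estimator: the correlated Gaussian `log_target` and all names here are hypothetical stand-ins for the actual noncompensatory IRT posterior.

```python
import math
import random

def log_target(theta):
    # Hypothetical unnormalized log-posterior over a pair of item
    # parameters (e.g. discrimination a, difficulty b); a correlated
    # bivariate Gaussian stands in for the real IRT posterior.
    a, b = theta
    return -0.5 * (a * a + b * b - a * b)

def mh_within_gibbs_step(theta, scale=0.5, rng=random):
    # Propose a new (a, b) pair jointly and accept or reject it as a
    # block ("bivariate fashion"), instead of coordinate-wise updates.
    prop = (theta[0] + rng.gauss(0, scale), theta[1] + rng.gauss(0, scale))
    log_ratio = log_target(prop) - log_target(theta)
    if math.log(rng.random()) < log_ratio:
        return prop, True
    return theta, False

random.seed(0)
theta, accepts, draws = (0.0, 0.0), 0, []
for _ in range(5000):
    theta, accepted = mh_within_gibbs_step(theta)
    accepts += accepted
    draws.append(theta)
mean_a = sum(d[0] for d in draws) / len(draws)
print(round(mean_a, 2), accepts)
```

In a full sampler this block update would be one step inside an outer Gibbs loop that also draws the latent traits, which is what "Metropolis-Hastings within Gibbs" refers to.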


Journal ArticleDOI
TL;DR: The two-parameter Marshall-Olkin Extended Weibull (MOEW) model is considered to analyze the software reliability data and the Markov Chain Monte Carlo (MCMC) method is used to compute the Bayes estimates of the model parameters.
Abstract: In this paper, the two-parameter Marshall-Olkin Extended Weibull (MOEW) model is considered to analyze software reliability data. The Markov Chain Monte Carlo (MCMC) method is used to compute the Bayes estimates of the model parameters, which are assumed to have independent non-informative priors. Under these priors, we use the Gibbs algorithm in OpenBUGS to generate MCMC samples from the posterior density function. Based on the generated samples, we compute the Bayes estimates of the unknown parameters and construct highest posterior density credible intervals. We also compute the maximum likelihood estimates and associated confidence intervals to compare the performance of the Bayes estimators with that of the classical estimators. One data analysis is performed for illustrative purposes.

14 citations
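Once MCMC samples are in hand, the post-processing the abstract describes is straightforward. The sketch below, under the assumption of toy gamma-distributed draws standing in for real OpenBUGS output, computes a Bayes estimate (posterior mean, i.e. the estimate under squared-error loss) and a 95% equal-tail credible interval; the paper's HPD interval would instead pick the shortest interval with the same coverage.

```python
import random

# Toy posterior draws for one hypothetical MOEW parameter; a gamma
# distribution stands in for the real MCMC output.
random.seed(1)
samples = sorted(random.gammavariate(2.0, 1.5) for _ in range(4000))

# Bayes estimate under squared-error loss is the posterior mean.
bayes_est = sum(samples) / len(samples)

# 95% equal-tail credible interval from the sorted draws.
lo = samples[int(0.025 * len(samples))]
hi = samples[int(0.975 * len(samples))]
print(round(bayes_est, 2), lo < bayes_est < hi)
```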


Proceedings ArticleDOI
06 Jun 2011
TL;DR: A new spatial unmixing algorithm for hyperspectral images is studied, based on the well-known linear mixing model, which is employed to generate samples that are asymptotically distributed according to this posterior distribution.
Abstract: A new spatial unmixing algorithm for hyperspectral images is studied. This algorithm is based on the well-known linear mixing model. The spectral signatures (or endmembers) are assumed to be known, while the mixture coefficients (or abundances) are estimated by a Bayesian algorithm. As a pre-processing step, an area filter is employed to partition the image into multiple spectrally consistent connected components, or adaptive neighborhoods. Spatial correlations are then introduced by assigning the same hidden labels to the pixels of a given neighborhood. More precisely, these pixels are modeled using a new prior distribution that accounts for spectral similarity between neighbors. Abundances are reparametrized using logistic coefficients to handle the associated physical constraints. Other parameters and hyperparameters are assigned appropriate prior distributions. After computing the joint posterior distribution, a hybrid Gibbs algorithm is employed to generate samples that are asymptotically distributed according to this posterior distribution. The generated samples are finally used to estimate the unknown model parameters. Simulations on synthetic data illustrate the performance of the proposed method.

1 citation
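The logistic reparametrization mentioned in the abstract exists to enforce the physical constraints on abundances (positivity and sum-to-one). One common way to realize this, sketched below under the assumption of a softmax-style map (the paper's exact parametrization may differ, e.g. using K-1 free coefficients), lets the sampler move freely in unconstrained space while the constraints hold automatically:

```python
import math

def abundances_from_logistic(coeffs):
    # Map unconstrained logistic coefficients to abundances that are
    # strictly positive and sum to one, so a Gibbs/MH sampler can
    # propose freely in R^K without violating the physical constraints.
    exps = [math.exp(c) for c in coeffs]
    total = sum(exps)
    return [e / total for e in exps]

a = abundances_from_logistic([0.0, 1.0, -1.0])
print([round(x, 3) for x in a], abs(sum(a) - 1.0) < 1e-12)
```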


Proceedings ArticleDOI
14 Sep 2011
TL;DR: In this article, a posterior distribution is proposed for which the latent partition is restricted to a special numbering leading to the largest separation from its permutations, with two measures of separation: a global one that is intractable even for very small sample sizes (Kullback divergence) and a local one that is very easy to compute (difference of distributions at the MAP).
Abstract: We propose a posterior distribution for which the latent partition is restricted to a special numbering leading to the largest separation from its permutations. Two different measures of separation are proposed, the first one being global but intractable even for very small sample sizes (Kullback divergence), the second one being local and thus very easy to compute (difference of distributions at the MAP). A Gibbs algorithm allows easy sampling from this new distribution. The procedure is general enough to apply directly to any distribution, and experiments in Gaussian and multinomial settings show particularly encouraging results.
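The core difficulty this paper addresses is that a latent partition has K! equivalent numberings (label switching). A minimal sketch of the idea of fixing one representative numbering, using order-of-first-appearance as the canonical choice rather than the paper's separation-maximizing criterion, looks like this (all names are hypothetical):

```python
def canonical_labels(z):
    # Renumber a latent partition so clusters are labeled in order of
    # first appearance. Every permutation of the labels maps to the
    # same representative, collapsing the K! equivalent numberings to
    # one. The paper instead selects the numbering that maximizes
    # separation from its permutations, but the role is the same.
    mapping, out = {}, []
    for label in z:
        if label not in mapping:
            mapping[label] = len(mapping)
        out.append(mapping[label])
    return out

# [2, 2, 0, 1, 0] and [0, 0, 1, 2, 1] describe the same partition.
print(canonical_labels([2, 2, 0, 1, 0]))  # → [0, 0, 1, 2, 1]
```

A Gibbs sampler restricted to such a representative never wanders between permuted modes, which is what makes posterior summaries of the partition well defined.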