A Bayesian Framework for Image Segmentation With Spatially Varying Mixtures
Citations
Discrete multivariate distributions
Multivariate Mixture Model for Myocardial Segmentation Combining Multi-Source Images
Survey of contemporary trends in color image segmentation
Robust Student's-t Mixture Model With Spatial Constraints and Its Application in Medical Image Segmentation
Estimating the Granularity Coefficient of a Potts-Markov Random Field Within a Markov Chain Monte Carlo Algorithm
References
Maximum likelihood from incomplete data via the EM algorithm
Latent Dirichlet Allocation
Pattern Recognition and Machine Learning
Related Papers (5)
Segmentation of brain MR images through a hidden Markov random field model and the expectation-maximization algorithm
Frequently Asked Questions (13)
Q2. How many iterations does the EM algorithm need to converge?
For the 3-class image of 256 × 256 pixels, using only the intensity as a feature, the algorithm performs one EM iteration per second, and convergence may be achieved in 10–50 iterations, depending upon the amount of noise.
Q3. What is the purpose of the comparative experiments?
As one of their purposes is to investigate the behavior of the compared methods under noise without any bias, the authors decided to perform the comparative experiments using the MRF texture features for all the methods.
Q4. What are the open questions in a segmentation algorithm?
Important open questions in a segmentation algorithm concern the estimation of the number of image segments as well as the automatic determination of salient features in the case of multidimensional feature vectors [64], [65].
Q5. What is the reason why the Ncut and GBMS methods are not robust?
The explanation is that when the added noise is not smoothed out by PCA, as is the case with the MRF texture features, the Ncut and GBMS methods are not robust and produce erroneous segmentations.
Q6. What is the termination criterion of the EM algorithm?
In the termination criterion of the EM algorithm considered here, convergence was defined as the percentage of change in the log-likelihood (4) between two consecutive iterations being less than 0.001%.
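The relative log-likelihood criterion above can be sketched as a small helper; the function name and the guard against a zero denominator are illustrative additions, not the paper's code:

```python
def has_converged(loglik_prev, loglik_curr, tol=1e-5):
    """Check the relative change in log-likelihood between two EM iterations.

    tol = 1e-5 corresponds to the 0.001% threshold described above.
    Illustrative sketch, not the authors' implementation.
    """
    # Guard against division by zero for a degenerate flat likelihood.
    denom = abs(loglik_prev) if loglik_prev != 0 else 1.0
    return abs(loglik_curr - loglik_prev) / denom < tol
```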
Q7. What is the advantage of the DCM-SVFMM method?
A notable advantage of the DCM-SVFMM method is that no parameter needs to be fixed before training, which is not the case for either graph-based methods or the mean-shift algorithm, where the result strongly depends upon the selected parameters.
Q8. How do the authors address the sensitivity of the EM algorithm to initialization?
Since the EM algorithm is sensitive to initialization, in their experiments the authors executed a number of runs of the EM algorithm from a set of randomly generated initial conditions and kept the one giving the maximum value of the log-likelihood.
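This restart strategy can be sketched generically; `run_em` is a hypothetical callable standing in for one full EM run, and the function name is an assumption for illustration:

```python
import random

def em_with_restarts(run_em, n_restarts=10, seed=0):
    """Run EM from several random initializations and keep the best run.

    `run_em` is a hypothetical callable: given a random draw used to
    initialize the model, it runs EM to convergence and returns a pair
    (params, log_likelihood). Illustrative sketch only.
    """
    rng = random.Random(seed)
    best_params, best_ll = None, float("-inf")
    for _ in range(n_restarts):
        params, ll = run_em(rng.random())
        # Keep the run with the maximum log-likelihood, as described above.
        if ll > best_ll:
            best_params, best_ll = params, ll
    return best_params, best_ll
```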
Q9. How can the function in (7) be maximized independently for each parameter?
The function in (7) can be maximized independently for each parameter, providing the update equations (8) of the mixture model parameters at each step. The probabilities are then computed by setting the corresponding derivative to zero, which yields a second-degree equation (9) with respect to the contextual mixing proportion, whose coefficients involve the number of pixels in the neighborhood of the ith pixel.
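Because the updated value is a probability, only a root of the second-degree equation lying in [0, 1] is admissible. The coefficients of the paper's equation (9) depend on its Gauss–Markov prior and are not reproduced here; the following root-selection helper is a generic sketch under that assumption:

```python
import math

def admissible_root(a, b, c):
    """Solve a*x**2 + b*x + c = 0 and return a root usable as a
    probability, i.e. one lying in [0, 1].

    Illustrative helper only: the actual coefficients of the paper's
    equation (9) are not reproduced here.
    """
    disc = b * b - 4.0 * a * c
    if disc < 0:
        raise ValueError("no real root")
    sq = math.sqrt(disc)
    # Check both roots; keep the one that is a valid probability.
    for x in ((-b + sq) / (2 * a), (-b - sq) / (2 * a)):
        if 0.0 <= x <= 1.0:
            return x
    raise ValueError("no root in [0, 1]")
```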
Q10. What is the label probability for the ith image pixel?
Under the Bayesian framework, the probability of the label for the ith image pixel is obtained by marginalizing out the parameters, as in (13). Substituting (10) and (12) into (13), the authors obtain, with some easy manipulation, the expression (14) for the label probabilities.
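The normalization underlying such label probabilities can be sketched for a generic spatially varying mixture: for each pixel, the class-conditional density is weighted by that pixel's mixing proportion and normalized over clusters. The function name and array layout are assumptions; the paper's (14) additionally involves the Dirichlet parameters, which are omitted here:

```python
import numpy as np

def label_probabilities(densities, proportions):
    """Per-pixel label probabilities for a spatially varying mixture:
    p(j | x_i) is proportional to pi_ij * p(x_i | j), normalized over j.

    densities:   (N, K) class-conditional densities p(x_i | j)
    proportions: (N, K) contextual mixing proportions pi_ij
    Generic mixture-model marginalization; illustrative sketch only.
    """
    w = densities * proportions
    # Normalize each pixel's row so the label probabilities sum to one.
    return w / w.sum(axis=1, keepdims=True)
```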
Q11. What is the posterior probability density function of a pixel?
Denoting the set of pixel feature vectors, which the authors assume to be statistically independent, and applying Bayes' rule, they obtain the posterior probability density function given by (3), with the log-density (4). A typical example of a prior is the Gauss–Markov random field prior [43], expressed by (5), where a per-cluster parameter captures the spatial smoothness of that cluster and enforces a different degree of smoothness in each cluster, in order to better adapt the model to the data.
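A log-density of the form in (4) decomposes into a data log-likelihood plus a log-prior term. The following sketch computes such a MAP objective for a 1-D spatially varying Gaussian mixture; all names and shapes are illustrative assumptions, and the log-prior is passed in as a scalar rather than computed from the Gauss–Markov model:

```python
import numpy as np

def map_objective(x, pi, mu, sigma, log_prior):
    """MAP objective in the spirit of (4): data log-likelihood of a
    1-D spatially varying Gaussian mixture plus a log-prior term.

    x:     (N,) pixel features        pi:    (N, K) mixing proportions
    mu:    (K,) cluster means         sigma: (K,) cluster std deviations
    log_prior: scalar log p(pi), e.g. from a Gauss-Markov smoothness prior.
    Illustrative sketch, not the paper's notation or implementation.
    """
    # Gaussian densities, one column per cluster (broadcast over pixels).
    dens = np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) / (
        np.sqrt(2 * np.pi) * sigma)
    # Sum the per-pixel mixture densities, then take logs and add them up.
    loglik = np.sum(np.log(np.sum(pi * dens, axis=1)))
    return loglik + log_prior
```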
Q12. What is the feature vector of a vector-valued image?
Let each pixel be represented by a vector of features (e.g., intensity, textural features) at the ith spatial location of a multidimensional vector-valued image, with the feature vectors modeled as independently distributed random variables.
Q13. What is the simplest way to estimate the mixing proportions of the pixel?
The contextual mixing proportions for each pixel are constrained to follow a Dirichlet compound multinomial distribution, thus avoiding the projection step of the standard EM algorithm [42].
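For reference, the Dirichlet compound multinomial (also known as the multivariate Pólya distribution) named above has a closed-form pmf built from gamma functions. The following is a generic log-pmf sketch of that standard distribution, not code from the paper:

```python
from math import lgamma, exp

def dcm_log_pmf(counts, alpha):
    """Log-pmf of the Dirichlet compound multinomial (DCM):

    p(x | alpha) = n!/prod(x_k!) * G(A)/G(n+A) * prod_k G(x_k+a_k)/G(a_k)

    with n = sum(counts), A = sum(alpha), and G the gamma function.
    Generic illustration of the distribution; not the paper's code.
    """
    n = sum(counts)
    A = sum(alpha)
    # Multinomial coefficient in log space.
    out = lgamma(n + 1) - sum(lgamma(x + 1) for x in counts)
    # Dirichlet normalization terms.
    out += lgamma(A) - lgamma(n + A)
    out += sum(lgamma(x + a) - lgamma(a) for x, a in zip(counts, alpha))
    return out
```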