
Showing papers on "Markov random field published in 1990"


Journal ArticleDOI
TL;DR: A new inference method, Highest Confidence First (HCF) estimation, is used to infer a unique labeling from the a posteriori distribution that is consistent with both prior knowledge and evidence.
Abstract: Integrating disparate sources of information has been recognized as one of the keys to the success of general purpose vision systems. Image clues such as shading, texture, stereo disparities and image flows provide uncertain, local and incomplete information about the three-dimensional scene. Spatial a priori knowledge plays the role of filling in missing information and smoothing out noise. This thesis proposes a solution to the longstanding open problem of visual integration. It reports a framework, based on Bayesian probability theory, for computing an intermediate representation of the scene from disparate sources of information. The computation is formulated as a labeling problem. Local visual observations for each image entity are reported as label likelihoods. They are combined consistently and coherently on hierarchically structured label trees with a new, computationally simple procedure. The pooled label likelihoods are fused with the a priori spatial knowledge encoded as Markov Random Fields (MRF's). The a posteriori distribution of the labelings is thus derived in a Bayesian formalism. A new inference method, Highest Confidence First (HCF) estimation, is used to infer a unique labeling from the a posteriori distribution. Unlike previous inference methods based on the MRF formalism, HCF is computationally efficient and predictable while meeting the principles of graceful degradation and least commitment. The results of the inference process are consistent with both observable evidence and a priori knowledge. The effectiveness of the approach is demonstrated with experiments on two image analysis problems: intensity edge detection and surface reconstruction. For edge detection, likelihood outputs from a set of local edge operators are integrated with a priori knowledge represented as an MRF probability distribution. For surface reconstruction, intensity information is integrated with sparse depth measurements and a priori knowledge.
Coupled MRF's provide a unified treatment of surface reconstruction and segmentation, and an extension of HCF implements a solution method. Experiments using real image and depth data yield robust results. The framework can also be generalized to higher-level vision problems, as well as to other domains.

285 citations


Proceedings ArticleDOI
16 Jun 1990
TL;DR: The authors empirically compare three algorithms for segmenting simple, noisy images and conclude that contextual information from MRF models improves segmentation when the number of categories and the degradation model are known and that parameters can be effectively estimated.
Abstract: The authors empirically compare three algorithms for segmenting simple, noisy images: simulated annealing (SA), iterated conditional modes (ICM), and maximizer of the posterior marginals (MPM). All use Markov random field (MRF) models to include prior contextual information. The comparison is based on artificial binary images degraded by Gaussian noise. Robustness is tested with correlated noise and with textured object and background. The ICM algorithm is evaluated when the degradation and model parameters must be estimated, in both supervised and unsupervised modes, and on two real images. The results are assessed by visual inspection and through a numerical criterion. It is concluded that contextual information from MRF models improves segmentation when the number of categories and the degradation model are known and that parameters can be effectively estimated. None of the three algorithms is consistently best, but the ICM algorithm is the most robust. The energy of the a posteriori distribution is not always minimized at the best segmentation.
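As a concrete illustration of the simplest of the three schemes, here is a minimal ICM sketch for a binary image under Gaussian class-conditional noise with an Ising-style 4-neighbour prior. The class means, noise level, and smoothing weight below are illustrative assumptions, not the settings used in the paper:

```python
import numpy as np

def icm_binary(y, mu=(0.0, 1.0), sigma=0.5, beta=1.5, n_iter=5):
    """Iterated conditional modes for a two-label MRF.

    y: noisy image; labels 0/1 with assumed class means mu; beta weighs
    the prior that penalizes disagreement with 4-neighbours.
    """
    x = (y > np.mean(mu)).astype(int)  # initialise by thresholding
    for _ in range(n_iter):
        for i in range(y.shape[0]):
            for j in range(y.shape[1]):
                costs = []
                for label in (0, 1):
                    # Gaussian data term for this label
                    data = (y[i, j] - mu[label]) ** 2 / (2 * sigma ** 2)
                    # prior term: count disagreeing 4-neighbours
                    prior = 0.0
                    for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                        ni, nj = i + di, j + dj
                        if 0 <= ni < y.shape[0] and 0 <= nj < y.shape[1]:
                            prior += beta * (x[ni, nj] != label)
                    costs.append(data + prior)
                x[i, j] = int(np.argmin(costs))  # greedy local update
    return x
```

ICM greedily decreases the posterior energy one site at a time, which is why it is fast and deterministic but, unlike SA, can stop in a local minimum.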

162 citations


Journal ArticleDOI
16 Jun 1990
TL;DR: The Markov random field formalism is considered, a special case of the Bayesian approach, in which the probability distributions are specified by an energy function, and simple modifications to the energy can give a direct relation to robust statistics or can encourage hysteresis and nonmaximum suppression.
Abstract: An attempt is made to unify several approaches to image segmentation in early vision under a common framework. The energy function, or Markov random field, formalism is very attractive since it enables the assumptions used to be stated explicitly in the energy functions, and it can be extended to deal with many other problems in vision. It is shown that the specified discrete formulations for the energy function are closely related to the continuous formulation. When the mean field theory approach is used, several previous attempts to solve these energy functions are effectively equivalent. By varying the parameters of the energy functions, one obtains a class of solutions, including several nonlinear diffusion approaches to image segmentation; the framework can be applied equally well to image or surface reconstruction (where the data are sparse).
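A representative energy of the kind discussed here is the weak-membrane functional (a standard form in this literature, not necessarily the paper's exact notation), with data d, reconstructed field f, and a binary line process l on neighbouring pairs:

```latex
E(f, l) = \sum_i (f_i - d_i)^2
        + \lambda \sum_{\langle i,j \rangle} (f_i - f_j)^2 \,(1 - l_{ij})
        + \alpha \sum_{\langle i,j \rangle} l_{ij}
```

The first term measures fidelity to the data, the second enforces smoothness except across active line elements, and the third penalizes introducing discontinuities; varying λ and α trades smoothing against edge preservation.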

99 citations


Book ChapterDOI
01 Apr 1990
TL;DR: A theoretical formulation for stereo in terms of the Markov Random Field and Bayesian approach to vision is described, which makes it possible to integrate the depth information from different types of matching primitives, or from different vision modules.
Abstract: We describe a theoretical formulation for stereo in terms of the Markov Random Field and Bayesian approach to vision. This formulation enables us to integrate the depth information from different types of matching primitives, or from different vision modules. We treat the correspondence problem and surface interpolation as different aspects of the same problem and solve them simultaneously, unlike most previous theories. We use techniques from statistical physics to compute properties of our theory and show how it relates to previous work. These techniques also suggest novel algorithms for stereo which are argued to be preferable to standard algorithms on theoretical and experimental grounds. It can be shown (Yuille, Geiger and Bulthoff 1989) that the theory is consistent with some psychophysical experiments which investigate the relative importance of different matching primitives.

54 citations


Proceedings ArticleDOI
16 Jun 1990
TL;DR: A multimodal approach to the problem of velocity estimation that combines the advantages of the feature-based and gradient-based methods by making them cooperate in a single global motion estimator is presented.
Abstract: A multimodal approach to the problem of velocity estimation is presented. It combines the advantages of the feature-based and gradient-based methods by making them cooperate in a single global motion estimator. The theoretical framework is based on global Bayesian decision associated with Markov random field models. The proposed approach addresses, in parallel, the problems of velocity estimation and segmentation. Results on synthetic as well as on real-world image sequences are presented. Accurate motion measurement and detection of motion discontinuities of surprisingly good quality have been obtained.

49 citations


Proceedings ArticleDOI
04 Dec 1990
TL;DR: The authors propose a segmentation algorithm which handles both jump and crease edges and has been integrated with a region-based segmentation scheme resulting in a robust surface segmentation method.
Abstract: Consideration is given to the application of Markov random field (MRF) models to the problem of edge labeling in range images. The authors propose a segmentation algorithm which handles both jump and crease edges. The jump and crease edge likelihoods at each edge site are computed using special local operators. These likelihoods are then combined in a Bayesian framework with an MRF prior distribution on the edge labels to derive the a posteriori distribution of labels. An approximation to the maximum a posteriori estimate is used to obtain the edge labelings. The edge-based segmentation has been integrated with a region-based segmentation scheme, resulting in a robust surface segmentation method.

41 citations


Journal ArticleDOI
01 May 1990
TL;DR: A probabilistic relaxation algorithm is described for labeling the vertices of a Markov random field defined on a finite graph and it is shown that the estimates of the a posteriori probabilities generated by the algorithm differ from the true values by terms that are at least second order in c.
Abstract: A probabilistic relaxation algorithm is described for labeling the vertices of a Markov random field (MRF) defined on a finite graph. The algorithm has two features which make it attractive. First, the multilinear structure of the relaxation operator allows simple necessary and sufficient convergence conditions to be derived. The second advantage is local optimality. Given a class of MRFs indexed by a parameter c, such that when c=0 the vertices are independent, it is shown that the estimates of the a posteriori probabilities generated by the algorithm differ from the true values by terms that are at least second order in c.

39 citations


Proceedings ArticleDOI
01 Nov 1990
TL;DR: Two novel approaches to texture classification based upon stochastic modeling using Markov Random Fields are presented and contrasted, and a new statistic and complexity measure are introduced, called the K-nearest neighbor statistic (KNS) and complexity (KNC), which measure the overlap in K-nearest neighbor conditional distributions.
Abstract: Two novel approaches to texture classification based upon stochastic modeling using Markov Random Fields are presented and contrasted. The first approach uses a clique-based probabilistic neighborhood structure and Gibbs distribution to derive the quasi-likelihood estimates of the model coefficients. Likelihood ratio tests formed by the quasi-likelihood functions of pairs of textures are evaluated in the decision strategy to classify texture samples. The second approach uses a least squares prediction error model and error signature analysis to model and classify textures. The distribution of the errors is the information used in the decision algorithm, which employs K-nearest neighbor techniques. A new statistic and complexity measure are introduced, called the K-nearest neighbor statistic (KNS) and complexity (KNC), which measure the overlap in K-nearest neighbor conditional distributions. Parameter vectors for each model, neighborhood size and structure, and the performance of the maximum likelihood and K-nearest neighbor decision strategies are presented, and interesting results are discussed. Results from classifying real video pictures of six cloth textures are presented and analyzed.

28 citations


Journal ArticleDOI
TL;DR: The estimation of 2D motion from spatio-temporally sampled image sequences is discussed, concentrating on the optimization aspect of the problem formulated through a Bayesian framework based on Markov random field models.

25 citations


Journal ArticleDOI
TL;DR: In this article, a primal-dual constrained optimization procedure is proposed to reconstruct images from finite sets of noisy projections that may be available only over limited or sparse angles using a Markov random field (MRF) that includes information about the mass, center of mass and convex hull of the object.

24 citations


Proceedings ArticleDOI
03 Apr 1990
TL;DR: It is shown that optic flow estimation and segmentation can be expressed, within a Bayesian decision framework, as a global estimation problem; results are presented on a real-world sequence involving complex 3D motions and occlusions.
Abstract: An approach to the problem of optic flow estimation and segmentation from image sequences is presented. It is shown that optic flow estimation and segmentation can be expressed, within a Bayesian decision framework, as a global estimation problem. The unknown process to be estimated corresponds to the 2D relative velocity field and to the motion boundaries. Several observations are used in the scheme, involving the spatiotemporal gradients of the image sequence and the output of an intensity edge detector. The unknown velocity field and motion discontinuities are modeled using a joint Markov random field, allowing the smoothing of the velocity field and the preservation of motion boundaries. Critical areas, such as occluding regions, are detected using a likelihood test and, in this case, a modified interaction model is applied. Results are presented on a real-world digital TV sequence involving complex 3D motions and occlusions.

Proceedings ArticleDOI
16 Jun 1990
TL;DR: A Bayesian approach is proposed for stereo matching to derive the maximum a posteriori estimation of depth by using the invariant property of image intensity and modeling the disparity as a Markov random field (MRF).
Abstract: A Bayesian approach is proposed for stereo matching to derive the maximum a posteriori estimation of depth. How a pyramid data structure can be combined with simulated annealing to speed up convergence in stereo matching is described. Using the invariant property of image intensity and modeling the disparity as a Markov random field (MRF), the pyramid structure is followed from high (coarse) level to low (fine) level to derive the maximum a posteriori estimates. Simulation results on both random dot diagrams and synthesized images show the promise of this multiresolution stereo approach.

Proceedings ArticleDOI
05 Nov 1990
TL;DR: An approximation to the Maximum A Posteriori (MAP) and the Maximum Posterior Marginal (MPM) estimates of the region labels is computed and implemented using a parallel optimization network.
Abstract: Several algorithms for segmenting single look Synthetic Aperture Radar (SAR) complex data into regions of similar and homogeneous statistical characteristics are presented. The image model is composed of two models, one for the speckled complex amplitudes, and the other for the region labels. Speckle is modeled from the physics of the SAR imaging and processing system, and region labels are represented as a Markov random field. Based on this composite image model, an approximation to the Maximum A Posteriori (MAP) and the Maximum Posterior Marginal (MPM) estimates of the region labels is computed and implemented using a parallel optimization network. The performance of this algorithm on highly speckled fine resolution SAR data is discussed and illustrated using both simulated and actual SAR complex data.

Proceedings ArticleDOI
03 Apr 1990
TL;DR: An extension of the Bayesian estimation of 2-D motion in spatiotemporally sampled image sequences is presented by incorporating the color cue into the estimation process, thus allowing Y-C1-C2, RGB, or other formats.
Abstract: An extension of the Bayesian estimation of 2-D motion in spatiotemporally sampled image sequences is presented by incorporating the color cue into the estimation process. Instead of scalar image intensity, a three-component vector representation of color images is used, thus allowing Y-C1-C2, RGB, or other formats. The maximum a posteriori probability estimation is shown to result in a three-term energy minimization. A white Gaussian noise model is used for the displaced pel differences of each image component, and a coupled vector-binary Markov random field model is used for displacement and discontinuity fields. The resulting criterion is optimized using the method of discrete state-space simulated annealing. Improvements in the quality of estimated displacement fields due to additional color information are demonstrated through several experimental results.

Proceedings ArticleDOI
04 Dec 1990
TL;DR: A network is described that recognizes objects from uncertain image-derivable descriptions by making the recognition and segmentation decisions simultaneously, in a cooperative way, using a coupled Markov random field to provide a single formal framework for both.
Abstract: A network is described that recognizes objects from uncertain image-derivable descriptions. The network handles uncertainty by making the recognition and segmentation decisions simultaneously, in a cooperative way. Both problems are posed as labeling problems, and a coupled Markov random field (MRF) is used to provide a single formal framework for both. Prior domain knowledge is represented as weights within the MRF network and interacts with the evidence to yield a labeling decision. The domain problem is the recognition of structured objects composed of simple junction and link primitives. Implementation experiments demonstrate the parallel segmentation and recognition of multiple objects in noisy ambiguous scenes with occlusion.

Proceedings ArticleDOI
03 Apr 1990
TL;DR: A new Bayesian measure referred to as the minimum description length (MDL) then allows learning the conditional probabilities for the nonparametric MRF texture models of the mitochondria and background regions of the EMA image.
Abstract: Nonparametric Markov random field (MRF) texture modeling for the purpose of segmenting electron-microscope autoradiography (EMA) images is discussed. A Bayesian approach is assumed for addressing the basic problem of learning which model among a number of nonparametric MRF models best represents an observed texture. Nonparametric MRF models are inherently quite complex, prompting inclusion of a complexity measure within the Bayesian framework. The measure adopted is the Rissanen complexity, which incorporates quite naturally into the Bayesian analysis. The resulting Bayesian measure, referred to as the minimum description length (MDL), then allows the conditional probabilities for the nonparametric MRF texture models of the mitochondria and background regions of the EMA image to be learned. Experiments show the results of segmenting an EMA image using these models.
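Rissanen's description-length criterion referred to here is commonly stated in a two-part form (a standard statement for orientation, not a formula quoted from the paper): for a model M with k_M free parameters fitted to n samples x,

```latex
\mathrm{DL}(M) \;=\; -\log P\!\left(x \mid \hat{\theta}_M, M\right) \;+\; \frac{k_M}{2}\,\log n
```

The first term is the code length of the data under the fitted model and the second penalizes model complexity; the model minimizing the sum is selected, which is how a complexity measure enters the Bayesian texture-learning framework.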

Journal ArticleDOI
TL;DR: An algorithm is presented for smoothing data piecewise modeled by linear equations within regions of a one-dimensional or two-dimensional field, from measurements corrupted by additive noise.
Abstract: An algorithm is presented for smoothing data piecewise modeled by linear equations within regions of a one-dimensional or two-dimensional field, from measurements corrupted by additive noise. Its main feature is the combination of Markov random field (MRF) models with recursive least squares (RLS) techniques in order to estimate the model parameters within the regions. Applications to one-dimensional and two-dimensional data are given, with particular emphasis on the segmentation of images with piecewise constant intensity levels.
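The RLS half of that combination can be sketched as follows (a generic recursive least squares update, not the paper's exact estimator; the forgetting factor `lam` is an assumed extra parameter):

```python
import numpy as np

def rls_update(theta, P, phi, y, lam=1.0):
    """One recursive least squares step for the model y ~ phi @ theta.

    theta: current parameter estimate; P: inverse-information-like matrix;
    phi: regressor vector; y: scalar observation; lam: forgetting factor
    (lam = 1 means no forgetting).
    """
    k = P @ phi / (lam + phi @ P @ phi)    # gain vector
    theta = theta + k * (y - phi @ theta)  # correct by the prediction error
    P = (P - np.outer(k, phi @ P)) / lam   # update the covariance-like matrix
    return theta, P
```

Fed the samples of one region in scan order, such an update tracks that region's linear model; in the paper's scheme the MRF part decides which region each sample belongs to.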

Proceedings ArticleDOI
16 Jun 1990
TL;DR: The results show that the algorithm successfully segments the region of interest even when the signal-to-noise ratio is low; MLL modeling is suggested since the regions are spatially smooth.
Abstract: The problem of restoring noisy images when the model parameters are not known is discussed. The underlying field, x, is modeled as a noncausal Markov random field (MRF), namely, either a multilevel logistic (MLL) or a Gaussian MRF, and is corrupted by additive independently identically distributed (i.i.d.) Gaussian noise. The application is a restoration/segmentation of regions of interest in an image obtained from histologies of brain sections, which suggests an MLL modeling since the regions are spatially smooth. The presented algorithm maximizes the joint likelihood of the observations, y, and x given the unknown parameters. The parameters of the noise and the random field are estimated separately through a maximum likelihood technique given the current estimate of x, and the underlying field is estimated through a maximum a posteriori method. In the case of images modeled by MLL MRFs, the result of the restoration is actually a segmentation since the collection of all pixels with the same level defines a region. The results show that the algorithm successfully segments the region of interest even when the signal-to-noise ratio is low.

Proceedings ArticleDOI
23 Sep 1990
TL;DR: The floating, 1-D, cyclic Markov random field (F1DCMRF) based optimization technique is designed and implemented on a time sequence of echocardiograms to perform cavity boundary detection.
Abstract: The floating, 1-D, cyclic Markov random field (F1DCMRF) energy function (EF) based optimization technique is designed and implemented on a time sequence of echocardiograms to perform cavity boundary detection. Temporal information from the sequence is utilized intelligently through an adaptive multilevel energy function. The weight assigned to the temporal continuity component of the EF is allowed to increase as the correlation between the F1DCMRF configuration at time t, R_t, and the convergence configuration at time t-1, R_{t-1}^conv, improves. This allows for a high temporal weight in sequence images that have a high degree of similarity and a low weight in those that do not. Using a F1DCMRF eliminates ad hoc preliminary boundary location estimation; thus, large errors which could have been introduced due to preprocessing are avoided at very little additional computational cost.

Book ChapterDOI
01 Jan 1990
TL;DR: If the use of Markov Random fields in the context of Image Analysis can be given some fundamental justification then there is a remarkable connection between Probabilistic Image Analysis, Statistical Mechanics and Lattice-based Euclidean Quantum Field Theory.
Abstract: Markov random fields based on the lattice Z^2 have been extensively used in image analysis in a Bayesian framework as a-priori models for the intensity field, and on the dual lattice (Z^2) as models for boundaries. The choice of these models has usually been based on algorithmic considerations in order to exploit the local structure inherent in Markov fields. No fundamental justification has been offered for the use of Markov random fields (see, for example, GEMAN-GEMAN [1984], MARROQUIN-MITTER-POGGIO [1987]). It is well known that there is a one-one correspondence between Markov fields and Gibbs fields on a lattice, and the Markov field is simulated by creating a Markov chain whose invariant measure is precisely the Gibbs measure. There are many ways to perform this simulation, and one such way is the celebrated Metropolis algorithm. This is also the basic idea behind Stochastic Quantization. We thus see that if the use of Markov random fields in the context of Image Analysis can be given some fundamental justification, then there is a remarkable connection between Probabilistic Image Analysis, Statistical Mechanics and Lattice-based Euclidean Quantum Field Theory. We may thus expect ideas of Statistical Mechanics and Euclidean Quantum Field Theory to have a bearing on Image Analysis, and in the other direction we may hope that problems in image analysis (especially problems of inference on geometrical structures) may have some influence on statistical physics.
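The Metropolis simulation mentioned above can be sketched for a two-state (Ising) field on a small torus; the lattice size, inverse temperature, and sweep count below are arbitrary illustrative choices:

```python
import numpy as np

def metropolis_ising(n=16, beta=0.6, sweeps=50, seed=0):
    """Metropolis sampling of an Ising field on an n x n torus.

    Proposes single-spin flips; a flip that raises the energy is accepted
    with probability exp(-beta * dE), so the chain's invariant measure is
    the Gibbs distribution of the field.
    """
    rng = np.random.default_rng(seed)
    s = rng.choice([-1, 1], size=(n, n))       # random initial configuration
    for _ in range(sweeps * n * n):
        i, j = rng.integers(n, size=2)         # pick a site uniformly
        nb = (s[(i - 1) % n, j] + s[(i + 1) % n, j]
              + s[i, (j - 1) % n] + s[i, (j + 1) % n])
        dE = 2 * s[i, j] * nb                  # energy change if the spin flips
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            s[i, j] = -s[i, j]                 # accept the flip
    return s
```

Long runs of this chain yield samples from the Gibbs measure, which is exactly how a Markov field prior is simulated in the Bayesian image-analysis setting the chapter discusses.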

Book ChapterDOI
01 Jan 1990
TL;DR: The major difference between Markov random fields and Markov chains is that the local characteristics do not always determine the probability law of a Markov random field.
Abstract: The chapter presents the ways in which Markov processes are related to other types of stochastic processes. It covers the main features of discrete Markov chains and continuous-time Markov chains. The major difference between Markov random fields and Markov chains is that the local characteristics do not always determine the probability law of a Markov random field. In general, there exists a convex set of probabilities with a prescribed set of local characteristics, extreme points of which are interpreted as "pure states," while other elements, as mixtures of pure states, exhibit phase transitions. This can happen only if the graph has infinitely many vertices and then, in a given model, whether phase transitions exist depends on the temperature: Only if the temperature falls below a critical value is there a phase transition. An alternative manifestation is that at low temperature the system exhibits long-range order: Correlations among states at different sites do not decrease to zero as the spatial separation increases to infinity. Thus the Ising model, at low temperature and in the absence of an external magnetic field, admits two pure phases and long-range order, the latter with the physical interpretation of spontaneous magnetization.
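The local characteristics in question are the single-site conditional distributions; for a field X on a graph with neighborhoods ∂v and Gibbs energy H, they take the standard form (stated here for orientation, not quoted from the chapter):

```latex
P\!\left(X_v = x_v \,\middle|\, X_w = x_w,\ w \neq v\right)
  \;=\; P\!\left(X_v = x_v \,\middle|\, X_w = x_w,\ w \in \partial v\right),
\qquad
\pi_\beta(x) \;=\; \frac{e^{-\beta H(x)}}{Z(\beta)}
```

On a finite graph these conditionals pin down the Gibbs measure π_β uniquely; on an infinite graph several Gibbs measures can share the same conditionals, which is exactly the phase-transition phenomenon the chapter describes.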

Proceedings ArticleDOI
01 Jan 1990
TL;DR: The probabilistic model-based approach utilizes the intensity data directly to reconstruct 3D surfaces, without first solving the correspondence problem or estimating optical flow explicitly, computing the maximum posterior probability surface reconstruction from the observed images.
Abstract: Reconstructing 3D surfaces using multiple intensity images is an important problem in computer vision. Most approaches to this problem require either finding the 2D feature correspondences or estimating the optical flow first. The probabilistic model-based approach shown in this paper utilizes the intensity data directly (i.e., no feature extraction) for reconstructing 3D surfaces without first solving the correspondence problem or estimating optical flow explicitly. We model 3D objects as surface patches where each patch is described as a function known up to the values of a few parameters. Surface reconstruction is then treated as the problem of parameter estimation based on two or more images taken by a moving camera. By constructing the likelihood function for the surface parameters and modeling prior knowledge about 3D surfaces with a Markov random field, we are able to compute the maximum posterior probability 3D surface reconstruction based on the observed images. This paper presents some experimental results based on a sequence of intensity images taken by a moving camera. Our approach has the advantages of: (i) directly estimating shape for surface patches, thus making object recognition simpler; (ii) formally incorporating prior knowledge about 3D surfaces; (iii) being highly parallel in required computation, hence promising for real-time operation; (iv) producing optimal accuracy in a probabilistic sense; (v) being algorithmically simple; (vi) being robust with real data.


Dissertation
01 Jan 1990
TL;DR: A recursive technique is developed which enables us to achieve maximum likelihood estimation for the underlying parameter and to carry out the EM algorithm for parameter estimation when only noisy data are available, and a simultaneous procedure of parameter estimation and restoration is developed.
Abstract: The aim of the thesis is to investigate classes of model-based approaches to statistical image analysis. We explored the properties of models and examined the problem of parameter estimation from the original image data and, in particular, from noisy versions of the scene. We concentrated on Markov random field (MRF) models, Markov mesh random field (MMRF) models and multi-dimensional Markov chain (MDMC) models. In Chapter 2, for the one-dimensional version of Markov random fields, we developed a recursive technique which enables us to achieve maximum likelihood estimation for the underlying parameter and to carry out the EM algorithm for parameter estimation when only noisy data are available. This technique also enables us, in just a single pass, to generate a sample from a one-dimensional Markov random field. Although, unfortunately, this technique cannot be extended to two- or multi-dimensional models, it was applied to many cases in this thesis. Since, for two-dimensional Markov random fields, the density of each row (column), conditionally on all other rows (columns), is of the form of a one-dimensional Markov random field, and since the distribution of the original image, conditionally on the noisy version of the data, is still a Markov random field, the technique can be used on different forms of conditional density of one row (column). In Chapter 3, therefore, we developed the line-relaxation method for simulating MRFs and maximum line pseudo-likelihood estimation of parameter(s), and in Chapter 5, we developed a simultaneous procedure of parameter estimation and restoration, in which line pseudo-likelihood and a modified EM algorithm were used. The first part of Chapter 3 and Chapter 4 concentrate on inference for two-dimensional MRFs. We obtained a matrix expression for partition functions for general models, and a more explicit form for a multi-colour Ising model, and thus located the positions of critical points of this multi-colour model.
We examined the asymptotic properties of an asymmetric, two-colour Ising model. For general models, in Chapter 4, we explored asymptotic properties under an "independence" or a "near independence" condition, and then developed the approach of maximum approximate-likelihood estimation. For three-dimensional MMRF models, in Chapter 6, a generalization of Devijver's F-G-H algorithm is developed for restoration. In Chapter 7, the recursive technique was again used to introduce MDMC models, which form a natural extension of a Markov chain. By suitable choice of model parameters, textures can be generated that are similar to those simulated from MRFs, but the simulation procedure is computationally much more economical. The recursive technique also enables us to maximize the likelihood function of the model. For all three sorts of prior random field models considered in this thesis, we developed a simultaneous procedure for parameter estimation and image restoration, when only noisy data are available. The currently restored image was used, together with noisy data, in modified versions of the EM algorithm. In simulation studies, quite good results were obtained, in terms of estimation of parameters in both the original model and, particularly, in the noise model, and in terms of restoration.
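The pseudo-likelihood referred to above replaces the intractable joint likelihood, whose normalizing partition function is hard to compute, by a product of conditional densities (Besag's standard form, written here with site neighborhoods N_i):

```latex
\mathrm{PL}(\theta) \;=\; \prod_{i} P\!\left(x_i \,\middle|\, x_{N_i};\, \theta\right)
```

Maximizing log PL needs only the local characteristics, not the partition function; the "line pseudo-likelihood" of Chapter 3 takes the factors to be whole rows or columns conditioned on their neighbours rather than single sites.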

Proceedings ArticleDOI
01 Sep 1990
TL;DR: In this paper, a new iterative algorithm for 3D reconstruction under constraints from a limited number of radiographic projections with adjustment of the constraints during the iterations is presented, where the first step is a classical iterative reconstruction method of ART type (Algebraic Reconstruction Technique) which provides a rough volumetric reconstruction of a 3D zone containing a flaw.
Abstract: We present a new iterative algorithm for 3D reconstruction under constraints from a limited number of radiographic projections, with adjustment of the constraints during the iterations. The first step of the algorithm is a classical iterative reconstruction method of ART type (Algebraic Reconstruction Technique), which provides a rough volumetric reconstruction of a 3D zone containing a flaw. This reconstructed zone is then modelled by a Markov Random Field (MRF), which allows us to estimate some 3D support and orientation constraints using a Bayesian restoration method. This fundamental step is an important one in the sense that it allows the introduction of local geometric a priori knowledge concerning the flaws. The next step consists in reintroducing these strong constraints into the reconstruction algorithm. Few iterations of the algorithm are necessary to improve the quality of the reconstructed 3D zone. Simulated radiographic projections allow the performance of the algorithm to be evaluated.

Book ChapterDOI
01 Jan 1990
TL;DR: In this paper, a comparison of object reconstructions based on maximum a posteriori (MAP) estimates where the prior probability densities are members of exponential families is made, for standard diagnostic tasks, a maximum entropy regulariser performs comparably to a positivity-norm regulariser.
Abstract: In pinhole-coded-aperture medical imaging the linear system of equations to be inverted for an object reconstruction is often vastly underdetermined (up to 90%, say). Our interest is in a comparison of object reconstructions based on maximum a posteriori (MAP) estimates where the prior probability densities are members of exponential families. Preliminary results indicate that, for standard diagnostic tasks, a maximum entropy regulariser performs comparably to a positivity-norm regulariser.

Proceedings ArticleDOI
01 Aug 1990
TL;DR: A new MRF model, called the Modified Markov Random Field model, is proposed; a stable model can be easily obtained for stochastic and natural textures, and it is suitable for texture synthesis and data compression.
Abstract: The Markov Random Field (MRF) model is a very useful model for image texture processing, but its stability condition is hard to meet for natural textures, and finding a stable MRF model is difficult and computationally complex. In this paper a new MRF model, called the Modified Markov Random Field model, is proposed; a stable Modified MRF model can be easily obtained for stochastic and natural textures. It is suitable for texture synthesis and data compression.

Proceedings ArticleDOI
22 Oct 1990
TL;DR: The authors compare the performance of the generalized expectation maximization (GEM) maximum a posteriori (MAP) algorithm, a Bayesian algorithm which uses a Markov random field prior, and the expectation-maximization maximum likelihood algorithm.
Abstract: In single photon emission computed tomography (SPECT), every reconstruction algorithm must use some model for the response of the gamma camera to emitted gamma rays. The true camera response is both spatially variant and object dependent. These two properties result from the effects of scatter, septal penetration, and attenuation, and they forestall determination of the true response with any precision. This motivates the investigation of the performance of reconstruction algorithms when there are errors between the camera response used in the reconstruction algorithm and the true response of the gamma camera. In this regard, the authors compare the filtered backprojection algorithm, the expectation-maximization maximum likelihood algorithm, and the generalized expectation maximization (GEM) maximum a posteriori (MAP) algorithm, a Bayesian algorithm which uses a Markov random field prior.