
Showing papers on "Markov random field published in 1996"


01 Jan 1996
TL;DR: In this article, a general model for multisource classification of remotely sensed data based on Markov Random Fields (MRF) is proposed, which exploits spatial class dependencies (spatial context) between neighboring pixels in an image, and temporal class dependencies between different images of the same scene.
Abstract: A general model for multisource classification of remotely sensed data based on Markov Random Fields (MRF) is proposed. A specific model for fusion of optical images, synthetic aperture radar (SAR) images, and GIS (Geographic Information Systems) ground cover data is presented in detail and tested. The MRF model exploits spatial class dependencies (spatial context) between neighboring pixels in an image, and temporal class dependencies between different images of the same scene. By including the temporal aspect of the data, the proposed model is suitable for detection of class changes between the acquisition dates of different images. The performance of the proposed model is investigated by fusing Landsat TM images, multitemporal ERS-1 SAR images, and GIS ground-cover maps for land-use classification, and on agricultural crop classification based on Landsat TM images, multipolarization SAR images, and GIS crop field border maps. The performance of the MRF model is compared to a simpler reference fusion model. On average, the MRF model results in slightly higher (2%) classification accuracy when the same data is used as input to the two models. When GIS field border data is included in the MRF model, the classification accuracy of the MRF model improves by 8%. For change detection in agricultural areas, 75% of the actual class changes are detected by the MRF model, compared to 62% for the reference model. Based on the well-founded theoretical basis of Markov Random Field models for classification tasks and the encouraging experimental results in our small-scale study, we conclude that the proposed MRF model is useful for classification of multisource satellite imagery.
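As a rough sketch of the spatial/temporal context idea, the MAP labeling can be approximated with iterated conditional modes (ICM) over an energy that sums a data misfit and penalties for disagreeing spatial and temporal neighbours. The Gaussian class means, the weights `beta_s`/`beta_t`, and the use of ICM are illustrative assumptions here, not the paper's calibrated fusion model.

```python
import numpy as np

def icm_classify(data, means, beta_s=1.0, beta_t=0.5, prev_labels=None, iters=5):
    """ICM labeling: per-pixel energy = squared misfit to the class mean
    + beta_s per disagreeing 4-neighbour
    + beta_t if the label disagrees with the previous image's label."""
    H, W = data.shape
    labels = np.argmin([(data - m) ** 2 for m in means], axis=0)  # ML start
    for _ in range(iters):
        for i in range(H):
            for j in range(W):
                best_k, best_e = labels[i, j], np.inf
                for k in range(len(means)):
                    e = (data[i, j] - means[k]) ** 2
                    for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                        ni, nj = i + di, j + dj
                        if 0 <= ni < H and 0 <= nj < W and labels[ni, nj] != k:
                            e += beta_s
                    if prev_labels is not None and prev_labels[i, j] != k:
                        e += beta_t
                    if e < best_e:
                        best_k, best_e = k, e
                labels[i, j] = best_k
    return labels
```

On a noisy two-class image the spatial term removes isolated misclassifications; supplying `prev_labels` adds the temporal pull toward the earlier acquisition's class map.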

448 citations




Journal ArticleDOI
TL;DR: The Bayesian protocol can produce substantial improvements in relative quantitation over the standard FBP protocol, particularly when short transmission scans are used.
Abstract: We describe a practical statistical methodology for the reconstruction of PET images. Our approach is based on a Bayesian formulation of the imaging problem. The data are modelled as independent Poisson random variables and the image is modelled using a Markov random field smoothing prior. We describe a sequence of calibration procedures which are performed before reconstruction: (i) calculation of accurate attenuation correction factors from re-projected Bayesian reconstructions of the transmission image; (ii) estimation of the mean of the randoms component in the data; and (iii) computation of the scatter component in the data using a Klein - Nishina-based scatter estimation method. The Bayesian estimate of the PET image is then reconstructed using a pre-conditioned conjugate gradient method. We performed a quantitation study with a multi-compartment chest phantom in a Siemens/CTI ECAT931 system. Using 40 1 min frames, we computed the ensemble mean and variance over several regions of interest from images reconstructed using the Bayesian and a standard filtered backprojection (FBP) protocol. The values for the region of interest were compared with well counter data for each compartment. These results show that the Bayesian protocol can produce substantial improvements in relative quantitation over the standard FBP protocol, particularly when short transmission scans are used. An example showing the application of the method to a clinical chest study is also given.
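The reconstruction step above can be caricatured in one dimension: maximize the Poisson log-likelihood of the (corrected) counts plus a quadratic first-difference MRF log-prior. Plain gradient ascent is used for brevity where the paper uses a preconditioned conjugate gradient, and the toy system matrix, step size, and prior weight are assumptions of this sketch.

```python
import numpy as np

def map_reconstruct(x0, y, A, beta=0.1, step=0.05, iters=300):
    """Gradient ascent on  sum(y*log(Ax) - Ax) - (beta/2)*sum(diff(x)^2),
    i.e. a Poisson likelihood plus a quadratic MRF smoothing prior."""
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(iters):
        Ax = np.maximum(A @ x, 1e-9)
        grad_ll = A.T @ (y / Ax - 1.0)      # Poisson log-likelihood gradient
        d = np.diff(x)
        grad_prior = np.zeros_like(x)       # d/dx_j of log-prior = beta*(d_j - d_{j-1})
        grad_prior[:-1] += d
        grad_prior[1:] -= d
        x = np.maximum(x + step * (grad_ll + beta * grad_prior), 0.0)
    return x
```

The non-negativity clip plays the role of the positivity constraint on activity; with `beta = 0` the iteration reduces to unregularized maximum likelihood.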

216 citations


Journal ArticleDOI
TL;DR: Three optimisation techniques are presented, Deterministic Pseudo-Annealing (DPA), Game Strategy Approach (GSA), and Modified Metropolis Dynamics (MMD), in order to carry out image classification using a Markov random field model.

183 citations


Journal ArticleDOI
TL;DR: This paper presents a classical multiscale model which consists of a label pyramid and a whole observation field, and proposes a hierarchical Markov random field model based on this classical model, which results in a relaxation algorithm with a new annealing scheme: the multitemperature annealing (MTA) scheme, which consists of associating higher temperatures with higher levels in order to be less sensitive to local minima at coarser grids.

113 citations


Journal ArticleDOI
TL;DR: A Markov random field model with a Gibbs probability distribution (GPD) is proposed for describing particular classes of grayscale images which can be called spatially uniform stochastic textures and experiments in modeling natural textures show the utility of the proposed model.
Abstract: A Markov random field model with a Gibbs probability distribution (GPD) is proposed for describing particular classes of grayscale images which can be called spatially uniform stochastic textures. The model takes into account only multiple short- and long-range pairwise interactions between the gray levels in the pixels. An effective learning scheme is introduced to recover structure and strength of the interactions using maximal likelihood estimates of the potentials in the GPD as desired parameters. The scheme is based on an analytic initial approximation of the estimates and their subsequent refinement by a stochastic approximation. Experiments in modeling natural textures show the utility of the proposed model.
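A minimal version of such an energy over multiple pairwise interactions: each (dy, dx) offset in a clique family contributes a potential over co-occurring gray levels. The quadratic potential and the particular offsets below are placeholders for the interaction structure and potentials that the paper learns by maximum likelihood.

```python
import numpy as np

def pairwise_energy(img, offsets, potential=lambda a, b: (a - b) ** 2):
    """Gibbs energy from pairwise interactions at the given non-negative
    (dy, dx) offsets; `potential` scores each co-occurring gray-level pair."""
    H, W = img.shape
    e = 0.0
    for dy, dx in offsets:
        a = img[:H - dy, :W - dx].astype(float)  # reference pixels
        b = img[dy:, dx:].astype(float)          # partners at the offset
        e += potential(a, b).sum()
    return e
```

Under the quadratic potential a smooth ramp scores a lower energy than uniform noise, which is the direction a smoothing-type interaction should point; learned potentials would instead favour the statistics of the training texture.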

109 citations


Journal ArticleDOI
TL;DR: A distance measure to match the query image to the database content under possible orientation and scale differences between the textures of the same type is proposed, based on comparing the gray-level difference histograms collected in accord with the structure of multiple pairwise pixel interactions in the subimages to be matched.
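The matching measure can be sketched by collecting gray-level difference histograms per interaction offset and comparing them. The L1 distance and the fixed offsets here are simplifications: they ignore the orientation and scale search that the paper's distance performs.

```python
import numpy as np

def diff_hist(img, offset, bins=32, span=255):
    """Normalized histogram of gray-level differences for one pairwise offset."""
    dy, dx = offset
    H, W = img.shape
    d = img[:H - dy, :W - dx].astype(float) - img[dy:, dx:].astype(float)
    h, _ = np.histogram(d, bins=bins, range=(-span, span))
    return h / max(h.sum(), 1)

def texture_distance(img1, img2, offsets=((0, 1), (1, 0), (1, 1))):
    """Sum of L1 distances between difference histograms over the offsets --
    a simplified, rotation-unaware stand-in for the paper's measure."""
    return sum(np.abs(diff_hist(img1, o) - diff_hist(img2, o)).sum()
               for o in offsets)
```

Identical textures score zero, and textures with different difference statistics score positive, which is the minimum a retrieval distance must satisfy.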

97 citations


Journal ArticleDOI
TL;DR: An MRF-based scheme to perform object delineation involves extracting straight lines from the edge map of an image and an MRF model is used to group these lines to delineate buildings in aerial images.
Abstract: Traditionally, Markov random field (MRF) models have been used in low-level image analysis. The article presents an MRF-based scheme to perform object delineation. The proposed edge-based approach involves extracting straight lines from the edge map of an image. Then, an MRF model is used to group these lines to delineate buildings in aerial images.

93 citations


Proceedings ArticleDOI
18 Jun 1996
TL;DR: These experiments demonstrate that many textures previously considered as different categories can be modeled and synthesized in a common framework, and interprets and clarifies many previous concepts and methods for texture analysis and synthesis from a unified point of view.
Abstract: In this paper, a minimax entropy principle is studied, based on which a novel theory, called FRAME (Filters, Random fields And Minimax Entropy) is proposed for texture modeling. FRAME combines attractive aspects of two important themes in texture modeling: multi-channel filtering and Markov random field (MRF) modeling. It incorporates the responses of a set of well selected filters into the distribution over a random field and hence has a much stronger descriptive ability than the traditional MRF models. Furthermore, it interprets and clarifies many previous concepts and methods for texture analysis and synthesis from a unified point of view. Algorithms are proposed for probability inference, stochastic simulation and filter selection. Experiments on a variety of textures are described to illustrate our theory and to show the performance of our algorithms. These experiments demonstrate that many textures previously considered as different categories can be modeled and synthesized in a common framework.

92 citations


Journal ArticleDOI
TL;DR: A supervised texture segmentation scheme is proposed in this article that results in an optimal segmentation of textured images, including images from remote sensing.
Abstract: A supervised texture segmentation scheme is proposed in this article. The texture features are extracted by filtering the given image using a filter bank consisting of a number of Gabor filters with different frequencies, resolutions, and orientations. The segmentation model consists of feature formation, partition, and competition processes. In the feature formation process, the texture features from the Gabor filter bank are modeled as a Gaussian distribution. The image partition is represented as a noncausal Markov random field (MRF) by means of the partition process. The competition process constrains the overall system to have a single label for each pixel. Using these three random processes, the a posteriori probability of each pixel label is expressed as a Gibbs distribution. The corresponding Gibbs energy function is implemented as a set of constraints on each pixel by using a neural network model based on the Hopfield network. A deterministic relaxation strategy is used to evolve the minimum energy state of the network, corresponding to a maximum a posteriori (MAP) probability. This results in an optimal segmentation of the textured image. The performance of the scheme is demonstrated on a variety of images including images from remote sensing.
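The feature-formation stage can be sketched with a tiny Gabor bank. The kernel size, envelope width, and frequency/orientation grid below are arbitrary choices for illustration, and circular FFT convolution stands in for whatever boundary handling the authors used.

```python
import numpy as np

def gabor_kernel(freq, theta, sigma=2.5, size=11):
    """Even (cosine-phase) Gabor kernel: Gaussian envelope times a plane
    wave at spatial frequency `freq` (cycles/pixel) and orientation `theta`."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    return np.exp(-(x ** 2 + y ** 2) / (2 * sigma ** 2)) * np.cos(2 * np.pi * freq * xr)

def filter_bank_features(img, freqs=(0.125, 0.25), thetas=(0.0, np.pi / 2)):
    """Responses of a small Gabor bank, computed with circular convolution
    via the FFT; returns one response image per (freq, theta) pair."""
    F = np.fft.fft2(img)
    feats = []
    for f in freqs:
        for t in thetas:
            K = np.fft.fft2(gabor_kernel(f, t), s=img.shape)
            feats.append(np.real(np.fft.ifft2(F * K)))
    return feats
```

A texture oscillating along one axis excites the orientation-matched channel far more strongly than the orthogonal one, which is exactly the contrast the downstream Gaussian feature model relies on.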

86 citations


Journal ArticleDOI
TL;DR: This work has proposed iterative algorithms for recovering high resolution albedo with the knowledge of high resolution height and vice versa for surface reconstruction.
Abstract: Given a set of low resolution camera images of a Lambertian surface, it is possible to reconstruct high resolution luminance and height information, when the relative displacements of the image frames are known. We have proposed iterative algorithms for recovering high resolution albedo with the knowledge of high resolution height and vice versa. The problem of surface reconstruction has been tackled in a Bayesian framework and has been formulated as one of minimizing an error function. Markov Random Fields (MRF) have been employed to characterize the a priori constraints on the solution space. As for the surface height, we have attempted a direct computation without referring to surface orientations, while increasing the resolution by camera jittering.

Journal ArticleDOI
TL;DR: It is shown that this key problem may be studied by considering the restriction of a Markov random field to a part of its original site set, and several general properties of the restricted field are derived.
Abstract: The association of statistical models and multiresolution data analysis in a consistent and tractable mathematical framework remains an intricate theoretical and practical issue. Several consistent approaches have been proposed previously to combine Markov random field (MRF) models and multiresolution algorithms in image analysis: renormalization group, subsampling of stochastic processes, MRFs defined on trees or pyramids, etc. For the simulation or a practical use of these models in statistical estimation, an important issue is the preservation of the local Markovian property of the representation at the different resolution levels. It is shown that this key problem may be studied by considering the restriction of a Markov random field (defined on some simple finite nondirected graph) to a part of its original site set. Several general properties of the restricted field are derived. The general form of the distribution of the restriction is given. "Locality" of the field is studied by exhibiting a neighborhood structure with respect to which the restricted field is an MRF. Sufficient conditions for the new neighborhood structure to be "minimal" are derived. Several consequences of these general results related to various "multiresolution" MRF-based modeling approaches in image analysis are presented.
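The restriction result can be illustrated on a finite graph: after marginalizing out the removed sites, two retained sites can interact whenever they are adjacent in the original graph or joined by a path running entirely through removed sites. This yields a neighborhood structure for the restricted field (per the paper, valid in general and minimal only under extra conditions); the sketch below computes it.

```python
def restricted_neighbors(adj, kept):
    """Neighborhood of the restriction of an MRF to `kept`: two kept sites
    interact if they are adjacent in `adj`, or joined by a path through
    removed sites only."""
    removed = set(adj) - set(kept)
    out = {s: set() for s in kept}
    for s in kept:
        stack, seen = [s], {s}     # search from s, passing only through removed sites
        while stack:
            u = stack.pop()
            for v in adj[u]:
                if v in seen:
                    continue
                seen.add(v)
                if v in removed:
                    stack.append(v)  # keep walking through the removed site
                else:
                    out[s].add(v)    # reached a kept site: it is a neighbor
    return out
```

On a chain 0-1-2-3, removing site 1 makes 0 and 2 neighbors in the restricted field, the simplest instance of a long-range interaction created by marginalization.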

Journal ArticleDOI
TL;DR: A Bayesian contextual classification scheme is presented in connection with modified M-estimates and a discrete Markov random field model and shows that the suggested scheme outperforms conventional noncontextual classifiers as well as contextual classifiers which are based on least squares estimates or other spatial interaction models.
Abstract: A Bayesian contextual classification scheme is presented in connection with modified M-estimates and a discrete Markov random field model. The spatial dependence of adjacent class labels is characterized based on local transition probabilities in order to use contextual information. Due to the computational load required to estimate class labels in the final stage of optimization and the need to acquire robust spectral attributes derived from the training samples, modified M-estimates are implemented to characterize the joint class-conditional distribution. The experimental results show that the suggested scheme outperforms conventional noncontextual classifiers as well as contextual classifiers which are based on least squares estimates or other spatial interaction models.

Proceedings ArticleDOI
16 Sep 1996
TL;DR: A Bayesian approach to conceal errors in digital video encoded using the MPEG1 or MPEG2 compression scheme and a maximum a posteriori estimate of the missing macroblocks and motion vectors is described based on the model.
Abstract: In ATM networks cell loss causes data to be dropped in the channel. When digital video is transmitted over these networks one must be able to reconstruct the missing data so that the impact of these errors is minimized. In this paper we describe a Bayesian approach to conceal these errors. Assuming that the digital video has been encoded using the MPEG1 or MPEG2 compression scheme, each frame is modeled as a Markov random field. A maximum a posteriori estimate of the missing macroblocks and motion vectors is described based on the model.

Journal ArticleDOI
TL;DR: A maximum a posteriori deconvolution algorithm for two-dimensional ultrasound radio frequency (RF) images based on a new Markov random field image model incorporating spatial smoothness constraints and physical models for specular reflections and diffuse scattering is developed.
Abstract: Observed medical ultrasound images are degraded representations of the true tissue reflectance. The specular reflections at boundaries between regions of different tissue types are blurred, and the diffuse scattering within such regions also contains speckle. This reduces the diagnostic value of such images. In order to remove both blur and speckle, the authors develop a maximum a posteriori deconvolution algorithm for two-dimensional (2-D) ultrasound radio frequency (RF) images based on a new Markov random field image model incorporating spatial smoothness constraints and physical models for specular reflections and diffuse scattering. During stochastic relaxation, the algorithm alternates steps of restoration and segmentation, and includes estimation of reflectance parameters. The smoothness constraints regularize the overall procedure, and the algorithm uses the specular reflection model to locate region boundaries. The resulting restorations of some simulated and real RF images are significantly better than those produced by Wiener filtering.

Proceedings ArticleDOI
16 Sep 1996
TL;DR: The designed Markov random field model takes into account both the phenomenon of speckle noise through Rayleigh's law, and notions of geometry related to the shape of object shadows to improve the sonar image segmentation while speeding up the iterative optimization scheme.
Abstract: This paper deals with sonar image segmentation based on a hierarchical Markovian modeling. The designed Markov random field (MRF) model takes into account both the phenomenon of speckle noise through Rayleigh's law, and notions of geometry related to the shape of object shadows. We adopt an 8-connectivity neighbourhood in order to discriminate geometric and non-regular shadows. MRFs are well adapted for this kind of segmentation, where a priori knowledge about the shapes we are searching for is available. Besides, the introduced hierarchical modeling allows us to successfully improve the sonar image segmentation while speeding up the iterative optimization scheme.

Book ChapterDOI
15 Apr 1996
TL;DR: A 3D autoregressive model is employed and a sampling based interpolator is developed in which reconstructed data is generated as a typical realization from the underlying AR process, achieving a perceptually improved result.
Abstract: This paper presents a new technique for interpolating missing data in image sequences. A 3D autoregressive (AR) model is employed and a sampling based interpolator is developed in which reconstructed data is generated as a typical realization from the underlying AR process, rather than as e.g. a least squares (LS) estimate. In this way a perceptually improved result is achieved. A hierarchical gradient-based motion estimator, robust in regions of corrupted data, employing a Markov random field (MRF) motion prior is also presented for the estimation of motion before interpolation.

Proceedings ArticleDOI
16 Sep 1996
TL;DR: A non-homogeneous Markov random field (MRF) image model in the multiresolution framework is formulated, motivated by results in maximum likelihood parameter estimation of MRF models.
Abstract: The popularity of Bayesian methods in image processing applications has generated great interest in image modeling. A good image model needs to be non-homogeneous to be able to adapt to the local characteristics of the different regions in an image. In the past, however, such a formulation was difficult since it was not clear how to choose the parameters of the non-homogeneous model. Motivated by results in maximum likelihood parameter estimation of MRF models, we formulate in this paper a non-homogeneous Markov random field (MRF) image model in the multiresolution framework. The advantage of the multiresolution framework is twofold: first, it makes it possible to estimate the parameters of the non-homogeneous MRF at any resolution by using the image at the coarser resolution. Second, it yields multiresolution algorithms which are computationally efficient and more robust than their single resolution counterparts. Experimental results in tomographic image reconstruction and optical flow computation problems verify the superior modeling provided by the new model.

Journal ArticleDOI
I.Y. Kim1, H.S. Yang2
TL;DR: In this paper, a unified approach for image understanding based on Markov random field models is presented, where the image segmentation and interpretation processes cooperate in a simultaneous optimization process so that erroneous segmentation can be compensated for and recovered by continuous estimation of the unified energy function.
Abstract: This paper presents a unified approach to the image understanding problem based on Markov random field models. In the proposed scheme, the image segmentation and interpretation processes cooperate in a simultaneous optimization process so that erroneous segmentation and misinterpretation can be compensated for and recovered by continuous estimation of the unified energy function.

Proceedings ArticleDOI
16 Sep 1996
TL;DR: This work characterises the features of a regularised ML estimate, or equivalently a MAP estimate, of an image (or a signal) in relation with the form of the regularisation and exhibits the relationship between several features of the estimate and the forms of the PF.
Abstract: We characterise the features of a regularised ML estimate, or equivalently a MAP estimate, of an image (or a signal) in relation with the form of the regularisation. The unknown image (signal) is observed through a linear operator and the data are corrupted by white Gaussian noise. Its reconstruction is regularised by the energy of a first-order Markov random field where the contributions of the transitions between adjacent neighbours are weighted using general potential functions (PFs). We exhibit the relationship between several features of the estimate and the form of the PF. Points of interest are the edge recovery, the stability of the estimator, the estimation of locally constant regions, the bias over large transitions, the resolution. The exposed theoretical considerations are corroborated by numerical simulations.

Proceedings ArticleDOI
08 Oct 1996
TL;DR: In this paper, the Markov random field model is used to segment sonar images to localize the sea bottom areas and the projected shadow areas corresponding to objects lying on the sea floor.
Abstract: We use the Markov random field model in order to segment sonar images, i.e. to localize the sea bottom areas and the projected shadow areas corresponding to objects lying on the sea floor. This model requires, on the one hand, knowledge of the statistical distributions relative to the different zones and, on the other hand, estimation of the law parameters. The Kolmogorov criterion or the chi-squared criterion allows estimation of the distribution laws. The expectation-maximization (EM) algorithm or the stochastic EM (SEM) algorithm is used to determine the maximum likelihood estimate of the law parameters. Those algorithms are initialized with the K-means algorithm. Results are shown on real sonar images.

Journal ArticleDOI
TL;DR: The model was developed as a central part of an algorithm for automatic analysis of genetic experiments, positioned in a lattice structure by a robot, and seems to be a fast, accurate, and robust solution to the problem.

Journal ArticleDOI
TL;DR: A new scheme for the estimation of Markov random field line process parameters which uses geometric CAD models of the objects in the scene to obtain true edge labels which are otherwise not available and use of canonical Markov random field representation to reduce the number of parameters is presented.
Abstract: We present a new scheme for the estimation of Markov random field line process parameters which uses geometric CAD models of the objects in the scene. The models are used to generate synthetic images of the objects from random view points. The edge maps computed from the synthesized images are used as training samples to estimate the line process parameters using a least squares method. We show that this parameter estimation method is useful for detecting edges in range as well as intensity edges. The main contributions of the paper are: 1) use of CAD models to obtain true edge labels which are otherwise not available; and 2) use of canonical Markov random field representation to reduce the number of parameters.

Journal ArticleDOI
TL;DR: A two-level probabilistic method for grouping edge-based descriptive primitives by using a voting mechanism based on the Direct Hough Transform and a Simulated Annealing algorithm to find the best label configuration is proposed.

Journal ArticleDOI
TL;DR: The relaxation step of the proposed edge-detection algorithm greatly reduces noise effects, gets better edge localization such as line ends and corners, and plays a crucial role in refining edge outputs.
Abstract: This paper presents a new scheme for probabilistic relaxation labeling that consists of an update function and a dictionary construction method. The nonlinear update function is derived from Markov random field theory and Bayes' formula. The method combines evidence from neighboring label assignments and eliminates label ambiguity efficiently. This result is important for a variety of image processing tasks, such as image restoration, edge enhancement, edge detection, pixel classification, and image segmentation. The authors successfully applied this method to edge detection. The relaxation step of the proposed edge-detection algorithm greatly reduces noise effects, gets better edge localization such as line ends and corners, and plays a crucial role in refining edge outputs. The experiments show that our algorithm converges quickly and is robust in noisy environments.
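The update function can be sketched as follows: each site's label probabilities are multiplied by the support gathered from its 4-neighbours under a compatibility matrix, then renormalized. The particular compatibilities and the circular (`np.roll`) boundary handling are conveniences of this sketch, not the paper's dictionary-based formulation.

```python
import numpy as np

def relaxation_step(P, compat):
    """One probabilistic relaxation update on an (H, W, K) probability map.
    support[k] = average over 4-neighbours of sum_l compat[k, l] * P_nb[l];
    np.roll gives periodic boundaries for brevity."""
    shifts = ((-1, 0), (1, 0), (0, -1), (0, 1))
    support = sum(np.roll(P, s, axis=(0, 1)) @ compat.T for s in shifts) / 4.0
    newP = P * support                       # combine evidence multiplicatively
    return newP / newP.sum(axis=2, keepdims=True)
```

Iterating the step pulls an ambiguous site toward the label its neighbours agree on, which is the label-ambiguity elimination the abstract describes.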

Book ChapterDOI
01 Jan 1996
TL;DR: The model entropy of the mixed tree/chain graph model is shown to be lower than that of both the bigram and context-free models.
Abstract: Stochastic language models incorporating both n-grams and context-free grammars are proposed. A constrained context-free model specified by a stochastic context-free prior distribution with superimposed n-gram frequency constraints is derived and the resulting maximum-entropy distribution is shown to induce a Markov random field with neighborhood structure at the leaves determined by the relative n-gram frequencies. A computationally efficient version, the mixed tree/chain graph model, is derived with identical neighborhood structure. In this model, a word-tree derivation is given by a stochastic context-free prior on trees down to the preterminal (part-of-speech) level and word attachment is made by a nonstationary Markov chain. Using the Penn TreeBank, a comparison of the mixed tree/chain graph model to both the n-gram and context-free models is performed using entropy measures. The model entropy of the mixed tree/chain graph model is shown to reduce the entropy of both the bigram and context-free models.

Journal ArticleDOI
TL;DR: The main contribution lies in the regularization of a large-support ill-posed observation operator using a locally constant binary image Markov random field and can be applied whenever a binary image is observed using a linear system and corrupted by Gaussian noise.

Proceedings ArticleDOI
16 Sep 1996
TL;DR: This paper develops a joint scheme for segmentation and image interpretation in a multiresolution framework, where segmentation (low level) and interpretation (high level) interleave, and finds the optimal interpretation labels by using the simulated annealing algorithm.
Abstract: Interpreting images is a difficult task to automate. Image interpretation essentially consists of both low level and high level vision tasks. In this paper, we develop a joint scheme for segmentation and image interpretation in a multiresolution framework, where segmentation (low level) and interpretation (high level) interleave. The idea is that the interpretation block should be able to guide the segmentation block, which in turn helps the interpretation block achieve a better interpretation. We assume that the conditional probability of the interpretation labels, given the knowledge vector and the measurement vector, is a Markov random field (MRF) and formulate the problem as a MAP estimation problem at each resolution. We find the optimal interpretation labels by using the simulated annealing algorithm. The proposed algorithm is validated on some real scene images.

Proceedings ArticleDOI
05 Aug 1996
TL;DR: Experimental results show that the performance of the tagger improves as the authors add more statistical information, and that the MRF-based tagging model handles the data sparseness problem better than HMM-based tagging.
Abstract: Probabilistic models have been widely used for natural language processing. Part-of-speech tagging, which assigns the most likely tag to each word in a given sentence, is one of the problems that can be solved by a statistical approach. Many researchers have tried to solve the problem with the hidden Markov model (HMM), which is well known as one of the statistical models. But it has many difficulties: integrating heterogeneous information, coping with the data sparseness problem, and adapting to new environments. In this paper, we propose a Markov random field (MRF) model based approach to the tagging problem. The MRF provides the base frame to combine various statistical information with the maximum entropy (ME) method. As a Gibbs distribution can be used to describe the a posteriori probability of a tagging, we use it in the maximum a posteriori (MAP) optimization process. Besides, several tagging models are developed to show the effect of adding information. Experimental results show that the performance of the tagger improves as we add more statistical information, and that the MRF-based tagging model handles the data sparseness problem better than HMM-based tagging.
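A toy version of the MAP tagging objective: the Gibbs score of a tag sequence is a sum of weighted emission and transition features. The feature weights below are invented for illustration, and brute-force enumeration stands in for whatever search procedure the paper used.

```python
import itertools
import math

def map_tags(words, tagset, lam_emit, lam_trans):
    """Brute-force MAP tag sequence under a Gibbs distribution whose energy
    is a sum of weighted emission (word, tag) and transition (tag, tag)
    features; missing features contribute weight 0."""
    best, best_score = None, -math.inf
    for seq in itertools.product(tagset, repeat=len(words)):
        s = sum(lam_emit.get((w, t), 0.0) for w, t in zip(words, seq))
        s += sum(lam_trans.get(p, 0.0) for p in zip(seq, seq[1:]))
        if s > best_score:
            best, best_score = seq, s
    return list(best)
```

Because the score is a plain sum of feature weights, heterogeneous information sources can be combined by simply adding more weighted features, which is the integration advantage the abstract claims over the HMM.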

03 Oct 1996
TL;DR: This work formalizes a decision-theoretic framework for the solution of the image matching problem by constructing a Bayesian model of the problem, and describes its prior expectations about the general form of the mapping, such as its smoothness, through Gibbs modeling.
Abstract: The problem of determining the mapping between a pair of images is called image matching and is fundamental in image processing. We formalize a decision-theoretic framework for its solution by constructing a Bayesian model of the problem. Unlike traditional matching methods, the Bayesian formulation formally embodies the notions of uncertainty in the measurements and prior information that may be available about the problem. We illustrate the advantages of the approach and its development through the implementation of a volume warping system to ameliorate the difficult task of anatomical localization in tomographic scans of human anatomy. The likelihood of a mapping can in general be inferred from an observed image pair or their features by measuring the degree to which one image is made similar to the other through the mapping. A natural choice for our solution would be the mapping with the greatest likelihood. The problem of calculating the maximum likelihood estimate, however, is ill-posed because of the sparsity of informative features within the images. This motivates the introduction of prior information, in the form of constraints, with which to regularize the matching problem. We describe our prior expectations about the general form of the mapping, such as its smoothness, through Gibbs modeling. The most probable mapping given both the prior and sample information is the maximum a posteriori solution. We estimate a finite element representation of this value using a multi-level optimization scheme. In addition, an iterative Gibbs sampling algorithm is developed to stochastically estimate the minimum mean squared error solution. Its implementation capitalizes on our finite element approximation to the mapping, which allows the posterior distribution to be represented as a Markov random field. Beyond the estimation of our mappings, Bayesian analysis can also determine their uncertainty or reliability. 
We demonstrate these aspects of the approach on two- and three-dimensional images of the human brain.