
Showing papers on "Markov random field published in 1995"


Book
01 Aug 1995
TL;DR: This book presents a comprehensive study on the use of MRFs for solving computer vision problems, and covers the following parts essential to the subject: introduction to fundamental theories, formulations of MRF vision models, MRF parameter estimation, and optimization algorithms.
Abstract: From the Publisher: Markov random field (MRF) theory provides a basis for modeling contextual constraints in visual processing and interpretation. It enables us to develop optimal vision algorithms systematically when used with optimization principles. This book presents a comprehensive study on the use of MRFs for solving computer vision problems. The book covers the following parts essential to the subject: introduction to fundamental theories, formulations of MRF vision models, MRF parameter estimation, and optimization algorithms. Various vision models are presented in a unified framework, including image restoration and reconstruction, edge and region segmentation, texture, stereo and motion, object matching and recognition, and pose estimation. This book is an excellent reference for researchers working in computer vision, image processing, statistical pattern recognition, and applications of MRFs. It is also suitable as a text for advanced courses in these areas.

1,333 citations
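By the Hammersley-Clifford theorem that underpins this framework, an MRF prior can be written as a Gibbs distribution P(x) ∝ exp(−U(x)), where the energy U(x) is a sum of clique potentials over neighboring sites. A minimal sketch of such an energy, assuming an illustrative Ising-style pairwise potential (this is not code from the book; the function name and β are assumptions):

```python
import numpy as np

def gibbs_energy(x, beta=1.0):
    """Ising-style pairwise Gibbs energy: each horizontal/vertical
    neighbor pair contributes -beta when the two labels agree."""
    x = np.asarray(x)
    h = np.sum(x[:, :-1] == x[:, 1:])   # horizontal pair cliques
    v = np.sum(x[:-1, :] == x[1:, :])   # vertical pair cliques
    return -beta * (h + v)

# A homogeneous labeling has lower energy (higher prior probability)
# than a checkerboard of the same size.
flat = np.zeros((4, 4), dtype=int)
checker = np.indices((4, 4)).sum(axis=0) % 2
assert gibbs_energy(flat) < gibbs_energy(checker)
```

Lower energy means higher prior probability, which is how such models encode the contextual constraint that neighboring labels tend to agree.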


Journal ArticleDOI
TL;DR: An unsupervised segmentation algorithm which uses Markov random field models for color textures which characterize a texture in terms of spatial interaction within each color plane and interaction between different color planes is presented.
Abstract: We present an unsupervised segmentation algorithm which uses Markov random field models for color textures. These models characterize a texture in terms of spatial interaction within each color plane and interaction between different color planes. The models are used by a segmentation algorithm based on agglomerative hierarchical clustering. At the heart of agglomerative clustering is a stepwise optimal merging process that at each iteration maximizes a global performance functional based on the conditional pseudolikelihood of the image. A test for stopping the clustering is applied based on rapid changes in the pseudolikelihood. We provide experimental results that illustrate the advantages of using color texture models and that demonstrate the performance of the segmentation algorithm on color images of natural scenes. Most of the processing during segmentation is local, making the algorithm amenable to high-performance parallel implementation.

485 citations
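The conditional pseudolikelihood driving the merging process is the product over sites of P(x_s | neighbors of s). A hedged sketch for a binary Ising-type field (a stand-in for the paper's color-texture models; the function name and β are illustrative assumptions):

```python
import numpy as np

def log_pseudolikelihood(x, beta=0.5):
    """Log conditional pseudolikelihood of a binary label image under a
    simple isotropic Ising-type MRF: sum over sites of
    log P(x_s = current label | 4-neighborhood)."""
    x = np.asarray(x)
    pad = np.pad(x, 1, mode='edge')
    nbrs = [pad[:-2, 1:-1], pad[2:, 1:-1], pad[1:-1, :-2], pad[1:-1, 2:]]
    # number of 4-neighbors agreeing with each site's current label
    agree = sum((n == x).astype(float) for n in nbrs)
    # P(x_s | nbrs) = e^{beta*agree} / (e^{beta*agree} + e^{beta*(4-agree)})
    logp = beta * agree - np.logaddexp(beta * agree, beta * (4 - agree))
    return logp.sum()

flat = np.zeros((8, 8), dtype=int)
noisy = flat.copy()
noisy[::2, ::2] = 1
# A coherent labeling scores higher than a speckled one.
assert log_pseudolikelihood(flat) > log_pseudolikelihood(noisy)
```

Merging two clusters is then accepted when it increases this global score, which is cheap to evaluate because every term is local.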


Journal ArticleDOI
TL;DR: The algorithm was notably successful in the detection of minimal cancers manifested by masses, and an extensive study of the effects of the algorithm's parameters on its sensitivity and specificity was performed in order to optimize the method for a clinical, observer performance study.
Abstract: A technique is proposed for the detection of tumors in digital mammography. Detection is performed in two steps: segmentation and classification. In segmentation, regions of interest are first extracted from the images by adaptive thresholding. A further reliable segmentation is achieved by a modified Markov random field (MRF) model-based method. In classification, the MRF segmented regions are classified into suspicious and normal by a fuzzy binary decision tree based on a series of radiographic, density-related features. A set of normal (50) and abnormal (45) screen/film mammograms were tested. The latter contained 48 biopsy proven, malignant masses of various types and subtlety. The detection accuracy of the algorithm was evaluated by means of a free response receiver operating characteristic curve which shows the relationship between the detection of true positive masses and the number of false positive alarms per image. The results indicated that a 90% sensitivity can be achieved in the detection of different types of masses at the expense of two falsely detected signals per image. The algorithm was notably successful in the detection of minimal cancers manifested by masses ≤10 mm in size. For the 16 such cases in the authors' dataset, a 94% sensitivity was observed with 1.5 false alarms per image. An extensive study of the effects of the algorithm's parameters on its sensitivity and specificity was also performed in order to optimize the method for a clinical, observer performance study.

304 citations


Journal ArticleDOI
TL;DR: A global contour model based on a stable and regenerative shape matrix, which is invariant and unique under rigid motions, is proposed; combined with a Markov random field to model local deformations, it yields a prior distribution that exerts influence over a global model while allowing for deformations.
Abstract: This paper considers the problem of modeling and extracting arbitrary deformable contours from noisy images. We propose a global contour model based on a stable and regenerative shape matrix, which is invariant and unique under rigid motions. Combined with a Markov random field to model local deformations, this yields a prior distribution that exerts influence over a global model while allowing for deformations. We then cast the problem of extraction into posterior estimation and show its equivalence to energy minimization of a generalized active contour model. We discuss pertinent issues in shape training, energy minimization, line search strategies, minimax regularization and initialization by generalized Hough transform. Finally, we present experimental results and compare the method's performance to rigid template matching.

189 citations


Journal ArticleDOI
TL;DR: An unsupervised texture segmentation method that does not require knowledge about the different texture regions, their parameters, or the number of available texture classes to be known a priori is presented.
Abstract: Many studies have proven that statistical model-based texture segmentation algorithms yield good results provided that the model parameters and the number of regions are known a priori. In this correspondence, we present an unsupervised texture segmentation method that does not require knowledge about the different texture regions, their parameters, or the number of available texture classes. The proposed algorithm relies on the analysis of local and global second- and higher-order spatial statistics of the original images. The segmentation map is modeled using an augmented-state Markov random field, including an outlier class that enables dynamic creation of new regions during the optimization process. A Bayesian estimate of this map is computed using a deterministic relaxation algorithm. Results on real-world textured images are presented.

114 citations


Journal ArticleDOI
TL;DR: The present paper shows how the mean field theory can be applied to MRF model-based motion estimation, and it provides results nearly as good as SA but with much faster convergence.
Abstract: Previously, Markov random field (MRF) model-based techniques have been proposed for image motion estimation. Since motion estimation is usually an ill-posed problem, various constraints are needed to obtain a unique and stable solution. The main advantage of the MRF approach is its capacity to incorporate such constraints, for instance, motion continuity within an object and motion discontinuity at the boundaries between objects. In the MRF approach, motion estimation is often formulated as an optimization problem, and two frequently used optimization methods are simulated annealing (SA) and iterated conditional modes (ICM). Although SA is theoretically optimal in the sense of finding the global optimum, it usually takes many iterations to converge. ICM, on the other hand, converges quickly, but its results are often unsatisfactory due to its "hard decision" nature. Previously, the authors have applied mean field theory to image segmentation and image restoration problems. It provides results nearly as good as SA but with much faster convergence. The present paper shows how mean field theory can be applied to MRF model-based motion estimation. This approach is demonstrated on both synthetic and real-world images, where it produced good motion estimates.

79 citations
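The "hard decision" nature of ICM refers to its greedy per-site update: each site is set to the label that minimizes the local posterior energy given its neighbors, with no randomness. A sketch of ICM for binary image restoration, not the paper's motion estimator (the data and smoothness terms and their weighting are illustrative assumptions):

```python
import numpy as np

def icm_denoise(y, beta=1.0, n_sweeps=5):
    """Iterated conditional modes: at each site pick the label minimizing
    a data term (disagreement with observation y) plus a smoothness term
    (disagreement with the current 4-neighbor labels)."""
    x = y.copy()
    H, W = x.shape
    for _ in range(n_sweeps):
        for i in range(H):
            for j in range(W):
                best, best_e = x[i, j], np.inf
                for lab in (0, 1):
                    e = float(lab != y[i, j])            # data term
                    for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                        ni, nj = i + di, j + dj
                        if 0 <= ni < H and 0 <= nj < W:  # smoothness term
                            e += beta * float(lab != x[ni, nj])
                    if e < best_e:
                        best, best_e = lab, e
                x[i, j] = best
    return x

y = np.zeros((6, 6), dtype=int)
y[3, 3] = 1                       # a single corrupted pixel
assert icm_denoise(y).sum() == 0  # the isolated flip is smoothed away
```

Each sweep can only decrease the energy, which is why ICM converges quickly but can get trapped in poor local minima that SA would escape.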


Journal ArticleDOI
TL;DR: It is shown that hidden Markov models are dense among essentially all finite-state discrete-time stationary processes and finite-state lattice-based stationary random fields, leading to a nearly universal parameterization and a consistent nonparametric estimator.
Abstract: A noninvertible function of a first-order Markov process or of a nearest-neighbor Markov random field is called a hidden Markov model. Hidden Markov models are generally not Markovian. In fact, they may have complex and long range interactions, which is largely the reason for their utility. Applications include signal and image processing, speech recognition and biological modeling. We show that hidden Markov models are dense among essentially all finite-state discrete-time stationary processes and finite-state lattice-based stationary random fields. This leads to a nearly universal parameterization of stationary processes and stationary random fields, and to a consistent nonparametric estimator. We show the results of attempts to fit simple speech and texture patterns.

69 citations


Proceedings ArticleDOI
11 Aug 1995
TL;DR: It is proposed to classify hierarchical MRF-based approaches as explicit and implicit methods, with appropriate subclasses, and several specific examples of each class of approach are described.
Abstract: The need for hierarchical statistical tools for modeling and processing image data, as well as the success of Markov random fields (MRFs) in image processing, have recently given rise to a significant research activity on hierarchical MRFs and their application to image analysis problems. Important contributions, relying on different models and optimization procedures, have thus been recorded in the literature. This paper presents a synthetic overview of available models and algorithms, as well as an attempt to clarify the vocabulary in this field. We propose to classify hierarchical MRF-based approaches as explicit and implicit methods, with appropriate subclasses. Each of these major classes is defined in the paper, and several specific examples of each class of approach are described.

68 citations


20 Nov 1995
TL;DR: This thesis presents an integrated approach to modeling, extracting, detecting and classifying deformable contours directly from noisy images, using the minimax principle to derive a regularization criterion whereby the regularization values can be automatically and implicitly determined along the contour.
Abstract: This thesis presents an integrated approach to modeling, extracting, detecting and classifying deformable contours directly from noisy images. We begin by conducting a case study on regularization, formulation and initialization of active contour models (snakes). Using the minimax principle, we derive a regularization criterion whereby the regularization values can be automatically and implicitly determined along the contour. Furthermore, we formulate a set of energy functionals which yield snakes that contain the Hough transform as a special case. Subsequently, we consider the problem of modeling and extracting arbitrary deformable contours from noisy images. We combine a stable, invariant and unique contour model with a Markov random field to yield a prior distribution that exerts influence over an arbitrary global model while allowing for deformation. Under the Bayesian framework, contour extraction turns into posterior estimation, which is in turn equivalent to energy minimization in a generalized active contour model. Finally, we integrate these lower-level visual tasks with the pattern recognition processes of detection and classification. Based on the Neyman-Pearson lemma, we derive the optimal detection and classification tests. As the summation is peaked in most practical applications, only small regions need to be considered in marginalizing the distribution. The validity of our formulation has been confirmed by extensive and rigorous experimentation.

68 citations


Journal ArticleDOI
TL;DR: This work discusses the design of loss functions with a local structure that depend only on a binary misclassification vector and calculates the Bayes estimate using Markov chain Monte Carlo and simulated annealing algorithms.
Abstract: Unlike the development of more accurate prior distributions for use in Bayesian imaging, the design of more sensible estimators through loss functions has been neglected in the literature. We discuss the design of loss functions with a local structure that depend only on a binary misclassification vector. The proposed approach is similar to modeling with a Markov random field. The Bayes estimate is calculated in a two-step algorithm using Markov chain Monte Carlo and simulated annealing algorithms. We present simulation experiments with the Ising model, where the observations are corrupted with Gaussian and flip noise.

47 citations
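The two-step Bayes estimate above relies on MCMC sampling from the posterior. A sketch of one Gibbs-sampling sweep for a binary Ising field observed through flip noise (the β and p_flip values, the sweep order, and the function name are illustrative assumptions, not the paper's algorithm):

```python
import numpy as np

def gibbs_sweep(x, y, beta=0.8, p_flip=0.1, rng=None):
    """One raster-order Gibbs-sampling sweep over the posterior of a
    binary Ising field x given flip-noise observations y (each pixel of
    y equals x with probability 1 - p_flip)."""
    rng = np.random.default_rng(0) if rng is None else rng
    H, W = x.shape
    log_lik = {True: np.log(1 - p_flip), False: np.log(p_flip)}
    for i in range(H):
        for j in range(W):
            score = [0.0, 0.0]
            for lab in (0, 1):
                s = log_lik[bool(y[i, j] == lab)]        # likelihood term
                for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    ni, nj = i + di, j + dj
                    if 0 <= ni < H and 0 <= nj < W:      # prior term
                        s += beta if x[ni, nj] == lab else -beta
                score[lab] = s
            p1 = 1.0 / (1.0 + np.exp(score[0] - score[1]))
            x[i, j] = int(rng.random() < p1)
    return x

y = np.zeros((6, 6), dtype=int)
y[3, 3] = 1                # one flipped observation
x = y.copy()
for _ in range(3):
    gibbs_sweep(x, y)
```

Collecting many such sweeps gives posterior samples; marginal statistics of those samples are what the loss-function-based estimators of the paper are computed from.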


Journal ArticleDOI
TL;DR: This paper presents a novel approach by introducing a Bayesian probability of homogeneity in a general statistical context that is particularly beneficial for cases in which estimation-based methods are most prone to error: when little information is contained in some of the regions and, therefore, parameter estimates are unreliable.
Abstract: Region-based image segmentation methods require some criterion for determining when to merge regions. This paper presents a novel approach by introducing a Bayesian probability of homogeneity in a general statistical context. The authors' approach does not require parameter estimation and is therefore particularly beneficial for cases in which estimation-based methods are most prone to error: when little information is contained in some of the regions and, therefore, parameter estimates are unreliable. The authors apply this formulation to three distinct parametric model families that have been used in past segmentation schemes: implicit polynomial surfaces, parametric polynomial surfaces, and Gaussian Markov random fields. The authors present results on a variety of real range and intensity images.

Proceedings ArticleDOI
23 Oct 1995
TL;DR: This paper deals with the detection and extraction of poorly contrasted rectilinear structures in textured areas, using a Markov random field model, in the analysis of pavement distress, and more particularly pavement cracks.
Abstract: This paper deals with the detection and extraction of poorly contrasted rectilinear structures in textured areas, using a Markov random field model. The application is in the analysis of pavement distress, and more particularly pavement cracks. A local crack detection is first performed, where the pavement texture is seen as additive correlated noise. The resulting line image is then projected onto a regular lattice composed of straight line segments. A graph structure is associated with this lattice, which allows the definition of a Markovian crack model, where sites are no longer the image pixels, but straight line segments. The model is used to determine the location and shape of the rectilinear structures, with a given orientation, in the observed lattice. The actual defects can then be extracted by simple post-processing.

Journal ArticleDOI
TL;DR: A general framework is presented, based on Bayesian estimation theory with the use of Markov random field models to construct the prior distribution, so that the solution to the unwrapping problem is characterized as the minimizer of a piecewise-quadratic functional.
Abstract: A general framework is presented for the design of parallel algorithms for two-dimensional, path-independent phase unwrapping of locally inconsistent, noisy principal-value phase fields that may contain regions of invalid information. This framework is based on Bayesian estimation theory with the use of Markov random field models to construct the prior distribution, so that the solution to the unwrapping problem is characterized as the minimizer of a piecewise-quadratic functional. This method allows one to design a variety of parallel algorithms with different computational properties, which simultaneously perform the desired path-independent unwrapping, interpolate over regions with invalid data, and reduce the noise. It is also shown how this approach may be extended to the case of discontinuous phase fields, incorporating information from fringe patterns of different frequencies.
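The basic observable in such formulations is the wrapped first difference of the principal-value phase, which equals the true difference whenever the underlying phase changes by less than π per sample. A 1-D sketch of this idea (illustrative only; it does not reproduce the paper's 2-D, path-independent Bayesian formulation):

```python
import numpy as np

def unwrap_1d(psi):
    """Unwrap a 1-D principal-value phase signal by integrating its
    wrapped first differences (the building block of 2-D least-squares
    phase unwrapping)."""
    d = np.diff(psi)
    d_wrapped = (d + np.pi) % (2 * np.pi) - np.pi   # wrap diffs to [-pi, pi)
    return np.concatenate(([psi[0]], psi[0] + np.cumsum(d_wrapped)))

# A slowly varying ramp wrapped into [-pi, pi) is recovered exactly.
true_phase = np.linspace(0, 6 * np.pi, 100)
wrapped = (true_phase + np.pi) % (2 * np.pi) - np.pi
assert np.allclose(unwrap_1d(wrapped), true_phase)
```

In 2-D with noise, integrating differences along paths is no longer consistent, which is exactly why the paper recasts unwrapping as minimization of a global piecewise-quadratic functional.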

Journal ArticleDOI
TL;DR: A new criterion for classifying multispectral remote sensing images or textured images by using spectral and spatial information is proposed and a stepwise classification algorithm is derived.
Abstract: A new criterion for classifying multispectral remote sensing images or textured images by using spectral and spatial information is proposed. The images are modeled with a hierarchical Markov Random Field (MRF) model that consists of the observed intensity process and the hidden class label process. The class labels are estimated according to the maximum a posteriori (MAP) criterion, but some reasonable approximations are used to reduce the computational load. A stepwise classification algorithm is derived and is confirmed by simulation and experimental results.

Journal ArticleDOI
TL;DR: In this article, two classification approaches were investigated for the mapping of tropical forests from Landsat-TM data of a region north of Manaus in the Brazilian state of Amazonas.
Abstract: Two classification approaches were investigated for the mapping of tropical forests from Landsat-TM data of a region north of Manaus in the Brazilian state of Amazonas. These incorporated textural information and made use of fuzzy approaches to classification. In eleven-class classifications the texture-based classifiers (based on a Markov random field model) consistently provided higher classification accuracies than conventional per-pixel maximum likelihood and minimum distance classifications, indicating that they are more able to characterize accurately several regenerating forest classes. Measures of the strength of class memberships derived from three classification algorithms (based on the probability density function, a posteriori probability and the Mahalanobis distance) could be used to derive fuzzy image classifications and be used in post-classification processing. The latter, involving either the summation of class memberships over a local neighbourhood or the application of homogene...

Proceedings ArticleDOI
23 Oct 1995
TL;DR: This paper deals with motion-segmentation, that is, with the partitioning of the image into regions of homogeneous motion, and is able to get a good segmentation from the very beginning of the sequence, and to manage the appearance of new objects in the scene.
Abstract: This paper deals with motion segmentation, that is, with the partitioning of the image into regions of homogeneous motion. Here, homogeneous means that in each region a 2D polynomial model (e.g. an affine one) is able to describe at each location the underlying "true" motion with a predefined precision η. However, no estimation of this true motion field is required. The motion models are computed using a multiresolution robust estimator. Therefore, as opposed to almost all other motion-segmentation schemes, the motion model of a given region only needs to be estimated once at a given time instant. Moreover, the determination of the boundaries between the different regions, which is stated as a statistical regularization based on a multiscale Markov random field (MRF) modeling, only requires one pass. Finally, thanks to the definition of an explicit detection step for areas where the error between the underlying motion and the one given by the estimated models is not within the precision η, we are able to get a good segmentation from the very beginning of the sequence, and to manage the appearance of new objects in the scene, as well as the momentary increase in the complexity of motion in already existing regions. Results obtained on many real image sequences have validated our approach.
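For reference, a 2D affine motion model assigns six parameters per region: u = a0 + a1·x + a2·y and v = a3 + a4·x + a5·y. A sketch of the dense flow field such a model induces (the parameter ordering and function name are assumptions):

```python
import numpy as np

def affine_flow(theta, H, W):
    """Dense optical flow (u, v) on an H-by-W grid from a 6-parameter
    2-D affine motion model:
        u = a0 + a1*x + a2*y,  v = a3 + a4*x + a5*y."""
    a0, a1, a2, a3, a4, a5 = theta
    y, x = np.mgrid[0:H, 0:W]
    return a0 + a1 * x + a2 * y, a3 + a4 * x + a5 * y

# Pure horizontal translation: u = 1 everywhere, v = 0 everywhere.
u, v = affine_flow([1, 0, 0, 0, 0, 0], 4, 4)
assert np.all(u == 1) and np.all(v == 0)
```

Segmentation then amounts to assigning each pixel to the region whose six parameters predict its motion within the precision η.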

Journal ArticleDOI
TL;DR: Methods are described for approximately computing the marginal probability mass functions and means of a Markov random field (MRF) by approximating the lattice by a tree, and several theoretical results concerning fixed-point problems are proven.
Abstract: Methods for approximately computing the marginal probability mass functions and means of a Markov random field (MRF) by approximating the lattice by a tree are described. Applied to the a posteriori MRF, these methods solve Bayesian spatial pattern classification and image restoration problems. Several theoretical results concerning fixed-point problems are proven, and four numerical examples are presented, including comparisons with optimal estimators and the iterated-conditional-modes estimator, and two agricultural optical remote sensing problems.

Journal ArticleDOI
TL;DR: The multispectral model is used in a Bayesian algorithm for the restoration of color images, in which the resulting nonlinear estimates are shown to be quantitatively and visually superior to linear estimates generated by multichannel Wiener and least squares restoration.
Abstract: Multispectral images consist of multiple channels, each containing data acquired from a different band within the frequency spectrum. Since most objects emit or reflect energy over a large spectral bandwidth, there usually exists a significant correlation between channels. Due to often harsh imaging environments, the acquired data may be degraded by both blur and noise. Simply applying a monochromatic restoration algorithm to each frequency band ignores the cross-channel correlation present within a multispectral image. A Gibbs prior is proposed for multispectral data modeled as a Markov random field, containing both spatial and spectral cliques. Spatial components of the model use a nonlinear operator to preserve discontinuities within each frequency band, while spectral components incorporate nonstationary cross-channel correlations. The multispectral model is used in a Bayesian algorithm for the restoration of color images, in which the resulting nonlinear estimates are shown to be quantitatively and visually superior to linear estimates generated by multichannel Wiener and least squares restoration.

Journal ArticleDOI
TL;DR: In this paper, a Markov random field or Gibbs model was proposed for piecewise homogeneous images, using not only low-order clique interactions to model homogeneity but also high-order clique interactions to model the continuity of borders in the images.
Abstract: We expound the idea that priors for Bayesian image processing should be constructed as adequate probabilistic models. We illustrate the idea by developing a Markov random field or Gibbs model for a class of piecewise homogeneous images, using not only low-order clique interactions to model homogeneity but also high-order clique interactions to model the continuity of borders in the images. We propose a semi-autonomous parameter estimation method which takes into account the global properties of the images. With the parameters determined this way, we demonstrate that the proposed model is indeed "image-modeling": random realizations contain piecewise homogeneous global features similar to the given data image. By using a $\chi^2$ goodness-of-fit test, we show that the model can be regarded as a statistically adequate model. We further demonstrate that the most essential parameter of the image model, related to the regularization parameter in the optimization problem, is naturally determined in the construction of the image model. We illustrate the superior performance of the model in restoration experiments. Inspired by the above success, we generalize the model for modeling piecewise smooth images. We discuss its application to the image reconstruction problem in Positron Emission Tomography (PET) within the Bayesian framework. The usefulness of the prior in MAP estimation is investigated using a three-stage procedure: running first the Filtered Backprojection algorithm, then a Bayesian restoration process, followed by Iterated Conditional Modes (ICM). We show the effectiveness of the restoration process, and further compare the performance of an MMSE estimator (implemented using Monte Carlo methods) with that of the MAP estimator based on the restoration model alone. The advantages of our model are illustrated using both simulated and real PET data.
We perform further evaluation experiments and compare the performance of our algorithms with that of their competitors, based on a training-and-testing methodology and statistical hypothesis tests. Using specific, medically relevant figures of merit as the basis for comparison, we demonstrate that the performance of the proposed model is statistically significantly better.

Journal ArticleDOI
TL;DR: A class of penalty functions for use in estimation and image regularization, defined for vectors whose indexes are locations in a finite lattice, is proposed, and their relationship to Markov random field priors is explored.
Abstract: A class of penalty functions for use in estimation and image regularization is proposed. These penalty functions are defined for vectors whose indexes are locations in a finite lattice as the discrepancy between the vector and a shifted version of itself. After motivating this class of penalty functions, their relationship to Markov random field priors is explored. One of the penalty functions proposed, a divergence roughness penalty, is shown to be a discretization of a penalty proposed by Good and Gaskins (1971) for use in density estimation. One potential use in estimation problems is explored. An iterative algorithm that takes advantage of induced neighborhood structures is proposed, and convergence of the algorithm is proven under specified conditions. Examples in emission tomographic imaging and radar imaging are given.

Proceedings ArticleDOI
TL;DR: The recognition of seismic texture can be reduced to a simple mathematical operation which makes the method very efficient and has a relatively low demand in computer storage during computation.
Abstract: Statistical recognition of seismic reflection patterns is a useful aid to seismic data interpretation. It classifies windows of carefully processed seismic data into a number of groups, each characterized by a distinct reflection pattern. The recognition is based on a set of reference patterns extracted from representative or geologically well understood zones. The seismic data is modelled as a Markov random field. The probability distribution of patterns in a given data window is used to characterize its seismic texture. The recognition of seismic texture can be reduced to a simple mathematical operation which makes the method very efficient. In addition, the method has a relatively low demand in computer storage during computation. The method of recognizing seismic texture is illustrated on a set of offshore reflection data. Applied to these data the method distinguishes clearly between different layers and efficiently separates zones of different internal stratification, even when they are hard to see for the naked eye.

Journal ArticleDOI
TL;DR: A family of approximations, denoted "cluster approximations", for the computation of the mean of a Markov random field (MRF) is described, and several existence, uniqueness, and convergence-of-algorithm results are proven.
Abstract: We describe a family of approximations, denoted by "cluster approximations", for the computation of the mean of a Markov random field (MRF). This is a key computation in image processing when applied to the a posteriori MRF. The approximation is to account exactly for only spatially local interactions. Application of the approximation requires the solution of a nonlinear multivariable fixed-point equation for which we prove several existence, uniqueness, and convergence-of-algorithm results. Four numerical examples are presented, including comparison with Monte Carlo calculations.
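The simplest member of this family reduces to the mean-field equations: for a ±1 Ising field with coupling β and external field h, the site means satisfy the fixed point m_s = tanh(β Σ_{t∼s} m_t + h). A sketch of the fixed-point iteration (β, h, and the function name are illustrative assumptions):

```python
import numpy as np

def mean_field_ising(shape=(8, 8), beta=0.6, h=0.5, n_iter=200):
    """Fixed-point iteration for the mean-field approximation to the
    site means of a +/-1 Ising MRF:
        m_s <- tanh(beta * sum of neighbor means + h)."""
    m = np.zeros(shape)
    for _ in range(n_iter):
        pad = np.pad(m, 1)   # zero boundary
        nbr_sum = (pad[:-2, 1:-1] + pad[2:, 1:-1]
                   + pad[1:-1, :-2] + pad[1:-1, 2:])
        m = np.tanh(beta * nbr_sum + h)
    return m

m = mean_field_ising()
# A positive external field biases every site mean into (0, 1).
assert np.all(m > 0) and np.all(m < 1)
```

Higher-order cluster approximations treat small groups of sites exactly and so are more accurate, but they solve the same kind of nonlinear multivariable fixed-point equation.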

Journal ArticleDOI
TL;DR: In this article, a multilayer Markov random field is proposed to estimate three basic physical quantities that are nonlinearly related to the observations in synthetic magnetic resonance imaging in a Bayesian framework.
Abstract: Synthetic magnetic resonance imaging involves the estimation, based on a set of measured images with noise, of three basic physical quantities that are nonlinearly related to the observations. The methods currently available for this ill-conditioned inverse problem either do not provide sufficiently accurate estimates or require time-consuming data collection. We formulate this nonlinear problem in a Bayesian framework, taking into account knowledge about the physics of the magnetic resonance imaging experiment, statistical properties of the experimental noise, and prior information about the underlying physical quantities, modelled by a suitable Markov random field. A new multilayer Markov random field is proposed. Inference is drawn by means of Markov chain Monte Carlo methods or iterated conditional modes. Some examples are included to demonstrate how synthetic magnetic resonance imaging by this approach can be performed in an accurate and reliable way.

Book ChapterDOI
11 Dec 1995
TL;DR: A new and general segmentation algorithm involving 3D adaptive K-Means clustering in a multiresolution wavelet basis is proposed and demonstrated via application to phantom images as well as MR brain scans.
Abstract: Segmentation of MR brain scans has received an enormous amount of attention in the medical imaging community over the past several years. In this paper we propose a new and general segmentation algorithm involving 3D adaptive K-Means clustering in a multiresolution wavelet basis. The voxel image of the brain is segmented into five classes, namely cerebrospinal fluid, gray matter, white matter, bone and background (remaining pixels). The segmentation problem is formulated as a maximum a posteriori (MAP) estimation problem wherein the prior is assumed to be a Markov Random Field (MRF). The MAP estimation is achieved using an iterated conditional modes (ICM) technique in the wavelet basis. Performance of the segmentation algorithm is demonstrated via application to phantom images as well as MR brain scans.

Journal ArticleDOI
TL;DR: This work describes how the GRF can be efficiently incorporated into optimization processes in several representative applications, ranging from image segmentation to image enhancement, and demonstrates that various features of images can all be properly characterized by a GRF.
Abstract: The Gibbs random field (GRF) has proved to be a simple and practical way of parameterizing the Markov random field, which has been widely used to model an image or image-related process in many image processing applications. In particular, the GRF can be employed to construct an efficient Bayesian estimation that often yields optimal results. We describe how the GRF can be efficiently incorporated into optimization processes in several representative applications, ranging from image segmentation to image enhancement. One example is the segmentation of computerized tomography (CT) volumetric image sequences, in which the GRF has been incorporated into K-means clustering to enforce the neighborhood constraints. Another example is artifact removal in discrete cosine transform-based low bit rate image compression, where the GRF has been used to design an enhancement algorithm that reduces the "blocking effect" and the "ringing effect" while still preserving the image details. The third example is the integration of the GRF in a wavelet-based subband video coding scheme, in which the high-frequency subbands are segmented and quantized with spatial constraints specified by a GRF, and the subsequent enhancement of the decompressed images is accomplished by smoothing with another type of GRF. With these diverse examples, we are able to demonstrate that various features of images can all be properly characterized by a GRF. The specific form of the GRF can be selected according to the characteristics of an individual application. We believe that the GRF is a powerful tool to exploit the spatial dependency in various images, and is applicable to many image processing tasks.

Journal ArticleDOI
TL;DR: This article introduces scalable data parallel algorithms for image processing based on Gibbs and Markov random field model representation for textures that yields real-time algorithms for texture synthesis and compression that are substantially faster than the previously known sequential implementations.
Abstract: This article introduces scalable data parallel algorithms for image processing. Focusing on Gibbs and Markov random field model representation for textures, we present parallel algorithms for texture synthesis, compression, and maximum likelihood parameter estimation, currently implemented on Thinking Machines CM-2 and CM-5. The use of fine-grained, data parallel processing techniques yields real-time algorithms for texture synthesis and compression that are substantially faster than the previously known sequential implementations. Although current implementations are on Connection Machines, the methodology presented enables machine-independent scalable algorithms for a number of problems in image processing and analysis.
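The data-parallel structure the abstract exploits can be seen in a checkerboard Gibbs sampler: sites of one parity have no neighbors of the same parity, so each half of the lattice can be updated simultaneously. A vectorized sketch for a binary (Ising) texture model with periodic boundaries is shown below; the coupling `beta` and lattice size are assumed, and this is an illustration of the update scheme, not the authors' CM-2/CM-5 implementation.

```python
import numpy as np

def gibbs_sweep(s, beta, rng):
    """One checkerboard Gibbs sweep of a +/-1 Ising field. The two parities
    contain no neighboring pairs, so each half-update is embarrassingly
    parallel (here vectorized with NumPy; on a data-parallel machine each
    site maps to a processing element)."""
    H, W = s.shape
    parity = np.add.outer(np.arange(H), np.arange(W)) % 2
    for p in (0, 1):
        nb = (np.roll(s, 1, 0) + np.roll(s, -1, 0) +
              np.roll(s, 1, 1) + np.roll(s, -1, 1))    # periodic neighbor sum
        prob_up = 1.0 / (1.0 + np.exp(-2.0 * beta * nb))  # P(s_i = +1 | nb)
        draw = np.where(rng.random(s.shape) < prob_up, 1, -1)
        s = np.where(parity == p, draw, s)
    return s
```

Iterating such sweeps from a random field synthesizes a texture sample from the model; the same conditional probabilities drive maximum pseudolikelihood parameter estimation.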

Journal ArticleDOI
TL;DR: A new statistical method is proposed for reduction of truncation artifacts when reconstructing a function by a finite number of its Fourier series coefficients and a solution for selecting the parameters automatically is proposed.
Abstract: A new statistical method is proposed for reduction of truncation artifacts when reconstructing a function from a finite number of its Fourier series coefficients. Following the Bayesian approach, it is possible to take into account both the errors induced by the truncation of the Fourier series and some specific characteristics of the function. A suitable Markov random field is used for modeling these characteristics. Furthermore, in applications like Magnetic Resonance Imaging, where these coefficients are the measured data, the experimental random noise in the data can also be taken into account. Markov chain Monte Carlo methods are used for statistical inference. Parameter selection in the Bayesian model is also addressed, and a solution for selecting the parameters automatically is proposed. The method is applied successfully to both simulated and real magnetic resonance images.
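The data-plus-prior trade-off behind this approach can be sketched in 1D. The abstract's MCMC inference is beyond a short example, so the sketch below uses a Gaussian (quadratic) MRF prior on first differences, for which the MAP estimate has a closed form; the signal, truncation level, and weight `lam` are all assumed. By optimality, the MAP solution is necessarily smoother than the raw truncated-series reconstruction while still honoring the measured coefficients.

```python
import numpy as np

N, M = 64, 10                                     # signal length, kept coefficients
k, n = np.arange(M)[:, None], np.arange(N)[None, :]
A = np.exp(-2j * np.pi * k * n / N) / N           # first M Fourier coefficients
B = np.vstack([A.real, A.imag])                   # real-valued measurement matrix
x_true = np.zeros(N)
x_true[20:44] = 1.0                               # boxcar: sharp edges cause ringing
y = B @ x_true                                    # truncated "measured" data

# Minimum-norm reconstruction from the truncated series (Gibbs ringing).
x_naive = np.linalg.lstsq(B, y, rcond=None)[0]

# MAP with a quadratic MRF prior on first differences:
#   minimize ||B x - y||^2 + lam * ||diff(x)||^2, solved via normal equations.
L = np.diff(np.eye(N), axis=0)
lam = 0.05
x_map = np.linalg.solve(B.T @ B + lam * L.T @ L, B.T @ y)
```

The edge-preserving MRF priors and automatic parameter selection of the paper replace the fixed quadratic penalty and hand-chosen `lam` used here.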

Journal ArticleDOI
TL;DR: A contextual VQ method based on Markov random field (MRF) theory is proposed to model the speech feature vector space; its superiority is confirmed by a series of comparative experiments on a speaker-independent isolated word recognition task, using different VQ schemes as the front end of a DHMM.

Book ChapterDOI
05 Dec 1995
TL;DR: This work uses a multiresolution robust estimator to compute the motion models and explicitly detects areas where the error between the underlying motion and the one given by the estimated models is not within the precision η, which allows us to handle the appearance of new objects in the scene.
Abstract: Analysing the dynamic content of a scene observed by a mobile camera often requires the segmentation of each image of the sequence into region entities of apparent homogeneous motion. To each region is associated a 2D polynomial model (e.g., an affine one) able to describe at each location the underlying 2D “true” motion with a predefined precision η. Thanks to the use of a multiresolution robust estimator [1] to compute the motion models, the determination of the boundaries between the different regions, which is stated as a statistical regularization based on multiscale Markov Random Field (MRF) models, can be achieved in one pass only. This avoids the time-consuming iterations between motion estimation and boundary identification that are encountered in almost all other motion-segmentation schemes (for instance [2, 3, 4]). We explicitly detect areas where the error between the underlying motion and the one given by the estimated models is not within the precision η. This allows us to handle the appearance of new objects in the scene. We have performed numerous experiments with real indoor and outdoor image sequences which demonstrate the efficiency of the method.
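The robust estimation of a 2D affine motion model, the building block of this scheme, can be sketched with iteratively reweighted least squares (IRLS). This is a single-resolution toy version, not the multiresolution estimator of [1]; the Cauchy weight function and its scale `c` are assumptions.

```python
import numpy as np

def robust_affine_flow(pts, flow, n_iter=10, c=0.5):
    """IRLS fit of an affine motion model (u, v) = (a0 + a1*x + a2*y,
    a3 + a4*x + a5*y) to observed flow vectors. Cauchy weights
    w = 1 / (1 + (r/c)^2) shrink the influence of outlier vectors, e.g.
    pixels belonging to a differently moving region."""
    X = np.column_stack([np.ones(len(pts)), pts])   # design matrix [1, x, y]
    w = np.ones(len(pts))
    for _ in range(n_iter):
        Xw = X * w[:, None]                         # weighted design
        au = np.linalg.solve(X.T @ Xw, Xw.T @ flow[:, 0])
        av = np.linalg.solve(X.T @ Xw, Xw.T @ flow[:, 1])
        r = np.hypot(flow[:, 0] - X @ au, flow[:, 1] - X @ av)
        w = 1.0 / (1.0 + (r / c) ** 2)              # reweight by residual
    return au, av
```

In the paper's pipeline, pixels whose final residual exceeds the precision η are exactly the "non-conforming" areas that seed new regions, which is how appearing objects are detected.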

Journal ArticleDOI
TL;DR: Deterministic pseudo-annealing is a new deterministic optimization method for finding the maximum a posteriori (MAP) labeling in a Markov random field, in which the probability of a tentative labeling is extended to a merit function on continuous labelings.
Abstract: Deterministic pseudo-annealing (DPA) is a new deterministic optimization method for finding the maximum a posteriori (MAP) labeling in a Markov random field, in which the probability of a tentative labeling is extended to a merit function on continuous labelings. This function is made convex by changing its definition domain. This unambiguous maximization problem is solved, and the solution is followed down to the original domain, yielding a good, if suboptimal, solution to the original labeling assignment problem. The performance of DPA is analyzed on randomly weighted graphs.