
Showing papers on "Markov random field published in 2005"


Proceedings ArticleDOI
20 Jun 2005
TL;DR: A framework for learning generic, expressive image priors that capture the statistics of natural scenes and can be used for a variety of machine vision tasks; field potentials are modeled with a Products-of-Experts framework.
Abstract: We develop a framework for learning generic, expressive image priors that capture the statistics of natural scenes and can be used for a variety of machine vision tasks. The approach extends traditional Markov random field (MRF) models by learning potential functions over extended pixel neighborhoods. Field potentials are modeled using a Products-of-Experts framework that exploits nonlinear functions of many linear filter responses. In contrast to previous MRF approaches all parameters, including the linear filters themselves, are learned from training data. We demonstrate the capabilities of this Field of Experts model with two example applications, image denoising and image inpainting, which are implemented using a simple, approximate inference scheme. While the model is trained on a generic image database and is not tuned toward a specific application, we obtain results that compete with and even outperform specialized techniques.
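
As a rough illustration of the energy described above, the sketch below evaluates a Field-of-Experts negative log-prior with Student-t experts. This is a minimal sketch, assuming Student-t expert shapes; the 3x3 filters, expert weights, and test image are random placeholders, not the learned parameters from the paper.

```python
# Hedged sketch of a Field-of-Experts prior energy with Student-t experts.
import numpy as np
from scipy.signal import convolve2d

def foe_energy(image, filters, alphas):
    """Negative log-prior (up to a constant):
    E(x) = sum_i alpha_i * sum_c log(1 + 0.5 * (J_i * x)_c^2)."""
    energy = 0.0
    for J, alpha in zip(filters, alphas):
        response = convolve2d(image, J, mode='valid')      # clique (filter) responses
        energy += alpha * np.log1p(0.5 * response ** 2).sum()
    return energy

rng = np.random.default_rng(0)
filters = [rng.standard_normal((3, 3)) for _ in range(8)]  # placeholder filters
alphas = np.abs(rng.standard_normal(8))                    # placeholder expert weights
image = rng.standard_normal((64, 64))
print(foe_energy(image, filters, alphas))
```

In the paper both the filters and the expert weights are learned from training data; here they only stand in to show where the learned quantities enter the energy.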

1,167 citations


Proceedings Article
05 Dec 2005
TL;DR: This work begins by collecting a training set of monocular images (of unstructured outdoor environments which include forests, trees, buildings, etc.) and their corresponding ground-truth depthmaps, and applies supervised learning to predict the depthmap as a function of the image.
Abstract: We consider the task of depth estimation from a single monocular image. We take a supervised learning approach to this problem, in which we begin by collecting a training set of monocular images (of unstructured outdoor environments which include forests, trees, buildings, etc.) and their corresponding ground-truth depthmaps. Then, we apply supervised learning to predict the depthmap as a function of the image. Depth estimation is a challenging problem, since local features alone are insufficient to estimate depth at a point, and one needs to consider the global context of the image. Our model uses a discriminatively-trained Markov Random Field (MRF) that incorporates multiscale local- and global-image features, and models both depths at individual points as well as the relation between depths at different points. We show that, even on unstructured scenes, our algorithm is frequently able to recover fairly accurate depthmaps.
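
A simplified Gaussian form of such a depth MRF (a hedged sketch; the paper also reports a Laplacian variant, and the features x_i are multiscale local and global descriptors at each patch i) can be written as:

```latex
P(d \mid X; \theta, \sigma) \;\propto\;
\exp\!\Bigg(
  -\sum_{i} \frac{\big(d_i - x_i^{\top}\theta\big)^{2}}{2\sigma_{1}^{2}}
  \;-\; \sum_{i}\,\sum_{j \in \mathcal{N}(i)} \frac{(d_i - d_j)^{2}}{2\sigma_{2}^{2}}
\Bigg)
```

The first term ties each local depth to a feature-based prediction, while the second encourages neighbouring depths to agree, which is how global context enters the estimate.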

1,079 citations


Proceedings ArticleDOI
15 Aug 2005
TL;DR: A novel approach is developed to train the model that directly maximizes the mean average precision rather than maximizing the likelihood of the training data, and significant improvements are possible by modeling dependencies, especially on the larger web collections.
Abstract: This paper develops a general, formal framework for modeling term dependencies via Markov random fields. The model allows for arbitrary text features to be incorporated as evidence. In particular, we make use of features based on occurrences of single terms, ordered phrases, and unordered phrases. We explore full independence, sequential dependence, and full dependence variants of the model. A novel approach is developed to train the model that directly maximizes the mean average precision rather than maximizing the likelihood of the training data. Ad hoc retrieval experiments are presented on several newswire and web collections, including the GOV2 collection used at the TREC 2004 Terabyte Track. The results show significant improvements are possible by modeling dependencies, especially on the larger web collections.
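
For concreteness, the sequential-dependence variant can be read as a rank-equivalent weighted sum over single-term, ordered-phrase, and unordered-window cliques (a hedged sketch with the feature functions left abstract):

```latex
P_{\Lambda}(D \mid Q) \;\stackrel{\mathrm{rank}}{=}\;
  \lambda_{T} \sum_{q \in Q} f_{T}(q, D)
  \;+\; \lambda_{O} \!\!\sum_{(q_i, q_{i+1}) \in Q}\!\! f_{O}(q_i, q_{i+1}, D)
  \;+\; \lambda_{U} \!\!\sum_{(q_i, q_{i+1}) \in Q}\!\! f_{U}(q_i, q_{i+1}, D)
```

The weights Λ = (λ_T, λ_O, λ_U) are the parameters tuned directly for mean average precision rather than by maximizing the likelihood of the training data.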

996 citations


Journal ArticleDOI
TL;DR: A particle filter that effectively deals with interacting targets, targets that are influenced by the proximity and/or behavior of other targets, is described; the traditional importance sampling step is replaced with a novel Markov chain Monte Carlo (MCMC) sampling step to obtain a more efficient MCMC-based multitarget filter.
Abstract: We describe a particle filter that effectively deals with interacting targets, targets that are influenced by the proximity and/or behavior of other targets. The particle filter includes a Markov random field (MRF) motion prior that helps maintain the identity of targets throughout an interaction, significantly reducing tracker failures. We show that this MRF prior can be easily implemented by including an additional interaction factor in the importance weights of the particle filter. However, the computational requirements of the resulting multitarget filter render it unusable for large numbers of targets. Consequently, we replace the traditional importance sampling step in the particle filter with a novel Markov chain Monte Carlo (MCMC) sampling step to obtain a more efficient MCMC-based multitarget filter. We also show how to extend this MCMC-based filter to address a variable number of interacting targets. Finally, we present both qualitative and quantitative experimental results, demonstrating that the resulting particle filters deal efficiently and effectively with complicated target interactions.
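
The MRF motion prior enters only as an extra pairwise factor multiplied into each particle's importance weight. Below is a minimal sketch assuming a simple distance-based overlap penalty; the exact potential and its strength in the paper differ.

```python
# Hedged sketch: pairwise MRF interaction factor folded into a particle's weight.
import numpy as np

def interaction_weight(states, strength=5.0):
    """Product of pairwise potentials exp(-strength * g(xi, xj)) over target pairs.
    Each state is an (x, y) position; g is large when two targets are close."""
    w = 1.0
    n = len(states)
    for i in range(n):
        for j in range(i + 1, n):
            d2 = np.sum((np.asarray(states[i]) - np.asarray(states[j])) ** 2)
            g = np.exp(-d2 / 100.0)          # placeholder overlap penalty
            w *= np.exp(-strength * g)
    return w

# particle weight = observation likelihood * interaction_weight(joint state)
print(interaction_weight([(0.0, 0.0), (2.0, 1.0), (30.0, 40.0)]))
```

In the MCMC-based variant described in the abstract, the same interaction factor sits inside the target posterior that the Markov chain samples, rather than in an importance weight.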

900 citations


Journal ArticleDOI
01 Jul 2005
TL;DR: A Markov Random Field-based similarity metric for measuring the quality of synthesized texture with respect to a given input sample is defined, which allows the synthesis problem to be formulated as minimization of an energy function, optimized using an Expectation Maximization (EM)-like algorithm.
Abstract: We present a novel technique for texture synthesis using optimization. We define a Markov Random Field (MRF)-based similarity metric for measuring the quality of synthesized texture with respect to a given input sample. This allows us to formulate the synthesis problem as minimization of an energy function, which is optimized using an Expectation Maximization (EM)-like algorithm. In contrast to most example-based techniques that do region-growing, ours is a joint optimization approach that progressively refines the entire texture. Additionally, our approach is ideally suited to allow for controllable synthesis of textures. Specifically, we demonstrate controllability by animating image textures using flow fields. We allow for general two-dimensional flow fields that may dynamically change over time. Applications of this technique include dynamic texturing of fluid animations and texture-based flow visualization.
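
The EM-like loop alternates between matching each output neighbourhood to its nearest input neighbourhood and averaging the overlapping matches back into the texture. The following is a toy sketch for small grayscale arrays, with illustrative patch size, overlap, and iteration count, exhaustive nearest-neighbour search, and no flow control, unlike the paper.

```python
# Hedged sketch of texture optimization: alternate nearest-neighbour matching
# (E-like step) and averaging of overlapping patches (M-like step).
import numpy as np

def synthesize(sample, out_shape, patch=8, step=4, iters=5, rng=None):
    rng = rng or np.random.default_rng(0)
    out = rng.standard_normal(out_shape)                    # random initialization
    cand = np.array([sample[i:i + patch, j:j + patch].ravel()
                     for i in range(sample.shape[0] - patch + 1)
                     for j in range(sample.shape[1] - patch + 1)])
    for _ in range(iters):
        acc = np.zeros(out_shape); cnt = np.zeros(out_shape)
        for i in range(0, out_shape[0] - patch + 1, step):
            for j in range(0, out_shape[1] - patch + 1, step):
                z = out[i:i + patch, j:j + patch].ravel()
                best = cand[np.argmin(((cand - z) ** 2).sum(axis=1))]  # match
                acc[i:i + patch, j:j + patch] += best.reshape(patch, patch)
                cnt[i:i + patch, j:j + patch] += 1
        out = np.where(cnt > 0, acc / np.maximum(cnt, 1), out)         # average
    return out

sample = np.random.default_rng(1).random((32, 32))
print(synthesize(sample, (48, 48)).shape)
```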

694 citations


Journal ArticleDOI
TL;DR: A new class of upper bounds on the log partition function of a Markov random field (MRF) is introduced, based on concepts from convex duality and information geometry, and the Legendre mapping between exponential and mean parameters is exploited.
Abstract: We introduce a new class of upper bounds on the log partition function of a Markov random field (MRF). This quantity plays an important role in various contexts, including approximating marginal distributions, parameter estimation, combinatorial enumeration, statistical decision theory, and large-deviations bounds. Our derivation is based on concepts from convex duality and information geometry: in particular, it exploits mixtures of distributions in the exponential domain, and the Legendre mapping between exponential and mean parameters. In the special case of convex combinations of tree-structured distributions, we obtain a family of variational problems, similar to the Bethe variational problem, but distinguished by the following desirable properties: i) they are convex, and have a unique global optimum; and ii) the optimum gives an upper bound on the log partition function. This optimum is defined by stationary conditions very similar to those defining fixed points of the sum-product algorithm, or more generally, any local optimum of the Bethe variational problem. As with sum-product fixed points, the elements of the optimizing argument can be used as approximations to the marginals of the original model. The analysis extends naturally to convex combinations of hypertree-structured distributions, thereby establishing links to Kikuchi approximations and variants.
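
At its core the bound is an application of Jensen's inequality to the convex log partition function A(θ): write θ as a convex combination of tree-structured parameter vectors and average the tractable tree quantities. A hedged, simplified statement of the construction:

```latex
A(\theta) \;=\; A\Big(\sum_{T} \rho(T)\,\theta(T)\Big)
  \;\le\; \sum_{T} \rho(T)\, A\big(\theta(T)\big),
\qquad
\rho(T) \ge 0,\quad \sum_{T}\rho(T) = 1,\quad \sum_{T}\rho(T)\,\theta(T) = \theta .
```

Tightening the right-hand side over the admissible tree parameters θ(T) yields the convex variational problem described above, whose stationarity conditions resemble sum-product fixed points.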

498 citations


Journal ArticleDOI
TL;DR: The results show a significant increase in the accuracy of land cover maps at fine spatial resolution over that obtained from a recently proposed linear optimization approach suggested by Verhoeye and Wulf (2002).

266 citations


Book ChapterDOI
01 Jan 2005
TL;DR: This work presents an algorithm that enables teams of robots to build joint maps, even if their relative starting locations are unknown and landmarks are ambiguous—which is presently an open problem in robotics.
Abstract: We present an algorithm for the multi-robot simultaneous localization and mapping (SLAM) problem. Our algorithm enables teams of robots to build joint maps, even if their relative starting locations are unknown and landmarks are ambiguous—which is presently an open problem in robotics. It achieves this capability through a sparse information filter technique, which represents maps and robot poses by Gaussian Markov random fields. The alignment of local maps into a single global map is achieved by a tree-based algorithm for searching similar-looking local landmark configurations, paired with a hill-climbing algorithm that maximizes the overall likelihood by searching the space of correspondences. We report favorable results obtained with a real-world benchmark data set.

247 citations


Journal ArticleDOI
TL;DR: This article proposes a flexible new class of generalized multivariate conditionally autoregressive (GMCAR) models for areal data, and shows how it enriches the MCAR class.
Abstract: In the fields of medicine and public health, a common application of areal data models is the study of geographical patterns of disease. When we have several measurements recorded at each spatial location (for example, information on p ≥ 2 diseases from the same population groups or regions), we need to consider multivariate areal data models in order to handle the dependence among the multivariate components as well as the spatial dependence between sites. In this article, we propose a flexible new class of generalized multivariate conditionally autoregressive (GMCAR) models for areal data, and show how it enriches the MCAR class. Our approach differs from earlier ones in that it directly specifies the joint distribution for a multivariate Markov random field (MRF) through the specification of simpler conditional and marginal models. This in turn leads to a significant reduction in the computational burden in hierarchical spatial random effect modeling, where posterior summaries are computed using Markov chain Monte Carlo (MCMC). We compare our approach with existing MCAR models in the literature via simulation, using average mean square error (AMSE) and a convenient hierarchical model selection criterion, the deviance information criterion (DIC; Spiegelhalter et al., 2002, Journal of the Royal Statistical Society, Series B 64, 583-639). Finally, we offer a real-data application of our proposed GMCAR approach that models lung and esophagus cancer death rates during 1991-1998 in Minnesota counties.
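
A hedged sketch of the conditional-plus-marginal construction for two diseases, with simplified notation (W is the neighbourhood adjacency matrix, D its diagonal of neighbour counts, and A a linking matrix carrying the cross-disease dependence; the paper's exact parameterization may differ):

```latex
\phi_2 \sim N\!\big(0,\; [\tau_2 (D - \rho_2 W)]^{-1}\big), \qquad
\phi_1 \mid \phi_2 \sim N\!\big(A\,\phi_2,\; [\tau_1 (D - \rho_1 W)]^{-1}\big), \qquad
p(\phi_1, \phi_2) \;=\; p(\phi_1 \mid \phi_2)\, p(\phi_2).
```

Specifying the joint through these simpler Gaussian pieces is what the abstract credits with reducing the computational burden of the MCMC in the hierarchical model.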

225 citations


Journal ArticleDOI
TL;DR: In this research, images containing visually separable classes of either ice and water or multiple ice classes are segmented and a novel Bayesian segmentation approach is developed and applied.
Abstract: Environmental and sensor challenges pose difficulties for the development of computer-assisted algorithms to segment synthetic aperture radar (SAR) sea ice imagery. In this research, in support of operational activities at the Canadian Ice Service, images containing visually separable classes of either ice and water or multiple ice classes are segmented. This work uses image intensity to discriminate ice from water and uses texture features to identify distinct ice types. In order to seamlessly combine image spatial relationships with various image features, a novel Bayesian segmentation approach is developed and applied. This new approach uses a function-based parameter to weight the two components in a Markov random field (MRF) model. The devised model allows for automatic estimation of MRF model parameters to produce accurate unsupervised segmentation results. Experiments demonstrate that the proposed algorithm is able to successfully segment various SAR sea ice images and achieve improvement over existing published methods including the standard MRF-based method, finite Gamma mixture model, and K-means clustering.

196 citations


Proceedings ArticleDOI
20 Jun 2005
TL;DR: The expansion move algorithm is extended for energy functions with non-metric hard constraints and modified for functions with "almost" metric soft terms; it is shown to give good results in practice.
Abstract: This paper addresses the novel problem of automatically synthesizing an output image from a large collection of different input images. The synthesized image, called a digital tapestry, can be viewed as a visual summary or a virtual 'thumbnail' of all the images in the input collection. The problem of creating the tapestry is cast as a multi-class labeling problem such that each region in the tapestry is constructed from input image blocks that are salient and such that neighboring blocks satisfy spatial compatibility. This is formulated using a Markov random field and optimized via the graph cut based expansion move algorithm. The standard expansion move algorithm can only handle energies with metric terms, while our energy contains non-metric (soft and hard) constraints. Therefore we propose two novel contributions. First, we extend the expansion move algorithm for energy functions with non-metric hard constraints. Secondly, we modify it for functions with "almost" metric soft terms, and show that it gives good results in practice. The proposed framework was tested on several consumer photograph collections, and the results are presented.
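
The labeling problem being optimized has the generic pairwise MRF form, written here in standard notation as a sketch: D_p scores the saliency of assigning block label f_p at tapestry position p, and V_pq encodes the spatial-compatibility constraints, some of which are non-metric.

```latex
E(f) \;=\; \sum_{p} D_{p}(f_{p}) \;+\; \sum_{(p,q)\in\mathcal{N}} V_{pq}(f_{p}, f_{q})
```

Standard expansion moves require V to be a metric; the paper's two contributions relax exactly that requirement, for hard non-metric constraints and for "almost" metric soft terms.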

Proceedings ArticleDOI
17 Oct 2005
TL;DR: It is shown that for standard benchmark stereo pairs, the global optimum can be found in about 30 minutes using a variant of the belief propagation (BP) algorithm.
Abstract: A wide range of low level vision problems have been formulated in terms of finding the most probable assignment of a Markov random field (or equivalently the lowest energy configuration). Perhaps the most successful example is stereo vision. For the stereo problem, it has been shown that finding the global optimum is NP hard but good results have been obtained using a number of approximate optimization algorithms. In this paper, we show that for standard benchmark stereo pairs, the global optimum can be found in about 30 minutes using a variant of the belief propagation (BP) algorithm. We extend previous theoretical results on reweighted belief propagation to account for possible ties in the beliefs and using these results we obtain easily checkable conditions that guarantee that the BP disparities are the global optima. We verify experimentally that these conditions are typically met for the standard benchmark stereo pairs and discuss the implications of our results for further progress in stereo.

Journal ArticleDOI
TL;DR: Novel supervised algorithms for the CCP and the CPP estimations are proposed which are appropriate for remote sensing images where the estimation process might need to be done in high-dimensional spaces, and results show that the proposed density estimation algorithm outperforms other algorithms for remote sensing data over a wide range of spectral dimensions.
Abstract: A complete framework is proposed for applying the maximum a posteriori (MAP) estimation principle in remote sensing image segmentation. The MAP principle provides an estimate for the segmented image by maximizing the posterior probabilities of the classes defined in the image. The posterior probability can be represented as the product of the class conditional probability (CCP) and the class prior probability (CPP). In this paper, novel supervised algorithms for the CCP and the CPP estimations are proposed which are appropriate for remote sensing images where the estimation process might need to be done in high-dimensional spaces. For the CCP, a supervised algorithm which uses the support vector machines (SVM) density estimation approach is proposed. This algorithm uses a novel learning procedure, derived from mean field theory, which avoids the (hard) quadratic optimization problem arising from the traditional formulation of the SVM density estimation. For the CPP estimation, Markov random field (MRF) is a common choice which incorporates contextual and geometrical information in the estimation process. Instead of using predefined values for the parameters of the MRF, an analytical algorithm is proposed which automatically identifies the values of the MRF parameters. The proposed framework is built in an iterative setup which refines the estimated image to get the optimum solution. Experiments using both synthetic and real remote sensing data (multispectral and hyperspectral) show the powerful performance of the proposed framework. The results show that the proposed density estimation algorithm outperforms other algorithms for remote sensing data over a wide range of spectral dimensions. The MRF modeling raises the segmentation accuracy by up to 10% in remote sensing images.
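
The decomposition at the heart of the framework, written generically as a sketch (the SVM-based estimator supplies the CCP and the MRF supplies the CPP through the class labels of the neighbourhood):

```latex
\hat{c}(x) \;=\; \arg\max_{c}\; p\big(c \mid y_x\big)
  \;=\; \arg\max_{c}\;
  \underbrace{p\big(y_x \mid c\big)}_{\text{CCP (SVM density)}}\;\cdot\;
  \underbrace{p\big(c \mid c_{\partial x}\big)}_{\text{CPP (MRF prior)}}
```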

Journal ArticleDOI
TL;DR: It is shown that, with the periodic boundary condition, the high-resolution image can be restored efficiently using fast Fourier transforms; the preconditioned conjugate gradient method is also applied.
Abstract: In this paper, we study the problem of reconstructing a high-resolution image from several blurred low-resolution image frames. The image frames consist of decimated, blurred and noisy versions of the high-resolution image. The high-resolution image is modeled as a Markov random field (MRF), and a maximum a posteriori (MAP) estimation technique is used for the restoration. We show that with the periodic boundary condition, the high-resolution image can be restored efficiently by using fast Fourier transforms. We also apply the preconditioned conjugate gradient method to restore the high-resolution image. Computer simulations are given to illustrate the effectiveness of the proposed method.
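
The reason periodic boundaries help is that they let the FFT diagonalize both the blur and a Gaussian MRF prior, so the MAP normal equations solve in closed form. The sketch below is a single-frame simplification of that idea (one blurred observation, no decimation, no multi-frame fusion); the box blur, Laplacian prior, and regularization weight are illustrative placeholders rather than the paper's setup.

```python
# Hedged sketch: closed-form MAP/Tikhonov-type restoration in the Fourier
# domain under periodic boundary conditions.
import numpy as np

def map_restore_fft(blurred, psf, lam=0.01):
    """Solve (H^T H + lam * L^T L) x = H^T y, diagonalized by the 2D FFT."""
    shape = blurred.shape
    H = np.fft.fft2(psf, s=shape)                       # blur transfer function
    lap = np.zeros(shape)
    lap[0, 0] = 4; lap[0, 1] = lap[1, 0] = lap[0, -1] = lap[-1, 0] = -1
    L = np.fft.fft2(lap)                                # discrete Laplacian (Gaussian MRF prior)
    Y = np.fft.fft2(blurred)
    X = np.conj(H) * Y / (np.abs(H) ** 2 + lam * np.abs(L) ** 2)
    return np.real(np.fft.ifft2(X))

rng = np.random.default_rng(0)
psf = np.ones((5, 5)) / 25.0                            # placeholder box blur
img = rng.random((64, 64))
blurred = np.real(np.fft.ifft2(np.fft.fft2(psf, s=img.shape) * np.fft.fft2(img)))
print(map_restore_fft(blurred, psf).shape)
```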

Journal ArticleDOI
TL;DR: This work develops a graphical model for sequences of Gaussian random vectors when changes in the underlying graph occur at random times, and a new block of data is created with the addition or deletion of an edge.
Abstract: Summary. When modelling multivariate financial data, the problem of structural learning is compounded by the fact that the covariance structure changes with time. Previous work has focused on modelling those changes by using multivariate stochastic volatility models. We present an alternative to these models that focuses instead on the latent graphical structure that is related to the precision matrix. We develop a graphical model for sequences of Gaussian random vectors when changes in the underlying graph occur at random times, and a new block of data is created with the addition or deletion of an edge. We show how a Bayesian hierarchical model incorporates both the uncertainty about that graph and the time variation thereof.

Journal ArticleDOI
TL;DR: This work proposes to use the tree-structured Markov random field model, which describes a K-ary field by means of a sequence of binary MRFs, each one corresponding to a node in the tree, for supervised segmentation of remote sensing images.
Abstract: Most remote sensing images exhibit a clear hierarchical structure which can be taken into account by defining a suitable model for the unknown segmentation map. To this end, one can resort to the tree-structured Markov random field (TS-MRF) model, which describes a K-ary field by means of a sequence of binary MRFs, each one corresponding to a node in the tree. Here we propose to use the tree-structured MRF model for supervised segmentation. The prior knowledge on the number of classes and their statistical features allows us to generalize the model so that the binary MRFs associated with the nodes can be adapted freely, together with their local parameters, to better fit the data. In addition, it allows us to define a suitable likelihood term to be coupled with the TS-MRF prior so as to obtain a precise global model of the image. Given the complete model, a recursive supervised segmentation algorithm is easily defined. Experiments on a test SPOT image prove the superior performance of the proposed algorithm with respect to other comparable MRF-based or variational algorithms.

Proceedings ArticleDOI
17 Oct 2005
TL;DR: Experiments with natural and synthetic sequences illustrate how the learned optical flow prior quantitatively improves flow accuracy and how it captures the rich spatial structure found in natural scene motion.
Abstract: We develop a method for learning the spatial statistics of optical flow fields from a novel training database. Training flow fields are constructed using range images of natural scenes and 3D camera motions recovered from handheld and car-mounted video sequences. A detailed analysis of optical flow statistics in natural scenes is presented and machine learning methods are developed to learn a Markov random field model of optical flow. The prior probability of a flow field is formulated as a field-of-experts model that captures the higher order spatial statistics in overlapping patches and is trained using contrastive divergence. This new optical flow prior is compared with previous robust priors and is incorporated into a recent, accurate algorithm for dense optical flow computation. Experiments with natural and synthetic sequences illustrate how the learned optical flow prior quantitatively improves flow accuracy and how it captures the rich spatial structure found in natural scene motion.

Journal ArticleDOI
TL;DR: The region-merging approach based on spatial contextual information was shown to provide more accurate classification of images with smooth spatial patterns; a multiwindow, pyramid-like structure is employed to increase computational efficiency while maintaining spatial connectivity in merging.
Abstract: A new multistage method using hierarchical clustering for unsupervised image classification is presented. In the first phase, the multistage method performs segmentation using a hierarchical clustering procedure which confines merging to spatially adjacent clusters and generates an image partition such that no union of any neighboring segments has homogeneous intensity values. In the second phase, the segments resulting from the first stage are classified into a small number of distinct states by a sequential merging operation. The region-merging procedure in the first phase makes use of spatial contextual information by characterizing the geophysical connectedness of a digital image structure with a Markov random field, while the second phase employs a context-free similarity measure in the clustering process. The segmentation procedure of region merging is implemented as a hierarchical clustering algorithm whereby a multiwindow approach using a pyramid-like structure is employed to increase computational efficiency while maintaining spatial connectivity in merging. From experiments with both simulated and remotely sensed data, the proposed method was determined to be quite effective for unsupervised analysis. In particular, the region-merging approach based on spatial contextual information was shown to provide more accurate classification of images with smooth spatial patterns.

Book ChapterDOI
Colin Daly1
01 Jan 2005
TL;DR: Approaches to modelling with higher-order statistics from statistical mechanics and image analysis are considered, and the unilateral model is viewed as a member of both classes of methods.
Abstract: Approaches to modelling using higher order statistics from statistical mechanics and image analysis are considered. The mathematics behind a very general model is briefly reviewed including a short look at entropy. Simulated annealing is viewed as an approximation to this model. Sequential simulation is briefly introduced as a second class of methods. The unilateral model is considered as being a member of both classes. It is applied to simulations using learnt conditional distributions.

Book ChapterDOI
07 Jun 2005
TL;DR: An algorithm is proposed which computes an exact minimizer of the total variation under a convex data fidelity term; binary solutions are found using graph-cut techniques, and it is shown how to derive a fast algorithm.
Abstract: This paper deals with the minimization of the total variation under a convex data fidelity term. We propose an algorithm which computes an exact minimizer of this problem. The method relies on the decomposition of an image into its level sets. Using these level sets, we map the problem into optimizations of independent binary Markov Random Fields. Binary solutions are found thanks to graph-cut techniques and we show how to derive a fast algorithm. We also study the special case when the fidelity term is the L1-norm. Finally we provide some experiments.
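
The decomposition rests on the coarea formula: total variation splits into the perimeters of the level sets, so the minimization separates into one binary MRF per level λ. A hedged, informal statement (the convexity of the fidelity term is what makes the binary solutions consistent across levels):

```latex
TV(u) \;=\; \int_{-\infty}^{+\infty} TV\big(\mathbb{1}_{\{u > \lambda\}}\big)\, d\lambda ,
\qquad
\min_{u_\lambda \in \{0,1\}^{\Omega}}\; TV(u_\lambda) \;+\; \sum_{x \in \Omega} f_{\lambda}\big(u_\lambda(x), x\big)
\quad \text{independently for each level } \lambda ,
```

with each binary problem solved exactly by a min-cut/max-flow computation, as the abstract describes.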

Journal ArticleDOI
01 Jun 2005
TL;DR: This work proposes a technique for super-resolution imaging of a scene from observations at different camera zooms, and suggests the use of either a Markov random field (MRF) or a simultaneous autoregressive (SAR) model to parameterize the field based on the computation one can afford.
Abstract: We propose a technique for super-resolution imaging of a scene from observations at different camera zooms. Given a sequence of images with different zoom factors of a static scene, we obtain a picture of the entire scene at a resolution corresponding to the most zoomed observation. The high-resolution image is modeled through appropriate parameterization, and the parameters are learned from the most zoomed observation. Assuming a homogeneity of the high-resolution field, the learned model is used as a prior while super-resolving the scene. We suggest the use of either a Markov random field (MRF) or a simultaneous autoregressive (SAR) model to parameterize the field based on the computation one can afford. We substantiate the suitability of the proposed method through a large number of experiments on both simulated and real data.

Journal ArticleDOI
TL;DR: The results show that the new method can produce more accurate shifts in perceived age and an increase in realism, and is able to maintain perceived identity across the transforms while producing realistic fine-scale textures.
Abstract: The ability to transform facial images between groups (e.g. from young to old, or from male to female) has applications in psychological research, police investigations, medicine and entertainment. Current techniques suffer either from a lack of realism due to unrealistic or inappropriate textures in the output images, or a lack of statistical validity, e.g. by using only a single example image for training. This paper describes a new method for improving the realism and effectiveness of facial transformations (e.g. ageing, feminising etc.) of individuals. The method aims to transform low resolution image data using the mean differences between the two groups, but converges on more specific texture features at the finer resolutions. We separate high and low resolution information by transforming the image into a wavelet domain. At each point we calculate a mapping from the original set to the target set based on the probability distributions of the input and output wavelet values. These distributions are estimated from the example images, using the assumption that the distribution depends on the values in a local neighbourhood of the point (the Markov Random Field (MRF) assumption). We use a causal neighbourhood that spans multiple coarser scales of the wavelet pyramid. The distributions are estimated by smoothing the histogram of example values. By increasing the smoothing of the histograms at coarser resolutions we are able to maintain perceived identity across the transforms while producing realistic fine-scale textures. We use perceptual testing to validate the new method, and the results show that it can produce more accurate shifts in perceived age and an increase in realism.

Journal ArticleDOI
TL;DR: The proposed method is based on a Markovian regularization of an elevation field defined on a region adjacency graph (RAG) obtained by oversegmenting the optical image and takes into account discontinuities of buildings thanks to an implicit edge process.
Abstract: This paper deals with the estimation of an elevation model using a pair of synthetic aperture radar (SAR) images and an optical image in semiurban areas. The proposed method is based on a Markovian regularization of an elevation field defined on a region adjacency graph (RAG). This RAG is obtained by oversegmenting the optical image. The support for elevation hypotheses is given by the structural matching of features extracted from both SAR images. The regularization model takes into account discontinuities of buildings thanks to an implicit edge process. Starting from a good initialization, optimization is obtained through an iterated conditional mode algorithm.

Journal ArticleDOI
TL;DR: In this paper, a spatial-temporal autologistic regression model is proposed that captures the relationship between a binary response and potential explanatory variables, and adjusts for both spatial dependence and temporal dependence simultaneously through a space-time Markov random field.
Abstract: An autologistic regression model consists of a logistic regression of a response variable on explanatory variables and an autoregression on responses at neighboring locations on a lattice. It is a Markov random field with pairwise spatial dependence and is a popular tool for modeling spatial binary responses. In this article, we add a temporal component to the autologistic regression model for spatial-temporal binary data. The spatial-temporal autologistic regression model captures the relationship between a binary response and potential explanatory variables, and adjusts for both spatial dependence and temporal dependence simultaneously by a space-time Markov random field. We estimate the model parameters by maximum pseudo-likelihood and obtain optimal prediction of future responses on the lattice by a Gibbs sampler. For illustration, the method is applied to study the outbreaks of southern pine beetle in North Carolina. We also discuss the generality of our approach for modeling other types of spatial-temporal lattice data.
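
A hedged sketch of the conditional form of such a model, with simplified notation (the exact covariate structure and neighbourhood definition used in the application differ):

```latex
\operatorname{logit}\, P\big(y_{it} = 1 \mid y_{jt},\, j \in N(i);\; y_{i,t-1};\; x_{it}\big)
  \;=\; x_{it}^{\top}\beta
  \;+\; \eta_{s} \sum_{j \in N(i)} y_{jt}
  \;+\; \eta_{t}\, y_{i,t-1}
```

Maximum pseudo-likelihood multiplies these conditionals over sites and times, and the same conditionals drive the Gibbs sampler used to predict future responses on the lattice.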

Journal ArticleDOI
TL;DR: This work presents models and methods for classification of multiresolution images based on the concept of a reference resolution, corresponding to the highest resolution in the dataset, and proposes a Bayesian framework for classification based on this multiscale model.
Abstract: Several earth observation satellites acquire image bands with different spatial resolutions, e.g., a panchromatic band with high resolution and spectral bands with lower resolution. Likewise, we often face the problem of different resolutions when performing joint analysis of images acquired by different satellites. This work presents models and methods for classification of multiresolution images. The approach is based on the concept of a reference resolution, corresponding to the highest resolution in the dataset. Prior knowledge about the spatial characteristics of the classes is specified through a Markov random field model at the reference resolution. Data at coarser scales are modeled as mixed pixels by relating the observations to the classes at the reference resolution. A Bayesian framework for classification based on this multiscale model is proposed. The classification is realized by an iterative conditional modes (ICM) algorithm. The parameter estimation can be based both on a training set and on pixels with unknown class. A computationally efficient scheme based on a combination of the ICM and the expectation-maximization algorithm is proposed. Results obtained on simulated and real satellite images are presented.

Journal ArticleDOI
TL;DR: A unified framework is proposed, based on the maximum a posteriori probability principle, by taking all these effects into account simultaneously in order to improve image segmentation performance of brain magnetic resonance (MR) images.
Abstract: Noise, partial volume (PV) effect, and image-intensity inhomogeneity render a challenging task for segmentation of brain magnetic resonance (MR) images. Most of the current MR image segmentation methods focus on only one or two of the above-mentioned effects. The objective of this paper is to propose a unified framework, based on the maximum a posteriori probability principle, by taking all these effects into account simultaneously in order to improve image segmentation performance. Instead of labeling each image voxel with a unique tissue type, the percentage of each voxel belonging to different tissues, which we call a mixture, is considered to address the PV effect. A Markov random field model is used to describe the noise effect by considering the nearby spatial information of the tissue mixture. The inhomogeneity effect is modeled as a bias field characterized by a zero mean Gaussian prior probability. The well-known fuzzy C-mean model is extended to define the likelihood function of the observed image. This framework reduces theoretically, under some assumptions, to the adaptive fuzzy C-mean (AFCM) algorithm proposed by Pham and Prince. Digital phantom and real clinical MR images were used to test the proposed framework. Improved performance over the AFCM algorithm was observed in a clinical environment where the inhomogeneity, noise level, and PV effect are commonly encountered.

Journal ArticleDOI
TL;DR: Experiments show that the proposed MRF-based segmentation method can detect spot areas and estimate spot intensities with higher accuracy and can be used as gold standards for the purposes of testing and comparing different segmentation methods, and optimizing segmentation parameters.
Abstract: Motivation: Spot segmentation is a critical step in microarray gene expression data analysis. Therefore, the performance of segmentation may substantially affect the results of subsequent stages of the analysis, such as the detection of differentially expressed genes. Several methods have been developed to segment microarray spots from the surrounding background. In this study, we have proposed a new approach based on Markov random field (MRF) modeling and tested its performance on simulated and real microarray images against a widely used segmentation method based on the Mann–Whitney test adopted by QuantArray software (Boston, MA). Spot addressing was performed using QuantArray. We have also devised a simulation method to generate microarray images with realistic features. Such images can be used as gold standards for the purposes of testing and comparing different segmentation methods, and optimizing segmentation parameters. Results: Experiments on simulated and 14 actual microarray image sets show that the proposed MRF-based segmentation method can detect spot areas and estimate spot intensities with higher accuracy. Availability: The algorithms were implemented in Matlab™ (The Mathworks, Inc., Natick, MA) environment. The codes for MRF-based segmentation and image simulation methods are available upon request. Contact: demirkaya@ieee.org

Journal ArticleDOI
TL;DR: The results show that the proposed method, based on the MRF model with the multiscale fuzzy line process, successfully generates patch-wise classification patterns and simultaneously improves accuracy and visual interpretation.

Journal ArticleDOI
TL;DR: The proposed video segmentation approach can be viewed as a compromise between previous motion-based approaches and region-merging approaches, and it generates spatiotemporally coherent segmentation results.
Abstract: This paper proposes a probabilistic framework for spatiotemporal segmentation of video sequences. Motion information, boundary information from intensity segmentation, and spatial connectivity of segmentation are unified in the video segmentation process by means of graphical models. A Bayesian network is presented to model interactions among the motion vector field, the intensity segmentation field, and the video segmentation field. The notion of the Markov random field is used to encourage the formation of continuous regions. Given consecutive frames, the conditional joint probability density of the three fields is maximized in an iterative way. To effectively utilize boundary information from the intensity segmentation, distance transformation is employed in local objective functions. Experimental results show that the method is robust and generates spatiotemporally coherent segmentation results. Moreover, the proposed video segmentation approach can be viewed as the compromise of previous motion based approaches and region merging approaches.

Book ChapterDOI
10 Jul 2005
TL;DR: A scheme for simultaneous segmentation and registration of breast ce-MRI is proposed within a Bayesian framework, based on a maximum a posteriori estimation method, and the results show the potential of the methodology to extract useful information for breast cancer detection.
Abstract: Breast Contrast-Enhanced MRI (ce-MRI) requires a series of images to be acquired before, and repeatedly after, intravenous injection of a contrast agent. Breast MRI segmentation based on the differential enhancement of image intensities can assist the clinician in detecting suspicious regions. Image registration between the temporal data sets is necessary to compensate for patient motion, which is quite often substantial. Although segmentation and registration are usually treated as separate problems in medical image analysis, they can naturally benefit a great deal from each other. In this paper, we propose a scheme for simultaneous segmentation and registration of breast ce-MRI. It is developed within a Bayesian framework, based on a maximum a posteriori estimation method. A pharmacokinetic model and a Markov Random Field model have been incorporated into the framework in order to improve the performance of our algorithm. Our method has been applied to the segmentation and registration of clinical ce-MR images. The results show the potential of our methodology to extract useful information for breast cancer detection.