
Showing papers on "Markov random field published in 2002"


Journal ArticleDOI
TL;DR: The DDMCMC paradigm provides a unifying framework in which the roles of many existing segmentation algorithms are revealed as either realizing Markov chain dynamics or computing importance proposal probabilities, and it combines and generalizes these segmentation methods in a principled way.
Abstract: This paper presents a computational paradigm called Data-Driven Markov Chain Monte Carlo (DDMCMC) for image segmentation in the Bayesian statistical framework. The paper contributes to image segmentation in four aspects. First, it designs efficient and well-balanced Markov Chain dynamics to explore the complex solution space and, thus, achieves a nearly global optimal solution independent of initial segmentations. Second, it presents a mathematical principle and a K-adventurers algorithm for computing multiple distinct solutions from the Markov chain sequence and, thus, it incorporates intrinsic ambiguities in image segmentation. Third, it utilizes data-driven (bottom-up) techniques, such as clustering and edge detection, to compute importance proposal probabilities, which drive the Markov chain dynamics and achieve tremendous speedup in comparison to the traditional jump-diffusion methods. Fourth, the DDMCMC paradigm provides a unifying framework in which the role of many existing segmentation algorithms, such as, edge detection, clustering, region growing, split-merge, snake/balloon, and region competition, are revealed as either realizing Markov chain dynamics or computing importance proposal probabilities. Thus, the DDMCMC paradigm combines and generalizes these segmentation methods in a principled way. The DDMCMC paradigm adopts seven parametric and nonparametric image models for intensity and color at various regions. We test the DDMCMC paradigm extensively on both color and gray-level images and some results are reported in this paper.
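As a rough illustration of the kind of Markov chain dynamics the paradigm relies on, the sketch below shows a single Metropolis-Hastings relabeling move in which the proposal distribution over candidate labels is supplied externally (in DDMCMC it would come from bottom-up cues such as clustering or edge detection). The function names, the region-wise labeling, and the `log_posterior` callback are illustrative assumptions; the actual method also uses split-merge, diffusion, and model-switching moves not shown here.

```python
import numpy as np

rng = np.random.default_rng(0)

def mh_relabel_step(labels, region, proposal_probs, log_posterior):
    """One illustrative Metropolis-Hastings move: re-assign one region's label.

    proposal_probs : data-driven (e.g. clustering-based) probabilities q(l)
                     for each candidate label l of this region.
    log_posterior  : function mapping a full label configuration to its
                     log posterior value (image models + prior).
    """
    current = labels[region]
    candidate = rng.choice(len(proposal_probs), p=proposal_probs)
    if candidate == current:
        return labels  # proposing the same label changes nothing

    proposed = labels.copy()
    proposed[region] = candidate

    # MH acceptance: posterior ratio times proposal ratio q(reverse)/q(forward).
    log_alpha = (log_posterior(proposed) - log_posterior(labels)
                 + np.log(proposal_probs[current]) - np.log(proposal_probs[candidate]))
    if np.log(rng.uniform()) < min(0.0, log_alpha):
        return proposed
    return labels

# Toy usage: three regions, two candidate labels, a posterior that prefers label 1 everywhere.
toy_log_post = lambda lab: float(np.sum(lab == 1))
labels = np.zeros(3, dtype=int)
for _ in range(50):
    labels = mh_relabel_step(labels, region=rng.integers(3),
                             proposal_probs=np.array([0.3, 0.7]),
                             log_posterior=toy_log_post)
print(labels)
```

Because the proposal is data-driven rather than uniform, promising candidate labels are visited far more often, which is where the reported speedup over traditional jump-diffusion comes from.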

638 citations


Journal ArticleDOI
TL;DR: Experimental results show that the proposed speckle reduction algorithm outperforms standard wavelet denoising techniques in terms of the signal-to-noise ratio and the equivalent-number-of-looks measures in most cases and achieves better performance than the refined Lee filter.
Abstract: The granular appearance of speckle noise in synthetic aperture radar (SAR) imagery makes it very difficult to visually and automatically interpret SAR data. Therefore, speckle reduction is a prerequisite for many SAR image processing tasks. In this paper, we develop a speckle reduction algorithm by fusing the wavelet Bayesian denoising technique with Markov-random-field-based image regularization. Wavelet coefficients are modeled independently and identically by a two-state Gaussian mixture model, while their spatial dependence is characterized by a Markov random field imposed on the hidden state of Gaussian mixtures. The Expectation-Maximization algorithm is used to estimate hyperparameters and specify the mixture model, and the iterated-conditional-modes method is implemented to optimize the state configuration. The noise-free wavelet coefficients are finally estimated by a shrinkage function based on local weighted averaging of the Bayesian estimator. Experimental results show that the proposed method outperforms standard wavelet denoising techniques in terms of the signal-to-noise ratio and the equivalent-number-of-looks measures in most cases. It also achieves better performance than the refined Lee filter.
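The shrinkage step described above can be illustrated with a minimal numpy sketch, assuming the two-state mixture parameters and the noise level are already known (in the paper they are estimated by EM, and the state probabilities come from the MRF/ICM stage rather than from the marginal mixture used here); all parameter names are illustrative.

```python
import numpy as np

def mixture_shrinkage(w, p_large, sigma_small, sigma_large, sigma_noise):
    """Shrink noisy wavelet coefficients w under a two-state Gaussian mixture prior.

    Each clean coefficient is N(0, sigma_small^2) with prob. 1 - p_large (noise-like
    state) or N(0, sigma_large^2) with prob. p_large (signal state); the observation
    adds independent N(0, sigma_noise^2) noise.  Returns the posterior-mean estimate.
    """
    w = np.asarray(w, dtype=float)
    var_s, var_l, var_n = sigma_small**2, sigma_large**2, sigma_noise**2

    # Marginal likelihood of w under each state (clean and noise variances add).
    def gauss(x, var):
        return np.exp(-x**2 / (2 * var)) / np.sqrt(2 * np.pi * var)

    lik_l = p_large * gauss(w, var_l + var_n)
    lik_s = (1 - p_large) * gauss(w, var_s + var_n)
    post_l = lik_l / (lik_l + lik_s)          # posterior prob. of the signal state

    # Wiener shrinkage within each state, mixed by the state posterior.
    shrink_l = var_l / (var_l + var_n)
    shrink_s = var_s / (var_s + var_n)
    return (post_l * shrink_l + (1 - post_l) * shrink_s) * w

# Example: strong coefficients are kept, weak ones are shrunk towards zero.
coeffs = np.array([0.05, 0.1, 3.0, -4.2])
print(mixture_shrinkage(coeffs, p_large=0.2, sigma_small=0.1, sigma_large=2.0, sigma_noise=0.3))
```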

414 citations


Journal ArticleDOI
TL;DR: An adaptive semiparametric technique for the unsupervised estimation of the statistical terms associated with the gray levels of changed and unchanged pixels in a difference image is presented and a change detection map is generated.
Abstract: A novel automatic approach to the unsupervised identification of changes in multitemporal remote-sensing images is proposed. This approach, unlike classical ones, is based on the formulation of the unsupervised change-detection problem in terms of the Bayesian decision theory. In this context, an adaptive semiparametric technique for the unsupervised estimation of the statistical terms associated with the gray levels of changed and unchanged pixels in a difference image is presented. Such a technique exploits the effectiveness of two theoretically well-founded estimation procedures: the reduced Parzen estimate (RPE) procedure and the expectation-maximization (EM) algorithm. Then, thanks to the resulting estimates and to a Markov random field (MRF) approach used to model the spatial-contextual information contained in the multitemporal images considered, a change detection map is generated. The adaptive semiparametric nature of the proposed technique allows its application to different kinds of remote-sensing images. Experimental results, obtained on two sets of multitemporal remote-sensing images acquired by two different sensors, confirm the validity of the proposed approach.
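As a simplified sketch of the estimation step, the code below fits a plain two-component Gaussian mixture to the difference-image values with EM and then makes the pixelwise Bayes decision; this stands in for the reduced Parzen estimate and omits the MRF spatial regularization, so it illustrates the idea rather than reproducing the paper's algorithm.

```python
import numpy as np

def em_two_gaussians(d, n_iter=50):
    """Fit a two-component Gaussian mixture to difference-image values d with EM.

    Component 0 ~ unchanged pixels, component 1 ~ changed pixels (initialised
    from the lower / upper halves of the sorted values).
    """
    d = np.asarray(d, dtype=float).ravel()
    s = np.sort(d)
    mu = np.array([s[: len(s) // 2].mean(), s[len(s) // 2 :].mean()])
    var = np.array([d.var(), d.var()])
    pi = np.array([0.5, 0.5])

    for _ in range(n_iter):
        # E-step: responsibilities of each component for each pixel.
        lik = np.stack([
            pi[k] * np.exp(-(d - mu[k]) ** 2 / (2 * var[k])) / np.sqrt(2 * np.pi * var[k])
            for k in range(2)
        ])
        resp = lik / lik.sum(axis=0, keepdims=True)
        # M-step: update mixture weights, means and variances.
        nk = resp.sum(axis=1)
        pi = nk / len(d)
        mu = (resp * d).sum(axis=1) / nk
        var = (resp * (d - mu[:, None]) ** 2).sum(axis=1) / nk
    return pi, mu, var

def bayes_change_map(d, pi, mu, var):
    """Pixelwise MAP decision: label a pixel 'changed' (1) if its posterior is larger."""
    d = np.asarray(d, dtype=float)
    post = np.stack([
        pi[k] * np.exp(-(d - mu[k]) ** 2 / (2 * var[k])) / np.sqrt(2 * np.pi * var[k])
        for k in range(2)
    ])
    return (post[1] > post[0]).astype(np.uint8)
```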

407 citations


Journal ArticleDOI
TL;DR: This paper presents a new wavelet-based image denoising method, which extends a "geometrical" Bayesian framework and combines three criteria for distinguishing supposedly useful coefficients from noise: coefficient magnitudes, their evolution across scales and spatial clustering of large coefficients near image edges.
Abstract: This paper presents a new wavelet-based image denoising method, which extends a "geometrical" Bayesian framework. The new method combines three criteria for distinguishing supposedly useful coefficients from noise: coefficient magnitudes, their evolution across scales and spatial clustering of large coefficients near image edges. These three criteria are combined in a Bayesian framework. The spatial clustering properties are expressed in a prior model. The statistical properties concerning coefficient magnitudes and their evolution across scales are expressed in a joint conditional model. The three main novelties with respect to related approaches are (1) the interscale-ratios of wavelet coefficients are statistically characterized and different local criteria for distinguishing useful coefficients from noise are evaluated, (2) a joint conditional model is introduced, and (3) a novel anisotropic Markov random field prior model is proposed. The results demonstrate an improved denoising performance over related earlier techniques.

308 citations


Journal ArticleDOI
TL;DR: It is argued that both tasks can be restated as fitting a GMRF to a prescribed stationary Gaussian field on a lattice when both local and global properties are important, and it is demonstrated that GMRFs with small neighbourhoods can approximate Gaussian fields surprisingly well even with long correlation lengths.
Abstract: This paper discusses the following task often encountered in building Bayesian spatial models: construct a homogeneous Gaussian Markov random field (GMRF) on a lattice with correlation properties either as present in some observed data, or consistent with prior knowledge. The Markov property is essential in designing computationally efficient Markov chain Monte Carlo algorithms to analyse such models. We argue that we can restate both tasks as that of fitting a GMRF to a prescribed stationary Gaussian field on a lattice when both local and global properties are important. We demonstrate that using the Kullback-Leibler discrepancy often fails for this task, giving severely undesirable behaviour of the correlation function for lags outside the neighbourhood. We propose a new criterion that resolves this difficulty, and demonstrate that GMRFs with small neighbourhoods can approximate Gaussian fields surprisingly well even with long correlation lengths. Finally, we discuss implications of our findings for likelihood based inference for general Markov random fields when global properties are also important.
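A hedged sketch of the fitting task is shown below under strong simplifications: a first-order neighbourhood, torus boundary conditions, and a plain least-squares match of correlations up to a maximum lag (not the criterion proposed in the paper, which is designed to avoid the poor long-lag behaviour of the Kullback-Leibler fit). The correlation of each candidate GMRF is obtained from its spectral density by FFT.

```python
import numpy as np

def gmrf_correlation(theta, n=64):
    """Correlation function of a stationary first-order GMRF on an n x n torus.

    The precision operator has symbol  q(w) = 1 - 2*theta*(cos w1 + cos w2),
    so the covariance spectrum is 1/q and the correlation is its inverse FFT,
    normalised at lag (0, 0).  Requires |theta| < 0.25 for positive definiteness.
    """
    w = 2 * np.pi * np.fft.fftfreq(n)
    W1, W2 = np.meshgrid(w, w, indexing="ij")
    spec = 1.0 / (1.0 - 2.0 * theta * (np.cos(W1) + np.cos(W2)))
    cov = np.real(np.fft.ifft2(spec))
    return cov / cov[0, 0]

def fit_theta(target_corr_1d, max_lag=10, n=64):
    """Grid-search theta so that correlations along one axis match a target."""
    lags = np.arange(max_lag + 1)
    best_theta, best_err = None, np.inf
    for theta in np.linspace(0.0, 0.2499, 500):
        corr = gmrf_correlation(theta, n)[lags, 0]
        err = np.sum((corr - target_corr_1d[lags]) ** 2)
        if err < best_err:
            best_theta, best_err = theta, err
    return best_theta

# Example target: exponential correlation with a range of 5 pixels.
target = np.exp(-np.arange(64) / 5.0)
print(fit_theta(target))
```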

257 citations


Journal ArticleDOI
TL;DR: To address the problem of image change detection based on Markov random field models, the MRF is used to model both the noiseless images obtained from the actual scene and the change images, whose sites indicate changes between a pair of observed images.
Abstract: This paper addresses the problem of image change detection (ICD) based on Markov random field (MRF) models. MRF has long been recognized as an accurate model to describe a variety of image characteristics. Here, we use the MRF to model both noiseless images obtained from the actual scene and change images (CIs), the sites of which indicate changes between a pair of observed images. The optimum ICD algorithm under the maximum a posteriori (MAP) criterion is developed under this model. Examples are presented for illustration and performance evaluation.
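The MAP estimate under such a model is commonly approximated with iterated conditional modes; the sketch below assumes per-pixel log-likelihoods of the two hypotheses are already available (for instance from estimated densities, as in the previous entry) and uses a simple Ising smoothness prior, so it illustrates the MAP-MRF formulation rather than reproducing the algorithm developed in the paper.

```python
import numpy as np

def icm_change_map(loglik0, loglik1, beta=1.5, n_iter=10):
    """Approximate the MAP change image with iterated conditional modes (ICM).

    loglik0, loglik1 : per-pixel log-likelihoods of the 'unchanged' / 'changed'
                       hypotheses (same shape as the image).
    beta             : Ising prior weight encouraging neighbouring pixels to agree.
    """
    labels = (loglik1 > loglik0).astype(np.int8)   # start from the pixelwise ML map
    for _ in range(n_iter):
        # Count 'changed' neighbours of every pixel (4-connectivity).
        padded = np.pad(labels, 1)
        n_changed = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
                     padded[1:-1, :-2] + padded[1:-1, 2:])
        ones = np.pad(np.ones_like(labels), 1)
        n_neigh = (ones[:-2, 1:-1] + ones[2:, 1:-1] + ones[1:-1, :-2] + ones[1:-1, 2:])
        # Local posterior energy of each label; pick the smaller one per pixel.
        energy0 = -loglik0 + beta * n_changed              # neighbours disagreeing with 0
        energy1 = -loglik1 + beta * (n_neigh - n_changed)  # neighbours disagreeing with 1
        labels = (energy1 < energy0).astype(np.int8)
    return labels

# Toy example: a noisy square of 'change' detected from Gaussian log-likelihoods.
d = np.zeros((40, 40)); d[10:30, 10:30] = 2.0
noisy = d + np.random.default_rng(0).normal(0, 1.0, d.shape)
cmap = icm_change_map(loglik0=-0.5 * noisy**2, loglik1=-0.5 * (noisy - 2.0)**2)
print(cmap.sum())
```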

230 citations


Journal ArticleDOI
TL;DR: An adaptive Bayesian contextual classification procedure that utilizes both spectral and spatial interpixel dependency contexts in estimation of statistics and classification is proposed, which can reach classification accuracies similar to those obtained by a pixelwise maximum likelihood classifier with a very large training sample set.
Abstract: An adaptive Bayesian contextual classification procedure that utilizes both spectral and spatial interpixel dependency contexts in estimation of statistics and classification is proposed. Essentially, this classifier is the constructive coupling of an adaptive classification procedure and a Bayesian contextual classification procedure. In this classifier, the joint prior probabilities of the classes of each pixel and its spatial neighbors are modeled by the Markov random field. The estimation of statistics and classification are performed in a recursive manner to allow the establishment of the positive-feedback process in a computationally efficient manner. Experiments with real hyperspectral data show that, starting with a small training sample set, this classifier can reach classification accuracies similar to those obtained by a pixelwise maximum likelihood classifier with a very large training sample set. Additionally, classification maps are produced that have significantly less speckle error.

211 citations


Journal ArticleDOI
TL;DR: A method is presented for simultaneous estimation of video-intensity inhomogeneities and segmentation of US image tissue regions, and it is explained how the underlying multiplicative model can be related to the ultrasonic physics of image formation to justify the approach.
Abstract: Displayed ultrasound (US) B-mode images often exhibit tissue intensity inhomogeneities dominated by nonuniform beam attenuation within the body. This is a major problem for intensity-based, automatic segmentation of video-intensity images because conventional threshold-based or intensity-statistic-based approaches do not work well in the presence of such image distortions. Time gain compensation (TGC) is typically used in standard US machines in an attempt to overcome this. However, this compensation method is position-dependent, which means that different tissues in the same TGC time-range (or corresponding depth range) will be, incorrectly, compensated by the same amount. Compensation should really be tissue-type dependent, but automating this step is difficult. The main contribution of this paper is to develop a method for simultaneous estimation of video-intensity inhomogeneities and segmentation of US image tissue regions. The method uses a combination of the maximum a posteriori (MAP) and Markov random field (MRF) methods to estimate the US image distortion field assuming it follows a multiplicative model while at the same time labeling image regions based on the corrected intensity statistics. The MAP step is used to estimate the intensity model parameters while the MRF step provides a novel way of incorporating the distributions of image tissue classes as a spatial smoothness constraint. We explain how this multiplicative model can be related to the ultrasonic physics of image formation to justify our approach. Experiments are presented on synthetic images and a gelatin phantom to evaluate quantitatively the accuracy of the method. We also discuss qualitatively the application of the method to clinical breast and cardiac US images. Limitations of the method and potential clinical applications are outlined in the conclusion.

209 citations


Journal ArticleDOI
TL;DR: This paper proposes various block sampling algorithms in order to improve the MCMC performance and indicates that the largest benefits are obtained if parameters and the corresponding hyperparameter are updated jointly in one large block.
Abstract: Gaussian Markov random field (GMRF) models are commonly used to model spatial correlation in disease mapping applications. For Bayesian inference by MCMC, so far mainly single-site updating algorithms have been considered. However, convergence and mixing properties of such algorithms can be extremely poor due to strong dependencies of parameters in the posterior distribution. In this paper, we propose various block sampling algorithms in order to improve the MCMC performance. The methodology is rather general, allows for non-standard full conditionals, and can be applied in a modular fashion in a large number of different scenarios. For illustration we consider three different applications: two formulations for spatial modelling of a single disease (with and without additional unstructured parameters respectively), and one formulation for the joint analysis of two diseases. The results indicate that the largest benefits are obtained if parameters and the corresponding hyperparameter are updated jointly in one large block. Implementation of such block algorithms is relatively easy using methods for fast sampling of Gaussian Markov random fields (Rue, 2001). By comparison, Monte Carlo estimates based on single-site updating can be rather misleading, even for very long runs. Our results may have wider relevance for efficient MCMC simulation in hierarchical models with Markov random field components.
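Fast sampling of a GMRF given its precision matrix is the building block behind these block updates: factor the precision, solve a triangular system against white noise, and add the mean. The dense-matrix sketch below shows the idea; the practical gains in Rue (2001) come from exploiting sparsity and band structure in the Cholesky factor, which this illustration omits.

```python
import numpy as np

def sample_gmrf(mu, Q, rng=None):
    """Draw one sample from N(mu, Q^{-1}) given the precision matrix Q.

    Uses the standard trick: with Q = L L^T (Cholesky) and z ~ N(0, I),
    the solution v of L^T v = z has covariance Q^{-1}, so x = mu + v.
    """
    rng = np.random.default_rng() if rng is None else rng
    L = np.linalg.cholesky(Q)                 # lower-triangular factor
    z = rng.standard_normal(len(mu))
    v = np.linalg.solve(L.T, z)               # back-substitution
    return np.asarray(mu) + v

# Example: a 1-D random-walk-like GMRF with precision tridiag(-1, 2.01, -1).
n = 200
Q = np.diag(np.full(n, 2.01)) - np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1)
x = sample_gmrf(np.zeros(n), Q)
print(x[:5])
```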

209 citations


Journal ArticleDOI
TL;DR: A new method is presented for automatic segmentation of moving objects in image sequences for VOP extraction, in which a Markov random field labeling is obtained using motion information, spatial information and a dynamic memory.
Abstract: The emerging video coding standard MPEG-4 enables various content-based functionalities for multimedia applications. To support such functionalities, as well as to improve coding efficiency, MPEG-4 relies on a decomposition of each frame of an image sequence into video object planes (VOP). Each VOP corresponds to a single moving object in the scene. This paper presents a new method for automatic segmentation of moving objects in image sequences for VOP extraction. We formulate the problem as graph labeling over a region adjacency graph (RAG), based on motion information. The label field is modeled as a Markov random field (MRF). An initial spatial partition of each frame is obtained by a fast, floating-point based implementation of the watershed algorithm. The motion of each region is estimated by hierarchical region matching. To avoid inaccuracies in occlusion areas, a novel motion validation scheme is presented. A dynamic memory, based on object tracking, is incorporated into the segmentation process to maintain temporal coherence of the segmentation. Finally, a labeling is obtained by maximization of the a posteriori probability of the MRF using motion information, spatial information and the memory. The optimization is carried out by highest confidence first (HCF). Experimental results for several video sequences demonstrate the effectiveness of the proposed approach.

185 citations


Book ChapterDOI
25 Sep 2002
TL;DR: A framework is proposed for the segmentation of brain tumors from MRI that, instead of training on pathology, trains exclusively on healthy tissue and attempts to recognize deviations from normalcy in order to compute a fitness map over the image associated with the presence of pathology.
Abstract: A framework is proposed for the segmentation of brain tumors from MRI. Instead of training on pathology, the proposed method trains exclusively on healthy tissue. The algorithm attempts to recognize deviations from normalcy in order to compute a fitness map over the image associated with the presence of pathology. The resulting fitness map may then be used by conventional image segmentation techniques for honing in on boundary delineation. Such an approach is applicable to structures that are too irregular, in both shape and texture, to permit construction of comprehensive training sets. The technique is an extension of EM segmentation that considers information on five layers: voxel intensities, neighborhood coherence, intra-structure properties, inter-structure relationships, and user input. Information flows between the layers via multi-level Markov random fields and Bayesian classification. A simple instantiation of the framework has been implemented to perform preliminary experiments on synthetic and MRI data.

Journal ArticleDOI
TL;DR: The proposed method, an adaptation of previous work to the specific case of urban areas, modifies the clique potentials of the Markov random field that extracts the road network and uses a multiscale framework.
Abstract: This paper deals with the automatic extraction of the road network in dense urban areas using few-meter-resolution synthetic aperture radar (SAR) images. The first part presents the proposed method, which is an adaptation of previous work to the specific case of urban areas. The major modifications are 1) the clique potentials of the Markov random field that extracts the road network are adapted and 2) a multiscale framework is used. Results on shuttle mission and aerial SAR images with different resolutions are presented. The second part is dedicated to road extraction combining two SAR images taken with different flight directions (orthogonal and antiparallel passes), and the obtained improvement is analyzed.

Proceedings ArticleDOI
07 Nov 2002
TL;DR: Experiments with real hyperspectral data show that this adaptive Bayesian contextual classification procedure can reach classification accuracies similar to those obtained by a pixelwise maximum likelihood classifier with a very large training sample set.
Abstract: In this paper an adaptive Bayesian contextual classification procedure that utilizes both spectral and spatial interpixel dependency contexts in statistics estimation and classification is proposed. Essentially, this classifier is the constructive coupling of an adaptive classification procedure and a Bayesian contextual classification procedure. In this classifier, the joint prior probabilities of the classes of each pixel and its spatial neighbors are modeled by the Markov random field. Experiments with real hyperspectral data show that, starting with a small training sample set, this classifier can reach classification accuracies similar to those obtained by a pixelwise maximum likelihood classifier with a very large training sample set. Additionally, classification maps are produced that have significantly less speckle error.

Proceedings ArticleDOI
05 Dec 2002
TL;DR: The paper presents a tracking system that simultaneously segments and tracks multiple body parts of interacting people in the presence of mutual occlusion and shadow, using a tracking scheme that resembles a multi-target, multi-assignment framework.
Abstract: The paper presents a system to segment and track multiple body parts of interacting humans in the presence of mutual occlusion and shadow. The color image sequence is processed at three levels: pixel level, blob level, and object level. A Gaussian mixture model is used at the pixel level to train and classify individual pixel colors. A Markov random field (MRF) framework is used at the blob level to merge the pixels into coherent blobs and to register inter-blob relations. A coarse model of the human body is applied at the object level as empirical domain knowledge to resolve ambiguity due to occlusion and to recover from intermittent tracking failures. A two-fold tracking scheme is used which consists of blob to blob matching in consecutive frames and blob to body part association within a frame. The tracking scheme resembles a multi-target, multi-assignment framework. The result is a tracking system that simultaneously segments and tracks multiple body parts of interacting people. Example sequences illustrate the success of the proposed paradigm.

Patent
23 Jan 2002
TL;DR: A new system and method for synthesizing textures from an input sample is presented, which uses a unique accelerated patch-based sampling system to synthesize high-quality textures in real time from a small input texture sample.
Abstract: The present invention involves a new system and method for synthesizing textures from an input sample. A system and method according to the present invention uses a unique accelerated patch-based sampling system to synthesize high-quality textures in real-time using a small input texture sample. The patch-based sampling system of the present invention works well for a wide variety of textures ranging from regular to stochastic. Potential feature mismatches across patch boundaries are avoided by sampling patches according to a non-parametric estimation of the local conditional Markov Random Field (MRF) density function.
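A slow, simplified rendition of patch-based sampling is sketched below: patches are laid down in scan order, all candidate patches from the sample are scored on the overlap with what has already been synthesized, and one candidate is drawn at random from those near the best score (a stand-in for sampling the local conditional MRF density). The fixed patch size, overlap, tolerance, and the exhaustive search are illustrative choices; the patented system accelerates this search.

```python
import numpy as np

def synthesize_texture(sample, out_size, patch=24, overlap=6, tol=1.1, rng=None):
    """Grow an out_size x out_size texture from a small grayscale sample.

    For every patch location, all candidate patches from the sample are scored by
    the squared error over the overlap region with what is already synthesized;
    one candidate is drawn at random from those within `tol` of the best score.
    """
    rng = np.random.default_rng() if rng is None else rng
    step = patch - overlap
    out = np.zeros((out_size, out_size), dtype=float)
    h, w = sample.shape
    candidates = np.array([sample[y:y + patch, x:x + patch]
                           for y in range(h - patch + 1)
                           for x in range(w - patch + 1)])

    for i in range(0, out_size - patch + 1, step):
        for j in range(0, out_size - patch + 1, step):
            if i == 0 and j == 0:
                out[:patch, :patch] = candidates[rng.integers(len(candidates))]
                continue
            region = out[i:i + patch, j:j + patch]
            mask = np.zeros((patch, patch), dtype=bool)
            if i > 0:
                mask[:overlap, :] = True          # top overlap with existing rows
            if j > 0:
                mask[:, :overlap] = True          # left overlap with existing columns
            errs = ((candidates - region) ** 2 * mask).sum(axis=(1, 2))
            ok = np.flatnonzero(errs <= tol * errs.min() + 1e-12)
            out[i:i + patch, j:j + patch] = candidates[rng.choice(ok)]
    return out

# Example with a synthetic striped sample.
sample = np.tile(np.sin(np.linspace(0, 6 * np.pi, 48)), (48, 1))
tex = synthesize_texture(sample, out_size=96)
print(tex.shape)
```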

Journal ArticleDOI
TL;DR: An unsupervised segmentation approach to the classification of multispectral images is suggested in a Markov random field (MRF) framework, and the findings are encouraging.
Abstract: An unsupervised segmentation approach to the classification of multispectral images is suggested here in a Markov random field (MRF) framework. This work generalizes the work of Sarkar et al. (2000) on gray-value images to multispectral images and is extended to land-use classification. The essence of this approach is capturing the intrinsic characteristics of tonal and textural regions of any multispectral image. The approach takes an initially oversegmented image and the original multispectral image as the input and defines an MRF over the region adjacency graph (RAG) of the initially segmented regions. Energy function minimization associated with the MRF is carried out by applying a multivariate statistical test. A cluster validation scheme is outlined after obtaining the optimal segmentation. Quantitative evaluation of classification accuracy on test data for three illustrations is shown and compared with the conventional maximum likelihood procedure. A comparison of the proposed methodology with a recent texture segmentation method in the literature is also provided. The findings of the proposed method are encouraging.

Journal ArticleDOI
TL;DR: A new adaptive noise reduction method for interferometric synthetic aperture radar (InSAR) complex-amplitude images is proposed, which first detects residues in the phase image together with their neighbors and then decreases the number of residues.
Abstract: We propose a new adaptive noise reduction method for interferometric synthetic aperture radar (InSAR) complex-amplitude images. In the proposed method, we first detect residues (singular points) in the phase image as well as their neighbors. Normal areas that contain no residue are used to estimate correct pixel values at the marked residues according to a 5th-order non-causal complex-valued Markov random field (CMRF) model. The process is performed block-wise under the assumption of locally stationary statistics. Using a CMRF lattice complex-valued neural network, the error energy, defined as the squared norm of the distance between signal and estimated values, is minimized by an LMS steepest-descent algorithm. Eventually, the number of residues is decreased. An application is also presented: an InSAR image around Mt. Fuji is processed by the proposed technique and then phase-unwrapped by the branch-cut method. It is found that, after the application of the proposed method, a better phase-unwrapped image can be obtained.
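The LMS adaptation of complex-valued prediction weights can be sketched as follows, under simplifying assumptions: a small illustrative neighbour set standing in for the full 5th-order neighbourhood, training on a residue-free block, and scalar complex weights. The complex LMS update w <- w + mu * conj(x) * e reduces the squared error energy named in the abstract.

```python
import numpy as np

# Neighbour offsets standing in for the non-causal CMRF neighbourhood (illustrative subset).
OFFSETS = [(-1, 0), (1, 0), (0, -1), (0, 1), (-1, -1), (-1, 1), (1, -1), (1, 1)]

def lms_train(block, mu=1e-3, n_pass=20):
    """Adapt complex prediction weights on a residue-free block by LMS steepest descent.

    The model predicts each pixel as a weighted sum of its neighbours; the error
    energy |pixel - prediction|^2 is reduced with the complex LMS update
    w <- w + mu * conj(neighbours) * error.
    """
    w = np.zeros(len(OFFSETS), dtype=complex)
    h, wd = block.shape
    for _ in range(n_pass):
        for r in range(1, h - 1):
            for c in range(1, wd - 1):
                x = np.array([block[r + dr, c + dc] for dr, dc in OFFSETS])
                e = block[r, c] - w @ x
                w = w + mu * np.conj(x) * e
    return w

def predict_pixel(block, r, c, w):
    """Estimate the complex value at (r, c), e.g. at a marked residue, from its neighbours."""
    x = np.array([block[r + dr, c + dc] for dr, dc in OFFSETS])
    return w @ x

# Toy usage on a smooth complex fringe pattern.
yy, xx = np.mgrid[0:32, 0:32]
field = np.exp(1j * 0.3 * (xx + 0.5 * yy))
w = lms_train(field)
print(abs(predict_pixel(field, 16, 16, w) - field[16, 16]))
```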

Journal ArticleDOI
TL;DR: A Markov chain Monte Carlo maximum-likelihood (MCMCML) technique is presented that simultaneously achieves parameter estimation and image reconstruction for satellite image deconvolution.

Journal ArticleDOI
TL;DR: A class of such models (the double Markov random field) for images composed of several textures is described and is considered to be the natural hierarchical model for such a task.
Abstract: Markov random fields are used extensively in model-based approaches to image segmentation and, under the Bayesian paradigm, are implemented through Markov chain Monte Carlo (MCMC) methods. We describe a class of such models (the double Markov random field) for images composed of several textures, which we consider to be the natural hierarchical model for such a task. We show how several of the Bayesian approaches in the literature can be viewed as modifications of this model, made in order to make MCMC implementation possible. From a simulation study, conclusions are made concerning the performance of these modified models.

Journal ArticleDOI
TL;DR: New MRF (Markov random field) models in a bidirectional Bayesian framework for accurate motion and occlusion field estimation are presented; using the proposed fast bidirectional relaxation, the resultant computation is 5.5 times faster than the conventional "iterated conditional mode" relaxation.
Abstract: This paper presents new MRF (Markov random field) models in a bidirectional Bayesian framework for accurate motion and occlusion field estimation. With careful selection of the five free parameters required by the models, good experimental results have been obtained. The resultant computational speed is also 5.5 times faster compared with the conventional "iterated conditional mode" relaxation using the proposed fast bidirectional relaxation.

Journal ArticleDOI
TL;DR: In this article, the high resolution image is modeled as a Markov random field (MRF) and a maximum a posteriori (MAP) estimation technique is used for super-resolution restoration.
Abstract: This paper presents a new technique for generating a high resolution image from a blurred image sequence; this is also referred to as super-resolution restoration of images. The image sequence consists of decimated, blurred and noisy versions of the high resolution image. The high resolution image is modeled as a Markov random field (MRF) and a maximum a posteriori (MAP) estimation technique is used for super-resolution restoration. Unlike other super-resolution imaging methods, the proposed technique does not require sub-pixel registration of given observations. A simple gradient descent method is used to optimize the functional. The discontinuities in the intensity process can be preserved by introducing suitable line processes. Superiority of this technique to standard methods of image expansion like pixel replication and spline interpolation is illustrated.
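A minimal sketch of the MAP iteration, under stated simplifications: a single observed frame, blur-plus-decimation modelled as 2x2 block averaging, and a quadratic GMRF smoothness term in place of the line-process prior; parameter names are illustrative.

```python
import numpy as np

def downsample2(x):
    """Blur + decimate modelled as 2x2 block averaging (a simplification)."""
    return 0.25 * (x[0::2, 0::2] + x[1::2, 0::2] + x[0::2, 1::2] + x[1::2, 1::2])

def upsample2_adjoint(y):
    """Adjoint of downsample2: replicate each value into its 2x2 block, scaled by 1/4."""
    return 0.25 * np.repeat(np.repeat(y, 2, axis=0), 2, axis=1)

def laplacian(x):
    """Discrete Laplacian, the gradient of a quadratic smoothness (GMRF) prior."""
    p = np.pad(x, 1, mode="edge")
    return 4 * x - p[:-2, 1:-1] - p[2:, 1:-1] - p[1:-1, :-2] - p[1:-1, 2:]

def map_super_resolve(y, lam=0.1, step=0.5, n_iter=200):
    """Gradient descent on the data term ||D x - y||^2 plus lam * smoothness."""
    x = np.repeat(np.repeat(y, 2, axis=0), 2, axis=1)   # initial guess: pixel replication
    for _ in range(n_iter):
        residual = downsample2(x) - y
        grad = upsample2_adjoint(residual) + lam * laplacian(x)
        x = x - step * grad
    return x

# Example: recover a 64x64 image from a 32x32 block-averaged, noisy observation.
rng = np.random.default_rng(0)
hi = np.zeros((64, 64)); hi[16:48, 16:48] = 1.0
obs = downsample2(hi) + 0.01 * rng.standard_normal((32, 32))
print(map_super_resolve(obs).shape)
```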

Proceedings ArticleDOI
11 Aug 2002
TL;DR: A new technique is introduced based on a Markov random field model of the document, in which the model parameters (clique potentials) are learned from training data and the binary image is estimated in a Bayesian framework.
Abstract: Binarization techniques have been developed in the document analysis community for over 30 years and many algorithms have been used successfully. On the other hand, document analysis tasks are more and more frequently being applied to multimedia documents such as video sequences. Due to low resolution and lossy compression, the binarization of text included in the frames is a non-trivial task. Existing techniques work without a model of the spatial relationships in the image, which makes them less powerful. We introduce a new technique based on a Markov random field model of the document. The model parameters (clique potentials) are learned from training data and the binary image is estimated in a Bayesian framework. The performance is evaluated using commercial OCR software.

Journal ArticleDOI
TL;DR: This paper presents a formulation for fusing texture and color in a manner that makes the segmentation reliable while keeping the computational cost low, with the goal of real-time target tracking.

Journal ArticleDOI
TL;DR: A method is proposed for choosing the number of colors or true gray levels in an image, which allows fully automatic segmentation of images; a simpler approximation, MMIC (Marginal Mixture Information Criterion), based only on the marginal distribution of pixel values, is also discussed.
Abstract: We propose a method for choosing the number of colors or true gray levels in an image; this allows fully automatic segmentation of images. Our underlying probability model is a hidden Markov random field. Each number of colors considered is viewed as corresponding to a statistical model for the image, and the resulting models are compared via approximate Bayes factors. The Bayes factors are approximated using BIC (Bayesian Information Criterion), where the required maximized likelihood is approximated by the Qian-Titterington (1991) pseudolikelihood. We call the resulting criterion PLIC (Pseudolikelihood Information Criterion). We also discuss a simpler approximation, MMIC (Marginal Mixture Information Criterion), which is based only on the marginal distribution of pixel values. This turns out to be useful for initialization and it also has moderately good performance by itself when the amount of spatial dependence in an image is low. We apply PLIC and MMIC to a medical image segmentation problem.
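Only the simpler MMIC-style computation is easy to sketch: fit Gaussian mixtures with an increasing number of components to the marginal distribution of pixel values and keep the number with the smallest BIC. The PLIC step, which replaces the likelihood with the Qian-Titterington pseudolikelihood of the hidden MRF, is not shown; the scikit-learn mixture fit below is an illustrative stand-in.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def choose_num_colors(image, max_colors=8, random_state=0):
    """MMIC-style selection: fit G-component Gaussian mixtures to the marginal
    distribution of pixel values and return the G with the smallest BIC."""
    x = np.asarray(image, dtype=float).reshape(-1, 1)
    bics = {}
    for g in range(1, max_colors + 1):
        gm = GaussianMixture(n_components=g, random_state=random_state).fit(x)
        bics[g] = gm.bic(x)
    best = min(bics, key=bics.get)
    return best, bics

# Example: an image whose pixels come from three well-separated gray levels.
rng = np.random.default_rng(1)
img = rng.choice([0.2, 0.5, 0.8], size=(64, 64)) + 0.02 * rng.standard_normal((64, 64))
print(choose_num_colors(img)[0])
```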

01 Jan 2002
TL;DR: It is demonstrated that including phase information as a priori knowledge in a Markov random field (MRF) model can improve the quality of vascular segmentation.
Abstract: This paper presents a statistical approach to aggregating speed and phase (directional) information for vascular segmentation of phase contrast magnetic resonance angiograms (PC-MRA). Rather than relying on speed information alone, as done by others and in our own work, we demonstrate that including phase information as a priori knowledge in a Markov random field (MRF) model can improve the quality of segmentation. This is particularly true in the region within an aneurysm where there is a heterogeneous intensity pattern and significant vascular signal loss. We propose to use a Maxwell-Gaussian mixture density to model the background signal distribution and combine this with a uniform distribution for modelling vascular signal to give a Maxwell-Gaussian-uniform (MGU) mixture model of image intensity. The MGU model parameters are estimated by the modified expectation-maximisation (EM) algorithm. In addition, it is shown that the Maxwell-Gaussian mixture distribution (a) models the background signal more accurately than a Maxwell distribution, (b) exhibits a better fit to clinical data and (c) gives fewer false positive voxels (misclassified vessel voxels) in segmentation. The new segmentation algorithm is tested on an aneurysm phantom data set and two clinical data sets. The experimental results show that the proposed method can provide a better quality of segmentation when both speed and phase information are utilised.
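Given fitted parameters (which the paper obtains with a modified EM algorithm), the per-voxel posterior probability of vessel under the MGU mixture follows from Bayes' rule. The sketch below writes out the Maxwell and Gaussian densities and that posterior; the mixing weights and other parameter values in the example are made up for illustration.

```python
import numpy as np

def maxwell_pdf(x, a):
    """Maxwell density sqrt(2/pi) * x^2 * exp(-x^2 / (2 a^2)) / a^3, for x >= 0."""
    x = np.asarray(x, dtype=float)
    return np.where(x >= 0, np.sqrt(2 / np.pi) * x**2 * np.exp(-x**2 / (2 * a**2)) / a**3, 0.0)

def gauss_pdf(x, mu, sigma):
    return np.exp(-(np.asarray(x, float) - mu) ** 2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))

def vessel_posterior(speed, params):
    """Posterior probability that a voxel is vessel under the MGU mixture.

    params: dict with mixing weights w_maxwell, w_gauss, w_uniform (summing to 1),
    Maxwell scale a, Gaussian mean/sigma, and the uniform range s_max.  All values
    are assumed to have been estimated beforehand (e.g. by the modified EM step).
    """
    p = params
    bg = (p["w_maxwell"] * maxwell_pdf(speed, p["a"])
          + p["w_gauss"] * gauss_pdf(speed, p["mu"], p["sigma"]))
    vessel = p["w_uniform"] * (1.0 / p["s_max"]) * (np.asarray(speed) <= p["s_max"])
    return vessel / (vessel + bg)

# Example with made-up parameter values.
params = dict(w_maxwell=0.55, w_gauss=0.40, w_uniform=0.05, a=20.0, mu=60.0, sigma=15.0, s_max=400.0)
print(vessel_posterior(np.array([10.0, 80.0, 300.0]), params))
```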

Journal ArticleDOI
TL;DR: The authors propose a progressive minimization procedure of the global energy function, starting from initial reliably labeled pixels and involving only local computation.
Abstract: The early and accurate segmentation of low clouds during the night-time is an important task for nowcasting. It requires that observations can be acquired at a sufficient time rate, as provided by the geostationary METEOSAT satellite over Europe. However, the information supplied by the single infrared METEOSAT channel available by night is not sufficient to discriminate between low clouds and ground at night from a single image. To tackle this issue, the authors consider several sources of information extracted from an infrared image sequence. Indeed, they exploit both relevant local motion-based measurements, intensity images and thermal parameters estimated over blocks, along with local contextual information. A statistical contextual labeling process in two classes, involving "low clouds" and "clear sky," is performed on the warmer pixels. It is formulated within a Bayesian estimation framework associated with Markov random field (MRF) models. This amounts to minimizing a global energy function comprising three terms: two data-driven terms (thermal and motion-based ones) and a regularization term expressing a priori knowledge on the label field (expected spatial contextual properties). The authors propose a progressive minimization procedure of this energy function starting from initial reliably labeled pixels and involving only local computation.

Journal ArticleDOI
TL;DR: In this paper, a statistical approach to aggregating speed and phase (directional) information for vascular segmentation of phase contrast magnetic resonance angiograms (PC-MRA) is presented.

Proceedings ArticleDOI
22 Apr 2002
TL;DR: A new electronic colon cleansing technology is presented, which employs a hidden Markov random field (MRF) model to integrate the neighborhood information for overcoming the non-uniformity problems within the tagged stool/fluid region.
Abstract: Virtual colonoscopy provides a safe, minimally invasive approach to detect colonic polyps using medical imaging and computer graphics technologies. Residual stool and fluid are problematic for optimal viewing of the colonic mucosa. Electronic cleansing techniques combining bowel preparation, oral contrast agents, and image segmentation were developed to extract the colon lumen from computed tomography (CT) images of the colon. In this paper, we present a new electronic colon cleansing technology, which employs a hidden Markov random field (MRF) model to integrate the neighborhood information for overcoming the non-uniformity problems within the tagged stool/fluid region. Prior to obtaining CT images, the patient undergoes a bowel preparation. A statistical method for maximum a posteriori probability (MAP) was developed to identify the enhanced regions of residual stool/fluid. The method utilizes a hidden MRF Gibbs model to integrate the spatial information into the Expectation Maximization (EM) model-fitting MAP algorithm. The algorithm estimates the model parameters and segments the voxels iteratively in an interleaved manner, converging to a solution where the model parameters and voxel labels are stabilized within a specified criterion. Experimental results are promising.

Journal ArticleDOI
TL;DR: It has been observed that, although good segmentation results are obtained using only chromatic information, luminance information improves the quality of segmentation in some cases.

Journal ArticleDOI
TL;DR: A maximum a posteriori (MAP) framework is presented for detecting tag lines using a Markov random field defined on the lattice generated by three-dimensional (3-D) and four-dimensional (4-D) (3-D+t) uniform sampling of B-spline models.
Abstract: Magnetic resonance (MR) tagging is a technique for measuring heart deformations through creation of a stripe grid pattern on cardiac images. In this paper, we present a maximum a posteriori (MAP) framework for detecting tag lines using a Markov random field (MRF) defined on the lattice generated by three-dimensional (3-D) and four-dimensional (4-D) (3-D+t) uniform sampling of B-spline models. In the 3-D case, MAP estimation is cast for detecting present tag features in the current image given an initial solid from the previous frame (the initial undeformed solid is manually positioned by clicking on corner points of a cube). The method also allows the parameters of the solid model, including the number of knots and the spline order, to be adjusted within the same framework. Fitting can start with a solid with less knots and lower spline order and proceed to one with more knots and/or higher order so as to achieve more accuracy and/or higher order of smoothness. In the 4-D case, the initial model is considered to be the linear interpolation of a sequence of optimal solids obtained from 3-D tracking. The same framework proposed for the 3-D case can once again be applied to arrive at a 4-D B-spline model with a higher temporal order.