
Showing papers on "Markov random field published in 2003"


Journal ArticleDOI
TL;DR: A method to solve exactly a first order Markov random field optimization problem in more generality than was previously possible is introduced, which maps the problem into a minimum-cut problem for a directed graph, for which a globally optimal solution can be found in polynomial time.
Abstract: We introduce a method to solve exactly a first order Markov random field optimization problem in more generality than was previously possible. The MRF has a prior term that is convex in terms of a linearly ordered label set. The method maps the problem into a minimum-cut problem for a directed graph, for which a globally optimal solution can be found in polynomial time. The convexity of the prior function in the energy is shown to be necessary and sufficient for the applicability of the method.
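As a rough sketch of the reduction (a binary-label special case, not the paper's full construction for linearly ordered label sets; all costs are made up), the energy of a first-order MRF can be encoded as edge capacities of an s-t graph so that the minimum cut equals the minimum energy:

```python
from collections import deque
from itertools import product

def add_edge(cap, u, v, c):
    cap.setdefault(u, {})
    cap.setdefault(v, {})
    cap[u][v] = cap[u].get(v, 0) + c
    cap[v].setdefault(u, 0)                   # residual edge

def max_flow(cap, s, t):
    """Edmonds-Karp max-flow; by max-flow/min-cut it equals the minimum cut."""
    flow = 0
    while True:
        parent, q = {s: None}, deque([s])
        while q and t not in parent:          # BFS for an augmenting path
            u = q.popleft()
            for v, c in cap[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow
        b, v = float('inf'), t                # bottleneck capacity on the path
        while parent[v] is not None:
            b = min(b, cap[parent[v]][v])
            v = parent[v]
        v = t
        while parent[v] is not None:          # augment along the path
            cap[parent[v]][v] -= b
            cap[v][parent[v]] += b
            v = parent[v]
        flow += b

# E(x) = sum_i D[i][x_i] + lam * sum_i [x_i != x_{i+1}],  x_i in {0, 1}
D = [(0, 3), (2, 1), (4, 0)]                  # made-up unary costs
lam = 2                                       # made-up smoothness weight
cap = {}
for i, (d0, d1) in enumerate(D):
    add_edge(cap, 's', i, d1)                 # this t-link is cut iff x_i = 1
    add_edge(cap, i, 't', d0)                 # this t-link is cut iff x_i = 0
for i in range(len(D) - 1):
    add_edge(cap, i, i + 1, lam)              # n-links cut iff labels disagree
    add_edge(cap, i + 1, i, lam)

mincut = max_flow(cap, 's', 't')
best = min(sum(D[i][x[i]] for i in range(3))
           + lam * sum(x[i] != x[i + 1] for i in range(2))
           for x in product((0, 1), repeat=3))
```

For linearly ordered labels with a convex prior, the same idea uses a column of nodes per pixel instead of a single node, which is what makes the method's generality possible.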

602 citations


Proceedings ArticleDOI
13 Oct 2003
TL;DR: This work compares the belief propagation algorithm and the graph cuts algorithm on the same MRFs, which were created for calculating stereo disparities, and finds that the labellings produced by the two algorithms are comparable.
Abstract: Recent stereo algorithms have achieved impressive results by modelling the disparity image as a Markov Random Field (MRF). An important component of an MRF-based approach is the inference algorithm used to find the most likely setting of each node in the MRF. Algorithms have been proposed which use graph cuts or belief propagation for inference. These stereo algorithms differ in both the inference algorithm used and the formulation of the MRF, so it has been unclear whether differences in performance should be attributed to the MRF formulation or to the inference algorithm. We address this through controlled experiments, comparing the belief propagation algorithm and the graph cuts algorithm on the same MRFs, created for calculating stereo disparities. We find that the labellings produced by the two algorithms are comparable. The solutions produced by graph cuts have a lower energy than those produced with belief propagation, but this does not necessarily lead to increased performance relative to the ground truth.
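To make the inference step concrete, here is a minimal min-sum belief propagation sketch on a toy chain MRF, where BP is exact (the stereo grids in the paper are loopy, so BP is only approximate there); the data costs are random stand-ins:

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(0)
n, k = 5, 3                                     # 5 pixels, 3 disparity labels
D = rng.random((n, k)) * 10                     # random data costs (stand-ins)
V = lambda a, b: 2.0 * abs(a - b)               # linear smoothness cost

# forward/backward min-sum messages (exact on a chain)
fwd = np.zeros((n, k))
bwd = np.zeros((n, k))
for i in range(1, n):
    fwd[i] = [min(fwd[i-1][a] + D[i-1][a] + V(a, b) for a in range(k))
              for b in range(k)]
for i in range(n - 2, -1, -1):
    bwd[i] = [min(bwd[i+1][a] + D[i+1][a] + V(a, b) for a in range(k))
              for b in range(k)]
labels = np.argmin(D + fwd + bwd, axis=1)       # per-node belief minimisers

def energy(x):
    return (sum(D[i][x[i]] for i in range(n))
            + sum(V(x[i], x[i+1]) for i in range(n - 1)))

best = min(energy(x) for x in product(range(k), repeat=n))
```

On a chain the beliefs recover the global minimum-energy labelling; the paper's point is precisely what happens when the graph has loops and this guarantee disappears.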

538 citations


Proceedings ArticleDOI
13 Oct 2003
TL;DR: This work presents discriminative random fields (DRFs), a discriminative framework for the classification of image regions that incorporates neighborhood interactions in the labels as well as the observed data, and offers several advantages over the conventional Markov random field framework.
Abstract: In this work we present discriminative random fields (DRFs), a discriminative framework for the classification of image regions by incorporating neighborhood interactions in the labels as well as the observed data. The discriminative random fields offer several advantages over the conventional Markov random field (MRF) framework. First, the DRFs make it possible to relax the strong assumption of conditional independence of the observed data, generally made in the MRF framework for tractability. This assumption is too restrictive for a large number of applications in vision. Second, the DRFs derive their classification power by exploiting probabilistic discriminative models instead of the generative models used in the MRF framework. Finally, all the parameters in the DRF model are estimated simultaneously from the training data, unlike the MRF framework, where likelihood parameters are usually learned separately from the field parameters. We illustrate the advantages of the DRFs over the MRF framework in an application to man-made structure detection in natural images taken from the Corel database.

512 citations


Journal ArticleDOI
TL;DR: A method of assigning functions based on a probabilistic analysis of graph neighborhoods in a protein-protein interaction network that exploits the fact that graph neighbors are more likely to share functions than nodes which are not neighbors.
Abstract: Motivation: The development of experimental methods for genome scale analysis of molecular interaction networks has made possible new approaches to inferring protein function. This paper describes a method of assigning functions based on a probabilistic analysis of graph neighborhoods in a protein-protein interaction network. The method exploits the fact that graph neighbors are more likely to share functions than nodes which are not neighbors. A binomial model of local neighbor function labeling probability is combined with a Markov random field propagation algorithm to assign function probabilities for proteins in the network. Results: We applied the method to a protein-protein interaction dataset for the yeast Saccharomyces cerevisiae using the Gene Ontology (GO) terms as function labels. The method reconstructed known GO term assignments with high precision, and produced putative GO assignments to 320 proteins that currently lack GO annotation, representing about 10% of the unlabeled proteins in S. cerevisiae.
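The binomial neighbor model can be sketched as a small Bayes computation; the prior and link probabilities below are invented for illustration, not taken from the paper:

```python
from math import comb

def posterior(j, n, prior=0.3, p1=0.7, p0=0.2):
    """P(protein has the function | j of its n interaction partners do).
    prior, p1 (neighbour annotation rate if the function is present) and
    p0 (rate if absent) are invented illustrative values."""
    like1 = comb(n, j) * p1**j * (1 - p1)**(n - j)   # binomial likelihoods
    like0 = comb(n, j) * p0**j * (1 - p0)**(n - j)
    return prior * like1 / (prior * like1 + (1 - prior) * like0)
```

Because p1 > p0, the posterior rises with the number of annotated neighbours; the MRF propagation step then iterates such local updates over the whole network.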

387 citations


Journal ArticleDOI
TL;DR: A class of algorithms built on the idea of working with systems of independent variables, corresponding to approximations of the pixel interactions similar to the mean field approximation; the resulting algorithms take the Markovian structure into account while preserving the good features of EM.
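A minimal sketch of a mean-field update for a two-class hidden MRF on a 1D signal (only the E-step-like fixed point is shown; the class means are held fixed and all parameter values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
y = np.concatenate([rng.normal(0, 1, 50), rng.normal(4, 1, 50)])  # 1D signal
mu = np.array([0.0, 4.0])      # class means held fixed (M-step omitted)
beta = 1.0                     # neighbour interaction strength (assumed)

ll = -0.5 * (y[:, None] - mu[None, :])**2      # Gaussian log-likelihoods
q = np.full((len(y), 2), 0.5)                  # mean-field marginals
for _ in range(20):                            # fixed-point iterations
    nb = np.zeros_like(q)
    nb[1:] += q[:-1]
    nb[:-1] += q[1:]                           # sums of neighbour marginals
    s = ll + beta * nb                         # field replaced by its "mean"
    s -= s.max(axis=1, keepdims=True)
    q = np.exp(s)
    q /= q.sum(axis=1, keepdims=True)

labels = q.argmax(axis=1)
acc = (labels == np.array([0]*50 + [1]*50)).mean()
```

Each pixel sees its neighbours only through their current marginals, which is what makes the variables effectively independent and the E-step tractable inside EM.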

347 citations


Proceedings ArticleDOI
09 Dec 2003
TL;DR: The proposed DRF model exploits local discriminative models and makes it possible to relax the assumption of conditional independence of the observed data given the labels, commonly made in the Markov Random Field (MRF) framework.
Abstract: In this paper we present Discriminative Random Fields (DRF), a discriminative framework for the classification of natural image regions by incorporating neighborhood spatial dependencies in the labels as well as the observed data. The proposed model exploits local discriminative models and relaxes the assumption of conditional independence of the observed data given the labels, commonly made in the Markov Random Field (MRF) framework. The parameters of the DRF model are learned using a penalized maximum pseudo-likelihood method. Furthermore, the form of the DRF model allows MAP inference for binary classification problems using graph min-cut algorithms. The performance of the model was verified on synthetic as well as real-world images, and the DRF model outperforms the MRF model in the experiments.
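A toy version of penalized pseudo-likelihood fitting, for a 1D Ising field with a quadratic penalty and a grid search standing in for a proper optimizer (the label field and all values are assumptions):

```python
import numpy as np

x = np.array([1]*8 + [-1]*8 + [1]*8)     # toy 1D label field in {-1, +1}

def penalized_pl(beta, lam=0.1):
    """Penalized log pseudo-likelihood of the interaction weight beta."""
    nb = np.zeros(len(x))
    nb[1:] += x[:-1]
    nb[:-1] += x[1:]                     # sum of neighbouring spins
    # Ising conditional: log P(x_i | nbrs) = -log(1 + exp(-2*beta*x_i*nb_i))
    return -np.log1p(np.exp(-2.0 * beta * x * nb)).sum() - lam * beta**2

betas = np.linspace(0.0, 3.0, 61)
beta_hat = betas[np.argmax([penalized_pl(b) for b in betas])]
```

Pseudo-likelihood replaces the intractable partition function with a product of local conditionals, so each term only needs a site's neighbours; the quadratic penalty keeps the estimate finite.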

252 citations


Journal ArticleDOI
TL;DR: This paper presents unsupervised models for both the detection and the shadow extraction phases of an automated classification system for mine detection and classification using high-resolution sidescan sonar.
Abstract: Mine detection and classification using high-resolution sidescan sonar is a critical technology for mine countermeasures (MCM). As opposed to the majority of techniques which require large training data sets, this paper presents unsupervised models for both the detection and the shadow extraction phases of an automated classification system. The detection phase is carried out using an unsupervised Markov random field (MRF) model where the required model parameters are estimated from the original image. Using a priori spatial information on the physical size and geometric signature of mines in sidescan sonar, a detection-orientated MRF model is developed which directly segments the image into regions of shadow, seabottom-reverberation, and object-highlight. After detection, features are extracted so that the object can be classified. A novel co-operating statistical snake (CSS) model is presented which extracts the highlight and shadow of the object. The CSS model again utilizes available a priori information on the spatial relationship between the highlight and shadow, allowing accurate segmentation of the object's shadow to be achieved.

226 citations


Journal ArticleDOI
TL;DR: Three different statistical approaches to automatic land cover classification from satellite images are considered: two of them, the well-known maximum likelihood classifier and the support vector machine (SVM), are noncontextual methods, while the third, iterated conditional modes (ICM), exploits spatial context by using a Markov random field.
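A compact sketch of ICM for contextual classification on a synthetic two-class image (the image, class means, and context weight are all invented; the paper applies this to satellite data):

```python
import numpy as np

rng = np.random.default_rng(2)
truth = np.zeros((16, 16), dtype=int)
truth[:, 8:] = 1                                  # two-class ground truth
y = truth + rng.normal(0.0, 0.5, truth.shape)     # noisy observed band
mu, beta = (0.0, 1.0), 1.5                        # class means, context weight

labels = (y > 0.5).astype(int)                    # noncontextual ML start
for _ in range(5):                                # ICM sweeps
    for i in range(16):
        for j in range(16):
            costs = []
            for k in (0, 1):
                data = 0.5 * (y[i, j] - mu[k])**2
                disagree = sum(
                    labels[a, b] != k             # Potts disagreement count
                    for a, b in ((i-1, j), (i+1, j), (i, j-1), (i, j+1))
                    if 0 <= a < 16 and 0 <= b < 16)
                costs.append(data + beta * disagree)
            labels[i, j] = int(np.argmin(costs))  # greedy local update

acc = (labels == truth).mean()
```

Each sweep greedily relabels every pixel given its neighbours, so isolated noise is removed while the class boundary survives; this is the cheap, local-optimum alternative to global MRF inference.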

177 citations


Journal ArticleDOI
TL;DR: A "mutual" MRF approach is proposed that aims at improving both the accuracy and the reliability of the classification process by means of a better exploitation of the temporal information and is an attractive alternative to the usual trial-and-error search procedure.
Abstract: Markov random fields (MRFs) provide a useful and theoretically well-established tool for integrating temporal contextual information into the classification process. In particular, when dealing with a sequence of temporal images, the usual MRF-based approach consists in adopting a "cascade" scheme, i.e., in propagating the temporal information from the current image to the next one of the sequence. The simplicity of the cascade scheme makes it attractive; on the other hand, it does not fully exploit the temporal information available in a sequence of temporal images. In this paper, a "mutual" MRF approach is proposed that aims at improving both the accuracy and the reliability of the classification process by means of a better exploitation of the temporal information. It involves carrying out a bidirectional exchange of the temporal information between the defined single-time MRF models of consecutive images. A difficult issue related to MRFs is the determination of the MRF model parameters that weight the energy terms related to the available information sources. To solve this problem, we propose a simple and fast method based on the concept of "minimum perturbation" and implemented with the pseudoinverse technique for the minimization of the sum of squared errors. Experimental results on a multitemporal dataset made up of two multisensor (Landsat Thematic Mapper and European Remote Sensing 1 synthetic aperture radar) images are reported. The results obtained by the proposed "mutual" approach show a clear improvement in terms of classification accuracy over those yielded by a reference MRF-based classifier. The presented method to automatically estimate the MRF parameters yielded significant results that make it an attractive alternative to the usual trial-and-error search procedure.
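The pseudoinverse step behind the "minimum perturbation" parameter estimate amounts to a least-squares solve; a toy version with made-up energy-term values (the rows and targets below are invented, not the paper's data):

```python
import numpy as np

# rows: values of the energy terms (e.g. spatial, temporal, data) observed at
# four training configurations; b: the desired energy responses (toy numbers)
A = np.array([[ 1.0, -2.0,  0.5],
              [ 0.3,  1.0, -1.0],
              [-0.5,  0.2,  1.0],
              [ 1.0,  1.0,  1.0]])
b = np.array([0.2, -0.1, 0.4, 1.0])

w = np.linalg.pinv(A) @ b      # least-squares weights for the energy terms
residual = A @ w - b
```

The pseudoinverse solution minimizes the sum of squared errors over the constraints, replacing the usual trial-and-error search for the energy weights with a single linear solve.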

160 citations


Journal ArticleDOI
TL;DR: Numerical experiments on multispectral images show that the proposed algorithm is much faster than a similar reference algorithm based on "flat" MRF models, and its performance, in terms of segmentation accuracy and map smoothness, is comparable or even superior.
Abstract: We present a new image segmentation algorithm based on a tree-structured binary MRF model. The image is recursively segmented in smaller and smaller regions until a stopping condition, local to each region, is met. Each elementary binary segmentation is obtained as the solution of a MAP estimation problem, with the region prior modeled as an MRF. Since only binary fields are used, and thanks to the tree structure, the algorithm is quite fast, and allows one to address the cluster validation problem in a seamless way. In addition, all field parameters are estimated locally, allowing for some spatial adaptivity. To improve segmentation accuracy, a split-and-merge procedure is also developed and a spatially adaptive MRF model is used. Numerical experiments on multispectral images show that the proposed algorithm is much faster than a similar reference algorithm based on "flat" MRF models, and its performance, in terms of segmentation accuracy and map smoothness, is comparable or even superior.

159 citations


Journal ArticleDOI
TL;DR: A novel technique to simultaneously estimate the depth map and the focused image of a scene, both at super-resolution, from a sequence of defocused observations.
Abstract: This paper presents a novel technique to simultaneously estimate the depth map and the focused image of a scene, both at a super-resolution, from its defocused observations. Super-resolution refers to the generation of high spatial resolution images from a sequence of low resolution images. Hitherto, the super-resolution technique has been restricted mostly to the intensity domain. In this paper, we extend the scope of super-resolution imaging to acquire depth estimates at high spatial resolution simultaneously. Given a sequence of low resolution, blurred, and noisy observations of a static scene, the problem is to generate a dense depth map at a resolution higher than one that can be generated from the observations as well as to estimate the true high resolution focused image. Both the depth and the image are modeled as separate Markov random fields (MRF) and a maximum a posteriori estimation method is used to recover the high resolution fields. Since there is no relative motion between the scene and the camera, as is the case with most of the super-resolution and structure recovery techniques, we do away with the correspondence problem.

Proceedings Article
18 Jun 2003
TL;DR: This work presents a novel framework for motion segmentation that combines the concepts of layer-based methods and feature-based motion estimation and achieves a dense, piecewise smooth assignment of pixels to motion layers using a fast approximate graph-cut algorithm based on a Markov random field formulation.
Abstract: We present a novel framework for motion segmentation that combines the concepts of layer-based methods and feature-based motion estimation. We estimate the initial correspondences by comparing vectors of filter outputs at interest points, from which we compute candidate scene relations via random sampling of minimal subsets of correspondences. We achieve a dense, piecewise smooth assignment of pixels to motion layers using a fast approximate graph-cut algorithm based on a Markov random field formulation. We demonstrate our approach on image pairs containing large inter-frame motion and partial occlusion. The approach is efficient and it successfully segments scenes with inter-frame disparities previously beyond the scope of layer-based motion segmentation methods.

Proceedings ArticleDOI
13 Oct 2003
TL;DR: A primal sketch model is proposed that integrates the descriptive Markov random field model and the generative wavelet/sparse coding model and, in addition, a Gestalt field model for spatial organization that produces meaningful sketches over a large number of generic images.
Abstract: In this paper, we present a mathematical theory for Marr's primal sketch. We first conduct a theoretical study of the descriptive Markov random field model and the generative wavelet/sparse coding model from the perspective of entropy and complexity. The competition between the two types of models defines the concept of "sketchability", which divides the image into texture and geometry. We then propose a primal sketch model that integrates the two models and, in addition, a Gestalt field model for spatial organization. We also propose a sketching pursuit process that coordinates the competition between two pursuit algorithms: the matching pursuit (Mallat and Zhang, 1993) and the filter pursuit (Zhu, et al., 1997), which seek to explain the image by bases and filters respectively. The model can be used to learn a dictionary of image primitives, or textons in Julesz's language, for natural images. The primal sketch model is not only parsimonious for image representation, but produces meaningful sketches over a large number of generic images.

Journal ArticleDOI
TL;DR: A Bayesian approach for generalized linear models, proposed by the author, that uses a Markov random field to model the coefficients' spatial dependency, together with a proof of the equivalence between this model and other usual spatial regression models.
Abstract: Many spatial regression problems using area data require more flexible forms than the usual linear predictor for modelling the dependence of responses on covariates. One direction for doing this is to allow the coefficients to vary as smooth functions of the area's geographical location. After presenting examples from the scientific literature where these spatially varying coefficients are justified, we briefly review some of the available alternatives for this kind of modelling. We concentrate on a Bayesian approach for generalized linear models proposed by the author which uses a Markov random field to model the coefficients' spatial dependency. We show that, for normally distributed data, Gibbs sampling can be used to sample from the posterior and we prove a result showing the equivalence between our model and other usual spatial regression models. We illustrate our approach with a number of rather complex applied problems, showing that the method is computationally feasible and provides useful insights in substantive problems. Copyright © 2003 John Wiley & Sons, Ltd.

Journal ArticleDOI
TL;DR: A Markov random field (MRF) based approach to high-level grid segmentation, which is robust to common problems encountered with array images and does not require calibration, and an active contour method for single-spot segmentation are proposed.
Abstract: This paper describes image processing methods for automatic spotted microarray image analysis. Automatic gridding is important to achieve constant data quality and is, therefore, especially interesting for large-scale experiments as well as for integration of microarray expression data from different sources. We propose a Markov random field (MRF) based approach to high-level grid segmentation, which is robust to common problems encountered with array images and does not require calibration. We also propose an active contour method for single-spot segmentation. Active contour models describe objects in images by properties of their boundaries. Both MRFs and active contour models have been used in various other computer vision applications. The traditional active contour model must be generalized for successful application to microarray spot segmentation. Our active contour model is employed for spot detection in the MRF score functions as well as for spot signal segmentation in quantitative array image analysis. An evaluation using several image series from different sources shows the robustness of our methods.

Proceedings ArticleDOI
08 Dec 2003
TL;DR: This work describes a multiple hypothesis particle filter for tracking targets that are influenced by the proximity and/or behavior of other targets, and shows how a Markov random field motion prior can model these interactions to enable more accurate tracking.
Abstract: We describe a multiple hypothesis particle filter for tracking targets that are influenced by the proximity and/or behavior of other targets. Our contribution is to show how a Markov random field motion prior, built on the fly at each time step, can model these interactions to enable more accurate tracking. We present results for a social insect tracking application, where we model the domain knowledge that two targets cannot occupy the same space, and targets actively avoid collisions. We show that using this model improves track quality and efficiency. Unfortunately, the joint particle tracker we propose suffers from exponential complexity in the number of tracked targets. An approximation to the joint filter, however, consisting of multiple nearly independent particle filters can provide similar track quality at substantially lower computational cost.
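The MRF motion prior enters the particle weights as a pairwise interaction penalty over joint target states; a minimal sketch with two targets and invented parameters (the appearance likelihood is left flat here):

```python
import numpy as np

rng = np.random.default_rng(3)

def interaction_penalty(pos, radius=1.0, strength=5.0):
    """MRF-style pairwise term: penalise joint hypotheses where the two
    targets overlap (radius and strength are invented parameters)."""
    d = np.linalg.norm(pos[0] - pos[1])
    return strength * max(0.0, radius - d)

# joint particles: (n_particles, 2 targets, 2 coordinates)
particles = rng.normal(0.0, 1.0, (500, 2, 2))
lik = np.ones(500)                                   # flat appearance term
w = lik * np.exp(-np.array([interaction_penalty(p) for p in particles]))
w /= w.sum()                                         # normalised weights

d = np.linalg.norm(particles[:, 0] - particles[:, 1], axis=1)
```

Particles whose hypothesised targets collide are down-weighted, encoding "two insects cannot occupy the same space"; the exponential cost in the number of targets is what motivates the paper's factored approximation.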

Journal ArticleDOI
TL;DR: A class of random field models defined on a multiresolution array is used in the segmentation of gray level and textured images; the addition of a simple boundary process gives accurate results even at low resolutions, and consequently at very low computational cost.
Abstract: In this paper, a class of random field models defined on a multiresolution array is used in the segmentation of gray level and textured images. The novel feature of one form of the model is that it is able to segment images containing unknown numbers of regions, where there may be significant variation of properties within each region. The estimation algorithms used are stochastic but, because of the multiresolution representation, are computationally fast, requiring only a few iterations per pixel to converge to accurate results, with error rates of 1-2 percent across a range of image structures and textures. The addition of a simple boundary process gives accurate results even at low resolutions, and consequently at very low computational cost.

Journal ArticleDOI
TL;DR: By implicitly encoding the (local and global) shape information into an SA-tree, a variety of vision tasks, e.g., shape recognition, comparison, and retrieval, can be performed in a more robust and efficient way via various tree-based algorithms.
Abstract: Representing shapes in a compact and informative form is a significant problem for vision systems that must recognize or classify objects. We describe a compact representation model for two-dimensional (2D) shapes by investigating their self-similarities and constructing their shape axis trees (SA-trees). Our approach can be formulated as a variational one (or, equivalently, as MAP estimation of a Markov random field). We start with a 2D shape, its boundary contour, and two different parameterizations for the contour (one parameterization is oriented counterclockwise and the other clockwise). To measure its self-similarity, the two parameterizations are matched to derive the best set of one-to-one point-to-point correspondences along the contour. The cost functional used in the matching may vary and is determined by the adopted self-similarity criteria, e.g., cocircularity, distance variation, parallelism, and region homogeneity. The loci of middle points of the pairing contour points yield the shape axis and they can be grouped into a unique free tree structure, the SA-tree. By implicitly encoding the (local and global) shape information into an SA-tree, a variety of vision tasks, e.g., shape recognition, comparison, and retrieval, can be performed in a more robust and efficient way via various tree-based algorithms. A dynamic programming algorithm gives the optimal solution in O(N^1), where N is the size of the contour.

Journal ArticleDOI
TL;DR: A novel pixon-based adaptive scale method that is combined with a Markov random field model under a Bayesian framework for image segmentation and computational costs decrease dramatically compared with the pixel-based MRF algorithm.
Abstract: Image segmentation is an essential processing step for many image analysis applications. We propose a novel pixon-based adaptive scale method for image segmentation. The key idea of our approach is that a pixon-based image model is combined with a Markov random field (MRF) model under a Bayesian framework. We introduce a new pixon scheme that is more suitable for image segmentation than the "fuzzy" pixon scheme. The anisotropic diffusion equation is successfully used to form the pixons in our new pixon scheme. Experimental results demonstrate that our algorithm performs fairly well and computational costs decrease dramatically compared with the pixel-based MRF algorithm.
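A minimal Perona-Malik anisotropic diffusion step of the kind used to form the pixons (the test image and all parameter values are assumptions):

```python
import numpy as np

rng = np.random.default_rng(4)
img = np.zeros((10, 10))
img[:, 5:] = 1.0
img += rng.normal(0.0, 0.05, img.shape)        # noisy step image

def perona_malik(u0, n_iter=20, kappa=0.1, lam=0.2):
    """Perona-Malik diffusion: smooths inside regions, halts at edges."""
    u = u0.copy()
    g = lambda d: np.exp(-(d / kappa)**2)      # edge-stopping function
    for _ in range(n_iter):
        dn = np.zeros_like(u); ds = np.zeros_like(u)
        de = np.zeros_like(u); dw = np.zeros_like(u)
        dn[1:, :] = u[:-1, :] - u[1:, :]       # differences to 4 neighbours
        ds[:-1, :] = u[1:, :] - u[:-1, :]
        de[:, :-1] = u[:, 1:] - u[:, :-1]
        dw[:, 1:] = u[:, :-1] - u[:, 1:]
        u = u + lam * (g(dn)*dn + g(ds)*ds + g(de)*de + g(dw)*dw)
    return u

out = perona_malik(img)
```

Small (noise-scale) gradients are diffused away while the large step survives, so nearly uniform patches emerge and can be grouped into pixons before the MRF labelling runs on them.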

Journal ArticleDOI
TL;DR: This paper focuses on non-purposive grouping (NPG), which is built on general expectations of a perceptually desirable segmentation rather than on any object-specific model, so that the grouping algorithm is applicable to any image understanding application.

Proceedings ArticleDOI
Rosales, Achan, Frey
01 Jan 2003
TL;DR: A probabilistic inference and learning algorithm is proposed for inferring the most likely output image and learning the rendering style. Without a registered image pair, the task of inferring the output image is much more difficult, since the algorithm must both infer correspondences between features in the input image and the source image, and infer the unknown mapping between the images.
Abstract: An interesting and potentially useful vision/graphics task is to render an input image in an enhanced form or in an unusual style, for example with increased sharpness or with some artistic qualities. In previous work [10, 5], researchers showed that by estimating the mapping from an input image to a registered (aligned) image of the same scene in a different style or resolution, the mapping could be used to render a new input image in that style or resolution. Frequently a registered pair is not available, but instead the user may have only a source image of an unrelated scene that contains the desired style. In this case, the task of inferring the output image is much more difficult since the algorithm must both infer correspondences between features in the input image and the source image, and infer the unknown mapping between the images. We describe a Bayesian technique for inferring the most likely output image. The prior on the output image P(X) is a patch-based Markov random field obtained from the source image. The likelihood of the input P(Y|X) is a Bayesian network that can represent different rendering styles. We describe a computationally efficient, probabilistic inference and learning algorithm for inferring the most likely output image and learning the rendering style. We also show that current techniques for image restoration or reconstruction proposed in the vision literature (e.g., image super-resolution or de-noising) and image-based nonphotorealistic rendering could be seen as special cases of our model. We demonstrate our technique using several tasks, including rendering a photograph in the artistic style of an unrelated scene, de-noising, and texture transfer.

Book ChapterDOI
20 Jul 2003
TL;DR: The new approach leads to a generative model that produces highly homogeneous polygonized shapes, improves reconstruction of the training data, and yields an overall reduction in the total variance of the point distribution model.
Abstract: A method for building statistical point distribution models is proposed. The novelty in this paper is the adaptation of Markov random field regularization of the correspondence field over the set of shapes. The new approach leads to a generative model that produces highly homogeneous polygonized shapes and improves the capability of reconstruction of the training data. Furthermore, the method leads to an overall reduction in the total variance of the point distribution model. Thus, it finds correspondence between semi-landmarks that are highly correlated in the shape tangent space. The method is demonstrated on a set of human ear canals extracted from 3D-laser scans.

Proceedings ArticleDOI
09 Mar 2003
TL;DR: A new approach to automatic grid segmentation of the raw fluorescence microarray images by Markov Random Field techniques, which requires weaker assumptions about the array printing process than previously published methods and produces excellent results on many real datasets.
Abstract: DNA microarray hybridisation is a popular high-throughput technique in academic as well as industrial functional genomics research. In this paper we present a new approach to automatic grid segmentation of the raw fluorescence microarray images by Markov Random Field (MRF) techniques. The main objectives are applicability to various types of array designs and robustness to the typical problems encountered in microarray images, which are contaminations and weak signal. We briefly introduce microarray technology and give some background on MRFs. Our MRF model of microarray gridding is designed to integrate different application specific constraints and heuristic criteria into a robust and flexible segmentation algorithm. We show how to compute the model components efficiently and state our deterministic MRF energy minimization algorithm that was derived from the 'Highest Confidence First' algorithm by Chou et al. Since MRF segmentation may fail due to the properties of the data and the minimization algorithm, we use supplied or estimated print layouts to validate results. Finally, we present results of tests on several series of microarray images from different sources, some of them test sets published with other microarray gridding software. Our MRF grid segmentation requires weaker assumptions about the array printing process than previously published methods and produces excellent results on many real datasets. An implementation of the described methods is available upon request from the authors.

Journal ArticleDOI
TL;DR: The method can be applied to noisy images with missing grid nodes and grid-node artifacts, and accommodates a wide range of grid distortions, including large-scale warping, varying row/column spacing, and nonrigid random fluctuations of the grid nodes.
Abstract: A method for locating distorted grid structures in images is presented. The method is based on the theories of template matching and Bayesian image restoration. The grid is modeled as a deformable template. Prior knowledge of the grid is described through a Markov random field (MRF) model which represents the spatial coordinates of the grid nodes. Knowledge of how grid nodes are depicted in the observed image is described through the observation model. The prior consists of a node prior and an arc (edge) prior, both modeled as Gaussian MRFs. The node prior models variations in the positions of grid nodes and the arc prior models variations in row and column spacing across the grid. Grid matching is done by placing an initial rough grid over the image and applying an ensemble annealing scheme to maximize the posterior distribution of the grid. The method can be applied to noisy images with missing grid nodes and grid-node artifacts and the method accommodates a wide range of grid distortions including: large-scale warping, varying row/column spacing, as well as nonrigid random fluctuations of the grid nodes. The methodology is demonstrated in two case studies concerning (1) localization of DNA signals in hybridization filters and (2) localization of knit units in textile samples.

Journal ArticleDOI
TL;DR: This paper shows the application, in this context, of a parameter estimation method that is popular for point processes, and reviews other related methods.

Journal ArticleDOI
TL;DR: A new second-order method of texture analysis called Adaptive Multi-Scale Grey Level Co-occurrence Matrix (AMSGLCM), based on the well-known GLCM method, which demonstrates significant benefits over G LCM, including increased feature discriminatory power, automatic feature adaptability, and significantly improved classification performance.
Abstract: We introduce a new second-order method of texture analysis called Adaptive Multi-Scale Grey Level Co-occurrence Matrix (AMSGLCM), based on the well-known Grey Level Co-occurrence Matrix (GLCM) method. The method deviates significantly from GLCM in that features are extracted, not via a fixed 2D weighting function of co-occurrence matrix elements, but by a variable summation of matrix elements in 3D localized neighborhoods. We subsequently present a new methodology for extracting optimized, highly discriminant features from these localized areas using adaptive Gaussian weighting functions. Genetic Algorithm (GA) optimization is used to produce a set of features whose classification worth is evaluated by discriminatory power and feature correlation considerations. We critically appraised the performance of our method and GLCM in pairwise classification of images from visually similar texture classes, captured from Markov Random Field (MRF) synthesized, natural, and biological origins. In these cross-validated classification trials, our method demonstrated significant benefits over GLCM, including increased feature discriminatory power, automatic feature adaptability, and significantly improved classification performance.
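For reference, the classical fixed-offset GLCM that AMSGLCM builds on can be computed in a few lines; the image and the contrast feature below are toy examples, not the paper's data:

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=4):
    """Unnormalised grey level co-occurrence matrix for a single offset."""
    m = np.zeros((levels, levels))
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            a, b = i + dy, j + dx                # offset neighbour
            if 0 <= a < h and 0 <= b < w:
                m[img[i, j], img[a, b]] += 1     # count the grey-level pair
    return m

img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [2, 2, 3, 3],
                [2, 2, 3, 3]])
M = glcm(img)
contrast = sum((a - b)**2 * M[a, b]
               for a in range(4) for b in range(4)) / M.sum()
```

AMSGLCM's departure is that, instead of applying one fixed weighting like this contrast sum to the whole matrix, it learns adaptive Gaussian weightings over localized, multi-scale neighbourhoods of such matrices.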

Journal ArticleDOI
TL;DR: Coding and decoding results show that, with only 8–30% additional bandwidth over a single-view bit stream, one can transmit, store, and reconstruct stereoscopic video sequences with reasonably good performance.
Abstract: The paper proposes a stereo video coding system. To ensure compatibility with monoscopic transmission, one of the view sequences is coded and transmitted conforming to the MPEG standard and is referred to as the reference stream; the other view is referred to as the target stream. Only a few frames of the latter are coded and transmitted, while the rest are skipped and reconstructed at the decoder using a novel stereoscopic frame compensation and interpolation technique, termed SFEI BLCF. In disparity estimation, smooth and accurate disparity fields are obtained by using hierarchical Markov random field (MRF) and Gibbs random field (GRF) models. A fast search method is used to improve the precision and computation speed. Coding and decoding results show that, with only 8–30% additional bandwidth over a single-view bit stream, one can transmit, store, and reconstruct stereoscopic video sequences with reasonably good performance.

Journal ArticleDOI
TL;DR: The proposed dual MMRF (DMMRF) modeling method offers significant improvement in both objective peak signal-to-noise ratio (PSNR) and subjective visual quality of the restored video sequences.
Abstract: A novel error concealment algorithm based on a stochastic modeling approach is proposed as a post-processing tool at the decoder side for recovering the lost information incurred during the transmission of encoded digital video bitstreams. In the proposed scheme, both the spatial and the temporal contextual features in video signals are separately modeled using the multiscale Markov random field (MMRF). The lost information is then estimated using a maximum a posteriori (MAP) probabilistic approach based on the spatial and temporal MMRF models; hence, a unified MMRF-MAP framework. To preserve the high-frequency information (in particular, the edges) of the damaged video frames through iterative optimization, a new adaptive potential function is also introduced in this paper. Compared with existing MRF-based schemes and other traditional concealment algorithms, the proposed dual MMRF (DMMRF) modeling method offers significant improvement in both objective peak signal-to-noise ratio (PSNR) and subjective visual quality of the restored video sequences.
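As a much-reduced illustration of the MAP idea (spatial term only, no temporal model, and a plain quadratic prior rather than the paper's adaptive potential), MAP estimation of lost pixels under a Gaussian MRF prior collapses to iterated conditional modes, i.e. repeated local averaging over the damaged region. The function below is a hypothetical sketch, not the DMMRF algorithm.

```python
import numpy as np

def conceal(frame, mask, iters=50):
    """Toy spatial MAP concealment by iterated conditional modes (ICM).

    Under a quadratic (Gaussian MRF) smoothness prior, the conditional
    mode of each lost pixel given its neighbours is simply the mean of
    its 4-neighbours, so ICM reduces to Gauss-Seidel-style averaging
    sweeps over the masked (lost) region."""
    out = frame.astype(np.float64).copy()
    out[mask] = out[~mask].mean()          # crude initialisation
    ys, xs = np.nonzero(mask)
    h, w = out.shape
    for _ in range(iters):
        for y, x in zip(ys, xs):
            nbrs = [out[yy, xx]
                    for yy, xx in ((y - 1, x), (y + 1, x),
                                   (y, x - 1), (y, x + 1))
                    if 0 <= yy < h and 0 <= xx < w]
            out[y, x] = sum(nbrs) / len(nbrs)
    return out
```

The quadratic prior is exactly what smears edges; the paper's adaptive potential function exists to avoid this over-smoothing, which the sketch above does not attempt.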

Journal ArticleDOI
TL;DR: A method is proposed for the enhancement of the quality of a classification result by fusing this result with remote sensing images, based on a Markov random field approach, and it shows excellent performance.
Abstract: A method is proposed for the enhancement of the quality of a classification result by fusing this result with remote sensing images, based on a Markov random field approach. The classification accuracy is estimated by a modified posterior probability, which is used for choosing the optimal classification result. The procedure is applied to a benchmark discrimination dataset provided by the IEEE Geoscience and Remote Sensing Society Data Fusion Committee, and it shows excellent performance. The classification result won the 2001 data fusion contest held by the same committee.

Journal ArticleDOI
TL;DR: This paper presents an approach to the problem of estimating a dense optical flow field based on a multiframe, irregularly spaced motion trajectory set, where each trajectory describes the motion of a given point as a function of time.
Abstract: This paper presents an approach to the problem of estimating a dense optical flow field. The approach is based on a multiframe, irregularly spaced motion trajectory set, where each trajectory describes the motion of a given point as a function of time. From this motion trajectory set a dense flow field is estimated through interpolation: a set of localized motion models is estimated, with each pixel labeled as belonging to one of the motion models. A Markov random field framework is adopted, allowing the incorporation of contextual constraints to encourage region-like structures. The approach is compared with a number of conventional optical flow estimation algorithms over several real and synthetic sequences. The method produces more accurate flow for sequences with known ground truth, and yields a lower displaced frame difference (DFD) on all of the real sequences with unknown flow.