
Showing papers on "Markov random field published in 2006"


Journal ArticleDOI
TL;DR: Algorithmic techniques are presented that substantially improve the running time of the loopy belief propagation approach and reduce the complexity of the inference algorithm to be linear rather than quadratic in the number of possible labels for each pixel, which is important for problems such as image restoration that have a large label set.
Abstract: Markov random field models provide a robust and unified framework for early vision problems such as stereo and image restoration. Inference algorithms based on graph cuts and belief propagation have been found to yield accurate results, but despite recent advances are often too slow for practical use. In this paper we present some algorithmic techniques that substantially improve the running time of the loopy belief propagation approach. One of the techniques reduces the complexity of the inference algorithm to be linear rather than quadratic in the number of possible labels for each pixel, which is important for problems such as image restoration that have a large label set. Another technique speeds up and reduces the memory requirements of belief propagation on grid graphs. A third technique is a multi-grid method that makes it possible to obtain good results with a small fixed number of message passing iterations, independent of the size of the input images. Taken together these techniques speed up the standard algorithm by several orders of magnitude. In practice we obtain results that are as accurate as those of other global methods (e.g., using the Middlebury stereo benchmark) while being nearly as fast as purely local methods.

1,560 citations
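The linear-time message update described above rests on a distance-transform recurrence for truncated linear label costs. A minimal sketch of that idea, assuming a 1-D label set with slope `c` and truncation `tau` (illustrative names, not the authors' implementation):

```python
def message_bruteforce(h, c=1.0, tau=2.0):
    # O(L^2) reference: m[l] = min over l' of h[l'] + min(c*|l - l'|, tau)
    L = len(h)
    return [min(h[lp] + min(c * abs(l - lp), tau) for lp in range(L))
            for l in range(L)]

def message_linear(h, c=1.0, tau=2.0):
    # O(L): forward/backward distance-transform passes, then a truncation cap
    m = list(h)
    for l in range(1, len(m)):            # forward pass
        m[l] = min(m[l], m[l - 1] + c)
    for l in range(len(m) - 2, -1, -1):   # backward pass
        m[l] = min(m[l], m[l + 1] + c)
    cap = min(h) + tau                    # truncated-linear upper bound
    return [min(v, cap) for v in m]
```

Both functions compute the same messages; only the linear version scales to large label sets such as the intensity levels in image restoration.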


Journal ArticleDOI
TL;DR: This work investigated a penalized weighted least-squares (PWLS) approach to address this problem in two dimensions, where the WLS considers first- and second-order noise moments and the penalty models signal spatial correlations.
Abstract: Reconstructing low-dose X-ray computed tomography (CT) images is a noise problem. This work investigated a penalized weighted least-squares (PWLS) approach to address this problem in two dimensions, where the WLS considers first- and second-order noise moments and the penalty models signal spatial correlations. Three different implementations were studied for the PWLS minimization. One utilizes a Markov random field (MRF) Gibbs functional to consider spatial correlations among nearby detector bins and projection views in sinogram space and minimizes the PWLS cost function by an iterative Gauss-Seidel algorithm. Another employs the Karhunen-Loeve (KL) transform to de-correlate data signals among nearby views and minimizes the PWLS adaptively for each KL component by analytical calculation, where the spatial correlation among nearby bins is modeled by the same Gibbs functional. The third models the spatial correlations among image pixels in the image domain, also by an MRF Gibbs functional, and minimizes the PWLS by an iterative successive over-relaxation algorithm. In all three implementations, a quadratic functional regularization was chosen for the MRF model. Phantom experiments showed comparable performance of the three PWLS-based methods in terms of suppressing noise-induced streak artifacts and preserving resolution in the reconstructed images. Computer simulations concurred with the phantom experiments in terms of noise-resolution tradeoff and detectability in a low-contrast environment. The KL-PWLS implementation may have an advantage in terms of computation for high-resolution dynamic low-dose CT imaging.

519 citations
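The Gauss-Seidel flavor of a PWLS minimization can be illustrated on a toy 1-D signal with a quadratic smoothness penalty; here `w` stands in for inverse noise variances. This is a simplified sketch under those assumptions, not the paper's CT code:

```python
def pwls_gauss_seidel(y, w, beta=1.0, iters=50):
    # Minimize sum_i w[i]*(y[i]-x[i])^2 + beta * sum_i (x[i]-x[i+1])^2
    # by cycling through pixels and applying the closed-form 1-D update.
    x = list(y)
    n = len(x)
    for _ in range(iters):
        for i in range(n):
            nbrs = [x[j] for j in (i - 1, i + 1) if 0 <= j < n]
            x[i] = (w[i] * y[i] + beta * sum(nbrs)) / (w[i] + beta * len(nbrs))
    return x

def pwls_energy(x, y, w, beta=1.0):
    data = sum(w[i] * (y[i] - x[i]) ** 2 for i in range(len(x)))
    pen = beta * sum((x[i] - x[i + 1]) ** 2 for i in range(len(x) - 1))
    return data + pen
```

Bins with small `w` (high noise) are pulled more strongly toward their neighbors, which is the mechanism behind the streak-artifact suppression described above.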


Journal ArticleDOI
TL;DR: This work presents Discriminative Random Fields (DRFs) to model spatial interactions in images in a discriminative framework based on the concept of Conditional Random Fields proposed by Lafferty et al. (2001).
Abstract: In this research we address the problem of classification and labeling of regions given a single static natural image. Natural images exhibit strong spatial dependencies, and modeling these dependencies in a principled manner is crucial to achieve good classification accuracy. In this work, we present Discriminative Random Fields (DRFs) to model spatial interactions in images in a discriminative framework based on the concept of Conditional Random Fields proposed by Lafferty et al. (2001). The DRFs classify image regions by incorporating neighborhood spatial interactions in the labels as well as the observed data. The DRF framework offers several advantages over the conventional Markov Random Field (MRF) framework. First, the DRFs make it possible to relax the strong assumption of conditional independence of the observed data generally used in the MRF framework for tractability. This assumption is too restrictive for a large number of applications in computer vision. Second, the DRFs derive their classification power by exploiting the probabilistic discriminative models instead of the generative models used for modeling observations in the MRF framework. Third, the interaction in labels in DRFs is based on the idea of pairwise discrimination of the observed data making it data-adaptive instead of being fixed a priori as in MRFs. Finally, all the parameters in the DRF model are estimated simultaneously from the training data unlike the MRF framework where the likelihood parameters are usually learned separately from the field parameters. We present preliminary experiments with man-made structure detection and binary image restoration tasks, and compare the DRF results with the MRF results.

420 citations
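A two-node toy makes the discriminative association potential concrete: each label is scored by a logistic model of its own features plus a pairwise smoothing term, and the posterior is computed by enumeration. The data-independent interaction used here is a deliberate simplification of the DRFs' data-adaptive one; all names and constants are illustrative:

```python
import math
from itertools import product

def drf_posterior(x, w, v, edges):
    # x: per-node feature vectors; labels y_i in {-1, +1}.
    # Association: log sigmoid(y_i * dot(w, x_i)) -- a discriminative
    # (logistic) model of the observed data rather than a generative one.
    # Interaction: v * y_i * y_j, a fixed smoothing term for brevity.
    def sigmoid(t):
        return 1.0 / (1.0 + math.exp(-t))
    n = len(x)
    scores = {}
    for y in product((-1, 1), repeat=n):
        s = sum(math.log(sigmoid(y[i] * sum(wk * xk for wk, xk in zip(w, x[i]))))
                for i in range(n))
        s += sum(v * y[i] * y[j] for i, j in edges)
        scores[y] = math.exp(s)
    z = sum(scores.values())               # exact normalization by enumeration
    return {y: p / z for y, p in scores.items()}
```

With a confident node linked to a weakly supported one, the interaction term pulls the weak node toward agreement, which is the labeling behavior the abstract describes.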


Journal ArticleDOI
TL;DR: A generic model is proposed for unsupervised extraction of viewer's attention objects from color images by integrating computational visual attention mechanisms with attention object growing techniques; the MRF is described by a Gibbs random field with an energy function.
Abstract: This paper proposes a generic model for unsupervised extraction of viewer's attention objects from color images. Without the full semantic understanding of image content, the model formulates the attention objects as a Markov random field (MRF) by integrating computational visual attention mechanisms with attention object growing techniques. Furthermore, we describe the MRF by a Gibbs random field with an energy function. The minimization of the energy function provides a practical way to obtain attention objects. Experimental results on 880 real images and user subjective evaluations by 16 subjects demonstrate the effectiveness of the proposed approach.

408 citations


Journal ArticleDOI
TL;DR: A new and fast algorithm which computes an exact solution of the discrete original problem is proposed, and it is shown that minimization of total variation under an L1 data fidelity term yields a self-dual contrast invariant filter.
Abstract: This paper deals with the total variation minimization problem in image restoration for convex data fidelity functionals. We propose a new and fast algorithm which computes an exact solution in the discrete framework. Our method relies on the decomposition of an image into its level sets. It maps the original problem into independent binary Markov Random Field optimization problems at each level. Exact solutions of these binary problems are found thanks to minimum cost cut techniques in graphs. These binary solutions are proved to be monotone increasing with levels and thus yield an exact solution of the discrete original problem. Furthermore we show that minimization of total variation under an L1 data fidelity term yields a self-dual contrast invariant filter. Finally we present some results.

254 citations
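The level-set decomposition works because suitable operators commute with thresholding (the "stack filter" property). As a toy illustration of this levelability, here with a 3-tap median filter rather than the paper's graph-cut solver:

```python
def median3(x):
    # 3-tap median filter with edge replication
    xp = [x[0]] + list(x) + [x[-1]]
    return [sorted(xp[i:i + 3])[1] for i in range(len(x))]

def threshold(x, t):
    # binary level set {x >= t}
    return [1 if v >= t else 0 for v in x]
```

Filtering then thresholding gives the same binary image as thresholding then filtering, for every level `t`; the paper exploits the analogous property of TV energies to solve one binary MRF per level and stack the monotone solutions back into an exact minimizer.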


Journal ArticleDOI
TL;DR: A Markov random field image segmentation model that combines color and texture features through Bayesian estimation via combinatorial optimization (simulated annealing) is proposed, together with a parameter estimation method using the EM algorithm.

221 citations


Proceedings Article
04 Dec 2006
TL;DR: In this article, the problem of estimating the graph structure associated with a discrete Markov random field was studied in the high-dimensional setting, in which both the number of nodes p and maximum neighborhood sizes d are allowed to grow as a function of the total number of observations n, and it was shown that consistent neighborhood selection can be obtained under certain mutual incoherence conditions analogous to those imposed in previous work on linear regression.
Abstract: We focus on the problem of estimating the graph structure associated with a discrete Markov random field. We describe a method based on l1-regularized logistic regression, in which the neighborhood of any given node is estimated by performing logistic regression subject to an l1-constraint. Our framework applies to the high-dimensional setting, in which both the number of nodes p and maximum neighborhood sizes d are allowed to grow as a function of the number of observations n. Our main result is to establish sufficient conditions on the triple (n, p, d) for the method to succeed in consistently estimating the neighborhood of every node in the graph simultaneously. Under certain mutual incoherence conditions analogous to those imposed in previous work on linear regression, we prove that consistent neighborhood selection can be obtained as long as the number of observations n grows more quickly than 6d^6 log d + 2d^5 log p, thereby establishing that logarithmic growth in the number of samples n relative to graph size p is sufficient to achieve neighborhood consistency.

215 citations
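The neighborhood-selection method can be sketched end to end on a 3-node chain Ising model: sample from the model by exact enumeration, then fit an l1-regularized logistic regression of one node on the others via ISTA (proximal gradient). All names and constants below are illustrative, not from the paper:

```python
import math, random

def sample_chain_ising(n, theta=0.8, seed=0):
    # Chain 0-1-2 with coupling theta; enumerate all 8 states exactly.
    rng = random.Random(seed)
    states = [(a, b, c) for a in (-1, 1) for b in (-1, 1) for c in (-1, 1)]
    weights = [math.exp(theta * (a * b + b * c)) for a, b, c in states]
    return rng.choices(states, weights=weights, k=n)

def l1_logistic(X, y, lam=0.05, step=0.1, iters=1500):
    # ISTA: gradient step on average logistic loss, then soft-threshold
    # (the proximal operator of the l1 penalty).
    n, d = len(X), len(X[0])
    w = [0.0] * d
    for _ in range(iters):
        g = [0.0] * d
        for xi, yi in zip(X, y):
            s = sum(wj * xj for wj, xj in zip(w, xi))
            p = 1.0 / (1.0 + math.exp(-s))
            err = p - (1 if yi == 1 else 0)
            for j in range(d):
                g[j] += err * xi[j] / n
        w = [wj - step * gj for wj, gj in zip(w, g)]
        w = [math.copysign(max(abs(wj) - step * lam, 0.0), wj) for wj in w]
    return w
```

Regressing node 0 on (node 1, node 2) should put nearly all the weight on node 1, its true neighbor, while the l1 penalty drives the non-neighbor's coefficient to (near) zero, which is exactly the neighborhood-consistency behavior the abstract analyzes.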


Journal ArticleDOI
TL;DR: The hybrid joint-separable (HJS) filter is derived from a joint Bayesian formulation of the problem, and shown to be efficient while optimal in terms of compact belief representation and able to resolve long-term occlusions between targets with identical appearance.
Abstract: Visual tracking of multiple targets is a challenging problem, especially when efficiency is an issue. Occlusions, if not properly handled, are a major source of failure. Solutions supporting principled occlusion reasoning have been proposed but remain impractical for online applications. This paper presents a new solution which effectively manages the trade-off between reliable modeling and computational efficiency. The hybrid joint-separable (HJS) filter is derived from a joint Bayesian formulation of the problem, and shown to be efficient while optimal in terms of compact belief representation. Computational efficiency is achieved by employing a Markov random field approximation to joint dynamics and an incremental algorithm for posterior update with an appearance likelihood that implements a physically-based model of the occlusion process. A particle filter implementation is proposed which achieves accurate tracking during partial occlusions, while in cases of complete occlusion, tracking hypotheses are bound to estimated occlusion volumes. Experiments show that the proposed algorithm is efficient, robust, and able to resolve long-term occlusions between targets with identical appearance.

195 citations


Journal Article
TL;DR: The method is completely generic and can be used to segment and infer the pose of any specified rigid, deformable or articulated object.
Abstract: We present a novel algorithm for performing integrated segmentation and 3D pose estimation of a human body from multiple views. Unlike other related state of the art techniques which focus on either segmentation or pose estimation individually, our approach tackles these two tasks together. Normally, when optimizing for pose, it is traditional to use some fixed set of features, e.g. edges or chamfer maps. In contrast, our novel approach consists of optimizing a cost function based on a Markov Random Field (MRF). This has the advantage that we can use all the information in the image: edges, background and foreground appearances, as well as the prior information on the shape and pose of the subject and combine them in a Bayesian framework. Previously, optimizing such a cost function would have been computationally infeasible. However, our recent research in dynamic graph cuts allows this to be done much more efficiently than before. We demonstrate the efficacy of our approach on challenging motion sequences. Note that although we target the human pose inference problem in the paper, our method is completely generic and can be used to segment and infer the pose of any specified rigid, deformable or articulated object.

183 citations


Book ChapterDOI
07 May 2006
TL;DR: In this article, a cost function based on a Markov Random Field (MRF) is proposed to combine the information in the image: edges, background and foreground appearances, as well as the prior information on the shape and pose of the subject and combine them in a Bayesian framework.
Abstract: We present a novel algorithm for performing integrated segmentation and 3D pose estimation of a human body from multiple views. Unlike other related state of the art techniques which focus on either segmentation or pose estimation individually, our approach tackles these two tasks together. Normally, when optimizing for pose, it is traditional to use some fixed set of features, e.g. edges or chamfer maps. In contrast, our novel approach consists of optimizing a cost function based on a Markov Random Field (MRF). This has the advantage that we can use all the information in the image: edges, background and foreground appearances, as well as the prior information on the shape and pose of the subject and combine them in a Bayesian framework. Previously, optimizing such a cost function would have been computationally infeasible. However, our recent research in dynamic graph cuts allows this to be done much more efficiently than before. We demonstrate the efficacy of our approach on challenging motion sequences. Note that although we target the human pose inference problem in the paper, our method is completely generic and can be used to segment and infer the pose of any specified rigid, deformable or articulated object.

158 citations


Proceedings ArticleDOI
25 Jun 2006
TL;DR: Experiments carried out on synthetic data show that the quadratic approximations can be more accurate and computationally efficient than the linear programming and propagation based alternatives.
Abstract: Quadratic program relaxations are proposed as an alternative to linear program relaxations and tree reweighted belief propagation for the metric labeling or MAP estimation problem. An additional convex relaxation of the quadratic approximation is shown to have additive approximation guarantees that apply even when the graph weights have mixed sign or do not come from a metric. The approximations are extended in a manner that allows tight variational relaxations of the MAP problem, although they generally involve non-convex optimization. Experiments carried out on synthetic data show that the quadratic approximations can be more accurate and computationally efficient than the linear programming and propagation based alternatives.

Journal ArticleDOI
TL;DR: The key result of this paper is that in the computation-limited setting, using an inconsistent parameter estimator is provably beneficial, since the resulting errors can partially compensate for errors made by using an approximate prediction technique.
Abstract: Consider the problem of joint parameter estimation and prediction in a Markov random field: that is, the model parameters are estimated on the basis of an initial set of data, and then the fitted model is used to perform prediction (e.g., smoothing, denoising, interpolation) on a new noisy observation. Working under the restriction of limited computation, we analyze a joint method in which the same convex variational relaxation is used to construct an M-estimator for fitting parameters, and to perform approximate marginalization for the prediction step. The key result of this paper is that in the computation-limited setting, using an inconsistent parameter estimator (i.e., an estimator that returns the "wrong" model even in the infinite data limit) is provably beneficial, since the resulting errors can partially compensate for errors made by using an approximate prediction technique. En route to this result, we analyze the asymptotic properties of M-estimators based on convex variational relaxations, and establish a Lipschitz stability property that holds for a broad class of convex variational methods. This stability result provides additional incentive, apart from the obvious benefit of unique global optima, for using message-passing methods based on convex variational relaxations. We show that joint estimation/prediction based on the reweighted sum-product algorithm substantially outperforms a commonly used heuristic based on ordinary sum-product.

Journal ArticleDOI
TL;DR: A novel method, applicable to discrete-valued Markov random fields on arbitrary graphs, for approximately solving this marginalization problem, and finds that the performance of this log-determinant relaxation is comparable or superior to the widely used sum-product algorithm over a range of experimental conditions.
Abstract: Graphical models are well suited to capture the complex and non-Gaussian statistical dependencies that arise in many real-world signals. A fundamental problem common to any signal processing application of a graphical model is that of computing approximate marginal probabilities over subsets of nodes. This paper proposes a novel method, applicable to discrete-valued Markov random fields (MRFs) on arbitrary graphs, for approximately solving this marginalization problem. The foundation of our method is a reformulation of the marginalization problem as the solution of a low-dimensional convex optimization problem over the marginal polytope. Exactly solving this problem for general graphs is intractable; for binary Markov random fields, we describe how to relax it by using a Gaussian bound on the discrete entropy and a semidefinite outer bound on the marginal polytope. This combination leads to a log-determinant maximization problem that can be solved efficiently by interior point methods, thereby providing approximations to the exact marginals. We show how a slightly weakened log-determinant relaxation can be solved even more efficiently by a dual reformulation. When applied to denoising problems in a coupled mixture-of-Gaussian model defined on a binary MRF with cycles, we find that the performance of this log-determinant relaxation is comparable or superior to the widely used sum-product algorithm over a range of experimental conditions.

Proceedings ArticleDOI
20 Aug 2006
TL;DR: A novel probabilistic approach to summarize frequent itemset patterns that can effectively summarize a large number of itemsets and typically significantly outperforms extant approaches is proposed.
Abstract: In this paper, we propose a novel probabilistic approach to summarize frequent itemset patterns. Such techniques are useful for summarization, post-processing, and end-user interpretation, particularly for problems where the resulting set of patterns is huge. In our approach items in the dataset are modeled as random variables. We then construct a Markov Random Field (MRF) on these variables based on frequent itemsets and their occurrence statistics. The summarization proceeds in a level-wise iterative fashion. Occurrence statistics of itemsets at the lowest level are used to construct an initial MRF. Statistics of itemsets at the next level can then be inferred from the model. We use those patterns whose occurrence cannot be accurately inferred from the model to augment the model in an iterative manner, repeating the procedure until all frequent itemsets can be modeled. The resulting MRF model affords a concise and useful representation of the original collection of itemsets. Extensive empirical studies on real datasets show that the new approach can effectively summarize a large number of itemsets and typically significantly outperforms extant approaches.
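The level-wise augmentation loop can be caricatured with an independence model in place of full MRF inference: predict higher-level supports from singleton supports, and keep only the itemsets the model cannot explain. A deliberately simplified sketch (the paper infers supports from an MRF, not from a product model):

```python
def summarize(supports, tol=0.1):
    # supports: dict frozenset(items) -> observed support in [0, 1]
    # Predict an itemset's support as the product of its items' singleton
    # supports; itemsets whose observed support deviates beyond tol are
    # kept to "augment the model", mirroring the level-wise iteration.
    singles = {next(iter(s)): v for s, v in supports.items() if len(s) == 1}
    kept = [s for s in supports if len(s) == 1]
    for s, v in supports.items():
        if len(s) < 2:
            continue
        pred = 1.0
        for item in s:
            pred *= singles[item]
        if abs(v - pred) > tol:
            kept.append(s)   # model cannot explain it; add to the summary
    return kept
```

Itemsets whose support the model already predicts well are redundant and drop out of the summary, which is the compression effect the abstract reports.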

Journal ArticleDOI
TL;DR: This paper shows that convex levelable posterior energies can be minimized exactly using the level-independant cut optimization scheme seen in Part I, and shows that non-levelable models with convex local conditional posterior energies such as the class of generalized Gaussian models can be exactly minimized with a generalized coupled Simulated Annealing.
Abstract: In Part II of this paper we extend the results obtained in Part I for total variation minimization in image restoration towards the following directions: first we investigate the decomposability property of energies on levels, which leads us to introduce the concept of levelable regularization functions (of which TV is the paradigm). We show that convex levelable posterior energies can be minimized exactly using the level-independent cut optimization scheme seen in Part I. Next we extend this graph cut scheme to the case of non-convex levelable energies. We present convincing restoration results for images corrupted with impulsive noise. We also provide a minimum-cost based algorithm which computes a global minimizer for Markov Random Fields with convex priors. Lastly we show that non-levelable models with convex local conditional posterior energies, such as the class of generalized Gaussian models, can be exactly minimized with a generalized coupled Simulated Annealing.

Journal ArticleDOI
TL;DR: A general processing framework for urban road network extraction in high-resolution synthetic aperture radar images is proposed, based on novel multiscale detection of street candidates, followed by optimization using a Markov random field description of the road network.
Abstract: A general processing framework for urban road network extraction in high-resolution synthetic aperture radar images is proposed. It is based on novel multiscale detection of street candidates, followed by optimization using a Markov random field description of the road network. The latter step, in line with recent technical literature, is enriched by the inclusion of a priori knowledge about road junctions and the automatic choice of most of the involved parameters. Advantages over existing and previous extraction and optimization procedures are proved by comparison using data from different sensors and locations.

Journal ArticleDOI
TL;DR: The proposed model exhibits a good fit to the clinical data and is extensively tested on different synthetic vessel phantoms and several 2D/3D TOF datasets acquired from two different MRI scanners, showing that it provides good quality of segmentation.

Journal ArticleDOI
TL;DR: The proposed distributed estimators are computationally simple, applicable to a wide range of sensing environments, and localized, implying that the nodes communicate only with their neighbors to obtain the desired results.
Abstract: We develop a hidden Markov random field (HMRF) framework for distributed signal processing in sensor-network environments. Under this framework, spatially distributed observations collected at the sensors form a noisy realization of an underlying random field that has a simple structure with Markovian dependence. We derive iterated conditional modes (ICM) algorithms for distributed estimation of the hidden random field from the noisy measurements. We consider both parametric and nonparametric measurement-error models. The proposed distributed estimators are computationally simple, applicable to a wide range of sensing environments, and localized, implying that the nodes communicate only with their neighbors to obtain the desired results. We also develop a calibration method for estimating Markov random field model parameters from training data and discuss initialization of the ICM algorithms. The HMRF framework and ICM algorithms are applied to event-region detection. Numerical simulations demonstrate the performance of the proposed approach.
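The ICM update for a hidden binary field with Gaussian measurement error can be sketched in 1-D; `means`, `sigma`, and `beta` are illustrative parameters, not the paper's, and the neighborhood is a chain rather than a sensor topology:

```python
def icm(y, beta=1.0, sigma=1.0, means=(0.0, 1.0), iters=10):
    # y: 1-D noisy observations; hidden labels x[i] in {0, 1}.
    # Each node greedily minimizes its local posterior energy:
    # (y[i]-means[lab])^2 / (2*sigma^2)  -  beta * (#agreeing neighbors)
    x = [0 if abs(yi - means[0]) < abs(yi - means[1]) else 1 for yi in y]
    n = len(x)
    for _ in range(iters):
        for i in range(n):
            costs = []
            for lab in (0, 1):
                data = (y[i] - means[lab]) ** 2 / (2 * sigma ** 2)
                agree = sum(1 for j in (i - 1, i + 1)
                            if 0 <= j < n and x[j] == lab)
                costs.append(data - beta * agree)
            x[i] = costs.index(min(costs))
    return x
```

Each update needs only a node's own observation and its neighbors' current labels, which is the "localized" property the abstract emphasizes; an ambiguous reading inside an event region gets corrected by its neighbors.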

Journal ArticleDOI
Yu Guan, Wei Chen, Xiao Liang, Zi'ang Ding, Qunsheng Peng
TL;DR: This work proposes an iterative energy minimization framework for interactive image matting and demonstrates that the energy-driven scheme can be extended to video matting, with which the spatio-temporal smoothness is faithfully preserved.
Abstract: We propose an iterative energy minimization framework for interactive image matting. Our approach is easy in the sense that it is fast and requires only a few user-specified strokes for marking the foreground and background. Beginning with the known region, we model the unknown region as a Markov Random Field (MRF) and formulate its energy in each iteration as the combination of one data term and one smoothness term. By automatically adjusting the weights of both terms during the iterations, the first-order continuous and feature-preserving result is rapidly obtained with several iterations. The energy optimization can be further performed in selected local regions for refined results. We demonstrate that our energy-driven scheme can be extended to video matting, with which the spatio-temporal smoothness is faithfully preserved. We show that the proposed approach outperforms previous methods in terms of both the quality and performance for quite challenging examples.

Journal ArticleDOI
TL;DR: A unified framework is presented for the creation of classified maps of the seafloor from sonar imagery; significant challenges in photometric correction, classification, navigation and registration, and image fusion are addressed.
Abstract: This paper presents a unified framework for the creation of classified maps of the seafloor from sonar imagery. Significant challenges in photometric correction, classification, navigation and registration, and image fusion are addressed. The techniques described are directly applicable to a range of remote sensing problems. Recent advances in side-scan data correction are incorporated to compensate for the sonar beam pattern and motion of the acquisition platform. The corrected images are segmented using pixel-based textural features and standard classifiers. In parallel, the navigation of the sonar device is processed using Kalman filtering techniques. A simultaneous localization and mapping framework is adopted to improve the navigation accuracy and produce georeferenced mosaics of the segmented side-scan data. These are fused within a Markovian framework and two fusion models are presented. The first uses a voting scheme regularized by an isotropic Markov random field and is applicable when the reliability of each information source is unknown. The Markov model is also used to inpaint regions where no final classification decision can be reached using pixel level fusion. The second model formally introduces the reliability of each information source into a probabilistic model. Evaluation of the two models using both synthetic images and real data from a large scale survey shows significant quantitative and qualitative improvement using the fusion approach.

Journal ArticleDOI
TL;DR: A statistical approach to capturing inter-individual variability of high-deformation fields from a number of examples (training samples) and a model for generating tissue atrophy or growth in order to simulate intra-individual brain deformations are described.

Journal ArticleDOI
TL;DR: This work concludes that texture features based on Markov random field parameters, combined with properly designed auxiliary features extracted from the texture context of the MCs, perform outstandingly in recognizing MCs in digital mammograms.

Journal ArticleDOI
TL;DR: This paper proposes an expectation-maximization algorithm with the mean field approximation to derive a procedure for estimating the mixing matrix, the sources, and their edge maps, and finds that a source model accounting for local autocorrelation is able to increase robustness against noise, even space variant.
Abstract: This paper deals with blind separation of images from noisy linear mixtures with unknown coefficients, formulated as a Bayesian estimation problem. This is a flexible framework, where any kind of prior knowledge about the source images and the mixing matrix can be accounted for. In particular, we describe local correlation within the individual images through the use of Markov random field (MRF) image models. These are naturally suited to express the joint pdf of the sources in a factorized form, so that the statistical independence requirements of most independent component analysis approaches to blind source separation are retained. Our model also includes edge variables to preserve intensity discontinuities. MRF models have been proved to be very efficient in many visual reconstruction problems, such as blind image restoration, and allow separation and edge detection to be performed simultaneously. We propose an expectation-maximization algorithm with the mean field approximation to derive a procedure for estimating the mixing matrix, the sources, and their edge maps. We tested this procedure on both synthetic and real images, in the fully blind case (i.e., no prior information on mixing is exploited) and found that a source model accounting for local autocorrelation is able to increase robustness against noise, even space-variant noise. Furthermore, when the model closely fits the source characteristics, independence is no longer a strict requirement, and cross-correlated sources can be separated as well.

Proceedings Article
04 Dec 2006
TL;DR: This paper presents a new method, called COMPOSE, for exploiting combinatorial optimization for sub-networks within the context of a max-product belief propagation algorithm, and describes highly efficient methods for computing max-marginals for subnetworks corresponding both to bipartite matchings and to regular networks.
Abstract: In general, the problem of computing a maximum a posteriori (MAP) assignment in a Markov random field (MRF) is computationally intractable. However, in certain subclasses of MRF, an optimal or close-to-optimal assignment can be found very efficiently using combinatorial optimization algorithms: certain MRFs with mutual exclusion constraints can be solved using bipartite matching, and MRFs with regular potentials can be solved using minimum cut methods. However, these solutions do not apply to the many MRFs that contain such tractable components as sub-networks, but also other non-complying potentials. In this paper, we present a new method, called COMPOSE, for exploiting combinatorial optimization for sub-networks within the context of a max-product belief propagation algorithm. COMPOSE uses combinatorial optimization for computing exact max-marginals for an entire sub-network; these can then be used for inference in the context of the network as a whole. We describe highly efficient methods for computing max-marginals for subnetworks corresponding both to bipartite matchings and to regular networks. We present results on both synthetic and real networks encoding correspondence problems between images, which involve both matching constraints and pairwise geometric constraints. We compare to a range of current methods, showing that the ability of COMPOSE to transmit information globally across the network leads to improved convergence, decreased running time, and higher-scoring assignments.
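The max-marginals that COMPOSE obtains from combinatorial matching algorithms can be defined by brute force on a tiny instance: mm[i][j] is the best total score over all complete matchings that assign node i to label j. An exponential-time sketch for exposition only (COMPOSE computes these efficiently for the whole sub-network):

```python
from itertools import permutations

def matching_max_marginals(score):
    # score[i][j]: affinity of assigning node i to label j, with mutual
    # exclusion (each label used at most once, i.e. a bipartite matching).
    n = len(score)
    mm = [[float('-inf')] * n for _ in range(n)]
    for perm in permutations(range(n)):
        total = sum(score[i][perm[i]] for i in range(n))
        for i in range(n):
            mm[i][perm[i]] = max(mm[i][perm[i]], total)
    return mm
```

These per-assignment quantities are exactly what max-product message passing needs from the matching sub-network, so they can be plugged back into inference over the rest of the graph.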

Journal ArticleDOI
TL;DR: This work addresses the problem of robust normal reconstruction by dense photometric stereo by formulating the problem as a Markov network and investigates two important inference algorithms for Markov random fields (MRFs) - graph cuts and belief propagation - to optimize for the most likely setting for each node in the network.
Abstract: We address the problem of robust normal reconstruction by dense photometric stereo, in the presence of complex geometry, shadows, highlights, transparencies, variable attenuation in light intensities, and inaccurate estimation in light directions. The input is a dense set of noisy photometric images, conveniently captured by using a very simple set-up consisting of a digital video camera, a reflective mirror sphere, and a handheld spotlight. We formulate the dense photometric stereo problem as a Markov network and investigate two important inference algorithms for Markov random fields (MRFs) - graph cuts and belief propagation - to optimize for the most likely setting for each node in the network. In the graph cut algorithm, the MRF formulation is translated into one of energy minimization. A discontinuity-preserving metric is introduced as the compatibility function, which allows α-expansion to efficiently perform the maximum a posteriori (MAP) estimation. Using the identical dense input and the same MRF formulation, our tensor belief propagation algorithm recovers faithful normal directions, preserves underlying discontinuities, improves the normal estimation from discrete to continuous, and drastically reduces the storage requirement and running time. Both algorithms produce comparable and very faithful normals for complex scenes. Although the discontinuity-preserving metric in graph cuts permits efficient inference of optimal discrete labels with a theoretical guarantee, our estimation algorithm using tensor belief propagation converges to comparable results, but runs faster because very compact messages are passed and combined. We present very encouraging results on normal reconstruction. A simple algorithm is proposed to reconstruct a surface from a normal map recovered by our method.
With the reconstructed surface, an inverse process, known as relighting in computer graphics, is proposed to synthesize novel images of the given scene under a user-specified light source and direction. The synthesis runs in real time by exploiting the state-of-the-art graphics processing unit (GPU). Our method offers many unique advantages over previous relighting methods and can handle a wide range of novel light sources and directions.
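The energy-minimisation view of the MRF above can be made concrete with a small sketch. The function below scores a labelling on a 2D grid as the sum of per-pixel data costs plus a truncated linear pairwise cost, a common discontinuity-preserving metric; the function name and the parameters `lam` and `tau` are illustrative assumptions, and the truncated-linear choice stands in for the paper's actual compatibility function.

```python
import numpy as np

def mrf_energy(labels, data_cost, lam=1.0, tau=2.0):
    """Total MRF energy on a 2D grid: per-pixel data terms plus a truncated
    linear (discontinuity-preserving) pairwise cost over the 4-neighbourhood.
    `labels` is an (H, W) int array; `data_cost` is (H, W, L) for L labels."""
    H, W = labels.shape
    e = data_cost[np.arange(H)[:, None], np.arange(W)[None, :], labels].sum()
    # truncation caps the penalty at tau, so large label jumps (true
    # discontinuities) are not over-penalised
    dh = np.minimum(np.abs(labels[:, 1:] - labels[:, :-1]), tau)
    dv = np.minimum(np.abs(labels[1:, :] - labels[:-1, :]), tau)
    return e + lam * (dh.sum() + dv.sum())
```

Graph-cut MAP estimation (e.g. α-expansion) searches for the labelling that minimises exactly this kind of energy.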

Book ChapterDOI
01 Oct 2006
TL;DR: A new method of information integration in a graph based framework where tissue priors and local boundary information are integrated into the edge weight metrics in the graph and inhomogeneity correction is incorporated by adaptively adjusting the edge weights according to the intermediate inhomogeneous estimation.
Abstract: Brain MRI segmentation remains a challenging problem in spite of numerous existing techniques. To overcome the inherent difficulties associated with this segmentation problem, we present a new method of information integration in a graph based framework. In addition to image intensity, tissue priors and local boundary information are integrated into the edge weight metrics in the graph. Furthermore, inhomogeneity correction is incorporated by adaptively adjusting the edge weights according to the intermediate inhomogeneity estimation. In validation experiments on simulated brain MRIs, the proposed method outperformed a segmentation method based on iterated conditional modes (ICM), a commonly used optimization method in medical image segmentation. In experiments on real neonatal brain MRIs, the results of the proposed method overlap well with manual segmentations by human experts.
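As a rough illustration of the edge-weight idea, the sketch below mixes an intensity-similarity term with a tissue-prior agreement term, with a `gain` factor standing in for the intermediate inhomogeneity (bias-field) estimate that rescales intensities. All names, the Gaussian similarity kernel, and the linear mixing weight `beta` are assumptions for illustration, not the paper's metric.

```python
import numpy as np

def edge_weight(i1, i2, prior_agreement, gain=1.0, sigma=10.0, beta=0.5):
    """Illustrative graph edge weight between two neighbouring voxels:
    a Gaussian intensity-similarity term (on inhomogeneity-corrected
    intensities, via `gain`) blended with a tissue-prior agreement term
    in [0, 1]."""
    d = (gain * i1 - gain * i2) ** 2
    w_intensity = np.exp(-d / (2.0 * sigma ** 2))
    return (1.0 - beta) * w_intensity + beta * prior_agreement
```

Adaptively re-estimating `gain` and recomputing the weights is one simple way to fold inhomogeneity correction into the graph, in the spirit of the abstract.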

Dissertation
30 Apr 2006
TL;DR: This thesis describes a model of fitness function that approximates the energy in the Gibbs distribution, and shows how this model can be fitted to a population of solutions to estimate the parameters of the MRF and proposes several variants of DEUM, which significantly outperform other EDAs.
Abstract: Estimation of Distribution Algorithms (EDAs) belong to the class of population-based optimisation algorithms. They are motivated by the idea of discovering and exploiting the interaction between variables in the solution. They estimate a probability distribution from a population of solutions, and sample it to generate the next population. Many EDAs use probabilistic graphical modelling techniques for this purpose. In particular, directed graphical models (Bayesian networks) have been widely used in EDAs. This thesis proposes an undirected graphical model (Markov Random Field (MRF)) approach to estimate and sample the distribution in EDAs. The interaction between variables in the solution is modelled as an undirected graph, and the joint probability of a solution is factorised as a Gibbs distribution. The thesis describes a model of the fitness function that approximates the energy in the Gibbs distribution, and shows how this model can be fitted to a population of solutions to estimate the parameters of the MRF. The estimated MRF is then sampled to generate the next population. This approach is applied to estimation of distribution in a general EDA framework called Distribution Estimation using Markov Random Fields (DEUM). The thesis then proposes several variants of DEUM using different sampling techniques and tests their performance on a range of optimisation problems. The results show that, for most of the tested problems, the DEUM algorithms significantly outperform other EDAs, both in terms of the number of fitness evaluations and the quality of the solutions found. There are two main explanations for the success of DEUM algorithms. Firstly, DEUM builds a model of the fitness function to approximate the MRF. This contrasts with other EDAs, which build a model of selected solutions. This allows DEUM to use fitness in the variation part of the evolution. Secondly, DEUM exploits the temperature coefficient in the Gibbs distribution to regulate the behaviour of the algorithm. In particular, with higher temperature the distribution is closer to uniform, and with lower temperature it concentrates near some global optima. This gives DEUM explicit control over the convergence of the algorithm, resulting in better optimisation.
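The univariate case of this scheme can be sketched in a few lines: fit a linear Gibbs-energy model of the fitness function to the population by least squares, then sample each bit from the resulting marginals, with the temperature coefficient controlling how sharply the distribution concentrates. This mirrors the DEUM idea only loosely; the function names and the exact sampling rule are assumptions.

```python
import numpy as np

def fit_univariate_mrf(pop, fitness):
    """Fit a univariate MRF fitness model f(x) ~ -U(x) = c + sum_i a_i * s_i,
    with spins s_i = 2*x_i - 1, by least squares over the population."""
    S = 2.0 * pop - 1.0                         # bits {0,1} -> spins {-1,+1}
    A = np.hstack([np.ones((len(pop), 1)), S])  # intercept + spin columns
    coef, *_ = np.linalg.lstsq(A, fitness, rcond=None)
    return coef[1:]                             # per-variable coefficients a_i

def sample_population(alpha, n, temperature=1.0, rng=None):
    """Sample n solutions from the Gibbs marginals p(x_i=1) = sigmoid(2*a_i/T).
    Lower temperature concentrates the distribution near the apparent optimum."""
    rng = rng or np.random.default_rng(0)
    p = 1.0 / (1.0 + np.exp(-2.0 * alpha / temperature))
    return (rng.random((n, alpha.size)) < p).astype(int)
```

On OneMax the fitted model is exact (every coefficient equals 0.5), so low-temperature sampling drives the population towards the all-ones optimum.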

Proceedings ArticleDOI
07 Jun 2006
TL;DR: A novel implementation of Bayesian belief propagation for graphics processing units found in most modern desktop and notebook computers is presented, and applies it to the stereo problem.
Abstract: The power of Markov random field formulations of low-level vision problems, such as stereo, has been known for some time. However, recent advances, both algorithmic and in processing power, have made their application practical. This paper presents a novel implementation of Bayesian belief propagation for the graphics processing units found in most modern desktop and notebook computers, and applies it to the stereo problem. The stereo problem also serves as the basis for comparison to other BP algorithms.
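The message-passing core is easiest to state in 1D, where min-sum belief propagation on a chain (e.g. a single scanline) is exact. The sketch below is an illustrative CPU reference, not the paper's GPU implementation; the linear smoothness cost and the parameter `lam` are assumptions.

```python
import numpy as np

def chain_map_bp(data_cost, lam=1.0):
    """Exact min-sum belief propagation on a chain: forward/backward message
    passes with a linear smoothness cost, returning the MAP label per node.
    `data_cost` is (n, L): cost of assigning each of L labels at each node."""
    n, L = data_cost.shape
    labels = np.arange(L)
    V = lam * np.abs(labels[:, None] - labels[None, :])  # pairwise cost table
    fwd = np.zeros((n, L))   # fwd[i]: message arriving at node i from node i-1
    bwd = np.zeros((n, L))   # bwd[i]: message arriving at node i from node i+1
    for i in range(1, n):
        fwd[i] = np.min((fwd[i - 1] + data_cost[i - 1])[:, None] + V, axis=0)
    for i in range(n - 2, -1, -1):
        bwd[i] = np.min((bwd[i + 1] + data_cost[i + 1])[:, None] + V, axis=0)
    # belief = local cost plus both incoming messages; MAP label minimises it
    return np.argmin(data_cost + fwd + bwd, axis=1)
```

On a 2D grid the same update runs loopily and in parallel over all pixels, which is what makes the GPU mapping attractive.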

Journal ArticleDOI
TL;DR: Experiments conducted on a set of five real remote sensing images acquired by different sensors and referring to different kinds of changes show the high robustness of the proposed unsupervised change detection approach.
Abstract: The most common methodology to carry out automatic unsupervised change detection in remotely sensed imagery is to find the best global threshold in the histogram of the so-called difference image. The unsupervised nature of the change detection process, however, makes it nontrivial to find the most appropriate thresholding algorithm for a given difference image, because the best global threshold depends on its statistical peculiarities, which are often unknown. In this letter, a solution to this issue based on the fusion of an ensemble of different thresholding algorithms through a Markov random field framework is proposed. Experiments conducted on a set of five real remote sensing images acquired by different sensors and referring to different kinds of changes show the high robustness of the proposed unsupervised change detection approach.
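One way to picture the fusion is as a two-label MRF whose unary term aggregates the ensemble's votes; the sketch below optimises it with a synchronous ICM-style relaxation. The function name, the voting unary term, and the smoothness weight `beta` are assumptions rather than the letter's exact formulation.

```python
import numpy as np

def fuse_thresholds(diff_img, thresholds, beta=0.9, iters=5):
    """Fuse an ensemble of global thresholds through a two-label MRF:
    the unary term is the fraction of thresholds voting 'change' at each
    pixel, and the pairwise term rewards agreement with the 4-neighbourhood.
    Optimisation is a synchronous ICM-style relaxation."""
    votes = np.mean([(diff_img > t).astype(float) for t in thresholds], axis=0)
    labels = (votes > 0.5).astype(int)
    for _ in range(iters):
        pad = np.pad(labels, 1)  # zero pad: border neighbours count as 'no change'
        nbr = pad[:-2, 1:-1] + pad[2:, 1:-1] + pad[1:-1, :-2] + pad[1:-1, 2:]
        cost1 = (1.0 - votes) + beta * (4 - nbr) / 4.0  # cost of label 'change'
        cost0 = votes + beta * nbr / 4.0                # cost of label 'no change'
        labels = (cost1 < cost0).astype(int)
    return labels
```

The MRF prior smooths away isolated disagreements among the thresholding algorithms while keeping spatially coherent change regions, which is the intuition behind the proposed fusion.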

Patent
10 Oct 2006
TL;DR: In this paper, a Markov Random Field (MRF) approach is used to detect change in video streams in an indoor environment, where information from different sources is combined with additional constraints to provide the final detection map.
Abstract: A system and method for automated and/or semi-automated analysis of video for discerning patterns of interest in video streams. In a preferred embodiment, the present invention is directed to identifying patterns of interest in indoor settings. In one aspect, the present invention deals with the change detection problem using a Markov Random Field approach in which information from different sources is naturally combined with additional constraints to provide the final detection map. A slight modification is made to the regularity term within the MRF model that accounts for real discontinuities in the observed data. The defined objective function is implemented in a multi-scale framework that decreases the computational cost and the risk of convergence to local minima. To achieve real-time performance, fast deterministic relaxation algorithms are used to perform the minimization. The crowdedness measure used is a geometric measure of occupancy that is quasi-invariant to objects translating on the platform.