
Showing papers on "Markov random field published in 2008"


Proceedings Article
08 Dec 2008
TL;DR: An approach to low-level vision is presented that combines convolutional networks as an image processing architecture with an unsupervised learning procedure that synthesizes training samples from specific noise models, avoiding the computational difficulties that arise in MRF approaches from probabilistic learning and inference.
Abstract: We present an approach to low-level vision that combines two main ideas: the use of convolutional networks as an image processing architecture and an unsupervised learning procedure that synthesizes training samples from specific noise models. We demonstrate this approach on the challenging problem of natural image denoising. Using a test set with a hundred natural images, we find that convolutional networks provide comparable and in some cases superior performance to state of the art wavelet and Markov random field (MRF) methods. Moreover, we find that a convolutional network offers similar performance in the blind de-noising setting as compared to other techniques in the non-blind setting. We also show how convolutional networks are mathematically related to MRF approaches by presenting a mean field theory for an MRF specially designed for image denoising. Although these approaches are related, convolutional networks avoid computational difficulties in MRF approaches that arise from probabilistic learning and inference. This makes it possible to learn image processing architectures that have a high degree of representational power (we train models with over 15,000 parameters), but whose computational expense is significantly less than that associated with inference in MRF approaches with even hundreds of parameters.
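
As a rough illustration of the training-sample synthesis idea described above, the sketch below trains a small convolutional denoiser on pairs generated by corrupting clean patches with a known noise model. The layer sizes, the additive Gaussian noise model, and the random placeholder patches are illustrative assumptions, not the architecture or data from the paper.

    # Minimal sketch (PyTorch): train a tiny convolutional denoiser on
    # synthesized (noisy, clean) pairs. All sizes and the Gaussian noise
    # model are illustrative assumptions.
    import torch
    import torch.nn as nn

    def synthesize_pairs(clean, sigma=0.1):
        # Corrupt clean patches with a specific noise model (additive Gaussian here).
        return clean + sigma * torch.randn_like(clean), clean

    net = nn.Sequential(
        nn.Conv2d(1, 24, 5, padding=2), nn.Sigmoid(),
        nn.Conv2d(24, 24, 5, padding=2), nn.Sigmoid(),
        nn.Conv2d(24, 1, 5, padding=2),
    )
    opt = torch.optim.Adam(net.parameters(), lr=1e-3)
    clean_patches = torch.rand(256, 1, 32, 32)   # placeholder for natural-image patches

    for step in range(100):
        noisy, target = synthesize_pairs(clean_patches)
        loss = ((net(noisy) - target) ** 2).mean()
        opt.zero_grad(); loss.backward(); opt.step()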

869 citations


Journal ArticleDOI
TL;DR: This work proposes a model that incorporates both monocular cues and stereo (triangulation) cues, to obtain significantly more accurate depth estimates than is possible using either monocular or stereo cues alone.
Abstract: We consider the task of 3-d depth estimation from a single still image. We take a supervised learning approach to this problem, in which we begin by collecting a training set of monocular images (of unstructured indoor and outdoor environments which include forests, sidewalks, trees, buildings, etc.) and their corresponding ground-truth depthmaps. Then, we apply supervised learning to predict the value of the depthmap as a function of the image. Depth estimation is a challenging problem, since local features alone are insufficient to estimate depth at a point, and one needs to consider the global context of the image. Our model uses a hierarchical, multiscale Markov Random Field (MRF) that incorporates multiscale local- and global-image features, and models the depths and the relation between depths at different points in the image. We show that, even on unstructured scenes, our algorithm is frequently able to recover fairly accurate depthmaps. We further propose a model that incorporates both monocular cues and stereo (triangulation) cues, to obtain significantly more accurate depth estimates than is possible using either monocular or stereo cues alone.
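
To make the structure of such a depth MRF concrete, the sketch below solves a heavily simplified, single-scale Gaussian version: unary terms tie each pixel's depth to a locally predicted value and pairwise terms encourage neighboring depths to agree, so the MAP estimate is the solution of a sparse linear system. The random prediction image and the smoothness weight are placeholders, not the paper's multiscale model or learned parameters.

    # Single-scale Gaussian MRF sketch: minimize
    #   sum_i (d_i - pred_i)^2 + lam * sum_{(i,j) adjacent} (d_i - d_j)^2
    # whose minimizer satisfies (I + lam * L) d = pred, with L the grid Laplacian.
    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.linalg import spsolve

    H, W, lam = 40, 60, 5.0
    pred = np.random.rand(H, W)            # stand-in for depth predicted from local features
    idx = np.arange(H * W).reshape(H, W)

    rows, cols, vals = [], [], []
    def add_pair(a, b, w):
        for i, j, v in ((a, a, w), (b, b, w), (a, b, -w), (b, a, -w)):
            rows.append(i); cols.append(j); vals.append(v)

    for y in range(H):
        for x in range(W):
            if x + 1 < W: add_pair(idx[y, x], idx[y, x + 1], lam)
            if y + 1 < H: add_pair(idx[y, x], idx[y + 1, x], lam)

    A = sp.coo_matrix((vals, (rows, cols)), shape=(H * W, H * W)).tocsr() + sp.eye(H * W)
    depth = spsolve(A.tocsc(), pred.ravel()).reshape(H, W)   # MAP depth estimate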

679 citations


Journal ArticleDOI
TL;DR: A novel and efficient approach to dense image registration, which does not require a derivative of the employed cost function, is introduced, and efficient linear programming using primal-dual principles is considered to recover the lowest potential of the cost function.

469 citations


Book ChapterDOI
12 Oct 2008
TL;DR: A new method for detecting objects such as bags carried by pedestrians depicted in short video sequences by comparing the temporal templates against view-specific exemplars generated offline for unencumbered pedestrians, which yields a segmentation of carried objects using the MAP solution.
Abstract: We propose a new method for detecting objects such as bags carried by pedestrians depicted in short video sequences. In common with earlier work [1,2] on the same problem, the method starts by averaging aligned foreground regions of a walking pedestrian to produce a representation of motion and shape (known as a temporal template) that has some immunity to noise in foreground segmentations and phase of the walking cycle. Our key novelty is for carried objects to be revealed by comparing the temporal templates against view-specific exemplars generated offline for unencumbered pedestrians. A likelihood map obtained from this match is combined in a Markov random field with a map of prior probabilities for carried objects and a spatial continuity assumption, from which we obtain a segmentation of carried objects using the MAP solution. We have re-implemented the earlier state of the art method [1] and demonstrate a substantial improvement in performance for the new method on the challenging PETS2006 dataset [3]. Although developed for a specific problem, the method could be applied to the detection of irregularities in appearance for other categories of object that move in a periodic fashion.
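
The temporal-template step lends itself to a very small sketch: average the aligned binary foreground masks of a walking sequence, then treat the parts of the template not explained by an unencumbered-pedestrian exemplar as candidate carried-object regions. The exemplar here is assumed to be precomputed, and the simple subtraction stands in for the paper's exemplar matching and MRF inference.

    # Minimal sketch of temporal templates for carried-object detection.
    import numpy as np

    def temporal_template(masks):
        # masks: list of aligned binary foreground masks (H x W), one per frame
        return np.mean(np.stack(masks, axis=0), axis=0)   # values in [0, 1]

    def carried_object_evidence(template, exemplar):
        # Mass present in the observed template but not in the exemplar is
        # treated as evidence for a carried object (a crude stand-in for the
        # likelihood map that feeds the MRF in the paper).
        return np.clip(template - exemplar, 0.0, 1.0)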

280 citations


Journal ArticleDOI
TL;DR: The proposed IRGS method provides the possibility of building a hierarchical representation of the image content, and allows various region features and even domain knowledge to be incorporated in the segmentation process.
Abstract: This paper proposes an image segmentation method named iterative region growing using semantics (IRGS), which is characterized by two aspects. First, it uses graduated increased edge penalty (GIEP) functions within the traditional Markov random field (MRF) context model in formulating the objective functions. Second, IRGS uses a region growing technique in searching for the solutions to these objective functions. The proposed IRGS is an improvement over traditional MRF based approaches in that the edge strength information is utilized and a more stable estimation of model parameters is achieved. Moreover, the IRGS method provides the possibility of building a hierarchical representation of the image content, and allows various region features and even domain knowledge to be incorporated in the segmentation process. The algorithm has been successfully tested on several artificial images and synthetic aperture radar (SAR) images.

223 citations


Proceedings Article
01 Jan 2008
TL;DR: The proposed method is applied to accelerate the cleanup step of a real-time dense stereo method based on plane sweeping with multiple sweeping directions, where the label set directly corresponds to the employed directions.
Abstract: This work presents a real-time, data-parallel approach for global label assignment on regular grids. The labels are selected according to a Markov random field energy with a Potts prior term for binary interactions. We apply the proposed method to accelerate the cleanup step of a real-time dense stereo method based on plane sweeping with multiple sweeping directions, where the label set directly corresponds to the employed directions. In this setting the Potts smoothness model is suitable, since the set of labels does not possess an intrinsic metric or total order. The observed run-times are approximately 30 times faster than the ones obtained by graph cut approaches.
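
The label cleanup itself can be written compactly; the sketch below uses plain iterated conditional modes (ICM) with a Potts prior on a 4-connected grid rather than the paper's data-parallel GPU scheme, and the random data costs are placeholders.

    # Potts-regularized label cleanup via ICM (illustrative, not the GPU method).
    import numpy as np

    def icm_potts(cost, beta=1.0, iters=5):
        # cost: H x W x L array of per-pixel data costs for each label
        H, W, L = cost.shape
        labels = cost.argmin(axis=2)
        for _ in range(iters):
            for y in range(H):
                for x in range(W):
                    total = cost[y, x].copy()
                    for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < H and 0 <= nx < W:
                            # Potts prior: constant penalty for disagreeing with a neighbor
                            total += beta * (np.arange(L) != labels[ny, nx])
                    labels[y, x] = total.argmin()
        return labels

    labels = icm_potts(np.random.rand(32, 32, 4), beta=0.5)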

188 citations


Journal ArticleDOI
TL;DR: The proposed approach, based on a Bayesian classifier, utilizes the adaptive mixtures method and a Markov random field model to obtain and update the class conditional probability density function (CCPDF) and the a priori probability of each class.

168 citations


Journal ArticleDOI
TL;DR: A novel classification method, taking regions as elements, is proposed using a Markov random field (MRF), using a Wishart-based maximum likelihood, based on regions, to obtain a classification map.
Abstract: The scattering measurements of individual pixels in polarimetric SAR images are affected by speckle; hence, the performance of classification approaches that take individual pixels as elements is degraded. By introducing the spatial relation between adjacent pixels, a novel classification method that takes regions as elements is proposed using a Markov random field (MRF). In this method, an image is first oversegmented into a large number of rectangular regions. Then, to fully use the statistical a priori knowledge of the data and the spatial relation of neighboring pixels, a Wishart MRF model, combining the Wishart distribution with the MRF, is proposed, and an iterated conditional modes algorithm is adopted to adjust the oversegmentation results so that the shapes of all regions better match the ground truth. Finally, a region-based Wishart maximum likelihood classifier is used to obtain a classification map. Real polarimetric images are used in the experiments. Compared with three other frequently used methods, the proposed method achieves higher accuracy, and its classification maps agree better with the initial ground maps.
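
The region-wise Wishart decision rule reduces to a short computation; the sketch below assigns each region's average covariance matrix to the nearest class center under the Wishart log-likelihood distance. The class centers are assumed to be given, and the ICM-based spatial adjustment from the paper is omitted.

    # Region-based Wishart maximum-likelihood classification (spatial MRF step omitted).
    import numpy as np

    def wishart_distance(C, Sigma):
        # ln|Sigma| + trace(Sigma^{-1} C), the standard Wishart classifier distance
        sign, logdet = np.linalg.slogdet(Sigma)
        return logdet + np.trace(np.linalg.solve(Sigma, C)).real

    def classify_regions(region_covs, class_centers):
        # region_covs: average covariance matrix per region; class_centers: per class
        return [int(np.argmin([wishart_distance(C, S) for S in class_centers]))
                for C in region_covs]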

158 citations


Journal ArticleDOI
TL;DR: This paper presents an edge-directed image interpolation algorithm that improves the subjective quality of the interpolated edges while maintaining a high PSNR level and a single-pass implementation is designed, which performs nearly as well as the iterative optimization.
Abstract: This paper presents an edge-directed image interpolation algorithm. In the proposed algorithm, the edge directions are implicitly estimated with a statistics-based approach. Rather than being represented explicitly, the local edge directions are indicated by length-16 weighting vectors. The weighting vectors are used to formulate a geometric regularity (GR) constraint (smoothness along edges and sharpness across edges), and the GR constraint is imposed on the interpolated image through a Markov random field (MRF) model. Furthermore, under the maximum a posteriori-MRF framework, the desired interpolated image corresponds to the minimal energy state of a 2-D random field given the low-resolution image. Simulated annealing methods are used to search for the minimal energy state in the state space. To lower the computational complexity of the MRF, a single-pass implementation is designed, which performs nearly as well as the iterative optimization. Simulation results show that the proposed MRF model-based edge-directed interpolation method produces edges with strong geometric regularity. Compared to traditional methods and other edge-directed interpolation methods, the proposed method improves the subjective quality of the interpolated edges while maintaining a high PSNR level.

142 citations


Book ChapterDOI
25 Aug 2008
TL;DR: A simple algorithm for reconstructing the underlying graph defining a Markov random field on n nodes with maximum degree d from observed samples is analyzed, and it is shown that under mild non-degeneracy conditions it reconstructs the generating graph with high probability using Θ(d log n) samples, which is optimal up to a multiplicative constant.
Abstract: Markov random fields are used to model high dimensional distributions in a number of applied areas. Much recent interest has been devoted to the reconstruction of the dependency structure from independent samples from the Markov random fields. We analyze a simple algorithm for reconstructing the underlying graph defining a Markov random field on n nodes with maximum degree d given observations. We show that under mild non-degeneracy conditions it reconstructs the generating graph with high probability using Θ(d log n) samples, which is optimal up to a multiplicative constant. Our results seem to be the first results for general models that guarantee that the generating model is reconstructed. Furthermore, we provide an explicit O(d n^(d+2) log n) running time bound. In cases where the measure on the graph has correlation decay, the running time is O(n^2 log n) for all fixed d. In the full-length version we also discuss the effect of observing noisy samples. There we show that as long as the noise level is low, our algorithm is effective. On the other hand, we construct an example where large noise implies non-identifiability even for generic noise and interactions. Finally, we briefly show that in some cases, models with hidden nodes can also be recovered.
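
A drastically simplified version of the neighborhood-search idea, for binary variables, is sketched below: for each node, search over small candidate sets U and accept the first one that renders every other node empirically (nearly) conditionally independent of it. The independence test, thresholds, and sample-count guard are illustrative simplifications of the algorithm analyzed in the paper.

    # Crude neighborhood reconstruction sketch for a binary MRF with max degree d.
    import numpy as np
    from itertools import combinations

    def cond_dependence(samples, v, w, U, min_count=10):
        # Max over conditioning assignments of |P(xv,xw|xU) - P(xv|xU)P(xw|xU)|.
        dep = 0.0
        for u_val in range(2 ** len(U)):
            bits = [(u_val >> k) & 1 for k in range(len(U))]
            mask = np.all(samples[:, U] == bits, axis=1) if U else np.ones(len(samples), bool)
            if mask.sum() < min_count:
                continue
            s = samples[mask]
            for a in (0, 1):
                for b in (0, 1):
                    joint = np.mean((s[:, v] == a) & (s[:, w] == b))
                    dep = max(dep, abs(joint - np.mean(s[:, v] == a) * np.mean(s[:, w] == b)))
        return dep

    def neighborhood(samples, v, d, tau=0.05):
        others = [w for w in range(samples.shape[1]) if w != v]
        for size in range(d + 1):
            for U in combinations(others, size):
                if all(w in U or cond_dependence(samples, v, w, list(U)) < tau for w in others):
                    return set(U)
        return set(others)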

137 citations


Journal ArticleDOI
TL;DR: In this article, the authors use Markov Random Fields (MRF) to establish the correspondence of features in alignment and robust optimization for projection model estimation for full-precision reconstruction.

Journal ArticleDOI
TL;DR: The model works without detailed a priori object-shape information, and it is also appropriate for low and unstable frame rate video sources, and a Markov random field model is used to enhance the accuracy of the separation.
Abstract: In this paper, we propose a new model for foreground and shadow detection in video sequences. The model works without detailed a priori object-shape information, and it is also appropriate for low and unstable frame rate video sources. Contributions are presented on three key issues: 1) we propose a novel adaptive shadow model and show the improvements versus previous approaches in scenes with difficult lighting and coloring effects; 2) we give a novel description for the foreground based on spatial statistics of the neighboring pixel values, which enhances the detection of background- or shadow-colored object parts; 3) we show how microstructure analysis can be used in the proposed framework as additional feature components improving the results. Finally, a Markov random field model is used to enhance the accuracy of the separation. We validate our method on outdoor and indoor sequences including real surveillance videos and well-known benchmark test sets.

Journal ArticleDOI
01 Dec 2008
TL;DR: A new system for converting a user's freehand sketch of a tree into a full 3D model that is both complex and realistic-looking and shows a variety of natural-looking tree models generated from freehand sketches with only a few strokes.
Abstract: In this paper, we describe a new system for converting a user's freehand sketch of a tree into a full 3D model that is both complex and realistic-looking. Our system does this by probabilistic optimization based on parameters obtained from a database of tree models. The best matching model is selected by comparing its 2D projections with the sketch. Branch interaction is modeled by a Markov random field, subject to the constraint of 3D projection to sketch. Our system then uses the notion of self-similarity to add new branches before finally populating all branches with leaves of the user's choice. We show a variety of natural-looking tree models generated from freehand sketches with only a few strokes.

Journal ArticleDOI
TL;DR: A nonparametric Bayesian model for histogram clustering which automatically determines the number of segments when spatial smoothness constraints on the class assignments are enforced by a Markov Random Field is proposed.
Abstract: Image segmentation algorithms partition the set of pixels of an image into a specific number of different, spatially homogeneous groups. We propose a nonparametric Bayesian model for histogram clustering which automatically determines the number of segments when spatial smoothness constraints on the class assignments are enforced by a Markov Random Field. A Dirichlet process prior controls the level of resolution which corresponds to the number of clusters in data with a unique cluster structure. The resulting posterior is efficiently sampled by a variant of a conjugate-case sampling algorithm for Dirichlet process mixture models. Experimental results are provided for real-world gray value images, synthetic aperture radar images and magnetic resonance imaging data.

DOI
01 Jan 2008
TL;DR: This paper extends the Associative Markov Network model to learn directionality in the clique potentials, resulting in a new anisotropic model that can be efficiently learned using the subgradient method.
Abstract: In this paper we address the problem of automated three dimensional point cloud interpretation. This problem is important for various tasks from environment modeling to obstacle avoidance for autonomous robot navigation. In addition to locally extracted features, classifiers need to utilize contextual information in order to perform well. A popular approach to account for context is to utilize the Markov Random Field framework. One recent variant that has successfully been used for the problem considered is the Associative Markov Network (AMN). We extend the AMN model to learn directionality in the clique potentials, resulting in a new anisotropic model that can be efficiently learned using the subgradient method. We validate the proposed approach using data collected from different range sensors and show better performance against standard AMN and Support Vector Machine algorithms.

Book ChapterDOI
06 Sep 2008
TL;DR: This paper addresses the problem of automatically segmenting bone structures in low-resolution clinical MRI datasets by combining physically-based deformable models with shape priors and a fast implicit integration scheme, resulting in an automatic multilevel segmentation procedure effective with low-resolution images.
Abstract: This paper addresses the problem of automatically segmenting bone structures in low-resolution clinical MRI datasets. The novel aspect of the proposed method is the combination of physically-based deformable models with shape priors. Models evolve under the influence of forces that exploit image information and prior knowledge of shape variations. The prior consists of a Principal Component Analysis (PCA) of global shape variations and a Markov Random Field (MRF) of local deformations, imposing spatial restrictions on shape evolution. For better efficiency, various levels of detail are considered, and the differential equation system is solved by a fast implicit integration scheme. The result is an automatic multilevel segmentation procedure effective with low-resolution images. Experiments on femur and hip bone segmentation from clinical MRI show a promising approach (mean accuracy: 1.44±1.1 mm, computation time: 2 min 43 s).

Journal ArticleDOI
TL;DR: This paper presents building detection results on a set of synthetic and airborne images based on a stochastic image interpretation model, which combines both 2-D and 3-D contextual information of the imaged scene.
Abstract: The identification of building rooftops from a single image, without the use of auxiliary 3-D information like stereo pairs or digital elevation models, is a very challenging and difficult task in the area of remote sensing. The existing methodologies rarely tackle the problem of 3-D object identification, like buildings, from a purely stochastic viewpoint. Our approach is based on a stochastic image interpretation model, which combines both 2-D and 3-D contextual information of the imaged scene. Building rooftop hypotheses are extracted using a contour-based grouping hierarchy that emanates from the principles of perceptual organization. We use a Markov random field model to describe the dependencies between all available hypotheses with regard to a globally consistent interpretation. The hypothesis verification step is treated as a stochastic optimization process that operates on the whole grouping hierarchy to find the globally optimal configuration for the locally interacting grouping hypotheses, providing also an estimate of the height of each extracted rooftop. This paper describes the main principles of our method and presents building detection results on a set of synthetic and airborne images.

Book ChapterDOI
06 Sep 2008
TL;DR: A top-down segmentation approach based on a Markov random field model that combines probabilistic boosting trees (PBT) and lower-level segmentation via graph cuts that is applied to the challenging task of detection and delineation of pediatric brain tumors.
Abstract: In this paper we present a fully automated approach to the segmentation of pediatric brain tumors in multi-spectral 3-D magnetic resonance images. It is a top-down segmentation approach based on a Markov random field (MRF) model that combines probabilistic boosting trees (PBT) and lower-level segmentation via graph cuts. The PBT algorithm provides a strong discriminative observation model that classifies tumor appearance, while a spatial prior takes into account the pair-wise homogeneity in terms of classification labels and multi-spectral voxel intensities. The discriminative model relies not only on observed local intensities but also on surrounding context for detecting candidate regions for pathology. A mathematically sound formulation for integrating the two approaches into a unified statistical framework is given. The proposed method is applied to the challenging task of detection and delineation of pediatric brain tumors. This segmentation task is characterized by a high non-uniformity of both the pathology and the surrounding non-pathologic brain tissue. A quantitative evaluation illustrates the robustness of the proposed method. Despite dealing with more complicated cases of pediatric brain tumors, the results obtained are mostly better than those reported for current state-of-the-art approaches to 3-D MR brain tumor segmentation in adult patients. The entire processing of one multi-spectral data set does not require any user interaction, and takes less time than previously proposed methods.

Proceedings ArticleDOI
23 Jun 2008
TL;DR: The algorithm is automatic, unsupervised, and efficient at producing smooth segmentation regions on many non-ideal iris images and a comparison of the estimated iris region parameters with the ground truth data is provided.
Abstract: A non-ideal iris segmentation approach using graph cuts is presented. Unlike many existing algorithms for iris localization which extensively utilize eye geometry, the proposed approach is predominantly based on image intensities. In a step-wise procedure, first eyelashes are segmented from the input images using image texture, then the iris is segmented using grayscale information, followed by a post-processing step that utilizes eye geometry to refine the results. A preprocessing step removes specular reflections in the iris, and image gradients in a pixel neighborhood are used to compute texture. The image is modeled as a Markov random field, and a graph cut based energy minimization algorithm [2] is used to separate textured and untextured regions for eyelash segmentation, as well as to segment the pupil, iris, and background using pixel intensity values. The algorithm is automatic, unsupervised, and efficient at producing smooth segmentation regions on many non-ideal iris images. A comparison of the estimated iris region parameters with the ground truth data is provided.

Proceedings ArticleDOI
23 Jun 2008
TL;DR: The proposed method successfully segments object categories with highly varying appearances in the presence of cluttered backgrounds and large view point changes and outperforms published results on the Pascal VOC 2007 dataset.
Abstract: Object models based on bag-of-words representations can achieve state-of-the-art performance for image classification and object localization tasks. However, since they consider objects as loose collections of local patches, they fail to accurately locate object boundaries and are not able to produce accurate object segmentation. On the other hand, Markov random field models used for image segmentation focus on object boundaries but can hardly use the global constraints necessary to deal with object categories whose appearance may vary significantly. In this paper we combine the advantages of both approaches. First, a mechanism based on local regions allows object detection using visual word occurrences and produces a rough image segmentation. Then, an MRF component gives clean boundaries and enforces label consistency, guided by local image cues (color, texture, and edge cues) and by long-distance dependencies. Gibbs sampling is used to infer the model. The proposed method successfully segments object categories with highly varying appearances in the presence of cluttered backgrounds and large viewpoint changes. We show that it outperforms published results on the Pascal VOC 2007 dataset.

Journal ArticleDOI
TL;DR: This paper proposes a new approach to kriging minimum mean squared error linear prediction for spatial data sets with many observations by using a Gaussian Markov random field on a lattice as an approximation of aGaussian field.
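
For a zero-mean GMRF the kriging predictor has a closed form in terms of the sparse precision matrix Q: the conditional mean of the unobserved block A given observations x_B is -Q_AA^{-1} Q_AB x_B. The sketch below uses a toy 1-D lattice precision purely for illustration; it is not a fitted approximation of a Gaussian field as in the paper.

    # GMRF conditional-mean (kriging-type) prediction with a sparse precision matrix.
    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.linalg import spsolve

    n = 200
    Q = (sp.eye(n) * 4 - sp.eye(n, k=1) - sp.eye(n, k=-1)).tocsr()   # toy lattice precision
    A = np.arange(0, n, 5)                      # unobserved sites
    B = np.setdiff1d(np.arange(n), A)           # observed sites
    x_B = np.random.randn(len(B))               # zero-mean observations

    Q_AA = Q[A][:, A]
    Q_AB = Q[A][:, B]
    x_A_hat = -spsolve(Q_AA.tocsc(), Q_AB @ x_B)   # E[x_A | x_B]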

Journal ArticleDOI
TL;DR: This paper proposes to characterize image regions locally by defining local region descriptors (LRDs), essentially feature statistics from pixels located within windows centered on the evolving contour, and they may reduce the overlap between distributions.
Abstract: Edge-based and region-based active contours are frequently used in image segmentation. While edges characterize small neighborhoods of pixels, region descriptors characterize entire image regions that may have overlapping probability densities. In this paper, we propose to characterize image regions locally by defining local region descriptors (LRDs). These are essentially feature statistics from pixels located within windows centered on the evolving contour, and they may reduce the overlap between distributions. LRDs are used to define general-form energies based on level sets. In general, a particular energy is associated with an active contour by means of the logarithm of the probability density of features conditioned on the region. In order to reduce the number of local minima of such energies, we introduce two novel functions for constructing the energy functional which are both based on the assumption that local densities are approximately Gaussian. The first uses a similarity measure between features of pixels that involves confidence intervals. The second employs a local Markov Random Field (MRF) model. By minimizing the associated energies, we obtain active contours that can segment objects that have largely overlapping global probability densities. Our experiments show that the proposed method can accurately segment natural large images in very short time when using a fast level-set implementation.

Proceedings ArticleDOI
01 Sep 2008
TL;DR: This paper presents an algorithm for measuring hair and face appearance in 2D images by using learned mixture models of color and location information to suggest the hypotheses of the face, hair, and background regions.
Abstract: This paper presents an algorithm for measuring hair and face appearance in 2D images. Our approach starts by using learned mixture models of color and location information to suggest hypotheses for the face, hair, and background regions. In turn, the image gradient information is used to generate the likely suggestions in the neighboring image regions. Either the Graph-Cut or the Loopy Belief Propagation algorithm is then applied to optimize the resulting Markov network in order to obtain the most likely hair and face segmentation from the background. We demonstrate that our algorithm can precisely identify the hair and face regions on a large dataset of face images automatically detected by a state-of-the-art face detector.
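
The first stage, mixture models over color and location proposing region hypotheses, is easy to sketch. Below, a generic Gaussian mixture over RGB plus normalized pixel coordinates produces three per-pixel region hypotheses; the three-way split, feature choice, and unsupervised fit are illustrative assumptions, whereas the paper learns its models from labeled data and refines the result with Graph-Cut or Loopy Belief Propagation.

    # Color + location mixture model proposing face / hair / background hypotheses.
    import numpy as np
    from sklearn.mixture import GaussianMixture

    def pixel_features(img):
        H, W, _ = img.shape
        ys, xs = np.mgrid[0:H, 0:W]
        # one row per pixel: RGB color plus normalized (x, y) location
        return np.column_stack([img.reshape(-1, 3), (xs / W).ravel(), (ys / H).ravel()])

    img = np.random.rand(64, 64, 3)                     # stand-in for a detected face image
    feats = pixel_features(img)
    gmm = GaussianMixture(n_components=3, covariance_type='full').fit(feats)
    hypotheses = gmm.predict(feats).reshape(64, 64)     # per-pixel component index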

Journal ArticleDOI
TL;DR: A straightforward method is presented for determining the most appropriate weighting of the spectral and spatial contributions in the Markov random field approach to context classification.
Abstract: A straightforward method is presented for determining the most appropriate weighting of the spectral and spatial contributions in the Markov random field approach to context classification. The spectral and spatial components are each normalized to fall in the range (0,1), after which the appropriate value for the weighting coefficient can be determined simply, guided by an assessment of the importance of the spatial contribution. Experimental results are presented using an artificial data set and real data recorded by the Landsat Thematic Mapper and the Airborne Visible/Infrared Imaging Spectrometer.
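
The idea fits in a few lines; the sketch below normalizes two energy (or negative log-probability) terms to (0,1) and blends them with a single coefficient. The min-max normalization and the default weight are illustrative choices, not necessarily the exact normalization used in the paper.

    # Combine normalized spectral and spatial terms with one weighting coefficient.
    import numpy as np

    def combined_energy(u_spectral, u_spatial, lam=0.5):
        def normalize(u):
            return (u - u.min()) / (u.max() - u.min() + 1e-12)
        # lam near 0: rely on spectral evidence; lam near 1: rely on spatial context
        return (1.0 - lam) * normalize(u_spectral) + lam * normalize(u_spatial)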

Book ChapterDOI
10 Jun 2008
TL;DR: This work follows a Bayesian framework by modeling the original HR range as a Markov random field (MRF) and proposes the use of an edge-adaptive MRF prior to handle discontinuities.
Abstract: Photonic mixer device (PMD) range cameras are becoming popular as an alternative to algorithmic 3D reconstruction, but their main drawbacks are low resolution (LR) and noise. Recently, some interesting works have focused on resolution enhancement of PMD range data. These works use high-resolution (HR) CCD images or stereo pairs, but such a system requires a complex setup and camera calibration. In contrast, we propose a super-resolution method through induced camera motion to create an HR range image from multiple LR range images. We follow a Bayesian framework by modeling the original HR range as a Markov random field (MRF). To handle discontinuities, we propose the use of an edge-adaptive MRF prior. Since such a prior renders the energy function non-convex, we minimize it by graduated non-convexity.

Proceedings ArticleDOI
23 Jun 2008
TL;DR: A new method to increase the quality of 3D video, a new media developed to represent 3D objects in motion, by combining both super-resolution and dynamic 3D shape reconstruction problems into a unique Markov random field (MRF) energy formulation.
Abstract: This paper presents a new method to increase the quality of 3D video, a new media developed to represent 3D objects in motion. This representation is obtained from multi-view reconstruction techniques that require images recorded simultaneously by several video cameras. All cameras are calibrated and placed around a dedicated studio to fully surround the models. The limited quality and quantity of cameras may produce inaccurate 3D model reconstruction with low quality texture. To overcome this issue, first we propose super-resolution (SR) techniques for 3D video: SR on multi-view images and SR on single-view video frames. Second, we propose to combine both super-resolution and dynamic 3D shape reconstruction problems into a unique Markov random field (MRF) energy formulation. The MRF minimization is performed using graph-cuts. Thus, we jointly compute the optimal solution for super-resolved texture and 3D shape model reconstruction. Moreover, we propose a coarse-to-fine strategy to iteratively produce 3D video with increasing quality. Our experiments show the accuracy and robustness of the proposed technique on challenging 3D video sequences.

Book ChapterDOI
20 Oct 2008
TL;DR: A Markov Random Field model is described for grid patterns on building facades that can be modeled as Near Regular Textures (NRT) and a Markov Chain Monte Carlo (MCMC) optimization procedure is introduced for discovering them.
Abstract: As part of an architectural modeling project, this paper investigates the problem of understanding and manipulating images of buildings. Our primary motivation is to automatically detect and seamlessly remove unwanted foreground elements from urban scenes. Without explicit handling, these objects will appear pasted as artifacts on the model. Recovering the building facade in a video sequence is relatively simple because parallax induces foreground/background depth layers, but here we consider static images only. We develop a series of methods that enable foreground removal from images of buildings or brick walls. The key insight is to use a priori knowledge about grid patterns on building facades that can be modeled as Near Regular Textures (NRT). We describe a Markov Random Field (MRF) model for such textures and introduce a Markov Chain Monte Carlo (MCMC) optimization procedure for discovering them. This simple spatial rule is then used as a starting point for inference of missing windows, facade segmentation, outlier identification, and foreground removal.

Journal ArticleDOI
TL;DR: This study presents a new method for adaptive regularization using the image and noise statistics, which addresses the blurry edges in Tikhonov regularization and the blocky effects in total variation (TV) regularization.
Abstract: SENSE reconstruction suffers from an ill-conditioning problem, which increasingly lowers the signal-to-noise ratio (SNR) as the reduction factor increases. Ill-conditioning also degrades the convergence behavior of iterative conjugate gradient reconstructions for arbitrary trajectories. Regularization techniques are often used to alleviate the ill-conditioning problem. Based on maximum a posteriori statistical estimation with a Huber Markov random field prior, this study presents a new method for adaptive regularization using the image and noise statistics. The adaptive Huber regularization addresses the blurry edges in Tikhonov regularization and the blocky effects in total variation (TV) regularization. Phantom and in vivo experiments demonstrate improved image quality and convergence speed over both the unregularized conjugate gradient method and Tikhonov regularization method, at no increase in total computation time.
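
The Huber potential at the heart of this prior is simple to state: quadratic for small arguments (so noise is smoothed) and linear for large ones (so edges are not over-penalized). The sketch below evaluates a Huber MRF prior energy on finite differences of an image; the fixed threshold is a placeholder for the adaptive, statistics-driven choice described in the study.

    # Huber MRF prior energy on horizontal and vertical finite differences.
    import numpy as np

    def huber(t, delta=0.05):
        a = np.abs(t)
        return np.where(a <= delta, 0.5 * t ** 2, delta * a - 0.5 * delta ** 2)

    def huber_prior_energy(img, delta=0.05):
        dx = np.diff(img, axis=1)
        dy = np.diff(img, axis=0)
        return huber(dx, delta).sum() + huber(dy, delta).sum()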

Proceedings ArticleDOI
01 Mar 2008
TL;DR: A novel group dynamical model within a continuous time setting and a group structure transition model is developed and combined with an interaction model using Markov Random Fields to create a realistic group model.
Abstract: In this paper, we describe models and algorithms for detection and tracking of group and individual targets. We develop two novel group dynamical models, within a continuous time setting, that aim to mimic behavioural properties of groups. We also describe two possible ways of modeling interactions between closely spaced targets using Markov Random Field (MRF) and repulsive forces. These can be combined together with a group structure transition model to create realistic evolving group models. We use a Markov Chain Monte Carlo (MCMC)-Particles Algorithm to perform sequential inference. Computer simulations demonstrate the ability of the algorithm to detect and track targets within groups, as well as infer the correct group structure over time.

Book ChapterDOI
25 Aug 2008
TL;DR: It is proved that the problem of reconstructing bounded-degree models with hidden nodes is hard, and it is impossible to decide in randomized polynomial time if two models generate distributions whose statistical distance is at most 1/3 or at least 2/3.
Abstract: Markov random fields are often used to model high dimensional distributions in a number of applied areas. A number of recent papers have studied the problem of reconstructing a dependency graph of bounded degree from independent samples from the Markov random field. These results require observing samples of the distribution at all nodes of the graph. It was heuristically recognized that the problem of reconstructing the model where there are hidden variables (some of the variables are not observed) is much harder. Here we prove that the problem of reconstructing bounded-degree models with hidden nodes is hard. Specifically, we show that unless NP = RP, it is impossible to decide in randomized polynomial time if two models generate distributions whose statistical distance is at most 1/3 or at least 2/3. Given two generating models whose statistical distance is promised to be at least 1/3, and oracle access to independent samples from one of the models, it is impossible to decide in randomized polynomial time which of the two models the samples are consistent with. The second problem remains hard even if the samples are generated efficiently, albeit under a stronger assumption.