
Showing papers on "Markov random field published in 1987"


Journal ArticleDOI
TL;DR: This work derives efficient algorithms and describes parallel implementations on digital parallel SIMD architectures, as well as a new class of parallel hybrid computers that mix digital with analog components.
Abstract: We formulate several problems in early vision as inverse problems. Among the solution methods we review standard regularization theory, discuss its limitations, and present new stochastic (in particular, Bayesian) techniques based on Markov Random Field models for their solution. We derive efficient algorithms and describe parallel implementations on digital parallel SIMD architectures, as well as a new class of parallel hybrid computers that mix digital with analog components.

833 citations
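
As a purely illustrative companion to this abstract, the following sketch shows MAP estimation under a Gaussian MRF smoothness prior with additive Gaussian noise, minimised by plain gradient descent; the function name, parameter values, and toy data are assumptions, and the paper's stochastic and analog schemes are not reproduced here.

```python
# Minimal sketch, not the paper's algorithm: MAP estimation of a surface from
# sparse noisy samples under a Gaussian (quadratic) MRF smoothness prior and a
# Gaussian noise model, minimised by gradient descent.
import numpy as np

def map_reconstruct(data, mask, lam=2.0, iters=300, step=0.05):
    """data: noisy observations (zeros where unobserved); mask: 1 at observed sites;
    lam: weight of the smoothness prior."""
    f = data.copy()
    for _ in range(iters):
        # Gradient of the data term: (f - data) at observed sites only.
        grad = mask * (f - data)
        # Gradient of the prior term: a discrete Laplacian penalising roughness.
        lap = (-4.0 * f
               + np.roll(f, 1, 0) + np.roll(f, -1, 0)
               + np.roll(f, 1, 1) + np.roll(f, -1, 1))
        grad -= lam * lap
        f -= step * grad
    return f

# Toy usage: recover a smooth ramp from 30% noisy samples.
rng = np.random.default_rng(0)
truth = np.outer(np.linspace(0.0, 1.0, 32), np.ones(32))
mask = (rng.random(truth.shape) < 0.3).astype(float)
observed = mask * (truth + 0.05 * rng.standard_normal(truth.shape))
estimate = map_reconstruct(observed, mask)
```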


01 Oct 1987
TL;DR: A scheme is derived to integrate intensity edges with stereo depth and motion field information, and results from a Connection Machine algorithm on synthetic and natural images are shown; the use of intensity edges to integrate other visual cues and to help discover discontinuities emerges as a general and powerful principle.
Abstract: Integration of several vision modules is likely to be one of the keys to the power and robustness of the human visual system. We suggest that integration is best performed at the location of discontinuities in early processes, such as discontinuities in image brightness, depth, motion, texture, and color. Coupled Markov Random Field models can be used to combine vision modalities with their discontinuities. We derive a scheme to integrate intensity edges with stereo depth and motion field information and show results from a Connection Machine algorithm on synthetic and natural images. The use of intensity edges to integrate other visual cues and to help discover discontinuities emerges as a general and powerful principle.

85 citations
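
To make the coupling idea concrete, here is a hypothetical sketch (the energy form, array shapes, and parameters are assumptions, not the paper's equations) of a coupled-MRF energy in which binary line processes interrupt depth smoothness and intensity edges lower the cost of asserting a discontinuity.

```python
# Minimal sketch, not the paper's formulation: energy of a coupled MRF in which
# binary line processes interrupt smoothness of a depth field, and intensity
# edges lower the cost of switching a line element on.
import numpy as np

def coupled_energy(depth, data, lines_h, lines_v, edges_h, edges_v,
                   lam=1.0, alpha=2.0, beta=1.5):
    """depth, data: (H, W) arrays; lines_h, edges_h: (H-1, W) binary arrays between
    vertically adjacent sites; lines_v, edges_v: (H, W-1) between horizontal pairs."""
    data_term = np.sum((depth - data) ** 2)
    # Smoothness is charged only across pairs whose line element is off.
    dh = (depth[1:, :] - depth[:-1, :]) ** 2 * (1 - lines_h)
    dv = (depth[:, 1:] - depth[:, :-1]) ** 2 * (1 - lines_v)
    smooth_term = lam * (dh.sum() + dv.sum())
    # Activating a line element costs alpha, reduced by beta where an intensity
    # edge supports the discontinuity -- the integration step described above.
    line_term = (np.sum(lines_h * (alpha - beta * edges_h))
                 + np.sum(lines_v * (alpha - beta * edges_v)))
    return data_term + smooth_term + line_term
```

Minimising such an energy, for example by annealing, tends to place depth and motion discontinuities where intensity edges already provide support, which is the integration principle the abstract describes.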


Journal ArticleDOI
TL;DR: A segmentation algorithm based on deterministic relaxation with varying neighborhood structures is presented for noisy images modeled as a discrete-valued Markov random field corrupted by additive, independent Gaussian noise.
Abstract: This paper presents a segmentation algorithm based on deterministic relaxation with varying neighborhood structures for the segmentation of noisy images. The image is modeled as a discrete-valued Markov random field (MRF), or equivalently a Gibbs random field, corrupted by additive, independent, Gaussian noise; although, additivity and Gaussian assumptions are not necessary for the algorithm. The algorithm seeks to determine the maximum a posteriori (MAP) estimate of the noiseless scene. Using varying neighborhoods during relaxation helps pick up certain directional features in the image which are otherwise smoothed out. The parallelism of the algorithm is underscored by providing its mapping to mesh-connected and systolic array processors suitable for VLSI implementation. Segmentation results are given for 2- and 4-level Gibbs distributed and geometric images corrupted by noise of different levels. A comparative study of this segmentation algorithm with other relaxation algorithms and a single-sweep dynamic programming algorithm, all seeking the MAP estimate, is also presented.

44 citations
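
A minimal sketch of the kind of deterministic relaxation the abstract describes, simplified to a fixed 4-neighbourhood rather than the paper's varying neighbourhood structures; the label means, noise level, and Potts-style penalty below are assumed values.

```python
# Minimal sketch (simplified): deterministic relaxation toward the MAP segmentation
# of an image modelled as a discrete MRF observed through additive Gaussian noise.
import numpy as np

def relax_segment(y, means, sigma=0.5, beta=1.5, sweeps=10):
    """y: noisy image; means: grey level of each label; returns a label image."""
    labels = np.argmin((y[..., None] - np.asarray(means)) ** 2, axis=-1)  # ML init
    H, W = y.shape
    for _ in range(sweeps):
        for i in range(H):
            for j in range(W):
                best, best_e = labels[i, j], np.inf
                for k in range(len(means)):
                    e = (y[i, j] - means[k]) ** 2 / (2 * sigma ** 2)
                    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ni, nj = i + di, j + dj
                        if 0 <= ni < H and 0 <= nj < W and labels[ni, nj] != k:
                            e += beta  # Potts-style disagreement penalty
                    if e < best_e:
                        best, best_e = k, e
                labels[i, j] = best
    return labels
```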


Book ChapterDOI
01 Jan 1987
TL;DR: Two models are given for the extraction of boundaries in digital images, one for discriminating textures and the other for discriminating objects, where a Markov random field is constructed as a prior distribution over intensities and labels.
Abstract: Two models are given for the extraction of boundaries in digital images, one for discriminating textures and the other for discriminating objects. In both cases a Markov random field is constructed as a prior distribution over intensities (observed) and labels (unobserved); the labels are either the texture types or boundary indicators. The posterior distribution, i.e., the conditional distribution over the labels given the intensities, is then analyzed by a Monte-Carlo algorithm called stochastic relaxation. The final labeling corresponds to a local maximum of the posterior likelihood.

35 citations
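
The following is a minimal, hypothetical sketch of stochastic relaxation: annealed Gibbs sampling of a label field given observed intensities under a Potts-style prior. The annealing schedule, noise model, and parameters are assumptions rather than the chapter's exact texture and object models.

```python
# Minimal sketch, not the chapter's exact model: stochastic relaxation / annealed
# Gibbs sampling over a label field given observed intensities, with a Potts prior;
# the final labelling approximates a local maximum of the posterior.
import numpy as np

def stochastic_relaxation(y, means, sigma=0.5, beta=1.0, sweeps=30, seed=0):
    rng = np.random.default_rng(seed)
    K = len(means)
    labels = rng.integers(K, size=y.shape)
    H, W = y.shape
    for s in range(sweeps):
        T = max(0.05, 2.0 * 0.95 ** s)  # hypothetical annealing schedule
        for i in range(H):
            for j in range(W):
                energies = np.empty(K)
                for k in range(K):
                    e = (y[i, j] - means[k]) ** 2 / (2 * sigma ** 2)
                    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ni, nj = i + di, j + dj
                        if 0 <= ni < H and 0 <= nj < W and labels[ni, nj] != k:
                            e += beta
                    energies[k] = e
                # Sample the new label from the local conditional posterior at temperature T.
                p = np.exp(-(energies - energies.min()) / T)
                labels[i, j] = rng.choice(K, p=p / p.sum())
    return labels
```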


Book ChapterDOI
01 Jan 1987
TL;DR: A unified treatment is outlined of the labeling and learning problems for the hidden Markov chain model currently used in many speech recognition systems and for the hidden Pickard random field image model, with labeling formulated in terms of Baum's classical forward-backward recurrence formulae.
Abstract: The paper outlines a unified treatment of the labeling and learning problems for the so-called hidden Markov chain model currently used in many speech recognition systems and the hidden Pickard random field image model (a small but interesting, causal sub-class of hidden Markov random field models). In both cases, labeling techniques are formulated in terms of Baum’s classical forward-backward recurrence formulae, and learning is accomplished by a specialization of the EM algorithm for mixture identification. Experimental results demonstrate that the approach is subjectively relevant to the image restoration and segmentation problems.

28 citations
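
A minimal sketch of the labeling step for the hidden Markov chain case: Baum's scaled forward-backward recurrences returning posterior state marginals. The learning step (a specialisation of the EM algorithm) would re-estimate the parameters from these quantities; the toy parameters below are invented.

```python
# Minimal sketch: Baum's forward-backward recurrences for a discrete hidden Markov
# chain, giving the posterior label marginals used in the labelling step.
import numpy as np

def forward_backward(obs, pi, A, B):
    """obs: observation indices; pi: initial probs (K,); A: transitions (K, K);
    B: emission probs (K, num_symbols). Returns posterior P(state_t = k | obs)."""
    T, K = len(obs), len(pi)
    alpha = np.zeros((T, K))
    beta = np.zeros((T, K))
    alpha[0] = pi * B[:, obs[0]]
    alpha[0] /= alpha[0].sum()                      # scaled to avoid underflow
    for t in range(1, T):
        alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
        alpha[t] /= alpha[t].sum()
    beta[-1] = 1.0
    for t in range(T - 2, -1, -1):
        beta[t] = A @ (B[:, obs[t + 1]] * beta[t + 1])
        beta[t] /= beta[t].sum()
    gamma = alpha * beta
    return gamma / gamma.sum(axis=1, keepdims=True)

# Toy usage: two hidden states, three observation symbols (parameters invented).
pi = np.array([0.6, 0.4])
A = np.array([[0.9, 0.1], [0.2, 0.8]])
B = np.array([[0.7, 0.2, 0.1], [0.1, 0.3, 0.6]])
posterior_marginals = forward_backward([0, 0, 2, 2, 1], pi, A, B)
```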


Journal ArticleDOI
TL;DR: In this article, a binary Markov random field whose neighbor structure is specified by a countable graph with nodes of uniformly bounded degree is represented as the nodewise modulo 2 sum of two independent binary random fields, one of which is white binary noise of positive weight.
Abstract: Consider a binary Markov random field whose neighbor structure is specified by a countable graph with nodes of uniformly bounded degree. Under a minimal assumption we prove a decomposition theorem to the effect that such a Markov random field can be represented as the nodewise modulo 2 sum of two independent binary random fields, one of which is white binary noise of positive weight. Said decomposition provides the information theorist with an exact expression for the per-site rate-distortion function of the random field over an interval of distortions not exceeding this weight. We mention possible implications for communication theory, probability theory and statistical physics.

18 citations


Proceedings ArticleDOI
24 Sep 1987
TL;DR: This paper addresses methods for high and low level multi-sensor integration based on maintaining consistent labelings of features detected in different sensor domains and implementation in a concurrent computing environment.
Abstract: This paper addresses methods for high and low level multi-sensor integration based on maintaining consistent labelings of features detected in different sensor domains. Implementation in a concurrent computing environment is discussed. Keywords: Multi-Sensor Integration, Sensor Fusion, Consistent Labeling, Markov Random Field, Concurrent Computing, Hypercube, Simulated Annealing.

11 citations
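
As a rough illustration of the optimisation tool named in the keywords, here is a generic simulated-annealing routine over a discrete labelling driven by a user-supplied consistency energy; the function, schedule, and toy energy are assumptions and do not reflect the paper's hypercube implementation.

```python
# Minimal sketch, not the paper's system: simulated annealing over a discrete
# labelling, driven by an energy that scores how consistent the labels are.
import numpy as np

def anneal_labels(energy, n_sites, n_labels, sweeps=200, t0=2.0, cool=0.98, seed=0):
    """energy(labels) -> float; lower means a more consistent labelling."""
    rng = np.random.default_rng(seed)
    labels = rng.integers(n_labels, size=n_sites)
    e, T = energy(labels), t0
    for _ in range(sweeps):
        for site in rng.permutation(n_sites):
            proposal = labels.copy()
            proposal[site] = rng.integers(n_labels)
            de = energy(proposal) - e
            # Accept downhill moves always, uphill moves with Boltzmann probability.
            if de <= 0 or rng.random() < np.exp(-de / T):
                labels, e = proposal, e + de
        T *= cool
    return labels

# Toy usage: prefer neighbouring sites along a chain to share the same label.
pair_cost = lambda L: float(np.sum(L[1:] != L[:-1]))
best = anneal_labels(pair_cost, n_sites=20, n_labels=3)
```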


01 Jan 1987
TL;DR: Gains in clarity of design, flexibility of use, and robustness of the resulting algorithms justify a probabilistic approach to low-level vision problems.
Abstract: A probabilistic approach to low-level vision algorithms results in algorithms that are easy to tune for a particular application and modules that can be used for many applications. Several routines that return likelihoods can be combined into a single more robust routine. Thus it is easy to construct specialized yet robust low-level vision systems out of algorithms that calculate likelihoods. This dissertation studies algorithms that generate and use likelihoods. Probabilities derive from likelihoods using Bayes' rule. Thus vision algorithms that return likelihoods also generate probabilities. Likelihoods are used by Markov Random Field algorithms. This approach yields facet model boundary pixel detectors that return likelihoods. Experiments show that the detectors designed for the step edge model are on par with the best edge detectors reported in the literature. Algorithms are presented here that use the generalized Hough transform to calculate likelihoods for object recognition. Evidence, represented as likelihoods, from several detectors that view the same data with different models is combined here. The likelihoods that result are used to build robust detectors out of several specialized ones. Results are shown here for combining boundary detectors that assume several levels of noise and for combining detectors of several sizes. The gains in clarity of design, flexibility of use, and robustness of the resulting algorithms justify a probabilistic approach to low-level vision problems.

8 citations
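
A minimal sketch of the combination step the abstract relies on: likelihoods from several detectors are multiplied under a conditional-independence assumption and converted to a posterior with Bayes' rule. The detector outputs and prior below are invented numbers.

```python
# Minimal sketch: combining likelihoods from several detectors under a
# conditional-independence assumption, then applying Bayes' rule.
import numpy as np

def combine_likelihoods(likelihoods):
    """likelihoods: list of arrays, each giving P(data_i | class) per pixel.
    Assuming conditional independence, the joint likelihood is their product."""
    return np.prod(np.stack(likelihoods), axis=0)

def posterior(likelihood_pos, likelihood_neg, prior_pos=0.1):
    """Bayes' rule: P(boundary | data) from the two class-conditional likelihoods."""
    num = likelihood_pos * prior_pos
    return num / (num + likelihood_neg * (1.0 - prior_pos))

# Toy usage: two detectors (e.g. tuned to different noise levels) vote on three pixels.
det_a_pos, det_a_neg = np.array([0.8, 0.4, 0.1]), np.array([0.1, 0.5, 0.7])
det_b_pos, det_b_neg = np.array([0.7, 0.3, 0.2]), np.array([0.2, 0.6, 0.6])
p_boundary = posterior(combine_likelihoods([det_a_pos, det_b_pos]),
                       combine_likelihoods([det_a_neg, det_b_neg]))
```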


Journal ArticleDOI
TL;DR: The author derives a sampling scheme that preserves the information content of the data while considerably reducing their dimensionality, together with a refinement of this scheme in which residual redundancy is removed by a more careful selection of what is sampled.
Abstract: The author advocates the use of Markov random field models to describe texture properties generally. For homogeneous textures he derives a sampling scheme that preserves the information content of the data whilst reducing their dimensionality considerably. He derives a refinement of this sampling scheme where residual redundancy is removed by a more careful selection of what is sampled. The results are related to the grey level co-occurrence method of texture classification and to the pattern recognition device that is known as WISARD.

5 citations
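
For reference, a short sketch of the grey level co-occurrence statistic to which the abstract relates the MRF sampling scheme, computed for a single (assumed non-negative) displacement vector; the toy texture is random.

```python
# Minimal sketch: grey level co-occurrence matrix for one displacement vector.
import numpy as np

def cooccurrence(img, d=(0, 1), levels=4):
    """img: integer grey levels in [0, levels); d: non-negative displacement (di, dj).
    Returns the normalised frequency of grey-level pairs at that displacement."""
    di, dj = d
    a = img[:img.shape[0] - di, :img.shape[1] - dj]   # reference pixels
    b = img[di:, dj:]                                 # displaced pixels
    C = np.zeros((levels, levels))
    np.add.at(C, (a.ravel(), b.ravel()), 1)           # count co-occurring pairs
    return C / C.sum()

# Toy usage: horizontal co-occurrence statistics of a random 4-level texture.
rng = np.random.default_rng(1)
tex = rng.integers(0, 4, size=(32, 32))
C = cooccurrence(tex, d=(0, 1), levels=4)
```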


Proceedings ArticleDOI
10 Sep 1987
TL;DR: Information theoretic techniques are used to derive schemes for the Bayesian analysis of images with spatially homogeneous statistical properties; each scheme is equivalent to deducing the structure of the Markov random field which models the data.
Abstract: I use information theoretic techniques to derive schemes for the Bayesian analysis of images with spatially homogeneous statistical properties. In any particular case the scheme is equivalent to deducing the structure of the Markov random field which models the data. This scheme may also be viewed as a generalised sampling technique where the data is reduced by a set of sampling functions to a more compact set of data, which nevertheless retains all the information content of the original data.

4 citations


Journal ArticleDOI
TL;DR: Under appropriate conditions on the nearest-neighbor distributions, the conditional count of sites in a growing box at which a Markov random field falls in an increasingly rare set converges to a Poisson distribution, which yields an extreme value limit theorem for real-valued Markov random fields.
Abstract: Let X_t, t ∈ Z^m, be a Markov random field assuming values in R^M. Let I_n be a rectangular box in Z^m with its center at 0 and corner points with coordinates ±n. Let (A_n) be a sequence of measurable subsets of R^M such that P(X_t ∈ A_n | X_s, s ∈ neighborhood of t) → 0 for n → ∞, and let f_n(x) be the indicator of A_n. Under appropriate conditions on the nearest-neighbor distributions of (X_t), the conditional distribution of Σ_{t ∈ I_n} f_n(X_t), given the values of X_s for s on the boundary of I_n, converges to the Poisson distribution. An immediate application is an extreme value limit theorem for a real-valued Markov random field.

01 Jan 1987
TL;DR: The maximum entropy method is used to reconstruct clutter probability density functions from observed statistical properties; this leads to representations of clutter in terms of Markov random fields (MRF), and it is shown how the set of statistics which is used for each clutter type may be optimised in order to yield a more compact probabilistic model.
Abstract: A thorough understanding of clutter statistics is a prerequisite for the successful analysis of radar images. Usually very simple statistics such as moments and correlation properties are used, perhaps based on an underlying physical model of the scattering and imaging process. In this paper we use the maximum entropy method to reconstruct clutter probability density functions (PDF) from observed statistical properties; this leads to representations of clutter in terms of Markov random fields (MRF). Furthermore we show how the set of statistics which is used for each clutter type may be optimised in order to yield a more compact probabilistic model. The principal advantage of our results is that MRF clutter models may be mapped directly onto parallel image processing hardware, and they provide a rigorous framework for Bayesian decision making concerning the presence of objects embedded in clutter. Image segmentation is another very useful application of these MRF models.
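
As a much-simplified illustration (a single-site grey-level PDF rather than a full MRF, with invented moment targets), the sketch below reconstructs a maximum-entropy distribution from observed statistics by gradient ascent on the dual; in the full MRF case the multipliers would play the role of clique potentials.

```python
# Minimal sketch, simplified to a single-pixel grey-level PDF: maximum-entropy
# reconstruction of a distribution from observed moments. The maxent solution has
# exponential form p(x) proportional to exp(sum_k lam_k * phi_k(x)); the multipliers
# are fitted by gradient ascent on the (concave) dual.
import numpy as np

def maxent_pdf(values, feature_fns, targets, iters=2000, step=0.5):
    """values: support of the PDF; feature_fns: statistics phi_k (kept bounded in [0, 1]
    here for stability); targets: observed averages of those statistics."""
    phi = np.array([[f(v) for f in feature_fns] for v in values])  # (N, K)
    lam = np.zeros(len(feature_fns))
    for _ in range(iters):
        p = np.exp(phi @ lam)
        p /= p.sum()
        lam += step * (np.array(targets) - phi.T @ p)  # match model moments to data
    p = np.exp(phi @ lam)
    return p / p.sum()

# Toy usage: reconstruct a PDF on 16 normalised grey levels from its first two moments.
levels = np.arange(16) / 15.0
p = maxent_pdf(levels, [lambda x: x, lambda x: x * x], targets=[0.4, 0.2])
```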


Proceedings ArticleDOI
10 Sep 1987
TL;DR: This paper presents a new class of nonstationary signals, the class of semi-Markov random fields, and derives the likelihood function, which uniquely describes the statistical behavior of these random fields.
Abstract: In this paper we focus our attention on modeling nonstationary texture behavior, which results in accurate models for texture analysis and synthesis. We present a new class of nonstationary signals, the class of semi-Markov random fields. The likelihood function, which uniquely describes the statistical behavior of these random fields, is derived. We examine the validity of our two-dimensional semi-Markov random field models in synthesizing and analyzing textures. The appropriateness of the semi-Markov random field models for synthesizing images similar to real textures is studied, and different models are optimally fitted to real textures by the use of a maximum-likelihood procedure.

Book ChapterDOI
01 Jan 1987
TL;DR: An introduction to and interpretation of recently developed techniques for pattern processing, covering the stochastic modelling approach to pattern recognition and adaptive networks which "learn" jobs such as pattern classification without necessarily containing explicit models of the data distributions.
Abstract: This is an introduction to and interpretation of some techniques which have been developed recently for pattern processing. The first part is concerned with the stochastic modelling approach to pattern recognition, which includes structural and statistical aspects. Various varieties of hidden Markov models, which are the basis of the most successful current automatic speech recognition systems, are viewed as a special case of Markov random fields. The second part is concerned with adaptive networks which “learn” to do jobs such as pattern classification, without necessarily containing explicit models of the data distributions. The main approaches covered are the Boltzmann machine (which is also interpreted as a Markov random field) and a recently invented multi-layer perceptron network.
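
To illustrate the Boltzmann-machine-as-MRF interpretation, here is a minimal Gibbs sampler over the binary units of a small network with symmetric weights; the weights, biases, and number of steps below are arbitrary examples, not taken from the chapter.

```python
# Minimal sketch: Gibbs sampling of the units of a small Boltzmann machine
# (binary states, symmetric weights W, biases b), viewed as a Markov random field.
import numpy as np

def gibbs_sample(W, b, steps=1000, seed=0):
    rng = np.random.default_rng(seed)
    n = len(b)
    s = rng.integers(0, 2, size=n)
    for _ in range(steps):
        i = rng.integers(n)
        # The conditional of unit i given the rest is a logistic function of its net input.
        net = W[i] @ s + b[i]
        s[i] = rng.random() < 1.0 / (1.0 + np.exp(-net))
    return s

# Toy usage: three units with one excitatory and one inhibitory coupling.
W = np.array([[0.0, 2.0, -1.0],
              [2.0, 0.0, 0.0],
              [-1.0, 0.0, 0.0]])
b = np.zeros(3)
state = gibbs_sample(W, b)
```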

01 Dec 1987
TL;DR: An algorithm is developed for segmenting images corrupted by a high level of noise with different characteristics; it is based on a Markov Random Field model of the image and uses Kalman Filtering techniques and Dynamic Programming to smooth within the regions.
Abstract: The purpose of this thesis is to develop an algorithm for segmenting images corrupted by a high level of noise with different characteristics. In particular the images considered are composed of several regions describing different objects and background. The algorithm described is based on a Markov Random Field model of the image and uses Kalman Filtering techniques and Dynamic Programming in order to smooth within the regions. The theoretical background for one dimensional and two dimensional data which have different characteristics and simulation results are presented, with examples of synthetic data and underwater images. Keywords: Markov Random field; Dynamic programming; Kalman filtering techniques.
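
A minimal, one-dimensional illustration of the Kalman filtering ingredient (within a single region only; the thesis's two-dimensional treatment and dynamic-programming step are not reproduced, and the noise variances below are assumed).

```python
# Minimal sketch: a scalar Kalman filter smoothing noisy intensities under a
# random-walk model, the kind of within-region smoothing the abstract describes.
import numpy as np

def kalman_filter_1d(y, q=0.01, r=0.5):
    """y: noisy 1-D signal; q: process noise variance; r: measurement noise variance."""
    x, P = y[0], r
    out = np.empty_like(y, dtype=float)
    out[0] = x
    for t in range(1, len(y)):
        P = P + q                      # predict under the random-walk model
        K = P / (P + r)                # Kalman gain
        x = x + K * (y[t] - x)         # update with the new measurement
        P = (1 - K) * P
        out[t] = x
    return out

# Toy usage: filter a noisy constant region.
rng = np.random.default_rng(2)
signal = 3.0 + 0.7 * rng.standard_normal(100)
filtered = kalman_filter_1d(signal)
```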