
Showing papers on "Markov random field published in 1999"


Proceedings ArticleDOI
20 Sep 1999
TL;DR: A non-parametric method for texture synthesis that aims at preserving as much local structure as possible and produces good results for a wide variety of synthetic and real-world textures.
Abstract: A non-parametric method for texture synthesis is proposed. The texture synthesis process grows a new image outward from an initial seed, one pixel at a time. A Markov random field model is assumed, and the conditional distribution of a pixel given all its neighbors synthesized so far is estimated by querying the sample image and finding all similar neighborhoods. The degree of randomness is controlled by a single perceptually intuitive parameter. The method aims at preserving as much local structure as possible and produces good results for a wide variety of synthetic and real-world textures.
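The abstract describes the synthesis loop in enough detail to sketch it. Below is a minimal, hedged Python/NumPy illustration of the idea (not the authors' code): each unfilled border pixel's partially synthesized neighborhood is compared against every window in the sample image via a Gaussian-weighted, masked SSD, and the new pixel value is drawn at random from the centers of all windows whose distance lies within a tolerance of the best match. Names such as `synthesize`, `window_size`, and `eps` are our own, and the growth order and border handling are simplified.

```python
import numpy as np

def gaussian_kernel(size, sigma=None):
    """2D Gaussian weights emphasizing pixels near the window center."""
    sigma = sigma or size / 6.4
    ax = np.arange(size) - size // 2
    g = np.exp(-(ax ** 2) / (2 * sigma ** 2))
    k = np.outer(g, g)
    return k / k.sum()

def dilate(mask):
    """4-connected dilation of a boolean mask (no wrap-around)."""
    d = mask.copy()
    d[1:, :] |= mask[:-1, :]
    d[:-1, :] |= mask[1:, :]
    d[:, 1:] |= mask[:, :-1]
    d[:, :-1] |= mask[:, 1:]
    return d

def best_matches(sample, window, mask, kernel, eps):
    """Center pixels of all sample windows whose masked, Gaussian-weighted SSD
    to `window` is within (1 + eps) of the best distance found."""
    w = window.shape[0]
    weights = kernel * mask
    weights = weights / weights.sum()
    centers, dists = [], []
    for i in range(sample.shape[0] - w + 1):
        for j in range(sample.shape[1] - w + 1):
            patch = sample[i:i + w, j:j + w]
            dists.append(np.sum(weights * (patch - window) ** 2))
            centers.append(patch[w // 2, w // 2])
    dists = np.asarray(dists)
    return np.asarray(centers)[dists <= dists.min() * (1 + eps)]

def synthesize(sample, out_size=64, window_size=11, eps=0.1, seed_size=3):
    """Grow a texture outward, one pixel at a time, from a seed copied
    from the sample (a simplified rendering of the non-parametric scheme)."""
    rng = np.random.default_rng(0)
    half = window_size // 2
    kernel = gaussian_kernel(window_size)
    out = np.zeros((out_size, out_size))
    filled = np.zeros_like(out, dtype=bool)
    c = (out_size - seed_size) // 2
    out[c:c + seed_size, c:c + seed_size] = sample[:seed_size, :seed_size]
    filled[c:c + seed_size, c:c + seed_size] = True
    while not filled.all():
        # Visit unfilled pixels that touch the already-synthesized region.
        for y, x in np.argwhere(dilate(filled) & ~filled):
            win = np.pad(out, half)[y:y + window_size, x:x + window_size]
            msk = np.pad(filled.astype(float), half)[y:y + window_size, x:x + window_size]
            out[y, x] = rng.choice(best_matches(sample, win, msk, kernel, eps))
            filled[y, x] = True
    return out
```

Here `eps` plays the role of the single randomness-controlling parameter mentioned in the abstract: a larger tolerance admits more candidate windows and so produces a more random texture.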

2,972 citations


Journal ArticleDOI
TL;DR: The estimation of 2D motion from time-varying images is reviewed, showing that even ideal constraints may not provide a well-defined estimation criterion and presenting several fast search strategies for the optimization of an estimation criterion.
Abstract: We have reviewed the estimation of 2D motion from time-varying images, paying particular attention to the underlying models, estimation criteria, and optimization strategies. Several parametric and nonparametric models for the representation of motion vector fields and motion trajectory fields have been discussed. For a given region of support, these models determine the dimensionality of the estimation problem as well as the amount of data that has to be interpreted or transmitted thereafter. Also, the interdependence of motion and image data has been addressed. We have shown that even ideal constraints may not provide a well-defined estimation criterion. Therefore, the data term of an estimation criterion is usually supplemented with a smoothness term that can be expressed explicitly or implicitly via a constraining motion model. We have paid particular attention to the statistical criteria based on Markov random fields. Because the optimization of an estimation criterion typically involves a large number of unknowns, we have presented several fast search strategies.
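As a hedged illustration of the data-plus-smoothness structure discussed above (the notation is ours, not the review's), the Bayesian/MRF formulation of motion estimation typically reads

```latex
\hat{\mathbf{d}}_{\mathrm{MAP}}
= \arg\max_{\mathbf{d}} \; p(I \mid \mathbf{d})\, p(\mathbf{d})
= \arg\min_{\mathbf{d}} \;
  \underbrace{-\log p(I \mid \mathbf{d})}_{\text{data term}}
  \;+\;
  \underbrace{-\log p(\mathbf{d})}_{\text{smoothness term (MRF prior)}},
```

where the motion field d is modeled as a Markov random field, so its prior is a Gibbs distribution whose clique potentials penalize differences between neighboring motion vectors; the fast search strategies mentioned above are ways of approximately minimizing this energy over the many unknowns.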

395 citations


Journal ArticleDOI
TL;DR: A new algorithm for segmentation of textured images using a multiresolution Bayesian approach whose segmentation criterion is a natural extension of the single-resolution "maximization of the posterior marginals" (MPM) estimate.
Abstract: We present a new algorithm for segmentation of textured images using a multiresolution Bayesian approach. The new algorithm uses a multiresolution Gaussian autoregressive (MGAR) model for the pyramid representation of the observed image, and assumes a multiscale Markov random field model for the class label pyramid. The models used in this paper incorporate correlations between different levels of both the observed image pyramid and the class label pyramid. The criterion used for segmentation is the minimization of the expected value of the number of misclassified nodes in the multiresolution lattice. The estimate which satisfies this criterion is referred to as the "multiresolution maximization of the posterior marginals" (MMPM) estimate, and is a natural extension of the single-resolution "maximization of the posterior marginals" (MPM) estimate. Previous multiresolution segmentation techniques have been based on the maximum a posteriori (MAP) estimation criterion, which has been shown to be less appropriate for segmentation than the MPM criterion. It is assumed that the number of distinct textures in the observed image is known. The parameters of the MGAR model--the means, prediction coefficients, and prediction error variances of the different textures--are unknown. A modified version of the expectation-maximization (EM) algorithm is used to estimate these parameters. The parameters of the Gibbs distribution for the label pyramid are assumed to be known. Experimental results demonstrating the performance of the algorithm are presented.
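For readers unfamiliar with the MPM criterion, a hedged statement in our own notation: whereas MAP maximizes the joint posterior of the whole label field, MPM minimizes the expected number of misclassified sites by maximizing each site's posterior marginal, and MMPM applies the same rule at every node of the multiresolution lattice,

```latex
\hat{x}_s^{\mathrm{MPM}} \;=\; \arg\max_{x_s} \, P\big(x_s \mid \mathbf{y}\big)
\quad \text{for every node } s \text{ of the lattice},
```

where y denotes the observed image pyramid; the marginals are typically approximated by sampling from the posterior (e.g., with a Gibbs sampler), which is where the EM-estimated MGAR parameters and the known Gibbs prior parameters enter.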

166 citations


Journal ArticleDOI
TL;DR: This paper provides a quantitative measure for the so-called nonaccidental statistics and, thus, justifies some empirical observations of Gestalt psychology by information theory.
Abstract: The goal of this paper is to study a mathematical framework of 2D object shape modeling and learning for middle level vision problems, such as image segmentation and perceptual organization. For this purpose, we pursue generic shape models which characterize the most common features of 2D object shapes. In this paper, shape models are learned from observed natural shapes based on a minimax entropy learning theory. The learned shape models are Gibbs distributions defined on Markov random fields (MRFs). The neighborhood structures of these MRFs correspond to Gestalt laws-colinearity, cocircularity, proximity, parallelism, and symmetry. Thus, both contour-based and region-based features are accounted for. Stochastic Markov chain Monte Carlo (MCMC) algorithms are proposed for learning and model verification. Furthermore, this paper provides a quantitative measure for the so-called nonaccidental statistics and, thus, justifies some empirical observations of Gestalt psychology by information theory. Our experiments also demonstrate that global shape properties can arise from interactions of local features.

146 citations


Journal ArticleDOI
TL;DR: This work details the theory required, and presents an algorithm that is easily implemented and practical in terms of computation time, and demonstrates this algorithm on three MRF models--the standard Potts model, an inhomogeneous variation of the Potts model, and a long-range interaction model better adapted to modeling real-world images.
Abstract: Developments in statistics now allow maximum likelihood estimators for the parameters of Markov random fields (MRFs) to be constructed. We detail the theory required, and present an algorithm that is easily implemented and practical in terms of computation time. We demonstrate this algorithm on three MRF models: the standard Potts model, an inhomogeneous variation of the Potts model, and a long-range interaction model, better adapted to modeling real-world images. We estimate the parameters from a synthetic and a real image, and then resynthesize the models to demonstrate which features of the image have been captured by the model. Segmentations are computed based on the estimated parameters and conclusions drawn.
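For reference (our notation, not the authors'), the first of the three models, the standard Potts MRF, is the Gibbs distribution

```latex
P(\mathbf{x}) \;=\; \frac{1}{Z(\beta)} \,
\exp\!\Big( \beta \sum_{\langle s,t \rangle} \delta(x_s, x_t) \Big),
```

where the sum runs over neighboring site pairs, delta equals 1 when the two labels agree and 0 otherwise, beta controls how strongly labels cluster, and Z(beta) is the normalizing constant whose intractability is precisely what makes maximum likelihood estimation of beta nontrivial.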

139 citations


Journal ArticleDOI
TL;DR: The authors extend the methodology proposed by A. H. Schistad et al. (1996), by demonstrating that the use of an effective search procedure, the genetic algorithm, leads to improved parameter estimation and hence higher classification accuracies.
Abstract: The use of contextual information for modeling the prior probability mass function has found applications in the classification of remotely sensed data. With the increasing availability of multisource remotely sensed data sets, random field models, especially Markov random fields (MRF), have been found to provide a theoretically robust yet mathematical tractable way of coding multisource information and of modeling contextual behavior. It is well known that the performance of a model is dependent both on its functional form (in this case, the classification algorithm) and on the accuracy of the estimates of model parameters. In dealing with multisource data, the determination of source weighting and MRF model parameters is a difficult issue. The authors extend the methodology proposed by A. H. Schistad et al. (1996), by demonstrating that the use of an effective search procedure, the genetic algorithm, leads to improved parameter estimation and hence higher classification accuracies.

112 citations


Journal ArticleDOI
TL;DR: An original method for the unsupervised analysis of images supplied by high-resolution sonar, using a monoscale MRF model with a priori information based on physical properties of each region, which allows echo areas to be distinguished from sea-bottom reverberation.

111 citations


Journal ArticleDOI
TL;DR: In this paper, a common method is developed to derive efficient GNC-algorithms for the minimization of MAP energies which arise in the context of any observation system giving rise to a convex data-fidelity term and of Markov random field energies involving any nonconvex and/or nonsmooth PFs.
Abstract: This paper is concerned with the reconstruction of images (or signals) from incomplete, noisy data, obtained at the output of an observation system. The solution is defined in maximum a posteriori (MAP) sense and it appears as the global minimum of an energy function joining a convex data-fidelity term and a Markovian prior energy. The sought images are composed of nearly homogeneous zones separated by edges and the prior term accounts for this knowledge. This term combines general nonconvex potential functions (PFs) which are applied to the differences between neighboring pixels. The resultant MAP energy generally exhibits numerous local minima. Calculating its local minimum, placed in the vicinity of the maximum likelihood estimate, is inexpensive but the resultant estimate is usually disappointing. Optimization using simulated annealing is practical only in restricted situations. Several deterministic suboptimal techniques approach the global minimum of special MAP energies, employed in the field of image denoising, at a reasonable numerical cost. The latter techniques are not directly applicable to general observation systems, nor to general Markovian prior energies. This work is devoted to the generalization of one of them, the graduated nonconvexity (GNC) algorithm, in order to calculate nearly-optimal MAP solutions in a wide range of situations. In fact, GNC provides a solution by tracking a set of minima along a sequence of approximate energies, starting from a convex energy and progressing toward the original energy. In this paper, we develop a common method to derive efficient GNC-algorithms for the minimization of MAP energies which arise in the context of any observation system giving rise to a convex data-fidelity term and of Markov random field (MRF) energies involving any nonconvex and/or nonsmooth PFs. As a side-result, we propose how to construct pertinent initializations which allow us to obtain meaningful solutions using local minimization of these MAP energies. Two numerical experiments-an image deblurring and an emission tomography reconstruction-illustrate the performance of the proposed technique.
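The GNC idea lends itself to a compact sketch. The following hedged Python/NumPy example (the function names, the relaxation schedule, and the interpolated potential are ours) tracks a minimizer along a sequence of energies E_p running from a convex relaxation (p = 1) toward a nonconvex target energy (p = 0), using plain gradient descent at each stage; the paper's construction of the intermediate energies for general data-fidelity terms and potential functions is more refined.

```python
import numpy as np

def gnc_minimize(x0, grad_energy, schedule=(1.0, 0.5, 0.25, 0.1, 0.0),
                 step=1e-3, iters=200):
    """Graduated non-convexity, sketched: minimize a family of energies E_p,
    starting from the convex member (p = 1) and warm-starting each stage with
    the previous minimizer, ending at the target nonconvex energy (p = 0).
    grad_energy(x, p) must return the gradient of E_p at x."""
    x = np.array(x0, dtype=float)
    for p in schedule:                    # track the minimum: convex -> original
        for _ in range(iters):            # plain gradient descent at this stage
            x -= step * grad_energy(x, p)
    return x

def make_grad(y, lam=1.0, T=1.0):
    """Gradient of an illustrative 1-D MAP energy: convex data fidelity
    0.5 * ||x - y||^2 plus an interpolated pixel-difference prior that is
    quadratic for p = 1 and a truncated quadratic (edge-preserving,
    nonconvex) for p = 0. This family is our own choice, not the paper's."""
    def grad(x, p):
        g = x - y                                     # data-fidelity gradient
        d = np.diff(x)                                # neighboring differences
        phi_prime = np.where(np.abs(d) <= T, 2 * d, p * 2 * d)
        g[:-1] -= lam * phi_prime                     # chain rule, left site
        g[1:] += lam * phi_prime                      # chain rule, right site
        return g
    return grad

# Usage sketch: denoise a 1-D signal containing a sharp edge.
y = np.array([0.0, 0.1, 0.05, 2.0, 2.1, 1.95])
x_hat = gnc_minimize(y, make_grad(y, lam=0.5, T=0.5))
```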

110 citations


Proceedings ArticleDOI
15 Mar 1999
TL;DR: A fully automatic technique to obtain image clusters is proposed; inspired by the Markov random field, it is less sensitive to noise as it filters the image while clustering it, and the filter parameters are enhanced in each iteration by the clustering process.
Abstract: This paper describes the application of fuzzy set theory in medical imaging, namely the segmentation of brain images. We propose a fully automatic technique to obtain image clusters. A modified fuzzy c-means (FCM) classification algorithm is used to provide a fuzzy partition. Our new method, inspired by the Markov random field (MRF), is less sensitive to noise as it filters the image while clustering it, and the filter parameters are enhanced in each iteration by the clustering process. We applied the new method to a noisy CT scan and to a single-channel MRI scan. We recommend applying an over-segmentation methodology to the textured MRI scan and a user-guided interface to obtain the final clusters. One of the applications of this technique is TBI recovery prediction, in which it is important to consider the partial volume. It is shown that the system stabilizes after a number of iterations with the membership value of the region contours reflecting the partial volume value. The final stage of the process is devoted to decision making or the defuzzification process.
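To make the "filtering while clustering" idea concrete, here is a hedged Python sketch of a generic MRF-inspired fuzzy c-means variant (not the paper's exact algorithm): after the standard membership update, each membership map is blended with its local neighborhood average, which suppresses isolated noisy assignments. The function name `fcm_mrf`, the blending weight `beta`, and the fixed 3x3 neighborhood are all our assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def fcm_mrf(image, n_clusters=3, m=2.0, beta=0.5, n_iter=30, eps=1e-8):
    """Fuzzy c-means on a grayscale image with a simple MRF-style spatial
    regularization: after each membership update, every membership map is
    blended with its 3x3 neighborhood average, so noisy isolated assignments
    are smoothed away while clustering proceeds. `beta` weights the spatial
    term; the blending rule is a generic illustration, not the paper's."""
    x = np.asarray(image, dtype=float)
    rng = np.random.default_rng(0)
    centers = rng.uniform(x.min(), x.max(), n_clusters)
    for _ in range(n_iter):
        # Distance of every pixel to every cluster center, shape (K, H, W).
        dist = np.stack([np.abs(x - c) + eps for c in centers])
        # Standard FCM membership update: u_k = 1 / sum_j (d_k / d_j)^(2/(m-1)).
        u = 1.0 / np.sum((dist[:, None] / dist[None]) ** (2 / (m - 1)), axis=1)
        # MRF-inspired step: blend memberships with their local average.
        u_spatial = np.stack([uniform_filter(u_k, size=3) for u_k in u])
        u = (1 - beta) * u + beta * u_spatial
        u /= u.sum(axis=0, keepdims=True)
        # Center update with fuzzified memberships.
        um = u ** m
        centers = (um * x).sum(axis=(1, 2)) / um.sum(axis=(1, 2))
    return np.argmax(u, axis=0), u, centers
```

The final fuzzy memberships, rather than the hard labels, are what would carry the partial-volume information discussed in the abstract.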

108 citations


Journal ArticleDOI
TL;DR: A contribution to the automatic 3-D reconstruction of complex urban scenes from aerial stereo pairs is proposed, which consists of segmenting the scene into two different kinds of components: the ground and the above-ground objects.

105 citations


Journal ArticleDOI
TL;DR: This paper proposes multiresolution MRF (MRMRF) modeling to describe textures; the experimental results show that NLC performs much better than Nearest Neighbor (NN) as the measurement in MRMRF modeling.

Journal ArticleDOI
TL;DR: It is shown that the use of a contextual classifier or an existing map of the area can have larger influence on the classification accuracy than using data from an additional sensor.
Abstract: The use of a Markov random field model for multisource classification for map revision applications is investigated. A statistical model is presented, in which data from several remote sensing sensors is merged with spatial contextual information and a previous labeling of the scene from an existing thematic map to reach a consensus classification. The method is tested on two data sets for forest classification, and the classification performance is studied in terms of the effect of using remote sensing data from different sensors, the effect of spatial context, and the effect of using map data from previous surveys in the classification. It is shown that the use of a contextual classifier or an existing map of the area can have larger influence on the classification accuracy than using data from an additional sensor.

Journal ArticleDOI
TL;DR: This paper describes the real time implementation of a simple and robust motion detection algorithm based on Markov random field modeling, based on a hybrid architecture, associating pipeline modules with one asynchronous module to perform the whole process, from video acquisition to moving object masks visualization.
Abstract: This paper describes the real time implementation of a simple and robust motion detection algorithm based on Markov random field (MRF) modeling. MRF-based algorithms often require a significant amount of computations. The intrinsic parallel property of MRF modeling has led most of implementations toward parallel machines and neural networks, but none of these approaches offers an efficient solution for real-world (i.e., industrial) applications. Here, an alternative implementation for the problem at hand is presented yielding a complete, efficient and autonomous real-time system for motion detection. This system is based on a hybrid architecture, associating pipeline modules with one asynchronous module to perform the whole process, from video acquisition to moving object masks visualization. A board prototype is presented and a processing rate of 15 images/s is achieved, showing the validity of the approach.

Proceedings ArticleDOI
23 Jun 1999
TL;DR: An approach to feature-based object recognition, using maximum a posteriori (MAP) estimation under a Markov random field (MRF) model, which provides an efficient solution for a wide class of priors that explicitly model dependencies between individual features of an object.
Abstract: We introduce an approach to feature-based object recognition, using maximum a posteriori (MAP) estimation under a Markov random field (MRF) model. This approach provides an efficient solution for a wide class of priors that explicitly model dependencies between individual features of an object. These priors capture phenomena such as the fact that unmatched features due to partial occlusion are generally spatially correlated rather than independent. The main focus of this paper is a special case of the framework that yields a particularly efficient approximation method. We call this special case spatially coherent matching (SCM), as it reflects the spatial correlation among neighboring features of an object. The SCM method operates directly on the image feature map, rather than relying on the graph-based methods used in the general framework. We present some Monte Carlo experiments showing that SCM yields substantial improvements over Hausdorff matching for cluttered scenes and partially occluded objects.

Book ChapterDOI
01 Jan 1999
TL;DR: This chapter focuses on a probabilistic graph model called the multiscale hidden Markov model (MHMM), which captures the key inter-scale dependencies present in natural signals and images.
Abstract: Bayesian multiscale image analysis weds the powerful modeling framework of probabilistic graphs with the intuitively appealing and computationally tractable multiresolution paradigm. In addition to providing a very natural and useful framework for modeling and processing images, Bayesian multiscale analysis is often much less computationally demanding compared to classical Markov random field models. This chapter focuses on a probabilistic graph model called the multiscale hidden Markov model (MHMM), which captures the key inter-scale dependencies present in natural signals and images. A common framework for the MHMM is presented that is capable of analyzing both Gaussian and Poisson processes, and applications to Bayesian image analysis are examined.

Journal ArticleDOI
TL;DR: It is argued that the stochastic algorithm for computing medial axes is compatible with existing algorithms for image segmentation, such as region growing, snake, and region competition, and provides a new direction for Computing medial axes from texture images.
Abstract: Proposes a statistical framework for computing medial axes of 2D shapes. In the paper, the computation of medial axes is posed as a statistical inference problem not as a mathematical transform. The paper contributes to three aspects in computing medial axes. 1) Prior knowledge is adopted for axes and junctions so that axes around junctions are regularized. 2) Multiple interpretations of axes are possible, each being assigned a probability. 3) A stochastic jump-diffusion process is proposed for estimating both axes and junctions in Markov random fields. We argue that the stochastic algorithm for computing medial axes is compatible with existing algorithms for image segmentation, such as region growing, snake, and region competition. Thus, our method provides a new direction for computing medial axes from texture images. Experiments are demonstrated on both synthetic and real 2D shapes.

Proceedings ArticleDOI
07 Jun 1999
TL;DR: An algorithm for speaker's lip contour extraction in which a Bayesian approach using Markov random field modelling segments the mouth area, and an active contour with spatially varying coefficients yields an accurate lip shape with inner and outer borders.
Abstract: An algorithm for speaker's lip contour extraction is presented in this paper. A color video sequence of the speaker's face is acquired under natural lighting conditions and without any particular make-up. First, a logarithmic color transform is performed from RGB to HI (hue, intensity) color space. A Bayesian approach segments the mouth area using Markov random field modelling. Motion is combined with red hue lip information into a spatiotemporal neighbourhood. Simultaneously, a region of interest and relevant boundary points are automatically extracted. Next, an active contour using spatially varying coefficients is initialised with the results of the preprocessing stage. Finally, an accurate lip shape with inner and outer borders is obtained with good quality results in this challenging situation.

Journal ArticleDOI
TL;DR: The proposed hybrid approach has the advantage of combining the fast convergence of the MRF-based iterative algorithm with the powerful global exploration of the GA.

Journal ArticleDOI
TL;DR: This article proposes an inhomogeneous Gaussian random field as a general prior model for many image-processing applications and claims that the problem is not with the Gaussian family, but rather with the assumption of homogeneity.
Abstract: Over recent years, the use of homogeneous Gibbs prior models in image processing has become widely accepted. There has been, however, much discussion over precisely which models are most appropriate. For most applications, the simplest Gaussian model tends to oversmooth reconstructions, so it has been rejected in favor of various edge-preserving alternatives. We claim that the problem is not with the Gaussian family, but rather with the assumption of homogeneity. In this article we propose an inhomogeneous Gaussian random field as a general prior model for many image-processing applications. The simplicity of the Gaussian model allows rapid calculation, and the flexibility of the spatially varying prior parameter allows varying degrees of spatial smoothing. This approach is in the spirit of adaptive kernel density methods where only the choice of the variable window width is important. The analysis of real single-photon emission computed tomography data is used to illustrate the methods, and simu...
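A hedged sketch of the prior being advocated, in our own notation: the Gaussian form is kept, but every pairwise interaction carries its own weight,

```latex
p(\mathbf{x}) \;\propto\;
\exp\!\Big( -\tfrac{1}{2} \sum_{\langle s,t \rangle} \lambda_{st}\,(x_s - x_t)^2 \Big),
```

so the field stays Gaussian and cheap to compute with, while small lambda_st across suspected edges and large lambda_st in flat regions provide the spatially varying degrees of smoothing described above, much as a variable window width does in adaptive kernel density estimation.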

Journal ArticleDOI
TL;DR: This paper examines the connection between loss networks without controls and Markov random field theory and yields insight into the structure and computation of network equilibrium distributions, and into the nature of spatial dependence in networks.
Abstract: This paper examines the connection between loss networks without controls and Markov random field theory. The approach taken yields insight into the structure and computation of network equilibrium distributions, and into the nature of spatial dependence in networks. In addition, it provides further insight into some commonly used approximations, enables the development of more refined approximations, and permits the derivation of some asymptotically exact results.

Proceedings ArticleDOI
15 Mar 1999
TL;DR: An unsupervised algorithm for speaker's lip segmentation is presented, using Markov random field modeling to segment the mouth shape from the predominant red hue region and motion in a spatiotemporal neighborhood.
Abstract: An unsupervised algorithm for speaker's lip segmentation is presented. A color video sequence of the speaker's face is acquired, under natural lighting conditions and without any particular make-up. First, a logarithmic color transform is performed from the RGB to HI (hue, intensity) color space and sequence-dependent parameters are evaluated. Second, a statistical approach using Markov random field modeling segments the mouth shape using the predominant red hue region and motion in a spatiotemporal neighborhood. Simultaneously, a region of interest (ROI) is automatically extracted. Third, the speaker's lip shape is extracted from the final hue field with good quality results in this challenging situation.

Journal ArticleDOI
TL;DR: A new algorithm, based on a tree-structured Markov random field model, to carry out the unsupervised classification of images, that is adaptive to the local characteristics of the image and provides useful side information about the segmentation process.
Abstract: We propose a new algorithm, based on a tree-structured Markov random field (MRF) model, to carry out the unsupervised classification of images. It presents several appealing features: due to the MRF model, it takes into account spatial dependencies, yet it is computationally light because only binary MRFs are used and a progressive refinement of information takes place. Moreover, it is adaptive to the local characteristics of the image and provides useful side information about the segmentation process.

Journal ArticleDOI
TL;DR: Analytical results are developed for multispectral simultaneous autoregressive and Markov random field models which lead to practical procedures for calculating ML estimates, and the superiority of the ML method is evidenced by the experimental results provided.
Abstract: Considers the problem of estimating parameters of multispectral random field (RF) image models using maximum likelihood (ML) methods. For images with an assumed Gaussian distribution, analytical results are developed for multispectral simultaneous autoregressive (MSAR) and Markov random field (MMRF) models which lead to practical procedures for calculating ML estimates. Although previous work has provided least squares methods for parameter estimation, the superiority of the ML method is evidenced by experimental results provided in this work. The effectiveness of multispectral RF models using ML estimates in modeling color texture images is also demonstrated.

Proceedings ArticleDOI
15 Mar 1999
TL;DR: The proposed method outperforms the previous methods in the reconstruction of missing edges by adapting the model parameters based on the image characteristics determined in a large region around the damaged area.
Abstract: Loss of coded data during its transmission can affect a decoded video sequence to a large extent, making concealment of errors caused by data loss a serious issue. Previous work in spatial error concealment exploiting MRF models used a single-pixel-wide region around the erroneous area to achieve a reconstruction based on an optimality measure. This practically restricts the amount of available information that is used in a concealment procedure to a small region around the missing area. Incorporating more pixels usually means a higher-order model, and this is expensive as the complexity grows exponentially with the order of the MRF model. Using previously proposed approaches, the damaged area is reconstructed fairly well in very low frequency portions of the image. However, the reconstruction process yields blurry results with a significant loss of details in high-frequency, or edge, portions of the image. In our proposed approach, an MRF is used as the a priori image model. More available information is incorporated in the reconstruction procedure not by increasing the order of the model but instead by adaptively adjusting the model parameters. Adaptation is done based on the image characteristics determined in a large region around the damaged area. Thus, the reconstruction procedure can make use of information embedded not only in the immediate neighborhood pixels but also in a wider neighborhood without a dramatic increase in computational complexity. The proposed method outperforms the previous methods in the reconstruction of missing edges.
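One way to picture the adaptation step is the following hedged Python sketch (the directional-weight rule, the quadratic potentials, and all names such as `conceal_block` and `context_radius` are our own illustrative choices, not the paper's method): smoothing weights are estimated from a wide context region around the lost block, and the missing pixels are then reconstructed under the resulting low-order MRF.

```python
import numpy as np

def conceal_block(img, mask, context_radius=16, n_iter=200):
    """Spatial error concealment with adaptively weighted MRF smoothing
    (a hedged sketch). Directional smoothing weights are estimated from
    gradients over the known pixels of a wide context region around the
    lost block -- the adaptive step -- and the missing pixels are then
    reconstructed by Gauss-Seidel sweeps that minimize the resulting
    weighted quadratic MRF energy. The lost block is assumed not to touch
    the image border."""
    out = np.asarray(img, dtype=float).copy()
    ys, xs = np.where(mask)
    out[mask] = out[~mask].mean()                 # crude initialization

    # --- Adaptation: edge-orientation statistics from the context region. ---
    y0, y1 = max(ys.min() - context_radius, 0), ys.max() + context_radius + 1
    x0, x1 = max(xs.min() - context_radius, 0), xs.max() + context_radius + 1
    ctx, known = out[y0:y1, x0:x1], ~mask[y0:y1, x0:x1]
    gy, gx = np.gradient(ctx)
    ev, eh = np.mean(gx[known] ** 2), np.mean(gy[known] ** 2)
    w_h = eh / (ev + eh + 1e-12)   # strong vertical edges -> smooth less horizontally
    w_v = ev / (ev + eh + 1e-12)   # strong horizontal edges -> smooth less vertically

    # --- Reconstruction: weighted-average sweeps over the missing pixels. ---
    for _ in range(n_iter):
        for y, x in zip(ys, xs):
            out[y, x] = (w_h * (out[y, x - 1] + out[y, x + 1])
                         + w_v * (out[y - 1, x] + out[y + 1, x])) / (2 * (w_h + w_v))
    return out
```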

Proceedings ArticleDOI
13 Aug 1999
TL;DR: The technique found by mapping the discrete Bayesian segmentation problem to a continuous optimization framework can compete easily with the MSTAR approach in speed, segmentation quality, and statistical optimality.
Abstract: DARPA's Moving and Stationary Target Acquisition and Recognition (MSTAR) program has shown that image segmentation of Synthetic Aperture Radar (SAR) imagery into target, shadow, and background clutter regions is a powerful tool in the process of recognizing targets in open terrain. Unfortunately, SAR imagery is extremely speckled. Impulsive noise can make traditional, purely intensity-based segmentation techniques fail. Introducing prior information about the segmentation image -- its expected 'smoothness' or anisotropy -- in a statistically rational way can improve segmentations dramatically. Moreover, maintaining statistical rigor throughout the recognition process can suggest rational sensor fusion methods. To this end, we introduce two Bayesian approaches to image segmentation of MSTAR target chips based on a statistical observation model and Markov Random Field (MRF) prior models. We compare the results of these segmentation methods to those from the MSTAR program. The technique we find by mapping the discrete Bayesian segmentation problem to a continuous optimization framework can compete easily with the MSTAR approach in speed, segmentation quality, and statistical optimality. We also find this approach provides more information than a simple discrete segmentation, supplying probability measures useful for error estimation.

Journal ArticleDOI
TL;DR: An integrated approach to robust analysis of SPOT images with the aid of map information as well as a priori knowledge about the contextual information of images is presented.
Abstract: With the rapid development of remote sensing, digital image processing has become an important tool for the quantitative and statistical analysis of remotely sensed images. These images most often contain complex natural scenes. The robust interpretation of such images requires the use of different sources of information about the scenes under consideration. This paper presents an integrated approach to robust analysis of SPOT images with the aid of map information as well as a priori knowledge about the contextual information of images. Markov random field theory and the Bayes formula are used to formulate the image analysis problem as a problem of optimization of an objective function, which in turn permits the application of various existing optimization algorithms to solve the problem. To increase the robustness of the result, several techniques are proposed to effectively use map information and image contextual information. The first one is concerned with the estimation of the parameters in the objective function with the help of these two sources of information. The second one is the integration of map information in Bayes image modeling using a Markov random field. The third one is a new optimization algorithm which takes into account map information and image contextual information by means of a feedback control scheme. The last technique proposed to increase the robustness of the result is concerned with the fusion of several (intermediate) analysis results by again using map knowledge and image contextual information for the estimation of the reliability of these results.

Journal ArticleDOI
TL;DR: The proposed two-pass algorithm is much faster than any other MAP-MRF motion estimation method reported in the literature so far, as supported by experimental results from both synthetic and real-world image sequences.
Abstract: This paper presents a two-pass algorithm for estimating motion vectors from image sequences. In the proposed algorithm, the motion estimation is formulated as a problem of obtaining the maximum a posteriori in the Markov random field (MAP-MRF). An optimization method based on the mean field theory (MFT) is adopted to conduct the MAP search. The estimation of motion vectors is modeled by only two MRFs, namely, the motion vector field and the unpredictable field. Instead of utilizing the line field, a truncation function is introduced to handle the discontinuity between the motion vectors on neighboring sites. In this algorithm, a "double threshold" preprocessing pass is first employed to partition the sites into three regions, whereby the ensuing MFT-based pass for each MRF is conducted on one or two of the three regions. With this algorithm, no significant difference exists between the block-based and pixel-based MAP searches any more. Consequently, a good compromise between precision and efficiency can be struck with ease. To render our algorithm more resilient against noise, the mean absolute difference instead of mean square error is selected as the measure of difference, which is more reliable according to the knowledge of robust statistics. This is supported by our experimental results from both synthetic and real-world image sequences. The proposed two-pass algorithm is much faster than any other MAP-MRF motion estimation method reported in the literature so far.
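Read in our own (hedged) notation, the energy being minimized couples a mean-absolute-difference data term with a truncated smoothness penalty that stands in for the usual line field:

```latex
E(\mathbf{d}) \;=\; \sum_{s} \big| I_t(s) - I_{t-1}\!\big(s - \mathbf{d}_s\big) \big|
\;+\; \lambda \sum_{\langle s,t \rangle} \min\!\big( \lVert \mathbf{d}_s - \mathbf{d}_t \rVert,\; T \big),
```

where the truncation threshold T stops neighboring sites from penalizing each other across motion discontinuities, and the mean-field pass then searches for the minimizing field d only within the regions selected by the double-threshold preprocessing. The exact potentials used in the paper may differ.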

Journal ArticleDOI
TL;DR: This study investigates whether combining several different image classifications together with an a priori image model of the expected spatial distribution of the classes can produce a better classification.
Abstract: This study investigates whether combining several different image classifications together with an a priori image model of the expected spatial distribution of the classes can produce a better classification. A maximum likelihood classifier and the cascade-correlation neural network architecture are used to generate various classification maps for satellite image data by varying the input features and network parameter settings. A likelihood for each pixel's class label is derived from the source classifications and combined with a Markov random field spatial image model to produce the final image classification. The method is applied to a ground cover type study based on Landsat Thematic Mapper (TM) imagery. It was found that a carefully selected combination could significantly improve individual classification results.
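A hedged sketch of the kind of combination described (the names, the input format, and the ICM optimizer are illustrative; the paper may use a different optimization scheme): per-pixel class log-likelihoods derived from the source classifications are regularized by a Potts-style spatial prior, and labels are updated by iterated conditional modes.

```python
import numpy as np

def combine_with_mrf(loglik, beta=1.0, n_iter=10):
    """Fuse per-pixel class log-likelihoods with a Potts spatial prior via ICM.

    loglik : array (H, W, K) -- combined log-likelihoods from the source
             classifications for each of K classes (illustrative input format).
    beta   : strength of the spatial prior (larger -> smoother label map).
    """
    labels = np.argmax(loglik, axis=2)            # start from the pointwise MAP
    H, W, K = loglik.shape
    for _ in range(n_iter):
        for y in range(H):
            for x in range(W):
                # Count how many 4-neighbors currently carry each label.
                agree = np.zeros(K)
                for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < H and 0 <= nx < W:
                        agree[labels[ny, nx]] += 1
                # ICM update: data evidence plus Potts neighborhood agreement.
                labels[y, x] = np.argmax(loglik[y, x] + beta * agree)
    return labels

# Usage sketch: `logliks` could hold the per-class log-likelihoods derived from
# the maximum likelihood classifier and the neural network outputs (hypothetical).
# fused = combine_with_mrf(logliks, beta=1.5)
```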

01 Jan 1999
TL;DR: In this article, a non-causal, nonparametric, multiscale, Markov random field (MRF) model is proposed for texture segmentation and recognition.
Abstract: The underlying aim of this research is to investigate the mathematical descriptions of homogeneous textures in digital images for the purpose of segmentation and recognition. The research covers the problem of testing these mathematical descriptions by using them to generate synthetic realisations of the homogeneous texture for subjective and analytical comparisons with the source texture from which they were derived. The application of this research is in analysing satellite or airborne images of the Earth's surface. In particular, Synthetic Aperture Radar (SAR) images often exhibit regions of homogeneous texture, which, if segmented, could facilitate terrain classification. In this thesis we present noncausal, nonparametric, multiscale, Markov random field (MRF) models for recognising and synthesising texture. The models have the ability to capture the characteristics of, and to synthesise, a wide variety of textures, varying from the highly structured to the stochastic. For texture synthesis, we introduce our own novel multiscale approach incorporating a new concept of local annealing. This allows us to use large neighbourhood systems to model complex natural textures with high order statistical characteristics. The new multiscale texture synthesis algorithm also produces synthetic textures with few, if any, phase discontinuities. The power of our modelling technique is evident in that only a small source image is required to synthesise representative examples of the source texture, even when the texture contains long-range characteristics. We also show how the high-dimensional model of the texture may be modelled with lower dimensional statistics without compromising the integrity of the representation. We then show how these models -- which are able to capture most of the unique characteristics of a texture -- can be used for the "open-ended" problem of recognising textures embedded in a scene containing previously unseen textures. Whilst this technique was developed for the practical application of recognising different terrain types from Synthetic Aperture Radar (SAR) images, it has applications in other image processing tasks requiring texture recognition.

Journal ArticleDOI
TL;DR: In this paper, the effects of spatial regularity and locality assumptions in the extended Kalman filter are examined for oceanic data assimilation problems, and a Markov random field (MRF) is used to impose locality through spatial regression.
Abstract: Effects of spatial regularity and locality assumptions in the extended Kalman filter are examined for oceanic data assimilation problems. Biorthogonal wavelet bases are used to implement spatial regularity through multiscale approximations, while a Markov random field (MRF) is used to impose locality through spatial regression. Both methods are shown to approximate the optimal Kalman filter estimates closely, although the stability of the estimates can be dependent on the choice of basis functions in the wavelet case. The observed filter performance is nearly constant over a wide range of values for the scalar weights (uncertainty variances) given to the model and data examined here. The MRF-based method, with its inhomogeneous and anisotropic covariance parameterization, has been shown to be particularly effective and stable in assimilation of simulated TOPEX/POSEIDON altimetry data into a reduced-gravity, shallow-water equation model.