
Showing papers on "Contextual image classification published in 1984"


Journal ArticleDOI
TL;DR: Contextual statistical decision rules for classification of lattice-structured data such as pixels in multispectral imagery are developed and their recursive implementation is shown to have a strong resemblance to relaxation algorithms.

108 citations


Journal ArticleDOI
TL;DR: A simplified method of calculating the HTC discriminant functions from large-dimensional images by a small computer is described, useful when the within-class variation can be approximated by a covariance matrix of low rank.
Abstract: The Hotelling trace criterion (HTC) is useful for feature extraction so that multiclasses of statistical images can be separated by maximizing the between-class differences while minimizing the within-class variations. Optical implementation of the HTC has been successful by utilizing computer-generated spatial filters and a coded-phase processor. A simplified method of calculating the HTC discriminant functions from large-dimensional images by a small computer is also described. This method is useful when the within-class variation can be approximated by a covariance matrix of low rank.

34 citations
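
The HTC abstract above describes maximizing between-class scatter while minimizing within-class scatter; the standard formulation maximizes the Hotelling trace tr(S_w^{-1} S_b), whose optimizing directions are the leading generalized eigenvectors. A minimal numpy sketch of that computation (the function name and the small regularization term are my own, not from the paper; the paper's low-rank covariance approximation is not reproduced here):

```python
import numpy as np

def htc_discriminants(class_samples, n_components=2):
    """Hotelling trace criterion feature extraction (a sketch).

    class_samples: list of (n_i, d) arrays, one per class.
    Returns the top generalized eigenvectors of S_w^{-1} S_b,
    which maximize the Hotelling trace tr(S_w^{-1} S_b).
    """
    all_x = np.vstack(class_samples)
    mean_all = all_x.mean(axis=0)
    d = all_x.shape[1]
    s_w = np.zeros((d, d))   # within-class scatter
    s_b = np.zeros((d, d))   # between-class scatter
    for x in class_samples:
        mu = x.mean(axis=0)
        xc = x - mu
        s_w += xc.T @ xc
        diff = (mu - mean_all)[:, None]
        s_b += x.shape[0] * (diff @ diff.T)
    # Small regularizer in case S_w is rank-deficient (the paper's
    # low-rank approximation addresses the same practical issue).
    evals, evecs = np.linalg.eig(np.linalg.solve(s_w + 1e-9 * np.eye(d), s_b))
    order = np.argsort(-evals.real)
    return evecs.real[:, order[:n_components]]
```

For two well-separated classes the first discriminant aligns with the direction separating the class means, weighted by the inverse within-class scatter.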


Proceedings ArticleDOI
09 Jan 1984
TL;DR: A global approach that exploits the contextual information in a scene, which current methods discard, offers the most promise in overcoming the shortcomings of existing object classification methods.

Abstract: Existing strategies for the identification of objects in a scene are based upon classical pattern recognition approaches. The basic concept centers around the extraction of a set of statistical features for each object detected in a scene, followed by the application of a classifier which attempts to derive the decision boundaries that separate these objects into classes. Because statistical features are quite sensitive to noise, this approach has led to problems: under less-than-ideal conditions, classifiers are unable to find an accurate separation of the feature sets. A global approach that exploits the contextual information in a scene, which current methods discard, offers the most promise in overcoming the shortcomings of existing object classification methods.

10 citations


Proceedings ArticleDOI
04 Dec 1984
TL;DR: A new approach to the problem of classifying surface materials in satellite multi-spectral imagery is described and demonstrated and an example of its use in classifying Landsat Thematic Mapper imagery is presented.
Abstract: A new approach to the problem of classifying surface materials in satellite multi-spectral imagery is described and demonstrated in this paper. Surface material classes are defined heuristically using rules which describe the typical appearance of the material under specified conditions in terms of relative image measures. A knowledge-based approach allows expert knowledge of the domain to be used directly to develop classification rules. An expert system is currently being developed in the Zetalisp/Flavors programming environment on the Symbolics 3600 Lisp Machine. An example of its use in classifying Landsat Thematic Mapper imagery is presented.

9 citations
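
The abstract above describes heuristic rules on relative image measures that encode expert knowledge of how surface materials appear. A toy Python analogue of such a rule base (the paper's system was written in Zetalisp/Flavors; the band names, thresholds, and the NDVI measure below are illustrative assumptions, not the paper's actual rules):

```python
def classify_pixel(bands):
    """Toy rule-based surface-material classifier.

    bands: dict of per-pixel reflectance values.  The rules below are
    hypothetical examples of "typical appearance under specified
    conditions in terms of relative image measures".
    """
    nir, red, green = bands["nir"], bands["red"], bands["green"]
    # Relative measure rather than absolute brightness:
    ndvi = (nir - red) / (nir + red + 1e-9)
    if ndvi > 0.4:
        return "vegetation"
    if nir < 0.1 and green < 0.1:
        return "water"
    return "bare soil"
```

The appeal of this style, as the abstract notes, is that domain experts can read and amend the rules directly instead of retraining a statistical classifier.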


Proceedings ArticleDOI
04 Dec 1984
TL;DR: The use of gray scale together with the edge information present in the image is considered to obtain more precise segmentation of the target than obtained by using gray scale or edge information alone.
Abstract: In the automatic recognition of tactical targets in FLIR images, it is desired to obtain an accurate and precise representation of the boundary of the targets. This is very important because the features used in the classification of a target are normally based on the shape and gray scale of the segmented target, so the performance of a statistical or structural classifier critically depends on the results of segmentation. Generally, only the gray scale of the image is used to extract the target from the background, and the segmentation thus obtained normally depends upon several parameters of the technique used. Better segmentation is possible by using other sources of information present in the image, such as contextual cues, temporal cues, gradient, and a priori information. In this paper we consider specifically the use of gray scale together with the edge information present in the image to obtain a more precise segmentation of the target than that obtained by using gray scale or edge information alone. A model of FLIR images based on gray scale and edge information is incorporated in a gradient relaxation technique which explicitly maximizes a criterion function based on the inconsistency and ambiguity of the classification of pixels with respect to their neighbors. Four variations of the basic relaxation technique are considered which provide automatic selection of a threshold to segment FLIR images. A comparison of these methods is discussed.

2 citations
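
The abstract above combines gray-scale and edge information to select a segmentation threshold automatically. A simplified sketch of that idea, assuming numpy: score each candidate gray-level threshold by the mean gradient magnitude along the resulting segment boundary, so that a good threshold cuts along strong edges. This is an illustration of the gray-scale-plus-edge principle, not the paper's gradient-relaxation criterion:

```python
import numpy as np

def edge_guided_threshold(img):
    """Pick a gray-level threshold using edge information (a sketch).

    Scores each candidate threshold by the mean gradient magnitude at
    pixels on the resulting segment boundary; the boundary of a good
    segmentation should coincide with strong edges.
    """
    gy, gx = np.gradient(img.astype(float))
    grad = np.hypot(gx, gy)
    best_t, best_score = None, -1.0
    for t in range(int(img.min()) + 1, int(img.max()) + 1):
        mask = img >= t
        # Boundary pixels: mask value differs from a 1-pixel shift.
        boundary = np.zeros_like(mask)
        boundary[:-1, :] |= mask[:-1, :] != mask[1:, :]
        boundary[:, :-1] |= mask[:, :-1] != mask[:, 1:]
        if boundary.any():
            score = grad[boundary].mean()
            if score > best_score:
                best_t, best_score = t, score
    return best_t
```

On a synthetic bright target against a dark background, any threshold between the two gray levels recovers the target exactly.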


Journal ArticleDOI
TL;DR: This correspondence considers the problem of matching image data to a large library of objects when the image is distorted and demonstrates that, for classification purposes, distortions can be characterized by a small number of parameters.
Abstract: This correspondence considers the problem of matching image data to a large library of objects when the image is distorted. Two types of distortions are considered: blur-type, in which a transfer function is applied to Fourier components of the image, and scale-type, in which each Fourier component is mapped into another. The objects of the library are assumed to be normally distributed in an appropriate feature space. Approximate expressions are developed for classification error rates as a function of noise. The error rates they predict are compared with those from classification of artificial data, generated by a Gaussian random number generator, and with error rates from classification of actual data. It is demonstrated that, for classification purposes, distortions can be characterized by a small number of parameters.

2 citations
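
The abstract above validates predicted error rates against artificial data from a Gaussian random number generator. A minimal numpy sketch of that kind of Monte Carlo check, assuming normally distributed classes and nearest-mean classification (the function and its interface are my own illustration, not the paper's procedure):

```python
import numpy as np

def mc_error_rate(means, noise_sigma, n_trials=2000, seed=0):
    """Monte Carlo classification error rate for Gaussian classes.

    means: (k, d) array of class means in feature space.  Each trial
    draws a true class, adds isotropic Gaussian noise, and classifies
    by nearest class mean; the error rate grows with noise_sigma.
    """
    rng = np.random.default_rng(seed)
    means = np.asarray(means, dtype=float)
    k = means.shape[0]
    errors = 0
    for _ in range(n_trials):
        true = rng.integers(k)
        x = means[true] + rng.normal(0.0, noise_sigma, means.shape[1])
        pred = np.argmin(np.linalg.norm(means - x, axis=1))
        errors += pred != true
    return errors / n_trials
```

Sweeping noise_sigma reproduces the qualitative behavior the correspondence studies: error rates as a function of noise for a fixed library of class means.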


01 Jan 1984
TL;DR: In this article, a self-verifying, grid-sampled training point approach is proposed as a more statistically valid feature extraction technique, which replaced the full image scene with smaller statistical vectors which preserved the necessary characteristics for classification.
Abstract: Rectangular training fields of homogeneous spectroreflectance are commonly used in supervised pattern recognition efforts. Trial image classification with manually selected training sets gives irregular and misleading results due to statistical bias. A self-verifying, grid-sampled training point approach is proposed as a more statistically valid feature extraction technique. A systematic pixel sampling network of every ninth row and ninth column efficiently replaced the full image scene with smaller statistical vectors which preserved the necessary characteristics for classification. The composite second- and third-order average classification accuracy of 50.1 percent for 331,776 pixels in the full image substantially agreed with the 51 percent value predicted by the grid-sampled, 4,100-point training set.

2 citations
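
The sampling scheme above (every ninth row and ninth column) is straightforward to sketch with numpy slicing. Note that the full image of 331,776 pixels is consistent with a 576 x 576 scene, and a 9-pixel grid over it yields 64 x 64 = 4,096 sample points, in line with the roughly 4,100-point training set reported (the function name and the multiband layout are my own assumptions):

```python
import numpy as np

def grid_sample_pixels(scene, step=9):
    """Systematic grid sampling of a (rows, cols, bands) scene.

    Keeps every `step`-th row and column as training pixels,
    replacing the full scene with a much smaller set of spectral
    vectors (a sketch of the grid-sampled training-point approach).
    """
    sampled = scene[::step, ::step, :]
    return sampled.reshape(-1, scene.shape[2])
```

Because the grid is systematic rather than hand-picked, it avoids the statistical bias of manually selected rectangular training fields that the abstract describes.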