
Showing papers on "Contextual image classification published in 1982"


Journal ArticleDOI
TL;DR: This work presents a new method of calculating the F–K basis functions for large dimensional imagery by using a small digital computer, when the intraclass variation can be approximated by correlation matrices of low rank.
Abstract: The Fukunaga–Koontz (F–K) transform is a linear transformation that performs image-feature extraction for a two-class image classification problem. It has the property that the most important basis functions for representing one class of image data (in a least-squares sense) are also the least important for representing a second image class. We present a new method of calculating the F–K basis functions for large dimensional imagery by using a small digital computer, when the intraclass variation can be approximated by correlation matrices of low rank. Having calculated the F–K basis functions, we use a coherent optical processor to obtain the coefficients of the F–K transform in parallel. Finally, these coefficients are detected electronically, and a classification is performed by the small digital computer.
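The paper's contribution is the optical implementation, but the linear algebra behind the F–K transform is compact. A minimal numerical sketch is given below; the function name, toy data, and rank-deficiency guard are ours, and the block only illustrates the textbook construction: whiten the sum of the two class correlation matrices, then eigendecompose one class in the whitened space, so the basis vectors that best represent one class are the least important for the other.

```python
import numpy as np

def fk_basis(R1, R2):
    """Fukunaga-Koontz basis from two class correlation matrices.

    Returns basis vectors (columns, in the original image space) and the
    eigenvalues with respect to class 1; eigenvalues for class 2 are one
    minus these, so vectors that best represent class 1 (eigenvalue near 1)
    are the least important for class 2, and vice versa.
    """
    # Whiten the sum R1 + R2 so that it becomes the identity matrix.
    eigval, eigvec = np.linalg.eigh(R1 + R2)
    keep = eigval > 1e-10                         # guard against rank deficiency
    P = eigvec[:, keep] / np.sqrt(eigval[keep])
    # In the whitened space S1 + S2 = I, so S1 and S2 share eigenvectors.
    S1 = P.T @ R1 @ P
    lam1, V = np.linalg.eigh(S1)
    return P @ V, lam1

# Toy usage: two classes of 16-pixel "images" with low-rank intraclass variation.
rng = np.random.default_rng(0)
X1 = rng.normal(size=(100, 16)) * np.r_[[3.0] * 4, [0.1] * 12]
X2 = rng.normal(size=(100, 16)) * np.r_[[0.1] * 12, [3.0] * 4]
B, lam = fk_basis(X1.T @ X1 / 100, X2.T @ X2 / 100)
# The most discriminating coefficients come from eigenvalues far from 0.5.
best = np.argsort(np.abs(lam - 0.5))[::-1][:4]
coeffs = X1[0] @ B[:, best]
```

In the low-rank setting described in the abstract, R1 and R2 would be built from a small number of training images, which keeps the eigenproblem small enough for a modest computer.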

41 citations


Journal ArticleDOI
TL;DR: The LSLMT is useful for performing a transform from large-dimensional observation or feature space to small-dimensional decision space for separating multiple image classes by maximizing the interclass differences while minimizing the intraclass variations.
Abstract: Utilizing the phase-coded optical processor, the least-squares linear mapping technique (LSLMT) has been optically implemented to classify large-dimensional images. The LSLMT is useful for performing a transform from a large-dimensional observation or feature space to a small-dimensional decision space, separating multiple image classes by maximizing the interclass differences while minimizing the intraclass variations. As an example, a classifier designed for handwritten letters was studied. The performance of the LSLMT was also compared with that of a matched filter and an average filter.
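The abstract does not spell out the digital form of the LSLMT. Assuming the usual least-squares formulation (a linear map fitted to class-indicator targets, so the decision space has one axis per class), a sketch might look like the following; the function names and toy data are ours, not the paper's.

```python
import numpy as np

def lslmt_fit(X, labels, n_classes):
    """Least-squares linear mapping from feature space to decision space.

    X      : (n_samples, n_features) flattened training images
    labels : (n_samples,) integer class labels
    Returns W such that X @ W approximates one-hot class targets, i.e. a
    least-squares projection onto a small decision space.
    """
    T = np.eye(n_classes)[labels]                 # one-hot decision-space targets
    W, *_ = np.linalg.lstsq(X, T, rcond=None)
    return W

def lslmt_classify(W, x):
    """Assign the class whose target vector is closest to the mapped sample."""
    return int(np.argmax(x @ W))

# Toy usage with random "letter" images from 3 classes.
rng = np.random.default_rng(1)
means = rng.normal(size=(3, 64))
X = np.vstack([m + 0.3 * rng.normal(size=(50, 64)) for m in means])
y = np.repeat(np.arange(3), 50)
W = lslmt_fit(X, y, 3)
print(lslmt_classify(W, means[2]))                # should print 2 for this toy data
```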

28 citations


Proceedings ArticleDOI
Jim Hinderer
08 Mar 1982
TL;DR: A computer model for generating 3-D images of small vehicles is described; it decomposes vehicle components into three-point facets, which are the basis for generating an image.
Abstract: This paper describes a computer model for generating 3-D images of small vehicles. The paper shows examples and gives throughput, memory, and accuracy for implementation on a VAX computer. Each vehicle is described in terms of components such as wheels, chassis, and turret. The model decomposes these components into three-point facets which are the basis for generating an image. Each point of a facet can be assigned a specific temperature, emissivity, and reflectivity. Range contour imagery from the model is useful in developing identification and classification algorithms for laser radars.
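A three-point facet with per-vertex radiometric attributes maps naturally onto a small data structure. The sketch below is a rough illustration only, not the authors' VAX implementation; the class and field names are ours, and the range computation is simplified to the facet centroid.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class FacetPoint:
    xyz: np.ndarray        # 3-D position of the vertex (metres)
    temperature: float     # assigned temperature (kelvin)
    emissivity: float
    reflectivity: float

@dataclass
class Facet:
    """A three-point facet; components such as wheels, chassis, and turret
    would be decomposed into lists of these."""
    points: tuple

    def centroid(self) -> np.ndarray:
        return np.mean([p.xyz for p in self.points], axis=0)

    def range_from(self, sensor_xyz) -> float:
        """Range from the sensor to the facet centroid, the quantity a
        laser-radar range image would record for this facet."""
        return float(np.linalg.norm(self.centroid() - np.asarray(sensor_xyz)))

# Toy usage: one facet of a wheel, viewed from 100 m along the z axis.
def point(x, y, z):
    return FacetPoint(np.array([x, y, z], dtype=float), 300.0, 0.9, 0.1)

wheel_facet = Facet((point(0, 0, 0), point(0.3, 0, 0), point(0, 0.3, 0)))
print(round(wheel_facet.range_from([0, 0, 100]), 2))   # ~100.0
```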

13 citations


Proceedings ArticleDOI
King-Sun Fu
22 Nov 1982
TL;DR: Three major approaches to pattern recognition, (1) template matching, (2) decision-theoretic approach, and (3) structural and syntactic approach, are briefly introduced and a more general method for automatic visual inspection of IC chips is proposed.
Abstract: Three major approaches to pattern recognition, (1) template matching, (2) the decision-theoretic approach, and (3) the structural and syntactic approach, are briefly introduced. The application of these approaches to the automatic visual inspection of manufactured products is then reviewed. A more general method for automatic visual inspection of IC chips is then proposed. Several practical examples are included for illustration.

3 citations



01 Jan 1982
TL;DR: In this article, the authors discuss the implementation of change-detection and masking techniques in the updating of Landsat-derived land-cover maps; the resulting mask of possible change areas served to limit analysis of the update image and reduce comparison errors in unchanged areas.
Abstract: The California Integrated Remote Sensing System's San Bernardino County Project was devised to study the utilization of a data base at a number of jurisdictional levels. The present paper discusses the implementation of change-detection and masking techniques in the updating of Landsat-derived land-cover maps. A baseline land-cover classification was first created from a 1976 image; the adjusted 1976 image was then compared with a 1979 scene by the techniques of (1) multidate image classification, (2) difference-image distribution-tails thresholding, (3) difference-image classification, and (4) multidimensional chi-square analysis of a difference image. The union of the results of methods 1, 3, and 4 was used to create a mask of possible change areas between 1976 and 1979, which served to limit analysis of the update image and reduce comparison errors in unchanged areas. Spatial smoothing of the change-detection products and the combination of results from difference-based change-detection algorithms are also shown to improve Landsat change-detection accuracies.
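Two of the four change-detection methods are straightforward to sketch numerically. Assuming co-registered multiband arrays for the two dates (the variable names, tail fraction, and chi-square confidence level are ours, not values from the paper), the block below shows difference-image tails thresholding and a multidimensional chi-square test on the band-wise difference vectors; masks like these would be combined by union to limit the update analysis.

```python
import numpy as np
from scipy import stats

def tail_threshold_change(img_a, img_b, tail_frac=0.025):
    """Difference-image distribution-tails thresholding: flag pixels whose
    band-summed difference falls in either tail of the distribution."""
    diff = img_b.astype(float) - img_a.astype(float)
    d = diff.sum(axis=-1)                              # collapse spectral bands
    lo, hi = np.quantile(d, [tail_frac, 1.0 - tail_frac])
    return (d < lo) | (d > hi)

def chi_square_change(img_a, img_b, confidence=0.99):
    """Multidimensional chi-square analysis of the difference image: flag
    pixels whose Mahalanobis distance from the mean difference exceeds the
    chi-square threshold for the number of bands."""
    n_bands = img_a.shape[-1]
    diff = (img_b.astype(float) - img_a.astype(float)).reshape(-1, n_bands)
    centered = diff - diff.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(diff, rowvar=False))
    d2 = np.einsum('ij,jk,ik->i', centered, cov_inv, centered)
    threshold = stats.chi2.ppf(confidence, df=n_bands)
    return (d2 > threshold).reshape(img_a.shape[:2])

# Hypothetical usage with two co-registered MSS scenes:
# change_mask = tail_threshold_change(mss_1976, mss_1979) | chi_square_change(mss_1976, mss_1979)
```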

2 citations


01 Jan 1982
TL;DR: In this article, the use of data layers from a geographic information system (GIS) as an integral part of the Landsat image classification process was investigated; through a hierarchical modeling technique, elevation, aspect, land-use, vegetation, and growth-management data from the project's data base were used to guide class-labeling decisions in a 1976 Landsat MSS land-cover classification.
Abstract: As part of the California Integrated Remote Sensing System's (CIRSS) San Bernardino County Project, the use of data layers from a geographic information system (GIS) as an integral part of the Landsat image classification process was investigated. Through a hierarchical modeling technique, elevation, aspect, land use, vegetation, and growth management data from the project's data base were used to guide class labeling decisions in a 1976 Landsat MSS land cover classification. A similar model, incorporating 1976-1979 Landsat spectral change data in addition to other data base elements, was used in the classification of a 1979 Landsat image. The resultant Landsat products were integrated as additional layers into the data base for use in growth management, fire hazard, and hydrological modeling.
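The hierarchical modeling step is essentially a set of per-pixel decision rules in which ancillary GIS layers override or split ambiguous spectral classes. The sketch below is purely illustrative: the class codes, thresholds, and rules are invented, not the project's actual model, but it shows the kind of logic such a hierarchy encodes.

```python
import numpy as np

def refine_labels(spectral_class, elevation, aspect, land_use):
    """Hierarchical class-labeling sketch: start from the spectral class and
    let GIS layers (elevation, aspect, mapped land use) resolve ambiguous
    labels. All class codes and rules here are invented for illustration."""
    label = spectral_class.copy()

    # Rule 1: an ambiguous brush/forest spectral class is split by elevation.
    mixed = spectral_class == 3
    label[mixed & (elevation > 1500)] = 30       # forest
    label[mixed & (elevation <= 1500)] = 31      # brush

    # Rule 2: grassland-looking pixels on mapped agricultural parcels.
    label[(spectral_class == 5) & (land_use == 7)] = 50   # agriculture

    # Rule 3: shaded north-facing slopes keep the denser vegetation label.
    label[mixed & (aspect == 0) & (elevation > 1200)] = 30

    return label
```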

2 citations


01 Jan 1982
TL;DR: In this article, the conditions under which a hybrid of clustering and canonical analysis for image classification produces optimum results were analyzed; the importance of the number of input clusters and the effect of other parameters of the clustering algorithm (ISOCLS) were examined.
Abstract: The conditions under which a hybrid of clustering and canonical analysis for image classification produces optimum results were analyzed. The approach involves generation of classes by clustering for input to canonical analysis. The importance of the number of input clusters and the effect of other parameters of the clustering algorithm (ISOCLS) were examined. The approach derives its final result by clustering the canonically transformed data; therefore, the importance of the number of clusters requested in this final stage was also examined. The effects of these variables were studied in terms of the average separability (as measured by transformed divergence) of the final clusters, the transformation matrices resulting from different numbers of input classes, and the accuracy of the final classifications. The research was performed with Landsat MSS data over the Hazleton/Berwick, Pennsylvania, area. Final classifications were compared pixel by pixel with an existing geographic information system to provide an indication of their accuracy.
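Transformed divergence, the separability measure cited in the abstract, has a standard closed form: the divergence between two Gaussian clusters, rescaled to saturate at 2000. A sketch of that computation is below; the implementation details are ours, not the authors' code.

```python
import numpy as np

def transformed_divergence(mu1, cov1, mu2, cov2):
    """Transformed divergence between two clusters modeled as Gaussians.
    Values approach 2000 for fully separable clusters and 0 for identical ones."""
    inv1, inv2 = np.linalg.inv(cov1), np.linalg.inv(cov2)
    dmu = (np.asarray(mu1) - np.asarray(mu2)).reshape(-1, 1)
    divergence = 0.5 * np.trace((cov1 - cov2) @ (inv2 - inv1)) \
               + 0.5 * np.trace((inv1 + inv2) @ (dmu @ dmu.T))
    return 2000.0 * (1.0 - np.exp(-divergence / 8.0))

# Average separability of the final clusters: mean of transformed_divergence
# over all pairs of cluster (mean, covariance) estimates.
```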

1 citation


Patent
23 Sep 1982
TL;DR: In this patent, a contour sensor that performs a two-dimensional correlation with a narrow rotating gap is proposed; contour detection in the presence of various image defects and a strongly structured background is demonstrated in applications such as the statistical analysis of metallurgical structures.
Abstract: The information which is relevant for image analysis and image classification is essentially contained in the image contours. Conventional methods scan the entire image using a television camera in a fixed raster and transmit the entire image information to a computer for processing. The contour sensor described in the application carries out a two-dimensional correlation with a narrow gap and calculates the tangential component of the correlation gradient (Figure 1) during rotation of the gap. The gap detects very many adjacent pixels for every possible contour direction, and thus largely suppresses interference points in the image and the local noise caused by inhomogeneities in optoelectronic transducers. Contour detection in the presence of various image defects and a strongly structured background has been demonstrated in applications including the statistical analysis of metallurgical structures, the control of manufacturing robots in tracing object edges, automatic weld examination, and the evaluation of aerial photographs.
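A purely digital counterpart of the rotating-gap correlation can clarify the idea: correlate the local image patch with a narrow slit kernel at several orientations and track the response as the slit rotates; the orientation of maximum response gives the contour direction, while an isolated noise point produces only a weak, flat response because the long slit averages over many pixels. The sketch below is that digital counterpart, not the patented sensor; the names and parameters are ours.

```python
import numpy as np
from scipy.ndimage import convolve, rotate

def slit_responses(patch, n_angles=18, slit_len=9):
    """Correlate an image patch with a narrow slit ("gap") kernel at several
    orientations and return the peak response per angle. A sharply peaked
    response curve indicates a contour along the peak orientation; a flat
    curve indicates no contour (or an isolated interference point)."""
    slit = np.zeros((slit_len, slit_len))
    slit[slit_len // 2, :] = 1.0 / slit_len            # horizontal narrow gap
    angles = np.linspace(0.0, 180.0, n_angles, endpoint=False)
    responses = [
        convolve(patch.astype(float), rotate(slit, a, reshape=False, order=1),
                 mode='nearest').max()
        for a in angles
    ]
    return angles, np.array(responses)
```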

1 citation


Proceedings ArticleDOI
08 Mar 1982
TL;DR: Frequently in image processing an unknown image is identified by matching features from the image to features of known images; the process of exhaustively locating the known image with the minimum feature distance, and therefore the best match, is called nearest-neighbor matching.
Abstract: Frequently in image processing an unknown image is identified by matching features from the image to features of known images. One method of doing this is to compute a distance measure between the features of the two images. This measure indicates the dissimilarity between the images. This distance measure is computed for each of a large set of different known images. The process of exhaustively locating the known image with the minimum distance, and therefore the best match, is called nearest-neighbor matching.

For the problem of estimating target orientation, the set of known images would be constructed by obtaining images of the object for various aspect angles and computing the features for each image. These features would then be stored in a data base as a function of aspect angle.

This approach works well for situations where there is adequate storage and sufficient time to search the data base. It fails, however, to take advantage of the continuous nature of the data base, that is, that features vary continuously between adjacent aspect angles. Below, a method is discussed which uses this property to greatly simplify the approach.

Determining orientation

The algorithm presented refers to moment features which are based on geometrical moments of the object silhouette. The discussion does not depend in any way on these features; they are used only because they are available. The only requirements on features are that they vary continuously as a function of a continuously changing aspect angle, and that two images do not have identical features.

Another convenient, yet not required, property of moments is their invariance to rotation of the image. Thus, if the desired orientation is described in terms of three properly chosen Euler angles, the indicated features are independent of one of them. This missing angle, however, may be calculated easily from the moments themselves and knowledge of the other two Euler angles. This is done by calculating an offset angle θe between the perceived tilt of the object θm and the defined axis direction θ. This is shown in Figure 1.
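Two pieces of this discussion are easy to make concrete: the perceived tilt read off the silhouette's second central moments, and the exhaustive nearest-neighbor search over the aspect-angle data base that the paper sets out to simplify. The sketch below uses our own notation (the paper says only that the features are geometrical moments of the silhouette); the standard orientation formula theta = 0.5 * atan2(2*mu11, mu20 - mu02) stands in for the perceived tilt θm.

```python
import numpy as np

def perceived_tilt(silhouette):
    """Perceived in-plane tilt of a binary silhouette from its second central
    moments, illustrating how the 'missing' Euler angle can be read off the
    moments: theta_m = 0.5 * atan2(2*mu11, mu20 - mu02)."""
    ys, xs = np.nonzero(silhouette)
    x0, y0 = xs.mean(), ys.mean()
    mu20 = np.mean((xs - x0) ** 2)
    mu02 = np.mean((ys - y0) ** 2)
    mu11 = np.mean((xs - x0) * (ys - y0))
    return 0.5 * np.arctan2(2.0 * mu11, mu20 - mu02)

def nearest_aspect(features, database):
    """Exhaustive nearest-neighbor search over (aspect_angle, feature_vector)
    pairs; the continuity property discussed above would let a local search
    around the previous estimate replace this full scan."""
    angles, feats = zip(*database)
    distances = np.linalg.norm(np.asarray(feats) - np.asarray(features), axis=1)
    return angles[int(np.argmin(distances))]
```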