
Showing papers on "Contextual image classification published in 1981"


01 Jan 1981
TL;DR: In this article, an investigation is conducted to evaluate the effect of spatial resolution on the ability to classify land cover types with per-pixel digital image classification techniques. Changes in scene noise and the percentage of boundary pixels are also documented as a function of spatial resolution, to improve the understanding of the interrelationship between classification accuracy and spatial resolution.
Abstract: The benefits obtained from sensor systems for monitoring earth resources will depend on the application and interpretation methods used. A frequently used analysis method is supervised per-pixel multispectral classification with a typical application being land cover classification. An investigation is conducted to evaluate the effect of spatial resolution on the ability to classify land cover types with per-pixel digital image classification techniques. Attention is also given to the documentation of changes in scene noise and the percentage of boundary pixels as a function of spatial resolution, in order to improve the understanding of the interrelationship between classification accuracy and spatial resolution. It is found that scene noise varies considerably between land cover categories. Changes in scene noise with coarsening resolution occur at different rates for different categories.
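The abstract names supervised per-pixel multispectral classification but not the decision rule. As a minimal sketch of the kind of Gaussian maximum-likelihood per-pixel classifier typically used in such land cover studies (the specific rule used in the paper is an assumption), consider:

```python
import numpy as np

def train_gaussian_ml(pixels, labels):
    """Estimate per-class mean vectors and covariance matrices from
    labeled training pixels; `pixels` is (n_pixels, n_bands)."""
    return {c: (pixels[labels == c].mean(axis=0),
                np.cov(pixels[labels == c], rowvar=False))
            for c in np.unique(labels)}

def classify_pixel(x, stats):
    """Assign the class that maximizes the Gaussian log-likelihood."""
    def loglik(mu, cov):
        d = x - mu
        _, logdet = np.linalg.slogdet(cov)
        return -0.5 * (logdet + d @ np.linalg.solve(cov, d))
    return max(stats, key=lambda c: loglik(*stats[c]))
```

Coarser resolution changes both the per-class statistics and the fraction of mixed boundary pixels, which is why the study tracks scene noise per category.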

140 citations


Journal ArticleDOI
TL;DR: Experimental results based on both simulated and real multispectral remote sensing data demonstrate the effectiveness of the contextual classifier.

80 citations


Journal ArticleDOI
TL;DR: The ultimate use of information extracted from remote sensing data is strongly affected by its compatibility with other geographic data planes, and problems in achieving such compatibility in the framework of automated geographical information systems are discussed.
Abstract: Digital image processing is now widely available to users of remotely sensed data. Although such processing offers many new opportunities for the user (or analyst), it also makes heavy demands on the acquisition of new skills if the data are to yield useful information efficiently. In deciding on the best approach for image classification, the user faces a bewildering array of choices, many of which have been poorly evaluated. It is clear, however, that the use of both internal and external contextual information can be of great value in improving classification performance. The ultimate use of information extracted from remote sensing data is strongly affected by its compatibility with other geographic data planes. Problems in achieving such compatibility in the framework of automated geographical information systems are discussed. The success of image analysis and classification methods is highly dependent on the relationships between the abilities of sensing systems themselves and the characte...

53 citations



Patent
02 Sep 1981
TL;DR: In this article, a photographic printer is described that includes a detector system for measuring optical characteristics of a plurality of defined areas of each photographic film image to be printed; from the measured optical characteristics, the printer identifies and classifies the film images into various kinds of scenes.
Abstract: A photographic printer includes a detector system (18) for measuring optical characteristics of a plurality of defined areas of each photographic film image to be printed. From the measured optical characteristics, the printer identifies and classifies the images on the film into various kinds of scenes. The exposures used for printing each film image depend on the classification of that image. A user-operated control (20) enables the user to vary the sensitivity of the classification for one or more types of scenes in a substantially linear manner.

8 citations


Journal ArticleDOI
TL;DR: In this article, a Bayes decision function which minimizes the probability of misclassification is used for classification of binary images of hydrometeors (ice particles and raindrops) taken from cloud samples.
Abstract: The investigation reported here involves the automatic classification of binary (black and white) images of hydrometeors (ice particles and raindrops) taken from cloud samples. The goal is to classify such images (both complete and fractional) into the seven most common classes of hydrometeors by statistical pattern recognition techniques. The data acquisition system and preprocessing are investigated in detail. Four moment invariants which yield good class separation were used as features for the classification process. A Bayes decision function which minimizes the probability of misclassification is used for classification. Bayes' theorem is employed to update the mean vectors and covariance matrices involved in the decision function. A discrete Kalman filtering algorithm is developed for the on-line estimation of the probability of occurrence of each class. For this estimation, a discrete adaptive Kalman filtering algorithm is also developed which adjusts the filter gain matrix such as to ...
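The four moment invariants used in the paper are not listed in this abstract; assuming the first four Hu invariants as a stand-in, the feature computation for a binary hydrometeor image can be sketched as:

```python
import numpy as np

def central_moment(img, p, q):
    """(p, q)-th central moment of a binary image (nonzero = object)."""
    ys, xs = np.nonzero(img)
    return (((xs - xs.mean()) ** p) * ((ys - ys.mean()) ** q)).sum()

def hu_first_four(img):
    """First four Hu moment invariants from normalized central moments
    (a hypothetical choice; the paper's exact invariants may differ)."""
    m00 = np.count_nonzero(img)
    eta = lambda p, q: central_moment(img, p, q) / m00 ** (1 + (p + q) / 2)
    phi1 = eta(2, 0) + eta(0, 2)
    phi2 = (eta(2, 0) - eta(0, 2)) ** 2 + 4 * eta(1, 1) ** 2
    phi3 = (eta(3, 0) - 3 * eta(1, 2)) ** 2 + (3 * eta(2, 1) - eta(0, 3)) ** 2
    phi4 = (eta(3, 0) + eta(1, 2)) ** 2 + (eta(2, 1) + eta(0, 3)) ** 2
    return np.array([phi1, phi2, phi3, phi4])
```

Such feature vectors would then feed a Gaussian Bayes decision rule, with the class priors estimated on-line (in the paper, via the Kalman filtering algorithm).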

7 citations


01 Apr 1981
TL;DR: A Monte Carlo performance analysis is used to demonstrate the utility of the design approach by characterizing the ability of the algorithm to classify randomly positioned three-dimensional objects in the presence of additive noise, scale variations, and other forms of image distortion.
Abstract: The introduction of high resolution scanning laser radar systems, which are capable of collecting range and reflectivity images, is predicted to significantly influence the development of processors capable of performing autonomous target classification tasks. Actively sensed range images are shown to be superior to passively collected infrared images in both image stability and information content. An illustrated tutorial introduces cellular logic (neighborhood) transformations and two- and three-dimensional erosion and dilation operations, which are used for noise filtering and geometric shape measurement. A unique 'cookbook' approach to selecting a sequence of neighborhood transformations suitable for object measurement is developed and related to false alarm rate and algorithm effectiveness measures. The cookbook design approach is used to develop an algorithm to classify objects based upon their 3-D geometrical features. A Monte Carlo performance analysis is used to demonstrate the utility of the design approach by characterizing the ability of the algorithm to classify randomly positioned three-dimensional objects in the presence of additive noise, scale variations, and other forms of image distortion. (Author)
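The cellular logic (neighborhood) transformations the tutorial introduces reduce, in the binary 3x3 case, to erosion and dilation. A simplified digital sketch (not the report's implementation; np.roll's edge wraparound is ignored for brevity):

```python
import numpy as np

def dilate(img):
    """3x3 binary dilation: a pixel turns on if any 8-neighbor is on."""
    out = img.copy()
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out |= np.roll(np.roll(img, dy, axis=0), dx, axis=1)
    return out

def erode(img):
    """3x3 binary erosion: a pixel stays on only if all 8-neighbors are on."""
    out = img.copy()
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out &= np.roll(np.roll(img, dy, axis=0), dx, axis=1)
    return out

def despeckle(img):
    """Morphological opening (erode, then dilate) as a simple noise filter."""
    return dilate(erode(img))
```

Chaining such neighborhood operations and measuring what survives each step is the essence of the 'cookbook' sequence-selection approach described above.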

5 citations


Book ChapterDOI
01 Jan 1981
TL;DR: The Markov Mesh model is more useful for the generation of images than for the estimation of image parameters for the classification of real images, for which other, simpler procedures seem to work equally well or better, as discussed by the authors.
Abstract: Publisher Summary This chapter reviews the Markov Mesh models as originally given in the works of Abend, Harley, and Kanal. It also presents some notes on related references and on developments of Markov Random Field (MRF) models. The Markov Mesh models presented in the works of these authors sought to incorporate spatial dependence while reducing the complexity of likelihood functions for image classification. Current attempts to use MRFs as a model of textured digital images may have a better chance of producing useful results. The Markov Mesh model is more useful for the generation of images than for the estimation of image parameters for the classification of real images, for which other, simpler procedures seem to work equally well or better.
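For reference, the third-order Markov Mesh of Abend, Harley, and Kanal factors the joint distribution of a pixel array causally, which is what reduces the likelihood to a product of local terms:

$$P(X) \;=\; \prod_{i,j} P\!\left(x_{i,j} \mid x_{i-1,j},\; x_{i,j-1},\; x_{i-1,j-1}\right),$$

with boundary pixels conditioned only on the neighbors that exist. This one-pass, raster-order structure is what makes the model convenient for generating images, as the chapter notes.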

3 citations


Proceedings ArticleDOI
07 Dec 1981
TL;DR: Classical pattern recognition technology is applied to automatically classify ships using Forward Looking Infrared (FLIR) images, based on the extraction of features, including Hu's invariant moments, which uniquely describe the classes of ships.
Abstract: The Naval Weapons Center (NWC) is currently developing automatic target classification systems for future surveillance and attack aircraft and missile seekers. Target classification has been identified as a critical operational capability which should be included in new Navy aircraft and missile developments, or in systems undergoing significant modification. The objective of the Automatic Classification of Infrared Ship Imagery system is to provide the following new capabilities for surveillance and attack aircraft and antiship missiles: near real-time automatic classification of ships, day and night, at long standoff ranges with a wide-area-coverage imaging infrared sensor. The system applies classical pattern recognition technology to automatically classify ships using Forward Looking Infrared (FLIR) images.

Automatic classification of infrared ship imagery is based on the extraction of features which uniquely describe the classes of ships. These features are used in conjunction with decision rules which are established during a training phase. Conventional classification techniques require labeled samples of all expected targets, threats and non-threats, for this training phase. To overcome the resulting need for an immense database, NWC developed a Generalized Classifier which, in the training phase, requires signals only from the targets of interest, such as high-value combatant threats. In the testing phase, signals from the combatants are classified, while signals from other ships that are sufficiently different from the training data are classified as "other" targets. This technique provides considerable savings in computer processing time, memory requirements, and data collection effort.

Since IIR images of the appropriate quality and quantity were not available for investigating automatic IIR ship classification, TV images of ship models were used for an initial feasibility demonstration. The initial investigation made use of experience gained in preprocessing and classifying ROR and ISAR data. The most expedient method was therefore to collapse the 2-dimensional TV ship images onto the longitudinal axis by summing the amplitude data along the vertical ship axis. The resulting 128-point 1-dimensional profiles show the silhouette of the ship and bear an obvious similarity to the radar data. Based on that observation, a 128-point Fourier transform was computed, and the ten low-order squared amplitudes of the complex Fourier coefficients were used as feature vectors for the Generalized Classifier.

In contrast to the radar data, the size of TV or IIR images of ships changes as a function of range. It is therefore necessary to develop feature extraction algorithms which are scale invariant. The central moments, which have scale- and rotation-invariant properties, were therefore implemented. This method was suggested in 1962 by M. K. Hu (IRE Transactions on Information Theory). Using the moments alone resulted in unsatisfactory classification performance, indicating that edge enhancement was necessary and that the background needed to be rejected. The images were therefore processed with the Sobel nonlinear edge enhancement algorithm, which has the desirable property that it works for images with low signal-to-noise ratios and poorly defined edges. Satisfactory results were obtained.

In another experiment, the feature vector was composed of the five lower-order invariant moments and the five lower-order squared FFT coefficient magnitudes, excluding the zero-frequency coefficient. This paper describes the database and the processing and classification techniques, discusses the results, and addresses the topic of "Processing of Images and Data from Optical Sensors."
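The profile-plus-FFT feature extraction described above is straightforward to express in code. The sketch below follows the abstract (vertical summation to a 128-point profile, ten low-order squared FFT amplitudes) together with a standard Sobel gradient magnitude; the function names and array conventions are assumptions:

```python
import numpy as np

def profile_fft_features(img, n_coeffs=10):
    """Collapse a 2-D ship image onto its longitudinal axis and keep the
    low-order squared FFT amplitudes, as described in the abstract."""
    profile = img.sum(axis=0)                 # sum along the vertical axis
    spectrum = np.fft.fft(profile, n=128)     # 128-point transform
    return np.abs(spectrum[:n_coeffs]) ** 2   # squared amplitudes

def sobel_magnitude(img):
    """Sobel gradient magnitude for edge enhancement and background rejection."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    padded = np.pad(img.astype(float), 1, mode="edge")
    gx = np.zeros(img.shape)
    gy = np.zeros(img.shape)
    for y in range(img.shape[0]):
        for x in range(img.shape[1]):
            window = padded[y:y + 3, x:x + 3]
            gx[y, x] = (window * kx).sum()
            gy[y, x] = (window * ky).sum()
    return np.hypot(gx, gy)
```

The mixed feature vector from the second experiment would simply concatenate five invariant moments with five of these squared FFT magnitudes.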

1 citation



01 Dec 1981
TL;DR: Optical implementation of the Fukunaga-Koontz transform (FKT) and the Least-Squares Linear Mapping Technique (LSLMT) is described in this paper.
Abstract: Optical implementation of the Fukunaga-Koontz transform (FKT) and the Least-Squares Linear Mapping Technique (LSLMT) is described. The FKT is a linear transformation which performs image feature extraction for a two-class image classification problem. The LSLMT performs a transform from a large-dimensional feature space to a small-dimensional decision space, separating multiple image classes by maximizing the interclass differences while minimizing the intraclass variations. The FKT and the LSLMT were optically implemented using a coded phase optical processor. The transform was used for classifying images of birds and fish. After the F-K basis functions were calculated, those most useful for classification were incorporated into a computer-generated hologram. The output of the optical processor, consisting of the squared magnitudes of the F-K coefficients, was detected by a TV camera, digitized, and fed into a microcomputer for classification. A simple linear classifier based on only two F-K coefficients was able to separate the images into two classes, indicating that the F-K transform had chosen good features. Two advantages of optically implementing the FKT and LSLMT are parallel and real-time processing.
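The FKT itself is a short linear-algebra computation: whiten the sum of the two class autocorrelation matrices, then diagonalize one class in the whitened space. The eigenvalues of the two classes pair as λ and 1 − λ, so basis functions with extreme eigenvalues are the most discriminating. A digital sketch (the paper implements this optically; variable names are assumptions):

```python
import numpy as np

def fukunaga_koontz(X1, X2):
    """F-K basis from two classes of sample vectors (rows = samples).
    Assumes the summed autocorrelation matrix is full rank."""
    S1 = X1.T @ X1 / len(X1)               # class-1 autocorrelation
    S2 = X2.T @ X2 / len(X2)               # class-2 autocorrelation
    vals, vecs = np.linalg.eigh(S1 + S2)
    P = vecs @ np.diag(vals ** -0.5)       # whitening: P.T @ (S1+S2) @ P = I
    lam, Q = np.linalg.eigh(P.T @ S1 @ P)  # class-2 eigenvalues are 1 - lam
    return P @ Q, lam                      # basis vectors (columns), eigenvalues
```

Projecting an image onto the basis vectors whose eigenvalues lie closest to 1 or 0 yields the two-class features that the coded-phase processor computes in parallel.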