
Showing papers on "Contextual image classification published in 1991"


Journal ArticleDOI
TL;DR: A bandwidth compression scheme suitable for transmission of radiometric data collected by today's sensitive and high-resolution sensors is presented, which readily provides some feature classification capability, such as cloud typing, through the interpretation of KL-transformed images.
Abstract: We present a bandwidth compression scheme suitable for transmission of radiometric data collected by today's sensitive and high-resolution sensors. Specific design constraints associated with this application are requirements for (1) near-lossless coding, (2) handling of a high dynamic range, and (3) placement of an upper bound on maximum coding error, as opposed to the average or rms coding error. In this approach both the spectral and spatial correlations in the data are exploited to reduce its bandwidth. Spectral correlation is first removed via the Karhunen-Loeve (KL) transformation. An adaptive discrete cosine transform coding technique is then applied to the resulting spectrally decorrelated data. Because the actual coding is done in the transform domain, each individual coding error spreads over an entire block of data when reconstructed. This helps to reduce significantly the maximum error and, as such, makes this approach very suitable for this application. A useful by-product of this approach is that it readily provides some feature classification capability, such as cloud typing, through the interpretation of KL-transformed images. Since each KL-transformed image is a linear combination of all the spectral images, it represents a blend of information present in the entire spectral image set. As such, it could solely render some useful information not readily detectable from the ensemble of spectral images. This may be of particular utility for situations in which a photo interpreter may not have the time or the opportunity to inspect the entire set of images.
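
The two transform stages described above can be sketched in a few lines. The following Python fragment is an illustrative sketch only: the function names are made up here, and a plain uniform quantizer stands in for the paper's adaptive DCT coder and error-bounding logic. It shows spectral decorrelation with the Karhunen-Loeve (PCA) transform followed by blockwise DCT coding of each decorrelated band.

    import numpy as np
    from scipy.fft import dctn, idctn

    def kl_transform(cube):
        """cube: (bands, rows, cols) multispectral image."""
        bands, rows, cols = cube.shape
        X = cube.reshape(bands, -1)                      # one row per spectral band
        mean = X.mean(axis=1, keepdims=True)
        eigvals, eigvecs = np.linalg.eigh(np.cov(X - mean))
        basis = eigvecs[:, np.argsort(eigvals)[::-1]]    # strongest components first
        Y = basis.T @ (X - mean)                         # spectrally decorrelated "KL images"
        return Y.reshape(bands, rows, cols), basis, mean

    def block_dct_code(band, block=8, step=4.0):
        """Stand-in for the adaptive DCT coder: uniform quantization of 8x8 DCT blocks."""
        out = np.zeros_like(band, dtype=float)
        for r in range(0, band.shape[0] - band.shape[0] % block, block):
            for c in range(0, band.shape[1] - band.shape[1] % block, block):
                coeffs = dctn(band[r:r+block, c:c+block], norm='ortho')
                out[r:r+block, c:c+block] = idctn(np.round(coeffs / step) * step, norm='ortho')
        return out

Because the coding happens in the transform domain, each quantization error is spread across the whole block at reconstruction, which is how the scheme keeps the maximum error small.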

53 citations


Journal ArticleDOI
TL;DR: An algorithm that integrates spectral and textural information in the classification process is presented and it is shown that the classification accuracy of the spectral texture pattern matching algorithm was higher for most classes than that of the maximum-likelihood classifier.
Abstract: Because of the difficulty of specifying general criteria for texture features, automated image analysis in the field of remote sensing has been largely restricted to the spectral domain. An algorithm that integrates spectral and textural information in the classification process is presented. The procedure is capable of classifying a region of arbitrary shape and size and operates effectively near class boundaries. Except for the requirement of user-defined training data, the algorithm can be completely automated. For all accuracy measures tested, the classification accuracy of the spectral texture pattern matching algorithm was higher for most classes than that of the maximum-likelihood classifier. Furthermore, errors with the spectral/textural algorithm are largely confined to omission, which gives a high degree of confidence to the classified pixels.

42 citations


Journal ArticleDOI
TL;DR: A novel method of classifying a remote image through a multimode optical fiber by using neural networks is proposed and experimentally tested.
Abstract: A novel method of classifying a remote image through a multimode optical fiber by using neural networks is proposed and experimentally tested. A neural network is used to recognize the original image from a pattern transmitted through an optical fiber.

33 citations



02 Sep 1991
TL;DR: The parameters used are related to general 'shape' properties, but the technique can very easily be extended to other perceptually significant sets of parameters related to texture, colour, size, etc.
Abstract: Presents a powerful and general procedure for the parametric classification of image 'objects'. The parameters used are related to general 'shape' properties, but the technique can very easily be extended to other perceptually significant sets of parameters related to texture, colour, size, etc. Furthermore, the representation is such that queries can be constructed from iconic class representations, example images or even example sketches. Five perceptually meaningful shape parameters are used. These are the 'circularity', 'transparency', 'aspect ratio', 'irregularity' and the 'extreme point ratio'. The values of these parameters, for a particular 'object', form a vector which represents a point in 'shape' space. The classification is performed by identifying clusters of points in this space during a training phase. Since the training data is spread over a continuum and the number of classes within the data is unknown prior to training, it is appropriate to use an unsupervised classification technique.
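
A hedged sketch of the shape-space idea: the fragment below computes two of the five parameters (circularity and aspect ratio; the paper's 'transparency', 'irregularity' and 'extreme point ratio' have paper-specific definitions and are omitted) and clusters the resulting vectors. Fixed-k k-means is only a placeholder for the unsupervised step, since the number of classes is actually unknown before training.

    import numpy as np
    from skimage.measure import label, regionprops
    from sklearn.cluster import KMeans

    def shape_vectors(binary_image):
        """Per-object [circularity, aspect ratio] features from a binary object mask."""
        feats = []
        for region in regionprops(label(binary_image)):
            circularity = 4.0 * np.pi * region.area / (region.perimeter ** 2 + 1e-9)
            minr, minc, maxr, maxc = region.bbox
            aspect_ratio = (maxc - minc) / max(maxr - minr, 1)
            feats.append([circularity, aspect_ratio])
        return np.array(feats)

    def cluster_shapes(binary_image, k=3):
        """Placeholder clustering of points in 'shape' space."""
        return KMeans(n_clusters=k, n_init=10).fit_predict(shape_vectors(binary_image))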

16 citations


Journal ArticleDOI
TL;DR: A computer program implementing a contextual classifier algorithm is developed; the accuracy obtained from conventional pixel-by-pixel classification of satellite images can be improved if the contextual information is considered jointly with the spectral information in the same strategy of classification.
Abstract: The classification accuracy obtained from the classification of satellite images using pixel-by-pixel conventional methods can be improved if the contextual information is considered jointly with the spectral information in the same strategy of classification. A computer program has been developed to implement a contextual classifier algorithm. The accuracy improvement was evaluated and tested in two pilot zones of central Spain. Two indices were defined in order to analyse the homogeneity effect produced by the contextual classifier.
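
As a concrete and deliberately generic illustration of what 'considering contextual information jointly with spectral information' can mean in its simplest form, the fragment below applies a majority vote over a moving window to a per-pixel spectral classification; this is not the paper's algorithm, but it produces the kind of homogeneity effect that the two indices are designed to measure.

    import numpy as np
    from scipy.ndimage import generic_filter

    def majority_filter(label_map, size=3):
        """Replace each pixel label by the most frequent label in its size x size window."""
        def vote(window):
            values, counts = np.unique(window.astype(int), return_counts=True)
            return values[np.argmax(counts)]
        return generic_filter(label_map, vote, size=size)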

15 citations


Proceedings ArticleDOI
TL;DR: Texture features obtained by fitting generalized Ising, auto-binomial, and Gaussian Markov random fields to homogeneous textures are evaluated and compared by visual examination and by standard pattern recognition methodology.
Abstract: Texture features obtained by fitting generalized Ising, auto-binomial, and Gaussian Markov random fields to homogeneous textures are evaluated and compared by visual examination and by standard pattern recognition methodology. The Markov random field model parameters capture the strong cues for human perception, such as directionality, coarseness, and/or contrast. The limited experiments for the classification of natural textures and sandpaper textures by using various classifiers suggest that both feature extraction and classifier design be carefully considered.
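
For reference, one standard form of the Gaussian Markov random field texture model (the paper's exact parameterization may differ) expresses each zero-mean pixel value as a linear combination of symmetric neighbour pairs plus Gaussian noise,

    y(s) = \sum_{r \in N} \theta_r \left[ y(s + r) + y(s - r) \right] + e(s), \qquad e(s) \sim \mathcal{N}(0, \nu),

and the least-squares estimates of the \theta_r and \nu over a homogeneous patch serve as the texture feature vector; roughly, directional structure shows up in which \theta_r dominate, and coarseness in how slowly they decay with |r|.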

12 citations


Journal ArticleDOI
TL;DR: A method for computing this discriminant vector by quadratic programming is derived that scales with training set size rather than number of input variables and hence is well suited to the high dimensionality of image classification tasks.
Abstract: A useful discriminant vector for pattern classification is one that maximizes the minimum separation of discriminant function values for two pattern classes. This optimality criterion can prove valuable in many situations because it emphasizes the class elements that are most difficult to classify. A method for computing this discriminant vector by quadratic programming is derived. The resulting calculation scales with training set size rather than the number of input variables and hence is well suited to the high dimensionality of image classification tasks. Digitized images are used to demonstrate application of the approach to two-class and multiple-class image classification tasks.
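
One standard way to write this criterion (a sketch; the paper's exact formulation, e.g. its handling of a bias term, may differ) is, for labels y_i \in \{-1, +1\} and feature vectors x_i,

    \max_{\|w\| = 1,\; b} \; \min_i \; y_i \left( w^\top x_i + b \right),

which, when the two classes are separable, is equivalent to the quadratic program

    \min_{w, b} \; \tfrac{1}{2} \|w\|^2 \quad \text{subject to} \quad y_i \left( w^\top x_i + b \right) \ge 1 \;\; \forall i.

The dual of this QP has one variable per training sample, which matches the abstract's point that the computation scales with training-set size rather than with the number of input variables.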

12 citations


Proceedings ArticleDOI
27 Mar 1991
TL;DR: The authors describe the use of an associative image classification architecture to estimate the camera-driven position of a land vehicle and show that the classification mechanism also operates correctly in the presence of very similar training patterns.
Abstract: The authors describe the use of an associative image classification architecture to estimate the camera-driven position of a land vehicle. Outdoor pictorial scenes are used within a vision system for the positioning of an autonomous land vehicle. Images supplied by sensorial input sources are matched with a set of simplified descriptions of possible scenes (prototypes). The associative classification module must supply a rough suggestion about the observed scene to a top-down expectation-driven recognition system. Easy training, low computation times, and ordering alternatives according to their evaluated reliabilities are some of the most important advantages of the approach described. Experimental results show that the classification mechanism also operates correctly in the presence of very similar training patterns. The system's structural flexibility allows efficient application-oriented implementations on low-cost parallel machinery.

11 citations


Journal ArticleDOI
TL;DR: An unsupervised region-based image segmentation algorithm implemented with a pyramid structure is developed. Rather than relying on traditional local splitting and merging of regions with a similarity test of region statistics, the algorithm identifies the homogeneous and boundary regions at each level of the pyramid; the global parameters of each class are then estimated and updated with the values of homogeneous regions represented at that level of the pyramid using mixture distribution estimation.
Abstract: An unsupervised region-based image segmentation algorithm implemented with a pyramid structure has been developed. Rather than depending on traditional local splitting and merging of regions with a similarity test of region statistics, the algorithm identifies the homogeneous and boundary regions at each level of the pyramid; the global parameters of each class are then estimated and updated with the values of the homogeneous regions represented at that level of the pyramid using mixture distribution estimation. The image is then classified through the pyramid structure. Classification results obtained for both simulated and SPOT imagery are presented.

10 citations


Proceedings ArticleDOI
01 Feb 1991
TL;DR: The design of a prototype system for real-time classification of wooden profiled boards is described and a multiprocessor system where each processor is specialized to solve a specific task in the recognition hierarchy is used.
Abstract: In this paper the design of a prototype system for real-time classification of wooden profiled boards is described. The presentation gives an overview of the algorithms and hardware developed to achieve classification in real time at a data rate of 4 Mpixel/sec. The system achieves its performance by a hierarchical processing strategy where the intensity information in the digital image is transformed into a symbolic description of small texture elements. Based on this symbolic representation, a syntactic segmentation scheme is applied which produces a list of objects that are present on the board surface. The objects are described by feature vectors containing both numeric structural texture- and shape-related properties. A graph-like decision network is then used to recognize the various defects. The classification procedures were extensively tested for spruce boards on a large data set containing 500 boards taken from the production line at random. The overall rate of correct classification was 95% on this data set. The structure of these algorithms is reflected in the hardware design. We use a multiprocessor system where each processor is specialized to solve a specific task in the recognition hierarchy.

Proceedings ArticleDOI
01 Nov 1991
TL;DR: A greedy tree-growing algorithm is used in conjunction with an input-dependent weighted distortion measure to develop a tree-structured vector quantizer that can be used for preliminary classification as well as compression.
Abstract: A greedy tree-growing algorithm is used in conjunction with an input-dependent weighted distortion measure to develop a tree-structured vector quantizer. Vectors in the training set are classified, and weights are assigned to the classes. The resulting weighted distortion measure forces the tree to develop better representations for those classes that are considered important. Results on medical images and USC database images are presented. A tree-structured vector quantizer grown in a similar manner can be used for preliminary classification as well as compression. Tree-structured vector quantization (TSVQ) is an image compression technique that is rapid for both the encoder and the decoder. A variable-rate code can be implemented by an unbalanced tree, obtained either by growing a balanced tree and then pruning it back so that it becomes unbalanced, or by "greedily" growing an unbalanced tree directly. In this paper we describe work in which this greedy growing algorithm is used in conjunction with an input-dependent, weighted distortion measure. Weights are assigned to the vectors in the training set according to a classification scheme; in our examples, the classification is based on brightness, texture, or on hand-labeled features in the training set. The weighted distortion causes the tree to have "growth spurts" for certain types of inputs that have been declared a priori to be important, and to become "stunted" for inputs that are less important. A binary TSVQ consists of a tree with nodes labeled by candidate reproduction vectors. An input vector is compared to the labels of the two child nodes available at the root node, and the node with the minimum-distortion label (the nearest neighbor) is selected. The encoder then performs a similar test for the new node's children and continues in this manner until a terminal node is reached. The label of the terminal node is the final reproduction, and the binary vector describing the sequence of encoder decisions is the codeword stored or sent to the decoder.
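
The encoding step described in the last paragraph is easy to sketch. The fragment below is illustrative only (class and function names are invented here); it descends the binary tree by nearest-neighbour tests under squared-error distortion. The input-dependent weighting discussed above acts during tree growing, when deciding which node to split next, and is therefore not visible in the encoder itself.

    import numpy as np

    class Node:
        def __init__(self, label, children=None, index=None):
            self.label = np.asarray(label)   # candidate reproduction vector
            self.children = children or []   # two child nodes, or [] for a terminal node
            self.index = index               # terminal-node index (the decoder's codebook entry)

    def encode(x, root):
        """Return (terminal index, path bits) for input vector x."""
        node, bits = root, []
        while node.children:
            d = [np.sum((x - child.label) ** 2) for child in node.children]
            choice = int(np.argmin(d))       # nearest-neighbour child
            bits.append(choice)
            node = node.children[choice]
        return node.index, bits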

01 May 1991
TL;DR: A procedure to extract discriminantly informative features based on a decision boundary for nonparametric classification is proposed and experiments show that the proposed algorithm finds effective features for the nonparametric classifier with Parzen density estimation.
Abstract: Feature selection has been one of the most important topics in pattern recognition. Although many authors have studied feature selection for parametric classifiers, few algorithms are available for feature selection for nonparametric classifiers. In this paper we propose a new feature selection algorithm based on decision boundaries for nonparametric classifiers. We first note that feature selection for pattern recognition is equivalent to retaining 'discriminantly informative features', and a discriminantly informative feature is related to the decision boundary. A procedure to extract discriminantly informative features based on a decision boundary for nonparametric classification is proposed. Experiments show that the proposed algorithm finds effective features for the nonparametric classifier with Parzen density estimation.
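
A minimal sketch of the nonparametric classifier the features are being selected for, assuming a Gaussian kernel, a single fixed bandwidth, and equal class priors (none of these choices is dictated by the paper):

    import numpy as np

    def parzen_density(x, samples, h):
        """Gaussian-kernel Parzen estimate of the class density at x; samples is (n, d)."""
        n, d = samples.shape
        k = np.exp(-np.sum((samples - x) ** 2, axis=1) / (2.0 * h ** 2))
        return k.sum() / (n * (np.sqrt(2.0 * np.pi) * h) ** d)

    def classify(x, class_samples, h=1.0):
        """class_samples: dict mapping class label -> (n_c, d) array of training vectors."""
        scores = {c: parzen_density(x, s, h) for c, s in class_samples.items()}
        return max(scores, key=scores.get)

Informally, the decision-boundary criterion retains those feature directions along which these per-class density estimates actually differ near the boundary, which is what makes the selection well matched to such classifiers.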

Proceedings ArticleDOI
18 Nov 1991
TL;DR: The authors discuss the first subcomponent of a vision system being developed to combine the flexibility offered by using neural networks as classifiers with some traditional image classification techniques that have failed to produce the expected results due to the inadequacies of the classification system hitherto used.
Abstract: The authors discuss the first subcomponent of a vision system being developed to combine the flexibility offered by using neural networks as classifiers with some traditional image classification techniques that have failed to produce the expected results due to the inadequacies of the classification system hitherto used. By using a multiresolution pyramid to focus the attention of the system on areas that are likely to contain the features being sought, the performance of the system in terms of its accuracy and speed of location is improved. Furthermore, by breaking the problem of locating an object into subgoals, the construction and operation of the detectors is reduced in complexity.

Patent
24 Jan 1991
TL;DR: In this paper, an electrophotographic copying apparatus is used to estimate a copy area of an original within a scan area based on copy condition data set and stored through an operation unit.
Abstract: An electrophotographic copying apparatus, which is able to classify an image in a copy area of the original. The electrophotographic copying apparatus estimates a copy area of an original within a scan area based on copy condition data set and stored through an operation unit. A sampler samples the scanned image data in the copy area to form image data. An identifying unit uses the image data thus obtained to identify an image classification of the copy area from a group of image patterns.

Proceedings ArticleDOI
03 Jun 1991
TL;DR: Two methods have been tested in order to improve land use mapping in a post-classification refinement process: supervised relaxation and an expert system, both of which use multiple sources of information and return satisfactory results.
Abstract: Two methods have been tested in order to improve land use mapping in a post-classification refinement process: supervised relaxation and an expert system. Both methods use multiple sources of information and return satisfactory results. Statistical measurements of texture have been used to provide ancillary information for land use mapping at a super-class level (general classes) in a land cover classification tree. The reasoning model of the supervised relaxation technique is based upon the Bayesian theory. In contrast, the expert system uses the Dempster-Shafer reasoning scheme and allows evidence to be propagated at various levels in the land cover taxonomic hierarchy. The result of this approach may be a mixed-level map product, if the available amount of evidence is insufficient to decide among singleton competing labels. Thus, limitations in the entry-data set for accurate and fine classifications can be defined and resolved. The knowledge base of the expert system contains a set of 40 spatial context rules.
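
For context, the Dempster-Shafer scheme mentioned above combines basic mass assignments m_1 and m_2 from two evidence sources with Dempster's rule (standard form), for any non-empty label set A:

    m(A) = \frac{1}{1 - K} \sum_{B \cap C = A} m_1(B)\, m_2(C), \qquad K = \sum_{B \cap C = \emptyset} m_1(B)\, m_2(C).

Because mass can rest on sets of labels rather than only on singletons, insufficient evidence naturally leaves belief at a super-class level of the taxonomy, which is what produces the mixed-level map products described above.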


Proceedings ArticleDOI
02 Dec 1991
TL;DR: The recognition principle exploits the noise-insensitivity and content-addressability of associative memories to achieve robust classification in image processing, and the implementation on a transputer-based architecture makes it possible to attain high structural flexibility at relatively low machinery cost.
Abstract: A parallel architecture to support an associative system for image classification is presented. The recognition principle exploits the noise-insensitivity and content-addressability of associative memories to achieve robust classification in image processing; in addition, the implementation on a transputer-based architecture makes it possible to attain high structural flexibility, at relatively low machinery cost. After defining the basic associative-classification mechanism, the parallel structure is described. The resulting system proves quite fast and inexpensive, hence applications to real-time environments become feasible. Structural flexibility allows easy modifications to the system to tailor it to different application domains. The efficiency of the proposed architecture is evidenced by experimental results obtained in a real image-classification domain.

Proceedings ArticleDOI
13 Oct 1991
TL;DR: Schemes are presented that greatly minimize the risk of converging to a partial solution and maximize the rule discovery process for rule-based learning.
Abstract: Many learning algorithms tend to converge to local minima that often represent partial solutions. Schemes are presented that greatly minimize the risk of converging to a partial solution and maximize the rule discovery process for rule-based learning. For the experiments, a genetic algorithm rule-based learning system called a classifier system has been used. The new strategies are supported by presenting accelerations and completion of learning in higher-order letter image classification problems.



04 Feb 1991
TL;DR: The approach described is based on feature-based correspondence of regions in the two input images, which enables further properties of the scene to be deduced by combining information from multiple images, possibly at different times or with different sensors.
Abstract: A single remote-sensed image provides a snapshot of a scene at a particular instant in time. The measurement process, and hence the grey-levels of the final image, is dependent on the properties of the scene, the sensor and the sensing conditions. Combining information from multiple images, possibly at different times or with different sensors, enables further properties of the scene to be deduced. The motivation for this is to either improve classification accuracy of the scene, or else to quantify temporal variations in either shape or attributes of the scene. This latter problem is termed change detection. The approach described is based on feature-based correspondence of regions in the two input images. Images are processed from pixel data to be stored in a vector database, via a segmentation algorithm. Single-image classification and attribute extraction may be carried out prior to combination.

Proceedings ArticleDOI
01 Nov 1991
TL;DR: This study presents a new approach for pattern classification using pseudo 2-D binary cellular automata (CA) that resembles the memory network classifier in the sense that it is based on an adaptive knowledge base formed during a training phase, and also in the fact that both methods utilize pattern features that are directly available.
Abstract: Most classification techniques either adopt an approach based directly on the statistical characteristics of the pattern classes involved, or they transform the patterns in a feature space and try to separate the point clusters in this space. An alternative approach based on memory networks has been presented, its novelty being that it can be implemented in parallel and it utilizes direct features of the patterns rather than statistical characteristics. This study presents a new approach for pattern classification using pseudo 2-D binary cellular automata (CA). This approach resembles the memory network classifier in the sense that it is based on an adaptive knowledge base formed during a training phase, and also in the fact that both methods utilize pattern features that are directly available. The main advantage of this approach is that the sensitivity of the pattern classifier can be controlled. The proposed pattern classifier has been designed using 1.5 micrometer design rules for an N-well CMOS process. Layout has been achieved using SOLO 1400. Binary pseudo 2-D hybrid additive CA (HACA) is described in the second section of this paper. The third section describes the operation of the pattern classifier and the fourth section presents some possible applications. The VLSI implementation of the pattern classifier is presented in the fifth section and, finally, the sixth section draws conclusions from the results obtained.

Proceedings ArticleDOI
02 Nov 1991
TL;DR: A method for parameter estimation in image classification or segmentation is studied within the statistical frame of finite mixture distributions, and the parameters estimated from a simulated phantom are very close to those of the phantom.
Abstract: A method for parameter estimation in image classification or segmentation is studied within the statistical framework of finite mixture distributions. The method models an image as a finite mixture. Each mixture component corresponds to an image class. Each image class is characterized by parameters such as the intensity mean, the standard deviation, and the number of image pixels in that class. The method uses a maximum likelihood (ML) approach to estimate the parameters of each class, and uses the information criteria of Akaike (AIC) and/or Schwarz (MDL) to determine the number of classes in the image. In computing the ML solution of the mixture, the method adopts the expectation maximization (EM) algorithm. The initial estimation and convergence of the ML-EM algorithm are studied. The parameters estimated from a simulated phantom are very close to those of the phantom. The determined number of image classes agrees with that of the phantom. The accuracies in determining the number of image classes using AIC and MDL are compared. The MDL criterion performs better than the AIC criterion. A modified MDL shows further improvement.
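
A hedged sketch of the ML-EM step for a one-dimensional (intensity) Gaussian mixture, with an MDL-style score for choosing the number of classes; initialization and stopping are simplified relative to what the paper studies.

    import numpy as np

    def em_gaussian_mixture(x, k, iters=100):
        """x: 1-D array of pixel intensities. Returns weights, means, stds, log-likelihood."""
        n = x.size
        w = np.full(k, 1.0 / k)
        mu = np.quantile(x, np.linspace(0.1, 0.9, k))        # crude initialization
        sd = np.full(k, x.std() + 1e-6)
        for _ in range(iters):
            # E-step: responsibility of each class for each pixel
            dens = w * np.exp(-0.5 * ((x[:, None] - mu) / sd) ** 2) / (np.sqrt(2 * np.pi) * sd)
            resp = dens / (dens.sum(axis=1, keepdims=True) + 1e-300)
            # M-step: re-estimate weights, means, and standard deviations
            nk = resp.sum(axis=0) + 1e-12
            w, mu = nk / n, (resp * x[:, None]).sum(axis=0) / nk
            sd = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk) + 1e-6
        dens = w * np.exp(-0.5 * ((x[:, None] - mu) / sd) ** 2) / (np.sqrt(2 * np.pi) * sd)
        return w, mu, sd, np.log(dens.sum(axis=1) + 1e-300).sum()

    def mdl_score(loglik, k, n):
        """MDL-style criterion: a k-component 1-D mixture has 3k - 1 free parameters."""
        return -loglik + 0.5 * (3 * k - 1) * np.log(n)

The number of classes would then be chosen as the k that minimizes mdl_score over a range of candidate model orders.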

Proceedings ArticleDOI
01 Mar 1991
TL;DR: In this article, various morphological image processing techniques are developed for the discrimination of images reflected by road surfaces in different states, and various texture analysis methods are also applied to the same images.
Abstract: Methods based on various morphological image processing techniques are developed for the discrimination of images reflected by road surfaces in different states. Numerous texture analysis methods are also applied to the texture classification of the same images. Some of the methods developed are shown to be very efficient for automatic identification of the road surface states, whatever the granular structure of the road surface may be.

Proceedings ArticleDOI
01 Jul 1991
TL;DR: A decision support system that automatically assigns the analyzed capillaroscopic image to one of the following classes: normal, diabetic, and sclerodermic, and has been successfully used in experiments on images of nailfold capillaries of the human finger.
Abstract: The aim of the paper is to describe a decision support system operating in the area of capillaroscopic images. The system automatically assigns the analyzed capillaroscopic image to one of the following classes: normal, diabetic, and sclerodermic. The automatic morphometric analysis attempts to imitate the physician's behavior and requires the introduction of some particular features connected with the specific domain. These features allow a symbolic representation of the capillary, partitioning it into three components: apex, arteriolar, and venular. Each component is qualified by specific attributes which allow the necessary shape evaluations in order to discriminate among the classes of capillaries. The system is hierarchically organized in two levels. The first level is concerned with the segmentation after a noise reduction and an enhancement of the digitized image. This level uses a shell, developed and successfully experimented with for many heterogeneous classes of images. The second level is concerned with the effective classification of the previously processed image. It matches the visual data with a model constituted by a semantic network which embeds the geometric and structural a priori knowledge of all kinds of capillaries. The system has been successfully used in experiments to obtain images of nailfold capillaries of the human finger.

Proceedings ArticleDOI
01 Aug 1991
TL;DR: The conclusion is that the K-means method is the least successful of the three tested methods, and the developed method is slightly more powerful than the k-nearest neighbor method for map sizes 9 x 9 and 10 x 10.
Abstract: Some neural network based methods for texture classification and segmentation have been published. The motivation for this kind of work might be doubted, because there are many traditional methods that work well. In this paper, a neural network based method for stochastic texture classification and segmentation suggested by Visa is compared with traditional K-means and k-nearest neighbor classification methods. Both simulated and real data are used. The complexity of the considered methods is also analyzed. The conclusion is that the K-means method is the least successful of the three tested methods. The developed method is slightly more powerful than the k-nearest neighbor method for map sizes 9 x 9 and 10 x 10. The differences are, however, quite small. This means that the choice of classification method depends more on other aspects, like computational complexity and learning capability, than on the classification capability.

Proceedings ArticleDOI
01 Jun 1991
TL;DR: This work presents a solution method for the classification of tissue-types in brain MR images using Relaxation Labeling, a commonly used low-level technique in computer vision to resolve the ambiguity present in the initial classification by incorporating user specified compatibility coefficients.
Abstract: Classification of tissue-types in Magnetic Resonance (MR) images has received considerable attention in the medical image processing literature. Interpretation of MR images is based on multiple images corresponding to the same anatomy. Relaxation Labeling (RL) is a commonly used low-level technique in computer vision. We present a solution method for the classification of tissue-types in brain MR images using RL. Information from multiple images is combined to form an initial classification. RL is used to resolve the ambiguity present in the initial classification by incorporating user specified compatibility coefficients. A problem with RL is the smoothing of borders between tissue-types. We include edge information from the original images to overcome this problem. This results in a marked improvement in performance. We present results from patient images.
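
A sketch of the classical relaxation labeling update (the Rosenfeld-Hummel-Zucker form; the paper may use a variant): with p_i(\lambda) the current probability that pixel i has tissue label \lambda and r_{ij}(\lambda, \lambda') the user-specified compatibility coefficients,

    q_i^{(t)}(\lambda) = \sum_{j \in N(i)} \sum_{\lambda'} r_{ij}(\lambda, \lambda')\, p_j^{(t)}(\lambda'), \qquad
    p_i^{(t+1)}(\lambda) = \frac{p_i^{(t)}(\lambda) \left[ 1 + q_i^{(t)}(\lambda) \right]}{\sum_{\mu} p_i^{(t)}(\mu) \left[ 1 + q_i^{(t)}(\mu) \right]}.

Weakening the neighbourhood support across detected edges is one way to fold in the edge information mentioned above and limit the smoothing of tissue borders.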

Proceedings ArticleDOI
TL;DR: A neural network approach for classification of images represented by translation, scale, and rotation invariant features is presented and proved to be more stable and faster than the optimal curve matching algorithm in classifying the objects after the training phase.
Abstract: A neural network approach for classification of images represented by translation, scale, and rotation invariant features is presented. The invariant features are the Fourier descriptors (FDs) derived from the boundary (shape) of the object. The network is a multilayer perceptron (MLP) classifier with one hidden layer and back propagation training (MLP-BP). Performance of the MLP algorithm is compared to optimal curve matching (OCM) for the recognition of mechanical tools. The test data were 14 objects with eight images per object, each image having significant differences in scaling, translation, and rotation. Only 10 harmonics of the 1024 FD coefficients were used as the input vector. The neural network approach proved to be more stable and faster than the optimal curve matching algorithm in classifying the objects after the training phase. The simple calculations needed for the Fourier descriptors and the small number of coefficients needed to represent the boundary result in an efficient system, excluding training, which can be done off-line. Results are shown comparing the classification accuracy of the OCM method with the MLP-BP algorithm using different size training sets. The method can be extended to any patterns that can be discriminated by shape information.
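
A short sketch of the invariant features described above (a generic magnitude-based Fourier descriptor; the paper's exact normalization may differ):

    import numpy as np

    def fourier_descriptors(boundary_xy, n_harmonics=10):
        """boundary_xy: (N, 2) ordered points of a closed object boundary."""
        z = boundary_xy[:, 0] + 1j * boundary_xy[:, 1]   # boundary as a complex signal
        mags = np.abs(np.fft.fft(z))                     # magnitudes: rotation/start-point invariant
        # The DC term (the centroid) is discarded for translation invariance,
        # and dividing by |c_1| gives scale invariance.
        return mags[2:n_harmonics + 2] / (mags[1] + 1e-12)

A feature vector of this kind would then be the input to the MLP-BP classifier.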

Proceedings ArticleDOI
14 Apr 1991
TL;DR: A new approach is proposed for the integration of neural networks (NN) with machine learning techniques to build up an image classification system, using a symbolic technique for inductive learning from example to provide object models.
Abstract: A new approach is proposed for the integration of neural networks (NN) with machine learning techniques to build up an image classification system. In particular, the author uses a symbolic technique for inductive learning from example to provide object models. Such models are used to design the architecture and to initialize the weights of a backpropagation NN. Models include uncertainty aspects represented by fuzzy predicates, and relational properties for contextual classification. Both aspects are suitably mapped into the automatically designed NN. Preliminary results in a biomedical application are presented. >