
Showing papers on "Feature extraction published in 1984"


Journal ArticleDOI
TL;DR: The work is described systematically and analyzed in terms of so-called feature matching, which is likely to be the mainstream of the research and development of machine recognition of handprinted Chinese characters.
Abstract: Machine recognition of handprinted Chinese characters has recently become very active in Japan. Both from the practical and the academic point of view, very encouraging results are reported. The work is described systematically and analyzed in terms of so-called feature matching, which is likely to be the mainstream of the research and development of machine recognition of handprinted Chinese characters. A database, ETL8 (881 kanji, 71 hiragana, and 160 variations for each category), on which many experiments were performed, is explained. Recognition rates reported using this database can be compared, so a somewhat qualitative evaluation of these methods is given. Based on the comparative study, the merits and demerits of both feature and structural matching are discussed, and some future directions are mentioned.

243 citations



Journal ArticleDOI
TL;DR: By first measuring the line integrals of a two-dimensional picture f(x, y) via the Radon transform, certain feature-extraction operations useful in pattern recognition may be computed very easily; that is, the computational overhead for these feature-extraction operations is much less in the Radon space than in the direct space.
Abstract: We show that by first measuring the line integrals of a two-dimensional picture f(x, y) via the Radon transform, certain feature-extraction operations useful in pattern recognition may be computed very easily; that is, the computational overhead for these feature-extraction operations is much less in the Radon space than in the direct space. In particular, we consider the following features: (1) moments of f(x, y) invariant to translation, rotation, geometric scaling, and linear contrast scaling; (2) two geometric features, polar projections and the convex hull, that have similar invariance properties; (3) the Fourier power spectrum of f(x, y) integrated along radial pie-wedge and annular bins in the Fourier space; and (4) the Hough transform for detection of straight edges. Much of the motivation for this work lies in its implementation as an optical pattern recognition system. We show an optical-digital hardware implementation for the rapid computation of both the Radon transform and the feature extractions. By "rapid," we mean that the transform and feature-extraction operations on a 512 × 512 image may be accomplished at video rates (1/30 s per image). We point out advantages of this system over more traditional optical pattern recognition schemes that rely on the use of Fourier optics. Some experimental results are shown.
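A useful property behind this paper's claim is that low-order image moments reduce to one-dimensional sums over Radon projections. The sketch below is an illustration in plain NumPy, with hypothetical function names, using only the two axis-aligned projections rather than a full sinogram; it shows how m00, m10, and m01 can be recovered from projection data alone:

```python
import numpy as np

def projections(f):
    """Two Radon projections of image f[y, x]: onto the x-axis
    (line integrals along y) and onto the y-axis (along x)."""
    return f.sum(axis=0), f.sum(axis=1)

def moments_from_projections(px, py):
    """Low-order moments of f computed in Radon space: m00 (total mass)
    comes from either projection; m10 and m01 need only a 1-D sum each."""
    x = np.arange(px.size)
    y = np.arange(py.size)
    m00 = px.sum()              # equals f.sum()
    m10 = (x * px).sum()        # sum over x, y of x * f(x, y)
    m01 = (y * py).sum()        # sum over x, y of y * f(x, y)
    return m00, m10, m01

# Hypothetical example image: a small off-centre blob.
f = np.zeros((8, 8))
f[2:4, 5:7] = 1.0
px, py = projections(f)
m00, m10, m01 = moments_from_projections(px, py)
centroid = (m10 / m00, m01 / m00)   # translation term for invariant moments
```

The centroid obtained this way is exactly what is needed to form translation-invariant central moments, again without ever returning to the 2-D image.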

88 citations


Journal ArticleDOI
01 Jan 1984
TL;DR: The feasibility of applying image processing techniques to metal surface inspection is demonstrated and methods of feature extraction and classification have been tested experimentally and the performances of different types of classifier have been compared.
Abstract: The feasibility of applying image processing techniques to metal surface inspection is demonstrated. Two methods for metal surface inspection are described. In the first method, the metal surface reflective power and the metal surface normal are related by a random surface scattering model. The metal surface profile can then be computed from the metal surface normal. The second method applies pattern recognition techniques to classify metal surfaces into classes of different roughness. Methods of feature extraction and classification have been tested experimentally and the performances of different types of classifier have been compared. A two-level tree classifier using nonparametric linear classifiers at each node gives better than 90% correct classification on the testing set.

54 citations


Journal ArticleDOI
P. Rummel1, W. Beutel1
TL;DR: A scene analysis system for the recognition and inspection of overlapping workpieces in visually noisy scenes is described, which consists of a preprocessing algorithm based on an edge-following operator and a model-based analysis algorithm.

53 citations


Journal ArticleDOI
TL;DR: The most recent developments in pattern recognition and computer vision are reviewed, with a view to analyzing pattern characteristics as well as designing recognition systems.
Abstract: With more powerful algorithms and greater computing power, the once "unreachable" pattern recognition and computer vision problems can now be resolved, simplifying complex decisions about input data. In the last 20 years, interest in pattern recognition and computer vision problems has increased dramatically. This interest has in turn created a need for theoretical methods and experimental software and hardware to aid the design of computer vision and pattern recognition systems. Over 25 books have been published on these topics, as have a number of conference proceedings and special issues of journals. Pattern recognition machines and computer vision systems have been designed and built for everything from character recognition, target detection, medical diagnosis, analysis of biomedical signals and images, remote sensing, and identification of human faces and fingerprints, to reliability, socioeconomics, archaeology, speech recognition and understanding, machine part recognition, and automatic inspection. In this article, we briefly review the most recent developments in pattern recognition and computer vision. Many definitions of pattern recognition have been proposed. We view pattern recognition here as being concerned primarily with the description and analysis of measurements taken from physical or mental processes. Pattern recognition often begins with some kind of preprocessing to remove noise and redundancy in the measurements, thereby ensuring an effective and efficient pattern description. Next, a set of characteristic measurements, numerical and/or nonnumerical, and relations among these measurements are extracted to represent patterns. Patterns are then analyzed (classified and/or described) on the basis of the representation. Naturally, we need a good set of characteristic measurements and a firm idea of how they interrelate in representing patterns so that patterns can be easily recognized.
Knowledge of the statistical and structural characteristics of patterns is vital to achieving this goal and should be fully utilized. From this point of view, then, pattern recognition means analyzing pattern characteristics as well as designing recognition systems.

52 citations


Proceedings ArticleDOI
19 Mar 1984
TL;DR: A Hierarchical Vector Quantization scheme that can operate on "supervectors" of dimensionality in the hundreds of samples is introduced and Gain normalization and dynamic codebook allocation are used in coding both feature vectors and the final data subvectors.
Abstract: This paper introduces a Hierarchical Vector Quantization (HVQ) scheme that can operate on "supervectors" of dimensionality in the hundreds of samples. HVQ is based on a tree-structured decomposition of the original supervector into a large number of low-dimensional vectors. The supervector is partitioned into subvectors, the subvectors into minivectors, and so on. The "glue" that links subvectors at one level to the parent vector at the next higher level is a feature vector that characterizes the correlation pattern of the parent vector and controls the quantization of lower-level feature vectors and ultimately of the final descendant data vectors. Each component of a feature vector is a scalar parameter that partially describes a corresponding subvector. The paper presents a three-level HVQ for which the feature vectors are based on subvector energies. Gain normalization and dynamic codebook allocation are used in coding both feature vectors and the final data subvectors. Simulation results demonstrate the effectiveness of HVQ for speech waveform coding at 9.6 and 16 Kb/s.
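One level of the decomposition described above can be sketched as follows. This is not the authors' coder, only an illustrative NumPy fragment with hypothetical names: partition a supervector into subvectors, compute an energy feature per subvector (the "glue" linking tree levels), and gain-normalize before shape quantization:

```python
import numpy as np

def partition(supervector, n_sub):
    """Split a supervector into n_sub equal-length subvectors (one tree level)."""
    return supervector.reshape(n_sub, -1)

def energy_features(subvectors):
    """Feature vector of per-subvector RMS energies; at the next level up,
    this vector is itself quantized and steers codebook allocation below."""
    return np.sqrt((subvectors ** 2).mean(axis=1))

def gain_normalize(subvectors, gains, eps=1e-12):
    """Divide each subvector by its gain so a single shape codebook
    can serve subvectors of very different energies."""
    return subvectors / (gains[:, None] + eps)

# Hypothetical 12-sample supervector split into 3 subvectors of 4 samples.
sv = np.array([1., 1, 1, 1, 2, 2, 2, 2, 0, 0, 0, 0])
subs = partition(sv, 3)
g = energy_features(subs)        # per-subvector gains
shapes = gain_normalize(subs, g) # unit-energy shapes for codebook search
```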

45 citations


Journal ArticleDOI
TL;DR: This paper proposes a general system approach applicable to the automatic inspection of textured material in which the input image is preprocessed in order to be independent of non-uniformities and a tone-to-texture transform is performed.

45 citations


01 Jan 1984
TL;DR: An approach to image analysis using a region-based segmentation system that has been used to search a database of images that are in correspondence with a geodetic map to find occurrences of known buildings, roads, and natural features.
Abstract: In this paper we discuss the use of map descriptions to guide the extraction of man-made and natural features from aerial imagery. An approach to image analysis using a region-based segmentation system is described. This segmentation system has been used to search a database of images that are in correspondence with a geodetic map to find occurrences of known buildings, roads, and natural features. The map predicts the approximate appearance and position of a feature in an image. The map also predicts the area of uncertainty caused by errors in the image-to-map correspondence. The segmentation process then searches for image regions that satisfy two-dimensional shape and intensity criteria. If no initial region is found, the process attempts to merge together those regions that may satisfy these criteria. Several detailed examples of the segmentation process are given. Copyright © 1984 David M. McKeown, Jr. and Jerry L. Denlinger. This research was sponsored by the Defense Advanced Research Projects Agency (DOD), ARPA Order No. 3597, monitored by the Air Force Avionics Laboratory under Contract F33615-81-K-1539. This paper was presented at the IEEE Workshop on Computer Vision, April 30 - May 2, 1984, in Annapolis, Maryland.

42 citations


Journal ArticleDOI
TL;DR: A simplified method of calculating the HTC discriminant functions from large-dimensional images by a small computer is described, useful when the within-class variation can be approximated by a covariance matrix of low rank.
Abstract: The Hotelling trace criterion (HTC) is useful for feature extraction so that multiclasses of statistical images can be separated by maximizing the between-class differences while minimizing the within-class variations. Optical implementation of the HTC has been successful by utilizing computer-generated spatial filters and a coded-phase processor. A simplified method of calculating the HTC discriminant functions from large-dimensional images by a small computer is also described. This method is useful when the within-class variation can be approximated by a covariance matrix of low rank.
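The Hotelling trace criterion itself is a standard construction: choose projections that maximize tr(Sw^-1 Sb), the between-class scatter relative to the within-class scatter. The sketch below is a minimal NumPy version, not the paper's optical or small-computer implementation; the function name and the small ridge term are assumptions for numerical safety:

```python
import numpy as np

def htc_discriminants(classes, n_features):
    """Hotelling trace criterion: the projections maximizing tr(Sw^-1 Sb)
    are the leading eigenvectors of Sw^-1 Sb.
    `classes` is a list of (n_samples, dim) arrays, one per class."""
    dim = classes[0].shape[1]
    means = [c.mean(axis=0) for c in classes]
    grand = np.vstack(classes).mean(axis=0)
    Sw = sum(np.cov(c, rowvar=False, bias=True) for c in classes)
    Sb = sum(np.outer(m - grand, m - grand) for m in means)
    # Tiny ridge keeps Sw invertible when within-class variation is degenerate.
    evals, evecs = np.linalg.eig(np.linalg.solve(Sw + 1e-9 * np.eye(dim), Sb))
    order = np.argsort(evals.real)[::-1]
    return evecs.real[:, order[:n_features]]

# Hypothetical example: two 2-D classes separated only along the x-axis.
c1 = np.array([[0., 0], [1, 1], [0, 1], [1, 0]])
c2 = c1 + np.array([5., 0])
W = htc_discriminants([c1, c2], 1)   # should align with the x-axis
```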

34 citations


Journal ArticleDOI
TL;DR: In this paper, a method is developed to extract additional image spatial features by means of linear and non-linear local filtering, and feature selection methods are also developed, since it is usually not possible to use all the generated features.
Abstract: Feature extraction is an important factor in determining the accuracy that can be attained in the classification of multispectral images. The traditional per-point classification methods do not use all the available information, since they disregard the spatial relationships that exist among pixels belonging to the same class. In this paper, methods are developed to extract additional image spatial features by means of linear and non-linear local filtering. Feature selection methods are also developed, since it is usually not possible to use all the generated features. The classification stage is performed in a supervised mode using the maximum likelihood criterion. A quantitative analysis of the performance of the spatial features shows that an overall increase in classification precision is achieved, although at the expense of increased rejection levels, particularly on the borders between different fields.
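As a rough illustration of the kind of spatial features meant here, a local mean (linear filter) and a local standard deviation (non-linear filter) can be computed per band and stacked as extra channels alongside the spectral values. This is a generic stand-in, not the authors' filters:

```python
import numpy as np

def local_mean(band, k=3):
    """Linear spatial feature: k x k moving average of one spectral band."""
    p = k // 2
    padded = np.pad(band, p, mode='edge')
    out = np.zeros(band.shape, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + band.shape[0], dx:dx + band.shape[1]]
    return out / (k * k)

def local_std(band, k=3):
    """Non-linear spatial feature: local standard deviation, low inside
    homogeneous fields and high on the borders between fields."""
    m = local_mean(band, k)
    m2 = local_mean(band ** 2, k)
    return np.sqrt(np.maximum(m2 - m ** 2, 0.0))

# Hypothetical flat field: mean reproduces it, deviation is zero everywhere.
band = np.full((5, 5), 7.0)
mu = local_mean(band)
sd = local_std(band)
```

The per-pixel feature vector for the maximum likelihood stage would then be the spectral bands plus these derived channels.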

Patent
21 Dec 1984
TL;DR: In this article, a computer implemented method for representation and recognition of waveforms using information derived from several groups of such waveforms of general similarity is proposed, where each signal is considered as an additive combination of the common signal of the respective group and a noise term.
Abstract: A computer implemented method for representation and recognition of waveforms uses information derived from several groups of such waveforms of general similarity. Each signal is considered as an additive combination of the common signal of the respective group and a noise term. The common signals of the groups of waveforms are extracted using a set of data-derived orthonormal basis waveforms, and a criterion is developed to evaluate the combination of individual basis waveforms used for the common signal extraction. The estimated common signals then represent a template pattern of the waveform or signal groups that can be used in waveform recognition.

Journal ArticleDOI
TL;DR: Certain algorithms and their computational complexity are examined for use in a VLSI implementation of the real-time pattern classifier described in Part I and it is shown that if the matrix of interest is centrosymmetric and the method for eigensystem decomposition is operator-based, the problem architecture assumes a parallel form.
Abstract: Several authors have studied the problem of dimensionality reduction or feature selection using statistical distance measures, e.g., the Chernoff coefficient, Bhattacharyya distance, I-divergence, and J-divergence because they generally felt that direct use of the probability of classification error expression was either computationally or mathematically intractable. We show that for the difficult problem of testing one weakly stationary Gaussian stochastic process against another when the mean vectors are similar and the covariance matrices (patterns) differ, the probability of error expression may be dealt with directly using a combination of classical methods and distribution function theory. The results offer a new and accurate finite dimensionality information-theoretic strategy to feature selection, and are shown, by use of examples, to be superior to the well-known Kadota-Shepp approach which employs distance measures and asymptotics in its formulation. The present Part I deals with the theory; Part II deals with the implementation of a computer-based real-time pattern classifier which takes into account a realistic quasi-stationarity of the patterns.

Journal ArticleDOI
TL;DR: An investigation into the feasibility of applying pattern recognition concepts to the classification of metallic objects by their electromagnetic induction responses was performed and it is noted that the classifier extension developed provides a viable approach to classification of responses that vary continuously with respect to a single parameter.
Abstract: An investigation into the feasibility of applying pattern recognition concepts to the classification of metallic objects by their electromagnetic induction responses was performed. The effect on the response of a limited set of steel spheroids due to various factors such as object shape, size, and orientation was examined and a pattern recognition scheme based on these results was proposed. Implementation of the scheme involved the development of a novel extension to the nearest mean vector type of classifier in which the concept of the class mean as a point in feature space was generalized to be a curve. The resultant pattern recognition scheme was tested on a representative test set which included 815 responses, corresponding to 104 variations in object and orientation. A success rate of greater than 96 percent was achieved. It is noted that the classifier extension developed provides a viable approach to classification of responses that vary continuously with respect to a single parameter.
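The "class mean generalized to a curve" idea can be illustrated directly: sample each class's mean curve (e.g. as orientation varies) at discrete points, and assign a response to the class whose curve passes closest. A hypothetical NumPy sketch, not the authors' implementation:

```python
import numpy as np

def curve_distance(x, curve):
    """Distance from feature vector x to a class 'mean curve', given as an
    (n_points, dim) sampling of the curve in feature space."""
    return np.min(np.linalg.norm(curve - x, axis=1))

def classify(x, class_curves):
    """Nearest-curve rule: the class whose mean curve passes closest to x."""
    return int(np.argmin([curve_distance(x, c) for c in class_curves]))

# Hypothetical 2-D feature space: two classes whose means sweep along
# horizontal curves as a single parameter (say, orientation) varies.
curve0 = np.stack([np.linspace(0, 1, 11), np.zeros(11)], axis=1)
curve1 = np.stack([np.linspace(0, 1, 11), np.full(11, 5.0)], axis=1)
label = classify(np.array([0.5, 1.0]), [curve0, curve1])
```

With a single point per class this reduces to the ordinary nearest-mean classifier, which is why it is a natural extension.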

Journal ArticleDOI
TL;DR: The design of a hybrid optical–digital image-processing system and the development of methods to perform a statistical analysis of the correlation signals are described and high classification results have been obtained for alphanumerical patterns.
Abstract: This paper describes the design of a hybrid optical–digital image-processing system and the development of methods to perform a statistical analysis of the correlation signals. The optical setup is based on a coherent correlator using binary-coded filters. The digital part is based on a microprocessor image-processing system. Six different algorithms for correlation signal evaluation are investigated. Feature reduction is achieved by multivariate analysis. High classification results have been obtained for alphanumerical patterns even in the presence of noise, rotation, change of scale, and shearing.

Journal ArticleDOI
TL;DR: An overview of the many types of knowledge that must be modeled in remote sensing and cartography is presented, and architectural and control aspects deemed important for cartographic expert systems are discussed.
Abstract: Expert or knowledge-based system approaches are currently being viewed with great interest for their potential to handle the many difficult problems encountered in image understanding and cartographic feature extraction from remotely sensed imagery. This article presents an overview of the many types of knowledge that must be modeled in remote sensing and cartography, and discusses architectural and control aspects deemed important for cartographic expert systems. A distributed architecture and a control structure based on a parallel non-directional search algorithm are described and open problems are mentioned.

Journal ArticleDOI
TL;DR: The performance using intensity and phase Fourier transform features and the performance in the presence of noise are studied and quantified for two different two-class pattern recognition data bases.
Abstract: Various feature extractors/classifiers for a hierarchical feature-space pattern recognition system are described. The system is intended to achieve multiclass distortion-invariant object identification. Although only a Fourier transform feature space is used, our basic hierarchical concepts, our theoretical analysis, and our general conclusions are applicable to other feature spaces. The performance using intensity and phase Fourier transform features and the performance in the presence of noise are studied and quantified for two different two-class pattern recognition data bases.

Journal ArticleDOI
TL;DR: Algorithms based on the Karhunen-Loeve expansion (KLE) and used for compression and feature extraction come under the same generalized eigendata problem and can be looked upon from a unifying point of view.
Abstract: The aim here is to show that algorithms based on the Karhunen-Loeve expansion (KLE) and used for compression and feature extraction come under the same generalized eigendata problem and can be looked upon from a unifying point of view. The KL expansion of 2-class data of signal and noise from the multidetector STEM images of ferritin molecules is studied as an example. Compression in KLE space is used for noise suppression (filtering) and dimensionality reduction (feature extraction) of these images.
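The shared eigenproblem view can be illustrated with a plain KLE: eigenvectors of the sample covariance supply both the compression basis and the extracted features, and truncating the basis performs the noise suppression. A generic NumPy sketch (not the STEM/ferritin pipeline; names are mine):

```python
import numpy as np

def kle_basis(X, k):
    """Karhunen-Loeve expansion: top-k eigenvectors of the sample covariance
    of the rows of X. The same basis serves compression and feature extraction."""
    mean = X.mean(axis=0)
    Xc = X - mean
    C = Xc.T @ Xc / len(X)
    evals, evecs = np.linalg.eigh(C)      # ascending eigenvalue order
    return evecs[:, ::-1][:, :k], mean    # keep the k largest modes

def compress(X, basis, mean):
    return (X - mean) @ basis             # k KLE coefficients per sample

def reconstruct(Z, basis, mean):
    return Z @ basis.T + mean             # filtered / denoised estimate

# Hypothetical rank-1 data: a single KL mode captures it exactly.
X = np.array([[1., 1], [2, 2], [3, 3]])
B, m = kle_basis(X, 1)
Xr = reconstruct(compress(X, B, m), B, m)
```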

Proceedings ArticleDOI
04 Dec 1984
TL;DR: A multicomputer system was developed to perform image processing and morphological feature extraction on large numbers of samples, with emphasis on system reliability and expandability, allowing for performance at a reduced rate when one or more computers malfunctioned.
Abstract: Erosion and dilation of images were compared with other edge detection techniques on a variety of marine organisms. Under certain conditions the erosion-and-dilation technique gave better results. The critical problem resolved by our approach was low-contrast imaging of randomly oriented objects that displayed random variations due to the appendages that frequently appeared in marine biological samples. A multicomputer system was developed to perform image processing and morphological feature extraction on large numbers of samples. Emphasis was given to system reliability and expandability, allowing for performance at a reduced rate when one or more computers malfunctioned. The system currently operates with seven computers but can be expanded to contain up to seventeen. Classification accuracy on zooplankton samples from New England coastal waters was approximately 92%.
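Grayscale erosion and dilation, and the edge map formed by their difference (the morphological gradient), can be sketched in a few lines of NumPy. This is a generic illustration of the technique, not the multicomputer system's code:

```python
import numpy as np

def _windowed(img, k, reduce_fn):
    """Apply reduce_fn (min or max) over every k x k neighbourhood."""
    p = k // 2
    padded = np.pad(img, p, mode='edge')
    stack = [padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
             for dy in range(k) for dx in range(k)]
    return reduce_fn(np.stack(stack), axis=0)

def dilate(img, k=3):
    """Grayscale dilation: local maximum over a k x k structuring element."""
    return _windowed(img, k, np.max)

def erode(img, k=3):
    """Grayscale erosion: local minimum over the same element."""
    return _windowed(img, k, np.min)

def morph_edges(img, k=3):
    """Morphological gradient (dilation minus erosion): an edge strength
    that tolerates the ragged, low-contrast outlines described above."""
    return dilate(img, k) - erode(img, k)

# Hypothetical image with a vertical step edge between columns 2 and 3.
img = np.zeros((6, 6))
img[:, 3:] = 1.0
e = morph_edges(img)   # nonzero only in the two columns flanking the step
```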

Proceedings ArticleDOI
01 Mar 1984
TL;DR: Two feature extraction methods for classification of textures are presented and it is shown that the sample correlations over a symmetric window including the origin are optimal features for classification.
Abstract: Two feature extraction methods for classification of textures are presented. It is assumed that the given M × M texture is generated by a Gaussian Markov random field (GMRF) model. In the first method, the least-squares estimates of the model parameters are used as features. In the second method, using the notion of sufficient statistics, it is shown that the sample correlations over a symmetric window including the origin are optimal features for classification. Simple minimum-distance classifiers using these two feature sets yield classification accuracies of over 99% and 92%, respectively, for a seven-class problem.
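The second method's features, sample correlations at symmetric lags around the origin, are easy to sketch. The particular offsets and the normalization below are illustrative choices, not the paper's exact window:

```python
import numpy as np

def gmrf_features(texture, offsets=((0, 1), (1, 0), (1, 1), (1, -1))):
    """Sample correlations of a (zero-mean, unit-variance normalised)
    texture at a set of symmetric lags; under the GMRF assumption these
    summarise the texture for classification."""
    t = (texture - texture.mean()) / texture.std()
    h, w = t.shape
    feats = []
    for dy, dx in offsets:
        a = t[max(dy, 0):h + min(dy, 0), max(dx, 0):w + min(dx, 0)]
        b = t[max(-dy, 0):h + min(-dy, 0), max(-dx, 0):w + min(-dx, 0)]
        feats.append((a * b).mean())   # correlation at lag (dy, dx)
    return np.array(feats)

def min_distance_classify(f, class_means):
    """Minimum-distance rule in the correlation feature space."""
    return int(np.argmin([np.linalg.norm(f - m) for m in class_means]))

# Hypothetical texture: horizontal stripes, perfectly correlated along
# rows and anti-correlated between adjacent rows.
stripes = np.tile(np.array([[1.0], [-1.0]]), (3, 4))
f = gmrf_features(stripes)
```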

Proceedings ArticleDOI
09 Jan 1984
TL;DR: A global approach utilizing the contextual information in a scene, which is currently discarded, offers the most promise in overcoming the shortcomings of current object classification methods.
Abstract: Existing strategies for the identification of objects in a scene are based upon classical pattern recognition approaches. The basic concept involved centers around the extraction of a set of statistical features for each object detected in a scene, followed by the application of a classifier which attempts to derive the decision boundaries that separate these objects into classes. As statistical features are quite sensitive to noise, this approach has led to problems due to the inability of classifiers to identify accurate feature-set separation in less than ideal conditions. A global approach utilizing the contextual information in a scene, which is currently discarded, offers the most promise in overcoming the shortcomings of current object classification methods.


Journal ArticleDOI
TL;DR: In this article, the authors focus on the effectiveness of Malina's criterion and at the same time point out the problem in his feature extraction method, indicating how to avoid the difficulties.
Abstract: In the discussion of the two-class feature extraction problem in pattern recognition, Malina [7] recently proposed a new criterion for overcoming the defects of Fisher's criterion. The discussion in this paper focuses on the effectiveness of Malina's criterion and at the same time points out problems in his feature extraction method, indicating how to avoid the difficulties. First, no clear definition is given in Malina's method for the coordinate system in the feature space, and the coordinate system used is not optimal. Second, the eigenvalue problem, including the computation of an inverse matrix, must be solved in order to determine the feature axes. For the first problem, this paper points out that the coordinate axes di and dj in Malina's coordinate system satisfy di^T Σ dj = 0, where Σ is the sum of the within-class covariance matrices. The optimal coordinate system from that viewpoint is then given. For the second problem, the inverse matrix is eliminated by applying a certain transformation (the A^-1 transformation) to the coordinate system of the original pattern space. The validity of the A^-1 transformation is shown by the invariance of the distances among classes, and it is shown that the above optimal coordinate system under the A^-1 transformation is the optimal orthogonal coordinate system satisfying di^T Σ dj = 0, i ≠ j. Finally, Malina's criterion is extended to the multiclass case and the problems are discussed.
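For reference, the baseline being modified here is Fisher's criterion, whose two-class solution w = Sw^-1 (m1 - m2) involves exactly the matrix inverse that the A^-1 transformation is designed to avoid. A minimal NumPy sketch of that baseline (the ridge term is my addition for numerical safety):

```python
import numpy as np

def fisher_axis(X1, X2):
    """Classical two-class Fisher direction w = Sw^-1 (m1 - m2), where Sw
    is the sum of the within-class covariance matrices. This direct inverse
    is the computational step Malina-style reformulations try to eliminate."""
    m1, m2 = X1.mean(axis=0), X2.mean(axis=0)
    Sw = (np.cov(X1, rowvar=False, bias=True)
          + np.cov(X2, rowvar=False, bias=True))
    return np.linalg.solve(Sw + 1e-9 * np.eye(Sw.shape[0]), m1 - m2)

# Hypothetical example: classes separated only along x, so the Fisher
# axis should have no y component.
X1 = np.array([[0., 0], [1, 1], [0, 1], [1, 0]])
X2 = X1 + np.array([5., 0])
w = fisher_axis(X1, X2)
```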

Proceedings ArticleDOI
04 Dec 1984
TL;DR: A two-level feature extraction classifier using a geometrical-moment feature space is described for multi-class distortion-invariant pattern recognition.
Abstract: A two-level feature extraction classifier using a geometrical-moment feature space is described for multi-class distortion-invariant pattern recognition. The first-level classifier provides object class and aspect estimates using multi-class Fisher projections and optimized two-class Fisher projections in a hierarchical classifier. Aspect estimates are provided from ratios of the computed moments. The second-level classifier provides the final class estimate, distortion parameter estimates and the confidence of the estimates. Extensive test results on a ship image database are presented.

Proceedings ArticleDOI
Roland Wilson1
01 Mar 1984
TL;DR: Finite forms of the two common eigenvalue problems associated with the uncertainty principle are introduced and it is shown that feature extraction filters produced using these methods are effective in the processing of natural images.
Abstract: Finite forms of the two common eigenvalue problems associated with the uncertainty principle are introduced. The results are extended to two dimensions and it is shown that feature extraction filters produced using these methods are effective in the processing of natural images.

01 Jun 1984
TL;DR: In this article, a status report about some of the research efforts within the Center for Remote Sensing (CRS) that are associated with image analysis is presented. Emphasis has been placed on the manual procedure of photo analysis, photo interpretation logic, classification schemes, and knowledge based systems.
Abstract: This is a status report on some of the research efforts within the Center for Remote Sensing (CRS) that are associated with image analysis. Emphasis has been placed on the manual procedure of photo analysis, photo interpretation logic, classification schemes, and knowledge-based systems. Information derived from other sources and information presented by contributors is acknowledged in the appropriate sections. Keywords include: Photo Analysis, Photo Interpretation Logic, Feature Extraction, Air Photo Keys, Image Analysis, Remote Sensing, Vegetation Classification, and Knowledge-Based Systems.

Proceedings ArticleDOI
09 Jan 1984
TL;DR: A set of algorithms used to automatically detect, segment and classify tactical targets in FLIR (Forward Looking InfraRed) images are presented and implemented in an Intelligent Automatic Target Cueing (IATC) system.
Abstract: In this paper we present a set of algorithms used to automatically detect, segment, and classify tactical targets in FLIR (Forward-Looking InfraRed) images. These algorithms are implemented in an Intelligent Automatic Target Cueing (IATC) system. Target localization and segmentation are carried out using an intelligent preprocessing step followed by relaxation, or a modified double-gate filter followed by difference operators. The techniques make use of range, intensity, and edge-density information. A set of robust features of the segmented targets is computed. These features are normalized and decorrelated. Feature selection is done using the Bhattacharyya measure. Classification techniques include a set of linear and quadratic classifiers, clustering algorithms, and an efficient K-nearest-neighbor algorithm. Facilities exist to use structural information, to use feedback to obtain more refined target boundaries, and to adapt the cuer to the required mission. The IATC incorporating the above algorithms runs in an automatic mode. The results are shown on a FLIR database consisting of 480 air-to-ground images (512 × 512, 8 bits). © 1984 SPIE, The International Society for Optical Engineering.
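Bhattacharyya-based feature selection of the kind mentioned can be sketched as ranking decorrelated features by the two-class Bhattacharyya distance under a Gaussian assumption. The function names are hypothetical and this is not the IATC code:

```python
import numpy as np

def bhattacharyya_gauss(m1, v1, m2, v2):
    """Bhattacharyya distance between two 1-D Gaussians N(m1, v1), N(m2, v2).
    Larger distance means better two-class separability for that feature."""
    return (0.25 * (m1 - m2) ** 2 / (v1 + v2)
            + 0.5 * np.log((v1 + v2) / (2.0 * np.sqrt(v1 * v2))))

def select_features(means1, vars1, means2, vars2, k):
    """Keep the k (assumed decorrelated) features with the largest
    per-feature Bhattacharyya distance between the two classes."""
    d = bhattacharyya_gauss(np.asarray(means1), np.asarray(vars1),
                            np.asarray(means2), np.asarray(vars2))
    return np.argsort(d)[::-1][:k]

# Hypothetical two-feature case: feature 0 separates the classes well,
# feature 1 barely at all, so selection should keep feature 0.
picked = select_features([0.0, 0.0], [1.0, 1.0], [5.0, 0.1], [1.0, 1.0], 1)
```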

Proceedings ArticleDOI
01 Jan 1984
TL;DR: The bag of tricks for an autonomous vehicle imaging system is large because of complexity of the information source and the various needs A complete system will include image enhancement, data reduction, frame to frame correlation and motion detection as well as image understanding.
Abstract: Images sensed by optical or acoustic imaging systems contain information that can be used for autonomous vehicle navigation and control. The merging of these technologies into one image acquisition module has great potential for vehicle control using image processing techniques. Feature extraction by edge detection and pattern recognition provides a base of high-level information to be used by a vehicle controller or path planner. The bag of tricks for an autonomous vehicle imaging system is large because of the complexity of the information source and the variety of needs. A complete system will include image enhancement, data reduction, frame-to-frame correlation, and motion detection, as well as image understanding. In addition, these functions must operate in real time with available on-board processing power.


Proceedings ArticleDOI
Y. Oh1, J. Ackenhusen, L. Breda, L. Rosa, M. Brown, L. Niles 
01 Mar 1984
TL;DR: An architecture for an integrated circuit to perform LPC-based feature measurement in real time has been developed for speech recognition applications and is expected to support more advanced concepts in speech recognition such as vector quantization.
Abstract: An architecture for an integrated circuit to perform LPC-based feature measurement in real time has been developed for speech recognition applications. The integrated circuit architecture is suitable for both isolated word recognition, in which the pattern matching occurs after the end of the utterance, and connected word recognition, where the pattern matching proceeds in synchrony with the speech input. A major feature of this architecture is the presence of stored program control which implements the LPC-based feature extraction algorithm on a single set of computational resources. Preliminary timing analysis indicates that a portion of real time remains unused. Thus, in addition to performing standard LPC-based feature analysis in real time, through program modification and memory addition, the architecture is expected to support more advanced concepts in speech recognition such as vector quantization. To aid in program development, software tools which include an assembler and a simulator and run on the UNIX* operating system have been developed. The projected chip complexity is approximately 20,000 transistors of random logic, 40,000 bits of ROM, and 2,500 bits of RAM.