
Showing papers on "Feature vector" published in 1976


Journal ArticleDOI
01 Feb 1976
TL;DR: A pattern recognition system is described which identifies human faces from their full profile silhouettes; compared with human observers, the system performs no worse.
Abstract: A pattern recognition system is described which is capable of identifying human faces from their full profile silhouettes. Each silhouette is preprocessed to remove noise, smooth edges, and extract the front edge. The processed silhouettes are then represented by a 12-dimensional feature vector whose components are obtained from a circular autocorrelation function. Using a weighted k-nearest neighbor decision rule, it is shown that a recognition accuracy of 90 percent is attainable in a ten-class problem. An adaptive training procedure is also described which is used for setting up the authority files. This training procedure appears to identify those feature vectors representing a class which are either most important from an information-content point of view or observed most often. Finally, a comparison is made between the recognition accuracy obtained using circular autocorrelation features and that obtained using moment-invariant features; for this problem, the former outperforms the latter. The system is also compared with human observers and is found to perform no worse than they do.
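
As a rough illustration of this approach, here is a minimal Python sketch of circular autocorrelation features and a distance-weighted k-NN rule. The exact lag set, normalization, and vote weighting used in the paper are not specified here, so those choices are assumptions.

```python
import numpy as np

def circular_autocorrelation_features(profile, n_features=12):
    # Mean-center the profile signal so r(0) measures its variance.
    x = profile - profile.mean()
    # r(k) = sum_i x[i] * x[(i+k) mod n]; the first 12 lags, normalized
    # by r(0), form the 12-dimensional feature vector (lag choice assumed).
    r = np.array([np.dot(x, np.roll(x, k)) for k in range(1, n_features + 1)])
    return r / np.dot(x, x)

def weighted_knn(train_X, train_y, query, k=5):
    # Each of the k nearest training vectors votes for its class,
    # weighted by inverse Euclidean distance (weighting scheme assumed).
    d = np.linalg.norm(train_X - query, axis=1)
    votes = {}
    for i in np.argsort(d)[:k]:
        votes[train_y[i]] = votes.get(train_y[i], 0.0) + 1.0 / (d[i] + 1e-9)
    return max(votes, key=votes.get)
```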

84 citations


Patent
04 Jun 1976
TL;DR: In this patent, a two-dimensional feature set patterned after the prototype set is stored with the signals ordered in dependence upon the occurrence of the peaks in the signature and accompanied by peak rank in terms of peak magnitude and stroke character in the vicinity of each peak.
Abstract: Signature verification in which an image mosaic for a signature to be verified is stored in a memory, together with a prototype feature set for that signature. Binary signals representative of the location and magnitude of positive and negative peaks in the mosaic, and of the stroke character in the region of each of those peaks, are generated. A two-dimensional feature set patterned after the prototype set is stored, with the signals ordered according to the occurrence of the peaks in the signature and accompanied by (i) peak rank in terms of peak magnitude and (ii) stroke character in the vicinity of each peak. The feature vector set is then compared with the prototype vector set, and identity is signaled when, within predetermined limits, the feature set matches the prototype set.
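
A simplified Python sketch of peak-based matching follows. The patent's binary encoding and stroke-character measure are not reproduced; the peak detector and the tolerances below are hypothetical stand-ins.

```python
def peak_features(sig):
    # Ordered list of (index, magnitude, sign) for each local extremum,
    # plus a rank by |magnitude| -- a simplified stand-in for the
    # patent's peak/stroke-character feature set.
    peaks = []
    for i in range(1, len(sig) - 1):
        if sig[i] > sig[i - 1] and sig[i] > sig[i + 1]:
            peaks.append((i, sig[i], +1))
        elif sig[i] < sig[i - 1] and sig[i] < sig[i + 1]:
            peaks.append((i, sig[i], -1))
    by_mag = sorted(range(len(peaks)), key=lambda j: -abs(peaks[j][1]))
    rank = {j: r for r, j in enumerate(by_mag)}
    return [(i, m, s, rank[j]) for j, (i, m, s) in enumerate(peaks)]

def matches(features, prototype, pos_tol=5, rank_tol=1):
    # Signal identity when, within predetermined limits, the feature
    # set matches the prototype set (these tolerances are assumptions).
    if len(features) != len(prototype):
        return False
    return all(abs(f[0] - p[0]) <= pos_tol and f[2] == p[2]
               and abs(f[3] - p[3]) <= rank_tol
               for f, p in zip(features, prototype))
```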

48 citations


Journal ArticleDOI
TL;DR: This note illustrates how one of Ohlander's scenes, a house, can be reasonably segmented by mapping it into a three-dimensional color space.
Abstract: Ohlander [1] has shown that a variety of scenes can be segmented into meaningful parts by histogramming the values of various point or local properties of the scene; selecting a distinct peak in one of these histograms; extracting the region whose points gave rise to that peak; and repeating the process for the remainder of the scene. A generalization of this histogram analysis approach is to map the points of the scene into a multi-dimensional feature space, and to look for clusters in this space (a histogram is a mapping into a one-dimensional feature space, in which clusters are peaks). This note illustrates how one of Ohlander's scenes, a house, can be reasonably segmented by mapping it into a three-dimensional color space.
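
A minimal Python sketch of the idea, using k-means as a stand-in clustering method (the note itself finds clusters by histogram analysis, and the number of clusters k is an assumption):

```python
import numpy as np

def segment_by_color_clusters(image, k=4, iters=20, seed=0):
    # Map each pixel into 3-D color space and find clusters; each
    # pixel is labeled with its nearest cluster center.
    h, w, _ = image.shape
    pts = image.reshape(-1, 3).astype(float)
    rng = np.random.default_rng(seed)
    centers = pts[rng.choice(len(pts), k, replace=False)]
    for _ in range(iters):
        # Assign every pixel to the closest center in color space.
        labels = np.argmin(
            ((pts[:, None, :] - centers[None, :, :]) ** 2).sum(-1), axis=1)
        # Move each center to the mean of its assigned pixels.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = pts[labels == j].mean(axis=0)
    return labels.reshape(h, w)
```

Each pixel's cluster label then defines its region; connected components of a single label give the segments.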

47 citations


S. G. Wheeler, P. Misra, Q. A. Holmes
01 Jan 1976
TL;DR: In this paper, a model for Landsat multispectral scanner data, a generalization of the commonly used Gaussian model, is formulated and analyzed; the model hypothesizes that the data for different crop types essentially lie on distinct hyperplanes in the feature space.
Abstract: A model for the Landsat multispectral scanner data, representing a generalization of the commonly used Gaussian model, has been formulated and analyzed. The model hypothesizes that the data for different crop types essentially lie on distinct hyperplanes in the feature space. Tests of this model reveal that: (1) the agricultural data from any single acquisition (i.e., four-channel) of Landsat are essentially two dimensional, regardless of the crop type; and (2) the data from different sites and different stages of crop development all lie on planes which are parallel. These findings have significant implications for data display, classification, feature extraction, and signature extension.
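
The hyperplane hypothesis can be checked with a simple eigen-analysis of the channel covariance. This sketch returns the number of dominant dimensions, which the model predicts to be two for four-channel data; the 95 percent variance cutoff is an assumption, not the paper's actual test.

```python
import numpy as np

def intrinsic_dimension(pixels, var_fraction=0.95):
    # pixels: (n_samples, n_channels) array of scanner measurements.
    # Sort covariance eigenvalues in descending order and count how
    # many are needed to reach the chosen fraction of total variance.
    eig = np.sort(np.linalg.eigvalsh(np.cov(pixels, rowvar=False)))[::-1]
    cum = np.cumsum(eig) / eig.sum()
    return int(np.searchsorted(cum, var_fraction) + 1)
```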

15 citations


Journal ArticleDOI
TL;DR: This paper describes an attempt to use the grey-level histogram from a cell as a feature vector, in order to simplify part of the automatic feature-extraction process.
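
A minimal sketch of such a feature vector in Python, assuming a normalized grey-level histogram (the bin count and grey range are assumptions):

```python
import numpy as np

def grey_histogram_feature(cell_image, bins=32):
    # Grey-level histogram over the cell, normalized so the feature
    # vector sums to one and is independent of cell size.
    hist, _ = np.histogram(cell_image, bins=bins, range=(0, 255))
    return hist / hist.sum()
```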

6 citations


20 Apr 1976
TL;DR: Two algorithms were developed at Rice University for optimal linear feature extraction based on the minimization of the risk (probability) of misclassification under the assumption that the class conditional probability density functions are Gaussian.
Abstract: Two algorithms were developed at Rice University for optimal linear feature extraction based on minimizing the risk (probability) of misclassification under the assumption that the class-conditional probability density functions are Gaussian. The present report describes the second algorithm, which is used when the dimension of the feature space is greater than one. Numerical results obtained by applying the algorithm to remotely sensed data from the Purdue C1 flight line are reported, and brief comparisons are made with results obtained using a feature selection technique based on maximizing the Bhattacharyya distance. For the example considered, a significant improvement in classification is obtained by the present technique.
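
For reference, the baseline criterion mentioned above is the Bhattacharyya distance between two Gaussian class densities, which has the closed form sketched below (this is the standard formula, not code from the report):

```python
import numpy as np

def bhattacharyya_distance(mu1, cov1, mu2, cov2):
    # D_B = 1/8 (mu1-mu2)^T S^-1 (mu1-mu2)
    #     + 1/2 ln( det S / sqrt(det cov1 * det cov2) ),  S = (cov1+cov2)/2
    cov = 0.5 * (cov1 + cov2)
    diff = mu1 - mu2
    term1 = 0.125 * diff @ np.linalg.solve(cov, diff)
    term2 = 0.5 * np.log(np.linalg.det(cov) /
                         np.sqrt(np.linalg.det(cov1) * np.linalg.det(cov2)))
    return term1 + term2
```

Feature selection by this criterion picks the channel subset (or linear map) maximizing the distance, since a larger distance bounds the Bayes error more tightly.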

2 citations


01 Nov 1976
TL;DR: A FORTRAN computer program was written and tested, and the measurements consisted of 1000 randomly chosen vectors representing 1, 2, 3, 7, and 10 subclasses in equal portions.
Abstract: A FORTRAN computer program was written and tested. The measurements consisted of 1000 randomly chosen vectors representing 1, 2, 3, 7, and 10 subclasses in equal portions. In the first experiment, the vectors are computed from the input means and covariances. In the second experiment, the vectors are 16 channel measurements. The starting covariances were constructed as if there were no correlation between separate passes. The biases obtained from each run are listed.

1 citation


Journal ArticleDOI
TL;DR: It is shown that, under a weak assumption of Gaussian statistics, the threshold necessary to attain a given probability of correct acceptance can be theoretically calculated as a function of the number of dimensions or features.
Abstract: The purpose of this paper is to investigate the applicability of long-term feature averaging as an eventual means of performing text-independent voice authentication (speaker verification). Based upon a set of long-term feature vectors, a principal component analysis is performed to obtain a normalized reference coordinate system for each speaker. Features extracted from the test speaker are transformed to this coordinate system and the Euclidean distance is then measured. It is shown that, under a weak assumption of Gaussian statistics, the threshold necessary to attain a given probability of correct acceptance can be theoretically calculated as a function of the number of dimensions or features. Results of several preliminary experiments are presented to illustrate the technique.
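
A minimal sketch of this pipeline under the stated Gaussian assumption; the per-axis variance normalization and the chi-square form of the threshold are inferences from the abstract, not details confirmed by the paper.

```python
import numpy as np
from scipy.stats import chi2

class SpeakerReference:
    def __init__(self, feats):
        # Principal-component axes of the speaker's long-term feature
        # vectors define a normalized reference coordinate system.
        self.mean = feats.mean(axis=0)
        vals, vecs = np.linalg.eigh(np.cov(feats, rowvar=False))
        self.axes = vecs
        self.scale = np.sqrt(np.maximum(vals, 1e-12))  # unit variance per axis

    def distance(self, v):
        # Transform the test vector into the reference frame and take
        # the Euclidean distance.
        return np.linalg.norm((v - self.mean) @ self.axes / self.scale)

def threshold(p_accept, dims):
    # Under the Gaussian assumption the whitened squared distance is
    # chi-square with `dims` degrees of freedom, so the threshold for a
    # given probability of correct acceptance follows in closed form.
    return np.sqrt(chi2.ppf(p_accept, dims))
```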

1 citation


DOI
01 May 1976
TL;DR: As a rule, in the construction of measurement equipment for a recognition system, the quality of the measurement information is substantially determined by the quality of the measurement device, which is characterized by the values of its basic parameters and depends on the cost of developing the device.
Abstract: As a rule, in the construction of measurement equipment in a recognition system, based on practically arbitrary physical causes, a situation exists in which the quality of the measurement information (in particular, the probability of determining the measured parameter) is substantially defined by the quality of the given measurement device, characterized by the values of its basic parameters and depending, in the general case, on the cost of development of the device.

Book ChapterDOI
01 Jan 1976
TL;DR: In some pattern recognition problems, the recognition process includes not only the capability of assigning the pattern to a particular class (to classify it), but also the capacity to describe aspects of the pattern which make it ineligible for assignment to another class.
Abstract: The many different pattern recognition methods may be grouped into two general approaches: the decision-theoretic (or discriminant) approach and the syntactic (or structural) approach. In the decision-theoretic approach, a set of characteristic measurements, called features, is extracted from the patterns; the recognition of each pattern (assignment to a pattern class) is usually made by partitioning the feature space. Most of the developments in pattern recognition research during the past decade deal with the decision-theoretic approach and its applications [1–11]. In some pattern recognition problems, the structural information which describes each pattern is important, and the recognition process includes not only the capability of assigning the pattern to a particular class (to classify it), but also the capacity to describe aspects of the pattern which make it ineligible for assignment to another class. A typical example of this class of recognition problem is picture recognition or, more generally, scene analysis. In this class of problems, the patterns under consideration are usually quite complex and the number of features required is often very large, which makes the idea of describing a complex pattern in terms of a (hierarchical) composition of simpler subpatterns very attractive. Also, when the patterns are complex and the number of possible descriptions is very large, it is impractical to regard each description as defining a class (for example, in fingerprint and face identification problems, recognition of continuous speech, Chinese characters, etc.). Consequently, the requirement of recognition can only be satisfied by a description of each pattern rather than by the simple task of classification.
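
To make the contrast concrete, here is a toy syntactic recognizer in Python: a pattern is a string of primitives, and a class is defined by grammar rules that compose simpler subpatterns. The grammar below is invented for illustration, not taken from the chapter.

```python
# Toy grammar: a "square" is composed of "side" and "turn" subpatterns,
# which in turn expand into the primitives "u" (unit stroke) and
# "r" (right-angle turn).
GRAMMAR = {
    "square": ["side", "turn", "side", "turn", "side", "turn", "side"],
    "side":   ["u", "u"],
    "turn":   ["r"],
}
CLASSES = ["square"]

def expand(symbol):
    # Expand a nonterminal into its primitive string; the expansion
    # itself is a structural description, not just a class label.
    if symbol not in GRAMMAR:
        return [symbol]
    out = []
    for s in GRAMMAR[symbol]:
        out.extend(expand(s))
    return out

def recognize(primitives):
    # Assign a class by matching the observed primitive string against
    # each grammar-generated description.
    for cls in CLASSES:
        if expand(cls) == primitives:
            return cls
    return None
```

Unlike the decision-theoretic approach, recognition here yields the hierarchical description (square → sides and turns → strokes) along with the class label.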